
Building Your First AI-Powered Test Case Generator

📅 2026-01-12âąī¸ 12 minâœī¸ By Oleg Neskoromnyi

You don't need to be a machine learning expert to build an AI-powered test case generator. In fact, you can build one in an afternoon using the OpenAI API, some Python, and your QA expertise.

I built mine in 4 hours. It now generates 80% of my test cases, and I spend my time on the 20% that requires critical thinking.

Let me show you exactly how.

Prerequisites: Basic Python knowledge, OpenAI API key ($5 credit is enough to start), and your existing test case templates.

Why Build Your Own Instead of Using ChatGPT Directly?

Good question. Here's why a custom generator wins:

  1. Consistency - Same format every time, matches your team's template
  2. Context - Pre-loaded with your domain knowledge and business rules
  3. Batch processing - Generate test cases for 10 features in minutes
  4. Version control - Track prompt improvements over time
  5. Integration - Connect to Jira, TestRail, or your test management tool

Think of it as "ChatGPT, but tailored to your specific testing needs."

Architecture Overview

Here's what we're building:

|architecture.py
# Simple architecture
User Input (Feature Description)
  ↓
Prompt Template (Your QA expertise)
  ↓
OpenAI API (GPT-4)
  ↓
Test Cases (Your format)
  ↓
Output (CSV, JSON, or direct to test tool)

Step 1: Set Up Your Environment

First, install the OpenAI Python library:

|setup.sh
# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install openai python-dotenv

# Create .env file
echo "OPENAI_API_KEY=your_key_here" > .env

Never commit your API key to git! Always use environment variables or a .env file (and add it to .gitignore).
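
A quick sanity check you can run from the project root to make sure the key stays out of the repo (plain git; the file name here is just a label for the snippet):

|gitignore.sh
# Keep the .env file (and your API key) out of version control
echo ".env" >> .gitignore
git check-ignore .env   # prints ".env" once the ignore rule is in place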

Step 2: Create Your Prompt Template

This is where your QA expertise shines. The better your prompt, the better your test cases.

|prompt_template.py
from string import Template

# Your QA expertise goes here
PROMPT_TEMPLATE = Template("""
You are a senior QA engineer. Generate comprehensive test cases for the following feature.

Feature Name: $feature_name
Description: $description
User Story: $user_story

Requirements:
$requirements

Generate test cases in this exact format:

Test Case ID | Title | Preconditions | Steps | Expected Result | Priority | Type

Include:
- Positive test cases (happy path)
- Negative test cases (error handling)
- Edge cases and boundary conditions
- Security considerations
- Performance edge cases

Use Given-When-Then format for steps.
Priority: Critical, High, Medium, Low
Type: Functional, Security, Performance, Usability

Generate 15-20 test cases covering all scenarios.
""")

Notice what I included:

  • Clear role - "You are a senior QA engineer"
  • Exact format - No surprises in output
  • Specific categories - Positive, negative, edge cases
  • My team's conventions - Given-When-Then, priority levels

Step 3: Build the Generator

Now let's write the actual generator:

|test_case_generator.py
import os
from openai import OpenAI
from dotenv import load_dotenv
from prompt_template import PROMPT_TEMPLATE

load_dotenv()

class TestCaseGenerator:
    def __init__(self):
        self.client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    def generate(self, feature_name, description, user_story, requirements):
        """Generate test cases for a feature"""

        # Fill in the prompt template
        prompt = PROMPT_TEMPLATE.substitute(
            feature_name=feature_name,
            description=description,
            user_story=user_story,
            requirements=requirements
        )

        # Call OpenAI API
        response = self.client.chat.completions.create(
            model="gpt-4",  # or "gpt-4-turbo" for faster/cheaper
            messages=[
                {
                    "role": "system",
                    "content": "You are an expert QA engineer specializing in test case design."
                },
                {
                    "role": "user",
                    "content": prompt
                }
            ],
            temperature=0.7,  # Balance creativity and consistency
            max_tokens=2000
        )

        return response.choices[0].message.content

    def save_to_csv(self, test_cases, filename):
        """Save generated test cases to CSV"""
        with open(filename, 'w') as f:
            f.write(test_cases)
        print(f"Test cases saved to {filename}")

# Usage
if __name__ == "__main__":
    generator = TestCaseGenerator()

    test_cases = generator.generate(
        feature_name="User Login",
        description="Email/password authentication with remember me option",
        user_story="As a user, I want to log in with my email and password so I can access my account",
        requirements="""
        - Email and password fields required
        - "Remember me" checkbox (optional)
        - Forgot password link
        - Account lockout after 5 failed attempts
        - Session timeout after 30 minutes of inactivity
        """
    )

    generator.save_to_csv(test_cases, "login_test_cases.csv")
    print(test_cases)

Cost Optimization: Start with GPT-4-turbo (cheaper and faster). Only use GPT-4 if you need deeper reasoning for complex features.
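
If you want to switch models without editing the class each time, one option is to make the model a constructor argument. This is a small sketch, not part of the listing above (file name illustrative):

|model_option.py
import os
from openai import OpenAI

# Make the model configurable instead of hard-coding it in generate()
class TestCaseGenerator:
    def __init__(self, model="gpt-4-turbo"):
        self.client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        self.model = model  # then pass model=self.model to chat.completions.create()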

Step 4: Enhance with Domain Knowledge

Here's where you make it powerful. Add your domain-specific knowledge:

|domain_knowledge.py
# Add this to your generator class
class TestCaseGenerator:
    def __init__(self, domain_context=None):
        self.client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        self.domain_context = domain_context or self.default_domain_context()

    def default_domain_context(self):
        """Your domain expertise"""
        return """
        Domain: E-commerce web application

        Standard Security Checks:
        - SQL injection attempts
        - XSS attacks
        - CSRF protection
        - Session hijacking

        Common Edge Cases:
        - Unicode characters in inputs
        - Very long strings (>1000 chars)
        - Special characters in email
        - Concurrent user sessions

        Compliance Requirements:
        - GDPR data handling
        - PCI DSS for payment data
        - WCAG 2.1 AA accessibility

        Performance Baselines:
        - Page load: <2 seconds
        - API response: <500ms
        - Support 1000 concurrent users
        """

    def generate(self, feature_name, description, user_story, requirements):
        # Prepend domain context to every prompt
        full_prompt = f"{self.domain_context}\n\n{PROMPT_TEMPLATE.substitute(...)}"
        # ... rest of the method

Now every test case generation includes your domain expertise automatically.
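
For completeness, the elided generate() above could be written out like this. It's a sketch that reuses the names from the earlier listings and belongs inside the same class:

|domain_knowledge.py
    def generate(self, feature_name, description, user_story, requirements):
        """Generate test cases with the domain context prepended to the prompt."""
        base_prompt = PROMPT_TEMPLATE.substitute(
            feature_name=feature_name,
            description=description,
            user_story=user_story,
            requirements=requirements
        )
        full_prompt = f"{self.domain_context}\n\n{base_prompt}"

        response = self.client.chat.completions.create(
            model="gpt-4",  # or "gpt-4-turbo"
            messages=[
                {"role": "system", "content": "You are an expert QA engineer specializing in test case design."},
                {"role": "user", "content": full_prompt}
            ],
            temperature=0.7,
            max_tokens=2000
        )
        return response.choices[0].message.content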

Step 5: Add Batch Processing

Generate test cases for multiple features at once:

|batch_processor.py
import json
from test_case_generator import TestCaseGenerator

def batch_generate(features_file):
    """Generate test cases for multiple features from JSON"""

    generator = TestCaseGenerator()

    # Load features from JSON
    with open(features_file, 'r') as f:
        features = json.load(f)

    results = {}

    for feature in features:
        print(f"Generating test cases for: {feature['name']}...")

        test_cases = generator.generate(
            feature_name=feature['name'],
            description=feature['description'],
            user_story=feature['user_story'],
            requirements=feature['requirements']
        )

        results[feature['name']] = test_cases

        # Save individual files
        filename = f"test_cases_{feature['name'].lower().replace(' ', '_')}.csv"
        generator.save_to_csv(test_cases, filename)

    return results

# Usage: python batch_processor.py features.json
if __name__ == "__main__":
    import sys
    batch_generate(sys.argv[1])

Example features.json:

|features.json
[
  {
    "name": "User Login",
    "description": "Email/password authentication",
    "user_story": "As a user, I want to log in...",
    "requirements": "- Email required\n- Password required..."
  },
  {
    "name": "Product Search",
    "description": "Search products by keyword",
    "user_story": "As a user, I want to search...",
    "requirements": "- Search bar\n- Filters..."
  }
]

Step 6: Integrate with Your Tools

Connect to Jira, TestRail, or your test management tool:

|jira_integration.py
from jira import JIRA

class JiraIntegration:
    def __init__(self, server, email, api_token):
        self.jira = JIRA(server=server, basic_auth=(email, api_token))

    def create_test_case(self, project_key, test_case):
        """Create a test case in Jira"""
        issue_dict = {
            'project': {'key': project_key},
            'summary': test_case['title'],
            'description': self.format_test_case(test_case),
            'issuetype': {'name': 'Test'},
        }

        new_issue = self.jira.create_issue(fields=issue_dict)
        return new_issue.key

    def format_test_case(self, test_case):
        """Format test case for Jira"""
        return f"""
        *Preconditions:*
        {test_case['preconditions']}

        *Steps:*
        {test_case['steps']}

        *Expected Result:*
        {test_case['expected_result']}

        *Priority:* {test_case['priority']}
        *Type:* {test_case['type']}
        """

# Usage
jira = JiraIntegration(
    server="https://your-domain.atlassian.net",
    email="your-email@example.com",
    api_token="your_api_token"
)

jira.create_test_case("PROJ", test_case)

For TestRail integration, check out the testrail-api Python package. The pattern is similar to Jira.
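
If you'd rather call TestRail's REST API directly with requests, a minimal sketch looks like the following. The section_id, the helper name, and any field beyond title are assumptions that depend on your TestRail configuration, so treat this as a starting point rather than a drop-in integration:

|testrail_sketch.py
import requests

def create_testrail_case(base_url, email, api_key, section_id, title, preconditions):
    """Create a test case via TestRail's REST API (add_case endpoint)."""
    response = requests.post(
        f"{base_url}/index.php?/api/v2/add_case/{section_id}",
        auth=(email, api_key),
        headers={"Content-Type": "application/json"},
        # custom_* field names vary by TestRail setup; "title" is the only required field
        json={"title": title, "custom_preconds": preconditions},
    )
    response.raise_for_status()
    return response.json()["id"]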

Real-World Example: Before vs After

Before (Manual):

  • Time: 2-3 hours per feature
  • Test cases: 8-12 (missing edge cases)
  • Consistency: Varies by mood

After (AI Generator):

  • Time: 15 minutes per feature (including review)
  • Test cases: 15-20 (comprehensive)
  • Consistency: Same format every time

My workflow now:

  1. Copy user story from Jira (30 seconds)
  2. Run generator (2 minutes)
  3. Review and enhance with domain knowledge (10 minutes)
  4. Import to TestRail (3 minutes)

Common Issues and Solutions

Issue 1: Inconsistent Output Format

Solution: Be extremely specific in your prompt:

|solution.md
Bad: "Generate test cases"
Good: "Generate test cases in this EXACT format: [format]
Do not add any extra text before or after the table."

Issue 2: Generic Test Cases

Solution: Add more domain context and examples:

|context.py
# Add example test cases to your prompt
EXAMPLE_TEST_CASE = """
Example of a good test case:
TC-001 | Login with valid credentials | User has account | GIVEN user on login page WHEN enters valid email and password THEN redirected to dashboard | Success | Critical | Functional
"""

Issue 3: Missing Edge Cases

Solution: Explicitly ask for them:

|edge-cases.md
"Also consider these edge cases:
- Very long inputs (>1000 characters)
- Special characters: <>"'&
- Concurrent requests
- Network timeouts
- Database failures"

Cost Estimation

Based on my usage:

  • Per feature: $0.10 - $0.30 (GPT-4-turbo)
  • Monthly (20 features): $2 - $6
  • Time saved: 30-40 hours/month

ROI: If your hourly rate is $50, you save $1,500-$2,000/month for a $6 investment.

Next Steps: Making It Production-Ready

Once you have the basics working:

  1. Add error handling - API timeouts, rate limits (see the retry sketch after this list)
  2. Implement caching - Don't regenerate identical test cases
  3. Version control - Track prompt changes in git
  4. Add UI - Streamlit or Flask for non-technical users
  5. Metrics tracking - How many test cases generated, time saved
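
For item 1, a minimal retry wrapper around the generator might look like this. It assumes the openai v1 SDK (which exposes RateLimitError and APITimeoutError); the helper name and backoff values are illustrative:

|retry_sketch.py
import time
import openai

def generate_with_retry(generator, max_attempts=3, **feature):
    """Call generator.generate(), backing off on rate limits or timeouts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return generator.generate(**feature)
        except (openai.RateLimitError, openai.APITimeoutError) as err:
            if attempt == max_attempts:
                raise
            wait = 2 ** attempt  # exponential backoff: 2s, 4s, ...
            print(f"Attempt {attempt} failed ({err}); retrying in {wait}s")
            time.sleep(wait)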

Start small: Get it working for one feature first. Then expand. Don't try to build the perfect system on day one.

The Code Repository

Want the complete working code? I've created a GitHub template with:

  • ✅ Complete generator code
  • ✅ Example prompts
  • ✅ Jira/TestRail integrations
  • ✅ Docker setup
  • ✅ Documentation

Coming soon on the contact page - drop your email if interested!


Ready to build yours? Start with the basic generator and iterate. The key is making it work for YOUR domain and YOUR team's format.

Questions? Reach out on the contact page. I'd love to hear what you build!
