AI-Powered Testing·Lesson 2 of 5

Generating Tests with AI

The best use of AI in test writing is generating first drafts — tests you review, refine, and own. Here is how to get high-quality test output from AI tools.

Prompting for Test Generation

The quality of AI-generated tests depends almost entirely on the quality of your prompt. Vague prompts produce vague tests. Specific prompts produce useful tests.

Weak prompt:

Write tests for the login form.

Strong prompt:

Write Playwright tests for a login form at /login. The form has email and password fields and a submit button. Test: successful login redirects to /dashboard, empty submission shows validation errors for both fields, invalid email format shows an email error, wrong password shows "Invalid credentials" error. Use page.getByRole and page.getByLabel for selectors.

The strong prompt specifies the framework, the URL, the selectors to use, and the exact scenarios to cover. The output will typically be usable with only minor edits.
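If you generate test prompts often, it can help to treat the strong-prompt structure (framework, URL, scenarios, selector guidance) as a template rather than retyping it. A minimal sketch — `buildTestPrompt` and `TestPromptSpec` are illustrative names, not part of any tool:

```typescript
// Sketch: assemble a specific test-generation prompt from structured parts,
// so no ingredient (framework, URL, scenarios, selector guidance) is forgotten.
interface TestPromptSpec {
  framework: string;
  url: string;
  subject: string;
  scenarios: string[];
  selectorGuidance: string;
}

function buildTestPrompt(spec: TestPromptSpec): string {
  const scenarioList = spec.scenarios.map((s) => `- ${s}`).join("\n");
  return [
    `Write ${spec.framework} tests for ${spec.subject} at ${spec.url}.`,
    `Test the following scenarios:`,
    scenarioList,
    spec.selectorGuidance,
  ].join("\n");
}

const prompt = buildTestPrompt({
  framework: "Playwright",
  url: "/login",
  subject: "a login form with email and password fields and a submit button",
  scenarios: [
    "successful login redirects to /dashboard",
    "empty submission shows validation errors for both fields",
  ],
  selectorGuidance: "Use page.getByRole and page.getByLabel for selectors.",
});
console.log(prompt);
```

The same spec can be reused across components, which keeps every prompt at the "strong" level of specificity.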

Using Claude for Test Generation

Paste your component code or user story into Claude and ask it to generate tests:

Here is a React checkout form component:
[paste component code]

Generate Playwright end-to-end tests covering:
1. Successful checkout with valid card details
2. Invalid card number shows error
3. Expired card shows error
4. Missing required fields show validation errors
5. Successful checkout redirects to /order-confirmation

Use TypeScript, page.getByRole and page.getByLabel for locators.

Because Claude can read the component structure, the generated tests are likely to match the actual UI elements rather than guessing at selectors.
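A numbered scenario list like the one above can also double as a checklist: generating empty test stubs from it first makes it easy to see whether the AI's output covers every scenario. A small sketch — `scenariosToStubs` is a made-up helper, not a real API:

```typescript
// Sketch: turn a scenario checklist into empty Playwright-style test stubs.
// Diffing AI output against these stubs shows at a glance which scenarios
// were covered and which were missed.
function scenariosToStubs(scenarios: string[]): string {
  return scenarios
    .map(
      (s) =>
        `test('${s.replace(/'/g, "\\'")}', async ({ page }) => {\n  // TODO\n});`
    )
    .join("\n\n");
}

const stubs = scenariosToStubs([
  "successful checkout with valid card details",
  "invalid card number shows error",
]);
console.log(stubs);
```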

Using Copilot in Your Editor

With GitHub Copilot or Cursor, you can drive test generation inline:

  1. Create a new test file
  2. Write a comment describing what you want to test
  3. Let the AI complete the test

For example:

// Test the search functionality:
// - User types in the search box and results appear
// - Empty search shows all results
// - No results found state appears for unmatched queries
// - Clicking a result navigates to the detail page

test('search returns relevant results', async ({ page }) => {
  // Copilot completes from here
});

The inline approach keeps you in flow — you describe intent, AI generates structure, you verify correctness.

Reviewing AI-Generated Tests

Never commit AI-generated tests without reviewing them. Check:

  • Selectors are correct — does the locator actually match an element on the page?
  • Assertions are meaningful — does the test actually verify the right thing?
  • Edge cases are covered — did the AI miss any important scenarios?
  • The test is not fragile — does it rely on brittle exact-text matches that might change with copy edits, or on stable roles and labels?

Run the generated tests against the real application before committing. A test that passes while its selectors match nothing is worse than no test — it creates false confidence.
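Part of the fragility check can be automated with a quick heuristic scan of the generated locators. A minimal sketch, assuming Playwright-style locator calls; `classifyLocator` and the pattern lists are illustrative, not part of Playwright:

```typescript
// Heuristic: flag locator calls that depend on CSS classes, exact text, or
// XPath (which tend to break when copy or styling changes) versus role/label
// locators, which are usually stable. Patterns are illustrative, not exhaustive.
const FRAGILE_PATTERNS: RegExp[] = [
  /page\.locator\(['"`]\./,   // class-based CSS selector, e.g. page.locator('.btn')
  /page\.getByText\(/,        // exact text match
  /page\.locator\(['"`]\/\//, // raw XPath
];

const STABLE_PATTERNS: RegExp[] = [/page\.getByRole\(/, /page\.getByLabel\(/];

function classifyLocator(line: string): "fragile" | "stable" | "unknown" {
  if (STABLE_PATTERNS.some((p) => p.test(line))) return "stable";
  if (FRAGILE_PATTERNS.some((p) => p.test(line))) return "fragile";
  return "unknown";
}
```

For example, `classifyLocator("await page.getByText('Submit').click();")` reports `"fragile"`, a prompt to switch that line to a role-based locator before committing.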

Iterating on the Output

AI-generated tests are a starting point. Treat them like a junior developer's PR — review, suggest changes, and iterate. The second and third rounds of refinement are where the real quality comes from.