What should be a test?

A good test covers something a user wants to do or achieve. A user story or user journey should typically map 1:1 to a test case.

Test examples

  • Log in with correct credentials
  • Log in with incorrect credentials
  • Create a new task
  • Edit a task’s due date

Creating a test

You can create test cases through the UI with AI-generated suggestions, or conversationally through the AI Chat Assistant.

Create Tests via UI

Click the “Add Test Case” button to open the test creation modal with two options:
  • Suggested tests - QA.tech continuously crawls your application to discover testable features and interactions. These appear as AI-generated suggestions you can select and add to your project. Click “Analyze my site” to trigger a new crawl if you want fresh suggestions.
  • Create your own test - Describe what you want to test in natural language. The AI agent will understand your goal and attempt to generate the test steps automatically.
Key fields (illustrated in the sketch after this list):
  • Name: Clear, descriptive name (e.g., “Create admin user and verify access”)
  • Goal: What the test should accomplish in natural language
  • Expected Result (optional): What success looks like
  • Dependencies: Configure test execution order (see Test Dependencies for WAIT_FOR and RESUME_FROM)
  • Configurations: Add required data like credentials or test accounts
  • Advanced: Agent selection and other settings
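
To make the fields concrete, here is a minimal sketch of how a test case definition could be modeled. The shape, field names, and example values are illustrative assumptions, not QA.tech’s actual data model or API:

```typescript
// Hypothetical sketch only - field names and types are assumptions,
// not QA.tech's actual data model or API.
interface TestCaseDraft {
  name: string;                        // clear, descriptive name
  goal: string;                        // what the test should accomplish, in natural language
  expectedResult?: string;             // optional: what success looks like
  dependencies?: Array<{
    test: string;                      // the test this one depends on
    mode: "WAIT_FOR" | "RESUME_FROM";  // see Test Dependencies
  }>;
  configurations?: Record<string, string>; // required data such as credentials or test accounts
  advanced?: {
    agent?: "Claude Haiku 4.5" | "Claude Sonnet 4.5"; // agent selection (see next section)
  };
}

// Example following the field guidance above (values are illustrative).
const createAdminUser: TestCaseDraft = {
  name: "Create admin user and verify access",
  goal: "Invite a new member with Admin role to the project",
  expectedResult: "The new member appears in the member list with the Admin role",
  configurations: { adminCredentials: "admin test account" },
};
```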

Choosing an AI Agent

When creating a test, you can select which AI agent executes it. Click the Advanced section in the test creation modal or test settings to see agent options.
  • Claude Haiku 4.5 (default) - Fastest. Best for most tests; recommended for day-to-day testing.
  • Claude Sonnet 4.5 - Moderate speed. Best for complex scenarios requiring deeper reasoning.
Claude Haiku 4.5 is the default for all new tests. It provides the fastest execution while handling most testing scenarios effectively - form filling, navigation, verification, and standard user flows.
When to consider Sonnet: if a test consistently fails with Haiku on complex multi-step reasoning or edge cases, try switching to Sonnet for that specific test.
Most users never need to change the default agent. QA.tech selects Haiku 4.5 because it offers the best balance of speed and capability for typical testing workflows.

Create Tests via AI Chat Assistant

The AI Chat Assistant provides a conversational, exploratory approach to test creation:
  • Natural conversation - Describe your testing needs in plain English. Ask for multiple tests, request specific coverage areas, or iterate on suggestions through back-and-forth conversation.
  • Upload context - Drag and drop specification documents, design files, or requirements (PDFs, images, text files) directly into the chat. The AI uses this context to generate more accurate, relevant tests.
  • Safe experimentation - Tests generated in chat aren’t committed to your project until you explicitly click “Add Selected Tests”. You can review, refine, or discard them without affecting your team. Uploaded files remain isolated to that chat conversation only.
Example prompts:
  • “Generate 5 tests for the checkout flow”
  • “What areas of my product should I cover with test cases first?”
  • “Create a test that validates login and checks the user profile page”
See AI Chat Assistant for more examples.

Review and Refine

After creating a test, the AI agent automatically attempts to execute it and generate test steps. Click the review button to inspect the results:
  • Left sidebar - View and edit the goal, expected result, and generated steps. The Settings tab lets you configure dependencies, add required configurations, or adjust agent settings.
  • Right panel - Inspect the execution trace showing exactly what the agent did. This helps you verify the test behaves as intended.
Update steps as needed and click “Save & Run” to test your changes. You can stop execution at any time with the “Stop” button.

Refine Tests via Chat

The fastest way to fix and improve tests: describe changes in the AI Chat Assistant, review the diff, and run immediately.
  • Fix after failures - When a test fails, stay in chat and describe the fix. No context switching - see the failure, fix it, validate.
  • Build iteratively - Create a rough test, watch it run, refine through conversation. The AI remembers what you both just saw.
  • Bulk refinement - Describe what you want across your test suite and let the AI handle the details. It reads your tests, identifies which ones to change, and proposes edits for each.

Example Prompts

What you say, and what happens:
  • “Change step 3 to wait for the spinner first” - the AI proposes a single step edit
  • “Also verify the success message appears” - the AI adds verification to the current test
  • “Make this resume from my Login test” - the AI updates the dependency
  • “Add email verification to all checkout tests” - the AI finds and edits multiple tests
  • “Create tests for returns similar to checkout” - the AI reads your tests and generates new ones
The AI already knows your test suite and remembers your conversation. You can say “that test” or “where it failed” - no need to be overly specific. Describe what you want in plain language and let the AI figure out the details.
Only for existing tests: Editing works on tests that have been created. For suggestions you haven’t added yet, keep describing changes and the AI will regenerate the suggestion.

Activating and Organizing Tests

After creating and reviewing your test, you’ll want to activate it and organize it within your project.

Activate Your Test

Tests are created in draft mode so you can review and refine them before they run. When you’re ready, click the “Activate” button to enable the test and make it part of your active test suite.
If your test has dependencies that are also in draft mode, QA.tech will prompt you to activate them together to ensure proper execution order.
Once activated, the button changes to “Convert to draft” - use it if you need to temporarily disable the test for maintenance or updates.

Organize with Scenarios

On the Test Cases page, you can drag and drop tests to organize them into Scenario groups. Scenarios help you:
  • Keep related tests together (e.g., “Checkout Flow”, “User Management”)
  • Create logical test groupings for better organization
  • Get a clear overview during test execution
  • Set up test dependencies within related workflows
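
As a rough illustration of how scenario grouping can be thought of, here is a hypothetical sketch. The structure is an assumption for illustration only, not QA.tech’s actual data model; the test names are taken from the examples in this guide:

```typescript
// Hypothetical sketch of scenario grouping - illustrative only,
// not QA.tech's actual data model.
interface Scenario {
  name: string;    // e.g. "Checkout Flow", "User Management"
  tests: string[]; // test cases grouped under this scenario
}

const scenarios: Scenario[] = [
  {
    name: "Checkout Flow",
    tests: [
      "Log in with correct credentials",
      "Search for 'Chair', navigate to a product and add it to the cart",
    ],
  },
  {
    name: "User Management",
    tests: ["Invite a new member with Admin role to the project"],
  },
];
```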

Writing Effective Tests

Writing a Good Goal

The goal is the main objective of the test. The agent uses this to build steps and adapt when your application changes. Focus on describing what to do, not what to validate (use the expected result for that).
Good goal examples:
  • Search for ‘Chair’, navigate to a product and add it to the cart
  • Invite a new member with Admin role to the project
  • Open the customer support chat and send a message
Keep your goals:
  • Action-oriented - Start with verbs like “Create”, “Search”, “Navigate”, “Add”
  • Specific - Include exact details (product name, user role, button labels)
  • Focused - Describe actions to take, not validation criteria

Writing a Good Expected Result

The expected result defines what the agent should verify at the end of the test. Describe what should be visible or observable when the test completes successfully.
Good expected result examples:
  • The page should contain a user avatar
  • A success message appears and the user is redirected to the product list
  • The user receives an email with a password reset link
Keep your expected results:
  • Observable - Focus on things that can be verified visually or through system responses
  • Specific - Include exact elements, messages, or states to check
  • Outcome-focused - Describe the end state, not how to get there
Keep tests to 10 steps or fewer. If you need more steps, create a new test with a dependency instead. Shorter tests are faster to execute and easier to maintain.
Performance tip: Agent Cache is enabled by default, speeding up test execution by reusing AI reasoning from previous successful runs. Consider disabling cache when debugging flaky tests or testing new features where you want fresh AI analysis.

Test Dependencies

Tests can depend on other tests to control execution order and reuse browser state. This is essential for complex workflows where one test needs data or state from another. Learn more in Test Dependencies.
Testing with Multiple Users: If your test requires multiple users logged in simultaneously (e.g., collaboration, sharing), create separate login tests for each user. Each login test becomes the root of an independent chain with its own isolated browser session, ensuring users don’t interfere with each other. Learn more about Multi-User Testing Scenarios.
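The sketch below shows how two independent chains might be laid out for a multi-user scenario. It assumes RESUME_FROM continues from the parent test’s browser session while WAIT_FOR only enforces execution order (see Test Dependencies for the exact semantics); the shape and names are illustrative assumptions, not QA.tech’s actual configuration format:

```typescript
// Hypothetical sketch - not QA.tech's actual configuration format.
type DependencyMode = "WAIT_FOR" | "RESUME_FROM";

interface TestNode {
  name: string;
  dependsOn?: { test: string; mode: DependencyMode };
}

// Two independent chains, each rooted in its own login test.
// RESUME_FROM reuses the parent's browser state (e.g. the logged-in session),
// so User A and User B run in isolated sessions that don't interfere.
const multiUserSuite: TestNode[] = [
  { name: "Log in as User A" },
  {
    name: "Share a document with User B",
    dependsOn: { test: "Log in as User A", mode: "RESUME_FROM" },
  },
  { name: "Log in as User B" },
  {
    name: "Open the document shared by User A",
    dependsOn: { test: "Log in as User B", mode: "RESUME_FROM" },
  },
];
```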

Organizing Tests for Complex Projects

For projects with multiple products, versions, or environments, see Projects, Applications, Environments for organization patterns and best practices.