The Create Test Case API allows you to programmatically create new test cases for your projects. Use this endpoint to import tests from external systems, generate tests dynamically based on code analysis or custom logic, or automate test creation as part of your CI/CD workflows. For more examples and integration ideas, see CI/CD Integration.

The Create Test Case API provides a single endpoint to create new test cases:
  • Endpoint: POST /projects/{projectUuid}/test-cases
  • Authentication: Bearer token
  • Content-Type: application/json

Request Parameters

Path Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| projectUuid | string (UUID) | Yes | Your project’s UUID (36-character format, NOT the short ID from URLs) |

Headers

| Header | Value | Required | Description |
| --- | --- | --- | --- |
| Content-Type | application/json | Yes | Media type of the request body |
| Authorization | Bearer YOUR_API_TOKEN | Yes | Bearer token authentication |

Request Body

The request body must be a JSON object with the following fields:

Required Fields

| Field | Type | Description |
| --- | --- | --- |
| name | string | Test case name (1-80 characters) |
| goal | string | The goal/objective of the test case. This describes what the test should accomplish. |
| applicationId | string (UUID) | The UUID of the application this test belongs to. Must belong to the specified project. |

Optional Fields

| Field | Type | Description |
| --- | --- | --- |
| expectedResult | string | Expected result description. Provides additional context about what success looks like for this test. |
| configIds | array of UUIDs | Array of configuration IDs (e.g., login credentials, API keys). These configs will be available to the test during execution. |
| scenarioId | string (UUID) | Scenario ID if grouping tests into scenarios. Must belong to the specified project. |
| resumeFromDependencyProjectTestCaseId | string (UUID) | ID of a test case to resume browser state from (Resume From dependency). See Dependencies Deep Dive section below. |
| waitForDependenciesProjectTestCaseIds | array of UUIDs | Array of test case IDs that must complete before this test runs (Wait For dependencies). See Dependencies Deep Dive section below. |

All ID fields (applicationId, configIds, scenarioId, and dependency IDs) require full UUIDs, not short IDs. All IDs must belong to the specified project.
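Because every ID must be a full 36-character UUID rather than the short ID that appears in QA.tech URLs, a quick client-side check can catch bad IDs before the request is sent. A minimal sketch in Python (the helper name is illustrative, not part of the API):

```python
import uuid

def is_full_uuid(value: str) -> bool:
    """Accept only a full 36-character UUID string, rejecting the
    short IDs that appear in QA.tech URLs."""
    if len(value) != 36:
        return False
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False
```

Running this check over applicationId, configIds, scenarioId, and both dependency fields before calling the API avoids a round trip that would end in a 400 response.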

Authentication & Setup

This endpoint requires Bearer token authentication and your project UUID. For detailed instructions on authentication and finding your project UUID, see API Introduction and Finding Your Project UUID.

Test Dependencies in API

Test cases can depend on other test cases to control execution order and share browser state. Dependencies enable you to build test workflows where tests run in sequence, reuse expensive setup like authentication, and pass data between tests. The API supports two dependency types:
  • Resume From: Continues the browser session from another test, preserving cookies, localStorage, and sessionStorage. Use this for sequential workflows that build on each other (e.g., login → create → edit).
  • Wait For: Waits for other tests to complete before starting, but uses a fresh browser session. Use this when you need execution order and data passing but want isolated browser state.
For detailed information about how dependencies work, execution order, browser isolation, and multi-user testing scenarios, see Test Dependencies.

Resume From Dependency

The resumeFromDependencyProjectTestCaseId field creates a Resume From dependency. This allows your test to resume browser state from a previous test case, effectively continuing the browser session. How it works:
  • The test resumes browser state (cookies, localStorage, sessionStorage) from the dependency test
  • Skips login/setup steps that were already completed
  • Use case: Sequential tests that build on each other (e.g., login test → profile update test → logout test)
Example: An “Update Profile” test can resume from a “Login” test, skipping the login step and starting directly at the profile page.

Wait For Dependencies

The waitForDependenciesProjectTestCaseIds field creates Wait For dependencies. This ensures your test waits for one or more other tests to complete before starting execution. How it works:
  • The test waits for all specified dependency tests to complete successfully (status: COMPLETED, result: PASSED)
  • If any dependency fails, the dependent test will not run
  • Tests can run in parallel, but this test waits for all dependencies to pass
  • Use case: Parallel setup tests, then sequential verification (e.g., setup test A and B run in parallel, verification test C waits for both to pass)
Example: A “Final Verification” test can wait for multiple setup tests to complete before running its checks.
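The gating rule above (every dependency must finish with status COMPLETED and result PASSED) can be expressed as a small predicate. This is an illustrative sketch; the dictionary shape is assumed for the example and is not the API's response format:

```python
def dependencies_satisfied(dependency_results: list[dict]) -> bool:
    """Mirror the Wait For rule: the dependent test runs only if
    every dependency completed and passed."""
    return all(
        r.get("status") == "COMPLETED" and r.get("result") == "PASSED"
        for r in dependency_results
    )
```

A single failed or still-running dependency makes the predicate false, matching the behavior described above: the dependent test will not run.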

Resume From vs Wait For

| Feature | Resume From | Wait For |
| --- | --- | --- |
| Purpose | Continue browser session | Wait for completion |
| Browser State | Resumes from dependency | Fresh browser session |
| Execution Order | Sequential (runs after dependency) | Parallel setup, sequential execution |
| Use Case | Multi-step flows in same session | Parallel setup, then verification |
| Number of Dependencies | Single test case | Multiple test cases |
You can use both dependency types together. For example, a test can wait for multiple setup tests to complete, then resume browser state from one of them.

Response Format

Success Response (200)

{
  "success": true,
  "testCase": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "name": "Login Test",
    "url": "https://app.qa.tech/p/project-slug_abc123/test-cases/550e8400-e29b-41d4-a716-446655440000/edit"
  }
}
The test case is created with generationStatus: 'GENERATING' and automatically undergoes a burn-in process to generate test steps. During burn-in, QA.tech executes the test against your application to discover and record the steps needed to accomplish your goal. The AI agent navigates your application, performs actions, and captures the sequence of steps that achieve the test objective. Once burn-in completes, the test steps are saved and the test case status changes to COMPLETED.
Test cases created via API are disabled by default (isEnabled: false). Enable them in the QA.tech UI before they run in test plans. The createdBy field is automatically set to the QA.tech system user.
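A caller will usually want the new test case's ID and edit URL out of the success response. A sketch, assuming the response shape shown above (the function name is illustrative):

```python
import json

def parse_create_response(raw: str) -> tuple[str, str]:
    """Extract (id, url) from a successful Create Test Case response."""
    payload = json.loads(raw)
    if not payload.get("success"):
        raise RuntimeError("create test case failed")
    test_case = payload["testCase"]
    return test_case["id"], test_case["url"]
```

The returned URL can be logged from a CI job so that reviewers can jump straight to the generated test case in the QA.tech UI.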

Error Responses

All error responses use the same format with a statusText field containing the error message.

Bad Request (400)

Returned when the request body is malformed, required fields are missing, validation fails, or invalid IDs are provided. Examples:

Validation error:
{
  "statusText": "Validation error: Test name is required"
}
Invalid application ID:
{
  "statusText": "Invalid application ID for this project"
}

Forbidden (403)

Returned when the bearer token is invalid or the organization is suspended. Examples:

Invalid token:
{
  "statusText": "Unknown Error"
}
Organization suspended:
{
  "statusText": "Organization is suspended"
}

Not Found (404)

Returned when the specified project UUID doesn’t exist.

Example:
{
  "statusText": "Project not found"
}

Internal Server Error (500)

Returned when an internal server error occurs during test creation.

Example:
{
  "statusText": "Failed to create test case"
}

Example Requests

Basic Test Case Creation

curl -X POST https://app.qa.tech/api/projects/YOUR_PROJECT_UUID/test-cases \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{
    "name": "Login Test",
    "goal": "Verify that users can log in successfully with valid credentials",
    "expectedResult": "User should be redirected to dashboard after successful login",
    "applicationId": "550e8400-e29b-41d4-a716-446655440000"
  }'

Test Case with Configurations

curl -X POST https://app.qa.tech/api/projects/YOUR_PROJECT_UUID/test-cases \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{
    "name": "Checkout Flow Test",
    "goal": "Complete a purchase from cart to order confirmation",
    "expectedResult": "Order should be placed successfully and confirmation email sent",
    "applicationId": "550e8400-e29b-41d4-a716-446655440000",
    "configIds": [
      "a1b2c3d4-e5f6-4789-a012-3456789abcde",
      "b2c3d4e5-f6a7-4890-b123-456789abcdef"
    ]
  }'

Test Case with Resume From Dependency

curl -X POST https://app.qa.tech/api/projects/YOUR_PROJECT_UUID/test-cases \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{
    "name": "Update Profile Test",
    "goal": "Update user profile information",
    "expectedResult": "Profile should be updated and changes reflected immediately",
    "applicationId": "550e8400-e29b-41d4-a716-446655440000",
    "resumeFromDependencyProjectTestCaseId": "a1b2c3d4-e5f6-4789-a012-3456789abcde"
  }'
This creates a test that will resume browser state from the login test, skipping the need to log in again.

Test Case with Wait For Dependencies

curl -X POST https://app.qa.tech/api/projects/YOUR_PROJECT_UUID/test-cases \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{
    "name": "Final Verification Test",
    "goal": "Verify all setup steps completed successfully",
    "expectedResult": "All setup tests passed and system is ready",
    "applicationId": "550e8400-e29b-41d4-a716-446655440000",
    "waitForDependenciesProjectTestCaseIds": [
      "a1b2c3d4-e5f6-4789-a012-3456789abcde",
      "b2c3d4e5-f6a7-4890-b123-456789abcdef"
    ]
  }'
This test will wait for both setup tests to complete successfully before starting execution.

CI/CD Integration

This API endpoint works with any CI/CD system. For platform-specific guides and examples on integrating QA.tech into your CI/CD workflows, see CI/CD Integration.
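In a CI job, the same call the curl examples make can be issued from a short script. A sketch using only Python's standard library; the function name is illustrative, and in practice the token would come from a CI secret (e.g. an environment variable of your choosing):

```python
import json
import urllib.request

def create_test_case_request(
    project_uuid: str, token: str, body: dict
) -> urllib.request.Request:
    """Build the POST request for the Create Test Case endpoint.
    Send it with urllib.request.urlopen(req) inside the CI job."""
    url = f"https://app.qa.tech/api/projects/{project_uuid}/test-cases"
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
```

Building the Request separately from sending it keeps the call easy to unit-test: headers, method, and URL can all be asserted without touching the network.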