
November 25, 2025


api testing + truly declarative x lightning fast & absurdly simple



Getting Started

A spectest is a collection of test cases. The only required properties of a test case are:

  • name - used to identify the case.
  • endpoint - the API endpoint under test.

Most test cases include a request property which mirrors the fetch Request schema, and a response property that is used for assertions. The response property also mirrors the fetch Response schema.

1. Define the server environment

At the root of your project, create the file spectest.config.js. For the purposes of this guide, we'll be using https://jsonplaceholder.typicode.com, which provides "free fake and reliable API for testing and prototyping".

// spectest.config.js

export default {
  baseUrl: 'https://jsonplaceholder.typicode.com',
  testDir: './test',
  filePattern: '\\.spectest\\.',
};

For actual testing, the baseUrl should be the hostname of your server, e.g. http://localhost:3000 for a local Express server. The CLI searches for and processes files that match filePattern under testDir. For more config options, see the Config reference.

2. Create some spec test cases

Create the file test/jsonpayload.spectest.js and paste the code below in it. Note: the file name needs to match the pattern defined in the config above.

// jsonpayload.spectest.js

const tests = [
  // This basic example fetches {baseUrl}/todos/1 and asserts the response status is 200 (OK).
  {
    name: "Fetch TODO 1",
    endpoint: "/todos/1",
  },

  // In this case, spectest asserts that the actual response matches both the status code and the JSON body.
  {
    name: "Create a post",
    endpoint: "/posts",
    request: {
      method: "POST",
      headers: { "Content-Type": "application/json; charset=UTF-8" },
      body: { title: "foo", body: "bar", userId: 1 },
    },
    response: {
      status: 201,
      json: { id: 101, title: "foo", body: "bar", userId: 1 },
    },
  },
];

export default tests;

A *.spectest.js file can contain classes and functions, use imports, and do anything you would normally do with JavaScript. The default export may be either an array of test cases or an object with a tests array and an optional name property. For the full schema of a test case, see the Testcase reference.

3. Run the test cases

You can run the test cases with:

$ npx spectest

# Run the cases in a specific file
$ npx spectest jsonpayload.spectest.js

# Set/override the configs in step 1
$ npx spectest --base-url="https://jsonplaceholder.typicode.com" jsonpayload.spectest.js

All three commands above produce the same output, which looks like:

📊 Test Summary:
 [✅] Fetch TODO 1 (53ms)
 [✅] Create a post (108ms)

✅ 2/2 tests passed!
📋 Server logs captured: 0
⏱️ Latency: min 53ms; avg 80ms; max 108ms
⏱️ Testing time: 0.11s; Total time: 0.18s

Why Spectest

While building an API, I kept running into the same frustrating loop: after writing comprehensive Jest tests, I still had to manually “verify” the API by running it through a frontend client.

Here’s why Jest alone wasn’t enough:

  1. Mocks obscure reality – Jest encourages mocking, which simulates behavior but can hide real issues in production.

  2. Multi-step flows were painful – Chaining flows like login → verify → fetch was hard to write and even harder to maintain.

  3. No browser-like behavior – Jest couldn't replicate real-world HTTP behavior like cookie persistence or automatic cookie attachment.

  4. API-centric needs were missing – Load testing, proxying, and concurrency checks weren’t feasible out of the box.

What I really needed was a way to verify my API contract the same way a frontend does—but without reaching for heavyweight tools like Selenium or Playwright.

That’s where Spectest was born—out of necessity.

API Reference

Test case options

| Option | Description | Default |
| --- | --- | --- |
| name | Human-readable test name | required |
| operationId | Unique identifier for the operation | name |
| phase | Execution phase of the test (setup, main, or teardown) | main |
| dependsOn | Array of operationId strings that must pass before this test runs | none |
| endpoint | Request path relative to the base URL | required |
| request.method | HTTP method | GET |
| request.headers | Additional request headers | none |
| request.body | Request payload | none |
| request.* | Other valid fetch Request option keys, e.g. cache, mode | none |
| response.status | Expected HTTP status | 200 |
| response.json | Expected partial JSON body | none |
| response.schema | Zod or JSON schema for the response | none |
| response.headers | Expected response headers | none |
| response.* | Other valid fetch Response keys, e.g. statusText, type | none |
| beforeSend | Function used to finalize the request | none |
| postTest | Function used to process the response, usually to extract and save data | none |
| tags | Tags used for filtering | none |
| skip | Skip the test case | false |
| focus | Run only focused tests when present | false |
| repeat | Extra sequential runs of the test | 0 |
| bombard | Additional simultaneous runs of the test | 0 |
| delay | Milliseconds to wait before running | none |
| timeout | Per-test timeout override | runtime timeout (60000 ms) |
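
As an illustration, a single case can combine several of these options. The endpoint, tags, and values below are hypothetical, not taken from the guide above:

```javascript
// Illustrative test case combining several options from the table above.
// The endpoint, tags, and values are hypothetical.
const listUsersUnderLoad = {
  name: 'List users under load',
  operationId: 'list-users-load',
  endpoint: '/users',
  request: {
    method: 'GET',
    headers: { Accept: 'application/json' },
  },
  response: { status: 200 },
  tags: ['users', 'load'],
  repeat: 2,     // 2 extra sequential runs
  bombard: 5,    // 5 additional simultaneous runs
  timeout: 5000, // fail any run that exceeds 5 seconds
};
```

Add the object to a suite's exported array as in the earlier examples.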

Config options

| Option | Description | Default |
| --- | --- | --- |
| configFile | Path to an extra config file | none |
| baseUrl | Base URL of the API | http://localhost:3000 |
| testDir | Directory containing test suites | ./test |
| filePattern | Regex for suite filenames | \.spectest\. |
| startCmd | Command to start the test server | npm run start |
| buildCmd | Command to build the test server | none |
| runningServer | Handling for an existing server (reuse, fail, or kill) | reuse |
| tags | String list used for filtering tests | [] |
| rps | Requests per second rate limit | Infinity |
| timeout | Default request timeout in milliseconds | 60000 |
| snapshotFile | Path to write a snapshot file | none |
| randomize | Shuffle test ordering before execution | false |
| happy | Run only tests expecting a 2xx status; a quick filter for the happy path | false |
| filter | Regex or smart filter to select tests (happy, failures) | none |
| verbose | Verbose output with logs | false |
| userAgent | Browser User-Agent string to send, or one of the predefined user agents | chrome_windows |
| suiteFile | Run only the specified suite file | none |
| projectRoot (--dir) | Root directory of the project | current working directory |
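
For example, a config for a local development server might combine several of these options. The commands and values below are illustrative:

```javascript
// spectest.config.js — an illustrative local-development config.
export default {
  baseUrl: 'http://localhost:3000',
  testDir: './test',
  filePattern: '\\.spectest\\.',
  startCmd: 'npm run start', // command used to start the server if needed
  runningServer: 'reuse',    // reuse an already-running server
  rps: 20,                   // cap the request rate at 20 requests/second
  timeout: 10000,            // default per-request timeout in milliseconds
};
```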

API Testing Tips

Setup and Teardown

For tests that need to run before all others (e.g., checking server status) or after all others (e.g., logging out), you can use the phase property. The phase can be set to setup, main (default), or teardown.

This is syntactic sugar for dependsOn, ensuring that setup tests block main and teardown tests, and main tests block teardown tests.

export default [
  {
    name: 'Ping Server',
    endpoint: '/ping',
    response: { status: 200 },
    phase: 'setup',
  },
  {
    name: 'Main Test',
    endpoint: '/some-data',
    response: { status: 200 },
  },
  {
    name: 'Logout',
    endpoint: '/logout',
    response: { status: 200 },
    phase: 'teardown',
  },
];

Alternatively, you can structure your suite with setup, tests, and teardown properties:

export default {
  name: 'My Suite',
  setup: [
    {
      name: 'Ping Server',
      endpoint: '/ping',
      response: { status: 200 },
    },
  ],
  tests: [
    {
      name: 'Main Test',
      endpoint: '/some-data',
      response: { status: 200 },
    },
  ],
  teardown: [
    {
      name: 'Logout',
      endpoint: '/logout',
      response: { status: 200 },
    },
  ],
};

Making dynamic assertions

API responses are often dynamic, but we usually know their structure. For example, an API may return a timestamp that you cannot assert on exactly without mocking. For these cases, use a Zod schema to describe the shape and properties of the data expected in the response.

The second example response above could have been written as:

import { z } from 'zod';

const tests = [
  {
    name: "Create a post",
    endpoint: "/posts",
    request: {
      method: "POST",
      headers: { "Content-Type": "application/json; charset=UTF-8" },
      body: { title: "foo", body: "bar", userId: 1 },
    },
    response: {
      status: 201,
      schema: z.object({
        id: z.number(),
        title: z.string(),
        body: z.literal('bar'),
        userId: z.number().min(1)
      }),
    },
  },
];

export default tests;

With schema, you can describe the shape of the data while allowing it to take on different literal values. You can use both json and schema for assertions in the same test case.
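
As a sketch of combining both, the create-post case above could pin the echoed fields with json while leaving the dynamic id to a schema:

```javascript
import { z } from 'zod';

// Sketch: exact values via `json`, dynamic fields via `schema`, in one case.
export default [
  {
    name: 'Create a post',
    endpoint: '/posts',
    request: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json; charset=UTF-8' },
      body: { title: 'foo', body: 'bar', userId: 1 },
    },
    response: {
      status: 201,
      json: { title: 'foo', body: 'bar', userId: 1 }, // exact partial match
      schema: z.object({ id: z.number() }),           // id is dynamic; assert shape only
    },
  },
];
```

By default a Zod object schema ignores keys it does not name, so the schema only needs to describe the dynamic fields.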

Controlling concurrency

By default, test requests are sent in parallel. If your API calls other APIs during tests, this might result in unintentionally spamming those third-party backends. To avoid this, set rps in spectest.config.js or pass --rps=<number> on the command line. A rate limiter ensures that no more than the configured number of requests is sent each second.

Testing multi-step flows

Tests execute in parallel by default. After each successful test, its response is saved under state.completedCases[operationId].response. The beforeSend hook of a later test can read that data to craft the next request. To ensure strict sequencing, set rps to 1 or insert explicit delay values.

// auth.spectest.js

export default [
  {
    name: 'Login',
    operationId: 'login',
    endpoint: '/login',
    request: {
      method: 'POST',
      body: { username: 'admin', password: 'secret' }
    },
    response: { status: 200 }
  },
  {
    name: 'Fetch profile',
    dependsOn: ['login'],
    endpoint: '/profile',
    beforeSend: (req, state) => {
      const token = state.completedCases.login.response.data.token;
      req.headers = { ...req.headers, Authorization: `Bearer ${token}` };
    },
    response: { status: 200 }
  }
];

Using proxies

The CLI uses the native fetch API which has proxy support from Node 24+. See https://nodejs.org/api/http.html#built-in-proxy-support for more information on how to set it up.

Filtering test cases

There are multiple strategies for filtering test cases:

Filter by tag

If test cases are tagged, the tags can be used to filter them. A test case can have as many tags as needed.

export default [
  {
    name: "Fetch TODOs",
    endpoint: "/todos/",
    tags: ['todo', 'collection']
  },
  {
    name: "Fetch TODO 1",
    endpoint: "/todos/1",
    tags: ['todo', 'item']
  },
  {
    name: "Fetch Comments",
    endpoint: "/comments/",
    tags: ['comments', 'collection']
  },
  {
    name: "Fetch Comment 1",
    endpoint: "/comments/1",
    tags: ['comments', 'item']
  },
];

You can run only todo tests with npx spectest --tags=todo, and you can combine multiple tags: npx spectest --tags=todo,collection.

Specify name of test file

npx spectest sometest.spectest.js will run only the suites in sometest.spectest.js

Specify pattern for a group of test files

npx spectest --filePattern="auth*" will run all tests in files with an auth prefix.

Use smart filters

Use --filter=<pattern> to run tests whose names match <pattern>. Several smart filters are provided:

--filter=happy filters to only the tests expecting a 2xx status, a quick way to verify the happy path.

--filter=failures reruns only the tests that failed in the snapshot from the previous run.

Test timeout

Use the timeout option to limit how long each test case may run. Specify timeout in spectest.config.js or pass --timeout=<milliseconds> on the command line. The default is 60000 (60 seconds). Individual tests can override this by including a timeout property. When a request exceeds the effective timeout, the test fails with an indicator in the summary.
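
For example, one known-slow case can be given a larger budget while the rest of the suite keeps the configured default. The endpoint name below is hypothetical:

```javascript
// Per-test timeout override; the endpoint name is hypothetical.
const slowReport = {
  name: 'Generate full report',
  endpoint: '/reports/full',
  timeout: 120000, // allow up to 2 minutes for this case only
};
```

Include slowReport in a suite's exported array as usual.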

Check for robustness of API

  • Randomize tests: Run tests with --randomize to uncover unexpected test order dependencies. This is especially useful for serverless functions that should be stateless.

  • Explicit dependencies: Use the dependsOn array on a test case to run it only after the listed operations succeed. Independent tests run concurrently as their prerequisites complete.

  • Load testing: Use the --bombard parameter to literally bombard the API with requests. It can also be set on an individual test case to see how the API handles a flood of requests to that endpoint.

  • Simulating requests from mobile devices: The --user-agent param can be used to set the request User-Agent to that of a mobile device. Spectest provides definitions for the user agents of popular desktop and mobile devices.

Updating tests from failed responses

Use the --snapshot=<file> option to write the executed test cases to a JSON file. Each case records the final request that was sent and the actual server response, in addition to the result status (pass, fail, or timeout) and other metadata.

You can easily update failed tests by copying the responses in the snapshot file into the test cases.

Working with large test suites

The spectest/helpers module contains utility functions for batch-modifying test cases. Most attributes that can be applied to a test case have a similarly named batch helper.

The collection below

const suite = [
  {
    name: "Get todo list",
    endpoint: "/todos",
    delay: 500,
    focus: true,
  },
  {
    name: "Fetch TODO 1",
    endpoint: "/todos/1",
    delay: 500,
    focus: true,
  },
];
export default suite;

is equivalent to

import {focus, delay} from 'spectest/helpers';

const suite = [
  {
    name: "Get todo list",
    endpoint: "/todos",
  },
  {
    name: "Fetch TODO 1",
    endpoint: "/todos/1",
  },
];
export default focus(delay(suite, 500));

And you can create your own helpers to reduce repetition of common request/response properties!
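
For example, a custom helper (hypothetical, not shipped with spectest) could merge a shared header into every case's request:

```javascript
// A hypothetical helper (not part of spectest/helpers) that merges a header
// into every test case's request without mutating the originals.
const withHeader = (suite, name, value) =>
  suite.map((testCase) => ({
    ...testCase,
    request: {
      ...testCase.request,
      headers: { ...testCase.request?.headers, [name]: value },
    },
  }));

const suite = [
  { name: 'Get todo list', endpoint: '/todos' },
  { name: 'Fetch TODO 1', endpoint: '/todos/1', request: { method: 'GET' } },
];

// Every case now carries the Authorization header.
const authed = withHeader(suite, 'Authorization', 'Bearer test-token');
```

Export the resulting array as the suite's default export, as in the examples above.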

Test formats

Test cases can be written in .js, plain .json, and .yaml files, or in .mjs for ESM and .cjs for CommonJS modules.

TypeScript (.ts) files are not yet supported; you'd need to transpile them to one of the supported JavaScript formats above.
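
As a sketch, a minimal YAML suite might look like the following. This assumes the YAML keys mirror the JS test-case schema, which this guide does not spell out:

```
# test/todos.spectest.yaml — a sketch; assumes keys mirror the JS schema.
- name: Fetch TODO 1
  endpoint: /todos/1
  response:
    status: 200
```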