Tutorial

Turn Any OpenAPI Spec into a Running Mock Server

One CLI command spins up a mock server from any OpenAPI 3.0 or 3.1 spec — with realistic fake data, scenario-based error responses, and VCR cassette recording for deterministic test suites.
May 15, 2026 · 7 min read · Revenue Holdings

Mocking APIs should be the easiest part of your development workflow. Instead, it's often the most frustrating: standing up a separate server, hand-crafting JSON responses, maintaining fake data that doesn't match the schema, and updating everything when the spec changes.

APIGhost eliminates all of that. Point it at any OpenAPI 3.0 or 3.1 specification — a local file or a URL — and it immediately becomes a running mock server. Responses are generated with realistic fake data (powered by Faker, using property names as hints). Need to test error handling? Switch scenarios. Need deterministic tests? Record interactions and replay them as VCR cassettes.

No Docker containers. No configuration files. No standing up a separate Express server.

┌──────────────┐     ┌──────────────┐     ┌─────────────────┐
│ OpenAPI Spec │     │   APIGhost   │     │    Mock API     │
│ petstore.yaml│ ──→ │    serve     │ ──→ │ GET /pets       │
│              │     │ (port 8080)  │     │  → 200 + data   │
└──────────────┘     └──────────────┘     │ POST /pets      │
                                          │  → 201 + data   │
                                          │ GET /pets/1     │
                                          │  → 200 + fake   │
                                          └─────────────────┘

1. Install APIGhost

$ pip install apighost

Or from GitHub:

$ pip install git+https://github.com/Coding-Dev-Tools/apighost.git

Verify it's installed:

$ apighost --help

You'll see the available commands: serve, record, replay, scenario, and generate.

2. Mock a Pet Store API from a Spec

Let's use the standard Petstore example from the OpenAPI specification. APIGhost can load specs from URLs directly:

$ apighost serve https://raw.githubusercontent.com/OAI/OpenAPI-Specification/main/examples/v3.0/petstore.yaml

That's it. Your mock server is running on http://localhost:8080. Now hit it with curl:

$ curl http://localhost:8080/api/pets

You'll get a response like:

[
  {
    "id": 12345,
    "name": "Whiskers",
    "tag": "cat"
  },
  {
    "id": 67890,
    "name": "Buddy",
    "tag": "dog"
  }
]

APIGhost parsed the schema, inferred the data types, and generated realistic values. The name field gets a real pet name. The id gets a plausible integer. The tag gets a category string.
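The "property names as hints" idea is easy to picture. Here is a stdlib-only sketch of the concept — the hint table and function names are illustrative, not APIGhost's actual internals (which use Faker's much richer providers):

```python
import random

# Illustrative hint table: property-name substrings mapped to generators.
HINTS = {
    "name": lambda: random.choice(["Whiskers", "Buddy", "Oreo", "Luna"]),
    "tag": lambda: random.choice(["cat", "dog", "bird"]),
    "email": lambda: f"user{random.randint(1, 999)}@example.com",
}

def fake_value(prop_name: str, schema_type: str):
    """Pick a plausible value: match a name hint first, fall back to the type."""
    for hint, gen in HINTS.items():
        if hint in prop_name.lower():
            return gen()
    if schema_type == "integer":
        return random.randint(1, 99999)
    if schema_type == "string":
        return "string"
    return None

def fake_object(schema: dict) -> dict:
    """Generate one object for an OpenAPI-style object schema."""
    return {
        prop: fake_value(prop, spec.get("type", "string"))
        for prop, spec in schema.get("properties", {}).items()
    }

pet_schema = {
    "type": "object",
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "tag": {"type": "string"},
    },
}
print(fake_object(pet_schema))
```

Because `id` matches no hint, it falls back to a schema-typed random integer, while `name` and `tag` get hint-driven values — the same behavior you see in the curl output above.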

Try a POST:

$ curl -X POST http://localhost:8080/api/pets -H "Content-Type: application/json" -d '{"name":"Oreo","tag":"cat"}'
{
  "id": 54321,
  "name": "Oreo",
  "tag": "cat"
}

The server validates your input against the OpenAPI schema, generates an appropriate response, and picks the correct HTTP status code from the spec.
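That validation step amounts to a walk over the schema's required and properties keys. A minimal sketch of the idea — not APIGhost's actual validator, which handles the full OpenAPI schema vocabulary:

```python
def validate_body(body: dict, schema: dict) -> list:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    props = schema.get("properties", {})
    # Every required field must be present.
    for field in schema.get("required", []):
        if field not in body:
            errors.append(f"missing required field: {field}")
    # Present fields must match their declared type.
    type_map = {"string": str, "integer": int, "boolean": bool}
    for field, value in body.items():
        expected = props.get(field, {}).get("type")
        if expected in type_map and not isinstance(value, type_map[expected]):
            errors.append(f"{field}: expected {expected}")
    return errors

new_pet_schema = {
    "type": "object",
    "required": ["name"],
    "properties": {"name": {"type": "string"}, "tag": {"type": "string"}},
}
print(validate_body({"name": "Oreo", "tag": "cat"}, new_pet_schema))  # []
print(validate_body({"tag": 7}, new_pet_schema))
```

The second call reports both a missing required field and a type mismatch, which is the kind of feedback a schema-aware mock can give you before the real backend exists.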

💡 Serve from a local file: You can also use a local spec: apighost serve ./petstore.yaml or use a custom port: apighost serve petstore.yaml -p 3000.

3. Generate Sample Data

Need to populate your mock with realistic data before frontend development starts? Use generate to create a full scenario of sample responses:

$ apighost generate petstore.yaml

This introspects the entire spec and pre-generates plausible data for every endpoint — names, emails, IDs, timestamps, nested objects — using property name hints from the schema. The generated output is saved as a scenario that you can use immediately.
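Conceptually, this means walking every path and operation in the parsed spec and producing one sample per response schema. A simplified sketch, assuming a spec already parsed into a dict (the field names follow OpenAPI 3.x; the placeholder values are illustrative, not APIGhost's output):

```python
def sample_for(schema: dict):
    """Produce one deterministic sample value for a schema node."""
    t = schema.get("type", "object")
    if t == "object":
        return {p: sample_for(s) for p, s in schema.get("properties", {}).items()}
    if t == "array":
        return [sample_for(schema.get("items", {}))]
    return {"integer": 1, "string": "sample", "boolean": True}.get(t)

def generate_scenario(spec: dict) -> dict:
    """Walk every path/operation and pre-generate a 200-response body."""
    out = {}
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            schema = (op.get("responses", {}).get("200", {})
                        .get("content", {}).get("application/json", {})
                        .get("schema", {}))
            out[f"{method.upper()} {path}"] = sample_for(schema)
    return out

# A tiny parsed OpenAPI fragment (normally loaded from petstore.yaml).
spec = {
    "paths": {
        "/pets": {
            "get": {
                "responses": {"200": {"content": {"application/json": {
                    "schema": {"type": "array", "items": {
                        "type": "object",
                        "properties": {"id": {"type": "integer"},
                                       "name": {"type": "string"}},
                    }}}}}}
            }
        }
    }
}
print(generate_scenario(spec))  # {'GET /pets': [{'id': 1, 'name': 'sample'}]}
```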

4. Scenarios — Switch Between Happy Path and Error States

This is where APIGhost becomes a real testing tool. A scenario is a named set of response overrides. When your frontend or integration tests need to see error states, switch scenarios instead of modifying test code.

First, create a scenario for error testing:

$ apighost scenario create error-test -d "API error scenarios"

Then configure specific endpoints to return errors:

$ apighost scenario edit error-test "GET /api/pets/1" --status 404 --body '{"error":"Pet not found"}'
$ apighost scenario edit error-test "GET /api/pets" --status 500 --body '{"error":"Internal server error"}'

Now serve with the error scenario:

$ apighost serve petstore.yaml --scenario error-test
# Normal endpoint still works
$ curl http://localhost:8080/api/pets/2
→ 200 {"id": 67890, "name": "Buddy"}

# Overridden endpoint returns your error
$ curl http://localhost:8080/api/pets/1
→ 404 {"error": "Pet not found"}

# Another overridden endpoint
$ curl http://localhost:8080/api/pets
→ 500 {"error": "Internal server error"}

List your available scenarios:

$ apighost scenario list

Delete when you're done:

$ apighost scenario delete error-test

💡 Why scenarios matter: Instead of hard-coding error-handling paths in your frontend, run the same test suite against different scenarios. Your "rate limit exceeded" handler, "server error" retry logic, and "not found" fallback all get exercised without changing a single line of test code.
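A scenario is essentially a lookup table consulted before the data generator. A hypothetical sketch of that dispatch (the stand-in "generated" response mimics the earlier curl output; none of these names are APIGhost's API):

```python
def make_responder(overrides: dict):
    """Build a responder that checks scenario overrides first."""
    def respond(method: str, path: str):
        key = f"{method.upper()} {path}"
        if key in overrides:
            return overrides[key]                   # (status, body) override
        return 200, {"id": 67890, "name": "Buddy"}  # stand-in for schema-generated data
    return respond

# The error-test scenario built by the commands above, as data.
error_test = {
    "GET /api/pets/1": (404, {"error": "Pet not found"}),
    "GET /api/pets":   (500, {"error": "Internal server error"}),
}
respond = make_responder(error_test)
print(respond("GET", "/api/pets/1"))  # (404, {'error': 'Pet not found'})
print(respond("GET", "/api/pets/2"))  # (200, {'id': 67890, 'name': 'Buddy'})
```

Switching scenarios is just swapping the overrides table, which is why it costs nothing to run the same suite against happy-path and error states.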

5. VCR Recording for Deterministic Tests

Scenarios are great for manual testing. For automated test suites, you need deterministic responses — every test run must return exactly the same data. That's where VCR recording comes in.

Record mode

Start APIGhost in recording mode, make requests against your real API (or the mock), and it captures every interaction:

$ apighost record petstore.yaml
# In another terminal, make some requests
$ curl http://localhost:8080/api/pets
$ curl http://localhost:8080/api/pets/1

When you stop the server (Ctrl+C), the interactions are saved to a VCR cassette file on disk.

Replay mode

Now replay the exact same interactions — bit-for-bit identical responses every time:

$ apighost replay ./recordings/my-recording
$ curl http://localhost:8080/api/pets
→ Same JSON, same IDs, same ordering as recorded
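A cassette is nothing more than recorded interactions serialized to disk and looked up verbatim on replay. A minimal stdlib sketch of the idea — the on-disk format here is hypothetical, not APIGhost's actual cassette format:

```python
import json
import os
import tempfile

class Cassette:
    """Record request/response pairs, then replay them verbatim."""
    def __init__(self):
        self.interactions = {}

    def record(self, method, path, status, body):
        self.interactions[f"{method} {path}"] = {"status": status, "body": body}

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.interactions, f)

    @classmethod
    def load(cls, path):
        tape = cls()
        with open(path) as f:
            tape.interactions = json.load(f)
        return tape

    def replay(self, method, path):
        # Replay returns exactly what was recorded -- no regeneration.
        return self.interactions[f"{method} {path}"]

# Record one interaction and round-trip it through disk.
tape = Cassette()
tape.record("GET", "/api/pets", 200, [{"id": 1, "name": "Whiskers"}])
cassette_path = os.path.join(tempfile.mkdtemp(), "cassette.json")
tape.save(cassette_path)
print(Cassette.load(cassette_path).replay("GET", "/api/pets"))
```

Because replay is a pure lookup, the response can never drift between runs — which is exactly the determinism property automated suites need.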

This is perfect for:

- CI pipelines where every run must see byte-identical responses
- Snapshot and regression tests that diff full response payloads
- Reproducing a bug with the exact data that triggered it

6. CI/CD Integration

Use APIGhost in your test pipeline to provide a deterministic mock API:

# .github/workflows/frontend-tests.yml
name: Frontend Tests

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install APIGhost
        run: pip install apighost

      - name: Start mock API
        run: |
          apighost serve spec.yaml -p 8080 &
          sleep 2  # wait for server to be ready

      - name: Run frontend tests
        run: npm test

      - name: Stop mock API
        run: kill %1
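The fixed `sleep 2` works, but it can flake on slow runners and wastes time on fast ones. An explicit readiness poll is more robust; this helper is a generic sketch you could drop into the pipeline, not part of APIGhost:

```python
import time
import urllib.error
import urllib.request

def wait_for_server(url: str, timeout: float = 10.0) -> bool:
    """Poll url until it answers (any HTTP status) or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=1)
            return True
        except urllib.error.HTTPError:
            return True       # server is up, even if it returned 4xx/5xx
        except OSError:
            time.sleep(0.1)   # connection refused: not ready yet, retry
    return False
```

In the workflow above you could replace `sleep 2` with a one-liner invoking this helper and fail fast if the mock never comes up.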

For even faster CI, use replay mode with a pre-recorded cassette — no spec parsing needed on every run:

      - name: Replay recorded API
        run: |
          apighost replay ./cassettes/frontend-tests -p 8080 &
          sleep 1
          npm test
          kill %1

Command Reference

| Command | Description |
| --- | --- |
| `apighost serve <spec>` | Start mock server from OpenAPI spec (local or URL) |
| `apighost serve <spec> -p 3000` | Custom port |
| `apighost serve <spec> --scenario <name>` | Use a named scenario for response overrides |
| `apighost serve <spec> --record` | Record interactions to a VCR cassette |
| `apighost record <spec>` | Start recorder, capture interactions, save cassette |
| `apighost replay <cassette>` | Replay a recorded cassette deterministically |
| `apighost scenario create <name>` | Create a named response scenario |
| `apighost scenario edit <name> <path>` | Override a specific endpoint's response |
| `apighost generate <spec>` | Generate sample data from the spec |

Real-World Use Case: Parallel Frontend and Backend Development

A team was building a dashboard frontend against a new backend API that wouldn't be ready for three weeks. Instead of waiting, they ran APIGhost against the draft OpenAPI spec, generated realistic sample data, and started building immediately. When the backend changed the response format, updating the spec and restarting APIGhost was faster than any other mocking approach they'd tried.

The same team used the scenario system to test their error handling: the "503 maintenance mode" flow was verified weeks before the actual maintenance page was needed. By the time the real API shipped, the frontend had already handled every error condition.


Next Steps

APIGhost is one of 10 tools in the Revenue Holdings suite, several of which pair well with API testing.

Read the full docs →
