May 15, 2026 · 12 min read
Zero to CI Safety Net
This tutorial walks you through each Revenue Holdings CLI tool with real sample files you can download, run, and inspect. By the end, you'll have a combined GitHub Actions pipeline that catches API contract breaks, bad seed data, infrastructure blast radius, and config drift — all in under 30 seconds.
Prerequisites: Python 3.10+, a GitHub repository, and 15 minutes.
Tutorial Structure
- API Contract Guardian — compare two OpenAPI specs and find breaking changes
- json2sql — convert sample JSON to type-safe SQL
- DeployDiff — analyze infrastructure changes for blast radius
- ConfigDrift — compare environments for configuration drift
- Combined Pipeline — one GitHub Actions workflow to rule them all
Step 1: API Contract Guardian
Goal: Catch breaking API changes before they reach consumers
Create two OpenAPI spec files in your repo. Save this as spec-v1.yaml:
openapi: 3.0.0
info:
  title: User API
  version: 1.0.0
paths:
  /users:
    get:
      summary: List users
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  required: [id, name, email]
                  properties:
                    id: {type: integer}
                    name: {type: string}
                    email: {type: string}
  /users/{id}:
    get:
      summary: Get user
      parameters:
        - name: id
          in: path
          required: true
          schema: {type: integer}
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                type: object
                required: [id, name, email]
                properties:
                  id: {type: integer}
                  name: {type: string}
                  email: {type: string}
Now save this breaking change as spec-v2-breaking.yaml — notice we removed the email field from the response:
openapi: 3.0.0
info:
  title: User API
  version: 2.0.0
paths:
  /users:
    get:
      summary: List users
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  required: [id, name]  # email removed from required
                  properties:
                    id: {type: integer}
                    name: {type: string}
                    # email field removed — BREAKING CHANGE
  /users/{id}:
    get:
      summary: Get user
      parameters:
        - name: id
          in: path
          required: true
          schema: {type: integer}
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                type: object
                required: [id, name]
                properties:
                  id: {type: integer}
                  name: {type: string}
Run the check:
$ api-contract-guardian check spec-v2-breaking.yaml --prev spec-v1.yaml
Expected output:
🔴 BREAKING: Response body missing required field 'email' in GET /users
🔴 BREAKING: Response body missing required field 'email' in GET /users/{id}
✖ FAILED: 2 breaking changes detected
Now try a safe change — save this as spec-v2-safe.yaml with only an added field:
# ...same as v1 but with an optional 'phone' field added
properties:
  id: {type: integer}
  name: {type: string}
  email: {type: string}
  phone: {type: string}  # new field — safe!
Run the check again:
$ api-contract-guardian check spec-v2-safe.yaml --prev spec-v1.yaml
Expected output:
✅ COMPATIBLE: Added optional field 'phone' to response
✔ PASSED: 0 breaking changes
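The core rule behind this check is easy to sketch: a field that the old response schema marked as required but that is absent from the new schema breaks every consumer that relied on it. Here is a minimal Python illustration of that idea (an assumption about the approach, not api-contract-guardian's actual implementation):

```python
# Minimal sketch of required-field breaking-change detection between two
# JSON Schema response objects. Illustrative only; the real tool walks the
# full OpenAPI document, not a single schema.

def breaking_changes(old_schema: dict, new_schema: dict, endpoint: str) -> list[str]:
    """Flag fields that were required in the old schema but vanished from the new one."""
    old_required = set(old_schema.get("required", []))
    new_fields = set(new_schema.get("properties", {}))
    return [
        f"BREAKING: response missing required field '{field}' in {endpoint}"
        for field in sorted(old_required - new_fields)
    ]

v1 = {"required": ["id", "name", "email"],
      "properties": {"id": {}, "name": {}, "email": {}}}
v2 = {"required": ["id", "name"],
      "properties": {"id": {}, "name": {}}}

print(breaking_changes(v1, v2, "GET /users"))
# → ["BREAKING: response missing required field 'email' in GET /users"]
```

Note the asymmetry: removing a required field is breaking, while adding an optional one is not, which is exactly why spec-v2-safe.yaml passes.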
Step 2: json2sql
Goal: Convert JSON seed data to type-safe SQL — catch type mismatches before they hit the database
Save this sample data as seed.json:
{
  "users": [
    {"id": 1, "name": "Alice", "email": "alice@example.com", "role": "admin", "active": true, "score": 95.5, "created_at": "2026-01-15"},
    {"id": 2, "name": "Bob", "email": "bob@example.com", "role": "editor", "active": true, "score": 87.0, "created_at": "2026-02-20"},
    {"id": 3, "name": "Charlie", "email": "charlie@example.com", "role": "viewer", "active": false, "score": 72.3, "created_at": "2026-03-10"}
  ],
  "articles": [
    {"id": 1, "title": "Getting Started with CI/CD", "author_id": 1, "published": true, "views": 1200, "tags": ["devops", "ci"]},
    {"id": 2, "title": "API Design Best Practices", "author_id": 2, "published": true, "views": 890, "tags": ["api", "design"]}
  ]
}
Convert it to PostgreSQL INSERT statements:
$ json2sql seed.json --dialect postgres
Expected output includes properly typed SQL:
-- Table: users
INSERT INTO "users" ("id", "name", "email", "role", "active", "score", "created_at")
VALUES
(1, 'Alice', 'alice@example.com', 'admin', TRUE, 95.5, '2026-01-15'::date),
(2, 'Bob', 'bob@example.com', 'editor', TRUE, 87.0, '2026-02-20'::date),
(3, 'Charlie', 'charlie@example.com', 'viewer', FALSE, 72.3, '2026-03-10'::date);
-- Table: articles
INSERT INTO "articles" ("id", "title", "author_id", "published", "views", "tags")
VALUES
(1, 'Getting Started with CI/CD', 1, TRUE, 1200, ARRAY['devops','ci']),
(2, 'API Design Best Practices', 2, TRUE, 890, ARRAY['api','design']);
Notice how json2sql automatically:
- Infers `active: true` → PostgreSQL `BOOLEAN` → `TRUE`
- Infers `score: 95.5` → `NUMERIC` → `95.5`
- Infers `created_at: "2026-01-15"` → `DATE` → `'2026-01-15'::date`
- Converts `tags: [...]` → PostgreSQL `ARRAY` syntax
Now try breaking the data: put a null in a NOT NULL column, or a string where a number is expected. json2sql will flag the type mismatch and tell you exactly which field and row has the problem. This is the same check you'd run in CI.
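The value-to-literal mapping above follows directly from the JSON types. A rough sketch of how such a renderer might work (an illustration of the idea, not json2sql's source; date detection is omitted for brevity):

```python
# Sketch of rendering JSON values as PostgreSQL literals -- illustrative only.

def pg_literal(value) -> str:
    """Render a Python/JSON value as a PostgreSQL literal string."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):  # must check bool before int: True is an int in Python
        return "TRUE" if value else "FALSE"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, list):
        return "ARRAY[" + ",".join(pg_literal(v) for v in value) + "]"
    escaped = str(value).replace("'", "''")  # escape embedded single quotes
    return f"'{escaped}'"

row = {"id": 1, "active": True, "score": 95.5, "tags": ["devops", "ci"]}
print(", ".join(pg_literal(v) for v in row.values()))
# → 1, TRUE, 95.5, ARRAY['devops','ci']
```

The bool-before-int check matters because `isinstance(True, int)` is true in Python; getting the order wrong would emit `1` instead of `TRUE` and silently change the column type.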
Step 3: DeployDiff
Goal: Understand the blast radius of every infrastructure change before you click deploy
Save this Terraform plan output as tfplan.json (simulated):
{
  "resource_changes": [
    {
      "address": "aws_db_instance.main",
      "change": {
        "actions": ["update"],
        "before": {"engine": "postgres", "engine_version": "14", "storage": 100, "multi_az": false, "backup_retention_period": 7},
        "after": {"engine": "postgres", "engine_version": "15", "storage": 200, "multi_az": true, "backup_retention_period": 30}
      }
    },
    {
      "address": "aws_s3_bucket.logs",
      "change": {"actions": ["delete"], "before": {"bucket": "logs-prod"}, "after": null}
    },
    {
      "address": "aws_lambda_function.processor",
      "change": {"actions": ["update"], "before": {"memory_size": 128, "timeout": 30}, "after": {"memory_size": 256, "timeout": 60}}
    }
  ]
}
Run DeployDiff analysis:
$ deploydiff diff --plan tfplan.json --format summary
Expected output:
🔴 DESTRUCTION: aws_s3_bucket.logs will be DELETED
🟡 REPLACEMENT: aws_db_instance.main — major version upgrade (14→15), possible replacement
🟢 SAFE UPDATE: aws_lambda_function.processor — memory 128→256, timeout 30→60
Blast radius: 2 of 3 changes affect critical infrastructure (database, S3)
Recommendation: Review aws_s3_bucket.logs deletion — no backup bucket detected
DeployDiff turned a dense JSON blob into actionable information. Your reviewer can now see at a glance: "we're deleting an S3 bucket and upgrading the database — let's review those first."
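The classification itself can be sketched as a simple severity ranking over each entry's `actions` list (assumed logic, not DeployDiff's source; in Terraform plan JSON, `["delete", "create"]` together signal a replacement):

```python
# Sketch of ranking Terraform plan resource changes by severity -- illustrative only.

SEVERITY = {"delete": "DESTRUCTION", "create": "SAFE", "update": "SAFE UPDATE"}

def classify(resource_change: dict) -> str:
    """Label one resource_changes entry from Terraform plan JSON."""
    actions = resource_change["change"]["actions"]
    if "delete" in actions and "create" in actions:
        label = "REPLACEMENT"      # delete + create = resource will be replaced
    elif "delete" in actions:
        label = "DESTRUCTION"      # pure delete: highest severity
    else:
        label = SEVERITY.get(actions[0], "UNKNOWN")
    return f"{label}: {resource_change['address']}"

plan = [
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["delete"]}},
    {"address": "aws_lambda_function.processor", "change": {"actions": ["update"]}},
]
for rc in plan:
    print(classify(rc))
# → DESTRUCTION: aws_s3_bucket.logs
# → SAFE UPDATE: aws_lambda_function.processor
```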
Step 4: ConfigDrift
Goal: Catch environment configuration drift before it causes production incidents
Create two environment files. Save as .env.production:
DB_HOST=prod-db.internal
DB_PORT=5432
DB_POOL_SIZE=25
CACHE_TTL=300
CACHE_PROVIDER=redis
LOG_LEVEL=warn
FEATURE_NEW_DASHBOARD=true
API_RATE_LIMIT=1000
MAX_UPLOAD_SIZE_MB=50
Save as .env.staging:
DB_HOST=staging-db.internal
DB_PORT=5432
DB_POOL_SIZE=10
CACHE_TTL=30
CACHE_PROVIDER=redis
LOG_LEVEL=debug
MAX_UPLOAD_SIZE_MB=50
Run drift check:
$ configdrift check --baseline .env.production --target .env.staging
Expected output:
🔴 MISSING: FEATURE_NEW_DASHBOARD (present in production, missing in staging)
🔴 DRIFT: DB_POOL_SIZE (staging=10, production=25) — delta: 150%
🔴 DRIFT: CACHE_TTL (staging=30, production=300) — delta: 900%
🟡 DRIFT: LOG_LEVEL (staging=debug, production=warn) — acceptable difference
🟢 OK: DB_PORT, CACHE_PROVIDER, MAX_UPLOAD_SIZE_MB match
🟢 OK: API_RATE_LIMIT (only in production, not expected in staging)
⚠ SKIPPED: DB_HOST (expected to differ between environments)
📊 Drift score: 38/100 — HIGH DRIFT
Now you know: staging is missing the new dashboard feature flag, has a tiny connection pool, and a cache TTL that's 10x shorter than production. Before your next deploy, fix these mismatches.
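Under the hood, a drift check is a keyed comparison of two KEY=VALUE files. A minimal version, assuming simple `.env` syntax and an allow-list of keys that are expected to differ (the `EXPECTED_TO_DIFFER` set is my assumption, not configdrift's configuration):

```python
# Minimal sketch of environment-drift detection between two .env files --
# illustrative, not configdrift's implementation.

EXPECTED_TO_DIFFER = {"DB_HOST"}  # assumption: hostnames always differ per environment

def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines into a dict."""
    pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
    return {k.strip(): v.strip() for k, v in pairs}

def drift_report(baseline: str, target: str) -> list[str]:
    base, tgt = parse_env(baseline), parse_env(target)
    report = []
    for key in sorted(base):
        if key in EXPECTED_TO_DIFFER:
            continue  # skip keys that legitimately differ
        if key not in tgt:
            report.append(f"MISSING: {key}")
        elif base[key] != tgt[key]:
            report.append(f"DRIFT: {key} ({tgt[key]} vs {base[key]})")
    return report

prod = "DB_HOST=prod-db.internal\nDB_POOL_SIZE=25\nLOG_LEVEL=warn\nFEATURE_NEW_DASHBOARD=true"
staging = "DB_HOST=staging-db.internal\nDB_POOL_SIZE=10\nLOG_LEVEL=debug"
print("\n".join(drift_report(prod, staging)))
# → DRIFT: DB_POOL_SIZE (10 vs 25)
# → MISSING: FEATURE_NEW_DASHBOARD
# → DRIFT: LOG_LEVEL (debug vs warn)
```

The real tool adds the severity tiers and drift score you saw above; the keyed diff is the essential mechanism.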
Step 5: The Combined CI Pipeline
Goal: One GitHub Actions workflow that runs all four checks in parallel
Save this as .github/workflows/pre-deploy.yml in your repository:
name: Pre-Deploy Safety Checks

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  api-contract-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - run: pip install git+https://github.com/Coding-Dev-Tools/api-contract-guardian.git
      - run: api-contract-guardian check openapi/*.yaml --prev origin/main --fail-on breaking

  seed-data-validation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install git+https://github.com/Coding-Dev-Tools/json2sql.git
      - run: json2sql data/seeds/*.json --check-types --output /dev/null

  infra-blast-radius:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install git+https://github.com/Coding-Dev-Tools/deploydiff.git
      - run: deploydiff diff --plan terraform/tfplan.json --fail-on destruction --format markdown

  config-drift-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install git+https://github.com/Coding-Dev-Tools/configdrift.git
      - run: configdrift check --baseline .env.production --target .env.staging --fail-on drift

  combined-report:
    needs: [api-contract-check, seed-data-validation, infra-blast-radius, config-drift-check]
    runs-on: ubuntu-latest
    steps:
      - run: echo "✅ All safety checks passed — safe to deploy!"
Each job runs in parallel and completes in under 30 seconds. The combined-report job only runs if all four pass — giving you a single gate to put before your deploy step.
What's Next?
You now have a CI safety net that catches the four most expensive classes of production bugs before they ship. Here's how to go further:
- Custom rules — API Contract Guardian supports custom severity overrides and regex-based field matchers
- Team billing — Suite plans ($49/mo for all tools) include a dashboard that aggregates results across repos
- MCP integration — coming soon: run any tool as an MCP server for AI IDE integration (Cursor, VS Code Insiders)
Get Early Access
PyPI publishing is coming soon. Leave your email and we'll notify you the moment these tools ship.