Convert JSON to SQL in One Command
Every developer has done this: you get a JSON export from an API, a CSV someone converted to JSON, or a fixture file for a new feature — and you need to get it into a database. So you write a Python script, iterate through objects, manually escape strings, handle nulls, and eventually produce a SQL file. Then you do it again next week with different data.
There's a better way. json2sql is a lightweight CLI that converts JSON to SQL INSERT statements in one command. It handles nested objects, arrays, type inference, and multiple SQL dialects, all without a configuration file.
1. Install
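From PyPI:

```shell
$ pip install json2sql
```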
Or from GitHub:
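Substitute the actual repository path for the `<org>` placeholder below:

```shell
$ pip install git+https://github.com/<org>/json2sql.git
```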
The only requirements are Python 3.10+ and the Typer CLI framework.
2. Basic Conversion
Consider a simple JSON array of user objects saved as users.json:
```json
[
  {"id": 1, "name": "Alice Chen", "email": "alice@example.com", "active": true},
  {"id": 2, "name": "Bob Rivera", "email": "bob@example.com", "active": false},
  {"id": 3, "name": "Charlie Park", "email": "charlie@example.com", "active": true}
]
```
Convert it to SQL:
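Point the `convert` subcommand at the file; with no flags, json2sql writes generic SQL to stdout:

```shell
$ json2sql convert users.json
```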
Output:
```sql
CREATE TABLE users (
  id INTEGER,
  name TEXT,
  email TEXT,
  active BOOLEAN
);

INSERT INTO users (id, name, email, active) VALUES
  (1, 'Alice Chen', 'alice@example.com', TRUE),
  (2, 'Bob Rivera', 'bob@example.com', FALSE),
  (3, 'Charlie Park', 'charlie@example.com', TRUE);
```
json2sql inferred the column names and types and generated the CREATE TABLE definition automatically. Zero configuration.
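The tool's actual implementation isn't shown here, but the core of this inference step can be sketched in a few lines of Python: map each JSON value to a SQL type, and widen a column to TEXT when rows disagree (function names below are illustrative, not json2sql's API):

```python
# Illustrative sketch of JSON -> SQL type inference (not json2sql's actual code).
def sql_type(value):
    """Map a single JSON value to a generic SQL column type."""
    if isinstance(value, bool):  # check bool before int: bool is an int subclass
        return "BOOLEAN"
    if isinstance(value, int):
        return "INTEGER"
    if isinstance(value, float):
        return "NUMERIC"
    return "TEXT"  # strings, nulls, and anything else fall back to TEXT

def infer_columns(rows):
    """Infer {column: type} from a list of JSON objects, widening on conflict."""
    columns = {}
    for row in rows:
        for key, value in row.items():
            inferred = sql_type(value)
            if columns.get(key, inferred) != inferred:
                inferred = "TEXT"  # conflicting types across rows widen to TEXT
            columns[key] = inferred
    return columns

rows = [{"id": 1, "name": "Alice Chen", "active": True}]
print(infer_columns(rows))  # {'id': 'INTEGER', 'name': 'TEXT', 'active': 'BOOLEAN'}
```

Checking `bool` before `int` matters in Python, since `True` is an instance of `int`; a real converter would get subtly wrong schemas without it.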
3. Choose Your Dialect
Specify the target SQL dialect with --dialect:
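For example, targeting PostgreSQL:

```shell
$ json2sql convert users.json --dialect postgres
```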
PostgreSQL output uses SERIAL and a native BOOLEAN type:
```sql
CREATE TABLE users (
  id SERIAL,
  name TEXT,
  email TEXT,
  active BOOLEAN
);

INSERT INTO users (id, name, email, active) VALUES
  (1, 'Alice Chen', 'alice@example.com', TRUE),
  (2, 'Bob Rivera', 'bob@example.com', FALSE),
  (3, 'Charlie Park', 'charlie@example.com', TRUE);
```
MySQL dialect:
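The same file, targeting MySQL:

```shell
$ json2sql convert users.json --dialect mysql
```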
```sql
CREATE TABLE users (
  id INT,
  name VARCHAR(255),
  email VARCHAR(255),
  active TINYINT(1)
);

INSERT INTO users (id, name, email, active) VALUES
  (1, 'Alice Chen', 'alice@example.com', 1),
  (2, 'Bob Rivera', 'bob@example.com', 0),
  (3, 'Charlie Park', 'charlie@example.com', 1);
```
SQLite dialect:
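Likewise:

```shell
$ json2sql convert users.json --dialect sqlite
```

SQLite has no native BOOLEAN type, so expect boolean columns to come out as INTEGER 0/1 values.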
Available dialects: `postgres`, `mysql`, `sqlite`.
4. Nested JSON → Relational Tables
This is where json2sql shines. Consider an API response with nested objects:
```json
{
  "orders": [
    {
      "order_id": 1001,
      "customer": {
        "name": "Alice Chen",
        "email": "alice@example.com"
      },
      "items": [
        {"sku": "WIDGET-A", "qty": 2, "price": 19.99},
        {"sku": "GADGET-B", "qty": 1, "price": 49.99}
      ],
      "total": 89.97
    },
    {
      "order_id": 1002,
      "customer": {
        "name": "Bob Rivera",
        "email": "bob@example.com"
      },
      "items": [
        {"sku": "WIDGET-A", "qty": 5, "price": 19.99}
      ],
      "total": 99.95
    }
  ]
}
```
Convert with flattening:
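Assuming the response above is saved as orders.json:

```shell
$ json2sql convert orders.json --flatten
```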
This automatically produces two relational tables linked through the parent's order_id column:
```sql
-- Table: orders
CREATE TABLE orders (
  order_id INTEGER,
  customer_name TEXT,
  customer_email TEXT,
  total NUMERIC
);

INSERT INTO orders (order_id, customer_name, customer_email, total) VALUES
  (1001, 'Alice Chen', 'alice@example.com', 89.97),
  (1002, 'Bob Rivera', 'bob@example.com', 99.95);

-- Table: orders_items
CREATE TABLE orders_items (
  order_id INTEGER,
  sku TEXT,
  qty INTEGER,
  price NUMERIC
);

INSERT INTO orders_items (order_id, sku, qty, price) VALUES
  (1001, 'WIDGET-A', 2, 19.99),
  (1001, 'GADGET-B', 1, 49.99),
  (1002, 'WIDGET-A', 5, 19.99);
```
The nested customer object was flattened into columns. The items array was extracted into a child table with a foreign key back to the parent order_id. No manual schema design needed.
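That flattening rule can be sketched in Python: nested objects become prefixed columns on the parent row, and arrays of objects become child rows tagged with the parent's key. This is an illustrative sketch, not json2sql's actual implementation:

```python
# Illustrative sketch of the --flatten step (not json2sql's actual code).
def flatten(record, parent_key="order_id"):
    """Split one nested record into a flat parent row and child rows."""
    parent, children = {}, []
    for key, value in record.items():
        if isinstance(value, dict):
            # nested object -> prefixed columns on the parent row
            for sub_key, sub_value in value.items():
                parent[f"{key}_{sub_key}"] = sub_value
        elif isinstance(value, list):
            # array of objects -> child rows carrying the parent's key
            for item in value:
                children.append({parent_key: record[parent_key], **item})
        else:
            parent[key] = value
    return parent, children

order = {
    "order_id": 1001,
    "customer": {"name": "Alice Chen", "email": "alice@example.com"},
    "items": [{"sku": "WIDGET-A", "qty": 2, "price": 19.99}],
    "total": 89.97,
}
parent, items = flatten(order)
# parent -> {'order_id': 1001, 'customer_name': 'Alice Chen',
#            'customer_email': 'alice@example.com', 'total': 89.97}
# items  -> [{'order_id': 1001, 'sku': 'WIDGET-A', 'qty': 2, 'price': 19.99}]
```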
Use `--table` to override the auto-detected table name when your JSON wraps the rows in a key like `{"data": [...]}`:
```shell
$ json2sql convert api_response.json --table products
```
5. Pipe from stdin
json2sql reads from stdin, making it ideal for pipeline workflows:
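For example (the API URL and the `.data` response shape are placeholders):

```shell
$ curl -s https://api.example.com/v1/users \
    | jq '.data' \
    | json2sql convert --table users --dialect postgres
```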
This lets you chain json2sql with curl, jq, or any command that produces JSON — no intermediate files required.
6. CI/CD Integration
Use json2sql to prepare test data in CI pipelines:
```yaml
# .github/workflows/test-with-seed-data.yml
name: Test with seed data
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install json2sql
        run: pip install json2sql
      - name: Generate seed data
        run: |
          json2sql convert tests/fixtures/orders.json \
            --dialect sqlite -o seed.sql
          sqlite3 test.db < seed.sql
      - name: Run tests
        run: pytest --db=test.db
```
This pattern is ideal for:
- Integration tests that need realistic data volumes
- Database migrations with known-good seed data
- Demo environments that need reproducible database states
- ETL pipelines that transform JSON files before loading them into a warehouse
Commit JSON fixtures to `tests/fixtures/` and generate SQL at test time. This keeps test data human-readable (JSON) while still testing against real SQL schemas.
Command Reference
| Command | Description |
|---|---|
| `json2sql convert <file>` | Convert a JSON file to SQL INSERT statements |
| `--dialect postgres\|mysql\|sqlite` | Target SQL dialect (default: generic SQL) |
| `--table <name>` | Override the auto-detected table name |
| `--flatten` | Extract nested objects/arrays into relational tables |
| `-o <file>` | Write output to a file instead of stdout |
| stdin pipe | Read JSON from stdin (e.g., `curl ... \| json2sql convert`) |
When to Use json2sql vs Writing a Script
json2sql replaces hand-rolled conversion scripts in most common scenarios:
| Scenario | json2sql | Custom Script |
|---|---|---|
| One-off data import | 1 command, 0 files | Write, debug, run, delete |
| Nested JSON flattening | Auto-detected tables + FKs | Manual schema design |
| Multiple SQL dialects | --dialect postgres | Write per-dialect logic |
| CI pipeline seed data | pip install + 1 cmd | Script + deps + maintenance |
| Type inference | Built-in (int, bool, null, text, numeric) | Manual type mapping |
You still need a custom script when you need complex transformations, data enrichment, or joins across multiple source files. But for the common case — "I have JSON, I need SQL" — json2sql saves you 30 minutes of scripting every time.
Next Steps
json2sql is one of 10 tools in the Revenue Holdings suite. After you've got your data in SQL, check out the tools that protect it:
- API Contract Guardian — Catch breaking API schema changes in PRs
- ConfigDrift — Detect environment drift before it breaks production
- DeployDiff — See the full cost and blast radius of every infra change
- DeadCode — Remove unused exports, dead routes, and orphaned CSS