Tutorial

Convert JSON to SQL in One Command

Stop writing seed scripts. A single CLI command turns any JSON file — flat or nested — into clean SQL INSERT statements for PostgreSQL, MySQL, or SQLite.
May 15, 2026 · 6 min read · Revenue Holdings

Every developer has done this: you get a JSON export from an API, a CSV someone converted to JSON, or a fixture file for a new feature — and you need to get it into a database. So you write a Python script, iterate through objects, manually escape strings, handle nulls, and eventually produce a SQL file. Then you do it again next week with different data.
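That throwaway script usually looks something like this. A minimal sketch for comparison (function names are illustrative, not part of any library):

```python
import json

def to_sql_literal(value):
    """Render a Python value as a SQL literal, escaping single quotes."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):  # check bool before int: bool subclasses int
        return "TRUE" if value else "FALSE"
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"

def rows_to_inserts(table, rows):
    """Emit one INSERT statement per object in a JSON array."""
    statements = []
    for row in rows:
        cols = ", ".join(row)
        vals = ", ".join(to_sql_literal(v) for v in row.values())
        statements.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return statements

rows = json.loads('[{"id": 1, "name": "O\'Brien", "active": true}]')
print("\n".join(rows_to_inserts("users", rows)))
# INSERT INTO users (id, name, active) VALUES (1, 'O''Brien', TRUE);
```

Twenty lines before you've handled dates, nested objects, or a second dialect. That's the boilerplate json2sql is meant to eliminate.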

There's a better way. json2sql is a lightweight CLI that converts JSON to SQL INSERT statements in one command. It handles nested objects, arrays, type inference, and multiple SQL dialects, all without a configuration file.

┌──────────────┐      ┌────────────────┐      ┌───────────────────────────┐
│ data.json    │      │                │      │ INSERT INTO users         │
│ [            │      │    json2sql    │      │   (id, name, email)       │
│  {"id": 1,   │ ──→  │    convert     │ ──→  │ VALUES (1, 'Alice', ...)  │
│   "name":    │      │   data.json    │      │ INSERT INTO users         │
│   "Alice"}   │      │                │      │ VALUES (2, 'Bob', ...)    │
│ ]            │      │                │      │                           │
└──────────────┘      └────────────────┘      └───────────────────────────┘
⚡ Already have json2sql? Skip to nested JSON handling or CI pipeline integration.

1. Install

$ pip install json2sql

Or from GitHub:

$ pip install git+https://github.com/Coding-Dev-Tools/json2sql.git

The only requirements are Python 3.10+ and the Typer CLI framework.

2. Basic Conversion

Consider a simple JSON array of user objects saved as users.json:

[
  {"id": 1, "name": "Alice Chen", "email": "alice@example.com", "active": true},
  {"id": 2, "name": "Bob Rivera", "email": "bob@example.com", "active": false},
  {"id": 3, "name": "Charlie Park", "email": "charlie@example.com", "active": true}
]

Convert it to SQL:

$ json2sql convert users.json

Output:

CREATE TABLE users (
  id INTEGER,
  name TEXT,
  email TEXT,
  active BOOLEAN
);

INSERT INTO users (id, name, email, active) VALUES
  (1, 'Alice Chen', 'alice@example.com', TRUE),
  (2, 'Bob Rivera', 'bob@example.com', FALSE),
  (3, 'Charlie Park', 'charlie@example.com', TRUE);

json2sql automatically inferred the column names and types and generated the table definition. Zero configuration.
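Type inference of this kind is straightforward to sketch: scan every value in a column and pick the narrowest SQL type that fits all of them. The snippet below illustrates the general technique, not json2sql's actual internals:

```python
def infer_sql_type(values):
    """Infer a column type from the non-null values seen in that column."""
    non_null = [v for v in values if v is not None]
    if not non_null:
        return "TEXT"  # an all-NULL column falls back to TEXT
    if all(isinstance(v, bool) for v in non_null):
        return "BOOLEAN"
    if all(isinstance(v, int) and not isinstance(v, bool) for v in non_null):
        return "INTEGER"
    if all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in non_null):
        return "NUMERIC"
    return "TEXT"

def infer_schema(rows):
    """Map each column name to an inferred SQL type across all rows."""
    columns = {}
    for row in rows:
        for key in row:
            columns.setdefault(key, [])
    for row in rows:
        for key in columns:
            columns[key].append(row.get(key))  # missing keys count as NULL
    return {key: infer_sql_type(vals) for key, vals in columns.items()}

users = [
    {"id": 1, "name": "Alice Chen", "active": True},
    {"id": 2, "name": "Bob Rivera", "active": False},
]
print(infer_schema(users))
# {'id': 'INTEGER', 'name': 'TEXT', 'active': 'BOOLEAN'}
```

Note the bool-before-int ordering: in Python, `True` is an instance of `int`, so a boolean check must come first or every flag column would be typed INTEGER.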

3. Choose Your Dialect

Specify the target SQL dialect with --dialect:

$ json2sql convert users.json --dialect postgres -o users.sql

PostgreSQL output uses SERIAL and BOOLEAN types:

CREATE TABLE users (
  id SERIAL,
  name TEXT,
  email TEXT,
  active BOOLEAN
);

INSERT INTO users (id, name, email, active) VALUES
  (1, 'Alice Chen', 'alice@example.com', TRUE),
  (2, 'Bob Rivera', 'bob@example.com', FALSE),
  (3, 'Charlie Park', 'charlie@example.com', TRUE);

MySQL dialect:

$ json2sql convert users.json --dialect mysql
CREATE TABLE users (
  id INT,
  name VARCHAR(255),
  email VARCHAR(255),
  active TINYINT(1)
);

INSERT INTO users (id, name, email, active) VALUES
  (1, 'Alice Chen', 'alice@example.com', 1),
  (2, 'Bob Rivera', 'bob@example.com', 0),
  (3, 'Charlie Park', 'charlie@example.com', 1);

SQLite dialect:

$ json2sql convert users.json --dialect sqlite -o seed.sql && sqlite3 test.db < seed.sql

Available dialects: postgres, mysql, sqlite.
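The dialect differences visible in the outputs above come down to a handful of rendering rules. A minimal sketch of that mapping (illustrative only, based on the examples shown, not json2sql's source):

```python
def render_bool(value, dialect):
    """Booleans render as TRUE/FALSE in Postgres/SQLite but 1/0 in MySQL's TINYINT(1)."""
    if dialect == "mysql":
        return "1" if value else "0"
    return "TRUE" if value else "FALSE"

def text_type(dialect):
    """MySQL conventionally uses VARCHAR(255) where Postgres and SQLite use TEXT."""
    return "VARCHAR(255)" if dialect == "mysql" else "TEXT"

def int_type(dialect):
    """Integer column types per dialect, matching the outputs above."""
    return {"postgres": "SERIAL", "mysql": "INT"}.get(dialect, "INTEGER")

print(render_bool(True, "postgres"), render_bool(True, "mysql"))  # TRUE 1
```

Centralizing these decisions in one place is what lets a converter target a new dialect without touching the rest of the pipeline.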

4. Nested JSON → Relational Tables

This is where json2sql shines. Consider an API response with nested objects:

{
  "orders": [
    {
      "order_id": 1001,
      "customer": {
        "name": "Alice Chen",
        "email": "alice@example.com"
      },
      "items": [
        {"sku": "WIDGET-A", "qty": 2, "price": 19.99},
        {"sku": "GADGET-B", "qty": 1, "price": 49.99}
      ],
      "total": 89.97
    },
    {
      "order_id": 1002,
      "customer": {
        "name": "Bob Rivera",
        "email": "bob@example.com"
      },
      "items": [
        {"sku": "WIDGET-A", "qty": 5, "price": 19.99}
      ],
      "total": 99.95
    }
  ]
}

Convert with flattening:

$ json2sql convert orders.json --flatten --dialect postgres

This automatically produces two relational tables linked by a foreign key:

-- Table: orders
CREATE TABLE orders (
  order_id INTEGER,
  customer_name TEXT,
  customer_email TEXT,
  total NUMERIC
);

INSERT INTO orders (order_id, customer_name, customer_email, total) VALUES
  (1001, 'Alice Chen', 'alice@example.com', 89.97),
  (1002, 'Bob Rivera', 'bob@example.com', 99.95);

-- Table: orders_items
CREATE TABLE orders_items (
  order_id INTEGER,
  sku TEXT,
  qty INTEGER,
  price NUMERIC
);

INSERT INTO orders_items (order_id, sku, qty, price) VALUES
  (1001, 'WIDGET-A', 2, 19.99),
  (1001, 'GADGET-B', 1, 49.99),
  (1002, 'WIDGET-A', 5, 19.99);

The nested customer object was flattened into columns. The items array was extracted into a child table with a foreign key back to the parent order_id. No manual schema design needed.
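The flattening strategy itself (underscore-prefixed columns for nested objects, child tables for arrays) can be sketched in a few lines. This is an illustration of the technique, not the tool's source:

```python
def flatten(record, parent_key="", sep="_"):
    """Flatten nested objects into prefixed columns; collect arrays separately."""
    flat, children = {}, {}
    for key, value in record.items():
        name = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            sub_flat, sub_children = flatten(value, name, sep)
            flat.update(sub_flat)
            children.update(sub_children)
        elif isinstance(value, list):
            children[name] = value  # arrays become child tables
        else:
            flat[name] = value
    return flat, children

order = {
    "order_id": 1001,
    "customer": {"name": "Alice Chen", "email": "alice@example.com"},
    "items": [{"sku": "WIDGET-A", "qty": 2}],
    "total": 89.97,
}
parent, child_tables = flatten(order)
# parent: {'order_id': 1001, 'customer_name': 'Alice Chen',
#          'customer_email': 'alice@example.com', 'total': 89.97}

# Child rows carry the parent's order_id as a foreign key:
child_rows = [{"order_id": parent["order_id"], **item}
              for item in child_tables["items"]]
```

The same two rules (recurse into objects, split off arrays) handle arbitrarily deep nesting.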

💡 Pro tip: Use --table to override the auto-detected table name when your JSON wraps data in a key like {"data": [...]}:
$ json2sql convert api_response.json --table products

5. Pipe from stdin

json2sql reads from stdin, making it ideal for pipeline workflows:

$ curl -s https://api.example.com/users | json2sql convert --dialect postgres --table users
$ cat data.json | json2sql convert --dialect sqlite -o seed.sql

This lets you chain json2sql with curl, jq, or any command that produces JSON — no intermediate files required.
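A stdin-to-stdout converter is just a Unix filter: read JSON from the input stream, write SQL to the output stream. A minimal Python sketch of that pattern (illustrative, not json2sql's code):

```python
import io
import json

def sql_literal(value):
    """Render one JSON value as a SQL literal (bool before int: bool subclasses int)."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):
        return "TRUE" if value else "FALSE"
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"

def json_stream_to_sql(stream, table):
    """Read a JSON array from any file-like stream and yield INSERT statements."""
    for row in json.load(stream):
        cols = ", ".join(row)
        vals = ", ".join(sql_literal(v) for v in row.values())
        yield f"INSERT INTO {table} ({cols}) VALUES ({vals});"

# A real filter would pass sys.stdin; StringIO stands in for the pipe here.
fake_stdin = io.StringIO('[{"id": 1, "name": "Alice"}]')
for stmt in json_stream_to_sql(fake_stdin, "users"):
    print(stmt)  # INSERT INTO users (id, name) VALUES (1, 'Alice');
```

Because the converter only sees a stream, the same code works whether the JSON came from a file, curl, or a jq transformation upstream.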

6. CI/CD Integration

Use json2sql to prepare test data in CI pipelines:

# .github/workflows/test-with-seed-data.yml
name: Test with seed data

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install json2sql
        run: pip install json2sql

      - name: Generate seed data
        run: |
          json2sql convert tests/fixtures/orders.json \
            --dialect sqlite -o seed.sql
          sqlite3 test.db < seed.sql

      - name: Run tests
        run: pytest --db=test.db

This pattern is ideal for integration tests that need realistic, reproducible seed data on every run.

💡 Pro tip: Store your JSON fixtures in tests/fixtures/ and generate SQL at test time. This keeps test data human-readable (JSON) while still testing against real SQL schemas.

Command Reference

Command                              Description
json2sql convert <file>              Convert a JSON file to SQL INSERT statements
--dialect postgres|mysql|sqlite      Target SQL dialect (default: generic SQL)
--table <name>                       Override the auto-detected table name
--flatten                            Extract nested objects/arrays into relational tables
-o <file>                            Write output to a file instead of stdout
stdin pipe                           Read JSON from stdin (e.g., curl ... | json2sql convert)

When to Use json2sql vs Writing a Script

json2sql replaces hand-rolled conversion scripts in most common scenarios:

Scenario                  json2sql                                    Custom script
One-off data import       1 command, 0 files                          Write, debug, run, delete
Nested JSON flattening    Auto-detected tables + FKs                  Manual schema design
Multiple SQL dialects     --dialect postgres                          Write per-dialect logic
CI pipeline seed data     pip install + 1 cmd                         Script + deps + maintenance
Type inference            Built-in (int, bool, null, text, numeric)   Manual type mapping

You still need a custom script when you need complex transformations, data enrichment, or joins across multiple source files. But for the common case — "I have JSON, I need SQL" — json2sql saves you 30 minutes of scripting every time.


Next Steps

json2sql is one of 10 tools in the Revenue Holdings suite. After you've got your data into SQL, check out the rest of the suite.

🔌 All tools work with click-to-mcp: Turn any Revenue Holdings CLI into an MCP server so Claude Code, Codex, and Cursor can use them directly. Learn more →

Read the full docs →
