How to Improve SQLite INSERT Performance

Posted by Kyle Hankinson December 5, 2024


SQLite is fast for reads, but naive inserts can be surprisingly slow. A few configuration changes can improve insert performance by 50x or more.

The Problem

By default, each individual INSERT in SQLite is wrapped in its own transaction. Each transaction forces a disk sync — a slow operation. Inserting 10,000 rows with 10,000 separate INSERTs means 10,000 disk syncs.

1. Wrap Inserts in a Transaction

The single biggest improvement:

BEGIN TRANSACTION;
INSERT INTO logs (message, level) VALUES ('Starting', 'INFO');
INSERT INTO logs (message, level) VALUES ('Processing', 'DEBUG');
-- ... thousands more inserts
INSERT INTO logs (message, level) VALUES ('Done', 'INFO');
COMMIT;

This changes 10,000 disk syncs to just one. In benchmarks, this alone can improve insert speed from ~85 inserts/second to ~50,000 inserts/second.
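The pattern above can be sketched with Python's built-in `sqlite3` module (the table and data are illustrative; `isolation_level=None` puts the module in autocommit mode so that BEGIN/COMMIT are issued explicitly):

```python
import sqlite3

# Minimal sketch: 10,000 inserts inside one explicit transaction.
# On a real file database this means a single commit (one disk sync)
# instead of 10,000.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE logs (message TEXT, level TEXT)")

rows = [("message %d" % i, "INFO") for i in range(10000)]

conn.execute("BEGIN")
for msg, level in rows:
    conn.execute("INSERT INTO logs (message, level) VALUES (?, ?)", (msg, level))
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(count)  # 10000
```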

2. Use WAL Mode

Write-Ahead Logging mode allows concurrent reads during writes and batches disk writes more efficiently:

PRAGMA journal_mode = WAL;

WAL mode is persistent — you only need to set it once per database file. Benefits:

  • Readers do not block writers
  • Writers do not block readers
  • Better write performance due to sequential I/O
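A quick sketch of setting and verifying WAL mode from Python (a file-backed database is needed, since in-memory databases always report journal_mode = "memory"; the path is illustrative):

```python
import os
import sqlite3
import tempfile

# Minimal sketch: enable WAL and confirm it persists in the file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# The PRAGMA returns the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]
print(mode)  # wal
conn.close()

# A fresh connection to the same file sees WAL without setting it again.
conn2 = sqlite3.connect(path)
mode2 = conn2.execute("PRAGMA journal_mode").fetchone()[0]
print(mode2)  # wal
conn2.close()
```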

3. Reduce Synchronous Level

PRAGMA synchronous = NORMAL;
  • FULL (the default): maximum durability, slowest
  • NORMAL: safe with WAL mode, fast
  • OFF: risk of corruption on a crash, fastest

NORMAL is a good balance. With WAL mode it is safe against application and operating-system crashes; a power loss may roll back the most recently committed transactions, but it will not corrupt the database.
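One detail worth knowing: when read back, PRAGMA synchronous returns an integer rather than a keyword (0 = OFF, 1 = NORMAL, 2 = FULL, 3 = EXTRA). A small sketch using a file-backed database (the path is illustrative):

```python
import os
import sqlite3
import tempfile

# Minimal sketch: set synchronous by name, read it back as a number.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA synchronous = NORMAL")

# 0 = OFF, 1 = NORMAL, 2 = FULL, 3 = EXTRA
level = conn.execute("PRAGMA synchronous").fetchone()[0]
print(level)  # 1
conn.close()
```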

4. Use Prepared Statements

Parsing SQL is expensive. Prepare the statement once, then bind and execute repeatedly:

# Python example
import sqlite3

# isolation_level=None gives autocommit mode, so the explicit
# BEGIN/COMMIT below control the transaction (the file name is
# illustrative)
conn = sqlite3.connect("app.db", isolation_level=None)
cursor = conn.cursor()
stmt = "INSERT INTO logs (message, level) VALUES (?, ?)"

conn.execute("BEGIN")
for msg, level in data:
    cursor.execute(stmt, (msg, level))  # sqlite3 caches the prepared statement
conn.execute("COMMIT")

// Swift example (using the SQLite.swift library)
let stmt = try db.prepare("INSERT INTO logs (message, level) VALUES (?, ?)")
try db.transaction {
    for (msg, level) in data {
        try stmt.run(msg, level)
    }
}
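In Python, `executemany()` offers the same prepare-once pattern with less code — the statement is prepared a single time and then bound and run for every tuple in the sequence:

```python
import sqlite3

# Minimal sketch: executemany() reuses one prepared statement for
# every row. Table and data are illustrative.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE logs (message TEXT, level TEXT)")

data = [("Starting", "INFO"), ("Processing", "DEBUG"), ("Done", "INFO")]

conn.execute("BEGIN")
conn.executemany("INSERT INTO logs (message, level) VALUES (?, ?)", data)
conn.execute("COMMIT")

n = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(n)  # 3
```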

5. Increase Cache Size

The default page cache is 2MB. For large imports, increase it:

PRAGMA cache_size = -20000;  -- 20MB (negative = kilobytes)

More cache means fewer disk reads during the import.

6. Use Memory-Mapped I/O

PRAGMA mmap_size = 268435456;  -- 256MB

Memory mapping lets the OS handle caching more efficiently for large databases.

7. Multi-Row INSERT

SQLite 3.7.11+ supports multi-row VALUES:

INSERT INTO logs (message, level) VALUES
    ('Starting', 'INFO'),
    ('Processing', 'DEBUG'),
    ('Done', 'INFO');

This reduces parsing overhead compared to individual INSERT statements.
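A multi-row INSERT can be built programmatically, as in this Python sketch. Note that the number of bound parameters per statement is capped by SQLITE_MAX_VARIABLE_NUMBER (32,766 since SQLite 3.32, 999 in older builds), so very large imports still need to be chunked:

```python
import sqlite3

# Minimal sketch: build one multi-row INSERT from a list of tuples.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (message TEXT, level TEXT)")

rows = [("Starting", "INFO"), ("Processing", "DEBUG"), ("Done", "INFO")]

# One "(?, ?)" group per row, then flatten the tuples into one
# parameter list. Keep len(params) under SQLITE_MAX_VARIABLE_NUMBER.
placeholders = ", ".join(["(?, ?)"] * len(rows))
sql = "INSERT INTO logs (message, level) VALUES " + placeholders
params = [value for row in rows for value in row]
conn.execute(sql, params)

n = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(n)  # 3
```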

Benchmark Summary

Inserting 100,000 rows (typical results):

Configuration                   Time       Rows/sec
Default (no transaction)        ~20 min    ~85
With transaction                ~1.5 sec   ~65,000
Transaction + WAL               ~0.8 sec   ~125,000
Transaction + WAL + prepared    ~0.5 sec   ~200,000
All optimizations               ~0.3 sec   ~300,000+

Complete Setup

For maximum insert performance, run these PRAGMAs at connection time:

PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -20000;
PRAGMA mmap_size = 268435456;
PRAGMA temp_store = MEMORY;

Then wrap all inserts in explicit transactions using prepared statements.
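Putting it all together, here is one possible Python sketch (the helper name and table are illustrative):

```python
import sqlite3

# Minimal sketch: apply the PRAGMAs at connection time, then
# bulk-insert inside one explicit transaction.
def fast_connection(path):
    conn = sqlite3.connect(path, isolation_level=None)  # explicit transactions
    conn.execute("PRAGMA journal_mode = WAL")
    conn.execute("PRAGMA synchronous = NORMAL")
    conn.execute("PRAGMA cache_size = -20000")    # ~20MB page cache
    conn.execute("PRAGMA mmap_size = 268435456")  # 256MB memory map
    conn.execute("PRAGMA temp_store = MEMORY")
    return conn

conn = fast_connection(":memory:")
conn.execute("CREATE TABLE logs (message TEXT, level TEXT)")

conn.execute("BEGIN")
conn.executemany(
    "INSERT INTO logs (message, level) VALUES (?, ?)",
    [("row %d" % i, "INFO") for i in range(1000)],
)
conn.execute("COMMIT")

n = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(n)  # 1000
```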

Using SQLPro Studio

SQLPro Studio opens SQLite database files directly — just open any .db, .sqlite, or .sqlite3 file. You can run PRAGMA commands and test insert performance in the query editor, and import data from CSV or JSON files using the built-in import feature.


Tags: SQLite

About the author: Kyle Hankinson is the founder and sole developer of SQLPro Studio and the Hankinsoft Development suite of database tools. He has been building native macOS and iOS applications since 2010.

Try SQLPro Studio — A powerful database manager for MySQL, PostgreSQL, Microsoft SQL Server, SQLite, Oracle, and Snowflake. Available on macOS, iOS, and Windows.
