How to Improve SQLite INSERT Performance
Posted by Kyle Hankinson December 5, 2024
SQLite is fast for reads, but naive inserts can be surprisingly slow. A few configuration changes can improve insert performance by 50x or more.
The Problem
By default, each individual INSERT in SQLite is wrapped in its own transaction. Each transaction forces a disk sync — a slow operation. Inserting 10,000 rows with 10,000 separate INSERTs means 10,000 disk syncs.
1. Wrap Inserts in a Transaction
The single biggest improvement:
BEGIN TRANSACTION;
INSERT INTO logs (message, level) VALUES ('Starting', 'INFO');
INSERT INTO logs (message, level) VALUES ('Processing', 'DEBUG');
-- ... thousands more inserts
INSERT INTO logs (message, level) VALUES ('Done', 'INFO');
COMMIT;
This changes 10,000 disk syncs to just one. In benchmarks, this alone can improve insert speed from ~85 inserts/second to ~65,000 inserts/second (see the benchmark summary below).
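The effect is easy to reproduce with Python's built-in sqlite3 module. A minimal sketch (table name and row count are illustrative; `isolation_level=None` puts the connection in autocommit mode so the BEGIN/COMMIT pair is ours):

```python
import sqlite3

# isolation_level=None disables implicit transactions; we issue our own
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE logs (message TEXT, level TEXT)")

conn.execute("BEGIN")
for i in range(10_000):
    conn.execute(
        "INSERT INTO logs (message, level) VALUES (?, ?)",
        (f"event {i}", "INFO"),
    )
conn.execute("COMMIT")  # one sync covers all 10,000 rows

count = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
```

On a file-backed database, removing the BEGIN/COMMIT pair makes the same loop dramatically slower, since every INSERT then commits on its own.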
2. Use WAL Mode
Write-Ahead Logging mode allows concurrent reads during writes and batches disk writes more efficiently:
PRAGMA journal_mode = WAL;
WAL mode is persistent — you only need to set it once per database file. Benefits:
- Readers do not block writers
- Writers do not block readers
- Better write performance due to sequential I/O
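The PRAGMA returns the journal mode now in effect, which makes it easy to confirm the switch took. A quick Python check (WAL needs a file-backed database; an in-memory one reports `memory`):

```python
import os
import sqlite3
import tempfile

# WAL only applies to file-backed databases
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]
```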
3. Reduce Synchronous Level
PRAGMA synchronous = NORMAL;
| Value | Safety | Speed |
|---|---|---|
| FULL (default) | Maximum durability | Slowest |
| NORMAL | Safe with WAL mode | Fast |
| OFF | Risk of corruption on crash | Fastest |
NORMAL is a good balance — with WAL mode, it is safe against application crashes (but not power loss during a write).
4. Use Prepared Statements
Parsing SQL is expensive. Prepare the statement once, then bind and execute repeatedly:
# Python example
cursor = conn.cursor()
stmt = "INSERT INTO logs (message, level) VALUES (?, ?)"
conn.execute("BEGIN")
for msg, level in data:
    cursor.execute(stmt, (msg, level))
conn.execute("COMMIT")
// Swift example
let stmt = try db.prepare("INSERT INTO logs (message, level) VALUES (?, ?)")
try db.transaction {
    for (msg, level) in data {
        try stmt.run(msg, level)
    }
}
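In Python, `sqlite3.Cursor.executemany` does the prepare-once, bind-many loop for you; combined with an explicit transaction it is the idiomatic bulk-insert path (schema reused from the earlier examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE logs (message TEXT, level TEXT)")

data = [("Starting", "INFO"), ("Processing", "DEBUG"), ("Done", "INFO")]

conn.execute("BEGIN")
# The statement is prepared once and re-bound for each tuple
conn.executemany("INSERT INTO logs (message, level) VALUES (?, ?)", data)
conn.execute("COMMIT")

rows = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
```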
5. Increase Cache Size
The default page cache is 2MB. For large imports, increase it:
PRAGMA cache_size = -20000; -- 20MB (negative = kilobytes)
More cache means fewer disk reads during the import.
6. Use Memory-Mapped I/O
PRAGMA mmap_size = 268435456; -- 256MB
Memory mapping lets the OS handle caching more efficiently for large databases.
7. Multi-Row INSERT
SQLite 3.7.11+ supports multi-row VALUES:
INSERT INTO logs (message, level) VALUES
    ('Starting', 'INFO'),
    ('Processing', 'DEBUG'),
    ('Done', 'INFO');
This reduces parsing overhead compared to individual INSERT statements.
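When the rows come from application data, the multi-row form can still be parameterized by generating one `(?, ?)` group per row. A sketch (the helper logic is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (message TEXT, level TEXT)")

rows = [("Starting", "INFO"), ("Processing", "DEBUG"), ("Done", "INFO")]
# One "(?, ?)" group per row, with the values flattened to match
placeholders = ", ".join(["(?, ?)"] * len(rows))
flat = [value for row in rows for value in row]
conn.execute(f"INSERT INTO logs (message, level) VALUES {placeholders}", flat)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
```

Keep batches modest: SQLite caps the number of bound variables per statement (999 in older builds, 32,766 in recent ones).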
Benchmark Summary
Inserting 100,000 rows (typical results):
| Configuration | Time | Rows/sec |
|---|---|---|
| Default (no transaction) | ~20 min | ~85 |
| With transaction | ~1.5 sec | ~65,000 |
| Transaction + WAL | ~0.8 sec | ~125,000 |
| Transaction + WAL + prepared | ~0.5 sec | ~200,000 |
| All optimizations | ~0.3 sec | ~300,000+ |
Complete Setup
For maximum insert performance, run these PRAGMAs at connection time:
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -20000;
PRAGMA mmap_size = 268435456;
PRAGMA temp_store = MEMORY;
Then wrap all inserts in explicit transactions using prepared statements.
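Putting the whole recipe together, a sketch of a connection helper (the function name is ours; the PRAGMAs are exactly the ones above):

```python
import sqlite3

def fast_connection(path):
    """Open a connection tuned for bulk inserts."""
    conn = sqlite3.connect(path, isolation_level=None)
    conn.execute("PRAGMA journal_mode = WAL")
    conn.execute("PRAGMA synchronous = NORMAL")
    conn.execute("PRAGMA cache_size = -20000")
    conn.execute("PRAGMA mmap_size = 268435456")
    conn.execute("PRAGMA temp_store = MEMORY")
    return conn

conn = fast_connection(":memory:")
conn.execute("CREATE TABLE logs (message TEXT, level TEXT)")
conn.execute("BEGIN")
conn.executemany(
    "INSERT INTO logs (message, level) VALUES (?, ?)",
    [(f"event {i}", "INFO") for i in range(1_000)],
)
conn.execute("COMMIT")
n = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
```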
Using SQLPro Studio
Tags: SQLite
How to Copy or Clone a Table in SQL
Posted by Kyle Hankinson October 24, 2024
Cloning a table is useful for backups, testing, and schema experiments. Every database can do it, but what gets copied varies.
MySQL
Structure + Data
CREATE TABLE users_copy AS SELECT * FROM users;
Warning: This copies data and column definitions but NOT indexes, primary keys, auto-increment, or foreign keys.
Structure Only (No Data)
CREATE TABLE users_copy LIKE users;
LIKE copies the full structure including indexes and auto-increment. To then copy data:
INSERT INTO users_copy SELECT * FROM users;
Partial Copy
CREATE TABLE active_users AS
SELECT * FROM users WHERE status = 'active';
PostgreSQL
Structure + Data
CREATE TABLE users_copy AS SELECT * FROM users;
Like MySQL, this does not copy indexes, constraints, or defaults.
Structure with Constraints
CREATE TABLE users_copy (LIKE users INCLUDING ALL);
INCLUDING ALL copies defaults, constraints, indexes, comments, and identity columns. You can be selective:
CREATE TABLE users_copy (LIKE users INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
Then copy data separately:
INSERT INTO users_copy SELECT * FROM users;
SQL Server
Structure + Data
SELECT * INTO users_copy FROM users;
SELECT INTO creates the new table automatically. It copies column definitions and data but not indexes, constraints, or triggers.
Structure Only
SELECT * INTO users_copy FROM users WHERE 1 = 0;
The WHERE 1 = 0 ensures no rows are copied.
With Specific Columns or Filters
SELECT id, name, email INTO active_users
FROM users WHERE status = 'active';
Oracle
-- Structure + Data
CREATE TABLE users_copy AS SELECT * FROM users;
-- Structure Only
CREATE TABLE users_copy AS SELECT * FROM users WHERE 1 = 0;
To get the full DDL including indexes and constraints:
SELECT DBMS_METADATA.GET_DDL('TABLE', 'USERS') FROM DUAL;
Then edit the output to change the table name.
SQLite
-- Structure + Data
CREATE TABLE users_copy AS SELECT * FROM users;
-- Structure Only
CREATE TABLE users_copy AS SELECT * FROM users WHERE 0;
SQLite's CREATE TABLE AS does not copy primary keys or autoincrement.
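One way to keep the full definition is to read the original DDL out of sqlite_master, rename it, and copy the rows separately. A Python sketch (the naive `str.replace` rename assumes the table name appears first in the DDL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])

# sqlite_master stores the exact CREATE TABLE statement, keys and all
ddl = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'users'"
).fetchone()[0]
conn.execute(ddl.replace("users", "users_clone", 1))  # crude rename, fine here
conn.execute("INSERT INTO users_clone SELECT * FROM users")

n = conn.execute("SELECT COUNT(*) FROM users_clone").fetchone()[0]
```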
What Gets Copied
| Feature | CREATE AS SELECT | LIKE / Structure Copy |
|---|---|---|
| Column names & types | Yes | Yes |
| Data | Yes | No |
| Primary key | No | Yes (MySQL LIKE, PG INCLUDING ALL) |
| Indexes | No | Yes (MySQL LIKE, PG INCLUDING ALL) |
| Auto-increment | No | Yes (MySQL LIKE, PG INCLUDING ALL) |
| Foreign keys | No | No (must add manually) |
| Triggers | No | No |
| Constraints | No | Yes (PG INCLUDING ALL) |
Using SQLPro Studio
Tags: MySQL PostgreSQL Microsoft SQL Server SQLite
How to Check if a Table Exists in SQL
Posted by Kyle Hankinson February 15, 2024
Writing idempotent SQL scripts — ones that can run multiple times without errors — requires checking if tables exist before creating or dropping them.
IF NOT EXISTS / IF EXISTS
MySQL
CREATE TABLE IF NOT EXISTS users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
DROP TABLE IF EXISTS temp_data;
PostgreSQL
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
DROP TABLE IF EXISTS temp_data;
SQLite
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL
);
DROP TABLE IF EXISTS temp_data;
SQL Server (2016+)
-- SQL Server 2016+ supports DROP IF EXISTS
DROP TABLE IF EXISTS temp_data;
-- CREATE IF NOT EXISTS is NOT supported — use a check instead
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'users')
BEGIN
    CREATE TABLE users (
        id INT IDENTITY(1,1) PRIMARY KEY,
        name VARCHAR(100) NOT NULL
    );
END;
SQL Server (older versions)
IF OBJECT_ID('dbo.temp_data', 'U') IS NOT NULL
    DROP TABLE dbo.temp_data;
'U' means user table. Use 'V' for views, 'P' for procedures.
Oracle
Oracle does not have IF NOT EXISTS syntax. Use PL/SQL exception handling:
BEGIN
    EXECUTE IMMEDIATE 'DROP TABLE temp_data';
EXCEPTION
    WHEN OTHERS THEN
        IF SQLCODE != -942 THEN RAISE; END IF; -- -942 = table does not exist
END;
Or check the data dictionary first:
DECLARE
    v_count NUMBER;
BEGIN
    SELECT COUNT(*) INTO v_count FROM user_tables WHERE table_name = 'USERS';
    IF v_count = 0 THEN
        EXECUTE IMMEDIATE 'CREATE TABLE users (id NUMBER PRIMARY KEY, name VARCHAR2(100))';
    END IF;
END;
Checking If a Table Exists in a Query
MySQL
SELECT COUNT(*)
FROM information_schema.tables
WHERE table_schema = DATABASE()
AND table_name = 'users';
PostgreSQL
SELECT EXISTS (
    SELECT FROM pg_tables
    WHERE schemaname = 'public'
    AND tablename = 'users'
);
SQL Server
SELECT CASE
    WHEN OBJECT_ID('dbo.users', 'U') IS NOT NULL THEN 1
    ELSE 0
END AS table_exists;
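SQLite
SQLite has no information_schema; its sqlite_master catalog plays the same role. A sketch from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")

def table_exists(conn, name):
    # sqlite_master lists every table, index, view, and trigger
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
        (name,),
    ).fetchone()
    return row is not None

exists = table_exists(conn, "users")
missing = table_exists(conn, "no_such_table")
```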
Quick Reference
| Database | CREATE IF NOT EXISTS | DROP IF EXISTS |
|---|---|---|
| MySQL | Yes | Yes |
| PostgreSQL | Yes | Yes |
| SQLite | Yes | Yes |
| SQL Server 2016+ | No (use IF NOT EXISTS check) | Yes |
| SQL Server < 2016 | No (use OBJECT_ID check) | No (use OBJECT_ID check) |
| Oracle | No (use PL/SQL) | No (use PL/SQL) |
Checking for Columns
Before adding a column, check if it already exists:
-- MySQL / PostgreSQL
SELECT COUNT(*) FROM information_schema.columns
WHERE table_name = 'users' AND column_name = 'email';
-- SQL Server
IF COL_LENGTH('dbo.users', 'email') IS NULL
    ALTER TABLE dbo.users ADD email VARCHAR(255);
Using SQLPro Studio
Tags: MySQL PostgreSQL Microsoft SQL Server SQLite
How to Reset Auto-Increment and Identity Columns in SQL
Posted by Kyle Hankinson July 18, 2023
After deleting test data or truncating a table, you often want to reset the auto-increment counter so IDs start from 1 again. Every database handles this differently.
MySQL
-- Reset to a specific value
ALTER TABLE users AUTO_INCREMENT = 1;
-- Or truncate the table (resets auto-increment automatically)
TRUNCATE TABLE users;
Check the current value:
SHOW TABLE STATUS LIKE 'users';
Important: The new value must be greater than or equal to the current maximum value in the column. MySQL will silently adjust it upward if needed.
PostgreSQL
PostgreSQL uses sequences for auto-increment (via SERIAL or GENERATED ALWAYS AS IDENTITY):
-- Reset the sequence to 1
ALTER SEQUENCE users_id_seq RESTART WITH 1;
-- Or if you don't know the sequence name
SELECT setval(pg_get_serial_sequence('users', 'id'), 1, false);
The false parameter means the next nextval() call returns 1. With true, it would return 2.
To reset based on the current max value:
SELECT setval(pg_get_serial_sequence('users', 'id'), COALESCE(MAX(id), 0) + 1, false)
FROM users;
Truncate also resets the sequence:
TRUNCATE TABLE users RESTART IDENTITY;
SQL Server
-- Reset identity to 0 (next insert will be 1)
DBCC CHECKIDENT ('users', RESEED, 0);
-- Reset to a specific value
DBCC CHECKIDENT ('users', RESEED, 100);
Check the current identity value:
SELECT IDENT_CURRENT('users');
Note: If you reseed to a value lower than existing data, you may get duplicate key errors on the next insert.
To insert a row with a specific identity value:
SET IDENTITY_INSERT users ON;
INSERT INTO users (id, name) VALUES (1, 'Alice');
SET IDENTITY_INSERT users OFF;
Oracle
Oracle uses sequences, which are separate objects:
-- There's no ALTER SEQUENCE ... RESTART in older Oracle
-- The workaround: drop and recreate
DROP SEQUENCE users_seq;
CREATE SEQUENCE users_seq START WITH 1;
-- Oracle 18c+ supports RESTART
ALTER SEQUENCE users_seq RESTART START WITH 1;
SQLite
SQLite tracks auto-increment in the sqlite_sequence table:
-- Reset for a specific table
UPDATE sqlite_sequence SET seq = 0 WHERE name = 'users';
-- Or delete the tracking row entirely
DELETE FROM sqlite_sequence WHERE name = 'users';
Note: sqlite_sequence only exists if you used AUTOINCREMENT in the table definition. If you used INTEGER PRIMARY KEY without AUTOINCREMENT, SQLite uses max(rowid) + 1 automatically and there is nothing to reset.
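The full cycle is easy to verify from Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
)
conn.executemany("INSERT INTO users (name) VALUES (?)", [("Alice",), ("Bob",)])
conn.execute("DELETE FROM users")  # the counter keeps counting after deletes

# Reset the AUTOINCREMENT counter so the next id is 1 again
conn.execute("UPDATE sqlite_sequence SET seq = 0 WHERE name = 'users'")
conn.execute("INSERT INTO users (name) VALUES ('Carol')")
next_id = conn.execute("SELECT id FROM users").fetchone()[0]
```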
Quick Reference
| Database | Reset Command |
|---|---|
| MySQL | ALTER TABLE t AUTO_INCREMENT = 1 |
| PostgreSQL | ALTER SEQUENCE seq RESTART WITH 1 |
| SQL Server | DBCC CHECKIDENT('t', RESEED, 0) |
| Oracle 18c+ | ALTER SEQUENCE seq RESTART START WITH 1 |
| SQLite | UPDATE sqlite_sequence SET seq = 0 WHERE name = 't' |
When to Reset (and When Not To)
Do reset in development and test environments after clearing test data.
Don't reset in production. Gaps in IDs are normal and harmless. Reusing IDs can cause confusion in logs, caches, and external systems that reference old IDs. If an ID was once assigned to a deleted record, reusing it can lead to subtle bugs.
Tags: MySQL PostgreSQL Microsoft SQL Server SQLite
How to Limit Query Results and Paginate in SQL
Posted by Kyle Hankinson June 20, 2023
Every database supports limiting query results, but the syntax varies. This is essential for pagination — showing results page by page.
MySQL, PostgreSQL, and SQLite: LIMIT / OFFSET
-- First 10 rows
SELECT * FROM products ORDER BY name LIMIT 10;
-- Rows 11-20 (page 2)
SELECT * FROM products ORDER BY name LIMIT 10 OFFSET 10;
-- Rows 21-30 (page 3)
SELECT * FROM products ORDER BY name LIMIT 10 OFFSET 20;
The formula: OFFSET = (page_number - 1) * page_size
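The same formula as a tiny helper (names are illustrative):

```python
def page_offset(page_number, page_size):
    """OFFSET for a 1-based page number."""
    return (page_number - 1) * page_size

first = page_offset(1, 10)  # 0: page 1 starts at the first row
third = page_offset(3, 10)  # 20: page 3 covers rows 21-30
```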
SQL Server: OFFSET / FETCH NEXT (2012+)
-- First 10 rows
SELECT * FROM products ORDER BY name
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;
-- Rows 11-20
SELECT * FROM products ORDER BY name
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;
ORDER BY is required when using OFFSET ... FETCH.
SQL Server: TOP (older approach)
-- First 10 rows
SELECT TOP 10 * FROM products ORDER BY name;
TOP does not support an offset directly. For pagination with TOP, you need a subquery or CTE.
Oracle: FETCH FIRST (12c+)
-- First 10 rows
SELECT * FROM products ORDER BY name
FETCH FIRST 10 ROWS ONLY;
-- Rows 11-20
SELECT * FROM products ORDER BY name
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;
This syntax follows the SQL:2008 standard and is identical to SQL Server's.
Oracle: ROWNUM (Legacy)
-- First 10 rows: ORDER BY must run in a subquery,
-- because ROWNUM is assigned before the sort
SELECT * FROM (
    SELECT * FROM products ORDER BY name
) WHERE ROWNUM <= 10;
-- Rows 11-20
SELECT * FROM (
    SELECT p.*, ROWNUM rn FROM (
        SELECT * FROM products ORDER BY name
    ) p WHERE ROWNUM <= 20
) WHERE rn > 10;
This is verbose — prefer FETCH FIRST on Oracle 12c and above.
Quick Reference
| Database | Syntax |
|---|---|
| MySQL | LIMIT 10 OFFSET 20 |
| PostgreSQL | LIMIT 10 OFFSET 20 |
| SQLite | LIMIT 10 OFFSET 20 |
| SQL Server 2012+ | OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY |
| Oracle 12c+ | OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY |
| Oracle (legacy) | ROWNUM with nested subqueries |
The Performance Problem with Large Offsets
OFFSET 100000 means the database must find and discard 100,000 rows before returning your results. This gets slower as the offset increases.
Keyset Pagination (Cursor-Based)
A faster alternative for large datasets. Instead of an offset, use the last seen value:
-- Page 1
SELECT * FROM products ORDER BY id LIMIT 10;
-- Last row has id = 10
-- Page 2 (instead of OFFSET 10)
SELECT * FROM products WHERE id > 10 ORDER BY id LIMIT 10;
-- Last row has id = 20
-- Page 3
SELECT * FROM products WHERE id > 20 ORDER BY id LIMIT 10;
This is fast regardless of how deep you paginate because it uses the index on id directly.
Trade-off: Keyset pagination does not support jumping to an arbitrary page number. It only supports "next page" and "previous page."
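A sketch of the next-page loop in Python with sqlite3 (25 sample rows, pages of 10):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO products (name) VALUES (?)",
    [(f"product {i}",) for i in range(25)],
)

# Carry the last seen id forward as the cursor; no OFFSET anywhere
pages, last_id = [], 0
while True:
    batch = conn.execute(
        "SELECT id, name FROM products WHERE id > ? ORDER BY id LIMIT 10",
        (last_id,),
    ).fetchall()
    if not batch:
        break
    pages.append(batch)
    last_id = batch[-1][0]  # cursor for the next page
```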
Getting the Total Count
For displaying "Page 3 of 47", you need the total row count alongside paginated results:
-- PostgreSQL / MySQL 8.0+
SELECT COUNT(*) OVER () AS total_count, products.*
FROM products
ORDER BY name
LIMIT 10 OFFSET 20;
This uses a window function to return the total count with every row, avoiding a separate COUNT query. MySQL needs 8.0+ for window functions, and the qualified products.* because an unqualified * cannot be mixed with other select-list items there.
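SQLite 3.25+ (bundled with recent Python builds) supports the same window function, so the pattern is easy to try locally. Note the count reflects all matching rows, not just the 10 returned:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO products (name) VALUES (?)",
    [(f"p{i:02d}",) for i in range(47)],
)

# The window count is computed before LIMIT/OFFSET are applied
rows = conn.execute(
    "SELECT COUNT(*) OVER () AS total_count, id, name "
    "FROM products ORDER BY name LIMIT 10 OFFSET 20"
).fetchall()
total = rows[0][0]
```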