sqlite is enough until concurrent writes prove otherwise. a defense.
SQLite is a file on disk. That's it. That is the whole architecture. There is no server process. No connection pooling. No pg_hba.conf you will misread at 11pm. No Docker container running postgres that is using 300MB of RAM to hold twelve rows of data for your blog. Just a file. Your app opens it. The data is there.
It is also, as of right now, the most widely deployed database engine in history. It lives in your phone. Your browser. Your desktop apps. Every Electron app you have cursed at has an SQLite database somewhere. Billions of instances. It is fine. It has always been fine. You are the one who wasn't fine.
A personal blog does not need a database server before it has database problems. SQLite keeps the data local, inspectable, and easy to back up. cp myapp.db myapp.db.bak - that's your backup. That's it. Done. Go outside.
// the uncomfortable reality about 95% of businesses
Let's do the math that nobody wants to do. The average SaaS, indie app, internal tool, content site, or startup at the idea-to-Series-A stage has:
| METRIC | WHAT FOUNDERS IMAGINE | ACTUAL REALITY | SQLITE HANDLES IT? |
|---|---|---|---|
| Daily active users | 100,000+ (conservatively) | 120. 40 are bots. | Yes. Effortlessly. |
| Database writes/sec | Millions, probably | ~3. Maybe 10 on Mondays. | Yes. It's yawning. |
| Database size | Petabytes. Obviously. | 14MB after 2 years. | Yes. Max is 281TB. |
| Query complexity | Distributed joins across 6 services | SELECT * WHERE user_id = ? | Yes. Child's play. |
| Concurrent users | Viral moment incoming, surely | 4 (you, cofounder, 2 testers) | Yes. It barely notices. |
| Infrastructure team | SRE team of 12 | You. At midnight. | Yes. Nothing to manage. |
This is not a knock on ambition. It's a knock on premature infrastructure complexity that costs time, money, and sleep when you could just be shipping. The database doesn't need to be Postgres before the product needs to be Postgres. When that day comes - and it might, genuinely, come - migrating from SQLite to Postgres is documented, tooled, and survivable. Migrating away from an over-engineered mess of microservices is a years-long grief process.
// wal mode: the thing that fixes the one real problem
The one legitimate complaint about SQLite is write concurrency. The default journal mode uses a writer lock that blocks all readers. This is real. This is annoying. This was solved in 2010.
WAL - Write-Ahead Logging - is a mode you enable with one line. One. When WAL is on, writers do not block readers and readers do not block writers. Multiple concurrent reads happen simultaneously. A write queues behind the current write, not behind every read. For 95% of workloads, this is completely sufficient and you never need to think about it again.
PRAGMA journal_mode=WAL;
-- returns: wal
-- that's it. you're done. go for a walk.
-- also while you're here:
PRAGMA synchronous=NORMAL; -- faster; with WAL, still corruption-safe
PRAGMA cache_size=-64000; -- 64MB cache in memory
PRAGMA foreign_keys=ON; -- because you're not an animal
PRAGMA busy_timeout=5000; -- wait 5s on lock instead of crashing
-- this entire config fits in 5 lines.
-- your postgres config is 80 lines and you don't know what half of them do.
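For the copy-paste crowd, here is the same five-line config applied once at app startup. Sketched in Python's stdlib sqlite3 only because it ships with the interpreter and needs no install; the pragmas are identical in whatever driver you actually use.

```python
import sqlite3

def open_db(path):
    """Open a SQLite file with the production config from above."""
    db = sqlite3.connect(path)
    db.execute("PRAGMA journal_mode=WAL")    # writers don't block readers
    db.execute("PRAGMA synchronous=NORMAL")  # faster; still corruption-safe in WAL
    db.execute("PRAGMA cache_size=-64000")   # 64MB page cache
    db.execute("PRAGMA foreign_keys=ON")     # because you're not an animal
    db.execute("PRAGMA busy_timeout=5000")   # wait 5s on lock instead of erroring
    return db
```

journal_mode=WAL is persistent, stored in the file itself, so every later connection to the same database inherits it; the other pragmas are per-connection and belong in this one open function.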
WAL mode also survives crashes better than the default journal mode. The database does not corrupt on power loss. Your WAL file checkpoints back to the main file automatically. This is not a hack or a workaround. This is the intended production configuration. The SQLite docs literally recommend it. You have been sleeping on this.
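Don't take the readers-don't-block claim on faith; it's a ten-line experiment. A throwaway sketch (Python stdlib sqlite3, a temp file because WAL doesn't apply to :memory: databases): hold a write transaction open on one connection and read from another.

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit; we manage txns
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE posts(id INTEGER PRIMARY KEY, title TEXT)")
writer.execute("INSERT INTO posts(title) VALUES ('hello')")

reader = sqlite3.connect(path)
writer.execute("BEGIN IMMEDIATE")  # write transaction now open and holding the write lock
writer.execute("INSERT INTO posts(title) VALUES ('draft')")
rows = reader.execute("SELECT count(*) FROM posts").fetchone()[0]
print(rows)  # 1 -- the reader sees the committed snapshot, no lock error, no waiting
writer.execute("COMMIT")
```

The reader gets the last committed state while the write is in flight. That is the whole WAL pitch in one script.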
What WAL does not fix is extremely high write concurrency - think thousands of simultaneous writes per second from dozens of processes. At that point, yeah, you want Postgres. But if you are at that point, you already know you are at that point. You have the traffic numbers. You have the metrics. You have engineers. This post is not for you. This post is for the person Dockering up a Postgres container to store 400 rows of blog posts.
// the genius move: sharding by file
Here is where SQLite gets interesting in a way that nobody talks about enough. When you hit the limits of a single database - whether write throughput, or just wanting to isolate tenants, or managing data lifecycle - you can shard by creating multiple SQLite files.
One SQLite file per tenant. One per user. One per month of data. One per geographic region. The routing logic lives in your application layer and it is just an if statement or a hash function. user_id % 8 gives you 8 shards. Each shard is a file. Each file is independent. Each file can be on a different disk. Each file can be backed up, restored, deleted, or migrated independently. You have just built a horizontally partitioned database system using the oldest trick in the book: putting things in different places.
This is not a compromise. This is a genuinely powerful pattern used by serious production systems. Expensify ran SQLite at scale for years. Notion used it. Cloudflare's D1 is distributed SQLite. Turso is literally just SQLite with replication bolted on and VCs are calling it revolutionary.
const Database = require('better-sqlite3'); // or any driver with this constructor shape

function getDb(tenantId) {
  const shard = tenantId % 8; // 8 shards
  return new Database(`./data/shard-${shard}.db`);
}

// or per-tenant isolation (even cleaner):
function getTenantDb(tenantId) {
  return new Database(`./data/tenant-${tenantId}.db`);
}
# backup tenant 42:
$ cp ./data/tenant-42.db ./backups/tenant-42-$(date +%Y%m%d).db
# delete tenant 42's data (GDPR, whatever):
$ rm ./data/tenant-42.db
# that's it. no DELETE FROM. no cascades. no anxiety.
The per-tenant SQLite file pattern also means your GDPR deletion request is literally just rm tenant-42.db. That's it. The data is gone. No scanning tables. No cascading deletes across seventeen related tables. No accidentally leaving a row in some audit log table you forgot existed. ONE COMMAND. The file is gone. The data is gone. The regulator is happy. You slept. This is beautiful.
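One honest footnote on the rm move: in WAL mode the database can have `-wal` and `-shm` sibling files sitting next to it on disk, so a tidy deletion closes any open handle and then removes all three. A sketch, with illustrative paths:

```python
import os

def delete_tenant(tenant_id, data_dir="./data"):
    """Remove a tenant's database plus its WAL sidecar files.

    Assumes any open connection to this file has been closed first.
    """
    base = os.path.join(data_dir, f"tenant-{tenant_id}.db")
    for path in (base, base + "-wal", base + "-shm"):
        if os.path.exists(path):
            os.remove(path)
```

Still one function call, still no cascading deletes. Just remember the sidecars or the regulator's data lives on in a file called tenant-42.db-wal.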
// the actual limits. real ones. for honesty.
This is a blog post that loves SQLite, not one that lies about it. Here are the actual cases where you should use something else:
| SITUATION | USE INSTEAD | WHY |
|---|---|---|
| High concurrent writes from many processes/servers | Postgres | SQLite is single-writer. If you have 10 app servers all writing simultaneously, WAL won't save you. |
| Multiple application servers reading/writing the same file | Postgres or LiteFS/Turso | SQLite lives on one machine. You can't mount the same file over NFS under write load and expect happiness. |
| 10M+ transactions/second | Postgres, Cassandra, whatever | At that point you have very specific needs and a very specific budget and this post is not for you. |
| Complex replication requirements | LiteFS, Turso, or Postgres | SQLite doesn't replicate natively. Tooling exists but adds complexity. |
| Your team knows Postgres deeply | Use Postgres then | Familiarity has value. Use what your team can operate at 3am. This post is about not defaulting to complexity, not about dogma. |
None of these limits apply to a side project, a small SaaS, an internal tool, a personal blog, a developer portfolio, or literally anything an indie developer is building right now at any stage before meaningful traction. The limit you will hit first is not the database. It's finding users.
// the verdict. same as it ever was.
SQLite is not a toy database. It is not a "development only" database. It is not a database you graduate from like it's training wheels. It is a legitimate, battle-tested, production-grade database engine that happens to be a file, which makes it simpler to operate than anything with a server process, a config file, and a port number.
Enable WAL. Set a busy timeout. Keep your database local to the app process. Back it up with cp or sqlite3 mydb.db ".backup backup.db". Shard by file if you need scale. Migrate to Postgres the day your metrics actually demand it, not the day you imagine they might.
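That `.backup` command also has an in-process equivalent: SQLite's online backup API, which is safe to run while the database is in use, unlike a raw cp against a busy WAL-mode file. A sketch via Python's stdlib sqlite3, which has exposed this since Python 3.7:

```python
import sqlite3

def backup_db(src_path, dest_path):
    """Copy a live SQLite database using the online backup API."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    src.backup(dest)  # copies the whole database, page by page, transactionally
    dest.close()
    src.close()
```

Point it at a dated destination path from a cron job and you have the entire backup strategy for a small app in one function.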
SQLite has been running continuously in production on billions of devices since 2000. It was written by a guy named Richard Hipp who wanted a database that didn't need a server. It has 100% branch test coverage - one of the most tested pieces of software in existence. The source code is in the public domain. It will outlive every Postgres-as-a-Service startup that charged you $50/month for a hobby database. RESPECT THE FILE.
// filed under: obvious in retrospect · written on a machine running SQLite right now · no Postgres containers were harmed in the production of this post