# Database Optimization

This guide covers database optimization strategies using Drizzle ORM, including query patterns, connection management, N+1 prevention, indexing strategies, and query analysis.
## Architecture Overview

```text
Database Access Architecture
==============================

Server Component / API Route
          |
          v
+-------------------+
|   Service Layer   |  <-- lib/services/
+-------------------+
          |
          v
+-------------------+
| Repository Layer  |  <-- lib/repositories/
+-------------------+
          |
          v
+-------------------+
|    Drizzle ORM    |  <-- Schema, queries, relations
+-------------------+
          |
          v
+-------------------+
|  Database Driver  |  <-- PostgreSQL (prod) / SQLite (dev)
+-------------------+
          |
          v
+-------------------+
|  Connection Pool  |  <-- Managed by driver
+-------------------+
```
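The layering above can be sketched in miniature. This is an illustrative sketch only: `ItemsService`, `ItemsRepository`, and the in-memory `fakeDb` stand-in are hypothetical names, not part of the template; in the real app the repository would issue Drizzle queries against `db`.

```typescript
// Hypothetical sketch of the layering: the service holds business rules,
// the repository owns all data access, and nothing above it touches the driver.
type Item = { id: string; status: 'draft' | 'published' };

// Stand-in for the Drizzle/driver layers (illustrative only).
const fakeDb: Item[] = [
  { id: '1', status: 'published' },
  { id: '2', status: 'draft' },
];

class ItemsRepository {
  // Repository layer: the only place that knows how data is fetched.
  async findByStatus(status: Item['status']): Promise<Item[]> {
    return fakeDb.filter((i) => i.status === status);
  }
}

class ItemsService {
  constructor(private repo = new ItemsRepository()) {}

  // Service layer: business rules live here, not query details.
  async listPublished(): Promise<Item[]> {
    return this.repo.findByStatus('published');
  }
}
```

The point of the indirection is that swapping PostgreSQL for SQLite (or a test double) touches only the lower layers.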
## Drizzle Query Patterns

### Basic Select with Filtering

```typescript
import { db } from '@/lib/db';
import { items } from '@/lib/db/schema';
import { eq, and, like, desc } from 'drizzle-orm';

// Simple select
const allItems = await db.select().from(items);

// Filtered select with ordering
const published = await db
  .select()
  .from(items)
  .where(and(
    eq(items.status, 'published'),
    like(items.title, `%${searchTerm}%`)
  ))
  .orderBy(desc(items.createdAt))
  .limit(20)
  .offset(0);
```
### Relational Queries

Use Drizzle's relational query API to avoid N+1 problems. Note that the `db.query` API is only available when the client is created with the schema, e.g. `drizzle(pool, { schema })`.

```typescript
// Good: Single query with relations
const itemsWithCategories = await db.query.items.findMany({
  with: {
    category: true,
    tags: {
      with: {
        tag: true,
      },
    },
  },
  where: eq(items.status, 'published'),
  limit: 20,
});

// Bad: N+1 pattern (do NOT do this)
const allItems = await db.select().from(items);
for (const item of allItems) {
  // This creates N additional queries!
  const category = await db.select().from(categories)
    .where(eq(categories.id, item.categoryId));
}
```
### Pagination Pattern

```typescript
import { asc, count, desc } from 'drizzle-orm';

// Whitelist sortable columns; indexing `items` with an arbitrary string
// is not type-safe and would allow sorting by unindexed columns.
const sortColumns = {
  createdAt: items.createdAt,
  title: items.title,
} as const;

interface PaginationOptions {
  page: number;
  limit: number;
  sortBy?: keyof typeof sortColumns;
  sortOrder?: 'asc' | 'desc';
}

async function getPaginatedItems(options: PaginationOptions) {
  const { page, limit, sortBy = 'createdAt', sortOrder = 'desc' } = options;
  const offset = (page - 1) * limit;
  const sortColumn = sortColumns[sortBy];

  const [data, countResult] = await Promise.all([
    db.select()
      .from(items)
      .orderBy(sortOrder === 'desc' ? desc(sortColumn) : asc(sortColumn))
      .limit(limit)
      .offset(offset),
    db.select({ count: count() })
      .from(items),
  ]);

  return {
    data,
    meta: {
      page,
      limit,
      total: countResult[0].count,
      totalPages: Math.ceil(countResult[0].count / limit),
    },
  };
}
```
### Batch Operations

```typescript
import { and, eq, lt } from 'drizzle-orm';

const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);

// Batch insert
await db.insert(items).values([
  { title: 'Item 1', status: 'draft' },
  { title: 'Item 2', status: 'draft' },
  { title: 'Item 3', status: 'draft' },
]);

// Batch update with conditions
await db.update(items)
  .set({ status: 'archived' })
  .where(and(
    eq(items.status, 'published'),
    lt(items.updatedAt, thirtyDaysAgo)
  ));

// Upsert (insert or update)
await db.insert(items)
  .values({ id: itemId, title: 'Updated Title' })
  .onConflictDoUpdate({
    target: items.id,
    set: { title: 'Updated Title', updatedAt: new Date() },
  });
```
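Very large batch inserts can exceed PostgreSQL's limit of 65535 bound parameters per statement, so they are commonly split into chunks. The `chunk` helper below is a small hypothetical utility, not part of Drizzle:

```typescript
// Split rows into fixed-size chunks so each INSERT statement stays
// under the PostgreSQL bound-parameter limit (65535 params per statement).
function chunk<T>(rows: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    out.push(rows.slice(i, i + size));
  }
  return out;
}

// Usage sketch: with ~5 columns per row, 1000 rows per chunk stays
// well under the parameter cap.
// for (const batch of chunk(newItems, 1000)) {
//   await db.insert(items).values(batch);
// }
```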
## Connection Pooling

### PostgreSQL (Production)

```typescript
// lib/db/index.ts
import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                       // Maximum connections
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
});

export const db = drizzle(pool);
```
### SQLite (Development)

```typescript
// lib/db/index.ts (development)
import { drizzle } from 'drizzle-orm/better-sqlite3';
import Database from 'better-sqlite3';

const sqlite = new Database('dev.db');
export const db = drizzle(sqlite);
```
### Connection Pool Sizing
| Environment | Max Connections | Rationale |
|---|---|---|
| Development | 5 | SQLite or minimal PG |
| Staging | 10 | Moderate load testing |
| Production | 20 | Handle concurrent requests |
| Serverless | 1-5 | Connection limits per instance |
For serverless (Vercel), consider using connection poolers like PgBouncer or Neon's serverless driver.
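As one serverless option, a sketch of the Neon driver setup is shown below. It assumes the `@neondatabase/serverless` package and Drizzle's `neon-http` adapter; adapt it to whichever pooler or driver your deployment actually uses.

```typescript
// lib/db/index.ts (serverless) -- sketch assuming Neon's HTTP driver.
// Queries go over HTTP per request, so no long-lived TCP connections
// are held open by short-lived serverless instances.
import { neon } from '@neondatabase/serverless';
import { drizzle } from 'drizzle-orm/neon-http';

const sql = neon(process.env.DATABASE_URL!);
export const db = drizzle(sql);
```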
## N+1 Prevention

### Detecting N+1 Queries

Enable query logging in development to detect N+1 patterns:

```typescript
const db = drizzle(pool, {
  logger: {
    logQuery(query, params) {
      console.log('[DB]', query.substring(0, 100));
    },
  },
});
```
Watch for patterns like:

```text
[DB] SELECT * FROM items LIMIT 20
[DB] SELECT * FROM categories WHERE id = $1   -- repeated 20 times
[DB] SELECT * FROM categories WHERE id = $1
[DB] SELECT * FROM categories WHERE id = $1
...
```
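This check can be automated with a small dev-only counter plugged into the query logger. `QueryCounter` is a hypothetical helper, not part of Drizzle; it normalizes each statement so identical query shapes collapse into one count:

```typescript
// Dev-only helper: count how often each query *shape* runs.
// Many executions of the same SELECT shape within one request is an N+1 smell.
class QueryCounter {
  private counts = new Map<string, number>();

  logQuery(query: string): void {
    // Replace numbered placeholders and collapse whitespace so
    // "WHERE id = $1" and "WHERE id = $7" count as the same shape.
    const shape = query.replace(/\$\d+/g, '$?').replace(/\s+/g, ' ').trim();
    this.counts.set(shape, (this.counts.get(shape) ?? 0) + 1);
  }

  // Query shapes that ran more than `threshold` times.
  suspects(threshold = 5): string[] {
    return [...this.counts.entries()]
      .filter(([, n]) => n > threshold)
      .map(([shape]) => shape);
  }
}
```

Wire it in as the `logger` option in development (its `logQuery` is compatible with the logger interface shown above) and report `counter.suspects()` at the end of a request.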
### Solving N+1 with Batch Loading

```typescript
import { inArray } from 'drizzle-orm';

// Instead of loading relations one by one, batch them:
const allItems = await db.select().from(items).limit(20);

// Collect all unique category IDs
const categoryIds = [...new Set(allItems.map(i => i.categoryId).filter(Boolean))];

// Single query for all categories
const allCategories = await db
  .select()
  .from(categories)
  .where(inArray(categories.id, categoryIds));

// Map categories to items
const categoryMap = new Map(allCategories.map(c => [c.id, c]));
const enrichedItems = allItems.map(item => ({
  ...item,
  category: categoryMap.get(item.categoryId),
}));
```
## Indexing Strategies

### Essential Indexes

```sql
-- Primary lookup patterns
CREATE INDEX idx_items_status ON items(status);
CREATE INDEX idx_items_slug ON items(slug);
CREATE INDEX idx_items_created_at ON items(created_at DESC);

-- Composite indexes for common queries
CREATE INDEX idx_items_status_created ON items(status, created_at DESC);
CREATE INDEX idx_items_category_status ON items(category_id, status);

-- Full-text search (PostgreSQL)
CREATE INDEX idx_items_title_search ON items USING gin(to_tsvector('english', title));

-- Foreign key indexes (often missed)
CREATE INDEX idx_comments_item_id ON comments(item_id);
CREATE INDEX idx_favorites_user_id ON favorites(user_id);
CREATE INDEX idx_tags_items_item_id ON tags_to_items(item_id);
CREATE INDEX idx_tags_items_tag_id ON tags_to_items(tag_id);
```
### Drizzle Index Definitions

```typescript
// In schema definition
import { index, pgTable, text, timestamp, varchar } from 'drizzle-orm/pg-core';

export const items = pgTable('items', {
  id: text('id').primaryKey(),
  title: varchar('title', { length: 255 }).notNull(),
  slug: varchar('slug', { length: 255 }).notNull().unique(),
  status: varchar('status', { length: 50 }).notNull().default('draft'),
  categoryId: text('category_id').references(() => categories.id),
  createdAt: timestamp('created_at').defaultNow(),
}, (table) => ({
  statusIdx: index('idx_items_status').on(table.status),
  slugIdx: index('idx_items_slug').on(table.slug),
  statusCreatedIdx: index('idx_items_status_created').on(table.status, table.createdAt),
}));
```
## Query Analysis

### Using EXPLAIN

```typescript
import { sql } from 'drizzle-orm';

// PostgreSQL EXPLAIN
const explanation = await db.execute(
  sql`EXPLAIN ANALYZE SELECT * FROM items WHERE status = 'published' ORDER BY created_at DESC LIMIT 20`
);
console.log(explanation.rows);
```
### Reading EXPLAIN Output

```text
Limit  (cost=0.42..1.23 rows=20 width=512) (actual time=0.034..0.089 rows=20 loops=1)
  ->  Index Scan using idx_items_status_created on items  (cost=0.42..45.67 rows=1200 width=512)
        Index Cond: (status = 'published')
Planning Time: 0.123 ms
Execution Time: 0.134 ms
```

Key indicators:

- `Index Scan`: Good -- using an index
- `Seq Scan`: Warning -- full table scan, consider adding an index
- `Nested Loop`: Check if the inner scan uses an index
- `Execution Time`: Target under 10ms for common queries
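Those indicators can be checked mechanically. `auditPlan` is a hypothetical dev helper that scans the text lines returned by `EXPLAIN ANALYZE` for the two most common red flags:

```typescript
// Hypothetical dev helper: scan EXPLAIN ANALYZE output lines for red flags.
interface PlanWarning {
  line: string;
  reason: string;
}

function auditPlan(planLines: string[]): PlanWarning[] {
  const warnings: PlanWarning[] = [];
  for (const line of planLines) {
    // Full table scans usually mean a missing or unused index.
    if (line.includes('Seq Scan')) {
      warnings.push({ line: line.trim(), reason: 'Full table scan -- consider an index' });
    }
    // Flag queries over the 10ms target mentioned above.
    const time = line.match(/Execution Time: ([\d.]+) ms/);
    if (time && parseFloat(time[1]) > 10) {
      warnings.push({ line: line.trim(), reason: 'Execution time above 10ms target' });
    }
  }
  return warnings;
}
```

In practice you would feed it the rows from the `db.execute(sql\`EXPLAIN ANALYZE ...\`)` call shown earlier.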
## Optimized Repository Patterns

The template uses a repository pattern for data access:

```typescript
// lib/repositories/items.repository.ts
export class ItemsRepository {
  async findPublished(options: PaginationOptions) {
    return db.query.items.findMany({
      where: eq(items.status, 'published'),
      with: { category: true },
      limit: options.limit,
      offset: (options.page - 1) * options.limit,
      orderBy: [desc(items.createdAt)],
    });
  }

  async findBySlug(slug: string) {
    return db.query.items.findFirst({
      where: eq(items.slug, slug),
      with: {
        category: true,
        tags: { with: { tag: true } },
      },
    });
  }
}
```
## Performance Considerations

- Select only needed columns: use `.select({ id: items.id, title: items.title })` instead of selecting all columns.
- Use `findFirst` for single results: adds `LIMIT 1` automatically.
- Avoid `count()` on large tables: consider approximate counts or cached counters.
- Use transactions for multi-step operations: ensures data consistency.
- Set query timeouts: prevent runaway queries from blocking the connection pool.
```typescript
// Transaction example
await db.transaction(async (tx) => {
  await tx.insert(items).values(newItem);
  await tx.insert(auditLog).values({ action: 'create', itemId: newItem.id });
});
```
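The query-timeout advice above can be applied at the driver level. A sketch using node-postgres connection options follows; the timeout values are illustrative, and you should confirm each option against the `pg` version you run:

```typescript
import { Pool } from 'pg';

// Sketch: per-connection timeouts via node-postgres client config.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  statement_timeout: 10_000,  // server cancels statements running > 10s
  query_timeout: 12_000,      // client-side safety net if the server never responds
  idle_in_transaction_session_timeout: 30_000, // kill transactions left idle
});
```

`statement_timeout` is enforced by PostgreSQL itself, so a runaway query is cancelled even if the application hangs.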
## Database Commands

```shell
# Generate migrations from schema changes
pnpm db:generate

# Run pending migrations
pnpm db:migrate

# Seed development data
pnpm db:seed

# Open Drizzle Studio (visual DB browser)
pnpm db:studio
```
## Troubleshooting

### Slow queries in production

- Enable query logging and identify slow queries.
- Run `EXPLAIN ANALYZE` on the slow query.
- Check if the query uses an index (look for `Seq Scan` warnings).
- Add composite indexes for common filter + sort combinations.

### Connection pool exhaustion

- Check the `max` pool setting relative to expected concurrency.
- Ensure connections are returned to the pool (avoid holding connections in long-running operations).
- Monitor active/idle connections with pool metrics.
- For serverless, use a connection pooler.

### Migration conflicts

- Always run `pnpm db:generate` before `pnpm db:migrate`.
- Review generated SQL before applying in production.
- Use `pnpm db:studio` to inspect the current schema state.