Performance Troubleshooting
Identify, diagnose, and resolve common performance issues using the diagnostic techniques and step-by-step solutions below, so the application keeps performing at its best.
Common Performance Issues
Slow Database Queries
Symptoms:
- High response times for API endpoints
- Database connection pool exhaustion
- Timeouts on database operations
Diagnostic Steps:
// Enable query logging to identify slow queries
const client = await pool.connect()
try {
  console.time('query-time')
  const result = await client.query('SELECT * FROM large_table WHERE complex_condition')
  console.timeEnd('query-time')

  // Log query plan
  const explain = await client.query('EXPLAIN ANALYZE SELECT * FROM large_table WHERE complex_condition')
  console.log('Query plan:', explain.rows)
} finally {
  client.release()
}
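If the pg_stat_statements extension is available (an assumption; it ships with PostgreSQL but has to be preloaded and created in the target database), you can also rank the slowest queries across the whole workload instead of timing them one at a time. A minimal sketch, assuming PostgreSQL 13 or newer column names:
// Sketch: list the ten slowest queries by mean execution time
// (on PostgreSQL 12 and older the column is mean_time rather than mean_exec_time)
const client = await pool.connect()
try {
  const slowest = await client.query(`
    SELECT query, calls, mean_exec_time, total_exec_time
    FROM pg_stat_statements
    ORDER BY mean_exec_time DESC
    LIMIT 10
  `)
  console.table(slowest.rows)
} finally {
  client.release()
}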
Solutions:
- Add Missing Indexes:
-- Identify missing indexes
EXPLAIN ANALYZE SELECT * FROM products WHERE category = 'electronics';
-- Add appropriate index
CREATE INDEX idx_products_category ON products(category);
-- Composite index for multiple conditions
CREATE INDEX idx_products_category_status ON products(category, status);
- Optimize Query Structure:
-- Instead of this inefficient query
SELECT * FROM products p
WHERE p.id IN (SELECT product_id FROM inventory WHERE quantity > 0);
-- Use JOIN for better performance
SELECT p.* FROM products p
INNER JOIN inventory i ON p.id = i.product_id
WHERE i.quantity > 0;
Memory Leaks
Symptoms:
- Gradually increasing memory usage
- Out of memory errors
- Application crashes
Diagnostic Techniques:
// Monitor memory usage patterns
function monitorMemory() {
  const usage = process.memoryUsage()
  console.log({
    rss: `${Math.round(usage.rss / 1024 / 1024)}MB`,
    heapUsed: `${Math.round(usage.heapUsed / 1024 / 1024)}MB`,
    heapTotal: `${Math.round(usage.heapTotal / 1024 / 1024)}MB`,
  })
}

// Check for event listeners that aren't cleaned up
process.on('warning', (warning) => {
  if (warning.name === 'MaxListenersExceededWarning') {
    console.warn('Potential memory leak - too many event listeners:', warning)
  }
})
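When the numbers keep climbing, a heap snapshot usually shows which objects are being retained. Node's built-in v8 module can write one on demand; a minimal sketch, using a SIGUSR2 signal as the trigger (the trigger is just one convenient option):
// Sketch: write a heap snapshot on demand; load two snapshots taken a few
// minutes apart into Chrome DevTools (Memory tab) and compare what grew
import { writeHeapSnapshot } from 'node:v8'

process.on('SIGUSR2', () => {
  const file = writeHeapSnapshot() // writes a .heapsnapshot file in the working directory
  console.log(`Heap snapshot written to ${file}`)
})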
Common Causes and Solutions:
- Unclosed Database Connections:
// Problem: Not releasing connections
async function badExample() {
  const client = await pool.connect()
  const result = await client.query('SELECT * FROM products')
  // Missing client.release() - causes connection leak
  return result
}

// Solution: Always release connections
async function goodExample() {
  const client = await pool.connect()
  try {
    const result = await client.query('SELECT * FROM products')
    return result
  } finally {
    client.release() // Always release
  }
}
- Caching Without Expiration:
// Problem: Cache that grows indefinitely
const cache = new Map()
function badCaching(key: string, data: any) {
  cache.set(key, data) // Never expires
}

// Solution: Implement TTL and size limits
class LimitedCache {
  private cache = new Map()
  private maxSize = 1000

  set(key: string, value: any, ttl: number = 300000) {
    // Remove oldest entries if at capacity
    if (this.cache.size >= this.maxSize) {
      const firstKey = this.cache.keys().next().value
      this.cache.delete(firstKey)
    }
    this.cache.set(key, {
      value,
      expiry: Date.now() + ttl
    })
  }

  get(key: string) {
    const item = this.cache.get(key)
    if (!item || Date.now() > item.expiry) {
      this.cache.delete(key)
      return null
    }
    return item.value
  }
}
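Usage mirrors a plain Map, except reads return null once the TTL has passed or the entry has been evicted:
// Usage: entries expire after the TTL and the cache never exceeds maxSize
const productCache = new LimitedCache()
productCache.set('product:42', { id: 42, name: 'Widget' }, 60000) // 1 minute TTL
const cached = productCache.get('product:42') // null once expired or evicted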
High CPU Usage
Symptoms:
- Slow response times
- High server load
- Unresponsive application
Diagnostic Steps:
// Profile CPU-intensive operations
console.time('cpu-intensive-operation')

// Example: Inefficient data processing
const largeArray = Array.from({ length: 1000000 }, (_, i) => i)

// Problem: Blocking operation
const result = largeArray
  .filter(n => n % 2 === 0)
  .map(n => n * 2)
  .reduce((sum, n) => sum + n, 0)

console.timeEnd('cpu-intensive-operation')
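When console.time is too coarse, recent Node.js versions (12 and later) can record a full CPU profile from the command line; the resulting .cpuprofile file can be opened in Chrome DevTools to see exactly which functions dominate. The entry-point file name below is a placeholder:
# Record a V8 CPU profile for the whole process lifetime
node --cpu-prof server.js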
Solutions:
- Optimize Algorithms:
// Instead of nested loops (O(n²))
function inefficientSearch(products: Product[], queries: string[]) {
  return queries.map(query =>
    products.filter(product =>
      product.name.toLowerCase().includes(query.toLowerCase())
    )
  )
}

// Use efficient data structures (O(n))
function efficientSearch(products: Product[], queries: string[]) {
  // Build index once
  const searchIndex = new Map<string, Product[]>()
  products.forEach(product => {
    const words = product.name.toLowerCase().split(' ')
    words.forEach(word => {
      if (!searchIndex.has(word)) {
        searchIndex.set(word, [])
      }
      searchIndex.get(word)!.push(product)
    })
  })

  // Fast lookups
  return queries.map(query =>
    searchIndex.get(query.toLowerCase()) || []
  )
}
- Use Asynchronous Processing:
// Break up CPU-intensive work
async function processLargeDataset(data: any[]) {
  const batchSize = 1000
  const results = []

  for (let i = 0; i < data.length; i += batchSize) {
    const batch = data.slice(i, i + batchSize)
    const batchResult = await processBatch(batch)
    results.push(...batchResult)

    // Yield control to event loop
    await new Promise(resolve => setImmediate(resolve))
  }

  return results
}
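Yielding with setImmediate keeps the event loop responsive, but the computation still runs on the main thread. If profiling points at one hot, self-contained task, moving it to a worker thread frees the main thread entirely. A minimal sketch using Node's built-in worker_threads module; the ./heavy-task.js file and payload shape are hypothetical:
// Sketch: run a CPU-heavy task off the main thread.
// ./heavy-task.js (hypothetical) would read workerData, compute,
// and send the result back via parentPort.postMessage(result).
import { Worker } from 'node:worker_threads'

function runHeavyTask<T>(payload: unknown): Promise<T> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(new URL('./heavy-task.js', import.meta.url), { workerData: payload })
    worker.once('message', resolve)
    worker.once('error', reject)
    worker.once('exit', (code) => {
      if (code !== 0) reject(new Error(`Worker stopped with exit code ${code}`))
    })
  })
}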
Debugging Tools and Techniques
Performance Profiling
// Simple timing wrapper built on process.hrtime
function profileFunction(fn: Function, name: string) {
  return function(...args: any[]) {
    const start = process.hrtime.bigint()
    const result = fn.apply(this, args)
    const end = process.hrtime.bigint()
    const duration = Number(end - start) / 1000000 // Convert to milliseconds

    console.log(`${name} took ${duration.toFixed(2)}ms`)
    return result
  }
}

// Usage
const optimizedFunction = profileFunction(expensiveOperation, 'ExpensiveOperation')
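Most of the expensive operations in this guide are async (database queries, network calls), so a variant that awaits the result before stopping the clock is usually more useful. A sketch; fetchProducts is a hypothetical async function standing in for any real operation:
// Sketch: profile an async function; the timing includes the awaited work
function profileAsyncFunction<Args extends any[], R>(
  fn: (...args: Args) => Promise<R>,
  name: string
): (...args: Args) => Promise<R> {
  return async (...args: Args) => {
    const start = process.hrtime.bigint()
    try {
      return await fn(...args)
    } finally {
      const durationMs = Number(process.hrtime.bigint() - start) / 1000000
      console.log(`${name} took ${durationMs.toFixed(2)}ms`)
    }
  }
}

// Usage
const timedFetch = profileAsyncFunction(fetchProducts, 'fetchProducts')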
Request Tracing
// Add request tracing (middleware.ts)
import { NextRequest, NextResponse } from 'next/server'

export async function middleware(request: NextRequest) {
  const requestId = crypto.randomUUID()
  const startTime = Date.now()

  console.log(`[${requestId}] ${request.method} ${request.url} - START`)

  const response = NextResponse.next()

  // Add trace headers
  response.headers.set('X-Request-ID', requestId)
  response.headers.set('X-Response-Time', `${Date.now() - startTime}ms`)

  console.log(`[${requestId}] ${request.method} ${request.url} - END (${Date.now() - startTime}ms)`)

  return response
}
Database Query Analysis
// Database query analyzer
export class QueryAnalyzer {
  static async analyzeQuery(query: string, params?: any[]) {
    const client = await pool.connect()
    try {
      // Get query execution plan
      const plan = await client.query(`EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) ${query}`, params)
      const analysis = plan.rows[0]['QUERY PLAN'][0]

      console.log('Query Analysis:', {
        executionTime: analysis['Execution Time'],
        planningTime: analysis['Planning Time'],
        totalCost: analysis.Plan['Total Cost'],
        actualRows: analysis.Plan['Actual Rows'],
      })

      // Check for performance issues
      if (analysis['Execution Time'] > 1000) {
        console.warn('Slow query detected:', query.substring(0, 100))
      }

      if (analysis.Plan['Node Type'] === 'Seq Scan') {
        console.warn('Sequential scan detected - consider adding index')
      }

      return analysis
    } finally {
      client.release()
    }
  }
}
Performance Optimization Checklist
Frontend Performance
- Bundle size optimized (< 250KB initial)
- Images optimized and properly sized
- Fonts preloaded
- Critical CSS inlined
- JavaScript code-split
- Lazy loading implemented
- Caching headers configured
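For the caching-headers item, long-lived Cache-Control headers for immutable assets can be set centrally in the Next.js config. A sketch; the /static/:path* matcher is an assumption, adjust it to wherever your fingerprinted assets live:
// next.config.js (sketch): long-lived caching for immutable static assets
module.exports = {
  async headers() {
    return [
      {
        source: '/static/:path*',
        headers: [
          { key: 'Cache-Control', value: 'public, max-age=31536000, immutable' },
        ],
      },
    ]
  },
}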
Backend Performance
- Database queries optimized
- Proper indexes in place
- Connection pooling configured (see the sketch after this list)
- Response compression enabled
- Static assets served via CDN
- API response times < 200ms
- Memory usage stable
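For the connection-pooling item, the node-postgres pool used throughout this guide is worth sizing explicitly rather than relying on defaults. A sketch; the numbers are assumptions to tune against the database's max_connections and your instance count:
// Sketch: explicit pool sizing (pg)
import { Pool } from 'pg'

export const pool = new Pool({
  max: 20,                       // upper bound on simultaneous connections per instance
  idleTimeoutMillis: 30000,      // close clients idle for more than 30s
  connectionTimeoutMillis: 2000, // fail fast when no connection is available
})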
Database Performance
- Query execution plans reviewed
- Slow query log monitored
- Index usage analyzed (see the sketch after this list)
- Connection pool sized appropriately
- Database maintenance scheduled
- Backup strategy doesn't impact performance
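For the index-usage item, PostgreSQL's built-in statistics views show which indexes are actually being scanned. A sketch that flags indexes with zero scans as candidates for review:
// Sketch: find indexes that have never been scanned since stats were last reset
const client = await pool.connect()
try {
  const unused = await client.query(`
    SELECT schemaname, relname AS table_name, indexrelname AS index_name, idx_scan
    FROM pg_stat_user_indexes
    WHERE idx_scan = 0
    ORDER BY relname
  `)
  console.table(unused.rows)
} finally {
  client.release()
}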
Monitoring and Alerting
- Performance metrics collected (see the sketch after this list)
- Alerts configured for critical thresholds
- Error tracking implemented
- Log aggregation set up
- Performance dashboard created
- Regular performance reviews scheduled
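For the metrics item, even a small in-process collector makes threshold alerts possible before a full monitoring stack is in place. A sketch; the 1000-sample window and 500ms p95 threshold are assumptions:
// Sketch: record response times and warn when the rolling p95 crosses a threshold
const samples: number[] = []

export function recordResponseTime(ms: number, thresholdMs = 500) {
  samples.push(ms)
  if (samples.length > 1000) samples.shift() // keep a rolling window

  const sorted = [...samples].sort((a, b) => a - b)
  const p95 = sorted[Math.floor(sorted.length * 0.95)]
  if (p95 > thresholdMs) {
    console.warn(`p95 response time ${p95}ms exceeds ${thresholdMs}ms threshold`)
  }
}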
Emergency Performance Recovery
Immediate Actions for Performance Crisis
- Scale Horizontally (if possible):
# Add more server instances
docker-compose up --scale web=3
- Enable Aggressive Caching:
// Temporary aggressive caching
import { NextRequest, NextResponse } from 'next/server'

export async function emergencyCache(request: NextRequest) {
  const url = new URL(request.url) // available if caching should be scoped to specific paths

  // Cache everything for 5 minutes during crisis
  const response = NextResponse.next()
  response.headers.set('Cache-Control', 'public, max-age=300')
  return response
}
- Rate Limiting:
// Implement emergency rate limiting
const rateLimiter = new Map<string, { count: number; resetTime: number }>()

export function emergencyRateLimit(clientId: string, limit: number = 10) {
  const now = Date.now()
  const windowMs = 60000 // 1 minute
  const client = rateLimiter.get(clientId)

  if (!client || now > client.resetTime) {
    rateLimiter.set(clientId, { count: 1, resetTime: now + windowMs })
    return true
  }

  if (client.count >= limit) {
    return false
  }

  client.count++
  return true
}
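A sketch of how the helper above might be wired into middleware so over-limit clients get a 429 before reaching application code; using the x-forwarded-for header as the client identifier is an assumption that depends on your proxy setup:
// Sketch: reject over-limit clients in Next.js middleware
import { NextRequest, NextResponse } from 'next/server'

export function middleware(request: NextRequest) {
  const clientId = request.headers.get('x-forwarded-for') ?? 'unknown'
  if (!emergencyRateLimit(clientId)) {
    return new NextResponse('Too Many Requests', { status: 429 })
  }
  return NextResponse.next()
}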
- Disable Non-Critical Features:
// Feature flag for emergency mode
const EMERGENCY_MODE = process.env.EMERGENCY_MODE === 'true'

export function renderDashboard() {
  if (EMERGENCY_MODE) {
    // Return minimal dashboard
    return <MinimalDashboard />
  }
  return <FullDashboard />
}
Systematic troubleshooting, combined with emergency procedures prepared before a crisis hits, keeps application performance stable and lets you resolve issues quickly when they do occur.