Stop Adding Redis: Fix Your Missing Postgres Indices First

6 min read · Postgres · Performance · Supabase · Database

Database optimization tutorials always point you to the exotic stuff. Materialized views. Partitioning strategies. Read replicas. Connection pooling architectures.

Here’s what actually fixed our application: EXPLAIN ANALYZE and five strategic indices. No architectural rewrites. No caching layers. No desperate scaling pivots.

One dashboard. One PostgreSQL command. A handful of CREATE INDEX statements. Query times dropped from 28.6 seconds to 900ms - a 97% improvement that took two hours, not two sprints.

The boring fundamentals work. But most developers skip them because they’re not exciting enough to write about.

The Problem: Slow Queries Everywhere

Our application started crawling. Dashboard loads took 15+ seconds. User search felt broken at 20+ seconds. The API timeout warnings were piling up in Sentry.

The team immediately started discussing solutions:

  • “Should we add Redis caching?”
  • “Maybe we need read replicas?”
  • “Time to partition that users table?”

Classic engineer brain: jump to the complex solution first.

I opened the Supabase dashboard and clicked on “Slow Queries.” The truth was right there: five queries were scanning millions of rows. Not an architecture problem. Not a scaling problem. A missing index problem.

Supabase Slow Queries Dashboard: The Discovery Tool

If you’re on Supabase, the Slow Queries dashboard is your first stop. It shows you exactly what’s hurting:

  • Query text - The actual SQL being executed
  • Average execution time - How long it’s taking
  • Calls - How often it’s running
  • Total time - The cumulative damage

For vanilla Postgres, enable pg_stat_statements (it has to be listed in shared_preload_libraries, which requires a restart, before the extension collects anything):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Find your slowest queries
SELECT
  query,
  calls,
  mean_exec_time,
  total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;

Why this works: You’re optimizing based on data, not guesses. pg_stat_statements tracks every query’s performance across your entire application. No more “I think this might be slow.”

Our top offender was a user search query: 28.6 seconds average, called 2,000+ times per day. That’s 16 hours of cumulative user time wasted daily on one query.
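
That cumulative figure comes straight out of pg_stat_statements if you sort by total time instead of mean time. A small variation on the query above (total_exec_time is in milliseconds, and the counters accumulate since the last statistics reset, so the window is whatever period you last reset over):

SELECT
  query,
  calls,
  round((total_exec_time / 1000 / 3600)::numeric, 1) AS total_hours
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;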

EXPLAIN ANALYZE: What Was Actually Happening

Running EXPLAIN on the query showed the estimated plan. Running EXPLAIN ANALYZE executed it and showed reality:

EXPLAIN ANALYZE
SELECT users.*, profiles.*, activity_logs.last_seen
FROM users
LEFT JOIN profiles ON users.id = profiles.user_id
LEFT JOIN activity_logs ON users.id = activity_logs.user_id
WHERE users.email ILIKE '%search term%'
  AND users.created_at > '2025-01-01'
  AND profiles.status = 'active'
ORDER BY activity_logs.last_seen DESC
LIMIT 50;

The output was brutal:

Limit  (cost=582045.23..582045.36 rows=50 width=432)
       (actual time=28634.521..28634.537 rows=50 loops=1)
  ->  Sort  (cost=582045.23..585241.89 rows=1278663 width=432)
            (actual time=28634.518..28634.528 rows=50 loops=1)
        Sort Key: activity_logs.last_seen DESC
        Sort Method: top-N heapsort  Memory: 34kB
        ->  Hash Left Join  (cost=45234.12..568234.45 rows=1278663 width=432)
                          (actual time=892.234..27456.789 rows=1245782 loops=1)
              ->  Seq Scan on users  (cost=0.00..234567.23 rows=1278663 width=216)
                                    (actual time=0.034..8234.567 rows=1245782 loops=1)
                    Filter: ((email ~~* '%search term%') AND (created_at > '2025-01-01'::date))
                    Rows Removed by Filter: 234891
              ->  Hash  (cost=34567.89..34567.89 rows=123456 width=216)
                    ->  Seq Scan on profiles  (cost=0.00..34567.89 rows=123456 width=216)
                          Filter: (status = 'active'::text)
Planning Time: 2.345 ms
Execution Time: 28634.623 ms

Key red flags:

  1. Seq Scan on users - Scanning 1.2M rows instead of using an index
  2. Rows Removed by Filter: 234891 - Checking 234k rows only to discard them
  3. ILIKE '%search term%' - The leading wildcard prevents index usage
  4. No index on activity_logs.user_id - Hash join instead of indexed lookup
  5. 28634ms execution time - Unacceptable for a user-facing query

Why EXPLAIN ANALYZE matters: EXPLAIN shows you the plan. EXPLAIN ANALYZE shows you what actually happened. The difference between “estimated rows=1000” and “actual rows=1245782” tells you when PostgreSQL’s statistics are wrong.
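
Side by side on the same users table, the difference is one keyword (BUFFERS is optional, but it adds shared-buffer hit/read counts that make I/O problems obvious):

-- Estimates only; the query is not executed
EXPLAIN
SELECT * FROM users WHERE created_at > '2025-01-01';

-- Executes the query and reports actual row counts and timings
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM users WHERE created_at > '2025-01-01';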

The Five Strategic Indices That Fixed Everything

I didn’t add indices everywhere. I added five, based on what EXPLAIN ANALYZE revealed:

1. Composite index on users table

CREATE INDEX CONCURRENTLY idx_users_created_status
ON users (created_at, email);

Why this works: The query filters on created_at first (more selective), then searches email. Composite indices match the query’s filter pattern. Column order matters - most selective filter goes first.

2. Trigram GIN index on users.email

-- Enable the trigram extension first
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX CONCURRENTLY idx_users_email_gin
ON users USING gin (email gin_trgm_ops);

Why this works: GIN (Generalized Inverted Index) with trigrams handles ILIKE patterns efficiently. The '%search%' pattern that killed performance now uses the index instead of scanning.
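
A quick sanity check that the trigram index is actually being picked up - the exact plan shape depends on your data, but the Seq Scan on users should be replaced by a bitmap scan on the new index:

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, email
FROM users
WHERE email ILIKE '%search term%';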

3. Index on profiles.status with filter

CREATE INDEX CONCURRENTLY idx_profiles_status_active
ON profiles (user_id, status)
WHERE status = 'active';

Why this works: Partial index only stores rows where status = 'active'. Smaller index, faster lookups, less maintenance overhead. The query explicitly filters on status = 'active', so the partial index is always applicable.
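
If you want to see the size win, pg_relation_size works on indices too - a quick check, not something the query itself needs:

SELECT pg_size_pretty(pg_relation_size('idx_profiles_status_active')) AS partial_index_size;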

4. Index on activity_logs for join

CREATE INDEX CONCURRENTLY idx_activity_logs_user_last_seen
ON activity_logs (user_id, last_seen DESC);

Why this works: Supports both the JOIN on user_id and the ORDER BY on last_seen. The DESC matches the query’s sort direction, eliminating a separate sort step.
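
Worth verifying once the index is built: a per-user lookup should read the index in last_seen order with no separate Sort node in the plan (the user_id value below is just a placeholder):

EXPLAIN
SELECT last_seen
FROM activity_logs
WHERE user_id = 42  -- placeholder id
ORDER BY last_seen DESC
LIMIT 1;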

5. Covering index for frequent lookups

CREATE INDEX CONCURRENTLY idx_users_covering
ON users (id)
INCLUDE (email, created_at);

Why this works: Index-only scans. PostgreSQL can satisfy the query entirely from the index without touching the table heap. Massive I/O reduction for queries that only need these columns.
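
One caveat with index-only scans: they only skip the heap when the visibility map is up to date. Autovacuum normally handles this, but after a large backfill a manual vacuum helps the planner actually deliver the index-only scan:

-- Refresh statistics and the visibility map so index-only scans can skip the heap
VACUUM (ANALYZE) users;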

CONCURRENTLY is critical: Creating indices without it locks the table for writes. On a production users table with millions of rows, that’s unacceptable. CONCURRENTLY builds the index in the background.
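
One gotcha with CONCURRENTLY: if the build fails or gets cancelled, it leaves an INVALID index behind that slows writes without helping reads. Check for leftovers before retrying:

-- Invalid indices left over from failed CONCURRENTLY builds;
-- drop them (DROP INDEX CONCURRENTLY) and recreate
SELECT indexrelid::regclass AS index_name
FROM pg_index
WHERE NOT indisvalid;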

The Results: From 28s to 900ms

After adding the indices, I ran EXPLAIN ANALYZE again:

Limit  (cost=234.56..234.69 rows=50 width=432)
       (actual time=856.234..856.267 rows=50 loops=1)
  ->  Nested Loop Left Join  (cost=45.23..12456.78 rows=45123 width=432)
                            (actual time=2.345..854.123 rows=50 loops=1)
        ->  Index Scan using idx_users_created_status on users
                  (cost=0.56..4567.89 rows=12456 width=216)
                  (actual time=0.123..234.567 rows=12456 loops=1)
              Index Cond: (created_at > '2025-01-01'::date)
              Filter: (email ~~* '%search term%')
              Rows Removed by Filter: 234
        ->  Index Scan using idx_activity_logs_user_last_seen
                  (cost=0.43..8.45 rows=1 width=216)
                  (actual time=0.045..0.045 rows=1 loops=12456)
              Index Cond: (user_id = users.id)
Planning Time: 1.234 ms
Execution Time: 903.456 ms

The transformation:

Metric          | Before     | After  | Improvement
----------------|------------|--------|----------------
Execution time  | 28,634ms   | 903ms  | 97% faster
Rows scanned    | 1,245,782  | 12,456 | 99% reduction
Scan type       | Sequential | Index  | Optimal
I/O operations  | ~45,000    | ~150   | 99.7% reduction

Dashboard loads dropped from 15 seconds to 1.2 seconds. User search went from 20+ seconds to under 1 second. API timeout errors disappeared.

Total implementation time: 2 hours. Most of that was waiting for indices to build with CONCURRENTLY.

Why Developers Skip This

The basics aren’t sexy. Blog posts about “10x your database with partitioning” get more clicks than “add an index.”

But here’s the reality: most performance problems are missing index problems. The exotic solutions matter when you’re at 100M+ rows and you’ve already optimized indices. For the other 99% of applications, EXPLAIN ANALYZE + strategic indexing solves it.

Common reasons teams skip this:

  1. EXPLAIN output looks intimidating - It’s not. Start from the most indented line and work outward. Look for Seq Scan, high actual time, and “Rows Removed by Filter.”

  2. “We’ll add indices when we need them” - You need them now. Indices don’t just help slow queries. They prevent queries from becoming slow in the first place.

  3. Over-indexing fears - Valid concern. Every index slows writes and consumes storage. But five strategic indices won’t hurt you. Fifty random indices will. (A quick query for spotting unused indices follows this list.)

  4. Jumping to architectural solutions - Caching, read replicas, and partitioning address scale problems. But if your queries are inefficient, you’re just scaling inefficiency.
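
On the over-indexing point, Postgres tracks how often each index is used, so dead weight is easy to spot. The counters are cumulative since the last statistics reset, so don’t judge a brand-new index by this:

-- Indices that have never been used for a scan; review before dropping
SELECT schemaname,
       relname      AS table_name,
       indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;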

The pattern I follow: Monitor with pg_stat_statements, identify slow queries, run EXPLAIN ANALYZE, add indices based on the execution plan, verify improvement.
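
The “verify improvement” step is cleaner if the counters start from zero after the change. Resetting is optional and typically restricted to superusers unless EXECUTE has been granted:

-- Clear accumulated statistics so before/after comparisons are clean
SELECT pg_stat_statements_reset();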

Works on Supabase and Vanilla Postgres

This approach is database-agnostic. Supabase just makes discovery easier with the built-in Slow Queries dashboard.

On Supabase:

  • Slow Queries dashboard shows the problems
  • SQL editor to run EXPLAIN ANALYZE
  • Index creation through SQL editor or migrations

On vanilla Postgres:

  • Enable pg_stat_statements extension
  • Query it to find slow operations
  • Same EXPLAIN ANALYZE workflow
  • Same index creation commands

The monitoring tools differ. The optimization fundamentals don’t.

If you’re on AWS RDS, enable Performance Insights. On Google Cloud SQL, use Query Insights. On self-hosted Postgres, install pg_stat_statements and consider auto_explain for automatic logging.
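
auto_explain can be tried per session before committing to a server-wide config - the thresholds below are illustrative, not recommendations:

-- Session-level test drive (requires superuser); for production, add
-- auto_explain to session_preload_libraries or shared_preload_libraries
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '1s';  -- log plans for queries slower than 1 second
SET auto_explain.log_analyze = true;       -- include actual timings, not just estimates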

The critical insight: The database query planner is trying to help you. EXPLAIN ANALYZE shows you exactly what it’s doing and where it’s struggling. Listen to it.

The Bottom Line

Database optimization isn’t about knowing 47 exotic techniques. It’s about using the fundamentals systematically.

EXPLAIN ANALYZE told us where PostgreSQL was struggling. Strategic indices fixed it. Five CREATE INDEX statements and 97% faster queries.

The boring answer works. Use the monitoring tools. Read the execution plans. Add indices where the data shows you need them. Don’t guess. Don’t over-engineer. Let the database tell you what’s wrong.

Most performance problems aren’t architecture problems. They’re missing index problems hiding behind impressive-sounding excuses.


Optimizing database performance? I’d love to hear what you’re working on. Reach out on LinkedIn.