PostgreSQL Execution Plan Analysis

PostgreSQL execution plan analysis is critical for diagnosing query performance issues. Here’s how to interpret optimizer decisions and identify bottlenecks:

Generating Execution Plans

  • First, generate a basic plan estimate:
EXPLAIN SELECT * FROM employees WHERE salary > 50000;

Shows estimated costs without executing the query

  • Next, obtain actual runtime metrics:
EXPLAIN ANALYZE SELECT * FROM employees WHERE salary > 50000;

Executes the query and reports actual timing data
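
EXPLAIN also accepts an options list; adding BUFFERS reports shared-buffer hits and disk reads alongside the timings:

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM employees WHERE salary > 50000;

Buffer counts reveal whether a slow node is reading from cache or from disk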

  • Additionally, for transaction-safe analysis:
BEGIN; EXPLAIN ANALYZE ...; ROLLBACK;

Wraps write operations in a transaction so the changes EXPLAIN ANALYZE makes are rolled back
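
For instance, a hypothetical UPDATE can be profiled safely like this (the department column is illustrative):

BEGIN;
EXPLAIN ANALYZE UPDATE employees SET salary = salary * 1.05 WHERE department = 'Sales';
ROLLBACK;  -- the query really executes, but its changes are discarded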

Key Plan Components to Analyze

1. Cost Metrics

Seq Scan on employees (cost=0.00..16.50 rows=173 width=118)
  • Startup cost: Estimated cost before the first row can be returned (0.00 here; see the sort example below)
  • Total cost: Estimated cost to return all rows (16.50)
  • Rows/Width: Estimated row count (173) and average row width in bytes (118)
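
Nodes that must consume their entire input before emitting a row, such as sorts, carry a high startup cost. A hypothetical fragment (the numbers are illustrative):

Sort  (cost=135.34..140.03 rows=1876 width=118)
  Sort Key: salary
  ->  Seq Scan on employees  (cost=0.00..33.30 rows=1876 width=118)

Nearly all of the cost here is paid before the first sorted row appears, which is why the two numbers differ so sharply.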

2. Node Types

Node Type    Indicates            Common Issues
Seq Scan     Full table scan      Missing indexes on filtered columns
Index Scan   Index utilization    Non-selective indexes
Nested Loop  Small table joins    Cartesian products without conditions
Hash Join    Large dataset joins  Insufficient work_mem for hash tables
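
To test whether the planner's join choice is actually best, you can disable a strategy for the current session and compare plans. A sketch, assuming a hypothetical departments table joined on dept_id:

SET enable_hashjoin = off;  -- discourage hash joins for this session only
EXPLAIN SELECT e.* FROM employees e JOIN departments d ON e.dept_id = d.id;
RESET enable_hashjoin;      -- restore the default planner behavior

If the forced alternative turns out cheaper in practice, revisit statistics or cost constants rather than leaving the flag off permanently.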

3. Actual vs Estimated

(actual time=0.018..0.018 rows=0 loops=1)
  • Large discrepancies between estimated and actual rows indicate outdated statistics (see the ANALYZE sketch below)
  • Similarly, high actual time relative to cost estimates suggests planner miscalculations
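
When estimates drift, refreshing statistics is the first remedy; for a skewed column, the per-column sampling target can also be raised (the target of 500 below is illustrative):

ANALYZE employees;  -- re-sample the table and update pg_statistic
ALTER TABLE employees ALTER COLUMN salary SET STATISTICS 500;  -- finer histogram than the default 100
ANALYZE employees;  -- re-sample with the new target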

Identifying Bottlenecks

  • Sequential Scans on Large Tables
    • Look for Seq Scan on tables with >1,000 rows
    • Solution: Add appropriate indexes (see the catalog queries after this list)
  • Sort/Merge Operations
    • Check work_mem settings if sorts spill to disk (EXPLAIN ANALYZE shows Sort Method: external merge)
  • Lock Contention
    • Use pg_stat_activity to identify blocking transactions
  • Suboptimal Join Strategies
    • Nested Loop with large inner tables
    • Hash Join exceeding work_mem
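
Two catalog queries help confirm these suspicions; the views and functions are standard, the LIMIT is illustrative:

-- Tables read sequentially most often: candidates for new indexes
SELECT relname, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;

-- Sessions currently blocked, and which backends block them (PostgreSQL 9.6+)
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;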

Optimizer Decision Factors

  • Statistics from pg_statistic (updated via ANALYZE; readable through the pg_stats view queried after this list)
  • Cost constants in postgresql.conf (cpu_tuple_cost, random_page_cost)
  • Data distribution (histogram bounds for value distribution)
  • Index availability (only valid, ready indexes are considered)
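
These statistics can be inspected directly; a minimal sketch against the employees example from above:

SELECT attname, n_distinct, most_common_vals, histogram_bounds
FROM pg_stats
WHERE tablename = 'employees' AND attname = 'salary';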

Advanced Analysis Tools

  • Visual Explain (pgAdmin, DBeaver, other PostgreSQL GUI tools)
    • Graphical plan representation
    • Color-coded costly nodes
  • Plan stabilization
ALTER SYSTEM SET plan_cache_mode = 'force_custom_plan';

Forces custom plans for prepared statements instead of cached generic plans (PostgreSQL 12+)
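
Since plan_cache_mode only affects prepared statements, its effect can be observed with EXPLAIN EXECUTE; the statement name and parameter below are illustrative:

PREPARE emp_by_salary (numeric) AS
  SELECT * FROM employees WHERE salary > $1;
EXPLAIN EXECUTE emp_by_salary (50000);  -- shows the plan actually chosen for this parameter
DEALLOCATE emp_by_salary;               -- clean up the prepared statement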

  • Wait Event Analysis
SELECT wait_event_type, COUNT(*) FROM pg_stat_activity GROUP BY 1;

Identify system resource contention

Optimization Workflow

  1. Run EXPLAIN ANALYZE on the slow query
  2. Identify highest-cost node in plan tree
  3. Check for:
    • Sequential scans where indexes exist
    • Sort operations exceeding work_mem
    • Incorrect row estimates
  4. Test solutions:
    • Add missing indexes
    • Refresh statistics with ANALYZE
    • Adjust cost parameters
  5. Finally, re-run the analysis to verify improvements (a condensed example follows)
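
A condensed pass through the workflow might look like this; the index name is illustrative:

EXPLAIN ANALYZE SELECT * FROM employees WHERE salary > 50000;  -- step 1: capture the baseline plan
CREATE INDEX idx_employees_salary ON employees (salary);       -- step 4: add the missing index
ANALYZE employees;                                             -- step 4: refresh statistics
EXPLAIN ANALYZE SELECT * FROM employees WHERE salary > 50000;  -- step 5: check whether an Index Scan now replaces the Seq Scan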

By systematically analyzing these elements, you can pinpoint why PostgreSQL’s optimizer chooses specific execution paths and address underlying performance issues.


