Fine-Tuning PostgreSQL: Understanding Selectivity and Cardinality Estimations

PostgreSQL’s query optimizer makes decisions about query plans based on selectivity and cardinality estimates. These estimates are critical for determining the most efficient way to retrieve data. Here’s an explanation of how PostgreSQL’s optimizer arrives at selectivity and cardinality estimates:

Selectivity Estimation:

Selectivity refers to the fraction of rows in a table that satisfy a particular condition in a query’s WHERE clause. The selectivity estimate helps the optimizer choose the most efficient access methods (e.g., index scans, sequential scans) and join strategies (e.g., nested loop joins, hash joins).
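The arithmetic behind this is simple: the planner multiplies the estimated selectivity by the table’s row count to predict how many rows survive the filter. A minimal Python sketch of that step (illustrative only, not PostgreSQL’s actual C code; the row-count figure is hypothetical):

```python
# Illustrative sketch: the planner multiplies a WHERE-clause selectivity
# estimate by the table's total row count (pg_class.reltuples) to get an
# expected row count, clamping the result to at least one row.

def estimated_rows(reltuples: float, selectivity: float) -> int:
    """Expected rows surviving a filter; the planner never estimates below 1."""
    return max(1, round(reltuples * selectivity))

# A 1,000,000-row table with a filter estimated to keep 0.5% of its rows:
print(estimated_rows(1_000_000, 0.005))  # 5000
```

This estimated row count is what appears in the `rows=` field of EXPLAIN output, and it feeds directly into the cost comparison between a sequential scan and an index scan.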

  1. Statistics: PostgreSQL maintains statistics about the distribution of data in each column, gathered by ANALYZE and exposed through the pg_stats view. Key statistics include the most-common-values (MCV) list with its frequencies, the fraction of NULLs, the number of distinct values, and a histogram of the remaining values. The optimizer uses these statistics to estimate selectivity.
  2. Histograms: For columns with histograms, PostgreSQL estimates the selectivity of range conditions (e.g., <, <=, >, >=, BETWEEN) by locating the comparison value within the histogram’s equal-frequency buckets and interpolating within the matching bucket.
  3. Operators: Each comparison operator (e.g., =, <, >, LIKE) is linked to its own selectivity estimation function, so different operators applied to the same column can produce very different estimates.
  4. AND/OR Conditions: When multiple conditions are combined with AND or OR, PostgreSQL estimates each condition’s selectivity separately and combines them assuming the conditions are independent: AND multiplies the selectivities, while OR adds them and subtracts the overlap (inclusion-exclusion).
  5. Expression Indexes: For WHERE clauses on expressions (e.g., a function of a column), PostgreSQL can use a matching expression index; ANALYZE gathers statistics on the indexed expression itself, giving the planner selectivity estimates it could not derive from the raw column statistics.
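Point 4 above can be made concrete. Under the independence assumption, combining two clause selectivities is plain arithmetic, sketched here in Python (an illustration of the rule, not PostgreSQL’s implementation):

```python
# Illustrative sketch: PostgreSQL combines clause selectivities assuming
# the clauses are statistically independent. AND multiplies the fractions;
# OR uses inclusion-exclusion so overlapping rows are not counted twice.

def and_selectivity(s1: float, s2: float) -> float:
    return s1 * s2

def or_selectivity(s1: float, s2: float) -> float:
    return s1 + s2 - s1 * s2

# Two filters that each keep 10% and 20% of the rows, respectively:
print(round(and_selectivity(0.1, 0.2), 2))  # 0.02 -> 2% of rows
print(round(or_selectivity(0.1, 0.2), 2))   # 0.28 -> 28% of rows
```

The independence assumption is exactly why correlated columns (e.g., city and zip code) can produce badly skewed estimates: the true combined selectivity of correlated AND conditions is higher than the product of the individual selectivities.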

Cardinality Estimation:

Cardinality refers to the number of distinct values in a column or the number of rows returned by a query. Accurate cardinality estimation is crucial for optimizing join orders and deciding on join strategies.
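The connection between distinct-value counts and row estimates can be sketched for the common case of an equality condition against a value that is not in the column’s most-common-values list: PostgreSQL assumes the remaining rows are spread evenly across the remaining distinct values. An illustrative Python sketch (the numbers are hypothetical; this mirrors the documented behavior, not the server’s code):

```python
# Illustrative sketch: for `col = constant` where the constant is NOT one of
# the most common values, PostgreSQL spreads the rows not covered by the MCV
# list evenly across the distinct values not in that list.

def eq_selectivity(n_distinct: int, mcv_freqs: list[float]) -> float:
    """Selectivity of an equality match against a non-MCV value."""
    remaining_frac = 1.0 - sum(mcv_freqs)             # rows not covered by MCVs
    remaining_distinct = n_distinct - len(mcv_freqs)  # values not in the MCV list
    return remaining_frac / remaining_distinct

# 50 distinct values; the 3 most common cover 40% of the rows:
sel = eq_selectivity(50, [0.20, 0.12, 0.08])
print(round(sel, 4))  # 0.6 / 47, about 0.0128
```

When the constant *is* in the MCV list, the planner can simply use that value’s recorded frequency, which is why MCV lists make estimates for skewed columns much more accurate.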

  1. Statistics: PostgreSQL tracks the estimated number of distinct values per column (n_distinct in pg_stats) and the total row count per table (reltuples in pg_class). These statistics anchor the cardinality estimates for query results.
  2. Join Cardinality: When joining tables, PostgreSQL estimates the expected number of rows in the join result from the input cardinalities, the join conditions, and the statistics of the joined columns (in particular their distinct-value counts).
  3. Aggregate Functions: When aggregate functions like COUNT, SUM, AVG, etc., are used in queries, PostgreSQL estimates the cardinality of the aggregated result based on statistics and grouping criteria.
  4. Subqueries: PostgreSQL estimates the cardinality of subquery results, which can affect the overall cardinality estimation of the main query.
  5. Sampling: The statistics themselves come from sampling: ANALYZE reads a bounded random sample of each table rather than scanning it entirely, so estimates for very large or heavily skewed tables carry sampling error.
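Point 2 above can be sketched for the common equijoin case: the estimated join selectivity is roughly the reciprocal of the larger of the two distinct-value counts, applied to the Cartesian-product size. An illustrative Python sketch (hypothetical table sizes; a simplification of the planner’s actual logic, which also accounts for MCV overlap and NULLs):

```python
# Illustrative sketch: for an equijoin t1.a = t2.b, the selectivity is
# estimated as roughly 1 / max(n_distinct(a), n_distinct(b)), applied to
# the size of the Cartesian product of the two inputs.

def equijoin_rows(rows1: float, ndist1: int, rows2: float, ndist2: int) -> int:
    selectivity = 1.0 / max(ndist1, ndist2)
    return max(1, round(rows1 * rows2 * selectivity))

# 10,000 orders joined to 500 customers on customer_id
# (500 distinct ids on both sides): every order matches one customer.
print(equijoin_rows(10_000, 500, 500, 500))  # 10000
```

Because these join estimates multiply through every level of a multi-way join, a bad distinct-count estimate on one column can derail the join order for the whole query.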

Challenges and Tuning:

Estimating selectivity and cardinality accurately can be challenging, especially for complex queries or when dealing with skewed data distributions. Inaccurate estimations can lead to suboptimal query plans. Database administrators can improve estimation accuracy by:

  • Regularly updating statistics: Run ANALYZE (or rely on autovacuum’s automatic analyze) so the statistics keep up with changes in data distribution.
  • Analyzing complex queries: Use EXPLAIN (ANALYZE) to compare the planner’s estimated row counts against the actual counts and pinpoint where the estimates diverge from real query behavior.
  • Steering the planner: Core PostgreSQL deliberately provides no query-hint syntax, but the planner can be influenced when necessary through cost parameters and the enable_* settings (e.g., enable_seqscan), or via the third-party pg_hint_plan extension.
  • Adjusting statistics targets: Raise the per-column statistics target (ALTER TABLE ... ALTER COLUMN ... SET STATISTICS) or default_statistics_target so ANALYZE collects larger MCV lists and finer-grained histograms for columns with skewed distributions.

Understanding how PostgreSQL’s optimizer estimates selectivity and cardinality is crucial for database administrators and developers to fine-tune query performance and optimize their database systems effectively.

About Shiv Iyer
Open Source Database Systems Engineer with a deep understanding of Optimizer Internals, Performance Engineering, Scalability and Data SRE. Shiv currently is the Founder, Investor, Board Member and CEO of multiple Database Systems Infrastructure Operations companies in the Transaction Processing Computing and ColumnStores ecosystem. He is also a frequent speaker in open source software conferences globally.