eBPF-based PostgreSQL tracing

There's a good chance you're reading this because something around eBPF-based PostgreSQL tracing just broke, or you suspect it's about to. Either way, this post is the runbook we hand new senior PostgreSQL engineers at MinervaDB when they encounter it for the first time.

Quick answer

Quick recipe: capture the current behavior in pg_stat_statements, identify whether the issue lives in the planner, autovacuum, replication, or I/O, then make one targeted change and re-measure. Wrap the fix with an alert and a one-page runbook entry so the next engineer on call doesn't have to rediscover it. Worked examples and SQL snippets follow.
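
Before touching anything, capture a baseline you can diff against later. A minimal sketch, assuming PostgreSQL 13 or newer (where total_exec_time replaced total_time) and pg_stat_statements already installed:

-- Top 10 statements by total execution time; save the output with a timestamp.
SELECT queryid,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;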

What is eBPF-based PostgreSQL tracing?

At its core, eBPF-based PostgreSQL tracing means attaching small, safe probe programs to kernel tracepoints and PostgreSQL functions (kprobes, tracepoints, uprobes) to see how the database behaves under production load without patching or restarting the server. It draws from database monitoring, intersects with PostgreSQL observability, and depends heavily on performance monitoring. Ignore any of those three and the rest of your tuning won't hold.

In practice, this kind of tracing touches five PostgreSQL internals: shared buffers, WAL, the cost-based planner, MVCC and autovacuum, and the process-per-connection backend model. We'll move through each in the order they tend to fail, which usually isn't the order they appear in reference documentation.

Why eBPF-based PostgreSQL tracing matters in production

Here's what we see across hundreds of production PostgreSQL engagements: the problems this tracing catches rarely fail in dramatic, obvious ways. They show up as a creeping degradation that nobody notices until customer support tickets pile up. The diagnostic flow below is built to catch it earlier.

Three production patterns reliably create the need for this kind of tracing. The first is a multi-tenant SaaS where one tenant's traffic destabilizes shared PostgreSQL resources for everyone else. The second is a regulated workload where the operational change you'd normally make conflicts with an audit constraint. The third is a cost-optimization mandate that arrives the same week as a P0 latency incident. The right answer depends on which one you're actually facing.

A useful mental model: every PostgreSQL change has a cost, a blast radius, and a reversibility. The cheapest, smallest, most reversible change that actually moves your metric is almost always the right first step. It may not be the change you eventually want in steady state, but it buys you the time and confidence to make the bigger one safely.

How eBPF-based tracing works with PostgreSQL

The PostgreSQL behavior you'll observe through eBPF is governed by five subsystems. Each can quietly affect throughput in ways that aren't visible from query logs alone.

  • Buffer manager. The shared_buffers pool decides what stays hot in PostgreSQL memory versus the OS page cache.
  • Write-ahead log. Every change is written to WAL before it touches the heap. Replication, PITR, and crash recovery all depend on it.
  • Planner and statistics. The cost-based optimizer interacts with statistics gathered by ANALYZE to choose query plans.
  • Autovacuum. Background workers reclaim dead tuples produced by MVCC. Mistuned autovacuum is the single most common cause of PostgreSQL performance regressions.
  • Process model. PostgreSQL forks a backend per connection. work_mem is allocated per backend, which is exactly the surprise that takes down clusters during connection storms; a worst-case estimate is sketched just below the list.
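
A quick way to make that last point concrete: multiply max_connections by work_mem for a rough worst case on sort and hash memory. The sketch below understates reality, since one complex query can allocate several work_mem-sized buffers, but it shows why connection storms hurt.

-- Rough worst case if every connection slot ran one work_mem allocation at once.
SELECT current_setting('max_connections') AS max_conn,
       current_setting('work_mem') AS work_mem,
       pg_size_pretty(current_setting('max_connections')::bigint
                      * pg_size_bytes(current_setting('work_mem'))) AS worst_case;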

Knowing which layer your symptom belongs to determines the fix. A p99 spike caused by checkpoint I/O is configuration. A regression caused by stale planner statistics is operational. A correlation between table growth and write latency is almost always autovacuum starvation. The diagnostic queries below help you place the symptom on this map before you change anything.
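
To place the symptom before changing anything, a first pass like the following is usually enough. It is a sketch, not a full health check: the views are standard, but what counts as too many requested checkpoints or dead tuples is a judgment call for your workload.

-- Checkpoint pressure: requested checkpoints outnumbering timed ones usually
-- means max_wal_size is too small (PostgreSQL 16 and earlier; version 17 moved
-- these counters to pg_stat_checkpointer).
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;

-- Statistics freshness and dead-tuple buildup for the busiest tables.
SELECT relname, n_live_tup, n_dead_tup, last_autoanalyze, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;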

How to diagnose issues with eBPF and the statistics views

Diagnosis comes before tuning, every time. Skip this step and you'll be guessing for the next three days. The PostgreSQL system catalogs and statistics views are the single best diagnostic surface in any open-source database, and the queries below are the standard first-pass diagnostics for production PostgreSQL.

Step 1. Trace pread64 latency on PostgreSQL backends with bpftrace.

bpftrace -e '
/* Per-read latency histogram, in microseconds, for postgres backends.
   vfs_read sits underneath both read() and pread64(), so this captures the
   data-file reads that miss shared_buffers. */
kprobe:vfs_read /comm == "postgres"/ { @start[tid] = nsecs; }
kretprobe:vfs_read /@start[tid]/
 { @us = hist((nsecs - @start[tid]) / 1000); delete(@start[tid]); }
'

Read the output with two questions in mind. Does the shape match what you expected? And what's the worst-case row? The shape tells you whether your mental model of the cluster matches reality. The worst-case row tells you where the next surprise will come from in your wait event analysis workflow.
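
If you need to attribute latency to statements rather than syscalls, the next step is a uprobe on the server binary. The sketch below is hedged: the binary path assumes a Debian-style PostgreSQL 16 package, and exec_simple_query only covers the simple query protocol, so extended-protocol traffic (prepared statements) goes through exec_execute_message instead.

bpftrace -e '
/* Wall-clock time per simple-protocol statement, in microseconds.
   Adjust the binary path to your installation before running. */
uprobe:/usr/lib/postgresql/16/bin/postgres:exec_simple_query { @t[tid] = nsecs; }
uretprobe:/usr/lib/postgresql/16/bin/postgres:exec_simple_query /@t[tid]/
 { @query_us = hist((nsecs - @t[tid]) / 1000); delete(@t[tid]); }
'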

How to fix what you find, step by step

eBPF integration for PostgreSQL observability

There are three jobs here, not one: change the right thing, ship the change safely, and verify it actually moved the metric. Most failed PostgreSQL fixes we see in production got two of the three right.

On managed PostgreSQL services like AWS RDS, Aurora, Cloud SQL, and Azure Flexible Server, schema changes still happen via plain SQL. Configuration changes happen through parameter groups. Some parameters take effect immediately, others require a reboot. Verify with SELECT name, context FROM pg_settings WHERE name = '<param>'; before scheduling the change window.
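
The context column is what tells you whether a restart is needed. A minimal check, with three common parameters substituted in as examples:

-- 'postmaster' requires a restart, 'sighup' only a reload,
-- 'user' and 'superuser' can change per session.
SELECT name, setting, context
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'max_connections');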

Step 2. Top metrics to alert on: connection saturation, lag, long-running transactions.

-- Connection saturation: active and total backends versus max_connections.
SELECT count(*) FILTER (WHERE state = 'active') AS active,
       count(*) AS total,
       (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS cap
FROM pg_stat_activity;

-- Replication lag in seconds (run on a replica).
SELECT extract(epoch FROM now() - pg_last_xact_replay_timestamp()) AS lag_s;

-- Longest-running open transactions, oldest first.
SELECT pid, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_age DESC LIMIT 10;

Step 3. Datadog Agent config for PostgreSQL with custom queries.

init_config:

instances:
  - host: prod-pg.example.local
    port: 5432
    username: datadog
    password: "<from-vault>"
    dbname: appdb
    collect_database_size_metrics: true
    relations:
      - relation_regex: .*
    custom_queries:
      - metric_prefix: postgresql.repl
        query: |
          SELECT extract(epoch FROM now() - pg_last_xact_replay_timestamp())
        columns:
          - name: lag_seconds
            type: gauge

Step 4. Validation. Re-run your baseline query and compare the results. If the change didn't move the metric you set out to improve, revert before chasing a second hypothesis. Tuning one PostgreSQL parameter at a time is the only way to keep your sanity, and your audit trail, intact.

Production guardrails and monitoring

Now wrap the fix. In our experience, teams that skip guardrails ship the same fix three times across two years because the regression keeps coming back unnoticed. Spend the 30 minutes now to save the future hours.

  • Add a Datadog or Prometheus alert on the metric you just improved at a threshold 20 percent above your new baseline; a sample Prometheus rule is sketched after this list.
  • Capture an EXPLAIN (ANALYZE, BUFFERS) for any regressed query into your runbook so the on-call engineer has the next-step diagnostic ready.
  • Document the rollback path: the exact SQL or ALTER SYSTEM sequence to restore the prior state if the change misbehaves.
  • Set a calendar reminder to re-validate after the next major PostgreSQL version upgrade. Planner behaviors and default GUC values do change.
  • Record the pg_stat_statements query ID and a representative plan in your team wiki so you can compare against future regressions in log analysis.
  • Schedule a follow-up review 30 days after the change to confirm the improvement persisted under realistic production traffic.
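
For the first guardrail, a Prometheus alerting rule in roughly this shape works against postgres_exporter's default metrics. Treat it as a sketch: pg_stat_activity_count and pg_settings_max_connections are standard exporter series, but the 80 percent threshold and the label matching are assumptions to adapt to your own baseline.

groups:
  - name: postgresql-guardrails
    rules:
      - alert: PostgresConnectionSaturation
        # Fires when backends exceed 80% of max_connections for 5 minutes.
        expr: sum(pg_stat_activity_count) by (instance) / on (instance) pg_settings_max_connections > 0.80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Connection saturation on {{ $labels.instance }}"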

Going deeper with cross-checks

Once the basic fix is in place, the next layer of validation cross-checks against complementary signals. The commands below are the ones we run on production PostgreSQL deployments to confirm the change has propagated everywhere it should.

Run postgres_exporter via Docker with a custom queries file.

docker run -d --name postgres_exporter \
 -e DATA_SOURCE_NAME="postgresql://exporter:pass@prod-pg:5432/postgres?sslmode=require" \
 -v $(pwd)/queries.yaml:/queries.yaml \
 -p 9187:9187 \
 quay.io/prometheuscommunity/postgres-exporter \
 --extend.query-path=/queries.yaml
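
The queries.yaml mounted above isn't shown in the command, so here is a minimal sketch of what it could contain, in postgres_exporter's custom-query format. The pg_replication namespace name and the description text are placeholders; the coalesce keeps the metric at zero on a primary, where pg_last_xact_replay_timestamp() is NULL.

# queries.yaml for postgres_exporter --extend.query-path
pg_replication:
  query: "SELECT coalesce(extract(epoch FROM now() - pg_last_xact_replay_timestamp()), 0) AS lag_seconds"
  master: true
  metrics:
    - lag_seconds:
        usage: "GAUGE"
        description: "Replication lag in seconds (zero on a primary)"

Prometheus will expose this as pg_replication_lag_seconds, which is the series to alert on.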

Common mistakes and anti-patterns

Here's the short list of mistakes we see most often. Calling them anti-patterns sounds harsh, but every one of these started as a reasonable decision under different circumstances and outlived the circumstances that justified it.

  • Tuning parameters by copy-pasting from a 2014 blog post without re-validating against PostgreSQL 14, 15, 16, or 17 behavior.
  • Changing more than one PostgreSQL parameter at a time without measurement.
  • Forgetting to ANALYZE after a large data load, then wondering why the planner picked a sequential scan over your shiny new index.
  • Trusting an unverified backup or an untested failover path.
  • Treating autovacuum as something to disable rather than something to tune.
  • Allowing developers to write production queries with no EXPLAIN review.

PostgreSQL on AWS, Aurora, GCP, Azure

On managed PostgreSQL services, the techniques in this guide apply with three adjustments. Configuration changes happen via parameter groups instead of ALTER SYSTEM. OS-level interventions like kernel tuning and ZFS aren't available. And Aurora's storage-decoupled architecture changes the calculus for several configuration parameters because the storage layer doesn't use the OS page cache the same way self-managed PostgreSQL does.

Specifics worth memorizing. AWS RDS PostgreSQL on gp3 storage gives you provisioned IOPS, but the maximum is per-volume, not per-instance. That fact surprises customers scaling vCPU and expecting linear I/O. Google AlloyDB's columnar engine is opt-in per table; turning it on is a one-line SQL call, but the analytical workload eligibility rules aren't always obvious until you read the EXPLAIN plan. Azure Database for PostgreSQL Flexible Server exposes a broader set of extensions than RDS or Aurora, including pg_partman, pgvector, TimescaleDB, and Citus on the Citus-flavored variant.

When this approach is the wrong starting point

This technique assumes a roughly normal OLTP PostgreSQL workload with healthy autovacuum. It's the wrong starting point if your workload is dominated by long analytical queries against a Citus or TimescaleDB hypertable, if you run on Aurora's storage-decoupled architecture (where buffer-pool semantics differ), or if the symptom is actually a network or kernel-level issue masquerading as a PostgreSQL problem.

Another pattern we see often. A SaaS company had been paying for Datadog for three years without enabling the PostgreSQL integration's relation_metrics. Turning them on instantly surfaced 11 unindexed sequential scans on tables larger than 100 GB.

Frequently asked questions

What are the top PostgreSQL metrics to alert on?

Replication lag, connection saturation, transaction-id age, autovacuum staleness, p99 query latency, and disk space free. Everything else belongs on dashboards but not on your pager.

Should production teams use Datadog or pganalyze for PostgreSQL monitoring?

Datadog fits teams that already standardize on it for application observability. pganalyze fits teams where PostgreSQL is the most important system you operate. Its query-level analysis is unmatched in the market.

Is pg_stat_statements safe to leave enabled in production?

Yes. pg_stat_statements is the single most useful PostgreSQL extension for performance work. Overhead is well under 1 percent on every workload we have measured. Enable it on every PostgreSQL deployment.
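
Enabling it requires a restart, because the library must be preloaded. A minimal setup, assuming superuser access, looks like this; note that ALTER SYSTEM overwrites any existing shared_preload_libraries list, so append rather than replace if other libraries are already loaded.

-- Requires a server restart to take effect.
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- After the restart, create the extension in each database you want to track.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;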

How often should I rotate PostgreSQL logs?

Daily, with at least 14 days of online retention, and longer cold storage if you ship logs to a SIEM. Use log_rotation_age = 1d plus a logrotate or vector pipeline to ship to your log analysis platform.
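
In postgresql.conf, that policy looks roughly like the following. The filename pattern is an assumption; the 14-day retention itself lives in whatever ships the files off the host.

# Rotate the server log daily; date-based names keep shipped files sorted.
logging_collector = on
log_rotation_age = 1d
log_rotation_size = 0
log_filename = 'postgresql-%Y-%m-%d.log'
log_truncate_on_rotation = off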

What is PostgreSQL wait event analysis?

PostgreSQL records what each backend is waiting on: locks, I/O, IPC. Sampling pg_stat_activity.wait_event at one-second intervals tells you where contention lives, often more clearly than queries alone.
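
A one-shot version of that sample can be as simple as the query below; in psql, following it with \watch 1 gives the one-second cadence mentioned above.

-- Snapshot of what active backends are waiting on right now.
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE state = 'active' AND wait_event IS NOT NULL
GROUP BY wait_event_type, wait_event
ORDER BY count(*) DESC;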

Where should I start if I'm new to eBPF-based PostgreSQL tracing?

Read this guide end to end, then run the diagnostic SQL queries against a non-production PostgreSQL database to build intuition. Most engineers we coach are productive within a day. Bookmark this page, then move on to the cluster posts linked below for deeper dives.

Further Reading

Using eBPF to Troubleshoot Process Contention in PostgreSQL: A Guide to Monitoring Locks, CPU, and I/O Performance

Troubleshooting Redis Performance Using eBPF

Minimizing Performance Impact: Best Practices for PostgreSQL Tracing and Monitoring

Full-stack Database Infrastructure Architecture, Engineering and Operations Consultative Support(24*7) Provider for PostgreSQL, MySQL, MariaDB, MongoDB, ClickHouse, Trino, SQL Server, Cassandra, CockroachDB, Yugabyte, Couchbase, Redis, Valkey, NoSQL, NewSQL, SAP HANA, Databricks, Amazon Resdhift, Amazon Aurora, CloudSQL, Snowflake and AzureSQL with core expertize in Performance, Scalability, High Availability, Database Reliability Engineering, Database Upgrades/Migration, and Data Security.