Pgpool-II Configuration for Optimal Performance: Complete Monitoring and Troubleshooting Guide

Pgpool-II serves as a critical middleware component between applications and PostgreSQL databases, providing connection pooling, load balancing, and high availability features. Proper configuration and monitoring of Pgpool-II operations are essential for maintaining optimal database performance and quickly resolving issues in production environments.

Understanding Pgpool-II Performance Architecture

Pgpool-II operates as a proxy server that manages database connections through multiple processes and shared memory segments. The performance characteristics depend on several key factors:

  • Connection pool management and process allocation
  • Load balancing algorithms and backend node distribution
  • Memory usage patterns and buffer management
  • Network latency and connection overhead
  • Query routing efficiency and caching mechanisms

Understanding these components enables administrators to fine-tune configurations for specific workload patterns and performance requirements.

Core Performance Configuration Parameters

Connection Pool Optimization

Configure connection pooling parameters for maximum efficiency:

# /etc/pgpool2/pgpool.conf

# Process and connection management
num_init_children = 64
max_pool = 8
child_life_time = 300
child_max_connections = 0
connection_life_time = 600
client_idle_limit = 300

# Connection queue management
listen_backlog_multiplier = 2
serialize_accept = off
reserved_connections = 5

# Memory allocation
memqcache_total_size = 256MB
memqcache_max_num_cache = 10000
memqcache_expire = 3600
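
A quick sanity check on the sizing above: Pgpool-II may open up to num_init_children * max_pool connections to each backend, so that product should stay below the backend's max_connections minus superuser_reserved_connections (with extra headroom if query cancellation is used). A minimal sketch, assuming a backend is reachable directly on port 5432 and the config path used throughout this guide:

#!/bin/bash
# Rough pool-sizing check: num_init_children * max_pool should not exceed
# the backend's max_connections minus superuser_reserved_connections
PGPOOL_CONF="/etc/pgpool2/pgpool.conf"

children=$(awk -F= '/^num_init_children/ {gsub(/ /, "", $2); print $2}' "$PGPOOL_CONF")
pool=$(awk -F= '/^max_pool/ {gsub(/ /, "", $2); print $2}' "$PGPOOL_CONF")

# Ask the backend (not pgpool) for its connection limits
max_conn=$(sudo -u postgres psql -p 5432 -t -A -c "SHOW max_connections;")
reserved=$(sudo -u postgres psql -p 5432 -t -A -c "SHOW superuser_reserved_connections;")

needed=$((children * pool))
allowed=$((max_conn - reserved))

echo "Pgpool-II may open up to $needed backend connections; backend allows $allowed"
if [ "$needed" -gt "$allowed" ]; then
    echo "WARNING: lower num_init_children/max_pool or raise max_connections"
fi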

Load Balancing Configuration

Optimize load balancing for read query distribution:

# Load balancing settings
load_balance_mode = on
ignore_leading_white_space = on
white_function_list = 'now,current_timestamp,current_date'
black_function_list = 'nextval,setval,lastval'

# Backend weight configuration
backend_weight0 = 1
backend_weight1 = 1
backend_weight2 = 0.5

# Statement-level load balancing
statement_level_load_balance = on
allow_sql_comments = on
disable_load_balance_on_write = 'transaction'
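
To confirm that reads are actually distributed according to these weights, watch the per-node SELECT counters that Pgpool-II reports through its SHOW POOL_NODES command. A quick check, assuming Pgpool-II listens on port 9999 and the pgpool user can connect:

# Connect through Pgpool-II (not directly to PostgreSQL) and inspect
# status, lb_weight and select_cnt for every backend node
psql -h localhost -p 9999 -U pgpool -d postgres -c "SHOW POOL_NODES;"

# Re-run under load and verify that select_cnt grows on the weighted nodes
watch -n 5 'psql -h localhost -p 9999 -U pgpool -d postgres -c "SHOW POOL_NODES;"'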

Memory and Buffer Management

Configure memory usage for optimal performance:

# Shared memory settings
# Note: shared_preload_libraries is a PostgreSQL backend parameter; set it
# in each backend's postgresql.conf if required. It is not read from
# pgpool.conf, so it is shown here commented out for reference only.
# shared_preload_libraries = 'pgpool_adm,pgpool_recovery'
log_min_messages = warning
log_connections = off
log_hostname = off

# Buffer and cache settings (in-memory query cache)
memory_cache_enabled = on
memqcache_method = 'memcached'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211

# Query cache configuration
cache_safe_memqcache_table_list = 'public.users,public.products'
cache_unsafe_memqcache_table_list = 'public.sessions,public.logs'
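
Because memqcache_method = 'memcached' depends on an external memcached daemon, it is worth verifying both that the daemon answers on the configured host and port and that the cache is actually being hit. A small sketch, assuming a local memcached on 11211 and a Pgpool-II release that supports the SHOW POOL_CACHE command:

# Confirm memcached is reachable and look at its hit/miss counters
printf 'stats\r\nquit\r\n' | nc localhost 11211 | grep -E 'curr_items|get_hits|get_misses'

# Ask Pgpool-II for its own query cache statistics
psql -h localhost -p 9999 -U pgpool -d postgres -c "SHOW POOL_CACHE;"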

Advanced Performance Tuning

Process Management Optimization

Implement dynamic process management for varying workloads:

# Create performance tuning script
sudo nano /usr/local/bin/pgpool_performance_tuner.sh

#!/bin/bash
# Dynamic Pgpool-II performance tuning script

PGPOOL_CONF="/etc/pgpool2/pgpool.conf"
LOG_FILE="/var/log/pgpool_performance.log"

# Get current connection count
ACTIVE_CONNECTIONS=$(sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -t -c "SELECT count(*) FROM pg_stat_activity WHERE state = 'active';" 2>/dev/null | tr -d ' ')

# Get current pool processes
# pcp_proc_count prints the child process IDs, so count them to get a number
POOL_PROCESSES=$(pcp_proc_count -h localhost -p 9898 -U pgpool 2>/dev/null | wc -w)

# Dynamic scaling based on load
if [ "$ACTIVE_CONNECTIONS" -gt 80 ]; then
    if [ "$POOL_PROCESSES" -lt 128 ]; then
        echo "$(date): High load detected, increasing pool processes" >> $LOG_FILE
        sed -i 's/num_init_children = .*/num_init_children = 128/' $PGPOOL_CONF
        # num_init_children is only read at startup; a reload does not apply it,
        # so restart Pgpool-II (note: this briefly interrupts client connections)
        systemctl restart pgpool2
    fi
elif [ "$ACTIVE_CONNECTIONS" -lt 20 ]; then
    if [ "$POOL_PROCESSES" -gt 32 ]; then
        echo "$(date): Low load detected, decreasing pool processes" >> $LOG_FILE
        sed -i 's/num_init_children = .*/num_init_children = 32/' $PGPOOL_CONF
        # As above, a full restart is required for num_init_children to change
        systemctl restart pgpool2
    fi
fi

# Log performance metrics
echo "$(date): Active connections: $ACTIVE_CONNECTIONS, Pool processes: $POOL_PROCESSES" >> $LOG_FILE

Query Routing Optimization

Configure intelligent query routing for optimal performance:

# Query routing configuration
# (Pgpool-II 4.2 and later replace the two parameters below with
#  backend_clustering_mode = 'streaming_replication')
master_slave_mode = on
master_slave_sub_mode = 'stream'

# Read query routing
sr_check_period = 10
sr_check_user = 'pgpool'
sr_check_password = 'pgpool_password'
sr_check_database = 'postgres'

# Delay threshold for read queries
delay_threshold = 10000000

# Note: parallel_mode and the old enable_query_cache parameter have been
# removed from recent Pgpool-II releases; the in-memory query cache is
# governed by memory_cache_enabled and the memqcache_* settings shown earlier.
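
delay_threshold is measured in bytes of WAL: once a standby lags the primary by more than this amount, Pgpool-II stops sending read queries to it. To see how current replication lag compares with the threshold, measure it on the primary (assumed here to be reachable directly on port 5432):

# Per-standby replication lag in bytes, measured on the primary
sudo -u postgres psql -p 5432 -c "
    SELECT application_name,
           client_addr,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
    FROM pg_stat_replication;"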

Comprehensive Monitoring Setup

Real-time Performance Monitoring

Implement comprehensive monitoring for Pgpool-II operations:

# Create monitoring script
sudo nano /usr/local/bin/pgpool_monitor.sh

#!/bin/bash
# Comprehensive Pgpool-II monitoring script

MONITOR_LOG="/var/log/pgpool_monitor.log"
ALERT_THRESHOLD_CONNECTIONS=80
ALERT_THRESHOLD_RESPONSE_TIME=5000
EMAIL_ALERT="admin@yourdomain.com"

# Function to log with timestamp
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S'): $1" >> $MONITOR_LOG
}

# Check Pgpool-II service status
check_service_status() {
    if ! systemctl is-active --quiet pgpool2; then
        log_message "CRITICAL: Pgpool-II service is not running"
        echo "Pgpool-II service is down" | mail -s "CRITICAL: Pgpool-II Alert" $EMAIL_ALERT
        return 1
    fi
    return 0
}

# Monitor connection pool status
monitor_connections() {
    local pool_info=$(pcp_pool_status -h localhost -p 9898 -U pgpool 2>/dev/null)
    local active_connections=$(echo "$pool_info" | grep -c "CONNECT_OK")
    local total_processes=$(pcp_proc_count -h localhost -p 9898 -U pgpool 2>/dev/null | wc -w)

    log_message "Active connections: $active_connections, Total processes: $total_processes"

    if [ "$active_connections" -gt $ALERT_THRESHOLD_CONNECTIONS ]; then
        log_message "WARNING: High connection count detected: $active_connections"
        echo "High connection count: $active_connections" | mail -s "WARNING: Pgpool-II Alert" $EMAIL_ALERT
    fi
}

# Monitor backend node status
monitor_backends() {
    local backend_status=$(pcp_node_info -h localhost -p 9898 -U pgpool 0 2>/dev/null)
    local node_count=$(pcp_node_count -h localhost -p 9898 -U pgpool 2>/dev/null)

    for ((i=0; i<$node_count; i++)); do
        local node_info=$(pcp_node_info -h localhost -p 9898 -U pgpool $i 2>/dev/null)
        local node_status=$(echo "$node_info" | cut -d' ' -f3)
        local node_host=$(echo "$node_info" | cut -d' ' -f1)

        if [ "$node_status" != "up" ]; then
            log_message "CRITICAL: Backend node $i ($node_host) is $node_status"
            echo "Backend node $node_host is $node_status" | mail -s "CRITICAL: Backend Alert" $EMAIL_ALERT
        else
            log_message "Backend node $i ($node_host) is healthy"
        fi
    done
}

# Monitor query performance
monitor_query_performance() {
    local avg_response_time=$(sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -t -c "
        SELECT COALESCE(AVG(EXTRACT(EPOCH FROM (clock_timestamp() - query_start)) * 1000), 0)
        FROM pg_stat_activity
        WHERE state = 'active' AND query_start IS NOT NULL;" 2>/dev/null | tr -d ' ')

    if [ ! -z "$avg_response_time" ]; then
        log_message "Average query response time: ${avg_response_time}ms"

        if [ $(echo "$avg_response_time > $ALERT_THRESHOLD_RESPONSE_TIME" | bc -l) -eq 1 ]; then
            log_message "WARNING: High average response time: ${avg_response_time}ms"
            echo "High response time detected: ${avg_response_time}ms" | mail -s "WARNING: Performance Alert" $EMAIL_ALERT
        fi
    fi
}

# Monitor memory usage
monitor_memory_usage() {
    local pgpool_memory=$(ps aux | grep pgpool | grep -v grep | awk '{sum += $6} END {print sum/1024}')
    local system_memory=$(free -m | awk 'NR==2{printf "%.1f", $3*100/$2}')

    log_message "Pgpool-II memory usage: ${pgpool_memory}MB, System memory usage: ${system_memory}%"

    if [ $(echo "$system_memory > 90" | bc -l) -eq 1 ]; then
        log_message "WARNING: High system memory usage: ${system_memory}%"
        echo "High memory usage: ${system_memory}%" | mail -s "WARNING: Memory Alert" $EMAIL_ALERT
    fi
}

# Main monitoring execution
main() {
    log_message "Starting Pgpool-II monitoring cycle"

    check_service_status || exit 1
    monitor_connections
    monitor_backends
    monitor_query_performance
    monitor_memory_usage

    log_message "Monitoring cycle completed"
}

# Execute monitoring
main
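
Note that the alerting above relies on the mail command, which only delivers anything if a local MTA or relay is configured. On Debian/Ubuntu style hosts (assumed throughout this guide, given the /etc/pgpool2 paths), a minimal setup and test might look like this; adapt it to your mail infrastructure:

# Install a mail client (this pulls in a local MTA on Debian/Ubuntu) and test it
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y mailutils
echo "Pgpool-II monitor test $(date)" | mail -s "Pgpool-II monitor test" admin@yourdomain.com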

Performance Metrics Collection

Create detailed performance metrics collection:

# Create metrics collection script
sudo nano /usr/local/bin/pgpool_metrics.sh

#!/bin/bash
# Pgpool-II performance metrics collection

METRICS_DIR="/var/log/pgpool_metrics"
TIMESTAMP=$(date '+%Y%m%d_%H%M%S')

# Create metrics directory
mkdir -p $METRICS_DIR

# Collect connection statistics
collect_connection_stats() {
    local output_file="$METRICS_DIR/connections_$TIMESTAMP.log"

    echo "=== Connection Statistics ===" > $output_file
    echo "Timestamp: $(date)" >> $output_file

    # Pool process information
    pcp_proc_info -h localhost -p 9898 -U pgpool 2>/dev/null >> $output_file

    # Connection pool status
    echo -e "\\n=== Pool Status ===" >> $output_file
    pcp_pool_status -h localhost -p 9898 -U pgpool 2>/dev/null >> $output_file

    # Active connections from PostgreSQL
    echo -e "\\n=== Active Database Connections ===" >> $output_file
    sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -c "
        SELECT
            datname,
            usename,
            client_addr,
            state,
            query_start,
            state_change
        FROM pg_stat_activity
        WHERE state != 'idle'
        ORDER BY query_start;" >> $output_file 2>/dev/null
}

# Collect performance statistics
collect_performance_stats() {
    local output_file="$METRICS_DIR/performance_$TIMESTAMP.log"

    echo "=== Performance Statistics ===" > $output_file
    echo "Timestamp: $(date)" >> $output_file

    # Query statistics
    echo -e "\\n=== Query Statistics ===" >> $output_file
    sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -c "
        SELECT
            schemaname,
            tablename,
            seq_scan,
            seq_tup_read,
            idx_scan,
            idx_tup_fetch,
            n_tup_ins,
            n_tup_upd,
            n_tup_del
        FROM pg_stat_user_tables
        ORDER BY seq_scan + idx_scan DESC
        LIMIT 20;" >> $output_file 2>/dev/null

    # Lock statistics
    echo -e "\\n=== Lock Statistics ===" >> $output_file
    sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -c "
        SELECT
            mode,
            count(*) as lock_count
        FROM pg_locks
        GROUP BY mode
        ORDER BY lock_count DESC;" >> $output_file 2>/dev/null
}

# Collect backend node statistics
collect_backend_stats() {
    local output_file="$METRICS_DIR/backends_$TIMESTAMP.log"

    echo "=== Backend Node Statistics ===" > $output_file
    echo "Timestamp: $(date)" >> $output_file

    # Node information
    local node_count=$(pcp_node_count -h localhost -p 9898 -U pgpool 2>/dev/null)

    for ((i=0; i<$node_count; i++)); do
        echo -e "\\n=== Node $i Information ===" >> $output_file
        pcp_node_info -h localhost -p 9898 -U pgpool $i 2>/dev/null >> $output_file
    done

    # Replication statistics
    echo -e "\\n=== Replication Statistics ===" >> $output_file
    sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -c "
        SELECT
            client_addr,
            state,
            sent_lsn,
            write_lsn,
            flush_lsn,
            replay_lsn,
            write_lag,
            flush_lag,
            replay_lag
        FROM pg_stat_replication;" >> $output_file 2>/dev/null
}

# Execute metrics collection
collect_connection_stats
collect_performance_stats
collect_backend_stats

# Cleanup old metrics files (keep last 7 days)
find $METRICS_DIR -name "*.log" -mtime +7 -delete
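
Both the monitoring and metrics scripts are meant to run periodically. After making them executable, schedule them from cron; the intervals below are only illustrative and should be tuned to your environment:

# Make the scripts executable
sudo chmod +x /usr/local/bin/pgpool_monitor.sh /usr/local/bin/pgpool_metrics.sh

# Example root crontab entries (edit with: sudo crontab -e)
*/5  * * * * /usr/local/bin/pgpool_monitor.sh
*/15 * * * * /usr/local/bin/pgpool_metrics.sh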

Troubleshooting Common Issues

Connection Pool Exhaustion

Diagnose and resolve connection pool exhaustion:

# Create connection troubleshooting script
sudo nano /usr/local/bin/pgpool_connection_debug.sh

#!/bin/bash
# Connection pool troubleshooting script

DEBUG_LOG="/var/log/pgpool_connection_debug.log"

# Function to analyze connection patterns
analyze_connections() {
    echo "=== Connection Analysis $(date) ===" >> $DEBUG_LOG

    # Check pool process status
    echo "Pool Process Status:" >> $DEBUG_LOG
    pcp_proc_info -h localhost -p 9898 -U pgpool 2>/dev/null >> $DEBUG_LOG

    # Check for long-running queries
    echo -e "\\nLong-running queries:" >> $DEBUG_LOG
    sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -c "
        SELECT
            pid,
            usename,
            datname,
            client_addr,
            query_start,
            state,
            EXTRACT(EPOCH FROM (now() - query_start)) as duration_seconds,
            left(query, 100) as query_preview
        FROM pg_stat_activity
        WHERE state != 'idle'
        AND EXTRACT(EPOCH FROM (now() - query_start)) > 30
        ORDER BY duration_seconds DESC;" >> $DEBUG_LOG 2>/dev/null

    # Check for idle in transaction connections
    echo -e "\\nIdle in transaction connections:" >> $DEBUG_LOG
    sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -c "
        SELECT
            pid,
            usename,
            datname,
            client_addr,
            state_change,
            EXTRACT(EPOCH FROM (now() - state_change)) as idle_duration
        FROM pg_stat_activity
        WHERE state = 'idle in transaction'
        ORDER BY idle_duration DESC;" >> $DEBUG_LOG 2>/dev/null
}

# Function to resolve connection issues
resolve_connection_issues() {
    # Kill long-running idle transactions
    local idle_pids=$(sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -t -c "
        SELECT pid
        FROM pg_stat_activity
        WHERE state = 'idle in transaction'
        AND EXTRACT(EPOCH FROM (now() - state_change)) > 300;" 2>/dev/null)

    if [ ! -z "$idle_pids" ]; then
        echo "Terminating idle in transaction connections: $idle_pids" >> $DEBUG_LOG
        for pid in $idle_pids; do
            sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -c "SELECT pg_terminate_backend($pid);" 2>/dev/null
        done
    fi

    # Restart pool children if necessary
    # pcp_proc_count lists child PIDs; count them to get the child count
    local active_children=$(pcp_proc_count -h localhost -p 9898 -U pgpool 2>/dev/null | wc -w)
    local max_children=$(grep "num_init_children" /etc/pgpool2/pgpool.conf | cut -d'=' -f2 | tr -d ' ')

    if [ "$active_children" -lt $((max_children / 2)) ]; then
        echo "Reloading Pgpool-II configuration due to low active children count" >> $DEBUG_LOG
        systemctl reload pgpool2
    fi
}

# Execute troubleshooting
analyze_connections
resolve_connection_issues
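
If clients see connection timeouts while the pool itself looks healthy, the listen queue on the Pgpool-II port may be overflowing (its size is governed by listen_backlog_multiplier together with the kernel's net.core.somaxconn). A quick way to inspect it, assuming Pgpool-II listens on port 9999:

# For a listening socket, Recv-Q is the current accept queue length and
# Send-Q its maximum; Recv-Q close to Send-Q means the backlog is full
ss -lnt '( sport = :9999 )'

# Kernel cap on the accept backlog
sysctl net.core.somaxconn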

Load Balancing Issues

Debug load balancing problems:

-- Check load balancing statistics: per-node status, lb_weight and
-- select_cnt (run this through a Pgpool-II connection, e.g. psql -p 9999)
SHOW POOL_NODES;

-- Monitor query distribution
SELECT
    datname,
    usename,
    client_addr,
    backend_start,
    query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY backend_start;
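
When a specific read must not be balanced to a standby (for example, a SELECT that has to observe a write made moments earlier in another session), Pgpool-II recognizes a special leading comment that pins the statement to the primary. A hedged example, using one of the tables referenced earlier in this guide:

# Prefix the statement with Pgpool-II's NO LOAD BALANCE hint to force it
# onto the primary node
psql -h localhost -p 9999 -U pgpool -d postgres \
     -c "/*NO LOAD BALANCE*/ SELECT count(*) FROM public.users;"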

Performance Degradation Analysis

Create performance analysis tools:

# Create performance analysis script
sudo nano /usr/local/bin/pgpool_performance_analysis.sh

#!/bin/bash
# Pgpool-II performance degradation analysis

ANALYSIS_LOG="/var/log/pgpool_performance_analysis.log"

# Analyze query patterns
analyze_query_patterns() {
    echo "=== Query Pattern Analysis $(date) ===" >> $ANALYSIS_LOG

    # Top slow queries
    sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -c "
        SELECT
            query,
            calls,
            total_time,
            mean_time,
            rows,
            100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
        FROM pg_stat_statements
        ORDER BY mean_time DESC
        LIMIT 10;" >> $ANALYSIS_LOG 2>/dev/null

    # Connection distribution
    echo -e "\\nConnection Distribution:" >> $ANALYSIS_LOG
    sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -c "
        SELECT
            datname,
            count(*) as connection_count,
            count(*) FILTER (WHERE state = 'active') as active_count,
            count(*) FILTER (WHERE state = 'idle') as idle_count
        FROM pg_stat_activity
        GROUP BY datname
        ORDER BY connection_count DESC;" >> $ANALYSIS_LOG 2>/dev/null
}

# Check system resources
check_system_resources() {
    echo -e "\\n=== System Resource Analysis ===" >> $ANALYSIS_LOG

    # CPU usage
    echo "CPU Usage:" >> $ANALYSIS_LOG
    top -bn1 | grep "Cpu(s)" >> $ANALYSIS_LOG

    # Memory usage
    echo -e "\\nMemory Usage:" >> $ANALYSIS_LOG
    free -h >> $ANALYSIS_LOG

    # Disk I/O
    echo -e "\\nDisk I/O:" >> $ANALYSIS_LOG
    iostat -x 1 1 >> $ANALYSIS_LOG 2>/dev/null

    # Network connections
    echo -e "\\nNetwork Connections:" >> $ANALYSIS_LOG
    netstat -an | grep :9999 | wc -l >> $ANALYSIS_LOG
}

# Generate recommendations
generate_recommendations() {
    echo -e "\\n=== Performance Recommendations ===" >> $ANALYSIS_LOG

    local active_connections=$(sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -t -c "SELECT count(*) FROM pg_stat_activity WHERE state = 'active';" 2>/dev/null | tr -d ' ')
    local total_connections=$(sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -t -c "SELECT count(*) FROM pg_stat_activity;" 2>/dev/null | tr -d ' ')

    if [ "$active_connections" -gt 50 ]; then
        echo "- Consider increasing num_init_children parameter" >> $ANALYSIS_LOG
        echo "- Review query optimization for long-running queries" >> $ANALYSIS_LOG
    fi

    if [ "$total_connections" -gt 200 ]; then
        echo "- Implement connection pooling at application level" >> $ANALYSIS_LOG
        echo "- Consider reducing connection_life_time parameter" >> $ANALYSIS_LOG
    fi

    local memory_usage=$(free | awk 'NR==2{printf "%.1f", $3*100/$2}')
    if [ $(echo "$memory_usage > 85" | bc -l) -eq 1 ]; then
        echo "- Increase system memory or optimize memory usage" >> $ANALYSIS_LOG
        echo "- Review memqcache_total_size configuration" >> $ANALYSIS_LOG
    fi
}

# Execute analysis
analyze_query_patterns
check_system_resources
generate_recommendations
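
The slow-query report above reads from pg_stat_statements, which is only available after the extension has been loaded on the PostgreSQL backends themselves. A sketch of the backend-side setup (adjust the postgresql.conf path for your installation, and note that PostgreSQL 13 and later expose total_exec_time / mean_exec_time instead of total_time / mean_time):

# On each backend: preload the extension, restart PostgreSQL, create it
echo "shared_preload_libraries = 'pg_stat_statements'" | \
    sudo tee -a /etc/postgresql/16/main/postgresql.conf   # merge with any existing setting
sudo systemctl restart postgresql
sudo -u postgres psql -d postgres -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"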

Automated Maintenance and Optimization

Automated Configuration Tuning

Implement automated configuration optimization:

# Create auto-tuning script
sudo nano /usr/local/bin/pgpool_auto_tune.sh

#!/bin/bash
# Automated Pgpool-II configuration tuning

PGPOOL_CONF="/etc/pgpool2/pgpool.conf"
BACKUP_CONF="/etc/pgpool2/pgpool.conf.backup.$(date +%Y%m%d_%H%M%S)"
TUNE_LOG="/var/log/pgpool_auto_tune.log"

# Backup current configuration
cp $PGPOOL_CONF $BACKUP_CONF

# Function to update configuration parameter
update_config() {
    local param=$1
    local value=$2

    if grep -q "^$param" $PGPOOL_CONF; then
        sed -i "s/^$param.*/$param = $value/" $PGPOOL_CONF
    else
        echo "$param = $value" >> $PGPOOL_CONF
    fi

    echo "$(date): Updated $param to $value" >> $TUNE_LOG
}

# Analyze current performance and adjust
tune_performance() {
    # Average backend count, rounded to an integer so the bash arithmetic below works
    local avg_connections=$(sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -t -c "SELECT COALESCE(ROUND(AVG(numbackends)), 0)::int FROM pg_stat_database;" 2>/dev/null | tr -d ' ')
    local max_connections=$(sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -t -c "SELECT setting FROM pg_settings WHERE name = 'max_connections';" 2>/dev/null | tr -d ' ')

    # Adjust num_init_children based on load
    if [ ! -z "$avg_connections" ]; then
        local optimal_children=$((avg_connections * 2))
        if [ $optimal_children -gt 256 ]; then
            optimal_children=256
        elif [ $optimal_children -lt 16 ]; then
            optimal_children=16
        fi

        update_config "num_init_children" $optimal_children
    fi

    # Adjust connection_life_time based on connection patterns
    local idle_connections=$(sudo -u postgres psql -h localhost -p 9999 -U pgpool -d postgres -t -c "SELECT count(*) FROM pg_stat_activity WHERE state = 'idle';" 2>/dev/null | tr -d ' ')

    if [ "$idle_connections" -gt 50 ]; then
        update_config "connection_life_time" 300
        update_config "client_idle_limit" 180
    else
        update_config "connection_life_time" 600
        update_config "client_idle_limit" 300
    fi
}

# Execute tuning and reload configuration
tune_performance
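# Reload applies connection_life_time / client_idle_limit immediately; a
# changed num_init_children only takes effect after a full pgpool2 restart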
systemctl reload pgpool2

echo "$(date): Auto-tuning completed, configuration reloaded" >> $TUNE_LOG

This comprehensive guide provides the foundation for optimizing Pgpool-II performance and implementing robust monitoring and troubleshooting procedures. Regular application of these techniques ensures optimal database middleware performance and rapid issue resolution in production environments.

The monitoring scripts and performance analysis tools enable proactive identification of potential issues before they impact application performance, while the automated tuning capabilities help maintain optimal configuration parameters as workload patterns evolve.
