PostgreSQL Table Performance Metrics Guide - Optimization & Analysis

Comprehensive guide to PostgreSQL table metrics. Monitor table size, bloat, cache efficiency, vacuum/analyze operations, and query performance for optimal database health and tuning.

Table Size Information

  • What it measures: The percentage contribution of a single table to the overall database size, showing its storage footprint relative to the entire database.
  • Why it matters: Tables that consume disproportionate storage can lead to performance bottlenecks, maintenance challenges, and storage issues. Large tables may require special partitioning strategies, impact backup/restore times, and affect query performance across the database.
  • Ideal value & Best Practice:
    • Normal (less than 30%): Table size within expected limits
    • Warning (30% to 50%): Monitor growth and consider partitioning
    • Critical (50% or more): Immediate action required - implement partitioning, archiving, or optimization
    • Regularly monitor large table growth patterns (a query sketch for this metric follows this list)
    • Consider table partitioning for tables exceeding 30% of total size
    • Implement data archiving strategies for historical data
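
As a rough sketch, each table's share of the database can be computed from the standard size functions; the LIMIT and ordering here are illustrative, and pghealth may calculate the metric slightly differently:

    -- Top 10 tables by share of the current database's total size
    SELECT relname AS table_name,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
           round(100.0 * pg_total_relation_size(relid)
                       / pg_database_size(current_database()), 2) AS pct_of_database
    FROM pg_stat_user_tables
    ORDER BY pg_total_relation_size(relid) DESC
    LIMIT 10;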

Dead Tuple Percentage

  • What it measures: The percentage of dead tuples (rows marked for deletion but not yet removed) relative to total tuples in a table.
  • Why it matters: High dead tuple percentages indicate table bloat, which wastes disk space, slows query performance (due to scanning unnecessary rows), and increases I/O overhead. This metric directly impacts vacuum efficiency and overall database performance.
  • Ideal value & Best Practice:
    • Healthy (less than 5%): Well-maintained table with minimal bloat
    • Warning (5% to 20%): Monitor and optimize VACUUM settings
    • Critical (20% or more): Immediate action required - perform aggressive vacuuming
    • Adjust autovacuum parameters for frequently updated tables
    • Monitor tables with high update/delete activity more frequently
    • Consider more aggressive vacuum settings for volatile tables (see the example queries after this list)
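
A minimal sketch for checking dead tuple percentages from pg_stat_user_tables, plus an example of tightening per-table autovacuum settings (the table name and values are illustrative, not recommendations for every workload):

    -- Dead tuples as a share of all tuples, worst tables first
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct
    FROM pg_stat_user_tables
    ORDER BY dead_pct DESC NULLS LAST;

    -- More aggressive autovacuum for a frequently updated table
    ALTER TABLE my_volatile_table
      SET (autovacuum_vacuum_scale_factor = 0.02,
           autovacuum_vacuum_threshold = 1000);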

Last Analyze & Vacuum Age (Days)

  • What it measures: The number of days since last ANALYZE (statistics update) and last VACUUM (storage reclamation) operations.
  • Why it matters: Stale statistics lead to poor query planning and suboptimal execution plans. Infrequent vacuuming allows dead tuple accumulation and table bloat. Both operations are crucial for maintaining query performance and storage efficiency.
  • Ideal value & Best Practice:
    • ANALYZE should run at least weekly for most tables
    • VACUUM frequency depends on update/delete activity
    • Set appropriate autovacuum_analyze_scale_factor and autovacuum_vacuum_scale_factor
    • Monitor tables with no recent analyze/vacuum operations
    • Consider manual maintenance for critical tables
    • Use pg_stat_all_tables to track last_analyze and last_vacuum times, as in the query sketched below
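
A query sketch against pg_stat_all_tables that reports how many days have passed since the last manual or automatic VACUUM and ANALYZE (system schemas are excluded here for readability):

    -- Days since the most recent (auto)vacuum and (auto)analyze per table
    SELECT relname,
           GREATEST(last_vacuum, last_autovacuum)   AS last_vacuum_any,
           GREATEST(last_analyze, last_autoanalyze) AS last_analyze_any,
           date_part('day', now() - GREATEST(last_vacuum, last_autovacuum))   AS vacuum_age_days,
           date_part('day', now() - GREATEST(last_analyze, last_autoanalyze)) AS analyze_age_days
    FROM pg_stat_all_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY vacuum_age_days DESC NULLS FIRST;

Tables that have never been vacuumed or analyzed show NULL ages and sort first, which makes them easy to spot.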

Seq Scan Percentage

  • What it measures: The percentage of sequential scans relative to all scans (sequential + index) on a table.
  • Why it matters: High sequential scan percentages indicate inefficient index usage, leading to full table scans that degrade performance, especially on large tables. This metric helps identify indexing opportunities and query optimization needs.
  • Ideal value & Best Practice:
    • Good (10% or less): Effective index usage
    • Warning (10% to 30%): Moderate reliance on sequential scans
    • Critical (more than 30%): Excessive sequential scans - optimize queries and indexes
    • Create appropriate indexes for common query patterns
    • Review query plans for missing indexes
    • Consider partial indexes for large tables
    • Monitor query performance regularly; a query for this ratio is sketched after this list
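
A sketch of the ratio from pg_stat_user_tables; note that on small tables sequential scans are often cheaper than index scans, so the percentage should be read together with table size:

    -- Sequential scans as a share of all scans
    SELECT relname,
           seq_scan,
           coalesce(idx_scan, 0) AS idx_scan,
           round(100.0 * seq_scan / NULLIF(seq_scan + coalesce(idx_scan, 0), 0), 2) AS seq_scan_pct
    FROM pg_stat_user_tables
    WHERE seq_scan + coalesce(idx_scan, 0) > 0
    ORDER BY seq_scan_pct DESC;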

Hot Update Percentage

  • What it measures: The percentage of HOT (Heap-Only Tuple) updates relative to all updates on a table.
  • Why it matters: HOT updates allow in-place updates without creating new row versions, reducing table bloat and vacuum overhead. Low percentages indicate inefficient update patterns that contribute to table fragmentation and maintenance overhead.
  • Ideal value & Best Practice:
    • Low (less than 10%): Inefficient HOT updates - review fillfactor and index design
    • Optimal (more than 50%): Efficient in-place updates with reduced bloat
    • Adjust fillfactor for frequently updated tables (typically 80-90) to leave room for HOT updates
    • Avoid updating indexed columns frequently
    • Consider index redesign for update-heavy tables
    • Monitor update patterns for optimization opportunities (see the example below)
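
A sketch for the HOT update ratio from pg_stat_user_tables, with an illustrative fillfactor change (the table name and value are assumptions; a lower fillfactor only applies to pages written after the change, so existing data needs a rewrite to benefit):

    -- HOT updates as a share of all updates, least efficient tables first
    SELECT relname,
           n_tup_upd,
           n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 2) AS hot_update_pct
    FROM pg_stat_user_tables
    WHERE n_tup_upd > 0
    ORDER BY hot_update_pct ASC;

    -- Leave free space on each page so updated rows can stay on the same page
    ALTER TABLE my_update_heavy_table SET (fillfactor = 90);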

Table Bloat (Free Percent)

  • What it measures: The percentage of unused space within a table's allocated storage.
  • Why it matters: High free percentages indicate wasted disk space and suboptimal storage utilization. This can lead to increased I/O, poorer cache efficiency, and overall performance degradation due to unnecessary disk space consumption.
  • Ideal value & Best Practice:
    • Healthy (less than 30%): Efficient space utilization
    • Warning (30% to 50%): Significant free space - consider maintenance
    • Critical (50% or more): High wasted space - immediate action required
    • Perform regular VACUUM FULL or pg_repack for bloated tables
    • Monitor tables with high update/delete patterns
    • Consider table reorganization for severely bloated tables (a measurement example follows this list)
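
One way to measure free space directly is the pgstattuple contrib extension, which reports a free_percent column; it scans the whole table, so run it selectively and outside peak hours (extension availability, privileges, and the table name are assumptions here):

    -- Requires the pgstattuple extension and appropriate privileges
    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- Exact free-space and dead-tuple figures for one table (full table scan)
    SELECT free_percent, dead_tuple_percent, pg_size_pretty(table_len) AS table_size
    FROM pgstattuple('public.my_table');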

Table Hit Percentage

  • What it measures: The percentage of table (heap) block reads served from the shared buffer cache instead of requiring disk I/O.
  • Why it matters: High cache hit rates indicate optimal performance with most reads served from memory. Low percentages signal excessive disk I/O, which significantly slows query performance and indicates potential memory or indexing issues.
  • Ideal value & Best Practice:
    • Healthy (99% or more): Efficient cache usage
    • Warning (80% to 99%): Moderate disk reliance
    • Critical (less than 80%): Severe performance degradation
    • Increase shared_buffers for better caching
    • Optimize queries to reduce full table scans
    • Consider more RAM for database server
    • Monitor cache patterns for frequently accessed tables (example query below)
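
A sketch of the heap cache hit ratio from pg_statio_user_tables (the counters are cumulative since the last statistics reset, so the ratio reflects long-run behavior rather than the current moment):

    -- Table (heap) cache hit percentage, busiest tables first
    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(100.0 * heap_blks_hit
                       / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS heap_hit_pct
    FROM pg_statio_user_tables
    ORDER BY heap_blks_hit + heap_blks_read DESC;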

Table Index Hit Percentage

  • What it measures: The percentage of index block reads served from the shared buffer cache instead of requiring disk I/O.
  • Why it matters: Index cache efficiency is crucial for query performance. Low index hit rates force disk reads for index lookups, dramatically slowing query execution and indicating insufficient memory allocation or inefficient index usage.
  • Ideal value & Best Practice:
    • Healthy (99% or more): Efficient index caching
    • Warning (80% to 99%): Moderate disk reliance for indexes
    • Critical (less than 80%): Severe performance issues
    • Ensure adequate memory for index caching
    • Review index usage and remove unused indexes
    • Consider covering indexes for critical queries
    • Monitor index performance regularly (example query below)
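
The equivalent sketch for index blocks, again from pg_statio_user_tables (the idx_blks_* totals cover all indexes on each table):

    -- Index cache hit percentage, busiest tables first
    SELECT relname,
           idx_blks_read,
           idx_blks_hit,
           round(100.0 * idx_blks_hit
                       / NULLIF(idx_blks_hit + idx_blks_read, 0), 2) AS idx_hit_pct
    FROM pg_statio_user_tables
    ORDER BY idx_blks_hit + idx_blks_read DESC;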

Try pghealth Free Today 🚀

Start your journey toward a healthier PostgreSQL with pghealth.
You can explore all features immediately with a free trial, no installation required.

👉 Start Free Trial