- **0** (disabled). For systems with a battery-backed write cache, set to 0. For systems without a write cache, consider values such as 32 or 64 pages (256-512 KB) to reduce I/O overhead. Adjust based on your durability requirements and storage performance.
- **200-300**. For RAID arrays: 2-4 per spindle. For a single HDD: 1. Start with conservative values and increase while monitoring I/O wait times and system performance.
- **0** (system default). For most modern storage, leave at the default. For specialized storage with optimal large I/O sizes, set it to match the storage's preferred I/O size (e.g., 8192 for 8 KB operations).
- **-1** (same as effective_io_concurrency). Set a value 2-4 times higher than effective_io_concurrency during maintenance windows when you want to accelerate maintenance operations; reduce it during normal operation to avoid impacting the production workload.
- **2**. Set to 50-75% of available CPU cores for maintenance-heavy systems; for example, with 16 cores, use 8-12. Consider available memory and I/O capacity when increasing this value.
- **8**. Set to approximately 50-75% of total CPU cores; for example, with 32 cores, set it to 16-24. Ensure adequate memory is available for parallel workers.
- **2**. Set to 25-50% of total CPU cores; for example, with 16 cores, use 4-8. Adjust based on your typical query workload and available resources.
- **8**. Set to at least the sum of max_parallel_workers and autovacuum_max_workers, plus overhead for other background processes. For busy systems, use max_parallel_workers + autovacuum_max_workers + 4.
- **on** is recommended for most workloads. Set it to off if you notice the leader process becoming a bottleneck in parallel queries, particularly for queries with expensive aggregation or sorting operations.

Start your journey toward a healthier PostgreSQL with pghealth.
You can explore all features immediately with a free trial, no installation required.
Start Free Trial
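Taken together, the recommendations above can be collected into a sketch postgresql.conf fragment for a hypothetical 16-core server with SSD storage. Only effective_io_concurrency, max_parallel_workers, and autovacuum_max_workers are named in the text itself; the other parameter names below are assumptions about which settings the guidance describes, so verify each against your PostgreSQL version before applying.

```ini
# Sketch for a hypothetical 16-core, SSD-backed server.
# Parameter names marked "assumed" are inferred, not stated in the text.
effective_io_concurrency = 200        # 200-300 for SSDs; 1 for a single HDD
maintenance_io_concurrency = 400      # assumed name; 2-4x effective_io_concurrency for maintenance windows
max_parallel_maintenance_workers = 8  # assumed name; 50-75% of cores for maintenance-heavy systems
max_parallel_workers = 10             # ~50-75% of total cores (8-12 on 16 cores)
max_parallel_workers_per_gather = 4   # assumed name; 25-50% of cores (4-8 on 16 cores)
autovacuum_max_workers = 3            # PostgreSQL default
max_worker_processes = 17             # max_parallel_workers + autovacuum_max_workers + 4
parallel_leader_participation = on    # assumed name; off only if the leader becomes a bottleneck
```

Note that max_worker_processes takes effect only after a server restart, so size it with headroom rather than adjusting it frequently.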