Reporting and Logging / When to Log

Master PostgreSQL logging timing parameters. Learn optimal settings for slow query logging, error reporting, statement sampling, and transaction logging for effective database monitoring.

log_min_duration_sample

  • What it does: Sets the minimum execution time threshold for sampling statements to be logged, working in conjunction with log_statement_sample_rate to control which statements are logged based on duration.
  • Why it matters: This parameter enables intelligent sampling of statements that exceed a certain duration threshold, allowing you to capture representative slow queries without logging every single occurrence. This is particularly valuable in high-volume production environments where logging all slow queries would generate excessive log volume, while still providing visibility into performance issues through statistical sampling.
  • Ideal value & Best Practice: Default -1 (disabled). Set to a value like 100ms (for OLTP) or 1000ms (for data warehouses) to sample slow queries. Combine with log_statement_sample_rate = 0.1 to log approximately 10% of qualifying statements. Note that statements whose duration also reaches log_min_duration_statement are always logged and bypass sampling, so this parameter only has an effect when set below log_min_duration_statement. Adjust based on your log volume tolerance and monitoring needs.
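The pairing above can be sketched as a postgresql.conf excerpt; the thresholds are illustrative values for an OLTP system, not prescriptive:

```
# Statements running 100 ms or longer become sampling candidates
log_min_duration_sample = 100ms
# Log roughly 10% of those candidates
log_statement_sample_rate = 0.1
# Statements at or above 1 s are always logged, bypassing sampling
log_min_duration_statement = 1000ms
```

After editing postgresql.conf, reload the configuration (for example with SELECT pg_reload_conf();) for the new values to take effect.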

log_min_duration_statement

  • What it does: Sets the minimum execution time threshold above which all completed statements will be logged, regardless of their type or origin.
  • Why it matters: This is one of the most important parameters for performance troubleshooting as it helps identify slow queries that may need optimization. By capturing all statements that exceed a specified duration, you can pinpoint performance bottlenecks, inefficient queries, and resource-intensive operations that impact database responsiveness. The logging occurs after statement completion, ensuring accurate duration measurement.
  • Ideal value & Best Practice: Default -1 (disabled). Set to 100ms for OLTP systems or 1000ms for analytical workloads. Start with a higher value and gradually decrease until you find the right balance between capturing meaningful performance data and managing log volume. Avoid setting to 0 in production as it will log all statements regardless of duration.
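This threshold can also be changed without a server restart. A sketch using ALTER SYSTEM (requires superuser; the 100 ms value is the OLTP guideline above, not a universal recommendation):

```sql
-- Log every completed statement that runs 100 ms or longer
ALTER SYSTEM SET log_min_duration_statement = '100ms';
-- Re-read the configuration files so the new value takes effect
SELECT pg_reload_conf();
```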

log_min_error_statement

  • What it does: Controls which statements are logged based on the severity level of errors they generate, causing statements that trigger errors at or above the specified severity level to be recorded.
  • Why it matters: This parameter is essential for debugging and troubleshooting application issues, as it helps identify problematic SQL statements that cause errors. By logging statements that result in errors, you can quickly pinpoint the source of application failures, data integrity issues, or permission problems. The logging includes both the error details and the statement that caused it.
  • Ideal value & Best Practice: Default error is appropriate for most production environments. For development and testing, consider warning to capture more potential issues. Raising it above error (for example to fatal or panic) suppresses logging of ordinary statement errors and is rarely advisable, since it removes the very visibility this parameter exists to provide.
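As a sketch, the corresponding postgresql.conf lines for the environments discussed above:

```
# Production default: log any statement that raises an ERROR or more severe
log_min_error_statement = error
# Development/testing alternative: also capture statements raising warnings
# log_min_error_statement = warning
```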

log_min_messages

  • What it does: Sets the minimum severity level for messages to be written to the server log, controlling the verbosity of logged information based on message importance.
  • Why it matters: This parameter determines how much detail appears in your database logs, affecting both troubleshooting capability and log volume. Higher severity levels (like error) produce less log output but may miss important diagnostic information. Lower severity levels (like debug1) provide extensive detail but can generate substantial log volume.
  • Ideal value & Best Practice: Default warning is reasonable for production systems. Use error for high-volume production environments where log size is a concern. For development and troubleshooting, consider info or debug1 to get more detailed information. Adjust based on your specific monitoring needs and log storage capacity.
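A minimal postgresql.conf sketch reflecting the recommendations above (values are illustrative; pick one line per environment):

```
# High-volume production: suppress everything below ERROR severity
log_min_messages = error
# Troubleshooting alternative (verbose; revert when finished)
# log_min_messages = debug1
```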

log_startup_progress_interval

  • What it does: Sets the time interval between progress updates for long-running startup operations, providing visibility into database initialization and recovery processes.
  • Why it matters: During database startup, especially after a crash or with large databases, recovery operations can take significant time. This parameter ensures that progress information is logged at regular intervals, helping administrators monitor startup progress and estimate completion time. Without these progress updates, long startup operations might appear hung or stalled.
  • Ideal value & Best Practice: Default 10s is reasonable for most environments. Increase to 30s for very large databases with long recovery times, or decrease to 5s when debugging startup issues. Set to 0 to disable progress logging entirely. Note that this parameter is available from PostgreSQL 15 onward.
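A postgresql.conf sketch matching the guidance above (the default shown explicitly for clarity):

```
# Report recovery/startup progress every 10 s (the default)
log_startup_progress_interval = 10s
# Very large clusters with long recovery: report less often
# log_startup_progress_interval = 30s
```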

log_statement_sample_rate

  • What it does: Specifies the fraction of statements that exceed the log_min_duration_sample threshold that will actually be logged, enabling statistical sampling of slow queries.
  • Why it matters: This parameter works with log_min_duration_sample to implement intelligent sampling of slow queries, preventing log overload in high-volume environments while still providing representative performance data. By logging only a fraction of qualifying statements, you can maintain visibility into performance trends without excessive log growth.
  • Ideal value & Best Practice: Default 1.0 (log all qualifying statements). For high-volume systems, set to 0.1 or 0.2 to log 10-20% of slow queries. Adjust based on your query volume and monitoring requirements. Higher values provide more complete data but increase log volume.
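The effective sampling configuration can be inspected from SQL; a small query against the pg_settings view:

```sql
-- Show the current slow-query sampling settings and their units
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('log_min_duration_sample', 'log_statement_sample_rate');
```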

log_transaction_sample_rate

  • What it does: Sets the fraction of transactions from which all statements will be logged, regardless of their duration or type.
  • Why it matters: This parameter enables transaction-level logging, which can be invaluable for debugging complex application issues that involve multiple statements within a transaction. By capturing all statements from a sample of transactions, you can understand complete workflow execution and identify issues that might not be apparent from individual statement logging.
  • Ideal value & Best Practice: Default 0.0 (disabled). Set to 0.01 or 0.001 on production systems to capture a small sample of complete transactions. For debugging specific issues, temporarily increase the rate. Be cautious with higher settings, as they can significantly increase log volume; the sampling decision is made once, when each transaction starts.
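A postgresql.conf sketch for the conservative production setting suggested above:

```
# Log every statement of roughly 1 in 1000 transactions;
# the decision is made when each transaction starts
log_transaction_sample_rate = 0.001
```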

Try pghealth Free Today πŸš€

Start your journey toward a healthier PostgreSQL with pghealth.
You can explore all features immediately with a free trial β€” no installation required.

πŸ‘‰ Start Free Trial