Prometheus exporter for PostgreSQL server metrics. Supported PostgreSQL versions: 9.0 through 10.
| Name | Description |
|---|---|
| web.listen-address | Address to listen on for web interface and telemetry. |
| web.telemetry-path | Path under which to expose metrics. |
| db.names | Comma-separated list of databases to monitor. |
| db.consider-query-slow | Queries with an execution time higher than this value (in seconds) are considered slow. Defaults to 5 seconds. |
| db.tables | Comma-separated list of tables to track. Pass `*` to track all tables of the DSN database. |
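A hypothetical invocation showing these flags together; the single-dash flag syntax and every value below (listen port, database names, threshold) are assumptions used for illustration, not documented defaults:

```sh
# Sketch only: flag names come from the table above, values are placeholders.
./postgresql_exporter \
  -web.listen-address=":9187" \
  -web.telemetry-path="/metrics" \
  -db.names="postgres,app_db" \
  -db.consider-query-slow=10 \
  -db.tables="*"
```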
The PostgreSQL data source name must be set via the `DATA_SOURCE_NAME` environment variable.
The format and available parameters are described at http://godoc.org/github.com/lib/pq#hdr-Connection_String_Parameters
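For example, lib/pq accepts both the keyword/value and the URL form of the connection string (the credentials and host below are placeholders):

```sh
# Keyword/value form
export DATA_SOURCE_NAME='user=monitoring password=secret host=localhost dbname=postgres sslmode=disable'

# URL form
export DATA_SOURCE_NAME='postgres://monitoring:secret@localhost:5432/postgres?sslmode=disable'
```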
The exporter will send the following stats to Prometheus:
- `buffers_checkpoint` - Number of buffers written during checkpoints
- `buffers_clean` - Number of buffers written by the background writer
- `maxwritten_clean` - Number of times the background writer stopped a cleaning scan because it had written too many buffers
- `buffers_backend` - Number of buffers written directly by a backend
- `buffers_backend_fsync` - Number of times a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write)
- `buffers_alloc` - Number of buffers allocated
- `numbackends` - Number of backends currently connected to this database
- `tup_returned` - Number of rows returned by queries in this database
- `tup_fetched` - Number of rows fetched by queries in this database
- `tup_inserted` - Number of rows inserted by queries in this database
- `tup_updated` - Number of rows updated by queries in this database
- `tup_deleted` - Number of rows deleted by queries in this database
- `xact_commit` - Number of transactions in this database that have been committed
- `xact_rollback` - Number of transactions in this database that have been rolled back
- `deadlocks` - Number of deadlocks detected in this database
- `temp_files` - Number of temporary files created by queries in this database
- `temp_bytes` - Total amount of data written to temporary files by queries in this database
- `size_bytes` - Database size
- `cache_hit_ratio` - Database cache hit ratio
- `seq_scan` - Number of sequential scans initiated on this table
- `seq_tup_read` - Number of live rows fetched by sequential scans
- `vacuum_count` - Number of times this table has been manually vacuumed (not counting VACUUM FULL)
- `autovacuum_count` - Number of times this table has been vacuumed by the autovacuum daemon
- `analyze_count` - Number of times this table has been manually analyzed
- `autoanalyze_count` - Number of times this table has been analyzed by the autovacuum daemon
- `n_tup_ins` - Number of rows inserted
- `n_tup_upd` - Number of rows updated
- `n_tup_del` - Number of rows deleted
- `n_tup_hot_upd` - Number of rows HOT updated (i.e., with no separate index update required)
- `n_live_tup` - Estimated number of live rows
- `n_dead_tup` - Estimated number of dead rows
- `table_cache_hit_ratio` - Table cache hit ratio in percent
- `table_items_count` - Overall number of items in the table
- `table_size` - Total table size, including indexes, in bytes
- `slow_queries` - Number of slow queries
- `slow_select_queries` - Number of slow SELECT queries
- `slow_dml_queries` - Number of slow data manipulation queries (INSERT, UPDATE, DELETE)
You need the latest version of Go to build.
```sh
go build
export DATA_SOURCE_NAME='user=username dbname=database password=password sslmode=disable'
./postgresql_exporter <flags>
```
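Once the exporter is running, you can check that metrics are exposed. The address and path below match the example flags earlier in this README and are assumptions; adjust them to your `web.listen-address` and `web.telemetry-path` settings:

```sh
# Assumed listen address and telemetry path; substitute your own flag values.
curl http://localhost:9187/metrics
```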
Since we do not want to use a superuser for monitoring, we need to create a separate user for it.
Such a user has no access to query details in the `pg_catalog.pg_stat_activity` table, so if your PostgreSQL version is below 10 you also need to prepare an SQL function to make the slow-query metrics work.
If your PostgreSQL version is 10 or newer, you should instead use the `pg_read_all_stats` role and query `pg_catalog.pg_stat_activity` directly, without the function and view (see below).
The function is created by the `postgres` user for your monitoring user, so the monitoring user must connect to the `postgres` database; otherwise a `pq: cross-database references are not implemented` error is raised when another database is used for monitoring purposes.
Here is the function itself and the setup:
```sql
CREATE OR REPLACE FUNCTION public.pg_stat_activity() RETURNS SETOF pg_catalog.pg_stat_activity
AS $BODY$
DECLARE
    rec RECORD;
BEGIN
    FOR rec IN SELECT * FROM pg_catalog.pg_stat_activity
    LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;
$BODY$ LANGUAGE plpgsql SECURITY DEFINER;
```
```sql
-- Group and login role used by the exporter
CREATE GROUP monitoring;
CREATE USER monitoring LOGIN NOSUPERUSER;
ALTER GROUP monitoring ADD USER monitoring;

-- Dedicated schema with a view over the SECURITY DEFINER function
CREATE SCHEMA monitoring;
GRANT USAGE ON SCHEMA monitoring TO GROUP monitoring;
CREATE VIEW monitoring.pg_stat_activity AS SELECT * FROM public.pg_stat_activity();
GRANT SELECT ON monitoring.pg_stat_activity TO GROUP monitoring;

-- Make the monitoring user resolve pg_stat_activity to the view first
ALTER ROLE monitoring SET search_path = monitoring, pg_catalog, "$user", public;
```
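For PostgreSQL 10 and newer, the function and view above are not needed. A minimal sketch of the simpler setup, assuming the same `monitoring` user name:

```sql
-- PostgreSQL 10+: the built-in pg_read_all_stats role grants full access to pg_stat_activity details.
CREATE USER monitoring LOGIN NOSUPERUSER;
GRANT pg_read_all_stats TO monitoring;
```

To verify the grant, connect as `monitoring` and run `SELECT query FROM pg_catalog.pg_stat_activity;`: the query text of other sessions should be visible instead of `<insufficient privilege>`.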