Configure the Splunk Distribution of OpenTelemetry Collector to collect metrics and events from your PostgreSQL server instance.
This page explains how to configure the postgresql receiver in the Splunk Distribution of OpenTelemetry Collector (splunk-otel-collector) or the community (OSS) version of the OpenTelemetry Collector (opentelemetry-collector-contrib).
Splunk Database Monitoring supports these PostgreSQL versions and platforms:
- Versions: Azure Database for PostgreSQL Flexible Server versions 14.20 and 17.7; Amazon RDS for PostgreSQL versions 14.15 and 17.5
- Platforms: Microsoft Azure as a managed PaaS (single server instance), AWS as a managed PaaS (RDS instance)
- Deploy a supported OpenTelemetry Collector if it's not already deployed.
- Configure the postgresql receiver to collect infrastructure metrics from your PostgreSQL server instance.
- Enable Database Monitoring by modifying your OpenTelemetry Collector configuration.
- In your collector's YAML file, add additional configuration to the postgresql receiver:
The example below shows the minimum required configuration, but you can add other options. For the full list of options, see the community (OSS) OpenTelemetry Collector postgresql receiver README.
Important:
In the Splunk Distribution of OpenTelemetry Collector, the following postgresql options have default values that are sufficient for database monitoring needs:
collection_interval (Default: 10s)
query_sample_collection.max_rows_per_query (Default: 100)
top_query_collection.max_rows_per_query (Default: 1000)
The Splunk Distribution of OpenTelemetry Collector uses carefully selected default values to support database monitoring without affecting the performance of the database or the collector. If you increase these values, you might adversely affect the performance of your database or collector, which could result in ingest throttling.
Example:
postgresql:
  collection_interval: 10s
  databases:
    - postgres
  endpoint: your-service-endpoint:5432
  events:
    db.server.query_sample:
      enabled: true
    db.server.top_query:
      enabled: true
  username: postgres
  password: your-password
  tls:
    insecure: true
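If you want the collection limits from the note above to be visible in your configuration, you can state the defaults explicitly in the same receiver block. The following is a minimal sketch; the query_sample_collection and top_query_collection option names follow the community postgresql receiver README, so verify them against the README for your collector version:

postgresql:
  collection_interval: 10s    # default
  query_sample_collection:
    max_rows_per_query: 100   # default
  top_query_collection:
    max_rows_per_query: 1000  # default

Keeping the values at their defaults, as shown, avoids the performance and throttling risks described above.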
- Add an additional exporter, named dbmon, with the configuration shown in the code block below.
Note: The dbmon exporter sends OpenTelemetry-formatted logs to your Splunk Observability Cloud event endpoint rather than to its metrics endpoint.
# Exporters define where the telemetry data is sent to
exporters:
  # Exports dbmon events as logs
  otlphttp/dbmon:
    headers:
      X-SF-Token: your-splunk-access-token
      X-splunk-instrumentation-library: dbmon
    logs_endpoint: https://ingest.your-splunk-realm.signalfx.com/v3/event
    sending_queue:
      batch:
        flush_timeout: 15s
        max_size: 10485760 # 10 MiB
        sizer: bytes
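To keep the access token out of the configuration file, you can reference an environment variable instead; the Collector expands ${env:VAR} references at startup. A minimal sketch, assuming you export the token as SPLUNK_ACCESS_TOKEN (an assumed variable name):

exporters:
  otlphttp/dbmon:
    headers:
      # SPLUNK_ACCESS_TOKEN is an assumed environment variable name
      X-SF-Token: ${env:SPLUNK_ACCESS_TOKEN}
      X-splunk-instrumentation-library: dbmon
    logs_endpoint: https://ingest.your-splunk-realm.signalfx.com/v3/event
    # Remaining settings (sending_queue, and so on) are unchanged from the example above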
- Add a new logs pipeline named dbmon, with the following components:
Important:
About the processors section:
Use identical processors for both your metrics and your logs/dbmon pipelines, and include them in the same order. For example, if you use the memory_limiter, k8sattributes, and batch processors in your metrics pipeline, use the same combination and order in your logs/dbmon pipeline. See the sketch after the code block below.
service:
  pipelines:
    …
    logs/dbmon:
      receivers:
        - postgresql
      processors:
        - memory_limiter
        - batch
      exporters:
        - otlphttp/dbmon
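For illustration, the following sketch shows a metrics pipeline and the logs/dbmon pipeline side by side with identical processors in the same order. The metrics pipeline shown here, including the signalfx exporter name, is an assumption; mirror whatever processors your existing metrics pipeline already uses:

service:
  pipelines:
    # Assumed existing metrics pipeline; the exporter name is illustrative
    metrics:
      receivers:
        - postgresql
      processors:
        - memory_limiter
        - batch
      exporters:
        - signalfx
    # The dbmon pipeline uses the same processors, in the same order
    logs/dbmon:
      receivers:
        - postgresql
      processors:
        - memory_limiter
        - batch
      exporters:
        - otlphttp/dbmon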
- Restart your collector.
The command to restart the collector varies depending on which collector you deployed, which platform you deployed it on, and what tool you used to deploy it. For the Splunk Distribution of the OpenTelemetry Collector deployed on Kubernetes platforms with Helm, the restart command is:
helm upgrade your-splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector -f your-override-values.yaml
where splunk-otel-collector-chart is the name that you gave the Helm repository in the helm repo add command (that is, your command was helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart).
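If you deploy with Helm, the receiver, exporter, and pipeline additions from the previous steps go in the override values file that you pass with -f. A minimal sketch, assuming the chart merges extra collector configuration under the agent.config key (verify this against the chart's documentation):

# your-override-values.yaml (sketch)
agent:
  config:
    receivers:
      postgresql:
        # receiver settings from the earlier example
        endpoint: your-service-endpoint:5432
    exporters:
      otlphttp/dbmon:
        # exporter settings from the earlier example
        logs_endpoint: https://ingest.your-splunk-realm.signalfx.com/v3/event
    service:
      pipelines:
        logs/dbmon:
          receivers: [postgresql]
          processors: [memory_limiter, batch]
          exporters: [otlphttp/dbmon]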
Your PostgreSQL server instance should now be visible as a database service in Database Monitoring.