Monitor hosts with collectd and OpenTelemetry
Use collectd and native OpenTelemetry to monitor services in Splunk Observability Cloud. Learn about the benefits, installation, configuration, and metrics.
To monitor your infrastructure with collectd using native OpenTelemetry in Splunk Observability Cloud, install a collectd daemon on your host and connect it to your Collector instance as described in this document.
Benefits
After you configure the integration, you can access these features:
- View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
- Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Configuration
Install a collectd daemon in your host and connect it to an OpenTelemetry Collector with the following steps:
- Install and configure collectd
- Configure the OpenTelemetry Collector
- Build and run
1. Install and configure collectd
Follow these steps to install and configure the collectd daemon:
- Install collectd as a Debian or RPM package on your host
- Create collectd/metrics.conf
- Create collectd/http.conf
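On a Debian-based or RPM-based host, for example, the daemon can typically be installed from the distribution repositories (package names and repository requirements may vary by distribution):

```shell
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install -y collectd

# RHEL/CentOS (the collectd package may require the EPEL repository)
sudo yum install -y collectd
```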
In this example, the host is represented by an Ubuntu 24.04 Docker image, orchestrated with the following Docker Compose file:
services:
  collectd:
    build: collectd
    container_name: collectd
    depends_on:
      - otelcollector
    volumes:
      - ./collectd/http.conf:/etc/collectd/collectd.conf.d/http.conf
      - ./collectd/metrics.conf:/etc/collectd/collectd.conf.d/metrics.conf

  # OpenTelemetry Collector
  otelcollector:
    image: quay.io/signalfx/splunk-otel-collector:latest
    container_name: otelcollector
    command: ["--config=/etc/otel-collector-config.yml", "--set=service.telemetry.logs.level=debug"]
    volumes:
      - ./otel-collector-config.yml:/etc/otel-collector-config.yml
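The build: collectd entry assumes a Dockerfile in the collectd/ directory. The Dockerfile is not shown in this example; a minimal sketch for the Ubuntu 24.04 host image might look like this, where running the daemon in the foreground keeps it as the container's main process:

```dockerfile
FROM ubuntu:24.04

# Install the collectd daemon from the Ubuntu repositories.
RUN apt-get update \
 && apt-get install -y --no-install-recommends collectd \
 && rm -rf /var/lib/apt/lists/*

# Run collectd in the foreground so the container stays alive.
CMD ["collectd", "-f"]
```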
The http and metrics configuration files look like this:
# http.conf
# The minimal configuration required to have collectd send data to an OpenTelemetry Collector
# with a collectdreceiver deployed on port 8081.
LoadPlugin write_http
<Plugin "write_http">
  <Node "collector">
    URL "http://otelcollector:8081"
    Format JSON
    VerifyPeer false
    VerifyHost false
  </Node>
</Plugin>
# metrics.conf
# An example of collectd plugin configuration reporting free disk space on the host.
<LoadPlugin df>
  Interval 3600
</LoadPlugin>
<Plugin df>
  ValuesPercentage true
</Plugin>
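The write_http plugin POSTs metrics to the Collector as a JSON array of value lists. To smoke-test the receiver endpoint without waiting for the daemon, you can hand-craft one such payload and send it yourself. A sketch, assuming the Collector from this example is reachable on localhost:8081; the host and plugin_instance values are made-up examples:

```python
import json
import time
import urllib.request

# One metric in collectd's write_http JSON format: a gauge reporting
# free disk space in percent, shaped like the df plugin's output.
payload = [
    {
        "values": [38.97],
        "dstypes": ["gauge"],
        "dsnames": ["value"],
        "time": time.time(),
        "interval": 3600,
        "host": "example-host",          # assumption: placeholder host name
        "plugin": "df",
        "plugin_instance": "root",       # assumption: placeholder mount point
        "type": "percent_bytes",
        "type_instance": "free",
    }
]

body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8081",  # assumption: collectdreceiver from this example
    data=body,
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("receiver responded with HTTP", resp.status)
except OSError as err:
    print("receiver not reachable:", err)
```

If the Collector is running, the metric appears in the debug exporter output with the plugin, plugin_instance, and host attributes shown above.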
2. Configure the OpenTelemetry Collector
Set up your Collector instance to listen for traffic from the collectd daemon over HTTP with the CollectD receiver:
receivers:
  collectd:
    endpoint: "0.0.0.0:8081"

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [collectd]
      exporters: [debug]
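The debug exporter only prints metrics to the Collector's stdout. To also send them to Splunk Observability Cloud, you would typically add the signalfx exporter to the pipeline; a sketch, where the token and realm are placeholders for your own values:

```yaml
exporters:
  signalfx:
    access_token: "<YOUR_ACCESS_TOKEN>"  # placeholder: your org access token
    realm: "<YOUR_REALM>"                # placeholder, for example us0

service:
  pipelines:
    metrics:
      receivers: [collectd]
      exporters: [signalfx, debug]
```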
The endpoint uses 0.0.0.0 to expose port 8081 over the Docker network interface so that both Docker containers can interact.
3. Build and run
Run the example by starting the Docker Compose setup, which also builds the collectd container:
$> docker compose up --build
Check that the Collector is receiving metrics and logging them to stdout via the debug exporter:
$> docker logs otelcollector
A typical output looks like this excerpt:
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-12-20 19:55:44.006000128 +0000 UTC
Value: 38.976566
Metric #17
Descriptor:
-> Name: percent_bytes.reserved
-> Description:
-> Unit:
-> DataType: Gauge
NumberDataPoints #0
Data point attributes:
-> plugin: Str(df)
-> plugin_instance: Str(etc-hosts)
-> host: Str(ea1d62c7a229)
-> dsname: Str(value)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-12-20 19:55:44.006000128 +0000 UTC
Value: 5.102245
{"kind": "exporter", "data_type": "metrics", "name": "debug"}
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
- Submit a case in the Splunk Support Portal.
- Contact Splunk Support.
Available to prospective customers and free trial users
- Ask a question and get answers through community support at Splunk Answers.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups.