Docker Containers
Use this Splunk Observability Cloud integration for the Docker monitor. See benefits, installation, configuration, and metrics.
The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the docker-container-stats
monitor type to read container stats from a Docker API server. Note it doesn’t currently support CPU share/quota metrics.
This integration is available for Kubernetes, Linux, and Windows.
Benefits
After you configure the integration, you can access these features:
- View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
- Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Installation
Follow these steps to deploy this integration:
- Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
- Configure the integration, as described in the Configuration section.
- Restart the Splunk Distribution of the OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:
- Include the Smart Agent receiver in your configuration file.
- Add the monitor type to the Collector configuration, both in the receivers and pipelines sections.
- See how to Use Smart Agent monitors with the Collector.
- See how to set up the Smart Agent receiver.
- For a list of common configuration options, refer to Common configuration settings for monitors.
- Learn more about the Collector at Get started: Understand and use the Collector.
- If you're using this integration with the default Docker daemon domain socket, you might need to add the splunk-otel-collector user to the docker group to have permission to access the Docker API. Run the following command; a configuration sketch for the Docker socket URL follows it:

usermod -aG docker splunk-otel-collector
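If you need to point the monitor at a specific Docker API endpoint, you can set the socket URL explicitly. The following is a minimal sketch of the receiver entry, assuming the dockerURL setting described in the Configuration settings table and the default Unix socket path:

receivers:
  smartagent/docker-container-stats:
    type: docker-container-stats
    # Default Unix socket; set this only if your Docker API endpoint differs
    dockerURL: unix:///var/run/docker.sock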
For a walkthrough of how to send Docker container logs to a Splunk Enterprise instance, see Tutorial: Use the Collector to send container logs to Splunk Enterprise.
Example
To activate this integration, add the following to your Collector configuration:
receivers:
  smartagent/docker-container-stats:
    type: docker-container-stats
    ... # Additional config
Next, add the monitor to the service.pipelines.metrics.receivers
section of your configuration file:
service:
  pipelines:
    metrics:
      receivers: [smartagent/docker-container-stats]
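For a fuller picture, a working metrics pipeline also needs an exporter. The following is a minimal sketch, assuming the signalfx exporter bundled with the Splunk Distribution of the OpenTelemetry Collector and placeholder token and realm values:

receivers:
  smartagent/docker-container-stats:
    type: docker-container-stats

exporters:
  # Placeholder values; use the access token and realm for your organization
  signalfx:
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: us0

service:
  pipelines:
    metrics:
      receivers: [smartagent/docker-container-stats]
      exporters: [signalfx]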
Configuration settings
The following table shows the configuration options for this integration:
| Option | Required | Type | Description |
|---|---|---|---|
| enableExtraBlockIOMetrics | no | bool | Sends all extra block IO metrics. The default value is false. |
| enableExtraCPUMetrics | no | bool | Sends all extra CPU metrics. The default value is false. |
| enableExtraMemoryMetrics | no | bool | Sends all extra memory metrics. The default value is false. |
| enableExtraNetworkMetrics | no | bool | Sends all extra network metrics. The default value is false. |
| dockerURL | no | string | The URL of the Docker server. The default value is unix:///var/run/docker.sock. |
| timeoutSeconds | no | integer | The maximum amount of time to wait for Docker API requests. The default value is 5. |
| cacheSyncInterval | no | string | The time to wait before resyncing the list of containers the monitor maintains through the Docker event listener. An example is 20m. |
| labelsToDimensions | no | map of strings | A mapping of container label names to dimension names. The corresponding label values become the dimension values for the mapped names. |
| envToDimensions | no | map of strings | A mapping of container environment variable names to dimension names. The corresponding environment variable values become the dimension values on the emitted metrics. |
| excludedImages | no | list of strings | A list of filters of images to exclude. Supports literals, globs, and regex. |
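The following is a minimal sketch that shows how several of these settings fit together in the receiver entry. The image pattern and label mapping are placeholders, not required values:

receivers:
  smartagent/docker-container-stats:
    type: docker-container-stats
    # Turn on extra metric groups (off by default)
    enableExtraCPUMetrics: true
    enableExtraMemoryMetrics: true
    # Skip containers whose image matches this glob (placeholder pattern)
    excludedImages:
      - "*pause-amd64*"
    # Emit a dimension named service_name from the app container label (placeholder names)
    labelsToDimensions:
      app: service_name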
Metrics
The metrics available for this integration are listed in the following metadata file:
https://raw.githubusercontent.com/signalfx/splunk-otel-collector/main/internal/signalfx-agent/pkg/monitors/docker/metadata.yaml
Notes
- To learn more about the available metric types in Splunk Observability Cloud, see Metric types.
- In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Splunk Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.
- In MTS-based subscription plans, all metrics are custom.
- To add additional metrics, see how to configure extraMetrics in Add additional metrics. A configuration sketch follows this list.
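As a sketch of that setting, the following receiver entry reports extra metrics matching a glob pattern. The memory.stats.* pattern is only an example; replace it with the metrics you need:

receivers:
  smartagent/docker-container-stats:
    type: docker-container-stats
    # Report non-default metrics that match these patterns (example pattern)
    extraMetrics:
      - memory.stats.*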
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
- Submit a case in the Splunk Support Portal.
- Contact Splunk Support.
Available to prospective customers and free trial users
- Ask a question and get answers through community support at Splunk Answers.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups.
Protocol not available error
If you get the following error message when configuring the monitor on a Windows host:
Error: Error initializing Docker client: protocol not available
edit the configuration and replace unix:///var/run/docker.sock with npipe:////.//pipe//docker_engine.
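For example, the receiver entry on a Windows host might look like the following sketch, which assumes the dockerURL setting from the Configuration settings table:

receivers:
  smartagent/docker-container-stats:
    type: docker-container-stats
    # Windows named pipe instead of the default Unix socket
    dockerURL: npipe:////.//pipe//docker_engine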