Configure the Collector to enable Related Content for Infra and APM
Configure the Collector in host monitoring (agent) mode to enable Related Content
To view your infrastructure data in the APM service dashboards, you need to enable certain components in the OpenTelemetry Collector. To learn more, see Collector components and Process your data with pipelines.
Collector configuration in host monitoring mode
These are the configuration details required:
hostmetrics receiver
Enable cpu, memory, filesystem and network to collect their metrics.
To learn more, see Host metrics receiver.
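For example, a minimal receivers section enabling those scrapers could look like the following sketch; the 10-second collection interval matches the full example later in this section and can be adjusted:
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      memory:
      filesystem:
      network: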
signalfx exporter
The SignalFx exporter aggregates the metrics from the hostmetrics receiver. It also sends metrics such as cpu.utilization, which are referenced in the relevant APM service charts.
To learn more, see SignalFx exporter.
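For example, a minimal exporter entry could look like the following sketch, assuming the standard SPLUNK_ACCESS_TOKEN, SPLUNK_API_URL, and SPLUNK_INGEST_URL environment variables are available, as in the full example later in this section:
exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"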
Correlation flag
Correlation is activated by default and uses the standard SignalFx exporter configuration. This enables the Collector to make the relevant API calls that link your spans with the associated infrastructure metrics.
The SignalFx exporter must be enabled for both the metrics and traces pipelines. To adjust the correlation option further, see the SignalFx exporter’s options at Settings.
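For example, the pipelines could reference the exporter as in this abbreviated sketch (receivers and processors are omitted for brevity; the full example later in this section shows them):
service:
  pipelines:
    traces:
      exporters: [otlphttp, signalfx]  # the signalfx exporter makes the correlation API calls
    metrics:
      exporters: [signalfx]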
resourcedetection processor
This processor enables a unique host.name value to be set for metrics and traces. The host.name is determined by either the EC2 host name or the system host name.
Use the following configuration:
Use the cloud provider or the environment variable to set host.name.
Enable override.
To learn more, see Resource detection processor.
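For example, a sketch of this processor, matching the full example later in this section:
processors:
  resourcedetection:
    detectors: [system, env, gcp, ec2]
    override: true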
resource/add_environment processor (optional)
APM charts require the environment span attribute to be set correctly.
To set this attribute you have two options:
Configure the attribute in instrumentation
Use this processor to insert a deployment.environment span attribute to all spans
To learn more, see Resource processor.
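For example, a sketch of the processor inserting the attribute; the value staging is only an illustration, taken from the full example later in this section:
processors:
  resource/add_environment:
    attributes:
      - action: insert
        key: deployment.environment
        value: staging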
Example
Here are the relevant config snippets from each section:
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      memory:
      network:
processors:
  resourcedetection:
    detectors: [system, env, gcp, ec2]
    override: true
  resource/add_environment:
    attributes:
      - action: insert
        value: staging
        key: deployment.environment
exporters:
  # Traces
  otlphttp:
    endpoint: "${SPLUNK_TRACE_URL}"
    headers:
      "X-SF-Token": "${SPLUNK_ACCESS_TOKEN}"
  # Metrics + Events + APM correlation calls
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
    sync_host_metadata: true
    correlation:
service:
  extensions: [health_check, http_forwarder, zpages]
  pipelines:
    traces:
      receivers: [jaeger, zipkin]
      processors: [memory_limiter, batch, resourcedetection, resource/add_environment]
      exporters: [otlphttp, signalfx]
    metrics:
      receivers: [hostmetrics]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
Configure the Collector to enable Related Content in host monitoring (agent) and data forwarding (gateway) modes
If you need to run the OpenTelemetry Collector in both host monitoring (agent) and data forwarding (gateway) modes, refer to the following sections.
For more information, see Collector deployment modes.
Configure the agent
Follow the same steps as mentioned in the previous section and include the following changes:
http_forwarder extension
The http_forwarder listens on port 6060 and sends all the REST API calls directly to Splunk Observability Cloud.
If your agent cannot talk to the Splunk SaaS backend directly, set the egress endpoint to the URL of the gateway.
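For example, a sketch of the extension pointing at a gateway, assuming ${SPLUNK_GATEWAY_URL} resolves to your gateway host and the gateway's http_forwarder listens on its default port 6060:
extensions:
  http_forwarder:
    egress:
      endpoint: "http://${SPLUNK_GATEWAY_URL}:6060"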
signalfx exporter
Keep the SignalFx exporter in the traces pipeline so that the correlation API calls are still made. If you want, you can also use the exporter for metrics, although it's best to use the OTLP exporter. See otlp exporter (optional) for more details.
Use the following configuration:
Set the api_url endpoint to the URL of the gateway. Specify the ingress port of the http_forwarder of the gateway, which is 6060 by default.
Set the ingest_url endpoint to the URL of the gateway. Specify the ingress port of the signalfx receiver of the gateway, which is 9943 by default.
All pipelines
Send all metrics, traces and logs pipelines to the appropriate receivers on the gateway.
Example
Here are the relevant config snippets from each section:
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      memory:
      network:
processors:
  resourcedetection:
    detectors: [system, env, gcp, ec2]
    override: true
  resource/add_environment:
    attributes:
      - action: insert
        value: staging
        key: deployment.environment
exporters:
  # Traces
  otlp:
    endpoint: "${SPLUNK_GATEWAY_URL}:4317"
    tls:
      insecure: true
  # Metrics + Events + APM correlation calls
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "http://${SPLUNK_GATEWAY_URL}:6060"
    ingest_url: "http://${SPLUNK_GATEWAY_URL}:9943"
service:
  extensions: [health_check, http_forwarder, zpages]
  pipelines:
    traces:
      receivers: [jaeger, zipkin]
      processors: [memory_limiter, batch, resourcedetection, resource/add_environment]
      exporters: [otlp, signalfx]
    metrics:
      receivers: [hostmetrics]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [otlp]
Configure the gateway
In gateway mode, enable the relevant receivers to match the exporters from the agent. In addition, you need to make the following changes.
http_forwarder extension
The http_forwarder listens on port 6060 and sends all the REST API calls directly to Splunk Observability Cloud.
In Gateway mode, set the egress endpoint to the Splunk Observability Cloud SaaS endpoint.
signalfx exporter
Leave both the translation_rules and exclude_metrics settings at their default values; they can therefore be commented out or simply removed. This ensures that the hostmetrics aggregations that are normally performed by the SignalFx exporter on the agent are performed by the SignalFx exporter on the gateway instead.
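For example, a sketch of the exporter with both settings left at their defaults:
exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"
    # translation_rules and exclude_metrics are omitted so their defaults apply,
    # and the gateway performs the hostmetrics aggregations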
Example
Here are the relevant config snippets from each section:
extensions:
  http_forwarder:
    egress:
      endpoint: "https://api.${SPLUNK_REALM}.signalfx.com"
receivers:
  otlp:
    protocols:
      grpc:
      http:
  signalfx:
exporters:
  # Traces
  otlphttp:
    traces_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp"
    headers:
      "X-SF-Token": "${SPLUNK_ACCESS_TOKEN}"
  # Metrics + Events
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"
service:
  extensions: [http_forwarder]
  pipelines:
    traces:
      receivers: [otlp]
      processors:
        - memory_limiter
        - batch
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [signalfx]
Use the SignalFx exporter on both Collector modes
Alternatively, if you want to use the SignalFx exporter for metrics on both host monitoring (agent) and data forwarding (gateway) modes, you need to disable the aggregation at the gateway. To do so, you must set the translation_rules and exclude_metrics to empty lists.
Example
Configure the Collector in data forwarding (gateway) mode as follows:
exporters:
  # Traces
  otlphttp:
    traces_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp"
    headers:
      "X-SF-Token": "${SPLUNK_ACCESS_TOKEN}"
  # Metrics + Events
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"
    translation_rules: []
    exclude_metrics: []
service:
  extensions: [http_forwarder]
  pipelines:
    traces:
      receivers: [otlp]
      processors:
        - memory_limiter
        - batch
      exporters: [otlphttp]
    metrics:
      receivers: [signalfx]
      processors: [memory_limiter, batch]
      exporters: [signalfx]