SignalFx receiver

The SignalFx receiver collects metrics and logs in SignalFx proto format.

The SignalFx receiver is a native OpenTelemetry component that allows the Splunk Distribution of the OpenTelemetry Collector to collect data in SignalFx proto format. This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector in the metrics and logs/signalfx pipelines when you deploy the collector in host monitoring (agent) mode.

Supported pipeline types are metrics and logs. See Process your data with pipelines for more information.

The SignalFx receiver accepts:

  • Metrics in SignalFx proto format

  • Logs and events in SignalFx proto format

Deploy the collector

See Deploy the Splunk Distribution of the OpenTelemetry Collector.

Configure the receiver

Edit the OpenTelemetry Collector configuration file to add the SignalFx receiver:

  1. Add signalfx to the receivers section of your OpenTelemetry Collector configuration file.

    Tip: Passing your access token through the access_token_passthrough parameter is deprecated. Replace it with either the headers_setter extension or the combination of include_metadata: true and the batch processor. The latter is the recommended method because it ensures that the access token used by default is the one the receiver sends. Both methods are illustrated below.
    Default configuration
    CAUTION: Don't remove the signalfx receiver from the default configuration. If you need to change its settings, use the existing receiver or create a separate receiver configuration.
    receivers:
      signalfx:
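        # Default ingest endpoint: listen on all network interfaces on port 9943.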
        endpoint: 0.0.0.0:9943
    
    Separate configuration with headers_setter

    Use the headers_setter extension to pass specific key-value pairs (such as your access token) through from the requests the receiver handles to the collector's outgoing requests.

    extensions:
      headers_setter:
        headers:
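          # Copy X-SF-TOKEN from the incoming request's context onto each
          # outgoing request; use the default token when the value is absent.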
          - action: upsert
            key: X-SF-TOKEN
            from_context: X-SF-TOKEN
            default_value: "${SPLUNK_OBSERVABILITY_ACCESS_TOKEN}"
    
    receivers:
      signalfx: null
      signalfx/allsettings:
        endpoint: 'localhost:9943'
        # Required so that headers_setter can read X-SF-TOKEN from the
        # request context through from_context.
        include_metadata: true
        
      signalfx/tls:
        tls:
          cert_file: /test.crt
          key_file: /test.key
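
    For the headers_setter extension to take effect, reference it from your exporter and activate it in the service section. The following is a minimal sketch, assuming an exporter whose HTTP client supports the auth setting, such as the SignalFx exporter; the token and URL values are the environment variable placeholders commonly used by the Splunk distribution:

    exporters:
      signalfx:
        access_token: "${SPLUNK_OBSERVABILITY_ACCESS_TOKEN}"
        ingest_url: "${SPLUNK_INGEST_URL}"
        api_url: "${SPLUNK_API_URL}"
        # Let the headers_setter extension set headers on outgoing requests.
        auth:
          authenticator: headers_setter

    service:
      extensions: [headers_setter]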
    
    Separate configuration with include_metadata and the batch processor

    Use the include_metadata: true parameter combined with the batch processor as shown in the example below.

    receivers:
      signalfx: null
      signalfx/allsettings:
        endpoint: 0.0.0.0:9943
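        # Store incoming request headers, such as X-SF-Token, in the request
        # context so that downstream pipeline components can read them.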
        include_metadata: true
      signalfx/tls:
        tls:
          cert_file: /test.crt
          key_file: /test.key
    
    processors:
      batch:
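        # Keep data received with different values of the listed metadata
        # keys in separate batches, so each client's token is preserved.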
        metadata_keys:
          - X-SF-Token
    
  2. Configure any additional settings you need. See the Settings section on this page for the complete list of options.

  3. Add the signalfx receiver to the receivers arrays of both the metrics and logs pipelines. In the following example, the signalfx entries in the exporters arrays refer to the SignalFx exporter:

    service:
      pipelines:
        metrics:
          receivers: [signalfx]
          processors: [memory_limiter, batch]
          exporters: [signalfx]
        logs:
          receivers: [signalfx]
          processors: [memory_limiter, batch]
          exporters: [signalfx]
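
    If you use a named receiver such as signalfx/allsettings instead of the default one, reference that name in the pipeline. A minimal sketch:

    service:
      pipelines:
        metrics:
          receivers: [signalfx/allsettings]
          processors: [memory_limiter, batch]
          exporters: [signalfx]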

Restart the collector

To apply your configuration changes, restart the collector. The restart command for the Splunk Distribution of the OpenTelemetry Collector depends on the platform and the tool you used to deploy it. The following are general examples:
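
For example, on Linux installations where the collector runs as a systemd service (the service name assumes a default installation):

sudo systemctl restart splunk-otel-collector

On Windows installations where the collector runs as a Windows service, from an elevated PowerShell prompt:

Restart-Service splunk-otel-collector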

Settings

The following metadata file lists the configuration options for the SignalFx receiver:

CAUTION: If you use the access_token_passthrough setting with any exporter other than the SignalFx exporter, the receiver might reveal all organization access tokens. If you activate this setting, you must use the SignalFx receiver with the SignalFx exporter.

https://raw.githubusercontent.com/splunk/collector-config-tools/main/cfg-metadata/receiver/signalfx.yaml

Metrics

The available metrics, resource attributes, and attributes are described in the following metadata file:

https://raw.githubusercontent.com/splunk/collector-config-tools/main/metric-metadata/signalfxreceiver.yaml

Activate or deactivate specific metrics

You can activate or deactivate specific metrics by setting the enabled field in the metrics section for each metric. For example:

receivers:
  samplereceiver:
    metrics:
      metric-one:
        enabled: true
      metric-two:
        enabled: false

The following is an example of host metrics receiver configuration with activated metrics:

receivers:
  hostmetrics:
    scrapers:
      process:
        metrics:
          process.cpu.utilization:
            enabled: true

Note: Deactivated metrics aren’t sent to Splunk Observability Cloud.

Billing

  • If you’re in an MTS-based subscription, all metrics count towards metrics usage.

  • If you’re in a host-based plan, metrics listed as active (Active: Yes) in this document are considered default and are included free of charge.

Learn more at Infrastructure Monitoring subscription usage (Host and metric plans).

Troubleshooting

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

Available to Splunk Observability Cloud customers

  • Submit a case in the Splunk Support Portal.

  • Contact Splunk Support.

Available to prospective customers and free trial users

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk community #observability Slack channel to communicate with customers, partners, and Splunk employees worldwide.