Send traces from Istio to Splunk Observability Cloud

Send telemetry from your Istio service mesh to Splunk Observability Cloud.

You can send traces, metrics, and logs from your Istio service mesh to Splunk Observability Cloud by configuring both the Splunk Distribution of OpenTelemetry Collector and Istio.

Requirements

To send telemetry from Istio to Splunk Observability Cloud, you need the following:

  • Istio 1.16.1 or higher. OpenTelemetry tracing support was introduced in Istio 1.16.1.

  • Splunk Distribution of OpenTelemetry Collector running on Kubernetes in host monitoring (agent) mode. See Install the Collector for Kubernetes using Helm.

  • Splunk APM instrumentation with W3C Trace Context and W3C Baggage context propagation. To set this up, set the OTEL_PROPAGATORS environment variable to "tracecontext,baggage".
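    For example, the following is a minimal sketch of setting this variable on an instrumented workload. The Deployment name, labels, and image are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-instrumented-service # Placeholder name
    spec:
      selector:
        matchLabels:
          app: my-instrumented-service
      template:
        metadata:
          labels:
            app: my-instrumented-service # Istio uses the app label to name the service
        spec:
          containers:
            - name: app # Placeholder container name
              image: my-registry/my-service:1.0.0 # Placeholder image
              env:
                - name: OTEL_PROPAGATORS
                  value: "tracecontext,baggage"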

Enable tracing on your Istio service mesh

Apply the following configuration:

apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: otel-demo # Use any name you want
spec:
  tracing:
    - providers:
        - name: otel-tracing # Must match the extension provider name defined in the IstioOperator configuration
      randomSamplingPercentage: 100 # Keep all traces
      customTags:
        "environment.deployment":
          literal:
            value: "dev"

Install and configure the Splunk Distribution of OpenTelemetry Collector

Deploy the Splunk Distribution of OpenTelemetry Collector onto your Kubernetes cluster in host monitoring (agent) mode. The required collector components depend on product entitlements and the data you want to collect. See Configure the Collector for Kubernetes with Helm.

In the Helm chart for the collector, set the autodetect.istio parameter to true by passing --set autodetect.istio=true to the helm install or helm upgrade commands.

You can also add the following snippet to your values YAML file, which you can pass using the -f myvalues.yaml argument:

autodetect:
  istio: true
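
For example, the following commands install the chart with the values file. This is a sketch only: the release name (soc-chart), realm, access token, and cluster name are placeholders that you replace with your own values.

helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
helm install soc-chart splunk-otel-collector-chart/splunk-otel-collector \
   --set splunkObservability.realm=<realm> \
   --set splunkObservability.accessToken=<access_token> \
   --set clusterName=<cluster_name> \
   -f myvalues.yaml

The soc-chart release name in this sketch matches the soc-chart-splunk-otel-collector-agent service name referenced later in this topic.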

Ensure that data forwarding doesn’t generate telemetry

Forwarding telemetry from Istio to the Collector might generate undesired telemetry. To avoid this, do one of the following:

  • Run the Collector in a separate namespace where Istio sidecar injection is not enabled.

  • Add a label to the Collector pods to prevent the injection of the Istio proxy. This is the default configuration when the autodetect.istio parameter is set to true. See the example after this list.

  • If you need the Istio proxy in the Collector pods, deactivate tracing in those pods. For example:

    # ...
    otelK8sClusterReceiver:
      podAnnotations:
        proxy.istio.io/config: '{"tracing":{}}'
    otelCollector:
      podAnnotations:
        proxy.istio.io/config: '{"tracing":{}}'
Note: The instrumentation pods belong to a DaemonSet and aren't injected with a proxy by default. If Istio injects proxies into instrumentation pods, deactivate tracing using a podAnnotation.
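
If you manage the label yourself, Istio skips sidecar injection for pods that carry the sidecar.istio.io/inject label set to "false". The following sketch shows where the label goes in a workload's pod template; the surrounding fields are elided:

# ...
spec:
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: "false" # Tells Istio not to inject the proxy sidecar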

Configure the Istio Operator

Follow these steps:

  1. Configure the OpenTelemetry tracer to send data to the Splunk Distribution of OpenTelemetry Collector running on the host.

    For example:

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      meshConfig:
        # Requires Splunk Log Observer
        accessLogFile: /dev/stdout
        # Requires Splunk APM
        enableTracing: true
        extensionProviders:
          - name: otel-tracing
            opentelemetry:
              port: 4318
              service: soc-chart-splunk-otel-collector-agent.default.svc.cluster.local
              maxTagLength: 99999
              http:
                path: "/v1/traces"
                timeout: 5s
              resource_detectors:
                environment: {}
    Note:
    • The IstioOperator.meshConfig.extensionProviders.opentelemetry.service parameter must start with the name of the Splunk agent service (soc-chart-splunk-otel-collector-agent, where soc-chart is the Helm release name), followed by the namespace (default), followed by svc.cluster.local. For details on this parameter, see the Istio MeshConfig reference documentation.

    • Set custom tags and sampling in the Telemetry resource (the customTags and randomSamplingPercentage fields), rather than here, to conform to updated Istio resource definitions.
  2. Save the configuration to a file, for example tracing.yaml, and activate it:

    istioctl install -f ./tracing.yaml
  3. Restart the pods that contain the Istio proxy to activate the new tracing configuration.
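
    For example, to restart every Deployment in a namespace that is part of the mesh, run the following command. The namespace is a placeholder:

    kubectl rollout restart deployment -n default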

Update all pods in the service mesh

Update all pods that are in the Istio service mesh to include an app label. Istio uses this label to determine the service name that appears in Splunk APM.

Note: If you don’t set the app label, identifying the relationship between the proxy and your service is more difficult.
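
For example, the following hypothetical patch adds the app label to an existing Deployment's pod template. The Deployment name, namespace, and label value are placeholders:

kubectl patch deployment my-service -n default --type merge \
   -p '{"spec":{"template":{"metadata":{"labels":{"app":"my-service"}}}}}'

Patching the pod template also restarts the pods, which picks up the label and the new tracing configuration.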

Recommendations

To make the best use of full-fidelity data retention, configure Istio to send as much trace data as possible by setting the sampling percentage and the maximum tag length as follows:

  • Set randomSamplingPercentage in the Telemetry resource to 100 to ensure that all traces have correct root spans.

  • Set maxTagLength in the otel-tracing extension provider to 99999 to avoid truncating key tags.

For more information on how to configure Istio, see the Istio distributed tracing installation documentation.

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

Available to prospective customers and free trial users:

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk community #observability Slack channel to communicate with customers, partners, and Splunk employees worldwide.