Instrument your Python application for Splunk Observability Cloud

The Splunk OpenTelemetry Python agent can automatically instrument your Python application or service. Follow these steps to get started.

Note: Due to changes in the upstream OpenTelemetry documentation, "automatic instrumentation" has been changed to "zero-code instrumentation". For more information, see Instrumentation methods for Splunk Observability Cloud.

The Python agent from the Splunk Distribution of OpenTelemetry Python can automatically instrument your Python application by dynamically patching supported libraries.

To get started, use the guided setup or follow the instructions manually.

Generate customized instructions using the guided setup

To generate all the basic installation commands for your environment and application, use the Python guided setup. To access the Python guided setup, follow these steps:

  1. Log in to Splunk Observability Cloud.

  2. Open the Python guided setup. Optionally, you can navigate to the guided setup on your own:

    1. In the navigation menu, select Data Management.

    2. Go to the Available integrations tab, or select Add Integration in the Deployed integrations tab.

    3. In the integration filter menu, select By Product.

    4. Select the APM product.

    5. Select the Python tile to open the Python guided setup.

Install the Splunk Distribution of OpenTelemetry Python manually

If you don’t use the guided setup, follow these instructions to manually install the Splunk Distribution of OpenTelemetry Python:

Install and activate the Python agent

Follow these steps to automatically instrument your application using the Python agent:

  1. Check that you meet the requirements. See Python agent compatibility and requirements.

  2. Install the splunk-opentelemetry package:

    BASH
    pip install splunk-opentelemetry

    If you’re using a requirements.txt or pyproject.toml file, add splunk-opentelemetry to it.

  3. Run the bootstrap script to install instrumentation for every supported package in your environment:

    BASH
    opentelemetry-bootstrap -a install

    To print the instrumentation packages to the console instead of installing them, run opentelemetry-bootstrap --action=requirements. You can then add the output to your requirements or Pipfile.

  4. Set the OTEL_SERVICE_NAME environment variable:

    Linux
    SHELL
    export OTEL_SERVICE_NAME=<yourServiceName>
    Windows PowerShell
    SHELL
    $env:OTEL_SERVICE_NAME=<yourServiceName>
  5. (Optional) Set the endpoint URL if the Splunk Distribution of OpenTelemetry Collector is running on a different host:

    Linux
    SHELL
    export OTEL_EXPORTER_OTLP_ENDPOINT=<yourCollectorEndpoint>:<yourCollectorPort>
    Windows PowerShell
    SHELL
    $env:OTEL_EXPORTER_OTLP_ENDPOINT=<yourCollectorEndpoint>:<yourCollectorPort>
  6. (Optional) Set the deployment environment and service version:

    Linux
    BASH
    export OTEL_RESOURCE_ATTRIBUTES='deployment.environment=<envtype>,service.version=<version>'
    Windows PowerShell
    SHELL
    $env:OTEL_RESOURCE_ATTRIBUTES='deployment.environment=<envtype>,service.version=<version>'
  7. Activate the Splunk OTel Python agent by editing your Python service command.

    For example, open your Python application as follows:

    BASH
    python3 main.py --port=8000

    Then prefix the command with opentelemetry-instrument:

    BASH
    opentelemetry-instrument python3 main.py --port=8000
    Note: To instrument uWSGI applications, see Manually instrument Python applications for Splunk Observability Cloud.
  8. (Optional) Perform additional steps if you’re using the Django framework. See Instrument your Django application.
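As a concrete target for the steps above, here is a minimal, hypothetical main.py built only on the Python standard library. The self-request at the end is there only so the sketch runs on its own; under opentelemetry-instrument, the urllib client call is one of the libraries the bootstrap step can instrument.

```python
# main.py -- a minimal, hypothetical stdlib application to try the agent
# against, launched as: opentelemetry-instrument python3 main.py
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        # Silence per-request logging so the demo output stays clean.
        pass

# Port 0 asks the OS for a free port; a real service would take --port.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Self-request so the sketch is runnable without an external client.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    body = resp.read()
server.shutdown()
```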

Application metrics are collected by default. See Metrics and attributes collected by the Splunk Distribution of OpenTelemetry Python for more information.

If no data appears in APM, see Troubleshoot Python instrumentation for Splunk Observability Cloud.

Activate AlwaysOn Profiling

Note: AlwaysOn Profiling for Python is in beta. This feature is provided by Splunk to you "as is" without any warranties, maintenance and support, or service-level commitments. Use of this feature is subject to the Splunk General Terms.

To activate AlwaysOn Profiling, set the SPLUNK_PROFILER_ENABLED environment variable to true or call the start_profiling function in your application code.

The following example shows how to activate the profiler from your application code:

PYTHON
from splunk_otel.profile import start_profiling

# Activates CPU profiling
start_profiling()

See Get data into Splunk APM AlwaysOn Profiling for more information. For additional settings, see Python settings for AlwaysOn Profiling.
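Both activation paths can also be combined in application code. A small, hypothetical pattern that mirrors the SPLUNK_PROFILER_ENABLED semantics, so one build works with profiling on or off:

```python
import os

# Hypothetical pattern: start the profiler from code only when the
# SPLUNK_PROFILER_ENABLED flag is set in the environment.
profiler_enabled = (
    os.environ.get("SPLUNK_PROFILER_ENABLED", "false").lower() == "true"
)

if profiler_enabled:
    # Import lazily so the dependency is only touched when profiling is on.
    from splunk_otel.profile import start_profiling
    start_profiling()
```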

Call graph collection

Call graph collection enables snapshot‑based profiling in Python applications using the Splunk Distribution of OpenTelemetry Python. It is disabled by default and must be explicitly enabled using environment variables.

Enable call graph collection

Use the following environment variables to configure snapshot profiling and call graph collection:

  • SPLUNK_SNAPSHOT_PROFILER_ENABLED: Enables the snapshot profiler required for call graph generation. Default: false

  • SPLUNK_SNAPSHOT_SAMPLING_INTERVAL: Sampling interval in milliseconds. Lower values increase stack capture frequency. Default: 10 ms

  • SPLUNK_SNAPSHOT_SELECTION_PROBABILITY: Fraction of traces selected for snapshot profiling. Maximum value is 1.0. Default: 0.01

  • SPLUNK_PROFILER_LOGS_ENDPOINT: Overrides the OTLP logs endpoint used to export profiling data. Applies to both continuous and snapshot profiling. Default: the value of OTEL_EXPORTER_OTLP_LOGS_ENDPOINT

Note: Adjust sampling and selection rates based on your application's workload and performance sensitivity. Lower intervals increase visibility but may introduce overhead.
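The variables above are plain strings in the process environment. As a sketch, they could be seeded from Python before the agent initializes, though shell exports or your process manager are the usual mechanism (assumption: this code runs before the agent reads its configuration):

```python
import os

# Seed snapshot-profiler settings if the environment doesn't already
# define them; setdefault leaves any externally supplied values intact.
os.environ.setdefault("SPLUNK_SNAPSHOT_PROFILER_ENABLED", "true")
os.environ.setdefault("SPLUNK_SNAPSHOT_SAMPLING_INTERVAL", "10")       # milliseconds
os.environ.setdefault("SPLUNK_SNAPSHOT_SELECTION_PROBABILITY", "0.01") # 1% of traces
```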

Compatibility with Continuous Profiler

If thread sampling via the continuous profiler is enabled (SPLUNK_PROFILER_ENABLED=true), both continuous profiling and call graph (snapshot) profiling operate simultaneously in the Splunk Distribution of OpenTelemetry Python.

Continuous profiling samples all threads at the interval defined by SPLUNK_PROFILER_CALL_STACK_INTERVAL (default 1000 ms), while call graph profiling collects stack traces for selected traces at the interval defined by SPLUNK_SNAPSHOT_SAMPLING_INTERVAL (default 10 ms). Because snapshot profiling uses a much higher sampling frequency, it provides more detailed call graph data for selected traces, while continuous profiling offers a lower‑overhead, system‑wide view of thread activity.
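To see the difference in sampling density, a quick back-of-the-envelope calculation with the default intervals:

```python
# Expected stack captures per second at the default intervals.
continuous_interval_ms = 1000  # SPLUNK_PROFILER_CALL_STACK_INTERVAL default
snapshot_interval_ms = 10      # SPLUNK_SNAPSHOT_SAMPLING_INTERVAL default

continuous_hz = 1000 / continuous_interval_ms  # 1 capture per second
snapshot_hz = 1000 / snapshot_interval_ms      # 100 captures per second

# A 50 ms span selected for snapshot profiling can expect about 5 samples,
# while continuous profiling expects about 0.05 in the same window.
span_ms = 50
expected_snapshot_samples = span_ms / snapshot_interval_ms
expected_continuous_samples = span_ms / continuous_interval_ms
```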

Both profilers export their data through the OTel logs pipeline, which can then export to console, OTLP, or other exporters. Enabling both at the same time increases profiling overhead, so test configuration changes in a staging environment before applying them to production.

Call graph visibility

In asynchronous, event-driven environments, sampled traces may occasionally lack call stack data. This is expected behavior and typically occurs when no application code is running at the sampled moments, for example:

  • Awaiting a remote service response over HTTP or gRPC.

  • Application logic executing entirely between sampling intervals.

During these periods, no user code is actively executing. If the snapshot profiler samples during such a gap, no stack is captured. As a result, even a successfully processed request may appear without call graph data.

This behavior reflects the limitations of sampling-based profiling and does not indicate a misconfiguration.
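The effect is easy to quantify. With the default 10 ms interval, a handler that spends only a few milliseconds on the CPU often falls entirely between samples; a rough model, assuming a uniformly random sampling phase:

```python
# Probability that at least one sample lands inside d ms of on-CPU work
# when sampling every T ms with a uniformly random phase: min(1, d / T).
def capture_probability(on_cpu_ms: float, interval_ms: float = 10.0) -> float:
    return min(1.0, on_cpu_ms / interval_ms)

# A 4 ms handler is captured only ~40% of the time at the default interval,
# so some successfully processed requests show no call graph data.
p = capture_probability(4.0)
```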

Configure the Python agent

In most cases, the only configuration setting you need to enter is the service name. You can also define other basic settings, like the deployment environment, the service version, and the endpoint, among others.

For advanced configuration of the Python agent, like changing trace propagation formats, correlating traces and logs, or configuring server trace data, see Configure the Python agent for Splunk Observability Cloud.

Deploy the Python agent in Kubernetes

To deploy the Python agent in Kubernetes, configure the Kubernetes Downward API to expose environment variables to Kubernetes resources.

The following example shows how to update a deployment to expose environment variables by adding the agent configuration under the .spec.template.spec.containers.env section:
YAML
apiVersion: apps/v1
kind: Deployment
spec:
  selector:
    matchLabels:
      app: your-application
  template:
    spec:
      containers:
        - name: myapp
          env:
            - name: SPLUNK_OTEL_AGENT
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://$(SPLUNK_OTEL_AGENT):4317"
            - name: OTEL_SERVICE_NAME
              value: "<serviceName>"
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "deployment.environment=<environmentName>"

Send data directly to Splunk Observability Cloud

By default, the agent sends all telemetry to the local instance of the Splunk Distribution of OpenTelemetry Collector.

To send data directly to Splunk Observability Cloud, set the following environment variables:

Linux
BASH
export SPLUNK_ACCESS_TOKEN=<access_token>
export SPLUNK_REALM=<realm>
Windows PowerShell
SHELL
$env:SPLUNK_ACCESS_TOKEN=<access_token>
$env:SPLUNK_REALM=<realm>

To obtain an access token, see Retrieve and manage user API access tokens using Splunk Observability Cloud.

To find your Splunk realm, see the note about realms in Configure SSO integrations for Splunk Observability Cloud.

Note: For more information on the ingest API endpoints, see Send APM traces.

Specify the source host

To override the host name that the agent reports, use the OTEL_RESOURCE_ATTRIBUTES environment variable to set host.name to the desired source:

Linux
BASH
export OTEL_RESOURCE_ATTRIBUTES=host.name=<host_name>
Windows PowerShell
SHELL
$env:OTEL_RESOURCE_ATTRIBUTES=host.name=<host_name>
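OTEL_RESOURCE_ATTRIBUTES is a comma-separated list of key=value pairs, so host.name can be combined with the deployment attributes shown earlier. A sketch of how such a string breaks down (the values here are hypothetical placeholders):

```python
# Hypothetical value combining host.name with other resource attributes.
raw = "host.name=web-01,deployment.environment=prod,service.version=1.4.2"

# Split on commas, then on the first "=" of each pair.
attrs = dict(pair.split("=", 1) for pair in raw.split(","))
```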

Instrument Lambda functions

You can instrument AWS Lambda functions using the Splunk OpenTelemetry Lambda Layer. See Instrument your AWS Lambda function for Splunk Observability Cloud for more information.