Configure zero-code instrumentation for Python AI applications

Instrument your backend Python AI applications to send metrics, traces, and evaluations to Splunk Observability Cloud.

Zero-code instrumentation exports telemetry data using the Splunk Distribution for OpenTelemetry Generative AI utility and requires no changes to your application code. The instrumentation agent configures the source application to export data in a supported format to an OTLP endpoint, either an OTLP receiver (such as the OpenTelemetry Collector) or the Splunk Observability Cloud backend.
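The export destination is controlled through standard OpenTelemetry environment variables. The following is a minimal sketch; the endpoint URL and service name are placeholders you must adapt to your environment:

```shell
# Endpoint of a local OTLP receiver (placeholder; adjust for your setup)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"

# Name under which your application's telemetry appears
export OTEL_SERVICE_NAME="my-ai-agent"

# Optional: attach deployment metadata as resource attributes
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=dev"
```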

Prerequisites

Zero-code instrumentation requires Python 3.10 or higher.

Additional prerequisites apply to some instrumentation frameworks. For details, see the section for your instrumentation framework.

Zero-code instrumentation integrations

CrewAI

Prerequisites

To instrument a CrewAI application, you must have access to an LLM provider, either through an OpenAI API key or OAuth2 credentials.

The CrewAI instrumentation captures workflow orchestration for crews, tasks, agents, and tools, but does not directly instrument LLM and embedding calls. For complete observability, including LLM call details such as token usage, model names, and latency, you must also install and enable provider-specific instrumentation packages:

  • OpenAI and Azure OpenAI: pip install opentelemetry-instrumentation-openai-v2

  • Anthropic: pip install opentelemetry-instrumentation-anthropic

  • Other providers: Check the splunk-otel-python-contrib/instrumentation-genai directory for available providers.

Without provider instrumentation, you see the workflow structure but not the detailed LLM call spans shown in the trace view. These LLM call spans are represented by chat (OpenAI/LiteLLM) in the Expected Trace Structure diagram.
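To check whether the provider-specific packages are present in your environment, you can query installed distributions with the Python standard library. The following is an illustrative sketch; the helper function name is an assumption, and the package names match the pip install commands above:

```python
from importlib.metadata import version, PackageNotFoundError

def is_installed(package: str) -> bool:
    """Return True if the given pip distribution is installed."""
    try:
        version(package)
        return True
    except PackageNotFoundError:
        return False

# Check for the CrewAI instrumentation and a provider package
for pkg in ("splunk-otel-instrumentation-crewai",
            "opentelemetry-instrumentation-openai-v2"):
    print(f"{pkg}: {'installed' if is_installed(pkg) else 'missing'}")
```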

Steps

  1. Install the instrumentation package:
    CODE
    pip install splunk-otel-instrumentation-crewai
  2. Run your application with the instrumentation:
    CODE
    opentelemetry-instrument python <your_app>.py

LangChain/LangGraph

Prerequisites

If your application is already instrumented with an OpenTelemetry SDK, you must upgrade your OpenTelemetry dependencies to version 1.38.0 or higher.
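If you need to upgrade, you can pin the core OpenTelemetry packages with pip. This is a sketch of the minimal upgrade; the exact set of packages to upgrade depends on what your application already uses:

```shell
# Upgrade the core OpenTelemetry packages to 1.38.0 or higher
pip install --upgrade "opentelemetry-api>=1.38.0" "opentelemetry-sdk>=1.38.0"
```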

Note: Avoid double-instrumenting applications with the LangChain/LangGraph instrumentation. Doing so may lead to duplicated evaluation results.

Steps

  1. Configure your LangChain/LangGraph application for AI agent monitoring:
    1. Set the agent_name for your Chains. This setting ensures that the instrumentation promotes your Chains to AgentInvocation spans and evaluates them with LLM-as-a-Judge evaluators. For example:
      Python
      agent = create_react_agent(llm, tools=[]).with_config(
          metadata={"agent_name": "coordinator"}
      )
    2. Set the workflow_name to promote the Chain or Graph to a workflow. For example:
      Python
      app = StateGraph(state).compile().with_config(metadata={"workflow_name": "multi_agent_travel_planner"})
  2. Install the instrumentation package:
    CODE
    pip install splunk-otel-instrumentation-langchain
  3. Run your application with the instrumentation:
    CODE
    opentelemetry-instrument python <your_app>.py

LlamaIndex

Steps

  1. Install the instrumentation package:
    CODE
    pip install splunk-otel-instrumentation-llamaindex
  2. Run your application with the instrumentation:
    CODE
    opentelemetry-instrument python <your_app>.py

OpenAI

Prerequisites

To instrument an OpenAI application, you must have an OpenAI API key (OPENAI_API_KEY) or access to an OpenAI-compatible LLM endpoint.

Steps

Note: By default, the LangChain instrumentation suppresses telemetry from OpenAI. This behavior cannot be changed through the OpenAI instrumentation.
  1. Install the instrumentation package:
    CODE
    pip install splunk-otel-instrumentation-openai
  2. Run your application with the instrumentation:
    CODE
    opentelemetry-instrument python <your_app>.py

OpenAI agents

Prerequisites

To instrument an OpenAI agent application, you must meet the following requirements:
  • You have an OpenAI API key (OPENAI_API_KEY) or access to an OpenAI-compatible LLM endpoint.

  • You have installed openai-agents-python SDK 0.3.3 or higher.

Steps

  1. Install the instrumentation package:
    CODE
    pip install splunk-otel-instrumentation-openai-agents
  2. Run your application with the instrumentation:
    CODE
    opentelemetry-instrument python <your_app>.py

Weaviate

Steps

  1. Install the instrumentation package:
    CODE
    pip install splunk-otel-instrumentation-weaviate
  2. Start a local Weaviate instance:
    CODE
    docker run -d -p 8080:8080 -p 50051:50051 cr.weaviate.io/semitechnologies/weaviate:latest
  3. Run your application with the instrumentation:
    CODE
    opentelemetry-instrument python <your_app>.py
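Before running your application, you can verify that the local instance started in step 2 is reachable through Weaviate's readiness endpoint, assuming the default port mapping from the docker run command above:

```shell
# Returns HTTP 200 when Weaviate is ready to serve requests
curl -sf http://localhost:8080/v1/.well-known/ready && echo "Weaviate is ready"
```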

Next steps

To finish setting up AI Agent Monitoring, proceed to the next step in Set up AI Agent Monitoring.