Configure zero-code instrumentation for Python AI applications
Instrument your backend Python AI applications to send metrics, traces, and evaluations to Splunk Observability Cloud.
Zero-code instrumentation exports telemetry data using the Splunk Distribution of OpenTelemetry Generative AI utility and does not require changes to your application code. The instrumentation agent configures the source application to export data in a supported format to an OTLP endpoint, either an OTLP receiver or the Splunk Observability Cloud backend.
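The OTLP endpoint and service identity are controlled through standard OpenTelemetry environment variables. A minimal sketch, assuming a local OTLP receiver on the default gRPC port (the endpoint, service name, and attribute values are placeholders, not defaults):

```shell
# Point the instrumentation at your OTLP receiver (values are illustrative).
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_SERVICE_NAME="my-ai-app"
# Optional: attach deployment metadata as resource attributes.
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=dev"
```

Set these variables in the shell that launches your application so the agent picks them up at startup.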
Prerequisites
Zero-code instrumentation requires Python 3.10 or higher.
Additional prerequisites may apply for certain instrumentation frameworks. For more details, see the respective section for your instrumentation framework.
Zero-code instrumentation integrations
CrewAI
Prerequisites
To instrument a CrewAI application, you must have access to an LLM provider, either through an OpenAI API key or OAuth2 credentials.
The CrewAI instrumentation captures workflow orchestration for crews, tasks, agents, and tools, but does not directly instrument LLM and embedding calls. For complete observability, including LLM call details such as token usage, model names, and latency, you must also install and enable provider-specific instrumentation packages:
- OpenAI and Azure OpenAI: `pip install opentelemetry-instrumentation-openai-v2`
- Anthropic: `pip install opentelemetry-instrumentation-anthropic`
- Other providers: Check the `splunk-otel-python-contrib/instrumentation-genai` directory for available providers.
Without provider instrumentation, you will see the workflow structure but not the detailed LLM call spans shown in the trace view. These LLM call spans are represented by `chat (OpenAI/LiteLLM)` in the Expected Trace Structure diagram.
Steps
- Install the instrumentation package:

  ```
  pip install splunk-otel-instrumentation-crewai
  ```

- Run your application with the instrumentation:

  ```
  opentelemetry-instrument python <your_app>.py
  ```
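Putting the steps together with the provider-level prerequisite, one possible invocation for an OpenAI-backed crew (a sketch; `crewai-demo` and `my_crew_app.py` are illustrative names, not defaults):

```shell
# Install the CrewAI instrumentation plus provider-level LLM instrumentation,
# so both workflow spans and detailed LLM call spans are captured.
pip install splunk-otel-instrumentation-crewai opentelemetry-instrumentation-openai-v2

# Run the app under the OpenTelemetry agent; my_crew_app.py is a placeholder.
OTEL_SERVICE_NAME="crewai-demo" opentelemetry-instrument python my_crew_app.py
```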
LangChain/LangGraph
Prerequisites
If your application is already instrumented with an OpenTelemetry SDK, you must upgrade your OpenTelemetry dependencies to version 1.38.0 or higher.
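The upgrade can be done with pip; a sketch for the core SDK packages (pin versions according to your own dependency constraints):

```shell
# Upgrade the OpenTelemetry API and SDK to the minimum supported version.
pip install --upgrade "opentelemetry-api>=1.38.0" "opentelemetry-sdk>=1.38.0"
```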
Steps
- Configure your LangChain/LangGraph application for AI agent monitoring:

  - Set the `agent_name` for your Chains. This setting ensures that the instrumentation promotes your Chains to `AgentInvocation` and evaluates them with LLM-as-a-Judge evaluators. For example:

    ```python
    agent = _create_react_agent(llm, tools=[]).with_config(
        {"metadata": {"agent_name": "coordinator"}}
    )
    ```

  - Set the `workflow_name` to promote the Chain or Graph to your workflow. For example:

    ```python
    app = StateGraph(state).compile().with_config(
        metadata={"workflow_name": "multi_agent_travel_planner"}
    )
    ```

- Install the instrumentation package:

  ```
  pip install splunk-otel-instrumentation-langchain
  ```

- Run your application with the instrumentation:

  ```
  opentelemetry-instrument python <your_app>.py
  ```
LlamaIndex
Steps
- Install the instrumentation package:

  ```
  pip install splunk-otel-instrumentation-llamaindex
  ```

- Run your application with the instrumentation:

  ```
  opentelemetry-instrument python <your_app>.py
  ```
OpenAI
Prerequisites
To instrument an OpenAI agent application, you must have an OpenAI API key (OPENAI_API_KEY) or access to an OpenAI-compatible LLM endpoint.
Steps
- Install the instrumentation package:

  ```
  pip install splunk-otel-instrumentation-openai
  ```

- Run your application with the instrumentation:

  ```
  opentelemetry-instrument python <your_app>.py
  ```
OpenAI agents
Prerequisites
- You have an OpenAI API key (`OPENAI_API_KEY`) or access to an OpenAI-compatible LLM endpoint.
- You have installed the `openai-agents-python` SDK 0.3.3 or higher.
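A sketch of installing a suitable SDK version, assuming the `openai-agents-python` project is published on PyPI as `openai-agents` (the package name here is an assumption; verify it against the project's own install instructions):

```shell
# Install or upgrade the OpenAI Agents SDK to the minimum supported version.
pip install --upgrade "openai-agents>=0.3.3"
```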
Steps
- Install the instrumentation package:

  ```
  pip install splunk-otel-instrumentation-openai-agents
  ```

- Run your application with the instrumentation:

  ```
  opentelemetry-instrument python <your_app>.py
  ```
Weaviate
Steps
- Install the instrumentation package:

  ```
  pip install splunk-otel-instrumentation-weaviate
  ```

- Start a local Weaviate instance:

  ```
  docker run -d -p 8080:8080 -p 50051:50051 cr.weaviate.io/semitechnologies/weaviate:latest
  ```

- Run your application with the instrumentation:

  ```
  opentelemetry-instrument python <your_app>.py
  ```
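Because the container starts in the background, you may want to confirm the local instance is up before running your application; a sketch using Weaviate's readiness endpoint:

```shell
# Returns HTTP 200 once the local Weaviate instance is ready to serve requests.
curl -sf http://localhost:8080/v1/.well-known/ready && echo "Weaviate is ready"
```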
Next steps
To finish setting up AI Agent Monitoring, proceed to the next step in Set up AI Agent Monitoring.