Configure the Python agent for AI applications
You can configure the Python agent from the Splunk Distribution of OpenTelemetry Python to meet your AI application instrumentation and evaluation needs. For more information about the Python agent, see About the Splunk Distribution of OpenTelemetry Python.
Configuration methods
Set configuration values as environment variables before starting your application. For example, to report delta temporality metrics:

```
export OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=DELTA
```
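You can also set the same variables from Python before the agent initializes, for example in your application's entry point. This is a minimal sketch using the standard library, not a feature specific to the distribution:

```python
import os

# Set the agent configuration before any instrumented code runs.
# The variable name and value match the shell export shown above.
os.environ["OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE"] = "DELTA"

print(os.environ["OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE"])  # DELTA
```

Environment variables set this way must be assigned before the instrumentation libraries load their configuration.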
Instrumentation configuration settings
| Configuration setting | Description | Required? |
|---|---|---|
| `OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE` | Determines whether the OTLP metric exporter reports cumulative totals, deltas, or low-memory-friendly temporality for emitted metrics. Accepted values: `CUMULATIVE`, `DELTA`, `LOWMEMORY`. | Yes |
| `OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED` | Enriches the Python logger to include trace and span correlation fields. Accepted values: `true`, `false`. | No |
| `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` | Determines whether input, output, and system messages are included in spans and logs. Defaults to `false`. Accepted values: `true`, `false`. | No |
| `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT_MODE` | Designates which telemetry includes message content: spans as attributes, events as bodies, or both. | No |
| `OTEL_INSTRUMENTATION_GENAI_EMITTERS` | Controls what telemetry data is generated and emitted during GenAI operations, such as LLM calls and agent invocations. | No |
| `OTEL_INSTRUMENTATION_GENAI_DEBUG` | Enables opt-in debug logging for GenAI telemetry operations. Helps troubleshoot instrumentation issues by logging internal events without dumping full message content. Accepted values: `true`, `false`. | No |
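Boolean settings such as `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` follow the usual OpenTelemetry convention of a case-insensitive `true`/`false` string. The following sketch shows how such a flag is typically parsed; `env_flag` is a hypothetical helper for illustration, not part of the distribution:

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Parse a boolean OpenTelemetry-style environment variable."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() == "true"

# Opt in to capturing message content, then read the flag back.
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"
print(env_flag("OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"))  # True
```

Unset variables fall back to the default, which matches the opt-in behavior described in the table.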
Evaluation configuration settings
| Configuration setting | Description | Required? |
|---|---|---|
| `OTEL_INSTRUMENTATION_GENAI_EVALS_EVALUATORS` | Determines the metric types that the evaluator runs. | No |
| `OTEL_INSTRUMENTATION_GENAI_EVALS_RESULTS_AGGREGATION` | Condenses evaluation results into a single event. Accepted values: `true`, `false`. | No |
| `OTEL_INSTRUMENTATION_GENAI_EVALUATION_SAMPLE_RATE` | Determines the sample rate of traces for evaluations. Sampling decisions are made probabilistically based on the rate, which is the probability that a span is sampled for evaluation. Accepted values: between `0.0` and `1.0`. | No |
| `OTEL_INSTRUMENTATION_GENAI_EVALS_SEPARATE_PROCESS` | Determines whether the instrumentation framework runs evaluations in a separate process. Accepted values: `true`, `false`. | No |
| `OTEL_INSTRUMENTATION_GENAI_EMITTERS_EVALUATION` | Customizes which emitters handle evaluation results. | No |
| `OTEL_GENAI_EVAL_DEBUG_SKIPS` | Specifies whether logs are created when measurements are skipped. Accepted values: `true`, `false`. | No |
| `OTEL_GENAI_EVAL_DEBUG_EACH` | Specifies whether a log is created for each evaluation result. Accepted values: `true`, `false`. | No |
| `DEEPEVAL_FILE_SYSTEM` | Determines whether DeepEval can write temporary artifacts to the filesystem. | No |
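The sampling semantics of `OTEL_INSTRUMENTATION_GENAI_EVALUATION_SAMPLE_RATE` can be sketched as a simple probabilistic check; `should_evaluate` is a hypothetical helper that illustrates the decision, not the distribution's actual implementation:

```python
import random

def should_evaluate(rate: float) -> bool:
    """Return True with probability `rate` (between 0.0 and 1.0)."""
    return random.random() < rate

# With rate=1.0 every span is evaluated; with rate=0.0 none are.
print(should_evaluate(1.0), should_evaluate(0.0))  # True False
```

Because `random.random()` returns values in `[0.0, 1.0)`, a rate of `1.0` always evaluates and a rate of `0.0` never does; intermediate rates evaluate that fraction of spans on average.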
DeepEval custom LLM provider settings
| Configuration setting | Description | Required? |
|---|---|---|
| `DEEPEVAL_LLM_BASE_URL` | The custom LLM endpoint URL. Required if you want to use a custom LLM provider for DeepEval evaluations instead of OpenAI. | No |
| `DEEPEVAL_LLM_MODEL` | The LLM model name. Defaults to `gpt-4o-mini`. | No |
| `DEEPEVAL_LLM_PROVIDER` | The LLM provider identifier used as the model prefix. Defaults to `openai`. | No |
| `DEEPEVAL_LLM_API_KEY` | The static API key. Only used for providers that don't require OAuth2 token-based authentication. Use either this setting or `DEEPEVAL_LLM_TOKEN_URL` (which enables OAuth2), not both. | No |
| `DEEPEVAL_LLM_EXTRA_HEADERS` | A JSON-formatted string of key-value pairs that are added as HTTP headers to all LLM API requests. Use this setting if your API gateway requires custom headers for authentication or tracking. | No |
| `DEEPEVAL_LLM_CLIENT_APP_NAME` | The application key and name. | No |
| `DEEPEVAL_LLM_TOKEN_URL` | The OAuth2 token endpoint. Used for providers that require OAuth2 token-based authentication. This setting enables OAuth2 mode for DeepEval. | No |
| `DEEPEVAL_LLM_CLIENT_ID` | The OAuth2 client ID. Used for providers that require OAuth2 token-based authentication. | No |
| `DEEPEVAL_LLM_CLIENT_SECRET` | The OAuth2 client secret. Used for providers that require OAuth2 token-based authentication. | No |
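The table describes two mutually exclusive authentication paths: a static API key, or OAuth2 enabled by setting a token URL. This hedged sketch shows one way that choice could be expressed; `auth_mode` is a hypothetical helper for illustration, not DeepEval's actual logic:

```python
import os

def auth_mode() -> str:
    """Pick an auth mode from the DeepEval settings described above:
    OAuth2 if a token URL is set, otherwise a static API key."""
    if os.environ.get("DEEPEVAL_LLM_TOKEN_URL"):
        return "oauth2"
    if os.environ.get("DEEPEVAL_LLM_API_KEY"):
        return "api_key"
    return "none"

# Configure a static key only; OAuth2 stays disabled.
os.environ.pop("DEEPEVAL_LLM_TOKEN_URL", None)
os.environ["DEEPEVAL_LLM_API_KEY"] = "example-key"
print(auth_mode())  # api_key
```

Setting `DEEPEVAL_LLM_TOKEN_URL` would switch the result to `oauth2`, which is why the table warns against setting both.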