Set up AI Agent Monitoring
Get data in and start monitoring your AI agents and applications.
Complete the following steps to set up AI Agent Monitoring.
- To collect traces and metrics from your AI agents and applications, deploy the Splunk Distribution of OpenTelemetry Collector on the hosts that your applications are running on.
Splunk Observability Cloud offers OpenTelemetry Collector distributions for Linux, Windows, and Kubernetes. These distributions collect data from your hosts and forward it to Splunk Observability Cloud.
- Linux, Windows
Complete the following steps to use the guided setup wizards to deploy the Collector on a host. Note: (Optional) For advanced installation instructions, see Install the Collector for Linux with the installer script and Install the Collector for Windows with the installer script.

1. In the Splunk Observability Cloud main menu, select Data Management > Available integrations.
2. Select Deploy OpenTelemetry collectors > Deploy Splunk OpenTelemetry Collector for other environments.
3. Follow the on-screen instructions to deploy the Collector on your host.
- Kubernetes
Complete the following steps to install the Collector for Kubernetes and configure the Python agent to send telemetry to Splunk Observability Cloud.

1. (Optional) Collect logs and events with the Collector for Kubernetes.
2. To deploy the Python agent in Kubernetes, configure the Kubernetes Downward API to expose environment variables to Kubernetes resources.

   The following example shows how to update a deployment to expose environment variables by adding the agent configuration under the `.spec.template.spec.containers.env` section:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   spec:
     selector:
       matchLabels:
         app: your-application
     template:
       spec:
         containers:
           - name: myapp
             env:
               - name: SPLUNK_OTEL_AGENT
                 valueFrom:
                   fieldRef:
                     fieldPath: status.hostIP
               - name: OTEL_EXPORTER_OTLP_ENDPOINT
                 value: "http://$(SPLUNK_OTEL_AGENT):4317"
               - name: OTEL_SERVICE_NAME
                 value: "<serviceName>"
               - name: OTEL_RESOURCE_ATTRIBUTES
                 value: "deployment.environment=<environmentName>"
   ```

   Note: (Optional) To configure the Python agent to send telemetry to Splunk Observability Cloud using other methods, see Instrument your Python application for Splunk Observability Cloud.
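As a sanity check, you can reproduce in application code the endpoint that the Downward API variables above compose. The following is an illustrative sketch only; the function name and the fallback behavior are assumptions, not part of the Splunk agent:

```python
import os

def resolve_otlp_endpoint() -> str:
    """Return the OTLP endpoint the agent will use, mirroring the
    Downward API setup above. SPLUNK_OTEL_AGENT holds the node's
    host IP; Kubernetes expands $(SPLUNK_OTEL_AGENT) before the
    container starts, so the application sees the final URL."""
    endpoint = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", "")
    if not endpoint:
        # Fall back to composing it the same way the manifest does
        # (hypothetical fallback for illustration).
        host_ip = os.environ.get("SPLUNK_OTEL_AGENT", "localhost")
        endpoint = f"http://{host_ip}:4317"
    return endpoint

# Simulate the variables the Downward API would inject.
os.environ["SPLUNK_OTEL_AGENT"] = "10.0.0.7"
os.environ.pop("OTEL_EXPORTER_OTLP_ENDPOINT", None)
print(resolve_otlp_endpoint())  # http://10.0.0.7:4317
```

Logging the resolved endpoint at startup is a quick way to confirm that the `env` entries in the manifest reached the container.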
To troubleshoot the Splunk Distribution of the OpenTelemetry Collector, see Troubleshoot the Collector.
- Instrument or translate data from AI applications using one or more of the following options:
  | Option | Description |
  | --- | --- |
  | Zero-code instrumentation | Exports telemetry data without changes to your application's source code. |
  | Code-based instrumentation | Exports telemetry data and requires modifying your application's source code. |
  | Translate data from third-party instrumentation libraries | Converts data from applications already instrumented with supported third-party libraries and sends the data to Splunk Observability Cloud. |
- (Optional; only required to enable evaluations) Collect logs and events from AI agents and applications:
- Create an events index in Splunk Enterprise to store and process your data. For instructions, see Create events indexes.
- Create an HTTP Event Collector (HEC) token in Splunk Enterprise. The HEC token enables you to send data and application events to your Splunk index over HTTP protocol using token-based authentication. For requirements and instructions, see Configure HTTP Event Collector on Splunk Enterprise.
- Set up Log Observer Connect for Splunk Enterprise.
- Configure the Splunk HEC exporter to allow the OpenTelemetry Collector to send logs to Splunk HEC endpoints.
  The following example shows a Splunk HEC exporter instance configured for a logs pipeline in the Collector configuration file:

  ```yaml
  exporters:
    # ...
    splunk_hec:
      token: "<hec-token>"
      endpoint: "<hec-endpoint>"
      # Source. See https://docs.splunk.com/Splexicon:Source
      source: "otel"
      # Source type. See https://docs.splunk.com/Splexicon:Sourcetype
      sourcetype: "otel"
    # ...
  ```

  Next, add the exporter to the `service` section of your configuration file:

  ```yaml
  service:
    # ...
    pipelines:
      logs:
        receivers: [fluentforward, otlp]
        processors:
          - memory_limiter
          - batch
          - resourcedetection
        exporters: [splunk_hec]
  ```
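For reference, the events that the `splunk_hec` exporter sends follow the HEC event JSON schema. The helper below is hypothetical and only illustrates the shape of a single HEC request; the index name `ai_events` is an assumption, and the real export is done by the Collector, not application code:

```python
import json
import time

def make_hec_request(token: str, index: str, message: str) -> tuple[dict, str]:
    """Build the headers and JSON body for one event sent to the
    HEC /services/collector endpoint (illustrative helper only)."""
    headers = {"Authorization": f"Splunk {token}"}
    body = {
        "time": int(time.time()),   # event timestamp, epoch seconds
        "index": index,             # the events index created earlier
        "source": "otel",           # matches the exporter configuration
        "sourcetype": "otel",
        "event": message,
    }
    return headers, json.dumps(body)

headers, body = make_hec_request("<hec-token>", "ai_events", "agent started")
print(headers["Authorization"])  # Splunk <hec-token>
```

Posting such a body manually (for example with `curl`) against your HEC endpoint is a quick way to verify the token and index before wiring up the Collector.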
- (Optional; only required to enable evaluations) To correlate AI user conversations with your APM traces, configure Splunk Observability Cloud to query your Splunk Enterprise connection and index:
- In Splunk Observability Cloud, use the main menu to select Settings > AI Agent Monitoring.
- Under Connection selection, select your Splunk Enterprise instance.
- For Index selection, select the events index that you created.
- Select Apply.
- (Optional; only required to enable evaluations) Enable evaluations by installing the required packages and setting the environment variables.
For more information on the configuration settings in this step, see Configure the Python agent for AI applications.
- Zero-code, code-based instrumentation
  1. Install the packages:

     ```shell
     pip install splunk-otel-genai-evals-deepeval
     pip install splunk-otel-genai-emitters-splunk
     ```

  2. To send evaluation results to Splunk Observability Cloud, set the following environment variables in your `.env` file:

     ```shell
     # Emitters (span_metric_event for full telemetry, splunk for Splunk-specific features)
     OTEL_INSTRUMENTATION_GENAI_EMITTERS=span_metric_event,splunk

     # Content capture
     OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
     OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT_MODE=SPAN_AND_EVENT

     # Evaluations
     OTEL_INSTRUMENTATION_GENAI_EVALS_RESULTS_AGGREGATION=true
     OTEL_INSTRUMENTATION_GENAI_EMITTERS_EVALUATION=replace-category:SplunkEvaluationResults

     # Logs
     OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true

     # Metrics
     OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=delta
     ```

  3. By default, the instrumentation frameworks run evaluations in the same process as your application. LLM calls from evaluation frameworks such as DeepEval are instrumented alongside application telemetry.

     To run evaluations in a child process with the OpenTelemetry SDK deactivated and prevent evaluation LLM calls from polluting application telemetry, set `OTEL_INSTRUMENTATION_GENAI_EVALS_SEPARATE_PROCESS=true` in your `.env` file.

     Note: This setting is required when evaluations are enabled for the OpenAI instrumentation. This setting is optional for all other instrumentation frameworks that have evaluations enabled.

  4. To enable LLM-as-a-Judge evaluations with DeepEval, use one of the following options:

     - To use OpenAI as the LLM provider for evaluations, set `OPENAI_API_KEY` in your `.env` file. This is the default option.
     - To use a custom LLM provider, run `pip install litellm` to install the LiteLLM dependencies and set the following environment variables in your `.env` file. This routes evaluations through your own LLM provider instead of OpenAI.

       ```shell
       DEEPEVAL_LLM_BASE_URL=https://<your-llm-gateway>/openai/v1
       DEEPEVAL_LLM_MODEL=gpt-4o-mini
       DEEPEVAL_LLM_PROVIDER=openai
       DEEPEVAL_LLM_CLIENT_APP_NAME=<your-app-key>
       DEEPEVAL_FILE_SYSTEM=READ_ONLY

       # For providers that don't require OAuth2
       DEEPEVAL_LLM_API_KEY=<your-api-key>

       # For providers that require OAuth2
       DEEPEVAL_LLM_TOKEN_URL=https://<your-identity-provider>/oauth2/token
       DEEPEVAL_LLM_CLIENT_ID=<your-oauth2-client-id>
       DEEPEVAL_LLM_CLIENT_SECRET=<your-oauth2-client-secret>

       # Add custom headers as JSON to LLM API requests
       DEEPEVAL_LLM_EXTRA_HEADERS='{"system-code": "APP-123", "x-custom-header": "value"}'
       ```

  5. Restart your application.
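The `.env` settings above must be present in the process environment before the agent starts. The minimal loader below illustrates the expected `KEY=VALUE` format; it is a sketch only, and production applications typically use the python-dotenv package instead:

```python
import os

def load_env_file(path: str) -> dict:
    """Parse a .env file of KEY=VALUE lines into os.environ.
    Blank lines and lines starting with '#' are ignored.
    (Minimal sketch; python-dotenv also handles quoting
    and variable expansion.)"""
    settings = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    os.environ.update(settings)
    return settings
```

Calling `load_env_file(".env")` before application startup makes settings such as `OTEL_INSTRUMENTATION_GENAI_EMITTERS` visible to the instrumentation.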
- Translators
  1. Install the packages:

     ```shell
     pip install splunk-otel-genai-emitters-splunk
     pip install splunk-otel-genai-evals-deepeval
     ```

  2. (Optional) To prevent DeepEval telemetry from appearing in the trace waterfall view when monitoring AI agents, set `DEEPEVAL_TELEMETRY_OPT_OUT="YES"` in your `.env` file after you install the packages.
  3. Restart your application.
- Verify that your data is being ingested: in the Splunk Observability Cloud main menu, go to APM > Agents. If you don't see data on this page:
- Refer to step 4 and ensure that your Log Observer Connect index is set to the index that contains your AI trace data.
- Troubleshoot your Python instrumentation.