Set up AI Agent Monitoring

Get data in and start monitoring your AI agents and applications.

Complete the following steps to set up AI Agent Monitoring.

  1. To collect traces and metrics from your AI agents and applications, deploy the Splunk Distribution of OpenTelemetry Collector on the hosts that your applications are running on.

    Splunk Observability Cloud offers OpenTelemetry Collector distributions for Linux, Windows, and Kubernetes. These distributions collect data from your hosts and forward it to Splunk Observability Cloud.

    Linux, Windows
    Complete the following steps to use the guided setup wizards to deploy the Collector on a host.
    1. In the Splunk Observability Cloud main menu, select Data Management > Available integrations.

    2. Select Deploy OpenTelemetry collectors > Deploy Splunk OpenTelemetry Collector for other environments.

    3. Follow the on-screen instructions to deploy the Collector on your host.

    Kubernetes

    Complete the following steps to install the Collector for Kubernetes and configure the Python agent to send telemetry to Splunk Observability Cloud.

    1. Install the Collector for Kubernetes using Helm.

    2. (Optional) Collect logs and events with the Collector for Kubernetes.
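      As background, a minimal Helm values file for the splunk-otel-collector chart might look like the following sketch. The cluster name, realm, and access token are placeholders, and the logsEnabled setting corresponds to the optional log collection in step 2; verify key names against the chart version you install.

      ```yaml
      # Illustrative values.yaml for the splunk-otel-collector Helm chart.
      # Cluster name, realm, and access token below are placeholders.
      clusterName: "<cluster-name>"
      splunkObservability:
        realm: "<realm>"
        accessToken: "<access-token>"
        logsEnabled: true   # optional: collect logs with the Collector for Kubernetes
      ```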

    3. To deploy the Python agent in Kubernetes, configure the Kubernetes Downward API to expose environment variables to Kubernetes resources.

      The following example shows how to update a deployment to expose environment variables by adding the agent configuration under the .spec.template.spec.containers.env section:
      YAML
      apiVersion: apps/v1
      kind: Deployment
      spec:
        selector:
          matchLabels:
            app: your-application
        template:
          metadata:
            labels:
              app: your-application
          spec:
            containers:
              - name: myapp
                env:
                  - name: SPLUNK_OTEL_AGENT
                    valueFrom:
                      fieldRef:
                        fieldPath: status.hostIP
                  - name: OTEL_EXPORTER_OTLP_ENDPOINT
                    value: "http://$(SPLUNK_OTEL_AGENT):4317"
                  - name: OTEL_SERVICE_NAME
                    value: "<serviceName>"
                  - name: OTEL_RESOURCE_ATTRIBUTES
                    value: "deployment.environment=<environmentName>"
      Note: (Optional) To configure the Python agent to send telemetry to Splunk Observability Cloud using other methods, see Instrument your Python application for Splunk Observability Cloud.

    To troubleshoot the Splunk Distribution of the OpenTelemetry Collector, see Troubleshoot the Collector.
  2. Instrument or translate data from AI applications using one or more of the following options:
    Zero-code instrumentation: Exports telemetry data without changes to your application's source code.
    Code-based instrumentation: Exports telemetry data and requires modifying your application's source code.
    Translate data from third-party instrumentation libraries: Converts data from applications already instrumented with supported third-party libraries and sends the data to Splunk Observability Cloud.
  3. (Optional; only required to enable evaluations) Collect logs and events from AI agents and applications:
    1. Create an events index in Splunk Enterprise to store and process your data. For instructions, see Create events indexes.
    2. Create an HTTP Event Collector (HEC) token in Splunk Enterprise. The HEC token enables you to send data and application events to your Splunk index over HTTP using token-based authentication. For requirements and instructions, see Configure HTTP Event Collector on Splunk Enterprise.
    3. Set up Log Observer Connect for Splunk Enterprise.
    4. Configure the Splunk HEC exporter to allow the OpenTelemetry Collector to send logs to Splunk HEC endpoints.
      The following example shows a Splunk HEC exporter instance configured for a logs pipeline in the Collector configuration file:
      CODE
      exporters:
        # ...
        splunk_hec:
          token: "<hec-token>"
          endpoint: "<hec-endpoint>"
          # Source. See https://docs.splunk.com/Splexicon:Source
          source: "otel"
          # Source type. See https://docs.splunk.com/Splexicon:Sourcetype
          sourcetype: "otel"
      
      # ...
      Next, add the exporter to the services section of your configuration file:
      CODE
      service:
        # ...
        pipelines:
          logs:
            receivers: [fluentforward, otlp]
            processors:
            - memory_limiter
            - batch
            - resourcedetection
            exporters: [splunk_hec]
  4. (Optional; only required to enable evaluations) To correlate AI user conversations with your APM traces, configure Splunk Observability Cloud to query your Splunk Enterprise connection and index:
    1. In Splunk Observability Cloud, use the main menu to select Settings > AI Agent Monitoring.
    2. Under Connection selection, select your Splunk Enterprise instance.
    3. For Index selection, select the events index that you created.
    4. Select Apply.
  5. (Optional; only required to enable evaluations) Enable evaluations by installing the required packages and setting the environment variables.

    For more information on the configuration settings in this step, see Configure the Python agent for AI applications.
    Zero-code, code-based instrumentation
    1. Install the packages:
      CODE
      pip install splunk-otel-genai-evals-deepeval
      pip install splunk-otel-genai-emitters-splunk
    2. To send evaluation results to Splunk Observability Cloud, set the following environment variables in your .env file.
      CODE
      # Emitters (span_metric_event for full telemetry, splunk for Splunk-specific features)
      OTEL_INSTRUMENTATION_GENAI_EMITTERS=span_metric_event,splunk
      
      # Content Capture
      OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
      OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT_MODE=SPAN_AND_EVENT
      
      # Evaluations
      OTEL_INSTRUMENTATION_GENAI_EVALS_RESULTS_AGGREGATION=true
      OTEL_INSTRUMENTATION_GENAI_EMITTERS_EVALUATION=replace-category:SplunkEvaluationResults
      
      # Logs
      OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
      
      # Metrics
      OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=delta
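      If your application doesn't already load .env files (for example, through python-dotenv or your process manager), a minimal stdlib-only loader such as the following sketch can apply these settings at startup. The load_dotenv helper here is illustrative, not part of the Splunk packages.

      ```python
      # Minimal .env loader sketch (stdlib only). Real deployments typically use
      # python-dotenv or inject these variables through the process manager.
      import os

      def load_dotenv(path=".env"):
          """Read KEY=VALUE lines into os.environ without overriding existing values."""
          with open(path) as f:
              for line in f:
                  line = line.strip()
                  # Skip blanks, comments, and malformed lines
                  if not line or line.startswith("#") or "=" not in line:
                      continue
                  key, _, value = line.partition("=")
                  os.environ.setdefault(key.strip(), value.strip().strip('"').strip("'"))

      # Demo: write a sample .env and load it
      with open("demo.env", "w") as f:
          f.write("# comment\nOTEL_INSTRUMENTATION_GENAI_EMITTERS=span_metric_event,splunk\n")
      load_dotenv("demo.env")
      print(os.environ["OTEL_INSTRUMENTATION_GENAI_EMITTERS"])
      ```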
    3. By default, the instrumentation frameworks run evaluations in the same process as your application. LLM calls from evaluation frameworks such as DeepEval are instrumented alongside application telemetry.

      To run evaluations in a child process with the OpenTelemetry SDK deactivated and prevent evaluation LLM calls from polluting application telemetry, set OTEL_INSTRUMENTATION_GENAI_EVALS_SEPARATE_PROCESS=true in your .env file.
      Note: This setting is required when evaluations are enabled for the OpenAI instrumentation. This setting is optional for all other instrumentation frameworks that have evaluations enabled.
    4. To enable LLM-as-a-Judge evaluations with DeepEval, use one of the following options.

      1. To use OpenAI as the LLM provider for evaluations, set the OPENAI_API_KEY environment variable in your .env file. This is the default option.

      2. To use a custom LLM provider, run pip install litellm to install the LiteLLM dependencies, then set the following environment variables in your .env file to route evaluations through your own LLM provider instead of OpenAI.
        CODE
        DEEPEVAL_LLM_BASE_URL=https://<your-llm-gateway>/openai/v1
        DEEPEVAL_LLM_MODEL=gpt-4o-mini
        DEEPEVAL_LLM_PROVIDER=openai
        DEEPEVAL_LLM_CLIENT_APP_NAME=<your-app-key>
        DEEPEVAL_FILE_SYSTEM=READ_ONLY
        
        # For providers that don't require OAuth2
        DEEPEVAL_LLM_API_KEY=<your-api-key>
        
        # For providers that require OAuth2
        DEEPEVAL_LLM_TOKEN_URL=https://<your-identity-provider>/oauth2/token
        DEEPEVAL_LLM_CLIENT_ID=<your-oauth2-client-id>
        DEEPEVAL_LLM_CLIENT_SECRET=<your-oauth2-client-secret>
        
        # Add custom headers as JSON to LLM API requests
        DEEPEVAL_LLM_EXTRA_HEADERS='{"system-code": "APP-123", "x-custom-header": "value"}'
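        As background, the OAuth2 variables above correspond to a standard client-credentials token exchange against your identity provider. The following stdlib sketch shows the request those settings configure; the token URL and credentials are placeholders, and nothing is sent here.

        ```python
        # Sketch of the OAuth2 client-credentials exchange implied by the
        # DEEPEVAL_LLM_TOKEN_URL / _CLIENT_ID / _CLIENT_SECRET settings.
        # The token URL and credentials are placeholders; nothing is sent here.
        from urllib.request import Request
        from urllib.parse import urlencode

        def build_token_request(token_url, client_id, client_secret):
            body = urlencode({
                "grant_type": "client_credentials",
                "client_id": client_id,
                "client_secret": client_secret,
            }).encode()
            return Request(token_url, data=body,
                           headers={"Content-Type": "application/x-www-form-urlencoded"})

        req = build_token_request("https://idp.example.com/oauth2/token",
                                  "<client-id>", "<client-secret>")
        print(req.get_method())  # POST, because a request body is set
        ```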
    5. Restart your application.

    Translators
    1. Install the packages:

      CODE
      pip install splunk-otel-genai-emitters-splunk
      pip install splunk-otel-genai-evals-deepeval
    2. (Optional) To prevent DeepEval telemetry from appearing in the trace waterfall view when monitoring AI agents, set DEEPEVAL_TELEMETRY_OPT_OUT="YES" in your .env file after you install the packages.

    3. Restart your application.

  6. Verify that your data is being ingested by using the Splunk Observability Cloud main menu to navigate to APM > Agents. If you don't see data on this page:
    1. Refer to step 4 and ensure that your Log Observer Connect index is set to the index that contains your AI trace data.
    2. Troubleshoot your Python instrumentation.
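    As one quick local check while troubleshooting, you can confirm that the Collector's OTLP gRPC port is reachable from the application host. This stdlib sketch assumes the default endpoint of localhost:4317; adjust both values to match your deployment.

    ```python
    # Quick reachability check for the Collector's default OTLP gRPC port (4317).
    # Host and port are assumptions; adjust to match your deployment.
    import socket

    def otlp_port_open(host="localhost", port=4317, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(otlp_port_open())
    ```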
After you set up AI Agent Monitoring, you can Monitor AI agents with Splunk APM.