Configure the library in gateway mode

Attention:

Alpha features described in this document are provided by Splunk to you "as is" without any warranties, maintenance and support, or service-level commitments. Splunk makes this alpha feature available in its sole discretion and may discontinue it at any time. These documents are not yet publicly available and we ask that you keep such information confidential. Use of alpha features is subject to the Splunk Pre-Release Agreement for Hosted Services.

In gateway mode, the library extracts the X-Cisco-AI-Defense-Event-Id HTTP header from requests or responses and adds its value to the current LLM span as the gen_ai.security.event_id attribute.

Configure your LLM SDK to use the Cisco AI Defense Gateway URL as its base URL.
Standard deployments

Set the base URL as in this example:

PYTHON
from openai import OpenAI
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
from opentelemetry.instrumentation.aidefense import AIDefenseInstrumentor

# Instrument (LangChain first, then AI Defense)
LangchainInstrumentor().instrument()
AIDefenseInstrumentor().instrument()

# Configure OpenAI to use AI Defense Gateway
client = OpenAI(
    base_url="https://gateway.aidefense.security.cisco.com/{tenant}/connections/{conn}/v1",
    api_key="your-llm-api-key",
)

# LLM calls automatically get gen_ai.security.event_id in spans
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
Custom Cisco AI Defense Gateway deployments

Set the OTEL_INSTRUMENTATION_AIDEFENSE_GATEWAY_URLS environment variable to a comma-separated list of gateway hostnames:

BASH
export OTEL_INSTRUMENTATION_AIDEFENSE_GATEWAY_URLS="custom-gateway.internal,my-proxy.corp"
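As a rough sketch of how such a comma-separated list might be interpreted, the following parses the value and matches it against an outgoing request's host. The helper names are hypothetical, and the library's actual matching rules may differ:

```python
from urllib.parse import urlparse

def parse_gateway_hosts(value: str) -> list[str]:
    # Split the comma-separated env var value, dropping whitespace
    # and empty entries.
    return [host.strip() for host in value.split(",") if host.strip()]

def is_gateway_request(url: str, gateway_hosts: list[str]) -> bool:
    # Treat a request as gateway traffic when its hostname matches one
    # of the configured hosts exactly or as a subdomain of one.
    request_host = urlparse(url).hostname or ""
    return any(
        request_host == host or request_host.endswith("." + host)
        for host in gateway_hosts
    )
```

Requests whose host matches the configured list are treated as passing through the gateway, so their spans receive the gen_ai.security.event_id attribute.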

With the gateway configured, the library produces spans and traces like the following:

Span example:

CODE
POST /api/chat
└── ChatOpenAI                              ← LangChain span
    ├── gen_ai.request.model: gpt-4o-mini
    ├── gen_ai.response.id: chatcmpl-...
    └── gen_ai.security.event_id: e91a8f7a-...  ← Added by Gateway Mode

Trace example:

CODE
POST /travel/plan
└── workflow LangGraph
    ├── step flight_specialist
    │   └── ChatOpenAI                 ← LLM call through Gateway
    │       ├── gen_ai.request.model: gpt-4o-mini
    │       └── gen_ai.security.event_id: e91a8f7a-...  ← From Gateway
    ├── step hotel_specialist
    │   └── ChatOpenAI
    │       └── gen_ai.security.event_id: f82b9c6d-...
    └── step activity_specialist
        └── ChatOpenAI                 ← BLOCKED by AI Defense
            └── gen_ai.security.event_id: 203d272b-...