Translate and collect data from AI applications instrumented with third-party libraries
Enable a translator to convert your telemetry data to the OpenTelemetry format and send the data to Splunk Observability Cloud.
If you've already instrumented your AI application with a third-party instrumentation library, you can enable a translator to convert the telemetry data to the OpenTelemetry format and send the data to Splunk Observability Cloud.
This solution sends traces and metrics from instrumented AI applications to Splunk Observability Cloud without requiring changes to upstream code.
LangSmith translator
Prerequisites
To translate and collect data from AI applications instrumented with LangSmith, you must meet the following requirements.
- You have installed a LangSmith instrumentation SDK to instrument your AI application.
- You have installed the Splunk Distribution for OpenTelemetry Generative AI packages using the following command:
```
pip install splunk-otel-util-genai "opentelemetry-sdk>=1.31.1" "langsmith>=0.6.2" "langchain>=1.2.3" "langchain-core>=1.2.7" "langchain_openai>=1.1.7" python-dotenv "openai>=2.6.0"
```
Steps
- Install the LangSmith translator package:
```
pip install splunk-otel-util-genai-translator-langsmith
```
- Run or modify the example application from GitHub to automatically translate LangSmith SDK spans to GenAI semantic conventions (see the wiring sketch after this list). When you run this example:
  - The LangSmith SDK creates spans with LangSmith-specific attributes (such as gen_ai.prompt, gen_ai.completion, langsmith.metadata.ls_provider, and langsmith.metadata.ls_temperature).
  - The LangsmithSpanProcessor automatically intercepts the LangSmith-specific keys and adds the corresponding GenAI-compliant attributes to the span.
  - The translation happens transparently: the processor runs automatically, without any changes to your application code.
  - The original LangSmith-specific keys remain in the span.
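The following sketch shows one way to attach the processor to an OpenTelemetry tracer provider. The module path for LangsmithSpanProcessor is an assumption; check the splunk-otel-util-genai-translator-langsmith package and the GitHub example for the exact import and the complete setup.
```python
# Minimal wiring sketch. The translator import path is hypothetical; verify it
# against the splunk-otel-util-genai-translator-langsmith package.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

from splunk_otel_util_genai_translator_langsmith import LangsmithSpanProcessor  # hypothetical path

provider = TracerProvider()
# Register the translator so spans gain GenAI attributes before they export.
provider.add_span_processor(LangsmithSpanProcessor())
# ConsoleSpanExporter keeps this example self-contained; swap in an OTLP
# exporter to send spans to the Collector or Splunk Observability Cloud.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```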
Key mapping reference: LangSmith and GenAI attributes
The LangSmith translator scans invocation.attributes for keys beginning with gen_ai. and langsmith. and adds the following corresponding GenAI keys:
| LangSmith key | GenAI mapped key |
|---|---|
| gen_ai.prompt | gen_ai.input.messages |
| gen_ai.content.prompt | gen_ai.input.messages |
| gen_ai.completion | gen_ai.output.messages |
| gen_ai.content.completion | gen_ai.output.messages |
| gen_ai.output_messages | gen_ai.output.messages |
| langsmith.entity.input | gen_ai.input.messages |
| langsmith.entity.output | gen_ai.output.messages |
| langsmith.metadata.ls_provider | gen_ai.system |
| langsmith.metadata.ls_model_name | gen_ai.request.model |
| langsmith.metadata.ls_model_type | gen_ai.operation.name |
| langsmith.model_name | gen_ai.request.model |
| langsmith.provider | gen_ai.system |
| langsmith.run_type | gen_ai.operation.name |
| langsmith.metadata.ls_temperature | gen_ai.request.temperature |
| langsmith.metadata.ls_max_tokens | gen_ai.request.max_tokens |
| langsmith.metadata.ls_top_p | gen_ai.request.top_p |
| langsmith.metadata.ls_top_k | gen_ai.request.top_k |
| langsmith.metadata.ls_presence_penalty | gen_ai.request.presence_penalty |
| langsmith.metadata.ls_frequency_penalty | gen_ai.request.frequency_penalty |
| langsmith.metadata.ls_seed | gen_ai.request.seed |
| langsmith.metadata.ls_stop_sequences | gen_ai.request.stop_sequences |
| langsmith.temperature | gen_ai.request.temperature |
| langsmith.max_tokens | gen_ai.request.max_tokens |
| langsmith.top_p | gen_ai.request.top_p |
| langsmith.top_k | gen_ai.request.top_k |
| gen_ai.token.usage.input | gen_ai.usage.input_tokens |
| gen_ai.token.usage.output | gen_ai.usage.output_tokens |
| gen_ai.token.usage.total | gen_ai.usage.total_tokens |
| langsmith.token_usage.prompt_tokens | gen_ai.usage.input_tokens |
| langsmith.token_usage.completion_tokens | gen_ai.usage.output_tokens |
| langsmith.token_usage.total_tokens | gen_ai.usage.total_tokens |
| langsmith.usage.prompt_tokens | gen_ai.usage.input_tokens |
| langsmith.usage.completion_tokens | gen_ai.usage.output_tokens |
| langsmith.usage.total_tokens | gen_ai.usage.total_tokens |
| langsmith.session_id | gen_ai.conversation.id |
| langsmith.parent_run_id | gen_ai.parent_run.id |
| langsmith.agent.name | gen_ai.agent.name |
| langsmith.agent.type | gen_ai.agent.type |
| langsmith.agent.description | gen_ai.agent.description |
| langsmith.workflow.name | gen_ai.workflow.name |
| langsmith.chain.name | gen_ai.workflow.name |
| langsmith.error | gen_ai.error.message |
| langsmith.error.type | gen_ai.error.type |
| langsmith.status | gen_ai.response.status |
| langsmith.response.model | gen_ai.response.model |
| langsmith.response.id | gen_ai.response.id |
| langsmith.finish_reason | gen_ai.response.finish_reasons |
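To make the mapping concrete, here is the approximate shape of a chat span's attributes before and after translation. The values are hypothetical, and, as noted above, the original keys remain in place.
```python
# Hypothetical attribute values that illustrate the mapping table above.
before = {
    "langsmith.metadata.ls_provider": "openai",
    "langsmith.metadata.ls_model_name": "gpt-4o",
    "langsmith.token_usage.prompt_tokens": 42,
    "langsmith.token_usage.completion_tokens": 7,
}

# After translation, the original keys remain and the GenAI keys are added.
after = {
    **before,
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 7,
}
```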
OpenLit translator
Prerequisites
To translate and collect data from AI applications instrumented with OpenLit, you must meet the following requirements.
- You have installed an OpenLit instrumentation SDK to instrument your AI application.
- You have installed the Splunk Distribution for OpenTelemetry Generative AI packages using the following command:
```
pip install splunk-otel-util-genai "opentelemetry-sdk>=1.31.1" "openlit>=1.35.0" python-dotenv "openai>=2.6.0"
```
Steps
- Install the OpenLit translator package:
```
pip install splunk-otel-util-genai-translator-openlit
```
- Run or modify the example application from GitHub to automatically translate OpenLit SDK spans to GenAI semantic conventions (see the wiring sketch after this list). When you run this example:
  - The OpenLit SDK creates spans with OpenLit-specific attributes (such as gen_ai.prompt, gen_ai.completion, and gen_ai.llm.provider).
  - The OpenLitSpanProcessor automatically intercepts the OpenLit-specific spans and adds the corresponding GenAI-compliant attributes.
  - The translation happens transparently and does not require modifying any application code. After you initialize OpenLit, the processor runs the translation automatically without any additional code changes.
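The following sketch shows one way to attach the processor to an OpenTelemetry tracer provider before initializing OpenLit. The module path for OpenLitSpanProcessor is an assumption; check the splunk-otel-util-genai-translator-openlit package and the GitHub example for the exact import and the complete setup.
```python
# Minimal wiring sketch. The translator import path is hypothetical; verify it
# against the splunk-otel-util-genai-translator-openlit package.
import openlit
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

from splunk_otel_util_genai_translator_openlit import OpenLitSpanProcessor  # hypothetical path

provider = TracerProvider()
provider.add_span_processor(OpenLitSpanProcessor())
trace.set_tracer_provider(provider)

# Initialize OpenLit as usual; from this point on, the processor translates
# the spans that OpenLit emits without further code changes.
openlit.init()
```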
Key mapping reference: OpenLit and GenAI attributes
The OpenLit translator scans invocation.attributes for keys beginning with gen_ai. and adds the following corresponding GenAI keys:
| OpenLit key | GenAI mapped key |
|---|---|
| gen_ai.completion.0.content | gen_ai.output.messages |
| gen_ai.prompt.0.content | gen_ai.input.messages |
| gen_ai.prompt | gen_ai.input.messages |
| gen_ai.completion | gen_ai.output.messages |
| gen_ai.content.prompt | gen_ai.input.messages |
| gen_ai.content.completion | gen_ai.output.messages |
| gen_ai.request.embedding_dimension | gen_ai.embeddings.dimension.count |
| gen_ai.token.usage.input | gen_ai.usage.input_tokens |
| gen_ai.token.usage.output | gen_ai.usage.output_tokens |
| gen_ai.llm.provider | gen_ai.provider.name |
| gen_ai.llm.model | gen_ai.request.model |
| gen_ai.llm.temperature | gen_ai.request.temperature |
| gen_ai.llm.max_tokens | gen_ai.request.max_tokens |
| gen_ai.llm.top_p | gen_ai.request.top_p |
| gen_ai.operation.type | gen_ai.operation.name |
| gen_ai.output_messages | gen_ai.output.messages |
| gen_ai.session.id | gen_ai.conversation.id |
| gen_ai.tool.args | gen_ai.tool.call.arguments |
| gen_ai.tool.result | gen_ai.tool.call.result |
| gen_ai.vectordb.name | gen_ai.vectordb.name |
| gen_ai.vectordb.search.query | db.query.text |
| gen_ai.vectordb.search.results_count | db.response.returned_rows |
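To confirm locally that the GenAI keys from this table appear on your spans, you can temporarily export spans to the console. ConsoleSpanExporter ships with opentelemetry-sdk; the translator import path is again an assumption.
```python
# Local verification sketch: print translated spans so you can inspect the
# gen_ai.* attributes listed in the table above.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

from splunk_otel_util_genai_translator_openlit import OpenLitSpanProcessor  # hypothetical path

provider = TracerProvider()
provider.add_span_processor(OpenLitSpanProcessor())
# SimpleSpanProcessor exports each span to stdout as soon as it ends.
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```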
Traceloop translator
The Traceloop translator promotes legacy traceloop.* attributes attached to an LLMInvocation into OpenTelemetry GenAI semantic convention (or forward-looking custom gen_ai.*) attributes before the standard semantic convention span emitter runs.
Prerequisites
To translate and collect data from AI applications instrumented with Traceloop, you must meet the following requirements.
- If you installed the Splunk Distribution of the OpenTelemetry Python instrumentation for LangChain or LangGraph, uninstall the instrumentation:
```
pip uninstall splunk-otel-instrumentation-langchain
```
- You have installed the OpenAI SDK.
- You have installed a Traceloop instrumentation SDK to instrument your AI application.
- You have installed the Splunk Distribution for OpenTelemetry Generative AI packages using the following command:
```
pip install splunk-otel-util-genai "opentelemetry-sdk>=1.31.1" "traceloop-sdk>=0.47.4" python-dotenv "openai>=2.6.0"
```
Steps
- Install the Traceloop translator package:
```
pip install splunk-otel-util-genai-translator-traceloop
```
- Run or modify the example application from GitHub to automatically translate Traceloop SDK spans to GenAI semantic conventions (see the wiring sketch after this list). When you run this example:
  - The Traceloop SDK creates spans with Traceloop-specific attributes (such as traceloop.workflow.name and traceloop.entity.name).
  - The TraceloopSpanProcessor automatically intercepts the Traceloop-specific spans and adds the corresponding GenAI-compliant attributes.
  - The translation happens transparently and does not require modifying any application code. After you initialize Traceloop, the processor runs the translation automatically without any additional code changes.
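The following sketch shows one way to attach the processor to an OpenTelemetry tracer provider before initializing Traceloop. Traceloop.init() is the standard traceloop-sdk entry point; the module path for TraceloopSpanProcessor is an assumption, so check the splunk-otel-util-genai-translator-traceloop package and the GitHub example for the exact import and the complete setup.
```python
# Minimal wiring sketch. The translator import path is hypothetical; verify it
# against the splunk-otel-util-genai-translator-traceloop package.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from traceloop.sdk import Traceloop

from splunk_otel_util_genai_translator_traceloop import TraceloopSpanProcessor  # hypothetical path

provider = TracerProvider()
provider.add_span_processor(TraceloopSpanProcessor())
trace.set_tracer_provider(provider)

# Initialize Traceloop; the spans it creates are then translated automatically.
Traceloop.init(app_name="example-app")  # app_name value is illustrative
```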
Key mapping reference: Traceloop and GenAI attributes
The Traceloop translator scans invocation.attributes for keys beginning with traceloop. and adds the following corresponding GenAI keys:
| Traceloop key | GenAI mapped key | Description |
|---|---|---|
| traceloop.entity.name | gen_ai.agent.name | Approximates entity as agent name. |
| traceloop.entity.input | gen_ai.input.messages | Serialized form. |
| traceloop.entity.output | gen_ai.output.messages | Serialized form. |
| traceloop.correlation.id | gen_ai.conversation.id | Approximate mapping. |
Custom key mapping reference: workflow and entity hierarchy
To align Traceloop telemetry data with the OpenTelemetry entity data model, the Traceloop translator scans invocation.attributes for keys beginning with traceloop. and adds the following corresponding custom GenAI keys:
| Traceloop key | GenAI mapped key |
|---|---|
| traceloop.workflow.name | gen_ai.workflow.name |
| traceloop.association.properties | gen_ai.association.properties |
| traceloop.entity.path | gen_ai.workflow.path |
| traceloop.span.kind | gen_ai.span.kind |
| traceloop.entity.version | gen_ai.workflow.version |
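Taken together, the two tables imply translations like the following. The values are hypothetical and shown only to illustrate how the standard and custom mappings combine.
```python
# Hypothetical Traceloop attributes on an incoming span.
before = {
    "traceloop.workflow.name": "support-pipeline",
    "traceloop.entity.name": "router-agent",
    "traceloop.span.kind": "workflow",
}

# Keys the translator adds, per the standard and custom mapping tables above.
added = {
    "gen_ai.workflow.name": "support-pipeline",
    "gen_ai.agent.name": "router-agent",  # approximates entity as agent name
    "gen_ai.span.kind": "workflow",
}
```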
Next steps
To finish setting up AI Agent Monitoring, proceed to the next step in Set up AI Agent Monitoring.