Collect metrics and traces from a LiteLLM proxy
Send metrics and traces from a LiteLLM proxy service to the Splunk Distribution of the OpenTelemetry Collector.
You can configure the LiteLLM proxy service to collect metrics and traces and send them to the Splunk Distribution of the OpenTelemetry Collector. The LiteLLM instrumentation sends metrics and traces to the OTLP receiver.
To install the instrumentation, run the following command:

```shell
pip3 install 'opentelemetry-distro[otlp]'
```

Configuration settings
Learn about the configuration settings for the OTLP receiver.
To view the configuration options for the OTLP receiver, see Settings.
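As a sketch, a minimal Collector configuration fragment that enables the OTLP receiver on its default gRPC and HTTP ports might look like the following. The endpoint values shown are the OTLP defaults; your deployment may differ.

```yaml
# Illustrative fragment: enable the OTLP receiver on its default ports.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
```

For the receiver to take effect, it must also be listed under the `metrics` and `traces` pipelines in the `service` section of the Collector configuration.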
Metrics
The following metrics are available for LiteLLM proxy services. These metrics fall under the default metric category. For more information on these metrics, see Semantic conventions for generative AI metrics in the OpenTelemetry documentation.
| Metric name | Data type | Metric type | Unit | Description |
|---|---|---|---|---|
| gen_ai.client.operation.duration | float | histogram | seconds | The duration of the GenAI operation. |
| gen_ai.client.token.usage | int | histogram | count | Measures the number of input (prompt) and output (completion) tokens used. |
Attributes
The following resource attributes are available for LiteLLM proxy services.
| Attribute name | Type | Description |
|---|---|---|
| gen_ai.operation.name | string | The name of the operation being performed. See the OpenTelemetry semantic conventions for the possible values. |
| gen_ai.system | string | The name of the GenAI system that produced the tokens. |
| gen_ai.request.model | string | The name of the GenAI model a request is being made to. |
| gen_ai.framework | string | The name of the library used to interact with the GenAI system. |
| gen_ai.token.type | string | The type of token being counted, for example input or output. This attribute applies only to the gen_ai.client.token.usage metric. |
Next steps
After you set up data collection, the data populates built-in dashboards that you can use to monitor and troubleshoot LiteLLM proxy services.
For more information, see the documentation on built-in dashboards in Splunk Observability Cloud.