Get data into Splunk APM AlwaysOn Profiling

Follow these instructions to get data into Splunk APM AlwaysOn Profiling.

Prerequisites

To get data into Splunk APM AlwaysOn Profiling, you need a subscription that includes it. AlwaysOn Profiling is activated for all host-based subscriptions. For subscriptions based on traces analyzed per minute (TAPM), check with your Splunk support representative.

Helm chart deployments

If you’re deploying the Splunk Distribution of the OpenTelemetry Collector using Helm, pass the following value when installing the chart:

--set splunkObservability.profilingEnabled='true'
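
For example, a complete install command might look like the following sketch. The realm, access token, and cluster name are placeholders, and the command assumes you added the chart repository under the name splunk-otel-collector-chart:

helm install splunk-otel-collector \
  --set splunkObservability.realm='<realm>' \
  --set splunkObservability.accessToken='<access_token>' \
  --set clusterName='<cluster_name>' \
  --set splunkObservability.profilingEnabled='true' \
  splunk-otel-collector-chart/splunk-otel-collector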

You can also edit the parameter in the values.yaml file itself. For example:

# This option enables only the shared pipeline for logs and profiling data.
# There is no active collection of profiling data.
# Instrumentation libraries must be configured to send it to the collector.
# If you don't use AlwaysOn Profiling for Splunk APM, you can disable it.
profilingEnabled: false
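
Because the goal here is to collect profiling data, set the value to true. A minimal sketch of the relevant values.yaml fragment, assuming the parameter sits under the splunkObservability key as the --set path above indicates:

splunkObservability:
   profilingEnabled: true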

If you are using a version of the OTel Collector lower than 0.78.0, make sure to turn off logs collection:

logsEnabled: false

Note: Setting profilingEnabled to true creates the logs pipeline required by AlwaysOn Profiling, but doesn't install the APM instrumentation. To install the instrumentation, see Get profiling data in.

Get profiling data in

Follow these instructions to get profiling data into Splunk APM using AlwaysOn Profiling:

  1. Instrument your application or service.

  2. Activate AlwaysOn Profiling.

  3. Check that Splunk Observability Cloud is receiving profiling data.

Instrument your application or service

AlwaysOn Profiling requires APM tracing data to correlate stack traces to your application requests. To instrument your application for Splunk APM, follow the steps for the appropriate programming language:

  • Java: Splunk Distribution of OpenTelemetry Java version 1.14.2 or higher. Note: OpenJDK versions 15.0 to 17.0.8 are not supported for memory profiling; see https://bugs.openjdk.org/browse/JDK-8309862 in the JDK bug tracker for more information.

  • Node.js: Splunk Distribution of OpenTelemetry JS version 2.0 or higher. See Instrument your Node.js application for Splunk Observability Cloud.

  • .NET: Splunk Distribution of OpenTelemetry .NET version 1.3.0 or higher. See Instrument your .NET application for Splunk Observability Cloud (OpenTelemetry).

  • Python: Splunk Distribution of OpenTelemetry Python version 1.15 or higher.
Note: See Data retention in Application Performance Monitoring (APM) for information on profiling data retention.

Activate AlwaysOn Profiling

After you’ve instrumented your service for Splunk Observability Cloud and checked that APM data is getting into Splunk APM, activate AlwaysOn Profiling.

To activate AlwaysOn Profiling, follow the steps for the appropriate programming language:

Java

Activate CPU and memory profiling

  • To use CPU profiling, set the splunk.profiler.enabled system property or the SPLUNK_PROFILER_ENABLED environment variable to true.

  • Activate memory profiling by setting the splunk.profiler.memory.enabled system property or the SPLUNK_PROFILER_MEMORY_ENABLED environment variable to true. To activate memory profiling, the splunk.profiler.enabled property must be set to true.

Configure profiling

  • Check that the OTLP endpoint that exports profiling data is set correctly:
    • The profiling-specific endpoint is configured through the splunk.profiler.logs-endpoint system property or the SPLUNK_PROFILER_LOGS_ENDPOINT environment variable.

    • If that endpoint is not set, then the generic OTLP endpoint is used, configured through the otel.exporter.otlp.endpoint system property or the OTEL_EXPORTER_OTLP_ENDPOINT environment variable.

    • If that endpoint is not set either, it defaults to http://localhost:4317.

    • For non-Kubernetes deployments, the OTLP endpoint has to point to http://${COLLECTOR_IP}:4317. If the collector and the profiled application run on the same host, then use http://localhost:4317. Otherwise, make sure there are no firewall rules blocking access to port 4317 from the profiled host to the collector host.

    • For Kubernetes deployments, the OTLP endpoint has to point to http://$(K8S_NODE_IP):4317, where K8S_NODE_IP is fetched from the Kubernetes downward API by setting the environment configuration on the Kubernetes pod running the application. For example:

      env:
      - name: K8S_NODE_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.hostIP
  • Port 9943 is the default port for the SignalFx receiver in the collector distribution. If you change this port in your collector configuration, you need to pass the custom port to the JVM.

The following example shows how to activate the profiler using the system property:

java -javaagent:./splunk-otel-javaagent.jar \
-Dsplunk.profiler.enabled=true \
-Dsplunk.profiler.memory.enabled=true \
-Dotel.exporter.otlp.endpoint=http(s)://collector:4317 \
-jar <your_application>.jar
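
Alternatively, you can activate the profiler with the equivalent environment variables described earlier. A minimal sketch:

export SPLUNK_PROFILER_ENABLED=true
export SPLUNK_PROFILER_MEMORY_ENABLED=true
export OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4317
java -javaagent:./splunk-otel-javaagent.jar -jar <your_application>.jar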

For more configuration options, including setting a separate endpoint for profiling data, see Java settings for AlwaysOn Profiling.

Note: AlwaysOn Profiling is not supported on Oracle JDK 8 and IBM J9.

Node.js

Requirements

AlwaysOn Profiling requires Node.js 16 or higher.

Instrumentation

  • Activate the profiler by setting the SPLUNK_PROFILER_ENABLED environment variable to true.

  • Activate memory profiling by setting the SPLUNK_PROFILER_MEMORY_ENABLED environment variable to true.

  • Check the OTLP endpoint in the SPLUNK_PROFILER_LOGS_ENDPOINT environment variable:
    • For non-Kubernetes deployments, the OTLP endpoint has to point to http://${COLLECTOR_IP}:4317. If the collector and the profiled application run on the same host, then use http://localhost:4317. Otherwise, make sure there are no firewall rules blocking access to port 4317 from the profiled host to the collector host.

    • For Kubernetes deployments, the OTLP endpoint has to point to http://$(K8S_NODE_IP):4317, where K8S_NODE_IP is fetched from the Kubernetes downward API by setting the environment configuration on the Kubernetes pod running the application. For example:

      env:
      - name: K8S_NODE_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.hostIP
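
If you configure the profiler through the environment rather than in code, a minimal sketch might look like the following, where app.js is a placeholder for your application's entry point:

export SPLUNK_PROFILER_ENABLED=true
export SPLUNK_PROFILER_MEMORY_ENABLED=true
export SPLUNK_PROFILER_LOGS_ENDPOINT=http://localhost:4317
node -r @splunk/otel/instrument app.js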

The following example shows how to activate the profiler from your application’s code:

const { start } = require('@splunk/otel');

start({
   serviceName: '<service-name>',
   endpoint: 'collectorhost:port',
   profiling: {                       // Activates CPU profiling
      memoryProfilingEnabled: true,   // Activates memory profiling
   }
});

For more configuration options, including setting a separate endpoint for profiling data, see Node.js settings for AlwaysOn Profiling.

.NET

Requirements

AlwaysOn Profiling requires .NET 8.0 or higher.

Note: .NET Framework is not supported.

Instrumentation

  • Activate the profiler by setting the SPLUNK_PROFILER_ENABLED environment variable to true for your .NET process.

  • Activate memory profiling by setting the SPLUNK_PROFILER_MEMORY_ENABLED environment variable to true.

  • The SPLUNK_PROFILER_LOGS_ENDPOINT environment variable points to http://localhost:4318/v1/logs by default. You can reconfigure it to point to your instance of the Splunk Distribution of OpenTelemetry Collector.
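
For example, to activate both profilers for a .NET service started from a shell, a minimal sketch, where MyService.dll is a placeholder:

export SPLUNK_PROFILER_ENABLED=true
export SPLUNK_PROFILER_MEMORY_ENABLED=true
dotnet MyService.dll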

For more configuration options, including setting a separate endpoint for profiling data, see .NET OTel settings for AlwaysOn Profiling.

Python

Note: AlwaysOn Profiling for Python is in beta development. This feature is provided by Splunk to you "as is" without any warranties, maintenance and support, or service-level commitments. Use of this feature is subject to the Splunk General Terms.

Requirements

AlwaysOn Profiling requires Python 3.7.2 or higher.

Instrumentation

Activate the profiler by setting the SPLUNK_PROFILER_ENABLED environment variable to true, or by calling the start_profiling function in your application code.

Check the OTLP endpoint in the SPLUNK_PROFILER_LOGS_ENDPOINT environment variable:

  • For non-Kubernetes environments, make sure that the SPLUNK_PROFILER_LOGS_ENDPOINT environment variable points to http://localhost:4317.

  • For Kubernetes deployments, the OTLP endpoint has to point to http://$(K8S_NODE_IP):4317, where K8S_NODE_IP is fetched from the Kubernetes downward API by setting the environment configuration on the Kubernetes pod running the application. For example:

    env:
    - name: K8S_NODE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.hostIP
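
If you activate the profiler through the environment rather than in code, a minimal sketch might look like the following. It assumes the splunk-py-trace launcher from the Splunk Distribution of OpenTelemetry Python, with main.py as a placeholder entry point:

export SPLUNK_PROFILER_ENABLED=true
export SPLUNK_PROFILER_LOGS_ENDPOINT=http://localhost:4317
splunk-py-trace python main.py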

The following example shows how to activate the profiler from your application’s code:

from splunk_otel.profiling import start_profiling

# Activates CPU profiling
# All arguments are optional
start_profiling(
   service_name='my-python-service',
   resource_attributes={
      'service.version': '3.1',
      'deployment.environment': 'production',
   },
   endpoint='http://localhost:4317',
)

For more configuration options, see Python settings for AlwaysOn Profiling.

Check that Splunk Observability Cloud is receiving profiling data

After you set up and activate AlwaysOn Profiling, check that profiling data is coming in:

  1. Log in to Splunk Observability Cloud.

  2. In the navigation menu, select APM.

  3. In Splunk APM, select AlwaysOn Profiling.

  4. Select a service, and switch from the CPU view to the Memory view.

  5. If your service runs in multiple instances, select the instance that you’re interested in by selecting the host, container, and process ID.

  6. If you’ve activated memory profiling, explore memory metrics. See Memory profiling metrics.

Activate AlwaysOn Profiling in a gateway deployment

Follow these steps to set up AlwaysOn Profiling with a collector in data forwarding or gateway mode, similar to the following example gateway deployment:

In this deployment, the instrumentation agent sends profiling data to a collector in host monitoring (agent) mode, which forwards it to a collector in data forwarding (gateway) mode, which in turn sends the data to Splunk Observability Cloud.

  1. Point the instrumentation agent to the OTLP gRPC receiver for the collector in host monitoring (agent) mode. The OTLP gRPC receiver must be running on the same host and port as the collector in host monitoring (agent) mode.

  2. Configure the collector in host monitoring (agent) mode with the following components:

    1. An OTLP gRPC receiver

    2. An OTLP exporter pointed at the collector in data forwarding (gateway) mode

    3. A logs pipeline that connects the receiver and the exporter. For example, see the default agent configuration, with the necessary adjustment to send to a gateway, in the Splunk OpenTelemetry Collector repository on GitHub.

    service:
       pipelines:
          logs:
             receivers: [otlp]
             processors:
             - memory_limiter
             - batch
             - resourcedetection
             #- resource/add_environment
             #exporters: [splunk_hec, splunk_hec/profiling]
             # Use instead when sending to gateway
             exporters: [otlp]
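
    The pipeline above assumes a matching otlp exporter definition. A minimal sketch, where <gateway-host> is a placeholder for the address of the collector in data forwarding (gateway) mode:

    exporters:
       otlp:
          endpoint: <gateway-host>:4317
          tls:
             insecure: true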
  3. Configure the collector in data forwarding (gateway) mode with the following components:
    1. An OTLP gRPC receiver

    2. A splunk_hec exporter

    3. A logs pipeline that connects the receiver and the exporter
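
    For example, a minimal sketch of the gateway-side configuration. The splunk_hec/profiling exporter name is illustrative, and the access token and realm values are placeholders to adapt to your deployment:

    receivers:
       otlp:
          protocols:
             grpc:
                endpoint: 0.0.0.0:4317

    exporters:
       splunk_hec/profiling:
          token: "${SPLUNK_ACCESS_TOKEN}"
          endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v1/log"

    service:
       pipelines:
          logs:
             receivers: [otlp]
             exporters: [splunk_hec/profiling]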