OpenShift cluster
Use this Splunk Observability Cloud integration for the OpenShift cluster monitor. See benefits, installation, configuration, and metrics.
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the `openshift-cluster` monitor type to collect cluster-level metrics from the Kubernetes API server. This monitor includes all metrics from the Kubernetes cluster (deprecated) monitor, plus additional OpenShift-specific metrics. For OpenShift deployments, use only the `openshift-cluster` monitor, because it incorporates the `kubernetes-cluster` monitor automatically.
This monitor is available on Kubernetes, Linux, and Windows.
Behavior
Because the agent generally runs in multiple places in a Kubernetes cluster, and because it is usually more convenient to share the same configuration across all agent instances, this monitor uses a leader election process by default to ensure that only one agent sends metrics for the cluster.
All of the agents running in the same namespace that have this monitor configured decide amongst themselves which agent should send metrics for this monitor. This agent becomes the leader agent. The remaining agents stand by, ready to activate if the leader agent dies. You can override leader election by setting the `alwaysClusterReporter` option to `true`, which makes the monitor always report metrics, as in the sketch below.
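A minimal sketch of what this could look like in the Smart Agent receiver section of the Collector configuration; the receiver name `smartagent/openshift-cluster` is a conventional choice, not a requirement:

```yaml
receivers:
  smartagent/openshift-cluster:
    type: openshift-cluster
    # Skip leader election so that this agent instance always reports
    # cluster metrics, regardless of which instance is the leader.
    alwaysClusterReporter: true
```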
Benefits
After you configure the integration, you can access these features:
- View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
- Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Installation
Follow these steps to deploy this integration:
- Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform. By default, the Collector is installed in the namespace you're logged into. To deploy the Collector into a different namespace, use the `--namespace` flag to indicate where to place the Collector.
- When installing on Kubernetes using the Helm chart, use the `--set distribution='openshift'` option to generate OpenShift-specific metrics, in addition to the standard Kubernetes metrics. For example:
```
helm install --set cloudProvider=' ' --set distribution='openshift' --set splunkObservability.accessToken='******' --set clusterName='cluster1' --namespace='namespace1' --set splunkObservability.realm='us0' --set gateway.enabled='false' --generate-name splunk-otel-collector-chart/splunk-otel-collector
```
Find more information in our GitHub repositories.
- Configure the monitor, as described in the Configuration section.
- Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:
- Include the Smart Agent receiver in your configuration file.
- Add the monitor type to the Collector configuration, both in the receivers and pipelines sections (see the example after this list).
- See how to Use Smart Agent monitors with the Collector.
- See how to set up the Smart Agent receiver.
- For a list of common configuration options, refer to Common configuration settings for monitors.
- Learn more about the Collector at Get started: Understand and use the Collector.
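For example, a minimal Collector configuration for this monitor might look like the following sketch. The receiver name is illustrative, and exporters and other pipeline components are omitted for brevity:

```yaml
receivers:
  smartagent/openshift-cluster:
    type: openshift-cluster

service:
  pipelines:
    metrics:
      receivers: [smartagent/openshift-cluster]
```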
Configuration options
The following table shows the configuration options for this monitor:
| Option | Required | Type | Description |
|---|---|---|---|
| `alwaysClusterReporter` | no | `bool` | If `true`, leader election is skipped and metrics are always reported. The default value is `false`. |
| `namespace` | no | `string` | If specified, only resources within the given namespace are monitored. If omitted (blank), all supported resources across all namespaces are monitored. |
| `kubernetesAPI` | no | `object` | Configuration for the Kubernetes API client. See the nested table below. |
| `nodeConditionTypesToReport` | no | `list of strings` | A list of node status condition types to report as metrics. The metrics are reported as data points of the form `kubernetes.node_<type_snake_cased>`, with a value of `0` for false, `1` for true, and `-1` for unknown. The default value is `[Ready]`. |
The nested `kubernetesAPI` configuration object has the following fields:
| Option | Required | Type | Description |
|---|---|---|---|
| `authType` | no | `string` | How to authenticate to the Kubernetes API server. This can be one of `none` (no authentication), `tls` (use manually specified TLS client certs; not recommended), `serviceAccount` (use the standard service account token provided to the agent pod), or `kubeConfig` (use credentials from `~/.kube/config`). The default value is `serviceAccount`. |
| `skipVerify` | no | `bool` | Whether to skip verifying the TLS cert from the API server. Almost never needed. The default value is `false`. |
| `clientCertPath` | no | `string` | The path to the TLS client cert on the pod's filesystem, if using `tls` auth. |
| `clientKeyPath` | no | `string` | The path to the TLS client key on the pod's filesystem, if using `tls` auth. |
| `caCertPath` | no | `string` | Path to a CA certificate to use when verifying the API server's TLS cert. Generally, this is provided by Kubernetes alongside the service account token, which is picked up automatically, so specifying this is rarely necessary. |
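The following sketch combines several of the options described above; the namespace value and the extra node condition type are placeholders to adapt to your cluster:

```yaml
receivers:
  smartagent/openshift-cluster:
    type: openshift-cluster
    # Hypothetical namespace; omit to monitor all namespaces.
    namespace: my-namespace
    # Report these conditions as kubernetes.node_<type_snake_cased> metrics.
    nodeConditionTypesToReport:
      - Ready
      - MemoryPressure
    kubernetesAPI:
      # Use the pod's service account token (the default).
      authType: serviceAccount
      skipVerify: false
```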
Metrics
The metrics available for this integration are listed in the monitor's metadata file: https://raw.githubusercontent.com/signalfx/splunk-otel-collector/main/internal/signalfx-agent/pkg/monitors/kubernetes/cluster/metadata.yaml
Notes
- To learn more about the metric types available in Splunk Observability Cloud, see Metric types.
- In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Splunk Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.
- In MTS-based subscription plans, all metrics are custom.
- To add additional metrics, see how to configure `extraMetrics` in Add additional metrics. A sketch follows this list.
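As a sketch, `extraMetrics` can be added to the monitor's receiver entry like this; the metric name shown is only an example:

```yaml
receivers:
  smartagent/openshift-cluster:
    type: openshift-cluster
    extraMetrics:
      - kubernetes.node_ready
```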
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
- Submit a case in the Splunk Support Portal.
- Contact Splunk Support.
Available to prospective customers and free trial users
- Ask a question and get answers through community support at Splunk Answers.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups.