How Splunk monitors Splunk Cloud Platform

The Splunk Cloud Service Level Schedule describes Splunk's service level commitment for Splunk Cloud Platform. This topic describes some of the monitoring efforts that Splunk performs in support of that service level commitment. Splunk monitors the service with the following goals:

  • Detect issues
  • Restore service as quickly as possible
  • Keep customers and their stakeholders informed about outages

Splunk Cloud Platform is monitored 24x7 worldwide by our Network Operations Center (NOC). During U.S. business hours, specialized teams work to identify and resolve the causes of novel issues.

Splunk Network Operations Center

The NOC takes action in response to automated alerts. For consistency and repeatability, the NOC uses runbooks to respond to alerts and files proactive incidents when novel issues occur.

The NOC manages more than 100 priority-one automated alerts, which monitor components such as the following; an illustrative sketch of one such check appears after the list. See the Splunk Cloud Platform Monitoring Matrix for more detail.

  • Disk usage
  • Indexers, search heads, cluster manager, KV store, or Inputs Data Managers (IDMs) down
  • User Interface unresponsive
  • Search head synchronization issues
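
To make one of these checks concrete, the following Python sketch polls the Splunk REST API introspection endpoint server/status/partitions-space for partition capacity and free space and flags any partition above a usage threshold. This is a minimal illustration in the spirit of a disk usage alert, not Splunk's internal NOC tooling: the host, token, 90 percent threshold, and alert handling are assumptions, and REST API access to a Splunk Cloud Platform stack depends on your subscription and allow-list configuration.

```python
import requests

# All of the following values are assumptions for illustration only.
SPLUNK_MGMT_URI = "https://example.splunkcloud.com:8089"  # hypothetical management endpoint
AUTH_TOKEN = "<splunk-authentication-token>"              # a valid Splunk authentication token
USED_PCT_THRESHOLD = 90.0                                 # assumed disk usage alert threshold

def partitions_over_threshold():
    """Return (mount_point, used_pct) for partitions above the usage threshold."""
    resp = requests.get(
        f"{SPLUNK_MGMT_URI}/services/server/status/partitions-space",
        headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
        params={"output_mode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    flagged = []
    for entry in resp.json()["entry"]:
        content = entry["content"]
        capacity_mb = float(content["capacity"])  # total partition size, MB
        free_mb = float(content["free"])          # free space, MB
        used_pct = 100.0 * (capacity_mb - free_mb) / capacity_mb
        if used_pct >= USED_PCT_THRESHOLD:
            flagged.append((content["mount_point"], round(used_pct, 1)))
    return flagged

if __name__ == "__main__":
    for mount_point, used_pct in partitions_over_threshold():
        print(f"Disk usage alert: {mount_point} is {used_pct}% full")
```

A real alerting pipeline would run a check like this on a schedule and page an on-call rotation rather than print to stdout.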

Specialized Teams

Specialized teams monitor critical product functionality and resolve issues during U.S. business hours. These teams investigate issues to determine and remediate root causes, and they feed their findings back into the development process to improve code resilience.

Specialized teams work on critical functions such as the following:

  • Search
  • Ingest
  • Login

See Splunk Cloud Platform Service Details for more information.

Splunk Cloud Platform Monitoring Matrix

The following table lists monitored Splunk Cloud Platform features and the Splunk response when issues are detected. This is a representative list of Splunk Cloud Platform monitoring features. It is not exhaustive and is subject to change without notice. This document does not describe Splunk Cloud Platform for FedRAMP Moderate, Splunk Cloud Platform for FedRAMP High, or Splunk Cloud Platform for DoD IL5.

Feature | Issue | Support | Splunk Action
Federated Search | Federated search issues | U.S. business hours | Investigate cause.
Inputs Data Manager (IDM) (Note 1) | IDM requires upsizing | 24x7 | Schedule maintenance window to upsize IDM.
Indexing | Indexer down | 24x7 | Check bundle push and detention status. Verify available disk space and potentially restart service. Create proactive incident if required.
Indexing | Indexing latency >5 minutes (Note 2) | U.S. business hours | Investigate cause.
Indexing | Indexing queues blocked | U.S. business hours | Investigate cause.
Infrastructure | Disk space full | 24x7 | Rotate logs to clear old backups or expand disk space (Note 3). Create proactive incident if required.
Ingestion | Splunk-to-Splunk (S2S) ingestion port down | 24x7 | Check bundle push and detention status. Verify available disk space and potentially restart service. Create proactive incident if required.
Ingestion | HTTP Event Collector | U.S. business hours | Investigate cause.
Ingestion | S2S connection acceptance | U.S. business hours | Investigate cause.
KV store | KV store down | 24x7 | Check data store health, certificates, and disk space, and potentially restart service. Create proactive incident if required.
Login | Splunk native authentication | U.S. business hours | Investigate cause.
Login | Identity provider authentication | U.S. business hours | Investigate cause.
Search | Search Head Cluster (SHC) out of sync | 24x7 | Check knowledge object replication and potentially re-sync cluster members. Create proactive incident if required.
Search | Search peer isolated | 24x7 | Check for unavailable or stuck search peers, and potentially restart service or remove the unresponsive peer. Create proactive incident if required.
Search | Search initiation | 24x7 | Check health of the indexer running searches, bundle synchronization, down peers, and cluster manager health, and potentially restart services. Create proactive incident if required.
Search | Search execution | U.S. business hours | Investigate cause.
Search | Search performance | U.S. business hours | Note search performance reductions relative to the customer's historical performance. Investigate cause.
Search | Skipped search percentage | U.S. business hours | Investigate cause.
User interface | API unavailable | 24x7 | Check for system overload and disk space issues, and potentially restart the service. Create proactive incident if required.
User interface | Splunk Web user interface unavailable (Search Head or Enterprise Security Search Head) | 24x7 | Check certificates and potentially restart processes or instances. Create proactive incident if required.

Notes

  1. IDM applies to Classic Experience only.
  2. Indexing latency applies to Victoria Experience only.
  3. Disk expansion is limited by entitlement.
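
As a worked illustration of the indexing latency threshold in the monitoring matrix, indexing latency is commonly measured as the time an event was indexed minus the event's own timestamp (in Splunk terms, _indextime minus _time). The following Python sketch shows that arithmetic against the 5-minute threshold; the data structures, the averaging over a sample, and the breach check are assumptions for illustration, not Splunk's internal alerting logic.

```python
from dataclasses import dataclass

LATENCY_THRESHOLD_SECONDS = 5 * 60  # the 5-minute threshold from the monitoring matrix

@dataclass
class IndexedEvent:
    event_time: float  # epoch seconds from the event itself (Splunk's _time)
    index_time: float  # epoch seconds when the event was indexed (Splunk's _indextime)

def average_indexing_latency(events):
    """Average indexing latency (index time minus event time) in seconds."""
    if not events:
        return 0.0
    return sum(e.index_time - e.event_time for e in events) / len(events)

def latency_breached(events):
    """True when the average latency over the sample exceeds the 5-minute threshold."""
    return average_indexing_latency(events) > LATENCY_THRESHOLD_SECONDS

# Hypothetical sample: two events indexed roughly six minutes after they occurred.
sample = [IndexedEvent(event_time=1000.0, index_time=1370.0),
          IndexedEvent(event_time=1010.0, index_time=1365.0)]
print(latency_breached(sample))  # True: average latency is about 362 seconds
```

In practice, a check like this would be computed over a recent sample of events per index rather than a hard-coded list.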