Set up an Edge Processor in a Kubernetes cluster
Deploy Edge Processor to a Kubernetes cluster using Splunk's official Helm chart. The chart packages all required resources (StatefulSet, Service, ConfigMap, and so on) into a single, versioned release to simplify deployment and lifecycle management.
For container deployments outside of Kubernetes, see the Docker guide topic in this manual.
Prerequisites
- Access to a functioning Splunk Cloud Platform tenant.
- Ability to deploy resources in a Kubernetes cluster (version 1.25 or later).
- Helm version 3.x or later installed on the machine used to deploy the Helm chart.
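As a quick sanity check before deploying, you can verify that your local Helm client meets the version requirement. The helper below is a minimal sketch: the hardcoded sample input stands in for the output of `helm version --template '{{.Version}}'`, and the function name is illustrative.

```shell
# Minimal sketch: check that a Helm version string satisfies the
# 3.x-or-later prerequisite. In practice, feed this the output of:
#   helm version --template '{{.Version}}'
check_helm_version() {
  v=${1#v}          # strip the leading "v" (v3.14.0 -> 3.14.0)
  major=${v%%.*}    # keep only the major version (3.14.0 -> 3)
  if [ "$major" -ge 3 ]; then
    echo "Helm $1 is supported"
  else
    echo "Helm $1 is too old; install 3.x or later"
  fi
}

check_helm_version "v3.14.0"   # → Helm v3.14.0 is supported
```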
Architecture overview
Edge Processor deploys as a StatefulSet, rather than a Deployment. Doing so provides the following advantages:
- Stable network identifiers: Each pod receives a predictable, human-readable hostname (for example, `splunk-edge-processor-0`), which is displayed as the instance name in the tenant's web UI.
- Persistent storage: Each pod maintains its own `PersistentVolumeClaim`, enabling Edge Processor instances to preserve their identity and event queues across pod restarts.
- Parallel scaling: Splunk's Helm chart sets the `podManagementPolicy` to `Parallel`, which lets pods scale simultaneously rather than sequentially. Because of this, the cluster is able to respond faster to ingestion volume changes.
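These properties map onto a few specific fields of the StatefulSet spec. The fragment below is an illustrative sketch, not the chart's literal rendered output; the resource name, storage size, and claim name are assumptions.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: splunk-edge-processor        # assumed name; pods become splunk-edge-processor-0, -1, ...
spec:
  podManagementPolicy: Parallel      # scale pods simultaneously rather than one at a time
  serviceName: splunk-edge-processor # headless Service backing the stable hostnames
  replicas: 3
  volumeClaimTemplates:              # one PVC per pod, retained across pod restarts
    - metadata:
        name: data                   # assumed claim name
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi            # assumed size
```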
Shared service principal
During installation, a one-time Kubernetes Job provisions a service principal, which is used to interact with other Splunk Cloud Platform services. The generated credentials are then stored in a Kubernetes Secret and mounted into each pod at /opt/splunk-edge/etc/principal.yaml, allowing all replicas in the StatefulSet to share the same service principal.
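Conceptually, the mount looks like the following pod-spec fragment. This is a sketch only; the Secret name, container name, and key name are assumptions, not values taken from the chart.

```yaml
# Illustrative sketch of how the generated Secret reaches each pod
volumes:
  - name: principal
    secret:
      secretName: splunk-edge-principal     # assumed Secret name
containers:
  - name: edge-processor                    # assumed container name
    volumeMounts:
      - name: principal
        mountPath: /opt/splunk-edge/etc/principal.yaml
        subPath: principal.yaml             # mount the single key as a file
        readOnly: true
```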
Deployment and port exposure
By default, the chart deploys three replicas, enables horizontal pod autoscaling (with target utilization thresholds of 70% for CPU and 80% for memory), and exposes the following ports:
| Port | Protocol | Description |
|---|---|---|
| 8088 | TCP | HTTP Event Collector (HEC) |
| 9997 | TCP | Splunk forwarders |
| 10514 | TCP/UDP | Syslog |
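The autoscaling defaults described above correspond to a `HorizontalPodAutoscaler` along the following lines. This is a sketch using the standard `autoscaling/v2` API; the target name and replica bounds are assumptions, and the resource the chart actually renders may differ.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: splunk-edge-processor       # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: splunk-edge-processor     # assumed target
  minReplicas: 3                    # matches the chart's default replica count
  maxReplicas: 10                   # assumed upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # scale up past 70% average CPU
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80    # scale up past 80% average memory
```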
If you change these ports in the tenant's web UI, update the Service resource (for example, by using helm upgrade) to reflect the latest values.
Helm chart installation
Perform the following tasks to install the Splunk Helm chart.

First, add the Splunk Helm chart repository:

```shell
helm repo add splunk https://splunk.github.io/edge-processor-helm-charts
```
Deploy the Helm chart
The minimum configuration required to deploy the Helm chart consists of three values, all of which can be found in your tenant's web UI:
- TENANT: Typically matches the ID of the Splunk Cloud Platform stack paired with the tenant and appears in the top right corner of the page, below the signed-in user's name.
- GROUP_ID: Found by selecting a processor from the Edge Processors page, then copying its associated ID field (for example, `431e1ead-fd5b-4af8-ac89-ccae2ae81eda`).
- TOKEN: Navigate to the Edge Processors page, and select or create a processor. Choose Actions, then Install/Uninstall from the dropdown menu in the top right-hand corner, and set the Instance type to Kubernetes. The token is visible in the resulting script.
```shell
helm install <release-name> splunk/edge-processor --version <version> \
  --namespace <target-namespace> \
  --set config.TENANT=<tenant-id> \
  --set config.GROUP_ID=<edge-processor-id> \
  --set config.TOKEN=<access-token>
```
If `--namespace` is not specified, Helm installs the release into the namespace configured in your current Kubernetes context, resolving to `default` if none is set.
To override the chart's default settings, create an `overwritten-values.yaml` file that contains only the parameters you wish to override, and pass it using the `--values` flag. At render time, Helm merges your file with the chart's defaults, applying your overrides while preserving unspecified settings. For example:
```yaml
# overwritten-values.yaml
# (Optional) Specify required config values here
# rather than providing them via the `--set` flag
config:
  TENANT: splunk-foobar
  GROUP_ID: 431e1ead-fd5b-4af8-ac89-ccae2ae81eda
  TOKEN: ey···QmnGg

# Expect forwarded events to be received on port 9998
# (instead of 9997). Also, explicitly disable ingestion
# from the HTTP Event Collector (HEC) and Syslog
ports:
  forwarder:
    port: 9998
  hec:
    enabled: false
  syslog:
    enabled: false

# Scale the number of replicas managed by the StatefulSet
# from 3 → 10 to process higher event volumes by default
deployment:
  replicaCount: 10
```
```shell
helm install <release-name> splunk/edge-processor --version <version> \
  --namespace <target-namespace> \
  --values overwritten-values.yaml
```
For a complete list of available configuration options, inspect the chart's default values by running helm show values splunk/edge-processor, or by visiting the following: https://github.com/splunk/edge-processor-helm-charts/blob/main/charts/edge-processor/values.yaml.
Upgrade Helm chart
The best practice is to keep your deployment aligned with the latest stable chart release. New versions may include bug fixes, security patches, updates to the Edge Processor container image, as well as enhancements to the chart itself.
Update your local copy of the repository and list the available chart versions:

```shell
helm repo update
helm search repo splunk/edge-processor --versions
```
Then upgrade the release, passing your overrides with `--values`:

```shell
helm upgrade <release-name> splunk/edge-processor --version <version> \
  --namespace <target-namespace> \
  --values overwritten-values.yaml
```
Alternatively, carry forward the values from the previous release with `--reuse-values`:

```shell
helm upgrade <release-name> splunk/edge-processor --version <version> \
  --namespace <target-namespace> \
  --reuse-values
```
During an upgrade, the StatefulSet uses a rolling update strategy, replacing pods one at a time to maintain availability. This behavior is independent of `podManagementPolicy`, which applies only to scaling operations.
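In StatefulSet terms, the two settings sit side by side in the spec but govern different operations. A sketch of the relevant fields:

```yaml
spec:
  podManagementPolicy: Parallel   # applies only to scale-out and scale-in
  updateStrategy:
    type: RollingUpdate           # applies to upgrades: pods replaced one at a time
```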
Uninstall and cleanup Helm chart
Uninstalling a Helm release removes the Kubernetes resources managed by the chart; however, PersistentVolumeClaims (PVCs) are not automatically deleted. Instead, they are retained to preserve persistent data and prevent accidental loss, such as when a release is uninstalled unintentionally or as part of a reinstall workflow.
```shell
helm uninstall <release-name> --namespace <target-namespace>
```
Persistent data cleanup
The retained PVCs store each instance's persistent state, including its local event queue and identity information; therefore, deleting them permanently removes all queued events and erases the instance's stored identity.
```shell
kubectl delete pvc -l app.kubernetes.io/name=<release-name> -n <target-namespace>
```
Namespace removal
If you created a dedicated namespace for the release, delete it to remove any remaining resources:

```shell
kubectl delete namespace <target-namespace>
```
Additional considerations
- Scaling down is disabled by default. The chart's `HorizontalPodAutoscaler` is configured to scale up aggressively, but not scale down automatically. Because each replica maintains a local event queue on its PVC, terminating a pod before its queue fully drains delays those events until the pod is rescheduled and reattaches to its volume. If you need to reduce the replica count, either wait for queues to drain before scaling down manually, or configure a gradual scale-down policy with a stabilization window long enough for queues to clear.
- Port changes made in the Edge Processor Shared settings page are not automatically propagated to existing Helm releases. If the ports configured in the UI diverge from those specified in your chart, the `Service` will route traffic to container ports the Edge Processor service is no longer listening on, resulting in data loss. To prevent this, run `helm upgrade` with the updated values any time S2S, HEC, or Syslog ports are changed.
- The default `Service` type is `ClusterIP`, meaning Edge Processor is only reachable from within the Kubernetes cluster. No external traffic can reach it directly. If your data sources live outside the cluster (for example, on-premises forwarders or external syslog sources), you'll need to change the `Service` type to make the deployment reachable. For example, set `service.type: LoadBalancer` to provision a cloud load balancer, or `service.type: NodePort` to expose the `Service` on a static port across all cluster nodes. Additionally, cloud-specific annotations for internal and external load balancers can be passed via `service.annotations`, if needed.
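If you do want gradual automatic scale-down, the standard `autoscaling/v2` `behavior` stanza can express the policy described above. The fragment below is a sketch of the Kubernetes API fields, assuming the chart allows the HPA spec to be overridden; the window and rate values are illustrative, not recommendations.

```yaml
behavior:
  scaleDown:
    stabilizationWindowSeconds: 1800  # require 30 minutes of sustained low load first
    policies:
      - type: Pods
        value: 1                      # then remove at most one pod...
        periodSeconds: 600            # ...every 10 minutes, giving queues time to drain
```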