Elasticsearch receiver
The Elasticsearch receiver queries Elasticsearch's node stats, cluster health, and index stats endpoints to scrape metrics from a running Elasticsearch cluster. The supported pipeline type is metrics. See Process your data with pipelines for more information.
To learn more about the queried endpoints, see the following topics in the Elastic documentation:
- For the Nodes stats API, see https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
- For the Cluster health API, see https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html
- For the Index stats API, see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html
Prerequisites
This receiver supports Elasticsearch versions 7.9 or higher.
If Elasticsearch security features are enabled, you must have either the monitor or manage cluster privilege. See https://www.elastic.co/guide/en/elasticsearch/reference/current/authorization.html for role-based access control and https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html for security privileges in the Elastic documentation for more information.
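For example, if you use Elasticsearch's file-based role management, a role that grants the required monitor cluster privilege could look like the following sketch in roles.yml. The role name otel_monitor is an assumption for illustration; you can also create an equivalent role through the security API or Kibana.

# roles.yml: file-based role definition; the role name is illustrative
otel_monitor:
  cluster:
    - monitor

Assign this role, or an equivalent one, to the user whose credentials you configure in the receiver's username and password settings.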
Get started
Follow these steps to configure and activate the component:
1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
2. Configure the Elasticsearch receiver as described in the next section.
3. Restart the Collector.
Sample configuration
1. To activate the receiver, add elasticsearch to the receivers section of your configuration file:

receivers:
  elasticsearch:

2. Next, include the receiver in the metrics pipeline of the service section of your configuration file:

service:
  pipelines:
    metrics:
      receivers:
        - elasticsearch
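Putting both snippets together, a minimal end-to-end configuration might look like the following sketch. The exporter shown here is only a placeholder assumption; use whichever exporter your deployment already defines.

receivers:
  elasticsearch:

exporters:
  # Placeholder exporter; replace with the exporter used in your deployment.
  debug:

service:
  pipelines:
    metrics:
      receivers:
        - elasticsearch
      exporters:
        - debug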
Advanced configuration
The following settings are optional:
- nodes. ["_all"] by default. Allows you to specify node filters that define which nodes are scraped for node-level and cluster-level metrics.
  - For allowed filters, see the Cluster APIs Node specification at https://www.elastic.co/guide/en/elasticsearch/reference/7.9/cluster.html#cluster-nodes in the Elastic documentation.
  - If empty, the receiver doesn't scrape any node-level metrics, and only metrics related to the cluster's health are scraped at the cluster level.
- skip_cluster_metrics. false by default. If true, cluster-level metrics are not scraped.
- indices. ["_all"] by default. Allows you to specify index filters that define which indices are scraped for index-level metrics.
  - For allowed filters, see Cluster APIs Path parameters at https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html#index-stats-api-path-params in the Elastic documentation.
  - If empty, the receiver doesn't scrape any index-level metrics.
- endpoint. http://localhost:9200 by default. The base URL of the Elasticsearch API for the cluster to monitor.
- username. No default. Specifies the username used to authenticate with Elasticsearch using basic auth.
- password. No default. Specifies the password used to authenticate with Elasticsearch using basic auth.
- collection_interval. 10s by default. This receiver collects metrics on an interval determined by this setting. The value must be a string readable by Golang's time.ParseDuration. See https://pkg.go.dev/time#ParseDuration.
  - On larger clusters, you might need to increase this interval, as querying Elasticsearch for metrics takes longer on clusters with more nodes. See the sketch after this list.
- initial_delay. 1s by default. Defines how long this receiver waits before starting.
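For example, on a larger cluster you might increase collection_interval and add an initial_delay, as in the following sketch. The values shown are illustrative assumptions, not recommendations.

receivers:
  elasticsearch:
    endpoint: http://localhost:9200
    # Illustrative values: a longer scrape interval for a larger cluster
    # and a short delay before the first collection.
    collection_interval: 60s
    initial_delay: 5s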
Configuration example
See the following configuration example:
receivers:
elasticsearch:
metrics:
elasticsearch.node.fs.disk.available:
enabled: false
nodes: ["_local"]
skip_cluster_metrics: true
indices: [".geoip_databases"]
endpoint: http://localhost:9200
username: otel
password: password
collection_interval: 10s
Settings
For the complete list of configuration options for the Elasticsearch receiver, see the configuration metadata at https://raw.githubusercontent.com/splunk/collector-config-tools/main/cfg-metadata/receiver/elasticsearch.yaml
Metrics
For the complete list of available metrics, resource attributes, and attributes, see the metric metadata at https://raw.githubusercontent.com/splunk/collector-config-tools/main/metric-metadata/elasticsearchreceiver.yaml
Activate or deactivate specific metrics
You can activate or deactivate specific metrics by setting the enabled
field in the metrics
section for each metric. For example:
receivers:
samplereceiver:
metrics:
metric-one:
enabled: true
metric-two:
enabled: false
The following is an example of a host metrics receiver configuration with activated metrics:
receivers:
hostmetrics:
scrapers:
process:
metrics:
process.cpu.utilization:
enabled: true
- If you're on an MTS-based subscription, all metrics count towards metrics usage.
- If you're on a host-based plan, metrics listed as active (Active: Yes) in this document are considered default and are included free of charge.
Learn more at Infrastructure Monitoring subscription usage (Host and metric plans).
Metrics with versions
The following metrics are only available in specific Elasticsearch versions:
- elasticsearch.indexing_pressure.memory.limit. Available in versions 7.10 or higher.
- elasticsearch.node.shards.data_set.size. Available in versions 7.13 or higher.
- elasticsearch.cluster.state_update.count. Available in versions 7.16.0 or higher.
- elasticsearch.cluster.state_update.time. Available in versions 7.16.0 or higher.
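If your cluster runs an older Elasticsearch version than the ones listed, you can deactivate the corresponding metrics with the same enabled flag shown earlier. A minimal sketch:

receivers:
  elasticsearch:
    metrics:
      # These metrics require Elasticsearch 7.16.0 or higher;
      # deactivate them when monitoring an older cluster.
      elasticsearch.cluster.state_update.count:
        enabled: false
      elasticsearch.cluster.state_update.time:
        enabled: false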
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
- Submit a case in the Splunk Support Portal.
- Contact Splunk Support.
Available to prospective customers and free trial users
- Ask a question and get answers through community support at Splunk Answers.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups.