Logstash TCP
Use this Splunk Observability Cloud integration for the Logstash TCP monitor. See benefits, installation, configuration, and metrics.
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the logstash-tcp monitor type to monitor the health and performance of Logstash deployments. It fetches events from the Logstash tcp output plugin operating in either server or client mode and converts them to data points.
This integration is meant to be used in conjunction with the Logstash Metrics filter plugin, which turns events into metrics. You can only use autodiscovery when this monitor is in client mode.
This receiver is available on Linux and Windows.
Benefits
After you configure the integration, you can access these features:
- View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
- Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Installation
Follow these steps to deploy this integration:
- Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
- Configure the monitor, as described in the Configuration section.
- Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:
- Include the Smart Agent receiver in your configuration file.
- Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
- See how to Use Smart Agent monitors with the Collector.
- See how to set up the Smart Agent receiver.
- For a list of common configuration options, refer to Common configuration settings for monitors.
- Learn more about the Collector at Get started: Understand and use the Collector.
Example
To activate this integration, add the following to your Collector configuration:
receivers:
  smartagent/logstash-tcp:
    type: logstash-tcp
    ... # Additional config
Next, add the monitor to the service.pipelines.metrics.receivers section of your configuration file:
service:
  pipelines:
    metrics:
      receivers: [smartagent/logstash-tcp]
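The following sketch combines the two snippets above into one configuration and adds an exporter so the pipeline is complete. The signalfx exporter and the SPLUNK_ACCESS_TOKEN and SPLUNK_REALM environment variables are assumptions for illustration; use the exporter that is already set up in your deployment:
receivers:
  smartagent/logstash-tcp:
    type: logstash-tcp
    ... # Additional config

exporters:
  # Assumption: metrics are sent with the signalfx exporter
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"

service:
  pipelines:
    metrics:
      receivers: [smartagent/logstash-tcp]
      exporters: [signalfx]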
Example: Logstash Metrics plugin configuration
The following example shows how to use both timer
and meter
metrics from the Logstash Metrics filter plugin:
input {
  file {
    path => "/var/log/auth.log"
    start_position => "beginning"
    tags => ["auth_log"]
  }

  # A contrived file that contains timing messages
  file {
    path => "/var/log/durations.log"
    tags => ["duration_log"]
    start_position => "beginning"
  }
}

filter {
  if "duration_log" in [tags] {
    dissect {
      mapping => {
        "message" => "Processing took %{duration} seconds"
      }
      convert_datatype => {
        "duration" => "float"
      }
    }
    if "_dissectfailure" not in [tags] { # Filter out bad events
      metrics {
        timer => { "process_time" => "%{duration}" }
        flush_interval => 10
        # This makes the timing stats pertain to only the previous 5 minutes
        # instead of since Logstash last started.
        clear_interval => 300
        add_field => {"type" => "processing"}
        add_tag => "metric"
      }
    }
  }

  # Count the number of logins using SSH from /var/log/auth.log
  if "auth_log" in [tags] and [message] =~ /sshd.*session opened/ {
    metrics {
      # This determines how often metric events will be sent to the agent, and
      # thus how often data points will be emitted.
      flush_interval => 10
      # The name of the meter will be used to construct the name of the metric
      # in Splunk Infrastructure Monitoring. For this example, a data point called `logins.count` would
      # be generated.
      meter => "logins"
      add_tag => "metric"
    }
  }
}

output {
  # This can be helpful to debug
  stdout { codec => rubydebug }

  if "metric" in [tags] {
    tcp {
      port => 8900
      # The agent will connect to Logstash
      mode => "server"
      # Needs to be "0.0.0.0" if running in a container.
      host => "127.0.0.1"
    }
  }
}
After Logstash is running with this configuration, the logstash-tcp monitor collects the logins.count and process_time.<timer_field> metrics.
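On the Collector side, a minimal receiver sketch that pairs with this Logstash output runs in client mode and connects to the port that the tcp output plugin listens on. The host and port values below assume Logstash runs on the same host as the Collector and listens on port 8900, as in the example above:
receivers:
  smartagent/logstash-tcp:
    type: logstash-tcp
    # client mode: the monitor connects to the Logstash tcp output plugin,
    # which runs in server mode in the example above
    mode: client
    host: 127.0.0.1
    port: 8900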
Configuration settings
The following table shows the configuration options for this monitor type:
| Option | Required | Type | Description |
| --- | --- | --- | --- |
| host | yes | string | If mode: server, the local IP address to listen on. If mode: client, the host of the Logstash tcp output plugin to connect to. |
| port | no | integer | If mode: server, the local port to listen on. If mode: client, the port of the Logstash tcp output plugin to connect to. |
| mode | no | string | Whether to act as a server or client. The corresponding setting in the Logstash tcp output plugin should be set to the opposite of this. (default: client) |
| desiredTimerFields | no | list of strings | (default: [mean, max, p99, count]) |
| reconnectDelay | no | integer | How long to wait before reconnecting if the TCP connection cannot be made or after it gets broken. |
| debugEvents | no | bool | If true, events received from Logstash will be dumped to the agent's stdout in deserialized form. (default: false) |
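As an illustration of these settings, the following hypothetical sketch runs the monitor in server mode so that Logstash connects to the Collector. The listen address, port, and debugEvents values are assumptions for this example, not recommended defaults:
receivers:
  smartagent/logstash-tcp:
    type: logstash-tcp
    # server mode: the monitor listens and the Logstash tcp output plugin
    # (configured with mode => "client") connects to it
    mode: server
    host: 0.0.0.0
    port: 8900
    # Dump events received from Logstash to stdout while troubleshooting
    debugEvents: true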
Metrics
There are no metrics available for this integration.
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
- Submit a case in the Splunk Support Portal.
- Contact Splunk Support.
Available to prospective customers and free trial users
- Ask a question and get answers through community support at Splunk Answers.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups.