GitLab
The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the GitLab monitor type to monitor GitLab.
GitLab is bundled with Prometheus exporters that can be configured to export performance metrics of GitLab itself and of the bundled software that GitLab depends on. These exporters publish Prometheus metrics at endpoints that are scraped by this monitor type.
This integration allows you to monitor the following:
- Gitaly and Gitaly Cluster: Gitaly is a Git remote procedure call (RPC) service that handles all Git calls made by GitLab. This monitor scrapes the GitLab Gitaly Git RPC server.
- GitLab Runner: GitLab Runner can be monitored using Prometheus. See the GitLab Runner documentation on GitLab Docs for more information.
- GitLab Sidekiq: This monitor scrapes the GitLab Sidekiq Prometheus Exporter.
- GitLab Unicorn server: The Unicorn server comes with a Prometheus exporter. The IP address of the container or host needs to be allowed for the Collector to access the endpoint. See the IP allowlist documentation on GitLab Docs for more information.
- GitLab Webservice: The Webservice provides the GitLab Rails web server with two Webservice workers per pod.
- GitLab Workhorse: The GitLab service that handles slow HTTP requests. Workhorse includes a built-in Prometheus exporter that this monitor scrapes to gather metrics.
This monitor type is available on Kubernetes, Linux, and Windows using GitLab version 9.3 or higher.
Benefits
After you configure the integration, you can access these features:
- View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
- Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Installation
Follow these steps to deploy this integration:
- Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
- Configure the integration, as described in the Configuration section.
- Restart the Splunk Distribution of the OpenTelemetry Collector.
Configure GitLab to monitor Prometheus endpoints
Follow the instructions on Monitoring GitLab with Prometheus to configure the GitLab Prometheus exporters to expose metric endpoint targets.
The following Prometheus endpoint targets are available:
| Monitor type | Default port | Standard path |
|---|---|---|
| GitLab exporter | 9168 | /metrics |
| Gitaly and Gitaly Cluster | 9236 | /metrics |
| GitLab Runner | 9252 | /metrics |
| GitLab Sidekiq | 3807 | /metrics |
| GitLab Unicorn | 8080 | /metrics |
| GitLab Webservice | 8083 | /metrics |
| GitLab Workhorse | 9229 | /metrics |
| nginx (see Monitoring GitLab with Prometheus) | 8060 | /metrics |
| Node exporter | 9100 | /metrics |
| PostgreSQL Server Exporter | 9187 | /metrics |
| Prometheus (see Monitoring GitLab with Prometheus) | 9090 | /metrics |
| Redis exporter | 9121 | /metrics |
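Each endpoint in this table corresponds to one receiver instance in the Collector configuration, described in the Configuration section later on this page. As an illustration, the following minimal sketch scrapes the Gitaly exporter on its default port; the host name gitlab.example.com is a placeholder for wherever the exporter is reachable from the Collector:

receivers:
  smartagent/gitaly:
    type: gitlab
    host: gitlab.example.com   # placeholder: host where the Gitaly exporter listens
    port: 9236                 # default Gitaly exporter port from the table above
    metricPath: /metrics       # standard path, which is also the default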
Important notes
- If you configure GitLab by editing /etc/gitlab/gitlab.rb, run the command gitlab-ctl reconfigure for the changes to take effect.
- If you configure nginx by editing the file /var/opt/gitlab/nginx/conf/nginx-status.conf, run the command gitlab-ctl restart.
  - Note that changes to the configuration file /var/opt/gitlab/nginx/conf/nginx-status.conf in particular are erased by subsequent runs of gitlab-ctl reconfigure, because gitlab-ctl reconfigure restores the original configuration file.
- You need to configure the GitLab Prometheus exporters, nginx, and GitLab Runner to accept requests from the host or Docker container of the OpenTelemetry Collector.
Examples
The following configuration in /etc/gitlab/gitlab.rb configures the GitLab Postgres Prometheus exporter to allow network connections on port 9187 from any IP address. Use either of the following equivalent settings:
postgres_exporter['listen_address'] = '0.0.0.0:9187'
postgres_exporter['listen_address'] = ':9187'
The file /var/opt/gitlab/nginx/conf/nginx-status.conf configures nginx, and the location /metrics block contains the metric-related configuration. Use the statement allow 172.17.0.0/16; to allow network connections from the 172.17.0.0/16 IP range, assuming that the IP address associated with the OpenTelemetry Collector is in that range.
server {
...
location /metrics {
...
allow 172.17.0.0/16;
deny all;
}
}
The /etc/gitlab-runner/config.toml file configures GitLab Runner. To configure GitLab Runner’s Prometheus metrics HTTP server to allow network connections on port 9252 from any IP address, use:
listen_address = "0.0.0.0:9252"
...
Configuration
To use this integration of a Smart Agent monitor with the Collector:
- Include the Smart Agent receiver in your configuration file.
- Add the monitor type to the Collector configuration, both in the receivers and pipelines sections.
- See how to Use Smart Agent monitors with the Collector.
- See how to set up the Smart Agent receiver.
- For a list of common configuration options, refer to Common configuration settings for monitors.
- Learn more about the Collector at Get started: Understand and use the Collector.
Example
To activate this integration, add the following to your Collector configuration:
receivers:
  smartagent/gitlab:
    type: gitlab
    ... # Additional config
Next, add the services you want to monitor to the service.pipelines.metrics.receivers
section of your configuration file:
receivers:
  smartagent/gitlab-sidekiq:
    type: gitlab
    host: gitlab-webservice-default.default
    port: 3807
  smartagent/gitlab-workhorse:
    type: gitlab
    host: gitlab-webservice-default.default
    port: 9229

# ... Other sections

service:
  pipelines:
    metrics:
      receivers:
        - smartagent/gitlab-sidekiq
        - smartagent/gitlab-workhorse

# ... Other sections
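A metrics pipeline also needs at least one exporter. The following hedged sketch extends the previous example with the SignalFx exporter that is bundled with the Splunk Distribution of the OpenTelemetry Collector; the access token environment variable and realm value are placeholders for your own settings:

exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"   # placeholder: your Splunk Observability Cloud access token
    realm: us0                               # placeholder: your Splunk Observability Cloud realm

service:
  pipelines:
    metrics:
      receivers:
        - smartagent/gitlab-sidekiq
        - smartagent/gitlab-workhorse
      exporters:
        - signalfx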
Configuration options
The following table shows the configuration options for this monitor:
| Option | Required | Type | Description |
|---|---|---|---|
| httpTimeout | no | int64 | HTTP timeout duration for both reads and writes. This should be a duration string that is accepted by ParseDuration. The default value is 10s. |
| username | no | string | Basic Auth username to use on each request, if any. |
| password | no | string | Basic Auth password to use on each request, if any. |
| useHTTPS | no | bool | If true, the Collector connects to the server using HTTPS instead of plain HTTP. The default value is false. |
| httpHeaders | no | map of strings | A map of HTTP header names to values. Comma-separated multiple values for the same message header are supported. |
| skipVerify | no | bool | If useHTTPS is true and this option is also true, the exporter’s TLS cert is not verified. The default value is false. |
| caCertPath | no | string | Path to the CA cert that has signed the TLS cert, unnecessary if skipVerify is set to false. |
| clientCertPath | no | string | Path to the client TLS cert to use for TLS required connections. |
| clientKeyPath | no | string | Path to the client TLS key to use for TLS required connections. |
| host | yes | string | Host of the exporter. |
| port | yes | integer | Port of the exporter. |
| useServiceAccount | no | bool | Use pod service account to authenticate. The default value is false. |
| metricPath | no | string | Path to the metrics endpoint on the exporter server. The default value is /metrics. |
| sendAllMetrics | no | bool | Send all the metrics that come out of the Prometheus exporter without any filtering. This option has no effect when using the Prometheus exporter monitor directly, since there is no built-in filtering; it only applies when the exporter is embedded in other monitors. The default value is false. |
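As an illustration of these options, the following hedged sketch scrapes an exporter over HTTPS with Basic Auth; the host name, credentials, and the decision to skip certificate verification are placeholders, not recommended values:

receivers:
  smartagent/gitlab-webservice:
    type: gitlab
    host: gitlab.example.com    # placeholder exporter host
    port: 8083                  # default GitLab Webservice exporter port
    useHTTPS: true              # connect over HTTPS instead of plain HTTP
    skipVerify: true            # placeholder: skip TLS verification, for example with a self-signed cert
    username: metrics-user      # placeholder Basic Auth credentials
    password: metrics-password
    metricPath: /metrics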
Metrics
The metrics available for this integration are defined in the following metadata file in the Splunk OpenTelemetry Collector repository:
https://raw.githubusercontent.com/signalfx/splunk-otel-collector/main/internal/signalfx-agent/pkg/monitors/gitlab/metadata.yaml
Notes
- To learn more about the metric types available in Splunk Observability Cloud, see Metric types.
- In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Splunk Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.
- In MTS-based subscription plans, all metrics are custom.
- To add additional metrics, see how to configure extraMetrics in Add additional metrics, as shown in the sketch after this list.
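The following minimal sketch shows an extraMetrics entry on this receiver; the glob shown is only a placeholder and assumes the additional metric names you want start with gitlab_:

receivers:
  smartagent/gitlab:
    type: gitlab
    host: gitlab-webservice-default.default
    port: 8080
    extraMetrics:
      - "gitlab_*"   # placeholder glob: send additional non-default metrics matching this pattern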
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
- Submit a case in the Splunk Support Portal.
- Contact Splunk Support.
Available to prospective customers and free trial users
- Ask a question and get answers through community support at Splunk Answers.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups.