Centralized Log Management UI Details

General Configuration Tab

The following table describes the fields on this tab.

Source Name

Name of this source rule. This name must be unique. Appears in the list of defined source rules.

Required: Yes
Source Type

Specifies the event type of the log source file. This field is prepopulated when you start from a Splunk AppDynamics template or an existing source rule. If you are creating a new source rule from scratch, you can specify any value. The value identifies this specific type of log event and can be used when searching and filtering the collected log data. It shows up in the field list for the log data.

Required: Yes
Source File

The location and name of the log file to serve as a log source. The location must be on the same machine as the analytics-agent.

You can use wildcards, and you can specify whether to match files one level deep or all log files in the path's directory structure.

  • Example for multi-level matching:

    path: /var/log/**/*.log

    This matches both /var/log/apache2/logs/error.log and /var/log/cassandra/system.log.

  • Example for one-level matching:

    path: /var/log/*/*.log

    This searches for .log files one level deep in the /var/log directory (matches /var/log/cassandra/system.log but not /var/log/apache2/logs/error.log).

Note: For optimal performance, we recommend limiting the number of log files per log source to 6000.
Required: Yes, when the collection type is from the local file system.
Exclude Files

Exclude, or blocklist, files from the defined source rule(s). Input the relative path of the file(s) to exclude, using wildcards to exclude multiple files.

  • Example for a single file:

    Source File: /var/log/**/*.log
    Exclude Files: cassandra/system.log

  • Example for multiple files:

    Source File: /var/log/**/*.log
    Exclude Files: */system.log
Required: No
TCP Port

Specifies the port on which the analytics-agent collects log data from a network connection. This field is not present if the collection type is From local file system.

If no port number is provided, port 514 is used. Both the syslog utility and the analytics-agent need root access to use port 514, because binding to ports below 1024 requires root privileges.
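As an illustration (a hypothetical rsyslog configuration; the host name analytics-host is a placeholder), the syslog utility could forward all messages to the agent's TCP port with a rule such as:

    *.* @@analytics-host:514

Here @@ forwards over TCP; a single @ would send over UDP instead.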

Required: Yes, when the collection type is from a network connection.
Enable path extraction

This checkbox enables extraction of fields from the path name. Add a grok pattern in the text box.

The syntax for a grok pattern is %{SYNTAX:SEMANTIC}. For example, to extract AdminServer from /opt/apps/oracle/middleware/user_projects/domains/ouaf_domain1/servers/AdminServer/logs/AdminServer.log, enter the following grok pattern: servers/%{DATA:servername}/%{GREEDYDATA}, where %{DATA:servername} captures the server name and %{GREEDYDATA} matches the rest of the log path.

Required: No
Thread Count

Specify the number of threads that the Analytics Agent uses for log processing. Analyze the performance of your environment before setting the thread count, because increased thread counts might impact system resources.

Minimum Value: 5

Maximum Value: 25

Required: Yes, if the number of log files to be processed is more than five.
File Wait Time

Specify the time, in seconds, that the agent waits for updates to a log file. After this time limit, the agent begins monitoring a new log file. The default value is 30 seconds.

Required: No
Start collecting from

Indicate where to begin tailing (collecting the log records). Options are:

  • From the beginning of the log file
  • From the end of the log file
  • A specific time range (in hours). For example, if you set this to four hours, then when you enable the rule and analytics starts tailing the log data, only the last four hours of data are ingested. When tailing starts, the watermark state is maintained for that file. If the agent is stopped and restarted, it resumes tailing from where it left off for any existing file; for new files, it tails only the last four hours of data.
The default is to collect log records from the beginning of the file.
Override time zone

Use this to override the time zone for eventTimestamp in log events. If your log events don't have a time zone assigned and you are not overriding it, the host machine's time zone is assigned to the log events. You cannot override the time zone in the pickupTimestamp field.

If you override the time zone, you must provide a timestamp format. Time zones should conform to the Joda-Time available time zones, and the timestamp format should be created with the Joda-Time formatting system.

Required: No
Override timestamp format

You can override the format if needed. Time zone formats should conform to Joda-Time Available Time Zones, and the timestamp format should be created with the Joda-Time formatting system.
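As an illustration (the log line is hypothetical), a log entry beginning with 2023-04-18 10:32:45,123 corresponds to the Joda-Time format pattern:

    yyyy-MM-dd HH:mm:ss,SSS

If your entries also include a numeric zone offset such as +0530, appending Z to the pattern captures it.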

Required: Yes. The override timestamp format is needed to correctly extract the time from the log file. If you do not provide this field, eventTimestamp will equal pickupTimestamp even when the log line contains a timestamp.
Auto-Correct Duplicate Timestamps

Enable this option to preserve the order of the original log messages when the messages contain duplicate timestamps. A counter is added to preserve the order.

Required: No

Collect Gzip Files

Enable this to also find gzip files in the specified path.

Required: No

Field Extraction Tab

The following table describes the fields and actions on this tab. For a detailed procedure on using Auto or Manual Field Extraction, see Field Extraction for Source Rules.

Add Grok Pattern
Message Pattern

This is the grok pattern used to extract fields from the log message. A pattern may be prepopulated when you use a Splunk AppDynamics template or an existing source rule as your starting point. You can add or remove grok patterns as needed.
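For example (a hypothetical log line; the pattern names come from the standard grok pattern library), a message such as 2023-04-18 10:32:45,123 INFO Server started could be parsed with:

    %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:logLevel} %{GREEDYDATA:message}

This would extract timestamp, logLevel, and message as separate fields.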

Multiline Format

For log files that include log records spanning multiple lines (multiple line breaks), use this field to indicate how the individual records in the log file should be identified. You have two options:
  • startsWith: A simple prefix that matches the start of the multiline log record.

  • regex: A regular expression that matches the multiline log record.

The multiline format is not supported when you are collecting log data from a network connection.
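For example (hypothetical records that each begin with a date), either option could mark the start of a record:

    startsWith: 2023-
    regex: ^\d{4}-\d{2}-\d{2}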

Extract Key-Value Pairs
Field

Shows the field selected for key-value pair configuration.

Split

The delimiter used to separate the key from its value. In the example "key=value", the split delimiter is the equal sign (=). Multiple comma-separated values can be added.

Separator

The delimiter used to separate two key-value pairs. In the example "key1=value1;key2=value2", the separator is the semicolon (;). Multiple comma-separated values can be added.

Trim

A list of characters to remove from the start and/or end of the key or value before storing them. In the example "_ThreadID_", specifying the underscore (_) as the trim character results in "ThreadID". Multiple comma-separated values can be added.

Include

A list of key names to capture from the source. You must provide keys in the Include field; if it is left blank, no key-value pairs are collected.
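As a worked example (all values hypothetical): if a field contains _ThreadID_=12;level=INFO, then setting Split to =, Separator to ;, Trim to _, and Include to ThreadID,level extracts the pairs ThreadID=12 and level=INFO.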
Actions
Upload Sample File

Browse to a local file to upload a sample log file.

Preview

Use to refresh the preview grid to see the results of the specified field extraction pattern.
Auto Field Extraction
Definer Sample

Select a message from the preview grid that is representative of the fields that you want to extract from the log records. You can select only one definer sample per source rule.

Refiner Samples

Refines the regular expression to include values that were not included in the original definer sample.

Counter Sample

Specifies values to ignore. Refines the regular expression to exclude values that were included by the original definer or refiner samples.
Manual Field Extraction
Regular Expression

Add a regular expression to define the field you want to extract.

Field Type

Specify the type for the field.

Field Name

Automatically generated from the regular expression pattern. This name appears in the Fields list in the Analytics Search UI.
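As an illustrative sketch (the exact capture-group syntax accepted by the UI may differ), a named capture group can both define the extraction and supply the field name. For example, to extract the log level from a line such as 2023-04-18 10:32:45 INFO Server started:

    (?<logLevel>INFO|WARN|ERROR|DEBUG)

This would produce a field named logLevel with values such as INFO.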
Actions
Add Field

Use to add additional regular expressions for extracting more fields. You cannot add more than 20 fields.

Preview

  • All
  • Matching
  • Non-Matching
Use these buttons to filter the viewable results in the preview grid.
Upload Sample File

Upload a sample file from your local file system to use in the preview grid.

Field Management Tab

The following table describes the fields and actions on this tab.
Select Any Field to Customize

Select the field to add customizations. You can add multiple customizations to a single field.

Field Name

This column lists the fields that have customizations.

Customize

This column shows the specific customizations. For static fields, the display name is shown. Static fields cannot be further customized.

Mask Value

This customization option masks values in the collected data. Specify the starting and ending position in the data and the character to use as the masking value.
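For example (hypothetical values, and assuming 1-based positions): masking positions 1 through 5 of the value 123456789 with the character * would store *****6789.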
Replace Value

This customization option replaces the entire value of the field with a static string.

Rename

This customization option enables you to rename a field to a more recognizable display name.
Field Type

Use to change the data type of the field, for example, from String to Number. Available types are String, Boolean, and Number.

Note: After a source rule has been saved with a field of a specified data type, you cannot later change the data type of that field. Once the fields are indexed in the analytics database, specifying a new type causes a validation error.

Remove

This customization turns off data collection for the field. It can be reversed at a later time.

Add Static Field

This action allows you to add a static field to all the log events collected from this source. The field can then be used to search and filter the log data. For example, use this to add Tier, Node, and Application Name to the log data.