into command: Overview, syntax, and usage

The SPL2 into command appends to or replaces the contents of a dataset in the search or pipeline. The dataset must be a writeable dataset.

The into command is a terminating command, which is a command that does not return any search results and must be the last command in your search or pipeline.

How the SPL2 into command works

The into command works differently in different product contexts:

In searches

Let's start with this search:
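The following is a sketch of such a search, based on the command descriptions in the table below. The time modifiers and the byte threshold expression are assumptions:

| FROM main WHERE earliest=-5m@m AND latest=@m GROUP BY host
    SELECT sum(bytes) AS sum, host
    HAVING sum > 1024*1024
| into bytesUsage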

The following table describes what each command and clause is doing in the search:

Command or clause Description
FROM command Retrieves data from the main dataset.
WHERE clause Specifies to search only the last 5 minutes of data, starting at the beginning of the minute 5 minutes ago and stopping at the beginning of the current minute.
GROUP BY clause Categorizes the results by the host field.
SELECT clause Sums the values in the bytes field and places the result in a field called sum. In addition, returns the host field.
HAVING clause Filters the aggregated results to return only those results where the sum of the bytes is greater than 1 MB.
into command Appends the results to the bytesUsage dataset.

By default, the into command appends search results to a dataset that you have write access to. The mode argument is only valid when the dataset is a lookup kind of dataset. See Dataset kinds in the SPL2 Search Manual.
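For example, a search like the following sketch replaces the contents of a lookup dataset instead of appending to it. The lookup name bytesByHost is hypothetical:

| FROM main GROUP BY host SELECT sum(bytes) AS sum, host
| into mode=replace bytesByHost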

In pipelines

The into command sends data that was processed upstream in the Edge Processor or Ingest Processor pipeline to a destination dataset. For example, the data can be sent to an index or an Amazon S3 bucket.

Consider the following pipeline:

$pipeline = | from $source | eval index="main" | into $destination

The following table describes what each command is doing in the pipeline:

Command or clause Description
from command Selects a subset of the data received by the Edge Processor or Ingest Processor. This subset is determined by the partition of the pipeline, which you configure in the pipeline builder.
eval command Sets the value of the index field to main for all of the events in the selected subset of data.
into command Sends the processed data to the destination dataset specified by the pipeline settings, which you configure in the pipeline builder.
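You can include additional processing commands between from and into, as long as into remains the last command in the pipeline. In the following sketch, the where filter and the sourcetype value are assumptions:

$pipeline = | from $source | where sourcetype="syslog" | eval index="main" | into $destination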

Syntax

The SPL2 into command supports different syntaxes in different product contexts.

Syntax for searches

In searches, the into command enables you to specify whether the data is appended to or replaces the contents of the dataset.

The required syntax is in bold.

into [mode = (append | replace)] <dataset>
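For example, both of the following are valid uses of this syntax; the lookup name usersLookup is hypothetical:

| FROM main | into bytesUsage
| FROM main | into mode=replace usersLookup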

Syntax for pipelines

In pipelines, the into command is used to specify which destination dataset to append the data to.

The required syntax is in bold.

| into <$destination>

Required arguments

The required arguments are different in each product context.

For searches

dataset

Syntax: <dataset>

Description: This argument must be set to the name of a dataset that you have access to. This can be a dataset that you created or a dataset that you are authorized to use.

For pipelines

destination

Syntax: <$destination>

Description: This argument must be set to the $destination parameter. The $destination parameter refers to the destination dataset specified in the pipeline settings. See the Pipeline examples in the into command examples topic.

Optional arguments

mode

Syntax: mode=( append | replace )

Description: Specifies whether to append results to or replace results in the specified dataset. The mode argument applies only to lookup datasets in searches.

Default: append

Usage

The into command is a new command in SPL2. The into command is a terminating command, which is a command that does not return any search results and must be the last command in your search or pipeline. If you want search results to be returned, use the thru command instead. See thru command overview.

You can use the into command to pipe data into different kinds of datasets. For example, when you use this command in a search, you can pipe data into a lookup table dataset or an index dataset. As another example, when you use this command in an Edge Processor or Ingest Processor pipeline, you can pipe data into a destination dataset.
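For example, the following search sketch writes events into an index dataset. The index name test_index and the sourcetype value are assumptions:

| FROM main WHERE sourcetype="syslog" | into test_index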