Use the decrypt command to decrypt data in the Ingest Processor solution

The Ingest Processor solution allows you to send encrypted data through your pipelines, and decrypt it before it reaches its destination. That way, you do not have to decrypt your data before processing it in Ingest Processor pipelines. To decrypt your data, apply the decrypt command to your pipelines.

The decrypt command is an SPL2 command that requires a private key, which must be stored in a lookup table. The decrypt command takes four required arguments: the field to decrypt, the name of the lookup table that stores your private key, the name of the lookup field used to match events to rows in that table, and the name of the field where the decrypted value is written.

The Ingest Processor itself does not encrypt data, so your data must already be encrypted before it enters the pipeline.
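Because the Ingest Processor does not encrypt, the sender is responsible for producing RSA-encrypted payloads with PKCS#1 v1.5 padding (see the prerequisites that follow). As a rough illustration of what that padding scheme produces on the sender side, the following Python sketch builds an RSAES-PKCS1-v1_5 encryption block as defined in RFC 8017. The message and key size are made up for the example, and a real sender would use a crypto library such as OpenSSL rather than hand-rolled code:

```python
import os

def pkcs1_v15_pad(message: bytes, k: int) -> bytes:
    # Build EM = 0x00 || 0x02 || PS || 0x00 || message, where k is the
    # RSA modulus length in bytes and PS is at least 8 nonzero random
    # bytes (RFC 8017, section 7.2.1). This padded block is what gets
    # run through the RSA public-key operation.
    if len(message) > k - 11:
        raise ValueError("message too long for this key size")
    ps = b""
    while len(ps) < k - len(message) - 3:
        byte = os.urandom(1)
        if byte != b"\x00":  # padding bytes must be nonzero
            ps += byte
    return b"\x00\x02" + ps + b"\x00" + message

# Hypothetical plaintext; k = 256 corresponds to a 2048-bit RSA key.
block = pkcs1_v15_pad(b"3F2504E0-ROUTER-SN", 256)
```

The random, nonzero padding is why the same plaintext encrypts to a different ciphertext each time, and why the minimum key size must exceed the message length by 11 bytes.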

Prerequisites

  • The data must already be encrypted using the RSA algorithm and PKCS#1 v1.5 padding.
  • The private key must be stored in a lookup table. One column in the lookup table must have the exact title private_key. If an invalid private key is used, the decrypt command returns a placeholder NIL string. For more information on using lookups for the Ingest Processor solution, see Enrich Data with Lookups using an Ingest Processor. See the following example of a lookup table CSV file:

private_key, device_id
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCO0wIiso9DBXCIR82prtAf+TnN1aKvZ7oC7rSpaJSIoAI2ijmJh/q+5fhn7Ku7ktBXvM5fw+UcknVBJJewz9MVb3OzvL2DFUydq7dpU+1hEWkNH6skSFVX, 3F2504E0
Note: RSA decryption is a resource-intensive operation. Depending on the amount of load being sent, you may observe lower throughput than in an equivalent pipeline without decryption. This behavior is expected due to the additional computational overhead introduced by RSA decryption.
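Before uploading a lookup file, you can sanity-check that it meets the column-title requirement above by parsing it with any CSV reader and confirming that a private_key column is present. A minimal Python sketch, using made-up placeholder key material:

```python
import csv
import io

# Made-up lookup contents mirroring the example above; only a column
# titled exactly "private_key" satisfies the decrypt command's requirement.
lookup_csv = """private_key,device_id
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcw...,3F2504E0
"""

rows = list(csv.DictReader(io.StringIO(lookup_csv)))
assert all("private_key" in row for row in rows), "missing private_key column"

# Index the keys by device for a quick spot check.
key_for_device = {row["device_id"]: row["private_key"] for row in rows}
```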

Use the decrypt command

To create a pipeline that decrypts your data, complete the following steps:

  1. Navigate to the Pipelines page, select New pipeline, and then select Ingest Processor pipeline.

  2. On the Get started page, select Blank pipeline and then Next.

  3. On the Define your pipeline's partition page, do the following:

    1. Select how you want to partition the incoming data that you send to your pipeline. You can partition by source type, source, and host.

    2. Enter the conditions for your partition, including the operator and the value. Your pipeline will receive and process the incoming data that meets these conditions.

    3. Select Next to confirm the pipeline partition.

  4. On the Add sample data page, do the following:

    1. Enter or upload sample data for generating previews that show how your pipeline processes data.

    2. Select Next to confirm the sample data that you want to use for your pipeline.

  5. Select the name of the destination that you want to send data to.

  6. (Optional) If you selected a Splunk platform S2S or Splunk platform HEC destination, you can configure index routing:
    1. Select one of the following options in the expanded destinations panel:
      Option: Default
      Description: The pipeline does not route events to a specific index. If the event metadata already specifies an index, then the event is sent to that index. Otherwise, the event is sent to the default index of the Splunk platform deployment.

      Option: Specify index for events with no index
      Description: The pipeline routes events to your specified index only if the event metadata does not already specify an index.

      Option: Specify index for all events
      Description: The pipeline routes all events to your specified index.
    2. If you selected Specify index for events with no index or Specify index for all events, then in the Index name field, select or enter the name of the index that you want to send your data to.
    Note: If you're sending data to a Splunk Cloud Platform deployment, be aware that the destination index is determined by a precedence order of configurations. See How does Ingest Processor know which index to send data to? for more information.
  7. Select Done to confirm the data destination.
  8. In the SPL2 editor, select the plus icon in the Actions menu and then select Decrypt field using lookup.

  9. In the menu, provide the name of the lookup table that your private key is stored in, the name of the specific lookup match field, the field to decrypt, and the output field where the decrypted value will be stored.

  10. Select Apply to add the decrypt command to your SPL2 pipeline statement.

  11. To save your pipeline, do the following:

    1. Select Save pipeline.

    2. In the Name field, enter a name for your pipeline.

    3. (Optional) In the Description field, enter a description for your pipeline.

    4. Select Save.

      The pipeline is now listed on the Pipelines page, and you can apply it to Ingest Processor as needed.

  12. To apply this pipeline to an Ingest Processor, do the following:
    1. Navigate to the Pipelines page.
    2. In the row that lists your pipeline, select the Actions icon and then select Apply.
    3. Select the pipeline that you want to apply, and then select Save.
    Note: You can only apply pipelines to Ingest Processors that are in the Healthy status.

    It can take a few minutes for the Ingest Processor service to finish applying your pipeline. During this time, all applied pipelines are in the Pending status. Once the operation is complete, the Pending status icon stops displaying beside the pipeline. Refresh your browser to confirm that the icon no longer displays.

Your applied pipeline can now decrypt the data that it receives according to the private key in the lookup that you entered.

Example: Use the decrypt command to decrypt data

Consider a scenario where the pipeline receives the following event with an encrypted serial_number field:
{"device_id": "3F2504E0", "device_type": "router", "serial_number": "U2FsdGVkX1+9K2pQ7c3gX0yH4mN5v6wR1aB8zLpDqFjEwXcVxYtZsGhIuO0P1r2sY"}
The private key to decrypt the encrypted field is stored in the cproc-decrypt.csv CSV lookup table. It could also be stored in a KV Store equivalent:
cproc-decrypt.csv (or KV Store equivalent)

private_key, device_id

MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCO0wIiso9DBXCIR82prtAf+TnN1aKvZ7oC7rSpaJSIoAI2ijmJh/q+5fhn7Ku7ktBXvM5fw+UcknVBJJewz9MVb3OzvL2DFUydq7dpU+1hEWkNH6skSFVX, 3F2504E0
See the following example code to decrypt your encrypted field for this scenario:
decrypt encrypted_payload='serial_number' keystore='cproc-decrypt.csv' key_config='device_id' decrypted_output_field='decrypted_field_output'
Where:
  • encrypted_payload is the encrypted data field to be decrypted

  • keystore is the lookup table name that contains the private key to decrypt the encrypted field

  • key_config is the name of the lookup field used to match events against rows in the lookup table (the private key itself is read from the private_key column)

  • decrypted_output_field is the name of the field where the decrypted value will be stored
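To make the argument mapping concrete, the following Python sketch simulates what the command above does with the example event: it fetches the private key from the keystore lookup by matching on the key_config field, decrypts the encrypted_payload field, and writes the result to decrypted_output_field. Toy textbook RSA numbers and a made-up serial number stand in for a real 2048-bit key with PKCS#1 v1.5 padding, and none of this reflects the product's actual implementation:

```python
# Toy RSA key: n = 61 * 53, and e * d = 1 (mod lcm(60, 52)), so
# pow(pow(m, e, n), d, n) == m for any m < n. Real keys are 2048-bit.
n, e, d = 3233, 17, 413

# Simulated event: each character is encrypted separately as c = m^e mod n.
# A real event would carry a single PKCS#1 v1.5 RSA ciphertext instead.
event = {
    "device_id": "3F2504E0",
    "serial_number": [pow(ord(ch), e, n) for ch in "SN-0042"],
}

# The keystore lookup, indexed by the key_config field (device_id here).
keystore = {"3F2504E0": d}

# decrypt encrypted_payload='serial_number' keystore='cproc-decrypt.csv'
#         key_config='device_id' decrypted_output_field='decrypted_field_output'
private_key = keystore[event["device_id"]]
event["decrypted_field_output"] = "".join(
    chr(pow(c, private_key, n)) for c in event["serial_number"]
)
```

Note that the decrypted value lands in a new field; the original encrypted field is left in place on the event.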