Troubleshoot the Nascent sidecar

Track the health status of the Nascent sidecar to identify potential etcd issues in the search head cluster.

Track the health status of the Nascent sidecar

To track the health status of the Nascent sidecar on a search head, take the following steps:
  • Check the following log files located in the $SPLUNK_HOME/var/log/splunk directory:
    • sup-pkg-nascent.log: System logs for the Nascent sidecar
    • sup-pkg-nascent-stdout.log: System logs for the Nascent sidecar
    • etcd.log: The log for etcd
    • splunkd.log: The primary log file for Splunk Enterprise. It contains system logs for the Nascent sidecar running on an etcd proxy node.

    The following example log message contains the MonitorEtcdHealth: Cluster status ok string, which indicates that the Nascent sidecar is healthy on the server1.test.com node:
    {"level":"INFO","time":"2025-11-04T00:31:03.139Z","location":"health/healthcheck.go:185","message":"MonitorEtcdHealth: Cluster status ok","service":"nascent","hostname":"server1.test.com","healthyCount":5,"total":5,"thisNodeFlavor":"full"}
  • To check the health of Nascent on all search heads, run the following query with a time range set to the last 2 minutes. If any search heads are missing from the results, there might be issues with etcd on those nodes. You can also check the log files on those nodes directly from the command line, as shown after this list.
    index=_internal source="/opt/splunk/var/log/splunk/sup-pkg-nascent*" "MonitorEtcdHealth: Cluster status ok" | stats count by host
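
You can inspect the Nascent logs on a node directly if splunkd is not running there or the _internal index is not searchable. The following is a minimal command-line sketch, not an official procedure, and assumes that $SPLUNK_HOME is set in your shell:

    # Show the most recent Nascent health-check messages on this node
    grep "MonitorEtcdHealth" $SPLUNK_HOME/var/log/splunk/sup-pkg-nascent.log | tail -n 5

A healthy node logs the MonitorEtcdHealth: Cluster status ok message, as shown in the example above.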

Troubleshoot etcd issues with etcdctl

With the etcdctl command-line interface (CLI) tool, you can interact with an etcd server and cluster. Using this tool, you can communicate with the etcd API to manage and inspect the cluster state.

To learn how to use etcdctl, see https://etcd.io/docs/v3.4/dev-guide/interacting_v3/ on the etcd website.
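
For example, you can check cluster membership and endpoint health directly from a search head. The following is a minimal sketch, assuming that etcdctl is available on your PATH and that etcd listens on the default client port 2379 on the local node; the actual endpoint address, TLS flags, and binary location depend on your deployment:

    # Make sure the v3 API is used (the default in etcd 3.4 and later)
    export ETCDCTL_API=3

    # List the members of the etcd cluster
    etcdctl --endpoints=https://127.0.0.1:2379 member list --write-out=table

    # Check whether the local endpoint is healthy
    etcdctl --endpoints=https://127.0.0.1:2379 endpoint health

    # If the cluster requires TLS client authentication, add the --cacert,
    # --cert, and --key flags with the certificate paths used by your deployment.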

Some scenarios related to the Nascent sidecar require specific troubleshooting steps.

Troubleshoot unhealthy Nascent on some nodes

Symptoms of this issue can include the following:
  • The postgres process cannot connect to etcd on some search heads.
  • The etcd service fails to start properly.

To troubleshoot this issue, take the following steps:
  1. Run the following search with the time range set to the last 2 minutes:
    index=_internal source="/opt/splunk/var/log/splunk/sup-pkg-nascent*" "MonitorEtcdHealth: Cluster status ok" | stats count by host
    If all nodes are listed, it means Nascent is healthy on each node.
  2. If not all nodes are listed, identify the missing nodes and investigate the possible root causes:

    Possible root cause: splunkd stopped on that node.
    Solution: Restart splunkd.

    Possible root cause: splunkd is running, but Nascent stopped on that node.
    Solution: Check the supervisor.log file to investigate why the Nascent sidecar did not start. See Troubleshoot with log files.

    Possible root cause: The node_flavor.json file with flavor assignments is incorrect on the node.
    Solution:
    1. Select a node where Nascent is healthy, for example, search head 6 (sh6).

    2. On the selected node, run the following search to retrieve the latest information about the node flavor assignments:
      index=_internal source="/opt/splunk/var/log/splunk/sup-pkg-nascent*" host=sh6 "SaveAssignmentsToFile"
      Example result:
      {
         assignments: [{"name":"etcd","data":{"full":["sh1.us-west-2.compute.internal","sh4.us-west-2.compute.internal","sh7.us-west-2.compute.internal","sh3.us-west-2.compute.internal","sh6.us-west-2.compute.internal"],"proxy":["sh2.us-west-2.compute.internal","sh5.us-west-2.compute.internal"]}}]
         hostname: sh6
         level: INFO
         location: nodeflavor/nodeflavor.go:198
         message: SaveAssignmentsToFile completed.
         service: nascent
         time: 2025-10-30T17:23:58.719Z
      }
      
    3. Log in to the node where Nascent is unhealthy.
    4. Check whether the node_flavor.json file in the $SPLUNK_HOME/var/run/nascent folder includes the same assignments as on the node with healthy Nascent. To compare the files from the command line, see the sketch after these steps.

    5. If the assignments differ, take the following steps:

      1. Copy the assignments from the node with healthy Nascent, in this example sh6.
        [{"name":"etcd","data":{"full":["sh1.us-west-2.compute.internal","sh4.us-west-2.compute.internal","sh7.us-west-2.compute.internal","sh3.us-west-2.compute.internal","sh6.us-west-2.compute.internal"],"proxy":["sh2.us-west-2.compute.internal","sh5.us-west-2.compute.internal"]}}]
      2. Edit the node_flavor.json file on the node with unhealthy Nascent and replace its content with the copied assignments.

    6. Restart splunkd.
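
    To compare the node_flavor.json files in step 4, you can copy the file from the healthy node and diff it against the local copy. This is a minimal sketch, assuming a default installation under /opt/splunk on the remote node, $SPLUNK_HOME set locally, and a hypothetical temporary path of /tmp/node_flavor_sh6.json:

      # Copy node_flavor.json from the healthy node (sh6 in this example)
      scp sh6:/opt/splunk/var/run/nascent/node_flavor.json /tmp/node_flavor_sh6.json

      # Compare it with the file on the unhealthy node; no output means the files match
      diff /tmp/node_flavor_sh6.json $SPLUNK_HOME/var/run/nascent/node_flavor.json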

    Possible root cause: The etcd configuration on that node is corrupted. This might be the case if the node is assigned the full flavor.

    Solution: Take the following mitigation steps:

    1. Remove the Nascent runtime files located in the $SPLUNK_HOME/var/run/nascent folder (see the sketch after these steps):
      • etcd_data directory

      • node_flavor.json file with node flavor assignments

    2. Remove the search head with unhealthy Nascent from the search head cluster.

    3. Wait until Nascent removes the search head from the etcd list of search heads assigned the full flavor.

    4. Add the search head back to the search head cluster.
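
    The following is a minimal command-line sketch of steps 1, 2, and 4, assuming $SPLUNK_HOME is set. The exact syntax for removing and adding a cluster member depends on your deployment, so check the search head clustering documentation before you run these commands:

      # Step 1: remove the Nascent runtime files
      rm -rf $SPLUNK_HOME/var/run/nascent/etcd_data
      rm -f $SPLUNK_HOME/var/run/nascent/node_flavor.json

      # Step 2: remove this search head from the search head cluster
      $SPLUNK_HOME/bin/splunk remove shcluster-member

      # Step 4: after Nascent drops the node from the full-flavor list, add it back.
      # Run on the node being added; the URI below is an example of an existing
      # member's management URI.
      $SPLUNK_HOME/bin/splunk add shcluster-member -current_member_uri https://sh1.example.com:8089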