Known issues in Splunk UBA
This version of Splunk UBA has the following known issues and workarounds.
If no issues are listed, none have been reported.
Date filed | Issue number | Description |
---|---|---|
2024-04-30 | UBA-18862 | Error Encountered When Cloning Splunk Datasource and Selecting Source Types. Workaround: Re-enter the password on the Connection page for the Splunk endpoint. |
2023-06-08 | UBA-17446 | After applying Ubuntu security patches, PostgreSQL is removed, leaving Splunk UBA unable to start. Workaround: Stop all UBA services, re-install the postgres package (replacing <uba ext packages> with your package folder; for example, for 5.0.5 it is uba-ext-pkgs-5.0.5), then start all UBA services, as in the sketch below.
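A minimal sketch of these steps, assuming the standard /opt/caspida/bin/Caspida control script and that the external packages were extracted under /home/caspida; the postgresql subdirectory name is an assumption, so adjust the path to your extract location.

```bash
# Stop all UBA services (run on the management node)
/opt/caspida/bin/Caspida stop-all

# Re-install the bundled PostgreSQL packages; replace <uba ext packages> with
# your package folder, for example uba-ext-pkgs-5.0.5. The directory layout
# below is an assumption.
sudo dpkg -i /home/caspida/<uba ext packages>/postgresql/*.deb

# Start all UBA services
/opt/caspida/bin/Caspida start-all
```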
|
2022-12-22 | UBA-16722 | Error in upgrade log, /bin/bash: which: line 1: syntax error: unexpected end of file |
2022-12-05 | UBA-16617 | Repeated Kafka warning message: "Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order". Workaround (see the sketch after these steps):
1) On the zookeeper node (typically node 2 in a multi-node deployment), find all leader-epoch-checkpoint files. You can also use a find command if locate is not available.
a) Copy the results into a script, adding ">" before each path, i.e.:
b) Make the script executable:
2) On node 1, run:
3) On the zookeeper node, run:
4) On node 1, run:
5) Check the logs to see whether the warning messages still show up on the zookeeper node:
6) If you see the following warning repeated:
a) Clear cleaner-offset-checkpoint on the zookeeper node by running:
b) Then on node 1, run:
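A minimal sketch of the commands behind these steps. The Kafka data and log paths, the Caspida control script location, and the script name truncate-epochs.sh are assumptions, and the warning text checked in step 6 is not reproduced here.

```bash
# 1) On the zookeeper node, list all leader-epoch-checkpoint files
locate leader-epoch-checkpoint
# or, if locate is not available (data path is an assumption):
find /var/vcap/store/kafka -name leader-epoch-checkpoint

# 1a) Copy the results into a script (hypothetical name: truncate-epochs.sh),
#     adding ">" before each path so that running it truncates every file, e.g.
#     > /var/vcap/store/kafka/<topic-partition>/leader-epoch-checkpoint
# 1b) Make the script executable
chmod +x truncate-epochs.sh

# 2) On node 1, stop all UBA services
/opt/caspida/bin/Caspida stop-all

# 3) On the zookeeper node, run the script
./truncate-epochs.sh

# 4) On node 1, start all UBA services
/opt/caspida/bin/Caspida start-all

# 5) On the zookeeper node, check the Kafka logs for the warning (log path is
#    an assumption)
grep -i "PartitionLeaderEpoch" /var/vcap/sys/log/kafka/server.log

# 6a) If the warning keeps repeating, clear the cleaner-offset-checkpoint file
#     (path is an assumption)
rm /var/vcap/store/kafka/cleaner-offset-checkpoint

# 6b) Then, on node 1, restart all UBA services again (assumption)
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
```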
|
2022-07-26 | UBA-15997 | Benign error messages on CaspidaCleanup: Relations do not exist, Kafka topic does not exist on ZK path |
2022-02-14 | UBA-15364 | Spark HistoryServer runs out of memory on large deployments with the error "java.lang.OutOfMemoryError: GC overhead limit exceeded". Workaround: Open the following file for editing on the Spark History Server: /var/vcap/packages/spark/conf/spark-env.sh
You can check the spark.history field in deployments.conf to find out which node runs the Spark History Server. Update the following setting to 3G (see the sketch below):
Afterwards, restart the Spark services:
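A minimal sketch of the change, assuming the History Server heap is controlled by SPARK_DAEMON_MEMORY in spark-env.sh; the restart subcommands of the Caspida control script are also assumptions.

```bash
# In /var/vcap/packages/spark/conf/spark-env.sh on the Spark History Server node
# (assumption: the History Server heap is set through SPARK_DAEMON_MEMORY)
export SPARK_DAEMON_MEMORY=3g

# Restart the Spark services afterwards (subcommand names are assumptions)
/opt/caspida/bin/Caspida stop-spark
/opt/caspida/bin/Caspida start-spark
```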
|
2021-08-30 | UBA-14755 | Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist. |
2020-04-07 | UBA-13804 | Kubernetes certificates expire after one year. Workaround: Run the following commands on the Splunk UBA master node (see the sketch below):
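A sketch only, on the assumption that the fix is to rebuild the containerization layer so that the certificates are regenerated; the exact Caspida subcommands documented for this issue may differ.

```bash
# On the Splunk UBA master node, rebuild containerization to regenerate the
# Kubernetes certificates (assumption: these Caspida subcommands apply here)
/opt/caspida/bin/Caspida stop-containers
/opt/caspida/bin/Caspida remove-containerization
/opt/caspida/bin/Caspida setup-containerization
/opt/caspida/bin/Caspida start-containers
```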
|
2019-10-07 | UBA-13227 | Backend anomaly and custom model names are displayed in Splunk UBA. Workaround: Click the reload button in the web browser to force a reload of the UI page. |
2019-08-29 | UBA-13020 | Anomalies migrated from test-mode to active-mode won't be pushed to ES |
2019-08-06 | UBA-12910 | Splunk Direct - Cloud Storage does not expose the src_ip field. Workaround: When ingesting Office 365 SharePoint/OneDrive logs through Splunk Direct - Cloud Storage, add an additional field mapping for src_ip in the final SPL so that it is mapped from ClientIP (| eval src_ip=ClientIP). Make sure to add src_ip to the final list of fields selected with the fields command. For example:
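An illustrative sketch only, not the original example; the leading search and the field list are placeholders.

```
... existing Splunk Direct - Cloud Storage search for Office 365 events ...
| eval src_ip=ClientIP
| fields _time, UserId, Operation, ObjectId, ClientIP, src_ip
```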
|
2017-04-05 | UBA-6341 | Audit events show up in the Splunk UBA UI with a 30-minute delay |