Known issues for Splunk SOAR (On-premises)

Release 6.2.2

Date filed | Issue number | Description
2025-03-05 | PSAAS-22290 | Server-side database connection terminations are not gracefully handled by add_to_searchindex
2025-01-14 | PSAAS-21386 | RHEL8 migration: after the SOAR 6.2.2 upgrade, the Event Settings page (/admin/event_settings/response) does not render
2024-11-20 | PSAAS-20760 | Restarting Phantom with telemetry off stops logs from being written to spawn.log
2024-11-06 | PSAAS-20434 | VPE: Utility block pin API does not support all pin colors
2024-11-01 | PSAAS-20358 | Reporting: "Events resolved" and "Closed events" logic mismatch
2024-10-28 | PSAAS-20310 | Deleting a role and recreating it with the same name causes issues in the playbook prompt block
Workaround:
On SOAR (On-premises), remove the duplicate entries for the same role name from the "role" table, and ensure the remaining entry is not turned off.
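The cleanup amounts to finding role names that occur more than once and keeping a single entry that is not turned off. A minimal sketch of that selection logic in plain Python, using hypothetical sample rows in place of the real "role" table (actual column names and contents will differ):

```python
from collections import Counter

# Hypothetical sample standing in for rows of the "role" table:
# (id, name, disabled) -- real column names may differ.
roles = [
    (1, "Analyst", False),
    (2, "Analyst", True),   # duplicate created by delete-and-recreate
    (3, "Observer", False),
]

# Find role names that occur more than once.
dupes = [name for name, n in Counter(name for _, name, _ in roles).items() if n > 1]

# For each duplicated name, keep one row that is not disabled; the rest
# are candidates for removal from the "role" table.
to_remove = []
for name in dupes:
    rows = [r for r in roles if r[1] == name]
    keep = next((r for r in rows if not r[2]), rows[0])
    to_remove.extend(r for r in rows if r is not keep)

print(dupes)      # ['Analyst']
print(to_remove)  # [(2, 'Analyst', True)]
```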
2024-10-21 | PSAAS-20142 | VPE: Discarding changes doesn't reset the playbook editor for an unconfigured action block
2024-10-04 | PSAAS-19942 | /opt/phantom/var/log/nginx/error.log is hard-coded in the configuration, leading to the error "No such file or directory"
2024-10-02 | PSAAS-19905 | VPE: Filter condition fails to process an empty list if it comes from an action block
2024-09-27 | PSAAS-19836 | Input playbooks: menu for providing inputs to test the playbook in the debugger is missing
Workaround:
None. The feature is missing; users cannot test input playbooks in the debugger.
2024-09-12 | PSAAS-19457 | phantom.get_notes() fails with "failed to retrieve note Error: That page contains no results" when the number of notes is a multiple of the page size
2024-09-06 | PSAAS-19321 | Non-root installs: do not allow Phantom services to run as root on non-root installs
2024-09-06 | PSAAS-19320 | warm-standby: "--standby-mode --convert-to-primary" results in "File exists" when the keystore is a mounted filesystem, and leaves the SOAR instance in an unusable state with the potential for data loss
2024-08-16 | PSAAS-19075 | warm-standby: the hot_standby parameter is still on after "--standby-mode --off", resulting in unexpected behavior
Workaround:
After turning off warm standby, restart PostgreSQL:

    phenv python setup_warm_standby.pyc --standby-mode --off
    phsvc restart postgresql
2024-08-13 | PSAAS-19036 | The About page shows "Splunk Version" and "Splunk Build", which are not accurate because Splunk no longer ships with SOAR
2024-08-08 | PSAAS-18987 | Splunk SOAR (On-premises) installer fails due to CentOS 8 mirror deprecation
Workaround:
  • If you are not building or upgrading a cluster, you can skip the glusterfs install step and continue the installation of Splunk SOAR.
    1. Rerun the install command for Splunk SOAR. Make sure you do not skip any prompts. Do not use the -y or --no-prompt command line arguments.
    2. The installer will prompt you to install glusterfs. You can answer no if you are not building or upgrading a clustered deployment.
  • If you are building or upgrading a cluster:
    1. Modify the install_common.py file
      1. On or around line 208, modify the base URL set for the GLUSTER_RPM_SOURCE_BASE_URL_EL8 variable to use vault instead of mirror.
                              GLUSTER_RPM_SOURCE_BASE_URL_EL8 = (
                                  "https://vault.centos.org/centos/8-stream/storage/x86_64/gluster-9/Packages/"
                              )
      2. Re-run the installer.
2024-08-05 | PSAAS-18888 | warm-standby: "--standby-mode --convert-to-primary" results in "File exists" when the keystore is a mounted filesystem, and leaves the SOAR instance in an unusable state with the potential for data loss
2024-07-25 | PSAAS-18798 | VPE: Missing data paths in prompt block
2024-07-03 | PSAAS-18317 | Deleting a playbook run, or removing from the database the asset, user, or app that created a container, may cascade into deleting that container and its associated data
Workaround:
Upgrade to Splunk SOAR 6.3.0 or higher to remove the possibility of unintended container loss by any cause. If you cannot upgrade to Splunk SOAR 6.3.0 or higher at this time, you can use SOAR's shell (phenv phantom_shell) to manually prepare for the deletion of playbook_runs; you will need a list of affected playbook_run IDs. For assets, users, or ingestion apps, deleting through the REST API is a "soft delete" and is generally safe, with one notable exception for apps, described below.
  • Playbook Runs
    phenv phantom_shell
    
    >>> ids = [<list>, <of>, <playbook_run>, <ids>]
    
    >>> Container.objects.filter(closing_rule_run_id__in=ids).update(closing_rule_run=None)
  • Apps

    Normally, apps are soft deleted. However, there is an edge case to be aware of: installing a previously deleted app for which all assets have been deleted or orphaned may delete containers and associated data originally created by the app, if and only if the app reinstallation process fails. This can be prevented by incrementing an app package's version in order to upgrade, instead of performing a delete and reinstall of the same app version.

2024-06-10 | PSAAS-17997 | Playbook Listing page tabs show an incorrect list, except the "All" tab
2024-05-14 | PSAAS-17715 | VPE: CF block resource warning is not removed upon reconfiguring
Workaround:
The warning is only cosmetic and does not impact the playbook run. To remove the warning, delete the utility block entirely and re-add it instead of reconfiguring it.
2024-03-13 | PSAAS-16695 | VPE: Action block using the Splunk app is marked unconfigured when optional parameters are not specified
2024-03-06 | PSAAS-16642 | VPE: Deleting conditions from a filter block changes the conditions for downstream blocks instead of deleting them
Workaround:
If you have already deleted multiple conditions in the filter block configuration panel:

If you have multiple condition labels on the connections downstream from the filter block, check to see if the labels match the conditions you specified in the filter block configuration panel.

  • If the conditions match: No further action is required.
  • If the conditions do not match: For all downstream connections, re-select the condition labels to match the conditions in the filter block configuration panel.
2024-02-22 | PSAAS-16477 | Podman does not currently work with redirected image URLs due to Docker Hub authentication token changes
Workaround:
Manually change the image: line in docker-compose.yaml to point to
docker.io/phantomsaas/automation_broker:<$SOAR_VERSION>.
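As a sketch, the edit can be scripted; the compose content below is a hypothetical, abridged example, and the version string must match your installed SOAR version:

```python
import re

# Hypothetical abridged docker-compose.yaml content; your file will differ.
compose = """\
services:
  automation_broker:
    image: phantomsaas/automation_broker:6.2.2
"""

soar_version = "6.2.2"  # replace with your SOAR version

# Rewrite the image reference to pull explicitly from docker.io,
# bypassing the redirected image URL that Podman cannot resolve.
patched = re.sub(
    r"image:\s*\S+",
    f"image: docker.io/phantomsaas/automation_broker:{soar_version}",
    compose,
)

print(patched)
```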
2024-01-30 | PSAAS-16206 | Global environment variables are incorrectly applied by the Automation Broker when the variable name is all lowercase letters
Workaround:
Name global environment variables using uppercase letters only (for example, API_TOKEN instead of api_token).
2023-08-25 | PSAAS-14609 | Automation Broker: broker status should be updated if the broker directory is no longer present
2023-04-26 | PSAAS-13255 | Deleting a container with 1000+ artifacts causes uWSGI to run out of memory
Workaround:
In SOAR 6.3.0, the deletion mechanism for containers in the UI was changed from a Django deletion to a raw deletion. This avoids running Django out of memory while preserving audit capability, thanks to a new PostgreSQL trigger.

In SOAR versions before 6.3.0, customers who run into an out-of-memory (OOM) error when deleting a container with 1000+ artifacts should delete the container with a raw delete using the delete_db_containers management command. For cloud customers, SOAR on-call will need to delete the container for them, with their permission.