Events Service Data Migration Failure: Multiple Indices Detected for the Same Alias

Events Service data migration fails with the following symptoms:

  1. Check the {HOME}/migration-tool/SERVICE-MIGRATION-STATUS.txt file to verify the Events Service data migration status.
    CODE
    Migration Status - Updated: 2025-12-09 13:24:55
    ====================================================
    EUM        | Status: running
    SYNTH      | Status: running         
    CONTROLLER | Status: saturated 
    EVENTS     | Status: failed
    ====================================================
  2. If the Events Service data migration status is failed, the logs may contain the following error pattern:
    CODE
    config_manager - restore_worker - ERROR - Elasticsearch-migration-restore-worker: Restore submit failed: ApiError(500, 'illegal_state_exception', 'alias [<alias_name>] has more than one write index [<index_1>,<index_2>]')
    Example log:
    CODE
    config_manager - restore_worker - ERROR - Elasticsearch-migration-restore-worker: Restore submit failed: ApiError(500, 'illegal_state_exception', 'alias [ati-onprem-qse___mobiledevicemetrics___insert] has more than one write index [ati-onprem-qse___mobiledevicemetrics___2026-03-07_13-30-00,ati-onprem-qse___mobiledevicemetrics___2026-02-26_13-25-00]')

This error pattern appears when the migration tool detects more than one index for the same alias. The alias conflict occurs because multiple indices have write access to the same alias. As a result, the Events Service data migration fails.
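Conceptually, the conflict is visible in the response of the Elasticsearch GET /_alias API: an alias is in this broken state when more than one index claims is_write_index: true. A minimal Python sketch of that check (illustrative only; find_write_conflicts and the index names are hypothetical examples, not part of the migration tool):

```python
# Detect aliases that more than one index claims as its write index,
# given a parsed response from Elasticsearch's GET /_alias API.
def find_write_conflicts(alias_response):
    writers = {}  # alias name -> indices with is_write_index: true
    for index, body in alias_response.items():
        for alias, settings in body.get("aliases", {}).items():
            if settings.get("is_write_index"):
                writers.setdefault(alias, []).append(index)
    return {alias: idx for alias, idx in writers.items() if len(idx) > 1}

# Hypothetical response: two indices both marked as the write index
# for the same alias, mirroring the failure in the example log.
response = {
    "metrics___2026-03-07": {"aliases": {"metrics___insert": {"is_write_index": True}}},
    "metrics___2026-02-26": {"aliases": {"metrics___insert": {"is_write_index": True}}},
}
conflicts = find_write_conflicts(response)
print(conflicts)  # one conflicting alias with two write indices
```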

To resolve the conflict, set the index configuration (is_write_index) to false on the old index.

  1. Log in to the Virtual Appliance and run the following command:
    CODE
    appduser@appdva-test7-vm348:~$ kubectl get svc -n es
    Sample Output:
    CODE
    NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    appd-es-http            ClusterIP   10.0.0.1         <none>        9200/TCP   6d19h 
    appd-es-internal-http   ClusterIP   10.0.0.2         <none>        9200/TCP   6d19h
    appd-es-node            ClusterIP   None             <none>        9200/TCP   6d19h
    appd-es-transport       ClusterIP   None             <none>        9300/TCP   6d19h
  2. Run the following commands against the Elasticsearch cluster on the VA host.
    1. Check the current alias conflict on the VA.
      CODE
      curl -s "http://<Cluster IP of appd-es-http>:9200/_alias/<alias_name>?pretty" -u username:password
      Note: Obtain the username and password for this command from the services.events section of the config.yaml file:
      CODE
      username: va_es_user
      password: va_es_password
      Sample Output:
      JSON
      {
        "ati-onprem-qse___mobiledevicemetrics___2026-03-07_13-30-00" : {
          "aliases" : {
            "ati-onprem-qse___mobiledevicemetrics___insert" : {
            }
          }
        }
      }
    2. Set the index configuration (is_write_index) to false on the old index to remove its write access.
      CODE
      curl -s -X POST "http://<Cluster IP of appd-es-http>:9200/_aliases?pretty" -u username:password -H 'Content-Type: application/json' -d '{
          "actions": [
              {
                  "add": {
                      "index": "<old_index_name>",
                      "alias": "<alias",
                      "is_write_index": false
                  }
              }
          ]
      }'
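The request body of the command above can also be generated programmatically, which avoids quoting mistakes in the JSON payload. A minimal Python sketch (make_demote_action is a hypothetical helper; the index and alias names are taken from the example log):

```python
import json

def make_demote_action(old_index, alias):
    # Build the _aliases "add" action that re-adds the alias to the
    # old index with is_write_index: false, removing its write access.
    return {
        "actions": [
            {"add": {"index": old_index, "alias": alias, "is_write_index": False}}
        ]
    }

body = make_demote_action(
    "ati-onprem-qse___mobiledevicemetrics___2026-02-26_13-25-00",
    "ati-onprem-qse___mobiledevicemetrics___insert",
)
# Serialize for use as the -d payload of the curl command above.
print(json.dumps(body))
```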
  3. Verify whether the alias conflict is fixed.
    CODE
    curl -s "http://<Cluster IP of appd-es-http>:9200/_alias/<alias_name>?pretty" -u username:password
    Sample Output:
    JSON
    {
      "ati-onprem-qse___mobiledevicemetrics___2026-03-07_13-30-00" : {
        "aliases" : {
          "ati-onprem-qse___mobiledevicemetrics___insert" : {
            "is_write_index" : false
          }
        }
      }
    }
  4. Check the cluster health and ensure that the status is green or yellow.
    CODE
    curl -s "http://<Cluster IP of appd-es-http>:9200/_cluster/health?pretty" -u username:password
    Sample Output:
    JSON
    {
      "cluster_name" : "appd",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 3,
      "number_of_data_nodes" : 3,
      "active_primary_shards" : 887,
      "active_shards" : 891,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "unassigned_primary_shards" : 0,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 100.0
    }
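The health check in the step above reduces to inspecting the status field: green means all shards are allocated, yellow means only replicas are unassigned, and red means at least one primary shard is unassigned and the migration should not be restarted. A small illustrative helper (cluster_ok is hypothetical, not part of the migration tool):

```python
def cluster_ok(health):
    # Accept green or yellow; red indicates unassigned primary shards.
    return health.get("status") in ("green", "yellow")

print(cluster_ok({"status": "green"}))  # True for the sample output above
```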
  5. Restart the migration tool.