Start the Data Migration

When you run the migration tool, it backs up the data to the NFS server and restores it on your Virtual Appliance. In the first cycle, the tool transfers the existing MySQL, Elasticsearch, and Filestore data from all the selected services in full. In subsequent cycles, it detects only the new data and transfers it to the Virtual Appliance.

Enable or disable each service in the services section of config.yaml.

CODE
services:
  # EUM Filesystem Service
  eum_filestore:
    enabled: true                        # Enable/disable this service
  # Synth Filesystem Service
  synth_filestore:
    enabled: true                        # Enable/disable this service
  # Controller Database Service
  controller:
    enabled: true                        # Enable/disable this service
  # EUM Database Service
  eum:
    enabled: true                        # Enable/disable this service
  # Events Elasticsearch Service
  events:
    enabled: true                        # Enable/disable this service

When you start the data migration, the tool migrates data of the services that you have enabled.
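For example, to migrate only the Controller database, you might leave controller enabled and disable every other service in the services section shown above (an illustrative fragment; keep the rest of your config.yaml intact):

```yaml
services:
  eum_filestore:
    enabled: false
  synth_filestore:
    enabled: false
  controller:
    enabled: true     # only the Controller database is migrated
  eum:
    enabled: false
  events:
    enabled: false
```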

  1. Run the following command to verify that the VA Controller is active:
    CODE
    kubectl get pods -A
    Alternatively, you can log in to the Virtual Appliance by accessing https://<DNS-Name>/ or https://<Cluster-Node-IP>/.
  2. Start the data migration using the following command:
    CODE
    nohup python3 ./migration_tool.py migrate all
    Note: The migrate all command dynamically reads the service configuration from config.yaml and automatically migrates all the enabled services. This migration includes data from the following services:
    • EUM Filestore

    • Synthetic Filestore

    • Controller Database

    • EUM Database

    • Events Service

    Alternatively, you can start the migration for individual components:

    Component                        | Command
    ---------------------------------|----------------------------------------------------------------------------
    Events Service (Elasticsearch)   | nohup python3 ./migration_tool.py migrate events --datastore elasticsearch
    End User Monitoring (Filestore)  | nohup python3 ./migration_tool.py migrate eum --datastore fs
    Controller (MySQL)               | nohup python3 ./migration_tool.py migrate controller --datastore mysql
    End User Monitoring (MySQL)      | nohup python3 ./migration_tool.py migrate eum --datastore mysql
    Synthetic Monitoring (Filestore) | nohup python3 ./migration_tool.py migrate synth --datastore fs
    Wait for this command to completely restore the data on the Virtual Appliance. After the initial data backup and restore, the controller_full_backup_restore_completed.marker file is created in the {HOME}/migration-tool directory.
  3. Check the {HOME}/migration-tool/SERVICE-MIGRATION-STATUS.txt file to verify that the migration status of all the components is saturated.
    Sample Output:
    CODE
    Migration Status - Updated: 2025-12-09 13:24:55
    ====================================================
    EUM        | Status: saturated
    SYNTH      | Status: saturated         
    CONTROLLER | Status: saturated 
    EVENTS     | Status: saturated
    ====================================================
    Note: If the migration of any component fails, stop the migration tool and review the logs to identify the issue. After you address the problems, restart the migration tool. The tool resumes the migration process from the point where it was interrupted.
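The waits in steps 2 and 3 can be scripted. A minimal sketch, assuming the marker path and status-file format shown above (the helper names wait_for_marker and all_saturated are hypothetical):

```shell
#!/bin/sh
# Block until the given marker file exists (it signals that the initial
# backup and restore has completed).
wait_for_marker() {
  until [ -f "$1" ]; do
    sleep 60
  done
}

# Succeed only if every "Status:" line in the status file reports
# "saturated" (format as in the sample output above).
all_saturated() {
  ! grep 'Status:' "$1" | grep -qv 'saturated'
}
```

For example, `wait_for_marker "$HOME/migration-tool/controller_full_backup_restore_completed.marker"` blocks until the initial restore finishes, and `all_saturated "$HOME/migration-tool/SERVICE-MIGRATION-STATUS.txt"` can gate the cutover.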

After the data migration is saturated, stop the migration tool and update the configuration on the Virtual Appliance to view the data in the Virtual Appliance Controller.

Stop the Migration Tool

To stop the migration tool, complete the following steps:
  1. End the migration process that is running in the background.

    Run the following command to display the active migration process:

    CODE
    ps -aef | grep python
    CODE
    kill -2 <pid>   # pid is the process ID of the migration tool process
  2. List all the services within the Elasticsearch namespace (es).
    CODE
    kubectl get svc -n es
    CODE
    NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    appd-es-http            ClusterIP   10.0.0.1        <none>        9200/TCP   6d19h
    appd-es-internal-http   ClusterIP   10.0.0.2        <none>        9200/TCP   6d19h
    appd-es-node            ClusterIP   None             <none>        9200/TCP   6d19h
    appd-es-transport       ClusterIP   None             <none>        9300/TCP   6d19h
  3. Specify the cluster IP address of the appd-es-http service in this command.
    CODE
    curl http://10.0.0.1:9200/_cat/indices --insecure -u <username>:<password>
    Note: Obtain the username and password for this command from the config.yaml file. Go to the services.events section and copy the username and password.
    CODE
    username: va_es_user
    password: va_es_password

    Wait until all the indices are in the green and open state.

  4. Start the Controller in the Virtual Appliance.
    CODE
    kubectl scale deployment controller-deployment -n cisco-controller --replicas=1
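The stop-and-wait sequence above (steps 1 and 3) can also be scripted. A minimal sketch, with hypothetical helper names: stop_migration_tool combines the ps/kill -2 pair, and all_indices_ready parses _cat/indices output.

```shell
#!/bin/sh
# Find the migration tool process and send it SIGINT (kill -2), as in step 1.
stop_migration_tool() {
  pid=$(pgrep -f "$1" | head -n 1)
  if [ -n "$pid" ]; then
    kill -2 "$pid"
    echo "Sent SIGINT to PID $pid"
  else
    echo "No matching process found"
  fi
}

# Read `_cat/indices?h=health,status` output on stdin; succeed only when
# every index is green and open, as required in step 3.
all_indices_ready() {
  awk '$1 != "green" || $2 != "open" { bad = 1 } END { exit bad }'
}
```

For example, run `stop_migration_tool migration_tool.py`, then poll with `curl -s -u <username>:<password> http://10.0.0.1:9200/_cat/indices?h=health,status | all_indices_ready` until it succeeds (the `h=` query parameter restricts the `_cat` output to the listed columns).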

Update Configuration on Virtual Appliance

Splunk AppDynamics recommends that you update the migration tool configuration on the Virtual Appliance after the data migration is saturated. This approach minimizes data inconsistencies and helps ensure a smoother transition during the migration cutover.

However, if you want to verify that the initial data has been correctly transferred to the Virtual Appliance, you can update the configuration immediately after the initial backup and restore process finishes. Before proceeding, ensure that the controller_full_backup_restore_completed.marker file exists in {HOME}/migration-tool. This file confirms that the initial data backup and restore was successful.

  1. Stop the Migration Tool.
  2. Run this command to update the required configuration on the Virtual Appliance.
    CODE
    python3 ./migration_tool.py update configs
    This command restarts only the EUM and Events services in the Virtual Appliance. You can verify that the EUM and Events services are running by using the following commands:
    CODE
    kubectl get pods -n cisco-eum
    kubectl get pods -n cisco-events
  3. Run the following command to update the Controller access key:
    CODE
    python3 ./migration_tool.py --skip-checks --verbose encrypt credentials
  4. Restart the Controller in the Virtual Appliance and wait until the Controller is active.
    CODE
    kubectl rollout restart deployment controller-deployment -n cisco-controller
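The checks in steps 2 and 4 can be scripted as well. A sketch, assuming the namespaces named above; pods_ready is a hypothetical helper that parses `kubectl get pods --no-headers` output, where the third column is the pod STATUS.

```shell
#!/bin/sh
# Read `kubectl get pods --no-headers` output on stdin; succeed only when
# every pod reports STATUS "Running" (or "Completed" for finished jobs).
pods_ready() {
  awk '$3 != "Running" && $3 != "Completed" { bad = 1 } END { exit bad }'
}
```

For example, `kubectl rollout status deployment/controller-deployment -n cisco-controller` blocks until the restarted Controller rollout completes, and `kubectl get pods -n cisco-eum --no-headers | pods_ready` (and the same for `cisco-events`) confirms that the restarted services are up.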

After you update the migration tool configuration on the Virtual Appliance, log in to the Virtual Appliance Controller using the Classic Controller credentials. Then, verify that the Classic Controller data exists in the Virtual Appliance.