Set up a load balancer for your Virtual Appliance to receive agent traffic.
Ensure that you complete the following tasks before starting the cutover:
- Update the Classic On-Premises DNS to point to the Virtual Appliance load balancer. Map each Classic On-Premises port to the corresponding Virtual Appliance port as follows:
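If you manage the Controller hostname in a standard DNS zone, the change can be sketched as follows. `controller.example.com` and `203.0.113.10` are placeholders, not values from this deployment; substitute your Classic On-Premises Controller hostname and the Virtual Appliance load balancer address.

```
; Example A record (placeholder values): repoint the existing Classic
; On-Premises Controller hostname at the Virtual Appliance load balancer.
controller.example.com.    300    IN    A    203.0.113.10
```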
- Restart the Synthetic Server on the Virtual Appliance.
- Shut down the Classic On-Premises Controller:
platform-admin.sh stop-controller-appserver
- Start the migration tool and wait until the process reaches saturation.
nohup python3 ./migration_tool.py migrate all &
This step transfers the remaining data received during the cutover from the Classic On-Premises environment to the Virtual Appliance. Once the transfer is complete, verify the migration status.
Note: If all services reach a saturated state except the Controller, perform the steps from Step 5.
- Verify that the following log entries exist; they indicate that the Controller MySQL data is saturated:
coordinator - controller_mysql_worker_0 - WARNING - No new binlog data to process, waiting...
coordinator - controller_mysql_worker_0 - INFO - Incremental backup returned None (no new data) so skipping incremental backup
coordinator - controller_mysql_worker_1 - WARNING - No SQL, SQL.gz files or MySQL Shell dumps found in /mnt/nfs_controller/ Waiting for 60 seconds...
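Because the migration tool above was started with `nohup`, its output is captured in `nohup.out` by default. The check for these saturation markers can be sketched as a small helper; the function name is ours for illustration and is not part of the migration tool.

```shell
# Hypothetical helper (not part of the migration tool): succeeds when the
# given log file contains the "No new binlog data" saturation marker.
mysql_saturated() {
  grep -q "No new binlog data to process" "$1"
}

# Example usage against the migration tool's nohup output:
# mysql_saturated nohup.out && echo "Controller MySQL workers look saturated"
```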
- Navigate to the Controller folder and run the following grep commands:
ls -larth | grep inc | wc -l
ls -larth | grep inc | grep restored | wc -l
If both grep commands return the same count, the data migration for all services is complete.
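The count comparison above can be wrapped in a small function; the function name and the directory argument are ours for illustration, and you would point it at the Controller folder.

```shell
# Sketch of the check above: compare the number of incremental backup entries
# in the given directory against the number already marked "restored".
incremental_restore_complete() {
  total=$(ls -larth "$1" | grep inc | wc -l)
  restored=$(ls -larth "$1" | grep inc | grep restored | wc -l)
  [ "$total" -eq "$restored" ]
}

# Example (placeholder path):
# incremental_restore_complete /path/to/controller && echo "data migration complete"
```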
- Stop the Migration Tool.
- Update the Resource Permissions for EUM and Synthetic Services.
Synthetic jobs might be missing in the Controller UI after data migration. To troubleshoot this issue, see Missing Synthetic Jobs in the Controller UI After Data Migration.
After you complete these steps, you can begin using the Virtual Appliance to monitor your applications with the data previously present in Classic On-Premises.