Configure the Migration Tool

The migration tool includes a configuration file named config.yaml that stores all necessary settings required for a smooth migration process. You must update this file with your specific environment details before running the migration tool.
  1. Navigate to the following directory:
    CODE
    /home/appduser/migration-tool/config
  2. Edit the config.yaml file.
    CODE
    vi config.yaml
  3. Update the authentication details in the hosts section.
    • For password-based authentication, update the Host IP address, username, and password for each host.

    • For SSH key-based authentication, specify the SSH private key file that corresponds to each host. The key files must reside on the Virtual Appliance node.

    Note:
    • Ensure that all the SSH keys exist under ~/.ssh on the Virtual Appliance node. The ~/.ssh/id_* files must be readable only by their owner (chmod 600).
    • Add, remove, or comment out nodes as required by your environment.
    In the classic_events configuration, list all the Events Service hosts as comma-separated values. Example:
    CODE
    host: [<Events_IP_1>,<Events_IP_2>,<Events_IP_3>]

    Sample Configuration file

    CODE
    # HOST CONFIGURATION (all systems in the environment)
    # ============================================================================
    # Add new hosts here as needed for additional services
    
    hosts:
      # --------------------------------------------------------------------------
      # Source systems (Classic AppDynamics)
      # --------------------------------------------------------------------------
      classic_controller:
        host: <classic_controller_host>        # Example: 10.0.0.1 or controller.example.com
        user: <classic_controller_user>        # Example: appduser
        password: <classic_controller_password> # Example: passwd123 (if using password auth)
        ssh_key: <classic_controller_ssh_key>  # Example: ~/.ssh/id_classic_controller (if using SSH key)
        home_dir: /home/appduser
    
      classic_events:
        host: [<classic_events_host>]          # Example: [10.0.20.1,10.0.20.2,10.0.20.3]
        user: <classic_events_user>            # Example: appduser
        password: <classic_events_password>    # Example: passwd123
        ssh_key: <classic_events_ssh_key>      # Example: ~/.ssh/id_classic_events
        home_dir: <home_directory>
    
      classic_eum:
        host: <classic_eum_host>               # Example: 10.0.20.4 or eum.example.com
        user: <classic_eum_user>               # Example: appduser
        password: <classic_eum_password>       # Example: passwd123
        ssh_key: <classic_eum_ssh_key>         # Example: ~/.ssh/id_classic_eum
        home_dir: /home/appduser
    
      classic_synth:
        host: <classic_synth_host>             # Example: 10.0.20.5 or synth.example.com
        user: <classic_synth_user>             # Example: appduser
        password: <classic_synth_password>     # Example: passwd123
        ssh_key: <classic_synth_ssh_key>       # Example: ~/.ssh/id_classic_synth
        home_dir: /home/appduser
    
      # --------------------------------------------------------------------------
      # Infrastructure systems
      # --------------------------------------------------------------------------
      nfs:
        host: <classic_nfs_host>               # Example: 10.0.20.6 or nfs.example.com
        user: <classic_nfs_user>               # Example: appduser
        password: <classic_nfs_password>       # Example: passwd123
        ssh_key: <classic_nfs_ssh_key>         # Example: ~/.ssh/id_nfs
        home_dir: /home/appduser
  4. In the va.nodes section, do not specify the hostname discovered on the Virtual Appliance hosts. Instead, run the following command on your Classic On-Premises host for each Virtual Appliance node, and add all the VA hosts in this section using the hostname that the command returns.
    CODE
    host <VA-host-IP>
    For example:
    CODE
    host 10.0.0.1
    Sample Output:
    CODE
    38.85.115.10.in-addr.arpa domain name pointer ip-10-115-85-38.us-west-2.compute.internal

    In this example, use ip-10-115-85-38.us-west-2.compute.internal as the host_name value.

    Sample configuration file:
    CODE
    hosts:
      # --------------------------------------------------------------------------
      # Target systems (Virtual Appliances)
      # --------------------------------------------------------------------------
      va:
        nodes:
          - host: <va_node_1_host>             # Example: 10.0.20.9 or va-node1.example.com
            user: <va_node_1_user>             # Example: appduser
            password: <va_node_1_password>     # Example: passwd123
            host_name: <va_node_1_hostname>    # Example: ip-10-115-85-38.us-west-2.compute.internal
            ssh_key: <va_node_1_ssh_key>       # Example: ~/.ssh/id_va_ip1
            home_dir: /home/appduser
    
    # ============================================================================
    # You can uncomment and add more nodes if needed
    # ============================================================================
    #      - host: <va_node_2_host>             # Example: 10.0.20.10 or va-node2.example.com
    #        user: <va_node_2_user>             # Example: appduser
    #        password: <va_node_2_password>     # Example: passwd123
    #        host_name: <va_node_2_hostname>
    #        ssh_key: <va_node_2_ssh_key>       # Example: ~/.ssh/id_va_ip2
    #        home_dir: /home/appduser
    #
    #      - host: <va_node_3_host>             # Example: 10.0.20.11 or va-node3.example.com
    #        user: <va_node_3_user>             # Example: appduser
    #        password: <va_node_3_password>     # Example: passwd123
    #        host_name: <va_node_3_hostname>
    #        ssh_key: <va_node_3_ssh_key>       # Example: ~/.ssh/id_va_ip3
    #        home_dir: /home/appduser
  5. Update the services section to specify the source and target locations of each data store:
    CODE
    # ============================================================================
    # SERVICE CONFIGURATION (add new services here)
    # ============================================================================
    # Each service should have its own section with clear documentation
    # Engineers: Add your service configuration in the appropriate section below
    # ============================================================================
    
    services:
      eum_filestore:
        classic_source: "{HOME}/AppDynamics/EUM/store/" # Example: "/home/appduser/AppDynamics/EUM/store/"

      synth_filestore:
        classic_source: "{HOME}/blobstore/"  # Example: "/home/appduser/blobstore/"

      controller:
        # Source database configuration
        source_database:
          host: localhost # Do not change; the tool uses localhost by default
          port: <classic_controller_mysql_port>  # Example: 3388
          username: <classic_controller_db_username> # Example: appduser
          password: <classic_controller_db_password> # Example: pass123
          socket_path: {HOME}/appdynamics/platform/controller/db/mysql.sock  # Example: /home/appduser/appdynamics/platform/controller/db/mysql.sock
          mysql_binary_path: {HOME}/appdynamics/platform/controller/db/bin/  # Example: /home/appduser/appdynamics/platform/controller/db/bin/

        # Target database configuration
        target_database:
          host: 0.0.0.0   # Do not change; the tool fetches this value from the MySQL cluster IP directly
          port: <va_mysql_port>         # Example: 3306
          username: <va_mysql_username> # Example: appduser
          password: <va_mysql_password> # Example: pass123

      eum_db:
        # Source database configuration
        source_database:
          port: <classic_eum_mysql_port>  # Example: 3388
          username: <classic_eum_db_username> # Example: appduser
          password: <classic_eum_db_password> # Example: pass123
          socket_path: {HOME}/AppDynamics/EUM/mysql/mysql.sock  # Example: /home/appduser/AppDynamics/EUM/mysql/mysql.sock
          mysql_binary_path: {HOME}/AppDynamics/EUM/mysql/bin/  # Example: /home/appduser/AppDynamics/EUM/mysql/bin/

        # Target database configuration
        target_database:
          host: 0.0.0.0   # Do not change; the tool fetches this value from the MySQL cluster IP directly
          port: <va_mysql_port>         # Example: 3306
          username: <va_mysql_username> # Example: appduser
          password: <va_mysql_password> # Example: pass123

      events:
        classic_es_user: <classic_events_es_user> # Example: appduser
        classic_es_password: <classic_events_es_password> # Example: pass123
        classic_es_host: localhost # Do not change
        classic_es_port: <classic_events_es_port> # Example: 9200 (default port)

        va_es_user: <username> # Do not change
        va_es_password: <password> # Do not change
        va_es_cluster_ip: 0.0.0.0 # Do not change; the tool fetches this value from the Elasticsearch cluster IP directly
        va_es_port: 9200 # Do not change

        classic_events_config_path: {HOME}/events-service/processor/conf/events-service-api-store.properties
        events_service_path: <classic_events_service_absolute_path> # Example: /home/appduser/appdynamics/platform/events-service
  6. Specify the absolute path in the logging section to save the log files.
    CODE
    # Logging Configuration
    logging:
      dir: <absolute_log_directory_path> # Example: /home/appduser/logs
  7. In the credential_encryption section, update the account_name and access_key values from the Classic Controller.
    Complete the following steps for each account to find the Controller access key:
    1. Log in to your Classic Splunk AppDynamics On-Premises Controller UI.
    2. Navigate to License > Account.
    3. In Access Key, click Show and copy the access key.
    The migration tool uses these values to automatically encrypt and update the credentials during migration.
    Note: Ensure that you add the details of all accounts under multi-tenancy. If you skip an account, you cannot log in to the Virtual Appliance Controller with that account's credentials after data migration.

    Sample Configuration

    CODE
    # ============================================================================
    # CREDENTIAL ENCRYPTION CONFIGURATION
    # ============================================================================
    # Automates the process of:
    # 1. Fetching encrypted store password from VA Controller
    # 2. Retrieving plaintext access key from Classic Controller
    # 3. Encrypting the key using SCS tool in VA Controller pod
    # 4. Updating the account table in VA Controller database
    # ============================================================================
    
    credential_encryption:
      enabled: true
    
      accounts:
        # Default accounts created by controller. Should not be removed.
        - account_name: "system"
      access_key: "<classic_controller_system_account_access_key>"  # If not modified in admin.jsp, use '-'
    
        - account_name: "root"
          access_key: "<classic_controller_system_account_access_key>"    # Same as system account
    
        - account_name: "<classic_controller_account_name>"      # Example: customer1
          access_key: "<classic_controller_access_key>"          # Example: 0cf47883-a241-4b05-a001-6d82cb963a56
    
      # Add additional accounts and their corresponding access keys as needed
      #  - account_name: "<classic_controller_account_name_2>"      # Example: customer2
      #    access_key: "<classic_controller_access_key_2>"          # Example: asdf7883-a441-4bd5-a001-6d82c2f5234
               
      # Optional configurations (modify if needed)
      namespace: cisco-controller                  # Controller pod namespace
      pod_label: app.kubernetes.io/name=controller # Label selector for controller pod
      #scs_keystore_password: "<encrypted_password>" # Optional; auto-fetched if not provided
    
      # Fixed paths for encryption tools (update only if AppDynamics paths differ)
      jre_bin_path: "/opt/appdynamics/platform/product/jre/17.0.15/bin"
      scs_tool_path: "/opt/appdynamics/platform/product/controller/tools/lib/scs-tool.jar"
      keystore_path: "/opt/appdynamics/platform/product/controller/.appd.scskeystore"
    
      # Update account table instead of global configuration cluster
      update_account_table: true
  8. (Optional) Update the migration tool properties to fine-tune its behavior as required by your environment.
    EUM Filestore
    Navigate to the services.eum_filestore.saturation_detection section and configure the following parameters:
    CODE
    # Saturation detection for filestore backup/restore cycles
    saturation_detection:
      convergence_threshold_seconds: 5
      min_cycles_for_saturation: 3

    # ETA calculation: Estimated transfer rate for progress estimation
    eta_calculation:
      estimated_transfer_rate_mbps: 10   # Estimated transfer rate in Mbps
    Synthetic Filestore
    Navigate to the services.synth_filestore.saturation_detection section and configure the following parameters:
    CODE
    # Saturation detection for filestore backup/restore cycles
    saturation_detection:
      convergence_threshold_seconds: 5
      min_cycles_for_saturation: 3

    # ETA calculation: Estimated transfer rate for progress estimation
    eta_calculation:
      estimated_transfer_rate_mbps: 10   # Estimated transfer rate in Mbps
    Controller MySQL
    Navigate to the services.controller section and configure the following parameters:
    CODE
    # Saturation detection for MySQL backup/restore cycles
    saturation_detection:
      convergence_threshold_seconds: 5   
      min_cycles_for_saturation: 3
    EUM MySQL
    Navigate to the services.eum_db section and configure the following parameters:
    CODE
    # Saturation detection for MySQL backup/restore cycles
    saturation_detection:
      convergence_threshold_seconds: 5   
      min_cycles_for_saturation: 3
    Events Elasticsearch
    Navigate to the services.events section and configure the following parameters:
    CODE
    backup_worker_sleep_seconds: 120      
    restore_worker_sleep_seconds: 10      
    monitor_worker_sleep_seconds: 180     
    generic_ssh_timeout_seconds: 60      
    snapshot_poll_interval_seconds: 30          
    snapshot_create_max_wait_seconds: 86400     
    
    cluster_health_timeout_seconds: 300         
    shard_recover_timeout_seconds: 600          
    poll_restore_status_timeout_seconds: 600    
    es_index_close_timeout_seconds: 600          
    
    # ======================================================================
    # Restore Performance Tuning (TB-scale optimization)
    # ======================================================================
    restore_performance:
      # Network bandwidth per node (increase for faster restore)
      max_bytes_per_sec: "400mb"
    
      # Concurrent file downloads per shard recovery
      max_concurrent_snapshot_file_downloads: 10
    
      # Total concurrent downloads per node (across all recoveries)
      max_concurrent_snapshot_file_downloads_per_node: 25
    
      # Concurrent shard recoveries per node
      node_concurrent_recoveries: 5
    
    # Saturation detection for Elasticsearch backup/restore cycles
    saturation_detection:
      convergence_threshold_seconds: 30   
      min_cycles_for_saturation: 3        
    
    # ETA calculation: Estimated transfer rate for progress estimation
    eta_calculation:
      estimated_backup_transfer_rate_mbps: 40   # Estimated transfer rate in Mbps during data backup
      estimated_restore_transfer_rate_mbps: 18  # Estimated transfer rate in Mbps during data restore
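    After completing the steps above, it can help to sanity-check the finished file before running the tool. The sketch below is illustrative only and not part of the migration tool; the check_config helper and the rules it applies are assumptions based on the samples in this section. It verifies the required top-level sections and flags hosts that have neither a password nor an SSH key.

    ```python
    # Pre-flight sketch for a parsed config.yaml (e.g. the dict from yaml.safe_load).
    # check_config and its rules are assumptions, not the tool's own validation.
    def check_config(config):
        problems = []
        for section in ("hosts", "services", "logging", "credential_encryption"):
            if section not in config:
                problems.append(f"missing top-level section: {section}")
        for name, entry in config.get("hosts", {}).items():
            # va uses a nested nodes list; the other hosts are flat mappings.
            nodes = entry.get("nodes", [entry])
            for node in nodes:
                if not (node.get("password") or node.get("ssh_key")):
                    problems.append(f"{name}: set either password or ssh_key")
        return problems

    sample = {
        "hosts": {
            "classic_controller": {"host": "10.0.0.1", "user": "appduser",
                                   "ssh_key": "~/.ssh/id_classic_controller"},
            "va": {"nodes": [{"host": "10.0.20.9", "user": "appduser",
                              "password": "passwd123"}]},
        },
        "services": {},
        "logging": {"dir": "/home/appduser/logs"},
        "credential_encryption": {"enabled": True},
    }
    print(check_config(sample))  # → []
    ```

    An empty list means the basic structure is in place; any remaining problems print one per entry.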

Fine-Tune the Migration Tool Properties

The migration tool applies default values for each component property in the config.yaml file. However, you can adjust these values based on the size of your data. This page explains the available parameters and how to configure them to optimize the migration tool's performance.

Saturation Detection

The migration tool uses the values in this section to determine whether the data migration from Classic On-Premises to Virtual Appliance has saturated. The saturation_detection section appears under each of the following component sections:

  • services.eum_filestore

  • services.synth_filestore

  • services.controller

  • services.eum_db

  • services.events

You can update the following properties to ensure accurate calculations for your data migration.
  • convergence_threshold_seconds: Defines the allowable time difference between two consecutive migration (backup + restore) cycles for the corresponding service.

    Increasing this value may cause the service to reach saturation more quickly, which can lead to potential data loss.

  • min_cycles_for_saturation: The number of consecutive comparisons required to confirm that the migration state has stabilized.

    For example, if set to 3, the tool identifies saturation only after three consecutive differences fall within the convergence_threshold_seconds.

    Increasing this value improves confidence that the data migration is saturated before the cutover.
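The two parameters combine as described above; the following is a minimal sketch of that convergence rule (the function is illustrative, not the tool's implementation):

```python
# Sketch of the saturation rule: saturated once min_cycles_for_saturation
# consecutive cycle-time differences fall within convergence_threshold_seconds.
def is_saturated(cycle_times, convergence_threshold_seconds=5,
                 min_cycles_for_saturation=3):
    consecutive = 0
    for previous, current in zip(cycle_times, cycle_times[1:]):
        if abs(current - previous) <= convergence_threshold_seconds:
            consecutive += 1
            if consecutive >= min_cycles_for_saturation:
                return True
        else:
            consecutive = 0
    return False

# Cycle times shrink as less new data remains, then stabilize.
print(is_saturated([300, 120, 61, 60, 58, 59]))  # → True
print(is_saturated([300, 200, 150, 120]))        # → False
```

In the first call, the last three differences (1, 2, and 1 seconds) all fall within the 5-second threshold, so the migration is considered saturated and ready for cutover.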

Events Service Elasticsearch Parameters

In the services.events section, you can update the following values to ensure the worker threads do not starve for system resources and improve system performance:
  • backup_worker_sleep_seconds: The duration the backup worker thread waits after completing one backup before starting the next.

  • restore_worker_sleep_seconds: The duration the restore worker thread waits after completing one restore before starting the next.

  • monitor_worker_sleep_seconds: The interval (in seconds) between monitor worker cycles to update migration progress and evaluate convergence.

  • generic_ssh_timeout_seconds: General-purpose timeout for individual Elasticsearch API calls, SSH commands, and NFS state operations.

  • snapshot_poll_interval_seconds: Polling interval for shard recovery progress checks and wait loops when fixing unassigned shards.

  • snapshot_create_max_wait_seconds: The maximum time to wait for snapshot creation.

  • cluster_health_timeout_seconds: The timeout for checking cluster health during a restore.

  • shard_recover_timeout_seconds: Maximum time shard recovery can remain stalled before a reroute is attempted.

  • poll_restore_status_timeout_seconds: The maximum amount of time permitted to monitor the recovery of all shards for a single snapshot. If shard recovery does not complete within this time frame, the restore operation will be marked as timed out, even if shards are still making progress.

  • es_index_close_timeout_seconds: Timeout for closing existing indices on the destination before restoring a snapshot over them.
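As a rough picture of how the sleep settings pace the workers, here is a minimal loop; the function and its arguments are placeholders, not the migration tool's code:

```python
import time

# Placeholder worker loop showing how *_worker_sleep_seconds paces a thread
# between cycles; illustrative only, not the tool's implementation.
def run_worker(do_cycle, sleep_seconds, max_cycles):
    completed = 0
    while completed < max_cycles:
        do_cycle()
        completed += 1
        time.sleep(sleep_seconds)  # yield system resources to the other workers
    return completed

# Demo with a no-op cycle and a tiny sleep.
print(run_worker(lambda: None, sleep_seconds=0.01, max_cycles=2))  # → 2
```

Larger sleep values reduce resource contention between the backup, restore, and monitor workers at the cost of slower overall progress.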

Restore Performance Tuning:
  • max_bytes_per_sec: Network bandwidth on each node. You can increase this value for a faster restore of the backed-up data.

  • max_concurrent_snapshot_file_downloads: Concurrent file downloads for each shard recovery.

  • max_concurrent_snapshot_file_downloads_per_node: Total concurrent downloads on each node across all recoveries.

  • node_concurrent_recoveries: Concurrent shard recoveries on each node.
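The four restore_performance values correspond to standard Elasticsearch recovery settings. The mapping below is an assumption for illustration (the migration tool's actual API calls may differ); it builds the request body you could send to the _cluster/settings endpoint:

```python
import json

# Assumed mapping from restore_performance to Elasticsearch cluster settings;
# illustrative only, the migration tool's own requests may differ.
restore_performance = {
    "max_bytes_per_sec": "400mb",
    "max_concurrent_snapshot_file_downloads": 10,
    "max_concurrent_snapshot_file_downloads_per_node": 25,
    "node_concurrent_recoveries": 5,
}

settings_body = {
    "transient": {
        "indices.recovery.max_bytes_per_sec":
            restore_performance["max_bytes_per_sec"],
        "indices.recovery.max_concurrent_snapshot_file_downloads":
            restore_performance["max_concurrent_snapshot_file_downloads"],
        "indices.recovery.max_concurrent_snapshot_file_downloads_per_node":
            restore_performance["max_concurrent_snapshot_file_downloads_per_node"],
        "cluster.routing.allocation.node_concurrent_recoveries":
            restore_performance["node_concurrent_recoveries"],
    }
}
print(json.dumps(settings_body, indent=2))  # body for PUT _cluster/settings
```

Transient settings reset when the cluster restarts, which suits a one-time migration; raise these values only as far as your network and disks can sustain.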