Troubleshoot Virtual Appliance Issues
Use the following troubleshooting steps if you encounter any of these issues during or after installing the Splunk AppDynamics On-Premises Virtual Appliance.
Update DNS Configuration for an Air-Gapped Environment
An air-gapped environment is a network setup without Internet connectivity. In such an environment, the configured DNS server may become unreachable. To fix this issue, configure a DNS server that the cluster can reach.
The following example values are used to explain how to update the DNS configuration:
- The IP addresses 10.0.0.1, 10.0.0.2, and 10.0.0.3 belong to the Virtual Appliance cluster.
- 10.0.0.5 is the IP address of the standalone Controller.
- standalone-controller is the DNS name of the standalone on-premises Controller.
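As an illustration only (not the documented procedure), one way to make the standalone Controller resolvable from each cluster node on an Ubuntu-based appliance is a local hosts entry. The values below come from the example above; the hosts-file approach itself is an assumption:

# Stopgap: map the standalone Controller's name locally on each cluster node
# (10.0.0.1, 10.0.0.2, and 10.0.0.3) so it resolves without external DNS.
echo "10.0.0.5 standalone-controller" | sudo tee -a /etc/hosts

# Verify that the name now resolves from this node:
getent hosts standalone-controller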
Update the Pod CIDR
If you need to change the default pod CIDR, you can update it to an available subnet range. Perform the following steps to update the pod CIDR:
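As a hedged sketch of what such an update can involve, assuming the appliance runs MicroK8s with the default Calico CNI (default pod CIDR 10.1.0.0/16); the file paths and the replacement range 10.2.0.0/16 are illustrative, not documented values:

# Update the cluster CIDR that kube-proxy uses:
sudo sed -i 's|10.1.0.0/16|10.2.0.0/16|g' /var/snap/microk8s/current/args/kube-proxy

# Update the Calico pool in the CNI manifest to the same range:
sudo sed -i 's|10.1.0.0/16|10.2.0.0/16|g' /var/snap/microk8s/current/args/cni-network/cni.yaml

# Restart MicroK8s so the new CIDR takes effect:
microk8s stop && microk8s start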
Error Appears for appdctl show boot
When you run the appdctl show boot command, the following error appears if any background processes are pending:
Error: Get "https://127.0.0.1/boot": Socket /var/run/appd-os.sock not found. Bootstrapping maybe in progress
Please check appd-os service status with following command:
systemctl status appd-os
Run the command again after a few minutes.
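In practice, you can check the bootstrap service named in the error and then retry; the wait duration below is illustrative:

# Check the bootstrap service that the error message points to:
systemctl status appd-os

# Wait for background bootstrapping to finish, then retry:
sleep 300
appdctl show boot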
Insufficient Permissions to Access MicroK8s
This error can appear if the terminal session was inactive between installation steps. If you encounter it, log out of the terminal and log in again.
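If re-logging in does not clear the error, standard MicroK8s guidance (an assumption here, not appliance-specific documentation) is to confirm that your user belongs to the microk8s group:

# Add the current user to the microk8s group; membership takes effect
# only in a new login session.
sudo usermod -a -G microk8s "$USER"

# Start a new shell with the updated group, or log out and back in, then verify:
newgrp microk8s
microk8s status --wait-ready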
Restore the MySQL Service
If a virtual machine in the cluster restarts, the MySQL service does not start automatically. To start the MySQL service, complete the following steps:
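As a rough sketch only (the pod and namespace names below are assumptions, not documented values), you can locate how MySQL runs on the appliance and restart it, mirroring the pod-restart pattern used elsewhere in this section:

# If MySQL runs as a Kubernetes workload, find and delete its pod so that
# its controller recreates it; confirm the real names first:
kubectl get pods --all-namespaces | grep -i mysql
kubectl delete pod mysql-0 -n cisco-mysql

# If MySQL runs as a host service instead, start it through systemd:
sudo systemctl start mysql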
EUM Health is Failing After Multiple Retries
Run the following commands to restart the Events and EUM pods:
kubectl delete pod events-ss-0 -n cisco-events
kubectl delete pod eum-ss-0 -n cisco-eum
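After you delete the pods, Kubernetes recreates them automatically. You can confirm that the new pods reach the Running state:

kubectl get pods -n cisco-events
kubectl get pods -n cisco-eum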
IOException Error Occurs in the Controller UI
In the Controller UI, the following IOException error might occur:
IOException while calling 'https://pi.appdynamics.com/pi-rca/alarms/modelSensitivityType/getAll?accountId=2&controllerId=onprem&startRecordNo=0&appId=7&recordCount=1'
To work around this issue, list the pods to find the Controller pod name, and then delete that pod so that Kubernetes recreates it:
kubectl get pods -n cisco-controller
kubectl delete pod <Controller-Pod-Name> -n cisco-controller
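For example, if the first command lists a Controller pod named controller-ss-0 (a hypothetical name), the second command becomes:

# Kubernetes recreates the deleted Controller pod automatically.
kubectl delete pod controller-ss-0 -n cisco-controller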
Issues After Restarting Virtual Appliance Services in Hybrid Deployment
You must regenerate the hybrid configuration file and reconfigure the Controller properties in the Kubernetes CLI. See the following sections: