Common Issues and Resolution

Why can’t I host ADRUM JavaScript agent files inside the OVA EUM pod?

In the AppDynamics On-Prem Virtual Appliance (OVA) Kubernetes deployment, the End User Monitoring (EUM) pod does not support hosting ADRUM JavaScript agent files internally, as is possible in classic on-premises deployments. The EUM pod contains only a "License" folder and lacks a "wwwroot" directory or any built-in static file server to serve these files. As a result, if you place ADRUM files inside the EUM pod, requests for them at the expected URLs fail with 404 or unreachable errors.
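
To confirm this behavior in your own cluster, a quick kubectl check is sketched below; the namespace, label, and pod name are assumptions and vary by deployment.

  # Assumed namespace; adjust to your deployment.
  kubectl -n appdynamics get pods | grep -i eum
  # Inspect the pod's filesystem: there is no wwwroot directory or bundled
  # static file server that could host ADRUM files.
  kubectl -n appdynamics exec <eum-pod-name> -- ls -la /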

Guidance:

  • Download ADRUM JavaScript Files

    • Log in to the AppDynamics Controller GUI.
    • Navigate to User Experience > Browser Application.
    • Select the option to host JavaScript files locally to download the full ADRUM JavaScript package.

  • Set Up Internal Web Server

    • Deploy a web server (e.g., Apache, Nginx, IIS) inside your internal network; a minimal hosting sketch follows this list.
    • Upload the downloaded ADRUM JavaScript files to this server.

  • Configure EUM Settings

    • In the Controller GUI, update the EUM configuration to point to the internal web server URL hosting the ADRUM files.

  • Verify Access and Functionality

    • Confirm internal users can access the ADRUM files via the internal URL.
    • Test that browser real user monitoring is functioning correctly.
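
A minimal hosting sketch is shown below, assuming an Nginx server on Linux with its default document root; the package and file names are examples and may differ from what the Controller provides.

  # Create a directory for the ADRUM files under the web server's document root.
  sudo mkdir -p /usr/share/nginx/html/adrum
  # Extract the package downloaded from the Controller (file name is an example).
  sudo unzip adrum-js-agent.zip -d /usr/share/nginx/html/adrum
  sudo systemctl reload nginx
  # Verify the main agent file is served (file name may differ in your package).
  curl -sI http://<internal-web-server>/adrum/adrum.js | head -n 1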

How can I enable internal users to load ADRUM JavaScript agent files for browser monitoring?

Download the ADRUM JavaScript files from the Controller GUI under User Experience > Browser Application with the option to host files locally. Then, host these files on a separate internal web server accessible to internal users (for example, Apache, Nginx, IIS). Configure the EUM settings to point to this internal server URL for ADRUM files. This workaround bypasses the OVA limitation.

What causes beacon sending failures in EUM, and how do I fix them?

Incorrect beacon URLs in the ADRUM agent configuration cause beacon sending failures, which means no end-user experience data is collected.

Configuration Guidance:

  • Verify ADRUM Files Hosting

    • Ensure ADRUM JavaScript files are hosted on an accessible internal web server.

  • Check Beacon URL Configuration

    • Review the ADRUM agent configuration for beacon URLs:

      • config.beaconUrlHttp

      • config.beaconUrlHttps

    • Confirm these URLs point to the correct EUM server hostname.

  • Update Beacon URLs

    • Replace incorrect URLs (for example, https://appdynamics.pra-rakevet.co.il) with the valid internal hostname (for example, https://rkv-appdynamics1); see the sketch after this list.

  • Save Configuration and Retest

    • Save the updated configuration.

    • Test beacon sending function to confirm beacons are received.

  • Troubleshoot Further if Needed

    • Capture browser HAR files and console logs if issues persist.

    • Verify network connectivity and DNS resolution for the EUM server hostname.
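
A minimal sketch of the check-and-update step, assuming the beacon URLs are set in a JavaScript configuration file of the instrumented web application (the path and the file name adrum-config.js are hypothetical; the hostnames are the examples used above).

  # Locate where the beacon URLs are defined.
  grep -rn "beaconUrlHttp" /path/to/webapp
  # Replace the incorrect hostname with the valid internal EUM hostname.
  sed -i 's|https://appdynamics\.pra-rakevet\.co\.il|https://rkv-appdynamics1|g' \
      /path/to/webapp/adrum-config.js
  # Re-test in a browser and confirm beacons are received by the EUM server.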

How do I verify and troubleshoot EUM beacon URL issues?

  • Confirm EUM collector accessibility via browser or curl commands.
  • Check DNS resolution on client machines.
  • Collect browser HAR files and console logs if beacon sending fails.
  • Review EUM pod logs for beacon processing errors.
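
These checks can be run from the command line, as in the sketch below; the hostname is the example used above, and the namespace and label selector for the EUM pod are assumptions.

  # Confirm the EUM collector endpoint responds (port and path depend on your ingress).
  curl -skI https://rkv-appdynamics1
  # Check DNS resolution from a client machine.
  nslookup rkv-appdynamics1
  # Review EUM pod logs for beacon processing errors.
  kubectl -n appdynamics logs -l app=eum --tail=200 | grep -iE "beacon|error"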

How do I configure JVM options for the OVA Controller since there is no domain.xml file?

In OVA deployments, the traditional domain.xml file used for JVM option configuration is absent. JVM options must be configured via Helm chart modifications.

Configuration Steps:

Understand Configuration Change Location

  • In newer AppDynamics versions that use the Jetty application server, JVM options are not set in domain.xml.

Modify JVM Options via Helm Chart (Preferred for OVA)

  • Back up the Helm chart file: ~/appd-charts/charts/controller/templates/controller-configmap.yml.
  • Edit the file to add the JVM option:
    -Dmaximum.bts.per.application=NEW_LIMIT 
    under the JVM options list.

Start the Controller using the AppDynamics CLI:

appdcli start appd <profile>

Verify the Change

  • Confirm the new Business Transaction limit is applied after restart.
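
A condensed sketch of this procedure is shown below; the placeholder NEW_LIMIT, the profile name medium, and the in-cluster ConfigMap name are assumptions to adjust for your deployment.

  # Back up the chart template before editing (path from this guide).
  cp ~/appd-charts/charts/controller/templates/controller-configmap.yml \
     ~/appd-charts/charts/controller/templates/controller-configmap.yml.bak
  # Add the option under the existing JVM options list in the template:
  #   -Dmaximum.bts.per.application=NEW_LIMIT
  # Restart the Controller with the AppDynamics CLI.
  appdcli stop appd
  appdcli start appd medium
  # Confirm the option appears in the rendered configuration (ConfigMap name and
  # namespace are assumptions).
  kubectl -n appdynamics get configmap controller-config -o yaml | grep maximum.bts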

What are the key steps for certificate and cluster setup in OVA deployments?

Proper hostname, IP synchronization, and certificate installation are essential for cluster node communication and secure operation in OVA deployments.

Steps:

  • Prepare DNS and Certificate

    • Create a DNS alias resolving to all cluster node IPs.
    • Procure a certificate with the common hostname in CN or SAN including all FQDNs of cluster nodes.
  • Update Hostname and IP on Nodes

    • Stop the first VM node.
    • Update hostname and IP via vSphere client.
    • Power on the VM to sync details.
  • Copy Certificate and Key

    • Copy private key and signed certificate to /var/appd/config on the first node as ingress.key and ingress.crt in PEM format with correct permissions.
  • Modify Configuration File

    • Edit /var/appd/config/globals.yaml.gotmpl:
      • Update dnsDomain with the common domain name.
      • Update dnsNames with cluster node domain names.
      • Comment out internal IP lines if necessary.
      • Set defaultCert: false under ingress.
  • Restart Services

    • Stop services:
      appdcli stop aiops
      appdcli stop appd
      appdcli stop operators
      
    • Start services:
      appdcli start appd medium
      appdcli start aiops medium
      
  • Verify

    • Confirm services pick up the custom certificate and function correctly.
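
A condensed sketch of the certificate and restart steps is shown below, assuming example source file names for the key and certificate, commonly used restrictive file permissions, and the medium profile used in the commands above.

  # Copy the key and certificate into place (source file names are examples).
  sudo cp my-cluster.key /var/appd/config/ingress.key
  sudo cp my-cluster.crt /var/appd/config/ingress.crt
  sudo chmod 600 /var/appd/config/ingress.key
  sudo chmod 644 /var/appd/config/ingress.crt
  # Confirm the CN/SAN entries cover all cluster node FQDNs.
  openssl x509 -in /var/appd/config/ingress.crt -noout -subject -ext subjectAltName
  # Edit /var/appd/config/globals.yaml.gotmpl: update dnsDomain and dnsNames, and
  # set defaultCert: false under ingress, then restart in the documented order.
  appdcli stop aiops
  appdcli stop appd
  appdcli stop operators
  appdcli start appd medium
  appdcli start aiops medium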

How do I handle instrumentation and agent configuration for applications like SAP Java and .NET Reporting Services?

Advanced instrumentation techniques are required for certain applications, such as SAP Java and .NET Reporting Services behind load balancers.

Guidance:

  • Enable Debug Logging

    • Modify log4j2.xml in Java Agent to enable debug level logging.
  • Analyze Logs

    • Review Java Agent logs for class loading errors and reflection exceptions.
  • Modify Java Agent Configuration

    • Edit app-agent-config.xml to add excludes for problematic SAP classes in the <excludes> section.
  • Add JVM Startup Parameters

    • Add an OSGi boot delegation parameter (set the property once; use the second form if SAP packages must also be delegated):
      -Dorg.osgi.framework.bootdelegation=com.singularity.*
      -Dorg.osgi.framework.bootdelegation=com.singularity.*,com.sap.*
      
  • Update Java Security Policy

    • Modify the java.policy file to grant the necessary permissions (java.security.AllPermission implies all other permissions, including the JMX MBean permissions the agent needs):
      grant codeBase "file:/-" {
        permission java.security.AllPermission;
      };
  • Restart Application

    • Restart the SAP PI application with the updated Java Agent configuration.
  • Verify Functionality

    • Confirm interface pages load and function without errors.
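
A small sketch of the log-analysis step is below; the agent directory layout shown is a typical default and should be adjusted to your installation.

  # Enable debug logging by setting the root logger level to "debug" in
  # <agent_home>/ver<version>/conf/logging/log4j2.xml, then restart the JVM.
  # Scan the agent logs for class-loading and reflection errors.
  grep -iE "ClassNotFoundException|NoClassDefFoundError|reflect" \
      <agent_home>/ver*/logs/*/agent.*.log
  # Classes that appear repeatedly are candidates for the <excludes> section of
  # app-agent-config.xml, as described above.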

What are typical causes of pod crashes in the Cisco AppDynamics On-Premises Virtual Appliance, and how can these issues be addressed?

Pod crashes in the Virtual Appliance often stem from a combination of configuration, resource, and certificate management issues within the Kubernetes environment. Common causes include:

  • Certificate and Secrets Mismanagement: Retaining old or mismatched Kubernetes secrets and certificates after uninstallations or upgrades can cause communication failures and serialization/deserialization errors in Kafka streams, leading to pod crashes.

  • Resource Constraints: Insufficient memory allocation in the deployment profile can cause Out-Of-Memory (OOM) events, resulting in pod restarts and instability.

  • Improper Service Restart Sequence: Restarting services in an incorrect order may cause failures in dependent components, especially when certificates or configurations have changed.

  • Namespace and Persistent Volume Residue: Leftover Kubernetes namespaces, Persistent Volume Claims (PVCs), and Persistent Volumes (PVs) from previous deployments can interfere with clean reinstallation and cause unexpected pod behavior.
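
To narrow down which of these causes applies, the kubectl checks sketched below can help; the appdynamics namespace is an assumption.

  # Find pods that are not healthy and check whether they were OOMKilled.
  kubectl get pods -A | grep -vE "Running|Completed"
  kubectl -n appdynamics describe pod <pod-name> | grep -iA3 "last state"
  # Look for serialization/deserialization errors in the affected pod's logs.
  kubectl -n appdynamics logs <pod-name> --previous | grep -iE "serializ|kafka"
  # List secrets, PVCs, and PVs that may be left over from an earlier install.
  kubectl -n appdynamics get secrets,pvc
  kubectl get pv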

Recommended Resolution Steps:

  • Clean Up Kubernetes Environment: Delete stale namespaces, secrets, PVCs, and PVs to ensure a clean state before reinstallation.

  • Regenerate and Update Certificates: Generate fresh certificates and update Kubernetes secrets accordingly to avoid communication and serialization errors.

  • Adjust Resource Profiles: Review and increase memory limits in the deployment profile configuration (for example, medium.yaml) to prevent OOM kills, especially for memory-intensive pods.

  • Follow Correct Service Restart Procedures: Stop and start services in the recommended sequence using CLI commands to ensure proper initialization and communication.

  • Monitor Pod and Service Status: Use Kubernetes commands to verify pod health and monitor application-specific statuses (for example, Anomaly Detection model states) to confirm successful recovery.

  • Handle Seccomp Warnings: Recognize that some security-related warnings (for example, seccomp) may be benign and do not necessarily indicate functional issues.
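
A condensed cleanup-and-verify sketch follows, assuming the appdynamics namespace and a planned clean reinstallation; deleting namespaces, PVCs, and PVs is destructive, so run these commands only when you intend to remove the old deployment.

  # Remove stale objects from the previous deployment (destructive).
  kubectl delete namespace <old-namespace>
  kubectl delete pv <stale-pv>
  # If pods were OOMKilled, raise the memory limits in the deployment profile
  # (for example, medium.yaml) before restarting.
  # Restart services in the recommended order (see the appdcli stop/start
  # sequence in the certificate section above), then monitor pod health.
  kubectl -n appdynamics get pods -w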

This generic guidance helps users troubleshoot and resolve common pod crash scenarios in the Virtual Appliance, improving deployment stability and operational reliability.