Install Splunk AppDynamics Services in the Standard Deployment

With the standard deployment option, Splunk AppDynamics On-Premises Virtual Appliance installs infrastructure and Splunk AppDynamics Services in your Kubernetes cluster.

Prepare to Install Splunk AppDynamics Services

Complete the following steps to prepare the environment:

  1. Log in to the console of one of the nodes using the appduser credentials.
  2. Navigate to the following folder:
    cd /var/appd/config
  3. Edit the globals.yaml.gotmpl file with the required configuration.
    vi globals.yaml.gotmpl
    1. (Optional) Configure a custom ingress certificate (by default the ingress controller installs a fully-configured self-signed certificate). The custom ingress certificate needs certain SANs added to it. Run the following script on the console of your primary node to view a list of SANs. Add those SANs to your custom ingress certificate. See ingress in Customize the Helm File for instructions on how to configure the custom ingress certificate and key.
      #!/bin/bash
      set -euo pipefail
      TENANT=$(helm secrets decrypt /var/appd/config/secrets.yaml.encrypted  | yq .hybrid.controller.tenantAccountName)
      DNS_DOMAIN=$(grep -v "^ *\t* *{{" /var/appd/config/globals.yaml.gotmpl | yq -r '.dnsDomain')
      DNS_NAMES=$(grep -v "^ *\t* *{{" /var/appd/config/globals.yaml.gotmpl | yq -r '.dnsNames|join(" ")')
      echo Verify the Virtual Appliance tenant should be \'${TENANT}\'
      echo Verify the Virtual Appliance domain name should be \'${DNS_DOMAIN}\'
      echo Verify the Virtual Appliance node names are: ${DNS_NAMES}
      
      echo If creating and importing into VA a Custom Ingress Certificate, include the following SANs:
      for server_name in "$DNS_DOMAIN" "${TENANT}.${DNS_DOMAIN}" "*.${DNS_DOMAIN}" "${TENANT}.auth.${DNS_DOMAIN}" "${TENANT}-tnt-authn.${DNS_DOMAIN}" $DNS_NAMES; do
              echo "  ${server_name}"
      done

      Sample output of the script above:

      Verify the Virtual Appliance tenant should be 'customer1'
      Verify the Virtual Appliance domain name should be 'va.mycompany.com'
      Verify the Virtual Appliance node names are: localhost vanodename-1
      If creating and importing into VA a Custom Ingress Certificate, include the following SANs:
        va.mycompany.com
        customer1.va.mycompany.com
        *.va.mycompany.com
        customer1.auth.va.mycompany.com
        customer1-tnt-authn.va.mycompany.com
        localhost
        vanodename-1
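      The SAN list above follows a fixed pattern derived from the tenant name, domain, and node names. As a minimal pure-bash sketch (a hypothetical helper, not part of the appliance tooling), the same list can be generated from those three inputs:

      ```shell
      #!/bin/bash
      # Hypothetical helper: print the SAN entries the ingress certificate needs,
      # given the tenant account name, the DNS domain, and the node DNS names.
      ingress_sans() {
        local tenant="$1" domain="$2"; shift 2
        printf '%s\n' \
          "${domain}" \
          "${tenant}.${domain}" \
          "*.${domain}" \
          "${tenant}.auth.${domain}" \
          "${tenant}-tnt-authn.${domain}" \
          "$@"
      }

      # Prints the 7 SAN entries shown in the sample output above.
      ingress_sans customer1 va.mycompany.com localhost vanodename-1
      ```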
    2. (Optional) Add any custom CA certificates for Controller outbound traffic by configuring appdController.customCaCerts in Customize the Helm File.
    3. (Optional) Disable self-monitoring for the Controller:
      enableClusterAgent: false
  4. (Optional) Edit the /var/appd/config/secrets.yaml file to update usernames and passwords of the Splunk AppDynamics Services.
    vi /var/appd/config/secrets.yaml
    Note: When you install the Splunk AppDynamics services, the secrets.yaml file becomes encrypted.
    See Edit the secrets.yaml.encrypted file.
  5. Save the following script to the console of your primary virtual appliance node as dnsinfo.sh and run it. Follow the instructions in its output:
    Note: If you are running this script for the first time, copy the code for plain YAML. If you are running this script after installing the services, copy the code for encrypted YAML.
    Plain YAML
    #!/bin/bash
    set -euo pipefail
    TENANT=$(yq -r .hybrid.controller.tenantAccountName /var/appd/config/secrets.yaml)
    DNS_DOMAIN=$(grep -v "^ *\t* *{{" /var/appd/config/globals.yaml.gotmpl | yq -r '.dnsDomain')
    
    echo Verify the Virtual Appliance tenant should be \'${TENANT}\'
    echo Verify the Virtual Appliance domain name should be \'${DNS_DOMAIN}\'
    
    for server_name in "${TENANT}.auth.${DNS_DOMAIN}" "${TENANT}-tnt-authn.${DNS_DOMAIN}"; do
      if ! getent hosts "${server_name}" > /dev/null; then
        echo "Please double-check that DNS can resolve '${server_name}' as the VA ingress IP"
      fi
    done 
    Encrypted YAML
    #!/bin/bash
    set -euo pipefail
    TENANT=$(helm secrets decrypt /var/appd/config/secrets.yaml.encrypted  | yq .hybrid.controller.tenantAccountName)
    DNS_DOMAIN=$(grep -v "^ *\t* *{{" /var/appd/config/globals.yaml.gotmpl | yq -r '.dnsDomain')
    
    echo Verify the Virtual Appliance tenant should be \'${TENANT}\'
    echo Verify the Virtual Appliance domain name should be \'${DNS_DOMAIN}\'
    
    for server_name in "${TENANT}.auth.${DNS_DOMAIN}" "${TENANT}-tnt-authn.${DNS_DOMAIN}"; do
      if ! getent hosts "${server_name}" > /dev/null; then
        echo "Please double-check that DNS can resolve '${server_name}' as the VA ingress IP"
      fi
    done 

    Sample output:

    Verify the Virtual Appliance tenant should be 'customer1'
    Verify the Virtual Appliance domain name should be 'va.mycompany.com'
    Please double-check that DNS can resolve 'customer1.auth.va.mycompany.com' as the VA ingress IP
    Please double-check that DNS can resolve 'customer1-tnt-authn.va.mycompany.com' as the VA ingress IP 
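    The resolution check in the script above boils down to a single getent lookup, which you can reuse standalone for any host name:

    ```shell
    #!/bin/bash
    # Returns 0 if the name resolves, 1 otherwise -- the same check
    # the dnsinfo.sh script above performs for each ingress host name.
    resolves() {
      getent hosts "$1" > /dev/null
    }

    if resolves localhost; then
      echo "localhost resolves"
    fi
    ```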
  6. Copy the license file as license.lic to the following location on the node:
    cd /var/appd/config
    This license is automatically used to provision Splunk AppDynamics Services. If you do not have the license file at this time, you can apply the license and provision the services later using appdcli.
    Note: For End User Monitoring, if you are using the Infrastructure-based Licensing model, specify the EUM account and license key in the Administration Console. See Access the Administration Console.
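    Before provisioning, a small guard can confirm that the license file is actually present and non-empty (a hypothetical helper; appdcli itself also reports a missing license):

    ```shell
    #!/bin/bash
    # Hypothetical pre-check: verify the license file exists and is non-empty
    # before provisioning. Default path is the location used in the step above.
    check_license() {
      local f="${1:-/var/appd/config/license.lic}"
      if [ ! -s "$f" ]; then
        echo "License file missing or empty: $f" >&2
        return 1
      fi
      echo "License file found: $f"
    }
    ```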

Create a Three-Node Cluster

  1. Log in to the primary node console.
  2. Verify the boot status of each node of the cluster:
    appdctl show boot
  3. Run the following command on the primary node, specifying the IP addresses of the peer nodes:
    cd /home/appduser
    appdctl cluster init <Node-2-IP> <Node-3-IP>
  4. Run the following command to verify the node status:
    appdctl show cluster
    microk8s status

    Ensure that the output displays the Running status as true for the nodes that are part of the cluster.

    Sample Output

     NODE           | ROLE  | RUNNING 
    ----------------+-------+---------
     10.0.0.1:19001 | voter | true    
     10.0.0.2:19001 | voter | true    
     10.0.0.3:19001 | voter | true 
    Note: If the following error is displayed, log out and log in to the terminal again:
    Insufficient Permissions to Access Microk8s
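To script this verification, you can parse the appdctl show cluster table for any node whose RUNNING column is not true. This is a sketch written against the sample output format shown above; the real column layout may differ:

```shell
#!/bin/bash
# Sketch: count cluster members whose RUNNING column is not "true".
# Reads `appdctl show cluster`-style output on stdin.
not_running() {
  awk -F'|' 'NR > 2 { gsub(/ /, "", $3); if ($3 != "" && $3 != "true") n++ } END { print n+0 }'
}

# Prints 1: one node below is not running.
not_running <<'EOF'
 NODE           | ROLE  | RUNNING
----------------+-------+---------
 10.0.0.1:19001 | voter | true
 10.0.0.2:19001 | voter | false
 10.0.0.3:19001 | voter | true
EOF
```

On the appliance you would pipe the live output instead: appdctl show cluster | not_running.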

Install Services in the Cluster

  1. Log in to the cluster node console.
  2. Run the command to install services:
    appdcli start appd [Profile]
    Small Profile
    appdcli start appd small
    Medium Profile
    appdcli start appd medium
    Large Profile
    appdcli start appd large
    Extra Large Profile
    appdcli start appd xlarge

    This command installs the Splunk AppDynamics services. We recommend that you specify the same VA profile that you selected when creating the virtual machines. See Sizing Requirements.

    Sample Output

    NAME               CHART                     VERSION   DURATION
    cert-manager-ext   charts/cert-manager-ext   0.0.1           0s
    ingress-nginx      charts/ingress-nginx      4.8.3           1s
    redis-ext          charts/redis-ext          0.0.1           1s
    ingress            charts/ingress            0.0.1           2s
    cluster            charts/cluster            0.0.1           2s
    reflector          charts/reflector          7.1.216         2s
    monitoring-ext     charts/monitoring-ext     0.0.1           2s
    minio-ext          charts/minio-ext          0.0.1           2s
    eum                charts/eum                0.0.1           2s
    fluent-bit         charts/fluent-bit         0.39.0          2s
    postgres           charts/postgres           0.0.1           2s
    mysql              charts/mysql              0.0.1           3s
    redis              charts/redis              18.1.6          3s
    controller         charts/controller         0.0.1           3s
    events             charts/events             0.0.1           4s
    cluster-agent      charts/cluster-agent      1.16.37         4s
    kafka              charts/kafka              0.0.1           6s
    minio              charts/minio              5.0.14         47s
  3. Verify the status of the installed pods and service endpoints:
    • Pods: kubectl get pods --all-namespaces
    • Service endpoints: appdcli ping

      +---------------------+---------+
      |  Service Endpoint   | Status  |
      +=====================+=========+
      | Controller          | Success |
      +---------------------+---------+
      | Events              | Success |
      +---------------------+---------+
      | EUM Collector       | Success |
      +---------------------+---------+
      | EUM Aggregator      | Success |
      +---------------------+---------+
      | EUM Screenshot      | Success |
      +---------------------+---------+
      | Synthetic Shepherd  | Success |
      +---------------------+---------+
      | Synthetic Scheduler | Success |
      +---------------------+---------+
      | Synthetic Feeder    | Success |
      +---------------------+---------+
      | AD/RCA Services     | Failed  |
      +---------------------+---------+
Note: By default, the Virtual Appliance installs the Cluster Agent. This agent helps you monitor nodes, CPU, memory, and storage. For more information, see View Container Details.
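To fail fast in automation, the appdcli ping table can be scanned for any endpoint whose status is not Success. This sketch parses the grid format shown above (border lines contain no "|" and are skipped):

```shell
#!/bin/bash
# Sketch: print service endpoints whose `appdcli ping` status is not Success.
failed_endpoints() {
  awk -F'|' 'NF >= 3 && $3 !~ /Status|Success/ {
    gsub(/^ +| +$/, "", $2); if ($2 != "") print $2
  }'
}

# Prints: AD/RCA Services
failed_endpoints <<'EOF'
+---------------------+---------+
|  Service Endpoint   | Status  |
+=====================+=========+
| Controller          | Success |
+---------------------+---------+
| AD/RCA Services     | Failed  |
+---------------------+---------+
EOF
```

On the appliance you would run appdcli ping | failed_endpoints and treat any output as a failure.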

Install the Anomaly Detection Services in the Cluster

  1. Log in to the cluster node console.
  2. Run the command to install services:
    Small Profile
    appdcli start aiops small
    Medium Profile
    appdcli start aiops medium
    Large Profile
    appdcli start aiops large
    Extra Large Profile
    appdcli start aiops xlarge
  3. Verify the status of the installed pods and service endpoints:
    • Pods: kubectl get pods -n cisco-aiops
    • Service endpoints: appdcli ping

      The status of the Anomaly Detection service appears as Success.

See Anomaly Detection.
Note: Sometimes, an IOException error occurs when you access Anomaly Detection in the Controller UI. See Troubleshoot Virtual Appliance Issues.

Install OpenTelemetry Service

Complete the following steps:

  1. Go to the Controller DNS to verify that the Controller is active.
  2. Log in to the cluster node console.
  3. Run the following command and wait until the Controller service status is Success.
    appdcli ping
    Sample Output:
    +---------------------+---------------+
    |  Service Endpoint   |    Status     |
    +=====================+===============+
    | Controller          | Success       |
    +---------------------+---------------+
    | Events              | Success       |
    +---------------------+---------------+
    | EUM Collector       | Success       |
    +---------------------+---------------+
    | EUM Aggregator      | Success       |
    +---------------------+---------------+
    | EUM Screenshot      | Success       |
    +---------------------+---------------+
    | Synthetic Shepherd  | Success       |
    +---------------------+---------------+
    | Synthetic Scheduler | Success       |
    +---------------------+---------------+
    | Synthetic Feeder    | Success       |
    +---------------------+---------------+
    | OTIS                | Not Installed |
    +---------------------+---------------+
    
    
  4. Run the following command to install the OpenTelemetry™ service:
    Small Profile
    appdcli start otis small
    Medium Profile
    appdcli start otis medium
    Large Profile
    appdcli start otis large
    Extra Large Profile
    appdcli start otis xlarge

    This command installs the OpenTelemetry™ service in the cisco-otis namespace.

  5. Verify the status of the installed pods and service endpoints:
    • Pods: kubectl get pods -n cisco-otis
    • Service endpoints: appdcli ping

      The status of the OpenTelemetry™ service appears as Success.
      +---------------------+---------------+
      |  Service Endpoint   |    Status     |
      +=====================+===============+
      | Controller          | Success       |
      +---------------------+---------------+
      | Events              | Success       |
      +---------------------+---------------+
      | EUM Collector       | Success       |
      +---------------------+---------------+
      | EUM Aggregator      | Success       |
      +---------------------+---------------+
      | EUM Screenshot      | Success       |
      +---------------------+---------------+
      | Synthetic Shepherd  | Success       |
      +---------------------+---------------+
      | Synthetic Scheduler | Success       |
      +---------------------+---------------+
      | Synthetic Feeder    | Success       |
      +---------------------+---------------+
      | OTIS                | Success       |
      +---------------------+---------------+

    You can also access the endpoint URL to verify the installation. See Verify the Service Endpoints Paths.

To continue, see Configure AppDynamics for OpenTelemetry.

Install ATD Services

Ensure that:
  • The DNS entries required for resolving the ATD services are added. Otherwise, configure the DNS entries using the encrypted secrets file. See Configure DNS Entries.

  • The Ingress certificate and key with all necessary Subject Alternative Names (SANs) are generated and the globals.yaml.gotmpl file is updated accordingly. Configure Ingress certificates using the encrypted secrets file. See Configure Ingress Certificates (Only for SSL Certificates).

  • The AuthN service is installed along with the Splunk AppDynamics services.
    kubectl get pods -nauthn

Follow these steps to install the Automatic Transaction Diagnostics (ATD) service in the Virtual Appliance:

  1. Log in to the cluster node console.
  2. Run the command to install services:
    Demo Profile
    appdcli start atd demo
    Small Profile
    appdcli start atd small
    Medium Profile
    appdcli start atd medium
    Large Profile
    appdcli start atd large
    Extra Large Profile
    appdcli start atd xlarge
  3. Verify the status of installed pods and service endpoints.
    kubectl get pods -ncisco-atd

    You can also access the endpoint URL to verify the installation. See Verify the Service Endpoints Paths.

    For more information about ATD, see Automatic Transaction Diagnostics Workflow.

Install Universal Integration Layer Service

Ensure that:
  • The DNS entries required for resolving the Anomaly services are added. Otherwise, configure the DNS entries using the encrypted secrets file. See Configure DNS Entries.

  • The Ingress certificate and key with all necessary Subject Alternative Names (SANs) are generated and the globals.yaml.gotmpl file is updated accordingly. Configure Ingress certificates using the encrypted secrets file. See Configure Ingress Certificates (Only for SSL Certificates).

  • The authentication settings are added to the Controller configuration. See Configure Standalone Controller.

To integrate the Splunk AppDynamics Self-Hosted Virtual Appliance with Splunk Enterprise, you must install the Universal Integration Layer (UIL) service in the cluster:
  1. Log in to the cluster node console.
  2. Run the command to install the service:
    Small Profile
    appdcli start uil small
    Medium Profile
    appdcli start uil medium
    Large Profile
    appdcli start uil large
    Extra Large Profile
    appdcli start uil xlarge
  3. Verify the status of the installed pods and service endpoints:
    • Pods: kubectl get pods -n cisco-uil

      The status of the universal integration layer pod must be displayed as Running.

    • Service endpoints: appdcli ping

      The status of the UIL service should be displayed as Success.


    You can also access the endpoint URL to verify the installation. See Verify the Service Endpoints Paths.

To continue with the integration, see Integrate the Splunk AppDynamics Self-Hosted Virtual Appliance with Splunk Enterprise.

Apply Licenses to AppDynamics Services

Use appdcli to apply licenses after installing Splunk AppDynamics Services.

  1. Log in to the cluster node console.
  2. Copy the license file as license.lic to the following location on the node:
    cd /var/appd/config
  3. Run the following commands to apply licenses:
    Controller

    Update the Controller license.

    appdcli license controller license.lic
    End User Monitoring
    1. Update the EUM license.
      appdcli license eum license.lic
    2. (Optional) If you are using the Infrastructure-based Licensing model, specify the EUM account and license key in the Administration Console. See Access the Administration Console. Follow these steps to add the EUM account and license key:
      1. From Account Settings, select the Controller account that has EUM licenses and click Edit.

      2. Enter the EUM license key and the EUM account name in the EUM License Key and the EUM Account Name fields.

      3. Click Save.

    For more information, see Virtual Appliance CLI.

Verify the Service Endpoints Paths

Log in to the Controller UI by accessing https://<DNS-Name>/ or https://<Cluster-Node-IP>/.

The Ingress controller checks the URL of an incoming request and routes it to the corresponding Splunk AppDynamics service.

+----------------------------------+----------------------------------------+
| Service Endpoint                 | Installation Path                      |
+==================================+========================================+
| Controller                       | https://<ingress>/controller           |
+----------------------------------+----------------------------------------+
| Events                           | https://<ingress>/events               |
|                                  | https://<Node-IP>:32105/events         |
+----------------------------------+----------------------------------------+
| End User Monitoring: Aggregator  | https://<ingress>/eumaggregator        |
+----------------------------------+----------------------------------------+
| End User Monitoring: Screenshots | https://<ingress>/screenshots          |
+----------------------------------+----------------------------------------+
| End User Monitoring: Collector   | https://<ingress>/eumcollector         |
+----------------------------------+----------------------------------------+
| Synthetic: Shepherd              | https://<ingress>/synthetic/shepherd   |
+----------------------------------+----------------------------------------+
| Synthetic: Scheduler             | https://<ingress>/synthetic/scheduler  |
+----------------------------------+----------------------------------------+
| Synthetic: Feeder                | https://<ingress>/synthetic/feeder     |
+----------------------------------+----------------------------------------+
Note: By default, the Controller UI username is set to admin and the password is set to welcome.
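The paths in the table can be probed in one loop. This is a hedged sketch: the path list is taken from the table above, and curl -k accepts the default self-signed ingress certificate:

```shell
#!/bin/bash
# Sketch: build the verification URL for each service path in the table,
# then probe it. The ingress argument is your DNS name or a cluster node IP.
endpoint_urls() {
  local ingress="$1" p
  for p in controller events eumaggregator screenshots eumcollector \
           synthetic/shepherd synthetic/scheduler synthetic/feeder; do
    echo "https://${ingress}/${p}"
  done
}

# Probe each URL and report its HTTP status code.
probe_all() {
  endpoint_urls "$1" | while read -r url; do
    echo "${url} -> $(curl -k -s -o /dev/null -w '%{http_code}' "$url")"
  done
}
```

For example, probe_all va.mycompany.com prints one status line per endpoint.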