Install Infrastructure Visibility with the Kubernetes CLI

This page describes how to install the Machine Agent and Network Agents in a Kubernetes cluster where the Cluster Agent Operator is installed.

The Cluster Agent Operator provides a custom resource definition called InfraViz. You can use InfraViz to simplify deploying the Machine and Network Agents as a daemonset in a Kubernetes cluster. Alternatively, you can deploy these agents by creating a daemonset YAML file, which does not require the Cluster Agent Operator. For more information, see these examples.
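
For example, after the operator is installed you can quickly check that the InfraViz custom resource definition is registered in the cluster (the exact CRD name may vary by operator version, so the check below simply filters for it):

kubectl get crds | grep -i infraviz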

To deploy the Analytics Agent as a daemonset in a Kubernetes cluster, see Install Agent-Side Components in Kubernetes.

Note: Windows Containers are not supported for this deployment.

Requirements

Before you begin, verify that you have:

  • Installed kubectl >= 1.16
  • Installed Cluster Agent >= 21.3.1
  • Met these requirements: Cluster Agent Requirements and Supported Environments.
  • If Server Visibility is required, sufficient Server Visibility licenses based on the number of worker nodes in your cluster.
  • Permissions to view servers in the Splunk AppDynamics Controller.

Installation Procedure

  1. Install the Cluster Agent. This example uses the Alpine Linux bundle:
    1. Download the Cluster Agent bundle.
    2. Unzip the Cluster Agent bundle.
    3. Deploy the Cluster Agent Operator using the CLI, specifying the correct Kubernetes and OpenShift versions (if applicable):
      unzip appdynamics-cluster-agent-alpine-linux-<version>.zip
      kubectl create namespace appdynamics
      Kubernetes >= 1.16
      kubectl create -f cluster-agent-operator.yaml
      OpenShift with Kubernetes >= 1.16
      kubectl create -f cluster-agent-operator-openshift.yaml
      OpenShift with Kubernetes <= 1.15
      kubectl create -f cluster-agent-operator-openshift-1.15-or-less.yaml
      Note: You can also install Cluster Agent Operator from OpenShift OperatorHub in your OpenShift cluster.
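      To confirm that the operator deployed successfully, you can check that its pod is running in the appdynamics namespace (the operator pod name reported by your cluster may differ):
      kubectl -n appdynamics get pods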
  2. Create a Cluster Agent secret using the Machine Agent access key to connect to the Controller. If a cluster-agent-secret does not exist, you must create one; see Install the Cluster Agent with the Kubernetes CLI.
    kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>
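    Optionally, confirm that the secret exists:
    kubectl -n appdynamics get secret cluster-agent-secret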
  3. (Optional) Create an Infrastructure Visibility secret by using the keystore credentials.
    1. Run the following command to import your CA certificate from the custom-ssl.pem file:
      keytool -import -alias rootCA -file custom-ssl.pem -keystore cacerts.jks -storepass <your-password>
    2. Create the keystore file secret:
      kubectl -n appdynamics create secret generic <cacertinfraviz> --from-file=cacerts.jks
    3. Create the keystore password secret:
      kubectl -n appdynamics create secret generic <kspassinfraviz> --from-literal=keystore-password="<your-password>"
      Here, cacertinfraviz is the name of the keystore file secret and kspassinfraviz is the name of the keystore password secret for Infrastructure Visibility.
      Note: The keystore file and password secrets that you specify here must be included in the infraviz.yaml file to apply the custom SSL configuration. For example:
      keyStoreFileSecret: cacertinfraviz
      keystorePasswordSecret: kspassinfraviz
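      For reference, a minimal sketch of where these keys sit in the InfraViz spec, assuming the secret names from the examples above:
      spec:
        controllerUrl: "https://mycontroller.saas.appdynamics.com"
        keyStoreFileSecret: cacertinfraviz
        keystorePasswordSecret: kspassinfraviz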
  4. Update the infraviz.yaml file to set the controllerUrl and account values based on the information from the Controller's License page. To enable Server Visibility, set enableServerViz to true (shown in the infraviz.yaml configuration example). To deploy a Machine Agent without Server Visibility enabled, set enableServerViz to false.

    infraviz.yaml Configuration File with Server Visibility Enabled

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: appdynamics-infraviz
      namespace: appdynamics
    ---
    apiVersion: cluster.appdynamics.com/v1alpha1
    kind: InfraViz
    metadata:
      name: appdynamics-infraviz
      namespace: appdynamics
    spec:
      controllerUrl: "https://mycontroller.saas.appdynamics.com"
      image: "docker.io/appdynamics/machine-agent:latest"
      account: "<your-account-name>"
      globalAccount: "<your-global-account-name>"
      enableContainerHostId: true
      enableServerViz: true
      resources:
        limits:
          cpu: 500m
          memory: "1G"
        requests:
          cpu: 200m
          memory: "800M"

    The infraviz.yaml configuration file example deploys a daemonset that runs a single pod per node in the cluster. Each pod runs a single container in which the Machine Agent or Server Visibility Agent runs.
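    After you deploy infraviz.yaml in step 6, you can confirm that the daemonset was created and has scheduled a pod on each node (the daemonset is typically named appdynamics-infraviz, but the name may differ in your cluster):
    kubectl -n appdynamics get daemonset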

  5. To enable the Network Visibility Agent to run in a second container in the same pod, add the netVizImage and netVizPort keys and values as shown in this configuration file example:

    infraviz.yaml Configuration File with Second Container in a Single Pod

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: appdynamics-infraviz
      namespace: appdynamics
    ---
    apiVersion: cluster.appdynamics.com/v1alpha1
    kind: InfraViz
    metadata:
      name: appdynamics-infraviz
      namespace: appdynamics
    spec:
      controllerUrl: "https://mycontroller.saas.appdynamics.com"
      image: "docker.io/appdynamics/machine-agent:latest"
      account: "<your-account-name>"
      enableContainerHostId: true
      enableServerViz: true
      netVizImage: appdynamics/machine-agent-netviz:latest
      netVizPort: 3892
      resources:
        limits:
          cpu: 500m
          memory: "1G"
        requests:
          cpu: 200m
          memory: "800M"
  6. Use kubectl to deploy infraviz.yaml.
    Note:
    • For Kubernetes >= 1.25 environments, PodSecurityPolicy has been removed (https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes). Pod security restrictions are now applied at the namespace level (https://kubernetes.io/docs/concepts/security/pod-security-admission/) using Pod Security Standard levels. Therefore, you must set the Privileged level on the namespace in which the Infrastructure Visibility pod runs (a command for labeling an existing namespace is shown after the Pod Security Admission steps below).
    • For Kubernetes < 1.25 environments where PodSecurityPolicies block certain pod security context configurations, such as privileged pods, you must deploy infraviz-pod-security-policy.yaml before deploying the infraviz.yaml file. You must explicitly attach the PodSecurityPolicy to the appdynamics-infraviz service account.
    • For environments where OpenShift SecurityContextConstraints block certain pod security context configurations, such as privileged pods, you must deploy infraviz-security-context-constraint-openshift.yaml before deploying the infraviz.yaml file.
    Kubernetes
    kubectl create -f infraviz.yaml
    Kubernetes < 1.25 with Pod Security Policy
    kubectl create -f infraviz-pod-security-policy.yaml
    kubectl create -f infraviz.yaml
    Kubernetes >= 1.25 with Pod Security Admission
    1. Add the following Kubernetes labels to the namespace where Infrastructure Visibility is installed:

      • pod-security.kubernetes.io/<MODE>: <LEVEL> (Required)

      • pod-security.kubernetes.io/<MODE>-version: <VERSION> (Optional)
        For more information, see https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/.

        sample-namespace.yaml
        apiVersion: v1
        kind: Namespace
        metadata:
          name: appdynamics
          labels:
            pod-security.kubernetes.io/enforce: privileged
            pod-security.kubernetes.io/enforce-version: v1.27
            pod-security.kubernetes.io/audit: privileged
            pod-security.kubernetes.io/audit-version: v1.27
            pod-security.kubernetes.io/warn: privileged
            pod-security.kubernetes.io/warn-version: v1.27
    2. Run the following command:

      kubectl create -f infraviz.yaml
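      If the appdynamics namespace already exists, an alternative to creating it from sample-namespace.yaml is to apply the Pod Security Standard labels directly, for example:
      kubectl label --overwrite namespace appdynamics \
        pod-security.kubernetes.io/enforce=privileged \
        pod-security.kubernetes.io/audit=privileged \
        pod-security.kubernetes.io/warn=privileged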
    OpenShift
    kubectl create -f infraviz-security-context-constraint-openshift.yaml
    kubectl create -f infraviz.yaml
  7. Confirm that the appdynamics-infraviz pod is running, and the Machine Agent, Server Visibility Agent, and Network Agent containers are ready:
    kubectl -n appdynamics get pods
    NAME                                    READY   STATUS    RESTARTS   AGE
    appdynamics-infraviz-shkhj                     2/2     Running   0          18s
  8. To verify that the agents are registering with the Controller, review the logs and confirm that the agents display in the Agents Dashboard of the Controller Administration UI. If Server Visibility is enabled, the nodes are also visible under Controller > Servers.
    kubectl -n appdynamics logs appdynamics-infraviz-shkhj -c appd-infra-agent
    ...
    Started Machine Agent Successfully

InfraViz Configuration Settings

To configure Infrastructure Visibility, you can modify these parameters in the infraviz.yaml file included with the download package. After changing the file, delete and re-create the InfraViz deployment to ensure the changes are applied.
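
For example, assuming the resources were created from infraviz.yaml, the update could look like this:

kubectl delete -f infraviz.yaml
kubectl create -f infraviz.yaml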

Each entry below lists the parameter, whether it is required or optional, its default value, and a description.

account (Required; default: N/A)
  Splunk AppDynamics account name.

appName (Optional; default: N/A)
  Name of the cluster displayed in the Controller UI as your cluster name. This configuration groups the nodes of the cluster based on the master, worker, infra, and worker-infra roles and displays them in the Metric Browser.

args (Optional; default: N/A)
  List of command arguments.

controllerUrl (Required; default: N/A)
  URL of the Splunk AppDynamics Controller.

enableContainerd (Optional; default: false)
  Enable containerd visibility on the Machine Agent. Specify either true or false.

enableContainerHostId (Required; default: true)
  Flag that determines how container names are derived; specify either true or false.

enableMasters (Optional; default: false)
  By default, only worker nodes are monitored. When set to true, Server Visibility is provided for master nodes. For managed Kubernetes providers, the flag has no effect because the master plane is not accessible.

enableServerViz (Required; default: false)
  Enable Server Visibility.

enableDockerViz (Required; default: false)
  Enable Docker Visibility.

env (Optional; default: N/A)
  List of environment variables.

eventServiceUrl (Optional; default: N/A)
  Event Service endpoint.

globalAccount (Optional; default: N/A)
  Global account name.

image (Optional; default: appdynamics/machine-agent:latest)
  Retrieves the most recent version of the Machine Agent image.

imagePullPolicy (Optional; default: Always)
  The image pull policy for the InfraViz pod.

imagePullSecret (Optional; default: N/A)
  Name of the image pull secret.

logLevel (Optional; default: info)
  Level of logging verbosity. Valid options are info or debug.

metricsLimit (Optional; default: N/A)
  Maximum number of metrics that the Machine Agent sends to the Controller.

netVizImage (Optional; default: appdynamics/machine-agent-netviz:latest)
  Retrieves the most recent version of the Network Agent image.

netVizPort (Optional; default: 3892)
  When > 0, the Network Agent is deployed in a sidecar with the Machine Agent. By default, the Network Visibility Agent works with port 3892.
netVizSecurityContext (Optional; defaults listed per child parameter)
  You can include the following child parameters:
  • runAsGroup (Optional; default: N/A): If you configured the application container as a non-root user, provide the groupId of the corresponding group. This sets the appropriate file permissions on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you need to override the default value of runAsGroup that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.
  • runAsUser (Optional; default: N/A): If you configured the application container as a non-root user, provide the userId of the corresponding user. This sets the appropriate file permissions on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you need to override the default value of runAsUser that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.
  • allowPrivilegeEscalation (Optional; default: N/A): Controls whether a process can gain more privileges than its parent process. The value is true when the container runs as a privileged container or with the CAP_SYS_ADMIN capability. If you do not set this parameter, Helm uses the default value of true. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • capabilities (Optional; default: ["NET_ADMIN","NET_RAW"]): Adds or removes POSIX capabilities from the running containers. This uses the default set of capabilities during container runtime. The default values are not overridden by the specified values; when you specify a value for capabilities, that value is considered along with the default values. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • privileged (Optional; default: N/A): Runs the container in privileged mode, which is equivalent to root on the host. If you do not set this parameter, Helm uses the default value of true. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • procMount (Optional; default: N/A): The type of proc mount to use for the containers. Note: This parameter is currently available for Deployment and DeploymentConfig mode.
  • readOnlyRootFilesystem (Optional; default: N/A): Specifies whether this container has a read-only root filesystem. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • runAsNonRoot (Optional; default: N/A): Specifies whether the container must run as a non-root user. If the value is true, the kubelet validates the image at runtime to ensure that the container does not start when run as root. If this parameter is not specified, or if the value is false, there is no validation. Note: This parameter is currently available for Deployment and DeploymentConfig mode.
  • seLinuxOptions (Optional; default: N/A): Applies the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • seccompProfile (Optional; default: N/A): Specifies the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • windowsOptions (Optional; default: N/A): Specifies Windows-specific options for every container. Note: This parameter cannot be set when spec.os.name is linux and is currently available for Deployment and DeploymentConfig mode.
nodeSelector (Optional; default: linux)
  OS-specific label that identifies nodes for scheduling of the daemonset pods.

overrideVolumeMounts (Optional; default: proc, sys, etc)
  The list of volumeMounts.

priorityClassName (Optional; default: N/A)
  Name of the priority class that determines priority when a pod needs to be evicted.

propertyBag (Optional; default: N/A)
  String with any other Machine Agent parameters.

proxyUrl (Optional; default: N/A)
  URL of the proxy server (protocol://domain:port).

proxyUser (Optional; default: N/A)
  Proxy user credentials (user@password).

resources (Optional; default: N/A)
  Definitions of resources and limits for the Machine Agent.

resourcesNetViz (Optional; default: requests of 100m CPU and 150Mi memory, limits of 200m CPU and 300Mi memory)
  Set resources for the Network Visibility (NetViz) container.
runAsUser (Optional; default: UID 1001, username appdynamics)
  The UID (user ID) used to run the entry point of the container process. If you do not specify the UID, this defaults to the user ID specified in the image (docker.io/appdynamics/machine-agent or docker.io/appdynamics/machine-agent-analytics:latest). If you need to run as any other UID, change the UID for runAsUser without changing the group ID.
  Note: This parameter is deprecated. We recommend that you use the runAsUser child parameter under the securityContext parameter.

runAsGroup (Optional; default: GID 1001, username appdynamics)
  The GID (group ID) used to run the entry point of the container process. If you do not specify the GID, this uses the UID specified in the image (docker.io/appdynamics/machine-agent or docker.io/appdynamics/machine-agent-analytics:latest).
  Note: This parameter is deprecated. We recommend that you use the runAsGroup child parameter under the securityContext parameter.

securityContext (Optional; defaults listed per child parameter)
  Note: For OpenShift versions later than 4.14, ensure that all the child parameters within securityContext are specified based on the permissible values outlined by the security context constraints (SCCs). See Managing Security Context Constraints in the Red Hat OpenShift documentation. For example, if you want to use the runAsUser property, the user ID (UID) must be within the permissible range. The SCC permissible range for the UID is 1000 to 9001, so you can set the runAsUser value only within this range. The same applies to the other security context parameters.
  You can include the following parameters under securityContext:
  • runAsGroup (Optional; default: N/A): If you configured the application container as a non-root user, provide the groupId of the corresponding group. This sets the appropriate file permissions on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you need to override the default value of runAsGroup that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.
  • runAsUser (Optional; default: N/A): If you configured the application container as a non-root user, provide the userId of the corresponding user. This sets the appropriate file permissions on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you need to override the default value of runAsUser that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.
  • allowPrivilegeEscalation (Optional; default: true): Controls whether a process can gain more privileges than its parent process. The value is true when the container runs as a privileged container or with the CAP_SYS_ADMIN capability. If you do not set this parameter, Helm uses the default value of true. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • capabilities (Optional; default: N/A): Adds or removes POSIX capabilities from the running containers. This uses the default set of capabilities during container runtime. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • privileged (Optional; default: true): Runs the container in privileged mode, which is equivalent to root on the host. If you do not set this parameter, Helm uses the default value of true. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • procMount (Optional; default: N/A): The type of proc mount to use for the containers. Note: This parameter is currently available for Deployment and DeploymentConfig mode.
  • readOnlyRootFilesystem (Optional; default: N/A): Specifies whether this container has a read-only root filesystem. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • runAsNonRoot (Optional; default: N/A): Specifies whether the container must run as a non-root user. If the value is true, the kubelet validates the image at runtime to ensure that the container does not start when run as root. If this parameter is not specified, or if the value is false, there is no validation. Note: This parameter is currently available for Deployment and DeploymentConfig mode.
  • seLinuxOptions (Optional; default: N/A): Applies the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • seccompProfile (Optional; default: N/A): Specifies the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options. Note: This parameter is unavailable when spec.os.name is Windows and is currently available for Deployment and DeploymentConfig mode.
  • windowsOptions (Optional; default: N/A): Specifies Windows-specific options for every container. Note: This parameter cannot be set when spec.os.name is linux and is currently available for Deployment and DeploymentConfig mode.
stdoutLogging (Optional; default: false)
  Determines whether logs are saved to a file or redirected to the console.

tolerations (Optional; default: N/A)
  List of tolerations based on the taints that are associated with nodes.

uniqueHostId (Optional; default: spec.nodeName)
  Unique host ID in Splunk AppDynamics. Valid options are spec.nodeName or status.hostIP.
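
As an illustration of how several of these settings fit together in an InfraViz resource, here is a minimal sketch. It assumes that nodeSelector, tolerations, and securityContext accept the same shapes as the corresponding Kubernetes pod spec fields; verify the exact schema against the CRD shipped in your Cluster Agent bundle, and treat all values as placeholders rather than recommendations.

apiVersion: cluster.appdynamics.com/v1alpha1
kind: InfraViz
metadata:
  name: appdynamics-infraviz
  namespace: appdynamics
spec:
  controllerUrl: "https://mycontroller.saas.appdynamics.com"
  account: "<your-account-name>"
  enableServerViz: true
  enableDockerViz: false
  logLevel: info
  # Assumed to mirror the pod spec nodeSelector map
  nodeSelector:
    kubernetes.io/os: linux
  # Assumed to mirror the pod spec tolerations list
  tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Exists"
      effect: "NoSchedule"
  # Child parameters as described under securityContext above
  securityContext:
    runAsUser: 1001
    runAsGroup: 1001
  resources:
    limits:
      cpu: 500m
      memory: "1G"
    requests:
      cpu: 200m
      memory: "800M"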