Install the Cluster Agent with Helm Charts
This page describes how to use Cluster Agent Helm Charts to deploy the Cluster Agent.
Helm is a package manager for Kubernetes. Helm charts are a collection of files that describe a set of Kubernetes resources. The Cluster Agent Helm chart is a convenient method to deploy the Splunk AppDynamics Operator and Cluster Agent. You can also use the Cluster Agent Helm chart to deploy multiple Cluster Agents in a single cluster. This may be necessary for larger clusters that exceed the pod monitoring limit for a single Cluster Agent. See Cluster Agent Requirements and Supported Environments.
Requirements
- Cluster Agent version >= 20.6
- Controller version >= 20.6
- Cluster Agent Helm charts are compatible with Helm 3.0
- We recommend using Cluster Agent Helm Charts version >= v1.1.0.
- Use Cluster Agent Helm Charts version >= v1.10.0 to install Cluster Agent >= 23.2.0 and Splunk AppDynamics Operator version >= 23.2.0.
- Use Cluster Agent Helm Charts version >= v1.1.0 to install Cluster Agent >= 21.12.0 and Splunk AppDynamics Operator version >= 21.12.0.
- You can install Cluster Agent version <= 21.10.0 and Splunk AppDynamics Operator version <= 0.6.11 using the older major version of Cluster Agent Helm Charts (<= 0.1.19).
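The version gates above can be checked mechanically before you pick a chart version. A minimal bash sketch; the `version_ge` helper is illustrative and not part of the chart:

```shell
# Hypothetical helper: true when $1 >= $2, comparing dotted version numbers.
# Relies on GNU sort's -V (version sort).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ge "1.10.0" "1.1.0"; then
  echo "chart 1.10.0 satisfies the >= v1.1.0 recommendation"
fi
version_ge "0.1.19" "1.1.0" || echo "chart 0.1.19 belongs to the legacy line"
```

Note that plain string comparison would get this wrong (`1.10.0` sorts before `1.2.0` lexically), which is why `sort -V` is used.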
Install a Single Cluster Agent in a Cluster
- Delete any previously installed CustomResourceDefinitions (CRDs) related to the Splunk AppDynamics Agent by using these commands:

  ```
  kubectl get crds
  kubectl delete crds <crd-names>
  ```
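To avoid deleting unrelated CRDs, it helps to filter the `kubectl get crds` output down to the AppDynamics ones first. The snippet below simulates that filtering on a canned list; the CRD names are hypothetical stand-ins for whatever your cluster actually reports:

```shell
# Simulated `kubectl get crds -o name` output (hypothetical names).
crds='clusteragents.cluster.appdynamics.com
infravizs.cluster.appdynamics.com
certificates.cert-manager.io'

# Keep only the AppDynamics-related CRDs.
appd_crds=$(printf '%s\n' "$crds" | grep 'appdynamics')
printf '%s\n' "$appd_crds"

# In a real cluster you would then run, for each remaining name:
#   kubectl delete crd <crd-name>
```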
- Add the chart repository to Helm:

  ```
  helm repo add appdynamics-cloud-helmcharts https://appdynamics.jfrog.io/artifactory/appdynamics-cloud-helmcharts/
  ```
- Create a namespace for appdynamics in your cluster:

  ```
  kubectl create namespace appdynamics
  ```
- Create a Helm values file, called values-ca1.yaml in this example. Update the controllerInfo properties with the credentials from your Controller, and update the clusterAgent properties to set the namespaces and pods to monitor. See Configure the Cluster Agent for information about the available properties nsToMonitorRegex, nsToExcludeRegex, and podFilter.
values-ca1.yaml
```yaml
# To install Cluster Agent
installClusterAgent: true

# controller info
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>

# Cluster agent config
clusterAgent:
  nsToMonitorRegex: dev-.*
```
Note: From Cluster Agent 24.9 onwards, you no longer require server monitoring user credentials or API-user credentials to mark associated nodes as historical upon pod deletion; the associated node in the Controller is automatically marked as historical when a pod is deleted.

See Configuration Options for values.yaml for more information about the available options. You can also download a copy of values.yaml from the Helm Chart repository using this command:

```
helm show values appdynamics-cloud-helmcharts/cluster-agent
```
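To sanity-check a namespace regex such as `nsToMonitorRegex: dev-.*` before deploying, you can approximate the match with `grep -E` against your namespace list. This is only an approximation: the Cluster Agent's exact regex engine and anchoring may differ, and the namespace names here are made up:

```shell
# Hypothetical namespaces in the cluster.
namespaces='dev-frontend
dev-backend
prod
kube-system'

# Approximate nsToMonitorRegex: dev-.* with an anchored extended regex.
monitored=$(printf '%s\n' "$namespaces" | grep -E '^dev-.*$')
printf '%s\n' "$monitored"
```

Namespaces `dev-frontend` and `dev-backend` survive the filter, while `prod` and `kube-system` do not.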
- (Optional) If you require multiple Cluster Agents to monitor a single cluster, set up the Target Allocator. In the values-ca1.yaml file, enable the Target Allocator and set the number of Cluster Agent replicas so that the operator can create them.

values-ca1.yaml
```yaml
...
# Cluster agent config
clusterAgent:
  nsToMonitorRegex: dev-.*

# Instrumentation config
instrumentationConfig:
  enabled: false

# Target Allocator config
targetAllocator:
  enabled: true
  clusterAgentReplicas: 3
  autoScaling:
    enabled: false                       # false by default
    replicaProfile: Default
    maxClusterAgentReplicas: 12
    scaleDown:
      stabilizationWindowSeconds: 86400  # in seconds

# Target Allocator pod specific properties
targetAllocatorPod:
  imagePullPolicy: ""
  imagePullSecret: ""
  priorityClassName: ""
  nodeSelector: {}
  tolerations: []
  resources:
    limits:
      cpu: "500m"
      memory: "500Mi"
    requests:
      cpu: "200m"
      memory: "200Mi"
  labels: {}
  securityContext: {}
```
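With `clusterAgentReplicas: 3`, the Target Allocator spreads the monitored namespaces across the replicas. The sketch below models that distribution as a simple round-robin; the real allocation strategy may differ, and the namespace names are hypothetical:

```shell
# Illustrative only: round-robin spread of namespaces over 3 replicas.
replicas=3
allocation=$(
  i=0
  for ns in dev-a dev-b dev-c dev-d dev-e; do
    echo "replica-$((i % replicas)) <- $ns"
    i=$((i + 1))
  done
)
echo "$allocation"
```

Each replica ends up monitoring roughly an equal share of the namespaces, which is what keeps each replica under the per-agent pod monitoring limit.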
- (Optional) Create a secret based on the Controller access key:

  ```
  kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key='<access-key>' --from-literal=api-user='<username@account:password>'
  ```
- If you have not installed the Kubernetes metrics-server in the cluster (usually located in the kube-system namespace), set install.metrics-server to true in the values file to invoke the subchart that installs it:

  ```yaml
  install:
    metrics-server: true
  ```
Note: Setting install.metrics-server to true installs metrics-server in the same namespace as the Cluster Agent (the namespace passed to Helm with the --namespace flag).
- Deploy the Cluster Agent to the appdynamics namespace:

  ```
  helm install -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace=appdynamics
  ```
Enable Auto-Instrumentation
Once you have validated that the Cluster Agent was successfully installed, you can add additional configuration to the instrumentationConfig section of the values YAML file to enable auto-instrumentation. In this example, instrumentationConfig.enabled has been set to true, and multiple instrumentationRules have been defined. See Auto-Instrument Applications with the Cluster Agent.
```yaml
# To install Cluster Agent
installClusterAgent: true

# AppDynamics controller info
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>

# Cluster agent config
clusterAgent:
  nsToMonitorRegex: ecom|books|groceries

instrumentationConfig:
  enabled: true
  instrumentationMethod: Env
  nsToInstrumentRegex: ecom|books|groceries
  defaultAppName: Ecommerce
  tierNameStrategy: manual
  enableInstallationReport: false
  imageInfo:
    java:
      image: "docker.io/appdynamics/java-agent:latest"
      agentMountPath: /opt/appdynamics
      imagePullPolicy: Always
  instrumentationRules:
    - namespaceRegex: groceries
      language: dotnetcore
      tierName: tier
      imageInfo:
        image: "docker.io/appdynamics/dotnet-core-agent:latest"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
    - namespaceRegex: books
      matchString: openmct
      language: nodejs
      imageInfo:
        image: "docker.io/appdynamics/nodejs-agent:20.5.0-alpinev10"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
      analyticsHost: <hostname of the Analytics Agent>
      analyticsPort: 443
      analyticsSslEnabled: true
```
After saving the values-ca1.yaml file with the added auto-instrumentation configuration, upgrade the Helm chart:

```
helm upgrade -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace appdynamics
```
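As a mental model for the configuration above (not the agent's actual code): a pod is matched against each instrumentationRule by its namespace via namespaceRegex and, when present, its name via matchString; pods in instrumented namespaces that match no rule fall back to the default imageInfo (Java here). A bash sketch with hypothetical pod names:

```shell
# Simplified rule selection for the example values file above.
# Args: <namespace> <pod-name>; prints the agent language chosen.
pick_rule() {
  case "$1" in
    groceries) echo dotnetcore ;;                       # rule 1: namespaceRegex groceries
    books)
      case "$2" in
        *openmct*) echo nodejs ;;                        # rule 2: matchString openmct
        *) echo java ;;                                  # no rule matched: default java
      esac ;;
    *) echo java ;;                                      # default imageInfo.java
  esac
}

pick_rule groceries cart-svc      # prints dotnetcore
pick_rule books openmct-web       # prints nodejs
pick_rule ecom checkout           # prints java
```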
Configuration Options
Config option | Description | Required |
---|---|---|
installClusterAgent | Used for installing the Cluster Agent. This must be set to true. | Optional (defaults to true) |
Image config options (under the imageInfo key in values.yaml) | | |
imageInfo.agentImage | Cluster Agent image address in the format <registryUrl>/<registryAccount>/<project> | Optional (defaults to the Docker Hub image) |
imageInfo.agentTag | Cluster Agent image tag/version | Optional (defaults to latest) |
imageInfo.operatorImage | Operator image address in the format <registryUrl>/<registryAccount>/<project> | Optional (defaults to the Docker Hub image) |
imageInfo.operatorTag | Operator image tag/version | Optional (defaults to latest) |
imageInfo.imagePullPolicy | Image pull policy for the operator pod | Optional |
Controller config options (under the controllerInfo key in values.yaml) | | |
controllerInfo.accessKey | Controller access key. | Required. Note: Not required if you have created a secret based on the access key. See Create Secret. |
controllerInfo.account | Controller account | Required |
controllerInfo.authenticateProxy | true/false if the proxy requires authentication | Optional |
controllerInfo.customSSLCert | Base64 encoding of a PEM-formatted SSL certificate | Optional |
controllerInfo.password | Password for a local user from the Controller. Note: Not required if you have created a secret based on the access key. See Create Secret. | Required only when auto-instrumentation is enabled |
controllerInfo.proxyPassword | Password for proxy authentication | Optional |
controllerInfo.proxyUrl | Proxy URL if the Controller is behind a proxy | Optional |
controllerInfo.proxyUser | Username for proxy authentication | Optional |
controllerInfo.url | Controller URL | Required |
controllerInfo.username | Username for a local user from the Controller. | Required only when auto-instrumentation is enabled |
Cluster Agent config (under the clusterAgent key in values.yaml). Note: For OpenShift versions later than 4.14, ensure that all child parameters within securityContext are specified based on the permissible values outlined by the security context constraints (SCCs). See Managing Security Context Constraints in the Red Hat OpenShift documentation. For example, to use the runAsUser property, the user ID (UID) must be within the SCCs' permissible range of 1000 to 9001, so you can only set runAsUser to a value within this range. The same applies to the other security context parameters. | | |
clusterAgent.appName | Name of the cluster; displays in the Controller UI as your cluster name. | Required |
clusterAgent.eventUploadInterval | How often Kubernetes warning and state-change events are uploaded to the Controller, in seconds. See Monitor Kubernetes Events. | Optional |
clusterAgent.httpClientTimeout | Number of seconds after which the server call is terminated if no response is received from the Controller. | Optional |
clusterAgent.imagePullSecret | Credential file used to authenticate when pulling images from your private Docker registry or repository. Based on your Docker registry configuration, you may need to create a secret file for the Splunk AppDynamics Operator to use when pulling the image for the Cluster Agent. See Create a Secret by providing credentials on the command line. | Optional |
clusterAgent.instrumentationMaxPollingAttempts | The maximum number of times the Cluster Agent checks for the successful rollout of instrumentation before marking it as failed. | Optional |
clusterAgent.logProperties.logFileSizeMb | Maximum file size of the log in MB. | Optional |
clusterAgent.logProperties.logFileBackups | Maximum number of log backups saved. When the maximum number of backups is reached, the oldest log file after the initial log file is deleted. | Optional |
clusterAgent.logProperties.logLevel | Level of log detail: INFO, WARNING, DEBUG, or TRACE. | Optional |
clusterAgent.logProperties.maxPodLogsTailLinesCount | Number of lines to tail while collecting logs. To use this parameter, enable the log capturing feature. See Enable Log Collection for Failing Pods. | Optional |
clusterAgent.logProperties.stdoutLogging | By default, the Cluster Agent writes to a log file in the logs directory. Set this to true to additionally write logs to stdout. | Optional |
clusterAgent.nsToMonitorRegex | The regular expression for selecting the namespaces to monitor in the cluster. To monitor multiple namespaces, separate them with \| without spaces. If you are using the Target Allocator, you must specify all the namespaces that you require to monitor; the Target Allocator auto-allocates these namespaces to individual Cluster Agent replicas. See Edit Namespaces. Note: Any modification to the namespaces in the UI takes precedence over the YAML configuration. | Optional |
clusterAgent.nsToExcludeRegex | The regular expression for the namespaces to exclude from those matched by nsToMonitorRegex. Note: This parameter can be used only if you have specified a value for nsToMonitorRegex. | Optional |
clusterAgent.priorityClassName | The name of the pod priority class, which is used in the pod specification to set the priority. | Optional |
clusterAgent.securityContext.runAsGroup | If you configured the application container as a non-root user, provide the group ID (GID) of the corresponding group. This sets the appropriate file permission on the agent artifacts and is applied to all the instrumented resources. Add this parameter only if you require to override the default value; the Cluster Agent image contains a group with GID 9001. | Optional |
clusterAgent.securityContext.runAsUser | If you configured the application container as a non-root user, provide the user ID (UID) of the corresponding user. This sets the appropriate file permission on the agent artifacts and is applied to all the instrumented resources. Add this parameter only if you require to override the default value; the Cluster Agent image contains a user with UID 9001. | Optional |
clusterAgent.securityContext.allowPrivilegeEscalation | Controls whether a process can gain more privileges than its parent process. The value is always true when the container runs as privileged or has the CAP_SYS_ADMIN capability. Note: This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.capabilities | Adds or removes POSIX capabilities from the running containers. This uses the default set of capabilities during container runtime. Note: This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.privileged | Runs the container in privileged mode, which is equivalent to root on the host. Note: This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.procMount | The type of proc mount to use for the containers. Note: This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.readOnlyRootFilesystem | Specifies whether this container has a read-only root filesystem. Note: This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.runAsNonRoot | Specifies whether the container must run as a non-root user. If the value is true, the kubelet validates the image at runtime to ensure that the container fails to start when run as root. If this parameter is not specified or the value is false, there is no validation. Note: This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.seLinuxOptions | Applies the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container. Note: This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.seccompProfile | Specifies the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options. Note: This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.windowsOptions | Specifies Windows-specific options for every container. Note: This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
Cluster Agent pod config | | |
agentPod.labels | Adds any required pod labels to the Cluster Agent pod. These labels are also added to the Cluster Agent deployment. | Optional |
agentPod.nodeSelector | The Agent pod runs on a node that includes the specified key-value pair within its labels property. See nodeSelector. | Optional |
agentPod.resources | Requests and limits of CPU and memory resources for the Cluster Agent. | Optional |
agentPod.tolerations | An array of tolerations required for the pod. See Taints and Tolerations. | Optional |
Pod filter config | | |
podFilter | Blocklists or allowlists pods based on names or labels. Blocklisting or allowlisting by name takes preference over blocklisting or allowlisting by labels. For example, a podFilter with blocklistedLabels of release: v1 and allowlistedNames of ^podname blocks all pods that have the label release: v1, except those whose names match ^podname, because filtering by name takes preference. | Optional |
Target Allocator config | | |
targetAllocator.enabled | Enables auto-allocation of namespaces to the available Cluster Agent replicas. This is disabled by default; set enabled to true to turn it on. For information about the Target Allocator, see Target Allocator. | Optional |
targetAllocator.clusterAgentReplicas | The number of Cluster Agent replicas. If the Target Allocator is enabled, the default value is 3. Set the number of replicas based on your requirements; to decide how many replicas are required, see Cluster Agent Requirements and Supported Environments. | Optional. Applies only when the Target Allocator is enabled. |
targetAllocator.autoScaling.enabled | The default value is false. Specify true to enable auto-scaling for creating replicas. | Optional |
targetAllocator.autoScaling.replicaProfile | The profile to use. Currently only the Default profile is supported. | Optional. Required when auto-scaling is enabled. |
targetAllocator.autoScaling.maxClusterAgentReplicas | The maximum number of replicas to auto-scale to. | Optional |
targetAllocator.autoScaling.scaleDown.stabilizationWindowSeconds | The time in seconds after which the Target Allocator can scale down the replicas. Scale-down may result in a metrics drop. By default, this parameter is disabled. | Optional |
Target Allocator pod config | | |
targetAllocatorPod.labels | Adds any required pod labels to the pod. These labels are also added to the Target Allocator deployment. | Optional |
targetAllocatorPod.nodeSelector | The Target Allocator pod runs on a node that includes the specified key-value pair within its labels property. See nodeSelector. | Optional |
targetAllocatorPod.resources | Requests and limits of CPU and memory resources for the Target Allocator. | Optional |
targetAllocatorPod.tolerations | An array of tolerations required for the pod. See Taints and Tolerations. | Optional |
targetAllocatorPod.imagePullPolicy | Image pull policy for the Target Allocator. | Optional |
targetAllocatorPod.imagePullSecret | Credential file used to authenticate when pulling images from your private Docker registry or repository. Based on your Docker registry configuration, you may need to create a secret file for the Splunk AppDynamics Operator to use when pulling the image for the Target Allocator, which is the same as the Cluster Agent image. See Create a Secret by providing credentials on the command line. | Optional |
targetAllocatorPod.priorityClassName | The name of the pod priority class, which is used in the pod specification to set the priority. | Optional |
targetAllocatorPod.securityContext | The security context for the Target Allocator pod. You can include the following parameters under targetAllocatorPod.securityContext: runAsGroup, runAsUser, allowPrivilegeEscalation, capabilities, privileged, procMount, readOnlyRootFilesystem, runAsNonRoot, seLinuxOptions, seccompProfile, and windowsOptions. These behave as described for clusterAgent.securityContext (same GID/UID 9001 defaults; the notes about Deployment and DeploymentConfig mode apply). Note: For OpenShift versions later than 4.14, ensure that all child parameters within securityContext are specified based on the permissible values outlined by the security context constraints (SCCs). See Managing Security Context Constraints in the Red Hat OpenShift documentation. | Optional |
Install Multiple Cluster Agents in a Cluster
The Cluster Agent Helm Chart supports multiple Cluster Agent installations in a cluster. This may be necessary for larger clusters that exceed the pod monitoring limit for a single Cluster Agent. See Cluster Agent Requirements and Supported Environments.
If you do not need auto-instrumentation or manual correlation (for Kubernetes >= 1.25), you can install the Cluster Agent with the Target Allocator.
The Target Allocator:
- simplifies the monitoring of large clusters by creating the specified number of replicas of the Cluster Agent.
- auto-allocates namespaces to the available Cluster Agent replicas.
- aggregates the cluster data and sends it to the Controller.
Each deployed Cluster Agent must have a different configuration. This is achieved by limiting monitoring to a distinct set of namespaces and pods using the nsToMonitorRegex, nsToExcludeRegex, and podFilter properties. See Configure the Cluster Agent.
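When splitting a cluster across agents by hand, it is worth verifying that no namespace matches two agents' regexes. A small illustrative check, using `grep -E` as an approximation of the agents' matching and made-up namespace names:

```shell
# Hypothetical cluster namespaces and the two agents' monitoring regexes.
all_ns='dev-a dev-b stage-a prod-a'
regex1='^dev-.*'    # agent 1: nsToMonitorRegex
regex2='^stage-.*'  # agent 2: nsToMonitorRegex

overlap=0
for ns in $all_ns; do
  # A namespace matching both regexes would be monitored twice.
  if echo "$ns" | grep -Eq "$regex1" && echo "$ns" | grep -Eq "$regex2"; then
    overlap=1
    echo "overlap: $ns"
  fi
done
[ "$overlap" -eq 0 ] && echo "no overlap between the two agents"
```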
To install Cluster Agents:
Cluster Agent Helm Chart Configuration Examples
These examples display various configurations for the Cluster Agent Helm chart:
Use the Cluster Agent Helm Chart to Enable Custom SSL
user-values.yaml
controllerInfo:
url: https://<controller-url>:443
account: <appdynamics-controller-account>
accessKey: <appdynamics-controller-access-key>
#=====
customSSLCert: "<base64 of PEM formatted cert>"
#=====
agentServiceAccount: appdynamics-cluster-agent-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
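The customSSLCert value is typically produced by base64-encoding the PEM file with no line wrapping. A sketch using a throwaway stand-in for a real certificate (the PEM body below is dummy data, and the `base64 -w0` flag is GNU-specific, hence the portable fallback):

```shell
# Dummy PEM file standing in for a real certificate.
cat > /tmp/demo-cert.pem <<'EOF'
-----BEGIN CERTIFICATE-----
MIIBszCCARwCCQDdummydummydummy
-----END CERTIFICATE-----
EOF

# Single-line base64, suitable for the customSSLCert value.
customSSLCert=$(base64 -w0 < /tmp/demo-cert.pem 2>/dev/null \
  || base64 < /tmp/demo-cert.pem | tr -d '\n')

# Round-trip check: decoding restores the PEM header.
echo "$customSSLCert" | base64 -d | head -n1
```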
Use the Cluster Agent Helm Chart to Enable the Proxy Controller
Without authentication:
user-values.yaml
# To install Cluster Agent
installClusterAgent: true
controllerInfo:
url: https://<controller-url>:443
account: <appdynamics-controller-account>
accessKey: <appdynamics-controller-access-key>
#=====
proxyUrl: http://proxy-url.appd-controller.com
#=====
agentServiceAccount: appdynamics-cluster-agent-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
With authentication:
user-values.yaml
# To install Cluster Agent
installClusterAgent: true
controllerInfo:
url: https://<controller-url>:443
account: <appdynamics-controller-account>
accessKey: <appdynamics-controller-access-key>
#=====
authenticateProxy: true
proxyUrl: http://proxy-url.appd-controller.com
proxyUser: hello
proxyPassword: world
#=====
agentServiceAccount: appdynamics-cluster-agent-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
Use the Cluster Agent Helm Chart to add nodeSelector and tolerations
user-values.yaml
agentPod:
nodeSelector:
nodeLabelKey: nodeLabelValue
tolerations:
- effect: NoExecute
operator: Equal
key: key1
value: val1
tolerationSeconds: 11
operatorPod:
nodeSelector:
nodeLabelKey: nodeLabelValue
anotherNodeLabel: anotherNodeLabel
tolerations:
- operator: Exists
key: key1
Best Practices for Sensitive Data
We recommend keeping sensitive data in separate values.yaml files. Examples of these values are:
- controllerInfo.password
- controllerInfo.accessKey
- controllerInfo.customSSLCert
- controllerInfo.proxyPassword
Each values file follows the structure of the default values.yaml, enabling you to share files containing non-sensitive configuration properties while keeping sensitive values safe.
user-values.yaml
# To install Cluster Agent
installClusterAgent: true
imageInfo:
agentImage: dtr.corp.appdynamics.com/sim/cluster-agent
agentTag: latest
operatorImage: docker.io/appdynamics/cluster-agent-operator
operatorTag: latest
imagePullPolicy: Always
controllerInfo:
url: https://<controller-url>:443
account: <appdynamics-controller-account>
username: <appdynamics-controller-username>
password: <appdynamics-controller-password>
accessKey: <appdynamics-controller-access-key>
agentServiceAccount: appdynamics-cluster-agent-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
user-values-sensitive.yaml
controllerInfo:
password: welcome
accessKey: abc-def-ghi-1516
When installing the Helm Chart, pass each values file with its own -f option:
helm install -f ./user-values.yaml -f ./user-values-sensitive.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace ca-appdynamics
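When the same key appears in several -f files, Helm lets the rightmost file on the command line win. The sketch below models that precedence with flat key=value files rather than real YAML, so the merge logic here is an illustration, not Helm's implementation:

```shell
# Non-sensitive file: accessKey is a placeholder here.
cat > /tmp/user-values.txt <<'EOF'
accessKey=placeholder
url=https://controller:443
EOF

# Sensitive file: overrides accessKey.
cat > /tmp/user-values-sensitive.txt <<'EOF'
accessKey=abc-def-ghi-1516
EOF

# Later files override earlier ones, mimicking Helm's -f precedence.
merge() { cat "$@" | awk -F= '{v[$1]=$2} END {for (k in v) print k"="v[k]}'; }
merge /tmp/user-values.txt /tmp/user-values-sensitive.txt | sort
```

The merged result keeps url from the first file but takes accessKey from the second, which is why the sensitive file should come last on the helm install command line.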