Install the Cluster Agent Using a Helm Chart

This page describes how to deploy the Cluster Agent using the Cluster Agent Helm chart.

Helm is the package manager for Kubernetes. A Helm chart is a collection of files that describe a set of Kubernetes resources. The Cluster Agent Helm chart is a convenient way to deploy the Splunk AppDynamics Operator and the Cluster Agent. You can also use the Cluster Agent Helm chart to deploy multiple Cluster Agents to a single cluster, which may be necessary for large clusters that exceed the pod monitoring limit of a single Cluster Agent. See Cluster Agent Requirements and Supported Environments.

Requirements

  • Cluster Agent version 20.6 or later
  • Controller version 20.6 or later
  • The Cluster Agent Helm chart is compatible with Helm 3.0
Note:
  • We recommend using Cluster Agent Helm chart version v1.1.0 or later.

  • Use Cluster Agent Helm chart version v1.10.0 or later to install Splunk AppDynamics Cluster Agent 23.2.0 or later and Operator version 23.2.0 or later.

  • Use Cluster Agent Helm chart version v1.1.0 or later to install Splunk AppDynamics Cluster Agent 21.12.0 or later and Operator version 21.12.0 or later.

  • You can use an older major version of the Cluster Agent Helm chart (0.1.19 or earlier) to install Cluster Agent version 21.10.0 or earlier and Operator version 0.6.11 or earlier.

Install a Single Cluster Agent in a Cluster

  1. Use the following commands to delete any previously installed CustomResourceDefinitions (CRDs) related to Splunk AppDynamics agents:

    $ kubectl get crds 
    $ kubectl delete crds <crd-names>
  2. Add the chart repository to Helm:

    helm repo add appdynamics-cloud-helmcharts https://appdynamics.jfrog.io/artifactory/appdynamics-cloud-helmcharts/
  3. Create the appdynamics namespace in the cluster:

    kubectl create namespace appdynamics
  4. Create a Helm values file; this example uses values-ca1.yaml. Update the controllerInfo properties with your Controller credentials. Update the clusterAgent properties to configure the namespaces and pods to monitor. For details on the available properties nsToMonitorRegex, nsToExcludeRegex, and podFilter, see Configure the Cluster Agent.

    values-ca1.yaml

    # To install Cluster Agent 
    installClusterAgent: true
    
    
    # controller info
    controllerInfo:
      url: https://<controller-url>:443
      account: <appdynamics-controller-account>                   
      username: <appdynamics-controller-username>                          
      password: <appdynamics-controller-password>                                 
      accessKey: <appdynamics-controller-access-key>  
    
    # Cluster agent config
    clusterAgent:
      nsToMonitorRegex: dev-.*
    Note: From Cluster Agent 24.9 onwards, server monitoring user credentials or API user credentials are no longer required to mark associated nodes as historical when a pod is deleted. When a pod is deleted, the associated nodes on the Controller are automatically marked as historical.
    For details on the available options, see values.yaml Configuration Options. You can also download a copy of values.yaml from the Helm chart repository using the following command:
    helm show values appdynamics-cloud-helmcharts/cluster-agent
  5. (Optional) If you need to monitor a single cluster with multiple Cluster Agents, configure the Target Allocator. To allow the Operator to create Cluster Agent replicas, enable the Target Allocator with the number of Cluster Agent replicas in the values-ca1.yaml file.
    values-ca1.yaml
    ...  
    
    # Cluster agent config
    clusterAgent:
      nsToMonitorRegex: dev-.*
    
    #Instrumentation config
    instrumentationConfig:
      enabled: false
    
    # Target allocator Config
    targetAllocator:
      enabled: true
      clusterAgentReplicas: 3
      autoScaling:
        enabled: false #false by default
        replicaProfile: Default 
        maxClusterAgentReplicas: 12
        scaleDown:
          stabilizationWindowSeconds: 86400 #In Seconds
    
    # Target Allocator pod specific properties
    targetAllocatorPod:
      imagePullPolicy: ""
      imagePullSecret: ""
      priorityClassName: ""
      nodeSelector: {}
      tolerations: []
      resources:
        limits:
          cpu: "500m"
          memory: "500Mi"
        requests:
          cpu: "200m"
          memory: "200Mi"
      labels: {}
      securityContext: {}
    For more information about the Target Allocator, see Target Allocator.
  6. (Optional) Create a secret based on the Controller access key:
    kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key='<access-key>' --from-literal=api-user='<username@account:password>'
  7. If you have not installed the Kubernetes metrics-server on your cluster (usually located in the kube-system namespace), set install.metrics-server to true in the values file to invoke the subchart and install it:

    install:
      metrics-server: true
    Note: Setting install.metrics-server installs metrics-server in the namespace specified with the --namespace flag, which is the same namespace as the Cluster Agent.
  8. Deploy the Cluster Agent to the appdynamics namespace:

    helm install -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace=appdynamics
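The interplay of nsToMonitorRegex and nsToExcludeRegex from step 4 can be sketched in Python. This is an illustration only: the helper below and its anchored-regex matching are assumptions, not the agent's actual code.

```python
import re

def monitored_namespaces(namespaces, ns_to_monitor_regex, ns_to_exclude_regex=None):
    # Keep namespaces matching nsToMonitorRegex, then drop any that also
    # match nsToExcludeRegex (anchored matching assumed for illustration).
    selected = [ns for ns in namespaces if re.fullmatch(ns_to_monitor_regex, ns)]
    if ns_to_exclude_regex:
        selected = [ns for ns in selected if not re.fullmatch(ns_to_exclude_regex, ns)]
    return selected

namespaces = ["dev-payments", "dev-search", "dev-tools", "prod-payments"]
print(monitored_namespaces(namespaces, r"dev-.*", r"dev-tools"))
# ['dev-payments', 'dev-search']
```

With `nsToMonitorRegex: dev-.*` as in the values file above, only the `dev-` namespaces are selected; an exclude regex then prunes that selection.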

Enable Auto-Instrumentation

Once you have validated that the Cluster Agent was successfully installed, you can add additional configuration to the instrumentationConfig section of the values YAML file to enable auto-instrumentation. In this example, instrumentationConfig.enabled has been set to true, and multiple instrumentationRules have been defined. See Auto-Instrument Applications with the Cluster Agent.

values-ca1.yaml with Auto-Instrumentation Enabled
# To install Cluster Agent 
installClusterAgent: true


# AppDynamics controller info
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>                   
  username: <appdynamics-controller-username>                          
  password: <appdynamics-controller-password>                                 
  accessKey: <appdynamics-controller-access-key>  

# Cluster agent config
clusterAgent:
  nsToMonitorRegex: ecom|books|groceries

instrumentationConfig:
  enabled: true
  instrumentationMethod: Env
  nsToInstrumentRegex: ecom|books|groceries
  defaultAppName: Ecommerce
  tierNameStrategy: manual
  enableInstallationReport: false
  imageInfo:
    java:
      image: "docker.io/appdynamics/java-agent:latest"
      agentMountPath: /opt/appdynamics
      imagePullPolicy: Always
  instrumentationRules:
    - namespaceRegex: groceries
      language: dotnetcore
      tierName: tier
      imageInfo:
        image: "docker.io/appdynamics/dotnet-core-agent:latest"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
    - namespaceRegex: books
      matchString: openmct
      language: nodejs
      imageInfo:
        image: "docker.io/appdynamics/nodejs-agent:20.5.0-alpinev10"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
      analyticsHost: <hostname of the Analytics Agent>
      analyticsPort: 443
      analyticsSslEnabled: true
After saving the values-ca1.yaml file with the added auto-instrumentation configuration, you must upgrade the Helm Chart:
helm upgrade -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace appdynamics
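The instrumentationRules above select workloads by namespaceRegex and an optional matchString. The sketch below is a rough, hypothetical model of that selection; the Cluster Agent's real rule evaluation considers more criteria, and the helper names are invented for illustration.

```python
import re

# The two instrumentationRules from the example above, reduced to the
# fields used here (namespaceRegex, optional matchString, language).
rules = [
    {"namespaceRegex": "groceries", "language": "dotnetcore"},
    {"namespaceRegex": "books", "matchString": "openmct", "language": "nodejs"},
]

def matching_language(namespace, deployment_name, default="java"):
    # Return the agent language of the first rule that matches; otherwise
    # fall back to the default imageInfo settings (simplified model).
    for rule in rules:
        if not re.fullmatch(rule["namespaceRegex"], namespace):
            continue
        if "matchString" in rule and rule["matchString"] not in deployment_name:
            continue
        return rule["language"]
    return default

print(matching_language("books", "openmct-web"))  # nodejs
print(matching_language("groceries", "cart"))     # dotnetcore
```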

Configuration Options

Config option | Description | Required

installClusterAgent

Used for installing the Cluster Agent. This must be set to true.

Optional (Defaults to true)

Image config options (Config options under imageInfo key in values.yaml)

imageInfo.agentImage

Cluster Agent image address in the format <registryUrl>/<registryAccount>/<project>

Optional (Defaults to the Docker Hub image)

imageInfo.agentTag

Cluster Agent image tag/version

Optional (Defaults to latest)

imageInfo.operatorImage

Operator image address in the format <registryUrl>/<registryAccount>/<project>

Optional (Defaults to the Docker Hub image)

imageInfo.operatorTag

Operator image tag/version

Optional (Defaults to latest)

imageInfo.imagePullPolicy

Image pull policy for the Operator pod

Optional

Custom Kubernetes API server configuration options (Config options under customKubeconfig key in values.yaml)

enable

To enable the customKubeconfig parameter. The value is set to false by default. Set this parameter to true to use the Kubernetes API server.

Optional

server

The Kubernetes API server URL for the cluster.

Required when customKubeconfig is enabled.

cluster

The name of the cluster.

Required when customKubeconfig is enabled.

user

The username to access the Kubernetes API server.

Required when customKubeconfig is enabled.

Controller config options (Config options under controllerInfo key in values.yaml)
controllerInfo.accessKey

Controller accessKey

Required

Note: This is not required if you have created an access-key secret based on the access key. See Create Secret.

controllerInfo.account

Controller account

Required

controllerInfo.authenticateProxy

true/false if the proxy requires authentication

Optional

controllerInfo.customSSLCert

Base64 encoding of the PEM-formatted SSL certificate

Optional

controllerInfo.password

Password for local user from the Controller.

Required only when auto-instrumentation is enabled.

Note: This is not required if you have created an access-key secret based on the access key. See Create Secret.
controllerInfo.proxyPassword

Password for proxy authentication

Optional

controllerInfo.proxyUrl

Proxy URL if the Controller is behind a proxy

Optional

controllerInfo.proxyUser

Username for proxy authentication

Optional

controllerInfo.url

Controller URL

Required

controllerInfo.username

Username for local user from the Controller.

Required only when auto-instrumentation is enabled.

Cluster Agent Config (Config options under clusterAgent key in values.yaml)
Note: For OpenShift version > 4.14, ensure that all the child parameters within securityContext are specified based on the permissible values outlined by the security context constraints (SCCs). See Managing Security Context Constraints in the Red Hat OpenShift documentation.

For example, if you want to use the RunAsUser property, then the user ID (UID) should be in the permissible range. The SCCs permissible range for UID is 1000 to 9001. Therefore, you can add a RunAsUser value within this range only. The same applies to other security context parameters.

clusterAgent.appName

Name of the cluster; displays in the Controller UI as your cluster name.

Required
clusterAgent.eventUploadInterval

How often Kubernetes warning and state-change events are uploaded to the Controller in seconds. See Monitor Kubernetes Events.

Optional
clusterAgent.httpClientTimeout

If no response is received from the Controller, number of seconds after which the server call is terminated.

Optional
clusterAgent.imagePullSecret

Credential file used to authenticate when pulling images from your private Docker registry or repository. Based on your Docker registry configuration, you may need to create a secret file for the Splunk AppDynamics Operator to use when pulling the image for the Cluster Agent. See Create a Secret by providing credentials on the command line.

Optional
clusterAgent.instrumentationMaxPollingAttempts

The maximum number of times Cluster Agent checks for the successful rollout of instrumentation before marking it as failed.

Optional
clusterAgent.logProperties.logFileSizeMb

Maximum file size of the log in MB.

Optional
clusterAgent.logProperties.logFileBackups

Maximum number of backups saved in the log. When the maximum number of backups is reached, the oldest log file after the initial log file is deleted.

Optional
clusterAgent.logProperties.logLevel

Level of log detail: INFO, WARNING, DEBUG, or TRACE.

Optional
clusterAgent.logProperties.maxPodLogsTailLinesCount

Number of lines to be tailed while collecting logs.

To use this parameter, enable the log capturing feature. See Enable Log Collection for Failing Pods.

Optional
clusterAgent.logProperties.stdoutLogging

By default, the Cluster Agent writes to a log file in the logs directory. Additionally, the stdoutLogging parameter is provided to send logs to the container stdout.

Optional
clusterAgent.nsToMonitorRegex

The regular expression for selecting the required namespaces to be monitored in the cluster.

If you need to monitor multiple namespaces, separate them using | without spaces.

If you are using the Target Allocator, you must specify all the namespaces that you need to monitor. The Target Allocator auto-allocates these namespaces to individual Cluster Agent replicas.

See Edit Namespaces.

Note: Any modification to the namespaces in the UI takes precedence over the YAML configuration.
Optional
clusterAgent.nsToExcludeRegex

The regular expression for the namespaces to exclude from those selected by the regular expression specified for nsToMonitorRegex.

Note:
  • This parameter is supported in Cluster Agent >= 20.9, and Controller >= 20.10.
  • Any modification to the namespaces in the UI takes precedence over the YAML configuration.

This parameter can be used only if you have specified a value for the nsToMonitorRegex parameter.

Optional
clusterAgent.priorityClassName

The name of the pod priority class, which is used in the pod specification to set the priority.

Optional

clusterAgent.securityContext.runAsGroup

If you configured the application container as a non-root user, provide the groupId (GID) of the corresponding group.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsGroup that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.

Cluster Agent image contains a group with GID 9001.

Optional

clusterAgent.securityContext.runAsUser

If you configured the application container as a non-root user, provide the userId (UID) of the corresponding user.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsUser that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.

Cluster Agent image contains a user with UID 9001.

Optional

clusterAgent.securityContext.allowPrivilegeEscalation

To control whether a process can gain more privileges than its parent process. The value is always true when the container:

  • runs as privileged
  • has the CAP_SYS_ADMIN capability
Note:
  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
clusterAgent.securityContext.capabilities

To add or remove POSIX capabilities from the running containers. This uses the default set of capabilities during container runtime.

Note:
  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
clusterAgent.securityContext.privileged

To run the container in privileged mode, which is equivalent to root on the host.

Note:
  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
clusterAgent.securityContext.procMount

The type of proc mount to use for the containers.

Note: This parameter is currently available for Deployment and Deployment Config mode.
Optional
clusterAgent.securityContext.readOnlyRootFilesystem

To specify if this container has a read-only root filesystem.

Note:
  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and Deployment Config mode.
Optional
clusterAgent.securityContext.runAsNonRoot

To specify if the container must run as a non-root user.

If the value is true, the Kubelet validates the image at runtime to ensure that the container fails to start when run as root. If this parameter is not specified or if the value is false, there is no validation.

Note: This parameter is currently available for Deployment and Deployment Config mode.
Optional
clusterAgent.securityContext.seLinuxOptions

To apply the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container.

Note:
  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
clusterAgent.securityContext.seccompProfile

To specify the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options.

Note:
  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and Deployment Config mode.
Optional
clusterAgent.securityContext.windowsOptions

To specify Windows-specific options for every container.

Note:
  • This parameter is unavailable when spec.os.name is linux.
  • This parameter is currently available for Deployment and Deployment Config mode.
Optional
Cluster Agent Pod Config
agentPod.labels

Adds any required pod labels to the Cluster Agent pod. These labels are also added to the deployment of Cluster Agent.

Optional
agentPod.nodeSelector

The Agent pod runs on the node that includes the specified key-value pair within its labels property. See nodeSelector.

Optional
agentPod.resources

Requests and limits of CPU and memory resources for the Cluster Agent.

Optional
agentPod.tolerations

An array of tolerations required for the pod. See Taint and Tolerations.

Optional
Pod Filter Config
podFilter

Blocklist or allowlist pods based on:
  • Regular expressions for pod names
  • Pod labels

Blocklisting or allowlisting by name takes preference over blocklisting or allowlisting by labels. For example, if you have the podFilter as:

podFilter:
  blocklistedLabels:
    - release: v1
  allowlistedNames:
    - ^podname

This blocks all the pods that have the label 'release=v1' except for the ones whose names start with 'podname'.

  • When a pod is listed as allowed by name and blocked by name, it will be allowlisted.
  • When a pod is listed as allowed by a label and blocked by a label, it will be allowlisted.

Optional
Target Allocator Config
targetAllocator.enabled

Enables auto-allocation of namespaces to the available Cluster Agent replicas. This is disabled by default. To enable this property, set enabled to true.

For information about Target Allocator, see Target Allocator.

Optional
targetAllocator.clusterAgentReplicas

The number of Cluster Agent replicas. If the Target Allocator is enabled, the default value is 3. Set the number of replicas based on your requirements. To decide how many replicas are required, see Cluster Agent Requirements and Supported Environments.

Required when targetAllocator.enabled is set to true; otherwise optional.

targetAllocator.autoScaling.enabled

The default value is false. Specify true to enable auto-scaling for creating replicas.

Optional
targetAllocator.autoScaling.replicaProfile

The profile to be used. Currently, only the Default profile is available. The Default profile uses 1550Mi memory and 3750m CPU to monitor 2500 pods.

Required when auto-scaling is enabled; otherwise optional.

targetAllocator.autoScaling.maxClusterAgentReplicas

The maximum number of replicas to auto-scale to.

Optional
targetAllocator.autoScaling.scaleDown.stabilizationWindowSeconds

Specify the time in seconds after which Target Allocator can scale down the replicas.

Scaling down may result in a drop in metrics. By default, this parameter is disabled.

Optional
Target Allocator Pod Config
targetAllocatorPod.labels

Adds any required pod labels to the Target Allocator pod. These labels are also added to the Target Allocator deployment.

Optional
targetAllocatorPod.nodeSelector

The Target Allocator pod runs on the node that includes the specified key-value pair within its labels property. See nodeSelector.

Optional
targetAllocatorPod.resources

Requests and limits of CPU and memory resources for the Target Allocator.

Optional
targetAllocatorPod.tolerations

An array of tolerations required for the pod. See Taint and Tolerations.

Optional
targetAllocatorPod.imagePullPolicy

Image pull policy for Target Allocator.

Optional
targetAllocatorPod.imagePullSecret

Credential file used to authenticate when pulling images from your private Docker registry or repository. Based on your Docker registry configuration, you may need to create a secret file for the Splunk AppDynamics Operator to use when pulling the image for the Target Allocator, which is the same as the Cluster Agent image. See Create a Secret by providing credentials on the command line.

Optional
targetAllocatorPod.priorityClassName

The name of the pod priority class, which is used in the pod specification to set the priority.

Optional

targetAllocatorPod.securityContext

Note:

For OpenShift version > 4.14, ensure that all the child parameters within securityContext are specified based on the permissible values outlined by the security context constraints (SCCs). See Managing Security Context Constraints in the Red Hat OpenShift documentation.

For example, if you want to use the RunAsUser property, then the user ID (UID) should be in the permissible range. The SCCs permissible range for UID is 1000 to 9001. Therefore, you can add the RunAsUser value within this range only. The same applies to other security context parameters.

You can include the following parameters under securityContext:

runAsGroup: If you configured the application container as a non-root user, provide the groupId of the corresponding group.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsGroup that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.

Optional

runAsUser: If you configured the application container as a non-root user, provide the userId of the corresponding user.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsUser that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.

allowPrivilegeEscalation: To control whether a process can gain more privileges than its parent process. The value is always true when the container:

  • runs as privileged
  • has the CAP_SYS_ADMIN capability

If you do not set this parameter, Helm uses a default value of true.

Note: This parameter is currently available for Deployment and Deployment Config mode.
capabilities: To add or remove POSIX capabilities from the running containers. This uses the default set of capabilities during container runtime.
Note: This parameter is currently available for Deployment and DeploymentConfig mode.

privileged: To run the container in privileged mode, which is equivalent to root on the host.

If you do not set this parameter, Helm uses a default value of true.

Note: This parameter is currently available for Deployment and DeploymentConfig mode.
procMount: The type of proc mount to use for the containers.
Note: This parameter is currently available for Deployment and Deployment Config mode.
readOnlyRootFilesystem: To specify if this container has a read-only root filesystem.
Note: This parameter is currently available for Deployment and Deployment Config mode.
runAsNonRoot: To specify if the container must run as a non-root user.

If the value is true, the Kubelet validates the image at runtime to ensure that the container fails to start when run as root. If this parameter is not specified or if the value is false, there is no validation.

Note: This parameter is currently available for Deployment and Deployment Config mode.
seLinuxOptions: To apply the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container.
Note: This parameter is currently available for Deployment and Deployment Config mode.
seccompProfile: To specify the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options.
Note: This parameter is currently available for Deployment and Deployment Config mode.
windowsOptions: To specify Windows-specific options for every container.
Note: This parameter is currently available for Deployment and Deployment Config mode.
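The podFilter precedence described in the table above (name-based rules take preference over label-based rules, and allowlisting wins over blocklisting) can be sketched as follows. The is_monitored helper is an illustration of the documented rules, not the agent's implementation.

```python
import re

def is_monitored(pod_name, pod_labels, pod_filter):
    # Name-based allow/block rules are checked before label-based rules,
    # and the allowlist is consulted before the blocklist at each level
    # (illustrative simplification of the documented precedence).
    for pattern in pod_filter.get("allowlistedNames", []):
        if re.search(pattern, pod_name):
            return True
    for pattern in pod_filter.get("blocklistedNames", []):
        if re.search(pattern, pod_name):
            return False
    for label in pod_filter.get("allowlistedLabels", []):
        if all(pod_labels.get(k) == v for k, v in label.items()):
            return True
    for label in pod_filter.get("blocklistedLabels", []):
        if all(pod_labels.get(k) == v for k, v in label.items()):
            return False
    return True

pod_filter = {"blocklistedLabels": [{"release": "v1"}], "allowlistedNames": ["^podname"]}
print(is_monitored("podname-123", {"release": "v1"}, pod_filter))  # True
print(is_monitored("other-pod", {"release": "v1"}, pod_filter))    # False
```

This reproduces the example from the table: pods labeled release=v1 are blocked unless their names start with podname.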

Install Multiple Cluster Agents in a Cluster

The Cluster Agent Helm Chart supports multiple Cluster Agent installations in a cluster. This may be necessary for larger clusters that exceed the pod monitoring limit for a single Cluster Agent. See Cluster Agent Requirements and Supported Environments.

Note:

If you do not need auto-instrumentation or manual correlation (for Kubernetes >= 1.25), you can install the Cluster Agent with the Target Allocator.

The Target Allocator:

  • simplifies the monitoring of large clusters by creating the specified number of replicas of the Cluster Agent.
  • auto-allocates namespaces to the available Cluster Agent replicas.
  • aggregates the cluster data to send to the Controller.
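The auto-allocation step can be pictured with a toy round-robin scheme. Note that the real Target Allocator algorithm is not documented here, so this sketch and its function name are purely hypothetical.

```python
def allocate_namespaces(namespaces, replicas):
    # Hypothetical round-robin assignment of monitored namespaces to
    # Cluster Agent replicas; the actual allocation strategy may differ.
    assignment = {i: [] for i in range(replicas)}
    for idx, ns in enumerate(sorted(namespaces)):
        assignment[idx % replicas].append(ns)
    return assignment

print(allocate_namespaces(["dev-a", "dev-b", "dev-c", "dev-d"], 3))
# {0: ['dev-a', 'dev-d'], 1: ['dev-b'], 2: ['dev-c']}
```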

Each Cluster Agent that is deployed must have a different configuration. This is achieved by limiting the monitoring to a distinct set of namespaces and pods using the nsToMonitorRegex, nsToExcludeRegex, and podFilter properties. See Configure the Cluster Agent.

To install Cluster Agents:

  1. Create a new values file (this example calls it values-ca2.yaml) that uses the same controllerInfo properties as the first Cluster Agent. Add additional properties, such as nsToMonitorRegex and podFilter, to set the monitoring scope for this Cluster Agent.
    values-ca2.yaml
    # To install Cluster Agent
    installClusterAgent: true
    # AppDynamics controller info
    controllerInfo:
      url: https://<controller-url>:443
      account: <appdynamics-controller-account>
      accessKey: <appdynamics-controller-access-key>
    # Cluster agent config
    clusterAgent:
      nsToMonitorRegex: stage.*
      podFilter:
        allowlistedLabels:
          - label1: value1
          - label2: value2
        blocklistedLabels: []
        allowlistedNames: []
        blocklistedNames: []
  2. Create a namespace distinct from the previous namespace used for the first installation:
    kubectl create ns appdynamics-ca2
  3. Install the additional Cluster Agent:
    helm install -f ./values-ca2.yaml "<my-2nd-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace=appdynamics-ca2
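Since each agent must own a distinct monitoring scope, it is worth checking that the two nsToMonitorRegex values never select the same namespace. A small sketch, assuming anchored regex matching (an illustration, not agent code):

```python
import re

def double_monitored(namespaces, regex_a, regex_b):
    # Namespaces that both Cluster Agents' nsToMonitorRegex values would
    # select; this list should be empty so each agent has a distinct scope.
    return [ns for ns in namespaces
            if re.fullmatch(regex_a, ns) and re.fullmatch(regex_b, ns)]

namespaces = ["dev-payments", "stage-payments", "stage-search"]
print(double_monitored(namespaces, r"dev-.*", r"stage.*"))  # [] -> disjoint scopes
```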

Cluster Agent Helm Chart Configuration Examples

These examples display various configurations for the Cluster Agent Helm chart:

Use the Cluster Agent Helm Chart to Enable Custom SSL

user-values.yaml

controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  accessKey: <appdynamics-controller-access-key>
  #=====
  customSSLCert: "<base64 of PEM formatted cert>"
  #=====

agentServiceAccount: appdynamics-cluster-agent-ssl     # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl       # Can be any valid name
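customSSLCert expects the Base64 encoding of the PEM-formatted certificate text. One way to produce that value (the certificate body below is a placeholder, not a real certificate):

```python
import base64

# Base64-encode the PEM text itself, as customSSLCert expects.
pem = (
    "-----BEGIN CERTIFICATE-----\n"
    "MIIB<certificate body>\n"
    "-----END CERTIFICATE-----\n"
)
encoded = base64.b64encode(pem.encode()).decode()
print(encoded[:20], "...")
```

On systems with GNU coreutils, `base64 -w0 my-cert.pem` produces the same value from a certificate file.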

Use the Cluster Agent Helm Chart to Enable the Proxy Controller

Without authentication:

user-values.yaml

# To install Cluster Agent
installClusterAgent: true
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  accessKey: <appdynamics-controller-access-key>
  #=====
  proxyUrl: http://proxy-url.appd-controller.com
  #=====

agentServiceAccount: appdynamics-cluster-agent-ssl     # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl       # Can be any valid name

With authentication:

user-values.yaml

# To install Cluster Agent
installClusterAgent: true
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  accessKey: <appdynamics-controller-access-key>
  #=====
  authenticateProxy: true
  proxyUrl: http://proxy-url.appd-controller.com
  proxyUser: hello
  proxyPassword: world
  #=====

agentServiceAccount: appdynamics-cluster-agent-ssl     # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl       # Can be any valid name

Use the Cluster Agent Helm Chart to add nodeSelector and tolerations

user-values.yaml

agentPod:
  nodeSelector:
    nodeLabelKey: nodeLabelValue
  tolerations:
    - effect: NoExecute
      operator: Equal
      key: key1
      value: val1
      tolerationSeconds: 11
operatorPod:
  nodeSelector:
    nodeLabelKey: nodeLabelValue
    anotherNodeLabel: anotherNodeLabel
  tolerations:
    - operator: Exists
      key: key1

Best Practices for Sensitive Data

We recommend separating sensitive data into its own values.yaml file. Examples of these values are:

  • controllerInfo.password
  • controllerInfo.accessKey
  • controllerInfo.customSSLCert
  • controllerInfo.proxyPassword

Each values file follows the structure of the default values.yaml, enabling you to share files with non-sensitive configuration properties while keeping sensitive values safe.

This is the default user-values.yaml file example:

# To install Cluster Agent 
installClusterAgent: true
 
imageInfo:
  agentImage: dtr.corp.appdynamics.com/sim/cluster-agent
  agentTag: latest
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: latest
  imagePullPolicy: Always                            
 
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>                   
  username: <appdynamics-controller-username>                          
  password: <appdynamics-controller-password>                                 
  accessKey: <appdynamics-controller-access-key>
 
agentServiceAccount: appdynamics-cluster-agent-ssl     # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl       # Can be any valid name

user-values-sensitive.yaml

controllerInfo:
  password: welcome
  accessKey: abc-def-ghi-1516

When installing the Helm chart, use multiple -f flags to pass each values file:

helm install -f ./user-values.yaml -f ./user-values-sensitive.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace ca-appdynamics
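Helm merges the files key by key, with later -f files taking precedence. A simplified model of that merge (Helm's actual handling, for example of explicit nulls, is more involved; the merge_values helper is an illustration):

```python
def merge_values(base, override):
    # Later -f files override earlier ones key by key, merging nested
    # maps, similar to how Helm combines multiple values files.
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_values(merged[key], value)
        else:
            merged[key] = value
    return merged

shared = {"controllerInfo": {"url": "https://<controller-url>:443", "account": "<account>"}}
sensitive = {"controllerInfo": {"password": "welcome", "accessKey": "abc-def-ghi-1516"}}
print(merge_values(shared, sensitive))
```

The sensitive file only needs the keys it overrides; everything else is inherited from the shared file.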