Set up PSA in Amazon EKS

Set up the Web Monitoring PSA and API Monitoring PSA in Amazon Elastic Kubernetes Service (Amazon EKS) as follows. If you want to set up PSA in an existing Kubernetes cluster, skip the Create the Kubernetes Cluster section.

You can deploy PSA manually or by using the automation script.

Deploy manually:
  1. Create the Kubernetes Cluster
  2. Pull the Docker Image
  3. Tag and Push Images to the Registry
  4. Deploy PSA Manually
  5. Monitor the Kubernetes Cluster

Deploy using the automation script:
  1. Create the Kubernetes Cluster
  2. Deploy PSA Using the Automation Script
  3. Monitor the Kubernetes Cluster
Warning: This document contains links to AWS CLI documentation. Splunk AppDynamics makes no representation as to the accuracy of AWS CLI documentation because AWS CLI controls its own documentation.
Note: You can deploy PSA on an existing Kubernetes cluster in public or private clouds. The automation scripts do not support Kubernetes cluster creation.
Note:
  • If you use the automation script, you must manually set up the Kubernetes cluster and nodes and log in to container registries before deploying PSA.
  • If you use a separate registry, specify the registry in the automation script before deploying PSA:
    1. Open the install_psa file and go to the push_images_to_docker_registry() function.

    2. Under that function, after ${DOCKER_REGISTRY_URL}/, specify the registry names of sum-chrome-agent, sum-api-monitoring-agent, and sum-heimdall.
    3. Under the generate_psa_k8s_deployment() function, update the repository names in the YAML values.

  • You must build the images on a host with the same OS type as the Kubernetes cluster nodes.

Create the Kubernetes Cluster

To create a Kubernetes cluster in Amazon EKS:

  1. Install and configure AWS CLI.
  2. Install eksctl based on your platform by following the eksctl installation instructions.
  3. To create a Kubernetes cluster, enter:
    EKSCTL_CLUSTER_NAME=eks-heimdall-onprem-cluster
    EKSCTL_NODEGROUP_NAME=eks-heimdall-onprem-worker-nodes
    EKSCTL_KUBERNETES_VERSION=1.x.x
    eksctl create cluster \
    --name ${EKSCTL_CLUSTER_NAME} \
    --version ${EKSCTL_KUBERNETES_VERSION} \
    --region us-west-2 \
    --zones us-west-2a,us-west-2b,us-west-2c \
    --nodegroup-name ${EKSCTL_NODEGROUP_NAME} \
    --node-type t3.2xlarge \
    --nodes 4 \
    --nodes-min 2 \
    --nodes-max 6 \
    --ssh-access \
    --ssh-public-key ~/.ssh/id_rsa.pub \
    --managed \
    --vpc-nat-mode Disable
    Note: Replace EKSCTL_KUBERNETES_VERSION with one of the supported EKS Kubernetes versions.
    Note: The node-type, nodes, nodes-min, and nodes-max values in the code snippet are based on the recommended configuration type. You can specify a configuration of your choice with a different type and number of nodes. See EC2 instance types.

Access the Cluster

To access the Kubernetes cluster, follow these instructions to install kubectl, a utility to interact with the cluster.

To verify that the cluster is running, enter:

kubectl get nodes

(Optional) Configure Proxy Server

When you configure a proxy server, it applies to all domains. Configure a proxy server by specifying the proxy server address in the values.yaml file. See Key-Value Pairs Configuration.

To bypass any domains from the proxy server, perform the following steps:

Note: Configuring the bypass list is supported only on Web Monitoring PSA.
  1. Open the values.yaml file.
  2. Add the domain URLs to bypassList under browserMonitoringAgent:
    browserMonitoringAgent:
      enabled: true
      server: "<proxy server address>"
      bypassList: "<domain URLs to bypass, separated by semicolons>"

    For example, bypassList: "*abc.com;*xyz1.com;*xyz2.com". Domain URLs that you specify in bypassList are not redirected to the proxy server; all other domain URLs are redirected to the proxy server. You can add any number of domains to the bypassList.
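Before writing the bypassList into values.yaml, you can sanity-check the string for stray semicolons. This is an illustrative sketch; the validate_bypass_list helper is not part of PSA:

```shell
#!/bin/sh
# Illustrative helper: reject bypassList values that contain empty entries
# (for example "a;;b", or a leading/trailing semicolon).
validate_bypass_list() {
  list="$1"
  [ -n "$list" ] || { echo "invalid"; return 1; }
  case ";${list};" in
    *";;"*) echo "invalid"; return 1 ;;
    *) echo "ok" ;;
  esac
}
```

For example, `validate_bypass_list "*abc.com;*xyz1.com;*xyz2.com"` prints `ok`, while a value with a doubled semicolon prints `invalid`.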

Pull the Docker Image

Pull the pre-built Docker images for sum-chrome-agent, sum-api-monitoring-agent, and sum-heimdall from DockerHub. The pre-built images include the dependent libraries, so you can use these images even when you do not have access to the Internet.

Run the following commands to pull the agent images:

docker pull appdynamics/heimdall-psa
docker pull appdynamics/chrome-agent-psa
docker pull appdynamics/api-monitoring-agent-psa

Alternatively, you can download the .tar file from the Splunk AppDynamics Download Center. This file includes pre-built Docker images for sum-chrome-agent, sum-api-monitoring-agent, sum-heimdall, and ignite, along with the dependent libraries, so you can use these images when you do not have access to the Internet or DockerHub.

Extract the .tar file and load the images using the following commands:
  • sum-chrome-agent:
    docker load < ${webAgentTag}
  • sum-api-monitoring-agent:
    docker load < ${apiAgentTag}
  • sum-heimdall:
    docker load < ${heimdallTag}
  • ignite:
    docker load < ${igniteTag}
For example:
# Load all Docker images
docker load -i heimdall-25.7.3098.tar
docker load -i api-monitoring-agent-1.0-415.tar
docker load -i chrome-agent-1.0-1067.tar
docker load -i ignite-2.16.0-jdk11.tar
Verify that all the images are loaded:
docker images | grep -E "(heimdall|api-monitoring|chrome-agent|ignite)"
When the images are loaded successfully, an output similar to the following is displayed:
```
829771730735.dkr.ecr.us-west-2.amazonaws.com/sum/heimdall                   25.7.3098    abc123def456   2 hours ago     500MB
829771730735.dkr.ecr.us-west-2.amazonaws.com/sum/api-monitoring-agent       1.0-415      def456ghi789   2 hours ago     300MB
829771730735.dkr.ecr.us-west-2.amazonaws.com/sum/chrome-agent               1.0-1067     ghi789jkl012   2 hours ago     800MB
apacheignite/ignite                                                         2.16.0       jkl012mno345   2 hours ago     400MB
```
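The four docker load commands above can be run in a single loop. This is an illustrative sketch: the load_bundle function and DRY_RUN switch are not part of PSA, and the tarball names are the examples from this page; adjust them to your bundle.

```shell
#!/bin/sh
# Illustrative wrapper around the docker load commands shown above.
# With DRY_RUN=1 it only prints the commands; otherwise it runs them.
load_bundle() {
  for tarball in heimdall-25.7.3098.tar \
                 api-monitoring-agent-1.0-415.tar \
                 chrome-agent-1.0-1067.tar \
                 ignite-2.16.0-jdk11.tar; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "docker load -i ${tarball}"
    else
      docker load -i "${tarball}" || return 1
    fi
  done
}
```

Run `DRY_RUN=1 load_bundle` first to confirm the file names, then `load_bundle` to load the images.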

(Optional) Add Custom Python Libraries

In addition to the available standard set of libraries, you can add custom Python libraries to the agent to use in scripted measurements. You build a new image based on the image you loaded as the base image.

Note: You do not require these steps if you are using a pre-built image from the DockerHub repository.
  1. Create a Dockerfile and then add RUN directives to run Python pip. For example, to install the library algorithms, you can create a Dockerfile:
    # Use the sum-chrome-agent image you just loaded as the base image
    FROM appdynamics/chrome-agent-psa:<agent-tag>
    USER root
    RUN apk add py3-pip
    USER appdynamics
    # Install algorithm for python3 on top of that
    RUN python3 -m pip install algorithms==0.1.4 --break-system-packages
    Note: You can create any number of RUN directives to install the required libraries.
  2. To build the new image, enter:
    docker build -t sum-chrome-agent:<agent-tag> - < Dockerfile
    The newly built agent image contains the required libraries.
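To confirm that the custom library actually landed in the new image, you could run a one-off container against it. This is a hedged sketch: the verify_agent_lib helper and DRY_RUN switch are illustrative, not part of PSA.

```shell
#!/bin/sh
# Illustrative check: import the library inside the freshly built agent image.
# With DRY_RUN=1 the command is only printed, not executed.
verify_agent_lib() {
  image="$1"   # e.g. sum-chrome-agent:<agent-tag>
  lib="$2"     # e.g. algorithms
  cmd="docker run --rm ${image} python3 -c \"import ${lib}\""
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    eval "$cmd"
  fi
}
```

For example, `verify_agent_lib sum-chrome-agent:1.0-1067 algorithms` exits non-zero if the import fails inside the container.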

Tag and Push Images to the Registry

You must tag and push the images to a registry for the cluster to access them. Amazon EKS clusters pull the images from Elastic Container Registry (ECR), the managed registry provided by AWS.

Tag the Images

docker tag appdynamics/heimdall-psa:<heimdall-tag> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-heimdall:<heimdall-tag>
docker tag appdynamics/chrome-agent-psa:<agent-tag> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-chrome-agent:<agent-tag>
docker tag appdynamics/api-monitoring-agent-psa:<agent-tag> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-api-monitoring-agent:<agent-tag>

You need to replace <aws_account_id> and <region> with your AWS account ID and Region values.

To create repositories, enter:

aws ecr create-repository --repository-name sum/sum-heimdall
aws ecr create-repository --repository-name sum/sum-chrome-agent
aws ecr create-repository --repository-name sum/sum-api-monitoring-agent

Push the Images

aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-heimdall:<heimdall-tag>
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-chrome-agent:<agent-tag>
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-api-monitoring-agent:<agent-tag>
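The tag and push steps repeat the same registry path for each image, so they can be derived once. This is an illustrative sketch: the push_to_ecr helper and DRY_RUN switch are not part of PSA, and AWS_ACCOUNT_ID and REGION are placeholders you must set to your own values.

```shell
#!/bin/sh
# Illustrative helper: tag and push one local image to the ECR path used above.
# AWS_ACCOUNT_ID and REGION are placeholders; DRY_RUN=1 only prints commands.
AWS_ACCOUNT_ID="${AWS_ACCOUNT_ID:-123456789012}"
REGION="${REGION:-us-west-2}"

push_to_ecr() {
  local_image="$1"   # e.g. appdynamics/heimdall-psa
  repo="$2"          # e.g. sum-heimdall
  tag="$3"           # e.g. the heimdall tag
  target="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/sum/${repo}:${tag}"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "docker tag ${local_image}:${tag} ${target}"
    echo "docker push ${target}"
  else
    docker tag "${local_image}:${tag}" "${target}" && docker push "${target}"
  fi
}
```

For example, `DRY_RUN=1 push_to_ecr appdynamics/heimdall-psa sum-heimdall 25.7.3098` prints the two docker commands for Heimdall without running them.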

Deploy PSA Manually

After the images are in the registry, deploy the application to the cluster. Use the Helm chart to deploy and create all Kubernetes resources in the required order.

  1. Install Helm following these instructions.
  2. Create a new namespace to run the Apache Ignite pods.
    Warning: Ensure that you first run the Apache Ignite commands and then run the Heimdall commands.

    To create a new namespace for Ignite, enter:

    kubectl create namespace measurement

    Before you deploy Apache Ignite, you must set some configuration options. To view the configuration options, navigate to the previously downloaded ignite-psa.tgz file and enter:

    helm show values ignite-psa.tgz > values-ignite.yaml
  3. (Optional) Enable Amazon Elastic File System (Amazon EFS) for Apache Ignite. Open the values-ignite.yaml file and update the following details under persistence:
    persistence:
      # To turn on persistence, set enabled to true. Do not modify the onPremMode value.
      enabled: true
      onPremMode: true
      enableEfs: false  # Set to true if you are using EFS
      persistenceVolume:
        size: 8Gi
        provisioner: efs.csi.aws.com # Replace with your provisioner value
        provisionerParameters:
          type: efs # Specify that it is for Amazon EFS
          fileSystemId: fs-01285427ea904c120  # Replace with your EFS file system ID
          provisioningMode: efs-ap # Replace with your provisioning mode
      walVolume:
        size: 8Gi
        provisioner: efs.csi.aws.com # Replace with your provisioner value
        provisionerParameters:
          type: efs  # Specify that it is for Amazon EFS
          fileSystemId: fs-01285427ea904c120  # Replace with your EFS file system ID
          provisioningMode: efs-ap # Replace with your provisioning mode
      enableHostPath: false  # Set to true if you are using hostPath
      hostPath:
        persistenceMount: /mnt/ignite/persistence
        walMount: /mnt/ignite/wal
    Note: After deploying the Helm chart, you can run the following commands to verify the status of the Persistent Volume (PV) and Persistent Volume Claim (PVC):
    kubectl get pv -n measurement
    kubectl get pvc -n measurement
    When the status of PV and PVC changes to Bound, it indicates that the Amazon EFS configuration is complete.
    Note: To enable Amazon EFS for Heimdall, update the configuration in the values.yaml file.
  4. To deploy the Helm chart using the above-mentioned configuration, navigate to the previously downloaded ignite-psa.tgz file and enter:
    helm install synth ignite-psa.tgz --values values-ignite.yaml --namespace measurement
    All the Kubernetes resources are created in the cluster, and you can use Apache Ignite. After a few seconds, Apache Ignite initializes and is visible in the Controller.
  5. To verify if the pods are running, enter:
    kubectl get pods --namespace measurement

    Proceed to the next steps only after the Apache Ignite pods run successfully.

    Using a single command, you can deploy the Helm chart, which contains the deployment details. To deploy the agent, use the Helm chart sum-psa-heimdall.tgz in the zip file that you downloaded previously. Before you deploy the Private Synthetic Agent, you must set some configuration options. To view the configuration options, navigate to the previously downloaded sum-psa-heimdall.tgz file and enter:

    helm show values sum-psa-heimdall.tgz > values.yaml

    These are the configuration key-value pairs that you need to edit in the values.yaml file:

      • heimdall > repository: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-heimdall
      • heimdall > tag: <heimdall-tag>
      • heimdall > pullPolicy: Always
      • chromeAgent > repository: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-chrome-agent
      • chromeAgent > tag: <agent-tag>
      • shepherd > url: Shepherd URL
      • shepherd > credentials: credentials
      • shepherd > location: agent location
      • persistence: (Optional) Amazon EFS settings for Heimdall, described next

    (Optional) Specify the following details if you want to enable Amazon EFS for Heimdall:

    # To turn on persistence, set enabled to true
    persistence:
      enabled: false
      storageClass:
        provisioner: efs.csi.aws.com # Specify that it is for Amazon EFS
        parameters:
          type: efs  # Specify that it is for Amazon EFS
          fileSystemId: fs-01285427ea904c120   # Replace with your EFS file system ID
          provisioningMode: efs-ap  # Replace with your provisioning mode
    Note: After deploying the Helm chart, you can run the following command to verify if the PV and PVC are running:
    kubectl get pv -n measurement
    kubectl get pvc -n measurement
    When the status of PV and PVC changes to Bound, it indicates that the Amazon EFS configuration is complete.
    measurementPodMetadata: (Optional) Change the values of enableCustomTolerations, enableCustomAffinity, and enableCustomLabels to true to enable tolerations, affinity, and labels:
    measurementPodMetadata:
      enableCustomTolerations: false  # Enable use of custom tolerations from this config
      enableCustomAffinity: false     # Enable use of custom affinity rules from this config
      enableCustomLabels: false      # Enable use of custom labels from this config
      automountServiceAccountToken: false # Automatically mount service account token in pods
      labels: # Custom labels to apply to the Pod metadata
        team: "qa"             # Example: assign pod to QA team
        priority: "high"       # Example: set custom priority level
        app.kubernetes.io/managed-by: "scheduler-service"  # Standard label
      tolerations:
        - key: "dedicated"                      # Tolerate nodes tainted with key=dedicated
          operator: "Equal"                     # Match taint value exactly
          value: "measurement"                  # Accept taint with value=measurement
          effect: "NoSchedule"                  # Allow scheduling on such tainted nodes
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: "kubernetes.io/hostname"  # Node must have this hostname
                    operator: "In"                 # Match exact value
                    values:
                      - "node-1"                   # Only allow node named node-1
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 80                           # Strong preference weight
              preference:
                matchExpressions:
                  - key: "topology.kubernetes.io/zone"  # Prefer node in this zone
                    operator: "In"
                    values:
                      - "us-central1-a"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"                     # Co-locate with pod having app=frontend
                    operator: "In"
                    values:
                      - "frontend"
              topologyKey: "kubernetes.io/hostname"  # Must be on the same node
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 50                            # Medium preference
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "app"                   # Prefer proximity to app=logger pods
                      operator: "In"
                      values:
                        - "logger"
                topologyKey: "topology.kubernetes.io/zone"  # Prefer same zone
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"                     # Do NOT schedule with app=backend pods
                    operator: "In"
                    values:
                      - "backend"
              topologyKey: "kubernetes.io/hostname"  # Must not be on same node
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 30                            # Low preference to avoid
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "env"                   # Prefer to avoid pods in env=production
                      operator: "In"
                      values:
                        - "production"
                topologyKey: "topology.kubernetes.io/region"  # Prefer different region
    createChromePodServiceAccount: Specify true to create a service account for the Web Monitoring pod.
    createApiPodServiceAccount: Specify true to create a service account for the API Monitoring pod.

    You can leave the rest of the values set to their defaults or configure them based on your requirements. See Configure Web Monitoring PSA and API Monitoring PSA for details on shepherd URL, credentials, location, and optional key-value pairs.

    Note:

    If the Kubernetes cluster is locked down and you cannot make cluster-wide configuration changes, you can make pod-level changes.

    For example, if you want to change the pod-level DNS server setting to use your internal nameservers for DNS name resolution, specify the following details in the values.yaml file:

    agentDNSConfig:
      enabled: true
      dnsConfig:
        nameservers: ["4.4.4.4"]
        searches: ["svc.cluster.local", "cluster.local"]
  6. To deploy the Helm chart using the above-mentioned configuration, navigate to the previously downloaded sum-psa-heimdall.tgz file and enter:
    helm install heimdall-onprem sum-psa-heimdall.tgz --values values.yaml --namespace measurement

    All the Kubernetes resources are created in the cluster, and you can use Heimdall. After a few seconds, Heimdall initializes and is visible in the Controller.

  7. To verify if the pods are running, enter:
    kubectl get pods --namespace measurement

    To make any changes to values.yaml after the initial deployment, navigate to the previously downloaded sum-psa-heimdall.tgz file and enter:

    helm upgrade heimdall-onprem sum-psa-heimdall.tgz --values values.yaml --namespace measurement
    Warning: To remove the deployment, enter:

    helm uninstall heimdall-onprem --namespace measurement

    This is not recommended unless it is required.
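The notes above say to wait until the PV and PVC status shows Bound. That check can be automated with a small sketch; the check_bound helper is illustrative, not part of PSA, and reads "NAME STATUS" pairs such as those produced by `kubectl get pvc -n measurement --no-headers | awk '{print $1" "$2}'`.

```shell
#!/bin/sh
# Illustrative helper: read "NAME STATUS" lines on stdin and succeed only
# if every resource reports Bound; otherwise name the first offender.
check_bound() {
  while read -r name phase; do
    if [ "$phase" != "Bound" ]; then
      echo "not bound: $name"
      return 1
    fi
  done
  echo "all bound"
}
```

A non-zero exit code from check_bound means at least one volume or claim has not reached Bound yet.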

Deploy PSA Using the Automation Script

Download the PSA installation zip file from the Splunk AppDynamics Download Center or from the beta upload tool. This file contains Docker files for sum-chrome-agent, sum-api-monitoring-agent, sum-heimdall, Helm charts, and automation scripts. To build an image for sum-chrome-agent, sum-api-monitoring-agent, and sum-heimdall, ensure that Docker is installed. You can download and install Docker from here if it is not installed.

Perform the following steps to install PSA:

  1. Unzip the PSA installation zip file.
  2. Run the following command to install PSA in EKS:
    ./install_psa -e kubernetes -l -v -u <Shepherd-URL> -a <EUM-account> -k <EUM-key> -c <location-code> -d <location-description> -t <location-name> -s <location-state> -o <location-country> -i <location-latitude> -g <location-longitude> -p <PSA-tag> -r <heimdall-replica-count> -z <agent-type> -m <chrome-agent_min/max-memory> -n <API-agent_min/max-memory> -x <chrome-agent_min/max-CPU> -y <API-agent_min/max-CPU> -b <heimdall_min/max-memory> -f <heimdall_min/max-CPU> -q <ignite-persistence> -w <heimdall_proxy_server>~<api_monitoring_proxy_server>~<web_monitoring_proxy_server> -B <"bypassURL1;bypassURL2;bypassURL3"> -C true -A <serviceaccount-name> -U <userID> -G <groupID> -N <run_as_a_non-root_user> -F <file_system_groupID> -O <override_the_security_context>

    A sample installation command looks like this:

    ./install_psa -e kubernetes -u <Shepherd-URL> -a <EUM-account> -k <EUM-key> -c DEL -d Delhi -t Delhi -s DEL -o India -i 28.70 -g 77.10 -p 23.5 -r 1 -z all -m 100Mi/500Mi -n 100Mi/100Mi -x 0.5/1.5 -y 0.1/0.1 -b 2Gi/2Gi -f 2/2 -q true -w 127.0.0.1:8887~127.0.0.1:8888~127.0.0.1:8889 -B "*abc.com;*xyz1.com;*xyz2.com" -C true -A serviceaccount-name -U 9001 -G 9001 -N true -F 9001 -O true
The following list describes the usage of the flags in the command. An asterisk (*) in the description denotes a mandatory parameter.
  • -e: *Environment. For example, Docker, Minikube, or Kubernetes.
  • -l: Load images to the Minikube environment.
  • -v: Debug mode.
  • -u: *Shepherd URL. For example, https://sum-shadow-master-shepherd.saas.appd-test.com/. For the list of Shepherd URLs, see Shepherd URL.
  • -a: *EUM Account. For example, Ati-23-2-saas-nov2.
  • -k: *EUM Key. For example, 2d35df4f-92f0-41a8-8709-db54eff7e56c.
  • -c: *Location Code. For example, DEL or NY.
  • -d: *Location Description. For example, 'Delhi, 100001'.
  • -t: *Location City. For example, Delhi.
  • -s: *Location State. For example, CA.
  • -o: *Location Country. For example, India or United States.
  • -i: Location Latitude. For example, 28.70.
  • -g: Location Longitude. For example, 77.10.
  • -p: *PSA release tag. For example, 23.12.
  • -r: *Heimdall replica count.
  • -z: *Agent type. For example, web, api, or all.
  • -m: *Minimum/maximum memory in Mi/Gi for sum-chrome-agent.
  • -n: *Minimum/maximum memory in Mi/Gi for sum-api-monitoring-agent.
  • -x: *Minimum/maximum CPU for sum-chrome-agent.
  • -y: *Minimum/maximum CPU for sum-api-monitoring-agent.
  • -b: *Minimum/maximum memory in Mi/Gi for sum-heimdall.
  • -f: *Minimum/maximum CPU for sum-heimdall.
  • -q: Specify true or false to enable or disable Ignite persistence.
  • -w: Specify the proxy servers for Heimdall, API Monitoring, and Web Monitoring, separated by a tilde (~). If you do not need to set up any proxy server, you can leave it blank.
  • -B: Specify the domain URLs that you want to bypass from the proxy server. For example, "*abc.com;*xyz1.com;*xyz2.com".
  • -C: Specify true to enable performance logs on the Chrome browser. The default value is false.
  • -A: Specify the service account of the sum-chrome-agent and sum-api-monitoring-agent pod.
  • -U: Specify the user ID that the sum-chrome-agent or sum-api-monitoring-agent container should run as.
  • -G: Specify the group ID that the sum-chrome-agent or sum-api-monitoring-agent container should run as.
  • -N: Specify whether the sum-chrome-agent or sum-api-monitoring-agent container should run as a non-root user. The default value is true.
  • -F: Specify the file system group ID of the sum-chrome-agent or sum-api-monitoring-agent container.
  • -O: Specify true to override the security context for Web and API Monitoring. The default value is false.
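Several flags above take a min/max pair such as 100Mi/500Mi or 0.5/1.5. Before running install_psa, you can split and sanity-check one of these values; the split_min_max helper below is an illustrative sketch, not part of the script.

```shell
#!/bin/sh
# Illustrative helper: split a "<min>/<max>" flag value (for -m, -n, -x, -y,
# -b, -f) into its two parts, rejecting values without both halves.
split_min_max() {
  value="$1"
  min="${value%%/*}"
  max="${value##*/}"
  if [ "$min" = "$value" ] || [ -z "$min" ] || [ -z "$max" ]; then
    echo "invalid: ${value}" >&2
    return 1
  fi
  echo "min=${min} max=${max}"
}
```

For example, `split_min_max 100Mi/500Mi` prints `min=100Mi max=500Mi`, while `split_min_max 2Gi` fails because the maximum is missing.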

Monitor the Kubernetes Cluster

The Helm chart sum-psa-monitoring.tgz in the zip you downloaded installs the monitoring stack. This Helm chart installs kube-prometheus-stack along with a custom Grafana dashboard to monitor the Private Simple Synthetic Agent.

Note: Monitoring the deployment is optional; however, we highly recommend that you monitor the cluster to check its health periodically.

Install the Monitoring Stack

  1. To create a separate monitoring namespace, enter:
    kubectl create namespace monitoring

    To review configuration options, enter:

    helm show values sum-psa-monitoring.tgz > values-monitoring.yaml

    This generates a values-monitoring.yaml file that contains all the configuration options. To modify and pass the generated values-monitoring.yaml file while installing the Helm chart, enter:

    helm install psa-monitoring sum-psa-monitoring.tgz --values values-monitoring.yaml --namespace monitoring
  2. After the monitoring stack is installed, you can launch Grafana (which runs inside the cluster) to view the dashboard. To access Grafana from outside the cluster, configure port forwarding or set up Ingress. To configure port forwarding for local access, enter:
    kubectl port-forward svc/psa-monitoring-grafana 3000:80 --namespace monitoring
  3. Launch localhost:3000 from the browser and log in using the default credentials: username admin and password prom-operator. A dashboard named Private Simple Synthetic Agent displays and provides details about the Kubernetes cluster, Apache Ignite, Heimdall, and running measurements.

Uninstall PSA

To uninstall PSA, run the following command:

./uninstall_psa -e kubernetes -p

Upgrade PSA in Amazon Elastic Kubernetes Service

Pull the Docker Image

Pull the pre-built Docker images for sum-chrome-agent, sum-api-monitoring-agent, and sum-heimdall from DockerHub. The pre-built images include the dependent libraries, so you can use these images even when you do not have access to the Internet.

Run the following commands to pull the agent images:

docker pull appdynamics/heimdall-psa
docker pull appdynamics/chrome-agent-psa
docker pull appdynamics/api-monitoring-agent-psa

Add Custom Python Libraries

This is an optional step. In addition to the available standard set of libraries, you can add custom Python libraries to the agent to use in scripted measurements. You build a new image based on the image you loaded as the base image.

  1. Create a Dockerfile and then add RUN directives to run Python pip. For example, to install the library algorithms, you can create a Dockerfile:
    # Use the sum-chrome-agent image you just loaded as the base image
    FROM appdynamics/chrome-agent-psa:<agent-tag>
    USER root
    RUN apk add py3-pip
    USER appdynamics
    # Install algorithm for python3 on top of that
    RUN python3 -m pip install algorithms==0.1.4 --break-system-packages
    Note: You can create any number of RUN directives to install the required libraries.
  2. To build the new image, run the following commands:
    Web Monitoring PSA:
    docker build -t sum-chrome-agent:<agent-tag> - < Dockerfile
    API Monitoring PSA:
    docker build -f Dockerfile-PSA -t sum-api-monitoring-agent:<agent-tag> .
    You must build the images on a host with the same OS type as the Kubernetes cluster nodes. For example, if you are pushing the image to AWS, run the following command:
    docker buildx build -f Dockerfile-PSA --platform=linux/amd64 -t sum-api-monitoring-agent:<api-tag> .
    The newly built agent image contains the required libraries.

Tag and Push Images to the Registry

Note: Managed Kubernetes services, such as EKS or AKS, provide container registries where you can push your images. No other configuration is needed; Kubernetes clusters within EKS or AKS have access to these images.

You must tag and push the images to a registry for the cluster to access them. The Amazon EKS clusters pull the images from Elastic Container Registry (ECR), which is the managed registry provided by AWS.

Because Vanilla Kubernetes runs on AWS infrastructure, Kubernetes Operations (kops) creates and assigns appropriate roles to the cluster nodes, and they can directly access ECR. You do not need any other configuration, so the process is the same for both EKS and Vanilla Kubernetes on EC2.

To tag the images, enter:

docker tag sum-heimdall:<heimdall-tag> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-heimdall:<heimdall-tag>
docker tag sum-chrome-agent:<agent-tag> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-chrome-agent:<agent-tag>
docker tag sum-api-monitoring-agent:<agent-tag> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-api-monitoring-agent:<agent-tag>

You need to replace <aws_account_id> and <region> with your AWS account ID and Region values.

To create repositories, enter:

aws ecr create-repository --repository-name sum/sum-heimdall
aws ecr create-repository --repository-name sum/sum-chrome-agent
aws ecr create-repository --repository-name sum/sum-api-monitoring-agent

To push the images, enter:

aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-heimdall:<heimdall-tag>
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-chrome-agent:<agent-tag>
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/sum/sum-api-monitoring-agent:<agent-tag>

Update the Helm Chart

Update the configuration key-value pairs in the values.yaml file as described in the Deploy PSA Manually section.

Upgrade the PSA

Note: From PSA 23.12 onwards, you must deploy Ignite and Heimdall in a single namespace named measurement.
  1. Navigate to the new Linux distribution folder and run the following command:
    helm install synth ignite-psa.tgz --values values-ignite.yaml --namespace measurement
  2. Wait until the status of Ignite pods changes to running. Then, run the following command:
    helm upgrade heimdall-onprem sum-psa-heimdall.tgz --values values.yaml --namespace measurement
  3. After the status of the new Heimdall and Ignite pods changes to Running, uninstall the old Ignite release from the ignite namespace:
    helm uninstall synth -n ignite
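The three upgrade steps above can be sketched as one script. This is a hedged sketch: the upgrade_psa wrapper and DRY_RUN switch are illustrative, the release and namespace names are those used on this page, and the kubectl wait timeout is an assumption.

```shell
#!/bin/sh
# Illustrative sequencing of the upgrade steps above. With DRY_RUN=1 the
# commands are only printed; otherwise each step must succeed before the next.
upgrade_psa() {
  run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@" || return 1; fi
  }
  run helm install synth ignite-psa.tgz --values values-ignite.yaml --namespace measurement &&
  run kubectl wait --for=condition=Ready pods --all --namespace measurement --timeout=300s &&
  run helm upgrade heimdall-onprem sum-psa-heimdall.tgz --values values.yaml --namespace measurement &&
  run helm uninstall synth -n ignite
}
```

Run `DRY_RUN=1 upgrade_psa` to review the sequence, then `upgrade_psa` to perform the upgrade.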