Node allocation and resource limits

You can find information on how to allocate the different services of HCL DevOps Velocity (Velocity) to different nodes in Kubernetes and how to enforce limits on Argo resources.

Node allocation

Kubernetes automatically allocates services and orchestrates the deployment of applications across multiple nodes based on availability.

You can add labels such as background, transactional, and external to the nodes. Use the following command to add a label to a node:

kubectl label nodes <node_name> acc/workload-class=<workload_class>
Replace the variables in the above command with the values specified in the following table:
Variable            Description
<node_name>         The name of the node that you want to label. For example, Node_1.
<workload_class>    The workload class for the node. Possible values are external, transactional, and background.
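For example, assuming a cluster with three worker nodes named node-1, node-2, and node-3 (the node names here are placeholders), you can assign one workload class to each node:

kubectl label nodes node-1 acc/workload-class=external
kubectl label nodes node-2 acc/workload-class=transactional
kubectl label nodes node-3 acc/workload-class=background

You can verify the labels by running kubectl get nodes --show-labels.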

For better performance of Velocity, you must allocate the following services to three different nodes manually:
  • Node 1: "external" pods from 3rd party vendors (MongoDB, RabbitMQ) and integration pods.
  • Node 2: "transactional" pods which are necessary to drive the UI and ensure the product remains responsive in the browser.
  • Node 3: "background" pods which perform resource-intensive calculations that are not immediately needed for interacting with the UI.

The three-node allocation ensures that resource-intensive actions do not impact the performance of Velocity. On startup, Kubernetes automatically schedules each service across the available nodes.
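The following sketch shows how the scheduler typically consumes a label of this kind. It is not a Velocity manifest; the Deployment name and image are hypothetical and only illustrate how a nodeSelector pins a workload to the nodes that carry a given workload class:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: reports-worker              # hypothetical background workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reports-worker
  template:
    metadata:
      labels:
        app: reports-worker
    spec:
      nodeSelector:
        acc/workload-class: background   # schedule only on nodes labeled as background
      containers:
        - name: reports-worker
          image: example/reports-worker:1.0   # placeholder image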

Proper node allocation improves the performance and stability of the application and its features, such as Value Streams, Pipelines, and Releases. Manual allocation increases the performance and stability of the application by 30%.

Resource limits

To enforce limits on Argo resources, modify the executor parameter in the accelerate/templates/workflow-controller-configmap.yaml file.

For example, to limit ephemeral Argo pods, you can use the following parameters and values. Here, requests refers to the initial resource allocation for a pod when it starts, and limits refers to the maximum allocation that the pod can use throughout its lifecycle before it becomes eligible for deletion when there are resource constraints.
 
data:
   config: |
     containerRuntimeExecutor: kubelet
     namespace: accelerate
     executor:
       resources:
         limits:
           cpu: 1
           memory: 2Gi
         requests:
           cpu: 50m 
           memory: 512Mi
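After you edit the template, apply the change by upgrading the Helm release. This is a sketch: <chart_reference> points to the chart you installed from (for example, a chart directory named accelerate, as the template path suggests).

helm upgrade velocity <chart_reference> -n <custom_namespace>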
You can modify the values for limits by using the following commands:
  • To set memory limits for all pods, use the following command:
    helm upgrade velocity -n <custom_namespace> --set resources.limits.memory.default=<memory_string>
    where <memory_string> is a formatted string of memory such as 512Mi.
  • To set CPU limits for all pods, use the following command:
    helm upgrade velocity -n <custom_namespace> --set resources.limits.cpu.default=<cpu_string>
    where <cpu_string> is a formatted string of cpu such as 50m.
  • To set memory limits for a specific pod, use the following command:
    helm upgrade velocity -n <custom_namespace> --set resources.limits.memory.<microservice_name>=<memory_string>
    where <microservice_name> is the name of the microservice to target such as release-events-api or security-api and <memory_string> is a formatted string of memory such as 512Mi.
  • To set CPU limits for a specific pod, use the following command:
    helm upgrade velocity -n <custom_namespace> --set resources.limits.cpu.<microservice_name>=<cpu_string>
    where <microservice_name> is the name of the microservice to target such as release-events-api or security-api and <cpu_string> is a formatted string of cpu such as 50m.
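For example, the following command (a sketch; the namespace and microservice name are only examples, and <chart_reference> is the chart used for your original installation) sets a default memory limit for all pods and overrides the CPU limit for a single microservice in one upgrade:

helm upgrade velocity <chart_reference> -n velocity --set resources.limits.memory.default=1Gi --set resources.limits.cpu.release-events-api=500m

If you want to preserve values set by earlier upgrades, you can also pass the standard Helm --reuse-values flag.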
You can modify the values for requests by using the following commands:
  • To set memory requests for all pods, use the following command:
    helm upgrade velocity -n <custom_namespace> --set resources.requests.memory.default=<memory_string>
    where <memory_string> is a formatted string of memory such as 512Mi.
  • To set CPU requests for all pods, use the following command:
    helm upgrade velocity -n <custom_namespace> --set resources.requests.cpu.default=<cpu_string>
    where <cpu_string> is a formatted string of cpu such as 50m.
  • To set memory requests for a specific pod, use the following command:
    helm upgrade velocity -n <custom_namespace> --set resources.requests.memory.<microservice_name>=<memory_string>
    where <microservice_name> is the name of the microservice to target such as release-events-api or security-api and <memory_string> is a formatted string of memory such as 512Mi.
  • To set CPU requests for a specific pod, use the following command:
    helm upgrade velocity -n <custom_namespace> --set resources.requests.cpu.<microservice_name>=<cpu_string>
    where <microservice_name> is the name of the microservice to target such as release-events-api or security-api and <cpu_string> is a formatted string of cpu such as 50m.
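To confirm that new limits and requests took effect, you can inspect the resources block of a running pod. The pod name below is a placeholder; look it up first with kubectl get pods:

kubectl -n <custom_namespace> get pods
kubectl -n <custom_namespace> get pod <pod_name> -o jsonpath='{.spec.containers[*].resources}'

Alternatively, kubectl -n <custom_namespace> describe pod <pod_name> shows the same information in the Limits and Requests fields.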