Labeling and tainting worker nodes for Elasticsearch

Use Kubernetes to label and taint worker nodes so that they are reserved for use by the Elasticsearch offering of Component Pack for IBM Connections. Skip this topic if you are using an all-in-one, single-server deployment for a proof of concept.

Before you begin

Perform this task before any Component Pack services are installed. If you already have Component Pack pods running on the worker nodes that you want to use as dedicated Elasticsearch workers, drain the pods off each node by running the following command on the master node, replacing node with the name of the node:
kubectl drain node --force --delete-local-data --ignore-daemonsets

About this task

For best results, deploy dedicated worker nodes that will host only the Elasticsearch pods. In production, best practice is to deploy three dedicated worker nodes for Elasticsearch to make use of the pod anti-affinity rules and create a highly available worker solution. If you plan to also install Orient Me or Customizer, then you should deploy separate worker nodes to host the pods belonging to those services. Labeling and tainting the dedicated worker nodes ensures that they can only be used by Elasticsearch.

Note: If you are deploying the version of Elastic Stack that is included with Component Pack, then the pods that make up this offering (Filebeat, Logstash, and Kibana) will also run on the labeled/tainted worker nodes.

Procedure

  1. Determine which nodes will be dedicated for use by Elasticsearch.

    You can view a list of all nodes in your cluster by running the following command:

    kubectl get nodes
  2. Label and taint one node by running the following commands on the master node, replacing node with the node you wish to use as a dedicated Elasticsearch worker:
    kubectl label nodes node type=infrastructure --overwrite 
    kubectl taint nodes node dedicated=infrastructure:NoSchedule --overwrite 
    
  3. Repeat step 2 for every node that you wish to use as a dedicated Elasticsearch worker.
  4. If you ran the kubectl drain command to drain pods off the node, then run the following command to allow pods to run on that node again:
    Important: Do not run this command until you have completed the labeling and tainting of dedicated Elasticsearch nodes.
    kubectl uncordon node
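Steps 2 and 3 can be sketched as a small script that prepares the label and taint commands for several nodes at once. The node names worker1, worker2, and worker3 are hypothetical placeholders, not names from this topic — substitute the names reported by kubectl get nodes. As a safety measure, the sketch only prints each command for review; remove the leading echo to actually apply the labels and taints.

```shell
#!/bin/sh
# Hypothetical node names -- replace with the names shown by `kubectl get nodes`.
NODES="worker1 worker2 worker3"

for node in $NODES; do
  # Commands are printed for review; drop the leading `echo` to apply them.
  echo kubectl label nodes "$node" type=infrastructure --overwrite
  echo kubectl taint nodes "$node" dedicated=infrastructure:NoSchedule --overwrite
done
```

After applying the commands, you can confirm the result with kubectl get nodes --show-labels, which lists the type=infrastructure label on each dedicated worker.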

What to do next

If you later decide to remove the taint and label from a node, you can run the following commands:

kubectl taint nodes node dedicated:NoSchedule-
kubectl label node node type-
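If you need to return several dedicated workers to general use at once, the removal commands above can be prepared in the same loop style. The node names worker1, worker2, and worker3 are again hypothetical — substitute your own. The sketch prints each command for review; remove the leading echo to apply the changes.

```shell
#!/bin/sh
# Hypothetical node names -- replace with your dedicated Elasticsearch workers.
NODES="worker1 worker2 worker3"

for node in $NODES; do
  # The trailing `-` removes the taint/label. Printed for review; drop `echo` to apply.
  echo kubectl taint nodes "$node" dedicated:NoSchedule-
  echo kubectl label node "$node" type-
done
```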