Configure OneDB Affinity/Anti-Affinity

The OneDB SQL Data store supports two helm chart parameters, nodeSelectorRequired and nodeSelector, which can be set in both the onedb and onedbcm sections of the helm chart.
onedb: 
    nodeSelectorRequired: true 
    nodeSelector: 
        type: database 
. . . 
onedbcm: 
    nodeSelectorRequired: true 
    nodeSelector: 
        type: cm 

The default value of nodeSelectorRequired is true for both onedb and onedbcm. When it is set to true, requiredDuringSchedulingIgnoredDuringExecution is used for Pod anti-affinity.

The effect of this is that a OneDB Database server pod will not be scheduled on a node where another OneDB Database server pod is already running, and a OneDB Connection manager pod will not be scheduled on a node where another OneDB Connection manager pod is already running.
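
For reference, the anti-affinity rule rendered into the pod spec looks roughly like the sketch below. The label key and value (app: onedb) are assumptions used for illustration; the chart's actual pod labels may differ.
affinity:
    podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
              matchLabels:
                  app: onedb
          topologyKey: kubernetes.io/hostname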

When the nodeSelector helm chart parameter is set for either onedb or onedbcm, Node affinity is enabled using requiredDuringSchedulingIgnoredDuringExecution. This requires that the corresponding pods be scheduled only on nodes that carry the matching label.
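
The resulting node affinity is equivalent to a rule along these lines, shown here for the onedb section with type=database. This is a sketch of the standard Kubernetes construct, not necessarily the chart's exact template output.
affinity:
    nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: In
                values:
                - database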

Example Labeling of Nodes:
kubectl label nodes gke-worker2 type=database --overwrite
kubectl label nodes gke-worker4 type=database --overwrite

kubectl label nodes gke-worker3 type=cm --overwrite
kubectl label nodes gke-worker5 type=cm --overwrite
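
To confirm that the labels were applied before deploying, list the nodes by label (the node names follow the example above):
kubectl get nodes -l type=database
kubectl get nodes -l type=cm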

With the above helm chart values set, the OneDB Database server pods must run on Kubernetes nodes that are labeled with type=database, and the OneDB Connection manager pods must run on Kubernetes nodes that are labeled with type=cm.

OneDB SQL Data store sets up an HA cluster with an HDR primary and an HDR secondary. If nodeSelectorRequired is set to true and nodeSelector is used, at least two nodes must carry the database label so that both servers can be scheduled. The same applies to the OneDB Connection manager: the number of labeled nodes must match the number of replicas being run.
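
If the values are supplied on the command line rather than in a values file, the same parameters can be set at install or upgrade time. The release name onedb and the chart reference onedb-helm/onedb below are placeholders for your actual deployment:
helm upgrade --install onedb onedb-helm/onedb \
    --set onedb.nodeSelectorRequired=true \
    --set onedb.nodeSelector.type=database \
    --set onedbcm.nodeSelectorRequired=true \
    --set onedbcm.nodeSelector.type=cm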

Note: When configuring pod scheduling, it is important to understand how node affinity and pod anti-affinity interact; otherwise a pod may be left unschedulable because no node satisfies its constraints.
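
If a pod remains in the Pending state, the scheduler events show which constraint could not be satisfied. The pod name below is a placeholder; look for FailedScheduling events indicating that nodes did not match the pod's node affinity/selector or anti-affinity rules, which usually means a node label is missing or too few nodes are labeled.
kubectl get pods
kubectl describe pod <onedb-pod-name>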