Kubernetes reference implementation

Create a Kubernetes cluster to support the Component Pack for HCL Connections. You can refer to the reference implementation for guidelines on validated settings.

Kubernetes cluster validation

Component Pack 6.5 CR1 was validated on a Kubernetes v1.17.2 cluster that was set up on virtual machines using the kubeadm tool.

The validated deployment included the following options:

  • Red Hat Enterprise Linux 7.6 and CentOS 7.6
  • Docker 19.03.5 (CE or EE, configured with devicemapper storage)
  • Kubernetes version 1.17.2
  • Stacked masters: etcd members and control plane (master) nodes co-located on the same hosts
  • Calico v3.11 used as the network add-on
  • Helm v2.16.3

Deployment considerations

Review the following considerations before creating the Kubernetes cluster for Component Pack:
  • In production, the best practice is to use the devicemapper storage driver in direct-lvm mode for Docker. This mode uses block devices to create the thin pool. In Docker 19.03.5 and later, direct-lvm mode can be configured manually. For steps on how to do this, see the Configure direct-lvm mode manually section of the Docker documentation.
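The manual direct-lvm setup can be sketched as follows. This is an outline only, run as root, and it assumes a spare block device at /dev/xvdf (a placeholder; substitute your own device). See the Docker documentation for the complete procedure, including thin-pool autoextension.

```shell
# Assumption: /dev/xvdf is an empty block device reserved for Docker; replace with your own.
pvcreate /dev/xvdf
vgcreate docker /dev/xvdf
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

# Point Docker at the thin pool (back up any existing daemon.json first)
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true",
    "dm.use_deferred_deletion=true"
  ]
}
EOF
systemctl restart docker
```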

  • The installation of the Kubernetes platform, as well as Component Pack, requires Internet access to pull images; however, it is possible to use a proxy with Internet access to complete the installation on a server that does not have direct Internet access. Set the following environment variables for this to work:
    export http_proxy=http[s]://proxy-host:port
    export https_proxy=http[s]://proxy-host:port
    export ftp_proxy=http[s]://proxy-host:port
    export no_proxy=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16 
    where:
    • 10.96.0.0/12 is the Kubernetes service range (this is the default range)
    • 192.168.0.0/16 is the default Calico service range (if using a different network add-on, use the applicable CIDR address for that network)

    When using these environment variables, complete the additional steps for Docker as described in the HTTP/HTTPS proxy section of the Docker documentation.
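The Docker-side proxy configuration amounts to a systemd drop-in file so that the Docker daemon itself can pull images through the proxy. The following sketch assumes a systemd-based host and uses proxy-host:port as a placeholder:

```shell
# Create a systemd drop-in so the Docker daemon inherits the proxy settings
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy-host:port"
Environment="HTTPS_PROXY=http://proxy-host:port"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16"
EOF
systemctl daemon-reload
systemctl restart docker
```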

  • There are several approaches you can take when setting up master HA load balancing. Applications such as HAProxy or NGINX work well as master load balancers. The approach and application you choose depend on your environment's needs. Some examples include:
    • A stand-alone load balancing cluster: Deploy a cluster of load balancers on their own dedicated servers. The load balancers perform health checks of the kube-apiserver on each master node and balance requests across the healthy instances. This approach offers the highest availability, but it is more expensive and adds a support burden.

    • Co-located load balancing cluster: Deploy the load balancer application of your choice on every master node and point each load balancer to every master node in your cluster. You can use a service such as keepalived to determine which load balancer is the current primary. This approach still provides high availability, but with fewer required resources.

    • Active/passive failover without load balancing: You can use a service such as keepalived to provide active/passive failover of the master nodes themselves, using a health script to determine the current primary master. With this approach, no load balancing is done.
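As an illustration of the load-balancing approaches above, a minimal HAProxy configuration that balances TCP traffic across the kube-apiserver instances might look like the following. The master addresses (192.0.2.11-13) are placeholders; when the load balancer is co-located on the masters, bind a different port (for example 8443) or a keepalived virtual IP instead of *:6443 to avoid conflicting with the local kube-apiserver.

```shell
# Append a kube-apiserver frontend/backend to haproxy.cfg (hypothetical master addresses)
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.0.2.11:6443 check
    server master2 192.0.2.12:6443 check
    server master3 192.0.2.13:6443 check
EOF
systemctl restart haproxy
```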

  • HA deployments can only tolerate master node failure of (n-1)/2 nodes. For example, in the case of 3 masters, a deployment can only tolerate losing 1 master node. This is due to a limitation with etcd. For more information, see the CoreOS FAQs.
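The fault-tolerance arithmetic can be checked with a quick shell function:

```shell
# Number of member failures an n-member etcd cluster tolerates: (n-1)/2, integer division
tolerance() { echo $(( ($1 - 1) / 2 )); }

tolerance 3   # -> 1 (a 3-master deployment survives losing 1 master)
tolerance 5   # -> 2
```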
  • If using kubeadm to deploy the Kubernetes platform, swap must be disabled (swapoff -a) on all nodes. Also comment out any swap entries in /etc/fstab and run mount -a to apply the change, so that swap is not re-enabled after an operating system restart.
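The swap steps above can be scripted per node, for example (run as root; the sed expression assumes uncommented /etc/fstab lines with a swap mount type):

```shell
swapoff -a    # turn off swap for the running system
# Comment out any swap entries in /etc/fstab so swap stays off after a reboot
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' /etc/fstab
mount -a      # re-process /etc/fstab to verify it is still valid
```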

  • If using kubeadm to deploy the Kubernetes platform, remember to set SELinux to permissive mode by running the following command: setenforce 0. Note that this runtime change does not persist across restarts; to make it permanent, also update /etc/selinux/config.
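To keep SELinux permissive across restarts, the runtime change can be paired with an edit to /etc/selinux/config (run as root):

```shell
setenforce 0    # switch SELinux to permissive mode immediately
# Persist the setting so it survives a reboot
sed -ri 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```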