Kubernetes reference implementation

Create a Kubernetes cluster to support the Component Pack for IBM Connections. Refer to this reference implementation for guidelines on validated settings.

Kubernetes cluster validation

The Component Pack was validated on a Kubernetes cluster that was set up on virtual machines using the kubeadm tool. The validated deployment included the following options (a minimal kubeadm sketch based on these options follows the list):
  • Red Hat Enterprise Linux 7.6 and CentOS 7.6
  • Docker 17.03 (CE or EE, configured with devicemapper storage) or Docker 18.06.2+ (CE, configured with devicemapper storage)
    • If you are a Docker CE customer, it is recommended that you install or upgrade to 18.06.2+ because of the runc vulnerability CVE-2019-5736.
    • If you are a Docker EE customer, it is recommended that you install or remain on 17.03.x.
  • Kubernetes version 1.11.9
  • Stacked masters - etcd members and control plane nodes (master nodes) co-located
  • Calico v3.3 used as the network add-on
  • Helm v2.11.0
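
A minimal sketch of how the first (stacked) master of such a cluster might be initialized is shown below. It simply restates the validated options above as commands; the Calico manifest step is summarized rather than reproduced, and the commands should be adapted to your environment:

    # Initialize the first control plane node with the validated Kubernetes version
    # and the default Calico pod network CIDR (192.168.0.0/16).
    kubeadm init --kubernetes-version v1.11.9 --pod-network-cidr 192.168.0.0/16

    # Install the Calico v3.3 network add-on by applying the manifest published in
    # the Calico v3.3 documentation (URL not reproduced here).
    kubectl apply -f calico.yaml

    # Install the server-side component (Tiller) of Helm v2.11.0 into the cluster.
    helm init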

Deployment considerations

Review the following considerations before creating the Kubernetes cluster for Component Pack:
  • In production, best practice is to use the devicemapper storage driver in direct-lvm mode for Docker. This mode uses block devices to create the thin pool. In Docker v17.03/v18.06.2+, direct-lvm mode can be configured manually. For steps on how to do this, see the Configure direct-lvm mode manually section of the Docker documentation.
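    As an illustration only, the following condensed sketch shows the manual direct-lvm setup on a dedicated, empty block device; /dev/xvdf and the volume group name docker are placeholders, and the Docker documentation remains the authoritative source for the full procedure (including the LVM autoextend profile, which is omitted here):

      # Create a thin pool for the devicemapper storage driver (placeholder device /dev/xvdf).
      pvcreate /dev/xvdf
      vgcreate docker /dev/xvdf
      lvcreate --wipesignatures y -n thinpool docker -l 95%VG
      lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
      lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

      # Point Docker at the thin pool by creating /etc/docker/daemon.json with the
      # following content, then restart Docker (systemctl restart docker):
      #   {
      #     "storage-driver": "devicemapper",
      #     "storage-opts": [ "dm.thinpooldev=/dev/mapper/docker-thinpool" ]
      #   }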

  • The installation of the Kubernetes platform, as well as the Component Pack, requires Internet access in order to pull images; however, it is possible to use a proxy with Internet access to complete the installation on a server that does not have direct Internet access. The following environment variables need to be set for this to work:
    export http_proxy=http[s]://proxy-host:port
    export https_proxy=http[s]://proxy-host:port
    export ftp_proxy=http[s]://proxy-host:port
    export no_proxy=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16 
    where:
    • 10.96.0.0/12 is the Kubernetes service range (this is the default range)
    • 192.168.0.0/16 is the default Calico service range (if using a different network add-on, use the applicable CIDR address for that network)

    When using these environment variables, complete the additional steps for Docker as described in the HTTP/HTTPS proxy section of the Docker documentation.
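
    For reference, the Docker-side configuration described there is a systemd drop-in for the Docker daemon. A sketch, using the same placeholder proxy-host:port values as above, is to create /etc/systemd/system/docker.service.d/http-proxy.conf containing:

      [Service]
      Environment="HTTP_PROXY=http://proxy-host:port"
      Environment="HTTPS_PROXY=http://proxy-host:port"
      Environment="NO_PROXY=localhost,127.0.0.1"

    and then reload systemd and restart Docker:

      systemctl daemon-reload
      systemctl restart docker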

  • There are several approaches you can take when setting up master HA load balancing. Applications such as HAProxy or NGINX work well as master load balancers. The approach and application you use depend on your environment's needs. Some examples include (a hedged configuration sketch follows these examples):
    • A stand-alone load balancing cluster: Deploy a cluster of load balancers on their own dedicated servers, which perform health checks of the kube-apiserver on each of the master nodes and load-balance requests to the healthy instance(s) in the cluster. This approach is highly available, but is more expensive and carries an additional support burden.

    • Co-located load balancing cluster: Deploy the load balancer application of your choice on every master node and point each load balancer to every master node in your cluster. You can use a service such as keepalived to determine which load balancer is the current primary. This approach still provides high availability, but with fewer required resources.

    • Active/passive failover without load balancing: You can use a service such as keepalived to provide an active/passive failover of the master nodes themselves, using a health script to determine which is the current primary master. With this approach, no load balancing is done.
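
    As an illustration of the co-located approach only, the fragments below sketch an HAProxy front end for the kube-apiserver instances and a keepalived virtual IP. The addresses 192.0.2.11-13 and 192.0.2.100, the interface name eth0, and the frontend port 8443 are placeholders (8443 is used because the kube-apiserver already listens on 6443 on each master):

      # /etc/haproxy/haproxy.cfg (fragment): TCP load balancing across the masters
      frontend kube-apiserver
          bind *:8443
          mode tcp
          default_backend kube-masters

      backend kube-masters
          mode tcp
          balance roundrobin
          server master1 192.0.2.11:6443 check
          server master2 192.0.2.12:6443 check
          server master3 192.0.2.13:6443 check

      # /etc/keepalived/keepalived.conf (fragment): virtual IP that follows the healthy primary
      vrrp_instance VI_1 {
          state MASTER                 # BACKUP on the other masters
          interface eth0               # placeholder network interface
          virtual_router_id 51
          priority 100                 # use a lower priority on the other masters
          virtual_ipaddress {
              192.0.2.100              # placeholder virtual IP used as the API endpoint
          }
      }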

  • HA deployments can tolerate the failure of only (n-1)/2 master nodes. For example, in the case of 3 masters, a deployment can tolerate losing only 1 master node. This is due to a limitation of etcd. For more information, see the CoreOS FAQs.
  • If using kubeadm to deploy the Kubernetes platform, swap must be disabled (swapoff -a) on all nodes. Make sure to also disable it in /etc/fstab and run mount -a to apply the change, so that swap does not get enabled again after an operating system restart.
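    For example (a sketch only; swap entries in /etc/fstab vary by distribution, so adjust the sed expression to match yours):

      swapoff -a                              # turn swap off immediately on this node
      sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out swap entries so they do not return at reboot
      mount -a                                # re-read /etc/fstab to apply the change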

  • If using kubeadm to deploy the Kubernetes platform, remember to disable SELinux enforcement by running the following command: setenforce 0
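    Note that setenforce 0 switches SELinux to permissive mode only until the next reboot; a common way to make the change persistent (shown as a sketch, not as part of the validated steps) is:

      setenforce 0                                                             # permissive mode for the running system
      sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config   # persist the setting across reboots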

Sample deployment guide using kubeadm

The following sample guides can be used to deploy the Kubernetes platform: