Upgrading to the latest reference implementation

This topic describes how to upgrade to the latest reference implementation of Component Pack Services.

Before you begin

Note:
  • The upgrade steps will provide zero downtime only if you have three or more master machines, two or more generic workers, and two or more infrastructure workers. If this is not the case, schedule downtime for the upgrade.
  • If you enable the Pod Security Policies admission controller during the cluster upgrade while you are still running Component Pack 6.0.0.6 or 6.0.0.7 pods (deployed without Pod Security Policies enabled), there will be downtime: those pods will fail to be recreated during failover because they will not meet any of the policy rules until you upgrade to 6.0.0.8.
  • In High Availability deployments, ensure that all evicted pods are in a Ready state on an alternate node before moving on to upgrade the next node.
Before beginning the upgrade, ensure that the following requirements are in place:
  • Your current reference implementation should be a kubeadm Kubernetes cluster running any version from 1.11.0 to 1.11.6, because there is a known issue when attempting a "same version" kubeadm cluster upgrade on v1.11. You might want a "same version" upgrade if you are already on v1.11.9 and do not have the Pod Security Policy admission controller enabled but would now like to enable it; in that case, see the technote kubeadm upgrade apply failure with same kubernetes version for a suggested workaround. You can confirm what your cluster is currently running with the quick check after this list.
  • Yum must be working on all servers to install the new required packages.
  • You must have sudo privileges on all servers.
  • Ensure that there are enough resources to handle failover on all machines by complying with the required minimum specifications in the Component Pack installation roadmap and requirements.
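
To confirm what the cluster is currently running before you begin, a quick check such as the following can be used (output will vary with your environment):

  # Versions of the installed kubeadm, kubelet, and kubectl packages
  rpm -q kubeadm kubelet kubectl

  # Version each node is running; every node should report a version
  # in the 1.11.0 - 1.11.6 range before you proceed
  kubectl get nodes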

About this task

Upgrading the reference implementation involves upgrading the primary master, the secondary masters, all worker nodes, and the Calico network.

Procedure

Begin by completing the following steps to upgrade the primary master:
  1. Determine the primary master node name:
    kubectl get nodes
    
  2. Drain the primary master node:
    kubectl drain <node> --delete-local-data --ignore-daemonsets
    
  3. Add the kubernetes repo:
    sudo bash -c 'cat << EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude=kube*
    EOF'
  4. Enable the kubernetes repo:
    sudo yum-config-manager --enable kubernetes*
  5. Upgrade kubeadm, kubelet and kubectl:
    sudo yum upgrade -y kubeadm-1.11.9* kubelet-1.11.9* kubectl-1.11.9* --disableexcludes=kubernetes
  6. Reload systemd units and restart kubelet:
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    
  7. Create a file called kubeadm-config.yaml with the current cluster configuration:
    sudo kubeadm config view | tee kubeadm-config.yaml
  8. Edit the kubeadm-config.yaml file and make the following changes:
    1. Update the value of kubernetesVersion to v1.11.9.
    2. (Optional) Enable the PodSecurityPolicy admission plug-in by locating the apiServerExtraArgs settings and adding the plug-in there; refer to Pod Security Policies for additional information. For example:
    apiServerExtraArgs:
      enable-admission-plugins: PodSecurityPolicy
    A fuller sketch of the relevant fields appears after this procedure.
  9. If you enabled the PodSecurityPolicy admission plug-in in the previous step, complete the following steps to apply the policies that allow the pods to run:
    1. Download the Component Pack installation kit from the HCL FlexNet portal.
    2. Extract the Component Pack installation package to a temporary location:
      unzip IC-ComponentPack-6.0.0.8.zip
    3. Apply the privileged-psp-with-rbac.yaml so that the kube-system pods can successfully restart after the cluster upgrade:
      kubectl apply -f extractedFolder/microservices_connections/hybridcloud/support/psp/privileged-psp-with-rbac.yaml
    4. Install the k8s-psp helm chart which is required for Component Pack applications to run:
      helm install --name=k8s-psp extractedFolder/microservices_connections/hybridcloud/helmbuilds/k8s-psp-<tbd>.tgz
  10. Upgrade the cluster:
    sudo kubeadm upgrade apply v1.11.9 --config=kubeadm-config.yaml -f
  11. Disable the repo:
    sudo yum-config-manager --disable kubernetes*
  12. Uncordon the node:
    kubectl uncordon <node>
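
For reference, the two fields that step 8 touches look like the following in the generated file. This is a minimal sketch, assuming the v1alpha2 configuration schema that kubeadm 1.11 uses; your generated kubeadm-config.yaml will contain many more fields, which should be left unchanged:

  apiVersion: kubeadm.k8s.io/v1alpha2
  kind: MasterConfiguration
  # Target version for the upgrade (step 8.1)
  kubernetesVersion: v1.11.9
  # Optional: enable the PodSecurityPolicy admission plug-in (step 8.2)
  apiServerExtraArgs:
    enable-admission-plugins: PodSecurityPolicy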
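
Before moving on to the secondary masters, it is worth confirming that the primary master came back healthy:

  # The drained-and-upgraded master should again report Ready, with VERSION v1.11.9
  kubectl get nodes

  # Control-plane pods (kube-apiserver, kube-scheduler, etcd, and so on) should all be Running
  kubectl get pods -n kube-system -o wide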

Upgrading the secondary masters

Procedure

  1. Determine the node name:
    kubectl get nodes
  2. Drain the secondary master node you want to upgrade:
    kubectl drain <node> --delete-local-data --ignore-daemonsets
  3. Add the kubernetes repo:
    sudo bash -c 'cat << EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude=kube*
    EOF'
  4. Enable the kubernetes repo: 
    sudo yum-config-manager --enable kubernetes*
  5. Upgrade kubeadm, kubelet and kubectl:
    sudo yum upgrade -y kubeadm-1.11.9* kubelet-1.11.9* kubectl-1.11.9* --disableexcludes=kubernetes
  6. Reload systemd units and restart kubelet:
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    
  7. Upgrade the cluster:
    sudo kubeadm upgrade apply v1.11.9 -f
  8. Disable the repo:
    sudo yum-config-manager --disable kubernetes*
  9. Uncordon the node:
    kubectl uncordon <node>
    Repeat steps 1-9 for each secondary master. Before starting the next one, confirm that the cluster is healthy, as shown below.
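
As noted in Before you begin, in High Availability deployments you should confirm that evicted pods are Ready again before draining the next node. One way to check:

  # All upgraded masters should be Ready and report v1.11.9
  kubectl get nodes

  # Pods evicted from the drained master should be Running on other nodes again
  kubectl get pods -n kube-system -o wide
  kubectl get pods -n connections -o wide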

Upgrading the worker nodes

Procedure

  1. Determine the node name:
    kubectl get nodes
  2. Drain the worker node you want to upgrade:
    kubectl drain <node> --delete-local-data --ignore-daemonsets
    Note: If the drain command hangs, you can force-quit the command and manually delete any non-DaemonSet pods that remain on that node; they will be recreated on the other available nodes. To view which pods are running on which nodes, use the command kubectl get pods -n connections -o wide. Force delete a pod with the command kubectl delete pod <pod_name> -n <namespace> --grace-period=0 --force.
  3. Add the kubernetes repo:
    sudo bash -c 'cat << EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude=kube*
    EOF'
  4. Enable the kubernetes repo: 
    sudo yum-config-manager --enable kubernetes*
  5. Upgrade kubeadm, kubelet and kubectl:
    sudo yum upgrade -y kubeadm-1.11.9* kubelet-1.11.9* kubectl-1.11.9* --disableexcludes=kubernetes
    
  6. Reload systemd units and restart kubelet:
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    
  7. Disable the repo:
    sudo yum-config-manager --disable kubernetes*
  8. Uncordon the node:
    kubectl uncordon <node>
    Repeat steps 1-8 for all worker nodes.
    Validation: Confirm that the upgrade was successful by running kubectl get nodes. All nodes should be in the Ready state with the version updated to v1.11.9 (sample output below).
    Note: At this point, the last generic and infrastructure worker nodes that were upgraded have no Component Pack pods running on them, because they were drained during the upgrade. Once the Component Pack helm charts are upgraded as part of Upgrading Component Services, the pods will again be deployed evenly across the nodes.
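
For illustration, a healthy result looks like the following (node names and ages are hypothetical):

  $ kubectl get nodes
  NAME      STATUS   ROLES    AGE    VERSION
  master1   Ready    master   200d   v1.11.9
  master2   Ready    master   200d   v1.11.9
  master3   Ready    master   200d   v1.11.9
  worker1   Ready    <none>   200d   v1.11.9
  worker2   Ready    <none>   200d   v1.11.9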

Upgrading the Calico network

Procedure

Complete the following steps to upgrade the Calico network add-on to version 3.3. The upgrade path can be followed without affecting connectivity or network policy for any existing pods; however, it is recommended that you do not deploy new pods to a node that is being upgraded.
  1. On any Kubernetes master, run the following command to update the roles and role bindings for Calico's components:
    kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
  2. Use the following command to initiate a rolling update:
    kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
  3. Watch the status of the upgrade as follows:
    watch kubectl get pods -n kube-system
  4. Verify that the status of all Calico pods indicates Running. An additional check is shown below.
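
To double-check that the rolling update completed, you can also confirm the image version that the calico-node pods are now running. This sketch assumes the calico-node DaemonSet name and the k8s-app=calico-node label used by the standard manifest applied above:

  # Each calico-node pod should be Running on every node
  kubectl get pods -n kube-system -l k8s-app=calico-node -o wide

  # The image tag on the DaemonSet should now be a v3.3.x release
  kubectl get daemonset calico-node -n kube-system \
    -o jsonpath='{.spec.template.spec.containers[0].image}'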