Upgrading from 6.0.0.4/6.0.0.5

Upgrade the HCL Connections Component Pack from 6.0.0.4/6.0.0.5.

Before you begin

These upgrade steps apply only when you are upgrading from a 6.0.0.4 or 6.0.0.5 deployment of Component Pack and want to keep your data after the upgrade.

Attention: The upgrade process requires application downtime and should be scheduled accordingly.

Uninstall the previous release

  1. On the master server, check the arguments you used for the original installation by running the following command:
    cat /opt/deployCfC/.last_args.txt
    If the .last_args.txt file does not exist (or is empty), then run the following command to retrieve the original installation command from the /var/log/cfc.log file:
    sudo grep deployCfC.sh /var/log/cfc.log
  2. Run the deployCfC.sh installation script with all of the same flags that you used during the original installation, along with the uninstall=cleanest argument.

    On your storage server, do not delete the directories where your data is stored. The default data directory is /pv-connections. This data will be reused in Component Pack 6.0.0.6 (a verification sketch follows this procedure).

    Example:

    The following command will uninstall all Component Pack services, IBM Cloud Private, Kubernetes, and Docker. It will not delete any data in the persistent volumes.

    cd /opt/deployCfC/ 
    
    sudo sh deployCfC.sh --boot=bootserver.example.com \
    --master_list=masterserver.example.com \
    --worker_list=workerserver1.example.com,workerserver2.example.com,workerserver3.example.com \
    --proxy_list=proxyserver1.example.com \
    --skip_ssh_prompts \
    --root_login_passwd=rootpassword \
    --uninstall=cleanest
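
Before installing the new release, you can confirm that the uninstall left your persistent data in place on the storage server. The following is a minimal sketch that assumes the default /pv-connections data directory; adjust the path if you chose a different location during the original setup.

# Run on the storage (NFS) server, not on the Kubernetes nodes.
# List the persistent volume directories and their sizes to confirm that the
# application data survived the uninstall.
ls -l /pv-connections
du -sh /pv-connections/*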

Install the latest version

  1. Complete all of the prerequisites before beginning the installation tasks.
  2. Deploy Component Pack.
  3. Complete the configuration tasks as explained in Configuring the Component Pack.
Attention: When setting up your environment for the latest version of Component Pack, you can use the same servers that you used with the previous release to set up your Kubernetes cluster, or different ones; however, you must use the same storage server with the data still on it. If the storage server is set up using NFS, ensure that it is in the same subnet as the Kubernetes nodes. If you are using the connections-persistent-storage-nfs-0.1.0.tgz helm chart to set up the persistent volumes, the default value for persistentVolumeReclaimPolicy is Retain, which means that you can reuse your existing data (for more information, see Setting up persistent volumes with NFS). During the upgrade, you must create the Kubernetes persistent volumes again, even if you are reusing the existing data directories.
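
As an illustration only, re-creating the persistent volumes from the existing data might look like the following. This is a sketch, not the authoritative command: it assumes Helm v2 syntax, a release name of connections-volumes, and that the chart accepts an nfs.server value alongside the persistentVolumeReclaimPolicy value mentioned above; see Setting up persistent volumes with NFS for the exact release name and values to use.

# Hypothetical example: point the chart at the existing NFS export so the
# /pv-connections data is reused; Retain keeps the data even if the volumes are deleted again.
helm install --name=connections-volumes connections-persistent-storage-nfs-0.1.0.tgz \
  --set nfs.server=storageserver.example.com,persistentVolumeReclaimPolicy=Retain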

Post-install steps for MongoDB data migrated from 6.0.0.5

If you are re-using MongoDB data from a Component Pack 6.0.0.5 deployment, then you might need to run some clean-up steps on the scoring database.

Determine whether you need to run the clean-up steps by running the following commands to check the logs of the people-scoring pods:

kubectl logs -l app=people-scoring -n connections > people-scoring.log
grep CursorNotFound people-scoring.log
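
Alternatively, a minimal one-line check (a convenience sketch, not part of the original procedure) pipes the pod logs straight into grep and prints the number of matching lines:

# A non-zero count means the clean-up steps below are needed.
kubectl logs -l app=people-scoring -n connections | grep -c CursorNotFound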

If the search finds the string "CursorNotFound" in the logs, complete the following steps to clear users' historical recommendations (community suggestions and top updates).

Note: If the string was not found, skip this procedure (you should not clear the historical data).
  1. SSH into the master Kubernetes node.

  2. Run the following command to determine which Mongo node is the primary node:
    kubectl exec -it mongo-0 -c mongo -n connections -- mongo --ssl \
    --sslPEMKeyFile /etc/mongodb/x509/user_admin.pem \
    --sslCAFile /etc/ca/internal-ca-chain.cert.pem \
    --host mongo-0.mongo.connections.svc.cluster.local \
    --authenticationMechanism=MONGODB-X509 \
    --authenticationDatabase '$external' \
    -u C=IE,ST=Ireland,L=Dublin,O=IBM,OU=Connections-Middleware-Clients,CN=admin,emailAddress=admin@mongodb \
    --eval "rs.status().members" | grep "id\|name\|health\|stateStr\|ok\|optimeDate"

    Example output:

    "_id" : 0,
    "name" : "mongo-1.mongo.connections.svc.cluster.local:27017",
    "health" : 1,
    "stateStr" : "PRIMARY",
    "optimeDate" : ISODate("2018-07-28T13:19:44Z"),
    "_id" : 1,
    "name" : "mongo-0.mongo.connections.svc.cluster.local:27017",
    "health" : 1,
    "stateStr" : "SECONDARY",
    "optimeDate" : ISODate("2018-07-28T13:19:44Z"),
    "_id" : 2,
    "name" : "mongo-2.mongo.connections.svc.cluster.local:27017",
    "health" : 1,
    "stateStr" : "SECONDARY",
    "optimeDate" : ISODate("2018-07-28T13:19:44Z"),

  3. Run the following command to exec into the primary Mongo pod (mongo-1 in the previous example):
    kubectl exec -it mongo-1 -c mongo -n connections -- bash
  4. Run the following command to connect to the primary mongo:
    mongo --ssl --sslPEMKeyFile /etc/mongodb/x509/user_admin.pem --sslCAFile /etc/ca/internal-ca-chain.cert.pem --host mongo-1.mongo.connections.svc.cluster.local
    
  5. Run the following command to authenticate and authorize access:
    db.getSiblingDB("$external").auth({mechanism: "MONGODB-X509",user: "C=IE,ST=Ireland,L=Dublin,O=IBM,OU=Connections-Middleware-Clients,CN=admin,emailAddress=admin@mongodb"})
    
  6. Run the following command to set collabsocredb as the current db:
    use collabsocredb
  7. Run the following command to drop the score collection (a short verification sketch follows this procedure):
    db.score.drop()
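
Optionally, you can confirm the result before leaving the mongo shell. This is a minimal verification sketch using standard mongo shell helpers; the score collection should no longer appear in the output.

// List the remaining collections in collabsocredb; score should be absent.
show collections
// Leave the mongo shell when you are done.
exit

Type exit once more to close the bash session in the pod that you opened in step 3.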