Upgrading Component Pack to 6.0.0.4

Follow these steps to upgrade to Component Pack 6.0.0.4.

Before you install

Follow these steps to back up your existing data, uninstall the prior version of the application, install the latest version using Docker version 17.03, and restore your data. Make sure that a version of Component Pack (such as Orient Me) at level 6.0.0.1 or later is already installed. The process requires some application downtime.

If you do not want to preserve data you can skip the sections pertaining to backing up and restoring data.

Note: The steps below assume that NFS is the storage type used with your deployment.

Download the Component Pack package

Download IC-ComponentPack-6.0.0.4.zip from Fix Central and extract it to a folder of your choosing on the boot node.

Create an NFS export for Solr index backup

On your storage server (or on the boot node if you do not have a dedicated storage server), follow the steps below to create an NFS export for the Solr index backup:

  1. Create the directory and set permissions.
    Note: This directory should be created in the same location as your current persistent volumes. If they are not in /pv, then replace /pv with their actual directory in these commands.
    sudo mkdir -p /pv/solr-index-backup  
    sudo chmod -R 777 /pv/solr-index-backup
  2. Update the exports file with the new NFS folder:
    echo "/pv/solr-index-backup        `sudo grep /pv/solr-data-1 /etc/exports | sudo awk '{ print $2 }'`" >> /etc/exports
    
  3. Make sure that the /pv/solr-index-backup directory has been added to the end of the exports file:
    cat /etc/exports
  4. Reload NFS exports:
    sudo exportfs -ra
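Optionally, verify that the new export is now active before moving on. This is a quick check rather than part of the documented procedure:
sudo exportfs -v | grep solr-index-backup
If the export was picked up, the output lists /pv/solr-index-backup with the same client and option settings as your existing Solr exports.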

Create a PV and PVC for Solr index backup

Follow these steps to create a Solr index backup.

  1. On the boot node, replace the string ___NFS_SERVER_IP___ in the pv-solr-index-backup-for-upgrade.yml file with the IP address of the NFS server using one of these commands:
    • If you do not have a dedicated storage server, run this command:
      sudo sed -i "s/___NFS_SERVER_IP___/$(hostname -i)/g" <extractedFolder>/microservices/hybridcloud/doc/samples/pv-solr-index-backup-for-upgrade.yml
    • If you have a dedicated storage server, run this command, replacing <shareServerIpAddress> with the actual IP address of your storage server:
      sudo sed -i "s/___NFS_SERVER_IP___/<shareServerIpAddress>/g" <extractedFolder>/microservices/hybridcloud/doc/samples/pv-solr-index-backup-for-upgrade.yml
  2. If you created your persistent storage in a directory other than /pv, then you will need to update the yml file with the correct directory location. To do this, run the following command, replacing <pv_dir> with your PV directory:
    sudo sed -i "s|path: /pv/solr-index-backup|path: <pv_dir>/solr-index-backup|g" <extractedFolder>/microservices/hybridcloud/doc/samples/pv-solr-index-backup-for-upgrade.yml
  3. Create the PV with the following command:
    sudo /usr/local/bin/kubectl  apply -f <extractedFolder>/microservices/hybridcloud/doc/samples/pv-solr-index-backup-for-upgrade.yml
  4. Create PVC by running the following command:
    sudo /usr/local/bin/kubectl  apply -f <extractedFolder>/microservices/hybridcloud/doc/samples/pvc-solr-index-backup-for-upgrade.yml
  5. Update the current Solr deployments with the backup index details by running this script:
    sudo bash <extractedFolder>/microservices/hybridcloud/bin/solrUpgradePatch.sh
    You should see something like this:
    [root@pink01 Downloads]# sudo bash microservices/hybridcloud/bin/solrUpgradePatch.sh
    
    Changed location to bin:
      /root/Downloads/microservices/hybridcloud/bin
      (relative path:  microservices/hybridcloud/bin)
    "solr1" patched
    "solr1" patched
    "solr2" patched
    "solr2" patched
    "solr3" patched
    "solr3" patched
    One or more PODs not ready yet. Retrying...
    One or more PODs not ready yet. Retrying...
    All Solr pods running
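Before continuing, you can optionally confirm that the PV and PVC created in steps 3 and 4 are bound. The grep filter below assumes that the resource names defined in the sample yml files contain the string solr-index-backup; adjust it if your names differ, and add the appropriate -n <namespace> flag if the PVC was not created in the default namespace:
sudo /usr/local/bin/kubectl get pv | grep solr-index-backup
sudo /usr/local/bin/kubectl get pvc | grep solr-index-backup
Both resources should show a STATUS of Bound.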

Connecting to a dedicated storage server

Complete these steps if you are using a dedicated storage server. If your persistent volumes are stored on your boot node, then skip to the next section.

  1. On the boot node, create a directory where you want to mount the storage server file system.
    sudo mkdir -p /pv
  2. Run the following command on the boot node to mount the storage node directory to the boot node. Replace <shareServerIpAddress> with the IP address of your storage node:
    sudo mount <shareServerIpAddress>:/pv /pv
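You can confirm that the mount succeeded with a simple check such as the following (not part of the documented steps):
df -h /pv
The output should show <shareServerIpAddress>:/pv as the filesystem mounted on /pv.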

Back up the Solr data

On the boot node, run the following script to make a backup of Solr data.
Note: If you created your persistent storage in a directory other than /pv, then you will need to update the script with the correct directory location. To do this, run this command, replacing <pv_dir> with your PV directory:
sudo sed -i "s/ NFS_SHARED_DIRECTORY="/pv/solr-index-backup" : NFS_SHARED_DIRECTORY="<pv_dir>/solr-index-backup" /g" hybridcloud/bin/solrBackupAndRestore.sh
sudo bash <extracted_folder>/microservices/hybridcloud/bin/solrBackupAndRestore.sh BACKUP
Note: If the backup fails due to Solr pod restarts, ensure that the pods are up by running the following command, and then, once they are all running, retry the previous command.
sudo /usr/local/bin/kubectl  get pods | grep solr

Once completed, you should see the message: Backup finished Successfully !!!
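If you want to confirm that backup files were actually written, you can list the backup export directory (assuming the default /pv location; substitute your PV directory if it differs):
sudo ls -lR /pv/solr-index-backup | head -20
You should see recently created files and directories from the backup run.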

Back up MongoDB

Back up the MongoDB data by running the backup/restore script:
sudo bash <extracted_folder>/microservices/hybridcloud/bin/mongoBackupAndRestore.sh --action=backup --namespace=default --backup_gz_file_name=mongo_backup.gz

A log file, /tmp/mongo_backup.gz.backup.dbs, is generated that you can view after the backup has completed.
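You can also confirm that the backup archive was written to the MongoDB persistent volume (assuming the default /pv location used later in this procedure):
sudo ls -lh /pv/mongo-node-0/data/db/mongo_backup.gz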

Back up the deployCfC folder

Back up the /opt/deployCfC folder on your boot node by running the following command:
sudo mv /opt/deployCfC/ /opt/deployCfC_backup

Uninstalling the previous version

Follow these steps to uninstall the Component Pack component:

  1. On the designated boot node, move the 6.0.0.4 version of the deployCfC folder to the /opt/ directory:
    sudo mv -f <extracted_folder>/microservices/hybridcloud/deployCfC /opt/
  2. Set permissions:
    sudo chmod -R 755 /opt/deployCfC
  3. Uninstall CfC with the --uninstall=cleaner and --alt_cfc_version=1.1.0 flags. For example, to uninstall for a standard (non-HA) deployment:
    cd /opt/deployCfC/
    
    sudo bash deployCfC.sh \
    --boot=bootserver.example.com \
    --master_list=masterserver.example.com \
    --worker_list=workerserver1.example.com,workerserver2.example.com,workerserver3.example.com \
    --proxy_list=proxyserver1.example.com \
    --uninstall=cleaner \
    --alt_cfc_version=1.1.0
    To uninstall for a high availability deployment:
    sudo bash deployCfC.sh \
    --boot=masterserver.example.com \
    --master_list=masterserver1.example.com,masterserver2.example.com,masterserver3.example.com \
    --worker_list=workerserver1.example.com,workerserver2.example.com,workerserver3.example.com \
    --proxy_list=proxyserver1.example.com,proxyserver2.example.com,proxyserver3.example.com \
    --proxy_HA_vip=9.x.x.y \
    --master_HA_vip=9.x.x.n \
    --master_HA_iface=<net iface> \
    --proxy_HA_iface=<net iface> \
    --uninstall=cleaner \
    --alt_cfc_version=1.1.0

For more information on uninstalling, see the topic Uninstalling Component Pack.

Deploying IBM Cloud Private

Deploy IBM Cloud Private, starting with Step 4 in the topic Installing IBM Cloud Private.
Note:

If deploying an HA system, make sure to follow the prerequisites in the topic ICp high-availability (HA) requirements and configuration prerequisites, and ensure that the mounts are created before attempting the deployment.

For more information on deploying IBM Cloud Private, see the topic Installing IBM Cloud Private.

Set up persistent volumes

Use the steps in one of these topics (non-HA or HA) to set up new persistent volumes for your deployment:

Copy content to the new persistent volumes

Run the following commands to copy Solr content from the old persistent volumes to the new ones:
sudo mkdir -p /pv-connections/solr-data-solr-0/backup
sudo mkdir -p /pv-connections/solr-data-solr-1/backup
sudo mkdir -p /pv-connections/solr-data-solr-2/backup
sudo bash <extracted_folder>/microservices/hybridcloud/bin/solrUpgradeCopy.sh /pv/solr-index-backup/ /pv-connections/solr-data-solr-0/backup
sudo bash <extracted_folder>/microservices/hybridcloud/bin/solrUpgradeCopy.sh /pv/solr-index-backup/ /pv-connections/solr-data-solr-1/backup
sudo bash <extracted_folder>/microservices/hybridcloud/bin/solrUpgradeCopy.sh /pv/solr-index-backup/ /pv-connections/solr-data-solr-2/backup
sudo chmod a+rw /pv-connections/solr-data-solr-0/backup -R
sudo chmod a+rw /pv-connections/solr-data-solr-1/backup -R
sudo chmod a+rw /pv-connections/solr-data-solr-2/backup -R 
Note: If you set up persistent volumes in different directories than above, update the commands with the directories you used.
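Optionally, compare the size of the copied data against the source to confirm that the copies completed (again, adjust the paths if you used different directories):
sudo du -sh /pv/solr-index-backup
sudo du -sh /pv-connections/solr-data-solr-0/backup /pv-connections/solr-data-solr-1/backup /pv-connections/solr-data-solr-2/backup
Each new backup directory should be roughly the same size as the source directory.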

Install Component Pack 6.0.0.4

Use the following command to install Component Pack 6.0.0.4:
sudo bash <extracted_folder>/microservices/hybridcloud/install.sh
Note: The previous command installs all components in Component Pack. If you want to install a single component, such as Orient Me, you can install using Starter Stack.
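After the installation finishes, you can check that the Component Pack pods have started in the connections namespace before moving on:
sudo /usr/local/bin/kubectl get pods -n connections
Wait until the pods report a Running status; the Solr pods in particular must be running before you restore the Solr data later in this procedure.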

Connecting to a dedicated storage server

Complete these steps if you are using a dedicated storage server. If your persistent volumes are stored on your boot node, then skip to the next section.

  1. On the boot node, create a directory where you want to mount the storage server file system.
    sudo mkdir -p /pv-connections
  2. Run the following command on the boot node to mount the storage node directory to the boot node. Replace <shareServerIpAddress> with the IP address of your storage node:
    sudo mount <shareServerIpAddress>:/pv-connections /pv-connections

Restore Solr data

Make sure that all three Solr instances are running by using the following command:
sudo /usr/local/bin/kubectl  get pods -n connections | grep solr
You will see something like this:

solr-0      1/1       Running   0          1h
solr-1      1/1       Running   0          1h
solr-2      1/1       Running   0          1h    

Do not proceed to the next step until all three Solr pods show as Running.
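If the pods are still starting, you can optionally watch them until all three reach the Running state (a convenience command, not part of the documented procedure); press Ctrl+C to exit once they are ready:
watch -n 10 'sudo /usr/local/bin/kubectl get pods -n connections | grep solr'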

Run the Solr restore script on the boot node using the following command:
sudo bash <extracted_folder>/microservices/hybridcloud/bin/solrBackupAndRestore.sh RESTORE

Restore MongoDB

Follow these steps to restore the MongoDB data:
Note: If you set up persistent volumes in directories other than /pv and /pv-connections, then you need to update the commands in this step with the directories you used.
  1. Copy the backed up Mongo data from the old PV to the new one:
    sudo cp /pv/mongo-node-0/data/db/mongo_backup.gz /pv-connections/mongo-node-0/data/db
  2. Verify that the copy has been successful by comparing the md5sum of the two files:
    sudo md5sum /pv/mongo-node-0/data/db/mongo_backup.gz /pv-connections/mongo-node-0/data/db/mongo_backup.gz
  3. If the md5sum of the two files match, then delete the original backup file:
    sudo rm -f /pv/mongo-node-0/data/db/mongo_backup.gz
  4. Restore the data:
    sudo bash <extracted_folder>/microservices/hybridcloud/bin/mongoBackupAndRestore.sh --action=restore --namespace=connections --backup_gz_file_name=mongo_backup.gz
A log file is generated that you can view after the restore has completed. It will be located in /tmp/mongo_backup.gz.restore.db.
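For example, to review the end of the restore log:
sudo tail -n 20 /tmp/mongo_backup.gz.restore.db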

Clean up

The upgrade is now complete. If you want, you can now delete the old persistent storage data by running the following command:
sudo rm -rf /pv/*
Attention: If you created your original persistent storage in a directory other than /pv, then you will need to update the above command with the correct directory location. Make sure you DO NOT delete your new persistent storage directory.
You can delete the backup data after restoring it:
sudo rm -rf /pv-connections/solr-data-solr-0/backup
sudo rm -rf /pv-connections/solr-data-solr-1/backup
sudo rm -rf /pv-connections/solr-data-solr-2/backup
You can also delete all of the lines in the file /etc/exports that reference the old persistent storage directory (/pv).
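One way to do this is with a single sed command. The following is a sketch that assumes the old exports all begin with /pv/ and your new exports use /pv-connections; review /etc/exports first if you are unsure:
sudo sed -i '\|^/pv/|d' /etc/exports
sudo exportfs -ra
The exportfs -ra command reloads the updated exports file, as in the earlier steps.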