Deploying the IBM Workload Automation components

Deploying the product components involves the following high-level steps:

  1. Securing communication using either the IBM Cert Manager Operator or your custom certificates.
  2. Creating a secrets file to store passwords for the console and server components, or if you use custom certificates, to add your custom certificates to the Certificates Secret.
  3. Installing Automation Hub plug-ins.
  4. Installing custom plug-ins.
  5. Deploying the IBM Workload Automation product components.

Before you begin

  1. You must have already deployed the IBM Workload Automation operator before you install the product components. For instructions on how to start the operator, see the README for Deploying the IBM Workload Automation Operator.

  2. You need to log in to your cluster by using the OpenShift CLI. You can retrieve the login command from the OpenShift web console: open the web console, click the drop-down menu in the upper right corner, and then click Copy Login Command. Paste the copied command in your command terminal.

Managing certificates

You can manage certificates either using IBM Cert Manager Operator or using your custom certificates.

Managing certificates using the IBM Cert Manager Operator

Certificates are a prerequisite for installing IBM Workload Automation.

To generate default certificates, perform the following steps:

  1. Create the certificate authority (CA):

openssl.exe genrsa -out ca.key 2048

openssl.exe req -x509 -new -nodes -key ca.key -subj "/CN=WA_ROOT_CA" -days 3650 -out ca.crt

.\openssl genrsa -out ca.key 2048

.\openssl req -x509 -new -nodes -key ca.key -subj "/CN=WA_ROOT_CA" -days 3650 -out ca.crt

  2. Define the CA in the secret:

oc create secret tls SECRET_NAME --cert=ca.crt --key=ca.key --namespace=<workload-automation-project>

where SECRET_NAME is the name of your secret.

  3. Install the IBM Cert Manager Operator from IBM Common Services.

    • If you are using an IBM cluster, common services are enabled by default.
    • If you are not using an IBM cluster, enable common services by following the procedure available at: operand-lifecycle-management
  4. Create the issuer.

    a. Click IBM Cert Manager Operator in the ibm-common-services project.

    b. In the Provided APIs panel, move to the Issuer section and select Create instance.

    c. In the panel displayed, replace the existing text with the following:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
 labels:
   app.kubernetes.io/instance: ibm-cert-manager-operator
   app.kubernetes.io/managed-by: ibm-cert-manager-operator
   app.kubernetes.io/name: cert-manager
 name: wa-ca-issuer
 namespace: <workload-automation-project>
spec: 
 ca:
  secretName: SECRET_NAME

where SECRET_NAME is the name of the secret you created in step 2 and <workload-automation-project> is the project in which you deploy IBM Workload Automation.

Ensure you maintain the wa-ca-issuer.yaml file name.
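
If you prefer to create the issuer from the command line instead of the web console, a minimal sketch, assuming you saved the manifest above as wa-ca-issuer.yaml:

oc apply -f wa-ca-issuer.yaml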

Certificates expire by default after 90 days. WebSphere Application Server Liberty Base checks the secrets every 5 seconds. To change the certificates, update the secrets; the modification takes effect immediately.
The expiration date cannot be modified. To understand how it is calculated, consider the following properties, which define the expiration date.

duration: 2160h # 90d

renewBefore: 360h # 15d

where duration is the certificate lifetime (90 days) and renewBefore is the amount of time before expiration at which the certificate is renewed (15 days).

These properties are not available for modification.

Add your custom certificates to the Certificates Secret

If you use customized certificates (useCustomizedCert:true), you must create a secret in the <workload-automation-project> that contains the customized files replacing the server default ones. Customized files must have the same names as the default ones.

To use custom certificates, set useCustomizedCert:true and use oc to create the secret in the <workload-automation-project>.
For the master domain manager, type the following command:

oc create secret generic waserver-cert-secret --from-file=TWSClientKeyStore.kdb --from-file=TWSClientKeyStore.sth --from-file=TWSClientKeyStoreJKS.jks --from-file=TWSClientKeyStoreJKS.sth --from-file=TWSServerKeyFile.jks --from-file=TWSServerKeyFile.jks.pwd --from-file=TWSServerTrustFile.jks --from-file=TWSServerTrustFile.jks.pwd --from-file=ltpa.keys -n <workload-automation-project>   
   

For the Dynamic Workload Console, type the following command:

 oc create secret generic waconsole-cert-secret --from-file=TWSServerKeyFile.jks --from-file=TWSServerKeyFile.jks.pwd --from-file=TWSServerTrustFile.jks --from-file=TWSServerTrustFile.jks.pwd --from-file=ltpa.keys -n <workload-automation-project>   
   

For the dynamic agent, type the following command:

oc create secret generic waagent-cert-secret --from-file=TWSClientKeyStore.kdb --from-file=TWSClientKeyStore.sth --from-file=TWSClientKeyStoreJKS.jks --from-file=TWSClientKeyStoreJKS.sth -n <workload-automation-project>    

where TWSClientKeyStore.kdb, TWSClientKeyStore.sth, TWSClientKeyStoreJKS.jks, TWSClientKeyStoreJKS.sth, TWSServerTrustFile.jks, and TWSServerKeyFile.jks are the container keystores, truststores, and stash files containing your customized certificates.

For details about custom certificates, see Connection security overview.

Note: Passwords for “TWSServerTrustFile.jks” and “TWSServerKeyFile.jks” files must be entered in the respective “TWSServerTrustFile.jks.pwd” and “TWSServerKeyFile.jks.pwd” files.
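
For example, a minimal sketch of creating the .pwd files, assuming the passwords are stored in the files as plain text (replace the placeholder passwords with your own):

echo -n 'myKeyStorePassword' > TWSServerKeyFile.jks.pwd
echo -n 'myTrustStorePassword' > TWSServerTrustFile.jks.pwd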

Note: If you set db.sslConnection:true, you must also set useCustomizedCert:true (on both the server and console charts) and, in addition, you must add the following certificates in the customized SSL certificates secret on both the server and console charts: TWSServerTrustFile.jks, TWSServerKeyFile.jks, TWSServerTrustFile.jks.pwd, and TWSServerKeyFile.jks.pwd.

Customized files must have the same names as the ones listed above.

If you want to use SSL connection to DB, set db.sslConnection:true and useCustomizedCert:true, then use oc to create the secret in the same namespace where you want to deploy the chart:

oc create secret generic release_name-secret --from-file=TWSServerTrustFile.jks --from-file=TWSServerKeyFile.jks --from-file=TWSServerTrustFile.jks.pwd --from-file=TWSServerKeyFile.jks.pwd --namespace=<workload-automation-project>

If you define custom certificates, you are in charge of keeping them up to date, therefore, ensure you check their duration and plan to rotate them as necessary. To rotate custom certificates, delete the previous secret and upload a new secret, containing new certificates. The pod restarts automatically and the new certificates are applied.
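
For example, a minimal sketch of rotating the master domain manager certificates, reusing the same file names shown above (adjust the secret name and file list for the console or agent):

oc delete secret waserver-cert-secret -n <workload-automation-project>
oc create secret generic waserver-cert-secret --from-file=TWSClientKeyStore.kdb --from-file=TWSClientKeyStore.sth --from-file=TWSClientKeyStoreJKS.jks --from-file=TWSClientKeyStoreJKS.sth --from-file=TWSServerKeyFile.jks --from-file=TWSServerKeyFile.jks.pwd --from-file=TWSServerTrustFile.jks --from-file=TWSServerTrustFile.jks.pwd --from-file=ltpa.keys -n <workload-automation-project>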

Defining Single Sign-On options

When defining Single Sign-On (SSO), you have the following options:

Secrets

Create a secrets file to store passwords for the console and server components, or if you use custom certificates, to add your custom certificates to the Certificates Secret.

Create a secrets file to store passwords for the console and server components

Manually create a mysecret.yaml file to store passwords. The mysecret.yaml file must contain the following parameters:

apiVersion: v1
kind: Secret
metadata:
  name: wa-pwd-secret
  namespace: <workload-automation-project>
type: Opaque
data:
   WA_PASSWORD: <hidden_password>
   DB_ADMIN_PASSWORD: <hidden_password>
   DB_PASSWORD: <hidden_password>
   SSL_PASSWORD: <hidden_password>

where:

  • WA_PASSWORD is the password of the IBM Workload Automation user.
  • DB_ADMIN_PASSWORD is the password of the database administrator.
  • DB_PASSWORD is the password of the database user that accesses the IBM Workload Automation database.
  • SSL_PASSWORD is the password for the custom certificates in PEM format.

Note a: The SSL_PASSWORD parameter is required only if you use custom certificates in PEM format.
Note b: Each password value must be base64-encoded. Run the encoding command separately for each password that must be entered in the mysecret.yaml file: WA_PASSWORD, DB_ADMIN_PASSWORD, DB_PASSWORD, and SSL_PASSWORD.
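
For example, a minimal sketch of encoding one password value (the output string is what you paste into mysecret.yaml; repeat the command for each password):

echo -n 'myPassword' | base64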
After the file has been created and filled in, import it by completing the following steps:

  1. From the command line, log on to the OpenShift cluster:
    oc login --token=xxx --server=https://<OpenShift_server>

  2. Launch the following command:

     oc apply -f <my_path>/mysecret.yaml
    

where <my_path> is the location path of the mysecret.yaml file.
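
To confirm that the secret was created, a quick check:

   oc get secret wa-pwd-secret -n <workload-automation-project>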

IBM Identity and Access Management

IBM Cloud™ Identity and Access Management (IAM) enables you to securely authenticate users for platform services and control access to resources consistently across IBM Cloud.
Perform the following steps to enable IAM on your workload:

  1. Create a client using an OpenID Connect (OIDC) registration yaml file. See the following file for reference:
apiVersion: oidc.security.ibm.com/v1
kind: Client
metadata:
  name: workload-automation-oidc-client
  namespace: <workload-automation-project>
spec:
  oidcLibertyClient:
    post_logout_redirect_uris:
       - >-
         https://wa-console-route-<workload-automation-project>.apps.<domain_name>
    redirect_uris:
      - >-
        https://wa-console-route-<workload-automation-project>.apps.<domain_name>/oidcclient/redirect/client01
    trusted_uri_prefixes:
      - >-
        https://wa-console-route-<workload-automation-project>.apps.<domain_name>
  secret: workload-automation-oidc-secret
  2. Create the OAuth client configuration file. See the following file for reference:
kind: OAuthClient
apiVersion: oauth.openshift.io/v1
metadata:
  name: <clientId>  
secret: workload-automation-oidc-secret
redirectURIs:
   - >-
     https://wa-console-route-<workload-automation-project>.apps.<domain_name>/api/auth/callback
grantMethod: auto

where

clientId is the clientId specified in the workload-automation-oidc-secret file. The generated workload-automation-oidc-secret includes the client ID and client secret.

The client name and secret should match the client ID and client secret generated by workload-automation-oidc-secret. The name of the client secret must match the value specified for the console.openIdClientName parameter in the installation .yaml file.
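
If you work from the command line, a minimal sketch of applying both resources, assuming you saved the manifests above as oidc-client.yaml and oauth-client.yaml (placeholder file names):

oc apply -f oidc-client.yaml
oc apply -f oauth-client.yaml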

  3. Install the service on the operator with Single Sign-On with JWT (enableSSO=true). Add to the operator an admin group you have previously defined on IAM with the following syntax: icp:<group_name>:admin. In the following image, the group name is workload_automation.
    Admin group
  4. Ensure at least one user is present in the icp:<group_name>:admin group in IBM Cloud Pak.

Installing Automation Hub integrations in the Case

You can extend IBM Workload Automation with a number of out-of-the-box integrations, or plug-ins. Complete documentation for the integrations is available on Automation Hub. Use this procedure to integrate only the integrations you need to automate your business workflows.

Note: You must perform this procedure before deploying the server and console containers. Any changes made post-installation are applied the next time you perform an upgrade.

The following procedure describes how you can create and customize a configMap file to identify the integrations you want to make available in your Workload Automation environment:

  1. Ensure you are logged in to the OpenShift Enterprise cluster.

  2. Switch to the <workload-automation-project> project: oc project <workload-automation-project>

  3. Create a .yaml file, for example, plugins-config.yaml, with the following content. This file name will need to be specified in a subsequent step.

     ####################################################################
     # Licensed Materials Property of HCL*
     # (c) Copyright HCL Technologies Ltd. 2021. All rights reserved.
     #
     # * Trademark of HCL Technologies Limited
     ####################################################################
    
     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: <configmap_name>
     data:
       plugins.properties: |
           com.hcl.scheduling.agent.kubernetes
           com.hcl.scheduling.agent.udeploycode
           com.hcl.wa.plugin.ansible
           com.hcl.wa.plugin.automationanywherebotrunner
           com.hcl.wa.plugin.automationanywherebottrader
           com.hcl.wa.plugin.awscloudformation
           com.hcl.wa.plugin.awslambda
           com.hcl.wa.plugin.awssns
           com.hcl.wa.plugin.awssqs
           com.hcl.wa.plugin.azureresourcemanager
           com.hcl.wa.plugin.blueprism
           com.hcl.wa.plugin.compression
           com.hcl.wa.plugin.encryption
           com.hcl.wa.plugin.gcpcloudstorage
           com.hcl.wa.plugin.gcpdeploymentmanager
           com.hcl.wa.plugin.jdedwards
           com.hcl.wa.plugin.obiagent
           com.hcl.wa.plugin.odiloadplan
           com.hcl.wa.plugin.oraclehcmdataloader
           com.hcl.wa.plugin.oracleucm
           com.hcl.wa.plugin.saphanaxsengine
           com.hcl.waPlugin.chefbootstrap
           com.hcl.waPlugin.chefrunlist
           com.hcl.waPlugin.obirunreport
           com.hcl.waPlugin.odiscenario
           com.ibm.scheduling.agent.apachespark
           com.ibm.scheduling.agent.aws
           com.ibm.scheduling.agent.azure
           com.ibm.scheduling.agent.biginsights
           com.ibm.scheduling.agent.centralizedagentupdate
           com.ibm.scheduling.agent.cloudant
           com.ibm.scheduling.agent.cognos
           com.ibm.scheduling.agent.database
           com.ibm.scheduling.agent.datastage
           com.ibm.scheduling.agent.ejb
           com.ibm.scheduling.agent.filetransfer
           com.ibm.scheduling.agent.hadoopfs
           com.ibm.scheduling.agent.hadoopmapreduce
           com.ibm.scheduling.agent.j2ee
           com.ibm.scheduling.agent.java
           com.ibm.scheduling.agent.jobdurationpredictor
           com.ibm.scheduling.agent.jobmanagement
           com.ibm.scheduling.agent.jobstreamsubmission
           com.ibm.scheduling.agent.jsr352javabatch
           com.ibm.scheduling.agent.mqlight
           com.ibm.scheduling.agent.mqtt
           com.ibm.scheduling.agent.mssqljob
           com.ibm.scheduling.agent.oozie
           com.ibm.scheduling.agent.openwhisk
           com.ibm.scheduling.agent.oracleebusiness
           com.ibm.scheduling.agent.pichannel
           com.ibm.scheduling.agent.powercenter
           com.ibm.scheduling.agent.restful
           com.ibm.scheduling.agent.salesforce
           com.ibm.scheduling.agent.sapbusinessobjects
           com.ibm.scheduling.agent.saphanalifecycle
           com.ibm.scheduling.agent.softlayer
           com.ibm.scheduling.agent.sterling
           com.ibm.scheduling.agent.variabletable
           com.ibm.scheduling.agent.webspheremq
           com.ibm.scheduling.agent.ws
    
  4. In the plugins-config.yaml file, assign a name of your choice to the configmap:

     name: <configmap_name>
    
  5. Assign this same name to the Global.customPlugins parameter in the custom resource. See the following readme file for more information about this global parameter: Readme file.

  6. Delete the lines related to the integrations you do not want to make available in your environment. The remaining integrations will be integrated into Workload Automation at deployment time. Save your changes to the file.

    You can always edit this file again and add an integration back in the future. The integration becomes available the next time you update the console and server containers.

  7. To apply the configMap to your environment and integrate the plug-ins, run the following command:

     oc apply -f plugins-config.yaml
    
  8. Update the Operator custom resource to include the configPlugins global parameter and value:

     configPlugins: <configmap_name>   
    

Proceed to deploy the product components. After the deployment, you can include jobs related to these integrations when defining your workload.
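
To confirm that the configMap was created, a quick check:

oc get configmap <configmap_name> -n <workload-automation-project>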

Installing custom plug-ins in the Case

In addition to the integrations available on Automation Hub, you can extend IBM Workload Automation with custom plug-ins that you create. For information about creating a custom plug-in see Workload Automation Lutist Development Kit on Automation Hub.

To install a custom plug-in and make it available to be used in your workload, perform the following steps before deploying or upgrading the console and server components:

  1. Create a new folder with a name of your choosing, for example, “my_custom_plugins”.

  2. Create a Dockerfile with the following content and save it, as is, to the new folder, “my_custom_plugins”. This file does not require any customizations.

     FROM registry.access.redhat.com/ubi8:8.3
    
     ENV WA_BASE_UID=999
     ENV WA_BASE_GID=0
     ENV WA_USER=wauser
     ENV WA_USER_HOME=/home/${WA_USER}
    
     USER 0
    
     RUN echo "Creating \"${WA_USER}\" user for Workload Automation and assign it to group \"${WA_BASE_GID}\"" \
     && userdel systemd-coredump \
     && if  [ ${WA_BASE_GID} -ne 0 ];then \
     groupadd -g ${WA_BASE_GID} -r ${WA_USER};fi \
     && /usr/sbin/useradd -u ${WA_BASE_UID} -m -d ${WA_USER_HOME} -r -g ${WA_BASE_GID} ${WA_USER}
    
     RUN mkdir -p /opt/wa_plugins /opt/wautils /tmp/custom_plugins
     COPY plugins/* /opt/wa_plugins/
    
     RUN chown -R ${WA_BASE_UID}:0 /opt/wa_plugins \
     && chmod -R 755 /opt/wa_plugins
    
     COPY copy_custom_plugins.sh /opt/wautils/copy_custom_plugins.sh
    
     RUN chmod 755 /opt/wautils/copy_custom_plugins.sh \
     && chown ${WA_BASE_UID}:${WA_BASE_GID} /opt/wautils/copy_custom_plugins.sh
    
     USER ${WA_BASE_UID}
    
     CMD [ "/opt/wautils/copy_custom_plugins.sh" ] 
    
  3. Create another file named copy_custom_plugins.sh, with the following content, and save it to the new folder you created, “my_custom_plugins”:

     #!/bin/sh
     ####################################################################
     # Licensed Materials Property of HCL*
     # (c) Copyright HCL Technologies Ltd. 2021. All rights reserved.
     #
     # * Trademark of HCL Technologies Limited
     ####################################################################
    
    
     copyCustomPlugins(){
         SOURCE_PLUGINS_DIR=$1
         REMOTE_PLUGINS_DIR=$2
    
    
         echo "I: Starting copy of custom plugins...." 
         if [ -d "${SOURCE_PLUGINS_DIR}" ] && [ -d "${REMOTE_PLUGINS_DIR}" ];then
     	echo "I: Copying custom plugins...." 
     	cp --verbose -R ${SOURCE_PLUGINS_DIR} ${REMOTE_PLUGINS_DIR}
         fi
     }
    
     ###############
     #MAIN
     ###############
    
     copyCustomPlugins $1 $2
    
  4. In the new folder “my_custom_plugins”, create a sub-folder named “plugins”.

  5. Copy your custom .jar plug-ins to the “plugins” sub-folder.

  6. Run the following command to build the Docker image:

     docker build -t <your_docker_registry_name>/<your_image_name>:<your_tag> .
    

    where <your_docker_registry_name> is the name of your docker registry, <your_image_name> is the name of your Docker image, and <your_tag> is the tag you assigned to your Docker image.

  7. Run the following command to push the Docker image to the registry:

     docker push <your_docker_registry_name>/<your_image_name>:<your_tag>
    
  8. Configure the customPluginImageName global parameter in the Operator custom resource with the name of the image and the tag built in the previous steps. See the following readme file for more information about this global parameter: Readme file.

     customPluginImageName: <your_docker_registry_name>/<your_image_name>:<your_tag>
    

Proceed to deploy the product components. After the deployment, you can include jobs related to your custom plug-ins when defining your workload.
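
For reference, the build context described in the steps above would look like the following layout (the folder name and the sample JAR name are placeholders):

     my_custom_plugins/
     ├── Dockerfile
     ├── copy_custom_plugins.sh
     └── plugins/
         └── my_custom_plugin.jar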

Deploy the IBM Workload Automation product components

When you have the operator running on your namespace, you can install the IBM Workload Automation product components. You can use a custom resource to start the component on your cluster and provide it with the secrets that you created in the previous steps. Note that you can deploy only one product component per OpenShift project.

After the Operator has been installed and configured, create the Operator instance that enables you to leverage the Workload Automation components. Ensure you have the <workload-automation-project> admin role.

  1. From the Red Hat OpenShift Container Platform console, go to Operators -> Installed Operators and click the Workload Automation Operator hyperlink. Click Create instance. A YAML file is displayed that allows you to configure the parameters for each component.

  2. Modify the values of the parameters in the YAML file. These parameters are described in the Configuration section of the README IBM Workload Automation.

  3. Click Create. All of the IBM Workload Automation components and resources are installed in the <workload-automation-project>.

Verifying the installation

After the deployment procedure is complete, you can validate the deployment to ensure that everything is working. You can do this in several ways:

Status parameters for a custom resource

To verify that the Operator is successfully deployed, from the OCP web console, go to Operators -> Installed operators. The status is displayed as Succeeded ("install strategy completed with no errors"), as in the following image:

Installed Operator with succeeded status

Manually validate the installation

To manually verify that the installation completed successfully, perform the following checks:

  1. Ensure that all the IBM Workload Automation pods created during the installation in the OCP Workload Automation project (the default is workload-automation) are in the running state; see the command sketch after this list. In this example, 2 server pods, 3 agent pods, and 1 console pod are all in the running state: all pods in running state

  2. Access the Workload Automation wa-waserver-0 server pod (by default, this pod takes on the master role after the installation) and run the commands that list and display the workstations, for example composer list cpu=@ and conman showcpus.

An example of the output for the list command is as follows:

listcpu output

An example of the output for the showcpus command is as follows:

showcpu output
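
A minimal check for step 1, assuming the default project name workload-automation:

oc get pods -n workload-automation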

To ensure the Dynamic Workload Console logout page redirects to the login page, modify the value of the logout.url entry in the authentication_config.xml file:

   <jndiEntry value="${logout.url}" jndiName="logout.url" />

where the logout.url string should be replaced with the logout URL of the provider.
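
For example, a hypothetical entry pointing to a provider logout endpoint (the URL is a placeholder; use your provider's actual logout URL):

   <jndiEntry value="https://<identity_provider_host>/logout" jndiName="logout.url" />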

Adding additional groups

To add additional groups, perform the following steps:

  1. Log in to IAM and retrieve the name of the icp:<group_name>:admin group. In the current example, the group name is icp:workload_automation:admin.
  2. Log in to the Dynamic Workload Console.
  3. Create the group. In the current example, the group name is icp:workload-automation2:admin.
  4. Log in to the master domain manager.
  5. Create the ACL for the group. In the current example, the group name is icp:workload_automation:admin.

For more information, see Configuring the Dynamic Workload Console.

Upgrade and rollback

The following procedures are provided to upgrade and roll back the Workload Automation components. Since the product configuration files are isolated on a persistent volume in a dedicated directory, these procedures do not impact availability or performance.

Upgrading the product components

Complete the following steps to upgrade any or all of the following Workload Automation container images:

Note: If you have configured a configMap file as described in Installing Automation Hub integrations in the Case, this upgrade procedure automatically upgrades any integrations or plug-ins previously installed from Automation Hub and also implements any changes made to the configMap file post-installation.

  1. Retrieve the new tag version related to each product component by running the following commands and maintain a copy of them:

ibmcloud cr image-list --restrict cp/ibm-workload-automation-agent-dynamic
ibmcloud cr image-list --restrict cp/ibm-workload-automation-console
ibmcloud cr image-list --restrict cp/ibm-workload-automation-server

  2. From the Red Hat OpenShift Container Platform console, go to Operators -> Installed Operators and click the Workload Automation Operator hyperlink.

  3. Select the IBM Workload Automation Operator tab and then select the wa hyperlink. Next, from the YAML tab, edit the value of the tag in the YAML file. For each of the components you want to upgrade, locate the tag parameter and update it with the appropriate value.

...
...
 "image": {
"pullPolicy": "Always",
"repository": "workload-automation-agent-dynamic",
"tag": "<new_tag>"
...
...
  4. Save the changes in the YAML file and verify that the images are updating in the Workload Automation pods.

The product components are now updated.
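
To watch the pods while they pull the new images, a minimal check (assuming the default project name workload-automation):

oc get pods -n workload-automation -w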

Rolling back the product components

Complete the following steps to roll back any or all of the following Workload Automation container images to the previous version:

  1. Retrieve the previous tag version related to each product component by running the following commands and maintain a copy of them:

ibmcloud cr image-list --restrict cp/ibm_wa_agent
ibmcloud cr image-list --restrict cp/ibm_wa_console
ibmcloud cr image-list --restrict cp/ibm_wa_server

  2. From the Red Hat OpenShift Container Platform console, go to Operators -> Installed Operators and click the Workload Automation Operator hyperlink.

  3. Select the IBM Workload Automation Operator tab and then select the wa hyperlink. Next, from the YAML tab, edit the value of the tag in the YAML file. For each of the components you want to roll back, locate the tag parameter and update it with the appropriate value.

...
...
 "image": {
"pullPolicy": "Always",
"repository": "workload-automation-agent-dynamic",
"tag": **"<previous_tag>"**
...
...
  4. Save the changes in the YAML file and verify that the images are updating in the Workload Automation pods.

The product components have now been returned to the previous version.

Backup and restore IBM Workload Automation

This topic describes how to roll back a master domain manager to the previous status.

Important: The first step of the procedure is to create a backup copy of the database and of some directories. The backup is required for the subsequent restore operation which returns the master domain manager to the previous status.

You can revert to an earlier status of the master domain manager by performing the following rollback procedure. This procedure is not recommended if you have a backup master domain manager, and it should be used only if the IBM Workload Automation database is corrupted.

  1. To create a backup, set the fence to GO for all workstations in IBM Workload Automation by using the Dynamic Workload Console or by typing the following command:

       conman fence /@/@; 101
    
  2. Check that all the jobs are in a complete state.

  3. Create a backup copy of the IBM Workload Automation database, and of the Persistent Volume Claim (PVC) containing IBM Workload Automation data by running the following command:

       oc rsync wa-waserver-0:/home/wauser/wadata/ wadata/
    
  4. To restore the backup, set the fence to GO for all workstations in IBM Workload Automation by using the Dynamic Workload Console or by typing the following command:

       conman fence /@/@; 101
    
  5. Check that all the jobs are in a complete state.

  6. Restore the IBM Workload Automation Database, and PVC containing the files by running the following command:

       oc rsync wadata/ wa-waserver-0:/home/wauser/wadata/
    
  7. Run the ResetPlan command to clean the IBM Workload Automation queues and restart from a clean environment.

  8. Run the JnextPlan command to generate the plan for the day, starting from the current timestamp until tomorrow's start of day. See the following syntax:

       JnextPlan -from mm/dd/yyyy hhmm tz timezone -to mm/dd/yyyy hhmm tz timezone
    

For example, to generate a plan from 15:41 with a start of day of 0000 in the UTC time zone, type the following command:

      JnextPlan -from 06/19/2020 1541 tz UTC -to 06/20/2020 0000 tz UTC

Results

IBM Workload Automation data have now been restored to their previous state.

Next steps

By default, a single server, console, and agent are installed. If you want to change the topology for IBM Workload Automation, see the configuration section in the README IBM Workload Automation.

Change history

Added June 2021 - version

Added February 2021 - version 1.4.1