Deploying HCL Commerce on a Kubernetes cluster

HCL Commerce is a single, unified e-commerce platform that offers the ability to do business directly with consumers (B2C) or directly with businesses (B2B). It is a customizable, scalable, distributed, and high availability solution that is built to use open standards. It provides easy-to-use tools for business users to centrally manage a cross-channel strategy. Business users can create and manage precision marketing campaigns, promotions, catalog, and merchandising across all sales channels. HCL Commerce uses cloud friendly technology to make deployment and operation both easy and efficient.

A complete HCL Commerce environment is composed of an authoring (auth) environment and a live (live) environment. The authoring environment is for site administration and business users to make changes to the site, while the live environment is for shopper access. A third grouping also exists within HCL Commerce Version 9.1. The shared (share) environment contains the applications that can be consumed by both auth and live environment types. This is used for the new Elasticsearch-based search solution.

For more information on HCL Commerce, see HCL Commerce product overview.

HCL Commerce supports several deployment configurations. The provided Helm Chart uses Vault configuration mode by default. Vault is also the recommended configuration mode for HCL Commerce because it is designed to store configuration data securely. HCL Commerce also uses Vault as a Certificate Authority to issue certificates to each application so that the applications can communicate with one another securely. Therefore, ensure that you have a Vault service available for HCL Commerce to access. The following steps highlight the minimum requirements before deploying HCL Commerce.

For non-production environments, you can consider using hcl-commerce-vaultconsul-helmchart, which deploys and initializes Vault and populates it with data for HCL Commerce. However, that chart runs Vault in development, non-high availability (HA) mode and does not handle the Vault token securely. Therefore, it should not be used for production environments. See Vault Concepts for all considerations that must be made to run Vault in a production setting.

Important: The environment that you create should not be used for a live production site without further consideration toward security hardening, load balancing, ingress routing, and performance tuning. To operate HCL Commerce Version 9.1 in a live production environment, you must commit further time and resources to both performance and security considerations.

With load balancing and ingress routing specifically, you can configure which services you want to expose externally, and restrict the remaining services within the cluster network. This configuration limits their access from and exposure to the wider Internet.
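For example, a common pattern is to expose only the ingress controller to external traffic and to block direct access to the application pods with a Kubernetes NetworkPolicy. The following is a minimal, illustrative sketch only; the commerce namespace and the ingress-nginx namespace name are assumptions and are not part of the HCL Commerce Helm Chart.

  # Illustrative NetworkPolicy (assumed names): allow traffic to pods in the
  # "commerce" namespace only from within that namespace and from the ingress
  # controller's namespace.
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: restrict-commerce-traffic
    namespace: commerce
  spec:
    podSelector: {}                # applies to all pods in the commerce namespace
    policyTypes:
      - Ingress
    ingress:
      - from:
          - podSelector: {}        # pods in the same namespace
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: ingress-nginx   # assumed ingress controller namespace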

Before you begin

  • Ensure that you have deployed Vault. Vault is a mandatory component that is used by default as a Certificate Agent to automatically issue certificates, as well as to store and retrieve essential deployment configuration variables and secrets. For more information, see Deploying a development Vault for HCL Commerce on Kubernetes.
  • Ensure that your environment is prepared. To set up the appropriate environment, see Prerequisites for deploying HCL Commerce on a Kubernetes cluster.
  • HCL Commerce Version 9.1.7.0 or later: Beginning with HCL Commerce 9.1.7.0, a Power Linux version of the Helm Chart is included for use on that platform. Ensure that you are using the correct version of the Helm Chart for the platform that you are deploying to.

Procedure

  1. Optional: If you are using the Elasticsearch-based search solution for HCL Commerce, you must deploy Elasticsearch, Zookeeper, and Redis.
    Important:
    • Ensure that Elasticsearch, Zookeeper, and Redis are all deployed with persistence enabled. This ensures that your search index and connector configurations are preserved if the containers are restarted. (An illustrative sketch of typical persistence settings follows these notes.)
    • Elasticsearch, Zookeeper, and Redis must be deployed with specific Helm Chart versions that are compatible with the sample values bundled in your cloned HCL Commerce Helm Chart or HCL Commerce Plinux Helm Chart Git project. These versions are referenced in the hcl-commerce-helmchart/stable/hcl-commerce/Chart.yaml file of the cloned HCL Commerce Helm Chart Git project. Ensure that the versions specified with the version parameter in the following commands match the values that are referenced in this file.
    Note: If you are deploying Elasticsearch, Zookeeper, and Redis on Red Hat OpenShift, you might need to grant the privileged Security Context Constraint (SCC) to the service account that you choose to use, to prevent security errors.

    For example, oc adm policy add-scc-to-user privileged -z default -n NAMESPACE
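    As a rough illustration of what the persistence settings referenced in the Important block above typically look like (field names vary by chart version; the bundled sample values files and each chart's own values.yaml take precedence), the relevant sections resemble the following.

      # elasticsearch-values.yaml (elastic/elasticsearch chart) - illustrative only
      persistence:
        enabled: true
      volumeClaimTemplate:
        resources:
          requests:
            storage: 30Gi

      # zookeeper-values.yaml (bitnami/zookeeper chart) - illustrative only
      persistence:
        enabled: true
        size: 8Gi

      # redis-values.yaml (bitnami/redis chart) - illustrative only
      master:
        persistence:
          enabled: true
          size: 8Gi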

    1. Deploy Elasticsearch.
      1. Create a namespace for Elasticsearch.
        kubectl create ns elastic
      2. Add the Helm Chart repository.
        helm repo add elastic https://helm.elastic.co
      3. Deploy Elasticsearch using a local elasticsearch-values.yaml file.

        A sample version of this file is available in the sample_values directory of your cloned HCL Commerce Helm Chart Git project.

        helm install elasticsearch elastic/elasticsearch -n elastic -f elasticsearch-values.yaml --version="elasticsearch-chart-version"
      4. Monitor the deployment and ensure that all pods are healthy (example commands are shown at the end of this step).

      For more information about deploying Elasticsearch with Helm, see the Elasticsearch Helm Chart documentation.
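      To monitor the deployment, you can watch the pods and inspect any pod that does not reach the Ready state. These are standard kubectl commands; the pod name shown is illustrative and depends on your values file.

        kubectl get pods -n elastic -w
        kubectl describe pod elasticsearch-master-0 -n elastic
        kubectl logs elasticsearch-master-0 -n elastic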

    2. Deploy Zookeeper.
      1. Create a namespace for Zookeeper.
        kubectl create ns zookeeper
      2. Add the Helm Chart repository.
        helm repo add bitnami https://charts.bitnami.com/bitnami
      3. Deploy Zookeeper with the provided zookeeper-values.yaml deployment configuration file.

        A copy of this file can be found in the sample_values directory of your cloned HCL Commerce Helm Chart Git project.

        helm install my-zookeeper bitnami/zookeeper -n zookeeper -f zookeeper-values.yaml --version="zookeeper-chart-version"
        Note (Power): For a Power-based deployment prior to HCL Commerce 9.1.12.0, you must download and modify the Zookeeper Helm Chart before deploying it with the provided zookeeper-values.yaml deployment configuration file.
        1. Download the Helm Chart.
          helm pull bitnami/zookeeper --version "zookeeper-chart-version"
        2. Extract the Helm Chart.
        3. Modify the statefulset.yaml configuration file within the templates directory of the Zookeeper Helm Chart.
          1. Open the file for editing.
          2. Update the commands section with the following additions.
              1. Locate the line.
                HOSTNAME=`hostname -s`
              2. Before this line, insert the following command.
                yum install -y hostname
              3. Locate the line.
                exec /entrypoint.sh /run.sh
              4. Replace this line with the following.
                exec /usr/bin/start-zk.sh
          3. Save and close the file.
        4. Deploy Zookeeper with helm install, using the modified Helm Chart and the provided zookeeper-values.yaml deployment configuration file.

          A copy of this file can be found in the sample_values directory of your cloned HCL Commerce Helm Chart Git project.

          helm install my-zookeeper modified_zookeeper_helm_chart_path -n zookeeper -f zookeeper-values.yaml
      4. Monitor the deployment and ensure that all pods are healthy.

      For more information about deploying Zookeeper with Helm, see the Zookeeper Helm Chart documentation.
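      As an optional sanity check, you can confirm that the Zookeeper ensemble is serving requests. The following sketch assumes the standard bitnami image layout and the pod name generated for the my-zookeeper release.

        kubectl exec -it my-zookeeper-0 -n zookeeper -- /opt/bitnami/zookeeper/bin/zkServer.sh status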

    3. Deploy Redis.
      1. Create a namespace for Redis.
        kubectl create ns redis
      2. Add the Helm Chart repository.
        helm repo add bitnami https://charts.bitnami.com/bitnami
      3. Deploy Redis using a local redis-values.yaml file.

        A sample version of this file is available in the sample_values directory of your cloned HCL Commerce Helm Chart Git project.

        helm install my-redis bitnami/redis -n redis -f redis-values.yaml --set master.disableCommands="" --version="redis-chart-version"
      4. Monitor the deployment and ensure that all pods are healthy.

      For more information about deploying Redis with Helm, see the Redis Helm Chart documentation.
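      The --set master.disableCommands="" option in the sample command keeps the FLUSHDB and FLUSHALL commands available, which recent bitnami/redis chart versions otherwise disable by default; the upgrade notes later in this procedure rely on those commands to clear Redis keys. To verify that Redis is reachable, you can run a quick check. The following sketch assumes the bitnami chart's default secret and pod names for the my-redis release.

        # Assumes the chart stores the password in secret "my-redis" under the key "redis-password"
        REDIS_PASSWORD=$(kubectl get secret my-redis -n redis -o jsonpath="{.data.redis-password}" | base64 --decode)
        kubectl exec -it my-redis-master-0 -n redis -- redis-cli -a "$REDIS_PASSWORD" ping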

  2. Optional: HCL Commerce Version 9.1.12.0 or later: If you intend to enable the Approval service for use within a Marketplace, you must deploy PostgreSQL to be used as the database.
    1. Create a namespace for PostgreSQL.
      kubectl create ns postgresql
    2. Add the Helm Chart repository.
      helm repo add bitnami https://charts.bitnami.com/bitnami
    3. Deploy PostgreSQL using a local postgresql-values.yaml file. A sample version of this file is available in the sample_values directory of your cloned HCL Commerce Helm Chart Git project.
      Important: An initialization SQL file is used to customize the database for use with the Approval server. You must update the sample password that is used in the script, and ensure that the datasource password under the Approval server section is updated with the same password. (A hypothetical illustration is provided at the end of this step.)
      helm install my-postgresql bitnami/postgresql -n postgresql -f postgresql-values.yaml --version="postgresql-chart-version" 
    4. Monitor the deployment and ensure that all pods are healthy.

    For more information about deploying PostgreSQL with Helm, see the PostgreSQL Helm Chart documentation.
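    As a purely hypothetical illustration of the initialization SQL mentioned in the Important note above (the script name, user, database, and password below are placeholders; follow the bundled sample postgresql-values.yaml), recent bitnami/postgresql chart versions accept initialization scripts under primary.initdb.scripts.

      # Hypothetical fragment of postgresql-values.yaml - adapt to the bundled sample file
      primary:
        initdb:
          scripts:
            init_approval.sql: |
              -- Use the same password in the Approval server datasource configuration
              CREATE USER approval_user WITH PASSWORD 'CHANGE_ME';
              CREATE DATABASE approval OWNER approval_user;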

  3. Configure your HCL Commerce deployment Helm Chart.
    Use the provided hcl-commerce-helmchart to customize your deployment. Review the related configuration topics based on your configuration knowledge and requirements.
    Note: It is strongly recommended that you do not modify the default values.yaml configuration file for your deployment. Instead, create a copy to use as your customized values file, for example, my-values.yaml. This allows you to maintain your customized values for future deployments and upgrades.
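    For example, assuming that the default values file sits next to the Chart.yaml referenced earlier in your cloned Git project:

      cp hcl-commerce-helmchart/stable/hcl-commerce/values.yaml my-values.yaml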
  4. Use Helm to control the deployment of HCL Commerce.

    Once you have finished configuring your deployment in your my-values.yaml file and met the environment prerequisites, you are ready to deploy HCL Commerce by using Helm.

    • First time deployment
      1. Deploy the share group with release name demo-qa-share into the commerce namespace.
        helm install demo-qa-share hcl-commerce-helmchart -f my-values.yaml --set common.environmentType=share -n commerce
      2. Deploy the auth group with the release name demo-qa-auth into the commerce namespace.
        helm install demo-qa-auth hcl-commerce-helmchart -f my-values.yaml --set common.environmentType=auth -n commerce
      3. Deploy the live group with the release name demo-qa-live into the commerce namespace.
        helm install demo-qa-live hcl-commerce-helmchart -f my-values.yaml --set common.environmentType=live -n commerce
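      You can confirm that all three releases were created and watch the pods start with standard Helm and kubectl commands, for example:

        helm list -n commerce
        kubectl get pods -n commerce -w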

      Once the HCL Commerce applications are deployed, if you have further configuration changes or image updates, you can use the helm upgrade command to update the deployment.

    • Updating a deployment
      To update a deployment, run the following Helm command for the release and environmentType that you want to update.
      helm upgrade release-name hcl-commerce-helmchart -f my-values.yaml --set common.environmentType=environmentType -n commerce
      Note:
      • HCL Commerce Version 9.1.15.0 or later: If you are upgrading a deployment that uses NGINX or GKE ingress from a version prior to HCL Commerce 9.1.7.0 to version 9.1.15.0 or greater, you must enable the ingressFormatUpgrade parameter within the values.yaml configuration file to trigger an upgrade job that cleans up old ingress definitions. Failure to do so results in errors from conflicting ingress definitions during the upgrade.
      • HCL Commerce Version 9.1.14.0 or later: A non-root user for all HCL Commerce containers was introduced in the HCL Commerce 9.1.14.0 release. This change can impact various aspects of your deployment. Review HCL Commerce container users and privileges before upgrading to ensure that your deployment continues to function as expected.
      • There are several considerations when upgrading your deployment with regard to the Assets Tool and its persisted storage configuration:
        • If your existing deployment prior to the upgrade does not enable assetsPVC, then set the migrateAssetsPvcFromRootToNonroot parameter within the values.yaml configuration file to false.
        • Instead of using commercenfs in the values.yaml configuration file to create the NFS storageclass, it is recommended to create an NFS storageclass manually. Creating a storageclass manually will avoid issues that can be encountered when running the helm upgrade command when deploying separate environment types within a single namespace.

          To create an NFS storageclass, see nfs-server-provisioner.

      • If you use the Elasticsearch-based search solution, you must use a completely new persistent volume for NiFi and clear any existing Zookeeper data before you redeploy. This is required so that the newer version of the connectors can be created automatically during the deployment.
        • To clear the NiFi data:
          1. See Persisting search data to create a new Persistent Volume Claim (PVC), and configure the new PVC name in your deployment values.yaml file.
          2. You can then remove the previous attached persistent volume claim.
            kubectl delete pvc previous_pvc_name -n commerce
        • To clear Zookeeper data:
          1. Delete the existing Zookeeper instance.
            helm delete my-zookeeper -n zookeeper
          2. Remove the existing persistent volume claims.
            kubectl delete pvc --all -n zookeeper
          Then, follow step 1.b.iii to re-deploy Zookeeper.
      • The classes of objects cached by HCL Cache can change in newer versions of HCL Commerce. To avoid errors when deserializing an older version of a class, it is strongly recommended that you clear the Redis keys after upgrading HCL Commerce. Redis keys can be cleared with the Redis flushdb or flushall commands.
      • Once you upgrade HCL Commerce, recreate any customized search profiles and connectors before your next search indexing.
    • Removing a deployment
      To uninstall or delete a deployment, run the following Helm command for the release that you want to remove.
      helm delete release-name
  5. Observe the deployment.

    When you install or update HCL Commerce, the start-up must follow a precise sequence. The Support Container is primarily used for service dependency checks, to ensure that the various Commerce applications are brought online properly and in the expected order. It is also used by some utility jobs, such as TLS certificate generation for secure ingress. The deployment process can take up to 10 minutes, depending on the capacity of your Kubernetes worker nodes.

    You can check the status of your deployment. The following values are displayed in the Status column.
    • Running: This container is started.
    • Init: 0/1: This container is pending on another container to start.
    You can also observe the following values displayed in the Ready column:
    • 0/1: This container is started but the application is not yet ready.
    • 1/1: This application is ready to use.
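    For example, the Status and Ready values described above appear in the output of the following command:

      kubectl get pods -n commerce -w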
  6. Access your environments.
    By default, the Helm Chart uses the default values of tenant, env, and envtype. If you changed these values, update the host names that are used in the following step examples.
    1. Check the ingress server IP address.
      kubectl get ingress -n commerce
    2. Create the ingress server IP and hostname mapping by editing your development environment hosts file.
      #Auth environment
      Ingress_IP store.demoqaauth.mycompany.com www.demoqaauth.mycompany.com cmc.demoqaauth.mycompany.com tsapp.demoqaauth.mycompany.com search.demoqaauth.mycompany.com
      #Live environment
      Ingress_IP store.demoqalive.mycompany.com www.demoqalive.mycompany.com cmc.demoqalive.mycompany.com tsapp.demoqalive.mycompany.com searchrepeater.demoqalive.mycompany.com 
       
      Note:
      • HCL Commerce Version 9.1.7.0 or later (Power): For a Power Linux deployment on OpenShift, OpenShift routes must be used to expose services instead of the ingress server. The Ingress_IP value in the hosts sample must be replaced by the IP address of the OpenShift service.
      • For Ambassador or Emissary ingress, the Ingress_IP is the IP address of the Ambassador or Emissary service.
      • search.demoqaauth.mycompany.com is used to expose the Search Master service.
      • searchrepeater.demoqalive.mycompany.com is used to expose the Search Repeater service within your live environment, to trigger index replication.
    3. Access your environment pages and tools with the following URLs:
      • An Aurora storefront: https://store.demoqaauth.mycompany.com/wcs/shop/en/auroraesite
      • An Emerald storefront (The new React-based reference store): https://www.demoqaauth.mycompany.com/Emerald
      • Management Center for HCL Commerce: https://cmc.demoqaauth.mycompany.com/lobtools/cmc/ManagementCenter
  7. Build your search index.
    • With the Solr-based search solution
      1. Trigger the Build Index job. This example uses the default spiuser, password, and master catalog ID.
        curl -X POST -u spiuser:plain_text_spiuser_password https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/build?masterCatalogId=10001 -k
        A response with a jobStatusId is displayed.
      2. Check the Build Index job status using the jobStatusId value that was returned.
        curl -X GET -u spiuser:plain_text_spiuser_password https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/status?jobStatusId=jobStatusId -k
        A returned value of 0 indicates that the build completed successfully.
      Note:
      • The default password for the spiuser user is passw0rd for HCL Commerce 9.1.0.0 through 9.1.8.0, and QxV7uCk6RRiwvPVaa4wdD78jaHi2za8ssjneNMdu3vgqi for HCL Commerce 9.1.9.0 and greater.
      • It is essential to set your own spiuser password to secure your deployment. For more information, see Setting the spiuser password in your Docker images.
    • With the Elasticsearch-based search solution
      1. Trigger the Build Index job.
        curl -X POST -k -u spiuser:plain_text_spiuser_password "https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/build?connectorId=auth.reindex&storeId=1"
        A response with a jobStatusId is displayed.
        Note:
        • The default password for the spiuser user is passw0rd for HCL Commerce 9.1.0.0 through 9.1.8.0, and QxV7uCk6RRiwvPVaa4wdD78jaHi2za8ssjneNMdu3vgqi for HCL Commerce 9.1.9.0 and greater.
        • It is essential to set your own spiuser password to secure your deployment. For more information, see Setting the spiuser password in your Docker images.
      2. Check the Build Index job status using the jobStatusId value that was returned.
        curl -X GET -u spiuser:plain_text_spiuser_password https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/status?jobStatusId=jobStatusId -k

        A returned value of 0 indicates that the build completed successfully. For more information, see Building the Elasticsearch Index.
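    If you want to script the build and status checks shown above, the two calls can be chained. The following sketch assumes that the build response is JSON with a top-level jobStatusId field and that jq is available; it uses the Elasticsearch-based example.

      # Capture the jobStatusId from the build response, then query the job status
      JOB_ID=$(curl -s -k -X POST -u spiuser:plain_text_spiuser_password \
        "https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/build?connectorId=auth.reindex&storeId=1" \
        | jq -r '.jobStatusId')
      curl -s -k -X GET -u spiuser:plain_text_spiuser_password \
        "https://tsapp.demoqaauth.mycompany.com/wcs/resources/admin/index/dataImport/status?jobStatusId=${JOB_ID}"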

Results

Your HCL Commerce Version 9.1 Kubernetes deployment is now complete.