HCL Commerce Version or later

Setting up persistent storage volumes for a Kubernetes deployment

The Assets Tool was reintroduced in HCL Commerce. This tool requires persistent volume storage for your deployment. This storage ensures that all assets added and managed through the Assets Tool in Management Center for HCL Commerce are accessible and persisted.

To make files accessible to multiple pods across your deployment, and to ensure that the files are persisted, a ReadWriteMany type of persistent volume is required.

For more information on persistent volumes in Kubernetes, see Persistent Volumes in the Kubernetes documentation.

For more information on the Assets Tool, see Assets tool.

Note: If you do not plan to use the Assets Tool in your deployment, persistent storage is not required.


  1. Create a persistent volume.
    • Use a commercial cloud offering such as Google FileStore, Amazon Elastic File System, or Azure Files.
      For example, using Google FileStore:
      1. Create the FileStore instance.
      2. Consume the file shares within your Kubernetes environment. You can use the following sample YAML configuration files as a starting point.
        • For a PersistentVolume (PV):
          apiVersion: v1
          kind: PersistentVolume
          metadata:
            name: hcl-commerce-sample-readwritemany-pv
          spec:
            capacity:
              storage: 5Gi
            accessModes:
              - ReadWriteMany
            mountOptions:
              - hard
              - nolock
            nfs:
              path: /file-share
              server: ip-address
        • For a PersistentVolumeClaim (PVC):
          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: hcl-commerce-sample-readwritemany-claim
          spec:
            accessModes:
              - ReadWriteMany
            storageClassName: ""
            volumeName: hcl-commerce-sample-readwritemany-pv
            resources:
              requests:
                storage: 5Gi
    • Stand up a cloud-agnostic solution, such as Rook Ceph.
      Note: Be aware of the following when implementing Rook Ceph:
      • Rook Ceph can either be installed as a set of Kubernetes resources, or by installing a Helm Chart.
      • The Linux kernel of your PV machine must be built with the RBD module (Ubuntu includes the required module).
      • You must create or enable the shared file system (cephFileSystems) after the Ceph cluster has been provisioned, and then create a Kubernetes StorageClass backed by this file system.
      • You require at least three worker nodes for a production cluster.
    • If you want to explore the Assets Tool within a non-production deployment, you can use nfs-server-provisioner.

      An example of this is as follows:

        1. Add the Helm repository.
          helm repo add kvaps https://kvaps.github.io/charts
        2. Create a storage class to be used by the PVC.
          helm install sample-nfs-server-provisioner kvaps/nfs-server-provisioner --version 1.3.1 \
            --set storageClass.mountOptions={vers=4.1} -n ${NAMESPACE}
  2. Configure your deployment to use the PVC, so that the pods that require asset storage mount the shared volume.
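As an illustrative sketch, a pod consumes the claim through a volume and a matching volume mount. The pod name, container image, and mount path below are hypothetical placeholders; only the claim name comes from the sample PVC shown earlier:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: assets-sample-pod          # hypothetical pod name
spec:
  containers:
    - name: app                    # hypothetical container
      image: nginx                 # placeholder image
      volumeMounts:
        - name: assets-volume
          mountPath: /mnt/assets   # hypothetical mount path
  volumes:
    - name: assets-volume
      persistentVolumeClaim:
        claimName: hcl-commerce-sample-readwritemany-claim
```

Because the claim requests ReadWriteMany access, multiple pods across nodes can mount the same volume concurrently.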


Your persistent storage volume is created, and your deployment has been configured to use it.