Migrating the server to Kubernetes or OpenShift

You can migrate your HCL DevOps Deploy (Deploy) server from an on-premises production installation to a containerized instance that runs in a Kubernetes or OpenShift cluster.

Before you begin

  • Open a pre-emptive guidance case with Deploy support. This case serves as a forum where support can help you with any questions or concerns that arise during the cloning process.
  • Stop the production Deploy server. Plan downtime to cover the duration of the database and application data cloning process, and ensure that you have a running containerized Deploy server instance.

About this task

You create a new environment by cloning the data from the current production environment and pointing the containerized Deploy server installation to the cloned data. To migrate your Deploy server from an on-premises installation to a containerized instance, follow these steps:


  1. Clone the Deploy database.
    Contact your database administrators (DBAs) to clone the database. The DBA backs up the current database to a new location that can be accessed from pods running in the Kubernetes cluster. If you need more assistance than your DBA can provide, use the support case for help.
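    For example, if your Deploy database runs on PostgreSQL, the cloning might look like the following sketch. The host names, user, and database names are illustrative; your DBA chooses the actual tooling and connection details for your database vendor.
      # Back up the production database (custom-format dump)
      pg_dump -Fc -h prod-db-host -U deploy_user deploy > deploy_backup.dump
      # Create the clone database on a server that is reachable from the Kubernetes cluster
      createdb -h clone-db-host -U deploy_user deploy_clone
      # Restore the backup into the clone
      pg_restore -h clone-db-host -U deploy_user -d deploy_clone deploy_backup.dump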
  2. Clone the application data folder.
    Most application data folders have mount points to an external file system. Copy the contents of your application data folder to a new directory so that your cloned server does not use the same application data folder as your on-prem production server.
    • If the directory resides on a network file system such as NFS, you can refer to that network path when you create the Kubernetes persistent volume resource. To create an NFS persistent volume (PV) and persistent volume claim (PVC), see the following example YAML file; commands for applying and verifying these resources follow this list.
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: deploy-appdata-vol
        labels:
          volume: deploy-appdata-vol
      spec:
        capacity:
          storage: 20Gi
        accessModes:
          - ReadWriteOnce
        nfs:
          path: /volume1/k8/deploy-appdata
          server: <nfs-server-host>   # replace with the host name or IP address of your NFS server
      ---
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: deploy-appdata-volc
      spec:
        storageClassName: ""
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 20Gi
        volumeName: deploy-appdata-vol
    • If you don't use NFS or another network file system to back your persistent volume, copy the contents of your application data directory into a persistent volume in your Kubernetes cluster. The name of the associated persistent volume claim resource is required when you install the new Deploy server instance.
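    For example, assuming the PV and PVC manifest shown above is saved as deploy-appdata-pv.yaml (the file name is illustrative), you can create the resources and confirm that the claim binds before installing the server:
      kubectl apply -f deploy-appdata-pv.yaml
      # Both resources should report a STATUS of Bound
      kubectl get pv deploy-appdata-vol
      kubectl get pvc deploy-appdata-volc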
  3. Configure the cloned appdata.
    • Ensure that the spec.persistentVolumeReclaimPolicy parameter is set to Retain on the application data persistent volume. By default, the value is Delete for dynamically created persistent volumes. Setting the value to Retain ensures that the persistent volume is not freed or deleted if its associated persistent volume claim is deleted.
    • Enable debug logging by creating the appdata/enable-debug file. This file is required for the init and application containers to write debug log messages (see the example commands after this list).
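    The following commands are one way to apply both settings. They assume the PV name from the earlier example (deploy-appdata-vol) and that the cloned application data directory is mounted at /mnt/appdata on a machine with access to the share; adjust the names and paths for your environment.
      # Set the reclaim policy on the application data persistent volume to Retain
      kubectl patch pv deploy-appdata-vol -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
      # Create the marker file that enables debug logging for the init and application containers
      touch /mnt/appdata/enable-debug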
  4. If the production Deploy server is configured to use S3 storage, clone the S3 bucket and modify the following S3 storage properties in the installed.properties file, or ensure that they are correct for your cloned S3 bucket. An example of these properties follows the list.
    codestation.s3.bucket – the bucket name
    codestation.s3.region – the region
    codestation.s3.user – the API key
    codestation.s3.url – the custom URL
    codestation.s3.signerOverride – the signature algorithm
    codestation.s3.enablePathStyleAccess – true or false
    codestation.s3.password – the API secret
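    For example, after cloning, the S3 section of installed.properties might look like the following. All values are placeholders for illustration; use the values that match your cloned bucket.
      codestation.s3.bucket=my-cloned-codestation-bucket
      codestation.s3.region=us-east-1
      codestation.s3.user=<API key for the cloned bucket>
      codestation.s3.password=<API secret for the cloned bucket>
      codestation.s3.url=<custom URL, if your provider requires one>
      codestation.s3.signerOverride=<signature algorithm, if required>
      codestation.s3.enablePathStyleAccess=true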
  5. Restart the production Deploy server, if required.
    Note that changes you make to the production Deploy server instance after this point are not present in the containerized Deploy server instance that runs in the Kubernetes cluster.
  6. Modify the cloned database.
    • For high-availability (HA) configurations, complete these steps:
      • Remove the JMS cluster configuration from the database. This removal requires deleting all the contents of the ds_network_relay table. If you need assistance deleting the contents, contact your DBA.
      • Remove the Web Cluster configuration from the database, if applicable. This removal requires deleting all the contents of the ds_server table. If you need assistance deleting the contents, contact your DBA.
    • To stop automatic component version imports, run the following SQL command to update the components in the database:
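      A representative form of the update is shown below. The table and column names are an assumption based on the Deploy schema; verify the exact statement with your DBA or through the support case before you run it.
        update ds_component set import_automatically = 'N' where import_automatically = 'Y';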
  7. Install the Deploy server in the Kubernetes or OpenShift cluster by following the instructions in the Helm chart bundle README.
    • If you are installing into an OpenShift cluster and using NFS for persistent storage, make sure that you use Helm chart v7.3.3 or later, because support for the supplementalGroups parameter was added to the statefulset resources in that version.
    • If you are upgrading to a new version of Deploy, manually disable any patches in the cloned appdata/patches directory by adding the .off suffix to the patch file names (see the example after this item). Later versions of containerized Deploy disable these patches automatically. For best results, migrate to the same version of Deploy.
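      For example, one way to disable the cloned patches is to rename every file in the patches directory (the path is illustrative):
        # Append the .off suffix to each patch file in the cloned appdata/patches directory
        cd /path/to/cloned/appdata/patches
        for f in *; do [ -e "$f" ] && mv -- "$f" "$f.off"; done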
  8. Create new agents for the containerized instance or point your existing agents to the new containerized server.
    Note that the agents recorded in the cloned database show a status of offline because they are configured to connect only to the on-prem production server. You can create new agents for the cloned environment. These agents can be installed on the original agent machines or VMs, or in the Kubernetes cluster. The only requirement is network connectivity with the worker nodes that run in the Kubernetes cluster.
    If you don't want to containerize the existing agents, point them to the new containerized server by following the steps in the How to point an existing agent to a new Deploy server document.