Linux

Deploying an HCL Commerce Version 9.0.0.0 to 9.0.1.17 authoring environment with Docker Compose

Deploy a simple HCL Commerce authoring environment so that site administrators and business users can update the store catalog or store configurations by using HCL Commerce tools. With this simple authoring environment deployment, you prepare a search container as the search_master node. The live environment that you use with this authoring environment should include a search_repeater and a search_slave that communicate with the search_master in the authoring environment.

Before you begin

  1. Ensure that the Docker images are loaded to your private Docker registry. If you are an administrator responsible for your Docker registry, see Downloading HCL Commerce software.
  2. Ensure that your machine meets the minimum requirements of a 2-core processor, 8 GB of RAM, and 50 GB of free disk space. You can check these values with the commands that are sketched after this list.
  3. Ensure that you are deploying the authoring environment on a machine that is not running an existing HCL Commerce environment.
  4. Prepare a Db2 database for use with HCL Commerce or Prepare an Oracle database for use with HCL Commerce.
  5. Load the HCL Commerce authoring database schema by setting type=staging.
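  You can check the hardware minimums from item 2 with commands similar to the following. This is a minimal sketch for a typical Linux machine; adjust the commands if your distribution reports resources differently.
    nproc                                            # expect at least 2 CPU cores
    free -g | awk '/^Mem:/ {print $2 " GB RAM"}'     # expect at least 8 GB of RAM
    df -h .                                          # expect at least 50 GB free where you plan to deploy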

Procedure

  1. Install Docker.
    1. Install Docker Version 19.03.8 or later.
    2. Install Docker Compose Version 1.24.1 or later.
    3. Consider creating a Docker Unix group.
      Note: If you do not create a Docker Unix group, you must prefix Docker commands with sudo. Example commands for creating the group are sketched below.
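      The following commands are a minimal sketch of the standard Docker post-installation steps for creating the group and verifying the installed versions; log out and back in (or run newgrp docker) before the new group membership takes effect.
        docker --version                 # confirm Docker 19.03.8 or later
        docker-compose --version         # confirm Docker Compose 1.24.1 or later
        sudo groupadd docker             # create the docker group if it does not already exist
        sudo usermod -aG docker $USER    # add your user to the docker group
        docker ps                        # after you log back in, this should run without sudo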
  2. Download one of the following sample Docker Compose files, based on how your database is configured.
    • Database running inside a Docker container: Download the following file:
      Note: If the link does not prompt you to save the file, right-click the link and save the file. Open the file in a source code editor to view and edit it in the proper YAML format.
      The YAML files are samples that assume that you are using a Db2 Docker image. Ensure that you update all the parameters that are in angle brackets (<>); a quick check for remaining placeholders is sketched after this step. The sample files are commented with descriptions of the parameters.
    • Database running on a standard server (not in a Docker container): Download the following file:
      Note: If the link does not prompt you to save the file, right-click the link and save the file. Open the file in a source code editor to view and edit it in the proper YAML format.
      Ensure that you update all the parameters that are in angle brackets (<>). The sample files are commented with descriptions of the parameters.
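    A quick way to confirm that no placeholders remain, and that the edited file still parses, is sketched below. It assumes the docker-compose-auth.yml file name that is used later in this procedure; substitute docker-compose-auth-extdb.yml for the external database variant.
      grep -n '<.*>' docker-compose-auth.yml                   # list any lines that still contain <placeholder> values
      docker-compose -f docker-compose-auth.yml config > /dev/null && echo "YAML parses cleanly"   # validate only; does not start containers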
  3. Oracle only: Download the Oracle JDBC driver file (java/ojdbc8.jar) from the Oracle installation folder, and put it in the directory where you saved the Docker Compose file.
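    For example, if you saved the Docker Compose file in ~/hcl-commerce-auth (a hypothetical directory), the copy might look like the following; replace the Oracle installation path with your own.
      cp /path/to/oracle_installation/java/ojdbc8.jar ~/hcl-commerce-auth/
      ls ~/hcl-commerce-auth/ojdbc8.jar    # confirm that the driver sits beside the Docker Compose file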
  4. In a command-line interface, go to the directory where you saved the Docker Compose file.
  5. Run the applicable command to deploy the Docker containers based on the Docker Compose file that you have.
    • docker-compose -f docker-compose-auth.yml up -d
    • docker-compose -f docker-compose-auth-extdb.yml up -d
    Note:
    • If the images are not already on your machine, the command downloads the Docker images from your registry. The images are approximately 10 GB in total, so the duration of this command depends on your Internet connection.
    • Whenever your Docker virtual machine is restarted, you must manually restart the Docker containers by rerunning the docker-compose -f <file> up -d command.
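    After the command completes, you can confirm that the containers started and review their startup output; the following sketch assumes the docker-compose-auth.yml variant.
      docker-compose -f docker-compose-auth.yml ps              # each service should show a State of 'Up'
      docker-compose -f docker-compose-auth.yml logs --tail=50  # review recent output from all containers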
  6. If you loaded sample data into your database, build the search index.
    1. Send the following REST request (POST), using basic authentication with the spiuser login and the spiuser password.
      You can build the search index by using one of the following methods:
      • By using the curl utility. Use the following curl command with the spiuser plain text password.
        curl -k -u spiuser:spiuserPassword -X POST https://transaction_server_hostname:5443/wcs/resources/admin/index/dataImport/build?masterCatalogId=10001
      • By using a browser plug-in such as HttpRequester (for Mozilla Firefox) or Postman (for Google Chrome). Use the following URL and authenticate with user spiuser and the spiuser plain text password.
        https://transaction_server_hostname:5443/wcs/resources/admin/index/dataImport/build?masterCatalogId=10001
      Important: To set a password for the spiuser, see Setting the spiuser password in your Docker images.

      Note the jobstatusId that you get in the response, for example, {"jobstatusId":"xxxxx"}.
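      If you want to capture the jobstatusId directly from the command line, the following curl variation is one possible approach; it assumes that python3 is available for parsing the JSON response, and the host name and spiuserPassword are placeholders to replace.
        response=$(curl -s -k -u spiuser:spiuserPassword -X POST \
          "https://transaction_server_hostname:5443/wcs/resources/admin/index/dataImport/build?masterCatalogId=10001")
        echo "$response"
        jobstatusId=$(echo "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["jobstatusId"])')
        echo "jobstatusId is $jobstatusId"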

    2. Use the jobstatusId and send the following REST request (GET) to check the request execution status.
      Again, use basic authentication with spiuser and the spiuser password.
      https://<search_server_hostname>:3738/search/admin/resources/index/build/status?jobStatusId={jobstatusId}
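      A minimal polling loop for this status request might look like the following; the 30-second interval is an arbitrary choice, and the host name, spiuser password, and jobstatusId value are placeholders to replace. The loop exits when the progress field reaches 100%, as in the sample response below.
        jobstatusId=14003    # replace with the jobstatusId returned by the build request
        while true; do
          status=$(curl -s -k -u spiuser:spiuserPassword \
            "https://<search_server_hostname>:3738/search/admin/resources/index/build/status?jobStatusId=${jobstatusId}")
          echo "$status"
          echo "$status" | grep -q '"progress":"100%"' && break
          sleep 30
        done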
      When the build is successful, you receive Status: 200 OK and a response similar to the following:
      response content: {
        "finishTime": "2017-08-01 06:49:31.395759",
        "lastUpdate": "2017-08-01 06:49:31.395759",
        "progress": "100%",
        "jobStatusId": "14003",
        "startTime": "2017-08-01 06:48:17.369909",
        "message": "Indexing job started for masterCatalogId:10,001. Indexing job finished successfully for masterCatalogId:10001.",
        "jobType": "SearchIndex",
        "properties": "[]",
        "status": "0"
      }
  7. Ensure that you can log in to Management Center.
    • https://<transaction_server_hostname>:8000/lobtools/cmc/ManagementCenter
  8. If you loaded sample data into your authoring database, ensure that you can visit the sample Aurora store.
    • https://<store_server_hostname>:8443/wcs/shop/en/auroraesite
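    To verify both URLs from the command line before opening a browser, one option is to check the HTTP status codes with curl; the -k flag skips certificate validation, which is typically needed with the default self-signed certificates, and the host names are placeholders to replace.
      curl -k -s -o /dev/null -w "Management Center: %{http_code}\n" \
        "https://<transaction_server_hostname>:8000/lobtools/cmc/ManagementCenter"
      curl -k -s -o /dev/null -w "Aurora store: %{http_code}\n" \
        "https://<store_server_hostname>:8443/wcs/shop/en/auroraesite"
    A 200 response indicates that the page is reachable; a redirect code can also be acceptable, depending on your configuration.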

What to do next