Configuring remote cache invalidation

Using Apache Kafka and Apache ZooKeeper, you can run cache invalidation from the HCL Commerce Transaction server, or remotely from a Liberty server. You can enable and customize this feature by editing HCL Commerce configuration files.

Before you begin

HCL Commerce Version 9 uses Apache Kafka and Apache ZooKeeper to seamlessly synchronize data across multiple servers. The following procedure assumes that you have working installations of both products on your network.
Important: Install and run Kafka and ZooKeeper as isolated servers in dedicated Docker containers. Neither application is bundled with HCL Commerce. Configure Kafka by using either Vault or environment variables. If Kafka is to be used, it must be up and running before you start the ts-app.
The following example shows how to set the environment variables:
crsApp:
  enabled: true
  name: crs-app
  image: commerce/crs-app
  tag: v9-latest
  replica: 1
  resources:
    requests:
      memory: 2048Mi
      cpu: 500m
    limits:
      memory: 4096Mi
      cpu: 2
  ## when using custom envParameters, use key: value format
  envParameters:
    auth: 
      ZOOKEEPER_SERVERS: my-kafka-zookeeper.kafka.svc.cluster.local:2181
      KAFKA_SERVERS: my-kafka.kafka.svc.cluster.local:9092
      KAFKA_TOPIC_PREFIX: sample
    live: 
      ZOOKEEPER_SERVERS: my-kafka-zookeeper.kafka.svc.cluster.local:2181
      KAFKA_SERVERS: my-kafka.kafka.svc.cluster.local:9092
      KAFKA_TOPIC_PREFIX: sample     
  nodeLabel: ""
  fileBeatConfigMap: ""
  nodeSelector: {}
  coresSharingPersistentVolumeClaim: ""
Note: HCL Commerce Version 9.0.1.18 or later: To establish secure communication with Kafka, you can use the optional authentication parameters. For example, set the following environment variables:

- KAFKA_SERVERS=<kafkaServerHostOrIPList>
- KAFKA_TOPIC_PREFIX=<kafkaTopicPrefix>
- KAFKA_AUTHENTICATION_USERID=<kafkaAuthenticationUserID>
- KAFKA_AUTHENTICATION_PASSWORD=<kafkaAuthenticationPassword>
When the variables are set, HCL Commerce configures Kafka during startup by running the Run Engine command run set-kafka-server <KafkaServers> <TopicPrefix> <ZookeeperServers>. For more information, see https://help.hcltechsw.com/commerce/9.0.0/developer/refs/rre_transaction.html.

If the <KafkaServers> string starts with (no-crs), the Transaction server does not publish the invalidation messages that are intended for the Store server.
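
For illustration, invocations of the Run Engine command with sample host names and prefix might look like the following sketch; the second form uses the (no-crs) marker described above:

    run set-kafka-server kafka-broker1:9092,kafka-broker2:9092 sampleprefix zookeeper1:2181,zookeeper2:2181

    # Suppress the invalidation messages that are intended for the Store server
    run set-kafka-server "(no-crs)kafka-broker1:9092,kafka-broker2:9092" sampleprefix zookeeper1:2181,zookeeper2:2181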

Note: If containers such as DB2 and ts-app run in different time zones when you set up Kafka and ZooKeeper in Version 9, invalidations might not occur correctly. In the Kafka broker properties, ensure that log.message.timestamp.type = LogAppendTime. By default, this value is set to log.message.timestamp.type = CreateTime.
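
For example, the corresponding broker-side setting in the Kafka server.properties file:

    # Stamp messages with the broker's append time instead of the producer's create time
    log.message.timestamp.type=LogAppendTime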

HCL Commerce Version 9.0.1.18 or later: The Kafka authentication password must be the ASCII encrypted string that is produced by encrypting the password with the wcs_encrypt utility. Save this ASCII encrypted string in the configuration file as the SASL password for the Kafka servers.
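
For example, a hypothetical invocation of the utility, assuming the default installation path inside the Utility container; copy the ASCII encrypted string from its output:

    cd /opt/WebSphere/CommerceServer90/bin
    ./wcs_encrypt.sh myKafkaPassword
    # Use the "ASCII encrypted string" value from the output as KAFKA_AUTHENTICATION_PASSWORD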

About this task

Complete the following procedures to enable remote cache invalidation:

Procedure

Configure the Transaction server
  1. Open WebSphere Commerce Developer and switch to the Enterprise Explorer view.
  2. Go to the following directory and open the Transaction server custom configuration file wc-component.xml for editing.
    workspace_dir\WC\xml\config\com.ibm.commerce.foundation-ext.

    If the directory or the configuration file does not exist, create them.

    1. Create the directory.
      1. Navigate to the following path.

        workspace_dir\WC\xml\config\

      2. Create the extension directory.

        com.ibm.commerce.foundation-ext

    2. Create the custom configuration file within the extension directory.
      1. Navigate into the extension directory, workspace_dir\WC\xml\config\com.ibm.commerce.foundation-ext\.
      2. Create an empty wc-component.xml file in the com.ibm.commerce.foundation-ext folder. This file is your custom wc-component.xml file.
      3. Add the basic XML elements required for your custom wc-component.xml file.
        1. Open your empty custom wc-component.xml file in an XML editor.
        2. Copy the contents of the default configuration file (workspace_dir\WC\xml\config\com.ibm.commerce.foundation\wc-component.xml) into your custom configuration file, or construct it from scratch by copying the following code into the file:
          <?xml version="1.0" encoding="UTF-8"?>
          <_config:DevelopmentComponentConfiguration
          	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          	xsi:schemaLocation="http://www.ibm.com/xmlns/prod/commerce/foundation/config ../xsd/wc-component.xsd "
          	xmlns:_config="http://www.ibm.com/xmlns/prod/commerce/foundation/config">
          
              <_config:extendedconfiguration>
          
          	
              </_config:extendedconfiguration>
          	
          </_config:DevelopmentComponentConfiguration>
      4. In your custom wc-component.xml file, add or modify the XML for any properties that you want to change.
        1. Navigate to the default component configuration file at the following path.
          workspace_dir\WC\xml\config\com.ibm.commerce.foundation\wc-component.xml
          Important: Never change properties directly in this file because your changes will be overwritten with future HCL software updates.
        2. Copy the XML elements for the properties you want to change from the default component configuration file to your custom wc-component.xml file. Insert the copied XML elements after the <_config:extendedconfiguration> element.

          Even though you are copying only certain elements, you must maintain the XML hierarchy for those elements in the file. For example, if you want to change the property defined in a specific <_config:property> element, you must retain the parent and ancestor elements of the <_config:property> element, but you can delete all the siblings if you are not changing them. See the example at the end of this topic.

          If it previously did not exist, your custom configuration file is now created and ready to be customized further.

  3. Locate the wc.store.remote.kafka parameter of the RemoteStoreConfiguration configuration grouping. Add the addresses of the Apache Kafka broker clusters, as a comma-separated string, to the value attribute. Set the port numbers according to your local environment. For example:
    <_config:configgrouping name="RemoteStoreConfiguration">
        ...
        <!-- value to kafka servers connection string -->
        <_config:property name="wc.store.remote.kafka" value="kafka-broker1:9092,kafka-broker2:9092,kafka-broker3:9092"/>
        ...
    </_config:configgrouping>
  4. Locate the wc.store.remote.kafka.topicPrefix parameter in the same configuration grouping. Add the topic prefix for cache invalidation. This string contains the same value as the prefix configured in the remote Store server, and the value must be the same across Transaction servers.
    <!-- value to kafka servers topic prefix -->
        <_config:property name="wc.store.remote.kafka.topicPrefix" value="sampleprefix"/>
  5. Locate the wc.remote.zookeeper parameter of the TransactionKafkaConfiguration configuration grouping. Add the addresses of the Apache ZooKeeper servers, as a comma-separated string, to the value attribute. Set the port numbers according to your local environment. For example:
    <_config:configgrouping name="TransactionKafkaConfiguration">
        ...
        <_config:property name="wc.remote.zookeeper" value="zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka"/>
        ...
    </_config:configgrouping>
  6. Save and close the custom configuration file. A consolidated example of the custom file appears after these steps.
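
For reference, a consolidated custom wc-component.xml that combines the skeleton from step 2 with the properties from steps 3 through 5 might look like the following sketch; the host names and the prefix are placeholders:

    <?xml version="1.0" encoding="UTF-8"?>
    <_config:DevelopmentComponentConfiguration
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.ibm.com/xmlns/prod/commerce/foundation/config ../xsd/wc-component.xsd "
        xmlns:_config="http://www.ibm.com/xmlns/prod/commerce/foundation/config">

        <_config:extendedconfiguration>

            <_config:configgrouping name="RemoteStoreConfiguration">
                <!-- value to kafka servers connection string -->
                <_config:property name="wc.store.remote.kafka" value="kafka-broker1:9092,kafka-broker2:9092,kafka-broker3:9092"/>
                <!-- value to kafka servers topic prefix -->
                <_config:property name="wc.store.remote.kafka.topicPrefix" value="sampleprefix"/>
            </_config:configgrouping>

            <_config:configgrouping name="TransactionKafkaConfiguration">
                <_config:property name="wc.remote.zookeeper" value="zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka"/>
            </_config:configgrouping>

        </_config:extendedconfiguration>

    </_config:DevelopmentComponentConfiguration>
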
Configure the Store server
  1. Go to the following directory.
    workspace_dir\Liberty\servers\crsServer\configDropins\overrides
  2. Edit the custom configuration file jndi.xml. If the file does not exist, create it.
  3. Add the configuration string for the ZooKeeper servers to the value attribute of the following jndiEntry element. The value is the same as the ZooKeeper configuration string in the Transaction server. A complete sketch of the file appears after these steps.
    <jndiEntry jndiName="com.ibm.commerce.foundation.server.services.zookeeper.hostnameport" value=""/>
  4. Add the topic prefix string to the value attribute of the following jndiEntry element. This value is the same as the topic prefix string in the Transaction server.
    <jndiEntry jndiName="com.ibm.commerce.foundation.server.services.cacheinvalidation.topicprefix" value=""/>
  5. Save and close the custom configuration file.
  6. Deploy your changes to the HCL Commerce runtime environment.
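
For reference, a complete jndi.xml override might look like the following sketch. It assumes that the crsServer configuration already enables the Liberty jndi-1.0 feature, and it reuses the sample values from the Transaction server configuration:

    <server>
        <jndiEntry jndiName="com.ibm.commerce.foundation.server.services.zookeeper.hostnameport"
                   value="zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka"/>
        <jndiEntry jndiName="com.ibm.commerce.foundation.server.services.cacheinvalidation.topicprefix"
                   value="sampleprefix"/>
    </server>
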
Follow best practices
Use the following practices as a guide when configuring your Kafka installation.
Read up on proven approaches
Review Is your Zookeeper in need of maintenance? on the HCL blog, and the Apache Kafka documentation, for tips and utilities that can help you optimize your configuration.
Configure message retention
The default message retention is 7 days, which is excessive because messages are no longer needed after all applications have processed the cache invalidations. A retention time of 10 minutes is sufficient for most configurations.
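
For example, with a recent Kafka distribution you might set a 10-minute retention on an existing invalidation topic as follows; the topic name is hypothetical, so substitute your actual prefixed topic names:

    kafka-configs.sh --bootstrap-server my-kafka.kafka.svc.cluster.local:9092 \
      --entity-type topics --entity-name sampleCacheInvalidation \
      --alter --add-config retention.ms=600000
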
Disable automatic topic setup
By default, if a message is sent to a topic that does not yet exist, Kafka creates the topic automatically. Automatically created topics do not receive your intended settings, such as message retention and replication factor, which can cause outages. To avoid such issues, disable automatic topic creation so that topics must be created explicitly with the parameters you have defined.
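
For example, in the broker's server.properties file:

    # Require topics to be created explicitly
    auto.create.topics.enable=false
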
Use replicas
Replicas provide high availability. For example, with two brokers the configuration can be as follows:
CacheInvalidation leader: broker0 replica: broker1
PeerCacheInvalidation leader: broker1 replica: broker0
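
With a recent Kafka distribution, topics with that layout and a 10-minute retention could be created as in the following sketch; the topic names are hypothetical, and the assignment 0:1 makes broker 0 the preferred leader with broker 1 as the replica:

    kafka-topics.sh --bootstrap-server my-kafka.kafka.svc.cluster.local:9092 --create \
      --topic sampleCacheInvalidation --replica-assignment 0:1 --config retention.ms=600000
    kafka-topics.sh --bootstrap-server my-kafka.kafka.svc.cluster.local:9092 --create \
      --topic samplePeerCacheInvalidation --replica-assignment 1:0 --config retention.ms=600000
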
Optimize Producer configurations
Use the following settings with the Producers:
        import java.util.HashMap;
        import java.util.Map;

        import org.apache.kafka.clients.producer.ProducerConfig;

        Map<String, Object> configValues = new HashMap<>();

        // acks=all: the leader waits for the full set of in-sync replicas to acknowledge the record.
        // This guarantees that the record is not lost as long as at least one in-sync replica remains alive.
        configValues.put(ProducerConfig.ACKS_CONFIG, "all");

        // retries=0: failed sends are not retried. A value greater than zero would cause the client
        // to resend any record whose send fails with a potentially transient error.
        configValues.put(ProducerConfig.RETRIES_CONFIG, 0);

        // batch.size: the producer batches records that are sent to the same partition into fewer, larger requests.
        configValues.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);

        // linger.ms: the upper bound on the delay that is added to allow batching.
        configValues.put(ProducerConfig.LINGER_MS_CONFIG, 1);

        // buffer.memory: the total bytes of memory the producer can use to buffer records waiting to be sent.
        configValues.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
For more information, see 3.3 Producer Configs in the Apache Kafka documentation.
Optimize Consumer configurations
Use the following settings with the Consumers:
        import org.apache.kafka.clients.consumer.ConsumerConfig;

        // Consumer configuration (a separate map, passed to the KafkaConsumer constructor)
        // enable.auto.commit=true: periodically commit to Kafka the offsets of messages already returned by the consumer.
        configValues.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");

        // auto.offset.reset=earliest: when no committed offset exists, start from the earliest available offset.
        configValues.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
For more information, see 3.4 Consumer Configs in the Apache Kafka documentation.
Note that since each server defines its own Consumer group (single consumer configuration) and all messages are stored with a static key, a single partition is used. This guarantees messages are received in the same order they were created.

Example

You can examine the contents of your messages by using Kafdrop, or manage the topics and brokers by using kafka-manager. The message format depends on the transaction; Transaction-to-Transaction and Transaction-to-Store messages have similar formats.
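
For example, Kafdrop can be started as a container that points at your brokers; the image name and port below are the ones published by the Kafdrop project, and the broker address is a placeholder:

    docker run -d -p 9000:9000 \
      -e KAFKA_BROKERCONNECT=my-kafka.kafka.svc.cluster.local:9092 \
      obsidiandynamics/kafdrop

You can then browse the invalidation topics at http://localhost:9000.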

What to do next

To improve performance and optimize data for analysis, you can enable, disable and customize individual cache settings. The following Transaction and Store server DistributedMaps are relevant to Kafka.
DistributedMaps in the Transaction server
  • WCSessionDistributedMapCache
  • WCCatalogEntryDistributedMapCache
  • WCCatalogGroupDistributedMapCache
  • WCMiscDistributedMapCache
  • WCUserDistributedMapCache
  • WCPriceDistributedMapCache
  • WCMarketingDistributedMapCache
  • WCPromotionDistributedMapCache
  • WCContractDistributedMapCache
  • WCTelesalesDistributedMapCache
  • WCSystemDistributedMapCache
  • WCFlexFlowDistributedMapCache
  • WCRESTTagDistributedMapCache
  • WCSEOURLKeyword2URLTokenDistributedMapCache
  • WCSEOURLToken2URLKeywordDistributedMapCache
  • WCSEORedirectRulesDistributedMapCache
  • WCSEOURLDistributedMapCache
  • WCWidgetDefinitionDistributedMapCache
  • WCLayoutDistributedMapCache
  • WCSEOPageDefinitionDistributedMapCache
  • WCPR_Cache
DistributedMaps in the Store server
  • WCFlexFlowDistributedMapCache
  • WCStoreDistributedMapCache
  • WCSEORedirectRulesDistributedMapCache
  • WCSEOURLDistributedMapCache
  • WCSEOURLToken2URLKeywordDistributedMapCache
  • WCSEOURLKeyword2URLTokenDistributedMapCache
  • WCRESTTagDistributedMapCache
  • WCLayoutDistributedMapCache
For information about customizing the DistributedMaps, see Additional HCL Commerce data cache configuration.
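
The supported settings are described in that topic. As a purely illustrative sketch, if your Store server (Liberty) defines these maps through Liberty's distributedMap element, a cache-size override in a configDropins/overrides file might look like the following; the id values and the memorySizeInEntries attribute are assumptions to verify against your crsServer configuration:

    <server>
        <!-- Hypothetical override: enlarge the layout cache and shrink the REST tag cache.
             Verify the map ids and supported attributes in your crsServer configuration. -->
        <distributedMap id="services/cache/WCLayoutDistributedMapCache" memorySizeInEntries="5000"/>
        <distributedMap id="services/cache/WCRESTTagDistributedMapCache" memorySizeInEntries="1000"/>
    </server>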