Caching strategy

When you plan a caching strategy for HCL Commerce, you must decide which pages to cache and where to cache them. These decisions depend on whether you are caching on a local (Transaction) server or a remote (Store) server. The following approaches can help you make these decisions.

Caching local (Transaction) server pages

When you are caching locally, assess the following considerations:
  • Which pages provide the best performance improvement if cached.
  • Where caching is to occur.
  • Whether to cache full pages or page fragments.
  • How to invalidate the cached data.

Which pages to cache

Good candidates for caching are web pages that:

  • Are frequently accessed.
  • Are stable over time.
  • Contain mostly content that can be reused by many users.

A good example would be catalog display pages.

Where caching is to occur

Ideally, caching takes place in the tier closest to the user. In practice, other factors such as security and user-specific data can influence the best place to cache the content. To maximize the benefit of dynamic caching, break a page into fragments as finely as possible so that they can be cached independently as separate cache entries.

For example, the non-user specific, non-security sensitive fragments are generally useful to many users, and can be cached in a more public space and closer to the user. The security sensitive data can be cached behind the enterprise firewall.

For stores that run on the Transaction server, caching outside of WebSphere Application Server can improve performance for larger databases and sites. The Edge Server and the ESI cache plug-in are provided with WebSphere Application Server for this extra caching capability. Session information (language ID, preferred currency ID, parent organization, contract ID, and member group) must be stored in session cookies. These cookies are required for caching to be done on a server external to WebSphere Application Server.
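As a sketch, an entry in cachespec.xml can be marked as cacheable by the external ESI cache by setting the EdgeCacheable property. The servlet path, parameter, and cookie names below are illustrative, not part of a standard configuration:

```xml
<cache-entry>
  <class>servlet</class>
  <!-- Illustrative servlet path; substitute your store's display servlet -->
  <name>/CategoryDisplay</name>
  <!-- Allow the ESI cache plug-in / Edge Server to cache this entry externally -->
  <property name="EdgeCacheable">true</property>
  <cache-id>
    <!-- Build the cache key from non-user-specific request parameters -->
    <component id="categoryId" type="parameter">
      <required>true</required>
    </component>
    <!-- Session values such as language must come from cookies so that the
         external server can vary the cache key; the cookie name is hypothetical -->
    <component id="sessionLangId" type="cookie">
      <required>false</required>
    </component>
  </cache-id>
</cache-entry>
```

Because the external cache cannot run application logic, everything that distinguishes one cached copy from another must be visible in the request itself, which is why the session values must travel in cookies.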

Cache full pages or page fragments

All web pages consist of smaller and often simpler fragments. An example of a page fragment might be a header, sidebar, footer, or an e-Marketing Spot. Breaking a web page into fragments or components makes more caching possible for any page, even for personalized pages. Fragments can be designed to maximize their reusability.

Caching a whole web page means that the entire page, including the content of all its fragments, is stored as one large cache entry with no includes or forwards. This approach can save a significant amount of application server processing and is typically useful when the external HTTP request contains all the information that is needed to find the entry.

If web pages are broken into different fragments and the fragments are cached individually, then some fragments can be reused for a wider audience. When a web page is requested, then different fragments are reassembled to produce the page. For more information, see Full page and fragment caching.

If the page output has sections that are user-dependent, then the page output is cached in a manner that is known as fragment caching. That is, the JSP pages are cached as separate entries and are reassembled when they are requested. If the page output always produces the same result based on the URL parameters and request attributes, then the output can be cached with a cache-entry that sets the consume-subfragments (CSF) property, using the HCL Commerce store controller servlet (com.ibm.commerce.struts.ECActionServlet for HCL Commerce Version 9.0.0.x, or com.ibm.commerce.struts.v2.ECActionServlet for Version 9.0.x) as the servlet name.
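For example, a minimal cachespec.xml entry that uses consume-subfragments might look like the following sketch; the cache-ID components and the /ProductDisplay path are illustrative and depend on your store's URLs:

```xml
<cache-entry>
  <class>servlet</class>
  <!-- Use com.ibm.commerce.struts.v2.ECActionServlet.class on Version 9.0.x -->
  <name>com.ibm.commerce.struts.ECActionServlet.class</name>
  <!-- Cache the parent page together with all of its child fragments -->
  <property name="consume-subfragments">true</property>
  <cache-id>
    <component id="" type="pathinfo">
      <value>/ProductDisplay</value>
    </component>
    <!-- Illustrative parameter; the full page must vary only by these inputs -->
    <component id="productId" type="parameter">
      <required>true</required>
    </component>
  </cache-id>
</cache-entry>
```

With CSF enabled, the child fragments are absorbed into the parent's cache entry, so a single cache hit serves the whole page.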

Web pages can be cached by using full page caching or fragment caching, or a combination of both methods.

Caching remote (Store) server pages

If you are using remote stores that run under the WebSphere Liberty Profile, your caching strategy must change to reflect the containerization of the Transaction and Search servers. In particular, you need to cache the results of REST calls differently.

Which pages are cached

Your selection of pages to cache is largely the same for local and remote strategies. However, in addition to the servlet cache, a remote Store server cache holds not only JSP/servlet rendering results, but also the results of remote REST calls. Bear in mind that calls that were local in a Local Store topology are now remote calls. Therefore, you have two considerations: you still need to provide rapid response times to calls from the customer browser, but you must also minimize the number of calls that are passed to the remote servers. You can cache content that is frequently fetched from the Transaction or Search servers, such as rendering results and REST results for common remote queries.

Where caching occurs

Consider caching REST tag results. If a REST result is user-, security-, and environment-neutral, then it is a candidate for caching in the REST result cache. You can use the cached attribute of the wcf:rest tag to declare that a call's result can be cached.
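For example, in a store JSP, a user-neutral REST call might be declared cacheable as in the following sketch; the URL expression and variable names are illustrative:

```jsp
<%-- Illustrative: fetch product data from the Search server and allow the
     result to be stored in the REST result cache --%>
<wcf:rest var="productDetails" cached="true"
    url="${searchHostNamePath}${searchContextPath}/store/${WCParam.storeId}/productview/byId/${productId}">
  <wcf:param name="langId" value="${langId}" />
</wcf:rest>
```

Because the result is keyed on the call's inputs, only calls whose output depends solely on those inputs (and not on the current user or session) should be marked cached="true".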

How to invalidate the cached data

You can trigger cache invalidation passively or actively. Passive invalidation uses the Time To Live (TTL) parameter of cache entries for web pages and fragments to trigger the action. After the TTL expires, the cache data container triggers cache invalidation. This configuration works best when you set the TTL at the level of whole web pages and refresh the cache daily. When you use the TTL parameter, any custom logic you create is overridden by the cache expiry.
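A TTL is set with the timeout element of the cache-ID in cachespec.xml, as in this sketch; the servlet path and the 24-hour value are illustrative:

```xml
<cache-entry>
  <class>servlet</class>
  <!-- Illustrative full-page entry -->
  <name>/TopCategoriesDisplay</name>
  <cache-id>
    <component id="storeId" type="parameter">
      <required>true</required>
    </component>
    <!-- Passive invalidation: entry expires 86400 seconds (24 hours) after caching -->
    <timeout>86400</timeout>
  </cache-id>
</cache-entry>
```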

Active invalidation can be triggered by events that you define in the server's cachespec.xml configuration file. However, changes in one server's file are not automatically propagated to the other servers. If you are using the Solr search engine with HCL Commerce Version 9.0.x, Apache Kafka is used as the default messaging infrastructure to propagate invalidation messages. You can create a de facto pipe by writing the invalidation command to the CACHEIVL table, which is accessed by all the servers. This approach is not as fast as a real command pipe.
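As a sketch of active invalidation in cachespec.xml, a cached entry can declare a dependency ID, and a second rule can generate a matching invalidation ID when an update request runs. All servlet paths, IDs, and parameters here are illustrative:

```xml
<!-- The cached display page declares a dependency ID built from productId -->
<cache-entry>
  <class>servlet</class>
  <name>/ProductDisplay</name>
  <cache-id>
    <component id="productId" type="parameter">
      <required>true</required>
    </component>
  </cache-id>
  <dependency-id>productId
    <component id="productId" type="parameter">
      <required>true</required>
    </component>
  </dependency-id>
</cache-entry>

<!-- An update request generates the same ID, invalidating the cached page -->
<cache-entry>
  <class>servlet</class>
  <name>/ProductUpdate</name>
  <invalidation>productId
    <component id="productId" type="parameter">
      <required>true</required>
    </component>
  </invalidation>
</cache-entry>
```

When the update request executes, the generated invalidation ID matches the dependency ID on the cached page, and the entry is removed from the cache.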

If you are using Elasticsearch with HCL Commerce Version 9.1, your caching solution is Redis. You can monitor and control Redis using the Cache Manager, as described in HCL Cache. For information about caching and invalidation in Elasticsearch, see HCL Cache with Redis.

For more information about configuring Apache Kafka, see Cache invalidation using Kafka and ZooKeeper.

Alternatively, if you use WebSphere eXtreme Scale as the centralized cache data container, cache invalidation is also centralized. In this case, you do not need to use a separate messaging system such as Kafka.