Database performance tuning for WebSphere Commerce and pureScale

To improve database performance, tune WebSphere Commerce and pureScale.

The following section describes recommended tuning for DB2 pureScale. The tuning procedures are also applicable for the Enterprise Server Edition, unless noted for DB2 pureScale.

Registry settings

Two performance-related environment variables can be set for a DB2 pureScale environment:
  • DB2_DATABASE_CF_MEMORY
  • DB2_MCR_RECOVERY_PARALLELISM_CAP
These variables are not applicable for Enterprise Server Edition. For more information about the appropriate variables for your environment, see DB2 pureScale environment variables.

Set up WebSphere Commerce server affinity to DB2 pureScale member

By default, a WebSphere Commerce server can connect to any DB2 pureScale member by using workload balancing. Workload balancing is active when the enableSysplexWLB custom property in the WebSphere Application Server data source is set to true. This configuration can be sufficient for workloads that are not performance or volume intensive.

An approach for achieving higher performance is to set enableSysplexWLB to false, so that the data source connects to a specific pureScale member. Failover can be configured with the enableClientAffinitiesList, clientRerouteAlternateServerName, and clientRerouteAlternatePortNumber parameters for the data source. Manual load balancing can be achieved by configuring a similar number of servers for each DB2 pureScale member.

WebSphere Commerce data sources are created by default at the server scope. You can instead create a data source at the node scope to group child servers and simplify management of data source changes. A node-scoped data source lets you assign WebSphere Commerce nodes to a pureScale member rather than working with many servers and their individual data sources.

The following steps set up affinity for a WebSphere Commerce data source to a specific DB2 pureScale member and enable failover to another member:
  1. Using the WebSphere Application Server administrative console, go to Resources > JDBC > Data sources.
  2. Select the data source to be configured.
  3. In Common and required data source properties, complete all the information for the target DB2 pureScale member.
  4. In Additional Properties > Custom Properties, add or edit the following custom properties and their values:

    pureScale properties

    enableSysplexWLB = false
      Turns off workload balancing among the database members.

    enableClientAffinitiesList = 1
      The IBM Data Server Driver for JDBC and SQLJ enables client affinities for cascaded failover support. Only the servers that are specified in the clientRerouteAlternateServerName and clientRerouteAlternatePortNumber properties are retried; the driver does not attempt to reconnect to any other servers.

    affinityFailbackInterval = 300
      The number of seconds to wait after the first transaction boundary before failing back to the primary server. Set this value if you want to fail back to the primary server.

    enableSeamlessFailover = 1
      The IBM Data Server Driver for JDBC and SQLJ uses seamless failover. This means that the driver does not throw an SQLException with error code -4498 after a failed connection is successfully re-established, when the following conditions are true:
      • The connection was not being used for a transaction at the time the failure occurred.
      • There are no outstanding global resources, such as global temporary tables or open, held cursors, or connection states that prevent a seamless failover to another server.
      When seamless failover occurs after the connection to a new data source is established, the driver reissues the SQL statement that was being processed when the original connection failed.
  5. In Additional Properties > WebSphere Application Server data source properties > Advanced DB2 features > DB2 automatic client reroute options, set up the DB2 pureScale members to use for failover.

    Add the host names of the DB2 pureScale members to fail over to, separated by commas, in the Alternate server names field. Then, add their corresponding DB2 server listener port values, separated by commas, in the Alternate port numbers field.

  6. Save your changes and restart the WebSphere Commerce servers to use the new data source settings.
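
After the restart, you can confirm which pureScale member a server's connections are using. A minimal check, run from any SQL client that connects through the data source (the CURRENT MEMBER special register reports the member number for the current connection; the DB2_MEMBER administrative view lists the members the instance knows about):

```sql
-- Report the pureScale member number that this connection is using.
-- With enableSysplexWLB=false, every connection from the same data
-- source should report the same member.
VALUES CURRENT MEMBER;

-- Cross-check against the instance's member list.
SELECT ID, HOME_HOST, CURRENT_HOST, STATE
FROM SYSIBMADM.DB2_MEMBER;
```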

Append mode for tables

WebSphere Commerce tables that have high insert activity, and whose rows are inserted in order of an increasing primary key value, are good candidates for append mode. With append mode, rows are always inserted at the end of the table, which eliminates the cost of searching for free space. Append mode is suitable when you only add new data and retrieve existing data in a table.
  • To enable a table for append mode, use the following DB2 command: ALTER TABLE table_name APPEND ON
  • To disable a table if it is already in append mode, use the following DB2 command: ALTER TABLE table_name APPEND OFF
  • To examine whether a table is using append mode, use the db2look utility to get details about the table.
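
Instead of parsing db2look output, you can also query the catalog directly; the APPEND_MODE column of SYSCAT.TABLES records the setting. The table name in this query is an example; substitute your own:

```sql
-- APPEND_MODE is 'Y' when append mode is on, 'N' otherwise.
SELECT TABSCHEMA, TABNAME, APPEND_MODE
FROM SYSCAT.TABLES
WHERE TABNAME = 'ORDERS'   -- example table name
  AND TYPE = 'T';
```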

The drawbacks of using append mode are that deletions waste space, reads can cost more, and clustering indexes are not possible.

Analysis that uses DB2 snapshots or monitoring table functions helps validate which tables are suitable for append mode. It is important to consider the table usage pattern during a workload that includes administrative and maintenance operations.

An example of a table with high inserts is the orders table. Orders are constantly created but rarely deleted and might be read periodically.

Increase table space extent size

When heavy inserts occur in tables or indexes, new extents are allocated to the table space. Each extent allocation involves internal message traffic within the pureScale environment. Larger extent sizes for table spaces with heavy inserts help reduce this internal communication.

The table space extent size cannot be altered after creation. To implement this change:
  1. Create table spaces with larger extent sizes.
  2. Create the tables with heavy inserts, and their associated objects (indexes, aliases, constraints, and triggers), in the larger-extent table spaces.
  3. Move the data from the existing tables to the newly created tables.
  4. Drop the existing tables.
One way to perform this task is to use the db2look DB2 utility to capture the object definitions for modification and an INSERT ... SELECT approach to move the data.

An example of a table with heavy inserts is the CTXDATA table, where data is inserted every time that the site gets a hit.
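
The migration can be sketched with DDL. The table space name, extent size, schema, and elided column list below are illustrative assumptions; generate the real table definition with db2look:

```sql
-- 1. Create a table space with a larger extent size
--    (64 pages here, chosen only for illustration).
CREATE TABLESPACE LARGE_EXTENT_TS
  MANAGED BY AUTOMATIC STORAGE
  EXTENTSIZE 64;

-- 2. Re-create the heavy-insert table (definition captured with db2look)
--    in the new table space, along with its indexes and constraints.
CREATE TABLE MYSCHEMA.CTXDATA_NEW ( ... ) IN LARGE_EXTENT_TS;

-- 3. Move the data, then swap the tables.
INSERT INTO MYSCHEMA.CTXDATA_NEW SELECT * FROM MYSCHEMA.CTXDATA;
RENAME TABLE MYSCHEMA.CTXDATA TO CTXDATA_OLD;
RENAME TABLE MYSCHEMA.CTXDATA_NEW TO CTXDATA;

-- 4. After verification, drop the original table.
DROP TABLE MYSCHEMA.CTXDATA_OLD;
```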

Drop unused indexes

Each insert, update, and delete operation in a table leads to similar operations for each of the associated indexes. To reduce the overhead of maintaining indexes, keep the minimal set of required indexes.

Note: If you drop an index, you might encounter a database performance issue when new fixes or features are added to your site. When you apply maintenance, you can introduce new database queries that rely on the dropped index in your instance. Without the index in place, the query performance can be negatively impacted. When you do drop an index, ensure that you monitor your database performance closely after you apply a maintenance package. If your database performance is negatively affected, consider adding the dropped index back onto the appropriate table.
To drop an index, use the following DB2 command:

drop index schema_name.index_name;
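
To decide which indexes are candidates for removal, the catalog can help. On reasonably recent DB2 releases, SYSCAT.INDEXES records the date each index was last used; the schema name below is an example:

```sql
-- LASTUSED is '0001-01-01' for indexes that have not been used
-- since last-used tracking began.
SELECT INDSCHEMA, INDNAME, TABNAME, LASTUSED
FROM SYSCAT.INDEXES
WHERE TABSCHEMA = 'MYSCHEMA'   -- substitute your Commerce schema
ORDER BY LASTUSED;
```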

Use random indexes

If the column values of the data are always ascending or descending and heavy inserts occur, the corresponding index key insertions all target the same page to maintain the key order. This behavior leads to page contention. A random index stores the column values in random order, which reduces the page contention but introduces overhead for data retrieval.

To create an index with random order, drop the original index and then re-create it with the random option. To have the primary key benefit from random indexes, drop the primary key and re-create it after you create random indexes.

To create a random index for a table, use the following DB2 command:

CREATE INDEX schema_name.index_name ON schema_name.table_name (column_name RANDOM);
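
For example, the following sketch converts a hypothetical index on the ORDERS table and its primary key to random ordering; the index and column names are illustrative. DB2 can reuse an existing matching unique index when the primary key is re-added:

```sql
-- Re-create a regular index with random key ordering.
DROP INDEX MYSCHEMA.I_ORDERS_1;
CREATE INDEX MYSCHEMA.I_ORDERS_1
  ON MYSCHEMA.ORDERS (ORDERS_ID RANDOM);

-- For the primary key: drop it, create the random unique index,
-- then re-add the key so it uses that index.
ALTER TABLE MYSCHEMA.ORDERS DROP PRIMARY KEY;
CREATE UNIQUE INDEX MYSCHEMA.I_ORDERS_PK
  ON MYSCHEMA.ORDERS (ORDERS_ID RANDOM);
ALTER TABLE MYSCHEMA.ORDERS ADD PRIMARY KEY (ORDERS_ID);
```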

Inline LOBs

When a LOB is stored separately from the row data, an extra I/O is required to insert, update, or fetch the data. The performance of LOB access can be improved by placing the LOB inline with the row data. The maximum inline LOB length is the base table row size limit minus the size of the existing data columns and 4 bytes of overhead. LOBs that cannot be stored inline are stored as separate storage objects.

To take advantage of inline LOB functionality for both existing and future LOB data, use the following DB2 commands:
	ALTER TABLE table_name ALTER COLUMN LOB_column_name SET INLINE LENGTH size;
	REORG TABLE table_name LONGLOBDATA;

For example, the PX_PROMOTION table might be a table that benefits from inline LOBs.
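
To verify how much of the LOB data is actually inlined after the REORG, DB2 provides the ADMIN_IS_INLINED and ADMIN_EST_INLINE_LENGTH scalar functions. The schema and LOB column name below are assumptions for illustration:

```sql
-- Count rows whose LOB value is stored inline (1) versus out of row (0).
SELECT ADMIN_IS_INLINED(XMLPARAM) AS IS_INLINED, COUNT(*) AS ROW_COUNT
FROM MYSCHEMA.PX_PROMOTION
GROUP BY ADMIN_IS_INLINED(XMLPARAM);

-- Estimate the inline length that the largest value would need.
SELECT MAX(ADMIN_EST_INLINE_LENGTH(XMLPARAM)) AS MAX_NEEDED
FROM MYSCHEMA.PX_PROMOTION;
```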