An active-active high availability scenario

Implement active-active high availability between the Dynamic Workload Console and the master domain manager so that a switch to a backup is transparent to Dynamic Workload Console users.

Use a load balancer between the Dynamic Workload Console servers and the master domain manager so that in the event the master needs to switch to a backup, the switch is transparent to console users.

Configure the master domain manager and backup master domain managers behind a second load balancer so that the workload is balanced across the master domain manager and all backup master domain managers. Load balancing distributes workload requests across all configured nodes, preventing any single node from becoming overloaded and avoiding a single point of failure.

You might already have installed and configured a number of backup master domain managers, in addition to your master domain manager, that you use for dedicated operations and to alleviate the load on the master domain manager. For example, you might have one dedicated to monitoring activities, another to event management, while your scheduling activities run on the master domain manager. Administrators must create an engine connection for each of these engines, and users have to switch between engines to run dedicated operations. Should one of the engines go down, users must be notified which engine to use as a replacement and then switch to it.

To simplify this, configure a load balancer in front of the master domain manager and backup master domain managers so that users are unaware when a switch occurs. Administrators configure a single engine connection in single sign-on that points to the name or IP address and port number of the load balancer, so they never need to know the host name of the currently active master. The load balancer monitors the engine nodes and takes over the task of balancing the workload, and the switch to a backup master domain manager is completely transparent to console users. Any backup master domain manager can satisfy HTTP requests, even those that can be satisfied only by the active master, such as requests on the plan, because these requests are proxied to and from the active master.

To complete the picture of a full high availability HCL Workload Automation environment, the RDBMS and the Dynamic Workload Console need to be configured in high availability. If your RDBMS includes a high availability disaster recovery (HADR) feature and it is enabled, you can configure the datasource.xml file on the WebSphere Application Server Liberty server of the master and backup components to add failover properties. The key-value pairs to set depend on your specific RDBMS. As an example, Db2®'s datasource can be configured with the following set of properties in the XML element named properties.db2.jcc:
<properties.db2.jcc
    databaseName="TWS"
    user="…"
    password="…"
    serverName="MyMaster"
    portNumber="50000"
    clientRerouteAlternateServerName="MyBackup"
    clientRerouteAlternatePortNumber="50000"
    retryIntervalForClientReroute="3000"
    maxRetriesForClientReroute="100"
/>
For the Dynamic Workload Console, ensure that it is connected to an external database (not the embedded Apache Derby database), replicate it, and link the consoles to a load balancer that supports session affinity so that requests related to the same user session are dispatched to the same console instance.
Note: If your environment includes the HCL Workload Automation Agent for z/OS to schedule jobs (JCL) on the JES2 subsystem of z/OS, ensure that the value for the host.bootstrap.port.sec parameter specified in the ports_variables.xml file is the same on every workstation hosting the Dynamic Workload Console component in your environment. For more information about the location of the ports_variables.xml file, see Configuring HCL Workload Automation using templates.
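For illustration only, the parameter is typically defined as a WebSphere Application Server Liberty variable; the exact structure of the file in your installation might differ, and the port value shown here is a placeholder to replace with the value used in your environment:
<server description="ports_variables">
    <!-- Use the same value on every workstation that hosts a Dynamic Workload Console -->
    <variable name="host.bootstrap.port.sec" value="BOOTSTRAP_SEC_PORT"/>
</server>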
The following is a sample of the high-level architecture of an active-active high availability system with two load balancers.
Figure 1. An end-to-end environment configured for high availability
An environment composed of two Dynamic Workload Console servers behind a load balancer, with console users connecting directly to the load balancer. A second load balancer is configured between the Dynamic Workload Console servers and the engine nodes, that is, the master domain manager and backup master domain managers. The engine nodes connect to an RDBMS with HADR enabled.
This environment configuration offers numerous benefits:
High availability
The load balancer monitors the nodes and takes care of balancing the workload across the nodes, eliminating the possibility of a node creating a single point of failure.
Scalability
As client requests increase, you can add backup master domain manager nodes to the configuration to support the increased workload.
User-friendly
Users are unaware of when a switch occurs to a different node. Administrators do not have to worry about creating new engine connections to run workloads on a different node.
Low overhead
This configuration does not require any manual intervention when nodes become unavailable. Additional flexibility is provided to console instance users who no longer have to run certain operations on dedicated nodes.
Optimization of hardware
Load balancing distributes session requests across multiple servers, thereby using hardware resources evenly.
Note: When there are multiple Dynamic Workload Console servers connected to a load balancer and one of the servers becomes unavailable, load balancing takes place automatically; however, console users need to refresh the page and reopen any tabbed pages that were previously open.
The following is intended to be a high-level view of the steps required to implement a scenario of this kind. It is an example and is not meant to be a verified procedure.
  1. Install a load balancer on a workstation either within the HCL Workload Automation environment or external to the environment. Ports must be open between the load balancer and the Dynamic Workload Console nodes and the engine nodes.
  2. Configure multiple Dynamic Workload Console instances in a cluster where the consoles share the same repository settings and a load balancer takes care of dispatching and redirecting connections among the nodes in the cluster. The load balancer must support session affinity. See Configuring High Availability.
  3. Exchange certificates between the load balancer and the Dynamic Workload Console and between the second load balancer and the master domain manager and backup master domain manager nodes (see the sketch after these steps).
  4. Configure the load balancer configuration file with details about the Dynamic Workload Console, master domain manager, and backup master domain managers. The configuration file indicates which nodes (Dynamic Workload Console, master domain manager, and backup master domain managers) are available and the routes to be used to dispatch client calls to the Dynamic Workload Console server nodes.
  5. Configure an engine connection that points to the name or IP address of the load balancer, and specify the incoming port number to the load balancer that corresponds to the outgoing port number to the master (default port number 31116). The load balancer must point to the HTTPS port of the Dynamic Workload Console and the HTTPS port of the master domain manager.
  6. Configure an RDBMS in high availability and enable the HADR feature (see the sketch after these steps).
  7. To configure the Dynamic Workload Console nodes in a cluster behind the load balancer, modify the ports_config.xml file as follows:
    <httpEndpoint host="${httpEndpoint.host}" httpPort="${host.http.port}" httpsPort="${host.https.port}" id="defaultHttpEndpoint">
        <httpOptions removeServerHeader="true"/>
        <remoteIp useRemoteIpInAccessLog="true"/>
    </httpEndpoint>
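The following sketches illustrate steps 3 and 6. They are examples only: all host names, ports, file paths, keystore names, passwords, and parameter values are placeholders to replace with the values used in your environment.
For step 3, one way to exchange certificates with an Nginx load balancer is to extract the certificate it presents and import it into the truststore of each Dynamic Workload Console and engine node; the load balancer, in turn, must trust the certificates of the console and engine nodes (see ssl_trusted_certificate in the sample configuration below):
# Retrieve the certificate presented by the load balancer and save it in PEM format
openssl s_client -connect LB_HOSTNAME:443 -showcerts </dev/null | openssl x509 -outform PEM > lb.crt
# Import the certificate into the truststore used by the WebSphere Application Server Liberty profile
keytool -importcert -trustcacerts -alias wa_load_balancer -file lb.crt -keystore <truststore_path> -storepass <truststore_password>
For step 6, assuming a Db2 RDBMS, HADR can be enabled with commands similar to the following; the synchronization mode, service ports, and instance name are examples only:
# On the primary database server: set the HADR configuration parameters for the TWS database
db2 update db cfg for TWS using HADR_LOCAL_HOST MyMaster HADR_LOCAL_SVC 51012 HADR_REMOTE_HOST MyBackup HADR_REMOTE_SVC 51012 HADR_REMOTE_INST db2inst1 HADR_SYNCMODE NEARSYNC
# Restore a backup of the database on the standby server, apply the mirror-image configuration there, then start HADR, standby first
db2 start hadr on db TWS as standby
db2 start hadr on db TWS as primary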
This solution requires a load balancer that supports session affinity. Nginx is an example of a load balancer of this kind. The following is an excerpt from a configuration file for an Nginx load balancer that demonstrates the configuration settings necessary to implement the high availability use case depicted in Figure 1, An end-to-end environment configured for high availability.
user  nginx;
worker_processes  10;  ## Default: 1
worker_rlimit_nofile 8192;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

 
events {
  worker_connections  4096;  ## Default: 1024
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
 

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

 
    access_log  /var/log/nginx/access.log  main;

 
    sendfile        on;

 
    keepalive_timeout  65;

 
    upstream wa_console { ##DWC configuration
        ip_hash;
        server DWC1_HOSTNAME:DWC1_PORT max_fails=3 fail_timeout=300s;
        server DWC2_HOSTNAME:DWC2_PORT max_fails=3 fail_timeout=300s;    
        keepalive 32;
    }
    
    upstream wa_server_backend_https {
        server MDM1_HOSTNAME:MDM1_PORT weight=1;
        server MDM2_HOSTNAME:MDM2_PORT weight=1;
    }

 
 server{
    listen          443 ssl;


    ssl_certificate /etc/nginx/certs/nginx.crt;
    ssl_certificate_key /etc/nginx/certs/nginxkey.key;
    ssl_trusted_certificate /etc/nginx/certs/ca-certs.crt;
    location / {
        proxy_pass https://wa_console;
        proxy_cache off;

        proxy_set_header Host $host;
        proxy_set_header Forwarded " $proxy_add_x_forwarded_for;proto=$scheme";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port 443;
    }
  }
 
 server{
    listen          9443 ssl;
 
    ssl_certificate /etc/nginx/certs/nginx.crt;
    ssl_certificate_key /etc/nginx/certs/nginxkey.key;
    ssl_trusted_certificate /etc/nginx/certs/ca-certs.crt;
    location / {
        proxy_pass https://wa_server_backend_https;
        proxy_cache off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Connection "close";
    }
  }
    
}
where:
DWCx_HOSTNAME:DWCx_PORT
is the host name and HTTPS port of a Dynamic Workload Console instance.
MDMx_HOSTNAME:MDMx_PORT
is the host name and HTTPS port of the master domain manager or of a backup master domain manager.
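After the load balancer is started, a quick way to verify that both listeners defined in the sample configuration answer is to send a request to each of them from a client workstation. The host name is a placeholder, and the -k option skips certificate verification for this test only:
# Console traffic (listener on port 443 in the sample configuration)
curl -kI https://LB_HOSTNAME:443/
# Engine traffic (listener on port 9443 in the sample configuration)
curl -kI https://LB_HOSTNAME:9443/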