Scenario: Shared disk, passive–active failover on a master domain manager
This scenario describes how to configure HCL Workload Automation and a remote or local DB2® database so that an HACMP cluster can manage the failover of the active master domain manager.
Configuring HCL Workload Automation and a remote DB2® database
The following procedure explains how to configure HCL Workload Automation and a remote DB2® database so that a passive, idle node in the cluster can take over from an active master domain manager that has failed. The prerequisite for this procedure is that you have already configured HACMP.
Install HCL Workload Automation using one of the installation methods described in the Planning and Installation Guide.
- Create the same TWS administrator user and group on all the nodes of the cluster. Ensure that the user has the same ID on all the nodes and points to the same home directory on the shared disk where you are going to install HCL Workload Automation.

  Example: to create the group named twsadm for all HCL Workload Automation administrators, and the TWS administrator user named twsusr with user ID 518 and home directory /cluster/home/twsusr on the shared disk:

      mkgroup id=518 twsadm
      mkuser id=518 pgrp=twsadm home=/cluster/home/twsusr twsusr
      passwd twsusr

  To install HCL Workload Automation in a directory other than the user home on the shared disk, ensure that the directory structure is the same on all nodes and that the useropts file is available to all nodes. Ensure also that the user has the same ID on all the nodes of the cluster.
- Start the node that you want to use to run the installation of HCL Workload Automation and set the parameters so that HACMP mounts the shared disk automatically.
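The requirement that the user ID match on every node can be checked with a small helper. This is a minimal sketch; the helper name is mine, not part of the product, and on a real cluster you would first gather each node's value with something like `ssh node1 "id -u twsusr"`:

```shell
# uids_match succeeds only if every UID passed to it is identical.
# Collect the UIDs reported by each node (e.g. via ssh), then compare them here.
uids_match() {
  first="$1"
  for uid in "$@"; do
    [ "$uid" = "$first" ] || return 1
  done
  return 0
}

# Example: UIDs reported by two nodes, both 518 as in the example above.
uids_match 518 518 && echo "UIDs consistent"
```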
- Install the DB2® administrative client on both nodes, or on a shared disk, configuring it for failover as described in the DB2® documentation.
- Create the db2inst1 instance on the active node to create a direct link between HCL Workload Automation and the remote DB2® server.
- Proceed with the HCL Workload Automation installation, using the twsusr user's home directory and the local db2inst1 instance.
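On UNIX, the instance in the previous step is typically created with the db2icrt utility. The sketch below only assembles the command line; the installation path and the fenced user name (db2fenc1) are assumptions for illustration, not values mandated by this procedure:

```shell
# Hypothetical DB2 installation path and fenced user; adjust for your system.
DB2DIR=/opt/IBM/db2/V11.5
FENCED_USER=db2fenc1
INSTANCE=db2inst1

# db2icrt -u <fenced user> <instance name> creates the instance; run as root.
cmd="$DB2DIR/instance/db2icrt -u $FENCED_USER $INSTANCE"
echo "$cmd"
```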
After you have installed HCL Workload Automation, run the cluster collector tool to automatically collect files from the active master domain manager. These files include the registry files, the Software Distribution catalog, and the HCL Workload Automation external libraries. The cluster collector tool packages the collected files into a .tar file; to make these files available on the passive nodes, extract the .tar file on each of them.
To configure HCL Workload Automation for HACMP, perform the following steps:
- Run the cluster collector tool. From TWA_home/TWS/bin, run:

      ./twsClusterCollector.sh -collect -tarFileName tarFileName

  where tarFileName is the complete path where the archive is stored.
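Because -tarFileName must be the complete path of the archive, a small guard can catch relative paths before the collector runs. The helper name below is mine, not part of the product:

```shell
# require_abs_path fails unless its argument starts with "/", i.e. unless
# it is a complete (absolute) path as twsClusterCollector.sh expects.
require_abs_path() {
  case "$1" in
    /*) return 0 ;;
    *)  echo "tarFileName must be an absolute path: $1" >&2; return 1 ;;
  esac
}

require_abs_path /cluster/share/tws_collect.tar && echo "path ok"
```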
- Copy tws_user_home/useropts_twsusr from the active node to the other nodes, from both the root and user home directories.
- Replace the node hostname with the service IP address in the definitions of the master domain manager, the WebSphere Application Server, the dynamic workload broker, and the agent. This is described in the topic about changing the workstation host name or IP address in the Administration Guide.
- Copy the start_tws.sh and stop_tws.sh scripts from TWA_home/TWS/config to the TWA_home directory.
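The copy in the step above is just two cp commands. The sketch below is self-contained, using a temporary mock of the TWA_home layout; the real path depends on where you installed the product:

```shell
# Mock TWA_home standing in for the real installation directory.
TWA_home=$(mktemp -d)
mkdir -p "$TWA_home/TWS/config"
: > "$TWA_home/TWS/config/start_tws.sh"
: > "$TWA_home/TWS/config/stop_tws.sh"

# The actual step: copy both scripts from TWA_home/TWS/config to TWA_home.
cp "$TWA_home/TWS/config/start_tws.sh" "$TWA_home/"
cp "$TWA_home/TWS/config/stop_tws.sh" "$TWA_home/"
```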
- Customize the start_tws.sh and stop_tws.sh scripts by setting the DB2_INST_USER parameter, which is used to start and stop the DB2® instance during the failover phase.
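What the customization might look like inside start_tws.sh is sketched below. The contents of the shipped scripts vary by version, so treat this as an illustration of the DB2_INST_USER parameter, not the scripts' actual code:

```shell
# Hypothetical excerpt from a customized start_tws.sh.
DB2_INST_USER=db2inst1   # set to the owner of your DB2 instance

start_db2_instance() {
  # Start DB2 as the instance owner; stop_tws.sh would run db2stop the same way.
  su - "$DB2_INST_USER" -c "db2start"
}
```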
- Run the start_tws.sh and stop_tws.sh scripts to verify that HCL Workload Automation starts and stops correctly.
- Move the shared volume to the second cluster node (if you have already defined the cluster group, you can move it by using the clRGmove HACMP command).
- Run the collector tool to extract the HCL Workload Automation libraries. From the TWA_home/TWS/bin directory, run:

      ./twsClusterCollector.sh -deploy -tarFileName tarFileName

  where tarFileName is the complete path where the archive is stored.
- Configure a new Application Controller resource on HACMP using the customized start_tws.sh and stop_tws.sh scripts.
Configuring HCL Workload Automation and a local DB2® database
- Install DB2® locally on both nodes, or on the shared disk, without creating a new instance.
- Create a new instance on the shared disk, define all the DB2® users also on the second node, and modify the following two files:
  - /etc/hosts.equiv

    Add a new line with just the Service IP address value.
  - <db2-instance-home>/sqllib/db2nodes.cfg

    Add a new line similar to the following:

        0 <Service IP address> 0
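The two file edits above amount to appending one line each. The sketch below uses temporary stand-in files so it is self-contained, and 9.168.100.70 is a hypothetical service IP address:

```shell
SERVICE_IP=9.168.100.70    # hypothetical service IP address
HOSTS_EQUIV=$(mktemp)      # stands in for /etc/hosts.equiv
DB2NODES=$(mktemp)         # stands in for <db2-instance-home>/sqllib/db2nodes.cfg

# /etc/hosts.equiv: a new line with just the service IP address.
echo "$SERVICE_IP" >> "$HOSTS_EQUIV"

# db2nodes.cfg: node number, host (the service IP), logical port.
echo "0 $SERVICE_IP 0" >> "$DB2NODES"
```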
- To start and stop the monman process used for Event Driven Workload Automation, add "conman startmon" to the start_tws.sh script and "conman stopmon" to the stop_tws.sh script.