Configure the cluster creation
To set up Tivoli Monitoring with Tivoli System Automation for Multiplatforms, you need two cluster nodes, each with at least one NIC (network interface card). The first step is to prepare both cluster nodes so that they are ready to become a Tivoli System Automation for Multiplatforms cluster.
All commands must run under root authority.
- On both nodes run the following commands:
  export CT_MANAGEMENT_SCOPE=2
  preprpnode node1 node2
where node1 and node2 are the names that resolve to the NICs, as prepared previously.
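Before running preprpnode, it can help to confirm from each computer that both node names resolve to the intended NIC addresses. This quick check is not required by the procedure; node1 and node2 below are placeholders for your host names:
  ping -c 1 node1             # run from node2
  ping -c 1 node2             # run from node1
  cat /etc/hosts              # or check DNS, to confirm the name-to-address mappings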
- On one node, edit the itm6/BuildScripts/clustervariables.sh file and supply the values that are described in Table 1 for the variables. An illustrative excerpt of the file follows the table.
Table 1. Variables for cluster creation
- ITM6_POLICIES_DIRECTORY (General; required): The complete path to the itm6 directory, as untarred from the itm6.sam.policies-2.0.tar file. All nodes must use the same location. Example: /usr/sbin/rsct/sapolicies/itm6
- CANDLEHOME (General; required): The Tivoli Monitoring installation directory on the shared disk. All nodes must use the same path. Example: /shareddisk/ITM
- HUBNAME (General; required): Name of the Tivoli Monitoring Hub. Example: HUB_ITMSYSTEM
- CLUSTER_NODE1 (General; required): Name of the first cluster node. This should be the same value used in the preprpnode command. Example: NODE1
- CLUSTER_NODE2 (General; required): Name of the second cluster node. This should be the same value used in the preprpnode command. Example: NODE2
- CLUSTER_DOMAIN_NAME (General; required): Name of the Tivoli System Automation for Multiplatforms domain to be created. Example: ITM
- CLUSTER_RESOURCE_GROUP (General; required): Name of the Tivoli System Automation for Multiplatforms resource group to hold the Tivoli Monitoring resources. Example: ITMRG
- DB2_POLICIES_DIRECTORY (DB2 cluster; only needed if a DB2 cluster is being used): The complete path to the directory that contains the DB2 Tivoli System Automation for Multiplatforms scripts (db2_start.ksh, db2_stop.ksh, db2_monitor.ksh, and so on), as extracted from the db2salinux.tar.gz file. Example: /usr/sbin/rsct/sapolicies/db2
- DB2_INSTANCE (DB2 cluster; only needed if a DB2 cluster is being used): The DB2 instance in the cluster. Example: DB2INST1
- HAFS_MOUNTPOINT (Shared file system; required): The mount point of the shared disk. Example: /shareddisk
- HAFS_DEVICENAME (Shared file system; required): The device name of the shared disk. Example: /dev/sdd1
- HAFS_VFS (Shared file system; required): The type of shared disk. Example: reiserfs
- AIXVG_VGNAME (Shared file system on AIX; only needed if the cluster is running on AIX): Name of the volume group containing the shared file system's logical volume.
- AIXVG_LVNAME (Shared file system on AIX; only needed if the cluster is running on AIX): Name of the logical volume containing the shared file system.
- HAIP_IPADDRESS (Virtual IP address information; required): The IP address of the virtual system. Example: 9.12.13.14
- HAIP_NETMASK (Virtual IP address information; required): The subnet mask of the virtual system. Example: 255.255.252.0
- HAIPNICS_NODE1ETHERNETNUMBER (Ethernet information for nodes; required): The Ethernet interface number for node1. Run the ifconfig command to determine the Ethernet numbers. Example: eth3 (for Linux)
- HAIPNICS_NODE2ETHERNETNUMBER (Ethernet information for nodes; required): The Ethernet interface number for node2. Run the ifconfig command to determine the Ethernet numbers. Example: en5 (for AIX)
- NETTB_ADDRESS (Network tiebreaker; only needed if using a network tiebreaker): The IP address of the network tiebreaker. Must be pingable by both nodes. Example: 10.20.20.40
- SCSITB_NODE1HOST (SCSI tiebreaker on Linux; only needed if using a SCSI tiebreaker on Linux): The host number of the SCSI tiebreaker device on node1. Use the dmesg | grep 'Attached scsi disk' command or the cat /proc/scsi/scsi command to determine the SCSI tiebreaker values. Example: 1
- SCSITB_NODE1CHAN (SCSI tiebreaker on Linux; only needed if using a SCSI tiebreaker on Linux): The channel number of the SCSI tiebreaker device on node1. Example: 0
- SCSITB_NODE1ID (SCSI tiebreaker on Linux; only needed if using a SCSI tiebreaker on Linux): The ID number of the SCSI tiebreaker device on node1. Example: 0
- SCSITB_NODE1LUN (SCSI tiebreaker on Linux; only needed if using a SCSI tiebreaker on Linux): The logical unit number of the SCSI tiebreaker device on node1. Example: 0
- SCSITB_NODE2HOST (SCSI tiebreaker on Linux; only needed if using a SCSI tiebreaker on Linux): The host number of the SCSI tiebreaker device on node2. Example: 1
- SCSITB_NODE2CHAN (SCSI tiebreaker on Linux; only needed if using a SCSI tiebreaker on Linux): The channel number of the SCSI tiebreaker device on node2. Example: 0
- SCSITB_NODE2ID (SCSI tiebreaker on Linux; only needed if using a SCSI tiebreaker on Linux): The ID number of the SCSI tiebreaker device on node2. Example: 0
- SCSITB_NODE2LUN (SCSI tiebreaker on Linux; only needed if using a SCSI tiebreaker on Linux): The logical unit number of the SCSI tiebreaker device on node2. Example: 0
- DISKTB_DISKNAME (Disk tiebreaker on AIX; only needed if using a disk tiebreaker on AIX): The disk name of the disk tiebreaker. Example: /dev/hdisk0
- ECKDTB_DEVICENUM (ECKD tiebreaker on Linux on zSeries; only needed if using an ECKD tiebreaker on Linux on zSeries): The device number of the ECKD tiebreaker. Use the cat /proc/dasd/devices command to get this information. Example: 099f
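For illustration, a clustervariables.sh that is filled in with the example values from Table 1 (a Linux scenario) might contain assignments such as the following. The VARIABLE=value lines shown here are an assumed sketch, not the exact file contents; the actual file shipped with the policies contains additional settings and comments.
  # Example settings only; substitute the values for your environment
  ITM6_POLICIES_DIRECTORY=/usr/sbin/rsct/sapolicies/itm6
  CANDLEHOME=/shareddisk/ITM
  HUBNAME=HUB_ITMSYSTEM
  CLUSTER_NODE1=NODE1
  CLUSTER_NODE2=NODE2
  CLUSTER_DOMAIN_NAME=ITM
  CLUSTER_RESOURCE_GROUP=ITMRG
  HAFS_MOUNTPOINT=/shareddisk
  HAFS_DEVICENAME=/dev/sdd1
  HAFS_VFS=reiserfs
  HAIP_IPADDRESS=9.12.13.14
  HAIP_NETMASK=255.255.252.0
  HAIPNICS_NODE1ETHERNETNUMBER=eth3
  HAIPNICS_NODE2ETHERNETNUMBER=eth3
  NETTB_ADDRESS=10.20.20.40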
- Change to the itm6/BuildScripts directory, and run the following command:
./generateclusterfiles.sh
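For example, using the ITM6_POLICIES_DIRECTORY value from Table 1 (your path might differ), and listing the generated definition files afterward as a quick check:
  cd /usr/sbin/rsct/sapolicies/itm6/BuildScripts
  ./generateclusterfiles.sh
  ls ../DefFiles              # TEMSSrv.def, TEPSSrv.def, TDWProxy.def, SPAgent.def, and so on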
- You might want to modify the StartCommandTimeout (the amount of time, in seconds, to wait for the resource to start) and StopCommandTimeout (the amount of time, in seconds, to wait for the resource to stop) values in the following files, based on how long these actions take on your system. For example, you can double the expected times. An illustrative excerpt follows this list.
Hub TEMS: itm6/DefFiles/TEMSSrv.def
Portal server: itm6/DefFiles/TEPSSrv.def
Warehouse Proxy Agent: itm6/DefFiles/TDWProxy.def
Summarization and Pruning Agent: itm6/DefFiles/SPAgent.def
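For example, to allow the Hub TEMS ten minutes to start and five minutes to stop, the two attributes in TEMSSrv.def might be set as shown below. Only the two timeout attribute names come from this procedure; the Name value and the exact layout of the generated file are illustrative, because the real files are produced by generateclusterfiles.sh from your clustervariables.sh settings:
  PersistentResourceAttributes::
          Name = "TEMSSrv"
          StartCommandTimeout = 600
          StopCommandTimeout = 300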
- Copy the contents of the itm6 directory, and all of its subdirectories, from the node where the generateclusterfiles.sh command was run, to the same location on the other node.
After copying, verify that the execution flag and owner are correct for the files in itm6/BuildScripts and itm6/ControlScripts.
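One way to do this, assuming the example policies directory from Table 1 and ssh access between the nodes (the commands below are an illustrative sketch, not the only method):
  # On the node where generateclusterfiles.sh was run:
  scp -pr /usr/sbin/rsct/sapolicies/itm6 root@NODE2:/usr/sbin/rsct/sapolicies/
  # On the other node, confirm (and if necessary restore) owner and execute permission:
  ls -lR /usr/sbin/rsct/sapolicies/itm6/BuildScripts /usr/sbin/rsct/sapolicies/itm6/ControlScripts
  chown -R root:root /usr/sbin/rsct/sapolicies/itm6
  chmod -R u+x /usr/sbin/rsct/sapolicies/itm6/BuildScripts /usr/sbin/rsct/sapolicies/itm6/ControlScripts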