Known problems and limitations
Keep in mind the specific characteristics and constraints of Tivoli Monitoring installation and setup, and their effects on the cluster configuration.
During the certification test for Tivoli Monitoring clustering, issues encountered while setting up the clustered environment were formally reported as defects. These defects typically relate to setting up Tivoli Monitoring in a non-default manner rather than being specific to the cluster environment, and they are handled as part of the Tivoli Monitoring service stream. The following is a list of known problems and workarounds:
- The Tivoli Monitoring installer configures the components to be autostarted by default and does not offer an option to disable autostart. Under this limitation, you must edit an operating system startup script to remove this behavior; a sketch of the idea follows this item.
This same behavior occurs whether you are installing for the first time or applying fix packs.
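For illustration only, a minimal Python sketch of that edit, assuming the installer created a System V init script such as /etc/init.d/ITMAgents1 and a default install location of /opt/IBM/ITM. The script name, install path, and line format all vary by operating system; treat them as assumptions and verify on your own nodes.

    # disable_itm_autostart.py -- hedged sketch; path and line patterns are assumptions
    from pathlib import Path

    INIT_SCRIPT = Path("/etc/init.d/ITMAgents1")  # hypothetical autostart script name

    def comment_out_start_lines(script: Path) -> None:
        """Comment out every line that starts an ITM component so the
        cluster manager, not the operating system, controls startup."""
        lines = script.read_text().splitlines(keepends=True)
        changed = []
        for line in lines:
            stripped = line.lstrip()
            # Assumed pattern: lines that invoke the agent start commands.
            if stripped.startswith(("su ", "/opt/IBM/ITM")):
                changed.append("# " + line)
            else:
                changed.append(line)
        script.write_text("".join(changed))

    if __name__ == "__main__":
        comment_out_start_lines(INIT_SCRIPT)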
- GSKit is a prerequisite of Tivoli Monitoring. As of the current release of Tivoli Monitoring, this component can be installed only into a fixed location: /usr/local/ibm/gsk7. Under this limitation, GSKit cannot be installed into a user-defined location, such as the file system on the shared disk.
The workaround is to use the Tivoli Monitoring installation program to install GSKit locally on both nodes.
- If console activities are being performed, such as clicking an object on the workspace to request data, while the failover process is occurring, the console takes longer to become active. This delays the time it takes for the agent to become available to the console user, because of the internal delay timers in the reconnection algorithm.
- Some IBM Tivoli Monitoring components depend on other components; for example, the Tivoli Enterprise Portal Server requires DB2. In this case, either create a cluster application server that includes all of the IBM Tivoli Monitoring components along with their dependencies, or write start scripts that block each dependent process until the components it depends on are available; otherwise, the cluster fails on startup. A sketch of such a wait-for-dependency script follows this item.
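For illustration, a minimal Python sketch of a wait-for-dependency start script. The DB2 listener port (50000 is the common default), the default /opt/IBM/ITM install path, and the use of itmcmd agent start cq to start the portal server on UNIX or Linux are assumptions to verify against your own configuration.

    # start_teps_after_db2.py -- hedged sketch of a cluster start script
    import socket
    import subprocess
    import sys
    import time

    DB2_HOST = "localhost"      # assumption: DB2 runs in the same resource group
    DB2_PORT = 50000            # assumption: default DB2 listener port
    TIMEOUT_SECONDS = 300

    def db2_is_listening() -> bool:
        """Return True once the DB2 listener accepts TCP connections."""
        try:
            with socket.create_connection((DB2_HOST, DB2_PORT), timeout=5):
                return True
        except OSError:
            return False

    deadline = time.time() + TIMEOUT_SECONDS
    while not db2_is_listening():
        if time.time() > deadline:
            sys.exit("DB2 did not become available; failing the resource start")
        time.sleep(10)

    # Start the portal server only after its dependency is up.
    # "cq" is the product code for the Tivoli Enterprise Portal Server;
    # the itmcmd path assumes a default install location.
    subprocess.check_call(["/opt/IBM/ITM/bin/itmcmd", "agent", "start", "cq"])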
- If the clustered Tivoli Enterprise Portal Server does not start after a failover or switch, complete the following steps (a sketch that automates the host-name check follows the steps):
1. Make sure the Tivoli Enterprise Portal Server starts successfully on the first computer in your cluster.
2. Switch to the first cluster computer.
3. Change the current directory:
- On Windows systems: %CANDLE_HOME%\CNPSJ\profiles\ITMProfile\config\cells\ITMCell\nodes\ITMNode
- On UNIX or Linux systems: $CANDLE_HOME/<architecture>/iw/profiles/ITMProfile/config/cells/ITMCell/nodes/ITMNode
4. Locate and open the serverindex.xml file.
5. Check whether all "hostName" and "host" properties are equal to the virtual host name of the Tivoli Enterprise Portal Server cluster. Change any invalid value to the correct value.
6. For each unique invalid host name found in step 5, apply steps 2 through 5 accordingly.
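As an illustration of step 5, a minimal Python sketch that reports any hostName or host attribute value in serverindex.xml that does not match the virtual host name. The virtual host name shown is a placeholder, and the script assumes it is run from the ITMNode directory.

    # check_serverindex.py -- hedged sketch; adapt the path and host name
    import re
    from pathlib import Path

    VIRTUAL_HOST = "teps-vhost.example.com"   # placeholder virtual host name
    SERVERINDEX = Path("serverindex.xml")     # run from the ITMNode directory

    text = SERVERINDEX.read_text()
    # Collect every hostName="..." and host="..." attribute value.
    for attr, value in re.findall(r'\b(hostName|host)="([^"]*)"', text):
        if value != VIRTUAL_HOST:
            print(f"invalid {attr}: {value!r} (expected {VIRTUAL_HOST!r})")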
- To configure which Ethernet adapters IBM Tivoli Monitoring uses for communication, you can set the KDEB_INTERFACELIST configuration parameter. See 2 for information about removing this entry from the registry.
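For example, a hedged configuration excerpt; the environment-file location varies by component and platform, and the host name is a placeholder:

    # In the component's environment file (location varies by agent and platform),
    # bind ITM communications to the cluster's virtual host name or address:
    KDEB_INTERFACELIST=vhost.example.com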
- For Java Web Start to work, you must change all occurrences of $HOST$ in the .jnlpt file to your fully qualified host name. You can locate the .jnlpt file in the CANDLE_HOME\config directory. A sketch that performs this substitution follows.
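A minimal Python sketch of that substitution, assuming a Windows-style CANDLE_HOME; the install path is a placeholder, and you may prefer to hard-code the cluster's virtual host name instead of the detected one.

    # fix_jnlpt_host.py -- hedged sketch; adjust CANDLE_HOME and the host name
    import socket
    from pathlib import Path

    CANDLE_HOME = Path(r"C:\IBM\ITM")   # placeholder install location
    FQDN = socket.getfqdn()             # or hard-code the virtual host name

    # Replace every $HOST$ token in each .jnlpt template under config.
    for jnlpt in (CANDLE_HOME / "config").glob("*.jnlpt"):
        text = jnlpt.read_text()
        if "$HOST$" in text:
            jnlpt.write_text(text.replace("$HOST$", FQDN))
            print(f"updated {jnlpt}")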