
Install and set up the portal server on clusternode1


After the DB2 database is up and running, install the portal server by following the instructions in the IBM Tivoli Monitoring: Installation and Setup Guide.

To set up the portal server on clusternode1, complete the following steps:


Procedure

  1. Make sure that the resource group is online and that the shared file system, virtual IP address, and DB2 are available on clusternode1.
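
    For example, you can verify these prerequisites with commands similar to the following sketch. The shared mount point /sharedfs and the DB2 instance owner db2inst1 are placeholders for the values used in your environment:

      /usr/es/sbin/cluster/utilities/clRGinfo      # show the state of the HACMP resource groups
      df -k /sharedfs                              # confirm that the shared file system is mounted
      netstat -in                                  # confirm that the virtual IP address is configured
      ps -ef | grep db2sysc                        # confirm that the DB2 engine process is running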

  2. Run the installation script install.sh from the product image as described in the IBM Tivoli Monitoring: Installation and Setup Guide.

    Make sure you install the portal server into the shared file system. For easier handling of the configuration, also install the IBM Tivoli Monitoring Service Console.

    After installation, export the DISPLAY variable to point to your workstation. If you have an X server running, you can use the graphical IBM Tivoli Monitoring Service Console to configure the portal server. Otherwise, use the command line configuration.
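
    For example, assuming your workstation is reachable as admin-ws (a placeholder) and is running an X server, the following sketch exports the display and starts the graphical console:

      export DISPLAY=admin-ws:0.0
      cd <CANDLEHOME>/bin
      ./itmcmd manage &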

  3. Configure the portal server interface and URL so that the portal clients can connect to the portal server on its virtual host name. Add the following line to the <CANDLEHOME>/config/cq.ini file:

      KFW_INTERFACE_cnps_HOST=<virtualhostname>

    where <virtualhostname> is the virtual host name of the portal server.
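
    For example, if the virtual host name of the portal server is tepsvip (a placeholder), the added line reads:

      KFW_INTERFACE_cnps_HOST=tepsvip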

  4. Use one of the following two methods to configure the portal server:

    • To invoke a GUI application, use <CANDLEHOME>/bin/itmcmd manage, which requires a connection to an X Server.

    • To invoke a text application, use <CANDLEHOME>/bin/itmcmd config -A cq, which prompts you for the configuration values.

    If you are configuring the portal server on Linux on zSeries in 64-bit mode, you need to run the command from a 31-bit session. To open a 31-bit session, run:

    s390 sh
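
    As an illustration of the command-line method, the following sketch runs the text-mode configuration from the shared installation directory (on Linux on zSeries in 64-bit mode, open the 31-bit session first as shown above):

      cd <CANDLEHOME>/bin
      ./itmcmd config -A cq     # answer the prompts; use the virtual host names for the portal server and the hub monitoring server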

  5. Configure the monitoring server host name so that it is the same as the virtual host name of the monitoring server (assuming the hub monitoring server is clustered).

  6. Configure the portal server to use the virtual host name.

    1. Make sure Tivoli Enterprise Portal Server and Tivoli Enterprise Portal Server Extensions are stopped.

    2. Back up your configuration data located in the $CANDLEHOME/<architecture>/iw/profiles directory.

    3. Open a UNIX terminal.

    4. Change the current directory to $CANDLEHOME/<architecture>/iw/scripts.

    5. Run sh updateTEPSEHostname.sh <old_hostname> <new_hostname>.

      • For <old_hostname>, substitute the current host name; do not include the domain name.

      • For <new_hostname>, substitute the valid virtual host name of the Tivoli Enterprise Portal Server cluster.

      If you repeat the updateTEPSEHostname.sh script execution, make sure that you use the correct old host name. For example, the following sequence is correct:

      • updateTEPSEHostname.sh h1 crocodile changes the eWAS host name to "crocodile".

      • updateTEPSEHostname.sh crocodile hippi must then use "crocodile" as the old host name, or the script has no effect.

    6. If the operation is successful, 'BUILD SUCCESSFUL' is displayed.

    7. Set the Primary Network Name to the virtual host name of the portal server. This configuration is necessary for the communication between the portal server and the rest of the Tivoli Monitoring infrastructure, including the monitoring server.

      In the GUI, set the Primary Network Name in the Network Name field, which is available after you select Specify Optional Primary Network on the TEMS Connection tab; in the text application, set it in the Optional Primary Network Name field.
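
    Taken together, substeps 1 through 5 amount to a command sequence like the following sketch, where teps-old is the current host name, tepsvip is the virtual host name of the portal server cluster, and /backup is a placeholder backup location:

      cd <CANDLEHOME>/bin
      ./itmcmd agent stop cq                                              # stop the portal server
      cp -R <CANDLEHOME>/<architecture>/iw/profiles /backup/iw-profiles   # back up the configuration data
      cd <CANDLEHOME>/<architecture>/iw/scripts
      sh updateTEPSEHostname.sh teps-old tepsvip                          # expect 'BUILD SUCCESSFUL'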

  7. In the portal server database settings, configure the portal server database on the shared file system.

    After successful configuration, start the portal server manually and verify the connection by using a portal client.
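
    For example, a minimal sketch of starting the portal server and confirming that it is running:

      cd <CANDLEHOME>/bin
      ./itmcmd agent start cq   # start the Tivoli Enterprise Portal Server
      ./cinfo -r                # list the running ITM processes and confirm that the portal server is up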

  8. Stop the portal server and remove its autostart entry.

    Under AIX, the installer creates a file in /etc named rc.itm1. This file must be removed to ensure that the portal server does not start automatically when the node reboots.
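
    For example, on AIX a sketch of these actions looks like this (verify the exact autostart file name on your system before removing it):

      cd <CANDLEHOME>/bin
      ./itmcmd agent stop cq    # stop the portal server
      rm /etc/rc.itm1           # remove the autostart file created by the installer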

  9. Change the ServerName directive in <CANDLEHOME>/<platform>/iu/ihs/conf/httpd.conf to the cluster name.
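
    For example, if the cluster name is itmcluster (a placeholder), the directive reads:

      ServerName itmcluster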


