Installation checklist - ITCAM Agent for WebSphere Applications


Before the installation

  1. Obtain installation media for ITCAM for Application Diagnostics.
  2. Visit the ITCAM Wiki.
  3. Install the Tivoli Enterprise Portal Server and the Tivoli Enterprise Monitoring Server.


IBM Tivoli Monitoring support files

  1. Install support files for every agent in use on the following hosts:

    1. Tivoli Enterprise Monitoring Servers
    2. Tivoli Enterprise Portal Servers
    3. Tivoli Enterprise Portal browser client
    4. All hosts where the Tivoli Enterprise Portal Client is installed.

    You do not need to install support files on hosts that use a Web browser to access Tivoli Enterprise Portal.

  2. To regenerate kyn_resources.xml, which is required for the Java WebStart client, configure the Tivoli Enterprise Portal Server after the application files have been installed.


Managing Server checklist

If Managing Server functionality is required, the server must be operational before you configure the data collector components of the agents.

For better performance, install the Visualization Engine component of the Managing Server on a separate host. The Visualization Engine can run on the same system if the number of events that are received by the Publish Server is low and the number of users is minimal.


ITCAM Agent for WebSphere Applications - Pre-Install

Check the following items when installing ITCAM Agent for WebSphere Applications on any host.


Before installing the agent

  1. If running WebSphere version 6.1 or higher (with IBM Developer Kit for the Java platform version 1.5.0), consider using the Generational Garbage Collection policy.

    This policy can reduce Garbage Collection (GC) pauses and lower CPU use in many cases, resulting in better application response times. You can set this policy by using the following JVM parameter...

      -Xgcpolicy:gencon

  2. Create a user account for the agent, and ensure that it has the following permissions:

    1. Read permission to the WebSphere logs (the SystemOut.log and SystemErr.log).

      The agent parses these logs to display data in the Log Analysis workspace.

    2. WebSphere server start/stop privileges, if the Start/Stop Application Servers Take Action commands are needed.

      Verify the privileges by logging in as the user and issuing the startServer and stopServer scripts for all WebSphere server instances.
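
    Both permission checks above can be scripted. The following sketch is illustrative only: the helper name and the example paths are assumptions, not part of the product.

```shell
# Sketch: verify the agent account's permissions (helper name and paths are
# illustrative, not part of the product).
check_log_readable() {
  # Prints "readable" if the given log file can be read, "unreadable" otherwise.
  if [ -r "$1" ]; then echo "readable"; else echo "unreadable"; fi
}

# Example, assuming a typical WebSphere profile layout:
#   check_log_readable /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut.log
#
# Start/stop privileges are verified by running the scripts as the agent user:
#   su - agentuser -c "/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/stopServer.sh server1"
```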

  3. Ensure that the user who completes the configuration of the agent has the required permissions.

  4. To monitor the performance overhead that is introduced by the agent, complete the following steps:

    1. Prepare a baseline performance metric of the system from the operating system without the agent.

      The metrics include CPU use, memory use, disk paging, and other metrics that you consider important. You can use operating system tools like NMON, TOP, and Task Manager to observe and record these metrics.

    2. Evaluate baseline GC activity by turning on the verbose GC option in the WebSphere Java Virtual Machine.

      This option helps in the comparison of GC performance and helps you decide whether to increase the heap size. If the free heap space after GC activity is near the maximum heap size, increase the heap size when you configure the data collector. Use the data collector configuration process to add 128 MB of heap space, if necessary.
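
    The heap-size decision above can be expressed as a small check. This is a sketch only: the 128 MB figure comes from this checklist, while the 20% free-space threshold and the function name are assumptions chosen for illustration.

```shell
# Sketch: suggest a new maximum heap size from the verbose GC baseline.
# The 20% threshold is an assumed cut-off for "free heap near the maximum".
suggest_max_heap() {
  local max_mb=$1 free_after_gc_mb=$2
  if [ $(( free_after_gc_mb * 100 / max_mb )) -lt 20 ]; then
    # Little headroom after GC: add 128 MB for the data collector.
    echo $(( max_mb + 128 ))
  else
    # Ample free heap: keep the current size.
    echo "$max_mb"
  fi
}

suggest_max_heap 1024 80    # little free heap -> prints 1152
suggest_max_heap 1024 600   # ample free heap  -> prints 1024
```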


Before configuring the monitoring agent

  1. Verify that the correct application support files are installed in the following components:

    • Tivoli Enterprise Monitoring Servers (remote and hub)
    • Tivoli Enterprise Portal

    You can check the installed version by using the cinfo command on UNIX systems or the KinCInfo command on Windows systems, and checking the version of the yn component.

    To view the product codes, run...

      ./cinfo

    ...and type 1 when prompted to display the product codes for the components installed on this computer.

  2. Analyze the product-provided situations and determine which ones fit your monitoring needs.

    If many situations that are of little value are enabled, IBM Tivoli Monitoring makes many queries. The number of queries can impact the performance of the IBM Tivoli Monitoring Infrastructure (including the agent) and possibly of WAS. This impact can become critical when a single Monitoring Agent is connected to many data collectors.

  3. Evaluate the tables (workspaces) for which you want to collect historical data.

    Collecting unwanted data can have the following effects:

    • Affects the performance of the agent and the WebSphere server due to periodic queries. The effect can become critical when the monitoring agent is connected to many data collectors.

    • Database storage issues, both short-term history in files and long-term history in Tivoli Data Warehouse.

  4. There is a 32-character IBM Tivoli Monitoring limitation on managed system names.

    This limitation affects both the agent name and the WebSphere server names.

    For the agent name, the default name is:

      node_ID:host_name:KYNA

    If node_ID is not supplied during agent configuration, then it defaults to Primary. Make sure that the combined length of node_ID and host_name does not exceed 26 characters. If it does, provide the appropriate shortened names. The node_ID is supplied in the Alternate Node ID field during the Advanced Monitoring Agent configuration.

    The host_name variable can be changed with the CTIRA_HOSTNAME variable in...

      TEMA_HOME/config/yn.ini

    ...for UNIX systems, and...

      TEMA_HOME/TMAITM6/kynenv

    ...for Windows systems.
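
    The 26-character budget can be checked before configuration. A minimal sketch (the helper name is illustrative; the default Primary comes from this checklist):

```shell
# Sketch: check that node_ID plus host_name fits the 26-character budget,
# leaving room for the two colons and the KYNA suffix in the 32-character limit.
check_name_budget() {
  local node_id=${1:-Primary} host_name=$2
  local len=$(( ${#node_id} + ${#host_name} ))
  if [ "$len" -le 26 ]; then
    echo "ok: $len characters"
  else
    echo "too long: $len characters, shorten node_ID or host_name"
  fi
}

check_name_budget Primary appsrv01    # prints "ok: 15 characters"
```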

  5. The 32-character IBM Tivoli Monitoring limitation also applies to the WebSphere server names. The default name for a WebSphere server managed system node is...

      profile_name$server_name:host_name:KYNS

    The two colons (:) and KYNS are required entries. If the other parts of the names add up to more than 26 characters, then the server names are truncated. To display proper names, consider creating server aliases by providing proper short names for the servers. The server aliases can be assigned by configuring the monitoring agent or by editing...

      TEMA_HOME/config/<host_name>_yn.cfg

  6. Ensure that the Warehouse proxy is set up and the short-term history data is transferred to the Warehouse periodically.

    Large short-term history files impact Tivoli Enterprise Portal workspace response times if a workspace contains tables that display short-term history. These files are not indexed and searches must be done by scanning sequentially from the beginning of the file. For more information about short-term history files, see IBM Tivoli Monitoring Administrator Guide, chapter Collecting historical data.


Before configuring the data collector

  1. If the data collector connects to a Managing Server, ensure the kernel and Publish Server components of the Managing Server are up and running.

  2. Analyze the ports that are required to connect to the Managing Server.

    If there are multiple data collectors on the same host, consider enabling the Port Consolidator to reduce the number of open ports, especially if there is a firewall between the data collector and the Managing Server. For more information about the Managing Server ports that must be opened and about the Port Consolidator, see IBM Tivoli Composite Application Manager for Application Diagnostics: Managing Server Installation and Configuration Guide.

  3. When starting the configuration, complete the following checks:

    • For WAS Base Edition, ensure that the application server instance is running.

    • For Network Deployment and Extended Deployment, ensure that the Node Agent and Deployment manager are running.


After configuring the data collector

  1. Verify that the user account that runs the WebSphere process has read and write permission on...

      DC_home/runtime

    On some systems, it is also necessary to give this permission for all the parent directories. One way to verify permissions is to use the WebSphere user to create a dummy file in...

      DC_home/runtime
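
    The dummy-file check can be done with a short helper. A sketch (the helper name is illustrative; substitute the real DC_home path):

```shell
# Sketch: verify write (and implicitly read) access to a directory by creating
# and removing a scratch file, as the WebSphere user would.
check_dir_writable() {
  local probe="$1/.itcam_perm_probe"
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "writable"
  else
    echo "not writable"
  fi
}

# Example usage, run as the WebSphere user:
#   check_dir_writable DC_home/runtime
```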

  2. Make sure the user account that starts the WebSphere process has read permission on all the agent install directories.

  3. Perform the following steps to check the Java Virtual Machine parameters, either through the WebSphere administrative console or by reading the server.xml file:

    1. Add the following distributed garbage collection parameters to the Java Virtual Machine parameters if they have not been already added:

        -Dsun.rmi.dgc.client.gcInterval=3600000
        -Dsun.rmi.dgc.server.gcInterval=3600000

      If these parameters are not present, add them to the Java Virtual Machine parameter list. Without these parameters, explicit garbage collection starts every minute. The parameters set the interval to 1 hour, allowing typical allocation failures to trigger garbage collection as needed.

    2. Verify that the -Xnoclassgc parameter is not present in the JVM parameters.

      This parameter can cause more memory use, because the classes are not garbage collected. This issue becomes worse when dynamic classes are generated (for example, when using Java reflection).

    3. Verify that the maximum heap size, set in the WebSphere administrative console or in the -Xmx JVM parameter, has an adequate value for the agent memory use.

      To check if the value is adequate, examine the verbose garbage collection log file without the agent. If the free heap space after garbage collection is always close to the current maximum heap size, add at least 128 MB for agent memory use. However, if the free heap space is high (50-60% of maximum heap space) after each garbage collection, then the current heap sizing can be sufficient. Use proper judgment to plan for adequate memory use, so that the Java Virtual Machine does not produce Out Of Memory errors or engage in increased garbage collection activity impacting CPU use.

    4. On Sun HotSpot JVMs (Solaris and HP-UX), make sure the -XX:MaxPermSize parameter is present in the JVM parameters with adequate size.

      All the Java class objects and “interned” String objects (String literals and String.intern() objects) are kept in this area. Set this size to at least 128 MB if it was not present. If the size was set before the data collector configuration, set the size as (1.2MPS + 20) MB, where MPS is the original MaxPermSize; for Sun HotSpot JVM version 1.6, set it to (1.2MPS + 80) MB.
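
      The sizing rule above is simple arithmetic. A sketch (the function name is illustrative; shell integer arithmetic rounds the 1.2 factor down):

```shell
# Sketch: recommended -XX:MaxPermSize after data collector configuration.
# new = 1.2 * MPS + 20 MB, or 1.2 * MPS + 80 MB on Sun HotSpot JVM 1.6.
recommended_permsize_mb() {
  local mps=$1 hotspot16=${2:-no} extra=20
  [ "$hotspot16" = "yes" ] && extra=80
  echo $(( mps * 12 / 10 + extra ))
}

recommended_permsize_mb 128        # 1.2*128 + 20 -> prints 173
recommended_permsize_mb 256 yes    # 1.2*256 + 80 -> prints 387
```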

  4. If necessary, change the location for the data collector trace files. The default locations are as follows:

    • On Windows systems:

        c:\IBM\ITM\TMAITM6\wasdc\7.1\logs\CYN\logs\server

    • On Linux and UNIX systems:

        /opt/ibm/ITM/architecture_code/yn/wasdc/7.1/logs/CYN/logs/server

    On Linux and UNIX systems, the trace file locations are typically on a different mount. In this way, data collector trace and First Failure Data Collection (FFDC) files do not consume hard disk space on the mount where the WebSphere server is installed. To change the location of the trace files, set the new location in the following properties:

    • DC_home/runtime/server/dc.java.properties

        jlog.common.dir
        ibm.common.log.dir

    • DC_home/runtime/server/dc.env.properties (trace-dc-native.log and msg-dc-native.log)

        CCLOG_COMMON_DIR
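
    As an illustration, redirecting the traces to a dedicated mount could look like the following. The /var/log/itcam path is an assumption for the example, not a product default:

```properties
# DC_home/runtime/server/dc.java.properties
jlog.common.dir=/var/log/itcam/dc
ibm.common.log.dir=/var/log/itcam/dc

# DC_home/runtime/server/dc.env.properties
CCLOG_COMMON_DIR=/var/log/itcam/dc
```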

  5. If the agent is used for the first time in the environment or with a new application, initially do not enable entry/exit, memory leak, and lock analysis instrumentation.

    First use regular L1 monitoring, and then start adding features. This approach helps users to understand the WebSphere environment, the applications, and the optimal usage of the various monitoring features. Also, enabling all the features simultaneously might lead to server instability.

  6. On HotSpot JVMs (Solaris and HP-UX), heap analysis and heap dump features consume significant memory and have a potential memory leak in the JRE functionality that produces the heap dump. Do not use these features in these JVMs, especially in the production environment, because they can destabilize the WebSphere JVMs.

  7. Before using the agent, restart all WAS instances where the data collector was configured.


After starting the monitoring agent

Verify that the new node displays in the Tivoli Enterprise Portal console. If the node does not display, first check whether the agent has connected to the Tivoli Enterprise Monitoring Server. Complete the following checks:

  1. Ensure that there are no errors in the TEMA_HOME/logs/hostname_yn_timestamp.log log file.

    This log also provides the following information about the configuration settings that the agent is using:

      • CT_CMSLIST: The location of the Tivoli Enterprise Monitoring Server.
      • CTIRA_HIST_DIR: The location of short-term history files.
      • KWJ_JAVA_HOME: The location of the JRE used by the monitoring agent for its embedded JVM. The WebSphere and J2EE Monitoring Agents run a JVM within the monitoring agent process, known as the embedded JVM.

  2. Verify that the selected history collection has started in...

      ITM_HOME/logs/hostname_yn_timestamp.log

    Entries on starting...

      agent_id:hostname:KYNA.LG0.UADVISOR_KYN_*

    ...must be present. There must also be a Connecting to CMS TEMS entry. If the Tivoli Enterprise Monitoring Server connection is not made, the history entries are not in the log.
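
    These log checks can be scripted with grep. A sketch (the helper name is illustrative; the entry text and log path come from this checklist):

```shell
# Sketch: report whether an expected entry appears in an agent log file.
find_log_entry() {
  # Prints "found" if the pattern occurs in the file, "missing" otherwise.
  if grep -q "$1" "$2" 2>/dev/null; then echo "found"; else echo "missing"; fi
}

# Examples, against the agent log named in the checklist:
#   find_log_entry 'UADVISOR_KYN_'      ITM_HOME/logs/hostname_yn_timestamp.log
#   find_log_entry 'Connecting to CMS'  ITM_HOME/logs/hostname_yn_timestamp.log
```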

  3. Check that the following entries are in...

      ITM_HOME/logs/kyn-tema-msg.log

    ...on Windows systems, or in...

      ITM_HOME/OS_DIR/logs/kyn-tema-msg.log

    ...on Linux and UNIX systems...

      KYNA0001I ITCAM for WebSphere Monitoring Agent start-up complete.
      KYNA0009I Listening for incoming data collector connections on port 63335.

  4. Decide which PMI attributes you want to monitor.

    When servers have many resources such as JDBC, J2C, and Servlets, the performance cost for collecting attributes for these resources can be high. The required PMI attributes can be selected in the datacollector.properties file.

    If you prefer to set these attributes through the WAS configuration and prevent the agent from changing them, set the property...

      am.pmi.settings.nochange=true


After starting the data collector

  1. Check whether the server instance displays in the Tivoli Enterprise Portal console. If it does not, check the monitoring agent log file...

      kyn-tema-msg.log

    ...for the following entry...

      KYNA0011I Application server connected: server-name.

  2. If the data collector is used with the Managing Server, check for the following entries in SystemOut.log:

      <PPECONTROLLER, > Successfully joined Kernel <MS_host>:<port>
      <PPEPROBE, > Successfully joined Kernel <MS_host>:<port>

  3. Make sure that the monitoring level is set to L1. The L2 monitoring level consumes more system resources (depending on the number of J2EE calls that are made by the application).

    L3 can consume significant system resources based on the application methods instrumented and the load on the system.

  4. Make sure that the load and application types that are exercised on the system are comparable to the baseline that was obtained before configuration.

    Then, check the system health. Monitor the increase in the CPU, memory (both physical and virtual), and disk paging rates.

    Compare it against the baseline statistics that were collected before configuration. If these system metrics are too high, begin to troubleshoot by checking the following items:

    1. Check the WebSphere logs for any new log messages, errors, or exceptions. If any error message is not obvious (for example, disk space running low), contact IBM software support.

    2. Check the WebSphere FFDC directory for any new files that are generated. If any files are not obvious and can be attributed to the introduction of the agent, contact IBM Software Support.
    3. On WebSphere systems using IBM Developer Kit for the Java platform (AIX, Linux, Windows systems), check for any Java core or heap dump files that are created after the ITCAM data collector was introduced.

    4. If possible, check the verbose Garbage Collection file (the default is native_stderr.log on IBM Developer Kit for the Java platform and native_stdout.log on HotSpot) for increased garbage collection activity.

      You can use the verbose garbage collection log analyzer tool to obtain vital statistics such as the percentage of time spent on GC, any Out Of Memory errors, and heap fragmentation.

    5. Check the data collector trace logs for any repeated trace messages. Repeated messages also become evident if the trace files are large and are rolling over rapidly. If errors or repeated messages are observed, contact IBM software support.

      The ITCAM trace files are:

      The default location on UNIX:

        /opt/ibm/ITM/architecture_code/yn/wasdc/7.1/logs/CYN/logs/server

      On Windows:

        C:\IBM\ITM\TMAITM6\wasdc\7.1\logs\CYN\logs\server


Agent for J2EE

Check the following items when deploying the J2EE Agent on any host.


Before installing the agent

  1. Create a user account for the agent, and ensure that it has the following permissions:

    1. Read permission to the J2EE Application Server logs. The agent parses these logs to display data in the Log Analysis workspace.

    2. Application server start/stop privileges, if the Start/Stop Application Servers Take Action commands are needed. This permission can be verified by logging on as the user and starting/stopping all the application server instances.

  2. To monitor performance overhead introduced by the agent:

    1. Prepare a baseline performance metric of the system from the operating system without the agent. The metrics include CPU use, memory use, disk paging, and other metrics that you consider important. You can use operating system tools like NMON, TOP, and Task Manager to observe and record these metrics.

    2. Evaluate baseline GC activity by turning on the verbose GC option in the WebSphere Java Virtual Machine. This step helps you compare GC performance with the agent and decide whether to increase the heap size with the agent installed. If the free heap space after GC is near the maximum heap size, increase the heap size.

  3. Make sure to run both the data collector and the monitoring agent installation programs.


Before configuring the monitoring agent

  1. Verify that the correct application support files are installed in the Tivoli Enterprise Monitoring Servers (remote and hub) and Tivoli Enterprise Portal. You can check the installed version by using the cinfo -i command on Linux and UNIX systems or the KinCInfo -i command on Windows systems, and checking the version of the yj component.

  2. Analyze the product-provided situations and determine which ones fit your monitoring needs. If many situations that are of little value are enabled, IBM Tivoli Monitoring makes many queries. The number of queries can impact the performance of the IBM Tivoli Monitoring Infrastructure (including the agent) and possibly of WAS. This impact can become critical when a single Monitoring Agent is connected to many data collectors.

  3. Carefully evaluate the tables (workspaces) for which you want to collect historical data. Collecting unwanted data can result in the following effects:

    • Performance effect on the agent and application server due to periodic queries. This issue becomes critical when the Monitoring Agent is connected to many data collectors.

    • Database storage issues, both short-term history in files and long-term history in Tivoli Data Warehouse (TDW).

  4. There is a 32-character IBM Tivoli Monitoring limitation on managed system names. This issue affects both the agent name and the application server names. For the agent name, the default name is node_ID:host_name:KYJA. If node_ID is not supplied during agent configuration, then it defaults to “Primary.” Make sure that the combined length of the node_ID and the host_name does not exceed 26 characters. If it does, provide the appropriate shortened names. The node_ID is supplied in the Alternate Node ID field during the Advanced Monitoring Agent configuration. The host_name can be changed with the CTIRA_HOSTNAME variable in the TEMA_HOME/config/yj.ini file for UNIX systems, and the TEMA_HOME/TMAITM6/kyjenv file for Windows systems.

  5. The 32-character IBM Tivoli Monitoring limitation also applies to the application server names. To display different names, consider creating server aliases by providing proper short names for the servers. The server aliases can be assigned by configuring the monitoring agent or by editing the TEMA_HOME/config/<host_name>_yj.cfg file.

  6. Make sure that the Warehouse proxy is set up and the short-term history data is transferred to the Warehouse periodically. Having large short-term history files impacts Tivoli Enterprise Portal workspace response times if a workspace contains tables that display short-term history. These files are not indexed and searches must be done by scanning sequentially from the beginning of the file. For more information about short-term history files, see IBM Tivoli Monitoring Administrator Guide, chapter Collecting historical data.


Before configuring the data collector

  1. If your data collector connects to a Managing Server, make sure that the kernel and Publish Server components of the Managing Server are up and running.

  2. Analyze the ports that are required to connect to the Managing Server. If there are multiple data collectors on the same host, consider enabling the port consolidator to reduce the number of ports that are opened to the Managing Server. This issue is especially important if there is a firewall between the data collector and Managing Server. For more information about the Managing Server ports that are to be opened and about the Port Consolidator see IBM Tivoli Composite Application Manager for Application Diagnostics: Managing Server Installation and Configuration Guide.

  3. When starting the configuration, ensure that the application server is up and running.


After configuring the data collector

  1. Verify that the user account running the application server process has read and write permission on the DC_home/runtime directory. On some systems, it is also necessary to give this permission for all the parent directories. One way to verify permissions is to use the application server user to create a dummy file in the DC_home/runtime directory.

  2. Make sure the user account running the application server process has read permission on all the agent install directories.

  3. Perform the following steps to check the Java Virtual Machine parameters:

    1. Verify that the following distributed garbage collection parameters have been added to the Java Virtual Machine parameters:

        -Dsun.rmi.dgc.client.gcInterval=3600000
        -Dsun.rmi.dgc.server.gcInterval=3600000

      If these parameters are not present, add them to the Java Virtual Machine parameter list. Without these parameters, explicit garbage collection starts every minute. The parameters set the interval to 1 hour, allowing standard allocation failures to trigger garbage collection as needed.

    2. Verify that the -Xnoclassgc parameter is not present in the Java Virtual Machine parameters. This parameter can cause more memory use, because the classes are not garbage collected. This issue becomes worse when dynamic classes are generated (for example, when using Java reflection).

    3. Verify that the -Xmx parameter has an adequate value for the agent overhead. Verify this value by examining the verbose garbage collection log file without the agent. If the free heap space after garbage collection is always close to the current maximum heap size, add at least 128 MB for agent overhead. If the free heap space is high (50-60% of maximum heap space) after each garbage collection, then the current heap sizing is sufficient. Use proper judgment to plan for adequate memory use, so that the Java Virtual Machine does not produce Out Of Memory errors or engage in increased garbage collection activity that has an impact on CPU use.

    4. On Sun HotSpot Java Virtual Machines (Solaris and HP-UX), make sure that the -XX:MaxPermSize parameter is present in the Java Virtual Machine parameters with adequate size. All the Java class objects and “interned” String objects (String literals and String.intern() objects) are kept in this area. Set this size to at least 128 MB if it was not present. If the size was set before the data collector configuration, set the size to (1.2MPS + 20) MB, where MPS is the original MaxPermSize; for Sun HotSpot JVM version 1.6, set it to (1.2MPS + 80) MB.

  4. If necessary, change the location for the data collector trace files. The default locations are as follows:

    • On Windows systems: C:\Program Files\ibm\tivoli\common\CYJ\logs.
    • On Linux and UNIX systems: /var/ibm/Tivoli/common/CYJ/logs.

    On UNIX systems, the trace file locations are typically on a different mount. In this way, Data Collector trace and First Failure Data Collection (FFDC) files do not consume hard disk space on the mount where the WebSphere server is installed. To change the location of the trace files, set the new location in the following properties:

    • In DC_home/runtime/server/dc.java.properties: jlog.common.dir, ibm.common.log.dir

    • In DC_home/runtime/server/dc.env.properties (trace-dc-native.log and msg-dc-native.log): CCLOG_COMMON_DIR

  5. If the agent is used for the first time in the environment or with a new application, initially do not enable entry/exit, memory leak, and lock analysis instrumentation. First use regular L1 monitoring, and then start adding features. This approach helps users to understand the WebSphere environment, the applications, and the optimal usage of the various monitoring features. Also, enabling all the features simultaneously might lead to server instability.
  6. On HotSpot Java Virtual Machines (Solaris and HP-UX), heap analysis and heap dump features consume significant memory and have a potential memory leak in the JRE functionality that produces the heap dump. Do not use these features in these Java Virtual Machines, especially in the production environment, because they can destabilize the Java Virtual Machine. If you still need to enable the heap dump on these systems, add the following property in datacollector_custom.properties:

      internal.doheapdump=true

  7. Before using the agent, restart all application server instances where the data collector was configured.


After starting the monitoring agent

Verify that the new node appears in the Tivoli Enterprise Portal console. If the node does not display, first check whether the agent has connected to the Tivoli Enterprise Monitoring Server. Complete the following checks:

  1. Make sure there are no errors in the TEMA_HOME/logs/hostname_yj_timestamp.log log file. This log also provides the following information about the configuration settings the agent is using:

    • CT_CMSLIST: The location of the Tivoli Enterprise Monitoring Server.

    • CTIRA_HIST_DIR: The location of short-term history files.

    • KWJ_JAVA_HOME: The location of JRE used by the monitoring agent for its embedded Java Virtual Machine.

  2. In the TEMA_HOME/logs/hostname_yj_timestamp.log log file, verify that the selected history collection has started. Entries on starting the agent agent_id:hostname:KYJA.LG0.UADVISOR_KYJ_* must be present. There must also be a Connecting to CMS TEMS entry. If the Tivoli Enterprise Monitoring Server connection is not made, the history entries are not seen.

  3. Check that the following entries display in the TEMA_HOME/OS_DIR/logs/kyj-tema-msg.log log file:
    KYJA0001I ITCAM for J2EE Monitoring Agent start-up complete. 
    KYJA0009I Listening for incoming data collector connections on port 63335.


After starting the data collector

  1. Check whether the server instance displays in the Tivoli Enterprise Portal console. If the server instance does not display, check the monitoring agent kyj-tema-msg.log log file for the following entry: KYJA0011I Application server connected: server-name.

  2. If the data collector is used with the Managing Server, check for the following entries in SystemOut.log log file:
    <PPECONTROLLER, …> Successfully joined Kernel <MS_host>:<port>
    <PPEPROBE, …> Successfully joined Kernel <MS_host>:<port>

  3. Ensure that the monitoring level is set to L1. The L2 monitoring level consumes more system resources (depending on the number of J2EE calls that are made by the application). L3 can consume significant system resources based on the application methods instrumented and the load on the system.

  4. Make sure that the load and application types that are exercised on the system are comparable to the baseline obtained before configuration. Then, check the system health. Monitor the increases in the CPU, memory (both physical and virtual), and disk paging rates. Compare it against the baseline statistics that were collected before configuration. If these system metrics are too high, begin to troubleshoot by checking the following items:

    1. Check the application server logs for any new log messages, errors, or exceptions. If any log entry is not obvious (for example, disk space running low), contact IBM software support; to do this, access the Web site http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html and click Contacts.

    2. Check the data collector trace logs for any repeated trace messages. This issue becomes evident if the trace files are large and are rolling over rapidly. If errors or repeated messages are observed, contact IBM software support; to do this, access the Web site http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html and click Contacts. The ITCAM trace files are: trace-dc-native.log, trace-dc-bcm.log, and trace-dc-ParentLast.log. The default location is the /var/ibm/Tivoli/common/CYN/logs/server directory on Linux and UNIX systems, and the C:\ibm\tivoli\common\CYN\logs\server directory on Windows systems.


Agent for HTTP Servers

Check the following items when installing the Agent for HTTP Servers on any host.


Before installing the agent

  1. Create a user account for the agent, and ensure that the following permissions are assigned to the user account:

    1. Read permission to the HTTP Server logs. The agent parses these logs to display data in the Log Analysis workspace.

    2. HTTP server start/stop privileges, if the Start/Stop HTTP Servers Take Action commands are needed. This permission can be verified by logging in as the user and starting and stopping all the HTTP server instances.

  2. To monitor the performance overhead introduced by the agent, prepare a baseline performance metric of the system from the operating system without the agent. The metrics include CPU use, memory use, disk paging, and other metrics that you consider important. You can use operating system tools like NMON, TOP, and Task Manager to observe and record these metrics.


Before configuring the monitoring agent

  1. Verify that the correct application support files are installed in the Tivoli Enterprise Monitoring Servers (remote and hub) and Tivoli Enterprise Portal. You can check the installed version by using the cinfo -i command on UNIX systems or the KinCInfo -i command on Windows systems, and checking the version of the ht component.

  2. Analyze the product-provided situations and determine which ones fit your monitoring needs. Having many situations that are of little value might impact the performance of the IBM Tivoli Monitoring Infrastructure (including the agent) and possibly of the HTTP server, as many queries have to be made.

  3. Carefully evaluate the tables (workspaces) for which you want to collect historical data. Collecting unwanted data can have the following effects:

    • Performance effect on the agent and HTTP server due to periodic queries.

    • Database storage issues, both short-term history in files and long-term history in Tivoli Data Warehouse (TDW).

  4. There is a 32-character IBM Tivoli Monitoring limitation on managed system names. This issue affects both the agent name and the HTTP server names. For the agent name, the default name is node_ID:host_name:KHTA. If node_ID is not supplied during agent configuration, then it defaults to “Primary.” Make sure the combined length of the node_ID and the host_name does not exceed 26 characters. If it does, provide the appropriate shortened names. The node_ID is supplied in the Alternate Node ID field during the Advanced Monitoring Agent configuration. The host_name variable can be changed with the CTIRA_HOSTNAME variable in the TEMA_HOME/config/ht.ini file for UNIX systems, and the TEMA_HOME/TMAITM6/khtenv file for Windows systems.

  5. The 32-character IBM Tivoli Monitoring limitation also applies to the HTTP server names. If you want different names to display, consider creating server aliases by providing proper short names for the servers. The server aliases can be assigned by configuring the monitoring agent or by editing the TEMA_HOME/config/<host_name>_ht.cfg file.

  6. Make sure that the Warehouse proxy is set up and the short-term history data is transferred to the Warehouse periodically. Having large short-term history files impacts Tivoli Enterprise Portal workspace response times if a workspace contains tables that display short-term history. These files are not indexed and searches must be done by scanning sequentially from the beginning of the file. For more information about short-term history files, see IBM Tivoli Monitoring Administrator Guide, chapter Collecting historical data.


After starting the monitoring agent

Verify that the new node displays in the Tivoli Enterprise Portal console. If the node does not display, first check whether the agent has connected to the Tivoli Enterprise Monitoring Server. Complete the following checks:

  1. Make sure there are no errors in the TEMA_HOME/logs/hostname_ht_timestamp.log log file. This log also provides information about the configuration settings that the agent is using.

  2. In the TEMA_HOME/logs/hostname_ht_timestamp.log file, verify that the selected history collection has started. Entries on starting the agent agent_id:hostname:KHTA.LG0.UADVISOR_KHT_* must be present. There must also be a Connecting to CMS TEMS entry. If the Tivoli Enterprise Monitoring Server connection is not made, the history entries do not display.


Parent topic:

Plan an installation