IBM Tivoli Monitoring, Version 6.3: Agentless Monitoring for Linux User's Guide, Troubleshooting


Agent troubleshooting

Table 1 contains problems that can occur with the agent after it is installed, and their solutions.


Table 1. Agent problems and solutions
Problem: Log data accumulates too rapidly.

Solution: Check the RAS trace option settings, which are described in Set RAS trace parameters using the GUI. The trace options that you can set on the KBB_RAS1= and KDC_DEBUG= lines can potentially generate large amounts of data.
Problem: SNMP attribute groups are not reporting data.

Solution:

  1. Check the Data Collection Status workspace to identify the error being reported.

  2. Verify connectivity with the target system:

    a. Make sure that the system can be reached using a tool such as ping.

    b. Make sure that no firewalls are blocking communications on the SNMP port (UDP 161).

    c. Verify that the community strings and passwords match what is configured on the SNMP system.

    d. Review the snmpd.conf file and verify that the SNMP system is not restricting access to localhost.

    e. Use an SNMP tool such as snmpwalk to verify connectivity to the SNMP system.

    f. Review the snmpd.conf file and verify that the MIB branches are not restricted. See Agent-specific installation and configuration for more information.
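The connectivity checks above can be sketched as a short dry-run script. The target address and community string are placeholders, not values from this guide; substitute your remote system's values.

```shell
# Dry-run sketch of the SNMP connectivity checks; values are illustrative.
TARGET=192.0.2.10     # assumption: example target address (TEST-NET range)
COMMUNITY=public      # assumption: example community string
PORT=161              # standard SNMP UDP port

# Print the commands rather than running them, so the sketch is safe to
# execute anywhere. Remove the echo to run the real checks.
echo "ping -c 3 $TARGET"
echo "snmpwalk -v 2c -c $COMMUNITY $TARGET:$PORT system"
```

If snmpwalk returns the system MIB branch but the agent still reports no data, revisit the snmpd.conf restrictions on MIB branches.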

Problem: The Managed System Name for the remote system keeps switching between agent instances.

Solution: The remote system has been defined in two different agent configurations. Remote system nodes must have names that are unique across the IBM Tivoli Monitoring environment.
Problem: When you use the itmcmd agent commands to start or stop this monitoring agent, you receive the following error message:

MKCIIN0201E Specified product is not configured.

Solution: Include the -o option to specify the instance to start or stop. The instance name must match the name that was used when the agent was configured. For example:

./itmcmd agent -o Test1 start r4

For more information about the itmcmd commands, see the Command Reference.

Problem: Perfmon attribute groups are not reporting data.

Solution: Use the Extensible Performance Counter List (exctrlst) Microsoft utility from the Microsoft Support website (http://support.microsoft.com/kb/927229) to determine whether the performance features are installed correctly on the remote system. On that page, scroll to the Extensible Performance Counter List (exctrlst.exe) entry.

A Microsoft TechNet article on how to use exctrlst is available in the Microsoft TechNet Library (http://technet.microsoft.com/en-us/library/cc737958.aspx).

Problem: A configured and running instance of the monitoring agent is not displayed in the Tivoli Enterprise Portal, but other instances of the monitoring agent on the same system are displayed.

Solution: IBM Tivoli Monitoring products use Remote Procedure Call (RPC) to define and control product behavior. RPC is the mechanism that a client process uses to make a subroutine call (such as GetTimeOfDay or ShutdownServer) to a server process somewhere in the network. Tivoli processes can be configured to use TCP/UDP, TCP/IP, SNA, or SSL as the protocol (or delivery mechanism) for RPCs.

IP.PIPE is the name given to Tivoli TCP/IP protocol for RPCs. The RPCs are socket-based operations that use TCP/IP ports to form socket addresses. IP.PIPE implements virtual sockets and multiplexes all virtual socket traffic across a single physical TCP/IP port (visible from the netstat command).

A Tivoli process derives the physical port for IP.PIPE communications based on the configured, well-known port for the hub Tivoli Enterprise Monitoring Server. (This well-known port or BASE_PORT is configured using the 'PORT:' keyword on the KDC_FAMILIES / KDE_TRANSPORT environment variable and defaults to '1918'.)

The physical port allocation method is defined as (BASE_PORT + 4096*N), where N=0 for a Tivoli Enterprise Monitoring Server process and N={1, 2, ..., 15} for any other IP.PIPE process. Two architectural limits result from this allocation method:

  • No more than one Tivoli Enterprise Monitoring Server reporting to a specific Tivoli Enterprise Monitoring Server hub can be active on a system image.

  • No more than 15 IP.PIPE processes can be active on a single system image.

A single system image can support any number of Tivoli Enterprise Monitoring Server processes (address spaces) if each Tivoli Enterprise Monitoring Server on that image reports to a different hub. Because, by definition, a monitoring enterprise has only one hub Tivoli Enterprise Monitoring Server, in practice this limit is one Tivoli Enterprise Monitoring Server per system image.

No more than 15 IP.PIPE processes or address spaces can be active on a single system image. Combined with the first limit, this second limitation in practice applies to Tivoli Enterprise Monitoring Agent processes: no more than 15 agents per system image.
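The (BASE_PORT + 4096*N) allocation can be checked with a little shell arithmetic; this is a minimal sketch assuming the default BASE_PORT of 1918.

```shell
# IP.PIPE physical port calculation: BASE_PORT + 4096*N.
# N=0 is the monitoring server; N=1..15 are other IP.PIPE processes.
BASE_PORT=1918
for N in 0 1 15; do
  echo "N=$N port=$(( BASE_PORT + 4096 * N ))"
done
# N=16 would require port 1918 + 4096*16 = 67454, which is beyond the
# 65535 TCP port maximum; that is why at most 15 such processes fit.
```

With the default base port, this prints 1918 (the monitoring server), 6014 (the first agent slot), and 63358 (the fifteenth and last).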


This limitation can be circumvented (at current maintenance levels, IBM Tivoli Monitoring V6.1 Fix Pack 4 and later) if the Tivoli Enterprise Monitoring Agent process is configured to use ephemeral IP.PIPE connections (that is, IP.PIPE configured with the 'EPHEMERAL:Y' keyword in the KDC_FAMILIES / KDE_TRANSPORT environment variable). There is no limit on the number of ephemeral IP.PIPE connections per system image.

If ephemeral endpoints are used, the Warehouse Proxy agent must be reachable from the Tivoli Enterprise Monitoring Server associated with those agents, either by running the Warehouse Proxy agent on the same computer as the monitoring server or by using the Firewall Gateway feature. (The Firewall Gateway feature relays the Warehouse Proxy agent connection from the Tivoli Enterprise Monitoring Server computer to the Warehouse Proxy agent computer when the two cannot coexist on the same computer.)
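As a sketch, an agent environment file might enable ephemeral pipes as shown below. The exact variable line depends on your existing KDC_FAMILIES / KDE_TRANSPORT settings, and the space-separated format here is illustrative, not a drop-in value; only the 'EPHEMERAL:Y' and 'PORT:' keywords themselves come from this guide.

```shell
# Illustrative fragment only: enable ephemeral IP.PIPE so the agent does
# not occupy one of the 15 well-known (BASE_PORT + 4096*N) ports.
export KDC_FAMILIES="IP.PIPE EPHEMERAL:Y PORT:1918"
```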

Problem: I cannot find my queries.

Solution: Agents that include subnodes display their queries under the element in the Query Editor list that represents the location of the attribute group. The queries are most often found under the name of the subnode, not the name of the agent.
Problem: No historical data is returned, including the startup entries that were previously displayed in the workspace.

Solution: Auditing does not support relaying subnode data. To see the historical data, you must choose nodes, not subnodes. A subnode in the Managed System Status workspace does not have a Tivoli Enterprise Monitoring Server name listed under Managing System.

Examples:

  • The Managing System for R4:icvr5d06:LNX is icvr5d06_LZ_icvw3d62:ICVW3D62:R4 (not a Tivoli Enterprise Monitoring Server), so this system is a subnode.

  • The Managing System for icvr5d06_LZ_icvw3d62:ICVW3D62:R4 is icvw3d62 (the hub Tivoli Enterprise Monitoring Server), so this system is a node.

After you distribute to the correct group, you can see the historical data that is saved in the Short term History (STH) file KRAAUDIT under %CANDLEHOME%/CMS.

You can trace the Tivoli Enterprise Monitoring Server log file with ERROR(UNIT: KFAAPHST) to see the AUDIT data saved in the STH.


Parent topic:

Problems and workarounds
