Workload Management component troubleshooting tips

 

If the Workload Management component is not properly distributing the workload across servers in a multi-node configuration, use the following steps to isolate the problem.

There are some basic steps for troubleshooting the Workload Management component:

 

Eliminate environment or configuration issues

First, determine the health of the cluster. In other words, are the servers capable of serving the applications for which they have been enabled? To do this, identify the cluster that is exhibiting the problem.

  • Are there network connection problems with the members of the cluster or the administrative servers, for example, the deployment manager or node agents?

    • If so, ping the machines to ensure that they are properly connected to the network.

  • Is there other activity on the machines where the servers are installed that is impacting the servers' ability to service a request? For example, check the processor utilization, as measured by the task manager, processor ID, or some other outside tool, to see if:

    • It is not what is expected, or is erratic rather than constant.

    • It shows that a newly added, installed, or upgraded member of the cluster is not being utilized.

  • Are all of the application servers that you started on each node running, or are some stopped?

  • Are the applications installed and operating?

  • If the problem relates to distributing the workload across persistent CMP (or BMP) enterprise beans, have you configured the supporting JDBC drivers and datasources on each server? For problems relating to data access, review the topic Cannot access a datasource.

If you are experiencing workload management problems related to HTTP requests, such as HTTP requests not being served by all members of the cluster, be aware of how the HTTP plugin routes requests. If affinity has not been established, the plugin balances the load across all servers defined in the PrimaryServers list; if no PrimaryServers list is defined, it balances the load across all servers defined in the cluster. If affinity has been established, the plugin goes directly to that server for all requests.
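To make the decision order concrete, the following is a minimal Java sketch of the routing rules just described: established affinity first, then the PrimaryServers list, then all servers in the cluster. The class and method names are hypothetical and the selection among eligible servers is only a stand-in; this is not the plugin's actual implementation.

    import java.util.List;

    // Illustrative only: a simplified model of the routing order described above.
    public class PluginRoutingSketch {

        static String chooseTarget(String affinityServer,
                                   List<String> primaryServers,
                                   List<String> clusterMembers) {
            if (affinityServer != null) {
                // Affinity established: every request for this session goes to that server.
                return affinityServer;
            }
            if (primaryServers != null && !primaryServers.isEmpty()) {
                // No affinity and a PrimaryServers list is defined: balance across that list.
                return pickOne(primaryServers);
            }
            // No affinity and no PrimaryServers list: balance across all servers in the cluster.
            return pickOne(clusterMembers);
        }

        // Stand-in for the plugin's own (weighted) selection among eligible servers.
        static String pickOne(List<String> eligible) {
            return eligible.get((int) (Math.random() * eligible.size()));
        }
    }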

For workload management problems relating to enterprise bean requests, such as enterprise bean requests not getting served by all members of a cluster:

  • Are the weights set to the allowed values?

    • For the cluster in question, log onto the administrative console and:

      1. Select Cluster -> Manage cluster
      2. Select your cluster from the list.
      3. Select Cluster Members.
      4. For each server in the cluster, click the server name and note the weight assigned to that server.

    • Ensure that the weights are within the valid range of 0-20. If a server has a weight of 0, no requests will be routed to it. Weights greater than 20 are treated as 0. The sketch after this list shows how these rules translate into an expected share of requests per server.
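As a rough illustration, the following hedged Java sketch turns the rules above into an expected share of requests per cluster member (the member's weight divided by the sum of the weights, with weights greater than 20 treated as 0). The class and method names are hypothetical; this is only the arithmetic, not WebSphere code.

    // Illustrative sketch: expected fraction of requests per cluster member.
    public class WeightShareSketch {

        static double[] expectedShares(int[] weights) {
            int total = 0;
            int[] effective = new int[weights.length];
            for (int i = 0; i < weights.length; i++) {
                // Per the rules above: a weight of 0 gets no requests, and
                // weights greater than 20 are treated as 0.
                effective[i] = (weights[i] > 20) ? 0 : weights[i];
                total += effective[i];
            }
            double[] shares = new double[weights.length];
            for (int i = 0; i < weights.length; i++) {
                shares[i] = (total == 0) ? 0.0 : (double) effective[i] / total;
            }
            return shares;
        }

        public static void main(String[] args) {
            // Weights 5, 3, 2 give expected shares of 50%, 30%, and 20%.
            for (double s : expectedShares(new int[] {5, 3, 2})) {
                System.out.printf("%.0f%% ", s * 100);
            }
            System.out.println();
        }
    }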

Note: The remainder of this article deals with enterprise bean workload balancing only. For more help on diagnosing problems in distributing Web (HTTP) requests, view the topics HTTP plugin component troubleshooting tips and Web resource (JSP, servlet, HTML file, image, and so on) will not display.

 

Browse log files for WLM errors and CORBA minor codes

If you still encounter problems with enterprise bean workload management, the next step is to check the activity.log for entries that show:

  • A server that has been marked unusable more than once and remains unusable.
  • All servers in a cluster have been marked bad and remain unusable.
  • A Location Service Daemon (LSD) has been marked unusable more than once and remains unusable.

To do this, use the Log Analyzer tool to open the service log (activity.log) on the affected servers, and look for the following entries:

  • WWLM0061W: An error was encountered sending a request to cluster member member and that member has been marked unusable for future requests to the cluster cluster.

    Note: It is not unusual for a server to be marked unusable. The server may be tagged unusable for normal operational reasons, such as a ripple start being executed while there is still a load on the server from a client.

  • WWLM0062W: An error was encountered sending a request to cluster member member and that member has been marked unusable, for future requests to the cluster cluster, two or more times.

  • WWLM0063W: An error was encountered attempting to use the LSD LSD_name to resolve an object reference for the cluster cluster and has been marked unusable for future requests to that cluster.

  • WWLM0064W: Errors have been encountered attempting to send a request to all members in the cluster cluster and all of the members have been marked unusable for future requests to that cluster.

  • WWLM0065W: An error was encountered attempting to update a cluster member server in cluster cluster, as it was not reachable from the deployment manager.

If any of these warnings are encountered, follow the user response given in the log. If, after following the user response, the warnings persist, examine any other errors and warnings in the Log Analyzer output for the affected servers, looking for:

  • A possible user response, such as changing a configuration setting.
  • Base class exceptions that might indicate a WebSphere Application Server defect.

You may also see exceptions with "CORBA" as part of the exception name, since WLM uses CORBA (Common Object Request Broker Architecture) to communicate between processes. Look for a statement in the exception stack specifying a "minor code". These codes denote the specific reason a CORBA call or response could not complete. WLM minor codes fall in the range 0x4921040 - 0x492104F. For an explanation of the minor codes related to WLM, see the Javadoc for the class com.ibm.websphere.wlm.WsCorbaMinorCodes.
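If you need to distinguish WLM-related CORBA failures programmatically in client-side diagnostics, the minor code can be inspected directly. The following is a minimal sketch, assuming the failure surfaces as an org.omg.CORBA.SystemException (which exposes the minor code as a public field) and that the ORB classes are available in your runtime; the range constants simply restate the values given above.

    import org.omg.CORBA.SystemException;

    // Sketch: test whether a CORBA system exception carries a WLM minor code.
    // See the Javadoc for com.ibm.websphere.wlm.WsCorbaMinorCodes for the
    // meaning of the individual codes.
    public class WlmMinorCodeCheck {

        private static final int WLM_MINOR_LOW  = 0x4921040;
        private static final int WLM_MINOR_HIGH = 0x492104F;

        static boolean isWlmMinorCode(SystemException se) {
            // org.omg.CORBA.SystemException exposes the minor code as a public
            // int field named "minor".
            return se.minor >= WLM_MINOR_LOW && se.minor <= WLM_MINOR_HIGH;
        }
    }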

 

Analyze PMI data

The purpose for analyzing the PMI data is to understand the workload arriving for each member of a cluster. The data for any one member of the cluster is only useful within the context of the data of all the members of the cluster. To obtain PMI data for all members of a cluster, see Performance monitoring infrastructure.

Once you have obtained the PMI data, calculate each member's numIncomingRequests as a percentage of the total numIncomingRequests for all members of the cluster. Comparing this percentage with the percentage of the total weight assigned to each member provides an initial look at how the workload is balanced across the members of the cluster.

In addition to numIncomingRequests, two other metrics show how work is balanced among the members of a cluster: numIncomingStrongAffinityRequests and numIncomingNonWLMObjectRequests. These two metrics show the number of requests directed to a specific member of a cluster that could only be serviced by that member.
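The comparison described above is plain arithmetic. The following hedged Java sketch shows the calculation that the worked examples below step through; the metric values are entered by hand rather than read through the PMI API, and the class name is hypothetical.

    // Sketch: compare each member's actual share of incoming requests with the
    // share its weight predicts. Values are copied in by hand from PMI data.
    public class ClusterBalanceCheck {

        public static void main(String[] args) {
            String[] members  = {"Server1", "Server2", "Server3"};
            int[]    weights  = {5, 3, 2};        // assigned cluster member weights
            long[]   incoming = {390, 237, 157};  // numIncomingRequests per member

            long totalRequests = 0;
            int  totalWeight   = 0;
            for (int i = 0; i < members.length; i++) {
                totalRequests += incoming[i];
                totalWeight   += weights[i];
            }

            for (int i = 0; i < members.length; i++) {
                double expected = 100.0 * weights[i]  / totalWeight;
                double actual   = 100.0 * incoming[i] / totalRequests;
                System.out.printf("%s: expected %.1f%%, actual %.1f%%%n",
                                  members[i], expected, actual);
            }
        }
    }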

For example, consider a 3-server cluster. We have assigned the following weights to each of these three servers:

  • Server1 = 5
  • Server2 = 3
  • Server3 = 2

Allow our cluster of servers to start servicing requests, and wait for the system to reach a steady state, that is, the point at which the number of incoming requests to the cluster equals the number of responses from the servers. In such a situation, we would expect the percentage of requests routed to each server to be:

  • % routed to Server1 = weight1 / (weight1+weight2+weight3) = 5/10 or 50%
  • % routed to Server2 = weight2 / (weight1+weight2+weight3) = 3/10 or 30%
  • % routed to Server3 = weight3 / (weight1+weight2+weight3) = 2/10 or 20%

Now let us consider a case where there are no incoming requests with strong affinity and no non-WLM object requests.

In this scenario, let us assume that the PMI metrics gathered show the number of incoming requests for each server are:

  • numIncomingRequestsServer1 = 390
  • numIncomingRequestsServer2 = 237
  • numIncomingRequestsServer3 = 157

Thus, the total number of requests coming into the cluster is:

numIncomingRequestsCluster = numIncomingRequestsServer1 
                           + numIncomingRequestsServer2   
                           + numIncomingRequestsServer3 = 784

numIncomingStrongAffinityRequests = 0
numIncomingNonWLMObjectRequests = 0

Can we decide, based on this data, whether WLM is properly balancing the incoming requests among the servers in our cluster? Since there are no requests with strong affinity, the question we need to answer is: are the requests arriving in the ratios we expect, based on the assigned weights? The computation to answer that question is straightforward:

  • % (actual) routed to Server1 = 390 / 784 = 49.7%
  • % (actual) routed to Server2 = 237 / 784 = 30.2%
  • % (actual) routed to Server3 = 157 / 784 = 20.0%

So WLM is behaving as designed, as the data match what is expected based on the weights assigned to the servers.

Now let us consider a second 3-server cluster. This time we have assigned equal weights to the three servers:

  • Server1 = 5
  • Server2 = 5
  • Server3 = 5

Allow this cluster of servers to start servicing requests and wait for the system to reach a steady state, that is, the point at which the number of incoming requests to the cluster equals the number of responses from the servers. In such a situation, we would expect the percentage of requests routed to Server1, Server2, and Server3 to be:

  • % routed to Server1 = weight1 / (weight1+weight2+weight3) = 5/15 or 1/3 of the requests.
  • % routed to Server2 = weight2 / (weight1+weight2+weight3) = 5/15 or 1/3 of the requests.
  • % routed to Server3 = weight3 / (weight1+weight2+weight3) = 5/15 or 1/3 of the requests.

In this scenario, let us assume that the PMI metrics gathered show the number of incoming requests for each server are:

  • numIncomingRequestsServer1 = 1236
  • numIncomingRequestsServer2 = 1225
  • numIncomingRequestsServer3 = 1230

Thus, the totals for the cluster are:

  • numIncomingRequestsCluster = numIncomingRequestsServer1 + numIncomingRequestsServer2 + numIncomingRequestsServer3 = 3691
  • numIncomingStrongAffinityRequests = 445, all of which are directed to Server1.
  • numIncomingNonWLMObjectRequests = 0.

In this case, we see that the number of requests was not split exactly evenly among the three servers, as we would have expected. Instead, the distribution is:

  • % (actual) routed to Server1 = 1236 / 3691 = 33.49%
  • % (actual) routed to Server2 = 1225 / 3691 = 33.19%
  • % (actual) routed to Server3 = 1230 / 3691 = 33.32%

However, the correct interpretation of this data is that the routing of requests is not perfectly balanced because Server1 received several hundred strong affinity requests that only it could service. WLM attempts to compensate for strong affinity requests directed to one or more servers by preferentially distributing new incoming requests to the servers that are not participating in transactional affinity. In a case with both incoming requests with strong affinity and non-WLM object requests, the analysis is analogous.
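To see the compensation in the numbers above, separate the strong affinity requests from the portion of the load that WLM was free to balance. The following hedged Java sketch (class name hypothetical) does that subtraction for this example.

    // Sketch: split the second example's totals into the strong affinity
    // requests and the portion WLM balanced itself.
    public class AffinityCompensationSketch {

        public static void main(String[] args) {
            long server1Total = 1236, server2Total = 1225, server3Total = 1230;
            long affinityToServer1 = 445;   // strong affinity requests, all to Server1

            // Requests that WLM was free to balance (totals minus affinity requests).
            long balanced1 = server1Total - affinityToServer1;      // 791
            long balanced2 = server2Total;                          // 1225
            long balanced3 = server3Total;                          // 1230
            long balancedTotal = balanced1 + balanced2 + balanced3; // 3246

            // WLM routed roughly 24% of the balanced requests to Server1 and about
            // 38% to each of the other two servers, so that the totals, including
            // the 445 affinity requests, come out close to one third per server.
            System.out.printf("balanced shares: %.1f%% %.1f%% %.1f%%%n",
                              100.0 * balanced1 / balancedTotal,
                              100.0 * balanced2 / balancedTotal,
                              100.0 * balanced3 / balancedTotal);
        }
    }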

If, once you have analyzed the PMI data and accounted for transactional affinity and non-WLM object requests, the percentage of actual incoming requests to the servers in a cluster does not reflect the assigned weights, then requests are not being properly balanced. If this is the case, repeat the steps described above for eliminating environment and configuration issues and browsing log files before proceeding.

 

Resolve problem or contact IBM support

If the PMI data or client logs indicate an error in WLM, collect the following information and contact IBM support.

  • A detailed description of your environment.

  • A description of the symptoms.

  • The SystemOut.log and SystemErr.log files for all servers in the cluster.

  • The activity.log.

  • The First Failure Data Capture logs.

  • The PMI metrics.

  • A description of what the client is attempting to do, and a description of the client, for example, one thread, multiple threads, a servlet, a J2EE client, and so on.

If none of these steps solves the problem, check to see if the problem has been identified and documented using the links in Diagnosing and fixing problems: Links. If you do not see a problem that resembles yours, or if the information provided does not solve your problem, contact IBM support for further assistance.



 
