
Troubleshooting performance problems

This topic describes how to troubleshoot performance problems and shows why solving a performance problem is an iterative process.

Review the Tuning parameter hot list page before reading this topic. Solving a performance problem is frequently an iterative process of:

1. Measuring system performance and collecting performance data
2. Locating a bottleneck
3. Eliminating a bottleneck

This process is often iterative because removing one bottleneck frequently shifts the constraint to another part of the system. For example, replacing slow hard disks with faster ones might shift the bottleneck to the CPU of a system.

Measuring system performance and collecting performance data

Locating a bottleneck

Consult the following scenarios and suggested solutions:

Eliminating a bottleneck

Consider the following methods to eliminate a bottleneck: reducing the demand for resources, increasing resources, improving workload distribution, and reducing synchronization. Each method is described below.

Reducing the demand for resources can be accomplished in several ways. Caching can greatly reduce the use of system resources by returning a previously cached response, thereby avoiding the work needed to construct the original response. Caching is supported at several points in the application-serving environment.
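
To illustrate the caching idea, the following minimal sketch shows an application-level response cache in plain Java. The class ResponseCache and the buildResponse method are hypothetical placeholders, not a WAS API; in practice you would typically rely on the caches the environment already provides, such as the WAS dynamic cache service.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Illustrative application-level cache: a hit returns the stored response
    // and skips the expensive construction step entirely.
    public class ResponseCache {
        private final ConcurrentMap<String, String> cache =
                new ConcurrentHashMap<String, String>();

        public String get(String key) {
            String cached = cache.get(key);
            if (cached != null) {
                return cached;                     // hit: no work to do
            }
            String built = buildResponse(key);     // expensive work on a miss
            String raced = cache.putIfAbsent(key, built);
            return raced != null ? raced : built;  // keep whichever entry won
        }

        // Placeholder for the costly work that the cache avoids on a hit.
        private String buildResponse(String key) {
            return "<response for " + key + ">";
        }
    }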

Application code profiling can reduce CPU demand by pointing out hot spots that you can optimize. IBM Rational and other companies provide tools to perform code profiling. An analysis of the application might reveal areas where some work can be reduced for some types of transactions.
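
Before running a full profiler, a crude timing check can confirm whether a suspected code path is actually a hot spot. The following sketch uses only standard JDK calls; processOrder is a hypothetical stand-in for application code.

    // Crude hot-spot check: time a suspected code path under repetition.
    public class HotSpotTimer {
        public static void main(String[] args) {
            long start = System.nanoTime();
            for (int i = 0; i < 10000; i++) {
                processOrder(i);                              // suspected hot spot
            }
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            System.out.println("10000 calls took " + elapsedMs + " ms");
        }

        // Hypothetical stand-in for real application work.
        private static void processOrder(int id) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100; i++) {
                sb.append(id).append(',');
            }
        }
    }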

Some resources can be increased by changing tuning parameters, for example, the number of file handles; others might require a hardware change, for example, more or faster CPUs or additional application servers. Key tuning parameters are described for each major WAS component to facilitate solving performance problems. Also, the performance advisors can provide advice on tuning a production system under a real or simulated load.
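
As a quick starting point, the JVM can report the CPU and heap resources it currently sees, which can help you decide whether a tuning change or a hardware change is warranted. This sketch uses only the standard Runtime API and is not a WAS tuning tool.

    // Snapshot of the resources visible to this JVM.
    public class ResourceSnapshot {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024L * 1024L;
            System.out.println("Available processors   : " + rt.availableProcessors());
            System.out.println("Max heap (MB)          : " + rt.maxMemory() / mb);
            System.out.println("Committed heap (MB)    : " + rt.totalMemory() / mb);
            System.out.println("Free in committed (MB) : " + rt.freeMemory() / mb);
        }
    }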

Workload distribution can affect performance when some resources are underutilized and others are overloaded. WAS workload management functions provide several ways to determine how the work is distributed. Workload distribution applies to both a single server and configurations with multiple servers and nodes.
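
The even-distribution idea can be pictured with a simple round-robin selector over a set of server endpoints. This is only a sketch of the concept; WAS workload management applies its own routing policies and weights, which are not reproduced here, and the member names are hypothetical.

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Round-robin selection: each call picks the next server in turn, so
    // requests spread evenly instead of piling onto one member.
    public class RoundRobinSelector {
        private final List<String> servers;
        private final AtomicInteger next = new AtomicInteger(0);

        public RoundRobinSelector(List<String> servers) {
            this.servers = servers;
        }

        public String pick() {
            // Mask keeps the index non-negative if the counter wraps around.
            int index = (next.getAndIncrement() & 0x7fffffff) % servers.size();
            return servers.get(index);
        }

        public static void main(String[] args) {
            RoundRobinSelector selector = new RoundRobinSelector(
                    Arrays.asList("memberA", "memberB", "memberC"));
            for (int i = 0; i < 6; i++) {
                System.out.println(selector.pick());
            }
        }
    }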

Some critical sections of the application and server code require synchronization to prevent multiple threads from running this code simultaneously and leading to incorrect results. Synchronization preserves correctness, but it can also reduce throughput when several threads must wait for one thread to exit the critical section. When several threads are waiting to enter a critical section, a thread dump shows these threads waiting in the same procedure. Synchronization can often be reduced by: changing the code to only use synchronization when necessary; reducing the path length of the synchronized code; or reducing the frequency of invoking the synchronized code.
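
The following sketch shows one of the techniques listed above, reducing the path length of the synchronized code: the same request handler appears first with the whole method synchronized, then with only the shared counter update inside the critical section. The class and method names are illustrative, and the two methods are alternatives shown for contrast, not meant to be mixed.

    // Narrowing a critical section so the expensive work runs outside it.
    public class RequestCounter {
        private final Object lock = new Object();
        private long completed = 0;

        // Coarse: the whole method is a critical section, so threads also
        // serialize on the expensive transform.
        public synchronized void handleRequestCoarse(String payload) {
            String result = expensiveTransform(payload);
            completed++;
            publish(result);
        }

        // Narrow: only the shared counter update is synchronized, so
        // concurrent threads overlap the expensive transform.
        public void handleRequestNarrow(String payload) {
            String result = expensiveTransform(payload);   // runs concurrently
            synchronized (lock) {
                completed++;                               // short critical section
            }
            publish(result);
        }

        // Hypothetical stand-ins for real application work.
        private String expensiveTransform(String payload) {
            return payload.toUpperCase();
        }

        private void publish(String result) {
            // write the response
        }
    }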


Related concepts:

PMI
Data you can collect with request metrics
Why use the performance advisors


Related tasks:


Why use request metrics?
Logging performance data with Tivoli Performance Viewer


Reference:

Tuning parameter hot list


Related information:

http://www.redbooks.ibm.com/abstracts/sg246392.html?Open

http://www-306.ibm.com/software/webservers/appserv/was/performance.html

http://www.spec.org/jAppServer2004/results/jAppServer2004.html

