Determining optimum queue sizes
A simple way to determine the right queue size for any component is to perform a number of load runs against the application server environment with the queue sizes set very large, so that the queues themselves do not limit concurrency through the system.
For example, one approach would be:
Set the queue sizes for the Web server, Web container, and data source to an initial value, for example 100. Simulate a large number of typical user interactions entered by concurrent users in an attempt to fully load the WebSphere environment. In this context, "concurrent users" means simultaneously active users that each send a request, wait for the response, and immediately send a new request as soon as the response is received, with no think time. Use a stress tool to simulate this workload, such as OpenSTA (discussed in Testing the performance of an application) or the tools mentioned in Other testing tools.
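As an illustration only (not taken from the original text, and using a hypothetical target URL and counter objects), a minimal Java sketch of one such virtual user might look like the following; in practice a stress tool such as OpenSTA drives this same loop at scale:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of one "concurrent user": send a request, wait for the
// complete response, then immediately send the next request (no think time).
public class VirtualUser implements Runnable {
    private final URL target;                    // hypothetical test URL
    private final AtomicLong completed;          // shared count of completed requests
    private final AtomicLong totalLatencyMillis; // shared sum of response times
    private volatile boolean running = true;

    public VirtualUser(URL target, AtomicLong completed, AtomicLong totalLatencyMillis) {
        this.target = target;
        this.completed = completed;
        this.totalLatencyMillis = totalLatencyMillis;
    }

    public void stop() { running = false; }

    @Override
    public void run() {
        byte[] buffer = new byte[8192];
        while (running) {
            long start = System.currentTimeMillis();
            try {
                HttpURLConnection conn = (HttpURLConnection) target.openConnection();
                try (InputStream in = conn.getInputStream()) {
                    while (in.read(buffer) != -1) {
                        // drain the response so the request is fully served
                    }
                }
                completed.incrementAndGet();
                totalLatencyMillis.addAndGet(System.currentTimeMillis() - start);
            } catch (Exception e) {
                // a failed request is not counted toward throughput
            }
        }
    }
}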
Measure overall throughput and determine at what point the system capabilities are fully stressed (the saturation point). Repeat the process, each time increasing the user load. After each run, record the throughput (requests per second) and response times (seconds per request), and plot the throughput curve. The throughput of WAS is a function of the number of concurrent requests present in the total system. At some load point, congestion starts to develop because of a bottleneck, and throughput increases at a much lower rate until it reaches the saturation point (the maximum throughput value). The throughput curve should help you identify this load point.
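Building on the VirtualUser sketch above (again illustrative only, with a hypothetical target URL, load points, and run length), each load run can then be summarized as one point on the throughput curve:

import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Sweep increasing user loads against the same URL and print one row per run:
// concurrent users, throughput (requests/second), average response time (seconds/request).
public class ThroughputCurve {
    public static void main(String[] args) throws Exception {
        URL target = new URL("http://appserver.example.com/snoop"); // hypothetical URL
        int[] userLoads = {1, 5, 10, 25, 50, 100};                   // increasing load points
        long runMillis = 60_000;                                     // measurement interval per run

        for (int users : userLoads) {
            AtomicLong completed = new AtomicLong();
            AtomicLong latency = new AtomicLong();
            List<VirtualUser> vus = new ArrayList<>();
            List<Thread> threads = new ArrayList<>();

            for (int i = 0; i < users; i++) {
                VirtualUser vu = new VirtualUser(target, completed, latency);
                Thread t = new Thread(vu);
                vus.add(vu);
                threads.add(t);
                t.start();
            }

            Thread.sleep(runMillis);          // let the run reach steady state
            vus.forEach(VirtualUser::stop);   // then stop all virtual users
            for (Thread t : threads) {
                t.join();
            }

            // Approximate per-run metrics for this concurrency level.
            double seconds = runMillis / 1000.0;
            double throughput = completed.get() / seconds;
            double avgResponse = completed.get() == 0
                    ? 0.0 : (latency.get() / 1000.0) / completed.get();
            System.out.printf("%4d users  %8.1f req/s  %6.3f s/req%n",
                    users, throughput, avgResponse);
        }
    }
}

Each printed row supplies the data needed to plot throughput and response time against the number of concurrent users.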
It is desirable to reach the saturation point by driving CPU utilization close to 100%, since this gives an indication that the bottleneck is not caused by something in the application. If the saturation point occurs before system utilization reaches 100%, there is likely another bottleneck that is being aggravated by the application. For example, the application might be creating Java objects at a rate that causes excessive garbage collection.
Note: There are two ways to manage application bottlenecks: remove the bottleneck or replicate the bottleneck. The best way to manage a bottleneck is to remove it. You can use a Java-based application profiler, such as WebSphere Studio Application Developer (see Chapter 17, Development-side performance and analysis tools, for more information), Performance Trace Data Visualizer (PTDV), Optimizeit, JProbe, or Jinsight, to examine overall object utilization.
The most manageable type of bottleneck occurs when the CPUs of the servers become fully utilized. This type of bottleneck can be fixed by adding more CPUs or by using more powerful ones.
An example throughput curve is shown in Figure 19-6.
Figure 19-6 Throughput curve
In Figure 19-6, Section A covers the range of users that represents a light user load. The curve in this section shows that as the number of concurrent user requests increases, throughput increases almost linearly with the number of requests. You can interpret this to mean that, at light loads, concurrent requests face very little congestion within the WAS system queues.
In the heavy load zone or Section B, as the concurrent client load increases, throughput remains relatively constant. However, the response time increases proportionally to the user load. That is, if the user load is doubled in the heavy load zone, the response time doubles.
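This behavior follows from a standard queueing relationship (Little's Law, which is not cited in the original text but applies to this closed, zero think time workload): with N concurrent users, throughput X, and average response time R,

    N = X * R, and therefore R = N / X

In the heavy load zone X stays roughly constant, so R grows in direct proportion to N: doubling the user load doubles the response time.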
In Section C (the buckle zone), one or more of the system components have become exhausted and throughput starts to degrade. For example, the system might enter the buckle zone when the network connections at the Web server exhaust the limits of the network adapter, or when the requests exceed operating system limits for file handles.