Tune transport channel services
The transport channel services manage client connections and I/O processing for HTTP and JMS requests. These I/O services are based on the non-blocking I/O (NIO) features that are available in Java, and they provide a highly scalable foundation for WebSphere Application Server request processing. Because a purely NIO-based architecture has limitations in terms of performance, scalability, and usability, the product also integrates true asynchronous I/O. This integration provides significant usability benefits, reduces the complexity of I/O processing, and reduces the amount of performance tuning that you must perform. Key features of the new transport channel services include:
- Scalability, which enables the product to handle many concurrent requests
- Asynchronous request processing, which provides a many-to-one mapping of client requests to web container threads
- Resource sharing and segregation, which enables thread pools to be shared between the web container and a messaging service
- Improved usability
- Incorporation of autonomic tuning and configuration functions
Changing the default values for settings on one or more of the transport channels associated with a transport chain can improve the performance of that chain.
Figure 1. Transport Channel Service
Procedure
- Adjust TCP transport channel settings. In the administration console, click...
Servers > Server Types > WebSphere application servers > server_name
> Ports. Then click View associated transports for the appropriate port.
- Select the transport chain whose properties you are changing.
- Click the TCP transport channel defined for that chain.
- Lower the value specified for the Maximum open connections property. This parameter controls the maximum number of connections that are available for a server to use. Leaving this parameter at the default value of 20000, which is the maximum number of connections, might lead to stalled websites under failure conditions, because the product continues to accept connections, thereby increasing the backlog of connections and associated work. Change the default to a significantly lower number, such as 500, and then perform additional tuning and testing to determine the optimal value for a specific website or application deployment. A scripted version of these TCP channel changes is sketched after this step.
- If client connections are being closed without data being written back to the client, change the value specified for the Inactivity timeout parameter. This parameter controls how long the TCP transport channel waits for data to arrive on a connection. After receiving a new connection, the TCP transport channel waits for enough data to arrive to dispatch the connection to the protocol-specific channels above the TCP transport channel. If not enough data is received during the time period specified for the Inactivity timeout parameter, the TCP transport channel closes the connection.
The default value for this parameter is 60 seconds. This value is adequate for most applications. Increase the value specified for this parameter if your workload involves many connections and all of these connections cannot be serviced in 60 seconds.
- Assign a thread pool to a specific HTTP port. Each TCP transport channel is assigned to a particular thread pool. Thread pools can be shared between one or more TCP transport channels as well as with other components. The default setting for a TCP transport channel assigns all HTTP-based traffic to the WebContainer thread pool and all other traffic to the Default thread pool. Use the Thread pool menu list to assign a different thread pool to a particular TCP transport channel. The information about thread pool collection describes how to create additional thread pools.
- Tune the size of your thread pools. By default, a thread pool can have a minimum of 10 threads and a maximum of 50 threads.
To adjust these values, click Thread pools > threadpool_name and adjust the values specified for the Minimum Size and Maximum Size parameters for that thread pool.
Typical applications usually do not need more than 10 threads per processor. One exception is if there is some off-server condition, such as a very slow backend request, that causes a server thread to wait for the backend request to complete. In such a case, processor usage is low and increasing the workload does not increase throughput. Thread memory dumps show nearly all threads in a callout to the backend resource. If this condition exists, and the backend is tuned correctly, try increasing the minimum number of threads in the pool until you see improvements in throughput and thread memory dumps show threads in other areas of the run time besides the backend call.
Do not change the setting for the Grow as needed parameter unless your backend is prone to hanging for long periods of time. This condition might indicate that all of your runtime threads are blocked waiting for the backend instead of processing other work that does not involve the hung backend.
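If you prefer to make these TCP channel and thread pool changes from a script instead of the console, the following wsadmin (Jython) sketch shows one possible approach. It is illustrative only: the cell, node, server, channel, and thread pool names are placeholders for your own topology, and the attribute names used here (maxOpenConnections, inactivityTimeout, minimumSize, maximumSize) should be confirmed for your release with AdminConfig.attributes('TCPInboundChannel') and AdminConfig.attributes('ThreadPool').

# Run inside wsadmin -lang jython, where AdminConfig is predefined.
# Cell, node, server, and channel names below are placeholders.
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')

# Locate the TCP inbound channel that belongs to the chain being tuned.
for tcp in AdminConfig.list('TCPInboundChannel', server).splitlines():
    if AdminConfig.showAttribute(tcp, 'name') == 'TCP_2':
        # Lower the connection ceiling from 20000 and keep the 60-second
        # inactivity timeout (attribute names assumed; verify on your release).
        AdminConfig.modify(tcp, [['maxOpenConnections', '500'],
                                 ['inactivityTimeout', '60']])

# Resize the WebContainer thread pool to the default 10-50 range as a
# starting point for further tuning.
for pool in AdminConfig.list('ThreadPool', server).splitlines():
    if AdminConfig.showAttribute(pool, 'name') == 'WebContainer':
        AdminConfig.modify(pool, [['minimumSize', '10'],
                                  ['maximumSize', '50']])

AdminConfig.save()

Transport channel configuration changes typically require a server restart before they take effect.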
- Adjust HTTP transport channel settings. In the administration console, click...
Servers > Server Types > WebSphere application servers > server_name
> Ports. Then click View associated transports for the appropriate port.
- Select the transport chain whose properties you are changing.
- Click the HTTP transport channel defined for that chain.
- Tune HTTP keep-alive.
The Use persistent (keep-alive) connections setting controls whether connections are left open between requests. Leaving the connections open can save setup and teardown costs of sockets if your workload has clients that send multiple requests. The default value is true, which is typically the optimal setting.
If your clients send only single requests over substantially long periods of time, it is probably better to disable this option and close the connections right away rather than have the HTTP transport channel set up timeouts to close the connection at some later time.
- Change the value specified for the Maximum persistent requests parameter to increase the number of requests that can flow over a connection before it is closed.
When the Use persistent connections option is enabled, the Maximum persistent requests parameter controls the number of requests that can flow over a connection before it is closed. The default value is 100. Set this parameter to a value such that most, if not all, clients always have an open connection when they make multiple requests during the same session. A proper setting for this parameter helps to eliminate unnecessary setting up and tearing down of sockets.
For test scenarios in which the client is never closed, a value of -1 disables the processing that limits the number of requests over a single connection. The persistent timeout still shuts down some idle sockets and protects your server from running out of open sockets.
- Change the value specified for the Persistent timeout parameter to increase the length of time that a connection is held open before being closed due to inactivity. The Persistent timeout parameter controls the length of time that a connection is held open before being closed because there is no activity on that connection. The default value is 30 seconds. Set this parameter to a value that keeps enough connections open so that most clients can obtain an available connection when they must make a request. A scripted version of these HTTP channel settings is sketched after this step.
- If clients are having trouble completing a request because it takes them more than 60 seconds to send their data, change the value specified for the Read timeout parameter. Some clients pause more than 60 seconds while sending data as part of a request.
To ensure that they are able to complete their requests, change the value specified for this parameter to a length of time in seconds that is sufficient for the clients to complete the transfer of data. Be careful when changing this value that you still protect the server from clients who send incomplete data and thereby use resources (sockets) for an excessive amount of time.
- If some of your clients require more than 60 seconds to receive data being written to them, change the value specified for the Write timeout parameter. Some clients are slow and require more than 60 seconds to receive data that is sent to them.
To ensure that they are able to obtain all of their data, change the value specified for this parameter to a length of time in seconds that is sufficient for all of the data to be received. Be careful when changing this value that you still protect the server from malicious clients.
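The HTTP channel settings in this step can be scripted in the same way. The sketch below is a minimal example, assuming an HTTP channel named HTTP_2 and the attribute names maximumPersistentRequests, persistentTimeout, readTimeout, and writeTimeout; verify both against your configuration with AdminConfig.attributes('HTTPInboundChannel') before using it.

# Run inside wsadmin -lang jython; AdminConfig is predefined there.
# Cell, node, server, and channel names are placeholders.
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')

for http in AdminConfig.list('HTTPInboundChannel', server).splitlines():
    if AdminConfig.showAttribute(http, 'name') == 'HTTP_2':
        # Allow more requests per persistent connection and give slow
        # clients two minutes to send or receive their data.
        AdminConfig.modify(http, [['maximumPersistentRequests', '500'],
                                  ['persistentTimeout', '30'],
                                  ['readTimeout', '120'],
                                  ['writeTimeout', '120']])

AdminConfig.save()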
- Adjust the web container transport channel settings. In the administration console, click...
Servers > Server Types > WebSphere application servers > server_name
> Ports. Then click View associated transports for the appropriate port.
- Select the transport chain whose properties must be changed.
- Click the web container transport channel defined for that chain.
- If multiple writes are required to handle responses to the client, change the value specified for the Write buffer size parameter to a value that is more appropriate for your clients. The Write buffer size parameter controls the maximum amount of response data per thread that the web container buffers before writing it back to the client. The default value is 32768 bytes, which is sufficient for most applications. If the size of a response is greater than the size of the write buffer, the response is chunked and written back in multiple TCP writes.
If you change the value specified for this parameter, make sure that the new value enables most requests to be written out in a single write. To determine an appropriate value for this parameter, look at the size of the pages that are returned and add some additional bytes to account for the HTTP headers. See the scripting sketch that follows this step.
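As with the other channels, the Write buffer size can be changed from a script. This sketch assumes a web container channel named WCC_2 and an attribute named writeBufferSize; confirm both with AdminConfig.attributes('WebContainerInboundChannel') for your release.

# Run inside wsadmin -lang jython; names are placeholders.
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')

for wcc in AdminConfig.list('WebContainerInboundChannel', server).splitlines():
    if AdminConfig.showAttribute(wcc, 'name') == 'WCC_2':
        # 64 KB buffer so that a typical page plus HTTP headers fits in
        # a single write instead of being chunked across multiple writes.
        AdminConfig.modify(wcc, [['writeBufferSize', '65536']])

AdminConfig.save()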
- Adjust the settings for the bounded buffer.
Even though the default bounded buffer parameters are optimal for most environments, you might want to change the default values in certain situations and for some operating systems to enhance performance. However, changing the bounded buffer parameters can also degrade performance. Therefore, make sure that you tune the other related areas, such as the web container and ORB thread pools, before deciding to change the bounded buffer parameters. A scripting sketch for setting these properties follows this step.
To change the bounded buffer parameters:
- In the administration console, click...
Servers > Server Types > WebSphere application servers > server_name
- Under Server Infrastructure, click Java and process management > Process definition > Java virtual machine.
- Click Custom properties.
- Enter one of the following custom properties in the Name field and an appropriate value in the Value field, and then click Apply to save the custom property and its setting.
- com.ibm.ws.util.BoundedBuffer.spins_take=value
Number of times a web container thread can attempt to retrieve a request from the buffer before the thread is suspended and enqueued. This parameter enables you to trade off the cost of performing possibly unsuccessful retrieval attempts against the cost of suspending a thread and activating it again in response to a put operation.
Default: The number of processors available to the operating system minus 1.
Recommended: Use any non-negative integer value. In practice, an integer from 2 to 8 yields the best performance results.
Usage: com.ibm.ws.util.BoundedBuffer.spins_take=6. Six attempts are made before the thread is suspended.
- com.ibm.ws.util.BoundedBuffer.yield_take=true or false
Specifies that a thread yields the processor to other threads after a set number of attempts to take a request from the buffer. Typically a lower number of attempts is preferable.
Default: false
Recommended: The effect of yield is implementation-specific for individual platforms.
Usage: com.ibm.ws.util.BoundedBuffer.yield_take=boolean value
- com.ibm.ws.util.BoundedBuffer.spins_put=value
Number of attempts an InboundReader thread makes to put a request into the buffer before the thread is suspended and enqueued. Use this value to trade off the cost of repeated, possibly unsuccessful, attempts to put a request into the buffer against the cost of suspending a thread and reactivating it in response to a take operation.
Default: The value of com.ibm.ws.util.BoundedBuffer.spins_take divided by 4.
Recommended: Use any non-negative integer value. In practice, an integer from 2 to 8 yields the best performance results.
Usage: com.ibm.ws.util.BoundedBuffer.spins_put=6. Six attempts are made before the thread is suspended.
- com.ibm.ws.util.BoundedBuffer.yield_put=true or false
Specifies that a thread yields the processor to other threads after a set number of attempts to put a request into the buffer. Typically a lower number of attempts is preferable.
Default: false
Recommended: The effect of yield is implementation-specific for individual platforms.
Usage: com.ibm.ws.util.BoundedBuffer.yield_put=boolean value
- com.ibm.ws.util.BoundedBuffer.wait=number of milliseconds
Maximum length of time, in milliseconds, that a request might unnecessarily be delayed if the buffer is completely full or if the buffer is empty.
Default: 10000 milliseconds
Recommended: A value of 10000 milliseconds usually works well. In rare instances when the buffer becomes either full or empty, a smaller value guarantees more timely handling of requests, but there is usually a performance impact to using a smaller value.
Usage: com.ibm.ws.util.BoundedBuffer.wait=8000. A request might unnecessarily be delayed for up to 8000 milliseconds.
- Click Apply and then click Save to save these changes.
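The bounded buffer custom properties can also be added with a wsadmin (Jython) script instead of the console. The sketch below is an illustration only: the cell, node, and server names are placeholders, the property values are examples, and the assumption that JVM custom properties live in the systemProperties collection of the JavaVirtualMachine configuration object should be verified for your release.

# Run inside wsadmin -lang jython; AdminConfig is predefined there.
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
jvm = AdminConfig.list('JavaVirtualMachine', server).splitlines()[0]

# Create each custom property as a Property object under the JVM
# (parent attribute name 'systemProperties' is an assumption; verify it).
for name, value in [('com.ibm.ws.util.BoundedBuffer.spins_take', '6'),
                    ('com.ibm.ws.util.BoundedBuffer.spins_put', '6')]:
    AdminConfig.create('Property', jvm,
                       [['name', name], ['value', value]],
                       'systemProperties')

AdminConfig.save()

Restart the server afterward so that the JVM picks up the new custom properties.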
Related
Thread pool collection