Tune TCP/IP buffer sizes
WebSphere Application Server uses the TCP/IP sockets communication mechanism extensively. For a TCP/IP socket connection, the send and receive buffer sizes define the receive window. The receive window specifies the amount of data that can be sent and not yet acknowledged before the sender must pause. If too much data is sent, it overruns the buffer and interrupts the transfer. The mechanism that controls data transfer interruptions is referred to as flow control. If the receive window size for TCP/IP buffers is too small, the receive window buffer is frequently overrun, and the flow control mechanism stops the data transfer until the receive buffer has been emptied.
Flow control can consume a significant amount of CPU time and result in additional network latency as a result of data transfer interruptions. IBM recommends that you increase buffer sizes to avoid flow control under normal operating conditions. A larger buffer size reduces the potential for flow control to occur, and results in improved CPU utilization. However, a large buffer size can have a negative effect on performance in some cases. If the TCP/IP buffers are too large and applications are not processing data fast enough, paging can increase. The goal is to specify a value large enough to avoid flow control, but not so large that the buffer accumulates more data than the system can process.
The default buffer size is 8 KB. The maximum size is 8 MB (8192 KB). The optimal buffer size depends on several network environment factors, including the types of switches and systems, acknowledgment timing, error rates, network topology, memory size, and data transfer size. When the data transfer size is extremely large, we might want to set the buffer sizes up to the maximum value to improve throughput, reduce the occurrence of flow control, and reduce CPU cost.
Buffer sizes for the socket connections between the web server and WAS are set at 64 KB. In most cases this value is adequate.
Flow control can be an issue when an application uses either the IBM Developer Kit for Java(TM) JDBC driver or the IBM Toolbox for Java JDBC driver to access a remote database. If the data transfers are large, flow control can consume a large amount of CPU time. If we use the IBM Toolbox for Java JDBC driver, we can use custom properties to configure the buffer sizes for each data source. IBM recommends specifying large buffer sizes, for example, 1 MB.
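For example, with the IBM Toolbox for Java JDBC driver, the buffer sizes can be supplied as the send buffer size and receive buffer size connection properties, either as data source custom properties or directly on the connection URL. A minimal sketch, assuming a hypothetical host name and illustrative 1 MB values:

jdbc:as400://myiseries.example.com;send buffer size=1048576;receive buffer size=1048576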
Some system-wide settings can override the default 8 KB buffer size for sockets. With some applications, for example, WebSphere Commerce Suite, a buffer size of 180 KB reduces flow control and typically does not adversely affect paging. The optimal value is dependent on specific system characteristics. We might need to try several values before you determine the ideal buffer size for the system.
(AIX) See "TCP/IP network settings" in Running IBM WebSphere Application Server on System p and AIX: Optimization and Best Practices. In addition, see TCP streaming workload tuning.
(Linux) See Linux Tuning.
(Windows) For information on tuning TCP/IP buffer sizes, see the Windows 2000 and Windows Server 2003 TCP Features document. Consider setting the TcpWindowSize value to either 8388608 or 16777216.
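For example, TcpWindowSize is a REG_DWORD value under the TCP/IP service parameters registry key. A minimal sketch, using the smaller of the two suggested values (receive windows larger than 64 KB also require TCP window scaling, which the Tcp1323Opts value enables); a reboot is required for the change to take effect:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
    TcpWindowSize  REG_DWORD  8388608
    Tcp1323Opts    REG_DWORD  1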
(ZOS) TCP/IP can be the source of significant remote method delays.
Tasks
(iSeries) (ZOS) To change the system-wide value, complete the steps for your platform:
- (iSeries) Tune the TCP/IP buffer sizes.
- Change the TCP/IP configuration.
- Run the Change TCP/IP Attributes (CHGTCPA) command.
- View and change the buffer sizes by pressing F4 on the Change TCP/IP Attributes display. The buffer sizes are displayed as the TCP receive and send buffer sizes. Type new values and save the changes. (A sample command follows these steps.)
- Recycle TCP/IP, then monitor CPU and paging rates to determine if they are within recommended system guidelines.
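For example, the buffer sizes can also be changed noninteractively by supplying the parameters on the CHGTCPA command itself. A minimal sketch, with illustrative 1 MB values:

CHGTCPA TCPRCVBUF(1048576) TCPSNDBUF(1048576)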
- (ZOS) Tune the TCP/IP buffer sizes.
- First, ensure that we have defined enough sockets to the system and that the default socket timeout of 180 seconds is not too high. To allow enough sockets, update the BPXPRMxx parmlib member:
- Set MAXSOCKETS for the AF_INET filesystem high enough.
- Set MAXFILEPROC high enough.
We should set MAXSOCKETS and MAXFILEPROC to at least 5000 for low-throughput, 10000 for medium-throughput, and 35000 for high-throughput WebSphere transaction environments. Setting high values for these parameters should not cause excessive use of resources unless the sockets or files are actually allocated.
Example:

/* Open/MVS Parmlib Member                                        */
/* CHANGE HISTORY:                                                */
/*   01/31/02 AEK Increased MAXSOCKETS on AF_UNIX from 10000      */
/*                to 50000 per request from My Developer          */
/*   10/02/01 JAB Set up shared HFS                               */
/* KERNEL RESOURCES          DEFAULT              MIN  MAX        */
/* ========================  ===================  ===  ========== */
   .
   .
MAXFILEPROC(65535)           /* 64                 3    65535 */
   .
   .
NETWORK DOMAINNAME(AF_INET)
        DOMAINNUMBER(2)
        MAXSOCKETS(30000)
   .
- Next, check the specification of the port in the TCPIP profile dataset to ensure that NODELAYACKS is specified, as follows:
PORT 8082 TCP NODELAYACKS
In test runs, this change improved throughput by as much as 50%, particularly with trivial workloads. This setting is also important for good performance when running SSL.
- We should ensure that the DNS configuration is optimized so that lookups for frequently used servers and clients are cached.
Caching is sometimes related to the name server's Time To Live (TTL) value. Setting the TTL high ensures good cache hits; however, it also means that if the Daemon goes down, it takes a while for everyone in the network to become aware of it.
A good way to verify that the DNS configuration is optimized is to issue the oping and onslookup USS commands and make sure that they respond in a reasonable amount of time. A misconfigured DNS or DNS server name often causes delays of 10 seconds or more.
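For example (the host name is hypothetical; substitute the names of frequently used servers and clients):

oping mydaemonhost.example.com
onslookup mydaemonhost.example.com

If either command takes several seconds to respond, revisit the resolver configuration before tuning anything else.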
- Increase the size of the TCP/IP send and receive buffers from the default of 16 KB to at least 64 KB. This size includes the control information that accompanies the application data being sent. To do this, specify the following:
TCPCONFIG TCPSENDBFRSIZE 65535 TCPRCVBUFRSIZE 65535
It is not unreasonable, in some cases, to specify buffers as large as 256 KB.
- Increase the default listen backlog.
The listen backlog is used to buffer spikes in new connections, which are common with a protocol like HTTP. The default listen backlog is 10 requests. We should use the TCP transport channel listenBacklog custom property to increase this value. For example:
listenBacklog=100
(ZOS) Note: The value we use for listenBacklog can be limited by the SOMAXCONN statement in the TCP/IP profile. If we specify a listenBacklog value greater than the SOMAXCONN value, the listenBacklog value is not used; the SOMAXCONN value is used instead.
IMPORTANT: If listenBacklog is not set for the HTTP, HTTP SSL, IIOP, and IIOP SSL channel types, the listenBacklog value is taken from the deprecated environment variables protocol_http_backlog, protocol_https_backlog, protocol_iiop_backlog, and protocol_iiop_backlog_ssl. If the associated deprecated environment variable is not specified, a default of 10 is used.
For channel types that are not HTTP, HTTP SSL, IIOP and IIOP SSL, the default for listenBacklog is 511.
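For example, if the stack-level limit is what is capping the backlog, the SOMAXCONN statement in the TCPIP profile can be raised to match the channel setting. A minimal sketch, with an illustrative value:

SOMAXCONN 511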
- Reduce the finwait2 time.
In the most demanding benchmarks, you might find that even defining 65 K sockets and file descriptors does not provide enough free sockets to run at 100%. When a socket is closed (for example, when it is no longer needed), it is not made available again immediately. Instead, it is placed in a state called finwait2 (this is what shows up in the netstat -s command output) and waits there for a period of time before it is returned to the free pool. The default wait time is 600 seconds.
Unless we are running out of sockets, we should leave this set to the default value. If we are using z/OS Version 1.2 or later, we can control the amount of time that a socket stays in the finwait2 state by specifying the following in the TCP/IP configuration file:
FINWAIT2TIME 60
What to do next
The TCP/IP buffer sizes are changed. Repeat this process until you determine the ideal buffer size.
(iSeries) For more information about TCP/IP performance, see Chapter 5 of the Performance Capabilities Reference. Links to several editions of the Performance Capabilities Reference are in the Performance Management Resource Library.