Object Request Broker tuning guidelines
The Object Request Broker (ORB) is used whenever enterprise beans are accessed through a remote interface. If we experience particularly high or low CPU consumption, we might have a problem with the value of one of the following parameters. Examine these core tuning parameters for every application deployment.
Thread pool adjustments
Size
Tune the size of the ORB thread pool according to your workload. Avoid making the pool so large that threads sit idle with no work to process. A thread with no work ready consumes CPU time by calling the Object.wait method, which causes a context switch. Tune the thread pool size so that the length of time that threads wait is short enough to prevent them from being destroyed because they are idle too long.
The thread pool size is dependent on your workload and system. In typical configurations, applications need 10 or fewer threads per processor.
However, if the application performs very slow backend requests, such as requests to a database system, a server thread blocks while it waits for the backend request to complete. With backend requests, CPU use is fairly low, and increasing the load does not increase CPU use or throughput. Thread dumps indicate that nearly all the threads are in a call out to the backend resource. In this case, consider increasing the number of threads per processor until throughput improves and thread dumps show that the threads are in other areas of the run time besides the backend call. Adjust the number of threads only if the backend resource itself is tuned correctly.
The Allow thread allocation beyond maximum thread size parameter also affects thread pool size. Do not use this parameter unless the backend stops for long periods of time, causing all of the runtime threads to block while they wait for the backend system instead of processing other work that does not involve the backend system.
We can adjust the thread pool size settings in the administrative console. Click...
Servers > Server Types > Application servers > server > Container services > ORB service > Thread pool
We can adjust the minimum and maximum number of threads.
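The same settings can also be changed with a wsadmin (Jython) script instead of the administrative console. The following sketch assumes the standard ObjectRequestBroker configuration object and its threadPool attribute, and uses placeholder cell, node, server, and size values; verify the attribute names and choose sizes appropriate for your workload before running it.

    # Run inside wsadmin (Jython), where AdminConfig is predefined.
    # Locate the ORB service for the target server (placeholder names).
    server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
    orb = AdminConfig.list('ObjectRequestBroker', server)

    # The ORB thread pool is assumed to be referenced by the threadPool attribute.
    threadPool = AdminConfig.showAttribute(orb, 'threadPool')

    # Placeholder sizes; roughly 10 or fewer threads per processor is typical.
    AdminConfig.modify(threadPool, [['minimumSize', '10'], ['maximumSize', '50']])
    AdminConfig.save()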
Thread pool timeout
Each inbound and outbound request through the ORB requires a thread from the ORB thread pool. Under heavy load, or when ORB requests nest deeply, it is possible for a JVM to have all of the threads in its ORB thread pool attempting to send requests while the remote JVM ORB that processes those requests also has all of the threads in its ORB thread pool attempting to send requests. When this happens, no progress is made, threads are never released back to the ORB thread pool, the ORB cannot process requests, and a deadlock can result. Using the administrative console, we can adjust this behavior through the com.ibm.websphere.orb.threadPoolTimeout ORB custom property. See documentation about the Object Request Broker custom properties.
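The same custom property can be added with wsadmin instead of the administrative console. The following Jython sketch assumes the property is attached to the ObjectRequestBroker configuration object; the timeout value is only a placeholder, so check the Object Request Broker custom properties documentation for valid values and units before using it.

    # Run inside wsadmin (Jython), where AdminConfig is predefined.
    # Locate the ORB service (placeholder cell, node, and server names).
    server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
    orb = AdminConfig.list('ObjectRequestBroker', server)

    # Add the custom property; '10000' is a placeholder value.
    AdminConfig.create('Property', orb,
                       [['name', 'com.ibm.websphere.orb.threadPoolTimeout'],
                        ['value', '10000']])
    AdminConfig.save()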
Fragment size
The ORB separates messages into fragments to send over the ORB connection. We can configure this fragment size through the com.ibm.CORBA.FragmentSize parameter.
To determine and change the size of the messages that transfer over the ORB and the number of required fragments...
- In the administrative console, enable ORB tracing in the ORB Properties page.
- Enable ORBRas tracing from the logging and tracing page.
- Increase the trace file sizes because tracing can generate a lot of data.
- Restart the server and run at least one iteration (preferably several) of the case that we are measuring.
- Look at the trace file and search for Fragment to follow: Yes.
This message indicates that the ORB transmitted a fragment, but it still has at least one remaining fragment to send before the entire message arrives. A Fragment to follow: No value indicates that the particular fragment is the last in the entire message. This fragment can also be the first, if the message fits entirely into one fragment.
If we go to the spot where Fragment to follow: Yes is located, we find a block similar to the following example:
Fragment to follow: Yes  Message size: 4988 (0x137C) --  Request ID: 1411
This example indicates that the amount of data in the fragment is 4988 bytes and the Request ID is 1411. If we search for all occurrences of Request ID: 1411, we can see the number of fragments used to send that particular message. If we add all the associated message sizes, we have the total size of the message sent through the ORB; a script that automates this totaling follows these steps.
- We can configure the fragment size by setting the com.ibm.CORBA.FragmentSize ORB custom property.
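Rather than adding the fragment sizes by hand, a short script can total the Message size values for a given Request ID. The following Python sketch assumes the Message size and Request ID appear on the same trace line, as in the example above; adjust the pattern if your trace format differs.

    import re, sys

    # Usage: python sum_fragments.py trace.log 1411
    trace_file, request_id = sys.argv[1], sys.argv[2]

    # Match lines such as: Message size: 4988 (0x137C) --  Request ID: 1411
    pattern = re.compile(r'Message size:\s*(\d+).*Request ID:\s*' + re.escape(request_id) + r'\b')

    total = 0
    fragments = 0
    with open(trace_file) as f:
        for line in f:
            match = pattern.search(line)
            if match:
                total += int(match.group(1))
                fragments += 1

    print('Request ID %s: %d fragments, %d bytes total' % (request_id, fragments, total))

The com.ibm.CORBA.FragmentSize property itself can be added as an ORB custom property in the same way as the threadPoolTimeout sketch shown earlier.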
Interceptors
Interceptors are ORB extensions that can set up the context before the ORB runs a request. For example, the context might include transactions or activity sessions to import. If the client creates a transaction and then flows the transaction context to the server, the server imports the transaction context onto the server request through the interceptors.
Most clients do not start transactions or activity sessions, so most systems can benefit from removing the interceptors that are not required.
To remove the interceptors, manually edit the server.xml file and remove the interceptor lines that are not needed from the ORB section.
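As an alternative to editing server.xml by hand, the interceptor list can be inspected and trimmed with wsadmin. The following Jython sketch assumes the interceptors are held in the interceptors attribute of the ObjectRequestBroker object; the class name being removed is a placeholder, so list the entries first and remove only interceptors that we have confirmed are not required.

    # Run inside wsadmin (Jython), where AdminConfig is predefined.
    # Locate the ORB service (placeholder cell, node, and server names).
    server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
    orb = AdminConfig.list('ObjectRequestBroker', server)

    # List the configured interceptors; each entry is assumed to have a name attribute.
    entries = AdminConfig.showAttribute(orb, 'interceptors')
    interceptors = entries.replace('[', '').replace(']', '').split()
    for entry in interceptors:
        print(AdminConfig.showAttribute(entry, 'name'))

    # Remove an interceptor that is not needed (placeholder class name).
    for entry in interceptors:
        if AdminConfig.showAttribute(entry, 'name') == 'com.example.UnneededInterceptor':
            AdminConfig.remove(entry)
    AdminConfig.save()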
Connection cache adjustments
Depending on an application server's workload and its throughput or response-time requirements, we might need to adjust the size of the ORB connection cache. Each entry in the connection cache is an object that represents a distinct TCP/IP socket endpoint, identified by the hostname or TCP/IP address and the port number used by the ORB to send a GIOP request or a GIOP reply to the remote target endpoint. The purpose of the connection cache is to minimize the time required to establish a connection by reusing ORB connection objects for subsequent requests or replies. (The same TCP/IP socket is used for the request and corresponding reply.)
For each application server, the number of entries in the connection cache relates directly to the number of concurrent ORB connections. These connections consist of both the inbound requests made from remote clients and outbound requests made by the application server. When the server-side ORB receives a connection request, it uses an existing connection from an entry in the cache, or establishes a new connection and adds an entry for that connection to the cache.
The ORB Connection cache maximum and Connection cache minimum properties are used to control the maximum and minimum number of entries in the connection cache at a given time. When the number of entries reaches the value specified for the Connection cache maximum property and a new connection is needed, the ORB creates the requested connection, adds an entry to the cache, and searches for and attempts to remove up to five inactive connection entries from the cache. Because the new connection is added before the inactive entries are removed, the number of cache entries can temporarily exceed the value specified for the Connection cache maximum property.
An ORB connection is considered inactive if the TCP/IP socket stream is not in use and there are no GIOP replies pending for any requests made on that connection. As the application workload diminishes, the ORB closes the connections and removes the entries for these connections from the cache. The ORB continues to remove entries from the cache until the number of remaining entries is at or below the value specified for the Connection cache maximum property. The number of cache entries is never less than the value specified for the Connection cache minimum property, which must be at least five connections less than the value specified for the Connection cache maximum property.
Adjustments to the connection cache in the client-side ORB are usually not necessary because only a small number of connections are made on that side.
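The connection cache limits can also be adjusted with wsadmin. The following Jython sketch assumes the ObjectRequestBroker object exposes connectionCacheMaximum and connectionCacheMinimum attributes; the values are placeholders, and the minimum must stay at least five connections below the maximum, as described above.

    # Run inside wsadmin (Jython), where AdminConfig is predefined.
    # Locate the ORB service (placeholder cell, node, and server names).
    server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
    orb = AdminConfig.list('ObjectRequestBroker', server)

    # Placeholder values; keep the minimum at least five below the maximum.
    AdminConfig.modify(orb, [['connectionCacheMaximum', '240'],
                             ['connectionCacheMinimum', '100']])
    AdminConfig.save()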
JNI reader threads
By default, the ORB uses a Java thread for processing each inbound connection request it receives. As the number of concurrent requests increases, the storage consumed by a large number of reader threads increases and can become a bottleneck in resource-constrained environments. Eventually, the number of Java threads created can cause out-of-memory exceptions if the number of concurrent requests exceeds the system's available resources.
To help address this potential problem, we can configure the ORB to use JNI reader threads, where a fixed number of reader threads, implemented using native OS threads instead of Java threads, is created during ORB initialization. JNI reader threads rely on the native OS TCP/IP asynchronous mechanism, which enables a single native OS thread to handle I/O events from multiple sockets at the same time. The ORB manages the use of the JNI reader threads and assigns one of the available threads to handle each connection request, using a round-robin algorithm. Ordinarily, configure JNI reader threads only when using Java threads is too memory-intensive for the application environment.
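JNI reader threads are enabled through ORB custom properties. The following Jython sketch assumes com.ibm.CORBA.numJNIReaders controls the number of JNI reader threads; treat the property name, the thread count, and any additional settings needed to switch the ORB to JNI readers as assumptions to verify against the Object Request Broker custom properties documentation for your release.

    # Run inside wsadmin (Jython), where AdminConfig is predefined.
    # Locate the ORB service (placeholder cell, node, and server names).
    server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
    orb = AdminConfig.list('ObjectRequestBroker', server)

    # Assumed property name and placeholder thread count; verify before use.
    AdminConfig.create('Property', orb,
                       [['name', 'com.ibm.CORBA.numJNIReaders'],
                        ['value', '8']])
    AdminConfig.save()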
The number of JNI reader threads we should allocate for an ORB depends on many factors and varies significantly from one environment to another, depending on available system resources and workload requirements. The following potential benefits might be achieved if we use JNI threads:
- Because a fixed number of threads is allocated, memory usage is reduced. This reduction provides significant benefit in environments with unusually large and sustained client-request workloads.
- The time needed to dynamically create and destroy Java threads is eliminated because a fixed number of JNI threads is created and allocated during ORB initialization.
- Each JNI thread can handle up to 1024 socket connections and interacts directly with the asynchronous I/O native OS mechanism, which might provide enhanced performance of network I/O processing.
Related information:
- Tune the application serving environment
- Object Request Broker service settings
- Thread pool settings
- Object Request Broker custom properties