
MDB throttle settings for message-driven beans on z/OS

We can tune a variety of settings for the "MDB throttle" to control the amount of MDB work that the server processes at any given time.

The following sections describe concerns that affect the choice of MDB throttle settings.


MDB throttle - high and low thresholds

This topic assumes that you map a single listener port to any given destination. Although the throttle threshold values are calculated individually for each listener port, only a single queue agent thread exists for each destination, regardless of the number of listener ports. For example, when listener ports A and B are mapped to queue Q, if listener port B reaches its low threshold, it releases the throttle on the single queue agent thread for queue Q, even though listener port A might have reached its high threshold (and would therefore be blocking if the listener ports were mapped to separate destinations). In particular, you cannot map multiple listener ports to a single destination and set a high threshold of 1 on each listener port in order to process the message-driven beans on each listener port serially. Because of this restriction, it is strongly recommended that you map only a single listener port to each destination.

The MDB throttle support maintains a count of the current number of in-flight messages for each listener port.

When a message reference is sent to the MRH, the in-flight count is incremented by 1. Next, the in-flight count is compared against the high threshold value for this listener port. If the count has reached the high threshold, the queue agent thread for the destination blocks, and no further work records are queued onto the WLM queue until the throttle is released.

The in-flight count is decremented by 1 whenever the controller is notified that a work record for this listener port has completed (whether or not the application transaction was committed). After being decremented, the in-flight count is checked against the low threshold value for this listener port. If the in-flight count drops to the low threshold value, the previously blocked queue agent thread is woken (notified). At that point, new work records can be queued onto the WLM queue; that is, the throttle has been "released".
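The counting logic described above can be sketched as follows. This is a hypothetical single-threaded model, not WebSphere internals: the class and method names are illustrative, and the real queue agent thread actually blocks, which is modeled here by a simple flag.

```java
// Hypothetical model of the MDB throttle's in-flight accounting.
// Not a WebSphere API; the "throttled" flag stands in for a blocked
// queue agent thread.
class ThrottleCounter {
    private final int high;   // high threshold (Maximum sessions)
    private final int low;    // low threshold
    private int inFlight = 0;
    private boolean throttled = false;

    ThrottleCounter(int highThreshold, int lowThreshold) {
        this.high = highThreshold;
        this.low = lowThreshold;
    }

    // A message reference is sent to the MRH: increment the count,
    // then compare it against the high threshold.
    void messageQueued() {
        inFlight++;
        if (inFlight >= high) {
            throttled = true;   // stop queueing work records onto the WLM queue
        }
    }

    // The controller is notified that a work record completed,
    // whether or not the application transaction committed.
    void workRecordCompleted() {
        inFlight--;
        if (throttled && inFlight <= low) {
            throttled = false;  // throttle released; queueing resumes
        }
    }

    boolean isThrottled() { return throttled; }
    int inFlight() { return inFlight; }

    public static void main(String[] args) {
        ThrottleCounter c = new ThrottleCounter(4, 2);
        for (int i = 0; i < 4; i++) c.messageQueued();
        System.out.println(c.isThrottled());   // true: high threshold reached
        c.workRecordCompleted();
        c.workRecordCompleted();
        System.out.println(c.isThrottled());   // false: dropped to low threshold
    }
}
```

Note that the release check happens only after a completion notification, which is why a blocked throttle stays blocked until enough work records finish.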

The low threshold and high threshold values are set externally by one setting, the listener port Maximum sessions parameter. The high threshold value is set internally equal to the Maximum sessions value defined externally. The low threshold value is computed and set internally by the formula: low threshold = (high threshold / 2), with the value rounding down to the nearest integer.

However, if the Message Listener service is configured with the custom property MDB.THROTTLE.THRESHOLD.LOW.EQUALS.HIGH set to a value of "true", then the low threshold value is set internally to the high threshold value (which is the externally set Maximum sessions property of the listener port).
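The threshold arithmetic can be restated as a short sketch; the class and method names here are illustrative, not a WebSphere API:

```java
// Illustrative threshold arithmetic from the text; not a WebSphere API.
class ThrottleThresholds {
    // The high threshold equals the externally set Maximum sessions value.
    static int high(int maxSessions) {
        return maxSessions;
    }

    // low = high / 2, rounded down, unless the Message Listener service
    // custom property MDB.THROTTLE.THRESHOLD.LOW.EQUALS.HIGH is "true".
    static int low(int maxSessions, boolean lowEqualsHigh) {
        return lowEqualsHigh ? maxSessions : maxSessions / 2; // integer division rounds down
    }

    public static void main(String[] args) {
        System.out.println(low(7, false)); // 3: 7 / 2 rounds down
        System.out.println(low(7, true));  // 7: low equals high
    }
}
```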

One queue agent thread is established for each destination, rather than for each listener port. Therefore, if two listener ports are mapped onto the same queue, a throttle-blocking condition on one listener port also blocks the queueing of work records for the second listener port, even if the second listener port has not reached its high threshold value. For simplicity, do not share a destination across more than one listener port.


MDB throttle - tuning

You should set the listener port "Maximum sessions" value to twice the number of worker threads available in all servants in the scalable server (2*WT) so that, if we have an available worker thread in some servant, we do not leave it idle because of a blocked MDB throttle. That is, we do not want to have an empty WLM queue, an available servant worker thread, and a blocked throttle.

With the basic recommendation of using the 2*WT value, a blocked throttle is released at the moment the in-flight count drops to the low threshold of WT; that is, when there are exactly enough in-flight messages to occupy every servant worker thread, with none left waiting on the WLM queue.

Furthermore, by setting the high threshold to 2*(WT+N) we can ensure that, at the moment a servant worker thread frees up and releases the throttle, there is a backlog of N messages pre-processed and sitting on the WLM queue ready for dispatch. However, setting a very high value introduces the WLM queue overload problem that the throttle was introduced to avoid. Note that this scenario assumes that the queue (or topic) is fully-loaded with messages to be processed.

Therefore, raising the high threshold value allows the server to build a small backlog of preprocessed messages on the WLM queue if a workload spike occurs. However, raising the high threshold value also increases the chance that a work record for a given message might time out before the application can be dispatched with that message; that is, the server might reach the MDB Timeout limit. The message is eventually redelivered to the server, but only later, and the processing done up until that point is wasted. Also, a very large high threshold value would effectively bypass the MDB throttle function, in which case the WLM queue could be overloaded; this would cause the server to fail.
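The two tuning formulas above can be expressed directly. WT and N are the values from the text; the helper names are hypothetical:

```java
// Hypothetical helpers for the tuning formulas in the text.
class ThrottleTuning {
    // Baseline: Maximum sessions = 2 * WT keeps every servant worker
    // thread busy whenever messages are available.
    static int baseline(int workerThreads) {
        return 2 * workerThreads;
    }

    // With Maximum sessions = 2 * (WT + N), the low threshold becomes
    // WT + N, so when the throttle releases, WT messages are dispatched
    // in servants and a backlog of N preprocessed messages waits on the
    // WLM queue.
    static int withBacklog(int workerThreads, int backlog) {
        return 2 * (workerThreads + backlog);
    }

    public static void main(String[] args) {
        System.out.println(baseline(18));       // 36
        System.out.println(withBacklog(18, 32)); // 100
    }
}
```

The trade-off described in the text applies to the backlog parameter: a larger N improves spike absorption but raises the risk of work-record timeouts.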


MDB throttle - alternative tuning

Although the scalable server was designed with the goal of maximizing throughput, it is possible to use the listener port settings to achieve other workflow management goals.

For example, a high threshold setting of "1" guarantees that messages are processed in the order that they are received onto the destination.

There might also be other business reasons, based on capacity or other factors, to restrict a particular listener port to much less concurrency than the server would otherwise support. Although this is certainly a supported configuration, it might cause the throttle to block when there are idle worker threads available.


MDB throttle example

Suppose that your server is configured with the maximum server instances value set to 3 and a workload profile of IOBOUND. Because you have two CPUs, WebSphere Application Server creates six worker threads in each servant. Your application (a single MDB mapped to a queue) handles each message relatively quickly (so there is less risk of timeout), and you want the total time from the arrival of a given message on the MDB queue until the end of MDB dispatch for that message to be as small as possible.

To provide a quick response time for a surge in work, you opt for a bigger backlog. You set the Listener Port maximum sessions value to 100 = 2 * (3 * 6 + 32).

Any value greater than or equal to 36 (= 2 * 3 * 6) keeps all available servant worker threads busy. In practice, it is not critical to pick the best possible "backlog factor"; it is sufficient to make a good estimate and then round up to a convenient value. For example, in this case you might choose a value of 100.
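The example's arithmetic can be checked by working backwards from a chosen Maximum sessions value to the backlog it produces; the helper name is illustrative, and the numbers are the ones from the scenario above:

```java
// Numbers from the example: 3 servants x 6 worker threads (IOBOUND, 2 CPUs).
class ThrottleExample {
    // Backlog of preprocessed messages on the WLM queue at the moment the
    // throttle releases: the low threshold (maxSessions / 2) minus the
    // worker threads busy in servants.
    static int backlogAtRelease(int maxSessions, int workerThreads) {
        return maxSessions / 2 - workerThreads;
    }

    public static void main(String[] args) {
        int wt = 3 * 6;                                // WT = 18
        System.out.println(backlogAtRelease(36, wt));  // 0: 2*WT, no backlog
        System.out.println(backlogAtRelease(100, wt)); // 32: the chosen value
    }
}
```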


Related concepts

(zos) Tune listener ports for workload management on WebSphere Application Server for z/OS

(zos) Default messaging provider activation specifications and workload management on WebSphere Application Server for z/OS

(zos) Tune WebSphere MQ activation specifications for workload management on WebSphere Application Server for z/OS

(zos) The message-driven bean throttling mechanism on z/OS

(zos) Connection factory settings for ASF message-driven beans that use WebSphere MQ as the messaging provider on z/OS


Related tasks

(zos) Tune message-driven bean processing on z/OS by using WebSphere MQ as the messaging provider in ASF mode