Design and performance considerations for z/OS applications

Application design is one of the most important factors affecting performance. Use this topic to understand some of the design factors involved in performance.

There are a number of ways in which poor program design can affect performance. These problems can be difficult to detect because the program can appear to perform well while degrading the performance of other tasks. Several problems specific to programs making MQI calls are described in the following sections.

For more information about application design, see Design considerations for IBM MQ applications.


Effect of message length

Although IBM MQ for z/OS allows messages to hold up to 100 MB of data, the amount of data in a message affects the performance of the application that processes the message. To achieve the best performance from the application, send only the essential data in a message. For example, in a request to debit a bank account, the only information that might need to be passed from the client to the server application is the account number and the amount to debit.


Effect of message persistence

Persistent messages are logged. Logging messages reduces the performance of the application, so use persistent messages for essential data only. If the data in a message can be discarded if the queue manager stops or fails, use a nonpersistent message.

Data for persistent messages is written to log buffers. These buffers are written to the log data sets when:

  • A commit occurs
  • A message is got or put out of syncpoint
  • WRTHRSH buffers are filled

Processing many messages in one unit of work can result in less input/output than processing the messages one per unit of work, or out of syncpoint.
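
For example, the following outline C fragment puts a batch of persistent messages inside a single unit of work and commits once, so the log buffers are forced at the commit rather than at every put. The hConn and hObj handles and the putBatch function name are assumptions for illustration; the handles would be obtained earlier with MQCONN and MQOPEN.

    #include <cmqc.h>

    /* Put a batch of persistent messages in one unit of work and
       commit once.  hConn and hObj are assumed to have been obtained
       with MQCONN and MQOPEN(..., MQOO_OUTPUT, ...).                 */
    void putBatch(MQHCONN hConn, MQHOBJ hObj,
                  char *msgs[], MQLONG lengths[], int count)
    {
      MQMD   md  = {MQMD_DEFAULT};
      MQPMO  pmo = {MQPMO_DEFAULT};
      MQLONG compCode, reason;
      int i;

      md.Persistence = MQPER_PERSISTENT;   /* messages are logged     */
      pmo.Options |= MQPMO_SYNCPOINT       /* puts are inside the UOW */
                   | MQPMO_NEW_MSG_ID;     /* new MsgId for each put  */

      for (i = 0; i < count; i++)
      {
        MQPUT(hConn, hObj, &md, &pmo, lengths[i], msgs[i],
              &compCode, &reason);
        if (compCode == MQCC_FAILED)
          break;
      }

      /* One commit for the whole batch: the log is forced here,
         not once per message as it would be out of syncpoint.   */
      MQCMIT(hConn, &compCode, &reason);
    }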


Searching for a particular message

The MQGET call typically retrieves the first message from a queue. If you use the message and correlation identifiers (MsgId and CorrelId) in the message descriptor to specify a particular message, the queue manager searches the queue until it finds that message. Using MQGET in this way affects the performance of the application because, to find a particular message, IBM MQ might have to scan the entire queue.
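
For example, the following outline C function gets the message with a known correlation identifier. The hConn and hObj handles and the getByCorrelId function name are assumptions for illustration.

    #include <cmqc.h>
    #include <string.h>

    /* Get the message whose CorrelId matches the 24-byte identifier
       supplied by the caller.  hConn and hObj are assumed valid.    */
    MQLONG getByCorrelId(MQHCONN hConn, MQHOBJ hObj, MQBYTE24 correlId,
                         PMQVOID buffer, MQLONG bufLen, PMQLONG pDataLen)
    {
      MQMD   md  = {MQMD_DEFAULT};
      MQGMO  gmo = {MQGMO_DEFAULT};
      MQLONG compCode, reason;

      gmo.Version      = MQGMO_VERSION_2;      /* MatchOptions needs V2 */
      gmo.MatchOptions = MQMO_MATCH_CORREL_ID; /* match CorrelId only   */
      gmo.Options      = MQGMO_WAIT | MQGMO_NO_SYNCPOINT;
      gmo.WaitInterval = 5000;                 /* wait up to 5 seconds  */

      memcpy(md.CorrelId, correlId, sizeof(md.CorrelId));

      MQGET(hConn, hObj, &md, &gmo, bufLen, buffer, pDataLen,
            &compCode, &reason);
      return reason;                           /* MQRC_NONE on success  */
    }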

You can use the IndexType queue attribute to specify that the queue manager maintain an index that increases the speed of MQGET operations on the queue. However, maintaining an index incurs a small performance cost, so build one only if you need it. You can build an index of message identifiers or of correlation identifiers, or choose not to build an index for queues where messages are retrieved sequentially. Try to use many different key values rather than many messages with the same value: for example, Balance1, Balance2, and Balance3 rather than three messages keyed Balance. For shared queues, you must have the correct IndexType. For details of the IndexType queue attribute, see IndexType.

To prevent queue index building from delaying queue manager restart, use the QINDXBLD(NOWAIT) parameter in the CSQ6SYSP macro. This allows queue manager restart to complete without waiting for queue index building to finish.

For a full description of the IndexType attribute, and other object attributes see Attributes of objects.


Queues that contain messages of different lengths

Get a message using a buffer size that matches the expected size of the message. If you receive the return code indicating that the message is too long, get a bigger buffer. When the get fails in this way, the data length returned is the size of the unconverted message data. If you specify MQGMO_CONVERT on the MQGET call and the data expands during conversion, it still might not fit in the buffer, in which case you need to increase the size of the buffer further.

If you issue the MQGET with a buffer length of zero, it returns the size of the message, and the application can then obtain a buffer of this size and reissue the get. However, if multiple applications are processing the queue, another application might have already processed the message by the time the original application reissues the get. If you occasionally have large messages, you might need to get a large buffer just for these messages and release it after the message has been processed; this helps avoid the virtual storage problems that can arise if all applications permanently hold large buffers.
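
The following outline C function shows the retry pattern: get with a modest buffer and, if the message is too long, grow the buffer to the returned data length and reissue the get. The hConn and hObj handles and the 4096-byte initial size are assumptions for illustration.

    #include <cmqc.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns the message in a malloc'd buffer (caller frees),
       or NULL on failure.  *pDataLen is set to the data length. */
    char *getWithResize(MQHCONN hConn, MQHOBJ hObj, PMQLONG pDataLen)
    {
      MQMD   md  = {MQMD_DEFAULT};
      MQGMO  gmo = {MQGMO_DEFAULT};
      MQLONG compCode, reason;
      MQLONG bufLen = 4096;              /* expected message size */
      char  *buffer = malloc(bufLen);

      if (buffer == NULL)
        return NULL;

      MQGET(hConn, hObj, &md, &gmo, bufLen, buffer, pDataLen,
            &compCode, &reason);

      if (reason == MQRC_TRUNCATED_MSG_FAILED)
      {
        /* *pDataLen holds the unconverted message length; get a
           bigger buffer.  Reset MsgId and CorrelId (filled in by
           the failed get) so the reissued get takes the first
           available message again.                              */
        char *bigger = realloc(buffer, *pDataLen);
        if (bigger == NULL) { free(buffer); return NULL; }
        buffer = bigger;
        bufLen = *pDataLen;
        memcpy(md.MsgId,    MQMI_NONE, sizeof(md.MsgId));
        memcpy(md.CorrelId, MQCI_NONE, sizeof(md.CorrelId));
        MQGET(hConn, hObj, &md, &gmo, bufLen, buffer, pDataLen,
              &compCode, &reason);
      }

      if (compCode == MQCC_FAILED) { free(buffer); return NULL; }
      return buffer;
    }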

If the application cannot use messages of a fixed length, another solution to this problem is to use the MQINQ call to find the maximum size of messages that the queue can accept, then use this value in your MQGET call. The maximum size of messages for a queue is stored in the MaxMsgL attribute of the queue. This method could use large amounts of storage, however, because the value of MaxMsgL could be as high as 100 MB, the maximum allowed by IBM MQ for z/OS. Note: The MaxMsgL attribute can be lowered after large messages have been put to the queue. For example, a 100 MB message can be put and MaxMsgL then set to 50 bytes, so it is still possible to get messages bigger than the application expects.
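
An outline of the MQINQ approach in C follows. The queue must have been opened with the MQOO_INQUIRE option; hConn, hObj, and the function name are assumptions for illustration.

    #include <cmqc.h>

    /* Return the queue's maximum message length (MaxMsgL), or -1 on
       error, so that a buffer of that size can be used on MQGET.    */
    MQLONG inquireMaxMsgLength(MQHCONN hConn, MQHOBJ hObj)
    {
      MQLONG selector = MQIA_MAX_MSG_LENGTH;
      MQLONG maxMsgL  = 0;
      MQLONG compCode, reason;

      MQINQ(hConn, hObj,
            1, &selector,       /* one integer selector         */
            1, &maxMsgL,        /* one integer attribute result */
            0, NULL,            /* no character attributes      */
            &compCode, &reason);

      return (compCode == MQCC_OK) ? maxMsgL : -1;
    }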


Frequency of syncpoints

Programs that issue many MQPUT calls within syncpoint, without committing them, can cause performance problems. Affected queues can fill up with messages that are currently unusable, while other tasks might be waiting to get these messages. This has implications in terms of storage, and in terms of threads tied up with tasks that are attempting to get messages.

As a rule, if you have multiple applications processing a queue, you typically get the best performance when you have either:

  • 100 short messages (less than 1 KB), or
  • one message for larger messages (100 KB)

for each syncpoint. If there is only one application processing the queue, you must have more messages for each unit of work.

You can limit the number of messages that a task can get or put within a single unit of recovery with the MAXUMSGS queue manager attribute. For information about this attribute, see the ALTER QMGR command in MQSC commands.


Advantages of the MQPUT1 call

Use the MQPUT1 call only if you have a single message to put on a queue. To put more than one message, use the MQOPEN call, followed by a series of MQPUT calls and a single MQCLOSE call.
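
For example, the following outline C function puts several messages with one open and one close, which is cheaper than one MQPUT1 per message. The queue name APP.TARGET.QUEUE and the putMany function name are assumptions for illustration.

    #include <cmqc.h>
    #include <string.h>

    /* Open once, put many, close once - more efficient than
       issuing MQPUT1 repeatedly for a series of messages.    */
    void putMany(MQHCONN hConn, char *msgs[], MQLONG lengths[], int count)
    {
      MQOD   od  = {MQOD_DEFAULT};
      MQMD   md  = {MQMD_DEFAULT};
      MQPMO  pmo = {MQPMO_DEFAULT};
      MQHOBJ hObj;
      MQLONG compCode, reason;
      int i;

      strncpy(od.ObjectName, "APP.TARGET.QUEUE", MQ_Q_NAME_LENGTH);
      pmo.Options |= MQPMO_NEW_MSG_ID;        /* new MsgId per put */

      MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING,
             &hObj, &compCode, &reason);

      for (i = 0; i < count && compCode == MQCC_OK; i++)
        MQPUT(hConn, hObj, &md, &pmo, lengths[i], msgs[i],
              &compCode, &reason);

      MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
    }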


How many messages can a queue manager contain

    Local Queues

    The number of local messages a queue manager can hold is determined essentially by the size of the page sets. You can have up to 100 page sets (although it is recommended that page set 0 and page set 1 are reserved for system-related objects and queues). You can use extended-format page sets to increase the capacity of a page set.

    Shared Queues

    The capacity of shared queues depends on the size of the coupling facility (CF). IBM MQ uses CF list structures, whose fundamental storage units are entries and elements. Each message is stored as one entry plus multiple elements containing the associated MQMD and other message data. The number of elements consumed by a single message depends on the size of the message and, for CFLEVEL(5), the offload rules in effect at MQPUT time. Fewer elements are needed when message data is offloaded to either Db2 or SMDS, but access to the message data is slower when it has been offloaded. See the performance SupportPac MP1H for a further comparison of the performance and CPU cost associated with message offload.


What affects performance

Performance can mean how fast messages can be processed, and it can also mean how much CPU is needed per message.

    What affects how fast messages can be processed

    For persistent messages the biggest impact is the speed of the log data sets, which depends on the DASD they are on. Therefore take care to put log data sets on lightly used volumes to reduce contention. Striping the MQ logs improves log performance when there are multiple pages written per I/O. IBM Z High Performance FICON (zHPF) also significantly improves I/O response time when the I/O subsystem is busy.

    When there is a request to get or put a message, access to the queue is locked during the request to preserve the integrity of the queue. For planning purposes, consider the queue locked for the whole request. So if the time for a put is 100 microseconds and you have more than 10,000 requests a second, you might experience delays. You might achieve better than this in practice, but it is a good general rule. You can use multiple queues to improve performance.

    For example, when designing reply-to queues for CICS transactions, you could:

    • use a common reply queue that every CICS transaction uses
    • give each CICS transaction a unique reply-to queue
    • use one reply-to queue per CICS region, with all transactions in that region using this queue.

    The best choice depends on the number of requests a second, and the response time of the requests.

    Messages that have to be read from a page set are slower to get than messages in the buffer pool. If there are more messages than fit into a buffer pool, they spill to disk, so ensure that the buffer pool is big enough for short-lived messages. Messages that are processed many hours after they are put are likely to have spilled to disk, so expect a get for these messages to be slower than if they were in the buffer pool.

    For a shared queue, the speed of messages depends on the speed of the coupling facility (CF). A CF within the physical processor is likely to be faster than an external CF. CF response time also depends on how busy the CF is. For example, on the Hursley systems, when the CF was 17% busy the response time was 14 microseconds; when the CF was 95% busy the response time was 45 microseconds.

    If your MQ requests use a lot of CPU, this can affect how fast messages are processed: if the logical partition (LPAR) is constrained for CPU, applications are delayed waiting for CPU.

    How much CPU per message

    In general, bigger messages use more CPU, so avoid large (multi-megabyte) messages if possible.

    When getting specific messages from a queue, the queue should be indexed so that the queue manager can go directly to the message, avoiding a potentially complete scan of the queue. If the queue is not indexed, the queue is scanned from the beginning looking for the message; if there are 1000 messages on the queue, up to all 1000 might be scanned. The result is a lot of unnecessary CPU usage.

    Channels using TLS have an additional cost due to the encryption of the message.

    From IBM MQ V7, you can select messages by a selector string in addition to the CorrelId or MsgId. Every message has to be inspected, so if there are many messages on the queue this is expensive.

    It is more efficient for an application to issue MQOPEN, MQPUT, MQPUT, MQCLOSE than MQPUT1, MQPUT1.

    Triggering in CICS

    When the message arrival rate for a triggered queue is low, it is efficient to use trigger type first. When the message arrival rate is more than 10 messages a second, it is more efficient to trigger the first transaction, then have the transaction process a message, get the next message, and so on. If a message has not arrived within a short period (say between 0.1 and 1 second), the transaction ends. At high throughput you might need multiple transactions running to process the messages and prevent a build-up of messages. Every trigger message produced requires a put and a get of the trigger message, which in effect doubles the cost of each message.
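
    An outline of this get-with-wait pattern in C follows. The hConn and hObj handles and the processMessage routine are assumptions for illustration, and a real CICS program would use CICS syncpoint services rather than calling MQCMIT directly.

        #include <cmqc.h>

        void processMessage(PMQVOID buffer, MQLONG dataLen);  /* application logic */

        /* Started by a trigger: drain the queue, waiting briefly for
           each message; end when no message arrives in the interval. */
        void drainQueue(MQHCONN hConn, MQHOBJ hObj,
                        PMQVOID buffer, MQLONG bufLen)
        {
          MQLONG compCode, reason, dataLen;

          do
          {
            MQMD  md  = {MQMD_DEFAULT};      /* fresh MQMD per get */
            MQGMO gmo = {MQGMO_DEFAULT};

            gmo.Options      = MQGMO_WAIT | MQGMO_SYNCPOINT;
            gmo.WaitInterval = 500;          /* 0.5 seconds        */

            MQGET(hConn, hObj, &md, &gmo, bufLen, buffer, &dataLen,
                  &compCode, &reason);

            if (compCode != MQCC_FAILED)
            {
              processMessage(buffer, dataLen);
              MQCMIT(hConn, &compCode, &reason);
            }
          } while (compCode != MQCC_FAILED); /* typically ends with
                                                MQRC_NO_MSG_AVAILABLE */
        }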

    How many connections or concurrent users are supported

    Each connection uses virtual storage within the queue manager, so the more concurrent users, the more storage is used. If you have a very large buffer pool and a large number of users, you might be constrained for virtual storage, and you might need to reduce the size of your buffer pools.

    If security is being used, the queue manager caches security information within the queue manager for a long period, which increases the amount of virtual storage used.

    The CHINIT can support up to about 10,000 connections, limited by virtual storage. If a connection uses more storage, for example by using TLS, the storage per connection increases and the CHINIT can therefore support fewer connections. If you are processing large messages, they require more storage for buffers in the CHINIT, so the CHINIT can support fewer connections.

    Connections to a remote queue manager are more efficient than client connections. For example, every MQ client request requires two network flows (one for the request and one for the response), whereas with a channel to a remote queue manager there may be 50 sends over the network before a response comes back. If you are considering a large client network, it may be more efficient to use a concentrator queue manager on a distributed platform, and have one channel coming in and out of the concentrator.


Other things affecting performance

Log data sets should be at least 1000 cylinders in size. If the logs are smaller than this, checkpoint activity can be too frequent. On a busy system a checkpoint should typically occur every 15 minutes or longer; at very high throughput the interval may be shorter than this. When a checkpoint occurs, the buffer pools are scanned and 'old' messages and changed pages are written to disk. If checkpoints are too frequent, this can impact performance. The value of LOGLOAD also affects checkpoint frequency. If the queue manager ends abnormally, then at restart it may have to read back up to three checkpoints. The best checkpoint interval is a balance between the activity when a checkpoint is taken, and the amount of log data that may need to be read when the queue manager restarts.

There is a significant overhead in starting a channel. It is usually better to start a channel and leave it connected, rather than frequently starting and stopping the channel.

Parent topic: Design considerations for IBM MQ applications


Related information