Address space storage
Use this topic for basic guidance on address space requirements for the IBM MQ components.
Storage requirements can be divided into the following categories:
- Common storage
- Queue manager private region storage
- Channel initiator storage
For more details, see Suggested region sizes.
With a 31-bit address space, a virtual line marks the 16-megabyte address, and 31-bit addressable storage is often known as above the (16 MB) line. With a 64-bit address space there is a second virtual line, called the bar, that marks the 2-gigabyte address. The bar separates storage below the 2-gigabyte address, called below the bar, from storage above the 2-gigabyte address, called above the bar. Storage below the bar uses 31-bit addressability; storage above the bar uses 64-bit addressability.
We can specify the limit of 31-bit storage by using the REGION parameter on the JCL, and the limit of above the bar storage by using the MEMLIMIT parameter. These specified values can be overridden by MVS exits.
Attention: A change to the way the system works has been introduced: Cross-system Extended Services (XES) now allocates 4 GB of storage in high virtual storage for each connection to a serialized list structure, or 36 GB for each connection to a lock structure. Before this change, this storage was allocated in data spaces. After application of this change, based on the way IBM MQ calculates storage usage, messages CSQY225E and CSQY224I might be issued, indicating that the queue manager is short of local storage above the bar.
You will also see an increase in the above the bar values reported in message CSQY220I.
For more information, see the IBM support document 2017139.
Common storage
Each IBM MQ for z/OS® subsystem has the following approximate storage requirements:
- CSA: 4 KB
- ECSA: 800 KB, plus the size of the trace table that is specified in the TRACTBL parameter of the CSQ6SYSP system parameter macro. For more information, see Use CSQ6SYSP.
In addition, each concurrent IBM MQ logical connection requires about 5 KB of ECSA. When a task ends, other IBM MQ tasks can reuse this storage. IBM MQ does not release the storage until the queue manager is shut down, so we can calculate the maximum amount of ECSA required by multiplying the maximum number of concurrent logical connections by 5 KB; a worked example follows the list below. Concurrent logical connections are the number of:
- Tasks (TCBs) in Batch, TSO, z/OS UNIX and Linux System Services, IMS, and Db2® SPAS regions that are connected to IBM MQ, but not disconnected.
- CICS® transactions that have issued an IBM MQ request, but have not terminated.
- JMS Connections, Sessions, TopicSessions or QueueSessions that have been created (for bindings connection), but not yet destroyed or garbage collected.
- Active IBM MQ channels.
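For example (an illustrative calculation, not a measurement): a system that peaks at 3,000 concurrent logical connections would need approximately 3,000 * 5 KB = 15,000 KB, or about 15 MB, of ECSA for logical connections, in addition to the fixed CSA and ECSA requirements listed above.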
We can set a limit on the common storage used by logical connections to the queue manager with the ACELIM configuration parameter. The ACELIM control is primarily of interest to sites where Db2 stored procedures cause operations on IBM MQ queues.
When driven from a stored procedure, each IBM MQ operation can result in a new logical connection to the queue manager. Large Db2 units of work, for example due to a table load, can result in an excessive demand for common storage.
ACELIM is intended to limit common storage use and to protect the z/OS system. Using ACELIM causes IBM MQ failures when the limit is exceeded. See the ACELIM section in Use CSQ6SYSP for more information.
Use SupportPac MP1B to format the SMF 115 subtype 3 records produced by STATISTICS CLASS(2) trace.
The amount of storage currently in the subpool controlled by the ACELIM value is indicated in the output, on the line titled ACE/PEB. SupportPac MP1B indicates the number of bytes in use.
Increase the normal value by a sufficient margin to provide space for growth and workload spikes. Divide the new value by 1024 to yield a maximum storage size in KB for use in the ACELIM configuration.
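For example (the figures are illustrative only): if the ACE/PEB line typically reports about 5,242,880 bytes in use, doubling that to 10,485,760 bytes allows for growth and workload spikes; 10,485,760 / 1024 = 10,240, so an ACELIM value of 10240 (KB) could be coded in CSQ6SYSP. Check the ACELIM section in Use CSQ6SYSP for the exact syntax and limits.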
The channel initiator typically requires ECSA usage of up to 160 KB.
Queue manager private region storage usage
IBM MQ for z/OS can use storage above the 2 GB bar for some internal control blocks. We can have buffer pools in this storage, which gives you the potential to configure much larger buffer pools if sufficient storage is available. Typically buffer pools are the major internal control blocks that use storage above the 2 GB bar.
Each buffer pool size is determined at queue manager initialization time, and storage is allocated for the buffer pool when a page set that is using that buffer pool is connected. A new parameter LOCATION (ABOVE|BELOW) is used to specify where the buffers are allocated. We can use the ALTER BUFFPOOL command to dynamically change the size of buffer pools.
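As a minimal sketch (the buffer pool number and buffer counts are illustrative; see DEFINE BUFFPOOL and ALTER BUFFPOOL in the MQSC command reference for the full syntax), buffer pool 1 could be defined in the CSQINP1 initialization input data set with its buffers above the bar, and later enlarged dynamically:
DEFINE BUFFPOOL(1) BUFFERS(50000) LOCATION(ABOVE)
ALTER BUFFPOOL(1) BUFFERS(100000)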
To use above the bar (64-bit) storage, we can specify a value for the MEMLIMIT parameter (for example MEMLIMIT=3G) on the EXEC PGM=CSQYASCP statement in the queue manager JCL. Your installation might have a default value set.
You should specify a MEMLIMIT with a sensible storage size, rather than MEMLIMIT=NOLIMIT, to prevent potential problems. If you specify NOLIMIT or a very large value, an ALTER BUFFPOOL command with a large size can use up all of the available z/OS virtual storage, which leads to paging in your system. Start with MEMLIMIT=3G and increase this size when you need to increase the size of your buffer pools.
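For example, the EXEC statement in the queue manager started task JCL might look similar to the following sketch (the step name is illustrative and other parameters on the statement are omitted):
//PROCSTEP EXEC PGM=CSQYASCP,REGION=0M,MEMLIMIT=3G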
Specify MEMLIMIT as 2 GB plus the size of the buffer pools above the bar, rounded up to the nearest GB. For example, for two buffer pools configured with LOCATION ABOVE, where buffer pool 1 has 10,000 buffers and buffer pool 2 has 50,000 buffers, memory usage above the bar equals 60,000 (total number of buffers) * 4096 = 245,760,000 bytes = 234.375 MB. All buffer pools, regardless of LOCATION, make use of 64-bit storage for control structures. As the number of buffer pools and the number of buffers in those pools increase, this can become significant. A good rule of thumb is that each buffer requires an additional 200 bytes of 64-bit storage. A configuration with 10 buffer pools, each with 20,000 buffers, would require 200 * 10 * 20,000 = 40,000,000 bytes, equivalent to 40 MB. We can specify 3 GB for the MEMLIMIT size, which allows scope for growth (40 MB + 234 MB + 2 GB, which rounds up to 3 GB).
For some configurations there can be significant performance benefits to using buffer pools that have their buffers permanently backed by real storage. We can achieve this by specifying the FIXED4KB value for the PAGECLAS attribute of the buffer pool. However, you should only do this if there is sufficient real storage available on the LPAR; otherwise, other address spaces might be affected. For information about when you should use the FIXED4KB value for PAGECLAS, see IBM MQ SupportPac MP16: IBM MQ for z/OS - Capacity planning & tuning.
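For example (the buffer pool number and size are illustrative; see the PAGECLAS attribute of DEFINE BUFFPOOL in the MQSC command reference), a buffer pool permanently backed by real storage could be defined as:
DEFINE BUFFPOOL(2) BUFFERS(125000) LOCATION(ABOVE) PAGECLAS(FIXED4KB)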
To minimize paging, consider real storage in addition to the virtual storage that is used by the queue manager and the channel initiator.
Before using storage above the bar, discuss the requirements with your MVS systems programmer to ensure that there is sufficient auxiliary storage for peak time usage, and sufficient real storage to prevent paging.
Note: The size of memory dump data sets might have to be increased to handle the increased virtual storage.
Making the buffer pools so large that there is MVS paging might adversely affect performance. You might consider using a smaller buffer pool that does not page, with IBM MQ moving the messages to and from the page set.
We can monitor the address space storage usage from the CSQY220I message that indicates the amount of private region storage in use above and below the 2 GB bar, and the remaining amount.
Channel initiator storage usage
There are two areas of channel initiator storage usage that you must consider:
- Private region
- Accounting and statistics
Private region storage usage
You should specify REGION=0M for the CHINIT to allow it to use the maximum amount of storage below the bar. The storage available to the channel initiator limits the number of concurrent connections the CHINIT can have.
Every channel uses approximately 170 KB of extended private region in the channel initiator address space. Storage is increased by message size if messages larger than 32 KB are transmitted. This increased storage is freed when:
- A sending or client channel requires less than half the current buffer size for 10 consecutive messages.
- A heartbeat is sent or received.
The storage is freed for reuse within the Language Environment; however, it is not seen as free by the z/OS virtual storage manager. This means that the upper limit for the number of channels is dependent on message size and arrival patterns, and on the limitations of individual user systems on extended private region size. The upper limit on the number of channels is likely to be approximately 9000 on many systems because the extended region size is unlikely to exceed 1.6 GB. The use of message sizes larger than 32 KB reduces the maximum number of channels in the system. For example, if messages that are 100 MB long are transmitted, and an extended region size of 1.6 GB is assumed, the maximum number of channels is 15.
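As a rough illustration of these limits (approximate arithmetic only): with an extended region size of 1.6 GB and roughly 170 KB per channel, 1.6 GB / 170 KB gives just under 10,000 channels, which is why the practical upper limit is quoted as approximately 9000; with 100 MB messages, 1.6 GB / 100 MB gives roughly 16 channels, consistent with the figure of 15 quoted above.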
The channel initiator trace is written to a data space. The size of the data space storage is controlled by the TRAXTBL parameter. See ALTER QMGR.
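For example (the value is illustrative; see the TRAXTBL parameter of ALTER QMGR for the permitted range), the trace data space size could be changed with an MQSC command such as:
ALTER QMGR TRAXTBL(20)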
Accounting and statistics storage usage
You should allow the channel initiator access to a minimum of 256 MB of virtual storage, and we can do this by specifying MEMLIMIT=256M.
If we do not set the MEMLIMIT parameter in the channel initiator JCL, we can set the amount of virtual storage above the bar using the MEMLIMIT parameter in the SMFPRMxx member of SYS1.PARMLIB, or from the IEFUSI exit.
If you set the MEMLIMIT to restrict the above bar storage below the required level, the channel initiator issues the CSQX124E message and class 4 accounting and statistics trace will not be available.
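For example, the EXEC statement in the channel initiator started task JCL might look similar to the following sketch (the step name is illustrative and other parameters on the statement are omitted):
//PROCSTEP EXEC PGM=CSQXJST,REGION=0M,MEMLIMIT=256M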
Suggested region sizes
The following table shows suggested values for region sizes.
Table 1. Suggested definitions for JCL region sizes

System               Definition setting
Queue manager        REGION=0M, MEMLIMIT=3G
Channel initiator    REGION=0M
Managing the MEMLIMIT and REGION size
Other mechanisms, for example the MEMLIMIT parameter in the SMFPRMxx member of SYS1.PARMLIB or the IEFUSI exit might be used at your installation to provide a default amount of virtual storage above the bar for z/OS address spaces. See memory management above the bar for full details about limiting storage above the bar.