Use client connections to connect to multiple IBM MQ queue managers
How you can use client channel definition tables (CCDTs), connection name lists (CONNAME lists), load balancing, and code stubs to connect to multiple queue managers, together with the advantages and disadvantages of each option for a specific requirement.
Terms used in this information
- CCDT multi-QMGR
- Means a CCDT file that contains multiple client connection (CLNTCONN) channels with the same group, that is, the same queue manager name (QMNAME) CLNTCONN attribute, where different CLNTCONN entries resolve to different queue managers.
- Load balancer
- Means a network appliance with a Virtual IP address (VIP) configured with port monitoring of the TCP/IP listeners of multiple IBM MQ queue managers. How the VIP is configured in the network appliance depends on the network appliance you are using.
The following choices relate only to applications sending messages, or initiating synchronous request and reply messaging. The considerations for applications servicing those messages and requests, for example the listeners, are completely separate, and are discussed in detail in "Connecting a message listener to a queue".
Scale of code change required for existing applications that connect to a single queue manager
- CONNAME list, CCDT multi-QMGR, and Load balancer
- MQCONN("QMNAME") to MQCONN("*QMNAME")
- Code stub
- Replace existing JMS or MQI connection logic with a code stub.
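For MQI applications, the change can be as small as prefixing the queue manager name on the MQCONN call with an asterisk, which tells the client that a connection to whichever queue manager the client channel definition resolves to is acceptable. The following sketch is illustrative only, with error handling reduced to a return code check.

```c
#include <cmqc.h>

int main(void)
{
    MQHCONN hConn;
    MQLONG  compCode, reason;

    /* Before: the application only accepts a connection to one named queue manager. */
    /* MQCONN("QMNAME", &hConn, &compCode, &reason);                                  */

    /* After: the leading asterisk accepts a connection to whichever queue manager   */
    /* the CCDT group, CONNAME list, or load balancer VIP resolves to.               */
    MQCONN("*QMNAME", &hConn, &compCode, &reason);

    return (compCode == MQCC_FAILED) ? (int)reason : 0;
}
```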
Support for different WLM strategies
- CONNAME list
- Prioritized only.
- CCDT multi-QMGR
- Prioritized or random.
- Load balancer
- Any, applied at the level of each connection; all messages sent over a given connection go to the same queue manager.
- Code stub
- Any, down to the level of each message.
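To illustrate the per-message flexibility a code stub offers, the following sketch alternates requests between two connections that the stub established at start-up. The two-entry array, the queue name APP.REQUEST, and the simple round-robin policy are assumptions made for the example, not a recommended design.

```c
#include <string.h>
#include <cmqc.h>

static MQHCONN hConns[2];            /* one connection per queue manager, made at start-up */
static int     nextQm = 0;           /* round-robin index                                   */

/* Put one request message, choosing the next queue manager in round-robin order. */
MQLONG stubPutRoundRobin(PMQVOID buffer, MQLONG bufferLength)
{
    MQOD    od  = {MQOD_DEFAULT};
    MQMD    md  = {MQMD_DEFAULT};
    MQPMO   pmo = {MQPMO_DEFAULT};
    MQHOBJ  hObj;
    MQLONG  compCode, reason, closeCode, closeReason;
    MQHCONN hConn = hConns[nextQm];  /* this choice applies to this message only */

    nextQm = (nextQm + 1) % 2;

    strncpy(od.ObjectName, "APP.REQUEST", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING, &hObj, &compCode, &reason);
    if (compCode == MQCC_FAILED)
        return reason;

    MQPUT(hConn, hObj, &md, &pmo, bufferLength, buffer, &compCode, &reason);
    MQCLOSE(hConn, &hObj, MQCO_NONE, &closeCode, &closeReason);

    return (compCode == MQCC_FAILED) ? reason : MQRC_NONE;
}
```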
Performance overhead while primary queue manager is unavailable
- CONNAME list
- Always tries the first entry in the list.
- CCDT multi-QMGR
- Remembers last good connection.
- Load balancer
- Port monitoring avoids bad queue managers.
- Code stub
- Can remember last good connection, and retry intelligently.
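A minimal sketch of the remember-the-last-good-connection behavior is shown below. The queue manager names and the simple try-in-order fallback are illustrative assumptions; a real stub might also apply back-off timers or connection options.

```c
#include <cmqc.h>

#define NUM_QMGRS 3
static char *qmNames[NUM_QMGRS] = {"QM1", "QM2", "QM3"};  /* illustrative names    */
static int   lastGood = 0;                                /* index of last good QM */

/* Try the last good queue manager first, then the others in order.
 * Returns MQCC_OK or MQCC_WARNING on success, MQCC_FAILED if none is available. */
MQLONG stubConnect(PMQHCONN pHconn, PMQLONG pReason)
{
    MQLONG compCode = MQCC_FAILED;

    for (int i = 0; i < NUM_QMGRS; i++)
    {
        int idx = (lastGood + i) % NUM_QMGRS;
        MQCONN(qmNames[idx], pHconn, &compCode, pReason);
        if (compCode != MQCC_FAILED)
        {
            lastGood = idx;            /* remember it, so the next connect starts here */
            break;
        }
    }
    return compCode;
}
```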
XA transaction support
- CONNAME list, CCDT multi-QMGR, and Load balancer
- The transaction manager needs to store recovery information, and must therefore be able to reconnect to the same queue manager resource.
- Code stub
- A code stub can meet the XA requirements of a transaction manager, for example by using multiple connection factories.
Connection rebalancing on failback
An example of this is when a queue manager restarts after a failure or planned outage, and you need to know how long it will be until applications can use that queue manager again.
- CONNAME list, CCDT multi-QMGR, and Load balancer
- Connection pooling in Java EE holds onto connections indefinitely, unless connections are configured with an aged timeout.
- Code stub
- A code stub can handle failback flexibly, with little or no performance overhead.
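For example, while it is running against a backup queue manager, a code stub could periodically retry the preferred one and switch back only when the retry succeeds. In the following sketch the name QM.PRIMARY, the once-a-minute retry interval, and the helper variables are illustrative assumptions.

```c
#include <time.h>
#include <cmqc.h>

static MQHCONN hConn;                 /* current connection, set by the stub's connect logic */
static int     onPrimary = 0;         /* 1 while connected to the preferred queue manager    */
static time_t  lastTry   = 0;         /* time of the last failback attempt                   */

/* Call this before each unit of work; it is almost free when already on the primary. */
void stubCheckFailback(void)
{
    MQHCONN hNew;
    MQLONG  compCode, reason;

    if (onPrimary || (time(NULL) - lastTry) < 60)   /* retry at most once a minute */
        return;

    lastTry = time(NULL);
    MQCONN("QM.PRIMARY", &hNew, &compCode, &reason);
    if (compCode != MQCC_FAILED)
    {
        MQDISC(&hConn, &compCode, &reason);         /* release the backup connection */
        hConn = hNew;                               /* fail back to the primary      */
        onPrimary = 1;
    }
}
```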
Admin flexibility to hide infrastructure changes from apps
- CONNAME list
- DNS only.
- CCDT multi-QMGR
- DNS and shared file-system, or shared file-system, or CCDT file push.
- Load balancer
- Dynamic virtual IP address (VIP).
- Code stub
- DNS or single queue manager CCDT entries.
Avoiding disruption around planned maintenance
There is another situation that you need to consider and plan for: how to avoid disruption to applications, for example errors and timeouts visible to the end users, during planned maintenance of a queue manager. The best approach to avoid disruption is to remove all work from a queue manager before it is stopped.
Consider a request and reply scenario. You want all in-flight requests to complete, and the replies to be processed by the application, but you do not want any additional work to be submitted into the system. Simply quiescing the queue manager does not fulfill this need, because well-coded applications receive an RC2161 MQRC_Q_MGR_QUIESCING exception before they receive the reply messages for their in-flight requests.
You can set PUT(DISABLED) on the request queues used to submit work, while leaving the reply queues both PUT(ENABLED) and GET(ENABLED). In this way, you can monitor the depth of the request, transmission, and reply queues. Once they have all stabilized, that is, in-flight requests have completed or timed out, you can stop the queue manager.
However, good coding in the requesting applications is required to handle a PUT(DISABLED) request queue, which results in an RC2051 MQRC_PUT_INHIBITED error when they try to send a message.
Note that the exception does not occur when creating the connection to IBM MQ, or opening the request queue. The exception occurs only when an attempt is made to actually send a message, using the MQPUT call.
Building a code stub that includes this error handling logic for request and reply scenarios, and asking the application teams to use such a code stub in the future, can help you develop applications with consistent behavior.
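A minimal sketch of that error handling logic is shown below. The helper functions, and the decision to move new work to another queue manager rather than reporting an error, are illustrative assumptions rather than a prescribed design.

```c
#include <cmqc.h>

/* Hypothetical helpers provided elsewhere in the code stub. */
extern void stubReconnectToAnotherQueueManager(void);
extern void stubReportError(MQLONG reason);

/* Put one request message, treating a put-disabled queue as a signal to move on. */
void stubPutRequest(MQHCONN hConn, MQHOBJ hRequestQ, PMQVOID buffer, MQLONG bufferLength)
{
    MQMD   md  = {MQMD_DEFAULT};
    MQPMO  pmo = {MQPMO_DEFAULT};
    MQLONG compCode, reason;

    pmo.Options |= MQPMO_FAIL_IF_QUIESCING;   /* fail rather than block during quiesce */

    MQPUT(hConn, hRequestQ, &md, &pmo, bufferLength, buffer, &compCode, &reason);

    if (reason == MQRC_PUT_INHIBITED || reason == MQRC_Q_MGR_QUIESCING)
    {
        /* The queue manager is being drained for maintenance: submit new work */
        /* elsewhere instead of surfacing an error to the end user.            */
        stubReconnectToAnotherQueueManager();
    }
    else if (compCode == MQCC_FAILED)
    {
        stubReportError(reason);              /* any other failure */
    }
}
```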
Parent topic: Application development concepts