Interoperation when WebSphere Application Server application servers are clustered and WebSphere MQ queue managers are clustered

WebSphere MQ queue managers are usually clustered in order to distribute the message workload and because, if one queue manager fails, the others can continue running.

In this topic, "application server" refers to an application server that runs on WebSphere Application Server, and "queue manager" refers to a queue manager that runs on WebSphere MQ.

There are two topology options:


The queue managers run on different hosts from the application servers

In the following figure, application servers 1 and 2 attach in client mode to queue manager 1, application server 3 attaches in client mode to queue manager 2, and queue manager 3 distributes incoming messages between the instances of Q1 on queue managers 1 and 2:

Figure 1. WebSphere Application Server clustering: client mode attachment to queue managers
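
In client mode, each application server attaches to its queue manager over a TCP/IP channel. The following is a minimal standalone sketch of such an attachment, using hypothetical host, channel, and queue manager names; in WebSphere Application Server you would normally define the equivalent connection factory administratively rather than in application code.

    import javax.jms.Connection;
    import javax.jms.JMSException;

    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class ClientModeAttachment {
        public static void main(String[] args) throws JMSException {
            MQConnectionFactory cf = new MQConnectionFactory();
            // Client mode: attach to the queue manager over a TCP/IP channel.
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            cf.setHostName("mqhost1.example.com"); // hypothetical host of queue manager 1
            cf.setPort(1414);                      // default WebSphere MQ listener port
            cf.setChannel("WAS.SVRCONN");          // hypothetical server-connection channel
            cf.setQueueManager("QM1");             // hypothetical name for queue manager 1
            Connection connection = cf.createConnection();
            // ... create a session and send to or receive from Q1 here ...
            connection.close();
        }
    }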

If application server 1 fails:

Application server 2 can take over its workload because they are both attached to queue manager 1.

If application server 2 fails:

Application server 1 can take over its workload because they are both attached to queue manager 1.

If application server 3 fails:

You must restart it as soon as possible for the following reasons:

  • Other application servers in the cluster can take over its external workload, but no other application server can take over its WebSphere MQ workload, because no other application server is attached to queue manager 2. The WebSphere MQ workload that application server 3 was processing therefore ceases.

  • Queue manager 3 continues to distribute work between queue manager 1 and queue manager 2, even though the workload arriving at queue manager 2 cannot be processed by application server 1 or 2.

If you choose not to restart it, you can alleviate this situation by manually configuring Q1 on queue manager 2 so that puts to it are inhibited, as shown in the sketch that follows. All messages are then sent to queue manager 1, where the other application servers process them.
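
A minimal MQSC sketch of that workaround, assuming Q1 is a local cluster queue defined on queue manager 2 (run these commands in runmqsc against queue manager 2):

    * Inhibit puts on this instance of Q1. The cluster workload
    * algorithm then routes new messages to the Q1 instance on
    * queue manager 1 instead.
    ALTER QLOCAL(Q1) PUT(DISABLED)

    * When application server 3 is available again, re-enable puts.
    ALTER QLOCAL(Q1) PUT(ENABLED)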

If queue manager 1 fails:

You must restart it as soon as possible for the following reasons:

  • Messages that are on queue manager 1 when it fails are not processed until you restart queue manager 1.

  • No new messages from WebSphere MQ applications are sent to queue manager 1; instead, new messages are sent to queue manager 2 and consumed by application server 3.

  • Because application servers 1 and 2 are not attached to queue manager 2, they cannot take on any of its workload.

  • Because application servers 1, 2 and 3 are in the same WebSphere Application Server cluster, their non-WebSphere MQ workload continues to be distributed between them all, even though application servers 1 and 2 cannot use WebSphere MQ because queue manager 1 has failed.

Although this networking topology can provide availability and scalability, the relationship between the workload on the different queue managers and the application servers to which they are connected is complex. You can contact your IBM representative to obtain expert advice.


The queue managers run on the same hosts as the application servers

In the following figure, application servers 1 and 3 attach in bindings mode to queue manager 1, application server 2 attaches in bindings mode to queue manager 2, and queue manager 3 distributes incoming messages between the instances of Q1 on queue managers 1 and 2:

Figure 2. WebSphere Application Server clustering: bindings mode attachment to queue managers
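
In bindings mode, the application server attaches through a cross-memory connection to a queue manager on the same host, so no host name, port, or channel is required, but the WebSphere MQ native libraries must be available to the JVM. A minimal sketch, again with a hypothetical queue manager name and the same caveat that a real WebSphere Application Server configuration would define the connection factory administratively:

    import javax.jms.Connection;
    import javax.jms.JMSException;

    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class BindingsModeAttachment {
        public static void main(String[] args) throws JMSException {
            MQConnectionFactory cf = new MQConnectionFactory();
            // Bindings mode: cross-memory attach to a queue manager that
            // runs on the same host as this JVM (requires the WebSphere MQ
            // native libraries on java.library.path).
            cf.setTransportType(WMQConstants.WMQ_CM_BINDINGS);
            cf.setQueueManager("QM1"); // hypothetical name for queue manager 1
            Connection connection = cf.createConnection();
            connection.close();
        }
    }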

If application server 1 fails:

Application server 3 can take over its workload because they are both attached to queue manager 1.

If application server 3 fails:

Application server 1 can take over its workload because they are both attached to queue manager 1.

If application server 2 fails:

You must restart it as soon as possible for the following reasons:

  • Because no other application server is attached to queue manager 2, no other application server can take over its WebSphere MQ workload. The WebSphere MQ workload that application server 2 was processing therefore ceases. Other application servers in the cluster can, however, take over its external workload.

  • Queue manager 3 continues to distribute work between queue manager 1 and queue manager 2, even though the workload arriving at queue manager 2 cannot be processed while application server 2 is down.

    If you choose not to restart it, you can alleviate this situation by manually configuring Q1 on queue manager 2 so that puts to it are inhibited, as in the MQSC sketch shown earlier. All messages are then sent to queue manager 1, where the other application servers process them.

If queue manager 1 fails:

You must restart it as soon as possible for the following reasons:

  • Messages that are on queue manager 1 when it fails are not processed until you restart queue manager 1.

  • Because application servers 1 and 3 are not attached to queue manager 2, they cannot take on any of its workload.

  • No new messages from WebSphere MQ applications are sent to queue manager 1; instead, new messages are sent to queue manager 2 and consumed by application server 2.

  • Because application servers 1, 2 and 3 are in the same WebSphere Application Server cluster, their non-WebSphere MQ workload continues to be distributed between them all, even though application servers 1 and 3 cannot use WebSphere MQ because queue manager 1 has failed.

Although this networking topology can provide availability and scalability, the relationship between the workload on the different queue managers and the application servers to which they are connected is complex. You can contact your IBM representative to obtain expert advice.