Converting an existing network into a cluster

The tasks from Setting up a new cluster through Moving a full repository to another queue manager set up and then extended a new cluster. The remaining two tasks explore a different approach: that of converting an existing network of queue managers into a cluster.

Scenario

  • A WebSphere MQ network is already in place, connecting the nationwide branches of a retail store. It has a hub and spoke structure: all the queue managers are connected to one central queue manager. The central queue manager is on the system on which the inventory application runs. The application is driven by the arrival of messages on the RCVINVCQ queue, for which each queue manager has a remote-queue definition.

  • To ease administration you are going to convert this network into a cluster and create another queue manager at the central site to share the workload.

  • Both the central queue managers are to host full repositories and be accessible to the inventory application.

  • The inventory application is to be driven by the arrival of messages on the RCVINVCQ queue hosted by either of the central queue managers.

  • The inventory application is to be the only application running in parallel and accessible by more than one queue manager. All other applications will continue to run as before.

  • All the branches have network connectivity to the two central queue managers.

  • The network protocol is TCP.

Note:
You do not need to convert your entire network all at once. This task could be completed in stages.

 

1. Review the inventory application for message affinities

Before proceeding, ensure that the application can handle message affinities. See Reviewing applications for message affinities for more information.

 

2. Alter the two central queue managers to make them full repository queue managers

The two queue managers CHICAGO and CHICAGO2 are at the hub of this network. You have decided to concentrate all activity associated with the CHAINSTORE cluster onto those two queue managers. As well as the inventory application and the definitions for the RCVINVCQ queue, you want these queue managers to host the two full repositories for the cluster. At each of the two queue managers, issue the following command:

ALTER QMGR REPOS(CHAINSTORE)
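
As a quick check, which is not part of the original task, you can confirm the change on each of the two queue managers:

DISPLAY QMGR REPOS

The command should report REPOS(CHAINSTORE) on both CHICAGO and CHICAGO2.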

 

3. Define a CLUSRCVR channel on each queue manager

At each queue manager in the cluster, define a cluster-receiver channel and a cluster-sender channel. It does not matter which of these you define first.

Make a CLUSRCVR definition to advertise each queue manager, its network address, and so on, to the cluster. For example, on queue manager ATLANTA:

DEFINE CHANNEL(TO.ATLANTA) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
CONNAME(ATLANTA.CHSTORE.COM) CLUSTER(CHAINSTORE) +
DESCR('Cluster-receiver channel')
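
The two central queue managers need equivalent definitions. As a sketch, assuming the channel and host names follow the same pattern as the other examples in this task, the definition on CHICAGO might be:

* Cluster-receiver channel advertising CHICAGO to the cluster
DEFINE CHANNEL(TO.CHICAGO) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
CONNAME(CHICAGO.CHSTORE.COM) CLUSTER(CHAINSTORE) +
DESCR('Cluster-receiver channel')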

 

4. Define a CLUSSDR channel on each queue manager

Make a CLUSSDR definition at each queue manager to link that queue manager to one or other of the full repository queue managers. For example, you might link ATLANTA to CHICAGO2:

DEFINE CHANNEL(TO.CHICAGO2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
CONNAME(CHICAGO2.CHSTORE.COM) CLUSTER(CHAINSTORE) +
DESCR('Cluster-sender channel to repository queue manager')
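
The two full repository queue managers also need cluster-sender channels to each other, so that each can send repository updates to the other. As a sketch using the same naming pattern, on CHICAGO you might define:

* Cluster-sender channel from one full repository to the other
DEFINE CHANNEL(TO.CHICAGO2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
CONNAME(CHICAGO2.CHSTORE.COM) CLUSTER(CHAINSTORE) +
DESCR('Cluster-sender channel to the other full repository')

with a matching TO.CHICAGO definition on CHICAGO2. Once the channels are running, DISPLAY CLUSQMGR(*) on either full repository shows which queue managers have joined the cluster.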

 

5. Install the inventory application on CHICAGO2

You already have the inventory application on queue manager CHICAGO. Now you need to make a copy of this application on queue manager CHICAGO2. Refer to the WebSphere MQ Application Programming Guide and install the inventory application on CHICAGO2.

 

6. Define the RCVINVCQ queue on the central queue managers

On CHICAGO, modify the local queue definition for the queue RCVINVCQ to make the queue available to the cluster. Issue the command:

ALTER QLOCAL(RCVINVCQ) CLUSTER(CHAINSTORE)

On CHICAGO2, make a definition for the same queue:

DEFINE QLOCAL(RCVINVCQ) CLUSTER(CHAINSTORE)

(On z/OS you can use the MAKEDEF option of the COMMAND function of CSQUTIL to make an exact copy on CHICAGO2 of the RCVINVCQ on CHICAGO. See the WebSphere MQ for z/OS System Administration Guide for details.)

When you make these definitions, a message is sent to the full repositories at CHICAGO and CHICAGO2, and the information in them is updated. From then on, when a queue manager wants to put a message to the RCVINVCQ queue, it finds out from the full repositories that there is a choice of destinations for messages sent to that queue.
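
To verify that both instances of the queue are known to the cluster, you can display the cluster queue on either full repository; this check is not part of the original task:

DISPLAY QCLUSTER(RCVINVCQ)

On a full repository the command returns one entry for each queue manager that hosts the queue, in this case CHICAGO and CHICAGO2.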

 

7. Delete all remote-queue definitions for the RCVINVCQ

Now that you have linked all your queue managers together in the CHAINSTORE cluster, and have defined the RCVINVCQ to the cluster, the queue managers no longer need remote-queue definitions for the RCVINVCQ. At every queue manager, issue the command:

DELETE QREMOTE(RCVINVCQ)

Until you do this, the remote-queue definitions will continue to be used and you will not get the benefit of using clusters.
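
If you are converting the network in stages, you can check which queue managers still hold a remote-queue definition for the queue; this check is a suggestion rather than part of the original task:

DISPLAY QREMOTE(RCVINVCQ)

A queue manager that still has the definition continues to route messages point-to-point, because a locally defined remote queue takes precedence over a cluster queue during name resolution.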

 

8. Implement the cluster workload exit (optional step)

Because there is more than one destination for messages sent to the RCVINVCQ queue, the workload management algorithm determines which destination each message is sent to.

If you want to implement your own workload management routine, write a cluster workload exit program. See Workload balancing for more information.
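
You register the exit by naming it on the queue manager definition. The exit name below is hypothetical, and the exact format of the value (for example, a library and function name on distributed platforms, or a load module name on z/OS) depends on your platform:

ALTER QMGR CLWLEXIT('INVEXIT')

Leaving CLWLEXIT blank, which is the default, means the queue manager uses its built-in workload management algorithm.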

Now that you have completed all the definitions, if you have not already done so, start the channel initiator on WebSphere MQ for z/OS and, on all platforms, start a listener program on each queue manager. The listener program listens for incoming network requests and starts the cluster-receiver channel when it is needed. See Establishing communication in a cluster for more information.
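
On distributed platforms, one way to run a listener is to define a listener object that the queue manager controls and then start it. The listener name below is an assumption for illustration; 1414 is the conventional default port for WebSphere MQ:

* TCP listener started and stopped with the queue manager
DEFINE LISTENER(TCP.LISTENER) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(TCP.LISTENER)

On WebSphere MQ for z/OS there is no listener object; after START CHINIT, issue START LISTENER TRPTYPE(TCP) PORT(1414) to start the TCP listener.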

 
