Designing clusters
Selecting queue managers to hold full repositories
In each cluster, select at least one, preferably two, or possibly more of the queue managers to hold full repositories. A cluster can work quite adequately with only one full repository, but using two improves availability. You interconnect the full repository queue managers by defining cluster-sender channels between them; a sketch of the definitions involved follows the list of considerations below.
- The most important consideration is that the queue managers chosen to hold full repositories need to be reliable and well managed. For example, it would be far better to choose queue managers on a stable z/OS system than queue managers on a portable personal computer that is frequently disconnected from the network.
- You might also consider the location of the queue managers and choose ones that are in a central position geographically or perhaps ones that are located on the same system as a number of other queue managers in the cluster.
- Another consideration might be whether a queue manager already holds the full repositories for other clusters. Having made the decision once, and made the necessary definitions to set up a queue manager as a full repository for one cluster, you might well choose to rely on the same queue manager to hold the full repositories for other clusters of which it is a member.
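For example, the following MQSC sketch makes two queue managers full repositories for one cluster and interconnects them. The queue manager names (LONDON, NEWYORK), cluster name (INVENTORY), channel names, and connection details are all hypothetical:

   * On LONDON: hold a full repository and point at NEWYORK
   ALTER QMGR REPOS(INVENTORY)
   DEFINE CHANNEL(TO.LONDON) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
          CONNAME('london.example.com(1414)') CLUSTER(INVENTORY)
   DEFINE CHANNEL(TO.NEWYORK) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
          CONNAME('newyork.example.com(1414)') CLUSTER(INVENTORY)

   * On NEWYORK: the mirror-image definitions
   ALTER QMGR REPOS(INVENTORY)
   DEFINE CHANNEL(TO.NEWYORK) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
          CONNAME('newyork.example.com(1414)') CLUSTER(INVENTORY)
   DEFINE CHANNEL(TO.LONDON) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
          CONNAME('london.example.com(1414)') CLUSTER(INVENTORY)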
When a queue manager sends out information about itself or requests information about another queue manager, the information or request is sent to two full repositories. A full repository named on a CLUSSDR definition handles the request whenever possible, but if the chosen full repository is not available another full repository is used. When the first full repository becomes available again it collects the latest new and changed information from the others so that they keep in step.
In very large clusters, containing thousands of queue managers, you might want to have more than two full repositories. You can interconnect them in a number of different topologies; the ones shown here are only examples.
If all the full repository queue managers go out of service at the same time, queue managers continue to work using the information they have in their partial repositories. Clearly they are limited to using the information that they have. New information and requests for updates cannot be processed. When the full repository queue managers reconnect to the network, messages are exchanged to bring all repositories (both full and partial) back up-to-date.
Each full repository republishes the information it receives through its manually defined CLUSSDR channels, which must point to other full repositories in the cluster. You must make sure that a publication received by any full repository ultimately reaches all the other full repositories; you do this by manually defining CLUSSDR channels between the full repositories. The more highly interconnected the full repositories are, the more robust the cluster.
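For example, with three full repositories QM1, QM2, and QM3 (names, cluster, and connection details hypothetical), each full repository defines a CLUSSDR channel to each of the other two, so that a publication received by any one of them reaches the others directly:

   * On QM1, assuming TO.QM2 and TO.QM3 are the cluster-receiver
   * channel names defined on QM2 and QM3
   DEFINE CHANNEL(TO.QM2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
          CONNAME('qm2.example.com(1414)') CLUSTER(DEMO)
   DEFINE CHANNEL(TO.QM3) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
          CONNAME('qm3.example.com(1414)') CLUSTER(DEMO)
   * Repeat the equivalent pair of definitions on QM2 and on QM3,
   * so that every full repository points to every other one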
Having only two full repositories is sufficient for all but very exceptional circumstances.
Organizing a cluster
Having selected the queue managers to hold full repositories, you need to decide which queue managers should link to which full repository. The CLUSSDR channel definition links a queue manager to a full repository from which it finds out about the other full repositories in the cluster. From then on, the queue manager sends messages to any two full repositories, but it always tries to use the one to which it has a CLUSSDR channel definition first. It is not significant which full repository you choose. However, consider the topology of your configuration, and perhaps the physical or geographical location of the queue managers as shown in Figure 12 through Figure 14.
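For example, a new queue manager PARIS that is to hold only a partial repository needs just two definitions to join the cluster: a CLUSRCVR advertising itself, and a CLUSSDR pointing at one chosen full repository. The names and connection details here are hypothetical:

   * On PARIS: advertise this queue manager to the cluster
   DEFINE CHANNEL(TO.PARIS) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
          CONNAME('paris.example.com(1414)') CLUSTER(INVENTORY)
   * Point at one full repository; PARIS learns about the rest
   DEFINE CHANNEL(TO.LONDON) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
          CONNAME('london.example.com(1414)') CLUSTER(INVENTORY)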
WebSphere MQ Explorer tries to contact a full repository queue manager in your cluster in order to build its displays. If you are using a z/OS system as a full repository, the Explorer cannot contact it, because z/OS queue managers cannot run the command server to respond to the Explorer's PCF commands. To ensure that a particular full repository queue manager is not used by the WebSphere MQ Explorer, include the string %NOREPOS% in the description field of its cluster-receiver channel definition. When the Explorer chooses which full repository to contact, it ignores those whose channel description contains %NOREPOS%, and treats them as though they did not hold a full repository for the cluster.
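For example, on the full repository queue manager that the Explorer is to ignore (the channel name here is hypothetical):

   ALTER CHANNEL(TO.ZOSQM) CHLTYPE(CLUSRCVR) +
         DESCR('Full repository on z/OS %NOREPOS%')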
Because all cluster information is sent to two full repositories, there might be situations in which you want to make a second CLUSSDR channel definition. You might do this in a cluster that has a large number of full repositories, spread over a wide area, to control which full repositories your information is sent to.
Naming conventions
When setting up a new cluster, consider a naming convention for the queue managers. Every queue manager must have a different name, but it might help you to remember which queue managers are grouped where if you give them a set of similar names.
Every cluster-receiver channel must also have a unique name. One possibility is to use the queue-manager name preceded by the preposition TO. For example, TO.PARIS, TO.LONDON, and so on. If you have more than one channel to the same queue manager, each with different priorities or using different protocols, you might extend this convention to use names such as TO.PARIS.S1, TO.PARIS.N3, and TO.PARIS.T4, where S1 might be the first SNA channel, N3 might be the NetBIOS channel with a network priority of 3, and so on.
The final qualifier might describe the class of service the channel provides. See Defining classes of service for more details.
Remember that all cluster-sender channels have the same name as their corresponding cluster-receiver channel.
Do not use generic connection names on your cluster-receiver definitions. In WebSphere MQ for z/OS you can define VTAM generic resources or Dynamic Domain Name Server (DDNS) generic names, but do not do this if you are using clusters. If you define a CLUSRCVR with a generic CONNAME there is no guarantee that your CLUSSDR channels will point to the queue managers you intend. Your initial CLUSSDR might end up pointing to any queue manager in the queue-sharing group, not necessarily one that hosts a full repository. Furthermore, if a channel goes to retry status, it might reconnect to a different queue manager with the same generic name and the flow of your messages will be disrupted.
Overlapping clusters
You can create clusters that overlap, as described in Putting across clusters. There are a number of reasons you might do this, for example:
- To allow different organizations to have their own administration.
- To allow independent applications to be administered separately.
- To create classes of service.
- To create test and production environments.
In Figure 10 the queue manager LONDON is a member of both the clusters illustrated. When a queue manager is a member of more than one cluster, you can take advantage of namelists to reduce the number of definitions you need. A namelist can contain a list of names, for example, cluster names. You can create a namelist naming the clusters, and then specify this namelist on the ALTER QMGR command for LONDON to make it a full repository queue manager for both clusters. See Adding a new, interconnected cluster for some examples of how to use namelists.
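For example, a sketch of the definitions on LONDON, assuming the two clusters are called CLUSTER1 and CLUSTER2 (the namelist name is also hypothetical):

   DEFINE NAMELIST(CLUSTERLIST) +
          DESCR('Clusters for which LONDON holds a full repository') +
          NAMES(CLUSTER1, CLUSTER2)
   * Blank out the single-cluster attribute and use the namelist
   ALTER QMGR REPOS(' ') REPOSNL(CLUSTERLIST)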
If you have more than one cluster in your network, give them different names. If two clusters with the same name are ever merged, it will not be possible to separate them again. It is also a good idea to give the clusters and channels different names so that they are more easily distinguished when you look at the output from DISPLAY commands. Queue manager names must be unique within a cluster for it to work correctly.
Defining classes of service
Imagine a university that has a queue manager for each member of staff and each student. Messages between members of staff are to travel on channels with a high priority and high bandwidth. Messages between students are to travel on cheaper, slower channels. You can set up this network using traditional distributed queuing techniques. WebSphere MQ knows which channels to use by looking at the destination queue name and queue manager name.
To clearly differentiate between the staff and students, you could group their queue managers into two clusters as shown in Figure 15. WebSphere MQ will move messages to the meetings queue in the staff cluster only over channels that are defined in that cluster. Messages for the gossip queue in the students cluster go over channels defined in that cluster and receive the appropriate class of service.
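As a sketch (channel names and connection details are hypothetical; the cluster and queue names follow the example), each queue manager joins only its own cluster and advertises its queue there, so its messages can travel only on that cluster's channels:

   * On a staff queue manager: join the STAFF cluster only
   DEFINE CHANNEL(TO.STAFF1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
          CONNAME('staff1.example.com(1414)') CLUSTER(STAFF)
   DEFINE QLOCAL(MEETINGS) CLUSTER(STAFF)

   * On a student queue manager: join the STUDENTS cluster only
   DEFINE CHANNEL(TO.STUDENT1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
          CONNAME('student1.example.com(1414)') CLUSTER(STUDENTS)
   DEFINE QLOCAL(GOSSIP) CLUSTER(STUDENTS)

   * (Each queue manager also needs a CLUSSDR channel pointing to
   * a full repository in its own cluster, as described earlier)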
Objects
The following objects are needed when using WebSphere MQ clusters. They are included in the set of default objects defined when you create a queue manager, except on z/OS, where they can be found in the customization samples.
Do not alter the default queue definitions. You can alter the default channel definitions in the same way as any other channel definition, using MQSC or PCF commands; a sketch follows the list of objects below.
- SYSTEM.CLUSTER.REPOSITORY.QUEUE
- Each queue manager in a cluster has a local queue called SYSTEM.CLUSTER.REPOSITORY.QUEUE. This queue is used to store all the full repository information. This queue is not normally empty.
- SYSTEM.CLUSTER.COMMAND.QUEUE
- Each queue manager in a cluster has a local queue called SYSTEM.CLUSTER.COMMAND.QUEUE. This queue is used to carry messages to the full repository. The queue manager uses this queue to send any new or changed information about itself to the full repository queue manager and to send any requests for information about other queue managers. This queue is normally empty.
- SYSTEM.CLUSTER.TRANSMIT.QUEUE
- Each queue manager has a definition for a local queue called SYSTEM.CLUSTER.TRANSMIT.QUEUE. This is the transmission queue for all messages to all queues and queue managers that are within clusters.
- SYSTEM.DEF.CLUSSDR
- Each cluster has a default CLUSSDR channel definition called SYSTEM.DEF.CLUSSDR. This is used to supply default values for any attributes that you do not specify when you create a cluster-sender channel on a queue manager in the cluster.
- SYSTEM.DEF.CLUSRCVR
- Each cluster has a default CLUSRCVR channel definition called SYSTEM.DEF.CLUSRCVR. This is used to supply default values for any attributes that you do not specify when you create a cluster-receiver channel on a queue manager in the cluster.
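For example, a sketch of changing the default batch size that new cluster-receiver channels inherit (the value 25 is hypothetical):

   ALTER CHANNEL(SYSTEM.DEF.CLUSRCVR) CHLTYPE(CLUSRCVR) BATCHSZ(25)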