
Network topologies for interoperation using an IBM MQ link

For completeness, this topic describes a wide range of topologies, including clustered and highly available topologies. Note that, for clustering and high availability, we need to use the Network Deployment or z/OS version of the product.


Single WAS application server connected to a single IBM MQ queue manager

In this basic scenario, an IBM MQ link connects a single WAS application server to an IBM MQ queue manager. The WAS messaging engine that connects to IBM MQ using the IBM MQ link is called the gateway messaging engine. The IBM MQ queue manager or queue-sharing group to which the IBM MQ link connects is called the gateway queue manager.

IBM MQ links always use TCP/IP connections, even if the IBM MQ queue manager is running on the same host as the application server. We do not need to specify a client or bindings transport type for the connection, as we do when IBM MQ is the messaging provider.

The IBM MQ link consists of one or two message channels: a sender channel to send messages to IBM MQ, a receiver channel to receive messages from IBM MQ, or both. Each message channel uses one TCP/IP connection.
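As an illustrative sketch only, the following wsadmin Jython fragment creates an IBM MQ link on a gateway messaging engine. Every name shown (bus, messaging engine, channels, host, and port) is a hypothetical example, and the exact createSIBMQLink parameters can vary by product version, so check the command reference for your release:

    # wsadmin Jython sketch: define an IBM MQ link on the gateway messaging
    # engine. All names below are hypothetical examples.
    AdminTask.createSIBMQLink('[-bus Bus1 '
        '-messagingEngine node1.server1-Bus1 '
        '-name MQLink1 '
        '-foreignBusName MQNetwork '
        '-queueManagerName BUS1QM '                          # how the bus appears to IBM MQ
        '-senderChannelTransportChain OutboundBasicMQLink '
        '-senderChannelName BUS1.TO.GWQM '                   # sends messages to IBM MQ
        '-hostName mqhost.example.com -port 1414 '           # gateway queue manager endpoint
        '-receiverChannelName GWQM.TO.BUS1]')                # receives messages from IBM MQ
    AdminConfig.save()

Note that only the host name and listener port of the gateway queue manager are specified, reflecting that the link is always a TCP/IP connection.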

The message channels support point-to-point messaging between WAS applications and IBM MQ applications. We can also configure a publish/subscribe bridge on the IBM MQ link for publish/subscribe messaging between WAS applications and IBM MQ applications. See Message exchange through an IBM MQ link.


WAS cell connected to an IBM MQ network

A single IBM MQ link can connect an entire WAS service integration bus, representing multiple application servers, to multiple IBM MQ queue managers. The messages that are exchanged between the two networks all pass through the IBM MQ link, which connects a single gateway messaging engine in WAS to a single gateway queue manager in IBM MQ. The gateway messaging engine and gateway queue manager distribute the messages, which can be point-to-point or publish/subscribe messages, to the appropriate application servers and queue managers in their respective networks.

With this topology, interoperation ceases if any of the following conditions occurs:

  • The gateway messaging engine fails, or the application server on which it runs fails.
  • The gateway queue manager fails.
  • The network connection between the gateway messaging engine and the gateway queue manager is lost.

In these situations, none of the application servers in the WAS cell can communicate with any of the queue managers in the IBM MQ network. In the event of a failure, messages are queued as follows:

  • Messages sent by WAS applications to the IBM MQ network are stored by the service integration bus until the IBM MQ link can be re-established.
  • Messages sent by IBM MQ applications to WAS are stored in the IBM MQ network, on the transmission queue of the gateway queue manager if it is still running.

We can improve the robustness of this topology and introduce greater availability by setting up high availability frameworks in WAS and IBM MQ.


High availability for a WAS cell connected to an IBM MQ network

The WAS high availability framework eliminates single points of failure and provides peer-to-peer failover for applications and processes running within WAS. This framework also allows WAS to be integrated into an environment that uses other high availability frameworks, such as High Availability Cluster Multi-Processing (HACMP), to manage non-WAS resources.

Both WAS application servers and IBM MQ queue managers can be arranged in clusters, so that if one fails, the others can continue running. In this network topology, the WAS cell containing the service integration bus includes a WAS cluster, which provides backup for the gateway messaging engine. If the gateway messaging engine fails, it can restart in another application server in the cluster, and it can then restart the IBM MQ link to the gateway queue manager. Similarly, the gateway queue manager is part of an IBM MQ high-availability cluster.

For WAS and IBM MQ to interoperate in this network topology, we must add support for changes of IP address. The IBM MQ gateway queue manager uses one IP address to reach the WAS gateway messaging engine, and the WAS gateway messaging engine uses one IP address to reach the IBM MQ gateway queue manager. In a high availability configuration, if the gateway messaging engine fails over to a different application server, or the gateway queue manager fails and is replaced by a failover gateway queue manager, the connection to the original IP address for the failed component is lost. We must ensure that both products are able to reinstate their connection to the component in its new location.

To ensure that the connection to a failover WAS gateway messaging engine is reinstated, choose one of the following options:

  1. If we are using a version of IBM MQ that is earlier than Version 7.0.1, install SupportPac MR01 for IBM MQ. This SupportPac provides the IBM MQ queue manager with a list of alternative IP addresses and ports, so that the queue manager can reconnect to the WAS gateway messaging engine after the messaging engine fails over to a different IP address and port. In WAS, we must set a high availability policy of "One of N" for the gateway messaging engine. For more information about the IBM MQ MR01 SupportPac, see MR01: Creating a HA Link between IBM MQ and a Service Integration Bus.

  2. If we are using IBM MQ Version 7.0.1 or later, use the connection name (CONNAME) to specify a connection list. Although typically only one machine name is required, we can provide multiple machine names to configure multiple connections with the same properties. The connections are tried in the order in which they are specified in the connection list until a connection is successfully established. If no connection is successful, the channel starts retry processing. When using this option, specify the CONNAME as a comma-separated list of machine names for the stated transport type, making sure that the host name or IP address of every WAS cluster member is listed directly in the CONNAME, as shown in the sketch after this list. For further information about using the CONNAME, see the IBM MQ information center.

    IBM MQ Version 7.0.1 does not require SupportPac MR01, because this release includes function equivalent to that provided by SupportPac MR01 for earlier releases. The ability to use the CONNAME to specify a connection list was added as part of the support for multi-instance queue managers in IBM MQ Version 7.0.1; however, it can also be used as another way to ensure that the connection to a failover WAS gateway messaging engine is reinstated.

  3. Use an external high availability framework, such as HACMP, to manage a resource group containing the gateway messaging engine. When we use an external high availability framework, the IP address can be failed over to the machine that runs the application server to which the gateway messaging engine has moved. Follow this procedure to handle the IP address correctly:

    • Set a high availability policy of "No operation" for the messaging engine, so that the external high availability framework controls when and where the messaging engine runs.

    • Create resources for the messaging engine and its IP address in the resource group managed by the external high availability framework.

    • Consider locating the messaging engine data store in the same resource group as the resource that represents the messaging engine.
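As a sketch of option 2, and on the assumption that each WAS cluster member accepts inbound MQ link connections on the default port 5558, the gateway queue manager's sender channel could be defined with a multi-entry CONNAME as follows. All host, channel, and queue names are hypothetical, and the MQSC is piped into runmqsc from Python for consistency with the other sketches in this topic:

    # Python sketch: give the gateway queue manager's sender channel one
    # CONNAME entry per WAS cluster member (IBM MQ Version 7.0.1 or later).
    # Host names, the port, and object names are hypothetical examples.
    import subprocess

    mqsc = (
        "DEFINE CHANNEL(GWQM.TO.BUS1) CHLTYPE(SDR) TRPTYPE(TCP) +\n"
        "       CONNAME('washost1(5558),washost2(5558)') +\n"
        "       XMITQ(BUS1QM) REPLACE\n"
    )
    # runmqsc reads MQSC commands from stdin and runs them against GWQM
    subprocess.run(['runmqsc', 'GWQM'], input=mqsc, text=True, check=True)

The channel tries washost1 first and attempts washost2 only if the first connection fails, matching the ordered retry behavior described in option 2.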

To ensure that the connection to a failover IBM MQ gateway queue manager is reinstated, choose one of the following options:

  1. Set up multi-instance queue managers in IBM MQ, as described in the IBM MQ information center and sketched after this list. In the definition of the IBM MQ link sender channel, select Multiple Connection Names List, and specify the host names (or IP addresses) and ports of the servers where the active and standby queue managers are located. If the active gateway queue manager fails, the service integration bus uses this information to reconnect to the standby gateway queue manager.

  2. Create the IBM MQ high-availability cluster using an external high availability framework, such as HACMP, that supports IP address takeover. IP address takeover ensures that the gateway queue manager in its new location appears as the same queue manager to the service integration bus.
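As a sketch of option 1, the commands below (wrapped in Python subprocess calls for consistency with the other sketches) create a multi-instance gateway queue manager whose data and logs are held on shared network storage, and start an active and a standby instance. All paths and names are hypothetical examples:

    # Python sketch: set up a multi-instance gateway queue manager.
    # The queue manager name and the shared storage paths are hypothetical.
    import subprocess

    # On the first server: create the queue manager with its data and logs
    # on shared storage, then start it permitting a standby instance (-x).
    subprocess.run(['crtmqm', '-md', '/shared/qmgrs', '-ld', '/shared/logs', 'GWQM'], check=True)
    subprocess.run(['strmqm', '-x', 'GWQM'], check=True)

    # On the second server: register the shared queue manager definition,
    # then start it; because an active instance exists, it becomes the standby.
    subprocess.run(['addmqinf', '-s', 'QueueManager',
                    '-v', 'Name=GWQM',
                    '-v', 'Directory=GWQM',
                    '-v', 'Prefix=/var/mqm',
                    '-v', 'DataPath=/shared/qmgrs/GWQM'], check=True)
    subprocess.run(['strmqm', '-x', 'GWQM'], check=True)

The Multiple Connection Names List in the IBM MQ link sender channel definition would then name both servers' hosts and ports, so that the bus can reach whichever instance is currently active.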

The gateway queue manager and the gateway messaging engine store status information that they use to prevent loss or duplication of messages when they restart communication following a failure. This means that the gateway messaging engine must always reconnect to the same gateway queue manager.

If we use IBM MQ for z/OS queue-sharing groups, we can configure the IBM MQ link to use shared channels for the connection. Shared channels provide superior availability compared to the high-availability clustering options available on other IBM MQ platforms, because a shared channel can reconnect to a different queue manager in the same queue-sharing group. Reconnecting to another member of the queue-sharing group is typically faster than waiting for the same queue manager to restart in the same or a different location.
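For illustration, a shared receiver channel is defined with a group disposition, which stores the definition in the queue-sharing group's shared repository so that any member queue manager can run the channel. The sketch below only prints the MQSC, because on z/OS such commands are issued through the operator console or CSQUTIL rather than runmqsc; the channel name is hypothetical:

    # Python sketch: MQSC for a shared receiver channel in a z/OS
    # queue-sharing group. QSGDISP(GROUP) makes the definition available
    # to every queue manager in the group. The channel name is hypothetical.
    mqsc = (
        "DEFINE CHANNEL(BUS1.TO.QSG1) CHLTYPE(RCVR) TRPTYPE(TCP) +\n"
        "       QSGDISP(GROUP)\n"
    )
    print(mqsc)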

Although the network topology described in this section can provide availability and scalability, the relationship between the workload on different queue managers and the WAS application servers to which they are connected is complex. We can contact our IBM representative to obtain expert advice.


Multiple WAS cells connected to an IBM MQ network

In this example scenario, a business has two geographically separated WAS cells and wants to connect them to the same enterprise-wide IBM MQ network. Each service integration bus has its own gateway messaging engine, which connects through an IBM MQ link to a nearby IBM MQ gateway queue manager.

With this network topology, WAS applications running in either WAS cell can exchange point-to-point or (with a publish/subscribe bridge) publish/subscribe messages with IBM MQ applications. They can also use the facilities of the enterprise-wide IBM MQ network to exchange messages with WAS applications running in the other WAS cell. As in the previous scenario, the business can use high availability frameworks in WAS and IBM MQ to provide increased availability and scalability.


Related:

  • High availability of messaging engines connected to IBM MQ
  • Message exchange through an IBM MQ link
  • External high availability frameworks and service integration
  • Create a new IBM MQ link
  • IBM MQ library
  • IBM MQ link sender channel [Settings]