
Network topologies for interoperation using a WebSphere MQ link

These examples show a range of network topologies, from simple to complex, that enable WebSphere Application Server to interoperate with WebSphere MQ using a WebSphere MQ link.

For completeness, this topic describes a wide range of topologies, including clustered and highly available topologies. Note that to use clustering and high availability, you require the network deployment or z/OS version of the product.


Single WebSphere Application Server application server connected to a single WebSphere MQ queue manager

In this basic scenario, a WebSphere MQ link connects a single WebSphere Application Server application server to a WebSphere MQ queue manager. The WebSphere Application Server messaging engine that connects to WebSphere MQ using the WebSphere MQ link is called the gateway messaging engine. The WebSphere MQ queue manager or queue-sharing group to which the WebSphere MQ link connects is called the gateway queue manager.

Figure 1. Single application server connected to a gateway queue manager

WebSphere MQ links always use TCP/IP connections, even if the WebSphere MQ queue manager is running on the same host as the application server. You do not need to specify a client or bindings transport type for the connection, as you do when WebSphere MQ is the messaging provider.

The WebSphere MQ link consists of one or two message channels to send messages to WebSphere MQ, receive messages from WebSphere MQ, or both. Each message channel uses one TCP/IP connection.

The message channels support point-to-point messaging between WebSphere Application Server applications and WebSphere MQ applications. You can also configure a publish/subscribe bridge on the WebSphere MQ link for publish/subscribe messaging between WebSphere Application Server applications and WebSphere MQ applications. For more details about the WebSphere MQ link and its message channels, see Message exchange through a WebSphere MQ link.


WAS cell connected to a WebSphere MQ network

A single WebSphere MQ link can connect an entire WebSphere Application Server service integration bus, representing multiple application servers, to multiple WebSphere MQ queue managers. The messages that are exchanged between the two networks all pass through the WebSphere MQ link, which connects a single gateway messaging engine in WebSphere Application Server to a single gateway queue manager in WebSphere MQ. The gateway messaging engine and gateway queue manager distribute the messages, which can be point-to-point or publish/subscribe messages, to the appropriate application servers and queue managers in their respective networks.

Figure 2. Multiple application servers connected to multiple queue managers

With this topology, interoperation ceases if any of the following conditions occurs:

  • The gateway messaging engine becomes unavailable.
  • The gateway queue manager becomes unavailable.
  • The network connection between the gateway messaging engine and the gateway queue manager is broken.

In these situations, none of the application servers in the WAS cell can communicate with any of the queue managers in WebSphere MQ. In the event of a failure, messages are queued as follows:

  • Messages from WebSphere Application Server applications that are destined for WebSphere MQ are queued in the service integration bus until the WebSphere MQ link is reestablished.
  • Messages from WebSphere MQ applications that are destined for WebSphere Application Server are queued on the transmission queue of the gateway queue manager until the connection is reestablished.

You can improve the robustness of this topology and introduce greater availability by setting up high availability frameworks in WebSphere Application Server and WebSphere MQ.


High availability for a WAS cell connected to a WebSphere MQ network

The WebSphere Application Server high availability framework eliminates single points of failure and provides peer-to-peer failover for applications and processes running within WebSphere Application Server. This framework also allows integration of WAS into an environment that uses other high availability frameworks, such as High Availability Cluster Multi-Processing (HACMP), in order to manage non-WebSphere Application Server resources.

Both WebSphere Application Server application servers and WebSphere MQ queue managers can be arranged in clusters, so that if one fails, the others can continue running. In the network topology shown here, the WAS cell containing the service integration bus now includes a WAS cluster which provides backup for the gateway messaging engine. If the gateway messaging engine fails, it can restart in another application server in the cluster, and it can then restart the WebSphere MQ link to the gateway queue manager. Similarly, the gateway queue manager is part of a WebSphere MQ high-availability cluster.

Figure 3. High availability for multiple application servers connected to multiple queue managers

For WebSphere Application Server and WebSphere MQ to interoperate in this network topology, you must add support for changes of IP address. The WebSphere MQ gateway queue manager uses one IP address to reach the WAS gateway messaging engine, and the WAS gateway messaging engine uses one IP address to reach the WebSphere MQ gateway queue manager. In a high availability configuration, if the gateway messaging engine fails over to a different application server, or the gateway queue manager fails and is replaced by a failover gateway queue manager, the connection to the original IP address for the failed component is lost. You must ensure that both products are able to reinstate their connection to the component in its new location.

To ensure that the connection to a failover WebSphere Application Server gateway messaging engine is reinstated, choose one of the following options:

  1. For a version of WebSphere MQ that is earlier than Version 7.0.1, install SupportPac MR01 for WebSphere MQ. This SupportPac provides the WebSphere MQ queue manager with a list of alternative IP addresses and ports, so that the queue manager can connect with the WAS gateway messaging engine after the messaging engine fails over to a different IP address and port. In WebSphere Application Server, set a high availability policy of "One of N" for the gateway messaging engine. For more information about the WebSphere MQ MR01 SupportPac, see MR01: Create a HA Link between WebSphere MQ and a Service Integration Bus.

  2. If you are using WebSphere MQ Version 7.0.1, use the connection name (CONNAME) to specify a connection list. Although typically only one machine name is required, you can provide multiple machine names to configure multiple connections with the same properties. The connections are tried in the order in which they are specified in the connection list until a connection is successfully established. If no connection is successful, the channel starts retry processing. When using this option, specify the CONNAME as a comma-separated list of names of machines for the stated TransportType, making sure that all the WAS cluster member IPs are listed directly in the CONNAME. For further information about using the CONNAME, see the WebSphere MQ information center.

    WebSphere MQ Version 7.0.1 does not require SupportPac MR01 because this release includes the equivalent function to that provided by SupportPac MR01 for earlier releases. The ability to use the CONNAME to specify a connection list was added as part of the support for multi-instance queue managers in WebSphere MQ Version 7.0.1; however, it can also be used as another option to ensure that the connection to a failover WebSphere Application Server gateway messaging engine is reinstated.

  3. Use an external high availability framework, such as HACMP, to manage a resource group containing the gateway messaging engine. When you use an external high availability framework, the IP address can be failed over to the machine that runs the application server to which the gateway messaging engine has moved. Follow this procedure to handle the IP address correctly:

    • Set a high availability policy of "No operation" for the messaging engine, so that the external high availability framework controls when and where the messaging engine runs.

    • Create resources for the messaging engine and its IP address in the resource group that is managed by the external high availability framework.

    • Consider locating the messaging engine data store in the same resource group as the resource that represents the messaging engine.
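As a sketch of option 2 above, the sender channel on the gateway queue manager can be defined in MQSC with a comma-separated CONNAME that lists each WAS cluster member. The queue manager name, channel name, transmission queue name, host names, and port 5558 (a typical default for the unsecured WebSphere MQ link endpoint in WebSphere Application Server) are illustrative assumptions; substitute the values from your own configuration.

```
# Illustrative only: define the gateway queue manager's sender channel
# with a connection list (WebSphere MQ Version 7.0.1 or later). All
# names, hosts, and the port are assumptions for this sketch.
runmqsc GATEWAY.QM <<'EOF'
DEFINE CHANNEL('BUS1.TO.QM') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('washost1(5558),washost2(5558)') +
       XMITQ('BUS1') REPLACE
EOF
```

The channel tries washost1 first; if the gateway messaging engine has failed over to the cluster member on washost2, the retry processing establishes the connection there instead.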

To ensure that the connection to a failover WebSphere MQ gateway queue manager is reinstated, choose one of the following options:

  1. Set up multi-instance queue managers in WebSphere MQ, as described in the WebSphere MQ information center. In a definition for the WebSphere MQ link sender channel, select Multiple Connection Names List, and specify the host names (or IP addresses) and ports for the servers where the active and standby queue managers are located. If the active gateway queue manager fails, the service integration bus uses this information to reconnect to the standby gateway queue manager.

  2. Create the WebSphere MQ high-availability cluster using an external high availability framework, such as HACMP, that supports IP address takeover. IP address takeover ensures that the gateway queue manager in its new location appears as the same queue manager to the service integration bus.
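Option 1 above relies on multi-instance queue managers, which are started with the `-x` flag so that a second instance can wait in standby. A minimal sketch, assuming a gateway queue manager named GATEWAY.QM whose data and logs are already on networked storage accessible to both servers:

```
# Illustrative only: the queue manager name and shared-storage setup
# are assumptions for this sketch (WebSphere MQ Version 7.0.1 or later).

# On server A (becomes the active instance):
strmqm -x GATEWAY.QM

# On server B (becomes the standby instance, sharing the same
# queue manager data on networked storage):
strmqm -x GATEWAY.QM

# On either server, display which instance is active and which is standby:
dspmq -x -m GATEWAY.QM
```

If the active instance fails, the standby instance takes over, and the service integration bus reconnects using the hosts and ports specified in the Multiple Connection Names List of the WebSphere MQ link sender channel.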

The gateway queue manager and the gateway messaging engine store status information that they use to prevent loss or duplication of messages when they restart communication following a failure. This means that the gateway messaging engine must always reconnect to the same gateway queue manager.

If we use WebSphere MQ for z/OS queue sharing groups, we can configure the WebSphere MQ link to use shared channels for the connection. Shared channels provide superior availability compared to the high-availability clustering options available on other WebSphere MQ platforms, because shared channels can reconnect to a different queue manager in the same queue sharing group. Reconnecting in the same queue sharing group is typically faster than waiting to restart the same queue manager in the same or a different location.

Although the network topology described in this section can provide availability and scalability, the relationship between the workload on different queue managers and the WAS application servers to which they are connected is complex. Contact your IBM representative to obtain expert advice.


Multiple WAS cells connected to a WebSphere MQ network

In this example scenario, a business has two geographically separated WAS cells, and wants to connect them to the same enterprise-wide WebSphere MQ network. Each service integration bus has its own gateway messaging engine, which connects using a WebSphere MQ link to a nearby WebSphere MQ gateway queue manager.

Figure 4. Geographically separated application servers connected to the same WebSphere MQ network

With this network topology, WebSphere Application Server applications running in either WAS cell can exchange point-to-point or (with a publish/subscribe bridge) publish/subscribe messages with WebSphere MQ applications. They can also use the facilities of the enterprise-wide WebSphere MQ network to exchange messages with WebSphere Application Server applications running in the other WAS cell. As in the previous scenario, the business can use high availability frameworks in WebSphere Application Server and WebSphere MQ to provide increased availability and scalability.


Related concepts

  • High availability of messaging engines that are connected to WebSphere MQ
  • Message exchange through a WebSphere MQ link
  • External high availability frameworks and service integration
  • Create a new WebSphere MQ link

  • WebSphere MQ library


Related information

  • WebSphere MQ link sender channel [Settings]