Network topologies for interoperation using a WebSphere MQ link
These examples show a range of network topologies, from simple to complex, that enable WebSphere Application Server to interoperate with WebSphere MQ using a WebSphere MQ link.
- Single WAS application server connected to a single WebSphere MQ queue manager
- WAS cell connected to a WebSphere MQ network
- High availability for a WAS cell connected to a WebSphere MQ network
- Multiple WAS cells connected to a WebSphere MQ network
For completeness, this topic describes a wide range of topologies, including clustered and highly available topologies. Note that, for clustering and high availability, you need to use the Network Deployment or z/OS version of the product.
Single WAS application server connected to a single WebSphere MQ queue manager
In this basic scenario, a WebSphere MQ link connects a single WAS application server to a WebSphere MQ queue manager. The WAS messaging engine that connects to WebSphere MQ using the WebSphere MQ link is called the gateway messaging engine. The WebSphere MQ queue manager or queue-sharing group to which the WebSphere MQ link connects is called the gateway queue manager.
Figure 1. Single application server connected to a gateway queue manager
WebSphere MQ links always use TCP/IP connections, even if the WebSphere MQ queue manager is running on the same host as the application server. We do not need to specify a client or bindings transport type for the connection, as we do when WebSphere MQ is the messaging provider.
The WebSphere MQ link consists of one or two message channels, which send messages to WebSphere MQ, receive messages from WebSphere MQ, or both. Each message channel uses one TCP/IP connection.
The message channels support point-to-point messaging between WAS applications and WebSphere MQ applications. We can also configure a publish/subscribe bridge on the WebSphere MQ link for publish/subscribe messaging between WAS applications and WebSphere MQ applications. For more details about the WebSphere MQ link and its message channels, see Message exchange through a WebSphere MQ link.
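We can create the objects in this basic topology through the administrative console or by wsadmin scripting. The following wsadmin (Jython) sketch outlines the idea: it defines the WebSphere MQ network as a foreign bus and then creates a WebSphere MQ link on the gateway messaging engine. The bus, messaging engine, queue manager, channel, and host names are all placeholders, and the exact parameter names for createSIBForeignBus and createSIBMQLink should be checked against the command reference for your release.

```python
# Minimal wsadmin (Jython) sketch; all names and values are placeholders.

# Represent the WebSphere MQ network as a foreign bus on the service integration bus.
AdminTask.createSIBForeignBus('[-bus myBus -name MQNETWORK '
                              '-routingType Direct -type MQ]')

# Create the WebSphere MQ link on the gateway messaging engine. The sender and
# receiver channel names must match the receiver and sender channels that are
# defined on the gateway queue manager.
AdminTask.createSIBMQLink('[-bus myBus '
                          '-messagingEngine node1.server1-myBus '
                          '-name MYBUS.TO.GATEWAYQM '
                          '-foreignBusName MQNETWORK '
                          '-queueManagerName GATEWAYQM '
                          '-senderChannelName MYBUS.TO.GATEWAYQM '
                          '-hostName mqhost.example.com -port 1414 '
                          '-receiverChannelName GATEWAYQM.TO.MYBUS]')

# Save the configuration changes.
AdminConfig.save()
```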
WAS cell connected to a WebSphere MQ network
A single WebSphere MQ link can connect an entire service integration bus, representing multiple application servers, to multiple WebSphere MQ queue managers. The messages that are exchanged between the two networks all pass through the WebSphere MQ link, which connects a single gateway messaging engine in WAS to a single gateway queue manager in WebSphere MQ. The gateway messaging engine and gateway queue manager distribute the messages, which can be point-to-point or publish/subscribe messages, to the appropriate application servers and queue managers in their respective networks.
Figure 2. Multiple application servers connected to multiple queue managers
With this topology, interoperation ceases if any of the following conditions occurs:
- The WAS application server containing the gateway messaging engine fails.
- The host on which that WAS application server is running fails.
- The WebSphere MQ gateway queue manager fails.
- The host on which the WebSphere MQ gateway queue manager is running fails.
In these situations, none of the application servers in the WAS cell can communicate with any of the queue managers in WebSphere MQ. In the event of a failure, messages are queued as follows:
- If the gateway messaging engine in WAS fails or can no longer communicate with WebSphere MQ, messages that were already queued in the gateway messaging engine, which has store-and-forward capability, are stored there and are sent when interoperation is restored.
- If the gateway messaging engine in WAS fails, messages that were queued in the messaging engines of other application servers are stored in those messaging engines and are sent when the gateway messaging engine is back in operation.
- If the gateway queue manager in WebSphere MQ fails or can no longer communicate with WAS, messages that were already queued in the gateway queue manager are sent when interoperation is restored.
- If the gateway queue manager in WebSphere MQ fails, messages that were queued in other queue managers are sent when the gateway queue manager is back in operation.
We can improve the robustness of this topology and introduce greater availability by setting up high availability frameworks in WAS and WebSphere MQ.
High availability for a WAS cell connected to a WebSphere MQ network
The high availability framework eliminates single points of failure and provides peer-to-peer failover for applications and processes running within WAS. This framework also allows integration of WAS into an environment that uses other high availability frameworks, such as High Availability Cluster Multi-Processing (HACMP™), in order to manage non-WAS resources.
Both WAS application servers and WebSphere MQ queue managers can be arranged in clusters, so that if one fails, the others can continue running. In the network topology shown here, the WAS cell containing the service integration bus now includes a WAS cluster which provides backup for the gateway messaging engine. If the gateway messaging engine fails, it can restart in another application server in the cluster, and it can then restart the WebSphere MQ link to the gateway queue manager. Similarly, the gateway queue manager is part of a WebSphere MQ high-availability cluster.
Figure 3. High availability for multiple application servers connected to multiple queue managers
For WAS and WebSphere MQ to interoperate in this network topology, add support for changes of IP address. The WebSphere MQ gateway queue manager uses one IP address to reach the WAS gateway messaging engine, and the WAS gateway messaging engine uses one IP address to reach the WebSphere MQ gateway queue manager. In a high availability configuration, if the gateway messaging engine fails over to a different application server, or the gateway queue manager fails and is replaced by a failover gateway queue manager, the connection to the original IP address for the failed component is lost. You must ensure that both products are able to reinstate their connection to the component in its new location.
To ensure the connection to a failover WAS gateway messaging engine is reinstated, choose one of the following options:
- If we are using a version of WebSphere MQ that is earlier than v7.0.1, install SupportPac MR01 for WebSphere MQ. This SupportPac provides the WebSphere MQ queue manager with a list of alternative IP addresses and ports, so that the queue manager can reconnect to the WAS gateway messaging engine after the messaging engine fails over to a different IP address and port. In WAS, set a high availability policy of "One of N" for the gateway messaging engine. For more information about the WebSphere MQ MR01 SupportPac, see MR01: Creating a HA Link between WebSphere MQ and a Service Integration Bus.
- If we are using WebSphere MQ v7.0.1, use the connection name (CONNAME) to specify a connection list. Although typically only one machine name is required, we can provide multiple machine names to configure multiple connections with the same properties. The connections are tried in the order in which they are specified in the connection list until a connection is successfully established. If no connection is successful, the channel starts retry processing. When using this option, specify the CONNAME as a comma-separated list of machine names for the stated TransportType, making sure that the addresses of all the WAS cluster members are listed in the CONNAME; a sketch follows this list. For further information about using the CONNAME, see the WebSphere MQ information center.
WebSphere MQ v7.0.1 does not require SupportPac MR01, because this release includes function equivalent to that provided by SupportPac MR01 for earlier releases. The ability to use the CONNAME to specify a connection list was added as part of the support for multi-instance queue managers in WebSphere MQ v7.0.1; however, it can also be used as another option to ensure that the connection to a failover WAS gateway messaging engine is reinstated.
- Use an external high availability framework, such as HACMP, to manage a resource group containing the gateway messaging engine. When we use an external high availability framework, the IP address can be failed over to the machine that runs the application server to which the gateway messaging engine has moved. Follow this procedure to handle the IP address correctly:
- Set a high availability policy of "No operation" for the messaging engine, so the external high availability framework controls when and where the messaging engine runs.
- Create resources for the messaging engine and its IP address in the resource group that is managed by the external high availability framework.
- Consider locating the messaging engine data store in the same resource group as the resource that represents the messaging engine.
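For the CONNAME option described above, the connection list is configured on the gateway queue manager, on the WebSphere MQ sender channel that connects to the WAS gateway messaging engine. The following minimal sketch uses Python only to pipe an MQSC command into runmqsc; the queue manager name, channel name, host names, and port (5558 is shown as a typical inbound MQ link port) are placeholders for your own values.

```python
# Minimal sketch: set a CONNAME connection list on the gateway queue manager's
# sender channel so that it can reach the gateway messaging engine on whichever
# cluster member it is currently running. Names, hosts, and ports are placeholders.
import subprocess

queue_manager = 'GATEWAYQM'
channel = 'GATEWAYQM.TO.MYBUS'

# Comma-separated list of host(port) entries; the connections are tried in order.
conname = 'washost1.example.com(5558),washost2.example.com(5558)'

mqsc = "ALTER CHANNEL('%s') CHLTYPE(SDR) CONNAME('%s')\n" % (channel, conname)

# Pipe the MQSC command into runmqsc for the gateway queue manager.
subprocess.run(['runmqsc', queue_manager], input=mqsc, text=True, check=True)
```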
To ensure the connection to a failover WebSphere MQ gateway queue manager is reinstated, choose one of the following options:
- Set up multi-instance queue managers in WebSphere MQ, as described in the WebSphere MQ information center. In the WebSphere MQ link sender channel definition, select Multiple Connection Names List, and specify the host names (or IP addresses) and ports for the servers where the active and standby queue managers are located; see the sketch after this list. If the active gateway queue manager fails, the service integration bus uses this information to reconnect to the standby gateway queue manager.
- Create the WebSphere MQ high-availability cluster using an external high availability framework, such as HACMP, that supports IP address takeover. IP address takeover ensures the gateway queue manager in its new location appears as the same queue manager to the service integration bus.
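For the multi-instance option described above, the connection name list is held on the WebSphere MQ link sender channel in WAS. The following minimal wsadmin (Jython) sketch shows the idea; the bus, messaging engine, link, host, and port values are placeholders, and the -connameList parameter name is an assumption that should be verified against the modifySIBMQLink command reference for your release.

```python
# Minimal wsadmin (Jython) sketch: point the WebSphere MQ link at both the
# active and the standby instance of the gateway queue manager.
# The -connameList parameter name is an assumption; verify it in the
# modifySIBMQLink command reference. Hosts and ports are placeholders.
AdminTask.modifySIBMQLink('[-bus myBus '
                          '-messagingEngine node1.server1-myBus '
                          '-name MYBUS.TO.GATEWAYQM '
                          '-connameList "activehost.example.com(1414),standbyhost.example.com(1414)"]')

# Save the configuration changes.
AdminConfig.save()
```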
The gateway queue manager and the gateway messaging engine store status information they use to prevent loss or duplication of messages when they restart communication following a failure. This means the gateway messaging engine must always reconnect to the same gateway queue manager.
If we use WebSphere MQ for z/OS queue sharing groups, we can configure the WebSphere MQ link to use shared channels for the connection. Shared channels provide superior availability compared to the high-availability clustering options available on other WebSphere MQ platforms, because shared channels can reconnect to a different queue manager in the same queue sharing group. Reconnecting in the same queue sharing group is typically faster than waiting to restart the same queue manager in the same or a different location.
Although the network topology described in this section can provide availability and scalability, the relationship between the workload on different queue managers and the WAS application servers to which they are connected is complex. Contact your IBM representative to obtain expert advice.
Multiple WAS cells connected to a WebSphere MQ network
In this example scenario, a business has two geographically separated WAS cells, and wants to connect them to the same enterprise-wide WebSphere MQ network. Each service integration bus has its own gateway messaging engine, which connects using a WebSphere MQ link to a nearby WebSphere MQ gateway queue manager.
Figure 4. Geographically separated application servers connected to the same WebSphere MQ network
With this network topology, WAS applications running in either WAS cell can exchange point-to-point or (with a publish/subscribe bridge) publish/subscribe messages with WebSphere MQ applications. They can also use the facilities of the enterprise-wide WebSphere MQ network to exchange messages with WAS applications running in the other WAS cell. As in the previous scenario, the business can use high availability frameworks in WAS and WebSphere MQ to provide increased availability and scalability.
Related concepts:
High availability of messaging engines that are connected to WebSphere MQ
Message exchange through a WebSphere MQ link
External high availability frameworks and service integration
Related tasks:
Create a new WebSphere MQ link
Related information:
WebSphere MQ link sender channel [Settings]