


Workload sharing with queue destinations

If you add a server cluster to a service integration bus and deploy one or more messaging engines to the cluster, you can configure the cluster bus member for scalability. The messaging engines in the cluster bus member share the messaging workload associated with queue destinations deployed to the cluster.

See Service integration high availability and workload sharing configurations for more information about configuring messaging engines to share workload.

When you deploy a queue destination to a cluster, the queue is automatically partitioned across the set of messaging engines that is associated with the cluster.

  • If there is only one messaging engine in the cluster, the destination is localized by that messaging engine. The destination is not partitioned.

  • If there is more than one messaging engine in the cluster, the destination is partitioned across all messaging engines in the cluster. Each messaging engine deals with a subset of the messages that the destination handles.

The availability characteristics of a partition are the same as those of the messaging engine it is localized by.
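
For illustration, the following wsadmin (Jython) sketch adds a cluster to a bus as a cluster bus member and deploys a queue destination to it. The bus, cluster, and destination names are examples, and the commands and parameters shown are assumptions to verify against your installation; creating the additional messaging engines that cause the queue to be partitioned is not shown.

    # Sketch only: run inside the wsadmin scripting client (Jython).
    # MyBus, MyCluster, and PartitionedQueue are example names.

    # Add the cluster to the bus as a cluster bus member.
    AdminTask.addSIBusMember('[-bus MyBus -cluster MyCluster]')

    # Deploy a queue destination to the cluster. With more than one messaging
    # engine in the cluster bus member, the queue is partitioned into one
    # queue point per messaging engine.
    AdminTask.createSIBDestination('[-bus MyBus -name PartitionedQueue -type QUEUE -cluster MyCluster]')

    # Save the configuration changes.
    AdminConfig.save()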

If a producing or consuming application uses an alias destination that is configured to a subset of queue points, the following behavior applies to the subset of queue points, not the entire set of queue points of the target queue destination.



Sending messages to a partitioned queue destination

Typically, you create a scalable cluster bus member and partition a queue destination if a single server cannot support the message processing load for the queue.

To be used effectively, a partitioned queue requires multiple consumers with at least one consumer consuming from each partition. A typical use is a cluster of message-driven beans (MDBs). For details about how an MDB consumes from a clustered destination, see How a message-driven bean connects in a cluster.

The default behavior of the messaging system if the producing application is connected to a messaging engine of a cluster bus member that hosts the queue destination

By default, the producing application prefers to send all its messages to the local queue point. This behavior maximizes performance of message delivery by minimizing the distance the message has to travel to the queue point.

Figure 1. Default behavior: messages are sent to the local queue point

If the local queue point is not available, the messages are processed as though no local queue point exists. A queue point is not available to new messages if:

  • The messaging engine that owns the queue point is not available (for example, the messaging engine is stopped).
  • The queue point reaches its high message threshold.
  • Sending messages to the queue point has been disabled.

The default behavior of the messaging system if the producing application is connected to a messaging engine of a bus member that does not host the queue destination

By default, the messaging system workload balances the messages across the available queue points.

In this figure, a producing application is connected to a messaging engine of a bus member that does not host the queue destination, and its messages are workload balanced across the available queue points.

Figure 2. Default behavior: messages are workload balanced across all queue points

Configurable behavior when the producing application is connected to a messaging engine of a bus member that hosts the queue destination

If you want all the messages from the producing application to be workload balanced across all queue points of a queue destination, even when the application is connected to a messaging engine that hosts a queue point of that destination, consider disabling the default Prefer local queue point configuration option on the message producer. This option is available to JMS message producers and to messages inbound from foreign bus connections that use WebSphere Application Server Version 7.0 or later.

In this figure, a producing application is connected to a messaging engine of a bus member that hosts the queue destination. The producing application has the "prefer local queue point" option disabled. Its messages are workload balanced across all queue points.

Figure 3. Disable Prefer local queue point: messages are workload balanced across all queue points

However, if a number of producing applications' connections are also workload balanced across all messaging engines in the bus member, consider retaining the default Prefer local queue point behavior, for performance reasons. This is because the default behavior:

  • Workload balances messages from all producing applications across all queue points
  • Minimizes the need to send messages from the connected messaging engine to another messaging engine in the same cluster bus member
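
If you do disable the option, note that it is part of the JMS queue resource configuration rather than of the application code. The following wsadmin (Jython) sketch assumes that the createSIBJMSQueue command exposes the option through a producerPreferLocal parameter; the scope, resource names, and parameter spelling are assumptions to verify against your installation.

    # Sketch only: run inside the wsadmin scripting client (Jython).
    # The producerPreferLocal parameter name and all resource names are assumptions.

    # Choose a scope at which to define the JMS queue resource (cell scope here).
    cellScope = AdminConfig.getid('/Cell:MyCell/')

    # Define a JMS queue for the partitioned destination with the
    # "prefer local queue point" behavior disabled, so that messages are
    # workload balanced across all queue points.
    AdminTask.createSIBJMSQueue(cellScope, '[-name PartitionedJMSQueue '
                                '-jndiName jms/PartitionedQueue '
                                '-busName MyBus '
                                '-queueName PartitionedQueue '
                                '-producerPreferLocal false]')

    AdminConfig.save()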

Configurable behavior when the producing application is connected to a messaging engine of a cluster bus member that does not host the queue destination

To send all messages produced during a single session of an application producer to the same queue point, you can configure Message affinity. The messages are then not workload balanced across multiple queue points.

Consider configuring Message affinity when an application sends sets of messages that must be delivered to the same queue point so that they are processed in order by a single consumer instance. The system selects the single queue point to which all the messages are sent, based on the Prefer local queue point configuration option for that application producer. This option is available to JMS message producers and to messages inbound from foreign bus connections that use WebSphere Application Server Version 7.0 or later.

In this figure, a producing application connects to a messaging engine of a bus member that does not host the queue destination. The producing application has message affinity configured. All messages are sent to one queue point.

Figure 4. Message affinity: All messages produced are sent to the same queue point

If the selected queue point becomes unavailable, any further messages sent from this application are either queued in transit to the selected queue point, or the send operation is rejected. This behavior corresponds to sending messages to a queue destination with a single queue point.
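
Like the prefer local queue point option, message affinity is a property of the message producer configuration rather than of the application code. The following wsadmin (Jython) sketch assumes that the createSIBJMSQueue command surfaces message affinity through a producerBind parameter; treat the parameter name and the resource names as assumptions to verify against your installation.

    # Sketch only: run inside the wsadmin scripting client (Jython).
    # The producerBind parameter name and all resource names are assumptions.

    cellScope = AdminConfig.getid('/Cell:MyCell/')

    # Define a JMS queue whose producers are bound to a single queue point
    # (message affinity): all messages sent in one session go to the same
    # queue point, chosen according to the prefer local queue point setting.
    AdminTask.createSIBJMSQueue(cellScope, '[-name OrderedJMSQueue '
                                '-jndiName jms/OrderedQueue '
                                '-busName MyBus '
                                '-queueName PartitionedQueue '
                                '-producerBind true]')

    AdminConfig.save()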



Consuming messages from a partitioned queue destination

When a consumer session is created, the consumer is bound to one partition of the destination. If the consumer is connected to a messaging engine that has a local partition of the destination, the consumer is bound to that partition. If the consumer is connected to a messaging engine that does not have a partition of the destination, the consumer is bound to a partition in another messaging engine selected dynamically by the workload manager. Once bound, a consumer receives messages from the bound partition only. By default, if the partition to which a consumer is bound does not have any messages, the consumer does not receive messages from alternative partitions, even if such partitions contain messages.

If you configure a partitioned destination in a cluster that does not have local consumers, it is important to have at least one consumer for each partition of the destination, to ensure that all messages are consumed. You can achieve this by targeting individual consumers at specific messaging engines that host a queue point. MDBs are a specific type of message consumer. For details of their behavior when consuming from a partitioned destination, see How a message-driven bean connects in a cluster.
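
One way to target a consumer, such as an MDB, at a specific messaging engine is through the connection target properties of its JMS activation specification. The following wsadmin (Jython) sketch assumes the createSIBJMSActivationSpec command and its target, targetType, and targetSignificance parameters; the parameter values, the messaging engine name, and the other names shown are assumptions to verify against your installation.

    # Sketch only: run inside the wsadmin scripting client (Jython).
    # The target, targetType, and targetSignificance parameters, the
    # messaging engine name, and all other names are assumptions.

    cellScope = AdminConfig.getid('/Cell:MyCell/')

    # Define an activation specification that requires a connection to one
    # named messaging engine, so that the MDB consumes from that messaging
    # engine's queue point of the partitioned destination.
    AdminTask.createSIBJMSActivationSpec(cellScope, '[-name PartitionConsumerAS '
                                         '-jndiName jms/PartitionConsumerAS '
                                         '-destinationJndiName jms/PartitionedQueue '
                                         '-busName MyBus '
                                         '-destinationType javax.jms.Queue '
                                         '-target MyCluster.000-MyBus '
                                         '-targetType ME '
                                         '-targetSignificance Required]')

    AdminConfig.save()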

If you want a consumer to receive messages from all available queue points of a destination, you can configure message gathering on the message consumer, as described later in this topic.

If there is a large number of small messages and the MDBs do only a small amount of processing, you can use workload balanced messaging engines with partitioned queues, as described in this topic. If, however, the MDBs do a greater amount of processing of a smaller number of messages, you might need only one messaging engine, but deploy the MDBs to as many servers as possible, whether or not those servers have messaging engines, and even if those servers are not members of the same cell. A typical situation is when MDBs update a user database. For details about configuring MDB deployment across multiple servers, see Configure MDB throttling for the default messaging provider.

The default behavior of the messaging system when a consumer consumes from a partitioned destination

When a consuming application's session is created, the session is associated with one of the queue points of the queue destination. If the queue destination has multiple queue points, the system chooses one. By default, the messaging system prefers to associate the consumer with the local queue point on the connected messaging engine. If there is no available local queue point on the connected messaging engine, the system chooses a queue point on another messaging engine by using the WebSphere Application Server workload manager.

The default behavior tries to maximize performance of message consumption from queue points by limiting the messages available to the consumer to those on the associated queue point of the consumer. The consumer cannot consume messages from other queue points, even if its associated queue point has no messages, but other queue points have messages.

In this figure, a consuming application is connected to a messaging engine with no local queue point. Only messages from the one associated queue point are consumed.

Figure 5. Default behavior: only messages from the associated queue point are consumed

Configurable behavior of the messaging system when a consumer consumes from a partitioned destination

You can configure a message consumer so that its associated queue point gathers messages from all available queue points of a destination and makes them visible to the consumer.

Consider configuring Message gathering if you want a consumer to treat a partitioned queue as a queue that is not partitioned. However, gathering messages from multiple queue points is significantly slower than consuming from a single queue point. So, if possible, reconfigure the destination to have a single queue point, or use an alias destination to restrict message producers and consumers to a single queue point. If you require scalability of multiple queue points and performance is important, consider alternative solutions to gathering messages.

When message gathering is enabled, the consumer might not see messages in the order in which they are held on the queue points. Therefore, message order is not maintained.

This option is available to JMS message consumers and to messages inbound from foreign bus connections that use WebSphere Application Server Version 7.0 or later.

In this figure, a consuming application connects to a messaging engine with no local queue point. The consuming application has message gathering enabled. The associated queue point gathers messages from all available queue points of a destination and makes them available to the consumer.

Figure 6. Message gathering: messages are consumed from all queue points
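
As with the producer options, message gathering is typically enabled on the JMS queue resource that the consumer uses. The following wsadmin (Jython) sketch assumes that the createSIBJMSQueue command exposes message gathering through a gatherMessages parameter; the parameter name and the resource names are assumptions to verify against your installation.

    # Sketch only: run inside the wsadmin scripting client (Jython).
    # The gatherMessages parameter name and all resource names are assumptions.

    cellScope = AdminConfig.getid('/Cell:MyCell/')

    # Define a JMS queue whose consumers gather messages from all available
    # queue points of the partitioned destination, at the cost of slower
    # consumption and loss of message ordering.
    AdminTask.createSIBJMSQueue(cellScope, '[-name GatheringJMSQueue '
                                '-jndiName jms/GatheringQueue '
                                '-busName MyBus '
                                '-queueName PartitionedQueue '
                                '-gatherMessages true]')

    AdminConfig.save()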


Related

  • Point-to-point messaging across multiple buses
  • Message ordering
  • Queue destinations
  • Strict message ordering for bus destinations
  • Mediation handlers and mediation handler lists
  • Configuration for workload sharing or scalability
  • How a message-driven bean connects in a cluster
  • WebSphere MQ link receiver
  • JMS request and reply messaging with cluster bus members
  • Clusters and workload management
  • Configure high availability and workload sharing of service integration
  • Create a queue for point-to-point messaging
  • Configure MDB throttling for the default messaging provider
  • States of the WebSphere MQ link and its channels
