Using Clusters


Load Balancing in a Cluster

This section describes the load balancing support that a WebLogic Server cluster provides for different types of objects, and related planning and configuration considerations for architects and administrators. It contains the following information:

For information about replication and failover in a cluster, see Failover and Replication in a Cluster.

 


Load Balancing for Servlets and JSPs

Load balancing of servlets and JSPs can be accomplished with the built-in load balancing capabilities of a WebLogic proxy plug-in or with separate load balancing hardware.

In addition to distributing HTTP traffic, external load balancers can distribute initial context requests that come from Java clients over t3 and the default channel. See Load Balancing for EJBs and RMI Objects for a discussion of object-level load balancing in WebLogic Server.

 

Load Balancing with a Proxy Plug-in

The WebLogic proxy plug-in maintains a list of WebLogic Server instances that host a clustered servlet or JSP, and forwards HTTP requests to those instances on a round-robin basis. This load balancing method is described in Round Robin Load Balancing.

The plug-in also provides the logic necessary to locate the replica of a client's HTTP session state if a WebLogic Server instance should fail.

WebLogic Server supports the following Web servers and associated proxy plug-ins:

For instructions on setting up proxy plug-ins, see Configure Proxy Plug-Ins.

How Session Connection and Failover Work with a Proxy Plug-in

For a description of connection and failover for HTTP sessions in a cluster with proxy plug-ins, see Accessing Clustered Servlets and JSPs Using a Proxy.

 

Load Balancing HTTP Sessions with an External Load Balancer

Clusters that utilize a hardware load balancing solution can use any load balancing algorithm supported by the hardware. These can include advanced load-based balancing strategies that monitor the utilization of individual machines.

Load Balancer Configuration Requirements

If you choose to use load balancing hardware instead of a proxy plug-in, it must support a compatible passive or active cookie persistence mechanism, and SSL persistence.

Load Balancers and the WebLogic Session Cookie

A load balancer that uses passive cookie persistence can use a string in the WebLogic session cookie to associate a client with the server hosting its primary HTTP session state. The string uniquely identifies a server instance in the cluster. You must configure the load balancer with the offset and length of the string constant. The correct values for the offset and length depend on the format of the session cookie.

The format of a session cookie is:

sessionid!primary_server_id!secondary_server_id

where sessionid is a randomly generated identifier of the HTTP session, and primary_server_id and secondary_server_id identify the server instances that host the primary and secondary copies of the session state.

For sessions using non-replicated memory, cookie, JDBC, or file-based session persistence, the secondary_server_id is not present. For sessions that use in-memory replication, if the secondary session does not exist, the secondary_server_id is “NONE”.
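The following sketch illustrates how the primary server identifier can be extracted from a cookie value of the format shown above. The cookie value and the parsing approach are illustrative only; the correct offset and length for your load balancer depend on your configured session ID length.

// Minimal sketch: extract the primary server ID from a session cookie value of
// the form sessionid!primary_server_id!secondary_server_id.
public final class SessionCookieParser {

    public static String primaryServerId(String cookieValue) {
        String[] parts = cookieValue.split("!");
        // parts[0] = session ID, parts[1] = primary server ID,
        // parts[2] = secondary server ID (absent, or "NONE" for in-memory
        // replication when no secondary exists)
        return parts.length > 1 ? parts[1] : null;
    }

    public static void main(String[] args) {
        // Hypothetical cookie value for illustration only.
        String value = "AxyQ45Nu02lqSj29!1234567890!NONE";
        System.out.println(primaryServerId(value)); // prints 1234567890
    }
}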

For general instructions on configuring load balancers, see Configuring Load Balancers that Support Passive Cookie Persistence. For instructions on configuring BIG-IP hardware, see Configuring BIG-IP Hardware with Clusters.

Related Programming Considerations

For programming constraints and recommendations for clustered servlets and JSPs, see Programming Considerations for Clustered Servlets and JSPs.

How Session Connection and Failover Works with a Load Balancer

For a description of connection and failover for HTTP sessions in a cluster with load balancing hardware, see Accessing Clustered Servlets and JSPs with Load Balancing Hardware.

 


Load Balancing for EJBs and RMI Objects

This section describes WebLogic Server load balancing algorithms for EJBs and RMI objects.

The load balancing algorithm for an object is maintained in the replica-aware stub obtained for a clustered object.

By default, a WebLogic Server cluster uses round-robin load balancing, described in Round Robin Load Balancing. You can configure a different default load balancing method for the cluster by using the Administration Console to set weblogic.cluster.defaultLoadAlgorithm. For instructions, see Configure Load Balancing Method for EJBs and RMIs. You can also specify the load balancing algorithm for a specific RMI object using the -loadAlgorithm option in rmic, or with the home-load-algorithm or stateless-bean-load-algorithm in an EJB's deployment descriptor. A load balancing algorithm that you configure for an object overrides the default load balancing algorithm for the cluster.

In addition to the standard load balancing algorithms, WebLogic Server supports custom parameter-based routing. For more information, see Parameter-Based Routing for Clustered Objects.

 

Round Robin Load Balancing

WebLogic Server uses the round-robin algorithm as the default load balancing strategy for clustered object stubs when no algorithm is specified. This algorithm is supported for RMI objects and EJBs. It is also the method used by WebLogic proxy plug-ins.

The round-robin algorithm cycles through a list of WebLogic Server instances in order. For clustered objects, the server list consists of WebLogic Server instances that host the clustered object. For proxy plug-ins, the list consists of all WebLogic Server instances that host the clustered servlet or JSP.

The advantages of the round-robin algorithm are that it is simple, cheap and very predictable. The primary disadvantage is that there is some chance of convoying. Convoying occurs when one server is significantly slower than the others. Because replica-aware stubs or proxy plug-ins access the servers in the same order, a slow server can cause requests to “synchronize” on the server, then follow other servers in order for future requests.
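The following is a minimal, conceptual sketch of round-robin selection over an ordered server list. It is illustrative only and is not WebLogic Server's internal stub or plug-in implementation; the server names are hypothetical.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual round-robin selector over an ordered list of server names.
final class RoundRobinSelector {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinSelector(List<String> servers) {
        this.servers = servers;
    }

    String nextServer() {
        // Cycle through the list in order, wrapping around at the end.
        int index = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(index);
    }
}

Because every stub or plug-in walks the same ordered list, a server that responds slowly delays each request that reaches it in turn, which is the convoying effect described above.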

WebLogic Server does not always load balance an object's method calls. For more information, see Optimization for Collocated Objects.

 

Weight-Based Load Balancing

This algorithm applies only to EJB and RMI object clustering.

Weight-based load balancing improves on the round-robin algorithm by taking into account a pre-assigned weight for each server. You can use the Server -> Configuration -> Cluster tab in the Administration Console to assign each server in the cluster a numerical weight between 1 and 100, in the Cluster Weight field. This value determines the proportion of the load the server will bear relative to other servers. If all servers have the same weight, they will each bear an equal proportion of the load. If one server has a weight of 50 and all other servers have a weight of 100, the 50-weight server will bear half as much load as any other server. This algorithm makes it possible to apply the advantages of the round-robin algorithm to clusters that are not homogeneous.
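One conventional way to realize proportional weighting is to repeat each server in the round-robin cycle according to its weight. The sketch below illustrates that idea only; it is not WebLogic Server's internal algorithm, and the server names and weights are hypothetical.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Conceptual weighted round-robin: a server with weight 100 appears twice as
// often in the cycle as a server with weight 50, so it receives twice the load.
final class WeightedSelector {
    static List<String> buildCycle(Map<String, Integer> weights) {
        List<String> cycle = new ArrayList<>();
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            for (int i = 0; i < e.getValue(); i++) {
                cycle.add(e.getKey());
            }
        }
        return cycle; // iterate this list round-robin to honor the weights
    }
}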

If you use the weight-based algorithm, carefully determine the relative weights to assign to each server instance. Factors to consider include:

If you change the specified weight of a server and reboot it, the new weighting information is propagated throughout the cluster via the replica-aware stubs. For related information see Cluster-Wide JNDI Naming Service.

Note: WebLogic Server does not always load balance an object's method calls. For more information, see Optimization for Collocated Objects.

Note: In this version of WebLogic Server, weight-based load balancing is not supported for objects that communicate using the RMI/IIOP protocol.

 

Random Load Balancing

The random method of load balancing applies only to EJB and RMI object clustering.

In random load balancing, requests are routed to servers at random. Random load balancing is recommended only for homogeneous cluster deployments, where each server instance runs on a similarly configured machine. A random allocation of requests does not allow for differences in processing power among the machines upon which server instances run. If a machine hosting servers in a cluster has significantly less processing power than other machines in the cluster, random load balancing will give the less powerful machine as many requests as it gives more powerful machines.

Random load balancing distributes requests evenly across server instances in the cluster, increasingly so as the cumulative number of requests increases. Over a small number of requests the load may not be balanced exactly evenly.

Disadvantages of random load balancing include the slight processing overhead incurred by generating a random number for each request, and the possibility that the load may not be evenly balanced over a small number of requests.
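A conceptual sketch of random selection, again illustrative rather than WebLogic Server's implementation: each request costs one random draw, and per-server counts even out only as the total number of requests grows.

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Conceptual random selector: one random draw per request; the per-server
// request counts converge toward equality only over many requests.
final class RandomSelector {
    static String pick(List<String> servers) {
        return servers.get(ThreadLocalRandom.current().nextInt(servers.size()));
    }
}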

WebLogic Server does not always load balance an object's method calls. For more information, see Optimization for Collocated Objects.

 

Server Affinity Load Balancing Algorithms

WebLogic Server provides three load balancing algorithms for RMI objects that provide server affinity. Server affinity turns off load balancing for external client connections: instead, the client considers its existing connections to WebLogic server instances when choosing the server instance on which to access an object. If an object is configured for server affinity, the client-side stub attempts to choose a server instance to which it is already connected, and continues to use the same server instance for method calls. All stubs on that client attempt to use that server instance. If the server instance becomes unavailable, the stubs fail over, if possible, to a server instance to which the client is already connected.

The purpose of server affinity is to minimize the number of IP sockets opened between external Java clients and server instances in a cluster. WebLogic Server accomplishes this by causing method calls on objects to “stick” to an existing connection, instead of being load balanced among the available server instances. With server affinity algorithms, the less costly server-to-server connections are still load-balanced according to the configured load balancing algorithm; load balancing is disabled only for external client connections.

Server affinity is used in combination with one of the standard load balancing methods: round-robin, weight-based, or random:

Server Affinity and Initial Context

A client can request an initial context from a particular server instance in the cluster, or from the cluster by specifying the cluster address in the URL. The connection process varies, depending on how the context is obtained:

Server Affinity and IIOP Client Authentication Using CSIv2

If you use WebLogic Server's Common Secure Interoperability (CSIv2) functionality to support stateful interactions with WebLogic Server's Java EE Application Client (“thin client”), use an affinity-based load balancing algorithm to ensure that method calls stick to a server instance. Otherwise, all remote calls will be authenticated. To prevent redundant authentication of stateful CSIv2 clients, use one of the load balancing algorithms described in Round-Robin Affinity, Weight-Based Affinity, and Random-Affinity.

Round-Robin Affinity, Weight-Based Affinity, and Random-Affinity

WebLogic Server has three load balancing algorithms that provide server affinity: round-robin-affinity, weight-based-affinity, and random-affinity.

Server affinity is supported for all types of RMI objects including JMS objects, all EJB home interfaces, and stateless EJB remote interfaces.

The server affinity algorithms consider existing connections between an external Java client and server instances in balancing the client load among WebLogic server instances. Server affinity:

Server Affinity Examples

The following examples illustrate the effect of server affinity under a variety of circumstances. In each example, the objects deployed are configured for round-robin-affinity.

Example 1—Context from cluster

In this example, the client obtains context from the cluster. Lookups on the context and object calls stick to a single connection. Requests for a new initial context are load balanced on a round-robin basis. (A minimal client sketch follows the numbered steps below.)

Figure 5-1 Client Obtains Context From the Cluster

  1. Client requests a new initial context from the cluster (Provider_URL=clusteraddress) and obtains the context from MS1.

  2. Client does a lookup on the context for Object A. The lookup goes to MS1.

  3. Client issues a call to Object A. The call goes to MS1, to which the client is already connected. Additional method calls to Object A stick to MS1.

  4. Client requests a new initial context from the cluster (Provider_URL=clusteraddress) and obtains the context from MS2.

  5. Client does a lookup on the context for Object B. The call goes to MS2, to which the client is already connected. Additional method calls to Object B stick to MS2.
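A minimal client sketch of steps 1 through 3, assuming the t3 protocol, a hypothetical cluster address mycluster:7001, and a hypothetical JNDI name ObjectA:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

// Minimal sketch: obtain an initial context from the cluster address, then look
// up a clustered object. The cluster address, port, and JNDI name are hypothetical.
public final class ClusterContextClient {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://mycluster:7001"); // cluster address

        Context ctx = new InitialContext(env);   // step 1: context obtained from the cluster
        Object objectA = ctx.lookup("ObjectA");  // step 2: lookup goes to the same server
        // step 3: method calls on objectA stick to the server that supplied the context
    }
}

The client requires the appropriate WebLogic client classes on its classpath.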

Example 2—Server Affinity and Failover

This example illustrates the effect that server affinity has on object failover. When a Managed Server goes down, the client fails over to another Managed Server to which it has a connection.

Figure 5-2 Server Affinity and Failover

  1. Client requests new initial context from MS1.

  2. Client does a lookup on the context for Object A. The lookup goes to MS1.

  3. Client makes a call to Object A. The call goes to MS1, to which the client is already connected. Additional calls to Object A stick to MS1.

  4. The client obtains a stub for Object C, which is pinned to MS3. The client opens a connection to MS3.

  5. MS1 fails.

  6. Client makes a call to Object A. The client no longer has a connection to MS1. Because the client is connected to MS3, it fails over to a replica of Object A on MS3.

Example 3—Server affinity and server-to-server connections

This example illustrates the fact that server affinity does not affect the connections between server instances.

Figure 5-3 Server Affinity and Server-to-Server Connections

  1. A JSP on MS4 obtains a stub for Object B.

  2. The JSP selects a replica on MS1. For each method call, the JSP cycles through the Managed Servers upon which Object B is available, on a round-robin basis.

 

Parameter-Based Routing for Clustered Objects

Parameter-based routing allows you to control load balancing behavior at a lower level. Any clustered object can be assigned a CallRouter. This is a class that is called before each invocation with the parameters of the call. The CallRouter is free to examine the parameters and return the name of the server to which the call should be routed. For information about creating custom CallRouter classes, see “Parameter-Based Routing for Clustered Objects” in Programming WebLogic RMI.
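The sketch below shows the general shape of a parameter-based router. The real interface is weblogic.rmi.cluster.CallRouter, whose exact signature is documented in Programming WebLogic RMI; the interface, server names, and routing rule here are hypothetical stand-ins.

import java.lang.reflect.Method;

// Hypothetical stand-in for a parameter-based router; the real interface is
// weblogic.rmi.cluster.CallRouter (see Programming WebLogic RMI).
interface ParameterRouter {
    // Return the name of the server that should handle the call, or null to
    // fall back to the configured load balancing algorithm.
    String route(Method method, Object[] params);
}

// Example: route calls whose first parameter is a "W"-prefixed account ID to one
// server, all others to another. Server names and the rule are made up.
final class AccountRouter implements ParameterRouter {
    @Override
    public String route(Method method, Object[] params) {
        if (params != null && params.length > 0 && params[0] instanceof String) {
            String accountId = (String) params[0];
            return accountId.startsWith("W") ? "ServerWest" : "ServerEast";
        }
        return null; // no opinion; use the default algorithm
    }
}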

 

Optimization for Collocated Objects

WebLogic Server does not always load balance an object's method calls. In most cases, it is more efficient to use a replica that is collocated with the stub itself, rather than using a replica that resides on a remote server. The following figure illustrates this.

Figure 5-4 Collocation Optimization Overrides Load Balancer Logic for Method Call

In this example, a client connects to a servlet hosted by the first WebLogic Server instance in the cluster. In response to client activity, the servlet obtains a replica-aware stub for Object A. Because a replica of Object A is also available on the same server instance, the object is said to be collocated with the client's stub.

WebLogic Server always uses the local, collocated copy of Object A, rather than distributing the client's calls to other replicas of Object A in the cluster. It is more efficient to use the local copy, because doing so avoids the network overhead of establishing peer connections to other servers in the cluster.

This optimization is often overlooked when planning WebLogic Server clusters. The collocation optimization is also frequently confusing for administrators or developers who expect or require load balancing on each method call. If your Web application is deployed to a single cluster, the collocation optimization overrides any load balancing logic inherent in the replica-aware stub.

If you require load balancing on each method call to a clustered object, see Recommended Multi-Tier Architecture for information about how to plan your WebLogic Server cluster accordingly.

Transactional Collocation

As an extension to the basic collocation strategy, WebLogic Server attempts to use collocated clustered objects that are enlisted as part of the same transaction. When a client creates a UserTransaction object, WebLogic Server attempts to use object replicas that are collocated with the transaction. This optimization is depicted in the figure below.

Figure 5-5 Collocation Optimization Extends to Other Objects in Transaction

In this example, a client attaches to the first WebLogic Server instance in the cluster and obtains a UserTransaction object. After beginning a new transaction, the client looks up Objects A and B to do the work of the transaction. In this situation WebLogic Server always attempts to use replicas of A and B that reside on the same server as the UserTransaction object, regardless of the load balancing strategies in the stubs for A and B.
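A minimal sketch of this scenario; ObjectA and ObjectB are hypothetical JNDI names, and the UserTransaction is obtained through its standard JNDI name:

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

// Minimal sketch: look up objects inside a transaction. WebLogic Server attempts
// to use replicas collocated with the transaction, regardless of the stubs'
// load balancing algorithms. ObjectA and ObjectB are hypothetical names.
public final class TransactionalClient {
    public static void main(String[] args) throws Exception {
        Context ctx = new InitialContext(); // assumes the JNDI environment is already set
        UserTransaction tx =
            (UserTransaction) ctx.lookup("javax.transaction.UserTransaction");

        tx.begin();
        try {
            Object a = ctx.lookup("ObjectA"); // replica collocated with the transaction
            Object b = ctx.lookup("ObjectB");
            // ... do the transactional work with a and b ...
            tx.commit();
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }
}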

This transactional collocation strategy is even more important than the basic optimization described in Optimization for Collocated Objects. If remote replicas of A and B were used, added network overhead would be incurred for the duration of the transaction, because the peer connections for A and B would be locked until the transaction committed. Furthermore, WebLogic Server would need to employ a multi-tiered JDBC connection to commit the transaction, incurring additional network overhead.

By collocating clustered objects during a transaction, WebLogic Server reduces the network load for accessing the individual objects. The server can also make use of a single-tiered JDBC connection, rather than a multi-tiered connection, to do the work of the transaction.

 


Load Balancing for JMS

WebLogic Server JMS supports server affinity for distributed JMS destinations and client connections.

By default, a WebLogic Server cluster uses the round-robin method to load balance objects. To use a load balancing algorithm that provides server affinity for JMS objects, configure the desired method for the cluster as a whole. You can configure the load balancing algorithm by using the Administration Console to set weblogic.cluster.defaultLoadAlgorithm. For instructions, see Configure Load Balancing Method for EJBs and RMIs.

To provide a persistent store for failover of JMS and JTA pinned services, consider using high-availability clustering software such as VERITAS Cluster Server, which provides an integrated, out-of-the-box solution for WebLogic Server-based applications. Other recommended high-availability software solutions include SunCluster, IBM HACMP, or the equivalent.

 

Server Affinity for Distributed JMS Destinations

Server affinity is supported for JMS applications that use the distributed destination feature; server affinity is not supported for standalone destinations. If you configure server affinity for JMS connection factories, a server instance that is load balancing consumers or producers across multiple members of a distributed destination will first attempt to load balance across any destination members that are also running on the same server instance.

For detailed information on how the JMS connection factory's Server Affinity Enabled option affects the load balancing preferences for distributed destination members, see “How Distributed Destination Load Balancing Is Affected When Using the Server Affinity Enabled Attribute” in Configure WebLogic JMS.

 

Initial Context Affinity and Server Affinity for Client Connections

A system administrator can establish load balancing of JMS destinations across multiple servers in a cluster by configuring multiple JMS servers and using targets to assign them to the defined WebLogic Servers. Each JMS server is deployed on exactly one WebLogic Server and handles requests for a set of destinations. During the configuration phase, the system administrator enables load balancing by specifying targets for JMS servers. For instructions on setting up targets, see Configure Migratable Targets for Pinned Services. For instructions on deploying a JMS server to a migratable target, see Deploying, Activating, and Migrating Migratable Services.

A system administrator can establish cluster-wide, transparent access to destinations from any server in the cluster by configuring multiple connection factories and using targets to assign them to WebLogic Servers. Each connection factory can be deployed on multiple WebLogic Servers. Connection factories are described in more detail in “Connection Factory” in Programming WebLogic JMS.

The application uses the Java Naming and Directory Interface (JNDI) to look up a connection factory and create a connection to establish communication with a JMS server. Each JMS server handles requests for a set of destinations. Requests for destinations not handled by a JMS server are forwarded to the appropriate server.

WebLogic Server provides server affinity for client connections. If an application has a connection to a given server instance, JMS will attempt to establish new JMS connections to the same server instance.

When creating a connection, JMS will try first to achieve initial context affinity. It will attempt to connect to the same server or servers to which the client connected for its initial context, assuming that the server instance is configured for that connection factory. For example, if the connection factory is configured for servers A and B, but the client has an InitialContext on server C, then the connection factory cannot establish the new connection with C, and will instead choose between servers A and B.

If a connection factory cannot achieve initial context affinity, it will try to provide affinity to a server to which the client is already connected. For instance, assume the client has an InitialContext on server A and some other type of connection to server B. If the client then uses a connection factory configured for servers B and C, it will not achieve initial context affinity. The connection factory will instead attempt to achieve server affinity by trying to create a connection to server B, to which it already has a connection, rather than server C.

If a connection factory cannot provide either initial context affinity or server affinity, then the connection factory is free to make a connection wherever possible. For instance, assume a client has an initial context on server A, no other connections and a connection factory configured for servers B and C. The connection factory is unable to provide any affinity and is free to attempt new connections to either server B or C.

In the last case, if the client attempts to make a second connection using the same connection factory, it will go to the same server as it did on the first attempt. That is, if it chose server B for the first connection, when the second connection is made, the client will have a connection to server B and the server affinity rule will be enforced.
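The following minimal sketch shows a client creating a JMS connection through a connection factory. The provider URL and the factory's JNDI name (jms/MyConnectionFactory) are hypothetical; which cluster member the connection targets follows the affinity rules described above.

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

// Minimal sketch: the connection factory, not the client, decides which cluster
// member the JMS connection targets, per the affinity rules described above.
public final class JmsAffinityClient {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://mycluster:7001"); // hypothetical cluster address

        Context ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Connection connection = cf.createConnection(); // affinity rules decide the target server
        connection.close();
        ctx.close();
    }
}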

 


Load Balancing for JDBC Connections

Load balancing of JDBC connections requires the use of a multi data source configured for load balancing. Load balancing support is an option you can choose when configuring a multi data source.

A load balancing multi data source provides the high availability behavior described in Failover and JDBC Connections and, in addition, balances the load among the data sources in the multi data source. A multi data source has an ordered list of the data sources it contains. If you do not configure the multi data source for load balancing, it always attempts to obtain a connection from the first data source in the list. In a load-balancing multi data source, the data sources it contains are accessed using a round-robin scheme. For each successive client request for a multi data source connection, the list is rotated so that the first data source used cycles around the list.
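From the application's point of view, a multi data source is looked up and used like any other data source; the round-robin rotation happens behind the JNDI name. A minimal sketch, assuming a hypothetical JNDI name jdbc/MyMultiDataSource:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Minimal sketch: the application looks up the multi data source by its JNDI name
// and requests connections normally; with load balancing enabled, successive
// requests are served by the member data sources in round-robin order.
public final class MultiDataSourceClient {
    public static Connection getConnection() throws Exception {
        InitialContext ctx = new InitialContext(); // server-side lookup; JNDI env already set
        DataSource ds = (DataSource) ctx.lookup("jdbc/MyMultiDataSource");
        return ds.getConnection();
    }
}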

For instructions on clustering JDBC objects, see Configure Clustered JDBC.