Tuning Load Balancer parameters


From monitoring the load balancing behavior, you can draw conclusions for tuning both the Web servers and Load Balancer. In Chapter 18, Monitor and tune Web servers, we describe how each individual Web server can be configured for good WebSphere Commerce performance. In this section, we describe how Load Balancer can be configured to distribute requests among the Web servers in a way that yields good performance and scalability.

Most importantly, you should configure Load Balancer's Dispatcher component to use the manager and advisor subcomponents, which provide server weights to the executor component for load balancing. Optionally, you can use Metric Server, which must be installed on the balanced servers to send operating system-level statistics back to the manager component. While the use of Metric Server is beyond the scope of this book, the manager and advisors are used in all the scenarios described in 7.1.2, IBM WebSphere Edge Components Load Balancer, and, more specifically, in 11.2, Configure Load Balancer.
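As a command-line sketch, the manager and an HTTP advisor can be started with dscontrol; the port number and the proportion values shown here are illustrative assumptions, not recommendations from this book:

```shell
# Start the manager subcomponent so that dynamic server weights
# are fed to the executor
dscontrol manager start

# Start an HTTP advisor on the balanced port (80 assumed here) so that
# advisor response-time measurements contribute to the weight calculation
dscontrol advisor start http 80

# Illustrative manager proportions: active connections, new connections,
# advisor input, Metric Server input (0, because Metric Server is not used)
dscontrol manager proportions 49 49 2 0
```

The same settings can be made in the lbadmin GUI; the command-line form is convenient for scripting a repeatable configuration.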

Dispatcher can be tuned in many ways. For each component (executor, manager, advisor) and each managed object (host, cluster, port, server), there are several parameters. We list some of them here with a focus on optimizations for WebSphere Commerce. Refer to the Load Balancer Administration Guide, GC31-6858, for detailed information.
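For instance, a port-level parameter such as the stale timeout can be changed with a single dscontrol command; the cluster address and the value below are hypothetical:

```shell
# Set the stale timeout (in seconds) for port 80 of a hypothetical
# cluster address; idle connections older than this are discarded
dscontrol port set 10.0.0.100:80 staletimeout 300
```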

To make the settings described below in the GUI of either the MAC forwarding or the NAT forwarding scenario, log on to your Load Balancer node, run dsserver (if necessary) and lbadmin, and connect to your host as described in steps 1 to 4.
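On the command line, this startup sequence might look as follows (a sketch; on many installations dsserver is already running as a system service):

```shell
# Start the Dispatcher server process if it is not already running
dsserver

# Launch the administration GUI; in the GUI, connect to your
# Load Balancer host before making any of the settings below
lbadmin
```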

Note: For some objects, default values for contained objects can be defined. For example, you can set the default port stale timeout at the cluster level for new ports in the cluster. We describe each setting under the component to which it applies; for example, the port stale timeout is explained at the port level.
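As an illustration of this inheritance, a cluster-level default can be set before a port is added (the address and value are hypothetical; verify the cluster-level keyword against the Administration Guide for your release):

```shell
# Define a default stale timeout at the cluster level
dscontrol cluster set 10.0.0.100 staletimeout 300

# Ports added to this cluster afterwards inherit the default
dscontrol port add 10.0.0.100:80
```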

Furthermore, all settings applying to server affinity (for example, port sticky time) are explained in 19.3, Server affinity.
