
Introduction: Clusters

Clusters are groups of servers that are managed together and participate in workload management. A cluster can contain nodes or individual application servers. A node is usually a physical computer system with a distinct host IP address that runs one or more application servers. Clusters can be grouped under the configuration of a cell, which logically associates many servers and clusters that have different configurations and applications, at the discretion of the administrator and according to what makes sense in the organizational environment. Clusters are responsible for balancing workload among servers. Servers that are part of a cluster are called cluster members. When we install an application on a cluster, the application is automatically installed on each cluster member. We can configure a cluster to provide workload balancing with service integration or with message-driven beans in the application server.
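
For example, a cluster and its first member can be created with the wsadmin scripting client. The following Jython sketch is illustrative only: cluster1, node1, and member1 are placeholder names, and the exact step parameters of the createCluster and createClusterMember commands can vary by product version.

  # Create a cluster named cluster1 (placeholder name)
  AdminTask.createCluster('[-clusterConfig [-clusterName cluster1]]')

  # Add the first member on node1; later members are based on this member
  AdminTask.createClusterMember('[-clusterName cluster1 -memberConfig [-memberNode node1 -memberName member1]]')

  # Save the configuration change
  AdminConfig.save()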

Because each cluster member contains the same applications, we can distribute client tasks on distributed platforms according to the capacities of the different machines by assigning weights to each server. Assigning weights to the servers in a cluster improves performance and failover. Tasks are assigned to servers that have the capacity to perform the task operations. If one server is unavailable to perform a task, the task is assigned to another cluster member. This reassignment capability is an obvious advantage over running a single application server, which can become overloaded if it receives too many requests.
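
The effect of server weights can be pictured with a simple proportional selection routine. The following Python sketch is only an illustration of weighted routing; it is not the workload management algorithm that the product actually uses.

  import random

  # Illustrative only: route requests in proportion to member weights
  members = {"member1": 2, "member2": 5, "member3": 3}  # name -> weight

  def choose_member(members):
      names = list(members)
      weights = [members[name] for name in names]
      # A member with a higher weight receives proportionally more requests
      return random.choices(names, weights=weights, k=1)[0]

  print(choose_member(members))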

Normal runtime processing automatically starts all server components during the server startup process. This processing applies to all servers, including servers that are part of a cluster. However, we can configure servers, including servers that are cluster members, such that not all of the server components start during the server startup process. This capability enables the server to consume resources as needed, thereby providing a smaller and more manageable footprint, and normally results in a performance improvement. When we configure cluster members such that not all of the cluster member components start when the cluster or a specific cluster member is started, the cluster member components are dynamically started as they are needed. For example, if an application module starts that requires a specific server component, that component is dynamically started.


Node groups

Any application installed to a cluster must be able to execute on any application server that is a member of that cluster. Because a node group forms the boundaries for a cluster, all of the members of a cluster must be members of the same node group. Therefore, for the application you deploy to run successfully, all of the members of a cluster must be located on nodes that meet the requirements for that application. In a cell that has many different server configurations, it might be difficult to determine which nodes have the capabilities to host the application. A node group can be used to define a group of nodes that have enough in common to host members of a given cluster.

All nodes are members of at least one node group. When we create a cluster, the first application server we add to the cluster defines the node group within which all of the other cluster members must reside. Every other cluster member we add can be located only on a node that is a member of this same node group. When we create a new cluster member in the administrative console, we can create the application server only on a node that is a member of the node group for that cluster. Nodes can be members of multiple node groups. If the node of the first cluster member we add belongs to multiple node groups, the system automatically chooses the node group that bounds the cluster. We can change that node group by modifying the cluster settings; use the Server cluster settings page to change the node group.
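
To see which node group bounds a planned or existing cluster, we can list the node groups in the cell with the wsadmin scripting client. The following Jython sketch assumes that the listNodeGroups and listNodeGroupMembers administrative commands are available in your product version; node1 is a placeholder node name and DefaultNodeGroup is the default node group name.

  # List all node groups defined in the cell
  print(AdminTask.listNodeGroups())

  # List the node groups that contain a specific node (node1 is a placeholder)
  print(AdminTask.listNodeGroups('node1'))

  # List the nodes that belong to a given node group
  print(AdminTask.listNodeGroupMembers('DefaultNodeGroup'))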


Core groups

In a high availability environment, a group of clusters can be defined as a core group. All of the application servers defined as a member of one of the clusters included in a core group are automatically members of that core group. Individual application servers that are not members of a cluster can also be defined as a member of a core group. The use of core groups enables WebSphere Application Server to provide high availability for applications that must always be available to end users. We can also configure core groups to communicate with each other using the core group bridge. The core groups can communicate within the same cell or across cells.
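
In wsadmin, core groups are CoreGroup configuration objects, so we can inspect them with AdminConfig. The following Jython sketch is a minimal example that assumes the default core group, DefaultCoreGroup, exists in the cell; core group bridge settings are configured separately.

  # List the core groups defined in the cell
  print(AdminConfig.list('CoreGroup'))

  # Show the attributes of the default core group, including its core group servers
  defaultCoreGroup = AdminConfig.getid('/CoreGroup:DefaultCoreGroup/')
  print(AdminConfig.show(defaultCoreGroup))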


Cluster members

We can improve system performance if we configure each cluster member so that its components are started dynamically as they are needed, instead of letting all of the components start automatically when the cluster member starts. Selecting this option can improve cluster startup time and reduce the memory footprint of the cluster members. Starting components as they are needed is most effective if all of the applications that are deployed on the cluster are of the same type. For example, this option works better if all of the applications are web applications that use servlets and JSPs. It works less effectively if the applications use a mix of servlets, JSPs, and EJBs.
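
Before applying this option, it can help to enumerate the members of the cluster with the wsadmin scripting client; the option itself is then selected on each member's settings page in the administrative console. In the following Jython sketch, cluster1 is a placeholder cluster name.

  # Find the cluster and list its members (cluster1 is a placeholder)
  cluster = AdminConfig.getid('/ServerCluster:cluster1/')
  for member in AdminConfig.list('ClusterMember', cluster).splitlines():
      print(AdminConfig.showAttribute(member, 'memberName'))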


Avoid trouble

If we have thin clients running in an environment where requests are being routed between multiple cells, or where requests are being routed within a single cell that includes nodes from earlier versions of the product, they might suddenly encounter a situation where the port information about the cluster members of the target cluster has become stale. This situation most commonly occurs when all of the cluster members have dynamic ports and are restarted during a time period when no requests are being sent. A client process in this state eventually attempts to route to the node agent to receive the new port data for the cluster members, and then uses that new port data to route back to the members of the cluster.

If any issues occur that prevent the client from communicating with the node agent, or that prevent the new port data from being propagated between the cluster members and the node agent, request failures might occur on the client. In some cases, these failures are temporary. In other cases, we need to restart one or more processes to resolve a failure.

To circumvent the client routing problems that might arise in these cases, we can configure static ports on the cluster members. With static ports, the port data does not change as a client process gets information about the cluster members. Even if the cluster members are restarted, or there are communication or data propagation issues between processes, the port data the client holds is still valid. This circumvention does not necessarily solve the underlying communication or data propagation issues, but removes the symptoms of unexpected or uneven client routing decisions.
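
One way to assign static ports is with the wsadmin modifyServerPort command, which sets a fixed port on a named endpoint of a cluster member. The following Jython sketch uses placeholder node, member, and port values; confirm the endpoint names that your clients actually route to before applying it.

  # Assign a fixed port to the ORB listener endpoint of one cluster member
  # (node1, member1, and 9811 are placeholders)
  AdminTask.modifyServerPort('member1',
      '[-nodeName node1 -endPointName ORB_LISTENER_ADDRESS -port 9811]')

  # Repeat for the other endpoints and members that clients use, then save
  AdminConfig.save()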
