
Configure multi-cell performance management: Peer-Cell Topology

Manage multi-cell performance in the environment to avoid overprovisioning resources such as CPU and memory.

The peer-cell topology works well if we have two or more disjoint data centers, one cell per data center, and we want failover capability between them. In this topology, the on demand routers (ODRs) remain in the cells; however, the two cells are not joined via core-group bridges. In front of our two cells, we have one or more load balancers, plug-ins, or sprayers that both preserve session affinity (as applicable) and equitably distribute traffic.

A peer-cell topology applies to server-virtualized environments, such as AIX LPARs/WPARs, Linux on System z, VMware, and Solaris Zones, as well as non-virtualized environments in which multiple cells do not share the physical hardware. In a peer-cell topology, all cells can perform work: each cell contains ODRs and application servers and makes autonomic decisions to start or stop application servers.

Indications for the peer-cell topology include disjoint data centers, one cell per data center, and a requirement for failover capability between the cells.

The following procedure describes a sample scenario in which multi-cell performance management is configured in a peer-cell topology environment so that work requests can be routed from an ODR to dynamic cluster members across cells. The ODR is installed and running on CellA, which is the center cell. The two point cells, CellB and CellC, contain the dynamic clusters and applications.

If our cells are currently linked, unlink them before configuring the peer-cell topology by using the following procedure:

  1. Run $WAS_HOME/bin/crossCellCGBCfg clear <remote DMGR host> <remote DMGR SOAP port> on the local DMGR node.

  2. Stop all the processes in each cell. Make sure that all processes are stopped before bringing them up again. This ensures that any data from the remote cell is cleared from the on demand configuration (ODC). The file profiles/<profile name>/config/cells/<cell name>/multicelloverlaybridgesettings.xml should not be present.

  3. To verify that the ODC does not contain data from the remote cell, run $WAS_HOME/bin/wsadmin.sh -lang jython -f ve_manageODC.py getTargetTree LocalDmgrNodeName dmgr > target.xml on the local DMGR node and inspect the output.
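The file check in step 2 can be sketched as a small shell script. The profile directory and cell name below are placeholder values for illustration; substitute the paths from your own installation.

```shell
#!/bin/sh
# Placeholder values -- substitute your own profile path and cell name.
PROFILE_DIR="${PROFILE_DIR:-/opt/IBM/WebSphere/AppServer/profiles/Dmgr01}"
CELL_NAME="${CELL_NAME:-CellA}"

BRIDGE_FILE="$PROFILE_DIR/config/cells/$CELL_NAME/multicelloverlaybridgesettings.xml"

# After the full stop/restart in step 2, this file should be gone.
if [ -e "$BRIDGE_FILE" ]; then
    echo "WARN: $BRIDGE_FILE still exists; remote-cell data may remain in the ODC"
else
    echo "OK: no overlay bridge settings file found"
fi
```

If the warning is printed, repeat the stop/restart of all processes in the cell before continuing.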

Use the unlinkCells script to disable communication between Intelligent Management cells that were previously linked with the linkCells script. After we run the unlinkCells script, the on demand router (ODR) no longer routes work requests to the unlinked cell. For information on unlinking cells with the unlinkCells script, see unlinkCells|unlinkCellsZOS script.


Tasks

  1. Create generic server clusters for failover destinations. On each ODR in both cells, create a generic server cluster and add to it the host and port of each ODR in the remote cell.

    1. In the administrative console, navigate to Clusters > Generic server clusters.

    2. Click New

    3. Enter a name for the generic server cluster and select an appropriate protocol.

    4. Click Ports

    5. Click New

    6. Enter the host name and HTTP(S) port of the ODR in the other cell in the Host and Port fields, respectively, and click OK. Repeat this step once for each ODR in the other cell.

    7. Save and synchronize all changes.

  2. In each cell, for each ODR or ODR cluster, create application routing rules if we want failover between cells. Each rule should have a different priority value; rules are evaluated in order of priority. For more information on creating routing rules, see Routing rules.
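The priority-ordered, first-match-wins evaluation described in step 2 can be illustrated with a small shell function. The rule conditions and cluster names below are invented for illustration; the real conditions and targets are defined in the ODR routing rules panel.

```shell
#!/bin/sh
# Illustrative only: mimics how routing rules are checked in ascending
# priority order, with the first matching rule deciding the target.
route_request() {
    host="$1"
    # Priority 1 (hypothetical): requests for appB.example.com go to the
    # generic server cluster that fronts the ODRs in CellB.
    if [ "$host" = "appB.example.com" ]; then
        echo "CellB-GSC"
        return
    fi
    # Priority 2 (hypothetical catch-all): everything else fails over to
    # the generic server cluster that fronts the ODRs in CellC.
    echo "CellC-GSC"
}

route_request appB.example.com    # prints CellB-GSC
route_request anything.else.com   # prints CellC-GSC
```

Because evaluation stops at the first match, giving every rule a distinct priority makes the failover order deterministic.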


Related:

  • Topology Configurations for Multi-Cell Routing
  • Configure multi-cell performance management: Star Topology
  • Set up Intelligent Management for dynamic operations