
coregroupsplit.py script

We can use the coregroupsplit.py script to split the existing cell into multiple core groups. Consider running this script if we have more than 40 WebSphere Application Server-related processes, such as application servers, node agents, and on demand routers (ODRs), defined in the core group.


Purpose

The coregroupsplit.py script divides the existing cell into multiple core groups. If a server changes its core group membership, then you must restart the entire cell to prevent partitions from forming. For this reason, the default options used by this script do not change the core group membership of servers that are members of any core groups other than DefaultCoreGroup.

Running the script attempts to satisfy established best practices for core groups.

This script also tunes the high availability manager for optimum performance. By default, the script configures a core group bridge on each node agent in the cell. The script increases the number of core groups to an optimal level based on the number of nodes and servers in the cell. The node agents that act as core group bridges are configured as part of the DefaultAccessPointGroup access point group in a mesh topology. In the preferred mesh topology, all of the access points are collected into a single access point group, so all bridge interfaces can communicate with each other directly.

Remember that you must give each core group bridge at least 512 MB of JVM heap space.


Location

The coregroupsplit.py script is in the app_server_root/bin directory.


Usage

The default script usage follows:
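A minimal sketch of the default invocation, assuming the script is launched through the wsadmin Jython interpreter from the app_server_root/bin directory (substitute wsadmin.bat on Windows):

  wsadmin.sh -lang jython -f coregroupsplit.py

Running the script with no parameters applies the default behavior described in the Purpose section.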

Running this script might result in unbalanced core groups, in which some core groups contain more members than others. We can rerun this script to rebalance core group membership, but in that case you must restart the entire cell for the changes to take effect. To rerun the script, use the following command:
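Assuming the same wsadmin-based invocation as in the previous sketch, the rerun adds the -reconfig parameter:

  wsadmin.sh -lang jython -f coregroupsplit.py -reconfig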


Parameters

-reconfig

Performs a full reconfiguration to rebalance the distribution of servers among the core groups.

-linked

Creates a ring topology of core group bridges.

-createbridges

Creates separate core group bridge processes instead of creating the bridge in the node agent.

-numcoregroups

Number of core groups to create.

-datastacksize

Overrides the default data stack size. Specify the value in megabytes.

-proxycoregroup

Places the on demand routers (ODR) and proxy servers in a separate core group.

-odrcoregroup

Places the on demand routers (ODR) and proxy servers in a separate core group.

-nosave

Does not save any changes made to the core group. We can use this option to try out parameter settings and test running the script without committing the changes.

-debug

Prints troubleshooting information.

-nodesPerCG:number

Number of node agents required for each core group.

-numberOfServersPerCG:number

Maximum number of servers for each core group.

-bridgeHeapSize:number

Server heap size of the core group bridge in megabytes.
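The parameters that take a value use a colon separator, as shown in the list above. The following sketch combines several of them, again assuming the wsadmin-based invocation; the values 4, 30, and 512 are illustrative only:

  wsadmin.sh -lang jython -f coregroupsplit.py -nodesPerCG:4 -numberOfServersPerCG:30 -bridgeHeapSize:512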


Examples

The following example results in a linked topology where the core group bridges are connected in a ring:
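A minimal sketch of that invocation, assuming the wsadmin-based launch shown earlier; the -linked parameter is the only part confirmed by this topic:

  wsadmin.sh -lang jython -f coregroupsplit.py -linked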

We can also use this script to create static clusters that are dedicated as core group bridges for communication between core groups. Use the following example:
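A minimal sketch, again assuming the wsadmin-based launch; -createbridges is documented in the Parameters section, and -bridgeHeapSize:512 is an illustrative way to satisfy the 512 MB minimum noted earlier:

  wsadmin.sh -lang jython -f coregroupsplit.py -createbridges -bridgeHeapSize:512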


Related concepts

  • Dynamic clusters


Related tasks

  • Create dynamic clusters