Configure the core group bridge between core groups in different cells
We have two or more core groups in different cells. A core group bridge can be used when the availability status of servers in different cells must be shared across all of those cells. For example, a WebSphere proxy server might need the ability to route requests to servers in other cells. Cells connected through core group bridges must have unique cell names.
Use core group bridge custom properties to set up advanced configurations for a core group bridge.
When configuring core group bridges, remember the following requirements:
- Whenever a change is made to the core group bridge configuration, including the addition of a new bridge or the removal of an existing bridge, we must fully shut down, and then restart, all core group bridges in the affected access point groups.
- There must be at least one running core group bridge in each core group. If we configure two bridges in each core group, a single server failure does not disrupt bridge functionality, and we can periodically cycle out one of the bridges. If all of the core group bridges in a core group are shut down, the core group state from all foreign core groups is lost.
It is also recommended that:
- Core group bridges be configured in their own dedicated server process, and that these processes have their monitoring policy set for automatic restart.
- For each of our core groups, we set the IBM_CS_WIRE_FORMAT_VERSION core group custom property to the highest value supported in the environment, as shown in the sketch after this list.
- To conserve resources, do not create more than two core group bridge interfaces when defining a core group access point. We can use one interface for workload purposes and another interface for high availability. Ensure that these interfaces are on different nodes for high availability purposes.
- We should typically specify only two bridge interfaces per core group. At least two bridge interfaces are necessary for high availability; more than two add unnecessary memory and CPU overhead.
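If we prefer scripting to the administrative console, a core group custom property such as IBM_CS_WIRE_FORMAT_VERSION can be set with wsadmin. The following Jython sketch is illustrative only: the property value is a placeholder, and the loop assumes that every core group in the cell receives the same setting and does not already define the property.

```
# Illustrative wsadmin (Jython) sketch: set IBM_CS_WIRE_FORMAT_VERSION on every core group
# in the current cell. The value is a placeholder; use the highest version that the
# environment supports. A production script would first check for an existing property.
wireFormatVersion = '6.1.0'   # assumption: adjust to the environment

for coreGroup in AdminConfig.list('CoreGroup').splitlines():
    name = AdminConfig.showAttribute(coreGroup, 'name')
    AdminConfig.create('Property', coreGroup,
                       [['name', 'IBM_CS_WIRE_FORMAT_VERSION'],
                        ['value', wireFormatVersion]],
                       'customProperties')
    print 'Set IBM_CS_WIRE_FORMAT_VERSION=%s on core group %s' % (wireFormatVersion, name)

AdminConfig.save()
```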
When creating a core group bridge between two core groups in different cells, the member communication key value of the access point groups in both cells must match. By default, the member communication key is the name of the access point group. There are two ways to make the keys match:
- Give both access point groups the same name.
- If the names of two access point groups are different, set the member communication key of either access point group to the name of the other access point group.
Configure a core group bridge between core groups in different cells
- Configure bridge interfaces for our core group access point. Configuring a bridge interface indicates that the specified node, server, and chain combination is a core group bridge server. This node and server use the specified chain to communicate with other core groups.
- In the administrative console, click...
Servers > Core Groups > Core group bridge settings > Access point groups > access_point_group > Core group access points
- Select one of the listed core group access points. Then click...
Show detail > Bridge interfaces > New
- Select a node, server, and transport chain for our bridge interface.
- Click Apply.
- Repeat this set of steps to add more bridge interfaces to the core group access point.
Define at least two bridge interfaces for each core group access point to back up the configuration. When defining two core group bridge servers, if one of these two servers fails, the other server handles any pending communication, thereby preventing an interruption in the communication between the core groups.
The bridge interfaces that we select must all have the same transport chain.
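The same bridge interface definition can be created with wsadmin instead of the console. The following Jython sketch is an outline only: the access point, node, server, and chain names are examples, and the BridgeInterface type and attribute names are assumptions that should be verified (for example with AdminConfig.attributes('BridgeInterface')) before the script is relied on.

```
# Illustrative wsadmin (Jython) sketch: add a bridge interface to a core group access point.
# All names below are examples; verify the type and attribute names with
# AdminConfig.attributes('BridgeInterface') in our environment.
accessPointName = 'CGAP_1'     # example core group access point
nodeName        = 'wasna01'    # example node that hosts the bridge server
serverName      = 'nodeagent'  # example bridge server on that node
chainName       = 'DCS'        # transport chain; must match the other bridge interfaces

# Locate the core group access point by name.
cgap = None
for candidate in AdminConfig.list('CoreGroupAccessPoint').splitlines():
    if AdminConfig.showAttribute(candidate, 'name') == accessPointName:
        cgap = candidate
if cgap is None:
    raise Exception('Core group access point %s not found' % accessPointName)

# Create the bridge interface under the access point, then save.
AdminConfig.create('BridgeInterface', cgap,
                   [['node', nodeName], ['server', serverName], ['chain', chainName]],
                   'bridgeInterfaces')
AdminConfig.save()
```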
- If we want the ability to add core group bridge servers to the configuration without restarting the other servers in the configuration, define the cgb.allowUndefinedBridges custom property on all of the access point groups in the configuration.
- In the administrative console, click...
Servers > Core Groups > Core group bridge settings > Access point groups > access_point_group > Custom properties > New
- Type the name as...
cgb.allowUndefinedBridges
...and set the value to any string.
The mere existence of the cgb.allowUndefinedBridges property enables it, which is why the value can be any string; setting the value to false does not disable the property. To disable the property, we must remove it from the list of defined custom properties or change its name.
- Click Apply and save the configuration.
When we complete this step on all the access point groups in the configuration, we can add a bridge interface to one of the cells. We can save the configuration so that it is propagated to all of the nodes. Instead of restarting all of the application servers, we need to restart the new bridge interface server only.
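If scripting is preferred, the same custom property can be added to every access point group with wsadmin. This Jython sketch is an outline only; it assumes that the AccessPointGroup configuration type exposes a customProperties collection, which should be verified (for example with AdminConfig.attributes('AccessPointGroup')) before use.

```
# Illustrative wsadmin (Jython) sketch: add cgb.allowUndefinedBridges to every access
# point group. Assumes AccessPointGroup has a 'customProperties' collection; verify with
# AdminConfig.attributes('AccessPointGroup').
for accessPointGroup in AdminConfig.list('AccessPointGroup').splitlines():
    AdminConfig.create('Property', accessPointGroup,
                       [['name', 'cgb.allowUndefinedBridges'],
                        ['value', 'true']],   # any string enables the property
                       'customProperties')
AdminConfig.save()
```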
- Add peer access points and peer ports to our access point group, as also outlined in the wsadmin sketch after these steps.
If we defined the cgb.allowUndefinedBridges custom property for all of the access point groups in the configuration, we do not need to add peer access points or peer ports to the listener cell.
Add a peer access point for each core group that is in another cell. Within each peer access point, we should configure a peer port that corresponds to each bridge interface in the other cell. Before we add a peer access point, determine the following information about the other cell:
- Cell name
- Core group name
- Core group access point name
- Host and port information. The host and port correspond to the bridge interfaces configured in the other cell. Specify a peer port for each bridge interface in the other cell.
- In the administrative console, click...
Servers > Core Groups > Core group bridge settings > Access point groups > access_point_group > Peer access points > New
- Specify the information for our peer access point.
In addition to specifying a name for the peer access point, complete the following actions:
- Specify the remote cell where the peer access point resides.
- Specify the name of the core group in the remote cell to which the peer access point belongs.
- Select either Use peer ports or Use proxy peer access points, depending on whether the peer access point can be reached directly or can only be reached indirectly through another peer access point.
- Select the level of access that we want a server from another cell to have to the local cell when that server uses this access point to establish communication with the local cell.
- If we select Full access, the communicating server can read data from and write data to the local cell. This level of access is appropriate if there is no reason to restrict read or write access to the local cell.
- If we select Read only, the communicating server can read data from the local cell, but is not permitted to write data to the local cell. This level of access is appropriate if applications running in other core groups need to access data contained in the local cell, but we want to prevent the communicating server from changing that data.
- If we select Write only, the communicating server can write data to the local cell, but is not permitted to read data from the local cell. This level of access is appropriate if applications running in other core groups need to write data to the local cell, but the data stored on the local cell is sensitive. For example, the local cell might contain customer account numbers, and we do not want applications that reside outside of the local cell to read this information.
- Click Next.
- Select Use peer ports. Specify the host and port information for our peer cell. For example, if we defined a bridge interface in cell_x, use that configuration information for our peer port in cell_y.
- Click Next and then Finish. Save the configuration.
- Optional: If more than one bridge interface is defined in our peer cell, add additional peer ports for each bridge interface.
- Click Peer access points > peer_access_point > Show detail > Peer ports > New.
- Enter the host name and port.
- Click Apply, and save the changes.
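For completeness, the peer access point and peer port definitions can also be scripted. The following Jython outline uses placeholder names, and the PeerAccessPoint attribute names (cell, coreGroup, coreGroupAccessPoint, peerEndPoints) are assumptions that should be checked with AdminConfig.attributes('PeerAccessPoint') before a script like this is used.

```
# Illustrative wsadmin (Jython) sketch: define a peer access point with one peer port.
# All names and attribute names below are assumptions; verify them with
# AdminConfig.attributes('PeerAccessPoint') before use.
bridgeSettings = AdminConfig.list('CoreGroupBridgeSettings').splitlines()[0]

# Peer access point that describes the other cell (placeholder values).
peer = AdminConfig.create('PeerAccessPoint', bridgeSettings,
                          [['name', 'CGAP_OTHER'],
                           ['cell', 'otherCell'],
                           ['coreGroup', 'DefaultCoreGroup'],
                           ['coreGroupAccessPoint', 'CGAP_OTHER']],
                          'peerAccessPoints')

# One peer port per bridge interface in the other cell (placeholder host and port).
AdminConfig.create('EndPoint', peer,
                   [['host', 'otherhost.example.com'], ['port', '9353']],
                   'peerEndPoints')

# The console wizard also makes the access point group reference the new peer access
# point; a script would need to update the group's peer access point references as
# well (not shown here).
AdminConfig.save()
```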
- Optional: Configure the high availability manager protocol to establish transparent bridge failover support.
During core group bridge state rebuilds, cross-core group state can be moved between running bridges. This situation might cause the data to be temporarily unavailable until the bridge has completed the rebuild process.
If we are running on Version 7.0.0.1 or later, set the IBM_CS_HAM_PROTOCOL_VERSION core group custom property to 6.0.2.31 for all of our core groups to avoid a possible high availability state outage during core group bridge failover. When this custom property is set to 6.0.2.31, the remaining bridges recover the high availability state of the failed bridge without the data being unavailable in the local core group.
Complete the following actions to set the IBM_CS_HAM_PROTOCOL_VERSION core group custom property to 6.0.2.31 for all of our core groups.
- Shut down all core group bridges in all of our core groups.
- Repeat the following actions for each core group in each of our cells:
- In the administrative console, click...
Servers > Core Groups > Core group settings > core_group_name > Custom properties
- Specify IBM_CS_HAM_PROTOCOL_VERSION in the Name field, and 6.0.2.31 in the Value field.
- Save changes.
- Synchronize our changes across the topology.
- Restart all of the bridges in the topology.
All of the core groups within this topology are using the 6.0.2.31 high availability manager protocol.
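Scripted, this step follows the same custom-property pattern as the earlier IBM_CS_WIRE_FORMAT_VERSION sketch. The following Jython outline assumes it is run against each cell in turn while all core group bridges are shut down.

```
# Illustrative wsadmin (Jython) sketch: set IBM_CS_HAM_PROTOCOL_VERSION on every core
# group in the current cell. Run against each cell while all core group bridges are
# shut down, synchronize the nodes, and then restart the bridges.
for coreGroup in AdminConfig.list('CoreGroup').splitlines():
    AdminConfig.create('Property', coreGroup,
                       [['name', 'IBM_CS_HAM_PROTOCOL_VERSION'],
                        ['value', '6.0.2.31']],
                       'customProperties')
AdminConfig.save()
```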
We configured the core group bridge between core groups in different cells.
The following figure illustrates the resulting core group bridge between the two core groups that are in two different cells. Each cell has a defined access point group containing one core group access point for the core group that is in the cell and a peer access point for the other cell.
Example
The following example illustrates the configuration steps that are performed to set up a core group bridge between two cells. In this example:
- The two cells are referred to as the primary cell and the remote cell.
- wasdmgr02/dmgr/DCS is the name of the deployment manager on the primary cell, and wasdmgr03/dmgr/DCS is the name of the deployment manager on the remote cell.
- wasna01/nodeagent/DCS is the name of a node agent on both the primary cell and the remote cell.
- CGAP_1/DefaultCoreGroup is the name of the core group access point and core group on both the primary cell and the remote cell.
- In the administrative console for the primary cell, click...
Servers > Core groups > Core group bridge settings > Access point groups > DefaultAccessPointGroup > Core group access points
- Select CGAP_1/DefaultCoreGroup, and then click Show Detail.
- Select Bridge interfaces > New.
- In the Bridge interfaces field, select the deployment manager, wasdmgr02/dmgr/DCS, from the list of available bridge interfaces, and then click OK.
- Click New to create a second bridge interface.
- In the Bridge interfaces field, select a node agent, such as wasna01/nodeagent/DCS, and then click OK to save the changes.
- Go to the administrative console for the remote cell, and click Servers > Core groups > Core group bridge settings > Access point groups > DefaultAccessPointGroup > Core group access points.
- Select CGAP_1/DefaultCoreGroup, and then click Show Detail.
- Select Bridge interfaces > New.
- In the Bridge interfaces field, select the deployment manager, wasdmgr03/dmgr/DCS, from the list of available bridge interfaces, and then click OK.
- Click New to create a second bridge interface.
- In the Bridge interfaces field, select the node agent, wasna01/nodeagent/DCS, from the list of available bridge interfaces, and then click OK to save the changes.
- Save changes.
- Gather the following information for the remote cell (a wsadmin sketch for reading the DCS ports follows this example):
- The DCS port for the deployment manager. Click System administration > Deployment manager > Ports > DCS_UNICAST_ADDRESS, and write down the port number for DCS_UNICAST_ADDRESS. In this example, the DCS port for the deployment manager is 9353.
- The DCS port for the wasna01 node agent. Click System administration >Node agents > wasna01 > Ports > DCS_UNICAST_ADDRESS, and write down the port number for DCS_UNICAST_ADDRESS. In this example, the DCS port for the node agent is 9454.
- The name of the core group in the cell to which the Enterprise JavaBeans (EJB) cluster belongs. Click Servers > Core groups > Core group settings > DefaultCoreGroup > Core group members, verify that the servers are members of the DefaultCoreGroup core group, and then write down the core group name. In this example, the core group name is DefaultCoreGroup.
- The name of the cell. Click System administration > Cell, and then write down the name that displays in the Name field. In this example, the name of the cell is wascell03.
- The name of the core group access point. Click Servers > Core groups > DefaultCoreGroup > Core group bridge settings, expand the DefaultAccessPointGroup field, and write down the name of the core group access point that displays when you expand Core Group DefaultCoreGroup. In this example the name of the core group access point is CGAP_1.
- Go back to the administrative console for the primary cell and gather the same information about the primary cell. In this example:
- The DCS port for the deployment manager on the primary cell is 9352.
- The DCS port for the wasna01 node agent on the primary cell is 9353.
- The name of the core group in the cell to which the EJB cluster belongs is DefaultCoreGroup.
- The name of the cell is wascell02.
- The name of the core group access point is CGAP_1.
- Create a new peer access point that points to the remote cell. In the primary cell administrative console, click...
Servers > Core groups > Core group bridge settings > Access point groups > DefaultAccessPointGroup > Peer access points.
- Click New to start the Create new peer access point wizard.
- Specify the name of the new peer access point, RemoteCellGroup, in the Name field, wascell03 in the Remote cell name field, DefaultCoreGroup in the Remote cell core group name field, and CGAP_1 in the Remote cell core group access point name field.
- Click Next, and then select either Use peer ports or Use a proxy peer access point. For this example, we select Use peer ports, and specify washost02 in the Host field, and 9353 in the Port field. These values are the host name and DCS port number for the deployment manager on the remote cell.
- Click Next, confirm that the information specified for the new peer access point is correct, and then click Finish.
- Add a peer port for the node agent on the remote cell.
- Select the peer access point that we just created, RemoteCellGroup/wascell03/DefaultCoreGroup/CGAP_1, and then click Show Detail.
- In the Peer addressability section, select Peer ports, and then click Peer ports > New.
- Specify washost04 in the Host field, and 9454 in the Port field. These values are the host name and DCS port number for the node agent on the remote cell.
- Click OK and then click Save to save the changes to the master configuration.
- To start the Create new peer access point wizard and create peer access points in the remote cell, go to the remote cell administrative console, and click...
Servers > Core groups > Core group bridge settings > Access point groups > DefaultAccessPointGroup > Peer access points > New
- Specify the name of the new peer access point, PrimaryCellGroup, in the Name field, wascell02 in the Remote cell name field, DefaultCoreGroup in the Remote cell core group name field, and CGAP_1 in the Remote cell core group access point name field.
- Click Next, and then select either Use peer ports or Use a proxy peer access point. For this example, we select Use peer ports, and specify washost01 in the Host field, and 9352 in the Port field. These values are the host name and DCS port number for the deployment manager on the primary cell.
- Click Next, confirm that the information specified for the new peer access point is correct, and then click Finish.
- Add a peer port for the node agent on the primary cell.
- Select the peer access point that we just created, PrimaryCellGroup/wascell02/DefaultCoreGroup/CGAP_1, and then click Show Detail.
- In the Peer addressability section, select Peer ports, and then click Peer ports > New.
- Specify washost03 in the Host field, and 9353 in the Port field. These values are the host name and DCS port number for the node agent on the primary cell.
- Click OK and then click Save to save the changes to the master configuration.
- Restart both cells.
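The DCS_UNICAST_ADDRESS ports gathered in this example can also be read with wsadmin rather than the console. The following Jython sketch is illustrative; it assumes the serverindex.xml configuration types ServerEntry and NamedEndPoint and reuses the example node and server names from this discussion.

```
# Illustrative wsadmin (Jython) sketch: read the DCS_UNICAST_ADDRESS port for a server.
# The node and server names are the examples used above; adjust them for our topology.
nodeName   = 'wasna01'
serverName = 'nodeagent'

node = AdminConfig.getid('/Node:%s/' % nodeName)
for serverEntry in AdminConfig.list('ServerEntry', node).splitlines():
    if AdminConfig.showAttribute(serverEntry, 'serverName') == serverName:
        for namedEndPoint in AdminConfig.list('NamedEndPoint', serverEntry).splitlines():
            if AdminConfig.showAttribute(namedEndPoint, 'endPointName') == 'DCS_UNICAST_ADDRESS':
                endPoint = AdminConfig.showAttribute(namedEndPoint, 'endPoint')
                print '%s/%s DCS port: %s' % (nodeName, serverName,
                                              AdminConfig.showAttribute(endPoint, 'port'))
```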
What to do next
Continue configuring the high availability environment.
Related:
- Core group communications using the core group bridge service
- Core groups (high availability domains)
- Set up a high availability environment
- Create backup clusters
- Core group bridge settings
- Core group bridge custom properties