WebSphere eXtreme Scale Product Overview > Availability > Replication for availability > Use zones for replica placement
Zone-preferred routing
With zone-preferred routing, eXtreme Scale can direct transactions to the zones that a client specifies as preferred.
WebSphere eXtreme Scale allows you to exercise a significant amount of control over where the shards of an ObjectGrid are placed. See Use zones for replica placement for information on some basic scenarios and how to configure the deployment policy accordingly.
Zone-preferred routing allows eXtreme Scale clients to specify a preference for a particular zone or set of zones, so that eXtreme Scale attempts to route client transactions to the preferred zones before routing to any other zone.
Requirements for zone-preferred routing
There are several factors to consider before using zone-preferred routing. Verify that your application can satisfy the requirements of your scenario.
Per-container partition placement is required to use zone-preferred routing. This placement strategy is a good fit for applications that store session data in the ObjectGrid. The default partition placement strategy for WebSphere eXtreme Scale is fixed-partition: keys are hashed at transaction commit time to determine which partition holds the key-value pair of the map.
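The key property of fixed-partition placement can be sketched in a few lines of plain Java. This is an illustrative approximation only, not the actual eXtreme Scale hashing code; the method name and the modulus scheme are assumptions for illustration.

```java
public class FixedPartitionRoutingSketch {

    // Illustrative only: the product's real hash function may differ.
    // Mask off the sign bit so negative hashCode() values map cleanly
    // into the range [0, numberOfPartitions).
    static int partitionForKey(Object key, int numberOfPartitions) {
        return (key.hashCode() & 0x7fffffff) % numberOfPartitions;
    }

    public static void main(String[] args) {
        // The same key always resolves to the same partition, so any
        // client can compute the route locally with no extra state.
        int first = partitionForKey("planet1", 13);
        int second = partitionForKey("planet1", 13);
        System.out.println(first == second); // deterministic routing
    }
}
```

Because the route is a pure function of the key, no handle needs to be kept between transactions; this is exactly what per-container placement gives up.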
Per-container placement assigns the data to a random partition at transaction commit time and records the location in a SessionHandle. You must be able to reuse or reconstruct the SessionHandle to retrieve the data from the ObjectGrid later.
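The contrast with fixed-partition placement can be modeled with a short sketch. The class and method names here are invented for illustration and are not the product API; an int partition id stands in for the SessionHandle to show why the caller must keep a handle.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Toy model, not the product API: per-container placement picks a
// partition at commit time instead of hashing the key, so the caller
// must keep the returned handle to find the data again.
public class PerContainerRoutingSketch {
    private final List<Map<String, String>> partitions = new ArrayList<>();
    private final Random random = new Random();

    PerContainerRoutingSketch(int numberOfPartitions) {
        for (int p = 0; p < numberOfPartitions; p++) {
            partitions.add(new HashMap<>());
        }
    }

    // Insert picks a random partition; the returned id is the only
    // way to route back to the data later.
    int insert(String key, String value) {
        int partition = random.nextInt(partitions.size());
        partitions.get(partition).put(key, value);
        return partition;
    }

    String get(int handle, String key) {
        return partitions.get(handle).get(key);
    }

    public static void main(String[] args) {
        PerContainerRoutingSketch grid = new PerContainerRoutingSketch(3);
        int handle = grid.insert("planet1", "mercury");
        System.out.println(grid.get(handle, "planet1")); // prints "mercury"
    }
}
```

Without the handle there is no way to recompute the partition from the key alone, which is why eXtreme Scale requires the SessionHandle to be reused or reconstructed.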
Because zones give you control over where primaries and their replicas reside in the domain, a multi-zone deployment is advantageous when the data resides in multiple physical locations. Geographically separating primaries and replicas ensures that the catastrophic loss of one data center does not impact the availability of the data.
When data is spread across a multi-zone topology, it is likely that clients are also spread across the topology. Routing clients to their local zone or datacenter has the obvious performance benefit of reduced network latency. Take advantage of this scenario when possible.
Configure the topology for zone-preferred routing
Consider the following scenario. You have two data centers: Chicago and London. To minimize client response time, you want clients to read and write data in their local data center.
ObjectGrid primary shards must be placed in each data center so that transactions can be written locally from each location. Additionally, clients will need to be aware of zones in order to route to the local zone.
Per-container placement locates new primary shards on each container started. Replicas are placed according to zone and placement rules specified by the deployment policy. By default, a replica is placed in a different zone than its primary. Consider the following deployment policy for this scenario.
<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
    xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
    <objectgridDeployment objectgridName="universe">
        <mapSet name="mapSet1" placementStrategy="PER_CONTAINER"
                numberOfPartitions="3" maxAsyncReplicas="1">
            <map ref="planet" />
        </mapSet>
    </objectgridDeployment>
</deploymentPolicy>
Each container that starts with this deployment policy receives three new primaries, each with one asynchronous replica. Start each container with the appropriate zone name; use the -zone parameter if you launch the containers with the startOgServer script.
For a Chicago container server on UNIX:

startOgServer.sh s1 -objectGridFile ../xml/universeGrid.xml -deploymentPolicyFile ../xml/universeDp.xml -catalogServiceEndpoints MyServer1.company.com:2809 -zone Chicago

On Windows:

startOgServer.bat s1 -objectGridFile ../xml/universeGrid.xml -deploymentPolicyFile ../xml/universeDp.xml -catalogServiceEndpoints MyServer1.company.com:2809 -zone Chicago
If the containers are running in WebSphere Application Server, create a node group and name it with the prefix “ReplicationZone”. Servers that run on the nodes in such a node group are placed into the appropriate zone. For example, servers running on a Chicago node might be in a node group named “ReplicationZoneChicago”. For more information, see the “Associating WebSphere Extended Deployment nodes with zones” section in Use zones for replica placement.
Primaries for the Chicago zone will have replicas in the London zone. Primaries for the London zone will have replicas in the Chicago zone.
Figure 1. Primaries and replicas in zones
Set the preferred zones for the clients. You can provide this information in one of several ways. The most straightforward way is to provide a client properties file to the client JVM. Create a file named objectGridClient.properties and ensure that it is in your class path. For more information, see the Client properties file topic.
Include the preferZones property in the file. Set the property value to the appropriate zone. Clients in Chicago should have the following in the objectGridClient.properties file.
preferZones=Chicago
The property file for London clients should contain
preferZones=London
This property instructs each client to route transactions to its local zone if possible. Data that is inserted into a primary in the local zone will be asynchronously replicated to the foreign zone.
Use the SessionHandle to route to the local zone
The per-container placement strategy does not use a hash-based algorithm to determine the location of the key-value pairs in the ObjectGrid. When you use this placement strategy, your application must use SessionHandles to ensure that transactions are routed to the correct partition. When a transaction is committed, a SessionHandle is bound to the Session if one has not already been set. Alternatively, you can bind the SessionHandle to the Session by calling Session.getSessionHandle before committing the transaction. The following code snippet shows a SessionHandle being bound before the transaction is committed.
Session ogSession = objectGrid.getSession();

// binding the SessionHandle
SessionHandle sessionHandle = ogSession.getSessionHandle();

ogSession.begin();

ObjectMap map = ogSession.getMap("planet");
map.insert("planet1", "mercury");

// tran is routed to partition specified by SessionHandle
ogSession.commit();
Assume that this code was running on a client in the Chicago data center. Because the preferZones property is set to Chicago for this client, the transaction is routed to one of the primary partitions in the Chicago zone: partition 0, 1, 2, 6, 7, or 8.
This SessionHandle is the path back to the partition storing this committed data. The SessionHandle must be reused or reconstructed and set on the Session in order to get back to the partition containing the committed data.
ogSession.setSessionHandle(sessionHandle);
ogSession.begin();

// value returned will be "mercury"
String value = map.get("planet1");

ogSession.commit();
Because this transaction reuses the SessionHandle that was created during the insert transaction, the get transaction is routed to the partition that holds the previously inserted data. If the SessionHandle is not set, the transaction cannot retrieve that data.
How container and zone failures affect zone-based routing
A client with the preferZones property set routes all of its transactions to the specified zone or zones under normal circumstances. However, the loss of a container causes a replica in a foreign zone to be promoted to primary. A client that was previously routing to partitions in the local zone might be forced to route to the foreign zone to retrieve previously inserted data.
Consider the following scenario. A container in the Chicago zone is lost. It previously contained primaries for partitions 0, 1, and 2. The new primaries for these partitions will be in the London zone since this zone was hosting the replicas for these partitions.
Any Chicago client that is using a SessionHandle pointing to one of the failed over partitions will now be routed to London. Chicago clients using new SessionHandles will be routed to Chicago-based primaries.
Similarly, if the entire Chicago zone is lost, all replicas in the London zone will become primaries. In this case, all Chicago clients will have their transactions routed to London.
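The promotion rule behind this failover behavior can be modeled with a short sketch. The class and method names are invented for illustration and are not product code; the model only tracks which zone hosts the primary for each partition and promotes the replica zone when the primary zone is lost.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of zone failover (invented names, not the product API):
// tracks the primary zone per partition and promotes the replica
// zone when the primary zone is lost. The model does not track the
// replicas that are also lost with the failed zone.
public class ZoneFailoverSketch {
    private final Map<Integer, String> primaryZone = new HashMap<>();
    private final Map<Integer, String> replicaZone = new HashMap<>();

    void place(int partition, String primary, String replica) {
        primaryZone.put(partition, primary);
        replicaZone.put(partition, replica);
    }

    // Losing a zone promotes each affected partition's replica zone
    // to primary; existing SessionHandles now route there.
    void loseZone(String zone) {
        for (Map.Entry<Integer, String> entry : primaryZone.entrySet()) {
            if (entry.getValue().equals(zone)) {
                entry.setValue(replicaZone.get(entry.getKey()));
            }
        }
    }

    String routeTo(int partition) {
        return primaryZone.get(partition);
    }

    public static void main(String[] args) {
        ZoneFailoverSketch topology = new ZoneFailoverSketch();
        topology.place(0, "Chicago", "London");
        topology.place(3, "London", "Chicago");
        topology.loseZone("Chicago");
        System.out.println(topology.routeTo(0)); // prints "London"
        System.out.println(topology.routeTo(3)); // unaffected: "London"
    }
}
```

After the simulated loss of Chicago, partition 0 routes to London, matching the scenario above: all Chicago clients have their transactions routed to London until the zone recovers.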