Zone definition examples for the deployment file

Overview

You can specify zones and zone rules using the deployment policy descriptor XML file.

The following examples show how to use zone rules by defining them in a deployment file.


Primary and replica in different zones

This example places primary shards in one zone and replica shards in a different zone, using a single asynchronous replica. All primary shards are placed in the DC1 zone, and replica shards are placed in the DC2 zone.

<?xml version="1.0" encoding="UTF-8"?>

<deploymentPolicy 
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"    
       xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">

<objectgridDeployment objectgridName="library">

    <mapSet 
        name="ms1" 
        numberOfPartitions="13" 
        minSyncReplicas="0"
        maxSyncReplicas="0" 
        maxAsyncReplicas="1">

    <map ref="book" />

    <zoneMetadata>

        <shardMapping 
          shard="P" 
          zoneRuleRef="primaryRule"/>

        <shardMapping 
          shard="A" 
          zoneRuleRef="replicaRule"/>

        <zoneRule name="primaryRule">
            <zone name="DC1" />
        </zoneRule>

        <zoneRule name="replicaRule">
        </zoneRule>

    </zoneMetadata>

    </mapSet>
 
</objectgridDeployment>

</deploymentPolicy>

One asynchronous replica is defined in the ms1 mapSet element, so two shards exist for each partition: a primary and one asynchronous replica. In the zoneMetadata element, a shardMapping element is defined for each shard: P for the primary and A for the asynchronous replica. The primaryRule zone rule defines the zone set for primary shard placement, which contains only zone DC1, so all primary shards are placed in DC1. The replicaRule zone rule contains only zone DC2, so asynchronous replicas are placed in the DC2 zone.
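Zone names like DC1 and DC2 are not declared anywhere in the descriptor itself; a container server joins a zone when it starts, and placement then follows the zone rules. As a minimal sketch for a standalone environment, assuming the startOgServer script and its -zone parameter (the server and file names here are illustrative, and option spellings can vary by product version):

startOgServer.sh dc1container0 -zone DC1 -objectgridFile objectgrid.xml -deploymentPolicyFile deployment.xml
startOgServer.sh dc2container0 -zone DC2 -objectgridFile objectgrid.xml -deploymentPolicyFile deployment.xml

Containers started with -zone DC1 are then candidates for primary shards under primaryRule, and containers started with -zone DC2 are candidates for asynchronous replicas under replicaRule.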

However, this configuration has weaknesses. If the DC2 zone is lost, the replica shards become unavailable and cannot be placed elsewhere, because replicaRule allows only zone DC2. And because replication to DC2 is asynchronous, the loss or failure of a container server in the DC1 zone can lose changes that were not yet replicated, even though a replica has been specified.

To address this possibility, you can either add a zone or add a replica, as described in the following sections.


Add a zone: striping shards

The following deployment policy configures an additional zone and stripes shards across all of the zones:

<?xml version="1.0" encoding="UTF-8"?>

<deploymentPolicy 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd" 
    xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">

    <objectgridDeployment objectgridName="library">

    <mapSet 
        name="ms1" 
        numberOfPartitions="13" 
        minSyncReplicas="0"
        maxSyncReplicas="0" 
        maxAsyncReplicas="1">

         <map ref="book" />

         <zoneMetadata>

          <shardMapping shard="P" 
                        zoneRuleRef="stripeRule"/>

          <shardMapping shard="A" 
                        zoneRuleRef="stripeRule"/>

          <zoneRule name="stripeRule" 
                    exclusivePlacement="true" >

              <zone name="A" />
              <zone name="B" />
              <zone name="C" />

          </zoneRule>
        </zoneMetadata>
     </mapSet>
    </objectgridDeployment>
</deploymentPolicy>

Three zones are defined in this code: A, B, and C. Instead of separate primary and replica zone rules, a single shared zone rule named stripeRule is defined. This rule includes all three zones, with the exclusivePlacement attribute set to true, so the eXtreme Scale placement policy keeps the primary and replica shards of each partition in separate zones. This striping spreads primary and replica shards across all of the zones. Adding the third zone ensures that losing any one zone does not result in data loss and still leaves a primary and replica shard for each partition: a zone failure costs each partition either its primary shard, its replica shard, or neither. For example, if a partition has its primary in zone A and its replica in zone B, and zone A fails, the replica in B is promoted to primary and a new replica is placed in zone C. In general, any lost shard is replaced from the surviving shard in a surviving zone and placed in the other surviving zone.


Add a replica: multiple data centers

The classic two-data-center scenario has high-speed, low-latency networks within each data center, but a high-latency network between the data centers. Synchronous replicas are used within each data center, where the low latency minimizes the effect of replication on response times. Asynchronous replication is used between the data centers, so the high-latency network does not affect response time.

<?xml version="1.0" encoding="UTF-8"?>

<deploymentPolicy 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd" 
    xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">

    <objectgridDeployment objectgridName="library">

     <mapSet name="ms1" 
             numberOfPartitions="13" 
             minSyncReplicas="1"
             maxSyncReplicas="1" 
             maxAsyncReplicas="1">

        <map ref="book" />

        <zoneMetadata>
             <shardMapping shard="P" zoneRuleRef="primarySync"/>
             <shardMapping shard="S" zoneRuleRef="primarySync"/>
             <shardMapping shard="A" zoneRuleRef="async"/>
    
             <zoneRule name="primarySync" exclusivePlacement="false">
                 <zone name="DC1" />
                 <zone name="DC2" />
             </zoneRule>
    
             <zoneRule name="async" exclusivePlacement="true">
                 <zone name="DC1" />
                 <zone name="DC2" />
             </zoneRule>
        </zoneMetadata>
     </mapSet>
    </objectgridDeployment>
</deploymentPolicy>

The primarySync zone rule sets the exclusivePlacement attribute to false, which places the primary and synchronous replica shards of each partition in the same zone.

The async zone rule sets the exclusivePlacement attribute to true, which means that the shard cannot be placed in a zone that already contains another shard from the same partition. As a result, the asynchronous replica is never placed in the same zone as the primary or synchronous replica shard.

This mapSet defines three shards per partition: a primary (P), a synchronous replica (S), and an asynchronous replica (A), so three shardMapping elements are defined, one for each shard.

If a zone is lost, one of two things happens to each partition. If the lost zone held the partition's asynchronous replica, that replica is lost and is not regenerated, because no remaining zone is free of the partition's other shards. If the lost zone held the primary and synchronous replica, the surviving asynchronous replica is promoted to primary, and a new synchronous replica is created in the surviving zone. Because primaries and replicas are striped across the zones, both cases occur across the set of partitions.

With exclusive placement, each shard gets a zone to itself, so you must have at least as many zones in the rule as shards mapped to it. If a rule has one zone, only one shard can be placed there; with two zones, the rule can place up to two shards, and so on.
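The same pattern extends to a third data center when the asynchronous replica must survive the loss of a zone. The following sketch is not part of the original example; it assumes a hypothetical third zone, DC3, added only to the async rule so that the asynchronous replica always has a zone available that is free of the partition's other shards:

<zoneMetadata>
     <shardMapping shard="P" zoneRuleRef="primarySync"/>
     <shardMapping shard="S" zoneRuleRef="primarySync"/>
     <shardMapping shard="A" zoneRuleRef="async"/>

     <zoneRule name="primarySync" exclusivePlacement="false">
         <zone name="DC1" />
         <zone name="DC2" />
     </zoneRule>

     <zoneRule name="async" exclusivePlacement="true">
         <zone name="DC1" />
         <zone name="DC2" />
         <zone name="DC3" />
     </zoneRule>
</zoneMetadata>

With this rule, if the zone that holds a partition's asynchronous replica is lost, the replica can be re-created in whichever remaining zone does not contain the primary or synchronous replica.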



