Deployment policy descriptor XML file
To configure a deployment policy, use a deployment policy descriptor XML file.
In the following sections, the elements and attributes of the deployment policy descriptor XML file are defined. See the deploymentPolicy.xsd file for an example of the deployment policy XML schema.
deploymentPolicy element
The deploymentPolicy element is the top-level element of the deployment policy XML file. This element sets up the namespace of the file and the schema location. The schema is defined in the deploymentPolicy.xsd file.
- Number of occurrences: One
- Child element: objectgridDeployment element
objectgridDeployment element
The objectgridDeployment element is used to reference an ObjectGrid instance from the ObjectGrid XML file. Within the objectgridDeployment element, you can divide your maps into map sets.
- Number of occurrences: One or more
- Child element: mapSet element
Attributes
- objectgridName
- Specifies the name of the ObjectGrid instance to deploy. This attribute references an objectGrid element that is defined in the ObjectGrid XML file. (Required)
<objectgridDeployment objectgridName="objectgridName"/>

For example, the objectgridName attribute is set to CompanyGrid in the companyGridDpReplication.xml file. This value references the CompanyGrid instance that is defined in the companyGrid.xml file. Each deployment policy file must be coupled with the ObjectGrid descriptor XML file for the ObjectGrid instance that it deploys; for details, read about the ObjectGrid descriptor XML file.
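To illustrate this pairing, the following sketch shows how a grid definition in the ObjectGrid descriptor XML file and the corresponding entry in the deployment policy file reference the same name. The map names are taken from the example later in this topic; the actual companyGrid.xml file declares additional namespaces and attributes.

```xml
<!-- companyGrid.xml (ObjectGrid descriptor XML file): defines the grid and its maps -->
<objectGrid name="CompanyGrid">
    <backingMap name="Customer"/>
    <backingMap name="Item"/>
</objectGrid>

<!-- companyGridDpReplication.xml (deployment policy file): references the grid by name -->
<objectgridDeployment objectgridName="CompanyGrid">
    <!-- mapSet elements go here -->
</objectgridDeployment>
```

The objectgridName value in the deployment policy must match the name attribute of an objectGrid element exactly, or the deployment policy cannot be applied.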
mapSet element
The mapSet element is used to group maps together. The maps within a mapSet element are partitioned and replicated similarly. Each map must belong to only one mapSet element.
- Number of occurrences: One or more
- Child element: map element
Attributes
- name
- Specifies the name of the mapSet. This attribute must be unique within the objectgridDeployment element. (Required)
- numberOfPartitions
- Specifies the number of partitions for the mapSet element. The default value is 1. The number should be appropriate for the number of containers that host the partitions. (Optional)
- minSyncReplicas
- Minimum number of synchronous replicas for each partition in the mapSet. The default value is 0. Shards are not placed until the domain can support the minimum number of synchronous replicas. To support the minSyncReplicas value, you need one more container than the value of minSyncReplicas. If the number of synchronous replicas falls below the value of minSyncReplicas, write transactions are no longer allowed for that partition. (Optional)
- maxSyncReplicas
- Maximum number of synchronous replicas for each partition in the mapSet. The default value is 0. No other synchronous replicas are placed for a partition after a domain reaches this number of synchronous replicas for that specific partition. Adding containers that can support this ObjectGrid can result in an increased number of synchronous replicas if the maxSyncReplicas value has not already been met. (Optional)
- maxAsyncReplicas
- Maximum number of asynchronous replicas for each partition in the mapSet. The default value is 0. After the primary and all synchronous replicas have been placed for a partition, asynchronous replicas are placed until the maxAsyncReplicas value is met. (Optional)
- replicaReadEnabled
- If this attribute is set to true, read requests are distributed amongst a partition primary and its replicas. If the replicaReadEnabled attribute is false, read requests are routed to the primary only. The default value is false. (Optional)
- numInitialContainers
- Specifies the number of eXtreme Scale containers that are required before initial placement occurs for the shards in this mapSet element. The default value is 1. This attribute can help save process and network bandwidth when bringing an ObjectGrid instance online. (Optional)
- Starting an eXtreme Scale container sends an event to the catalog service. The first time that the number of active containers is equal to the numInitialContainers value for a mapSet element, the catalog service places the shards from the mapSet, provided that the minSyncReplicas value can also be satisfied. After the numInitialContainers value has been met, each container-started event can trigger a rebalance of unplaced and previously placed shards. If you know approximately how many containers you are going to start for this mapSet element, set the numInitialContainers value close to that number to avoid a rebalance after every container start. Placement occurs only when you reach the numInitialContainers value that is specified in the mapSet element.
- autoReplaceLostShards
- Specifies if lost shards are placed on other containers. The default value is true. When a container is stopped or fails, the shards running on the container are lost. A lost primary shard causes one of its replica shards to be promoted to the primary shard for the corresponding partition. Because of this promotion, one of the replicas is lost. If you want lost shards to remain unplaced, set the autoReplaceLostShards attribute to false. This setting does not affect the promotion chain, but only the replacement of the last shard in the chain. (Optional)
- developmentMode
- With this attribute, you can influence where a shard is placed in relation to its peer shards. The default value is true. When the developmentMode attribute is set to false, no two shards from the same partition are placed on the same computer. When the developmentMode attribute is set to true, shards from the same partition can be placed on the same machine. In either case, no two shards from the same partition are ever placed in the same container. (Optional)
- placementStrategy
- There are two placement strategies. The default strategy is FIXED_PARTITION, where the number of primary shards that are placed across the available containers is equal to the number of partitions that are defined; the total number of shards is the number of partitions multiplied by one plus the number of replicas per partition. The alternate strategy is PER_CONTAINER, where the number of primary shards that are placed on each container is equal to the number of partitions that are defined, with an equal number of replicas placed on other containers. (Optional)
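As an illustration of the alternate strategy, the following sketch shows a mapSet that uses PER_CONTAINER placement. The mapSet and map names here are hypothetical; with this setting, each container that starts receives five primary shards, with their replicas placed on other containers.

```xml
<mapSet name="perContainerSet" numberOfPartitions="5"
        maxAsyncReplicas="1"
        placementStrategy="PER_CONTAINER">
    <map ref="SessionData"/>
</mapSet>
```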
The following code shows the available attributes for a given mapSet element.

<mapSet
    name="mapSetName"
    numberOfPartitions="numberOfPartitions"
    minSyncReplicas="minimumNumber"
    maxSyncReplicas="maximumNumber"
    maxAsyncReplicas="maximumNumber"
    replicaReadEnabled="true" | "false"
    numInitialContainers="numberOfInitialContainersBeforePlacement"
    autoReplaceLostShards="true" | "false"
    developmentMode="true" | "false"
    placementStrategy="FIXED_PARTITION" | "PER_CONTAINER" />
In the following example, the mapSet element is used to configure a deployment policy. The name is set to mapSet1, and the mapSet is divided into 10 partitions. Each of these partitions must have at least one synchronous replica available and no more than two synchronous replicas. Each partition also has an asynchronous replica if the environment can support it. All synchronous replicas are placed before any asynchronous replicas are placed. Additionally, the catalog service does not attempt to place the shards for the mapSet1 element until the domain can support the minSyncReplicas value. Supporting the minSyncReplicas value requires two or more containers: one for the primary and one for the synchronous replica.
<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
    xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
    <objectgridDeployment objectgridName="CompanyGrid">
        <mapSet name="mapSet1" numberOfPartitions="10"
            minSyncReplicas="1" maxSyncReplicas="2"
            maxAsyncReplicas="1" numInitialContainers="10"
            autoReplaceLostShards="true" developmentMode="false"
            replicaReadEnabled="true">
            <map ref="Customer"/>
            <map ref="Item"/>
            <map ref="OrderLine"/>
            <map ref="Order"/>
        </mapSet>
    </objectgridDeployment>
</deploymentPolicy>

Although only two containers are required to satisfy the replication settings, the numInitialContainers attribute requires 10 available containers before the catalog service attempts to place any of the shards in this mapSet element. After the domain has 10 containers that can support the CompanyGrid ObjectGrid, all shards in the mapSet1 element are placed.
Because the autoReplaceLostShards attribute is set to true, any shard in this mapSet element that is lost as the result of a container failure is automatically replaced onto another container, provided that a container is available to host the lost shard. Shards from the same partition cannot be placed on the same machine for the mapSet1 element because the developmentMode attribute is set to false. Read-only requests are distributed across the primary shard and its replicas for each partition because the replicaReadEnabled value is true.
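These replication settings also determine the maximum shard count for the mapSet. Assuming the environment can place every replica, each partition in mapSet1 yields one primary, two synchronous replicas, and one asynchronous replica:

```
shards per partition   = 1 primary + 2 sync replicas + 1 async replica = 4
total shards (mapSet1) = numberOfPartitions x shards per partition
                       = 10 x 4 = 40
```

If fewer containers are available, the catalog service places as many replicas as the minSyncReplicas and developmentMode constraints allow, so the actual shard count can be lower.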
map element
Each map in a mapSet element references one of the backingMap elements that is defined in the corresponding ObjectGrid XML file. Every map in a distributed eXtreme Scale environment can belong to only one mapSet element. See the objectGrid.xsd file for more information on the ObjectGrid XML file.
- Number of occurrences: One or more
- Child element: None
Attributes
- ref
- Provides a reference to a backingMap element in the ObjectGrid XML file. Each map in a mapSet element must reference a backingMap element from the ObjectGrid XML file. The value that is assigned to the ref attribute must match the name attribute of one of the backingMap elements in the ObjectGrid XML file, as in the following code snippet. (Required)
<map ref="backingMapReference" />

The companyGridDpMapSetAttr.xml file uses the ref attribute on each map element to reference the backingMap elements from the companyGrid.xml file.
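As a sketch of the required name matching, assuming companyGrid.xml defines a backingMap named Customer as in the earlier example:

```xml
<!-- companyGrid.xml (ObjectGrid descriptor XML file) -->
<backingMap name="Customer"/>

<!-- deployment policy file -->
<map ref="Customer"/>
```

The ref value must match the backingMap name exactly; a mismatch is a configuration conflict between the two files.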
For information on avoiding XML configuration conflicts, see Troubleshoot XML configuration.