Configure distributed deployments
To manage the topology, use the following XML descriptor files:
- The deployment policy descriptor XML file
- The ObjectGrid descriptor XML file
The deployment policy descriptor XML file is provided to the eXtreme Scale container server and specifies the following:
- The maps that belong to each map set
- The number of partitions
- The number of synchronous and asynchronous replicas
- The minimum number of active container servers before placement occurs
- Automatic replacement of lost shards
- Placement of each shard from a single partition onto a different machine
For information on starting container servers, see...
Endpoint information is not preconfigured in the dynamic environment, and the deployment policy contains no server names or physical topology information. The catalog service automatically places all shards in a data grid into container servers, using the constraints that are defined by the deployment policy to manage shard placement. You can add servers to the environment as needed.
Restriction: In a WAS environment, a core group size of more than 50 members is not supported.
A deployment policy XML file is passed to a container server during startup.
A deployment policy must be used along with an ObjectGrid XML file.
The deployment policy is not required to start a container, but is recommended. The deployment policy must be compatible with the ObjectGrid XML file that is used with it. For each objectgridDeployment element in the deployment policy, include a corresponding objectGrid element in the ObjectGrid XML file. The maps in the objectgridDeployment must be consistent with the backingMap elements found in the ObjectGrid XML. Every backingMap must be referenced within one and only one mapSet element.
In the following example, the companyGridDpReplication.xml file is intended to be paired with the corresponding companyGrid.xml file.
companyGridDpReplication.xml:

<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy
        http://ibm.com/ws/objectgrid/deploymentPolicy/deploymentPolicy.xsd"
    xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">

    <objectgridDeployment objectgridName="CompanyGrid">
        <mapSet name="mapSet1" numberOfPartitions="11"
                minSyncReplicas="1" maxSyncReplicas="1"
                maxAsyncReplicas="0" numInitialContainers="4">
            <map ref="Customer" />
            <map ref="Item" />
            <map ref="OrderLine" />
            <map ref="Order" />
        </mapSet>
    </objectgridDeployment>

</deploymentPolicy>
companyGrid.xml:

<?xml version="1.0" encoding="UTF-8"?>
<objectGridConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/config
        http://ibm.com/ws/objectgrid/config/objectGrid.xsd"
    xmlns="http://ibm.com/ws/objectgrid/config">

    <objectGrids>
        <objectGrid name="CompanyGrid">
            <backingMap name="Customer" />
            <backingMap name="Item" />
            <backingMap name="OrderLine" />
            <backingMap name="Order" />
        </objectGrid>
    </objectGrids>

</objectGridConfig>
The companyGridDpReplication.xml file has one mapSet element that is divided into 11 partitions. Each partition has one synchronous replica, as specified by the minSyncReplicas and maxSyncReplicas attributes:
- minSyncReplicas: When set to 1, each partition in the mapSet element must have at least one synchronous replica available to process write transactions.
- maxSyncReplicas: When set to 1, each partition cannot exceed one synchronous replica.
Because maxAsyncReplicas is set to 0, the partitions in this mapSet element have no asynchronous replicas.
The numInitialContainers attribute instructs the catalog service to defer placement until four container servers are available to support this ObjectGrid instance. The numInitialContainers attribute is ignored after the specified number of containers has been reached.
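To make the arithmetic behind these attributes concrete, the total number of shards that the catalog service must place for a map set is numberOfPartitions multiplied by one primary plus the replicas per partition. The following is a minimal sketch of that calculation; the class and method names are illustrative, not part of the WXS API:

```java
// Sketch: shard-count arithmetic for a mapSet; names are illustrative only.
public class ShardMath {
    // Total shards = one primary per partition plus its replicas.
    static int totalShards(int partitions, int syncReplicas, int asyncReplicas) {
        return partitions * (1 + syncReplicas + asyncReplicas);
    }

    public static void main(String[] args) {
        // companyGridDpReplication.xml: 11 partitions, 1 sync replica, 0 async.
        int shards = totalShards(11, 1, 0);
        System.out.println("Total shards: " + shards); // 22 shards to place
        // With numInitialContainers="4", placement begins once 4 containers
        // are running, giving each container roughly 22 / 4 = 5 or 6 shards.
        System.out.println("Shards per container (min): " + shards / 4);
    }
}
```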
Distributed topology

Distributed coherent caches offer increased performance, availability, and scalability, which you can configure.
WebSphere eXtreme Scale (WXS) automatically balances data across servers. You can add servers without restarting WXS, which supports both simple deployments and large, terabyte-sized deployments that require thousands of servers.
This deployment topology is flexible. Using the catalog service, you can add and remove servers to better use resources without removing the entire cache. Use the startOgServer and stopOgServer commands to start and stop container servers. Both of these commands require you to specify the -catalogServiceEndPoints option.
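As a sketch of starting and stopping a container server with the descriptor files from the example above, assuming a catalog service running at cathost:2809 (the server name, host, and port are illustrative):

```shell
# Start a container server, supplying both descriptor files
# (UNIX script shown; use startOgServer.bat on Windows).
./startOgServer.sh container1 \
    -objectgridFile companyGrid.xml \
    -deploymentPolicyFile companyGridDpReplication.xml \
    -catalogServiceEndPoints cathost:2809

# Stop the same container server through the catalog service.
./stopOgServer.sh container1 -catalogServiceEndPoints cathost:2809
```

When container1 stops, the catalog service moves its shards to the remaining containers, so the data grid stays available.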
All distributed topology clients communicate with the catalog service through IIOP. All clients use the ObjectGrid interface to communicate with servers.
Containers host the data and the catalog service allows clients to communicate with the grid of containers.
The catalog service forwards requests, allocates space in host containers, and manages the health and availability of the overall system. Clients connect to a catalog service, retrieve a description of the container-server topology, and then communicate directly to each server as needed.
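The client bootstrap sequence described above can be sketched with the WXS client API as follows; the catalog service address and the printed message are illustrative assumptions, and this requires the eXtreme Scale client libraries on the classpath:

```java
import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;

public class GridClient {
    public static void main(String[] args) throws Exception {
        // Bootstrap through the catalog service; host/port are illustrative.
        ObjectGridManager ogm = ObjectGridManagerFactory.getObjectGridManager();
        ClientClusterContext ccc = ogm.connect("cathost:2809", null, null);
        // Retrieve a client-side ObjectGrid; from here on, requests route
        // directly to the container server that hosts each partition.
        ObjectGrid grid = ogm.getObjectGrid(ccc, "CompanyGrid");
        System.out.println("Connected to " + grid.getName());
    }
}
```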
When the server topology changes due to the addition of new servers, or due to the failure of others, the catalog service automatically routes client requests to the appropriate server that hosts the data.
A catalog service typically exists in its own grid of Java™ virtual machines. A single catalog server can manage multiple servers. You can start a container in a JVM by itself or load the container into an arbitrary JVM with other containers for different servers. A client can exist in any JVM and communicate with one or more servers. A client can also exist in the same JVM as a container.
You can also create a deployment policy programmatically when you are embedding a container in an existing Java process or application. For more information, see the DeploymentPolicy API documentation.
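A minimal sketch of embedding a container this way is shown below, assuming the descriptor files from the example above are on the local file system (the file locations are illustrative, and the eXtreme Scale server libraries must be on the classpath):

```java
import java.net.URL;
import com.ibm.websphere.objectgrid.deployment.DeploymentPolicy;
import com.ibm.websphere.objectgrid.deployment.DeploymentPolicyFactory;
import com.ibm.websphere.objectgrid.server.Container;
import com.ibm.websphere.objectgrid.server.Server;
import com.ibm.websphere.objectgrid.server.ServerFactory;

public class EmbeddedContainer {
    public static void main(String[] args) throws Exception {
        // File locations are illustrative; point these at your descriptors.
        URL policyUrl = new URL("file:companyGridDpReplication.xml");
        URL gridUrl = new URL("file:companyGrid.xml");
        DeploymentPolicy policy =
            DeploymentPolicyFactory.createDeploymentPolicy(policyUrl, gridUrl);
        Server server = ServerFactory.getInstance();  // embedded server runtime
        Container container = server.createContainer(policy); // hosts shards in-process
        // ... application runs; calling container.teardown() later removes
        // this container from the grid and triggers shard re-placement.
    }
}
```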
Parent topic: Configure deployment policies

Related topics:
- Replicas and shards
- Map sets for replication
- Enable client-side map replication
- Controlling shard placement with zones
- Configure failover detection
- Configure deployment policies
- Configure local deployments
- Deployment policy descriptor XML file