

Distributed eXtreme Scale grid configuration


To manage the topology, use a deployment policy.

The deployment policy is provided to the eXtreme Scale container server at startup, and specifies how the maps of an ObjectGrid are grouped, partitioned, and replicated across the available containers.

For information on starting container servers, read about the startOgServer script.

The deployment policy also controls shard placement behavior.

Endpoint information is not pre-configured in the dynamic environment. There are no server names or physical topology information found in the deployment policy. All shards in a grid are automatically placed into containers by the catalog service. The catalog service uses the constraints that are defined by the deployment policy to automatically manage shard placement. This automatic shard placement leads to easy configuration for large grids. You can also add servers to the environment as needed.

Restriction: In a WebSphere Application Server environment, a core group size of more than 50 members is not supported.

A deployment policy XML file is passed to an eXtreme Scale container during startup. The deployment policy is not required to start a container, but it is recommended. When you use a deployment policy, it must be paired with, and compatible with, the ObjectGrid XML file that accompanies it. For each objectgridDeployment element in the deployment policy, include a corresponding objectGrid element in the ObjectGrid XML file. The maps in the objectgridDeployment element must be consistent with the backingMap elements in the ObjectGrid XML file. Every backingMap must be referenced within one, and only one, mapSet element.

In the following example, the companyGridDpReplication.xml file is intended to be paired with the corresponding companyGrid.xml file.

companyGridDpReplication.xml

<?xml version="1.0" encoding="UTF-8"?>

<deploymentPolicy 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
    xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">

    <objectgridDeployment objectgridName="CompanyGrid">

        <mapSet name="mapSet1" 
                   numberOfPartitions="11"
                   minSyncReplicas="1" 
                   maxSyncReplicas="1"
                   maxAsyncReplicas="0" 
                   numInitialContainers="4">

            <map ref="Customer" />
            <map ref="Item" />
            <map ref="OrderLine" />
            <map ref="Order" />

        </mapSet>

    </objectgridDeployment>

</deploymentPolicy>


companyGrid.xml

<?xml version="1.0" encoding="UTF-8"?>

<objectGridConfig 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://ibm.com/ws/objectgrid/config ../objectGrid.xsd"
    xmlns="http://ibm.com/ws/objectgrid/config">

    <objectGrids>

        <objectGrid name="CompanyGrid">

            <backingMap name="Customer" />
            <backingMap name="Item" />
            <backingMap name="OrderLine" />
            <backingMap name="Order" />

        </objectGrid>
    </objectGrids>

</objectGridConfig>

The companyGridDpReplication.xml file has one mapSet element that is divided into 11 partitions. Each partition must have exactly one synchronous replica. The number of synchronous replicas is specified by the minSyncReplicas and maxSyncReplicas attributes.

Because the minSyncReplicas attribute is set to 1, each partition in the mapSet element must have at least one synchronous replica available to process write transactions. Because the maxSyncReplicas attribute is set to 1, each partition can have at most one synchronous replica. The partitions in this mapSet element have no asynchronous replicas.

The numInitialContainers attribute instructs the catalog service to defer placement until four containers are available to support this ObjectGrid instance. The numInitialContainers attribute is ignored after the specified number of containers has been reached.

Although the companyGridDpReplication.xml file is a basic example, a deployment policy can offer you full control over your eXtreme Scale environment. Read about the deployment policy descriptor XML file.


Distributed topology

Distributed coherent caches offer increased performance, availability, and scalability, all of which you can configure with a deployment policy.

WebSphere eXtreme Scale automatically balances servers, and you can add servers without restarting eXtreme Scale. The ability to add servers without a restart keeps small deployments simple and also makes large, terabyte-sized deployments that require thousands of servers practical.

Using the catalog service, you can add and remove servers to better use resources without removing the entire cache. You can use the startOgServer and stopOgServer commands to start and stop container servers. Both of these commands require you to specify the -catalogServiceEndPoints option.
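For example, assuming a catalog service that listens on cs1.example.com:2809 (the host name and port here are placeholders), you might start a container server named c0 with the CompanyGrid descriptor files and later stop it as follows. See the startOgServer and stopOgServer reference topics for the complete set of options that your release supports.

startOgServer.sh c0 -catalogServiceEndPoints cs1.example.com:2809
    -objectgridFile companyGrid.xml
    -deploymentPolicyFile companyGridDpReplication.xml

stopOgServer.sh c0 -catalogServiceEndPoints cs1.example.com:2809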

All distributed topology clients communicate with the catalog service through IIOP. All clients use the ObjectGrid interface to communicate with servers.

Containers host the data, and the catalog service allows clients to communicate with the grid of containers. The catalog service tracks where every shard is placed and makes that routing information available to clients.

Clients connect to the catalog service to obtain the routing information, and then communicate directly with the container servers that host the data.

When the server topology changes due to the addition of new servers, or due to the failure of others, the catalog service automatically routes client requests to the appropriate server that hosts the data.

A catalog service typically exists in its own grid of JVMs. A single catalog server can manage multiple servers. You can start a container in a JVM by itself or load the container into an arbitrary JVM with other containers for different servers. A client can exist in any JVM and communicate with one or more servers. A client can also exist in the same JVM as a container.
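The following client sketch illustrates this interaction, assuming the standard com.ibm.websphere.objectgrid client API. The catalog service endpoint cs1.example.com:2809 is a placeholder; the grid name CompanyGrid and the Customer map come from the example above.

import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

public class CompanyGridClient {
    public static void main(String[] args) throws Exception {
        ObjectGridManager ogm = ObjectGridManagerFactory.getObjectGridManager();

        // Bootstrap against the catalog service (placeholder endpoint).
        ClientClusterContext ccc = ogm.connect("cs1.example.com:2809", null, null);

        // Obtain a client-side ObjectGrid reference; operations are routed to
        // the container that hosts the target partition.
        ObjectGrid grid = ogm.getObjectGrid(ccc, "CompanyGrid");

        Session session = grid.getSession();
        ObjectMap customers = session.getMap("Customer");

        // Insert and read back an entry within explicit transactions.
        session.begin();
        customers.insert("C0001", "Sample Customer");
        session.commit();

        session.begin();
        System.out.println(customers.get("C0001"));
        session.commit();
    }
}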

You can also create a deployment policy programmatically when embedding a container in an existing Java process or application.
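A minimal sketch of that approach follows, assuming the embedded server API (ServerFactory, DeploymentPolicyFactory, and related classes) of stand-alone eXtreme Scale; verify the class and method names against the API documentation for your release. The server name, catalog service endpoint, and file locations are placeholders, and the two XML files are the CompanyGrid descriptors shown earlier.

import java.net.URL;

import com.ibm.websphere.objectgrid.deployment.DeploymentPolicy;
import com.ibm.websphere.objectgrid.deployment.DeploymentPolicyFactory;
import com.ibm.websphere.objectgrid.server.Container;
import com.ibm.websphere.objectgrid.server.Server;
import com.ibm.websphere.objectgrid.server.ServerFactory;
import com.ibm.websphere.objectgrid.server.ServerProperties;

public class EmbeddedContainer {
    public static void main(String[] args) throws Exception {
        // Point the embedded server at the catalog service before it starts
        // (placeholder endpoint and server name).
        ServerProperties props = ServerFactory.getServerProperties();
        props.setServerName("embeddedContainer0");
        props.setCatalogServiceBootstrap("cs1.example.com:2809");

        Server server = ServerFactory.getInstance();

        // Build the deployment policy from the same pair of XML files shown above.
        URL dpXml = new URL("file:companyGridDpReplication.xml");
        URL ogXml = new URL("file:companyGrid.xml");
        DeploymentPolicy policy =
            DeploymentPolicyFactory.createDeploymentPolicy(dpXml, ogXml);

        // Start hosting shards in this JVM.
        Container container = server.createContainer(policy);

        // ... application work runs here; when finished, retire the
        // container so its shards can be placed elsewhere.
        container.teardown();
    }
}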



Parent topic

Configure WebSphere eXtreme Scale

Related reference

startOgServer script
stopOgServer script
Deployment policy descriptor XML file
ObjectGrid descriptor XML file

