


Work with WebSphere eXtreme Scale


WebSphere eXtreme Scale is an elastic, scalable, in-memory data grid. It dynamically caches, partitions, replicates, and manages application data and business logic across multiple servers.

Because eXtreme Scale is not an in-memory database, it has specific configuration requirements that you should consider. The first step in deploying an eXtreme Scale data grid is to start a core group and catalog service, which acts as the coordinator for all of the other Java™ Virtual Machines that participate in the grid and manages the grid's configuration information. WebSphere eXtreme Scale processes are started with simple command script invocations from the command line.
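
For example, in a stand-alone installation the catalog service can be started with the startOgServer script in the product bin directory. This is a minimal sketch; the server name is an arbitrary example, and the exact script name and options can vary by version and platform:

    # Start a catalog server named catalogServer (use startOgServer.bat on Windows)
    ./startOgServer.sh catalogServer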

The next step is to start WebSphere eXtreme Scale server processes that store and retrieve the grid's data. As servers start, they automatically register themselves with the core group and catalog service, which allows them to cooperate in providing grid services. Adding servers increases both the capacity and the reliability of the grid.
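
For example, a container server can be started with the same script, pointing it at an ObjectGrid descriptor, a deployment policy, and the catalog service. The server name, file names, and endpoint below are placeholders for your own values:

    # Start a container server named c0 that hosts the grid defined in the XML files
    ./startOgServer.sh c0 -objectgridFile objectgrid.xml \
        -deploymentPolicyFile deployment.xml \
        -catalogServiceEndPoints cataloghost:2809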

A local grid is a simple, single-instance grid in which all of the data is held in one grid instance within a single process. To use eXtreme Scale effectively as an in-memory database processing space, you can configure and deploy a distributed grid. The data in a distributed grid is spread across the eXtreme Scale servers that host it, so that each server holds only a subset of the data, called a partition.
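
As an illustration, a local grid can be created and used directly through the ObjectGrid programming interfaces. This is a minimal sketch; the grid and map names are arbitrary examples:

    import com.ibm.websphere.objectgrid.ObjectGrid;
    import com.ibm.websphere.objectgrid.ObjectGridManager;
    import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
    import com.ibm.websphere.objectgrid.ObjectMap;
    import com.ibm.websphere.objectgrid.Session;

    public class LocalGridExample {
        public static void main(String[] args) throws Exception {
            // Create a local, single-process grid and define a map to hold data
            ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();
            ObjectGrid grid = manager.createObjectGrid("localGrid");
            grid.defineMap("Map1");

            // Read and write entries through a session and its ObjectMap view
            Session session = grid.getSession();
            ObjectMap map = session.getMap("Map1");
            map.insert("key1", "value1");
            System.out.println(map.get("key1"));
        }
    }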

A key configuration parameter for a distributed grid is the number of partitions. The grid data is divided into this number of subsets, each of which is called a partition, and the catalog service locates the partition for a given datum based on its key. The number of partitions directly affects the capacity and scalability of the grid. A server can contain one or more partitions, so the server's memory space limits the size of a single partition. Conversely, increasing the number of partitions increases the capacity of the grid: the maximum capacity of a grid is the number of partitions multiplied by the usable memory of a server, which is typically a single JVM.
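
The number of partitions is set in the deployment policy descriptor that is passed to each container server. The following fragment is a sketch only; the grid name, map set name, map name, and partition count are illustrative:

    <?xml version="1.0" encoding="UTF-8"?>
    <deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
        <objectgridDeployment objectgridName="Grid">
            <!-- Divide the maps in this map set into 13 partitions -->
            <mapSet name="mapSet1" numberOfPartitions="13">
                <map ref="Map1"/>
            </mapSet>
        </objectgridDeployment>
    </deploymentPolicy>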

A partition's data is stored in a shard. For availability, a grid can be configured with replicas, which can be synchronous or asynchronous. Changes to the grid data are made to the primary shard and then replicated to the replica shards. The total memory that a grid requires is therefore the size of the grid data multiplied by one (for the primary) plus the number of replicas.
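
For example, a grid that holds 10 GB of primary data and is configured with one synchronous and one asynchronous replica requires roughly 10 GB x (1 + 2) = 30 GB of memory across its servers. Replicas are also declared in the deployment policy; the attribute values in this sketch are illustrative:

    <!-- Each partition gets one synchronous and one asynchronous replica shard -->
    <mapSet name="mapSet1" numberOfPartitions="13"
            minSyncReplicas="1" maxSyncReplicas="1" maxAsyncReplicas="1">
        <map ref="Map1"/>
    </mapSet>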

WebSphere eXtreme Scale distributes a grid's shards across the servers that host the grid. These servers can run on the same physical machine or on separate machines. For availability, replica shards are placed on separate machines from their primary shards.

WebSphere eXtreme Scale monitors the status of its servers and moves shards among them when a server or its physical machine fails and subsequently recovers. For example, if the server containing a replica shard fails, eXtreme Scale allocates a new replica shard and replicates data from the primary to the new replica. If a server containing a primary shard fails, the replica shard is promoted to primary and, as before, a new replica shard is constructed. If you start an additional server for the grid, the shards are redistributed over all of the servers so that the load on each is as balanced as possible; this is called scale-out. Similarly, for scale-in, you can stop one of the servers to reduce the resources consumed by the grid, and the shards are again balanced over the remaining servers, just as in a failure situation.
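
Scale-out and scale-in can therefore be as simple as starting or stopping container servers. A sketch, assuming the stand-alone scripts and the same illustrative file names and catalog endpoint as above:

    # Scale out: start another container server; shards are rebalanced onto it
    ./startOgServer.sh c1 -objectgridFile objectgrid.xml \
        -deploymentPolicyFile deployment.xml \
        -catalogServiceEndPoints cataloghost:2809

    # Scale in: stop a container server; its shards move to the remaining servers
    ./stopOgServer.sh c1 -catalogServiceEndPoints cataloghost:2809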


