WebSphere eXtreme Scale Product Overview


Scalability


WebSphere eXtreme Scale is scalable through the use of partitioned data, and can scale to thousands of containers if required, because each container is independent of the others.

WebSphere eXtreme Scale divides data sets into distinct partitions that can be moved between processes, or even between machines, at run time. You can, for example, start with a deployment of four servers and then expand to a deployment of ten servers as demand on the cache grows. Just as you can grow a single machine vertically by adding processors and memory, you can grow eXtreme Scale horizontally by adding machines, because its partitioned design scales elastically. This is another major difference between eXtreme Scale, which is a data grid, and in-memory databases (IMDBs): IMDBs can only scale vertically.

With WebSphere eXtreme Scale, you can also use a set of APIs to gain transactional access to this partitioned and, optionally, distributed data. In terms of performance, the choices you make for interacting with the cache are as significant as the functions that manage the cache for availability.
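For example, a client can read and write partitioned data transactionally through the ObjectMap API. The following is a minimal sketch; the catalog server endpoint (localhost:2809), grid name (Grid), and map name (Map1) are placeholders that you would replace with your own configuration.

    import com.ibm.websphere.objectgrid.ClientClusterContext;
    import com.ibm.websphere.objectgrid.ObjectGrid;
    import com.ibm.websphere.objectgrid.ObjectGridManager;
    import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
    import com.ibm.websphere.objectgrid.ObjectMap;
    import com.ibm.websphere.objectgrid.Session;

    public class GridClient {
        public static void main(String[] args) throws Exception {
            ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();

            // Connect to the catalog service; endpoint and names are
            // placeholders for this sketch.
            ClientClusterContext context = manager.connect("localhost:2809", null, null);
            ObjectGrid grid = manager.getObjectGrid(context, "Grid");

            // All map access is transactional and scoped to a Session.
            Session session = grid.getSession();
            ObjectMap map = session.getMap("Map1");

            session.begin();
            map.insert("customer:42", "Jane Doe"); // routed to the owning partition
            session.commit();

            session.begin();
            String value = (String) map.get("customer:42");
            session.commit();
            System.out.println(value);
        }
    }

The client routes each key to the partition that owns it, so the transaction touches only the containers that hold the affected data.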

Scalability holds only as long as containers do not need to communicate with one another. The availability management, or core grouping, protocol is an O(N²) heartbeat and view-maintenance algorithm, but its cost is mitigated by keeping the number of members in each core group to around 20. Beyond core grouping, the only container-to-container communication is peer-to-peer replication between shards.
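To see why core groups are capped near 20 members, count the pairwise links an O(N²) heartbeat protocol implies. The following is plain illustrative arithmetic, not a WebSphere API:

    public class CoreGroupCost {
        public static void main(String[] args) {
            // Pairwise links in an N-member core group: N * (N - 1) / 2.
            for (int n : new int[] {20, 100, 1000}) {
                System.out.printf("%5d members -> %8d heartbeat links%n",
                        n, n * (n - 1) / 2);
            }
        }
    }

At 20 members the protocol maintains 190 links; at 1,000 it would maintain nearly half a million. Dividing the grid into many small core groups keeps this overhead bounded no matter how many containers are added.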


Partitioning

Partitioning is the mechanism that WebSphere eXtreme Scale uses to scale out an application. It separates the application state into parts, where each part holds the complete data for some subset of the instances. Partitioning is not like Redundant Array of Independent Disks (RAID) striping, which slices each instance across all stripes; instead, each partition hosts the complete data for its individual entries. Partitioning is a very effective means of scaling, but it is not applicable to all applications. Applications that require transactional guarantees across large sets of data do not scale and cannot be partitioned effectively, so eXtreme Scale does not currently support two-phase commit across partitions.

Select the number of partitions carefully. The number of partitions that are defined in the deployment policy directly affects the number of containers to which an application can scale.
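As an illustration, the partition and replica counts are set in the deployment policy descriptor. The grid name, map set name, and map name in this sketch are placeholders; adapt them to your own configuration:

    <?xml version="1.0" encoding="UTF-8"?>
    <deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
        xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
      <objectgridDeployment objectgridName="Grid">
        <!-- 101 partitions, each with up to one synchronous replica -->
        <mapSet name="mapSet" numberOfPartitions="101"
                minSyncReplicas="0" maxSyncReplicas="1">
          <map ref="Map1"/>
        </mapSet>
      </objectgridDeployment>
    </deploymentPolicy>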

Each partition is made up of a primary shard and the configured number of replica shards.

The number of containers that can be used to scale out a single application follows from the shard layout: the maximum number of containers equals the number of partitions multiplied by (1 + the number of replicas), because each shard, primary or replica, can be placed in a container of its own.
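For example, with the 101 partitions and one replica per partition shown in the deployment policy sketch above, the grid contains 101 × (1 + 1) = 202 shards, so the application can spread across at most 202 containers before additional containers receive no shards.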


Distributed clients

The WebSphere eXtreme Scale client protocol supports very large numbers of clients. The partitioning strategy helps because no single client normally needs every partition, so client connections can be spread across many containers. Clients connect directly to the container that hosts the partition they need, so latency is limited to a single network hop.

