WebSphere eXtreme Scale Administration Guide
Capacity planning
If you have an initial data set size and a projected data set size, you can plan the capacity that you need to run WebSphere eXtreme Scale. Such planning helps you deploy eXtreme Scale efficiently as requirements change, and it lets you maximize the elasticity of eXtreme Scale, which you would not have with a different scenario, such as an in-memory database or another type of database.
- Grids, partitions, and shards
An eXtreme Scale distributed grid is divided into partitions. A partition holds an exclusive subset of the data and is made up of one or more shards: a primary shard and zero or more replica shards. Replica shards are optional, but they provide high availability. Whether the deployment is an independent in-memory data grid or an in-memory database processing space, data access in eXtreme Scale relies heavily on sharding concepts.
- Sizing memory and partition count calculation
You can calculate the amount of memory and the number of partitions that are needed for your specific configuration.
- Sizing CPU per partition for transactions
Although a major feature of eXtreme Scale is its ability to scale elastically, it is also important to consider sizing and to determine the ideal number of CPUs to scale up.
- Sizing CPUs for parallel transactions
Single-partition transactions have throughput that scales linearly as the grid grows. Parallel transactions differ from single-partition transactions because they touch a set of the servers, which can be all of the servers.
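The memory and partition sizing described above can be illustrated with a small arithmetic sketch. All numbers here (data set size, container count, shard density, heap headroom factor) are assumptions for illustration only; they are not eXtreme Scale defaults or APIs, and real sizing must be validated against the deployment policy and measured heap usage.

```java
// Illustrative capacity-planning arithmetic for a replicated grid.
// All inputs are assumed example values, not product defaults.
public class CapacitySketch {
    public static void main(String[] args) {
        long dataSizeMb = 20_000;      // projected primary data set: 20 GB
        int replicasPerPartition = 1;  // one replica shard per partition
        int containers = 10;           // JVMs (containers) hosting shards
        int shardsPerContainer = 10;   // assumed starting shard density

        // Total shards the containers can host, and the partition count
        // they imply: each partition contributes one primary shard plus
        // its replicas.
        int totalShards = containers * shardsPerContainer;
        int partitions = totalShards / (1 + replicasPerPartition);

        // Primary plus replica copies of the data, spread evenly.
        long totalShardDataMb = dataSizeMb * (1 + replicasPerPartition);
        long dataPerContainerMb = totalShardDataMb / containers;

        // Keep the live data set at roughly 60% of the heap (assumed
        // headroom factor) to leave room for overhead and GC.
        long heapPerContainerMb = Math.round(dataPerContainerMb / 0.6);

        System.out.println("partitions=" + partitions);
        System.out.println("dataPerContainerMb=" + dataPerContainerMb);
        System.out.println("heapPerContainerMb=" + heapPerContainerMb);
    }
}
```

With these assumed inputs, 100 shards yield 50 partitions, each container holds about 4 GB of shard data, and roughly a 6.7 GB heap per container provides the 60% headroom.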
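The scaling contrast between single-partition and parallel transactions can also be sketched numerically. This is an idealized model with assumed per-server rates, not measured eXtreme Scale data: single-partition transactions spread across servers, so aggregate throughput grows with server count, while a parallel transaction consumes work on every server it touches, so its aggregate rate stays near a single server's rate even as the grid grows.

```java
// Idealized throughput model (assumed numbers, not measurements).
public class ScalingSketch {
    // Single-partition transactions: each added server adds capacity,
    // so aggregate throughput grows linearly with the grid.
    static double singlePartitionTps(int servers, double tpsPerServer) {
        return servers * tpsPerServer;
    }

    // Parallel transactions: every transaction fans out to all servers,
    // so aggregate throughput is bounded by the per-server rate even
    // though each transaction can touch more data as the grid grows.
    static double parallelTps(int servers, double tpsPerServer) {
        return tpsPerServer;
    }

    public static void main(String[] args) {
        double perServer = 1_000.0; // assumed per-server transaction rate
        for (int servers : new int[] {2, 4, 8}) {
            System.out.printf("servers=%d single=%.0f parallel=%.0f%n",
                    servers,
                    singlePartitionTps(servers, perServer),
                    parallelTps(servers, perServer));
        }
    }
}
```

Under this model, doubling the servers doubles single-partition throughput but leaves parallel-transaction throughput flat, which is why CPU sizing for parallel transactions must account for the fan-out to all servers.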