Data grids, partitions, and shards
A data grid is divided into partitions. A partition holds an exclusive subset of the data. A partition contains one or more shards: a primary shard and, optionally, one or more replica shards. Replica shards are not required in a partition, but you can use them to provide high availability. Whether the deployment is an independent in-memory data grid or an in-memory database processing space, data access in eXtreme Scale relies heavily on shards.
The data for a partition is stored in a set of shards at run time. This set includes a primary shard and possibly one or more replica shards. A shard is the smallest unit that eXtreme Scale can add to or remove from a Java™ virtual machine (JVM).
Two placement strategies exist: fixed partition placement (the default) and per-container placement. The following discussion focuses on the fixed partition placement strategy.
Total number of shards
If the environment includes 10 partitions that hold one million objects with no replicas, then 10 shards exist, each storing 100,000 objects. If you add a replica to this scenario, an extra shard exists in each partition. In this case, 20 shards exist: 10 primary shards and 10 replica shards. Each of these shards stores 100,000 objects. Each partition consists of a primary shard and zero or more (N) replica shards. Determining the optimal shard count is critical. If you configure too few shards, data is not distributed evenly among the shards, resulting in out-of-memory errors and processor overloading issues. You must have at least 10 shards for each JVM as you scale. When you initially deploy the data grid, you would potentially use many partitions.
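For example, the shard arithmetic in this scenario can be sketched as follows. This is a minimal illustration only; the class name and the partition, object, and replica counts are taken from the example above and are not part of any product API.

```java
public class ShardCountSketch {
    public static void main(String[] args) {
        int partitions = 10;            // partitions in the data grid
        long totalObjects = 1_000_000;  // objects stored across the grid

        // Without replicas: one primary shard per partition
        int shardsWithoutReplicas = partitions;
        long objectsPerShard = totalObjects / partitions;   // 100,000 objects

        // With one replica per partition: one primary plus one replica shard each
        int replicasPerPartition = 1;
        int shardsWithReplicas = partitions * (1 + replicasPerPartition);  // 20 shards

        System.out.println(shardsWithoutReplicas + " shards without replicas, "
                + shardsWithReplicas + " shards with one replica, "
                + objectsPerShard + " objects per shard in either case.");
    }
}
```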
Number of shards per JVM scenarios
Scenario: small number of shards for each JVM
Data is added to and removed from a JVM in shard units. Shards are never split into pieces. If 10 GB of data exists and 20 shards hold this data, then each shard holds 500 MB of data on average. If nine JVMs host the data grid, then on average each JVM has two shards. Because 20 is not evenly divisible by 9, a few JVMs have three shards, in the following distribution:
- Seven JVMs with two shards
- Two JVMs with three shards
Because each shard holds 500 MB of data, the distribution of data is unequal. The seven JVMs with two shards each host 1 GB of data. The two JVMs with three shards host 50% more data, or 1.5 GB each, which is a much larger memory burden. Because these two JVMs host three shards, they also receive 50% more requests for their data. As a result, having too few shards for each JVM causes imbalance.
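The imbalance can be reproduced with a short sketch of the distribution arithmetic. The numbers are the ones from the scenario above; the class and method names are illustrative and not part of the eXtreme Scale API.

```java
public class ShardBalanceSketch {

    /** Prints how evenly a given number of shards spreads across a given number of JVMs. */
    static void printDistribution(int shards, int jvms, double dataGb) {
        double gbPerShard = dataGb / shards;
        int baseShards = shards / jvms;    // shards on the less loaded JVMs
        int extraJvms = shards % jvms;     // JVMs that receive one extra shard

        System.out.printf("%d JVMs with %d shards (%.2f GB each)%n",
                jvms - extraJvms, baseShards, baseShards * gbPerShard);
        System.out.printf("%d JVMs with %d shards (%.2f GB each)%n",
                extraJvms, baseShards + 1, (baseShards + 1) * gbPerShard);
    }

    public static void main(String[] args) {
        // 20 shards, 9 JVMs, 10 GB of data:
        // 7 JVMs hold 1.00 GB each and 2 JVMs hold 1.50 GB each
        printDistribution(20, 9, 10.0);
    }
}
```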
To improve performance, increase the number of shards for each JVM.
Scenario: increased number of shards for each JVM
Now consider a much larger number of shards: 101 shards with nine JVMs hosting 10 GB of data. In this case, each shard holds 99 MB of data. The JVMs have the following distribution of shards:
- Seven JVMs with 11 shards
- Two JVMs with 12 shards
The two JVMs with 12 shards now have just 99 MB more data than the other JVMs, a difference of about 9%. This scenario is much more evenly distributed than the 50% difference in the scenario with few shards. From a processor use perspective, only 9% more work exists for the two JVMs with 12 shards compared to the seven JVMs with 11 shards. By increasing the number of shards in each JVM, the data and processor use are distributed in a fair and even way.
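Reusing the printDistribution sketch from the previous scenario with these values shows the smaller skew (again, purely illustrative):

```java
// 101 shards, 9 JVMs, 10 GB of data:
// 7 JVMs with 11 shards (about 1.09 GB each) and 2 JVMs with 12 shards (about 1.19 GB each)
ShardBalanceSketch.printDistribution(101, 9, 10.0);
```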
When you design the system, use 10 shards for each JVM in its maximally sized scenario, that is, when the system is running its maximum number of JVMs in the planning horizon.
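As a rough planning aid, the partition count can be derived from that guideline. This is a sketch only, assuming a hypothetical target of 9 JVMs and one replica per partition; the factor of 10 is the shards-per-JVM guideline above.

```java
public class PartitionPlanningSketch {
    public static void main(String[] args) {
        int maxJvms = 9;               // maximum JVMs expected in the planning horizon (assumed)
        int replicasPerPartition = 1;  // replica shards configured per partition (assumed)
        int shardsPerJvm = 10;         // guideline: about 10 shards per JVM at maximum size

        // Each partition produces 1 primary shard plus its replicas,
        // so the partition count follows from the target total shard count.
        int targetShards = maxJvms * shardsPerJvm;                  // 90 shards
        int partitions = targetShards / (1 + replicasPerPartition); // 45 partitions
        System.out.println("Plan for roughly " + partitions + " partitions ("
                + targetShards + " shards across " + maxJvms + " JVMs).");
    }
}
```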
Additional placement factors
The number of partitions, the placement strategy, and the number and type of replicas are set in the deployment policy. The number of shards that are placed depends on the deployment policy that you define. The numInitialContainers, minSyncReplicas, developmentMode, maxSyncReplicas, and maxAsyncReplicas attributes affect where and when partitions and replicas can be placed. If the maximum number of replicas is not placed during the initial startup, additional replicas might be placed if you start additional servers later. When you plan the number of shards per JVM, remember that the maximum number of primary and replica shards that can be placed depends on having enough JVMs started to support the configured maximum number of replicas. A replica is never placed in the same process as its primary because, if that process were lost, both the primary and the replica would be lost. When the developmentMode attribute is set to false, the primary and replicas are also not placed on the same physical server.
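For example, because a replica never shares a process with its primary, you can estimate the number of container JVMs that must be started before all configured replicas can be placed. The variable names echo the deployment policy attributes listed above, but the Java sketch itself is illustrative and not part of the product API.

```java
public class ReplicaPlacementSketch {
    public static void main(String[] args) {
        // Values that would normally come from the deployment policy descriptor (assumed here)
        int maxSyncReplicas = 1;
        int maxAsyncReplicas = 1;

        // A replica is never placed in the same process as its primary,
        // so each configured replica of a partition needs its own container JVM.
        int minJvmsForFullReplication = 1 + maxSyncReplicas + maxAsyncReplicas;
        System.out.println("At least " + minJvmsForFullReplication
                + " container JVMs must be started before all replicas can be placed.");
    }
}
```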
Parent topic:
Scalability overview
Related concepts
Single-partition and cross-data-grid transactions
Sizing memory and partition count calculation
Related reference
Deployment policy descriptor XML file