5.5.1 Deployment
Planning the capacity of an eXtreme Scale environment is a crucial task. If the initial and projected data set sizes are known, you can plan the right capacity for an eXtreme Scale deployment, maximizing elasticity and achieving maximum benefit.

In this section, we show how to calculate the required number of servers, and the recommended numbers of partitions, shards, and JVMs, for an eXtreme Scale deployment of a session grid based on a set of application inputs.

To begin, we need the following inputs:

iSize: The initial grid size, expressed as the number of objects expected to be stored in the grid.

gRate: Annual growth rate.

rFactor: The number of replicas required in the deployment.

sizeKb: The average object size in KB.

jHeap: Java heap size in GB.

nShards: Number of shards to be placed in each JVM.

phMem: Physical machine memory in GB.

We are also assuming the following constant values:

mHc: Maximum Java heap consumption, a constant value of 70%.

hRm: Additional Java heap real memory requirement, a constant value of 25%.

oSMo: Operating system memory overhead in GB, a constant of 1.5 GB.

Given these inputs and constants, we can arrive at the following recommended values:

Raw memory required in KB (rMem):

rMem = iSize*sizeKb*(rFactor+1)

Recommended number of JVMs (rJVM), where the factor 1000000 converts jHeap from GB to KB:

rJVM = RoundUp(rMem / ((1000000*jHeap)*mHc))

Recommended number of shards (rShards):

rShards = rJVM*nShards

Recommended number of partitions (rPar):

rPar = GetNearestPrime(rShards / (1+rFactor))

Minimum required servers (rServ):

rServ = RoundUp(rJVM / ((phMem-oSMo) / (jHeap*(1+hRm))))
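The formulas above can be sketched as a small calculator. The input values below are illustrative assumptions, not figures from the text, and nearest_prime is one possible reading of GetNearestPrime (ties resolved upward):

```python
import math

# Illustrative sample inputs (assumptions, not values from the text):
iSize = 1_000_000   # initial number of objects in the grid
sizeKb = 2          # average object size in KB
rFactor = 1         # number of replicas
jHeap = 2           # Java heap size per JVM in GB
nShards = 10        # shards to place in each JVM
phMem = 8           # physical memory per machine in GB

# Constants from the text:
mHc = 0.70   # maximum Java heap consumption (70%)
hRm = 0.25   # additional real-memory requirement on top of the heap (25%)
oSMo = 1.5   # operating system memory overhead in GB

def nearest_prime(n):
    """Return the prime closest to n (ties resolved upward)."""
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k**0.5) + 1))
    offset = 0
    while True:
        if is_prime(n + offset):
            return n + offset
        if n - offset > 1 and is_prime(n - offset):
            return n - offset
        offset += 1

# Raw memory in KB: primary plus rFactor replicas of every object
rMem = iSize * sizeKb * (rFactor + 1)

# JVMs needed; 1_000_000 converts jHeap from GB to KB
rJVM = math.ceil(rMem / ((1_000_000 * jHeap) * mHc))

# Total shards across all JVMs
rShards = rJVM * nShards

# Partitions: each partition occupies 1 + rFactor shards
rPar = nearest_prime(round(rShards / (1 + rFactor)))

# Servers: JVMs divided by how many JVMs fit on one machine
rServ = math.ceil(rJVM / ((phMem - oSMo) / (jHeap * (1 + hRm))))

print(rMem, rJVM, rShards, rPar, rServ)
```

With these sample inputs, rMem is 4,000,000 KB, which requires 3 JVMs and therefore 30 shards, 17 partitions, and 2 servers.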

These values help in designing the deployment of eXtreme Scale for session offloading with Portal Server.
