Sizing CPU per partition for transactions
Overview
This section provides a method to estimate CPU usage as transaction throughput increases.
Processor costs include:
- Cost of servicing create, retrieve, update, and delete operations from clients.
- Cost of replication from other JVMs.
- Cost of invalidation.
- Cost of eviction policy.
- Cost of garbage collection.
- Cost of application logic.
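The total processor cost per transaction is the sum of these components. The following sketch illustrates the idea with a simple additive model; the component values and the class and method names are illustrative assumptions, not measured eXtreme Scale figures or product APIs.

```java
// Hypothetical per-transaction CPU cost model. All component costs
// (in microseconds of CPU time) are illustrative placeholders that
// you would replace with values measured in your own environment.
public class CpuCostModel {
    public static double totalCostMicros(double crud, double replication,
                                         double invalidation, double eviction,
                                         double gc, double appLogic) {
        // Total processor cost per transaction is the sum of the components
        return crud + replication + invalidation + eviction + gc + appLogic;
    }

    public static void main(String[] args) {
        double total = totalCostMicros(50, 20, 5, 5, 10, 30);
        System.out.println("Estimated CPU cost per transaction: " + total + " us");
    }
}
```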
JVMs per server
Use two servers and start the maximum number of JVMs per server, using the partition counts calculated in the previous section. Then, preload the JVMs with enough data to fit on these two computers only. Use a separate server as a client, and run a realistic transaction simulation against this two-server data grid.
To establish the baseline, try to saturate processor usage. If you cannot saturate the processors, it is likely that the network is saturated instead. In that case, add more network cards and distribute the JVMs across the cards in round-robin fashion.
Run the computers at 60% processor usage, and measure the create, retrieve, update, and delete transaction rate. This measurement provides the throughput on two servers.
This throughput rate doubles with four servers, doubles again with eight servers, and so on, provided that network capacity and client capacity can also scale. As a result, eXtreme Scale response time should remain stable as servers are added, and transaction throughput should scale linearly as computers are added to the data grid.
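The linear-scaling estimate above can be sketched as a small calculation. The baseline throughput and target rate below are hypothetical example numbers, and the class and method names are assumptions for illustration only.

```java
// Sketch of the linear-scaling estimate: given the transaction rate
// measured on the two-server baseline at 60% processor usage, estimate
// how many servers are needed to reach a target rate, assuming
// throughput scales linearly with server count.
public class GridSizing {
    public static int serversNeeded(double baselineTpsTwoServers, double targetTps) {
        double perServerTps = baselineTpsTwoServers / 2.0;
        return (int) Math.ceil(targetTps / perServerTps);
    }

    public static void main(String[] args) {
        // Hypothetical figures: 10,000 tx/s measured on two servers,
        // with a target of 40,000 tx/s for the full data grid.
        System.out.println(serversNeeded(10_000, 40_000) + " servers"); // prints "8 servers"
    }
}
```

Doubling the server count doubles the estimate (2 → 4 → 8 servers), matching the scaling pattern described above.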
Parent topic:
Capacity planning
Related concepts
Sizing memory and partition count calculation
Sizing CPUs for parallel transactions
Dynamic cache capacity planning