Per-container placement strategy
As an alternative to the fixed-partition placement policy, you can enable per-container placement, which supports configurations that are not possible with fixed partitions.
WebSphere eXtreme Scale offers per-container placement as an alternative to what could be termed the "typical" placement strategy: a fixed-partition approach in which the key of a Map entry is hashed to one of a fixed number of partitions. In the per-container case (which you set with PER_CONTAINER), the deployment places the partitions on the set of online container servers and automatically scales them out or in as containers are added to or removed from the server grid. The fixed-partition approach works well for key-based grids, where the application uses a key object to locate data in the grid. The following discusses the per-container alternative.
Example per-container grid
PER_CONTAINER grids are different. You specify that the grid use PER_CONTAINER through the placementStrategy attribute in the deployment XML file. Instead of configuring the total number of partitions in the grid, you specify how many partitions you want per container that you start.
For example, if you set the number of partitions per container to be 5, then when you start a container eXtreme Scale creates 5 new anonymous partition primaries on that container and creates any necessary replicas on the other containers already deployed.
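For example, a deployment policy fragment for the 5-partitions-per-container case might look like the following sketch. The grid name, mapSet name, map reference, and replica setting are placeholder values; verify the element and attribute names against the deployment policy schema for your release.

<deploymentPolicy xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
    <objectgridDeployment objectgridName="SessionGrid">
        <mapSet name="sessionMapSet"
                numberOfPartitions="5"
                maxAsyncReplicas="1"
                placementStrategy="PER_CONTAINER">
            <map ref="Session"/>
        </mapSet>
    </objectgridDeployment>
</deploymentPolicy>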
The following is a potential sequence in a per-container environment as the grid grows.
- Start container C0 hosting 5 primaries (P0 - P4).
- C0 hosts: P0, P1, P2, P3, P4.
- Start container C1 hosting 5 more primaries (P5 - P9). Replicas are balanced on the containers.
- C0 hosts: P0, P1, P2, P3, P4, R5, R6, R7, R8, R9.
- C1 hosts: P5, P6, P7, P8, P9, R0, R1, R2, R3, R4.
- Start container C2 hosting 5 more primaries (P10 - P14). Replicas are balanced further.
- C0 hosts: P0, P1, P2, P3, P4, R7, R8, R9, R10, R11, R12.
- C1 hosts: P5, P6, P7, P8, P9, R2, R3, R4, R13, R14.
- C2 hosts: P10, P11, P12, P13, P14, R5, R6, R0, R1.
The pattern continues as more containers are started, creating 5 new primary partitions each time and rebalancing replicas on the available containers in the grid.
WebSphere eXtreme Scale does not move primaries when using the PER_CONTAINER strategy, only replicas.
Remember that the partition numbers are arbitrary and have nothing to do with keys, so you cannot use key-based routing. If a container stops, then the partition IDs created for that container are no longer used, so there is a gap in the partition IDs. In the example, there would no longer be partitions P5 - P9 if the container C1 failed, leaving only P0 - P4 and P10 - P14, so key-based hashing is impossible.
A value such as 5 or, more likely, 10 partitions per container works best when you consider the consequences of a container failure.
To spread the load of hosting shards evenly across the grid, you need more than just one partition for each container. If you had a single partition per container, then when a container fails, only one container (the one hosting the corresponding replica shard) must bear the full load of the lost primary. In this case, the load on that container immediately doubles. However, if you have 5 partitions per container, then 5 containers pick up the load of the lost container, lowering the impact on each by 80 percent. Using multiple partitions per container generally lowers the potential impact on each container substantially. More directly, consider a case in which a container spikes unexpectedly: the replication load of that container is spread over 5 containers rather than only one.
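As a sketch of the arithmetic behind these numbers: if each container hosts N primaries and the corresponding replica shards are spread evenly over N other containers, then when a container fails,
\[ \text{extra load per surviving container} \approx \frac{\text{load of the failed container}}{N} \]
so N = 1 doubles one container's load, while N = 5 adds only about 20 percent to each of five containers.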
Use the per-container policy
Several scenarios make the per-container strategy an ideal configuration, such as HTTP session replication or application session state. In such a case, an HTTP router assigns a session to a servlet container. The servlet container needs to create an HTTP session and chooses one of the 5 local partition primaries for the session. The "ID" of the chosen partition is then stored in a cookie. The servlet container now has local access to the session state, which means zero-latency access to the data for this request as long as you maintain session affinity, and eXtreme Scale replicates any changes to the partition.
In practice, remember the repercussions of a case in which you have multiple partitions per container (say 5 again). With each new container started, you get 5 more partition primaries and 5 more replicas. Over time, it might seem that more and more partitions would be created and that they would never move or be destroyed. But this is not how the containers actually behave. When a container starts, it hosts 5 primary shards, which can be called "home" primaries because they exist on the container that created them. If the container fails, the replicas become primaries and eXtreme Scale creates 5 more replicas to maintain high availability (unless you disabled auto repair). The new primaries are in a different container from the one that created them, so they can be called "foreign" primaries. The application should never place new state or sessions in a foreign primary. Eventually, a foreign primary has no entries and eXtreme Scale automatically deletes it and its associated replicas. The purpose of the foreign primaries is to keep existing sessions available (but not to host new ones).
A client can still interact with a grid that does not rely on keys. The client just begins a transaction and stores data in the grid independent of any keys. It asks the Session for a SessionHandle object, a serializable handle that allows the client to interact with the same partition when necessary. WebSphere eXtreme Scale chooses a partition for the client from the list of home partition primaries; it does not return a foreign primary partition. The SessionHandle can be serialized in an HTTP cookie, for example, and the cookie can later be converted back into a SessionHandle. Then the WebSphere eXtreme Scale APIs can obtain a Session bound to the same partition again, using the SessionHandle.
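The following Java sketch illustrates that flow for the HTTP session scenario described earlier. It is only a sketch: it assumes a catalog server at a placeholder address ("catalogHost:2809"), a grid named SessionGrid with a map named Session, and the client Session API methods getSessionHandle and setSessionHandle; verify the package names and method signatures against the API documentation for your release.

import java.io.*;
import com.ibm.websphere.objectgrid.*;

public class PerContainerClient {

    // Store a value in whichever home primary eXtreme Scale chooses, and
    // return the serialized SessionHandle so it can be placed in a cookie.
    static byte[] store(ObjectGrid grid, String sessionId, Object state) throws Exception {
        Session session = grid.getSession();
        session.begin();                                    // the grid picks a home primary
        session.getMap("Session").insert(sessionId, state);
        session.commit();
        SessionHandle handle = session.getSessionHandle();  // handle to the chosen partition
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(handle);                            // SessionHandle is serializable
        out.flush();
        return bytes.toByteArray();                         // for example, Base64-encode into a cookie
    }

    // On a later request: rebuild the handle from the cookie and bind a new
    // Session to the same partition before reading the data back.
    static Object load(ObjectGrid grid, byte[] cookieBytes, String sessionId) throws Exception {
        SessionHandle handle = (SessionHandle) new ObjectInputStream(
                new ByteArrayInputStream(cookieBytes)).readObject();
        Session session = grid.getSession();
        session.setSessionHandle(handle);                   // route to the original partition
        session.begin();
        Object state = session.getMap("Session").get(sessionId);
        session.commit();
        return state;
    }

    public static void main(String[] args) throws Exception {
        ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();
        ClientClusterContext ccc = manager.connect("catalogHost:2809", null, null);
        ObjectGrid grid = manager.getObjectGrid(ccc, "SessionGrid");
        byte[] cookie = store(grid, "session-1234", "conversation state");
        System.out.println(load(grid, cookie, "session-1234"));
    }
}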
You cannot use agents to interact with a PER_CONTAINER grid.
Advantages
The previous description is different from a normal FIXED_PARTITION or hash grid because the per-container client stores data in a place in the grid, gets a handle to it and uses the handle to access it again. There is no application-supplied key as there is in the fixed-partition case.
Your deployment does not make a new partition for each Session. So, in a per-container deployment, the keys used to store data in a partition must be unique within that partition. For example, you can have the client generate a unique session ID and then use it as the key to find information in the Maps of that partition. Multiple client sessions then interact with the same partition, so the application needs unique keys to store each session's data in any given partition.
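For example, under the same assumptions as the previous sketch, a client-generated identifier (here a UUID, a hypothetical choice) can serve as the unique key, so several conversations can share one partition without colliding:

import java.util.UUID;
import com.ibm.websphere.objectgrid.*;

public class SessionKeys {

    // Store new conversation state under a freshly generated key; the key only
    // needs to be unique within the partition this Session is bound to.
    static String createConversation(Session session, Object state) throws Exception {
        String conversationId = UUID.randomUUID().toString();
        session.begin();
        session.getMap("Session").insert(conversationId, state);
        session.commit();
        return conversationId;  // keep it alongside the SessionHandle, for example in the cookie
    }
}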
The previous examples used 5 partitions, but you can set the numberOfPartitions parameter in the deployment XML file to whatever value is required. With the per-container strategy, the setting is interpreted per container rather than per grid. (The number of replicas is specified in the same way as with the fixed-partition policy.)
The per-container policy can also be used with multiple zones. If possible, eXtreme Scale returns a SessionHandle to a partition whose primary is located in the same zone as the client. The zone can be specified as a parameter to the container or by using an API, and the client zone ID can be set in the server properties or client properties file.
The PER_CONTAINER strategy suits applications that store conversational state rather than database-oriented data. The key used to access the data is a conversation ID that is not related to a specific database record. The strategy provides higher performance (because the partition primaries can be collocated with the servlets, for example) and easier configuration (you do not have to calculate partitions and containers).
Parent topic
Placement and partitions