Peer-replicated local cache
When multiple processes each hold an independent local WebSphere eXtreme Scale cache instance, you must keep those instances synchronized. To do so, enable a peer-replicated cache with JMS.
WebSphere eXtreme Scale includes two plug-ins that automatically propagate transaction changes between peer ObjectGrid instances. The JMSObjectGridEventListener plug-in propagates eXtreme Scale changes by using the Java™ Message Service (JMS).
Figure 1. Peer-replicated cache with changes that are propagated with JMS
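As a minimal sketch of the programmatic approach, the following Java code creates a local ObjectGrid and registers a JMSObjectGridEventListener so that committed changes can be published to peer instances. The grid and map names are hypothetical, and the listener's JMS settings (connection factory and topic JNDI names, replication role and strategy) are omitted; in practice they are supplied as plug-in properties, typically in the deployment descriptor XML file.

```java
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.plugins.builtins.JMSObjectGridEventListener;

public class PeerReplicatedCacheSetup {

    // Creates a local ObjectGrid and registers the built-in JMS event
    // listener so that committed changes are propagated to peers.
    public static ObjectGrid createPeerReplicatedGrid() throws Exception {
        ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();

        // "PeerGrid" and "cacheMap" are hypothetical names for this sketch.
        ObjectGrid grid = manager.createObjectGrid("PeerGrid");
        grid.defineMap("cacheMap");

        // Register the JMS listener. Its JMS settings (connection factory
        // and topic JNDI names, replication role and strategy) are not
        // shown here; they are normally set as plug-in properties in the
        // deployment descriptor XML file.
        JMSObjectGridEventListener jmsListener = new JMSObjectGridEventListener();
        grid.addEventListener(jmsListener);

        return grid; // ready for getSession() calls from the application
    }
}
```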
If you are running a WebSphere Application Server environment, the TranPropListener plug-in is also available. The TranPropListener plug-in uses the high availability (HA) manager to propagate the changes to each peer eXtreme Scale cache instance.
Figure 2. Peer-replicated cache with changes that are propagated with the high availability manager
The HA manager propagates changes between two ObjectGrid instances that are running in different JVMs. Each ObjectGrid instance is associated with an application.
Advantages
- The cached data stays more current because each peer instance is updated whenever any peer changes the data.
- With the TranPropListener plug-in, as in the local environment, the eXtreme Scale cache can be created programmatically or declaratively with the eXtreme Scale deployment descriptor XML file, or with other frameworks such as Spring. Integration with the high availability manager is automatic.
- Each BackingMap can be independently tuned for optimal memory utilization and concurrency.
- BackingMap updates can be grouped into a single unit of work and can be integrated as the last participant in two-phase transactions such as Java Transaction API (JTA) transactions (see the sketch after this list).
- Ideal for few-JVM topologies with a reasonably small dataset or for caching frequently accessed data.
- Changes to the eXtreme Scale cache are replicated to all peer eXtreme Scale instances. The changes are consistent as long as a durable subscription is used.
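As an illustration of the unit-of-work advantage above, the following sketch groups updates to two BackingMaps into a single eXtreme Scale transaction. The map names and values are hypothetical, and the grid is assumed to define both maps.

```java
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

public class UnitOfWorkSketch {

    // Updates two BackingMaps in one transaction: both inserts commit
    // together, or the rollback discards both.
    public static void updateTwoMaps(ObjectGrid grid) throws Exception {
        Session session = grid.getSession();
        ObjectMap items = session.getMap("items");   // hypothetical map
        ObjectMap totals = session.getMap("totals"); // hypothetical map

        session.begin();
        try {
            items.insert("sku-1", "widget");
            totals.insert("sku-1", Integer.valueOf(42));
            session.commit(); // a single unit of work
        } catch (Exception e) {
            if (session.isTransactionActive()) {
                session.rollback(); // discard both changes on failure
            }
            throw e;
        }
    }
}
```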
Disadvantages
- Configuration and maintenance of the JMSObjectGridEventListener plug-in can be complex. Although the eXtreme Scale cache can be created programmatically or declaratively with the eXtreme Scale deployment descriptor XML file, or with other frameworks such as Spring, the JMS listener itself must be configured and maintained on every instance.
- Not scalable: every peer instance holds a full copy of the cached data, so the memory that the data requires can overwhelm the JVM.
- Behaves poorly as JVMs are added:
  - Data cannot be partitioned.
  - Invalidation is expensive.
  - Each cache must be warmed up independently.
When to use
This deployment topology should be used only when the amount of data to be cached is small (can fit into a single JVM) and is relatively stable.
Parent topic:
Cache topology: In-memory and distributed caching
Related concepts
Multi-master data grid replication topologies