Available topologies for multi-master replication
You have several different options when choosing the topology for the deployment that incorporates multi-master replication.
A replication data grid infrastructure is a connected graph of catalog service domains with bidirectional links among them. With a link, two catalog service domains can communicate data changes. For example, the simplest topology is a pair of catalog service domains with a single link between them. The catalog service domains are named alphabetically: A, B, C, and so on, from the left. A link can cross a wide area network (WAN), spanning large distances. Even if the link is interrupted, you can still change data in either catalog service domain. The topology reconciles changes when the link reconnects the catalog service domains. Links automatically try to reconnect if the network connection is interrupted.
After you set up the links, eXtreme Scale first tries to make every catalog service domain identical. It then tries to keep the domains identical as changes occur in any catalog service domain. The goal is for each catalog service domain to be an exact mirror of every other catalog service domain connected by the links. The replication links between the catalog service domains help ensure that any changes made in one domain are copied to the other domains.
Although a line topology is a simple deployment, it demonstrates some qualities of the links. First, a catalog service domain does not need to be connected directly to every other catalog service domain to receive changes. Domain B pulls changes from Domain A. Domain C receives changes from Domain A through Domain B, which connects Domains A and C. Similarly, Domain D receives changes from the other domains through Domain C. This ability spreads the load of distributing changes away from the source of the changes.
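The propagation described above can be sketched as a small graph model. This is purely illustrative (it is not an eXtreme Scale API); the domain names A through D are taken from the text, and each entry lists a domain's directly linked neighbors:

```python
from collections import deque

def hops(links, src, dst):
    """Breadth-first search: number of links a change crosses from src to dst."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in links[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable

# Line topology: A - B - C - D (each link is bidirectional)
line = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

print(hops(line, "A", "B"))  # 1: Domain B pulls changes directly from Domain A
print(hops(line, "A", "D"))  # 3: Domain D receives A's changes via B and C
```

The hop count grows with the length of the line, which is why a long line topology has higher change-propagation latency than the hub-and-spoke topology described later.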
Notice that if Domain C fails, the following would occur:
- Domain D would be orphaned until Domain C was restarted.
- When Domain C restarted, it would synchronize itself with Domain B, which is a copy of Domain A.
- Domain D would use Domain C to synchronize itself with the changes on Domains A and B that occurred while Domain D was orphaned (while Domain C was down).
- Ultimately, Domains A, B, C, and D would all become identical to one another again.
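The orphaning and recovery sequence can be modeled with the same illustrative graph approach (again, not a product API; the domain names follow the text):

```python
def reachable(links, src):
    """Set of domains a change starting at src can eventually reach."""
    seen, stack = {src}, [src]
    while stack:
        for nxt in links[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Line topology A - B - C - D.
line = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}

# Domain C fails: drop C and every link touching it.
down = {d: n - {"C"} for d, n in line.items() if d != "C"}
print(sorted(reachable(down, "A")))   # ['A', 'B'] -- Domain D is orphaned

# Domain C restarts: links automatically reconnect, and changes flow again.
print(sorted(reachable(line, "A")))   # ['A', 'B', 'C', 'D']
```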
Another option you have with multi-master replication is a ring topology, which is more resilient than the topologies described in the previous sections. A single catalog service domain or a single link can fail, and the surviving catalog service domains can still obtain changes that travel around the ring, away from the failure. Each catalog service domain has two links to adjacent catalog service domains, and no domain ever has more than two links, no matter how large the ring topology grows. Changes from a particular domain might travel through several domains before all of the domains mirror one another. Traversing several domains can cause relatively high latency, as in a line topology.
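The ring's resilience to a single link failure can be shown with the same illustrative graph sketch (the four-domain ring and its names are hypothetical, not an eXtreme Scale API):

```python
def reachable(links, src):
    """Set of domains a change starting at src can eventually reach."""
    seen, stack = {src}, [src]
    while stack:
        for nxt in links[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Ring of four domains: every domain has exactly two links.
ring = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C", "A"}}

# Sever the single link between B and C; changes still flow the other
# way around the ring, so every domain remains reachable.
broken = {d: set(n) for d, n in ring.items()}
broken["B"].discard("C")
broken["C"].discard("B")
print(sorted(reachable(broken, "A")))  # ['A', 'B', 'C', 'D']
```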
You can also deploy a more sophisticated ring topology, with a root catalog service domain at the center of the ring. The root catalog service domain functions as the central point of reconciliation. The other catalog service domains act as remote points of reconciliation for changes occurring in the root catalog service domain. The root catalog service domain can arbitrate changes among the catalog service domains. If a ring topology contains more than one ring around a root catalog service domain, the root domain can arbitrate changes only among the domains in the innermost ring. However, the results of the arbitration spread throughout the catalog service domains in the other rings.
With a hub-and-spoke topology, changes travel through a hub catalog service domain. Because the hub is the only intermediate catalog service domain that is specified, hub-and-spoke topologies have lower latency. The hub domain is connected to every spoke domain through a link. The hub distributes changes among the catalog service domains and acts as a point of reconciliation for collisions. In an environment with a high update rate, the hub might need to run on more hardware than the spokes to remain synchronized. WebSphere eXtreme Scale is designed to scale linearly, meaning you can make the hub larger, as needed, without difficulty. However, if the hub fails, then changes are not distributed until the hub restarts. Any changes on the spoke catalog service domains are distributed after the hub is reconnected.
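Both properties of the hub-and-spoke topology, the low hop count and the hub as a single point of distribution, can be illustrated with the same kind of graph sketch (the hub and spoke names are hypothetical; this models the topology, not the product's API):

```python
from collections import deque

def hops(links, src, dst):
    """Links a change crosses from src to dst (breadth-first search)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in links[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable, e.g. while the hub is down

# Hub-and-spoke: the hub links to every spoke; spokes link only to the hub.
spokes = ["S1", "S2", "S3", "S4"]
topology = {"HUB": set(spokes), **{s: {"HUB"} for s in spokes}}

# A change never crosses more than two links: spoke -> hub -> spoke,
# regardless of how many spokes the topology has.
print(max(hops(topology, a, b) for a in spokes for b in spokes if a != b))  # 2

# If the hub fails, spokes cannot exchange changes until it restarts.
hub_down = {s: set() for s in spokes}
print(hops(hub_down, "S1", "S2"))  # None
```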
You can also use a strategy with fully replicated clients, a topology variation that uses a pair of eXtreme Scale servers running as a hub. Every client creates a self-contained, single-container data grid with a catalog in the client JVM. A client uses its data grid to connect to the hub catalog, and synchronizes with the hub as soon as the connection is established.
Any changes made by the client are local to the client, and are replicated asynchronously to the hub. The hub acts as an arbitration domain, distributing changes to all connected clients. The fully replicated clients topology provides a reliable L2 cache for an object relational mapper, such as OpenJPA. Changes are distributed quickly among client JVMs through the hub. If the cache size can be contained within the available heap space, the topology is a reliable architecture for this style of L2 cache.
Use multiple partitions to scale the hub domain on multiple JVMs, if necessary. Because all of the data still must fit in a single client JVM, multiple partitions increase the capacity of the hub to distribute and arbitrate changes. However, having multiple partitions does not change the capacity of a single domain.
You can also use an acyclic directed tree. An acyclic tree has no cycles or loops, and a directed tree limits links to existing only between parents and children. Use the tree topology when you have so many catalog service domains that a hub-and-spoke topology would overwork the hub. You can also use a tree if you need to add child catalog service domains without updating the root catalog service domain.
A tree topology can still have a central point of reconciliation in the root catalog service domain. The domains at the second level can still function as remote points of reconciliation for changes occurring in the catalog service domains beneath them. The root catalog service domain can arbitrate changes between the catalog service domains on the second level only. You can also use N-ary trees, each of which has N children at each level. Each catalog service domain connects out through N links to its children.
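The fan-out of an N-ary tree can be worked out with a short illustrative calculation (the function below is a sketch of the arithmetic, not anything from the product):

```python
def nary_counts(n, levels):
    """Total domains and links in a complete N-ary tree with the given
    number of levels below the root. A tree always has exactly one
    fewer link than it has nodes."""
    domains = sum(n ** d for d in range(levels + 1))
    links = domains - 1
    return domains, links

# Binary tree (N=2), two levels below the root: 1 + 2 + 4 = 7 domains, 6 links.
print(nary_counts(2, 2))  # (7, 6)

# Ternary tree (N=3), one level below the root: 1 + 3 = 4 domains, 3 links.
print(nary_counts(3, 1))  # (4, 3)
```

Because a non-root domain holds at most N+1 links (N children plus its parent), no single domain is overworked the way a hub can be, and new children can attach to leaf domains without touching the root.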
Parent topic: Multi-master data grid replication topologies