Cache replication
Cache replication generates data one time and copies it to the other servers in the cluster. Cache entries that are no longer needed are removed or replaced.
The data replication configuration can exist as...
- Part of the Web container dynamic cache configuration accessible through the console
- In the cachespec.xml file, which lets you configure cache replication at the Web container level but disable it for a specific cache entry.
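For example, a cachespec.xml entry can override the container-wide setting through its sharing policy. A minimal sketch, assuming a hypothetical servlet path (the sharing-policy values are not-shared, shared-push, shared-pull, and shared-push-pull):

```xml
<!-- Hypothetical entry: replication stays enabled at the Web container
     level, but is disabled for this one servlet. -->
<cache>
  <cache-entry>
    <class>servlet</class>
    <name>/localOnly.jsp</name>
    <sharing-policy>not-shared</sharing-policy>
    <cache-id>
      <component id="" type="pathinfo"/>
    </cache-id>
  </cache-entry>
</cache>
```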
Cache replication can take on three forms:
PUSH
Cache entries are pushed to the other cluster members as they are created. These entries cannot store non-serializable data.
PULL
Data is pulled from other members of the cluster on demand. Not recommended.
PUSH/PULL
Cache entries are shared between appservers on demand. When an appserver generates a cache entry, it broadcasts the cache ID of the created entry to all cooperating appservers. Each server then knows whether an entry exists for any given cache ID, so on a request for that entry the appserver knows whether to generate the entry or pull it from the creating server. These entries cannot store non-serializable data.
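The PUSH/PULL flow above can be sketched with plain Java collections. This is a toy model under assumed class and method names, not the WAS replication code: the value stays on the creating server, only the cache ID is broadcast, and a remote server pulls the value on demand.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of PUSH/PULL sharing; class and method names are hypothetical.
class PushPullServer {
    private final Map<String, Object> localCache = new HashMap<>(); // local backing map
    private final Set<String> knownRemoteIds = new HashSet<>();     // stands in for DRSPushPullTable
    private final Map<String, PushPullServer> cluster = new HashMap<>();

    void join(String name, PushPullServer peer) { cluster.put(name, peer); }

    // Creating an entry stores the value locally and broadcasts only the cache ID.
    void put(String cacheId, Object value) {
        localCache.put(cacheId, value);
        for (PushPullServer peer : cluster.values()) {
            peer.knownRemoteIds.add(cacheId);
        }
    }

    // A lookup checks the local cache first, then pulls from a peer that has the entry.
    Object get(String cacheId) {
        if (localCache.containsKey(cacheId)) {
            return localCache.get(cacheId);
        }
        if (knownRemoteIds.contains(cacheId)) {
            for (PushPullServer peer : cluster.values()) {
                Object v = peer.localCache.get(cacheId);
                if (v != null) return v; // pulled on demand from the creating server
            }
        }
        return null; // unknown ID: the caller must generate the entry
    }

    boolean hasLocally(String cacheId) { return localCache.containsKey(cacheId); }
}
```

The design point the sketch captures is that the broadcast costs only one small message per entry, while the (potentially large) value travels across the network only when some server actually asks for it.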
The methods...
- DistributedMap.containsKey()
- DistributedMap.keySet()
...do not show a key that has been broadcast to the receiving server. This is because the keys stored in the table...
DRSPushPullTable
...are kept separately from the backing map that is used by the local cache...
DistributedMap
The method...
DistributedMap.get(Object key)
...will retrieve the value from the server that broadcast the key.
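The asymmetry can be illustrated with a toy lookup class. Here localBacking and remoteStore are hypothetical stand-ins for the local backing map and the DRSPushPullTable; this is not the DistributedMap implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Toy model only: shows why containsKey()/keySet() miss broadcast keys
// while get() still finds them.
class ReplicatedLookup {
    final Map<String, Object> localBacking = new HashMap<>(); // local backing map
    final Map<String, Object> remoteStore = new HashMap<>();  // entries known only by broadcast ID

    // containsKey() and keySet() consult only the local backing map.
    boolean containsKey(String key) { return localBacking.containsKey(key); }
    Set<String> keySet() { return localBacking.keySet(); }

    // get() falls through to the server that broadcast the key.
    Object get(String key) {
        Object v = localBacking.get(key);
        return (v != null) ? v : remoteStore.get(key);
    }
}
```

The practical consequence is that a null check on get() is the reliable way to test for a replicated entry, rather than containsKey().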
The dynamic cache service broadcasts cache replication updates asynchronously, based on the batch update interval that is set through the console, rather than sending them immediately when they are created. Invalidations are sent immediately.
In PUSH/PULL mode, the cached object is kept locally on the server that creates it; the other servers store only the cache ID, in the DRSPushPullTable table. If a remote server needs the object, it requests the object by cache ID, or name, from the creating server.
Each cache instance has one DRSPushPullTable table that is associated with it.
The following conditions cause the DRSPushPullTable table to grow too big:
- There are too many entries being shared with other servers.
- Not many entries are expiring.
- If you are using the disk offload feature, the disk scan is not running often enough to evict the expired entries.
Use the following suggestions to resolve the issue:
- Increase the heap size to 1.5 GB or 2 GB, if possible.
- Maintain a better distribution for the expiration times of entries, for example...
- 20% of the entries never expire.
- 30% of the entries expire in 3600 seconds.
- 30% of the entries expire in 600 seconds.
- 20% of the entries expire in 60 seconds.
- When you use the disk offload feature in WAS V6.1, adjust the disk cleanup frequency, in minutes, to an optimal value for the disk cache performance settings, which are low, balanced, and custom. For example, choose an interval at which about 20% of the entries expire.
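A staggered expiration distribution like the one suggested above can be expressed in cachespec.xml through per-entry timeouts. The servlet names below are hypothetical:

```xml
<cache>
  <!-- Short-lived entries, for example the roughly 20% that expire in 60 seconds -->
  <cache-entry>
    <class>servlet</class>
    <name>/headlines.jsp</name>
    <cache-id>
      <component id="" type="pathinfo"/>
      <timeout>60</timeout>
    </cache-id>
  </cache-entry>
  <!-- Longer-lived entries that expire in 3600 seconds -->
  <cache-entry>
    <class>servlet</class>
    <name>/catalog.jsp</name>
    <cache-id>
      <component id="" type="pathinfo"/>
      <timeout>3600</timeout>
    </cache-id>
  </cache-entry>
</cache>
```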
See also...