IBM User Guide for Java V7 on Windows > Troubleshooting and support > Using diagnostic tools > Shared classes diagnostic data > Deploying shared classes
Cache performance
Shared classes use optimizations to maintain performance under most circumstances. However, there are configurable factors that can affect shared classes performance.
Use of Java archive and compressed files
The cache keeps itself up-to-date with file system updates by constantly checking file system timestamps against the values in the cache.
When a class loader opens and reads a .jar file, a lock can be obtained on the file. Shared classes assume that the .jar file remains locked and therefore does not need to be checked continuously.
.class files can be created or deleted from a directory at any time. If you include a directory name in a classpath, shared classes performance can be affected because the directory is constantly checked for classes. The impact on performance might be greater if the directory name is near the beginning of the classpath string. For example, consider a classpath of /dir1:jar1.jar:jar2.jar:jar3.jar. When loading any class from the cache using this classpath, the directory /dir1 must be checked for the existence of the class on every class load. This check also requires fabricating the expected directory from the package name of the class, which can be an expensive operation.
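One way to limit this cost, sketched below, is to order the classpath so that directory entries come after the jar entries; the jar names, directory, and `MyApp` class are illustrative placeholders, not values from this guide.

```shell
# Hypothetical invocation: the /dir1 directory entry is placed last, so the
# per-load directory existence check happens only after the jar entries.
java -cp jar1.jar:jar2.jar:jar3.jar:/dir1 -Xshareclasses MyApp
```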
Advantages of not filling the cache
A full shared classes cache is not a problem for any JVMs connected to it. However, a full cache can place restrictions on how much sharing can be performed by other JVMs or applications.
ROMClasses are added to the cache and are all unique. Metadata is added describing the ROMClasses and there can be multiple metadata entries corresponding to a single ROMClass. For example, if class A is loaded from myApp1.jar and another JVM loads the same class A from myOtherApp2.jar, only one ROMClass exists in the cache. However there are two pieces of metadata that describe the source locations.
If many classes are loaded by an application and the cache is 90% full, another installation of the same application can use the same cache. The extra information that must be added about the classes from the second application is minimal.
After the extra metadata has been added, both installations can share the same classes from the same cache. However, if the first installation fills the cache completely, there is no room for the extra metadata. The second installation cannot share classes because it cannot update the cache. The same limitation applies to classes that become stale and are redeemed; see Redeeming stale classes. Redeeming a stale class requires a small quantity of metadata to be added to the cache. If the metadata cannot be added because the cache is full, the class cannot be redeemed.
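To check how much free space remains in an existing cache before deploying a second installation against it, the cache statistics can be printed. A sketch, where the cache name `myCache` is a placeholder:

```shell
# Print statistics for the named cache, including how full it is, then exit.
java -Xshareclasses:name=myCache,printStats
```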
Read-only cache access
If the JVM opens a cache with read-only access, it does not obtain any operating system locks to read the data. This behavior can make cache access slightly faster. However, if any containers of cached classes are changed or moved on a classpath, then sharing is disabled for all classes on that classpath. There are two reasons why sharing is disabled:
- The JVM is unable to update the cache with the changes, which might affect other JVMs.
- The cache code does not continually recheck for updates to containers every time a class is loaded because this activity is too expensive.
Page protection
By default, the JVM protects all cache memory pages using page protection to prevent accidental corruption by other native code running in the process. If any native code attempts to write to the protected page, the process ends, but all other JVMs are unaffected.
The only page not protected by default is the cache header page, because the cache header must be updated much more frequently than the other pages. The cache header can be protected by using the -Xshareclasses:mprotect=all option. This option has a small effect on performance and is not enabled by default.
Switching off memory protection completely using -Xshareclasses:mprotect=none does not provide significant performance gains.
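The two non-default settings can be sketched as follows; the cache name `myCache` and the `MyApp` class are illustrative placeholders.

```shell
# Protect every cache page, including the header (small performance cost):
java -Xshareclasses:name=myCache,mprotect=all MyApp

# Disable page protection entirely (little performance gain; not recommended):
java -Xshareclasses:name=myCache,mprotect=none MyApp
```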
Caching Ahead Of Time (AOT) code
The JVM might automatically store a small amount of Ahead Of Time (AOT) compiled native code in the cache when it is populated with classes. The AOT code enables any subsequent JVMs attaching to the cache to start faster. AOT data is generated for methods that are likely to be most effective.
You can use the -Xshareclasses:noaot, -Xscminaot, and -Xscmaxaot options to control the use of AOT code in the cache. See JVM command-line options for more information.
In general, the default settings provide significant startup performance benefits and use only a small amount of cache space. In some cases, for example, running the JVM without the JIT, there is no benefit gained from the cached AOT code. In these cases, turn off caching of AOT code.
To diagnose AOT issues, use the -Xshareclasses:verboseAOT command-line option. This option generates messages when AOT code is found or stored in the cache.
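For example, the two cases described above might look like this; the cache name and `MyApp` class are illustrative placeholders.

```shell
# With the JIT disabled (-Xint), cached AOT code gives no benefit,
# so caching of AOT code is turned off:
java -Xint -Xshareclasses:name=myCache,noaot MyApp

# Trace when AOT code is found in, or stored to, the cache:
java -Xshareclasses:name=myCache,verboseAOT MyApp
```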
Caching JIT data
The JVM can automatically store a small amount of JIT data in the cache when it is populated with classes. The JIT data enables any subsequent JVMs attaching to the cache to either start faster, run faster, or both.
You can use the -Xshareclasses:nojitdata, -Xscminjitdata<size>, and -Xscmaxjitdata<size> options to control the use of JIT data in the cache.
In general, the default settings provide significant performance benefits and use only a small amount of cache space.
Making the most efficient use of cache space
A shared class cache is a finite size and cannot grow. The JVM makes more efficient use of cache space by sharing strings between classes, and ensuring that classes are not duplicated. However, there are also command-line options that optimize the cache space available.
-Xscminaot<size> and -Xscmaxaot<size> place lower and upper limits on the amount of AOT data the JVM can store in the cache. -Xshareclasses:noaot prevents the JVM from storing any AOT data.
-Xscminjitdata<size> and -Xscmaxjitdata<size> place lower and upper limits on the amount of JIT data the JVM can store in the cache. -Xshareclasses:nojitdata prevents the JVM from storing any JIT data.
-Xshareclasses:nobootclasspath disables the sharing of classes on the boot classpath, so that only classes from application class loaders are shared. There are also optional filters that can be applied to Java™ classloaders to place custom limits on the classes that are added to the cache.
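Combining these options might look like the following sketch; the size limits, cache name, and `MyApp` class are illustrative only.

```shell
# Cap AOT data at 1 MB and JIT data at 2 MB, and exclude bootstrap
# classes from sharing, leaving more cache space for application classes:
java -Xshareclasses:name=myCache,nobootclasspath -Xscmaxaot1m -Xscmaxjitdata2m MyApp
```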
Very long classpaths
When a class is loaded from the shared class cache, the stored classpath and the class loader classpath are compared. The class is returned by the cache only if the classpaths "match". The match need not be exact, but the result should be the same as if the class were loaded from disk.
Matching very long classpaths is initially expensive, but successful and failed matches are remembered. Therefore, loading classes from the cache using very long classpaths is much faster than loading from disk.
Growing classpaths
Where possible, avoid gradually growing a classpath in a URLClassLoader using addURL(). Each time an entry is added, an entire new classpath must be added to the cache.
For example, if a classpath with 50 entries is grown using addURL(), you might create 50 unique classpaths in the cache. This gradual growth uses more cache space and has the potential to slow down classpath matching when loading classes.
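When the full set of entries is known up front, pass them to the URLClassLoader constructor in one step, so the cache stores a single classpath. A minimal sketch; the jar names are hypothetical placeholders:

```java
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

public class ClasspathExample {
    // Build the class loader with the complete classpath up front, so the
    // shared cache stores one classpath instead of one per addURL() call.
    static URLClassLoader buildLoader() throws MalformedURLException {
        URL[] classpath = new URL[] {
            Paths.get("lib", "app.jar").toUri().toURL(),
            Paths.get("lib", "util.jar").toUri().toURL(),
        };
        return new URLClassLoader(classpath);
    }

    public static void main(String[] args) throws IOException {
        try (URLClassLoader loader = buildLoader()) {
            // The cache sees a single, fixed classpath with two entries.
            System.out.println(loader.getURLs().length); // prints 2
        }
    }
}
```

If entries genuinely arrive over time, batching them before constructing the loader is cheaper for the cache than repeated addURL() calls.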
Concurrent access
A shared class cache can be read and updated by any number of connected JVMs. Any number of JVMs can read from the cache while a single JVM writes to it.
When multiple JVMs start at the same time and no cache exists, only one JVM succeeds in creating the cache. Once the cache is created, the other JVMs start to populate it with the classes they require. These JVMs might try to populate the cache with the same classes.
Multiple JVMs concurrently loading the same classes are coordinated to a certain extent by the cache itself. This behavior reduces the effect of many JVMs trying to load and store the same class from disk at the same time.
Class GC with shared classes
Running with shared classes has no effect on class garbage collection. Class loaders that load classes from the shared class cache can be garbage collected in the same way as class loaders that load classes from disk. If a class loader is garbage collected, the ROMClasses that it added to the cache persist.
Class Debug Area
A portion of the shared classes cache is reserved for storing the class attribute information LineNumberTable and LocalVariableTable during JVM debugging. By storing these attributes in a separate region, the operating system can decide whether to keep the region in memory or on disk, depending on whether debugging is taking place.
You can control the size of the Class Debug Area using the -Xscdmx command-line option. Use any of the following variations to specify a Class Debug Area with a size of 1 MB:
- -Xscdmx1048576
- -Xscdmx1024k
- -Xscdmx1m
The number of bytes passed to -Xscdmx must always be less than the total cache size. This value is always rounded down to the nearest multiple of the system page size.
The amount of LineNumberTable and LocalVariableTable attribute information stored varies between applications. If the Class Debug Area becomes full, use -Xscdmx to increase its size. If the Class Debug Area is consistently underused, specify a smaller size to increase the space available for other artifacts elsewhere in the cache.
The size of the Class Debug Area affects available space for other artifacts, like AOT code, in the shared classes cache. Performance might be adversely affected if the cache is not sized appropriately. You can improve performance by using the -Xscdmx option to resize the Class Debug Area, or by using the -Xscmx option to create a larger cache.
If you start the JVM with -Xnolinenumbers when creating a new shared classes cache, the Class Debug Area is not created. The -Xnolinenumbers option advises the JVM not to load any class debug information, so the region is not needed. However, if -Xscdmx is also used on the command line to specify a nonzero debug area size, a debug area is created despite the use of -Xnolinenumbers.
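The interaction of these two options can be sketched as follows; the cache name and `MyApp` class are illustrative placeholders.

```shell
# No Class Debug Area is created, because debug information is not loaded:
java -Xnolinenumbers -Xshareclasses:name=myCache MyApp

# A 1 MB Class Debug Area is still created, because -Xscdmx is explicit:
java -Xnolinenumbers -Xscdmx1m -Xshareclasses:name=myCache MyApp
```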
Raw Class Data Area
When a cache is created with -Xshareclasses:enableBCI, a portion of the shared classes cache is reserved for storing the original class data bytes. Storing this data in a separate region allows the operating system to decide whether to keep the region in memory or on disk, depending on whether the data is being used. Because the amount of raw class data stored in this area can vary for an application, the size of the Raw Class Data Area can be modified using the rcdSize suboption. For example, these variations specify a Raw Class Data Area with a size of 1 MB:
- -Xshareclasses:enableBCI,rcdSize=1048576
- -Xshareclasses:enableBCI,rcdSize=1024k
- -Xshareclasses:enableBCI,rcdSize=1m
The number of bytes passed to rcdSize must always be less than the total cache size. This value is always rounded down to the nearest multiple of the system page size. As with the Class Debug Area, the size of this area affects the space available for other artifacts, such as AOT code, in the shared classes cache. Performance might be adversely affected if the cache is not sized appropriately. When the cache is created without enableBCI, the default size of the Raw Class Data Area is 0 bytes. However, when enableBCI is used, a portion of the cache is automatically reserved.