Java memory tuning tips

 


Overview

Garbage collection normally consumes from 5% to 20% of the total execution time of a properly functioning application. To determine the percentage of time the JVM spends in garbage collection, divide the time it took to complete the collection by the length of time since the last allocation failure and multiply the result by 100. For example, if a collection took 83.29 ms and occurred 3724.32 ms after the previous allocation failure:

83.29 / 3724.32 * 100 = 2.236 percent
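
The same percentage can also be approximated programmatically with the standard java.lang.management API, which is not described in this article. The following is a minimal sketch; the class name and the ten-second sampling interval are arbitrary choices for illustration.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Minimal sketch: approximate the percentage of wall-clock time spent in
    // garbage collection over a sampling interval.
    public class GcOverheadSampler {
        public static void main(String[] args) throws InterruptedException {
            long gcBefore = totalGcTimeMs();
            long wallBefore = System.currentTimeMillis();

            Thread.sleep(10_000);   // run or wait for the workload of interest here

            long gcDelta = totalGcTimeMs() - gcBefore;
            long wallDelta = System.currentTimeMillis() - wallBefore;
            System.out.printf("GC overhead: %.3f percent%n", gcDelta * 100.0 / wallDelta);
        }

        // Accumulated collection time across all collectors, in milliseconds.
        private static long totalGcTimeMs() {
            long total = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                long time = gc.getCollectionTime();   // -1 if the collector does not report it
                if (time > 0) {
                    total += time;
                }
            }
            return total;
        }
    }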

Use garbage collection to evaluate application performance health. By monitoring garbage collection during the execution of a fixed workload, you gain insight as to whether the application is over-utilizing objects. Garbage collection can even detect the presence of memory leaks.

You can monitor garbage collection statistics using...

Before you begin monitoring, set the minimum and maximum heap sizes to the same value.
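
For example, when launching the JVM directly with the java command (in an application server, the equivalent values are set in the server's JVM configuration), the following illustrative options fix the heap at 512 MB and write garbage collection activity to the standard output; MyApp is a placeholder class name:

    java -verbose:gc -Xms512m -Xmx512m MyApp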

 

Detect over-utilization of objects

Use Tivoli Performance Viewer (TPV) or the IBM Support Assistant utilities to check whether the application is overusing objects by observing the counters for the JVM runtime.

To enable the Java virtual machine profiler interface (JVMPI) counters:

  1. Set the -XrunpmiJvmpiProfiler command line option.

  2. Set the JVM module maximum level in PMI.

The optimum average time between garbage collections is at least 5 to 6 times the average duration of a single garbage collection; for example, if a collection averages 100 ms, collections should be at least 500 to 600 ms apart. If you do not achieve this ratio, the application is spending more than 15% of its time in garbage collection.

To clear the garbage collection bottleneck, optimize the application by implementing object caches and pools. Use a Java profiler to determine which objects to target.
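
To illustrate the pooling approach, the following is a minimal sketch of an object pool that reuses byte buffers instead of allocating a new one for each request. The class name and pool bound are illustrative, not part of any product API; a production pool also needs a reset step for pooled objects and a policy suited to the application's threading model.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Minimal object-pool sketch: reusing buffers lowers the allocation rate,
    // which in turn reduces garbage collection frequency.
    public class BufferPool {
        private final Deque<byte[]> pool = new ArrayDeque<>();
        private final int bufferSize;
        private final int maxPooled;

        public BufferPool(int bufferSize, int maxPooled) {
            this.bufferSize = bufferSize;
            this.maxPooled = maxPooled;
        }

        // Hand out a pooled buffer if one is available, otherwise allocate a new one.
        public synchronized byte[] acquire() {
            byte[] buf = pool.pollFirst();
            return (buf != null) ? buf : new byte[bufferSize];
        }

        // Return a buffer for reuse; discard it if the pool is already full.
        public synchronized void release(byte[] buf) {
            if (pool.size() < maxPooled) {
                pool.addFirst(buf);
            }
        }
    }

Pooling trades some retained memory and bookkeeping for a lower allocation rate, so it pays off only for objects that are expensive to create or allocated very frequently.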

If you cannot optimize the application, add memory, processors, and clones.

Additional memory allows each clone to maintain a reasonable heap size. Additional processors allow the clones to run in parallel.

 

Detect memory leaks

With a memory leak, garbage collection occurs more and more frequently until the heap is exhausted and the Java code fails with a fatal OutOfMemoryError.

Memory leaks occur when an object that is no longer needed still has references to it that are never freed. Memory leaks most commonly occur in collection classes, such as Hashtable, because the table always holds a reference to the object, even after the real references are deleted.
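
The following minimal sketch shows this pattern; the class and method names are hypothetical. Every value put into the static Hashtable remains reachable, so it can never be garbage collected unless the code also removes the entry.

    import java.util.Hashtable;

    // Illustrative only: a registry that is filled but never cleaned out.
    public class SessionRegistry {
        // The static table keeps a reference to every value ever added,
        // so none of those values can be garbage collected.
        private static final Hashtable<String, Object> CACHE = new Hashtable<>();

        public static void register(String id, Object data) {
            CACHE.put(id, data);
        }

        // Without a corresponding remove, the table grows for the life of the JVM.
        public static void unregister(String id) {
            CACHE.remove(id);   // the fix: release the reference when it is no longer needed
        }
    }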

High workload often causes applications to crash immediately after deployment in the production environment. This is especially true for leaking applications, where the high workload accelerates the growth of the leak until a memory allocation failure occurs.

Memory leak testing measures the kilobytes of memory that cannot be garbage collected and compares these amounts with the expected sizes of useful and unusable memory. This task is easier if the numbers are magnified, resulting in larger gaps and easier identification of inconsistencies.

Tivoli Performance Viewer can help find memory leaks. For best results, repeat experiments with increasing duration, such as 1000, 2000, and 4000 page requests. The Tivoli Performance Viewer graph of used memory should have a sawtooth shape; each drop on the graph corresponds to a garbage collection. There is a memory leak if the amount of memory in use immediately after each garbage collection (the floor of the sawtooth) increases over time.

Also, look at the difference between the number of objects allocated and the number of objects freed. If the gap between the two increases over time, there is a memory leak.
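
Outside of Tivoli Performance Viewer, the floor-of-the-sawtooth check can be sketched with the standard Runtime API by sampling used heap immediately after a requested collection. In this sketch the class name and one-minute interval are arbitrary, and System.gc() is only a request that the JVM is free to ignore.

    // Rough sketch of the "rising floor" check: print used heap right after a
    // requested collection and watch whether the value trends upward over time.
    public class HeapFloorMonitor {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            while (true) {
                System.gc();                                   // request a collection
                long usedAfterGc = rt.totalMemory() - rt.freeMemory();
                System.out.println("used heap after GC (bytes): " + usedAfterGc);
                Thread.sleep(60_000);
            }
        }
    }

If the printed value climbs from sample to sample under a steady workload, the heap floor is rising and a leak is likely.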

Heap consumption indicating a possible leak during a heavy workload (the application server is consistently near 100% CPU utilization), yet appearing to recover during a subsequent lighter or near-idle workload, is an indication of heap fragmentation. Heap fragmentation can occur when the JVM can free sufficient objects to satisfy memory allocation requests during garbage collection cycles, but the JVM does not have the time to compact small free memory areas in the heap to larger contiguous spaces.

Another form of heap fragmentation occurs when small objects (less than 512 bytes) are freed. The objects are freed, but the storage is not recovered, resulting in memory fragmentation until a heap compaction has been run.

Heap fragmentation can be reduced by forcing compactions to occur, but there is a performance penalty for doing this. Use the Java -X command to see the list of memory options.
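
For example, the following command prints the non-standard (-X) options that the local JVM supports; the exact list, including any options that control compaction, varies by JVM vendor and version:

    java -X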

 

Java heap parameters

The Java heap parameters also influence the behavior of garbage collection. Increasing the heap size supports more object creation. Because a large heap takes longer to fill, the application runs longer before a garbage collection occurs. However, a larger heap also takes longer to compact and causes garbage collection to take longer.

For performance analysis, the initial and maximum heap sizes should be equal.

When tuning a production system where the working set size of the Java application is not understood, a good starting value for the initial heap size is 25% of the maximum heap size. The JVM then tries to adapt the size of the heap to the working set size of the application.
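
As an illustration (the 1024 MB maximum is an arbitrary figure, not a recommendation):

    -Xms1024m -Xmx1024m     initial heap equal to the maximum, for performance analysis
    -Xms256m  -Xmx1024m     initial heap at 25% of the maximum, a starting point for production tuning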

Consider three CPU profiles, each running the same fixed workload with different Java heap settings. In all three configurations, the total time spent in garbage collection is approximately 15%.

If you expand the heap too aggressively, paging can occur. Use vmstat or Windows 2000/2003 Performance Monitor to check for paging, and, if found, reduce the size of the heap or add more memory to the system.

When all the runs are finished, compare the following statistics:

  1. Number of garbage collection calls

  2. Average duration of a single garbage collection call

  3. Average time between calls

If the application is not over-utilizing objects and has no memory leaks, it reaches a state of steady memory utilization, and garbage collection occurs less frequently and for shorter durations.

If the heap free space settles at 85% or more, consider decreasing the maximum heap size value, because the application server and the application are under-utilizing the memory allocated for the heap.


 

Related Tasks

Tuning the application serving environment
Performance: Resources for learning.
IBM Support

 



 

 

IBM is a trademark of the IBM Corporation in the United States, other countries, or both.
Tivoli is a trademark of the IBM Corporation in the United States, other countries, or both.