Frequently asked questions about the Garbage Collector

This section answers common questions about default values, Garbage Collector (GC) policies, GC helper threads, Mark Stack Overflow, heap operation, and out-of-memory conditions.

What are the default heap and native stack sizes?

See Default settings for the JVM.

What is the difference between the GC policies gencon, balanced, optavgpause, and optthruput?

gencon
The gencon policy (default) uses a concurrent mark phase combined with generational garbage collection to help minimize the time that is spent in any garbage collection pause. This policy is particularly useful for applications with many short-lived objects, such as transactional applications. Pause times can be significantly shorter than with the optthruput policy, while still producing good throughput. Heap fragmentation is also reduced.
balanced
The balanced policy uses mark, sweep, compact and generational style garbage collection. The concurrent mark phase is disabled; concurrent garbage collection technology is used, but not in the way that concurrent mark is implemented for other policies. The balanced policy uses a region-based layout for the Java™ heap. These regions are individually managed to reduce the maximum pause time on large heaps and increase the efficiency of garbage collection. The policy tries to avoid global collections by matching object allocation and survival rates. If you have problems with application pause times that are caused by global garbage collections, particularly compactions, this policy might improve application performance. If you are using large systems that have Non-Uniform Memory Architecture (NUMA) characteristics (x86 and POWER platforms only), the balanced policy might further improve application throughput. For more information about this policy, including when to use it, see Balanced Garbage Collection policy.
optavgpause
The optavgpause policy uses concurrent mark and concurrent sweep phases. Pause times are shorter than with optthruput, but application throughput is reduced because some garbage collection work is taking place while the application is running. Consider using this policy if you have a large heap size (available on 64-bit platforms), because this policy limits the effect of increasing heap size on the length of the garbage collection pause. However, if your application uses many short-lived objects, the gencon policy might produce better performance.
optthruput
The optthruput policy disables the concurrent mark phase. The application stops during global garbage collection, so long pauses can occur. This configuration is typically used for large-heap applications when high application throughput, rather than short garbage collection pauses, is the main performance goal. If your application cannot tolerate long garbage collection pauses, consider using another policy, such as gencon.
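
For example, a policy is selected on the java command line with the -Xgcpolicy option. In the following sketch, MyApp is a placeholder class name:

    java -Xgcpolicy:gencon MyApp
    java -Xgcpolicy:balanced MyApp
    java -Xgcpolicy:optavgpause MyApp
    java -Xgcpolicy:optthruput MyApp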

What is the default GC mode (gencon, optavgpause, or optthruput)?

The default is gencon; that is, the combined use of the generational collector and concurrent marking.

How many GC helper threads are created or "spawned"? What is their work?

The garbage collector creates n-1 helper threads, where n is the number of GC threads specified by the -Xgcthreads<number> option. See Garbage Collector command-line options for more information. If you specify -Xgcthreads1, the garbage collector does not create any helper threads. Setting the -Xgcthreads option to a value that is greater than the number of processors on the system does not improve performance, but might alleviate mark-stack overflows, if your application suffers from them.

These helper threads work with the main GC thread during the parallel phases of garbage collection, such as parallel mark, parallel sweep, and parallel compaction.
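
For example, the following invocation requests four GC threads, so the garbage collector creates three helper threads in addition to the main GC thread (MyApp is a placeholder class name):

    java -Xgcthreads4 MyApp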

What is Mark Stack Overflow (MSO)? Why is MSO bad for performance?

Work packets are used for tracing all object reference chains from the roots. Each such reference that is found is pushed onto the mark stack so that it can be traced later. The number of work packets allocated is based on the heap size and therefore is finite and can overflow. This situation is called Mark Stack Overflow (MSO). The algorithms to handle this situation are expensive in processing terms, and therefore MSO has a large impact on GC performance.

How can I prevent Mark Stack Overflow?

MSO cannot always be prevented, but the following suggestions might reduce the likelihood of it occurring:

  • Increase the number of GC helper threads by using the -Xgcthreads command-line option.
  • Decrease the size of the Java heap using the -Xmx setting.
  • Use a small initial value for the heap or use the default.
  • Reduce the number of objects the application allocates.

If MSO occurs, you see entries in the -verbose:gc output of the following form:

    <warning details="work stack overflow" count="<mso_count>"
             packetcount="<allocated_packets>" />

where <mso_count> is the number of times that MSO has occurred and <allocated_packets> is the number of work packets that were allocated. Specifying a larger number of work packets, for example 50% more, with the -Xgcworkpackets<number> option can reduce the likelihood of MSO.
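
As an illustration, if the warning reported packetcount="4096", you might specify roughly 50% more work packets. The value 6144 below is an assumed example rather than a recommended setting, and MyApp is a placeholder class name:

    java -verbose:gc -Xgcworkpackets6144 MyApp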

When and why does the Java heap expand?

The JVM starts with a small default Java heap, and it expands the heap based on the allocation requests made by an application until it reaches the value specified by -Xmx. Expansion occurs after GC if GC is unable to free enough heap storage for an allocation request. Expansion also occurs if the JVM determines that expanding the heap is required for better performance.
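
For example, the following sketch sets an initial heap of 64 MB with -Xms and allows expansion up to 512 MB with -Xmx. The sizes are illustrative only, and MyApp is a placeholder class name:

    java -Xms64m -Xmx512m MyApp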

When does the Java heap shrink?

Heap shrinkage occurs when GC determines that heap storage space is available and that releasing some heap memory is beneficial for system performance. Shrinkage occurs after GC, while all the application threads are still suspended.
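
As an assumption-labeled sketch, expansion and shrinkage are influenced by the free-space targets set with the -Xminf and -Xmaxf options. The values below are illustrative only, and MyApp is a placeholder class name:

    java -Xminf0.3 -Xmaxf0.6 -Xmx512m MyApp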

Does GC guarantee that it clears all the unreachable objects?

GC guarantees only that all the objects that were not reachable at the beginning of the mark phase are collected. While running concurrently, the GC guarantees only that all the objects that were unreachable when concurrent mark began are collected. Some objects might become unreachable during concurrent mark, but they are not guaranteed to be collected.

I am getting an OutOfMemoryError. Does this mean that the Java heap is exhausted?

Not necessarily. An OutOfMemoryError can occur even when the Java heap has free space. The error might occur for several reasons:

  • A shortage of memory for other operations of the JVM.
  • The failure of some other memory allocation; the JVM throws an OutOfMemoryError in such situations.
  • Excessive memory allocation in other parts of the process, outside the JVM, when the JVM is only one part of the process rather than the entire process (for example, a JVM created through JNI).
  • The heap has been fully expanded, and an excessive amount of time (95%) is being spent in the GC. This check can be disabled by using the -Xdisableexcessivegc option, as shown in the example after this list.
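
For example, to turn off the excessive GC check (MyApp is a placeholder class name):

    java -Xdisableexcessivegc MyApp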

When I see an OutOfMemoryError, does that mean that the Java program exits?

Not always. Java programs can catch the OutOfMemoryError that is thrown, and (possibly after releasing references to some allocated objects) continue to run.
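
The following minimal sketch (the class and field names are invented for illustration) shows a program catching OutOfMemoryError, releasing a cached reference so that the GC can reclaim the space, and continuing. Running it with a small heap, for example -Xmx64m, shows the behavior quickly:

    import java.util.ArrayList;
    import java.util.List;

    public class OomRecoveryExample {
        // A cache that can be dropped under memory pressure (illustrative only).
        private static List<byte[]> cache = new ArrayList<>();

        public static void main(String[] args) {
            try {
                // Allocate until the Java heap is exhausted.
                while (true) {
                    cache.add(new byte[1024 * 1024]);
                }
            } catch (OutOfMemoryError e) {
                // Release the references so that the GC can reclaim the space,
                // then continue running.
                cache = null;
                System.out.println("Recovered from OutOfMemoryError; continuing.");
            }
        }
    }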

In -verbose:gc output, I sometimes see more than one GC cycle for one allocation failure. Why?

You see this behavior when the GC decides to clear all soft references. The GC runs once to do the regular garbage collection, and might run again to clear soft references. Therefore, you might see more than one GC cycle for one allocation failure.
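
As an illustrative sketch of the soft references involved (the names are invented for this example), a SoftReference can be cleared by the GC when memory is tight, which is why an extra collection cycle can run before an allocation failure is reported:

    import java.lang.ref.SoftReference;

    public class SoftReferenceExample {
        public static void main(String[] args) {
            // Softly reachable data: the GC may clear it before throwing OutOfMemoryError.
            SoftReference<byte[]> softCache = new SoftReference<>(new byte[10 * 1024 * 1024]);

            // Later, check whether the GC has cleared the reference.
            byte[] data = softCache.get();
            if (data == null) {
                System.out.println("Soft reference was cleared by the GC.");
            } else {
                System.out.println("Soft reference still holds " + data.length + " bytes.");
            }
        }
    }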

