Tune Java virtual machines

 


  1. Overview
  2. JVM tuning
  3. Flat Heap
  4. Generational heaps
  5. The young generation
  6. The tenured generation
  7. The permanent generation
  8. Tuning for throughput
  9. Heap sizing
  10. Sizing generational heaps
  11. Sizing HotSpot JVM generations
  12. Sizing the IBM Java 1.5 JVM
  13. Interpreting verbose:gc output
  14. Optimization Procedure

 

Overview

The appserver is a Java based process and requires a Java virtual machine (JVM) environment to run and support the Java applications running on the appserver.

A JVM is a compiled binary program that runs a virtual machine (VM) process, which you can think of as an OS abstraction layer. This abstraction layer provides the basis for the portability of Java code: all JVMs interpret compiled Java bytecode in the same way.

The JVM starts when the appserver starts, and it allots an area of memory for the data objects created at runtime. The reserved memory area is called the "heap".

A Java runtime environment provides the execution environment for Java based applications and servers such as WAS.

If you install WAS on Sun Solaris or HP-UX, the HotSpot JVM is installed; otherwise, the IBM JVM is installed. The JVM version that you get depends on your server version. In WAS V5.0 through V6.0, you get the Java 1.4 JVM. Starting with WAS V6.1, you get the Java 1.5 JVM.

To determine the JVM provider on which your appserver is running,...

cd APP_SERVER_ROOT/java/bin
java -fullversion

In response to this command, the JVM writes information about itself, including the JVM provider, to the console.

All JVMs use a Just-In-Time (JIT) compiler to compile Java bytecode into native instructions at runtime.

 

JVM tuning

  1. Ensure the most recent service update is installed on your system. Almost every new service level includes JVM performance improvements.

  2. JVM garbage collection provides one of the biggest opportunities for improving JVM performance.

  3. Tune class loading

  4. Start up versus runtime performance optimization

The sections that follow provide specific instructions on how to perform these types of tuning for each JVM. The steps do not have to be performed in any specific order.

 

Flat Heap

The IBM Java 1.4 JVM uses a flat heap, which is a single unstructured area of memory used for all of an application's data objects. When the IBM Java 1.4 JVM starts up, it allocates a certain amount of memory for the heap and places all of the objects it creates in the heap. When the heap is full, an attempt to place an object in the heap fails, causing the JVM to run the garbage collection process.

 

Generational heaps: The Sun HotSpot JVM and the IBM Java 1.5 JVM

The HotSpot JVM supports a flat heap model; however, in Java 1.2, Sun introduced a heap based on a generational model which, unlike the flat heap model, partitions the heap into sections for three generations of data objects...

The generational heap is the default heap in current releases of the HotSpot JVM. Each section of the heap hosts different objects at different times, depending on several factors. The generational heap allows certain objects to move through the different partitions with a series of promotions; this mechanism enables the garbage collection process to skip certain objects when checking objects for references. Whether this system is better than the flat heap model is the subject of much debate.

The IBM Java 1.5 JVM also supports a generational heap model (although that is not the default setting of this JVM). The IBM Java 1.5 JVM generational model behaves similarly to the HotSpot JVM model; the exception is that the IBM version doesn't have a section for a permanent generation. It supports a young generation optimized for applications that create many objects that die young.

 

The young generation

The young generation is the space where all new objects begin their lives. In the HotSpot JVM, the young generation is divided into three parts: the eden space and two survivor spaces.

Sometimes administrators refer to the entire young generation space as the eden space, but that is inaccurate. In the IBM JVM, the young generation is called "the nursery", and it is divided into two spaces, which take turns being the allocate and survivor spaces.

By default, the HotSpot JVM's young generation is one-third the size of the total heap. You can use several parameters to configure the size of the young generation and its division into eden and survivor spaces.

The IBM Java 1.5 JVM has no default setting for the nursery; the garbage collector determines the nursery size dynamically, based on the overall size of the heap. It, too, is configurable.

When the JVM needs to place a new object in the heap, it checks for enough contiguous space in the eden/nursery space and, if it finds enough space, puts the object there. The JVM continues placing new objects in this space until there is no longer enough space for a new object. At that time, the garbage collector runs a minor collection, which is a normal garbage collection cycle that is restricted to the young generation space.

In the HotSpot JVM, the survivor spaces are staging areas for objects. As the garbage collector checks the eden space and finds objects that are still referenced, it copies them all to one of the survivor spaces. The next time the garbage collector runs a minor collection, in addition to checking the eden space again, it copies all of the remaining live objects from that survivor space to the other survivor space.

In this way, the garbage collector switches objects between the survivor spaces in order to give the objects a chance to go unused and be collected (deleted) before they are copied to the tenured generation space.

The IBM Java 1.5 JVM works much the same way, except that after it checks one of the nursery spaces for objects that are still in use, it simply copies them to the other space.

After an object has survived enough garbage collection runs (i.e., has moved between the survivor spaces enough times or, in the case of the IBM JVM, simply copied back and forth between the two spaces), the JVM moves the object into the tenured generation. The garbage collector dynamically determines when an object is ready to be promoted.

 

The tenured generation

The tenured generation space is where the JVM places all of the veteran objects that have survived the young generation. In the HotSpot JVM, this generation is, by default, twice the size of the young generation.

The IBM Java 1.5 JVM garbage collector does not enforce a default nursery:tenured size ratio.

The advantage of having an object in the tenured space is that the garbage collector does not need to spend time checking tenured objects for references. Tenured objects are assumed to be necessary and thus are never checked during a minor collection. Another advantage of having objects in this space is that it limits the amount of space that the garbage collector has to traverse when looking for unneeded objects, thus saving garbage collection time.

When the number of objects in the tenured space becomes so great that the garbage collector cannot allocate enough contiguous space to promote an object from the young generation to the tenured space, the garbage collector runs a major collection. In a major collection, also called a full garbage collection, the garbage collector checks the entire heap.

 

The permanent generation

The permanent generation contains the internal representation of every class that the HotSpot JVM loads and instantiates into objects in the heap (remember that the IBM Java 1.5 JVM generational model has only nursery and tenured generations). The permanent generation also contains representations of internal objects such as java.lang.Object. The more classes loaded in the JVM at any given time, the larger the permanent generation. Even so, the permanent generation is usually the smallest area of the heap; in fact, the total heap size does not include the permanent generation, which is sized independently of the other generations. I recommend sizing the permanent generation relative to the maximum heap, setting its maximum to one-fourth the maximum heap size, unless you run into problems.

Garbage collection in the permanent generation space works the same as in the other generations. When a class goes out of scope, the JVM no longer needs to hold the internal, reflective information for the class, so the garbage collector clears the information from the permanent generation space. Generally, however, you will find that this generation does not change size very often.

 

Tuning for throughput

Throughput is a percentage measure of the time the JVM spends working on the application as opposed to time spent on managing the heap.
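As a minimal sketch of this definition, throughput can be computed from the pause times in a verbose:gc sample (the class and method names here are my own, not part of any JVM API):

```java
public class Throughput {
    // Percentage of wall-clock time spent on the application rather than on GC.
    static double percent(double totalSeconds, double gcSeconds) {
        return 100.0 * (totalSeconds - gcSeconds) / totalSeconds;
    }

    public static void main(String[] args) {
        // e.g. 0.9 seconds of GC pauses during a 60-second sample
        System.out.println(String.format("%.1f%%", percent(60.0, 0.9))); // 98.5%
    }
}
```

The total GC time comes from summing the last column of the verbose:gc lines in the sample window.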

If you know how often your JVM is running garbage collection (from looking at verbose:gc output), then you can make qualified decisions about how to change the minimum and maximum heap size.

If garbage collection runs infrequently and pauses your program for longer than is acceptable while it cleans up objects, you may want to reduce the maximum heap size.

If garbage collection runs very frequently, such that overall throughput is less than 95%, you may need to make the maximum heap size larger to reduce the frequency of garbage collection.

Resizing the heap will not solve the problem of a memory leak that eats up more and more heap space over time, because the lost memory can never be reclaimed by garbage collection. Refer to your verbose:gc logs to track heap use. If there is a memory leak, you will see that the garbage collections are not successful in clearing out objects. Coupled with an overall linear increase of memory use over time, this fact tells you that you probably have a memory leak.

 

Heap sizing

For both JVMs, the default setting is a 64MB initial heap size with no defined maximum. For an active JVM, a 64MB starting point for the entire heap is very unrealistic. At this size, the heap must be resized fairly frequently, and resizing is a stop-the-world (STW) operation.

If the JVM is running a large application, it will need to go through several expansion phases on startup (i.e., grow the heap) before it can even begin running the application. Then, after the application is up and running, the JVM will need to go through the same heap-sizing process again in order to handle the application's real user load.

Size your heap according to the needs of your application. If you know your application is going to use at least 1500MB heap in active use, set the Initial Heap Size field to "1500".

Next, set the maximum size of the heap. The Maximum Heap Size field determines the growth limit of your heap. If memory limits on the system are a concern, this setting could be valuable.

With a 32-bit JVM the maximum heap size setting is limited to something less than 4GB.

The limit on the maximum heap size amount varies according to platform, because the OS itself uses memory above the size of your JVM for its own internal management of the JVM process.

For instance, on Intel-based systems, you can't grow the JVM heap beyond 2GB. A JVM process with a 2GB heap on an Intel box can in fact consume up to 4GB of address space, the maximum for a 32-bit platform.

If you are running multiple appservers on the same hardware, it can be very useful to set the maximum heap size so that you can be sure the memory is properly split among all applications.

I mentioned that the JVM grows the heap to match the contiguous memory needs of the application. It can also do the opposite: shrink the heap. If the present heap size is larger than the minimum and the amount of free memory is relatively high, then the JVM may shrink the heap, which is another STW operation. However, if your application's memory needs fluctuate tremendously, you may find that the JVM spends a lot of time shrinking and expanding the heap and not nearly enough time running your applications.

One common way to solve this problem is to set the minimum and maximum heap size settings to the same value. With the heap set this way, the heap size is constant and the application never has to wait for the JVM to resize the heap. This strategy is clearly advantageous if you are worried about the time needed to resize your heap, but there is a potential drawback: if your JVM isn't heavily allocating space right now but nevertheless needs to run a garbage collection, it will have to traverse far more of the heap than may otherwise be necessary. However, you would have this same issue if you set your minimum value too high anyway, and sizing your minimum up to the needs of normal usage is a good idea. Therefore, in general, setting the minimum and maximum to the same value is also a good idea, assuming that you have properly sized your heap according to the machine and the needs of your applications. You can easily see what your JVM needs while running under high and low volume by enabling verbose:gc logging and studying the output.

 

Sizing generational heaps

With generational heaps we need to consider how much space to allot for each generation.

In general, you should aim for as few full collections as possible, since they are STW operations. If the young generation space is too small, then the JVM will promote objects prematurely to the tenured generation, which has a performance cost because objects in the tenured generation are cleared only by a full garbage collection; in the young generation, these same objects would have been cleaned up during a minor collection. On the other hand, if the young generation is too big, then the tenured generation may not be able to easily support all of the promoted objects, and the JVM will need to run full garbage collections more frequently in order to clear it.

 

Sizing HotSpot JVM generations

The default sizes for generational heaps are usually acceptable for the HotSpot JVM. Recall that by default, the HotSpot young generation is one-third the size of the total heap and the tenured generation is twice the size of the young generation, which makes it two-thirds the size of the total heap (the permanent generation is sized independently of the rest of the heap, so references to the total heap don't take it into account).

You can control how the HotSpot JVM changes the size of the young and tenured generation heaps by setting ratios or explicit minimum and maximum sizes in the "Generic JVM arguments" section of the WebSphere Administrative console. By default, the HotSpot JVM uses the following ratio to tell the JVM to make the tenured generation twice the size of the young generation:

-XX:NewRatio=2

Put differently, it means that there is a 2:1 ratio of tenured generation space to young generation space.
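The arithmetic behind this ratio can be sketched as follows (a hypothetical helper that simply restates the 2:1 relationship; it is not a JVM API):

```java
public class NewRatioMath {
    // With -XX:NewRatio=n, tenured:young is n:1, so the young generation
    // gets 1/(n+1) of the heap and the tenured generation gets the rest.
    static long youngSize(long heapBytes, int newRatio) {
        return heapBytes / (newRatio + 1);
    }

    static long tenuredSize(long heapBytes, int newRatio) {
        return heapBytes - youngSize(heapBytes, newRatio);
    }

    public static void main(String[] args) {
        long heap = 768L * 1024 * 1024; // a 768MB heap, for illustration
        // Default -XX:NewRatio=2: young = 256MB, tenured = 512MB
        System.out.println("young   = " + youngSize(heap, 2) / (1024 * 1024) + "MB");
        System.out.println("tenured = " + tenuredSize(heap, 2) / (1024 * 1024) + "MB");
    }
}
```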

You also have the option to use one or both of the following settings to size the young generation:

-XX:NewSize=<n>     Set the initial young generation size to <n>
-XX:MaxNewSize=<n>  Set the maximum young generation size to <n>

Note that there are no options to size the tenured generation. It gets what is left of the heap.

You have these options for setting the size of the permanent generation:

-XX:PermSize=<n>     Set the initial permanent generation size to <n>
-XX:MaxPermSize=<n>  Set the maximum permanent generation size to <n>

For example, to set the initial permanent generation size to 128MB and its maximum size to 256MB:

-XX:PermSize=128m -XX:MaxPermSize=256m

 

Sizing the IBM Java 1.5 JVM

Unless you set a maximum size, the IBM Java 1.5 JVM garbage collector does not grow the nursery space to a size larger than 64MB (it will, however, grow the tenured generation space up to a defined maximum size).

The IBM Java 1.5 JVM supports tuning the generation sizes using the following settings (it does not support ratios):

-Xmn<x>   Set the initial and maximum nursery space size to <x>
-Xmns<x>  Set the initial nursery space size to <x>
-Xmnx<x>  Set the maximum nursery space size to <x>
-Xmo<x>   Set the initial and maximum tenured space size to <x>
-Xmos<x>  Set the initial tenured space size to <x>
-Xmox<x>  Set the maximum tenured space size to <x>
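For example, to let the nursery grow between 256MB and 512MB while fixing the tenured space at 1GB (illustrative values only, not recommendations):

```
-Xmns256m -Xmnx512m -Xmo1024m
```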

 

Interpreting verbose:gc output

By turning on verbose:gc in the WebSphere Administrative console, under the Java Virtual Machine settings, you can get details on the amount of heap space that applications are using, how often the JVM is running garbage collection, and therefore how much time the JVM is spending working on applications.

By default, verbose:gc output is written to the native_stdout log.

Turning on verbose:gc has very little effect (less than 5%) on the overall performance of an application.

You can use tools such as tagtraum's GCViewer to get information on JVM throughput.

Minor collections are denoted as GC. Major collections are denoted as Full GC, and include the STW compact operation.

For example...

[GC 16004K->11531K(51200K), 0.0222102 secs]
[GC 17035K->11966K(51200K), 0.0289637 secs]
[GC 17464K->12208K(51200K), 0.0129400 secs]
[GC 17712K->12848K(51200K), 0.0251405 secs]
[Full GC 15616K->11759K(51200K), 0.7539289 secs]
[GC 17391K->13098K(51328K), 0.0353323 secs]

The number before the arrow is the amount of heap space used prior to the collection, and the number after the arrow is the amount used after the collection; the after-collection number is a good indicator of how much live data the application is holding. The number in parentheses is the total amount of heap space available. If the JVM grows or shrinks the heap, then this number will change from one run to the next. The last column lists the amount of time the collection took to complete.
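When studying a long log, a small parser for these lines can make the analysis easier. The following is a sketch that assumes the exact HotSpot format shown above; real formats vary with JVM version and options:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLineParser {
    // Matches lines like: [GC 16004K->11531K(51200K), 0.0222102 secs]
    // and:                [Full GC 15616K->11759K(51200K), 0.7539289 secs]
    private static final Pattern GC_LINE = Pattern.compile(
            "\\[(?:Full )?GC (\\d+)K->(\\d+)K\\((\\d+)K\\), ([0-9.]+) secs\\]");

    // Returns {usedBeforeK, usedAfterK, totalK}, or null if the line doesn't match.
    static long[] parse(String line) {
        Matcher m = GC_LINE.matcher(line);
        if (!m.find()) {
            return null;
        }
        return new long[] {
            Long.parseLong(m.group(1)),
            Long.parseLong(m.group(2)),
            Long.parseLong(m.group(3))
        };
    }

    public static void main(String[] args) {
        long[] r = parse("[GC 16004K->11531K(51200K), 0.0222102 secs]");
        System.out.println("freed " + (r[0] - r[1]) + "K; " + r[2] + "K total");
    }
}
```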

Note that the full garbage collection took only 0.7 seconds. As long as the major garbage collection runs do not take place too frequently, things look good. An infrequent major garbage collection that does not degrade throughput to below 95% indicates a properly sized heap and usually acceptable performance.

The default verbose:gc output does not tell you everything you might want to know, but it is sufficient to see whether you have a problem with the heap size. In this example, it is clear that the collections are running quickly and effectively clearing out old objects. However, many major garbage collections relative to minor collections, combined with before and after heap usage numbers that are very close to the total allocated size, would indicate that the JVM is short on memory.

 

Optimization Procedure

  1. Optimize the startup and runtime performance

    In some environments, such as a development environment, it is more important to optimize the startup performance of your appserver rather than the runtime performance. In other environments, it is more important to optimize the runtime performance. By default, IBM JVMs are optimized for runtime performance, while HotSpot based JVMs are optimized for startup performance.

    The Java JIT compiler has a big impact on whether startup or runtime performance is optimized. The initial optimization level that the compiler uses influences the length of time it takes to compile a class method, and the length of time it takes to start the server. For faster startups, you should reduce the initial optimization level that the compiler uses. However if you reduce the initial optimization level, the runtime performance of your applications might be degraded because the class methods are now compiled at a lower optimization level.

    • -Xquickstart

      This setting causes the IBM JVM to use a lower optimization level for class method compiles. A lower optimization level provides faster server startup but lowers runtime performance. If this parameter is not specified, the IBM JVM defaults to a high initial optimization level for compiles, which results in faster runtime performance but slower server starts.

      Default: High initial compiler optimization level
      Recommended: High initial compiler optimization level
      Usage: -Xquickstart provides faster server startup.

    (Solaris) HotSpot based JVMs initially compile class methods with a low optimization level. Use this JVM option to change that behavior:

  2. Configure the heap size

    The Java heap parameters influence the behavior of garbage collection. Increasing the heap size supports more object creation. Because a large heap takes longer to fill, the application runs longer before a garbage collection occurs. However, a larger heap also takes longer to compact and causes garbage collection to take longer.

    The JVM has thresholds it uses to manage the JVM's storage. When the thresholds are reached, the garbage collector gets invoked to free up unused storage.

    Therefore, garbage collection can cause significant degradation of Java performance. Before changing the initial and maximum heap sizes, you should consider the following information:

    • In the majority of cases you should set the maximum JVM heap size to a value higher than the initial JVM heap size.

      This allows the JVM to operate efficiently during normal, steady-state periods within the confines of the initial heap, but also to operate effectively during periods of high transaction volume by expanding the heap up to the maximum JVM heap size. In some rare cases where absolutely optimal performance is required, you might want to specify the same value for both the initial and maximum heap size. This eliminates some overhead that occurs when the JVM needs to expand or contract the size of the heap. Make sure the region is large enough to hold the specified JVM heap.

    • Beware of making the Initial Heap Size too large.

      While it initially improves performance by delaying garbage collection, it ultimately affects response time when garbage collection eventually kicks in (because it runs for a longer time).

    The IBM Developer Kit and Runtime Environment, Java 2 Technology Edition, Version 5.0 Diagnostics Guide, available on the developerWorks Web site, provides additional information on tuning the heap size.

    To use the console to configure the heap size:

    1. In the console, click...

      Servers | Application Servers | server | Server Infrastructure | Java and Process Management | Process Definition | Java Virtual Machine

    2. Specify a new value in either the Initial heap size or the Maximum heap size field.

      You can also specify values for both fields if you need to adjust both settings.

      For performance analysis, the initial and maximum heap sizes should be equal.

      The Initial heap size setting specifies, in megabytes, the amount of storage that is allocated for the heap when the JVM starts. The Maximum heap size setting specifies, in megabytes, the largest size to which the heap can grow. Both of these settings have a significant effect on performance, because they determine how often garbage collection runs and how long it takes.

      When tuning a production system where the working set size of the Java application is not understood, a good starting value for the initial heap size is 25% of the maximum heap size. The JVM then tries to adapt the size of the heap to the working set size of the application.


      The illustration represents three CPU profiles, each running a fixed workload with varying Java heap settings. In the middle profile, the initial and maximum heap sizes are set to 128MB. Four garbage collections occur. The total time in garbage collection is about 15% of the total run. When the heap parameters are doubled to 256MB, as in the top profile, the length of the work time increases between garbage collections.

      Only three garbage collections occur, but the length of each garbage collection is also increased. In the third profile, the heap size is reduced to 64MB and exhibits the opposite effect. With a smaller heap size, both the time between garbage collections and the time for each garbage collection are shorter. For all three configurations, the total time in garbage collection is approximately 15%. This example illustrates an important concept about the Java heap and its relationship to object utilization. There is always a cost for garbage collection in Java applications.

      Run a series of test experiments that vary the Java heap settings. For example, run experiments with 128MB, 192MB, 256MB, and 320MB. During each experiment, monitor the total memory usage. If you expand the heap too aggressively, paging can occur. Use the vmstat command or the Windows 2000/2003 Performance Monitor to check for paging. If paging occurs, reduce the size of the heap or add more memory to the system. When all the runs are finished, compare the following statistics:

      • Number of garbage collection calls

      • Average duration of a single garbage collection call

      • Ratio between the length of a single garbage collection call and the average time between calls

      If the application is not over-utilizing objects and has no memory leaks, a steady state of memory utilization is reached. Garbage collection then occurs less frequently and for shorter durations.

      If the heap free space settles at 85% or more, consider decreasing the maximum heap size values because the appserver and the application are under-utilizing the memory allocated for heap.

    3. Click Apply or OK.

    4. Save your changes to the master configuration.

    5. Stop and restart the appserver.

    You can also use the following command line parameters to adjust these settings. These parameters apply to all supported JVMs and are used to adjust the minimum and maximum heap size for each appserver or appserver instance.

       

    • -Xms

      This setting controls the initial size of the Java heap. Properly tuning this parameter reduces the overhead of garbage collection, which improves server response time and throughput. For some applications, the default setting for this option might be too low, which causes a high number of minor garbage collections.

      Default: 50MB. This default value applies for both 31-bit and 64-bit configurations.
      Recommended: Workload specific, but higher than the default.
      Usage: -Xms256m sets the initial heap size to 256 megabytes.

       

    • -Xmx

      This setting controls the maximum size of the Java heap. Increasing this parameter increases the memory available to the appserver, and reduces the frequency of garbage collection. Increasing this setting can improve server response time and throughput. However, increasing this setting also increases the duration of a garbage collection when it does occur. This setting should never be increased above the system memory available for the appserver instance. Increasing the setting above the available system memory can cause system paging and a significant decrease in performance.

      Default: 256MB. This default value applies for both 31-bit and 64-bit configurations.
      Recommended: Workload specific, but higher than the default, depending on the amount of available physical memory.
      Usage: -Xmx512m sets the maximum heap size to 512 megabytes.

       

    • -Xlp

      This setting is used with the IBM JVM to allocate the heap when using large pages (16MB). However, if you use this setting your operating system must be configured to support large pages. Using large pages can reduce the CPU overhead needed to keep track of heap memory, and might also allow the creation of a larger heap.

      See Tuning operating systems for more information about tuning your operating system.

       

    • –Xlp64k

      This setting can be used with the IBM JVM to allocate the heap using a 64 kilobyte page size (medium pages). Using this virtual memory page size for the memory that an application requires can improve the performance and throughput of the application because of hardware efficiencies that are associated with a larger page size.

      To support a 64KB page size, in the console, click...

      Servers | Application servers | server | Process Definition | Environment Entries | New

      ...and then specify LDR_CNTRL in the Name field and DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K in the Value field.

      [AIX] AIX has rich support for 64KB pages, which are intended to be general purpose. 64KB pages are very easy to use, and many applications are expected to see performance benefits when using 64KB pages rather than the default 4KB pages. This setting can be changed without changing the operating system configuration.

      Default: 4KB
      Recommended: -Xlp64k enables the 64KB page size support. [AIX]

      On POWER5+ systems, AIX 5L V5.3 with the 5300-04 Recommended Maintenance Package supports a new 64KB page size when running the 64-bit kernel.

  3. Tune Java memory and objects...

    1. Check for over-utilization of objects.

      You can use the Tivoli Performance Viewer (TPV) to check whether the application is overusing objects by observing the counters for the JVM runtime. Set the command line option...

      -XrunpmiJvmpiProfiler

      ...as well as the JVM module maximum level in order to enable the JVMPI counters.

      The best result for the average time between garbage collections is at least 5-6 times the average duration of a single garbage collection. If you do not achieve this number, the application is spending more than 15% of its time in garbage collection.

      If the information indicates a garbage collection bottleneck, there are two ways to clear it. The most cost-effective way is to optimize the application by implementing object caches and pools; use a Java profiler to determine which objects to target. If you cannot optimize the application, adding memory, processors, and clones might help. Additional memory allows each clone to maintain a reasonable heap size, and additional processors allow the clones to run in parallel.
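The 15% figure above follows from simple arithmetic, sketched here (the class and method names are my own):

```java
public class GcOverhead {
    // Fraction of time spent in GC when collections of the given average
    // duration recur after the given average interval of application work.
    static double overheadPercent(double gcSeconds, double intervalSeconds) {
        return 100.0 * gcSeconds / (gcSeconds + intervalSeconds);
    }

    public static void main(String[] args) {
        // An interval 5-6x the GC duration keeps overhead around the 15% mark:
        System.out.println(overheadPercent(1.0, 5.0)); // ~16.7%
        System.out.println(overheadPercent(1.0, 6.0)); // ~14.3%
    }
}
```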

    2. Test for memory leaks

      Memory leaks in the Java language are a dangerous contributor to garbage collection bottlenecks. Memory leaks are more damaging than memory overuse, because a memory leak ultimately leads to system instability. Over time, garbage collection occurs more frequently until the heap is exhausted and the Java code fails with a fatal out-of-memory exception. Memory leaks occur when an unused object has references that are never freed. They most commonly occur in collection classes, such as Hashtable, because the table always has a reference to the object, even after real references are deleted.

      High workload often causes applications to crash immediately after deployment in the production environment. This is especially true for leaking applications, where the high workload magnifies the leak until a memory allocation failure occurs.

      The goal of memory leak testing is to magnify numbers. Memory leaks are measured in terms of the amount of bytes or kilobytes that cannot be garbage collected.

      The delicate task is to differentiate between the expected sizes of useful and unusable memory. This task is easier when the numbers are magnified, resulting in larger gaps and easier identification of inconsistencies.

      The following list contains important conclusions about memory leaks:

         

      • Long-running test

        Memory leak problems can manifest only after a period of time; therefore, memory leaks are found during long-running tests. Short-running tests can lead to false alarms. It is sometimes difficult to know when a memory leak is occurring in the Java language, especially when memory usage has seemingly increased either abruptly or monotonically in a given period of time. The reason it is hard to detect a memory leak is that these kinds of increases can be valid or might be the intention of the developer.

        You can learn to differentiate the delayed use of objects from completely unused objects by running applications for a longer period of time. Long-running application testing gives you higher confidence about whether the delayed use of objects is actually occurring.

         

      • Repetitive test

        In many cases, memory leak problems are revealed by successive repetitions of the same test case. The goal of memory leak testing is to establish a big gap between unusable memory and used memory in terms of their relative sizes. By repeating the same scenario over and over again, the gap is multiplied progressively. This testing helps when the number of leaks caused by a single execution of a test case is so small that it would hardly be noticeable in one run.

        You can use repetitive tests at the system level or module level. The advantage of modular testing is better control. When a module is designed to keep its memory usage private and to avoid external side effects, testing it for memory leaks is easier.

        First, the memory usage before running the module is recorded. Then, a fixed set of test cases is run repeatedly. At the end of the test run, the current memory usage is recorded and checked for significant changes. Remember to suggest a garbage collection when recording the actual memory usage, either by inserting a System.gc() call at the point in the module where you want garbage collection to occur or by using a profiling tool to force the event.
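        These steps can be sketched as a minimal harness. The runModule method and its 64 KB-per-run retention are hypothetical stand-ins for the module under test:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakCheck {
    // Hypothetical module under test: retains 64 KB per invocation to simulate a leak.
    static final List<byte[]> retained = new ArrayList<byte[]>();

    static void runModule() {
        retained.add(new byte[64 * 1024]);
    }

    // Suggest a collection, then read the memory still in use by live objects.
    static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedMemory();
        for (int i = 0; i < 100; i++) {
            runModule(); // repeat the same scenario to magnify any leak
        }
        long after = usedMemory();
        System.out.println("Retained growth in bytes: " + (after - before));
    }
}
```

        A healthy module would show a growth figure near zero after the final collection; here the retained list keeps roughly 6.4 MB reachable, so the gap is easy to spot.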

         

      • Concurrency test

        Some memory leak problems can occur only when there are several threads running in the application. Unfortunately, synchronization points are very susceptible to memory leaks because of the added complication in the program logic. Careless programming can lead to kept or unreleased references. The incidence of memory leaks is often facilitated or accelerated by increased concurrency in the system. The most common way to increase concurrency is to increase the number of clients in the test driver.

        Consider the following points when choosing which test cases to use for memory leak testing:

        • A good test case exercises areas of the application where objects are created. Most of the time, knowledge of the application is required. A description of the scenario can suggest creation of data spaces, such as adding a new record, creating an HTTP session, performing a transaction, and searching for a record.

        • Look at areas where collections of objects are used. Typically, memory leaks are composed of objects within the same class. Also, collection classes such as Vector and Hashtable are common places where references to objects are implicitly stored by calling corresponding insertion methods. For example, the get method of a Hashtable object does not remove its reference to the retrieved object.
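        As a hedged illustration of this pattern, the following sketch shows a Hashtable-based cache whose entries are never removed; the class name and sizes are invented for the example:

```java
import java.util.Hashtable;

public class CacheLeak {
    // Entries put into this table stay strongly referenced until explicitly removed.
    static final Hashtable<Integer, byte[]> cache = new Hashtable<Integer, byte[]>();

    // get() returns the value but does NOT remove the entry, so the reference survives.
    static byte[] lookup(int key) {
        return cache.get(key);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            cache.put(i, new byte[1024]); // 1000 entries accumulate in the table
        }
        lookup(42);
        // Without a matching remove() or clear(), none of these values can be collected.
        System.out.println("Entries still strongly referenced: " + cache.size()); // prints 1000
    }
}
```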

      You can use the Tivoli Performance Viewer to help find memory leaks.

      For the best results, repeat experiments with increasing duration, such as 1000, 2000, and 4000 page requests.

      The Tivoli Performance Viewer graph of used memory should have a sawtooth shape. Each drop on the graph corresponds to a garbage collection. There is a memory leak if one of the following occurs:

      • The amount of memory used immediately after each garbage collection increases significantly. The sawtooth pattern looks more like a staircase.

      • The sawtooth pattern has an irregular shape.

      Also, look at the difference between the number of objects allocated and the number of objects freed. If the gap between the two increases over time, there is a memory leak.
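      A rising post-collection floor (the staircase pattern described above) can be checked mechanically. The following sketch applies a simple majority-trend test to hypothetical post-GC memory samples:

```java
public class SawtoothCheck {
    // Treat a series of "memory used immediately after each GC" samples as a staircase
    // candidate: if the post-collection floor rises in most intervals, flag a possible leak.
    static boolean looksLikeLeak(long[] postGcUsedBytes) {
        int rises = 0;
        for (int i = 1; i < postGcUsedBytes.length; i++) {
            if (postGcUsedBytes[i] > postGcUsedBytes[i - 1]) {
                rises++;
            }
        }
        return rises > postGcUsedBytes.length / 2;
    }

    public static void main(String[] args) {
        long[] healthySawtooth = {100, 102, 99, 101, 100}; // flat floor after each drop
        long[] staircase = {100, 130, 161, 190, 225};      // floor climbs every cycle
        System.out.println(looksLikeLeak(healthySawtooth)); // prints false
        System.out.println(looksLikeLeak(staircase));       // prints true
    }
}
```

      This is only a heuristic: a real analysis would also account for heap fragmentation and workload changes, as described next.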

      Heap consumption indicating a possible leak during a heavy workload (the appserver is consistently near 100% CPU utilization), yet appearing to recover during a subsequent lighter or near-idle workload, is an indication of heap fragmentation. Heap fragmentation can occur when the JVM can free sufficient objects to satisfy memory allocation requests during garbage collection cycles, but the JVM does not have the time to compact small free memory areas in the heap to larger contiguous spaces.

      Another form of heap fragmentation occurs when small objects (less than 512 bytes) are freed. The objects are freed, but the storage is not recovered, resulting in memory fragmentation until a heap compaction has been run.

      Heap fragmentation can be reduced by forcing compactions to occur, but there is a performance penalty for doing this. Use the Java -X command to see the list of memory options.

  4. Tune garbage collection

    Examining Java garbage collection gives insight to how the application is utilizing memory. Garbage collection is a Java strength. By taking the burden of memory management away from the application writer, Java applications are more robust than applications written in languages that do not provide garbage collection. This robustness applies as long as the application is not abusing objects. Garbage collection normally consumes from 5% to 20% of total execution time of a properly functioning application. If not managed, garbage collection is one of the biggest bottlenecks for an application.

    Monitoring garbage collection during the execution of a fixed workload enables you to gain insight into whether the application is over-utilizing objects. Garbage collection can even detect the presence of memory leaks.

    You can use JVM settings to configure the type and behavior of garbage collection. When the JVM cannot allocate an object from the current heap because of lack of contiguous space, the garbage collector is invoked to reclaim memory from Java objects that are no longer being used.

    Each JVM vendor provides unique garbage collector policies and tuning parameters.

    You can use the Verbose garbage collection setting in the console to enable garbage collection monitoring. The output from this setting includes class garbage collection statistics. The format of the generated report is not standardized between different JVMs or release levels.

    To adjust your JVM garbage collection settings:

    1. In the console, click Servers > Application Servers > server.

    2. Under Server Infrastructure, click Java and Process Management > Process Definition > Java Virtual Machine.

    3. Enter the -X option you want to change in the Generic JVM arguments field.

    4. Click OK.

    5. Save your changes to the master configuration.

    6. Stop and restart the appserver.

    The following list describes the -X options for the different JVM garbage collectors.

    The IBM JVM garbage collector.

    A complete guide to the IBM Java garbage collector is provided in the IBM Developer Kit and Runtime Environment, Java 2 Technology Edition, V5.0 Diagnostics Guide. This document is available on the developerWorks Web site.

    Use the Java -X option to view a list of memory options.

    • -Xgcpolicy

      Starting with Java 5.0, the IBM JVM provides four policies for garbage collection. Each policy provides unique benefits.

      optthruput: The default garbage collection method is optthruput, which is optimized for object creation in the heap. This method postpones garbage collection until it is absolutely necessary, so that when garbage collection finally runs, it takes longer than the alternative garbage collection method (optavgpause). Because optthruput runs less frequently, however, it is almost always the fastest method of garbage collection over time, and you should start with this policy.

      If the optthruput method of garbage collection causes an excessively long pause (20 seconds or more) for the stop-the-world (STW) compact operation, consider using the optavgpause method. You can monitor the length of the pause by studying the output of the verbose:gc logging option.

      optavgpause: The optavgpause method does the shorter pieces of the garbage collection (that is, the mark and sweep operations) during normal JVM operations. When the garbage collection (compact) operation is finally required, it has less work to do and the pause is shorter. Consider using optavgpause only if you find that the pause for garbage collection with optthruput violates a service level agreement (SLA).

      To enable optavgpause, open...

      Application servers | SERVERNAME | Process Definition | Java Virtual Machine

      ...and add the following argument to the "Generic JVM arguments" section:

      -Xgcpolicy:optavgpause

      gencon: New in IBM Java 5.0, gencon is a generational garbage collector for the IBM JVM. The generational scheme attempts to achieve high throughput along with reduced garbage collection pause times.

      To accomplish this goal, the heap is split into new and old segments. Long lived objects are promoted to the old space while short-lived objects are garbage collected quickly in the new space. The gencon policy provides significant benefits for many applications, but is not suited to all applications and is generally more difficult to tune.

      subpool: Can increase performance on multiprocessor systems that commonly use more than 8 processors. This policy is available only on IBM pSeries and zSeries processors. The subpool policy is similar to the optthruput policy except that the heap is divided into subpools that provide improved scalability for object allocation.

      Default: optthruput
      Recommended: optthruput
      Usage: -Xgcpolicy:optthruput sets the garbage collection policy to optthruput

      Setting gcpolicy to optthruput disables concurrent mark. You should get the best throughput results when you use the optthruput policy unless you are experiencing erratic application response times, which is an indication that you might have pause time problems.

      Setting gcpolicy to optavgpause enables concurrent mark with its default values. This setting alleviates erratic application response times that normal garbage collection causes. However, this option might decrease overall throughput.

       

    • -Xnoclassgc

      By default, the JVM unloads a class from memory whenever there are no live instances of that class left; however, this class unloading can degrade performance.

      You can use the -Xnoclassgc argument to disable class garbage collection so that your applications can reuse classes more easily. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

      Default: Class garbage collection is enabled.
      Recommended: Disable class garbage collection.
      Usage: -Xnoclassgc disables class garbage collection.

    The Sun JVM garbage collector. (Solaris)

    On the Solaris platform, an appserver runs on the Sun HotSpot JVM rather than the IBM JVM. It is important to use the correct tuning parameters with the Sun JVM in order to utilize its performance optimizing features.

    The Sun HotSpot JVM relies on generational garbage collection to achieve optimum performance. The following command line parameters are useful for tuning garbage collection.

       

    • -XX:SurvivorRatio

      The Java heap is divided into a section for old (long lived) objects and a section for young objects. The section for young objects is further subdivided into the section where new objects are allocated, called eden, and the section where new objects that are still in use survive their first few garbage collections before being promoted to old objects, called survivor space. Survivor ratio is the ratio of eden to survivor space in the young object section of the heap.

      Increasing this setting optimizes the JVM for applications with high object creation and low object preservation. Because WAS instances generate more medium and long lived objects than other appservers, this setting should be lowered from the default.

      Default: 32
      Recommended: 16
      Usage: -XX:SurvivorRatio=16
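      To see how the ratio plays out, the following sketch computes eden and survivor sizes for a hypothetical 256 MB young generation, assuming the common HotSpot layout of eden plus two survivor spaces:

```java
public class SurvivorMath {
    // Split a young generation of the given size into eden and two survivor spaces,
    // where eden : (one survivor space) = survivorRatio : 1.
    static long[] splitYoungGen(long youngMb, int survivorRatio) {
        long survivorMb = youngMb / (survivorRatio + 2);
        long edenMb = youngMb - 2 * survivorMb;
        return new long[] { edenMb, survivorMb };
    }

    public static void main(String[] args) {
        // e.g. -Xmn256m -XX:SurvivorRatio=16 (illustrative values)
        long[] sizes = splitYoungGen(256, 16);
        System.out.println("eden = " + sizes[0] + " MB, each survivor = " + sizes[1] + " MB");
        // prints: eden = 228 MB, each survivor = 14 MB
    }
}
```

      Lowering the ratio from 32 to 16 therefore enlarges each survivor space, giving medium-lived objects more room before promotion to the old generation.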

       

    • -XX:PermSize

      The section of the heap reserved for the permanent generation holds all of the reflective data for the JVM. This size should be increased to optimize the performance of applications that dynamically load and unload a lot of classes. Setting this to a value of 128 megabytes eliminates the overhead of increasing this part of the heap.

      Recommended: 128m
      Usage: -XX:PermSize=128m sets PermSize to 128 megabytes.

       

    • -Xmn

      This setting controls how much space the young generation is allowed to consume on the heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput.

      The default setting for this is typically too low, resulting in a high number of minor garbage collections. Setting this value too high can cause the JVM to perform only major (or full) garbage collections. These usually take several seconds and are extremely detrimental to the overall performance of your server. You must keep this setting below half of the overall heap size to avoid this situation.

      Default: 2228224 bytes
      Recommended: Approximately 1/4 of the total heap size
      Usage: -Xmn256m sets the size to 256 megabytes.

       

    • -Xnoclassgc

      By default the JVM unloads a class from memory when there are no live instances of that class left, but this can degrade performance. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

      If a class is no longer needed, the space that it occupies on the heap is normally used for the creation of new objects. However, if you have an application that handles requests by creating a new instance of a class and if requests for that application come in at random times, it is possible that when the previous requester is finished, the normal class garbage collection will clean up this class by freeing the heap space it occupied, only to have to re-instantiate the class when the next request comes along. In this situation you might want to use this option to disable the garbage collection of classes.

      Default: Class garbage collection is enabled.
      Recommended: Class garbage collection is disabled.
      Usage: -Xnoclassgc disables class garbage collection.

    For additional information on tuning the Sun JVM, see Performance Documentation for the Java HotSpot VM.

    The HP JVM garbage collector. (HP-UX)

    The HP JVM relies on generational garbage collection to achieve optimum performance. The following command line parameters are useful for tuning garbage collection.

       

    • -Xoptgc

      This setting optimizes the JVM for applications with many short-lived objects. If this parameter is not specified, the JVM usually does a major (full) garbage collection. Full garbage collections can take several seconds and can significantly degrade server performance.

      Default: off
      Recommended: on
      Usage: -Xoptgc enables optimized garbage collection.

       

    • -XX:SurvivorRatio

      The Java heap is divided into a section for old (long lived) objects and a section for young objects. The section for young objects is further subdivided into the section where new objects are allocated, called eden, and the section where new objects that are still in use survive their first few garbage collections before being promoted to old objects, called survivor space. Survivor ratio is the ratio of eden to survivor space in the young object section of the heap. Increasing this setting optimizes the JVM for applications with high object creation and low object preservation. Because WAS instances generate more medium and long lived objects than other appservers, this setting should be lowered from the default.

      Default: 32
      Recommended: 16
      Usage: -XX:SurvivorRatio=16

       

    • -XX:PermSize

      The section of the heap reserved for the permanent generation holds all of the reflective data for the JVM. This size should be increased to optimize the performance of applications that dynamically load and unload a lot of classes. Specifying a value of 128 megabytes eliminates the overhead of increasing this part of the heap.

      Default: 0
      Recommended: 128 megabytes
      Usage: -XX:PermSize=128m sets PermSize to 128 megabytes

       

    • -XX:+ForceMmapReserved

      This command disables the lazy swap functionality and allows the operating system to use larger memory pages, thereby optimizing access to the memory that makes up the Java heap. By default, the Java heap is allocated lazy swap space. Lazy swap functionality saves swap space because pages of memory are allocated as needed. However, the lazy swap functionality forces the use of 4KB pages. In large heap systems, this allocation of memory can spread the heap across hundreds of thousands of pages.

      Default: off
      Recommended: on
      Usage: -XX:+ForceMmapReserved disables the lazy swap functionality.

       

    • -Xmn

      This setting controls how much space the young generation is allowed to consume on the heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput. The default setting for this is typically too low, resulting in a high number of minor garbage collections.

      Default: No default
      Recommended: Approximately 3/4 of the total heap size
      Usage: -Xmn768m sets the size to 768 megabytes.

       

    • Virtual Page Size

      Setting the Java virtual machine instruction and data page sizes to 64MB can improve performance.

      Default: 4MB
      Recommended: 64MB
      Usage: Use the following command. The command output provides the current operating system characteristics of the process executable:

      chatr +pi64M +pd64M /opt/WebSphere/AppServer/java/bin/PA_RISC2.0/native_threads/java

       

    • -Xnoclassgc

      By default the JVM unloads a class from memory when there are no live instances of that class left, but this can degrade performance. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

      If a class is no longer needed, the space that it occupies on the heap is normally used for the creation of new objects. However, if you have an application that handles requests by creating a new instance of a class and if requests for that application come in at random times, it is possible that when the previous requester is finished, the normal class garbage collection will clean up this class by freeing the heap space it occupied, only to have to re-instantiate the class when the next request comes along. In this situation you might want to use this option to disable the garbage collection of classes.

      Default: Class garbage collection is enabled.
      Recommended: Class garbage collection is disabled.
      Usage: -Xnoclassgc disables class garbage collection.

    (HP-UX) For additional information on tuning the HP virtual machine, see Java technology software HP-UX 11i.

  5. (HP-UX) Tune the HP JVM for HP-UX Set the following options to improve application performance:

    -XX:SchedulerPriorityRange=SCHED_NOAGE
    -XX:-ExtraPollBeforeRead
    -XX:+UseSpinning

    (HP-UX) For additional information on tuning the HP virtual machine, see Java technology software HP-UX 11i.

  6. (Solaris) Select either client or server mode for the Sun HotSpot JVM on Solaris.

    The Java Virtual Machine that WAS uses on the Solaris platform runs in two modes: client or server. Each mode has its advantages. Client mode is a good choice if your environment:

    • Requires quick recovery after a server reboot or crash. Client mode allows the virtual machine to warm up faster, which lets an appserver service a large number of requests very quickly after startup.

    • Has physical RAM limitations. Client mode uses less memory than server mode uses. This memory savings is more significant if your overall JVM size is small because of hardware limitations. For example, your overall JVM size might be small because you are running several JVMs on a single piece of hardware.

    To maximize performance on appservers that are rarely restarted, you should run the HotSpot JVM in server mode. When the JVM is in server mode, it takes several times longer for an appserver to get to a state where it can service a large number of requests. However, after it gets to that state, server mode can significantly outperform a comparable JVM running in client mode.

    The HotSpot JVM running in server mode uses a high optimization compiler that optimizes and re-optimizes the Java code during the initial warm up stage. All of this optimization work takes a while, but after the JVM is warmed up, appservers run significantly faster than they do in client mode on the same hardware.

    The Solaris implementation of Java 5.0 examines your hardware and tries to select the correct JVM mode for your environment. If the JVM determines that it is running on a server level machine, the JVM automatically enables server mode. In Java 1.4.2 and earlier, the default mode is client mode, and you must use the -server flag on the JVM command line to enable server mode.

    Because the JVM automatically enables server mode if your machine has at least 2 CPUs and 2 GB of memory, your JVMs probably default to server mode. However, you can use the -client and -server flags in the generic JVM arguments to force the virtual machine into either mode if the mode the JVM selects for you does not fit your environment.
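    To confirm which mode a JVM selected, you can inspect the standard java.vm.name system property, which on HotSpot typically contains "Client VM" or "Server VM":

```java
public class VmMode {
    public static void main(String[] args) {
        // java.vm.name is a standard system property; on HotSpot it typically
        // ends in "Client VM" or "Server VM", reflecting the active mode.
        String vmName = System.getProperty("java.vm.name");
        System.out.println("Running on: " + vmName);
    }
}
```

    Running this class with -client or -server on the command line shows whether the flag actually changed the mode on your platform.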

     

  7. Enable class sharing in a cache.

    The share classes option of the IBM Java 2 Runtime Environment (J2RE) V1.5.0 lets you share classes in a cache. Sharing classes in a cache can improve startup time and reduce memory footprint. Processes, such as appservers, node agents, and deployment managers, can use the share classes option.

    The IBM J2RE 1.5.0 is currently not used on:

    • (Solaris) Solaris

    • (HP-UX) HP-UX

    If you use this option, you should clear the cache when the process is not in use. To clear the cache, either call the APP_SERVER_ROOT/bin/clearClassCache.bat/sh utility or stop the process and then restart the process.

    To disable the share classes option for a process, specify the generic JVM argument -Xshareclasses:none for that process:

    1. In the console, click Servers > Application Servers > server.

    2. Under Server Infrastructure, click Java and Process Management > Process Definition > Java Virtual Machine.

    3. Enter -Xshareclasses:none in the Generic JVM arguments field.

    4. Click OK.

    5. Save your changes to the master configuration.

    6. Stop and restart the appserver.

    Default: The Share classes in a cache option is enabled.
    Recommended: Leave the share classes in a cache option enabled.
    Usage: -Xshareclasses:none disables the share classes in a cache option.

  8. (Solaris) (HP-UX) Minimize memory consumption.

    (Solaris) The default tuning preferences for the Sun Java 5.0 JVM use more memory than previous JVM versions. This additional memory helps to maximize throughput. However, it can cause problems if you are running environments like JVM hoteling, where physical memory is usually at a premium. You can add the following tuning parameters to the generic JVM arguments if you want to tune the Sun Java 5.0 JVM for minimal memory consumption:

    -client -XX:MaxPermSize=256m -XX:-UseLargePages -XX:+UseSerialGC

    (Solaris) Setting these parameters might reduce throughput and might result in slightly slower server startup times. If you are running very large applications, you can specify a higher value for the MaxPermSize setting.

    (HP-UX) The default tuning preferences for the HP Java 5 JVM use more memory than previous JVM versions. This additional memory helps to maximize throughput. However, it can cause problems if you are running environments like JVM hoteling, where physical memory is usually at a premium. You can add the following tuning parameters to the generic JVM arguments if you want to tune the HP Java 5 JVM for minimal memory consumption:

    -XX:-UseParallelGC -XX:-UseAdaptiveSizePolicy

    Setting these parameters might result in slightly slower server startup times.

  9. Tune the configuration update process for a large cell configuration. In a large cell configuration, you might have to determine whether configuration update performance or consistency checking is more important. When configuration consistency checking is turned on, a large amount of time might be required to save a configuration change or to deploy a large number of applications. The following factors influence how much time is required:

    • The more appservers or clusters that are defined in the cell, the longer it takes to save a configuration change.

    • The more applications there are deployed in a cell, the longer it takes to save a configuration change.

    If the amount of time required to save a configuration change is unsatisfactory, you can add the config_consistency_check custom property to your JVM settings and set the value of this property to false.

    1. In the console, click System administration > Deployment manager.

    2. Under Server Infrastructure, select Java and Process Management, and then click Process Definition.

    3. Under Additional Properties, click Java Virtual Machine > Custom Properties > New.

    4. Enter config_consistency_check in the Name field and false in the Value field.

    5. Click OK and then Save to apply these changes to the master configuration.

    6. Restart the server.

 

What to do next

Each Java vendor provides detailed information on performance and tuning for their JVM. Use the following Web sites to obtain additional tuning information for a specific Java runtime environment:

If you use DB2, consider disabling SafepointPolling technology in the HP JVM for HP-UX. Developed to ensure safepoints for Java threads, SafepointPolling technology generates a signal that can interfere with the signal between WAS and a DB2 database. When this interference occurs, database deadlocks often result. Prevent the interference by starting the JVM with the -XX:-SafepointPolling option, which disables SafepointPolling during runtime.


 

Related tasks


Tuning the application serving environment

 

Related Reference


Java virtual machine settings