Tune EJB cache with trace service

The size of our EJB cache can affect the performance of the application server. One of the steps in tuning your EJB container to optimum performance levels is to fine-tune the EJB cache.

IBM recommends using the High Performance Extensible Logging (HPEL) log and trace infrastructure. We view HPEL log and trace information using the logViewer command.
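
For example, a typical logViewer invocation to filter for the EJB cache loggers might look like this (a sketch only; run it from the profile's bin directory, and note that the available options can vary by version):

    logViewer -includeLoggers com.ibm.ejs.util.cache.* -minLevel FINEST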

The following procedure describes how to use the diagnostic trace service to help determine the best cache size.


Tasks

  1. Enable the EJB cache trace. To learn about working with the trace service, see the topic Working with trace. (iSeries) (Dist) For information about the trace service settings, see the topic Diagnostic trace service settings.

    Set up your trace to use this trace string:

    com.ibm.ejs.util.cache.BackgroundLruEvictionStrategy=all=enabled:com.ibm.ejs.util.cache.CacheElementEnumerator=all=enabled

    (iSeries) (Dist) Set Maximum File Size to 200 MB or more. If we leave the default value of 20 MB, we could fill the single 20 MB trace log and lose some data because of trace wrapping.

    (iSeries) (Dist) Set Maximum Number of Historical Files to 5. Five files should be sufficient, but if all five files fill up and trace wrapping occurs, increase this value.
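
    If we prefer scripting to the administrative console, the same trace settings can be applied with wsadmin. The following Jython sketch is an illustration only; it assumes a node named myNode and a server named server1, so adjust the names and values for your topology:

    # Run with: wsadmin -lang jython -f setEjbCacheTrace.py
    # Assumption: node myNode, server server1 (hypothetical names).
    spec = 'com.ibm.ejs.util.cache.BackgroundLruEvictionStrategy=all=enabled:com.ibm.ejs.util.cache.CacheElementEnumerator=all=enabled'

    # Apply the trace specification to the running server (takes effect immediately).
    ts = AdminControl.completeObjectName('type=TraceService,node=myNode,process=server1,*')
    AdminControl.setAttribute(ts, 'traceSpecification', spec)

    # Persist the specification and the trace log size/rollover settings in the configuration.
    server = AdminConfig.getid('/Node:myNode/Server:server1/')
    tsCfg = AdminConfig.list('TraceService', server)
    AdminConfig.modify(tsCfg, [['startupTraceSpecification', spec]])
    traceLog = AdminConfig.showAttribute(tsCfg, 'traceLog')
    AdminConfig.modify(traceLog, [['rolloverSize', '200'], ['maxNumberOfBackupFiles', '5']])
    AdminConfig.save()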

  2. (iSeries) (Dist) Stop the server, delete existing logs, then start the server.

  3. (ZOS) Stop and restart the server.

  4. Run typical scenarios to capture cache trace data. By running a typical scenario with the trace enabled, we get the EJB cache trace data to analyze in the following steps.

  5. View and analyze the trace output.

    1. Open your trace log. Look for either or both of the following trace strings:

      (iSeries) (Dist) BackgroundLru 3 EJB Cache: Sweep (1,40) - Cache limit not reached : 489/2053
      BackgroundLru > EJB Cache: Sweep (16,40) - Cache limit exceeded : 3997/2053 Entry

      (ZOS) Trace: 2007/03/22 11:47:07.048 01 t=7A9690 c=UNK key=P8 (13007002)
      ThreadId: 0000006a
      FunctionName: com.ibm.ejs.util.cache.BackgroundLruEvictionStrategy
      SourceId: com.ibm.ejs.util.cache.BackgroundLruEvictionStrategy
      Category: FINEST
      ExtendedMessage: EJB Cache: Sweep (23,40) - Cache limit not reached : 0/2053

      Trace: 2007/03/22 11:54:16.755 01 t=7BD3B0 c=UNK key=P8 (13007002)
      ThreadId: 0000006d
      FunctionName: EJB Cache: Sweep (75,37) - Cache limit exceeded : 3801/2053
      SourceId: com.ibm.ejs.util.cache.BackgroundLruEvictionStrategy
      Category: FINER
      ExtendedMessage: Entry

      In the trace strings that include the words Cache limit, you find a ratio, for example, 3997/2053. The first number is the number of enterprise beans currently in the EJB cache (called the capacity). The second number is the EJB cache size setting (more about this in later steps). Use this ratio, particularly the capacity, in your analysis.

      Also look for the statements Cache limit not reached and Cache limit exceeded.

      Cache limit not reached

      Your cache is equal to or larger than what is appropriate. If it is larger, we are wasting memory and should reduce the cache size to a more appropriate value.

      Cache limit exceeded

      The number of beans currently being used is greater than the capacity we have specified, indicating that the cache is not properly tuned. The capacity can exceed the EJB Cache setting because the setting is not a hard limit. The EJB Container does not stop adding beans to the cache when the limit is reached. Doing so could mean that when the cache is full, a request for a bean would not be fulfilled, or would at least be delayed until the cache fell below the limit. Instead, the cache limit can be exceeded, but the EJB Container attempts to clean up the cache and keep it below the EJB Cache size.

      In the case where the cache limit is exceeded, we might see a trace point similar to this:

      (iSeries) (Dist) BackgroundLru < EJB Cache: Sweep (64,38) - Evicted = 50 : 3589/2053 Exit

      (ZOS) EJB Cache: Sweep (64,38) - Evicted = 50 : 3589/2053

      Notice the Evicted = string. If we see this string, we are using either Stateful Session Beans or Entity Beans configured for Option A or B caching. Evicted objects mean that we are not taking full advantage of the caching option that we have chosen. Your first step is to try increasing the EJB Cache size. If continued running of the application results in more evictions, it means that the application is accessing or creating more new beans between EJB Cache sweeps than the cache can hold, and NOT reusing existing beans.
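
      To quantify this, a small helper can tally evictions across sweeps in the trace log. This is a hypothetical plain-Python sketch (the script name and the trace-line format are our assumptions, based on the sample output above):

      # tally_evictions.py -- sum "Evicted = N" values across cache sweeps.
      # Usage: python tally_evictions.py trace.log
      import re
      import sys

      evict = re.compile(r'Evicted = (\d+) : (\d+)/(\d+)')
      sweeps = 0
      total = 0
      for line in open(sys.argv[1]):
          m = evict.search(line)
          if m:
              sweeps += 1
              total += int(m.group(1))
      print('%d sweeps evicted %d beans in total' % (sweeps, total))

      A total that keeps growing across runs, even after the cache size is increased, points to the bean reuse problems described here.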

      We might want to consider using Option C caching for our entity beans, or checking whether the application removes Stateful Session Beans when they are no longer needed.

      Entity beans configured with Option C caching are only in the cache while in a transaction, and are required to be held in the cache for the entire transaction. Therefore, they are never evicted during a cache sweep, but are removed from the cache when the transaction completes. In addition, if we are using only Stateless Session Beans or Entity Beans with Option C caching (or both), we might want to increase the EJB Cache cleanup interval. The cleanup interval can be set as described in EJB cache settings. Stateless Session Beans are NOT in the EJB Cache, and since Entity Beans using Option C caching are never evicted by the caching (LRU) strategy, there is really no need to sweep often. When using only Stateless Session Beans or Option C caching, we should only see "Evicted = 0" in the trace example shown.

    2. Analyze your trace log. Look for the trace string Cache limit exceeded.

      • We might find more than one instance of this string. Examine them all to find the highest capacity value of beans in the EJB Cache; a helper sketch for extracting this peak follows below. Reset your EJB Cache size to about 110% of this number. Setting the EJB Cache size is explained in a later step.

      • We might find no instances of this string. This means that we have not exceeded the capacity of the EJB Cache (which is your end goal), but not seeing it during your initial analysis could also mean that the cache is too large and using unnecessary memory. In this case, you still must tune the cache by reducing the cache size until the cache limit is not exceeded, then increasing it to the optimum value. Setting the EJB Cache size is explained in a later step.

      Your ultimate goal is to set the cache limit to a value that does not waste resources, but also does not get exceeded. A good setup gives you a trace with only the Cache limit not reached message, and a ratio where the capacity number is near, but below, 100% of the EJB Cache setting.

      IBM recommends not setting the cache size below the default of 2053.
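
      To pull the peak capacity out of a large trace log programmatically, a hypothetical plain-Python sketch like the following can help (the script name and the 110% rule of thumb come from the analysis above; the trace-line format is assumed from the samples shown):

      # suggest_cache_size.py -- find the highest capacity seen while the
      # cache limit was exceeded, and suggest about 110% of that peak.
      # Usage: python suggest_cache_size.py trace.log
      import re
      import sys

      exceeded = re.compile(r'Cache limit exceeded : (\d+)/(\d+)')
      peak = 0
      for line in open(sys.argv[1]):
          m = exceeded.search(line)
          if m:
              peak = max(peak, int(m.group(1)))
      if peak:
          print('Peak capacity: %d; suggested EJB Cache size: %d' % (peak, int(peak * 1.1)))
      else:
          print('Cache limit never exceeded; consider reducing the cache size.')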

  6. Modify the cache settings based on your analysis. See EJB cache settings for information about how to do this.
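
    For scripted environments, the cache settings can also be changed with wsadmin. This Jython sketch is an illustration only; it assumes the same hypothetical node and server names as before, and uses 4397 (about 110% of the example peak of 3997) as the new size:

    # Assumption: node myNode, server server1 (hypothetical names).
    # EJBCache holds the cacheSize and cleanupInterval (milliseconds) attributes;
    # cleanupInterval is left at its default of 3000 here.
    server = AdminConfig.getid('/Node:myNode/Server:server1/')
    ejbCache = AdminConfig.list('EJBCache', server)
    AdminConfig.modify(ejbCache, [['cacheSize', '4397'], ['cleanupInterval', '3000']])
    AdminConfig.save()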

  7. (iSeries) (Dist) Stop the server, delete all logs, and restart the server.

  8. (ZOS) Stop and restart the server.

  9. Repeat the previous steps until we are satisfied with our settings.
  10. Disable the EJB Cache trace. With the cache properly tuned, we can remove the trace, remove old logs, and restart the server.


What to do next

From your analysis, it is possible to set the EJB cache optimally from an EJB Container perspective, but perhaps not optimally from a WebSphere Application Server perspective. A larger cache size provides more hits and better EJB cache performance, but uses more memory. Memory used by the cache is not available to other areas of the product, potentially causing overall performance to suffer. In a system with ample memory, this might not be an issue, and properly tuning the EJB cache might increase overall performance. However, we should weigh overall system performance against EJB cache performance when configuring the cache.

  • Working with trace
  • Use High Performance Extensible Logging to troubleshoot applications
  • EJB cache settings
  • Diagnostic trace service settings