Tuning WebLogic Server Applications

WebLogic Server only performs as well as the applications running on it. To quote the authors of Mastering BEA WebLogic Server: Best Practices for Building and Deploying J2EE Applications: "Good application performance starts with good application design. Overly-complex or poorly-designed applications will perform poorly regardless of the system-level tuning and best practices employed to improve performance." In other words, a poorly designed application can create unnecessary bottlenecks; resource contention, for example, may stem from poor design rather than being inherent to the application domain.

This section discusses some methods to determine the bottlenecks that can impede your application's performance:

  • Using Performance Analysis Tools
  • JDBC Application Tuning
  • JMS Application Tuning
  • EJB Application Tuning
  • Web Services Tuning
  • Managing Sessions
  • Using Execute Queues to Control Thread Usage


Using Performance Analysis Tools

This section is a quick reference for using the OptimizeIt and JProbe profilers with WebLogic Server.

A profiler is a performance analysis tool that helps you identify hot spots in your application that result in either high CPU utilization or high contention for shared resources. For a list of common profilers, see Performance Analysis Tools.

 

Using the JProbe Profiler

The JProbe Suite is a family of products that provide the capability to detect performance bottlenecks, find and fix memory leaks, measure code coverage, and gather other metrics.

The JProbe website provides a technical white paper, "Using Sitraka JProbe and BEA WebLogic Server," which describes how developers can analyze code with any of the JProbe Suite tools running inside BEA WebLogic Server.

 

Using the Optimizeit Profiler

The Optimizeit Profiler from Borland is a performance debugging tool for Solaris and Windows platforms.

Borland provides detailed J2EE Integration Tutorials for the supported versions of Optimizeit Profiler that work with WebLogic Server.

 


JDBC Application Tuning

Most performance gains or losses in a database application are determined by how the application is designed. The number and location of clients, size and structure of DBMS tables and indexes, and the number and types of queries all affect application performance.
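
For example, the following sketch shows one common pattern for using a WebLogic JDBC connection pool from application code: the pool is exposed through a DataSource bound in JNDI, and connections are borrowed and returned inside a try/finally block. The JNDI name myJdbcDataSource and the customers table are placeholders for illustration only.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class JdbcLookupExample {
    public void printCustomerName(int customerId) throws Exception {
        // Look up the pool-backed DataSource by its JNDI name.
        // "myJdbcDataSource" is a placeholder for your configured name.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("myJdbcDataSource");

        Connection conn = null;
        PreparedStatement stmt = null;
        ResultSet rs = null;
        try {
            conn = ds.getConnection();   // borrows a connection from the pool
            stmt = conn.prepareStatement(
                "SELECT name FROM customers WHERE id = ?");
            stmt.setInt(1, customerId);
            rs = stmt.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        } finally {
            // Closing the connection returns it to the pool.
            if (rs != null) rs.close();
            if (stmt != null) stmt.close();
            if (conn != null) conn.close();
        }
    }
}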

For more information on optimizing your applications for JDBC and tuning WebLogic JDBC connection pools, see:

 


JMS Application Tuning

There are a number of design choices that impact the performance of JMS applications. Other considerations include reliability, scalability, manageability, monitoring, user transactions, message-driven bean support, and integration with an application server. In addition, there are WebLogic JMS extensions and features that have a direct impact on performance.
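
As one illustration of such a trade-off, the following sketch sends a message using the non-persistent delivery mode, which trades message reliability for throughput. The JNDI names MyConnectionFactory and MyQueue are placeholders for whatever connection factory and destination you have configured.

import javax.jms.*;
import javax.naming.InitialContext;

public class JmsSendExample {
    public void send(String text) throws Exception {
        InitialContext ctx = new InitialContext();
        // JNDI names are placeholders for your configured objects.
        QueueConnectionFactory factory =
            (QueueConnectionFactory) ctx.lookup("MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("MyQueue");

        QueueConnection connection = factory.createQueueConnection();
        try {
            QueueSession session =
                connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);
            // Non-persistent delivery trades reliability for throughput.
            sender.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            sender.send(session.createTextMessage(text));
        } finally {
            connection.close();
        }
    }
}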

For more information on optimizing your applications for JMS and tuning WebLogic JMS, see:

 


EJB Application Tuning

Tuning WebLogic Server EJBs describes how to tune WebLogic Server Enterprise Java Beans to match your application needs.

 


Web Services Tuning

There are some performance issues you should be aware of when you program your WebLogic Web services:

 


Managing Sessions

As a general rule, you should optimize your application so that it does as little work as possible when handling session persistence and sessions. You should also design a session management strategy that suits your environment and application.

 

Managing Session Persistence

WebLogic Server offers five session persistence mechanisms that cater to the differing requirements of your application. The session persistence mechanisms are configurable at the Web application layer. Which session management strategy you choose for your application depends on real-world factors such as HTTP session size, session life cycle, reliability, and session failover requirements. For example, a Web application with no failover requirements could use a single memory-based session, whereas a Web application that requires session failover could use replicated sessions or JDBC-based sessions, depending on session life cycle and object size.

In terms of pure performance, in-memory session persistence is a better overall choice when compared to JDBC-based persistence for session state. According to the authors of Session Persistence Performance in BEA WebLogic Server 7.0: "While all session persistence mechanisms have to deal with the overhead of data serialization and deserialization, the additional overhead of the database interaction impacts the performance of the JDBC-based session persistence and causes it to under-perform compared with the in-memory replication." However, in-memory-based session persistence requires the use of WebLogic clustering, so it isn't an option in a single-server environment.

On the other hand, an environment using JDBC-based persistence does not require the use of WebLogic clusters and can maintain the session state for longer periods of time in the database. One way to improve JDBC-based session persistence is to optimize your code so that it has as high a granularity for session state persistence as possible. Other factors that can improve the overall performance of JDBC-based session persistence are: the choice of database, proper database server configuration, JDBC driver, and the JDBC connection pool configuration.
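
For reference, the persistence mechanism is selected in the session-descriptor element of the Web application's weblogic.xml descriptor. The following fragment is a sketch of a replicated (in-memory) configuration; parameter names can vary between releases, so confirm them against Setting Up Session Management. For JDBC-based persistence, the PersistentStoreType value is set to jdbc and the connection pool to use is identified through an additional session parameter.

<weblogic-web-app>
   <session-descriptor>
      <session-param>
         <param-name>PersistentStoreType</param-name>
         <param-value>replicated</param-value>
      </session-param>
   </session-descriptor>
</weblogic-web-app>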

For more information on managing session persistence, see:

 

Minimizing Sessions

Configuring how WebLogic Server manages sessions is a key part of tuning your application for best performance. Consider the following:

  • Use of sessions involves a scalability trade-off.
  • Use sessions sparingly. In other words, use sessions only for state that cannot realistically be kept on the client or if URL rewriting support is required. For example, keep simple bits of state, such as a user's name, directly in cookies. You can also write a wrapper class to "get" and "set" these cookies, to simplify the work of servlet developers working on the same project (a minimal sketch of such a wrapper follows this list).
  • Keep frequently used values in local variables.
  • Put aggregate objects rather than multiple single objects into the session where possible.
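
For example, a cookie wrapper class might look like the following sketch; the userName cookie name and 30-day expiration are arbitrary choices for illustration.

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UserNameCookie {
    // Cookie name is an arbitrary example value.
    private static final String NAME = "userName";

    public static String get(HttpServletRequest request) {
        Cookie[] cookies = request.getCookies();
        if (cookies != null) {
            for (int i = 0; i < cookies.length; i++) {
                if (NAME.equals(cookies[i].getName())) {
                    return cookies[i].getValue();
                }
            }
        }
        return null;  // cookie not present
    }

    public static void set(HttpServletResponse response, String value) {
        Cookie cookie = new Cookie(NAME, value);
        cookie.setMaxAge(60 * 60 * 24 * 30);  // keep for roughly 30 days
        response.addCookie(cookie);
    }
}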

For more information, see "Setting Up Session Management" in Assembling and Configuring Web Applications.

 


Using Execute Queues to Control Thread Usage

You can fine-tune an application's access to execute threads (and thereby optimize or throttle its performance) by using multiple execute queues in WebLogic Server. However, keep in mind that unused threads represent significant wasted resources in a WebLogic Server system. You may find that available threads in configured execute queues go unused, while tasks in other queues sit idle waiting for threads to become available. In such a situation, the division of threads into multiple queues may yield poorer overall performance than having a single, default execute queue.

Default WebLogic Server installations are configured with a default execute queue, weblogic.kernel.Default, which is used by all applications running on the server instance. You may want to configure additional queues to:

  • Optimize the performance of critical applications. For example, you can assign a single, mission-critical application to a particular execute queue, guaranteeing a fixed number of execute threads. During peak server loads, nonessential applications may compete for threads in the default execute queue, but the mission-critical application has access to the same number of threads at all times.
  • Throttle the performance of nonessential applications. For an application that can potentially consume large amounts of memory, assigning it to a dedicated execute queue effectively limits the amount of memory it can consume. Although the application can potentially use all threads available in its assigned execute queue, it cannot affect thread usage in any other queue.
  • Remedy deadlocked thread usage. With certain application designs, deadlocks can occur when all execute threads are currently utilized. For example, consider a servlet that reads messages from a designated JMS queue. If all execute threads in a server are used to process the servlet requests, then no threads are available to deliver messages from the JMS queue. A deadlock condition exists, and no work can progress. Assigning the servlet to a separate execute queue avoids potential deadlocks, because the servlet and JMS queue do not compete for thread resources.

Be sure to monitor each execute queue to ensure proper thread usage in the system as a whole. See Tuning the Default Execute Queue Threads for general information about optimizing the number of threads.

 

Creating Execute Queues

An execute queue represents a named collection of execute threads that are available to one or more designated servlets, JSPs, EJBs, or RMI objects. An execute queue is represented in the domain config.xml file as part of the Server element. For example, an execute queue named CriticalAppQueue with four execute threads appears in the config.xml file as follows:

...

<Server
 Name="examplesServer"
 ListenPort="7001"
 NativeIOEnabled="true">
 <ExecuteQueue Name="default"
  ThreadCount="15"/>
 <ExecuteQueue Name="CriticalAppQueue"
  ThreadCount="4"/>
 ...
</Server>

To configure a new execute queue using the Administration Console:

  1. Start the Administration Server if it is not already running.
  2. Access the Administration Console for the domain.
  3. Expand the Servers node in the left pane to display the servers configured in your domain.
  4. Right-click the name of the server instance on which you want to add an execute queue, and then select View Execute Queues from the pop-up menu.
  5. On the execute queue Configuration tab, click the Configure a New Execute Queue link.
  6. On the execute queue Configuration tab, modify the following attributes or accept the system defaults (a sample config.xml entry showing these attributes appears after this procedure):

    • Queue Length: Always leave the Queue Length at the default value of 65536 entries. The Queue Length specifies the maximum number of simultaneous requests that the server can hold in the queue. The default of 65536 requests represents a very large number of requests; outstanding requests in the queue should rarely, if ever, reach this maximum value.

      If the maximum Queue Length is reached, WebLogic Server automatically doubles the size of the queue to account for the additional work. Note, however, that exceeding 65536 requests in the queue indicates a problem with the threads in the queue, rather than the length of the queue itself; check for stuck threads or an insufficient thread count in the execute queue.

    • Queue Length Threshold Percent: The percentage (from 1-99) of the Queue Length size that can be reached before the server indicates an overflow condition for the queue. All actual queue length sizes below the threshold percentage are considered normal; sizes above the threshold percentage indicate an overflow. When an overflow condition is reached, WebLogic Server logs an error message and increases the number of threads in the queue by the value of the Threads Increase attribute to help reduce the workload.

      By default, the Queue Length Threshold Percent value is 90 percent. In most situations, you should leave the value at or near 90 percent, to account for any potential condition where additional threads may be needed to handle an unexpected spike in work requests. Keep in mind that Queue Length Threshold Percent must not be used as an automatic tuning parameter - the threshold should never trigger an increase in thread count under normal operating conditions.

    • Thread Count: The number of threads assigned to this queue. If you do not need to use more than 15 threads (the default) for your work, do not change the value of this attribute. (For more information, see Should You Modify the Default Thread Count?)
    • Threads Increase: The number of threads WebLogic Server should add to this execute queue when it detects an overflow condition. If you specify zero threads (the default), the server changes its health state to "warning" in response to an overflow condition in the queue, but it does not allocate additional threads to reduce the workload.

      Note: If WebLogic Server increases the number of threads in response to an overflow condition, the additional threads remain in the execute queue until the server is rebooted. Monitor the error log to determine the cause of overflow conditions, and reconfigure the thread count as necessary to prevent similar conditions in the future. Do not use the combination of Threads Increase and Queue Length Threshold Percent as an automatic tuning tool; doing so generally results in the execute queue allocating more threads than necessary and suffering from poor performance due to context switching.

    • Threads Minimum: The minimum number of threads that WebLogic Server should maintain in this execute queue to prevent unnecessary overflow conditions. By default, the Threads Minimum is set to 5.
    • Threads Maximum: The maximum number of threads that this execute queue can have; this value prevents WebLogic Server from creating an overly high thread count in the queue in response to continual overflow conditions. By default, the Threads Maximum is set to 400.
    • Thread Priority: The priority of the threads associated with this queue. By default, the Thread Priority is set to 5.
  7. Click Create to create the new execute queue.
  8. Reboot the server to use the new settings.
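
For reference, the attributes described in step 6 are stored on the ExecuteQueue element in config.xml. The following fragment is a sketch showing the default values listed above; attribute names can vary between releases, so verify them against the configuration reference for your version.

<ExecuteQueue Name="CriticalAppQueue"
 QueueLength="65536"
 QueueLengthThresholdPercent="90"
 ThreadCount="15"
 ThreadsIncrease="0"
 ThreadsMinimum="5"
 ThreadsMaximum="400"
 ThreadPriority="5"/>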

 

Assigning Servlets and JSPs to Execute Queues

You can assign a servlet or JSP to a configured execute queue by identifying the execute queue name in the initialization parameters. Initialization parameters appear within the init-param element of the servlet's or JSP's deployment descriptor file, web.xml. To assign an execute queue, enter the queue name as the value of the wl-dispatch-policy parameter, as in the example:

<servlet>
   <servlet-name>MainServlet</servlet-name>
   <jsp-file>/myapplication/critical.jsp</jsp-file>
   <init-param>
      <param-name>wl-dispatch-policy</param-name>
      <param-value>CriticalAppQueue</param-value>
   </init-param>
</servlet>

See "Initializing a Servlet in Programming WebLogic HTTP Servlets for more information about specifying initialization parameters in web.xml.

 

Assigning EJBs and RMI Objects to Execute Queues

To assign an EJB object to a configured execute queue, use the new dispatch-policy element in weblogic-ejb-jar.xml. For more information, see the weblogic-ejb-jar.xml Deployment Descriptor.

While you can also set the dispatch policy through the appc compiler's -dispatchPolicy flag, BEA strongly recommends that you use the deployment descriptor element instead. That way, if the EJB is recompiled (during deployment, for example), the setting is not lost.
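
For illustration, a minimal weblogic-ejb-jar.xml fragment that assigns a bean to the CriticalAppQueue execute queue might look like the following; the ejb-name value CriticalOrderBean is a placeholder.

<weblogic-ejb-jar>
   <weblogic-enterprise-bean>
      <ejb-name>CriticalOrderBean</ejb-name>
      <dispatch-policy>CriticalAppQueue</dispatch-policy>
   </weblogic-enterprise-bean>
</weblogic-ejb-jar>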

To assign an RMI object to a configured execute queue, use the -dispatchPolicy option to the rmic compiler. For example:

java weblogic.rmic -dispatchPolicy CriticalAppQueue ...
