Request flow prioritization
Overview
With WebSphere Virtual Enterprise, you can define performance goals and bind them to specific subsets of the incoming traffic using the on demand router (ODR) and associated autonomic managers.
The ODR is a server that acts as an HTTP proxy or a stateless SIP proxy (SLSP), forwarding different request flows.
An ODR contains the autonomic request flow manager (ARFM) that prioritizes inbound traffic according to service policy configuration and protects downstream servers from being overloaded.
For inbound SIP/UDP messages, the ODR might route the message to another ODR in order to properly check for and handle UDP retransmissions.
The on demand configuration (ODC) component allows the ODR to sense its environment by dynamically configuring routing rules at runtime. An ODR can route HTTP requests to:
- WebSphere Virtual Enterprise servers
- WAS ND servers
- Servers that are not running WebSphere software
The ODR, like the Web server plug-in for WAS, uses session affinity for routing work requests. After a session is established on a server, later work requests for the same session go to the original server, which maximizes cache usage and reduces queries to back-end resources.
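The following Python sketch illustrates the session affinity idea in simplified form. It is a minimal simulation, not ODR code; the class, method, and server names are hypothetical.

```python
# Illustrative sketch (not ODR source code): session-affinity routing.
# A new session is assigned to a back-end server; later requests that
# carry the same session ID are routed back to that same server.
import random

class AffinityRouter:
    def __init__(self, servers):
        self.servers = servers            # candidate back-end servers
        self.affinity = {}                # session ID -> chosen server

    def route(self, session_id):
        if session_id in self.affinity:   # existing session: honor affinity
            return self.affinity[session_id]
        server = random.choice(self.servers)  # new session: pick any server
        self.affinity[session_id] = server
        return server

router = AffinityRouter(["serverA", "serverB", "serverC"])
first = router.route("JSESSIONID=abc123")
assert router.route("JSESSIONID=abc123") == first   # same session, same server
```

Because repeat requests land on the server that already holds the session data, caches stay warm and back-end resources are queried less often.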
Service policies
A service policy is a user-defined categorization that is assigned to potential work as an attribute that is read by the ARFM. You can use a service policy to classify requests based on request attributes, including:
- URI
- client name and address
- user ID or group
By configuring service policies, you apply varying levels of importance to the actual work. You can use multiple service policies to deliver differentiated services to different categories of requests. Service policy goals can differ in performance targets as well as in importance.
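As a rough illustration, the following Python sketch classifies a request into a service policy by matching request attributes such as URI, client address, and user group. The rule format and the policy names (Platinum, Gold, Bronze) are hypothetical and do not represent the actual ODR classification rule syntax.

```python
# Illustrative sketch (hypothetical rules, not the ODR rule language):
# classify an incoming request into a service policy by matching
# request attributes; the first matching rule wins.
from fnmatch import fnmatch

RULES = [
    {"uri": "/trading/*", "group": "premium", "policy": "Platinum"},
    {"uri": "/trading/*",                      "policy": "Gold"},
    {"client": "10.1.*",                       "policy": "Gold"},
]
DEFAULT_POLICY = "Bronze"

def classify(uri, client_addr, group=None):
    for rule in RULES:
        if "uri" in rule and not fnmatch(uri, rule["uri"]):
            continue
        if "client" in rule and not fnmatch(client_addr, rule["client"]):
            continue
        if "group" in rule and rule["group"] != group:
            continue
        return rule["policy"]
    return DEFAULT_POLICY

print(classify("/trading/buy", "9.42.1.7", group="premium"))  # Platinum
print(classify("/catalog/view", "9.42.1.7"))                  # Bronze
```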
The autonomic request flow manager (ARFM)
The autonomic request flow manager exists in the ODR and controls request prioritization.
The ARFM contains the following components:
- A compute power controller per target cell.
A target cell is a cell to which an ARFM gateway sends work directly. The controller is an HAManagedItem that can run in any node agent, ODR, or deployment manager.
- A gateway for each combination of protocol family, proxy process, and deployment target that is in use.
A gateway runs in its proxy process. For HTTP and SIP, the proxy processes are the ODRs; for JMS and IIOP, the proxy processes are the WebSphere Application Server processes.
- A work factor estimator per target cell.
Like the compute power controller, this is an HAManagedItem that can run in any node agent, ODR, or deployment manager.
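The sketch below is a deliberately simplified model of what a gateway does conceptually: it queues admitted work per service policy and releases it to downstream servers under a concurrency limit, draining the more important queues first. This is not the ARFM algorithm; the class, policy names, and concurrency logic are illustrative assumptions.

```python
# Illustrative sketch (simplified, not the ARFM algorithm): a gateway that
# queues work per service policy and releases it downstream under a
# concurrency limit, serving more important queues first.
from collections import deque

class Gateway:
    def __init__(self, policies, max_concurrency):
        # policies listed from most to least important
        self.order = policies
        self.queues = {p: deque() for p in policies}
        self.max_concurrency = max_concurrency
        self.in_flight = 0

    def enqueue(self, policy, request):
        self.queues[policy].append(request)

    def dispatch(self):
        """Release queued requests while capacity remains, important work first."""
        released = []
        while self.in_flight < self.max_concurrency:
            queue = next((self.queues[p] for p in self.order if self.queues[p]), None)
            if queue is None:
                break
            released.append(queue.popleft())
            self.in_flight += 1
        return released

    def complete(self, n=1):
        self.in_flight -= n        # downstream responses free capacity

gw = Gateway(["Platinum", "Gold", "Bronze"], max_concurrency=2)
for req in ["p1", "b1", "p2", "b2"]:
    gw.enqueue("Platinum" if req.startswith("p") else "Bronze", req)
print(gw.dispatch())   # ['p1', 'p2'] - platinum drains before bronze
```

Note that less important work is only held back, not discarded; it is released as capacity becomes available, which protects the downstream servers from overload.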
Dynamic workload management (DWLM)
Dynamic workload management is a feature of the ODR that applies the same principle as workload management (WLM): routing based on a system of weights that establishes prioritized routing.
With WLM, you manually set static weights in the administrative console. With DWLM, the system can dynamically modify the weights to stay current with the business goals.
DWLM can be turned off. However, if you intend to use the automatic operating modes for the components of dynamic operations, setting a static WLM weight on any of your dynamic clusters can prevent the on demand aspects of WebSphere XD from functioning properly.
WLM and DWLM are not limited to the ODRs; they also apply to IIOP traffic when the client uses the WAS JDK and ORB and the prefer local routing option is not in effect.
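To illustrate the difference, the following Python sketch recomputes routing weights from observed response times instead of using fixed, manually assigned weights. The heuristic, function name, and member names are hypothetical assumptions and are far simpler than what DWLM actually does.

```python
# Illustrative sketch (not the DWLM algorithm): derive routing weights from
# recent observations rather than from static, manually set values.
# The heuristic gives faster cluster members a larger share of new work.
def dynamic_weights(avg_response_ms, scale=10):
    """Map each member's average response time to an integer routing weight."""
    fastest = min(avg_response_ms.values())
    return {
        member: max(1, round(scale * fastest / observed))
        for member, observed in avg_response_ms.items()
    }

# Static WLM: weights fixed in the console, e.g. {"member1": 2, "member2": 2, "member3": 2}.
# DWLM-style: weights follow the measured behavior of each member.
observed = {"member1": 120.0, "member2": 240.0, "member3": 400.0}
print(dynamic_weights(observed))   # {'member1': 10, 'member2': 5, 'member3': 3}
```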
The following diagram shows equal volumes of requests flowing into the ODR. Platinum, gold, and bronze depict a descending order of importance. After the work is categorized, prioritized, and queued, a higher volume of more important (platinum) work is processed, while a lower volume of less important (bronze) work remains queued. Because bronze work is only delayed, not dropped, the long-term average rate of bronze work coming out of the ODR is no less than the long-term average rate of bronze work going in. The features of dynamic operations attempt to keep the work within the target time allotted for completion.
Related tasks
Creating ODRs
Dynamic operations
Dynamic operations environment
Defining a service policy