Alert state caching
Use alert state caching to reduce the number of repetitious alert evaluations.
The caching mechanism implemented by the alerts engine gives developers a way to minimize the number of alert evaluations that take place in response to portlet requests. Alerts can generally be divided into two groups based upon the nature of the data that activates them; of these, periodic alerts are good candidates for caching.
- Non-periodic alerts are driven by asynchronous events or data that is subject to change at any time.
A good example is an alert that is activated when the number of unresolved support cases in the system rises above some threshold. Because this number can change from moment to moment, there is no way to accurately predict when the alert might be activated.
- Periodic alerts are driven by data that changes over time but, once it has changed, remains constant for some well-defined period.
A good example is an alert that is activated when the previous quarter's revenue falls below some threshold. Because quarterly revenue data typically does not change once the quarter has closed, there is little reason to evaluate this alert more than once against the previous quarter's data (see the sketch after this list).
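The distinction matters because it determines how long an evaluation result stays valid. The following is a minimal sketch of the two kinds of evaluator; the AlertEvaluation result type, the class names, and the idea of returning a cache-expiry hint are assumptions made for illustration and are not part of the actual alerts engine API.

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.time.temporal.IsoFields;

/** Hypothetical result type: the alert state plus an optional cache-expiry hint. */
record AlertEvaluation(boolean activated, Instant cacheUntil) {
    static AlertEvaluation uncacheable(boolean activated) {
        return new AlertEvaluation(activated, null);   // null means "do not cache"
    }
}

/** Non-periodic: the count of unresolved cases can change at any moment, so never cache. */
class OpenCasesEvaluator {
    AlertEvaluation evaluate(int openCases, int threshold) {
        return AlertEvaluation.uncacheable(openCases > threshold);
    }
}

/**
 * Periodic: last quarter's revenue is fixed once the quarter has closed, so the
 * result can safely be cached until the current quarter ends.
 */
class QuarterlyRevenueEvaluator {
    AlertEvaluation evaluate(double lastQuarterRevenue, double threshold) {
        LocalDate today = LocalDate.now(ZoneOffset.UTC);
        int quarter = today.get(IsoFields.QUARTER_OF_YEAR);              // 1..4
        LocalDate quarterEnd = LocalDate.of(today.getYear(), quarter * 3, 1)
                .plusMonths(1).minusDays(1);                              // last day of the quarter
        Instant cacheUntil = quarterEnd.atStartOfDay(ZoneOffset.UTC).toInstant();
        return new AlertEvaluation(lastQuarterRevenue < threshold, cacheUntil);
    }
}
```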
Alert caching is a collaborative effort between the alerts engine and the various alert evaluators. Because the alerts engine has no way of knowing whether an alert is periodic, it relies upon the business logic in the evaluator to tell it whether a particular alert should be cached.
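One way this collaboration could look in code is sketched below: the engine keeps a per-alert cache and only calls the evaluator when there is no unexpired entry. The AlertEvaluator interface, the AlertEvaluation record (the same hypothetical result type used in the sketch above), and the cache layout are assumptions made for illustration, not the actual alerts engine contract.

```java
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical result: alert state plus an expiry hint (null = do not cache). */
record AlertEvaluation(boolean activated, Instant cacheUntil) {}

/** Hypothetical evaluator contract: the business logic decides cacheability. */
interface AlertEvaluator {
    AlertEvaluation evaluate(String alertId);
}

/** Sketch of the engine side: reuse a cached state until its expiry passes. */
class AlertsEngine {
    private final Map<String, AlertEvaluation> cache = new ConcurrentHashMap<>();
    private final AlertEvaluator evaluator;

    AlertsEngine(AlertEvaluator evaluator) {
        this.evaluator = evaluator;
    }

    boolean isActivated(String alertId) {
        AlertEvaluation cached = cache.get(alertId);
        if (cached != null && cached.cacheUntil() != null
                && cached.cacheUntil().isAfter(Instant.now())) {
            return cached.activated();          // served from cache, no re-evaluation
        }
        AlertEvaluation fresh = evaluator.evaluate(alertId);
        if (fresh.cacheUntil() != null) {       // evaluator marked the result as periodic
            cache.put(alertId, fresh);
        } else {
            cache.remove(alertId);              // non-periodic: never serve stale state
        }
        return fresh.activated();
    }
}
```

With a split along these lines, the engine never needs to understand the business data itself; the evaluator answers only "is the alert active?" and "how long is that answer good for?".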