IBM Tivoli Monitoring, Version 6.3 Fix Pack 2


Impact of large amounts of historical data on the monitoring server or agent

The default location for storing short-term historical data is at the monitoring agent, although in certain configurations the monitoring server might be preferable.

This topic presents factors to consider when determining where to store short-term historical data.

The collection location can be negatively impacted when large amounts of data are processed. This occurs because the warehousing process on the monitoring server or the monitoring agent must read the large row set from the short-term history files. The data must then be transmitted by the warehouse proxy to the data warehouse. For large datasets, this impacts memory, CPU resources, and, especially when collection is at the monitoring server, disk space.

Because the monitoring server can handle numerous requests simultaneously, the impact on it might not be as great as the impact on the monitoring agent. Nonetheless, when historical collection is at the monitoring server, the history data file for one attribute group can contain data for many agents (all the agents storing their data at the monitoring server), making a larger dataset. Requests against a large dataset also consume memory and resources at the Tivoli Enterprise Portal Server.

When historical data is stored at the agent, the history file for one attribute group contains data only for that agent and is much smaller than the one stored at the monitoring server. The most recent 24 hours' worth of data comes from short-term history files. Beyond 24 hours, the data is retrieved from the Tivoli Data Warehouse. (You can change the break point with the KFW_REPORT_TERM_BREAK_POINT portal server environment variable.) This action is transparent to the user; however, requests returning a large amount of data can negatively impact the performance of monitoring servers, monitoring agents, and your network.
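As an illustrative sketch, moving the break point from 24 hours to 48 hours would mean editing the portal server environment file. This assumes the variable is specified in seconds with a default of 86400; confirm the units and file location for your release before changing it:

```shell
# Portal server environment file: kfwenv on Windows, cq.ini on Linux/UNIX
# (paths and units are assumptions -- verify against your release's docs).
# Queries for data newer than the break point go to short-term history files;
# older data is retrieved from the Tivoli Data Warehouse.
KFW_REPORT_TERM_BREAK_POINT=172800
```

The portal server must be restarted for a change to this variable to take effect.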

If a query goes to the short-term history file and retrieves a large amount of data, the retrieval can consume significant CPU and memory, and users can experience degraded system performance while the data is being retrieved. While processing a large data request, the agent might be prevented from processing other requests until this one has completed. This is important because many monitoring agents can typically process only one view query or situation at a time.

A best practice that can be applied to the historical collection, to the view query, or to both is to use filters to limit the data before it is collected or reported. For historical collections, pre-filtering is done in the Filter tab of the Historical Collection Configuration editor or with the filter option of the CLI tacmd histcreatecollection command, as described in the IBM Tivoli Monitoring Command Reference and in Create a historical collection in the Tivoli Enterprise Portal User's Guide. For workspace views, pre-filtering is done in the Query editor by creating another query from a predefined query and adding a filter to the specification, as described in Create another query to the monitoring server in the Tivoli Enterprise Portal User's Guide.
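A hedged sketch of CLI-based pre-filtering follows. The attribute group, object name, and option spellings here are illustrative assumptions, not confirmed syntax; check the tacmd histcreatecollection entry in the IBM Tivoli Monitoring Command Reference for the exact options in your release:

```shell
# Hypothetical example: collect NT_Process rows only for busy processes,
# so idle-process rows never enter short-term history or the warehouse.
# Option names and the filter expression syntax are assumptions -- verify
# them against the Command Reference before use.
tacmd histcreatecollection -a "NT_Process" -t NT -o "Busy Processes" \
    -f "*IF *VALUE NT_Process.%_Processor_Time *GE 50"
```

Filtering at collection time reduces the row set that the warehouse proxy must read and transmit, which addresses the memory, CPU, and disk-space impacts described above.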


Parent topic:

Performance impact of historical data requests
