IBM Tivoli Composite Application Manager for Application Diagnostics, Version 7.1.0.1

Problems and tips for the Managing Server databases

The following are examples of problems you might encounter with the Managing Server databases.


Correcting DB2 high CPU usage on Linux

The problem: The db2fm process (DB2 Fault Monitor) exhibits high CPU usage on Red Hat systems, even after the /etc/inittab file has been changed so that the startup process uses once instead of respawn.

The cause: This problem is present in the base DB2 8.2 included on the supplied prerequisites CD for ITCAM Agent for WebSphere Applications on Linux. The ultimate fix is to apply DB2 Fix Pack 9 or later. This patch is applied to all Red Hat VMware images to stabilize system performance. The following steps describe how to install Fix Pack 12 on Red Hat Linux; a consolidated command sketch follows the steps.

  1. Download the appropriate version of the patch, based on whether you are running Red Hat 3 (2.4 kernel) or Red Hat 4 (2.6 kernel). This example uses the 32-bit 2.4 kernel download file, FP12_MI00154.tar.

  2. Comment out (add a # sign at the beginning) the following line at the end of the /etc/inittab file.
    fmc:2345:respawn:/opt/IBM/db2/V8.1/bin/db2fmcd #DB2 Fault Monitor Coordinator

  3. Save the file and reboot the Red Hat image, which effectively disables the DB2 auto-start. Check that no DB2 processes are running with the command ps -ef | grep db2.

  4. Copy the FP12_MI00154.tar file to the Red Hat VMware image using ftp, sftp, scp, or a Samba share, and extract the tar file with the command tar xvf FP12_MI00154.tar.

  5. Run the command ./installFixPak to update all the Red Hat software packages for DB2.

  6. Confirm the DB2 instance name created when you installed the database, typically db2inst1.

  7. Run the command /opt/IBM/db2/V8.1/instance/db2iupdt db2inst1 to update the current instance. Repeat this command for any other installed instances.

  8. Run the command /opt/IBM/db2/V8.1/instance/dasupdt to update the DB2 Administration Server.

  9. Uncomment the last line in the /etc/inittab file, save the file, and reboot once again.

Note: These steps are excerpted from the release.txt file included with the patch, in the doc directory. Refer to that document for full details.
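
The commands from the preceding steps can be gathered into a single sequence. The following is a minimal sketch, assuming the instance name db2inst1 and the paths shown in the steps; the name of the directory created when the tar file is extracted is a placeholder and may differ on your system.

# Run as root, after commenting out the fmc line in /etc/inittab and rebooting (steps 2 and 3)
tar xvf FP12_MI00154.tar                          # step 4: extract the Fix Pack
cd <extracted-fixpack-directory>                  # placeholder for the directory created by the tar file
./installFixPak                                   # step 5: update the DB2 packages
/opt/IBM/db2/V8.1/instance/db2iupdt db2inst1      # step 7: update the instance (repeat for each instance)
/opt/IBM/db2/V8.1/instance/dasupdt                # step 8: update the DB2 Administration Server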


Maintaining an Oracle database

The question: How do I know which segments have plenty of free space under the high-water mark and would benefit from a reorganization?

The answer: You can use the Oracle Enterprise Manager interface provided in Oracle Database 10g to target a specific tablespace to identify potential candidates.

However, if the database has several hundred tablespaces, this cannot possibly be done every day, and not every tablespace has segments that need reorganization. In Oracle Database 10g Release 2, the supplied package DBMS_SPACE provides an automatic tool that proactively scans the segments and reports any potential candidates for reorganization. The built-in pipelined function ASA_RECOMMENDATIONS returns these segments. You can use it as follows; a fuller query sketch follows this list:

  1. Select * from table (dbms_space.asa_recommendations());

  2. The replacement for RUNSTATS in DB2: automatic table-level statistics gathering is on by default in Oracle.
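
Expanding on the first item, you can select individual columns from the pipelined function for a more readable report. This is a minimal sketch; the column names (SEGMENT_OWNER, SEGMENT_NAME, RECLAIMABLE_SPACE, RECOMMENDATIONS) are assumed from the ASA_RECOMMENDATIONS output and should be verified against your Oracle Database 10g Release 2 data dictionary.

-- List the segments that the Segment Advisor flags as reorganization candidates
SELECT segment_owner,
       segment_name,
       reclaimable_space,
       recommendations
  FROM TABLE(DBMS_SPACE.ASA_RECOMMENDATIONS());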

In 10g, statistics are collected automatically if the initialization parameter STATISTICS_LEVEL is set to TYPICAL or ALL. (The default value is TYPICAL, so automatic statistics gathering is enabled out of the box.) Oracle Database 10g has a predefined Scheduler job named GATHER_STATS_JOB, which is activated when the STATISTICS_LEVEL parameter has an appropriate value. The collection of statistics is fairly resource-intensive. To ensure it does not affect regular operation of the database, there is a special resource consumer group named AUTO_TASK_CONSUMER_GROUP predefined for automatically executed tasks such as statistics gathering. This consumer group keeps the priority of the statistics collection jobs below that of the default consumer group, so the risk of automatic tasks taking over the machine is reduced or eliminated.
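
To confirm the current setting of STATISTICS_LEVEL, you can check it from SQL*Plus. A minimal sketch:

-- Display the parameter value in SQL*Plus
SHOW PARAMETER statistics_level

-- Or query the dynamic performance view directly
SELECT name, value FROM v$parameter WHERE name = 'statistics_level';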

The question: How do I set the parameter STATISTICS_LEVEL to TYPICAL without making the statistics collection automatic?

The answer: Disable the Scheduler job by issuing the following:

BEGIN
  DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
END;
/

To verify that the job is indeed disabled:

  1. SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB'; the ENABLED column shows FALSE for the disabled job.

  2. Check the last date when your tables were analyzed: SELECT * FROM DBA_TAB_STATISTICS WHERE OWNER='AMUSER' AND TABLE_NAME NOT LIKE '%$%' ORDER BY TABLE_NAME;
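
With the automatic job disabled, statistics can still be gathered manually with the supplied DBMS_STATS package. The following is a minimal sketch, assuming the AMUSER schema from the query above; adjust the schema name and options for your environment.

-- Gather optimizer statistics manually for one schema
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'AMUSER',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- also gather statistics for indexes
END;
/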


Script for manually trimming data in the Octigate database

The question: The data trim process may take a long time to complete (24 hours or more in some cases). What can be done as an alternative cleaning mechanism, without losing all the historical data?

The answer: The SQL scripts provided here are to be used only when the normal data trim process does not perform well because of a large amount of data on certain days (millions of requests per day). As a rule, the standard data trim job is still recommended.

This script can be run either through a cron job or manually, and it handles three tables for data trim.

For each table, it creates a temporary table with a statement like the following, retaining n days of data.

For Oracle database with n=1,

create table request_tmp as select * from request where END_TIME > (sysdate - 1); 

For DB2 with n=3,

insert into request_tmp select * from request where END_TIME > 
    (current_timestamp - 3 days); 

The main table REQUEST is then truncated, all entries from the temporary table are inserted back into the REQUEST table, and the temporary table is dropped. Intermediate commits are used, because the log resource could otherwise become a bottleneck for the number of rows held in a pending state. The script can be customized as needed.
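
As an illustration, the sequence described above for the REQUEST table on an Oracle database (with n=1) would look roughly like the following sketch; the other trimmed tables follow the same pattern.

-- 1. Preserve the rows to keep (here, the last day of data)
CREATE TABLE request_tmp AS
  SELECT * FROM request WHERE end_time > (SYSDATE - 1);

-- 2. Empty the main table
TRUNCATE TABLE request;

-- 3. Copy the preserved rows back; commit so the log does not become a bottleneck
INSERT INTO request SELECT * FROM request_tmp;
COMMIT;

-- 4. Remove the temporary copy
DROP TABLE request_tmp;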


Trimming a Large Octigate Database

The question: Is there anything I can do to quickly reduce the size of the Octigate database if I have not been running the datatrim script and the database has grown to an excessively large size?

The answer: Ideally, you should set up your ITCAM database (Octigate) pruning and optimization when you create the Managing Server. For details on optimizing the database, see the appendix on maintaining the monitoring environment in ITCAM for Application Diagnostics: Managing Server Installation and Customization Guide.

In the case of an extremely large Octigate database that has not been trimmed regularly by the datatrim script, you may want to consider dropping and rebuilding the following four tables:

request
method
serverstats
gc_data

These four tables are usually the largest ones. Recreating them will not harm ITCAM in any way, other than the fact that their historical data will be lost.
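
If the original table definitions are not at hand for a drop-and-rebuild, emptying the tables with TRUNCATE removes the data while keeping the table structures in place. This alternative is offered as an assumption, not as the documented drop-and-rebuild procedure:

-- Empty the four largest tables; all historical data in them is lost
TRUNCATE TABLE request;
TRUNCATE TABLE method;
TRUNCATE TABLE serverstats;
TRUNCATE TABLE gc_data;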

In addition, another technote provides AS-IS scripts to help remove large amounts of data. See the following link for details: http://www-01.ibm.com/support/docview.wss?&uid=swg21383986


Parent topic:

Run the Managing Server
