
How large should I make my active log?

Estimating the size of the active log that a queue manager needs.

The size of the active log is limited by:
logsize = (primaryfiles + secondaryfiles) * logfilepages * 4096
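
For example, with LogPrimaryFiles=3, LogSecondaryFiles=2, and LogFilePages=4096 (typical default values, used here purely for illustration):
logsize = (3 + 2) * 4096 * 4096 = 83886080 bytes, that is, 80 MiB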

The log should be large enough to cope with your longest running transaction while the queue manager is writing data to disk at its maximum rate.

If your longest running transaction runs for N seconds, and the queue manager writes at most B bytes per second to the log, your log should be at least:
logsize >= 2 * (N+1) * B

The queue manager is likely to be writing the maximum amount of data per second to disk when you are running at peak workload, or it might be when you are recording media images.

If a transaction runs for so long that the log extent containing its first log record is not contained within the active log, the queue manager rolls back active transactions one at a time, starting with the transaction with the oldest log record.

The queue manager needs to be able to make old log extents inactive before the maximum number of primary and secondary files is already in use and another log extent must be allocated.

Decide how long you want your longest running transaction to run, before the queue manager is allowed to roll it back. Your longest running transaction might be waiting for slow network traffic or, in the case of a poorly designed transaction, waiting for user input.

You can investigate how long your longest running transaction runs by issuing the following runmqsc command:
DISPLAY CONN(*) UOWLOGDA UOWLOGTI
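
For example, you can pipe the command into runmqsc from a shell (QM1 is an example queue manager name):
# QM1 is an example queue manager name
echo "DISPLAY CONN(*) UOWLOGDA UOWLOGTI" | runmqsc QM1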

Issuing the dspmqtrn -a command shows all the XA and non-XA transactions in all states.

Issuing the DISPLAY CONN command lists the date and time at which the first log record was written for each of your current transactions.

Attention: For the purposes of calculating the log size, it is the time since the first log record was written that matters, not the time since the application or transaction started.

Round up the length of your longest running transaction to the nearest second; this is because of optimizations in the queue manager.

The first log record can be written long after the application started, if the application begins by, for example, issuing an MQGET call that waits for a length of time before actually getting a message.

By subtracting the oldest UOWLOGDA and UOWLOGTI values output from the
DISPLAY CONN(*) UOWLOGDA UOWLOGTI
command from the current date and time, you can estimate how long your longest running transaction has been running.

Run this runmqsc command repeatedly while your longest running transactions are running at peak workload, so that you do not underestimate the length of your longest running transaction.
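
For example, a minimal shell sketch that samples the transactions every 10 seconds (QM1 is an example queue manager name):
# Sample the in-flight transactions every 10 seconds during peak workload.
# QM1 is an example queue manager name.
while true
do
    date
    echo "DISPLAY CONN(*) UOWLOGDA UOWLOGTI" | runmqsc QM1
    sleep 10
done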

In IBM MQ Version 8.0, use operating system tools, for example iostat on UNIX platforms, to measure the rate at which the queue manager writes data to the log disk.
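
For example, on Linux with the sysstat version of iostat, and assuming the log is in the default location /var/mqm/log on its own file system, you can watch the write throughput of the device that holds it:
# Report device throughput in MB/s every 10 seconds; read the MB_wrtn/s column
# for the device that holds the log file system.
iostat -m 10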

From IBM MQ Version 9.0, you can discover the bytes per second that the queue manager is writing to the log by issuing the following command:
amqsrua -m qmgr -c DISK -t Log 
The logical bytes written value shows the bytes per second that the queue manager is writing to the log. For example:
$ amqsrua -m mark -c DISK -t Log
Publication received PutDate:20160920 PutTime:15383157 Interval:4 minutes,39.579 seconds
Log - bytes in use 37748736
Log - bytes max 50331648
Log file system - bytes in use 316243968
Log file system - bytes max 5368709120
Log - physical bytes written 4334030848 15501948/sec
Log - logical bytes written 3567624710 12760669/sec
Log - write latency 411 uSec

In this example, the rate of logical bytes written to the log is 12760669 bytes per second, or approximately 12 MiB per second.

Using
DISPLAY CONN(*) UOWLOGDA UOWLOGTI
showed that the longest running transaction was:
CONN(57E14F6820700069)
EXTCONN(414D51436D61726B2020202020202020)
TYPE(CONN)
APPLTAG(msginteg_r)                     UOWLOGDA(2016-09-20)
UOWLOGTI(16.44.14)
As the current date and time was 2016-09-20 16.44.19, this transaction had been running for 5 seconds. However, you want to tolerate transactions that run for 10 seconds before the queue manager rolls them back, so your log size should be at least:
2 * (10 + 1) * 12 = 264 MiB

The number of log files must be able to contain the largest expected log size (calculated in the preceding text). This will be:

Minimum number of log files = (Required log size) / (LogFilePages * log file page size (4096))

Using the default LogFilePages value of 4096 and the log size estimate of 264 MiB calculated in the preceding text, the minimum number of log files is:
264 MiB / (4096 x 4096) = 16.5
that is, 17 log files.

Size your log so that your expected workload runs within the primary log files. In the preceding example, that means setting LogPrimaryFiles to at least 17, as illustrated in the sketch that follows.
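
As an illustration only (the queue manager name QM1 and the secondary log file count are assumptions), the equivalent settings when creating a new queue manager are:
# Create a queue manager with 17 primary log files, 2 secondary log files,
# and 4096 pages (16 MiB) per log file. QM1 is an example name.
crtmqm -lp 17 -ls 2 -lf 4096 QM1

For an existing queue manager, LogPrimaryFiles and LogSecondaryFiles can also be changed in the Log stanza of qm.ini, whereas LogFilePages is fixed when the queue manager is created.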


How large should I make my LogFilePages?

Generally, make your LogFilePages large enough so that you can easily increase the size of your active log without reaching the maximum number of primary files. A few large log files are preferable to many small log files, because a few large log files give you more flexibility to increase the size of your log should you need to.
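
For example, using the 264 MiB estimate from the preceding section and a LogFilePages value chosen purely for illustration:
16384 x 4096 = 64 MiB per log file
264 MiB / 64 MiB = 4.125, that is, 5 log files
so the whole log fits comfortably within the primary files, with ample headroom to add more files later.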

For linear logging, very large log files might make performance variable. With very large log files, there is a bigger step to create and format a new log file, or to archive an old one. This is more of a problem with manual and archive log management, because with automatic log management new log files are rarely created.