
Logs and archive storage

Active log data sets record significant events and data changes, and are periodically off-loaded to the archive log. Consequently, the space required for your active log data sets depends on the volume of messages that your WebSphere MQ subsystem handles, and on how often the active logs are off-loaded to your archive data sets. WebSphere MQ provides optional support for dual logging; if you use dual logging, your log storage requirement doubles.

If you decide to place the archive data sets on direct access storage devices (DASD), you need to reserve enough space on those devices. You should also reserve space for the bootstrap data sets (BSDS); a typical size for each BSDS is 500 KB. Because these are all separate data sets, allocate space for them on different volumes and strings where possible, to minimize DASD contention and the impact of any defects on the physical devices.

Because each change to the system is logged, you can estimate the storage required from the size and expected throughput of persistent messages (nonpersistent messages are not logged). Add to this a small overhead for the header information in the data sets.

Additionally, CF structure backups are written to the active log of the queue manager where the BACKUP CFSTRUCT command is issued.

The size of the log extents depends on several factors, including the rate and size of persistent messages, how frequently you want to switch the log, and the characteristics of any CF structure backups.

Figure 22 shows an approximate calculation for the number of records to specify in the cluster for the log data set.

Figure 22. Calculating the number of records to specify in the cluster for the log data set

Number of records = ((a * log switch interval) + S) / 4096

 where a = (Number of puts/sec * (average persistent message size + 500))
         + (Number of gets/sec * 110)
         + (Number of units of recovery started a second * 120)
         + (Number of syncpoints a second * 240)

       log switch interval = the required time period between successive
                             log switches, in seconds

       S = the number of CF structure backups in the log switch interval
           multiplied by the average structure size
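As a rough illustration, the calculation in Figure 22 can be sketched as a small Python function. The function and parameter names are invented for this sketch and are not part of any WebSphere MQ interface; sizes are in bytes and rates are per second.

```python
# Illustrative sketch of the Figure 22 calculation. The names used here
# are invented for this example; they are not WebSphere MQ interfaces.

def log_records(puts_per_sec, avg_persistent_msg_size,
                gets_per_sec, urs_started_per_sec,
                syncpoints_per_sec, log_switch_interval_secs,
                cf_backups_in_interval=0, avg_structure_size=0):
    """Approximate number of 4096-byte records for each active log data set."""
    a = (puts_per_sec * (avg_persistent_msg_size + 500)
         + gets_per_sec * 110
         + urs_started_per_sec * 120
         + syncpoints_per_sec * 240)
    s = cf_backups_in_interval * avg_structure_size
    return (a * log_switch_interval_secs + s) / 4096

# For example: 100 puts and 100 gets a second of 1024-byte persistent
# messages, 10 units of recovery and 10 syncpoints a second, a log
# switch every 30 minutes (1800 seconds), and no CF structure backups
# gives roughly 73,400 records.
records = log_records(100, 1024, 100, 10, 10, 1800)
```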

Each log data set should have the same number of records specified, and should not have secondary extents. Except for a very small number of records, AMS rounds up the number of records so that a whole number of cylinders is allocated. The number of records actually allocated is:

   c = (INT(number of log records / b) + 1) * b

   where b is the number of 4096-byte blocks in each cylinder (180 for a
   3390 device) and INT means round down to an integer
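This rounding can be expressed as a short Python function. It is a sketch only; the function name is invented for this example, and it mirrors the formula above, including the allocation of an extra cylinder when the requested number of records is an exact multiple of b.

```python
import math

# Sketch of the cylinder rounding: AMS allocates whole cylinders, where
# b is the number of 4096-byte blocks per cylinder (180 on a 3390
# device). The function name is invented for this example.

def allocated_records(log_records, blocks_per_cylinder=180):
    # c = (INT(number of log records / b) + 1) * b
    return (math.floor(log_records / blocks_per_cylinder) + 1) * blocks_per_cylinder

# 73,389 requested records round up to 408 whole cylinders:
cyl_rounded = allocated_records(73389)   # 408 * 180 = 73440 records
```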