Migrating a queue manager to become a DR RDQM queue manager
We can migrate an existing queue manager to become a disaster recovery (DR) replicated data queue manager (RDQM) by backing up its persistent data, then restoring the data to a newly created RDQM queue manager that has the same name.
About this task
DR Replicated Data Queue Managers require a dedicated logical volume (filesystem) and the configuration of disk replication. These components are only configured when a new queue manager is created. An existing queue manager can be migrated to use RDQM by backing up its persistent data, then restoring the data to a newly created RDQM queue manager that has the same name. This procedure preserves the queue manager configuration, state, and persistent messages at the time the backup was created.
Note: We can only migrate a queue manager from a version of IBM MQ that is the same as or lower than the version where RDQM is installed. The operating system and architecture must also be the same. Otherwise, we must create a new queue manager on the target platform; see Moving a queue manager to a different operating system.
We should satisfy the following conditions before we migrate a queue manager:
- Evaluate your disaster recovery requirements and see RDQM disaster recovery.
- Review the applications and queue managers that connect to the queue manager. Consider the changes required to route the connections to the RDQM node where the queue manager is running.
- Provision, or identify existing, RDQM nodes for the chosen configuration. For information about the system requirements for RDQM, see Requirements for RDQM DR solution.
- Install IBM MQ Advanced, which includes the RDQM feature, on each node.
- Optionally, verify the RDQM configuration using a test queue manager, which can then be deleted. Testing the configuration is recommended to identify and resolve any problems before migrating the queue manager.
- Review the security configuration for the queue manager, then replicate the required local users and groups on each RDQM node.
- Review the queue manager and channel configuration to determine if API exits, channel exits, or data conversion exits are used. Install the required exits on each RDQM node.
- Review any queue manager services that have been defined, then install and configure the required processes on each RDQM node.
Procedure
- Back up the existing queue manager:
- Stop the existing queue manager by issuing a wait shutdown command (endmqm -w) or an immediate shutdown command (endmqm -i). This step is important to ensure that the data in the backup is consistent.
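For example, assuming the existing queue manager is named QM1 (the same name used in the sample output later in this task), the following command requests a wait shutdown:
endmqm -w QM1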
- Determine the location of the queue manager data directory by viewing the IBM MQ configuration file, mqs.ini. On Linux, this file is located in the /var/mqm directory. For more information about mqs.ini, see IBM MQ configuration file, mqs.ini.
Locate the QueueManager stanza for the queue manager in the file. If the stanza contains a key named DataPath then its value is the queue manager data directory. If the key does not exist, then the queue manager data directory can be determined using the values of the Prefix and Directory keys. The queue manager data directory is a concatenation of these values, of the form prefix/qmgrs/directory. For more information about the QueueManager stanza, see QueueManager stanza of the mqs.ini file.
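For example, an illustrative QueueManager stanza of the following form (the values are placeholders, not taken from your system) identifies a data directory of /var/mqm/qmgrs/QM1, either directly through DataPath or as the concatenation of Prefix and Directory:
QueueManager:
   Name=QM1
   Prefix=/var/mqm
   Directory=QM1
   DataPath=/var/mqm/qmgrs/QM1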
- Create a backup of the queue manager data directory. On Linux, we can do this by using the tar command. For example, to back up the data directory for a queue manager you can use the following command. Note the last parameter of the command, which is a single period (dot):
tar -cvzf qm-data.tar.gz -C queue_manager_data_dir .
- Determine the location of the queue manager log directory by viewing the IBM MQ queue manager configuration file, qm.ini. This file is located in the queue manager data directory. For more information about the file, see Queue manager configuration files, qm.ini.
The queue manager log directory is defined as the value of the LogPath key in the Log stanza. For information about the stanza, see Log stanza of the qm.ini file.
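For example, an illustrative Log stanza of the following form identifies a log directory of /var/mqm/log/QM1/ (the other keys shown are typical but not needed for this task):
Log:
   LogPath=/var/mqm/log/QM1/
   LogPrimaryFiles=3
   LogSecondaryFiles=2
   LogType=CIRCULAR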
- Create a backup of the queue manager log directory. On Linux, we can do this by using the tar command. For example, to back up the log directory for a queue manager we can use the following command. Note the last parameter of the command, which is a single period (dot):
tar -cvzf qm-log.tar.gz -C queue_manager_log_dir .
- Create a backup of any certificate repositories used by the queue manager if they are not located in the queue manager data directory. Ensure that both the key database file and the password stash file are backed up. For information about the queue manager key repository, see The SSL/TLS key repository and Locating the key repository for a queue manager. For information about locating the AMS key store if the queue manager is configured to use AMS Message Channel Agent (MCA) interception, see Message Channel Agent (MCA) interception.
- The existing queue manager is no longer required, so it can be deleted. However, where possible, we should only delete the existing queue manager after it has been successfully restored on the target system. Deferring deletion ensures that the queue manager can be restarted if the migration process does not complete successfully. Note: If you defer deletion of the existing queue manager, do not restart it. It is important the queue manager remains ended because further changes to its configuration or state are lost during the migration.
- Prepare the primary RDQM node:
- Create a new RDQM queue manager with the same name as the queue manager that you backed up. Ensure the filesystem allocated for the RDQM queue manager by crtmqm is big enough to contain the data, primary logs and secondary logs for the existing queue manager, plus some additional space for future expansion. For information about how to create an RDQM queue manager, see Create a disaster recovery RDQM.
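For example, a command of the following general form creates the primary instance of a synchronous DR RDQM queue manager named QM1 with a dedicated 3 GB file system. The IP addresses, remote node name, port, and file system size are placeholders, and the exact options required for your configuration should be confirmed in Create a disaster recovery RDQM:
crtmqm -rr p -rt s -rl 192.168.20.1 -ri 192.168.20.2 -rn node2.example.com -rp 3000 -fs 3 QM1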
- Determine the primary RDQM node for the queue manager. For information about how to determine the primary node, see rdqmstatus (display RDQM status).
- On the primary RDQM node, if the RDQM queue manager is started, stop it by using the endmqm -w or endmqm -i command.
- Determine the location of the data and log directories for the RDQM queue manager (use the methods described in steps 1b and 1d).
- Delete the contents of the RDQM queue manager data and log directories, but not the directories themselves.
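For example, the following commands remove the contents of both directories, including hidden files, while leaving the directories themselves in place. Replace the placeholder paths with the directories determined in the previous step:
find rdqm_queue_manager_data_dir -mindepth 1 -delete
find rdqm_queue_manager_log_dir -mindepth 1 -delete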
- Restore the queue manager on the primary RDQM node:
- Copy the backups of the queue manager data and log directories to the primary RDQM node, plus any separate backups of certificate repositories used by the queue manager.
- Restore the backup of the queue manager data directory to the empty data directory for the new RDQM queue manager, ensuring that file ownership and permissions are preserved. If the backup was created using the example tar command in step 1c then the following command can be used by the root user to restore it:
tar -xvzpf qm-data.tar.gz -C queue_manager_data_dir
- Restore the backup of the queue manager log directory to the empty log directory for the new RDQM queue manager, ensuring that file ownership and permissions are preserved. If the backup was created using the example tar command in step 1e then the following command can be used by the root user to restore it:
tar -xvzpf qm-log.tar.gz -C queue_manager_log_dir
- Edit the restored queue manager configuration file, qm.ini, in the data directory for the RDQM queue manager. Update the value of the LogPath key in the Log stanza to specify the log directory for the RDQM queue manager. Review other file paths that are defined in the configuration file and update them if necessary. For example, you might need to update the following paths:
- The path for error log files that are generated by diagnostic message services.
- The path for exits that are required by the queue manager.
- The path for switch load files if the queue manager is an XA transaction coordinator.
- If the queue manager is configured to use AMS Message Channel Agent (MCA) interception, copy the AMS key store to the new RDQM installation, then review and update the configuration. The key store must be available on each RDQM node, so if it is not located in the replicated filesystem for the queue manager it must be copied to each node instead. For more information, see Message Channel Agent (MCA) interception.
- Verify that the queue manager is displayed by the dspmq command and its status is reported as ended. The following example shows sample output for an RDQM DR queue manager:
$ dspmq -o status -o dr
QMNAME(QM1)    STATUS(Ended normally) DRROLE(Primary)
- Verify that the restored queue manager data has been replicated to the secondary RDQM nodes by using the rdqmstatus command to display the status for the queue manager. The DR status should be reported as Normal on each node. The following example shows sample output for an RDQM DR queue manager:
$ rdqmstatus -m QM1
Queue manager status:        Ended normally
Queue manager file system:   51MB used, 1.0GB allocated [5%]
DR role:                     Primary
DR status:                   Normal
DR type:                     Synchronous
DR port:                     3000
DR local IP address:         192.168.20.1
DR remote IP address:        192.168.20.2
- Start the queue manager on the primary RDQM node.
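For example, using the sample queue manager name:
strmqm QM1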
- Connect to the queue manager and update the value of the SSLKEYR queue manager attribute to specify the new location of the queue manager certificate repository. By default, the value of this attribute is set to queue_manager_data_directory/ssl/key. The certificate repository must be located in the same location on each RDQM node. If the repository is not located in the replicated filesystem for the queue manager, then it must be copied to each node instead.
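For example, the following illustrative runmqsc command sets the attribute to the default location under the RDQM queue manager data directory. Replace queue_manager_data_directory with the data directory determined earlier, and note that the value omits the .kdb file extension:
echo "ALTER QMGR SSLKEYR('queue_manager_data_directory/ssl/key')" | runmqsc QM1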
- Review the IBM MQ object definitions for the queue manager and update the value of object attributes that reference changed network settings, the IBM MQ installation directory, or the queue manager data directory, including the following objects (an illustrative example follows this list):
- Local IP addresses used by listeners (IPADDR attribute).
- Local IP addresses used by channels (LOCLADDR attribute).
- Local IP addresses defined for cluster-receiver channels (CONNAME attribute).
- Local IP addresses defined for communication information objects (GRPADDR attribute).
- System paths defined for process and service object definitions.
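For example, the following illustrative runmqsc commands update a listener and a cluster-receiver channel to use the local IP address of the RDQM node; the object names, IP address, and port are placeholders:
echo "ALTER LISTENER(TCP.LISTENER) TRPTYPE(TCP) IPADDR('10.0.1.10')" | runmqsc QM1
echo "ALTER CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) CONNAME('10.0.1.10(1414)')" | runmqsc QM1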
- Stop and restart the queue manager to ensure the changes become effective.
- Repeat step 3j for remote queue managers that connect to the migrated queue manager, plus the equivalent settings for connecting applications, including the following items (an illustrative example follows this list):
- Channel connection names (CONNAME attribute).
- Channel authentication rules that restrict inbound connections from the queue manager based on its IP address or hostname.
- Client channel definition tables (CCDTs), domain name settings (DNS), network routing, or equivalent connection information.
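For example, the following illustrative command, run against a hypothetical remote queue manager QM2, updates a sender channel so that it connects to the migrated queue manager on its RDQM node; the channel name, host name, and port are placeholders:
echo "ALTER CHANNEL(TO.QM1) CHLTYPE(SDR) CONNAME('rdqm-node1.example.com(1414)')" | runmqsc QM2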
- Perform a managed failover of the queue manager to each RDQM node to ensure the required configuration has been established successfully, see Switching over to a recovery node.
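A minimal sketch of one managed switchover, assuming the sample queue manager name and the rdqmdr role-change options described in Switching over to a recovery node, is:
endmqm -w QM1      # on the current primary node, end the queue manager
rdqmdr -m QM1 -s   # on the current primary node, make this instance the DR secondary
rdqmdr -m QM1 -p   # on the recovery node, make that instance the DR primary
strmqm QM1         # on the recovery node, start the queue manager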
Parent topic: Create a disaster recovery RDQM