Requirements for RDQM DR solution
You must meet a number of requirements before you configure an RDQM disaster recovery (DR) queue manager pair.
System requirements
Before configuring RDQM DR, you must complete some configuration on each of the servers that are to host RDQM DR queue managers.
- Each node requires a volume group named drbdpool (see the sketch after this list). The storage for each disaster recovery replicated data queue manager (DR RDQM) is allocated as two separate logical volumes per queue manager from this volume group. (Each queue manager requires two logical volumes to support the revert to snapshot operation, so each DR RDQM is allocated just over twice the storage that you specify when you create it.) For the best performance, this volume group should be made up of one or more physical volumes that correspond to internal disk drives (preferably SSDs).
- After you have created the drbdpool volume group, do nothing else with it. IBM MQ manages the logical volumes created in drbdpool, and how and where they are mounted.
- Each node requires an interface that is used for data replication. This interface should have sufficient bandwidth to support the replication requirements, given the expected workload of all of the replicated data queue managers. For maximum fault tolerance, this interface should be an independent Network Interface Card (NIC).
- DRBD requires that each node used for RDQM has a valid internet host name (the value that is returned by uname -n), as defined by RFC 952, as amended by RFC 1123.
- If there is a firewall between the nodes used for DR RDQM, then the firewall must allow traffic between the nodes on the ports that are used for replication. A sample script is provided, /opt/mqm/samp/rdqm/firewalld/configure.sh, that opens the necessary ports if you are running the standard firewall in RHEL. You must run the script as root. If you are using some other firewall, examine the service definitions /usr/lib/firewalld/services/rdqm* to see which ports need to be opened (a manual sketch follows this list). The script adds the following permanent firewalld service rules for DRBD and IBM MQ (you can edit the script to omit the Pacemaker ports if you are not using HA):
- MQ_INSTALLATION_PATH/samp/rdqm/firewalld/services/rdqm-drbd.xml allows TCP ports 7000-7100.
- MQ_INSTALLATION_PATH/samp/rdqm/firewalld/services/rdqm-mq.xml allows TCP port 1414 (you must edit the script if you require a different port).
- If the system uses SELinux in a mode other than permissive, you must run the following command:
semanage permissive -a drbd_t
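For example, here is a minimal sketch of creating the drbdpool volume group on each node, assuming a dedicated internal disk at /dev/sdb (the device name is an assumption; substitute your own devices):
# Mark the internal drive (preferably an SSD) as an LVM physical volume.
# /dev/sdb is an assumed device name.
pvcreate /dev/sdb
# Create the volume group from which IBM MQ allocates the DR RDQM logical volumes.
vgcreate drbdpool /dev/sdb
After this point, leave the volume group alone; IBM MQ creates and mounts the logical volumes itself.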
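If you are running firewalld but prefer to open the ports manually rather than run the sample script, the following sketch opens the ports listed in the service definitions above (1414 is the default listener port; adjust it if your listeners use a different port):
# Open the DRBD replication ports used by RDQM.
firewall-cmd --permanent --add-port=7000-7100/tcp
# Open the IBM MQ listener port (assumed to be the default, 1414).
firewall-cmd --permanent --add-port=1414/tcp
# Reload the firewall so that the permanent rules take effect.
firewall-cmd --reload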
Network requirements
It is recommended that you locate the nodes used for disaster recovery in different data centers.
You should be aware of the following limitations:
- Performance degrades rapidly with increasing latency between data centers. IBM supports a latency of up to 5 ms for synchronous replication and up to 50 ms for asynchronous replication.
- The data sent across the replication link is not subject to any additional encryption beyond that which might be in place from using IBM MQ AMS.
- Configuring an RDQM queue manager for disaster recovery incurs an overhead because data must be replicated between the two RDQM nodes. Synchronous replication incurs a greater overhead than asynchronous replication: when synchronous replication is used, disk I/O operations are blocked until the data has been written to both nodes; when asynchronous replication is used, data has to be written only to the primary node before processing can continue (see the sketch after this list).
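For illustration, here is a sketch of creating the primary of a DR pair with asynchronous replication, which suits higher-latency links. The host name, addresses, and port are placeholder assumptions, and the flag set shown is based on the crtmqm DR options; confirm them against the crtmqm documentation for your IBM MQ release before use:
# Create a DR primary queue manager that replicates asynchronously to NODE2.
# All values shown are placeholders; -rt a selects asynchronous replication.
crtmqm -rr p -rt a -rl 192.0.2.1 -ri 192.0.2.2 -rn NODE2 -rp 7001 QM1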
User requirements for working with queue managers
To create, delete, or configure replicated data queue managers (RDQMs), you must either be the root user, or have a user ID that belongs to the mqm group and has been granted sudo authority for the control commands concerned. A user who belongs to the mqm group can view the state and status of a DR RDQM by using the status commands.
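As a sketch only, a sudoers fragment granting such authority might look like the following. It assumes a default installation under /opt/mqm and assumes that crtmqm, dltmqm, and rdqmdr are the commands that require sudo authority for DR work; confirm the exact command list for your release:
# /etc/sudoers.d/mqm-rdqm -- illustrative only; validate with visudo -c before use.
# Allow members of the mqm group to run the assumed RDQM control commands as root.
%mqm ALL=(root) NOPASSWD: /opt/mqm/bin/crtmqm, /opt/mqm/bin/dltmqm, /opt/mqm/bin/rdqmdr
Status commands such as rdqmstatus and dspmq can typically be run directly by a member of the mqm group, without sudo.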
Parent topic: RDQM disaster recovery