Requirements for RDQM HA solution

You must meet a number of requirements before you configure the RDQM high availability (HA) group.


System requirements

Before configuring the RDQM HA group, you must complete some configuration on each of the three servers that are to be part of the HA group.

  • Each node requires a volume group named drbdpool. The storage for each replicated data queue manager is allocated as a separate logical volume per queue manager from this volume group. For the best performance, this volume group should be made up of one or more physical volumes that correspond to internal disk drives (preferably SSDs). You can create drbdpool before or after you install the RDQM HA solution, but you must create drbdpool before you actually create any RDQMs (see the sketch after this list). Check your volume group configuration by using the vgs command. The output should be similar to the following:
      VG       #PV #LV #SN Attr   VSize   VFree 
      drbdpool   1   9   0 wz--n- <16.00g <7.00g
      rhel       1   2   0 wz--n- <15.00g     0
    
    In particular, check that there is no c character in the sixth column of the attributes (that is, wz--nc). The c indicates that clustering is enabled; if it is, you must delete the volume group and recreate it without clustering.
  • After you have created the drbdpool volume group, do nothing else with it. IBM MQ manages the logical volumes created in drbdpool, and how and where they are mounted.
  • Each node requires up to three interfaces that are used for configuring the RDQM support:

    • A primary interface for Pacemaker to monitor the HA group.
    • An alternate interface for Pacemaker to monitor the HA group.
    • An interface for the synchronous data replication, which is known as the replication interface. This should have sufficient bandwidth to support the replication requirements given the expected workload of all of the replicated data queue managers running in the HA group.

    You can configure the HA group so that the same IP address is used for all three interfaces, a separate IP address is used for each interface, or the same IP address is used for the primary and alternate interfaces and a separate IP address for the replication interface.

    For maximum fault tolerance, these interfaces should be independent Network Interface Cards (NICs).

  • DRBD requires that each node in the HA group has a valid internet host name (the value that is returned by uname -n), as defined by RFC 952, as amended by RFC 1123.
  • If there is a firewall between the nodes in the HA group, then the firewall must allow traffic between the nodes on a range of ports. A sample script is provided, /opt/mqm/samp/rdqm/firewalld/configure.sh, that opens up the necessary ports if you are running the standard firewall in RHEL. You must run the script as root. If you are using some other firewall, examine the service definitions /usr/lib/firewalld/services/rdqm* to see which ports need to be opened. The script adds the following permanent firewalld service rules for DRBD, Pacemaker, and IBM MQ (a sketch for applying them manually follows this list):

    • MQ_INSTALLATION_PATH/samp/rdqm/firewalld/services/rdqm-drbd.xml allows TCP ports 7000-7100.
    • MQ_INSTALLATION_PATH/samp/rdqm/firewalld/services/rdqm-pacemaker.xml allows UDP ports 5404-5407.
    • MQ_INSTALLATION_PATH/samp/rdqm/firewalld/services/rdqm-mq.xml allows TCP port 1414 (you must edit the script if you require a different port).

  • If the system uses SELinux in a mode other than permissive, you must run the following command:
    semanage permissive -a drbd_t
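
As a minimal sketch of creating the drbdpool volume group, the following commands might be used (the device name /dev/sdb is an assumption; substitute the internal disk drives on your own servers). Note that vgcreate does not enable clustering by default, so the resulting attributes show wz--n- rather than wz--nc:

    # As root: create a physical volume on an internal disk (device name is an example)
    pvcreate /dev/sdb
    # Create the non-clustered drbdpool volume group from that physical volume
    vgcreate drbdpool /dev/sdb
    # Verify the result; the Attr column should read wz--n-
    vgs drbdpool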
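If you prefer to apply the firewall rules yourself rather than run the sample script, a sketch along the following lines enables the same permanent services. This assumes the rdqm service definitions are already installed under /usr/lib/firewalld/services and that the service names match their XML file names:

    # As root: enable the RDQM firewalld services for DRBD, Pacemaker, and IBM MQ
    firewall-cmd --permanent --add-service=rdqm-drbd
    firewall-cmd --permanent --add-service=rdqm-pacemaker
    firewall-cmd --permanent --add-service=rdqm-mq
    # Reload firewalld so that the permanent rules take effect
    firewall-cmd --reload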


Network requirements

It is recommended that you locate the three nodes in the RDQM HA group in the same data center.

If you do choose to locate the nodes in different data centers, then be aware of the following limitations:

  • Performance degrades rapidly with increasing latency between data centers. Although IBM will support a latency of up to 5 ms, you might find that the application performance cannot tolerate more than 1 to 2 ms of latency.
  • The data sent across the replication link is not subject to any additional encryption beyond any that might already be in place from your use of IBM MQ AMS.

You can configure a floating IP address to enable a client to use the same IP address for a replicated data queue manager (RDQM) regardless of which node in the HA group it is running on. The floating address binds to a named physical interface on the primary node for the RDQM. If the RDQM fails over and a different node becomes the primary, the floating IP is bound to an interface of the same name on the new primary. The physical interfaces on the three nodes must all have the same name, and belong to the same subnet as the floating IP address.
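
For example, the following sketch binds a floating IP address to a replicated data queue manager by using the rdqmint command (the queue manager name QM1, the address 192.168.4.1, and the interface name eth1 are assumptions for illustration; check the command reference for the exact syntax in your release):

    # Add a floating IP address for RDQM QM1 on the interface named eth1
    rdqmint -m QM1 -a -f 192.168.4.1 -l eth1
    # Confirm the queue manager status, including the floating IP address
    rdqmstatus -m QM1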


User requirements for configuring the cluster

You can configure the RDQM HA group as the root user. If you do not want to configure as root, you can configure as a user in the mqm group instead. For an mqm user to configure the RDQM cluster, the following requirements must be met:

  • The mqm user must be able to use sudo to run commands on each of the three servers that make up the RDQM HA group.
  • If the mqm user can use SSH without a password to run commands on each of the three servers that make up the RDQM HA group, then the user needs to run commands on only one of the servers.
  • If you configure password-less SSH for the mqm user, that user must have the same UID on all three servers.

You must configure sudo so that the mqm user can run the following commands with root authority:

/opt/mqm/bin/crtmqm
/opt/mqm/bin/dltmqm
/opt/mqm/bin/rdqmadm
/opt/mqm/bin/rdqmstatus
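
For example, a sudoers rule along the following lines grants the mqm user passwordless root authority for exactly these commands (an illustrative sketch, typically placed in a file under /etc/sudoers.d; adapt it to your own security policy):

    # /etc/sudoers.d/mqm - illustrative entry for the mqm user
    mqm ALL=(root) NOPASSWD: /opt/mqm/bin/crtmqm, /opt/mqm/bin/dltmqm, /opt/mqm/bin/rdqmadm, /opt/mqm/bin/rdqmstatus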


User requirements for working with queue managers

To create, delete, or configure replicated data queue managers (RDQMs), you must use a user ID that belongs to both the mqm and haclient groups (the haclient group is created during installation of Pacemaker).

  • Set up passwordless SSH
    You can set up passwordless SSH so that you need to issue configuration commands on only one node in the HA group.
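
    A minimal sketch of setting up passwordless SSH for the mqm user follows (the host names node2 and node3 are assumptions; repeat the copy step for each of the other nodes in the HA group):

        # As the mqm user: generate a key pair with no passphrase
        ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
        # Copy the public key to the other two nodes in the HA group
        ssh-copy-id mqm@node2
        ssh-copy-id mqm@node3
        # Verify that a remote command runs without a password prompt
        ssh mqm@node2 uname -n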

Parent topic: RDQM high availability