Connect to a queue manager deployed in an OpenShift cluster

A set of configuration examples for connecting to a queue manager deployed in a Red Hat OpenShift cluster.


About this task

You need an OpenShift Route to connect an application to an IBM MQ queue manager from outside a Red Hat OpenShift cluster.

You must enable TLS on both the IBM MQ queue manager and the client application, because Server Name Indication (SNI) is available only in the TLS protocol. The Red Hat OpenShift Container Platform Router uses SNI for routing requests to the IBM MQ queue manager.

The required configuration of the OpenShift Route depends on the SNI behavior of your client application.

For the SNI header to be set, a CipherSpec or CipherSuite must be used for the TLS communication, and the protocol must be TLS 1.2 or higher.

The SNI is set to the MQ channel if any of the following conditions are met:

  • The IBM MQ C Client is V8 or later.
  • The Java/JMS Client is V9.1.1 or later, and the Java installation supports the javax.net.ssl.SNIHostName class.
  • The .NET Client is in unmanaged mode.

The SNI is set to the Hostname if a hostname is supplied as the connection name, and any of the following conditions are met:

  • The .NET Client is in managed mode.
  • The AMQP or XR client is used.
  • The Java/JMS Clients are used with AllowOutboundSNI set to NO.

The SNI is not set, and is left blank, under any of the following conditions:

  • The IBM MQ C Client is V7.5 or earlier.
  • The IBM MQ C Client is used with AllowOutboundSNI set to NO.
  • The Java/JMS Clients are used with a Java installation that does not support the javax.net.ssl.SNIHostName class.


Example

Client applications that set the SNI to the MQ channel require a new OpenShift Route for each channel you want to connect to. You must also use unique channel names across your Red Hat OpenShift Container Platform cluster, so that traffic can be routed to the correct queue manager.

To determine the required host name for each new OpenShift Route, you must map each channel name to an SNI address, as documented here: https://www.ibm.com/support/pages/ibm-websphere-mq-how-does-mq-provide-multiple-certificates-certlabl-capability
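The mapping can be sketched in Python. This is an illustration only, assuming the rule described on that support page: uppercase letters are folded to lowercase, digits pass through unchanged, every other character (including lowercase letters) is replaced by its two-digit hexadecimal code followed by a hyphen, and the suffix .chl.mq.ibm.com is appended. Verify the rule against the linked page for your MQ version:

```python
def channel_to_sni(channel_name: str) -> str:
    """Map an MQ channel name to an SNI address (illustrative sketch;
    the authoritative mapping rules are on the IBM support page linked above)."""
    out = []
    for ch in channel_name:
        if ch.isupper():
            out.append(ch.lower())        # uppercase folds to lowercase
        elif ch.isdigit():
            out.append(ch)                # digits pass through unchanged
        else:
            out.append(f"{ord(ch):02x}-")  # everything else: hex code plus hyphen
    return "".join(out) + ".chl.mq.ibm.com"
```

For example, under these assumptions a channel named TO.QMGR would map to the host name to2e-qmgr.chl.mq.ibm.com.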

You must then create a new OpenShift Route for each channel by applying the following YAML in your cluster:
  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: <provide a unique name for the Route>
    namespace: <the namespace of our MQ deployment>
  spec:
    host: <SNI address mapping for the channel>
    to:
      kind: Service
      name: <the name of the Kubernetes Service for the MQ deployment (for example "<Queue Manager Name>-ibm-mq")>
    port:
      targetPort: 1414
    tls:
      termination: passthrough
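If you have many channels, the per-channel Route manifest can be generated programmatically. The sketch below simply fills in the template above from a handful of inputs; all names used here are hypothetical placeholders:

```python
# Template mirroring the Route YAML shown above; the placeholders are
# filled in per channel.
ROUTE_TEMPLATE = """\
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: {name}
  namespace: {namespace}
spec:
  host: {sni_host}
  to:
    kind: Service
    name: {service}
  port:
    targetPort: 1414
  tls:
    termination: passthrough
"""

def route_manifest(name: str, namespace: str, sni_host: str, service: str) -> str:
    """Render a passthrough Route manifest for one channel's SNI address."""
    return ROUTE_TEMPLATE.format(name=name, namespace=namespace,
                                 sni_host=sni_host, service=service)
```

You could write the rendered manifest to a file and apply it with oc apply -f, once per channel.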

Configure your client application connection details

You can determine the host name to use for the client connection by running the following command:
oc get route <Name of hostname based Route (for example "<Queue Manager Name>-ibm-mq-qm")> 
-n <namespace of your MQ deployment> -o jsonpath="{.spec.host}"

The port for the client connection should be set to the port used by the Red Hat OpenShift Container Platform Router - normally 443.
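The host name and port above can be combined into a client channel definition table (CCDT) in JSON form, which TLS-enabled clients can consume via the MQCCDTURL environment variable. The sketch below builds a minimal entry; the channel, queue manager, and host names are hypothetical, and the JSON field names follow IBM's documented JSON CCDT format, which you should verify against your MQ client version:

```python
import json

def ccdt_entry(channel: str, queue_manager: str, route_host: str,
               port: int = 443, cipher: str = "ANY_TLS12_OR_HIGHER") -> dict:
    """Build a minimal JSON CCDT structure pointing at the OpenShift route.

    The port defaults to 443, the port normally used by the Red Hat
    OpenShift Container Platform Router.
    """
    return {
        "channel": [{
            "name": channel,
            "type": "clientConnection",
            "clientConnection": {
                "connection": [{"host": route_host, "port": port}],
                "queueManager": queue_manager,
            },
            "transmissionSecurity": {"cipherSpecification": cipher},
        }]
    }

# Example with hypothetical names; write the result to a file such as
# ccdt.json and point the client at it with MQCCDTURL.
doc = ccdt_entry("MY.SVRCONN", "QM1", "qm1-route.apps.example.com")
print(json.dumps(doc, indent=2))
```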

Parent topic: Use the IBM MQ Operator and certified containers


Last updated: 2020-10-04