

Configuring and tuning the operating system on Linux

Use this topic when you are configuring IBM MQ on Linux systems.

Attention: The information in this topic applies only if the queue manager is started by the mqm user ID.

  • If any other user ID starts the queue manager, ensure that the NOFILE and NPROC entries, shown for mqm, are duplicated for that user ID.
  • NOFILE and NPROC limits set using a pluggable security module are not applied to queue managers started with systemd. To apply these limits to queue managers started with systemd, specify equivalent values in the unit file that contains the queue manager service configuration.


Shell interpreter

Ensure that the /bin/sh shell is a valid shell interpreter compatible with the Bourne shell, otherwise the post-installation configuration of IBM MQ does not complete successfully. If the shell was not installed by using RPM, you might see a prerequisites failure for the /bin/sh shell when you try to install IBM MQ. The failure occurs because the RPM tables do not recognize that a valid shell interpreter is installed. If the failure occurs, you can reinstall the /bin/sh shell by using RPM, or specify the RPM option --nodeps to disable dependency checking during the installation of IBM MQ.

Note: The --dbpath option is not supported when installing IBM MQ on Linux.


Swap space

During high load, IBM MQ can use virtual memory (swap space). If virtual memory becomes full, it can cause IBM MQ processes to fail or become unstable, affecting the system.

To prevent this situation, the IBM MQ administrator should ensure that the system has been allocated enough virtual memory, as specified in the operating system guidelines.


System V IPC kernel configuration

IBM MQ uses System V IPC resources, in particular shared memory. However, a limited number of semaphores are also used.

The minimum configuration for IBM MQ for these resources is as follows:

Name     Kernel name          Value      Increase  Description
shmmni   kernel.shmmni        4096       Yes       Maximum number of shared memory segments
shmmax   kernel.shmmax        268435456  No        Maximum size of a shared memory segment (bytes)
shmall   kernel.shmall        2097152    Yes       Maximum amount of shared memory (pages)
semmsl   kernel.sem           32         No        Maximum number of semaphores permitted per set
semmns   kernel.sem           4096       Yes       Maximum number of semaphores
semopm   kernel.sem           32         No        Maximum number of operations in a single semop call
semmni   kernel.sem           128        Yes       Maximum number of semaphore sets
thrmax   kernel.threads-max   32768     Yes       Maximum number of threads
pidmax   kernel.pid_max       32768     Yes       Maximum number of process identifiers
Notes:
  1. These values are sufficient to run two moderate-sized queue managers on the system. If you intend to run more than two queue managers, or if the queue managers are to process a significant workload, you might need to increase the values displayed as Yes in the Increase column.
  2. The kernel.sem values are contained within a single kernel parameter containing the four values in order: semmsl, semmns, semopm, semmni.

To view the current value of a parameter, log on as a user with root authority, and type:

sysctl Kernel-name
To add or alter these values, log on as a user with root authority. Open the file /etc/sysctl.conf with a text editor, then add or change the following entries to your chosen values:
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 268435456
kernel.sem = 32 4096 32 128
Then save and close the file.

To load these sysctl values immediately, enter the command sysctl -p.

If you do not issue the sysctl -p command, the new values are loaded when the system is rebooted.
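The same values can also be read directly from the /proc/sys interface, which is what the sysctl command uses; this is a sketch, and the numbers reported depend on your kernel's current settings:

```shell
# Read the current System V IPC limits from /proc/sys (the same values
# that "sysctl kernel.shmmni" and so on report); no root authority is
# needed to read them.
cat /proc/sys/kernel/shmmni
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmall
# kernel.sem holds the four values in order: semmsl semmns semopm semmni
cat /proc/sys/kernel/sem
```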

By default, the Linux kernel has a maximum process identifier that is also used for threads, and that might limit the allowed number of threads.

The operating system reports an error when the system lacks the necessary resources to create another thread, or when the system-imposed limit on the total number of threads in a process, {PTHREAD_THREADS_MAX}, would be exceeded.

For more information on kernel.threads-max and kernel.pid_max, see Resource shortage in IBM MQ queue manager when running a large number of clients.


TCP/IP configuration

To use keepalive for IBM MQ channels, you can configure the operation of TCP KEEPALIVE by using the following kernel parameters:
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_keepalive_probes
net.ipv4.tcp_keepalive_time
See Use the TCP/IP SO_KEEPALIVE option for further information.

To view the current value of a parameter, log on as a user with root authority, and type sysctl Kernel-name.

To add or alter these values, log on as a user with root authority. Open the file /etc/sysctl.conf with a text editor, then add or change the following entries to your chosen values.

To load these sysctl values immediately, enter the command sysctl -p.

If you do not issue the sysctl -p command, the new values are loaded when the system is rebooted.
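For example, the following /etc/sysctl.conf entries configure keepalive probing; the values shown here are illustrative, not IBM MQ recommendations:

```
# Idle time (seconds) before the first keepalive probe is sent
net.ipv4.tcp_keepalive_time = 300
# Interval (seconds) between keepalive probes
net.ipv4.tcp_keepalive_intvl = 30
# Number of unanswered probes before the connection is dropped
net.ipv4.tcp_keepalive_probes = 5
```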


Maximum open files

Attention: The term mqm user applies to the mqm user, and any other user ID that is used to start the queue manager.

The maximum number of open file handles in the system is controlled by the parameter fs.file-max.

The minimum value for this parameter for a system with two moderate-sized queue managers is 524288.

Note: If the operating system default is higher, you should leave the higher setting, or consult your operating system provider.

You are likely to need a higher value if you intend to run more than two queue managers, or the queue managers are to process a significant workload.

To view the current value of a parameter, log on as a user with root authority, and type sysctl fs.file-max.

To add or alter these values, log on as a user with root authority. Open the file /etc/sysctl.conf with a text editor, then add or change the following entry to your chosen value:
fs.file-max = 524288
Then save and close the file.

To load these sysctl values immediately, enter the command sysctl -p.

If you do not issue the sysctl -p command, the new values are loaded when the system is rebooted.

If you are using a pluggable security module such as PAM (Pluggable Authentication Modules), ensure that this module does not unduly restrict the number of open files for the mqm user. To report the maximum number of open file descriptors per process for the mqm user, log in as the mqm user and enter the following command:
ulimit -n
For a standard IBM MQ queue manager, set the nofile value for the mqm user to 10240 or more. To set the maximum number of open file descriptors for processes running under the mqm user, add the following information to the /etc/security/limits.conf file:
mqm       hard  nofile     10240
mqm       soft  nofile     10240

The pluggable security module limits are not applied to queue managers started with systemd. To start an IBM MQ queue manager with systemd, set LimitNOFILE to 10240 or more in the unit file that contains the queue manager service configuration.
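As a sketch, the LimitNOFILE setting belongs in the [Service] section of the unit file; the unit file name and path shown here are hypothetical examples:

```
# Hypothetical unit file, for example /etc/systemd/system/mqm-qmgr.service
[Service]
LimitNOFILE=10240
```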


Maximum processes

Attention: The term mqm user applies to the mqm user, and any other user ID that is used to start the queue manager.

A running IBM MQ queue manager consists of a number of threaded programs. Each connected application increases the number of threads running in the queue manager processes. It is normal for an operating system to limit the maximum number of processes that a user runs. The limit prevents operating system failures due to an individual user or subsystem creating too many processes. You must ensure that the maximum number of processes that the mqm user is allowed to run is sufficient. The number of processes must include the number of channels and applications that connect to the queue manager.

The following calculation is useful when determining the number of processes for the mqm user:
nproc = 2048 + clientConnections * 4 + qmgrChannels * 4 +
    localBindingConnections
where:

  • clientConnections is the maximum number of connections from clients on other machines connecting to queue managers on this machine.
  • qmgrChannels is the maximum number of running channels (as opposed to channel definitions) to other queue managers. This includes cluster channels, sender/receiver channels, and so on.
  • localBindingConnections is the number of local binding connections; it does not include application threads.

The following assumptions are made in this algorithm:

  • 2048 is a large enough contingency to cover the queue manager threads. This might need to be increased if a lot of other applications are running.
  • When setting nproc, take into account the maximum number of applications, connections, channels and queue managers that might be run on the machine in the future.
  • This algorithm takes a pessimistic view and the actual nproc needed might be slightly lower for later versions of IBM MQ and fastpath channels.
  • On Linux, each thread is implemented as a light-weight process (LWP) and each LWP is counted as one process against nproc.
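As a worked example of the formula above, using hypothetical workload figures (not recommendations):

```shell
#!/bin/sh
# Hypothetical workload figures; substitute your own expected maximums.
clientConnections=500
qmgrChannels=100
localBindingConnections=200

# nproc = 2048 + clientConnections * 4 + qmgrChannels * 4 + localBindingConnections
nproc=$((2048 + clientConnections * 4 + qmgrChannels * 4 + localBindingConnections))
echo "$nproc"   # 2048 + 2000 + 400 + 200 = 4648
```

A result of 4648 for this workload suggests that the 4096 shown in the limits.conf example below would need to be raised.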

You can use the PAM_limits security module to control the number of processes that users run. You can configure the maximum number of processes for the mqm user by adding the following entries to the /etc/security/limits.conf file:

mqm       hard  nproc      4096
mqm       soft  nproc      4096
For more details on how to configure the PAM_limits security module, enter the following command:
man limits.conf

The pluggable security module limits are not applied to queue managers started with systemd. To start an IBM MQ queue manager with systemd, set LimitNPROC to a suitable value in the unit file that contains the queue manager service configuration.
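For example, the following [Service] entry mirrors the limits.conf values above; the unit file location is a hypothetical example:

```
# Hypothetical unit file, for example /etc/systemd/system/mqm-qmgr.service
[Service]
LimitNPROC=4096
```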

You can check the system configuration by using the mqconfig command.

For more information on configuring the system, see How to configure UNIX and Linux systems for IBM MQ.


32-bit support on 64-bit Linux platforms

Some 64-bit Linux distributions no longer support 32-bit applications by default. For details of affected platforms, and guidance on enabling 32-bit applications to run on these platforms, see Hardware and software requirements on Linux systems.

Parent topic: Preparing the system on Linux



Last updated: 2020-10-04