
What's New - WebSphere MQ v7.5

  1. Introduction
  2. Introduction to WebSphere MQ Telemetry
  3. WebSphere MQ client for HP Integrity NonStop Server
  4. New function in maintenance level upgrades
  5. Behavior that has changed between version 7.1 and version 7.5
  6. MQI client: MQPUT1 sync point behavior change
  7. Publish/Subscribe Application migration: Register
  8. Publish/Subscribe Application migration: Request Update replaced
  9. Publish/Subscribe:
  10. Publish/Subscribe: Adding a stream
  11. Publish/Subscribe:
  12. Publish/Subscribe:
  13. Publish/Subscribe:
  14. Publish/Subscribe:
  15. Publish/Subscribe:
  16. Publish/Subscribe: Stopping queued publish/subscribe
  17. Publish/Subscribe: Configuration settings
  18. Publish/Subscribe:
  19. Publish/Subscribe: Local and global publications and subscriptions
  20. Publish/Subscribe: Mapping an alias queue to a topic
  21. Publish/Subscribe:
  22. Publish/Subscribe:
  23. Publish/Subscribe: Multi-topic publications and subscriptions
  24. Publish/Subscribe:
  25. Publish/Subscribe: Streams
  26. Publish/Subscribe: Subscription names
  27. Publish/Subscribe: Traditional identity
  28. Publish/Subscribe:
  29. Publish/Subscribe:
  30. Queue manager logs: Default sizes increased
  31. SSL or TLS Authority
  32. SSLPEER and SSLCERTI changes
  33. Preferred alternatives: Consider replacing these version 6.0 functions with their version 7.5 alternatives
  34. WebSphere MQ Explorer changes
  35. AIX: Shared objects
  36. AIX: /usr/lpp/mqm symbolic link removed
  37. AIX, HP-UX, and Solaris: Building applications for TXSeries
  38. UNIX and Linux: crtmqlnk and dltmqlnk removed
  39. UNIX and Linux: Message catalogs moved
  40. UNIX and Linux: MQ services and triggered
  41. UNIX and Linux: ps -ef |
  42. UNIX and Linux: /usr symbolic
  43. Windows: amqmsrvn.exe process removed
  44. Windows: IgnoredErrorCodes
  45. Windows: Installation and infrastructure
  46. Windows: Local queue performance monitoring
  47. Windows: Logon as a service required
  48. Windows: Migration of registry
  49. Windows: MSCS restriction with multiple installations
  50. Windows: Relocation
  51. Windows: SPX support on Windows Vista
  52. Windows: Task manager interpretation
  53. Windows: WebSphere MQ Installation affected by User
  54. Windows: WebSphere MQ Active
  55. WebSphere MQ version 7.5, IBM i and z/OS


Introduction

Use WebSphere MQ to enable applications to communicate at different times and in many diverse computing environments. WebSphere MQ sends messages across networks of diverse components. Your application connects to WebSphere MQ to send or receive a message. WebSphere MQ handles the different processors, operating systems, subsystems, and communication protocols it encounters in transferring the message. If a connection or a processor is temporarily unavailable, WebSphere MQ queues the message and forwards it when the connection is back online.

WebSphere MQ is messaging and queuing middleware, with point-to-point, publish/subscribe, and file transfer modes of operation. Applications can publish messages to many subscribers over multicast.

Messaging: Programs communicate by sending each other data in messages rather than by calling each other directly.
Queuing: Messages are placed on queues, so that programs can run independently of each other, at different speeds and times, in different locations, and without having a direct connection between them.
Point-to-point: Applications send messages to a queue, or to a list of queues. The sender must know the name of the destination, but not where it is.
Publish/subscribe: Applications publish a message on a topic, such as the result of a game played by a team. WebSphere MQ sends copies of the message to applications that subscribe to the results topic. They receive the message with the results of games played by the team. The publisher does not know the names of the subscribers, or where they are.
Multicast: Multicast is an efficient form of publish/subscribe messaging that scales to many subscribers. It transfers the effort of sending a copy of a publication to each subscriber from WebSphere MQ to the network. Once a path for the publication is established between the publisher and the subscriber, WebSphere MQ is not involved in forwarding the publication.
File transfer: Files are transferred in messages. WebSphere MQ File Transfer Edition manages the transfer of files and the administration to set up automated transfers and log the results. You can integrate the file transfer with other file transfer systems, with WebSphere MQ messaging, and with the web.
Telemetry: WebSphere MQ Telemetry is messaging for devices. WebSphere MQ connects device and application messaging together. It connects the internet, applications, services, and decision makers with networks of instrumented devices. WebSphere MQ Telemetry has an efficient messaging protocol that connects a large number of devices over a network. The messaging protocol is published, so that it can be incorporated into devices. You can also develop device programs with one of the published programming interfaces for the protocol.

WebSphere MQ sends and receives data between your applications, and over networks.

Message delivery is assured and decoupled from the application. Assured, because WebSphere MQ exchanges messages transactionally, and decoupled, because applications do not have to check that messages they sent are delivered safely.

You can secure message delivery between queue managers with SSL/TLS. With Advanced Message Security (AMS), you can encrypt and sign messages between being put by one application and retrieved by another.

Create and manage WebSphere MQ with the WebSphere MQ Explorer GUI or by running commands from a command window or application.

Send and receive WebSphere MQ messages from browsers with the HTTP protocol.

An administrator creates and starts a queue manager with commands. Subsequently, the queue manager is usually started automatically when the operating system boots. Applications, and other queue managers can then connect to it to send and receive messages.

An application or administrator creates a queue or a topic. Queues and topics are objects that are owned and stored by a queue manager.

When your application wants to transfer data to another application, it puts the data into a message. It puts the message onto a queue, or publishes the message to a topic. There are three main ways that the message can be retrieved:

MQ channels connect one queue manager to another over a network. You can create MQ channels yourself, or a queue manager in a cluster of queue managers creates MQ channels when they are needed.

You can have many queues and topics on one queue manager.

You can have more than one queue manager on one computer.

An application can run on the same computer as the queue manager, or on a different one. If it runs on the same computer, it is a WebSphere MQ server application. If it runs on a different computer, it is a WebSphere MQ client application. Whether it is a WebSphere MQ client or server application makes almost no difference to the application. You can build a client/server application with WebSphere MQ clients or servers.


What tools and resources come with WebSphere MQ?

Control commands, which are run from the command line. You create, start, and stop queue managers with the control commands. You also run WebSphere MQ administrative and problem determination programs with the control commands.
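
For example, the basic lifecycle of a queue manager is handled with control commands; the queue manager name QM1 is a placeholder:

 crtmqm QM1    # create a queue manager
 strmqm QM1    # start it
 endmqm QM1    # stop it
 dltmqm QM1    # delete it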

WebSphere MQ script commands (MQSC), which are run by an interpreter. Create queues and topics, configure, and administer WebSphere MQ with the commands. Edit the commands in a file, and pass the file to the runmqsc program to interpret them. You can also run the interpreter on one queue manager, which sends the commands to a different computer to administer a different queue manager.
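
For example, a minimal sketch of passing a file of MQSC commands to the interpreter; the queue manager name and file name are placeholders:

 runmqsc QM1 < define_objects.mqsc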

The Programmable Command Format (PCF) commands, which you call in your own applications to administer WebSphere MQ. The PCF commands have the same capability as the script commands, but they are easier to program.

Sample programs.

On Windows, and on Linux x86 and x86-64 platforms, you can run the following utilities:


Introduction to WebSphere MQ Telemetry

People, businesses, and governments increasingly want to use WebSphere MQ Telemetry to interact more smartly with the environment we live and work in. WebSphere MQ Telemetry connects all kinds of devices to the internet and to the enterprise, and reduces the costs of building applications for smart devices.

The following examples illustrate some typical uses of WebSphere MQ Telemetry:

  • An MQTT message that contains energy usage data is sent to a service provider.

  • A telemetry application sends control commands that are based on analysis of energy usage data.


What is WebSphere MQ Telemetry?


What can it do for me?


How do I use it?


How does it work?

WebSphere MQ Telemetry replaces the SCADA nodes that were withdrawn in version 7 of WebSphere Message Broker and runs on Windows, Linux, and AIX. "WebSphere MQ Telemetry and WebSphere Message Broker version 7.0" provides information to help you migrate applications from using the SCADA nodes in WebSphere Message Broker V6. Telemetry applications using WebSphere Message Broker version 7 subscribe to topics that are common to MQTT clients. They receive publications from MQTT clients using MQInput nodes and publish to MQTT clients using publication nodes.


WebSphere MQ Managed File Transfer

WebSphere MQ Managed File Transfer uses WebSphere MQ to transfer files between queue managers. You can extend its reach to workstations and servers that do not have a queue manager. You can extend it further by using file transfer agents and Apache Ant, and by integrating it with IBM Sterling Connect:Direct, web gateways, and SFTP, FTP, or FTPS protocol servers.

With WebSphere MQ Managed File Transfer, you can automate, control, secure, and audit the transfer of files.


WebSphere MQ Advanced Message Security

IBM WebSphere MQ Advanced Message Security (AMS) is a separately installed component, which is separately charged. It provides a high level of protection for sensitive data that is flowing through the WebSphere MQ network. You do not need to modify existing applications to take advantage of AMS.


Message Channel Agent (MCA) interception

The MCA interception feature allows a queue manager running under IBM WebSphere MQ, with a licensed installation of WebSphere MQ Advanced Message Security, to selectively enable policies to be applied for server-connection channels. MCA interception allows clients that remain outside WebSphere MQ AMS to still connect to a queue manager and have their messages encrypted and decrypted.


Multiple cluster transmission queues

You can change the new queue manager attribute DEFCLXQ to assign a different cluster transmission queue to each cluster-sender channel. Messages to be forwarded by each cluster-sender channel are then placed on separate cluster transmission queues. You can also configure cluster transmission queues manually by setting the new queue attribute CLCHNAME. You can decide which cluster-sender channels share transmission queues, which have separate transmission queues, and which use the default cluster transmission queue. The change assists system administrators who manage the transfer of messages between clustered queue managers.
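
The following MQSC sketch shows both approaches; the queue manager, queue, and channel names are illustrative assumptions:

 echo "ALTER QMGR DEFCLXQ(CHANNEL)" | runmqsc QM1    # automatic: one transmission queue per cluster-sender channel
 echo "DEFINE QLOCAL(XMITQ.CLUSTER1) USAGE(XMITQ) CLCHNAME('CLUSTER1.*')" | runmqsc QM1    # manual: transmission queue for matching cluster-sender channels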


Extended transactional functionality is now a part of the core client

Extended transactional functionality is now incorporated into the WebSphere MQ core client. You do not need to purchase a separate extended transactional client license, or to install a separate Extended Transactional Client component.


Identifying a connection to a queue manager by setting an application name

An application can set a name that identifies its connection to the queue manager. Display the application name with the DISPLAY CONN command. The name is returned in the APPLTAG field. You can also display the name in the WebSphere MQ Explorer Application Connections window. The field is called App name. You can set the name of an application connection on all platforms, except z/OS.
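
For example, a minimal sketch of displaying the application name with MQSC; QM1 is a placeholder queue manager name:

 echo "DISPLAY CONN(*) TYPE(CONN) APPLTAG" | runmqsc QM1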


Certificate validation policies

On UNIX, Linux, and Windows, you can specify how strictly the certificate chain validation conforms to the RFC 5280 industry security standard.


More transactional visibility

The dspmqtrn command has two new parameters, -a and -q, which provide more information when an asynchronous rollback occurs. Two new messages, AMQ7486 and AMQ7487, provide information about the transaction that is being rolled back, and whether the transaction is associated with a connection.
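
For example, a sketch of invoking the command with the new parameters; QM1 is a placeholder, and the output depends on the transactions in progress:

 dspmqtrn -m QM1 -a    # new parameter described above
 dspmqtrn -m QM1 -q    # new parameter described above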


WebSphere MQ client for HP Integrity NonStop Server

WebSphere MQ now supports the new client for the HP Integrity NonStop Server platform. The new client is released in SupportPac MAT1, WebSphere MQ Clients for NSS.


HP Integrity NonStop Server client commands

The following commands are applicable to the WebSphere MQ client for HP Integrity NonStop Server OSS and Guardian environments:

The following command is applicable to the WebSphere MQ client for HP Integrity NonStop Server OSS environment:

New Product Identifier, MQNC, added to the DISPLAY CHSTATUS command


New function in maintenance level upgrades

On platforms other than z/OS, IBM might introduce new functions between releases in maintenance level upgrades such as fix packs. A maintenance level upgrade including new function increases the maximum command level of an installation. When you apply the maintenance level upgrade, the installation supports the new command level.

As fix packs that change the maximum command level are released, the function that they introduce and the command level introduced will be documented here.


Behavior that has changed between version 7.1 and version 7.5

Between version 7.1 and version 7.5 some aspects of WebSphere MQ have changed that might affect existing applications, administrative scripts, or management procedures.

The following list of changes is copied from the migration guide. The changes might affect the operation of existing applications, administrative scripts, and management procedures. New functions, and changes that do not affect existing applications, administrative procedures, and administrative scripts, are not listed here.

Review the list of changes carefully before upgrading queue managers to version 7.5. Decide whether you must plan to make changes to existing applications, scripts, and procedures before starting to migrate systems to WebSphere MQ version 7.5.


Display channel and cluster status: Switching

A cluster-sender channel that is switching its configuration to a different cluster transmission queue has a new channel state: Switching.

Existing application programs are not affected by the new state.

System management programs that monitor channel or cluster status might receive the new state as a result of an inquiry.

The state is set during the short interval while the channel modifies the destination transmission queue that messages are stored on. Before the switch, messages are stored on the previously associated transmission queue. After the switch, messages are stored on the newly configured transmission queue. The channel enters the switching state if a cluster-sender channel is starting, a configuration change is required, and the conditions for starting the switch are met.
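
A monitoring script that inquires on channel status with MQSC, for example, can now see the new state in the STATUS field; QM1 is a placeholder queue manager name:

 echo "DISPLAY CHSTATUS(*) STATUS" | runmqsc QM1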


Command level changed to 750

The command level on platforms other than z/OS and IBM i changes to 750 in version 7.5. z/OS and IBM i are at command level 710.


Change in behavior of the endmqm command

Issuing an endmqm command and dspmq command immediately after each other might return misleading status.

When issuing an endmqm -c or endmqm -w command, in the unlikely event that a dspmq command is issued in the small timeframe between the applications disconnecting and the queue manager actually stopping, the dspmq command might report the status as Ending immediately, even though a controlled shutdown is actually happening.
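
For example, assuming a queue manager named QM1:

 endmqm -c QM1    # request a controlled shutdown
 dspmq -m QM1     # issued immediately afterwards, the status might briefly read "Ending immediately"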


Behavior that has changed between version 7.0.1 and version 7.5

Between version 7.0.1 and version 7.5 some aspects of WebSphere MQ have changed that might affect existing applications, administrative scripts, or management procedures.

The following list of changes is copied from the migration guide, Migrating and upgrading WebSphere MQ. The changes might affect the operation of existing applications, administrative scripts, and management procedures. New functions, and changes that do not affect existing applications, administrative procedures, and administrative scripts, are not listed here.

Review the list of changes carefully before upgrading queue managers to version 7.5. Decide whether you must plan to make changes to existing applications, scripts, and procedures before starting to migrate systems to WebSphere MQ version 7.5.


Channel authentication

When you migrate a queue manager to version 7.5, channel authentication using channel authentication records is disabled. Channels continue to work as before. If you create a queue manager in version 7.5, channel authentication using channel authentication records is enabled, but with minimal additional checking. Some channels might fail to start.


Migrated queue managers

Channel authentication is disabled for migrated queue managers.

To start using channel authentication records you must run this MQSC command:

 ALTER QMGR CHLAUTH(ENABLED)


New queue managers

Channel authentication is enabled for new queue managers.

You might want to connect existing queue managers or WebSphere MQ MQI client applications to a newly created queue manager. Most connections work without specifying any channel authentication records. The following exceptions prevent privileged access to the queue manager, and access to system channels.

  1. Privileged user IDs asserted by a client-connection channel are blocked by means of the special value *MQADMIN.

     SET CHLAUTH('*') TYPE(BLOCKUSER) USERLIST('*MQADMIN') +
    DESCR('Default rule to disallow privileged users')
    

  2. Except for the channel used by WebSphere MQ Explorer, all SYSTEM.* channels are blocked.

     SET CHLAUTH('SYSTEM.*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) +
    DESCR('Default rule to disable all SYSTEM channels')
    
    SET CHLAUTH(SYSTEM.ADMIN.SVRCONN) TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) +
    DESCR('Default rule to allow MQ Explorer access')
    

Note: This behavior is the default for all new WebSphere MQ version 7.5 queue managers on startup.

If you must work around the exceptions, you can run MQSC commands to add rules that allow the channels blocked by the default rules to connect, or you can disable channel authentication checking:

 ALTER QMGR CHLAUTH(DISABLED)


Change in behavior of the endmqm command

Issuing an endmqm command and dspmq command immediately after each other might return misleading status.

When issuing an endmqm -c or endmqm -w command, in the unlikely event that a dspmq command is issued in the small timeframe between the applications disconnecting and the queue manager actually stopping, the dspmq command might report the status as Ending immediately, even though a controlled shutdown is actually happening.



Changes to cluster error recovery on servers other than z/OS

Before version 7.5, if a queue manager detected a problem with the local repository manager managing a cluster, it updated the error log. In some cases, it then stopped managing clusters. The queue manager continued to exchange application messages with a cluster, relying on its increasingly out-of-date cache of cluster definitions. From version 7.5 onwards, the queue manager reruns operations that caused problems, until the problems are resolved. If, after five days, the problems are not resolved, the queue manager shuts down to prevent the cache becoming more out of date. As the cache becomes more out of date, it causes a greater number of problems. The changed behavior regarding cluster errors in version 7.5 does not apply to z/OS.

Every aspect of cluster management is handled for a queue manager by the local repository manager process, amqrrmfa. The process runs on all queue managers, even if there are no cluster definitions.

Before version 7.5, if the queue manager detected a problem in the local repository manager, it stopped the repository manager after a short interval. The queue manager kept running, processing application messages and requests to open queues, and publish or subscribe to topics.

With the repository manager stopped, the cache of cluster definitions available to the queue manager became more out of date. Over time, messages were routed to the wrong destination, and applications failed. Applications failed attempting to open cluster queues or publication topics that had not been propagated to the local queue manager.

Unless an administrator checked for repository messages in the error log, the administrator might not realize the cluster configuration had problems. If the failure was not recognized over an even longer time, and the queue manager did not renew its cluster membership, even more problems occurred. The instability affected all queue managers in the cluster, and the cluster appeared unstable.

From version 7.5 onwards, WebSphere MQ takes a different approach to cluster error handling. Rather than stop the repository manager and keep going without it, the repository manager reruns failed operations. If the queue manager detects a problem with the repository manager, it follows one of two courses of action.

  1. If the error does not compromise the operation of the queue manager, the queue manager writes a message to the error log. It reruns the failed operation every 10 minutes until the operation succeeds. By default, you have five days to deal with the error; failing which, the queue manager writes a message to the error log, and shuts down. You can postpone the five day shutdown.
  2. If the error compromises the operation of the queue manager, the queue manager writes a message to the error log, and shuts down immediately.

An error that compromises the operation of the queue manager is an error that the queue manager has not been able to diagnose, or an error that might have unforeseeable consequences. This type of error often results in the queue manager writing an FFST file. Errors that compromise the operation of the queue manager might be caused by a bug in WebSphere MQ, or by an administrator, or a program, doing something unexpected, such as ending a WebSphere MQ process.

The point of the change in error recovery behavior is to limit the time the queue manager continues to run with a growing number of inconsistent cluster definitions. As the number of inconsistencies in cluster definitions grows, the chance of abnormal application behavior grows with it.

The default choice of shutting down the queue manager after five days is a compromise between limiting the number of inconsistencies and keeping the queue manager available until the problems are detected and resolved.

You can postpone the shutdown indefinitely, while you fix the problem or wait for a planned queue manager shutdown. The five-day stay keeps the queue manager running through a long weekend, giving you time to react to any problems or to prolong the time before restarting the queue manager.


Corrective actions

You have a choice of actions to deal with the problems of cluster error recovery. The first choice is to monitor and fix the problem, the second is to monitor and postpone fixing the problem, and the final choice is to continue to manage cluster error recovery as in releases before version 7.5.

  1. Monitor the queue manager error log for the error messages AMQ9448 and AMQ5008, and fix the problem.

    • AMQ9448 indicates that the repository manager has returned an error after running a command. This error marks the start of trying the command again every 10 minutes, and eventually stopping the queue manager after five days, unless you postpone the shutdown.

    • AMQ5008 indicates that the queue manager was stopped because a WebSphere MQ process is missing. AMQ5008 results from the repository manager stopping after five days. If the repository manager stops, the queue manager stops.

  2. Monitor the queue manager error log for the error message AMQ9448, and postpone fixing the problem.

    • If you disable getting messages from SYSTEM.CLUSTER.COMMAND.QUEUE, the repository manager stops trying to run commands, and continues indefinitely without processing any work. However, any handles that the repository manager holds to queues are released. Because the repository manager does not stop, the queue manager is not stopped after five days.
    • Run an MQSC command to disable getting messages from SYSTEM.CLUSTER.COMMAND.QUEUE:
    • ALTER QLOCAL(SYSTEM.CLUSTER.COMMAND.QUEUE) GET(DISABLED)

  3. Revert the queue manager to the same cluster error recovery behavior as before version 7.5.

    • You can set a queue manager tuning parameter to keep the queue manager running if the repository manager stops.
    • The tuning parameter is TolerateRepositoryFailure, in the TuningParameters stanza of the qm.ini file. To prevent the queue manager stopping if the repository manager stops, set TolerateRepositoryFailure to TRUE.
    • Restart the queue manager to enable the TolerateRepositoryFailure option.
    • If a cluster error has occurred that prevents the repository manager starting successfully, and hence the queue manager from starting, set TolerateRepositoryFailure to TRUE to start the queue manager without the repository manager.


Special consideration

Before version 7.5, some administrators managing queue managers that were not part of a cluster stopped the amqrrmfa process. Stopping amqrrmfa did not affect the queue manager.

Stopping amqrrmfa in version 7.5 causes the queue manager to stop, because it is regarded as a queue manager failure. You must not stop the amqrrmfa process in version 7.5, unless you set the queue manager tuning parameter TolerateRepositoryFailure to TRUE.


Example

Figure 1. Set TolerateRepositoryFailure to TRUE in qm.ini

 TuningParameters:
    TolerateRepositoryFailure=TRUE


Command level changed to 750

The command level on platforms other than z/OS and IBM i changes to 750 in version 7.5. z/OS and IBM i are at command level 710.



Connect to multiple queue managers and use MQCNO_FASTPATH_BINDING

Applications that connect to queue managers using the MQCNO_FASTPATH_BINDING binding option might fail with an error and reason code MQRC_FASTPATH_NOT_AVAILABLE.

An application can connect to multiple queue managers from the same process. In releases earlier than version 7.5, an application can set any one of the connections to MQCNO_FASTPATH_BINDING. In version 7.5, only the first connection can be set to MQCNO_FASTPATH_BINDING. See Fast path for the complete set of rules.

To assist with migration, you can set a new environment variable, AMQ_SINGLE_INSTALLATION. The variable reinstates the same behavior as in earlier releases, but prevents an application connecting to queue managers associated with other installations in the same process.
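
A minimal sketch of setting the variable before starting the application; the value shown is an assumption, as the text above only requires the variable to be set:

 export AMQ_SINGLE_INSTALLATION=1    # assumed value; the presence of the variable is what matters here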


Fast path

On a server with multiple installations, applications using a fast path connection to WebSphere MQ version 7.1 or later must follow these rules:

  1. The queue manager must be associated with the same installation as the one from which the application loaded the WebSphere MQ run time libraries. The application must not use a fast path connection to a queue manager associated with a different installation. An attempt to make the connection results in an error, and reason code MQRC_INSTALLATION_MISMATCH.
  2. Connecting non-fast path to a queue manager associated with the same installation as the one from which the application has loaded the WebSphere MQ run time libraries prevents the application connecting fast path, unless either of these conditions is true:

    • The application makes its first connection to a queue manager associated with the same installation a fast path connection.
    • The environment variable, AMQ_SINGLE_INSTALLATION is set.

  3. Connecting non-fast path to a queue manager associated with a version 7.1 or later installation has no effect on whether an application can connect fast path.
  4. You cannot combine connecting to a queue manager associated with a version 7.0.1 installation and connecting fast path to a queue manager associated with a version 7.1, or later installation.

With AMQ_SINGLE_INSTALLATION set, you can make any connection to a queue manager a fast path connection. Otherwise almost the same restrictions apply:


Custom scripts

In WebSphere MQ for Windows Version 7.5, custom scripts that install packages can fail, because the packages have been renamed.

Custom scripts that install WebSphere MQ can also be incomplete, because packages have been added or removed.


Changes to data types

A number of data types have changed between WebSphere MQ version 7.0.1 and WebSphere MQ version 7.5, and new data types have been added. This topic lists the changes for data types that have a new current version in version 7.5.

The current version of a data type is incremented if the length of a data type is extended by adding new fields. The addition of new constants to the values that can be set in a data type does not result in a change to the current version value. The following table lists the data types that have new versions and the fields that were added.

New fields added to existing data types

Data type                          New version        New fields
Channel definition                 MQCD_VERSION_10    BatchDataLimit (MQLONG), DefReconnect (MQLONG), UseDLQ (MQLONG)
Channel exit                       MQCXP_VERSION_8    MCAUserSource (MQLONG), pEntryPoints (PMQIEP)
Data conversion exit               MQDXP_VERSION_2    pEntryPoints (PMQIEP)
Pre-connect exit                   MQNXP_VERSION_2    pEntryPoints (PMQIEP)
Publish exit publication context   MQPBC_VERSION_2    MsgDescPtr (PMQMD)
Publish exit                       MQPSXP_VERSION_2   pEntryPoints (PMQIEP)
Cluster workload exit              MQWXP_VERSION_4    pEntryPoints (PMQIEP)


Default transmission queue restriction

The information center in previous versions of WebSphere MQ warned about defining the default transmission queue as SYSTEM.CLUSTER.TRANSMIT.QUEUE. In version 7.5, any attempt to set or use a default transmission queue that is defined as SYSTEM.CLUSTER.TRANSMIT.QUEUE results in an error.

In earlier versions of WebSphere MQ no error was reported when defining the default transmission queue as SYSTEM.CLUSTER.TRANSMIT.QUEUE. MQOPEN or MQPUT1 MQI calls that resulted in referencing the default transmission queue did not return an error. Applications might have continued working and failed later on. The reason for the failure was hard to diagnose.

The change ensures that any attempt to set the default transmission queue to SYSTEM.CLUSTER.TRANSMIT.QUEUE, or use a default transmission queue set to SYSTEM.CLUSTER.TRANSMIT.QUEUE, is immediately reported as an error. The queue name supplied is not valid for DEFXMITQ.
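
For example, this MQSC attempt, which earlier releases accepted silently, is now rejected with an error; QM1 is a placeholder queue manager name:

 echo "ALTER QMGR DEFXMITQ(SYSTEM.CLUSTER.TRANSMIT.QUEUE)" | runmqsc QM1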


Display channel and cluster status: Switching

A cluster-sender channel that is switching its configuration to a different cluster transmission queue has a new channel state: Switching.

Existing application programs are not affected by the new state.

System management programs that monitor channel or cluster status might receive the new state as a result of an inquiry.

The state is set during the short interval while the channel modifies the destination transmission queue that messages are stored on. Before the switch, messages are stored on the previously associated transmission queue. After the switch, messages are stored on the newly configured transmission queue. The channel enters the switching state if a cluster-sender channel is starting, a configuration change is required, and the conditions for starting the switch are met.


dspmqver

New types of information are displayed by dspmqver to support multiple installations. The changes might affect existing administrative scripts you have written to manage WebSphere MQ.

The changes in output from dspmqver that might affect existing command scripts that you have written are twofold:

  1. Version 7.5 has extra -f field options. If you do not specify a -f option, output from all the options is displayed. To restrict the output to the same information that was displayed in earlier releases, set the -f option to a value that was present in the earlier release. Compare the output for dspmqver in Figure 1 and Figure 2 with the output for dspmqver -f 15 in Figure 3.

    Figure 1. Default dspmqver options in WebSphere MQ version 7.0.1

     dspmqver 
    

 Name:        WebSphere MQ
    Version:     7.0.1.6
    CMVC level:  p701-L110705
    BuildType:   IKAP - (Production)
    

    Figure 2. Default dspmqver options in WebSphere MQ version 7.5

     dspmqver 
    

 Name:        WebSphere MQ
    Version:     7.1.0.0
    Level:       p000-L110624
    BuildType:   IKAP - (Production)
    Platform:    WebSphere MQ for Windows
    Mode:        32-bit
    O/S:         Windows XP, Build 2600: SP3
    InstName:    110705
    InstDesc:    July 5 2011
    InstPath:    C:\Program Files\IBM\WebSphere MQ_110705
    DataPath:    C:\Program Files\IBM\WebSphere MQ
    Primary:     No
    MaxCmdLevel: 710
    
    Note there are a number (1) of other installations, 
    use the '-i' parameter to display them.
    

    Figure 3. dspmqver with option to make WebSphere MQ version 7.5 similar to WebSphere MQ version 7.0.1

     dspmqver -f 15 
    

 Name:        WebSphere MQ
    Version:     7.1.0.0
    Level:       p000-L110624
    BuildType:   IKAP - (Production)
    
  2. The heading of the build level row has changed from CMVC level: to Level:.


Exits and installable services

When migrating to WebSphere MQ version 7.5 for a distributed platform, if you install WebSphere MQ in a non-default location, you must update your exits and installable services. Data conversion exits generated using the crtmqcvx command must be regenerated using the updated command.
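
For example, a sketch of regenerating a data conversion exit with the updated command; the file names are placeholders:

 crtmqcvx mystructs.h myexit.c    # reads structure definitions from the source file and writes the generated conversion code to the target file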

When writing new exits and installable services, you do not need to link to any of the following WebSphere MQ libraries:


Fewer WebSphere MQ MQI client log messages

A WebSphere MQ MQI client used to report every failed attempt to connect to a queue manager when processing a connection name list. From version 7.5, only if the failure occurs with the last connection in the list is a message written to the queue manager error log.

Reporting only the last failure reduces growth of the queue manager error log.


GSKit: Changes from GSKit V7.0 to GSKit V8.0

For distributed platforms, GSKit V8.0 is integrated with WebSphere MQ. In versions of WebSphere MQ prior to version 7.1, you installed GSKit separately. GSKit V8.0 was included as an alternative to GSKit V7.0 in WebSphere MQ version 7.0.1; it is now the only version of GSKit provided with WebSphere MQ. Some functions in GSKit V8.0 are different to the functions in GSKit V7.0.

Review the following list of changes.


GSKit: Some FIPS 140-2 compliant channels do not start

Three CipherSpecs are no longer FIPS 140-2 compliant. If a client or queue manager is configured to require FIPS 140-2 compliance, channels that use the following CipherSpecs do not start after migration.

To restart a channel, alter the channel definition to use a FIPS 140-2 compliant CipherSpec. Alternatively, configure the queue manager, or the client in the case of a WebSphere MQ MQI client, not to enforce FIPS 140-2 compliance.
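
For example, either of these MQSC changes addresses the problem; the channel and queue manager names are placeholders, and TLS_RSA_WITH_AES_128_CBC_SHA is shown only as an example of a FIPS 140-2 compliant CipherSpec, so check the current list for your maintenance level:

 echo "ALTER CHANNEL(TO.QM2) CHLTYPE(SDR) SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA)" | runmqsc QM1
 echo "ALTER QMGR SSLFIPS(NO)" | runmqsc QM1    # alternative: stop enforcing FIPS 140-2 on the queue manager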

Earlier versions of WebSphere MQ enforced an older version of the FIPS 140-2 standard. The following CipherSpecs were considered FIPS 140-2 compliant in earlier versions of WebSphere MQ and are also considered compliant in version 7.5:

Use these CipherSpecs if you need WebSphere MQ version 7.5 to interoperate in a FIPS 140-2 compliant manner with earlier WebSphere MQ releases.


GSKit: Certificate Common Name (CN) not mandatory

In GSKit V8.0, the iKeyman command accepts any element of the distinguished name (DN), or a form of the subject alternative name (SAN). It does not mandate that you provide a common name. In GSKit V7.0, if you created a self-signed certificate using the iKeyman command, you had to specify a common name.

The implication is that applications searching for a certificate might not be able to assume that a certificate has a common name. You might need to review how applications search for certificates, and how applications handle errors involving the common name. Alternatively, you might choose to check that all self-signed certificates are given common names.

Some other certificate tools that you might also be using do not require a common name, so the change to GSKit is unlikely to cause you a problem.


GSKit: Commands renamed

The command name gsk7cmd is replaced with runmqckm, gsk7ikm is replaced with strmqikm, and gsk7capicmd is replaced with runmqakm. All of these commands start the GSKit V8.0 certificate administration tools, not the GSKit V7.0 tools.

WebSphere MQ Version 7.5 does not use a machine-wide shared installation of GSKit: instead it uses a private GSKit installation in the WebSphere MQ installation directory. Each WebSphere MQ Version 7.5 installation can use a different GSKit version. To display the version number of GSKit embedded in a particular WebSphere MQ installation, run the dspmqver command from that installation as shown in the following table:

Renamed GSKit commands

Platform                   GSKit V7.0 command   GSKit V8.0 command
UNIX and Linux             gsk7cmd              runmqckm
UNIX and Linux             gsk7ikm              strmqikm
Windows, UNIX and Linux    gsk7capicmd          runmqakm
Windows, UNIX and Linux    gsk7ver              dspmqver -p 64 -v

Note: Do not use the gsk8ver command to display the GSKit version number: only the dspmqver command will show the correct GSKit version number for WebSphere MQ Version 7.5.


GSKit: The iKeyman commands to insert a certificate do not check that all required CA certificates are present

The iKeyman command in GSKit V8.0 does not validate a certificate when it is inserted into a key repository. iKeyman in GSKit V7.0 validated a certificate before it inserted the certificate into a certificate store.

The implication is that if you add a certificate using iKeyman in GSKit V8.0, all the necessary intermediate and root CA certificates might not be present, or they might have expired; when the certificate is later checked, the check might fail.

Missing or expired certificates can cause SSL and TLS connections to fail with error AMQ9633.


GSKit: PKCS#11 and JRE addressing mode

If you use iKeyman or iKeycmd to administer certificates and keys for PKCS#11 cryptographic hardware, note that the addressing mode of the JRE for these tools has changed.

On some platforms the JRE was 32-bit in earlier releases, but in WebSphere MQ version 7.5 it is 64-bit only. Where the addressing mode has changed, you might need to install additional PKCS#11 drivers appropriate for the addressing mode of the iKeyman and iKeycmd JRE, because the PKCS#11 driver must use the same addressing mode as the JRE. The following table shows the WebSphere MQ version 7.5 JRE addressing modes.

WebSphere MQ version 7.5 JRE addressing modes

Platform                       JRE addressing mode
Windows (32-bit or 64-bit)     32
Linux for System x (32-bit)    32
Linux for System x (64-bit)    64
Linux for System p             64
Linux for System z             64
HP-UX                          64
Solaris SPARC                  64
Solaris x86-64                 64
AIX                            64


GSKit: Import of a duplicate PKCS#12 certificate

In GSKit V8.0, the iKeyman command does not report an attempt to import a duplicate PKCS#12 certificate as an error. In GSKit V7.0, the iKeyman command reported an error. In neither version is a duplicate certificate imported.

For GSKit V8.0, a duplicate certificate is a certificate with the same label and public key.

The implication is that if some of the issuer information is different, but the name and public key are the same, the changes are not imported. The correct way to update a certificate is to use the -cert -receive option, which replaces an existing certificate.

gskcapicmd does not allow or ignore duplicates on import in this way.


GSKit: Certificate stores created by iKeyman and iKeycmd no longer contain CA certificates

The iKeyman and iKeycmd utilities in GSKit V8.0 create a certificate store without adding pre-defined CA certificates to the store. To create a working certificate store, you must now add all the certificates that you require and trust. In GSKit V7.0 iKeyman and iKeycmd created a certificate store that already contained CA certificates.

Existing databases created by GSKit V7.0 are unaffected by this change.


GSKit: Password expiry to key database deprecated

In GSKit V8.0, the password expiry function in iKeyman continues to work the same as in GSKit V7.0, but it might be withdrawn in future versions of GSKit.

Use the file system protection provided with the operating system to protect the key database and password stash file.
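
For example, on UNIX and Linux, a sketch of restricting a key database and its stash file to the owning user; the file names are conventional ones and are placeholders:

 chmod 600 key.kdb key.sth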



GSKit: Signature algorithm moved out of settings file

In GSKit V8.0, the default signature algorithm that is used when creating self-signed certificates or certificate requests, or that is selected in the creation dialogs, is passed as a command-line parameter. In GSKit V7.0, the default signature algorithm was specified in the settings file.

The change has very little effect: it can result in a different default signature algorithm being selected, but it does not alter how you select a signature algorithm.


GSKit: Signed certificate validity period not within signer validity

In GSKit V8.0, the iKeyman command does not check whether the validity period of a resulting certificate is within the validity period of its signer. In GSKit V7.0, iKeyman checked that the validity period of the resulting certificate was within the validity period of the signer certificate.

The IETF RFC standards for SSL/TLS allow a certificate whose validity dates extend beyond those of its signer. This change to GSKit brings it into line with those standards. The check is whether the certificate is issued within the validity period of the signer, and not whether it expires within the validity period of the signer.


GSKit: Stricter default file permissions

The default file permissions set by runmqckm and strmqikm in WebSphere MQ version 7.5 on UNIX and Linux are stricter than the permissions that are set by runmqckm, strmqikm, gsk7cmd, and gsk7ikm in earlier releases of WebSphere MQ.

The permissions set by runmqckm and strmqikm in WebSphere MQ version 7.5 permit only the creator to access the UNIX and Linux SSL/TLS key databases. The runmqckm, strmqikm, gsk7cmd, and gsk7ikm tools in earlier releases of WebSphere MQ set world-readable permissions, making the files liable to theft and impersonation attacks.

The permissions set by gsk7capicmd, in earlier releases of WebSphere MQ, and runmqakm in WebSphere MQ version 7.5, permit only the creator to access UNIX and Linux SSL/TLS key databases.

The migration of SSL/TLS key databases to version 7.5 does not alter their access permissions. In many cases, administrators set more restrictive access permissions on these files to overcome the liability to theft and impersonation attacks; these permissions are retained.

The default file permissions set on Windows are unchanged. Continue to tighten up the access permissions on SSL/TLS key database files on Windows after creating the files with runmqckm or strmqikm.


Java: Different message property data type returned

If the data type of a message property is set, the same data type is returned when the message is received. In some circumstances in WebSphere MQ version 7.0.1, properties set with a specific type were returned with the default type String.

The change affects Java applications that used the MQRFH2 class, and retrieved properties using the getFieldValue method.

You can write a message property in Java using a method such as setIntFieldValue. In WebSphere MQ version 7.0.1 the property is written into an MQRFH2 header with a default type of String. When you retrieve the property with the getFieldValue method, a String object is returned.

The change is that the correct type of object is now returned; in this example, the type of object returned is Integer.

If your application retrieves the property with the getIntFieldValue method, there is no change in behavior; an int is returned. If the property is written to the MQRFH2 header by some other means, and the data type is set, then getFieldValue returns the correct type of object.


JMS: Reason code changes

Some reason codes returned in JMS exceptions have changed. The changes affect MQRC_Q_MGR_NOT_AVAILABLE and MQRC_SSL_INITIALIZATION_ERROR.

In earlier releases of WebSphere MQ, if a JMS application fails to connect, it receives an exception with reason code 2059 (080B) (RC2059): MQRC_Q_MGR_NOT_AVAILABLE. In version 7.5, it can still receive MQRC_Q_MGR_NOT_AVAILABLE, or one of the following more specific reason codes.

Similarly, when trying to connect, a JMS application might have received 2393 (0959) (RC2393): MQRC_SSL_INITIALIZATION_ERROR. In version 7.5, it can still receive MQRC_SSL_INITIALIZATION_ERROR, or a more specific reason code, such as 2400 (0960) (RC2400): MQRC_UNSUPPORTED_CIPHER_SUITE, that identifies the cause of the SSL initialization error.


JMS: ResourceAdapter object configuration

When a WebSphere Application Server connects to WebSphere MQ, it creates Message Driven Beans (MDBs) using JMS connections. These MDBs can no longer share one JMS connection. The configuration of the ResourceAdapter object is migrated so that there is a single MDB for each JMS connection.


Changed ResourceAdapter properties

connectionConcurrency

The maximum number of MDBs to share a JMS connection. Sharing connections is not possible and this property always has the value 1. Its previous default value was 5.

maxConnections

This property is the number of JMS connections that the resource adapter can manage. In version 7.5, it also determines the number of MDBs that can connect because each MDB requires one JMS connection. The default value of maxConnections is now 50. Its previous default value was 10.

If connectionConcurrency is set to a value greater than 1, the maximum number of connections supported by the resource adapter is scaled by the value of connectionConcurrency. For example, if maxConnections is set to 2 and connectionConcurrency is set to 4, the maximum number of connections supported by the resource adapter is 8. As a result, connectionConcurrency is set to 1 and maxConnections is set to 8.

If connectionConcurrency is set to a value greater than 1, it is adjusted automatically. To avoid automatic adjustment, set connectionConcurrency to 1. You can then set maxConnections to the value you want.

The scaling mechanism ensures that sufficient connections are available for existing deployments without you having to change your deployment, configuration, or programs.

If the adjusted maxConnections value exceeds the MAXINST or MAXINSTC attributes of any used channel, previously working deployments might fail.

The default value of both channel attributes equates to unlimited. If you changed them from the default value, you must ensure that the new maxConnections value does not exceed MAXINST or MAXINSTC.
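
For example, a sketch of raising the limits on a server-connection channel so that they cover the adjusted maxConnections value; the channel name, queue manager name, and values are illustrative assumptions:

 echo "ALTER CHANNEL(WAS.SVRCONN) CHLTYPE(SVRCONN) MAXINST(50) MAXINSTC(50)" | runmqsc QM1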


MQI and PCF reason code changes

Reason codes that have changed, and that affect some existing programs, are listed here.

MQRC_NOT_OPEN_FOR_INPUT

In WebSphere MQ version 7.0, a queue opened with MQOO_OUTPUT, and then browsed, returned an error with the wrong reason code, MQRC_NOT_OPEN_FOR_INPUT. The correct reason code, MQRC_NOT_OPEN_FOR_BROWSE, was issued by version 6.0 and earlier. Version 7.5 correctly returns an error with the same reason code as version 6.0, MQRC_NOT_OPEN_FOR_BROWSE.

MQRC_DEF_XMIT_Q_USAGE_ERROR

The information center in previous versions of WebSphere MQ warned about defining the default transmission queue as SYSTEM.CLUSTER.TRANSMIT.QUEUE. In version 7.5, an attempt to open the default transmission queue, defined as SYSTEM.CLUSTER.TRANSMIT.QUEUE, results in the error MQRC_DEF_XMIT_Q_USAGE_ERROR.

MQRC_FASTPATH_NOT_AVAILABLE

An application that connects to multiple queue managers in the same process and uses MQCNO_FASTPATH_BINDING might fail with an error and reason code MQRC_FASTPATH_NOT_AVAILABLE.

MQRCCF_DEF_XMIT_Q_CLUS_ERROR

The information center in previous versions of WebSphere MQ warned about defining the default transmission queue as SYSTEM.CLUSTER.TRANSMIT.QUEUE. In version 7.5, an attempt to alter the queue manager attribute DEFXMITQ to SYSTEM.CLUSTER.TRANSMIT.QUEUE results in an error. The PCF reason code is 3269 (0CC5) (RC3269): MQRCCF_DEF_XMIT_Q_CLUS_ERROR.


Publish/Subscribe: Delete temporary dynamic queue

If a subscription is associated with a temporary dynamic queue, when the queue is deleted, the subscription is deleted. This changes the behavior of incorrectly written publish/subscribe applications migrated from version 6.0. Publish/subscribe applications migrated from WebSphere Message Broker are unchanged. The change does not affect the behavior of integrated publish/subscribe applications, which are written using the MQI publish/subscribe interface.


Summary

In version 7.5, you cannot create a temporary dynamic queue as the destination for publications for a durable subscription using the integrated publish/subscribe interface.

In the current fix level of version 7.5, if you use either of the queued publish/subscribe interfaces, MQRFH1 or MQRFH2, the behavior is the same. You can create a temporary dynamic queue as the subscriber queue, and if the queue is deleted, the subscription is deleted with it. Deleting the subscription with the queue retains the same supported behavior as WebSphere MQ version 6.0, WebSphere Event Broker, and WebSphere Message Broker applications. It modifies the unsupported behavior of WebSphere MQ version 6.0 applications.


SSLPEER and SSLCERTI changes

WebSphere MQ version 7.5 obtains the Distinguished Encoding Rules (DER) encoding of the certificate and uses it to determine the subject and issuer distinguished names. The subject and issuer distinguished names are used in the SSLPEER and SSLCERTI fields. A SERIALNUMBER attribute is also included in the subject distinguished name and contains the serial number for the certificate of the remote partner. Some attributes of subject and issuer distinguished names are returned in a different sequence from previous releases.

The change to subject and issuer distinguished names affects channel security exits. It also affects applications that depend on the subject and issuer distinguished names that are returned by the PCF programming interface. Channel security exits and applications that set or query SSLPEER and SSLCERTI must be examined, and possibly changed. The fields that are affected are listed in Table 1 and Table 2.

Table 1. Channel status fields affected by changes to subject and issuer distinguished names

Channel status attribute   PCF channel parameter type
SSL Peer (SSLPEER)         MQCACH_SSL_SHORT_PEER_NAME
SSLCERTI                   MQCACH_SSL_CERT_ISSUER_NAME

Table 2. Channel data structures affected by changes to subject and issuer distinguished names

Channel data structure           Field
MQCD - Channel definition        SSLPeerNamePtr (MQPTR)
MQCXP - Channel exit parameter   SSLRemCertIssNamePtr (PMQVOID)

Existing peer name filters specified in the SSLPEER field of a channel definition are not affected. They continue to operate in the same manner as in earlier releases. The peer name matching algorithm has been updated to process existing SSLPEER filters. It is not necessary to alter any channel definitions.


Queue manager logs: Default sizes increased

The default size of queue manager log files has increased to 4096 log file pages. The default size of the AMQERRnn.log error log files has increased from 256 KB to 2 MB on UNIX, Linux, and Windows platforms. The change affects both new and migrated queue managers.


Queue manager log

In WebSphere MQ Version 7.5, the default log file size is 4096 pages.


Queue manager error log

Override the change by setting the environment variable MQMAXERRORLOGSIZE, or setting ErrorLogSize in the QMErrorLog stanza in the qm.ini file.
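
For example, a qm.ini sketch that sets the error log size explicitly to the new 2 MB default; the value is in bytes:

 QMErrorLog:
    ErrorLogSize=2097152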

The change increases the number of error messages that are saved in the error logs.


Telemetry: Installer integrated with WebSphere MQ

WebSphere MQ Telemetry is no longer installed separately from WebSphere MQ. It is installed as a component of WebSphere MQ. If you installed WebSphere MQ Telemetry with version 7.0.1, you must uninstall it before installing version 7.5.

You can install WebSphere MQ Telemetry at the same time as WebSphere MQ, or you can rerun the installer and install WebSphere MQ Telemetry at a later time.


WebSphere MQ Explorer changes

IBM WebSphere Eclipse Platform is no longer shipped with WebSphere MQ; it is not required to run MQ Explorer. The change makes no difference to administrators who run MQ Explorer. For developers who run MQ Explorer in an Eclipse development environment, a change is necessary. You must install and configure a separate Eclipse environment to be able to switch between MQ Explorer and other perspectives.


Packaging changes

In versions of WebSphere MQ earlier than version 7.5, you could select the Workbench mode preference in MQ Explorer. In workbench mode, you could switch to the other perspectives installed in the WebSphere Eclipse Platform. You can no longer set the Workbench mode preference, because the WebSphere Eclipse Platform is not shipped with MQ Explorer in version 7.5.

To switch between MQ Explorer and other perspectives, you must install MQ Explorer into your own Eclipse environment or into an Eclipse-based product. You can then switch between perspectives. For example, you can develop applications using WebSphere MQ classes for JMS, or WebSphere MQ Telemetry applications.

If you installed extensions to previous versions of MQ Explorer, such as SupportPacs or WebSphere Message Broker Explorer, you must reinstall compatible versions of the extensions after upgrading MQ Explorer to version 7.5.

If you continue to run WebSphere MQ version 7.0.1 on the same server as WebSphere MQ version 7.5, and you use MQ Explorer, each installation uses its own installation of MQ Explorer. When you uninstall version 7.0.1, its version of MQ Explorer is uninstalled. To remove IBM WebSphere Eclipse Platform, uninstall it separately. The workspace is not deleted.


Test result migration

Test results are not migrated from version to version. To view any test results, you must rerun the tests.


AIX: Shared objects

In WebSphere MQ for AIX Version 7.5, the .a shared objects in the lib64 directory contain both the 32-bit and 64-bit objects. A symbolic link to the .a file is also placed in the lib directory. The AIX loader can then pick up the correct object for the type of application being run.

This means that WebSphere MQ applications can run with the LIBPATH containing either the lib or lib64 directory, or both.


AIX: /usr/lpp/mqm symbolic link removed

Before version 6.0, WebSphere MQ placed a symbolic link in /usr/lpp/mqm on AIX. The link ensured queue managers and applications migrated from WebSphere MQ versions before version 5.3 continued to work, without change. The link is not created in Version 7.5.

In version 5.0, WebSphere MQ for AIX was installed into /usr/lpp/mqm. That changed in version 5.3 to /usr/mqm. A symbolic link was placed in /usr/lpp/mqm, linking to /usr/mqm. Existing programs and scripts that relied on the installation into /usr/lpp/mqm continued to work unchanged. That symbolic link has been removed in Version 7.5, because you can now install WebSphere MQ in any directory. Applications and command scripts are affected by the change.

The effect on applications is no different to the effect of migrating on other UNIX and Linux platforms. If the installation is made primary, then symbolic links to the WebSphere MQ link libraries are placed in /usr/lib. Most applications migrated from earlier WebSphere MQ versions search the default search path, which normally includes /usr/lib. The applications find the symbolic link to the WebSphere MQ load libraries in /usr/lib.

If the installation is not primary, then you must configure the correct search path to load the WebSphere MQ link libraries. If you choose to run setmqenv, WebSphere MQ places the WebSphere MQ link library path into LIBPATH. Unless the application is configured not to search LIBPATH, for example because it is a setuid or setgid application, the WebSphere MQ library is loaded successfully.

If you have written command scripts that run WebSphere MQ commands, you might have coded explicit paths to the directory tree where WebSphere MQ was installed. You must modify these command scripts. You can run setmqenv to create the correct environment to run the command scripts. If you have set the installation as primary, you do not have to specify the path to the command.
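A minimal sketch of setting up the environment from a command script, assuming the installation is in /usr/mqm; source the setmqenv command so that it can modify the current shell environment:

 . /usr/mqm/bin/setmqenv -s
 dspmq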


AIX, HP-UX, and Solaris: Building applications for TXSeries

You must rebuild WebSphere MQ applications that link to TXSeries.

Before version 7.5, WebSphere MQ applications that used the TXSeries CICS support loaded the WebSphere MQ library, mqz_r. From version 7.5 those applications must load the WebSphere MQ library, mqzi_r instead. You must change your build scripts accordingly.

Version 7.5 of mqz_r includes code to load a different version of the WebSphere MQ library. WebSphere MQ loads a different version of the WebSphere MQ library, if it detects that the queue manager the application is connected to is associated with a different installation to the one from which the library was loaded. mqzi_r does not include the additional code. When using TXSeries, the application must run with the WebSphere MQ library it loaded, and not a different library loaded by WebSphere MQ. For this reason, WebSphere MQ applications that use the WebSphere MQ TXSeries support must load the mqzi_r library, and not the mqz_r library.

An implication of applications loading mqzi_r is that the application must load the correct version of mqzi_r. It must load the one from the installation that is associated with the queue manager that the application is connected to.


Linux: Recompile C++ applications and update run time libraries

C++ WebSphere MQ MQI client and server applications on Linux must be recompiled using GNU Compiler Collection (GCC) 4.1.2, or later. Compilers older than GCC 4.1.2 are no longer supported. The C++ GCC 4.1.2 run time libraries, or later, must be installed in /usr/lib or /usr/lib64.

If you are using one of the supported Linux distributions, the libraries are correctly installed.

The GCC 4.1.2 libraries support SSL and TLS connections from a WebSphere MQ MQI client. SSL and TLS use GSKit version 8, which depends on libstdc++.so.6. libstdc++.so.6 is included in GCC 4.1.2.


Linux: Increased shared memory allocation required

The maximum amount of shared memory (SHMMAX) to allocate on Linux systems was omitted from the version 7.0 information centers. The default system allocation is 32 MB. WebSphere MQ starts by allocating 64 MB and increases its allocation on demand by doubling its previous allocation. On a production system set SHMMAX to at least 256 MB to accommodate additional allocations.


UNIX and Linux: crtmqlnk and dltmqlnk removed

The crtmqlnk and dltmqlnk commands are not present in version 7.5. Before version 7.1, the commands created symbolic links in subdirectories of /usr. From version 7.1, you must use the setmqinst command instead.
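For example, to make an installation primary, which creates the symbolic links under /usr, you might run a command like the following; the installation path /opt/mqm is an assumption and you should substitute your own:

 setmqinst -i -p /opt/mqm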


UNIX and Linux: Message catalogs moved

WebSphere MQ message catalogs are no longer stored in the system directories in version 7.5. To support multiple installations, copies of the message catalogs are stored with each installation. If you want messages only in the locale of your system, the change has no effect on your system. If you have customized the way the search procedure selects a message catalog, the customization might no longer work correctly.

Set the LANG environment variable to load a message catalog for a different language from the system locale.
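For example, to load the message catalog for a different locale in the current session; fr_FR is an illustrative locale name and must be installed on your system:

 export LANG=fr_FR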


UNIX and Linux: MQ services and triggered applications

WebSphere MQ version 7.5 has been configured to allow both LD_LIBRARY_PATH and $ORIGIN to work for MQ services and triggered applications. For this reason, MQ services and triggered applications have been changed so that they run under the user ID that started the queue manager, rather than as setuid or setgid programs.

If any files used by the service were previously restricted to certain users, then they might not be accessible by the user ID who started the queue manager. Resources used by MQ services or triggered applications must be adjusted as appropriate.

On AIX, LD_LIBRARY_PATH is also known as LIBPATH and $ORIGIN is not supported.

On HP-UX, LD_LIBRARY_PATH is also known as SHLIB_PATH.


UNIX and Linux: ps -ef | grep amq interpretation

The interpretation of the list of WebSphere MQ processes that results from filtering a scan of UNIX or Linux processes has changed. The results can show WebSphere MQ processes running for multiple installations on a server. Before version 7.5, the search identified WebSphere MQ processes running on only a single installation of WebSphere MQ on a UNIX or Linux server.

The implications of this change depend on how the results are qualified and interpreted, and how the list of processes is used. The change affects you only if you start to run multiple installations on a single server. If you have incorporated the list of WebSphere MQ processes into administrative scripts or manual procedures, you must review that usage.
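As a sketch, instead of interpreting a raw process scan on a multi-installation server, you can list queue managers by installation with the dspmq command:

 ps -ef | grep amq
 dspmq -o installation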


Examples

The following two examples, which are drawn from the information center, illustrate the point.

  1. In the information center, before version 7.5, the scan was used as a step in tasks to change the installation of WebSphere MQ. The purpose was to detect when all queue managers had ended. In version 7.5, the tasks use the dspmq command to detect when all queue managers associated with a specific installation have ended.
  2. In the information center, a process scan is used to monitor starting a queue manager in a high availability cluster. Another script is used to stop a queue manager. In the script to stop a queue manager, if the queue manager does not end within a period of time, the list of processes is piped into a kill -9 command. In both these cases, the scan filters on the queue manager name, and is unaffected by the change to multiple installations.


UNIX and Linux: /usr symbolic links removed

On all UNIX and Linux platforms, the links from the /usr file system are no longer made automatically. In order to take advantage of these links, you must set an installation as the primary installation or set the links up manually.

In previous releases the installation of WebSphere MQ on UNIX and Linux created the symbolic links shown in Table 1 . In version 7.5, these links are not created. You must run setmqinst to create a primary installation containing symbolic links. No symbolic links are created in other installations.

Default symbolic links created in releases before version 7.1

Symbolic link from     To
/usr/bin/amq...        /opt/mqm/bin/amq...
/usr/lib/amq...        /opt/mqm/lib/amq...
/usr/include/cmq...    /opt/mqm/inc/cmq...
/usr/share/man/...     /opt/mqm/man/...

Only a subset of the links created by previous releases is now made.


Windows: amqmsrvn.exe process removed

The amqmsrvn.exe DCOM process was replaced by a Windows service, amqsvc.exe, in version 7.1. This change is unlikely to cause any problems. However, you might have to make some changes. You might have configured the user that runs the WebSphere MQ Windows service MQSeriesServices without the user right to Log on as a service. Alternatively, the user might not have List Folder privilege on all the subdirectories from the root of the drive to the location of the service amqsvc.exe.

If you omitted the Log on as a service user privilege, or one of the subdirectories under which WebSphere MQ is installed does not grant the List Folder privilege to the user, the MQ_InstallationName WebSphere MQ Windows service in version 7.5 fails to start.


Diagnosing the problem

If the service fails to start, Windows event messages are generated:

If the Prepare WebSphere MQ wizard encounters a failure when validating the security credentials of the user performing an installation, an error is returned: WebSphere MQ is not correctly configured for Windows domain users. This error indicates that the service failed to start.


Resolution

To resolve this problem:


Windows: IgnoredErrorCodes registry key

The registry key used to specify error codes that you do not want written to the Windows Application Event Log has changed.

The contents of this registry key are not automatically migrated. To continue to ignore specific error codes, you must manually migrate the registry key.

Previously, the key was in the following location:

The key is now in the following location:

where MQ_INSTALLATION_NAME is the installation name associated with a particular installation of WebSphere MQ.


Windows: Installation and infrastructure information

The location of Windows installation and infrastructure information has changed.

A top-level string value, WorkPath, in the HKLM\SOFTWARE\IBM\WebSphere MQ key, stores the location of the product data directory that is shared between all installations. The first installation on a machine sets this value; subsequent installations pick up the same location from the key.

Other information previously stored in the registry on Windows is now stored in .ini files.


Windows: Local queue performance monitoring

In WebSphere MQ for Windows Version 7.5 it is no longer possible to monitor local queues using the Windows performance monitor.

Use the performance monitoring commands, which are common to all platforms, provided by WebSphere MQ.
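As one possible sketch of platform-neutral queue monitoring in runmqsc, enable real-time monitoring and then display queue status; the queue name APP.QUEUE is hypothetical:

 ALTER QMGR MONQ(MEDIUM)
 DISPLAY QSTATUS('APP.QUEUE') TYPE(QUEUE) MONITOR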


Windows: Logon as a service required

The user ID that runs the WebSphere MQ Windows service must have the user authority to Logon as a service. If the user ID does not have the authority to run the service, the service does not start and it returns an error in the Windows system event log. Typically you will have run the Prepare WebSphere MQ wizard, and set up the user ID correctly. Only if you have configured the user ID manually is it possible that you might have a problem in version 7.5.

You have always been required to give the user ID that you configure to run WebSphere MQ the user authority to Logon as a service. If you run the Prepare WebSphere MQ wizard, it creates a user ID with this authority. Alternatively, it ensures that a user ID you provide has this authority.

It is possible that you ran WebSphere MQ in earlier releases with a user ID that did not have the Logon as a service authority. You might have used it to configure the WebSphere MQ Windows service MQSeriesServices, without any problems. If you run a WebSphere MQ Windows service in version 7.5 with the same user ID that does not have the Logon as a service authority, the service does not start.

The WebSphere MQ Windows service MQSeriesServices, with the display name IBM MQSeries, changed in version 7.1. A single WebSphere MQ Windows service per server is no longer sufficient; a WebSphere MQ Windows service per installation is required. Each service is named MQ_InstallationName, and has the display name WebSphere MQ (InstallationName). The change, which is necessary to run multiple installations of WebSphere MQ, has prevented WebSphere MQ running the service under a single specific user ID. In version 7.5, each MQ_InstallationName service runs as a separate Windows service.

The consequence is that a user ID that is configured to run the Windows service MQ_InstallationName must be granted the right to Logon as a service. If the user ID is not configured correctly, errors are returned in the Windows system event log.

Many installations on earlier releases, and installations from version 7.1 onwards, configure WebSphere MQ with the Prepare WebSphere MQ wizard. The wizard sets up the user ID with the Logon as a service authority and configures the WebSphere MQ Windows service with this user ID. Only if, in previous releases, you have configured MQSeriesServices with another user ID that you configured manually, might you have this migration problem to fix.


Windows: MSCS restriction with multiple installations

When you install or upgrade to WebSphere MQ version 7.5, the first WebSphere MQ installation on the server is the only one that can be used with Microsoft Cluster Server (MSCS). No other installations on the server can be used with MSCS. This restriction limits the use of MSCS with multiple WebSphere MQ installations.

When you run the haregtyp command, it defines the first WebSphere MQ installation on the server as an MSCS resource type. The implications are as follows:

  1. You must associate queue managers that are participating in an MSCS cluster with the first installation on the server.
  2. Setting the primary installation has no effect on which installation is associated with the MSCS cluster.
  3. If you are upgrading from version 7.0.1 to version 7.5, you must follow the single-stage migration scenario.


Windows: Migration of registry information

Before version 7.1 all WebSphere MQ configuration information, and most queue manager configuration information, was stored in the Windows registry. From version 7.1 onwards all configuration information is stored in files. If version 7.0.1 is uninstalled from a server that has other WebSphere MQ installations, an additional migration step must be performed. The additional step completes transferring configuration information from the registry to the mqs.ini file.

The change does not affect the operation of existing applications or queue managers, but it does affect any administrative procedures and scripts that reference the registry.

Before version 7.0.1 all WebSphere MQ configuration information was stored in the Windows registry. In version 7.0.1, to support multi-instance queue managers, the queue manager configuration information of some queue managers is stored in qm.ini and qmstatus.ini rather than in the registry.

In version 7.5, all WebSphere MQ configuration information on Windows is stored in files; the same files as on UNIX and Linux. If you are migrating an existing Windows system to version 7.5, the transfer of configuration data from the registry to files is automatic. It takes place when the installation is upgraded to version 7.5.

If version 7.0.1 is uninstalled rather than upgraded, and there are other WebSphere MQ version 7.5 installations on the same server, the migration requires extra steps.

The version 7.0.1 configuration information is accessed from other installations. You must stop all the queue managers and WebSphere MQ applications running on the server to release any locks.

The WebSphere MQ configuration information in the registry is automatically migrated to qm.ini and qmstatus.ini when version 7.0.1 is uninstalled from a server that has a version 7.5, or later, installation. See step 2 in UNIX, Linux, and Windows: Side-by-side migration from version 7.0.1 to version 7.5 and step 5 in UNIX, Linux, and Windows: Multi-stage migration from version 7.0.1 to version 7.5. As a consequence, after uninstallation of version 7.0.1 on a multi-installation server, it is difficult to restore a version 7.0.1 installation to run any queue managers that you want to restore to the 7.0.1 command level:

  1. You cannot reinstall version 7.0.1 on the server. You must run the queue managers on a different server.
  2. When you transfer the queue manager data to another server, with version 7.0.1 installed, you must create the correct registry configuration entries. The entries are not available to copy from the registry on the multi-installation server. Back up the registry entries before uninstalling version 7.0.1.


Windows: Relocation of the mqclient.ini file

In WebSphere MQ for Windows Version 7.5 the mqclient.ini file has moved from FilePath to WorkPath. This is similar to the model already used on UNIX and Linux systems.

If you supply separate file and work paths, you will see a change in behavior. You have an additional step to perform when you choose to uninstall WebSphere MQ version 7.0 before installing WebSphere MQ version 7.5. Before uninstalling WebSphere MQ version 7.0, you must copy mqclient.ini directly to the Config directory in your data path so that it can be picked up by the WebSphere MQ version 7.5 installation.
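A sketch of the extra copy step, using placeholder directories; substitute the file path and data path that you configured:

 copy "C:\YOUR_FILE_PATH\mqclient.ini" "C:\YOUR_DATA_PATH\Config\mqclient.ini"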


Windows: Task manager interpretation

The interpretation of the processes listed by the Windows Task Manager has changed. The results can show WebSphere MQ processes running for multiple installations on a server. Before version 7.5, the process list identified WebSphere MQ processes running on only a single installation of WebSphere MQ on a Windows server.

The implications of this change depend on how the results are qualified and interpreted, and how the list of processes is used. The change affects you only if you start to run multiple installations on a single server. If you have incorporated the list of WebSphere MQ processes into administrative scripts or manual procedures, you must review that usage.


Windows: WebSphere MQ Active Directory Services Interface

The WebSphere MQ Active Directory Services Interface is no longer available.

If your application uses the WebSphere MQ Active Directory Services Interface, you must rewrite your application to use Programmable Command Formats.


Behavior that has changed between version 6.0 and version 7.5

Between version 6.0 and version 7.5 some aspects of WebSphere MQ have changed that might affect existing applications, administrative scripts, or management procedures.

The following changes are copied from the migration guide. They might affect the operation of existing applications, administrative scripts, and management procedures. New functions, and changes that do not affect existing applications, administrative procedures, and administrative scripts, are not listed here.


Apache Axis shipped with WebSphere MQ updated from version 1.1 to 1.4

The level of Apache Axis shipped with WebSphere MQ is updated to version 1.4 from a patched version of Axis 1.1.

Continue to use the Axis 1.1 JAR file with applications that were built with WebSphere MQ version 6.0. Build new applications that use the WebSphere MQ web transport for SOAP with the Axis 1.4 JAR file.

If you are writing clients that use W3C SOAP over JMS, download Axis 1.4.1 from Apache Axis. Build applications that use W3C SOAP over JMS with the Axis 1.4.1 JAR file.

Note: The naming of versions and releases used by Axis causes confusion. Typically, Axis 1.4 refers to the JAX-RPC implementation, and Axis2 to the JAX-WS implementation.

Axis 1.4 is a version level. If you search for Axis 1.4 on the internet, you are taken to http://ws.apache.org/axis/. The page contains a list of preceding versions of Axis (1.2, 1.3) and the April 22, 2006, final release of Axis 1.4. There are later releases of Axis 1.4 that fix bugs, but they are all known as Axis 1.4. It is one of these bug fix releases that is shipped with WebSphere MQ. For Axis 1.4, use the version of axis.jar that is shipped with WebSphere MQ rather than the one obtainable from http://ws.apache.org/axis/.

The Axis website also uses Axis 1.1 to refer to all the versions of what is more typically called Axis 1.4. Axis 1.2 is used to refer to what is typically called Axis2.

Axis 1.5 is not a later release of Axis 1.4; it is an Axis2 release. If you search for Axis 1.5, you are directed to http://ws.apache.org/axis2/. http://ws.apache.org/axis2/download.cgi contains a list of release versions of Axis2, labeled 0.9 to 1.5.1 (and, confusingly, including version 1.4). The release version of Axis2 to use with WebSphere MQ transport for SOAP is 1.4.1. Download Axis2 1.4.1 from http://ws.apache.org/axis2/download/1_4_1/download.cgi.


Changes to cluster error recovery on servers other than z/OS

Before version 7.5, if a queue manager detected a problem with the local repository manager managing a cluster, it updated the error log. In some cases, it then stopped managing clusters. The queue manager continued to exchange application messages with the cluster, relying on its increasingly out of date cache of cluster definitions. From version 7.5 onwards, the queue manager reruns operations that caused problems, until the problems are resolved. If, after five days, the problems are not resolved, the queue manager shuts down to prevent the cache becoming more out of date. As the cache becomes more out of date, it causes a greater number of problems. The changed behavior regarding cluster errors in version 7.5 does not apply to z/OS.

Every aspect of cluster management is handled for a queue manager by the local repository manager process, amqrrmfa. The process runs on all queue managers, even if there are no cluster definitions.

Before version 7.5, if the queue manager detected a problem in the local repository manager, it stopped the repository manager after a short interval. The queue manager kept running, processing application messages and requests to open queues, and publish or subscribe to topics.

With the repository manager stopped, the cache of cluster definitions available to the queue manager became more out of date. Over time, messages were routed to the wrong destination, and applications failed. Applications failed attempting to open cluster queues or publication topics that had not been propagated to the local queue manager.

Unless an administrator checked for repository messages in the error log, the administrator might not realize the cluster configuration had problems. If the failure was not recognized over an even longer time, and the queue manager did not renew its cluster membership, even more problems occurred. The instability affected all queue managers in the cluster, and the cluster appeared unstable.

From version 7.5 onwards, WebSphere MQ takes a different approach to cluster error handling. Rather than stop the repository manager and keep going without it, the repository manager reruns failed operations. If the queue manager detects a problem with the repository manager, it follows one of two courses of action.

  1. If the error does not compromise the operation of the queue manager, the queue manager writes a message to the error log. It reruns the failed operation every 10 minutes until the operation succeeds. By default, you have five days to deal with the error; if you do not, the queue manager writes a message to the error log and shuts down. You can postpone the five-day shutdown.
  2. If the error compromises the operation of the queue manager, the queue manager writes a message to the error log, and shuts down immediately.

An error that compromises the operation of the queue manager is an error that the queue manager has not been able to diagnose, or an error that might have unforeseeable consequences. This type of error often results in the queue manager writing an FFST file. Errors that compromise the operation of the queue manager might be caused by a bug in WebSphere MQ, or by an administrator, or a program, doing something unexpected, such as ending a WebSphere MQ process.

The point of the change in error recovery behavior is to limit the time the queue manager continues to run with a growing number of inconsistent cluster definitions. As the number of inconsistencies in cluster definitions grows, the chance of abnormal application behavior grows with it.

The default choice of shutting down the queue manager after five days is a compromise between limiting the number of inconsistencies and keeping the queue manager available until the problems are detected and resolved.

You can extend the time before the queue manager shuts down indefinitely, while you fix the problem or wait for a planned queue manager shutdown. The five-day stay keeps the queue manager running through a long weekend, giving you time to react to any problems or prolong the time before restarting the queue manager.


Corrective actions

You have a choice of actions to deal with the problems of cluster error recovery. The first choice is to monitor and fix the problem, the second to monitor and postpone fixing the problem, and the final choice is to continue to manage cluster error recovery as in releases before version 7.5.

  1. Monitor the queue manager error log for the error messages AMQ9448 and AMQ5008, and fix the problem.

    • AMQ9448 indicates that the repository manager has returned an error after running a command. This error marks the start of trying the command again every 10 minutes, and eventually stopping the queue manager after five days, unless you postpone the shutdown.
    • AMQ5008 indicates that the queue manager was stopped because a WebSphere MQ process is missing. AMQ5008 results from the repository manager stopping after five days. If the repository manager stops, the queue manager stops.

  2. Monitor the queue manager error log for the error message AMQ9448, and postpone fixing the problem.

    • If you disable getting messages from SYSTEM.CLUSTER.COMMAND.QUEUE, the repository manager stops trying to run commands, and continues indefinitely without processing any work. However, any handles that the repository manager holds to queues are released. Because the repository manager does not stop, the queue manager is not stopped after five days.
    • Run an MQSC command to disable getting messages from SYSTEM.CLUSTER.COMMAND.QUEUE:
    • ALTER QLOCAL(SYSTEM.CLUSTER.COMMAND.QUEUE) GET(DISABLED)

  3. Revert the queue manager to the same cluster error recovery behavior as before version 7.5.

    • You can set a queue manager tuning parameter to keep the queue manager running if the repository manager stops.
    • The tuning parameter is TolerateRepositoryFailure, in the TuningParameters stanza of the qm.ini file. To prevent the queue manager stopping if the repository manager stops, set TolerateRepositoryFailure to TRUE.
    • Restart the queue manager to enable the TolerateRepositoryFailure option.
    • If a cluster error has occurred that prevents the repository manager starting successfully, and hence the queue manager from starting, set TolerateRepositoryFailure to TRUE to start the queue manager without the repository manager.


Special consideration

Before version 7.5, some administrators managing queue managers that were not part of a cluster stopped the amqrrmfa process. Stopping amqrrmfa did not affect the queue manager.

Stopping amqrrmfa in version 7.5 causes the queue manager to stop, because it is regarded as a queue manager failure. You must not stop the amqrrmfa process in version 7.5, unless you set the queue manager tuning parameter, TolerateRepositoryFailure.


Example

Figure 1. Set TolerateRepositoryFailure to TRUE in qm.ini

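The figure content below is a minimal sketch of the qm.ini entry described above:

 TuningParameters:
    TolerateRepositoryFailure=TRUE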


MQI client: Changes

A WebSphere MQ MQI client can connect to queue managers running on earlier or later releases of WebSphere MQ without recompilation or relinking. The behavior of a client application might change. Changes that are specific to clients are changes to the client channel definition table, to the default settings of client-connection and server-connection channels, and to the behavior of MQPUT1. Clients might also be affected by other changes to WebSphere MQ.


MQI client: Client Channel Definition Table (CCDT)

The client channel definition table has changed from version 6.0 to version 7.5. Existing clients must continue to use existing CCDTs. To use a version 7.5 CCDT, you must update the client.

You can connect a WebSphere MQ MQI client application to any level of queue manager. If the client uses a CCDT, it must use a CCDT built by the same or earlier version of queue manager. If the client connects without using a CCDT, this restriction does not apply.

In a common migration scenario, if you upgrade a version 6.0 queue manager to version 7.5, and you do not create new CCDTs for its clients, the clients connect to the version 7.5 queue manager without any changes being required. Client behavior might change as a result of changes to the queue manager.

Another common migration scenario is to update some queue managers and some clients to version 7.5, leaving other queue managers and clients at version 6.0. In this scenario, deploy a version 7.5 CCDT to the version 7.5 WebSphere MQ MQI clients that are connected to version 7.5 queue managers, so that those clients can fully exploit version 7.5 function. Version 6.0 clients must continue to use the version 6.0 CCDT. Both sets of clients can connect to both sets of queue managers, regardless of which CCDT they are using.

If the client is a WebSphere MQ MQI client, the version of the WebSphere MQ MQI client libraries linked to by the client must be the same or greater than the version of the queue manager that was used to build the CCDT. If the client is a Java or JMS client, then the client must be built with versions of the WebSphere MQ JAR files that are the same or greater than the queue manager that was used to build the CCDT.

To upgrade a version 6.0 WebSphere MQ MQI client to use a version 7.5 CCDT, you must upgrade the WebSphere MQ MQI client installation to version 7.5. Unless you decide to do so for other reasons, do not rebuild the client application.

To upgrade a version 6.0 Java or JMS client to use a version 7.5 CCDT, redeploy the WebSphere MQ JAR files to the client workstation. You do not need to rebuild the Java or JMS client with the new JAR files.


MQI client: Client configuration stanzas moved into a new configuration file

Client configuration information is moved from existing configuration stanzas into a new configuration file, mqclient.ini.

Moving client configuration information affects existing settings; for example:


MQI client: Default behavior of client-connection and server-connection

The default settings for client-connection and server-connection channels have changed to use the new shared conversations capability. The change has an impact on performance, and on the behavior of heartbeats and channel exits.

From version 7.0, the default for client and server connections is to share an MQI channel. Each channel is defined with a default of 10 threads to run up to 10 client conversations per channel instance. Before version 7.0, each conversation was allocated to a different channel instance. The change might cause migration problems for existing client applications. You can restore version 6.0 behavior of a server or client connection channel by setting the channel attribute, SHARECNV, to 0.

Note: If you set SHARECNV to 1, rather than 0, each conversation is allocated a separate channel instance. However, the channel behaves like a new channel. For example, heartbeats flow in each direction at any time, and the channel supports the following features:

You can set the MQCONNX option, MQCNO_NO_CONV_SHARING and connect the application to a channel with SHARECNV set to a value greater than 1. The result is the same as connecting the application to a channel with SHARECNV set to 1. If you connect the application to a channel with SHARECNV set to 0, the connection behaves like a version 6.0 MQI channel.
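For example, a sketch in MQSC of restoring version 6.0 behavior on a server-connection channel, as described above; the channel name APP.SVRCONN is hypothetical:

 ALTER CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) SHARECNV(0)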


Performance

Processing of messages on channels that use the default configuration of 10 shared conversations is, on average, 15% slower than on version 6.0. Version 6.0 channels do not share conversations.

On an MQI channel instance that is sharing conversations, all of the conversations on a socket are received by the same thread. By using a single queue manager thread, a high SHARECNV value reduces queue manager thread usage, and generally improves performance. However, if the conversations sharing a socket are all busy, the conversational threads contend with one another to use the receiving thread. The contention causes delays, and in this situation a lower SHARECNV value is better. A SHARECNV value of 1 eliminates contention to use the receiving thread and makes the new features available.


Heartbeats

Heartbeats can now flow across the channel at any time in either direction. The version 6.0 behavior is for heartbeats to flow only when an MQGET call is waiting. Restore the version 6.0 behavior by setting SHARECNV to 0.


Channel exits

The behavior of a client or server connection channel exit changes when the channel is sharing conversations. It is unlikely that the change affects the actual behavior of existing exits; but it could. The change is as follows:

Updating the MQCD when the SharingConversations field is set to TRUE does not affect the way the channel runs. Only alterations made when the MQCXP SharingConversations field is set to FALSE, on an MQXR_INIT call, change channel behavior.



MQI client: MQPUT1 sync point behavior change

An MQPUT1 call by a WebSphere MQ MQI client application that failed in WebSphere MQ version 6.0 can now sometimes succeed. The failure is returned to the application later, if it calls MQCMIT. For the change in behavior to occur, the MQPUT1 must be in sync point.

In the scenario Example call sequence that demonstrates the change in behavior, an MQPUT1 call can succeed where it failed in version 6.0. The result occurs when all the following conditions are met:

You can make the WebSphere MQ MQI client behave like version 6.0 by setting Put1DefaultAlwaysSync to YES in the CHANNELS stanza of the client configuration file:

Figure 1. Add Put1DefaultAlwaysSync to mqclient.ini

 Channels:
    Put1DefaultAlwaysSync=YES


Example call sequence that demonstrates the change in behavior

  1. MQCONN to queue manager from a WebSphere MQ MQI client application.
  2. MQPUT1 to a nonexistent queue with the MQPMO_SYNCPOINT option
  3. MQDISC

In WebSphere MQ version 6.0, the MQPUT1 call ends with MQCC_FAILED and MQRC_UNKNOWN_OBJECT_NAME (2085). Running with a client and server later than version 6.0, the MQPUT1 call ends with MQCC_OK and MQRC_NONE.


Channel authentication

When you migrate a queue manager to version 7.5, channel authentication using channel authentication records is disabled. Channels continue to work as before. If you create a queue manager in version 7.5, channel authentication using channel authentication records is enabled, but with minimal additional checking. Some channels might fail to start.


Migrated queue managers

Channel authentication is disabled for migrated queue managers.

To start using channel authentication records, you must enable them by running an MQSC command against the queue manager.
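A sketch of the enabling command, mirroring the ALTER QMGR CHLAUTH(DISABLED) example later in this topic:

 ALTER QMGR CHLAUTH(ENABLED)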


New queue managers

Channel authentication is enabled for new queue managers.

Suppose that you connect existing queue managers or WebSphere MQ MQI client applications to a newly created queue manager. Most connections work without specifying any channel authentication records. The following exceptions prevent privileged access to the queue manager, and access to system channels.

  1. Privileged user IDs asserted by a client-connection channel are blocked by means of the special value *MQADMIN.

     SET CHLAUTH('*') TYPE(BLOCKUSER) USERLIST('*MQADMIN') +
    DESCR('Default rule to disallow privileged users')
    
  2. Except for the channel used by WebSphere MQ Explorer, all SYSTEM.* channels are blocked.

     SET CHLAUTH('SYSTEM.*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) +
    DESCR('Default rule to disable all SYSTEM channels')
    
    SET CHLAUTH(SYSTEM.ADMIN.SVRCONN) TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) +
    DESCR('Default rule to allow MQ Explorer access')
    

Note: This behavior is the default for all new WebSphere MQ version 7.5 queue managers on startup.

If you must work around the exceptions, you can run MQSC commands to add further rules that allow channels blocked by the default rules to connect, or you can disable channel authentication checking:

 ALTER QMGR CHLAUTH(DISABLED)


Connect to multiple queue managers and use MQCNO_FASTPATH_BINDING

Applications that connect to queue managers using the MQCNO_FASTPATH_BINDING binding option might fail with an error and reason code MQRC_FASTPATH_NOT_AVAILABLE.

An application can connect to multiple queue managers from the same process. In releases earlier than version 7.5, an application can set any one of the connections to MQCNO_FASTPATH_BINDING. In version 7.5, only the first connection can be set to MQCNO_FASTPATH_BINDING. See Fast path for the complete set of rules.

To assist with migration, you can set a new environment variable, AMQ_SINGLE_INSTALLATION. The variable reinstates the same behavior as in earlier releases, but prevents an application connecting to queue managers associated with other installations in the same process.
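For example, on UNIX and Linux you might set the variable in the environment of the application before starting it; the value shown is illustrative, because it is the presence of the variable that matters:

 export AMQ_SINGLE_INSTALLATION=1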


Fast path

On a server with multiple installations, applications using a fast path connection to WebSphere MQ version 7.1 or later must follow these rules:

  1. The queue manager must be associated with the same installation as the one from which the application loaded the WebSphere MQ run time libraries. The application must not use a fast path connection to a queue manager associated with a different installation. An attempt to make the connection results in an error, and reason code MQRC_INSTALLATION_MISMATCH.
  2. Connecting non-fast path to a queue manager associated with the same installation as the one from which the application has loaded the WebSphere MQ run time libraries prevents the application connecting fast path, unless either of these conditions is true:

    • The application makes its first connection to a queue manager associated with the same installation a fast path connection.
    • The environment variable, AMQ_SINGLE_INSTALLATION is set.

  3. Connecting non-fast path to a queue manager associated with a version 7.1 or later installation, has no effect on whether an application can connect fast path.
  4. You cannot combine connecting to a queue manager associated with a version 7.0.1 installation and connecting fast path to a queue manager associated with a version 7.1, or later installation.

With AMQ_SINGLE_INSTALLATION set, you can make any connection to a queue manager a fast path connection. Otherwise, almost the same restrictions apply.


Custom scripts

In WebSphere MQ for Windows Version 7.5, custom scripts that install packages can fail because the packages have been renamed.

Custom scripts that install WebSphere MQ can also be incomplete, because packages have been added or removed.


Changes to data types

A number of data types have changed between WebSphere MQ version 6.0 and WebSphere MQ version 7.5, and new data types have been added. This topic lists the changes for data types that have a new current version since WebSphere MQ version 6.0.

The current version of a data type is incremented if the length of a data type is extended by adding new fields. The addition of new constants to the values that can be set in a data type does not result in a change to the current version value.

New fields added to existing data types

Data type                                   New version       New fields
Authentication information record           MQAIR_VERSION_2   OCSPResponderURL (MQCHAR256)
Get-message options                         MQGMO_VERSION_4   Reserved2 (MQLONG), MsgHandle (MQHMSG)
Object descriptor                           MQOD_VERSION_4    ObjectString (MQCHARV), SelectionString (MQCHARV), ResObjectString (MQCHARV), ResolvedType (MQLONG)
Put-message options                         MQPMO_VERSION_3   OriginalMsgHandle (MQHMSG), NewMsgHandle (MQHMSG), Action (MQLONG), PubLevel (MQLONG)
Channel definition                          MQCD_VERSION_9    SharingConversations, PropertyControl, MaxInstances, MaxInstancesPerClient, ClientChannelWeight, ConnectionAffinity
Channel definition                          MQCD_VERSION_10   BatchDataLimit (MQLONG), DefReconnect (MQLONG), UseDLQ (MQLONG)
Exit parameter structure                    MQAXP_VERSION_2   ExitMsgHandle
Channel exit parameter                      MQCXP_VERSION_7   Connection handle, SharingConversations
Channel exit parameter                      MQCXP_VERSION_8   MCAUserSource (MQLONG), pEntryPoints (PMQIEP)
Cluster workload queue-record structure     MQWQR_VERSION_3   DefPutResponse
Cluster workload exit parameter structure   MQWXP_VERSION_4   pEntryPoints (PMQIEP)


Default transmission queue restriction

The information center in previous versions of WebSphere MQ warned about defining the default transmission queue as SYSTEM.CLUSTER.TRANSMIT.QUEUE. In version 7.5, any attempt to set or use a default transmission queue that is defined as SYSTEM.CLUSTER.TRANSMIT.QUEUE results in an error.

In earlier versions of WebSphere MQ no error was reported when defining the default transmission queue as SYSTEM.CLUSTER.TRANSMIT.QUEUE. MQOPEN or MQPUT1 MQI calls that resulted in referencing the default transmission queue did not return an error. Applications might have continued working and failed later on. The reason for the failure was hard to diagnose.

The change ensures that any attempt to set the default transmission queue to SYSTEM.CLUSTER.TRANSMIT.QUEUE, or use a default transmission queue set to SYSTEM.CLUSTER.TRANSMIT.QUEUE, is immediately reported as an error.
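For example, in MQSC the first command below is now rejected, and a sketch of an alternative follows; the queue name GENERAL.XMITQ is hypothetical:

 ALTER QMGR DEFXMITQ(SYSTEM.CLUSTER.TRANSMIT.QUEUE)

 DEFINE QLOCAL('GENERAL.XMITQ') USAGE(XMITQ)
 ALTER QMGR DEFXMITQ('GENERAL.XMITQ')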


Related reference: The queue name supplied is not valid for DEFXMITQ.


dspmqver

New types of information are displayed by dspmqver to support multiple installations. The changes might affect existing administrative scripts you have written to manage WebSphere MQ.

The changes in output from dspmqver that might affect existing command scripts that you have written are twofold:

  1. Version 7.5 has extra -f field options. If you do not specify a -f option, output from all the options is displayed. To restrict the output to the same information that was displayed in earlier releases, set the -f option to a value that was present in the earlier release. Compare the output for dspmqver in Figure 1 and Figure 2 with the output for dspmqver -f 15 in Figure 3 .

    Figure 1. Default dspmqver options in WebSphere MQ version 7.0.1

     dspmqver 
    

     Name:        WebSphere MQ
    Version:     7.0.1.6
    CMVC level:  p701-L110705
    BuildType:   IKAP - (Production)
    

    Figure 2. Default dspmqver options in WebSphere MQ version 7.5

     dspmqver 
    

     Name:        WebSphere MQ
    Version:     7.1.0.0
    Level:       p000-L110624
    BuildType:   IKAP - (Production)
    Platform:    WebSphere MQ for Windows
    Mode:        32-bit
    O/S:         Windows XP, Build 2600: SP3
    InstName:    110705
    InstDesc:    July 5 2011
    InstPath:    C:\Program Files\IBM\WebSphere MQ_110705
    DataPath:    C:\Program Files\IBM\WebSphere MQ
    Primary:     No
    MaxCmdLevel: 710
    
    Note there are a number (1) of other installations, 
    use the '-i' parameter to display them.
    

    Figure 3. dspmqver with option to make WebSphere MQ version 7.5 similar to WebSphere MQ version 7.0.1

     dspmqver -f 15 
    

     Name:        WebSphere MQ
    Version:     7.1.0.0
    Level:       p000-L110624
    BuildType:   IKAP - (Production)
    
  2. The heading of the build level row has changed from CMVC level: to Level:.


Exits and installable services

When migrating to WebSphere MQ version 7.5 for a distributed platform, if you install WebSphere MQ in a non-default location, you must update your exits and installable services. Data conversion exits generated using the crtmqcvx command must be regenerated using the updated command.

When writing new exits and installable services, you do not need to link to any of the following WebSphere MQ libraries:


Fewer WebSphere MQ MQI client log messages

A WebSphere MQ MQI client used to report every failed attempt to connect to a queue manager when processing a connection name list. From version 7.5, a message is written to the queue manager error log only if the failure occurs with the last connection in the list.

Reporting only the last failure reduces the growth of the queue manager error log.


GSKit: Changes from GSKit V7.0 to GSKit V8.0

For distributed platforms, GSKit V8.0 is integrated with WebSphere MQ. In versions of WebSphere MQ prior to version 7.1, you installed GSKit separately. GSKit V8.0 was included as an alternative to GSKit V7.0 in WebSphere MQ version 7.0.1; it is now the only version of GSKit provided with WebSphere MQ. Some functions in GSKit V8.0 are different to the functions in GSKit V7.0.


GSKit: Some FIPS 140-2 compliant channels do not start

Three CipherSpecs are no longer FIPS 140-2 compliant. If a client or queue manager is configured to require FIPS 140-2 compliance, channels that use the following CipherSpecs do not start after migration.

To restart a channel, alter the channel definition to use a FIPS 140-2 compliant CipherSpec. Alternatively, configure the queue manager, or the client in the case of a WebSphere MQ MQI client, not to enforce FIPS 140-2 compliance.
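As a sketch of the two alternatives in MQSC; the channel name is hypothetical, and the CipherSpec shown is only one illustration of a FIPS 140-2 compliant choice, so check compliance at your maintenance level:

 ALTER CHANNEL('TO.PARTNER') CHLTYPE(SDR) SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA)
 ALTER QMGR SSLFIPS(NO)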

Earlier versions of WebSphere MQ enforced an older version of the FIPS 140-2 standard. The following CipherSpecs were considered FIPS 140-2 compliant in earlier versions of WebSphere MQ and are also compliant in version 7.1:

Use these CipherSpecs if you want WebSphere MQ version 7.1 to interoperate in a FIPS 140-2 compliant manner with earlier versions.



GSKit: Certificate Common Name (CN) not mandatory

In GSKit V8.0, the iKeyman command accepts any element of the distinguished name (DN), or a form of the subject alternative name (SAN). It does not mandate that you provide a common name. In GSKit V7.0, if you created a self-signed certificate using the iKeyman command, you had to specify a common name.

The implication is that applications searching for a certificate might not be able to assume that a certificate has a common name. You might need to review how applications search for certificates, and how applications handle errors involving the common name. Alternatively, you might choose to check that all self-signed certificates are given common names.

Some other certificate tools that you might be using also do not require a common name, so the change to GSKit is unlikely to cause you a problem.


GSKit: Commands renamed

The command name gsk7cmd is replaced with runmqckm, gsk7ikm is replaced with strmqikm, and gsk7capicmd is replaced with runmqakm. All the commands start the GSKit V8.0 certificate administration tools, not the GSKit V7.0 tools.

WebSphere MQ Version 7.5 does not use a machine-wide shared installation of GSKit: instead it uses a private GSKit installation in the WebSphere MQ installation directory. Each WebSphere MQ Version 7.5 installation can use a different GSKit version. To display the version number of GSKit embedded in a particular WebSphere MQ installation, run the dspmqver command from that installation as shown in the following table:

Renamed GSKit commands

Platform                  GSKit V7.0 command   GSKit V8.0 command
UNIX and Linux            gsk7cmd              runmqckm
UNIX and Linux            gsk7ikm              strmqikm
Windows, UNIX and Linux   gsk7capicmd          runmqakm
Windows, UNIX and Linux   gsk7ver              dspmqver -p 64 -v

Note: Do not use the gsk8ver command to display the GSKit version number: only the dspmqver command will show the correct GSKit version number for WebSphere MQ Version 7.5.


GSKit: The iKeyman commands to insert a certificate do not check that all required CA certificates are present

The iKeyman command in GSKit V8.0 does not validate a certificate when it is inserted into a key repository. iKeyman in GSKit V7.0 validated a certificate before it inserted the certificate into a certificate store.

The implication is that if you insert a certificate using iKeyman in GSKit V8.0, some of the necessary intermediate and root CA certificates might not be present, or they might have expired; when the certificate is later checked, it might fail.

Missing or expired certificates can cause SSL and TLS connections to fail with error AMQ9633.


GSKit: Certificate stores created by iKeyman and iKeycmd no longer contain CA certificates

The iKeyman and iKeycmd utilities in GSKit V8.0 create a certificate store without adding pre-defined CA certificates to the store. To create a working certificate store, you must now add all the certificates that you require and trust. In GSKit V7.0 iKeyman and iKeycmd created a certificate store that already contained CA certificates.

Existing databases created by GSKit V7.0 are unaffected by this change.


GSKit: Import of a duplicate PKCS#12 certificate

In GSKit V8.0, the iKeyman command does not report an attempt to import a duplicate PKCS#12 certificate as an error. In GSKit V7.0, the iKeyman command reported an error. In neither version is a duplicate certificate imported.

For GSKit V8.0, a duplicate certificate is a certificate with the same label and public key.

The implication is that if some of the issuer information is different, but the name and public key are the same, the changes are not imported. The correct way to update a certificate is to use the -cert -receive option, which replaces an existing certificate.
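For example, a sketch of receiving an updated certificate into a key database with runmqckm; the file names and password are placeholders:

 runmqckm -cert -receive -file updatedcert.arm -db key.kdb -pw passw0rd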

The gskcapicmd command does not allow or silently ignore duplicates on import in this way.


GSKit: Password expiry to key database deprecated

In GSKit V8.0, the password expiry function in iKeyman continues to work the same as in GSKit V7.0, but it might be withdrawn in future versions of GSKit.

Use the file system protection provided with the operating system to protect the key database and password stash file.


GSKit: Signature algorithm moved out of settings file

In GSKit V8.0, the default signature algorithm that is used when creating self-signed certificates or certificate requests, or that is selected in the creation dialogs, is passed as a command-line parameter. In GSKit V7.0, the default signature algorithm was specified in the settings file.

The change has very little effect: it can cause a different default signature algorithm to be selected, but it does not alter a signature algorithm that you select explicitly.


GSKit: Signed certificate validity period not within signer validity

In GSKit V8.0, the iKeyman command does not check whether the validity period of a resulting certificate is within the validity period of the signer certificate. In GSKit V7.0, iKeyman checked that the validity period of the resulting certificate was within the validity period of the signer certificate.

The IETF RFC standards for SSL/TLS allow a certificate whose validity dates extend beyond those of its signer. This change to GSKit brings it into line with those standards. The check is whether the certificate is issued within the validity period of the signer, and not whether it expires within the validity period of the signer.


GSKit: Stricter default file permissions

The default file permissions set by runmqckm and strmqikm in WebSphere MQ version 7.5 on UNIX and Linux are stricter than the permissions that are set by runmqckm, strmqikm, gsk7cmd, and gsk7ikm in earlier releases of WebSphere MQ.

The permissions set by runmqckm and strmqikm in WebSphere MQ version 7.5 permit only the creator to access the UNIX and Linux SSL/TLS key databases. The runmqckm, strmqikm, gsk7cmd, and gsk7ikm tools in earlier releases of WebSphere MQ set world-readable permissions, making the files liable to theft and impersonation attacks.

The permissions set by gsk7capicmd, in earlier releases of WebSphere MQ, and runmqakm in WebSphere MQ version 7.5, permit only the creator to access UNIX and Linux SSL/TLS key databases.

The migration of SSL/TLS key databases to version 7.5 does not alter their access permissions. In many cases, administrators set more restrictive access permissions on these files to overcome the liability to theft and impersonation attacks; these permissions are retained.
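For example, on UNIX and Linux you might restrict an existing key database and its stash file to the owning user; the file names are assumptions:

 chmod 600 key.kdb key.sth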

The default file permissions set on Windows are unchanged. Continue to tighten up the access permissions on SSL/TLS key database files on Windows after creating the files with runmqckm or strmqikm.


JMS and Java changes

WebSphere MQ classes for JMS has been rewritten to integrate it with WebSphere MQ queue manager internal interfaces. To a large extent, its functional behavior remains the same. Some changes affect the way existing JMS and Java applications behave.


JMS: Reason code changes

Some reason codes returned in JMS exceptions have changed. The changes affect MQRC_Q_MGR_NOT_AVAILABLE and MQRC_SSL_INITIALIZATION_ERROR.

In earlier releases of WebSphere MQ, if a JMS application fails to connect, it receives an exception with reason code 2059 (080B) (RC2059): MQRC_Q_MGR_NOT_AVAILABLE. In version 7.5, it can still receive MQRC_Q_MGR_NOT_AVAILABLE, or one of the following more specific reason codes.

Similarly, when trying to connect, a JMS application might have received 2393 (0959) (RC2393): MQRC_SSL_INITIALIZATION_ERROR. In version 7.5, it can still receive MQRC_SSL_INITIALIZATION_ERROR, or a more specific reason code, such as 2400 (0960) (RC2400): MQRC_UNSUPPORTED_CIPHER_SUITE, that identifies the cause of the SSL initialization error.


JMS: Channel exits written in C and C++ called from JMS have changed behavior

Channel exit programs, written in C or C++, now behave identically, whether they are called from WebSphere MQ classes for JMS or from a WebSphere MQ MQI client. New Java channel exit interfaces offer improved function and performance.

You can now write channel exit classes that implement a new set of interfaces in the com.ibm.mq.exits package. Use the new interfaces to define classes to call as channel exits. The old Java channel exit interfaces, which are part of the WebSphere MQ classes for Java, are still supported. The new interfaces have improved function and performance.


JMS: Data conversion

The default behavior from version 7.0.1.5 onwards is for the client JVM to perform data conversion for a JMS client. This behavior is the same as if the client is connected to a version 6.0, or earlier, queue manager. For a JMS client connected to a queue manager at version 7.0.0.0 to 7.0.1.4, the behavior can be different.

Connected to a version 6.0 queue manager, or earlier, data conversion is always performed by the client JVM. In version 7.0.0.0 to version 7.0.1.4, data conversion is performed either by the client JVM or by the queue manager, but the JMS client has no direct control over where conversion is performed. From version 7.0.1.5 onwards, the JMS client controls where data conversion is performed. The default behavior for a client connected to a queue manager at version 7.0.1.5 onwards is the same as when the client is connected to a version 6.0 queue manager, or earlier.

As a significant improvement to the JMS client in WebSphere MQ Version 7.0, the JMS client can request that the queue manager performs data conversion. The change aligns the JMS client with all other WebSphere MQ clients in terms of carrying out data conversion in the queue manager. The queue manager uses operating system code page file sets to carry out data conversion, as it has done for many years.

Between version 7.0.0.0 and 7.0.1.4 inclusive, queue manager data conversion is not optional. Conversion is performed on every JMS message sent to a client that contained JMS data with a defined message format, and which is not encoded in 1208 (UTF-8) or has numeric data not encoded in Native encoding. Both the client and the queue manager must be version 7.0, and the PROVIDERVERSION must be 7 for queue manager data conversion to be performed. If either is not version 7.0, PROVIDERVERSION is set to 6, and no queue manager data conversion is performed.

From version 7.0.1.5 onwards, queue manager data conversion is optional. The default version 7.0.1.5 behavior is changed back to be the same as version 6.0, because queue manager conversion caused migration problems in some cases. The problems that were encountered, and their solutions, are described in Table 1.

Table 1. Queue manager data conversion problems and solutions

Problem: z/OS new line differences (differences in mapping new lines between code pages used by Java and code pages used by WebSphere MQ).

  • 7.0.0.0 to 7.0.1.3: No work-around.
  • 7.0.1.3 with IZ67359: No work-around.
  • 7.0.1.4 with IC72897, or 7.0.1.5 onwards: You have two options:
      1. The default behavior resolves the problem.
      2. Set RECEIVECONVERSION to WMQ_RECEIVE_CONVERSION_CLIENT_MSG.

Problem: Queue manager and application both convert a formatted message, and the application assumes the CCSID of the header.

  • 7.0.0.0 to 7.0.1.3: Alter the application to check the CCSID of the message before performing conversion.
  • 7.0.1.3 with IZ67359: Set the ReceiveCCSID system property to the value the application expects.
  • 7.0.1.4 with IC72897, or 7.0.1.5 onwards: You have two options:
      1. The default behavior resolves the problem.
      2. Set RECEIVECONVERSION to WMQ_RECEIVE_CONVERSION_QMGR and RECEIVECCSID to the value the application expects.

Problem: Queue manager does not recognize the name in the message data format property.

  • 7.0.0.0 to 7.0.1.3, or 7.0.1.3 with IZ67359: Implement a null conversion exit.
  • 7.0.1.4 with IC72897, or 7.0.1.5 onwards: You have two options:
      1. The default behavior resolves the problem.
      2. Set RECEIVECONVERSION to WMQ_RECEIVE_CONVERSION_CLIENT_MSG.

Problem: Queue manager unable to convert to 1208.

  • 7.0.0.0 to 7.0.1.3: No work-around.
  • 7.0.1.3 with IZ67359: Set the ReceiveCCSID system property to a value the queue manager can convert to.
  • 7.0.1.4 with IC72897, or 7.0.1.5 onwards: You have two options:
      1. The default behavior resolves the problem.
      2. Set RECEIVECONVERSION to WMQ_RECEIVE_CONVERSION_QMGR and RECEIVECCSID to a value the queue manager can convert to.

Note:

  1. The default behavior in version 7.0.1.5 (or version 7.0.1.4 with IC72897) has reverted to the behavior in version 6.0. It is possible that you have written a version 7.0 JMS client that takes advantage of queue manager data conversion. For the client to continue to work correctly, you must set the property RECEIVECONVERSION to WMQ_RECEIVE_CONVERSION_QMGR. See JMS message conversion for examples.
  2. The MQ link receiver between a WebSphere MQ channel and the WebSphere Application Server service integration bus does not use the WebSphere MQ classes for JMS as its JMS provider. It does not call a data conversion exit. It is unaffected by these migration considerations.


JMS: Dots in property names

JMS property names must not contain dots. The rules for naming JMS properties in the JMS specification follow the rules for naming Java identifiers. One of the rules is that identifiers must not contain dots. WebSphere MQ version 6.0 documented the rule, but did not enforce it. Releases following version 6.0 enforce the rule.

A JMS program calling version 6.0 of the WebSphere MQ classes for JMS can create JMS property names containing dots. The same program, calling a version of the WebSphere MQ classes for JMS later than version 6.0, receives an exception: a DetailedMessageFormatException is thrown by the set<type>Property methods of JMSMessage (for example, setStringProperty).

The same application calling version 6.0 of the WebSphere MQ classes for JMS does not throw an exception, even if connected to a queue manager running version 7.5.
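
For example, the following sketch shows one way to adapt such a program. The property name app.region.code is a hypothetical example, and DetailedMessageFormatException is assumed to extend the standard javax.jms.MessageFormatException:

 import javax.jms.JMSException;
 import javax.jms.Message;
 import javax.jms.MessageFormatException;

 public class PropertyNameMigration {
     // Sets a property whose original name contains dots, which version 6.0
     // tolerated but later releases reject. "app.region.code" is hypothetical.
     static void setRegion(Message message, String value) throws JMSException {
         String oldName = "app.region.code";
         try {
             message.setStringProperty(oldName, value);
         } catch (MessageFormatException e) {
             // Replace the dots to form a valid Java identifier, then retry.
             message.setStringProperty(oldName.replace('.', '_'), value);
         }
     }
 }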


JMS: Error code and exception message changes

Compared to version 6.0 of the WebSphere MQ classes for JMS, most error codes and exception messages have changed. The reason for these changes is that the WebSphere MQ classes for JMS now have a layered architecture. Exceptions are thrown from different layers in the code.

You must modify your application if it parses or tests exception messages returned by the Throwable.getMessage method, or error codes returned by the JMSException.getErrorCode method.


Examples of changes

For example, an application tries to connect to a queue manager that does not exist. Version 6.0 of WebSphere MQ classes for JMS threw a JMSException exception with the following information.

 MQJMS2005: Failed to create MQQueueManager for 'localhost:QM_test'.
This exception contained a linked MQException exception with the following information.

 MQJE001: Completion Code 2, Reason 2058

WebSphere MQ classes for JMS now throws a JMSException exception with the following information.

 Message : JMSWMQ0018: Failed to connect to queue manager 'QM_test' with connection mode 'Client' and host name 'localhost'.
Class : class com.ibm.msg.client.jms.DetailedJMSException
Error Code : JMSWMQ0018
Explanation : null
User Action : Check the queue manager is started and if running in client mode,
              check there is a listener running. Please see the linked exception
              for more information.
This exception contains a linked MQException exception with the following information.

 Message : JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED')
          reason '2058' ('MQRC_Q_MGR_NAME_ERROR').
Class : class com.ibm.mq.MQException
Completion Code : 2
Reason Code : 2058
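
If your application branches on the cause of a failure, test the error code or the linked MQException reason code rather than parsing the message text, which differs between releases. A minimal sketch, using only the standard JMS API and the com.ibm.mq.MQException class shown above:

 import javax.jms.Connection;
 import javax.jms.ConnectionFactory;
 import javax.jms.JMSException;

 import com.ibm.mq.MQException;

 public class ConnectExample {
     static Connection connect(ConnectionFactory cf) throws JMSException {
         try {
             return cf.createConnection();
         } catch (JMSException je) {
             // Do not parse je.getMessage(); the text changed after version 6.0.
             // The linked exception carries the WebSphere MQ completion and reason codes.
             Exception linked = je.getLinkedException();
             if (linked instanceof MQException) {
                 int reason = ((MQException) linked).reasonCode;   // for example 2058
                 System.err.println("Connect failed with reason code " + reason);
             }
             throw je;
         }
     }
 }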


JMS: Exception listeners change

JMS exception listeners behave differently than they did in version 6.0. Applications might receive more or fewer exceptions than they did in version 6.0.

In version 6.0 of WebSphere MQ, an exception listener is called to inform the application of any error condition that occurs asynchronously to the application execution. During the processing of a message for an asynchronous consumer, the application might have no other means of discovering the exception. The errors that result in the exception listener being called include broken connections and attempts to process an unreadable message.

APAR IY81774 introduced a system property activateExceptionListener. If this property is set, all exceptions resulting from a broken connection are sent to the exception listener, regardless of the context in which they occur. The exception listener is passed connection broken exceptions that are discovered during a synchronous JMS API call, and exceptions that occur asynchronously. If a connection broken event occurred during a call to receive, the exception is passed to the exception listener, and is thrown in the receive method.

You can now set the ASYNC_EXCEPTIONS property of the factory object. Set to its default value of ASYNC_EXCEPTIONS_ALL, the exception listener is called for all broken connection exceptions. The exception listener is also called for all exceptions that occur outside the scope of a synchronous JMS API call. This setting provides the same behavior as system property activateExceptionListener that was introduced in APAR IY81774. The property activateExceptionListener is therefore deprecated.

If you set the ASYNC_EXCEPTIONS property to ASYNC_EXCEPTIONS_CONNECTIONBROKEN, only exceptions indicating a broken connection are sent to the exception listener. These exceptions include connection broken exceptions occurring both synchronously and asynchronously. They do not include any other asynchronous errors such as for unreadable messages. In this mode, if the exception listener is triggered, the connection can be considered to have failed. It is no longer possible to use the connection to send or receive messages.

Applications must take appropriate action, such as attempting to remake the connection, when exception listener calls are made.
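
A minimal sketch of such a listener, using only the standard JMS API; how the ASYNC_EXCEPTIONS factory property is set (with the constants named above) depends on how your connection factories are administered and is not shown:

 import javax.jms.Connection;
 import javax.jms.ExceptionListener;
 import javax.jms.JMSException;

 public class ReconnectingListener implements ExceptionListener {

     public static void register(Connection connection) throws JMSException {
         // Register before the connection is started.
         connection.setExceptionListener(new ReconnectingListener());
     }

     @Override
     public void onException(JMSException exception) {
         // With ASYNC_EXCEPTIONS_CONNECTIONBROKEN, reaching this method means the
         // connection has failed and can no longer send or receive messages.
         System.err.println("Connection broken: " + exception.getErrorCode());
         // Take appropriate action here, for example close resources and remake the connection.
     }
 }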


JMS: Integration of WebSphere MQ classes for JMS with WebSphere Application Server

You can configure WebSphere MQ as a JMS provider for WebSphere Application Server. The way the WebSphere MQ messaging provider is configured has changed between WebSphere Application Server V6.0/6.1 and V7.0. This topic describes the changes, and different configurations you can create to run different versions of WebSphere Application Server and WebSphere MQ.

The way WebSphere MQ is integrated with WebSphere Application Server has changed between WebSphere Application Server V6.0/6.1 and V7.0.

WebSphere Application Server V7.0

WebSphere MQ classes for JMS are built in to WebSphere Application Server V7.0 as a JCA 1.5 resource adapter, wmq.jmsra.rar. To use the WebSphere MQ resource adapter as the messaging provider in WebSphere Application Server, configure activation specifications and connection factories to use the WebSphere MQ messaging provider.

wmq.jmsra.rar is also shipped with WebSphere MQ. It is identical to the wmq.jmsra.rar file shipped with WebSphere Application Server. wmq.jmsra.rar is shipped with WebSphere MQ to install as a JCA 1.5 resource adapter on application servers from different suppliers.

Note: The WebSphere Application Server instructions for installing JCA resource adapters from other suppliers do not apply to the WebSphere MQ resource adapter. If you mistakenly install wmq.jmsra.rar on WebSphere Application Server as a JCA 1.5 resource adapter, the result is uncertainty about which version of the adapter is used.

If you do not install WebSphere MQ on the same server as WebSphere Application Server, you can use the WebSphere MQ classes for JMS in client mode only. If you install WebSphere MQ on the same server as WebSphere Application Server, you have the choice of using the WebSphere MQ classes for JMS in either client mode or bindings mode. Use the WebSphere MQ messaging provider general property Native Library Path, to locate the native WebSphere MQ Java libraries installed on the same server as the application server.

In bindings mode, the WebSphere MQ queue manager must be at least at fix pack 6.0.2.5 (6.0.2.7 on z/OS). In client mode, the WebSphere MQ classes for JMS can connect to any queue manager at V6.0 or higher.

For full WebSphere MQ version 7.5 functionality, use WebSphere Application Server V7.0 with WebSphere MQ version 7.5. If the application server connects to a V6.0 queue manager, either locally or remotely, the WebSphere MQ classes for JMS run in migration mode.

The WebSphere MQ JCA resource adapter, which is shipped with WebSphere Application Server, is maintained by applying WebSphere Application Server fix packs; see Which version of WebSphere MQ is shipped with WebSphere Application Server . Maintain the WebSphere MQ server libraries in the normal way, by applying WebSphere MQ fix packs, or PTFs on z/OS.

If you migrate profiles from WebSphere Application Server 7.0.0.0 to later maintenance levels, or from later maintenance levels to the 7.0.0.0 level, you must manually adjust the resource adapter configuration; see Maintaining the WebSphere MQ JCA resource adapter .

WebSphere Application Server V6.0/V6.1

The WebSphere MQ classes for JMS, com.ibm.mq.jar, com.ibm.mqjms.jar, and dhbcore.jar, are installed by WebSphere Application Server into $WAS_INSTALL_ROOT\lib\WMQ\java\lib. Using these classes, JMS applications connect to WebSphere MQ in client mode.

If you do not install WebSphere MQ on the same server as WebSphere Application Server, you can use the WebSphere MQ classes for JMS in client mode only. If you install WebSphere MQ on the same server as WebSphere Application Server, you have the choice of using the WebSphere MQ classes for JMS in either client mode or bindings mode.

To use the WebSphere MQ classes for JMS installed by WebSphere MQ, set the WebSphere Application Server environment variable $MQ_INSTALL_ROOT to point to the WebSphere MQ installation. You can then use the libraries installed by WebSphere MQ to connect to WebSphere MQ in either client or bindings mode.

You can use WebSphere MQ version 7.5 with WebSphere Application Server V6.0 or V6.1. The default configuration, which uses the WebSphere MQ classes for JMS installed by WebSphere Application Server, is restricted by the client to WebSphere MQ version 6.0 functionality. If you install either the WebSphere MQ version 7.5 client or server on the application server, you can access WebSphere MQ version 7.5 functionality. With the combination of WebSphere MQ version 7.5 and WebSphere Application Server V6.0/V6.1, you can use the integrated publish/subscribe API, shared channels, queue manager selectors, and other V7.0 enhancements; see Mixed configurations .

wmq.jmsra.rar is also shipped with WebSphere MQ. It is identical to the wmq.jmsra.rar file shipped with WebSphere Application Server. wmq.jmsra.rar is shipped with WebSphere MQ to install as a JCA 1.5 resource adapter on application servers from different suppliers. You must not configure a JCA resource adapter on WebSphere Application Server V6.0 or V6.1 using the WebSphere MQ resource adapter, wmq.jmsra.rar. If you do so, and then try to start a JMS application, it results in Java errors.

Maintain the WebSphere MQ classes for JMS shipped with WebSphere Application Server by applying WebSphere Application Server fix packs; see Which version of WebSphere MQ is shipped with WebSphere Application Server . If you are using the libraries installed by a WebSphere MQ installation by setting $MQ_INSTALL_ROOT, maintain the WebSphere MQ libraries in the normal way, by applying WebSphere MQ fix packs, or PTFs on z/OS.


Mixed configurations

When you upgrade WebSphere Application Server from V6.0/6.1 to V7.0 or WebSphere MQ from version 6.0 to version 7.5, you can run with different versions of WebSphere Application Server and WebSphere MQ, as well as upgrading both. You must take note of any functional and performance implications of running with a mixed configuration. Mixed configurations are supported to cater for complex migration scenarios.

WebSphere Application Server V7.0 and WebSphere MQ version 6.0

WebSphere Application Server V7.0 always uses the version 7.0 WebSphere MQ classes for JMS installed with the application server as the WebSphere MQ messaging provider. The WebSphere MQ messaging provider connects to a local or remote queue manager running WebSphere MQ version 6.0. The version 7.0 capabilities in the WebSphere MQ resource adapter are not available when connected to the version 6.0 queue manager. As a result, the WebSphere MQ classes for JMS run in migration mode, which has a performance and function cost. Migration mode is set automatically. You can configure it manually by setting the WebSphere MQ classes for JMS property PROVIDERVERSION to 6.

You must apply a fix to the WebSphere MQ resource adapter to run WebSphere MQ classes for JMS in this configuration. The minimum WebSphere Application Server fix pack required is 7.0.0.9. The WebSphere MQ resource adapter is provided by WebSphere Application Server, so you must obtain the fix pack from WebSphere Application Server rather than WebSphere MQ. If you are using a different application server, you must apply WebSphere MQ fix pack 7.0.1.1 to update wmq.jmsra.rar to the same level.

If you had installed WebSphere MQ version 6.0 on the same server as WebSphere Application Server V6.0, you can continue to use both client and bindings mode.

With WebSphere Application Server V6.0/V6.1 and WebSphere MQ version 6.0 installed, you would have set the WebSphere Application Server environment variable, $MQ_INSTALL_ROOT, to the WebSphere MQ version 6.0 installation directory. WebSphere Application Server V6.0/V6.1 loads the WebSphere MQ classes for JMS from $MQ_INSTALL_ROOT to resolve references to JMS classes. WebSphere Application Server V7.0 does not use $MQ_INSTALL_ROOT. WebSphere Application Server V7.0 loads the WebSphere MQ classes for JMS from the WebSphere MQ resource adapter shipped with WebSphere Application Server V7.0.

Some behavior of the WebSphere MQ classes for JMS has changed between WebSphere MQ version 6.0 and WebSphere MQ version 7.0. Even if the JMS application is connected to a version 6.0 queue manager, because WebSphere Application Server V7.0 uses the version 7.0 WebSphere MQ classes for JMS, there are some differences in JMS client behavior.

  • The MQException class is packaged in com.ibm.mq.jmqi.jar, not com.ibm.mq.jar.

    • Applications compiled with Java 1.5, or greater, automatically resolve class references to com.ibm.mq.jmqi.jar.

  • JMS exceptions and errors are handled differently; see JMS: Error code and exception message changes .

WebSphere Application Server V6.0/6.1 and WebSphere MQ version 7.5

To use WebSphere Application Server V6.0/6.1 with the version 7.5 WebSphere MQ classes for JMS, you must install either the WebSphere MQ version 7.5 client, or version 7.5 server, on the same server as WebSphere Application Server. Set $MQ_INSTALL_ROOT to the WebSphere MQ installation directory path.

You might encounter JMS application migration problems, as a result of changes to the WebSphere MQ classes for JMS. If your solution requires connection to a version 7.5 queue manager, you might be able to bypass some problems by continuing to use the version 6.0 WebSphere MQ classes for JMS. Alternatively, install the version 6.0 WebSphere MQ JMS client on the same server as the application server and set $MQ_INSTALL_ROOT to the WebSphere MQ installation directory. Connect the JMS application to the version 7.5 queue manager in client mode.

Note: You cannot connect the version 6.0 WebSphere MQ classes for JMS, installed with WebSphere Application Server, to WebSphere MQ version 7.5 installed on the same server, in bindings mode.

The WebSphere Application Server V6.0/V6.1 administration console only configures WebSphere MQ version 6.0 capabilities. To configure new WebSphere MQ version 7.5 properties, you must use custom properties.


Miscellaneous changes between version 6.0 to version 7.5 affecting JMS

This section describes changes that affect JMS applications that use the WebSphere MQ messaging provider managed by WebSphere Application Server. Changes to the WebSphere MQ implementation of JMS in version 7.5 that affect the migration of JMS applications from WebSphere MQ version 6.0 to WebSphere MQ version 7.5 are described in other topics.

The Target client field is no longer displayed in the WebSphere Application Server Administration Console.

The Target client field on a JMS destination in V6.0/V6.1 specified whether the message recipient expected the body of the message to be a JMS message or a WebSphere MQ message. Examples of JMS messages are JMS bytes, JMS text, and JMS object.

In WebSphere Application Server V7.0, the same function is controlled by the check box Append RFH version 2 headers to messages sent to this destination. See "Target client" field is not displayed in the WebSphere Application Server Administration Console V7.

If you are sending messages to a WebSphere MQ application that does not expect JMS messages, clear the check box. If you select the check box, messages are constructed with an RFH2 header that specifies the JMS message type.

A number of JMS issues are resolved, including use of multi-instance queue managers.

A number of JMS issues are resolved in later WebSphere Application Server fix packs.

Problems using multi-instance queue managers are fixed in WebSphere Application Server fix pack 7.0.0.13 (WebSphere MQ fix pack 7.0.1.3). Set the new custom properties, XMSC_WMQ_CONNECTION_NAME_LIST for connection factories and connectionNameList for activation specifications, to a comma-separated list of host(port) names to connect to a multi-instance queue manager.
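
A sketch of a client-managed connection factory configured this way. The setConnectionNameList and setTransportType setters are assumed to correspond to the connection name list and transport properties named above, and the host names, port, and queue manager name are placeholders; verify the setter names against your level of the WebSphere MQ classes for JMS:

 import javax.jms.JMSException;

 import com.ibm.mq.jms.MQConnectionFactory;
 import com.ibm.msg.client.wmq.WMQConstants;

 public class MultiInstanceFactory {
     static MQConnectionFactory create() throws JMSException {
         MQConnectionFactory cf = new MQConnectionFactory();
         cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);   // client connection
         // Comma-separated host(port) entries for the active and standby instances.
         cf.setConnectionNameList("hostA(1414),hostB(1414)");
         cf.setQueueManager("QM1");
         return cf;
     }
 }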

Automatic client reconnection is not supported by WebSphere Application Server.

Asynchronous consume, browse with mark, and Application Server Facilities

Message Driven Beans (MDBs) using Application Server Facilities (ASF) use asynchronous consume rather than polling to read messages asynchronously. Asynchronous consume is a WebSphere MQ version 7.0 enhancement.

The destination can now be browsed with mark on non-z/OS platforms, eliminating problems of message contention, and of lower priority messages being marooned by constantly arriving higher priority messages.

As a result, ASF is more efficient on non-z/OS platforms than it was with V6.0. (WebSphere MQ for z/OS efficiently received messages asynchronously in V6.0.)

Activation specifications

Use activation specifications with MDBs, rather than message listener ports, to consume messages in WebSphere Application Server V7.0. Activation specifications have advantages over message listener ports:

  • Activation specifications are part of the JCA 1.5 specification standard.
  • Using activation specifications eliminates the need to configure a connection factory for inbound messages.
  • Activation specifications can be defined at node or cell scope, not just server scope. If you have multiple servers, you require only one activation specification, but multiple message listener definitions.

Message listener ports are still supported. The WebSphere Application Server administration console has a wizard, and a command, to migrate a message listener to an activation specification.


Terminology

WebSphere Application Server uses the term WebSphere MQ messaging provider. The WebSphere MQ messaging provider is the same code as the WebSphere MQ classes for JMS. WebSphere MQ also provides the WebSphere MQ classes for JMS as a JCA 1.5 resource adapter for installing on other application servers. All these terms refer to the same code.

Related information:

Which version of WebSphere MQ is shipped with WebSphere Application Server?

IC64098: APPLICATION DOES NOT AUTOMATICALLY RECONNECT TO THE QUEUE MANAGER IF CONNECTION IS LOST WHEN USING THE MQ RESOURCE ADAPTER

Fix Central

PK87026: CUMULATIVE MAINTENANCE FIXPACK 6.0.2.7 FOR THE JMS FEATURE OF WEBSPHERE MQ FOR Z/OS VERSION 6.

Maintaining the WebSphere MQ resource adapter

When to use ASF and non-ASF modes to process messages in WebSphere Application Server for JMS Application Server Facilities

Use the WebSphere MQ messaging provider in WebSphere Application Server V7: Part 1: Introducing the new WebSphere MQ messaging provider


JMS: Messaging provider

The WebSphere MQ classes for JMS now call the integrated publish/subscribe interface by default. In version 6.0 the classes used the queued command message interface.

With the new PROVIDERVERSION JMS property, an application can select whether the WebSphere MQ classes for JMS call the integrated or the queued command message publish/subscribe interface; see When to use PROVIDERVERSION . By default the integrated publish/subscribe interface is called, but depending on the context, the WebSphere MQ classes for JMS might call the queued interface; see the rules in Rules for selecting the WebSphere MQ messaging provider mode .
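
For applications that construct their own connection factory rather than looking one up in JNDI, the property can be set programmatically. A sketch, assuming the setProviderVersion setter corresponds to the PROVIDERVERSION property (verify the setter name at your level of the classes for JMS):

 import javax.jms.JMSException;

 import com.ibm.mq.jms.MQConnectionFactory;

 public class ProviderVersionExample {
     static void forceQueuedInterface(MQConnectionFactory cf) throws JMSException {
         // "6" selects the queued (version 6.0 style) publish/subscribe interface;
         // "7" selects the integrated interface; leave it unset for the default rules.
         cf.setProviderVersion("6");
     }
 }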


JMS: Migration of subscriptions to named streams

In version 6.0 publish/subscribe, you can create subscriptions to topics in named streams, as well as to topics in the default, or unnamed, stream. Version 7.5 publish/subscribe does not have named streams. It emulates streams by modifying the topic string of a subscription that is associated with a stream. The result is that, for most applications, the migration of subscription names from version 6.0 to version 7.5 is automatic. In some cases, described in the topics that follow, you might need to make changes to the results of the automatic migration.


The relationship between queued publish/subscribe streams and integrated publish/subscribe topics

In queued publish/subscribe, publications can be divided into different streams. Publications on the same topic, published to different streams, are separate from one another. A subscriber on one stream cannot subscribe to publications on another stream. Each stream is implemented as a separate stream queue. There is a common, unnamed, stream queue called SYSTEM.BROKER.DEFAULT.STREAM.

In integrated publish/subscribe, publications cannot be separated into different streams. Two topics, to be distinct, must have two different topic strings.

To map between the two topic models, the stream name is prefixed to the topic string used by integrated publish/subscribe. Suppose a version 6.0 publisher creates a publication on stream UKSPORTS with the topic string football. Generally, to a version 7.5 subscriber, the publication has the topic string UKSPORTS/football. For a JMS client program, the same BrokerPubQ connection factory property that named the stream UKSPORTS in version 6.0, provides the root topic name UKSPORTS in version 7.5. No client program change is required.

In practice, mapping between streams and topics might not be as simple as the stream name being the same as the root topic name. To cater for the possible collision of stream names with existing root topic strings, the stream name corresponds to a topic object name. The stream name maps to the topic string defined for the topic object. In the preceding example, the version 7.5 queue manager must have a topic object called UKSPORTS with a topic string UKSPORTS. If the topic string were GBSPORTS, the version 6.0 publication to football, on the UKSPORTS stream, would be mapped to the version 7.5 topic GBSPORTS/football.

During the migration of a version 6.0 queue manager to version 7.5, if you have used version 6.0 publish/subscribe, you must run the command strmqbrk. strmqbrk migrates the version 6.0 publish/subscribe resources that are defined on the queue manager. It automatically configures stream to topic mapping on version 7.5 for any streams you have defined on version 6.0. For example, if the publish/subscribe broker had a stream, UKSPORTS, strmqbrk creates a topic object UKSPORTS with a topic string UKSPORTS. It also modifies the namelist, SYSTEM.QPUBSUB.QUEUE.NAMELIST; see Figure 1.


How JMS clients refer to publications to named streams in version 7.5

In version 6.0, a JMS client set BrokerPubQ to the stream name to use a named stream. In version 7.5, BrokerPubQ is interpreted by the queue manager as the name of a WebSphere MQ administrative topic object.

The queue manager uses the topic object name defined by BrokerPubQ to set the topic name property of a JMS topic. The queue manager finds the administrative topic object named in BrokerPubQ and extracts its topic string. It prefixes the topic name passed to the JMS topic with the topic string.

For example, suppose you migrated the publish/subscribe stream UKSPORTS from version 6.0, and subscribed to a JMS topic, with the JMS topic string football. In version 7.5, the queue manager adds UKSPORTS to the topic name passed to the JMS topic. In version 7.5, the JMS application still sets the topic name to football, but the queue manager sets the JMS topic name to UKSPORTS/football.

The JMS client requires no code change. If the automatic migration performed by strmqbrk was successful, and suitable for your installation, then no manual administrative changes are required either.

You might make further administrative changes:

You can set the connection factory property BrokerPubQ in version 7.5 to restrict a JMS client to a specific topic space. The topic space is restricted to the topic tree rooted in the topic name defined in the topic object named in BrokerPubQ. The topic space can be set administratively, and changed, without changing the JMS client.
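
Where a connection factory is created in application code rather than administered in JNDI, the same property can be set programmatically. A sketch, assuming the setBrokerPubQueue setter corresponds to the BrokerPubQ property; UKSPORTS is the topic object from the example above:

 import javax.jms.JMSException;

 import com.ibm.mq.jms.MQTopicConnectionFactory;

 public class StreamScopedFactory {
     static MQTopicConnectionFactory create() throws JMSException {
         MQTopicConnectionFactory cf = new MQTopicConnectionFactory();
         // Names the administrative topic object whose topic string becomes the
         // root of the topic space for this client.
         cf.setBrokerPubQueue("UKSPORTS");
         return cf;
     }
 }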

The migration command strmqbrk adds existing streams to a version 7.5 queue manager automatically. You can create a stream to use with version 7.5 queued publish/subscribe, after you have migrated the publish/subscribe resources. Follow the example in Figure 1.

Figure 1. Adding a stream to a version 7.5 queue manager

 define qlocal('UKSPORT')
     1 : define qlocal('UKSPORT')
AMQ8006: WebSphere MQ queue created.
define topic('UKSPORT') topicstr('UKSPORT') wildcard(BLOCK)
     2 : define topic('UKSPORT') topicstr('UKSPORT') wildcard(BLOCK)
AMQ8690: WebSphere MQ topic created.
alter namelist(SYSTEM.QPUBSUB.QUEUE.NAMELIST) 
      NAMES('UKSPORT', 'SYSTEM.BROKER.DEFAULT.STREAM', 'SYSTEM.BROKER.ADMIN.STREAM')
     3 : alter namelist(SYSTEM.QPUBSUB.QUEUE.NAMELIST) 
         NAMES('UKSPORT', 'SYSTEM.BROKER.DEFAULT.STREAM', 'SYSTEM.BROKER.ADMIN.STREAM')
AMQ8551: WebSphere MQ namelist changed.


JMS: Receiving MQRFH2 headers in messages sent from JMS applications

In version 6.0, if a JMS message is received by a non-JMS application, the JMS properties are returned in an MQRFH2 header. In later versions of WebSphere MQ, the JMS properties might be returned either as message properties, or as an MQRFH2 header.

To preserve the WebSphere MQ version 6.0 behavior of returning JMS message properties in an MQRFH2 header, you have two choices.

  1. Leave the MQGMO property option at its default value, MQGMO_PROPERTIES_AS_Q_DEF, and leave the queue PROPCTL attribute set to its default value, COMPAT.
  2. Set the MQGMO property option to MQGMO_PROPERTIES_COMPATIBILITY, as shown in the sketch after this list.
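
The second option, for an application that uses the WebSphere MQ classes for Java, might look like the following sketch; the CMQC constant is assumed to match the MQGMO option named above:

 import com.ibm.mq.MQException;
 import com.ibm.mq.MQGetMessageOptions;
 import com.ibm.mq.MQMessage;
 import com.ibm.mq.MQQueue;
 import com.ibm.mq.constants.CMQC;

 public class CompatibilityGet {
     static MQMessage getWithRfh2(MQQueue queue) throws MQException {
         MQGetMessageOptions gmo = new MQGetMessageOptions();
         // Ask for JMS properties to be returned in an MQRFH2 header, as in
         // version 6.0, regardless of the queue PROPCTL setting.
         gmo.options = CMQC.MQGMO_PROPERTIES_COMPATIBILITY | CMQC.MQGMO_NO_WAIT;
         MQMessage message = new MQMessage();
         queue.get(message, gmo);
         return message;
     }
 }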


JMS: Tracing and error reporting

JMS tracing and error reporting have been extended. For compatibility with version 6.0, the Java property MQJMS_TRACE_LEVEL is retained, but the trace output differs.

WebSphere MQ classes for JMS now contains a class that an application can use to control tracing. An application can start and stop tracing, specify the required level of detail in a trace, and customize trace in various ways. For example, you can now configure trace by using properties that you specify in a client configuration file.

In version 6.0, the Java property MQJMS_TRACE_LEVEL turned on JMS trace. It has three values:

on

Traces WebSphere MQ classes for JMS calls only.

base

Traces both WebSphere MQ classes for JMS calls and the underlying WebSphere MQ classes for Java calls.

off

Disables tracing.
Setting MQJMS_TRACE_LEVEL to on or base produces the same results as setting the com.ibm.msg.client.commonservices.trace.status property to on.

Setting the property, MQJMS_TRACE_DIR to somepath/tracedir is equivalent to setting the com.ibm.msg.client.commonservices.trace.outputName property to somepath/tracedir/mqjms_%PID%.trc.
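
For example, the same trace settings can be made as Java system properties, either on the java command line with -D, or programmatically before any WebSphere MQ classes for JMS are used; the directory and file names are placeholders:

 public class TraceSetup {
     public static void main(String[] args) {
         // Equivalent to MQJMS_TRACE_LEVEL=on and MQJMS_TRACE_DIR=somepath/tracedir in version 6.0.
         System.setProperty("com.ibm.msg.client.commonservices.trace.status", "on");
         System.setProperty("com.ibm.msg.client.commonservices.trace.outputName",
                 "somepath/tracedir/mqjms_%PID%.trc");
         // ... create the JMS connection factory and connection as usual ...
     }
 }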


JMS: ResourceAdapter object configuration

When WebSphere Application Server connects to WebSphere MQ, it creates Message Driven Beans (MDBs) that use JMS connections. These MDBs can no longer share one JMS connection. The configuration of the ResourceAdapter object is migrated so that there is one JMS connection for each MDB.


Changed ResourceAdapter properties

connectionConcurrency

The maximum number of MDBs to share a JMS connection. Sharing connections is not possible and this property always has the value 1. Its previous default value was 5.

maxConnections

This property is the number of JMS connections that the resource adapter can manage. In version 7.5, it also determines the number of MDBs that can connect because each MDB requires one JMS connection. The default value of maxConnections is now 50. Its previous default value was 10.

If connectionConcurrency is set to a value greater than 1, the maximum number of connections supported by the resource adapter is scaled by the value of connectionConcurrency. For example, if maxConnections is set to 2 and connectionConcurrency is set to 4, the maximum number of connections supported by the resource adapter is 8. As a result, connectionConcurrency is set to 1 and maxConnections is set to 8.

If connectionConcurrency is set to a value greater than 1, it is adjusted automatically. To avoid automatic adjustment, set connectionConcurrency to 1. You can then set maxConnections to the value you want.

The scaling mechanism ensures that sufficient connections are available for existing deployments, whether or not you changed these properties in your deployment, configuration, or programs.

If the adjusted maxConnections value exceeds the MAXINST or MAXINSTC attributes of any used channel, previously working deployments might fail.

The default value of both channel attributes equates to unlimited. If you changed them from the default value, you must ensure that the new maxConnections value does not exceed MAXINST or MAXINSTC.


JMS: JMS_IBM_Character_Set

Prior to WebSphere MQ V7.5, applications using WebSphere MQ messaging provider migration mode could set the JMS_IBM_Character_Set property of a message to a numerical Coded Character Set Identifier.

When the message was sent, the Coded Character Set Identifier stored in the JMS_IBM_Character_Set property was mapped to the MQMD field CodedCharacterSetID.

When using the WebSphere MQ V7.5 classes for JMS, a JMSException containing the message:

 MQJMS1006: invalid value for 'JMS_IBM_Character_Set': '<number>'
is thrown if an application tries to send a message that has the JMS_IBM_Character_Set property set to a numerical Coded Character Set Identifier.

The JMS_IBM_Character_Set property must be set to the Java character set string that maps to the Coded Character Set Identifier that the application wants to use.
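
For example, an application that previously set the property to the CCSID 1208 must now set the Java character set name that 1208 maps to. A minimal sketch using the standard JMS API:

 import javax.jms.JMSException;
 import javax.jms.Message;

 public class CharacterSetExample {
     static void prepare(Message message) throws JMSException {
         // Before version 7.5, migration-mode applications could set a numeric CCSID:
         //     message.setStringProperty("JMS_IBM_Character_Set", "1208");
         // The value must now be the equivalent Java character set name.
         message.setStringProperty("JMS_IBM_Character_Set", "UTF-8");
     }
 }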


MQI reason code changes

The reason codes that are returned for a few specific errors have changed. If an application checks any of these reason codes to take a specific action, you must check whether the application must be modified.


MQCONN and MQCONNX

Some more specific reason codes for failed MQCONN and MQCONNX calls are now returned by WebSphere MQ.

If an MQCONN or MQCONNX call failed to connect to a queue manager in WebSphere MQ version 6.0, it returned reason code 2059 (080B) (RC2059): MQRC_Q_MGR_NOT_AVAILABLE. The call can still return MQRC_Q_MGR_NOT_AVAILABLE, or a more specific reason code that identifies the cause of the failure.

If an application takes specific actions based on the reason code, change the application to take account of the additional reason codes.

An application that connects to multiple queue managers in the same process and uses MQCNO_FASTPATH_BINDING might fail with an error and reason code MQRC_FASTPATH_NOT_AVAILABLE; see Connect to multiple queue managers and use MQCNO_FASTPATH_BINDING .
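
A sketch of such a check in the WebSphere MQ classes for Java; reason codes the application does not specifically recognize are treated like MQRC_Q_MGR_NOT_AVAILABLE:

 import com.ibm.mq.MQException;
 import com.ibm.mq.MQQueueManager;
 import com.ibm.mq.constants.CMQC;

 public class ConnectRetry {
     static MQQueueManager connect(String qmName) throws MQException {
         try {
             return new MQQueueManager(qmName);
         } catch (MQException e) {
             if (e.reasonCode != CMQC.MQRC_Q_MGR_NOT_AVAILABLE) {
                 // Version 7.5 can return a more specific reason code; log it, then
                 // handle the failure the same way as MQRC_Q_MGR_NOT_AVAILABLE.
                 System.err.println("Connection failed with reason code " + e.reasonCode);
             }
             throw e;
         }
     }
 }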


MQOPEN and MQPUT1

The information center in previous versions of WebSphere MQ warned about defining the default transmission queue as SYSTEM.CLUSTER.TRANSMIT.QUEUE. In version 7.5, an attempt to open the default transmission queue, defined as SYSTEM.CLUSTER.TRANSMIT.QUEUE, results in the error MQRC_DEF_XMIT_Q_USAGE_ERROR.

In version 7.5, an attempt to alter the queue manager attribute DEFXMITQ to SYSTEM.CLUSTER.TRANSMIT.QUEUE results in an error. The PCF reason code is 3269 (0CC5) (RC3269): MQRCCF_DEF_XMIT_Q_CLUS_ERROR .


MQSUB, MQPUT, MQPUT1

As a result of changes to message selector processing, applications that check explicitly for reason code 2459 (099B) (RC2459): MQRC_SELECTOR_SYNTAX_ERROR must check for, and handle, the reason code 2551 (09F7) (RC2551): MQRC_SELECTION_NOT_AVAILABLE .


MQPUT1

If a WebSphere MQ MQI client application issues MQPUT1 in sync point, and later commits the message, some failures are now returned in the MQCMIT call. In version 6.0 the failures are returned by MQPUT1. See the scenario in Example call sequence that demonstrates the change in behavior in MQI client: MQPUT1 sync point behavior change .


MQRFH2 migration, message properties, and property folders

The addition of message properties to the MQI in WebSphere MQ version 7.0 can result in some applications that processed MQRFH2 folders satisfactorily in version 6.0 getting different results in version 7.5. If you leave the new queue and channel PROPCTL attribute at its default setting of COMPAT, most applications that process MQRFH2 folders are unaffected.

Message properties are transferred either in the message descriptor, MQMD, or if there is an MQRFH2 header, in NameValueData. Some message properties are only transferred in NameValueData.

If a message property is transferred in NameValueData, it is transferred in a special type of folder, called a property folder. The properties can be accessed using the MQINQMP and MQSETMP MQI calls.

A folder either contains name/value pairs, in which case it is known as an ordinary folder, or it contains message properties, in which case it is a property folder. The syntax of NameValueData makes clear which folders are ordinary folders and which are property folders; see NameValueData (MQCHARn) . In short, a property folder has two forms: it is either a folder with the attribute contents='properties', or it is one of a special set of folders that are designated property folders.

Message properties are mapped to message property folders; see Properties specified as MQRFH2 elements . The NameValueData syntax includes all the restrictions necessary to map between property names and property folders.

Properties are intended to be accessed by applications using programming interfaces, and not by direct interaction with the physical contents of a message. From version 7.0, applications can create user properties and property folders using the MQSETMP and MQINQMP functions; see Message properties . It is good programming practice not to read or write MQRFH2 headers directly, but to use MQSETMP and MQINQMP.
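
The WebSphere MQ classes for Java expose the same function through property methods on MQMessage, added in version 7.0. A minimal sketch, assuming those methods at your level of the classes:

 import java.io.IOException;

 import com.ibm.mq.MQException;
 import com.ibm.mq.MQMessage;

 public class UserPropertyExample {
     static MQMessage buildMessage() throws MQException, IOException {
         MQMessage message = new MQMessage();
         // Create a user property through the API rather than writing a <usr>
         // folder into an MQRFH2 header by hand.
         message.setStringProperty("orderId", "12345");
         message.writeString("example payload");
         return message;
     }
 }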

WebSphere MQ also creates special property folders, such as <jms></jms> or <mqps></mqps>. It creates special property folders when an application uses a particular programming interface such as the JMS client or the integrated publish/subscribe interface. Most of these special folders also existed in version 6.0.

MQI applications written for version 6.0 have no property functions to create message properties directly. However, by placing name/value pairs in a special property folder, such as <usr></usr>, an application can create message properties. JMS client applications can also create message properties by setting JMS message properties.

The distinction between property folders and ordinary folders is important. In processing message properties, WebSphere MQ modifies the contents of MQRFH2 folders. The modifications can affect both property and ordinary folders. Experience with using MQRFH2 headers has led some programmers to assume wrongly that MQRFH2 contents are not touched by WebSphere MQ. If an MQRFH2 contains properties, then in modifying the contents of property folders, WebSphere MQ might alter the position or order of ordinary folders.

For example, in version 6.0, some applications relied on the relative position or ordering of name/value pairs within a NameValueData buffer. Those applications made simplifying assumptions about the physical layout of NameValueData, so their code to access name/value pairs was faster, but less robust. Other applications treated the folder syntax as XML, whereas the syntax is more restrictive. In practice, additional XML data has largely been preserved. As long as WebSphere MQ did not touch the folder, additional XML syntax that is invalid in an MQRFH2 was not detected. Before WebSphere MQ started to use the MQRFH2 for message properties, these applications might have worked satisfactorily, because on some platforms the MQRFH2 folder was not touched.

The MQRFH2 syntax was, and is, only checked as much as is necessary for the queue manager to process the MQRFH2. As WebSphere MQ makes more use of the contents of an MQRFH2, for example to process properties, more syntax checking has become necessary. The additional syntax checking can cause previously working applications to behave differently, or to fail. The additional processing affects the contents of properties folders more than the contents of ordinary folders.

In version 7.5, you can set a new PROPCTL queue attribute value, V6COMPAT, on both the sender and receiver queue, and intervening transmission queues. The effect is to restore MQRFH2 to its version 6.0 behavior. Use this option to solve migration problems, and not to simplify parsing MQRFH2 folders. To make programming MQRFH2 folders simpler, use the message properties MQI. It is also a poor optimization to set V6COMPAT so that an application can gain reduced path length from simpler parsing of MQRFH2 folders. This gain is more than offset by the additional storage and pathlength required by the queue manager to implement V6COMPAT.

A Program Temporary Fix (PTF) for z/OS, was created in version 7.0.1 to disable property processing. This PTF was distributed by IBM service in response to any special request for the fix to solve migration problems. It is superseded by the PROPCTL V6COMPAT option.


PROPCTL queue attribute default setting

The default setting of the PROPCTL attribute for queues migrated from version 6.0 is changed from COMPAT to V6COMPAT. The change applies to local, alias, and model queue definitions. The change does not affect remote queues, which have no PROPCTL attribute. It does affect transmission queues, because they are local queues.

The change solves the problems experienced by some applications that create MQRFH2 headers when they send messages to applications running on version 7.0 or version 7.5. A few applications that rely to some extent on the physical format of MQRFH2 headers, or that add additional attributes to MQRFH2 folder tags, might experience problems.

The PROPCTL attribute is a property of local, alias, and model queues. It was added to queues in version 7.0. The default setting of the PROPCTL attribute for system queues, such as SYSTEM.DEFAULT.LOCAL.QUEUE or SYSTEM.DEFAULT.ALIAS.QUEUE is COMPAT. If a new queue is defined without setting the LIKE property, it inherits the attributes defined on the corresponding system default queue.

Existing queues that were created before version 7.0 did not have a PROPCTL attribute. When they are migrated from version 6.0 to version 7.0, or version 7.0.1, they inherit the default setting of the PROPCTL attribute, COMPAT.

In version 7.5, the default setting of the PROPCTL attribute for system queues created with a newly defined queue manager remains COMPAT. However, queues migrated from version 6.0 behave differently. The PROPCTL attribute is set to V6COMPAT on all queues. The behavior of the PROPCTL attribute is shown in Figure 1 .

In the first row, QM1 is migrated directly from version 6.0 to version 7.5. The PROPCTL queue attribute is set to the default value V6COMPAT. Because the PROPCTL queue attribute is also set to V6COMPAT on all the default queues, newly created queues, shown in black, also have the PROPCTL queue attribute set to V6COMPAT.

In the second row, QM2 is migrated first to version 7.0, or version 7.0.1, and then to version 7.5. The migration to version 7.0 sets the PROPCTL queue attribute to COMPAT. Because the PROPCTL queue attribute is also set to COMPAT on all the default queues, newly created queues, shown in black, also have the PROPCTL queue attribute set to COMPAT. When QM2 is migrated to version 7.5, all queues already have a PROPCTL attribute, and so they retain the same setting. If the setting on all the default queues has been left as COMPAT, new queues also have the PROPCTL queue attribute set to COMPAT.

In the third row, QM3 is created in version 7.0 or version 7.0.1. The default setting of the PROPCTL queue attribute is COMPAT. When QM3 is migrated to version 7.5, the queues retain their PROPCTL setting.

In the fourth row, a new queue manager is created in version 7.5. Like version 7.0, the default setting of the PROPCTL queue attribute is COMPAT.

Figure 1. Migration and creation of queues with default PROPCTL attribute setting

This change affects not only existing queues that were created on version 6.0, but also new queues created in version 7.5. New queues are affected because default system queues, such as SYSTEM.DEFAULT.LOCAL.QUEUE and SYSTEM.DEFAULT.ALIAS.QUEUE, are also migrated to version 7.5, and their PROPCTL attribute is set to V6COMPAT. Unless you override the default setting of PROPCTL when creating a queue, the definition inherits the PROPCTL attribute value from a system default queue.

The intended consequence of this change is that queues defined on a queue manager migrated from version 6.0 behave differently to queues defined on a queue manager created on version 7.0 or later. The change does not apply to migration from version 6.0 to version 7.0 or version 7.0.1. A queue manager that is migrated directly to version 7.5 from version 6.0 behaves differently to a queue manager migrated indirectly from version 6.0.

V6COMPAT does not change the behavior of version 6.0 applications migrated to version 7.5 that would have worked correctly with the previous default value of COMPAT. The new setting resolves a migration problem for a few programs that rely on how the MQRFH2 header is formatted in version 6.0. For new applications, V6COMPAT has a small impact on the performance of message transfer.

To override the change on the system default queues, after migrating a queue manager to version 7.5, redefine the PROPCTL attribute on the system default queues, such as SYSTEM.DEFAULT.LOCAL.QUEUE and SYSTEM.DEFAULT.ALIAS.QUEUE.

All queues migrated from version 6.0, including system queues, have PROPCTL set to V6COMPAT. The PROPCTL setting on the other system queues does not affect the behavior of the queue manager. The PROPCTL setting has no effect on programs that set an explicit MQGMO_PROPERTIES_* option.


Rules for selecting the WebSphere MQ messaging provider mode

The WebSphere MQ messaging provider has two modes of operation: WebSphere MQ messaging provider normal mode, and WebSphere MQ messaging provider migration mode. You can select which mode a JMS application uses to publish and subscribe.

The WebSphere MQ messaging provider normal mode uses all the features of a WebSphere MQ queue manager to implement JMS. This mode is used only to connect to a WebSphere MQ queue manager, and can connect to queue managers in either client or bindings mode. This mode is optimized to use the new function.

The WebSphere MQ messaging provider migration mode uses the features and algorithms supplied with WebSphere MQ version 6.0. To connect to WebSphere Event Broker or WebSphere Message Broker version 6.0 or 6.1 using WebSphere MQ Enterprise Transport, you must use this mode. You can connect to a queue manager running a later version using this mode, but none of the new features of the later version are used.

If you are not using WebSphere MQ Real-Time Transport, the mode of operation used is determined primarily by the PROVIDERVERSION property of the connection factory. If you cannot change the connection factory that you are using, you can use the com.ibm.msg.client.wmq.overrideProviderVersion property to override any setting on the connection factory. This override applies to all connection factories in the JVM but the actual connection factory objects are not modified.

You can set PROVIDERVERSION to the possible values 7, 6, or unspecified. PROVIDERVERSION can also be a string in any one of the formats V.R.M.F, V.R.M, V.R, or V, where V, R, M, and F are integer values greater than or equal to zero.

7 - Normal mode

Uses the WebSphere MQ messaging provider normal mode.

If you set PROVIDERVERSION to 7, only the WebSphere MQ messaging provider normal mode of operation is available. If the queue manager specified in the connection factory settings is not a Version 7.0.1 queue manager, the createConnection method fails with an exception.

The WebSphere MQ messaging provider normal mode uses the sharing conversations feature and the number of conversations that can be shared is controlled by the SHARECNV() property on the server connection channel. If this property is set to 0, you cannot use WebSphere MQ messaging provider normal mode and the createConnection method fails with an exception.

6 - Migration mode

Uses the WebSphere MQ messaging provider migration mode.

The WebSphere MQ classes for JMS use the features and algorithms supplied with WebSphere MQ version 6.0. To connect to WebSphere Event Broker version 6.0 or WebSphere Message Broker version 6.0 or 6.1 using WebSphere MQ Enterprise Transport version 6.0, you must use this mode. You can connect to a WebSphere MQ version 7.0.1 queue manager using this mode, but none of the new features of a version 7.0.1 queue manager are used, for example, read ahead or streaming. If you have a WebSphere MQ version 7.0.1 client connecting to a WebSphere MQ version 7.0.1 queue manager on a distributed platform or a WebSphere MQ version 7.0.1 queue manager on z/OS , then the message selection is done by the queue manager rather than on the client system.

unspecified

The PROVIDERVERSION property is set to unspecified by default.

A connection factory that was created with a previous version of WebSphere MQ classes for JMS in JNDI takes this value when the connection factory is used with the new version of WebSphere MQ classes for JMS. The following algorithm is used to determine which mode of operation is used. This algorithm is used when the createConnection method is called and uses other aspects of the connection factory to determine if WebSphere MQ messaging provider normal mode or WebSphere MQ messaging provider migration mode is required.

  1. First, an attempt to use WebSphere MQ messaging provider normal mode is made.
  2. If the queue manager connected is not WebSphere MQ version 7.0.1, the connection is closed and WebSphere MQ messaging provider migration mode is used instead.
  3. If the SHARECNV property on the server connection channel is set to 0, the connection is closed and WebSphere MQ messaging provider migration mode is used instead.
  4. If BROKERVER is set to V1 or the new default unspecified value, WebSphere MQ messaging provider normal mode continues to be used, and therefore any publish/subscribe operations use the new WebSphere MQ version 7.0.1 features.

    If WebSphere Event Broker or WebSphere Message Broker are used in compatibility mode (and you want to use version 6.0 publish/subscribe function rather than the WebSphere MQ version 7.0.1 publish/subscribe function), set PROVIDERVERSION to 6, and ensure WebSphere MQ messaging provider migration mode is used.

    See ALTER QMGR for information about the PSMODE parameter of the ALTER QMGR command for further information on compatibility.

  5. If BROKERVER is set to V2, the action taken depends on the value of BROKERQMGR :

    • If BROKERQMGR is blank: if the BROKERCONQ command queue exists and can be opened for output (that is, MQOPEN for output succeeds), and PSMODE on the queue manager is set to COMPAT or DISABLED, then WebSphere MQ messaging provider migration mode is used.

    • If BROKERQMGR is non-blank: BROKERQMGR has been changed from the default setting, which indicates that the connection factory is intended for use with WebSphere Event Broker or WebSphere Message Broker and WebSphere MQ Enterprise Transport, so WebSphere MQ messaging provider migration mode is used.


Publish/subscribe changes

Publish/subscribe function is now performed by the queue manager, rather than by a separate publish/subscribe broker. When you upgrade a WebSphere MQ version 6.0 queue manager, publish/subscribe function is not automatically migrated. You must migrate publish/subscribe using the strmqbrk command. After migration, with the differences noted in the following topics, existing version 6.0 publish/subscribe programs work without change.

The programs themselves are not migrated; version 6.0 publish/subscribe programs work when connected to a later version queue manager, after migrating publish/subscribe. The publish/subscribe integrated into a queue manager emulates a version 6.0 publish/subscribe broker. Old and new publish/subscribe programs can work alongside one another. They can share the same topic space on the same queue manager.

Version 6.0 and later version queue managers can exchange publications and subscriptions in a publish/subscribe hierarchy.


Publish/Subscribe: Altering fields in a subscription

In version 6.0, you could update the topic, destination queue, and subscription name of a subscription, as long as you had named the subscription. In version 7.5, you can no longer alter these fields once a subscription has been created.

Related reference:

Changing the registration of an application


Publish/Subscribe: Attributes

Attributes that control publish/subscribe are now integrated into the queue manager. Some are equivalent to attributes controlling version 6.0 publish/subscribe.

Queue manager publish/subscribe attributes:

  • PSMODE: Controls whether the publish/subscribe engine and the queued publish/subscribe interface are running. Version 6.0 equivalent: strmqbrk, endmqbrk.
  • PSNPMSG: Controls whether to continue processing after a failure to process a non-persistent message. Version 6.0 equivalent: DiscardNonPersistentInputMsg.
  • PSNPRES: Controls whether to continue processing after a failure to deliver a response message. Version 6.0 equivalent: PubSubNPResponse.
  • PSRTYCNT: Controls how many times to attempt to process a publish/subscribe command message. Version 6.0 equivalent: MaxMsgRetryCount.
  • PSSYNCPT: Controls whether to process queued publish/subscribe messages under sync point control. Version 6.0 equivalent: SyncPointIfPersistent.


Publish/Subscribe: Application migration

The version 6.0 WebSphere MQ publish/subscribe command message interface has been superseded by the WebSphere MQ publish/subscribe programming interface. It is your choice whether to migrate existing applications to the new interface. The following topics provide reference information to help you migrate applications to the publish/subscribe programming interface.

The migration is not a straightforward mapping: the publish/subscribe programming models are different. To maintain compatibility, you can run both kinds of publish/subscribe programs, and they work with the same publications and subscriptions.

You might have migrated the publish/subscribe broker in WebSphere Message Broker or WebSphere Event Broker to WebSphere MQ. If so, you might be running applications using two different versions of the publish/subscribe command message interface. One version uses commands formatted as MQRFH headers, and the other as MQRFH2 headers. Both versions of the command message interface are supported along with the programming interface. You can write publish/subscribe programs for WebSphere Message Broker version 7.0 using either of the command message interfaces, or the programming interface.

The publish/subscribe programming interface is integrated into the MQI. It is sometimes known as integrated publish/subscribe to distinguish it from queued publish/subscribe. Queued publish/subscribe is the name given to the implementation of the version 6.0 publish/subscribe command interface. You can use both queued and integrated publish/subscribe, and use them together with the same topics and subscriptions.

JMS can use either interface. In most circumstances, JMS defaults to using the integrated interface. You can control which interface it uses by setting the JMS property PROVIDERVERSION to 6 or 7.


Publish/Subscribe Application migration: Identity

You can identify a subscriber in two ways in version 6.0. All subscribers have a traditional identity, and you can add a subscription name. All JMS subscriptions must have a subscription name. In version 7.5, all subscriptions have only a subscription name. In version 6.0, the traditional identity is also used to identify a publisher. In version 7.5 publishers do not have an identity, except for the connection they establish with the queue manager.

The traditional identity is a combination of queue name, queue manager name, and optional correlation identifier.

Effects of identity changes:

  • Version 6.0: The subscription name is associated with a traditional identity by specifying addname, and a subscription name, on a Register Subscriber command.
    Version 7.5: The subscription name is the SubName field in the Subscription Descriptor, MQSD. You must set the SubName field when issuing MQSUB to identify a subscriber.

  • Version 6.0: The correlation identifier has a secondary use, which is to allow subscribers to MQGET by CorrelId and get publications only for a particular subscription. If multiple subscriptions use the same queue, the CorrelId differentiates the subscriptions and publications.
    Version 7.5: You can optionally set the SubCorrelId in the Subscription Descriptor, MQSD, in the MQSUB call. Alternatively, the queue manager generates the value of SubCorrelId. Note: The correlation identifier is not passed between queue managers in a broker hierarchy.

  • Version 6.0: You can set the registration option MQPS_ANONYMOUS, Anon. The option tells the broker that the identity of the publisher is not to be divulged, except to subscribers with additional authority.
    Version 7.5: A publisher can stop subscribers replying to publications they receive. Set the MQPMO_SUPPRESS_REPLYTO MQPMO option when publishing.

  • Version 6.0: A publisher can set MQPS_OTHER_SUBSCRIBERS_ONLY, OtherSubsOnly, and, based on the identity of the publisher and subscriber, avoid sending publications to a subscriber who is also the publisher.
    Version 7.5: Publishing and subscribing applications can be identified by their connection to the queue manager. The same connection can be used to both publish and subscribe. If it is, you can prevent publications created on the connection being sent to a subscription on the same connection. Set the subscription option MQSO_NOT_OWN_PUBS.

Related reference:

Publish

Publisher and subscriber identity

Publisher identity

Register Publisher

Register Subscriber


Publish/Subscribe Application migration: Stream name replaced by topic object

Stream names are not used in the version 7.5 integrated publish/subscribe interface. Topic objects take the place of version 6.0 stream names.

You can configure version 7.5 to emulate streams if queued and integrated publish/subscribe must coexist; see the related information.

Migrating streams to topics

Version 6.0 Version 7.5

The optional command message parameter, MQPS_STREAM_NAME, sets a stream name in the version 6.0 publish/subscribe interface.

Before issuing MQPUT to publish to a topic, or MQSUB to subscribe, set the ObjectName and ObjectString fields. The ObjectName field is equivalent to a stream name, and ObjectString to the topic string. ObjectName is the name of an administrative topic object.

To publish, set the fields in the Object Descriptor, MQOD; to subscribe, set the fields in the Subscription Descriptor, MQSD.

To map the topic object to an emulated stream, see the related references.

If a stream does not exist the publish/subscribe broker returns MQRCCF_STREAM_ERROR to the application.

If a topic object does not exist the queue manager returns MQRC_UNKNOWN_OBJECT_NAME to the application.

As an example, a version 6.0 application publishes to a stream queue called SYSTEM.BROKER.RESULTS.STREAM with a topic string of Sport/Soccer/State/LatestScore.

  1. Create a stream queue called SYSTEM.BROKER.RESULTS.STREAM.
  2. The application publishes to the topic string Sport/Soccer/State/LatestScore using the stream SYSTEM.BROKER.RESULTS.STREAM.

In version 7.5, the application publishes to a topic SYSTEM.BROKER.RESULTS.STREAM with a topic string of Sport/Soccer/State/LatestScore.

  1. Create a topic object called SYSTEM.BROKER.RESULTS.STREAM; define it with a topic string of RESULTS.STREAM.
  2. The application publishes to the topic string Sport/Soccer/State/LatestScore using the topic object SYSTEM.BROKER.RESULTS.STREAM.

The resulting publication has a topic string of RESULTS.STREAM/Sport/Soccer/State/LatestScore. If the application must preserve the same topic string, then define the topic SYSTEM.BROKER.RESULTS.STREAM with a topic string of /. If you do so, you lose the ability to distinguish topics published to different streams.
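A minimal C sketch of step 2 follows, publishing through the topic object with MQPUT1. It assumes the topic object created in step 1 and an example queue manager named QM1; error handling is reduced to a single printf.

  #include <stdio.h>
  #include <string.h>
  #include <cmqc.h>

  int main(void)
  {
      MQHCONN  hConn;
      MQOD     od  = {MQOD_DEFAULT};
      MQMD     md  = {MQMD_DEFAULT};
      MQPMO    pmo = {MQPMO_DEFAULT};
      MQLONG   cc, rc;
      MQCHAR48 qmName = "QM1";                          /* example queue manager */
      char     topicString[] = "Sport/Soccer/State/LatestScore";
      char     result[] = "Manchester United 1 :: Chelsea 0";

      MQCONN(qmName, &hConn, &cc, &rc);

      /* The topic object supplies the first part of the topic string, taking   */
      /* the place of the version 6.0 stream name. ObjectString supplies the    */
      /* remainder, exactly as the application coded it in version 6.0.         */
      od.Version    = MQOD_VERSION_4;                /* needed for ObjectString  */
      od.ObjectType = MQOT_TOPIC;
      strncpy(od.ObjectName, "SYSTEM.BROKER.RESULTS.STREAM", MQ_TOPIC_NAME_LENGTH);
      od.ObjectString.VSPtr    = topicString;
      od.ObjectString.VSLength = (MQLONG)strlen(topicString);

      memcpy(md.Format, MQFMT_STRING, MQ_FORMAT_LENGTH);
      pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_FAIL_IF_QUIESCING;

      /* MQPUT1 opens the topic, publishes, and closes it in one call.          */
      MQPUT1(hConn, &od, &md, &pmo, (MQLONG)strlen(result), result, &cc, &rc);
      printf("MQPUT1 completion code %d reason %d\n", (int)cc, (int)rc);

      MQDISC(&hConn, &cc, &rc);
      return 0;
  }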

You can extend the example, and emulate the version 6.0 stream in version 7.5. With the change, the version 6 and version 7.5 applications can interoperate.

  1. Set the WILDCARD property of the SYSTEM.BROKER.RESULTS.STREAM topic to BLOCK.
  2. Create a queue called SYSTEM.BROKER.RESULTS.STREAM
  3. Add SYSTEM.BROKER.RESULTS.STREAM to the namelist SYSTEM.QPUBSUB.QUEUE.NAMELIST.


Related reference :

Streams, Version 6


Publish/Subscribe Application migration: Delete Publication replaced by ClearTopic

Replace Delete Publication by the PCF Clear Topic command. Migrate the Delete Publication parameters and options to the equivalent options in the ClearTopic command.

Migrate Delete Publication to Clear Topic

Each Delete Publication parameter or option is listed first, followed by its Clear Topic equivalent.

MQPS_COMMAND with value MQPS_DELETE_PUBLICATION

Implicit in the ClearTopic command

MQPS_TOPIC

TopicString parameter 1

MQPS_DELETE_OPTIONS

ClearType parameter

MQPS_STREAM_NAME

See Publish/Subscribe Application migration: Stream name replaced by topic object

Return code migration

Each line shows the reason code returned by the version 6.0 publish/subscribe broker, followed by the reason code now returned by the queue manager.
MQRCCF_STREAM_ERROR MQRC_UNKNOWN_OBJECT_NAME
MQRCCF_TOPIC_ERROR MQRC_OBJECT_STRING_ERROR
MQRCCF_INCORRECT_STREAM A stream error can no longer occur.

Delete Publication

Delete Publication (Format)


Publish/Subscribe Application migration: Deregister Publisher replaced by MQCLOSE

Replace the Deregister Publisher command by the MQCLOSE Message Queue Interface (MQI) call. Migrate the Deregister Publisher parameters and options to the equivalent options in the MQCLOSE call.

Deregister Publisher has a different lifecycle to MQCLOSE. Between Register Publisher and Deregister Publisher a publisher remains registered, even when the publisher is not connected. Integrated publish/subscribe has no Register Publisher equivalent. MQCLOSE releases the object connected by MQOPEN. When the publisher application is not connected, the publisher is not known by the queue manager. A connection can also be closed by the queue manager without the publisher application issuing MQCLOSE.
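The following C sketch illustrates that lifecycle: opening a topic for output takes the place of Register Publisher, and closing the handle takes the place of Deregister Publisher. The queue manager name, topic string, and payload are illustrative values only, and error handling is omitted.

  #include <string.h>
  #include <cmqc.h>

  int main(void)
  {
      MQHCONN  hConn;
      MQHOBJ   hTopic;
      MQOD     od  = {MQOD_DEFAULT};
      MQMD     md  = {MQMD_DEFAULT};
      MQPMO    pmo = {MQPMO_DEFAULT};
      MQLONG   cc, rc;
      MQCHAR48 qmName = "QM1";                         /* example queue manager */
      char     topicString[] = "Price/Fruit/Apples";   /* example topic string  */
      char     payload[]     = "1.99";

      MQCONN(qmName, &hConn, &cc, &rc);

      /* Opening the topic for output is the nearest equivalent to Register     */
      /* Publisher. The queue manager keeps no registration state beyond this   */
      /* object handle.                                                         */
      od.Version    = MQOD_VERSION_4;
      od.ObjectType = MQOT_TOPIC;
      od.ObjectString.VSPtr    = topicString;
      od.ObjectString.VSLength = (MQLONG)strlen(topicString);
      MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING, &hTopic, &cc, &rc);

      /* Publish as often as required while the handle remains open.            */
      pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_FAIL_IF_QUIESCING;
      MQPUT(hConn, hTopic, &md, &pmo, (MQLONG)strlen(payload), payload, &cc, &rc);

      /* Closing the handle is the nearest equivalent to Deregister Publisher.  */
      MQCLOSE(hConn, &hTopic, MQCO_NONE, &cc, &rc);
      MQDISC(&hConn, &cc, &rc);
      return 0;
  }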

Migrate Deregister Publisher to MQCLOSE.

Each Deregister Publisher parameter or option is listed first, followed by its MQCLOSE equivalent.

MQPS_COMMAND with value MQPS_DEREGISTER_PUBLISHER

Deregistering is implicit in releasing the handle to a topic previously opened using MQOPEN with the MQOO_OUTPUT option.

Queue and queue manager name to publish to.

The names are provided either by MQPS_Q_MGR_NAME and MQPS_Q_NAME, or by the ReplyToQMgr and ReplyToQ fields in the Message Descriptor, MQMD, of the command message.

Publications are sent to topics, not queues. The topic is identified by the ObjectName and ObjectString of the topic Object Descriptor, MQOD. The queue manager is the queue manager the application is connected to.

MQPS_REGISTRATION_OPTIONS = MQPS_CORREL_ID_AS_IDENTITY (MQCHAR*) or MQREGO_CORREL_ID_AS_IDENTITY (MQLONG)

Publishers have no lasting identity; see Publish/Subscribe Application migration: Identity

MQPS_REGISTRATION_OPTIONS = MQPS_DEREGISTER_ALL (MQCHAR*) or MQREGO_DEREGISTER_ALL (MQLONG)

Similar to issuing MQDISC, and implicitly closing all open topic connections.

MQPS_STREAM_NAME

See Publish/Subscribe Application migration: Stream name replaced by topic object

Migrate return codes

Each line shows the reason code returned by the version 6.0 publish/subscribe broker, followed by the reason code now returned by the queue manager.
MQRCCF_STREAM_ERROR MQRC_HOBJ_ERROR
MQRCCF_TOPIC_ERROR MQRC_HOBJ_ERROR
MQRCCF_NOT_REGISTERED MQRC_HOBJ_ERROR
MQRCCF_Q_MGR_NAME_ERROR MQRC_HOBJ_ERROR
MQRCCF_Q_NAME_ERROR     MQRC_HOBJ_ERROR
MQRCCF_DUPLICATE_IDENTITY MQRC_HOBJ_ERROR
MQRCCF_UNKNOWN_STREAM MQRC_HOBJ_ERROR
MQRCCF_REG_OPTIONS_ERROR MQRC_OPTIONS_ERROR

Deregister Publisher

Deregister Publisher (Format)


Publish/Subscribe Application migration: Deregister Subscriber replaced by MQCLOSE

Replace the Deregister Subscriber command by the MQCLOSE Message Queue Interface (MQI) call. Migrate the Deregister Subscriber parameters and options to the equivalent options in the MQCLOSE call. Deregister Subscriber has a different lifecycle to MQCLOSE. Register Subscriber might be issued by a different application or process to Deregister Subscriber. If Deregister Subscriber is used in a different program to Register Subscriber, call MQSUB with the MQSO_RESUME option to get a handle to the subscription.
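A minimal C sketch of the pattern described above follows: resume an existing durable subscription by name with MQSO_RESUME, then remove it with MQCLOSE and MQCO_REMOVE_SUB. The queue manager and subscription names are examples, the sketch assumes the subscription was created on a managed queue, and error handling is omitted.

  #include <string.h>
  #include <cmqc.h>

  int main(void)
  {
      MQHCONN  hConn;
      MQHOBJ   hQueue = MQHO_NONE;      /* managed queue handle returned by MQSUB */
      MQHOBJ   hSub   = MQHO_NONE;
      MQSD     sd = {MQSD_DEFAULT};
      MQLONG   cc, rc;
      MQCHAR48 qmName = "QM1";                           /* example queue manager */
      char     subName[] = "ExampleScoresSubscription";  /* existing subscription */

      MQCONN(qmName, &hConn, &cc, &rc);

      /* Resume the durable subscription by name; no topic string is needed.     */
      sd.Options = MQSO_RESUME | MQSO_FAIL_IF_QUIESCING;
      sd.SubName.VSPtr    = subName;
      sd.SubName.VSLength = (MQLONG)strlen(subName);
      MQSUB(hConn, &sd, &hQueue, &hSub, &cc, &rc);

      /* MQCO_REMOVE_SUB deletes the subscription, which is the equivalent of    */
      /* Deregister Subscriber. MQCO_KEEP_SUB would leave it in place.           */
      MQCLOSE(hConn, &hSub, MQCO_REMOVE_SUB, &cc, &rc);

      if (hQueue != MQHO_NONE)
          MQCLOSE(hConn, &hQueue, MQCO_NONE, &cc, &rc);
      MQDISC(&hConn, &cc, &rc);
      return 0;
  }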

Migrate Deregister Subscriber to MQCLOSE

Each Deregister Subscriber parameter or option is listed first, followed by its MQCLOSE equivalent.

MQPS_COMMAND with value MQPS_DEREGISTER_SUBSCRIBER

Call MQCLOSE with the option MQCO_REMOVE_SUB

Queue and queue manager name to subscribe to.

The names are provided either by MQPS_Q_MGR_NAME and MQPS_Q_NAME, or by the ReplyToQMgr and ReplyToQ fields in the Message Descriptor, MQMD, of the command message.

A connection to a subscription also has a connection to the queue to which matching publications are sent.

A handle to the queue is set or returned on the MQSUB call. Whether a handle is set or returned depends on whether the queue is managed or unmanaged, and whether the subscription is being created or resumed. The queue manager is the queue manager the application is connected to.

MQPS_REGISTRATION_OPTIONS = MQPS_CORREL_ID_AS_IDENTITY (MQCHAR*) or MQREGO_CORREL_ID_AS_IDENTITY (MQLONG)

The subscriber identity is implicit in the connection handle obtained by MQSUB. It is released by MQCLOSE. The correlation identifier is no longer part of a subscription identity.

MQPS_REGISTRATION_OPTIONS = MQPS_DEREGISTER_ALL (MQCHAR*) or MQREGO_DEREGISTER_ALL (MQLONG)

Only one topic is subscribed to in one MQSUB call. Releasing the handle with MQCLOSE closes the only subscription. If multiple subscriptions are open, MQDISC closes them all.

MQPS_REGISTRATION_OPTIONS = MQPS_FULL_RESPONSE (MQCHAR*) or MQREGO_FULL_RESPONSE (MQLONG)

MQSUB returns all the response fields in the MQSD structure.

MQPS_REGISTRATION_OPTIONS = MQPS_LEAVE_ONLY (MQCHAR*) or MQREGO_LEAVE_ONLY (MQLONG)

Subscriptions no longer have subidentities, so there is no equivalent to the MQPS_LEAVE_ONLY option.

MQPS_REGISTRATION_OPTIONS = MQPS_VARIABLE_USER_ID (MQCHAR*) or MQREGO_VARIABLE_USER_ID (MQLONG)

Subscriptions are no longer shared between subscriber identities. Deregistering a subscription for another subscriber has no direct equivalent.

MQPS_STREAM_NAME

See Publish/Subscribe Application migration: Stream name replaced by topic object

MQPS_SUBSCRIPTION_NAME

The subscription is identified by the SubName field of the Subscription Descriptor, MQSD.

MQPS_TOPIC

The topic string is identified by the ObjectString field in the Subscription Descriptor, MQSD.

Migrate return codes

Each line shows the reason code returned by the version 6.0 publish/subscribe broker, followed by the reason code now returned by the queue manager.
MQRCCF_STREAM_ERROR MQRC_HOBJ_ERROR
MQRCCF_TOPIC_ERROR MQRC_HOBJ_ERROR
MQRCCF_NOT_REGISTERED MQRC_HOBJ_ERROR
MQRCCF_Q_MGR_NAME_ERROR MQRC_HOBJ_ERROR
MQRCCF_Q_NAME_ERROR MQRC_HOBJ_ERROR
MQRCCF_DUPLICATE_IDENTITY MQRC_HOBJ_ERROR
MQRCCF_UNKNOWN_STREAM MQRC_HOBJ_ERROR
MQRCCF_REG_OPTIONS_ERROR MQRC_OPTIONS_ERROR

Deregister subscriber

Deregister subscriber (format) descriptor


Publish/Subscribe Application migration: Publish replaced by MQPUT and MQPUT1

Replace the Publish command by the MQPUT or MQPUT1 Message Queue Interface (MQI) calls. Migrate the Publish parameters and options to the equivalent options in the MQPUT calls.
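As a sketch of how several Publish options map onto put-message options, the following C fragment publishes one retained publication. It assumes a connection handle and a topic already opened for output, as in the earlier sketches; the function name is illustrative, not part of the product.

  #include <string.h>
  #include <cmqc.h>

  /* Publish one retained publication, keeping the reply-to information private
     and not delivering the publication to subscriptions made on the same
     connection handle. hConn and hTopic are assumed to have been obtained by
     MQCONN and by MQOPEN of a topic with MQOO_OUTPUT. */
  void publishRetained(MQHCONN hConn, MQHOBJ hTopic, char *payload)
  {
      MQMD   md  = {MQMD_DEFAULT};
      MQPMO  pmo = {MQPMO_DEFAULT};
      MQLONG cc, rc;

      pmo.Options = MQPMO_NO_SYNCPOINT
                  | MQPMO_RETAIN              /* replaces MQPS_RETAIN_PUBLICATION     */
                  | MQPMO_SUPPRESS_REPLYTO    /* replaces publishing anonymously      */
                  | MQPMO_NOT_OWN_SUBS        /* replaces MQPS_OTHER_SUBSCRIBERS_ONLY */
                  | MQPMO_FAIL_IF_QUIESCING;

      MQPUT(hConn, hTopic, &md, &pmo, (MQLONG)strlen(payload), payload, &cc, &rc);
  }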

Migrate Publish to MQPUT and MQPUT1

Each Publish parameter or option is listed first, followed by its MQPUT or MQPUT1 equivalent.

MQPS_COMMAND with value MQPS_PUBLISH

Call MQPUT or MQPUT1 to an object handle obtained by opening a topic for MQOO_OUTPUT.

MQPS_ANONYMOUS or MQREGO_ANONYMOUS

New applications do not register publishers, so there is no equivalent to registering a publisher as anonymous. To achieve the same result of not passing a reply-to destination to subscribers, call MQPUT or MQPUT1 setting the MQPMO option MQPMO_SUPPRESS_REPLYTO.

MQPS_TOPIC

Set ObjectString in the Object Descriptor, MQOD, of the publication topic to the topic string.

A Publish command can publish to more than one topic. To achieve the same result, an application must call MQOPEN for each topic, and then call MQPUT for each topic, sending the same publication to each. Alternatively, combine the two calls by using MQPUT1.

MQPS_PUBLICATION_OPTIONS = MQPS_CORREL_ID_AS_IDENTITY (MQCHAR*) or MQPUBO_CORREL_ID_AS_IDENTITY (MQLONG)

Publishers have no lasting identity; see Publish/Subscribe Application migration: Identity

MQPS_PUBLICATION_OPTIONS = MQPS_IS_RETAINED_PUBLICATION (MQCHAR*) or MQPUBO_IS_RETAINED_PUBLICATION (MQLONG)

The message property, MQIsRetained, mqps.Ret, is set on a retained publication sent to a subscriber.

A subscriber can also request no retained publications, by setting the subscription option, MQSO_NEW_PUBLICATIONS_ONLY, and only a retained publication, by setting MQSO_PUBLICATIONS_ON_REQUEST

MQPS_PUBLICATION_OPTIONS = MQPS_NO_REGISTRATION (MQCHAR*) or MQPUBO_NO_REGISTRATION (MQLONG)

This option is not relevant to integrated publish/subscribe, because publishers are not registered.

MQPS_PUBLICATION_OPTIONS = MQPS_OTHER_SUBSCRIBERS_ONLY (MQCHAR*) or MQPUBO_OTHER_SUBSCRIBERS_ONLY (MQLONG)

If an application is not to receive its own publications, publish using the MQPMO option MQPMO_NOT_OWN_SUBS on the MQPUT or MQPUT1 calls.

Note: The scope of the same publisher or subscriber is a common connection handle to the queue manager.

MQPS_PUBLICATION_OPTIONS = MQPS_RETAIN_PUBLICATION (MQCHAR*) or MQPUBO_RETAIN_PUBLICATION (MQLONG)

Call MQPUT or MQPUT1 setting the MQPMO option MQPMO_RETAIN.

MQPS_INTEGER_DATA (MQCHAR*) or MQPUBO_INTEGER_DATA (MQLONG), or MQPS_STRING_DATA (MQCHAR*) or MQPUBO_STRING_DATA (MQLONG)

Set MQPubStrIntData, mqpse.Sid; see Publish/subscribe message properties .

MQPS_Q_MGR_NAME

The ReplyToQMgr field in the Message Descriptor, MQMD, of the publication contains the queue manager the publisher is connected to. The field is blank if the publisher sets the MQPMO option MQPMO_SUPPRESS_REPLYTO.

MQPS_Q_NAME

The ReplyToQ field in the Message Descriptor, MQMD, of the publication contains the queue the publisher expects replies on. The publisher can leave the ReplyToQ field blank.

MQPS_REGISTRATION_OPTIONS = MQPS_ANONYMOUS (MQCHAR*) or MQREGO_ANONYMOUS (MQLONG)

New applications do not register publishers, so there is no equivalent to registering a publisher as anonymous. To achieve the same result of not passing a reply-to destination to subscribers, call MQPUT or MQPUT1 setting the MQPMO option MQPMO_SUPPRESS_REPLYTO.

MQPS_REGISTRATION_OPTIONS = MQPS_CORREL_ID_AS_IDENTITY (MQCHAR*) or MQREGO_CORREL_ID_AS_IDENTITY (MQLONG)

Publishers have no lasting identity; see Publish/Subscribe Application migration: Identity

MQPS_REGISTRATION_OPTIONS = MQPS_DIRECT_REQUEST (MQCHAR*) or MQREGO_DIRECT_REQUEST (MQLONG)

Set ReplyToQ and ReplyToQMgr in the Message Descriptor, MQMD, of a publication message. Do not set MQPMO option MQPMO_SUPPRESS_REPLYTO.

MQPS_REGISTRATION_OPTIONS = MQPS_LOCAL (MQCHAR*)   or  MQREGO_LOCAL (MQLONG)

Set MQPMO option MQPMO_SCOPE_QMGR.

Note: The meaning of SCOPE has changed; see Publish/Subscribe: Local and global publications and subscriptions

MQPS_SEQUENCE_NUMBER

Set MQPubSeqNum, mqpse.Seq

MQPS_STREAM_NAME

See Publish/Subscribe Application migration: Stream name replaced by topic object

Publish

Publish (Format)


Publish/Subscribe Application migration: Register Publisher replaced by MQOPEN

Replace the Register Publisher command by the MQOPEN Message Queue Interface (MQI) call. Migrate the Register Publisher parameters and options to the equivalent options in the MQOPEN and MQPUT calls.

Register Publisher has a different lifecycle to MQOPEN. Between Register Publisher and Deregister Publisher a publisher remains registered, even when the publisher is not connected. Integrated publish/subscribe has no Register Publisher equivalent. MQCLOSE releases the object connected by MQOPEN. When the publisher application is not connected, the publisher is not known by the queue manager. A connection can also be closed by the queue manager without the publisher application issuing MQCLOSE.

Migrate Register Publisher to MQOPEN

Each Register Publisher parameter or option is listed first, followed by its MQOPEN equivalent.

MQPS_COMMAND with value MQPS_REGISTER_PUBLISHER

Call MQPUT or MQPUT1 to an object handle obtained by opening a topic for MQOO_OUTPUT.

If your application did not use Register Publisher, see Publish/Subscribe Application migration: Publish replaced by MQPUT and MQPUT1 .

MQPS_TOPIC

Set ObjectString in the Object Descriptor, MQOD, of the publication topic to the topic string.

A Publish command can publish to more than one topic. To achieve the same result, an application must call MQOPEN for each topic, and then call MQPUT for each topic, sending the same publication to each. Alternatively, combine the two calls by using MQPUT1.

MQPS_Q_MGR_NAME

You must set the ReplyToQMgr field in the Message Descriptor, MQMD, for every publication to identify the publisher. The field is blank if the publisher sets the MQPMO option MQPMO_SUPPRESS_REPLYTO.

MQPS_Q_NAME

You must set the ReplyToQ field in the Message Descriptor, MQMD, for every publication to identify the publisher. The publisher can leave the ReplyToQ field blank.

MQPS_REGISTRATION_OPTIONS = MQPS_ANONYMOUS (MQCHAR*) or MQREGO_ANONYMOUS (MQLONG)

New applications do not register publishers, so there is no equivalent to registering a publisher as anonymous. To achieve the same result of not passing a reply-to destination to subscribers, call MQPUT or MQPUT1 setting the MQPMO option MQPMO_SUPPRESS_REPLYTO.

MQPS_REGISTRATION_OPTIONS = MQPS_CORREL_ID_AS_IDENTITY (MQCHAR*) or MQREGO_CORREL_ID_AS_IDENTITY (MQLONG)

Publishers have no lasting identity; see Publish/Subscribe Application migration: Identity

MQPS_REGISTRATION_OPTIONS = MQPS_DIRECT_REQUEST (MQCHAR*) or MQREGO_DIRECT_REQUEST (MQLONG)

Set ReplyToQ and ReplyToQMgr in the Message Descriptor, MQMD, of a publication message. Do not set MQPMO option MQPMO_SUPPRESS_REPLYTO.

MQPS_REGISTRATION_OPTIONS = MQPS_LOCAL (MQCHAR*)   or  MQREGO_LOCAL (MQLONG)

Set MQPMO option MQPMO_SCOPE_QMGR.

Note: The meaning of SCOPE has changed; see Publish/Subscribe: Local and global publications and subscriptions

MQPS_STREAM_NAME

See Publish/Subscribe Application migration: Stream name replaced by topic object

Register Publisher

Register Publisher (Format)


Publish/Subscribe Application migration: Register Subscriber replaced by MQSUB

Replace the Register Subscriber command by the MQSUB Message Queue Interface (MQI) call. Migrate the Register Subscriber parameters and options to the equivalent options in the MQSUB calls.
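The following C sketch shows a single MQSUB call taking the place of a Register Subscriber command for one topic, using a durable subscription on a managed queue. The queue manager name, topic string, and subscription name are example values, and error handling is omitted.

  #include <string.h>
  #include <cmqc.h>

  int main(void)
  {
      MQHCONN  hConn;
      MQHOBJ   hQueue = MQHO_NONE;    /* MQSUB returns the managed queue handle */
      MQHOBJ   hSub   = MQHO_NONE;
      MQSD     sd = {MQSD_DEFAULT};
      MQLONG   cc, rc;
      MQCHAR48 qmName = "QM1";                               /* example names   */
      char     topic[]   = "Sport/Soccer/State/LatestScore";
      char     subName[] = "ExampleScoresSubscription";

      MQCONN(qmName, &hConn, &cc, &rc);

      /* One MQSUB call replaces Register Subscriber for one topic. The options */
      /* shown map common registration options onto MQSO_* equivalents.         */
      sd.Options = MQSO_CREATE | MQSO_DURABLE
                 | MQSO_MANAGED                /* queue manager provides the queue   */
                 | MQSO_NEW_PUBLICATIONS_ONLY  /* replaces MQPS_NEW_PUBLICATIONS_ONLY */
                 | MQSO_FAIL_IF_QUIESCING;
      sd.SubName.VSPtr         = subName;
      sd.SubName.VSLength      = (MQLONG)strlen(subName);
      sd.ObjectString.VSPtr    = topic;
      sd.ObjectString.VSLength = (MQLONG)strlen(topic);
      MQSUB(hConn, &sd, &hQueue, &hSub, &cc, &rc);

      /* Publications that match the subscription arrive on hQueue; retrieve    */
      /* them with MQGET in the usual way.                                      */

      MQCLOSE(hConn, &hSub, MQCO_KEEP_SUB, &cc, &rc);
      MQCLOSE(hConn, &hQueue, MQCO_NONE, &cc, &rc);
      MQDISC(&hConn, &cc, &rc);
      return 0;
  }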

Migrate Register Subscriber to MQSUB

Each Register Subscriber parameter or option is listed first, followed by its MQSUB equivalent.

MQPS_COMMAND with value MQPS_REGISTER_SUBSCRIBER

Call MQSUB.

If your application did not use Register Subscriber then the use of the MQSUB verb is not required for equivalent behavior.

MQPS_TOPIC

Set ObjectString in the Subscription Descriptor, MQSD

A Register Subscriber command can subscribe to more than one topic. To achieve the same result, an application must call MQSUB for each topic.

MQPS_Q_MGR_NAME

MQSUB requires the connection handle of the queue manager to which the application is connected.

MQPS_Q_NAME

MQSUB either returns the queue handle of a managed queue, or the application provides the handle of an unmanaged queue.

Publications that match the subscription are returned to the queue.

MQPS_REGISTRATION_OPTIONS = MQPS_ADD_NAME (MQCHAR*) or MQREGO_ADD_NAME (MQLONG)

A subscription has only a single name.

MQPS_REGISTRATION_OPTIONS = MQPS_ANONYMOUS (MQCHAR*) or MQREGO_ANONYMOUS (MQLONG)

Not applicable to subscribers.

MQPS_REGISTRATION_OPTIONS = MQPS_CORREL_ID_AS_IDENTITY (MQCHAR*) or MQREGO_CORREL_ID_AS_IDENTITY (MQLONG)

The subscriber identity is implicit in the connection handle obtained by MQSUB. It is released by MQCLOSE. The correlation identifier is no longer part of a subscription identity.

MQPS_REGISTRATION_OPTIONS = MQPS_DUPLICATES_OK (MQCHAR*) or MQREGO_DUPLICATES_OK (MQLONG)

Duplicate publications are possible; see Overlapping topics and Grouping subscriptions .

MQPS_REGISTRATION_OPTIONS = MQPS_FULL_RESPONSE (MQCHAR*) or MQREGO_FULL_RESPONSE (MQLONG)

MQSUB returns all the response fields in the MQSD structure.

MQPS_REGISTRATION_OPTIONS = MQPS_INCLUDE_STREAM_NAME (MQCHAR*) or MQREGO_INCLUDE_STREAM_NAME (MQLONG)

See Publish/Subscribe Application migration: Stream name replaced by topic object

MQPS_REGISTRATION_OPTIONS = MQPS_INFORM_IF_RETAINED (MQCHAR*) or MQREGO_INFORM_IF_RETAINED (MQLONG)

The message property MQIsRetained, mqps.Ret, is returned with every publication.

MQPS_REGISTRATION_OPTIONS = MQPS_JOIN_EXCLUSIVE (MQCHAR*) or MQREGO_JOIN_EXCLUSIVE (MQLONG)

Subscriptions are not shared.

MQPS_REGISTRATION_OPTIONS = MQPS_JOIN_SHARED (MQCHAR*) or MQREGO_JOIN_SHARED (MQLONG)

Subscriptions are not shared.

MQPS_REGISTRATION_OPTIONS = MQPS_LOCAL (MQCHAR*) or MQREGO_LOCAL (MQLONG)

Set the MQSUB option MQSO_SCOPE_QMGR.

Note: The meaning of SCOPE has changed; see Publish/Subscribe: Local and global publications and subscriptions

MQPS_REGISTRATION_OPTIONS = MQPS_LOCKED (MQCHAR*) or MQREGO_LOCKED (MQLONG)

Subscriptions are not shared.

MQPS_REGISTRATION_OPTIONS = MQPS_NEW_PUBLICATIONS_ONLY(MQCHAR*) or MQREGO_NEW_PUBLICATIONS_ONLY (MQLONG)

Set the MQSUB option MQSO_NEW_PUBLICATIONS_ONLY.

MQPS_REGISTRATION_OPTIONS = MQPS_NO_ALTERATION(MQCHAR*) or MQREGO_NO_ALTERATION (MQLONG)

Subscriptions are partly alterable; see Attributes that can be altered in MQSD and MQSUB .

MQPS_REGISTRATION_OPTIONS = MQPS_NON_PERSISTENT (MQCHAR*) or MQREGO_NON_PERSISTENT (MQLONG)

Set the MQSUB option MQSO_NON_PERSISTENT.

MQPS_REGISTRATION_OPTIONS = MQPS_PERSISTENT (MQCHAR*) or MQREGO_PERSISTENT (MQLONG)

Set the MQSUB option MQSO_PERSISTENT.

MQPS_REGISTRATION_OPTIONS = MQPS_PERSISTENT_AS_PUBLISH(MQCHAR*) or MQREGO_PERSISTENT_AS_PUBLISH (MQLONG)

Set the MQSUB option MQSO_PERSISTENT_AS_PUBLISH.

MQPS_REGISTRATION_OPTIONS = MQPS_PERSISTENT_AS_Q (MQCHAR*) or MQREGO_PERSISTENT_AS_Q (MQLONG)

Set the MQSUB option MQSO_PERSISTENT_AS_QUEUE_DEF.

MQPS_REGISTRATION_OPTIONS = MQPS_PUBLISH_ON_REQUEST_ONLY (MQCHAR*) or MQREGO_PUBLISH_ON_REQUEST_ONLY (MQLONG)

Set the MQSUB option MQSO_PUBLICATIONS_ON_REQUEST.

MQPS_REGISTRATION_OPTIONS = MQPS_VARIABLE_USER_ID (MQCHAR*) or MQREGO_VARIABLE_USER_ID (MQLONG)

Set the MQSUB option MQSO_ANY_USERID.

MQPS_STREAM_NAME

See Publish/Subscribe Application migration: Stream name replaced by topic object

MQPS_SUBSCRIPTION_NAME

The subscription is identified by the SubName field of the Subscription Descriptor, MQSD.

MQPS_SUBSCRIPTION_USER_DATA

Set the MQSD field, SubUserData.

Migrate return codes

Each line shows the reason code returned by the version 6.0 publish/subscribe broker, followed by the reason code now returned by the queue manager.
MQRCCF_STREAM_ERROR MQRC_HOBJ_ERROR
MQRCCF_TOPIC_ERROR MQRC_HOBJ_ERROR
MQRCCF_Q_MGR_NAME_ERROR MQRC_HOBJ_ERROR
MQRCCF_Q_NAME_ERROR MQRC_HOBJ_ERROR
MQRCCF_DUPLICATE_IDENTITY MQRC_IDENTITY_MISMATCH
MQRCCF_CORREL_ID_ERROR  
MQRCCF_NOT_AUTHORIZED  
MQRCCF_UNKNOWN_STREAM MQRC_HOBJ_ERROR
MQRCCF_REG_OPTIONS_ERROR  
MQRCCF_DUPLICATE_SUBSCRIPTION MQRC_SUB_ALREADY_EXISTS
MQRCCF_SUB_NAME_ERROR  
MQRCCF_SUB_IDENTITY_ERROR Not applicable
MQRCCF_SUBSCRIPTION_IN_USE MQRC_SUBSCRIPTION_IN_USE
MQRCCF_SUBSCRIPTION_LOCKED Not applicable
MQRCCF_ALREADY_JOINED Not applicable

Registering as a subscriber

Register Subscriber

Register Subscriber (Format) descriptor


Publish/Subscribe Application migration: Request Update replaced by MQSUBRQ

Replace the Request Update command by the MQSUBRQ Message Queue Interface (MQI) call. Migrate the Request Update parameters and options to the equivalent options in the MQSUBRQ calls.
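A minimal C sketch of the pattern follows: subscribe with MQSO_PUBLICATIONS_ON_REQUEST, then call MQSUBRQ to request the retained publication. The names used are examples and error handling is omitted.

  #include <string.h>
  #include <cmqc.h>

  int main(void)
  {
      MQHCONN  hConn;
      MQHOBJ   hQueue = MQHO_NONE;
      MQHOBJ   hSub   = MQHO_NONE;
      MQSD     sd  = {MQSD_DEFAULT};
      MQSRO    sro = {MQSRO_DEFAULT};
      MQLONG   cc, rc;
      MQCHAR48 qmName  = "QM1";                              /* example names */
      char     topic[] = "Sport/Soccer/State/LatestScore";

      MQCONN(qmName, &hConn, &cc, &rc);

      /* The subscription must request publications, otherwise MQSUBRQ fails. */
      sd.Options = MQSO_CREATE | MQSO_MANAGED | MQSO_NON_DURABLE
                 | MQSO_PUBLICATIONS_ON_REQUEST | MQSO_FAIL_IF_QUIESCING;
      sd.ObjectString.VSPtr    = topic;
      sd.ObjectString.VSLength = (MQLONG)strlen(topic);
      MQSUB(hConn, &sd, &hQueue, &hSub, &cc, &rc);

      /* Ask for the current retained publication; it arrives on hQueue and    */
      /* sro.NumPubs reports how many publications were sent.                  */
      MQSUBRQ(hConn, hSub, MQSR_ACTION_PUBLICATION, &sro, &cc, &rc);

      MQCLOSE(hConn, &hSub, MQCO_NONE, &cc, &rc);
      MQCLOSE(hConn, &hQueue, MQCO_NONE, &cc, &rc);
      MQDISC(&hConn, &cc, &rc);
      return 0;
  }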

Migrate Request Update to MQSUBRQ

Each Request Update parameter or option is listed first, followed by its MQSUBRQ equivalent.

MQPS_COMMAND with value MQPS_REQUEST_UPDATE

Call MQSUBRQ setting the Action parameter option MQSR_ACTION_PUBLICATION.

The request can be made only if the application set the option MQSO_PUBLICATIONS_ON_REQUEST on the MQSUB call when it made the subscription.

MQPS_TOPIC

The topic string is implied in the HSUB handle returned from the MQSUB call, which is used as a parameter on the MQSUBRQ call.

MQPS_Q_MGR_NAME

MQSUBRQ requires the connection handle of the queue manager to which the application is connected.

MQPS_Q_NAME

MQSUBRQ requires the HSUB subscription object handle returned by MQSUB.

MQPS_REGISTRATION_OPTIONS = MQPS_CORREL_ID_AS_IDENTITY     (MQCHAR*) or MQREGO_CORREL_ID_AS_IDENTITY     (MQLONG)

The subscriber identity is implicit in the connection handle obtained by MQSUB. It is released by MQCLOSE. The correlation identifier is no longer part of a subscription identity.

MQPS_REGISTRATION_OPTIONS = MQPS_VARIABLE_USER_ID     (MQCHAR*) or MQREGO_VARIABLE_USER_ID (MQLONG)

Set the MQSUB option MQSO_ANY_USERID before obtaining the subscription handle.

MQPS_STREAM_NAME

See Publish/Subscribe Application migration: Stream name replaced by topic object

MQPS_SUBSCRIPTION_NAME

The subscription is identified by the SubName field of the Subscription Descriptor, MQSD.

Request Update

Request Update (Format)


Publish/Subscribe: Broker commands

The commands used to control publish/subscribe have changed. The commands that have changed are listed.

The version 6.0 WebSphere MQ publish/subscribe broker is integrated into version 7.5 publish/subscribe. The commands you used to control a version 6.0 publish/subscribe broker are obsolete, and are replaced by commands to control version 7.5 publish/subscribe. The new commands, as they relate to controlling queued publish/subscribe, are described in Controlling queued publish/subscribe . Table 1 relates the old and new commands.

WebSphere Message Broker command differences

Each line shows the operation, the WebSphere MQ version 6.0 command, and the WebSphere MQ version 7.5 equivalent.
Remove broker from hierarchy clrmqbrk See Disconnect a queue manager from a broker hierarchy for instructions how to disconnect a version 7.5 queue manager from a hierarchy.
Delete broker dltmqbrk There is no publish/subscribe broker in version 7.5. The dltmqbrk command removes version 6.0 broker resources after running the command strmqbrk to migrate the version 6.0 broker to version 7.5 publish/subscribe.
Display broker dspmqbrk Use the runmqsc command DISPLAY PUBSUB to display publish/subscribe status.
Stop broker endmqbrk See Stopping queued publish/subscribe for instructions how to stop queued publish/subscribe in version 7.5.
Migrate broker to WebSphere Message Broker migmqbrk Run the migmqbrk command on a version 6.0 queue manager. Once you have upgraded, there is no migration from version 7.5 publish/subscribe to version 6.0 or 6.1 of the WebSphere Message Broker or WebSphere Event Broker publish/subscribe broker.
Start broker strmqbrk In version 7.5, the strmqbrk command migrates publish/subscribe resources to version 7.5. See Starting queued publish/subscribe for instructions how to start queued publish/subscribe in version 7.5.


Publish/Subscribe: Adding a stream

The method of adding a publish/subscribe stream to a queue manager has changed from version 6.0. You can add a stream for use by a queued publish/subscribe application. The new stream coexists with streams migrated from version 6.0.


Related reference :

Adding a stream, version 6.0


Publish/Subscribe: Connect a queue manager to a broker hierarchy

The method of connecting a queue manager to a publish/subscribe hierarchy has changed from version 6.0. You no longer define a connection with the strmqbrk command. A connection is defined by changing a queue manager attribute.

To connect a queue manager to a broker hierarchy, you must:

  1. Start queued publish/subscribe for broker hierarchies to work.
  2. Define the topology of the queue manager hierarchy by altering the parent the queue manager connects to.

You can combine a publish/subscribe broker running on version 6.0, with a queue manager from a later version of WebSphere MQ in the same hierarchy.


Example

 ALTER QMGR PARENT(PARENTNAME)

Broker networks, version 6.0


Publish/Subscribe: Deleting a stream

The method of deleting a publish/subscribe stream from a queue manager has changed from version 6.0. You can delete a stream that is no longer used by queued publish/subscribe applications.


Publish/Subscribe: Disconnect a queue manager from a broker hierarchy

The method of disconnecting a queue manager from a publish/subscribe hierarchy has changed from version 6.0. You no longer disconnect a queue manager with the dltmqbrk command. A queue manager is disconnected from a hierarchy by changing a queue manager attribute.

In WebSphere MQ version 6.0, queue managers were disconnected from one another using the dltmqbrk command, and required that all child queue managers were disconnected first. The dltmqbrk command is now used to discard WebSphere MQ version 6.0 broker resources after migration using the strmqbrk command.

Disconnect a version 7.5 queue manager from a broker hierarchy using the ALTER QMGR command. Unlike version 6.0, you can disconnect queue managers in any order and at any time.

The corresponding request to update the parent is sent when the connection between the queue managers is running.


Example

 ALTER QMGR PARENT('')


Publish/Subscribe: Queued publish/subscribe message attributes

The behavior of some queued publish/subscribe message attributes is now controlled by queue manager attributes.

Set the following publish/subscribe attributes using WebSphere MQ Explorer or the runmqsc command. In version 6.0, the attributes were set in the publish/subscribe broker configuration stanza.

Publish/subscribe attributes

Attribute Description
PSRTYCNT Command message retry count
PSNPMSG Discard undeliverable command input message
PSNPRES Behavior following undeliverable command response message
PSSYNCPT Process command messages under syncpoint


Example

 ALTER QMGR PSNPRES(SAFE)


Publish/Subscribe: Starting queued publish/subscribe

The task of starting the publish/subscribe broker has changed from running the strmqbrk command to enabling the queued publish/subscribe interface.

Enable queued publish/subscribe by setting the queue manager PSMODE attribute to ENABLED. The default is ENABLED.


Example

ALTER QMGR PSMODE(ENABLED)


Publish/Subscribe: Stopping queued publish/subscribe

The task of stopping the publish/subscribe broker has changed from running the endmqbrk command to disabling the queued publish/subscribe interface.

Set the queue manager PSMODE attribute to COMPAT to stop the queued publish/subscribe interface. The integrated publish/subscribe interface continues to run. Setting the PSMODE attribute to DISABLED stops both the queued and integrated publish/subscribe interfaces.

In COMPAT mode, you can run integrated publish/subscribe alongside WebSphere Event Broker or WebSphere Message Broker version 6.0 or 6.1 on the same server.


Example

ALTER QMGR PSMODE(COMPAT)


Publish/Subscribe: Configuration settings

Configuration settings for the publish/subscribe broker in version 6.0 are defined in the queue manager configuration file. Where equivalent settings exist, they are now queue manager attributes. The publish/subscribe migration program, strmqbrk, migrates the settings automatically.

Broker configuration stanza


Publish/Subscribe: Delete temporary dynamic queue

If a subscription is associated with a temporary dynamic queue, when the queue is deleted, the subscription is deleted. This changes the behavior of incorrectly written publish/subscribe applications migrated from version 6.0. Publish/subscribe applications migrated from WebSphere Message Broker are unchanged. The change does not affect the behavior of integrated publish/subscribe applications, which are written using the MQI publish/subscribe interface.


Summary

In version 7.5, you cannot create a temporary dynamic queue as the destination for publications for a durable subscription using the integrated publish/subscribe interface.

In the current fix level of version 7.5, if you use either of the queued publish/subscribe interfaces, MQRFH1 or MQRFH2, the behavior is the same. You can create a temporary dynamic queue as the subscriber queue, and if the queue is deleted, the subscription is deleted with it. Deleting the subscription with the queue retains the same supported behavior as WebSphere MQ version 6.0, WebSphere Event Broker, and WebSphere Message Broker applications. It modifies the unsupported behavior of WebSphere MQ version 6.0 applications.


Publish/Subscribe: Local and global publications and subscriptions

The meaning of the local and global publication and subscription flags has changed from version 6.0. In version 6.0, local and global are attributes of the publication. Local and global are now attributes that control whether a publication or subscription is propagated locally or globally.

The local or global publication scope is set by defining the scope of a subscription or a publication to QMGR or ALL. With this flag you can control, on a publication, whether it is propagated locally or globally. On a subscription, you can control whether it has an interest in publications made locally or globally. The flow of publications you observe is a result of the interplay of both these flags.
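As a sketch of how the two scope flags are set in version 7.5, the following C fragment restricts a publication and a subscription to the local queue manager. It assumes handles obtained as in the earlier sketches; the function names are illustrative, not part of the product.

  #include <string.h>
  #include <cmqc.h>

  /* Publish with local (queue manager only) scope. hConn and hTopic are
     assumed to come from MQCONN and MQOPEN of a topic for MQOO_OUTPUT. */
  void publishLocally(MQHCONN hConn, MQHOBJ hTopic, char *payload)
  {
      MQMD   md  = {MQMD_DEFAULT};
      MQPMO  pmo = {MQPMO_DEFAULT};
      MQLONG cc, rc;

      pmo.Options = MQPMO_SCOPE_QMGR | MQPMO_NO_SYNCPOINT;  /* publication scope QMGR */
      MQPUT(hConn, hTopic, &md, &pmo, (MQLONG)strlen(payload), payload, &cc, &rc);
  }

  /* Subscribe with local scope: only publications made on this queue manager
     are of interest. */
  void subscribeLocally(MQHCONN hConn, char *topicString,
                        PMQHOBJ pHQueue, PMQHOBJ pHSub)
  {
      MQSD   sd = {MQSD_DEFAULT};
      MQLONG cc, rc;

      sd.Options = MQSO_CREATE | MQSO_MANAGED | MQSO_SCOPE_QMGR; /* subscription scope QMGR */
      sd.ObjectString.VSPtr    = topicString;
      sd.ObjectString.VSLength = (MQLONG)strlen(topicString);
      MQSUB(hConn, &sd, pHQueue, pHSub, &cc, &rc);
  }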

There has been a subtle change in the behavior between version 6.0 and version 7.5 if the publication and subscription flags are set to different options.

The operation of publication and subscription scope in version 6.0 and version 7.5 is described in the following list.

Note: If you specify opposite scopes for the publication and the subscription, it makes no difference which of the two is given QMGR and which is given ALL. In either combination, and in either version, the set of publications received is the same: in version 6.0 no publications are received, and in version 7.5 local publications are received.

Version 6.0 and version 7.5 publication and subscription scope

Publication scope QMGR, subscription scope QMGR: No change. Publications go to local subscribers in both versions.

Publication scope QMGR, subscription scope ALL: In version 6.0, no publications are delivered. In version 7.5, only subscribers local to the publisher receive publications.

Publication scope ALL, subscription scope QMGR: In version 6.0, no publications are delivered. In version 7.5, publications go to subscribers local to the publisher.

Publication scope ALL, subscription scope ALL: No change. Publications go to all subscribers in both versions.


Publish/Subscribe: Mapping an alias queue to a topic

You can convert an existing application from a point-to-point to a publish/subscribe application, without changing the application. Change the application from referencing queues to referencing topics by defining or modifying an alias queue and associating it with a topic object. Existing applications are not affected unless they open an alias queue that you have associated with a topic.

Set the new QALIAS attribute TARGTYPE to TOPIC or QUEUE to specify that a queue alias resolves to a topic or a queue. The TARGQ attribute, defined in version 6.0 as the name of the queue to which the alias queue resolves, is renamed to TARGET. The attribute name TARGQ is retained for compatibility with your existing programs. Set the name of the queue or topic using either TARGET or TARGQ.

Setting TARGTYPE(TOPIC) is useful to migrate existing applications to a publish/subscribe message model; see Converting a monitoring application to use a subscription .


Queue-sharing groups and TARGTYPE

Within a queue sharing group it is possible to define a queue alias as a group object. Each queue manager in the queue sharing group creates a queue alias definition with the same name and the same properties as the QSGDISP(GROUP) object.

A queue manager can set or alter TARGTYPE in a QSGDISP(GROUP) alias queue object. Any version 6.0 queue managers in the queue-sharing group ignore the TARGTYPE attribute. A version 6.0 queue manager interprets the queue alias as a queue object, regardless of the setting of TARGTYPE.


Converting a monitoring application to use a subscription

A useful example of this feature is the queue to which statistics messages are written. In version 6.0, only a single application could read each statistics message, because each message was written to a queue and removed from the queue when it was read.

By defining a queue alias that points to a topic object, each application that processes statistics messages can subscribe to a topic. Rather than a single application getting statistics messages from a queue, multiple applications can subscribe to and read the same statistics information.


Publish/Subscribe: Message broker, or routing, exit changed to publish exit

WebSphere MQ version 6.0 publish/subscribe has an exit for customizing and routing publications. The exit was renamed the publish exit in WebSphere MQ version 7.0.1. It was not available in WebSphere MQ version 7.0.

The message broker exit is largely replaced by using subscription levels. Using the MQSD SubLevel field, an intermediate subscriber can intercept publications to customize or block them before they arrive at the ultimate subscribers.

The function provided by intercepting publications with the MQSD SubLevel field is similar to the function provided by the publish exit. There are two main differences to be aware of:

  1. If you use the subscription level mechanism, the intercepting application is a regular publish/subscribe application. You subscribe to the publications you intend to intercept. Publications are delivered to the intercepting application, which then must publish them again if they are to reach their intended destination.

    The Publish exit is written as an exit program and is called by the queue manager.

  2. Intercept all publications from a publisher by using the subscription level mechanism. Run the intercepting application on the queue manager that the publisher is connected to.

    Intercept a publication just before it is delivered to a subscriber by using a publish exit. Configure the exit on the queue manager the subscriber is connected to.


Publish/Subscribe: Metatopics

Metatopics are a special set of topics recognized by the WebSphere MQ version 6.0 publish/subscribe broker. Metatopics are no longer supported. Instead, you can inquire on the list of topic names, on individual topics, and on subscriptions. If you send a subscription to a metatopic, the subscription is ignored.


Publish/Subscribe: Multi-topic publications and subscriptions

WebSphere MQ version 6.0 publish/subscribe has a concept of publishing to multiple topics, or subscribing to multiple topics, in a single publish or subscribe operation. WebSphere MQ publish/subscribe no longer has these concepts. One consequence is that a version 6.0 application running on a later version queue manager might receive two copies of a publication instead of one.

Version 6.0 publish/subscribe has separate concepts of a subscriber and a subscription. Publish/subscribe now has only the concept of a subscription. In version 6.0, a single subscriber can have multiple subscriptions. If the same publication matches more than one subscription from the same subscriber, only one copy of the publication is received.

The closest publish/subscribe now comes to eliminating duplicate publications is with the concepts of overlapping and grouped subscriptions. Setting the subscription option MQSO_GROUP_SUB eliminates duplicate instances of a publication that matches multiple subscriptions that are delivering publications to the same queue.

Version 6.0 also has the concept of a joint publication. A joint publication is a single publication to multiple topics. It behaves differently to separately publishing to each of the multiple topics in version 6.0. Publish/subscribe no longer has a concept of a joint publication.


Example

Suppose a news service publishes the results of football matches. The service caters for fans of single or multiple teams who want to receive results for their favorite teams. For example, you might subscribe to the results for Chelsea and Manchester United. When the teams play together you want to receive just one result, but when the teams play in different matches you expect two results.

In version 6.0, the news service publishes the results of each match as a joint result. When Manchester United and Chelsea play together the result, Manchester United 1 :: Chelsea 0, is published in a single publication to two topics, /football/results/Manchester United and /football/results/Chelsea. A football fan might subscribe, in version 6.0, to all the teams they are following by registering a single subscription to multiple topics. A single Manchester United and Chelsea follower might register a subscription to both /football/results/Manchester United and /football/results/Chelsea.

With the version 6.0 application running on a version 6.0 publish/subscribe broker, the fan receives a single publication when the teams play together. When they play separately, the fan receives separate publications for each match.

With the version 6.0 application running on a later version queue manager the behavior is different. The single publish operation is broken down into two separate publish operations to the two different topics. The single subscription registration is broken down into two separate subscriptions, with the publications sent to the same queue. Consequently, when Manchester United and Chelsea play together, the fan receives two results.

The same problem arises if the application is split between a version 6.0 and a later version queue manager. With either the publisher or subscriber running on the later version, the same difference arises.

In the example, MQSO_GROUP_SUB does not eliminate the duplication. The reason is that two publications are created when the joint publication is emulated. It is not a matter of a single publication matching multiple subscriptions.

Publish: Required parameters

Subscribe: Required parameters


Publish/Subscribe: Register Publisher and Deregister Publisher commands

The Register Publisher and Deregister Publisher commands no longer do anything, except return a successful response message to a request. Your publisher program is not affected by the change.

Register Publisher

Deregister Publisher


Publish/Subscribe: Streams

Streams are used in version 6.0 to isolate publish/subscribe applications. Streams are now emulated. Migration from version 6.0 streams is automatic. Existing version 6.0 applications that use streams continue to work unchanged when migrated to, or interoperating with, later versions of WebSphere MQ. In some unusual cases, you might need to modify the results of the automatic migration. If you introduce new streams to version 6.0 applications, you must configure the emulated streams manually.

Version 6.0 topics can be isolated to a named stream, or topics can be created in the default stream, which is unnamed. Named streams can simplify the deployment of multiple publish/subscribe applications. If each application uses a different stream, there is no danger that publications for one application are received by subscriptions registered by a different application, even if the topic names are the same.

All topics now share the same topic space. Applications can still be isolated effectively, but the mechanism is different. Topic strings are split into two parts. The first part is managed administratively, and the second part is set by the application. An administrator creates topics that are used to isolate applications. These topics define the first part of a topic string. Applications define topics, or create topic strings dynamically, which then form the second part of a topic string. The complete topic string is constructed by an application combining the administrative topic with the application defined topic, or topic string. The application never has to refer to the topic string defined by the administrator, only to the topic name. The administrator can isolate, and modify, the topic spaces referenced by applications by creating, or modifying, the topics that define the first part of a topic string. These topics provide the same capability as streams do in version 6.0.

Streams are emulated by mapping stream names to topics. The topics are added to the beginning of the topic string defined by an application. The mapping is controlled by a namelist, SYSTEM.QPUBSUB.QUEUE.NAMELIST, that identifies all the queues that are used by version 6.0 streams that are now emulated by the queue manager. For details of the mechanism, and how to extend it to cater for unusual situations, see the related concepts.

Adding a stream, version 6.0

Deleting a stream, version 6.0

Streams, version 6.0


Publish/Subscribe: Subscription names

In WebSphere MQ version 6.0, a subscription name is unique only within the stream and not across the queue manager. A subscription name must now be unique across the queue manager.

The strmqbrk command migrates version 6.0 durable subscriptions. It ensures all subscriptions have names, and all the names are unique. If a subscription uses any stream other than SYSTEM.BROKER.DEFAULT.STREAM, the migration process appends the stream name to the subscription name. If the subscription uses SYSTEM.BROKER.DEFAULT.STREAM, the subscription is migrated across without adding a stream name.

If a version 6.0 subscription does not have a name, only a traditional identity, strmqbrk generates a subscription name. The name does not affect existing applications.


Publish/Subscribe: Traditional identity

The behavior of traditional identities differs between WebSphere MQ version 6.0 and later versions of WebSphere MQ. In version 6.0, subscriptions do not require names to identify them; a traditional identity is sufficient. In later versions, every subscription has a name. Subscription names are generated for any version 6.0 subscriptions without a name that are migrated. Naming subscriptions has no effect on existing version 6.0 applications. The relationship between traditional identities and subscriptions can result in different behavior.

Version 6.0 and later versions differ in how an identifier is assigned to a subscription. Version 6.0 identifies subscriptions in one or both of two ways. A combination of queue name, queue manager name, and optional correlation identifier always identifies a subscription in version 6.0. This combination is called a traditional identity. In addition, you can add a subscription name to a traditional identity by associating a subscription name with it. You can then identify the subscription using the subscription name. JMS always identifies a subscription in version 6.0 by using a subscription name.

In later versions, a subscription is identified only by a subscription name. When a version 6.0 publish/subscribe broker is migrated by strmqbrk, subscription names are created automatically for any subscriptions that are not named. The behavior of version 6.0 applications is not affected by the addition of a subscription name.

Note: Subscription names might change during the migration from version 6.0; see the related reference.

A version 6.0 application running on a later version queue manager might receive more publications than it receives on version 6.0. The difference is due to the way traditional identity behaves in version 6.0. In version 6.0, because of the coupling of subscription identity with queue manager name, queue, and correlation identifier, the only publications delivered to a subscriber are those for the subscriptions it registers. In later versions, a subscriber can create a subscription that delivers publications to any combination of queue manager, queue, and correlation identifier. The combination of queue manager, queue, and correlation identifier is not the identity of a subscription. As a result, if MQSO_GROUP_SUB is not set, a version 6.0 subscriber might receive publications that it has not subscribed to.


Publish/Subscribe: Variable user ID

In version 6.0, only the user that creates a subscription is able to modify it, unless the user authorizes other users. The user does so by adding the subscription registration option, MQPS_VARIABLE_USER_ID, VariableUserId. A similar rule now applies, but it is implemented differently. Only the user that creates a subscription is able to modify it, unless the subscription has the VARUSER attribute set to ANY. A publish/subscribe application can now also set the subscription option, MQSO_ANY_USERID, which has the same effect.

Existing version 6.0 applications are unaffected by this difference. VARUSER has the same effect as using the version 6.0 Register Subscriber command to set the MQPS_VARIABLE_USER_ID.

Note: You can modify a subscription created by a version 6.0 application by setting the VARUSER attribute to ANY.


Publish/Subscribe: Wildcards

Publish/subscribe wildcard schemes have changed. Version 6.0 uses character-based wildcards. Now, you have a choice of using character-based or topic-based wildcards. The WebSphere MQ topic-based wildcard scheme differs slightly from the topic-based wildcard scheme in WebSphere Event Broker version 6.0 and WebSphere Message Broker version 6.0 and 6.1. The WebSphere MQ topic-based scheme allows multilevel wildcards in the middle of a topic string. The WebSphere Message Broker and WebSphere Event Broker scheme permits multilevel wildcards only at the end of a topic string.

The wildcard scheme defined by the MQ Telemetry Transport is the same as WebSphere Message Broker version 6.0.

The behavior of applications migrated from WebSphere MQ version 6.0 using queued publish/subscribe is not affected. When writing new applications that use the integrated publish/subscribe interface, specify the subscription option MQSO_WILDCARD_CHAR to maintain compatibility with version 6.0 publish/subscribe programs.
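The following C fragment is a sketch of a subscription that keeps the version 6.0 character-based wildcard scheme. The connection handle is assumed to exist, the topic string is an example, and the function name is illustrative.

  #include <string.h>
  #include <cmqc.h>

  /* Subscribe using the version 6.0 character-based wildcard scheme, where
     '*' and '?' match characters within a topic string. */
  void subscribeWithCharWildcard(MQHCONN hConn, PMQHOBJ pHQueue, PMQHOBJ pHSub)
  {
      MQSD   sd = {MQSD_DEFAULT};
      MQLONG cc, rc;
      char   topic[] = "Sport/Soccer/State/*";

      /* MQSO_WILDCARD_CHAR keeps version 6.0 semantics; MQSO_WILDCARD_TOPIC */
      /* selects the topic-based scheme that uses '+' and '#'.              */
      sd.Options = MQSO_CREATE | MQSO_MANAGED | MQSO_WILDCARD_CHAR;
      sd.ObjectString.VSPtr    = topic;
      sd.ObjectString.VSLength = (MQLONG)strlen(topic);
      MQSUB(hConn, &sd, pHQueue, pHSub, &cc, &rc);
  }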


Queue manager logs: Default sizes increased

The default size of queue manager log files has increased to 4096 pages. The size of the AMQERRnn.log error log files has increased from 256 KB to 2 MB on UNIX, Linux, and Windows platforms. The change affects both new and migrated queue managers.


Queue manager log

In WebSphere MQ Version 7.5, the default log file size is 4096 pages. For more information on setting non-default values, see The WebSphere MQ configuration file, mqs.ini .


Queue manager error log

Override the change by setting the environment variable MQMAXERRORLOGSIZE, or setting ErrorLogSize in the QMErrorLog stanza in the qm.ini file.

The change increases the number of error messages that are saved in the error logs.


SSL or TLS Authority Information Access (AIA) checked by default

If a TLS or SSL certificate has an AIA extension, the certificate is checked by default. If the certificate contains a URL, the WebSphere MQ channel tries to contact the Online Certificate Status Protocol (OCSP) responder at the URL. As a result channels might take a long time to start, or not start, depending on the delay in receiving a response. You can set up an OCSP proxy server to overcome firewall restrictions, or turn off AIA checking to improve performance.

The URL contained in the AIA extension often cannot be reached because of firewall restrictions. The typical solution to accessing an OCSP responder through a firewall is to set up an OCSP proxy server. WebSphere MQ supports proxy servers by providing the SSLHTTPProxyName configuration file variable, or on a client, the MQSSLPROXY environment variable.

The timeout on the attempt to contact the OCSP responder can be long. The response delay on contacting the OCSP responder can cause channels to take a long time to start. Channels might time out if they are running with short heartbeat intervals.

You can turn off AIA certificate extension checking using the OCSPCheckExtensions configuration file variable. You might safely turn off SSL/TLS certificate revocation checking in a test environment. In a production environment, if a certificate has an AIA extension it is important that the certificate is checked with the OCSP responder. If you do not check whether a certificate is revoked, a user presenting a revoked certificate would get unauthorized access to your system.


SSLPEER and SSLCERTI changes

WebSphere MQ version 7.5 obtains the Distinguished Encoding Rules (DER) encoding of the certificate and uses it to determine the subject and issuer distinguished names. The subject and issuer distinguished names are used in the SSLPEER and SSLCERTI fields. A SERIALNUMBER attribute is also included in the subject distinguished name and contains the serial number for the certificate of the remote partner. Some attributes of subject and issuer distinguished names are returned in a different sequence from previous releases.

The change to subject and issuer distinguished names affects channel security exits. It also affects applications that depend upon the subject and issuer distinguished names that are returned by the PCF programming interface. Channel security exits and applications that set or query SSLPEER and SSLCERTI must be examined, and possibly changed. The fields that are affected are listed in Table 1 and Table 2 .

Channel status fields affected by changes to subject and issuer distinguished names

Channel status attribute PCF channel parameter type
SSL Peer (SSLPEER) MQCACH_SSL_SHORT_PEER_NAME
SSLCERTI MQCACH_SSL_CERT_ISSUER_NAME

Channel data structures affected by changes to subject and issuer distinguished names

Channel data structure Field
MQCD - Channel definition SSLPeerNamePtr (MQPTR)
MQCXP - Channel exit parameter SSLRemCertIssNamePtr (PMQVOID)

Existing peer name filters specified in the SSLPEER field of a channel definition are not affected. They continue to operate in the same manner as in earlier releases. The peer name matching algorithm has been updated to process existing SSLPEER filters. It is not necessary to alter any channel definitions.


Preferred alternatives: Consider replacing these version 6.0 functions with their version 7.5 alternatives

Some functions in version 6.0 are replaced by alternatives in version 7.5. The version 6.0 functions continue to work in version 7.5 for compatibility reasons. Write new applications using the version 7.5 alternatives. In some cases, new version 7.5 function is not available to an application, unless you migrate it to use the version 7.5 alternative.

List of functions in version 6.0 that have preferred version 7.5 alternatives

Each entry lists the version 6.0 function, the preferred version 7.5 alternative, and a description or reference.

Queued Publish/Subscribe

Integrated Publish/Subscribe

Publish/Subscribe: Application migration

JMS classes

Deprecated JMS API

Java classes

Deprecated Java API

com.ibm.mq.MQC

com.ibm.mq.constants.MQConstants

A new package, com.ibm.mq.constants, is supplied with WebSphere MQ Version 7.5. The com.ibm.mq.constants package contains the class MQConstants, which implements a number of interfaces.

MQConstants contains definitions of all the constants that were in the MQC interface and a number of new constants. The interfaces in this package closely follow the names of the constants header files used in WebSphere MQ.

For example, the interface CMQC contains a constant MQOO_INPUT_SHARED; this interface corresponds to the header file cmqc.h and the constant MQOO_INPUT_SHARED.

com.ibm.mq.constants can be used with both WebSphere MQ classes for Java and WebSphere MQ classes for JMS.

MQC is still present, and has the constants it previously had; however, for any new applications, you must use the com.ibm.mq.constants package.

Java System property com.ibm.mq.exitClasspath

Set JavaExitsClasspath in mqclient.ini

See ClientExitPath stanza of the client configuration file

STRMQMCHLI

Set up a transmission queue specifying SYSTEM.CHANNEL.INITQ as the initiation queue and enabling triggering.

See Triggering channels in WebSphere MQ for IBM i


WebSphere MQ Explorer changes

IBM WebSphere Eclipse Platform is no longer shipped with WebSphere MQ; it is not required to run MQ Explorer. The change makes no difference to administrators who run MQ Explorer. For developers who run MQ Explorer in an Eclipse development environment, a change is necessary. You must install and configure a separate Eclipse environment to be able to switch between MQ Explorer and other perspectives.


Packaging changes

In versions of WebSphere MQ earlier than version 7.5, you could select the Workbench mode preference in MQ Explorer. In workbench mode, you could switch to the other perspectives installed in the WebSphere Eclipse Platform. You can no longer set the Workbench mode preference, because the WebSphere Eclipse Platform is not shipped with MQ Explorer in version 7.5.

To switch between MQ Explorer and other perspectives, you must install MQ Explorer into your own Eclipse environment or into an Eclipse-based product. You can then switch between perspectives. For example, you can develop applications using WebSphere MQ classes for JMS or WebSphere MQ Telemetry applications; see Creating your first MQ Telemetry Transport publisher application using Java™

If you installed extensions to previous versions of MQ Explorer, such as SupportPacs or WebSphere Message Broker Explorer, you must reinstall compatible versions of the extensions after upgrading MQ Explorer to version 7.5.

If you continue to run WebSphere MQ version 7.0.1 on the same server as WebSphere MQ version 7.5, and you use MQ Explorer, each installation uses its own installation of MQ Explorer. When you uninstall version 7.0.1, its version of MQ Explorer is uninstalled. To remove IBM WebSphere Eclipse Platform, uninstall it separately. The workspace is not deleted.


Authorization services preference migration

The way that the WebSphere MQ Explorer authorization services preferences are stored has changed. Therefore, when migrating to version 7.5, the preferences are not migrated and must be manually set.


SSL client certificate store migration

After migrating SSL client certificate stores to version 7.5, you must select the Enable default SSL key repositories check box to use the SSL client certificate stores.


Test result migration

Test results are not migrated from version to version. To view any test results, you must rerun the tests.


AIX: Shared objects

In WebSphere MQ for AIX Version 7.5, the .a shared objects in the lib64 directory contain both the 32 bit and 64 bit objects. A symbolic link to the .a file is also placed in the lib directory. The AIX loader can then pick up the correct object for the type of application being run.

This means that WebSphere MQ applications can run with the LIBPATH containing either the lib or lib64 directory, or both.


AIX: /usr/lpp/mqm symbolic link removed

Before version 6.0, WebSphere MQ placed a symbolic link in /usr/lpp/mqm on AIX. The link ensured queue managers and applications migrated from WebSphere MQ versions before version 5.3 continued to work, without change. The link is not created in Version 7.5.

In version 5.0, WebSphere MQ for AIX was installed into /usr/lpp/mqm. That changed in version 5.3 to /usr/mqm. A symbolic link was placed in /usr/lpp/mqm, linking to /usr/mqm. Existing programs and scripts that relied on the installation into /usr/lpp/mqm continued to work unchanged. That symbolic link has been removed in Version 7.5, because you can now install WebSphere MQ in any directory. Applications and command scripts are affected by the change.

The effect on applications is no different from the effect of migrating on other UNIX and Linux platforms. If the installation is made primary, symbolic links to the WebSphere MQ link libraries are placed in /usr/lib. Most applications migrated from earlier WebSphere MQ versions search the default search path, which normally includes /usr/lib, and so find the symbolic link to the WebSphere MQ load libraries there.

If the installation is not primary, you must configure the correct search path to load the WebSphere MQ link libraries. If you run setmqenv, WebSphere MQ places the WebSphere MQ link library path into LIBPATH. Unless the application is configured not to search LIBPATH, for example because it is a setuid or setgid application, the WebSphere MQ library is then loaded successfully; see UNIX and Linux: Migrating WebSphere MQ library loading from version 7.0.1 to version 7.5.

If you have written command scripts that run WebSphere MQ commands, you might have coded explicit paths to the directory tree where WebSphere MQ was installed. You must modify these command scripts. You can run setmqenv to create the correct environment to run the command scripts. If you have set the installation as primary, you do not have to specify the path to the command.
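
For example, a command script might source setmqenv before running WebSphere MQ commands. This is only a sketch, and it assumes the installation is in /opt/mqm:

 # Source setmqenv so that it modifies the current shell environment,
 # then run WebSphere MQ commands without coding explicit paths.
 . /opt/mqm/bin/setmqenv -s
 dspmqver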


AIX, HP-UX, and Solaris: Building applications for TXSeries

You must rebuild WebSphere MQ applications that link to TXSeries.

Before version 7.5, WebSphere MQ applications that used the TXSeries CICS support loaded the WebSphere MQ library mqz_r. From version 7.5, those applications must load the WebSphere MQ library mqzi_r instead. You must change your build scripts accordingly.

Version 7.5 of mqz_r includes code to load a different version of the WebSphere MQ library if it detects that the queue manager the application is connected to is associated with a different installation from the one that the library was loaded from. mqzi_r does not include this additional code. When using TXSeries, the application must run with the WebSphere MQ library that it loaded, and not with a different library loaded by WebSphere MQ. For this reason, WebSphere MQ applications that use the WebSphere MQ TXSeries support must load the mqzi_r library, and not the mqz_r library.

An implication of loading mqzi_r is that the application must load the correct version of mqzi_r: the one from the installation that is associated with the queue manager that the application is connected to.
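
The following sketch shows the kind of change to make in a build script. It assumes a hypothetical application txsapp.c, the AIX XL C compiler, a default installation path of /opt/mqm, and library files named libmqz_r and libmqzi_r; it omits the TXSeries-specific compile and link options, which are unchanged:

 # Before version 7.5: the application linked the mqz_r library.
 xlc_r -o txsapp txsapp.c -I/opt/mqm/inc -L/opt/mqm/lib -lmqz_r
 # From version 7.5: link mqzi_r instead.
 xlc_r -o txsapp txsapp.c -I/opt/mqm/inc -L/opt/mqm/lib -lmqzi_r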


UNIX and Linux: crtmqlnk and dltmqlnk removed

The crtmqlnk and dltmqlnk commands are not present in version 7.5. Before version 7.1, the commands created symbolic links in subdirectories of /usr. From version 7.1, you must use the setmqinst command instead.
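
For example, to make the installation in /opt/mqm the primary installation, which creates the symbolic links that crtmqlnk used to create, and to remove them again later (the installation path is an assumption; use your own):

 setmqinst -i -p /opt/mqm
 # To remove the links again, unset the primary installation:
 setmqinst -x -p /opt/mqm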


UNIX and Linux: Message catalogs moved

WebSphere MQ message catalogs are no longer stored in the system directories in version 7.5. To support multiple installations, copies of the message catalogs are stored with each installation. If you want messages only in the locale of your system, the change has no effect on your system. If you have customized the way the search procedure selects a message catalog, the customization might no longer work correctly.

Set the LANG environment variable to load a message catalog for a different language from the system locale.
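
For example, to load the French message catalog for commands run in the current shell, assuming the French locale and catalog are installed (any supported locale can be used):

 export LANG=fr_FR
 dspmq    # messages from this command are now taken from the French catalog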


UNIX and Linux: MQ services and triggered applications

WebSphere MQ version 7.5 is configured so that both LD_LIBRARY_PATH and $ORIGIN work for MQ services and triggered applications. For this reason, MQ services and triggered applications have been changed so that they run under the user ID that started the queue manager, and are no longer setuid or setgid.

If any files used by the service were previously restricted to certain users, they might not be accessible by the user ID that started the queue manager. Adjust the permissions on resources used by MQ services or triggered applications as appropriate.
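
For example, if a triggered application reads a data file that was previously accessible only to a restricted user, you might need to make it readable by the user ID that starts the queue manager. The user ID, group, and file path in this sketch are hypothetical:

 # Allow the user ID that starts the queue manager (here, alice in group mqm)
 # to read the file used by the triggered application.
 chown alice:mqm /var/myapp/trigger.dat
 chmod 640 /var/myapp/trigger.dat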



UNIX and Linux: ps -ef | grep amq interpretation

The interpretation of the list of WebSphere MQ processes that results from filtering a scan of UNIX or Linux processes has changed. The results can show WebSphere MQ processes running for multiple installations on a server. Before version 7.5, the search identified WebSphere MQ processes running on only a single installation of WebSphere MQ on a UNIX or Linux server.

The implications of this change depend on how the results are qualified and interpreted, and how the list of processes is used. The change affects you only if you start to run multiple installations on a single server. If you have incorporated the list of WebSphere MQ processes into administrative scripts or manual procedures, you must review that usage.


Examples

The following two examples, which are drawn from the information center, illustrate the point.

  1. In the information center, before version 7.5, the scan was used as a step in tasks to change the installation of WebSphere MQ. The purpose was to detect when all queue managers had ended. In version 7.5, the tasks use the dspmq command to detect when all queue managers associated with a specific installation have ended.
  2. In the information center, a process scan is used to monitor starting a queue manager in a high availability cluster. Another script is used to stop a queue manager. In the script to stop a queue manager, if the queue manager does not end within a period of time, the list of processes is piped into a kill -9 command. In both these cases, the scan filters on the queue manager name, and is unaffected by the change to multiple installations.
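
A minimal sketch of both approaches follows; the queue manager name QM1 is hypothetical:

 # List queue managers together with the installation each one is associated with:
 dspmq -o installation -o status

 # Filter the process scan on the queue manager name, which is unaffected
 # by running multiple installations on the server:
 ps -ef | grep amq | grep QM1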


UNIX and Linux: /usr symbolic links removed

On all UNIX and Linux platforms, the links from the /usr file system are no longer made automatically. To take advantage of these links, you must set an installation as the primary installation or set up the links manually.

In previous releases, the installation of WebSphere MQ on UNIX and Linux created the symbolic links shown in Table 1. In version 7.5, these links are not created. You must run setmqinst to create a primary installation, for which the symbolic links are created. No symbolic links are created for other installations.

Table 1. Default symbolic links created in releases before version 7.1

 Symbolic link from     To
 /usr/bin/amq...        /opt/mqm/bin/amq...
 /usr/lib/amq...        /opt/mqm/lib/amq...
 /usr/include/cmq...    /opt/mqm/inc/cmq...
 /usr/share/man/...     /opt/mqm/man/...

Only a subset of the links that were created in previous releases is now made; see External library and control command links to primary installation on UNIX and Linux.
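
For example, after making an installation primary with setmqinst, you might verify that the links exist. This sketch assumes the default installation path /opt/mqm:

 dspmqinst                 # list the installations and show which one is primary
 ls -l /usr/bin/dspmqver   # this link exists only when a primary installation is set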


Windows: amqmsrvn.exe process removed

The amqmsrvn.exe DCOM process was replaced by a Windows service, amqsvc.exe, in version 7.1. This change is unlikely to cause problems, but you might have to make some changes. You might have configured the user that runs the WebSphere MQ Windows service MQSeriesServices without the Log on as a service user right. Alternatively, the user might not have the List Folder permission on all the subdirectories from the root of the drive to the location of the service program amqsvc.exe.

If you omitted the Log on as a service user right, or one of the directories under which WebSphere MQ is installed does not grant the List Folder permission to the user, the MQ_InstallationName WebSphere MQ Windows service in version 7.5 fails to start.


Diagnosing the problem

If the service fails to start, Windows event messages are generated.

If the Prepare WebSphere MQ wizard encounters a failure when it validates the security credentials of the user performing an installation, it returns the error "WebSphere MQ is not correctly configured for Windows domain users". This error indicates that the service failed to start.


Resolution

To resolve this problem, grant the user ID that runs the service the Log on as a service right, and ensure that the user ID has the List Folder permission on every directory from the root of the drive to the directory that contains amqsvc.exe. Then restart the service.
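
For example, to grant read access, which includes the List Folder permission, on one of the directories in the path to amqsvc.exe, you might use icacls from an elevated command prompt. The account name and directory in this sketch are hypothetical:

 icacls "C:\Program Files\IBM\WebSphere MQ" /grant "MYDOMAIN\mqserviceuser:(RX)"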




Windows: IgnoredErrorCodes registry key

The registry key used to specify error codes that you do not want written to the Windows Application Event Log has changed.

The contents of this registry key are not automatically migrated. To continue to ignore specific error codes, you must manually migrate the registry key.

Previously, the key was in the following location:

 HKLM\Software\IBM\MQSeries\CurrentVersion\IgnoredErrorCodes

The key is now in the following location:

 HKLM\Software\IBM\WebSphere MQ\Installation\MQ_INSTALLATION_NAME\IgnoredErrorCodes
where MQ_INSTALLATION_NAME is the installation name associated with a particular installation of WebSphere MQ.
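
For example, if your installation name is Installation1 (an assumption; the dspmqinst command shows the installation names on the server), you might copy the old key to the new location from an elevated command prompt:

 reg copy "HKLM\Software\IBM\MQSeries\CurrentVersion\IgnoredErrorCodes" "HKLM\Software\IBM\WebSphere MQ\Installation\Installation1\IgnoredErrorCodes" /s /f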


Windows: Installation and infrastructure information

The location of Windows installation and infrastructure information has changed.

A top-level string value, WorkPath, in the HKLM\SOFTWARE\IBM\WebSphere MQ key stores the location of the product data directory, which is shared between all installations. The first installation on a machine specifies this location; subsequent installations pick up the same location from the key.
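
For example, you can read the shared data directory from a command prompt:

 reg query "HKLM\SOFTWARE\IBM\WebSphere MQ" /v WorkPath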

Other information previously stored in the registry on Windows is now stored in .ini files.


Windows: Local queue performance monitoring

In WebSphere MQ for Windows Version 7.5 it is no longer possible to monitor local queues using the Windows performance monitor.

Instead, use the performance monitoring commands provided by WebSphere MQ, which are common to all platforms.
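
For example, real-time queue monitoring can be enabled and displayed with MQSC commands. The queue manager name QM1 and queue name MY.QUEUE in this sketch are hypothetical:

 runmqsc QM1
 ALTER QLOCAL(MY.QUEUE) MONQ(MEDIUM)
 DISPLAY QSTATUS(MY.QUEUE) TYPE(QUEUE) ALL
 END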


Windows: Logon as a service required

The user ID that runs the WebSphere MQ Windows service must have the Log on as a service user right. If the user ID does not have this right, the service does not start, and an error is returned in the Windows system event log. Typically, you will have run the Prepare WebSphere MQ wizard and set up the user ID correctly. Only if you have configured the user ID manually might you have a problem in version 7.5.

You have always been required to give the user ID that you configure to run WebSphere MQ the Log on as a service right. If you run the Prepare WebSphere MQ wizard, it creates a user ID with this right, or it ensures that a user ID you provide has this right.

It is possible that you ran WebSphere MQ in earlier releases with a user ID that did not have the Log on as a service right. You might have used it to configure the WebSphere MQ Windows service MQSeriesServices without any problems. If you run a WebSphere MQ Windows service in version 7.5 with the same user ID, without the Log on as a service right, the service does not start.

The WebSphere MQ Windows service MQSeriesServices, with the display name IBM MQSeries, changed in version 7.1. A single WebSphere MQ Windows service per server is no longer sufficient; a WebSphere MQ Windows service per installation is required. Each service is named MQ_InstallationName and has the display name WebSphere MQ (InstallationName). The change, which is necessary to run multiple installations of WebSphere MQ, means that the service no longer runs under a single specific user ID. In version 7.5, each MQ_InstallationName service must run under a user ID that has the Log on as a service right.

The consequence is that the user ID configured to run the MQ_InstallationName Windows service must have the Log on as a service right. If the user ID is not configured correctly, errors are returned in the Windows system event log.

Many installations on earlier releases, and installations from version 7.1 onwards, configure WebSphere MQ with the Prepare WebSphere MQ wizard. The wizard sets up the user ID with the Log on as a service right and configures the WebSphere MQ Windows service with this user ID. Only if, in previous releases, you manually configured MQSeriesServices with a different user ID might you have this migration problem to fix.
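
To check which user ID an MQ_InstallationName service is configured to run under, you can query the service configuration from a command prompt. The installation name Installation1 is an assumption; the SERVICE_START_NAME line in the output shows the account that must hold the Log on as a service right:

 sc qc "MQ_Installation1"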


Windows: Migration of registry information

Before version 7.1 all WebSphere MQ configuration information, and most queue manager configuration information, was stored in the Windows registry. From version 7.1 onwards all configuration information is stored in files.

The change does not affect the operation of existing applications or queue managers, but it does affect any administrative procedures and scripts that reference the registry.

In version 7.5, all WebSphere MQ configuration information on Windows is stored in files, the same files as on UNIX and Linux. If you are migrating an existing Windows system to version 7.5, the transfer of configuration data from the registry to files is automatic. It takes place when the installation is upgraded to version 7.5.
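
For example, a script that previously read queue manager settings from the registry can read the qm.ini file for the queue manager instead. The data directory C:\MQData and the queue manager name QM1 in this sketch are assumptions; substitute your own data path:

 rem Read the LogPath entry from the queue manager's qm.ini file.
 findstr /C:"LogPath" "C:\MQData\qmgrs\QM1\qm.ini"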


Windows: MSCS restriction with multiple installations

When you install or upgrade to WebSphere MQ version 7.5, the first WebSphere MQ installation on the server is the only one that can be used with Microsoft Cluster Server (MSCS). No other installations on the server can be used with MSCS. This restriction limits the use of MSCS with multiple WebSphere MQ installations.

When you run the haregtyp command, it defines the first WebSphere MQ installation on the server as the MSCS resource type; see WebSphere MQ MSCS support utility programs. The implications are as follows:

  1. You must associate queue managers that are participating in an MSCS cluster with the first installation on the server.
  2. Setting the primary installation has no effect on which installation is associated with the MSCS cluster.
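
For example, to register WebSphere MQ as an MSCS resource type, run the command from the first installation on the server; this is a sketch, and the WebSphere MQ MSCS support utility programs topic describes the full options:

 haregtyp /r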


Windows: Relocation of the mqclient.ini file

In WebSphere MQ for Windows Version 7.5 the mqclient.ini file has moved from FilePath to WorkPath. This is similar to the model already used on UNIX and Linux systems.

If you supply separate file and work paths, you will see a change in behavior, and you have an additional step to perform if you choose to uninstall WebSphere MQ version 7.0 before installing WebSphere MQ version 7.5. Before uninstalling WebSphere MQ version 7.0, copy mqclient.ini directly to the Config directory in your data path so that it can be picked up by the WebSphere MQ version 7.5 installation.
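
For example, a copy command of the following form can be run before the version 7.0 uninstallation. Both paths in this sketch are assumptions; substitute the location of your existing mqclient.ini and the data path that you chose:

 copy "C:\Program Files\IBM\WebSphere MQ\mqclient.ini" "C:\MQData\Config\mqclient.ini"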


Windows: SPX support on Windows Vista

The Sequenced Packet Exchange (SPX) protocol is not supported on Windows Vista or Windows Server 2008. SPX is supported on Windows XP and Windows Server 2003 only.


Windows: Task manager interpretation

The interpretation of the processes listed by the Windows Task Manager has changed. The results can show WebSphere MQ processes running for multiple installations on a server. Before version 7.5, the process list identified WebSphere MQ processes running on only a single installation of WebSphere MQ on a Windows server.

The implications of this change depend on how the results are qualified and interpreted, and how the list of processes is used. The change affects you only if you start to run multiple installations on a single server. If you have incorporated the list of WebSphere MQ processes into administrative scripts or manual procedures, you must review that usage.


Windows: WebSphere MQ Installation affected by User Account Control on Windows systems

On some Windows systems, installation of WebSphere MQ is interrupted at various points by User Account Control (UAC). You must acknowledge the Windows UAC prompt to allow installation to run with elevated authority. You must run silent installation from an elevated command prompt.

UAC is enabled by default. UAC restricts the actions users can perform on certain operating system facilities, even if they are members of the Administrators group.

At certain points during interactive installation, migration, and uninstallation, you must acknowledge the Windows UAC prompt to allow installation to run with elevated authority. The points at which you must accept the Windows prompt for UAC are noted in the specific topics affected.

If you opt to run silent installation, you must start the installation process from an elevated command prompt. The points at which you must use an elevated command prompt are noted in the specific topics affected.
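
For example, one way to open an elevated command prompt before starting a silent installation is from PowerShell; this is only a sketch, and right-clicking Command Prompt and selecting Run as administrator works equally well:

 powershell -Command "Start-Process cmd.exe -Verb RunAs"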


Windows: WebSphere MQ Active Directory Services Interface

The WebSphere MQ Active Directory Services Interface is no longer available.

If your application uses the WebSphere MQ Active Directory Services Interface, you must rewrite your application to use Programmable Command Formats.


WebSphere MQ version 7.5, IBM i and z/OS

WebSphere MQ version 7.5 is not available for IBM i and z/OS.

For information about the latest versions of WebSphere MQ for IBM i and z/OS, see http://www.ibm.com/software/integration/wmq/ .