
Set up the external scheduler interface using IBM MQ

We can install and configure the batch high-performance external scheduler connector. This connector is the native WSGrid connector, which is implemented in a natively compiled language and uses IBM MQ for communication.

The benefit of native WSGrid is twofold:

  1. It makes more efficient use of z/OS system processors by avoiding the need for Java virtual machine (JVM) startup processing on each use.
  2. It ensures reliable operation by using the most robust messaging service available on z/OS, one that is already known to and used by most z/OS customers.

The authenticated user ID of the environment that starts WSGRID is propagated to the batch job scheduler, and the resulting batch job runs under that user ID. This user ID must also have sufficient WebSphere privileges to submit batch jobs; that is, it must have the lradmin or lrsubmitter role. For example, if JCL job WSGRID1 is submitted under technical user ID TECH1, the resulting batch job also runs under user ID TECH1. User ID TECH1 must also be permitted to get messages from and put messages to the IBM MQ input and output queues that WSGRID uses.
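
For example, if the queues are protected by RACF, the submitting user ID can be granted put and get access through profiles in the MQQUEUE class. The following commands are a minimal sketch only; the queue manager name MQS1, the queue names WASIQ and WASOQ, and user ID TECH1 are assumptions, so substitute the names and the security approach that apply at your site.

  RDEFINE MQQUEUE MQS1.WASIQ UACC(NONE)
  RDEFINE MQQUEUE MQS1.WASOQ UACC(NONE)
  PERMIT MQS1.WASIQ CLASS(MQQUEUE) ID(TECH1) ACCESS(UPDATE)
  PERMIT MQS1.WASOQ CLASS(MQQUEUE) ID(TECH1) ACCESS(UPDATE)
  SETROPTS RACLIST(MQQUEUE) REFRESH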


Tasks

  1. Define IBM MQ queues.

    The queue manager must be local. Two queues are required: one for input and one for output. We can name the queues according to our own naming conventions; as an example, the name WASIQ is used for input queues and the name WASOQ is used for output queues. The queues must be set in shared mode.
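
    For illustration, the queues might be defined with MQSC commands such as the following sketch; the SHARE and DEFSOPT(SHARED) attributes are shown as one way to satisfy the shared-mode requirement, so adjust the definitions to your site's standards.

    DEFINE QLOCAL('WASIQ') DESCR('WSGrid input queue') SHARE DEFSOPT(SHARED)
    DEFINE QLOCAL('WASOQ') DESCR('WSGrid output queue') SHARE DEFSOPT(SHARED)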

  2. Update the MQ_INSTALL_ROOT WebSphere variable.

    1. In the administrative console, click Environment > WebSphere variables.

    2. Select the node scope where the job scheduler runs.

    3. Select MQ_INSTALL_ROOT.

    4. For Value, enter the directory path where IBM MQ is installed.

      For example, the value can be /usr/lpp/mqm/V6R0M0.

    5. Click Apply and save the changes.
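
    If scripting is preferred, the same variable can often be set with wsadmin instead of the console. The following Jython sketch assumes that the AdminTask setVariable command is available in your release and uses placeholder cell and node names; the console steps above remain the documented procedure.

    # Sketch only: set MQ_INSTALL_ROOT at node scope, then save the change.
    AdminTask.setVariable('[-variableName MQ_INSTALL_ROOT '
        '-variableValue /usr/lpp/mqm/V6R0M0 '
        '-scope Cell=myCell,Node=myNode]')
    AdminConfig.save()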

  3. From the deployment manager, run the installWSGridMQ.py script with the following input parameters:

    The installWSGridMQ.py script installs a system application, and then sets up the JMS connection factory, JMS input and output queues, and other necessary parameters.

    wsadmin.sh -user <username> -password <userpassword> -f installWSGridMQ.py

    -install | -install <APP | MQ>
      {-cluster clusterName | -node nodeName -server serverName}
      MQ parameters are not required when doing an APP install.

    -remove | -remove <APP | MQ>
      {-cluster clusterName | -node nodeName -server serverName}
      MQ parameters are not required when doing an APP remove.

    -qmgr <queueManagerName>

    -inqueue <inputQueueName>

    -outqueue <outputQueueName>

    For example, for clusters:

    wsadmin.sh -f installWSGridMQ.py -install -cluster clusterName -qmgr <queueManagerName>
     -inqueue <inputQueueName> -outqueue <outputQueueName>
    

    For example, for nodes:

    wsadmin.sh -f installWSGridMQ.py -install -node nodeName -server serverName
     -qmgr <queueManagerName> -inqueue <inputQueueName> -outqueue <outputQueueName>
    

    For example, for installing only the application at the cluster level:

    wsadmin.sh -f installWSGridMQ.py -install APP -cluster clusterName 
    

    For example, for installing only the MQ components at the node/server level:

    wsadmin.sh -f installWSGridMQ.py -install MQ -node nodeName -server serverName 
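
    Similarly, based on the -remove parameters described above, a previously configured connector might be removed with a command of the following form; this is shown only as an illustration, so verify the exact parameters for your release:

    wsadmin.sh -f installWSGridMQ.py -remove -cluster clusterName -qmgr <queueManagerName>
     -inqueue <inputQueueName> -outqueue <outputQueueName>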
    

  4. Run osgiCfgInit.sh|.bat -all for each server whose MQ_INSTALL_ROOT WebSphere variable you updated in a previous step.

    The osgiCfgInit command resets the class cache that the OSGi runtime environment uses.
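
    For example, on z/OS the command might be issued as follows; the app_server_root path is used for illustration, so run the script from the location that applies to your installation:

    app_server_root/bin/osgiCfgInit.sh -all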

  5. Create the WSGRID load module:

    1. Locate the unpack script in the app_server_root/bin directory.

      The unpackWSGRID script is a REXX script.

    2. Perform an unpack by using the unpackWSGRID script. To display the command options, issue the unpackWSGRID script with no input:

      unpackWSGRID <was_home> [<hlq>] [<work_dir>] [<batch>] [<debug>]

      <was_home>

      Required WAS home directory.

      <hlq>

      Optional high-level qualifier of output data sets. The default is <user id>.

      <work_dir>

      Optional working directory. The default is /tmp.

      <batch>

      Optional run mode for this script. Specify batch or interactive. The default is interactive.

      <debug>

      Optional debug mode. Specify debug or nodebug. The default is nodebug.

      /u/USER26> unpackWSGRID /WebSphere/ND/AppServer
      

      Sample output:

      Unpack WSGRID with values:
      WAS_HOME=/WebSphere/ND/AppServer
      HLQ =USER26
      WORK_DIR=/tmp
      BATCH =INTERACTIVE
      DEBUG =NODEBUG
      Continue? (Y|N)
      User response: Y
      Unzip /WebSphere/ND/AppServer/bin/cg.load.xmi.zip
      extracted: cg.load.xmi
      Move cg.load.xmi to /tmp
      Delete old dataset 'USER26.CG.LOAD.XMI'
      Allocate new dataset 'USER26.CG.LOAD.XMI'
      Copy USS file /tmp/cg.load.xmi to dataset 'USER26.CG.LOAD.XMI'
      Delete USS file /tmp/cg.load.xmi
      Delete old dataset 'USER26.CG.LOAD'
      Go to TSO and issue RECEIVE INDSN('USER26.CG.LOAD.XMI') to create
      CG.LOAD
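
      The script can also be run non-interactively by supplying the optional positional parameters described above, for example (the values shown are illustrative):

      unpackWSGRID /WebSphere/ND/AppServer USER26 /tmp batch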
      
    3. Go to TSO ISPF, option 6 (Command), and perform a receive operation.

      For example:

      RECEIVE INDSN('USER26.CG.LOAD.XMI')
      

      The following output is the result:

      Dataset BBUILD.CG.LOAD from BBUILD on PLPSC
      The incoming data set is a 'PROGRAM LIBRARY'
      Enter restore parameters or 'DELETE' or 'END' +
      

      Press Enter to end. Output similar to the following is displayed.

      IEB1135I IEBCOPY FMID HDZ11K0 SERVICE LEVEL UA4
      07.00 z/OS   01.07.00 HBB7720  CPU 2097
      IEB1035I USER26   WASDB2V8 WASDB2V8   17:12:15 MON
      COPY INDD=((SYS00006,R)),OUTDD=SYS00005
      IEB1013I COPYING FROM PDSU  INDD=SYS00006 VOL=CPD
      USER26.R0100122
      IEB1014I
      IGW01551I MEMBER WSGRID HAS BEEN LOADED
      IGW01550I 1 OF 1 MEMBERS WERE LOADED
      IEB147I END OF JOB - 0 WAS HIGHEST SEVERITY CODE
      Restore successful to dataset 'USER26.CG.LOAD'
      ***
      

  6. Restart the servers that we just configured. Also, restart the node agents.

We have configured an external job scheduler interface.


What to do next

Submit a job from the external job scheduler interface to batch.
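
As a rough illustration of what such a submission can look like, the following JCL skeleton runs the WSGRID load module that was created in step 5. The job card, dataset name, and DD statements are assumptions, and the WSGrid control statements are omitted; see the WSGrid job template topic for the statements that the program actually requires.

  //WSGRID1  JOB (ACCT),'SUBMIT BATCH JOB',CLASS=A,MSGCLASS=H
  //*------------------------------------------------------------------
  //* Hypothetical skeleton only: run the WSGRID program from the load
  //* library created with unpackWSGRID and RECEIVE in step 5.
  //*------------------------------------------------------------------
  //RUNGRID  EXEC PGM=WSGRID
  //STEPLIB  DD DISP=SHR,DSN=USER26.CG.LOAD
  //SYSPRINT DD SYSOUT=*
  //* WSGrid control statements (queue manager, input and output queue,
  //* and job definition) go here; see the WSGrid job template topic.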

  • Integrating batch features in z/OS operating systems
  • Submitting jobs from an external job scheduler
  • WSGrid job template