
Create the job scheduler and grid endpoint database

Create a database for the job scheduler and grid endpoint if we do not use the default Apache Derby database. The job scheduler stores job information in a relational database while the grid endpoint uses the database to track the progress of a batch job.

When we install the product, one Derby JDBC provider is created. The Derby JDBC provider contains two data sources. One is the default Derby data source, JNDI name jdbc/lrsched, which points to the default Derby job scheduler database. The other, JNDI name jdbc/pgc, is the batch execution environment data source. If we decide to use the default data source, we do not need to create the job scheduler database.

The default Derby database for the job scheduler is created when the job scheduler host (deployment target) is selected through the administrative console. The default Derby database for the endpoint is created when a batch application is first installed on a node. Embedded Derby databases cannot be shared by multiple processes and are therefore unsuitable for environments where the job scheduler must move from one node to another, such as high availability scenarios.

We can create multiple base profiles, but in a non-network deployment environment, such as the WAS base product, profiles must not share a relational database when more than one instance is active at a time. The instances cannot communicate with each other, which typically results in inconsistent deletion of job data from the tables. Such use of multiple base profiles is not supported.

The product supports Derby, DB2, and Oracle databases. We can use the following steps to configure the job scheduler and grid endpoint database if we decide to use a database other than the Derby database. When creating the database manually, the job scheduler and grid endpoint can use the same database.


Tasks

  1. Select the correct file based on the type of database that we are going to use.

    The product provides DDL files for all supported databases except DB2 on the z/OS operating system. Use the DDL files to define the job scheduler database in directory...

      app_install_root/util/Batch

    The DDL files for creating the job scheduler database are named...

      CreateLRSCHEDTablesXxx.ddl

    ...where Xxx indicates the type of database manager that the scripts are intended for. These same DDL files are used for the grid endpoint.

    (ZOS) The product provides a SPUFI script for DB2 in the <WAS_install_root>/util/Batch directory. The SPUFI script is SPFLRS.

  2. See the documentation of our database vendor for details on customizing the scripts and using the database tools to run them.
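As an illustration only, the DDL for DB2 might be run with the DB2 command line processor. The database name and the exact DDL file name below are assumptions for this sketch; follow the vendor documentation for the actual procedure on your system.

```shell
# Sketch only: run the product-supplied DDL against a DB2 database.
# "LRSCHED" and the CreateLRSCHEDTablesDb2.ddl file name are placeholders;
# check app_install_root/util/Batch for the file that matches your database.
db2 connect to LRSCHED
db2 -tvf app_install_root/util/Batch/CreateLRSCHEDTablesDb2.ddl
db2 connect reset
```

The -t option tells the command line processor that statements end with a semicolon, and -f reads the statements from the named file.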


What to do next

After creating the database, complete the following steps:

  1. Define the XA JDBC provider for the database through the administrative console.

  2. Create the data source using the JDBC provider through the administrative console.

    Define the data source at the cell level. Doing so guarantees that the database is available for each application server that hosts the job scheduler.

  3. Verify that the database has been created by testing the connection on the data source that we created in the previous step.

  4. Configure the job scheduler by selecting the JNDI name of the newly created data source in the job scheduler panel.

  5. Specify the JNDI name of the data source that we created in a previous step as the value of the GRID_ENDPOINT_DATASOURCE variable.
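The administrative console steps above can also be scripted with wsadmin. The following Jython outline assumes a DB2 XA provider; the cell name, database name, host, JNDI name, and driver path are placeholders, and the exact AdminTask parameters can vary by provider type, so treat this as a sketch rather than a definitive script.

```python
# wsadmin Jython sketch (run with: wsadmin -lang jython -f thisScript.py).
# All names below (cell, provider, data source, JNDI, database) are placeholders.

# 1. Define an XA JDBC provider at cell scope so every application server
#    that can host the job scheduler sees the data source.
provider = AdminTask.createJDBCProvider('[-scope Cell=myCell '
    '-databaseType DB2 -providerType "DB2 Universal JDBC Driver Provider" '
    '-implementationType "XA data source" -name "DB2 XA Provider" '
    '-classpath ${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc4.jar]')

# 2. Create the data source for the job scheduler database.
AdminTask.createDatasource(provider, '[-name SchedulerDS '
    '-jndiName jdbc/scheduler '
    '-dataStoreHelperClassName com.ibm.websphere.rsadapter.DB2UniversalDataStoreHelper '
    '-configureResourceProperties [[databaseName java.lang.String LRSCHED] '
    '[serverName java.lang.String dbhost.example.com] '
    '[portNumber java.lang.Integer 50000]]]')

# 5. Point the grid endpoint at the same data source by setting the
#    GRID_ENDPOINT_DATASOURCE variable.
AdminTask.setVariable('[-variableName GRID_ENDPOINT_DATASOURCE '
    '-variableValue jdbc/scheduler -scope Cell=myCell]')

AdminConfig.save()
```

After running the script, test the connection on the new data source (step 3) and select its JNDI name in the job scheduler panel (step 4) through the console.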


  • Configure the job scheduler
  • Configure WebSphere grid endpoints
  • Verifying the job scheduler installation
  • Job scheduler administrative roles and privileges
  • Configure a JDBC provider and data source