Develop a simple transactional batch application
We can write a simple batch application using a batch job controller and EJB data stream, the command line, or the Apache Ant tool.
Avoid trouble: If the batch step uses a batch data stream (BDS) whose data is local to the file system of the application server to which the batch application is deployed, certain steps must be followed to support job restart scenarios. If such a batch application is deployed to application servers that can run on multiple machines, for example when the application is deployed to a cluster, there is no guarantee that a restart request is accepted by the machine on which the batch job originally ran. If a batch job that runs against such an application is canceled and then restarted, the deployment might send the restart request to an application server that runs on a different machine. Therefore, in cases where file-based affinity is required, we can apply the following solutions to support the job restart scenario:
- Ensure that the data is equally available to every machine on which the batch application can be started, for example by using a network file system. This approach might reduce the performance of the application.
- Deploy the application to application servers that can run only on the machine where the local data exists, for example by deploying the application to a cluster in a node group that has only one member node.
The batch application developer must ensure that transactional work done in the batch step callback methods inherits the global transaction started by the grid endpoints. This ensures that work performed under a batch step is committed only at checkpoints and is rolled back if the step fails.
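For illustration, the following fragment sketches what this inheritance means inside a step callback such as processJobStep(). The dataSource variable is a hypothetical javax.sql.DataSource obtained by the application, for example through a resource reference, and SQLException handling is omitted; the point is that the step itself neither begins nor commits a transaction.
// Hypothetical DataSource acquired by the application (for example, through a resource reference).
java.sql.Connection conn = dataSource.getConnection();
try {
    // JDBC work done here joins the global transaction begun by the grid endpoint,
    // so it is committed at the next checkpoint and rolled back if the step fails.
    java.sql.PreparedStatement ps =
        conn.prepareStatement("UPDATE ACCOUNT SET PROCESSED = 1 WHERE PROCESSED = 0");
    ps.executeUpdate();
    ps.close();
    // Do not call conn.commit(), conn.rollback(), or UserTransaction.begin() here;
    // the grid endpoint owns the transaction boundaries.
} finally {
    conn.close();
}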
Some commands are split on multiple lines for printing purposes.
- Create batch jobs using a batch job controller and an EJB data stream.
- Create batch job steps.
- Create a Java class that implements the com.ibm.websphere.batch.BatchJobStepInterface interface.
- Implement business logic.
If the step has exactly one input stream and one output stream, we can alternatively use the generic batch step GenericXDBatchStep.
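For orientation, the following class is a minimal sketch of such a step. The class name and business logic are placeholders, and the lifecycle methods and BatchConstants return codes shown (setProperties, createJobStep, processJobStep, destroyJobStep, STEP_CONTINUE, STEP_COMPLETE) are the ones commonly associated with this interface; verify the exact signatures against the batch programming model topic.
import java.util.Properties;

import com.ibm.websphere.batch.BatchConstants;
import com.ibm.websphere.batch.BatchJobStepInterface;

public class AccountPostingStep implements BatchJobStepInterface {

    private Properties props;

    // The grid endpoint passes in the step properties declared in xJCL.
    public void setProperties(Properties props) {
        this.props = props;
    }

    public Properties getProperties() {
        return props;
    }

    // One-time setup, for example acquiring the batch data streams for this step.
    public void createJobStep() {
    }

    // Called repeatedly inside the checkpoint loop; each call processes one unit of work.
    public int processJobStep() {
        boolean moreRecords = processNextRecord();
        return moreRecords ? BatchConstants.STEP_CONTINUE : BatchConstants.STEP_COMPLETE;
    }

    // Release resources; the return value becomes the step return code.
    public int destroyJobStep() {
        return 0;
    }

    // Placeholder for the application business logic.
    private boolean processNextRecord() {
        return false;
    }
}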
- Create batch data streams.
- Create a Java class that implements the interface com.ibm.websphere.batch.BatchDataStream.
Batch data streams are accessed from the business logic, for example from the batch job steps, by calling BatchDataStreamMgr with the job ID and step ID. The job ID and step ID are retrieved from the step bean properties list using the keys BatchConstants.JOB_ID and BatchConstants.STEP_ID.
- Map BatchConstants.JOB_ID to com.ibm.websphere.batch.JobID and map BatchConstants.STEP_ID to com.ibm.websphere.batch.StepID.
You should already have access to the BatchConstants class.
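The following fragment, typically placed in the setup logic of the step class, sketches that lookup. The logical name inputStream matches the <logical-name> element declared for the stream in xJCL; the three-argument BatchDataStreamMgr call shown here is an assumption, so confirm the exact method signature in the product Javadoc.
// In createJobStep(), after setProperties() has stored the step properties in props.
// Classes referenced: com.ibm.websphere.batch.BatchConstants,
// com.ibm.websphere.batch.BatchDataStream, and com.ibm.websphere.batch.BatchDataStreamMgr.
String jobID = props.getProperty(BatchConstants.JOB_ID);
String stepID = props.getProperty(BatchConstants.STEP_ID);
// Look up the stream declared as <logical-name>inputStream</logical-name> in the xJCL.
// The argument list (logical name, job ID, step ID) is assumed; check the BatchDataStreamMgr Javadoc.
BatchDataStream input = BatchDataStreamMgr.getBatchDataStream("inputStream", jobID, stepID);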
The batch data stream framework provides several ready-to-use patterns for working with different types of data streams, such as files and databases. To use the batch data stream framework, complete the following steps:
- Identify the data stream type with which to operate, such as TextFile, ByteFile, JDBC, or z/OS stream.
- Identify whether you would read from the stream or write to the stream.
- See the table in the batch data stream framework and patterns topic. Select the class from the supporting classes column that matches your data stream type and operation. For example, to read data from a text file, select TextFileReader.
- Implement the interface listed in the pattern name column that corresponds to the supporting class you selected in the previous step. The supporting class handles all the bookkeeping activities related to the stream and the batch programming model, so the implementation class can focus on the stream processing logic.
- Declare the supporting class and the implementation class in the xJCL.
- Repeat this procedure for each data stream required in the step.
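As a hedged example of this division of labor for a text input stream, the class below supplies only the record-parsing logic while the supporting class TextFileReader, declared as the impl-class in xJCL, handles the stream bookkeeping. The FileReaderPattern interface name, its package, and its callback methods are taken from the pattern table as assumptions; confirm them there before coding.
import java.io.BufferedReader;
import java.io.IOException;
import java.util.Properties;

// The pattern interface and package below are assumptions; see the batch data stream
// framework and patterns topic for the authoritative names.
import com.ibm.websphere.batch.devframework.datastreams.patterns.FileReaderPattern;

public class AccountRecordReader implements FileReaderPattern {

    // Receives the <props> values declared for this batch data stream in xJCL.
    public void initialize(Properties props) {
    }

    // Consume the header line when PROCESS_HEADER is enabled for the stream.
    public void processHeader(BufferedReader reader) {
        try {
            reader.readLine();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Parse one record per call; returning null signals the end of the data.
    public Object fetchRecord(BufferedReader reader) {
        try {
            String line = reader.readLine();
            return (line == null) ? null : line.split(",");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Return header information to callers that request it; none is kept in this sketch.
    public Object fetchHeader() {
        return null;
    }
}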
- Optional: Obtain the job step context.
JobStepContext ctx = JobStepContextMgr.getContext();
The JobStepContextMgr service class enables the batch job step to obtain a reference to its JobStepContext object. The job step context provides the following capabilities:
- Access to information that uniquely identifies the context in which the current batch job step runs, for example, the job ID
- A transient user data area where application-specific information can be passed among the batch programming framework methods during the life of the batch job step
- A persistent user data area where application-specific information can be passed across steps
We can use the PersistentMap helper class to simplify the storing of basic types such as Boolean and double in the persistent user data area of the job step context.
- Define batch data streams in xJCL.
<batch-data-streams>
  <bds>
    <logical-name>inputStream</logical-name>
    <props>
      <prop name="PATTERN_IMPL_CLASS" value="MyBDSStreamImplementationClass"/>
      <prop name="file.encoding" value="8859_1"/>
      <prop name="FILENAME" value="${inputDataStream}"/>
      <prop name="PROCESS_HEADER" value="true"/>
      <prop name="AppendJobIdToFileName" value="true"/>
    </props>
    <impl-class>com.ibm.websphere.batch.devframework.datastreams.patterns.FileByteReader</impl-class>
  </bds>
</batch-data-streams>
The PATTERN_IMPL_CLASS property denotes the user implementation of the BDS framework pattern, and the impl-class element denotes the supporting class.
- Optional: Enable skip-level processing.
Use skip-record processing to skip read and write record errors in transactional batch jobs. Specify skip-record policies in the xJCL. Read the topic on skip record processing for further information.
- Optional: Enable retry-step processing.
Use retry-step processing to try job steps again when the processJobStep method encounters errors in a transactional batch job. Specify retry-step policies in the xJCL. Read the topic on retry-step processing for further information.
- Optional: Configure the transaction mode.
Use the transaction mode to define whether job-related artifacts are called in global transaction mode or local transaction mode. Read the topic on the configurable transaction mode for further information.
- Optional: Provide a job listener.
Provide an implementation for the com.ibm.websphere.batch.listener.JobListener interface to add additional initialization and clean up for jobs and steps. Specify the job listener in the xJCL using the job-level listener element.
The job listener beforeJob() method is invoked before any user artifact is invoked. The job listener afterJob() method is invoked after the last user artifact is invoked. The job listener beforeStep() method is invoked before any step-related user artifact. The job listener afterStep() method is invoked as the last step-related user artifact. Each time the job listener is invoked, it logs a message to the job log.
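A minimal listener skeleton might look like the following; the class name and the println logging calls are illustrative, and the no-argument callback signatures are an assumption to verify against the JobListener interface definition.
import com.ibm.websphere.batch.listener.JobListener;

public class MyJobListener implements JobListener {

    // Invoked before any user artifact in the job.
    public void beforeJob() {
        System.out.println("MyJobListener: job is starting");
    }

    // Invoked after the last user artifact in the job.
    public void afterJob() {
        System.out.println("MyJobListener: job has ended");
    }

    // Invoked before any step-related user artifact.
    public void beforeStep() {
        System.out.println("MyJobListener: step is starting");
    }

    // Invoked as the last step-related user artifact.
    public void afterStep() {
        System.out.println("MyJobListener: step has ended");
    }
}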
- Declare a batch job controller.
- Add a stateless session bean to the deployment descriptor and point it to the implementation class that the product provides by specifying com.ibm.ws.batch.BatchJobControllerBean as the bean class. Make this specification only once per batch application.
- Use com.ibm.ws.batch.BatchJobControllerHome for the remote home interface class and com.ibm.ws.batch.BatchJobController for the remote interface class.
- Configure the EJB deployment descriptor.
- Configure a resource reference on the Controller bean to the default WorkManager wm/BatchWorkManager of the type commonj.work.WorkManager.
Avoid trouble: We must declare the batch controller bean in the EJB deployment descriptor of the batch application. Only one controller bean can be defined per batch application.
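The resulting entry in ejb-jar.xml might resemble the following EJB 2.1-style sketch. The <ejb-name> value is arbitrary and the transaction-type shown is an assumption; the home, remote, and bean classes and the WorkManager resource reference are the ones named in the previous steps.
<enterprise-beans>
  <session>
    <ejb-name>BatchJobController</ejb-name>
    <home>com.ibm.ws.batch.BatchJobControllerHome</home>
    <remote>com.ibm.ws.batch.BatchJobController</remote>
    <ejb-class>com.ibm.ws.batch.BatchJobControllerBean</ejb-class>
    <session-type>Stateless</session-type>
    <!-- The transaction-type value is an assumption; confirm it against a sample batch application. -->
    <transaction-type>Container</transaction-type>
    <resource-ref>
      <res-ref-name>wm/BatchWorkManager</res-ref-name>
      <res-type>commonj.work.WorkManager</res-type>
      <res-auth>Container</res-auth>
      <res-sharing-scope>Shareable</res-sharing-scope>
    </resource-ref>
  </session>
</enterprise-beans>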
- Create batch jobs using the command line.
- Create batch job steps.
- Create a Java class that implements the com.ibm.websphere.batch.BatchJobStepInterface interface.
- Implement business logic.
If the step has exactly one input stream and one output stream, we can alternatively use the generic batch step GenericXDBatchStep.
- Create batch data streams.
- Create a Java class that implements the interface com.ibm.websphere.batch.BatchDataStream.
Batch data streams are accessed from the business logic, for example from the batch job steps, by calling BatchDataStreamMgr with the job ID and step ID. The job ID and step ID are retrieved from the step bean properties list using the keys BatchConstants.JOB_ID and BatchConstants.STEP_ID.
- Map BatchConstants.JOB_ID to com.ibm.websphere.batch.JobID and map BatchConstants.STEP_ID to com.ibm.websphere.batch.StepID.
You should already have access to the BatchConstants class.
The batch data stream framework provides several ready-to-use patterns for working with different types of data streams, such as files and databases. To use the batch data stream framework, complete the following steps:
- Identify the data stream type with which to operate, such as TextFile, ByteFile, JDBC, or z/OS stream.
- Identify whether you would read from the stream or write to the stream.
- See the table in the batch data stream framework and patterns topic. Select the class from the supporting classes column that matches your data stream type and operation. For example, to read data from a text file, select TextFileReader.
- Implement the interface listed in the pattern name column that corresponds to the supporting class you selected in the previous step. The supporting class handles all the bookkeeping activities related to the stream and the batch programming model, so the implementation class can focus on the stream processing logic.
- Declare the supporting class and the implementation class in the xJCL.
- Repeat this procedure for each data stream required in the step.
- Optional: Obtain the job step context.
JobStepContext ctx = JobStepContextMgr.getContext();
The JobStepContextMgr service class enables the batch job step to obtain a reference to its JobStepContext object. The job step context provides the following capabilities:
- Access to information that uniquely identifies the context in which the current batch job step runs, for example, the job ID
- A transient user data area where application-specific information can be passed among the batch programming framework methods during the life of the batch job step
- A persistent user data area where application-specific information can be passed across steps
We can use the PersistentMap helper class to simplify the storing of basic types such as Boolean and double in the persistent user data area of the job step context.
- Open a command prompt and ensure that Java is on the path.
- Issue the following command, all on a single line.
java -cp ${WAS_INSTALL_ROOT}/plugins/com.ibm.ws.batch.runtime.jar com.ibm.ws.batch.packager.WSBatchPackager -appname=<Application_Name> -jarfile=<jarfile containing the POJO batch steps> -earfile=<name of the output EAR file without the .ear extension> [-utilityjars=<semicolon separated list of utility jars>] [-debug] [-gridJob]
For example, for batch jobs, issue:
java -cp ${WAS_INSTALL_ROOT}/plugins/com.ibm.ws.batch.runtime.jar com.ibm.ws.batch.packager.WSBatchPackager -appname=XDCGIVT -jarfile=XDCGIVTEJBs.jar -earfile=XDCGIVT
(zos)
java -Dfile.encoding=ISO8859-1 -cp ${WAS_INSTALL_ROOT}/plugins/com.ibm.ws.batch.runtime.jar com.ibm.ws.batch.packager.WSBatchPackager -appname=<Application_Name> -jarfile=<jarfile containing the POJO batch steps> -earfile=<name of the output EAR file without the .ear extension> [-utilityjars=<semicolon separated list of utility jars>] [-debug] [-gridJob]
For example, for batch jobs, issue:
java -Dfile.encoding=ISO8859-1 -cp ${WAS_INSTALL_ROOT}/plugins/com.ibm.ws.batch.runtime.jar com.ibm.ws.batch.packager.WSBatchPackager -appname=XDCGIVT -jarfile=XDCGIVTEJBs.jar -earfile=XDCGIVT
Avoid trouble: If we do not include -Dfile.encoding=ISO8859-1, code page differences can result in invalid EAR and Enterprise JavaBeans (EJB) JAR descriptors.
- Package a batch application.
Use one of the following methods.
- Package the application using the WSBatchPackager script.
(dist)
<WASHOME>/stack_products/WCG/bin/WSBatchPackager.sh -appname=<application_name> -jarfile=<jar_file_containing_POJO_step_classes> -earfile=<output_ear_file_name> [-utilityjars=<semicolon_separated_utility_jars>] [-nonxadsjndiname=<non-xa_datasource_JNDI_name_for_CursorHoldableJDBCReader>;<non-XA_datasource_JNDI_name_2>;...] [-debug]
For example, issue:
./WSBatchPackager.sh -appname=XDCGIVT -jarfile=XDCGIVTEJBs.jar -earfile=XDCGIVT -utilityjars=myutility.jar -nonxadsjndiname=jdbc/ivtnonxa
(zos)
<WASHOME>/stack_products/WCG/bin/WSBatchPackager.sh -Dfile.encoding=ISO8859-1 -appname=<application_name> -jarfile=<jar_file_containing_POJO_step_classes> -earfile=<output_ear_file_name> [-utilityjars=<semicolon_separated_utility_jars>] [-nonxadsjndiname=<non-xa_datasource_JNDI_name_for_CursorHoldableJDBCReader>;<non-XA_datasource_JNDI_name_2>;...] [-debug]
For example, issue:
./WSBatchPackager.sh -Dfile.encoding=ISO8859-1 -appname=XDCGIVT -jarfile=XDCGIVTEJBs.jar -earfile=XDCGIVT -utilityjars=myutility.jar -nonxadsjndiname=jdbc/ivtnonxa
Avoid trouble: If we do not include -Dfile.encoding=ISO8859-1, code page differences can result in invalid EAR and EJB JAR descriptors.
- Package the application using the java command.
Open a command prompt, ensure that Java is on the path, and issue the WSBatchPackager java command shown earlier in this procedure.
- Create batch jobs using Apache Ant.
- Create batch job steps.
- Create a Java class that implements the com.ibm.websphere.batch.BatchJobStepInterface interface.
- Implement business logic.
If the step has exactly one input stream and one output stream, we can alternatively use the generic batch step GenericXDBatchStep.
- Create batch data streams.
- Create a Java class that implements the interface com.ibm.websphere.batch.BatchDataStream.
Batch data streams are accessed from the business logic, for example from the batch job steps, by calling BatchDataStreamMgr with the job ID and step ID. The job ID and step ID are retrieved from the step bean properties list using the keys BatchConstants.JOB_ID and BatchConstants.STEP_ID.
- Map BatchConstants.JOB_ID to com.ibm.websphere.batch.JobID and map BatchConstants.STEP_ID to com.ibm.websphere.batch.StepID.
You should already have access to the BatchConstants class.
The batch data stream framework provides several ready-to-use patterns for working with different types of data streams, such as files and databases. To use the batch data stream framework, complete the following steps:
- Identify the data stream type with which to operate, such as TextFile, ByteFile, JDBC, or z/OS stream.
- Identify whether you would read from the stream or write to the stream.
- See the table in the batch data stream framework and patterns topic. Select the class from the supporting classes column that matches your data stream type and operation. For example, to read data from a text file, select TextFileReader.
- Implement the interface listed in the pattern name column that corresponds to the supporting class you selected in the previous step. The supporting class handles all the bookkeeping activities related to the stream and the batch programming model, so the implementation class can focus on the stream processing logic.
- Declare the supporting class and the implementation class in the xJCL.
- Repeat this procedure for each data stream required in the step.
- Optional: Obtain the job step context.
JobStepContext ctx = JobStepContextMgr.getContext();
The JobStepContextMgr service class enables the batch job step to obtain a reference to its JobStepContext object. The job step context provides the following capabilities:
- Access to information that uniquely identifies the context in which the current batch job step runs, for example, the job ID
- A transient user data area where application-specific information can be passed among the batch programming framework methods during the life of the batch job step
- A persistent user data area where application-specific information can be passed across steps
We can use the PersistentMap helper class to simplify the storing of basic types such as Boolean and double in the persistent user data area of the job step context.
- For a batch job, ensure the com.ibm.ws.batch.runtime.jar file is on the class path.
- Declare the task.
Add the following task definition to the Ant build file:
<taskdef name="pgcpackager" classname="com.ibm.ws.batch.packager.PGCPackager" classpath="${WAS_INSTALL_ROOT}/plugins/com.ibm.ws.batch.runtime.jar" />
- After compiling the Java files in the application, invoke the pgcpackager task.
<pgcpackager appname="<appname>" earFile="<location and name of the EAR file to generate>" jarfile="<location of the POJO jar file>"/>
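Put together, a minimal Ant build file that compiles the step classes, jars them, and invokes the packager might look like the following sketch; the directory names, property names, and application name are illustrative only.
<project name="BuildBatchApp" default="package" basedir=".">

  <property environment="env"/>
  <property name="was.root" value="${env.WAS_INSTALL_ROOT}"/>

  <!-- Make the pgcpackager task available to this build. -->
  <taskdef name="pgcpackager"
           classname="com.ibm.ws.batch.packager.PGCPackager"
           classpath="${was.root}/plugins/com.ibm.ws.batch.runtime.jar"/>

  <!-- Compile the POJO step and batch data stream classes. -->
  <target name="compile">
    <mkdir dir="classes"/>
    <javac srcdir="src" destdir="classes"
           classpath="${was.root}/plugins/com.ibm.ws.batch.runtime.jar"/>
  </target>

  <!-- Jar the classes and package them into an EAR file with pgcpackager. -->
  <target name="package" depends="compile">
    <jar destfile="MyBatchSteps.jar" basedir="classes"/>
    <!-- Whether earFile includes the .ear extension should be confirmed against the pgcpackager task documentation. -->
    <pgcpackager appname="MyBatchApp"
                 earFile="MyBatchApp.ear"
                 jarfile="MyBatchSteps.jar"/>
  </target>

</project>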
Results
You have developed a simple transactional batch application using a batch job controller and EJB data stream, the command line, or the Apache Ant tool.
What to do next
Install the batch application and configure WebSphere grid endpoints.
Subtopics
- Components of a batch application
The batch application developer and the batch run time environment provide the components of a batch application.
- Batch programming model
Batch applications are EJB-based Java EE applications. These applications conform to a few well-defined interfaces that allow the batch runtime environment to manage the start of batch jobs destined for the application.
- Skip-record processing
Use skip-record processing to skip read and write record errors in transactional batch jobs. Specify skip-record policies in the xJCL.
- Retry-step processing
Use retry-step processing to try job steps again when the processJobStep method encounters errors in a transactional batch job. Specify retry-step policies in the xJCL.
- Configurable transaction mode
Use the transaction mode to define whether job-related artifacts are called in global transaction mode or local transaction mode. Specify the transaction mode in the xJCL.
Related concepts
Batch job steps
Batch controller bean
Batch data stream framework and patterns
xJCL elements
Related tasks
Configure WebSphere grid endpoints
Install the batch application
Deploy batch applications