Automated component testing

The automated component testing features in the Rational Developer products allow you to create, edit, deploy, and run automated tests for Java components, EJBs, and Web services. These features comply with the UML Testing Profile standard and use the JUnit testing framework.

With these features you can perform the following actions:

Work with JUnit test scripts
Create test projects
Create and edit tests and test data
Use stubs
Deploy and run tests
Analyze test results
Run regression tests

 

Working with JUnit test scripts

All tests that you create with the Rational Developer products are extensions of JUnit tests. In addition, you can import, edit, and execute existing JUnit tests. The automated component testing features extend JUnit with several families of primitives, including validation actions.

A major difference between validation actions and the original JUnit assert methods is that, with validation actions, failed assertions do not stop the execution of the entire JUnit test suite.
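
For illustration, the following sketch shows a plain JUnit 3 test case of the kind that the generated test scripts extend, together with a rough hand-written approximation of a validation-style check that collects problems instead of stopping at the first failed assertion. The class under test here (java.util.Stack) and all names are illustrative; the generated test behavior classes and validation primitives are product-specific.

```java
import java.util.Stack;

import junit.framework.TestCase;

public class StackComponentTest extends TestCase {

    private Stack stack;

    protected void setUp() {
        stack = new Stack();
    }

    public void testPushAndPop() {
        stack.push("first");
        stack.push("second");
        // Standard JUnit assertions: the first failure ends this test method.
        assertEquals(2, stack.size());
        assertEquals("second", stack.pop());
        assertEquals("first", stack.pop());
        assertTrue(stack.isEmpty());
    }

    public void testPeekWithSoftChecks() {
        stack.push("only");
        // A rough approximation of a validation action: collect problems and
        // report them at the end, so that later checks still execute.
        StringBuffer problems = new StringBuffer();
        if (stack.size() != 1) {
            problems.append("size: expected 1, was " + stack.size() + "\n");
        }
        if (!"only".equals(stack.peek())) {
            problems.append("peek: expected \"only\", was " + stack.peek() + "\n");
        }
        assertTrue(problems.toString(), problems.length() == 0);
    }
}
```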

More information on JUnit can be found at the following locations:

 

Creating test projects

To test your components, first create a test project.

The test project is linked to one or more development projects that contain the components you want to test. Development projects can be Java development projects, Enterprise Application projects, or Dynamic Web projects. The component that a test targets is known as the component under test (CUT).

Test projects contain execution-oriented elements (test suites and runs) and code-oriented elements (test behavior scripts and stubs). You can browse and edit test suites and test runs in the Test Navigator view, while test behavior scripts and stubs are shown in the Package Explorer view.
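
To make the later sketches concrete, the following is a hypothetical CUT that would live in one of the linked Java development projects. The Account class and its methods are invented for illustration; in practice the CUT comes from your own development project.

```java
// Hypothetical component under test (CUT); it would reside in a linked
// Java development project rather than in the test project itself.
public class Account {

    private long balance;

    public Account(long initialBalance) {
        balance = initialBalance;
    }

    public void deposit(long amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balance += amount;
    }

    public void withdraw(long amount) {
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("invalid withdrawal: " + amount);
        }
        balance -= amount;
    }

    public long getBalance() {
        return balance;
    }
}
```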

 

Creating and editing tests and test data

You create tests with the Component Test wizard. After you create a test, you can edit the resulting test assets, which include a test behavior script, a test suite, test cases, and a test data table.
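
The exact format of the generated test behavior script and test data table is tooling-specific, but the idea of a data-driven test can be sketched in plain JUnit: each row of the array below plays the role of one row in a test data table, pairing inputs with an expected result. Account is the hypothetical CUT from the earlier sketch.

```java
import junit.framework.TestCase;

public class AccountDataDrivenTest extends TestCase {

    // Each row stands in for one test data table row:
    // { initial balance, deposit amount, expected balance }
    private static final long[][] DATA = {
        { 0,   100, 100 },
        { 50,  25,  75  },
        { 100, 1,   101 },
    };

    public void testDepositAgainstDataRows() {
        for (int i = 0; i < DATA.length; i++) {
            Account account = new Account(DATA[i][0]);
            account.deposit(DATA[i][1]);
            assertEquals("row " + i, DATA[i][2], account.getBalance());
        }
    }
}
```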

 

Using stubs

In testing, it is often necessary to stub out components that the CUT interacts with. This lets you test the CUT in isolation, so that you can be certain you are testing the CUT and not another component. Like a test, a stub is defined by behavior and data. Stub behavior is defined in the stub's user code class, which you can view and edit in the Java code view. Stub data is supplied in the stub data table, which defines the output of a stubbed class in response to given inputs. With the stub data table, you simulate the stubbed class by specifying the input and return values for each stubbed method.

Stubs can be generated automatically through analysis of the CUT during test creation.
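
Whether generated or written by hand, a stub boils down to a class that returns canned values in place of the real collaborator. The sketch below approximates a stub data table with a plain map; the CurrencyConverter interface and the rates are invented for illustration, and a generated stub would instead use its user code class and stub data table.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical collaborator interface that the CUT depends on.
interface CurrencyConverter {
    double toEuros(String currencyCode, double amount);
}

// Hand-written stub: the map plays the role of a stub data table,
// mapping an input (the currency code) to a fixed conversion rate.
public class CurrencyConverterStub implements CurrencyConverter {

    private final Map rates = new HashMap();

    public CurrencyConverterStub() {
        rates.put("USD", new Double(0.9));
        rates.put("GBP", new Double(1.15));
    }

    public double toEuros(String currencyCode, double amount) {
        Double rate = (Double) rates.get(currencyCode);
        if (rate == null) {
            throw new IllegalArgumentException("no stubbed rate for " + currencyCode);
        }
        return amount * rate.doubleValue();
    }
}
```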

 

Deploying and running tests

Test deployment is the phase in which you specify the conditions under which a test is executed. For EJBs in particular, deployment data includes application server information. Use the Test Suite editor to choose the server configuration on which to deploy the CUT. Server configurations are defined in the Server perspective.

You can specify a launch configuration for any component of a test project (a test suite, a test case, or a single equivalence class), so that the test can be run with or without profiling or debug options. During test execution, a data collector monitors the CUT to retrieve test results.
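
Inside the workbench, execution is driven by launch configurations and monitored by the data collector. Because the tests are ultimately JUnit tests, they can also be grouped and run with the standard JUnit runners outside the tooling; the sketch below assumes the hypothetical test classes from the earlier examples.

```java
import junit.framework.Test;
import junit.framework.TestResult;
import junit.framework.TestSuite;

public class AllComponentTests {

    public static Test suite() {
        TestSuite suite = new TestSuite("Component tests");
        suite.addTestSuite(StackComponentTest.class);
        suite.addTestSuite(AccountDataDrivenTest.class);
        return suite;
    }

    public static void main(String[] args) {
        // Run the suite with the text-based JUnit runner and report the outcome.
        TestResult result = junit.textui.TestRunner.run(suite());
        System.exit(result.wasSuccessful() ? 0 : 1);
    }
}
```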

 

Analyzing test results

Test execution produces a test run, which appears in the Test Navigator view. You can expand a test run to see the individual tests and their verdicts (Pass, Fail, Inconclusive, or Error). From an individual test, you can open the result details in the Test Data Comparator view to compare the actual test results with the expected results.
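
In plain JUnit terms, a Fail verdict corresponds to a failed assertion and an Error verdict to an unexpected exception; both are recorded in a junit.framework.TestResult. The sketch below shows how such a result can be inspected programmatically, which is roughly the information that the Test Navigator and Test Data Comparator views present inside the workbench. The Inconclusive verdict is specific to the component testing features and has no direct JUnit counterpart.

```java
import java.util.Enumeration;

import junit.framework.TestFailure;
import junit.framework.TestResult;

public class ResultReport {

    public static void print(TestResult result) {
        System.out.println("Tests run: " + result.runCount()
                + ", failures: " + result.failureCount()
                + ", errors: " + result.errorCount());

        // Failed assertions (roughly, Fail verdicts).
        for (Enumeration e = result.failures(); e.hasMoreElements();) {
            TestFailure failure = (TestFailure) e.nextElement();
            System.out.println("FAIL:  " + failure.failedTest()
                    + " - " + failure.exceptionMessage());
        }
        // Unexpected exceptions (roughly, Error verdicts).
        for (Enumeration e = result.errors(); e.hasMoreElements();) {
            TestFailure error = (TestFailure) e.nextElement();
            System.out.println("ERROR: " + error.failedTest()
                    + " - " + error.thrownException());
        }
    }
}
```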

Contextual cross-navigation is provided between the test verdicts, test data, test suites, test behavior scripts, and the relevant portions of code in the CUT.

 

Running regression tests

The main benefit of generating and setting up a persistent test environment is that you can reuse your tests for regression testing. Regular regression testing is a reliable way to ensure that code changes do not introduce new defects and that previously fixed defects stay fixed.

A good practice is to create your tests early in the development project, because writing tests helps you interpret the product specifications. You can then run the same tests periodically throughout development to detect unexpected errors whenever the code is modified or moved to a new environment.

 

Related tasks

Creating new component test projects
Running component tests
Viewing component test results