Activity:
Role: Implementer
Frequency: Typically once for each corresponding activity that develops implementation elements
Purpose: To identify the Component under Test and define a set of tests that are of most benefit in the current iteration.
In a formal environment, the components and the tests to be developed are specified in the Test Design artifact, making this step optional. On other occasions the developer tests are driven by Change Requests, bug fixes, implementation decisions that need to be validated, or subsystem testing with only the Design Model as input. In each of these cases, identify the Component under Test and define the tests that will provide the most benefit in the current iteration.
Purpose: To determine the appropriate technique to implement the test.
Various techniques are available to implement a test, but they fall into two general categories: manual and automated testing. Most developer tests are implemented using automated testing techniques.
Although the "programmed test" is the most popular approach, in some cases, such as GUI-related testing, it is more efficient to conduct the test manually, following a sequence of instructions captured in a textual description.
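As an illustration only, the following is a minimal sketch of a "programmed test". JUnit 4 is assumed as the xUnit framework (the RUP text does not prescribe a tool), and the Calculator class is hypothetical:

    // CalculatorTest.java -- a minimal "programmed test" sketch (JUnit 4 assumed).
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical component under test.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    public class CalculatorTest {

        @Test
        public void addReturnsSumOfOperands() {
            // Exercise the operation and verify the observable result.
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

A programmed test like this can be re-run automatically in every execution cycle, which is what makes the automated category the usual choice for developer tests.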
Purpose: To implement the tests identified in the definition step or activity.
Implement all the elements defined in the first step. Clearly specify the test environment pre-conditions and the steps required to bring the component under test to the state in which the test(s) can be conducted. Identify the clean-up steps to be followed to restore the environment to its original state. Pay special attention to the implementation of the observation and control points, as these may need special support that has to be implemented in the component under test.
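For example, the sketch below (again assuming JUnit 4; the ReportWriter class and the build/test-work directory are hypothetical) shows how pre-conditions, clean-up, and an observation point can be made explicit in the test implementation:

    // ReportWriterTest.java -- explicit environment setup, clean-up, and observation point.
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;

    // Hypothetical component under test: writes a report file to a target location.
    class ReportWriter {
        void write(File target, String content) throws IOException {
            try (FileWriter out = new FileWriter(target)) {
                out.write(content);
            }
        }
    }

    public class ReportWriterTest {

        private File workDir;

        @Before
        public void setUp() {
            // Pre-condition: the component needs an existing, empty working directory.
            workDir = new File("build/test-work");
            workDir.mkdirs();
        }

        @After
        public void tearDown() {
            // Clean-up: restore the environment to its original state.
            File[] files = workDir.listFiles();
            if (files != null) {
                for (File f : files) {
                    f.delete();
                }
            }
            workDir.delete();
        }

        @Test
        public void writeCreatesReportFile() throws IOException {
            File report = new File(workDir, "report.txt");
            new ReportWriter().write(report, "iteration results");
            // Observation point: the file produced by the component under test.
            assertTrue(report.exists());
        }
    }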
Purpose: To create and maintain data, stored externally to the test, that the test uses during execution.
In most cases, decoupling the Test Data from the Test leads to a more maintainable solution. If the test's life span is very short, hard-coding the data within the test might be more efficient, but if many test execution cycles are needed with different data sets, the simplest approach is to store the data externally. Decoupling the Test Data from the Test has other advantages as well.
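A minimal sketch of such decoupling follows, assuming JUnit 4 and reusing the hypothetical Calculator from the earlier sketch; the file test-data/add-cases.csv and its layout are assumptions:

    // CalculatorDataDrivenTest.java -- expected inputs and outputs live in an
    // external file rather than in the test code itself.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CalculatorDataDrivenTest {

        @Test
        public void addMatchesAllExternalCases() throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader("test-data/add-cases.csv"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Each line holds: operandA,operandB,expectedSum
                    String[] fields = line.split(",");
                    int a = Integer.parseInt(fields[0].trim());
                    int b = Integer.parseInt(fields[1].trim());
                    int expected = Integer.parseInt(fields[2].trim());
                    // The same test logic runs against every data set without changes.
                    assertEquals(line, expected, new Calculator().add(a, b));
                }
            }
        }
    }

With this arrangement, adding or changing execution cycles means editing the data file only; the test code itself stays untouched.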
Purpose: To verify that the Test works correctly.
Test the Test. Check the environment setup and clean-up instructions. Run the Test, observe its behavior, and fix any defects in the test itself. If the test will be long-lived, ask a person with less inside knowledge to run it and check whether there is enough supporting information. Review it with other members of the development team and other interested parties.
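One way to exercise the tests while verifying them, assuming JUnit 4 and the hypothetical test classes from the earlier sketches, is to run them programmatically and inspect the outcome:

    // VerifyTests.java -- run the tests and report failures, so defects in the
    // tests themselves (bad setup, missing clean-up, wrong assertions) surface early.
    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;
    import org.junit.runner.notification.Failure;

    public class VerifyTests {
        public static void main(String[] args) {
            Result result = JUnitCore.runClasses(CalculatorTest.class, ReportWriterTest.class);
            for (Failure failure : result.getFailures()) {
                System.out.println(failure.toString());
            }
            System.out.println("Tests run: " + result.getRunCount()
                    + ", failures: " + result.getFailureCount());
        }
    }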
Purpose: To enable impact analysis and assessment reporting to be performed on the traced item.
Depending on the level of formality, you may or may not need to maintain traceability relationships. If you do, use the traceability requirements outlined in the Test Plan to update the traceability relationships as required.
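Where traceability must be maintained, one lightweight option is to record the traced item directly on the test so that a reporting tool can collect it by reflection. This is a sketch only: the Traces annotation and the REQ-1234 identifier are hypothetical, and the real identifiers and conventions would come from the Test Plan:

    // Traces.java -- hypothetical annotation linking a test to a traced item.
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    @Retention(RetentionPolicy.RUNTIME)
    public @interface Traces {
        String[] value();
    }

    // TracedCalculatorTest.java -- usage on a test method (JUnit 4 assumed).
    public class TracedCalculatorTest {

        @Traces({"REQ-1234"})
        @org.junit.Test
        public void addCoversTracedRequirement() {
            org.junit.Assert.assertEquals(5, new Calculator().add(2, 3));
        }
    }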