Analyzing test results and solving performance problems

Verify pass criterion
Test results to be collected and verified
For each test executed, the following results are typically recorded, both for analysis and for future reference:
- Input attributes:
  - Number and type of virtual users
  - Think time
  - Scenario
  - Duration
- Control attributes:
  - Store, including file assets and data
  - Environment
  - Hardware and site topology
Hardware and software configuration details are often important because component levels can change during site development, depending on the duration of your project and the number of products integrating with WebSphere Commerce. It is important to know exactly which software stack produced a successful result.
- Output values:
  - Minimum/average/maximum response time for all page hits
  - Minimum/average/maximum test scenario response time
  - Transaction/page hit/scenario throughput
  - Page hit/scenario failure ratio
  - Resource utilization (memory, CPU, I/O, and so on)
  - Additional information as required by your business, such as orders per hour
  - Logs, such as WebSphere Application Server logs, JVM logs, database logs, and test client logs
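If you capture these attributes programmatically, a structured record keeps every run comparable. The following is a minimal sketch; the class and field names are illustrative only, not part of any WebSphere Commerce or test tool API, and the derived statistics mirror the output values listed above.

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;

/**
 * Illustrative record of a single performance test run, covering the
 * input, control, and output attributes listed above. All names are
 * hypothetical; adapt them to your own test harness and reporting tools.
 */
public class TestRunRecord {

    // Input attributes
    int virtualUsers;            // number of concurrent virtual users
    String userType;             // e.g. "browser", "registered buyer"
    double thinkTimeSeconds;     // configured think time between page hits
    String scenario;             // e.g. "browse-and-buy"
    long durationSeconds;        // total test duration

    // Control attributes
    String storeSnapshot;        // identifier of the store, file assets, and data used
    String environment;          // e.g. "staging", "pre-production"
    String topologyDescription;  // hardware and site topology, plus software stack levels

    // Output values (derived from raw measurements)
    double minResponseMs, avgResponseMs, maxResponseMs;
    long pageHits;
    long failedPageHits;
    double ordersPerHour;        // example of a business-specific metric

    /** Derive minimum/average/maximum response time from raw page-hit timings. */
    void summarizeResponseTimes(List<Double> responseTimesMs) {
        DoubleSummaryStatistics stats = responseTimesMs.stream()
                .mapToDouble(Double::doubleValue)
                .summaryStatistics();
        minResponseMs = stats.getMin();
        avgResponseMs = stats.getAverage();
        maxResponseMs = stats.getMax();
        pageHits = stats.getCount();
    }

    /** Page-hit failure ratio, one of the output values listed above. */
    double failureRatio() {
        return pageHits == 0 ? 0.0 : (double) failedPageHits / pageHits;
    }
}
```

Storing one such record per run, alongside the raw logs, makes it possible to reconstruct a result months later without rerunning the test.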
This information is recorded so that:
- You can understand what is being tested.
- You can diagnose performance concerns without necessarily rerunning the test case. This also speeds up getting support from subject matter experts who were not directly involved in the testing, including IBM Support.
- Anyone can reproduce the same test and obtain the same result in the future, for example, for comparison or for problem determination purposes.
- You have sufficient information about the test results to predict site behavior as the workload changes. This information can also help you understand the site configuration changes that may be required to accommodate those workload changes.
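As a simple illustration of the last point, a first-order capacity estimate can be extrapolated from a recorded run. The sketch below assumes resource utilization scales roughly linearly with the number of virtual users, which holds only below saturation, and all figures in it are invented.

```java
/**
 * First-order capacity extrapolation from recorded test results.
 * Assumes CPU utilization scales roughly linearly with the number of
 * virtual users; this is only a rough estimate and breaks down as the
 * system approaches saturation.
 */
public class CapacityEstimate {

    /** Estimate CPU utilization at a target load from one measured data point. */
    static double projectedCpu(double measuredCpuPercent,
                               int measuredUsers,
                               int targetUsers) {
        return measuredCpuPercent * ((double) targetUsers / measuredUsers);
    }

    public static void main(String[] args) {
        // Suppose a recorded run showed 35% CPU at 1,000 virtual users.
        double projected = projectedCpu(35.0, 1_000, 2_500);
        // Linear projection for 2,500 users: 87.5% CPU, near saturation,
        // suggesting the configuration would need to change first.
        System.out.printf("Projected CPU at 2,500 users: %.1f%%%n", projected);
    }
}
```

A projection like this does not replace a retest at the higher load, but it indicates whether the current configuration is likely to accommodate the change.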