Key attributes of a performance test

Before we delve into the different types of performance tests, we define the attributes that distinguish one performance test from another.

In the case of eCommerce, the test attributes that dictate workload include:

  1. Number and type of virtual users

    Simply put, virtual users are the concurrent, simulated users. A test run with 100 virtual users means that there are 100 concurrent, simulated users who are using the site at any given time during the test.

    These users can be configured to reuse the same session or to open a different session for every new scenario execution. Usually, you would start each new scenario execution with a new session.

    Also, if the 100 virtual users include registered users, that does not mean the same logon IDs are used over and over again. On the contrary, you would almost always want to use random users; the only exception to this rule is when your unique business requirements specifically call for such a scenario. In our testing, we noticed that if we kept using the same test IDs over and over again, the stress-endurance test or the reliability test showed gradual throughput degradation. (The driver sketch at the end of this section picks a random logon ID for every scenario execution.)

    Virtual users can be store administrators, shoppers, marketing managers, and so on. Depending on your scenario, you may need to further define the users based on certain demographic requirements of your business. This information should be available from your business analyst.

  2. Thinktime

    Thinktime, the pause a virtual user takes between actions, is decided by the distribution and type of incoming user traffic. Again, your business analyst should be able to provide thinktime specifications (or requirements) for your testing. (The driver sketch at the end of this section applies a randomized thinktime between scenario executions.)

  3. Scenario

    Scenario refers to the series of actions that virtual users execute while interacting with the WebSphere Commerce site. Inherently, this also includes the different interfaces or utilities that might be used to interact with the site. The scenario is decided by the use cases for your site design, which should be available in your site design documents.

  4. Duration

    The duration of a test is dictated by the intention of the test case. If the intention is to find deadlocks, memory leaks, or gradual throughput degradation (GTD), then the test should run for a long time, on the order of hours or days, as is the case with soak tests. If, however, you are testing the scalability of the system, then the test can run for a much shorter duration, on the order of an hour or a few hours.

    The control attributes are related to the site setup and configuration:

  5. Store

    Store refers to both the file assets and the data assets that constitute your store. As discussed in Store complexity scalability, the complexity of your store pages and the design of your database customizations impact the performance of your site. The amount of data is another factor that can influence your site's performance.

    It would be ideal to test your site's performance against a backup copy of your production database. Although your initial tendency may be to focus on the performance considerations of your database customizations, you should also consider purging unwanted data from the database using the WebSphere Commerce dbclean utility, which IBM recommends that you run periodically.

  6. Environment

    Environment includes both the hardware and software, along with their configuration settings.

  7. Hardware and site topology

    The hardware that you employ for your site impacts its performance (for example, CPU speed, amount of memory, disk size and speed, disk controllers, disk cache, and so on). The topology of your site also has a significant impact on its performance. For example, placing the WebSphere Commerce database on the same machine as the WebSphere Commerce application causes the two to compete for the same hardware resources. When deciding on a topology, you also need to consider the level of clustering that you need for your site, the active/active or active/passive support that you require, security and firewall options, and so on.

  8. Hardware and software configuration

    This refers to all the hardware and software configuration options available to tune your site's performance, as discussed in this book: for example, setting up a 32-bit or 64-bit database, tuning WebSphere Application Server, adjusting the WebSphere Commerce configuration, and so on.

    The test results provide a rich set of indicators. Most small to medium sites test for throughput and response time, whereas breaking-point and capacity testing is left for major upgrades to the site. (The second sketch at the end of this section derives these indicators from recorded results.)

  9. Throughput

    The number of client interactions with WebSphere Commerce per unit of time. The unit of interaction can be defined at different levels of granularity (for example, transactions or completed scenarios).

  10. Response time

    The elapsed time between a client request and the server response.

  11. Error rate

    Error rate should be defined in the same terms as the throughput. For the reasons discussed in 1.2.4, Throughput, IBM recommends that you define throughput in terms of scenarios completed. In that case, the error rate is defined as:

    Scenario Error Rate = Total # of failed scenarios / Total # of attempted scenarios

    However, if the throughput is defined in terms of transactions, then the error rate is defined as:

    Transaction Error Rate = Total # of failed transactions / Total # of attempted transactions

  12. Capacity

    Capacity can be maximum system capacity, required business capacity, or expected peak capacity.

  13. Breaking point

    The point of meltdown where the site performance degrades severely and unpredictably.

    The key idea of performance testing a WebSphere Commerce site is that, for a given set of scenarios and control attributes, you adjust the number of virtual users and the thinktime to put the system under the intended stress.
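
To make these workload attributes concrete, the following minimal sketch shows how a load driver might implement them in Java: each virtual user runs in its own thread, picks a random registered logon ID for every scenario execution, starts that execution with a fresh session, and pauses for a randomized thinktime between executions. The logon IDs, the executeScenario() method, and the timing values are hypothetical placeholders; a real test tool would issue HTTP requests against your store and walk through your use cases.

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.atomic.AtomicInteger;

    public class VirtualUserDriver {

        // Hypothetical pool of registered test users; a real test would load
        // a much larger list so that logon IDs are not reused constantly.
        static final List<String> LOGON_IDS =
                List.of("shopper01", "shopper02", "shopper03");

        static final AtomicInteger attempted = new AtomicInteger();
        static final AtomicInteger failed = new AtomicInteger();

        public static void main(String[] args) throws InterruptedException {
            int virtualUsers = 100;                  // concurrent simulated users
            long endTime = System.currentTimeMillis() + 60L * 60L * 1000L; // 1 hour

            CountDownLatch done = new CountDownLatch(virtualUsers);
            for (int i = 0; i < virtualUsers; i++) {
                new Thread(() -> {
                    while (System.currentTimeMillis() < endTime) {
                        // Pick a random registered user for every scenario
                        // execution instead of reusing the same logon ID.
                        String logonId = LOGON_IDS.get(
                                ThreadLocalRandom.current().nextInt(LOGON_IDS.size()));
                        attempted.incrementAndGet();
                        try {
                            executeScenario(logonId); // new session per execution
                        } catch (Exception e) {
                            failed.incrementAndGet();
                        }
                        // Thinktime: randomized pause between scenario executions
                        // to mimic a real user reading pages.
                        sleepQuietly(2_000 + ThreadLocalRandom.current().nextInt(6_000));
                    }
                    done.countDown();
                }).start();
            }
            done.await();
            System.out.printf("attempted=%d failed=%d%n", attempted.get(), failed.get());
        }

        // Placeholder: a real implementation would log on with a fresh session
        // and walk the use-case steps (browse, add to cart, check out, and so on).
        static void executeScenario(String logonId) throws Exception {
        }

        static void sleepQuietly(long ms) {
            try {
                Thread.sleep(ms);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

Raising virtualUsers or shortening the thinktime increases the stress on the system, which is exactly the adjustment described above.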
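
The result indicators can then be derived from the raw measurements that such a driver records. The following sketch, using hypothetical sample values and the definitions above, computes throughput in scenarios completed per hour, the average response time, and the scenario error rate (the ScenarioResult record requires Java 16 or later).

    import java.util.List;

    public class ResultIndicators {

        // One record per scenario execution: elapsed time and outcome.
        record ScenarioResult(long responseTimeMs, boolean failed) { }

        public static void main(String[] args) {
            // Hypothetical sample data; a real run would load thousands of
            // results recorded by the load driver.
            List<ScenarioResult> results = List.of(
                    new ScenarioResult(850, false),
                    new ScenarioResult(1_200, false),
                    new ScenarioResult(9_400, true),
                    new ScenarioResult(700, false));
            long testDurationSec = 3_600; // a one-hour test run

            long attemptedCount = results.size();
            long failedCount = results.stream().filter(ScenarioResult::failed).count();
            double avgResponseMs = results.stream()
                    .mapToLong(ScenarioResult::responseTimeMs)
                    .average().orElse(0);

            // Throughput defined in terms of scenarios completed per hour.
            double scenariosPerHour =
                    (attemptedCount - failedCount) * 3_600.0 / testDurationSec;
            // Scenario error rate = total failed scenarios / total attempted scenarios.
            double errorRate = (double) failedCount / attemptedCount;

            System.out.printf(
                    "throughput=%.1f scenarios/hour, avg response=%.0f ms, error rate=%.1f%%%n",
                    scenariosPerHour, avgResponseMs, errorRate * 100);
        }
    }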