General reference for performance testing
1. Data correlation rules
2. Error conditions
3. Resource monitoring data sources
4. Response time breakdown data sources
5.1. Performance Requirements report
5.2. Synchronization Point report
5.3. Transaction report
5.4. Transaction Percentile report
5.5. Transaction Net Server Time Percentile report
5.6. HTTP performance test reports
5.6.1. Performance report
5.6.2. Page Element report
5.6.3. Why response time of a page does not equal the sum of its requests
5.6.4. Page Percentile report
5.6.5. Verification Points report
5.7. SAP performance test reports
5.7.1. SAP Performance report
5.7.2. SAP Verification Points report
5.8. Citrix performance test reports
5.8.1. Citrix Performance report
5.8.2. Citrix Verification Points report
5.8.3. Citrix response time percentile report
5.9. Web service performance test reports
5.9.1. Service Performance report
5.9.2. Web Service Verification Points report
5.10. Socket performance test reports
5.10.1. Socket Performance report
5.11. HTTP counters
5.11.1. Byte counters
5.11.2. Page counters
5.11.3. Run counters
5.11.4. Test counters
5.11.5. Transaction counters
5.12. SAP counters
5.12.1. SAP screen counters
5.13. Citrix counters
5.13.1. Citrix window synchronization counters
5.13.2. Citrix image synchronization counters
5.13.3. Citrix timer counters
5.13.4. Citrix user action counters
5.13.5. Citrix window counters
5.14. Service counters
5.14.1. Service call counters
5.14.2. Service verification point counters
5.15. Socket counters
5.15.1. Socket counters
1. Data correlation rules
You can customize how data is correlated by using data correlation rules.
Rules that create elements
- Create a built-in data source
- Inserts a built-in data source in the test.
- Create a custom code
- Inserts a custom code element in the test.
- Create a datapool column
- Creates a datapool column that can be used by substitution sites.
- Create a reference
- Creates a reference in data that matches a specified regular expression.
- Create a substitution
- Creates a substitution site in data that matches a specified regular expression.
- Create a variable assignment
- Inserts a variable assignment in the test.
- Create a variable declaration
- Creates a variable that can be used by substitution sites.
Rules that change elements
- Encode a substitution
- Specifies whether substitution fields are encoded or decoded.
- Rename a data source site
- Changes the name of a data source that matches a regular expression.
- Rename a substitution site
- Changes the name of a substitution site that matches a regular expression.
- Replace reference regular expression
- Changes the regular expressions that are used by references.
- Unlink a substitution
- Removes the links between substitution sites and references and other data sources.
Rules that find elements
- Find a reference
- Returns a reference. Add child conditions to specify the reference to find.
- Find a substitution
- Returns a substitution site. Add child conditions to specify the substitution site to find.
- Find a variable
- Returns a variable. Add child conditions to specify the variable to find.
Rules that remove elements
- Remove a built-in data source
- Deletes data sources from the test. Add child conditions to specify the data sources to delete. If you do not add child conditions, this rule deletes all data sources in the test.
- Remove a custom code
- Deletes custom code elements from the test. Add child conditions to specify the custom code elements to delete. If you do not add child conditions, this rule deletes all custom code elements in the test.
- Remove a reference
- Deletes references from the test. Add child conditions to specify the references to delete. If you do not add child conditions, this rule deletes all references in the test.
- Remove a substitution
- Deletes substitution sites from the test. Add child conditions to specify the substitution sites to delete. If you do not add child conditions, this rule deletes all substitution sites in the test.
- Remove a variable assignment
- Deletes variable assignments from the test. Add child conditions to specify the variable assignments to delete. If you do not add child conditions, this rule deletes all variable assignments in the test.
- Remove a variable declaration
- Deletes variables from the test. Add child conditions to specify the variables to delete. If you do not add child conditions, this rule deletes all variables in the test.
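Conceptually, the rules that create references and substitutions are driven by regular expressions: a reference is a regular expression whose capture group extracts a value from response data, and a substitution site reuses that value in a later request. The following plain Java sketch illustrates the concept only; it is not the product's rule API, and the response data and expression are hypothetical.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ReferenceConceptExample {
        public static void main(String[] args) {
            // Hypothetical recorded response containing a session token.
            String response = "<input name=\"sessionId\" value=\"A1B2C3\">";

            // A reference: a regular expression whose first capture group
            // extracts the value to be reused by substitution sites.
            Pattern reference = Pattern.compile("name=\"sessionId\" value=\"([^\"]*)\"");
            Matcher m = reference.matcher(response);
            if (m.find()) {
                // A substitution site: the extracted value replaces the
                // recorded value in a subsequent request.
                String request = "GET /cart?session=" + m.group(1);
                System.out.println(request); // GET /cart?session=A1B2C3
            }
        }
    }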
2. Error conditions
Error conditions include verification point failures, connection failures, server timeouts, custom code alerts, custom code exceptions, and problems with data correlation.
Page Title Verification Point Failure [HTTP]
The returned title for the primary request for an HTTP page does not match the expected title. The default value of the expected page title is what is returned between the <title></title> tags during recording.
Response Code Verification Failure [HTTP]
The returned response code does not match the expected response code. You can specify an exact match or a relaxed match.
Response Size Verification Failure [HTTP]
The number of bytes returned does not match the expected number of bytes. You can control how closely the returned response size must match the recorded response size.
Content Verification Point Failure
The received data does not match the expected data. Content verification point controls are protocol-specific.
Connection Failure
The workbench or agent computers cannot connect to the server under test.
Authentication Failure
An attempt to log in to the server under test failed.
End of Datapool reached
The last row of the datapool is reached.
Reference Extraction Failure
The response received during playback is different from the response received when the test was recorded. Data correlation failed because the regular expression that is associated with the reference did not match the expected value.
Substitution Failure
A reference for an expected data substitution is a null reference.
Server Timeout
The server under test does not respond before the timeout interval elapses.
Custom Verification Point Failure
A custom verification point did not return a Pass status after performing a verification written in Java code.
Custom Code Alert
Custom code reported an RPTCondition.CustomCodeAlert condition. The following statement reports a custom code alert:

    tes.getTestLogManager().reportErrorCondition(RPTCondition.CustomCodeAlert);
Custom Code Exception
The custom code in a test has an exception. By default, Rational Performance Tester exits the user whenever there is an exception in custom code.
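For context, here is a minimal sketch of a custom code class that reports the alert condition described above. The exec signature follows the product's ICustomCode2 custom code interface; the class name is hypothetical, and the exact package names for the imports are assumptions that can vary by product version.

    import com.ibm.rational.test.lt.kernel.custom.ICustomCode2;
    import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
    import com.ibm.rational.test.lt.kernel.services.RPTCondition;

    public class AlertingCustomCode implements ICustomCode2 {
        // Called each time the custom code element executes for a virtual user.
        public String exec(ITestExecutionServices tes, String[] args) {
            // Report the Custom Code Alert error condition to the test log.
            tes.getTestLogManager().reportErrorCondition(RPTCondition.CustomCodeAlert);
            return null;
        }
    }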
3. Resource monitoring data sources
Resource monitoring data can be captured or imported from a number of sources.
IBM Tivoli Monitoring
IBM Tivoli Monitoring monitors and manages system and network applications on a variety of platforms and keeps track of the availability and performance of all parts of your enterprise network. IBM Tivoli Monitoring provides reports that you can use to track trends and troubleshoot problems. Not all IBM Tivoli Monitoring agents are supported. Over 100 IBM Tivoli Monitoring agents are available from IBM and non-IBM vendors. The following IBM Tivoli Monitoring agents are supported for resource monitoring data collection:
- Operating system agents
- Monitoring Agent for Linux OS
- Monitoring Agent for UNIX OS
- Monitoring Agent for Windows OS
- Monitoring Agent for z/OS
- Application agents
- Monitoring Agent for Citrix
- Monitoring Agent for IBM DB2
- Monitoring Agent for IBM Tivoli Composite Application Manager for WebSphere
- Monitoring Agent for IBM WebSphere Application Server
- Monitoring Agent for IBM WebSphere MQ
- Monitoring Agent for Oracle Database
- Monitoring Agent for SNMP-MIB2 (only)
IBM DB2 Monitoring
IBM DB2 collects information from the database manager, its databases, and any connected applications. The snapshot monitor captures the state of database activity at a particular point in time.
IBM WebSphere Performance Monitoring Infrastructure
IBM WebSphere Application Server collects performance data and provides interfaces so that external applications can monitor that performance data. To help identify performance problems and help tune an environment that runs web applications, data is collected through the Performance Monitoring Infrastructure (PMI). The Performance Monitoring Infrastructure is the underlying framework in WebSphere Application Server that gathers performance data from various runtime resources, such as the Java Virtual Machine (JVM) and thread pools, and application components, such as servlets and Enterprise JavaBeans (EJB) components.
Java Management Extensions
Java Management Extensions (JMX) can monitor performance characteristics of application servers and applications that are run on application servers. The following application servers support JMX monitoring:
- Apache HTTP Server
- Apache Tomcat
- JBoss Application Server
- Oracle WebLogic Server
- SAP NetWeaver
Java Virtual Machines also support JMX monitoring.
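As an illustration of the kind of counter JMX exposes, this sketch polls one JVM memory value using the standard javax.management API. The endpoint URL and port are hypothetical; the target JVM must be started with remote JMX enabled (for example, with the com.sun.management.jmxremote.* system properties):

    import java.lang.management.MemoryMXBean;
    import javax.management.JMX;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxPollExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical remote JMX endpoint.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                MemoryMXBean memory = JMX.newMXBeanProxy(
                    mbsc, new ObjectName("java.lang:type=Memory"), MemoryMXBean.class);
                System.out.println("Heap used (bytes): "
                    + memory.getHeapMemoryUsage().getUsed());
            }
        }
    }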
Oracle Database Metrics
Oracle Database collects metrics that are related to database health and workload.
UNIX rstatd
With the rstatd daemon, users can collect performance statistics remotely from networked UNIX (or Linux) computers. The rstatd daemon collects statistics that are related to network, virtual memory, interrupt, disk, and processor usage.
Simple Network Management Protocol (SNMP) agents
The Simple Network Management Protocol (SNMP) is typically used to monitor network health, performance, and hardware. SNMP agents are software components that are installed on managed devices and collect management information.
Windows Performance Monitor
Windows Performance Monitor (PerfMon) collects data from performance objects. The Microsoft Windows operating system provides performance objects for the major hardware components: memory, processors, and so on. Each performance object provides specific performance counters. For example, the Memory object provides a Pages/sec counter that tracks the rate of memory paging. Other programs on the computer, including Internet Information Services (IIS) and Microsoft SQL Server, can install their own performance objects. For example, a mail server program might install a mail performance object. The specific counters depend on the version of the Windows operating system and on the additional programs that are installed on the computer.
4. Response time breakdown data sources
Response time breakdown data can be imported from a number of sources.
IBM Tivoli Composite Application Manager for Application Diagnostics
IBM Tivoli Composite Application Manager for Application Diagnostics enables users to view the health of web applications and servers, then drill down to diagnostic information for specific application requests to identify the root cause of problems. See the following website for more information: http://publib.boulder.ibm.com/infocenter/tivihelp/v24r1/index.jsp?topic=/com.ibm.itcamfad.doc_7101/ic-homepage.html.
IBM Tivoli Composite Application Manager for Response Time Tracking
IBM Tivoli Composite Application Manager for Response Time Tracking measures the level of service that the application delivers to users. It does this by monitoring the availability and response time that users experience at the client desktop. It works with a wide range of web-based, e-business, and Microsoft Windows applications that run in many different environments. See the following website for more information: http://publib.boulder.ibm.com/infocenter/ieduasst/tivv1r0/topic/com.ibm.iea.cam4rtt/plugin_coverpage.html.
IBM Tivoli Composite Application Manager for WebSphere
IBM Tivoli Composite Application Manager for WebSphere provides immediate problem determination, availability monitoring, and performance analysis for enterprise WebSphere applications running on Windows, UNIX, OS/400, and z/OS environments. IBM Tivoli Composite Application Manager for WebSphere monitors heterogeneous environments consisting of both mainframes and distributed systems. See the following website for more information: http://publib.boulder.ibm.com/infocenter/tivihelp/v3r1/topic/com.ibm.itcamwas.doc/welcome.htm.
IBM Tivoli Monitoring for Transaction Performance
IBM Tivoli Monitoring for Transaction Performance is a centrally managed suite of software components that monitor the availability and performance of web-based services and Windows applications. See the following website for more information: http://publib.boulder.ibm.com/tividd/td/ITMFTP/GC32-9189-00/en_US/HTML/tmtp_config02.htm.
5.1. Performance Requirements report
The Performance Requirements report validates the performance requirements that you set in a schedule and, if your protocol supports test requirements, the performance requirements that you set in a test. Validation is accomplished by comparing the data in the run to the performance requirements that you set on the data.
Status Summary page
With the Status Summary page, you can quickly analyze the performance requirements that are defined in a test result. The page contains two tables.
- The first table provides the overall status of the run and the percentage of performance requirements that passed.
- The second table lists all standard performance requirements that you defined.
Each performance requirement has a row in the table that explains the target of the performance requirement (for example, an HTTP page), specification, and status.
Overall Summary page
The Overall Summary page provides a high-level, graphical analysis of performance requirements, both standard and supplemental. The Overall Summary page contains two tables and two pie charts. The first table presents the pass/fail status and percentage pass data for performance requirements and supplemental performance requirements.
Both performance requirements and supplemental performance requirements are represented in the center of the report by a pie chart. In both cases, failures are shown in red and passes are shown in green.
The final graphic on this page presents numeric details for each performance requirement type and for the union of both types.
Details page
The Details page provides a detailed analysis of standard performance requirements. It contains two tables and a bar chart. As with the Status Summary page, the overall status of the run and the percent passed value for the run are presented in a table at the top of the page. The bar chart at the center of the report displays the margin of each performance requirement. The margin is a percentage value that allows improvement or regression analysis with regard to a performance requirement. When a performance requirement is in the "passed" state, its margin is greater than or equal to 0 and is calculated as follows:

% Margin = abs(specification - observed) / specification

When a performance requirement is in the "failed" state, its margin is less than or equal to 0 and is calculated as follows:

% Margin = -abs(specification - observed) / specification

Margins are useful in comparison mode because you can detect improvements or regressions before they are significant enough to change the requirement from pass to fail. The table at the bottom of the Details page contains a row for each performance requirement. Each requirement is presented with an explanation of its target, observed result, specification, margin, and status. Passed statuses are shown in green and failed statuses are shown in red.
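A minimal sketch of the margin calculation, using a hypothetical requirement of a 2000 ms average response time:

    public class MarginExample {
        // Margin is positive when the requirement passed and negative when
        // it failed, following the formulas above.
        static double marginPercent(double specification, double observed,
                boolean passed) {
            double m = Math.abs(specification - observed) / specification * 100.0;
            return passed ? m : -m;
        }

        public static void main(String[] args) {
            System.out.println(marginPercent(2000, 1500, true));  // 25.0: 25% headroom
            System.out.println(marginPercent(2000, 2500, false)); // -25.0: 25% over
        }
    }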
Supplemental Details page
The Supplemental Details page shows a detailed analysis of supplemental performance requirements. It contains the same data as the Details page, except that it pertains to supplemental requirements.
5.2. Synchronization Point report
This report provides information about the synchronization points in test runs and lets you manually release a user from a synchronization point. To release a user, right-click in the report and select Manage Synchronization Points.
The Synchronization Point report contains the following information for each synchronization point in the run:
- The name of the synchronization point.
- Time-out Value. The time after which the synchronization point is automatically released. A value of 0 means that the synchronization point will be released after the arrival of the first user.
- Users Late. The number of virtual users that have arrived "late"; that is, after the synchronization point was released.
- Users Arrived. The number of virtual users that have arrived at the synchronization point.
- Current State. The state of the synchronization point. The state can be:
- Inactive. No users have arrived or all locations are inactive.
- Active. At least one user has arrived.
- Released. All users have arrived, have been released manually, or have timed out.
- Users Expected. The number of virtual users still expected to arrive.
- Run Duration. The time between the first user's arrival and the first user's release. The count begins as soon as any location reports an Active state, and stops as soon as any location reports a Released state. The accuracy of the time might be adversely affected by a large statistics interval.
5.3. Transaction report
This report summarizes the success of transactions in the run, graphs the response trend of each transaction during a specific sample interval, and shows the transaction throughput.
This report graphs sample intervals within a run. You set the Statistics sample interval value in the schedule, as a schedule property.
Overall page
The Overall Transaction Rate graph shows the average elapsed time for all transactions during a specific sample interval. Elapsed time is the actual time spent within the transaction container. If you have set staged loads in the schedule, this graph also delineates the stages with time range markers, in various colors, at the top. The table under the graph lists the following information:
- The average elapsed time for all transactions in the entire run.
- The standard deviation of the elapsed time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average elapsed time of 12 ms. However, this does not mean that the elapsed times are similar. System A might have elapsed times of 11, 12, 13, and 12 ms. System B might have elapsed times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the elapsed time is more varied. (A worked sketch of this calculation follows this list.)
- The longest transaction in the entire run.
- The shortest transaction in the entire run.
- The average net server time for all transactions in the entire run.
- The standard deviation of the net server time.
- The longest net server time in the entire run.
- The shortest net server time in the entire run.
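The following sketch computes the population standard deviation for the System A and System B samples in the example above:

    public class StdDevExample {
        // Population standard deviation: the square root of the mean
        // squared deviation from the mean.
        static double stdDev(double[] values) {
            double mean = 0;
            for (double v : values) mean += v;
            mean /= values.length;
            double sumSq = 0;
            for (double v : values) sumSq += (v - mean) * (v - mean);
            return Math.sqrt(sumSq / values.length);
        }

        public static void main(String[] args) {
            double[] systemA = {11, 12, 13, 12}; // mean 12 ms, tightly grouped
            double[] systemB = {1, 20, 25, 2};   // mean 12 ms, widely spread
            System.out.printf("System A: %.2f ms%n", stdDev(systemA)); // about 0.71
            System.out.printf("System B: %.2f ms%n", stdDev(systemB)); // about 10.65
        }
    }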
Elapsed Time vs. Time page
The Elapsed Time vs. Time graph shows the average response of each transaction during a specific sample interval. Each transaction is represented by a separate line. If you have set staged loads in the schedule, this graph also delineates the stages with time range markers, in various colors, at the top. The table under the graph lists the following information for each transaction:
- The minimum elapsed time for the entire run.
- The average elapsed time for the entire run. This is similar to the graph, but the information in the table is the average for the entire run rather than the average per sample interval.
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the response time is more varied.
- The maximum elapsed time for the entire run.
- The rate, per second, at which the transaction was completed.
- The number of attempts for the transaction.
Net Server Time vs. Time page
Net server time for a transaction is a measured time of interactions with the server. Typically, this does not include think times or Rational Performance Tester processing. The server interaction calculation is protocol-specific. For example, in the HTTP protocol, the net server time is exactly the sum of all page response times. The elapsed time (wall clock time) includes think time and other Rational Performance Tester processing overhead.
Transaction Throughput page
These line graphs provide an overview of the transaction frequency and the number of users that are adding load, both over the course of a run.
- The Transaction Hit Rate graph shows the overall rates for starting and completing transactions during a specified sample interval. If you have set staged loads in the schedule, this graph also delineates the stages with time range markers, in various colors, at the top. The summary table under the graph lists the transaction rate per second and the number of transactions that were completed for the entire run.
- The User Load graph shows the number of active users and the number of users that completed testing over the course of the run. The summary table under the graph lists the number of active users, the number of users that completed testing, and the total number of users for the latest sample interval.
5.4. Transaction Percentile report
This report shows the 85th, 90th, and 95th percentile elapsed times for all users and the union of all transactions in a run, as well as for the 10 slowest transactions in a run.
The default percentiles in this report, 85, 90, and 95, are sufficient for most purposes. However, if you are required to report on a different percentile set, click Window > Preferences > Test > Percentile Analysis Targets to change the percentiles in this report and in the Page Percentile report.
The Summary page of this report has a graph with three bars, which represent the 85th percentile, 90th percentile, and 95th percentile elapsed times for all users and for all transactions in the run. For the 85th percentile bar, 85% of all users achieved the indicated elapsed time or better. For the 90th percentile bar, 90% of all users achieved the indicated elapsed time or better. For the 95th percentile bar, 95% of all users achieved the indicated elapsed time or better.
The 85%, 90%, and 95% pages show the elapsed time percentiles of the 10 slowest transactions in the run. For example, if you click the tab for the 85th percentile, and the total for a transaction is 110 (the total is beneath each bar), you know that 85 percent of the elapsed times for that transaction are less than or equal to 110 milliseconds (ms).
This graph provides an overall idea of the elapsed times for each transaction. For example, the Transaction report might indicate that a Login transaction is one of the 10 slowest transactions. However, it is possible that only one instance of the Login transaction was extremely slow while the other instances of the Login transaction were within acceptable range. The Transaction Percentile report shows which transactions have slow elapsed time averages because they were slow in general, not because a few elapsed times (out of many) were extremely slow.
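To make the percentile definition concrete, this sketch computes nearest-rank percentiles over a small set of elapsed times. The sample values are hypothetical, and the product's exact percentile method may differ:

    import java.util.Arrays;

    public class PercentileExample {
        // Nearest-rank percentile: the smallest observed value such that at
        // least p percent of the samples are less than or equal to it.
        static double percentile(double[] samples, double p) {
            double[] sorted = samples.clone();
            Arrays.sort(sorted);
            int rank = (int) Math.ceil(p / 100.0 * sorted.length);
            return sorted[Math.max(rank, 1) - 1];
        }

        public static void main(String[] args) {
            double[] elapsedMs = {95, 100, 102, 104, 105, 107, 108, 110, 250, 900};
            System.out.println(percentile(elapsedMs, 85)); // 250.0
            System.out.println(percentile(elapsedMs, 90)); // 250.0
            System.out.println(percentile(elapsedMs, 95)); // 900.0
        }
    }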
The table beneath the graph provides more detailed information for each transaction:
- The minimum elapsed time for the run.
- The average elapsed time for the run.
- The standard deviation of the elapsed time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average elapsed time of 12 ms. However, this does not mean that the elapsed times are similar. System A might have elapsed times of 11, 12, 13, and 12 ms. System B might have elapsed times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the elapsed time is more varied.
- The maximum elapsed time for the run.
- The 85th percentile for the run. That is, for this particular transaction, 85% of the elapsed times were equal to or faster than this time.
- The 90th percentile for the run. That is, for this particular transaction, 90% of the elapsed times were equal to or faster than this time.
- The 95th percentile for the run. That is, for this particular transaction, 95% of the elapsed times were equal to or faster than this time.
- The number of attempts in the run.
5.5. Transaction Net Server Time Percentile report
This report shows the 85th, 90th, and 95th percentile net server times for all users and the union of all transactions in a run. The report also shows the 10 slowest transactions in a run. The net server time corresponds to the cumulative server response times within a transaction. Net server time does not include think times and delays, which are included in the elapsed time.
The default percentiles in this report, 85th, 90th, and 95th, are sufficient for most purposes. However, if you must report on a different percentile set, click Window > Preferences > Test > Percentile Analysis Targets to change the percentiles in this report and in the Page Percentile report.
The Summary page of this report has a graph with three bars, which represent the 85th percentile, 90th percentile, and 95th percentile net server times for all users and for all transactions in the run. For the 85th percentile bar, 85% of all users achieved the indicated net server time or better. For the 90th percentile bar, 90% of all users achieved the indicated net server time or better. For the 95th percentile bar, 95% of all users achieved the indicated net server time or better.
The 85%, 90%, and 95% pages show the net server time percentiles of the 10 slowest transactions in the run. For example, if you click the tab for the 85th percentile, and the total for a transaction is 110 (the total is beneath each bar), you know that 85 percent of the net server times for that transaction are less than or equal to 110 milliseconds (ms).
This graph provides an overall idea of the net server times for each transaction. For example, the Transaction report might indicate that a login transaction is one of the 10 slowest transactions. However, it is possible that only one instance of the login transaction was extremely slow while the other instances of the login transaction were within acceptable range. The Transaction Net Server Time Percentile report shows which transactions have slow net server time averages because they were slow in general, not because a few net server times (out of many) were extremely slow.
The table beneath the graph provides more detailed information for each transaction:
- The minimum net server time for the run.
- The average net server time for the run.
- The standard deviation of the net server time. The standard deviation tells you how tightly the data is grouped about the mean.
- The maximum net server time for the run.
- The 85th percentile for the run. That is, for this particular transaction, 85% of the net server times were equal to or faster than this time.
- The 90th percentile for the run. That is, for this particular transaction, 90% of the net server times were equal to or faster than this time.
- The 95th percentile for the run. That is, for this particular transaction, 95% of the net server times were equal to or faster than this time.
- The number of attempts in the run.
5.6. HTTP performance test reports
When you test an HTTP system, these reports are produced during a run and saved after a run.
5.6.1. Performance report
This report summarizes the validity of the run, summarizes the data most significant to the run, shows the response trend of the slowest 10 pages in the test, and graphs the response trend of each page for a specified interval.
Overall page
The Overall page provides this information:
- A progress indicator that shows the state of the run.
- A bar chart that shows the following information to indicate the overall success of the run:
- For the entire run, the percentage of status codes that indicated page status code success.
Page status code success means that the response code verification point passed for the primary request. If a primary request has no verification points, success means that the server received the primary request and returned a response with a status code in the 200 or 300 category or returned an expected response with a status code in the 400 or 500 category.
- For the entire run, the percentage of status codes that indicated page element success.
Page element success means that the response code verification point passed for that request. If a request has no verification points, success means that the server received the request and returned a response with a status code in the 200 or 300 category or returned an expected response with a status code in the 400 or 500 category.
- The percentage of page title verification points that passed, if any were set.
- The percentage of response code or response size verification points that passed, if any were set.
Summary page
The Summary page summarizes the most important data about the test run, so that you can analyze the final or intermediate results of a test at a glance. The Summary page displays the following Run Summary information:
- The name of the test.
- The number of users that are active and the number of users that have completed testing. This number is updated during the run.
- The elapsed time. This is the run duration, which is displayed in hours, minutes, and seconds.
- The status of the run. This can be Initializing Computers, Adding Users, Running, Transferring data to test log, Stopped, or Complete.
- Displaying results for computer: All Hosts. To see summary results for individual computers, click the computer name in the Performance Test Runs view.
The Summary page displays the following Page Summary information:
- The total number of page attempts and hits. A page attempt means that a primary request was sent; it does not include requests within the page. A hit means that the server received the primary request and returned any complete response.
- The average response time for all pages. Response time is the sum of response times for all page elements (including the connect time and inter-request delays). Response time counters omit page response times for pages that contain requests with status codes in the range of 4XX (client errors) to 5XX (server errors). The only exception is when the failure (for example, a 404) is recorded and returned, and the request is not the primary request for the page. Page response times which contain requests that time out are always discarded.
- The standard deviation of the average response time for all pages.
- The maximum response time for all pages.
- The minimum response time for all pages.
- A summary of the results for page verification points, if these verification points were set.
The Summary page displays the following Page Element Summary information:
- The total number of page element attempts and hits. A page element attempt means that a request was sent. A hit means that the server received the request and returned any complete response.
- The total number of page elements where no request was sent to the server because the client determined that the page elements were fresh in the local cache.
- The average response time for all page elements. Response time is the time between the first request character sent and the last response character received. Response times for HTTP requests that time out or that return an unexpected status code (the recorded and played back codes do not match) in the range of 4XX (client errors) to 5XX (server errors) are discarded from the reported values.
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the response time is more varied.
- The percentage of verification points that passed.
- A summary of the results for page element verification points, if these verification points were set.
If you have set transactions in your test, the Summary page displays the following Transaction information:
- The minimum, maximum, and average response time for all transactions. Response time is the actual time spent within the transaction container.
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the response time is more varied.
- The total number of transactions that were started and the total number that have been completed.
Page Performance page
The Page Performance page shows the average response of the slowest 10 pages in the test as the test progresses. With this information, you can evaluate system response during and after the test. The bar chart shows the average response time of the 10 slowest pages. Each bar represents a page that you visited during recording. As you run the test, the bar chart changes, because the 10 slowest pages are updated dynamically during the run. For example, the Logon page might be one of the 10 slowest pages at the start of the run, but then, as the test progresses, the Shopping Cart page might replace it as one of the 10 slowest. After the run, the page shows the 10 slowest pages for the entire run.
To display the 10 slowest page element response times, right-click a page and click Display Page Element Responses.
The table under the bar chart provides the following additional information:
- The minimum response time for each page in the run. Response time is the time between the first request character sent and the last response character received. Response time counters omit page response times for pages that contain requests with status codes in the range of 4XX (client errors) to 5XX (server errors). The only exception is when the failure (for example, a 404) is recorded and returned, and the request is not the primary request for the page. Page response times which contain requests that time out are always discarded.
- The average response time for each page in the run. This matches the information in the bar chart.
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the response time is more varied.
- The maximum response time for each page in the run.
- The number of attempts per second to access each page. An attempt means that a primary request was sent; it does not include requests within the page.
- The total number of attempts to access the page.
Response vs. Time Summary page
The Response vs. Time Summary page shows the average response trend as graphed for a specified interval. It contains two line graphs with corresponding summary tables. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages.
- The Page Response vs. Time graph shows the average response time for all pages during the run. Each point on the graph is an average of what has occurred during that interval. The table after the graph lists the total average response time for all pages in the run and the standard deviation of the average response time.
- The Page Element Response vs. Time graph shows the average response time for all page elements during the run. Each point on the graph is an average of what has occurred during that interval. The table under the graph lists the total average response time for all page elements in the run and the standard deviation of the average response time. The table also lists the total number of page elements where no request was sent to the server because the client determined that the page elements were fresh in the local cache. You set the Statistics sample interval value in the schedule, as a schedule property.
Response vs. Time Detail page
The Response vs. Time Detail page shows the response trend as graphed for the sample intervals. Each page is represented by a separate line. The Average Page Response Time graph shows the average response of each page for each sample interval. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages. The table after the graph provides the following additional information:
- The minimum page response time for the run. Response time is the time between the first character of the primary request sent and the last response character received. Response time counters omit page response times for pages that contain requests with status codes in the range of 4XX (client errors) to 5XX (server errors). The only exception is when the failure (for example, a 404) is recorded and returned, and the request is not the primary request for the page. Page response times that contain requests that time out are always discarded.
- The average page response time for the run. This is similar to the graph, but the information in the table includes the entire run.
- The maximum page response time for the run.
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the response time is more varied.
- The rate of page attempts per interval for the most recent statistics sample interval. A page attempt means that the primary request was sent; it does not include requests within the page. You set the Statistics sample interval value in the schedule, as a schedule property.
- The number of page attempts per interval.
Page Throughput page
The Page Throughput page provides an overview of the frequency of requests being transferred per sample interval.
If the number of requests and hits are not close, the server might be having trouble keeping up with the workload.
- The Page Hit Rate graph shows the page attempt rate and page hit rate per sample interval for all pages.
A page attempt means that the primary request was sent; it does not include requests within the page.
A hit means that the server received the primary request and returned any complete response. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages. The summary table after the graph lists the total hit rates and counts for each page in the run.
- The User Load graph shows active users and users that have completed testing, over the course of a run. The summary table after the graph lists the results for the most recent sample interval. You set the Statistics sample interval value in the schedule, as a schedule property. As the run nears completion, the number of active users decreases and the number of completed users increases. The summary table after the graph lists the active and completed users for the entire run.
Note: To set the sample interval value, open the schedule, click the Statistics tab, and then view or modify Statistics sample interval.
If you add virtual users during a run and watch these two graphs in tandem, you can monitor the ability of your system to keep up with the workload. If the page hit rate stabilizes while the active user count continues to climb, and the system is well tuned, the average response time will naturally increase. This slowdown happens because the system is running at its maximum effective throughput level and is effectively throttling the rate of page hits by slowing down how quickly it responds to requests.
Server Throughput page
The Server Throughput page lists the rate and number of bytes that are transferred per interval and for the entire run. The page also lists the status of the virtual users for each interval and for the entire run.
The bytes sent and bytes received throughput rate, which is computed from the client perspective, shows how much data Rational Performance Tester is pushing through your server. Typically, you analyze this data with other metrics, such as the page throughput and resource monitoring data, to understand how network throughput demand affects server performance.
- The Byte Transfer Rates graph shows the rate of bytes sent and received per interval for all intervals in the run. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages. The summary table after the graph lists the total number of bytes sent and received for the entire run.
- The User Load graph shows active users and users that have completed testing, per sample interval, over the course of a run. You set the Statistics sample interval value in the schedule, as a schedule property. As the run nears completion, the number of active users decreases and the number of completed users increases. The summary table after the graph lists the active and completed users for the entire run.
Server Health Summary page
The Server Health Summary page gives an overall indication of how well the server is responding to the load.
- The Page Health chart shows the total number of page attempts, page hits, and status code successes for the run. The table under the bar chart lists the same information.
A page attempt means that a primary request was sent; it does not include requests within the page.
A hit means that the server received the primary request and returned any complete response.
A success means that the response code verification point passed for that request. If the request has no verification points, a success means that the server received a request and returned a response where the status code was in the 200 or 300 category or returned an expected response in the 400 or 500 category.
- The Page Element Health chart shows the total number of page element attempts, page element hits, status code successes, and page element redirections for the run. The table under the bar chart lists the same information and the total number of page elements where no request was sent to the server because the client determined that the page elements were fresh in the local cache.
Server Health Detail page
The Server Health Detail page provides specific details for the 10 pages with the lowest success rate.
- The bar chart shows the 10 pages with the lowest success rate.
- The summary table under the chart lists, for all pages, the number of attempts, hits, and successes in the run and the attempts per second during the run.
An attempt means that a primary request was sent; it does not include requests within the page.
A hit means that the server received the primary request and returned any complete response.
A success means that the response code verification point passed for that request. If the request has no verification point, a success means that the server received a request and returned a response where the status code was in the 200 or 300 category or returned an expected response in the 400 or 500 category.
Caching Details page
The Caching Details page provides specific details on caching behavior during a test run.
- The Caching Activity graph shows the total number of page element cache attempts, page element cache hits, and page element cache misses for the run. These values correspond to responses from the server, indicating whether the content has been modified. Additionally, the bar chart shows the total number of page elements in cache that were skipped for the run. That value indicates the cache hits that were still fresh in the local cache, where communication with the server was not necessary.
- The Page Element Cache Hit Ratios graph shows the percentage of cache attempts that indicate server-confirmed success and client-confirmed success for the run. Server-confirmed cache hits occur when the server returns a 304 response code. Client-confirmed cache hits occur when the content is still fresh in the local cache and no communication with the server is required.
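As an arithmetic illustration of how such ratios can be derived from the counters above (the counts are hypothetical, and the product's exact ratio definitions may differ):

    public class CacheHitRatioExample {
        public static void main(String[] args) {
            long cacheAttempts = 200;   // conditional requests sent to the server
            long serverConfirmed = 150; // 304 Not Modified responses
            long clientConfirmed = 300; // content still fresh; no request sent

            // Server-confirmed hits as a share of attempts that reached the server.
            double serverRatio = 100.0 * serverConfirmed / cacheAttempts;
            // Client-confirmed hits as a share of all cache lookups.
            double clientRatio = 100.0 * clientConfirmed
                / (cacheAttempts + clientConfirmed);
            System.out.printf("Server-confirmed: %.1f%%%n", serverRatio); // 75.0%
            System.out.printf("Client-confirmed: %.1f%%%n", clientRatio); // 60.0%
        }
    }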
Resources page
The Resources page shows all resource counters that were monitored during the schedule run.
- The Resources graph shows the values of the resources that counters monitored during the schedule run. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages.
The chart scales automatically to accommodate the highest resource counter value.
- The summary table under the chart lists the most recent values of the resource counters that were monitored during the schedule run. This table also lists the minimum, maximum, and average values of the resource counters that were monitored during the schedule run. This table is organized by resource monitoring hosts.
Page Element Responses
The Page Element Responses page shows the 10 slowest page element responses for the selected page.
Page Response Time Contributions
The Page Response Time Contributions page shows how much time each page element, along with the client delay time and connection time, contributes to the overall page response time.
5.6.2. Page Element report
This report summarizes the most important page element data for the run.
The graphs in this report show time intervals, attempts, hits, and successes.
- The interval depends on the Statistics sample interval value that you set for the schedule.
- An attempt means that a request was sent.
- A hit means that the server received the request and returned any response.
- A success means that the response code verification point passed for that request. If the request has no verification point, a success means that the server received a request and returned a response where the status code was in the 200 or 300 category, or returned an expected response in the 400 or 500 category.
- The response time is the time between the first request character sent and the last response character received. Response times for HTTP requests that time out, or that return an unexpected status code (the recorded and played back codes do not match) in the range of 4XX (client errors) to 5XX (server errors), are discarded from the reported values.
Overall page
The Overall line graph shows the average response time for all page elements during a specified interval. When you have set staged loads in the schedule, this graph delineates the stages with time range markers, in various colors, at the top. The table after the graph provides the following information:
- The average response time for all page elements in the entire run
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the response time is more varied.
- The average number of page elements attempted for the entire run
- The total page element attempts for the entire run
Response vs. Time Summary page
The Response vs. Time Summary line graph shows the response time for the 10 slowest page elements in the run. When you have set staged loads in the schedule, this graph delineates the stages with time range markers, in various colors, at the top. The table after the graph lists the parent page and the page element, and provides the following information:
- The average response time for that page element during the entire run
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the response time is more varied.
- The attempts per second during the most recent sample interval
- The number of attempts during the most recent sample interval
Response vs. Time Detail page
The Response vs. Time Detail page shows the response time for each page element in the run. The table lists the parent page and page element, and provides the following information for each page element in the entire run:
- The average response time
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the response time is more varied.
- The number of attempts
- The number of attempts per second
Page Element Throughput page
The Page Element Throughput page shows the average response trend during a specified interval. It contains two line graphs with corresponding summary tables:
- The Page Element Hit Rate graph shows the combined attempt rate and hit rate for all page elements during the last recorded interval. The table after the graph lists one number: the average hit rate for all pages in the run. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages.
- The User Load graph shows active users compared to users that have completed testing. The table after the graph lists the number of active users, the number of users that have completed testing, and the total user count for the entire run.
Server Health Detail page
The Server Health Detail bar chart shows the percentage of successes for the 10 slowest page elements in the run. The table under the chart lists the parent page and page element, and provides the following information for the entire run:
- The number of attempts
- The number of hits
- The number of successes
- The percent of successes (matches the information in the bar chart)
- The number of attempts per second
Caching Details page
The Caching Details page provides specific details on caching behavior during a test run.
- The Caching Activity graph shows the total number of page element cache attempts, page element cache hits, and page element cache misses for the run. These values correspond to responses from the server that indicate whether the content has been modified. Additionally, the bar chart shows the total number of page elements in the cache that were skipped for the run. That value indicates the cache hits that were still fresh in the local cache, where communication with the server was not necessary.
- The Page Element Cache Hit Ratios graph shows the percentage of cache attempts that indicate server-confirmed success and client-confirmed success for the run. Server-confirmed cache hits occur when the server returns a 304 response code. Client-confirmed cache hits occur when the content is still fresh in the local cache and no communication with the server is required.
- The summary table under the charts lists the total number of page elements found fresh in the cache for the run. This value indicates the cache hits that were still fresh in the local cache, where communication with the server was not necessary. The table also lists the number of attempts, the total number of page element cache attempts sent to the server, and the total number of page element cache hits confirmed by the server for the run. Additionally, the table lists the percentage of cache attempts that indicate server-confirmed success and client-confirmed success for the run.
5.6.3. Why response time of a page does not equal the sum of its requests
The response time for a page typically differs from the sum of its requests. This does not mean that your data is incorrect. The difference can be caused by concurrent requests, page connection times, inter-request delays, and custom code within a page.
The most common reason for the sum of the individual request times within a page to exceed the total page response time is that requests are often sent concurrently (in parallel) to a server. Some of the individual request response times overlap, so their sum can exceed the page response time.
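A minimal numeric sketch of the concurrency effect, with hypothetical timings:

    public class PageTimeExample {
        public static void main(String[] args) {
            // Two page elements of 500 ms each, requested in parallel.
            long requestA = 500, requestB = 500;
            long sumOfRequests = requestA + requestB;         // 1000 ms
            long pageResponse = Math.max(requestA, requestB); // about 500 ms
            System.out.println("Sum of requests: " + sumOfRequests + " ms");
            System.out.println("Page response:   " + pageResponse + " ms");
        }
    }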
Additionally, the page response time can exceed the sum of the individual request response times within the page for the following reasons:
- Page connection time: the time spent establishing connections for the page is included in the page response time but not in the individual request response times.
- Inter-request delays: delays between the requests within a page are included in the page response time.
- Custom code: the execution time of custom code that runs within the page is included in the page response time.
5.6.4. Page Percentile report
This report shows the 85th, 90th, and 95th percentile response times for all users and all pages in a run, as well as for the 10 slowest pages in a run.
The default percentiles in this report, 85, 90, and 95, are sufficient for most purposes. However, if you are required to report on a different percentile set, click Window > Preferences > Test > Percentile Analysis Targets to change the percentiles in this report and in the Transaction Percentile report.
The Summary page shows a graph with three bars, which represent the 85th percentile, 90th percentile, and 95th percentile response times for all users and for all pages in the run. For the 85th percentile bar, 85% of all users achieved the indicated response time or better. For the 90th percentile bar, 90% of all users achieved the indicated response time or better. For the 95th percentile bar, 95% of all users achieved the indicated response time or better.
The 85%, 90%, and 95% pages show the response-time percentiles of the 10 slowest pages in the run. For example, if you click the tab for the 85th percentile, and the total for a page is 110 (the total is beneath each bar), you know that 85 percent of the response times for that page are less than or equal to 110 milliseconds (ms).
This graph provides an overall idea of the response times for each page. For example, the Page Performance report might indicate that a Login page is one of the 10 slowest pages. However, it is possible that one page attempt was extremely slow, but the other attempts were within range. The Page Percentile report shows which pages have slow responses because they were slow in general, not because a few responses (out of many) were extremely slow.
The table beneath the graph provides more detailed information for each page:
- The minimum response time for the run.
- The average response time for the run.
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and the response time is more varied.
- The maximum response time for the run.
- The 85th percentile for the run. That is, for this particular page, 85% of the response times were equal to or faster than this time.
- The 90th percentile for the run. That is, for this particular page, 90% of the response times were equal to or faster than this time.
- The 95th percentile for the run. That is, for this particular page, 95% of the response times were equal to or faster than this time.
- The number of attempts in the run.
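A minimal sketch of how the statistics in this table could be derived from raw response times. The sample values are hypothetical and reuse the System A and System B figures above; the product's exact percentile method may differ, so nearest-rank is used here only for illustration.

```python
import math
import statistics

# Hypothetical samples for the System A / System B example above (ms).
system_a = [11, 12, 13, 12]
system_b = [1, 20, 25, 2]
for name, times in (("A", system_a), ("B", system_b)):
    print(name, statistics.mean(times), round(statistics.pstdev(times), 2))
# Both means are 12 ms, but the standard deviations differ (~0.71 vs ~10.65).

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample >= pct% of all samples."""
    ordered = sorted(samples)
    return ordered[math.ceil(pct / 100 * len(ordered)) - 1]

# One extreme outlier sets the maximum but barely moves the 85th and 90th
# percentiles, which is why percentiles expose pages that are slow in general.
run_times = system_a + system_b + [9, 110]
print([percentile(run_times, p) for p in (85, 90, 95)])  # [25, 25, 110]
```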
5.6.5. Verification Points report
This report shows the status of the verification points in your tests.
This report is displayed if your tests verify page titles, the return code for a page element, or the response size of a page element. To have your tests verify these items, complete the following steps:
- Before recording, set the verification points: click Window > Preferences > Test > Test Generation > HTTP Test Generation and, under Automatically include verification point of, select the types of verification points to include. Or, during test editing, enable page verification points individually by right-clicking the test item.
- In the schedule, set the level for logging statistics to Pages or All.
The VP Pass Rate vs. Time graph on the Summary page shows the percentage of verification points that passed during each sample interval. You set the Statistics sample interval value in the schedule, as a schedule property. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages.
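A minimal sketch of the per-interval pass-rate computation that this graph reflects, using hypothetical verification point results; each timestamp bucket yields one plotted percentage.

```python
from collections import defaultdict

# Hypothetical (timestamp_ms, passed) verification point results.
results = [(500, True), (1200, True), (1800, False), (2600, True), (2900, True)]
INTERVAL_MS = 1000  # the Statistics sample interval set in the schedule

by_interval = defaultdict(list)
for timestamp, passed in results:
    by_interval[timestamp // INTERVAL_MS].append(passed)

for interval in sorted(by_interval):
    verdicts = by_interval[interval]
    print(f"interval {interval}: {100 * sum(verdicts) / len(verdicts):.0f}% passed")
# interval 0: 100% passed
# interval 1: 50% passed
# interval 2: 100% passed
```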
The Page Verification Points page lists the following information for each page that has verification points:
- The number of verification points that passed in the run
- The number of verification points that failed in the run
- The percentage of verification points that passed
The Page Element Verification Points page lists the following information for each page element that has verification points:
- The name of the page and the element
- The number of verification points that passed in the run
- The number of verification points that failed in the run
- The percentage of verification points that passed
5.7. SAP performance test reports
When you test an SAP application, these performance test reports are produced during a run and saved after a run.
5.7.1. SAP Performance report
The SAP Performance report summarizes the health of the run, displays the data most significant to the run, shows the response trend of the slowest 10 transactions in the test, and graphs the response trend of each transaction for a specified interval.
Summary page
The Summary page summarizes the most important data about the test run, so that you can analyze the final or intermediate results of a test at a glance. The SAP Summary section displays the following information:
- A progress indicator that shows the state of the run.
- The number of virtual users that are active and the number of virtual users that have completed testing. This number is updated during the run.
- The elapsed time. This figure is the run duration, which is displayed in hours, minutes, and seconds.
- The location and name of the test suite or schedule.
- Results for the computer: All Hosts. To see summary results for individual computers, click the computer name in the Performance Test Runs view.
- The status of the run. This can be Initializing Computers, Adding Users, Running, Performing Execution History Data Transfer, Stopped, or Complete.
- The total number of virtual users that are simulated during the test.
The Transaction Summary section displays the following information:
- The minimum, maximum, average, and standard deviation execution time for all transactions. Execution time is the sum of response times for all screens of a transaction (including the connect time and inter-request delays).
- The total number of transactions that were completed and started.
The Screen Summary section displays the following information:
- The minimum, maximum, average, and standard deviation screen request response times for all SAP screens. Response time is the time between the first request character that is sent and the last response character that is received.
- The total number of SAP screens that were completed and started.
- Percent of verification points that passed.
- Total verification points that failed.
- Total verification point errors.
- Total verification points that passed.
- Total verification points that are inconclusive.
The Element Summary section displays the following information:
- The total number of SAP set, SAP get, and SAP call elements that were attempted.
- The total number of SAP set, SAP get, and SAP call elements that were completed.
- The percentage of completion of all SAP set, SAP get, and SAP call elements.
Screen Performance page
The Screen Performance page shows the average response time of the longest SAP screen requests in the test as the test progresses. With this information, you can evaluate system response during and after the test. If you select Do not measure performance on this screen for an SAP screen in the test editor, the report does not include the response time results for that screen.
The bar chart shows the average response time of the 10 slowest SAP screens. Each bar represents a screen that you visited during recording. As you run the test, the bar chart changes, because the 10 slowest screens are updated dynamically during the run. For example, the SAP Easy Access screen might be one of the 10 slowest screens at the start of the run, but then, as the test progresses, Display Material (Initial Screen) might replace it as one of the 10 slowest. After the run, the page shows the 10 slowest screens for the entire run.
The Performance Summary table that follows the bar chart provides the following additional information for each SAP screen:
- The minimum response time for each SAP screen in the run. Response time is the time between the first request character that is sent and the last response character that is received.
- The average response time for each SAP screen in the run. This matches the information in the chart.
- The maximum response time for each SAP screen in the run.
- The standard deviation response time for each SAP screen in the run.
- The rate per second at which each SAP screen is started.
- The total number of times each SAP screen is started.
Response vs. Time Summary page
The Response vs. Time Summary page shows the average response trend as graphed for a specified interval. The line graph shows the average response time for all SAP screens during the run. Each point on the graph is an average of what has occurred during that interval. You set the Statistics sample interval in the schedule, as a schedule property.
The table that follows the graph displays the average and standard deviation response time for all SAP screens during the run.
Response vs. Time Detail page
The Response vs. Time Detail page shows the response trend by screen as graphed for a specified interval. Each separate line represents an SAP screen. You set the Statistics sample interval in the schedule, as a schedule property. The line graph shows the average response of each screen during a specified interval. The table that follows the graph provides the following additional information for each SAP screen:
- The minimum SAP screen request response time for the run. This is the time between the moment the input is validated in the SAP GUI and the time the resulting SAP screen is displayed.
- The average SAP screen request response time for the run. This is similar to the graph, but the information in the table is for the duration of the entire run.
- The maximum SAP screen request response time for the run.
- The standard deviation for SAP screen request response time for the run.
- The rate per second at which each SAP screen is started for the most recent interval.
- The total number of times each SAP screen is started for the most recent interval.
Screen Throughput page
The Screen Throughput page provides an overview of the frequency of requests that are being transferred per interval.
- The line graph on the left shows two lines: the rate per interval at which SAP screens are started and the rate at which SAP screens are completed. The summary table under the graph lists the start rates, completion rates, and counts for each SAP screen in the run.
- The line graph on the right shows active users and users that have completed testing, over the course of a run. The summary table under the graph lists the results for the most recent sample interval. You set the Statistics sample interval in the schedule, as a schedule property.
Server Health page
The Server Health Summary page shows an overall indication of how well the server is responding to the load.
The bar chart on the left represents the total number of SAP screens started. The bar chart on the right represents the percentage of SAP screens completed compared to the SAP screens started during each interval. The percentage can exceed 100% if more screens were completed than started in a given interval, because screens that were started in one interval can complete in a later interval.
The table that follows the charts lists the same information for each SAP screen.
Batch Input Transaction page
The Batch Input Transaction page shows the number of batch input tests that were run during the test.
5.7.2. SAP Verification Points report
The SAP Verification Points report shows the status of the verification points in your tests.
Summary page
The Summary page displays a line graph representing the percentage of verification points that passed per interval. You set the Statistics sample interval value in the schedule, as a schedule property.
Below the graph, the Verification Point Summary table lists the following information:
- The percentage of verification points that passed during the run.
- The number of verification points that were tested.
- The number of verification points that passed.
- The number of verification points that failed.
- The number of verification points that produced an error.
- The number of verification points that were inconclusive.
Screen Verification Points page
The Screen Verification Points page lists the following information for each SAP screen that has verification points:
- The number of verification points that passed during the run
- The number of verification points that failed during the run
- The percentage of verification points that passed during the run
Verification points set on the screen title are counted with verification points set on get events in the screen.
5.8. Citrix performance test reports
When you test a Citrix XenApp application, these reports are produced during a run and saved after a run.
5.8.1. Citrix Performance report
The Citrix performance report summarizes the validity of the run, displays the data most significant to the run, shows the response trend of the 10 slowest windows in the test, indicates server health based on the requests in the run, and graphs the response trend of each window for a specified interval.
Citrix Overall page
The Overall page provides the following information:
- A progress indicator that shows the state of the run.
- The bar chart on the left indicates the overall success of the run with the percentage of window synchronization successes and timeouts. Synchronization success indicates that the expected window events in the test match the actual window events in the test run.
- The bar chart on the right indicates the overall success of the run with the percentage of image synchronization successes and timeouts. Synchronization success indicates that the expected image area or extracted text in the test matches the actual image area or extracted text in the test run.
Performance Summary page
The Summary page summarizes the most important data about the test run, so that you can analyze the final or intermediate results of a test at a glance. The Run Summary table displays the following information:
- The number of virtual users that are active and the number of virtual users that have completed testing. This number is updated during the run.
- The elapsed time. This is the total duration of the run, which is displayed in hours, minutes, and seconds.
- The location and name of the test.
- Results for the computer: All Hosts. To see summary results for individual computers, click the computer name in the Performance Test Runs view.
- The status of the run. This can be Initializing Computers, Adding Users, Running, Performing Execution History Data Transfer, Stopped, or Complete.
- The total number of virtual users emulated during the test.
The Citrix Summary section displays the following information:
- The average response time for all response time measurements. Response times are determined by measurements that are located in the tests. Response time measurements can be automatically generated between the last input action before a window create event and the window create event. The table does not display values that equal zero.
- The total number of image synchronization attempts.
- The total number of image synchronization successes.
- The total number of image synchronization timeouts. A timeout occurs when the synchronization fails.
- The total number of window synchronization attempts.
- The total number of window synchronization successes.
- The total number of window synchronization timeouts. A timeout occurs when the synchronization fails.
- The maximum response time for all response time measurements. This indicates the highest response time that was measured during the run.
- The minimum response time for all response time measurements. This indicates the lowest response time that was measured during the run.
- The standard deviation response time for all response time measurements.
- Total user actions for run. This indicates the total number of user input actions that were emulated during the run.
Response Time Results page
The Response Time Results page shows the average response of the window events in the test as the test progresses. With this information, you can evaluate system response during and after the test. Response times are determined by measurements that are located in the tests. Response time measurements can be automatically generated between the last input action before a window create event and the window create event. The bar chart shows the average response time of each window event. Each bar represents a window that was created during the test. As you run the test, the bar chart changes, because the window response times are updated dynamically during the run. A sketch of this measurement follows the table below.
The table under the bar chart provides the following additional information for each window:
- The minimum response time during the run.
- The average response time during the run. This matches the information in the chart.
- The maximum response time during the run.
- The standard deviation response time during the run.
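A minimal sketch of this measurement, using a hypothetical event stream; the response time is the delay between the last input action and the window create event that follows it.

```python
# Hypothetical (timestamp_ms, kind, name) events from a Citrix session;
# the window names are illustrative, not recorded values.
events = [
    (100, "input", "type credentials"),
    (250, "input", "click Log On"),
    (900, "window_create", "Main window"),
    (1200, "input", "click Reports"),
    (1700, "window_create", "Reports window"),
]

last_input_ts = None
for ts, kind, name in events:
    if kind == "input":
        last_input_ts = ts  # remember the most recent input action
    elif kind == "window_create" and last_input_ts is not None:
        print(f"{name}: {ts - last_input_ts} ms")  # 650 ms, then 500 ms
        last_input_ts = None
```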
Response vs. Time Summary page
The Response vs. Time Summary page shows the average response trend as graphed for a specified interval. You set the Statistics sample interval value in the schedule, as a schedule property. Response times are determined by measurements that are located in the tests. Response time measurements can be automatically generated between the last input action before a window create event and the window create event.
The line graph shows the average response time for all measurements during the run. Each point on the graph is an average of what has occurred during that interval. The table under the graph lists one number: the total average response time for all measurements in the run.
Response vs. Time Details page
The Response vs. Time Details page shows the response trend as graphed for a specified interval. You set the Statistics sample interval value in the schedule, as a schedule property. Response times are determined by measurements that are located in the tests. Response time measurements can be automatically generated between the last input action before a window create event and the window create event.
The line graph shows the average response time of each measurement for a specified interval. Each measurement is represented by a separate line.
The table under the graph provides the following additional information for each response time measurement:
- The minimum response time during the run.
- The average window response time during the run. This is similar to the graph, but the information in the table includes the entire run.
- The maximum window response time during the run.
- The standard deviation window response time during the run.
User Action Throughput page
The User Action Throughput page provides an overview of the frequency of requests being transferred per interval. You set the Statistics sample interval value in the schedule, as a schedule property.
- The line graph on the left shows the user action rate per interval for all windows. This represents the activity of virtual user input actions per second for each interval. The table under the graph lists the user action rate per second for the entire run, and the total number of user actions for the run.
- The line graph on the right shows active users and users that have completed testing, over the course of a run. The summary table under the graph lists the results for the most recent sample interval.
Server Health Summary page
The Server Health Summary page provides an overall indication of how well the server has performed. The graph does not display values that equal zero.
The bar chart shows the following information:
- The total number of window synchronization attempts.
- The total number of window synchronization successes.
- The total number of window synchronization timeouts.
- The total number of image synchronization attempts.
- The total number of image synchronization successes.
- The total number of image synchronization timeouts.
Server Timeout page
The Server Timeout page shows when the synchronization timeouts and server errors occurred during the run. The graph does not display values that equal zero.
The line graph shows the following information:
- Citrix window synchronization timeouts.
- Citrix image synchronization timeouts.
- Citrix server errors or errors encountered during test execution.
Resources page
The Resources page shows all resource counters monitored during the schedule run.
- The line chart shows the values of the resource counters that are monitored during the schedule run. The chart scales automatically to accommodate the highest resource counter value.
- The summary table under the chart lists the average values of the resource counters monitored during the schedule run. This table is organized by resource monitoring hosts.
5.8.2. Citrix Verification Points report
The Citrix Verification Points report shows the status of the verification points in your tests.
Summary page
The Summary page displays a line graph representing the percentage of verification points that passed per interval. You set the Statistics sample interval value in the schedule, as a schedule property.
Below the graph, the Window Verification Point Summary table lists the following information:
- The percentage of window verification points that passed during the run
- The number of window verification points that were attempted
- The number of window verification points that passed
- The number of window verification points that failed
The Image Synchronization Verification Point Summary table lists the following information:
- The percentage of image synchronization verification points that passed during the run
- The number of image synchronization verification points that were tested
- The number of image synchronization verification points that passed
- The number of image synchronization verification points that failed
Citrix Verification Points page
The Citrix Verification Points page contains tables with verification point details.
The Window Verification Points table lists the following information:
- The number of window verification points that passed during the run
- The number of window verification points that failed during the run
- The number of window verification points that caused an error during the run
- The number of window verification points that were inconclusive during the run
- The percentage of window verification points that passed during the run
The Image Synchronization Verification Points table lists the following information:
- The number of image synchronization verification points that passed during the run
- The number of image synchronization verification points that failed during the run
- The number of image synchronization verification points that caused an error during the run
- The number of image synchronization verification points that were inconclusive during the run
- The percentage of image synchronization verification points that passed during the run
5.8.3. Citrix response time percentile report
This report shows the 85th, 90th, and 95th percentile response times for all users in a run.
The default percentiles in this report, 85, 90, and 95, are sufficient for most purposes. However, if you are required to report on a different percentile set, click Window > Preferences > Test > Percentile Analysis Targets to change the percentiles in this report.
The Summary page shows a graph with three bars, which represent the 85th percentile, 90th percentile, and 95th percentile response times for all users in the run. For the 85th percentile bar, 85% of all users achieved the indicated response time or better. For the 90th percentile bar, 90% of all users achieved the indicated response time or better. And for the 95th percentile bar, 95% of all users achieved the indicated response time or better.
The 85%, 90%, and 95% pages show the response-time percentiles of the 10 slowest window events in the run. For example, if you click the tab for the 85th percentile, and the total for a window event is 110 (the total is beneath each bar), you know that 85 percent of the response times for that window event are less than or equal to 110 milliseconds (ms).
This graph provides an overall idea of the response times for each window event. For example, the Response Time Measurements page of the Citrix performance report might indicate that a Login screen is one of the 10 slowest measurements. However, it is possible that one attempt was extremely slow, but the other attempts were within range. The Citrix response time percentile report shows which measurements have slow responses because they were slow in general, not because a few responses (out of many) were extremely slow.
The table beneath the graph provides more detailed information for each window event:
- The minimum response time for the run.
- The average response time for the run.
- The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater and the response times are more varied.
- The maximum response time for the run.
- The 85th percentile for the run. That is, for this particular measurement, 85% of the response times were equal to or faster than this time.
- The 90th percentile for the run. That is, for this particular measurement, 90% of the response times were equal to or faster than this time.
- The 95th percentile for the run. That is, for this particular measurement, 95% of the response times were equal to or faster than this time.
- The number of attempts in the run.
5.9. Web service performance test reports
When you test a web service, these reports are produced during a run and saved after a run.
5.9.1. Service Performance report
The Service Performance report summarizes the validity of the run, displays the data most significant to the run, shows the response trend of the 10 slowest service calls in the test, indicates server health based on the requests in the run, and graphs the response trend of each service call for a specified interval.
Overall page
The Overall page provides the following information:
- A progress indicator that shows the state of the run.
- The bar graph on the left indicates the percentage of successful service calls during the run.
- The bar graph on the right indicates the percentage of verification points with a Pass status for the run.
Summary page
The Summary page summarizes the most important data about the test run, so that you can analyze the final or intermediate results of a test at a glance. The Run Summary table displays the following information:
- The number of virtual users that are active and the number of virtual users that have completed testing. These numbers are updated during the run.
- The elapsed time. This is the total duration of the run, which is displayed in hours, minutes, and seconds.
- The location and name of the test.
- Results for the computer: All Hosts. To see summary results for individual computers, click the computer name in the Performance Test Runs view.
- The status of the run. This can be Initializing Computers, Adding Users, Running, Performing Execution History Data Transfer, Stopped, or Complete.
- The total number of virtual users emulated during the test.
The Call Summary section displays the following information:
- The percentage of verification points with a Pass status.
- The total number of verification points with a Fail status.
- The total number of verification points with an Error status.
- The total number of attempted service calls.
- The total number of successful service calls.
- The total number of service calls that produced a timeout.
The Bytes Summary section displays the following information:
- The minimum, maximum, and average number of bytes sent and received for each call in the run.
- The byte rate per second for the run.
- The total number of bytes sent and received for the run.
Response Time Results page
The Response Time Results page shows the average response of the service calls in the test as the test progresses. With this information, you can evaluate system response during and after the test. Response times are determined by the delay between the moment a service call is invoked and the moment the corresponding message return is received. The bar chart shows the average response time of each service call. Each bar represents a service call that was invoked during the test. As you run the test, the bar chart changes, because the response times are updated dynamically during the run. A sketch of this timing follows the table below.
The table that follows the bar chart provides the following additional information for each service call:
- The minimum response time during the run.
- The average response time during the run. This matches the information in the chart.
- The maximum response time during the run.
- The standard deviation response time during the run.
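As a rough illustration of the response-time definition above, here is a minimal sketch that times a single call; invoke_service is a hypothetical stand-in, not the product's API, and the reports aggregate many such measurements.

```python
import time

def invoke_service(request):
    """Hypothetical stand-in for the real service transport."""
    time.sleep(0.05)  # pretend the server takes ~50 ms to return the message
    return {"status": "ok"}

start = time.perf_counter()
response = invoke_service({"operation": "getQuote"})
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"service call response time: {elapsed_ms:.0f} ms")
```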
Response Time vs. Time Summary page
The Response Time vs. Time Summary page shows the average response trend as graphed for a specified interval. You set the Statistics sample interval value in the schedule, as a schedule property. Response times are determined by measurements that are located in the tests. Response time measurements can be automatically generated between the last input action before a service call and the corresponding message return event.
The line graph shows the average response time for all measurements during the run. Each point on the graph is an average of what has occurred during that interval. The table that follows the graph lists one number: the total average response time for all measurements in the run.
Response Time vs. Time Details page
The Response Time vs. Time Details page shows the response trend as graphed for a specified interval. You set the Statistics sample interval value in the schedule, as a schedule property. Response times are determined by the delay between the moment a service call is invoked and the moment the corresponding message return is received.
The line graph shows the average response time of each measurement for a specified interval. A separate line represents each measurement.
The table under the graph provides the following additional information for each response time measurement:
- The minimum response time during the run.
- The average service call response time during the run. This is similar to the graph, but the information in the table includes the entire run.
- The maximum service call response time during the run.
- The standard deviation service call response time during the run.
Data Volume page
The Data Volume page provides details about the volume of data that is sent to and received from the service. You set the Statistics sample interval value in the schedule, as a schedule property.
- The Sent and Received line graph shows the total bytes sent and received per interval.
- The Received Summary table lists, for each call, the received volume rate (bytes per second) for the entire run, the minimum and maximum received bytes per interval, and the average number of bytes received for each call.
- The Sent Summary table lists, for each call, the sent volume rate (bytes per second) for the entire run, the minimum and maximum sent bytes per interval, and the average number of bytes sent for each call.
Call Throughput page
The Call Throughput page provides an overview of the frequency of service calls that are being transferred per interval. You set the Statistics sample interval value in the schedule, as a schedule property.
- The line graph shows the calls that are started and ended per interval. An ended call can be a success, a failure, or a timeout.
- The Performance Summary table lists the number of call starts, successes, failures, and timeouts for each call and for the run.
Resources page
The Resources page shows all resource counters that are monitored during the schedule run.
- The line chart shows the values of the resource counters that are monitored during the schedule run. The chart scales automatically to accommodate the highest resource counter value.
- The summary table that follows the chart lists the average values of the resource counters that are monitored during the schedule run. This table is organized by resource monitoring hosts.
5.9.2. Web Service Verification Points report
The web service verification points report shows the status of the verification points in your tests.
Summary page
The Summary page displays a bar graph representing the percentage of successful web service calls for the run. You set the Statistics sample interval value in the schedule, as a schedule property.
Below the graph, the Verification Point Summary table lists the following information:
- The percentage of verification points that passed during the run
- The number of verification points that were attempted
- The number of verification points that passed
- The number of verification points that failed
Verification Points Detail page
The Verification Points Detail page displays the details for all types of verification points that were checked during the run.
The Verification Points table lists the following information:
- The number of verification points that passed during the run
- The number of verification points that failed during the run
- The number of verification points that caused an error during the run
- The number of verification points that were inconclusive during the run
- The percentage of verification points that passed during the run
Return Contain Verification Points page
The Return Contain Verification Points page displays the details for contain verification points that were checked during the run.
The table lists the following information:
- The number of verification points that passed during the run
- The number of verification points that failed during the run
- The number of verification points that caused an error during the run
- The number of verification points that were inconclusive during the run
- The percentage of verification points that passed during the run
Return Equal Verification Points page
The Return Equal Verification Points page displays the details for equal verification points that were checked during the run.
The table lists the following information:
- The number of verification points that passed during the run
- The number of verification points that failed during the run
- The number of verification points that caused an error during the run
- The number of verification points that were inconclusive during the run
- The percentage of verification points that passed during the run
Return Query Verification Points page
The Return Query Verification Points page displays the details for query verification points that were checked during the run.
The table lists the following information:
- The number of verification points that passed during the run
- The number of verification points that failed during the run
- The number of verification points that caused an error during the run
- The number of verification points that were inconclusive during the run
- The percentage of verification points that passed during the run
Return Attachment Verification Points page
The Return Attachment Verification Points page displays the details for attachment verification points that were checked during the run.
The table lists the following information:
- The number of verification points that passed during the run
- The number of verification points that failed during the run
- The number of verification points that caused an error during the run
- The number of verification points that were inconclusive during the run
- The percentage of verification points that passed during the run
5.10. Socket performance test reports
When you test a socket application, these performance test reports are produced during a run and saved after a run.
5.10.1. Socket Performance report
For socket API performance tests, the Socket Performance report summarizes the health of the run, displays the data most significant to the run, shows the response trend of the 10 slowest transactions in the test, and graphs the response trend of each transaction for a specified interval.
Overall page
The Overall page summarizes the most important data about the test run, so that you can analyze the final or intermediate results of a test at a glance.
The top of the page displays a progress indicator bar that shows the state of the run.
The Virtual Users Activity bar chart displays the number of virtual users that are active and the number of virtual users that have completed testing. This number is updated during the run.
The Run Summary section displays the following information:
- The name of the test or schedule.
- The number of virtual users that are active and the number of virtual users that have completed testing. This number is updated during the run.
- The total number of virtual users that are simulated during the test.
- The elapsed time. This figure is the run duration, which is displayed in hours, minutes, and seconds.
- The status of the run. This can be Initializing Computers, Adding Users, Running, Performing Execution History Data Transfer, Stopped, or Complete.
- Results for the computer: All Hosts. To see summary results for individual computers, click the computer name in the Performance Test Runs view.
Connect Performance page
The Connect Performance page shows the performance of socket connection actions. With this information, you can evaluate system response during and after the test. The Connect Actions vs. Time bar chart shows the attempted and successful connection actions during the test run.
The Connect Times vs. Time bar chart shows average, maximum, minimum, and standard deviation of connection times during the test run.
The Connect Actions Summary displays this information:
- The number of connections that were attempted
- The number of connections that were successful
The Connect Times Summary displays the average, maximum, minimum, and standard deviation of connection times.
Send/Receive Performance page
The Send/Receive Performance page shows the data throughput trend as graphed for a specified interval. The line graph shows the average response time for all socket send and receive actions during the run. Each point on the graph is an average of what has occurred during that interval. You set the Statistics sample interval in the schedule as a schedule property.
The Send/Receive Actions vs. Time bar chart shows the attempted and successful send and receive actions during the test run.
The Response Times vs. Time bar chart shows average, maximum, minimum, and standard deviation of response times during the test run.
The Send/Receive Actions Summary displays this information:
- The number of send and receive actions that were attempted
- The number of send and receive actions that were successful
The Response Times Summary displays the average, maximum, minimum, and standard deviation of response times.
Byte Performance page
The Byte Performance page provides an overview of the volume of data that is being transferred per interval:
- The Exchanged Bytes vs. Time line graph shows the quantity of bytes sent and received during the test run.
- The Exchanged Bytes summary table after the graph lists the total number of bytes sent and received.
5.11. HTTP counters
HTTP counters, which are displayed in the Performance Test Runs view, enable you to customize your reports with dynamic information that is updated for each run.
5.11.1. Byte counters
These counters provide throughput information regarding the rate and the number of bytes sent and received during a sample interval and during a run.
The counters in the following tables provide an aggregate value for all tests. Folders that contain aggregate counters have a clock superimposed on the folder icon.
Run > Throughput
- Bytes Received [for Run]: The total number of bytes received for all tests for the entire run
- Bytes Sent [for Run]: The total number of bytes sent for all tests for the entire run
Run > Throughput > Bytes Received
- Count [for Interval]: The total number of bytes received for all tests within the most recent sample interval
- Rate [per second] [for Interval]: The bytes per second received for all tests within the most recent sample interval
- Rate [per second] [for Run]: The bytes per second received for all tests for the entire run
Run > Throughput > Bytes Sent
- Count [for Interval]: The total number of bytes sent for all tests within the most recent sample interval
- Rate [per second] [for Interval]: The bytes per second sent for all tests within the most recent sample interval
- Rate [per second] [for Run]: The bytes per second sent for all tests for the entire run
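To make the relationship between the interval and run counters concrete, here is a minimal sketch over hypothetical byte samples; the product computes these values internally, so the arithmetic here is only indicative.

```python
from collections import defaultdict

# Hypothetical (timestamp_ms, bytes_received) samples for one run.
samples = [(200, 512), (700, 1024), (1400, 256), (1900, 2048), (2300, 512)]
INTERVAL_MS = 1000   # the Statistics sample interval from the schedule
RUN_MS = 2500        # elapsed run time

per_interval = defaultdict(int)
for timestamp, nbytes in samples:
    per_interval[timestamp // INTERVAL_MS] += nbytes

total = sum(nbytes for _, nbytes in samples)
print("Count [for Interval]:", dict(per_interval))        # {0: 1536, 1: 2304, 2: 512}
print("Rate [for Interval]:",
      {k: v / (INTERVAL_MS / 1000) for k, v in per_interval.items()})  # bytes/sec
print("Bytes Received [for Run]:", total)                 # 4352
print("Rate [for Run]:", total / (RUN_MS / 1000))         # 1740.8 bytes/sec
```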
5.11.2. Page counters
These counters provide information about page attempts, page hits, response time, response success, and verification points. Some counters produce an aggregate value, where the values for several pages are rolled up into one value; others produce values for each page.
Aggregate counters use the values for all the pages in a test to produce a single value for a report. This value is rolled up from all values that satisfy the counter. When you drag an aggregate counter onto a report, one value is displayed. Folders that contain aggregate counters have a clock superimposed on the folder icon; the tables below that list aggregate counters have this icon in their title.
Individual counters produce values for each item that satisfies the counter, rather than a single rolled-up value. Folders that contain individual counters have an asterisk superimposed on the folder icon; the tables below that list individual counters have this icon in their title.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
Adjustment counters
The counters in this section provide information about adjustments made to page response times.
Pages > Adjustments
- Average [for Interval]: The average of all adjustments applied during the most recent sample interval
- Average [for Run]: The average of all adjustments applied for the entire run
- Maximum [for Interval]: The maximum adjustment applied during the most recent sample interval
- Maximum [for Run]: The maximum adjustment applied for the entire run
- Minimum [for Interval]: The minimum adjustment applied during the most recent sample interval
- Minimum [for Run]: The minimum adjustment applied for the entire run
- Standard Deviation [for Interval]: The standard deviation for adjustments applied within the most recent sample interval
- Standard Deviation [for Run]: The standard deviation for adjustments applied for the entire run
The counters in the following table are available only after a run, and only after you have displayed the Page Percentile report.
Pages > Adjustments > Percentile Data
- 85: 85% of the users had a maximum adjustment of this amount applied
- 90: 90% of the users had a maximum adjustment of this amount applied
- 95: 95% of the users had a maximum adjustment of this amount applied
Attempt counters
The counters in this section provide information about attempts. When an attempt refers to a page, it means that the primary request was sent; it does not include requests within the page. When an attempt refers to a page element, it means that a request was sent.
Pages > Attempts > All Pages
- Count [for Interval]: The number of attempts for a specific page within the most recent sample interval
- Count [for Run]: The number of attempts for a specific page for the entire run
- Rate [per second] [for Interval]: The rate at which attempts occurred for a specific page during the most recent sample interval
- Rate [per second] [for Run]: The rate at which attempts occurred for a specific page for the entire run
Pages > Attempts Completed > All Pages
- Count [for Interval]: The total number of page attempts completed for a specific page during the most recent sample interval
Pages > Attempts Completed > All Pages > Element > All Elements
- Count [for Interval]: The total number of page attempts completed for a specific page element for the most recent sample interval
Pages > Attempts > All Pages > Element > All Elements
- Count [for Interval]: The number of attempts for a specific page element within the most recent sample interval
- Count [for Run]: The number of attempts for a specific page element for the entire run
- Rate [per second] [for Interval]: The rate at which attempts occurred for a specific page element during the most recent sample interval
- Rate [per second] [for Run]: The rate at which attempts occurred for a specific page element for the entire run
Each counter in the following table provides a single value to the report. This aggregated value has been calculated from all pages.
Pages > Attempts
- Page Attempt Rate [per second] [for Interval]: The rate at which page attempts occurred within the most recent sample interval
- Page Attempt Rate [per second] [for Run]: The rate at which page attempts occurred for the entire run
- Page Element Attempt Rate [per second] [for Interval]: The rate at which page element attempts occurred within the most recent sample interval
- Page Element Attempt Rate [per second] [for Run]: The rate at which page element attempts occurred for the entire run
- Total Page Attempts [for Interval]: The number of page attempts during the most recent sample interval
- Total Page Attempts [for Run]: The number of page attempts for the entire run
- Total Page Element Attempts [for Interval]: The number of page element attempts within the most recent sample interval
- Total Page Element Attempts [for Run]: The number of page element attempts for the entire run
Hit counters
Pages > Hits > All Pages
- Count [for Interval]: The number of hits for a specific page within the most recent sample interval
- Count [for Run]: The number of hits for a specific page for the entire run
- Rate [per second] [for Interval]: The rate at which hits occurred for a specific page during the most recent sample interval
- Rate [per second] [for Run]: The rate at which hits occurred for a specific page for the entire run
Pages > Hits > All Pages > Element > All Elements
- Count [for Interval]: The number of hits for a specific page element within the most recent sample interval
- Count [for Run]: The number of hits for a specific page element for the entire run
- Rate [per second] [for Interval]: The rate at which hits occurred for a specific page element during the most recent sample interval
- Rate [per second] [for Run]: The rate at which hits occurred for a specific page element for the entire run
Pages > Hits
- Page Element Hit Rate [per second] [for Interval]: The rate of page element hits during the most recent sample interval
- Page Element Hit Rate [per second] [for Run]: The rate of page element hits for the entire run
- Page Hit Rate [per second] [for Interval]: The rate of page hits during the most recent sample interval
- Page Hit Rate [per second] [for Run]: The rate of page hits for the entire run
- Total Page Element Hits [for Interval]: The total number of page element hits within the most recent sample interval
- Total Page Element Hits [for Run]: The total number of page element hits for the entire run
- Total Page Hits [for Interval]: The total number of page hits during the most recent sample interval
- Total Page Hits [for Run]: The total number of page hits for the entire run
Response Time counters
Response time counters omit page response times for pages that contain requests with status codes in the 4XX (client error) or 5XX (server error) range. The only exception is when the failure (for example, a 404) is recorded and returned, and the request is not the primary request for the page. Page response times that include requests that time out are always discarded.
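A minimal sketch of this filtering rule over a hypothetical data model (the field names are illustrative, not the product's):

```python
def include_page_response_time(page):
    """Sketch of the filtering rule above: page["requests"] holds dicts with
    status, timed_out, is_primary, and expected_failure (the failure was
    recorded and returned) fields -- all hypothetical names."""
    for request in page["requests"]:
        if request["timed_out"]:
            return False  # times that include timeouts are always discarded
        if 400 <= request["status"] < 600:
            # The only tolerated failure is a recorded, returned failure on a
            # request that is not the primary request for the page.
            if request["is_primary"] or not request["expected_failure"]:
                return False
    return True

page = {"requests": [
    {"status": 200, "timed_out": False, "is_primary": True, "expected_failure": False},
    {"status": 404, "timed_out": False, "is_primary": False, "expected_failure": True},
]}
print(include_page_response_time(page))  # True: the 404 is a recorded secondary failure
```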
Pages > Response Time [ms] > All Pages
- Average [for Interval]: The average response time for a specific page within the most recent sample interval
- Average [for Run]: The average response time for a specific page for the entire run
- Maximum [for Interval]: The maximum response time for a specific page within the most recent sample interval
- Maximum [for Run]: The maximum response time for a specific page for the entire run
- Minimum [for Interval]: The minimum response time for a specific page within the most recent sample interval
- Minimum [for Run]: The minimum response time for a specific page for the entire run
- Standard Deviation [for Interval]: The standard deviation for a specific page within the most recent sample interval
- Standard Deviation [for Run]: The standard deviation for a specific page for the entire run
Pages > Response Time [ms] > All Pages > Element > All Elements
- Average [for Interval]: The average response time for a specific page element within the most recent sample interval
- Average [for Run]: The average response time for a specific page element for the entire run
The counters in the following table are available only after a run, and only after you have displayed the Page Percentile report.
Pages > Response Time [ms] > All Pages > Percentile Data
- 85: 85% of the users experienced this response time or better on this specific page
- 90: 90% of the users experienced this response time or better on this specific page
- 95: 95% of the users experienced this response time or better on this specific page
The counters in the following table are available only after a run, and only after you have displayed the Page Percentile report.
Pages > Response Time [ms] > Percentile Data Summary for All Pages
- 85: 85% of the users experienced this response time or better
- 90: 90% of the users experienced this response time or better
- 95: 95% of the users experienced this response time or better
Response time counters omit page response times for pages that contain requests with status codes in the 4XX (client error) or 5XX (server error) range. The only exception is when the failure (for example, a 404) is recorded and returned, and the request is not the primary request for the page. Page response times that include requests that time out are always discarded.
Pages > Response Time [ms]
- Average Response Time for All Page Elements [ms] [for Interval]: The average response time for all page elements within the most recent sample interval
- Average Response Time for All Page Elements [ms] [for Run]: The average response time for all page elements for the entire run
- Average Response Time for All Pages [ms] [for Interval]: The average response time for all pages within the most recent sample interval
- Average Response Time for All Pages [ms] [for Run]: The average response time for all pages for the entire run
- Maximum Response Time for All Pages [ms] [for Interval]: The maximum response time for all pages within the most recent sample interval
- Maximum Response Time for All Pages [ms] [for Run]: The maximum response time for all pages for the entire run
- Minimum Response Time for All Pages [ms] [for Interval]: The minimum response time for all pages within the most recent sample interval
- Minimum Response Time for All Pages [ms] [for Run]: The minimum response time for all pages for the entire run
- Response Time Standard Deviation for All Page Elements [for Interval]: The standard deviation for all page elements within the most recent sample interval
- Response Time Standard Deviation for All Page Elements [for Run]: The standard deviation for all page elements for the entire run
- Response Time Standard Deviation for All Pages [for Interval]: The standard deviation for all pages within the most recent sample interval
- Response Time Standard Deviation for All Pages [for Run]: The standard deviation for all pages for the entire run
Status Code Success counters
A status code success means that the response code verification point passed for that request. If the request has no verification points, a success means that the server received a request and returned a response where the status code was in the 200 or 300 category, or returned an expected response in the 400 or 500 category.
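A minimal sketch of this rule over a hypothetical request record (field names are illustrative, not the product's):

```python
def status_code_success(request):
    """Sketch of the rule above: a request succeeds if its response code
    verification point passed, or, when no verification point exists, if the
    status is 2xx/3xx or an expected 4xx/5xx response."""
    verdict = request.get("response_code_vp")  # None when no VP is defined
    if verdict is not None:
        return verdict == "PASS"
    status = request["status"]
    return 200 <= status < 400 or request.get("expected", False)

print(status_code_success({"status": 302}))                              # True
print(status_code_success({"status": 500, "expected": True}))            # True
print(status_code_success({"status": 200, "response_code_vp": "FAIL"}))  # False
```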
Pages > Status Code Successes > All Pages
- Count [for Interval]: The number of status code successes for a specific page during the most recent sample interval
- Count [for Run]: The number of status code successes for a specific page for the entire run
- Percent Status Code Success [for Interval]: The percentage of status code successes for a specific page during the most recent sample interval
- Percent Status Code Success [for Run]: The percentage of status code successes for a specific page for the entire run
Pages > Status Code Successes > Element > All Elements
- Count [for Interval]: The number of status code successes for a specific page element during the most recent sample interval
- Count [for Run]: The number of status code successes for a specific page element for the entire run
- Percent Status Code Success [for Interval]: The percentage of status code successes for a specific page element during the most recent sample interval
- Percent Status Code Success [for Run]: The percentage of status code successes for a specific page element for the entire run
Pages > Status Code Successes
- Percent Page Element Status Code Success [for Interval]: The percentage of status code successes for all page elements during the most recent sample interval
- Percent Page Element Status Code Success [for Run]: The percentage of status code successes for all page elements for the entire run
- Percent Page Status Code Success [for Interval]: The percentage of status code successes for all pages during the most recent sample interval
- Percent Page Status Code Success [for Run]: The percentage of status code successes for all pages for the entire run
- Total Page Element Status Code Successes [for Interval]: The number of status code successes for all page elements during the most recent sample interval
- Total Page Element Status Code Successes [for Run]: The number of status code successes for all page elements for the entire run
- Total Page Status Code Successes [for Interval]: The number of status code successes for all pages within the most recent sample interval
- Total Page Status Code Successes [for Run]: The number of status code successes for all pages for the entire run
Verification Point counters
The tables in this section provide information about verification points. The verdict for a verification point can be Pass, Fail, Error, or Inconclusive; a sketch that tallies these verdicts follows the list below.
- Pass indicates that all verification points matched or received the expected response. For example, a response code verification point is set to PASS when the recorded response code is received during playback. If your test does not contain verification points, PASS means that all primary requests in the test were successful.
- Fail indicates that at least one verification point did not match the expected response or that the expected response was not received.
- Error indicates one of the following results: a primary request was not successfully sent to the server, no response was received from the server for a primary request, or the primary request response was incomplete or could not be parsed.
- The verdict is set to Inconclusive only if you provide custom code that defines a verdict of Inconclusive.
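A minimal sketch of how verdict counts and a Percent Pass value, like those in the tables that follow, could be tallied; the verdict values come from the list above, and the sample data is hypothetical.

```python
from collections import Counter
from enum import Enum

class Verdict(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    ERROR = "Error"
    INCONCLUSIVE = "Inconclusive"

# Hypothetical verdicts collected for one page's verification points.
verdicts = [Verdict.PASS, Verdict.PASS, Verdict.FAIL, Verdict.ERROR, Verdict.PASS]

counts = Counter(verdicts)
for verdict in Verdict:
    print(f"{verdict.value}: {counts[verdict]}")  # missing verdicts count as 0
print(f"Percent Pass: {100 * counts[Verdict.PASS] / len(verdicts):.0f}%")  # 60%
```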
Pages > Verification Points > All Pages
- Percent Pass: The percentage of page title verification points that passed for a specific page for the entire run
Pages > Verification Points > All Pages > Element > All Elements
- Percent Pass: The percentage of response code or response size verification points that passed for a specific page element for the entire run
Pages > Verification Points > All Pages > Element > All Elements > Error
- Count [for Interval]: The number of response code or response size verification points classified as Error for a specific page element during the most recent sample interval
- Count [for Run]: The number of response code or response size verification points classified as Error for a specific page element for the entire run
Pages > Verification Points > All Pages > Element > All Elements > Fail
- Count [for Interval]: The number of response code or response size verification points that failed for a specific page element during the most recent sample interval
- Count [for Run]: The number of response code or response size verification points that failed for a specific page element for the entire run
Pages > Verification Points > All Pages > Element > All Elements > Inconclusive
- Count [for Interval]: The number of response code or response size verification points classified as Inconclusive for a specific page element during the most recent sample interval
- Count [for Run]: The number of response code or response size verification points classified as Inconclusive for a specific page element for the entire run
Pages > Verification Points > All Pages > Element > All Elements > Pass
- Count [for Interval]: The number of response code or response size verification points that passed for a specific page element during the most recent sample interval
- Count [for Run]: The number of response code or response size verification points that passed for a specific page element for the entire run
Pages > Verification Points > All Pages > Error
- Count [for Interval]: The number of response code or response size verification points classified as Error for a specific page during the most recent sample interval
- Count [for Run]: The number of response code or response size verification points classified as Error for a specific page for the entire run
Pages > Verification Points > All Pages > Fail
- Count [for Interval]: The number of response code or response size verification points that failed for a specific page during the most recent sample interval
- Count [for Run]: The number of response code or response size verification points that failed for a specific page for the entire run
Pages > Verification Points > All Pages > Inconclusive
- Count [for Interval]: The number of response code or response size verification points classified as Inconclusive for a specific page during the most recent sample interval
- Count [for Run]: The number of response code or response size verification points classified as Inconclusive for a specific page for the entire run
Pages > Verification Points > All Pages > Pass
- Count [for Interval]: The number of response code or response size verification points that passed for a specific page during the most recent sample interval
- Count [for Run]: The number of response code or response size verification points that passed for a specific page for the entire run
Pages > Verification Points
- Percent Page Element VPs Passed [for Interval]: The percentage of response code or response size verification points that passed during the most recent sample interval
- Percent Page Element VPs Passed [for Run]: The percentage of response code or response size verification points that passed for the entire run
- Percent Page VPs Passed [for Interval]: The percentage of page title verification points that passed during the most recent sample interval
- Percent Page VPs Passed [for Run]: The percentage of page title verification points that passed for the entire run
- Total Page Element VPs Attempted [for Interval]: The number of response code or response size verification points executed during the most recent sample interval
- Total Page Element VPs Attempted [for Run]: The number of response code or response size verification points executed for the entire run
- Total Page Element VPs Error [for Interval]: The number of response code or response size verification points with a verdict of Error within the most recent sample interval
- Total Page Element VPs Error [for Run]: The number of response code or response size verification points with a verdict of Error for the entire run
- Total Page Element VPs Failed [for Interval]: The number of response code or response size verification points that failed during the most recent sample interval
- Total Page Element VPs Failed [for Run]: The number of response code or response size verification points that failed for the entire run
- Total Page Element VPs Inconclusive [for Interval]: The number of response code or response size verification points that were marked as Inconclusive within the most recent sample interval
- Total Page Element VPs Inconclusive [for Run]: The number of response code or response size verification points that were marked as Inconclusive for the entire run
- Total Page Element VPs Passed [for Interval]: The number of response code or response size verification points that passed during the most recent sample interval
- Total Page Element VPs Passed [for Run]: The number of response code or response size verification points that passed for the entire run
- Total Page VPs Attempted [for Interval]: The number of page title verification points that were executed during the most recent sample interval
- Total Page VPs Attempted [for Run]: The number of page title verification points that were executed for the entire run
- Total Page VPs Error [for Interval]: The number of page title verification points with a verdict of Error during the most recent sample interval
- Total Page VPs Error [for Run]: The number of page title verification points with a verdict of Error for the entire run
- Total Page VPs Failed [for Interval]: The number of page title verification points that failed during the most recent sample interval
- Total Page VPs Failed [for Run]: The number of page title verification points that failed for the entire run
- Total Page VPs Inconclusive [for Interval]: The number of page title verification points that were marked as Inconclusive during the most recent sample interval
- Total Page VPs Inconclusive [for Run]: The number of page title verification points that were marked as Inconclusive for the entire run
- Total Page VPs Passed [for Interval]: The number of page title verification points that passed during the most recent sample interval
- Total Page VPs Passed [for Run]: The number of page title verification points that passed for the entire run
5.11.3. Run counters
These counters provide information about the active users in the run, the users that have completed the run, and the HTTP status codes that were received.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule as a schedule property.
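Counters that report both [for Interval] and [for Run] values behave like two accumulators: one is reset at every sample-interval boundary and the other runs for the whole test. A minimal sketch of that bookkeeping, with invented names:

    // Sketch of interval-versus-run accumulation; names are illustrative only.
    class DualScopeCounter {
        private long runCount = 0;       // Count [for Run]: never reset
        private long intervalCount = 0;  // Count [for Interval]: reset each interval

        void increment() {
            runCount++;
            intervalCount++;
        }

        // Called at each sample-interval boundary (the Statistics sample interval):
        // report the interval value, then start a fresh interval.
        long closeInterval() {
            long forInterval = intervalCount;
            intervalCount = 0;
            return forInterval;
        }

        long countForRun() {
            return runCount;
        }
    }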
Run
- Displaying Results for computer: The name of the host
- Run Status: The run status or errors
Run > Active Users
- Count [for Run]: The number of users that are currently active
Run > Collection Time
- Count [for Interval]: The length of time that it took to collect all the statistics to send for the most recent sample interval. This is typically a brief period, and the value has limited use in a custom report.
Run > Completed Users
- Count [for Run]: The number of users that have completed the run
Run > HTTP Status Codes > 100 Codes
- Count [for Interval]: The number of HTTP status codes between 100 and 199 that were received during the most recent sample interval
- Count [for Run]: The number of HTTP status codes between 100 and 199 that were received in the entire run
Run > HTTP Status Codes > 200 Codes
- Count [for Interval]: The number of HTTP status codes between 200 and 299 that were received during the most recent sample interval
- Count [for Run]: The number of HTTP status codes between 200 and 299 that were received in the entire run
Run > HTTP Status Codes > 300 Codes
- Count [for Interval]: The number of HTTP status codes between 300 and 399 that were received during the most recent sample interval
- Count [for Run]: The number of HTTP status codes between 300 and 399 that were received in the entire run
Run > HTTP Status Codes > 400 Codes
- Count [for Interval]: The number of HTTP status codes between 400 and 499 that were received during the most recent sample interval
- Count [for Run]: The number of HTTP status codes between 400 and 499 that were received in the entire run
Run > HTTP Status Codes > 500 Codes
- Count [for Interval]: The number of HTTP status codes between 500 and 599 that were received during the most recent sample interval
- Count [for Run]: The number of HTTP status codes between 500 and 599 that were received in the entire run
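The five status-code counter folders partition received responses by hundreds range. A small sketch of that bucketing, with invented names:

    // Sketch: mapping received HTTP status codes to the 100-500 Codes counters.
    class StatusCodeBuckets {
        private final int[] counts = new int[5]; // index 0 -> 100 Codes ... index 4 -> 500 Codes

        void record(int statusCode) {
            if (statusCode >= 100 && statusCode <= 599) {
                counts[statusCode / 100 - 1]++;  // e.g. 404 increments the 400 Codes bucket
            }
        }

        int countFor(int series) {               // pass 100, 200, 300, 400, or 500
            return counts[series / 100 - 1];
        }
    }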
Run > Run Duration
- Count [for Run]: The duration of the run, in milliseconds (ms)
- Elapsed Time [H:M:S]: The duration of the run, in hours, minutes, and seconds
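The two Run Duration counters express the same measurement in different units; a sketch of the conversion, with invented names:

    // Sketch: deriving Elapsed Time [H:M:S] from the millisecond Count [for Run].
    class RunDuration {
        static String toHms(long durationMs) {
            long totalSeconds = durationMs / 1000;
            return String.format("%d:%02d:%02d",
                    totalSeconds / 3600,          // hours
                    (totalSeconds % 3600) / 60,   // minutes
                    totalSeconds % 60);           // seconds
        }
        // Example: toHms(3725000) returns "1:02:05".
    }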
Run > Sample Interval Length
- Count [for Run]: The length of the sample interval
Run > Schedule
- Executed Test: The web address of the schedule or test that was run
Run > Start Time
- Count [for Run]: The clock value at the start of the run, in milliseconds (ms)
Run > Total Users
- Count [for Run]: The total number of users that were involved in the run
5.11.4. Test counters
These counters provide information about the execution time of the tests in a run. Some counters produce an aggregate value, where the values for all tests are rolled up into one value; others produce individual values for each test.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
The counters in the following table provide an aggregate value for all tests. Folders that contain aggregate counters have a clock superimposed on them.
Tests > Elapsed Time > All Tests
- Average [for Interval]: The average execution time for all tests within the most recent sample interval
- Average [for Run]: The average execution time for all tests in the entire run
- Maximum [for Interval]: The maximum execution time for all tests within the most recent sample interval
- Maximum [for Run]: The maximum execution time for all tests in the entire run
- Minimum [for Interval]: The minimum execution time for all tests within the most recent sample interval
- Minimum [for Run]: The minimum execution time for all tests in the entire run
- Standard Deviation [for Interval]: The standard deviation for all tests within the most recent sample interval
- Standard Deviation [for Run]: The standard deviation for all tests in the entire run

The counters in the following table provide individual values for each test. Folders that contain individual counters have an asterisk superimposed on them.
Tests > Elapsed Time
- Average Execution Time for All Tests [ms] [for Interval]: The average execution time for each test within the most recent sample interval
- Average Execution Time for All Tests [ms] [for Run]: The average execution time for each test in the entire run
- Execution Time Standard Deviation for All Tests [for Interval]: The standard deviation for each test within the most recent sample interval
- Execution Time Standard Deviation for All Tests [for Run]: The standard deviation for each test in the entire run
- Maximum Execution Time for All Tests [ms] [for Interval]: The maximum execution time for each test within the most recent sample interval
- Maximum Execution Time for All Tests [ms] [for Run]: The maximum execution time for each test in the entire run
- Minimum Execution Time for All Tests [ms] [for Interval]: The minimum execution time for each test within the most recent sample interval
- Minimum Execution Time for All Tests [ms] [for Run]: The minimum execution time for each test in the entire run
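The aggregate table rolls every sample from every test into one value, while the individual table reports one value per test. A minimal sketch of the distinction, with invented names and sample data:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch: aggregate (one rolled-up value) versus individual (one value per test).
    class ElapsedTimeStats {
        private final Map<String, List<Long>> samplesByTest = new HashMap<>();

        void record(String testName, long elapsedMs) {
            samplesByTest.computeIfAbsent(testName, k -> new ArrayList<>()).add(elapsedMs);
        }

        // Aggregate counter: averages every sample from every test.
        double averageForAllTests() {
            return samplesByTest.values().stream()
                    .flatMap(List::stream)
                    .mapToLong(Long::longValue)
                    .average().orElse(0.0);
        }

        // Individual counters: one average per test name.
        Map<String, Double> averagePerTest() {
            Map<String, Double> averages = new HashMap<>();
            samplesByTest.forEach((name, samples) -> averages.put(name,
                    samples.stream().mapToLong(Long::longValue).average().orElse(0.0)));
            return averages;
        }
    }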
5.11.5. Transaction counters
These counters provide information about transactions that were attempted, transactions that were completed, and the elapsed time for the transactions. Some counters produce an aggregate value, where the values for all transactions are rolled up into one value; others produce individual values for each transaction.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
The counters in the following table provide an aggregate value for all transactions. Folders that contain aggregate counters have a clock superimposed on them.
Transactions > Attempts > All Transactions
- Count [for Interval]: The total number of transactions that were attempted within the last recorded interval
- Count [for Run]: The total number of transactions that were attempted in the entire run
- Rate [per second] [for Interval]: The number of transactions that were attempted per second in the last recorded interval
- Rate [per second] [for Run]: The number of transactions that were attempted per second in the entire run

The counters in the following table provide individual values for each transaction. Folders that contain individual counters have an asterisk superimposed on them.
Transactions > Attempts
- Total Transactions Started [for Interval]: The number of transactions that were started within the last recorded interval
- Transaction Start Rate [per second] [for Interval]: The rate at which transactions were started within the last recorded interval
- Transaction Start Rate [per second] [for Run]: The rate at which transactions were started in the entire run

The counters in the following table provide an aggregate value for all transactions. Folders that contain aggregate counters have a clock superimposed on them.
Transactions > Completed > All Transactions
- Count [for Interval]: The total number of transactions that were completed within the last recorded interval
- Count [for Run]: The total number of transactions that were completed in the entire run
- Rate [per second] [for Interval]: The number of transactions that were completed per second in the last recorded interval
- Rate [per second] [for Run]: The number of transactions that were completed per second in the entire run

The counters in the following table provide individual values for each transaction. Folders that contain individual counters have an asterisk superimposed on them.
Transactions > Completed
- Total Transactions Completed [for Run]: The total number of transactions that were completed
- Transaction Completion Rate [per second] [for Interval]: The rate at which transactions were completed in the last recorded interval
- Transaction Completion Rate [per second] [for Run]: The rate at which transactions were completed in the entire run

The counters in the following table provide an aggregate value for all transactions. Folders that contain aggregate counters have a clock superimposed on them.
Transactions > Elapsed Time > All Transactions
- Average [for Interval]: The average elapsed time for all transactions within the most recent sample interval
- Average [for Run]: The average elapsed time for all transactions in the entire run
- Maximum [for Interval]: The maximum elapsed time for all transactions within the most recent sample interval
- Maximum [for Run]: The maximum elapsed time for all transactions in the entire run
- Minimum [for Interval]: The minimum elapsed time for all transactions within the most recent sample interval
- Minimum [for Run]: The minimum elapsed time for all transactions in the entire run
- Standard Deviation [for Interval]: The standard deviation for all transactions within the most recent sample interval
- Standard Deviation [for Run]: The standard deviation for all transactions for the entire run

The counters in the following table provide individual values for each transaction. Folders that contain individual counters have an asterisk superimposed on them.
Transactions > Elapsed Time
- Average Elapsed Time for All Transactions [ms] [for Interval]: The average elapsed time for each transaction within the most recent sample interval
- Average Elapsed Time for All Transactions [ms] [for Run]: The average elapsed time for each transaction in the entire run
- Elapsed Time Standard Deviation for All Transactions [for Interval]: The standard deviation for each transaction within the most recent sample interval
- Elapsed Time Standard Deviation for All Transactions [for Run]: The standard deviation for each transaction in the entire run
- Maximum Elapsed Time for All Transactions [ms] [for Interval]: The maximum elapsed time for each transaction within the most recent sample interval
- Maximum Elapsed Time for All Transactions [ms] [for Run]: The maximum elapsed time for each transaction in the entire run
- Minimum Elapsed Time for All Transactions [ms] [for Interval]: The minimum elapsed time for each transaction within the most recent sample interval
- Minimum Elapsed Time for All Transactions [ms] [for Run]: The minimum elapsed time for each transaction in the entire run
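The Rate [per second] counters are the raw counts divided by the elapsed seconds of their scope. A sketch, with invented names and figures:

    // Sketch: deriving transaction Rate [per second] values from raw counts.
    class TransactionRates {
        // Rate [per second] [for Interval]: count in the interval / interval length.
        static double rateForInterval(long countInInterval, double intervalSeconds) {
            return countInInterval / intervalSeconds;
        }

        // Rate [per second] [for Run]: count in the run / elapsed run time.
        static double rateForRun(long countInRun, double runSeconds) {
            return countInRun / runSeconds;
        }
        // Example: 120 completed transactions in a 5-second interval -> 24.0 per second.
    }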
5.12. SAP counters
SAP generic counters, which are displayed in the Performance Test Runs view, enable you to customize your SAP reports with dynamic information that is updated for each run.
5.12.1. SAP screen counters
These counters provide information about screens started, screens completed, response times, and verification points. Some counters produce an aggregate value, where the values for several screens are rolled up into one value; others produce values for each screen.
Aggregate counters use the values for all the screens in a test to produce a single value for a report. This value is rolled up from all values that satisfy the counter. When you drag an aggregate counter onto a report, one value is displayed. Folders that contain aggregate counters have a clock superimposed on the folder icon. The tables below that list aggregate counters have this icon in their title.
Individual counters produce values for each item that satisfies the counter, rather than a single rolled-up value. Folders that contain individual counters have an asterisk superimposed on the folder icon. The tables below that list individual counters have this icon in their title.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
SAP Screen Request Response Time counters
The counters in this section provide information about the SAP application response time. This is the time that elapses from the moment the SAP GUI client sends a request to the moment the response is received and displayed by the SAP GUI client.
SAP Screens > SAP Screen Request Response Time > All SAP Screens
- Average [for Interval]: The average response time for a specific SAP screen during the most recent sample interval
- Average [for Run]: The average response time for a specific SAP screen for the entire run
- Maximum [for Interval]: The maximum response time for a specific SAP screen during the most recent sample interval
- Maximum [for Run]: The maximum response time for a specific SAP screen for the entire run
- Minimum [for Interval]: The minimum response time for a specific SAP screen during the most recent sample interval
- Minimum [for Run]: The minimum response time for a specific SAP screen for the entire run
- Standard Deviation [for Interval]: The standard deviation of response times for a specific SAP screen during the most recent sample interval
- Standard Deviation [for Run]: The standard deviation of response times for a specific SAP screen for the entire run
SAP Screens > SAP Screen Request Response Time
- Average Screen Request Response Time for All SAP Screens [ms] [for Interval]: The average response time for all SAP screens within the most recent sample interval
- Average Screen Request Response Time for All SAP Screens [ms] [for Run]: The average response time for all SAP screens for the entire run
- Maximum Screen Request Response Time for All SAP Screens [ms] [for Interval]: The maximum response time for all SAP screens within the most recent sample interval
- Maximum Screen Request Response Time for All SAP Screens [ms] [for Run]: The maximum response time for all SAP screens for the entire run
- Minimum Screen Request Response Time for All SAP Screens [ms] [for Interval]: The minimum response time for all SAP screens within the most recent sample interval
- Minimum Screen Request Response Time for All SAP Screens [ms] [for Run]: The minimum response time for all SAP screens for the entire run
- Standard Deviation Screen Request Response Time for All SAP Screens [ms] [for Interval]: The standard deviation of response times for all SAP screens during the most recent sample interval
- Standard Deviation Screen Request Response Time for All SAP Screens [ms] [for Run]: The standard deviation of response times for all SAP screens for the entire run
SAP Screen Request Interpretation Time counters
The counters in this section provide information about the SAP GUI interpretation time. This is the duration from the moment the SAP GUI client receives data from the server to the moment when this data is displayed. It measures the health of the SAP GUI client used for the test rather than the performance of the SAP R/3 server.
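As an illustration with invented figures: if the request response time for a screen is 900 ms and the interpretation time is 150 ms, roughly 750 ms of the total was spent in network transfer and server processing rather than in the SAP GUI client.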
SAP Screens > SAP Screen Request Interpretation Time > All SAP Screens
- Average [for Interval]: The average interpretation time for a specific SAP screen during the most recent sample interval
- Average [for Run]: The average interpretation time for a specific SAP screen for the entire run
- Maximum [for Interval]: The maximum interpretation time for a specific SAP screen during the most recent sample interval
- Maximum [for Run]: The maximum interpretation time for a specific SAP screen for the entire run
- Minimum [for Interval]: The minimum interpretation time for a specific SAP screen during the most recent sample interval
- Minimum [for Run]: The minimum interpretation time for a specific SAP screen for the entire run
- Standard Deviation [for Interval]: The standard deviation of interpretation times for a specific SAP screen during the most recent sample interval
- Standard Deviation [for Run]: The standard deviation of interpretation times for a specific SAP screen for the entire run
SAP Screens > SAP Screen Request Interpretation Time
- Average Screen Request Interpretation Time for All SAP Screens [ms] [for Interval]: The average interpretation time for all SAP screens during the most recent sample interval
- Average Screen Request Interpretation Time for All SAP Screens [ms] [for Run]: The average interpretation time for all SAP screens for the entire run
- Maximum Screen Request Interpretation Time for All SAP Screens [ms] [for Interval]: The maximum interpretation time for all SAP screens during the most recent sample interval
- Maximum Screen Request Interpretation Time for All SAP Screens [ms] [for Run]: The maximum interpretation time for all SAP screens for the entire run
- Minimum Screen Request Interpretation Time for All SAP Screens [ms] [for Interval]: The minimum interpretation time for all SAP screens during the most recent sample interval
- Minimum Screen Request Interpretation Time for All SAP Screens [ms] [for Run]: The minimum interpretation time for all SAP screens for the entire run
- Standard Deviation Screen Request Interpretation Time for All SAP Screens [ms] [for Interval]: The standard deviation of interpretation times for all SAP screens during the most recent sample interval
- Standard Deviation Screen Request Interpretation Time for All SAP Screens [ms] [for Run]: The standard deviation of interpretation times for all SAP screens for the entire run
SAP Screens Started counters
The counters in this section provide information about screens that are started.
SAP Screens > SAP Screens Started > All SAP Screens
- Count [for Interval]: The number of started screens for a specific SAP screen during the most recent sample interval
- Count [for Run]: The number of started screens for a specific SAP screen for the entire run
- Rate [per second] [for Interval]: The rate at which screens were started for a specific SAP screen during the most recent sample interval
- Rate [per second] [for Run]: The rate at which screens were started for a specific SAP screen for the entire run
SAP Screens > SAP Screens Started
- SAP Screens Started Rate [for Interval]: The rate at which SAP screens were started within the most recent sample interval
- SAP Screens Started Rate [for Run]: The rate at which SAP screens were started for the entire run
- Total SAP Screens Started [for Interval]: The number of started SAP screens during the most recent sample interval
- Total SAP Screens Started [for Run]: The number of started SAP screens for the entire run
SAP Screens Completed counters
The counters in this section provide information about SAP screens that were completed.
SAP Screens > SAP Screens Completed > All SAP Screens
- Count [for Interval]: The number of completed screens for a specific SAP screen during the most recent sample interval
- Count [for Run]: The number of completed screens for a specific SAP screen for the entire run
- Rate [per second] [for Interval]: The rate at which screens were completed for a specific SAP screen during the most recent sample interval
- Rate [per second] [for Run]: The rate at which screens were completed for a specific SAP screen for the entire run
SAP Screens > SAP Screens Completed
- SAP Screens Completed Percent [for Interval]: The percentage of completed SAP screens within the most recent sample interval
- SAP Screens Completed Percent [for Run]: The percentage of completed SAP screens for the entire run
- SAP Screens Completed Rate [for Interval]: The rate at which SAP screens were completed within the most recent sample interval
- SAP Screens Completed Rate [for Run]: The rate at which SAP screens were completed for the entire run
- Total SAP Screens Completed [for Interval]: The number of completed SAP screens within the most recent sample interval
- Total SAP Screens Completed [for Run]: The number of completed SAP screens for the entire run
Verification Point counters
The tables in this section provide information about SAP screen verification points. The verdict for a verification point can be Pass, Fail, Error, or Inconclusive.
- Pass indicates that the verification point matched or received the expected response. For example, a response code verification point is set to Pass when the recorded response code is received during playback. If a test does not contain verification points, a Pass verdict means only that the connection succeeded.
- Fail indicates that the verification point did not match the expected response or that the expected response was not received.
- Error indicates that the primary request was not successfully sent to the server, no response was received from the server, or the response was incomplete or could not be parsed.
- The verdict is set to Inconclusive only if you provide custom code that defines a verdict of Inconclusive.
SAP Screens > Verification Points > All SAP Screens
- Percent Pass: The percentage of verification points that passed for a specific SAP screen for the entire run
SAP Screens > Verification Points > All SAP Screens > All Elements > Error
- Count [for Interval]: The number of verification points classified as Error for a specific SAP screen during the most recent sample interval
- Count [for Run]: The number of verification points classified as Error for a specific SAP screen for the entire run
SAP Screens > Verification Points > All SAP Screens > All Elements > Fail
- Count [for Interval]: The number of verification points that failed for a specific SAP screen during the most recent sample interval
- Count [for Run]: The number of verification points that failed for a specific SAP screen for the entire run
SAP Screens > Verification Points > All SAP Screens > All Elements > Inconclusive
- Count [for Interval]: The number of verification points classified as Inconclusive for a specific SAP screen during the most recent sample interval
- Count [for Run]: The number of verification points classified as Inconclusive for a specific SAP screen for the entire run
SAP Screens > Verification Points > All SAP Screens > All Elements > Pass
- Count [for Interval]: The number of verification points that passed for a specific SAP screen during the most recent sample interval
- Count [for Run]: The number of verification points that passed for a specific SAP screen for the entire run
SAP Screens > Verification Points
- Percent SAP Screens VPs Passed [for Interval]: The percentage of verification points that passed within the most recent sample interval
- Percent SAP Screens VPs Passed [for Run]: The percentage of verification points that passed for the entire run
- Total SAP Screens VPs Attempted [for Interval]: The number of verification points executed within the most recent sample interval
- Total SAP Screens VPs Attempted [for Run]: The number of verification points executed for the entire run
- Total SAP Screens VPs Error [for Interval]: The number of verification points with a verdict of Error within the most recent sample interval
- Total SAP Screens VPs Error [for Run]: The number of verification points with a verdict of Error for the entire run
- Total SAP Screens VPs Failed [for Interval]: The number of verification points that failed within the most recent sample interval
- Total SAP Screens VPs Failed [for Run]: The number of verification points that failed for the entire run
- Total SAP Screens VPs Inconclusive [for Interval]: The number of verification points that were marked as Inconclusive within the most recent sample interval
- Total SAP Screens VPs Inconclusive [for Run]: The number of verification points that were marked as Inconclusive for the entire run
- Total SAP Screens VPs Passed [for Interval]: The number of verification points that passed within the most recent sample interval
- Total SAP Screens VPs Passed [for Run]: The number of verification points that passed for the entire run
SAP Elements Attempted counters
The counters in this section provide information about SAP set elements, SAP get elements, or SAP calls that were attempted.
SAP Elements > SAP Elements Attempted > All SAP Screens
- Count [for Interval]: The number of attempted elements for a specific SAP screen during the most recent sample interval
- Count [for Run]: The number of attempted elements for a specific SAP screen for the entire run
- Rate [per second] [for Interval]: The rate at which elements were attempted for a specific SAP screen within the most recent sample interval
- Rate [per second] [for Run]: The rate at which elements were attempted for a specific SAP screen for the entire run
SAP Elements > SAP Elements Attempted
- SAP Elements Attempted Rate [for Interval]: The rate at which SAP elements were attempted within the most recent sample interval
- SAP Elements Attempted Rate [for Run]: The rate at which SAP elements were attempted for the entire run
- Total SAP Elements Attempted [for Interval]: The number of attempted SAP elements within the most recent sample interval
- Total SAP Elements Attempted [for Run]: The number of attempted SAP elements for the entire run
SAP Elements Completed counters
The counters in this section provide information about SAP set elements, SAP get elements, or SAP calls that terminated normally. If an element does not complete, it is counted in SAP Elements Attempted but not in SAP Elements Completed.
SAP Elements > SAP Elements Completed > All SAP Screens
- Count [for Interval]: The number of completed elements for a specific SAP screen during the most recent sample interval
- Count [for Run]: The number of completed elements for a specific SAP screen for the entire run
- Rate [per second] [for Interval]: The rate at which elements were completed for a specific SAP screen during the most recent sample interval
- Rate [per second] [for Run]: The rate at which elements were completed for a specific SAP screen for the entire run
SAP Elements > SAP Elements Completed
- SAP Elements Completed Percent [for Interval]: The percentage of completed SAP elements within the most recent sample interval
- SAP Elements Completed Percent [for Run]: The percentage of completed SAP elements for the entire run
- SAP Elements Completed Rate [for Interval]: The rate at which SAP elements were completed within the most recent sample interval
- SAP Elements Completed Rate [for Run]: The rate at which SAP elements were completed for the entire run
- Total SAP Elements Completed [for Interval]: The number of completed SAP elements within the most recent sample interval
- Total SAP Elements Completed [for Run]: The number of completed SAP elements for the entire run
5.13. Citrix counters
Citrix generic counters, which are displayed in the Performance Test Runs view, enable you to customize your Citrix reports with dynamic information that is updated for each run.
5.13.1. Citrix window synchronization counters
Citrix window synchronization counters provide information about window synchronizations that were attempted, completed, or produced a timeout within the specified timeout limit. Window synchronization is the mechanism used by the test to compare windows from the test with actual windows in the run.
Some counters produce an aggregate value, where the values for several window synchronizations are combined into a single value; others produce values for each window synchronization.
Aggregate counters use the values for all the elements in a test to produce a single value for a report. This value is rolled up from all values that satisfy the counter. When you drag an aggregate counter onto a report, one value is displayed. Folders that contain aggregate counters have a clock superimposed on the folder icon. In the tables that follow, the aggregate counters are listed after the titles that contain this icon.
Individual counters produce values for each item that satisfies the counter, rather than an aggregate value. Folders that contain individual counters have an asterisk superimposed on the folder icon. In the tables that follow, the individual counters are listed after the titles that contain this icon.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
The counters in the following table provide an aggregate value for all window synchronizations. Folders that contain aggregate counters have a clock superimposed on them.
Citrix Window Synchronizations > Citrix Window Synchronization Attempts
- Count [for Interval]: The total number of window synchronizations that were attempted within the last recorded interval
- Count [for Run]: The total number of window synchronizations that were attempted in the entire run

The counters in the following table provide an aggregate value for all window synchronizations. Folders that contain aggregate counters have a clock superimposed on them.
Citrix Window Synchronizations > Citrix Window Synchronization Successes
- Count [for Interval]: The total number of window synchronizations that succeeded within the last recorded interval
- Count [for Run]: The total number of window synchronizations that succeeded in the entire run
- Percent Citrix Synchronization Success for Interval: The percentage of window synchronizations that succeeded within the last recorded interval
- Percent Citrix Synchronization Success for Run: The percentage of window synchronizations that succeeded in the entire run

The counters in the following table provide an aggregate value for all window synchronizations. Folders that contain aggregate counters have a clock superimposed on them.
Citrix Window Synchronizations > Citrix Window Synchronization Timeouts
- Count [for Interval]: The total number of window synchronizations that produced a timeout within the last recorded interval
- Count [for Run]: The total number of window synchronizations that produced a timeout in the entire run
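As an illustration with invented figures: if 200 window synchronizations are attempted in a run, 188 succeed, and 12 time out, Percent Citrix Synchronization Success for Run is 188 / 200 = 94 percent.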
5.13.2. Citrix image synchronization counters
Citrix image synchronization counters provide information about image synchronizations that were attempted, completed, or produced a timeout within the specified timeout limit. Image synchronization is the mechanism used by the test to compare image areas from the test with actual image areas in the run.
Some counters produce an aggregate value, where the values for several image synchronizations are combined into a single value; others produce values for each image synchronization.
Aggregate counters use the values for all the elements in a test to produce a single value for a report. This value is rolled up from all values that satisfy the counter. When you drag an aggregate counter onto a report, one value is displayed. Folders that contain aggregate counters have a clock superimposed on the folder icon. In the tables that follow, the aggregate counters are listed after the titles that contain this icon.
Individual counters produce values for each item that satisfies the counter, rather than an aggregate value. Folders that contain individual counters have an asterisk superimposed on the folder icon. In the tables that follow, the individual counters are listed after the titles that contain this icon.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
The counters in the following table provide an aggregate value for all image synchronizations. Folders that contain aggregate counters have a clock superimposed on them.
Citrix Image Synchronizations > Citrix Image Synchronization Attempts
- Count [for Interval]: The total number of image synchronizations that were attempted within the last recorded interval
- Count [for Run]: The total number of image synchronizations that were attempted in the entire run

The counters in the following table provide an aggregate value for all image synchronizations. Folders that contain aggregate counters have a clock superimposed on them.
Citrix Image Synchronizations > Citrix Image Synchronization Successes
- Count [for Interval]: The total number of image synchronizations that succeeded within the last recorded interval
- Count [for Run]: The total number of image synchronizations that succeeded in the entire run
- Percent Citrix Synchronization Success for Interval: The percentage of image synchronizations that succeeded within the last recorded interval
- Percent Citrix Synchronization Success for Run: The percentage of image synchronizations that succeeded in the entire run

The counters in the following table provide an aggregate value for all image synchronizations. Folders that contain aggregate counters have a clock superimposed on them.
Citrix Image Synchronizations > Citrix Image Synchronization Timeouts
- Count [for Interval]: The total number of image synchronizations that produced a timeout within the last recorded interval
- Count [for Run]: The total number of image synchronizations that produced a timeout in the entire run
Citrix Image Synchronizations > Image Synchronization Verification Points > All Image Synchronizations
- Percent Pass: The percentage of image synchronization verification points that passed for a specific window event for the entire run
Citrix Image Synchronizations > Image Synchronization Verification Points > All Image Synchronizations > Error
- Count [for Interval]: The number of image synchronization verification points classified as Error for a specific window event during the most recent sample interval
- Count [for Run]: The number of image synchronization verification points classified as Error for a specific window event for the entire run
Citrix Image Synchronizations > Image Synchronization Verification Points > All Image Synchronizations > Fail
- Count [for Interval]: The number of image synchronization verification points that failed for a specific window event during the most recent sample interval
- Count [for Run]: The number of image synchronization verification points that failed for a specific window event for the entire run
Citrix Image Synchronizations > Image Synchronization Verification Points > All Image Synchronizations > Inconclusive
- Count [for Interval]: The number of image synchronization verification points classified as Inconclusive for a specific window event during the most recent sample interval
- Count [for Run]: The number of image synchronization verification points classified as Inconclusive for a specific window event for the entire run
Citrix Image Synchronizations > Image Synchronization Verification Points > All Image Synchronizations > Pass
- Count [for Interval]: The number of image synchronization verification points that passed for a specific window event during the most recent sample interval
- Count [for Run]: The number of image synchronization verification points that passed for a specific window event for the entire run
Citrix Image Synchronizations > Image Synchronization Verification Points
- Percent Image Synchronization VPs Passed For Interval: The percentage of image synchronization verification points that passed during the most recent sample interval
- Percent Image Synchronization VPs Passed For Run: The percentage of image synchronization verification points that passed for the entire run
- Total Image Synchronization VPs Attempted [for Interval]: The number of image synchronization verification points executed during the most recent sample interval
- Total Image Synchronization VPs Attempted [for Run]: The number of image synchronization verification points executed for the entire run
- Total Image Synchronization VPs Error [for Interval]: The number of image synchronization verification points with a verdict of Error during the most recent sample interval
- Total Image Synchronization VPs Error [for Run]: The number of image synchronization verification points with a verdict of Error for the entire run
- Total Image Synchronization VPs Failed [for Interval]: The number of image synchronization verification points that failed during the most recent sample interval
- Total Image Synchronization VPs Failed [for Run]: The number of image synchronization verification points that failed for the entire run
- Total Image Synchronization VPs Inconclusive [for Interval]: The number of image synchronization verification points that were marked as Inconclusive within the most recent sample interval
- Total Image Synchronization VPs Inconclusive [for Run]: The number of image synchronization verification points that were marked as Inconclusive for the entire run
- Total Image Synchronization VPs Passed [for Interval]: The number of image synchronization verification points that passed during the most recent sample interval
- Total Image Synchronization VPs Passed [for Run]: The number of image synchronization verification points that passed for the entire run
5.13.3. Citrix timer counters
Citrix timer counters provide information about the response time of window events in a run. Response times are determined by measurements that are located in the tests.
Some counters produce an aggregate value, where the values for several response-time measurements are combined into a single value; others produce values for each measurement.
Aggregate counters use the values for all the elements in a test to produce a single value for a report. This value is rolled up from all values that satisfy the counter. When you drag an aggregate counter onto a report, one value is displayed. Folders that contain aggregate counters have a clock superimposed on the folder icon. In the tables that follow, the aggregate counters are listed after the titles that contain this icon.
Individual counters produce values for each item that satisfies the counter, rather than an aggregate value. Folders that contain individual counters have an asterisk superimposed on the folder icon. In the tables that follow, the individual counters are listed after the titles that contain this icon.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
The counters in the following table provide an aggregate value for all timers. Folders that contain aggregate counters have a clock superimposed on them.
Citrix Timers > Response Time
- Average Response Time For All Timers For Interval [ms]: The average response time for all measurements within the most recent sample interval
- Average Response Time For All Timers For Run [ms]: The average response time for all measurements in the entire run
- Maximum Response Time For All Timers For Interval [ms]: The maximum response time for all measurements within the most recent sample interval
- Maximum Response Time For All Timers For Run [ms]: The maximum response time for all measurements in the entire run
- Minimum Response Time For All Timers For Interval [ms]: The minimum response time for all measurements within the most recent sample interval
- Minimum Response Time For All Timers For Run [ms]: The minimum response time for all measurements in the entire run
- Standard Deviation Response Time For All Timers For Interval [ms]: The standard deviation of response times for all measurements during the most recent sample interval
- Standard Deviation Response Time For All Timers For Run [ms]: The standard deviation of response times for all measurements in the entire run

The counters in the following table provide individual values for each timer. Folders that contain individual counters have an asterisk superimposed on them.
Citrix Timers > Response Time > All Response Times
- Average [for Interval]: The average response time for all measurements within the most recent sample interval
- Average [for Run]: The average response time for all measurements in the entire run
- Maximum [for Interval]: The maximum response time for all measurements within the most recent sample interval
- Maximum [for Run]: The maximum response time for all measurements in the entire run
- Minimum [for Interval]: The minimum response time for all measurements within the most recent sample interval
- Minimum [for Run]: The minimum response time for all measurements in the entire run
- Standard Deviation [for Interval]: The standard deviation of response times for all measurements during the most recent sample interval
- Standard Deviation [for Run]: The standard deviation of response times for all measurements in the entire run
5.13.4. Citrix user action counters
Citrix user action counters provide information about emulated user input actions in the run.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
The counters in the following table provide an aggregate value for all user actions. Folders that contain aggregate counters have a clock superimposed on them.
User Actions
- Count [for Interval]: The total number of user input actions within the most recent sample interval
- Count [for Run]: The total number of user input actions in the entire run
- Rate [per second] [for Interval]: The rate per second of user input actions within the most recent sample interval
- Rate [per second] [for Run]: The rate per second of user input actions in the entire run

The counters in the following table provide individual values for each user action. Folders that contain individual counters have an asterisk superimposed on them.
User Actions > All User Actions
- Total User Actions For Interval: The total number of user input actions within the most recent sample interval
- Total User Actions For Run: The total number of user input actions in the entire run
- User Action Rate For Interval: The rate per second of user input actions within the most recent sample interval
- User Action Rate For Run: The rate per second of user input actions in the entire run
5.13.5. Citrix window counters
Citrix window counters provide information about verification points.
The verdict for a verification point can be Pass, Fail, Error, or Inconclusive.
- Pass indicates that the verification point matched or received the expected response. For example, a response code verification point is set to Pass when the recorded response code is received during the run. If a test does not contain verification points, a Pass verdict means only that the connection succeeded.
- Fail indicates that the verification point did not match the expected response or that the expected response was not received.
- Error indicates that the primary request was not successfully sent to the server, no response was received from the server, or the response was incomplete or could not be parsed.
- The verdict is set to Inconclusive only if you provide custom code that defines a verdict of Inconclusive.
Some counters produce an aggregate value, where the values for several window events are rolled up into one value; others produce values for each window event.
Aggregate counters use the values for all the elements in a test to produce a single value for a report. This value is rolled up from all values that satisfy the counter. When you drag an aggregate counter onto a report, one value is displayed. Folders that contain aggregate counters have a clock superimposed on the folder icon. The tables below that list aggregate counters have this icon in their title.
Individual counters produce values for each item that satisfies the counter, rather than a single rolled-up value. Folders that contain individual counters have an asterisk superimposed on the folder icon. The tables below that list individual counters have this icon in their title.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
Citrix Windows > Verification Points > All Windows
- Percent Pass: The percentage of window title verification points that passed for a specific window event for the entire run
Citrix Windows > Verification Points > All Windows > Error
- Count [for Interval]: The number of window title verification points classified as Error for a specific window event within the most recent sample interval
- Count [for Run]: The number of window title verification points classified as Error for a specific window event for the entire run
Citrix Windows > Verification Points > All Windows > Fail
- Count [for Interval]: The number of window title verification points that failed for a specific window event during the most recent sample interval
- Count [for Run]: The number of window title verification points that failed for a specific window event for the entire run
Citrix Windows > Verification Points > All Windows > Inconclusive
- Count [for Interval]: The number of window title verification points classified as Inconclusive for a specific window event within the most recent sample interval
- Count [for Run]: The number of window title verification points classified as Inconclusive for a specific window event for the entire run
Citrix Windows > Verification Points > All Windows > Pass
- Count [for Interval]: The number of window title verification points that passed for a specific window event during the most recent sample interval
- Count [for Run]: The number of window title verification points that passed for a specific window event for the entire run
Citrix Windows > Verification Points
- Percent Window VPs Passed For Interval: The percentage of window title verification points that passed during the most recent sample interval
- Percent Window VPs Passed For Run: The percentage of window title verification points that passed for the entire run
- Total Window VPs Attempted [for Interval]: The number of window title verification points executed during the most recent sample interval
- Total Window VPs Attempted [for Run]: The number of window title verification points executed for the entire run
- Total Window VPs Error [for Interval]: The number of window title verification points with a verdict of Error during the most recent sample interval
- Total Window VPs Error [for Run]: The number of window title verification points with a verdict of Error for the entire run
- Total Window VPs Failed [for Interval]: The number of window title verification points that failed within the most recent sample interval
- Total Window VPs Failed [for Run]: The number of window title verification points that failed for the entire run
- Total Window VPs Inconclusive [for Interval]: The number of window title verification points that were marked as Inconclusive during the most recent sample interval
- Total Window VPs Inconclusive [for Run]: The number of window title verification points that were marked as Inconclusive for the entire run
- Total Window VPs Passed [for Interval]: The number of window title verification points that passed during the most recent sample interval
- Total Window VPs Passed [for Run]: The number of window title verification points that passed for the entire run
5.14. Service counters
Service counters, which are displayed in the Test Runs view, enable you to customize your service test reports with dynamic information that is updated for each run.
5.14.1. Service call counters
Service call counters provide information about the calls that are invoked during the service test.
Some counters produce an aggregate value, where the values for several service calls are combined into a single value; others produce values for each service call.
Aggregate counters use the values for all the elements in a test to produce a single value for a report. This value combines all values that satisfy the counter. When you drag an aggregate counter onto a report, one value is displayed. Folders that contain aggregate counters have a clock superimposed on the folder icon. In the tables that follow, the aggregate counters are listed after the titles that contain this icon.
Individual counters produce values for each item that satisfies the counter, rather than an aggregate value. Folders that contain individual counters have an asterisk superimposed on the folder icon. In the tables that follow, the individual counters are listed after the titles that contain this icon.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
Web Service Calls > Connection Time > All Connection Times
- Average [for Interval]: The average time, in milliseconds, required to establish a connection to the web service for all web service calls within the most recent sample interval
- Average [for Run]: The average time, in milliseconds, required to establish a connection to the web service for all web service calls for the entire run
- Standard deviation [for Interval]: The standard deviation of the time, in milliseconds, required to establish a connection to the web service for all web service calls within the most recent sample interval
- Standard deviation [for Run]: The standard deviation of the time, in milliseconds, required to establish a connection to the web service for all web service calls for the entire run
- Maximum [for Interval]: The longest time, in milliseconds, required to establish a connection to the web service for all web service calls within the most recent sample interval
- Maximum [for Run]: The longest time, in milliseconds, required to establish a connection to the web service for all web service calls for the entire run
- Minimum [for Interval]: The shortest time, in milliseconds, required to establish a connection to the web service for all web service calls within the most recent sample interval
- Minimum [for Run]: The shortest time, in milliseconds, required to establish a connection to the web service for all web service calls for the entire run
Web Service Calls > Connection Time
- Average connection time [ms] [for Interval]: The average time, in milliseconds, required to establish a connection to the web service for a specific web service call within the most recent sample interval
- Average connection time [ms] [for Run]: The average time, in milliseconds, required to establish a connection to the web service for a specific web service call for the entire run
- Connection time standard deviation [ms] [for Interval]: The standard deviation of the time, in milliseconds, required to establish a connection to the web service for a specific web service call during the most recent sample interval
- Connection time standard deviation [ms] [for Run]: The standard deviation of the time, in milliseconds, required to establish a connection to the web service for a specific web service call for the entire run
- Maximum connection time [ms] [for Interval]: The longest time, in milliseconds, required to establish a connection to the web service for a specific web service call within the most recent sample interval
- Maximum connection time [ms] [for Run]: The longest time, in milliseconds, required to establish a connection to the web service for a specific web service call for the entire run
- Minimum connection time [ms] [for Interval]: The shortest time, in milliseconds, required to establish a connection to the web service for a specific web service call within the most recent sample interval
- Minimum connection time [ms] [for Run]: The shortest time, in milliseconds, required to establish a connection to the web service for a specific web service call for the entire run
Web Service Calls > Received Bytes > All Received Bytes
- Average [for Interval]: The average data volume, in bytes, received as a response from the web service for all web service calls during the most recent sample interval
- Average [for Run]: The average data volume, in bytes, received as a response from the web service for all web service calls for the entire run
- Count [for Interval]: The total data volume, in bytes, received as a response from the web service for all web service calls during the most recent sample interval
- Count [for Run]: The total data volume, in bytes, received as a response from the web service for all web service calls for the entire run
- Maximum [for Interval]: The largest data volume, in bytes, received as a response from the web service for all web service calls during the most recent sample interval
- Maximum [for Run]: The largest data volume, in bytes, received as a response from the web service for all web service calls for the entire run
- Maximum Count [for Run]: The largest cumulative data volume, in bytes, received as a response from the web service for all web service calls for the entire run
- Minimum [for Interval]: The smallest data volume, in bytes, received as a response from the web service for all web service calls during the most recent sample interval
- Minimum [for Run]: The smallest data volume, in bytes, received as a response from the web service for all web service calls for the entire run
- Minimum Count [for Run]: The smallest cumulative data volume, in bytes, received as a response from the web service for all web service calls for the entire run
- Rate [per second] [for Interval]: The data volume throughput, in bytes per second, received as a response from the web service for all web service calls within the most recent sample interval
- Rate [per second] [for Run]: The data volume throughput, in bytes per second, received as a response from the web service for all web service calls for the entire run
- Standard deviation [for Interval]: The standard deviation of the data volume, in bytes, received as a response from the web service for all web service calls within the most recent sample interval
- Standard deviation [for Run]: The standard deviation of the data volume, in bytes, received as a response from the web service for all web service calls for the entire run
Web Service Calls>
Received Bytes
Counter name Description Average received bytes for all calls [Bytes] [for Interval] The average data volume, in bytes, received as a response from the web service for all web service calls during the most recent sample interval Average received bytes for all calls [Bytes] [for Run] The average data volume, in bytes, received as a response from the web service for all web service calls for the entire run Maximum received bytes [Bytes] [for Interval] The largest data volume, in bytes, received as a response from the web service for a specific web service call during the most recent sample interval Maximum received bytes [per Interval] [for Run] The largest data volume, in bytes, received as a response from the web service for a specific web service call for the entire run Maximum received bytes for all calls [Bytes] [for Interval] The largest data volume, in bytes, received as a response from the web service for all web service calls during the most recent sample interval Maximum received bytes for all calls [Bytes] [for Run] The largest data volume, in bytes, received as a response from the web service for all web service calls for the entire run Minimum received bytes [Bytes] [for Interval] The smallest data volume, in bytes, received as a response from the web service for a specific web service call during the most recent sample interval Minimum received bytes [per Interval] [for Run] The smallest data volume, in bytes, received as a response from the web service for a specific web service call for the entire run Minimum received bytes for all calls [Bytes] [for Interval] The smallest data volume, in bytes, received as a response from the web service for a specific web service call during the most recent sample interval Minimum received bytes for all calls [Bytes] [for Run] The smallest data volume, in bytes, received as a response from the web service for a specific web service call for the entire run Rate received bytes [for Interval] The data volume throughput, in bytes per second, received as a response from the web service for all web service calls within the most recent sample interval Rate received bytes [for Run] The data volume throughput, in bytes per second, received as a response from the web service for a specific web service call for the entire run Received bytes standard deviation [ms] [for Interval] The standard deviation data volume, in bytes, received as a response from the web service for a specific web service call within the most recent sample interval Received bytes standard deviation [ms] [for Run] The standard deviation data volume, in bytes, received as a response from the web service for a specific web service call for the entire run Total received bytes [Bytes] [for Interval] The total data volume, in bytes, received as a response from the web service for a specific web service call during the most recent sample interval Total received bytes [Bytes] [for Run] The total data volume, in bytes, received as a response from the web service for a specific web service call for the entire run
Web Service Calls > Sent Bytes > All Sent Bytes
- Average [for Interval]: The average data volume, in bytes, sent as a call to the web service for all web service calls during the most recent sample interval
- Average [for Run]: The average data volume, in bytes, sent as a call to the web service for all web service calls for the entire run
- Count [for Interval]: The total data volume, in bytes, sent as a call to the web service for all web service calls within the most recent sample interval
- Count [for Run]: The total data volume, in bytes, sent as a call to the web service for all web service calls for the entire run
- Maximum [for Interval]: The largest data volume, in bytes, sent as a call to the web service for all web service calls during the most recent sample interval
- Maximum [for Run]: The largest data volume, in bytes, sent as a call to the web service for all web service calls for the entire run
- Maximum Count [for Run]: The largest cumulative data volume, in bytes, sent as a call to the web service for all web service calls for the entire run
- Minimum [for Interval]: The smallest data volume, in bytes, sent as a call to the web service for all web service calls during the most recent sample interval
- Minimum [for Run]: The smallest data volume, in bytes, sent as a call to the web service for all web service calls for the entire run
- Minimum Count [for Run]: The smallest cumulative data volume, in bytes, sent as a call to the web service for all web service calls for the entire run
- Rate [per second] [for Interval]: The data volume throughput, in bytes per second, sent as a call to the web service for all web service calls during the most recent sample interval
- Rate [per second] [for Run]: The data volume throughput, in bytes per second, sent as a call to the web service for all web service calls for the entire run
- Standard deviation [for Interval]: The standard deviation of the data volume, in bytes, sent as a call to the web service for all web service calls during the most recent sample interval
- Standard deviation [for Run]: The standard deviation of the data volume, in bytes, sent as a call to the web service for all web service calls for the entire run

Web Service Calls > Sent Bytes
- Average sent bytes for all calls [Bytes] [for Interval]: The average data volume, in bytes, sent as a call to the web service for all web service calls during the most recent sample interval
- Average sent bytes for all calls [Bytes] [for Run]: The average data volume, in bytes, sent as a call to the web service for all web service calls for the entire run
- Maximum sent bytes [Bytes] [for Interval]: The largest data volume, in bytes, sent as a call to the web service for a specific web service call during the most recent sample interval
- Maximum sent bytes [Bytes] [for Run]: The largest data volume, in bytes, sent as a call to the web service for a specific web service call for the entire run
- Maximum sent bytes for all calls [Bytes] [for Interval]: The largest data volume, in bytes, sent as a call to the web service for all web service calls during the most recent sample interval
- Maximum sent bytes for all calls [Bytes] [for Run]: The largest data volume, in bytes, sent as a call to the web service for all web service calls for the entire run
- Minimum sent bytes [Bytes] [for Interval]: The smallest data volume, in bytes, sent as a call to the web service for a specific web service call during the most recent sample interval
- Minimum sent bytes [Bytes] [for Run]: The smallest data volume, in bytes, sent as a call to the web service for a specific web service call for the entire run
- Minimum sent bytes for all calls [Bytes] [for Interval]: The smallest data volume, in bytes, sent as a call to the web service for all web service calls during the most recent sample interval
- Minimum sent bytes for all calls [Bytes] [for Run]: The smallest data volume, in bytes, sent as a call to the web service for all web service calls for the entire run
- Rate sent bytes [for Interval]: The data volume throughput, in bytes per second, sent as a call to the web service for a specific web service call during the most recent sample interval
- Rate sent bytes [for Run]: The data volume throughput, in bytes per second, sent as a call to the web service for a specific web service call for the entire run
- Sent bytes standard deviation [Bytes] [for Interval]: The standard deviation of the data volume, in bytes, sent as a call to the web service for a specific web service call within the most recent sample interval
- Sent bytes standard deviation [Bytes] [for Run]: The standard deviation of the data volume, in bytes, sent as a call to the web service for a specific web service call for the entire run
- Total sent bytes [Bytes] [for Interval]: The total data volume, in bytes, sent as a call to the web service for a specific web service call during the most recent sample interval
- Total sent bytes [Bytes] [for Run]: The total data volume, in bytes, sent as a call to the web service for a specific web service call for the entire run
Web Service Calls > Response Time > All Response Times
- Average [for Interval]: The average time, in milliseconds, required to receive a response from the web service for all web service calls within the most recent sample interval
- Average [for Run]: The average time, in milliseconds, required to receive a response from the web service for all web service calls for the entire run
- Standard deviation [for Interval]: The standard deviation of the time, in milliseconds, required to receive a response from the web service for all web service calls within the most recent sample interval
- Standard deviation [for Run]: The standard deviation of the time, in milliseconds, required to receive a response from the web service for all web service calls for the entire run
- Maximum [for Interval]: The longest time, in milliseconds, required to receive a response from the web service for all web service calls within the most recent sample interval
- Maximum [for Run]: The longest time, in milliseconds, required to receive a response from the web service for all web service calls for the entire run
- Minimum [for Interval]: The shortest time, in milliseconds, required to receive a response from the web service for all web service calls within the most recent sample interval
- Minimum [for Run]: The shortest time, in milliseconds, required to receive a response from the web service for all web service calls for the entire run

Web Service Calls > Response Time
- Average response time [ms] [for Interval]: The average time, in milliseconds, required to receive a response from the web service for a specific web service call within the most recent sample interval
- Average response time [ms] [for Run]: The average time, in milliseconds, required to receive a response from the web service for a specific web service call for the entire run
- Response time standard deviation [ms] [for Interval]: The standard deviation of the time, in milliseconds, required to receive a response from the web service for a specific web service call during the most recent sample interval
- Response time standard deviation [ms] [for Run]: The standard deviation of the time, in milliseconds, required to receive a response from the web service for a specific web service call for the entire run
- Maximum response time [ms] [for Interval]: The longest time, in milliseconds, required to receive a response from the web service for a specific web service call within the most recent sample interval
- Maximum response time [ms] [for Run]: The longest time, in milliseconds, required to receive a response from the web service for a specific web service call for the entire run
- Minimum response time [ms] [for Interval]: The shortest time, in milliseconds, required to receive a response from the web service for a specific web service call within the most recent sample interval
- Minimum response time [ms] [for Run]: The shortest time, in milliseconds, required to receive a response from the web service for a specific web service call for the entire run
Web Service Calls > Web Service Call Starts > All Call Starts
- Count [for Interval]: The number of times web service calls have been invoked during the most recent sample interval
- Count [for Run]: The number of times web service calls have been invoked during the entire run
- Maximum [for Interval]: The maximum number of times web service calls have been invoked within an interval
- Maximum [for Run]: The maximum number of times web service calls have been invoked during the entire run
- Rate [per second] [for Interval]: The number of times per second that web service calls have been invoked during the most recent sample interval
- Rate [per second] [for Run]: The number of times per second that web service calls have been invoked for the entire run

Web Service Calls > Web Service Call Starts
- Maximum Rate Web Service Call Started [for Run]: The maximum number of times per second that a specific web service call has been invoked for the entire run
- Maximum Web Service Call Started [for Interval]: The maximum number of times a specific web service call has been invoked during the most recent sample interval
- Maximum Web Service Call Started [for Run]: The maximum number of times a specific web service call has been invoked for the entire run
- Minimum Rate Web Service Call Started [for Run]: The minimum number of times per second that a specific web service call has been invoked for the entire run
- Minimum Web Service Call Started [for Interval]: The minimum number of times a specific web service call has been invoked during the most recent sample interval
- Minimum Web Service Call Started [for Run]: The minimum number of times a specific web service call has been invoked for the entire run
- Rate Web Service Call Started [for Interval]: The number of times per second that a specific web service call has been invoked during the most recent sample interval
- Rate Web Service Call Started [for Run]: The number of times per second that a specific web service call has been invoked for the entire run
- Total Web Service Call Started [for Interval]: The number of times that a specific web service call has been invoked during the most recent sample interval
- Total Web Service Call Started [for Run]: The number of times that a specific web service call has been invoked for the entire run
Web Service Calls > Web Service Call Successes > All Call Successes
- Count [for Interval]: The number of times web service calls have returned a valid response during the most recent sample interval
- Count [for Run]: The number of times web service calls have returned a valid response during the entire run
- Maximum [for Interval]: The maximum number of times web service calls have returned a valid response within an interval
- Maximum [for Run]: The maximum number of times web service calls have returned a valid response during the entire run
- Rate [per second] [for Interval]: The number of times per second that web service calls have returned a valid response during the most recent sample interval
- Rate [per second] [for Run]: The number of times per second that web service calls have returned a valid response for the entire run

Web Service Calls > Web Service Call Successes
- Maximum Rate Web Service Call Success [for Run]: The maximum number of times per second that a specific web service call has returned a valid response for the entire run
- Maximum Web Service Call Success [for Interval]: The maximum number of times a specific web service call has returned a valid response during the most recent sample interval
- Maximum Web Service Call Success [for Run]: The maximum number of times a specific web service call has returned a valid response for the entire run
- Minimum Rate Web Service Call Success [for Run]: The minimum number of times per second that a specific web service call has returned a valid response for the entire run
- Minimum Web Service Call Success [for Interval]: The minimum number of times a specific web service call has returned a valid response during the most recent sample interval
- Minimum Web Service Call Success [for Run]: The minimum number of times a specific web service call has returned a valid response for the entire run
- Rate Web Service Call Success [for Interval]: The number of times per second that a specific web service call has returned a valid response during the most recent sample interval
- Rate Web Service Call Success [for Run]: The number of times per second that a specific web service call has returned a valid response for the entire run
- Total Web Service Call Success [for Interval]: The number of times that a specific web service call has returned a valid response during the most recent sample interval
- Total Web Service Call Success [for Run]: The number of times that a specific web service call has returned a valid response for the entire run
Web Service Calls > Web Service Call Failures > All Call Failures
- Count [for Interval]: The number of times web service calls have failed within the most recent sample interval
- Count [for Run]: The number of times web service calls have failed during the entire run
- Maximum [for Interval]: The maximum number of times web service calls have failed within an interval
- Maximum [for Run]: The maximum number of times web service calls have failed during the entire run
- Rate [per second] [for Interval]: The number of times per second that web service calls have failed during the most recent sample interval
- Rate [per second] [for Run]: The number of times per second that web service calls have failed for the entire run

Web Service Calls > Web Service Call Failures
- Maximum Rate Web Service Call Failures [for Run]: The maximum number of times per second that a specific web service call has failed for the entire run
- Maximum Web Service Call Failures [for Interval]: The maximum number of times a specific web service call has failed during the most recent sample interval
- Maximum Web Service Call Failures [for Run]: The maximum number of times a specific web service call has failed for the entire run
- Minimum Rate Web Service Call Failures [for Run]: The minimum number of times per second that a specific web service call has failed for the entire run
- Minimum Web Service Call Failures [for Interval]: The minimum number of times a specific web service call has failed during the most recent sample interval
- Minimum Web Service Call Failures [for Run]: The minimum number of times a specific web service call has failed for the entire run
- Rate Web Service Call Failures [for Interval]: The number of times per second that a specific web service call has failed during the most recent sample interval
- Rate Web Service Call Failures [for Run]: The number of times per second that a specific web service call has failed for the entire run
- Total Web Service Call Failures [for Interval]: The number of times that a specific web service call has failed during the most recent sample interval
- Total Web Service Call Failures [for Run]: The number of times that a specific web service call has failed for the entire run
Web Service Calls > Web Service Call Timeouts > All Call Timeouts
- Count [for Interval]: The number of times web service calls produced a timeout during the most recent sample interval
- Count [for Run]: The number of times web service calls produced a timeout during the entire run
- Maximum [for Interval]: The maximum number of times web service calls produced a timeout within an interval
- Maximum [for Run]: The maximum number of times web service calls produced a timeout during the entire run
- Rate [per second] [for Interval]: The number of times per second that web service calls produced a timeout during the most recent sample interval
- Rate [per second] [for Run]: The number of times per second that web service calls produced a timeout for the entire run

Web Service Calls > Web Service Call Timeouts
- Maximum Rate Web Service Call Timeouts [for Run]: The maximum number of times per second that a specific web service call produces a timeout for the entire run
- Maximum Web Service Call Timeouts [for Interval]: The maximum number of times a specific web service call produces a timeout during the most recent sample interval
- Maximum Web Service Call Timeouts [for Run]: The maximum number of times a specific web service call produces a timeout for the entire run
- Minimum Rate Web Service Call Timeouts [for Run]: The minimum number of times per second that a specific web service call produces a timeout for the entire run
- Minimum Web Service Call Timeouts [for Interval]: The minimum number of times a specific web service call produces a timeout during the most recent sample interval
- Minimum Web Service Call Timeouts [for Run]: The minimum number of times a specific web service call produces a timeout for the entire run
- Rate Web Service Call Timeouts [for Interval]: The number of times per second that a specific web service call produces a timeout during the most recent sample interval
- Rate Web Service Call Timeouts [for Run]: The number of times per second that a specific web service call produces a timeout for the entire run
- Total Web Service Call Timeouts [for Interval]: The number of times that a specific web service call produces a timeout during the most recent sample interval
- Total Web Service Call Timeouts [for Run]: The number of times that a specific web service call produces a timeout for the entire run
5.14.2. Service verification point counters
Service verification point counters provide information about verification points.
The verdict for a verification point can be Pass, Fail, Error, or Inconclusive; a minimal classification sketch follows the list below.
- Pass indicates that the verification point matched or received the expected response. For example, a response code verification point is set to Pass when the recorded response code is received during the run. If a test does not contain verification points, a Pass verdict means that the connection succeeded.
- Fail indicates that the verification point did not match the expected response or that the expected response was not received.
- Error indicates that the primary request was not successfully sent to the server, no response was received from the server, or the response was incomplete or could not be parsed.
- Inconclusive is returned only if you provide custom code that defines an Inconclusive verdict.
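As a rough summary of these rules, transport and parsing problems dominate the verdict, a custom-code Inconclusive comes next, and otherwise the verdict is decided by whether the response matches the expectation. The following Python sketch is illustrative only; Response and classify_verdict are hypothetical names, not part of the product API.

    from dataclasses import dataclass

    @dataclass
    class Response:
        parsed: bool  # a response was received and could be parsed
        body: str = ""

    def classify_verdict(sent_ok, response, expected, inconclusive=False):
        # Error: the request was not sent, no response arrived, or the
        # response was incomplete or could not be parsed.
        if not sent_ok or response is None or not response.parsed:
            return "Error"
        # Inconclusive is produced only when custom code defines that verdict.
        if inconclusive:
            return "Inconclusive"
        # Pass when the expected content is matched; otherwise Fail.
        return "Pass" if expected in response.body else "Fail"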
Some counters produce an aggregate value, where the values for several web service message returns are combined into a single value; others produce values for each web service message return.
Aggregate counters use the values for all the elements in a test to produce a single value for a report. This value is rolled up from all values that satisfy the counter. When you drag an aggregate counter onto a report, one value is displayed. Folders that contain aggregate counters have a clock superimposed on the folder icon. In the tables that follow, the aggregate counters are listed after the titles that contain this icon.
Individual counters produce values for each item that satisfies the counter, rather than an aggregate value. Folders that contain individual counters have an asterisk superimposed on the folder icon. In the tables that follow, the individual counters are listed after the titles that contain this icon.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule, as a schedule property.
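The interval and run counters in the tables that follow differ only in scope: an interval counter is computed from the observations in the most recent sample interval, while a run counter accumulates every observation since the run started. The sketch below is a minimal model under that assumption; CounterSeries is a hypothetical name, not product code.

    import math

    class CounterSeries:
        """Tracks one measurement (for example, response time) and derives
        [for Interval] and [for Run] statistics from raw observations."""

        def __init__(self):
            self.run = []       # every observation since the start of the run
            self.interval = []  # observations in the current sample interval

        def observe(self, value):
            self.run.append(value)
            self.interval.append(value)

        def close_interval(self):
            # Called at each sample-interval boundary set in the schedule;
            # interval statistics reset while run statistics keep growing.
            self.interval = []

        @staticmethod
        def stats(values):
            if not values:
                return None
            mean = sum(values) / len(values)
            variance = sum((v - mean) ** 2 for v in values) / len(values)
            return {
                "Count": len(values),
                "Average": mean,
                "Minimum": min(values),
                "Maximum": max(values),
                "Standard deviation": math.sqrt(variance),
            }

Under this model, stats(series.interval) yields the [for Interval] values and stats(series.run) the [for Run] values.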
Web Service Verification Points > All > All Returns
- Percent Pass: The percentage of all verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > All > All Returns > Error
- Count [for Interval]: The number of all verification points classified as Error for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of all verification points classified as Error for a specific web service message return for the entire run

Web Service Verification Points > All > All Returns > Fail
- Count [for Interval]: The number of all verification points that failed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of all verification points that failed for a specific web service message return for the entire run

Web Service Verification Points > All > All Returns > Inconclusive
- Count [for Interval]: The number of all verification points classified as Inconclusive for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of all verification points classified as Inconclusive for a specific web service message return for the entire run

Web Service Verification Points > All > All Returns > Pass
- Count [for Interval]: The number of all verification points that passed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of all verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > All Returns
- Percent All VPs Passed For Interval: The percentage of all verification points that passed during the most recent sample interval
- Percent All VPs Passed For Run: The percentage of all verification points that passed for the entire run
- Total All VPs Attempted [for Interval]: The number of all verification points executed within the most recent sample interval
- Total All VPs Attempted [for Run]: The number of all verification points executed for the entire run
- Total All VPs Error [for Interval]: The number of all verification points with a verdict of Error during the most recent sample interval
- Total All VPs Error [for Run]: The number of all verification points with a verdict of Error for the entire run
- Total All VPs Failed [for Interval]: The number of all verification points that failed within the most recent sample interval
- Total All VPs Failed [for Run]: The number of all verification points that failed for the entire run
- Total All VPs Inconclusive [for Interval]: The number of all verification points that were marked as Inconclusive during the most recent sample interval
- Total All VPs Inconclusive [for Run]: The number of all verification points that were marked as Inconclusive for the entire run
- Total All VPs Passed [for Interval]: The number of all verification points that passed within the most recent sample interval
- Total All VPs Passed [for Run]: The number of all verification points that passed for the entire run
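Assuming that the Percent counters are rolled up from the corresponding Total counters, the relationship is a plain ratio; the helper name below is hypothetical, for illustration only.

    def percent_vps_passed(total_passed, total_attempted):
        # Assumed rollup: Percent Passed = passed / attempted * 100.
        # Returns 0.0 when no verification points were attempted.
        return 100.0 * total_passed / total_attempted if total_attempted else 0.0

    # Example: 47 of 50 verification points passed -> 94.0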
Web Service Verification Points > Attachment > All Returns
- Percent Pass: The percentage of attachment verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > Attachment > All Returns > Error
- Count [for Interval]: The number of attachment verification points classified as Error for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of attachment verification points classified as Error for a specific web service message return for the entire run

Web Service Verification Points > Attachment > All Returns > Fail
- Count [for Interval]: The number of attachment verification points that failed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of attachment verification points that failed for a specific web service message return for the entire run

Web Service Verification Points > Attachment > All Returns > Inconclusive
- Count [for Interval]: The number of attachment verification points classified as Inconclusive for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of attachment verification points classified as Inconclusive for a specific web service message return for the entire run

Web Service Verification Points > Attachment > All Returns > Pass
- Count [for Interval]: The number of attachment verification points that passed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of attachment verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > All Returns
- Percent Attachment VPs Passed For Interval: The percentage of attachment verification points that passed during the most recent sample interval
- Percent Attachment VPs Passed For Run: The percentage of attachment verification points that passed for the entire run
- Total Attachment VPs Attempted [for Interval]: The number of attachment verification points executed during the most recent sample interval
- Total Attachment VPs Attempted [for Run]: The number of attachment verification points executed for the entire run
- Total Attachment VPs Error [for Interval]: The number of attachment verification points with a verdict of Error during the most recent sample interval
- Total Attachment VPs Error [for Run]: The number of attachment verification points with a verdict of Error for the entire run
- Total Attachment VPs Failed [for Interval]: The number of attachment verification points that failed during the most recent sample interval
- Total Attachment VPs Failed [for Run]: The number of attachment verification points that failed for the entire run
- Total Attachment VPs Inconclusive [for Interval]: The number of attachment verification points that were marked as Inconclusive during the most recent sample interval
- Total Attachment VPs Inconclusive [for Run]: The number of attachment verification points that were marked as Inconclusive for the entire run
- Total Attachment VPs Passed [for Interval]: The number of attachment verification points that passed during the most recent sample interval
- Total Attachment VPs Passed [for Run]: The number of attachment verification points that passed for the entire run
Web Service Verification Points > Contain > All Returns
- Percent Pass: The percentage of contain verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > Contain > All Returns > Error
- Count [for Interval]: The number of contain verification points classified as Error for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of contain verification points classified as Error for a specific web service message return for the entire run

Web Service Verification Points > Contain > All Returns > Fail
- Count [for Interval]: The number of contain verification points that failed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of contain verification points that failed for a specific web service message return for the entire run

Web Service Verification Points > Contain > All Returns > Inconclusive
- Count [for Interval]: The number of contain verification points classified as Inconclusive for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of contain verification points classified as Inconclusive for a specific web service message return for the entire run

Web Service Verification Points > Contain > All Returns > Pass
- Count [for Interval]: The number of contain verification points that passed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of contain verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > All Returns
- Percent Contain VPs Passed For Interval: The percentage of contain verification points that passed during the most recent sample interval
- Percent Contain VPs Passed For Run: The percentage of contain verification points that passed for the entire run
- Total Contain VPs Attempted [for Interval]: The number of contain verification points executed within the most recent sample interval
- Total Contain VPs Attempted [for Run]: The number of contain verification points executed for the entire run
- Total Contain VPs Error [for Interval]: The number of contain verification points with a verdict of Error during the most recent sample interval
- Total Contain VPs Error [for Run]: The number of contain verification points with a verdict of Error for the entire run
- Total Contain VPs Failed [for Interval]: The number of contain verification points that failed during the most recent sample interval
- Total Contain VPs Failed [for Run]: The number of contain verification points that failed for the entire run
- Total Contain VPs Inconclusive [for Interval]: The number of contain verification points that were marked as Inconclusive during the most recent sample interval
- Total Contain VPs Inconclusive [for Run]: The number of contain verification points that were marked as Inconclusive for the entire run
- Total Contain VPs Passed [for Interval]: The number of contain verification points that passed during the most recent sample interval
- Total Contain VPs Passed [for Run]: The number of contain verification points that passed for the entire run
Web Service Verification Points > Equal > All Returns
- Percent Pass: The percentage of equal verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > Equal > All Returns > Error
- Count [for Interval]: The number of equal verification points classified as Error for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of equal verification points classified as Error for a specific web service message return for the entire run

Web Service Verification Points > Equal > All Returns > Fail
- Count [for Interval]: The number of equal verification points that failed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of equal verification points that failed for a specific web service message return for the entire run

Web Service Verification Points > Equal > All Returns > Inconclusive
- Count [for Interval]: The number of equal verification points classified as Inconclusive for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of equal verification points classified as Inconclusive for a specific web service message return for the entire run

Web Service Verification Points > Equal > All Returns > Pass
- Count [for Interval]: The number of equal verification points that passed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of equal verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > All Returns
- Percent Equal VPs Passed For Interval: The percentage of equal verification points that passed during the most recent sample interval
- Percent Equal VPs Passed For Run: The percentage of equal verification points that passed for the entire run
- Total Equal VPs Attempted [for Interval]: The number of equal verification points executed within the most recent sample interval
- Total Equal VPs Attempted [for Run]: The number of equal verification points executed for the entire run
- Total Equal VPs Error [for Interval]: The number of equal verification points with a verdict of Error during the most recent sample interval
- Total Equal VPs Error [for Run]: The number of equal verification points with a verdict of Error for the entire run
- Total Equal VPs Failed [for Interval]: The number of equal verification points that failed during the most recent sample interval
- Total Equal VPs Failed [for Run]: The number of equal verification points that failed for the entire run
- Total Equal VPs Inconclusive [for Interval]: The number of equal verification points that were marked as Inconclusive during the most recent sample interval
- Total Equal VPs Inconclusive [for Run]: The number of equal verification points that were marked as Inconclusive for the entire run
- Total Equal VPs Passed [for Interval]: The number of equal verification points that passed during the most recent sample interval
- Total Equal VPs Passed [for Run]: The number of equal verification points that passed for the entire run
Web Service Verification Points > Query > All Returns
- Percent Pass: The percentage of query verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > Query > All Returns > Error
- Count [for Interval]: The number of query verification points classified as Error for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of query verification points classified as Error for a specific web service message return for the entire run

Web Service Verification Points > Query > All Returns > Fail
- Count [for Interval]: The number of query verification points that failed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of query verification points that failed for a specific web service message return for the entire run

Web Service Verification Points > Query > All Returns > Inconclusive
- Count [for Interval]: The number of query verification points classified as Inconclusive for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of query verification points classified as Inconclusive for a specific web service message return for the entire run

Web Service Verification Points > Query > All Returns > Pass
- Count [for Interval]: The number of query verification points that passed for a specific web service message return during the most recent sample interval
- Count [for Run]: The number of query verification points that passed for a specific web service message return for the entire run

Web Service Verification Points > All Returns
- Percent Query VPs Passed For Interval: The percentage of query verification points that passed during the most recent sample interval
- Percent Query VPs Passed For Run: The percentage of query verification points that passed for the entire run
- Total Query VPs Attempted [for Interval]: The number of query verification points executed within the most recent sample interval
- Total Query VPs Attempted [for Run]: The number of query verification points executed for the entire run
- Total Query VPs Error [for Interval]: The number of query verification points with a verdict of Error during the most recent sample interval
- Total Query VPs Error [for Run]: The number of query verification points with a verdict of Error for the entire run
- Total Query VPs Failed [for Interval]: The number of query verification points that failed during the most recent sample interval
- Total Query VPs Failed [for Run]: The number of query verification points that failed for the entire run
- Total Query VPs Inconclusive [for Interval]: The number of query verification points that were marked as Inconclusive during the most recent sample interval
- Total Query VPs Inconclusive [for Run]: The number of query verification points that were marked as Inconclusive for the entire run
- Total Query VPs Passed [for Interval]: The number of query verification points that passed during the most recent sample interval
- Total Query VPs Passed [for Run]: The number of query verification points that passed for the entire run
5.15. Socket counters
With socket generic counters, which are displayed in the Performance Test Runs view, you can customize your socket reports with dynamic information that is updated for each run.
5.15.1. Socket counters
These counters provide information about connections started, send and receive actions completed, and connection times. Some counters produce an aggregate value, where the values for several send and receive actions are represented in one value; others produce values for each action.
Aggregate counters use the values for all the connections in a test to produce a single value for a report. This value is calculated from all values that satisfy the counter. When you drag an aggregate counter onto a report, one value is displayed. Folders that contain aggregate counters have a clock superimposed on the folder icon, and the following tables that list aggregate counters have this icon in their title.
Individual counters produce values for each connection that satisfies the counter, rather than a single aggregate value. Folders that contain individual counters have an asterisk superimposed on the folder icon, and the following tables that list individual counters have this icon in their title.
Some counters pertain to intervals in the run. You set the Statistics sample interval value in the schedule as a schedule property.
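For the Rate counters in the tables that follow, a count is presumably divided by the elapsed time of the interval or run; a minimal sketch, with a hypothetical helper name:

    def rate_per_second(count, elapsed_seconds):
        # Rate [per second] over an interval or run of the given duration.
        return count / elapsed_seconds if elapsed_seconds else 0.0

    # Example: 150 established connections in a 15-second sample interval
    # give rate_per_second(150, 15.0) == 10.0 connects per second.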
Bytes Received counters
The counters in this section provide information about the number of bytes that were received.
Socket Counters > Bytes Received
- Count [for Interval]: The number of bytes received for a specific connection during the most recent sample interval
- Count [for Run]: The number of bytes received for a specific connection for the entire run
Bytes Sent counters
The counters in this section provide information about the number of bytes that were sent.
Socket Counters > Bytes Sent
- Count [for Interval]: The number of bytes sent for a specific connection within the most recent sample interval
- Count [for Run]: The number of bytes sent for a specific connection for the entire run
Connect Attempts counters
The counters in this section provide information about the number of attempts to establish a connection with the server.
Socket Counters > Connect Attempts
- Count [for Interval]: The number of attempts to establish a connection within the most recent sample interval
- Count [for Run]: The number of attempts to establish a connection for the entire run
Connect Times counters
The counters in this section provide information about the connection response time: the time, in milliseconds, that elapses between the moment the test client attempts to establish a connection with the server and the moment the connection is established. A minimal timing sketch follows the tables below.
Socket Counters > Connect Times
- Average [for Interval]: The average connection time for a specific connection during the most recent sample interval
- Average [for Run]: The average connection time for a specific connection for the entire run
- Maximum [for Interval]: The maximum connection time for a specific connection during the most recent sample interval
- Maximum [for Run]: The maximum connection time for a specific connection for the entire run
- Minimum [for Interval]: The minimum connection time for a specific connection during the most recent sample interval
- Minimum [for Run]: The minimum connection time for a specific connection for the entire run
- Standard Deviation [for Interval]: The standard deviation of connection times for a specific connection during the most recent sample interval
- Standard Deviation [for Run]: The standard deviation of connection times for a specific connection for the entire run
Socket Counters > Connect Times > Socket Connection Times
- Average [ms] [for Interval]: The average connection time for all connections within the most recent sample interval
- Average [ms] [for Run]: The average connection time for all connections for the entire run
- Maximum [ms] [for Interval]: The maximum connection time for all connections within the most recent sample interval
- Maximum [ms] [for Run]: The maximum connection time for all connections for the entire run
- Minimum [ms] [for Interval]: The minimum connection time for all connections within the most recent sample interval
- Minimum [ms] [for Run]: The minimum connection time for all connections for the entire run
- Standard Deviation [ms] [for Interval]: The standard deviation of connection times for all connections during the most recent sample interval
- Standard Deviation [ms] [for Run]: The standard deviation of connection times for all connections for the entire run
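To make the connect-time measurement concrete, the sketch below times a TCP connection with the Python standard library; timed_connect is a hypothetical helper for illustration, not how the test client is implemented.

    import socket
    import time

    def timed_connect(host, port, timeout=10.0):
        """Return (sock, connect_time_ms): the elapsed time between the
        connect attempt and the established connection, as defined above."""
        start = time.monotonic()
        sock = socket.create_connection((host, port), timeout=timeout)
        elapsed_ms = (time.monotonic() - start) * 1000.0
        return sock, elapsed_ms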
Connects counters
The counters in this section provide information about the number of connections that were established.
Socket Counters > Connects
- Count [for Interval]: The number of established connections for a specific connection during the most recent sample interval
- Count [for Run]: The number of established connections for a specific connection for the entire run
Receive Attempts counters
The counters in this section provide information about the number of attempts to receive data from the connection.
Socket Counters > Receive Attempts
- Count [for Interval]: The number of receive attempts for a specific connection during the most recent sample interval
- Count [for Run]: The number of receive attempts for a specific connection for the entire run
Receives counters
The counters in this section provide information about the number of receive actions that were successfully completed.
Socket Counters > Receives
- Count [for Interval]: The number of completed receives for a specific connection during the most recent sample interval
- Count [for Run]: The number of completed receives for a specific connection for the entire run
Send Attempts counters
The counters in this section provide information about the number of attempts to send data over the connection.
Socket Counters > Send Attempts
- Count [for Interval]: The number of send attempts for a specific connection during the most recent sample interval
- Count [for Run]: The number of send attempts for a specific connection for the entire run
Sends counters
The counters in this section provide information about the number of send actions that were successfully completed.
Socket Counters > Sends
- Count [for Interval]: The number of completed sends for a specific connection during the most recent sample interval
- Count [for Run]: The number of completed sends for a specific connection for the entire run