Low Intensity Performance Testing

In performance testing, the user load is gradually increased from low intensity to high intensity. Testing the performance of the application with very few users is referred to as Low Intensity Performance Testing. Typically, you require dedicated performance test assets to do performance testing. However, in IBM® Rational® Functional Tester, you can do both functional testing and low intensity performance testing with the same functional test assets.

Doing low intensity performance testing helps you find performance bottlenecks early in development, reducing the cost and effort that would be required if the performance issue were discovered in the later phases of the release.

As part of low intensity performance testing, you can measure various response times, such as the Net End-to-End time, Net Server time, On App time, and Off App time, for transactions and for each UI action. The UI Test Statistical report displays all of these response times. A sketch that relates these measurements follows the list.
  • Net End-to-End time: The measured time of the interactions with the server and the client (a browser or a device), including the time spent on the network. Typically, it does not include think time or processing time in the workbench.
  • Net Server time: The sum of the time of the interactions with the server and the network. It does not include client (browser or device) time, think time, or processing time in the workbench.
  • On App time: The measured time of the interactions on the client (browser or device) itself.
  • Off App time: The sum of the time of the interactions with the server and the network; the time spent on the client is not included.
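
As a rough illustration of how these measurements relate, the following sketch uses plain Java with hypothetical values; it does not use any Rational Functional Tester API. It treats Net End-to-End time as the sum of On App time and Off App time, with Off App time covering the server and network time.

```java
/**
 * Minimal sketch of how the response-time measurements relate.
 * The class, field names, and values are hypothetical and for
 * illustration only; the real values come from the UI Test
 * Statistical report.
 */
public class ResponseTimeSketch {

    public static void main(String[] args) {
        // Hypothetical measurements for one UI action, in milliseconds.
        long onAppMillis = 120;    // time spent on the client (browser or device)
        long serverMillis = 430;   // time spent on the server
        long networkMillis = 60;   // time spent on the network

        // Off App time: server time plus network time; client time is excluded.
        long offAppMillis = serverMillis + networkMillis;

        // Net End-to-End time: client, server, and network time together,
        // excluding think time and workbench processing time.
        long netEndToEndMillis = onAppMillis + offAppMillis;

        System.out.println("On App time    : " + onAppMillis + " ms");
        System.out.println("Off App time   : " + offAppMillis + " ms");
        System.out.println("Net End-to-End : " + netEndToEndMillis + " ms");
    }
}
```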
A transaction is a logical grouping of UI actions. For example, creating a user is a transaction, and filling in each user field is a UI action within that transaction. For an end user, the time taken to complete a business task (a transaction) is more meaningful than the time taken for each UI action. Over time, certain business tasks gain higher priority for performance measurement and improvement. The Transaction report displays information about all of the transactions in a test. A sketch of how a transaction aggregates its UI actions is shown below.
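
The following sketch shows, in plain Java, the idea of a transaction as a grouping of UI actions whose times roll up into one response time. The class and method names are hypothetical and do not represent the Rational Functional Tester API.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical illustration of a transaction as a logical grouping of UI actions.
 * Each UI action has its own response time; the transaction time is their total.
 */
public class TransactionSketch {

    /** One UI action, for example filling in a single user field. */
    record UiAction(String name, long responseTimeMillis) { }

    /** A business task, for example "Create user", made up of several UI actions. */
    static class Transaction {
        private final String name;
        private final List<UiAction> actions = new ArrayList<>();

        Transaction(String name) {
            this.name = name;
        }

        void add(UiAction action) {
            actions.add(action);
        }

        /** The transaction response time is the sum of its UI action times. */
        long responseTimeMillis() {
            return actions.stream().mapToLong(UiAction::responseTimeMillis).sum();
        }
    }

    public static void main(String[] args) {
        Transaction createUser = new Transaction("Create user");
        createUser.add(new UiAction("Enter first name", 150));
        createUser.add(new UiAction("Enter last name", 140));
        createUser.add(new UiAction("Click Save", 900));

        System.out.println(createUser.name + " took "
                + createUser.responseTimeMillis() + " ms");
    }
}
```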

Using a transaction gives you data about the response time of that transaction. However, it is more useful to know whether that response time is at the expected level. The end users typically define the expected response time for each business task in a service level agreement. You can define those performance requirements in the tool and check whether the measured response times adhere to the agreement. The Performance Requirement report displays whether the transactions meet the performance criteria. A sketch of such a check follows.
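
As a rough sketch of what a performance requirement check does, the following plain Java example compares a measured transaction time against the response time agreed in the service level agreement. The names and thresholds are hypothetical, and this is not the Rational Functional Tester API.

```java
/**
 * Hypothetical illustration of a performance requirement check:
 * a measured transaction response time is compared against the
 * response time defined in the service level agreement.
 */
public class PerformanceRequirementSketch {

    /** A requirement: the transaction must complete within the agreed time. */
    record Requirement(String transactionName, long maxResponseTimeMillis) { }

    static boolean meetsRequirement(Requirement requirement, long measuredMillis) {
        return measuredMillis <= requirement.maxResponseTimeMillis();
    }

    public static void main(String[] args) {
        // Hypothetical SLA: "Create user" must complete within 2 seconds.
        Requirement createUser = new Requirement("Create user", 2000);

        long measuredMillis = 1190;  // measured transaction response time

        System.out.println(createUser.transactionName() + " requirement "
                + (meetsRequirement(createUser, measuredMillis) ? "passed" : "failed")
                + " (" + measuredMillis + " ms of "
                + createUser.maxResponseTimeMillis() + " ms allowed)");
    }
}
```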