Performance Testing Insights: Part I

Performance testing can be viewed as the systematic process of collecting and monitoring the results of system usage, then analyzing them to guide system improvement towards the desired results. As part of the performance testing process, the tester needs to gather statistical information, examine server logs and system state histories, determine the system’s performance under natural and artificial conditions, and alter the system’s modes of operation.
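To make that collect-and-analyze loop concrete, here is a minimal sketch in Python that issues concurrent requests against a hypothetical endpoint and summarizes the observed response times. The URL, request count, and concurrency level are illustrative assumptions, not details from any particular system.

```python
# Minimal sketch: gather response-time statistics under concurrent load.
# TARGET_URL, REQUESTS, and CONCURRENCY are assumptions for illustration.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint
REQUESTS = 100
CONCURRENCY = 10

def timed_request(_: int) -> float:
    """Issue one request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_request, range(REQUESTS)))

print(f"median  : {statistics.median(latencies):.3f} s")
print(f"95th pct: {latencies[int(0.95 * (len(latencies) - 1))]:.3f} s")
print(f"max     : {max(latencies):.3f} s")
```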

Performance testing complements functional testing. Functional testing can validate proper functionality under correct usage and proper error handling under incorrect usage. It cannot, however, tell how much load an application can handle before it breaks or performs improperly. Finding the breaking points and performance bottlenecks, as well as identifying functional errors that only occur under stress, requires performance testing.

The purpose of performance testing is to demonstrate that:

1. The application processes the required business process and transaction volumes within the specified response times against a real-time production database (Speed).

2. The application can handle various user load scenarios (stresses), ranging from a sudden load “spike” to a persistent load “soak” (Scalability); a load-profile sketch illustrating these two scenarios follows this list.

3. The application is consistent in availability and functional integrity (Stability).

4. The minimum configuration that will allow the system to meet the formally stated performance expectations of stakeholders can be determined.
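As a rough illustration of the speed and scalability goals above, the sketch below builds two simple load profiles, a short “spike” burst and a long, steady “soak”, and checks measured latencies against an assumed response-time target. The transaction being exercised (run_request) and the 2-second goal are placeholders, not values from the article.

```python
# Hedged sketch of the "speed" and "scalability" goals: two illustrative
# load profiles plus a check of measured latencies against an assumed target.
import random
import time

RESPONSE_TIME_GOAL_S = 2.0  # assumed target; real goals come from stakeholders

def spike_profile(users: int, burst_seconds: float) -> list[float]:
    """Arrival times packed into a short burst (a load 'spike')."""
    return sorted(random.uniform(0, burst_seconds) for _ in range(users))

def soak_profile(users: int, duration_seconds: float) -> list[float]:
    """Arrival times spread evenly over a long window (a load 'soak')."""
    return [i * duration_seconds / users for i in range(users)]

def run_request() -> float:
    """Placeholder transaction; replace with a real call and return its latency."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.05, 0.2))  # simulated work
    return time.perf_counter() - start

def speed_goal_met(latencies: list[float]) -> bool:
    """Speed goal: 95th-percentile latency within the stated response time."""
    ordered = sorted(latencies)
    return ordered[int(0.95 * (len(ordered) - 1))] <= RESPONSE_TIME_GOAL_S

# Example: a 50-user spike over 5 seconds versus a 50-user soak over 10 minutes.
print(spike_profile(50, 5.0)[:3], "...")
print(soak_profile(50, 600.0)[:3], "...")
print("speed goal met:", speed_goal_met([run_request() for _ in range(20)]))
```

In a real test, arrival schedules like these would drive a proper load generator rather than a bare loop, but the pass/fail criteria take the same form.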

When should I start testing and when should I stop?

When to Start Performance Testing:

A common practice is to start performance testing only after functional, integration, and system testing are complete; that way, the target application is considered “sufficiently sound and stable” to ensure valid performance test results. The problem with this approach is that it delays performance testing until late in the development lifecycle. If the tests then uncover performance-related problems, one has to resolve issues with potentially serious design implications at a time when the corrections might invalidate earlier test results. In addition, the changes might destabilize the code just when one wants to freeze it, prior to beta testing or the final release.
A better approach is to begin performance testing as early as possible, as soon as any of the application components can support the tests. This enables testers to establish early benchmarks against which performance can be measured as the components are developed.
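For example, once a single component is testable, a micro-benchmark can establish that early baseline. In the hedged sketch below, parse_order stands in for whichever component is ready; the payload and iteration counts are arbitrary.

```python
# Early component benchmark, long before the full system exists.
# parse_order is a hypothetical stand-in for a real application component.
import json
import timeit

def parse_order(payload: str) -> dict:
    """Stand-in component: parse an order message."""
    return json.loads(payload)

SAMPLE = json.dumps({"id": 42, "items": ["widget"] * 10, "total": 99.5})

# Record a baseline now; re-run after each change to spot regressions early.
runs = 5
per_call = min(timeit.repeat(lambda: parse_order(SAMPLE), number=10_000, repeat=runs)) / 10_000
print(f"best of {runs} runs: {per_call * 1e6:.1f} µs per call")
```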

When to Stop Performance Testing:

The conventional approach is to stop testing once all planned tests are executed and there is a consistent, reliable pattern of performance improvement. This approach gives accurate performance information at that point in time, but one can quickly fall behind by standing still: the environment in which clients run the application is always changing, so it is a good idea to run ongoing performance tests. An alternative is to set up a continual performance test and periodically examine the results. One can also “overload” these tests by making use of real-world conditions. Regardless of how well a test is designed, one will never be able to reproduce all the conditions the application will have to contend with in the real-world environment.
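One way to set up such a continual test is sketched below: a small probe that runs on a schedule, appends its result to a history file, and flags runs that drift well past the recorded baseline. The probe, the file path, and the 20% drift threshold are illustrative assumptions.

```python
# Hedged sketch of a continual performance check with a drift alert.
# HISTORY, probe(), and DRIFT_LIMIT are assumptions for illustration.
import json
import pathlib
import time

HISTORY = pathlib.Path("perf_history.jsonl")
DRIFT_LIMIT = 1.20  # alert when a run is 20% slower than the best recorded run

def probe() -> float:
    """Placeholder probe; replace with a real end-to-end transaction."""
    start = time.perf_counter()
    sum(i * i for i in range(200_000))  # simulated work
    return time.perf_counter() - start

def record_and_check() -> None:
    latency = probe()
    history = []
    if HISTORY.exists():
        history = [json.loads(line)["latency"] for line in HISTORY.read_text().splitlines()]
    with HISTORY.open("a") as fh:
        fh.write(json.dumps({"ts": time.time(), "latency": latency}) + "\n")
    if history and latency > DRIFT_LIMIT * min(history):
        print(f"ALERT: {latency:.3f}s exceeds baseline {min(history):.3f}s by >20%")
    else:
        print(f"OK: {latency:.3f}s")

if __name__ == "__main__":
    record_and_check()  # schedule this (cron, CI job, etc.) to run periodically
```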
