Marnie L. Hutcheson entered the software testing industry at Prodigy Services Company (PSC) in 1987. Her early testing experience was in automated software testing for engineering and finance/banking applications. Through the mid-1990s her career consisted primarily of projects relating to distance/computer learning and updating legacy software for internet capability. Since 2000, she has provided consulting services on a variety of projects, including Microsoft .NET developer training, Microsoft Network Solution Providers, and other agile efforts. She continues to run her own business, Ideva, which provides planning, design, and documentation services for web applications, GUIs, and SharePoint-related content. At present, the Ideva webpage, http://www.ideva.com/, lists her as a trainer for the International Institute for Software Testing. Additional information about her can be found in her Ideva bio, http://www.ideva.com/Marnies_Resume.htm.
This book is an overview of the management of the software testing process. The author describes many of her experiences in real-world environments, typically large-scale integrations involving multiple independent software systems, environments, and development staffs. Furthermore, many of the cited test efforts involved outside testing consultants brought into a time-strapped product release schedule. This outsider approach imposed several necessities that nonetheless apply well to the testing process regardless of the testing group's origins. The test process is broken down into several key components, described below.
The test contract makes the assumptions and expectations between upper-level management and the testing group explicit before testing begins. This document helps the testing group secure management's backing in the case of a product release failure, and it aids communication between the testing group and development. Once the contract is formalized, estimation of the overall size, risk, and time requirements can begin.
The test inventory is the primary tool of test size estimation. A preliminary inventory consists of bug fixes, new functionality (if the project is a first-time implementation, then everything is new functionality), structural/design/integration items (below the GUI), application base functions (unit tests), and an environment catalog (hardware/software). The preliminary inventory is then fleshed out through a series of interviews with relevant management and development staff. The purpose of the management, or high-level, interviews is to acquire information about high-level requirements such as owners, deliverables, dependencies, accessible resources, and priorities. This gives the tester a macro-view of the conditions to be tested. The development, or mid-level, interviews serve to fill in the gaps of the macro-view, including logic flows, data requirements, prior testing efforts, system integrations, and the types of testing required (functional/load). The book cites several examples of how to properly create and automate the test inventory process.
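To make the inventory concrete, here is a minimal sketch of how such an inventory might be represented in code. The categories mirror those listed above, but the item names and fields are illustrative assumptions rather than examples from the book.

```python
from dataclasses import dataclass

@dataclass
class InventoryItem:
    """One line item in the test inventory."""
    name: str
    category: str          # one of the inventory categories above
    owner: str = "TBD"     # filled in from the high-level interviews
    priority: int = 0      # filled in later, during risk analysis
    notes: str = ""        # logic flows, data needs, prior test efforts

def preliminary_inventory() -> list[InventoryItem]:
    """Seed the inventory with one hypothetical item per category."""
    return [
        InventoryItem("login failure bug", "bug fix"),
        InventoryItem("export-to-CSV feature", "new functionality"),
        InventoryItem("order/payment interface", "structural/design/integration"),
        InventoryItem("tax calculation routine", "application base function"),
        InventoryItem("Windows workstation configuration", "environment catalog"),
    ]
```

The interview rounds then populate the owner, priority, and notes fields, turning the skeleton into a sizing tool.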
Risk analysis is used for prioritization, for demonstrating the value of the test effort, for estimating the cost of failure, and for determining how much to test each item. Prioritization is done on a scale whose highest rank contains clear violations of the service level agreement and whose lowest rank contains trivial errors that violate no formal requirement. Ranking the items in the test inventory aids in identifying the Most Important Tests; the highest-ranked items are also those whose failure would be most costly. The cost of failure is estimated by comparing the cost of support against the cost of a successful test effort. Finally, coverage, or how much to test each item, can be determined: the cost-of-failure estimate directs time and attention toward tests with potentially critical failures. In other words, a test covering potential high-cost failures is tested to 100% coverage, while a slightly lower-ranked test might be tested to 50-60% coverage. In this manner, a clear time and resource requirement can be established and presented to upper management.
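The arithmetic behind this reasoning is simple enough to sketch. The rank scale, coverage percentages, and dollar figures below are illustrative assumptions, not figures from the book.

```python
# Map risk rank to a coverage target, per the scheme described above.
COVERAGE_BY_RANK = {
    1: 1.00,   # clear service-level-agreement violations: test 100%
    2: 0.60,   # important but not contractual: 50-60% coverage
    3: 0.25,   # trivial errors: test opportunistically
}

def coverage_target(rank: int) -> float:
    return COVERAGE_BY_RANK.get(rank, 0.10)

def worth_testing(cost_of_failure: float, cost_of_testing: float) -> bool:
    """Crude form of the 'cost of support versus cost of a successful
    test effort' comparison: testing pays when the expected failure
    cost exceeds the cost of the test effort."""
    return cost_of_failure > cost_of_testing

# Hypothetical example: a rank-1 billing defect expected to generate
# $50,000 in support costs justifies a $10,000 test effort at 100% coverage.
assert worth_testing(50_000, 10_000) and coverage_target(1) == 1.0
```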
The time requirement is supported by four key fundamentals: hot spots, path analysis, data analysis, and system environments. Hot spots are troublesome pieces of code or prior sources of critical bugs. Path analysis maps a piece of software's logic flow; in particular, it provides strategies for determining the maximum number of unique paths a user could take through the system under test. Data analysis is the examination of the data required as input to the system. Its important components are boundary input, elimination of redundant input, input restricted or validated only by a human-facing GUI (e.g., a web browser), input that drives logical branches in a typical user path, and reduction of the test suite based on boundary data analysis; a sketch of these two analyses follows. The final component of the time requirement is the set of system environments. The software under test will be required to operate on multiple configurations of hardware, software, and other factors, and the path and data analyses may not hold under a new environment. It is therefore important to automate tests so that they are easily portable to new environments; indeed, each step of the test management process should be automated to support regression testing as the process moves forward.
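As a concrete illustration of path analysis and boundary data analysis, here is a minimal sketch; the flow graph and the input range are hypothetical, not taken from the book.

```python
# Path analysis: count the maximum number of unique paths a user could
# take through a small, acyclic flow graph (node -> successor nodes).
# The graph below models a hypothetical web workflow.
FLOW = {
    "login":    ["browse"],
    "browse":   ["search", "cart"],
    "search":   ["cart", "logout"],
    "cart":     ["checkout", "logout"],
    "checkout": ["logout"],
    "logout":   [],
}

def count_paths(node: str) -> int:
    successors = FLOW[node]
    if not successors:          # terminal node: one complete path
        return 1
    return sum(count_paths(nxt) for nxt in successors)

# Boundary data analysis: for an input field with an inclusive valid
# range, test just outside, on, and just inside each boundary instead
# of enumerating every value, which shrinks the test suite.
def boundary_values(lo: int, hi: int) -> list[int]:
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

print(count_paths("login"))      # 5 unique user paths in this example
print(boundary_values(18, 65))   # e.g. an age field the GUI limits to 18-65
```

Because the flow graph and valid ranges can differ across environments, generating such cases programmatically is part of what keeps the tests portable when a new system environment invalidates earlier assumptions.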
Overall, this book presents a high-level view of product testing. It is written for those managing a testing process from outside the original development team. In particular, the testing strategies lend themselves to large projects in corporate engineering or financial settings; they may not suit Extreme Programming or test efforts that require extensive or redundant testing. Cost, time, and synergy are recurring themes throughout. The basics are covered in some detail: that anything near 100% coverage is almost always impossible, and how to determine logical pathways, boundary data, common data, uncommon data, and exception data. Mitigating the risks exposed by these basics is the book's central theme. A wide variety of lessons-learned examples throughout the text provide interesting real-world insights.
The aspects of this book that are personally useful relate to my current work in load testing for the LoadStorm project at CustomerCentrix. Developing tools to automate the testing process cuts down the tester's overall test time and allows testing to scale to larger infrastructures, newer versions, and varied hardware. Furthermore, debugging strategies are a continually developing concern for computer science students like me.