Test planning and strategy

Test planning is a process that allows all stakeholders to proactively decide what the important issues are in software testing and how to best deal with them in terms of strategy, resource utilization, responsibilities, risks, and priorities.

Ideally test planning should start during the requirements analysis phase and then proceed concurrently with software development. This will help focus on the individual levels of testing required, and will support and enhance the actual development process. Planning for a specific level of testing requires a clear understanding of the unique considerations associated with that level, e.g. scope, constraints, test environment (hardware, software, interfaces), and test data.

A strategic approach to testing

A strategy for software testing integrates test design techniques into a planned series of steps and provides a roadmap that describes the steps to be conducted as part of testing, when these steps are executed, and how much effort, time, and resources will be required.

Risk analysis should be applied during the testing process to determine the level of detail and the amount of time to be dedicated to testing: software with a high risk assessment should be tested more heavily. Estimating the tasks that comprise testing is critical, because the schedule must prioritize the work according to the risk assessment, achieve the desired levels of testing, and still fit within the project timescale. The collection and interpretation of metrics provides a crucial feedback loop for the planning process; test metrics include measures that supply information for evaluating the effectiveness of the testing techniques and the testing process.
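As a minimal sketch of how such a risk assessment might feed the plan, the following Java fragment ranks features by risk exposure (likelihood multiplied by impact) so that test effort and schedule can be weighted toward the riskiest areas. The feature names and scores are illustrative assumptions, not prescribed values.

    import java.util.Comparator;
    import java.util.List;

    // Illustrative sketch: rank features by risk exposure so that test planning
    // effort can be weighted toward the riskiest areas of the software.
    public class RiskPrioritization {

        // A feature with an estimated failure likelihood (0-1) and business
        // impact (1-10); both values are stakeholder assumptions, not measurements.
        record Feature(String name, double likelihood, double impact) {
            double exposure() { return likelihood * impact; }
        }

        public static void main(String[] args) {
            List<Feature> features = List.of(
                    new Feature("payment processing", 0.4, 10),
                    new Feature("report export",      0.2, 4),
                    new Feature("user preferences",   0.1, 2));

            // Highest exposure first: these features receive the most detailed
            // test cases and the largest share of the schedule.
            features.stream()
                    .sorted(Comparator.comparingDouble(Feature::exposure).reversed())
                    .forEach(f -> System.out.printf("%-20s exposure=%.1f%n",
                            f.name(), f.exposure()));
        }
    }

In practice the likelihood and impact values come from the stakeholders' own assessment during planning, and the ranking is revisited as metrics are collected.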

It is important that a testing strategy be flexible enough to promote the creativity and customization that are necessary to adequately test software systems, without preventing reasonable planning and tracking as the project progresses. It must provide guidance and milestones; progress must be measurable, and problems must be recognized as early as possible.

A tiered strategy for testing software

A strategy for testing should accommodate low-level tests to verify that software has been correctly implemented, and high-level tests that validate system functionality against requirements.

Initially, tests focus on individual software components, ensuring that each functions properly as a unit. Unit testing uses white-box techniques to exercise specific paths through the software. Automated unit test frameworks, e.g. NUnit and HttpUnit, can be extremely effective tools that help locate defects as the software is developed.
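As an illustration only, the sketch below uses JUnit, a Java framework comparable to those named above, to exercise each path through a small component; the DiscountCalculator and its rules are assumptions made for the example.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Minimal white-box unit test sketch for a hypothetical discount calculator.
    // Each test exercises a different path through the conditional logic.
    class DiscountCalculatorTest {

        // Unit under test: a hypothetical component with a simple branch.
        static class DiscountCalculator {
            double discountFor(double orderTotal) {
                if (orderTotal <= 0) {
                    throw new IllegalArgumentException("order total must be positive");
                }
                return orderTotal >= 100 ? 0.10 : 0.0;   // 10% discount over 100
            }
        }

        @Test
        void largeOrdersGetTenPercentDiscount() {
            assertEquals(0.10, new DiscountCalculator().discountFor(150.0));
        }

        @Test
        void smallOrdersGetNoDiscount() {
            assertEquals(0.0, new DiscountCalculator().discountFor(50.0));
        }

        @Test
        void invalidTotalsAreRejected() {
            assertThrows(IllegalArgumentException.class,
                    () -> new DiscountCalculator().discountFor(-1.0));
        }
    }

Tests of this shape can be run automatically at each build, which is what makes incrementally growing the suite during development practical.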

The software components are integrated to form a sub-system. Integration testing addresses the issues associated with verification and program construction. Black-box testing techniques are prevalent during integration, although white-box testing may be used to ensure coverage of major functionality.
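A minimal sketch of the idea, assuming two hypothetical unit-tested components: an OrderService is wired to an in-memory InventoryStore and exercised through its public interface (black-box style), so that faults in the interaction between the two are exposed. In a real integration test the in-memory store would give way to the actual component; it is used here only to keep the sketch self-contained.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // Hypothetical integration test: two unit-tested components are combined and
    // exercised through the service's public interface to expose interface faults.
    class OrderServiceIntegrationTest {

        interface InventoryStore {
            Optional<Integer> stockFor(String sku);
            void reserve(String sku, int quantity);
        }

        // Simple in-memory implementation standing in for the real store.
        static class InMemoryInventoryStore implements InventoryStore {
            private final Map<String, Integer> stock = new HashMap<>();
            InMemoryInventoryStore(Map<String, Integer> initial) { stock.putAll(initial); }
            public Optional<Integer> stockFor(String sku) { return Optional.ofNullable(stock.get(sku)); }
            public void reserve(String sku, int quantity) { stock.merge(sku, -quantity, Integer::sum); }
        }

        static class OrderService {
            private final InventoryStore inventory;
            OrderService(InventoryStore inventory) { this.inventory = inventory; }

            boolean placeOrder(String sku, int quantity) {
                int available = inventory.stockFor(sku).orElse(0);
                if (available < quantity) return false;
                inventory.reserve(sku, quantity);
                return true;
            }
        }

        @Test
        void orderSucceedsWhenStockIsAvailableAndUpdatesInventory() {
            InventoryStore store = new InMemoryInventoryStore(Map.of("ABC-1", 5));
            OrderService service = new OrderService(store);

            assertTrue(service.placeOrder("ABC-1", 3));
            assertEquals(Optional.of(2), store.stockFor("ABC-1"));
        }

        @Test
        void orderIsRejectedWhenStockIsInsufficient() {
            OrderService service = new OrderService(new InMemoryInventoryStore(Map.of("ABC-1", 1)));
            assertFalse(service.placeOrder("ABC-1", 3));
        }
    }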

The validation criteria established during requirements analysis are used during validation testing to demonstrate traceability and conformance to all functional, behavioral, and performance requirements. Black-box testing techniques are used exclusively during validation.

Validated sub-systems are combined to form a software system. System testing verifies that all sub-systems properly interact through their interfaces, and that the overall function and performance of the system are achieved.

Finally, acceptance testing, performed by the various customers and end-users, provides the ultimate confirmation that the system satisfies the requirements and is ready for commercial release.

Testing comes in many shapes and sizes

*Unit testing is white-box oriented and focuses on the verification of individual software components against a technical specification. The development team typically performs unit tests. Simple frameworks, e.g. NUnit, enable developers to write repeatable unit tests and incrementally build test suites that can be executed automatically at suitable junctures during development.

*Integration testing is a methodical technique for constructing a sub-system using unit-tested components, while simultaneously conducting tests to expose faults in the interfaces and in the interaction between the integrated components. Integration tests can be scheduled at specific intervals during development. Alternatively, continuous integration rolls the testing activities into an ongoing, highly iterative process in which the integrated software is built and tested automatically, typically on every check-in or at least once a day.

*System testing exercises the entire system, including the interfaces to third-party systems, and verifies that all sub-system elements have been properly integrated and perform the allocated functions.

*Validation testing is black-box oriented and aims to demonstrate the conformity of the system to the requirements, showing that the system functions in a manner that can be reasonably expected by the customer.

*Acceptance testing is formal validation testing conducted by the customer, end-user or other authorized person to determine whether to accept a system or component. It is based largely on the requirements and demonstrates that those requirements have been satisfied.

*Operational readiness testing evaluates a system or component in an operational environment to ensure that all the interdependent elements (systems, resources, strategies, and procedures) will cooperate successfully.

*Performance testing is designed to test the run-time performance of software under constant operational conditions and within the context of a fully integrated system.

*Load testing is designed to test the run-time performance of software under varying operational conditions, e.g. the number of users or transactions, while the configuration remains constant. It aims to identify the peak load: the point at which the system can no longer process the required volumes within acceptable time spans (a minimal sketch appears after this list).

*Stress testing is designed to confront software with abnormal situations, executing a system in a manner that demands resources in abnormal quantity, frequency or volume.

*Security testing attempts to verify that protection mechanisms built into a system will provide a defense against unauthorized access. The testing may include attempts to acquire passwords; a system may be attacked by custom software designed to break down its defenses; denial of service may be achieved through saturation activities; insecure data may be searched to locate clues or keys to system entry.

*Regression testing retests previously tested functionality after the software has been modified, and attempts to ensure that faults have not been introduced or uncovered as a result of the changes made.

*Usability testing determines the effectiveness, efficiency, and satisfaction with which users can achieve tasks using an interactive interface to a system.
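As referenced from the load testing item above, the following minimal sketch fires increasing numbers of concurrent requests at an assumed local endpoint and reports the worst response time observed at each load level; the URL, the user counts, and the notion of an acceptable time span are illustrative assumptions.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    // Hypothetical load-test sketch: send batches of concurrent requests and watch
    // how the worst response time grows as the number of simulated users increases.
    public class SimpleLoadTest {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Assumed local endpoint; replace with the system under test.
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/health"))
                    .timeout(Duration.ofSeconds(5))
                    .build();

            for (int users : new int[]{10, 50, 100}) {          // increasing load levels
                ExecutorService pool = Executors.newFixedThreadPool(users);
                List<Future<Long>> timings = new ArrayList<>();

                for (int i = 0; i < users; i++) {
                    timings.add(pool.submit(() -> {
                        long start = System.nanoTime();
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        return (System.nanoTime() - start) / 1_000_000;   // milliseconds
                    }));
                }

                long worst = 0;
                for (Future<Long> t : timings) worst = Math.max(worst, t.get());
                pool.shutdown();

                // A sharply rising worst-case time suggests the peak load is near.
                System.out.printf("%3d concurrent users -> worst response %d ms%n", users, worst);
            }
        }
    }

A dedicated load-testing tool would normally be used for this; the sketch only shows the shape of the measurement.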
