This is a sample of an outline for a test plan. It has been designed for medium to small test projects, and thus is fairly lightweight. It is by necessity general, because each enterprise, each development group, each testing group, and each development project is different. This outline should be used as a set of guidelines for creating your own standard template; add to it or subtract from it as you find appropriate. Bear in mind that it is generally better to have an excess of detail in the template—detail which can be removed when creating a specific test plan—than to have to remember to add something that is not in the template.
(Looking for a heavy-duty test plan? The government and the military are good sources. Try the one on the IRS Web site:
http://www.irs.ustreas.gov/bus_info/tax_pro/irm-part/part02/28781a.html)
Make sure to fill in the running headers and footers with the product name, draft number, revision date, and page numbers; this is especially important in organizations with many test projects in progress. Include the author’s name as well, so that errors or questions can be directed to the right person.

1.      OVERVIEW
1.1.      PRODUCT NAME
1.2.      PRODUCT REVISION
1.3.      PROJECT LEADS
1.3.1.      Marketing Lead (or other customer representative)
1.3.2.      Program Manager
1.3.3.      Development Lead
1.3.4.      Test Lead
1.3.5.      Build and Release Control Engineer
1.3.6.      Legal representative
Include names, phone numbers, and email addresses for each. Note that this list will vary from one company or group to another. The goal is to ensure that anyone new to the company or to the test role can easily identify and contact the people they need to reach.
1.4.      TEST PROJECT STAFF
1.4.1.      Test requirements designers
1.4.2.      Test case designers
1.4.3.      Test personnel
1.4.3.1.      For manual (i.e. non-automated) tests
1.4.3.2.      For automated tests
1.4.3.3.      Test automation programmers
1.4.4.      Documentation reviewers
1.4.5.      Legal reviewer
Include names, phone numbers, and email addresses for each. Note that there may be several people in each role, that one person may be obliged to fill multiple roles, and that some roles (e.g. legal reviewer) won’t be required for all projects.
1.5.      PRODUCT OVERVIEW
This could also be “description of the change requirements” for maintenance projects.
1.5.1.      Paste in a brief summary description of the product or change from the requirements document or specification, or describe the project as understood by the developers. If the latter, make sure that the customer has reviewed the description and signed off on it.
1.6.      TRACKING AND REPORTING SYSTEMS
1.6.1.      Identify the defect tracking system in use.
1.6.2.      Identify the manner and schedule by which defect reports are expected to be delivered to developers.
1.6.3.      Identify parties that may have access to the tracking system.
1.6.4.      Identify the change control system.
1.6.5.      Identify the means by which the team is to be notified of changes to the requirements, the product, the test plan, etc.
2.      TESTING SYNOPSIS
2.1.      Items to be tested
2.1.1.      Refer to the functional requirements that specify the features and functions to be tested. The description of the change need not be excessively detailed when there is a complete description to refer to in some other document. On the other hand, if there is no reasonable specification available, more detail is called for here.
2.2.      Items not to be Tested
2.2.1.      List the features and functions that will not be covered in this test plan. Identify briefly the reasons for leaving them out.
2.3.      System Requirements
2.3.1.      This section should be filled out in detail for new projects. For existing maintenance tasks, a simple cross-reference to the document describing existing system requirements is fine. Note any changes to previous system requirements, especially when support for a given product or platform is being dropped.
2.3.2.      If there is a system requirement that could be unclear, make it specific; for example, for Web-based projects, identify not only the supported browsers but also the minimum versions of the supported browsers.
2.4.      Standards/Reference material
2.4.1.      List any standards or other reference material used in the creation of this test plan.
2.4.2.      Identify standards for acceptance criteria, defect severity, testable specifications, and so on. (These standards may have to be created, or adapted from time to time; the first use of this test plan will require more work than later iterations.)
2.5.      Glossary
2.5.1.      In cases where terminology could be unfamiliar or open to interpretation, provide a list defining the unclear terms.
2.5.2.      Obtain agreement on these terms from all interested parties.
2.5.3.      Note: If no one is forthcoming with the information you need, make something up; they might not have done their jobs from the outset, but they’ll be happy to correct your work! You will have achieved the goal, which is clarity and agreement.
3.      TYPES OF TESTING
3.1.      ACCEPTANCE TESTING
3.1.1.      Detail a set of acceptance criteria: conditions that must be met before testing can begin. A smoke test should represent the bare minimum of acceptance testing (a minimal smoke-test sketch follows this subsection).
3.1.2.      As noted above, the ideal is to create a separate document for acceptance criteria that can be reused and referred to here. If any particular, specialized test cases not listed in that document will be used, refer to them here.
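As an illustration, a smoke test is usually just a handful of fast, shallow checks proving that the build is stable enough to test at all. The sketch below is written in Python in the pytest style; the module and functions (myproduct, launch, get_version) are hypothetical placeholders, not any particular product’s API, and the version string is an assumption.

    # smoke_test.py - minimal acceptance/smoke checks (hypothetical product API)
    import myproduct                          # assumed product module

    def test_application_launches():
        # If the application cannot even start, the build fails acceptance.
        session = myproduct.launch()
        assert session is not None
        session.close()

    def test_build_reports_expected_version():
        # Catches the common problem of the wrong build being delivered to test.
        assert myproduct.get_version().startswith("2.1")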
3.2.      FEATURE LEVEL TESTING
This is the real meat of the test plan. Fill in the test categories below, itemizing the categories of tests to be run, along with references to the test library or catalog. Individual test cases should not be listed here, and test requirements generally should not be either; the details should exist elsewhere and can be cross-referenced.
3.2.1.      Task-Oriented Functional Tests
3.2.1.1.      This is a detailed section, listing test requirements for program features against functional specifications, user guides, or other design-related documents. If there are test matrices available listing these features and their interdependence (and there should be), refer to them.
3.2.2.      Forced-Error Tests
3.2.2.1.      Provide or refer to a list of all error conditions and messages. Identify the tests that will be run to force the program into error conditions.
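For example, a forced-error test deliberately drives the program into a documented error condition and then checks the resulting message. The sketch below is a hypothetical pytest example; load_config and ConfigError are assumed names standing in for whatever error interface the product actually exposes.

    import pytest
    from myproduct import load_config, ConfigError   # hypothetical API under test

    def test_missing_file_raises_documented_error():
        # Force the error condition and verify the exact documented message.
        with pytest.raises(ConfigError) as err:
            load_config("no_such_file.cfg")
        assert "Cannot open configuration file" in str(err.value)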
3.2.3.      Boundary Tests
3.2.3.1.      Boundary tests are carried out at the lines between valid and invalid input, between acceptable and unacceptable system resources (such as memory, disk space, or timing), and at other limits of performance. Because defects cluster at these boundaries, a few well-chosen boundary cases can stand in for whole ranges of values, eliminating duplication of effort. Identify the types of boundary tests that will be carried out. Note that such tests can also fall into the categories outlined below, so this section may be removed, or made a sub-section of those categories.
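A boundary test typically exercises the values just inside and just outside a documented limit. The sketch below assumes a hypothetical field that accepts usernames of 1 to 20 characters; substitute the limits from the actual specification.

    import pytest
    from myproduct import is_valid_username      # hypothetical validator

    @pytest.mark.parametrize("name, expected", [
        ("",       False),   # just below the minimum (0 characters)
        ("a",      True),    # minimum boundary (1 character)
        ("a" * 20, True),    # maximum boundary (20 characters)
        ("a" * 21, False),   # just above the maximum
    ])
    def test_username_length_boundaries(name, expected):
        assert is_valid_username(name) == expected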
3.2.4.      Integration Tests
3.2.4.1.      Identify components or modules that can be combined and tested independently to reduce dependence on system testing. Identify any test harnesses or drivers that need to be developed.
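A test harness or driver can be as simple as a module that wires two components together with a stubbed dependency, so the pair can be exercised before the full system exists. The sketch below is purely illustrative: OrderProcessor and the fake payment gateway stand in for whatever components the project actually combines.

    # integration_harness.py - drive two components together with a stub
    import pytest
    from myproduct.orders import OrderProcessor      # hypothetical component

    class FakePaymentGateway:
        # Stub for the real gateway, so the order/payment pair can be tested early.
        def charge(self, amount):
            self.charged = amount
            return "APPROVED"

    def test_order_is_charged_through_gateway():
        gateway = FakePaymentGateway()
        processor = OrderProcessor(gateway)
        result = processor.submit(item="widget", quantity=2, unit_price=9.99)
        assert result.status == "COMPLETED"
        assert gateway.charged == pytest.approx(19.98)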
3.2.5.      System-Level Tests
3.2.5.1.      Specify the tests that will be carried out to exercise the program as a whole, ensuring that all elements of the integrated system function properly. Note that when unit and integration testing have been properly performed, the dependence upon system testing can be reduced.
3.2.6.      Real World User-Level Test
3.2.6.1.      In contrast to types of testing designed to find defects, identify tests that will demonstrate the successful functioning of the program as you expect the customer to use it. What type of workflow tests will be run? What type of “real work” will be carried out using the program?
3.2.7.      Unstructured Tests
3.2.7.1.      Specify the amount of ad-hoc or exploratory testing that will be carried out. Identify the scope and the time associated with this form of testing.
3.2.8.      Volume Tests
3.2.8.1.      Indicate the types of tests that will be carried out to see how the program deals with very large amounts of data, or with a large demand on timely processing. Note that these tests can rarely be performed without automation; identify the automation tools, test harnesses, or scripts that will be used. Ensure that the programs developed for the test automation effort are accompanied by their own sets of requirements, specifications, and development processes.
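Volume tests usually have to be generated rather than written by hand. The sketch below shows the general shape of a scripted volume test in Python: generate a large number of records, feed them to the program, and assert that nothing is lost. The record count and the Importer interface are assumptions to be replaced with the project’s own figures and API.

    # volume_test.py - scripted volume test (hypothetical bulk-import API)
    from myproduct import Importer

    RECORD_COUNT = 1_000_000    # assumed target volume; take the real figure from the requirements

    def test_bulk_import_handles_large_volume():
        importer = Importer()
        records = ({"id": i, "name": f"customer-{i}"} for i in range(RECORD_COUNT))
        result = importer.load(records)
        # Every record must be accepted; anything dropped or rejected is a defect.
        assert result.accepted == RECORD_COUNT
        assert result.rejected == 0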
3.2.9.      Stress Tests
3.2.9.1.      Identify the limits under which the program is expected to perform. These may include number of transactions per unit time, timeouts, memory constraints, disk space constraints, and so on. Volume tests and stress tests are closely related; you may consider wrapping both into the same category.
3.2.9.2.      How will the product be tested to push the upper functional limits of the program? Will specific tools or test suites be used to carry out stress tests? Ensure that these are reusable.
3.2.10.      Performance Tests
3.2.10.1.      Refer to the functional requirements that specify acceptable performance. Identify the functions that need to be measured, and the tests needed to show conformance to the requirements.
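A simple performance check times a representative operation and compares the result with the figure stated in the requirements. The sketch below is hypothetical; the two-second budget and the search function are placeholders for the project’s own requirement and interface.

    import time
    from myproduct import search           # hypothetical operation under measurement

    MAX_SECONDS = 2.0                       # assumed requirement: results within 2 seconds

    def test_search_meets_response_time_requirement():
        start = time.perf_counter()
        results = search("standard benchmark query")
        elapsed = time.perf_counter() - start
        assert results                      # the call must succeed as well as be fast
        assert elapsed <= MAX_SECONDS, f"search took {elapsed:.2f}s, budget is {MAX_SECONDS}s"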
3.3.      REGRESSION TESTING
3.3.1.      At each stage of new development or maintenance, a subset of the regression test library should be run, focusing on the feature or function that has changed from the previous version. Unit, integration, and system tests are all viable places for regression testing. For small maintenance fixes, identify this subset. A good version control system can allow the building of older versions of the software for comparative purposes.
3.3.2.      In the final phase of a complete development cycle, a full regression test cycle is run. Identify the test case libraries and suites that will be run.
3.3.3.      Whether running a subset or a full regression pass, use existing test scripts, matrices, and test cases, whether automation is available or not. Identify the documents that describe the details. Emphasize regression tests for functions that are new or that have changed, for components that have a history of vulnerability, for high-risk defects, and for previously fixed severe defects.
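One lightweight way to keep a reusable regression subset is to tag test cases by the feature they cover and select them at run time. The sketch below assumes pytest markers; the marker names are illustrative only, and custom markers should be declared in pytest.ini to avoid warnings.

    import pytest

    @pytest.mark.regression
    @pytest.mark.feature_login
    def test_login_rejects_expired_password():
        ...   # existing test case from the regression library

    # For a small maintenance fix to the login feature, run only that subset:
    #     pytest -m "regression and feature_login"
    # For the full regression cycle at the end of development, run the whole library:
    #     pytest -m regression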
3.4.      CONFIGURATION AND COMPATIBILITY TESTING
3.4.1.      If applicable, identify the types of software and hardware compatibility tests that will be carried out.
3.4.2.      List operating systems, software applications, device drivers etc. that the product will be tested with or against.
3.4.3.      List hardware environments required for in-house testing.
3.5.      DOCUMENTATION TESTING/ONLINE HELP TESTING
3.5.1.      Documentation and online help testing will be carried out to verify technical accuracy of documented material.
3.5.2.      If a license agreement is included in or displayed by the product (or by the portion of the product to which this test plan refers), ensure that the correct one is being used (see the next section).
3.6.      COPYRIGHTS AND LICENSE AGREEMENT
3.6.1.      Identify any copyright notices displayed by the program. Verify that they are accurate and up to date.
3.6.2.      In cases where an End-User License Agreement (EULA) is displayed by the program, which EULA will be used in this product? Provide a link to the file. Ensure that it is consistent with the one included in the product.
3.6.3.      Receive sign-off from the legal department that this is the correct EULA for this product.
3.7.      UTILITY, TOOL KIT, AND COLLATERAL TESTS
3.7.1.      If there are any additional products or components to be included in the final product, or on the distribution media, list the types of tests that will be carried out, and the extent to which they shall be performed.
3.8.      INSTALL/UNINSTALL TESTS
3.8.1.      How will deployment and installation be tested?
3.8.2.      How will the uninstallation or rollback process be tested?
3.8.3.      Since some form of deployment is required for all software products, what generic installation and uninstallation test catalogs will be used or adapted for these tests?
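Installation testing can often be scripted: drive the installer silently into a temporary directory, check for the expected files, then uninstall and confirm a clean removal. The sketch below is a generic outline only; the installer command, its flags, and the expected file list are all assumptions to be replaced with the product’s real deliverables.

    import subprocess
    import tempfile
    from pathlib import Path

    EXPECTED_FILES = ["app.exe", "readme.txt"]        # assumed deliverables

    def test_silent_install_and_uninstall():
        with tempfile.TemporaryDirectory() as target:
            # Hypothetical silent-install flags; use the installer's real options.
            subprocess.run(["setup.exe", "/silent", f"/dir={target}"], check=True)
            for name in EXPECTED_FILES:
                assert (Path(target) / name).exists(), f"{name} missing after install"

            subprocess.run(["setup.exe", "/uninstall", "/silent"], check=True)
            leftovers = list(Path(target).iterdir())
            assert not leftovers, f"files left behind after uninstall: {leftovers}"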
3.9.      CODE COVERAGE
3.9.1.      What tools or processes will be used to assure that each line of code is run at least once during testing?
3.9.2.      Have the developers performed coverage tests during unit or integration testing? Have they provided the results of these tests? Have they provided source code, test harnesses, or test tools?
3.9.3.      Are there plans to cover all code during regression testing? If not, why not?
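For Python projects, for example, coverage.py can run the test suite and report the lines that were never executed; other environments have equivalent tools (gcov, JaCoCo, and so on). The sketch below shows one way to drive it from a script, assuming a hypothetical package name and a tests/ directory; it illustrates the kind of evidence to collect rather than prescribing a process.

    # measure_coverage.py - run the suite under coverage.py and report unexecuted lines
    import coverage
    import pytest

    cov = coverage.Coverage(source=["myproduct"])   # "myproduct" is a placeholder package name
    cov.start()
    pytest.main(["tests/"])                         # run the existing test suite
    cov.stop()
    cov.save()
    cov.report(show_missing=True)                   # lists the line numbers never executed, per file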
3.10.      YEAR 2000 AND DATE COMPLIANCE
3.10.1.      Identify date and time values that are accepted, calculated, and output by the program. Pay attention both to hard dates and timespans.
3.10.2.      What tests, if any, will be carried out to make sure the program will continue to work correctly when dates on both sides of the year 2000 are processed?
3.10.3.      What tests, if any, will be run to ensure that other forms of date processing are done correctly?
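Date-compliance tests typically walk through the known hard cases: the 1999/2000 rollover, the leap day of February 29, 2000, two-digit-year windowing, and spans that cross the century boundary. The sketch below assumes hypothetical days_between and parse_date functions as the units under test.

    from datetime import date
    from myproduct import days_between, parse_date   # hypothetical date-handling functions

    def test_span_across_the_year_2000_boundary():
        assert days_between(date(1999, 12, 31), date(2000, 1, 1)) == 1

    def test_year_2000_is_a_leap_year():
        # 2000 is divisible by 400, so February 29, 2000 is a valid date.
        assert days_between(date(2000, 2, 28), date(2000, 3, 1)) == 2

    def test_two_digit_year_is_not_windowed_into_the_1900s():
        # Assumed parsing rule: "01/02/03" must not come back as the year 1903.
        assert parse_date("01/02/03").year >= 2000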
3.11.      INTERNATIONALIZATION
3.11.1.      For products intended for global markets, what tests will be carried out to make sure the product can be easily localized (that is, adapted for a specific local market)? For products intended for Asian markets, what tests will be performed to verify that the program correctly handles multiple-byte character sets?
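A basic internationalization check is to round-trip text in several scripts, including multiple-byte characters, through whatever storage or processing the product performs. The sketch below assumes a hypothetical save/load pair; the sample strings cover accented Latin characters, Japanese text, and a character outside the Basic Multilingual Plane.

    import pytest
    from myproduct import save_note, load_note    # hypothetical round-trip API

    @pytest.mark.parametrize("text", [
        "café crème",          # accented Latin characters
        "日本語のテスト",        # multiple-byte Japanese text
        "plan 😀 complete",     # character outside the Basic Multilingual Plane
    ])
    def test_text_survives_a_round_trip(text):
        save_note("note-1", text)
        assert load_note("note-1") == text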
4.      TEST SCHEDULE AND RESOURCES
4.1.      Identify the estimated effort required to execute the test plan. Include both a range and a confidence level. Use the guidelines on pp. 3.23 – 3.44 of the course book to plan and review estimates.
4.2.      Identify the resources available to carry out the test plan.
4.3.      Identify time or resource constraints that will lead to a risk of the test project falling behind schedule, below expected scope, or below expected quality. Cross-reference this with the Unresolved Issues and Risks section later in this document.
4.4.      If any testing is to be handled by another entity, such as another department or a third party test lab, identify them. List names and contact information at the beginning of this document. List the specific tasks they will be assigned to carry out. Include references to contracts with these people, and ensure that contracts are approved and signed.
5.      TEST PHASES AND COMPLETION CRITERIA
5.1.      Detail the planned test cycles and phases; these should be linked to the development plan for the project. Specify the type of testing being done in each phase. Typically unit testing will be done by the developer of the code, and need not be covered in detail in the test plan. Integration and system testing phases should be detailed here.
5.2.      Outline the criteria for assessing the severity of found defects. List expectations for setting the priorities on resolving them. Collaborate with the developer(s), project managers, and the customer representatives on this.
5.3.      Identify in advance the criteria that must be fulfilled before each stage of testing can be considered complete. Make these specific, measurable, and decidable; otherwise, expectations will differ and time will be wasted on discussion and debate.
5.4.      If there are to be staged releases of system testing (typically an alpha release for internal use, a beta release for limited distribution to external test sites, and a final release, sometimes called the “gold master”), define them. Define acceptance standards for each phase. Ideally these should be in a separate document that can be referred to here.

Bear in mind that the standards set here may be overruled by some authority or other; for example, a product may ship with more minor defects than the agreed standard allows, at the behest of a marketing department or CFO for whom time to market is the most important consideration. Be prepared to accept such decisions dispassionately, but also be prepared to record them as failures to meet the standards set and agreed upon in advance. Companies and individuals easily forget and repeat mistakes when there is no record of breached agreements and their consequences; people learn and improve more readily when records of successes and failures are available.
6.      UNRESOLVED ISSUES AND RISKS
6.1.      Identify issues that have yet to be decided as of this draft of the plan. Note these as risks to the schedule, scope, or quality of the test effort.
6.2.      Identify other risks that may have an impact on the success of the plan. Use the risks outlined in the course book and the attached speaker notes as a guideline to identifying common risks. Refer also to the Software Project Survival Guide (Steve McConnell), which includes a good list of risks for every phase of development. When assessing risk, don’t be optimistic; failing to assess risk realistically weakens both the risk assessment and the test plan itself.
7.      TEST PLAN REVIEW
7.1.      Include plans for review of this test plan. Identify the parties who will review and approve the document, either within the test group or among another set of developers or test engineers. Look at sample test plan checklists, such as the one on pp. 3.45 – 3.47 of the course book, or those in the Software Project Survival Guide. Use ideas from these checklists to develop your own, appropriate to the size and scope of the product. Identify here the checklist(s) that will be used.
7.2.      Meet with developers and customers or customer representatives to ensure that the test plan meets their requirements.