Friday, July 20, 2012

Software Testing Documentation Process

The IEEE standard for software test documentation (IEEE 829) specifies eight stages in the documentation process, each stage producing its own separate document.
  • Test Plan: A detail of how the test will proceed, who will do the testing, what will be tested, how long the testing will take, and to what quality level the test will be performed.
  • Test Design Specification: A detail of the test conditions and the expected outcome. This document also includes details of how a successful test will be recognized.
  • Test Case Specification: A detail of the specific data that is necessary to run tests based on the conditions identified in the previous stage.
  • Test Procedure Specification: A detail of how the tester will physically run the test, the physical set-up required, and the procedure steps that need to be followed.
  • Test Item Transmittal Report: A detail of when specific tested items have been passed from one stage of testing to another.
  • Test Log: A detail of which test cases were run, who ran them, in what order they were run, and whether individual tests passed or failed (a minimal sketch of such a log entry follows this list).
  • Test Incident Report: A detail of the actual versus expected results when a test has failed, and anything indicating why it failed.
  • Test Summary Report: A detail of all the important information to come out of the testing procedure, including an assessment of how well the testing was performed, an assessment of the quality of the system, any incidents that occurred, and a record of what testing was done and how long it took to be used in future test planning. This final document is used to determine if the software being tested is viable enough to proceed to the next stage of development.
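
To make the Test Log concrete, here is a minimal sketch, in Python, of one way a log entry might be structured; the field names are assumptions made for illustration and are not prescribed by the IEEE standard.

    from dataclasses import dataclass
    from datetime import datetime

    # Illustrative fields only; the IEEE standard does not mandate this layout.
    @dataclass
    class TestLogEntry:
        sequence: int          # order in which the test was run
        test_case_id: str      # reference back to the Test Case Specification
        tester: str            # who ran the test
        executed_at: datetime  # when it was run
        passed: bool           # pass/fail result

    log = [
        TestLogEntry(1, "TC-001", "asmith", datetime(2012, 7, 20, 9, 30), True),
        TestLogEntry(2, "TC-002", "asmith", datetime(2012, 7, 20, 9, 45), False),
    ]
    for entry in log:
        print(entry.sequence, entry.test_case_id,
              "PASS" if entry.passed else "FAIL")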

Agile Testing Methodology


Software Testing in Agile

Agile testing is a software testing practice that follows the principles of the agile manifesto, treating software development as the customer of testing.
Agile testing involves testing from the customer's perspective as early as possible, testing early and often as code becomes available and stable enough at the module/unit level.

Since working increments of the software are released frequently in agile software development, there is also a need to test frequently. This is typically done with automated acceptance testing to minimize the amount of manual labor. Relying only on manual testing in agile development would likely result in either buggy software or slipping schedules, because it is rarely possible to test the whole application manually before every release.
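
As a minimal sketch of such an automated acceptance test, using pytest (the login function and its behavior are hypothetical, invented for this example):

    # Hypothetical system under test: accepts one known user/password pair.
    def login(username, password):
        return (username, password) == ("alice", "s3cret")

    # Acceptance tests derived from a user story such as
    # "As a registered user, I can log in with valid credentials."
    # Run with: pytest test_login.py
    def test_valid_credentials_are_accepted():
        assert login("alice", "s3cret") is True

    def test_invalid_credentials_are_rejected():
        assert login("alice", "wrong") is False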

Why Automated Testing for Agile Development?
Let's dive a little deeper into the reasons why testing needs to be automated in an agile delivery environment. The people involved in testing are part of the delivery team - not an isolated group that developers hand the code to as a final step before release. Ideally, testers sit side-by-side with the developers, who, as they create code, pass it to testers early and continually throughout the process for evaluation against acceptance criteria. Since capability is built iteratively and the team needs to maintain velocity, the code assets have to be verified quickly. For agile to truly succeed, functional testing has to be quick, iterative, and responsive.
  • The Need for Speed: Accelerating the Code-and-Test Process - Automation enables testers to create simple, reusable scripts that they can deploy to save time and increase the consistency of testing across similar user stories, story points, or requirements in and across projects. Tests can be developed from the user story to drive the functional capabilities and then run rapidly and iteratively. The speed of automation significantly lightens testers' workloads and eliminates the late-night and weekend testing marathons that can burn teams out.
  • The Need for Repeatability: Doing the Same Tests and Scripting Tests against the Right Acceptance Criteria - With agile development, regression testing should be done at the close of every new iteration - in some cases this means daily. Regression testing requires that 1) you do the same tests every time you test a particular piece of code and 2) that the test is scripted against the acceptance criteria of each respective user story. Whenever code changes (or is extended to include new capability), you need to rerun all functional tests for all user stories up to the latest change to ensure other user stories weren't impacted inadvertently.
Repeatability is nearly impossible to achieve with manual testing due to human error, variability, and inconsistency. People simply can't remember exactly which tests they ran for each piece of code for the last iterative cycle - and even one oversight can cause problems in the final code. Not to mention the fact that with large code bases the time needed to manually test usually exceeds the time allotted for the iteration. But with automated, repeatable functional and regression testing, one can execute tests consistently whenever necessary.
The automation element adds key benefits that can't be achieved with manual testing. For example, it can greatly accelerate the code-and-test process by supporting fast automated test scripts. Automation also ensures the repeatability of tests, maintaining regression coverage from sprint to sprint and iteration to iteration. It can also enhance test efficiency with robust yet flexible test management processes, helping teams avoid the inaccuracies that manual processes inject into testing.
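
A minimal sketch of such a reusable, repeatable script, using pytest parametrization so the same cases run identically every cycle (the discount function and its expected values are invented for illustration):

    import pytest

    # Hypothetical function under regression test.
    def apply_discount(price, percent):
        return price * (1 - percent / 100)

    # One row per acceptance criterion; rerunning the suite repeats exactly
    # these cases, which is what makes the regression testing repeatable.
    @pytest.mark.parametrize("price,percent,expected", [
        (100.0, 0, 100.0),   # no discount
        (100.0, 10, 90.0),   # basic case from the user story
        (50.0, 25, 37.5),    # fractional result
    ])
    def test_apply_discount(price, percent, expected):
        assert apply_discount(price, percent) == expected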

Testing in an agile environment can be a challenge, but the benefits of good testing are enormous. Here are a few keys to agile testing success:
  • Test Early – The key to agile is iteration: developing, testing, and developing again. To get the most out of an agile process, you have to test early. That means thinking about testing from the very beginning of the development cycle, not just after the first couple of sprints.
  • Test Often – A good agile process emphasizes frequent testing, looking for defects early in the cycle. The longer defects remain in the code, the harder and more expensive they are to remove.
  • Refactoring/Regression – Stop every few weeks to focus on stability. Fix bugs, refactor old code, and run extensive regression testing to make sure you didn’t miss bugs during the ongoing testing process.
  • Test from a Customer Point of View – As with any development process, it’s critical that the testers and developers know the customer’s point of view. That means having good stories with customer relevant material, and then sharing those stories with the development team as well as the testing team.
  • Separate Testing from Development – This is often difficult for smaller teams, but testers should be independent. Keeping testers separate means they can develop true testing expertise while focusing on finding bugs.
  • Communicate – Good communication between the testers, developers, and product owners is essential to a solid agile process. Even though testers should be separate from developers, they should work closely together to get the most from testing.
  • Automate What You Can – The best agile teams automate as much of their testing load as they can; a simple example is sketched below. Manually testing the same case over and over is a waste of time, and locating new bugs is far more valuable for any tester.
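
One common approach to the last point, sketched here with pytest markers (the "regression" marker name is our choice, not a pytest built-in; register it in pytest.ini to avoid warnings), is to tag the stable, repeatedly run checks so the machine executes them on every build while testers hunt for new bugs:

    import pytest

    # Tag checks that must be rerun on every build; "regression" is an
    # illustrative marker name defined by the team, not by pytest itself.
    @pytest.mark.regression
    def test_existing_totals_still_correct():
        assert sum([1, 2, 3]) == 6

    # Run only the tagged tests:  pytest -m regression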

Test Summary Report


A test summary report is a testing work product that formally summarizes the results of all testing on an endeavor.

Why Required?
  • Summarizes all of the testing that was performed since the previous test summary report.
  • Enables project management and the customer to know the status of project testing.
Benefits
Project management and the end customer can:
  • Get project testing status
  • Get application quality status
  • Take corrective actions, if required
Guidelines
1. Where possible, present the report using metrics, charts, and tables (a small sketch follows this list).
2. To write a test summary report, the prerequisites are: the test plan is complete, test execution has occurred, and the respective test reports are available.
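
As a minimal sketch of guideline 1, the raw execution results can be rolled up into the report's metrics programmatically; the result data below is invented for illustration.

    from collections import Counter

    # Invented execution results, keyed by test case ID.
    results = {"TC-001": "Pass", "TC-002": "Fail",
               "TC-003": "Pass", "TC-004": "Pass"}

    counts = Counter(results.values())
    total = len(results)
    pass_rate = 100.0 * counts["Pass"] / total

    print(f"Total executed : {total}")
    print(f"Passed         : {counts['Pass']}")
    print(f"Failed         : {counts['Fail']}")
    print(f"Pass rate      : {pass_rate:.1f}%")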




Test Summary Report


1.0 Overview
Provide a high-level description of the overall testing and results.

2.0 Test Coverage/Results
Describe the specific functionality (area) tested and the results of the testing.

3.0 Functionality NOT Tested
Document any functionality that should have been tested but either was not tested or did not receive adequate testing, and provide the reason why, such as:
  • Late delivery of the product
  • No impact from changes to this functionality, based on development assessment
  • Not enough time for testing
  • Project Manager approved deferral of testing until the next release

4.0 Test Confidence
Describe the confidence level of the testing that was performed, using the following levels:
  • Extremely High
  • High
  • Medium
  • Low

Provide the reason for the rating if it is not Extremely High or High.

5.0 Test Issues and Concerns
Document any issues or concerns you may have about the release. This may encompass things such as:
  • No documentation of changes in area xxxx made testing difficult.
  • Late delivery of the release caused an impact to available resources.
  • A slip of code-complete caused a one-week impact to the test schedule.

Test Cases


Test Case Document
The test case document is also part of the test deliverables. By reading it, stakeholders get an idea of the quality of the test cases written and their effectiveness; they can also provide input on the current set of test cases and suggest missing ones.

What is a Test Case?

A test case is a document that specifies the input values, expected output, and preconditions for executing a test.
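
A minimal sketch of that definition as a Python data structure (the field names are illustrative, chosen to mirror the template below):

    from dataclasses import dataclass, field

    # A test case captures its preconditions, input values, and expected output.
    @dataclass
    class TestCase:
        tc_number: str
        description: str
        preconditions: list = field(default_factory=list)
        inputs: dict = field(default_factory=dict)
        expected_output: str = ""

    tc = TestCase(
        tc_number="TC-001",
        description="Valid login",
        preconditions=["User 'alice' exists"],
        inputs={"username": "alice", "password": "s3cret"},
        expected_output="User is redirected to the dashboard",
    )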




Test Case 


Product Area:               Module Ref:               Release Number:               Date:

Requirements Satisfied:

TC Number | Functional Area | Description | Expected Results | Test Status (P/F) | Comments (Build ID, PVCS#, etc.) | Table/Field Names

Test Requirements Matrix

The Test Requirements Matrix is used for tracking and managing testing, based on requirements, throughout the project life cycle.

Traceability ensures completeness: that all lower-level requirements derive from higher-level requirements, and that all higher-level requirements are allocated to lower-level ones. Traceability also provides the basis for test planning.

Requirements traceability ensures that each business need is tied to an actual requirement, and that each requirement is tied to a deliverable.




Req. ID | Requirement | Additional Notes | Testable (Y/N) | Design Ref. | Test Case Ref. | Completion Date | Build ID
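
As a minimal sketch of how the matrix can be used programmatically, the mapping below (with invented data) flags requirements that have no test case reference:

    # Invented matrix rows: requirement ID -> test case references covering it.
    matrix = {
        "REQ-001": ["TC-001", "TC-002"],
        "REQ-002": [],            # not yet covered by any test case
        "REQ-003": ["TC-005"],
    }

    untested = [req for req, tcs in matrix.items() if not tcs]
    coverage = 100.0 * (len(matrix) - len(untested)) / len(matrix)

    print(f"Requirements covered by tests: {coverage:.0f}%")
    for req in untested:
        print(f"WARNING: {req} has no test case reference")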