The time span between the first time a bug is found (status: New) and the point when
it is closed successfully (status: Closed), rejected, postponed, or deferred is called the
Bug/Error Life Cycle.
(From the first time a bug is detected until it is fixed and closed, it is assigned
various statuses: New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject,
Deferred, and Closed. For more information about the statuses a bug carries during its
life cycle, refer to the article Software Testing Bug & Statuses Used During A Bug Life
Cycle.)
There are seven different life cycles that a bug can pass through:
< I > Cycle I:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The Test Lead finds that the bug is not valid, and the bug is Rejected.
< II > Cycle II:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with the status New.
4) The development leader and team verify if it is a valid bug. The bug is invalid and is
marked with a status of Pending Reject before passing it back to the testing team.
5) After getting a satisfactory reply from the development side, the test leader marks the
bug as Rejected.
< III > Cycle III:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with the status New.
4) The development leader and team verify if it is a valid bug. The bug is valid and the
development leader assigns a developer to it marking the status as Assigned.
5) The developer solves the problem, marks the bug as Fixed, and passes it back to
the development leader.
6) The development leader changes the status of the bug to Pending Retest and passes
it on to the testing team for retest.
7) The test leader changes the status of the bug to Retest and passes it to a tester for
retest.
8) The tester retests the bug and it is working fine, so the tester closes the bug and marks
it as Closed.
< IV > Cycle IV:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with the status New.
4) The development leader and team verify if it is a valid bug. The bug is valid and the
development leader assigns a developer to it marking the status as Assigned.
5) The developer solves the problem, marks the bug as Fixed, and passes it back to
the development leader.
6) The development leader changes the status of the bug to Pending Retest and passes
it on to the testing team for retest.
7) The test leader changes the status of the bug to Retest and passes it to a tester for
retest.
8) The tester retests the bug and the same problem persists, so the tester, after
confirmation from the test leader, reopens the bug and marks it with the Reopen status.
The bug is then passed back to the development team for fixing.
< V > Cycle V:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with the status New.
4) The developer tries to verify whether the bug is valid, but fails to replicate the
scenario that existed at the time of testing, and asks the testing team for help.
5) The tester also fails to re-create the scenario in which the bug was found, and the
developer rejects the bug, marking it Rejected.
< VI > Cycle VI:
1) After confirmation that the required data or a certain piece of functionality is
unavailable, the fix and retest of the bug are postponed indefinitely, and the bug is
marked as Postponed.
< VII > Cycle VII:
1) If the bug is of low importance and can be, or needs to be, postponed, it is given
the status Deferred.
This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or
Postponed.
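The seven cycles above boil down to a small set of allowed status transitions, which can be sketched as a lookup table. This is a sketch distilled from the cycles in this article (the Reopen-to-Assigned edge is an assumption; real trackers configure their own workflows):

```python
# Allowed status transitions distilled from the seven cycles above.
TRANSITIONS = {
    "New": {"Assigned", "Pending Reject", "Rejected", "Postponed", "Deferred"},
    "Pending Reject": {"Rejected"},          # Cycle II
    "Assigned": {"Fixed"},                   # Cycles III and IV
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Closed", "Reopen"},          # retest passes or fails
    "Reopen": {"Assigned"},                  # assumed: reopened bugs are reassigned
}

def is_valid_transition(current: str, new: str) -> bool:
    """Return True if moving a bug from `current` to `new` follows the cycles.

    Terminal statuses (Closed, Rejected, Postponed, Deferred) have no outgoing
    edges, so any transition away from them is invalid.
    """
    return new in TRANSITIONS.get(current, set())
```

Checking a transition against such a table is how most trackers prevent a bug from, say, jumping straight from New to Closed without a retest.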
Priority Coverage
Not everything can be tested. Not even a significant subset of everything can be tested.
Therefore testing needs to assign effort reasonably and prioritize thoroughly. This is by
no means a simple topic. Generally you'd like to have every feature covered with at least
one valid input case. This ensures at least a baseline utility to the software.
Beyond the base line you'll need to test further input permutations, invalid input, and
non-functional requirements. In each case the realistic use of the software should be
considered. Common, frequently used scenarios should have more coverage than
infrequently encountered, specialty scenarios. Overall, you target a wide breadth of
coverage, with depth in high-use areas as time permits.
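The idea of depth in proportion to use can be made concrete: after the baseline case per feature, split any extra test-case budget across features by relative usage frequency. A minimal sketch (the function name and the usage numbers are illustrative, not from any real project):

```python
def plan_extra_cases(usage: dict[str, int], budget: int) -> dict[str, int]:
    """Split `budget` extra test cases across features in proportion to usage.

    `usage` maps a feature name to its relative usage frequency on any scale.
    Integer division can leave a remainder; hand it to the most-used features.
    """
    total = sum(usage.values())
    alloc = {feature: (budget * freq) // total for feature, freq in usage.items()}
    leftover = budget - sum(alloc.values())
    for feature in sorted(usage, key=usage.get, reverse=True)[:leftover]:
        alloc[feature] += 1
    return alloc

# Hypothetical numbers: login dominates usage, export is a niche feature.
plan = plan_extra_cases({"login": 60, "search": 30, "export": 10}, budget=10)
```

With these numbers the login feature receives six of the ten extra cases, mirroring the breadth-first, depth-by-usage strategy described above.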
Traceable
Exactly what was tested, and how it was tested, need to be recorded as part of an
ongoing development process. In many environments such proof of activity is required as
part of a certification effort, or simply as a means to eliminate duplicate testing effort.
This shouldn't mean extra documentation; it simply means keeping your test plans clear
enough to be reread and understood.
The team will have to agree on documentation methods; individual members should not
each invent their own. Not all features should be documented the same way, however:
several different methods will likely be employed. Unfortunately there aren't many
commonly agreed principles in this area, so to some extent you're on your own.
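One lightweight way to keep tests traceable is to record, for each test, the requirement it covers and how it was executed. A sketch with illustrative field names and IDs (nothing here comes from a real project):

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    """One traceable entry in a test plan (field names are illustrative)."""
    test_id: str      # e.g. "TC-001"
    requirement: str  # requirement or feature the test covers
    procedure: str    # how the test was carried out
    result: str       # outcome of the last run

log = [
    TestRecord("TC-001", "REQ-LOGIN-1", "valid credentials accepted", "pass"),
    TestRecord("TC-002", "REQ-LOGIN-2", "locked account rejected", "pass"),
]

# Which requirements have at least one test? Gaps and duplicated
# effort both become visible from the same records.
covered = {record.requirement for record in log}
```

Even this much structure answers the two questions above: exactly what was tested, and how.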
Unbiased
Tests must balance the written requirements, real-world technical limitations, and user
expectations. Regardless of the development process being employed, there will be a lot
of unwritten or implicit requirements. It is the job of the tester to keep all such requirements
in mind while testing the software. A tester must also realize they are not a user of the
software, they are part of the development team. Their personal opinions are but one of
many considerations. Bias in a tester invariably leads to a bias in coverage.
The end user's viewpoint is obviously vital to the success of the software, but it isn't all
that matters. If the needs of the administrators can't be met the software may not be
deployable. If the needs of the support team aren't met, it may be unsupportable. If the
needs of marketing can't be met, it may be unsellable. The programmers also can't be
ignored; every defect has to be prioritized with respect to their time limits and technical
constraints.
Deterministic
The discovery of issues should not be random. Coverage criteria should expose all
defects of a chosen nature and priority. Furthermore, when a defect surfaces later, it
should be possible to identify the branch of coverage in which it would have been caught,
and thus put a definite cost on detecting such defects in future testing.
This goal should be a natural extension to having traceable tests with priority coverage. It
reiterates that the testing team should not be a chaotic black box. Quality control is a
well-structured, repeatable, and predictable process. Having clear insight into the process
allows the business to better gauge costs and to better direct the overall development.
If you need assistance or advice with the monitoring and testing of your web application,
then I can help. My company EverSystems provides professional application monitoring
and automated testing services.