
What is a Bug Life Cycle?

The duration or time span between the first time a bug is found (status: New) and the point
when it is closed successfully (status: Closed), rejected, postponed, or deferred is called the
Bug/Error Life Cycle.
(From the moment a bug is first detected until it is fixed and closed, it is assigned various
statuses: New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and
Closed. For more information about the statuses used during a bug life cycle, refer to the
article Software Testing Bug & Statuses Used During A Bug Life Cycle.)
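These statuses form a small, fixed vocabulary that the cycles below move a bug through. As a rough illustration only (not part of the original article; the enum and its names are my own assumption about how a tool might record the statuses), they could be written down in Python like this:

    from enum import Enum

    class BugStatus(Enum):
        """Statuses named in this article; the string values are illustrative."""
        NEW = "New"
        OPEN = "Open"
        ASSIGNED = "Assigned"
        FIXED = "Fixed"
        PENDING_RETEST = "Pending Retest"
        RETEST = "Retest"
        REOPEN = "Reopen"
        PENDING_REJECT = "Pending Reject"
        REJECTED = "Rejected"
        POSTPONED = "Postponed"
        DEFERRED = "Deferred"
        CLOSED = "Closed"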
There are seven different life cycles that a bug can pass through:
< I > Cycle I:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The Test Lead finds that the bug is not valid, and the bug is Rejected.
< II > Cycle II:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with status New.
4) The development leader and team verify whether it is a valid bug. The bug is invalid and is
marked with a status of Pending Reject before being passed back to the testing team.
5) After getting a satisfactory reply from the development side, the Test Lead marks the bug
as Rejected.
< III > Cycle III:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with status New.
4) The development leader and team verify whether it is a valid bug. The bug is valid, and the
development leader assigns a developer to it, marking the status as Assigned.
5) The developer solves the problem, marks the bug as Fixed, and passes it back to the
development leader.
6) The development leader changes the status of the bug to Pending Retest and passes it on
to the testing team for retest.
7) The Test Lead changes the status of the bug to Retest and passes it to a tester for retest.
8) The tester retests the bug, finds that it now works fine, and closes the bug, marking it as
Closed.
< IV > Cycle IV:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with status New.
4) The development leader and team verify whether it is a valid bug. The bug is valid, and the
development leader assigns a developer to it, marking the status as Assigned.
5) The developer solves the problem, marks the bug as Fixed, and passes it back to the
development leader.
6) The development leader changes the status of the bug to Pending Retest and passes it on
to the testing team for retest.
7) The Test Lead changes the status of the bug to Retest and passes it to a tester for retest.
8) The tester retests the bug and the same problem persists, so, after confirmation from the
Test Lead, the tester reopens the bug and marks it with Reopen status. The bug is then
passed back to the development team for fixing.
< V > Cycle V:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with status New.
4) The developer tries to verify whether the bug is valid but fails to replicate the scenario
that existed at the time of testing, and asks the testing team for help.
5) The tester also fails to regenerate the scenario in which the bug was found, and the
developer rejects the bug, marking it as Rejected.
< VI > Cycle VI:
1) After confirmation that the required data or certain functionality is unavailable, the fix and
retest of the bug are postponed indefinitely and the bug is marked as Postponed.
< VII > Cycle VII:
1) If the bug is not important enough to fix now and can be, or needs to be, postponed, it is
given the status Deferred.
This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or
Postponed.
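To make the seven cycles above concrete, here is a minimal sketch of the status transitions they describe, written as a small table of allowed moves. It is an illustration only; the dictionary, the move() helper, and the exact transition set are my own reading of the cycles, not an official workflow.

    # Allowed status transitions implied by Cycles I-VII (illustrative only).
    ALLOWED = {
        "New": {"Assigned", "Pending Reject", "Rejected", "Postponed", "Deferred"},
        "Pending Reject": {"Rejected"},
        "Assigned": {"Fixed"},
        "Fixed": {"Pending Retest"},
        "Pending Retest": {"Retest"},
        "Retest": {"Closed", "Reopen"},
        "Reopen": {"Assigned"},
    }

    def move(current: str, new: str) -> str:
        """Return the new status if the transition is allowed, otherwise raise."""
        if new not in ALLOWED.get(current, set()):
            raise ValueError(f"illegal transition: {current} -> {new}")
        return new

    # Cycle III as a sequence: New -> Assigned -> Fixed -> Pending Retest -> Retest -> Closed
    status = "New"
    for nxt in ("Assigned", "Fixed", "Pending Retest", "Retest", "Closed"):
        status = move(status, nxt)

Walking the Cycle III sequence completes without error, while an illegal jump such as New straight to Closed raises an exception.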

Classification of Defects / Bugs


Severity Wise:
Major: A defect which will cause an observable product failure or departure from
requirements.
Minor: A defect that will not cause a failure in the execution of the product.
Fatal: A defect that will cause the system to crash or close abruptly, or affect other
applications.
Work product wise:
SSD: A defect from the System Study document
FSD: A defect from the Functional Specification document
ADS: A defect from the Architectural Design document
DDS: A defect from the Detailed Design document
Source code: A defect from the source code
Test Plan / Test Cases: A defect from the Test Plan / Test Cases
User Documentation: A defect from user manuals or operating manuals

Type of Errors Wise:

Comments: Inadequate/ incorrect/ misleading or missing comments in the source code
Computational Error: Improper computation of the formulae / improper
business validations in code.
Data error: Incorrect data population / update in database
Database Error: Error in the database schema/Design
Missing Design: Design features/approach missed/not documented in the design
document, and hence the design does not correspond to the requirements
Inadequate or Suboptimal Design: The design features/approach need additional inputs
to be complete, or the design features described do not provide the best (optimal)
approach towards the required solution
Incorrect Design: Wrong or inaccurate design
Ambiguous Design: Design feature/approach is not clear to the reviewer. Also
includes ambiguous use of words or unclear design features.
Boundary Conditions Neglected: Boundary conditions not addressed/incorrect
Interface Error: An interfacing error internal or external to the application; incorrect
handling of passed parameters; incorrect alignment; incorrect/misplaced fields/objects;
unfriendly window/screen positions
Logic Error: Missing or Inadequate or irrelevant or ambiguous functionality in
source code
Message Error: Inadequate/ incorrect/ misleading or missing error messages in
source code
Navigation Error: Navigation not coded correctly in source code
Performance Error: An error related to performance/optimality of the code

Missing Requirements: Implicit/explicit requirements are missed/not documented during
the requirements phase
Inadequate Requirements: Requirement needs additional inputs for it to be
complete
Incorrect Requirements: Wrong or inaccurate requirements
Ambiguous Requirements: Requirement is not clear to the reviewer. Also
includes ambiguous use of words, e.g. like, such as, may be, could be, might,
etc.
Sequencing / Timing Error: Error due to incorrect/missing consideration to
timeouts and improper/missing sequencing in source code.
Standards: Standards not followed, such as improper exception handling, use of E &
D Formats, and project-related design/requirements/coding standards
System Error: Hardware and Operating System related error, Memory leak
Test Plan / Test Cases Error: Inadequate/ incorrect/ ambiguous, duplicate, or
missing Test Plan/ Test Cases & Test Scripts; incorrect/incomplete test setup
Typographical Error: Spelling / Grammar mistake in documents/source code
Variable Declaration Error: Improper declaration / usage of variables, Type
mismatch error in source code
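As a rough sketch of how these three classifications might be recorded together on a single defect report (the class and field names below are illustrative assumptions, not part of any standard):

    from dataclasses import dataclass

    @dataclass
    class DefectReport:
        """One defect, tagged with the three classifications described above."""
        summary: str
        severity: str      # e.g. "Major", "Minor", "Fatal"
        work_product: str  # e.g. "SSD", "FSD", "ADS", "DDS", "Source code"
        error_type: str    # e.g. "Logic Error", "Interface Error", "Data error"
        status: str = "New"

    # Example:
    bug = DefectReport(
        summary="Total price ignores the discount field",
        severity="Major",
        work_product="Source code",
        error_type="Computational Error",
    )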

The Five Goals of Software Testing


Testing can mean many different things depending on who is doing it, and where in a
process it is being performed. The programmers, administrators, users, and consultants all
have something different in mind when they are testing. A dedicated tester can often feel
lost in the competing interpretations. To be effective, however, a tester needs a specific job
description. These five goals of software testing are a very good basis.
Verification
The most misunderstood aspect of testing is its primary objective. If you think it is to find
defects, then you are wrong. Defects will be found by everybody using the software.
Testing is a quality control measure used to verify that a product works as desired.
Testing provides a status report of the actual product in comparison to requirements
(written and implicit). At its simplest this is a pass/fail listing of product features; at detail
it includes confidence numbers and expectations of defect rates throughout the software.
This is important since a tester can hunt bugs forever yet not be able to say whether the
product is fit for release. Having a multitude of defect reports is of little use if there is
no method by which to value them. A corporate policy needs to be in place regarding the
quality of the product. It must state what conditions are required to release the software.
The tester's job is to determine whether the software fulfills those conditions.
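As a very small sketch of the "pass/fail listing of product features" mentioned above, checked against a release condition (the feature names and the criterion itself are illustrative assumptions):

    # Pass/fail status of product features against their requirements (illustrative data).
    feature_results = {
        "login": "pass",
        "search": "pass",
        "export to CSV": "fail",
        "password reset": "pass",
    }

    # A simple release criterion: every feature must pass.
    fit_for_release = all(result == "pass" for result in feature_results.values())
    failures = [name for name, result in feature_results.items() if result != "pass"]
    print(f"Fit for release: {fit_for_release}; failing features: {failures}")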

Priority Coverage
Not everything can be tested. Not even a significant subset of everything can be tested.
Therefore testing needs to assign effort reasonably and prioritize thoroughly. This is by no
means a simple topic. Generally you'd like to have every feature covered with at least one
valid input case. This ensures at least a baseline utility to the software.
Beyond the base line you'll need to test further input permutations, invalid input, and
non-functional requirements. In each case the realistic use of the software should be
considered. Common, high-frequency use scenarios should have more coverage than
infrequently encountered, specialty scenarios. Overall you target a wide breadth of
coverage, with depth in high-use areas as time permits.
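One rough way to picture this allocation, a baseline case for every feature plus extra depth in proportion to how heavily each area is used (the features, usage figures, and allocation rule are all illustrative assumptions):

    # Features with a rough estimate of how often each is used (illustrative numbers).
    usage = {
        "checkout": 0.50,
        "search": 0.30,
        "profile settings": 0.15,
        "data export": 0.05,
    }

    total_test_hours = 40

    # Every feature gets one baseline hour (at least one valid-input case each);
    # the remaining effort is spread in proportion to usage. Rounding is approximate.
    baseline = {feature: 1 for feature in usage}
    remaining = total_test_hours - sum(baseline.values())
    plan = {f: baseline[f] + round(remaining * share) for f, share in usage.items()}

    for feature, hours in sorted(plan.items(), key=lambda kv: -kv[1]):
        print(f"{feature}: {hours} hours")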
Traceable
Exactly what was tested, and how it was tested, are needed as part of an ongoing
development process. In many environments such proof of activity is required as part
of a certification effort, or simply as a means to eliminate duplicate testing effort. This
shouldn't mean extra documentation; it simply means keeping your test plans clear
enough to be reread and understood.
You will have to agree on the documentation methods; team members should not each
have their own. Not all features should be documented the same way, however:
several different methods will likely be employed. Unfortunately there aren't a lot of
commonly agreed principles in this area, so in a way you're kind of on your own.
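A minimal sketch of one way to keep tests traceable, a simple mapping from requirements to the test cases that cover them (the identifiers are made up for illustration):

    # Requirement -> test cases that exercise it (illustrative identifiers).
    traceability = {
        "REQ-001 user can log in": ["TC-101", "TC-102"],
        "REQ-002 password reset email": ["TC-110"],
        "REQ-003 export report as CSV": [],
    }

    # Requirements with no covering test are visible immediately,
    # and duplicated effort is easy to spot.
    uncovered = [req for req, tests in traceability.items() if not tests]
    print("Uncovered requirements:", uncovered)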
Unbiased
Tests must balance the written requirements, real-world technical limitations, and user
expectations. Regardless of the development process being employed, there will be a lot of
unwritten or implicit requirements. It is the job of the tester to keep all such requirements
in mind while testing the software. A tester must also realize that they are not a user of the
software; they are part of the development team. Their personal opinions are but one of
many considerations. Bias in a tester invariably leads to a bias in coverage.
The end user's viewpoint is obviously vital to the success of the software, but it isn't all
that matters. If the needs of the administrators can't be met the software may not be
deployable. If the needs of the support team aren't met, it may be unsupportable. If the
needs of marketing can't be met, it may be unsellable. The programmers also can't be
ignored; every defect has to be prioritized with respect to their time limits and technical
constraints.
Deterministic
The discovery of issues should not be random. Coverage criteria should expose all
defects of a decided nature and priority. Furthermore, when a defect surfaces later, it
should be possible to identify the branch of coverage in which it would have occurred, and
thus to state a definite cost for detecting such defects in future testing.

This goal should be a natural extension to having traceable tests with priority coverage. It
reiterates that the testing team should not be a chaotic black box. Quality control is a well-
structured, repeatable, and predictable process. Having clear insight into the process
allows the business to better gauge costs and to better direct the overall development.
If you need assistance or advice with the monitoring and testing of your web application
then I can help. My company EverSystems provides professional application monitoring
and automated testing services.
