
Testing is a check of the relationship between three parties:

Functional specification (abbreviated to SPEC): how the user of the system wishes
the system to behave.
Software (abbreviated to SOFT): how the system actually behaves.
Test case (abbreviated to TEST): how the test designer thinks the system
SHOULD behave.
"Incident" is the name of a situation when there is a difference between SOFT and
TEST. If you use a tool to report the success of test case, then you would often
see that an incident is marked with red, a non-incident with green.

Depending on the relationship between these three parties, there are different
situations in which an incident may or may not occur:

SOFT == SPEC and TEST == SPEC: then TEST == SOFT, and no incident occurs.
SOFT != SPEC and TEST == SPEC: then TEST != SOFT, and an "incident" occurs.
SOFT == SPEC and TEST != SPEC: then TEST != SOFT, and an "incident" occurs.
SOFT != SPEC and TEST != SPEC and SOFT == TEST: then no incident occurs.
SOFT != SPEC and TEST != SPEC and SOFT != TEST: then an "incident" occurs.
Looking at these five situations, we see that:

When an "incident" occurs, it can be that the software is erroneous, the test case
is erroneous, or both
When there is no "incident", it can still be that both the test case and the
software are erroneous, or both of them are "good" (they follow the specification).
So a green does not guarantee that your software is good, a red incident does not
guarantee that your software is bad (here I use "good" and "bad" mean that the SOFT
follows the SPEC or not)
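
To make the table above concrete, here is a minimal Python sketch (all values are
hypothetical, purely for illustration): SPEC, SOFT, and TEST are each reduced to
the single output that the specification demands, the software produces, and the
test expects for the same input.

# Each tuple gives the output demanded by SPEC, produced by SOFT,
# and expected by TEST, matching the five situations above.
situations = [
    (4, 4, 4, "SOFT == SPEC, TEST == SPEC"),
    (4, 5, 4, "SOFT != SPEC, TEST == SPEC"),
    (4, 4, 5, "SOFT == SPEC, TEST != SPEC"),
    (4, 5, 5, "SOFT != SPEC, TEST != SPEC, SOFT == TEST"),
    (4, 5, 6, "SOFT != SPEC, TEST != SPEC, SOFT != TEST"),
]

for spec, soft, test, description in situations:
    incident = soft != test   # red when SOFT and TEST disagree
    good = soft == spec       # "good" means SOFT follows SPEC
    print(f"{description}: {'red' if incident else 'green'}, "
          f"software is {'good' if good else 'bad'}")

Running this shows the fourth situation as green even though the software is bad,
and the third as red even though the software is good, which is exactly the false
confidence described above.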
-----------------------------------------------------------------------------------
Second Definition
-------------------
An incident in Software Testing is basically any situation where the system
exhibits questionable behaviour, but often we refer to an incident as a defect only
when the root cause is some problem in the item we are testing.

Other causes of incidents include mis-configuration or failure of the test
environment, corrupted test data, bad tests, invalid expected results, and tester
mistakes.

The term is meant to indicate that suspicious behaviour is not necessarily
a true defect.

When an incident is initially recognized, an incident report should be generated
with a set of supporting information, such as:

Identification of the incident, including a unique number, heading, trigger event,
proposed fix (if possible), and documentation (e.g., screen dumps).
Identification of the environment, including hardware, software, vendor, the item in
which the incident was seen, and a fix description, if any.
Identification of the people involved, including originator and investigator.
Related time information, for example, system time, CPU time, and wall time as
appropriate.
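
As an illustrative sketch only (the field names below are hypothetical, not taken
from any standard or tool), this supporting information could be captured in a
Python record like the following:

from dataclasses import dataclass, field
import datetime

@dataclass
class IncidentReport:
    # Identification of the incident
    number: int                       # unique number
    heading: str
    trigger_event: str
    proposed_fix: str = ""            # if possible
    documentation: list = field(default_factory=list)  # e.g., screen dumps
    # Identification of the environment
    hardware: str = ""
    software: str = ""
    vendor: str = ""
    item_under_test: str = ""
    fix_description: str = ""         # if any
    # Identification of the people involved
    originator: str = ""
    investigator: str = ""
    # Related time information
    system_time: datetime.datetime = field(default_factory=datetime.datetime.now)
    cpu_time_seconds: float = 0.0
    wall_time_seconds: float = 0.0

# Example usage (hypothetical values):
report = IncidentReport(
    number=42,
    heading="Login button unresponsive",
    trigger_event="Clicking 'Login' with valid credentials",
    originator="Tester A",
)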
