Software testing includes, but is not limited to, the process of executing a program or application with the intent of finding software bugs.
Over its existence, computer software has continued to grow in complexity and size.
Every software product has a target audience. For example, video game software has a completely different audience from banking software. Therefore, when an organization
develops or otherwise invests in a software product, it presumably must assess whether
the software product will be acceptable to its end users, its target audience, its purchasers,
and other stakeholders. Software testing is the process of attempting to make this
assessment.
Scope
Software faults occur through the following process. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.[3] Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of such environmental changes include the software being run on a new hardware platform, alterations in source data, or interaction with different software.[3]
• Verification: Have we built the software right (i.e., does it match the
specification)?
• Validation: Have we built the right software (i.e., is this what the
customer wants)?
Software testing can be done by software testers. Until the 1950s the term software tester was used generally, but later testing was also seen as a separate profession. Reflecting the different periods and goals in software testing,[6] different roles have been established: test lead/manager, tester, test designer, test automator/automation developer, and test administrator.
Testing methods
Software testing methods are traditionally divided into black box testing and
white box testing. These two approaches are used to describe the point of
view that a test engineer takes when designing test cases.
Black box testing treats the software as a black box: test cases are designed from the external specification, without any knowledge of the internal implementation. White box testing, by contrast, is when the tester has access to the internal data structures, code, and algorithms. White box testing methods include creating tests to satisfy some code coverage criteria. For example, the test designer can create tests to cause all statements in the program to be executed at least once. Other examples of white box testing are mutation testing and fault injection methods. White box testing includes all static testing.
In recent years the term grey box testing has come into common usage. This involves having access to internal data structures and algorithms for the purpose of designing the test cases, but testing at the user, or black-box, level. Grey box testing may also include reverse engineering to determine, for instance, boundary values.
Special methods exist to test non-functional aspects of software.
Performance testing checks to see if the software can handle large quantities
of data or users. Usability testing is needed to check if the user interface is
easy to use and understand. Security testing is essential for software which
processes confidential data and to prevent system intrusion by hackers.
• Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[19]
• Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[citation needed]
• System testing tests a completely integrated system to verify that it meets its requirements.[20]
• System integration testing verifies that a system is integrated to any external or third-party systems defined in the system requirements.[citation needed]
Before shipping the final version of software, alpha and beta testing are
often done additionally:
• Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[citation needed]
• Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.[citation needed]
The test script is the combination of a test case, test procedure, and test data.
Initially the term was derived from the product of work created by
automated regression test tools. Today, test scripts can be manual,
automated, or a combination of both.
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases, and it always contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
A test specification is called a test plan. The developers are well aware of what test plans will be executed, and this information is made available to them. This makes the developers more cautious when developing their code and ensures that their code is not subjected to any surprise test case or test plan.
The software, tools, samples of data input and output, and configurations are
all referred to collectively as a test harness.
• Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
• Test planning: Test strategy, test plan, testbed creation. A lot of activities will be carried out during testing, so a plan is needed.
• Test development: Test procedures, test scenarios, test cases, and test scripts to use in testing software.
• Test execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
• Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
• Retesting the defects: Not all errors or defects reported must be fixed by a software development team. Some may be caused by errors in configuring the test software to match the development or production environment. Some defects can be handled by a workaround in the production environment. Others might be deferred to future releases of the software, or the deficiency might be accepted by the business user. There are yet other defects that may be rejected by the development team (of course, with due reason) if they deem it inappropriate.
An SQA plan can take a number of paths, testing for different capabilities and performing different analyses, depending on the demands of the project, the users, and the software itself. But any rigorous SQA plan carried out scrupulously by seasoned QA professionals will confer certain benefits. Without proper testing, it is virtually impossible to know how new users will respond to an application's functions, options, and usability features. Unbiased software quality assurance (SQA) specialists come to a project fresh, with a clear outlook, and so serve as the first line of defense against unintuitive user interfaces and broken application functionality. A quality application is far more likely to result in enhanced customer satisfaction.
Validation Testing
Validation testing is the act of entering data that the tester knows to be
erroneous into an application. For instance, typing "Hello" into an edit box
that is expecting to receive a numeric entry.
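The "Hello" scenario can be sketched in a few lines of Python. The `parse_age` function is a hypothetical stand-in for the handler behind a numeric edit box:

```python
def parse_age(text):
    """Hypothetical handler for a numeric edit box: returns the integer
    value, or None when the entry is not a valid number."""
    try:
        return int(text)
    except ValueError:
        return None

# Validation testing: deliberately enter erroneous data...
assert parse_age("Hello") is None   # non-numeric input is rejected
# ...and confirm that valid data still passes.
assert parse_age("42") == 42
```

A robust application rejects the bad entry gracefully rather than crashing on it.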
Data Comparison
Stress testing
A stress test is when the software is used as heavily as possible for a period
of time to see whether it copes with high levels of load. Often used for
server software that will have multiple users connected to it simultaneously.
Also known as Destruction testing.
Usability testing
Sometimes getting users who are unfamiliar with the software to try it for a
while and offer feedback to the developers about what they found difficult to
do is the best way of making improvements to a user interface.
Static testing is a form of software testing where the software isn't actually executed. This is in contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It consists primarily of syntax checking of the code and manual reading of the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections, and walkthroughs are also used.
From the black box testing point of view, static testing involves review of
requirements or specifications. This is done with an eye toward
completeness or appropriateness for the task at hand. This is the verification
portion of Verification and Validation.
Bugs discovered at this stage of development are less expensive to fix than
later in the development cycle.
Dynamic testing involves working with the software, giving input values and checking whether the output is as expected. These are the validation activities. Unit tests, integration tests, system tests, and acceptance tests are a few of the dynamic testing methodologies.
Equivalence partitioning is a test design technique with two goals:
1. To reduce the number of test cases to a necessary minimum.
2. To select the right test cases to cover all possible scenarios.
Although in rare cases equivalence partitioning is also applied to the outputs of a software component, typically it is applied to the inputs of a tested component. The equivalence partitions are usually derived from the specification of the component's behaviour. An input has certain ranges which are valid and other ranges which are invalid. This is best explained by the following example of a function that has a parameter "month" of a date. The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13.
 ... -2 -1 0 | 1 .............. 12 | 13 14 15 ...
-------------|--------------------|--------------
  invalid    |       valid        |   invalid
 partition 1 |     partition      |  partition 2
The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behaviour of the program for the related partition. In other words, it is sufficient to select one test case out of each partition to check the behaviour of the program. Using more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably.
An additional effect of applying this technique is that you also find the so-called "dirty" test cases. An inexperienced tester may be tempted to use as test cases the input data 1 to 12 for the month and forget to select some out of the invalid partitions. This would lead to a huge number of unnecessary test cases on the one hand, and a lack of test cases for the dirty ranges on the other.
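The month example can be sketched directly. Here one representative value is chosen from each of the three partitions; `is_valid_month` is a hypothetical implementation of the component under test:

```python
def is_valid_month(month):
    """Component under test: a month number is valid if it lies in 1..12."""
    return 1 <= month <= 12

# One representative test case per equivalence partition is sufficient:
cases = [
    (-3, False),  # invalid partition 1: month <= 0
    (7,  True),   # valid partition: 1..12
    (15, False),  # invalid partition 2: month >= 13
]
for value, expected in cases:
    assert is_valid_month(value) == expected
```

Three test cases cover all three partitions; adding, say, months 2 through 11 would find no new faults.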
Boundary value analysis
Testing experience has shown that the boundaries of input ranges to a software component are especially liable to defects. A programmer who implements, for example, the range 1 to 12 at an input standing for the months January to December of a date will have in his code a line checking for this range. This may look like:
if (month >= 1 && month <= 12)
But a common programming error may check a wrong range, e.g. starting the range at 0 by writing:
if (month >= 0 && month <= 12)
For more complex range checks in a program this may be a problem which
is not so easily spotted as in the above simple example.
To set up boundary value analysis test cases you first have to determine
which boundaries you have at the interface of a software component. This
has to be done by applying the equivalence partitioning technique. Boundary
value analysis and equivalence partitioning are inevitably linked together.
For the example of the month in a date you would have the following
partitions:
 ... -2 -1 0 | 1 .............. 12 | 13 14 15 ...
-------------|--------------------|--------------
  invalid    |       valid        |   invalid
 partition 1 |     partition      |  partition 2
Applying boundary value analysis, you now have to select a test case on each side of the boundary between two partitions. In the above example this would be 0 and 1 for the lower boundary, and 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "dirty" test case. A "clean" test case should give you a valid operation result of your program. A "dirty" test case should lead to a correct and specified input error treatment, such as the limiting of values or the usage of a substitute value; in the case of a program with a user interface, it has to lead to a warning and a request to enter correct data. Boundary value analysis can thus yield six test cases: n-1, n, and n+1 for the lower limit, and n-1, n, and n+1 for the upper limit.
The great advantage of fuzz testing is that the test design is extremely simple, and free of preconceptions about system behavior.
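A minimal fuzzing sketch makes this concreteness visible: random inputs are thrown at the component, and only a crude property is checked. The `is_valid_month` component here is a hypothetical stand-in:

```python
import random

def is_valid_month(month):
    """Hypothetical component under test."""
    return 1 <= month <= 12

# Fuzz testing: feed many random inputs; assert only that the component
# never crashes and always returns a boolean.
random.seed(0)  # make the run reproducible
for _ in range(1000):
    value = random.randint(-10**9, 10**9)
    result = is_valid_month(value)
    assert isinstance(result, bool)
```

No knowledge of the specification is needed to write this test, which is exactly the point.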
To illustrate mutation testing, consider the following code fragment:
if (a && b)
c = 1;
else
c = 0;
The condition mutation operator would replace '&&' with '||' and produce the
following mutant:
if (a || b)
c = 1;
else
c = 0;
Now, for the test to kill this mutant, the following condition should be met:
• Test input data should cause different program states for the mutant and the original program. For example, a test with a=1 and b=0 would do this.
• The value of 'c' should be propagated to the program's output and checked by the test.
Weak mutation testing (or weak mutation coverage) requires that only the first condition is satisfied. Strong mutation testing requires that both conditions are satisfied. Strong mutation is more powerful, since it ensures that the test suite can really catch the problems. Weak mutation is closely related to code coverage methods. It requires much less computing power to ensure that the test suite satisfies weak mutation testing than strong mutation testing.
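The two kill conditions can be checked with a Python transcription of the fragment and its mutant (the function names are illustrative, not from the source):

```python
def original(a, b):
    # if (a && b) c = 1; else c = 0;
    return 1 if (a and b) else 0

def mutant(a, b):
    # Condition mutation: '&&' replaced by '||'.
    return 1 if (a or b) else 0

# Condition 1 (weak mutation): the input a=1, b=0 drives the mutant
# into a different program state than the original.
assert original(1, 0) != mutant(1, 0)

# Condition 2 (strong mutation): the value of 'c' must reach the output
# and be checked. This assertion passes on the original program, but the
# same expectation would fail on the mutant, killing it.
assert original(1, 0) == 0
assert mutant(1, 0) == 1
```

A test suite that never asserts on `c` for such an input would satisfy weak but not strong mutation coverage.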
Regression Testing
A sanity test or sanity check is a basic test to quickly evaluate the validity
of a claim or calculation. In mathematics, for example, when multiplying by
three or nine, verifying that the sum of the digits of the result is a multiple of
3 or 9 respectively is a sanity test.
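The multiply-by-nine sanity check from the paragraph above can be written as a few lines of Python:

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(abs(n)))

# Sanity check: any multiple of 9 has a digit sum divisible by 9.
product = 9 * 1234
assert digit_sum(product) % 9 == 0
# A deliberately corrupted result fails the check:
assert digit_sum(product + 1) % 9 != 0
```

The check is cheap and cannot prove the multiplication correct, but it quickly flags many wrong answers, which is what a sanity test is for.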