
Practical Road Map For Software Testing

Ms. Ranita Ganguly


Faculty Member, ICFAI National College - Garia, Kolkata, West Bengal, India
E-mail: ranita_g80@rediffmail.com

Software Testing is the traditional method of assessing software quality, and its prime
benefit is the improved quality of the delivered software.
This paper explains the different steps followed in Software Testing, how bugs are
tracked and logged according to their Severity and Priority, the latest testing
methodology and the combinations of testing techniques applied to remove defects from the
software, and the different testing documents and formats required.

Software engineers build a tangible product from an abstract concept, and a series of
test cases is then developed with the intention of "breaking" the software that has been built.
Testing is therefore considered destructive rather than constructive. However, the objective of
testing is constructive.

1. Introduction:
Software Testing is a critical element of software quality assurance and represents the
ultimate review of specification, design and code generation.
Testing is a critical activity requiring ample time and careful planning.
The goal of testing is to discover defects in a program, which means finding and fixing errors
introduced in the requirement analysis, design and coding phases of software development. The
primary goal is bug prevention and the secondary goal is bug discovery. During software
development, errors are introduced through logical mistakes, careless or improper
communication, and the need to rush through the whole development process.
Testing is, therefore, conducted to uncover and reduce those errors.

The basic rationale behind software testing is to execute the software in a
controlled manner to determine whether it performs to the customer's
satisfaction. The main purpose of testing is to discover defects in a program. When
planning tests, reporting the status of defects and recommending actions, it is also
important to understand the relative criticality of each defect.
2. What are Bugs? What is Debugging?
Bugs are errors found during the execution of a program, introduced through logical or
syntactical faults. Bugs can be software bugs or hardware bugs.
Some bugs may be deferred, i.e. postponed for fixing in a subsequent release of the
software. These bugs are called 'deferred bugs'.
Debugging is the process of analyzing, locating and fixing bugs when the software does
not behave as expected. Debugging is an activity that supports testing but cannot
replace it. No amount of testing is sufficient to guarantee a hundred percent
error-free software.
2.1. Severity and Priority of Bugs:
Severity indicates how serious the bug is and reflects its impact on the product and
customers of the product. It can be critical, major, minor, cosmetic or suggestion.
Critical severity: The bug is of critical severity if it causes system crash, data loss or
data corruption.
Major severity: The bug is of major severity if it causes operational errors, wrong
results and loss of functionality.
Minor severity: The bug is of minor severity if it causes defect in user interface layout or
spelling mistakes.
Priority indicates how important it is to fix the bug and when it should be fixed.
Immediate priority: The bug blocks further testing and is highly visible.
At-the-earliest priority: The bug must be fixed before the product is released.
Normal priority: The bug should be fixed if time permits.
Later priority: The bug may be fixed, but the product can be released as it is.
Example: Classification of bugs by their severity and priority.

Bug Type                                                         Severity   Priority
Data corruption bug that happens very rarely.                    Critical   Normal
A release of software for testing that crashes as soon as
it is started.                                                   Critical   Immediate
A button that should be moved a little further down the page.    Minor      Later
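The severity/priority scheme above can be captured in a small data model; a minimal sketch in Python, in which the enum values mirror the paper's categories but the `Bug` class and field names are illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "Critical"
    MAJOR = "Major"
    MINOR = "Minor"

class Priority(Enum):
    IMMEDIATE = "Immediate"
    AT_THE_EARLIEST = "At the earliest"
    NORMAL = "Normal"
    LATER = "Later"

@dataclass
class Bug:
    description: str
    severity: Severity
    priority: Priority

# The three example bugs from the table above.
bugs = [
    Bug("Data corruption that happens very rarely", Severity.CRITICAL, Priority.NORMAL),
    Bug("Build crashes as soon as it is started", Severity.CRITICAL, Priority.IMMEDIATE),
    Bug("Button should be moved further down the page", Severity.MINOR, Priority.LATER),
]

# Severity and priority are independent axes: a critical bug need not be immediate.
critical = [b for b in bugs if b.severity is Severity.CRITICAL]
```

Keeping the two axes as separate fields makes the point of the table concrete: filtering by severity and sorting by priority are different operations.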

3. Steps for testing software:


Smoke Testing / Sanity Testing / Gorilla Testing / Qualification Testing → Ad-hoc
Testing → Write test cases from the Functional Requirement Specification (FRS) →
Execute test cases manually (Module Testing) → Log/report bugs through the
bug tracking life cycle → Regression testing to ensure the software is stable
→ Test Automation → Integration Testing → System Testing → User Acceptance Testing
In smoke testing, each time the test team receives a new version of the program, an initial
test is performed to determine whether the build is stable enough to be tested. It is a short
test hitting all the major pieces of functionality, i.e. "a shot in the dark", to
determine whether the software performs well enough to be accepted for a major testing
effort.
Ad-hoc testing is a creative, informal type of test that is not based on formal test cases
and therefore need not be documented by the testing team. Tests are random and are based on
error-guessing ability and knowledge of the business process. Ad-hoc tests can be applied at
the initial and later stages of testing; a program may pass a test the first time and yet
fail the second time.
Regression testing is done when a change is made to the source code or a new module is
added. A set of predefined test cases is re-run to determine whether any other
portion of the software has been affected.
In integration testing, the combined parts of the application are tested to determine whether
they function correctly together. The parts can be units, modules and their interfaces,
individual applications, or clients and servers.
3.1 The Bug Tracking Life Cycle:
This helps in logging bugs and has several phases.
Phase I: The tester finds a bug, and the new bug is entered in a defect tracking tool
such as Bugzilla, MS Excel or MS Word.
Phase II: The project leader analyses the bug, assigns a priority to it and then
passes it to the concerned developer.
Phase III: The developer fixes the bug, changes its status to 'Fixed' and records the fix
details. The new version, with all its fixes, is then released to the Quality Control team.
Phase IV: The Quality Control team or the tester performs a regression test and checks the
status of the bugs fixed by the developers.
Phase V: Fixed bugs are closed; if a defect reoccurs, the issue is reopened and goes
back to the developers.
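The five phases above amount to a small state machine; a minimal sketch in Python, assuming the state names NEW, ASSIGNED, FIXED, REOPENED and CLOSED (the transition table is an illustrative reading of the phases, not a formal definition from the paper):

```python
# Allowed transitions in the bug tracking life cycle described above.
TRANSITIONS = {
    "NEW": {"ASSIGNED"},              # Phase II: project leader assigns the bug
    "ASSIGNED": {"FIXED"},            # Phase III: developer fixes it
    "FIXED": {"CLOSED", "REOPENED"},  # Phases IV-V: regression test verdict
    "REOPENED": {"ASSIGNED"},         # back to the developers
    "CLOSED": set(),
}

def advance(state: str, new_state: str) -> str:
    """Move a bug to new_state, rejecting transitions the life cycle forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A bug that is fixed, reoccurs during regression testing, and is fixed again.
state = "NEW"
for step in ("ASSIGNED", "FIXED", "REOPENED", "ASSIGNED", "FIXED", "CLOSED"):
    state = advance(state, step)
```

Encoding the life cycle this way makes Phase V explicit: a closed bug has no outgoing transitions, while a reopened one must go through the developer again.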
3.2. Defect report format:
The fields of defect report format are Serial No., Defect found, Type of defect,
Classification of defect (Critical / Minor/ Major), Status of the defect (New / Removed),
Time of removal of defect, Stage at which defect was injected, Stage at which defect was
removed.
3.3. Test Plan format:
The test plan contains the details of the testing process and is prepared during the project
planning stage. Planning for testing is necessary to prevent the actual testing process
from consuming the majority of the overall testing resources. The plan contains all the
details of the required resources, the testing approaches to be followed, the testing
methodologies and the test cases to be designed. The fields of the test plan format are as follows:
i. Project Name
ii. Estimated start date for testing
iii. Estimated end date for testing
iv. Actual start date for testing
v. Actual end date for testing
vi. Estimated effort in person-months or person-hours
vii. Test set up including the hardware and software environment and other
peripherals required, any special tool or equipment required.
viii. Test personnel and their respective responsibility.
ix. Types of testing to be carried out including functional testing, structural testing, α-
testing, β –testing, Gorilla testing, usability testing, performance testing, stress
testing etc.
x. For each testing technique, the test cases have to be specified.
xi. Testing tools to be used are to be specified.
xii. Test schedule for each type of testing is prepared.
xiii. Defect reporting format are specified.
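The fields listed above can be held in a simple structure so that a plan can be checked for completeness before testing starts; a minimal sketch, where the field names are condensed from the list above and the helper function is illustrative:

```python
# Required fields of a test plan, condensed from the list above.
REQUIRED_FIELDS = [
    "project_name", "estimated_start", "estimated_end",
    "estimated_effort", "test_setup", "test_personnel",
    "testing_types", "test_schedule", "defect_report_format",
]

def missing_fields(plan: dict) -> list:
    """Return the required fields the plan has not yet filled in."""
    return [f for f in REQUIRED_FIELDS if not plan.get(f)]

# A partially filled plan (illustrative data).
plan = {
    "project_name": "Payroll system",
    "estimated_start": "2007-01-10",
    "estimated_end": "2007-02-28",
    "testing_types": ["functional", "stress", "usability"],
}
gaps = missing_fields(plan)
```

Running the check during the project planning stage turns the format into an actionable checklist rather than a template to fill in from memory.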

3.4. Testability:
Testability is a measure of how easily a computer program can be tested. There are several
metrics for measuring testability; it covers operability, observability, controllability,
simplicity and understandability. In the system development life cycle, requirements are
translated into specifications, from which the code is developed. Once the construction
process is over, the product goes through various stages of testing before it is finally
released. 'Traceability' is the common thread that ensures that verification and validation
of the product are complete.
3.4.1. Traceability matrix:
Requirement tracing is the process of documenting the links between the user
requirements for the system being built and the work products that implement them.
The resulting document is called the 'traceability matrix'. It helps in the areas of
requirement management, change management and defect management. The traceability
matrix is created even before any test cases are written, because it is a complete list
of what has to be tested.
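A traceability matrix is essentially a mapping from requirements to the test cases that cover them; a minimal sketch in Python, where the requirement and test case IDs are invented for illustration:

```python
# Requirement ID -> test case IDs that cover it (illustrative data).
trace = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # not yet covered by any test case
}

# An uncovered requirement is a gap in verification: the matrix
# reveals it before a single test case has been executed.
uncovered = sorted(req for req, tcs in trace.items() if not tcs)
```

This is why the matrix is useful before test cases are written: empty rows show exactly what still has to be tested.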
3.5. Test Automation:
Automation is most useful for regression testing. It should be done only once the software
build is stable; otherwise, updating the scripts becomes expensive. Automation pays off for
long-duration projects or projects with large amounts of data. For small projects, the time
needed to learn and implement the testing tools may not be worth it, whereas for large
projects or ongoing long-term projects it usually is. A common type of automated tool
is the record-and-playback type, which lets the tester click through all
combinations of menu choices, dialog box choices and buttons in a graphical user interface,
'records' those actions, and logs the results. The benefits of automated
testing are that it is fast, reliable, repeatable, programmable, comprehensive and reusable.
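An automated regression suite can be as simple as a set of scripted checks that is re-run in full after every change; a minimal sketch using Python's `unittest`, where the `discount` function stands in for the application under test and is invented for illustration:

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Function under test (illustrative): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    """Scripted checks, re-run automatically after every change."""

    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

# Run the whole suite, exactly as an automated nightly job would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite is programmable and repeatable, a source change that breaks any previously passing behaviour is caught on the very next run, which is the whole point of automated regression testing.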
4. Some important testing techniques:
I. Web-Specific Testing:
Owing to the more complex user interfaces, technical issues and compatibility combinations
involved, the testing effort required for a web application is considerably larger than
that for an application without a web interface. Web testing includes not only the tests
defined for applications without a web interface but also several web-specific tests, such
as compatibility testing.
Compatibility testing is a technique that determines whether an application performs as
expected under various combinations of hardware and software. This includes testing
different browsers such as Netscape Navigator and Internet Explorer and their various
releases; different operating systems such as Windows 95, Windows 98, Windows 2000,
Windows NT, Unix and Linux; and different monitor settings such as colour resolution,
font settings and display settings.
Compatibility testing can also include the testing of different hardware configurations
like PC, laptop and other hand held devices, different Internet connections like proxy
server, firewalls, modems etc, different desktop items like display, sound and video.
II. Security Testing:
Security testing determines the ability of the application to resist unauthorized entry
to, or modification of, the system. The security tester plays the role of an individual
attempting to penetrate or attack the system, and verifies whether proper protection
mechanisms are built into the system. Security testing can be carried out in the following
ways.
a. Audit: Ensures that all the products installed in the site are secured when
checked against the known vulnerabilities.
b. Attacking: Attacking the server through the vulnerabilities in the network
infrastructure.
c. Hacking: Hacking directly through the website and HTML code.
d. Cookies attack: Finding patterns in the cookies and attempting to reconstruct the
algorithm that generates them.
III. Functional testing for web applications:
This includes all types of tests listed in general testing in addition to the following.
Link testing: Verifies that a link takes the user to the expected destination without
failure, that no link is broken, and that all parts of the site are connected. The links
to be tested include embedded links (underlined text indicating that more material is
available), structural links (links to a set of pages subordinate to the current page,
and reference links) and associative links (additional links that may be of some
interest to the users).
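A basic link test can be automated by extracting every anchor from a page and checking that each target exists; a minimal sketch using only the standard library, where the network fetch is replaced by an offline stub (`known_pages`) so the example is self-contained:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# In a real run each link would be fetched (e.g. with urllib.request)
# and its HTTP status checked; here known_pages stands in for the site.
known_pages = {"/home", "/contact"}

page = '<a href="/home">Home</a> <a href="/contact">Contact</a> <a href="/missing">?</a>'
parser = LinkExtractor()
parser.feed(page)
broken = [link for link in parser.links if link not in known_pages]
```

The same extractor works for embedded, structural and associative links alike, since all three are ultimately anchors whose targets must resolve.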
IV. Static Testing:
Here, the main objective is to identify problems, not to solve them. Static testing
includes verification or review of documentation, technology and code; it is a testing
technique that does not execute the code. It is performed at the beginning of the
development life cycle, to detect and remove defects early in the development and test
cycle and thus prevent the migration of defects to the later phases of development. It
also improves communication within the project development team. There are basically two
forms of static testing - inspection and walkthrough.
V. Dynamic Testing:
It includes validation of the system or a component, during or at the end of the
development process, to determine whether it satisfies the specified requirements.
It is testing that executes the code, and the bulk of the testing effort is dynamic
testing.
Generally, verification testing finds 20% of the total bugs while validation testing
finds 80%, in line with the 'Pareto' principle.
VI. Concurrency Testing:
This is a testing technique followed when there are multiple clients with the same
server. It is performed to ensure that the server can properly handle simultaneous
requests from the clients.
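A concurrency test fires many simultaneous client requests at the same server-side routine and then checks that shared state is still consistent; a minimal in-process sketch using threads, where the `Server` class is invented for illustration:

```python
import threading

class Server:
    """Toy server-side resource shared by all clients."""
    def __init__(self):
        self._lock = threading.Lock()
        self.handled = 0

    def handle_request(self):
        # The lock makes the update safe under simultaneous requests.
        with self._lock:
            self.handled += 1

server = Server()
# Ten "clients", each sending 1000 requests at the same time.
clients = [
    threading.Thread(target=lambda: [server.handle_request() for _ in range(1000)])
    for _ in range(10)
]
for t in clients:
    t.start()
for t in clients:
    t.join()
# Every one of the 10 x 1000 simultaneous requests must be accounted for.
```

If the lock were removed, lost updates would make the final count fall short, which is exactly the class of defect concurrency testing is designed to expose.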
VII. Parallel Testing:
This is a testing technique performed by running both the old and the new system on the
same data and comparing their outputs.
VIII. Client Server Testing:
The application function test, server test, database test and network communication
test are commonly conducted on client server architecture.
In general, testing of client server software occurs in three different levels.
In the first level, individual client applications are tested in 'disconnected mode',
ignoring the operation of the server and the underlying network.

In the second level, the client software and server applications are tested, but the
network operations are not.
In the third level, the complete client server architecture is tested.
IX. Load Testing:
It verifies that running a large number of concurrent clients does not break the
connection with the server and/or the client software. The system is subjected to a
statistically representative load, which is then steadily increased until the
system fails. Example: load testing is used to discover deadlocks and problems with queries.
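The "increase the load steadily until the system fails" idea can be sketched directly; in this illustrative Python sketch a toy capacity constant stands in for a real server, and the loop finds the breaking point:

```python
CAPACITY = 250  # toy limit: requests per second the system can serve (illustrative)

def system_handles(load: int) -> bool:
    """Stand-in for firing `load` concurrent requests at the real system."""
    return load <= CAPACITY

def find_breaking_point(start: int = 50, step: int = 50) -> int:
    """Steadily increase the load until the system fails; return the failing load."""
    load = start
    while system_handles(load):
        load += step
    return load

breaking_load = find_breaking_point()
```

In a real load test, `system_handles` would spawn the concurrent clients and check response times and error rates; the ramp-up loop stays the same.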
X. Stress Testing:
It subjects a system to an unreasonable amount of work while denying it resources such
as RAM, disk space etc. It checks the system for robustness, to determine whether it can
continue to operate in adverse situations that demand resources in abnormal frequency,
quantity or volume.
XI. Gray Box testing:
It is partially white box and partially black box testing, which means it is related to
coding as well as specification.
5. Testing Life Cycle: The testing life cycle has several stages.
1. Corresponding to the test strategy, a test strategy review is done.
2. Corresponding to the test plan and specification, a test plan and specification
review is done.
3. Corresponding to test execution, a quality review is done.
4. Corresponding to defect reporting and closure, a defect reporting review is done.
5. Corresponding to the test results received, a test result review is done.
6. Corresponding to the final inspection and review, a final inspection and
quality review is done.
7. Finally, we have product delivery.
6. V-Model, An Effective Test Methodology:
This model is costly to implement; its phases are as follows:
Phase I: This is the requirement gathering phase; correspondingly, the user
acceptance test plan is prepared.
Phase II: When the functional specification is prepared, the system test plan is
designed alongside it.
Phase III: This is the design phase, and corresponding to it the integration test plan
is given shape.
Phase IV: This is the program specification stage, where the high-level design and
low-level design are made; correspondingly, the code and the unit test report are
generated.
The test plans for all later phases are thus prepared in advance, and the tests defined
for each phase are then executed accordingly until project implementation.

[Figure: the V-Model, with the verification activities down the left arm and the
validation activities up the right arm.]

7. Test Cases Design:


Test cases are systematically designed to uncover different classes of errors with a
minimum amount of time and effort. Test cases are devised as sets of data with the intent
of determining whether the system processes them correctly.

Test cases can be defined as "a set of input parameters for which the software will be
tested". The test cases are selected, the program is executed and the results are
compared with the expected results. Test cases are designed against two criteria:
reliability and validity. A set of test cases is considered reliable if it detects all
errors, and valid if at least one test case reveals an error.

Select test cases → Execute test cases → Analyze test results

Test Case Design Format:

The fields of the test case design format are: Test Case No. / Test No.,
Pre-condition / Pre-requisite, Input / Test Data, Action / Test Action,
Expected Results, Actual Results, and Comments / Remarks.
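A test case table in this format maps naturally onto a data-driven runner; a minimal sketch in which each row carries the input, the action and the expected result, and the `to_upper` action is invented for illustration:

```python
def to_upper(text: str) -> str:
    """Action under test (illustrative)."""
    return text.upper()

# Each row mirrors the test case design format above:
# (test case no., pre-condition, input data, action, expected result).
test_cases = [
    ("TC-1", "none", "abc", to_upper, "ABC"),
    ("TC-2", "none", "", to_upper, ""),
    ("TC-3", "none", "MiXeD", to_upper, "MIXED"),
]

results = []
for case_no, _pre, data, action, expected in test_cases:
    actual = action(data)
    # The 'Actual Results' and 'Comments/Remark' columns, filled in by execution.
    results.append((case_no, actual, "pass" if actual == expected else "fail"))
```

Executing the table row by row fills in the "Actual Results" column automatically and leaves the pass/fail verdict as the remark.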

We develop test cases using the Black Box testing and White Box testing techniques.
7.1 Black box testing / Functional testing / Behavioral testing:
The main purpose is to ensure that the functional requirements and specifications of the
clients are fulfilled. The test cases are generated to evaluate the correctness of the system.
It is “testing by looking at the requirements to develop test cases”.
Advantages:
• It simulates actual system users.
• It makes no assumption on system structure.
Disadvantages:
• It has the potential of missing logical errors in the software.
• It involves a possibility of redundant testing.
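Black-box test cases are derived from the specification alone; a common way to choose them is equivalence partitioning plus boundary values, sketched below for a hypothetical rule "ages 18 to 60 are accepted" (the rule and the function are invented for illustration):

```python
def is_eligible(age: int) -> bool:
    """Specification (illustrative): ages 18 to 60 inclusive are accepted."""
    return 18 <= age <= 60

# One representative per equivalence class, plus the boundary values,
# chosen from the specification without looking at the code.
black_box_cases = {
    10: False,  # below the valid range
    18: True,   # lower boundary
    35: True,   # inside the valid range
    60: True,   # upper boundary
    75: False,  # above the valid range
}

failures = {age for age, want in black_box_cases.items() if is_eligible(age) != want}
```

Note that the cases simulate actual users and make no assumption about the system's structure, which is exactly the strength and the weakness listed above: a logical error on a path no partition exercises would be missed.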
7.2 White box testing / Structural testing / Glass box testing:
The main purpose of this technique is to ensure that the technical and housekeeping
functions of the system work. The test cases are designed to verify that the system
structure is sound and can perform the intended tasks. It is "testing by looking at
the structure of the program".
Advantages:
• We can test the structural logic of the software.
• Every statement is tested thoroughly.
Disadvantages:
• It does not ensure that the user requirements are fulfilled.
• The tests may not be applicable in real world situation.
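White-box test cases are chosen by looking at the program's structure, e.g. so that every branch executes at least once; a minimal sketch in which an illustrative function is instrumented to record which branches ran:

```python
branches_hit = set()

def classify(n: int) -> str:
    """Function under test (illustrative), instrumented to record branches."""
    if n < 0:
        branches_hit.add("negative")
        return "negative"
    if n == 0:
        branches_hit.add("zero")
        return "zero"
    branches_hit.add("positive")
    return "positive"

# White-box test cases: one input per branch, chosen by reading the code.
for value in (-5, 0, 7):
    classify(value)

all_branches_covered = branches_hit == {"negative", "zero", "positive"}
```

This is how every statement gets tested thoroughly; the flip side, as noted above, is that full branch coverage still says nothing about whether the user requirements are fulfilled.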

8. Types of testing documents:
There are generally three types of test documents:
a. Plan Documents like Project plan, Quality plan and SCM plan.
b. Input Output Documents like Requirement Specification, High Level Design,
Low Level Design.
c. Deliverables like User Manuals, Installation and customization guide.
9. Testing as a validation strategy:
Validation answers the question "Are we building the right product?" (as opposed to
verification, which asks "Are we building the product right?"). The validation strategy
includes Unit Testing, Integration Testing, System Testing, Performance Testing,
α-testing, User Acceptance Testing, Installation Testing and β-testing.
10. International Organization For Standardization (ISO):
ISO is responsible for ensuring global standard of products and quality services.
ISO has got a series of standards which are as follows.
ISO 9001: Sets out the requirements for an organization whose business processes
range from design and development to production, installation and servicing.
ISO 9002: An appropriate standard for an organization that does not design or develop
products, since this standard omits the design control requirements of ISO 9001.
ISO 9003: It is an appropriate standard for an organization, which focuses on
inspection and testing to ensure that final products and services meet specific
requirements.
11. When is testing complete?
It is difficult to say when testing is complete. The criteria for determining the
completion of testing are as follows:
i. When we run out of time.
ii. When we run out of money.
iii. Based on statistical criteria.
iv. Deadline is reached for releasing the software.
v. Test cases are completed with certain percentage.
vi. Test budget is depleted.
vii. Code coverage or functionality requirement reaches a specific point.

viii. Bug rate falls below a certain level.
ix. α- testing or β –testing period ends.
12. Conclusion:
Software testing is successful if all errors are removed from the software, and a good
testing technique is one that finds the maximum number of as-yet-undiscovered errors.
However, no fool-proof testing technique exists that can find all errors. The success
of testing therefore depends on selecting an appropriate combination of testing
techniques, developing test conditions and test evaluation criteria, creating the test
scripts required to exercise those conditions, managing fixes and re-testing, and
meeting the remaining testing objectives. Moreover, new kinds of bugs appear every day
alongside new testing techniques, putting testing practitioners and researchers under
ever steeper and tougher challenges.

