
Software

Software is a set of programs that take inputs and provide outputs. There are
two types: 1) Software Application 2) Software Product

1) Software developed for a specific customer's requirements is called a Software
Application.
2) Software developed based on the overall requirements of the market is called a
Software Product. Interested customers purchase licenses for the Software
Product.

Software Bidding :

A proposal to develop new software is called Software Bidding. In Software
Application Development, the proposal comes from a specific customer. In Product
Development, the organization takes up its own proposal.

Kick-off Meeting :

A CEO-level person conducts a meeting with high-level management
and selects a Project Manager to handle the new Software Development Process.

PIN (Project Initiation Note) Document :

The selected Project Manager (PM) prepares this document to estimate the
required people, technologies, time and resources. He/She submits the report to the
CEO. The CEO conducts a review to give the green signal to the Project Manager.

SDLC (Software Development Life Cycle) : (Waterfall Model)

Requirements Gathering

Analysis & Planning

Designing

Coding

Testing

Release & Maintenance

In the above SDLC process, only a single stage of testing is available, and that
testing is conducted by the developers themselves. For these reasons, organizations
now concentrate on multiple stages of testing and separate testing teams to achieve
quality.

Software Quality :

→ Meets Customer Requirements (Functionality)
→ Meets Customer Expectations (Usability, Performance)
→ Cost to Purchase License
→ Time to Release

Software Quality Assurance (SQA) :

Monitoring and measuring the strength of the development process is called
Software Quality Assurance / Verification.

Software Quality Control (SQC) :

The validation of the product with respect to customer requirements is called
Software Quality Control / Validation / Testing.

“V” Model :

‘V’ stands for Verification & Validation. This model defines the development
process together with the testing stages. It is an extension of the SDLC model.

Verification                            Validation

Requirements Gathering & Review         User Acceptance Testing
Analysis & Planning with Review         System Testing
High Level Design & Review              Integration Testing (Programs Testing)
Low Level Design & Review               Unit Testing (Program Testing)

                    Coding

In the above ‘V’ Model, the reviews are called Verification methods and the testing
levels are called Validations. In small and medium scale organizations, management
maintains a separate Testing Team for System Testing only, to decrease project cost,
because System Testing is the bottleneck stage in the Software Development
Process.

I) Reviews in Analysis :
In general, the software development process starts with requirements
gathering: from the specific customer in Application Development, and from model
customers in Product Development. After gathering requirements, the responsible
Business Analyst prepares the BRS (Business Requirements Specification)
document. This document is also known as the User Requirement Specification or
Customer Requirement Specification.

After gathering requirements, the Business Analyst sits with the Project Manager
to develop the SRS and the Project Plan. The Software Requirements Specification
consists of the functional requirements to be developed and the system requirements
to be used.

Example :

BRS (What?) → SRS (How?)

Functional Requirement : 2 inputs, 1 output, ‘+’ is the addition operation
System Requirement : ‘C’ language

After completion of the BRS & SRS preparation, the corresponding Business
Analyst conducts a review to estimate the completeness and correctness of the
documents.

→ Are they correct requirements?
→ Are they complete requirements?
→ Are they achievable requirements?
→ Are they reasonable (time-wise) requirements?
→ Are they testable requirements?

Go to the next stage of the V Model.
II) Reviews in Design :
After completion of successful analysis and review, the design category people
prepare the HLD and LLDs (High Level Design & Low Level Designs). The High Level
Design specifies the overall architecture of the software. It is also known as System
Design or Architectural Design.

Example : a root-to-leaf architecture tree: LOGIN → (Mailing, Chatting) → LOGOUT.

Every functionality or module's internal structure is specified by the Low Level
Design documents. These are also known as Structural Design or Component Design.

Example : a LOGIN component flow: the user enters User ID & Password; LOGIN
validates them against the database; invalid data leads to Re-Login, and valid data
leads to the Next Window.

HLD is a system-level design and an LLD is a component or module level design. So
one software design consists of one HLD and multiple LLDs.

The corresponding designers conduct a review on these documents for
completeness and correctness.

→ Are they understandable designs?
→ Are they correct designs?
→ Are they complete designs?
→ Are they followable designs?

Go to the next stage of the V Model.
III) Unit Testing :
After completion of successful designs and reviews, the corresponding
programmers start coding to construct the software physically. In this stage the
programmers write programs and test each program using White Box / Glass Box /
Open Box testing techniques:

→ Basic Paths Coverage
→ Control Structure Coverage
→ Program Technique Coverage
→ Mutation Coverage

(A) Basic Paths Coverage :

The programmers use this technique to estimate the execution of a
program. In this technique the programmer executes a program more than once to
cover all areas of that program in execution.

(B) Control Structure Coverage :

After completion of successful Basic Paths Coverage, the corresponding
programmer concentrates on the correctness of that program's execution in terms of
inputs, process and outputs.

(C) Program Technique Coverage :

After successful Basic Paths & Control Structure Coverage, the corresponding
programmer measures the execution speed of that program. If the execution speed
is not acceptable, then the programmer performs changes in the program structure
without disturbing the functionality.

In this coverage the programmers use third-party monitors and profilers
to calculate the execution speed of the program.

Note :

Monitors are used in VB.NET; Profilers are used in Java.
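For illustration only (the notes name Monitors for VB.NET and Profilers for Java): a
minimal sketch of the same idea in Python, using the standard-library cProfile
module to measure where a hypothetical program spends its execution time.

import cProfile

def compute_total(prices):
    # Hypothetical program under test: sums a list of prices.
    total = 0
    for p in prices:
        total += p
    return total

# Profile one call to see the time spent per function.
cProfile.run("compute_total(list(range(1_000_000)))")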
(D) Mutation Coverage :
Mutation means a change in a program. Programmers perform changes in a
program to estimate the completeness and correctness of that program's testing.

Test → Passed; Change → Repeat Test → Passed (Incomplete Testing);
Change → Repeat Test → Failed (Complete Testing)

Basic Paths Coverage, Control Structure Coverage and Program Technique
Coverage are applied on a program to test it. Mutation Coverage is applied on the
program's testing, to estimate the completeness and correctness of that testing.
Go to the next stage of the V Model.
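A minimal mutation-coverage sketch in Python (illustrative; the function names are
hypothetical). The seeded change turns '+' into '-'; a test suite that still passes on
the mutant is incomplete, while one that fails on it has killed the mutant, matching
the Passed (Incomplete) / Failed (Complete) outcomes above.

def add(a, b):           # original program
    return a + b

def add_mutant(a, b):    # mutated program: '+' changed to '-'
    return a - b

def suite_passes(fn):
    # The program's test suite: every check must hold for the implementation.
    return fn(2, 3) == 5 and fn(0, 0) == 0

print(suite_passes(add))         # True  -> original passes
print(suite_passes(add_mutant))  # False -> mutant killed: testing is complete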

IV) Integration Testing :

After completion of the dependent programs' development and Unit Testing, the
programmers interconnect them to form a complete system / software.
This testing is also known as Interface Testing. There are four approaches to
integrate programs and test them.

A) Top-Down Approach :-

In this approach the programmers interconnect the main program and some of the
sub-programs. In place of the remaining sub-programs, the programmers use
temporary programs called “Stubs”.

Main → Sub1, Sub2; a STUB stands in for a sub-program still under construction.
B) Bottom-Up Approach :-
In this approach the programmers interconnect the sub-programs without
starting from the main program. A temporary program called a “Driver” stands in
for the main program while it is under construction.

Driver (Main under construction) → Sub1 → Sub2

C) Hybrid Approach :-
It is a combined approach of the Top-Down and Bottom-Up approaches, using both
stubs and drivers for under-construction programs. It is also known as the
Sandwich Approach.

Main → Sub1 → Sub2, Sub3, with drivers standing in for under-construction pieces
at both levels.

D) System Approach :-

The integration of programs after completion of 100% of the coding is called the
System Approach or Big Bang Approach.
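A small Python sketch of the stub idea from the Top-Down approach (all names here
are hypothetical): the main program is integrated with one finished sub-program
while a stub stands in for a sub-program that is still under construction.

def tax_stub(amount):
    # Stub: temporary program returning a canned value until the
    # real tax sub-program is coded.
    return 0.0

def discount(amount):
    # Finished sub-program.
    return amount * 0.10

def billing_main(amount, tax_fn, discount_fn):
    # Main program wired to one real sub-program and one stub.
    return amount + tax_fn(amount) - discount_fn(amount)

print(billing_main(100.0, tax_stub, discount))  # 90.0 until the tax module arrives

In the Bottom-Up approach the roles reverse: a temporary driver calls the finished
sub-programs while the main program is still under construction.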
V) System Testing :
After completion of successful Integration Testing, the Development Team
releases a software build to the separate Testing Team in the organization. System
Testing is classified into three sub-stages.

1. Usability Testing
2. Functional Testing
3. Non-Functional Testing

1. Usability Testing :
In general, test execution starts with Usability Testing. During this test
the Testing Team concentrates on the “user friendliness” of the software build. There
are two sub-levels in Usability Testing.

a) User Interface Testing :

→ Ease of use (understandable screens)
→ Look & feel (attractive screens)
→ Speed in interface (short navigations in screens)

b) Manuals Support Testing :

In this test the Testing Team verifies the Help documentation of the software.

Case Study :

Receive the S/w build from the developers after Integration Testing →
User Interface Testing → Functional Testing → Non-Functional Testing →
Manuals Support Testing. (User Interface Testing and Manuals Support Testing
together form Usability Testing.)
2. Functional Testing :
It is the mandatory testing level in System Testing. During this test the Testing
Team concentrates on the correctness of the customer requirements in the software
build. This testing is classified into the sub-tests below.

a) Control Flow Testing :-

The changes in the properties of objects in an application / S/w build with respect to
mouse and keyboard operations.

b) Error Handling Testing :-

The prevention of wrong operations with meaningful messages.

c) Input Domain Coverage :-

Whether our S/w build is taking valid types and sizes of inputs or not.

d) Manipulations Coverage :-
Whether our S/w build is providing the customer-expected outputs or not.

e) Database Testing :-
The impact of front-end screen operations on the back-end database content.

f) Sanitation Testing :-
Finding extra functionality with respect to the customer requirements.

Case Study :-

Software Build : Screens (Front End) ↔ Database (Back End)

Front end : Control Flow, Error Handling, Input Domain, Manipulations, Sanitation
Back end : Database Testing

Together these form Functional / Black Box Testing.


3. Non-Functional Testing :
It is an optional level in System Testing. This level is expensive and complex to
conduct. During this test the Testing Team concentrates on the extra characteristics
of the software.

a) Reliability Testing :-

It is also known as Recovery Testing. During this test the Testing Team
validates whether our S/w build changes from an abnormal state back to its normal
state or not.

b) Compatibility Testing :-

It is also known as Portability Testing. During this test the Testing Team
concentrates on whether our S/w build runs on the customer-expected platforms or
not.
Platform means the Operating System, Browser, Compilers and other system
software.

c) Configuration Testing :-

It is also known as Hardware Compatibility Testing. During this test the
Testing Team concentrates on whether our S/w build supports different
technology hardware devices or not.

Ex :- Different technology printers, networks ... etc.

d) Inter System Testing :-

It is also known as End-to-End Testing or Interoperability Testing. During this
test the Testing Team concentrates on whether our S/w build co-exists with
other software applications to share common resources or not.

Case Study :-
Compatibility Testing : S/w Build → Operating System
Configuration Testing : S/w Build → H/w Device (Ex : Printers)
Inter System Testing : S/w Build → Other S/w Build
e) Data Volume Testing :-

During this test the Testing Team inserts model data into our application build
to estimate the peak limit of data it can manage. Estimating this data limit is called
Data Volume Testing.

Ex : M.S. Access technology software can manage a 2 GB database, SQL
Server can manage a 6-7 GB database, and Oracle technology can manage a
10-12 GB database as maximum.
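A minimal sketch of the data-volume idea (illustrative; Python's built-in sqlite3 is
used purely as a stand-in back end): keep inserting model data and observe how much
the database accepts.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT)")

batch = [(i, "model-data") for i in range(10_000)]
for round_no in range(5):                   # in a real test: loop until failure
    conn.executemany("INSERT INTO orders VALUES (?, ?)", batch)

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(f"rows inserted so far: {count}")     # the observed peak guides the data limit
conn.close()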

f) Installation Testing :-

The S/w build plus its supporting software is installed on a customer-expected
configuration system (customer-expected size of RAM, HDD, processor, OS ... etc.)
and the following are checked:

→ Setup program execution to start the installation
→ Ease of the interface during installation
→ Occupied disk space after installation

g) Load Testing :-

Load means the number of concurrent users using our S/w build at a
time. During this test the Testing Team executes our S/w build under the
customer-expected configuration and the customer-expected load to estimate the
speed of processing, i.e. performance.

Client 1, Client 2, ... Client N → Server (S/w build process)
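A sketch of the load-testing idea in Python, assuming a hypothetical send_request()
in place of a real client call: N concurrent clients hit the build and the response
times are summarized.

import time
from concurrent.futures import ThreadPoolExecutor

def send_request(client_id):
    # Hypothetical stand-in for one client's round trip to the server.
    start = time.perf_counter()
    time.sleep(0.05)
    return time.perf_counter() - start

N = 50                                # customer-expected load (assumed)
with ThreadPoolExecutor(max_workers=N) as pool:
    timings = list(pool.map(send_request, range(N)))

print(f"avg response: {sum(timings)/len(timings):.3f}s, worst: {max(timings):.3f}s")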
h) Stress Testing :-

The execution of our S/w build under the customer-expected configuration and more
than the customer-expected load, to estimate the peak limit of load, is called Stress
Testing.

i) Endurance Testing :-

The execution of our S/w build under the customer-expected configuration and the
customer-expected load, to estimate continuity in processing, is called Endurance
Testing.
j) Security Testing :-
It is also known as Penetration Testing. During this test the Testing Team
concentrates on three factors.

Authorization : The S/w build allows valid users and prevents invalid users.

Ex : Login with password, PIN, digital signatures, fingerprints, eye retina,
scratch cards ... etc.

Access Control : The permissions of valid users to access functionality in the build.

Ex : Admin, User

Encryption / Decryption : The code conversion between the client and server
processes.

The client encrypts the request into cipher text, which travels to the server, and the
server decrypts it; the server encrypts the response into cipher text, which travels
back to the client, and the client decrypts it.
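A round-trip sketch of the encryption/decryption factor using the third-party
"cryptography" package (an assumption; any equivalent library would do):

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared secret between client and server
f = Fernet(key)

request = b"GET /balance?acct=1234"
cipher_text = f.encrypt(request)     # what travels over the wire
print(cipher_text != request)        # True: the payload is unreadable in transit

plain_text = f.decrypt(cipher_text)  # the server recovers the original request
assert plain_text == request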

k) Localization and Internationalization Testing :-

This testing is applicable to multilingual software. This type of software
allows multiple user-language characters. Ex : English, Spanish, French ... etc.
In Localization Testing the Test Engineer provides multiple language characters
as inputs to the S/w build. In Internationalization Testing the Test Engineer provides
common-language (English) characters to the S/w as input; third-party tools then
translate the common-language characters into other language characters.

Note : Java Unicode is a better technology for developing multilingual software.

l) Parallel Testing :-
It is also known as Competitive / Comparative Testing. During this test the
Testing Team compares our S/w build with old versions of the same software or with
similar products in the market, to estimate competitiveness.
VI) User Acceptance Testing :
After completion of successful System Testing, the Project Manager
concentrates on UAT to collect feedback from real customers or model customers.
There are two ways of conducting User Acceptance Testing.

α Alpha Testing                            β Beta Testing

→ For S/w Applications                     → For S/w Products
→ By real customers, with involvement      → By model customers
  of developers and testers
→ At the development site                  → At the model customer's site

VII) Release Testing :

After completion of UAT and the resulting modifications, the Project Manager forms
a Release Team or On-Site Team to release the application to the real customer, or to
release the product to license-purchased customers. This release team or on-site
team consists of a few programmers, a few testers and a few hardware engineers,
with a team lead. This team observes the below factors at the customer site.

1) Complete installation
2) Overall functionality
3) Input device handling (keyboard, mouse ... etc.)
4) Output device handling (monitor, printer ... etc.)
5) Secondary storage device handling (floppy, pen drive ... etc.)
6) OS error handling
7) Co-existence with other software at the customer site

Checking the above factors at the customer site is also known as Port Testing /
Deployment Testing.
After a successful release, the release team conducts training sessions for the
customer-site people and then returns to the organization.
VIII) Maintenance :
During utilization of the software, the customer-site people send Software
Change Requests (SCRs) to our organization. These requests are received by a
special team in the organization called the Change Control Board (CCB). This team
consists of a few programmers, a few testers and a few hardware engineers, along
with the Project Manager.

A S/w Change Request is either an Enhancement or a Missed Defect.

For an Enhancement (conducted by the CCB) : Impact Analysis → Perform S/w
Changes → Test S/w Changes.
For a Missed Defect : Impact Analysis → Perform S/w Changes → Test S/w Changes
→ Improve the testing process and people's capability.
Case Study :-

Testing Stage             | Deliverable to be Tested    | Responsibility                              | Testing Techniques
Reviews in Analysis       | BRS & SRS                   | BA                                          | Walk Through, Inspections & Peer Reviews
Reviews in Design         | HLD & LLDs                  | Designers                                   | Walk Through, Inspections & Peer Reviews
Unit Testing              | Programs                    | Programmers                                 | White Box Testing Techniques
Integration Testing       | Interfaces between Programs | Programmers                                 | Top-Down, Bottom-Up, Hybrid, System
System Testing            | S/w Build                   | Test Engineers / Quality Control Engineers  | Usability, Functional / Black Box, Non-Functional Testing
User Acceptance Test      | S/w Build                   | Real Customers / Model Customers            | α-Testing, β-Testing
Release Testing           | S/w Build                   | Release Team                                | S/w Release Factors (7 factors in VII)
Maintenance Level Testing | S/w Changes                 | CCB                                         | Regression Testing

Walk Through :- A document study to estimate completeness and correctness.

Inspection :- Searching a document for issues is called an Inspection.

Peer Review :- Comparing the document with other similar documents.

Challenges in Software Testing

In general, every Testing Team plans to conduct formal testing. Due to some
challenges in testing, Testing Teams sometimes conduct Ad-hoc Testing or
Informal Testing. There are five styles of Ad-hoc Testing.

a) Monkey / Chimpanzee Testing :-

Due to lack of time, the Testing Team conducts testing only on the main activities of
the software. This style of testing is called Monkey Testing.

b) Buddy Testing :-

Due to lack of time, project management combines one programmer and
one tester as a "buddy" team. These teams conduct development and testing in
parallel.

c) Exploratory Testing :-

It is also known as Artistic Testing. Due to lack of documentation, the Test
Engineers depend on past experience, discussions with others, video conferences
with customer-site people, internet browsing and surfing of similar software to
understand the customer requirements. This style of testing is called Exploratory
Testing.

d) Pair Testing :-

Due to lack of knowledge, senior Test Engineers are grouped with junior Test
Engineers to share their knowledge. This style of testing is called Pair Testing.

e) Bebugging :-

To estimate the efficiency of the Test Engineers, the development people add
defects to the coding. This informal practice is called Bebugging or Defect
Feeding / Seeding.
System Testing Process

Test Initiation → Test Planning → Test Design → Test Execution → Test Closure,
with Test Reporting running alongside Test Execution.

Development Vs System Testing

Development track : S/w Bidding → Kick-off Meeting → PIN Document →
Requirements Gathering (BRS) → Analysis & Planning (SRS & Project Plan) →
S/w Design & Review (HLD, LLDs) → Coding → Unit Testing (White Box Techniques)
→ Integration → Integration Testing → Initial Build → User Acceptance Test →
Release & Maintenance.

Testing track (in parallel from design onwards) : System Test Initiation →
System Test Planning → Test Design → System Test Execution (on the Initial Build,
with Test Reporting) → System Test Closure.
I) System Test Initiation :
In general, the System Testing process starts with System Test Initiation by
the Project Manager or Test Manager. They develop the Test Strategy or Test
Methodology document. This document defines the reasonable tests to be applied in
the current project.

SRS (input) → Test Initiation by the Project Manager / Test Manager → Test Strategy (output)

Components in the Test Strategy :

The Test Strategy document consists of the below components, to define the test
approach to be followed by the team in the current project.

1. Scope & Objective :-

The purpose of testing in the current project.

2. Business Issues :-

The budget allocation for testing in the current project.

Ex : Of 100% of the project cost, 64% goes to development & maintenance and
36% to system testing.

3. Roles & Responsibilities :-

The names of the jobs in the Testing Team and the responsibility of each job in the
current project.

4. Communication & Status Reporting :-

The required negotiations between the various jobs in the Testing Team.

5. Test Responsibility Matrix (TRM) :-

The list of reasonable tests to be applied in the current project.
Ex.

Testing Topic         | Yes/No | Comment
UI Testing            | Yes    | -
Manuals Testing       | Yes    | -
Functional Testing    | Yes    | -
Load Testing          | No     | Lack of resources
Stress Testing        | No     | Lack of resources
Endurance Testing     | No     | Lack of resources
Compatibility Testing | Yes    | -
Inter System Testing  | No     | No need with respect to requirements
..etc.                | ..etc. | ..etc.

6. Test Automation & Testing Tools :-

The purpose of automation testing in the current project and the available testing
tools in the organization.

7. Defect Reporting & Tracking :-

The required negotiation between the Testing Team and the Development Team to
report and solve defects.

8. Change & Configuration Management :-

The maintenance of testing deliverables for future reference.

9. Risks & Assumptions :-

The expected list of risks and the solutions to overcome them.

10. Testing Measurements & Metrics :-

The list of measurements and metrics used to estimate test status.

11. Training Plan :-

The required number of training sessions for the Testing Team to understand the
customer requirements.
II) Test Planning :
After completion of the Test Strategy document preparation, the Test Lead category
people concentrate on Test Plan document preparation.

Inputs : SRS, HLD & LLDs, Project Plan and Test Strategy.
Steps : Testing Team formation → Identify risks → Prepare detailed test plans →
Review the plans.
Testing Team Formation :

In general, Test Planning starts with Testing Team formation. In this stage
the Test Lead depends on the below factors.

→ Project size (no. of Function Points)
→ No. of testers available on the bench
→ Test duration with respect to the Project Plan
→ Available test environment resources (Ex. testing tools ...)

Case Study :

Type of Project                       Developers : Testers

→ ERP, Client/Server, Website         3:1
→ System S/w Application              1:1
→ Mission Critical                    1:7

Identify Risks :

After completion of Testing Team formation, the Test Lead concentrates on
team-level risk analysis.

Ex :-
Risk 1 : Lack of time
Risk 2 : Lack of resources
Risk 3 : Lack of documentation
Risk 4 : Delays in delivery
Risk 5 : Lack of development process seriousness
Risk 6 : Lack of communication
Prepare Detailed Test Plans :
After completion of Testing Team formation and risk analysis, the Test Lead
concentrates on test plan document preparation in the IEEE 829 format (Institute of
Electrical and Electronics Engineers).

Format :

1. Test Plan ID : Unique number or name for future reference to the project.
2. Introduction : About the project.

What to Test :
3. Test Items : The names of the modules or functionalities in the project.
4. Features to be Tested : The names of the functionalities to be tested.
5. Features not to be Tested : The names of already-tested modules, if available.

How to Test :
6. Test Approach : The list of tests selected by the PM.
7. Test Environment : The required hardware and software to use in testing.
8. Entry Criteria : Test cases designed, test environment established,
   S/w build received from the developers.
9. Suspension Criteria : → Test environment abandoned
   → Show-stopper in the build (build not working)
   → Too many pending defects
10. Exit Criteria : → All modules in the build covered
   → Test duration exceeded
   → All major defects solved
11. Test Deliverables : The list of testing documents to be prepared by the test
    engineers during testing (Test Scenarios, Test Cases, Automation Programs,
    Test Log, Defect Reports and weekend reports).

Whom to Test (who tests) :
12. Staff and Training Needs : The names of the selected test engineers and the
    required training sessions to understand the customer requirements.
13. Responsibilities : Work allocation to the above selected test engineers. (All
    responsible tests on specified modules, or specified testing on all modules.)

When to Test :
14. Schedule : The dates and times to conduct testing.
15. Risks & Assumptions : The previously analyzed risks and the solutions to
    overcome them.
16. Approvals : The signatures of the Test Lead & Project Manager.
Review Test Plan :

After completion of the test plan document preparation, the Test Lead conducts a
review meeting to estimate the completeness and correctness of that planned
document.

→ Requirements / Modules / Features / Functionalities coverage
→ Testing topics coverage
→ Risk-oriented coverage

Note :
After completion of Test Planning and before starting Test Design, the Business
Analyst and Test Lead conduct training sessions for the selected Test Engineers on
the customer requirements in the project. Some organizations invite Domain
Experts / Subject Experts from outside for these training sessions.

III) Test Design :

After completion of the required training sessions on customer requirements, the
corresponding Test Engineers concentrate on Test Design to prepare Test Scenarios
and Test Cases.
A Test Scenario specifies “what” to test. A Test Case specifies “how” to test,
including a detailed procedure. In this sense, Test Cases are derived from
Test Scenarios. There are four methods of Test Design:

1. Functional Specification Based Test Case Design (for Functional Testing)
2. Use Cases Based Test Case Design (for Functional Testing)
3. User Interface Based Test Case Design (for Usability Testing)
4. Functional & System Specification Based Test Case Design (for Non-Functional Testing)

1. Functional Specification Based Test Case Design :

To prepare Test Scenarios and Test Cases for Functional Testing, the Test Engineers
use this method. In this approach, the Test Engineers prepare scenarios and
cases depending on the functional specifications in the SRS.

Flow : BRS → SRS (functional specifications) → HLD → LLDs → Coding → S/w Build.
Test Design takes the functional specifications and produces Test Scenarios, then
Test Cases, which are applied during System Test Execution on the S/w Build.
Approach :
Step 1 :-
Collect the functional specifications related to the responsible areas.
Step 2 :-
Take one specification and read it to gather the entry point, required
inputs, normal flow, resulting outputs, alternative flows, exit point and exception
rules.
Step 3 :-
Prepare Test Scenarios depending on the above gathered information.
Step 4 :-
Review those Test Scenarios and implement them as Test Cases.
Step 5 :-
Go to Step 2 until all responsible functional specifications are studied.

Functional Specification – 1 :-

A login process allows User ID & Password for authorized users. The User ID
object takes alphanumerics in lower case, from 4 to 16 characters long. The Password
object takes alphabets in lower case, from 4 to 8 characters long. Prepare Test
Scenarios.

Test Scenario 1 :- Verify User ID object

Boundary Value Analysis (BVA) (Size) :

Min = 4 Char. → Pass      Max = 16 Char. → Pass
Min-1 = 3 Char. → Fail    Max-1 = 15 Char. → Pass
Min+1 = 5 Char. → Pass    Max+1 = 17 Char. → Fail

Equivalence Class Partition (ECP) (Type) :

Valid       In-Valid
a-z, 0-9    A-Z, Special Characters, Blank Field

Test Scenario 2 :- Verify Password Object

Boundary Value Analysis (BVA) (Size) :

Min = 4 Char. → Pass      Max = 8 Char. → Pass
Min-1 = 3 Char. → Fail    Max-1 = 7 Char. → Pass
Min+1 = 5 Char. → Pass    Max+1 = 9 Char. → Fail

Equivalence Class Partition (ECP) (Type) :

Valid    In-Valid
a-z      A-Z, 0-9, Special Characters, Blank Field
Test Scenario 3 :- Verify Login Operation

Decision Table :

User ID        Password       Expected O/p

Valid Value    Valid Value    Next Window
Valid Value    Invalid        Error Message
Invalid        Valid          Error Message
Valid          Blank Field    Error Message
Blank          Valid          Error Message

Note : Exhaustive testing is not possible; due to this reason, the Testing Team
conducts Optimal Testing using Black Box testing techniques like BVA, ECP,
Decision Tables, regular expressions ... etc.
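As a sketch, the BVA and ECP selections above can be automated; here the User ID
rule from Specification 1 (lower-case alphanumerics, 4 to 16 characters) is expressed
as a regular expression and probed at the boundaries (all names hypothetical):

import re

# ECP valid class (a-z, 0-9) combined with the BVA size limits (4-16).
USER_ID = re.compile(r"[a-z0-9]{4,16}")

def is_valid_user_id(value: str) -> bool:
    return USER_ID.fullmatch(value) is not None

# BVA on size: min-1, min, min+1, max-1, max, max+1
for n in (3, 4, 5, 15, 16, 17):
    print(n, is_valid_user_id("a" * n))      # False True True True True False

# ECP on type: one representative per invalid class is enough.
for bad in ("ABCD", "ab!?", ""):
    print(repr(bad), is_valid_user_id(bad))  # all False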

Functional Specification – 2 :-

In an insurance application, users apply for different types of insurance
policies. If a user selects Type-A insurance, then the system asks the age of that
user. The age value should be greater than 16 years and less than 80 years. Prepare
Test Scenarios.

Test Scenario 1 :- Verify Type-A selection

Test Scenario 2 :- Verify focus to Age when Type-A insurance is selected

Test Scenario 3 :- Verify Age value

Boundary Value Analysis (BVA) (Range) :

Min = 17 → Pass      Max = 79 → Pass
Min-1 = 16 → Fail    Max-1 = 78 → Pass
Min+1 = 18 → Pass    Max+1 = 80 → Fail

Equivalence Class Partition (ECP) (Type) :

Valid    In-Valid
0-9      a-z, A-Z, Special Characters, Blank Field

Functional Specification – 3 :-

In a shopping application, users create purchase orders for different types of items.
The purchase order allows the user to select an Item No. and to enter a Qty. up to
10. The purchase order returns the Total Amount along with one item's price.
Prepare Test Scenarios.

Test Scenario 1 :- Verify Item No. selection

Test Scenario 2 :- Verify Qty. value

Boundary Value Analysis (BVA) (Range) :

Min = 1 → Pass      Max = 10 → Pass
Min-1 = 0 → Fail    Max-1 = 9 → Pass
Min+1 = 2 → Pass    Max+1 = 11 → Fail

Equivalence Class Partition (ECP) (Type) :

Valid    In-Valid
0-9      a-z, A-Z, Special Characters, Blank Field

Test Scenario 3 :- Verify Total Amount, given Total = Qty. * Item Price

Functional Specification – 4 :-

A door opens when a person comes in front of the door, and the door closes
when that person has gone inside. Prepare Test Scenarios.

Test Scenario 1 :- Verify door open

Person     Door      Criteria

Present    Opened    Pass
Present    Closed    Fail
Absent     Opened    Fail
Absent     Closed    Pass

Test Scenario 2 :- Verify door close

Person     Door      Criteria

Inside     Closed    Pass
Inside     Opened    Fail

Test Scenario 3 :- Verify the door operation when a person is standing in the middle
of the doorway.
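The two decision tables above can be folded into one executable check; a sketch with
a hypothetical door controller:

def door_should_be_open(person_in_front: bool, person_inside: bool) -> bool:
    # Open while someone stands in front; close once they have gone inside.
    return person_in_front and not person_inside

cases = [
    (True,  False, True),   # present in front -> opened : Pass
    (False, False, False),  # absent -> closed : Pass
    (False, True,  False),  # person inside -> closed : Pass
]
for in_front, inside, expected in cases:
    assert door_should_be_open(in_front, inside) == expected
print("door decision-table checks passed")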

Functional Specification – 5 :-

In an e-banking application, the customers connect to the bank server through
a login process. This login allows the customer to fill the below fields.

Password : 6-digit number
Prefix : 3-digit number, but it does not start with 0 or 1
Suffix : 6-digit alphanumeric
Area Code : 3-digit number, but it is optional
Command : Cheque Deposit, Money Transfer, Mini Statement and Bills Paid
Prepare Test Scenarios.

Test Scenario 1 :- Verify Password Value

Boundary Value Analysis (BVA) (Size) :

Min = Max = 6 Digits → Pass 5 Digits → Fail 7 Digits → Fail

Equivalence Class Partition (ECP) (Type) :


Valid In-Valid
0-9 a-z, A-Z, Special Characters, Blank Field

Test Scenario 2 :- Verify Prefix

Boundary Value Analysis (BVA) (Size) :

Min = Max = 3 Digits → Pass 2 Digits → Fail 4 Digits → Fail

Equivalence Class Partition (ECP) (Type) :


Valid In-Valid
[2-9][0-9][0-9] a-z, A-Z, Special Characters, Blank Field

Test Scenario 3 :- Verify Suffix

Boundary Value Analysis (BVA) (Size) :

Min = Max = 6 Digits → Pass 5 Digits → Fail 7 Digits → Fail

Equivalence Class Partition (ECP) (Type) :


Valid In-Valid
0-9, a-z, A-Z Special Characters, Blank Field

Test Scenario 4 :- Verify Area Code

Boundary Value Analysis (BVA) (Size) :

Min = Max = 3 Digits → Pass 2 Digits → Fail 4 Digits → Fail

Equivalence Class Partition (ECP) (Type) :


Valid In-Valid
0-9, Blank Field a-z, A-Z, Special Characters
Test Scenario 5 :- Verify command selection (Cheque Deposit, Money Transfer,
Mini Statement and Bills Paid)

Test Scenario 6 :- Verify the login operation to connect to the bank server

Remaining Fields      Area Code       Expected O/p

All are valid         Valid           Next Window
All are valid         Blank Field     Next Window
All are valid         Invalid         Error Message
Any one invalid       Valid / Blank   Error Message
Any one blank field   Valid / Blank   Error Message
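Since the notes already use regex-style notation for the ECP classes, here is a
sketch expressing Specification 5's field rules as Python regular expressions (the
field names and form layout are hypothetical):

import re

FIELDS = {
    "password":  re.compile(r"[0-9]{6}"),        # 6 digits
    "prefix":    re.compile(r"[2-9][0-9]{2}"),   # 3 digits, not starting 0/1
    "suffix":    re.compile(r"[0-9a-zA-Z]{6}"),  # 6 alphanumerics
    "area_code": re.compile(r"([0-9]{3})?"),     # 3 digits or blank (optional)
}

def validate(form: dict) -> bool:
    # All fields must fully match their ECP valid class.
    return all(FIELDS[name].fullmatch(form.get(name, "")) for name in FIELDS)

print(validate({"password": "123456", "prefix": "200",
                "suffix": "ab12cd", "area_code": ""}))   # True  -> next window
print(validate({"password": "123456", "prefix": "012",
                "suffix": "ab12cd", "area_code": ""}))   # False -> error message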

Functional Specification – 6 :-

In a Library Management System, readers apply for an Identity No. To get
this number, the reader fills the below fields.

Reader Name : Alphabets in lower case with Init Cap, as a single word
House Name : Alphabets in lower case, as a single word
PIN Code : Related to the India Postal Department
City Name : Alphabets in upper case, as a single word
Phone No. : Related to India subscribers, and optional

Prepare Test Scenarios.

Test Scenario 1 :- Verify Reader Name

Boundary Value Analysis (BVA) (Size) :

Min = 1 Char. → Pass      Max = 256 Char. → Pass
Min-1 = 0 Char. → Fail    Max-1 = 255 Char. → Pass
Min+1 = 2 Char. → Pass    Max+1 = 257 Char. → Fail
(In any front-end development tool, the default maximum is 256 characters.)

Equivalence Class Partition (ECP) (Type) :

Valid          In-Valid
[A-Z][a-z]*    0-9, Special Characters, Blank Field

Test Scenario 2 :- Verify House Name

Boundary Value Analysis (BVA) (Size) :

Min = 1 Char. → Pass      Max = 256 Char. → Pass
Min-1 = 0 Char. → Fail    Max-1 = 255 Char. → Pass
Min+1 = 2 Char. → Pass    Max+1 = 257 Char. → Fail

Equivalence Class Partition (ECP) (Type) :

Valid     In-Valid
[a-z]*    A-Z, 0-9, Special Characters, Blank Field

Test Scenario 3 :- Verify PIN Code

Boundary Value Analysis (BVA) (Size) :

Min = Max = 6 Digits → Pass 5 Digits → Fail 7 Digits → Fail

Equivalence Class Partition (ECP) (Type) :


Valid In-Valid
[1-9][0-9][0-9][0-9][0-9][0-9] a-z, A-Z, Special Characters, Blank Field

Test Scenario 4 :- Verify City Name

Boundary Value Analysis (BVA) (Size) :

Min = 1 Char. → Pass      Max = 256 Char. → Pass
Min-1 = 0 Char. → Fail    Max-1 = 255 Char. → Pass
Min+1 = 2 Char. → Pass    Max+1 = 257 Char. → Fail

Equivalence Class Partition (ECP) (Type) :

Valid     In-Valid
[A-Z]*    a-z, 0-9, Special Characters, Blank Field

Test Scenario 5 :- Verify Phone Number

Boundary Value Analysis (BVA) (Size) :

Min = 10 Digits → Pass     Max = 12 Digits → Pass
Min-1 = 9 Digits → Fail    Max+1 = 13 Digits → Fail
Min+1 = 11 Digits → Pass

Equivalence Class Partition (ECP) (Type) :

Valid               In-Valid
0-9, Blank Field    A-Z, a-z, Special Characters
Test Scenario 6 :- Verify Reader Registration

Decision Table :

Remaining Fields      Telephone Number    Expected O/p
All are valid         Valid               Identity No.
All are valid         Blank Field         Identity No.
All are valid         Invalid             Error Msg.
Any one invalid       Valid / Blank       Error Msg.
Any one blank field   Valid / Blank       Error Msg.

Functional Specification – 7 :- A Computer Shutdown Operation

Test Scenario 1 : Verify shutdown option selection using the Shut Down menu

Test Scenario 2 : Verify shutdown option selection using Alt+F4

Test Scenario 3 : Verify shutdown option selection using Ctrl+Alt+Del

Test Scenario 4 : Verify shutdown operation success

Test Scenario 5 : Verify shutdown operation using the Run command

Test Scenario 6 : Verify shutdown operation when a process is running

Test Scenario 7 : Verify shutdown operation using the power-off button

Functional Specification – 8 :-

Money withdrawal from an ATM, with all rules and regulations.

Test Scenario 1 : Verify card insertion

Test Scenario 2 : Verify card insertion at a wrong angle

Test Scenario 3 : Verify cancel after card insertion

Test Scenario 4 : Verify language selection

Test Scenario 5 : Verify cancel after selection of language

Test Scenario 6 : Verify PIN entry

Test Scenario 7 : Verify operation with a wrong PIN

Test Scenario 8 : Verify operation when a wrong PIN is entered 3 times consecutively

Test Scenario 9 : Verify cancel after entering the PIN

Test Scenario 10 : Verify account type selection

Test Scenario 11 : Verify operation when a wrong account type is selected with
respect to the inserted card

Test Scenario 12 : Verify cancel after account type selection

Test Scenario 13 : Verify withdrawal option selection

Test Scenario 14 : Verify cancel after selection of withdrawal

Test Scenario 15 : Verify amount entry

Test Scenario 16 : Verify operation with a wrong denomination in the amount

Test Scenario 17 : Verify withdrawal operation success (correct amount, right
receipt, able to take the card back)

Test Scenario 18 : Verify withdrawal operation with an amount greater than the
possible balance

Test Scenario 19 : Verify withdrawal operation with an amount greater than the day
limit

Test Scenario 20 : Verify withdrawal operation with a network problem

Test Scenario 21 : Verify withdrawal operation with a lack of money in the ATM

Test Scenario 22 : Verify withdrawal operation with the number of transactions per
day exceeded

Test Scenario 23 : Verify withdrawal operation with another bank's card

Test Scenario 24 : Verify withdrawal operation with a stolen card

2. Use Cases Based Test Case Design :

It is an alternative method to Functional Specification Based Test Case Design.
In this method the Test Engineers depend on Use Cases instead of functional
specifications to prepare Test Scenarios and Test Cases.

Flow : BRS → SRS (functional specifications) → HLD → LLDs → Coding (UT & IT) →
S/w Build. The BA and Test Lead derive Use Cases from the functional
specifications; the Test Engineers derive Test Scenarios and then Test Cases from
those Use Cases and apply them during System Test Execution.

As the above flow shows, the Business Analyst and Test Lead category people
develop use cases depending on the corresponding functional specifications in the
SRS. Every Use Case is an implemented form of one or more functional
specifications.

Use Case Format :-

1. Use Case ID : Unique number or name for future reference
2. Use Case Description : The summary of the corresponding functionality
3. Required Inputs : The required inputs for the corresponding functionality
4. Precondition : The necessary condition to satisfy before operating the
   corresponding functionality
5. Events List : A step-by-step procedure of events / tasks with expected
   outputs or outcomes
6. Activity Flow Diagram : A pictorial / diagrammatic representation of the
   corresponding functionality
7. Post Condition : The necessary tasks to do after the corresponding functionality
8. Alternative Events List : Alternative procedures for this functionality, if
   available
9. Prototype : A screenshot related to the corresponding functionality
10. Related Use Cases : The names of other use cases related to the corresponding
    functionality

Approach :
Step 1 : Collect the use cases of the responsible areas
Step 2 : Take one use case and study it
Step 3 : Identify the entry point, required inputs, normal flow, expected outputs,
exit point, alternative flows and exception rules
Step 4 : Prepare Test Scenarios depending on the above identified information
Step 5 : Review those scenarios and implement them as Test Cases
Step 6 : Go to Step 2 until all responsible use cases are studied

Use Case 1 :

1. Use Case ID : UC_Login
2. Use Case Description : The login operation provides authorization
3. Required Inputs : User ID in lower-case alphabets, 4-16 characters long;
   Password alphanumeric in lower case, 4-8 characters long
4. Precondition : New User Registration, to get a valid User ID & Password
5. Events List :

Events / Tasks                          Expected O/p or Outcome

Enter the User ID and Password          Next window for a valid user;
values and then click the OK button     "Invalid data" error msg. for an invalid user

6. Activity Flow Diagram :
   The user enters the User ID & Password; LOGIN validates them against the
   database; invalid data leads to an error message and Re-Login, and valid data
   leads to the Next Window.
7. Post Condition : The Logout operation is mandatory after a successful login
8. Alternative Events List : None
9. Prototype : (screenshot)
10. Related Use Cases : UC_New User, UC_Logout


Test Scenario 1 :- Check User ID

Boundary Value Analysis (BVA) (Size) :

Min = 4 Char. → Pass      Max = 16 Char. → Pass
Min-1 = 3 Char. → Fail    Max-1 = 15 Char. → Pass
Min+1 = 5 Char. → Pass    Max+1 = 17 Char. → Fail

Equivalence Class Partition (ECP) (Type) :

Valid    In-Valid
a-z      A-Z, 0-9, Special Characters, Blank Field

Test Scenario 2 :- Check Password

Boundary Value Analysis (BVA) (Size) :

Min = 4 Char. → Pass      Max = 8 Char. → Pass
Min-1 = 3 Char. → Fail    Max-1 = 7 Char. → Pass
Min+1 = 5 Char. → Pass    Max+1 = 9 Char. → Fail

Equivalence Class Partition (ECP) (Type) :

Valid       In-Valid
a-z, 0-9    A-Z, Special Characters, Blank Field

Test Scenario 3 :- Check OK Button Click

User ID        Password       Expected Output

Valid          Valid          Next Window
Valid          Invalid        Invalid Data Error Msg.
Invalid        Valid          Invalid Data Error Msg.
Valid Value    Blank Field    Invalid Data Error Msg.
Blank Field    Valid Value    Invalid Data Error Msg.

Test Scenario 4 :- Check Cancel Button

Event                                    Expected Output

Click Cancel after opening Login         Login window closed
Click Cancel after entering User ID      Login window closed
Click Cancel after entering Password     Login window closed

Test Scenario 5 :- Check Minimize Icon

Test Scenario 6 :- Check Maximize Icon
Test Scenario 7 :- Check Close Icon
Use Case 2 :

1. Use Case ID : UC_Book_Issue
2. Use Case Description : Issue a book to a valid user
3. Required Inputs : User ID in the format mm_yy_xxxx (the last 4 positions are
   digits); Book ID in the format BOOK_xxxx
4. Precondition : New User Registration, to get a valid User ID
5. Events List :

Events / Tasks                Expected O/p or Outcome

Enter the User ID and then    Focus moves to Book ID for a valid user;
click the “Go” button         "Invalid User" error msg. for an invalid user

Enter the Book ID and then    "Book Issued" message for an available book;
click the “Go” button         "Unavailable Book" message for an unavailable
                              Book ID

6. Activity Flow Diagram :
   The user enters the User ID; BOOK ISSUE validates it against the database
   (an invalid user leads to Re-Login). The user then enters the Book ID; BOOK
   ISSUE validates it against the database (an unavailable book leads to Re-Login);
   a valid Book ID results in the “Book Issued” message.
7. Post Condition : Receive the issued book from the computer operator
8. Alternative Events List : None
9. Prototype :

Book Issue                     - □ X
User ID  [            ]  [Go]
Book ID  [            ]  [Go]

10. Related Use Cases : UC_New User, UC_Book Feeding

Test Scenario 1 :- Verify User ID

Boundary Value Analysis (BVA) (Size) :

Min = Max = 10-position value → Pass
            9-position value → Fail
            11-position value → Fail

Equivalence Class Partition (ECP) (Type) :

Valid                                           In-Valid
[0][1-9][_][0-9][0-9][_][0-9][0-9][0-9][0-9]    a-z, A-Z, 0-9,
[1][0-2][_][0-9][0-9][_][0-9][0-9][0-9][0-9]    Special Char. except _, Blank Field

Test Scenario 2 :- Verify “Go” button click

User ID          Expected O/p after clicking ‘Go’

Valid Value      Focus to Book ID
Invalid Value    “Invalid User” Error Message
Blank Field      “Invalid User” Error Message

Test Scenario 3 :- Verify Book ID

Boundary Value Analysis (BVA) (Size) :

Min = Max = 9-position value → Pass
            8-position value → Fail
            10-position value → Fail

Equivalence Class Partition (ECP) (Type) :

Valid                                  In-Valid
[B][O][O][K][_][0-9][0-9][0-9][0-9]    a-z, A-Z except B, O, K,
                                       Special Char. except _, Blank Field
Test Scenario 4 :- Verify “Go” Click

Book ID           Expected O/p after clicking “Go”
Valid Book ID     “Book Issued” Msg.
Invalid Book ID   “Unavailable Book” Message
Blank Field       “Unavailable Book” Message

Test Scenario 5 :- Verify Minimize Icon

Test Scenario 6 :- Verify Maximize Icon
Test Scenario 7 :- Verify Close Icon

3. User Interface Based Test Design :

The Functional Specification Based and Use Cases Based Test Designs are used
to prepare Test Scenarios and Test Cases for Functional Testing. User Interface
Based Test Design is used by the Test Engineers to prepare Test Scenarios and
Test Cases for Usability Testing.

Flow : BRS → SRS (UI requirements) → HLD → LLDs → Coding (UT & IT) →
S/w Build. The UI requirements drive the Test Scenarios and then the Test Cases
applied during System Test Execution.

In this method the Test Engineers depend on the user interface requirements
in the SRS.
In general, the Test Engineers write common Test Scenarios for Usability
Testing, which are applicable to the screens of any type of application.

Test Scenario 1 :- Verify the spelling in every screen

Test Scenario 2 :- Verify the meaning of error messages

Test Scenario 3 :- Verify the Init Cap of labels in every screen

Test Scenario 4 :- Verify color uniqueness throughout the screens

Test Scenario 5 :- Verify font or style uniqueness throughout the screens

Test Scenario 6 :- Verify size uniqueness throughout the screens

Test Scenario 7 :- Verify the alignment of objects in every screen

Test Scenario 8 :- Verify line-spacing uniqueness throughout the screens

Test Scenario 9 :- Verify the tool tips of icons in every screen

Test Scenario 10 :- Verify the default object in every screen

Test Scenario 11 :- Verify uniform background colors of objects in every screen

Test Scenario 12 :- Verify scroll bars when the screen size is greater than the desktop

Test Scenario 13 :- Verify keyboard access to every object in every screen

Test Scenario 14 :- Verify abbreviations and shortcuts in the screens

Test Scenario 15 :- Verify multiple-data object positions in every screen

Ex : List box, menu, table ... etc.

Test Scenario 16 :- Verify the help messages (Manuals Support Testing)

Test Scenario 17 :- Verify the functionally grouped objects in every screen

Test Scenario 18 :- Verify the borders of functionally grouped objects in every screen

Test Scenario 19 :- Verify the labels of objects with respect to their functionality

Test Scenario 20 :- Verify the window labels with respect to their functionality

4. Functional and System Specification Based Test Design :

After completion of Test Scenario selection for Functional and Usability Testing,
the Test Engineers concentrate on Test Scenario selection for Non-Functional
Testing, depending on the functional and system specifications in the SRS.

Functional specifications describe the required functionalities of the software,
and system specifications describe the required environment to be used.

Flow : BRS → SRS (functional specifications + system specifications) →
HLD & LLDs → Coding (UT & IT) → S/w Build. These specifications drive the Test
Scenarios and then the Test Cases applied during System Test Execution.

Example Test Scenarios for Compatibility Testing :

Test Scenario 1 : Verify login on Windows NT with the customer-expected configuration
Test Scenario 2 : Verify login on Windows 2000 with the customer-expected configuration
Test Scenario 3 : Verify login on Windows Vista with the customer-expected configuration
And more...

Example Test Scenarios for Performance Testing :

Test Scenario 1 : Verify login under the customer-expected load and configuration
Test Scenario 2 : Verify login under more than the customer-expected load
And more...

Example Test Scenarios for Installation Testing :

Test Scenario 1 : Verify the setup program to start the installation
Test Scenario 2 : Verify the ease of the interface during installation
Test Scenario 3 : Verify the occupied disk space after installation
And more...

Test Case Format :

After completion of Test Scenario selection for the responsible areas in terms of
Functional, Usability and Non-Functional Testing, the Test Engineers implement
them as Test Cases, using the IEEE (Institute of Electrical & Electronics Engineers)
829 Test Case Format.

1. Test Case ID : Unique number / name for future reference
2. Test Case Name : The corresponding Test Scenario
3. Feature to be Tested : The name of the corresponding module or functionality
4. Test Suite ID : The unique number or name of the test batch of which this case
   is a member
5. Priority : The importance of this test case (P0 priority for
   Functional test cases, P1 priority for Non-Functional
   test cases and P2 priority for Usability test cases)
6. Test Environment : The required hardware and software to execute this test
7. Test Effort : Person-hours (Ex. 20 min is an average test execution time)
8. Test Duration : The date and time to execute this test
9. Test Setup : The necessary tasks to do before starting this test execution
10. Test Procedure / Data Matrix :

Test Procedure (columns filled during Test Design and Test Execution) :

Step No. | Action / Task | Required I/p | Expected O/p | Actual O/p | Result | Defect ID | Comments

Data Matrix :

I/p Object | ECP (Type) : Valid | ECP (Type) : Invalid | BVA (Range / Size) : Min | BVA (Range / Size) : Max

11. Test Case Pass / Fail Criteria : The final result of this test case after execution

Note 1 : In general, test engineers are not interested in filling all the fields of the
Test Case Format, due to lack of time and the similarity in field values across test
cases.

Note 2 : Test engineers use the test procedure for operational test cases and the
data matrix for input-object test cases.

Functional Specification :
In a banking application, valid employees create fixed deposit operations
with depositor-provided information. In this fixed deposit operation, the employees
fill the below fields.
Depositor Name : Alphabets in lower case with Init Cap; allows multiple words in the name
Amount : 1500 to 1,00,000
Time : Up to 12 months
Interest : Numeric with one decimal
If the Time > 10 months, then the Interest > 10%, per bank rules
Prepare Test Scenarios and Test Cases :

Test Scenario 1 : Verify Depositor Name
Test Scenario 2 : Verify Amount
Test Scenario 3 : Verify Time
Test Scenario 4 : Verify Interest
Test Scenario 5 : Verify Fixed Deposit Operation
Test Scenario 6 : Verify Fixed Deposit Operation with the Bank Rule

Test Case Documents :

Test Case 1 :-
1. Test Case ID : TC_FD_Ravi_24th May_1
2. Test Case Name : Verify Depositor Name
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : The Depositor Name object is taking inputs
6. Data Matrix :

I/p Object      | ECP Valid      | ECP Invalid                 | BVA (Size) Min | BVA (Size) Max
Depositor Name  | ([A-Z][a-z]*)* | 0-9, Spl. Char, Blank Field | 1 Char         | 256 Char

Test Case 2 :-
1. Test Case ID : TC_FD_Ravi_24th May_2
2. Test Case Name : Verify Amount
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : The Amount object is taking inputs
6. Data Matrix :

I/p Object | ECP Valid | ECP Invalid                      | BVA (Range) Min | BVA (Range) Max
Amount     | 0-9       | a-z, A-Z, Spl. Char, Blank Field | 1500            | 100000

Test Case 3 :-
1. Test Case ID : TC_FD_Ravi_24th May_3
2. Test Case Name : Verify Time
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : The Time object is taking inputs
6. Data Matrix :

I/p Object | ECP Valid | ECP Invalid                      | BVA (Range) Min | BVA (Range) Max
Time       | 0-9       | a-z, A-Z, Spl. Char, Blank Field | 1 Month         | 12 Months

Test Case 4 :-
1. Test Case ID : TC_FD_Ravi_24th May_4
2. Test Case Name : Verify Interest
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : The Interest object is taking inputs
6. Data Matrix :

I/p Object | ECP Valid                  | ECP Invalid                      | BVA (Range) Min | BVA (Range) Max
Interest   | 0-9 with one decimal place | a-z, A-Z, Spl. Char, Blank Field | 0.1             | 100
Test Case 5 :-
1. Test Case ID : TC_FD_Ravi_24th May_5
2. Test Case Name : Verify Fixed Deposit Operation
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Valid values are available in hand
6. Test Procedure :

Step No. | Action                    | Required I/p         | Expected O/p
1        | Connect to Bank Server    | Valid Emp ID         | Menu appears
2        | Select “FD” option        | None                 | Fixed Deposit form opened
3        | Fill fields and click OK  | All are valid        | Acknowledgement
         |                           | Any one invalid      | Error Msg.
         |                           | Any one blank field  | Error Msg.

Test Case 6 :-
1. Test Case ID : TC_FD_Ravi_24th May_6
2. Test Case Name : Verify Fixed Deposit Operation with the Bank Rule
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Valid values are available in hand
6. Test Procedure :

Step No. | Action                    | Required I/p                    | Expected O/p
1        | Connect to Bank Server    | Valid Emp ID                    | Menu appears
2        | Select “FD” option        | None                            | Fixed Deposit form opened
3        | Fill fields and click OK  | Valid Name, Amount; Time > 10   | Acknowledgement
         |                           | with Interest > 10              |
         |                           | Valid Name, Amount; Time > 10   | Error Msg.
         |                           | with Interest <= 10             |

As in the above example, the Test Engineers implement Test Scenarios as Test
Cases. Every test case is a combination of the corresponding Test Scenario and the
details required to apply that test on the S/w build.
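As a sketch, Test Case 6's bank rule can itself be written as an executable check
(the function name is hypothetical): when the time exceeds 10 months, the interest
must exceed 10%, otherwise the deposit is rejected.

def fixed_deposit_accepted(time_months: int, interest: float) -> bool:
    if time_months > 10:
        return interest > 10.0   # bank rule from the specification
    return True                  # the rule does not apply for shorter terms

assert fixed_deposit_accepted(11, 10.5) is True    # acknowledgement expected
assert fixed_deposit_accepted(11, 10.0) is False   # error message expected
assert fixed_deposit_accepted(9, 5.0) is True      # rule not triggered
print("bank-rule checks passed")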

Test Cases Selection Review :

After completion of Test Scenario and Test Case writing, the Test Lead and Test
Engineers conduct a review meeting to estimate the completeness and correctness
of those documents. In this review the Testing Team depends on the below coverages.
□ Requirements-oriented coverage (modules)
□ Testing-topic-oriented coverage (UT, FT, NFT)

IV. Test Execution :-

After completion of Test Design and review, the Testing Team concentrates
on the below issues.
□ Formal meeting with developers
□ Test environment establishment
□ Levels of test execution

□ Formal Meeting :-
In general, the Test Execution process starts with a formal meeting
between the Testing Team and Development Team representatives. In this meeting
the corresponding representatives concentrate on build version control and defect
tracking.
Under the build version control concept, the Development Team modifies the S/w
build coding to resolve defects, and they release the modified build with a unique
version number. This version numbering system is understandable to the Test
Engineers, to distinguish the old build from the modified build. For this version
controlling, the developers also use version control tools (Ex :- VSS (Visual
SourceSafe)).
To report mismatches to the Development Team, the Test Engineers first report each
mismatch to the Defect Tracking Team (DTT).
Test Lead + Project Manager + Project Lead + Business Analyst → DTT
□ Test Environment Establishment :-
After completion of the formal meeting, the Testing Team concentrates on Test
Environment establishment, with all required hardware and software.

The server hosts the Configuration Repository; the Development Environment,
Project Management and Test Environment connect to it via FTP or TCP/IP.

FTP : File Transfer Protocol (single location)
TCP/IP : Transmission Control Protocol / Internet Protocol (different locations)

□ Levels of Test Execution :-

Development releases the Initial Build for Level-0 (Sanity) testing; the resulting
Stable Build then undergoes Level-1 (Comprehensive) testing, which produces defect
reports; after defect fixing, each Modified Build undergoes Level-2 (Regression)
testing; and Level-3 (Final Regression) runs before release.

Case Study :-

Initial Build
↓ Sanity Testing (Level-0)
Stable Build
↓ Comprehensive Testing (Level-1)
Defect Detection
↓
Modified Build
↓ Regression Testing (Level-2)
Defect Closing
↓
Master Build
↓ Final Regression (Level-3)
Golden Build (able to release)

□ Levels of Test Execution Vs Test Cases :-

Level-0 → Some P0 (Functional) test cases
Level-1 → All P0, P1 & P2 test cases
Level-2 → Selected P0, P1 & P2 test cases with respect to the modification
Level-3 → Selected P0, P1 & P2 test cases with respect to defect density

□ Level-0 Sanity Testing :-

After downloading the Initial Build from the Configuration Repository on the server,
the Testing Team concentrates on Level-0 Sanity Testing, to estimate the testability
of the software. Testability means Understandable, Operable, Observable,
Controllable, Consistent, Simple, Maintainable and Automatable.
If the Initial Build is not stable, then the Testing Team sends that build back to the
developers. If the build is a Stable Build, then the Test Engineers concentrate on
Level-1 Test Execution to detect defects. Level-0 testing is also known as Sanity
Testing / Smoke Testing / Testability Testing / Tester Acceptance Testing / Build
Verification Testing / Octangle Testing.
□ Level-1 Comprehensive / Real Testing :-
In Level-1 Test Execution, the Test Engineers execute all test cases as
batches. Every test batch consists of a set of dependent test cases; within a batch,
the end state of one test is the base state for the next test. Test batches are also
known as Test Suites, Test Sets, Test Builds or Test Chains.

Flow : Receive the Stable Build from the developers → arrange the test cases as
batches → select a batch → select a test case → take a step in the case → compare
the expected and actual values → if they differ, report the defect; otherwise
continue with the next step, next case and next batch.

As the above flow shows, the Test Engineers continue test execution batch
by batch and case by case within every batch. If a test case step's expected value is
not equal to the actual value, then the Test Engineer concentrates on defect
reporting. If possible, they also continue test execution.
In Level-1 test execution, the Test Engineers prepare the Test Log
document to record the test results.
Test Log Document Format :-

Test Case ID | Result (Pass / Fail) | Defect ID | Executed By | Executed On | Comments

There are three types of test results :

→ Passed : all expected values are equal to the actual values
→ Failed : any one expected value is not equal to the actual value
→ Blocked : test execution postponed due to incorrect parent functionality
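A sketch of the Test Log record and its three result types as a small Python data
model (illustrative only; the field names mirror the format above):

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Result(Enum):
    PASSED = "Passed"    # all expected values equal to actual
    FAILED = "Failed"    # any one expected value not equal to actual
    BLOCKED = "Blocked"  # postponed due to incorrect parent functionality

@dataclass
class TestLogEntry:
    test_case_id: str
    result: Result
    defect_id: Optional[str]   # filled only when the result is Failed
    executed_by: str
    executed_on: str
    comments: str = ""

log = [TestLogEntry("TC_FD_Ravi_24th May_1", Result.PASSED, None, "Ravi", "24-May")]
print(log[0])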
V. Defect Reporting & Tracking :-
During Level-1 Test Execution, some test case expected values are not equal to the
actual values. These mismatches are called Defects / Issues / Bugs / Flaws.
Defect Report :-
1. Defect ID : Unique number or name
2. Description : Summary of the mismatch between the tester-expected
   value and the build's actual value
3. Build Version ID : The version number of the current build
   (the Test Engineer detected this defect in that build)
4. Feature : The name of the module or functionality
   (the Test Engineer detected this defect in that module)
5. Test Case ID : The ID of the failed test case
   (the Test Engineer detected this defect during that case's execution)
6. Reproducible : Yes → the defect appears every time in test execution
   No → the defect appears rarely in test execution
7. If Yes, attach the procedure :
8. If No, attach the procedure and screenshots :
9. Severity : The seriousness of the defect in terms of functionality
   High / Critical :- Not able to continue testing without it being resolved
   Medium / Major :- Able to continue testing, but it is compulsory /
   mandatory to resolve
   Low / Minor :- Able to continue; it may or may not be resolved
10. Priority : The importance of solving the defect in terms of customer
    interest (High / Medium / Low)
11. Detected By : The name of the Test Engineer
12. Detected On : The date of detection and submission
13. Status : New : reported for the first time
    Re-Open : re-reported
14. Assigned To : Reported to the Defect Tracking Team
15. Suggested Fix : A suggestion to solve the defect (optional)
Defect Reporting Process :

The Test Engineer reports the defect to the DTT as “New”. The DTT analyzes the
defect:
→ If it is not accepted, the defect status is changed to “Rejected”.
→ If it is accepted, the DTT categorizes the defect and changes its status to “Open”:
   - a test-data-related defect is assigned to the Testing Team;
   - a test-procedure-related defect is assigned to the Testing Team;
   - a hardware or infrastructure defect is assigned to the Hardware Team;
   - a code-related defect is assigned to the Development Team.
Case Study :-

Test Engineer → (report defect) → Defect Tracking Team → assigned to Project Lead
+ Programmers, for a code-related defect.

Test Engineer → (report defect) → Defect Tracking Team → assigned to BA + TL + TE,
for a test case procedure or test data related defect.

Test Engineer → (report defect) → Defect Tracking Team → assigned to the H/w /
Infrastructure Team, for a hardware or environment related defect.

Defect Life Cycle or Bug Life Cycle :

New → Assigned / Rejected / Deferred
Assigned → Open → Fixed → Closed
Fixed → Reopen → Open (when the fix is not correct)

New : reported for the first time
Assigned : accepted by the DTT and assigned to a responsible team
Rejected : not accepted by the DTT
Deferred : accepted, but not interesting to solve now due to low severity and low priority
Open : the responsible team is ready to resolve
Fixed : the responsible team has resolved the defect
Reopen : the defect was not correctly solved and is re-reported
Closed : the defect is correctly solved and confirmed through Regression Testing

Test Data Related Defect Fixing :


If our reported defect is accepted by the Defect Tracking Team (DTT) and they
decide that the defect is a test data related mismatch, the responsible testing team
concentrates on Correct Data Collection (CDC) without any conceptual gap, with the
help of the BA and TL. The Test Engineers then re-execute the previously failed test on
the same build with the correct test data. This test repetition is called Retesting or
Confirmation Testing.
Testing Build → Failed Test Case → Defect Reporting → (Data Related Defect)
→ Collect Correct Data → Repeat the Test Case on the same Build with the correct data

Retesting / Confirmation Testing


Test Script or Procedure Related Defect Fixing :

If our reported defect is accepted as a Test Procedure Related Defect by the DTT,
the responsible Testing Team prepares the correct procedure for that Test Case with the
help of the TL and BA.

Testing Build → Failed Test Case (incorrect procedure) → Report to DTT
→ (Procedure Related Defect) → Correct Test Procedure prepared by Test Engineers
→ Repeat the Test Case on the same Build

Retesting / Confirmation Testing

Infrastructure Related Defect Fixing :

If our reported defect is accepted by the DTT as an Environment Related,
Infrastructure Related or Hardware Related Defect, the responsible Hardware Team
re-establishes the correct test environment.
Testing Build → Failed Test Case → Report to DTT → (Environment Related Defect)
→ Test Environment re-established by the H/w Team
→ Repeat the Test Case on the Build in the modified environment

Retesting / Confirmation Testing


Code Related Defect Fixing :-
If our reported defect is accepted as a Code Related Defect, the responsible
Programmers / Developers perform changes in the build coding to resolve that defect.
→ PL updates the status of the defect to "Open"
→ Impact Analysis by Programmers
→ Selected coding areas reviewed by PL
→ If changes are required in documents : documents reviewed and changed by the concerned persons (BA / Designers) & Project Lead
→ Changes in coding by Programmers
→ Unit Test & Make Modified Build
→ Release Modified Build with a unique version number and a Release Note
→ PL changes the defect status to "Fixed"

After receiving the modified build from the Development Team, the Testing Team
concentrates on Retesting & Regression Testing.

Modified Build → re-execute the previously failed test (Retesting) and the previously
passed related tests (Regression Testing)
→ All tests pass : the defect fix is confirmed with no side effects
→ Any test fails : report it to the DTT as a code related defect (it goes back to the Programmers)
From the above model, the test engineer re-executes the previously failed test
on the modified build to confirm the defect fixing; this is called Retesting or
Confirmation Testing. To identify side effects of the defect fixing modifications in the
modified build, the test engineers re-execute previously passed, related tests on that
modified build; this is called Regression Testing.

Level-2 Regression Testing :

→ Take the Modified Build and Release Note
→ Identify the severity of the fixed defect in that Modified Build

Severity of fixed defect | P0 Cases | P1 Cases | P2 Cases
High | All | All | Carefully Selected
Medium | All | Carefully Selected | Some
Low | Some | Some | Some

→ Re-execute the selected cases on that modified build to detect side effects with
respect to the modifications specified in the Release Note

Case 1:-
If the development team fixed defect severity is High then the Test Engineers
are repeating All P0, All P1 and Carefully Selected P2 Test Cases on that Modified Build
w.r.t. modifications specified in release note.
Case 2 :-
If the Development Team fixed defect severity is Medium then the Test
Engineers are repeating All P0, Carefully Selected P1 and Some P2 Test Cases on that
modified build w.r.t. modifications specified in release note.
Case 3 :-
If the Development Team fixed defect severity is Low then the Test Engineers
are repeating Some P0, Some P1 and Some P2 Test Cases on that modified build w.r.t.
modifications specified in release note.
Case 4 :-
If the development team releases a modified build w.r.t. changes in Customer
Requirements, then the Test Engineers re-execute All P0, All P1 and Carefully Selected
P2 Test Cases on that modified build w.r.t. the changes in Customer Requirements. In
this case the Test Engineers also perform changes in the Test Scenarios and Test Cases
w.r.t. the changes in Customer Requirements.
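Ex (hypothetical numbers, for illustration only) :- Suppose the test suite contains 40 P0,
120 P1 and 200 P2 test cases. For a High severity fix, the Test Engineers re-execute all
40 P0 cases, all 120 P1 cases and a carefully selected subset of the P2 cases (say, the 30
P2 cases touching the modified module). For a Low severity fix, only a small sample
from each priority is re-executed.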
VI. Test Closure :-
After completion of all reasonable tests and the closing of detected defects, the
test lead conducts a review meeting to decide whether to stop testing. In this review the
TL analyzes the below factors with the involvement of the Test Engineers.

1. Coverage Analysis :-
→ Requirements Oriented Coverage (Module)
→ Testing Topic Related Coverage (Usability, Functional, Non-Functional)
2. Defect Density Calculation :
Ex :
Module / Requirement | % of Defects
A | 20%
B | 20%
C | 40% (needs Regression Testing)
D | 20%
Total | 100%
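One common way to read the above percentages (an assumption about the intended
calculation): Defect Density of a module = (defects detected in that module / total
defects detected) × 100. Module C contributed 40 of every 100 defects, so it is the high
defect density module and needs additional regression attention.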

3. Analysis of Deferred Defects :

Whether the deferred defects can safely be postponed to a future release or not?

Level-3 Final Regression Testing :

After completion of a successful Test Closure review, the Testing Team
concentrates on Level-3 or Final Regression Testing.

→ Identify the high defect density modules
→ Effort estimation (person / hours)
→ Plan Regression
→ Regression Testing
→ Golden Defect Reporting, if required (defects detected at this final stage are often called Golden Defects)
VII. User Acceptance Testing (UAT) :


After completion of Final Regression Testing, the Project Management
concentrates on User Acceptance Testing to collect feedback from Real Customers /
Model Customers.
There are two ways of conducting User Acceptance Testing: Alpha Testing (at the
development site) and Beta Testing (at the customer site).

VIII. Sign Off :

After completion of successful User Acceptance Testing and their resulting
modifications, the Test Lead prepares the Final Test Summary Report and relieves the
corresponding Test Engineers from this project. The Final Test Summary Report is a
combination of the below documents.
→ Test Strategy / Methodology
→ Test Plan(s)
→ Test Scenarios
→ Test Cases
→ Test Logs
→ Defect Reports
→ Requirements Traceability Matrix
Requirement ID | Test Case ID | Result (Pass / Fail) | Detected Defect ID | Status (Closed / Deferred) | Comments
It is a mapping between requirements and defects via test cases.

Case Study (5Months of Testing Process) :-

Deliverable | Responsibility | Duration
Test Strategy | PM / TM | 4-5 days
Test Planning | Test Lead | 4-5 days
Requirements Training to Test Engineers | BA + Domain / Subject Experts | 5-10 days
Test Scenarios & Review | Test Engineer | 5-10 days
Test Cases Implementation | Test Engineer | 10-15 days
Review Build + Level-0 (Sanity Testing) | Test Engineer | 2-3 days
** Test Automation | Test Engineer | 10-15 days
Level-1 and Level-2 Test Execution | Test Engineer | 30-40 days
Defect Reporting | Test Engineer | Ongoing (same day)
Status Reporting | Test Lead | Weekly twice
Test Closure & Level-3 | Test Lead & Test Engineer | 5-10 days
User Acceptance Testing | Real / Model Customers in front of Developers and Testers | 3-5 days
Sign Off | Test Lead | 1-2 days

W-Model

The W-Model pairs every development stage (Requirements Analysis, S/w Design,
Coding + Unit Testing, Integration Testing, Build) with system testing activities,
conducted both manually and with test automation tools:

→ Non-Functional Testing (N.F.T) : Load Runner & JMeter
→ Functional Testing (F.T) : Win Runner / QTP / Robot / Silk Test
→ Usability Testing : no tools in the market

Note : Test Automation is optional.

From the above W-Model, testing tools are available for Functional Testing and for
some Non-Functional Testing such as Load, Endurance and Data Volume Testing.
The remaining Non-Functional tests and Usability Testing are conducted by the Test
Engineers manually.
Win Runner 8.0 :

→ Developed by Mercury Interactive, later taken over by Hewlett-Packard (HP)
→ Functional Testing tool
→ This version was released in January 2005
→ Supports VB, .NET, Java, PowerBuilder, HTML, Delphi, VC++, D2K and Siebel
technology software for Functional Testing
→ To support SAP, PeopleSoft, XML, Multimedia and Oracle Applications ("ERPs")
in addition to the above technologies, test teams use Quick Test Professional (QTP)
→ Win Runner runs on Windows only
→ X-Runner is used for Unix / Linux

Win Runner Test Process :

Receive Stable Build from Developers after Sanity Testing
↓
Identify Functional Test Cases (Priority P0) to Automate (English + Manual)
↓
Create Automation Programs (TSL) for those Functional Test Cases
↓
Run Programs on the S/w Build to detect defects
↓
Defect Reporting, if required

From the above approach, the Test Engineers convert manual Functional Test Cases
into Test Script Language (TSL) programs.
TSL is a "C" like language.
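A minimal sketch of TSL's C-like syntax (illustrative only; the variable name is invented):

# comments in TSL start with the # symbol
for (i = 1; i <= 3; i++)
{
    if (i == 2)
        printf ("i equals two");
    else
        printf ("i equals " & i); # '&' concatenates strings in TSL
}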

Add-in Manager :
This window lists out all the technologies that Win Runner supports with respect to the
license. The Test Engineers select the current project's technology from that list.

Welcome Screen :

After a successful Win Runner launch, the Welcome Screen appears on the
desktop. The screen consists of 3 options:
→ Create a New Test
→ Open an Existing Test
→ A Quick Preview of Win Runner
Win Runner Icons :
→ Start Recording
→ Run From Top
→ Run From Arrow
→ Stop Recording
→ Pause (Stop Run)
Win Runner Test Automation Frame Works :


Win Runner 8.0 allows you to convert our manual Functional Test Cases
into Test Script Language (TSL) programs in 4 ways:

→ Record and Playback Frame Work


→ Data Driven Frame Work
→ Keyword Driven Frame Work
→ Hybrid Frame Work

I. Record & Playback Frame Work :

In this frame work the Test Engineers convert manual test cases into
automation programs with a two-step procedure.

A. Recording Operations
B. Inserting Check Points

A. Recording Operations :-

In test automation program creation, the Test Engineers record the S/w build
operations. There are two modes of recording: Context Sensitive Mode and
Analog Mode.
In Context Sensitive Mode, the tool records mouse and keyboard operations
with respect to the objects and windows in the build. To select this mode the Test
Engineers use the below options.

Click “Start Recording” icon Once


Test Menu → Record Context Sensitive Option.

To record mouse pointer movements with respect to desktop co-ordinates, the Test
Engineers use Analog Mode in Win Runner. To select this mode we can use the below
options.
Click “Start Recording” icon Twice
Test Menu → Record Analog
Ex :-
Digital Signatures, Graphs Drawing and Image Movements.

“F2” is a short cut key to change from one mode to another mode.

Note :-
In Analog Mode, Win Runner records mouse pointer movements with respect to
desktop co-ordinates. For this reason the Test Engineers must not change the
corresponding window position or the monitor resolution between recording and playback.
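A representative sketch of what the two modes record (the exact statements depend on
the build; the window name and track number here are invented):

# Context Sensitive Mode records object-level statements
set_window ("Login", 1);
button_press ("OK");

# Analog Mode records co-ordinate-level statements
move_locator_track (1); # replays a recorded mouse track
click ("Left"); # clicks at the current pointer position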

B. Inserting Check Point :

After recording the build operations, the Test Engineers insert check points
with respect to their expectations. Every check point compares a Test Engineer given
expected value with the build's actual value. There are four check points in Win Runner.

→ GUI (Graphical User Interface) Check Point
→ Bitmap Check Point
→ Database Check Point
→ Text Check Point

→ GUI (Graphical User Interface) Check Point :

To verify properties of objects, we can use this check point. It consists of 3 sub
options.
i. For Single Property
ii. For Object / Window
iii. For Multiple Objects

i. For Single Property :-


To verify one property of one object we can use this option.

Ex.-1 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Open an order in Flight Reservation Window | Order No. as Valid | Delete Order button "enabled"

Build :- Flight Reservation Window

Automation Program :-

set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Delete Order", "enabled", 1);

Ex.-2 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Open an order in Flight Reservation Window | Order No. as Valid | Insert Order button "disabled"

Build :- Flight Reservation Window

Automation Program :-

set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Insert Order", "enabled", 0);

Note :- TSL is case sensitive language and it is taking # symbol for comments.
Ex.-3 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Open an existing order in Flight Reservation Window | Valid Order No. | Update Order button "disabled"

Build :- Flight Reservation Window

Automation Program :-

set_window ("Flight Reservation", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Update Order", "enabled", 0);

Note :- No need to “Stop Recording” before “Inserting Check Point”

Case Study :-

1) Manual : Click a button
TSL : button_press ("Button Name");

2) Manual : Select a menu option
TSL : menu_select_item ("Menu Name; Option Name");

3) Manual : Fill a text box
TSL : edit_set ("Edit Box Name", "Given Text");

4) Manual : Select a radio button
TSL : button_set ("Radio Button Name", ON);

5) Manual : Check box selection
TSL : button_set ("Check Box Name", ON/OFF);

6) Manual : Select an item in a list box
TSL : list_select_item ("List Box Name", "Selected Item");

7) Manual : Fill a password box
TSL : password_edit_set ("Password Object", "encrypted value");

8) Manual : Activate a window
TSL : win_activate ("Window Name");

9) Manual : Auto focus to a window through an object operation
TSL : set_window ("Window Name", time);

Ex.-4 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Enter User ID and Password | Valid User ID & Password | "OK" button "enabled"

Build :- Login window with User ID and Password boxes and an OK button

Automation Program :-

set_window ("Login", time);
edit_set ("User ID", "Valid Value");
password_edit_set ("Password", "Encrypted Value");
button_check_info ("OK", "enabled", 1);
Ex.-5 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to Student window | None | "Submit" button "disabled"
2 | Select Roll No. | None | "Submit" button "disabled"
3 | Enter Student Name | Valid Name | "Submit" button "enabled"

Build :- Student window with a Roll No. list box, a Name text box and a Submit button

Automation Program :-

win_activate ("Student");
button_check_info ("Submit", "enabled", 0);
list_select_item ("Roll No.", "Selected Item");
button_check_info ("Submit", "enabled", 0);
edit_set ("Name", "Valid Value");
button_check_info ("Submit", "enabled", 1);

Ex.-6 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to Employee window | None | "Accept" button "disabled"
2 | Select Emp. No. | None | "Accept" button "disabled"
3 | Enter Employee Name | Valid Name | "Accept" button "disabled"
4 | Select Male / Female radio button | None | "Accept" button "enabled"

Build :- Employee window with an Emp. No. list box, a Name text box, Male / Female radio buttons and an Accept button

Automation Program :-
win_activate ("Employee");
button_check_info ("Accept", "enabled", 0);
list_select_item ("Emp. No.", "Selected Item");
button_check_info ("Accept", "enabled", 0);
edit_set ("Name", "Valid Value");
button_check_info ("Accept", "enabled", 0);
button_set ("Button Name (Male/Female)", ON);
button_check_info ("Accept", "enabled", 1);

Ex.-7 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to Flight Reservation Window | None | "Update Order" button "disabled"
2 | Open an Existing Order | Valid Order No. | "Update Order" button "disabled"
3 | Perform a Change in that Order | Valid Change | "Update Order" button "enabled"

Build :- Flight Reservation

Automation Program :-
win_activate ("Flight Reservation");
button_check_info ("Update", "enabled", 0);
menu_select_item ("File; Open Order...");
set_window ("Open Order", 1);
button_set ("Order No", ON);
edit_set ("Edit", 1);
button_press ("OK");
button_check_info ("Update", "enabled", 0);
edit_set ("Name", "Ravi Kiran");
button_check_info ("Update", "enabled", 1);

Ex.-8 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to Flight Reservation | None | Date of Flight object focused
2 | Open an Existing Order | Valid Order No. | Date of Flight object focused
Build :- Flight Reservation
Automation Program :-
win_activate ("Flight Reservation");
obj_check_info ("Date of Flight Object", "focused", 1);
set_window ("Flight Reservation", 1);
menu_select_item ("File; Open Order...");
set_window ("Open Order", 1);
button_set ("Order No", ON);
edit_set ("Edit", 1);
button_press ("OK");
obj_check_info ("Date of Flight Object", "focused", 1);

ii. For Object / Window :-

To verify more than one property of one object we can use this option.
Ex.-1 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to Flight Reservation and Open an Existing Order | Valid Order No. | Tickets object value is numeric and in between 1-10

Build :- Flight Reservation
Automation Program :-
win_activate ("Flight Reservation");
set_window ("Flight Reservation", 1);
menu_select_item ("File; Open Order...");
set_window ("Open Order", 1);
button_set ("Order No", ON);
edit_set ("Edit", 1);
button_press ("OK");
set_window ("Flight Reservation", 1);
obj_check_gui ("Tickets:", "list1.ckl", "gui1", 1);
# list1.ckl specifies the Range and Regular Expression properties
# gui1 holds the expected values: 1-10 and [0-9]*
iii. For Multiple Objects :-

We can use this option to verify more than one property of more than one object.
Ex.-1 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to Flight Reservation Window | None | Insert Order, Delete Order and Update Order buttons are disabled
2 | Open an Existing Order | Valid Order No. | Insert Order and Update Order buttons are disabled and Delete Order button is enabled
3 | Perform a Change in that Order | Valid Change | Insert Order is disabled, Update Order and Delete Order are enabled

Build :- Flight Reservation


Automation Program :-
win_activate ("Flight Reservation");
win_check_gui ("Flight Reservation", "list.ckl", "gui1", 1); # Check Point
set_window ("Flight Reservation", 1);
menu_select_item ("File; Open Order...");
set_window ("Open Order", 1);
button_set ("Order No", ON);
edit_set ("Edit", 1);
button_press ("OK");
win_check_gui ("Flight Reservation", "list.ckl", "gui2", 1); # Check Point
set_window ("Flight Reservation", 1);
edit_set ("Tickets:", "3");
win_check_gui ("Flight Reservation", "list.ckl", "gui3", 1); # Check Point

Note :
To save test creation and execution time, the Test Engineers insert the "For
Multiple Objects" check point.
The "For Multiple Objects" option is applicable to multiple objects in the same window only.

Case Study-1 :
obj_check_info ( ) → for single property
obj_check_gui ( ) → for object / window
win_check_gui ( ) → for multiple objects
Case Study-2 :
Object Type | Testable Properties
Push Button | Enabled, Focused
Radio Button | Enabled, Status (ON / OFF)
Check Box | Enabled, Status (ON / OFF)
List / Combo Box | Enabled, Value, Count
Menu | Enabled, Count
Text Box / Edit Box | Enabled, Value, Focused, Range, Regular Expression, Date Format, Time Format, ...
Table Grid | Columns Count, Rows Count, Cell Content

Case Study-3 :

BRS → SRS (Functional Specifications) / Use Cases → HLD & LLDs → Coding (UT & IT) → Build
From the SRS / Use Cases : Test Scenarios → Test Cases → Automation Programs
On the Build : Manual Testing → Automation Testing

→ Bitmap Check Point :

We can use this check point to compare images. This check point supports
static images only.
To compare dynamic images such as movies, the Test Engineers use
Manual Testing (or) the QTP tool.
Ex.-1 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to Flight Reservation build old version and select About option in Help menu | None | The old version logo is equal to the Flight Reservation build new version logo

Build :- Flight Reservation
Automation Program :-
win_activate ("Flight Reservation");
set_window ("Flight Reservation", 1);
menu_select_item ("Help; About...");
set_window ("About Flight Reservation System", 1);
obj_check_bitmap ("Button", "Img1", 1); # Check Point
Ex.-2 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to Flight Reservation Window and select Analysis menu, Graphs option | None | Graph opened for existing data
2 | Open an Existing Order and perform a change in No. of Tickets | Valid Change | Existing graph changed with respect to changes in No. of Tickets

Build :- Flight Reservation
Automation Program :-
win_activate ("Flight Reservation");
set_window ("Flight Reservation", 1);
menu_select_item ("Analysis; Graphs...");
set_window ("Graphs", 1);
obj_check_bitmap ("Gs_Drawing", "Img1", 1, 158, 26, 178, 154);
# Screen area Check Point
win_close ("Graphs");
set_window ("Flight Reservation", 1);
menu_select_item ("File; Open Order...");
set_window ("Open Order", 1);
button_set ("Order No", ON);
edit_set ("Edit", 1);
button_press ("OK");
set_window ("Flight Reservation", 1);
edit_set ("Tickets:", "3"); # Change in Tickets
button_press ("Update Order");

Note :
The Win Runner bitmap check point compares complete images or parts of
images.
For object / window :
obj_check_bitmap ("Image Name", "Image File", time);
For screen area :
obj_check_bitmap ("Image Name", "Image File", time, x, y, width, height);

→ Text Check Point :

To verify manipulations (or) calculations of our application build we can use this
check point. This check point is a combination of 2 concepts: the Get Text option
and the If condition.
The Get Text option consists of 2 sub options:
1. From Object / Window 2. From Screen Area.

1. From Object / Window : To capture an object value we can use this option.

Navigation :- Insert Menu → Get Text → Select Required Object

Syntax :- obj_get_text ( “Object Name”, Variable);

2. From Screen Area : To capture selected value from a screen, we can use this option

Navigation :- Insert Menu → Get Text → From Screen Area → Select the required value
region in that screen → Right click to release the selection.

Syntax :- obj_get_text (“Screen Name”, Variable, x1,y1,x2,y2);

If Condition :-
TSL is a “C” like language. It allows you to write control statements with “c”
syntaxes.
if (condition)
{
----
----
}
else
{
----
----
}
Ex-1 :-
Manual Expected :- Output = Input * 100
Build :-
Sample

Input xxxxxxx

Output xxxxxxx

Automation Program :-
set_window ("Sample", 1);
obj_get_text ("Input", x);
obj_get_text ("Output", y);
if (y == x*100)
printf ("Test is Pass");
else
printf ("Test is Fail");

Ex-2 :-

Manual Expected :- Total = Tickets * Price in an opened order

Build :- Flight Reservation

Automation Program :-

set_window ("Flight Reservation", 1);
menu_select_item ("File; Open Order...");
button_set ("Order No.", ON);
edit_set ("Edit", 1);
button_press ("OK");
obj_get_text ("Total", tot);
obj_get_text ("Price", p);
obj_get_text ("Tickets", t);
p = substr (p, 2, length(p)-1); # strip the leading "$"
tot = substr (tot, 2, length(tot)-1); # strip the leading "$"
if (tot == p*t)
printf ("Test is Pass");
else
printf ("Test is Fail");
Ex-3 :
Manual Expected : - Total = File1 size + File2 Size
Build :
Audit

File1 xxxxxxx KB

File2 xxxxxxx KB

Total xxxxxxx KB

Automation Program :
set_window ("Audit", 1);
obj_get_text ("File1", x);
obj_get_text ("File2", y);
obj_get_text ("Total", z);
x = substr (x, 1, length(x)-2); # strip the trailing "KB"
y = substr (y, 1, length(y)-2);
z = substr (z, 1, length(z)-2);
if (z == x+y)
printf ("Test is Pass");
else
printf ("Test is Fail");

Ex-4 :

Manual Expected : - Total = Price * Quantity

Build :
Shopping

Quantity xxxxxxx

Price Rs: xxxxx /-

Total Rs: xxxxx /-


Automation Program :

set_window ("Shopping", 1);
obj_get_text ("Quantity", Q);
obj_get_text ("Price", P);
obj_get_text ("Total", T);
P = substr (P, 4, length(P)-5); # strip the surrounding "Rs: " and " /-"
T = substr (T, 4, length(T)-5);
if (T == P * Q)
printf ("Test is Pass");
else
printf ("Test is Fail");

tl_step ( ) :-
"tl" stands for Test Log (Test Result). We can use this statement to prepare our
own Pass / Fail result.
Syntax :-
tl_step ("Step Name", 0/1, "Message");
'0' for Pass
Other than '0' for Fail
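Ex (an illustrative sketch, reusing the Total = Price * Tickets check from the earlier example):

if (tot == p*t)
    tl_step ("Total Check", 0, "Total is correct"); # recorded as Pass in the test log
else
    tl_step ("Total Check", 1, "Total is wrong"); # recorded as Fail in the test log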

Note :-
substr ( ) : we can use this function to get a required sub-string from a given string.
Syntax :-
substr ("String Value" / variable, starting position, number of characters);
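Ex (a worked example) :-
p = "$250";
p = substr (p, 2, length(p)-1); # start at position 2, take length(p)-1 = 3 characters → p is now "250"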

→ Data Base Check Point :

In software functional testing, the test engineers concentrate on back end
coverage. In this coverage, the test engineers estimate the correctness of the front end
screen operations on the back end table content, in terms of Data Validation and Data
Integrity.

Ex :-
Front end Employee screen : Emp No. = 101, Name = Abc, Dept. No. = 10 → [OK]

Emp Table (after OK) : a new row (101, Abc, 10) is inserted → Data Validation
Dept Table (after OK) : the Strength of Dept. No. 10 (Sales) changes from 20 to 21 → Data Integrity

Driven :- data stored in the same system
Provider :- data stored in another system

From the above example, the correctness of the insertion of new data is called "Data
Validation". The correctness of the changes in existing data is called "Data Integrity".
To automate this data base testing, the test engineers use the "Data Base Check
Point". It consists of 3 sub options.

A. Default Check B. Custom Check C. Runtime Record Check

A. Default Check :-

To conduct data base testing depending on the content of the data base, we can use
this option.

Ex :-
→ Create DB Check Point (the current content of the DB is captured as expected)
→ Perform the front end operation
→ Run the DB Check Point (the current content of the DB is taken as actual)
→ expected == actual : Fail (the operation did not reach the data base)
→ expected != actual : Pass (the operation changed the data base as intended)

Navigation:

Open Win Runner → Insert Menu → Data Base Check Point → Default Check →
Specify Connect to Data Base Using ODBC (or) Data Junction (ODBC for Local Data
Base and Data Junction for Remote Data Base) → Click Next → Click Create to select
connectivity provided by developers → Write select statement → Click Finish → Open
our application build in Front End → Perform an operation manually → Run data base
check point → Analyze Results Manually.

Note :-
For the above navigation, the Test Engineers gather some information from the
developers: the name of the connectivity between our application build's front end
and back end, the names of the back end tables including their columns, and the
mapping between the front end screens and the back end tables. This information is
also known as the "Data Base Design Document".
B. Custom Check :-

To conduct data base testing depending on Rows Count, Columns Count and
Content, we can use this option.
In general the test engineers use the Default Check option, which shows content
only by default. As the content of the data base is itself measurable in terms of rows
count and columns count, the Test Engineers use Default Check instead of Custom
Check.

Syntax :- db_check ("checklist.cdl", "expected values file");

In the above syntax, the checklist file specifies Content as the property in Default
Check, and Rows Count, Columns Count and Content as the properties in Custom Check.
The expected values file specifies the content of the data base, captured with
respect to the select statement, as the expected result.
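A minimal usage sketch (assuming the check point wizard has already created the
checklist file "list1.cdl" and the expected values file "dbvf1" — both names are placeholders):

set_window ("Flight Reservation", 1);
# ... perform the front end operation, e.g. create a new order ...
db_check ("list1.cdl", "dbvf1"); # compares the current DB content with the captured expected content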

C. Runtime Record Check :

We can use this option to estimate the correctness of the mapping between back
end table columns and front end report objects.

→ Default / Custom Check : front end screens (user forms) → Data Base
→ Runtime Record Check : Data Base → front end user reports

Navigation :-

Insert Menu → Data Base Check Point → Run Time Record Check → Click Next →
Click Create to select connectivity provided by developers → Write select statements
with doubtful columns → Select doubtful objects for that columns → Click Next →
Select one or more matching records option → Click Finish

Ex-1 (Pass) :
Objects | DB Table Columns
Order No. | orders.order_number
Name | orders.customer_name

Ex-2 (Fail) :
Objects | DB Table Columns
Tickets | orders.order_number
Name | orders.customer_name

Syntax :- db_record_check ("checklist.cvr", DVR_ONE_OR_MORE_MATCH, variable);
In the above syntax, the checklist file specifies the expected mapping between the
back end table columns and the front end report objects.
The indicator specifies whether the check point should accept one or more matching
records. The variable returns the number of records matched.

Case Study :-
Check Point | TSL Statement
For Single Property in GUI Check Point | obj_check_info ("object name", "property", expected value);
For Object / Window in GUI Check Point | obj_check_gui ("object name", "checklist.ckl", "expected value file", time);
For Multiple Objects in GUI Check Point | win_check_gui ("window name", "checklist.ckl", "expected value file", time);
For Object / Window in Bitmap Check Point | obj_check_bitmap ("image object", "image file", time);
For Screen Area in Bitmap Check Point | obj_check_bitmap ("image object", "image file", time, x, y, width, height);
From Object / Window in Get Text | obj_get_text ("object name", variable);
From Screen Area in Get Text | obj_get_text ("object area name", variable, x1, y1, x2, y2);
Default Check in Database Check Point | db_check ("checklist.cdl", "expected database content file");
Custom Check in Database Check Point | db_check ("checklist.cdl", "expected database content file"); — the checklist holds rows count, columns count and content
Runtime Record Check in Database Check Point | db_record_check ("checklist.cvr", DVR_ONE_OR_MORE_MATCH, variable);
II. Data Driven Automation Frame Work :-

It is an advanced automation frame work in the Win Runner testing tool. In this
frame work the test engineers execute an automation program with multiple test data.
There are 4 ways of supplying data in Data Driven Testing:

→ Key Board
→ Flat File (.txt)
→ Front End Objects
→ Excel Sheet (.xls)

Each source feeds test data into the automation program in TSL, which runs against
the Build / Application Under Test (AUT).

A. Test Data From Key Board :-

To read values from the keyboard, the test engineers use the below TSL
statement.
variable = create_input_dialog ("Message");

Ex:-1
Manual Expected :- Delete Order button enabled after opening an existing order.
Build : Flight Reservation
Test Data : 5 valid order numbers
Automation Program :
for (i=1; i<=5; i++)
{
x = create_input_dialog (“Enter Order Number”);
set_window (“Flight Reservation”,1);
menu_select_item (“File;Open Order….”);
set_window (“Open Order”,1);
button_set (“Order No.”, ON);
edit_set (“Edit”, x);
button_press (“OK”);
set_window (“Flight Reservation”,1);
button_check_info (“Delete Order”, “enabled”, 1);
}
# Replacing the sample input in an automation program with multiple inputs during
# execution is called Parameterization.
Ex:-2
Manual Expected :- Tickets object value is numeric in an opened order.
Build : Flight Reservation Test Data : 5 valid order numbers
Automation Program :
for (i=1; i<=5; i++)
{
x = create_input_dialog (“Enter Order Number”);
set_window (“Flight Reservation”,1);
menu_select_item (“File;Open Order….”);
set_window (“Open Order”,1);
button_set (“Order No.”, ON);
edit_set (“Edit”, x);
button_press (“OK”);
set_window (“Flight Reservation”,1);
obj_check_gui ("Tickets:", "list1.ckl", "gui1", 2);
}

Ex:-3
Manual Expected :- Total = Number of Tickets * Price in an opened order
Build : Flight Reservation Test Data : 5 valid order numbers
Automation Program :
for (i=1; i<=5; i++)
{
x = create_input_dialog (“Enter Order Number”);
set_window (“Flight Reservation”,1);
menu_select_item (“File;Open Order….”);
set_window (“Open Order”,1);
button_set (“Order No.”, ON);
edit_set (“Edit”, x);
button_press (“OK”);
set_window (“Flight Reservation”,1);
obj_get_text (“Tickets:”, t);
obj_get_text (“Price:”, p);
obj_get_text (“Total:”, tot);
p = substr (p,2,length(p)-1);
tot = substr (tot,2,length(tot)-1);
if(tot == p*t)
tl_step (“T1”,0,”Test Pass”);
else
tl_step (“T1”, 1, “Test Fail”);
}
Ex-4 :
Manual Expected :- Result = Input1 * Input2
Build :- Multiply window with Input1 and Input2 text boxes, an OK button and a Result field

Test Data :- 10 pairs of valid inputs


Automation Program :
for (i=1; i<=10; i++) # 10 pairs of inputs
{
x = create_input_dialog ("Enter Input1");
y = create_input_dialog ("Enter Input2");
set_window ("Multiply", 1);
edit_set ("Input1", x);
edit_set ("Input2", y);
button_press ("OK");
obj_get_text ("Result", r);
if (r == x*y)
tl_step ("T1", 0, "Test Pass");
else
tl_step ("T1", 1, "Test Fail");
}

B. Test Data From Flat File :-

In this approach the test engineers maintain the test data in a flat file. Win Runner
does not require any interaction from the Test Engineers while running the test.

Flat file (.txt) → Test Data → Automation Program in TSL → Build / Application Under Test (AUT)


To use file content as test data, the Test Engineers use the below TSL
statements:

file_open ("Path of File", FO_MODE_READ);
file_getline ("Path of File", variable);
file_close ("Path of File");

Ex-1:
Manual Expected :- Delete order button enabled after open an order
Build :- Flight Reservation
Test Data : C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt
Automation Program:

f=”C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt”;


file_open (f, FO_MODE_READ);
while (file_getline(f,x) != E_FILE_EOF)
{
set_window (“Flight Reservation”,1);
menu_select_item (“File;Open Order….”);
set_window (“Open Order”,1);
button_set (“Order No.”, ON);
edit_set (“Edit”, x); # Parameterization
button_press (“OK”);
set_window (“Flight Reservation”,1);
button_check_info (“Delete Order”, “enabled”, 1);
}
file_close(f);

Silent Mode :-
In silent mode, Win Runner continues test execution even when a check point fails.
The Test Engineers use this option to continue test execution without interaction.

Navigation :- Tools Menu – General Options – Run Tab – Select Run in Batch Mode
Check Box – Click OK

Note :- In silent mode, Win Runner does not execute the create_input_dialog ( ) statement.
Ex-3 :-
Manual Expected : Total = Price * Quantity
Build :
Shopping
Item No.
Quantity
Ok

Price $xxxxxx

Total $xxxxxx

Test Data : C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt


Ravi.txt contains statements like the below:
Ramu Purchased 101 item as 10 pieces
Bhasha Purchased 102 item as 27 pieces .... etc.
Automation Program :
f=”C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt”;
file_open (f, FO_MODE_READ);
while (file_getline(f,x) != E_FILE_EOF)
{
split (x, y, " "); # splits the line into words: y[3] = item number, y[6] = quantity
set_window (“Shopping”,1);
edit_set (“Item No”, y[3]); # Parameterization
edit_set (“Quantity”, y[6]); # Parameterization
button_press (“OK”);
obj_get_text (“Price”,p);
obj_get_text (“Total”,t);
p=substr(p,2,length(p)-1);
t=substr(t,2,length(t)-1);
if (t== p*y[6])
tl_step (“C1”, 0, “Calculation is Pass”);
else
tl_step (“C1”, 1, “Calculation is Fail”);
}
file_close(f);
C. Test Data From Front End Objects :-

Sometimes the Test Engineers re-execute their automation program depending on
multiple data objects in the build, like menus, list boxes, tables, ActiveX controls and
data windows.

Test data from the build's own objects → Automation Program in TSL → Build / Application Under Test (AUT)

Ex-1:
Manual Expected : The selected city name in "Fly From" doesn't appear in "Fly To"
Build : Journey window with "Fly From" and "Fly To" list boxes

Test Data : all existing city names in "Fly From"


Automation Program :
set_window (“Journey”,1);
list_get_info (“Fly From”, “count”, n);
for (i=0; i<n; i++)
{
list_get_item (“Fly From”, i, x);
list_select_item (“Fly From”,x);
if (list_select_item (“Fly To”,x) != E_OK)
tl_step (“J1”, 0, “Item doesn’t appear”);
else
tl_step (“J1”, 1, “Item Appears”);
}
Ex-2 :
Manual Expected : Total = Price * Quantity in every row of the bill
Build : "Bill" is a table in the Shopping window
Sl.No. | Quantity | Price | Total
1 | x | $xxxx | $xxxx
2 | x | $xxxx | $xxxx
3 | x | $xxxx | $xxxx
.etc | .etc | .etc | .etc
Test Data :- all existing rows in the Bill table
Automation Program :
set_window ("Shopping", 1);
tbl_get_rows_count ("Bill", n);
for (i=1; i<=n; i++)
{
# assuming Sl.No. is the table's row header, data columns start at "#1"
tbl_get_cell_data ("Bill", "#"&i, "#1", q); # Quantity
tbl_get_cell_data ("Bill", "#"&i, "#2", p); # Price
tbl_get_cell_data ("Bill", "#"&i, "#3", t); # Total
p = substr (p, 2, length(p)-1); # strip "$"
t = substr (t, 2, length(t)-1);
if (t == p*q)
tl_step ("S1", 0, "Test Pass");
else
tl_step ("S1", 1, "Test Fail");
}

PRACTICE :
Manual Expected : Total = Internal Marks + External Marks for every student
Build : "Marks" is a window containing the table:
Roll No. | Name | Internals | Externals | Total
101 | xxxxx | xxxxx | xxxxx | xxxxx
102 | xxxxx | xxxxx | xxxxx | xxxxx
.etc. | .etc. | .etc. | .etc. | .etc.
D. Test Data From an Excel Sheet :-
Sometimes the test engineers re-execute automation programs depending on
multiple inputs in an excel sheet, instead of the keyboard, flat files or front end
objects. In this method the test engineers fill the excel sheet either by manual entry or
by importing data from the build's data base.

Excel sheet (.xls) [I) Manual Entry or II) Import Data from DB] → Test Data → Automation Program in TSL → Build / AUT (Front End + DB)

To create an excel sheet oriented data driven test, the Test Engineers follow the
below navigation.
Navigation :-
Open Win Runner & Build – Create an automation program for sample inputs
– Table Menu – Data Driven Wizard – Click Next – Specify the path of the excel sheet –
Specify a variable name to store that excel sheet path – Select Import Data from Data
Base – Click Next – Specify Connect to DB using ODBC / Data Junction – Select
Specify SQL Statement option – Click Next – Click Create to select the connectivity of the DB
provided by the developers – Write the select statement to import data from the connected DB –
Click Next – Replace the sample input with the imported excel sheet column name in the
automation program – Say Yes / No to show the data table (excel sheet) – Click Finish –
Put the build in its base state and click Run – Analyze the results after execution.

Note :
By default, Win Runner provides a default excel sheet (default.xls) for every test,
which can be used instead of our own excel sheet.
Ex-1 :
Manual Expected : Delete order button enabled after open an existing order.
Build : Flight Reservation
Test Data : Default.xls (Import Data From DB)
Automation Program :
table = “default.xls”;
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc!=E_OK && rc!=E_FILE_OPEN)
pause (“Cannot Open Table”);
ddt_update_from_db (table, "msqr1.sql", count);
ddt_save (table);
ddt_get_row_count (table,n);
for (i=1; i<=n; i++)
{
ddt_set_row (table,i);
set_window (“Flight Reservation”,1);
menu_select_item ("File; Open Order...");
set_window (“Open Order”,1);
button_set (“Order No.”,ON);
edit_set (“Edit”, ddt_val (table, “order_number”));
button_press (“OK”);
set_window (“Flight Reservation”,1);
button_check_info (“Delete Order”, “enabled”, 1);
}
ddt_close (table);

Case Study-1:

ddt_open( ):- we can use this function to open an excel sheet in specified mode.
Syntax :
ddt_open (“Path of Excel Sheet”, DDT_MODE_READ / READWRITE);

ddt_update_from_db ( ):-
We can use this function to perform changes in excel sheet with respect to
changes in Data Base.
Syntax:
ddt_update_from_db (“Path of Excel Sheet”, “Select Statement query file”, Variable);

ddt_save ( ) : We can use this function to save excel sheet modifications.


Syntax : ddt_save (“Path of Excel Sheet”);
ddt_get_row_count ( ):-
We can use this function to find no.of rows in an excel sheet.
Syntax : ddt_get_row_count (“Path of Excel Sheet”, variable);

ddt_set_row ( ) :-
We can use this function to point to a specific row in an excel sheet.
Syntax : ddt_set_row ("Path of Excel Sheet", row number);

ddt_val ( ):- We can use this function to capture specified column value
Syntax : ddt_val (“Path of Excel Sheet”, column name);

ddt_close ( ) : To close an opened excel sheet, we can use this function.


Syntax : ddt_close (“Path of Excel Sheet”);

Case Study – 2 :
DDT Approach | TSL Statements | Silent Mode | Test Engineer Interaction (During Run Time)
Test Data from Key Board | create_input_dialog ( ); | Off | Mandatory
Test Data from Flat File | file_open ( ); file_getline ( ); file_close ( ); | On / Off | Optional
Test Data from Front End Objects | list_get_item ( ); list_get_info ( ); tbl_get_rows_count ( ); tbl_get_cell_data ( ); | On / Off | Optional
Test Data from Excel Sheet | ddt_open ( ); ddt_save ( ); ddt_set_row ( ); ddt_update_from_db ( ); ddt_get_row_count ( ); ddt_val ( ); ddt_set_val ( ); ddt_close ( ); | On / Off | Optional
