Professional Documents
Culture Documents
Software is a set of programs that take inputs and produce outputs. Software falls into
two types: 1) Software Application 2) Software Product
Software Bidding :
Kick-off Meeting :
The CEO-category person conducts a meeting with high-level management
and selects a Project Manager to handle the new software development process.
The selected Project Manager (PM) prepares this document to estimate the
required people, technologies, time and resources. He/She submits the report to the
CEO, who conducts a review and gives the green signal to the Project Manager.
Requirements Gathering
↓
Analysis & Planning
↓
Designing
↓
Code
↓
Testing
↓
Release & Maintenance
In the above SDLC process, only a single stage of testing is available, and that
testing is conducted by the developers. For these reasons, organizations concentrate on
multiple stages of testing and separate testing teams to achieve quality.
Software Quality :
“V” Model :
‘V’ stands for Verification & Validation. This model defines the development
process together with testing stages. It is an extension of the SDLC model.
[V-Model diagram: Verification activities on one arm, Validation activities on the other, with Coding at the apex]
In the above ‘V’ Model, reviews are called Verification methods and testing
levels are called Validations. In small and medium scale organizations, the
management maintains a separate testing team for System Testing only, to
decrease project cost, because System Testing is the bottleneck stage in the software
development process.
I) Reviews in Analysis :
In general, the software development process starts with requirements
gathering: from a specific customer in application development, and from model
customers in product development. After gathering requirements,
the responsible Business Analyst prepares the BRS (Business Requirements
Specification) document. This document is also known as the User Requirements
Specification or Customer Requirements Specification.
After gathering requirements, the Business Analyst sits with the Project Manager and
develops the SRS and the Project Plan. The Software Requirements Specification consists of
the functional requirements to be developed and the system requirements to be used.
Example :
BRS → SRS: the BRS states What? (functional requirements such as Mailing,
Chatting, Logout), while the SRS adds How? (system requirements such as the ‘C’ language).
HLD is a system-level design and LLD is a component- or module-level design, so
one software design consists of one HLD and multiple LLDs.
→ Basis Paths Coverage
→ Control Structure Coverage
→ Program Technique Coverage
→ Mutation Coverage
In Program Technique Coverage, the programmers use monitors and profilers
(third-party software) to calculate the execution speed of the program.
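Program Technique Coverage compares equivalent implementations of the same logic for speed. As a rough sketch of the idea, using the standard library's `timeit` in place of a commercial monitor/profiler, with two hypothetical implementations:

```python
# A sketch of program technique coverage: two implementations with the
# same output but different techniques, timed against each other.
# timeit stands in for the third-party monitors/profilers mentioned above.
import timeit

def sum_with_loop(n):
    """Straightforward technique: accumulate in a loop."""
    total = 0
    for i in range(n):
        total += i
    return total

def sum_with_formula(n):
    """Alternative technique: closed-form formula, same result."""
    return n * (n - 1) // 2

# Both techniques must agree before their speed is compared.
assert sum_with_loop(10000) == sum_with_formula(10000)

loop_time = timeit.timeit(lambda: sum_with_loop(10000), number=100)
formula_time = timeit.timeit(lambda: sum_with_formula(10000), number=100)
print(f"loop: {loop_time:.4f}s  formula: {formula_time:.4f}s")
```

The absolute numbers are machine-dependent; the point is only the comparison mechanic.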
Note :
In Mutation Coverage, a deliberate change is seeded into the program and the existing
tests are re-executed. If previously passed tests still pass, the testing is incomplete; if a
test fails, the testing is complete.
Basis Paths Coverage, Control Structure Coverage and Program Technique
Coverage are applied on a program to test it. Mutation Coverage is applied on program
testing to estimate the completeness and correctness of that testing.
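The mutation idea can be sketched in a few lines; the add/subtract functions and the tiny test list below are hypothetical stand-ins for a real program and its test set:

```python
# A sketch of mutation coverage: seed a small change (a "mutant") into
# the program and re-run the existing tests. If every test still passes,
# the test set is incomplete; if some test fails, it caught the mutant.

def original(a, b):
    return a + b

def mutant(a, b):
    return a - b   # seeded change: '+' mutated to '-'

tests = [((2, 3), 5), ((0, 0), 0), ((4, 0), 4)]

def run_tests(func):
    """Run the whole test set against func; True means all passed."""
    return all(func(*args) == expected for args, expected in tests)

assert run_tests(original)          # tests pass on the real program
killed = not run_tests(mutant)      # a good test set "kills" the mutant
print("mutant killed" if killed else "testing incomplete")
```

Note that the test `((4, 0), 4)` alone would not kill this mutant; a test set needs enough variety to catch seeded changes.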
A) Top Down Approach :-
In this approach the programmers interconnect the main program and some of the
sub-programs. In place of the remaining sub-programs, the programmers use
temporary programs called “Stubs”.
Main
Sub1 Sub2 (Stub)
B) Bottom Up Approach :-
In this approach the programmers interconnect the sub-programs first, without
starting from the main program; a temporary program called a “Driver” stands in for
the missing main program.
Main (Driver)
Sub1
Sub2
C) Hybrid Approach :-
It is a combined approach of the Top Down & Bottom Up approaches. It is also
known as the Sandwich Approach.
Main
Sub1
Sub3
D) System Approach :-
1. Usability Testing
2. Functional Testing
3. Non-Functional Testing
1. Usability Testing :
In general, test execution starts with Usability Testing. During this test,
the testing team concentrates on the “user-friendliness” of the software build. There are 2
sub-levels in Usability Testing.
In Manuals Testing, the testing team verifies the Help documentation of that software.
Case Study :
User Interface Testing
↓
Functional Testing
↓
Non-Functional Testing
↓
Manuals Testing
(User Interface Testing and Manuals Testing together form Usability Testing.)
2. Functional Testing :
It is a mandatory testing level in System Testing. During this test, the testing
team concentrates on the correctness of the customer requirements in the software build.
This testing is classified into the sub-tests below.
d) Manipulations Coverage :-
Whether our software build provides the customer-expected output or not.
e) Database Testing :-
Whether front-end screen operations have the correct impact on the back-end database.
f) Sanitation Testing :-
Finding extra functionality with respect to the customer requirements.
Case Study :-
Software Build: Screens (Front End) ↔ Database (Back End)
Sub-tests applied: Control Flow, Error Handling, Input Domain Testing,
Manipulations, Database, Sanitation
a) Reliability Testing :-
It is also known as Recovery Testing. During this test the testing team
validates whether our software build changes from an abnormal state back to a normal
state or not.
b) Compatibility Testing :-
It is also known as Portability Testing. During this test the testing team
concentrates on whether our software build runs on the customer-expected platforms or
not.
Platform means operating system, browser, compilers and other system
software.
c) Configuration Testing :-
Case Study :-
Compatibility Testing : S/w Build → Operating System
Configuration Testing : S/w Build → H/w Device (Ex : Printers)
Inter-System Testing : S/w Build → Other S/w Build
e) Data Volume Testing :-
During this test the testing team inserts model data into our application build
to estimate the peak data limit. This data-limit estimation is called Data Volume Testing.
f) Installation Testing :-
g) Load Testing :-
Load means the number of concurrent users using our software build at a
time. During this test the testing team executes our software build under the
customer-expected configuration and customer-expected load to estimate the speed of
processing, i.e. performance.
Client 1 ─┐
Client 2 ─┼→ Server (S/w Build process)
   …      │
Client N ─┘
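The client/server picture above can be sketched as a minimal load test. `one_request()` is a hypothetical stand-in for a real client transaction (here just a sleep), so the numbers only illustrate the mechanics:

```python
# A minimal load-test sketch: N concurrent "clients" hit the build at
# the same time and the average response time is reported.
import time
from concurrent.futures import ThreadPoolExecutor

def one_request(client_id):
    """Hypothetical client transaction; the sleep stands in for the
    real request/response round trip to the server."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

def run_load(concurrent_users):
    """Run all clients concurrently and return the average response time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(one_request, range(concurrent_users)))
    return sum(timings) / len(timings)

average = run_load(20)
print(f"average response time for 20 concurrent users: {average:.4f}s")
```

A real load test would replace the sleep with actual requests and run under the customer-expected configuration, as the text describes.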
h) Stress Testing :-
The execution of our software build under the customer-expected configuration and
more than the customer-expected load, to estimate the peak load limit, is called Stress
Testing.
i) Endurance Testing :-
The execution of our software build under the customer-expected configuration and
customer-expected load, to estimate continuity in processing, is called Endurance Testing.
j) Security Testing :-
It is also known as Penetration Testing. During this test the testing team
concentrates on three factors.
Authorization : the software build allows valid users and prevents invalid users.
Ex : Login with password, PIN, digital signatures, fingerprints, eye retina,
scratch cards, etc.
Ex : Admin, User
Encryption / Decryption : the request is encrypted into cipher text before transmission
and decrypted again on the response path.
This testing is applicable to multi-language software. This type of software
allows characters from multiple user languages. Ex : English, Spanish, French, etc.
In Localization Testing the test engineer provides multiple-language characters
as inputs to the software build. In Internationalization Testing the test engineer provides
common-language (English) characters to the software as input; in this scenario,
third-party tools transfer the common-language characters into other language characters.
l) Parallel Testing :-
It is also known as Competitive / Comparative Testing. During this test the
testing team compares our software build with an old version of the same software, or with
similar products in the market, to estimate competitiveness.
VI) User Acceptance Testing :
After completion of successful System Testing, the Project Manager
concentrates on UAT to collect feedback from real customers or model customers.
There are two ways of conducting User Acceptance Testing.
1) Complete Installation
2) Overall Functionality
3) Input devices handling (Key Board, Mouse….etc.,)
4) Output devices handling (Monitor, Printer….etc.,)
5) Secondary storage devices handling (Floppy, Pen Drive…etc.,)
6) O/s error handling
7) Co-existence with other S/w in customer site.
Checking the above factors at the customer site is also known as Port Testing /
Deployment Testing.
After a successful release, the release team conducts training sessions for the
customer-site people and then returns to our organization.
VIII) Maintenance:
During utilization of the software, the customer-site people send Software
Change Requests (SCR) to our organization. These requests are received by a special team
in our organization called the Change Control Board (CCB). This team consists of a few
programmers, a few testers and a few hardware engineers, along with the Project Manager.
a) Monkey Testing :-
Due to lack of time, the testing team conducts testing on only the main activities of
the software. This type / stage of testing is called Monkey Testing.
b) Buddy Testing :-
Due to lack of time, the project management combines one programmer and
one tester as a buddy team. These teams conduct development and testing in parallel.
c) Exploratory Testing :-
d) Pair Testing :-
Due to lack of knowledge, senior test engineers are grouped with junior test
engineers to share their knowledge. This style of testing is called Pair Testing.
e) Bebugging :-
To estimate the efforts of the test engineers, the development people add
defects to the code. This informal practice is called Bebugging, or Defect Feeding / Seeding.
System Testing Process
S/w Bidding
↓
Kick-off meeting
↓
PIN Document
↓
Requirements Gathering (BRS)
↓
Analysis & Planning (SRS & Project Plan)
↓
Initial Build
↓
System Test Execution ← Test Reporting
↓
System Test Closure ← Test Reporting
↓
User Acceptance Test
↓
Release & Maintenance
I) System Test Initiation :
In general, the System Testing process starts with System Test Initiation by the
Project Manager or Test Manager. They develop the Test Strategy or Test Methodology
document. This document defines the reasonable tests to be applied in the current project.
2. Business Issues :-
64% Development & Maintenance, 36% System Testing
The names of the jobs in the testing team and the responsibility of each job in the
current project.
The purpose of automation testing in the current project and the available testing tools in
our organization.
In general, Test Planning starts with testing team formation. In this stage
the Test Lead depends on the below factors.
Case Study :
Identify Risks :
Ex :-
Risk 1 : Lack of Time
Risk 2 : Lack of Resources
Risk 3 : Lack of Documentation
Risk 4 : Delays in Delivery
Risk 5 : Lack of Development Process Seriousness
Risk 6 : Lack of Communication
Prepare Detailed Test Plans :
After Completion of Testing Team Formation and the risks analysis, the test lead
is concentrating on test plan document preparation in IEEE 829 Format (Institute of
Electrical and Electronics Engineer)
Format :
Note :
After completion of Test Planning and before starting Test Designs, the Business
Analyst and Test Lead are conducting Training Sessions to select Test Engineers on that
customer requirements in Project. Some organizations are inviting Domain Experts /
Subject Experts for that Training Sessions from out side.
BRS
↓
SRS (Functional Specifications) → Test Design : Test Scenarios → Test Cases
↓
HLD
↓
LLDs
↓
Coding (UT & IT)
↓
S/w Build → System Test Execution
Approach :
Step 1 :-
Collect the functional specifications related to the responsible areas.
Step 2 :-
Take one specification and read it to gather the entry point, required
inputs, normal flow, expected outputs, alternative flows, exit point and exception
rules.
Step 3 :-
Prepare test scenarios depending on the above gathered information.
Step 4 :-
Review those test scenarios and implement them as test cases.
Step 5 :-
Go to Step 2 until all responsible functional specifications are studied.
Functional Specification – 1 :-
A login process allows a User ID & Password for authorized users. The User ID
object takes lower-case alphanumerics, 4 to 16 characters long. The Password
object takes lower-case alphabets, 4 to 8 characters long. Prepare test scenarios.
Decision Table :
Note : Exhaustive testing is not possible; due to this reason, the testing team
conducts optimal testing using black-box testing techniques like BVA, ECP,
decision tables, regular expressions, etc.
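Functional Specification – 1 lends itself directly to ECP and BVA. A sketch in Python, where the regular expressions are an assumed reading of “alphanumeric in lower case” and “alphabets in lower case”:

```python
# ECP + BVA test data for the login specification above.
import re

def valid_user_id(value):
    """ECP type check plus BVA size check: [a-z0-9], 4-16 chars (assumed)."""
    return re.fullmatch(r"[a-z0-9]{4,16}", value) is not None

def valid_password(value):
    """ECP type check plus BVA size check: [a-z], 4-8 chars (assumed)."""
    return re.fullmatch(r"[a-z]{4,8}", value) is not None

# BVA picks values at and just around each boundary.
bva_user_ids = {
    "abcd": True,        # min boundary (4)
    "abc": False,        # just below min
    "a" * 16: True,      # max boundary (16)
    "a" * 17: False,     # just above max
}
# ECP picks one representative per equivalence class.
ecp_user_ids = {
    "user123": True,     # valid class: lower-case alphanumeric
    "USER123": False,    # invalid class: upper case
    "user 12": False,    # invalid class: special character (space)
}

for value, expected in {**bva_user_ids, **ecp_user_ids}.items():
    assert valid_user_id(value) == expected
```

The same pattern gives the password data matrix: boundaries 4 and 8, valid class `[a-z]`, invalid classes (upper case, digits, special characters, blank).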
Functional Specification – 2 :-
Test Scenario 2 :- Verify focus to Age when you selected Type-A Insurance
Functional Specification – 3 :-
In a shopping application, users raise purchase orders for different types of items.
The purchase order allows the user to select an Item No. and to enter a Qty. up to 10.
The purchase order returns the Total Amount along with the single-item price. Prepare
test scenarios.
Test Scenario 1 :- Verify Item No. Selection
Functional Specification – 4 :-
A door opens when a person comes in front of the door, and the door closes
when that person has gone inside. Prepare test scenarios.
Test Scenario 3 :- Verify Door operation when a person is standing at the middle of the
door.
Functional Specification – 5 :-
Functional Specification – 6 :-
In a Library Management System, the readers apply for an Identity No. To get
this number, the reader fills the below fields.
Reader Name : alphabets in lower case with an initial capital, as a single word
House Name : alphabets in lower case, as a single word
PIN Code : related to the India Postal Department
City Name : alphabets in upper case, as a single word
Phone No. : related to India subscribers, and optional
Decision Table :
Remaining Fields     Telephone Number   Expected O/p
All valid            Valid              Identity No.
All valid            Blank field        Identity No.
All valid            Invalid            Error Msg.
Any one invalid      Valid / Blank      Error Msg.
Any one blank        Valid / Blank      Error Msg.
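The decision table above can be made executable. The ten-digit phone format below is an assumption standing in for the real India-subscriber check:

```python
# The library decision table as executable rules. `fields_valid` folds
# the "Remaining Fields" column into one flag; the phone validator is a
# simplified stand-in for the real subscriber check.
import re

def decide(fields_valid, phone):
    """Return the expected output per the decision table: phone is
    optional, so blank is acceptable; any invalid field (or an invalid
    phone) yields an error message."""
    if not fields_valid:
        return "Error Msg."
    if phone == "" or re.fullmatch(r"[0-9]{10}", phone):  # assumed format
        return "Identity No."
    return "Error Msg."

assert decide(True, "9876543210") == "Identity No."   # all valid, valid phone
assert decide(True, "") == "Identity No."             # all valid, blank phone
assert decide(True, "12ab") == "Error Msg."           # invalid phone
assert decide(False, "9876543210") == "Error Msg."    # another field invalid
```

Each decision-table row becomes one assertion, which is exactly how such a table is turned into test cases.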
Test Scenario 1 : Verify Shut Down option selection using Shut Down
Test Scenario 7 : Verify Shut Down operation using Power Off Button
Functional Specification – 8 :-
Money withdrawal from an ATM, with all rules and regulations.
Test Scenario 8 : Verify operation when you enter a wrong PIN 3 times consecutively
Test Scenario 9 : Verify Cancel after entering the PIN
Test Scenario 11 : Verify operation when you select a wrong account type with
respect to the inserted card
Test Scenario 17 : Verify withdrawal operation success (correct amount, right receipt,
able to take the card back)
Test Scenario 18 : Verify withdrawal operation with greater than the possible balance
Test Scenario 19 : Verify withdrawal operation with greater than the day limit
Test Scenario 20 : Verify withdrawal operation with a network problem
Test Scenario 21 : Verify withdrawal amount with lack of amount in the ATM
Test Scenario 22 : Verify withdrawal operation with exceeded no. of transactions per
day
Test Scenario 23 : Verify withdrawal operation with another bank's card
BRS
↓
SRS (Functional Specifications) → Use Cases (BA + Test Lead) → Test Scenarios → Test Cases
↓
HLD
↓
LLDs
↓
Coding (UT & IT)
↓
S/w Build → System Test Execution
From the above diagram, the Business Analyst and Test Lead category people
develop use cases depending on the corresponding functional specifications in the SRS.
Every use case is an implemented form of a functional specification.
Approach :
Step1 : Collect use cases of responsible areas
Step2 : Take one use case and study
Step3 : Identify Entry Point, Required I/p, Normal Flow, Expected O/p, Exit Point,
Alternative Flows and Exceptions rules.
Step4 : Prepare Test Scenarios depending on above Identified Information.
Step5 : Review those scenarios and implement them as test cases
Step6 : Go to Step2 until all responsible use cases are studied
Use Case 1 : (Book Issue)
Normal flow : enter a valid User ID, then a valid Book ID; the BOOK ISSUE
database is updated and the “Book Issued” message appears in the next window.
Alternative flows : an invalid User ID, or an unavailable book, forces a re-login.
Screen : a Book Issue window with a User ID field and a Book ID field, each with
a Go button.
The functional-specification-based test design and the use-case-based test
design are used to prepare test scenarios and test cases for Functional Testing. The
user-interface-based test design is used by test engineers to prepare test scenarios and
test cases for “Usability Testing”.
BRS
↓
SRS (UI Requirements) → Test Scenarios → Test Cases
↓
HLD
↓
LLDs
↓
Coding (UT & IT)
↓
S/w Build → System Test Execution
In this method the test engineers depend on the user interface requirements
in the SRS.
In general, the test engineers write common test scenarios for Usability
Testing, which are applicable to any type of application.
Test Scenario 8 :- Verify line-spacing uniformity throughout the screens
Test Scenario 12 :- Verify scroll bars when the screen size is greater than the desktop
After completion of test scenario selection for Functional and Usability Testing,
the test engineers concentrate on test scenario selection for Non-Functional
Testing, depending on the functional and system specifications in the SRS.
Test Scenario 1 : Verify Login Under Customer expected Load and Configuration
Test Scenario 2 : Verify Login Under more than Customer expected configuration
And more….
Data Matrix format :
I/p Object | ECP (Type) : Valid, Invalid | BVA (Range / Size) : Min, Max
11. Test Case Pass / Fail Criteria : the final result of this test case after execution
Note 1 : In general, test engineers do not fill all fields in the test case
format, due to lack of time and the similarity in field values across test cases.
Note 2 : Test engineers use a test procedure for operation test cases and a data
matrix for input-object test cases.
Functional Specification :
In a banking application, valid employees create fixed deposits with
depositor-provided information. In this fixed deposit operation, the employees
fill the below fields.
Depositor Name : alphabets in lower case with an initial capital; allows multiple words in a name
Amount : 1500 to 1,00,000
Time : up to 12 months
Interest : numeric with one decimal
If the Time > 10 months, then the Interest > 10%, from bank rules.
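The field rules and the bank rule above can be sketched as one validation routine; the name pattern is an assumed reading of “lower case with Init Cap, multiple words”:

```python
# Fixed-deposit field validation, including the bank rule
# (Time > 10 months requires Interest > 10%).
import re

def validate_fixed_deposit(name, amount, time_months, interest):
    """Return "OK" or the first failing rule, per the specification."""
    if not re.fullmatch(r"[A-Z][a-z]*( [A-Z][a-z]*)*", name):
        return "Error: name"
    if not (1500 <= amount <= 100000):          # 1,00,000 = 100000
        return "Error: amount"
    if not (1 <= time_months <= 12):
        return "Error: time"
    if time_months > 10 and interest <= 10.0:   # the bank rule
        return "Error: bank rule"
    return "OK"

assert validate_fixed_deposit("Ravi Kiran", 5000, 11, 10.5) == "OK"
assert validate_fixed_deposit("Ravi Kiran", 5000, 11, 9.5) == "Error: bank rule"
assert validate_fixed_deposit("Ravi Kiran", 1000, 6, 9.5) == "Error: amount"
```

Test Cases 2-6 below exercise exactly these rules: one data matrix per input object, plus a test procedure for the operation with the bank rule.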
Prepare Test Scenarios and Test Cases :
Test Case 2 :-
1. Test Case ID : TC_FD_Ravi_24th May_2
2. Test Case Name : Verify Amount
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Amount object takes inputs
6. Data Matrix :
Test Case 3 :-
1. Test Case ID : TC_FD_Ravi_24th May_3
2. Test Case Name : Verify Time
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Time Object is taking inputs
6. Data Matrix :
Test Case 4 :-
1. Test Case ID : TC_FD_Ravi_24th May_4
2. Test Case Name : Verify Interest
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Interest Object is taking inputs
6. Data Matrix :
Interest | Valid : 0-9, with one decimal | Invalid : a-z, A-Z, special characters, blank field | Min : 0.1 | Max : 100
Test Case 5 :-
1. Test Case ID : TC_FD_Ravi_24th May_5
2. Test Case Name : Verify Fixed Deposit Operation
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Valid Values are available in hand
6. Test Procedure :
Test Case 6 :-
1. Test Case ID : TC_FD_Ravi_24th May_6
2. Test Case Name : Verify Fixed Deposit Operation with Bank Rule
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Valid Values are available in hand
6. Test Procedure :
Step No. | Action | Required I/p | Expected O/p
1 | Connect to the Bank Server | Valid Emp Id | Menu appears
□ Formal Meeting :-
In general, the test execution process starts with a formal meeting
between the testing team and development team representatives. In this meeting, the
corresponding representatives concentrate on build version control and defect
tracking.
Under the build version control concept, the development team modifies the
software build's code to resolve defects, and releases the modified build with a unique
version number. This version numbering system lets the test engineers
distinguish the old build from the modified build. For this version controlling, the
developers also use version control tools. (Ex : VSS (Visual SourceSafe))
To report mismatches to the development team, the test engineers first report each
mismatch to the Defect Tracking Team (DTT).
Test Lead + Project Manager + Project Lead + Business Analyst → DTT
□ Test Environment Establishment :-
After completion of the formal meeting, the testing team concentrates on test
environment establishment with all required hardware and software.
SERVER (Configuration Repository)
connected via TCP/IP and FTP to the Development Environment, Project
Management and the Test Environment.
FTP : File Transfer Protocol (single location)
TCP/IP : Transmission Control Protocol / Internet Protocol (different locations)
Initial Build
↓
Level-0 (Sanity)
↓
Stable Build
↓ (Yes → next test case)
From the above diagram, the test engineers continue test execution batch by
batch, and case by case within every batch. If a test case step's expected result is not
equal to the actual result, the test engineer concentrates on defect reporting. If possible,
they will continue test execution as well.
In this Level-1 test execution, the test engineers prepare a Test Log
document to specify the test results.
Test Log Document Format :-
Defect tracking flow (for each reported mismatch) :
→ Valid defect? If yes, categorize the defect and change its status to “Open”.
→ Data-related defect → assigned to the testing team.
→ Procedure-related defect → assigned to the testing team.
→ H/w or infrastructure defect → assigned to the H/w team.
Case Study :-
Test Engineer → reports defect → Defect Tracking Team → assigns to Project Lead + Programmers (code-related defect)
Test Engineer → reports defect → Defect Tracking Team → assigns to BA + TL + TE (data / procedure-related defect)
Test Engineer → reports defect → Defect Tracking Team → assigns to H/w or Infrastructure Team (H/w defect)
New
↓
Assigned (or Reject / Deferred)
↓
Open
↓
Fixed (→ Reopen, if the fix fails)
↓
Closed
Data-related defect : the incorrect test data (prepared by the test engineers) is
corrected, and the test case is repeated on the build.
Procedure-related defect : the incorrect test procedure (prepared by the test
engineers) is corrected, and the test case is repeated on the build.
Environment-related defect : the H/w team re-establishes the test environment,
and the test case is repeated on the build in the modified environment.
Code-related defect : the failed test is reported to the DTT; the programmers fix the
code and release a modified build. The previously failed test and the related
previously passed tests are then repeated on that modified build.
From the above model, the test engineer re-executes a previously failed test
on the modified build to confirm the defect fix; this is called Retesting or Confirmation
Testing. To identify side effects of the defect-fixing modifications in the modified build,
the test engineers re-execute previously passed related tests on that modified build; this
is called Regression Testing.
Case 1 :-
If the severity of the defect fixed by the development team is High, the test engineers
repeat all P0, all P1 and carefully selected P2 test cases on the modified build,
w.r.t. the modifications specified in the release note.
Case 2 :-
If the severity of the defect fixed by the development team is Medium, the test
engineers repeat all P0, carefully selected P1 and some P2 test cases on the
modified build, w.r.t. the modifications specified in the release note.
Case 3 :-
If the severity of the defect fixed by the development team is Low, the test engineers
repeat some P0, some P1 and some P2 test cases on the modified build, w.r.t. the
modifications specified in the release note.
Case 4 :-
If the development team releases a modified build due to changes in the customer
requirements, the test engineers re-execute all P0, all P1 and carefully
selected P2 test cases on the modified build w.r.t. the changes in the customer
requirements. In this case, the test engineers also update the test scenarios and test
cases w.r.t. the changes in the customer requirements.
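Cases 1-4 amount to a severity → priority selection table. A sketch follows; the “carefully selected” / “some” subsets are modelled as a crude half-of-the-pool placeholder, not a real selection method:

```python
# Regression selection policy from Cases 1-4, as a lookup table.
REGRESSION_POLICY = {
    "high":   {"P0": "all",  "P1": "all",      "P2": "selected"},
    "medium": {"P0": "all",  "P1": "selected", "P2": "some"},
    "low":    {"P0": "some", "P1": "some",     "P2": "some"},
    "requirement_change": {"P0": "all", "P1": "all", "P2": "selected"},
}

def pick_cases(severity, cases_by_priority):
    """Return the test cases to repeat on the modified build."""
    chosen = []
    for priority, rule in REGRESSION_POLICY[severity].items():
        pool = cases_by_priority.get(priority, [])
        if rule == "all":
            chosen.extend(pool)
        else:
            # Placeholder for "carefully selected" / "some": half the pool.
            chosen.extend(pool[: max(1, len(pool) // 2)])
    return chosen

cases = {"P0": ["tc1", "tc2"], "P1": ["tc3", "tc4"], "P2": ["tc5", "tc6"]}
print(pick_cases("high", cases))
```

In practice the subset would be chosen w.r.t. the modifications in the release note, as the text says, not by slicing a list.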
VI. Test Closure :-
After completion of all reasonable tests and the closing of detected defects, the Test
Lead conducts a review meeting to stop testing. In this review, the TL analyzes the
below factors with the involvement of the test engineers.
1. Coverage Analysis :-
→ Requirements Oriented Coverage (Module)
→ Testing Topic Related Coverage (Usability, Functional, Non-Functional)
2. Defect Density Calculation :
Ex :
Modules / Requirement %
A 20%
B 20%
C 40% ( Need Regression Test )
D 20%
Total 100%
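The module percentages above come from a simple defects-per-module calculation. A sketch with assumed raw counts that reproduce the table (5, 5, 10 and 5 defects out of 25):

```python
# Defect density per module as a percentage of total defects, flagging
# modules above a threshold (40% here, matching module C) for
# focused regression testing. The raw counts are assumed for illustration.

defects = {"A": 5, "B": 5, "C": 10, "D": 5}
total = sum(defects.values())

density = {module: round(100 * count / total) for module, count in defects.items()}
high_density = [module for module, pct in density.items() if pct >= 40]

print(density)        # {'A': 20, 'B': 20, 'C': 40, 'D': 20}
print(high_density)   # ['C'] -> needs regression testing
```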
Identify the high defect density module → plan Regression Testing for it (with a
person / hour estimate).
W-Model
In the W-Model, development (Build, Integration Testing) runs alongside System
Testing, which combines Manual Testing and Test Automation.
Note : Test Automation is optional.
From the above W-Model, testing tools are available for Functional Testing,
some of the Non-Functional Tests, Endurance Testing and Data Volume
Testing.
The remaining Non-Functional Tests and Usability Testing are conducted by the test
engineers manually.
Win Runner 8.0 :
→ Developed by Mercury Interactive and taken over by Hewlett-Packard (HP)
→ Functional testing tool
→ This version was released in January 2005
→ Supports VB, .Net, Java, Power Builder, HTML, Delphi, VC++, D2K and Siebel
technology software for functional testing
→ To support SAP, PeopleSoft, XML, Multimedia and Oracle Applications
(“ERPs”), in addition to the above technologies, test teams use Quick Test
Professional (QTP)
→ Win Runner runs on Windows only
→ X-Runner for Unix / Linux
With the above approach, the test engineers convert manual
functional test cases into Test Script Language (TSL) programs.
TSL is a “C”-like language.
Add-in Manager :
This window lists all the technologies Win Runner supports with respect to the
license. The test engineers select the current project's technology from that list.
Welcome Screen :
In this framework the test engineers convert manual test cases into
automation programs with a two-step procedure.
A. Recording Operations
B. Inserting Check Points
A. Recording Operations :-
In test automation program creation, the test engineers record the software build's
operations. There are two recording modes: Context Sensitive mode and
Analog mode.
In Context Sensitive mode, the tool records mouse and keyboard operations
with respect to the objects and windows in the build. To select this mode, the test
engineers use the below options.
“F2” is the shortcut key to switch from one mode to the other.
Note :-
In Analog mode, Win Runner records mouse pointer movements with
respect to desktop coordinates. For this reason, the test engineers must not change the
corresponding window position or the monitor resolution.
After recording the build operations, the test engineers insert check points
with respect to their expectations. Every check point compares the test-engineer-given
expected value with the build's actual value. There are four check points in Win Runner.
To verify properties of objects, we can use this check point. It consists of 3 sub-options.
i. For Single Property
ii. For Object / Window
iii. For Multiple Objects
Ex.-1 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Open an order in the Flight Reservation window | Valid Order No. | Delete Order button “enabled”
Build :- Flight Reservation Window
Automation Program :-
Ex.-2 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Open an order in the Flight Reservation window | Valid Order No. | Insert Order button “disabled”
Automation Program :-
Note :- TSL is a case-sensitive language and it uses the # symbol for comments.
Ex.-3 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Open an existing order in the Flight Reservation window | Valid Order No. | Update Order button “disabled”
Automation Program :-
Case Study :-
Ex.-4 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Enter User ID and Password | Valid User ID & Password | “OK” button “enabled”
Build :-
Login
User ID
Password OK
Automation Program :-
set_window (“Login”,Time);
edit_set (“User ID”, “Valid Value”);
password_edit_set (“Password”, “Encrypted Value”);
button_check_info (“OK”, “enabled”, 1);
Ex.-5 :
Test Procedure :-
Step No. Action Required I/p Expected O/p
1 Focus to Student window None “Submit” button “disabled”
2 Select Roll No. None “Submit” button “disabled”
3 Enter Student Name Valid Name “Submit” button “enabled”
Build :-
Student
Roll No.
Name Submit
Automation Program :-
win_activate (“Student”);
button_check_info (“Submit”, “enabled”, 0);
list_select_item (“Roll No.”, “Selected Item”);
button_check_info (“Submit”, “enabled”, 0);
edit_set (“Name”, “Valid Value”);
button_check_info (“Submit”, “enabled”, 1);
Ex.-6 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to Employee window | None | “Accept” button “disabled”
2 | Select Emp. No. | None | “Accept” button “disabled”
3 | Enter Employee Name | Valid Name | “Accept” button “disabled”
4 | Select gender (Male / Female) | None | “Accept” button “enabled”
Build :-
Employee
Emp. No.
Name
Automation Program :-
win_activate (“Employee”);
button_check_info (“Accept”, “enabled”, 0);
list_select_item (“Emp. No.”, “Selected Item”);
button_check_info (“Accept”, “enabled”, 0);
edit_set (“Name”, “Valid Value”);
button_check_info (“Accept”, “enabled”, 0);
button_set (“Button Name (Male/Female)”, ON);
button_check_info (“Accept”, “enabled”, 1);
Ex.-7 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to the Flight Reservation window | None | “Update Order” button disabled
2 | Open an existing order | Valid Order No. | “Update Order” button disabled
3 | Perform a change in that order | Valid Change | “Update Order” button “enabled”
Build :- Flight Reservation
Automation Program :-
win_activate (“Flight Reservation”);
button_check_info (“Update”, “enabled”, 0);
menu_select_item (“File; Open Order…”);
set_window (“Open Order”,1);
button_set (“Order No”,ON);
edit_set (“Edit”,1);
button_press (“OK”);
button_check_info (“Update”, “enabled”,0);
edit_set (“Name”, “Ravi Kiran”);
button_check_info (“Update”, “enabled”,1);
Ex.-8 :
Test Procedure :-
Step No. Action Required I/p Expected O/p
1 Focus to Flight Reservation None Date of Flight object focused
2 Open an Existing Order Valid Order No. Date of Flight object focused
Build :- Flight Reservation
Automation Program :-
win_activate (“Flight Reservation”);
obj_check_info (“Date of Flight Object”, “focused”,1);
set_window (“Flight Reservation”,1);
menu_select_item (“File; Open Order…”);
set_window (“Open Order”,1);
button_set (“Order No”,ON);
edit_set (“Edit”,1);
button_press (“OK”);
obj_check_info (“Date of Flight Object”, “focused”,1);
We can use this option to verify more than one property of more than one object.
Ex.-1 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to the Flight Reservation window | None | Insert Order, Delete Order and Update Order buttons are disabled
2 | Open an existing order | Valid Order No. | Insert Order and Update Order buttons are disabled; Delete Order button is enabled
3 | Perform a change in that order | Valid Change | Insert Order is disabled; Update Order and Delete Order are enabled
Note :
To save test creation and execution time, the test engineers insert the “For
Multiple Objects” check point.
The “For Multiple Objects” option is applicable to multiple objects in the same window.
Case Study-1 :
obj_check_info () → for single property
obj_check_gui () → for object / window
win_check_gui () → for multiple objects
Case Study-2 :
Object Type | Testable Properties
Push Button | Enabled, Focused
Radio Button | Enabled, Status (ON / OFF)
Check Box | Enabled, Status (ON / OFF)
List / Combo Box | Enabled, Value, Count
Menu | Enabled, Count
Text Box / Edit Box | Enabled, Value, Focused, Range, Regular Expression, Date Format, Time Format, …
Table Grid | Columns Count, Rows Count, Cell Content
Case Study-3 :
BRS
↓
SRS (Functional Specifications) + Use Cases → Test Scenarios → Test Cases → Automation Programs
↓
HLD & LLDs
↓
Coding (UT & IT)
↓
Build → Manual Testing → Automation Testing
We can use this check point to compare images. This check point supports
static images only.
To support the comparison of dynamic images such as movies, the test engineers use
Manual Testing (or) the QTP tool.
Ex.-1 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to the old-version Flight Reservation build and select the About option in the Help menu | None | The old version logo is equal to the new version logo of the Flight Reservation build
Build :- Flight Reservation
Automation Program :-
win_activate (“Flight Reservation”);
set_window (“Flight Reservation”,1);
menu_select_item (“Help; About….”);
set_window (“About Flight Reservation System”,1);
obj_check_bitmap (“Button”, “Img1”, 1); # Check Point
Ex.-2 :
Test Procedure :-
Step No. | Action | Required I/p | Expected O/p
1 | Focus to the Flight Reservation window and select the Analysis menu, Graphs option | None | Graph opened for existing data
2 | Open an existing order and perform a change in the no. of tickets | Valid Change | Existing graph changed with respect to the change in no. of tickets
Build :- Flight Reservation
Automation Program :-
win_activate (“Flight Reservation”);
set_window (“Flight Reservation”,1);
menu_select_item (“Analysis ; Graphs…”);
set_window (“Graphs”,1);
obj_check_bitmap ("Gs_Drawing", "Img1", 1, 158,26,178,154); # Screen area Check Point
win_close ("Graphs");
set_window (“Flight Reservation”,1);
menu_select_item (“File; Open Order…”);
set_window (“Open Order”,1);
button_set (“Order No”,ON);
edit_set (“Edit”,1);
button_press (“OK”);
set_window (“Flight Reservation”,1);
edit_set (“Tickets:”, “3”); # Changes in Tickets
button_press ("Update Order");
Note :
The WinRunner bitmap check point compares complete images or parts of
images.
For object / window :
obj_check_bitmap (“Image Name”, “Image File”, Time);
For screen area :
obj_check_bitmap (“Image Name”, “Image File”, Time, x,y,width, height);
To verify manipulations (or) calculations in our application build, we can use a
check point built from a combination of 2 concepts:
the Get Text option and an If condition.
The Get Text option consists of 2 sub options
1. From Object / Window 2. From Screen Area.
1. From Object / Window : To capture an object's value, we can use this option.
2. From Screen Area : To capture a selected value from a screen, we can use this option.
Navigation :- Insert Menu → Get Text → From Screen Area → Select required value
region in that screen → Right click to release the selection.
If Condition :-
TSL is a "C"-like language. It allows you to write control statements with "C"
syntax.
if (condition)
{
----
----
}
else
{
----
----
}
Ex-1 :-
Manual Expected :- Output = Input * 100
Build :-
Sample
Input xxxxxxx
Output xxxxxxx
Automation Program :-
set_window (“Sample”,1);
obj_get_text (“Input”, x);
obj_get_text ("Output", y);
if (y == x*100)
printf (“Test is Pass”);
else
printf (“Test is Fail”);
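The Get Text + If Condition pattern amounts to reading field values and comparing them against an expected relationship. Below is a minimal plain-Python sketch of the Ex-1 check (Output = Input * 100); the read_field( ) helper and the sample window data are hypothetical stand-ins for obj_get_text( ) against a live Sample window.

```python
# Plain-Python sketch of the "Get Text + If condition" pattern from Ex-1.
# read_field() is a hypothetical stand-in for obj_get_text().

def read_field(fields, name):
    """Fetch one field value from a captured window snapshot."""
    return fields[name]

def check_output_is_input_times_100(fields):
    x = int(read_field(fields, "Input"))
    y = int(read_field(fields, "Output"))
    if y == x * 100:
        return "Test is Pass"
    else:
        return "Test is Fail"

# Example run against a made-up window snapshot:
sample_window = {"Input": "7", "Output": "700"}
print(check_output_is_input_times_100(sample_window))  # Test is Pass
```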
Ex-2 :-
Manual Expected :- Total = File1 + File2
Build :-
Audit (window)
File1   xxxxxxx KB
File2   xxxxxxx KB
Total   xxxxxxx KB
Automation Program :
set_window (“Audit”,1);
obj_get_text (“File1”,x);
obj_get_text (“File2”,y);
obj_get_text (“Total”,z);
x = substr(x,1,length(x)-2);
y = substr(y,1,length(y)-2);
z = substr(z,1,length(z)-2);
if (z == x+y)
printf (“Test is Pass”);
else
printf (“Test is Fail”);
Ex-4 :
Manual Expected :- Total = Price * Quantity
Build :
Shopping (window)
Quantity   xxxxxxx
Price      xxxxxxx
Total      xxxxxxx
Automation Program :
set_window ("Shopping",1);
obj_get_text ("Quantity",Q);
obj_get_text ("Price",P);
obj_get_text ("Total",T);
P = substr(P,4,length(P)-5);
T = substr(T,4,length(T)-5);
if (T == P * Q)
printf ("Test is Pass");
else
printf ("Test is Fail");
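The same pattern with substr( ) stripping is easy to see in plain Python. The sketch below mirrors the Ex-4 logic under the assumption that Price and Total carry a leading "$" decoration; the sample field values are made up, not captured from a build.

```python
# Sketch of the Ex-4 logic in plain Python: strip the currency decoration
# (the TSL substr() calls), then verify Total = Price * Quantity.
# Sample values below are hypothetical.

def strip_currency(value):
    """Rough analogue of the substr() trimming: drop a leading '$'."""
    return value.lstrip("$")

def verify_total(quantity, price, total):
    q = int(quantity)
    p = float(strip_currency(price))
    t = float(strip_currency(total))
    return "Test is Pass" if t == p * q else "Test is Fail"

print(verify_total("3", "$125.00", "$375.00"))  # Test is Pass
```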
tl_step ( ) :-
“tl” stands for Test Log (Test Result). We can use this statement to prepare our
own Pass / Fail Result.
Syntax :-
tl_step (“Step Name”, 0/1, “Message”);
'0' indicates Pass
Any value other than '0' indicates Fail
Note :-
substr ( ) : we can use this function to get a required value from a given string.
Syntax :-
substr ("String Value" / variable, starting position, length);
From the above example, the correctness of inserting new data is called "Data
Validation". The correctness of changes to existing data is called "Data Integrity".
To automate this data base testing, test engineers use the "Data Base Check
Point". It consists of 3 sub options.
A. Default Check :-
To conduct Data Base testing, depending on the content of Data Base, we can use
this option.
Ex :-
→ Create DB Check Point (the current content of the DB is captured as Expected)
→ Perform a front end operation
→ Run DB Check Point (the current content of the DB is captured as Actual)
   Actual == Expected → Fail ; Actual != Expected → Pass
Navigation:
Open Win Runner → Insert Menu → Data Base Check Point → Default Check →
Specify Connect to Data Base Using ODBC (or) Data Junction (ODBC for Local Data
Base and Data Junction for Remote Data Base) → Click Next → Click Create to select
connectivity provided by developers → Write select statement → Click Finish → Open
our application build in Front End → Perform an operation manually → Run data base
check point → Analyze Results Manually.
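The Default Check flow above (capture Expected, perform an operation, capture Actual, compare) can be sketched in plain Python with sqlite3 standing in for the back end. The orders table and the simulated front end operation are assumptions for illustration; WinRunner would capture these result sets through the ODBC connectivity instead.

```python
# Self-contained sketch of the Default Check idea with sqlite3 as a
# stand-in back end. Table layout and data are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_number INTEGER, customer_name TEXT)")
db.execute("INSERT INTO orders VALUES (1, 'Smith')")

def snapshot(conn):
    # The "write select statement" step: the check point stores this result set.
    return conn.execute(
        "SELECT order_number, customer_name FROM orders ORDER BY order_number"
    ).fetchall()

expected = snapshot(db)                                # Create DB Check Point
db.execute("INSERT INTO orders VALUES (2, 'Jones')")   # front end operation
actual = snapshot(db)                                  # Run DB Check Point

# In the Default Check flow, a changed result set shows the front end
# operation actually reached the back end.
print("operation reached back end:", actual != expected)  # True
```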
Note :-
From the above navigation, test engineers gather some information from
developers, like the name of the connectivity between the application build's Front End
and Back End, the names of Back End tables including their columns, and the mapping
between Front End screens and Back End tables. This information is also known as the
"Data Base Design Document".
B. Custom Check :-
To conduct data base testing depending on Rows Count, Columns Count and
Content, we can use this option.
In general, test engineers use the Default Check option. This option shows
Content only by default, but the content of the data base is measurable in terms of
Rows Count and Columns Count. Due to this reason, test engineers use Default
Check instead of Custom Check.
Syntax :- db_check ("checklist file", "expected values file");
In the above syntax, the checklist file specifies Content as the property in Default
Check, and Rows Count, Columns Count and Content as properties in Custom Check.
The expected values file specifies the current content of the data base with respect
to the select statement.
C. Run Time Record Check :-
We can use this option to estimate the correctness of the mapping between Back
End table columns and Front End report objects.
User Forms → Data Base → User Reports
Navigation :-
Insert Menu → Data Base Check Point → Run Time Record Check → Click Next →
Click Create to select connectivity provided by developers → Write select statements
with doubtful columns → Select doubtful objects for that columns → Click Next →
Select one or more matching records option → Click Finish
Ex-1 : (Pass)
Objects      DB Table Columns
Order No.    orders.order_number
Name         orders.customer_name
Ex-2 : (Fail)
Objects      DB Table Columns
Tickets      orders.order_number
Name         orders.customer_name
Syntax:- db_record_check(“Checklist.cvr”,DVR_ONE_OR_MORE_MATCH,variable);
In above syntax check list file specifies expected mapping in between Back End
Table Columns and Front End Report Objects.
The indicator specifies that the check point may match more than one record.
The variable receives the number of records matched.
Case Study :-
Check Point                              TSL Statement
For single property in GUI Check Point   obj_check_info ("object name", "property", expected value);
For object/window in GUI                 obj_check_gui ("object name", "checklist.ckl", "expected value file", time);
For multiple objects in GUI              win_check_gui ("window name", "checklist.ckl", "expected value file", time);
Data Driven Testing :-
It is an advanced automation framework in the WinRunner testing tool. Test
engineers execute an automation program with multiple test data in this framework.
There are 4 ways in Data Driven Testing:
A. Test data from Key Board
B. Test data from a Flat File
C. Test data from Front End Objects
D. Test data from an Excel Sheet
Test Data (Key Board / Flat File / Front End Objects / Excel Sheet) → Automation Program in TSL
A. Test Data from Key Board :-
Ex:-1
Manual Expected :- Delete Order Button Enabled After Open an Existing Order.
Build : Flight Reservation
Text Data : 5 Valid Order Numbers
Automation Program :
for (i=1; i<=5; i++)
{
x = create_input_dialog (“Enter Order Number”);
set_window (“Flight Reservation”,1);
menu_select_item (“File;Open Order….”);
set_window (“Open Order”,1);
button_set (“Order No.”, ON);
edit_set (“Edit”, x);
button_press (“OK”);
set_window (“Flight Reservation”,1);
button_check_info (“Delete Order”, “enabled”, 1);
}
# Replacing the sample input in an automation program with multiple inputs during
execution is called Parameterization.
Ex:-2
Manual Expected :- Tickets Object value is numeric in an Open Order.
Build : Flight Reservation Text Data : 5 Valid Order Numbers
Automation Program :
for (i=1; i<=5; i++)
{
x = create_input_dialog (“Enter Order Number”);
set_window (“Flight Reservation”,1);
menu_select_item (“File;Open Order….”);
set_window (“Open Order”,1);
button_set (“Order No.”, ON);
edit_set (“Edit”, x);
button_press (“OK”);
set_window (“Flight Reservation”,1);
obj_check_gui ("Tickets:", "list1.ckl", "gui1", 2);
}
Ex:-3
Manual Expected :- Total = Number of Tickets * Price in an opened order
Build : Flight Reservation Text Data : 5 Valid Order Numbers
Automation Program :
for (i=1; i<=5; i++)
{
x = create_input_dialog (“Enter Order Number”);
set_window (“Flight Reservation”,1);
menu_select_item (“File;Open Order….”);
set_window (“Open Order”,1);
button_set (“Order No.”, ON);
edit_set (“Edit”, x);
button_press (“OK”);
set_window (“Flight Reservation”,1);
obj_get_text (“Tickets:”, t);
obj_get_text (“Price:”, p);
obj_get_text (“Total:”, tot);
p = substr (p,2,length(p)-1);
tot = substr (tot,2,length(tot)-1);
if(tot == p*t)
tl_step (“T1”,0,”Test Pass”);
else
tl_step (“T1”, 1, “Test Fail”);
}
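The data driven loop in Ex-3 boils down to: for each test input, read the fields, strip the decorations, compare, and log a tl_step( )-style result. A rough Python analogue follows, with made-up order data in place of create_input_dialog( ) and the live build.

```python
# Plain-Python sketch of the Ex-3 data driven loop: same check
# (Total = Tickets * Price) applied to several inputs, with a
# tl_step()-style pass/fail line per iteration. Order data is made up.

def tl_step(step_name, status, message):
    # 0 means Pass, anything else means Fail, as in TSL's tl_step().
    print(f"{step_name}: {'PASS' if status == 0 else 'FAIL'} - {message}")
    return status

orders = [
    {"tickets": "1", "price": "$100.00", "total": "$100.00"},
    {"tickets": "3", "price": "$125.00", "total": "$375.00"},
]

results = []
for i, order in enumerate(orders, start=1):
    t = int(order["tickets"])
    p = float(order["price"].lstrip("$"))    # substr() equivalent
    tot = float(order["total"].lstrip("$"))
    status = 0 if tot == p * t else 1
    results.append(tl_step(f"T{i}", status, "Total = Tickets * Price"))
```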
Ex-4 :
Manual Expected :- Result = Input1 * Input2
Build:-
Multiply
Input1
Input2
Ok
Result
B. Test Data from a Flat File :-
Test Data (Flat File) → Build / Application Under Test (AUT)
Ex-1:
Manual Expected :- Delete order button enabled after open an order
Build :- Flight Reservation
Test Data : C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt
Automation Program:
Silent Mode :-
WinRunner continues test execution even when a check point fails. Test
engineers use this option to continue test execution without interaction.
Navigation :- Tools Menu – General Options – Run Tab – Select Run in Batch Mode
Check Box – Click Ok
Note :- In silent mode, WinRunner does not execute the create_input_dialog ( ) statement.
Ex-3 :-
Manual Expected : Total = Price * Quantity
Build :
Shopping
Item No.
Quantity
Ok
Price $xxxxxx
Total $xxxxxx
C. Test Data from Front End Objects :-
Sometimes test engineers re-execute their automation programs depending
on multiple data objects in the build, like menus, list boxes, tables, ActiveX
controls and data windows.
Test Data (from Build Objects) → Build / Application Under Test (AUT)
Ex-1:
Manual Expected : The selected city name in "Fly From" does not appear in "Fly To"
Build:
Journey
Fly From
Fly To
PRACTICE :
Manual Expected : Total = Internal Marks + External Marks for every student
Build : "Marks" is a window
Roll No.   Name    Internals   Externals   Total
101        xxxxx   xxxxx       xxxxx       xxxxx
102        xxxxx   xxxxx       xxxxx       xxxxx
etc.
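One way to sketch this exercise is a loop over every row of the Marks table, verifying Total = Internals + Externals. In WinRunner this would go through table-object functions such as tbl_get_rows_count( ) and tbl_get_cell_data( ); the Python sketch below uses hypothetical sample rows instead.

```python
# Sketch of the PRACTICE exercise: verify Total = Internals + Externals
# for every row of the Marks table. Row data here is hypothetical.

marks_table = [
    # (roll_no, name, internals, externals, total)
    (101, "xxxxx", 18, 72, 90),
    (102, "xxxxx", 20, 65, 85),
]

def check_totals(table):
    """Return the roll numbers of rows whose Total column is wrong."""
    failures = []
    for roll_no, _name, internals, externals, total in table:
        if total != internals + externals:
            failures.append(roll_no)
    return failures

print("failed rows:", check_totals(marks_table))  # failed rows: []
```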
D. Test Data From an Excel Sheet :-
Sometimes test engineers re-execute automation programs depending
on multiple inputs in an excel sheet, instead of the Key Board, flat files or Front End
objects. In this method, test engineers fill the excel sheet by importing data
from the build's data base or with manual entry.
Build / AUT (Front End ↔ DB) → Test Data (Excel Sheet)
To create an excel sheet oriented data driven test, test engineers follow the
navigation below.
Navigation:-
Open Win Runner & Build – Create an Automation Program for Sample Inputs
– Table Menu – Data Driven Wizard – Click Next – Specify the path of Excel Sheet –
Specify Variable Name to Store that Excel Sheet Path – Select Import Data from Data
Base – Click Next – Specify Connect to Data Base Using ODBC / Data Junction – Select
Specify SQL Statement Option – Click Next – Click Create to Select Connectivity of DB
Provided by Developers – Write Select Statement to Import Data From Connected DB –
Click Next – Replace Sample Input With Imported Excel Sheet Column Name in
Automation Program – Say Yes/No to Show Data Table (Excel Sheet) – Click Finish –
Put Build in Base State and Click Run – Analyze Results after execution.
Note :
By default, WinRunner provides a default excel sheet for every test, which
we can use instead of our own excel sheet.
Ex-1 :
Manual Expected : Delete order button enabled after open an existing order.
Build : Flight Reservation
Test Data : Default.xls (Import Data From DB)
Automation Program :
table = “default.xls”;
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc!=E_OK && rc!=E_FILE_OPEN)
pause (“Cannot Open Table”);
ddt_update_from_db (table, "msqr1.sql", count);
ddt_save (table);
ddt_get_row_count (table,n);
for (i=1; i<=n; i++)
{
ddt_set_row (table,i);
set_window (“Flight Reservation”,1);
menu_select_item ("File; Open Order….");
set_window (“Open Order”,1);
button_set (“Order No.”,ON);
edit_set (“Edit”, ddt_val (table, “order_number”));
button_press (“OK”);
set_window (“Flight Reservation”,1);
button_check_info (“Delete Order”, “enabled”, 1);
}
ddt_close (table);
Case Study-1:
ddt_open( ):- we can use this function to open an excel sheet in specified mode.
Syntax :
ddt_open (“Path of Excel Sheet”, DDT_MODE_READ / READWRITE);
ddt_update_from_db ( ):-
We can use this function to perform changes in excel sheet with respect to
changes in Data Base.
Syntax:
ddt_update_from_db (“Path of Excel Sheet”, “Select Statement query file”, Variable);
ddt_set_row ( ) :-
We can use this function to point to a specific row in an excel sheet.
Syntax : ddt_set_row ("Path of Excel Sheet", row number);
ddt_val ( ) :- We can use this function to capture the specified column value in the current row.
Syntax : ddt_val ("Path of Excel Sheet", column name);
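The ddt_*( ) flow (open the table, count rows, point to each row, read a column) maps naturally onto Python's csv module. The sketch below is an analogue, not WinRunner code; the in-memory table text stands in for the default.xls data table.

```python
# csv-module analogue of the ddt_*() flow: open the data table, walk its
# rows, and pull one column value per row. Table contents are illustrative.
import csv
import io

# Stand-in for the default.xls data table filled from the orders data base.
table_text = "order_number\n1\n2\n3\n"

with io.StringIO(table_text) as fh:          # ddt_open(table, READ)
    rows = list(csv.DictReader(fh))

print("row count:", len(rows))               # ddt_get_row_count()  -> 3
for row in rows:                             # ddt_set_row() per iteration
    order_number = row["order_number"]       # ddt_val(table, "order_number")
    print("opening order", order_number)
```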
Case Study – 2 :
DDT Approach                       TSL Statements               Silent Mode   Test Engineer Interaction (During Run Time)
Test Data from Key Board           create_input_dialog ( );     Off           Mandatory
Test Data from Flat File           file_open ( );               On / Off      Optional
                                   file_getline ( );
                                   file_close ( );
Test Data from Front End Objects   list_get_item ( );           On / Off      Optional
                                   list_get_info ( );
                                   tbl_get_rows_count ( );
                                   tbl_get_cell_data ( );
Test Data from Excel Sheet         ddt_open ( );                On / Off      Optional
                                   ddt_save ( );
                                   ddt_set_row ( );
                                   ddt_update_from_db ( );
                                   ddt_get_row_count ( );
                                   ddt_val ( );
                                   ddt_set_val ( );
                                   ddt_close ( );