
Test Metrics

Reference from: GLT Testing Services

What are Test Metrics?


Parameters that objectively measure various aspects of the software testing process:
Test Effort
Test Schedule
Test Status
Defects
Test Efficiency
Test Effectiveness

Why do we need Test Metrics?


To quantitatively analyze the current level of maturity in testing and set goals/objectives for the future
To provide a means for managing, tracking and controlling the status of testing
To provide a basis for estimation (today's data is tomorrow's historic data)
To objectively measure the effectiveness and efficiency of testing
To identify areas for process improvements
To give insight into the quality of the product

Index
Base Metrics
  Project Management Metrics
  Test Progress Metrics
  Defect Metrics
Derived Metrics
  Test Efficiency Metrics
  Test Effectiveness Metrics
Group Standard Testing Metrics

Project Management Metrics


Effort Variance
Indicates deviation of actual effort from planned effort for the project.

Where collected: All Phases
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

[Chart: Test Effort Variance (Planned vs Actual) - planned vs actual effort in person days, per release version]
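The slide describes the metric but does not spell out a formula; a common convention expresses the variance as a percentage of planned effort. A minimal Python sketch under that assumption, with purely illustrative figures:

    # Effort variance as a percentage of planned effort.
    # Assumed convention: the slide only describes the metric qualitatively.
    def effort_variance_pct(planned_days: float, actual_days: float) -> float:
        return (actual_days - planned_days) / planned_days * 100

    # Illustrative figures (person days), not taken from the chart.
    print(f"Effort variance: {effort_variance_pct(75, 80):+.1f}%")  # +6.7%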

Project Management Metrics


Schedule Variance
Indicates deviation of actual schedule dates (start and end dates) from planned schedule dates for the project.

Where collected: All Phases
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

[Chart: Schedule Variance - planned vs actual start and end dates, per release version]
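The schedule deviation can be expressed in calendar days between planned and actual dates; the unit is an assumption, since the slide only describes the metric. A sketch with illustrative dates:

    from datetime import date

    # Schedule variance in calendar days (positive = slipped, negative = early).
    def schedule_variance_days(planned: date, actual: date) -> int:
        return (actual - planned).days

    # Illustrative dates only.
    print(schedule_variance_days(date(2007, 6, 7), date(2007, 6, 18)))   # start slipped 11 days
    print(schedule_variance_days(date(2007, 8, 26), date(2007, 9, 15)))  # end slipped 20 days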

Test Progress Metrics


Test Case Preparation Status
Indicates status of test case preparation against the number of test cases planned to be written.

Where collected: Test Case Preparation Phase
Data Source: Manual Tracking, QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

[Chart: Test Case Preparation Status - test cases completed vs in progress vs not completed]

Test Progress Metrics


Test Execution Status
Indicates status of test execution against the number of test cases planned to be executed.

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

Test Progress Metrics


Planned vs Actual Execution

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

[Chart: Test Status S-Curve - planned vs actual execution (number of test cases) per week]

Defect Metrics
Defects Distribution by Severity/Status
Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks


Defect Metrics
Defects Distribution by Root Cause

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Process Improvement

[Chart: Defects Distribution by Root Cause - number of defects per root cause: Requirements miss/unclear, Environment, Integration Test, Code/Unit Test, Script miss, Control Record Setup, Data]


Test Efficiency Metrics


Test Case Writing Productivity
Total test cases created / Total person days of effort involved in creating test cases

Where collected: Test Case Preparation Phase
Data Source: QC, NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks; analyse test efficiency; process improvement

Test Case Execution Productivity
Total test cases executed / Total person days of effort involved in executing test cases

Where collected: Test Execution Phase
Data Source: QC, NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks; analyse test efficiency; process improvement
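Both productivity metrics follow directly from the formulas above. A minimal Python sketch with illustrative figures (the counts and effort values are not from the slides):

    # Productivity = number of test cases / person days of effort, per the formulas above.
    def productivity(test_cases: int, person_days: float) -> float:
        return test_cases / person_days

    # Illustrative figures only.
    print(f"Writing:   {productivity(100, 10):.1f} test cases per person day")
    print(f"Execution: {productivity(120, 6):.1f} test cases per person day")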


Test Efficiency Metrics


% (Review + Rework) Effort
(A / B) x 100
where
A = (Review + Rework) effort for writing test cases
B = Total effort for writing test cases (creation + review + rework)

Where collected: Test Case Preparation Phase
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks; analyse test efficiency; process improvement
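A worked example of the formula above, with illustrative effort figures:

    # % (Review + Rework) effort = (A / B) x 100, per the formula above.
    def review_rework_pct(review_rework_days: float, total_days: float) -> float:
        return review_rework_days / total_days * 100

    # Illustrative figures: 3 person days of review + rework out of 12 in total.
    print(f"{review_rework_pct(3, 12):.1f}%")  # 25.0%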


Test Efficiency Metrics


Rejected Defects
% rejected defects = (Number of defects rejected / Total number of defects logged) x 100

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Analyse test efficiency, Process Improvement

[Chart: Defects Detected vs Defects Rejected - defect rejection ratio (rejected vs valid defects)]
[Chart: Root Cause of Rejected Defects - number of defects per root cause: Code/Unit Test, Data, Use Case Update, Working As Designed]

Test Effectiveness Metrics


Defect Removal Efficiency (DRE)
% DRE (for a testing phase) =
  [# valid defects detected in the current testing phase /
   (# valid defects detected in the current phase + # valid defects detected in the next testing phase)] x 100

% DRE (for all testing phases) =
  [# valid defects detected pre-production /
   (# valid defects detected pre-production + # defects detected post-production)] x 100

Where collected: Post Test Execution Phase
Data Source: QC, defects data for next testing phase
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness, Process Improvement
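Both DRE variants can be computed straight from the formulas above. A minimal Python sketch with illustrative defect counts:

    # Defect Removal Efficiency, per the formulas above.
    def dre_for_phase(current_phase: int, next_phase: int) -> float:
        return current_phase / (current_phase + next_phase) * 100

    def dre_overall(pre_production: int, post_production: int) -> float:
        return pre_production / (pre_production + post_production) * 100

    # Illustrative counts of valid defects only.
    print(f"Phase DRE:   {dre_for_phase(80, 20):.1f}%")   # 80.0%
    print(f"Overall DRE: {dre_overall(95, 5):.1f}%")      # 95.0%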


Test Effectiveness Metrics


Requirements Coverage
Indicates the distribution of requirements covered by test cases, along with their status.

Where collected: Test Case Preparation and Test Execution Phases
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness, Process Improvement


Test Effectiveness Metrics


Test Coverage
# valid defects not mapped to test cases vs # valid defects mapped to test cases

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness, Process Improvement

[Chart: Test Coverage - valid defects mapped to test cases (89%) vs valid defects not mapped to test cases (11%)]
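The coverage split in the chart follows from the two defect counts. A sketch with illustrative counts chosen to reproduce the 89% / 11% split:

    # Share of valid defects that map to existing test cases, per the slide's comparison.
    def mapped_defect_pct(mapped: int, not_mapped: int) -> float:
        return mapped / (mapped + not_mapped) * 100

    # Illustrative counts only.
    print(f"{mapped_defect_pct(89, 11):.0f}% of valid defects map to existing test cases")  # 89%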


Test Effectiveness Metrics


Cost of Quality
Total testing effort (person days) / # valid defects found

Where collected: Test Execution Phase
Data Source: QC, NIKU
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness and efficiency, Process Improvement
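A worked example of the formula above, with illustrative figures:

    # Cost of quality = total testing effort (person days) / number of valid defects found.
    def effort_per_valid_defect(testing_effort_days: float, valid_defects: int) -> float:
        return testing_effort_days / valid_defects

    # Illustrative figures only.
    print(f"{effort_per_valid_defect(75, 150):.2f} person days per valid defect")  # 0.50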


Group Standard Testing Metrics

Testing Effort
Actual Testing Effort / Total Project Effort (in person hours)

Where collected: All testing phases
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Measures the proportion of effort spent on testing against the whole project
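A worked example of the ratio above, with illustrative effort figures in person hours:

    # Testing effort as a proportion of total project effort (person hours), per the slide.
    def testing_effort_ratio(testing_hours: float, project_hours: float) -> float:
        return testing_hours / project_hours

    # Illustrative figures only.
    print(f"{testing_effort_ratio(600, 2400):.0%} of project effort spent on testing")  # 25%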


Group Standard Testing Metrics

Test Effectiveness Indicator
A / (A + B)
where
A = number of defects found in all test stages
B = number of latent defects found during the first month after implementation

Where collected: All testing phases
Data Source: QC; manual submission by Testing CoE or Regional IT Quality Leads
Main User: Test Lead, Test Manager
Reason for Use: Measures the proportion of defects discovered in all formal testing stages (e.g. SIT, UAT, NFR/performance test, OAT) against those discovered in the first month of production operations
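A worked example of A / (A + B), with illustrative defect counts:

    # Test Effectiveness Indicator = A / (A + B), per the formula above.
    def test_effectiveness(found_in_test: int, latent_first_month: int) -> float:
        return found_in_test / (found_in_test + latent_first_month)

    # Illustrative counts only.
    print(f"{test_effectiveness(190, 10):.2f}")  # 0.95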

