improveandinnovate
Thoughts and ideas on Quality, Improvement and Innovation

CSSBB Tutorial Series : Lesson 9


(http://improveandinnovate.wordpress.com/2014/07/18/cssbb-tutorial-series-lesson-9/)
July 18, 2014 | Lean Six Sigma

Lesson 9 : Measure Phase Part 5 of 5


Topics Covered :
Process Capability Analysis

o Process Capability Indices : Cp/Cpk/Pp/Ppk


o Application to non-normal & attribute data
o Six Sigma Metrics : DPMO / PPM / RTY
Process Capability Analysis
The only man who behaved sensibly was my tailor: he took my measure anew every time

7/19/2014 7:19 PM

improveandinnovate | Thoughts and ideas on Quality , Improvement and In...

2 of 96

http://improveandinnovate.wordpress.com/?goback=.gde_3151110_memb...

he saw me, whilst all the rest went on with their old measurements and expected them to fit
me.
George Bernard Shaw
Process Capability refers to the ability of a process to meet customers'
specifications. Depending on the process and the quality characteristic of interest,
several methods are available for computing process capabilities.
Process Capabilities are generally expressed in terms of unitless numbers called
Process Capability Indices or Process Performance Indices. These are ratios of
process spread to tolerance ( i.e. customers' specifications ).
Key requirements for computing Process Capability are :
The process should be stable, i.e. the process operates within the Upper Control
Limit and the Lower Control Limit. This means the corresponding control chart
should be studied to assess process stability ( will be discussed in a later lesson
on Control Charts ).
The data follows a normal or near-normal distribution. If the data is not normal,
the process capabilities can still be computed, but only after transforming the data.
Process Capability Indices ( Cp , Cpk)
Figure 32 ( a and b ) shows two processes with the same set of customer specifications (
Lower Specification Limit LSL and Upper Specification Limit USL ). Which
process has a better capability of meeting customers' requirements ?


(https://improveandinnovate.files.wordpress.com/2014/07/slide232.jpg)
Quite obviously, the process in Fig. 32(a). Part of the process in Fig. 32(b) is outside the
customers' specification limits and hence will produce more non-conforming product
than the process in Fig. 32(a). The only information one cannot obtain from the
above figures is how much non-conforming product each process will produce. Such
information can be obtained with the help of process capability analysis.
Process Capability Indices are helpful when comparing two processes. For example,
vendor evaluation and rating can be done using Process Capability Indices.
Another advantage is that Process Capability Indices provide a universal
language that can be used to communicate process performance across industries &
processes.

i) Cp : ( UCL − LCL ) is called the Process Width and is given by : UCL − LCL = 6σ. The Cp compares the tolerance to the process width ; hence :
Cp = ( USL − LSL ) / ( UCL − LCL ) = ( USL − LSL ) / 6σ


The concept of Cp is explained in Fig. 33.

(https://improveandinnovate.files.wordpress.com/2014/07/slide201.jpg)
ii) Cpk : The assumption while computing Cp using the earlier formula is that the
process mean is centered between the customer specifications, which may not
always be true. Fig. 34 shows two processes with the same Cp but with different
capabilities. In such cases, the Cp fails to provide a correct estimate of the process
performance.


(https://improveandinnovate.files.wordpress.com/2014/07/slide192.jpg)
Hence, we use a second process capability index called Cpk, which is given by :
Cpk = min [ ( USL − μ ) / 3σ , ( μ − LSL ) / 3σ ]


(https://improveandinnovate.files.wordpress.com/2014/07/slide172.jpg)
Note : Cp vs. Cpk : While the Cp is a good indicator of the potential of the process
to perform, the Cpk is a realistic measure of the ability of the process to meet
customers' specifications.
Typical Values of Cp & Cpk :
Cp = Cpk implies the process mean is equal to the customers' target.
Cpk = 1 implies 99.73% of the process output is within customer specification limits (
refer Fig. 17 under the normal distribution curve ). This means the process is just
capable. This is also called a 3-sigma level process.
Cpk = 1.33 implies a 4-sigma level process. Quite often customers specify this
as the minimum requirement for their vendors.
Cpk = 1.67 implies a 5-sigma level process.
Cpk = 2 implies a 6-sigma level process.
Note : Refer to Six Sigma Metrics in the following sections for details on Process Sigma
Level.
Example : A filling machine in a bottling process is expected to fill an average of 300 ml.
Specifications for this process are :
LSL = 295 ml , USL = 305 ml
A sample of 72 ( 24 subgroups of size n = 3 ) bottles from the process was taken and the


control chart indicated the process was stable, with Lower Control Limit ( LCL ) = 298 and
Upper Control Limit ( UCL ) = 306 and a mean of 302 ml. Compute Cp & Cpk for this
process.
Solution :
Cp = ( USL − LSL ) / ( UCL − LCL )
= ( 305 − 295 ) / ( 306 − 298 )
= 10/8 = 1.25

(https://improveandinnovate.files.wordpress.com/2014/07/slide162.jpg)
Note : UCL − LCL = 6σ = 8 ; hence, 3σ = 4

Cpk = min [ ( USL − μ ) / 3σ , ( μ − LSL ) / 3σ ] = min [ ( 305 − 302 ) / 4 , ( 302 − 295 ) / 4 ] = 0.75

The Cp indicates the process is capable. However, since the process is not centered, the
Cpk will not be equal to the Cp. The actual capability of the process ( Cpk = 0.75 ) is much lower than the
Cp suggests !
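As a quick cross-check of the arithmetic, the two indices can be computed in a few lines of Python ( a sketch using the lesson's numbers ):

```python
# Cp and Cpk for the bottling example, using the lesson's numbers
def cp(usl, lsl, sigma):
    """Potential capability: tolerance over process width (6 sigma)."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Actual capability: distance to the CLOSEST spec limit over 3 sigma."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

usl, lsl = 305, 295              # specification limits (ml)
ucl, lcl, mean = 306, 298, 302   # from the control chart
sigma = (ucl - lcl) / 6          # UCL - LCL = 6 sigma

print(round(cp(usl, lsl, sigma), 2))         # 1.25
print(round(cpk(usl, lsl, mean, sigma), 2))  # 0.75
```

The gap between 1.25 and 0.75 is exactly the penalty for the off-center mean.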

Cp vs. Cpk : The following set of figures ( Fig. 36 ) shows how the Cp and Cpk are
related.


(https://improveandinnovate.files.wordpress.com/2014/07/slide152.jpg)
Process Performance Indices ( Pp , Ppk, Cpm)
The Cp & Cpk discussed earlier are called short-term capability indices. This is
because the estimates of variation ( or standard deviation ) are based on short-term
samples. The short-term standard deviation used for Cp / Cpk calculations is
estimated by :
σst = R(bar) / d2 , where
R(bar) is the average range of subgroups in the X(bar) R control chart and d2 is a
constant ( refer to X(bar) R Control Charts in subsequent lessons ! )

On the other hand, the Process Performance Indices ( Pp & Ppk ) use the overall (
long-term ) standard deviation computed by the formula :

(https://improveandinnovate.files.wordpress.com/2014/07/slide142.jpg)


The Process Performance Indices ( Pp , Ppk ) were introduced by the Automotive
Industry Action Group ( AIAG ).
The formulae for Pp and Ppk are very similar to those of Cp & Cpk, except that
the standard deviation is computed using the different methods discussed above.
Hence ,
Pp = ( USL − LSL ) / 6σlt
Ppk = min [ ( USL − μ ) / 3σlt , ( μ − LSL ) / 3σlt ]
Note : It is common practice to report Pp and Ppk for processes that are not in
control. Many experts consider the use of Pp and Ppk unnecessary, as it is
meaningless to estimate the capability of a process that is not stable.
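The distinction between the two sigma estimates can be sketched in Python. The subgroup data below is made up for illustration, and d2 = 1.693 is the usual control-chart constant for subgroups of size 3:

```python
import statistics

# Illustrative subgroup data (made up); d2 = 1.693 is the standard
# control-chart constant for subgroups of size n = 3.
subgroups = [[300.1, 301.9, 300.8], [302.4, 301.1, 303.0],
             [299.5, 300.9, 301.6], [301.8, 303.2, 302.1]]
d2 = 1.693

# Short-term sigma from the average subgroup range (R-bar / d2)
r_bar = statistics.mean(max(s) - min(s) for s in subgroups)
sigma_st = r_bar / d2

# Long-term (overall) sigma from all readings pooled together
all_readings = [x for s in subgroups for x in s]
sigma_lt = statistics.stdev(all_readings)

usl, lsl = 305, 295
cp = (usl - lsl) / (6 * sigma_st)   # short-term index
pp = (usl - lsl) / (6 * sigma_lt)   # long-term index
print(round(sigma_st, 3), round(sigma_lt, 3))
print(round(cp, 2), round(pp, 2))
```

Because the overall sigma also captures between-subgroup drift, it is typically larger than the within-subgroup estimate, so Pp comes out lower than Cp.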
Cpm : The Cpm is another important capability ratio, and is given by :
Cpm = ( USL − LSL ) / ( 6 √( σ² + ( μ − T )² ) ) , where T is the target value.

Process Capability for Non Normal Data


As discussed earlier, the formulae for the process capability indices assume that the
data come from a normally distributed process. If one were to compute
the capability of a process assuming normality when the data is actually not normal, the results
could be in error.
Hence, it is important to first assess whether the data is normally distributed. This can be
done with methods such as probability plotting or, more easily, by hypothesis testing
methods such as the Anderson-Darling test.
If the data is not normally distributed, one can try :
o transforming the data to obtain a normal distribution, or
o fitting other known distributions to the data, such as Exponential, Weibull,
Lognormal etc. This can be done with standard statistical software packages such as
MINITAB.
o If no known distribution fits the data, one should work with non-parametric
methods, and the above capability indices will not be valid.
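As a sketch of such a normality check without any statistics package, the Anderson-Darling statistic can be computed directly. The data below is simulated, and the 0.787 critical value quoted in the comment is the commonly tabulated large-sample 5% value ( in practice one would use MINITAB or a library such as SciPy ):

```python
import math
import random

def normal_cdf(x, mu, sd):
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def anderson_darling_norm(data):
    """Anderson-Darling A^2 statistic against a normal distribution whose
    mean and standard deviation are estimated from the data."""
    n = len(data)
    xs = sorted(data)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    s = 0.0
    for i, x in enumerate(xs, start=1):
        # Standard A^2 sum: (2i - 1) * [ln F(x_i) + ln(1 - F(x_{n+1-i}))]
        s += (2 * i - 1) * (math.log(normal_cdf(x, mu, sd))
                            + math.log(1 - normal_cdf(xs[n - i], mu, sd)))
    return -n - s / n

# Skewed (lognormal-like) data should fail the normality check:
random.seed(3)
skewed = [math.exp(random.gauss(1.0, 0.6)) for _ in range(200)]
print(anderson_darling_norm(skewed) > 0.787)  # True: reject normality at ~5%
```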
The Box-Cox Transformation Method
One approach to making non-normal data resemble normal data is by using a
transformation. Among the many methods available for transforming non-normal
data to normal, the Box-Cox is one of the most popular. All transformation
methods use transformation functions that convert a non-normal distribution to a
normal distribution. The Box-Cox transformation function is defined as
Yt = Y^λ
where Yt is the transformed value of Y ( the response variable ) and λ is the
transformation parameter. For λ = 0, the natural log of the data is taken instead of
using the above formula. For example :


For λ = −1 , the transformed value of Y will be 1/Y
For λ = 2 , the transformed value of Y will be Y²
For λ = 1/2 , the transformed value of Y will be √Y
For λ = 0 , the transformed value of Y will be ln(Y)
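The lesson's form of the transform is easy to sketch in Python ( note that the textbook Box-Cox uses ( Y^λ − 1 ) / λ, which differs from the plain power form only by a shift and scale for λ ≠ 0 ):

```python
import math

# The lesson's power form of the Box-Cox transform: y ** lam, with ln(y)
# used when lam = 0.
def box_cox(y, lam):
    return math.log(y) if lam == 0 else y ** lam

y = 4.0
print(box_cox(y, 2))            # 16.0
print(box_cox(y, 0.5))          # 2.0
print(box_cox(y, -1))           # 0.25
print(round(box_cox(y, 0), 3))  # 1.386 (= ln 4)
```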

Example : The following table ( Fig. 37 ) shows data on the time taken to resolve customer
complaints ( TAT in hrs. ). An Anderson-Darling test using MINITAB indicates that the
data is not normally distributed. Refer to Fig. 38 : Minitab Output. The p-value for the test is
0.032 ( < α of 0.05 ).

(https://improveandinnovate.files.wordpress.com/2014/07/slide113.jpg)
Note :
i) 10 data points may not be a sufficiently large sample size to establish normality.
ii) The concepts of hypothesis testing and the p-value are discussed in detail in subsequent lessons :
The Analyze Phase.
The data was transformed using the Box-Cox Transformation method. Figure 39 is
the Minitab output for the Box-Cox plot. It shows the most likely value of λ, the
transformation parameter. In this case, the value of λ = 0. This means the original
data can be transformed using ln(y) as the transformation function. The
transformed data is shown in the table in Figure 40.


A normality test of the transformed data, using the Anderson-Darling test, confirms
that the transformed data is normally distributed. The corresponding p-value is
0.079 ( > α ). Refer to Figure 41.

(https://improveandinnovate.files.wordpress.com/2014/07/slide93.jpg)

(https://improveandinnovate.files.wordpress.com/2014/07/slide84.jpg)


(https://improveandinnovate.files.wordpress.com/2014/07/slide74.jpg)
Six Sigma Metrics & Capability Analysis for Attribute Data
The Six Sigma methodology uses a set of metrics to measure process performance
that are similar to the Process Capability Indices. These metrics are often used to
demonstrate the amount of improvement achieved by Six Sigma improvement teams
by estimating process capabilities before and after the project. The Six Sigma
metrics can be used for variable as well as for attribute data ( counts and proportions ).
Some of the commonly used metrics are :
Defects Per Million Opportunities ( DPMO)
Process Yields or PPM Defective
Rolled Throughput Yields ( RTY)
Each of the above results can be translated into a common metric called Process
Sigma Level or Sigma Capability. These metrics are derived from the normal
probability distribution.
The sigma capability ( also called the z value ) is a metric expressed as a single number
that indicates the defect rate of a process. The higher the sigma capability, the
lower the defect rate, and vice versa. For example, a 6-sigma level process
produces only 3.4 Defects Per Million Opportunities ( 3.4 DPMO ), whereas a 3-sigma
level process produces 66,807 defects per million opportunities. The following table ( Fig. 42 )
shows process sigma levels for various defect rates.


(https://improveandinnovate.files.wordpress.com/2014/07/slide65.jpg)
Note : The above DPMO values include a 1.5σ shift to factor in the long-term process
variability, a concept discussed later in the chapter.
Cpk & Process Sigma Level
Fig. 43 relates to the example discussed for computing Cp & Cpk. With reference to
this figure, it is easy to correlate Cpk with Process Sigma Level.
With reference to the normal probability distribution, the sigma level of a process
can be defined as the number of standard deviations that can be fitted between the
process mean and the CLOSEST specification limit.


Applying this definition of process sigma level, we have :
The closest specification limit to the mean is the USL. Hence, the distance between the
process mean and the CLOSEST specification limit is :
USL − μ = 305 − 302 = 3 ( refer Fig. 43 )
Standard deviation σ = ( UCL − LCL ) / 6
= ( 306 − 298 ) / 6 = 8/6 = 1.33
Process sigma level ( z value ) = ( USL − μ ) / σ
= 3 / 1.33 = 2.25
One would now observe that Process Sigma Level = 3 x Cpk !
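This definition can be checked numerically against the bottling example:

```python
# Sigma level (z value) for the bottling example, and its relation to Cpk
usl, lsl, mean = 305, 295, 302
sigma = (306 - 298) / 6               # sigma = (UCL - LCL) / 6 = 8/6

dist_to_closest = min(usl - mean, mean - lsl)  # 3 ml, to the USL
z = dist_to_closest / sigma                    # number of sigmas that fit
cpk = dist_to_closest / (3 * sigma)

print(round(z, 2))        # 2.25
print(round(3 * cpk, 2))  # 2.25: sigma level = 3 * Cpk
```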
Defects Per Million Opportunities ( DPMO)
The DPMO is a Six Sigma metric that is used when a team needs to monitor process
performance with respect to defects.
Defect vs. Defective : A defect is a non-conformity in the product or process
with respect to a single quality characteristic that does not affect the functioning
of the product / process, whereas a defective is an entire unit that is
unacceptable to the customer. For example, a dent or a scratch on the body of a car is
one defect, whereas a defective car is one that fails to start.
An Opportunity is a Critical to Quality characteristic ( CTQ ) specified by the
customer. Thus, any failure to meet a CTQ requirement is termed a defect.
To compute DPMOs , consider the following example :
Example : A component has 4 defined opportunities / CTQs. 30 samples of the component
are inspected for defects against the 4 CTQs. The total defect count is 44.
The Defects Per Opportunity ( DPO ) is given by :
DPO = Observed Defects / Total Possible Defects
= 44 / ( 4 x 30 )
≈ 0.3667
Note : Each component provides 4 opportunities to produce a defect. Hence, Total Possible
Defects = 4 x 30 = 120.
Hence , DPMO = DPO x 10^6
≈ 366,667
To translate this into a process sigma level, we can look up the DPMO & Yield
conversion tables ( refer to the standard tables in Six Sigma handbooks ), which give a process
sigma level of 1.84.
We can also use the following formula in MS Excel :
=NORMSINV( 1 − defects/total opportunities ) + 1.5
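The same calculation can be sketched in Python; the standard library's NormalDist.inv_cdf plays the role of Excel's NORMSINV:

```python
from statistics import NormalDist

# DPMO example from the lesson: 30 components x 4 CTQs, 44 defects observed
defects, units, opportunities = 44, 30, 4
dpo = defects / (units * opportunities)
dpmo = dpo * 1_000_000

# Equivalent of Excel's =NORMSINV(1 - DPO) + 1.5
sigma_level = NormalDist().inv_cdf(1 - dpo) + 1.5

print(round(dpmo))            # 366667
print(round(sigma_level, 2))  # 1.84
```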
Proportion Defective ( Yield or PPM)
This metric is used when monitoring rejections in processes / products. Commonly
used terms in the industry are Scrap %, First Time Through, First Pass Yield, PPM,
etc.
Estimating the process sigma level in this case is fairly simple : convert the data into % yield =
( 1 − proportion defective ) x 100
Example : 92 out of 950 fasteners manufactured are defective.
Yield = ( 1 − 92/950 ) x 100
= 90.32%
From the DPMO & Yield conversion tables, this translates into a process sigma level of
about 2.80.
We can also use the following formula in MS Excel :
=NORMSINV( yield in fraction ) + 1.5
Rolled Throughput Yield ( RTY)


This is an important Six Sigma metric and is useful when multiple processes are
connected in series ( Fig. 44 ).

(https://improveandinnovate.files.wordpress.com/2014/07/slide46.jpg)
Example : Consider the 4 processes shown in Fig. 44 above. Following are the yields of each of
the processes :
o Process A : 97.5%
o Process B : 98%
o Process C : 99%
o Process D : 97.5%

The Rolled Throughput Yield is given as the product of all four yields, i.e.
RTY = ( 0.975 x 0.98 x 0.99 x 0.975 ) x 100 %
= 92.23 % !
From the conversion tables ( applying the same 1.5σ shift as before ), this translates into a
process sigma level of about 2.92.
Note : The RTY is a very important metric : it establishes that even when each of the
4 processes individually performs at a high sigma level, the customer will receive an
output that is at a lower sigma level than any individual process.
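A sketch of the RTY calculation, reusing the same yield-to-sigma conversion:

```python
import math
from statistics import NormalDist

# RTY example: four processes in series; the overall yield is the product
yields = [0.975, 0.98, 0.99, 0.975]
rty = math.prod(yields)

# Converting the overall yield to a sigma level (with the 1.5 sigma shift)
sigma_level = NormalDist().inv_cdf(rty) + 1.5

print(round(rty * 100, 2))  # 92.23
print(round(sigma_level, 2))
```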

RTY & DPU : The RTY and Defects per Unit ( DPU ) can be related through a special
form of the Poisson distribution, as given below :
RTY = e^( −DPU )
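For example, with a hypothetical DPU of 0.25:

```python
import math

# RTY from defects per unit via the Poisson relation RTY = e^(-DPU)
dpu = 0.25            # hypothetical: 0.25 defects per unit on average
rty = math.exp(-dpu)
print(round(rty, 4))  # 0.7788
```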

Short Term and Long Term Capability : The 1.5σ Shift


In the Six Sigma methodology, one comes across a term called the 1.5σ shift.
This is an interesting theory that has been the subject of much debate. The inventors of
the Six Sigma methodology argued that in the long term a process is likely to show higher
variation than in the short term. Hence, when computing process sigma levels using
short-term data ( samples ), one can obtain only short-term process capabilities. To
factor in the long-term variation, a process shift of 1.5σ from the mean is
considered.
Example : If sample data indicates the process is at 4.5σ, then the long-term sigma level is
considered as 4.5 − 1.5 = 3σ. This can be generalized as :

Long Term Sigma Level = Short Term Sigma Level − 1.5
Or
Zlt = Zst − 1.5


The table in Fig. 42 shows long-term sigma levels. We can add another column to
show the corresponding short-term sigma levels. Refer to Fig. 45.

(https://improveandinnovate.files.wordpress.com/2014/07/slide29.jpg)
Using the normal probability distribution concepts learnt earlier in the chapter, it is
easy to establish that a long-term 6-sigma process actually produces only 0.002
DPMO. This is also called 2 parts per billion ( 2 ppb ). This is equivalent to a short-term
process sigma level of 7.5 !

Suggested Reading :
1. Statistics for Management by Levin & Rubin
2. Juran's Quality Handbook
3. Introduction to Statistical Quality Control by D. C. Montgomery


CSSBB Tutorial Series : Lesson 8



(http://improveandinnovate.wordpress.com/2014/07/16/cssbb-tutorial-series-lesson-8/)
July 16, 2014 | Lean Six Sigma, Uncategorized

Lesson 8 : The Measure Phase Part 4

Topics Covered
Measurement Systems
Measurement Methods & Gauges
Measurement System Analysis : Gage R & R Studies
Metrology Basics
Measurement Systems
The only man who behaved sensibly was my tailor: he took my measure anew every time
he saw me, whilst all the rest went on with their old measurements and expected them to fit
me.
George Bernard Shaw
Introduction
Validating measurement systems is vital to successful data collection and analysis.
Quite often, Six Sigma teams end up making wrong decisions due to poor quality of
data. Data accuracy and precision is a function of the operator or inspector
responsible for collecting the data, as well as the gauges ( measuring instruments )
used to collect data. An inadequate measurement system might adversely impact
the process and / or the improvement project in the following ways :
o Inaccurate data analysis might impact further decision making

o The problem might get solved by simply fixing the measurement system, and the
project might not be required at all.
o The team might tamper with the process in an attempt to reduce variation without
actually realizing that the root cause(s) lies elsewhere.
Measurement Methods & Gauges : Data collected with the help of any
measuring instrument can be of two types : i) attribute or ii) variable. Various
measurement methods and gauges are used depending on the process
requirements and the quality characteristic to be measured. Some of the commonly
used measurement methods are :
o Mechanical Systems : Example vernier calipers , micrometers , ring gages etc.
o Electronic Systems : Example Co-ordinate measuring machines ( CMMs)
o Optical Systems : Example Infrared Thermometers
o Pneumatic Systems : Example air calipers and air ring gauges
o Electron based systems : Example hot cathode ionization gauge
Measurement Systems Analysis ( MSA)
A Measurement System Analysis is a designed experiment to identify the
components of variation in the measurement process. The objective of any MSA is to ensure that
variation due to the measurement system is under control and does not adversely impact
analysis of the observed process variation. The MSA is an important part of any Six
Sigma improvement project.
The flow chart in Fig. 27 explains the concept of measurement system analysis.


(https://improveandinnovate.files.wordpress.com/2014/07/slide102.jpg)
Measurement System Analysis : Terminologies
Measurement System Variation and Error can be classified into two categories :
Accuracy and Precision.
Accuracy : The ability of a measurement system to provide the correct results.
Accuracy of a measurement system has three components :
o Bias : The absolute difference between the observed value and the true ( actual )
value.
o Linearity : A measure of the consistency of the accuracy of the gage across the entire
range of the measurement system.
o Stability : A measure of the accuracy of the system over a period of time.
Precision : Precision is a measure of the variation obtained from repeated
readings with the same gauge. Precision has two components :
o Repeatability : Variation when one operator repeatedly measures the same unit
with the same measuring equipment.
o Reproducibility : Variation when multiple operators measure the same unit with
the same measuring equipment.

Discrimination : The ability of the measurement system to detect small changes
in the value of the quality characteristic. As a rule of thumb, gauge selection
should be such that it can detect at least 1/10th of the tolerance, i.e. ( USL − LSL ),
specified by the customer.
Measurement System Analysis Types
Depending on the quality characteristic to be monitored, MSA can be classified into
two types : i) Variable and ii) Attribute.
i) Variable MSA : The measurement system analysis used for variable data is called
variable Gauge Repeatability and Reproducibility ( Gauge R & R ). It is typically used
in manufacturing environments for inspection and measuring tools such as
micrometers, vernier calipers, height gauges etc. Two methods are commonly used
in variable MSA :
o Analysis of Variance ( ANOVA) Method and
o X(bar) R Method.
The ANOVA method is more robust, as we can compute the Operator*Part
variation with this method ( refer Fig. 27 ). This is not possible with the X(bar) R
method. MSA computations can be quite cumbersome and hence require the use of
software.
Note : Readers are advised to familiarize themselves with the ANOVA method ( will be
discussed later ! ) for a good understanding of the variable MSA method.
Gauge R & R : Concept
The total observed variation in a process ( σ²Total ) can be represented as :
σ²Total = σ²Measurement Process + σ²Process
and the variation due to the measurement system can be represented as :
σ²Measurement Process = σ²gage = σ²repeatability + σ²reproducibility
Hence, the Gauge R & R, which is the % of measurement variation over the total
variation, is given by :


(https://improveandinnovate.files.wordpress.com/2014/07/slide104.jpg)
Note : Typical GR&R studies are done with 10 parts, 3 operators, and each operator taking
two measurements ( trials ) per part.
Example : The following data ( Fig. 28 ) relates to heights of aluminum fins ( mm ) produced
by a fin mill, measured by a height gauge. 10 fins were drawn at random from the process.
Three operators measured the heights of the 10 fins, twice, in random order. The objective
was to compute the Gauge R & R %.

(https://improveandinnovate.files.wordpress.com/2014/07/slide92.jpg)
Solution :
The Gauge R & R can be worked out easily with the help of Minitab Statistical Software,
using the following commands :
Stat > Quality Tools > Gage Study > Gage R & R Study ( Crossed ). Following are the
results : ( MINITAB OUTPUT in Fig. 29 )


(https://improveandinnovate.files.wordpress.com/2014/07/slide83.jpg)
Acceptance Criteria for Gauge R & R Study
1. For % Contribution = ( σ²gage / σ²total ) x 100 , the following are the acceptance
criteria :
If % Contribution is less than 1% : the Measurement System is excellent.
If % Contribution is between 1% and 10% : the Measurement System is acceptable,
but should be used with caution.
If % Contribution is greater than 10% : the Measurement System is unacceptable and
needs to be replaced.
Note : The corresponding value is circled in the MINITAB Output Table ( Fig. 29 )
2. For % Study Variation = Precision to Total Variation ( P/TV ) % , the following are
the acceptance criteria :
% Study Variation is given by : ( 6 x Measurement SD ) / ( 6 x Total SD ) x 100 , i.e.
( Measurement SD / Total SD ) x 100
If % Study Variation is less than 10% : the Measurement System is acceptable.
If % Study Variation is between 10% and 30% : the Measurement System is acceptable,
but use with caution.
If % Study Variation is greater than 30% : the Measurement System is unacceptable and
needs to be replaced.
Note : The corresponding value is circled in the MINITAB Output Table ( Fig. 29 )
3. No. of Distinct Categories ( NDC ) is the number of different groups in the data that
the measurement system can discern. This should be greater than 5. It is
computed using the following formula :
NDC = 1.41 x ( Standard Deviation of Parts ) / ( Standard Deviation of Gauge )
4. % Tolerance : Optionally, MINITAB will also return another important piece of
information called SV / Tolerance , which is given by :
% Tolerance = ( 6 x Measurement SD ) / ( USL − LSL ) x 100
where USL & LSL are the upper and lower specification limits provided by the
customer.
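The acceptance ratios above can be sketched numerically. The variance components and specification limits below are made-up values standing in for what a package such as MINITAB would estimate:

```python
import math

# Hypothetical variance components, standing in for a software estimate
var_repeatability = 0.039
var_reproducibility = 0.011
var_part_to_part = 0.950

var_gage = var_repeatability + var_reproducibility
var_total = var_gage + var_part_to_part

# %Contribution compares variances; %Study Variation compares standard
# deviations (the factors of 6 cancel), so the two ratios differ.
pct_contribution = var_gage / var_total * 100
pct_study_variation = math.sqrt(var_gage / var_total) * 100

# Number of distinct categories and %Tolerance (spec limits hypothetical)
ndc = int(1.41 * math.sqrt(var_part_to_part / var_gage))
usl, lsl = 10.0, 0.0
pct_tolerance = 6 * math.sqrt(var_gage) / (usl - lsl) * 100

print(round(pct_contribution, 1))     # 5.0
print(round(pct_study_variation, 1))  # 22.4
print(ndc)                            # 6
print(round(pct_tolerance, 1))        # 13.4
```

Note that a 5% contribution corresponds to roughly a 22% study variation: the same measurement system looks different under the two criteria, which is why each has its own thresholds.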
The Operator*Part interaction can be shown separately by the ANOVA method.
Minitab output omits this from the table if the p-value for the Operator*Part interaction is >
0.25. In the example discussed, there is no significant Operator*Part interaction.
Gauge R & R : Graphical Results : Minitab Output
The Minitab Output also provides a graphical summary of the values shown in the
tables earlier. ( Fig. 30)


The Components of Variation graph displays Repeatability, Reproducibility and
Gauge R & R as a % of Total Variation.
An important part of the graphical summary is the X(bar) & Range chart. This
concept will be discussed in detail in subsequent lessons : The Control Phase.
Note : For an acceptable MSA, one would expect most points in the X-bar
chart to be outside the control limits, indicating that the variation is primarily due
to differences between the parts and not due to the measurement system. On
the other hand, if most points on the X-bar chart fall within the control limits, it
indicates the variation is primarily due to the measurement system.


(https://improveandinnovate.files.wordpress.com/2014/07/slide55.jpg)
NOTE : All acceptance guidelines are as defined by the Automotive Industry Action
Group ( AIAG ), MSA Reference Manual, 3rd edition. The AIAG was formed by the big
three automakers, i.e. Ford, GM & Chrysler, in 1982. Readers are advised to refer
to the latest AIAG manuals for any changes in the guidelines.
The AIAG method uses the following terminologies and definitions :
Gauge R & R : GRR = √( EV² + AV² )
(https://improveandinnovate.files.wordpress.com/2014/07/slide73.jpg)
% Gauge R & R = %GRR = ( GRR / TV ) x 100
EV = Equipment Variation = Repeatability
Hence , %EV = ( EV / TV ) x 100
AV = Appraiser Variation = Reproducibility
Hence , %AV = ( AV / TV ) x 100

Total Variation ( TV ) = √( GRR² + PV² )

(https://improveandinnovate.files.wordpress.com/2014/07/slide64.jpg)
where PV = Part-to-Part Variation.
The values of EV, AV and PV are computed using constants from the X(bar) R
Control Chart tables ( these can be found in any book on SPC ).
Gauge R & R : X(bar) R Method :
This method is similar to the ANOVA method and can be performed easily with
MINITAB. One can expect minor differences in the results, due to the fact that this
method approximates the standard deviation with range values using the control chart
method. Refer to Chapter 8 for detailed information on the X(bar) R Control Chart. As
discussed earlier, this method does not compute the Operator*Part interaction
separately.
Measurement System Analysis : Attribute Data
MSA for attribute data ( also called Attribute Gauge R & R ) is used when the quality
characteristic to be monitored is attribute in nature, for example, ratings or rankings,
pass / fail in an inspection, accepting / rejecting an application form, etc. Following
are the steps for conducting an Attribute Gauge R & R :
1. Identify the sample, usually more than 30 items ( some good, some bad and some
borderline cases ).
2. Have the items rated / graded first by an expert.
3. Select the inspectors who will be rating the items.
4. Pass the items in a random order to each inspector and record the ratings.
5. Repeat the process to obtain ratings for a second trial ( in random order again ! ).
Note : This method is also known as the kappa method.
Example : The manager of a placement firm is concerned about the consistency of
her executives in shortlisting resumes for various positions. The manager would like
to validate her concerns by using the Attribute Gauge R & R method. 20 resumes of
applicants to an advertised position were identified by the manager ( a mix of good
fits, poor fits and borderline cases ). Three executives were asked to grade the
resumes ( Accept or Reject ) with reference to the job description. Each executive
was given two trials, in random order. The resumes were also graded separately by
the manager, regarded as the expert ( reference ). The results are displayed in Fig. 30.


(https://improveandinnovate.files.wordpress.com/2014/07/slide45.jpg)
Solution :
The following information can be gathered from the table :
No. of times an executive agrees with herself: in Sl. No. 3 (shaded), Exec. A is not consistent across both trials for the same resume.
No. of times an executive agrees with herself and also with the other executives: in Sl. No. 8 (shaded), all executives agree with themselves, but executive B does not agree with executives A & C.
No. of times all executives agree with each other and also with the expert: in Sl. No. 20 (shaded), all executives agree with each other, but not with the expert. This is an estimate of the Attribute Gauge R & R %.
In the above example, we have 14 occasions (marked *) where all executives agreed (accept or reject) with each other and also with the expert. Hence, the Attribute Gauge R & R % is given as: (14/20) × 100 = 70%. The target is 100% agreement, with a lower limit of 80%.
The same result can also be obtained with MINITAB using the following command :
Stat > Quality Tools > Attribute Agreement Analysis
Fig. 31 shows the Minitab output and Fig. 32 shows a graphical analysis of the appraisers' performance.
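The agreement count described above can also be scripted directly. A minimal Python sketch: the ratings below are illustrative toy data, not the actual resumes from Fig. 30.

```python
# Attribute Gauge R & R %: the fraction of items where every appraiser
# agrees with themselves (both trials) and with the expert reference.
# Toy data: 5 items, 3 appraisers, 2 trials each ("A" = accept, "R" = reject).
expert = ["A", "R", "A", "A", "R"]
trials = {  # appraiser -> (trial 1 ratings, trial 2 ratings)
    "Exec A": (["A", "R", "A", "A", "R"], ["A", "R", "A", "A", "R"]),
    "Exec B": (["A", "R", "A", "R", "R"], ["A", "R", "A", "R", "R"]),
    "Exec C": (["A", "R", "A", "A", "R"], ["A", "R", "A", "A", "R"]),
}

def gauge_rr_percent(expert, trials):
    n = len(expert)
    agree = 0
    for i in range(n):
        # collect every rating (all appraisers, both trials) for item i
        ratings = [t[i] for t1, t2 in trials.values() for t in (t1, t2)]
        if all(r == expert[i] for r in ratings):
            agree += 1
    return 100.0 * agree / n

print(gauge_rr_percent(expert, trials))  # 80.0: Exec B disagrees on item 4
```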


(https://improveandinnovate.files.wordpress.com/2014/07/slide35.jpg)
Conclusion :
Since the agreement is only 70%, the measurement system is not adequate. Figure 31 also shows the 95% confidence interval for the agreement % and the Fleiss' kappa & Cohen's kappa statistics. The corresponding p-values indicate whether or not to accept the null hypothesis.
Subsequent lessons will deal with confidence intervals, hypothesis testing and p-values.
Fig. 32 is a graphical analysis of the appraisers' performance. Appraisers A and B have only 85% agreement with the expert (standard), while the within-appraiser agreement is low for appraiser A (90%).

(https://improveandinnovate.files.wordpress.com/2014/07/slide28.jpg)

Metrology Basics
Metrology is defined as "the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology" (Source : The International Bureau of Weights and Measures).
Metrology deals with the following subjects :
Development and establishment of units of measurement and traceability standards.
Application of the science of measurement in various manufacturing processes
which include selection of the right measuring instruments and their calibration.
Compliance to regulatory standards such as weights and measures , safety of the
consumers etc.
Traceability : A key concept in metrology is traceability, which is the capability to verify the history, location, or application of an item by means of documented or recorded identification.
Traceability refers to an unbroken chain of comparisons relating an instrument's measurements to a known standard. Every country maintains its own metrology system which defines the various standards.
Calibration : Calibration is a comparison between two measurements: one of known magnitude or correctness made or set with one device (known as the standard), and another made in as similar a way as possible with a second device. This is done to ensure an instrument's accuracy (bias, stability and linearity) during its useful life.
Calibration is a periodic activity and follows a pre-defined schedule that depends on the measuring instrument and external factors. Calibration details need to be recorded and are often a subject of quality audits.


CSSBB Tutorial Series : Lesson 7


(http://improveandinnovate.wordpress.com/2014/07/15/cssbb-tutorial-series-lesson-6-part-3/)
July 15, 2014 (updated July 17, 2014) Lean Six Sigma

Lesson 7 : The Measure Phase Part 3

Topics Covered :
o Probability Distributions Discrete and Continuous

Probability Distributions
The only man who behaved sensibly was my tailor: he took my measure anew every time
he saw me, whilst all the rest went on with their old measurements and expected them to t
me.

George Bernard Shaw


I . Probability Distributions
A probability distribution is a statistical model that describes the characteristics of a population. Probability distributions can be used in Six Sigma for :
o Predicting the probabilities of occurrence of future events
o Baselining process performance, i.e. understanding how a process is currently performing
o Comparing the performance of multiple vendors (processes), etc.
Probability Distributions are primarily of two categories :

Discrete Probability Distributions, and
Continuous Probability Distributions


Probability Distributions Terminologies
Random Variable : A variable can be called random if it takes unique numerical values with every outcome of an experiment. The value of the random variable will vary from trial to trial as the experiment is repeated. Random Variables can be discrete or continuous. For example :
o A coin is tossed several times. The number of heads, x, is a discrete random variable as it can take only the values 0, 1, 2, etc.
o A process is run several times. The time taken to complete the process each time (called the cycle time) is an example of a continuous random variable as it can take any positive value and not just integer values.
o The Probability Density Function (pdf) gives the probability of the random variable taking a value equal to x, i.e.
f(x) = P(X = x)
o The Cumulative Distribution Function (cdf) is denoted by F(x) and represents the probability of the random variable X such that
F(x) = P(X ≤ x)
o The expected value (or population mean) of a random variable indicates its average or central value. It is a summary value of the variable's distribution. Thus:
o For a discrete random variable, the expected value is given by :
μ = E(X) = Σ xᵢ p(xᵢ)
o For a continuous random variable, the expected value is given by :
μ = E(X) = ∫ x f(x) dx
Discrete Probability Distributions : Some situations call for discrete data, such as;
the no. of applications rejected , the no. of abandoned calls
etc. Such data can be represented by discrete probability distribution i.e a probability
distribution that can take only discrete values. The following are some commonly

used discrete probability distribution functions :


1. Poisson
2. Binomial
3. Hypergeometric
1. A Poisson distribution describes the count of the number of events that occur in
a certain time interval or space. For example, the number of customers arriving every
hour , the number of calls received by a switchboard during a given time period etc. The
Poisson probability density function is given by :

P(x) = (λˣ · e^(−λ)) / x!
where:
λ is the mean,
x is the Poisson-distributed random variable, and
x!, called x factorial, = x·(x−1)·(x−2)·…·1
Note :
i) The mean of the Poisson process is λ
ii) The variance of the process is also λ, that is, σ² = λ, so that the standard deviation is σ = √λ
iii) The Poisson distribution can be used when:
o The no. of possible occurrences is large
o The average no. of occurrences is constant
o The probability of the event is small


Example : A software application averages 8 defects in 8000 lines of code. What is the probability of exactly 2 errors in 4000 lines of code? What is the probability of less than 3 errors?
Solution :
The average number of defects in 4000 lines of code (= λ) = 4
Probability of exactly 2 errors is P(x=2) = (e^(−4) × 4²)/2! = 0.1465
Use of Poisson Tables : An easier way to arrive at the solution is to refer to the Poisson Tables and read off the value relating to λ = 4 & x = 2.
This problem can also be easily solved with the help of MS Excel using the formula :
POISSON(2,4,0) = 0.146525
Note : The 0 at the end of the formula gives the probability density function. For the cumulative distribution function, the zero should be replaced with 1.
o Probability of less than 3 errors
P(x<3) = P(x=0) + P(x=1) + P(x=2)
From the table in Appendix VII ,
P(x<3) = 0.0183 + 0.0733 + 0.1465 = 0.2381
o As seen earlier, this can be solved easily with MS Excel using the following formula :
P(x<3) = POISSON(2,4,1) = 0.2381
Note : The 1 at the end of the formula gives the cumulative distribution function.
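The same Poisson probabilities can be checked without Excel or tables. A minimal Python sketch using only the standard library:

```python
import math

# Poisson pmf: P(x) = lam**x * exp(-lam) / x!
def poisson_pmf(x, lam):
    return lam**x * math.exp(-lam) / math.factorial(x)

# cdf: sum the pmf from 0 up to x
def poisson_cdf(x, lam):
    return sum(poisson_pmf(k, lam) for k in range(x + 1))

# 8 defects per 8000 LOC -> lam = 4 defects per 4000 LOC
print(round(poisson_pmf(2, 4), 4))  # 0.1465, matches POISSON(2,4,0)
print(round(poisson_cdf(2, 4), 4))  # 0.2381, matches POISSON(2,4,1)
```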
2. A Binomial random variable describes the number of successes in a series of trials. For example, the number of students who passed in a class of 50. The binomial probability density function is given as :


(https://improveandinnovate.files.wordpress.com/2014/07/slide211.jpg)
where,
P(x,n,p) is the probability of exactly x successes in n trials with a probability of success equal to p on each trial, provided:
1. the total number of trials is fixed;
2. there are only two possible outcomes for each trial: success and failure;
3. the outcomes of all the trials are statistically independent;
4. all the trials have an equal probability of success.
Note : The mean and variance of the binomial distribution are:
μ = np and σ² = np(1−p)

Example : The success rate of a Black Belt Certification test is 65%! In a class of 10 participants, what is the probability of 7 participants clearing the test? What is the probability of more than 8 participants clearing the test?
Solution :


p = 0.65 , n = 10 , substituting ,
i) P( x=7) = 0.2522
Using the Binomial Tables : An easier way to arrive at the solution is to use the Binomial Table and read off the value relating to p = 0.65, n = 10 & x = 7.
Note : This can also be solved with the help of MS Excel using the formula
BINOMDIST(7,10,0.65,0) = 0.25222
ii) The cumulative binomial probability for x ≤ 8 is given by :
P(x<=8) = P(x=0) + P(x=1) + P(x=2) + … + P(x=8)
Hence, the probability of more than 8 participants clearing the test is :
P(x>8) = 1 − P(x<=8)
= 1 − BINOMDIST(8,10,0.65,1) = 0.086
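The binomial probabilities above can likewise be verified with a few lines of Python (standard library only):

```python
import math

# Binomial pmf: P(x) = C(n, x) * p**x * (1-p)**(n-x)
def binom_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.65
print(round(binom_pmf(7, n, p), 4))        # 0.2522, matches BINOMDIST(7,10,0.65,0)
# P(x > 8) = P(x=9) + P(x=10)
tail = binom_pmf(9, n, p) + binom_pmf(10, n, p)
print(round(tail, 3))                      # 0.086
```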
3. Hypergeometric Distribution : The hypergeometric distribution is a binomial distribution without replacement in a finite sample and is given by :

(https://improveandinnovate.files.wordpress.com/2014/07/slide20.jpg)
where,
p(x,n,m,N) is the probability of exactly x successes in a sample of n drawn from a population of N containing m successes.
Note : Unlike the case of the binomial distribution, in a hypergeometric distribution:
the trials are not independent, and hence
the probability of success changes from trial to trial

Example : A consignment of 25 parts contains 4 defectives. What is the probability that a sample of 8 drawn at random will contain 2 defectives? What is the probability that the sample will contain less than 2 defectives?
Solution :
n = 8, N = 25, m = 4
This problem can be solved with MS Excel using the formula HYPGEOMDIST():
P(x=2) = HYPGEOMDIST(2,8,4,25) = 0.30
P(x<2) = P(x=0) + P(x=1)
= HYPGEOMDIST(0,8,4,25) + HYPGEOMDIST(1,8,4,25)
= 0.19 + 0.43 = 0.62
Note : Excel does not have a cumulative form of the hypergeometric distribution.
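Since Excel lacks a cumulative hypergeometric function, summing the pmf terms by hand is easy to script. A Python sketch of the example above:

```python
import math

# Hypergeometric pmf: P(x) = C(m, x) * C(N-m, n-x) / C(N, n)
def hypergeom_pmf(x, n, m, N):
    return math.comb(m, x) * math.comb(N - m, n - x) / math.comb(N, n)

n, m, N = 8, 4, 25
print(round(hypergeom_pmf(2, n, m, N), 2))   # 0.3, matches HYPGEOMDIST(2,8,4,25)
# no built-in cdf needed: just sum the pmf terms
print(round(hypergeom_pmf(0, n, m, N) + hypergeom_pmf(1, n, m, N), 2))  # 0.62
```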
Continuous Probability Distributions : Continuous random variables can take an infinite number of values within finite or infinite ranges. Examples include : distance travelled, cycle time, etc.
Most commonly used continuous probability distributions are :
Normal Distribution
Lognormal Distribution
Weibull Distribution
Exponential Distribution
1. The Normal Probability Distribution : This is the most widely used probability
distribution in statistical analysis . The probability density function of the normal
distribution is given by:


(https://improveandinnovate.files.wordpress.com/2014/07/slide191.jpg)
where
x is the random variable such that −∞ < x < +∞
μ is the population mean
σ is the population standard deviation
π = 3.14159
e = 2.71828
The normal probability distribution has certain properties that are very useful in our
understanding of the characteristics of the underlying process.

The distribution is symmetric, i.e. its skewness is zero.
For the normal probability distribution, the mean = median = mode.
The curve extends from −∞ to +∞ along the x-axis.
The width (spread) of the curve is a function of the standard deviation σ. The higher the standard deviation, the wider the curve, and vice versa. Refer to Fig. 18.

(https://improveandinnovate.files.wordpress.com/2014/07/slide181.jpg)
Fig. 18 : Normal Distribution Curve
The areas under the curve represent the probabilities of the distribution for various values of the random variable. The mean (μ) divides the curve into two equal halves, i.e. 50% of the area under the curve is to the left of the mean and the other 50% is to the right of the mean. The area under the curve between any two finite limits can be obtained by integrating the density function between the two limits. Some important areas to remember are :
o 68.26% of the area under the curve is within the μ ± 1σ limits. This means 68.26% of the values of a normal random variable fall within the μ ± 1σ limits. Similarly,
o 95.44% of the area under the curve is within the μ ± 2σ limits, and
o 99.72% of the area under the curve is within the μ ± 3σ limits.
Areas under the normal curve between any finite limits, such as the above, can be computed easily either with the help of tables or with statistical software. Refer to Figs. 19a, 19b & 19c.


(https://improveandinnovate.files.wordpress.com/2014/07/slide171.jpg)

The Standard Normal Probability Distribution


The Standard Normal Probability Distribution is a normal probability distribution
of a random variable ( called z ) with a mean of 0 and a standard deviation of 1 .
The random variable z is given by:

(https://improveandinnovate.files.wordpress.com/2014/07/slide161.jpg)
Thus, the z value can be seen as the number of standard deviations that a point x is from the mean μ.
Example : The cycle time of an assembly process is known to be normally distributed with a mean of 28 mins. and a standard deviation of 8 mins. i) What % of assemblies have a cycle time of less than 16 mins? ii) What % of assemblies have a cycle time of more than 42 mins? iii) What % of assemblies have a cycle time between 16 mins. and 42 mins?
Solution :

(https://improveandinnovate.files.wordpress.com/2014/07/slide151.jpg)
Fig. 20 : Areas under the normal Curve
The areas under the normal curve that are of interest are marked in Figure 20. First
the x values need to be converted to their corresponding z-values .
i) z1 = (16 − 28)/8 = −1.5
Hence, P(x < 16) = P(z < −1.5); the −ve sign indicates z is to the left of the mean.
From the standard normal table (refer to any book on statistics for the normal tables), the area under the curve for a z value of −1.5 = 0.0668.
Note : This value can also be obtained with MS Excel using the formula
NORMSDIST(-1.5)
Conclusion : This means apprx. 6.68% of the assemblies have a cycle time of less
than 16 mins.
ii) z2 = (42 − 28)/8 = 1.75
Hence, P(x > 42) = P(z > 1.75)
From the standard normal table , the area under the curve for a z- value of 1.75 =
0.9599.
Note : This value can also be obtained with MS Excel using the formula
NORMSDIST(1.75)
Conclusion : As indicated in the table in Appendix I, this value (0.9599) is the area under the normal curve from −∞ to z (= 1.75). Hence P(z > 1.75) is the area to the right of z, which is equal to (1 − 0.9599) = 0.04. This means approximately 4% of assemblies have a cycle time of more than 42 mins.
iii) The required probability is the area between z1 & z2 (refer Fig. 20). It can be easily computed as:
1 − (area to the left of z1 + area to the right of z2)
Hence, P(16 < x < 42) = P(−1.5 < z < 1.75)
= 1 − (0.0668 + 0.04) = 0.8932
Conclusion :This means approximately 89% of assemblies have cycle time between
16 mins. and 42 mins.
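All three z-value calculations can be reproduced in Python via the error function, which the standard library provides (small differences in the last decimal place come from table rounding):

```python
import math

# Standard normal cdf via the error function:
# Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 28, 8
p_lt_16 = norm_cdf((16 - mu) / sigma)        # P(x < 16), z = -1.5
p_gt_42 = 1 - norm_cdf((42 - mu) / sigma)    # P(x > 42), z = 1.75
print(round(p_lt_16, 4))                     # 0.0668
print(round(p_gt_42, 4))                     # 0.0401
print(round(1 - p_lt_16 - p_gt_42, 4))       # 0.8931
```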
2. The Student's t Distribution : The t-distribution (or Student's t) is used as an approximation for the normal distribution when the sample sizes are small and the population standard deviation is not known. The t-distribution is a family of similar probability distributions. The shape of the distribution depends on what is known as the degrees of freedom, given by n−1, where n is the sample size. Thus, as the sample size increases, the t-distribution approaches normality. Fig. 21 shows typical t-distribution curves for 6 and 90 degrees of freedom. The t-distribution is discussed in detail in The Analyze Phase.

(https://improveandinnovate.files.wordpress.com/2014/07/slide141.jpg)
Fig. 21 : t- distribution
3. The F-Distribution : The F distribution is used in hypothesis testing for comparing two variances. Like the t-distribution, the F distribution is actually a family of distributions. However, unlike the t-distribution, the F distribution is characterized by a pair of degrees of freedom, i.e. a numerator degrees of freedom and a denominator degrees of freedom. The random variable is the F ratio, which is a ratio of two variances. Fig. 22 shows four F Distributions for various numerator and denominator degrees of freedom. Use of the F-Distribution is discussed in detail in The Analyze Phase.

(https://improveandinnovate.files.wordpress.com/2014/07/slide132.jpg)
Fig. 22 : The F Distribution


4. The Exponential Distribution : The exponential distribution is commonly used in reliability engineering. It is used to model items with a constant failure rate, such as electronic & mechanical components, and other applications such as wait times at customer service counters. The exponential distribution & the Poisson distribution are closely related: if the number of events occurring in a fixed interval follows a Poisson distribution with rate λ, then the time between successive events follows an exponential distribution with mean 1/λ, and vice versa. Refer Fig. 23.
The exponential probability density function is given by :

(https://improveandinnovate.files.wordpress.com/2014/07/slide122.jpg)
The cumulative distribution function is given by :

(https://improveandinnovate.files.wordpress.com/2014/07/slide25.jpg)
λ = constant failure rate, for example: no. of failures/hr or no. of failures/cycle
Thus, λ can also be expressed as 1/θ,
where θ is the Mean Time Between Failures (MTBF)


The standard deviation of the exponential distribution is given by


(https://improveandinnovate.files.wordpress.com/2014/07/slide112.jpg)

(https://improveandinnovate.files.wordpress.com/2014/07/slide101.jpg)
Fig 23 : The exponential distribution
Example :
The average life of a component is 1000 hrs. It is known that the time to failure of the component is exponentially distributed. i) What is the probability density at exactly 800 hrs? ii) What is the probability that the component will last at least 800 hrs?
Solution :
Since the average life is 1000 hrs.,
λ = 1/1000 = 0.001
i) The probability density at exactly 800 hrs. = f(x=800) = 0.001 × e^(−0.001×800)
= 0.000449
Note : The same value can be obtained with MS Excel by using the formula
EXPONDIST(800,0.001,0)

ii) Probability of at least 800 hrs. = P(x > 800) = 1 − P(x <= 800)
= 1 − (1 − e^(−0.001×800))
= 0.4493
Note :
This is a case of the cumulative distribution function.
The same value can also be obtained with MS Excel using the formula
1 − EXPONDIST(800,0.001,1). The 1 at the end of the formula indicates a cumulative distribution function.
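A Python sketch of the exponential pdf and survival calculations above:

```python
import math

# Exponential distribution: pdf f(x) = lam * exp(-lam * x)
def expon_pdf(x, lam):
    return lam * math.exp(-lam * x)

# Survival function P(X > x) = 1 - F(x) = exp(-lam * x)
def expon_sf(x, lam):
    return math.exp(-lam * x)

lam = 1 / 1000  # failure rate for a 1000-hr mean life
print(round(expon_pdf(800, lam), 6))  # 0.000449, matches EXPONDIST(800,0.001,0)
print(round(expon_sf(800, lam), 4))   # 0.4493
```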
5. Weibull Distribution : The Weibull Distribution is used to model the time to
failure of products that have a varying failure rate. It is one of the most commonly
used distributions in reliability engineering.
The three parameter Weibull probability density function is given by :

(https://improveandinnovate.files.wordpress.com/2014/07/slide91.jpg)
where
β is the shape parameter, β > 0
η is the scale parameter, η > 0, and
γ is the location parameter.
Figure 24 shows probability density functions of the Weibull distribution for η = 1, γ = 0 and for several values of β.


(https://improveandinnovate.files.wordpress.com/2014/07/slide82.jpg)
Fig. 24 : The Weibull Distribution

The shape parameter β gives the Weibull Distribution its flexibility. For example, at β = 1 the Weibull is identical to the exponential distribution. If β is between 3 and 4, the Weibull distribution approximates the normal distribution. Refer Fig. 24.
The scale parameter, η, determines the range of the distribution.
The location parameter, γ, indicates the location of the distribution along the x-axis.
The cumulative distribution function of a Weibull random variable is given by :

(https://improveandinnovate.files.wordpress.com/2014/07/slide72.jpg)
Example : A typical application of the Weibull Distribution is to describe the time to failure of electronic components. The time to failure of an electronic component in a Television Set is known to follow a Weibull Distribution with a shape parameter of 0.5, a scale parameter of 70 hrs. and a location parameter of 0. What is the probability of the component lasting at least 200 hrs?
Solution
β = 0.5, η = 70 and γ = 0
P(at least 200 hrs.) = P(x > 200)
= 1 − P(x <= 200)
= 1 − 0.8155
= 0.1845
Note : The same value can be obtained with MS Excel using the formula :
1 − WEIBULL(200, 0.5, 70, 1)
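The Weibull cdf (with location parameter γ = 0) is simple enough to compute directly. A Python sketch of the example above:

```python
import math

# Weibull cdf with location gamma = 0: F(x) = 1 - exp(-(x/eta)**beta)
def weibull_cdf(x, beta, eta):
    return 1 - math.exp(-((x / eta) ** beta))

beta, eta = 0.5, 70
p_at_least_200 = 1 - weibull_cdf(200, beta, eta)
print(round(p_at_least_200, 4))   # 0.1845, matches 1 - WEIBULL(200,0.5,70,1)

# sanity check: beta = 1 reduces to the exponential distribution
print(round(weibull_cdf(800, 1, 1000), 4))  # 0.5507 = 1 - exp(-0.8)
```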
6. The Lognormal Distribution : The log-normal distribution is the single-tailed probability distribution of any random variable whose logarithm is normally distributed. If a data set is known to follow a lognormal distribution, transforming the data by taking a logarithm yields a data set that is normally distributed. While it is common to use the natural logarithm (denoted as ln), any base logarithm, such as base 10 or base 2, can also be used to yield a normal distribution. The lognormal distribution is commonly used to model the time to failure of mechanical components, where the failure is fatigue or stress related. Its probability density function is given by :

(https://improveandinnovate.files.wordpress.com/2014/07/slide63.jpg)
μ is called the location parameter, which is also the log mean. It is the mean of the transformed data. σ is called the scale parameter, which is also the log SD. It is the standard deviation of the transformed data. Fig. 25 shows lognormal distributions for several values of σ.

(https://improveandinnovate.files.wordpress.com/2014/07/slide54.jpg)
Fig. 25 : The Lognormal Distribution
Example : The time to failure of a mechanical component is known to follow a lognormal
distribution . The following is the time to failure data in hours for 6 samples tested .
221, 365 , 420 , 310 , 396 , 289
i) What is the probability that a component will last up to 360 hrs? ii) What is the probability that a component will last more than 440 hrs?
Solution :
The table below shows the transformed data using natural logarithm as the base


(https://improveandinnovate.files.wordpress.com/2014/07/slide44.jpg)
i) The probability of a component lasting up to 360 hrs. is given by:
P(x <= 360) = P(ln x <= ln(360)) = P(y <= 5.8861), where y = ln x
z = (y − μ)/σ = (5.8861 − 5.7871)/0.2379 = 0.4161
From the normal dist. tables, P(z <= 0.4161) = 0.66
Note : The same value can be obtained in MS Excel using the formula :
NORMDIST(LN(360), 5.7871, 0.2379, 1)
= 0.6613
ii) P(x > 440) = 1 − P(x <= 440)
= 1 − P(ln x <= ln(440)) = 1 − P(y <= 6.0867)
z = (y − μ)/σ = (6.0867 − 5.7871)/0.2379 = 1.2596
From the normal dist. tables, P(z <= 1.2596) = 0.8961
Hence, the probability of a component lasting more than 440 hrs. is
1 − 0.8961 = 0.1039
Note : The same value can be obtained in MS Excel using the formula :

1 − NORMDIST(LN(440), 5.7871, 0.2379, 1)
= 0.1039
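The whole lognormal example can be reproduced in Python: log-transform the data, compute the mean and sample standard deviation of the logs, then apply the normal cdf:

```python
import math

# Lognormal probabilities via the normal cdf of the log-transformed data.
times = [221, 365, 420, 310, 396, 289]
logs = [math.log(t) for t in times]
mu = sum(logs) / len(logs)
# sample standard deviation (n - 1 denominator)
sd = math.sqrt(sum((y - mu) ** 2 for y in logs) / (len(logs) - 1))

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(mu, 4), round(sd, 4))                         # 5.7871 0.2379
print(round(norm_cdf((math.log(360) - mu) / sd), 3))      # 0.661
print(round(1 - norm_cdf((math.log(440) - mu) / sd), 3))  # 0.104
```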
7. The Chi-Square (χ²) Distribution
If n random values z1, z2, …, zn are drawn from a standard normal distribution, squared, and summed (z1² + z2² + … + zn²), the resulting statistic is said to have a chi-squared distribution with n degrees of freedom. This is a one-parameter family of distributions, and the parameter, n, is the degrees of freedom of the distribution (Refer Fig. 26).

(https://improveandinnovate.files.wordpress.com/2014/07/slide34.jpg)
Fig. 26 : The Chi Square Distribution
The Chi- Square statistic is given by :

(https://improveandinnovate.files.wordpress.com/2014/07/slide26.jpg)
where,
s² = sample variance of a sample of size n
σ² = estimated population variance


The Chi-Square distribution has several applications in inferential statistics, such as interval estimates of variances and standard deviations, and hypothesis tests such as the Goodness of Fit test. Application of the Chi-Square distribution is discussed in The Analyze Phase.
8. Bivariate Distribution :
In all the probability distributions discussed so far, we have seen distributions relating to only one variable. These are called univariate distributions. If multiple variables need to be studied simultaneously, the resulting distribution is called a multivariate distribution. A bivariate distribution is a form of multivariate distribution with two variables. Bivariate data arises from populations in which two variables are associated with each observation. For example : A nutritionist may be interested in studying how a particular diet influences both the height and weight of children. The two variables of interest are height and weight. If both the variables are normally distributed, we have a bivariate normal distribution.
Approximations to Probability Distributions
In many situations, one may find it necessary to approximate the real probability distribution with a simpler distribution. For example, it may be known that an underlying distribution is binomial, but the analyst feels it would be easier to assume a normal distribution and compute the probabilities. This can be done provided the data meets certain conditions. Some of the commonly used approximations are described below :
The Normal approximation to the Binomial : The normal distribution may be used to approximate the binomial if :
np ≥ 5 and n(1−p) ≥ 5
The Poisson approximation to the Binomial : The Poisson distribution may be used to approximate the binomial if :
n is large and p is small (< 0.1) such that np < 5
In this case, λ, the mean of the Poisson distribution = np

The Normal approximation to the Poisson : The normal distribution may be used to approximate the Poisson if :
The mean λ > 10
In this case, the mean (μ) and variance (σ²) of the normal distribution will be equal to λ
The Binomial approximation to the Hypergeometric : The binomial distribution may be used to approximate the hypergeometric if :
n/N is small (< 0.1)
In this case, the probability of success for the binomial distribution, p = m/N,
where m is the number of successes in the population N
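The normal approximation to the binomial can be sanity-checked numerically. A Python sketch with illustrative values of n and p (the ±0.5 continuity correction is a standard refinement, not something stated in the lesson):

```python
import math

# Compare an exact binomial probability with its normal approximation
# (valid here since np = 50 >= 5 and n(1-p) = 50 >= 5).
n, p = 100, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# exact P(46 <= x <= 55) by summing binomial pmf terms
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(46, 56))
# normal approximation with a continuity correction of +/- 0.5
approx = norm_cdf((55.5 - mu) / sigma) - norm_cdf((45.5 - mu) / sigma)
print(round(exact, 3), round(approx, 3))  # the two values agree to ~2 decimals
```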


CSSBB Tutorial Series : Lesson 6


(http://improveandinnovate.wordpress.com/2014/07/12/cssbb-tutorial-series-lesson-6/)
July 12, 2014 (updated July 19, 2014) Lean Six Sigma

Lesson 6 : The Measure Phase Part 2


Topics Covered :
Handling Data
o Sampling techniques
o Data collection
o Basic Statistics & Probability

The only man who behaved sensibly was my tailor: he took my measure anew every time

he saw me, whilst all the rest went on with their old measurements and expected them to t
me.
George Bernard Shaw
I. Handling Data
Collecting and analyzing data is one of the key requirements in the Measure Phase .
The Six Sigma professional is expected to be an expert in data analysis. This section
will deal with some basic concepts of data management and statistics & probability.
Types of Data : Processes produce data of various types. The following chart (
Fig. 1) shows data types with examples.

(https://improveandinnovate.files.wordpress.com/2014/07/slide13.jpg)
Fig. 1 : Data Types
Variable Data : Data that has units of measure ( ratio or interval type) . Example :
Temperature , Pressure. Variable Data is of two types
Continuous Data : Data that can take fractional values. Example : Height , Weight
, Time etc.
Discrete Data : Data that can take only integer values . Also called count data .
Example : No. of guests in a hotel , no. of calls received in a day etc.
Attribute Data : Data that is non-numeric and categorical . It is used to measure
attributes of a product or process. Example : customer satisfaction ratings ( good ,
satisfactory , poor ) , shades of colour in a fabric etc. Attribute Data can be of two types :
Nominal Data : Categorical data without a specific order is called nominal data .
Example : red , blue , black , green etc.
Ordinal Data : Data that has ordered categories but no meaningful intervals
between measurements is called ordinal data. Example : Customer satisfaction
ratings on a scale of 1-5 etc.
Note : The more continuous the data , the easier it is to analyse.
Levels of Measurement : Measurements are categorized into several levels . A
particular level or scale of measurement will define how the data should be
treated mathematically. Following are the scales of measurement :
o Nominal data have no order . They have names / labels to various categories.
o Ordinal data will have an order, but the interval between measurements is not
meaningful.
o Interval data have meaningful intervals between measurements, but there is no
true reference point (zero).
o Ratio data have the highest level of measurement. Ratios between measurements
as well as intervals are meaningful because there is a reference point (zero).
Sampling Methods :
For large populations , the time and cost involved in gathering data may be
prohibitive. Sampling makes it possible to study a limited amount of data and draw
inferences about the underlying population.
Sampling is generally of two types :
1. Judgemental sampling or non-random sampling , where sampling is done based
on one's expertise and opinion.
2. Random or probability sampling where all items / data points in the population
have an equal chance of being chosen. Statistical analysis can be done with this
data and inferences can be made as it is representative of the population.
Random Sampling : Methods


The following are the four commonly used methods of random sampling :
i) Simple Random Sampling : In this method each item in the population has an equal
probability of getting picked. This removes the element of bias in sampling. It is
the simplest form of random sampling and can be used to estimate population
parameters based on summary statistics. The easiest way to do simple random
sampling is with the help of random numbers. Using MS Excel , one can generate a
table of random numbers and use it to select the sample.
ii) Systematic Sampling : In this method items are selected from the population at
pre-defined intervals of time or space . For example : samples drawn from a process
every 30 mins , or every 5th component from the assembly line etc. The disadvantage of
such a sampling method is quite apparent : a bias could be introduced due to
the pre-defined interval. However , it requires less time and consumes fewer resources
compared to the simple random sampling method.
iii) Stratified Sampling : This method involves dividing the original
population into homogeneous groups ( strata) and then drawing a sample at random
from each group ( stratum) . For example : A market survey team would like to know
how well a new brand of shampoo will be received . For this purpose , the team may first
divide the population into various age groups ( teens , middle aged , senior citizens etc. )
. Random sampling from each of the groups will ensure that each group is represented in
the sample. This method helps in reducing sampling effort for populations with large
variances.
iv) Cluster Sampling : This method is similar to Stratified Sampling . The
difference is that stratified sampling is used when variation within groups is small
but variation between groups is large ; the opposite is the case for cluster sampling ,
i.e the groups are essentially similar but within-group variation is large.
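The sampling methods above can be sketched in Python with the standard library's random module. This is a minimal illustration; the population of 100 items and the age-group strata are hypothetical:

```python
import random

random.seed(7)  # fixed seed for a reproducible illustration
population = list(range(1, 101))  # a hypothetical population of 100 items

# i) Simple random sampling : every item has an equal chance of being picked
simple = random.sample(population, 10)

# ii) Systematic sampling : every k-th item starting from a random offset
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# iii) Stratified sampling : draw at random from each homogeneous group (stratum)
strata = {"teens": list(range(13, 20)),
          "middle_aged": list(range(35, 55)),
          "seniors": list(range(60, 80))}
stratified = {name: random.sample(group, 3) for name, group in strata.items()}

print(simple)
print(systematic)
print(stratified)
```

Cluster sampling would instead pick a few whole groups at random and sample heavily within them.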
Data Collection
A formal data collection process should be established by the Six Sigma team. This
process will ensure that the data collected is not time or person dependent. In case
the Six Sigma team has delegated the data collection activity to others , it should
monitor such activities by :

Questioning collectors to verify their understanding of operational definitions.
Verifying collected data with the source , through sampling or ad hoc QC.
Ensuring that pre-defined procedures are followed during the data collection
process.
The following are some important properties of data that one needs to ensure while
collecting data :
Data integrity : Is the data genuine ?
Data precision & accuracy : Using the right operational definitions &
appropriate gauges for collecting data
Data consistency : Comparing apples with apples . The Six Sigma team should
ensure there are no changes in measurement systems , operational definitions ,
metrics etc. during the course of the project.
Time traceability : Data must be traceable to the time it was collected.
Teams may use different templates ( formats ) for data collection depending upon
specific processes or organizational needs . A check sheet is a good example of a
data collection format . Refer Fig. 2

(https://improveandinnovate.files.wordpress.com/2014/07/slide23.jpg)
Fig 2 Check Sheet

Other techniques of data collection include

Data coding
Automating the data collection system
Both techniques help ensure that the data is free from human & gauge errors .

II. Basic Statistics


Terminologies and their Definitions : The table in Fig. 3 shows commonly used
terminologies in statistics and their definitions.

(https://improveandinnovate.files.wordpress.com/2014/07/slide32.jpg)
Summary Statistics : Summary statistics are single numbers used to describe
characteristics of a data set. Some of the most important summary statistics are :
1. Measures of central tendency : The three measures of central tendency are the
mean , median and mode.
Mean ( arithmetic , weighted & geometric) : The arithmetic mean of a set of data
is the average of all the values and is given by the formula
xbar = Σx / n for a sample , and μ = Σx / N for a population ,
where xbar is the sample mean ,
μ is the population mean ,
n is the size of the sample and
N is the size of the population


Median : The median is the middle value in a set of numbers , when the numbers are
arranged in ascending or descending order. If the data set contains an even no. of
observations , the median is the average of the two middle values . The advantage of
the median over the mean is that extreme values in a data set do not affect the value
of the median.
Mode : Mode of a data set is the value that occurs with the highest frequency.
Though not used frequently by Six Sigma experts , this is an important measure of
central tendency. A data set can have one mode , two modes ( bi-modal) or multiple
modes ( multi-modal).
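The three measures of central tendency can be computed with Python's statistics module. This is a quick sketch; the data values are hypothetical:

```python
import statistics

data = [14, 12, 13, 26, 13, 22, 16, 13, 14, 30]  # hypothetical sample

print(statistics.mean(data))       # arithmetic mean -> 17.3
print(statistics.median(data))     # middle value of the sorted data -> 14.0
print(statistics.multimode(data))  # most frequent value(s) -> [13]
```

multimode returns a list, so a bi-modal or multi-modal data set is reported naturally as multiple values.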
2. Measures of dispersion : The three measures of dispersion are Range , Variance
and Standard Deviation.
Range : The range is the difference between the largest and the smallest value in
the data set . The range is easy to compute and hence can sometimes be a good
approximation of the standard deviation. However , when the data contains extreme
values , this can lead to inaccurate estimates of variation in the underlying process.
Variance : The variance is a measure of dispersion that indicates how the data is
spread about the mean of the data set . A high value of variance indicates high
variation in the process i.e the individual values are far away from the mean , and
vice versa. Variance is given by the formula :
(https://improveandinnovate.files.wordpress.com/2014/07/slide24.jpg)
s² = Σ( x - xbar )² / ( n - 1 ) for a sample , and σ² = Σ( x - μ )² / N for a population ,
where s² is the sample variance ,
σ² is the population variance ,
n is the size of the sample and
N is the size of the population
Note : n - 1 is the no. of degrees of freedom , a concept
that will be discussed in subsequent chapters.

Standard deviation : The standard deviation is the square root of the variance and
is given by the formula :
(https://improveandinnovate.files.wordpress.com/2014/07/slide33.jpg) where s is

the sample standard deviation , and
σ is the population standard deviation
Example : Computing Standard Deviation
Given five values : 1 , 2 , 3 , 4 & 5
The standard deviation can be computed using the following table :
(http://improveandinnovate.files.wordpress.com/2014/07/slide71.jpg)
Computing Standard Deviation
Standard Deviation = √(10/5) = √2 = 1.414
Note :
i) Standard deviation ( SD ) is the most commonly used measure of dispersion , as it
carries the same unit of measure as the individual values or the mean.
ii) However , standard deviation values cannot be added up to compute a total SD .
Instead , first the variances should be added ; the square root of this sum gives the
total SD , i.e


(https://improveandinnovate.files.wordpress.com/2014/07/slide43.jpg)
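Both points in the note can be verified with a short sketch in Python; the two standard deviations s1 and s2 used for the addition rule are hypothetical values:

```python
import math
import statistics

# Population standard deviation of the five values from the example above
data = [1, 2, 3, 4, 5]
sd = statistics.pstdev(data)   # sqrt(10 / 5) = sqrt(2)
print(round(sd, 3))            # 1.414

# Standard deviations do not add; variances do
s1, s2 = 3.0, 4.0              # hypothetical SDs of two independent sources
total_sd = math.sqrt(s1**2 + s2**2)
print(total_sd)                # 5.0 , not 3 + 4 = 7
```

Note that pstdev divides by N (population formula), matching the worked example; stdev would divide by n - 1 for a sample.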
The Central Limit Theorem
The central limit theorem ( CLT ) provides a relationship between the shape of the
population distribution & the shape of the sampling distribution of the mean. The
CLT is one of the most important theorems in statistics. The central limit theorem
states that :
Regardless of the shape of the population distribution and the size of sample, the mean of
the sampling distribution of the means will equal the population mean
Regardless of the shape of the population distribution , the sampling distribution will
approach normality as the sample size increases.
Mean of the Sampling distribution of the Means
Given five values : 1 , 2 , 3 , 4 & 5 , the mean of the sampling distribution of the means
for a randomly drawn sample of size n = 3 would include the means of all possible
samples of size n = 3 . ( Fig. 5)

(http://improveandinnovate.files.wordpress.com/2014/07/slide9.jpg)
Fig 5 : Mean of Sampling distribution of means
Thus according to the Central Limit Theorem ,

The mean of the sampling distribution of the means ( approximately 2.987 , per Fig. 5)
will be approximately equal to the population mean of 3
A probability distribution of the means of all possible samples is called the sampling
distribution of the means
Thus , instead of 5 values , if we could consider a larger population and take all
possible samples of size n = 2 , 3 , 4 ... 20 , 40 etc. , the distribution of the means will
approach normality as the sample size increases. Fig. 6 shows such a distribution for
sample size n = 3 for the data displayed in Fig. 5
The standard deviation of the sampling distribution of means is also called the
standard error of the mean and is given by :
(https://improveandinnovate.files.wordpress.com/2014/07/slide53.jpg)
i.e standard error = σ / √n ,
where σ is the population standard deviation and n is the sample size.

(http://improveandinnovate.files.wordpress.com/2014/07/slide111.jpg)
Fig 6: Sampling Distribution of Means for n=3
Note : The Central Limit Theorem allows us to make inferences about the population
parameters based upon sample statistics and hence is a very powerful theorem.
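The two CLT statements can be checked directly by enumerating every possible sample of size n = 3 from the five values. This sketch assumes sampling with replacement for simplicity, so the textbook standard error formula σ/√n applies exactly:

```python
import itertools
import statistics

population = [1, 2, 3, 4, 5]
mu = statistics.mean(population)       # population mean = 3
sigma = statistics.pstdev(population)  # population SD = sqrt(2) ≈ 1.414

# Enumerate every possible sample of size n = 3, drawn with replacement
n = 3
sample_means = [statistics.mean(s)
                for s in itertools.product(population, repeat=n)]

# Mean of the sampling distribution equals the population mean ...
print(statistics.mean(sample_means))    # ≈ 3.0
# ... and its SD equals the standard error sigma / sqrt(n)
print(statistics.pstdev(sample_means))  # ≈ 0.8165
print(sigma / n**0.5)                   # ≈ 0.8165
```

Sampling without replacement (as in Fig. 5) gives the same mean but a slightly smaller standard error, due to the finite population correction.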
3. Skewness : A distribution is said to be skewed when the curve is not symmetrical
about its central tendency . Distributions can be right skewed ( positively skewed) or
left skewed ( negatively skewed). Refer to Fig. 7 for examples of skewed
distributions.
4. Kurtosis : Kurtosis is a measure of the peakedness of a curve. Two curves may

have the same central tendency and dispersion properties , but one may be more
peaked than the other. The two curves are then said to have different degrees of
kurtosis.
Graphical Methods : Graphical methods are an easy to understand approach
to data analysis and hence are commonly used for displaying descriptive statistics.
However , making some of the graphs manually can be quite cumbersome and time
consuming. Hence , use of software is recommended.
1.Histograms : A histogram is a bar graph of raw system data. It displays basic
information about the data such as central location, width of spread, and shape (Fig
7) . The X- axis of the histogram shows the scale of measures which are divided into
several intervals. The Y- Axis shows frequency of occurrence of data . Each bar of the
histogram indicates the frequency of data in the corresponding class interval. The
following figure ( Fig. 7) gives several possible shapes of a histogram.

(http://improveandinnovate.files.wordpress.com/2014/07/slide121.jpg)
Each histogram represents a unique data distribution. For example ,
A symmetrical histogram indicates data is distributed equally about the mean. A
symmetrical histogram may also indicate a normal distribution ( mean = median = mode)
A bi-modal histogram indicates that the data may have come from two different
sources or populations ( ex. : data from two machines / two locations etc. )
A positive skew occurs when the mean > median ( ex. : cycle times , surface finish
etc)
A negative skew occurs when the mean < median ( ex. : yields , cash flows etc)
A random histogram indicates an unpredictable / uncontrolled process .


2. Scatter Diagram : The scatter diagram is a graph showing pairs of plotted values of
two factors , to examine whether the factors are related. The scatter diagram is a
basic form of regression analysis which is used to establish a y = mx + c
relationship. A scatter diagram may show :
A positive correlation : As x increases , Y increases ( Fig 8a)
A negative correlation : As x increases , Y decreases ( Fig. 8b)
A zero correlation : As x increases , Y remains constant ( Fig. 8c)

(http://improveandinnovate.files.wordpress.com/2014/07/slide131.jpg)
Note : The Scatter Diagram does not indicate a cause and effect relationship . It only
shows the direction of the relationship between two variables.

Fig 8 : Scatter Diagrams


3. Box and Whisker Plot : A box and whisker plot is used to understand data
distribution based on the median ( as compared to the histogram that shows how
data is distributed about the mean.)
Example : The following is the data for time taken (in days) to process 20 loan
applications ( also called Turnaround Time or TAT)
14 12 13 26 13 22 16 13 14 30 33 21 33 28 26 38 20 30 26 23

A box and whisker plot for the data is shown in Fig 9.

(http://improveandinnovate.files.wordpress.com/2014/07/slide14.jpg)
Fig. 9. : Box and Whisker Plot
Lower Whisker : It extends to the lowest value within the lower limit. Lower limit
= Q1 - 1.5 ( Q3 - Q1 )
First Quartile ( Q1) : 25% of the data values are less than or equal to this value.
The Median : 50% of the observations are less than or equal to it.
Third Quartile ( Q3) : 75% of the data values are less than or equal to this value.
Upper Whisker : It extends to the highest data value within the upper limit.
Upper limit = Q3 + 1.5 ( Q3 - Q1 )
Note : A * beyond the whiskers indicates outliers.
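The quartiles and whisker limits for the TAT data can be computed as below. This is a sketch in Python; note that quartile conventions differ slightly between software packages, so Minitab or Excel may report marginally different quartile values:

```python
import statistics

# Turnaround times (days) for the 20 loan applications from the example
tat = [14, 12, 13, 26, 13, 22, 16, 13, 14, 30,
       33, 21, 33, 28, 26, 38, 20, 30, 26, 23]

q1, median, q3 = statistics.quantiles(tat, n=4)  # quartile cut points
iqr = q3 - q1
lower_limit = q1 - 1.5 * iqr
upper_limit = q3 + 1.5 * iqr

print(q1, median, q3)  # 14.0 22.5 29.5
outliers = [x for x in tat if x < lower_limit or x > upper_limit]
print(outliers)        # [] : no points beyond the whisker limits here
```

Since no value falls outside the limits, a box plot of this data would show no * outlier markers.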
5. Run Charts : Run charts are simple trend charts useful in monitoring process
variation with respect to time. Fig. 10 below is an example of a run chart.


(http://improveandinnovate.files.wordpress.com/2014/07/slide15.jpg)
Fig. 10 : Example of a Run Chart
6. Normal Probability Plot : The normal probability plot is a graphical technique for
assessing whether or not a data set is approximately normally distributed. The data
are plotted against a theoretical normal distribution in such a way that the points
should form an approximate straight line. Values lying away from this line are
indications of departures from normality . Refer Fig. 11 . These plots can be made
easily with the help of standard statistical software.

(http://improveandinnovate.files.wordpress.com/2014/07/slide16.jpg)
Fig. 11 : Normal Probability Plot
The straight line in blue is the best fit line connecting the points. If most points fall
on or around this line , we can conclude that the data represents a normal
distribution. The curved lines on either side of the straight line represent a 95%
confidence interval for the best fit . The concepts of normality assumptions and
confidence intervals are discussed in more detail in Chapter 6 : The Analyze Phase.
Basic Probability
Probability is the chance that an event will happen. Probabilities are expressed as
fractions such as 1/4 , 2/5 or as decimals such as 0.25 , 0.1 etc. The probability of an
event occurring is given by the formula :
P(event) = No. of successful outcomes of the event / Total no. of possible outcomes
Probability Terminologies & Rules
o Events are said to be mutually exclusive if only one of the events can happen at a
time ; otherwise , the events are said to be not mutually exclusive. Refer Figs. 12a & 12b. For

example , the toss of a coin will result in either a head or a tail , not both. Hence ,
P(Head) = P(Tail) = 0.5 .
For mutually exclusive events : P( A or B) = P(A) + P(B)
For events that are not mutually exclusive : P( A or B) = P(A) + P(B) - P( A and B )

(http://improveandinnovate.files.wordpress.com/2014/07/slide17.jpg)
Fig. 12a : Mutually exclusive events
Fig. 12b : Mutually non- exclusive events


Events are said to be independent if the occurrence of one event does not affect
the probability of the occurrence of the other. Independent events can have three
types of probabilities :
Marginal probability is the simple probability of the occurrence of an event. For
example : the probability of obtaining a head or a tail on every toss of a fair coin is
always 0.5
Joint probability is the probability of two or more events occurring together and is
given by the product of their marginal probabilities , i.e P( AB) = P(A) x P(B)
Conditional Probability is the probability of an event occurring if another event


has occurred preceding it , i.e P(B/A) , read as the probability of event B given that A
has occurred , = P(B) since A & B are independent events.

If the events are not independent ( i.e they are dependent) :
Conditional probability is given by P(B/A) = P(BA) / P(A)
Hence joint probability is given by P(BA) = P(B/A) x P(A)
Example : A box contains a total of 266 balls of four different colours ( pink , yellow ,
black and blue) and each colour also comes in three different patterns ( dotted , striped &
plain) . A contingency table ( Refer Fig. 13) can be made to describe the composition
of the box.

(http://improveandinnovate.files.wordpress.com/2014/07/slide18.jpg)
Fig. 13 : Contingency Table
What is the probability that a ball drawn at random will be pink ?
o Solution : P(pink) = 66/266 =0.248
What is the probability that a ball drawn at random will be striped ?
o Solution : P(striped) = 57/266 = 0.214
What is the probability that a ball drawn at random is yellow and plain?

o Solution : P ( yellow and plain) = 34/ 266=0.128


What is the probability that a ball drawn at random will be black or blue ?
o Solution : This is a typical case of mutually exclusive events. Hence, P(black or
blue) = P (black) + P(blue) = 69/266 + 54/266 =0.462
What is the probability that a ball drawn at random is yellow or plain ?
o Solution : These events are not mutually exclusive
o P(yellow or plain) = P(yellow) + P(plain) - P(yellow and plain)
= 77/266 + 104/266 - 34/266 = 0.553
A ball drawn at random is found to be black. What is the probability that the ball is
dotted ?
o Solution : This is a case of conditional probability. Hence , P ( dotted , given the ball is
black) = P( dotted/black) = P( dotted and black) / P(black) = (36/266) / (69/266) = 36/69 = 0.522
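The worked probabilities can be reproduced from the contingency-table counts. This sketch includes only the counts actually used in the example, and uses exact fractions to avoid rounding along the way:

```python
from fractions import Fraction

total = 266
# Counts taken from the contingency table in the worked example
pink, yellow, black, blue = 66, 77, 69, 54
plain = 104
yellow_and_plain = 34
dotted_and_black = 36

def p(count):
    """Probability of drawing a ball from the given cell or margin."""
    return Fraction(count, total)

print(float(p(pink)))                                     # ≈ 0.248
print(float(p(black) + p(blue)))                          # mutually exclusive ≈ 0.462
print(float(p(yellow) + p(plain) - p(yellow_and_plain)))  # not mutually exclusive ≈ 0.553
print(float(p(dotted_and_black) / p(black)))              # conditional ≈ 0.522
```

Note how the 266 denominator cancels in the conditional probability, leaving simply 36/69.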


CSSBB Tutorial Series : Lesson 5


(http://improveandinnovate.wordpress.co
m/2014/07/11/cssbb-tutorial-serieslesson-5/)
July 11, 2014July 17, 2014 Lean Six Sigma

Lesson 5 : The Measure Phase Part 1



Topics Covered:
Process Characteristics
Introduction to Measure Phase
Mapping Process Flows : The SIPOC & Value Stream Mapping
Analyzing Business Processes

The only man who behaved sensibly was my tailor: he took my measure anew every time
he saw me, whilst all the rest went on with their old measurements and expected them to fit
me.

George Bernard Shaw


1. Process Characteristics
Introduction to Measure Phase : The Measure Phase of the DMAIC is required
for validating the data that was used in the Define Phase to establish the
improvement opportunity. Key activities in the Measure Phase are :
o Establish & Measure Project Ys
o Plan for Data Collection
o Validate Measurement System
o Establish Baseline Sigma
To begin with , key process elements and their relationships need to be identified .
In Chapter 2 : Business Process Management & Metrics we learnt the basic
concepts of business processes. In this section , we will discuss how to identify key
process variables such as inputs and outputs and how they impact the process
outcome.
The SIPOC ( Supplier - Input - Process - Output - Customer ) is a simple tool that is
useful in identifying process variables and their relationships. The SIPOC is a high
level process map and is used by Six Sigma project teams to obtain a macro picture of

the process as it flows from the suppliers to the customers. The benefits of using SIPOC
are :
It provides a birds-eye view of the process that can be easily communicated to
all stakeholders of the project
Helps in identifying key process input variables ( KPIVs) and key process output
variables ( KPOVs) and their linkages.
The SIPOC helps in clearly defining the project scope , i.e the boundary limits.
Steps in making a SIPOC
The following is the sequence of steps in constructing a SIPOC :
Refer to the Project Charter
Identify the process (es) that are to be improved
Write down the first and last steps of the process ( i.e the project scope)
Fill in the intermediate steps ( remember the SIPOC is only a macro picture of the
process , hence it is enough to describe the process in 6-10 high level steps )
For each process step :
o Identify the Output (s)
o Identify all the Customers that will receive the Outputs of this step.
o Identify the Input (s) required for the Process to function properly.
o Identify the Supplier (s) of the Inputs that are required by this step.
o Optional : Define customer requirements in terms of metrics , target / tolerance
for outputs
Review / validate the SIPOC with process owners / process experts.
An example of a SIPOC is shown in Fig. 1 .


(https://improveandinnovate.files.wordpress.com/2014/07/slide22.jpg)
II. Analyzing Business Processes
Process performance with regard to efficiency & effectiveness can be evaluated with
the help of various types of process flow diagrams . An important activity during
process analysis is to identify the non value adding activities ( NVAs) in the process .
Key process performance indicators that can be used to study process constraints
and identify non-value adding activities are :
i) Work In Process ( WIP) : WIP refers to all products ( material , information etc)
that are waiting to be completed. WIP includes components and assemblies that are
currently being worked on as well as semi-finished products and assemblies that are
waiting between work centers. Higher WIP inventories indicate process inefficiencies
. This means resources ( raw material , labour , space etc ) have been locked up
and cannot be encashed until the process is completed.
ii) Throughput : Throughput is the average rate at which the process
produces output . Example : The throughput rate of a garment manufacturing line
could be 120 pieces /hr.
iii) Cycle Time : Cycle time is the total time taken by the process to complete
one cycle of operation. This is the reciprocal of the throughput rate.


Example : if the throughput rate = 120 pieces/hr , the cycle time is the time taken to
produce one piece = 1/120 hr , i.e 0.5 mins.


iv) Takt Time : This is a commonly used metric in the Lean Methodology to
estimate how much time is available to produce a product . This , of course , depends
on the rate at which the customer pulls product from the assembly line ( i.e the rate of
customer demand) . Hence , takt time is given by the formula :
Takt Time = ( Net Time Available ) / ( Rate of Customer Demand )
Example :
o Net available time in a shift = 420 mins. ( 8 hr shift minus 30 min lunch break and 2 x 15
mins. breaks for tea etc.)
o Customer demand = 210 units / shift
o Hence Takt Time = 420 / 210 = 2 mins. This means the maximum allowable time to
produce one piece is 2 mins. If the plant takes more than this , deliveries will be affected . On
the other hand , if the plant takes less time than this , it could result in excess inventories.
Note : The takt time & cycle time are related as shown :
Cycle time / takt time = min # of operators reqd.
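The takt time example can be sketched as follows; the 6-minute cycle time used for the operator calculation is a hypothetical value, not from the text:

```python
import math

def takt_time(net_available_time, customer_demand):
    """Maximum allowable time per unit needed to just meet customer demand."""
    return net_available_time / customer_demand

net_time = 8 * 60 - 30 - 2 * 15   # 8 hr shift minus lunch and two tea breaks = 420 mins
demand = 210                       # units per shift
takt = takt_time(net_time, demand)
print(takt)                        # 2.0 mins per piece

# Minimum number of operators = cycle time / takt time, rounded up
cycle_time = 6.0                   # hypothetical: 6 mins of work content per piece
min_operators = math.ceil(cycle_time / takt)
print(min_operators)               # 3
```

Rounding up matters: 2.1 operators' worth of work still requires 3 people to avoid falling behind demand.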
III. Process Analysis Tools
A number of tools are used for describing and analyzing processes . Some of the
commonly used tools are :
1. Process Maps / Flow Charts : A process map is a graphical representation of a
process as it actually operates ( also called the as-is process map ) .
Frequently , the words flowcharts and process maps are used
interchangeably , the difference being the additional information regarding input and
output variables that is displayed on process maps. The benefits of process
mapping are :
1. A graphical representation of a process is an easy to follow approach to
documenting process flows. This is done using standard symbols .
2. The process map identifies critical input variables and output variables
3. Process Maps facilitate easy study / audit of processes
4. Standard Operating Procedures ( SOPs) and work instructions can be derived
from the process map.

Following are some of the commonly used process map symbols ( Refer Fig. 2)

(https://improveandinnovate.files.wordpress.com/2014/07/slide31.jpg)
Process Map Input Classification :
Inputs in a process can be classified as :
Controllable ( C) : Also known as knob variables , the values of these inputs can
be changed to obtain the desired output. Examples are temp. , pressure , rpm etc.
Uncontrollable ( N) : Also called noise variables , these are nuisance factors and
cannot be changed / controlled easily. Examples are ambient temp. , humidity etc.
Critical ( X) : These are inputs which have a significant impact on the output of
the process.
Standard Operating Procedures ( SOP) : Inputs that have documented methods
/ procedures for operation. Examples include : selecting participants , taking
feedback etc.
Steps for creating Process Maps : Following are some simple steps for creating
process maps ( Fig. 3) :


Define the process boundaries ( first step and last step of the process)
Observe the process in operation
Using standard symbols , draw the process map in proper sequence.
List all outputs & inputs
Classify the inputs
Get the process map validated by an expert / process owner. This is called the
As-Is process map.


(https://improveandinnovate.files.wordpress.com/2014/07/slide41.jpg)
Fig. 3 : Process Map for a Training Program
2. Value Stream Map : A Value Stream (VS) is all the actions ( both value-added
and non-value-added) currently required to bring a product through the main flows
essential to every product ( Refer Learning to See by Rother & Shook).
Value Stream Mapping is a Lean technique used to analyze the flow of materials
and information that are required to produce and deliver a product or service to the
consumer. This technique is also known as Material and Information Flow
Mapping .
Value Stream Mapping is commonly used as part of a Lean Management initiative to
reduce process cycle time . Similar to flow charts / process maps , value stream
maps use standard symbols to depict the flow of material / information in the value
streams.
Apart from its use in manufacturing related processes , Value Stream Mapping is
also frequently used in strategic sourcing , supply chain management , new product
development , healthcare and software development . An example of a Value Stream
Map is shown in Fig. 4.


(https://improveandinnovate.files.wordpress.com/2014/07/slide51.jpg)
Fig. 4 : Current State Value Stream Map : An example
A key application of the value stream map is to identify wasteful activities ( called
muda in Japanese) in the value stream and try to eliminate / reduce them. This will
result in reducing the cycle time and cost of running the process .
From the value stream map in Fig. 4 , we observe that the total production lead time
is 22 days compared to a total process cycle time of only 81 secs ! For details on how to
create value stream maps , the reader is advised to refer to Learning to See by Mike
Rother & John Shook.
The As-Is value stream map is also called the Current State value stream map.
Analysis and elimination of wasteful activities will result in a Future State map.

3. Spaghetti Diagram : The spaghetti diagram is a type of flowchart that uses a
continuous line to trace the path of a part / document through all phases of the
process. It is used for tracking the movement of material and people
within the plant / office layout. A spaghetti diagram exposes unnecessary
movement, and action can be taken to modify the layout / operations to reduce waste
(Fig. 5).

(https://improveandinnovate.files.wordpress.com/2014/07/slide61.jpg)
Fig 5 : Spaghetti Diagram : An example
Suggested Reading for Value Stream Mapping :
Learning to See by Rother and Shook


CSSBB Tutorial Series : Lesson 4


(http://improveandinnovate.wordpress.com/2014/07/10/cssbb-tutorial-series-lesson-4/)
July 10, 2014 | Lean Six Sigma

Lesson 4 : Six Sigma Improvement Methodology ( The DMAIC )

I . Introduction to the DMAIC methodology


According to Dr. Joseph Juran, "all improvement takes place project by project, and
in no other way."
The Six Sigma improvement methodology follows a modified Plan-Do-Check-Act
(PDCA) approach. Improvement is carried out using a structured five-phased
method called the DMAIC (Define, Measure, Analyze, Improve, Control).
Fig. 1 below is a summary of the activities / tasks required to be completed in each of
the five phases :

(https://improveandinnovate.files.wordpress.com/2014/07/slide12.jpg)
II. Identifying Improvement Opportunities
The Define Phase is the first phase of the DMAIC methodology. This phase involves
activities relating to the identification of improvement opportunities in the organization,
framing problem statements & improvement goals, forming teams and defining
project schedules.
What is improvement ?
Identification of improvement opportunities begins with answering the question
"what is improvement?" Every member of the team should interpret the
term "improvement" in the same way.

A clear understanding of the term "improvement" comes from the famous Juran
Trilogy Diagram (Fig. 2) of planning, control & improvement (Refer Quality
Planning & Analysis, Dr. Joseph Juran & Frank Gryna, Third edition).

(https://improveandinnovate.files.wordpress.com/2014/07/slide21.jpg)

The Juran Trilogy diagram indicates that chronic wastes are designed into a
process during process planning. All such chronic wastes translate into
improvement opportunities in a business. For example :
8% scrap generated during a machining process
11% rework in end-of-line assembly processes
6% absenteeism in an organization, etc.
The sum total of all such chronic wastes in an organization can be mind-boggling !
Elimination of chronic wastes in an organization will result in i) reduced product
cost, ii) reduced defects or iii) both.
Improvement refers to activities directed at reducing / eliminating chronic wastes in
processes. An analysis of all chronic wastes will lead to the identification of all possible
improvement opportunities in the organization. These are also called the "Pain
Areas" of the organization.
Each improvement opportunity can be translated into a Six Sigma Improvement
Project. Prioritization of projects can be done with respect to financial impact
and improvement in customer satisfaction levels.

Voice of the Customer (VoC) is a trigger for many Six Sigma improvement projects
in organizations. Both Reactive and Proactive VoC provide useful information as to
where the organization should focus its Six Sigma efforts. However, collecting the
VoC is a complex process and needs meticulous planning and execution.
The VoC Process : Following are the steps for capturing & analyzing VoC :
1. Determine the customer segments : Customer segmentation answers the
question "Who are our customers?" The targeted segment can then be approached
for collecting the VoC data.
Segmentation can be done :
1. By Geography : A select few regions / locations contribute the majority of
revenues.
2. By Demography : A few sections of society (age / class / income groups etc.)
contribute a significant percentage of sales.
3. By Value : A few customers contribute the majority of sales revenue.
4. By Volume : A majority of product despatches go to a small proportion of
customers.
5. By Contribution : A few customers contribute significantly to profits.
6. By Potential : A few customers will be future "A-class" customers.
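Segmentation by value (or by volume or contribution) is essentially a Pareto cut: sort customers by the measure of interest and flag the vital few who together account for, say, 80% of the total. A small sketch with made-up revenue figures:

```python
# Pareto-style "By Value" segmentation: find the vital few customers who
# together contribute ~80% of total revenue. All figures are illustrative.
revenues = {"A": 500, "B": 300, "C": 120, "D": 50, "E": 20, "F": 10}

total = sum(revenues.values())
cumulative, vital_few = 0, []
for customer, rev in sorted(revenues.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += rev
    vital_few.append(customer)
    if cumulative / total >= 0.80:
        break

print(vital_few)  # the segment to approach first for VoC collection
```

Here customers A and B alone cross the 80% threshold, so VoC collection would start with them.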
2. Capture the Voice of the Customer (VoC) : This is, undoubtedly, the most difficult
part of the VoC process ! The Voice of the Customer is a reflection of the needs and
expectations of the customers. VoC capture can be carried out using standard
templates & tools, the most common being :
1. Surveys : This involves sending questionnaires and requesting the targeted
customers to respond by filling them out. The questionnaire can be administered
by mail or telephone.
2. Focus Groups : A small group (8-10) of qualified people who engage in
discussions with the help of an experienced facilitator. The objective of the focus
group is to provide feedback on what they feel about a new product, service,
package, advertisement etc. Example : A focus group can be used to design
questionnaires for a survey !
3. Interviews : This is one of the most powerful feedback-gathering techniques. In
this method, a member of the survey team meets the customer and gathers
his/her requirements through a face-to-face interview. This helps generate
useful qualitative information that is not possible to capture through
mailers / telephonic interviews. For larger sample sizes, this method can become
time-consuming and expensive.
4. Observation : A hands-on technique where select employees of the organization
try to understand customer behaviour with respect to product usage. This
technique originated in Japan (Murmur). In this method, employees of the
organization observe how customers are using their products
and whether they face any problems. Accordingly, the product can be
redesigned.
3. Analyze the Voice of the Customer (VoC) and translate it into Critical to Quality
(CTQ) Requirements
Quite often, the VoC is in the customer's language and cannot be used, as is, by Six
Sigma teams for product / process improvement. The VoC needs to be analyzed and
translated into a technical language, which is nothing but a set of product /
process features that will meet the customer's requirements. These are called Critical
to Quality requirements (CTQs).
The two most commonly used tools in VoC analysis are i) the Kano Model and ii)
Quality Function Deployment (QFD).
i) Kano Model : This is a simple tool developed by Prof. Noriaki Kano to
analyze customer preferences with regard to product features. The Kano model
offers insight into customers' perception of product features. It helps
product development teams focus on a few critical features. In this model (Fig. 4),
customer preferences for product features are classified into four categories as
shown :
1. Dissatisfiers : Basic requirements, expected features or
characteristics of a product or service. These needs are typically unspoken. If
these needs are not fulfilled, the customer will be dissatisfied.
2. Satisfiers : Performance requirements. Standard characteristics that increase or
decrease satisfaction by their degree (cost/price, ease of use, speed). These needs
are typically spoken.
3. Delighters : Excitement requirements. Unexpected features or characteristics
that impress customers and earn the product extra credit. These needs are also
typically unspoken.
4. Indifferent Needs : "I don't care." The customer is unwilling to pay for features
that do not provide value. Such features only increase the cost of the
product / service.
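In practice, the category of a feature is usually derived from a paired Kano questionnaire: one "functional" question (how would you feel if the feature were present?) and one "dysfunctional" question (how would you feel if it were absent?). The sketch below is a simplified three-level reduction of the standard five-level Kano evaluation table, for illustration only:

```python
# Simplified Kano classification from paired (functional, dysfunctional)
# answers. Answers here: "like", "neutral", "dislike". The full Kano method
# uses a 5x5 evaluation table; this 3x3 reduction is illustrative only.
KANO_TABLE = {
    ("like",    "dislike"): "Satisfier",     # wants it, unhappy without it
    ("like",    "neutral"): "Delighter",     # pleased if present, not missed
    ("neutral", "dislike"): "Dissatisfier",  # expected; absence hurts
    ("neutral", "neutral"): "Indifferent",   # adds cost, no perceived value
}

def classify(functional: str, dysfunctional: str) -> str:
    # Contradictory answer pairs fall outside the table ("Questionable").
    return KANO_TABLE.get((functional, dysfunctional), "Questionable")

print(classify("neutral", "dislike"))  # a basic requirement: Dissatisfier
```

A feature the customer merely expects ("neutral" if present, "dislike" if absent) lands in the Dissatisfier row, matching category 1 above.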

(https://improveandinnovate.files.wordpress.com/2014/07/slide4.jpg)
III. Voice of the Customer & CTQ Flow Down
Voice of the Customer (VoC) is a term used to describe customers' (internal &
external) needs and their perceptions of a product or service. For example : "your
products are good, but you take too long to deliver."
VoC is of two types (Refer Fig. 3) :

Reactive : Feedback received from customers when they experience
problems or failures in the product / service.
Proactive : Feedback received when the organization proactively meets
customers to demonstrate that it cares for them, and the customers indicate
that some feature of the product or service can be improved.

(https://improveandinnovate.files.wordpress.com/2014/07/slide3.jpg)
ii) Quality Function Deployment (QFD) :
Quality Function Deployment (QFD) is a structured approach to defining customer
needs or requirements and translating them into specific product / service features
(CTQs). Following is a general template of a QFD (also called the House of Quality),
Fig. 5 :

(https://improveandinnovate.files.wordpress.com/2014/07/slide5.jpg)
Steps For Making A QFD :
1. State the QFD objective clearly
2. Identify the customer segment
3. Capture customer needs
4. Rank the customers' importance of needs
5. Capture customers' rating of how well competitors are meeting these needs
(Benchmarking)
6. Identify functional requirements to meet the needs
7. Identify the co-relationship between customer needs & functional requirements
(Scale 1, 3, 9 or equivalent)
8. Cross-multiply customer importance ratings & co-relationship ratings
9. Add values across columns to get the total score for each functional requirement.
Prioritize the vital few product features : the first set of CTQs.
10. Assign targets / tolerances, if available, for all functional features
11. Identify the direction of improvement for each product feature (more is better,
less is better etc.)
12. Identify co-relationships between functional features (roof of the house)
Fig 6 shows a completed QFD template for a Television Remote Control unit
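Steps 7-9 above amount to a weighted sum: each functional requirement's total score is the sum, over all customer needs, of (customer importance x relationship rating). The needs, requirements and ratings below are invented for illustration and are not taken from the remote control example in Fig. 6:

```python
# QFD scoring (steps 7-9): total score per functional requirement =
# sum over needs of (customer importance x relationship rating 1/3/9).
# All needs, requirements and ratings below are illustrative.
needs = {"easy to hold": 5, "buttons respond": 4, "long battery life": 3}

# Relationship of each functional requirement to each need (0 = none).
relationships = {
    "case width (mm)":      {"easy to hold": 9, "buttons respond": 0, "long battery life": 0},
    "key actuation force":  {"easy to hold": 1, "buttons respond": 9, "long battery life": 0},
    "standby current (mA)": {"easy to hold": 0, "buttons respond": 0, "long battery life": 9},
}

scores = {
    req: sum(needs[n] * rating for n, rating in rel.items())
    for req, rel in relationships.items()
}

# Highest-scoring requirements are the first CTQ candidates (step 9).
for req, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:4d}  {req}")
```

Here "case width" scores 45 and "key actuation force" 41, so those two would be prioritized as CTQs ahead of "standby current" (27).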

(https://improveandinnovate.files.wordpress.com/2014/07/slide6.jpg)
QFD Example

IV. The Project Charter


A Project Charter (Fig. 8) is a written document that defines the project's purpose
and its deliverables. It is a roadmap that defines the key issues to be addressed by the
project. Once approved, the project charter provides the required legitimacy for the
team to work on the project.
Following are the critical elements of a Project Charter :

Business Case : The need to do the project. Who wants the project and why ?
Problem and Goal Statement : The project problem and improvement goal in
distinct and measurable terms.
Problem and goal statements should be distinct and measurable. Example :
"Currently our service Turn Around Times (TATs) are poor. The goal is to make
them best-in-class." Such statements are vague and should be avoided. A better
way of defining the same problem would be : "Currently our average TAT is
18 hrs. The goal is to reduce the average TAT to 4 hrs within the next 6 months."
Problem and goal statements should follow what is popularly known as the
SMART framework :
Specific

Measurable
Achievable
Relevant and
Time Bound
Note : A Project can have multiple goals
Project Performance Measures : As discussed in the earlier section, the project
charter should define metrics for measuring the project outcome. Use of
qualitative statements such as "good", "poor", "significant" etc. will make it
difficult to monitor project progress and measure its outcome. The impact of
improvement should preferably be translated into monetary terms (reduced cost of
operation, increased revenue etc.).
Project Scope and Boundaries : Scoping the project is a critical element. With
limited time and resources, the project team should set goals that are practical &
achievable. What's included and what's excluded should be clearly stated, if
possible with the help of a process map.
Project Team : The project charter should document the names, designations and
certifications (MBB, BB, GB, YB etc.) of team members, facilitators, coaches
and subject matter experts who are likely to be involved in the project.
Project Timelines : As in any other project, time is of the essence in a Six Sigma
improvement project. The Six Sigma team should define clear timelines for
project completion. This involves breaking down the project into detailed
activities (also called a Work Breakdown Structure). Graphical tools such as Gantt
charts and PERT / CPM techniques are commonly used to communicate activity-wise
schedules. At the least, the project charter should clearly indicate the start
date & end date of the project and the completion dates for each of the DMAIC phases
(also called Toll Gates).
V. Project Scheduling & Monitoring
The Gantt Chart is a useful tool for displaying schedules of the critical activities of a
project. It resembles a bar chart, where each bar indicates the start & finish dates of an
activity in the project (Fig. 7). An important requirement for making a Gantt chart is
to break down the project into detailed activities / tasks (called the work breakdown
structure).
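The schedule behind such a Gantt chart can be sketched as a simple computation: given a project start date and a duration for each DMAIC phase, derive the start and finish (toll gate) dates. The durations and start date below are purely illustrative:

```python
# Derive start/finish dates for each DMAIC phase (toll gates) from a
# project start date and per-phase durations. All figures are illustrative.
from datetime import date, timedelta

phases = [("Define", 14), ("Measure", 21), ("Analyze", 21),
          ("Improve", 28), ("Control", 14)]   # durations in days

start = date(2014, 8, 1)
for name, days in phases:
    finish = start + timedelta(days=days)
    print(f"{name:8s} {start} -> {finish}")
    start = finish                            # phases run sequentially

print("Project end:", start)
```

Each printed row corresponds to one bar of the Gantt chart; the phase finish dates are the toll gate dates the charter should record.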

(https://improveandinnovate.files.wordpress.com/2014/07/slide7.jpg)
Toll Gate Reviews
A toll gate review is a system of monitoring project progress with senior
management. Periodic reviews of the project are carried out (in this case at the end
of each phase) to help the project team and senior management verify that all
the requirements of the current phase have been met and that the project team can be
allowed to proceed to the next phase. Any concerns / issues can be sorted out during
the toll gate review.
Following are the activities of a toll gate review process :

Determine whether the project has successfully met the criteria defined for the
current phase.
Hence, decide whether the project can be allowed to proceed to the next phase.
If not, determine what additional tasks need to be completed and by when.
Review the criteria for the next phase.
Review the schedule to reach the next gate.
Discuss concerns / barriers encountered in the current phase. Define actions to
remove the barriers.
Review resource requirements for the next phase. Add resources or remove
surplus and redeploy as required.

(https://improveandinnovate.files.wordpress.com/2014/07/slide8.jpg)

Further Reading :
1. The Six Sigma Project Planner by Thomas Pyzdek
2. Quality Planning & Analysis : Dr Juran and Dr Gryna


CSSBB Tutorial Series : Lesson 3


(http://improveandinnovate.wordpress.com/2014/07/09/cssbb-tutorial-series-lesson-3/)
July 9, 2014 | Lean Six Sigma

Lesson 3 : Team Management

I. Types of teams & Team formation


A Team is a group of people carrying out interdependent tasks to meet common
goals.
Many modern concepts on teams and their management can be traced to the
Quality Circle movement that started in Japan in the 1970s.
Types of Teams : Teams are classified based on their structure and objectives.
Following are the three common team types :
Cross-functional Teams (CFTs)
A group of individuals (often at managerial & supervisory level) with expertise in
various areas, who come together to solve chronic problems in the organization.
Typically used in Six Sigma improvement projects and similar activities.
Problem-solving Teams (also called Quality Circles)
The objective of such teams is to enable workmen to resolve day-to-day issues /
problems. These teams are formed by employees (workmen) from the same
department and functional area. Training on problem-solving tools and team
dynamics is provided to all members of such teams.
Self-managed Work Teams (SMTs)
A group of people with different skill sets who work, without the usual supervision,
toward a common purpose or goal.
The table below is a comparison of the various teams with respect to purpose and
structure.

(https://improveandinnovate.files.wordpress.com/2014/07/slide1.jpg)
II. Roles & Responsibilities
Irrespective of the team type, roles should be clearly identified and responsibilities
fixed for every member of the team. Following are some of the key roles within a
team.
Team Member : Every team member is individually and jointly responsible for the
outcome of the team. Team members bring diverse skill sets that help in planning or
solving problems. Team members are required to share information, attend
meetings and complete, on time, the out-of-meeting assignments prescribed by
the team leader.
Team Leader : Every team is led by a team leader who is responsible for
managing the team's resources to meet the team's objectives. The team leader, apart
from being a technical expert, is expected to be skilled in team dynamics, i.e.
decision making, monitoring & reporting team progress, tracking the performance of
individual team members, resolving conflicts between team members etc.
Facilitator & Scribe : Although different, these roles can be combined. The
facilitator may be a team member or someone external to the team. The facilitator
ensures team members stay focused on the team objectives and spend time
effectively. The facilitator also helps the team in decision making and in drawing up
action plans. The role of the scribe is to record useful information during meetings and
brainstorming sessions and communicate it to all team members.
Sponsor : The sponsor (also the process owner) is a senior member of the
organization and is responsible for providing the resources required for the effective
functioning of the team. This could include approval of budgets, time for meetings,
training etc. All critical decisions of the team should have the approval of the
sponsor.
III. Team Formation & Evolution
Once formed, teams go through various stages of growth & maturity. Effective
management of teams becomes possible by understanding the various phases of a
team's evolution. Following are the important phases of team development :
1. Forming : During the first few meetings, one will observe an atmosphere of
formality. Members display nervousness, excitement, impatience and reluctance to
participate. In this phase, the facilitator needs to break the ice to help the members
relax. A key deliverable of this phase is defining the goals, roles & responsibilities of
team members.
2. Storming : During this phase, the team gets down to business. The team leader is
expected to drive the team to work together. Teams start brainstorming sessions
on various issues / problems. Conflict / blame games amongst team members are quite
common in this phase. Hence, conflict resolution will be a key activity for the
facilitator. A key deliverable in this phase is kick-starting the problem-solving
process.
3. Norming : In this phase, the focus shifts from individual concerns to working as a
team. The team becomes less dependent on the leader / facilitator when it comes to
decision making. The team leader only reviews progress and provides technical
assistance when required. The key deliverable in this phase is significant progress with
respect to the team objective.
4. Performing : Individual needs have been met and the team is now in complete
harmony, with all team members working towards a common goal. The team's activities
are focused only on the final objective, i.e. the Key Result Areas. The key deliverable
in this phase is tangible results in terms of the team objectives.
5. Adjourning : This stage includes activities relating to the closing of projects, such
as final meetings & wrap-ups.
6. Recognition : In this phase the management acknowledges the team's
contribution & achievement.

(https://improveandinnovate.files.wordpress.com/2014/07/slide2.jpg)
Fig 2. : Development Phases & Behaviour

IV. Team Facilitation & Conflict Resolution
Facilitation is a process wherein a neutral person helps a group work together
more effectively. The facilitator is a person who "makes things easy" for the team.
Facilitators may be internal or external to the team.
Facilitation helps provide direction to the team's discussions / activities. It
also ensures teams do not waste time on issues unrelated to the objectives.
Above all, in times of conflict, the facilitator acts as a neutral party and steers the
team towards its objectives.
Facilitation : Dos & Don'ts
Make sure all members are included during discussions.
Define ground rules for conduct, procedure and discussions right up front.
Probe, do not provoke.
Do not suggest solutions. Reframe ideas / opinions without diluting / modifying them.
When summarizing discussions, highlight only areas of agreement / disagreement.
Be prepared to deal with domineering participants.
Make sure you have recorded all the important ideas / opinions put forth by each
participant.
V. Team Dynamics & Facilitation Techniques :
Facilitation techniques vary with different team behaviours, and the team leader /
facilitator should be skilled in choosing the right technique for each.
Following is a brief list of team behaviours and the corresponding facilitation technique :
Floundering : This happens when there's a lack of clarity or the team has lost track
of its goals & objectives. The facilitator should reinforce the goals & objectives from
time to time so that the team stays focused.
Dominating Participants : Some participants can be over-enthusiastic to the
extent of dominating the team's activities. The facilitator should be quick to identify
such team members and structure the discussions in a way that everyone gets an
opportunity to speak.
Reluctant Participants : These are participants who do not speak much. The
facilitator should keep an eye on such participants and take extra effort in getting
them involved, such as directing questions exclusively to them, e.g.
"What's your take on this?"
Accepting Opinions Without Data : This is a serious issue and requires a
disciplined approach. The facilitator must cross-check the validity of all decisions
made by the team.
Groupthink : Groupthink is a type of thought exhibited by group members who
try to minimize conflict and reach consensus without critically testing, analyzing and
evaluating ideas. This is a sure sign of the team operating in its comfort zone. The
facilitator should provide triggers to make the team think and do things differently.
Digression / Tangents : These are irrelevant discussions and conversations. The
facilitator should stop such discussions and get the team back to focusing on its
real objectives.
Quarrelsome Participants / Feuding : Some participants get into conflicts involving
personal issues. The facilitator should remind such members to discuss these issues
outside the meeting.
VI. Decision Making & Planning
Decision-Making Tools : The following tools can be used by teams to aid the
decision-making process :
Brainstorming : A technique used by teams for a free & uninhibited flow of ideas.
Brainstorming ensures everyone on the team participates in the problem-solving
process, and it also creates an environment of openness & out-of-the-box thinking.
Nominal Group Technique : A modified brainstorming tool that helps teams make
quick decisions through a voting / ranking system. For example, if several
ideas have been generated during an improvement project, the best (most feasible)
idea can be chosen using the Nominal Group Technique. Following are the simple
steps of the Nominal Group Technique :
1. State the objective (it should be clearly understood by all participants).
2. Write down the ideas : Each team member writes down ideas on 3 x 5 cards
(without discussion).
3. Share the cards : The team leader / facilitator collects the cards and shares the ideas
with the team. Ideas are reworded and consolidated without
repetition.
4. Vote : Each team member votes on the ideas (without discussion). The idea with
the most votes is selected.
Multi-voting : A tool that can be used after a brainstorming exercise to narrow
down a long list of ideas to a manageable number. In other words, multi-voting can
be used to prioritize the ideas generated during a brainstorming session that need to
be taken up for further study. It is carried out in a series of voting rounds. Example :
The participants vote (by show of hands or by writing down) for each of the ideas
on the initial list. Any idea that gets more than 50% of the votes is carried over to
the next round of voting. The same rule is applied to the subsequent rounds of
voting. This continues until the team has shortlisted 3-5 ideas for
further analysis.
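A single multi-voting round, with the more-than-50% carry-over rule described above, can be sketched as a small filter (the ideas and vote counts below are made up):

```python
# Multi-voting: in each round, carry forward only the ideas receiving votes
# from more than 50% of participants; stop once 3-5 ideas remain.
# Vote counts are illustrative.
def multivote_round(votes: dict, participants: int) -> list:
    """Return the ideas that got more than half of the participants' votes."""
    return [idea for idea, n in votes.items() if n > participants / 2]

participants = 10
round1 = {"idea A": 8, "idea B": 6, "idea C": 5, "idea D": 7,
          "idea E": 2, "idea F": 9, "idea G": 3, "idea H": 6}

shortlist = multivote_round(round1, participants)
print(shortlist)  # ideas with more than 5 votes survive to the next round
```

With 10 participants, only ideas getting 6 or more votes survive; here five ideas remain, so the team could stop after one round or vote again to narrow further.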

VII. Planning Tools for Teams
Teams can use the 7 management and planning tools, also known as the
"New 7 QC Tools". These are : interrelationship digraph, affinity diagram (KJ Method),
systematic (tree) diagram, matrix diagram, prioritization matrix, process decision
program chart (PDPC) and activity network diagram.
Note : These are not to be confused with the old 7 QC Tools (Check Sheet, Flow Charts,
Cause & Effect Diagram, Pareto Chart, Scatter Diagram, Histogram, Control Charts),
which will be discussed in subsequent lessons.
Suggested Reading
Managerial Breakthrough by Dr J M Juran
Juran on Leadership For Quality An Executive Handbook
