Classic! This handbook is an excellent reference for engineers and quality professionals in any field.
Kevin W. Williams
Vice President, Quality
General Motors North America
ITT has learned from experience that Dr. Taguchi's teachings can prevent variation before it happens. This book is a great demonstration of this powerful approach and how it can make a meaningful difference in any type of business. It takes a dedicated engineering approach to implement, but the payback in customer satisfaction and growth is dramatic.
Lou Giuliano
Chairman, President and CEO
ITT Industries
Dr. Taguchi's enduring work is of great importance to the field of quality.
Dr. A.V. Feigenbaum
President and CEO
General Systems Company, Inc.
This handbook represents a new bible for innovative technical leadership.
Saburo Kusama
President
Seiko Epson
It is a very well thought out and organized handbook, which provides practical
and effective methodologies to enable you to become the best in this competitive
world for product quality.
Sang Kwon Kim
Chief Technical Officer
Hyundai Motor Company & Kia Motors Corporation
For those who see the world as connected, this landmark book offers significant insights into how to design together, build together, and lead together.
Dr. Bill Bellows
Associate Technical Fellow
The Boeing Company
Dr. Taguchi has been an inspiration and has changed the paradigm for quality with his concept of the loss function. Robust Engineering is at the heart of our innovation process, and we are proud to be associated with this work.
Donald L. Runkle
Vice Chairman and CTO
Delphi Corporation
The elegantly simple Taguchi theory is demonstrated through many successful
case studies in this excellent book.
Don Dees
Vice President, Manufacturing
DaimlerChrysler
We have practiced and continue to practice the methods and teachings in this valuable handbook; it simply works.
Dr. Aly A. Badawy
Vice President, Engineering
TRW Automotive
To have all of the theories and applications of a lifetime of learning from these three gentlemen in one book is amazing. If it isn't in here, you probably don't need it.
Joseph P. Sener, P.E.
Vice President, Business Excellence
Baxter Healthcare
Taguchi's Quality Engineering Handbook is a continuation of the legacy of Dr. Taguchi, Subir Chowdhury, and Yuin Wu in Robust Engineering, which is increasingly being recognized in the automotive industry as a must-have tool to be competitive.
Dan Engler
Vice President, Engineering and Supply Management
Mark IV Automotive
Taguchi's principles, methods, and techniques are a fundamental part of the
foundation of the most highly effective quality initiatives in the world!
Rob Lindner
Vice President, Quality, CI and Consumer Service
Sunbeam Products, Inc.
The value of Dr. Genichi Taguchi's thinking and his contributions to improving
the productivity of the engineering profession cannot be overstated. This handbook captures the essence of his enormous contribution to the betterment of
mankind.
John J. King
Engineering Methods Manager, Ford Design Institute
Ford Motor Company
I am glad to see that Taguchi's Quality Engineering Handbook will now be available for students and practitioners of the quality engineering field. The editors have
done a great service in bringing out this valuable handbook for dissemination of
Taguchi Methods in quality improvement.
C. R. Rao, Sc.D., F.R.S.
Emeritus Eberly Professor of Statistics and Director of the Center for Multivariate Analysis
The Pennsylvania State University
This book is very practical, and I believe the technical community will improve its competence through the excellent case studies from various authors in this book.
Takeo Koitabashi
Executive Vice President
Konica-Minolta
Digitization, color, and multifunction are the absolute trend in imaging functions today. It is becoming more and more difficult for product development to catch up with market demand. In this environment, at Fuji Xerox, Taguchi's Quality Engineering described in this handbook is regarded as an absolutely necessary tool to develop technology to meet this ever-changing market demand.
Kiyoshi Saitoh
Corporate Vice President, Technology & Development
Fuji Xerox Co., Ltd.
In today's IT environment, it is extremely critical for technical leaders to provide an effective strategic direction. We position Taguchi Methods of Robust Engineering as a trump card for resolving difficulty during any stage of technology development. Technology is advancing rapidly, and competition is becoming more global day by day. In order to survive and win in this environment, we use the Taguchi Methods of Robust Engineering described in this handbook to continue developing highly reliable products, one after another.
Haruo Kamimoto
Executive Vice President
Ricoh
I was shocked when I first encountered Taguchi Methods 30 years ago. This book will provide a trump card for simultaneously achieving improved product cost and product quality with less R&D cost.
Takeshi Inoo
Executive Director
Isuzu Motor Co.
Director
East Japan Railroad Co.
Taguchi's Quality Engineering Handbook
Genichi Taguchi
Subir Chowdhury
Yuin Wu
Associate Editors
Shin Taguchi and Hiroshi Yano
Copyright 2005 by Genichi Taguchi, Subir Chowdhury, Yuin Wu. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system, or transmitted
in any form or by any means, electronic, mechanical, photocopying, recording, scanning,
or otherwise, except as permitted under Section 107 or 108 of the 1976 United States
Copyright Act, without either the prior written permission of the Publisher, or
authorization through payment of the appropriate per-copy fee to the Copyright Clearance
Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470,
or on the web at www.copyright.com. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street,
Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail:
permcoordinator@wiley.com.
Limit of Liability / Disclaimer of Warranty: While the publisher and author have used their
best efforts in preparing this book, they make no representations or warranties with
respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No
warranty may be created or extended by sales representatives or written sales materials.
The advice and strategies contained herein may not be suitable for your situation. You
should consult with a professional where appropriate. Neither the publisher nor author
shall be liable for any loss of profit or any other commercial damages, including but not
limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support,
please contact our Customer Care Department within the United States at (800) 762-2974,
outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears
in print may not be available in electronic books. For more information about Wiley
products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Taguchi, Genichi, 1924–
Taguchi's quality engineering handbook / Genichi Taguchi, Subir Chowdhury,
Yuin Wu.
p. cm.
Includes bibliographical references and index.
ISBN 0-471-41334-8
1. Quality control—Statistical methods. 2. Production management—Quality control. 3. Engineering design. I. Chowdhury, Subir. II. Wu, Yuin. III. Title.
TS156.T343 2004
658.562—dc22
2004011335
Printed in the United States of America.
10 9 8 7 6 5 4 3 2 1
Contents
Preface
Acknowledgments
About the Authors
SECTION 1: THEORY

Part I: Genichi Taguchi's Latest Thinking
1. The Second Industrial Revolution and Information Technology
2. Management for Quality Engineering
3. Quality Engineering: Strategy in Research and Development
4. Quality Engineering: The Taguchi Method

Part II: Quality Engineering: A Historical Perspective
5. Development of Quality Engineering in Japan
6. History of Taguchi's Quality Engineering in the United States
Part III: Quality Loss Function
7. Introduction to the Quality Loss Function
8. Quality Loss Function for Various Quality Characteristics
9. Specification Tolerancing
10. Tolerance Design

Part IV: Signal-to-Noise Ratio
11. Introduction to the Signal-to-Noise Ratio
12. SN Ratios for Continuous Variables
13. SN Ratios for Classified Attributes

Part V: Robust Engineering
14. System Design
15. Parameter Design
16. Tolerance Design
17. Robust Technology Development
18. Robust Engineering: A Manager's Perspective
19. Implementation Strategies

Part VI: Mahalanobis–Taguchi System (MTS)
20. Mahalanobis–Taguchi System

Part VII: Software Testing and Application
21. Application of Taguchi Methods to Software System Testing

Part VIII: On-Line Quality Engineering
22. Tolerancing and Quality Level
23. Feedback Control Based on Product Characteristics
24. Feedback Control of a Process Condition
25. Process Diagnosis and Adjustment

Part IX: Experimental Regression
26. Parameter Estimation in Regression Equations
Part X: Design of Experiments
27. Introduction to Design of Experiments
28. Fundamentals of Data Analysis
29. Introduction to Analysis of Variance
30. One-Way Layout
31. Decomposition to Components with a Unit Degree of Freedom
32. Two-Way Layout
33. Two-Way Layout with Decomposition
34. Two-Way Layout with Repetition
35. Introduction to Orthogonal Arrays
36. Layout of Orthogonal Arrays Using Linear Graphs
37. Incomplete Data
38. Youden Squares
SECTION 2: APPLICATION (CASE STUDIES)

Part I: Robust Engineering: Chemical Applications

Biochemistry
Case 1. Optimization of Bean Sprouting Conditions by Parameter Design
Case 2. Optimization of Small Algae Production by Parameter Design

Chemical Reaction
Case 3. Optimization of Polymerization Reactions
Case 4. Evaluation of Photographic Systems Using a Dynamic Operating Window

Measurement
Case 5. Application of Dynamic Optimization in Ultra Trace Analysis
Case 6. Evaluation of Component Separation Using a Dynamic Operating Window
Case 7. Optimization of a Measuring Method for Granule Strength
Case 8. A Detection Method for Thermoresistant Bacteria

Pharmacology
Case 9. Optimization of Model Ointment Prescriptions for In Vitro Percutaneous Permeation

Separation
Case 10. Use of a Dynamic Operating Window for Herbal Medicine Granulation
Case 11. Particle-Size Adjustment in a Fine Grinding Process for a Developer
Part II: Robust Engineering: Electrical Applications

Circuits
Case 12. Design for Amplifier Stabilization
Case 13. Parameter Design of Ceramic Oscillation Circuits
Case 14. Evaluation Method of Electric Waveforms by Momentary Values
Case 15. Robust Design for Frequency-Modulation Circuits

Electronic Devices
Case 16. Optimization of Blow-off Charge Measurement Systems
Case 17. Evaluation of the Generic Function of Film Capacitors
Case 18. Parameter Design of Fine-Line Patterning for IC Fabrication
Case 19. Minimizing Variation in Pot Core Transformer Processing
Case 20. Optimization of the Back Contact of Power MOSFETs

Electrophoto
Case 21. Development of High-Quality Developers for Electrophotography
Case 22. Functional Evaluation of an Electrophotographic Process
Part III: Robust Engineering: Mechanical Applications

Biomechanical
Case 23. Biomechanical Comparison of Flexor Tendon Repairs

Machining
Case 24. Optimization of Machining Conditions by Electrical Power
Case 25. Development of Machining Technology for High-Performance Steel by Transformability
Case 26. Transformability of Plastic Injection-Molded Gear

Material Design
Case 27. Optimization of a Felt-Resist Paste Formula Used in Partial Felting
Case 28. Development of Friction Material for Automatic Transmissions
Case 29. Parameter Design for a Foundry Process Using Green Sand
Case 30. Development of Functional Material by Plasma Spraying

Material Strength
Case 31. Optimization of Two-Piece Gear Brazing Conditions
Case 32. Optimization of Resistance Welding Conditions for Electronic Components
Case 33. Tile Manufacturing Using Industrial Waste

Measurement
Case 34. Development of an Electrophotographic Toner Charging Function Measuring System
Case 35. Clear Vision by Robust Design
Processing
Case 36. Optimization of Adhesion Condition of Resin Board and Copper Plate
Case 37. Optimization of a Wave Soldering Process
Case 38. Optimization of Casting Conditions for Camshafts by Simulation
Case 39. Optimization of Photoresist Profile Using Simulation
Case 40. Optimization of a Deep-Drawing Process
Case 41. Robust Technology Development of an Encapsulation Process
Case 42. Gas-Arc Stud Weld Process Parameter Optimization Using Robust Design
Case 43. Optimization of Molding Conditions of Thick-Walled Products
Case 44. Quality Improvement of an Electrodeposited Process for Magnet Production
Case 45. Optimization of an Electrical Encapsulation Process through Parameter Design
Case 46. Development of Plastic Injection Molding Technology by Transformability

Product Development
Case 47. Stability Design of Shutter Mechanisms of Single-Use Cameras by Simulation
Case 48. Optimization of a Clutch Disk Torsional Damping System Design
Case 49. Direct-Injection Diesel Injector Optimization
Case 50. Optimization of Disk Blade Mobile Cutters
Case 51. D-VHS Tape Travel Stability
Case 52. Functionality Evaluation of Spindles
Case 53. Improving Minivan Rear Window Latching
Case 54. Linear Proportional Purge Solenoids
Case 55. Optimization of a Linear Actuator Using Simulation
Case 56. Functionality Evaluation of Articulated Robots
Case 57. New Ultraminiature KMS Tact Switch Optimization
Case 58. Optimization of an Electrical Connector Insulator Contact Housing
Case 59. Airflow Noise Reduction of Intercoolers
Case 60. Reduction of Boosting Force Variation of Brake Boosters
Case 61. Reduction of Chattering Noise in Series 47 Feeder Valves
Case 62. Optimal Design for a Small DC Motor
Case 63. Steering System On-Center Robustness
Case 64. Improvement in the Taste of Omelets
Case 65. Wiper System Chatter Reduction

Other
Case 66. Fabrication Line Capacity Planning Using a Robust Design Dynamic Model
Part IV: Mahalanobis–Taguchi System (MTS)

Human Performance
Case 67. Prediction of Programming Ability from a Questionnaire Using the MTS
Case 68. Technique for the Evaluation of Programming Ability Based on the MTS

Inspection
Case 69. Application of Mahalanobis Distance for the Automatic Inspection of Solder Joints
Case 70. Application of the MTS to Thermal Ink Jet Image Quality Inspection
Case 71. Detector Switch Characterization Using the MTS
Case 72. Exhaust Sensor Output Characterization Using the MTS
Case 73. Defect Detection Using the MTS

Medical Diagnosis
Case 74. Application of Mahalanobis Distance to the Measurement of Drug Efficacy
Case 75. Use of Mahalanobis Distance in Medical Diagnosis
Case 76. Prediction of Urinary Continence Recovery among Patients with Brain Disease Using the Mahalanobis Distance
Case 77. Mahalanobis Distance Application for Health Examination and Treatment of Missing Data
Case 78. Forecasting Future Health from Existing Medical Examination Results Using the MTS

Product
Case 79. Character Recognition Using the Mahalanobis Distance
Case 80. Printed Letter Inspection Technique Using the MTS
Part V: Software Testing and Application

Algorithms
Case 81. Optimization of a Diesel Engine Software Control Strategy
Case 82. Optimizing Video Compression

Computer Systems
Case 83. Robust Optimization of a Real-Time Operating System Using Parameter Design

Software
Case 84. Evaluation of Capability and Error in Programming
Case 85. Evaluation of Programmer Ability in Software Production
Case 86. Robust Testing of Electronic Warfare Systems
Case 87. Streamlining of Debugging Software Using an Orthogonal Array

Part VI: On-Line Quality Engineering

On-Line
Case 88. Application of On-Line Quality Engineering to the Automobile Manufacturing Process
Case 89. Design of Preventive Maintenance of a Bucket Elevator through Simultaneous Use of Periodic Maintenance and Checkup
Case 90. Feedback Control by Quality Characteristics
Case 91. Control of Mechanical Component Parts in a Manufacturing Process
Case 92. Semiconductor Rectifier Manufacturing by On-Line Quality Engineering
Part VII: Miscellaneous

Miscellaneous
Case 93. Estimation of Working Hours in Software Development
Case 94. Applications of Linear and Nonlinear Regression Equations for Engineering

SECTION 3: TAGUCHI'S METHODS VERSUS OTHER QUALITY PHILOSOPHIES
39. Quality Management in Japan
40. Deming and Taguchi's Quality Engineering
41. Enhancing Robust Design with the Aid of TRIZ and Axiomatic Design
42. Testing and Quality Engineering
43. Total Product Development and Taguchi's Quality Engineering
44. Role of Taguchi Methods in Design for Six Sigma

APPENDIXES
A. Orthogonal Arrays and Linear Graphs: Tools for Quality Engineering
B. Equations for On-Line Process Control
C. Orthogonal Arrays and Linear Graphs for Chapter 38

Glossary
Bibliography
Index
Preface
To compete successfully in the global marketplace, organizations must have the
ability to produce a variety of high-quality, low-cost products that fully satisfy customers' needs. Tomorrow's industry leaders will be those companies that make
fundamental changes in the way they develop technologies and design and produce products.
Dr. Genichi Taguchi's approach to improving quality is currently the most
widely used engineering technique in the world, recognized by virtually any engineer, though other quality efforts have gained and lost popularity in the business
press. For example, in the last part of the twentieth century, the catch phrase total
quality management (TQM) put Japan on the global map along with the revolution in automotive quality. Americans tried with less success to adopt TQM into
the American business culture in the late 1980s and early 1990s. Under Jack Welch's leadership, GE rediscovered the Six Sigma Quality philosophy in the late
1990s, resulting in corporations all over the world jumping on that bandwagon.
These methods tend to be but a part of a larger, more global quality effort that includes both upstream and downstream processes, espoused by Dr. Taguchi since
the 1960s. It is no surprise that the other methods use parts of his philosophy of
Robust Engineering in what they try to accomplish, but none is as effective.
Dr. Taguchi strongly feels that an engineering theory has no value unless it is
effectively applied and proven to contribute to society. To that end, a Taguchi study
group led by Dr. Taguchi has been meeting on the first Saturday of every month in Nagoya, Japan, since 1953. The group has about one hundred members, engineers from various industries, at any given time. Study group members learn
from Dr. Taguchi, apply new ideas back on the job, and then assess the effectiveness of these ideas. Through these activities, Dr. Taguchi continues to develop
breakthrough approaches. In Tokyo, Japan, a similar study group has been meeting
monthly since 1960. Similar activities continue on a much larger scale among the
more than two thousand members of the Quality Engineering Society.
For the past fifty years, Dr. Taguchi has been inventing new quality methodologies year after year. Having written more than forty books in the Japanese language, this engineering genius is still contributing profoundly to this highly technical field, but until now, his work and its implications have not been collected in one single source. Taguchi's Quality Engineering Handbook corrects that omission.
This powerful handbook, we believe, will create an atmosphere of great excitement
in the engineering community. It will provide a reference tool for every engineer
and quality professional of the twenty-first century. We believe that this book could
remain in print for one hundred years or more.
The handbook is divided into three sections. The first, Theory, consists of
ten parts:
Genichi Taguchi's Latest Thinking
Quality Engineering: A Historical Perspective
Quality Loss Function
Signal-to-Noise Ratio
Robust Engineering
Mahalanobis–Taguchi System (MTS)
Software Testing and Application
On-Line Quality Engineering
Experimental Regression
Design of Experiments
After presenting the cutting-edge ideas Dr. Taguchi is currently working on in
Part I, we provide an overview of the quality movement from earliest times in Japan
through to current work in the United States. Then we cover a variety of quality
methodologies, describe them in detail, and demonstrate how they can be applied
to minimize product development cost, reduce product time-to-market, and increase overall productivity.
Dr. Taguchi has always believed in practicing his philosophy in a real environment. He emphasizes that the most effective way to learn his methodology is to
study successful and failed case studies from actual applications. For this reason,
in Section 2 of this handbook we are honored to feature ninety-four case studies
from all over the world and from every type of industry and engineering field: mechanical, electrical, chemical, electronic, etc. Each of the case studies provides a
quick study on subject areas that can be applied to many different types of products
and processes. It is recommended that readers study as many of these case studies
as they can, even if they are not ones they can relate to easily. Thousands of
companies have been using Dr. Taguchi's methodologies for the past fifty years.
The benets some have achieved are phenomenal.
Section 3 features an explanation of how Dr. Taguchi's work contrasts with other
quality philosophies, such as Six Sigma, Deming, Axiomatic Design, TRIZ, and
others. This section shows readers ways to incorporate Taguchi Methods into existing corporate activities.
In short, the primary goals of this handbook are
To provide a guidebook for engineers, management, and academia on the
Quality Engineering methodology.
To provide organizations with proven techniques for becoming more competitive in the global marketplace.
To collect Dr. Taguchi's theories and practical applications within the confines of one book so the relationships among the tools will be properly
understood.
To clear up some common misinterpretations about Taguchi's Quality Engineering philosophy.
To spread the word about Quality Engineering throughout the world, leading to greater success in all fields in the future.
We strongly believe that every engineer and engineering manager in any field, every quality professional, every engineering faculty member, research and development personnel in all types of organizations, every consultant, educator, or student, and all technical libraries will benefit from having this book.
Subir Chowdhury
Yuin Wu
April 14, 2004
Acknowledgments
The authors gratefully acknowledge the efforts of all who assisted in the completion of this
handbook:
To all of the contributors and their organizations for sharing their successful
case studies.
To the American Supplier Institute, ASI Consulting Group, LLC, and its employees, Board of Directors, and customers for promoting Dr. Taguchi's
works over the years.
To our two associate editors, Shin Taguchi of the United States and Hiroshi
Yano of Japan, for their dedication and hard work on the manuscript
development.
To two of the finest teachers and consultants, Jim Wilkins and Alan Wu, for nonstop passionate preaching of Dr. Taguchi's philosophy.
To our colleagues and friends, Dr. Barry Bebb, Jodi Caldwell, Ki Chang, Bill
Eureka, Craig Jensen, John King, Don Mitchell, Gopi Palamaner, Bill Sutherland, Jim Quinlan, and Brad Walker, for effectively promoting Dr. Taguchi's philosophy.
To Emy Facundo for her very hard work on preparing the manuscript.
To Bob Argentieri, our editor at Wiley, for his dedication and continuous support on the development of this handbook.
To Rebecca Taff for her dedicated effort toward refining the manuscript.
Finally, this handbook never would have been possible without the continuous
support of our wonderful wives, Kiyo Taguchi, Malini Chowdhury, and Suchuan Wu.
Section 1
Theory
Part I
Genichi Taguchi's Latest Thinking
1
The Second Industrial Revolution and Information Technology
1.1. Elements of Productivity
Risk Management
Countermeasures against Risks
Ford's Strategy
Determining Tolerance
Evaluation of Quality Level and Consumer Loss
Balance of Cost and Quality Level
References
Economy and Productivity
Therefore, wages and living standards are not important from a national point of view. What is important is that we increase the number of products by improving productivity; that once we achieve it, we develop new products, create new industries, and offer jobs to unemployed people; and that once we enhance the quality of products and services, we sell them at the same prices as before. In fact, the price of a color television set has been decreasing faster than that of a black-and-white TV set even though the former is superior in quality, and the cost of large-scale integrated circuits remains the same in the end, despite the high degree of integration. These are typical examples of developed technologies that have enabled us to produce higher-quality products at the same cost.
Although Japan's standard of living is lower than that of the United States, Japan's income is higher. An exchange rate is determined by competitive products that can be exported. On the one hand, Japan is highly productive because of several high-level technologies. On the other hand, Japan's productivity in terms of other products and services is low. On balance, Japan's overall living standard remains low. This is because a standard of living depends totally on productivity and the allocation of products and services. To raise Japan's overall standard of living, quantitative productivity in agriculture, retail trade, and the service industry should be increased; otherwise, qualitative productivity should be improved. Accordingly, Japan should invest in R&D to improve productivity. This is regarded as the most essential role of top management in society. The objective of quality engineering is to enhance the productivity of R&D itself.
As noted earlier, there are two different types of productivity improvement, one a quantitative approach, the other qualitative. As for the latter, the word qualitative can be replaced by grade, although in this section we focus on quality rather than grade. In the case of an automobile, quality indicates the losses that a product imposes on society, such as fuel inefficiency, noise, vibration, defects, or environmental pollution. An auto engine at present has only 25% fuel efficiency; that is, to gain the horsepower claimed, four times as much gasoline is consumed as would seem to be required. Now, if we double fuel efficiency, we can expect to halve noise, vibration, and environmental pollution as well as fuel consumed. It is obvious that the excess fuel consumption is converted into vibration, noise, and exhaust, thereby fueling environmental pollution. It can also be theorized that if the fuel efficiency of an engine doubled, Japan's trade deficit with the United States would disappear and global environmental pollution would be reduced by 50%. In contrast, sales of oil-producing nations and petroleum companies would be cut in half. Even if these companies doubled the price of gasoline to maintain the same number of workers, the global standard of living would be improved, owing to decreased pollution, because total fuel expenditure would not have changed. Therefore, instead of reducing economically unimportant defects by 0.1% in the production process, engineers should focus on more substantial quality improvement, such as in engine efficiency. That is, since more than 75% of gasoline fuel is wasted, incurring an enormous economic loss, improving fuel efficiency is the more desirable goal.

Indeed, petroleum companies might cause joblessness because they cannot maintain their doubled prices; in actuality, however, there are quite a few people who wish to purchase a car. In the United States, many families need a car for each adult. Therefore, if the price of a car and that of fuel decline by half, the demand for cars would increase, and eventually, fuel consumption would decrease.
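The arithmetic behind these claims can be sketched in a few lines of Python. The figures are the chapter's illustrative round numbers, and the helper function is ours, not part of the text:

```python
# A sketch of the chapter's fuel-efficiency arithmetic (illustrative round
# numbers from the text; fuel_needed is a hypothetical helper).

def fuel_needed(useful_work: float, efficiency: float) -> float:
    """Fuel consumed to deliver a given amount of useful work."""
    return useful_work / efficiency

work = 1.0                           # one unit of useful engine output
current = fuel_needed(work, 0.25)    # 25% efficiency -> 4 units of fuel
doubled = fuel_needed(work, 0.50)    # doubled efficiency -> 2 units of fuel

print(current, doubled)              # 4.0 2.0: doubling efficiency halves fuel

# The wasted portion is what becomes heat, noise, vibration, and exhaust.
print((current - work) / current)    # 0.75: "more than 75% of fuel is wasted"
```

Halving fuel consumed at a doubled gasoline price is also why total fuel expenditure stays constant in the paragraph's price-doubling scenario.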
Quality and Productivity
In nations that have a wider variety of products than developing countries have, qualitative improvement is preferable to quantitative improvement from a social perspective. This is because the qualitative approach tends to reduce pollution and resource consumption. I believe that taxation is more desirable than pollution regulation as a means of facilitating quality improvement. This would include a consumption tax for natural
Product Quality

Design for a Financial System
Credit means that money can be lent without security and that credit limits such as $50,000 or $10,000 are determined based on each person's data. How we judge individual credibility is a technological issue. In designing a system, how we predict and prevent functional risk is the most critical design concern. This is about how we offer credit to each person in a society. How to collect credit information and how much credit to extend to a given person constitute the most important details in the era of information. In the field of quality engineering, an approach to minimizing the risk of deviation, not from function design per se, but from functionality or an ideal state is called functionality design. In the twenty-first century, this quality engineering method is being applied to both hardware and software design.
An organization that collects money from a number of people and corporations and lends to those in need of money is a bank. Its business is conducted in the world of information, or credit. Such a world of reliance on information (from plastic money to electronic money) has already begun. The question for the customer is whether a bank pays higher interest than others do, or runs a rational business that makes a sufficient profit. What is important is whether a bank will be able to survive in the future because its information system is sufficiently rationalized and its borrowers are properly assessed, even if it keeps loan interest low to borrowers and, conversely, pays somewhat higher interest to money providers. After all, depositors evaluate a bank. A credible bank that can offer higher interest is regarded as a good and highly productive bank. Therefore, we consider that a main task of a bank's R&D department is to rationalize the banking business and lead to a system design method through rationalization of functional evaluation in quality engineering.

Unfortunately, in Japan, we still lack research on automated systems that make a proper decision instantaneously. Because a computer itself cannot take any responsibility, functional evaluation of a system based on software plays a key role in establishing a company. After a company is established, daily routine management of the software (updating and improving the database) becomes more important. Globalization of information transactions is progressing. A single information center will soon cover the whole world and reduce costs drastically. Soon, huge bank buildings with many clerks will not be required.
We define productivity as follows: Total social productivity (GDP) is the sum of individual freedoms. Freedom includes situations where we can obtain what we want freely, that is, without restraint of individual freedom by others. As discussed earlier, when electronic money systems are designed, numerous people become unemployed because many bank clerks are no longer needed. Once only 100 bank clerks can complete a job that formerly required 10,000 people, the bank's productivity is regarded as improved 100-fold. Nevertheless, unless the 9900 redundant people produce something new, total social productivity does not increase.
To increase productivity (including selling a higher-quality product at the same
price) requires technological research, and engineers designing a productive system constitute an R&D laboratory. An increase in productivity is irreversible. Of
course, a bank does not itself need to offer new jobs to people unemployed due
to improved productivity. It is widely said that one of the new key industries will
be the leisure industry, including travel, which has no limit and could include
space travel.
What Is Productivity?
Assuming that each consumer's taste and standard of living (disposable income) differ for both the hardware and software that he or she wishes to buy, we plan a new product. Toyota is said to be able to deliver a car to a customer within 20 days of receiving an order. A certain watch manufacturer is said to respond to 30 billion variations of dial plate type, color, and size and to offer one at a price of $75 to $125 within a short period of time. An engineer required to design a short-cycle-time production process for only one variation or one product is involved in a flexible manufacturing system (FMS), used to produce high-mix, low-volume products. This field belongs to production engineering, whose main interest is production speed in FMS.
Risk Management

Countermeasures against Risks
Now let's look at noises from number 1 in the list above from the standpoint of quality engineering. Among common countermeasures against such noises are the training of users to head off misuse and the prevention of subsequent loss and
There are two types of environment: natural and artificial. The natural environment includes earthquakes and typhoons. From an economic point of view, we should not design buildings that can withstand any type of natural disaster. For an earthquake, the point on the seismic intensity scale for which we design a building is determined by a standard in tolerance design. If we design the robustness of a building using the quality engineering technique, we select a certain seismic intensity, such as 4, and study how to minimize the deformation of the building. However, this does not mean that we design a building that is unbreakable even in a large-scale earthquake.
To mitigate the effects of an earthquake or typhoon on human life, we need to forecast such events. Instead of relying on cause-and-effect or regression relationships, we should focus on prediction by pattern recognition. This technique, integrating multidimensional information obtained to date, creates a Mahalanobis space (see Section 4.7) using only time-based data with a seismic intensity below 0.5, at which no earthquake happens. The Mahalanobis distance becomes 1, on average, in this space. Therefore, the distance D generated in the Mahalanobis space remains within the approximate range 1 ± 1. We assume that the Mahalanobis space exists as a unit space only if there is no earthquake. We then see how the Mahalanobis distance changes, in accordance with the SN ratio (forecasting accuracy), in proportion to the seismic intensity after calculating the distance for data observed after an earthquake. If the Mahalanobis distance becomes large enough and is sufficiently proportional to the seismic intensity, it is possible to predict an earthquake using seismic data obtained before the earthquake occurs.
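The unit-space idea above can be sketched numerically. This is an illustrative reconstruction, not the handbook's seismic procedure: the data are synthetic, and the scaling by the number of variables k is what makes the average distance over the unit space approximately 1.

```python
import numpy as np

# Build a "unit space" from normal-period observations only, then
# measure how far a new observation drifts from it (synthetic data,
# not seismic measurements).
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 3))   # observations defining the unit space
mean = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))
k = normal.shape[1]                             # number of variables

def d2(x):
    # Squared Mahalanobis distance scaled by k; averages about 1
    # over the unit space itself.
    diff = x - mean
    return float(diff @ cov_inv @ diff) / k

unit_avg = np.mean([d2(x) for x in normal])     # close to 1 by construction
outlier = d2(np.array([5.0, 5.0, 5.0]))         # far outside the unit space
```

A large, persistent growth of this distance, correlated with the signal of interest, is what the text proposes as the basis for prediction.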
Many social systems focus on crimes committed by human beings. Recently, in the world of software engineering, a number of problems have been brought about by hackers. Toll collection systems for public telephones, for example, involve numerous problems. Especially for postpaid phones, only 30% of total revenue was collected by the Nippon Telegraph and Telephone Public Corporation. Eventually, the company modified its system to a prepaid basis. Before you call, you insert a coin. If your call is not connected, the coin is returned after the phone is hung up. Dishonest people put tissue paper in coin returns to block the returning coins, because many users tend to leave a phone without collecting their change. Because phone designers had made the coin return so small that a coin could barely drop through, change thieves could fill the slot with tissue paper using a steel wire and then burn the paper away with a hot wire and take the coins. To tackle this crime, designers added an alarm that buzzes when coins are removed from a phone; but the thieves decoy the people in charge by intentionally setting off alarms in some places while stealing coins from other phones.
A good design would predict crimes and develop ways to learn what criminals are doing. Although the prepaid card system at first kept people from not paying, bogus cards soon began to proliferate. Then it became necessary to deal with counterfeiting of coins, bills, and cards. Another problem is that of hackers, who have begun to cause severe problems on the Internet. These crimes can be seen as intentional noises made by malicious people. Education and laws are prepared to prevent them, and the police are activated to punish them. Improvement in people's standard of living leads to the prevention of many crimes but cannot eliminate them.
No noises are larger than war, which derives from the fact that national policy is free. To prevent this disturbance, we need international laws with the backing of the United Nations, and to eradicate the noise, we need United Nations forces. At the same time, the mass media should keep a check on UN activities to control the highest-ranked authority. Although the prevention of wars around the world is not an objective of quality engineering, noises that accompany businesses should be handled by quality engineering, and MTS can be helpful by
The first process to which Ford applied the quality engineering method was not product design but the daily routine activity of quality control (i.e., on-line quality engineering). We reproduce below a part of a research paper by Willie Moore.
Quality Engineering at Ford
Over the last five years, Ford's awareness of quality has become one of the newest and most advanced examples. To date, they have come to recognize that continuous improvement of products and services in response to customers' expectations is the only way to prosper in their business and allocate proper dividends to stockholders. In the declaration of their corporate mission and guiding principles, they are aware that human resources, products, and profits are fundamental to success and that the quality of their products and services is closely related to customer satisfaction. However, these ideas are not brand-new. If so, why is Ford's awareness of quality considered one of the newest and most advanced? The reason is that Ford has arrived at this new understanding after reconsidering the background of these ideas. In addition, they comprehend the following four simple assertions by Dr. Taguchi:
1. Cost is the most important element for any product.
2. Cost cannot be reduced without influencing quality.
3. Quality can be improved without increasing cost. This can be achieved by utilizing the interactions with noise.
4. Cost can be reduced through quality improvement.
Historically, the United States has developed many quality targets, for example, the zero-defect movement, fitness for use, and quality standards. Although these targets include a specific definition of quality or a philosophy, practical ways of training people to attain the defined quality targets have not been formulated and developed. Currently, Ford has a philosophy, methods, and technical means to satisfy customers. Among the technical means are methods to determine tolerances and the economic evaluation of quality levels. Assertion 4 above is a way of reducing cost after improving the SN ratio in production processes. Since some Japanese companies cling too much to the idea that quality is the first priority, they are losing their competitive edge due to higher prices. Quality and cost should be well balanced.
The word quality as discussed here means market losses related to defects, pollution, and product life in the market. A procedure for determining tolerances and on- and off-line quality engineering are explained later.
Determining Tolerance

Quality and cost are balanced by the design of tolerances in the product design process. The procedure is prescribed in JIS Z-8403 [2] as a part of the national standards, and it is applied not only at Ford but also at European companies. Balancing the cost of quality involves how to determine targets and tolerances at shipping.
Tolerance is determined after we classify quality characteristics into three types:

1. Nominal-the-best characteristic: a characteristic with a target value m (e.g., a dimension). Its tolerance is calculated as follows:

Δ = √(A/A0) Δ0    (1.1)

where A0 is the economic loss when a product or service does not function in the marketplace, A the manufacturing loss when a product or service does not meet the shipping standard, and Δ0 the functional limit. Above m + Δ0 or below m − Δ0, problems occur.

2. Smaller-the-better characteristic: a characteristic that should be smaller (e.g., a detrimental ingredient, audible noise). Its tolerance is calculated as follows:

Δ = √(A/A0) Δ0    (1.2)

3. Larger-the-better characteristic: a characteristic that should be larger (e.g., strength). Its tolerance is calculated as follows:

Δ = √(A0/A) Δ0    (1.3)
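The three formulas can be collected into a small sketch. The numbers in the usage line are hypothetical, not taken from the text.

```python
import math

# Tolerance formulas (1.1)-(1.3): A0 is the loss at the functional
# limit Delta0 in the marketplace, A the loss when a product fails
# the shipping standard.
def nominal_the_best(A, A0, delta0):
    return math.sqrt(A / A0) * delta0   # eq. (1.1)

def smaller_the_better(A, A0, delta0):
    return math.sqrt(A / A0) * delta0   # eq. (1.2), same form as (1.1)

def larger_the_better(A, A0, delta0):
    return math.sqrt(A0 / A) * delta0   # eq. (1.3)

# Hypothetical example: market loss of $300 at a functional limit of
# 20 mm, in-house loss of $3 -> shipping tolerance of 2 mm.
delta = nominal_the_best(3, 300, 20.0)
```

Because the in-house loss A is normally much smaller than the market loss A0, the shipping tolerance comes out tighter than the functional limit for the first two types and wider (a higher lower limit) for the third.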
Since a tolerance value is set by the designers of a product, quite often production engineers are not informed of how tolerance has been determined. In
this chapter we discuss the quality level after a tolerance value has been established.
A manufacturing department is responsible for producing specied products routinely in given processes. Production engineers are in charge of the design of
production processes. On the other hand, hardware, such as machines or devices,
targets, and control limits of quality characteristics to be controlled in each process, and process conditions, are given to operators in a manufacturing department
as technical or operation standards. Indeed, the operators accept these standards;
however, actual process control in accordance with a change in process conditions
tends to be left to the operators, because this control task is regarded as a calibration or adjustment. Production cost is divided up as follows:
production cost = material cost + process cost + control cost + pollution cost    (1.4)
Moreover, control cost is split into two parts: production control cost and quality control cost. A product design department deals with all cost items on the right side of equation (1.4). A production engineering department is charged with the design of production processes (selection of machines and devices and setup of running conditions) to produce the initially designed specifications (specifications at shipping) as economically and quickly as possible. This department's responsibility covers the sum of the second to fourth terms on the right-hand side of (1.4). Its particularly important task is to speed up production and attempt to reduce total cost, including the labor cost of indirect employees, after improving the stability of production processes. Cost should be regarded not only as production cost but as companywide cost, which is several times the production cost expressed in (1.4).
Evaluation of Quality Level and Consumer Loss
Since price basically gives a customer the first loss, production cost can be considered more important than quality. In this sense, the balance of cost and quality is regarded as a balance of price and quality. Market quality as discussed here includes the following items mentioned earlier:
1. Operating cost
2. Loss due to functional variability
3. Loss due to evil effects
The sum of items 1, 2, and 3 is the focus of this chapter. Items 1 and 3 are almost always determined by design. Item 1 is normally evaluated under conditions of standard use. For items 2 and 3, we evaluate their loss functions using an average sum of squared deviations from ideal values. However, for an initial specification (at the point of shipping) in daily manufacturing only, we assess the economic loss due to items 2 and 3, or the quality level as defined in this book, using the average sum of squared deviations from the target. In this case, setting the loss to A when a product deviates from the target by the tolerance Δ, the loss function is

L = (A/Δ²)σ²    (1.5)

Then σ² is calculated as follows, where the n data are y1, y2, ... , yn:

σ² = (1/n)[(y1 − m)² + (y2 − m)² + ··· + (yn − m)²]    (1.6)

For a smaller-the-better characteristic, the target is zero and

σ² = (1/n)(y1² + y2² + ··· + yn²)    (1.7)

For a larger-the-better characteristic, the loss function is

L = AΔ²σ²    (1.8)

where σ² is the average of the sum of squared reciprocals:

σ² = (1/n)(1/y1² + 1/y2² + ··· + 1/yn²)    (1.9)
We evaluate daily activity in manufacturing as well as cost by clarifying the evaluation method of quality level used by management to balance quality and cost.
QUALITY LEVEL: NOMINAL-THE-BEST CHARACTERISTIC
Example
A certain plate glass maker's shipping price is $3, with a dimensional tolerance of ±2.0 mm. Data for the differences between measurements and target values (m's) for a product shipped from a certain factory are as follows (in millimeters):

0.3  0.6  0.5  0.0  0.2  0.8  0.2  0.0  1.0  1.2
0.8  0.6  0.9  1.1  0.5  0.2  0.0  0.3  1.3  0.8
Next we calculate the quality level of this product. To compute the loss due to variability, we calculate the average sum of squared deviations from the target, the mean-squared error σ², and substitute it into the loss function. From now on, we call the mean-squared error σ² the variance:

σ² = (1/20)(0.3² + 0.6² + ··· + 1.3² + 0.8²) = 0.4795 mm²    (1.10)
Plugging this into equation (1.5) for the loss function gives us

L = (A/Δ²)σ² = (300/2²)(0.4795) = 36.0 cents    (1.11)

If the average is adjusted to the target, the loss decreases. From the data, the variation caused by the deviation of the mean from the target is

Sm = (y1 + y2 + ··· + y20)²/20 = 2.66    (1.12)

so the error variation is

Se = ST − Sm = 9.59 − 2.66 = 6.93    (1.13)

and the error variance after the mean is adjusted to the target is

Ve = Se/(n − 1) = 6.93/19 = 0.365    (1.14)

Substituting this variance into the loss function gives

L = (300/2²)(0.365) = (75)(0.365) = 27.4 cents    (1.15)
Compared to equation (1.11), for one sheet of plate glass this brings a quality improvement of

36.0 − 27.4 = 8.6 cents    (1.16)

If 500,000 sheets are produced monthly, we can obtain $43,000 per month through quality improvement.
No special tool is required to adjust an average to a target. We can cut plate
glass off with a ruler after comparing an actual value with a target value. If the
Table 1.1
ANOVA table

Factor   f     S      V       S'      ρ (%)
m        1    2.66   2.66     2.295    23.9
e       19    6.93   0.365    7.295    76.1
Total   20    9.59   0.4795   9.590   100.0
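The computation above can be checked in a few lines. The deviations below are the magnitudes printed in the text (the signs were not recoverable from the extraction), which is enough to reproduce σ², the loss (1.11), and, taking Sm = 2.66 from Table 1.1, the adjusted loss (1.15).

```python
# Plate-glass example: A = 300 cents per sheet, tolerance 2.0 mm.
A = 300.0
delta = 2.0
dev = [0.3, 0.6, 0.5, 0.0, 0.2, 0.8, 0.2, 0.0, 1.0, 1.2,
       0.8, 0.6, 0.9, 1.1, 0.5, 0.2, 0.0, 0.3, 1.3, 0.8]
n = len(dev)

ST = sum(y ** 2 for y in dev)        # total sum of squares, 9.59
var = ST / n                         # mean-squared error, eq. (1.10)
L = A / delta ** 2 * var             # loss before adjustment, eq. (1.11)

Sm = 2.66                            # mean effect, taken from Table 1.1
Ve = (ST - Sm) / (n - 1)             # error variance after adjustment
L_adj = A / delta ** 2 * Ve          # loss after adjustment, eq. (1.15)
```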
Example

Now suppose that the tolerance standard for roundness is 12 μm or less and that the loss A is 80 cents when a product is disqualified. We produce the product with machines A1 and A2. Roundness data for this product, taken twice a day (morning and afternoon) over a two-week period, give a mean-squared roundness of

σ² = 23.4 μm²    (1.17)

Substituting this into the loss function gives

L = (A/Δ²)σ² = (80/12²)(23.4) = 13 cents    (1.18)
Example
Now we define y as a larger-the-better characteristic and calculate a loss function. Suppose that the strength of a three-layer reinforced rubber hose is important:

K1: adhesion between tube rubber and reinforcement fiber
K2: adhesion between reinforcement fiber and surface rubber

Both are crucial factors for the rubber hose. For both K1 and K2, a lower limit of 5.0 kgf is specified. When hoses that do not meet this standard are discarded, the loss is $5 per hose. In addition, the annual production volume is 120,000. After prototyping eight hoses for each of two different-priced adhesives, A1 (50 cents) and A2 (60 cents), we measured the adhesion strengths of K1 and K2, as shown in Table 1.2.
Compare the quality levels of A1 and A2. We wish to increase their averages and reduce their variability. The scale for both criteria is equation (1.9), an average of the sum of squared reciprocals. After calculating σ² for A1 and A2, respectively, we compute the loss functions expressed by (1.8). For A1,

σ1² = (1/16)(1/10.2² + 1/5.8² + ··· + 1/16.5²) = 0.02284    (1.19)

L1 = (500)(5²)(0.02284) = 285.5 cents = $2.86    (1.20)

For A2,

σ2² = (1/16)(1/7.6² + 1/13.7² + ··· + 1/10.6²) = 0.01139    (1.21)

L2 = (500)(5²)(0.01139) = 142.4 cents = $1.42    (1.22)

Therefore, even though A2 is 10 cents more costly than A1 in terms of adhesion cost (the sum of adhesive price and adhesion operation cost), A2 is more cost-effective than A1 by

(50 + 285.5) − (60 + 142.4) = 133.1 cents ≈ $1.33    (1.23)
Table 1.2
Adhesion strength data

A1  K1:  10.2   5.8   4.9  16.1  15.0   9.4   4.8  10.1
A1  K2:  14.6  19.7   5.0   4.7  16.8   4.5   4.0  16.5
A2  K1:   7.6  13.7   7.0  12.8  11.8  13.7  14.8  10.4
A2  K2:   7.0  10.1   6.8  10.0   8.6  11.2   8.3  10.6
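Equations (1.19)-(1.23) can be verified directly from Table 1.2; this sketch recomputes the two losses and the net saving.

```python
# Larger-the-better comparison of adhesives A1 and A2 (Table 1.2):
# sigma^2 is the average of squared reciprocals, eq. (1.9), and the
# loss is L = A * Delta^2 * sigma^2, eq. (1.8).
A = 500.0     # loss per discarded hose, cents
delta = 5.0   # lower strength limit, kgf

a1 = [10.2, 5.8, 4.9, 16.1, 15.0, 9.4, 4.8, 10.1,    # K1
      14.6, 19.7, 5.0, 4.7, 16.8, 4.5, 4.0, 16.5]    # K2
a2 = [7.6, 13.7, 7.0, 12.8, 11.8, 13.7, 14.8, 10.4,  # K1
      7.0, 10.1, 6.8, 10.0, 8.6, 11.2, 8.3, 10.6]    # K2

def loss(data):
    sigma2 = sum(1 / y ** 2 for y in data) / len(data)  # eq. (1.9)
    return A * delta ** 2 * sigma2                      # eq. (1.8), cents

L1, L2 = loss(a1), loss(a2)
# Total cost per hose = adhesive price + quality loss, in cents.
saving = (50 + L1) - (60 + L2)   # about 133 cents, i.e., $1.33
```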
Responsibility of Each Department for Quality
The quality assurance department should be responsible for unknown items and complaints in the marketplace. Two of the largest-scale incidents of pollution after World War II were the arsenic poisoning of milk by company M and the PCB poisoning of cooking oil by company K. (We do not discuss Minamata disease here because it was caused during production.) In the arsenic poisoning incident, many infants were killed after drinking milk contaminated with arsenic. If company M had been afraid that arsenic would be mixed into its milk, and had therefore produced the milk and then inspected it for arsenic, no one would have bought its products. This is because a company worrying about mixtures of arsenic and milk cannot be trusted.

The case of PCB poisoning is similar. In general, a very large pollution event is not predictable, and furthermore, prediction is not necessary. Shortly after World
Responsibilities of the Quality Assurance Department
References

1. Genichi Taguchi, 1989. Quality Engineering in Manufacturing. Tokyo: Japanese Standards Association.
2. Japanese Industrial Standards Committee, 1996. Quality Characteristics of Products, JIS Z-8403. Tokyo: Japanese Standards Association.
Management for Quality Engineering

Planning Strategies
Manufacturers plan products in parallel with variations in those products. If possible, they attempt to offer whatever customers wish to purchase: in other words,
made-to-order products. Toyota is said to be able to deliver a car to a customer
within 20 days after receipt of an order, with numerous variations in models, appearance, or navigation system. For typical models, they are prepared to deliver
several variations; however, it takes time to respond to millions of variations. To
achieve this, a production engineering department ought to design production
processes for the effective production of high-mix, low-volume products, which
can be considered rational processes.
On the other hand, there are products whose functions only are important: for
example, invisible parts, units, or subsystems. They need to be improved with regard to their functions only.
Selection of
Systems: Concepts
Engineers are regarded as specialists who offer systems or concepts with objective functions. All means of achieving such goals are artificial. Because of this artificiality, systems and concepts can be used exclusively only within a certain period protected by patents. From the quality engineering viewpoint, we believe that the more complex systems are, the better they become. For example, the transistor was originally a device invented for amplification. Yet, due to its simplicity, a transistor by itself cannot reduce variability. As a result, an amplifier put into practical use as a circuit, the op-amp, is some 20 times more complicated than a single transistor.
As addressed earlier, a key issue is what rationalization of design and development can improve the following two functionalities: (1) design and development focusing on an objective function under standard conditions, and (2) design and evaluation of robustness to keep a function invariable over a full period of use under various conditions in the market. To improve these functionalities, designers should be able to change as many parameters (design constants) as possible. More linear and complex systems can bring greater improvement. Figure 2.1 exemplifies transistor oscillator design by Hewlett-Packard engineers, who in choosing the design parameters for a transistor oscillator selected the circuit shown in the figure. This example is regarded as a functional design by simulation based on conceptualization. Through an L108 orthogonal array containing 38 control factors, impedance stability was studied.
Top management is also responsible for guiding each project manager in an engineering department to implement research and development efficiently. One practical means is to encourage each manager to create as complicated a system as possible. In fact, Figure 2.1 includes a vast number of control factors. Because a complex system contains a simpler one, we can bring a target function closer to an ideal one.
The approach discussed so far was introduced in the technical journal Nikkei Mechanical on February 19, 1996 [2]. An abstract of the article follows.
The LIMDOW (light intensity modulation direct overwrite) disk, developed by Tetsuo Hosokawa, chief engineer of the Business Development Department, Development Division, Nikon Corp., together with Hitachi Maxell, is a typical example that has successfully escaped from a vicious cycle of tune-ups by taking advantage of the Taguchi Method. LIMDOW is a next-generation magneto-optical (MO) disk that can be both read and rewritten. Its writing speed is twice as fast as a conventional one because we
Figure 2.1
Transistor oscillator

Functionality (Performance) Evaluation, Signals, and Noise

Evaluation of Reproducibility
To evaluate functionality under conditions of mass production or various applications by means of test pieces, downsized prototypes, or limited flexibility (a life test should be completed in less than one day; a noise test is limited to only two conditions) in a laboratory, the signals and noises expected to occur in the market, as discussed above, should be taken into account.
However, designers optimize a system by changing design constants called control factors, which are not conditions of use but parameters that can be altered freely by designers (including the selection of both systems and parameter levels). Even if we take advantage of functionality evaluation, if signal factors, noises, and measurement characteristics are selected improperly, optimal conditions sometimes cannot be determined. Whether optimal conditions can be determined depends on the evaluation of reproducibility downstream (under conditions of mass production or various uses in the market). Therefore, in the development phase, development engineers should design parameters using an orthogonal array to assess reproducibility. Alternatively, at the completion of development, other evaluation departments, as a management group, should assess functionality using benchmarking techniques. The former approach is desirable; however, the latter is expected to bring a paradigm shift that stimulates designers to use SN ratios.
cost of about $300,000. Because the life test was conducted once a week and
took almost a week to complete (this is called time lag), in the meantime a certain
number of engines were mounted on various helicopters. However, since the life
test at a plant can detect problems much faster than customers in the market can,
we did not consider problems of loss of life due to crashes.
As noted earlier, the inspection expense B is $25,000, and the mean failure interval u (including failures both in the marketplace and at the plant) was estimated to be once in a few years; we judged that they had one failure for every 2500 engines, and so set u to 2500 units. (Even if these parameters deviate from the actual values, they do not have a significant impact on the inspection design.) For u = 2500, we can calculate the optimal inspection interval n using the equation that follows. The proof of this equation is given in Chapter 6 of Reference 4. For the sake of convenience, $100 is chosen as the monetary unit, so that B = 250 and the loss per failure A = 3000:

n = √(2uB/A) = √((2)(2500)(250)/3000) = 20.4 engines    (2.1)
This happens to be consistent with their inspection frequency: for a failure occurring once in three years, a life test should be done weekly. We were extremely impressed by their method, fostered through long-time experience.

We answered that the current frequency was best and that if it were lowered to every 50 engines, the company would lose more. We added that if they had evidence that the incidence had declined from once in two or three years to once in a decade, they could conduct the life test only once for every 50 engines.
In fact, if a failure happens once in a decade, the mean failure interval is u = 10,000 units:

n = √((2)(10,000)(250)/3000) = 41 engines    (2.2)

This number differs from 50 units by less than 20%; therefore, we can select n = 50.
On the other hand, when we keep the current u unchanged and alter n from 25 to 50, we experience a cost increase. First, in the case of n = 25, with a time lag of l = 25 engines, the loss L can be calculated as follows:

L = B/n + ((n + 1)/2)(A/u) + l(A/u)
  = 250/25 + ((25 + 1)/2)(3000/2500) + (25)(3000)/2500
  = 10 + 15.6 + 30.0 = 55.6 (× $100)    (2.3)
For n = 50,

L = 250/50 + ((50 + 1)/2)(3000/2500) + (25)(3000)/2500
  = 5 + 30.6 + 30.0 = 65.6 (× $100)    (2.4)
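The interval and loss formulas used in (2.1), (2.3), and (2.4) can be sketched as functions. The function names are mine; the figures (in $100 units) come from the helicopter-engine example.

```python
import math

# B: cost of one life test, A: loss per failure, u: mean failure
# interval (units), l: time lag (units produced while a test runs).
def optimal_interval(u, B, A):
    return math.sqrt(2 * u * B / A)                  # eq. (2.1)

def loss_per_unit(n, u, B, A, l):
    # eq. (2.3): test cost + expected in-interval loss + lag loss
    return B / n + (n + 1) / 2 * A / u + l * A / u

# Helicopter-engine figures in $100 units.
n_opt = optimal_interval(2500, 250, 3000)            # about 20.4 engines
L25 = loss_per_unit(25, 2500, 250, 3000, 25)         # eq. (2.3)
L50 = loss_per_unit(50, 2500, 250, 3000, 25)         # eq. (2.4)
```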
Example

For one particular product, failures leading to recall are supposed to occur only once a decade or once a century. Suppose that a manufacturer producing 1 billion units of 600 product types yearly experiences eight recalls in a year. Its mean failure interval u is

u = 1,000,000,000/8 = 125,000,000 units    (2.5)
With a life test cost of B = $50 and a loss per failure of A = $4, the optimal inspection interval is

n = √(2uB/A) = √((2)(125,000,000)(50)/4) ≈ 56,000 units    (2.6)
Now, taking into account the annual production volume of 1.2 million per product type and assuming that there are 48 weeks in a year, we produce the following number of products weekly:

1,200,000/48 = 25,000 units    (2.7)

Therefore, the 56,000 units in (2.6) mean that the life test is conducted once every other week. The mean failure interval of 125,000,000 units indicates that a failure occurs almost once in

125,000,000/1,200,000 ≈ 104 years    (2.8)
In short, even if a failure of a certain product takes place only once a century, we should test the product biweekly.

What happens if we cease to run such a life test? In that case, a defect is detected after the product is in use, and the product is recalled. Inspection by the user costs nothing. However, in general, there is a time lag of a few months until detection. Supposing it to be about two months, or nine weeks, the time lag l is

l = (25,000)(9) = 225,000 units    (2.9)
Although in most cases the loss A for each defective product found after shipping is larger than that when a defect is detected in-house, we suppose that both are equal. The reasoning is that A will not become very large on average, because we can take appropriate technical measures to prevent a larger-scale problem in the marketplace immediately after one or more defective products are found in the plant.

If we wait until failures turn up in the marketplace, B = 0 and n = 1, because consumers check all products; this can be regarded as screening, or 100% inspection. Based on product recall, the loss L0 is

L0 = B/n + ((n + 1)/2)(A/u) + l(A/u)    (2.10)
   = 0 + ((1 + 1)/2)(4/125,000,000) + (225,000)(4)/125,000,000
   ≈ 0.0072 dollar    (2.11)

For an annual production volume of 1.2 million, the annual loss per product type is

(0.0072)(1,200,000) = $8640    (2.12)
If the life test is run, B = $50, n = 50,000 units (biweekly), and the time lag of the test itself is one week, l = 25,000 units:

L = 50/50,000 + ((50,000 + 1)/2)(4/125,000,000) + (25,000)(4)/125,000,000
  = 0.0010 + 0.0008 + 0.0008 = 0.0026 dollar    (2.13)

For an annual production volume of 1.2 million, the annual total loss is $3120:

(0.0026)(1,200,000) = $3120    (2.14)
The annual gain per product type from running the life test is therefore

8640 − 3120 = $5520    (2.15)

If we can expect this amount of gain in all 600 product types, as compared to the case of no life test, we can save

(5520)(600) = $3,312,000    (2.16)
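The recall example can be re-derived with the same loss formula. B = $50 and A = $4 are my readings of the garbled figures, chosen because they reproduce the 0.0072, 0.0026, and $5520 values in the surrounding equations.

```python
# Loss per unit with and without the life test, in dollars.
u = 125_000_000     # mean failure interval, units
A = 4.0             # loss per defective, dollars (reconstructed value)
volume = 1_200_000  # annual production per product type

# No life test: B = 0, n = 1, lag l = 225,000 units (nine weeks).
L0 = (1 + 1) / 2 * A / u + 225_000 * A / u
# Biweekly life test: B = $50, n = 50,000 units, lag l = 25,000 units.
L = 50 / 50_000 + (50_000 + 1) / 2 * A / u + 25_000 * A / u

gain = (L0 - L) * volume   # annual gain per product type
```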
As for the value of a life test, only the loss in the case of no life test increases. Putting A0 = $12.89 into A of equation (2.10), the loss L0 is calculated to be 0.0232 dollar. Compared with the loss when a life test is used, L = 0.0026 dollar, the loss increases by

0.0232 − 0.0026 = 0.0206 dollar    (2.19)
Functions of the Quality Assurance Department
References

1. Sun Tzu, 2002. The Art of War. Dover Publications.
2. Nikkei Mechanical, February 19, 1996.
3. Genichi Taguchi, 1984. Parameter Design for New Product Development. Tokyo: Japanese Standards Association.
4. Genichi Taguchi, 1989. Quality Engineering Series, Vol. 2. Tokyo: Japanese Standards Association.
5. Japanese Industrial Standards Committee, 1996. Quality Characteristics of Products, JIS Z-8403. Tokyo: Japanese Standards Association.
Quality Engineering: Strategy in Research and Development
duct the life test of a product under varied marketplace conditions, the strategy of using an SN ratio combined with noise, as illustrated in this chapter, has fundamentally changed the traditional approaches. The following strategies are employed:

1. Noise factors are assigned for each level of each design parameter and compounded, because what happens due to the environment and deterioration is unpredictable. For each noise factor, only two levels are sufficient. As a first step, we evaluate robustness using two noise levels.
2. The robust level of a control factor (the one having a higher SN ratio) does not change at different signal factor levels. This is advantageous in the case of a time-consuming simulation, such as one using the finite element method, because it reduces the number of combinations in the study and improves robustness using only the first two or three levels of the signal factor.
3. Tuning to the objective function can be done after robustness is improved, using one or two control or signal factors. Tuning is done under the standard condition. To do so, orthogonal expansion is performed to find the candidates that affect the linear coefficient β1 and the quadratic coefficient β2.
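A minimal numerical sketch of the SN-ratio evaluation behind strategy 1, with made-up data: outputs are measured at three signal levels under two compounded noise levels N1 and N2, a least-squares slope β is fitted through the origin, and the SN ratio compares β² with the residual variance. This is a simplified form of the dynamic SN ratio, not the handbook's exact definition.

```python
import math

# Illustrative outputs at signal levels M under two compounded noise
# conditions (not data from the text).
M = [1.0, 2.0, 3.0]
y_N1 = [0.98, 2.02, 2.95]
y_N2 = [1.10, 2.21, 3.28]

ys = y_N1 + y_N2
Ms = M + M
r = sum(m * m for m in Ms)
beta = sum(m * y for m, y in zip(Ms, ys)) / r     # zero-intercept slope
Ve = sum((y - beta * m) ** 2 for m, y in zip(Ms, ys)) / (len(ys) - 1)
eta = 10 * math.log10(beta ** 2 / Ve)             # SN ratio, dB
```

A design condition whose η stays high across the orthogonal-array runs is robust against the compounded noise; tuning β to the target is deferred to the second stage.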
In quality engineering, all conditions of use in the market can be categorized into
either a signal or a noise. In addition, a signal can be classied as active or passive.
The former is a variable that an engineer can use actively, and it changes output
characteristics. The latter is, like a measurement instrument or receiver, a system
that measures change in a true value or a signal and calculates output. Since we
can alter true values or oscillated signals in research, active and passive signals do
not need to be differentiated.
What is most important in two-stage optimization is that we not use or look for cause-and-effect relationships between control and noise factors. Suppose that we test circuits (e.g., logic circuits) under standard conditions and design a circuit with an objective function, as was done at Bell Labs. Afterward, under 16 different conditions of use, we perform a functional test. Quality engineering usually recommends that only two noise conditions be tested; however, this is only because of cost. The key point is that we should not adjust (tune) a design condition to meet the target when the noise condition changes. For tuning, a cause-and-effect relationship under the standard condition needs to be used.

We should not adjust for deviations caused by changes in the conditions of use through the model or regression relationship, because it is impossible to dispatch operators, as in manufacturing, to make adjustments for countless conditions of use. Quality engineering proposes that adjustments be made only in production processes, which are considered standard conditions, and that countermeasures for noises utilize the interactions between noise and control factors.
Variability (Standard SN Ratio) Improvement: First Step in Quality Engineering

TESTING METHOD AND DATA ANALYSIS

In designing hardware (including a system), let the signal factor used by customers be M and its ideal output (target) be m, with the relationship written as m = f(M). In two-stage design, control factors (or indicative factors) are assigned to an orthogonal array. For each run of the orthogonal array, three types of outputs are obtained, either from experimentation or from simulation: under standard conditions N0, under negative-side compounded noise conditions N1, and under positive-side compounded noise conditions N2.
(3.1)
is regarded as an ideal function except for its coefcient. Nevertheless, the ideal
function is not necessarily a proportional equation, such as equation (3.1), and
can be expressed in various types of equations. Under an optimal SN ratio condition, we collect output values under standard conditions N0 and obtain the
following:
M (signal factor):   M1   M2   ...   Mk
m (target value):    m1   m2   ...   mk
y (output value):    y1   y2   ...   yk
Although the data in N1 and N2 are available, they are not needed for adjustment; they are used only for examining reproducibility.
If a target value m is proportional to a signal M for any value of β, the analysis can be made with the level values of the signal factor M. An analysis procedure with a signal M is the same as that with a target value m. Therefore, we show the common method of adjustment calculation using a target value m, which is considered the Taguchi method for adjustment.
Table 3.1
Data for SN ratio analysis

Condition   Linear Equation
N0          —
N1          L1
N2          L2
If the outputs y1, y2, ..., yk under the standard conditions N0 match the target values m1, m2, ..., mk, we do not need any adjustment. In addition, if y is proportional to m with sufficient accuracy, there are various ways of adjusting the linear coefficient to the target value of 1. Some of the methods are the following:
1. We use the signal M. By changing signal M, we attempt to match the target value m with y. A typical case is proportionality between M and m. For example, if y is constantly 5% larger than m, we can calibrate M by multiplying it by 0.95. In general, by adjusting M's levels, we can match m with y.
2. By tuning one control factor level (sometimes two or more control factor levels) in a proper manner, we attempt to match the linear coefficient β1 with 1.
3. Design constants other than the control factors selected in simulation may
be used.
The difficult part in adjustment is not the coefficients but the deviations from a proportional equation. Below we detail a procedure of adjustment that includes deviations from a proportional equation. To achieve this, we make use of orthogonal expansion, which is addressed in the next section. Adjustment for other than proportional terms is regarded as a significant technical issue to be solved in the future.
FORMULA OF ORTHOGONAL EXPANSION
The formula of orthogonal expansion for adjustment usually begins with a proportional term. Up to the third-order term, the formula is

y = β1m + β2(m² − (K3/K2)m)
      + β3[m³ − ((K2K5 − K3K4)/(K2K4 − K3²))m² − ((K4² − K3K5)/(K2K4 − K3²))m]   (3.2)

where

Ki = (1/k)(m1^i + m2^i + ··· + mk^i)   (i = 2, 3, ...)   (3.3)

K2, K3, ... are constants because the m's are given. In most cases, third- and higher-order terms are unnecessary; that is, it is sufficient to calculate up to the second-order term. We do not derive this formula here.
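The constants in equation (3.3) are simple power sums. As a minimal sketch of ours (not from the original text), the following evaluates K2 and K3 for the five target values used in the printer example later in this section, and forms the second-order coefficients wi = mi² − (K3/K2)mi that appear in equation (3.12):

```python
# Orthogonal-expansion constants K_i (equation 3.3) and the
# second-order coefficients w_i = m_i^2 - (K3/K2)*m_i, computed for
# the five target values m of the printer color-shift example.
m = [2.685, 5.385, 8.097, 10.821, 13.555]
k = len(m)

K2 = sum(mi**2 for mi in m) / k   # (1/k) * sum of m_i^2
K3 = sum(mi**3 for mi in m) / k   # (1/k) * sum of m_i^3

w = [mi**2 - (K3 / K2) * mi for mi in m]  # coefficients of the quadratic term

print(round(K2, 6))   # 80.520185
print(round(K3, 6))   # 892.801297
```

The printed values of K2 and K3 agree with equations (3.9) and (3.10) in the example.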
Accordingly, after making an orthogonal expansion of the linear, quadratic, and cubic terms, we create an ANOVA (analysis of variance) table as shown in Table 3.2, which is not for an SN ratio but for tuning. If we can tune up to the cubic term, the error variance becomes Ve. The loss before tuning,

L0 = (A0/Δ0²)VT

is reduced to

L = (A0/Δ0²)Ve

after tuning.

Table 3.2
ANOVA table for tuning

Source   f       S     V
β1       1       S1
β2       1       S2
β3       1       S3
e        k − 3   Se    Ve
Total    k       ST    VT

When only the linear term is used for tuning, the error variance is expressed as follows:

Ve = [1/(k − 1)](S2 + S3 + Se)
Example
This design example, introduced by Oki Electric Industry at the Eighth Taguchi
Symposium held in the United States in 1990, had an enormous impact on research and design in the United States. However, here we analyzed the average
values of N1 and N2 by replacing them with data of N0 and the standard SN ratio
based on dynamic characteristics. Our analysis is different from that of the original
report, but the result is the same.
A function of the color shift mechanism of a printer is to guide four-color ribbons
to a proper position for a printhead. A mechanism developed by Oki Electric Industry
in the 1980s guides a ribbon coming from a ribbon cartridge to a correct location
for a printhead. Ideally, rotational displacement of the ribbon cartridge's tip should match linear displacement of a ribbon guide. However, since there are two intermediate links for movement conversion, the two displacements differ. If this difference exceeds 1.45 mm, misfeeds such as ribbon fray (a ribbon frays as a printhead hits the edge of a ribbon) or mixed color (a printhead hits the wrong color band next to a target band) happen, which can damage the function. In this case, the functional limit Δ0 is 1.45 mm. A signal factor is set to a rotational angle M, whose corresponding target values are as follows:

M (deg):          1.3     2.6     3.9     5.2     6.5
Target m (mm):    2.685   5.385   8.097   10.821  13.555
45
For parameter design, 13 control factors are selected in Table 3.3 and assigned
to an L27 orthogonal array. It is preferable that they should be assigned to an L36
array. In addition, noise factors should be compounded together with control factors
other than design constants.
Since variability caused by conditions regarding manufacturing or use exists in
all control factors, if we set up noise factors for each control factor, the number of
noise factors grows, and accordingly, the number of experiments also increases.
Therefore, we compound noise factors. Before compounding, to understand noise
factor effects, we check how each factor affects an objective characteristic of displacement by fixing all control factor levels at level 2. The result is illustrated in
Table 3.4.
By taking into account an effect direction for each noise factor on a target position, we establish two compounded noise factor levels, N1 and N2, by selecting
the worst configuration on both the negative and positive sides (Table 3.5). For
some cases we do not compound noise factors, and assign them directly to an
orthogonal array.
According to Oki research released in 1990, after establishing stability for each
rotational angle, the target value for each angle was adjusted using simultaneous
equations. This caused extremely cumbersome calculations. By resetting an output value for each rotational angle under standard conditions as a new signal M,
we can improve the SN ratio for output under noises. Yet since we cannot obtain
a program for the original calculation, we substitute an average value for each angle
at N1 and N2 for the standard output value N0. This approach holds true only when the output values at N1 and N2 deviate from the standard value by the same absolute amount in opposite directions. An SN ratio that uses an output
value under a standard condition as a signal is called a standard SN ratio, and one
substituting an average value is termed a substitutional standard SN ratio or average standard SN ratio.
Table 3.3
Factors and levels*

Factor   Level 1   Level 2   Level 3
A        6.0       6.5       7.0
B        31.5      33.5      35.5
C        31.24     33.24     35.24
D        9.45      10.45     11.45
E        2.2       2.5       2.8
F        45.0      47.0      49.0
G        7.03      7.83      8.63
H        9.6       10.6      11.6
I        80.0      82.0      84.0
J        80.0      82.0      84.0
K        23.5      25.5      27.5
L        61.0      63.0      65.0
M        16.0      16.5      17.0

*The unit for all levels is millimeters, except for F (angle), which is radians.
Table 3.4
Qualitative effect of each noise factor (A through M) on the objective function

By this check, the direction (+ or −) of each factor's effect on the displacement is identified. For experiment 1 (Table 3.6), the standard SN ratio is

η = 10 log [(1/2r)(Sβ − Ve)/VN] = 3.51 dB   (3.4)
Table 3.5
Two levels of the compounded noise factor

(For each factor A through M, the levels N1 and N2 are deviations of magnitude 0.05, 0.1, 0.15, or 0.5 from nominal, with the signs set according to the effect directions in Table 3.4.)
Table 3.6
Data of experiment 1

Rotational angle (deg):   1.3     2.6     3.9     5.2     6.5     Linear Equation
Standard condition N0:    3.255   6.245   9.089   11.926  14.825
N1:                       2.914   5.664   8.386   11.180  14.117  L1 = 463.694309
N2:                       3.596   6.826   9.792   12.672  15.533  L2 = 524.735835
zation and noise selection in simulation. However, quite often in the United States,
instead of compounding noise factors, they assign them to an outer orthogonal
array. That is, after assigning three levels of each noise factor to an L27 orthogonal
array, they run a simulation for each combination (direct product design) formed
by them and control factor levels.
Analysis for Tuning
At the optimal SN ratio condition in simulation, shown in the preceding section, the data at N0 and the average values at both N1 and N2 are calculated as follows:

Signal factor M (deg):   1.3     2.6     3.9     5.2     6.5
Target value m:          2.685   5.385   8.097   10.821  13.555
Average output y:        2.890   5.722   8.600   11.633  14.948
These data are plotted in Figure 3.2. Looking at Figure 3.2, we notice that each (standard) output value at the optimal condition deviates by a roughly constant ratio from the corresponding target value, and this tendency makes the linear coefficient of the output differ from 1. Now, since the coefficient β1 is more than 1, instead of adjusting β1 to 1, we calibrate M; that is, we alter the signal M to M* [equation (3.5)].
Table 3.7
Decomposition of total variation

Source   f     S            V
β        1     988.430144   988.430144
N×β      1     3.769682     3.769682
e        8     0.241979     0.030247
(N)      (9)   (4.011662)   (0.445740)
Total    10    992.441806
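The decomposition in Table 3.7 and the η of equation (3.4) can be reproduced directly from the Table 3.6 data. The sketch below is our own illustration (not from the original report); it uses the standard-condition outputs as the signal, as the text prescribes:

```python
import math

# Table 3.6: outputs at the five rotational angles.
M  = [3.255, 6.245, 9.089, 11.926, 14.825]   # standard condition N0 (signal)
y1 = [2.914, 5.664, 8.386, 11.180, 14.117]   # negative-side noise N1
y2 = [3.596, 6.826, 9.792, 12.672, 15.533]   # positive-side noise N2

r  = sum(Mi**2 for Mi in M)                  # effective divider
L1 = sum(Mi*yi for Mi, yi in zip(M, y1))     # linear equation for N1
L2 = sum(Mi*yi for Mi, yi in zip(M, y2))     # linear equation for N2

ST  = sum(v**2 for v in y1 + y2)             # total variation (f = 10)
Sb  = (L1 + L2)**2 / (2*r)                   # proportional term (f = 1)
SNb = (L1 - L2)**2 / (2*r)                   # N x beta interaction (f = 1)
Se  = ST - Sb - SNb                          # error (f = 8)
Ve  = Se / 8
VN  = (SNb + Se) / 9                         # pooled noise variance

eta = 10 * math.log10((Sb - Ve) / (2*r) / VN)   # standard SN ratio, eq. (3.4)
print(round(eta, 2))   # 3.51
```

The linear equations evaluate to L1 = 463.694309 and L2 = 524.735835, matching Table 3.6, and the decomposition matches Table 3.7.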
Table 3.8
Factor assignment and SN ratio based on average value

No.   η (dB)    No.   η (dB)    No.   η (dB)
1     3.51      10    3.42      19    5.62
2     2.18      11    2.57      20    3.26
3     0.91      12    0.39      21    2.40
4     6.70      13    5.82      22    2.43
5     1.43      14    6.06      23    3.77
6     3.67      15    5.29      24    1.76
7     5.15      16    3.06      25    3.17
8     8.61      17    2.48      26    3.47
9     2.76      18    3.86      27    2.51
Table 3.9
Level totals of SN ratio

Factor   Level 1   Level 2   Level 3
A        34.92     27.99     20.01
B        29.35     27.23     26.34
C        24.26     28.55     30.11
D        28.67     27.21     27.07
E        10.62     27.33     44.97
F        37.38     28.48     17.06
G        32.92     27.52     22.48
H        32.94     26.59     23.39
I        34.02     28.87     20.03
J        16.10     27.78     39.04
K        35.40     27.67     19.85
L        22.06     26.63     34.23
M        24.66     28.12     30.14
Figure 3.1
Factor effect plots of standard SN ratio, linear coefficient β1, and quadratic coefficient β2
Table 3.10
Estimation and confirmation of optimal SN ratio (dB)

                     Estimation by All Factors   Confirmatory Calculation
Optimal condition    12.50                       14.58
Initial condition    3.15                        3.54
Gain                 9.35                        11.04
M* = (1/β1)M   (3.5)

Figure 3.2
Standard output values at the optimal SN ratio condition, plotted against the target values
ST = y1² + y2² + ··· + y5² = 473.812874   (f = 5)   (3.6)

Sβ = (m1y1 + m2y2 + ··· + m5y5)²/(m1² + m2² + ··· + m5²)
   = [(2.685)(2.890) + ··· + (13.555)(14.948)]²/(2.685² + ··· + 13.555²)
   = 473.695042   (3.7)

β1 = (m1y1 + m2y2 + ··· + m5y5)/(m1² + m2² + ··· + m5²)
   = 436.707653/402.600925
   = 1.0847   (3.8)
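The numbers in equations (3.7) and (3.8), and the calibration of equation (3.5), follow in a few lines. This is our own sketch, not part of the original text:

```python
# Linear coefficient beta_1 (equation 3.8) and the calibrated signal
# M* = M / beta_1 (equation 3.5) for the printer color-shift example.
m = [2.685, 5.385, 8.097, 10.821, 13.555]   # target values
y = [2.890, 5.722, 8.600, 11.633, 14.948]   # average outputs at N0

num   = sum(mi*yi for mi, yi in zip(m, y))  # 436.707653
den   = sum(mi**2 for mi in m)              # 402.600925
beta1 = num / den

M = [1.3, 2.6, 3.9, 5.2, 6.5]               # original signal levels (deg)
M_star = [Mi / beta1 for Mi in M]           # calibrated signal levels

print(round(beta1, 4))   # 1.0847
```

After calibration, outputs measured at the M* levels are expected to fall on the target values to first order.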
K2 = (1/5)(m1² + m2² + ··· + m5²) = 80.520185   (3.9)

K3 = (1/5)(m1³ + m2³ + ··· + m5³) = 892.801297   (3.10)

Thus, the second-order term can be calculated according to the expansion equation (3.2) as follows:

β2(m² − (K3/K2)m) = β2[m² − (892.801297/80.520185)m] = β2(m² − 11.0897m)   (3.11)

The coefficients of the second-order term are then

wi = mi² − 11.0897mi   (i = 1, 2, ..., 5)   (3.12)
Now we obtain
w1 = 2.685² − (11.0897)(2.685) = −22.567   (3.13)

with w2 through w5 obtained in the same way [equations (3.14) to (3.17)].
Using these coefficients w1, w2, ..., w5, we compute the linear equation of the second-order term, L2:

L2 = w1y1 + w2y2 + ··· + w5y5 = (−22.567)(2.890) + ··· + (33.477)(14.948) = 16.2995   (3.18)

Then, to calculate the variation of the second-order term S2, we compute r2, the sum of the squared coefficients w1, w2, ..., w5 in the linear equation L2:

r2 = w1² + w2² + ··· + w5² = (−22.567)² + ··· + 33.477² = 3165.33   (3.19)

By dividing the square of L2 by this result, we arrive at the variation of the second-order term S2. That is, we compute the variation of a linear equation by dividing its square by the number of units (the sum of squared coefficients). Then we have

S2 = 16.2995²/3165.33 = 0.083932   (3.20)

β2 = L2/r2 = 16.2995/3165.33 = 0.005149   (3.21)
Therefore, after optimizing the SN ratio and computing the output data under the standard condition N0, we arrive at Table 3.11 for the analysis of variance, which compares target values and output values. This step of decomposing the total variation to compare the target function and the output is quite significant before adjustment (tuning), because we can predict adjustment accuracy prior to tuning.

Economic Evaluation of Tuning
Unlike evaluation of an SN ratio, economic evaluation of tuning is absolute, because we can make an economic assessment prior to tuning. In this case, considering that the functional limit Δ0 is 1.45 mm, we can express the loss function L as follows:
Table 3.11
ANOVA for tuning (to quadratic term)

Source   f     S            V
β1       1     473.695042
β2       1     0.083932
e        3     0.033900     0.011300
(e)      (4)   (0.117832)   (0.029458)
Total    5     473.812874

(The parenthesized row pools β2 into the error for the case where only the linear term is tuned.)

L = (A0/Δ0²)σ² = (A0/1.45²)σ²   (3.22)

When only the linear term is tuned, the error variance is

σ1² = (S2 + Se)/4 = (0.083932 + 0.033900)/4 = 0.029458   (3.23)

and when tuning is carried to the quadratic term,

σ2² = 0.001130   (3.24)
Plugging these into equation (3.22), where A0 = $300, we can obtain the absolute loss. For the case of tuning only the linear term, the loss is

L1 = (300/1.45²)(0.029458) = $4.20   (3.25)

and for tuning up to the quadratic term,

L2 = (300/1.45²)(0.001130) = $0.16 (16 cents)   (3.26)

Thus, if we tune to the quadratic term, as compared with tuning only the linear term, we save

$4.20 − $0.16 ≈ $4.04   (3.27)
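The arithmetic of equations (3.25) to (3.27) can be checked with a few lines (our sketch, with A0 = $300 and Δ0 = 1.45 mm as in the text):

```python
# Quality loss L = (A0 / D0^2) * sigma^2, equation (3.22).
A0 = 300.0     # loss at the functional limit, dollars
D0 = 1.45      # functional limit Delta_0, mm

sigma1_sq = 0.029458   # variance when tuning only the linear term
sigma2_sq = 0.001130   # variance when tuning up to the quadratic term

L1 = A0 / D0**2 * sigma1_sq
L2 = A0 / D0**2 * sigma2_sq

print(f"{L1:.2f}")        # 4.20  (dollars)
print(f"{L2:.2f}")        # 0.16  (dollars)
print(f"{L1 - L2:.2f}")   # 4.04  (savings from quadratic-term tuning)
```

Because the loss function is linear in σ², the dollar comparison needs no reference condition; this is what makes the evaluation "absolute."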
Table 3.12
Level averages of linear and quadratic coefficients

         Linear coefficient β1            Quadratic coefficient β2
Factor   Level 1   Level 2   Level 3      Level 1   Level 2   Level 3
A        1.0368    1.0427    1.0501       0.0079    0.0094    0.0107
B        1.0651    1.0472    1.0173       0.0095    0.0094    0.0091
C        1.0653    1.0387    1.0255       0.0124    0.0091    0.0064
D        1.0435    1.0434    1.0427       0.0083    0.0093    0.0104
E        1.0376    1.0441    1.0478       0.0081    0.0094    0.0105
F        1.0449    1.0434    1.0412       0.0077    0.0093    0.0109
G        1.0654    1.0456    1.0185       0.0100    0.0092    0.0088
H        1.0779    1.0427    1.0089       0.0090    0.0092    0.0098
I        1.0534    1.0430    1.0331       0.0094    0.0093    0.0093
J        1.0271    1.0431    1.0593       0.0076    0.0093    0.0111
K        1.0157    1.0404    1.0734       0.0083    0.0094    0.0103
L        1.0635    1.0405    1.0256       0.0113    0.0095    0.0073
M        1.0295    1.0430    1.0570       0.0100    0.0095    0.0084
them has a great effect on the quadratic term. Therefore, even if we use them, we cannot lower the quadratic term's coefficient. As the next candidates, we can choose K or D. If the K level is changed from 1 to 2, the coefficient is expected to decrease toward zero. Indeed, this change increases the linear term's coefficient from its original value; however, that is not vital, because we can adjust using a signal. On the other hand, if we wish to eliminate the quadratic term's effect with a minimal number of factors, we can select K or D. In actuality, extrapolation can also be utilized. The procedure discussed here is a generic strategy applicable to all types of design by simulation, called the Taguchi method or Taguchi quality engineering in the United States and other countries.
Quality Engineering:
The Taguchi Method

4.1. Introduction
4.2. Electronic and Electrical Engineering
Function of an Engine
General Chemical Reactions
Evaluation of Images
Functionality Such as Granulation or Polymerization Distribution
Separation System
MT (Mahalanobis–Taguchi) Method
Application of the MT Method to a Medical Diagnosis
Design of General Pattern Recognition and Evaluation Procedure
Summary of Partial MD Groups: Countermeasure for Collinearity

4.1. Introduction
The term robust design is in widespread use in Europe and the United States. It
refers to the design of a product that causes no trouble under any conditions and
answers the question: What is a good-quality product? As a generic term, quality
or robust design has no meaning; it is merely an objective. A product that functions
under any conditions is obviously good. Again, saying this is meaningless. All engineers attempt to design what will work under various conditions. The key issue
is not design itself but how to evaluate functions under known and unknown
conditions.
At the Research Institute of Electrical Communication in the 1950s, a telephone
exchange and telephone were designed so as to have a 40- and a 15-year design
life, respectively. These were demanded by the Bell System, so they developed a
successful crossbar telephone exchanger in the 1950s; however, it was replaced 20
years later by an electronic exchanger. From this one could infer that a 40-year
design life was not reasonable to expect.
We believe that design life is an issue not of engineering but of product planning. Thinking of it differently from the crossbar telephone exchange example,
rather than simply prolonging design life for most products, we should preserve
limited resources through our work. No one opposes the general idea that we
should design a product that functions properly under various conditions during
its design life. What is important to discover is how to assess proper functions.
The conventional method has been to examine whether a product functions
correctly under several predetermined test conditions. Around 1985, we visited the
Circuit Laboratory, one of the Bell Labs. They developed a new circuit by using
the following procedure: first, they developed a circuit of an objective function
under a standard condition; once it could satisfy the objective function, it was
assessed under 16 different types of conditions, which included different environments for use and after-loading conditions.
If the product did not work properly under some of the different conditions,
design constants (parameters) were changed so that it would function well. This
is considered parameter design in the old sense. In quality engineering (QE), to
achieve an objective function by altering design constants is called tuning. Under
the Taguchi method, functional improvement using tuning should be made under
standard conditions only after conducting stability design because tuning is improvement based on response analysis. That is, we should not take measures for
noises by taking advantage of cause-and-effect relationships.
The reason for this is that even if the product functions well under the 16
conditions noted above, we cannot predict whether it works under other conditions. This procedure does not guarantee the product's proper functioning within
its life span under various unknown conditions. In other words, QE is focused not
on response but on interaction between designs and signals or noises. Designing
parameters to attain an objective function is equivalent to studying first-order moments. Interaction is related to second-order moments. First-order moments involve a scalar or vector, whereas second-order moments are studied by two-dimensional tensors. Quality engineering maintains that we should do tuning only
under standard conditions after completing robust design. A two-dimensional tensor does not necessarily represent noises in an SN ratio. In quality engineering,
noise effects should have a continual monotonic tendency.
yi(A, B, ..., N) = m   (i = 1, 2, ..., 16)   (4.1)

In this case, m is a target value (or target curve). If we have 16 or more control factors (called adjusting factors in quality engineering), we can solve equation (4.1). If not, we determine A, B, ..., to minimize a sum of squared differences between both sides of equation (4.1). In other words, we solve the following:

min over A, B, ... :  Σ(i = 1 to 16) [yi(A, B, ..., N) − m]²   (4.2)
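When the model is linear in the adjusting factors, equation (4.2) is an ordinary least-squares problem. The sketch below is our own illustration with a hypothetical two-parameter model yi = A·ui + B·vi (the data ui, vi, and the target are invented for the example), solved through the normal equations:

```python
# Least-squares tuning, equation (4.2): choose A and B to minimize
# sum_i [y_i(A, B) - m_i]^2 for y_i = A*u_i + B*v_i.
# (u, v, m are hypothetical illustration data.)
u = [1.0, 2.0, 3.0, 4.0]
v = [1.0, 0.5, 0.25, 0.125]
m = [2*ui + 3*vi for ui, vi in zip(u, v)]   # target reachable exactly here

# Normal equations for the 2x2 system.
Suu = sum(ui*ui for ui in u)
Svv = sum(vi*vi for vi in v)
Suv = sum(ui*vi for ui, vi in zip(u, v))
Sum = sum(ui*mi for ui, mi in zip(u, m))
Svm = sum(vi*mi for vi, mi in zip(v, m))

det = Suu*Svv - Suv*Suv
A = (Sum*Svv - Svm*Suv) / det
B = (Svm*Suu - Sum*Suv) / det

print(round(A, 6), round(B, 6))   # 2.0 3.0
```

With fewer adjusting factors than conditions, the residual of this fit is exactly the deviation that tuning cannot remove.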
In quality engineering, to assess functionality, we regard a sum of squared measurements as total output, decompose it into effective and harmful parts, and improve the ratio of the effective part to the harmful part by defining it as an SN
ratio. To decompose measurements properly into quadratic forms, each measurement should be proportional to the square root of energy. Of course, the theory
regarding quadratic form is related not to energy but to mathematics. The reason
that each measurement is expected to be proportional to a square root of energy
is that if total variation is proportional to energy or work, it can be decomposed
into a sum of signals or noises, and each gain can be added sequentially.
In an engineering field such as radio waves, electric power can be decomposed as follows:

apparent power = effective power + reactive power

The effective power is measured in watts, whereas the apparent power is measured in volt-amperes. For example, suppose that the input is defined as the following sinusoidal voltage:

input = E sin(ωt + φ)   (4.3)

with output current I sin(ωt)   (4.4)

Then the effective power is

effective power = (EI/2) cos(φ)   (4.5)

the apparent power is

apparent power = EI/2   (4.6)

and the reactive power is expressed by

reactive power = (EI/2)[1 − cos(φ)]   (4.7)
Functionality Evaluation of System Using Power (Amplitude)
Using the Hermitian form, which deals with the quadratic form in complex numbers, we express the total variation as

ST = y1ȳ1 + y2ȳ2 + ··· + ynȳn   (4.10)

Sβ = (M̄1y1 + M̄2y2 + ··· + M̄nyn)(M1ȳ1 + M2ȳ2 + ··· + Mnȳn)/(M1M̄1 + M2M̄2 + ··· + MnM̄n)   (f = 1)   (4.11)

β = (M̄1y1 + M̄2y2 + ··· + M̄nyn)/(M1M̄1 + M2M̄2 + ··· + MnM̄n)   (4.12)

Se = ST − Sβ   (4.14)

Ve = Se/(n − 1)   (4.15)
The SN ratio and sensitivity are

η = 10 log [(1/r)(Sβ − Ve)/Ve]   (4.16)

S = 10 log (1/r)(Sβ − Ve)   (4.17)

For the layout of Table 4.1, with three noise conditions and linear equations Li = M̄1yi1 + M̄2yi2 + ··· + M̄kyik, the decomposition is:

Total variation:
ST = Σ yij ȳij   (f = 3k)   (4.20)

Variation of proportional term:
Sβ = (L1 + L2 + L3)(L̄1 + L̄2 + L̄3)/3r   (f = 1)   (4.21)

Effective divider:
r = M1M̄1 + M2M̄2 + ··· + MkM̄k   (4.22)

Variation between linear equations:
SN = (L1L̄1 + L2L̄2 + L3L̄3)/r − Sβ   (f = 2)   (4.23)

Error variation:
Se = ST − Sβ − SN   (f = 3k − 3)   (4.24)
Table 4.1
Input/output data

          Signal
Noise     M1    M2    ···   Mk    Linear Equation
N1        y11   y12   ···   y1k   L1
N2        y21   y22   ···   y2k   L2
N3        y31   y32   ···   y3k   L3
The pooled noise variation is SN + Se, with f = 3k − 1   (4.25)

Error variance:
Ve = Se/(3k − 3)   (4.26)

Noise variance (pooled):
VN = (SN + Se)/(3k − 1)   (4.27)

SN ratio:
η = 10 log [(1/3r)(Sβ − Ve)/VN]   (4.28)

Sensitivity:
S = 10 log (1/3r)(Sβ − Ve)   (4.29)
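The decomposition (4.20) to (4.28) replaces each square with a product of conjugates. The following is a sketch of ours with synthetic complex data (three noise conditions that merely rescale a common proportional response, so the error variation vanishes); it is an illustration of the formulas, not an excerpt from the handbook:

```python
import math

# Hermitian-form SN ratio for three noise levels, equations (4.20)-(4.28).
# Synthetic illustration: y_ij = b_i * M_j with b = 1.1, 1.0, 0.9.
M = [1+0j, 2+0j, 3+0j]
Y = [[b*Mj for Mj in M] for b in (1.1, 1.0, 0.9)]
k = len(M)

r = sum((Mj*Mj.conjugate()).real for Mj in M)   # effective divider (4.22)
L = [sum(Mj.conjugate()*y for Mj, y in zip(M, row)) for row in Y]  # linear eqs

ST = sum((y*y.conjugate()).real for row in Y for y in row)  # (4.20), f = 3k
Sb = abs(sum(L))**2 / (3*r)                                 # (4.21), f = 1
SN = sum(abs(Li)**2 for Li in L)/r - Sb                     # (4.23), f = 2
Se = ST - Sb - SN                                           # (4.24), f = 3k-3
Ve = Se / (3*k - 3)
VN = (SN + Se) / (3*k - 1)

eta = 10*math.log10((Sb - Ve)/(3*r)/VN)                     # (4.28)
print(round(eta, 2))   # 14.56
```

Because the three responses are exactly proportional, Se is zero to rounding, and the whole noise variance comes from the N×β interaction.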
We have thus far described application of the Hermitian form to basic output
decomposition. This procedure is applicable to decomposition of almost all quadratic forms because all squared terms are replaced by products of conjugates when
we deal with complex numbers. From here on we call it by a simpler term, decomposition of variation, even when we decompose variation using the Hermitian form
because the term decomposition of variation by the Hermitian form is too lengthy.
Moreover, we wish to express even a product of two conjugates y and ȳ as y² in some cases. Whereas y² denotes itself in the case of real numbers, it denotes a
product of conjugates in the case of complex numbers. By doing this, we can use
both the formulas and degrees of freedom expressed in the positive quadratic
form. Additionally, for decomposition of variation in the Hermitian form, decomposition formulas in the quadratic form hold true. However, as for noises, we need
to take into account phase variability as well as power variability because the noises
exist in the complex plane. Thus, we recommend that the following four-level
noise N be considered:
N1:
N2:
N3:
N4:
In this case, E is too small to measure. Since we cannot measure frequency as part
of wave power, we need to measure frequency itself as quantity proportional to
energy. As for output, we should keep power in an oscillation system sufciently
stable. Since the systems functionality is discussed in the preceding section, we
describe only a procedure for measuring the functionality based on frequency.
If a signal has one level and only stability of frequency is in question, measurement of time and distance is regarded as essential. Stability for this case is nominal-the-best stability of frequency, which is used for measuring time and distance. More
exactly, we sometimes take a square root of frequency.
STABILITY OF TRANSMITTING WAVE
The stability of the transmitting wave is considered most important when we deal
with radio waves. (A laser beam is an electromagnetic wave; however, we use the
term radio wave. The procedure discussed here applies also for infrared light, visible light, ultraviolet light, x-rays, and gamma rays.) The stability of radio waves is
categorized as either stability of phase or frequency or as stability of power. In the
case of frequency-modulated (FM) waves, the former is more important because
even if output fluctuates to some degree, we can easily receive phase or frequency
properly when phase or frequency is used.
In Section 7.2 of Reference 2, for experimentation purposes the variability in
power source voltage is treated as a noise. The stability of the transmitting wave is
regarded as essential for any function using phase or frequency. Since only a single
frequency is sufficient to stabilize phase in studying the stability of the transmitting wave, we can utilize a nominal-the-best SN ratio in quality engineering. More specifically, by selecting variability in temperature or power source voltage or deterioration as one or more noises, we calculate the nominal-the-best SN ratio. If the
noise has four levels, by setting the frequency data at N1, N2, N3, and N4 to y1, y2,
y3, and y4, we compute the nominal-the-best SN ratio and sensitivity as follows:
η = 10 log [(1/4)(Sm − Ve)/Ve]   (4.32)

S = 10 log (1/4)(Sm − Ve)   (4.33)
Quality Engineering of System Using Frequency
Sm = (y1 + y2 + y3 + y4)²/4   (f = 1)   (4.34)

ST = y1² + y2² + y3² + y4²   (f = 4)   (4.35)

Se = ST − Sm   (f = 3)   (4.36)

Ve = (1/3)Se   (4.37)

When a signal factor exists (Table 4.2), with yi the total of the two noise-level data at Mi, the decomposition is

SM = (y1² + y2² + ··· + yk²)/2   (f = k)   (4.38)

SN = [(total at N1) − (total at N2)]²/2k   (f = 1)   (4.39)

Se = ST − SM − SN   (f = k − 1)   (4.40)
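The nominal-the-best computation of equations (4.34) to (4.37) and the SN ratio of equation (4.32) take only a few lines. This is a sketch of ours, and the four frequency readings at N1 through N4 are hypothetical:

```python
import math

# Nominal-the-best SN ratio for four frequency readings at noise
# levels N1..N4, equations (4.32)-(4.37). Hypothetical data (MHz).
y = [99.8, 100.1, 100.0, 100.1]

Sm = sum(y)**2 / 4                 # (4.34), f = 1
ST = sum(v**2 for v in y)          # (4.35), f = 4
Se = ST - Sm                       # (4.36), f = 3
Ve = Se / 3                        # (4.37)

eta = 10*math.log10((Sm - Ve)/4 / Ve)   # (4.32)
S   = 10*math.log10((Sm - Ve)/4)        # (4.33)

print(round(eta, 2))   # 56.99
```

A larger η here means the oscillation frequency is more stable against the chosen noises, which is exactly what the transmitting-wave study requires.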
Table 4.2
Cases where signal factor exists

Signal   N1    N2    Total
M1       y11   y12   y1
M2       y21   y22   y2
···
Mk       yk1   yk2   yk
Table 4.3
Decomposition of total variation

Source   f       S       V
M        k       SM      VM
N        1       SN
e        k − 1   Se      Ve
Total    2k      ST

Now

VM = SM/k   (4.43)

Ve = Se/(k − 1)   (4.44)

VN = (SN + Se)/k   (4.45)

and the SN ratio is

η = 10 log (VM/VN)
When we transmit (transact) information using a certain frequency, we need frequency modulation. The technical means of modulating frequency is considered
to be the signal. A typical function is a circuit that oscillates a radio wave into
which we modulate a transmitting wave at the same frequency as that of the transmitting wave and extract a modulated signal using the difference between two
waves. By taking into account various types of technical means, we should improve
the following proportionality between input signal M and measurement characteristic y for modulation:
y = βM   (4.46)

Thus, for modulation functions, we can define the same SN ratio as before.
Although y as output is important, before studying y we should research the frequency stability of the transmitting wave and internal oscillation as discussed in
the preceding section.
Sβ = (L1 + L2 + ··· + L6)²/6r   (f = 1)   (4.47)

SF = [(L1 + L4)² + (L2 + L5)² + (L3 + L6)²]/2r − Sβ   (f = 2)   (4.48)

SN(F) = (L1² + L2² + ··· + L6²)/r − Sβ − SF   (f = 3)   (4.49)

Se = ST − (Sβ + SF + SN(F))   (f = 6k − 6)   (4.50)

η = 10 log [(1/6r)(Sβ − Ve)/VN]   (4.51)

S = 10 log (1/6r)(Sβ − Ve)   (4.52)

Now

VN = (SN(F) + Se)/(6k − 3)   (4.53)
Table 4.4
Case where modulated signal exists

                    Modulated Signal
Noise   Frequency   M1    M2    ···   Mk    Linear Equation
N1      F1          y11   y12   ···   y1k   L1
        F2          y21   y22   ···   y2k   L2
        F3          y31   y32   ···   y3k   L3
N2      F1          y41   y42   ···   y4k   L4
        F2          y51   y52   ···   y5k   L5
        F3          y61   y62   ···   y6k   L6
Table 4.5
Decomposition of total variation

Source   f          S       V
β        1          Sβ
F        2          SF
N(F)     3          SN(F)
e        6(k − 1)   Se      Ve
Total    6k         ST
we should research and design only the analog function, because digital systems are included in all AM, FM, and PM; the only difference from an analog system is that the level values are discrete rather than continuous. That is, if we can improve the SN ratio as an analog function, we can minimize each interval between discrete values and eventually enhance information density.
Example
A phase-modulation digital system for a signal having a 30° phase interval, such as 0°, 30°, 60°, ..., 330°, has a value of σ that is expressed by the following equation, even if its analog functional SN ratio is only −10 dB:

10 log (1/σ²) = −10   (4.54)

By taking into account that the unit of the signal M is degrees, we obtain the following small value of σ:

σ = √10 = 3.16°   (4.55)

The fact that σ, representing the RMS error, is approximately 3° implies that there is almost no error as a digital system, because 3° represents one-fifth of the 15° function limit regarded as an error in a digital system. In Table 4.6 all data are expressed in radians. Since the SN ratio is 48.02 dB, σ is computed as follows:

σ = √(10^−4.802) = 0.00397 rad (≈ 0.22°)   (4.56)

The ratio of this to the function limit of 15° is about 1/68. Then, if noises are selected properly, almost no error occurs.
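The conversion used in equation (4.56), from a decibel SN ratio back to an RMS phase error, can be sketched as follows (our illustration of the arithmetic, not part of the original example):

```python
import math

# RMS phase error implied by a nominal-the-best SN ratio of 48.02 dB,
# equation (4.56): sigma = sqrt(10**(-eta/10)).
eta = 48.02                          # dB
sigma = math.sqrt(10**(-eta/10))     # rad

print(round(sigma, 5))               # 0.00397
print(round(math.degrees(sigma), 2))
```

Comparing the printed degree value against the 15° function limit gives the roughly 1/68 ratio quoted in the text.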
In considering the phase modulation function, in most cases frequency is handled as an indicative factor (i.e., a use-related factor whose effect is not regarded as noise). The procedure of calculating its SN ratio is the same as that for a system
Table 4.6
Data for phase shifter (experiment 1 in L18 orthogonal array)

                 V1           V2           V3
T1   F1   77 + j101    248 + j322   769 + j1017
     F2   87 + j104    280 + j330   870 + j1044
     F3   97 + j106    311 + j335   970 + j1058
T2   F1   77 + j102    247 + j322   784 + j1025
     F2   88 + j105    280 + j331   889 + j1052
     F3   98 + j107    311 + j335   989 + j1068
using frequency. Therefore, we explain the calculation procedure using experimental data in Technology Development for Electronic and Electric Industries [3].
Example
This example deals with the stability design of a phase shifter to advance the phase by 45°. Four control factors are chosen and assigned to an L18 orthogonal array.
Since the voltage output is taken into consideration, input voltages are selected as
a three-level signal. However, the voltage V is also a noise for phase modulation.
Additionally, as noises, temperature T and frequency F are chosen as follows:
Voltage (mV): V1 = 224, V2 = 707, V3 = 2234
Temperature (°C): T1 = 10, T2 = 60
Frequency (MHz): F1 = 0.9, F2 = 1.0, F3 = 1.1
Table 4.6 includes output voltage data expressed in complex numbers for 18 combinations of all levels of noises V, T, and F. Although the input voltage is a signal
for the output voltage, we calculate the phase advance angle based on complex
numbers because we focus only on the phase-shift function:
y = tan⁻¹(Y/X)   (4.57)
In this equation, X and Y indicate the real and imaginary parts, respectively. Using this scalar y, we calculate the SN ratio and sensitivity.
As for power, we compute the SN ratio and sensitivity according to the Hermitian form described earlier in the chapter. Because the function in this section is a phase advance (only 45° in this case), the signal has one level (more strictly, two phases, 0° and 45°). However, since our focal point is only the difference, all that we need to do is check the phase advance angles for the data in Table 4.6. For the sake of convenience, we show them in radians in Table 4.7, although we could also use degrees as the unit. If three phase advance angles, 45°, 90°, and 135°, are selected, we should consider a dynamic function by setting the signal as M.
Table 4.7
Data for phase advance angle (rad)

                V1      V2      V3      Total
T1   F1   0.919   0.915   0.923   2.757
     F2   0.874   0.867   0.876   2.617
     F3   0.830   0.823   0.829   2.482
T2   F1   0.924   0.916   0.918   2.758
     F2   0.873   0.869   0.869   2.611
     F3   0.829   0.823   0.824   2.476
Since we can adjust the phase angle to 45°, or 0.785 rad (the target angle), here we use the nominal-the-best SN ratio to minimize variability. Now T and V are true noises and F is an indicative factor. Then we analyze as follows:

ST = 0.919² + 0.915² + ··· + 0.824² = 13.721679   (f = 18)   (4.58)

Sm = 15.701²/18 = 13.695633   (f = 1)   (4.59)

SF = (5.515² + 5.228² + 4.958²)/6 − Sm = 0.025862   (f = 2)   (4.60)

Se = ST − Sm − SF = 0.0001835   (f = 15)   (4.61)

Ve = Se/15 = 0.00001223   (4.62)

η = 10 log [(1/18)(Sm − Ve)/Ve] = 10 log [(1/18)(13.695633 − 0.00001223)/0.00001223] = 47.94 dB   (4.63)

S = 10 log (1/18)(Sm − Ve) = −1.187 dB   (4.64)

The target phase advance of 45° corresponds to

10 log [(45/180)(3.1416)] = −1.049 dB   (4.65)
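The analysis of equations (4.58) to (4.63) can be reproduced numerically from Table 4.7 (a Python sketch of ours):

```python
import math

# Table 4.7: phase advance angles (rad); rows F1-F3 within T1 then T2,
# columns V1-V3.
data = [
    [0.919, 0.915, 0.923],   # T1, F1
    [0.874, 0.867, 0.876],   # T1, F2
    [0.830, 0.823, 0.829],   # T1, F3
    [0.924, 0.916, 0.918],   # T2, F1
    [0.873, 0.869, 0.869],   # T2, F2
    [0.829, 0.823, 0.824],   # T2, F3
]
flat = [v for row in data for v in row]

ST = sum(v**2 for v in flat)                        # (4.58), f = 18
Sm = sum(flat)**2 / 18                              # (4.59), f = 1
# F totals pool the same frequency over T1 and T2 (rows i and i+3).
F_tot = [sum(data[i]) + sum(data[i+3]) for i in range(3)]
SF = sum(t**2 for t in F_tot)/6 - Sm                # (4.60), f = 2
Se = ST - Sm - SF                                   # (4.61), f = 15
Ve = Se / 15                                        # (4.62)

eta = 10*math.log10((Sm - Ve)/18 / Ve)              # (4.63)
print(round(ST, 6))    # 13.721679
print(round(eta, 2))   # 47.94
```

Note how small Ve is relative to Sm: almost all of the variation is the mean phase advance itself, which is why the SN ratio is so high.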
As a next step, under optimal conditions, we calculate the phase advance angle
and adjust it to the value in equation (4.65) under standard conditions. This process
can be completed by only one variable.
When there are multiple levels for one signal factor, we should calculate the
dynamic SN ratio. When we design an oscillator using signal factors, we may select
three levels for a signal to change phase, three levels for a signal to change output,
three levels for frequency F, and three levels for temperature as an external condition, and assign them to an L9 orthogonal array. In this case, for the effective
value of output amplitude, we set a proportional equation for the signal of voltage
V as the ideal function. All other factors are noises. As for the phase modulation
function, if we select the following as levels of a phase advance and retard signal,
Level targeted at −45°: M1
Level targeted at 0°: M2
Level targeted at 45°: M3
the signal becomes zero-point proportional. The ideal function is a proportional
equation where both the signal and output go through the zero point and take
positive and negative values. If we wish to select more signal levels, for example, two levels on each of the positive and negative sides (in total, five distinct levels), we should set them as follows:

M1 = −120°
M2 = −60°
M3 = M4 = 0° (dummy)
M5 = 60°
M6 = 120°
Next, together with voltage V, temperature T, and frequency F, we allocate them
to an L18 orthogonal array to take measurements. On the other hand, by setting M1 = −150°, M2 = −90°, M3 = −30°, M4 = 30°, M5 = 90°, and M6 = 150°, we can assign them to an L18 array. After calibrating the data measured using M0 = 0°, they are zero-point proportional.
DESIGN BY SIMULATION
Conventional Meaning of Robust Design

New Method: Functionality Design
Example
Imagine a vehicle steering function; more specifically, we change the orientation of a car by changing the steering angle. The first engineer who designed such
a system considered how much the angle could be changed within a certain range
of steering angle. For example, the steering angle range is defined by the rotation of the steering wheel, three rotations in each direction:

clockwise: (360°)(3) = 1080°
counterclockwise: (360°)(3) = −1080°

In this case, the total steering angle adds up to 2160°. Accordingly, the engineer should have considered the relationship between this angle and the steering curvature y: a certain curvature at a steering angle of 1080°, zero curvature at 0°, and a curvature of the opposite sign at −1080°.
For any function, a developer of a function pursues an ideal relationship between
signal M and output y under conditions of use. In quality engineering, since any
function can be expressed by an input/output relationship of work or energy, essentially an ideal function is regarded as a proportionality. For instance, in the
steering function, the ideal function can be given by
y = βM  (4.66)
The noise factor is compounded into two levels, N1 and N2.
In short, N1 represents conditions where the steering angle functions well, whereas
N2 represents conditions where it does not function well. For example, the former
represents a car with new tires running on a dry asphalt road. In contrast, the latter
is a car with worn tires running on a wet asphalt or snowy road.
By changing the vehicle design, we hope to mitigate the difference between N1
and N2. Quality engineering actively takes advantage of the interaction between
control factors and a compounded error factor N to improve functionality. Then we measure data such as curvature, as shown in Table 4.8 (in the counterclockwise turn case, negative signs are added, and vice versa). Each Mi value means an angle.
Table 4.8 shows data of curvature y, or the reciprocal of the turning radius for
each signal value of a steering angle ranging from zero to almost maximum for both
clockwise and counterclockwise directions. If we turn left or right on a normal road
or steer on a highway, the range of data should be different. The data in Table 4.8 represent the situation where we are driving a car in a parking lot or steering into a hairpin curve at low speed. The following factor of speed K, indicating different conditions of use, called an indicative factor, is often assigned to an outer orthogonal array. This is because a signal factor value varies in accordance with each indicative factor level.
K1: low speed (less than 20 km/h)
K2: middle speed (20 to 60 km/h)
K3: high speed (more than 60 km/h)
The SN ratio and sensitivity for the data in Table 4.8 are computed as follows. First, we calculate two linear equations, L1 and L2, using N1 and N2:

L1 = M1y11 + M2y12 + ⋯ + M7y17  (4.67)

L2 = M1y21 + M2y22 + ⋯ + M7y27  (4.68)

The total variation is

ST = y11² + y12² + ⋯ + y27²  (f = 14)  (4.69)

and the variation caused by the signal is

Sβ = (L1 + L2)²/2r  (4.70)

where

r = M1² + M2² + ⋯ + M7² = (−720)² + (−480)² + ⋯ + 720² = 1,612,800  (4.71)
Table 4.8
Curvature data

      M1 (−720°)  M2 (−480°)  M3 (−240°)  M4 (0°)  M5 (240°)  M6 (480°)  M7 (720°)
N1    y11         y12         y13         y14      y15        y16        y17
N2    y21         y22         y23         y24      y25        y26        y27
The variation caused by the noise-by-signal interaction and the error variation are

SN×β = (L1 − L2)²/2r  (4.72)

Se = ST − Sβ − SN×β  (4.73)

The SN ratio and sensitivity are

η = 10 log [(1/2r)(Sβ − Ve)] / VN  (4.74)

S = 10 log (1/2r)(Sβ − Ve)  (4.75)

where

Ve = Se/12  (4.76)

VN = (SN×β + Se)/13  (4.77)
A major aspect in which quality engineering differs from conventional research and design procedures is that QE focuses on the interaction between signal and noise factors, which represent conditions of design and use.
Table 4.9
Decomposition of total variation

Source   f    S       V
β        1    Sβ
N×β      1    SN×β    VN
e        12   Se      Ve
Total    14   ST
For example, the steering function discussed earlier is not a generic but an objective function. When an objective function is chosen, we need to check whether there is an interaction between control factors after allocating them to an orthogonal array, because their effects (differences in SN ratio, called gain) do not necessarily have additivity. An L18 orthogonal array is recommended because its size is appropriate for checking the additivity of gain. If there is no additivity, the signal and measurement characteristics used need to be reconsidered for change.
4. Another disadvantage is the paradigm shift in the orthogonal array's role. In quality engineering, control factors are assigned to an orthogonal array. This is not because we need to calculate the main effects of control factors on the measurement characteristic y. Since the levels of control factors are fixed, there is no need to measure their effects on y using orthogonal arrays. The real objective of using orthogonal arrays is to check the reproducibility of gains, as described in disadvantage 3.
In new product development, there is no existing engineering knowledge most of the time. It is said that engineers suffer from being unable to find ways to improve the SN ratio. In fact, the use of orthogonal arrays enables engineers to perform R&D in a totally new area. In such applications, parameter-level intervals may be increased and many parameters may be studied together. If the system selected is not a good one, there will be little SN ratio improvement. Often, one orthogonal-array experiment can tell us the limitation on SN ratios. This is an advantage of using an orthogonal array rather than a disadvantage.
We are asked repeatedly how we would solve a problem using quality engineering.
Here are some generalities that we use.
1. Even if the problem is related to a consumer's complaint in the marketplace or to manufacturing variability, in lieu of questioning its root cause, we would suggest changing the design using a generic function. This is a technical approach to problem solving through redesign.
2. To solve complaints in the marketplace, after finding parts sensitive to environmental changes or deterioration, we would replace them with robust parts, even if this incurs a cost increase. This is called tolerance design.
3. If variability occurs before shipping, we would reduce it by reviewing process
control procedures and inspection standards. In quality engineering this is
termed on-line quality engineering. This does not involve using Shewhart control
charts or other management devices but does include the following daily
routine activities by operators:
a. Feedback control. Stabilize processes continuously for products to be produced later by inspecting product characteristics and correcting the
difference between them and target values.
b. Feedforward control. Based on the information from incoming materials or
component parts, predict the product characteristics in the next process
and calibrate processes continuously to match product characteristics with
the target.
c. Preventive maintenance. Change or repair tools periodically with or without
checkup.
By doing so, we can make ST equivalent to work, and as a result, the following simple decomposition can be completed:

total work = (work by signal) + (work by noise)  (4.80)
This type of decomposition has long been used in the telecommunication field.

In the case of a vehicle's acceleration performance, M should be the square root of distance and y the time to reach it. After choosing two noise levels, we calculate the SN ratio and sensitivity using equation (4.80), which holds true for a case where energy or work is expressed through a square root. But in many cases it is not clear whether this is so. To make sure that (4.80) holds true, we check the additivity of control factor effects using an orthogonal array. According to Newton's law, distance y is proportional to squared time t² under constant acceleration:

y = (b/2)t²  (4.81)

so that t = √(2/b) √y. Taking the time t as the output y and the square root of distance as the signal M, the ideal function is

y = βM  (4.82)

Generic Function of Machining
The term generic function is used quite often in quality engineering. It is not an objective function but a function serving as the means to achieve the objective. Since a function means making something work, work should be measured as output. For example, in the case of cutting, the output is the amount of material cut. As work input, electricity consumption or fuel consumption can be selected.

Since in this case both the signal (amount of cut M) and the data (electricity consumption y) are observations, M and y have six levels and the effects of N are not considered separately. For a practical situation, we should take the square root of both y and M. The linear equation is expressed as

L = √M11 √y11 + √M12 √y12 + ⋯ + √M23 √y23  (4.84)

r = M11 + M12 + ⋯ + M23  (4.85)
Table 4.10
Data for machining

        T1          T2          T3
N1   M11, y11    M12, y12    M13, y13
N2   M21, y21    M22, y22    M23, y23
ST = (√y11)² + (√y12)² + ⋯ + (√y23)² = y11 + y12 + ⋯ + y23  (f = 6)  (4.86)

Sβ = L²/r  (f = 1)  (4.87)

Se = ST − Sβ  (f = 5)  (4.88)

Ve = Se/5  (4.89)

η = 10 log [(1/r)(Sβ − Ve)] / Ve  (4.90)

S = 10 log (1/r)(Sβ − Ve)  (4.91)
Since we analyze based on square roots, the antilog of S estimates β², that is, the electricity consumption needed for cutting, which should be smaller. If we perform smooth machining with a small amount of electricity, we can obviously improve dimensional accuracy and flatness.
2. Improvement of work efficiency. This analysis is not for machining performance but for work efficiency. In this case, an ideal function is expressed as

y = βT  (4.92)

ST = y11² + y12² + ⋯ + y23²  (f = 6)  (4.93)

Sβ = (L1 + L2)²/2r  (f = 1)  (4.94)

SN×β = (L1 − L2)²/2r  (f = 1)  (4.95)

Now

L1 = T1y11 + T2y12 + T3y13  (4.96)

L2 = T1y21 + T2y22 + T3y23  (4.97)

r = T1² + T2² + T3²  (4.98)

η = 10 log [(1/2r)(Sβ − Ve)] / VN  (4.99)

S = 10 log (1/2r)(Sβ − Ve)  (4.100)
Then

Ve = (1/4)(ST − Sβ − SN×β)  (4.101)

VN = (1/5)(ST − Sβ)  (4.102)
In this case, β² of the ideal function represents electricity consumption per unit
time, which should be larger. To improve the SN ratio using equation (4.99) means
to design a procedure of using a large amount of electricity smoothly. Since we
minimize electricity consumption by equations (4.90) and (4.91), we can improve
not only productivity but also quality by taking advantage of both analyses.
The efficiency ratio of y to M, which has traditionally been used by engineers, can neither solve quality problems nor improve productivity. On the other hand, the hourly machining amount (machining measurement) used by managers in manufacturing cannot settle noise problems and product quality in machining. To improve quality and productivity at the same time, after measuring the data shown in Table 4.10 we should determine an optimal condition based on two values, SN ratio and sensitivity. To lower sensitivity in equation (4.91) is to minimize electric power per unit removal amount. If we cut more using a slight quantity of electricity, we can cut smoothly, owing to improvement in the SN ratio in equation (4.90), as well as minimize energy loss. In other words, it means to cut sharply, thereby leading to good surface quality after cutting and improved machining efficiency per unit energy.
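A sketch of the machining-performance analysis of equations (4.84)–(4.91), using made-up values for the amounts cut and the electricity consumed (the square-root transform is applied to both signal and output, as described above):

```python
import math

# Hypothetical machining data: amount cut M (signal) and electricity
# consumption y, observed at T1, T2, T3 under noise conditions N1, N2.
M = [[20.0, 41.0, 63.0],    # N1: M11, M12, M13
     [18.0, 36.0, 55.0]]    # N2: M21, M22, M23
y = [[5.1, 10.0, 15.2],     # N1: y11, y12, y13
     [5.8, 11.5, 17.0]]     # N2: y21, y22, y23

# Square-root transform of both sides, eqs. (4.84)-(4.85)
L = sum(math.sqrt(m) * math.sqrt(v)
        for row_m, row_y in zip(M, y)
        for m, v in zip(row_m, row_y))            # eq. (4.84)
r = sum(m for row in M for m in row)              # eq. (4.85)

ST = sum(v for row in y for v in row)             # eq. (4.86), f = 6
S_beta = L ** 2 / r                               # eq. (4.87)
Se = ST - S_beta                                  # eq. (4.88), f = 5
Ve = Se / 5                                       # eq. (4.89)
eta1 = 10 * math.log10((S_beta - Ve) / r / Ve)    # SN ratio, eq. (4.90)
S1 = 10 * math.log10((S_beta - Ve) / r)           # sensitivity, eq. (4.91)
print(round(eta1, 2), round(S1, 2))
```

Lowering S1 here means less electricity per unit amount cut; raising eta1 means the input/output relation stays proportional across the noise conditions.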
Since quality engineering is a generic technology, a common problem occurs across the mechanical, chemical, and electronic engineering fields. We explain the functionality evaluation method for on and off conditions using an example of machining.
Recently, evaluation methods of machining have made significant progress, as shown in an experiment implemented in 1997 by Ishikawajima-Harima Heavy Industries (IHI) and sponsored by the National Space Development Agency of Japan (NASDA). The content of this experiment was released at the Quality Engineering Symposium in 1998. Instead of using transformality [setting product dimensions input through a numerically controlled (NC) machine as the signal and the corresponding dimensions after machining as the measurement characteristic], they measured input and output utilizing energy and work by taking advantage of the concept of generic function.
In 1959, the following experiment was conducted at the Hamamatsu Plant of the National Railway (now known as JR). In this experiment, various types of cutting tools (JIS, SWC, and new SWC cutting tools) and cutting conditions were assigned to an L27 orthogonal array to measure the net cutting power needed for a certain amount of cut. The experiment, based on 27 different combinations, revealed that the maximum power needed was several times as large as the minimum. This finding implies that excessive power may cause a rough surface or variability in dimensions. In contrast, cutting with quite a small amount of power means that we can cut sharply. Consequently, when we can cut smoothly and power-efficiently, material surfaces can be machined flat and variability in dimension can be reduced. At the Quality Engineering Symposium in June 1998, IHI released
(4.103)
(4.104)
The only difference between (4.103) and (4.104) is that the former is not a generic but an objective function, whereas the latter is a generic function representing the relationship between work and energy. Since y represents a unit of energy, a square of y does not make sense from the viewpoint of physics. Thus, by taking the square root of both sides beforehand, we make both sides equivalent to work and energy when they are squared.
On the other hand, unless the square of a measurement characteristic equals energy or work, we have no rationale for calculating the SN ratio based on a generic function. After taking the square root of both sides in equation (4.104),
Table 4.11
Cutting experiment

                                   T1     T2     T3
N1   M: amount of cut              M11    M12    M13
     y: electricity consumption    y11    y12    y13
N2   M: amount of cut              M21    M22    M23
     y: electricity consumption    y21    y22    y23
we compute the total variation of output ST, which is the sum of squares of the square roots of the cumulative electricity consumptions:

ST = (√y11)² + (√y12)² + ⋯ + (√y23)² = y11 + y12 + ⋯ + y23  (f = 6)  (4.105)

Sβ = (L1 + L2)²/(r1 + r2)  (4.107)

SN×β = (L1 − L2)²/(r1 + r2)  (4.110)
p = e^(−β1T)  (4.111)

p + q = e^(−β2T)  (4.112)

Here T represents the time needed for one cycle of the engine, and p and q are measurements in the exhaust. The total reaction rate β1 and the side reaction rate β2 are calculated as

β1 = (1/T) ln (1/p)  (4.113)

β2 = (1/T) ln [1/(p + q)]  (4.114)

The SN ratio is

η = 10 log (β1²/β2²)  (4.115)

and the sensitivity is

S = 10 log β1²  (4.116)

The larger the sensitivity, the more the engine's output power increases; and the larger the SN ratio becomes, the less effect the side reaction has.

Cycle time T should be tuned so that the benefit achieved by the magnitude of the total reaction rate and the loss due to the magnitude of the side reaction are balanced optimally.
Selecting a noise N with two levels (such as one for the starting point of the engine and the other for 10 minutes after starting), we obtain the data shown in Table 4.12. From the table, we calculate the reaction rates for N1 and N2. For the insufficient reaction,

β11 = (1/T) ln (1/p1)  (4.117)

β12 = (1/T) ln (1/p2)  (4.118)
Table 4.12
Function of engine based on chemical reaction

                           N1         N2
Insufficient reaction p    p1         p2
Objective reaction q       q1         q2
Total                      p1 + q1    p2 + q2
Similarly, for the side reaction,

β21 = (1/T) ln [1/(p1 + q1)]  (4.119)

β22 = (1/T) ln [1/(p2 + q2)]  (4.120)
Since the total reaction rates β11 and β12 are larger-the-better, their SN ratio is

η1 = 10 log 1/[(1/2)(1/β11² + 1/β12²)]  (4.121)

Since the side reaction rates β21 and β22 are smaller-the-better, their SN ratio is

η2 = −10 log (1/2)(β21² + β22²)  (4.122)

The total SN ratio is

η = η1 + η2  (4.123)

and the sensitivity is

S = η1  (4.124)
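The reaction-rate calculation of equations (4.117)–(4.123) can be sketched as follows; the cycle time T and the exhaust fractions p and q are hypothetical numbers chosen only to make the code runnable:

```python
import math

T = 10.0                       # reaction time for one cycle (hypothetical)
# Hypothetical exhaust measurements under noise conditions N1, N2:
p = [0.20, 0.35]               # unreacted fraction (insufficient reaction)
q = [0.05, 0.08]               # objective-product fraction

# Total reaction rates, eqs. (4.117)-(4.118): larger-the-better
b1 = [math.log(1 / pi) / T for pi in p]
# Side reaction rates from p + q, eqs. (4.119)-(4.120): smaller-the-better
b2 = [math.log(1 / (pi + qi)) / T for pi, qi in zip(p, q)]

# eq. (4.121): larger-the-better SN ratio for the total reaction rates
eta1 = 10 * math.log10(1 / (0.5 * (1 / b1[0] ** 2 + 1 / b1[1] ** 2)))
# eq. (4.122): smaller-the-better SN ratio for the side reaction rates
eta2 = -10 * math.log10(0.5 * (b2[0] ** 2 + b2[1] ** 2))
eta = eta1 + eta2              # total SN ratio, eq. (4.123)
print(round(eta, 2))
```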
If side reactions barely occur and we cannot trust their measurements in a chemical reaction experiment, we can separate the point of time for measuring the total reaction rate (1 − p) from the point of time for measuring the side reaction rate (1 − p − q). For example, we can set T1 to 1 minute and T2 to 30 minutes, as illustrated in Table 4.13. When T1 = T2, we can use the procedure described in the preceding section.
Table 4.13
Experimental data

                           T1           T2
Insufficient reaction p    p11          p12
Objective reaction q       q11          q12
Total                      p11 + q11    p12 + q12

General Chemical Reactions
β1 = (1/T1) ln (1/p11)  (4.125)

β2 = (1/T2) ln [1/(p12 + q12)]  (4.126)

Using the equations above, we can compute the SN ratio and sensitivity as

η = 10 log (β1²/β2²)  (4.127)

S = 10 log β1²  (4.128)
Evaluation of Images

In a practical experiment, we sometimes select seven levels for luminosity, such as 1000, 100, 10, 1, 1/10, 1/100, and 1/1000 times the current level. At the same time, we choose exposure times inversely proportional to each of them.
After selecting a logarithm of exposure time E for the value 0.301, the SN ratio
and sensitivity are calculated for analysis.
Next, we set the reading of exposure time E (multiplied by a certain decimal value) for each of the following seven levels of signal M (logarithm of luminosity) to y1, y2, ..., y7:

M1 = −3.0, M2 = −2.0, ..., M7 = 3.0  (4.129)
The noise is compounded into two levels, N1 and N2.
In this case, a small difference between the two sensitivity curves for N1 and N2
represents better performance. A good way to design an experiment is to compound all noise levels into only two levels. Thus, no matter how many noise levels we have, we should compound them into two.
Since we have three levels, K1, K2, and K3, for a signal of three primary colors,
two levels for a noise, and seven levels for an output signal, the total number of
data is (7)(3)(2) = 42. For each of the three primary colors, we calculate the SN ratio and sensitivity. Here we show only the calculation for K1. Based on Table 4.14,
Table 4.14
Experimental data

           M1    M2    ⋯    M7    Linear Equation
K1   N1    y11   y12   ⋯    y17   L1
     N2    y21   y22   ⋯    y27   L2
K2   N1    y31   y32   ⋯    y37   L3
     N2    y41   y42   ⋯    y47   L4
K3   N1    y51   y52   ⋯    y57   L5
     N2    y61   y62   ⋯    y67   L6
we proceed with the calculation. Taking into account that M has seven levels, we view M4 = 0 as the standard point and subtract the value at M4 from the values at M1, M2, M3, M5, M6, and M7, each of which is set to y1, y2, y3, y5, y6, and y7. By subtracting the reading y for M4 = 0 in the case of K1, we obtain the following linear equations:

L1 = −3y11 − 2y12 − ⋯ + 3y17  (4.130)

L2 = −3y21 − 2y22 − ⋯ + 3y27  (4.131)

Sβ = (L1 + L2)²/2r  (4.132)

SN×β = (L1 − L2)²/2r  (4.133)

ST = y11² + y12² + ⋯ + y27²  (4.134)

Ve = (1/12)(ST − Sβ − SN×β)  (4.135)

VN = (1/13)(ST − Sβ)  (4.136)

η1 = 10 log [(1/2r)(Sβ − Ve)] / VN  (4.137)

S1 = 10 log (1/2r)(Sβ − Ve)  (4.138)
As for K2 and K3, we calculate η2, S2, η3, and S3. The total SN ratio is computed as the sum of η1, η2, and η3. To balance densities ranging from low to high for K1, K2, and K3, we should equalize the sensitivities for the three primary colors, S1, S2, and S3, by solving simultaneous equations based on two control factors. Solving them such that

S1 = S2 = S3  (4.139)

holds true under standard conditions is not a new method.
The procedure discussed thus far is the latest method in quality engineering, whereas we previously used a method of calculating the density for a logarithm of luminosity, log E. The weakness of the latter method is that the density range of the output is different in each experiment. That is, we should not conduct an experiment using the same luminosity range for films of ASA 100 and 200. For the latter, we should perform an experiment using half the luminosity needed for the former. We have shown the procedure to do so.
Quality engineering is a method of evaluating functionality with a single index, η. Selection of the means for improvement is made by specialized technologies and specialists. Top management has the authority for investment and personnel affairs. The result of their performance is evaluated based on a balance sheet that cannot be manipulated. Similarly, although hardware and software can be designed by specialists, those specialists cannot evaluate product performance at their own discretion.
Functionality Such as Granulation or Polymerization Distribution

If you wish to limit granulation distribution within a certain range, you must classify granules below the lower limit as excessively granulated, ones around the center as targeted, and ones above the upper limit as insufficiently granulated, and proceed with analysis by following the procedure discussed earlier for chemical reactions. For polymerization distribution, you can use the same technique.

Separation System
p = Bp* / [A(1 − q*) + Bp*]  (4.140)

q = Aq* / [Aq* + B(1 − p*)]  (4.141)

The error ratio p represents the proportion of copper molecules originally contained in the ore but mistakenly left in the slag after smelting; 1 − p is called the yield, and subtracting the yield from 1 gives back the error ratio p. The error ratio q indicates the proportion of all noncopper molecules originally contained in the furnace but mistakenly included in the product, or crude copper. Both ratios are calculated as mass ratios. Even if the yield 1 − p is large enough, if the error
Table 4.15
Input/output for copper smelting

                     Output
Input           Product        Slag          Total
Copper          A(1 − q*)      Bp*
Noncopper       Aq*            B(1 − p*)
Total                                        A + B
ratio q is also large, this smelting is considered inappropriate. After computing the two error ratios p and q, we prepare Table 4.16.

Consider the two error ratios p and q in Table 4.15. If copper is supposed to melt well and move easily into the product when the temperature in the furnace is increased, the error ratio p decreases. However, since ingredients other than copper melt well at the same time, the error ratio q rises. A factor that can decrease p and increase q is regarded as an adequate variable for tuning. Although most factors have such characteristics to a greater or lesser degree, we consider it real technology to reduce both errors p and q rather than to obtain effects by tuning. In short, this is smelting technology with high functionality.

To find a factor level that reduces both p and q for a variety of factors, after making an adjustment so as not to change the ratio of p to q, we should evaluate functionality. The resulting measure is called the standard SN ratio. Since the gain in SN ratio accords with that obtained when p = q, no matter how great the ratio of p to q, in most cases we calculate the SN ratio after p is adjusted to equal q. First, we compute p0 as follows when we set p = q = p0:

p0 = 1 / {1 + √[(1/p − 1)(1/q − 1)]}  (4.142)

η = 10 log [(1 − 2p0)² / 4p0(1 − p0)]  (4.143)
Table 4.16
Input/output expressed by error ratios

                     Output
Input           Product        Slag
Copper          1 − p          p
Noncopper       q              1 − q
Total           1 − p + q      1 + p − q
for the optimal and initial conditions, and calculate the gains. After this, we conduct a confirmatory experiment for the two conditions, compute the SN ratios, and compare the estimated gains with those obtained from the experiment.

Based on the experiment under optimal conditions, we calculate p and q. Unless the sum of the losses for p and q is minimized, tuning is done using factors (such as the temperature in the furnace) that influence sensitivity but do not affect the SN ratio. In tuning, we should gradually change the adjusting factor level. This adjustment should be made after the optimal SN ratio condition is determined.

The procedure detailed here is applicable even to the removal of harmful elements or the segregation of garbage.
Example
First, we consider a case of evaluating the main effects on a cancer cell and the
side effects on a normal cell. Although our example is an anticancer drug, this
analytic procedure holds true for thermotherapy and radiation therapy using supersonic or electromagnetic waves. If possible, by taking advantage of an L18 orthogonal
array, we should study the method using a drug and such therapies at the same
time.
Quality engineering recommends that we experiment on cells or animals. We
need to alter the density of the drug to be assessed by h milligrams (e.g., 1 mg) per unit time (e.g., 1 minute). Next, we select one cancer cell and one normal cell and
designate them M1 and M2, respectively. In quality engineering, M is called a signal
factor. Suppose that M1 and M2 are placed in a certain solution (e.g., water or a
salt solution), the density is increased by 1 mg/minute, and the length of time that
each cell survives is measured. Imagine that the cancer and normal cells die at the
eighth and fourteenth minutes, respectively. We express these data as shown in
Table 4.17, where 1 indicates alive and 0 indicates dead. In addition, M1 and
Table 4.17
Data for a single cell (1 = alive, 0 = dead)

      T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13 T14 T15
M1     1  1  1  1  1  1  1  0  0   0   0   0   0   0   0
M2     1  1  1  1  1  1  1  1  1   1   1   1   1   0   0
M2 are cancer and normal cells, and T1, T2, ... indicate each lapse of time: 1, 2,
... minutes.
For digital data regarding a dead-or-alive (or cured-or-not cured) state, we calculate LD50 (lethal dose 50). In this case, the LD50 value of M1 is 7.5 because a
cell dies between T7 and T8, whereas that of M2 is 13.5. LD50 represents the quantity of drug needed until 50% of cells die.
In quality engineering, both M and T are signal factors. Although both signal and
noise factors are variables in use, a signal factor is a factor whose effect should
exist, and conversely, a noise is a factor whose effect should be minimized. That
is, if there is no difference between M1 and M2, and furthermore, between each
different amount of drug (or of time in this case), this experiment fails. Then we
regard both M and T as signal factors. Thus, we set up X1 and X2 as follows:

X1 = LD50 of a cancer cell  (4.144)

X2 = LD50 of a normal cell  (4.145)

In this case, the smaller X1 becomes, the better the drug's performance. In contrast, X2 should be greater. Quality engineering terms the former the smaller-the-better characteristic and the latter the larger-the-better characteristic. The following equation calculates the SN ratio:

η = 20 log (X2/X1)  (4.146)
Table 4.18
Dead-or-alive data

        T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13 T14 T15 T16 T17 T18 T19 T20
M1  N1   1  1  1  1  1  0  0  0  0  0   0   0   0   0   0   0   0   0   0   0
    N2   1  1  1  0  0  0  0  0  0  0   0   0   0   0   0   0   0   0   0   0
    N3   1  1  1  1  1  1  1  1  1  1   1   0   0   0   0   0   0   0   0   0
M2  N1   1  1  1  1  1  1  1  1  1  1   1   1   1   1   0   0   0   0   0   0
    N2   1  1  1  1  1  1  1  1  0  0   0   0   0   0   0   0   0   0   0   0
    N3   1  1  1  1  1  1  1  1  1  1   1   1   1   1   1   1   1   1   1   0
Table 4.19
LD50 data

      N1      N2     N3
M1    5.5     3.5    11.5
M2    14.5    8.5    19.5
Since the data for M1 should have a smaller standard deviation as well as a smaller average, after calculating the average of the sum of squared data, we multiply its common logarithm by −10. This is termed the smaller-the-better SN ratio:

η1 = −10 log (1/3)(5.5² + 3.5² + 11.5²) = −17.65 dB  (4.147)

On the other hand, because the LD50 value of a normal cell M2 should be as large as possible, after computing the average of the sum of reciprocal squared data, we multiply its common logarithm by −10. This is the larger-the-better SN ratio:

η2 = −10 log (1/3)(1/14.5² + 1/8.5² + 1/19.5²) = 21.50 dB  (4.148)

The total SN ratio is

η = η1 + η2 = −17.65 + 21.50 = 3.85 dB  (4.149)
For example, given two drugs A1 and A2, we obtain the experimental data shown in Table 4.20. If we compare the drugs, N1, N2, and N3 for A1 should be consistent with N′1, N′2, and N′3 for A2. Now suppose that the data of A1 are the same as those in Table 4.19 and that a different drug A2 has the LD50 data for M1 and M2 illustrated in Table 4.20. A1's SN ratio is given by equation (4.149). To compute A2's SN ratio, we calculate the SN ratio for the main effect η1 as follows:

η1 = −10 log (1/3)(18.5² + 11.5² + 20.5²) = −24.75 dB  (4.150)
Table 4.20
Data for comparison experiment

      A1              A2
      M1     M2       M1     M2
      5.5    14.5     18.5   89.5
      3.5    8.5      11.5   40.5
      11.5   19.5     20.5   103.5
Next, as a larger-the-better SN ratio, we calculate the SN ratio for the side effect η2 as

η2 = −10 log (1/3)(1/89.5² + 1/40.5² + 1/103.5²) = 35.59 dB  (4.151)

Therefore, we obtain the total SN ratio, combining the main and side effects, by adding them up as follows:

η = −24.75 + 35.59 = 10.84 dB  (4.152)
Table 4.21
Comparison of drugs A1 and A2

        Main Effect   Side Effect   Total
A1      −17.65        21.50         3.85
A2      −24.75        35.59         10.84
Gain    −7.10         14.09         6.99
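The comparison in Tables 4.19–4.21 can be reproduced numerically. The helper names below are our own; the LD50 values are those given in the tables:

```python
import math

def smaller_the_better(x):      # SN ratio of eq. (4.147) form
    return -10 * math.log10(sum(v * v for v in x) / len(x))

def larger_the_better(x):       # SN ratio of eq. (4.148) form
    return -10 * math.log10(sum(1 / (v * v) for v in x) / len(x))

# LD50 data from Tables 4.19 and 4.20
A1_M1, A1_M2 = [5.5, 3.5, 11.5], [14.5, 8.5, 19.5]
A2_M1, A2_M2 = [18.5, 11.5, 20.5], [89.5, 40.5, 103.5]

for name, m1, m2 in [("A1", A1_M1, A1_M2), ("A2", A2_M1, A2_M2)]:
    eta1 = round(smaller_the_better(m1), 2)   # main effect (cancer cell)
    eta2 = round(larger_the_better(m2), 2)    # side effect (normal cell)
    print(name, eta1, eta2, round(eta1 + eta2, 2))
# A1 -17.65 21.5 3.85
# A2 -24.75 35.59 10.84
```

The printed rows match Table 4.21; subtracting the A1 row from the A2 row gives the gains in its last line.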
point is 36°C. If the cancer cell dies at 43°C, the LD50 value is 6.5°C, the difference from the standard point of 36°C. If the normal cell dies at 47°C, the LD50 value is 10.5°C. When a sonic or electromagnetic wave is used in place of temperature, we need to select the wave's power (e.g., raise the power by 1 W/minute).
Layout of Signal Factors in an Orthogonal Array

Software Diagnosis Using Interaction
For the sake of convenience, we explain the procedure using an L36 orthogonal array; it also holds true for other types of arrays. According to procedure 3 in the preceding section, we allocate 11 two-level factors and 12 three-level factors to an L36 orthogonal array, as shown in Table 4.22. Although one-level factors are not assigned to the orthogonal array, they must be tested.
We set the two-level signal factors to A, B, ..., K, and the three-level signal factors to L, M, ..., W. Using the combinations of experiments illustrated in Table 4.22, we conduct a test and measure data about whether or not the software functions
properly for all 36 combinations. When we measure output data for tickets and change in selling tickets, we are obviously supposed to record 0 if both outputs are correct and 1 otherwise. In addition, we need to calculate an interaction for each case of 0 and 1.
Now suppose that the measurements taken are as shown in Table 4.22. In analyzing the data, we sum up the data for all combinations of A to W. The total number of combinations is 253, starting with AB and ending with VW. Since we do not have enough space to show all combinations, we show two-way tables for only six combinations (AB, AC, AL, AM, LM, and LW) in Table 4.23. For practical use, we can use a computer to create a two-way table for all combinations.
Table 4.23 includes six two-way tables, for AB, AC, AL, AM, LM, and LW, and each table comprises numbers representing a sum of the 0-or-1 data for the four conditions A1B1 (Nos. 1–9), A1B2 (Nos. 10–18), A2B1 (Nos. 19–27), and A2B2 (Nos. 28–36). Despite all 253 combinations of two-way tables, we illustrate only six. After we create two-way tables for all possible combinations, we need to consider the results.
Based on Table 4.23, we calculate combined effects. Now we need to examine two-way tables for combinations whose error ratio is 100%, because if an error in the software exists, it leads to a 100% error. In this case, using a computer, we should produce tables only for L2 and L3 for A2, M3 for A2, M2 for L3, W2 and W3 for L2, and W2 and W3 for L3. Of the 253 two-way tables, we output only the 100% error tables, so we do not need to output those for AB and AC.

We need to enumerate not only two-way tables but also all 100% error tables from all 253 possible tables, because bugs in software are caused primarily by combinations of signal factor effects. It is the software designer's job to correct the errors.
99
A
1
No.
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
B
2
C
3
D
4
E
5
F
6
G
7
H
8
Table 4.22
Layout and data of L36 orthogonal array
I
9
J
10
K
11
L
12
M
13
N
14
O
15
P
16
Q
17
R
18
S
19
T
20
U
21
V
22
W
23
Data
100
A
1
No.
25
26
27
28
29
30
31
32
33
34
35
36
B
2
C
3
D
4
E
5
F
6
G
7
H
8
I
9
J
10
K
11
L
12
M
13
N
14
O
15
P
16
Q
17
R
18
S
19
T
20
U
21
V
22
W
23
Data
101
Table 4.23
Supplemental tables

(1) A×B two-way table
         A1   A2   Total
B1        4    6    10
B2        4    8    12
Total     8   14    22

(2) A×C two-way table
         A1   A2   Total
C1        4    7    11
C2        4    7    11
Total     8   14    22

(3) A×L two-way table
         A1   A2   Total
L1        0    2     2
L2        4    6    10
L3        4    6    10
Total     8   14    22

(4) A×M two-way table
         A1   A2   Total
M1        1    6     7
M2        2    4     6
M3        5    4     9
Total     8   14    22

(5) L×M two-way table
         L1   L2   L3   Total
M1        0    4    3     7
M2        0    2    4     6
M3        2    4    3     9
Total     2   10   10    22

(6) L×W two-way table
         L1   L2   L3   Total
W1        1    2    2     5
W2        0    4    4     8
W3        1    4    4     9
Total     2   10   10    22
distance only in the unit cluster of a population and defining variable distance in a population in consideration of variable interrelationships among items.
The MTS method also defines a group of items close to their average as a unit cluster in such a way that we can use it to diagnose or monitor a corporation. In fact, both the MT and MTS methods have begun to be applied in various fields because they are outstanding methods of pattern recognition. What is most important in utilizing multidimensional information is to establish a fundamental database. That is, we should consider what types of items to select or which groups to collect to form the database. These issues should be determined by persons expert in a specialized field.

In this section we detail a procedure for streamlining a medical checkup or clinical examination using a database for a group of healthy people (referred to subsequently as normal people). Suppose that the total number of items used in the database is k. Using data for a group of normal people (for example, the data of people who are examined and found to be in good health in any year after annual medical checkups for three years in a row; if possible, data from hundreds of people are preferable), we create a scale to characterize the group.
As a scale we use the distance measure of P. C. Mahalanobis, an Indian statistician, introduced in his thesis in 1934. In calculating the Mahalanobis distance in
a certain group, the group needs to have homogeneous members. In other words,
a group consisting of abnormal people should not be considered. If the group
contains people with both low and high blood pressure, we should not regard the
data as a single distribution.
Indeed, we may consider a group of people suffering only from hepatitis type
A as being somewhat homogeneous; however, such a group still exhibits a wide
deviation. Now let's look at a group of normal people without hepatitis. For gender
and age we include both male and female and all adult age brackets. Male and
female are denoted by 0 and 1, respectively. Every item is treated as a quantitative
measurement in the calculation, so the 0 and 1 data for male and female are regarded
as continuous variables. For each item for normal people, we compute the average
value m and standard deviation σ and convert the data as follows. This is called normalization.
Y = (y - m)/σ    (4.153)
Suppose that we have n normal people. When we convert y1, y2, ..., yn into Y1,
Y2, ..., Yn using equation (4.153), Y1, Y2, ..., Yn have an average of 0 and a standard
deviation of 1, leading to easier understanding. Selecting two of the k items arbitrarily
and dividing the sum of the normalized products by n (that is, calculating the covariance),
we obtain a correlation coefficient.
After forming the matrix of correlation coefficients between each pair of the k items
and calculating its inverse matrix A = (aij), we compute the following square of D,
representing the Mahalanobis distance:

D² = (1/k) Σ aij Yi Yj    (i, j = 1, 2, ..., k)    (4.154)
Y1 = (y1 - m1)/σ1
Y2 = (y2 - m2)/σ2
...
Yk = (yk - mk)/σk    (4.155)
In these equations, for the k items of the group of normal people, the
average of each item is m1, m2, ..., mk and the standard deviation is σ1, σ2, ...,
σk. The data of the k items from one person, y1, y2, ..., yk, are normalized to obtain Y1,
Y2, ..., Yk. What is important here is that the person, whose identity as
normal or abnormal is unknown, is an arbitrary person. If the person is normal, D²
has a value of approximately 1; if not, it is much larger than 1. That is, the Mahalanobis distance D² indicates how far the person is from normal people.
For practical purposes, we substitute the following y (representing not the SN
ratio but the magnitude of N):

y = 10 log D²    (4.156)
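As a minimal numerical sketch of equations (4.153), (4.154), and (4.156), the following Python fragment builds the scale from a tiny, purely illustrative unit group of two items; the data are invented for illustration, not taken from the handbook:

```python
import math

# Hypothetical unit group of n = 6 "normal" people measured on k = 2
# items (say, systolic blood pressure and age); values are illustrative.
unit = [(120, 30), (130, 40), (110, 25), (125, 35), (115, 30), (135, 45)]
n, k = len(unit), 2

m = [sum(row[i] for row in unit) / n for i in range(k)]
s = [math.sqrt(sum((row[i] - m[i]) ** 2 for row in unit) / n) for i in range(k)]

def normalize(y):
    # Equation (4.153): Y = (y - m) / sigma, item by item
    return [(y[i] - m[i]) / s[i] for i in range(k)]

Z = [normalize(row) for row in unit]
r = sum(z1 * z2 for z1, z2 in Z) / n            # correlation coefficient
det = 1 - r * r
A = [[1 / det, -r / det], [-r / det, 1 / det]]  # inverse of [[1, r], [r, 1]]

def d_squared(y):
    # Equation (4.154): D^2 = (1/k) * sum_ij a_ij * Y_i * Y_j
    Y = normalize(y)
    return sum(A[i][j] * Y[i] * Y[j] for i in range(k) for j in range(k)) / k

avg = sum(d_squared(row) for row in unit) / n   # about 1 for the unit group
outlier = d_squared((180, 20))                  # much larger than 1
decibel = 10 * math.log10(outlier)              # equation (4.156)
```

By construction, members of the unit group average D² = 1, while an observation that violates the group's correlation pattern (here, very high blood pressure at a low age) scores far above 1, and far above 0 dB.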
Example
The example shown in Table 4.24 is not a common medical examination but a
special medical checkup to find patients with liver dysfunction, studied by Tatsuji
Table 4.24
Physiological examination items

Examination Item              Acronym    Normal Value
Total protein                 TP         6.5–7.5 g/dL
Albumin                       Alb        3.5–4.5 g/dL
Cholinesterase                ChE        0.60–1.00 pH
GOT                           GOT        2–25 units
GPT                           GPT        0–22 units
Lactate dehydrogenase         LDH        130–250 units
Alkaline phosphatase          ALP        2.0–10.0 units
γ-Glutamyltranspeptidase      γ-GTP      0–68 units
Leucine aminopeptidase        LAP        120–450 units
Total cholesterol             TCh        140–240 mg/dL
Triglyceride                  TG         70–120 mg/dL
Phospholipid                  PL         150–250 mg/dL
Creatinine                    Cr         0.5–1.1 mg/dL
Blood urea nitrogen           BUN        5–23 mg/dL
Uric acid                     UA         2.5–8.0 mg/dL
Kanetaka at Tokyo Teishin Hospital [6]. In addition to the 15 items shown in Table
4.24, age and gender are included. The total number of items is 17.
By selecting data from 200 people diagnosed as being in good health for three
years in a row at the annual medical checkup given by Tokyo's Teishin Hospital
(several hundred people would be desirable, but only 200 were chosen because of
the capacity of a personal computer), the researchers established a database of
normal people. The data of normal people who were healthy for two consecutive
years could also be used. Thus, we can consider the Mahalanobis distance based on the
database of 200 people. Some people assume that, on the basis of the raw data, D²
follows an F-distribution with an average of approximately 1, 17 degrees of freedom for
the numerator, and infinite degrees of freedom for the denominator; however, this
view misses the point, since the distribution type is not important. We simply compare
the Mahalanobis distance with the degree of each patient's dysfunction.
If we use decibel values in place of raw data, the data for a group of normal
people are expected to cluster around 0 dB with a range of a few decibels. To
minimize the loss caused by diagnosis error, we should determine a threshold. Next
we show a simple method of detecting judgment error.
Table 4.25 demonstrates a case of misdiagnosis in which healthy people with data
more than 6 dB away from those of the normal people are judged not normal. For
95 new persons coming to a medical checkup, Kanetaka analyzed the actual diagnostic
error by comparing the current diagnosis, a diagnosis using the Mahalanobis
distance, and a close (precise) examination.
Category 1 in Table 4.25 is considered normal. Category 2 comprises a group
of people who have no liver dysfunction but had ingested food or alcohol despite
being prohibited from doing so before the medical checkup. Therefore, category 2
should be judged normal, but the current diagnosis classified 12 of these 13 normal
people as abnormal. Indeed, the Mahalanobis method also misjudged 9 normal people
as abnormal, but this number is three less than that obtained with the
current method.
Category 3 consists of a group of people suffering from slight dysfunctions. Both
the current and Mahalanobis methods overlooked one abnormal person; both
of them detected 10 of the 11 abnormal people correctly.
For category 4, a cluster of 5 people suffering from moderate dysfunctions, both
methods found all persons. Since category 2 is a group of normal persons, combining categories 1 and 2 as the liver dysfunction (−) group and categories 3 and
4 as the liver dysfunction (+) group, we summarize the diagnostic errors of each method
in Table 4.26. For these contingency tables, the discriminability of each method (0: no discriminability; 1: 100% discriminability) is calculated by the following equations (see Separation System in Section 4.4 for the theoretical background). A1's discriminability:
ρ1 = [(28)(15) - (51)(1)]² / [(79)(16)(29)(66)] = 0.0563    (4.157)

A2's discriminability:

ρ2 = [(63)(15) - (16)(1)]² / [(79)(16)(64)(31)] = 0.344    (4.158)
Table 4.25
Medical checkup and discriminability

              A1: Current Method      A2: Mahalanobis Method
Category^a    Normal    Abnormal      Normal    Abnormal        Total
1             27        39            59        7               66
2             1         12            4         9               13
3             1         10            1         10              11
4             0         5             0         5               5
Total         29        66            64        31              95

^a 1, Normal; 2, normal but temporarily abnormal due to food and alcohol; 3, slightly abnormal;
4, moderately abnormal.
Table 4.26
2 × 2 contingency tables

A1: Current method

Diagnosis of               Judged      Judged
liver dysfunction          Normal      Abnormal    Total
(−) Normal                 28          51          79
(+) Abnormal               1           15          16
Total                      29          66          95

A2: Mahalanobis method

Diagnosis of               Judged      Judged
liver dysfunction          Normal      Abnormal    Total
(−) Normal                 63          16          79
(+) Abnormal               1           15          16
Total                      64          31          95
A1's SN ratio:

η1 = 10 log [ρ1/(1 - ρ1)] = 10 log (0.0563/0.9437) = -12.2 dB    (4.159)

A2's SN ratio:

η2 = 10 log [ρ2/(1 - ρ2)] = 10 log (0.344/0.656) = -2.8 dB    (4.160)
Design of General Pattern Recognition and Evaluation Procedure
MT METHOD
There is a large gap between science and technology. The former is a quest for
truth; the latter does not seek truth but invents means of attaining an objective
function. In mechanical, electronic, and chemical engineering, technical means
are described in a systematic manner for each objective function. For the same
objective function there are various types of means, which do not exist in nature.
A product planning department considers types of functions. The term product
planning, covering both hardware and software, means planning the products to be released
in the marketplace, most of which are related to active functions. However, there
are quite a few passive functions, such as inspection, diagnosis, and prediction. In
evaluating an active function, we focus on the magnitude of deviation from its
ideal function: using the SN ratio, we perform a functional evaluation that indicates
the magnitude of error.
In contrast, a passive function generally lies in the multidimensional world,
including time variables. Although our objective is obviously pattern recognition
in the multidimensional world, we need a brand-new method of summarizing multidimensional data. To summarize multidimensional data, quality engineering
(more properly, the Taguchi method) gives the following procedures:
Procedure 1: Select items, including time-series items. Up to several thousand
items can be analyzed by computer.
Procedure 2: Select the zero point and unit values and the base space to determine a
pattern.
Procedure 3: Summarizing measurement items from the base space only,
formulate the equation to calculate the Mahalanobis distance. The Mahalanobis space determines the origin and unit value of the multidimensional
measurement.
Procedure 4: For an object that does not belong to the base space, compute
the Mahalanobis distance and assess its accuracy using an SN ratio.
Procedure 5: Considering cost and the SN ratio, sort out the items necessary
for optimal diagnosis and a proper prediction method.
Among these, specialists are responsible for procedures 1 and 2, and quality
engineering takes care of procedures 3, 4, and 5. In quality engineering, any quality problem is attributed to functional variability; that is, functional improvement
will lead to solving any type of quality problem. Therefore, we determine that we
should improve functionality by changing various types of design conditions (here,
when dealing with multiple variables, by selecting the items and the base space).
Since the Mahalanobis distance (variance) rests on the calculation of all items
selected, we set up the following control-factor levels for each item or for each
group of items and, in most cases, allocate them to a two-level orthogonal array
when we study how to sort items:

First factor level: the item (or group of items) is used.
Second factor level: the item (or group of items) is not used.
We do not need to sort items regarded as essential. Now suppose that the
number of items (or groups of items) to be studied is l and an appropriate two-level
orthogonal array is LN. When l is around 30, L32 is used; when l is around 100, a
larger array such as L128 is selected.
Once the control factors are assigned to LN, we formulate an equation to calculate
the Mahalanobis distance using the selected items, because each experimental condition
in the orthogonal array shows an assignment of items. Following procedure
4, we compute the SN ratio and, from the SN ratios of the experiments in
the orthogonal array LN, calculate each control factor's effect. If certain chosen items
contribute negatively or little to improving the SN ratio, we should select an optimal condition that excludes those items. This is the procedure for sorting the items
used for the Mahalanobis distance.
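The sorting procedure above can be sketched as follows. The L8 layout is the standard two-level orthogonal array; the item names are borrowed from Table 4.24, but the sn_ratio function is only a stand-in for the SN ratio that procedure 4 would actually compute from Mahalanobis distances of known-abnormal samples:

```python
# Standard two-level L8 orthogonal array (7 columns); level 1 = "item used",
# level 2 = "item not used".
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]
items = ["TP", "Alb", "ChE", "GOT", "GPT", "LDH", "ALP"]

def sn_ratio(selected):
    # Placeholder for procedure 4; here we pretend a few items carry
    # information and the rest add a little noise. Gains are invented.
    gain = {"GOT": 6.0, "GPT": 4.0, "ChE": 1.0}
    return sum(gain.get(item, -0.5) for item in selected)

run_sn = [sn_ratio([it for it, lv in zip(items, run) if lv == 1]) for run in L8]

# Factor effect of each item: average SN at level 1 minus average at level 2.
effects = {}
for col, item in enumerate(items):
    lvl1 = [sn for run, sn in zip(L8, run_sn) if run[col] == 1]
    lvl2 = [sn for run, sn in zip(L8, run_sn) if run[col] == 2]
    effects[item] = sum(lvl1) / len(lvl1) - sum(lvl2) / len(lvl2)

# Optimal condition: keep only items that improve the SN ratio.
keep = [item for item in items if effects[item] > 0]
```

Because the array is orthogonal, each item's effect is isolated even though all seven assignments vary at once; items with nonpositive effects are excluded from the optimal condition.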
MTS METHOD
In the MTS method, instead of inverting the correlation matrix, we orthogonalize
the normalized variables X1, X2, ..., Xk one by one (a Gram–Schmidt process). The
first variable is used as is:

x1 = X1    (4.161)

The regression coefficient of X2 on x1 is

b21 = (1/n)(X21x11 + X22x12 + ... + X2nx1n)    (4.162)

In this case b21 is not only the regression coefficient but also the correlation coefficient. Then the remaining part of X2, excluding the part related to x1
(the regression part), that is, the part independent of x1, is

x2 = X2 - b21x1    (4.163)
The products of x1j and x2j sum to zero:

Σ x1j x2j = Σ x1j (X2j - b21x1j) = 0    (j = 1, ..., n)    (4.164)

Thus, x1 and x2 are orthogonal. On the other hand, whereas x1's variance is
1, x2's variance σ2², called the residual contribution, is calculated by the equation

σ2² = 1 - b21²    (4.165)
The reason is that if we compute the mean square of the residuals, we obtain the
following result:

(1/n) Σ (X2j - b21x1j)²
  = (1/n) Σ X2j² - (2b21/n) Σ X2j x1j + (b21²/n) Σ x1j²
  = 1 - 2b21² + b21²
  = 1 - b21²    (4.166)
For the third variable, the coefficients with respect to x1 and x2 are

b31 = (1/(nV1)) Σ X3j x1j    (4.167)

V1 = (1/n) Σ x1j²    (4.168)

V2 = (1/n) Σ x2j²    (4.169)

b32 = (1/(nV2)) Σ X3j x2j    (4.170)

and the remaining part of X3, independent of both x1 and x2, is

x3 = X3 - b31x1 - b32x2    (4.171)

with variance

V3 = (1/n) Σ x3j²    (4.172)
In the Mahalanobis space, x1, x2, and x3 are orthogonal. Since we have already
proved the orthogonality of x1 and x2, we prove here that of x1 and x3 and that of
x2 and x3. First, considering the orthogonality of x1 and x3, we have

Σ x1j x3j = Σ x1j (X3j - b31x1j - b32x2j) = Σ x1j X3j - b31 Σ x1j² = 0    (4.173)

This follows from equation (4.167) defining b31, together with the orthogonality
of x1 and x2. Similarly, the orthogonality of x2 and x3 is proved according to equation (4.170), defining b32:

Σ x2j x3j = Σ x2j X3j - b32 Σ x2j² = 0    (4.174)
For the remaining variables, we can proceed with similar calculations. The variables normalized in the preceding section can be rewritten as follows:

x1 = X1
x2 = X2 - b21x1
x3 = X3 - b31x1 - b32x2    (4.175)
The variances of these orthogonalized variables are

V1 = (1/n)(x11² + x12² + ... + x1n²)
V2 = (1/n)(x21² + x22² + ... + x2n²)
...
Vk = (1/n)(xk1² + xk2² + ... + xkn²)    (4.176)
Now, setting the normalized, orthogonal variables to y1, y2, ..., yk, we obtain

y1 = x1/√V1
y2 = x2/√V2
...
yk = xk/√Vk    (4.177)
All of the normalized y1, y2, ..., yk are orthogonal and have a variance of 1. The
completed database contains m and σ as the mean and standard deviation of the
initial observations, and b21, V2, b31, b32, V3, ..., bk1, bk2, ..., bk(k-1), Vk as the
coefficients and variances for normalization. In all, the database therefore holds

k + k + (k - 1)k/2 + k = k(k + 5)/2    (4.178)

constants: k means, k standard deviations, k(k - 1)/2 coefficients b, and k variances V.
Because the y's are orthogonal and have unit variance, their correlation matrix is
the unit matrix:

R = | 1 0 ... 0 |
    | 0 1 ... 0 |
    | . . ... . |
    | 0 0 ... 1 |    (4.179)

and its inverse is likewise the unit matrix:

A = R⁻¹ = | 1 0 ... 0 |
          | 0 1 ... 0 |
          | . . ... . |
          | 0 0 ... 1 |    (4.180)

Therefore, the Mahalanobis distance reduces to

D² = (1/k)(y1² + y2² + ... + yk²)    (4.181)
Now, for an arbitrary observation, setting the measured data, already subtracted by
m and divided by σ, to X1, X2, ..., Xk, we first compute x1, x2, ..., xk:

x1 = X1
x2 = X2 - b21x1
x3 = X3 - b31x1 - b32x2
...
xk = Xk - bk1x1 - ... - bk(k-1)x(k-1)    (4.182)

and then

y1 = x1/√V1
y2 = x2/√V2
...
yk = xk/√Vk    (4.183)
When we calculate y1, y2, ..., yk and D² in equation (4.181) for an arbitrary
object, D² is supposed to take a value of approximately 1 if the object belongs to the
Mahalanobis space, as discussed earlier. Otherwise, D² becomes much larger than 1
in most cases.
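The construction of equations (4.161) to (4.183) can be sketched in Python as follows; the three-item unit-space data are invented for illustration:

```python
import math

# Illustrative unit-space observations on k = 3 items (not from the handbook).
raw = [(1.0, 2.1, 0.9), (1.5, 2.9, 1.4), (0.8, 1.7, 1.1),
       (1.2, 2.4, 0.8), (0.9, 2.0, 1.3), (1.4, 2.6, 1.0)]
n, k = len(raw), 3

m = [sum(r[i] for r in raw) / n for i in range(k)]
s = [math.sqrt(sum((r[i] - m[i]) ** 2 for r in raw) / n) for i in range(k)]
cols = [[(r[i] - m[i]) / s[i] for r in raw] for i in range(k)]  # X1..Xk

# Orthogonalize: x1 = X1; each later x_i removes its regression on the
# previous x_j, as in equation (4.182).
x = [cols[0]]
V = [sum(v * v for v in x[0]) / n]   # V1 = 1 by construction
for i in range(1, k):
    resid = cols[i][:]
    for j in range(i):
        # b_ij as in equations (4.167) and (4.170)
        bij = sum(cols[i][t] * x[j][t] for t in range(n)) / (n * V[j])
        resid = [resid[t] - bij * x[j][t] for t in range(n)]
    x.append(resid)
    V.append(sum(v * v for v in resid) / n)

def d_squared(t):
    # Equation (4.181): D^2 = (1/k)(y1^2 + ... + yk^2), y_i = x_i / sqrt(V_i)
    return sum((x[i][t] / math.sqrt(V[i])) ** 2 for i in range(k)) / k

avg = sum(d_squared(t) for t in range(n)) / n   # about 1 for unit members
```

In practice the means, standard deviations, b coefficients, and variances V would be stored as the database counted in equation (4.178) and applied in the same order to any new observation.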
Once the normalized orthogonal variables y1, y2, ... , yk are computed, the next
step is the selection of items.
A1: only y1 is used
A2: y1 and y2 are used
and so on. For the signal values and Mahalanobis distances of Table 4.27, the total
variation is

ST = D1² + D2² + ... + Dl²  (f = l)    (4.184)

the variation of the proportional term is

Sβ = (M1D1 + M2D2 + ... + MlDl)²/r  (f = 1)    (4.185)

Effective divider:

r = M1² + M2² + ... + Ml²    (4.186)

Error variation:

Se = ST - Sβ  (f = l - 1)    (4.187)
Table 4.27
Signal values and Mahalanobis distances

Signal value:           M1    M2    ...    Ml
Mahalanobis distance:   D1    D2    ...    Dl
The SN ratio is

η = 10 log [(1/r)(Sβ - Ve)]/Ve    (4.188)

where Ve = Se/(l - 1), and the slope is estimated as

β = (M1D1 + M2D2 + ... + MlDl)/r    (4.189)

Since D = βM, for an observed distance D we estimate

M = D/β    (4.190)
In addition, for A1, A2, ..., Ak-1, we need to compute the SN ratios η1, η2, ..., ηk-1.
Table 4.28 summarizes the result. According to the table, we determine the number of items by balancing SN ratio and cost. The cost is not the calculation cost
but the measurement cost of the items. We do not explain here the use of loss functions to select items.
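Treating the distance D as the response to the signal M, the decomposition of equations (4.184) to (4.189) can be computed as in this Python sketch; the M and D values are invented for illustration:

```python
import math

M = [1.0, 2.0, 3.0]   # known degrees of abnormality (signal levels)
D = [1.1, 1.9, 3.2]   # measured Mahalanobis distances for those signals

l = len(M)
ST = sum(d * d for d in D)                               # f = l      (4.184)
r = sum(mi * mi for mi in M)                             # divider    (4.186)
S_beta = sum(mi * di for mi, di in zip(M, D)) ** 2 / r   # f = 1      (4.185)
Se = ST - S_beta                                         # f = l - 1  (4.187)
Ve = Se / (l - 1)
eta = 10 * math.log10((S_beta - Ve) / r / Ve)            # SN ratio   (4.188)
beta = sum(mi * di for mi, di in zip(M, D)) / r          # slope      (4.189)
```

A high SN ratio means the distance tracks the severity of abnormality nearly proportionally, so a measured D can be converted back into an estimated severity through the slope.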
Summary of Partial MD Groups: Countermeasure for Collinearity
An approach that summarizes the distances calculated from subsets of items can be
applied widely to solve collinearity problems. The reason is that we are free to
choose the type of scale to use: what matters is selecting a scale that expresses
patients' conditions accurately, no matter what correlations exist in the Mahalanobis space. The point is to be consistent with the patients' conditions as diagnosed by
doctors.
Now let's go through a new construction method. For example, suppose that
we have 0/1 data in a 64 × 64 grid for computer recognition of handwriting. If we use the 0's and 1's directly, 4096 elements exist. Thus, the Mahalanobis
space formed by a unit set (suppose that we are dealing with data on whether a
computer can recognize a character A as A; for example, 200 sets in total if 50
people each write the character four times) has a 4096 × 4096 matrix. This takes
too long to handle with current computer capability.
How to reduce the character information is an issue of information system design.
In fact, a technique has already been proposed that substitutes only 128 data items,
consisting of 64 column sums and 64 row sums, for all the data in the 64 × 64 grid.
Since the totals of the 64 column sums and of the 64 row sums are identical, they are
collinear. Therefore, we cannot create a 128 × 128 unit space by using the
relationship 64 + 64 = 128.
In another method, we first create a unit space using the 64 row sums and introduce
its Mahalanobis distance, then create a unit space using the 64 column sums and
introduce its Mahalanobis distance in the same way.

Table 4.28
Number of items, SN ratio, and cost

Number of items:   1     2     3     ...    k
SN ratio:          η1    η2    η3    ...    ηk
Cost:              C1    C2    C3    ...    Ck
For the 30 data of Table 4.29, the total variation is

ST = D11² + D12² + ... + D3,10²  (f = 30)    (4.191)

Signal effect:

SM = (D1² + D2² + D3²)/10  (f = 3)    (4.192)

VM = SM/3    (4.193)

Error variation:

Se = ST - SM  (f = 27)    (4.194)

Ve = Se/27    (4.195)

where D1, D2, and D3 are the signal totals in Table 4.29.
By calculating row sums and column sums separately, we compute the following
SN ratio using the averages VM and Ve:

η = 10 log [(1/10)(VM - Ve)]/Ve    (4.196)

The error variance, the reciprocal of the antilog value, can be computed as

σ² = 10^(-η/10)    (4.197)
The SN ratio for selection of items is calculated by setting the following two levels:
Level 1: item used
Level 2: item not used
We allocate them to an orthogonal array. By calculating the SN ratio using equation (4.196), we choose optimal items.
Table 4.29
Mahalanobis distances for signals

                         Noise
Signal      N1      N2      ...     N10      Total
M1 (D)      D11     D12     ...     D1,10    D1
M2 (E)      D21     D22     ...     D2,10    D2
M3 (R)      D31     D32     ...     D3,10    D3
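The decomposition of equations (4.191) to (4.196) for a table like Table 4.29 can be sketched as follows; the values below are invented, with three signal levels by ten noise conditions:

```python
import math

# Three signal levels (rows) by ten noise conditions (columns);
# entries play the role of the D_ij of Table 4.29. Values are illustrative.
data = [
    [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1, 0.9, 1.0, 1.0],
    [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9, 2.0, 2.0],
    [3.9, 4.1, 4.0, 3.8, 4.2, 4.0, 3.9, 4.1, 4.0, 4.0],
]
ST = sum(v * v for row in data for v in row)     # f = 30   (4.191)
totals = [sum(row) for row in data]              # D1, D2, D3
SM = sum(t * t for t in totals) / 10             # f = 3    (4.192)
VM = SM / 3                                      #          (4.193)
Se = ST - SM                                     # f = 27   (4.194)
Ve = Se / 27                                     #          (4.195)
eta = 10 * math.log10((VM - Ve) / (10 * Ve))     #          (4.196)
```

The larger the separation between the three signal levels relative to the spread across noise conditions, the higher the SN ratio.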
Example
Although automobile keys are generally produced as a set of four identical keys,
this production system is regarded as dynamic because each set has a different
dimension and shape. Each key has approximately 9 to 11 cuts, and each cut has
several possible dimensions. If there are four different dimensions with a 0.5-mm
step, a key with 10 cuts has 4^10 = 1,048,576 variations.
In actuality, each key is produced such that it has a few cuts of different dimensions,
so the number of key types in the market is approximately 10,000.
Each key set is produced according to the dimensions indicated by a computer. By
using a master key we can check whether a certain key is produced as indicated
by the computer. Additionally, to manage a machine, we can examine particular-
The checkup and adjustment cost per product is

B/n0 + C/u0    (4.198)

In addition, we need a monetary evaluation of the quality level according to dimensional variability. When the adjustment limit is D0 = 20 μm, the dimensions are
regarded as being distributed uniformly within the limit, because they are not controlled in that range. The variance is as follows:

σ² = D0²/3    (4.199)
On the other hand, the magnitude of dispersion for an inspection interval n0 and
an inspection time lag l is

((n0 + 1)/2 + l)(D0²/u0)    (4.200)

so that the total variance is

σ² = D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)    (4.201)
Adding equations (4.198) and (4.201), the latter converted into money through the
loss coefficient A/Δ², we calculate the following economic loss L0 for the current
control system, 33.32 cents per product:

L0 = B/n0 + C/u0 + (A/Δ²)[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)]
   = 12/300 + 58/19,560 + (1.90/30²)[20²/3 + ((300 + 1)/2 + 50)(20²/19,560)]
   = 33.32 cents    (4.202)
Assuming that the annual operating time is 1600 hours and production is 300 sets
per hour, we can see that in the current system the following amount of money is
spent annually to control quality:

(33.32 cents)(300)(1600) ≈ $160,000    (4.203)
The standard deviation of the measurement error is

σm = 2 μm    (4.204)
The optimal control system improves on the loss in equation (4.203). In short, it
is equivalent to determining an optimal inspection interval n and adjustment limit
D, both of which are calculated by the following formulas:

n = √(2u0B/A) (Δ/D0)
  = √[(2)(19,560)(12)/1.90] (30/20)    (4.205)
  = 745 → 600 (twice an hour)    (4.206)

D = [(3C/A)(Δ²D0²/u0)]^(1/4)    (4.207)
  = [((3)(58)/1.90)(30²)(20²)/19,560]^(1/4)    (4.208)
  = 6.4 → 7.0 μm    (4.209)
Then, by setting the optimal measurement interval to 600 sets and the adjustment limit
to 7.0 μm, we can reduce the loss L:

L = B/n + C/u + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/u)]    (4.210)

where the mean adjustment interval predicted for the new limit is

u = u0(D²/D0²) = (19,560)(7²/20²) = 2396    (4.211)
Therefore, we need to change the adjustment interval from the current level of 65
hours to 8 hours. However, the total loss L decreases as follows:

L = 12/600 + 58/2396 + (1.90/30²)[7²/3 + ((600 + 1)/2 + 50)(7²/2396)]    (4.212)
  = 9.4 cents    (4.213)
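Assuming the loss-function form of equation (4.210) and the constants stated in the text (A = $1.90, Δ = 30 μm, B = $12, C = $58, n0 = 300, u0 = 19,560, D0 = 20 μm, l = 50), the whole calculation from the current loss to the improved loss can be replayed in Python:

```python
import math

A, Delta, B, C = 1.90, 30.0, 12.0, 58.0   # loss, tolerance, checkup, adjustment
n0, u0, D0, lag = 300, 19560, 20.0, 50

def loss(n, u, D):
    # Equation (4.210): L = B/n + C/u + (A/Delta^2)[D^2/3 + ((n+1)/2 + l)D^2/u]
    return B / n + C / u + (A / Delta ** 2) * (
        D ** 2 / 3 + ((n + 1) / 2 + lag) * D ** 2 / u)

L0 = loss(n0, u0, D0)   # current loss, equation (4.202): about 33.3 cents

# Optimal checking interval, equation (4.205)
n_opt = math.sqrt(2 * u0 * B / A) * Delta / D0             # about 745 -> use 600
# Optimal adjustment limit, equation (4.207)
D_opt = ((3 * C / A) * Delta ** 2 * D0 ** 2 / u0) ** 0.25  # about 6.4 -> use 7.0
u_new = u0 * 7.0 ** 2 / D0 ** 2    # predicted adjustment interval (4.211)
L_new = loss(600, u_new, 7.0)      # improved loss (4.212): about 9.4 cents
```

The script reproduces the figures in the text: about 33.3 cents per product currently, an optimal interval of about 745 rounded down to 600 sets, a limit of about 6.4 rounded up to 7.0 μm, and an improved loss of about 9.4 cents.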
Management in Manufacturing
The procedure described in the preceding section is a technique for solving the balancing equation between checkup and adjustment costs and the necessary quality level,
leading finally to the optimal allocation of operators in a production plant. Now, given
that a checkup takes 10 minutes and an adjustment 30 minutes, the daily work time
required under the optimal system (an 8-hour shift) is:
(10 min)(daily no. of checkups) + (30 min)(daily no. of adjustments)
  = (10)(8)(300)/n + (30)(8)(300)/u    (4.215)
  = (10)(2400/600) + (30)(2400/2396)
  = 40 + 30.1 = 70 minutes    (4.216)
Assuming that one shift lasts 8 hours, the following number of workers is required:

70/[(8)(60)] = 0.146 worker    (4.217)

which implies that one operator is sufficient. By comparison, the time required under
the current system is

(10)(2400/300) + (30)(2400/19,560) = 80 + 3.6 = 83.6 minutes    (4.218)

or

83.6/[(8)(60)] = 0.174 worker    (4.219)
so we can reduce the number of workers by only 0.028 in this process. On the
other hand, to compute the process capability index Cp, we estimate the current
standard deviation σ0 using the standard deviation σm of the measurement error:

σ0 = √[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0) + σm²]    (4.220)
   = √[20²/3 + ((300 + 1)/2 + 50)(20²/19,560) + 2²]
   = 11.9 μm    (4.221)
Against the tolerance of ±30 μm, the current process capability is

Cp = (2)(30)/[(6)(11.9)] = 0.84    (4.222)

Under the optimal system, the standard deviation becomes

σ = √[7²/3 + ((600 + 1)/2 + 50)(7²/2396) + 2²] = 5.24 μm    (4.223)
Then we can not only reduce the required workforce by 0.028 worker but also
enhance the process capability index from the current 0.84 to

Cp = (30)(2)/[(6)(5.24)] = 1.91    (4.224)
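The labor and process-capability figures can be checked with a short Python script; the lag l = 50 and measurement error σm = 2 μm follow the values used above, and the 10- and 30-minute task times are as stated in the text:

```python
import math

daily = 8 * 300   # sets produced per 8-hour shift at 300 sets/hour

def minutes(n, u):
    # Equation (4.215): 10 min per checkup plus 30 min per adjustment
    return 10 * daily / n + 30 * daily / u

new = minutes(600, 2396)     # about 70 minutes  -> 0.146 worker
old = minutes(300, 19560)    # about 83.6 minutes -> 0.174 worker
workers_saved = (old - new) / (8 * 60)   # about 0.028 worker

def sigma(n, u, D, lag=50, sigma_m=2.0):
    # Equation (4.220): sigma^2 = D^2/3 + ((n+1)/2 + lag)D^2/u + sigma_m^2
    return math.sqrt(D ** 2 / 3 + ((n + 1) / 2 + lag) * D ** 2 / u
                     + sigma_m ** 2)

Cp_old = (2 * 30) / (6 * sigma(300, 19560, 20.0))   # about 0.84, eq. (4.222)
Cp_new = (2 * 30) / (6 * sigma(600, 2396, 7.0))     # about 1.91, eq. (4.224)
```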
As compared to 3 times, 1.65 times implies that we can lower the retail price
0.55-fold. This estimation is grounded on the assumption that if we sell a product
at 45% off the original price, the sales volume doubles. This holds true when we
offer new jobs to half the workers, with the sales volume remaining at the same
level. Now, provided that the mean adjustment interval decreases to one-fourth of
its value because of increased variability after the production speed is raised, how does
the cost eventually change? When the frequency of machine trouble quadruples, u
decreases from its current level of 2396 to 599, one-fourth of 2396. Therefore, we
alter the current levels of u0 and D0 to u0 = 599 and D0 = 7 μm. Taking these into
account, we reconsider the loss function. The unit cost is cut 0.6-fold and the
production volume is doubled. In general, as production conditions change, the
optimal inspection interval and adjustment limit also change; thus, we need to
recalculate n and D here. Substituting 0.6A for A in the formulas, we obtain
n = √[2u0B/(0.6A)] (Δ/D0)
  = √[(2)(599)(12)/((0.6)(1.90))] (30/7)
  = 481 → 600 (once an hour)    (4.226)

D = [(3C/(0.6A))(Δ²D0²/u0)]^(1/4)
  = [((3)(58)/((0.6)(1.90)))(30²)(7²)/599]^(1/4)
  = 10.3 → 10.0 μm    (4.227)

u = (599)(10²/7²) = 1222    (4.228)
L = B/n + C/u + (0.6A/Δ²)[D²/3 + ((n + 1)/2 + 2l)(D²/u)]    (4.229)
  = 12/600 + 58/1222 + ((0.6)(1.90)/30²)[10²/3 + ((600 + 1)/2 + 100)(10²/1222)]    (4.230)
  = 15.1 cents    (4.231)
Suppose that the production volume remains the same: (300)(1600) = 480,000
sets. Counting the unit cost plus the loss per set, the current system costs
1.90 + 0.09 = $1.99 per set, and the new one costs 1.14 + 0.15 = $1.29. We can
therefore save the following amount of money on an annual basis:

(1.99 - 1.29)(480,000) ≈ $336,000    (4.232)
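Assuming the same formulas with A replaced by 0.6A, u0 = 599, and a doubled lag, the re-optimization of equations (4.226) to (4.231) and the annual saving can be verified in Python; the $0.094 current loss per set comes from equation (4.213):

```python
import math

A, B, C, Delta = 0.6 * 1.90, 12.0, 58.0, 30.0
u0, D0, lag = 599, 7.0, 100   # lag doubled from 50 to 100

n_opt = math.sqrt(2 * u0 * B / A) * Delta / D0             # about 481 -> use 600
D_opt = ((3 * C / A) * Delta ** 2 * D0 ** 2 / u0) ** 0.25  # about 10.3 -> use 10
u_new = u0 * 10.0 ** 2 / D0 ** 2                           # about 1222  (4.228)

L = (B / 600 + C / u_new
     + (A / Delta ** 2) * (10.0 ** 2 / 3
                           + (601 / 2 + lag) * 10.0 ** 2 / u_new))

# Per-set comparison: current 1.90 + 0.094 versus new 1.14 + L,
# over 480,000 sets per year.
saving = ((1.90 + 0.094) - (0.6 * 1.90 + L)) * 480000
```

The per-set comparison (about $1.99 versus about $1.29) gives roughly a $0.70 saving per set, or about $336,000 per year, matching equation (4.232).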
References
1. Genichi Taguchi, 1987. System of Experimental Design. Dearborn, Michigan: UNIPUB/American Supplier Institute.
2. Genichi Taguchi, 1984. Reliability Design Case Studies for New Product Development. Tokyo: Japanese Standards Association.
3. Genichi Taguchi et al., 1992. Technology Development for Electronic and Electric Industries. Quality Engineering Application Series. Tokyo: Japanese Standards Association.
4. Measurement Management Simplification Study Committee, 1984. Parameter Design for New Product Development. Tokyo: Japanese Standards Association.
5. Genichi Taguchi et al., 1989. Quality Engineering Series, Vol. 2. Tokyo: Japanese Standards Association.
6. Tatsuji Kanetaka, 1987. An application of Mahalanobis distance. Standardization and Quality Control, Vol. 40, No. 10.
We have and are practicing the methods and teachings in this valuable handbook;
it simply works.
Dr. Aly A. Badawy
Vice President, Engineering
TRW Automotive
To have all of the theories and applications of a lifetime of learning from these
three gentlemen in one book is amazing. If it isn't in here, you probably don't
need it.
Joseph P. Sener, P.E.
Vice President, Business Excellence
Baxter Healthcare
Taguchi's Quality Engineering Handbook is a continuation of the legacy of Dr. Taguchi, Subir Chowdhury, and Yuin Wu in Robust Engineering, which is increasingly
being recognized in the automotive industry as a must-have tool to be
competitive.
Dan Engler
Vice President, Engineering and Supply Management
Mark IV Automotive
Taguchi's principles, methods, and techniques are a fundamental part of the
foundation of the most highly effective quality initiatives in the world!
Rob Lindner
Vice President, Quality, CI and Consumer Service
Sunbeam Products, Inc.
The value of Dr. Genichi Taguchi's thinking and his contributions to improving
the productivity of the engineering profession cannot be overstated. This handbook captures the essence of his enormous contribution to the betterment of
mankind.
John J. King
Engineering Methods Manager, Ford Design Institute
Ford Motor Company
I am glad to see that Taguchi's Quality Engineering Handbook will now be available
for students and practitioners of the quality engineering field. The editors have
done a great service in bringing out this valuable handbook for dissemination of
Taguchi Methods in quality improvement.
C. R. Rao, Sc.D., F.R.S.
Emeritus Eberly Professor of Statistics and Director of the Center for Multivariate Analysis
The Pennsylvania State University
This book is very practical, and I believe the technical community will improve its
competence through the excellent case studies from various authors in this book.
Takeo Koitabashi
Executive Vice President
Konica-Minolta
Digitization, color, and multifunction are the absolute trend in imaging functions
today. It is becoming more and more difficult for product development to catch
up with market demand. In this environment, at Fuji Xerox, Taguchi's Quality
Engineering described in this handbook is regarded as an absolutely necessary
tool to develop technology to meet this ever-changing market demand.
Kiyoshi Saitoh
Corporate Vice President, Technology & Development
Fuji Xerox Co., Ltd.
In today's IT environment, it is extremely critical for technical leaders to
provide an effective strategic direction. We position Taguchi Methods of Robust
Engineering as a trump card for resolving difficulty during any stage of technology
development. Technology is advancing rapidly, and competition is becoming global
day by day. In order to survive and win in this environment, we use the Taguchi
Methods of Robust Engineering described in this handbook to continue developing highly reliable products, one after another.
Haruo Kamimoto
Executive Vice President
Ricoh
I was shocked when I encountered Taguchi Methods 30 years ago. This
book will provide a trump card for achieving improved product cost and product
quality simultaneously, with less R&D cost.
Takeshi Inoo
Executive Director
Isuzu Motor Co.
Director
East Japan Railroad Co.
Taguchi's Quality Engineering Handbook
Genichi Taguchi
Subir Chowdhury
Yuin Wu
Associate Editors
Shin Taguchi and Hiroshi Yano
Copyright 2005 by Genichi Taguchi, Subir Chowdhury, Yuin Wu. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system, or transmitted
in any form or by any means, electronic, mechanical, photocopying, recording, scanning,
or otherwise, except as permitted under Section 107 or 108 of the 1976 United States
Copyright Act, without either the prior written permission of the Publisher, or
authorization through payment of the appropriate per-copy fee to the Copyright Clearance
Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470,
or on the web at www.copyright.com. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street,
Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail:
permcoordinator@wiley.com.
Limit of Liability / Disclaimer of Warranty: While the publisher and author have used their
best efforts in preparing this book, they make no representations or warranties with
respect to the accuracy or completeness of the contents of this book and specically
disclaim any implied warranties of merchantability or tness for a particular purpose. No
warranty may be created or extended by sales representatives or written sales materials.
The advice and strategies contained herein may not be suitable for your situation. You
should consult with a professional where appropriate. Neither the publisher nor author
shall be liable for any loss of prot or any other commercial damages, including but not
limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support,
please contact our Customer Care Department within the United States at (800) 762-2974,
outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears
in print may not be available in electronic books. For more information about Wiley
products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Taguchi, Genichi, 1924
Taguchi's quality engineering handbook / Genichi Taguchi, Subir Chowdhury,
Yuin Wu.
p. cm.
Includes bibliographical references and index.
ISBN 0-471-41334-8
1. Quality control--Statistical methods. 2. Production management--Quality
control. 3. Engineering design. I. Chowdhury, Subir. II. Wu, Yuin. III.
Title.
TS156.T343 2004
658.562--dc22
2004011335
Printed in the United States of America.
10 9 8 7 6 5 4 3 2 1
Contents
Preface  xxv
Acknowledgments  xxix
About the Authors  xxxi
SECTION 1
THEORY
Part I
Genichi Taguchi's Latest Thinking
1
The Second Industrial Revolution and Information
Technology
2
Management for Quality Engineering
25
3
Quality Engineering: Strategy in Research and
Development
39
4
Quality Engineering: The Taguchi Method
Part II
Quality Engineering: A Historical Perspective
56
125
5
Development of Quality Engineering in Japan
127
6
History of Taguchi's Quality Engineering in the United
States
153
Part III
Quality Loss Function
169
7
Introduction to the Quality Loss Function
171
8
Quality Loss Function for Various Quality Characteristics
180
9
Specification Tolerancing
192
10
Tolerance Design
208
Part IV
Signal-to-Noise Ratio
221
11
Introduction to the Signal-to-Noise Ratio
223
12
SN Ratios for Continuous Variables
239
13
SN Ratios for Classified Attributes
290
Part V
Robust Engineering
311
14
System Design
313
15
Parameter Design
318
16
Tolerance Design
340
17
Robust Technology Development
352
18
Robust Engineering: A Manager's Perspective
377
19
Implementation Strategies
389
Part VI
Mahalanobis–Taguchi System (MTS)
395
20
Mahalanobis–Taguchi System
397
Part VII
Software Testing and Application
423
21
Application of Taguchi Methods to Software System
Testing
425
Part VIII
On-Line Quality Engineering
435
22
Tolerancing and Quality Level
437
23
Feedback Control Based on Product Characteristics
454
24
Feedback Control of a Process Condition
468
25
Process Diagnosis and Adjustment
474
Part IX
Experimental Regression
483
26
Parameter Estimation in Regression Equations
485
Part X
Design of Experiments
501
27
Introduction to Design of Experiments
503
28
Fundamentals of Data Analysis
506
29
Introduction to Analysis of Variance
515
30
One-Way Layout
523
31
Decomposition to Components with a Unit Degree of
Freedom
528
32
Two-Way Layout
552
33
Two-Way Layout with Decomposition
563
34
Two-Way Layout with Repetition
573
35
Introduction to Orthogonal Arrays
584
36
Layout of Orthogonal Arrays Using Linear Graphs
597
37
Incomplete Data
609
38
Youden Squares
617
Contents
xiii
SECTION 2
APPLICATION (CASE STUDIES)

Part I. Robust Engineering: Chemical Applications

Biochemistry
Case 1. Optimization of Bean Sprouting Conditions by Parameter Design
Case 2. Optimization of Small Algae Production by Parameter Design

Chemical Reaction
Case 3. Optimization of Polymerization Reactions
Case 4. Evaluation of Photographic Systems Using a Dynamic Operating Window

Measurement
Case 5. Application of Dynamic Optimization in Ultra Trace Analysis
Case 6. Evaluation of Component Separation Using a Dynamic Operating Window
Case 7. Optimization of a Measuring Method for Granule Strength
Case 8. A Detection Method for Thermoresistant Bacteria

Pharmacology
Case 9. Optimization of Model Ointment Prescriptions for In Vitro Percutaneous Permeation

Separation
Case 10. Use of a Dynamic Operating Window for Herbal Medicine Granulation
Case 11. Particle-Size Adjustment in a Fine Grinding Process for a Developer

Part II. Robust Engineering: Electrical Applications

Circuits
Case 12. Design for Amplifier Stabilization
Case 13. Parameter Design of Ceramic Oscillation Circuits
Case 14. Evaluation Method of Electric Waveforms by Momentary Values
Case 15. Robust Design for Frequency-Modulation Circuits

Electronic Devices
Case 16. Optimization of Blow-off Charge Measurement Systems
Case 17. Evaluation of the Generic Function of Film Capacitors
Case 18. Parameter Design of Fine-Line Patterning for IC Fabrication
Case 19. Minimizing Variation in Pot Core Transformer Processing
Case 20. Optimization of the Back Contact of Power MOSFETs

Electrophoto
Case 21. Development of High-Quality Developers for Electrophotography
Case 22. Functional Evaluation of an Electrophotographic Process

Part III. Robust Engineering: Mechanical Applications

Biomechanical
Case 23. Biomechanical Comparison of Flexor Tendon Repairs

Machining
Case 24. Optimization of Machining Conditions by Electrical Power
Case 25. Development of Machining Technology for High-Performance Steel by Transformability
Case 26. Transformability of Plastic Injection-Molded Gear

Material Design
Case 27. Optimization of a Felt-Resist Paste Formula Used in Partial Felting
Case 28. Development of Friction Material for Automatic Transmissions
Case 29. Parameter Design for a Foundry Process Using Green Sand
Case 30. Development of Functional Material by Plasma Spraying

Material Strength
Case 31. Optimization of Two-Piece Gear Brazing Conditions
Case 32. Optimization of Resistance Welding Conditions for Electronic Components
Case 33. Tile Manufacturing Using Industrial Waste

Measurement
Case 34. Development of an Electrophotographic Toner Charging Function Measuring System
Case 35. Clear Vision by Robust Design

Processing
Case 36. Optimization of Adhesion Condition of Resin Board and Copper Plate
Case 37. Optimization of a Wave Soldering Process
Case 38. Optimization of Casting Conditions for Camshafts by Simulation
Case 39. Optimization of Photoresist Profile Using Simulation
Case 40. Optimization of a Deep-Drawing Process
Case 41. Robust Technology Development of an Encapsulation Process
Case 42. Gas-Arc Stud Weld Process Parameter Optimization Using Robust Design
Case 43. Optimization of Molding Conditions of Thick-Walled Products
Case 44. Quality Improvement of an Electrodeposited Process for Magnet Production
Case 45. Optimization of an Electrical Encapsulation Process through Parameter Design
Case 46. Development of Plastic Injection Molding Technology by Transformability

Product Development
Case 47. Stability Design of Shutter Mechanisms of Single-Use Cameras by Simulation
Case 48. Optimization of a Clutch Disk Torsional Damping System Design
Case 49. Direct-Injection Diesel Injector Optimization
Case 50. Optimization of Disk Blade Mobile Cutters
Case 51. D-VHS Tape Travel Stability
Case 52. Functionality Evaluation of Spindles
Case 53. Improving Minivan Rear Window Latching
Case 54. Linear Proportional Purge Solenoids
Case 55. Optimization of a Linear Actuator Using Simulation
Case 56. Functionality Evaluation of Articulated Robots
Case 57. New Ultraminiature KMS Tact Switch Optimization
Case 58. Optimization of an Electrical Connector Insulator Contact Housing
Case 59. Airflow Noise Reduction of Intercoolers
Case 60. Reduction of Boosting Force Variation of Brake Boosters
Case 61. Reduction of Chattering Noise in Series 47 Feeder Valves
Case 62. Optimal Design for a Small DC Motor
Case 63. Steering System On-Center Robustness
Case 64. Improvement in the Taste of Omelets
Case 65. Wiper System Chatter Reduction

Other
Case 66. Fabrication Line Capacity Planning Using a Robust Design Dynamic Model

Part IV. Mahalanobis–Taguchi System (MTS)

Human Performance
Case 67. Prediction of Programming Ability from a Questionnaire Using the MTS
Case 68. Technique for the Evaluation of Programming Ability Based on the MTS

Inspection
Case 69. Application of Mahalanobis Distance for the Automatic Inspection of Solder Joints
Case 70. Application of the MTS to Thermal Ink Jet Image Quality Inspection
Case 71. Detector Switch Characterization Using the MTS
Case 72. Exhaust Sensor Output Characterization Using the MTS
Case 73. Defect Detection Using the MTS

Medical Diagnosis
Case 74. Application of Mahalanobis Distance to the Measurement of Drug Efficacy
Case 75. Use of Mahalanobis Distance in Medical Diagnosis
Case 76. Prediction of Urinary Continence Recovery among Patients with Brain Disease Using the Mahalanobis Distance
Case 77. Mahalanobis Distance Application for Health Examination and Treatment of Missing Data
Case 78. Forecasting Future Health from Existing Medical Examination Results Using the MTS

Product
Case 79. Character Recognition Using the Mahalanobis Distance
Case 80. Printed Letter Inspection Technique Using the MTS

Part V. Software Testing and Application

Algorithms
Case 81. Optimization of a Diesel Engine Software Control Strategy
Case 82. Optimizing Video Compression

Computer Systems
Case 83. Robust Optimization of a Real-Time Operating System Using Parameter Design

Software
Case 84. Evaluation of Capability and Error in Programming
Case 85. Evaluation of Programmer Ability in Software Production
Case 86. Robust Testing of Electronic Warfare Systems
Case 87. Streamlining of Debugging Software Using an Orthogonal Array

Part VI. On-Line Quality Engineering

On-Line
Case 88. Application of On-Line Quality Engineering to the Automobile Manufacturing Process
Case 89. Design of Preventive Maintenance of a Bucket Elevator through Simultaneous Use of Periodic Maintenance and Checkup
Case 90. Feedback Control by Quality Characteristics
Case 91. Control of Mechanical Component Parts in a Manufacturing Process
Case 92. Semiconductor Rectifier Manufacturing by On-Line Quality Engineering

Part VII. Miscellaneous
Case 93. Estimation of Working Hours in Software Development
Case 94. Applications of Linear and Nonlinear Regression Equations for Engineering
SECTION 3
TAGUCHI'S METHODS VERSUS OTHER QUALITY PHILOSOPHIES
39. Quality Management in Japan
40. Deming and Taguchi's Quality Engineering
41. Enhancing Robust Design with the Aid of TRIZ and Axiomatic Design
42. Testing and Quality Engineering
43. Total Product Development and Taguchi's Quality Engineering
44. Role of Taguchi Methods in Design for Six Sigma

APPENDIXES
A. Orthogonal Arrays and Linear Graphs: Tools for Quality Engineering
B. Equations for On-Line Process Control
C. Orthogonal Arrays and Linear Graphs for Chapter 38

Glossary
Bibliography
Index
Preface
To compete successfully in the global marketplace, organizations must have the ability to produce a variety of high-quality, low-cost products that fully satisfy customers' needs. Tomorrow's industry leaders will be those companies that make fundamental changes in the way they develop technologies and design and produce products.
Dr. Genichi Taguchi's approach to improving quality is currently the most widely used engineering technique in the world, recognized by virtually any engineer, though other quality efforts have gained and lost popularity in the business press. For example, in the last part of the twentieth century, the catchphrase total quality management (TQM) put Japan on the global map along with the revolution in automotive quality. Americans tried with less success to adopt TQM into the American business culture in the late 1980s and early 1990s. Under Jack Welch's leadership, GE rediscovered the Six Sigma quality philosophy in the late 1990s, resulting in corporations all over the world jumping on that bandwagon. These methods tend to be but a part of a larger, more global quality effort, including both upstream and downstream processes, espoused by Dr. Taguchi since the 1960s. It is no surprise that the other methods use parts of his philosophy of Robust Engineering in what they try to accomplish, but none is as effective.
Dr. Taguchi strongly feels that an engineering theory has no value unless it is effectively applied and proven to contribute to society. To that end, a Taguchi study group led by Dr. Taguchi has been meeting on the first Saturday of every month in Nagoya, Japan, since 1953. The group has about one hundred members, engineers from various industries, at any given time. Study group members learn from Dr. Taguchi, apply new ideas back on the job, and then assess the effectiveness of those ideas. Through these activities, Dr. Taguchi continues to develop breakthrough approaches. In Tokyo, Japan, a similar study group has been meeting monthly since 1960. Similar activities continue on a much larger scale among the more than two thousand members of the Quality Engineering Society.
For the past fifty years, Dr. Taguchi has been inventing new quality methodologies year after year. Having written more than forty books in the Japanese language, this engineering genius is still contributing profoundly to this highly technical field, but until now, his work and its implications have not been collected in one single source. Taguchi's Quality Engineering Handbook corrects that omission. This powerful handbook, we believe, will create an atmosphere of great excitement in the engineering community. It will provide a reference tool for every engineer and quality professional of the twenty-first century. We believe that this book could remain in print for one hundred years or more.
The handbook is divided into three sections. The first, Theory, consists of ten parts:

Genichi Taguchi's Latest Thinking
Quality Engineering: A Historical Perspective
Quality Loss Function
Signal-to-Noise Ratio
Robust Engineering
Mahalanobis–Taguchi System (MTS)
Software Testing and Application
On-Line Quality Engineering
Experimental Regression
Design of Experiments
After presenting the cutting-edge ideas Dr. Taguchi is currently working on in Part I, we provide an overview of the quality movement from the earliest times in Japan through to current work in the United States. Then we cover a variety of quality methodologies, describe them in detail, and demonstrate how they can be applied to minimize product development cost, reduce product time-to-market, and increase overall productivity.

Dr. Taguchi has always believed in practicing his philosophy in a real environment. He emphasizes that the most effective way to learn his methodology is to study successful and failed case studies from actual applications. For this reason, in Section 2 of this handbook we are honored to feature ninety-four case studies from all over the world, from every type of industry and engineering field: mechanical, electrical, chemical, electronic, and so on. Each of the case studies provides a quick study of subject areas that can be applied to many different types of products and processes. It is recommended that readers study as many of these case studies as they can, even if they are not ones they can relate to easily. Thousands of companies have been using Dr. Taguchi's methodologies for the past fifty years. The benefits some have achieved are phenomenal.
Section 3 features an explanation of how Dr. Taguchi's work contrasts with other quality philosophies, such as Six Sigma, Deming, Axiomatic Design, TRIZ, and others. This section shows readers ways to incorporate Taguchi Methods into existing corporate activities.
In short, the primary goals of this handbook are:

To provide a guidebook for engineers, management, and academia on the Quality Engineering methodology.
To provide organizations with proven techniques for becoming more competitive in the global marketplace.
To collect Dr. Taguchi's theories and practical applications within the confines of one book so that the relationships among the tools will be properly understood.
To clear up some common misinterpretations about Taguchi's Quality Engineering philosophy.
To spread the word about Quality Engineering throughout the world, leading to greater success in all fields in the future.
We strongly believe that every engineer and engineering manager in any field, every quality professional, every engineering faculty member, research and development personnel in all types of organizations, every consultant, educator, and student, and all technical libraries will benefit from having this book.
Subir Chowdhury
Yuin Wu
April 14, 2004
Acknowledgments
The authors gratefully acknowledge the efforts of all who assisted in the completion of this
handbook:
To all of the contributors and their organizations for sharing their successful
case studies.
To the American Supplier Institute, ASI Consulting Group, LLC, and its employees, Board of Directors, and customers for promoting Dr. Taguchi's works over the years.
To our two associate editors, Shin Taguchi of the United States and Hiroshi
Yano of Japan, for their dedication and hard work on the manuscript
development.
To two of the finest teachers and consultants, Jim Wilkins and Alan Wu, for nonstop passionate preaching of Dr. Taguchi's philosophy.
To our colleagues and friends, Dr. Barry Bebb, Jodi Caldwell, Ki Chang, Bill
Eureka, Craig Jensen, John King, Don Mitchell, Gopi Palamaner, Bill Sutherland, Jim Quinlan, and Brad Walker, for effectively promoting Dr. Taguchi's philosophy.
To Emy Facundo for her very hard work on preparing the manuscript.
To Bob Argentieri, our editor at Wiley, for his dedication and continuous support on the development of this handbook.
To Rebecca Taff for her dedicated effort toward refining the manuscript.
Finally, this handbook never would have been possible without the continuous
support of our wonderful wives, Kiyo Taguchi, Malini Chowdhury, and Suchuan Wu.
Section 1
Theory
Part I
Genichi Taguchi's Latest Thinking
1. The Second Industrial Revolution and Information Technology
1.1. Elements of Productivity
Risk Management
Countermeasures against Risks
Ford's Strategy
Determining Tolerance
Evaluation of Quality Level and Consumer Loss
Balance of Cost and Quality Level
References
Economy and Productivity
Therefore, wages and living standards are not important from a national point of view. What is important is that we increase the number of products by improving productivity; that once we achieve this, we develop new products, create new industries, and offer jobs to unemployed people; and that once we enhance the quality of products and services, we sell them at the same prices as before. In fact, the price of a color television set is decreasing faster than that of a black-and-white TV set even though the former is superior in quality, and the cost of large-scale integrated circuits remains the same despite their high degree of integration. These are typical examples of developed technologies that have enabled us to produce higher-quality products at the same cost.
Although Japan's standard of living is lower than that of the United States, Japan's income is higher. An exchange rate is determined by competitive products that can be exported. On the one hand, Japan is highly productive because of several high-level technologies. On the other hand, Japan's productivity in terms of other products and services is low. On balance, Japan's overall living standard remains low. This is because a standard of living depends totally on productivity and the allocation of products and services. To raise Japan's overall standard of living, quantitative productivity in agriculture, retail trade, and the service industry should be increased; otherwise, qualitative productivity should be improved. As a result, Japan should invest in R&D to improve productivity. This is regarded as the most essential role of top management in society. The objective of quality engineering is to enhance the productivity of R&D itself.
As noted earlier, there are two different types of productivity improvement, one a quantitative approach, the other qualitative. As for the latter, the word qualitative can be replaced by grade, although in this section we focus on quality rather than grade. In the case of an automobile, quality indicates the losses that a product imposes on society, such as fuel inefficiency, noise, vibration, defects, or environmental pollution. An auto engine at present has only 25% fuel efficiency; that is, to gain the horsepower claimed, four times as much gasoline is consumed as would seem to be required. Now, if we double fuel efficiency, we can expect to halve noise, vibration, and environmental pollution as well as fuel consumed. It is obvious that the excess fuel consumption is converted into vibration, noise, and exhaust, thereby fueling environmental pollution. Also, it can be theorized that if the fuel efficiency of an engine doubled, Japan's trade deficit with the United States would disappear and global environmental pollution would be reduced by 50%. In contrast, sales in oil-producing nations and petroleum companies would be cut in half. Even if these companies doubled the price of gasoline to maintain the same number of workers, the global standard of living would be improved, owing to decreased pollution, because fuel efficiency would not have changed. Therefore, instead of reducing economically unimportant defects by 0.1% in the production process, engineers should focus on more substantial quality improvement, such as in engine efficiency. That is, since more than 75% of gasoline is wasted, incurring an enormous economic loss, improving fuel efficiency is clearly desirable.

Indeed, petroleum companies might cause joblessness because they cannot maintain their doubled prices; in actuality, however, there are quite a few people who wish to purchase a car. In the United States, many families need a car for each adult. Therefore, if the prices of cars and fuel declined by half, the demand for cars would increase, and eventually, fuel consumption would decrease.
Quality and Productivity
Product Quality

In nations that have a wider variety of products than developing countries, qualitative improvement is preferable to quantitative improvement from a social perspective, because the qualitative approach tends to reduce pollution and resource consumption. I believe that taxation is more desirable than pollution regulation as a means of facilitating quality improvement. This would include a consumption tax on natural resources.

Design for a Financial System
Credit means that money can be lent without security and that credit limits such as $50,000 or $10,000 are determined based on each person's data. How we judge individual credibility is a technological issue. In designing a system, how we predict and prevent functional risk is the most critical design concern. This is about how we offer credit to each person in a society. How to collect credit information and how much credit to extend to a given person constitute the most important details in the era of information. In the field of quality engineering, an approach to minimizing the risk of deviation, not from function design per se but from functionality or an ideal state, is called functionality design. In the twenty-first century, this quality engineering method is being applied to both hardware and software design.
An organization that collects money from a number of people and corporations and lends to those in need of money is a bank. Its business is conducted in the world of information, or credit. Such a world of reliance on information (from plastic money to electronic money) has already begun. The question for the customer is whether a bank pays higher interest than others do, or runs a rational business that makes a sufficient profit. What is important is whether a bank will be able to survive in the future because its information system is sufficiently rationalized and its borrowers are properly assessed, even if it keeps loan interest low to borrowers and, conversely, pays somewhat higher interest to money providers. After all, depositors evaluate a bank. A credible bank that can offer higher interest is regarded as a good and highly productive bank. Therefore, we consider that a main task of a bank's R&D department is to rationalize the banking business and lead to a system design method through rationalization of functional evaluation in quality engineering.

Unfortunately, in Japan, we still lack research on automated systems that make a proper decision instantaneously. Because a computer itself cannot take any responsibility, functional evaluation of a system based on software plays a key role in establishing a company. After a company is established, daily routine management of software (update and improvement of the database) becomes more important. Globalization of information transactions is progressing. A single information center will soon cover the whole world and reduce costs drastically. Soon, huge bank buildings with many clerks will not be required.
What Is Productivity?

We define productivity as follows: total social productivity (GDP) is the sum of individual freedoms. Freedom includes situations where we can obtain what we want freely, that is, without restraint of individual freedom by others. As discussed earlier, when electronic money systems are designed, numerous people become unemployed because many bank clerks are no longer needed. Once only 100 bank clerks can complete a job that previously required 10,000 people, the bank's productivity is regarded as improved 100-fold. Nevertheless, unless the 9900 redundant people produce something new, total social productivity does not increase.

To increase productivity (including selling a higher-quality product at the same price) requires technological research, and the engineers designing a productive system constitute an R&D laboratory. An increase in productivity is irreversible. Of course, a bank does not itself need to offer new jobs to people unemployed due to improved productivity. It is widely said that one of the new key industries will be the leisure industry, including travel, which has no limit and could include space travel.
Assuming that each consumer's taste and standard of living (disposable income) differ for both the hardware and software that he or she wishes to buy, we plan a new product. Toyota is said to be able to deliver a car to a customer within 20 days of receiving an order. A certain watch manufacturer is said to respond to 30 billion variations of dial plate type, color, and size and to offer one at a price of $75 to $125 within a short period of time. An engineer required to design a short-cycle-time production process for only one variation or one product is involved in a flexible manufacturing system (FMS), used to produce high-mix, low-volume products. This field belongs to production engineering, whose main interest is production speed in FMS.
Risk Management

Countermeasures against Risks
Now let's look at noises from number 1 in the list above from the standpoint of quality engineering. Among common countermeasures against such noises are the training of users to head off misuse and the prevention of subsequent loss.
There are two types of environment: natural and artificial. The natural environment includes earthquakes and typhoons. From an economic point of view, we should not design buildings that can withstand any type of natural disaster. For an earthquake, the point on the seismic intensity scale to which we design a building is determined by a standard in tolerance design. If we design the robustness of a building using the quality engineering technique, we select a certain seismic intensity, such as 4, and study it to minimize the deformation of the building. However, this does not mean that we design a building that is unbreakable even in a large-scale earthquake.
To mitigate the effects of an earthquake or typhoon on human life, we need to forecast such events. Instead of relying on cause-and-effect or regression relationships, we should focus on prediction by pattern recognition. This technique, integrating multidimensional information obtained to date, creates a Mahalanobis space (see Section 4.7) using only time-based data with a seismic intensity below 0.5, at which no earthquake happens. The Mahalanobis distance becomes 1, on average, in the space. Therefore, the distance D generated in the Mahalanobis space remains within the approximate range 1 ± 1. We assume that the Mahalanobis space exists as a unit space only if there is no earthquake. We wish to see how the Mahalanobis distance changes in accordance with the SN ratio (forecasting accuracy) proportional to the seismic intensity after we calculate a formal equation of the distance after an earthquake. If the Mahalanobis distance becomes large enough and is sufficiently proportional to its seismic intensity, it is possible to predict an earthquake using seismic data obtained before the earthquake occurs.
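The unit-space idea above can be sketched in a few lines of code. The sketch below is illustrative only: the data are synthetic and the function names are our own, not from the text. It builds a Mahalanobis space from normal-condition observations and shows that the scaled distance D² averages about 1 inside the unit space, while an abnormal observation scores far higher:

```python
import numpy as np

def build_unit_space(X):
    """Build a unit (Mahalanobis) space from normal-condition data X of shape
    (n_samples, k): mean, standard deviation, and inverse correlation matrix."""
    mean = X.mean(axis=0)
    std = X.std(axis=0, ddof=1)
    Z = (X - mean) / std
    corr_inv = np.linalg.inv(np.corrcoef(Z, rowvar=False))
    return mean, std, corr_inv

def mahalanobis_d2(x, mean, std, corr_inv):
    """Scaled squared Mahalanobis distance D^2 = z' R^-1 z / k,
    which averages about 1 for members of the unit space."""
    z = (x - mean) / std
    return float(z @ corr_inv @ z) / len(z)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # stand-in for quiet-period sensor readings
mean, std, corr_inv = build_unit_space(X)

d2_normal = np.mean([mahalanobis_d2(x, mean, std, corr_inv) for x in X])
outlier = mean + 6 * std             # stand-in for anomalous readings
print(round(d2_normal, 2))           # close to 1 for unit-space members
print(mahalanobis_d2(outlier, mean, std, corr_inv) > 5)
```

In the earthquake application, the rows of X would be multidimensional observations recorded while seismic intensity stays below 0.5; a sharply rising D² on new data would then signal a departure from the quiet-period pattern.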
Many social systems focus on crimes committed by human beings. Recently, in the world of software engineering, a number of problems have been brought about by hackers. Toll collection systems for public telephones, for example, involve numerous problems. Especially for postpaid phones, only 30% of total revenue was collected by the Nippon Telegraph and Telephone Public Corporation. Eventually, the company changed its system to a prepaid basis. Before you call, you insert a coin. If your call is not connected, the coin is returned after the phone is hung up. Dishonest people put tissue paper in coin returns to block returning coins, because many users tend to leave a phone without collecting their change. Because phone designers had made the coin return so small that a coin could barely drop through, change thieves could fill the slot with tissue paper using a steel wire, then burn the paper away with a hot wire and take the coins. To tackle this crime, designers added an alarm that buzzes when coins are removed from a phone; but the thieves decoy the people in charge by intentionally setting off alarms in some places while stealing coins from other phones.

A good design would predict crimes and develop ways to know what criminals are doing. Although the prepaid card system at first kept people from not paying, bogus cards soon began to proliferate. Then it became necessary to deal with counterfeiting of coins, bills, and cards. Another problem is that of hackers, who have begun to cause severe problems on the Internet. These crimes can be seen as intentional noises made by malicious people. Education and laws exist to prevent them, and the police are employed to punish them. Improvement in people's living standard leads to the prevention of many crimes but cannot eliminate them.
No noise is larger than war, which derives from the fact that national policy is free. To prevent this disturbance, we need international laws with the backing of the United Nations, and to eradicate the noise, we need United Nations forces. At the same time, the mass media should keep check on UN activities to control the highest-ranked authority. Although the prevention of wars around the world is not an objective of quality engineering, noises that accompany businesses should be handled by quality engineering, and MTS can be helpful.
Ford's Strategy

The first process to which Ford applied the quality engineering method was not product design but the daily routine activity of quality control (i.e., on-line quality engineering). We reproduce below a part of a research paper by Willie Moore.
Quality Engineering at Ford

Over the last five years, Ford's awareness of quality has become one of the newest and most advanced examples. To date, they have come to recognize that continuous improvement of products and services in response to customers' expectations is the only way to prosper in their business and allocate proper dividends to stockholders. In the declaration of their corporate mission and guiding principles, they recognize that human resources, products, and profits are fundamental to success and that the quality of their products and services is closely related to customer satisfaction. However, these ideas are not brand-new. If so, why is it considered that Ford's awareness of quality is one of the newest and most advanced? The reason is that Ford has arrived at this new understanding after reconsidering the background of these ideas. In addition, they comprehend the following four simple assertions by Dr. Taguchi:

1. Cost is the most important element for any product.
2. Cost cannot be reduced without any influence on quality.
3. Quality can be improved without cost increase. This can be achieved by the utilization of the interactions with noise.
4. Cost can be reduced through quality improvement.
Historically, the United States has developed many quality targets, for example, the zero-defect movement, conformity with use, and quality standards. Although these targets include a specific definition of quality or philosophy, practical ways of training to attain the defined quality targets have not been formulated and developed. Currently, Ford has a philosophy, methods, and technical means to satisfy customers. Among the technical means are methods to determine tolerances and the economic evaluation of quality levels. Assertion 4 above is a way of reducing cost after improving the SN ratio in production processes. Since some Japanese companies cling too much to the idea that quality is the first priority, they are losing their competitive edge due to higher prices. Quality and cost should be well balanced. The word quality as discussed here means market losses related to defects, pollution, and lives in the market. A procedure for determining tolerance, and on- and off-line quality engineering, are explained later.
Determining Tolerance

Quality and cost are balanced by the design of tolerance in the product design process. The procedure is prescribed in JIS Z-8403 [2] as part of the national standards. It is applied not only at Ford but also at European companies. Balancing the cost of quality involves how to determine targets and tolerances at shipping.
Tolerance is determined after we classify three quality characteristics:
17
AA
(1.1)
where A0 is the economic loss when a product or service does not function
in the marketplace, A the manufacturing loss when a product or service does
not meet the shipping standard, and 0 the functional limit. Above m 0
or below m 0, problems occur.
2. Smaller-the-better characteristic: a characteristic that should be smaller (e.g., detrimental ingredient, audible noise). Its tolerance is calculated as follows:
AA
(1.2)
AA
0
(1.3)
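The three tolerance formulas above are easy to check with a short calculation. The sketch below is our illustration, not part of the original text, and the numeric inputs (market loss A0, scrap loss A, functional limit Δ0) are hypothetical values chosen only for the example:

```python
from math import sqrt

def tolerance_nominal_or_smaller(A, A0, delta0):
    # Equations (1.1)-(1.2): Delta = Delta0 * sqrt(A / A0)
    return delta0 * sqrt(A / A0)

def tolerance_larger(A, A0, delta0):
    # Equation (1.3): Delta = Delta0 * sqrt(A0 / A)
    return delta0 * sqrt(A0 / A)

# Hypothetical numbers: market loss A0 = $200, in-house scrap loss
# A = $2, functional limit Delta0 = 10 mm.
print(tolerance_nominal_or_smaller(2, 200, 10))  # 1.0 (mm)
```

Note how the shipping tolerance tightens as the in-house loss A becomes small relative to the market loss A0: it is cheaper to scrap borderline product than to let it fail in the field.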
Since a tolerance value is set by the designers of a product, quite often production engineers are not informed of how the tolerance was determined. In this chapter we discuss the quality level after a tolerance value has been established.
A manufacturing department is responsible for producing specified products routinely in given processes. Production engineers are in charge of the design of production processes. The hardware, such as machines and devices, the targets and control limits of the quality characteristics to be controlled in each process, and the process conditions are given to operators in a manufacturing department as technical or operation standards. The operators accept these standards; however, actual process control in response to changes in process conditions tends to be left to the operators, because this control task is regarded as calibration or adjustment. Production cost is divided up as follows:
production cost = material cost + process cost + control cost + pollution cost    (1.4)
Moreover, control cost is split into two parts: production control cost and quality control cost. A product design department deals with all cost items on the right side of equation (1.4). A production engineering department is charged with the design of production processes (selection of machines and devices and setup of running conditions) to produce the initially designed specifications (specifications at shipping) as economically and quickly as possible. This department's responsibility covers the sum of the second to fourth terms on the right-hand side of (1.4). Its particularly important task is to speed up production and reduce total cost, including the labor cost of indirect employees, after improving the stability of production processes. Cost should be regarded not only as production cost but as companywide cost, which is several times the production cost expressed in (1.4).
Evaluation of Quality Level and Consumer Loss
Since price basically gives a customer the first loss, production cost can be considered more important than quality. In this sense, the balance of cost and quality is regarded as a balance of price and quality. Market quality as discussed here includes the following items mentioned earlier:
1. Operating cost
2. Loss due to functional variability
3. Loss due to harmful effects
The sum of items 1, 2, and 3 is the focus of this chapter. Items 1 and 3 are almost always determined by design. Item 1 is normally evaluated under conditions of standard use. For items 2 and 3, we evaluate their loss functions using an average sum of squared deviations from ideal values. However, for only an initial specification (at the point of shipping) in daily manufacturing, we assess economic loss due to items 2 and 3, or the quality level as defined in this book, using the average
sum of squared deviations from the target. In this case, by setting the loss when a product just exceeds the tolerance Δ equal to A, the loss function is

L = (A/Δ²) σ²    (1.5)

Then σ² is calculated as follows, where the n data are y1, y2, ..., yn. For a nominal-the-best characteristic with target m:

σ² = (1/n)[(y1 − m)² + (y2 − m)² + ··· + (yn − m)²]    (1.6)

For a smaller-the-better characteristic:

σ² = (1/n)(y1² + y2² + ··· + yn²)    (1.7)

For a larger-the-better characteristic, the loss function and variance are

L = A Δ² σ²    (1.8)

σ² = (1/n)(1/y1² + 1/y2² + ··· + 1/yn²)    (1.9)
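The three variance definitions and the two loss functions can be sketched in a few lines of Python. This is our illustration, not part of the original text; the function names are ours, and the final line uses the plate-glass figures from the example that follows (A = 300 cents, Δ = 2 mm, σ² = 0.4795 mm²):

```python
def variance_nominal(ys, m):
    # (1.6): mean-squared deviation from the target m
    return sum((y - m) ** 2 for y in ys) / len(ys)

def variance_smaller(ys):
    # (1.7): mean of squared values
    return sum(y ** 2 for y in ys) / len(ys)

def variance_larger(ys):
    # (1.9): mean of squared reciprocals
    return sum(1.0 / y ** 2 for y in ys) / len(ys)

def loss_within_tolerance(A, delta, sigma2):
    # (1.5): L = (A / Delta^2) * sigma^2 (nominal- and smaller-the-better)
    return A / delta ** 2 * sigma2

def loss_larger(A, delta, sigma2):
    # (1.8): L = A * Delta^2 * sigma^2 (larger-the-better)
    return A * delta ** 2 * sigma2

print(round(loss_within_tolerance(300, 2.0, 0.4795), 1))  # 36.0 (cents)
```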
We evaluate daily activity in manufacturing as well as cost by clarifying the evaluation method of quality level used by management to balance quality and cost.
QUALITY LEVEL: NOMINAL-THE-BEST CHARACTERISTIC
Example
A certain plate glass maker whose shipping price is $3 has a dimensional tolerance of ±2.0 mm. Data for the differences between measurements and target values m for a product shipped from a certain factory are as follows (mm):

0.3  0.6  0.5  0.0  0.2  0.8  0.2  0.0  1.0  1.2
0.8  0.6  0.9  1.1  0.5  0.2  0.0  0.3  1.3  0.8
Next we calculate the quality level of this product. To compute the loss due to variability, we calculate the average sum of squared deviations from the target, the mean-squared error σ², and substitute it into the loss function. From now on, we call the mean-squared error σ² the variance:

σ² = (1/20)(0.3² + 0.6² + ··· + 0.8²) = 0.4795 mm²    (1.10)
Plugging this into equation (1.5) for the loss function gives us

L = (A/Δ²) σ² = (300/2²)(0.4795) = 36 cents    (1.11)

If the average is adjusted to the target, the variation of the mean,

Sm = (y1 + y2 + ··· + y20)²/20 = 2.66    (1.12)

is removed from the total variation

ST = y1² + y2² + ··· + y20² = 9.59    (1.13)

leaving the error variance

Ve = (ST − Sm)/(20 − 1) = (9.59 − 2.66)/19 = 0.365    (1.14)

The loss after adjusting the average is then

L = (300/2²)(0.365) = (75)(0.365) = 27 cents    (1.15)
As compared to equation (1.11), for one sheet of plate glass this brings a quality improvement of

36.0 − 27.4 = 8.6 cents    (1.16)

If 500,000 sheets are produced monthly, we gain $43,000 per month through quality improvement.
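The arithmetic of this example can be replayed directly. The code below is our illustrative check, not part of the original text; the deviations are taken as printed (their squares, and hence the variance, are unaffected by sign), and the error variance after mean adjustment is taken from Table 1.1:

```python
# Deviations (mm) of the 20 sheets from the target
y = [0.3, 0.6, 0.5, 0.0, 0.2, 0.8, 0.2, 0.0, 1.0, 1.2,
     0.8, 0.6, 0.9, 1.1, 0.5, 0.2, 0.0, 0.3, 1.3, 0.8]
A, delta = 300, 2.0                      # loss in cents, tolerance in mm

sigma2 = sum(v * v for v in y) / len(y)  # (1.10): 0.4795 mm^2
L_before = A / delta ** 2 * sigma2       # (1.11): ~36.0 cents

V_e = 0.365                              # error variance from Table 1.1
L_after = A / delta ** 2 * V_e           # (1.15): ~27.4 cents

gain = L_before - L_after                # (1.16): ~8.6 cents per sheet
print(round(sigma2, 4), round(gain, 1))
```

At 500,000 sheets a month, the 8.6-cent gain per sheet is the $43,000 monthly saving quoted in the text.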
No special tool is required to adjust an average to a target. We can cut plate
glass off with a ruler after comparing an actual value with a target value. If the
Table 1.1
ANOVA table

Factor    f     S      V        S′      ρ (%)
m         1     2.66   2.66     2.295    23.9
e        19     6.93   0.365    7.295    76.1
Total    20     9.59   0.4795   9.590   100.0
Example
Now suppose that the tolerance standard for roundness is 12 μm and that when a product is disqualified, the loss A is 80 cents. We produce the product with machines A1 and A2. Roundness data for this product, taken twice a day (morning and afternoon) over a two-week period, give the variance

σ² = 23.4 μm²    (1.17)

Plugging this into the loss function yields

L = (A/Δ²) σ² = (80/12²)(23.4) = 13 cents    (1.18)
Example
Now we define y as a larger-the-better characteristic and calculate a loss function. Suppose that the strength of a three-layer reinforced rubber hose is important:

K1: adhesion between tube rubber and reinforcement fiber
K2: adhesion between reinforcement fiber and surface rubber

Both are crucial factors for a rubber hose. For both K1 and K2, a lower limit Δ is specified as 5.0 kgf. When hoses that do not meet this standard are discarded, the loss is $5 per hose. In addition, the annual production volume is 120,000. After prototyping eight hoses for each of two different-priced adhesives, A1 (50 cents) and A2 (60 cents), we measured the adhesion strengths of K1 and K2, as shown in Table 1.2.
We compare the quality levels of A1 and A2. We wish to increase their averages and reduce their variability. The scale for both criteria is equation (1.9), the average of the sum of squared reciprocals. After calculating σ² for A1 and A2, respectively, we compute the loss functions expressed by (1.8). For A1,
σ1² = (1/16)(1/10.2² + 1/5.8² + ··· + 1/16.5²) = 0.02284    (1.19)

L1 = A Δ² σ1² = (500)(5²)(0.02284) = 285.5 cents    (1.20)

For A2,

σ2² = (1/16)(1/7.6² + 1/13.7² + ··· + 1/10.6²) = 0.01139    (1.21)

L2 = A Δ² σ2² = (500)(5²)(0.01139) = 142.4 cents    (1.22)
Therefore, even if A2 is 10 cents more costly than A1 in terms of adhesive cost (the sum of adhesive price and adhesion operation cost), A2 is more cost-effective than A1 by

(50 + 285.5) − (60 + 142.4) = 133.1 cents ≈ $1.33    (1.23)
Table 1.2
Adhesion strength data (kgf)

A1   K1:  10.2   5.8   4.9  16.1  15.0   9.4   4.8  10.1
     K2:  14.6  19.7   5.0   4.7  16.8   4.5   4.0  16.5
A2   K1:   7.6  13.7   7.0  12.8  11.8  13.7  14.8  10.4
     K2:   7.0  10.1   6.8  10.0   8.6  11.2   8.3  10.6
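Using the data of Table 1.2, the larger-the-better comparison of the two adhesives can be replayed in a few lines. This is an illustrative sketch of ours, not part of the original text:

```python
A, delta = 500, 5.0   # scrap loss in cents, lower limit in kgf

adhesion = {
    "A1": [10.2, 5.8, 4.9, 16.1, 15.0, 9.4, 4.8, 10.1,    # K1
           14.6, 19.7, 5.0, 4.7, 16.8, 4.5, 4.0, 16.5],   # K2
    "A2": [7.6, 13.7, 7.0, 12.8, 11.8, 13.7, 14.8, 10.4,
           7.0, 10.1, 6.8, 10.0, 8.6, 11.2, 8.3, 10.6],
}

def loss(ys):
    sigma2 = sum(1.0 / y ** 2 for y in ys) / len(ys)  # equation (1.9)
    return A * delta ** 2 * sigma2                    # equation (1.8)

L1, L2 = loss(adhesion["A1"]), loss(adhesion["A2"])   # ~285.5, ~142.4 cents
# Equation (1.23): adhesive price plus quality loss, A1 versus A2
print(round((50 + L1) - (60 + L2), 1))                # ~133.1 cents
```

The dearer adhesive wins because its smaller variance (and higher average strength) more than repays the extra 10 cents of material cost.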
Responsibility of Each Department for Quality
The quality assurance department should be responsible for unknown items and complaints in the marketplace. Two of the largest-scale incidents of pollution after World War II were the arsenic poisoning of milk by company M and the PCB poisoning of cooking oil by company K. (We do not discuss Minamata disease here because it was caused during production.) In the arsenic poisoning incident, many infants were killed after drinking milk contaminated with arsenic. If company M had been afraid that arsenic might be mixed into its milk, and had therefore produced the milk and then inspected it for arsenic, no one would have bought its products, because a company worrying about arsenic in its milk cannot be trusted.
The case of PCB poisoning is similar. In general, a very large pollution event is not predictable, and furthermore, prediction is not necessary. Shortly after World
Responsibilities of the Quality Assurance Department
References

1. Genichi Taguchi, 1989. Quality Engineering in Manufacturing. Tokyo: Japanese Standards Association.
2. Japanese Industrial Standards Committee, 1996. Quality Characteristics of Products, JIS Z-8403. Tokyo: Japanese Standards Association.
Management for Quality Engineering
Planning Strategies
Manufacturers plan products in parallel with variations of those products. If possible, they attempt to offer whatever customers wish to purchase: in other words, made-to-order products. Toyota is said to be able to deliver a car to a customer within 20 days of receiving an order, with numerous variations in model, appearance, or navigation system. For typical models they are prepared to deliver several variations; however, it takes time to respond to millions of variations. To achieve this, a production engineering department ought to design production processes for the effective production of high-mix, low-volume products, which can be considered rational processes.
On the other hand, there are products for which only the functions are important: for example, invisible parts, units, or subsystems. These need to be improved with regard to their functions only.
Selection of Systems: Concepts
Engineers are regarded as specialists who offer systems or concepts with objective functions. All means of achieving such goals are artificial. Because of this artificiality, systems and concepts can be used exclusively only within a certain period protected by patents. From the quality engineering viewpoint, we believe that the more complex systems are, the better they become. For example, the transistor was originally a device invented for amplification. Yet, due to its simplicity, a transistor by itself cannot reduce variability. As a result, the amplifier put into practical use, the circuit called an op-amp, is some 20 times more complicated than a single transistor.
As addressed earlier, a key issue is what rationalization of design and development is used to improve the following two functionalities: (1) design and development focusing on an objective function under standard conditions, and (2) design and evaluation of robustness to keep a function invariable over a full period of use under various conditions in the market. To improve these functionalities, designers should be able to change as many parameters (design constants) as possible. More linear and more complex systems can bring greater improvement. Figure 2.1 exemplifies transistor oscillator design by Hewlett-Packard engineers, who in choosing the design parameters for a transistor oscillator selected the circuit shown in the figure. This example is regarded as functional design by simulation based on conceptualization. Using an L108 orthogonal array accommodating 38 control factors, impedance stability was studied.
Top management is also responsible for guiding each project manager in an engineering department to implement research and development efficiently. One practical means is to encourage each manager to create as complicated a system as possible. In fact, Figure 2.1 includes a vast number of control factors. Because a complex system contains a simpler one, we can bring a target function closer to an ideal one.
The approach discussed so far was introduced in the technical journal Nikkei Mechanical on February 19, 1996 [2]. An abstract of this article follows.
The LIMDOW (Light Intensity Modulation Direct Overwrite) disk, developed by Tetsuo Hosokawa, chief engineer of the Business Development Department, Development Division, Nikon Corp., together with Hitachi Maxell, is a typical example that has successfully escaped from a vicious cycle of tune-up by taking advantage of the Taguchi Method. LIMDOW is a next-generation magneto-optical disk (MO) that can be both read and rewritten. Its writing speed is twice as fast as a conventional one because we
Figure 2.1
Transistor oscillator
Functionality (Performance) Evaluation, Signals, and Noise

Evaluation of Reproducibility
To evaluate functionality under conditions of mass production or various applications by means of test pieces, downsized prototypes, or limited flexibility (a life test should be completed in less than one day; a noise test is limited to only two conditions) in a laboratory, the signals and noises expected to occur in the market, as discussed above, should be taken into account.
By changing design constants called control factors, which are not conditions of use but parameters that can be altered freely by designers (including selection of both systems and parameter levels), designers optimize a system. Even if we take advantage of functionality evaluation, if signal factors, noises, and measurement characteristics are selected improperly, optimal conditions sometimes cannot be determined. Whether optimal conditions can be determined depends on the evaluation of reproducibility downstream (under conditions of mass production or various uses in the market). Therefore, in the development phase, development engineers should design parameters using an orthogonal array to assess reproducibility. Alternatively, at the completion of development, other evaluation departments, as a management group, should assess functionality using benchmarking techniques. The former approach is preferable; however, the latter is expected to bring a paradigm shift that stimulates designers to use SN ratios.
cost of about $300,000. Because the life test was conducted once a week and
took almost a week to complete (this is called time lag), in the meantime a certain
number of engines were mounted on various helicopters. However, since the life
test at a plant can detect problems much faster than customers in the market can,
we did not consider problems of loss of life due to crashes.
As noted earlier, the inspection expense B is $25,000, and the mean failure interval u (including failures both in the marketplace and at the plant) was estimated to be once in a few years. We judged that there was one failure for every 2500 engines, and so set u to 2500 units. (Even if these parameters deviate from actual values, they do not have a significant impact on the inspection design.) For u = 2500, we can calculate the optimal inspection interval using the equation that follows. The proof of this equation is given in Chapter 6 of Reference 4. For the sake of convenience, $100 is chosen as the monetary unit, so that B = 250 and A = 3000:

n = √(2uB/A) = √((2)(2500)(250)/3000) = 20.4 engines    (2.1)
This happens to be consistent with their inspection frequency: for a failure occurring once in three years, a life test should be done weekly. We were extremely impressed by their method, fostered through long experience.
We answered that the current frequency was best and that if it were lowered to every 50 engines, the company would lose more. We added that if they had solid evidence that the incidence had declined from once in two or three years to once in a decade, they could conduct the life test only once for every 50 engines.
In fact, if a failure happens once in a decade, the mean failure interval is u = 10,000 units:

n = √(2uB/A) = √((2)(10,000)(250)/3000) = 41 engines    (2.2)

This number differs from 50 units by less than 20%; therefore, we can select n = 50.
On the other hand, if we keep the current u unchanged and alter n from 25 to 50, we experience a cost increase. First, in the case of n = 25, the loss L can be calculated as follows, where l = 25 is the number of engines produced during the one-week time lag of the life test:

L = B/n + ((n + 1)/2)(A/u) + lA/u
  = 250/25 + ((25 + 1)/2)(3000/2500) + (25)(3000)/2500
  = 10 + 15.6 + 30.0
  = 55.6 (monetary unit: $100)    (2.3)
For n = 50,

L = 250/50 + ((50 + 1)/2)(3000/2500) + (25)(3000)/2500
  = 5 + 30.6 + 30.0
  = 65.6    (2.4)
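The two formulas of this section, the optimal interval n = √(2uB/A) and the loss L = B/n + ((n + 1)/2)(A/u) + lA/u, can be sketched as follows. This is our illustrative code, not part of the original text, using the helicopter-engine figures in units of $100:

```python
from math import sqrt

def optimal_interval(u, B, A):
    # Equation (2.1): n = sqrt(2uB/A)
    return sqrt(2 * u * B / A)

def inspection_loss(n, u, B, A, l):
    # Equation (2.3): L = B/n + ((n+1)/2)(A/u) + l*A/u
    return B / n + (n + 1) / 2 * (A / u) + l * A / u

u, B, A, l = 2500, 250, 3000, 25   # monetary unit: $100
print(round(optimal_interval(u, B, A), 1))        # 20.4 engines
print(round(inspection_loss(25, u, B, A, l), 1))  # 55.6
print(round(inspection_loss(50, u, B, A, l), 1))  # 65.6
```

Doubling the interval raises the loss by 10 units ($1000 per engine), which is why the recommendation to the company was to keep n = 25.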
Example
For one particular product, failures leading to recall are supposed to occur only once a decade or once a century. Suppose that a manufacturer producing 1 billion units of 600 product types yearly experiences eight recalls in a year. Its mean failure interval u is

u = 1,000,000,000/8 = 125,000,000 units    (2.5)
For a life test cost of B = $50 per unit tested and a loss per failure of A = $4, the optimal test interval is

n = √(2uB/A) = √((2)(125,000,000)(50)/4) = 56,000 units    (2.6)
Now, taking into account the annual production volume of 1.2 million and assuming that there are 48 weeks in a year, we produce the following number of products weekly:

1,200,000/48 = 25,000    (2.7)

Therefore, 56,000 units in (2.6) means that the life test is conducted about once every other week. The mean failure interval of 125,000,000 units indicates that a failure occurs almost once in

125,000,000/1,200,000 = 104 years    (2.8)

In short, even if a failure of a certain product takes place only once a century, we should test the product biweekly.
What happens if we cease to run such a life test? In this case, a defect is detected when a product is used, and the product is recalled. Inspection by the user costs nothing. However, in general, there is a time lag of a few months until detection. Supposing it to be about two months, or nine weeks, the time lag l is

l = (25,000)(9) = 225,000 units    (2.9)

Although in most cases the loss A for each defective product found after shipping is larger than that when a defect is detected in-house, we suppose here that both are equal. The reasoning is that A will not become very large on average, because we can take appropriate technical measures to prevent a larger-scale problem in the marketplace immediately after one or more defective products are found in the plant.
If we wait until failures turn up in the marketplace, B = 0 and n = 1, because consumers check all products; this can be regarded as screening or 100% inspection. Based on product recall, the loss L0 is

L0 = B/n + ((n + 1)/2)(A/u) + lA/u    (2.10)

   = ((1 + 1)/2)(4/125,000,000) + (225,000)(4)/125,000,000
   = $0.0072    (2.11)
For the annual production volume of 1.2 million, the annual loss is

(0.0072)(1,200,000) = $8640    (2.12)
With the biweekly life test, n = 50,000, and with a time lag of one week of production, l = 25,000, the loss is

L = 50/50,000 + ((50,000 + 1)/2)(4/125,000,000) + (25,000)(4)/125,000,000
  = 0.0010 + 0.0008 + 0.0008
  = $0.0026    (2.13)
For an annual production volume of 1.2 million, the annual total loss is $3120:

(0.0026)(1,200,000) = $3120    (2.14)

Compared with the case of no life test, the gain per product type is

8640 − 3120 = $5520    (2.15)
If we can expect this amount of gain in all 600 models as compared to the case of no life test, we can save

(5520)(600) = $3,312,000    (2.16)
As for the value of a life test, only the loss in the case of no life test is increased. Putting A0 = $12.89 into A of equation (2.10), the loss L0 is calculated to be $0.0232. Comparing this with the loss using a life test, L = $0.0026, the loss increases by

0.0232 − 0.0026 = $0.0206    (2.19)
Functions of the Quality Assurance Department
References

1. Sun Tzu, 2002. The Art of War. Mineola, NY: Dover Publications.
2. Nikkei Mechanical, February 19, 1996.
3. Genichi Taguchi, 1984. Parameter Design for New Product Development. Tokyo: Japanese Standards Association.
4. Genichi Taguchi, 1989. Quality Engineering Series, Vol. 2. Tokyo: Japanese Standards Association.
5. Japanese Industrial Standards Committee, 1996. Quality Characteristics of Products, JIS Z-8403.
Quality Engineering: Strategy in Research and Development
Since it is impractical to conduct the life test of a product under varied marketplace conditions, the strategy of using an SN ratio combined with noise, as illustrated in this chapter, has fundamentally changed the traditional approaches. The following strategies are employed:
1. Noise factors are assigned for each level of each design parameter and compounded, because what happens due to the environment and deterioration is unpredictable. For each noise factor, only two levels are sufficient. As a first step, we evaluate robustness using two noise levels.
2. The robust level of a control factor (the one having a higher SN ratio) does not change at different signal factor levels. This is advantageous in the case of a time-consuming simulation, such as one using the finite element method, because robustness can be improved using only the first two or three levels of the signal factor, reducing the number of combinations in the study.
3. Tuning to the objective function can be made after robustness is improved, using one or two control or signal factors. Tuning is done under the standard condition. To do so, orthogonal expansion is performed to find the candidates that affect the linear coefficient β1 and the quadratic coefficient β2.
In quality engineering, all conditions of use in the market are categorized as either a signal or a noise. In addition, a signal can be classified as active or passive. The former is a variable that an engineer can use actively, and it changes output characteristics. The latter is, like a measuring instrument or receiver, a system that measures change in a true value or a signal and calculates output. Since we can alter true values or oscillated signals in research, active and passive signals do not need to be differentiated.
What is most important in two-stage optimization is that we not use or look for cause-and-effect relationships between control and noise factors. Suppose that we test circuits (e.g., logic circuits) under standard conditions and design a circuit with an objective function, as was done at Bell Labs. Afterward, under 16 different conditions of use, we perform a functional test. Quality engineering usually recommends that only two conditions be tested; however, this is only because of the cost. The key point is that we should not adjust (tune) a design condition in order to meet the target when the noise condition changes. For tuning, a cause-and-effect relationship under the standard condition needs to be used.
We should not adjust for deviations caused by changes in the conditions of use of the model or regression relationship, because it is impossible to dispatch operators in manufacturing to make adjustments for countless conditions of use. Quality engineering proposes that adjustments be made only in production processes, which are considered standard conditions, and that countermeasures against noise should utilize the interactions between noise and control factors.
TESTING METHOD AND DATA ANALYSIS
In designing hardware (including a system), let the signal factor used by customers be M and its ideal output (target) be m. Their relationship is written as m = f(M). In two-stage design, control factors (or indicative factors) are assigned to an orthogonal array. For each run of the orthogonal array, three types of outputs are obtained, either from experimentation or from simulation: under standard conditions N0, under negative-side compounded noise conditions N1, and under positive-side compounded noise conditions N2.

Variability (Standard SN Ratio) Improvement: First Step in Quality Engineering
The proportional equation

y = βM    (3.1)

is regarded as an ideal function except for its coefficient. Nevertheless, the ideal function is not necessarily a proportional equation such as equation (3.1), and can be expressed by various types of equations. Under the optimal SN ratio condition, we collect output values under standard conditions N0 and obtain the following:
M (signal factor):   M1   M2   ...   Mk
m (target value):    m1   m2   ...   mk
y (output value):    y1   y2   ...   yk
Although the data in N1 and N2 are available, they are not needed for adjustment; they are used only for examining reproducibility.
If the target value m is proportional to the signal M for any value of β, the analysis can be carried out with the level values of the signal factor M, and an analysis with the signal M is the same as one with the target value m. Therefore, we show the common method of adjustment calculation using the target value m, which is considered the Taguchi method for adjustment.
Table 3.1
Data for SN ratio analysis

Condition    Linear Equation
N0           —
N1           L1
N2           L2
If the outputs y1, y2, ..., yk under the standard conditions N0 match the target values m, we do not need any adjustment. In addition, if y is proportional to m with sufficient accuracy, there are various ways of adjusting the proportionality coefficient to the target value of 1. Some of these methods are the following:
1. We use the signal M. By changing the signal M, we attempt to match y with the target value m. A typical case is proportionality between M and m. For example, if y is constantly 5% larger than m, we can calibrate M by multiplying it by 0.95. In general, by adjusting M's levels, we can match y with m.
2. By tuning one control factor level (sometimes two or more control factor levels) in a proper manner, we attempt to bring the linear coefficient β1 to 1.
3. Design constants other than the control factors selected in the simulation may be used.
The difficult part of adjustment is not the coefficients but the deviations from a proportional equation. Below we detail a procedure of adjustment that includes deviations from a proportional equation. To achieve this, we make use of orthogonal expansion, which is addressed in the next section. Adjustment of terms other than the proportional term is regarded as a significant technical issue to be solved in the future.
FORMULA OF ORTHOGONAL EXPANSION
The formula of orthogonal expansion for adjustment usually begins with a proportional term. Up to the third-order term, the formula is

y = β1 m + β2 (m² − (K3/K2) m)
    + β3 [m³ − ((K2K5 − K3K4)/(K2K4 − K3²)) m² − ((K4² − K3K5)/(K2K4 − K3²)) m]    (3.2)

where

Ki = (1/k)(m1^i + m2^i + ··· + mk^i)    (i = 2, 3, ...)    (3.3)

K2, K3, ... are constants because the values m are given. In most cases, third- and higher-order terms are unnecessary; that is, it is sufficient to calculate up to the second-order term. We do not derive this formula here.
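The construction in (3.2) can be verified numerically. The sketch below is our illustration, not part of the original text: it builds the quadratic and cubic orthogonal terms for the target values that appear in the printer example later in this chapter and checks that each term has zero inner product with the lower-order terms:

```python
def K(ms, i):
    # Equation (3.3): K_i = (1/k) * sum of m^i
    return sum(m ** i for m in ms) / len(ms)

def orthogonal_terms(ms):
    # Quadratic term p2(m) = m^2 - (K3/K2) m, and cubic term
    # p3(m) = m^3 - a*m^2 - b*m, with a and b as in equation (3.2)
    K2, K3, K4, K5 = (K(ms, i) for i in range(2, 6))
    det = K2 * K4 - K3 ** 2
    a = (K2 * K5 - K3 * K4) / det
    b = (K4 ** 2 - K3 * K5) / det
    p2 = [m ** 2 - (K3 / K2) * m for m in ms]
    p3 = [m ** 3 - a * m ** 2 - b * m for m in ms]
    return p2, p3

ms = [2.685, 5.385, 8.097, 10.821, 13.555]   # target values m
p2, p3 = orthogonal_terms(ms)
dot = lambda u, v: sum(x * y for x, y in zip(u, v))
# All inner products vanish (up to floating-point rounding):
for pair in [(ms, p2), (ms, p3), (p2, p3)]:
    assert abs(dot(*pair)) < 1e-6
```

Because each term is orthogonal to the ones below it, its coefficient can be estimated (and tuned) independently, which is the point of the expansion.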
Accordingly, after making an orthogonal expansion into linear, quadratic, and cubic terms, we create an ANOVA (analysis of variance) table as shown in Table 3.2, which is not for an SN ratio but for tuning. If we can tune up to the cubic term, the error variance becomes Ve. The loss before tuning,

L0 = (A0/Δ0²) VT

is reduced to

L = (A0/Δ0²) Ve
Table 3.2
ANOVA table for tuning

Source    f       S     V
β1        1       S1
β2        1       S2
β3        1       S3
e         k − 3   Se    Ve
Total     k       ST    VT
after tuning. When only the linear term is used for tuning, the error variance is expressed as follows:

Ve = (1/(k − 1))(S2 + S3 + Se)
Example
This design example, introduced by Oki Electric Industry at the Eighth Taguchi Symposium held in the United States in 1990, had an enormous impact on research and design in the United States. Here, however, we analyze the average values of N1 and N2, substituting them for the data of N0, and use the standard SN ratio based on dynamic characteristics. Our analysis is different from that of the original report, but the result is the same.
The function of the color shift mechanism of a printer is to guide four-color ribbons to the proper position for the printhead. A mechanism developed by Oki Electric Industry in the 1980s guides a ribbon coming from a ribbon cartridge to the correct location for the printhead. Ideally, the rotational displacement of the ribbon cartridge's tip should match the linear displacement of the ribbon guide. However, since there are two intermediate links for movement conversion, the two displacements differ. If this difference exceeds 1.45 mm, misfeeds such as ribbon fray (a ribbon frays as the printhead hits the edge of the ribbon) or mixed color (the printhead hits the wrong color band next to the target band) happen, which damage the function. The functional limit Δ0 is thus 1.45 mm. The signal factor is the rotational angle M, whose corresponding target values are as follows:
Rotational angle M (deg):   1.3     2.6     3.9     5.2      6.5
Target value m (mm):        2.685   5.385   8.097   10.821   13.555
For parameter design, the 13 control factors shown in Table 3.3 are selected and assigned to an L27 orthogonal array. It would be preferable to assign them to an L36 array; in addition, noise factors should be compounded together with factors other than the design constants.
Since variability caused by manufacturing or use conditions exists in all control factors, if we set up noise factors for each control factor, the number of noise factors grows, and accordingly, the number of experiments also increases. Therefore, we compound the noise factors. Before compounding, to understand the noise factor effects, we check how each factor affects the objective characteristic of displacement by fixing all control factor levels at level 2. The result is illustrated in Table 3.4.
By taking into account the direction of each noise factor's effect on the target position, we establish two compounded noise factor levels, N1 and N2, by selecting the worst configuration on the negative and positive sides, respectively (Table 3.5). In some cases we do not compound noise factors but assign them directly to an orthogonal array.
According to the Oki research released in 1990, after establishing stability for each rotational angle, the target value for each angle was adjusted using simultaneous equations, which required extremely cumbersome calculations. By resetting the output value for each rotational angle under standard conditions as a new signal M, we can improve the SN ratio for output under noise. Yet since we could not obtain a program for the original calculation, we substitute the average value for each angle at N1 and N2 for the standard output value N0. This approach holds true only when the output values at N1 and N2 have the same absolute deviation, with opposite signs, around the standard value. An SN ratio that uses the output value under a standard condition as the signal is called a standard SN ratio, and one substituting an average value is termed a substitutional standard SN ratio or average standard SN ratio.
Table 3.3
Factors and levels (three levels for each of the 13 control factors, A through M)^a

Level 1   Level 2   Level 3
6.0       6.5       7.0
31.5      33.5      35.5
31.24     33.24     35.24
9.45      10.45     11.45
2.2       2.5       2.8
45.0      47.0      49.0
7.03      7.83      8.63
9.6       10.6      11.6
80.0      82.0      84.0
80.0      82.0      84.0
23.5      25.5      27.5
61.0      63.0      65.0
16.0      16.5      17.0

^a The unit for all levels is millimeters, except for factor F (angle), which is in radians.
Table 3.4
Qualitative effect of each noise factor, A through M, on the objective function
For experiment 1 (Table 3.6), the substitutional standard SN ratio is

η = 10 log {[(1/(2r))(Sβ − Ve)]/VN} = 3.51 dB    (3.4)
Table 3.5
Two levels of the compounded noise factor: for each control factor, N1 and N2 are small deviations (0.05 to 0.5, mostly 0.1 or 0.15) from the nominal value, signed according to the worst configurations on the negative and positive sides, respectively
Table 3.6
Data of experiment 1

Rotational angle:    1.3     2.6     3.9     5.2      6.5      Linear Equation
N0 (standard):       3.255   6.245   9.089   11.926   14.825
N1:                  2.914   5.664   8.386   11.180   14.117   L1 = 463.694309
N2:                  3.596   6.826   9.792   12.672   15.533   L2 = 524.735835
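From the data of Table 3.6, the SN ratio of experiment 1 can be reproduced. The sketch below is our illustration, not part of the original text; it treats the N0 outputs as the signal, decomposes the total variation into the proportional term, the noise-by-slope interaction, and error (the decomposition shown in Table 3.7), and evaluates equation (3.4):

```python
from math import log10

N0 = [3.255, 6.245, 9.089, 11.926, 14.825]   # signal (standard condition)
N1 = [2.914, 5.664, 8.386, 11.180, 14.117]
N2 = [3.596, 6.826, 9.792, 12.672, 15.533]

r = sum(m * m for m in N0)                    # effective divider
L1 = sum(m * y for m, y in zip(N0, N1))       # ~463.6943
L2 = sum(m * y for m, y in zip(N0, N2))       # ~524.7358

ST = sum(y * y for y in N1 + N2)              # total variation, f = 10
S_beta = (L1 + L2) ** 2 / (2 * r)             # proportional term
S_Nbeta = (L1 - L2) ** 2 / (2 * r)            # noise-slope interaction
S_e = ST - S_beta - S_Nbeta
V_e = S_e / 8
V_N = (S_Nbeta + S_e) / 9                     # noise variance

eta = 10 * log10((S_beta - V_e) / (2 * r) / V_N)   # equation (3.4)
print(round(eta, 2))   # 3.51 dB, matching experiment 1 in Table 3.8
```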
This concludes the discussion of optimization and noise selection in simulation. Note that in the United States, instead of compounding noise factors, engineers quite often assign them to an outer orthogonal array. That is, after assigning three levels of each noise factor to an L27 orthogonal array, they run a simulation for each combination (direct product design) formed by the noise array and the control factor levels.
Analysis for Tuning
At the optimal SN ratio condition found in the preceding section, the data at N0 and the average values over N1 and N2 are as follows:

Signal factor M (deg):    1.3     2.6     3.9     5.2      6.5
Target value m (mm):      2.685   5.385   8.097   10.821   13.555
Average output y (mm):    2.890   5.722   8.600   11.633   14.948
These data are plotted in Figure 3.2. Looking at the figure, we notice that each (standard) output value at the optimal condition deviates by a roughly constant ratio from the corresponding target value, and this tendency makes the linear coefficient of the output differ from 1. Now, since the coefficient β1 is more than 1, instead of adjusting β1 to 1, we calibrate M; that is, we alter the signal M to M*:
Table 3.7
Decomposition of total variation

Source         f      S             V
β              1      988.430144    988.430144
N × β          1      3.769682      3.769682
e              8      0.241979      0.030247
(N × β + e)   (9)    (4.011662)    (0.445740)
Total         10      992.441806
Table 3.8
Factor assignment and SN ratio based on average values

No.   SN (dB)    No.   SN (dB)    No.   SN (dB)
1      3.51      10     3.42      19     5.62
2      2.18      11     2.57      20     3.26
3      0.91      12     0.39      21     2.40
4      6.70      13     5.82      22    −2.43
5      1.43      14     6.06      23     3.77
6      3.67      15     5.29      24    −1.76
7      5.15      16     3.06      25     3.17
8      8.61      17    −2.48      26     3.47
9      2.76      18     3.86      27     2.51
Table 3.9
Level totals of SN ratio (dB), one row per control factor

Level 1   Level 2   Level 3
34.92     27.99     20.01
29.35     27.23     26.34
24.26     28.55     30.11
28.67     27.21     27.07
10.62     27.33     44.97
37.38     28.48     17.06
32.92     27.52     22.48
32.94     26.59     23.39
34.02     28.87     20.03
16.10     27.78     39.04
35.40     27.67     19.85
22.06     26.63     34.23
24.66     28.12     30.14
Figure 3.1
Factor effect plots of the standard SN ratio, the linear coefficient β1, and the quadratic coefficient β2
Table 3.10
Estimation and confirmation of optimal SN ratio (dB)

                     Estimation by All Factors   Confirmatory Calculation
Optimal condition    12.50                       14.58
Initial condition     3.15                        3.54
Gain                  9.35                       11.04
M* = (1/β1) M    (3.5)
Figure 3.2
Standard output values at the optimal SN ratio condition: simulation values plotted against target values
ST = y1² + y2² + ⋯ + y5²  (f = 5)  (3.6)

Sβ = (m1y1 + m2y2 + ⋯ + m5y5)² / (m1² + m2² + ⋯ + m5²)
   = [(2.685)(2.890) + ⋯ + (13.555)(14.948)]² / (2.685² + ⋯ + 13.555²)
   = 473.695042  (3.7)

β1 = (m1y1 + m2y2 + ⋯ + m5y5) / (m1² + m2² + ⋯ + m5²)
   = 436.707653/402.600925
   = 1.0847  (3.8)
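The figures for Sβ and β1 can be checked directly from the five data pairs tabulated above (a minimal numeric sketch of equations (3.7) and (3.8)):

```python
# Proportional-term variation S_beta and linear coefficient beta1
# from the N0 outputs (m) and the averages of N1 and N2 (y).
m = [2.685, 5.385, 8.097, 10.821, 13.555]
y = [2.890, 5.722, 8.600, 11.633, 14.948]

L = sum(mi * yi for mi, yi in zip(m, y))   # linear form in the numerator
r = sum(mi ** 2 for mi in m)               # effective divider

S_beta = L ** 2 / r                        # eq. (3.7)
beta1 = L / r                              # eq. (3.8) -> 1.0847
```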
K2 = (1/5)(m1² + m2² + ⋯ + m5²) = 80.520185  (3.9)

K3 = (1/5)(m1³ + m2³ + ⋯ + m5³) = 892.801297  (3.10)

Thus, the second-order term can be calculated according to the expansion equation (3.2) as follows:

β2(m² − (K3/K2)m) = β2(m² − 11.0897m)  (3.11)

wi = mi² − 11.0897mi  (i = 1, 2, ... , 5)  (3.12)
Now we obtain

w1 = 2.685² − (11.0897)(2.685) = −22.567  (3.13)
w2 = 5.385² − (11.0897)(5.385) = −30.720  (3.14)
w3 = 8.097² − (11.0897)(8.097) = −24.232  (3.15)
w4 = 10.821² − (11.0897)(10.821) = −2.908  (3.16)
w5 = 13.555² − (11.0897)(13.555) = 33.417  (3.17)

Using these coefficients w1, w2, ... , w5, we compute the linear equation of the second-order term L2:

L2 = w1y1 + w2y2 + ⋯ + w5y5
   = (−22.567)(2.890) + ⋯ + (33.417)(14.948)
   = 16.2995  (3.18)
(3.18)
Then, to calculate variation in the second-order term S2, we compute r2, which
denotes a sum of squared values of coefcients w1, w2, ... , w5 in the linear equation
L2:
r2 w21 w22 w52
(22.567)2 33.4772
3165.33
(3.19)
By dividing the square of L2 by this result, we can arrive at the variation in the
second-order term S2. That is, we can compute the variation of a linear equation
by dividing the square of L2 by the number of units (sum of squared coefcients).
Then we have
S2
16.29552
0.083932
3165.33
(3.20)
L2
16.2995
0.005149
r2
3165.33
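The same arithmetic for the second-order term can be sketched numerically, using the constant 11.0897 printed in equation (3.11); small last-digit differences from the text come from rounding in the tabulated data:

```python
# Quadratic-term coefficients w_i, linear form L2, number of units r2,
# variation S2, and quadratic coefficient beta2 (equations (3.12)-(3.21)).
m = [2.685, 5.385, 8.097, 10.821, 13.555]
y = [2.890, 5.722, 8.600, 11.633, 14.948]
c = 11.0897                                   # K3/K2 from eq. (3.11)

w = [mi ** 2 - c * mi for mi in m]            # eq. (3.12)
L2 = sum(wi * yi for wi, yi in zip(w, y))     # eq. (3.18)
r2 = sum(wi ** 2 for wi in w)                 # eq. (3.19)
S2 = L2 ** 2 / r2                             # eq. (3.20)
beta2 = L2 / r2                               # eq. (3.21)
```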
(3.21)
Therefore, after optimizing the SN ratio and computing the output data under the standard condition N0, we can arrive at Table 3.11 for the analysis of variance to compare target values and output values. This step of decomposing the total variation to compare the target function and the output is quite significant before adjustment (tuning), because it lets us predict the adjustment accuracy prior to tuning.
Economic Evaluation of Tuning
Unlike the evaluation of an SN ratio, the economic evaluation of tuning is absolute, because we can do an economic assessment prior to tuning. In this case, considering that the functional limit is Δ0 = 1.45 mm, we can express the loss function L as follows:
Table 3.11
ANOVA for tuning (to quadratic term)

Source              f     S             V
β                   1     473.695042    473.695042
Second-order term   1     0.083932      0.083932
e                   3     0.033900      0.011300
(e, linear only)   (4)   (0.117832)    (0.029458)
Total               5     473.812874

L = (A0/Δ0²)σ² = (300/1.45²)σ²  (3.22)

For tuning of the linear term only, the error variance is

σ1² = (S2 + Se)/4 = (0.083932 + 0.033900)/4 = 0.029458  (3.23)

whereas for tuning to the quadratic term,

σ2² = 0.001130  (3.24)
Plugging these into (3.22), we can obtain the absolute loss. For the case of tuning only the linear term, the loss is

L1 = (300/1.45²)(0.029458) = $4.20  (3.25)

whereas for tuning to the quadratic term,

L2 = (300/1.45²)(0.001130) = $0.16  (3.26)

Thus, if we tune to the quadratic term, as compared to tuning only the linear term, we can save

$4.20 − $0.16 = $4.04  (3.27)
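The dollar figures follow from the loss function in equation (3.22); a one-line check, with the functional limit Δ0 = 1.45 mm and the loss at the limit A0 = $300 used in the text:

```python
# Quadratic loss L = (A0 / Delta0^2) * sigma^2 for the two tuning strategies.
A0, delta0 = 300.0, 1.45
sigma2_linear = 0.029458      # error variance, tuning the linear term only
sigma2_quad = 0.001130        # error variance, tuning to the quadratic term

loss_linear = A0 / delta0 ** 2 * sigma2_linear   # about $4.20
loss_quad = A0 / delta0 ** 2 * sigma2_quad       # about $0.16
saving = loss_linear - loss_quad                 # about $4.04
```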
Table 3.12
Level averages of linear and quadratic coefficients

          Linear Coefficient            Quadratic Coefficient
Factor    1        2        3           1        2        3
A         1.0368   1.0427   1.0501      0.0079   0.0094   0.0107
B         1.0651   1.0472   1.0173      0.0095   0.0094   0.0091
C         1.0653   1.0387   1.0255      0.0124   0.0091   0.0064
D         1.0435   1.0434   1.0427      0.0083   0.0093   0.0104
E         1.0376   1.0441   1.0478      0.0081   0.0094   0.0105
F         1.0449   1.0434   1.0412      0.0077   0.0093   0.0109
G         1.0654   1.0456   1.0185      0.0100   0.0092   0.0088
H         1.0779   1.0427   1.0089      0.0090   0.0092   0.0098
I         1.0534   1.0430   1.0331      0.0094   0.0093   0.0093
J         1.0271   1.0431   1.0593      0.0076   0.0093   0.0111
K         1.0157   1.0404   1.0734      0.0083   0.0094   0.0103
L         1.0635   1.0405   1.0256      0.0113   0.0095   0.0073
M         1.0295   1.0430   1.0570      0.0100   0.0095   0.0084
them has a great effect on the quadratic term. Therefore, even if we use them, we cannot lower the quadratic term's coefficient. As the next candidates, we can choose K or D. If the K level is changed from 1 to 2, the coefficient is expected to decrease toward zero. Indeed, this change increases the linear term's coefficient from its original value; however, that is not vital, because we can adjust using a signal. On the other hand, if we wish to eliminate the quadratic term's effect with a minimal number of factors, we can select K or D. In actuality, extrapolation can also be utilized. The procedure discussed here is a generic strategy applicable to all types of design by simulation, called the Taguchi method or Taguchi quality engineering in the United States and other countries.
Quality Engineering:
The Taguchi Method

4.1. Introduction
4.2. Electronic and Electrical Engineering
     Function of an Engine
     General Chemical Reactions
     Evaluation of Images
     Functionality Such as Granulation or Polymerization Distribution
     Separation System
     MT (Mahalanobis–Taguchi) Method
     Application of the MT Method to a Medical Diagnosis
     Design of General Pattern Recognition and Evaluation Procedure
     Summary of Partial MD Groups: Countermeasure for Collinearity
4.1. Introduction
The term robust design is in widespread use in Europe and the United States. It refers to the design of a product that causes no trouble under any conditions, and it answers the question: What is a good-quality product? As a generic term, quality or robust design has no meaning; it is merely an objective. A product that functions under any conditions is obviously good; again, saying this is meaningless. All engineers attempt to design what will work under various conditions. The key issue is not design itself but how to evaluate functions under known and unknown conditions.
At the Research Institute of Electrical Communication in the 1950s, a telephone exchange and a telephone were designed to have 40- and 15-year design lives, respectively. These lives were demanded by the Bell System, which developed a successful crossbar telephone exchange in the 1950s; however, it was replaced 20 years later by an electronic exchange. From this one could infer that a 40-year design life was not reasonable to expect.
We believe that design life is an issue not of engineering but of product planning. Setting the crossbar telephone exchange example aside, rather than simply prolonging design life for most products, we should preserve limited resources through our work. No one opposes the general idea that we should design a product that functions properly under various conditions during its design life. What is important to discover is how to assess proper functions.
The conventional method has been to examine whether a product functions correctly under several predetermined test conditions. Around 1985, we visited the Circuit Laboratory, one of the Bell Labs. They developed a new circuit using the following procedure: first, they developed a circuit for an objective function under a standard condition; once it satisfied the objective function, it was assessed under 16 different types of conditions, which included different environments for use and after-loading conditions.
If the product did not work properly under some of the different conditions, design constants (parameters) were changed so that it would function well. This is considered parameter design in the old sense. In quality engineering (QE), achieving an objective function by altering design constants is called tuning. Under the Taguchi method, functional improvement using tuning should be made under standard conditions only after conducting stability design, because tuning is improvement based on response analysis. That is, we should not take measures against noises by taking advantage of cause-and-effect relationships.
The reason for this is that even if the product functions well under the 16 conditions noted above, we cannot predict whether it works under other conditions. This procedure does not guarantee the product's proper functioning within its life span under various unknown conditions. In other words, QE is focused not on response but on interaction between designs and signals or noises. Designing parameters to attain an objective function is equivalent to studying first-order moments. Interaction is related to second-order moments. First-order moments involve a scalar or vector, whereas second-order moments are studied by two-dimensional tensors. Quality engineering maintains that we should do tuning only under standard conditions after completing robust design. A two-dimensional tensor does not necessarily represent noises in an SN ratio. In quality engineering, noise effects should have a continual monotonic tendency.
yi(A, B, ... , N) = mi  (i = 1, 2, ... , 16)  (4.1)

In this case, m is a target value (or target curve). If we have 16 or more control factors (called adjusting factors in quality engineering), we can solve equation (4.1). If not, we determine A, B, ... , to minimize the sum of squared differences between the two sides of equation (4.1). In other words, we solve the following:

min over A, B, ... of  Σ(i = 1 to 16) [yi(A, B, ... , N) − mi]²  (4.2)
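With fewer than 16 adjusting factors, equation (4.2) is an ordinary least-squares problem. A minimal sketch, with a hypothetical linear simulation model and made-up targets (not the circuit of the text):

```python
import numpy as np

# Hypothetical: 16 targets m_i and 3 design constants entering a linear
# simulation model y = X @ theta; minimize sum((y - m)^2) as in eq. (4.2).
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 3))                 # made-up model sensitivities
theta_true = np.array([1.0, -2.0, 0.5])
m = X @ theta_true                           # targets attainable exactly

theta, *_ = np.linalg.lstsq(X, m, rcond=None)
# theta recovers theta_true because the targets are attainable
```

In practice the simulation model is nonlinear, and the minimization is done by iterating the simulation rather than by one matrix solve.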
In quality engineering, to assess functionality, we regard the sum of squared measurements as the total output, decompose it into effective and harmful parts, and improve the ratio of the effective part to the harmful part by defining it as an SN ratio. To decompose measurements properly into quadratic forms, each measurement should be proportional to the square root of energy. Of course, the theory regarding the quadratic form is related not to energy but to mathematics. The reason that each measurement is expected to be proportional to a square root of energy is that if the total variation is proportional to energy or work, it can be decomposed into a sum of signal and noise parts, and each gain can be added sequentially.
In an engineering field such as radio waves, electric power can be decomposed as follows:

apparent power = effective power + reactive power

The effective power is measured in watts, whereas the apparent power is measured in volt-amperes. For example, suppose that the input is defined as the sinusoidal voltage

input = E sin(ωt + θ1)  (4.3)

and the output is the current

output = I sin(ωt + θ2)  (4.4)

Then the effective power is

effective power = (EI/2) cos(θ1 − θ2)  (4.5)

the apparent power is

apparent power = EI/2  (4.6)

and the reactive power is expressed by

reactive power = (EI/2)[1 − cos(θ1 − θ2)]  (4.7)
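The decomposition can be verified numerically: with the expressions above, effective and reactive power sum exactly to the apparent power. A small sketch with illustrative amplitudes and phases:

```python
import math

E, I = 100.0, 2.0            # voltage and current amplitudes (illustrative)
theta1, theta2 = 0.30, 0.10  # input and output phases (illustrative)

apparent = E * I / 2                                    # eq. (4.6)
effective = E * I / 2 * math.cos(theta1 - theta2)       # eq. (4.5)
reactive = E * I / 2 * (1 - math.cos(theta1 - theta2))  # eq. (4.7)
# effective + reactive equals apparent, term by term
```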
Functionality Evaluation of System Using Power (Amplitude)
Using the Hermitian form, which deals with the quadratic form in complex numbers, we express the total variation as

ST = y1ȳ1 + y2ȳ2 + ⋯ + ynȳn  (4.10)

the variation of the proportional term as

Sβ = (M̄1y1 + M̄2y2 + ⋯ + M̄nyn)(M1ȳ1 + M2ȳ2 + ⋯ + Mnȳn)/r  (f = 1)  (4.11)

and the linear coefficient as

β = (M̄1y1 + M̄2y2 + ⋯ + M̄nyn)/r  (4.12)

where the effective divider is

r = M1M̄1 + M2M̄2 + ⋯ + MnM̄n  (4.13)

The error variation and variance are

Se = ST − Sβ  (f = n − 1)  (4.14)

Ve = Se/(n − 1)  (4.15)

and the SN ratio and sensitivity are

η = 10 log [(1/r)(Sβ − Ve)]/Ve  (4.16)

S = 10 log (1/r)(Sβ − Ve)  (4.17)
When a three-level noise factor N is assigned, as in Table 4.1, the linear equations are

Li = M̄1yi1 + M̄2yi2 + ⋯ + M̄kyik  (i = 1, 2, 3)  (4.18), (4.19)

Total variation:
ST = Σ yij ȳij  (f = 3k)  (4.20)

Variation of proportional term:
Sβ = (L1 + L2 + L3)²/3r  (f = 1)  (4.21)

Effective divider:
r = M1M̄1 + M2M̄2 + ⋯ + MkM̄k  (4.22)

Variation of noise effect:
SN = (L1² + L2² + L3²)/r − Sβ  (f = 2)  (4.23)

Error variation:
Se = ST − Sβ − SN  (f = 3k − 3)  (4.24)
Table 4.1
Input/output data

          Signal
Noise     M1     M2     ⋯     Mk     Linear Equation
N1        y11    y12    ⋯     y1k    L1
N2        y21    y22    ⋯     y2k    L2
N3        y31    y32    ⋯     y3k    L3
Noise-plus-error variation:
SN + Se  (f = 3k − 1)  (4.25)

Error variance:
Ve = Se/(3k − 3)  (4.26)

Noise variance:
VN = (SN + Se)/(3k − 1)  (4.27)

SN ratio:
η = 10 log [(1/3r)(Sβ − Ve)]/VN  (4.28)

Sensitivity:
S = 10 log (1/3r)(Sβ − Ve)  (4.29)
We have thus far described the application of the Hermitian form to basic output decomposition. This procedure is applicable to the decomposition of almost all quadratic forms, because all squared terms are replaced by products of conjugates when we deal with complex numbers. From here on we call it by the simpler term decomposition of variation, even when we decompose variation using the Hermitian form, because the phrase "decomposition of variation by the Hermitian form" is too lengthy. Moreover, we sometimes wish to express even a product of the conjugates y and ȳ as y². Whereas y² denotes itself in the case of real numbers, it denotes a product of conjugates in the case of complex numbers. By doing this, we can use both the formulas and the degrees of freedom expressed in the positive quadratic form. Additionally, for decomposition of variation in the Hermitian form, the decomposition formulas in the quadratic form hold true. However, as for noises, we need to take into account phase variability as well as power variability, because the noises exist in the complex plane. Thus, we recommend that the following four-level noise N be considered:
N1:
N2:
N3:
N4:
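As a numeric sketch of the Hermitian decomposition (hypothetical complex data, not taken from the text): when y is exactly proportional to M, the error variation vanishes and β is recovered.

```python
# Hermitian-form decomposition: S_T = sum(y * conj(y)),
# beta = sum(conj(M) * y) / r, r = sum(M * conj(M)), S_beta = |L|^2 / r.
M = [1 + 1j, 2 - 1j, 3 + 2j]            # hypothetical complex signal levels
beta_true = 0.8 - 0.2j
y = [beta_true * Mi for Mi in M]        # ideal output, no noise

ST = sum((yi * yi.conjugate()).real for yi in y)        # eq. (4.10)
r = sum((Mi * Mi.conjugate()).real for Mi in M)         # effective divider
Lh = sum(Mi.conjugate() * yi for Mi, yi in zip(M, y))   # Hermitian linear form

S_beta = abs(Lh) ** 2 / r               # variation of proportional term
beta_hat = Lh / r                       # recovers beta_true
Se = ST - S_beta                        # zero: y is exactly proportional
```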
In this case, E is too small to measure. Since we cannot measure frequency as part of wave power, we need to measure frequency itself as a quantity proportional to energy. As for output, we should keep the power in an oscillation system sufficiently stable. Since the system's functionality is discussed in the preceding section, we describe only a procedure for measuring the functionality based on frequency.
If a signal has one level and only the stability of frequency is in question, measurement of time and distance is regarded as essential. Stability for this case is nominal-the-best stability of frequency, which is used for measuring time and distance. More exactly, we sometimes take a square root of frequency.
STABILITY OF TRANSMITTING WAVE
The stability of the transmitting wave is considered most important when we deal with radio waves. (A laser beam is an electromagnetic wave; however, we use the term radio wave. The procedure discussed here applies also to infrared light, visible light, ultraviolet light, x-rays, and gamma rays.) The stability of radio waves is categorized as either stability of phase or frequency or stability of power. In the case of frequency-modulated (FM) waves, the former is more important, because even if output fluctuates to some degree, we can easily receive the signal properly when phase or frequency is used.
In Section 7.2 of Reference 2, for experimentation purposes the variability in power source voltage is treated as a noise. The stability of the transmitting wave is regarded as essential for any function using phase or frequency. Since only a single frequency is sufficient to stabilize phase in studying the stability of the transmitting wave, we can utilize a nominal-the-best SN ratio in quality engineering. More specifically, by selecting variability in temperature, power source voltage, or deterioration as one or more noises, we calculate the nominal-the-best SN ratio. If the noise has four levels, by setting the frequency data at N1, N2, N3, and N4 to y1, y2, y3, and y4, we compute the nominal-the-best SN ratio and sensitivity as follows:
Quality Engineering of System Using Frequency

η = 10 log [(1/4)(Sm − Ve)]/Ve  (4.32)

S = 10 log (1/4)(Sm − Ve)  (4.33)

where

Sm = (y1 + y2 + y3 + y4)²/4  (f = 1)  (4.34)

ST = y1² + y2² + y3² + y4²  (f = 4)  (4.35)

Se = ST − Sm  (f = 3)  (4.36)

Ve = (1/3)Se  (4.37)
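A numeric sketch of equations (4.32)–(4.37), with four illustrative frequency readings (not data from the text):

```python
import math

y = [99.8, 100.2, 100.1, 99.9]         # four frequency readings (illustrative)

Sm = sum(y) ** 2 / 4                   # eq. (4.34), f = 1
ST = sum(yi ** 2 for yi in y)          # eq. (4.35), f = 4
Se = ST - Sm                           # eq. (4.36), f = 3
Ve = Se / 3                            # eq. (4.37)

eta = 10 * math.log10((Sm - Ve) / 4 / Ve)   # eq. (4.32)
S = 10 * math.log10((Sm - Ve) / 4)          # eq. (4.33)
```

Note that Ve here is just the sample variance of the four readings, so the SN ratio is high when the mean squared frequency dominates the reading-to-reading variability.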
When a signal factor exists, with the data laid out as in Table 4.2 and yi = yi1 + yi2 denoting the totals, the decomposition is

Se = ST − SM − SN  (f = k − 1)  (4.38)

SM = (y1² + y2² + ⋯ + yk²)/2  (f = k)  (4.39)

SN = [(y11 + y21 + ⋯ + yk1) − (y12 + y22 + ⋯ + yk2)]²/2k  (f = 1)  (4.40)
Table 4.2
Cases where signal factor exists

           Noise
Signal     N1      N2      Total
M1         y11     y12     y1
M2         y21     y22     y2
⋮          ⋮       ⋮       ⋮
Mk         yk1     yk2     yk
Table 4.3
Decomposition of total variation

Factor    f         S      V
M         k         SM     VM
N         1         SN     —
e         k − 1     Se     Ve
Total     2k        ST     —

The SN ratio is

η = 10 log (VM/VN)  (4.41), (4.42)

Now

VM = SM/k  (4.43)

Ve = Se/(k − 1)  (4.44)

VN = (SN + Se)/k  (4.45)
When we transmit information using a certain frequency, we need frequency modulation. The technical means of modulating frequency is considered to be the signal. A typical function is a circuit that oscillates a radio wave, into which we modulate a transmitting wave at the same frequency as that of the transmitting wave and extract a modulated signal using the difference between the two waves. By taking into account various types of technical means, we should improve the following proportionality between the input signal M and the measurement characteristic y for modulation:

y = βM  (4.46)

Thus, for modulation functions, we can define the same SN ratio as before. Although y as output is important, before studying y we should research the frequency stability of the transmitting wave and the internal oscillation, as discussed in the preceding section.
For the layout of Table 4.4, with frequency F as an indicative factor, the decomposition is

Sβ = (L1 + L2 + ⋯ + L6)²/6r  (f = 1)  (4.47)

SF = [(L1 + L4)² + (L2 + L5)² + (L3 + L6)²]/2r − Sβ  (f = 2)  (4.48)

SN(F) = (L1² + L2² + ⋯ + L6²)/r − Sβ − SF  (f = 3)  (4.49)

Se = ST − (Sβ + SF + SN(F))  (f = 6k − 6)  (4.50)

η = 10 log [(1/6r)(Sβ − Ve)]/VN  (4.51)

S = 10 log (1/6r)(Sβ − Ve)  (4.52)

Now

VN = (SN(F) + Se)/(6k − 3)  (4.53)
Table 4.4
Case where modulated signal exists

                    Modulated Signal
Noise   Frequency   M1     M2     ⋯     Mk     Linear Equation
N1      F1          y11    y12    ⋯     y1k    L1
        F2          y21    y22    ⋯     y2k    L2
        F3          y31    y32    ⋯     y3k    L3
N2      F1          y41    y42    ⋯     y4k    L4
        F2          y51    y52    ⋯     y5k    L5
        F3          y61    y62    ⋯     y6k    L6
Table 4.5
Decomposition of total variation

Factor    f           S        V
β         1           Sβ       —
F         2           SF       —
N(F)      3           SN(F)    —
e         6(k − 1)    Se       Ve
Total     6k          ST       —
we should research and design only analog functions, because digital systems are included in all AM, FM, and PM; the only difference from an analog system is that the level values are not continuous but discrete. That is, if we can improve the SN ratio as an analog function, we can minimize each interval of discrete values and eventually enhance information density.
Example
A phase-modulation digital system for a signal having a 30° phase interval, such as 0°, 30°, 60°, ... , 330°, has a σ value that is expressed by the following equation even if its SN ratio as an analog function is 10 dB:

10 log (10²/σ²) = 10  (4.54)

By taking into account that the unit of the signal M is degrees, we obtain the following small value of σ:

σ = 10/√10 = 3.16  (4.55)

The fact that σ, representing the RMS error, is approximately 3° implies that there is almost no error as a digital system, because 3° is about one-fifth of the 15° function limit (half of the 30° interval) regarded as an error in a digital system. In Table 4.6 all data are expressed in radians. Since the SN ratio is 48.02 dB, σ is computed as follows:

σ = √(10^(−4.802)) = 0.00397 rad = 0.22°  (4.56)

The ratio of this to the function limit of 15° is 1/68. Then, if noises are selected properly, almost no error occurs.
In considering the phase modulation function, in most cases frequency is handled as an indicative factor (i.e., a use-related factor whose effect is not regarded as noise). The procedure for calculating its SN ratio is the same as that for a system
Table 4.6
Data for phase shifter (experiment 1 in L18 orthogonal array)

Temperature   Frequency   V1            V2             V3
T1            F1          77 + j101     248 + j322     769 + j1017
              F2          87 + j104     280 + j330     870 + j1044
              F3          97 + j106     311 + j335     970 + j1058
T2            F1          77 + j102     247 + j322     784 + j1025
              F2          88 + j105     280 + j331     889 + j1052
              F3          98 + j107     311 + j335     989 + j1068
using frequency. Therefore, we explain the calculation procedure using experimental data from Technology Development for Electronic and Electric Industries [3].

Example
This example deals with the stability design of a phase shifter to advance the phase by 45°. Four control factors are chosen and assigned to an L18 orthogonal array. Since the voltage output is taken into consideration, input voltages are selected as a three-level signal. However, the voltage V is also a noise for phase modulation. Additionally, temperature T and frequency F are chosen as noises, as follows:

Voltage (mV): V1 = 224, V2 = 707, V3 = 2234
Temperature (°C): T1 = 10, T2 = 60
Frequency (MHz): F1 = 0.9, F2 = 1.0, F3 = 1.1
Table 4.6 includes output voltage data expressed in complex numbers for 18 combinations of all levels of the noises V, T, and F. Although the input voltage is a signal for the output voltage, we calculate the phase advance angle based on the complex numbers, because we focus only on the phase-shift function:

y = tan⁻¹(Y/X)  (4.57)

In this equation, X and Y indicate the real and imaginary parts, respectively. Using the scalar y, we calculate the SN ratio and sensitivity.
As for power, we compute the SN ratio and sensitivity according to the Hermitian form described earlier in the chapter. Because the function in this section is a phase advance (only 45° in this case), the signal has one level (more strictly, two phases, 0° and 45°). However, since our focal point is only the difference, all we need to do is check the phase advance angles for the data in Table 4.6. For the sake of convenience, we show them in radians in Table 4.7, although degrees could also be used as the unit. If three phase advance angles, 45°, 90°, and 135°, were selected, we would need to consider the dynamic function by setting the signal as M.
Table 4.7
Data for phase advance angle (rad)

Temperature   Frequency   V1      V2      V3      Total
T1            F1          0.919   0.915   0.923   2.757
              F2          0.874   0.867   0.876   2.617
              F3          0.830   0.823   0.829   2.482
T2            F1          0.924   0.916   0.918   2.758
              F2          0.873   0.869   0.869   2.611
              F3          0.829   0.823   0.824   2.476
Since we can adjust the phase angle to 45°, or 0.785 rad (the target angle), here we use the nominal-the-best SN ratio to minimize variability. Now T and V are true noises and F is an indicative factor. Then we analyze as follows:

ST = 0.919² + 0.915² + ⋯ + 0.824² = 13.721679  (f = 18)  (4.58)

Sm = 15.701²/18 = 13.695633  (f = 1)  (4.59)

SF = (5.515² + 5.228² + 4.958²)/6 − Sm = 0.025862  (f = 2)  (4.60)

Se = ST − Sm − SF = 0.0001835  (f = 15)  (4.61)

Ve = Se/15 = 0.00001223  (4.62)

η = 10 log [(1/18)(13.695633 − 0.00001223)]/0.00001223 = 47.94 dB  (4.63)

S = 10 log (1/18)(13.695633 − 0.00001223) = −1.187 dB  (4.64)
The target sensitivity, corresponding to 45°, is

10 log [(45/180)(3.1416)] = 10 log 0.785 = −1.049 dB  (4.65)
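The whole computation from Table 4.7 through equations (4.58)–(4.64) can be reproduced in a few lines (a check of the arithmetic; the F totals are recomputed from the same 18 readings):

```python
import math

# Phase advance angles (rad) from Table 4.7; rows F1-F3 for T1, then T2;
# columns V1, V2, V3.
data = [
    [0.919, 0.915, 0.923],   # T1, F1
    [0.874, 0.867, 0.876],   # T1, F2
    [0.830, 0.823, 0.829],   # T1, F3
    [0.924, 0.916, 0.918],   # T2, F1
    [0.873, 0.869, 0.869],   # T2, F2
    [0.829, 0.823, 0.824],   # T2, F3
]
flat = [v for row in data for v in row]

ST = sum(v ** 2 for v in flat)                  # eq. (4.58), f = 18
Sm = sum(flat) ** 2 / 18                        # eq. (4.59), f = 1
F = [sum(data[i]) + sum(data[i + 3]) for i in range(3)]   # frequency totals
SF = sum(fi ** 2 for fi in F) / 6 - Sm          # eq. (4.60), f = 2
Se = ST - Sm - SF                               # eq. (4.61), f = 15
Ve = Se / 15                                    # eq. (4.62)

eta = 10 * math.log10((Sm - Ve) / 18 / Ve)      # eq. (4.63): about 47.94 dB
S = 10 * math.log10((Sm - Ve) / 18)             # eq. (4.64): about -1.187 dB
```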
As a next step, under optimal conditions, we calculate the phase advance angle and adjust it to the value in equation (4.65) under standard conditions. This process can be completed with only one variable.
When there are multiple levels for one signal factor, we should calculate the dynamic SN ratio. When we design an oscillator using signal factors, we may select three levels for a signal to change phase, three levels for a signal to change output, three levels for frequency F, and three levels for temperature as an external condition, and assign them to an L9 orthogonal array. In this case, for the effective value of the output amplitude, we set a proportional equation for the signal of voltage V as the ideal function. All other factors are noises. As for the phase modulation function, if we select the following as levels of a phase advance and retard signal,

Level targeted at 45°: M1 = m1
Level targeted at 0°: M2 = m2
Level targeted at −45°: M3 = m3

the signal becomes zero-point proportional. The ideal function is a proportional equation where both the signal and the output go through the zero point and take positive and negative values. If we wish to select more signal levels, for example, two levels for each of the positive and negative sides (in total, five levels, M1, M2, M3, M4, and M5), we should set them as follows:

M1 = −120°
M2 = −60°
M3 = M4 = 0° (dummy)
M5 = 60°
M6 = 120°

Next, together with voltage V, temperature T, and frequency F, we allocate them to an L18 orthogonal array to take measurements. On the other hand, by setting M1 = −150°, M2 = −90°, M3 = −30°, M4 = 30°, M5 = 90°, and M6 = 150°, we can assign them to an L18 array. After calibrating the data measured using M0 = 0°, they are zero-point proportional.
DESIGN BY SIMULATION

Conventional Meaning of Robust Design

New Method: Functionality Design
Example
Imagine a vehicle steering function; more specifically, we change the orientation of a car by changing the steering angle. The first engineer who designed such a system considered how much the angle could be changed within a certain range of steering angle. For example, the steering angle range is defined by the rotation of a steering wheel, three rotations in each direction, clockwise and counterclockwise, as follows:

(360°)(3) = 1080°
(−360°)(3) = −1080°

In this case, the total steering angle adds up to 2160°. Accordingly, the engineer should have considered the relationship between the angle above and the steering curvature y: a certain curvature at a steering angle of 1080°, zero curvature at 0°, and a certain opposite curvature at −1080°.
For any function, a developer pursues an ideal relationship between signal M and output y under conditions of use. In quality engineering, since any function can be expressed by an input/output relationship of work or energy, an ideal function is essentially regarded as a proportionality. For instance, in the steering function, the ideal function can be given by

y = βM  (4.66)
In short, N1 represents conditions where the steering angle functions well, whereas N2 represents conditions where it does not. For example, the former represents a car with new tires running on a dry asphalt road; the latter, a car with worn tires running on a wet asphalt or snowy road.
By changing the vehicle design, we hope to mitigate the difference between N1 and N2. Quality engineering actively takes advantage of the interaction between control factors and a compounded error factor N to improve functionality. Then we
measure data such as curvature, as shown in Table 4.8 (in the counterclockwise-turn case, negative signs are added, and vice versa). Each Mi value is an angle.
Table 4.8 shows data of curvature y, the reciprocal of the turning radius, for each signal value of steering angle ranging from zero to almost maximum in both the clockwise and counterclockwise directions. If we turn left or right on a normal road or steer on a highway, the range of the data should be different. The data in Table 4.8 represent the situation where we are driving a car in a parking lot or steering through a hairpin curve at low speed. The factor of speed K below, indicating different conditions of use and called an indicative factor, is often assigned to an outer orthogonal array. This is because a signal factor value varies in accordance with each indicative factor level.

K1: low speed (less than 20 km/h)
K2: middle speed (20 to 60 km/h)
K3: high speed (more than 60 km/h)
The SN ratio and sensitivity for the data in Table 4.8 are computed as follows. First, we calculate two linear equations, L1 and L2, using N1 and N2:

L1 = M1y11 + M2y12 + ⋯ + M7y17  (4.67)

L2 = M1y21 + M2y22 + ⋯ + M7y27  (4.68)

ST = y11² + y12² + ⋯ + y27²  (f = 14)  (4.69)

Sβ = (L1 + L2)²/2r  (4.70)

where

r = M1² + M2² + ⋯ + M7² = (−720)² + (−480)² + ⋯ + 720² = 1,612,800  (4.71)
(4.71)
Table 4.8
Curvature data
M1
720
M2
480
M3
240
M4
0
M5
240
M6
480
M7
720
N1
y11
y12
y13
y14
y15
y16
y17
N2
y21
y22
y23
y24
y25
y26
y27
SN = (L1 − L2)²/2r  (4.72)

η = 10 log [(1/2r)(Sβ − Ve)]/VN  (4.73)

β = (L1 + L2)/2r  (4.74)

S = 10 log (1/2r)(Sβ − Ve)  (4.75)

Ve = Se/12  (4.76)

VN = (SN + Se)/13  (4.77)
A major aspect in which quality engineering differs from a conventional research and design procedure is that QE focuses on the interaction between signal and noise factors, which represent conditions of design and use.

Table 4.9
Decomposition of total variation

Source    f     S     V
β         1     Sβ    —
N         1     SN    —
e         12    Se    Ve
Total     14    ST    —
For example, the steering function discussed earlier is not a generic but an objective function. When an objective function is chosen, we need to check whether there is an interaction between control factors after allocating them to an orthogonal array, because their effects (differences in SN ratio, called gains) do not necessarily have additivity. An L18 orthogonal array is recommended because its size is appropriate for checking the additivity of gains. If there is no additivity, the signal and measurement characteristics used need to be reconsidered and changed.
4. Another disadvantage is the paradigm shift in the orthogonal array's role. In quality engineering, control factors are assigned to an orthogonal array. This is not because we need to calculate the main effects of control factors on the measurement characteristic y. Since the levels of control factors are fixed, there is no need to measure their effects on y using orthogonal arrays. The real objective of using orthogonal arrays is to check the reproducibility of gains, as described in disadvantage 3.
In new product development, there is usually no existing engineering knowledge. It is said that engineers suffer from being unable to find ways to improve the SN ratio. In fact, the use of orthogonal arrays enables engineers to perform R&D in a totally new area. In such applications, parameter-level intervals may be increased and many parameters may be studied together. If the system selected is not a good one, there will be little SN ratio improvement. Often, one orthogonal array experiment can tell us the limitation on SN ratios. This is an advantage of using an orthogonal array rather than a disadvantage.
We are asked repeatedly how we would solve a problem using quality engineering. Here are some generalities that we use.
1. Even if the problem is related to a consumer's complaint in the marketplace or to manufacturing variability, in lieu of questioning its root cause, we would suggest changing the design and using a generic function. This is a technical approach to problem solving through redesign.
2. To solve complaints in the marketplace, after finding parts sensitive to environmental changes or deterioration, we would replace them with robust parts, even if this incurs a cost increase. This is called tolerance design.
3. If variability occurs before shipping, we would reduce it by reviewing process control procedures and inspection standards. In quality engineering this is termed on-line quality engineering. This does not involve using Shewhart control charts or other management devices, but it does include the following daily routine activities by operators:
a. Feedback control. Stabilize processes continuously for products to be produced later by inspecting product characteristics and correcting the difference between them and target values.
b. Feedforward control. Based on the information from incoming materials or component parts, predict the product characteristics in the next process and calibrate processes continuously to match product characteristics with the target.
c. Preventive maintenance. Change or repair tools periodically, with or without checkup.
By doing so, we can make ST equivalent to work, and as a result, the following simple decomposition can be completed:

total work = (work by signal) + (work by noise)  (4.80)

This type of decomposition has long been used in the telecommunication field. In the case of a vehicle's acceleration performance, M should be the square root of distance and y the time to reach it. After choosing two noise levels, we calculate the SN ratio and sensitivity using equation (4.80), which holds true for a case where energy or work is expressed as a square root. But in many cases it is not clear whether this is so. To make sure that (4.80) holds true, we check it by the additivity of control factor effects using an orthogonal array. According to Newton's law, the following formula shows that distance y is proportional to squared time t² under constant acceleration:

y = (b/2)t²  (4.81)

so that t = √(2/b) √y; that is, the time is proportional to the signal M = √y.

Generic Function of Machining

y = βM  (4.82)
The term generic function is used quite often in quality engineering. This is not an objective function but a function serving as the means to achieve the objective. Since a function means to make something work, work should be measured as output. For example, in the case of cutting, it is the amount of material cut. On the other hand, as work input, electricity consumption or fuel consumption can be selected.
Since in this case both the signal (amount of cut M) and the data (electricity consumption y) are observations, M and y have six levels and the effects of N are not considered separately. For a practical situation, we should take the square root of both y and M. The linear equation is expressed as

L = M11y11 + M12y12 + ⋯ + M23y23  (4.84)

with the effective divider

r = M11² + M12² + ⋯ + M23²  (4.85)
Table 4.10
Data for machining

      T1           T2           T3
N1    M11, y11     M12, y12     M13, y13
N2    M21, y21     M22, y22     M23, y23
ST = y11² + y12² + ⋯ + y23²  (f = 6)  (4.86)

Sβ = L²/r  (f = 1)  (4.87)

Se = ST − Sβ  (f = 5)  (4.88)

Ve = Se/5  (4.89)

η = 10 log [(1/r)(Sβ − Ve)]/Ve  (4.90)

S = 10 log (1/r)(Sβ − Ve)  (4.91)
Since we analyze based on square roots, the true value of S is equal to the estimate of β², that is, the consumption of electricity needed for cutting, which should be smaller. If we perform smooth machining with a small amount of electricity, we can obviously improve dimensional accuracy and flatness.
2. Improvement of work efficiency. This analysis is not for machining performance but for work efficiency. In this case, an ideal function is expressed as

y = βT  (4.92)
ST = y11² + y12² + ⋯ + y23²  (f = 6)  (4.93)

Sβ = (L1 + L2)²/2r  (f = 1)  (4.94)

SN = (L1 − L2)²/2r  (f = 1)  (4.95)

Now

L1 = T1y11 + T2y12 + T3y13  (4.96)

L2 = T1y21 + T2y22 + T3y23  (4.97)

r = T1² + T2² + T3²  (4.98)

η = 10 log [(1/2r)(Sβ − Ve)]/VN  (4.99)

S = 10 log (1/2r)(Sβ − Ve)  (4.100)
Then

Ve = (1/4)(ST − Sβ − SN)  (4.101)

VN = (1/5)(ST − Sβ)  (4.102)
In this case, 2 of the ideal function represents electricity consumption per unit
time, which should be larger. To improve the SN ratio using equation (4.99) means
to design a procedure of using a large amount of electricity smoothly. Since we
minimize electricity consumption by equations (4.90) and (4.91), we can improve
not only productivity but also quality by taking advantage of both analyses.
The efficiency ratio of y to M, which has traditionally been used by engineers, can neither solve quality problems nor improve productivity. On the other hand, the hourly machining amount (machining measurement) used by managers in manufacturing cannot settle noise problems and product quality in machining. To improve quality and productivity at the same time, after measuring the data shown in Table 4.10 we should determine an optimal condition based on two values, the SN ratio and the sensitivity. To lower the sensitivity in equation (4.91) is to minimize electric power per unit removal amount. If we cut more using a slight quantity of electricity, we can cut smoothly, owing to improvement in the SN ratio in equation (4.90), as well as minimize energy loss. In other words, this means cutting sharply, thereby leading to good surface quality after cutting and improved machining efficiency per unit energy.
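The two-step analysis above, decomposition of the variation and then the SN ratio and sensitivity in decibels, can be sketched in a few lines. The time levels T and the readings y below are made-up illustrative numbers, not data from the handbook.

```python
import math

# Sketch of the work-efficiency analysis of equations (4.96)-(4.102):
# signal T1..T3 (time), two noise levels, readings y_ij (assumed values).
T = [1.0, 2.0, 3.0]
y = [[2.1, 3.9, 6.2],                    # N1 readings
     [1.8, 4.2, 5.7]]                    # N2 readings

r = sum(t * t for t in T)                                    # (4.98)
L1 = sum(t * yi for t, yi in zip(T, y[0]))                   # (4.96)
L2 = sum(t * yi for t, yi in zip(T, y[1]))                   # (4.97)

ST = sum(v * v for row in y for v in row)                    # (4.93), f = 6
Sb = (L1 + L2) ** 2 / (2 * r)                                # (4.94)
SN = (L1 - L2) ** 2 / (2 * r)                                # (4.95)
Ve = (ST - Sb - SN) / 4                                      # (4.101)
VN = (ST - Sb) / 5                                           # (4.102)

eta = 10 * math.log10(((Sb - Ve) / (2 * r)) / VN)            # (4.99)
S = 10 * math.log10((Sb - Ve) / (2 * r))                     # (4.100)
```

A larger `eta` indicates a smoother, more repeatable use of electricity across the noise levels.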
Since quality engineering is a generic technology, common problems occur across the mechanical, chemical, and electronic engineering fields. We explain the functionality evaluation method for on and off conditions using an example of machining.
Recently, evaluation methods of machining have made significant progress, as shown in an experiment implemented in 1997 by Ishikawajima-Harima Heavy Industries (IHI) and sponsored by the National Space Development Agency of Japan (NASDA). The content of this experiment was released at the Quality Engineering Symposium in 1998. Instead of using transformality [setting product dimensions input through a numerically controlled (NC) machine as the signal and corresponding dimensions after machining as the measurement characteristic], they measured input and output utilizing energy and work by taking advantage of the concept of generic function.
In 1959, the following experiment was conducted at the Hamamatsu Plant of the National Railway (now known as JR). In this experiment, various types of cutting tools (JIS, SWC, and new SWC cutting tools) and cutting conditions were assigned to an L27 orthogonal array to measure the net cutting power needed for a certain amount of cut. The experiment, based on 27 different combinations, revealed that the maximum power needed was several times as large as the minimum. This finding implies that excessive power may cause a rough surface or variability in dimensions. In contrast, cutting with quite a small amount of power means that we can cut sharply. Consequently, when we can cut smoothly and power-efficiently, material surfaces can be machined flat and variability in dimensions can be reduced. At the Quality Engineering Symposium in June 1998, IHI released
y = βM    (y: machined dimension; M: target dimension)    (4.103)

y = βM    (y: electricity consumed; M: amount of work)    (4.104)
The only difference between (4.103) and (4.104) is that the former is not a generic but an objective function, whereas the latter is a generic function representing the relationship between work and energy. Since y represents a unit of energy, a square of y does not make sense from the viewpoint of physics. Thus, by taking the square root of both sides beforehand, we make both sides equivalent to work and energy when they are squared. On the other hand, unless the square of a measurement characteristic equals energy or work, we have no rationale for calculating the SN ratio based on a generic function. After taking the square root of both sides in equation (4.104),
Table 4.11
Cutting experiment

                                  T1     T2     T3
N1   M: amount of cut             M11    M12    M13
     y: electricity consumption   y11    y12    y13
N2   M: amount of cut             M21    M22    M23
     y: electricity consumption   y21    y22    y23
we compute the total variation of output ST, which is the sum of squares of the square roots of each cumulative electricity consumption:

ST = (√y11)² + (√y12)² + ⋯ + (√y23)² = y11 + y12 + ⋯ + y23    (f = 6)    (4.105)
Sβ = (L1 + L2)²/(r1 + r2)    (4.106)

L1 = √M11 √y11 + √M12 √y12 + √M13 √y13    (4.107)

L2 = √M21 √y21 + √M22 √y22 + √M23 √y23    (4.108)

r1 = M11 + M12 + M13,  r2 = M21 + M22 + M23    (4.109)

SN = L1²/r1 + L2²/r2 − (L1 + L2)²/(r1 + r2)    (4.110)
p = e^(−β1T)    (4.111)

p + q = e^(−β2T)    (4.112)

Here T represents the time needed for one cycle of the engine, and p and q are measurements in the exhaust. The total reaction rate β1 and the side reaction rate β2 are calculated as

β1 = (1/T) ln (1/p)    (4.113)

β2 = (1/T) ln [1/(p + q)]    (4.114)
η = 10 log (β1²/β2²)    (4.115)

The sensitivity is

S = 10 log β1²    (4.116)

The larger the sensitivity, the more the engine's output power increases; and the larger the SN ratio becomes, the less effect the side reaction has.
Cycle time T should be tuned such that the benefit achieved by the magnitude of the total reaction rate and the loss caused by the magnitude of the side reaction are balanced optimally.

Selecting noise N with two levels (such as one for the starting point of the engine and the other for 10 minutes after starting), we obtain the data shown in Table 4.12. According to the table, we calculate the reaction rates for N1 and N2. For the insufficient reaction,

β11 = (1/T) ln (1/p1)    (4.117)

β12 = (1/T) ln (1/p2)    (4.118)
Table 4.12
Function of engine based on chemical reaction

                           N1         N2
Insufficient reaction p    p1         p2
Objective reaction q       q1         q2
Total                      p1 + q1    p2 + q2
and for the side reaction,

β21 = (1/T) ln [1/(p1 + q1)]    (4.119)

β22 = (1/T) ln [1/(p2 + q2)]    (4.120)
Since the total reaction rates β11 and β12 are larger-the-better, their SN ratio is

η1 = −10 log (1/2)(1/β11² + 1/β12²)    (4.121)

Since the side reaction rates β21 and β22 are smaller-the-better, their SN ratio is

η2 = −10 log (1/2)(β21² + β22²)    (4.122)

The total SN ratio and the sensitivity are

η = η1 + η2    (4.123)

S = η1    (4.124)
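The reaction-rate calculations above can be sketched as follows; the cycle time T and the exhaust fractions p and q are assumed illustrative values, not data from the text.

```python
import math

# Sketch of equations (4.117)-(4.123): reaction rates at two noise levels,
# then the larger-the-better and smaller-the-better SN ratios.
T = 1.0                     # cycle time (assumed)
p = [0.20, 0.30]            # unreacted fractions at N1, N2 (assumed)
q = [0.70, 0.55]            # objective fractions at N1, N2 (assumed)

b1 = [math.log(1 / pi) / T for pi in p]                     # (4.117), (4.118)
b2 = [math.log(1 / (pi + qi)) / T for pi, qi in zip(p, q)]  # (4.119), (4.120)

eta1 = -10 * math.log10(0.5 * (1 / b1[0] ** 2 + 1 / b1[1] ** 2))  # (4.121)
eta2 = -10 * math.log10(0.5 * (b2[0] ** 2 + b2[1] ** 2))          # (4.122)
eta = eta1 + eta2                                                  # (4.123)
```

Note that a fast total reaction makes `eta1` large, while a slow side reaction makes `eta2` large, so their sum rewards exactly the balance the text describes.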
If side reactions barely occur and we cannot trust their measurements in a chemical reaction experiment, we can separate the point of time for measuring the total reaction rate (1 − p) from the point of time for measuring the side reaction rate (1 − p − q). For example, we can set T1 to 1 minute and T2 to 30 minutes, as illustrated in Table 4.13. When T1 = T2, we can use the procedure described in the preceding section.
Table 4.13
Experimental data

                           T1           T2
Insufficient reaction p    p11          p12
Objective reaction q       q11          q12
Total                      p11 + q11    p12 + q12

General Chemical Reactions
β1 = (1/T1) ln (1/p11)    (4.125)

β2 = (1/T2) ln [1/(p12 + q12)]    (4.126)

Using the equations above, we can compute the SN ratio and sensitivity as

η = 10 log (β1²/β2²)    (4.127)

S = 10 log β1²    (4.128)
Evaluation of Images
In a practical experiment, we sometimes select seven levels for luminosity, such as 1000, 100, 10, 1, 1/10, 1/100, and 1/1000 times as much as the current level. At the same time, we choose exposure times inversely proportional to each of them. After selecting a logarithm of exposure time E for the value 0.301, the SN ratio and sensitivity are calculated for analysis.

Next, we set a reading of exposure time E (multiplied by a certain decimal value) for each of the following seven levels of signal M (logarithm of luminosity) to y1, y2, ..., y7:

M1 = −3.0, M2 = −2.0, ..., M7 = 3.0    (4.129)
In this case, a small difference between the two sensitivity curves for N1 and N2 represents better performance. A good way to design such an experiment is to compound all noise levels into only two levels. Thus, no matter how many noise levels we have, we should compound them into two levels.
Since we have three levels, K1, K2, and K3, for the signal of the three primary colors, two levels for noise, and seven levels for the output signal, the total number of data is (7)(3)(2) = 42. For each of the three primary colors, we calculate the SN ratio and sensitivity. Here we show only the calculation for K1. Based on Table 4.14,
Table 4.14
Experimental data

           M1     M2     ⋯    M7     Linear Equation
K1   N1    y11    y12    ⋯    y17    L1
     N2    y21    y22    ⋯    y27    L2
K2   N1    y31    y32    ⋯    y37    L3
     N2    y41    y42    ⋯    y47    L4
K3   N1    y51    y52    ⋯    y57    L5
     N2    y61    y62    ⋯    y67    L6
we proceed with the calculation. Taking into account that M has seven levels, we view M4 = 0 as the standard point and subtract the value at M4 from those at M1, M2, M3, M5, M6, and M7, the readings being set to y1, y2, y3, y5, y6, and y7. By subtracting the reading y for M4 = 0 in the case of K1, we obtain the following linear equations:
L1 = −3y11 − 2y12 + ⋯ + 3y17    (4.130)

L2 = −3y21 − 2y22 + ⋯ + 3y27    (4.131)

Sβ = (L1 + L2)²/2r    (4.132)

SN = (L1 − L2)²/2r    (4.133)

ST = y11² + y12² + ⋯ + y27²    (4.134)

Ve = (1/12)(ST − Sβ − SN)    (4.135)

VN = (1/13)(ST − Sβ)    (4.136)

η1 = 10 log [(1/2r)(Sβ − Ve)]/VN    (4.137)

S1 = 10 log (1/2r)(Sβ − Ve)    (4.138)
As for K2 and K3, we calculate η2, S2, η3, and S3 in the same way. The total SN ratio is computed as the sum of η1, η2, and η3. To balance densities ranging from low to high for K1, K2, and K3, we should equalize the sensitivities for the three primary colors, S1, S2, and S3, by solving simultaneous equations based on two control factors. Solving them such that

S1 = S2 = S3    (4.139)

holds true under standard conditions is not a new method.
The procedure discussed thus far is the latest method in quality engineering, whereas we had previously used a method of calculating the density for a logarithm of luminosity, log E. The weakness of the latter method is that the density range as output differs at each experiment. That is, we should not conduct an experiment using the same luminosity range for films of ASA 100 and 200. For the latter, we should perform an experiment using half the luminosity needed for the former. We have shown the procedure to do so.
Quality engineering is a method of evaluating functionality with a single index, η. Selection of the means for improvement is made by specialized technologies and specialists. Top management has the authority over investment and personnel affairs. The result of their performance is evaluated based on a balance sheet that cannot be manipulated. Similarly, although hardware and software can be designed by specialists, those specialists cannot evaluate product performance at their own discretion.
Functionality Such as Granulation or Polymerization Distribution

If you wish to limit granulation distribution within a certain range, you must classify granules below the lower limit as excessively granulated, ones around the center as targeted, and ones above the upper limit as insufficiently granulated, and proceed with the analysis by following the procedure discussed earlier for chemical reactions. For polymerization distribution, you can use the same technique.

Separation System
p = Bp*/[A(1 − q*) + Bp*]    (4.140)

q = Aq*/[Aq* + B(1 − p*)]    (4.141)
The error ratio p represents the ratio of copper molecules originally contained in the ore but mistakenly left in the slag after smelting; 1 − p is called the yield. Subtracting the yield from 1 returns the error ratio p. The error ratio q indicates the ratio of all noncopper molecules originally charged into the furnace that are mistakenly included in the product, or crude copper. Both ratios are calculated as mass ratios. Even if the yield 1 − p is large enough, if the error
Table 4.15
Input/output for copper smelting

                    Output
Input        Product      Slag         Total
Copper       A(1 − q*)    Bp*          A(1 − q*) + Bp*
Noncopper    Aq*          B(1 − p*)    Aq* + B(1 − p*)
Total        A            B            A + B
ratio q is also large, this smelting is considered inappropriate. After computing the
two error ratios p and q, we prepare Table 4.16.
Consider the two error ratios p and q in Table 4.15. If copper is supposed to
melt well and move easily in the product when the temperature in the furnace is
increased, the error ratio p decreases. However, since ingredients other than copper melt well at the same time, the error ratio q rises. A factor that can decrease
p and increase q is regarded as an adequate variable for tuning. Although most
factors have such characteristics, more or less, we consider it real technology to
reduce errors regarding both p and q rather than obtaining effects by tuning. In
short, this is smelting technology with a large functionality.
To find a factor level reducing both p and q for a variety of factors, after making an adjustment so as not to change the ratio of p to q, we should evaluate functionality. The resulting measure is called the standard SN ratio. Since the gain in the SN ratio accords with that when p = q, no matter how great the ratio of p to q, in most cases we calculate the SN ratio after p is adjusted to equal q. First, we compute p0 as follows when we set p = q = p0:
p0 = 1/{1 + √[(1/p − 1)(1/q − 1)]}    (4.142)

and the standard SN ratio is

η = 10 log (1 − 2p0)²/[4p0(1 − p0)]    (4.143)
Table 4.16
Input/output expressed by error ratios

             Product      Slag         Total
Copper       1 − p        p            1
Noncopper    q            1 − q        1
Total        1 − p + q    1 + p − q    2
for the optimal and initial conditions, and calculate the gains. After this, we conduct a confirmatory experiment for the two conditions, compute SN ratios, and compare the estimated gains with those obtained from the experiment.

Based on the experiment under optimal conditions, we calculate p and q. Unless the sum of losses for p and q is minimized, tuning is done using factors (such as the temperature in the furnace) that influence the sensitivity but do not affect the SN ratio. In tuning, we should gradually change the adjusting factor level. This adjustment should be made after the optimal SN ratio condition is determined.

The procedure detailed here is even applicable to the removal of harmful elements or the segregation of garbage.
Example
First, we consider a case of evaluating the main effects on a cancer cell and the
side effects on a normal cell. Although our example is an anticancer drug, this
analytic procedure holds true for thermotherapy and radiation therapy using supersonic or electromagnetic waves. If possible, by taking advantage of an L18 orthogonal
array, we should study the method using a drug and such therapies at the same
time.
Quality engineering recommends that we experiment on cells or animals. We need to alter the density of a drug to be assessed by h milligrams (e.g., 1 mg) per unit time (e.g., 1 minute). Next, we select one cancer cell and one normal cell and
designate them M1 and M2, respectively. In quality engineering, M is called a signal
factor. Suppose that M1 and M2 are placed in a certain solution (e.g., water or a
salt solution), the density is increased by 1 mg/minute, and the length of time that
each cell survives is measured. Imagine that the cancer and normal cells die at the
eighth and fourteenth minutes, respectively. We express these data as shown in
Table 4.17, where 1 indicates alive and 0 indicates dead. In addition, M1 and
Table 4.17
Data for a single cell

      T1   T2   T3   T4   T5   T6   T7   T8   T9   T10  T11  T12  T13  T14  T15
M1    1    1    1    1    1    1    1    0    0    0    0    0    0    0    0
M2    1    1    1    1    1    1    1    1    1    1    1    1    1    0    0
M2 are the cancer and normal cells, and T1, T2, ... indicate each lapse of time: 1, 2, ... minutes.

For digital data regarding a dead-or-alive (or cured-or-not-cured) state, we calculate LD50 (lethal dose 50). In this case, the LD50 value of M1 is 7.5 because the cell dies between T7 and T8, whereas that of M2 is 13.5. LD50 represents the quantity of drug needed until 50% of cells die.
In quality engineering, both M and T are signal factors. Although both signal and noise factors are variables in use, a signal factor is a factor whose effect should exist, and conversely, a noise factor is one whose effect should be minimized. That is, if there is no difference between M1 and M2, and furthermore, between the different amounts of drug (or of time in this case), this experiment fails. Then we regard both M and T as signal factors. Thus, we set up X1 and X2 as follows:

X1 = LD50 of a cancer cell    (4.144)

X2 = LD50 of a normal cell    (4.145)

In this case, the smaller X1 becomes, the better the drug's performance; in contrast, X2 should be greater. Quality engineering terms the former the smaller-the-better characteristic and the latter the larger-the-better characteristic. The following equation calculates the SN ratio:
η = 20 log (X2/X1)    (4.146)
Table 4.18
Dead-or-alive data

         T1  T2  T3  T4  T5  T6  T7  T8  T9  T10  T11  T12  T13  T14  T15  T16  T17  T18  T19  T20
M1  N1   1   1   1   1   1   0   0   0   0   0    0    0    0    0    0    0    0    0    0    0
    N2   1   1   1   0   0   0   0   0   0   0    0    0    0    0    0    0    0    0    0    0
    N3   1   1   1   1   1   1   1   1   1   1    1    0    0    0    0    0    0    0    0    0
M2  N1   1   1   1   1   1   1   1   1   1   1    1    1    1    1    0    0    0    0    0    0
    N2   1   1   1   1   1   1   1   1   0   0    0    0    0    0    0    0    0    0    0    0
    N3   1   1   1   1   1   1   1   1   1   1    1    1    1    1    1    1    1    1    1    0
Table 4.19
LD50 data

      N1      N2     N3
M1    5.5     3.5    11.5
M2    14.5    8.5    19.5
Since the data for M1 should have a smaller standard deviation as well as a smaller average, after calculating the average of the sum of squared data, we multiply its logarithm by −10. This is termed the smaller-the-better SN ratio:

η1 = −10 log (1/3)(5.5² + 3.5² + 11.5²) = −17.65 dB    (4.147)

On the other hand, because the LD50 value of the normal cell M2 should be as large as possible, after computing the average of the sum of reciprocal squared data, we multiply its logarithm by −10. This is the larger-the-better SN ratio:

η2 = −10 log (1/3)(1/14.5² + 1/8.5² + 1/19.5²) = 21.50 dB    (4.148)
η = η1 + η2 = −17.65 + 21.50 = 3.85 dB    (4.149)
For example, given two drugs A1 and A2, we obtain the experimental data shown in Table 4.20. If we compare the drugs, N1, N2, and N3 for A1 should be consistent with N1, N2, and N3 for A2. Now suppose that the data of A1 are the same as those in Table 4.19 and that a different drug A2 has the LD50 data for M1 and M2 illustrated in Table 4.20. A1's SN ratio is as in equation (4.149). To compute A2's SN ratio, we calculate the SN ratio for the main effect η1 as follows:

η1 = −10 log (1/3)(18.5² + 11.5² + 20.5²) = −24.75 dB    (4.150)
Table 4.20
Data for comparison experiment

A1:    M1      M2
N1     5.5     14.5
N2     3.5     8.5
N3     11.5    19.5

A2:    M1      M2
N1     18.5    89.5
N2     11.5    40.5
N3     20.5    103.5
Next, as a larger-the-better SN ratio, we calculate the SN ratio for the side effects η2 as

η2 = −10 log (1/3)(1/89.5² + 1/40.5² + 1/103.5²) = 35.59 dB    (4.151)

Therefore, we obtain the total SN ratio, combining the main and side effects, by adding them up:

η = −24.75 + 35.59 = 10.84 dB    (4.152)
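The whole comparison of the two drugs reduces to the two SN-ratio formulas of equations (4.147) and (4.148) applied to the LD50 rows of Tables 4.19 and 4.20:

```python
import math

# Reproduces equations (4.147)-(4.152) from the LD50 data.
def stb(values):   # smaller-the-better SN ratio, as in (4.147)
    return -10 * math.log10(sum(v * v for v in values) / len(values))

def ltb(values):   # larger-the-better SN ratio, as in (4.148)
    return -10 * math.log10(sum(1 / (v * v) for v in values) / len(values))

a1_main = stb([5.5, 3.5, 11.5])        # -17.65 dB
a1_side = ltb([14.5, 8.5, 19.5])       #  21.50 dB
a2_main = stb([18.5, 11.5, 20.5])      # -24.75 dB
a2_side = ltb([89.5, 40.5, 103.5])     #  35.59 dB

# Totals as tabulated in Table 4.21 (components rounded to 0.01 dB first).
a1_total = round(round(a1_main, 2) + round(a1_side, 2), 2)   # 3.85
a2_total = round(round(a2_main, 2) + round(a2_side, 2), 2)   # 10.84
```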
Table 4.21
Comparison of drugs A1 and A2

        Main Effect    Side Effect    Total
A1      −17.65         21.50          3.85
A2      −24.75         35.59          10.84
Gain    −7.10          14.09          6.99
point is 36°C. If the cancer cell dies at 43°C, the LD50 value is 6.5°C, the difference from the standard point of 36°C. If the normal cell dies at 47°C, the LD50 value is 10.5°C. When a sonic or electromagnetic wave is used in place of temperature, we need to select the wave's power (e.g., raising the power by 1 W/minute).
Layout of Signal Factors in an Orthogonal Array
Software Diagnosis Using Interaction
For the sake of convenience, we explain the procedure using an L36 orthogonal array; the same holds true for other types of arrays. According to procedure 3 in the preceding section, we allocate 11 two-level factors and 12 three-level factors to an L36 orthogonal array, as shown in Table 4.22. Although one-level factors are not assigned to the orthogonal array, they must be tested.

We set the two-level signal factors to A, B, ..., K, and the three-level signal factors to L, M, ..., W. Using the combinations of experiments illustrated in Table 4.22, we conduct a test and measure data on whether or not the software functions properly for all 36 combinations. When we measure output data for tickets and change in selling tickets, we are obviously supposed to set 0 if both outputs are correct and 1 otherwise. In addition, we need to calculate an interaction for each case of 0 and 1.
Now suppose that the measurements taken are those shown in Table 4.22. In analyzing the data, we sum up the data for all combinations of A to W. The total number of combinations is 253, starting with AB and ending with VW. Since we do not have enough space to show all combinations, we show two-way tables for only six combinations (AB, AC, AL, AM, LM, and LW) in Table 4.23. For practical use, we can use a computer to create two-way tables for all combinations.

Table 4.23 includes the six two-way tables for AB, AC, AL, AM, LM, and LW, and each table comprises numbers representing the sum of the 0/1 data for the four conditions A1B1 (Nos. 1–9), A1B2 (Nos. 10–18), A2B1 (Nos. 19–27), and A2B2 (Nos. 28–36). Although there are 253 possible two-way tables, we illustrate only six. After we create two-way tables for all possible combinations, we need to consider the results.

Based on Table 4.23, we calculate the combined effects. We need to single out the two-way tables for combinations whose error ratio is 100%, because if an error in the software exists, it leads to a 100% error. In this case, using a computer, we should produce tables only for L2 and L3 for A2, M3 for A2, M2 for L3, W2 and W3 for L2, and W2 and W3 for L3. Of the 253 two-way tables, we output only the 100% error tables, so we do not need to output those for AB and AC.

We need to enumerate not only two-way tables but all 100% error tables from all 253 possible tables, because bugs in software are caused primarily by combinations of signal factor effects. It is the software designer's job to correct the errors.
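The two-way-table scan described above can be sketched briefly. The factor assignments and 0/1 error data below are small made-up stand-ins, not the actual L36 layout of Table 4.22.

```python
from itertools import combinations

# For every pair of factors, tally [errors, runs] per level pair and flag
# cells whose error ratio is 100% (candidate software bugs).
assign = {                      # factor -> level of each run (4 runs here)
    "A": [1, 1, 2, 2],
    "L": [1, 2, 2, 3],
}
errors = [0, 1, 1, 1]           # 0 = correct output, 1 = error

def two_way(f1, f2):
    table = {}
    for lv1, lv2, e in zip(assign[f1], assign[f2], errors):
        cell = table.setdefault((lv1, lv2), [0, 0])  # [errors, runs]
        cell[0] += e
        cell[1] += 1
    return table

for f1, f2 in combinations(assign, 2):
    for (lv1, lv2), (err, n) in sorted(two_way(f1, f2).items()):
        if err == n:            # 100% error cell
            print(f"{f1}{lv1} x {f2}{lv2}: {err}/{n}")
```

With the real 36-run layout, the same loop over all 253 factor pairs produces exactly the kind of 100% error listing the text calls for.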
Table 4.22
Layout and data of L36 orthogonal array
(The two-level signal factors A to K are assigned to columns 1 to 11 and the three-level signal factors L to W to columns 12 to 23; each of the runs No. 1 to No. 36 carries its 0/1 measurement in the data column.)
Table 4.23
Supplemental tables

(1) AB two-way table
         B1    B2    Total
A1       4     4     8
A2       6     8     14
Total    10    12    22

(2) AC two-way table
         C1    C2    Total
A1       4     4     8
A2       7     7     14
Total    11    11    22

(3) AL two-way table
         L1    L2    L3    Total
A1       0     4     4     8
A2       2     6     6     14
Total    2     10    10    22

(4) AM two-way table
         M1    M2    M3    Total
A1       1     2     5     8
A2       6     4     4     14
Total    7     6     9     22

(5) LM two-way table
         M1    M2    M3    Total
L1       0     0     2     2
L2       4     2     4     10
L3       3     4     3     10
Total    7     6     9     22

(6) LW two-way table
         W1    W2    W3    Total
L1       1     0     1     2
L2       2     4     4     10
L3       2     4     4     10
Total    5     8     9     22
distance only in the unit cluster of a population and defining variable distance in a population in consideration of variable interrelationships among items.

The MTS method also defines a group of items close to their average as a unit cluster in such a way that we can use it to diagnose or monitor a corporation. In fact, both the MT and MTS methods have started to be applied in various fields because they are outstanding methods of pattern recognition. What is most important in utilizing multidimensional information is to establish a fundamental database. That is, we should consider what types of items to select and which groups to collect to form the database. These issues should be determined by persons expert in a specialized field.
In this section we detail a procedure for streamlining a medical checkup or clinical examination using a database for a group of healthy people (referred to subsequently as normal people). Suppose that the total number of items used in the database is k. Using data for a group of normal people (for example, the data of people who are examined and found to be in good health in each of three consecutive annual medical checkups; if possible, the data of hundreds of people is preferable), we create a scale to characterize the group.
As the scale we use the distance measure of P. C. Mahalanobis, the Indian statistician, introduced in his 1934 thesis. In calculating the Mahalanobis distance for a certain group, the group needs to have homogeneous members. In other words, a group containing abnormal people should not be considered. If the group contains people with both low and high blood pressure, we should not regard the data as a single distribution.

Indeed, we may consider a group of people suffering only from hepatitis type A as being somewhat homogeneous; however, the group still exhibits a wide deviation. Now let's look at a group of normal people without hepatitis. For gender and age we include both male and female and all adult age brackets. Male and female are denoted by 0 and 1, respectively. Every item is dealt with as a quantitative measurement in the calculation. After regarding the 0 and 1 data for male and female as continuous variables, we calculate an average m and standard deviation σ. For all items for normal people, we compute the average value m and standard deviation σ and convert the data as below. This is called normalization.
Y = (y − m)/σ    (4.153)
Suppose that we have n normal people. When we convert y1, y2, ..., yn into Y1, Y2, ..., Yn using equation (4.153), Y1, Y2, ..., Yn have an average of 0 and a standard deviation of 1, which makes them easier to interpret. Selecting two of the k items arbitrarily and dividing the sum of their normalized products by n (that is, calculating the covariance), we obtain a correlation coefficient.

After forming the matrix of correlation coefficients between each pair of the k items and calculating its inverse matrix (aij), we compute the following square of D, representing the Mahalanobis distance:

D² = (1/k) Σij aij Yi Yj    (4.154)
Y1 = (y1 − m1)/σ1
Y2 = (y2 − m2)/σ2
⋮
Yk = (yk − mk)/σk    (4.155)
In these equations, for the k items of the group of normal people, the average of each item is m1, m2, ..., mk and the standard deviation is σ1, σ2, ..., σk. The data of the k items from a person, y1, y2, ..., yk, are normalized to obtain Y1, Y2, ..., Yk. What is important here is that the person, whose identity is unknown in terms of normal or abnormal, is an arbitrary person. If the person is normal, D² has a value of approximately 1; if not, it is much larger than 1. That is, the Mahalanobis distance D² indicates how far the person is from normal people.

For practical purposes, we substitute the following y (representing not the SN ratio but the magnitude of N):

y = 10 log D²    (4.156)
Example
The example shown in Table 4.24 is not a common medical examination but a special medical checkup to find patients with liver dysfunction, studied by Tatsuji
Table 4.24
Physiological examination items

Examination Item            Acronym    Normal Value
Total protein               TP         6.5–7.5 g/dL
Albumin                     Alb        3.5–4.5 g/dL
Cholinesterase              ChE        0.60–1.00 pH
GOT                                    2–25 units
GPT                                    0–22 units
Lactate dehydrogenase       LDH        130–250 units
Alkaline phosphatase        ALP        2.0–10.0 units
γ-Glutamyltranspeptidase    γ-GTP      0–68 units
Leucine aminopeptidase      LAP        120–450 units
Total cholesterol           TCh        140–240 mg/dL
Triglyceride                TG         70–120 mg/dL
Phospholipid                PL         150–250 mg/dL
Creatinine                  Cr         0.5–1.1 mg/dL
Blood urea nitrogen         BUN        5–23 mg/dL
Uric acid                   UA         2.5–8.0 mg/dL
Kanetaka at Tokyo Teishin Hospital [6]. In addition to the 15 items shown in Table 4.24, age and gender are included. The total number of items is 17.

By selecting data from 200 people (several hundred would be desirable, but only 200 were chosen because of the capacity of a personal computer) diagnosed as being in good health for three years running at the annual medical checkup given by Tokyo's Teishin Hospital, the researchers established a database of normal people. The data of normal people who were healthy for two consecutive years may also be used. Thus, we can compute the Mahalanobis distance based on the database of 200 people. Some people think that, on the basis of the raw data, it follows an F-distribution with an average of approximately 1, with 17 degrees of freedom for the numerator and infinite degrees of freedom for the denominator. However, since the distribution type is not important, this view misses the point. We compare the Mahalanobis distance with the degree of each patient's dysfunction.

If we use decibel values in place of raw data, the data for the group of normal people are supposed to cluster around 0 dB with a range of a few decibels. To minimize the loss caused by diagnosis error, we should determine a threshold. We show a simple method of detecting judgment error next.
Table 4.25 demonstrates a case of misdiagnosis in which healthy people with data more than 6 dB away from those of the normal group are judged not normal. For 95 new persons coming to a medical checkup, Kanetaka analyzed the actual diagnostic error by using the current diagnosis, a diagnosis using the Mahalanobis distance, and close (precise) examination.

Category 1 in Table 4.25 is considered normal. Category 2 comprises a group of people who have no liver dysfunction but had ingested food or alcohol, despite being prohibited from doing so, before the medical checkup. Therefore, category 2 should be judged normal, but the current diagnosis inferred that 12 of its 13 normal people were abnormal. The Mahalanobis method did misjudge 9 of these normal people as abnormal, but this number is three less than that obtained using the current method.

Category 3 consists of a group of people suffering from slight dysfunctions. Both the current and Mahalanobis methods overlooked one abnormal person, yet both of them detected 10 of 11 abnormal people correctly.

Category 4 is a cluster of 5 people suffering from moderate dysfunctions, and both methods found all of them. Since category 2 is a group of normal persons, combining categories 1 and 2 as the group without liver dysfunction and categories 3 and 4 as the group with liver dysfunction, we summarize the diagnostic errors for each method in Table 4.26. For these contingency tables, each discriminability ρ (0: no discriminability; 1: 100% discriminability) is calculated by the following equations (see Separation System in Section 4.4 for the theoretical background). A1's discriminability:
ρ1 = [(28)(15) − (51)(1)]²/[(79)(16)(29)(66)] = 0.0563    (4.157)

A2's discriminability:

ρ2 = [(63)(15) − (16)(1)]²/[(79)(16)(64)(31)] = 0.344    (4.158)
Table 4.25
Medical checkup and discriminability

             A1: Current Method      A2: Mahalanobis Method
Category a   Normal    Abnormal      Normal    Abnormal      Total
1            27        39            59        7             66
2            1         12            4         9             13
3            1         10            1         10            11
4            0         5             0         5             5
Total        29        66            64        31            95

a 1, Normal; 2, normal but temporarily abnormal due to food and alcohol; 3, slightly abnormal; 4, moderately abnormal.
Table 4.26
2 × 2 Contingency tables

A1: Current method
                              Diagnosis
Liver dysfunction       Normal    Abnormal    Total
(−) Normal              28        51          79
(+) Abnormal            1         15          16
Total                   29        66          95

A2: Mahalanobis method
                              Diagnosis
Liver dysfunction       Normal    Abnormal    Total
(−) Normal              63        16          79
(+) Abnormal            1         15          16
Total                   64        31          95
A1's SN ratio:

η1 = 10 log [ρ1/(1 − ρ1)] = 10 log (0.0563/0.9437) = −12.2 dB    (4.159)

A2's SN ratio:

η2 = 10 log (0.344/0.656) = −2.8 dB    (4.160)
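The discriminability and decibel figures of equations (4.157)–(4.160) can be reproduced directly from the cells of Table 4.26:

```python
import math

# Discriminability rho of a 2x2 contingency table (cells a b / c d) and
# its SN ratio in decibels, as in equations (4.157)-(4.160).
def discriminability(a, b, c, d):
    num = (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)   # row and column totals
    return num / den

def sn_db(rho):
    return 10 * math.log10(rho / (1 - rho))

rho1 = discriminability(28, 51, 1, 15)   # A1, current method
rho2 = discriminability(63, 16, 1, 15)   # A2, Mahalanobis method
print(round(rho1, 4), round(sn_db(rho1), 1))   # 0.0563 -12.2
print(round(rho2, 3), round(sn_db(rho2), 1))   # 0.344 -2.8
```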
Design of General Pattern Recognition and Evaluation Procedure

MT METHOD
There is a large gap between science and technology. The former is a quest for
truth; the latter does not seek truth but invents a means of attaining an objective
function. In mechanical, electronic, and chemical engineering, technical means
are described in a systematic manner for each objective function. For the same
objective function, there are various types of means, which do not exist in nature.
A product planning department considers types of functions. The term product
planning, including hardware and software, means to plan products to be released
in the marketplace, most of which are related to active functions. However, there
are quite a few passive functions, such as inspection, diagnosis, and prediction. In
evaluating an active function, we focus on the magnitude of deviation from its
ideal function. Using the SN ratio, we do a functional evaluation that indicates
the magnitude of error.
In contrast, a passive function generally lies in the multidimensional world,
including time variables. Although our objective is obviously pattern recognition
in the multidimensional world, we need a brand-new method of summarizing multidimensional data. To summarize multidimensional data, quality engineering
(more properly, the Taguchi method) gives the following paradigms:
Procedure 1: Select items, including time-series items. Up to several thousand
items can be analyzed by a computer.
Procedure 2: Select zero-point and unit values and base space to determine a
pattern.
Procedure 3: After summarizing measurement items only from the base space,
we formulate the equation to calculate the Mahalanobis distance. The Mahalanobis space determines the origin of multidimensional measurement
and unit value.
Procedure 4: For an object that does not belong to the base space, we compute
Mahalanobis distances and assess their accuracy using an SN ratio.
Procedure 5: Considering cost and the SN ratio, we sort out necessary items
for optimal diagnosis and proper prediction method.
Among these, specialists are responsible for procedures 1 and 2, and quality engineering takes care of procedures 3, 4, and 5. In quality engineering, any quality problem is attributed to functional variability. That is, functional improvement will lead to solving any type of quality problem. Therefore, we determine that we should improve functionality by changing various types of design conditions (when dealing with multiple variables, by selecting items and the base space).
Since the Mahalanobis distance (variance) rests on the calculation of all items
selected, we determine the following control factors for each item or for each
group of items and, in most cases, allocate them to a two-level orthogonal array
when we study how to sort items:
First factor level. Items selected (or groups of items selected) are used.
Second factor level. Items selected (or groups of items selected) are not used.
We do not need to sort out items regarded as essential. Now suppose that the number of items (or groups of items) to be studied is l and an appropriate two-level orthogonal array is LN. When l ≤ 30, an L32 array is used, and when l ≤ 100, an L108 or L124 array is selected.
Once control factors are assigned to LN, we formulate an equation to calculate
the Mahalanobis distance by using selected items because each experimental condition in the orthogonal array shows the assignment of items. Following procedure
4, we compute the SN ratio, and for each SN ratio for each experiment in orthogonal array LN, we calculate the control factor effect. If certain items chosen
contribute negatively or little to improving the SN ratio, we should select an optimal condition by excluding the items. This is a procedure of sorting out items
used for the Mahalanobis distance.
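The item-sorting idea can be sketched with a tiny full-factorial stand-in for the orthogonal array: level 1 of each control factor means the item group is used in the Mahalanobis calculation, level 2 means it is left out. The `eval_sn` function and its scores below are hypothetical placeholders for the real SN-ratio evaluation of procedure 4.

```python
from itertools import product

items = ["grpA", "grpB", "grpC"]       # hypothetical item groups

def eval_sn(used):
    # Placeholder: in practice, rebuild the Mahalanobis space from the
    # `used` items and compute the SN ratio of known-abnormal samples.
    score = {"grpA": 3.0, "grpB": -0.5, "grpC": 1.2}
    return sum(score[i] for i in used)

# One run per level combination; compute the factor effect of each item:
# average SN with the item minus average SN without it.
runs = [dict(zip(items, lv)) for lv in product([1, 2], repeat=len(items))]
sn = [eval_sn([i for i in items if r[i] == 1]) for r in runs]
effect = {}
for i in items:
    with_i = [s for r, s in zip(runs, sn) if r[i] == 1]
    without = [s for r, s in zip(runs, sn) if r[i] == 2]
    effect[i] = sum(with_i) / len(with_i) - sum(without) / len(without)

keep = [i for i in items if effect[i] > 0]   # drop items that hurt the SN ratio
```

With a real L32 or larger array, only a fraction of the level combinations is run, but the factor-effect arithmetic is the same.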
MTS METHOD
x1 = X1    (4.161)

b21 = (1/n) Σj X2j x1j    (4.162)

In this case b21 is not only the regression coefficient but also the correlation coefficient. Then the remaining part of X2, excluding the part related to x1 (the regression part), that is, the part independent of x1, is

x2 = X2 − b21x1    (4.163)
110
x x X (X
n
1j 2j
1j
j1
b21x1j) 0
2j
(4.164)
j1
Thus, x1 and x 2 become orthogonal. On the other hand, whereas x1s variance is
1, x 2s variance 22, called residual contribution, is calculated by the equation
22 1 b 221
(4.165)
The reason is that if we compute the mean square of the residuals, we obtain the
following result:
1
n
(X
2j
b21x1j)2
1
n
2
2j
2
b
n 21
X x n1 (b ) x
2
2j ij
21
2
1j
2n 2
n
b b 22l
n 2l
n
1 b 22l
(4.166)
Similarly, for X3 we remove the parts related to x1 and x2, using the coefficients

b31 = (1/(nV1)) Σ X3j x1j   (4.167)
V1 = (1/n) Σ x1j²   (4.168)
b32 = (1/(nV2)) Σ X3j x2j   (4.170)
V2 = (1/n) Σ x2j²   (4.171)

so that the residual is

x3 = X3 − b31 x1 − b32 x2   (4.169), (4.172)

In the Mahalanobis space, x1, x2, and x3 are orthogonal. Since we have already proved the orthogonality of x1 and x2, we prove here that of x1 and x3 and that of x2 and x3. First, considering the orthogonality of x1 and x3, we have

Σ x1j x3j = Σ x1j X3j − b31 Σ x1j² = 0   (4.173)

This can be derived from equation (4.167) defining b31, together with the orthogonality of x1 and x2. Similarly, the orthogonality of x2 and x3 is proved according to equation (4.170), defining b32:

Σ x2j x3j = Σ x2j X3j − b32 Σ x2j² = 0   (4.174)
For the remaining variables, we can proceed with similar calculations. The variables normalized in the preceding section can be rewritten as follows:

x1 = X1
x2 = X2 − b21 x1
x3 = X3 − b31 x1 − b32 x2
⋮   (4.175)

Their variances are

V1 = (1/n)(x11² + x12² + ⋯ + x1n²)
V2 = (1/n)(x21² + x22² + ⋯ + x2n²)
⋮
Vk = (1/n)(xk1² + xk2² + ⋯ + xkn²)   (4.176)

Now, setting the normalized, orthogonal variables to y1, y2, ... , yk, we obtain

y1 = x1/√V1
y2 = x2/√V2
⋮
yk = xk/√Vk   (4.177)
All of the normalized y1, y2, ... , yk are orthogonal and have a variance of 1. The database completed in the end contains m and σ, the mean and standard deviation of the initial observations of each item, and b21, V2, b31, b32, V3, ... , bk1, bk2, ... , bk(k−1), Vk, the coefficients and variances used for normalization. Since we select n groups of data on k items, the database holds

2k + (k − 1)k/2 + (k − 1) = k(k + 5)/2 − 1   (4.178)

values in all.
Because the yi are orthogonal with unit variance, their correlation matrix R is the identity matrix:

R = | 1 0 ⋯ 0 |
    | 0 1 ⋯ 0 |
    | ⋮     ⋮ |
    | 0 0 ⋯ 1 |   (4.179)

and its inverse is likewise the identity:

A = R⁻¹ = | 1 0 ⋯ 0 |
          | 0 1 ⋯ 0 |
          | ⋮     ⋮ |
          | 0 0 ⋯ 1 |   (4.180)

The Mahalanobis distance therefore reduces to

D² = (1/k)(y1² + y2² + ⋯ + yk²)   (4.181)
Now, setting the measured data of an arbitrary object, with m subtracted and divided by σ, to X1, X2, ... , Xk, we first compute x1, x2, ... , xk:

x1 = X1
x2 = X2 − b21 x1
x3 = X3 − b31 x1 − b32 x2
⋮
xk = Xk − bk1 x1 − ⋯ − bk(k−1) xk−1   (4.182)

and then the normalized variables

y1 = x1/√V1
y2 = x2/√V2
⋮
yk = xk/√Vk   (4.183)
When we calculate y1, y2, ... , yk and then D² in equation (4.181) for an arbitrary object, D² is supposed to take a value of about 1 if the object belongs to the Mahalanobis space, as discussed earlier. Otherwise, D² becomes much larger than 1 in most cases.
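The successive orthogonalization of equations (4.175)–(4.183) can be sketched as follows. This is an illustrative numpy implementation with names of our choosing, run on synthetic unit-space data; it checks that D² averages about 1 over the unit space:

```python
import numpy as np

def gram_schmidt_md(data):
    """Mahalanobis distance via the successive orthogonalization of
    equations (4.175)-(4.183).  data: (n, k) unit-space observations."""
    n, k = data.shape
    m, s = data.mean(axis=0), data.std(axis=0)
    X = (data - m) / s                      # mean 0, variance 1 per item
    x = np.empty_like(X)
    b = np.zeros((k, k))                    # b[i, j]: coefficient on x_j
    V = np.empty(k)
    for i in range(k):
        resid = X[:, i].copy()
        for j in range(i):                  # strip the x_1 .. x_{i-1} parts
            b[i, j] = (X[:, i] @ x[:, j]) / (n * V[j])
            resid -= b[i, j] * x[:, j]
        x[:, i] = resid
        V[i] = (resid @ resid) / n          # residual variance V_i
    y = x / np.sqrt(V)                      # orthogonal, unit-variance y
    return (y ** 2).sum(axis=1) / k         # D^2 = (y1^2 + ... + yk^2)/k

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
unit = np.hstack([base + 0.1 * rng.normal(size=(200, 1)) for _ in range(3)])
D2 = gram_schmidt_md(unit)
print(round(D2.mean(), 2))                  # averages ~1 over the unit space
```

An outlying observation scored with the same coefficients would give a D² far above 1, which is the basis of the diagnosis.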
Once the normalized orthogonal variables y1, y2, ... , yk are computed, the next step is the selection of items. The candidate subsets are

A1: only y1 is used
A2: y1 and y2 are used
⋮

For the signal values M1, M2, ... , Ml and corresponding Mahalanobis distances D1, D2, ... , Dl of Table 4.27, the total variation is

ST = D1² + D2² + ⋯ + Dl²   (f = l)   (4.184)

and the variation of the proportional term is

Sβ = (M1D1 + M2D2 + ⋯ + MlDl)²/r   (f = 1)   (4.185)

Effective divider:

r = M1² + M2² + ⋯ + Ml²   (4.186)

Error variation:

Se = ST − Sβ   (f = l − 1)   (4.187)
Table 4.27
Signal values and Mahalanobis distances

Signal-level value:    M1   M2   ⋯   Ml
Mahalanobis distance:  D1   D2   ⋯   Dl
The SN ratio is

η = 10 log [(1/r)(Sβ − Ve)/Ve],  where Ve = Se/(l − 1)   (4.188)

the slope is estimated as

β̂ = (M1D1 + M2D2 + ⋯ + MlDl)/r   (4.189)

and we estimate the distance for a signal M as

D̂ = β̂M   (4.190)
In addition, for A1, A2, ... , Ak−1, we need to compute the SN ratios η1, η2, ... , ηk−1. Table 4.28 summarizes the result. According to the table, we determine the number of items by balancing SN ratio and cost. The cost is not the calculation cost but the measurement cost for the items. We do not explain here the use of loss functions to select items.
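Equations (4.184)–(4.190) can be sketched numerically as follows (a hedged illustration; the function name and the sample signal values and distances are ours):

```python
import numpy as np

def signal_sn_ratio(M, D):
    """SN ratio of observed Mahalanobis distances D against known signal
    levels M, following equations (4.184)-(4.190)."""
    M, D = np.asarray(M, float), np.asarray(D, float)
    l = len(M)
    r = (M ** 2).sum()                 # effective divider (4.186)
    ST = (D ** 2).sum()                # total variation, f = l (4.184)
    L = (M * D).sum()
    S_beta = L ** 2 / r                # proportional term, f = 1 (4.185)
    Ve = (ST - S_beta) / (l - 1)       # error variance from (4.187)
    eta = 10 * np.log10((S_beta - Ve) / (r * Ve))   # SN ratio (4.188)
    beta = L / r                       # estimated slope (4.189)
    return eta, beta

# hypothetical signal levels and distances, roughly proportional
eta, beta = signal_sn_ratio([1, 2, 3], [1.1, 1.9, 3.2])
print(round(beta, 3))
```

A subset of items whose distances track the signal closely gives a high η; comparing η across the subsets A1, A2, ... is exactly the tabulation of Table 4.28.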
Summary of Partial MD Groups: Countermeasure for Collinearity
To solve collinearity problems, we can summarize the distances calculated from each subset of items; this approach can be widely applied. The reason is that we are free to select the type of scale to use: what matters is to select a scale that expresses patients' conditions accurately, no matter what correlations exist in the Mahalanobis space. The point is to be consistent with the patients' conditions as diagnosed by doctors.
Now let's go through a new construction method. For example, suppose that we have 0/1 data in a 64 × 64 grid for computer recognition of handwriting. If we use the 0's and 1's directly, 4096 elements exist. Thus, the Mahalanobis space formed by a unit set (suppose that we are dealing with data for n persons in terms of whether a computer can recognize a character A as A: for example, data from 200 sets in total if 50 people each write a character of four items) has a 4096 × 4096 matrix. This takes too long to handle using current computer capability. How to leave out character information is an issue of information system design. In fact, a technique for substituting only 128 data items, consisting of 64 column sums and 64 row sums, for all the data in a 64 × 64 grid has already been proposed. Since the two totals of the 64 column sums and the 64 row sums are identical, they have collinearity. Therefore, we cannot create a 128 × 128 unit space by using the relationship 64 + 64 = 128.
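A small sketch of the row-sum/column-sum reduction and its built-in collinearity (the bitmap here is random, purely for illustration):

```python
import numpy as np

# Reduce a 64x64 character bitmap (random here, purely for illustration)
# to 64 row sums plus 64 column sums: 128 features instead of 4096.
rng = np.random.default_rng(1)
grid = (rng.random((64, 64)) < 0.2).astype(int)

row_sums = grid.sum(axis=1)
col_sums = grid.sum(axis=0)
features = np.concatenate([row_sums, col_sums])

# Collinearity: both groups total the same number of dark cells, so the
# 128 features obey one exact linear relation and their correlation
# matrix is singular -- a 128 x 128 unit space cannot be formed.
print(row_sums.sum() == col_sums.sum())   # True
```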
In another method, we first create a unit space using the 64 row sums and introduce the Mahalanobis distance, and then create a unit space using the 64 column sums.

Table 4.28
SN ratio and cost for each number of items

Number of items:  A1   A2   A3   ⋯   Ak
SN ratio:         η1   η2   η3   ⋯   ηk
Cost:             C1   C2   C3   ⋯   Ck
For the three signals and ten noises of Table 4.29, the total variation is

ST = Σ Σ Dij²   (f = 30)   (4.191)

Signal effect:

SM = (D1² + D2² + D3²)/10   (f = 3)   (4.192)
VM = SM/3   (4.193)

Error variation and variance:

Se = ST − SM   (f = 27)   (4.194)
Ve = Se/27   (4.195)

By calculating the row sums and column sums separately, we compute the following SN ratio using the averages of VM and Ve:

η = 10 log [(1/10)(VM − Ve)/Ve]   (4.196)

The error variance, which is the reciprocal of the antilog value, can be computed as

σ² = 10^(−η/10)   (4.197)
The SN ratio for selection of items is calculated by setting the following two levels for each item:
Level 1: item used
Level 2: item not used
We allocate these to an orthogonal array and, by calculating the SN ratio using equation (4.196), choose the optimal items.
Table 4.29
Mahalanobis distances for signals

Signal    N1    N2    ⋯    N10     Total
M1 (D)    D11   D12   ⋯    D1,10   D1
M2 (E)    D21   D22   ⋯    D2,10   D2
M3 (R)    D31   D32   ⋯    D3,10   D3
Example
Although automobile keys are generally produced as a set of four identical keys, this production system is regarded as dynamic because each set has a different dimension and shape. Each key has approximately 9 to 11 cuts, and each cut can take several different dimensions. If there are four different dimensions with a 0.5-mm step, a key with 10 cuts has 4^10 = 1,048,576 variations.
In actuality, each key is produced such that it has a few cuts of differing dimensions. The number of key types in the market will then be approximately 10,000. Each key set is produced based on the dimensions indicated by a computer. By using a master key we can check whether a certain key is produced as indicated by the computer. Additionally, to manage a machine, we can examine particular-

The checkup and adjustment cost of the current control system per product is

B/n0 + C/u0   (4.198)

where B is the measurement cost, n0 the inspection interval, C the adjustment cost, and u0 the mean adjustment interval.
In addition, we need a monetary evaluation of the quality level according to dimensional variability. When the adjustment limit is D0 = 20 μm, the dimensions are regarded as distributed uniformly within the limit, because they are not controlled inside that range. The variance is

D0²/3   (4.199)

On the other hand, the magnitude of dispersion for an inspection interval n0 and an inspection time lag of l is

((n0 + 1)/2 + l)(D0²/u0)   (4.200)

so that the total variance is

D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)   (4.201)
Adding equations (4.198) and (4.201) to calculate the economic loss L0 for the current control system, we have 33.32 cents per product:

L0 = B/n0 + C/u0 + (A/Δ²)[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)]
   = 12/300 + 58/19,560 + (1.90/30²)[20²/3 + ((300 + 1)/2 + 50)(20²/19,560)]
   ≅ 33.32 cents   (4.202)

Here B = $12 is the measurement cost, C = $58 the adjustment cost, A = $1.90 the loss per defective, Δ = 30 μm the tolerance, and l = 50 units the time lag.
Assuming that the annual operating time is 1600 hours and production is 300 sets per hour, we can see that in the current system the following amount of money is spent annually to control quality:

(33.32)(300)(1600) ≅ $160,000   (4.203)

The standard deviation of the measurement error is

σm = 2 μm   (4.204)
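The loss computation of equation (4.202) can be reproduced with a few lines of Python (a sketch; the function name and keyword spellings are ours, while the figures are those quoted in the text):

```python
def control_loss(A, Delta, B, n, C, u, D, lag):
    """Per-product loss of a checking/adjustment scheme, eq. (4.202):
    checking cost + adjustment cost + quality loss from the spread
    inside the limit D and the drift caught late (interval n, lag)."""
    return (B / n + C / u
            + (A / Delta ** 2) * (D ** 2 / 3
                                  + ((n + 1) / 2 + lag) * D ** 2 / u))

# Figures quoted in the text: loss per defective $1.90, tolerance 30 um,
# $12 check every 300 sets, $58 adjustment per mean interval 19,560,
# limit 20 um, time lag 50 sets.
L0 = control_loss(A=1.90, Delta=30, B=12, n=300, C=58, u=19_560, D=20, lag=50)
print(round(100 * L0, 1))   # about 33.3 cents per product
```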
The optimal control system improves on the loss in equation (4.203). In short, it is equivalent to determining an optimal inspection interval n and adjustment limit D, both of which are calculated by the following formulas:

n = √(2u0B/A) (Δ/D0) = √((2)(19,560)(12)/1.90) (30/20) = 745 → 600 (once every two hours)   (4.205)

D = [(3C/A)(D0²Δ²/u0)]^(1/4) = [((3)(58)/1.90)((20²)(30²)/19,560)]^(1/4) = 6.4 → 7.0 μm   (4.206)–(4.209)
Then, by setting the optimal measurement interval to 600 sets and the adjustment limit to 7.0 μm, we can reduce the loss L:

L = B/n + C/u + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/u)]   (4.210)

Under the tighter limit, the mean adjustment interval becomes

u = u0(D²/D0²) = (19,560)(7²/20²) = 2396   (4.211)

Therefore, we need to change the adjustment interval from the current level of 65 hours to 8 hours. However, the total loss L decreases as follows:

L = 12/600 + 58/2396 + (1.90/30²)[7²/3 + ((600 + 1)/2 + 50)(7²/2396)]
  ≅ 9.4 cents per product   (4.212)–(4.214)
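The optimal interval and limit of equations (4.205)–(4.211) can be checked numerically (an illustrative sketch with our own variable names):

```python
import math

# Current conditions quoted in the text (costs in dollars, lengths in um).
A, Delta, B, C = 1.90, 30, 12, 58
u0, D0 = 19_560, 20

# Optimal checking interval, eq. (4.205); the text rounds it to 600 sets.
n_opt = math.sqrt(2 * u0 * B / A) * (Delta / D0)
# Optimal adjustment limit, eqs. (4.206)-(4.209); rounded up to 7 um.
D_opt = (3 * C / A * D0 ** 2 * Delta ** 2 / u0) ** 0.25
# Mean adjustment interval predicted under the 7-um limit, eq. (4.211).
u_new = u0 * 7 ** 2 / D0 ** 2

print(round(n_opt), round(D_opt, 1), round(u_new))
```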
Management in Manufacturing
The procedure described in the preceding section is a technique for solving a balancing equation of checkup and adjustment costs against the necessary quality level, leading finally to the optimal allocation of operators in a production plant. Now, given that checkup and adjustment take 10 and 30 minutes, respectively, the work time required per 8-hour shift under the optimal system is

(10 min)(daily number of inspections) + (30 min)(daily number of adjustments)
  = (10)((8)(300)/n) + (30)((8)(300)/u)   (4.215)
  = (10)(2400/600) + (30)(2400/2396)
  = 40 + 30.1 = 70 minutes   (4.216)
Assuming that one shift has a duration of 8 hours,

70/((8)(60)) = 0.146 worker   (4.217), (4.218)

is needed, which implies that one operator is sufficient. Compared with the time required currently,

(10)((8)(300)/300) + (30)((8)(300)/19,560) = 80 + 3.6 = 83.6 minutes   (4.219)

or 83.6/((8)(60)) = 0.174 worker, we can reduce the number of workers by only 0.028 in this process. On the other hand, to compute the process capability index Cp, we estimate the current standard deviation σ0 using the standard deviation of the measurement error, σm:
σ0 = √(D0²/3 + ((n0 + 1)/2 + l)(D0²/u0) + σm²)   (4.220)
   = √(20²/3 + ((300 + 1)/2 + 50)(20²/19,560) + 2²)
   = 11.9 μm   (4.221)
Cp = 2Δ/(6σ0) = (2)(30)/((6)(11.9)) = 0.84   (4.222)

Under the optimal system the standard deviation becomes

σ = √(7²/3 + ((600 + 1)/2 + 50)(7²/2396) + 2²) = 5.24 μm   (4.223)

Then we can not only reduce the required workforce by 0.028 worker (to 0.84 times the current level) but also enhance the process capability index Cp from the current level to

Cp = (30)(2)/((6)(5.24)) = 1.91   (4.224)
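The standard-deviation estimate of equation (4.220) and the resulting Cp values can be sketched as follows (the function name is ours):

```python
import math

def process_sigma(D, n, u, lag, sigma_m):
    """Overall standard deviation, eq. (4.220): uniform spread within the
    limit D, drift between checks, and measurement error sigma_m."""
    return math.sqrt(D ** 2 / 3
                     + ((n + 1) / 2 + lag) * D ** 2 / u
                     + sigma_m ** 2)

Delta = 30
before = process_sigma(D=20, n=300, u=19_560, lag=50, sigma_m=2)
after = process_sigma(D=7, n=600, u=2_396, lag=50, sigma_m=2)
Cp_before = 2 * Delta / (6 * before)     # eq. (4.222)
Cp_after = 2 * Delta / (6 * after)       # eq. (4.224)
print(round(before, 1), round(after, 2), round(Cp_before, 2), round(Cp_after, 2))
```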
Compared with a factor of 3, a factor of 1.65 implies that we can lower the retail price to 0.55 times its original level. This estimation is grounded on the assumption that if we sell a product at 45% off the original price, the sales volume doubles. It also holds when we offer new jobs to half the workers, with the sales volume remaining at the same level. Now, provided that the mean adjustment interval decreases to one-fourth due to increased variability after the production speed is raised, how does the cost eventually change? When the frequency of machine breakdowns quadruples, u decreases from its current level of 2396 to 599, one-fourth of 2396. Therefore, we alter the current levels of u0 and D0 to the values u0 = 599 and D0 = 7 μm. Taking these into account, we reconsider the loss function. The cost is cut 0.6-fold and the production volume is doubled. In general, as production conditions
change, the optimal inspection interval and adjustment limit also change. Thus, we need to recalculate n and D here. Substituting 0.6A for A in the formulas (and doubling the time lag to 2l = 100 units because the production speed doubles), we obtain

n = √(2u0B/0.6A) (Δ/D) = √((2)(599)(12)/((0.6)(1.90))) (30/7) = 481 → 600 (once an hour)   (4.226)

D = [((3)(58)/((0.6)(1.90)))((7²)(30²)/599)]^(1/4) = 10.3 → 10.0 μm   (4.227)

u = (599)(10²/7²) = 1222   (4.228)

L = B/n + C/u + (0.6A/Δ²)[D²/3 + ((n + 1)/2 + 2l)(D²/u)]
  = 12/600 + 58/1222 + ((0.6)(1.90)/30²)[10²/3 + ((600 + 1)/2 + 100)(10²/1222)]
  ≅ 15.1 cents per product   (4.229)–(4.231)
Suppose that the production volume remains the same, (300)(1600) = 480,000 sets. We can then save the following amount of money on an annual basis:

(1.99 − 1.29)(480,000) ≅ $336,000   (4.232)
References
1. Genichi Taguchi, 1987. System of Experimental Design. Dearborn, Michigan: Unipub/American Supplier Institute.
2. Genichi Taguchi, 1984. Reliability Design Case Studies for New Product Development. Tokyo: Japanese Standards Association.
3. Genichi Taguchi et al., 1992. Technology Development for Electronic and Electric Industries. Quality Engineering Application Series. Tokyo: Japanese Standards Association.
4. Measurement Management Simplification Study Committee, 1984. Parameter Design for New Product Development. Tokyo: Japanese Standards Association.
5. Genichi Taguchi et al., 1989. Quality Engineering Series, Vol. 2. Tokyo: Japanese Standards Association.
6. Tatsuji Kenetaka, 1987. An application of Mahalanobis distance. Standardization and Quality Control, Vol. 40, No. 10.
Part II

Quality Engineering: A Historical Perspective
Development of Quality Engineering in Japan
Figure 5.1: Relationship between steering angle and turning radius
the majority of SN ratio applications. In some cases the main effect of a signal factor whose true value was not known was analyzed using SM instead of Sβ.
Although this Isuzu experiment is considered a monumental achievement in research on the dynamic SN ratio, shortly after this experiment Isuzu stopped utilizing the quality engineering technique in order to obtain the Deming Prize, because the panel of judges disliked the technique. It is well known that afterward, Isuzu became affiliated with General Motors. Some authors, including Soya Tokumaru [31, 32], harshly criticized this symbolic incident as doing harm to a specific company and to the Japanese concept of total quality control. In contrast, Tokumaru highly praised the Taguchi method.
Since 1975, a correspondence course on the SN ratio has been available from the Japanese Standards Association. The primary field in which the SN ratio was utilized in Japan at that time was quality control; therefore, the course was targeted at metrology and quality control engineers. When Hiroshi Yano began to work at the National Research Laboratory of Metrology of the Agency of Industrial Science and Technology in the Ministry of International Trade and Industry (currently known as the National Research Laboratory of Metrology of the National Institute of Advanced Industrial Science and Technology in the Ministry of Economy, Trade and Industry), he attempted to introduce the SN ratio in the field of metrology.
Yano studied under Genichi Taguchi as a visiting researcher at the laboratory between 1972 and 1976 [33].
The SN ratio of formability was applied by Yano to the evaluation of plastic
injection molding at that time and led to transformability in 1989 [34] after almost
a 15-year lapse. On the other hand, adaptation to the cutting process led to the
current evaluation of machining based on electric power [35]. This change took
20 years to accomplish.
Uses of the SN ratio before and after the 1970s were related to test, analysis, and measurement. Around 1975, when this was well established, quality engineering began to be applied to design and production engineering for the purpose of using dynamic characteristics. According to the latest classification, the SN ratio used for measurement is categorized as a passive dynamic SN ratio, and the ratio used for design and machining is seen as an active dynamic ratio. Using it as an
Figure 5.2: Quality characteristic distribution of Japanese and U.S. products

Figure 5.3: Depth of cut and amount removed
In 1981, the subcommittee on the machining-precision SN ratio was organized in the Japan Society for Seikigakkai (currently known as the Japan Society for Precision Engineering), with Hidehiko Takeyama as chairman and Hiroshi Yano as chief secretary, and research on the application of quality engineering to various machining technologies began, conducted primarily by local public research institutes.
These research results were reported in the research forum of the Japan Society for Seikigakkai and in the technical journals Machine Technology, published by Nikkan Kogyo Shimbun, and Machine and Tool, published by Kogyo Chosakai. Finally, a round-table talk titled "Expected Evaluation Method Based on SN Ratio Applicable to Design and Prototyping Such as Machine Tools" was reported in Machine Technology in August 1986 [11]. The participants were four well-known experts, each in his own field: Hidehiko Takeyama (Tokyo University of Agriculture and Technology), Sadao Moritomo (Seiko Seiki), Katsumi Miyamoto (Tokyo Seimitsu), and Yoshito Uehara (Toshiba Tungaloy), together with Hiroshi Yano, whose job it was to explain quality engineering.
Following are the topics discussed by the round table:
1. SN ratio (for evaluation of signal and noise effects)
2. Error-ridden experiments (regarded as more reliable)
3. Importance of successful selection of control factors
4. New product development and design of experiments
5. Possible applicability to CAD systems
6. Effectiveness in evaluation of prototypes
7. Possible applicability of method to tools
8. High value of method when used properly
9. Significance of accumulation of data
Although the phrase design of experiments was used, all of the content was sufficiently well understood. However, there were some questions about why it takes time for the SN ratio to be put to practical use.
When depth of cut is selected as the input, the zero point becomes unclear. To clarify this, an experiment on electrode machining using a machining center was conducted by Sanjo Seiki in 1988. In this case, as illustrated in Figure 5.4, they chose the dimensions indicated by the machining center as signal factors and the machined dimensions as output. This concept is considered the origin of transformability [39]. In 1987 zero-point proportionality displaced the linear equation, and thereafter, in the field of machining, zero-point proportionality has been in common use.
In 1988 a study subcommittee of the Advanced Machining Tool Technology Promotion Association encouraged application of quality engineering to the mechanical engineering field [15], and a quality engineering project by the IMS in 1991 accelerated it further. When an experiment implemented by Kimihiro Wakabayashi of Fuji Xerox was published in the journal Quality Engineering in 1995 [19], Genichi Taguchi, the chairman of the judging group, advised them to calculate the relationships between time and electric power and between work and electric power. An experiment that reflected his advice was conducted at the University of Electro-Communications in 1996 [40], and later Ishikawajima-Harima Heavy Industries [41] and Nachi-Fujikoshi Corp. [42] continued the experiment.
Setting electric power y as output, and time T and work M as signal factors, we define the following generic functions:

y = β1T   (β1: larger is better)   (5.1)
y = β2M   (β2: smaller is better)   (5.2)

Since electric power is energy, considering that each equation is expressed in terms of energy, we take the square root of both sides of each equation. If β2 decreases, we can obtain a great deal of work with a small amount of electric power; that is, we can improve energy efficiency. Converting into a relationship between T and M, we obtain

M = (β1/β2)T   (5.3)
As a result, the time efficiency can be improved. Looking at the relationship between T and y, we see that the momentary electric power is high. This idea can be applied to any system that makes use of energy conversion.
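A minimal sketch of fitting the two generic functions and forming the ratio of equation (5.3) (the data here are invented purely for illustration):

```python
import numpy as np

# Invented illustration data: electric power y against cutting time T and
# against work M, fitted to the zero-point proportional forms of
# equations (5.1) and (5.2).
T = np.array([1.0, 2.0, 3.0, 4.0])
M = np.array([5.0, 9.0, 16.0, 19.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

beta1 = (T @ y) / (T @ T)     # least-squares slope through the origin
beta2 = (M @ y) / (M @ M)
# Eliminating y gives M = (beta1/beta2) T, equation (5.3): a large beta1
# with a small beta2 means much work per unit of electric power.
print(round(beta1 / beta2, 2))
```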
Figure 5.4: Depth and amount of cut (1978)
Figure 5.5: Relationship between instructive values and amount of cut (1980)
Figure 5.6: Evaluation by electric power (idling; cutting runs 1, 2, and 4)

Figure 5.7: Power changes of cutting run 1
For signal-factor levels M*1, M*2, ... , M*k whose true values are unknown, the data averages m1, m2, ... , mk (the outputs y01, y02, ... , y0k under the standard condition) are used as the signal values M1, M2, ... , Mk. For two repetitions, each giving one linear equation, the decomposition is as follows.

Total variation:
ST = y11² + y12² + ⋯ + y1k² + y21² + ⋯ + y2k²   (f = 2k)   (5.4)

Variation of the proportional term:
Sβ = (L1 + L2)²/(2r)   (f = 1)   (5.5)

Effective divider:
r = M1² + M2² + ⋯ + Mk²   (5.6)

Linear equations:
L1 = M1y11 + M2y12 + ⋯ + Mky1k
L2 = M1y21 + M2y22 + ⋯ + Mky2k   (5.7)

Variation of the difference between repetitions:
SN = (L1 − L2)²/(2r)   (f = 1)   (5.8)

Error variation:
Se = ST − Sβ − SN   (f = 2k − 2)   (5.9)

Error variance:
Ve = Se/(2(k − 1))   (5.10)

Noise variance:
VN = (SN + Se)/(2k − 1)   (5.11)

SN ratio:
η = 10 log [(1/2r)(Sβ − Ve)/VN]   (5.12)
The sensitivity is

S = 10 log [(1/2r)(Sβ − Ve)]   (5.13)

First, we select control factor levels to maximize the SN ratio; then we make an adjustment to match the sensitivity with the target.
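A sketch of the SN-ratio decomposition of equations (5.4)–(5.12) for two repetitions (the function name and sample data are ours):

```python
import numpy as np

def dynamic_sn_two_repeats(M, Y):
    """SN ratio for two repetitions of a zero-point proportional system,
    following the decomposition in equations (5.4)-(5.12)."""
    M, Y = np.asarray(M, float), np.asarray(Y, float)
    k = len(M)
    r = (M ** 2).sum()                     # effective divider (5.6)
    L1, L2 = Y @ M                         # linear equations (5.7)
    ST = (Y ** 2).sum()                    # total variation, f = 2k (5.4)
    S_beta = (L1 + L2) ** 2 / (2 * r)      # proportional term, f = 1 (5.5)
    S_N = (L1 - L2) ** 2 / (2 * r)         # repetition difference, f = 1 (5.8)
    Se = ST - S_beta - S_N                 # error variation, f = 2k - 2 (5.9)
    Ve = Se / (2 * (k - 1))                # error variance (5.10)
    VN = (S_N + Se) / (2 * k - 1)          # noise variance (5.11)
    return 10 * np.log10((S_beta - Ve) / (2 * r * VN))   # SN ratio (5.12)

M = [1.0, 2.0, 3.0]
Y = [[1.05, 2.10, 2.95],
     [0.95, 1.90, 3.05]]
eta = dynamic_sn_two_repeats(M, Y)
print(round(eta, 1))
```

Small disagreement between the two repetitions inflates VN and lowers η, which is exactly how the decomposition penalizes instability.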
The standard SN ratio, the name later given to the SN ratio based on this idea, was proposed in the chapter on reliability testing in the third edition of Design of Experiments, Vol. 2 [4]. As the SN ratio of the relationship between on-off output and input in the quality engineering of a digital system, the idea is explained on the following pages. [See the calculations preceding equation (5.4) and the explanation that follows.]
The standardized error rate is

p0 = 1/(1 + √[(1/p − 1)(1/q − 1)])   (5.14)

Assuming the standardized error rate above, we obtain Table 5.1 as the input/output table. The contribution for this case is called the standard degree of contribution, which is calculated as

ρ0 = [(1 − p0) − p0]² = (1 − 2p0)²   (5.15)

and the corresponding SN ratio is

η0 = 10 log [ρ0/(1 − ρ0)]   (5.16)
This SN ratio, calculated from the standardized error rate after leveling, is called the standard SN ratio.
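Equations (5.14)–(5.16) in a few lines (an illustrative sketch; the names are ours):

```python
import math

def standard_sn_ratio(p, q):
    """Standardized error rate and standard SN ratio of a two-class
    (on-off) system, equations (5.14)-(5.16).  p and q are the error
    rates for inputs 0 and 1."""
    p0 = 1 / (1 + math.sqrt((1 / p - 1) * (1 / q - 1)))   # (5.14)
    rho0 = (1 - 2 * p0) ** 2                              # (5.15)
    eta0 = 10 * math.log10(rho0 / (1 - rho0))             # (5.16)
    return p0, rho0, eta0

p0, rho0, eta0 = standard_sn_ratio(p=0.1, q=0.1)
print(round(p0, 3), round(rho0, 2))   # symmetric case: p0 = 0.1, rho0 = 0.64
```

In the symmetric case p = q the standardized rate reduces to the common error rate; leveling only matters when the two error rates differ.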
The standard SN ratio above is constructed with onoff (or digital) data. The
new SN ratio, which is the same concept but is constructed with continuous variables, is still called the standard SN ratio. Genichi Taguchi considers the standard
Table 5.1
Standardized input/output table

Input    Output 0    Output 1    Total
0        1 − p0      p0          1
1        p0          1 − p0      1
SN ratio with continuous variables to be the quality engineering of the twenty-first century.
The key is to stabilize variability first and then tune the output to the target. This is the philosophy of two-stage optimization, which the new standard SN ratio attempts to carry out thoroughly. This idea is regarded as the basis of quality engineering and a clue to its future development; that is, it will be taken advantage of in all fields.
We can see that each of the various problems considered in the initial phase of the design of experiments is being investigated through the succeeding research. In actuality, we are so occupied with practical studies arising from the evolution of the design of experiments that we do not have enough time to deliberate on each of the issues originally raised. Therefore, even if each problem posed at a given time seems to stand alone, all of them are continuously interrelated.
5.11. Summary
Most of Genichi Taguchi's achievements have been announced in the technical journal Standardization and Quality Control, published by the Japanese Standards Association, which is not a commercial magazine but an organ of the foundation. The journal's contents are often related to standardization (JIS) and quality control or statistical methods. Thus, even if a certain report focused on quality engineering, its title ended with the words "for quality management" until the 1980s, which meant that Taguchi's achievements appeared in the organ only because they were in the field of quality control. In the late 1980s the uniqueness of quality engineering as distinct from quality control was finally accepted. Since 1986, the journal has regularly featured quality engineering in special articles and has sometimes covered activities in the United States. The insight shown by the Japanese Standards Association, which continued to use his reports as articles, cannot be overlooked.
In general, these reports were regarded as academic achievements that would ordinarily appear in academic journals. The reason that his reports were instead systematized in Standardization and Quality Management is that each academic discipline in Japan is usually focused on achievements in a specialized field. The universal concept of quality engineering therefore did not fit a journal dealing with a specific discipline.
Quality engineering as formed in the foregoing context took full shape after 1990 and led to the establishment of the Quality Engineering Society. The theme of quality engineering in the 1990s was its proposal of a new perspective in each technological field, which began with a mention in the publication Quality Engineering for Technological Development [17].
When we look back at the history of quality engineering, we notice that a new proposal is initiated every time an achievement is accomplished; the field is not static. So what is the newest proposal? It is what quality engineering positions as functionality evaluation, or technological development methodology, as a managerial issue. Unfortunately, one problem is that the majority of those who attempt to comprehend and promote quality engineering are tactical rather than strategic people. Thus, functionality evaluation has often been seen not as a strategy but as a tactic.
Through an annual symposium held at alternating locations, technical exchange between the two associations was facilitated.
In the research forum, lectures based on Genichi Taguchi's books were given, and practical applications were discussed by participants from many corporations. In 1989, the QCRG was renamed the Quality Engineering Research Group (QRG). In line with these research forums, the two associations held a seminar on the design of experiments (a 20-day course) led by Genichi Taguchi, which is considered to have contributed a great deal to promoting the design of experiments and quality engineering. This corresponds to the expert course offered by ASI in the United States.
Although the design of experiments seminar was extremely well received at the Japanese Standards Association, Genichi Taguchi changed its content as the design of experiments developed into quality engineering. Although it was not always well understood by many, it is still highly valued.
After his success at Bell Labs renewed public awareness of quality engineering, its evolution not only gave rise to enthusiastic believers but also kept many people away because it seemed overly complicated. Its successful application by Fuji Xerox triggered technological competition based on quality engineering among Japanese photocopy manufacturers. The peak of this is recognized to be the panel discussion "Competition on Quality Engineering Applications Among Copy Machine and Printer Manufacturers" at the 7th Annual Symposium of the Quality Engineering Society in 1999.
As with the standard SN ratio in simulation, quality engineering has many methods that are not taken advantage of fully. The Technical Committee of the Quality
Engineering Society has planned to sponsor seminars on important problems since
2001.
Now we close with two remarkable points. One is management by total results, an idea created by Genichi Taguchi whereby any achievement can be evaluated based on the extent of the loss given to downstream departments (the degree of trouble) when the tasks of each department are interrelated. In fact, it seems that this idea, developed by Genichi Taguchi in the late 1960s, was put forward too early. Now, however, is considered an excellent time to make the most of it, as more people have become receptive to it.
The other point is experimental regression analysis, which is related to the analysis of business data. Although this supplements regression analysis from a methodological viewpoint, it is considered a good example of encouraging engineers to be aware of a way of thinking grounded in technologies. Unfortunately, technical experts do not yet make full use of this method; it is hoped that it will be utilized more widely in the future.
Since the Taguchi methods are effective in elucidating phenomena as well as in improving technological analyses, it is expected to take much longer for these methods to move from the technological field further into the scientific world.
It is roughly estimated that the number of engineers conducting quality engineering–related experiments is several tens of times the current 2000 members of the Quality Engineering Society. We hope the trend will continue. In Chapter 6 we cover the introduction of the Taguchi methods in the United States.
References
1. Genichi Taguchi, 1999. My Way of Thinking. Tokyo: Keizaikai.
2. Genichi Taguchi, 1965. Issue of adjustment. Standardization and Quality Management, Vol. 18, No. 10.
3. Hitoshi Kume, Akira Takeuchi and Genichi Taguchi, 1986. QC by Taguchi method.
Quality, Vol. 16, No. 2.
4. Genichi Taguchi, 1957–1958. Design of Experiments, Vols. 1 and 2. Tokyo: Maruzen.
5. Shigeru Mizuno, Genichi Taguchi, Shin Miura, et al., 1959. Production plant experiments based on orthogonal array. Quality Management, Vol. 10, No. 9.
6. Genichi Taguchi, 1966. Statistical Analysis. Tokyo: Maruzen.
7. Shozo Konishi, 1972. SN Ratio Manual. Tokyo: Japanese Standards Association.
8. Genichi Taguchi, et al., 1975. Evaluation and Management of Measurement. Tokyo:
Japanese Standards Association.
9. L. E. Katz and M. S. Phadke, 1985. Macro-quality with macro-money. Record.
10. Genichi Taguchi, 1984. Parameter Design for New Product Development. Tokyo: Japanese
Standards Association.
11. Hidehiko Takeyama, 1987. Evaluation of SN ratio in machining. Machine Technology,
Vol. 3, No. 9.
12. Hiroshi Yano, 1995. Introduction to Quality Engineering. Tokyo: Japanese Standards
Association.
13. Genichi Taguchi, 1985–1986. Design of experiments for quality engineering. Precision Machine, Vol. 51, No. 4 to Vol. 52, No. 3.
14. Genichi Taguchi, 1988–1990. Quality Engineering Series, Vols. 1 to 7. Tokyo: Japanese
Standards Association.
15. Study Subcommittee for Quality Design System in Next-Generation Production System, 1989. Research on Production Machining System Using Analytic Evaluation,
Measurement, and Inspection. Advanced Machining Tool Technology Promotion
Association.
16. Ikuo Baba, 1992. Technological Development of Transformability. Tokyo: Japanese Standards Association.
17. Genichi Taguchi, 1994. Quality Engineering for Technological Development. Tokyo: Japanese Standards Association.
18. Genichi Taguchi, 1993. Inaugural speech as a chairman. Quality Engineering, Vol. 1,
No. 1.
19. Kimihiro Wakabayashi, Yoshio Suzuki, et al., 1995. Study of precision machining by
MEEC/IP cutting. Quality Engineering, Vol. 3, No. 6.
20. Genichi Taguchi, 1996. What does quality engineering think? Machine Technology,
Vol. 44, No. 1.
21. Katsuichiro Hata, 1997. New JIS and ISO based on quality engineering concept.
Standardization and Quality Management, Vol. 50, No. 5.
22. Genichi Taguchi, 1998. Memorial lecture: quality engineering technology for 21st
century. Quality Engineering, Vol. 6, No. 4.
23. Taguchi Award Planning Subcommittee. 1999. Prescription of Quality Engineering
Society Taguchi Award (draft). Quality Engineering, Vol. 7, No. 4.
24. Akimasa Kume, 1999. Quality Engineering in Chemistry, Pharmacy, and Biology. Tokyo:
Japanese Standards Association.
25. Kazuhiko Hara, 2000. Technological Development in Electric and Electronic Engineering.
Tokyo: Japanese Standards Association.
26. Genichi Taguchi, 2001. Robust design by simulation: standard SN ratio. Quality Engineering, Vol. 9, No. 2.
27. Genichi Taguchi, 2001. Quality engineering for the 21st century. Collection of Research Papers for the 91st Quality Engineering Research Forum.
28. Hiroshi Yano, 2001. Technological Development in Machinery, Material, and Machining.
Tokyo: Japanese Standards Association.
29. Genichi Taguchi, and Hiroshi Yano. 1994. Management of Quality Engineering. Tokyo:
Japanese Standards Association.
30. Takeshi Inao, 1978. Vehicle drivability evaluation based on SN ratio. Design of Experiments Symposium 78. Tokyo: Japanese Standards Association.
31. Soya Tokumaru, 1999. Ups and Downs of Japanese-Style Business Administration. Tokyo:
Diamond.
32. Soya Tokumaru, 2001. Novel: Deming Prize. Tokyo: Toyo Keizai Inc.
33. Taguchi Genichi, Tokyo: Coronasha.
34. Hiroshi Yano, 1976. Technical trial for evaluation of measurement and moldability
in plastic injection molding. Plastics, Vol. 22, No. 1.
35. Genichi Taguchi, 1967. Process Adjustment. Tokyo: Japanese Standards Association.
36. Genichi Taguchi, 1995. Reduce CEOs cost. Standardization and Quality Management,
Vol. 48, No. 6.
37. Teizo Toishige, and Hiroshi Yano, 1978. Consideration of cutting error evaluation
by dynamic SN ratio. Collection of Lectures at Seikigakkai Spring Forum.
38. Keiko Yamamura, and Tsukako Kuroiwa, 1990. Parameter design of electrode machining using machining center. Plastics, Vol. 36, No. 12.
39. Kazuhito, Takahashi, Hiroshi Yano, et al, 1997. Study of dynamic SN Ratio in Cutting, Quality Engineering, Vol. 5, No. 3.
40. Kenji Fujioto, Kiyoshi Fujikake, et al., 1998. Optimization of Ti high-speed cutting.
Quality Engineering Research Forum.
41. Masaki Toda, Shinichi Kazashi, et al., 1999. Research on technological development
and productivity improvement of drilling by machining energy. Quality Engineering,
Vol. 7, No. 1.
42. Kazuhito Takahashi, Hiroshi Yano, et al., 1999. Optimization of cutting material
using electric power. Quality Engineering, Vol. 8, No. 1.
43. Genichi Taguchi, 2000. Robust Design and Functionality Evaluation. Tokyo: Japanese
Standards Association.
44. Tsuguo Tamamura, and Hiroshi Yano, 2001. Research on functionality evaluation
of spindle. Quality Engineering, Vol. 8, No. 3.
45. Hiroshi Yano, 1987. Precision machining and metrology for plastics. Polybile, Vol.
24, No. 7.
46. Susumu Sakano, 1994. Technological Development of Semiconductor Manufacturing. Tokyo: Japanese Standards Association.
47. Genichi Taguchi, 1994. Issues surrounding quality engineering (1). Quality Engineering, Vol. 2, No. 3.
48. Koya Yano, 2001. Quality Engineering, Vol. 9, No. 3.
49. Genichi Taguchi, 1995. Total evaluation and SN ratio using multi-dimensional information, and Design of multi-dimensional machining system. Quality Engineering,
Vol. 3, No. 1.
151
152
6
History of Taguchi's Quality Engineering in the United States

6.1. Introduction
Bell Laboratories
Ford Motor Company
Xerox
6.1. Introduction
Genichi Taguchi was born in 1924 in Tokamachi, Niigata prefecture, 120 miles north of Tokyo. His father died when he was 10 years old, leaving him with a hard-working mother, Fusaji, and three younger brothers. Tokamachi has been famous for centuries for the production of Japanese kimonos. Just about every household had a family business with a weaving machine to produce kimonos. As the oldest boy in the family, Taguchi's intention was to succeed to the family business of kimono production.
Genichi's grandmother, Tomi Kokai, was known for her knowledge of how to
raise silkworms to produce the high-quality silk used in kimono production. In the
old days in Japan, it was very unusual for a woman to be a well-respected technical
consultant. It is interesting that her grandson later became one of the most successful consultants in product development, traveling internationally.
Genichi attended Kiryu Technical College, majoring in textile engineering. However, by the time he returned to his hometown in 1943, the Japanese government had requisitioned all the weaving machines in Tokamachi for use in weapons production.
After the war, Taguchi joined the Japanese Ministry of Public Health and Welfare, where he worked on collecting data and conducting analysis. While working there, he encountered M. Masuyama of the University of Tokyo, the author whose book on statistics he had studied during the war. Masuyama guided him in developing his knowledge of statistics further and sent Taguchi to industries as a consultant in his place. In 1948, Taguchi became a member of the Institute of Statistical Mathematics, part of the Ministry of Education. By the age of 26, Taguchi had, by applying statistical methods, generated numerous successful applications.

In 1950, the newly established Electrical Communication Laboratory (ECL) of Nippon Telegraph and Telephone acquired Taguchi's services. He was assigned to a section responsible for assisting design engineers in putting together test plans and conducting data analysis.
For whatever reason, ECL executives decided to compete with Bell Laboratories in developing a crossbar switching system, the state-of-the-art telecommunication system at that time. Table 6.1 compares ECL and Bell Laboratories. In 1957, ECL completed development, meeting all the very difficult requirements, before Bell Labs, and ECL won the contract.

This was the first success story by a Japanese company in product development following World War II. Of course, we cannot conclude that this success was due entirely to Taguchi's ideas. However, during those six years, he developed and applied his industrial design of experiments. He developed not only the efficient use of such tools as orthogonal arrays and linear graphs but also strategies for product and process optimization, which became the essence of what is today known as robust design.
Table 6.1
ECL and Bell Labs compared

                  AT&T Bell Labs    NTT ECL
Budget            50                Base
No. of people
No. of years
Result                              Superior
In 1963, Taguchi went to the United States for the first time and spent one year at Princeton University and Bell Labs. He faced a culture shock but fell in love with the American way of life and thinking, especially its rational, pragmatic, and entrepreneurial spirit. During that time, his work concentrated on the SN ratio for digital systems.

After he returned from the United States, he was invited by Aoyama Gakuin University to serve as a professor in its newly established engineering college. He taught industrial and management engineering for 17 years, until his retirement in 1981. During that time, he also lectured at other universities, including the University of Tokyo and Ochanomizu University.
Taguchi leads the Quality Control Research Group (QRG) at CJQCA, the Central Japan Quality Control Association. The organization includes Toyota Motor Company, its group companies, and many others. The group has been meeting monthly since the 1950s. He has led a similar research group at the Japanese Standards Association (JSA) in the Tokyo area since 1961. This group has also been meeting monthly since its start.

While teaching at Aoyama, Taguchi continued to teach seminars at the Japanese Union of Scientists and Engineers (JUSE), CJQCA, and JSA and to consult with numerous companies from various industries.
Princeton University, Bell Laboratories, and Aoyama Gakuin University

Bell Laboratories
In 1980, Taguchi wrote a letter to Bell Labs indicating that he was interested in visiting while on sabbatical. He visited at his own expense and lectured on his methodology to those who were involved with design of experiments. These people were interested but still skeptical. They included Madhav Phadke, Raghu Kackar, Anne Shoemaker, and Vijayan Nair, who had been students of the famous statistician George Box at the University of Wisconsin or of Stu Hunter at Princeton.

Taguchi asked them to bring him the most difficult technical problem they were facing. They brought the problem of 256K photolithography. The process was to generate over 150,000 windows on a ceramic chip. The window size had to meet a requirement of 3.00 ± 0.25 µm. The process yielded only 33% after several years of research. Taguchi designed an experiment with nine process parameters assigned to an L18 inner array and measured the window size as the response. Noise factors in the outer array were simply positions between chips on a wafer and within chips. The response was treated as a nominal-the-best response, and the now-famous two-step optimization was applied. The yield was thus improved to an amazing 87%. People at Bell Labs were impressed and asked him to keep it secret for a few years, although the study was published in the May 1983 issue of the Bell System Technical Journal. Taguchi continued to visit Bell Labs frequently until its huge reorganization later in the 1980s.
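The nominal-the-best analysis described above can be illustrated numerically. The sketch below uses made-up window-size readings, not the published Bell Labs data, and one common form of the nominal-the-best S/N ratio, 10·log10(mean²/variance); it shows only the logic of the two steps, not the real L18 analysis.

```python
import math
import statistics

def sn_nominal_the_best(values):
    """Nominal-the-best S/N ratio in dB (one common form: 10*log10(mean^2/variance))."""
    mean = statistics.mean(values)
    var = statistics.variance(values)
    return 10 * math.log10(mean * mean / var)

# Hypothetical window-size readings (micrometers) for two process settings,
# each measured under the noise conditions (positions between and within chips).
setting_a = [2.71, 3.32, 2.60, 3.41, 2.95, 3.05]  # wide scatter, near target
setting_b = [3.11, 3.19, 3.08, 3.22, 3.14, 3.10]  # tight scatter, off target

for name, data in (("A", setting_a), ("B", setting_b)):
    print(name, round(sn_nominal_the_best(data), 1), "dB, mean",
          round(statistics.mean(data), 2))

# Two-step optimization: step 1 keeps the setting with the higher S/N (B here,
# even though its mean is off target); step 2 uses an adjustment factor to move
# the mean from 3.14 to the 3.00 um target without degrading the S/N.
```

The point of the two steps is visible in the numbers: setting B wins on S/N despite missing the target, because re-centering the mean is the easy part.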
Ford Motor Company

In 1979, W. Edwards Deming was featured in a 90-minute NBC White Paper TV program titled "If Japan Can, Why Can't We?" Deming was introduced as the father of quality in Japan. In 1950, because of his statistical competence, Deming had visited Japan to help develop a national census. During his visit, JUSE asked Deming to conduct seminars with Japanese top executives. He preached to Japanese industrial leaders the importance of statistical quality control (SQC) if the Japanese were interested in catching up with Western nations. Because Japanese top executives were good students and had nothing to lose, they practiced what Deming preached. Later, JUSE established the Deming Prizes for industries and individuals. Based on his teaching, Japan developed unique total quality management systems, together with various practical statistical tools, including Taguchi methods.
In the United States, it became apparent in the late 1970s that made-in-Japan automobiles were starting to eat into the market share of the U.S. Big 3, starting with markets in California. U.S. industries were puzzled as to how the Japanese could design and produce such high-quality products at such low cost. Then the TV program featuring Deming was broadcast. Ford's CEO Donald Petersen hired Deming and started Ford's movement called Quality Is Job 1 under Deming's guidance. Deming suggested that Ford establish the Ford Supplier Institute (FSI) to train and help its suppliers in Deming's philosophy and in statistical tools such as Shewhart's process control chart. FSI conducted seminars, and thousands of Ford and supplier people were trained in SPC and Deming's quality management philosophy. Deming also suggested that they send missions to Japan to study what was being practiced there.
During the mission in spring 1982, Ford and supplier executives visited Nippon Denso. They found that Denso's engineering standards, in their thick blue binders, were of very high quality and of a type they had never seen. They asked how such excellent technical knowledge had been developed as part of corporate memory. Denso mentioned various statistical techniques and emphasized Taguchi's design of experiments for product/process optimization. Taguchi happened to be consulting at Denso that day. The leader of the mission, Larry Sullivan, immediately asked Taguchi to visit Ford and give lectures.

Taguchi visited Ford in October 1982 and gave a five-day seminar with a very small English handout. It was somewhat difficult for both sides because of translation problems, but the power of his ideas was understood by some participants. FSI decided to include Taguchi's design of experiments in its curriculum. Yuin Wu, a long-time associate of Taguchi's, was hired by FSI. Wu translated training material and began to conduct seminars with FSI. Wu, who is from Taiwan, was very knowledgeable in Japanese quality control and its culture, especially in Taguchi's methodologies. He had immigrated to the United States in the mid-1970s and is credited with conducting the very first Taguchi-style optimization study in the United States.
In May 1983, Shin Taguchi, a son of Taguchi's, joined FSI. Shin is a graduate of the University of Michigan in industrial and operations engineering.

Xerox
At Xerox Corporation, after its xerography patent had expired, management could not help but notice that many Japanese office imaging companies were shipping small and medium-sized copy machines to the United States that produced high-quality copies at low cost, taking market share from Xerox. Xerox was especially anxious to protect its large-machine market share and had no choice but to conduct a benchmark study. Xerox had a sister company called Fuji Xerox, owned 50/50 by Rank Xerox of Europe and Fuji Film of Japan. Xerox found out that Fuji Xerox could sell a copy machine to Xerox at manufacturing cost and still make a handsome profit.

Fuji Xerox was established in the mid-1960s and had no technical know-how regarding copy machines. Yet after 15 years, Fuji Xerox had developed the technical competence to threaten Xerox Corporation. Xerox found that Fuji Xerox was practicing total quality management and various powerful techniques, such as quality function deployment (QFD) and Taguchi's methodologies for design optimization. Xerox found that Taguchi had been a consultant to Fuji Xerox since its establishment and suspected that Fuji's competence had something to do with implementing Taguchi's methodologies.

Xerox invited Taguchi to visit in 1981, and he began to provide lectures and consultations. Xerox executive Wayland Hicks and his people, including Barry Bebb, Maurice Holmes, and Don Clausing, took leadership in implementing QFD and Taguchi methods. Clausing, the creator of the robustness concept called the operating window, gave the name "Taguchi methods" to Taguchi's approach to design optimization. Clausing later became a professor at MIT and taught total product development, in which QFD and Taguchi methods played a major role.
Figure 6.1
Timeline of the development of quality engineering: from the Deming Prize and design of experiments (orthogonal arrays, linear graphs, ANOVA, inner and outer arrays, robustness by control × noise) in the 1960s, through the quality loss function, tolerance design, online quality engineering, the import to the United States in the 1970s, dynamic S/N application to mechanical systems, and the operating window, to robust design, noise compounding, energy thinking, JQES (1993), robust technology, MTS, and robust engineering applied to all other systems.
Figure 6.2
DFSS and RE deployment model: sponsor training (2-day sessions, for directors and direct reports); develop deployment plan (communication, program management, monitoring system, supplier involvement, promotion office); leadership training (2-day, for all top executives and department heads: introduction, role of people, project identification, project selection); identify potential projects; project selection (core team, sponsors); 2-week robust engineering project-oriented training with a robust engineering project, or 4-week DFSS project-oriented training with a full DFSS project (assign project leader, project charter); project execution with coaching; validation; project closure; financial assessment; corporate memory; new wave.
by ASME Press in 2000. Mahalanobis-Taguchi System, also published by McGraw-Hill, appeared in 2001.

Despite all these publications, there remain many misconceptions regarding Taguchi methods. Most people still take them to be quality problem-solving tools and do not recognize their application to fire prevention. Only those companies trained by ASI seem to be using Taguchi methods for design optimization.
Figure 6.3
DFSS and RE implementation

                      Number of projects    Saving/project
Robust Engineering    120                   $0.5M
Full DFSS              60                   $1.0M
Total                 180                   $120M (total saving)
Figure 6.4
Traditional product development: the design-build-test-fix cycle. Build or simulate a prototype and test it to see whether it meets requirements (e.g., worst-case test, life test); if it does not, fix the design and repeat.
Figure 6.5
House of quality
Figure 6.6
House of quality for pickup device
and change the concept, or go to the next step of tolerance design for further improvement with a minimum increase in cost. Recognizing that robust optimization seeks the perfection of the conceptual design will prevent a poor concept from going downstream, thus avoiding the need for firefighting.

In a typical company in the United States, it is said that 75% of engineers' time is spent fighting fires. Successful firefighters are recognized as good engineers and are promoted. When one prevents fires, he or she will not be recognized. This is a serious issue that only top executives can resolve. Just ask anyone which is more effective, firefighting or fire prevention, and I am sure that the majority of people will say that fire prevention is more effective and is the right thing to do. However, in a typical company, engineers developing technology and products are not asked to optimize design for robustness. They are paid to meet requirements. Therefore, they rotate through the design-build-test-fix cycle to test against requirements. Their job descriptions do not spell out optimization; that is, nobody is asking them to optimize the design.
Figure 6.7 shows product development with robust engineering from 30,000 feet. This process includes robust optimization of technology elements in the R&D phase. With energy thinking, technology elements can be optimized for robustness without having the requirements for a specific product. For instance, a small motor can be optimized without knowing the detailed requirements. This concept of technology robustness is highly advantageous in achieving quality, reliability, low cost, and shorter development time.

Although it has been 20 years since Taguchi methods were introduced into the United States, we cannot say that any company has successfully changed its culture from firefighting to fire prevention. Some got close, but they did not sustain the momentum. Many are working on it today, but struggling. For the sake of reducing the loss to society, robust engineering should be imprinted in the DNA of engineers, so to speak. To accomplish this vision, we need to do the following:
Figure 6.7
Product development with robust engineering: vision, mission, and strategies, together with VOC and future technology, drive the process; identify requirements and establish targets; conduct robust optimization; generate concepts and select concept(s) against the requirements; release the design; manufacturing; customer.
Part III
Quality Loss Function
7
Introduction to the Quality Loss Function

7.1. Introduction
Definition of Quality
Quality Loss Function and Process Capability
System Design
Parameter Design: Two-Step Optimization
Tolerance Design

7.1. Introduction
A product is sold by virtue of its product species and its price. Product species relate to product function and market size. Product quality relates to loss and market size. Quality is often referred to as conformance to specifications. However, Taguchi proposes a different view of quality, one that relates it to cost and loss in dollars, not just to the manufacturer at the time of production but to the consumer and to society as a whole.

Loss is usually thought of as additional manufacturing cost incurred up to the point that a product is shipped. After that, it is society, the customer, who bears the cost of poor quality. Initially, the manufacturer pays in warranty costs. After the warranty period expires, the customer may pay for repairs. But indirectly, the manufacturer will ultimately foot the bill through negative customer reaction and costs that are difficult to capture and account for, such as customer inconvenience and dissatisfaction and the time and money spent by customers. As a result, the company's reputation will be damaged, and eventually market share will be lost.

Real growth comes from the market, cost, and customer satisfaction. The money the customer spends on a product and the perceived loss due to poor quality ultimately come back as long-term loss to the manufacturer. Taguchi defines quality as the loss imparted by the product to society from the time the product is shipped. The objective of the quality loss function is the quantitative evaluation of the loss caused by functional variation of a product.
Definition of Quality
Example
A serious problem once occurred concerning the specification of the thickness of the vinyl sheets used to build vinyl houses for agricultural production. Figure 7.1 shows the relationship of thickness to cost and to quality loss. Curve L shows the sum of cost and quality loss. The midvalue of the specification, or target value, is the lowest point on curve L. Assume that the specification given by the Vinyl Sheet Manufacturing Association is

    1.0 ± 0.2 mm    (7.1)

and that the manufacturer's production specification is

    (7.2)

Although the production specification above still meets the one given by the association, many complaints were lodged after products within the specification of equation (7.2) were sold.
When the thickness of vinyl sheets is increased, the production cost (including material, processing, and material-handling costs) increases, as shown by curve C. On the other hand, when the thickness is decreased, the sheets break easily and farmers must replace or repair them; thus, the cost caused by inferior quality increases, following curve Q.

Initially, the midvalue of the specification (denoted by m) is the lowest point on curve L, which shows the sum of the two curves. When the average thickness is lowered from m = 1.0 mm to 0.82 mm, the manufacturing cost is cut by B dollars,
Figure 7.1
Objective value and its limits: cost curve C, quality loss curve Q, and their sum, curve L, versus thickness, with the midvalue m of the specification at the minimum of curve L.
but consumer loss is increased by D dollars, which is much larger than B dollars. Also, if the thickness varies by 0.2 mm on either side of the target value (m), the manufacturer incurs a total loss of A dollars.

Because of the many complaints, the manufacturer had to adjust the midvalue of thickness back to the original value of 1.0 mm. The association still maintained the original specification after adding a clause that the average thickness must be 1.0 mm.
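The tradeoff behind Figure 7.1 can be sketched with a toy model. The cost coefficients below are invented purely for illustration; only the shapes mirror the figure (curve C rising with thickness, curve Q falling, and their sum minimized at the target).

```python
# Toy model of Figure 7.1: curve C (cost) rises with thickness t, curve Q
# (quality loss from breakage/repair) falls with t, and curve L = C + Q
# has its minimum at the target. Coefficients are hypothetical.

def total_loss(t):
    cost = 10.0 * t          # curve C: material/processing cost
    quality_loss = 10.0 / t  # curve Q: loss from thin, easily broken sheets
    return cost + quality_loss

# Scan candidate thicknesses from 0.5 to 2.0 mm; for these coefficients the
# minimum of curve L lands at t = 1.0 mm, the midvalue of the specification.
candidates = [0.5 + 0.01 * i for i in range(151)]
best = min(candidates, key=total_loss)
print(round(best, 2))  # -> 1.0
```

With this model, a thickness of 0.82 mm trims the cost term but raises total loss, which is exactly the complaint pattern described in the example.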
Traditionally, the core of quality control has been the fraction defective and its countermeasures. If defective products are shipped, quality problems are created. If defectives are not shipped, the manufacturer takes the loss. To avoid damage to the company's reputation, it is important to forecast quality before products are shipped. Since products within specifications are shipped, it is necessary to forecast the quality level of nondefective products. For this purpose, the process capability index has been used. This index is calculated as the tolerance divided by 6σ, which is quite abstract. (In general, the root mean square is used instead of sigma; this is described in a later chapter.) Since there is no economic interpretation and no standard that makes a particular value, such as 0.8 or 1.3 or 2.0, the one to follow, it is difficult for management to understand the concept.

The loss function is calculated from the square of the reciprocal of the process capability index, multiplied by a constant related to economics. It is a forecast of the economic loss imparted to the customer in the market.
Quality Loss Function and Process Capability

The process capability index, Cp, is calculated by the following equation:

    Cp = tolerance / (6 × standard deviation)    (7.3)

The quality loss function is

    L = k(y − m)²    (7.4)

where L is the loss in dollars when the quality characteristic is equal to y; y is the value of the quality characteristic (i.e., length, width, concentration, surface finish, flatness, etc.); m is the target value of y; and k is a constant to be defined later. As shown in Figure 7.2, this quadratic representation of the loss function L(y) is
Figure 7.2
Quality loss function: loss in dollars versus the value y of the quality characteristic, with zero loss at the target m.
obtained by expanding the loss L(y) in a Taylor series about the target value m:

    L(y) = L(m) + [L′(m)/1!](y − m) + [L″(m)/2!](y − m)² + ···    (7.5)

Because the loss is minimum at y = m, L(m) = 0 and L′(m) = 0, so the first surviving term gives

    L(y) = [L″(m)/2!](y − m)² = k(y − m)²    (7.6)
In reality, for each quality characteristic there exists some function that uniquely defines the relationship between economic loss and the deviation of the quality characteristic from its target value. The time and resources required to obtain such a relationship for each quality characteristic would represent a considerable investment. Taguchi has found the quadratic representation of the quality loss function to be an efficient and effective way to assess the loss due to deviation of a quality characteristic from its target value (i.e., due to poor quality). The concept behind Taguchi methods is that useful results must be obtained quickly and at low cost. Use of a quadratic, parabolic approximation for the quality loss function is consistent with this philosophy.
For a product with a target value m, from most customers' point of view, m ± Δ0 represents the deviation at which functional failure of the product or component occurs. When a product is manufactured with its quality characteristic at the extremes, m + Δ0 or m − Δ0, some countermeasure must be undertaken by the average customer (i.e., the product must be discarded, replaced, repaired, etc.). The cost of this countermeasure is A0. Since the quality loss function is

    L = k(y − m)²    (7.4)

at

    y = m ± Δ0    (7.7)

we have

    A0 = k(m ± Δ0 − m)² = kΔ0²    (7.8)

so that

    k = A0/Δ0²    (7.9)

This value of k, which is constant for a given quality characteristic, and the target value, m, completely define the quality loss function curve (Figure 7.3).
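Equations (7.4) and (7.9) can be applied directly. The numbers below (a $50 countermeasure at a ±0.25 functional limit around a target of 3.00) are hypothetical, chosen only to show the mechanics.

```python
# Quality loss function with k derived from the customer's countermeasure
# cost A0 at the functional limit m +/- Delta0. All numbers are hypothetical.

A0 = 50.0      # dollars: cost of discarding/replacing/repairing at the limit
delta0 = 0.25  # functional limit: deviation at which the product fails
m = 3.00       # target value of the quality characteristic

k = A0 / delta0 ** 2  # equation (7.9): k = A0 / Delta0^2

def loss(y):
    return k * (y - m) ** 2  # equation (7.4): L = k(y - m)^2

print(round(loss(m + delta0), 2))  # at the functional limit, loss = A0 -> 50.0
print(round(loss(3.10), 2))        # inside the limit, loss is nonzero -> 8.0
print(round(loss(m), 2))           # zero loss only exactly on target -> 0.0
```

Note the middle case: a unit that is comfortably inside specification still carries a quantified loss, which is the core difference from pass/fail thinking.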
Figure 7.3
Quality loss function
Up to this point, the quality loss function has been explained for a single piece of product. In general, there are multiple pieces of product, and in that case the loss function is given by

    L = kσ²    (7.10)

Since the process capability index is

    Cp = tolerance / (6 × standard deviation) = 2Δ0 / (6σ)    (7.11)

we have, therefore,

    L = (A0/Δ0²)σ² = A0 / (9Cp²)    (7.12)
It is seen that the loss function can be expressed mathematically in terms of the process capability index, but the former has dollars as its unit and the latter has no unit. For example, suppose the process capability index was 0.9 last month and is 0.92 this month. By itself, this does not explain the difference from an economic or productivity viewpoint. Using the loss function, a monetary expression can be given and productivity can be compared.
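The month-to-month comparison can be made concrete with equation (7.12). The failure cost A0 below is an assumed figure; only the Cp values come from the example in the text.

```python
# Monetary comparison of two process capability values via equation (7.12):
# L = A0 / (9 * Cp^2). A0 is a hypothetical cost at the functional limit.

A0 = 100.0  # dollars (assumed)

def loss_per_unit(cp):
    return A0 / (9 * cp ** 2)

last_month = loss_per_unit(0.90)   # about $13.72 per unit
this_month = loss_per_unit(0.92)   # about $13.13 per unit
print(round(last_month - this_month, 2), "dollars saved per unit")
```

Multiplied by monthly volume, the Cp improvement from 0.9 to 0.92 becomes a dollar figure management can weigh, which is exactly the translation the paragraph above argues for.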
There are two aspects to the quality loss function. The first is the justice-related aspect. Take, for example, a product specified to have 100-g content. If the amount is 97 g instead of 100 g, the customer loses 3 g. If the price is 10 cents a gram, the customer loses 30 cents. In this case, the loss is easy to understand.

If the content is 105 g, the customer may be happy to have a free 5 g of product. However, it represents a loss to the manufacturer. If the manufacturer aimed for 105 g in a package, there would be a 5% production loss. If a package is supposed to contain 100 g, a planned production of 10,000 packages would result in producing
only 9500 packages in the case of 105-g packages.
Example
A transistor A and a resistor B in a power circuit affect the output voltage as shown in Figure 7.4.

Suppose that a design engineer forecasts the midvalues of the elements in a circuit, assembles them into a circuit, and puts 100 V ac into it. If the output voltage so obtained is only 100 V instead of 115 V, he changes factor A from A1 = 200 to A = 250 to adjust for the 15-V deviation from the target value. From the standpoint of quality control, this is very poor methodology. Assume that the resistance used in the circuit is of the cheapest grade. It either varies or deteriorates within a maximum range of ±10% during its service period as the power source of a TV set. From Figure 7.4, we see that the output voltage then varies within a range of ±6 V. In other words, the output voltage varies under the influence of noise, such as the original variation in the resistance itself or its deterioration.
Parameter Design: Two-Step Optimization

Figure 7.4
Relationship between factors A and B and the output voltage y (V), over factor levels A1, A2, A3 and B1, B2, B3.
Parameter design is the most important step in developing stable and reliable products or manufacturing processes. With this technique, nonlinearity may be exploited positively. (Factor A has this property.) In this step we find a combination of parameter levels capable of damping the influences not only of inner noise but of all noise sources, while keeping the output voltage constant. At the heart of research lives a conflict: to design a product that is reliable within a wide range of performance conditions, but at the lowest price. Naturally, elements or component parts with a short life and wide tolerance variation are used. The aim of design of experiments here is the utilization of nonlinearity. Most important in applying design of experiments is to choose factors, or to select objective characteristics, with this intention.

There is a difference between the traditional quality improvement approach and parameter design. In the traditional approach, the first step is to try to hit the target and then to reduce variability. The countermeasures for reducing variability depend on the tolerance design approach: improving quality at a high increase in cost. In parameter design, quality is improved by selecting control factor combinations. After the robustness of the product is improved, the target is adjusted by selecting a control factor whose level change affects the average but affects variability minimally. For this reason, parameter design is also called two-step optimization.
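The two steps can be sketched numerically. The response model below is invented, not measured circuit data: each level of control factor A has a nominal output and a noise sensitivity, and factor b is a purely additive adjustment, standing in for the element used to re-center the 115-V target.

```python
import statistics

# Hypothetical (nominal output in volts, noise sensitivity) per level of
# control factor A; the numbers are invented for illustration only.
levels = {1: (100.0, 0.6), 2: (115.0, 0.4), 3: (130.0, 0.1)}

def output(a, b, noise):
    nominal, sens = levels[a]
    return nominal * (1.0 + sens * noise) + b  # b: additive adjustment factor

noises = [-0.10, -0.05, 0.0, 0.05, 0.10]  # +/-10% component variation

# Step 1: pick the level of A that minimizes variability, ignoring the mean.
a_best = min(levels, key=lambda a: statistics.pstdev(output(a, 0.0, n) for n in noises))

# Step 2: use the adjustment factor b to put the mean back on the 115-V target.
mean_y = statistics.mean(output(a_best, 0.0, n) for n in noises)
b_adjust = 115.0 - mean_y

print(a_best, round(b_adjust, 2))  # level 3 is most robust; b shifts the mean by -15 V
```

Note that the most robust level (3) leaves the mean off target at 130 V; hitting 115 V is deferred to the adjustment factor, which is the whole point of doing robustness first and targeting second.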
Tolerance Design

8
Quality Loss Function for Various Quality Characteristics
Figure 8.1
Distribution of color density in TV sets built in the Japanese factory and in the U.S. factory, across the tolerance range m − 5 to m + 5.

One curve represents the color density frequency distribution associated with sets built in Japan, and the other curve represents the same distribution for sets built in the United States. The two factories belong to the same manufacturing company.
In 1979, an article appeared in The Asahi (a newspaper) on the preference of American consumers for TV sets built by a company in Japan. Apparently, identical subsidiary plants had been built in the United States and in Japan. Both facilities were designed to manufacture sets for the U.S. market and did so using identical designs and tolerances. However, despite the similarities, the American consumer displayed a preference for sets that had been made in Japan.

Referring again to Figure 8.1, the U.S.-built sets were defect-free; that is, no sets with color density out of tolerance were shipped from the San Diego plant. However, according to the article, and as can be seen in Figure 8.1, the capability index of the Japan-built sets was Cp = 1.0, representing a defect rate of 3 per 1000 units. Why, then, were the Japanese sets preferred?
Before answering that question, it is important to note that:
Conformance to specification limits is an inadequate measure of quality or of loss due to poor quality.
Quality loss is caused by customer dissatisfaction.
Quality loss can be related to product characteristics.
Quality loss is a financial loss.
Traditional methods of satisfying customer requirements by controlling component and/or subsystem characteristics have failed. Consider Figure 8.2, which represents the traditional, inspection-oriented concept, whereby all products or processes that exist or function within some preestablished limits are considered to be equally good, and all products or processes outside these limits are considered to be equally bad. The fallacy in this type of judgment criterion was illustrated in the example above: although all of the sets manufactured in the United States were within specifications, the customer requirements were apparently better satisfied by the sets built in Japan.

Taguchi compared specification limits to the pass/fail criteria often used for examinations. In school, for example, 60% is generally considered to be a passing grade, but the difference between a 60% student who passes the course and a 59%
Figure 8.2
Conformance to requirements
student who fails is nil. If 100 points represent perfection, the student with 60
points is more like the student with 59 than he or she is like the perfect student.
Similarly, the manufactured product that barely conforms to the specication limits is more similar to a defective part than to a perfect part.
Specification limits only provide the criteria for acceptance and/or rejection. A product that just barely conforms to some preestablished limits functions relative to the optimum just as the 60% student functions relative to perfection. For some customers, such deviation from the optimum is not acceptable. In Figure 8.1, the quality characteristic has been graded depending on how close its value is to the target or best value, denoted by m. The figure shows that the majority of the sets built in Japan were grade A or B, whereas the sets built in the United States were distributed uniformly across categories A, B, and C.

The grade-point average of the TV sets built in Japan was higher than that of the U.S.-built sets. This example rejects the quality evaluation criteria that characterize product acceptance by inspection with respect to specification limits. How,
then, can quality be evaluated? Since perfection for any student is 100%, loss occurs whenever a student performs at a lower level. Similarly, associated with every quality characteristic is a best or target value: a best length, a best concentricity, a best torque, a best surface finish, and so on. Quality loss occurs whenever the quality characteristic deviates from this best value. Quality should, therefore, be evaluated as a function of the deviation of a characteristic from its target.

If a TV color density of m is best, quality loss must be evaluated as a function of the deviation of the color density from this value m. This new definition of product quality is the uniformity of the product or component characteristics around the target, not the conformity of these characteristics to specification limits. Specification limits have nothing to do with quality control. As quality levels increase and the need for total inspection as a screening tool diminishes, specifications become the means of enunciating target values. Associated with every product characteristic, however, there exists some limit outside of which the product, as viewed by 50% of the customers, does not function. These limits are referred to as the customer's tolerance.
Suppose that four factories are producing the same product under the same engineering specifications. The target values desired are denoted by m, and the outputs are as shown in Figure 8.3. Suppose further that the four factories carry out 100% inspection and ship only pieces within specification limits.

Although the four factories are delivering products that meet specifications, factory 4 offers more products at or near the desired target value and exhibits less piece-to-piece variability than the other factories. Factory 4 is likely to be selected as the preferred vendor. If a person selects factory 4, it is probably because he or she believes in the loss function: being within specifications is not the entire story.
The loss function, L, as described previously, is used for evaluating 1 unit of product:

L = k(y − m)²  (7.4)

where k is a proportionality constant and y is the output. More often, the qualities of all pieces of product are evaluated. To do this, the average of (y − m)², called the mean-squared deviation (MSD), is used. When there is more than one piece of product, the loss function is given by

L = k(MSD)  (8.1)
Figure 8.3: Outputs for four factories
MSD = (1/n) Σᵢ (yᵢ − m)²  (8.2)

Expanding each term about the average, ȳ, gives

MSD = (1/n)[(y1 − ȳ)² + ⋯ + (yn − ȳ)²] + (ȳ − m)² = σ² + (ȳ − m)²  (8.3)

where ȳ is the average of y. The loss function for more than one piece then becomes

L = k[σ² + (ȳ − m)²]  (8.4)

where σ² is the variability around the average and (ȳ − m) is the deviation of the average from the target.
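As a sketch of this decomposition (the function name and the sample values below are illustrative, not from the text), the loss of a batch can be computed from σ² + (ȳ − m)², which equals the MSD taken directly:

```python
# Sketch: nominal-the-best quality loss via the MSD decomposition
# MSD = variance around the mean + squared offset of the mean from target.

def quality_loss(values, target, k):
    n = len(values)
    mean = sum(values) / n
    variance = sum((y - mean) ** 2 for y in values) / n   # sigma^2
    offset_sq = (mean - target) ** 2                      # (ybar - m)^2
    return k * (variance + offset_sq)                     # k * MSD

data = [113, 115, 112, 114, 116]   # hypothetical measurements
loss = quality_loss(data, target=115, k=0.25)
print(loss)  # 0.75
```

The same value results from averaging (y − m)² directly, which is how Table 8.1 is computed.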
We can now evaluate the quality of all our outputs. To reduce the loss, we must reduce the MSD. This can be accomplished by reducing each of its two parts:

- σ², the variability around the average
- (ȳ − m)², the deviation of the average from the target

The quality loss function gives us a quantitative means of evaluating quality. Let us reexamine our comparison of the four factories with different output distributions (Table 8.1).
Example

The following analysis involves a sample size of 13 pieces for each factory. Suppose that k = 0.25. Then, using the nominal-the-best format:

L = k[σ² + (ȳ − m)²] = k(MSD)  (8.5)

For factory 3:

ȳ = 113

(ȳ − m)² = (113 − 115)² = 4  (8.6)
Table 8.1: Data for the four factories

Factory   Data                                                   MSD    Loss per Piece
1         115 114 115 117 118 113 114 116 117 113 115 116 115   2.92   $0.73
2         113 114 115 116 113 114 115 115 116 114 115 115 116   1.08   $0.27
3         112 113 114 113 114 113 112 115 114 112 113 112 112   4.92   $1.23
4         114 114 114 114 115 115 115 115 115 116 116 116 116   0.62   $0.15
σ² = (1/13)[(y1 − ȳ)² + ⋯ + (y13 − ȳ)²] = 0.92  (8.7)

MSD = σ² + (ȳ − m)² = 0.92 + 4 = 4.92  (8.8)

L = k(MSD) = 0.25(4.92) = $1.23 per piece  (8.9)
Losses for factories 1, 2, and 4 are calculated in the same way. The results are summarized in Figure 8.4. The 73-cent loss for factory 1, for example, is interpreted as follows: as a rough approximation, one randomly selected product shipped from factory 1 imparts an average loss of $0.73.
For smaller-the-better characteristics, the best value is zero, and the proportionality constant is

k = A0/y0²  (8.10)

where y0 is the customer tolerance and A0 the loss at that point. For n pieces,

L = k(MSD)   where   MSD = (1/n) Σᵢ yᵢ² = σ² + ȳ²  (8.11)
Figure 8.4: Evaluating four factories
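The per-factory losses in Table 8.1 can be reproduced in a few lines; the sketch below (function name illustrative) checks factories 3 and 4 against the table, with target m = 115 and k = 0.25 as in the example:

```python
# Sketch: average quality loss per piece for factories of Table 8.1
# (nominal-the-best, target m = 115, k = 0.25).

def loss_per_piece(data, m=115, k=0.25):
    n = len(data)
    msd = sum((y - m) ** 2 for y in data) / n   # mean-squared deviation
    return k * msd

factory3 = [112, 113, 114, 113, 114, 113, 112, 115, 114, 112, 113, 112, 112]
factory4 = [114, 114, 114, 114, 115, 115, 115, 115, 115, 116, 116, 116, 116]

print(round(loss_per_piece(factory3), 2))  # 1.23, matching Table 8.1
print(round(loss_per_piece(factory4), 2))  # 0.15
```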
Figure 8.5: Smaller-the-better curve (% shrinkage)
Example

The smaller-the-better case is illustrated for the manufacturing of speedometer cable casing, where the output response is

y = % shrinkage of the speedometer casing,  y ≥ 0

When y is 1.5%, the customer complains about 50% of the time and brings the product back for replacement. The replacement cost is $80:

A0 = $80,   y0 = 1.5  (8.12)

Thus,

k = A0/y0² = 80/(1.5)² = 35.56  (8.13)

The data in Table 8.2 are percent shrinkage values of the casings made from two different materials. The losses were computed using the formula

L = 35.56(σ² + ȳ²)  (8.14)
While the shrinkage measurements from both materials meet specifications, the shrinkage from material type B is much less than that of type A, resulting in a much smaller loss. If both materials cost the same, material type B would be the better choice.
Table 8.2: Smaller-the-better data (% shrinkage)

Type of Material   Data                                        σ²        ȳ²        MSD      Loss
A                  0.28 0.33 0.18 0.24 0.24 0.30 0.26 0.33   0.00227   0.0729    0.0751   $2.67
B                  0.08 0.07 0.09 0.05 0.12 0.03 0.06 0.03   0.00082   0.00439   0.0052   $0.19
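A short sketch (function and variable names illustrative) reproduces the Table 8.2 losses from the shrinkage data:

```python
# Sketch: smaller-the-better loss for the speedometer-casing shrinkage data
# of Table 8.2, with k = A0 / y0**2, A0 = $80, y0 = 1.5% shrinkage.

A0, y0 = 80.0, 1.5
k = A0 / y0**2               # about 35.56

def stb_loss(data, k):
    msd = sum(y * y for y in data) / len(data)   # MSD = sigma^2 + ybar^2
    return k * msd

type_a = [0.28, 0.33, 0.18, 0.24, 0.24, 0.30, 0.26, 0.33]
type_b = [0.08, 0.07, 0.09, 0.05, 0.12, 0.03, 0.06, 0.03]

print(round(stb_loss(type_a, k), 2))  # 2.67
print(round(stb_loss(type_b, k), 2))  # 0.19
```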
For larger-the-better characteristics, the loss is based on 1/y²:

L = k(MSD)   with   k = A0 y0²  (8.15)

Figure 8.6: Larger-the-better curve

where, for n pieces,

MSD = (1/n) Σᵢ (1/yᵢ²) = (1/n)(1/y1² + 1/y2² + ⋯ + 1/yn²)  (8.16)
Example

A company wants to maximize the weld strength of its motor protector terminals. When the weld strength is 0.2 lb/in², some welds have been known to break, resulting in an average replacement cost of A0 = $200.

1. Find k and set up the loss function.
2. If the scrap cost at production is $2 per unit, find the manufacturing tolerance.
3. In an experiment conducted to optimize the existing semimanual welding process (Table 8.3), compare the before and after.
8.5. Summary

The equations to calculate the loss function for different quality characteristics are summarized below.

1. Nominal-the-best (Figure 8.7):

   L = k(y − m)²
   L = k(MSD)
   k = A0/Δ0²
   MSD = (1/n) Σᵢ (yᵢ − m)² = σ² + (ȳ − m)²

2. Smaller-the-better (Figure 8.8):

   L = ky²
   k = A0/y0²
Table 8.3: Larger-the-better data (weld strength)

                    Data                                      MSD       Loss
Before experiment   2.3 1.7 1.4 1.6 2.0 2.1 2.2 1.9 2.2 2.0  0.28529   $2.28
After experiment    2.1 2.5 2.1 2.3 2.9 2.4 2.6 2.4 2.8 2.7  0.16813   $1.35
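The weld-strength example can be sketched numerically; the manufacturing-tolerance step assumes the larger-the-better break-even form Δ = y0 √(A0/A), and the function names are illustrative:

```python
# Sketch of the weld-strength example (larger-the-better, Table 8.3):
# A0 = $200 at the functional limit y0 = 0.2 lb/in^2, scrap cost A = $2.

A0, y0, A = 200.0, 0.2, 2.0
k = A0 * y0**2                 # k = A0 * y0^2 = 8
delta = y0 * (A0 / A) ** 0.5   # assumed break-even tolerance: 2 lb/in^2

def ltb_loss(data, k):
    msd = sum(1.0 / (y * y) for y in data) / len(data)
    return k * msd

before = [2.3, 1.7, 1.4, 1.6, 2.0, 2.1, 2.2, 1.9, 2.2, 2.0]
after  = [2.1, 2.5, 2.1, 2.3, 2.9, 2.4, 2.6, 2.4, 2.8, 2.7]

print(round(ltb_loss(before, k), 2))  # 2.28
print(round(ltb_loss(after, k), 2))   # 1.35
```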
Figure 8.7: Nominal-the-best curve (L(y) in dollars)
   L = k(MSD)
   MSD = (1/n) Σᵢ yᵢ² = σ² + ȳ²

Figure 8.8: Smaller-the-better curve (L(y) in dollars)

3. Larger-the-better (Figure 8.9):

   L = k(1/y²)
   k = A0 y0²
Figure 8.9: Larger-the-better curve

   L = k(MSD)
   MSD = (1/n) Σᵢ (1/yᵢ²) = (1/n)(1/y1² + 1/y2² + ⋯ + 1/yn²)
9. Specification Tolerancing

9.1. Introduction
9.2. Safety Factor
9.3. Determination of Low-Rank Characteristics
9.4. Distribution of Tolerance
9.5. Deteriorating Characteristics
9.1. Introduction
In traditional quality control, the fraction defective of a manufacturing process is
considered to be the quality level of the product. When a defective product is
discovered, it is not shipped. So if all products are within specications, are these
products perfect? Obviously they are not, because many of the products shipped,
considered as being good or nondefective, have problems in the marketplace.
Problems occur due to lack of robustness against a users conditions. The only
responsibility that a manufacturer has is to ship nondefective products, that is,
products within specications. Defective products cause a loss to the company. In
other words, it is not a quality problem but a cost problem. It is important to
realize that the tolerances, the upper and lower limits, stated in the drawings are
for inspection purposes, not for quality control.
Again, as in school, any mark above 60 is a passing grade, with a full mark being 100. A student passing with a grade of 60 is not perfect, and there is little difference between a student with a grade of 60 and one with a grade of 59. It is clear that if a product is assembled from component parts that are marginally within tolerance, the assembled product cannot be a good one.

When troubles occur from products that are within tolerance, the product design engineer is to blame. If robust designs were not performed well and specifications were not studied well, incorrect tolerances would come from the design department.

Should the tolerance be tightened? If so, the loss claims from the market could be reduced a little. But then manufacturing cost increases, giving the manufacturer a loss and ending up with an increased loss to society.
Example

To determine a specification, it is necessary to determine two values: the functional tolerance and the customer loss. For every product characteristic we can find a value at which 50% of customers view the product as not functioning. This value represents an average customer viewpoint and is referred to as the functional tolerance, or LD50 (denoted as Δ0). The average loss occurring at LD50 is referred to as the customer loss, A0. The functional tolerance and customer loss are required to establish a loss function.

Figure 9.1: Loss function for TV power supply (L(y) in dollars versus voltage y; A0 = $100.00 at y = 95 and 135 V)

Let us set up the loss function for a color TV power supply circuit where the target value of y (output voltage) is m = 115 V. Suppose that the average cost for repairing the color TV is $100. This occurs when y goes out of the range 115 ± 20 V in the hands of the consumer (Figure 9.1). We see that L = $100 when y = 95 or 135 V, the consumer tolerance is Δ0 = 20 V, and the consumer loss is A0 = $100.
Then we have

L(y) = k(y − m)²  (7.4)

A0 = kΔ0²  (9.1)

k = A0/Δ0² = $100/20² = $0.25 per V²  (9.2)

L(y) = 0.25(y − 115)²  dollars/piece  (9.3)

When the output voltage becomes 95 or 135 V, somebody is paying $100. As long as the output is 115 V, society's financial loss is minimized.
With the loss function established, let's look at its various uses. Suppose that a circuit was shipped with an output of 110 V. It is imparting a loss of

L = $0.25(110 − 115)² = $6.25  (9.4)

Figure 9.2: Loss function for a 110-V circuit (L(y) in dollars)

This means that, on the average, someone is paying $6.25. This figure is a rough approximation of the loss imparted to society due to inferior quality (Figure 9.2).

With information about the after-shipping, or consumer, tolerance, we can calculate the prior-to-shipping, or manufacturing, tolerance. The manufacturing tolerance is the economic break-even point for rework or scrap.
Suppose that the output voltage can be recalibrated to the target at the end of the production line at a cost of $2. What is the manufacturing tolerance? Stated differently, at what output voltage should the manufacturer spend $2 to fix each set? The manufacturing tolerance, Δ, is obtained by setting L to $2:

L(y) = 0.25(y − m)²  (9.5)

$2 = $0.25(y − 115)²  (9.6)

y = 115 ± √(2.00/0.25) = 115 ± 2.83 ≈ 115 ± 3 V  (9.7)

In general, the factory loss A at the manufacturing tolerance Δ satisfies

A = kΔ² = (A0/Δ0²)Δ²  (9.8)

Δ² = (A/A0)Δ0²  (9.9)

or

Δ = Δ0 √(A/A0)  (9.10)

As long as y is within 115 ± 3 V, the factory should not spend $2 for rework, because the loss without the rework will be less than $2. The manufacturing tolerance sets the limits for shipping a product. It represents a break-even point between the manufacturer and the consumer: either the customer or the manufacturer can spend $2 for quality (Figure 9.3).
Suppose that a circuit is shipped with an output voltage of 110 V without rework. The loss is

L(y) = $0.25(110 − 115)² = $6.25  (9.11)

The factory saves $2 by not reworking, but it imparts a loss of $6.25 to society. This loss becomes apparent to the manufacturer through customer dissatisfaction, added warranty costs, consumers' expenditure of time and money for repair, damaged reputation, and long-term loss of market share.
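The whole power-supply example can be sketched in a few lines (names illustrative):

```python
# Sketch of the TV power-supply tolerancing example: consumer tolerance
# Delta0 = 20 V, consumer loss A0 = $100, factory rework cost A = $2.

A0, delta0, A, m = 100.0, 20.0, 2.0, 115.0
k = A0 / delta0**2                 # 0.25 dollar per V^2

def loss(y):
    return k * (y - m) ** 2

delta = delta0 * (A / A0) ** 0.5   # manufacturing tolerance, eq. (9.10)

print(loss(110))        # 6.25 dollars for a 110-V circuit
print(round(delta, 2))  # 2.83, rounded to +/- 3 V in practice
```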
Figure 9.3: Tolerancing using the loss function
(military, communication), and a higher level of safety was attained through technological advancement. In advanced countries where such high-precision products can be produced, those new technologies were applied to general products, and tolerances for general products have become more and more stringent. In Japan, industrial standards (JIS) are reviewed every five years, and of course, tolerances are included in the list of reviews.
Again, let us denote the function limit as Δ0. It represents the point at which the function actually fails. It is determined as the point where the function fails due to the deviation of y by Δ0 from m, assuming that the other characteristics and operating conditions are in the standard (nominal value) state.

Determining the function limit under the assumption that the other conditions are in the standard state means that the other conditions are in the normal state. The normal state means the median or mode condition, where half of the conditions are more severe and the other half are less severe. It is abbreviated as LD50 (lethal dosage). In most cases, a test for determining the function limit may be conducted appropriately only under standard conditions.

The value of Δ0 can easily be determined. For example, the LD50 value is determined by finding the voltage at which ignition fails, varying the voltage from the mean engine ignition voltage of 20 kV. Suppose that the ignition fails at 8 kV when the voltage is lowered under standard operating conditions. Then the function limit is

Δ0 = 20 − 8 = 12 kV  (9.12)
Tolerance values Δ1 and Δ2 are determined separately for above and below the upper and lower function limits, Δ01 and Δ02, with separate safety factors, φ1 and φ2, respectively:

Δi = Δ0i/φi   (i = 1, 2)  (9.13)

φ = √(loss when the function limit is exceeded / loss when the factory standard is not met)  (9.14)

A0 and A are used for the numerator and denominator under the square root in equation (9.14), where A0 is the average loss when the function limit is exceeded (after shipping) and A is the loss at the factory when the factory standard is not met (sale price or adjustment cost). Therefore, equation (9.14) is written as

φ = √(A0/A)  (9.15)
Let us explain the reason first. Let the characteristic value be y and the target value be m. The loss due to deviation of y from the target value m is considered here. The actual economic loss is L(y) when a product with characteristic value y is shipped and used under various conditions. N is the size of the total market where the product can potentially be used, and Li(t, y) is the actual loss occurring at location i after t years.

Li(t, y) is equal to zero in most cases. If the function fails suddenly, an actual loss is incurred. The average loss L(y) caused by using the product at all locations during the entire period of the design life is

L(y) = (1/N) Σᵢ ∫ Li(t, y) dt  (9.16)
Approximating this average loss by a quadratic in the deviation from target gives

L(y) = (A0/Δ0²)(y − m)²  (9.17)

(y − m) in equation (9.17) is the deviation from the target value. Assuming that Δ is the tolerance level of the deviation and A is the value at the point where quality and cost balance in the tolerance design, equation (9.17) yields

A = (A0/Δ0²)Δ²  (9.18)

Δ = √(A/A0) Δ0  (9.19)

φ = Δ0/Δ = √(A0/A)  (9.20)
Example
On January 5, 1988, an illumination device weighing 1.6 tons fell in a discodancing hall in Tokyo. Three youths died and several were injured. The device was
hung with six wires, but these wires could stretch easily, so when the chain used
for driving the device broke, the device fell.
The design was such that the wires stretched and the device fell when the chain
broke. The main function of the chain was to move the device up and down. Because of its design, when the chain broke and stopped functioning, human lives
were threatened. The tensile strength of the chain had a safety factor of 4 with
respect to the weight of the illuminating device, 1.6 tons. Two chains with a tensile
strength of 3.2 tons had been used. A safety factor of 4 to 5 is adopted in many
corporations in the United States and quite a few Japanese companies have adopted
the same value.
Because the weight of the illuminating device was 1.6 tons, the function limit,
0, of the chain was apparently 1.6 tonsf. On average, several people are located
in the area underneath the illuminating device at any time. Assuming that the loss
caused by the death of one person is $1,550,000 and that four people died, the
loss caused by losing the suspension function is
A0 (1,550,000)(4) $6,200,000
(9.21)
The quality level of a chain of strength y follows the larger-the-better loss function:

L = A0Δ0²/y² = ($6,200,000)(1.6²)/y²  (9.22)

When the safety factor is 4, y = (1.6)(4) = 6.4 tonsf. This leads to a poor quality level, as follows:

L = ($6,200,000)(1.6²)/6.4² = $387,500  (9.23)

If the price of the chain were $387,500, the price and quality would have been balanced. Actually, a chain costs about $1000, or $2000 for two. Therefore, the loss of quality that existed in each chain was about 190 times the price. If deterioration of the chain were considered, the loss of quality would be even larger. Therefore, the problem existed in the selection of the safety factor.
In this case, the graver error was the lack of safety design. Safety design is not for improvement of the functioning rate or reliability; it is for reducing the loss to merely the repair cost in the case of malfunction. Some examples of safety design: at JR (Japan Railroad), the design is such that a train will stop whenever there is any malfunction in the operation of the train. At NTT, there is a backup circuit for each communication circuit, and when any type of trouble occurs, the main circuit is switched to the backup circuit, so customers feel no inconvenience. Instead of allowing a large safety factor to prevent breakdown of the chain, the device in the disco-dancing hall should have had shorter wires so that it would have stopped above the heads of the people if the chain did break. In this case, the safety design could have been done at zero cost. If no human lives were lost, the loss caused by the breakdown of the chain would probably be only about $10,000 for repair. The loss in this case would be

L = ($10,000)(1.6²)/6.4² = $625  (9.24)

This is less than the price of the chains, $2000. Therefore, the quality of chains having 6.4-tonsf strength is excessive, and even cheaper chains may be used.
Table 9.1 compares the loss with and without safety design. For example, when there is no safety design and one chain is used, the safety factor is equal to 2. Since this is the case of a larger-the-better characteristic, the quality is calculated as

L = A0Δ0²/y² = ($6,200,000)(1.6²)/[(1.6)(2)]² = $1,550,000  (9.25)
Table 9.1: Loss function calculation for the chain (in units of $10)

No. of    ------ No Safety Design ------    --- Safety Design ---
Chains    Price    Quality    Total        Quality    Total
1         100      155,000    155,100      250        350
2         200      38,750     38,950       60         *260
3         300      17,220     17,520       30         330
4         400      9,690      10,090       20         420
5         500      4,310      4,910        -          607
8         800      2,420      3,220        -          804
10        1,000    1,550      2,550        -          1,003
12        1,200    1,080      2,280        -          1,202
14        1,400    790        *2,190       -          -
15        1,500    610        2,200        -          -

(Asterisks mark the optimum totals.)
When there is safety design using one chain, the loss, A0, is equal to a repair cost of $10,000. The quality is calculated as

L = A0Δ0²/y²  (9.27)
  = ($10,000)(1.6²)/[(1.6)(2)]² = $2500  (9.28)
Table 9.1 lists the sum of price and quality for cases with and without safety design, as a function of the number of chains (1, 2, ...). In actual practice, a factor of 2 or 3 is often applied to the purchase price, considering the ratio of purchase price to sales price and the interest, because the price is the first expenditure for a consumer. Regarding the deterioration of quality, it is only necessary to consider the mode condition, that is, the most probable condition. Although it is usually better to consider the deterioration rate in the mode condition, it is ignored here because the influence of deterioration is small.

As we can see, the optimum solution for the case without a safety design is 14 chains, at $14,000 for the price of the chains. For the case with a safety design, the solution is two chains, at $2000 for the cost of the chains. The total losses, which indicate productivity, are $21,900 and $2600 for the respective optimum solutions. Therefore, the productivity of the design that includes safety design is 8.4 times higher.
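The two key loss calculations of this example can be sketched as follows (function name illustrative; the per-chain tensile strength of 3.2 tonsf follows the example):

```python
# Sketch of the chain example: larger-the-better loss L = A0 * Delta0^2 / y^2,
# with function limit Delta0 = 1.6 tonsf and 3.2 tonsf of strength per chain.

def chain_loss(a0, delta0, n_chains, strength_per_chain=3.2):
    y = strength_per_chain * n_chains
    return a0 * delta0**2 / y**2

# Without safety design the failure loss is $6,200,000; with safety design
# (shorter wires) it is only the $10,000 repair cost.
print(round(chain_loss(6_200_000, 1.6, 2)))   # 387500 for two chains
print(round(chain_loss(10_000, 1.6, 1)))      # 2500 with safety design
```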
Example

A stamped product is made from a steel sheet. If the shape after press forming is defective, an adjustment is needed at a cost, A0, of $12. The specification for a pressed product dimension is m ± 300 μm. The dimension of the product is affected by the hardness and thickness of the steel sheet. When the hardness of the steel sheet changes by a unit quantity (in Rockwell hardness, HR), the dimension changes by 60 μm. When the thickness of the steel sheet changes by 1 μm, the dimension changes by 6 μm.

Let's determine the tolerances for the hardness and thickness of the steel sheet. When either the hardness or thickness fails to meet its specification, the product is scrapped. The loss is $3 for each stamped product.
The method of determining the tolerance of the cause, or low-rank, characteristics is to establish an equation for the loss function of the high-rank characteristic and to convert it into a formula for the characteristics of the cause. When the standard of the high-rank characteristic is m0 ± Δ0 and the loss for failing to meet the standard is A0, the loss function is given by the following equation, where the high-rank characteristic is y:

L = (A0/Δ0²)(y − m0)²  (9.29)

For a low-rank characteristic x, the right-hand side of equation (9.29) becomes the following, due to the linear relationship between y and x, where b is the influence of a unit change of x on the high-rank characteristic y:

(A0/Δ0²)[b(x − m)]²  (9.30)

where m is the central value of the low-rank characteristic x. Substituting this expression into the right-hand side of equation (9.29), and substituting the loss, A dollars, due to rejection by the low-rank characteristic for L on the left-hand side, we get

A = (A0/Δ0²)[b(x − m)]²  (9.31)

Solving this equation for x − m, the tolerance, Δ, for the low-rank characteristic, x, is given by

Δ = (Δ0/b) √(A/A0)  (9.32)
where:

Δ0: tolerance for the high-rank characteristic
A: loss when the low-rank characteristic fails to meet its standard
b: change in the high-rank characteristic due to a unit change in the low-rank characteristic

Both A0 and A are losses for a product failing to meet the standard, whether found by the purchaser or in the company's factory; the cost of inspection is not included here. A0 is the loss for adjustment when the shipping standard is not satisfied within the company. In the case of the high-rank characteristic of the purchaser, the loss occurs when the standard is not satisfactory to the purchaser.
In the case of hardness, A0 is the loss when the purchaser finds a defective shape:

A0: $12
A: $3
Δ0: 300 μm
b: 60 μm/HR

Δ = (300/60) √(3/12) = 2.5 HR  (9.33)

For the thickness, b = 6 μm/μm, and

Δ = (300/6) √(3/12) = 25.0 μm  (9.34)
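Equation (9.32) applied to both causes can be sketched as (function name illustrative):

```python
# Sketch of the stamped-product example: tolerance for a low-rank
# characteristic x, Delta = (Delta0 / b) * sqrt(A / A0), equation (9.32).

import math

def low_rank_tolerance(delta0, b, a, a0):
    return (delta0 / b) * math.sqrt(a / a0)

# Hardness: b = 60 um per HR unit; thickness: b = 6 um per um.
print(low_rank_tolerance(300, 60, 3, 12))   # 2.5 HR
print(low_rank_tolerance(300, 6, 3, 12))    # 25.0 um
```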
In many cases, the relationship between the low- and high-rank characteristics can be approximated by a linear function. When this is not possible, a graphic approach is employed. For example, the influence of a change of a low-rank characteristic on the objective characteristic is found as shown in Figure 9.4. To study this relationship, it is important to keep the other low-rank characteristics and environmental conditions constant.

The function limits, Δ10 and Δ20, of the low-rank (cause) characteristic, x, are determined from the points where the high-rank (objective) characteristic, y, crosses its upper and lower limits. These are the points where the objective characteristic, y, exceeds the tolerance under the condition that all the other low-rank characteristics are in the standard state. When the upper and lower tolerance limits of x are to be determined separately, the lower tolerance limit, Δ1, and the upper tolerance limit, Δ2, are
Δ1 = Δ10 √(A1/A0)  (9.35)

Δ2 = Δ20 √(A2/A0)  (9.36)

Figure 9.4: Relationship between low- and high-rank characteristics

A0 is the loss when the high-rank characteristic, y, exceeds the tolerance; A1 is the loss when the low-rank characteristic, x, does not satisfy the lower limit for x (usually, parts and materials cannot be repaired, and the loss is their price); A2 is the loss when x exceeds the upper limit for x; and φ1 and φ2 are safety factors. Because it is cumbersome to give different tolerances above and below the standard, generally
Δx = min(Δ1, Δ2)  (9.37)
φ² = A0/(A1 + A2 + ⋯ + Ak)  (9.38)

There are different possible relationships between A0, the loss caused by the characteristic of an assembled product failing to meet the tolerance, and the total of the prices of the component parts (A1 + ⋯ + Ak):

(A1 + A2 + ⋯ + Ak)/A0 < 1  (9.39)
In the case of equation (9.39), if an assembled product fails to meet the standard, it must be discarded. Usually, the loss caused by rejection of an assembled product is several times the total price of the component parts. If it is four times, the process capability index of the characteristics of the assembled product will be twice that of the component parts. Of course, there is no rejection in such a case.
(A1 + A2 + ⋯ + Ak)/A0 > 1  (9.40)

With equation (9.40), the assembled product can be fixed or adjusted at a low cost. There may be many rejections, but that is acceptable because the cost of adjustment is low.

(A1 + A2 + ⋯ + Ak)/A0 = 1  (9.41)

The case of equation (9.41) seldom occurs. The cost of adjustment is just equal to the total of the prices of the parts. Distribution of the tolerance is justified only in this case.

Generally speaking, it is wrong to distribute the tolerance to each characteristic of a part so that the characteristic of the assembled product falls within the tolerance limits around the target value. The best way is to determine tolerances separately for each part, as shown in this chapter. The most common situations are those of equations (9.39) and (9.40).
Δ = (Δ0/b) √(A/A0)  (9.42)

where A is the loss due to the low-rank (part) characteristic failing to meet the specification, A0 the loss due to the high-rank characteristic failing to meet the specification, Δ0 the tolerance for the high-rank characteristic, and b the change in the high-rank characteristic due to a unit change in the low-rank characteristic.
The loss function for deterioration per year is obtained by calculating the variance due to deterioration. The variance, σ², of the objective characteristic is obtained by considering the deterioration of the low-rank characteristic over T years:

σ² = (1/T) ∫₀ᵀ (bβt)² dt = (1/T)(b²β²T³/3) = b²β²T²/3  (9.43)

where β is the coefficient of deterioration. Therefore,

L = (A0/Δ0²)(b²β²T²/3)  (9.44)

Solving for the deterioration coefficient β* that balances this loss against A* gives

β* = [Δ0/(bT)] √(3A*/A0)  (9.45)

A0 and Δ0 are the same as above, and A* and T are defined as follows:

A*: loss at the factory when the deterioration limit is not met
T: design life
Example

A quality problem occurs when the luminance changes by 50 lx, and the social loss for repair is $150:

Δ0: 50 lx
A0: $150
b: 0.8 lx/cd
A: $3
A*: $32
T: 20,000 hours

Using equations (9.42) and (9.45), the tolerance Δ for the initial luminous intensity and the tolerance β* for deterioration are given as follows:

Δ = (Δ0/b) √(A/A0) = (50/0.8) √(3/150) = 8.8 cd  (9.46)

β* = [Δ0/(bT)] √(3A*/A0) = {50/[(0.8)(20,000)]} √[(3)(32)/150] = 0.0025 cd/h  (9.47)

Therefore, the tolerance for the initial luminous intensity of the lamp is ±8.8 cd, and the tolerance for the coefficient of deterioration is less than 0.0025 cd/h. Even if the lamp has sufficient luminous intensity, the loss due to cleaning must be considered if the luminance is lost rapidly because of stains on the lamp.
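The two tolerance calculations can be sketched as follows (variable names illustrative; the deterioration result follows the arithmetic of equation (9.45)):

```python
# Sketch of the lamp example: initial tolerance, eq. (9.42), and the
# deterioration tolerance, eq. (9.45), computed from the stated values.

import math

delta0, A0, b, A, A_star, T = 50.0, 150.0, 0.8, 3.0, 32.0, 20_000.0

delta = (delta0 / b) * math.sqrt(A / A0)                      # cd
beta_star = (delta0 / (b * T)) * math.sqrt(3 * A_star / A0)   # cd per hour

print(round(delta, 1))       # 8.8
print(round(beta_star, 4))   # 0.0025
```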
10. Tolerance Design

10.1. Introduction
10.2. Tolerance Design for Nominal-the-Best and Smaller-the-Better Characteristics
10.3. Tolerance Design for Larger-the-Better Characteristics
10.4. Tolerance Design for Multiple Factors
10.1. Introduction

In most cases of parameter design, quality can be improved without cost increases. In tolerance design, quality is improved by upgrading raw materials or component parts, which increases cost. In other words, tolerance design is used to compare the total loss caused by quality and cost.

Example

Values of the linear thermal coefficient, b (percentage of expansion per 1°C), and the wear per year, θ (percentage of wear per year), of three materials, A1, A2, and A3, are as shown in Table 10.1. If the dimension changes by ±6%, there will be a problem in the market, and the loss, A0, in this case is $180. Among A1, A2, and
Table 10.1: Material characteristics

Material   b (%)   θ (%)   Price
A1         0.08    0.15    $1.80
A2         0.03    0.06    $3.50
A3         0.01    0.05    $6.30
A3, which is the best material? The standard deviation, σx, of the temperature condition, x, at which the material is used is 15°C, and the design life is 20 years.

Because the dimension of the product at the time of shipment is equal to the target value, m, at the standard temperature, the variance, σ², of the sum of the variability due to temperature and the variability due to deterioration is given by

σ² = b²σx² + θ²T²/3  (10.1)

The second term of equation (10.1) arises because the variance due to deterioration is given by the following equation, where the deterioration per year is θ and the design life is T years:

(1/T) ∫₀ᵀ (θt)² dt = θ²T²/3  (10.2)
A1: σ² = (0.08²)(15²) + (0.15²)(20²/3) = 4.440  (10.3)

A2: σ² = (0.03²)(15²) + (0.06²)(20²/3) = 0.6825  (10.4)

A3: σ² = (0.01²)(15²) + (0.05²)(20²/3) = 0.3558  (10.5)
Table 10.2 was obtained in this way; the total loss is the sum of price and quality. The optimum solution is material A2. The quality level, L, is obtained by the equation

L = (A0/Δ0²)σ² = (180/6²)σ²  (10.6)
Table 10.2: Tolerance design

Material   b (%)   θ (%)   Price   σ²       Quality Level   Total Loss
A1         0.08    0.15    $1.80   4.44     $22.20          $24.00
A2         0.03    0.06    $3.50   0.6825   $3.41           $6.91
A3         0.01    0.05    $6.30   0.3558   $1.78           $8.08
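The comparison in Table 10.2 can be sketched as (names illustrative):

```python
# Sketch of the Table 10.2 material comparison: total loss = price + quality,
# with quality level L = (A0 / Delta0**2) * var and
# var = b**2 * sx**2 + theta**2 * T**2 / 3.

A0, delta0, sx, T = 180.0, 6.0, 15.0, 20.0
materials = {                  # name: (b %, theta %, price $)
    "A1": (0.08, 0.15, 1.80),
    "A2": (0.03, 0.06, 3.50),
    "A3": (0.01, 0.05, 6.30),
}

def total_loss(b, theta, price):
    var = b**2 * sx**2 + theta**2 * T**2 / 3
    return price + (A0 / delta0**2) * var

best = min(materials, key=lambda name: total_loss(*materials[name]))
print(best)  # A2
print(round(total_loss(*materials["A2"]), 2))  # 6.91
```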
Because materials and parts cannot be adjusted later, the optimum solution lies approximately at the point where the quality level and the price balance. The optimum solution, A2, is used to determine equation (10.7), or more generally, the safety factor. It is important to balance quality and cost in advance for rational determination of the tolerance.

Minimizing the sum of production cost and quality is tolerance design for the selection of production methods and production tools. It is important to minimize the sum of the quality evaluation level, Q, and the product cost, P. The values of the loss, A, due to failure are the costs of materials, parts, and products after balancing quality and cost in the tolerance design. In this case, the safety factor, φ, is given by

φ = √(A0/A)  (10.7)

For larger-the-better characteristics, the quality level is

L = A0Δ0²(1/y²)  (10.8)
Example

Let's assume that both the strength and the price of a pipe are proportional to its cross section. A resin pipe breaks at a stress of 5000 kgf, and the loss in this case is $300,000. The strength per unit area, b, is set at 80 kgf/mm², and the price per unit area, a, is set at $40 per mm². For a cross-sectional area of x mm², the sum, LT, of the price and the quality is given as

LT = ax + A0Δ0²[1/(bx)²]  (10.9)

Setting the derivative of LT with respect to x equal to zero and solving for x yields the x that minimizes this value:

x = [2A0Δ0²/(ab²)]^(1/3)  (10.10)

  = [(2)(300,000)(5000²)/((40)(80²))]^(1/3) = 388 mm²  (10.11)

The corresponding price is ax = (40)(388) = $15,520  (10.12)

and the quality level is

L = A0Δ0²[1/(bx)²] = (300,000)(5000²){1/[(80)(388)]²} = $7800  (10.13)

so that LT = 15,520 + 7,800 = $23,320  (10.14)
Because the strength deteriorates in an actual situation, the loss is determined using
the following variability 2:
2
1 1
(bx)2 T
e
T
2t
dt
1
1
(e2T 1)
(bx)2 2T
(10.15)
where T is the design life and is the deterioration in one year. That is not calculated here. If the characteristic values uctuate when tested under various noise
conditions, the average of the square of the inverse of these values is taken as the
target value of larger-the-better characteristics.
The calculation above was made for one type of resin. The same calculation can be made for other types, and the one that gives the minimum total loss will be selected.
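The minimization above is easy to check numerically. Below is a minimal sketch in Python using the example's values (A₀ = $300,000, Δ₀ = 5000 kgf, b = 80 kgf/mm², a = $40/mm²); the function and variable names are illustrative only:

```python
# Total loss: price a*x plus quality loss A0*(Delta0/(b*x))**2, as in eq. (10.9)
A0, Delta0 = 300_000, 5000   # loss at failure ($) and failure stress (kgf)
a, b = 40, 80                # price ($/mm^2) and strength (kgf/mm^2) per unit area

def total_loss(x):
    """Sum of product price and quality loss for cross-sectional area x (mm^2)."""
    return a * x + A0 * (Delta0 / (b * x)) ** 2

# Closed-form optimum from eq. (10.10): x = (2*A0*Delta0**2 / (a*b**2))**(1/3)
x_opt = (2 * A0 * Delta0**2 / (a * b**2)) ** (1 / 3)
print(round(x_opt))  # about 388 mm^2, matching eq. (10.11)
```

Evaluating `total_loss` at nearby cross sections confirms that 388 mm² is indeed the minimum of the price-plus-quality sum.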
Example

In the design of an engine control circuit, the output response y is the number of ON signals per minute. The target is 600, and the specification is 600 ± 60. If the output is out of specification, the average repair cost, A₀, is $250. The optimum nominal values of the factors obtained through parameter design are shown in Table 10.3 with the existing tolerances. The tolerances of the resistors, transistors, and condensers can be upgraded through the respective cost increases shown in Table 10.4.
The steps for tolerance design are implemented as follows.
Table 10.3
Factors, nominal values, and tolerances

Factor             Nominal Value   Tolerance
P: Resistor R1     2200 Ω          ±5%
Q: Resistor R2     470 Ω           ±5%
R: Resistor R3     100 kΩ          ±5%
S: Resistor R4     10 kΩ           ±5%
T: Resistor R5     1500 Ω          ±5%
U: Resistor R6     10 kΩ           ±5%
V: Transistor T1   180 (hFE)       ±50
W: Transistor T2   180 (hFE)       ±50
X: Condenser C1    0.68            ±20%
Y: Condenser C2    10.00           ±20%
Z: Voltage         6.6 V           ±0.3 (±5%)
Table 10.4
Upgrade cost and tolerance

Component    Low Grade (base)   High Grade   Upgrade Cost
Resistor     ±5%                ±1%          $2.75
Transistor   ±50                ±25          $2.75
Condenser    ±20%               ±5%          $5.50
Table 10.5
Factor levels

Factor             Level 1   Level 2
P: Resistor R1     2090      2310
Q: Resistor R2     446.5     493.5
R: Resistor R3     95 k      105 k
S: Resistor R4     9.5 k     10.5 k
T: Resistor R5     1425      1575
U: Resistor R6     9.5 k     10.5 k
V: Transistor T1   130       230
W: Transistor T2   130       230
X: Condenser C1    0.544     0.816
Y: Condenser C2    8.00      12.00
Z: Voltage         6.3       6.9
Table 10.6
Experimental layout: L12 orthogonal array
(factors P–Z assigned to columns 1–11)

No.   No. of ON Signals/min
1     588
2     530
3     597
4     637
5     613
6     630
7     584
8     617
9     601
10    621
11    579
12    624
The correction factor and the total sum of squares are

CF = (588 + 530 + ⋯ + 624)²/12 = 7221²/12 = 4,345,236.75   (10.17)–(10.18)

S_T = 588² + 530² + ⋯ + 624² − CF = 9458.25   (10.19)

with degrees of freedom

f_T = 12 − 1 = 11   (10.20)–(10.21)

Therefore,

V_T = S_T/f_T = 9458.25/11 = 859.84   (10.22)
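These totals can be reproduced with a few lines of Python; the variable names mirror the equations, and the sketch is illustrative only:

```python
# CF, ST, and VT for the 12 responses of Table 10.6 -- eqs. (10.17)-(10.22)
data = [588, 530, 597, 637, 613, 630, 584, 617, 601, 621, 579, 624]
n = len(data)

CF = sum(data) ** 2 / n               # correction factor: 7221**2 / 12
ST = sum(y * y for y in data) - CF    # total sum of squares
VT = ST / (n - 1)                     # total variance, 11 degrees of freedom

print(CF, ST, round(VT, 2))  # 4345236.75 9458.25 859.84
```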
Calculate the level sums of all the factors as shown in Table 10.7. Then calculate the sum of squares for factor P:

S_P = (P₁² + P₂²)/n − CF = (3595.00² + 3626.00²)/6 − 4,345,236.75 = 80.08   (10.23)

where n is the number of data points at each factor level, in this case 6. The degrees of freedom of factor P are

f_P = (number of levels) − 1 = 2 − 1 = 1   (10.24)

and the variance of factor P is

V_P = S_P/f_P = 80.08/1 = 80.08   (10.25)
Perform similar calculations for the other factors. The results are shown in the
ANOVA table, Table 10.8.
Table 10.7
Response table: Level sums

Factor   Level 1   Level 2   |   Factor   Level 1   Level 2
P        3595.00   3626.00   |   V        3599.00   3622.00
Q        3517.00   3704.00   |   W        3635.00   3586.00
R        3559.00   3662.00   |   X        3618.00   3603.00
S        3593.00   3628.00   |   Y        3694.00   3527.00
T        3532.00   3689.00   |   Z        3640.00   3581.00
U        3651.00   3570.00   |
Table 10.8
ANOVA table

Source   f    S          V
P        1    80.08      80.08
Q        1    2914.08    2914.08
R        1    884.08     884.08
S        1    102.08     102.08
T        1    2054.08    2054.08
U        1    546.75     546.75
V        1    44.08      44.08
W        1    200.08     200.08
X        1    18.75      18.75
Y        1    2324.08    2324.08
Z        1    290.08     290.08
Total    11   9458.25    859.84
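Each factor's sum of squares follows the same pattern as equation (10.23). A short Python check against the level sums of Table 10.7 (names are illustrative) also confirms that the eleven factor effects exhaust the total sum of squares:

```python
# Factor sums of squares from two-level sums, each over n = 6 runs -- eq. (10.23)
CF = 7221 ** 2 / 12
level_sums = {
    "P": (3595, 3626), "Q": (3517, 3704), "R": (3559, 3662),
    "S": (3593, 3628), "T": (3532, 3689), "U": (3651, 3570),
    "V": (3599, 3622), "W": (3635, 3586), "X": (3618, 3603),
    "Y": (3694, 3527), "Z": (3640, 3581),
}
S = {f: (l1 ** 2 + l2 ** 2) / 6 - CF for f, (l1, l2) in level_sums.items()}

print(round(S["Q"], 2))            # 2914.08, as in Table 10.8
print(round(sum(S.values()), 2))   # 9458.25: the factor effects sum to S_T
```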
Since the effects (sums of squares) of factors P, V, and X are relatively small, pool them together to form S(e), the pooled error sum of squares:

S(e) = S_P + S_V + S_X = 80.08 + 44.08 + 18.75 = 142.91   (10.26)

with pooled degrees of freedom f(e) = 3 (10.27). The pooled error variance is

V(e) = S(e)/f(e) = 142.91/3 = 47.64   (10.28)
Next, calculate the pure sum of squares, S′, and the percent contribution, ρ (%), for each factor. For example,

S′_Q = S_Q − V(e)f_Q = 2914.08 − (47.64)(1) = 2866.44   (10.29)

ρ_Q = S′_Q/S_T = 2866.44/9458.25 = 30.31%   (10.30)

Using the loss function, the coefficient k and the current total loss per circuit are

k = A₀/Δ₀² = 250/60² = 0.0694   (10.31)

L_T(current) = kV_T = (0.0694)(859.84) = $59.67 per circuit   (10.32)–(10.33)

Step 4. For each factor, calculate the existing monetary loss using the loss function. For example, the loss due to factor Q is

L_Q(current) = L_T(current)ρ_Q = (59.67)(30.31%) = $18.09 per circuit   (10.34)
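The contribution and loss arithmetic condenses into a few lines; a sketch in Python, assuming the book's rounded coefficient k = 0.0694:

```python
# Percent contribution and monetary loss for factor Q -- eqs. (10.29)-(10.34)
ST, Ve = 9458.25, 47.64        # total sum of squares; pooled error variance
SQ, fQ = 2914.08, 1            # factor Q's sum of squares and d.f.
k, VT = 250 / 60 ** 2, 859.84  # loss coefficient A0/Delta0**2; total variance

SQ_pure = SQ - Ve * fQ         # pure sum of squares, eq. (10.29)
rhoQ = SQ_pure / ST            # contribution of Q as a fraction, ~0.3031
LT = round(k, 4) * VT          # current total loss per circuit, ~$59.67
LQ = LT * rhoQ                 # loss attributable to factor Q, ~$18.09

print(round(100 * rhoQ, 2), round(LT, 2), round(LQ, 2))
```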
Table 10.9
Rearranged ANOVA table

Source   f    S          V          S′         ρ (%)
P        1    80.08      80.08      (pooled)
Q        1    2914.08    2914.08    2866.44    30.31
R        1    884.08     884.08     836.44     8.84
S        1    102.08     102.08     54.44      0.58
T        1    2054.08    2054.08    2006.44    21.21
U        1    546.75     546.75     499.11     5.28
V        1    44.08      44.08      (pooled)
W        1    200.08     200.08     152.44     1.61
X        1    18.75      18.75      (pooled)
Y        1    2324.08    2324.08    2276.44    24.07
Z        1    290.08     290.08     242.44     2.56
(e)      3    142.91     47.64      524.03     5.54
Total    11   9458.25    859.84                100.00
the improvement ratio is (1%/5%)². Then the new loss due to factor Q after upgrading is

L_Q(new) = L_Q(current)(1%/5%)² = (18.09)(0.04) = $0.72 per circuit   (10.35)
Table 10.10
Current loss

Factor   ρ (%)   L(current) ($)
Q        30.31   18.09
R        8.84    5.27
S        0.58    0.35
T        21.21   12.66
U        5.28    3.15
W        1.61    0.96
Y        24.07   14.36
Therefore,

quality improvement = L_Q(current) − L_Q(new) = 18.09 − 0.72 = $17.37 per circuit   (10.36)

net gain = (quality improvement) − (upgrade cost) = 17.37 − 2.75 = $14.62 per circuit   (10.37)

In this case, it pays off to use the upgraded tolerance for factor Q. Similar calculations and justification are done for the other factors, as shown in Table 10.11.

Step 6. For the factors to be upgraded, calculate the total quality improvement (in loss), the total upgrade cost, and therefore the total net gain. Since factors Q, R, T, U, and Y are to be upgraded,

total quality improvement = 17.37 + 5.06 + 12.15 + 3.02 + 13.46 = $51.06 per circuit   (10.38)

total upgrade cost = (4)(2.75) + 5.50 = $16.50 per circuit   (10.39)

Therefore,

total net gain = (total quality improvement) − (total upgrade cost) = 51.06 − 16.50 = $34.56 per circuit   (10.40)
Table 10.11
Upgrading decision making

Factor   L(current)   L(new)   Quality Improvement   Upgrade Cost   Net Gain   Upgrade Decision
Q        $18.09       $0.72    $17.37                $2.75          $14.62     Yes
R        5.27         0.21     5.06                  2.75           2.31       Yes
S        0.35         0.01     0.34                  2.75           −2.41      No
T        12.66        0.51     12.15                 2.75           9.40       Yes
U        3.15         0.13     3.02                  2.75           0.27       Yes
W        0.96         0.24     0.72                  2.75           −2.03      No
Y        14.36        0.90     13.46                 5.50           7.96       Yes
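The decision logic of Table 10.11 amounts to upgrading a factor whenever its quality improvement exceeds its upgrade cost. A compact Python sketch, with the current losses from Table 10.10 and the improvement ratios implied by Table 10.4 (names are illustrative):

```python
# Upgrade decision: improve if (current loss - new loss) exceeds the upgrade cost.
# new loss = current loss * (new tolerance / old tolerance)**2, as in eq. (10.35)
factors = {  # factor: (current loss $, tolerance improvement ratio, upgrade cost $)
    "Q": (18.09, (1 / 5) ** 2, 2.75),    # resistor, 5% -> 1%
    "R": (5.27, (1 / 5) ** 2, 2.75),
    "S": (0.35, (1 / 5) ** 2, 2.75),
    "T": (12.66, (1 / 5) ** 2, 2.75),
    "U": (3.15, (1 / 5) ** 2, 2.75),
    "W": (0.96, (25 / 50) ** 2, 2.75),   # transistor, +/-50 -> +/-25
    "Y": (14.36, (5 / 20) ** 2, 5.50),   # condenser, 20% -> 5%
}
upgraded = []
for name, (L_cur, ratio, cost) in factors.items():
    net_gain = (L_cur - L_cur * ratio) - cost
    if net_gain > 0:
        upgraded.append(name)
print(upgraded)  # ['Q', 'R', 'T', 'U', 'Y'], matching Table 10.11
```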
Part IV
Signal-to-Noise
Ratio
Taguchi's Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright © 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
11
Introduction to the Signal-to-Noise Ratio

11.1. Definition
11.2. Traditional Approach for Variability
11.3. Elements of the SN Ratio
11.4. Benefits of Using SN Ratios
    Machining Technology
    Injection Molding Process
    Fuel Delivery System
    Engine Idle Quality
    Wiper System
    Welding
    Chemical Reactions
    Grinding Process
    Noise Reduction for an Intercooler
    Wave Soldering
    Development of an Exchange-Coupled Direct-Overwrite Magnetooptical Disk
    Equalizer Design
    Fabrication of a Transparent Conducting Thin Film
    Low-Pass Filter Design
    Development of a Power MOSFET
    Design of a Voltage-Controlled Oscillator
    Optimization of an Electrical Encapsulant
    Ink Formulation
References
11.1. Definition
The signal-to-noise ratio (SN ratio) is a measurement scale that has been used in
the communication industry for nearly a century. A radio measures the signal or
the wave of voice transmitted from a broadcasting station and converts the wave
into sound. The larger the voice sent, the larger the voice received. In this case,
the magnitude of the voice is the input signal, and the voice received is the output.
Actually, the input is mixed with the audible noise in the space. Good measuring
equipment catches the signal and is not affected by the influence of noise.
The quality of a measurement system is expressed by the ratio of signal to noise. The SN ratio is expressed in decibels (dB). For example, 40 dB means that the power of the signal is 10,000 times the power of the noise; if the SN ratio of a transistor radio is 45 dB, the power of the signal is about 30,000 times that of the noise. The larger the SN ratio, the better the quality.
Taguchi has generalized the concept of SN ratio as used in the communication
industry and applied it for the evaluation of measurement systems as well as for
the function of products and processes.
η = (power of signal)/(power of noise) = (sensitivity)²/(variability)² = β²/σ²   (11.1)
As seen from equation (11.1), the SN ratio is the ratio of the square of sensitivity to the square of variability. Its inverse is therefore the variance per unit input. In the loss function, the loss is proportional to variance; therefore, monetary evaluation is possible.
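The decibel figures quoted above follow directly from the definition dB = 10 log₁₀(power ratio); a quick illustrative check in Python:

```python
import math

def db_to_power_ratio(db):
    """Signal/noise power ratio corresponding to an SN ratio in decibels."""
    return 10 ** (db / 10)

print(db_to_power_ratio(40))         # 10000.0 (the 40-dB case)
print(round(db_to_power_ratio(45)))  # 31623, i.e., roughly 30,000 (the 45-dB case)
print(10 * math.log10(10_000))       # 40.0, the inverse conversion
```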
Figure 11.1
Three elements of the SN ratio (signal levels M1, M2, M3)
In the early days of quality engineering applications, research for robustness was
conducted by discovering the interactions between control and noise factors.
When there is a significant interaction between a control and a noise factor, it
shows that different levels of a control factor behave differently under different
noise factors. If this is true, there is a possibility that robustness can be improved.
Such a study method was called direct product design, where the experiment was laid
out in such a way that every interaction between a control and a noise factor may
be calculated. But such calculations were so tedious that many people had to work
full-time on them. By using the SN ratio, such calculations are no longer necessary.
Instead, a comparison of SN ratios between control factor levels is made, and the
one with a higher SN ratio is selected as the robust condition.
Simplification of Interaction Calculations
This is one of the most important benefits of using the SN ratio in product or process design. Using traditional methodology, engineers tend to design a product or a process by meeting the target first instead of by maximizing robustness first. This hitting-the-target-first approach is very inefficient, for the following reasons.
After the target is met, the engineer has to make sure that the product works
under noise conditions: for example, temperature.
If the product does not work at a certain extreme temperature, the engineer has to change or adjust the design parameters to meet the target. But while studying the effects of temperature, the engineer needs to keep other noise factors, such as humidity, fixed. If the product does not work in a certain humidity environment, the engineer varies some parameter levels and changes them again to meet the target. In the first trial, where the aim was to meet the target at a certain temperature, the control factor levels selected would probably differ from those selected in the second trial, where the aim was to meet the target under a certain humidity condition.
To meet the target under a certain temperature as well as under a certain
humidity condition, the study must be started again from the beginning. The engineer has to do the same for any other noise conditions that ought to be considered. Generally, there are a lot of noise factors. Not only is this type of approach
tedious and time consuming, it is not always successful. Designing in this fashion is similar to solving many simultaneous equations, not by calculation but by trial-and-error experimentation on hardware.
In contrast, use of the SN ratio enables the engineer to maximize robustness
simply by selecting the levels of control factors that give the largest SN ratio, where
many noise factors are usually compounded into one. After robustness is achieved,
the next step is to adjust the mean to the target by adjusting the level of a control
factor, the factor having a maximum effect on sensitivity and a minimum effect on
robustness. In this way it is possible that meeting the target will have scant effect
on robustness. Very often, the efficiency of research can be improved by a factor
of 10, 100, or 1000.
Efficient Robust Design Achievement
Linearity, one of the three elements of an SN ratio discussed earlier, is very important for simplifying adjustment in product or process design and for calibration in measurement systems. Because the relationship between the value of an SN ratio and linearity is so simple and clear, adjusting the output of control systems and calibrating measurement systems are easier than when traditional methods are used. When the input/output relationship is not linear, deviation from linearity
Efficient Adjustment or Calibration
Efficient Evaluation of Measuring Systems
Reduction of Product/Process Development Cycle Time
Efficient Research for a Group of Products: Robust Technology Development
Traditionally, research is conducted every time a new product is planned. But for a group of products having the same function within a certain output range, it would be wasteful to conduct research for each product. Instead, if we could establish an appropriate SN ratio based on the ideal function of the group of products and maximize it, the design of a specific product with a certain output target could be obtained easily by adjusting the input signal. This approach, called robust technology development, is a breakthrough in quality engineering. It is believed that such an approach will soon become the core of quality engineering.
the ideal function, and it takes a lot of discussion. The ideal function differs from product to product and from one system to another, and there are no generalizable approaches or solutions; ideal functions must be discussed case by case.

Generally, it is desirable that the ideal function be identified based on the product function, which is energy related, since energy has additivity. This type of ideal function is called a generic function. A generic function can be explained by physics, for example, by Ohm's law or Hooke's law. In some cases, however, the data to be collected for the study of such relationships are not available, due to a lack of measurement technology. In such cases, a function that describes an objective may be used; this is called the objective function. Many published cases are available for engineers to use as references in establishing an ideal function. Following are some examples.
Machining Technology

In a machining process, the dimensions of an actual product are traditionally measured for analysis. To reduce error and to optimize the process over the range of products the machine can process, it is recommended that test pieces be used for study.

In a study entitled "NC Machining Technology Development" [1], the concept of transformability was developed. The ideal function in this case was that the product dimension be proportional to the programmed dimension:

y = βM

where y is the product dimension, β a constant, and M the programmed dimension. The function of machining was also studied using different ideal functions [2]. There were two functions in that study: function 1, y = βM, where y is the square root of total power consumption and M the square root of time (per unit product); and function 2, y = βM, where y is the square root of total power consumption and M the square root of the total amount removed. The objective of the first function is to improve productivity; that of the second is to improve machining quality.
Injection Molding Process

In the injection molding process, quality characteristics such as reject rate, flash, or porosity have traditionally been measured. In a study of a carbon-fiber-reinforced plastic injection molding process [3], the following function was studied: y = βM, where y is the product dimension and M the mold dimension. In a study of an injection molding experiment [4], the function above was used together with another function, y = βM, where y is the deformation and M the load. In this case study, the former function was the objective-related function, whereas the latter belonged to the generic functions.
Fuel Delivery System

In a study of the robustness of fuel delivery systems [5], the ideal function was based on pump efficiency:

pump efficiency = QP/VI

where Q is the flow rate, P the pressure, V the voltage, and I the current.

Engine Idle Quality
Idle quality is a major concern on some motor vehicles. It contributes to the vibration felt by vehicle occupants. A commonly used process for evaluating engine
idle quality involves running an engine at idle for several cycles and collecting
basic combustion data, such as burn rates, cylinder pressure, air/fuel ratio, and
actual ignition timing.
In a study conducted by Ford Motor Company [6], it was considered that indicated mean effective pressure (IMEP) is an approximately linear function of fuel flow at idle when the air/fuel ratio, ignition timing, and some of the engine combustion parameters are held constant under steady operation: y = βM, where y is the IMEP and M the fuel flow.
Wiper System
The function of a wiper system is to clear precipitation (rain, snow, etc.) and
foreign material from a windshield to provide a clear zone of vision. An annoying
phenomenon that affects the performance of a windshield wiper system is wiper
chatter. Chatter occurs when the wiper blade does not take the proper set and
skips across the windshield during operation, potentially causing deterioration
in both acoustic and wiping quality.
The characteristics measured traditionally to improve the quality of a wiper system are chatter, clear vision, uniformity of the wipe pattern, quietness, and life. The ideal function in one study [7] was defined as follows: the actual time for a wiper to reach a fixed point on the windshield in a cycle should be the same as the theoretical time (ideal time) for which the system was designed, expressed as

y = βM

where y is the actual time and M the theoretical time (1/rpm of the motor).
Welding
Chemical Reactions
Figure 11.2
Dynamic operating window (p: fraction unreacted; q: fraction overreacted; T: time), with reaction rates β1 defined from 1/p and β2 from 1/(1 − p − q)
Grinding Process

The dry developer used for electrophotographic copiers and printers consists of toner and carrier. The particle size of the developer is one of the physical properties that has a strong influence on image quality.

Size distribution typically was analyzed as classified data; quite often, yield was used for analysis. In a study on improving uniformity and adjusting particle size conducted by the Minolta Camera Company [11], the ideal function was y = β/M, where y is the particle size and M the grinding energy. In a later study [12] conducted by the same company, a different ideal function was used. Since particles are ground by high-pressure air, the collision of particles in the grinding process is similar to the collision between molecules in chemical reactions.

In this case, the portion with a particle size larger than the upper specification limit was treated as unreacted, and the portion with a particle size smaller than the lower specification limit was treated as overreacted. Ideal functions similar to those for chemical reactions (i.e., the total reaction and side reactions) were used to optimize the grinding process.
Noise Reduction for an Intercooler

Intercoolers are used with turbochargers to increase the output power of car engines. The output power of an engine is determined by the amount of mixed gas sucked into the engine cylinder: the more gas sucked in, the higher the feeding efficiency, achieved by compressing the mixed air and consequently raising the air pressure above atmospheric pressure.

When the airflow resistance inside the cooler is high, the cooling efficiency is reduced and noise is often noted. Before this study [13], the problem-solving approach had been to detect causes by measuring audible noise level, but without success.

From the viewpoint of product function, the airflow resistance should be low and the flow rate uniform. The engine output power varies with driving speed, and the airflow changes as the accelerator is pushed down. Therefore, it is ideal that the airflow rate in the intercooler be proportional to the amount of air. As the rpm of the turbocharger varies under different driving conditions, it is also ideal for the airflow rate to vary in proportion to the rpm of the turbocharger. The ideal function was defined as y = βMM*, where y is the airflow speed of the intercooler, M the amount of airflow, and M* the rpm of the turbocharger.
Development of an Exchange-Coupled Direct-Overwrite Magnetooptical Disk [15]
Equalizer Design [16]

One of the most important quality items for amplifiers is that the gain be flat across frequency. However, it is not quite flat for amplifiers with a wide frequency range. To make the response flat, a circuit called an equalizer is connected to compensate for the declining gain.

Using two-stage optimization, the robustness of the equalizer was optimized; the control factors that least affect the SN ratio but are highly sensitive were used for tuning so that the gain became flat. The ideal function is that the output voltage (a complex number) is proportional to the input voltage.
Fabrication of a Transparent Conducting Thin Film [17]
Low-Pass Filter Design [18]

In the design of ac circuits, root-mean-square values have traditionally been used. By using complex numbers in parameter design, the variability of both the amplitude and the phase of the output can be reduced. Therefore, a system whose input and output are expressed as sine waves should use complex numbers.
Development of a Power MOSFET [19]
Design of a Voltage-Controlled Oscillator [20]

Voltage-controlled oscillators are used in wireless communication systems. The performance of these oscillators is affected by environmental conditions and by the variability of component parts on printed circuit boards. Traditionally, quality characteristics such as transmitting frequency or output power were used for evaluation. In this study, the function y = βMM* was used, where y is the output (a complex number), M the dc voltage, and M* the ac voltage. A 7.27-dB gain was obtained after tuning.
Optimization of an Electrical Encapsulant [21]
Ink Formulation

In the development of ink used in digital printers, characteristics such as permeability, stability, flow value, and flexibility have commonly been used, but results based on these characteristics were not always reproducible.

In a digital printer, a document is scanned and the image is converted to electric signals. The signals are then converted to heat energy to punch holes in the resin film, which forms the image on paper. The ideal function for ink development is that the dot area of the master be proportional to the dot area printed.
There are two types of data: continuous variables and classified attributes. Voltage, current, dimension, and strength belong to the continuous-variables category. In quality engineering, classified attributes are divided into two groups: two-class classified attributes and three-or-more-class classified attributes. The SN ratio equations differ for the different types.

From the quality engineering viewpoint, it is recommended to avoid using attribute data, for the following reasons. First, attribute data are less informative. For example, if the passing score in school is 60, students having scores of 60 and 100 are classified equally as having passed, whereas a student with a score of 59 is classified as having failed. There is a big difference between the two students who passed, but little difference between a student with a score of 60 and one with a score of 59.
Consider a welding experiment with two factors, A and B, and the following yields:

A1B1: yield 40%     A2B1: yield 70%     A1B2: yield 80%

From the results, one might conclude that the best condition would be A2B2 and that the yield might be over 90%. But one might be surprised if a confirmatory experiment under condition A2B2 resulted in less than 40%. This shows the existence of an interaction between A and B. Let's assume the following levels of A and B:

A1: low current          A2: high current
B1: short welding time   B2: long welding time
Condition A1B1 was low current and short welding time; too little energy was put into the system, and the result was no weld. Conditions A2B1 and A1B2 have more energy going in, and the yields are higher. But under condition A2B2, high current and long welding time, the yield was low because too much energy was put into the system and the result was burn-through. If 0–1 (pass/fail) data were used, these two extremely opposite conditions would be considered equally bad. It is also seen that the superiority or inferiority of factor A is inconsistent, depending on the condition of the other factor, B, and vice versa. Interactions between control factors indicate inconsistency of conclusions. We should avoid using 0–1 data. In some cases, however, continuous data are unavailable because of a lack of technology; it is then necessary to convert the data so that interactions may be avoided.

Following is an example of data collection. In the case of welding, samples are collected from each welding condition. From each sample, load and deformation are considered as the input and output, respectively. The SN ratio is calculated from this relationship. There will be SN ratios for A1B1, A2B1, A1B2, and A2B2. The effect of A is compared by the average SN ratios of (A1B1, A1B2) and (A2B1, A2B2).
Classification Based on Intention
There are two types of SN ratios, based on intention: passive and active. The SN
ratio used in measurement is called passive. An object, the input signal, is measured, and from the result, the true value of the input signal is estimated. That is
why the SN ratio is called passive. For the passive type we have:
Input: true value to be measured
Output: measured value
In the active type, a signal is put into a system to change the output:
Input: intention
Output: result
For example, a steering wheel is turned to change the direction of a car. The
output is changed by intention, so it is an active output. In control systems or
product/process design, the parameters for tuning are of the active type. The
difference is in the way that the signal factor is considered. Although the intentions
are different, the equations and calculation of the SN ratio used are the same.
Dynamic SN ratios are used to study either multiple targets or a certain range of
targets, whereas nondynamic SN ratios are used to study a single target. The former includes three aspects: linearity, sensitivity, and variability. The latter includes
two aspects: sensitivity and variability.
In earlier stages of quality engineering, nondynamic characteristics were commonly used. There are three types of nondynamic characteristics: nominal-the-best,
smaller-the-better, and larger-the-better. Nominal-the-best characteristics are used to hit
the target after reducing variability. The output is ideally equal to the target.
Smaller-the-better characteristics are used to reduce both average and variability. The
output is ideally equal to zero. Larger-the-better characteristics are used to maximize
average and minimize variability. Ideally, the output is equal to innity.
Dynamic SN ratios are more powerful and efficient; therefore, it is recommended that one use them as much as possible.
Classification Based on Input and Output

Case   Input                  Output
1      Continuous variables   Continuous variables
2      Continuous variables   Digital
3      Digital                Continuous variables
4      Digital                Digital
Case 1 is the type that occurs most frequently. Driving a car, for example, the
steering wheel angle change is the input and the turning radius is the output.
References
1. K. Ueno et al., 1992. NC machining technology development. Presented at the
American Supplier Institute Symposium.
2. Kazuyoshi Ichikawa et al., 2000. Technology development and productivity improvement of a machining process through the measurement of energy consumed during on and off mode. Presented at the American Supplier Institute Symposium.
3. K. Ueno et al., 1992. Stabilization of injection molding process for carbon-fiber-reinforced plastic. Presented at the American Supplier Institute Symposium.
4. A. Obara et al., 1997. A measurement of casting materials for aircraft using quality engineering approaches. Quality Engineering Forum, Vol. 5, No. 5.
5. J. S. Colunga et al., 1993. Robustness of fuel delivery systems. Presented at the
American Supplier Institute Symposium.
6. Waheed D. Alashe et al., 1994. Engine idle quality robustness using Taguchi methods. Presented at the American Supplier Institute Symposium.
7. M. Deng, et al., 1996. Reduction of chatter in a wiper system. Presented at the
American Supplier Institute Symposium.
8. Mitsugi Fukahori, 1995. Parameter design of laser welding for lap edge joint piece.
Journal of Quality Engineering Forum, Vol. 3, No. 2.
9. Shinichi Kazashi et al., 1996. Optimization of spot welding conditions by generic
function. Journal of Quality Engineering Forum, Vol. 4, No. 2.
10. Yoshikazu Mori et al., 1995. Optimization of the synthesis conditions of a chemical
reaction. Journal of Quality Engineering Forum, Vol. 3, No. 1.
11. Hiroshi Shibano et al., 1994. Establishment of particle size adjusting technique for a fine grinding process for developer. Journal of Quality Engineering Forum, Vol. 2, No. 3.
12. Hiroshi Shibano et al., 1997. Establishment of control technique for particle size in a fine grinding process for developer. Journal of Quality Engineering Forum, Vol. 5, No. 1.
13. Hisahiko Sano et al., 1997. Airflow noise reduction of an intercooler system. Presented at the Quality Engineering Forum Symposium.
14. S. Kazashi et al., 1993. Optimization of a wave soldering. Journal of Quality Engineering, Vol. 1, No. 3.
15. Tetsuo Hosokawa et al., 1994. Application of QE for development of exchange-coupled direct-overwrite MO disk. Journal of Quality Engineering Forum, Vol. 2, No. 2.
Other Ways of Classifying SN Ratios
12
SN Ratios for Continuous Variables

12.1. Introduction
12.2. Zero-Point Proportional Equation
12.3. Reference-Point Proportional Equation
12.4. Linear Equation
12.5. Linear Equation Using a Tabular Display of the Orthogonal Polynomial Equation
12.6. When the Signal Factor Level Interval Is Known
12.7. When Signal Factor Levels Can Be Set Up
12.8. When the Signal Factor Level Ratio Is Known
12.9. When the True Values of Signal Factor Levels Are Unknown
12.10. When There Is No Signal Factor (Nondynamic SN Ratio)
    Nominal-the-Best, Type I
    Nominal-the-Best, Type II
    Smaller-the-Better
    Larger-the-Better
    Nondynamic Operating Window
References
12.1. Introduction
From a quality engineering viewpoint, continuous variables are far more important than classified attributes, as described in Chapter 11. The SN ratios of continuous variables are used much more frequently than those of classified attributes. SN ratios for continuous variables are classified as follows:
Table 12.1
Signal factor and data

Signal:   M1      M2      M3      ⋯   Mk
Data:     y11     y21     y31     ⋯   yk1
          y12     y22     y32     ⋯   yk2
          ⋮       ⋮       ⋮           ⋮
          y1r0    y2r0    y3r0    ⋯   ykr0
Total:    y1      y2      y3      ⋯   yk
(signal factor) and the output (response data). The following are typical functional relations.

1. Zero-point proportional equation. In this case, y = 0 when M = 0:

   y = βM

2. Reference-point proportional equation. In this case, the ideal relationship is expressed by a proportional equation that goes through a particular point, M = Ms:

   y − ys = β(M − Ms)

3. Linear equation. In this case, the ideal relationship is expressed by a linear equation between M and y:

   y = m + β(M − M̄)

In Table 12.1, y11, y12, ..., y1r0 are the results of measuring M1; y21, y22, ..., y2r0 the results of measuring M2; and yi is the total of the measurements at Mi.
Figure 12.1
Voltage–current relationships: (a), (b), (c)
Table 12.2
Readings from a measurement system

Signal Factor:   M1      M2      ⋯   Mk
Reading:         y11     y21     ⋯   yk1
                 y12     y22     ⋯   yk2
                 ⋮       ⋮           ⋮
                 y1r0    y2r0    ⋯   ykr0
Total:           y1      y2      ⋯   yk
In the case of a zero-point proportional equation, when the signal is zero, the output must be zero. In a measurement system, if a sample to be measured contains no ingredient, the measured result must read zero. Here the ideal function is expressed as

y = βM   (12.1)

where y is the output, β the slope of the response line (the regression coefficient), and M the signal. In reality, there are errors in observation, and the data in Table 12.2 are expressed by

yij = βMi + eij   (12.2)

The slope is estimated as

β = (1/(r0r))(M1y1 + M2y2 + ⋯ + Mkyk)   (12.3)

where r0 is the number of readings at each signal level and

r = M1² + M2² + ⋯ + Mk²   (12.4)

The total variation, with kr0 degrees of freedom, is

ST = Σi Σj yij²   (12.5)

and the variation due to the slope is

Sβ = (1/(r0r))(M1y1 + M2y2 + ⋯ + Mkyk)²,  f = 1   (12.6)

The error variation, with kr0 − 1 degrees of freedom, is

Se = ST − Sβ   (12.7)

The error variance Ve is the error variation divided by its degrees of freedom:

Ve = Se/(kr0 − 1)   (12.8)

Since the SN ratio is the ratio of β² to the error variance σ² (12.9)–(12.10), it is estimated in decibels as

η = 10 log [ ((1/(r0r))(Sβ − Ve)) / Ve ]  dB   (12.11)

and the sensitivity, an estimate of β², as

S = (1/(r0r))(Sβ − Ve)   (12.12)
Example 1 [1]

In this example, the SN ratio of a newly developed displacement gauge is calculated. The displacement values used as signals were measured with another displacement gauge that is believed to be accurate. The readings given in Table 12.3 were obtained by measuring twice, by different testing personnel. A zero-point proportional equation was assumed, since the output, y, of the displacement gauge is zero when the displacement, M, is zero.

The ANOVA is done as follows:

ST = 65² + 74² + ⋯ + 197² = 131,879   (f = 6)   (12.13)

Sβ = [(30)(139) + (60)(283) + (90)(405)]²/((2)(30² + 60² + 90²)) = 57,600²/25,200 = 131,657.14   (f = 1)   (12.14)

Se = ST − Sβ = 131,879 − 131,657.14 = 221.86   (f = 5)   (12.15)

Ve = Se/5 = 221.86/5 = 44.37   (12.16)–(12.17)
Table 12.3
Displacement and displacement gauge data

                    Reading (mV)
Displacement M (μm)   30     60     90
R1                    65     136    208
R2                    74     147    197
Total                 139    283    405
The SN ratio is

η = 10 log [ ((1/(r0r))(Sβ − Ve)) / Ve ]
  = 10 log [ ((131,657.14 − 44.37)/25,200) / 44.37 ]
  = 10 log 0.1177 = −9.29 dB   (12.18)–(12.21)
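The whole calculation of equations (12.13)–(12.21) fits in a small function. A sketch in Python (names are illustrative) that reproduces the −9.29 dB above:

```python
import math

def sn_zero_point(M, rows):
    """SN ratio in dB for the zero-point proportional equation, eqs. (12.3)-(12.11).

    M: the k signal levels; rows: one list of k readings per repetition (r0 rows).
    """
    r0, k = len(rows), len(M)
    r = sum(m * m for m in M)                               # eq. (12.4)
    totals = [sum(row[i] for row in rows) for i in range(k)]
    L = sum(m * y for m, y in zip(M, totals))               # M1*y1 + ... + Mk*yk
    ST = sum(y * y for row in rows for y in row)            # total variation
    S_beta = L * L / (r0 * r)                               # eq. (12.6)
    Ve = (ST - S_beta) / (k * r0 - 1)                       # eqs. (12.7)-(12.8)
    return 10 * math.log10((S_beta - Ve) / (r0 * r) / Ve)   # eq. (12.11)

# Example 1, Table 12.3: displacements 30/60/90, two repetitions
eta = sn_zero_point([30, 60, 90], [[65, 136, 208], [74, 147, 197]])
print(round(eta, 2))  # -9.29
```

With the cadmium data of Table 12.5 (taking the five signal levels as 1 through 5, as the expected-value column of Table 12.6 implies), the same function gives about 19.36 dB for A1 and roughly 16.0 dB for A2, in line with equations (12.25) and (12.26).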
Example 2 [2]

In Example 1, the SN ratio of one product (or one condition) was calculated. When there is a control factor with two levels (two conditions), two SN ratios are calculated for comparison. In the present example, two quantitative cadmium microanalysis methods were compared, one of them dithizone extraction–atomic absorption spectrophotometry.
Table 12.4
ANOVA table for zero-point proportional equation: Displacement gauge
Factor
E(V )
131,657.14
131,657.14
2 02
221.86
44.37
Total
131,879.00
246
Taking the signal levels as M1 = 1, M2 = 2, ..., M5 = 5 (equal unit intervals), the calculation for A1 is:

ST = 12.0² + 11.0² + ⋯ + 54.0² = 12,907.50   (f = 10)   (12.22)

Sβ = (1/110)[(1)(23.0) + (2)(45.5) + (3)(67.0) + (4)(86.5) + (5)(106.0)]²
   = 1191.0²/110 = 12,895.28   (f = 1)   (12.23)

where r0 r = (2)(1² + 2² + ⋯ + 5²) = 110.

Se = ST − Sβ = 12.22   (f = 9)   (12.24)

Ve = 12.22/9 = 1.36

η(A1) = 10 log [(1/110)(12,895.28 − 1.36)/1.36] = 10 log(86.2) = 19.36 dB   (12.25)

Similarly, from the ANOVA for A2 (Table 12.7),

η(A2) = 10 log [(1/110)(1539.38 − 0.35)/0.35] = 10 log(39.97) = 16.02 dB   (12.26)

A1 is therefore better than A2 by about 3.3 dB.
Table 12.5
Quantitative cadmium microanalysis data

               M1      M2      M3      M4      M5      Total
A1    R1       12.0    23.0    33.0    42.5    52.0
      R2       11.0    22.5    34.0    44.0    54.0
      Total    23.0    45.5    67.0    86.5    106.0   328.0
A2    R1       3.5     7.0     10.5    14.5    18.0
      R2       4.0     7.5     12.0    15.5    19.5
      Total    7.5     14.5    22.5    30.0    37.5    112.0
Table 12.6
ANOVA table of quantitative cadmium microanalysis data of A1

Factor    f     S            V            E(V)
β         1     12,895.28    12,895.28    σ² + (2)(1² + 2² + ⋯ + 5²)β²
e         9     12.22        1.36         σ²
Total     10    12,907.50
When a zero point cannot be set, a reference point Ms is used, and the ideal function becomes

y − ys = β(M − Ms)   (12.27)

Table 12.8 shows the data after reference-point calibration. As noted above, first the average of the r0 readings at the reference point, ys1, ys2, ..., ysr0, is calculated:

ys = (ys1 + ys2 + ⋯ + ysr0)/r0   (12.28)
Table 12.7
ANOVA table of quantitative cadmium microanalysis data of A2

Factor    f     S          V          E(V)
β         1     1539.38    1539.38    σ² + (2)(1² + 2² + ⋯ + 5²)β²
e         9     3.12       0.35       σ²
Total     10    1542.50
Table 12.8
Data after reference-point calibration

Signal:    M1 − Ms      M2 − Ms      ...   Mk − Ms
Reading:   y11 − ys     y21 − ys     ...   yk1 − ys
           y12 − ys     y22 − ys     ...   yk2 − ys
           ...
           y1r0 − ys    y2r0 − ys    ...   ykr0 − ys
Total:     y1           y2           ...   yk
The slope or linear coefficient, β, is also called the sensitivity coefficient. It is given as

β = (1/(r0 r))[y1(M1 − Ms) + y2(M2 − Ms) + ⋯ + yk(Mk − Ms)]   (12.29)

where

r = (M1 − Ms)² + (M2 − Ms)² + ⋯ + (Mk − Ms)²   (12.30)

The total variation of the calibrated data is

ST = Σi Σj (yij − ys)²   (f = kr0)   (12.31, 12.32)

The variation of the proportional term is

Sβ = (1/(r0 r))[y1(M1 − Ms) + y2(M2 − Ms) + ⋯ + yk(Mk − Ms)]²   (f = 1)   (12.33)

and the error variation is

Se = ST − Sβ   (f = kr0 − 1)   (12.34)
Example [1]

The SN ratio of an analyzer that measures olefin density was calculated. Standard solutions of four different densities were used. The analyzer was evaluated using the reference-point proportional equation, with the 5% standard solution as the reference point. The data in Table 12.9 were obtained from measurements conducted by two different testing personnel.
Table 12.9
Measured olefin density data

Density, M (%):    5      10     15     20
Reading    R1:     5.2    10.3   15.4   20.1
           R2:     5.0    10.1   15.5   20.3
The reference-point average is

ys = (5.2 + 5.0)/2 = 5.1   (12.35)

and after subtracting this value from each datum, reference-point calibration was done. The result is shown in Table 12.10.
The ANOVA is done as follows:

ST = 0.1² + (−0.1)² + ⋯ + 15.2² = 722.35   (f = 8)   (12.36)

r = 0² + 5² + 10² + 15² = 350   (12.37)

Sβ = (1/(2 × 350))[(0)(0.0) + (5)(10.2) + (10)(20.7) + (15)(30.2)]² = 711²/700 = 722.1729   (f = 1)   (12.38)

Se = ST − Sβ = 0.1771   (f = 7)   (12.39)

Ve = Se/7 = 0.1771/7 = 0.0253   (12.40)
Table 12.10
Olefin density data after reference-point calibration

Density, M (%):        5       10      15      20
M − Ms:                0       5       10      15
Reading after
calibration    R1:     0.1     5.2     10.3    15.0
               R2:    −0.1     5.0     10.4    15.2
Total:                 0.0     10.2    20.7    30.2
The SN ratio is

η = 10 log [(1/700)(722.1729 − 0.0253)/0.0253] = 10 log(40.77) = 16.10 dB   (12.41, 12.42)

The sensitivity, S, is

S = 10 log (1/700)(722.1729 − 0.0253) = 10 log(1.0316) = 0.14 dB   (12.43–12.45)
Table 12.11
ANOVA table of the reference-point proportional equation of olefin density

Factor    f    S           V           E(V)
β         1    722.1729    722.1729    σ² + r0 r β²
e         7    0.1771      0.0253      σ²
Total     8    722.3500
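The reference-point calibration and the resulting SN ratio can be checked with a short script. This is a sketch using the Table 12.9 data; variable names are ours.

```python
import math

# Reference-point calibration for the olefin-analyzer data of Table 12.9,
# with the 5% standard solution as the reference point.
M  = [5, 10, 15, 20]
R1 = [5.2, 10.3, 15.4, 20.1]
R2 = [5.0, 10.1, 15.5, 20.3]

Ms = 5
ys = (R1[0] + R2[0]) / 2                    # (12.35): 5.1
Mc = [m - Ms for m in M]                    # calibrated signal levels
c1 = [round(y - ys, 1) for y in R1]         # calibrated readings (Table 12.10)
c2 = [round(y - ys, 1) for y in R2]

r0, r = 2, sum(m * m for m in Mc)           # r = 350  (12.30)
totals = [a + b for a, b in zip(c1, c2)]
ST = sum(y * y for y in c1 + c2)                             # (12.31)
Sb = sum(m * y for m, y in zip(Mc, totals)) ** 2 / (r0 * r)  # (12.33)
Se = ST - Sb                                                 # (12.34)
Ve = Se / (len(M) * r0 - 1)
eta = 10 * math.log10((Sb - Ve) / (r0 * r) / Ve)
```

The script reproduces ST = 722.35, Sβ = 722.1729, and Ve = 0.0253 from the ANOVA above.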
Figure 12.2
Three different cases of equations: (a) zero-point proportional equation, y = βM; (b) reference-point proportional equation, y − ys = β(M − Ms); (c) linear equation, y = m + β(M − M̄). In each case the response is shown over signal levels M1 through M4.
In the case of the linear equation, the data are expressed by

yij = m + β(Mi − M̄) + eij   (12.46)

where m is the average and e is the error. Parameters m and β are estimated by

m = ȳ   (12.47)

β = (1/(r0 r))[y1(M1 − M̄) + y2(M2 − M̄) + ⋯ + yk(Mk − M̄)]   (12.48)

where

M̄ = (M1 + M2 + ⋯ + Mk)/k   (12.49)

r = (M1 − M̄)² + (M2 − M̄)² + ⋯ + (Mk − M̄)²   (12.50)

The decomposition of variation is

ST = Σ yij²   (f = kr0)   (12.51)

Sm = (Σ yij)²/(kr0)   (f = 1)   (12.52)

Sβ = (1/(r0 r))[y1(M1 − M̄) + ⋯ + yk(Mk − M̄)]²   (f = 1)   (12.53)

Se = ST − Sm − Sβ   (f = kr0 − 2)   (12.54)

Ve = Se/(kr0 − 2)   (12.55)

The SN ratio is

η = 10 log [(1/(r0 r))(Sβ − Ve)/Ve]   (12.56)
Example [1]

In an injection molding process for a plastic product, the injection pressure is used as the signal factor to control the dimensions of the product. To obtain the SN ratio for such a system, the molding injection pressure was altered, and the data given in Table 12.12 were obtained by measuring the dimensions of two molded products. In this case, a linear relationship with no particular restriction was assumed.
Table 12.12
Dimension measurement data for plastic injection molding

Pressure (kgf/cm²):    30       40       50       60
Mi − M̄:               −15      −5       5        15
Data (mm)    R1:       4.608    4.640    4.682    4.718
             R2:       4.590    4.650    4.670    4.702
Total:                 9.198    9.290    9.352    9.420

Table 12.12 also shows the value of Mi − M̄ that was used in the ANOVA. Here M̄ = 45. ANOVA was performed as follows:
ST = 4.608² + 4.590² + ⋯ + 4.702² = 173.552216   (f = 8)   (12.57)

Sm = (9.198 + 9.290 + 9.352 + 9.420)²/8 = 37.26²/8 = 173.538450   (f = 1)   (12.58)

r = (−15)² + (−5)² + 5² + 15² = 500   (12.59)

Sβ = [(−15)(9.198) + (−5)(9.290) + (5)(9.352) + (15)(9.420)]²/1000
   = 3.64²/1000 = 0.0132496   (f = 1)   (12.60)

Se = ST − Sm − Sβ = 173.552216 − 173.538450 − 0.0132496 = 0.0005164   (f = 6)   (12.61)

Ve = Se/6 = 0.0005164/6 = 0.0000861   (12.62)

η = 10 log [(1/1000)(0.0132496 − 0.0000861)/0.0000861]   (12.63)
  = 10 log(0.1529) = −8.16 dB   (12.64)
Table 12.13
ANOVA table: linear equation for plastic injection molding

Factor    f    S             V             E(V)
m         1    173.538450    173.538450    σ² + (4)(2)m²
β         1    0.0132496     0.0132496     σ² + (2)(500)β²
e         6    0.0005164     0.0000861     σ²
Total     8    173.552216
The estimates of m and β are

m = 37.26/8 = 4.6575   (12.65)

β = 3.64/1000 = 0.00364   (12.66)

and the estimated response is

ŷ = 4.6575 + 0.00364(M − 45)   (12.67)
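The linear-equation decomposition can be verified numerically. The following is a minimal sketch over the Table 12.12 data; the names are illustrative.

```python
import math

# Linear-equation SN ratio for the injection-molding data of Table 12.12.
M  = [30, 40, 50, 60]
R1 = [4.608, 4.640, 4.682, 4.718]
R2 = [4.590, 4.650, 4.670, 4.702]

r0, k = 2, len(M)
Mbar = sum(M) / k                               # 45
dev  = [m - Mbar for m in M]                    # -15, -5, 5, 15
r    = sum(d * d for d in dev)                  # 500  (12.50)
tot  = [a + b for a, b in zip(R1, R2)]
ST = sum(y * y for y in R1 + R2)                # (12.51)
Sm = sum(tot) ** 2 / (k * r0)                   # (12.52)
Sb = sum(d * y for d, y in zip(dev, tot)) ** 2 / (r0 * r)    # (12.53)
Se = ST - Sm - Sb                               # (12.54)
Ve = Se / (k * r0 - 2)                          # (12.55)
eta = 10 * math.log10((Sb - Ve) / (r0 * r) / Ve)             # (12.56)
m_hat = sum(tot) / (k * r0)                     # (12.65)
b_hat = sum(d * y for d, y in zip(dev, tot)) / (r0 * r)      # (12.66)
```

It reproduces Sm = 173.538450, Sβ = 0.0132496, m = 4.6575, and β = 0.00364.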
12.5. Linear Equation Using a Tabular Display of the Orthogonal Polynomial Equation

When the intervals between signal-factor levels are equal, the SN ratio can easily be calculated. Table 12.14 is part of a tabular display of Chebyshev's orthogonal polynomial equation. Although the table can be used to calculate linear, quadratic, or higher-order terms, only the linear term is used in the SN ratio calculation. (Note: This table is used only when level intervals are equal.) In the table,

b1 = linear term
b2 = quadratic term

bi = (W1A1 + ⋯ + WkAk)/(r0(λ²S)h^i)   (12.68)

where h is the level interval. The effective number of replications of bi is

ne = r0(λ²S)h^(2i)   (12.69)

and the variation of bi is

Sbi = (W1A1 + ⋯ + WkAk)²/(r0(λ²S))   (12.70)
Table 12.14
Orthogonal polynomial equation (linear-term coefficients, b1)

No. of levels, k:    2         3         4         5
W1:                  −1        −1        −3        −2
W2:                  1         0         −1        −1
W3:                            1         1         0
W4:                                      3         1
W5:                                                2
λ²S:                 2         2         20        10

(Only the linear term, b1, is needed for the SN ratio calculation.)
From Table 12.2 the SN ratio and the linear coefficient are calculated as follows:

ST = y11² + y12² + ⋯ + y(kr0)² − Sm   (f = kr0 − 1)   (12.71)

Sm = (Σ yij)²/(kr0)   (f = 1)   (12.72)

Sβ = (W1y1 + W2y2 + ⋯ + Wkyk)²/(r0(λ²S))   (f = 1)   (12.73)

β = (W1y1 + W2y2 + ⋯ + Wkyk)/(r0(λ²S)h)   (12.74)

Se = ST − Sβ   (f = kr0 − 2)   (12.75)

Ve = Se/(kr0 − 2)   (12.76)

η = 10 log [(1/(r0(λ²S)h²))(Sβ − Ve)/Ve]   (12.77)

To calculate Sβ and β, the values of W1, W2, ..., Wk are found from Table 12.14 under column b1 corresponding to the number of signal-factor levels, k; h is the level interval of the signal factor. A1, A2, ..., Ak in Table 12.14 correspond to the totals y1, y2, ..., yk in Table 12.2.
Example

The example in Table 12.12 can be calculated easily by use of Table 12.14. From the table,

k = 4,  r0 = 2,  h = 5,  W1 = −3,  W2 = −1,  W3 = 1,  W4 = 3,  λ²S = 20   (12.78)

Sβ is calculated by

Sβ = (−3y1 − y2 + y3 + 3y4)²/(r0(λ²S))
   = [(−3)(9.198) − 9.290 + 9.352 + (3)(9.420)]²/((2)(20))
   = 0.0132496   (12.79)

and

η = 10 log [(1/(r0(λ²S)h²))(Sβ − Ve)/Ve]
  = 10 log {[1/((2)(20)(5²))](0.0132496 − 0.0000861)/0.0000861}
  = 10 log(0.1529) = −8.16 dB   (12.80)
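The orthogonal-polynomial shortcut reduces the whole calculation to a weighted sum of the signal-level totals. A sketch of the example above (Ve is carried over from equation 12.62):

```python
import math

# Chebyshev orthogonal-polynomial shortcut (Section 12.5) applied to the
# Table 12.12 totals; W and lambda^2*S come from Table 12.14 for k = 4.
W, l2S, h, r0 = [-3, -1, 1, 3], 20, 5, 2
y = [9.198, 9.290, 9.352, 9.420]            # totals y1..y4

Sb = sum(w * t for w, t in zip(W, y)) ** 2 / (r0 * l2S)      # (12.79)
beta = sum(w * t for w, t in zip(W, y)) / (r0 * l2S * h)     # (12.74)
Ve = 0.0000861                               # error variance from (12.62)
eta = 10 * math.log10((Sb - Ve) / (r0 * l2S * h**2) / Ve)    # (12.80)
```

Sβ and β agree with the direct linear-equation calculation (0.0132496 and 0.00364), which is the point of the tabular method.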
percent alcohol; M2 is M1 plus 3% alcohol, M3 is M1 plus 6% alcohol, and M4 is M1 plus 9% alcohol. Thus, the signal-factor levels M1, M2, M3, and M4 have equal intervals. Chebyshev's table of the orthogonal polynomial equation can be used to calculate the linearity and thus the SN ratio.
From Table 12.15, the SN ratio of this case is calculated as follows:

ST = y11² + y12² + ⋯ + y43² − Sm   (f = 11)   (12.81)

Sm = (Σ yij)²/12   (f = 1)   (12.82)

From Table 12.14, W1 = −3, W2 = −1, W3 = 1, W4 = 3; A1 = y1, A2 = y2, A3 = y3, A4 = y4; λ²S = 20. So

Sβ = (−3y1 − y2 + y3 + 3y4)²/((3)(20))   (f = 1)   (12.83)

Se = ST − Sβ   (f = 10)   (12.84)

Ve = Se/10   (12.85)

η = 10 log [(1/(r0(λ²S)h²))(Sβ − Ve)/Ve]   (12.86)
Table 12.15
Equal level intervals

Sample:    M1 = x    M2 = x + 3%    M3 = x + 6%    M4 = x + 9%
Reading:   y11       y21            y31            y41
           y12       y22            y32            y42
           y13       y23            y33            y43
Total:     y1        y2             y3             y4
Example [3]

Table 12.16 shows the measuring results of an electronic balance for chemical analysis. Each raw data point is multiplied by 1000 after M1 is subtracted as a reference point. The signal-factor levels are set as follows:

M1 = x,  M2 = x + h,  ...,  Mk = x + (k − 1)h

so that

M1 − M1 = 0,  M2 − M1 = h,  ...,  Mk − M1 = (k − 1)h   (12.87)

Since M1 was used as the reference point (Ms), M1 is subtracted from M1, M2, ..., Mk to obtain the signal-factor levels in Table 12.16b. The average of the readings corresponding to M1, denoted by ys, is calculated as

ys = (120.5857 + 120.5846 + 120.5859)/3 = 120.5854   (12.88)

The following calculations are made for the zero-point proportional equation. Total variation:

ST = 0.3² + (−0.8)² + ⋯ + 38.4² = 7132.19   (f = 15)   (12.89)
Table 12.16
Measuring results of an electronic balance

(a) Raw data

Signal:   M1 = x      M2 = x + h    M3 = x + 2h   M4 = x + 3h   M5 = x + 4h
N1        120.5857    120.5938      120.6018      120.6132      120.6206
N2        120.5846    120.5933      120.6026      120.6117      120.6213
N3        120.5859    120.5914      120.6041      120.6095      120.6238

(b) Data after reference-point calibration (×1000)

Signal:   0           h             2h            3h            4h            Linear equation
N1        0.3         8.4           16.4          27.8          35.2          L1 = 265.4h
N2       −0.8         7.9           17.2          26.3          35.9          L2 = 264.8h
N3        0.5         6.0           18.7          24.1          38.4          L3 = 269.3h
Total     y1 = 0.0    y2 = 22.3     y3 = 52.3     y4 = 78.2     y5 = 109.5
Table 12.17
COD values of wastewater

(a) Raw data

Signal:   M1      M2      M3      M4      M5
R1        9.3     25.0    43.3    61.9    81.5
R2        11.3    27.0    44.5    63.1    80.6
R3        10.4    27.7    45.2    62.0    79.4

(b) Data after reference-point calibration (reference point M3)

Signal:   M1 − Ms   M2 − Ms   M3 − Ms   M4 − Ms   M5 − Ms
R1        −35.0     −19.3     −1.0      17.6      37.2
R2        −33.0     −17.3     0.2       18.8      36.3
R3        −33.9     −16.6     0.9       17.7      35.1
Total     −101.9    −53.2     0.1       54.1      108.6
Effective divider:

r = 0² + h² + (2h)² + (3h)² + (4h)² = 30h²   (12.90)

Variation of the proportional term:

Sβ = (L1 + L2 + L3)²/((3)(30h²)) = (799.5h)²/((3)(30h²)) = 7102.225   (f = 1)   (12.91)

Variation of β between noise conditions:

SN×β = (L1² + L2² + L3²)/(30h²) − Sβ = 7102.623 − 7102.225 = 0.398   (f = 2)   (12.92)

Error variation:

Se = ST − Sβ − SN×β = 7132.19 − 7102.225 − 0.398 = 29.567   (f = 12)   (12.93)
Error variance:

Ve = 29.567/12 = 2.464   (12.94)

Pooled total error variance:

VN = (0.398 + 29.567)/14 = 2.140   (12.95)

The slope is

β = (L1 + L2 + L3)/((3)(30h²)) = 799.5h/(90h²) = 8.883/h = 0.8883   (12.96)

where h = 0.01 (unit: grams) is used. The following SN ratio is calculated using h = 10 (unit: milligrams):

η = 10 log {[1/((3)(30)(10²))](7102.225 − 2.140)/2.140} = 10 log(0.369) = −4.33 dB   (12.97)

Sensitivity is given by

S = 10 log [1/((3)(30)(10²))](7102.225 − 2.140) = 10 log(0.789) = −1.03 dB   (12.98)

In this example, the pooled error variance (VN) is used as the denominator in the calculation of the SN ratio instead of the unpooled error variance. (This is discussed further in Section 12.11.)
When signal levels are formed by mixing two samples in different proportions, the levels have the same interval. Even if the true values of the two samples are not known accurately, the equal-interval relationship between adjacent levels is maintained. A level setting of this type is effective in experiments that check sample measurement errors.
Example [3]

Table 12.17 shows the measured chemical oxygen demand (COD) values from the wastewater of a factory. Two different water samples were mixed to set the following signal-factor levels:

M1 = first sample
M2 = 0.75M1 + 0.25M5
M3 = 0.50M1 + 0.50M5
M4 = 0.25M1 + 0.75M5
M5 = second sample   (12.100)

The results in the table were transformed using M3 as the reference point. The average at M3 is calculated as

ys = (43.3 + 44.5 + 45.2)/3 = 44.33   (12.101)

Total variation:

ST = (−35.0)² + (−33.0)² + ⋯ + 35.1² = 9322.43   (f = 15)   (12.102)

Taking the level interval as the unit, so that the calibrated signal values are −2, −1, 0, 1, 2, the effective divider is

r = (−2)² + (−1)² + 0² + 1² + 2² = 10   (12.103)

Variation of the proportional term:

Sβ = (1/(3 × 10))[(−2)(−101.9) + (−1)(−53.2) + (1)(54.1) + (2)(108.6)]²
   = 528.3²/30 = 9303.3630   (f = 1)   (12.104, 12.105)
Error variation:

Se = ST − Sβ = 9322.43 − 9303.3630 = 19.067   (f = 14)   (12.106)

Error variance:

Ve = 19.067/14 = 1.3619   (12.107)

SN ratio:

η = 10 log [(1/(3 × 10))(9303.3630 − 1.3619)/1.3619] = 10 log(227.67) = 23.57 dB   (12.108)

Sensitivity:

S = 10 log (1/(3 × 10))(9303.3630 − 1.3619) = 10 log(310.07) = 24.91 dB   (12.109)
Mi′ = Mi/Ms,  ...,  Mk′ = Mk/Ms

The value Ms, which is used as the base, can be chosen arbitrarily. The maximum level value is generally used. The transformed level value, M′, is used for the remaining calculation.
In the case of a zero-point proportional equation, it is known that the data y = 0 if the signal M = 0. It is assumed that the signal factor, M, and the data, y, are proportional. In other words, the relationship between M and y is assumed to be

y = βM   (12.110)

The distances between levels are not equal, but the calculation formulas are the same as in the case of a zero-point proportional equation. The linear effect of the signal factor is calculated using the transformed signal level values, M′. For example, if the geometrically progressive levels have a common ratio of 1/2 and the number of levels is k = 4, the following results:

M1′ = 1/8,  M2′ = 1/4,  M3′ = 1/2,  M4′ = 1   (12.111)

r = (1/8)² + (1/4)² + (1/2)² + 1²   (12.112)

L = (1/8)y1 + (1/4)y2 + (1/2)y3 + y4   (12.113)

η = 10 log [(1/(r0 r))(Sβ − Ve)/Ve]   (12.114)

This SN ratio, which is calculated by setting Ms = 1, is not expressed in the inverse square of the signal-factor unit. However, this alternative SN ratio is sufficient if it is used only for comparison. To obtain the SN ratio corresponding to the unit of the signal factor, we substitute for Ms the value expressed in terms of the signal-factor unit and transform the SN ratio according to

η′ = η − 10 log Ms²   (12.115)
12.9. When the True Values of Signal Factor Levels Are Unknown

When the signal-factor level values are not known accurately, the variation between samples with unknown values is used as the signal. The signal-factor variation SM is calculated, and the residual of the total variation, ST, after subtracting SM is considered the error term. This is an expedient measure which assumes that a linear relationship holds between the signal M and the data y.

The method of decomposing the total variation differs depending on what is expected of the linear relationship, as in the following two cases:

1. Proportional equation: ST = SM + Se
2. Linear equation: ST = Sm + SM + Se

These two cases share a basic concept, but the calculation of the effect, SM, of the signal factor, M, and of the general mean, Sm, as well as the degrees of freedom, differs. The proportional equation is explained below.

The total variation is calculated as in the case in which the true values are known. The linear effect cannot be calculated, since the signal-factor true values are unknown. Therefore, the total effect, SM, of M is considered the signal variation and is obtained as follows:

ST = y11² + y12² + ⋯ + y(kr0)²   (f = kr0)   (12.116)

SM = (y1² + y2² + ⋯ + yk²)/r0   (f = k)   (12.117)

Se = ST − SM   (f = kr0 − k)   (12.118)

The variances are

Ve = Se/(kr0 − k)   (12.120)

VM = SM/k   (12.121)

and the SN ratio is

η = 10 log [(1/r0)(VM − Ve)/Ve]   (12.122)
Within a group of products, each product has a different output, and a process can produce different outputs for different products or components. It is therefore important to use dynamic SN ratios to develop a group of products, or a process for manufacturing different components, so that different outputs can easily be obtained.

Traditionally, only one output of a product or only one output of a process has been studied in research and development. Even today, most product or process development is performed this way. In such a case, a nondynamic SN ratio may be used to improve robustness for only a specific or fixed target instead of the entire output range.

For example, a study was conducted to plate gold on the contact point of an electronic component. The target was to plate to a thickness of 50 μin. on the surface and to reduce variability around 50 μin. using a parameter design approach. This is a nondynamic approach. The target may be 65 μin. for another component; then another parameter design has to be conducted. But using a dynamic SN ratio, the plating process can be optimized within a certain range of thickness. After the completion of a dynamic approach, any plating thickness within that range can be manufactured without repeating the study for each different thickness. Thus, it is easy to understand that the dynamic approach is far more powerful and efficient than the nondynamic approach.

In a nondynamic SN ratio, there are two issues: (1) reducing variability and (2) adjusting the average to the target. Once variability is reduced and the average is adjusted to the target, there are no further issues of moving or adjusting the target. A nondynamic approach is thus useful for a specific product with a fixed output, in contrast to a dynamic approach, where the output can easily be adjusted within a range.
There are four types of nondynamic output response applications: nominal-the-best, smaller-the-better, larger-the-better, and nondynamic operating window. In nominal-the-best, there are two types, making a total of five.

Recall that in nominal-the-best, there is a fixed target: for example, the plating thickness of a component, the resistance of a resistor, or various dimensions of a product. There are typically upper and lower specification limits. Type I is the case where the data are all nonnegative. In type II, the data are a mixture of positive and negative values.

In two-step optimization for a nominal-the-best application, the first step is to maximize the SN ratio and thereby minimize variability around the average. The second step is to adjust the average to the target.

The nominal-the-best SN ratio is described as

SN = desired output/undesired output = (effect of the average)/(variability around the average)

In nominal-the-best applications, the data are not negative. This is especially true if the output response is energy related, where zero input to the system should produce zero output; thus, there are no negative data.

Nominal-the-Best, Type I
For n pieces of data, y1, y2, ..., yn:

ST = y1² + y2² + ⋯ + yn²   (f = n)   (12.123)

Sm = (y1 + y2 + ⋯ + yn)²/n   (f = 1)   (12.124)

Se = ST − Sm   (f = n − 1)   (12.125)

The error variance, Ve, is the error variation divided by its degrees of freedom:

Ve = Se/(n − 1)   (12.126)

The SN ratio and sensitivity are

η = 10 log [(1/n)(Sm − Ve)/Ve]   (12.127)

S = 10 log (1/n)(Sm − Ve)   (12.128)
Example

In a tile manufacturing experiment, samples were taken from different locations in the tunnel kiln, with the following results: 10.18, 10.18, 10.12, 10.06, 10.02, 9.98, 10.20.

Total variation:

ST = y1² + y2² + ⋯ + y7² = 10.18² + 10.18² + ⋯ + 10.20² = 714.9236   (f = 7)   (12.129)

Average variation:

Sm = (y1 + y2 + ⋯ + y7)²/7 = (10.18 + 10.18 + ⋯ + 10.20)²/7 = 714.8782   (f = 1)   (12.130)
Error variation:

Se = ST − Sm = 714.9236 − 714.8782 = 0.04537   (f = 6)   (12.131)

Error variance:

Ve = Se/6 = 0.04537/6 = 0.007562   (12.132)

SN ratio:

η = 10 log [(1/n)(Sm − Ve)/Ve]
  = 10 log [(1/7)(714.8782 − 0.007562)/0.007562]
  = 10 log(13,504.94) = 41.31 dB   (12.133)

Sensitivity:

S = 10 log (1/n)(Sm − Ve) = 10 log(102.1244) = 20.09 dB   (12.134)
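The nominal-the-best (type I) decomposition is compact enough to check directly. A minimal sketch over the tile data:

```python
import math

# Nominal-the-best (type I) SN ratio and sensitivity for the tile data.
y = [10.18, 10.18, 10.12, 10.06, 10.02, 9.98, 10.20]
n = len(y)

ST = sum(v * v for v in y)                  # (12.129)
Sm = sum(y) ** 2 / n                        # (12.130)
Se = ST - Sm                                # (12.131)
Ve = Se / (n - 1)                           # (12.132)
eta = 10 * math.log10((Sm - Ve) / n / Ve)   # (12.133)
S   = 10 * math.log10((Sm - Ve) / n)        # (12.134)
```

It reproduces Sm = 714.8782 and Ve = 0.007562 from the worked example.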
Nominal-the-Best, Type II

When the results of an experiment include negative values, the positive and negative values cancel each other, which makes the calculation of Sm meaningless. Therefore, information regarding the sensitivity, or average, cannot be obtained. The SN ratio calculated for this case shows variability only. It is less informative than type I.

Conceptually, the SN ratio gives the following information:

SN = 1/(variability around the average)

The SN ratio is calculated as follows when there are n pieces of data, y1, y2, ..., yn. The error variation is

Se = y1² + y2² + ⋯ + yn² − (y1 + y2 + ⋯ + yn)²/n   (f = n − 1)   (12.135)

Ve = Se/(n − 1)   (12.136)

η = 10 log(1/Ve) = −10 log Ve   (12.137)
Example

For the data points 1.25, −1.48, −2.70, and 0.19, the SN ratio is calculated as follows:

Se = 1.25² + (−1.48)² + (−2.70)² + 0.19² − (1.25 − 1.48 − 2.70 + 0.19)²/4 = 9.2021   (f = 3)   (12.138)

Ve = Se/(n − 1) = 9.2021/3 = 3.0674   (12.139)

η = −10 log(3.0674) = −4.87 dB   (12.140)
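The type II calculation uses only the mean-corrected variability. A sketch over the example data:

```python
import math

# Nominal-the-best type II (data include negative values): only the
# variability is evaluated, eta = -10 log Ve  (12.137).
y = [1.25, -1.48, -2.70, 0.19]
n = len(y)

Se = sum(v * v for v in y) - sum(y) ** 2 / n    # (12.135)
Ve = Se / (n - 1)                               # (12.136)
eta = -10 * math.log10(Ve)                      # (12.137)
```

It reproduces Se = 9.2021 and Ve = 3.0674 from the example.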
Smaller-the-Better
Smaller-the-better applies when the objective is to minimize the output value and there are no negative data. In such a case, the target is zero. For example, we may want to minimize the audible engine noise level or to minimize the number of impurities. These examples are easy to understand; however, such characteristics are used without success in many cases because the outputs are symptoms; they are called downstream quality characteristics. As explained in Chapter 14, using symptoms as the output quality characteristics in a study is not recommended. It is better, instead, to use this type of SN ratio together with a larger-the-better SN ratio to construct an SN ratio for a nondynamic operating window.

A smaller-the-better SN ratio is calculated as

η = −10 log [(1/n)(y1² + y2² + ⋯ + yn²)]   (12.141)
Example

For the data points 0.25, 0.19, and 0.22, the SN ratio is calculated as

η = −10 log [(1/3)(0.25² + 0.19² + 0.22²)] = 13.10 dB   (12.142)
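As a one-line check of equation (12.141) on the example data:

```python
import math

# Smaller-the-better SN ratio (12.141) for the example data.
y = [0.25, 0.19, 0.22]
eta = -10 * math.log10(sum(v * v for v in y) / len(y))
```

The result matches the 13.10 dB computed above.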
Larger-the-Better

Larger-the-better applies when the intention is to maximize the output and there are no negative data. In such a case, the target is infinity. For example, we use this type if we want to maximize the power of an engine or the strength of a material. Similar to smaller-the-better, such quality characteristics are symptoms: downstream quality characteristics. A study of downstream quality characteristics will exhibit interactions between control factors; therefore, its conclusions cannot be reproduced in many cases. It is recommended that this type of SN ratio be used together with the smaller-the-better type to construct an SN ratio for a nondynamic operating window.

The calculation of a larger-the-better SN ratio is given by

η = −10 log [(1/n)(1/y1² + 1/y2² + ⋯ + 1/yn²)]   (12.143)
Example

For the data points 23.5, 43.1, and 20.8, the SN ratio is calculated as

η = −10 log [(1/3)(1/23.5² + 1/43.1² + 1/20.8²)] = 28.09 dB   (12.144)
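Equation (12.143) on the example data:

```python
import math

# Larger-the-better SN ratio (12.143) for the example data.
y = [23.5, 43.1, 20.8]
eta = -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))
```

The result matches the 28.09 dB computed above.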
Nondynamic Operating Window
Figure 12.3
Paper carriage function

is a good design, and a method of stability design called the operating window method has been developed. It is possible to have x = 0 or y = ∞, but these are not a problem. The important point is that the values of x and y change depending on the paper and environmental conditions. Based on noise factors such as paper size, weight, static electricity, water content, wrinkle size, room temperature, humidity, and power voltage, the following compounded noise conditions are created using technical know-how and preparatory experiments (which are done under only one design condition):

N1: standard condition
N2, N3: extreme conditions compounded from the noise factors above

Under these three conditions, the misfeeding threshold value, x, and the double-feeding (multifeed) threshold value, y, are obtained. According to the data in Table 12.18, the following two SN ratios are obtained by treating x as a smaller-the-better characteristic and y as a larger-the-better characteristic.
ηx = −10 log [(1/3)(x1² + x2² + x3²)]   (12.145)

ηy = −10 log [(1/3)(1/y1² + 1/y2² + 1/y3²)]   (12.146)

Two SN ratios are obtained for each design (e.g., for every condition of an inner orthogonal array to which control factors are assigned). By considering ηx and ηy as outside data at K1 and K2 for each design condition, the main effect of a control factor is obtained from the sum of K1 and K2. The key point is to maximize the sum, ηx + ηy.
Table 12.18
Two threshold values x and y

           Characteristic
Noise      Misfeed    Multifeed
N1         x1         y1
N2         x2         y2
N3         x3         y3
Example

In a paper-feeding experiment, the responses selected were x, the misfeed threshold, and y, the multifeed threshold. From the results in Table 12.19,

η = ηx + ηy
  = −10 log [(1/3)(30² + 50² + 50²)] − 10 log [(1/3)(1/50² + 1/80² + 1/100²)]
  = −32.94 + 36.60 = 3.66 dB   (12.147)
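The operating-window summary statistic is just the sum of the two component SN ratios. A sketch over the Table 12.19 data:

```python
import math

# Nondynamic operating-window SN ratio: the misfeed threshold x is treated
# as smaller-the-better, the multifeed threshold y as larger-the-better
# (equations 12.145-12.147).
x = [30, 50, 50]
y = [50, 80, 100]

eta_x = -10 * math.log10(sum(v * v for v in x) / 3)
eta_y = -10 * math.log10(sum(1 / (v * v) for v in y) / 3)
eta = eta_x + eta_y                         # quantity to maximize
```

The sum reproduces the 3.66 dB of equation (12.147).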
η = 10 log [(1/(r0 r))(Sβ − Ve)/Ve]   (12.148)
Table 12.19
Results of experiment

        x     y
N1      30    50
N2      50    80
N3      50    100
Table 12.20
Inputs and outputs

         M1     M2     ...   Mk     Linear equation
N1       y11    y21    ...   yk1    L1
N2       y12    y22    ...   yk2    L2
...
Nl       y1l    y2l    ...   ykl    Ll
Total    y1     y2     ...   yk
The total variation is

ST = Σ yij²   (f = kl)   (12.149, 12.150)

Figure 12.4
Decomposition of variation: ST (f = kl) is decomposed into Sβ (f = 1), SN×β (f = l − 1), and Se [f = l(k − 1)]; SL (f = l) = Sβ + SN×β, and SN (f = kl − 1) = ST − Sβ.
Linear effects:

L1 = M1y11 + M2y21 + ⋯ + Mkyk1
L2 = M1y12 + M2y22 + ⋯ + Mkyk2
⋮   (12.151)

Variation of the proportional term:

Sβ = (L1 + L2 + ⋯ + Ll)²/γ0   (f = 1)   (12.152)

where

γ0 = l r   (12.153)

r = M1² + M2² + ⋯ + Mk²   (12.154)

Variation of L:

SL = (L1² + L2² + ⋯ + Ll²)/r   (f = l)   (12.155)

Variation of β due to N:

SN×β = SL − Sβ   (f = l − 1)   (12.156)

Variation of error:

Se = ST − Sβ − SN×β   [f = l(k − 1)]   (12.157)

Error variance:

Ve = Se/[l(k − 1)]   (12.158)

Total noise variation and variance:

SN = ST − Sβ   (f = kl − 1)   (12.159)

VN = SN/(kl − 1)   (12.160)

SN ratio:

η = 10 log [(1/γ0)(Sβ − Ve)/VN]   (12.161)
Example [4]

Table 12.21 shows the results of an experiment on a car brake. Ideally, the input signal, the line pressure, is proportional to the output, the braking torque. For this case, two noise conditions, labeled N1 and N2, are determined by compounding a few important noise factors, so that

N1: noise condition when the response tends to be lower
N2: noise condition when the response tends to be higher

In addition, a second compounded noise factor, Q, with levels Q1 and Q2, was included, giving the four rows N1Q1, N1Q2, N2Q1, and N2Q2 in Table 12.21.
Table 12.21
Results of experiment

          M1 = 0.008   M2 = 0.016   M3 = 0.032   M4 = 0.064   Linear equation
N1Q1      4.8          8.5          20.4         36.9         L1
N1Q2      0.9          6.5          13.2         32.7         L2
N2Q1      5.8          11.5         25.0         43.5         L3
N2Q2      0.8          6.8          16.2         34.5         L4
Total     12.3         33.3         74.8         147.6
ST = 4.8² + 0.9² + ⋯ + 34.5² = 7342.36   (f = 16)   (12.164)

Sβ = (L1 + L2 + L3 + L4)²/γ0
   = (3.1888 + 2.6264 + 3.8144 + 2.8416)²/[(4)(0.00544)] = 7147.5565   (f = 1)   (12.165)

where

γ0 = 4r,  r = 0.008² + 0.016² + 0.032² + 0.064² = 0.00544

SN×β = (L1² + L2² + L3² + L4²)/r − Sβ = 148.5392   (f = 3)   (12.166)

Se = ST − Sβ − SN×β = 7342.36 − (7147.5565 + 148.5392) = 46.2643   (f = 12)   (12.167)

Ve = Se/[(3)(4)] = 46.2643/12 = 3.8554   (12.168)

SN = ST − Sβ = 7342.36 − 7147.5565 = 194.8035   (f = 15)   (12.169)

VN = SN/[(4)(4) − 1] = 194.8035/15 = 12.9869   (12.170)

η = 10 log [(1/γ0)(Sβ − Ve)/VN]
  = 10 log {[1/(4)(0.00544)](7147.5565 − 3.8554)/12.9869} = 44.03 dB   (12.171)
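The whole dynamic-with-noise decomposition for the brake data can be reproduced in a short script (a sketch; the four rows are N1Q1, N1Q2, N2Q1, N2Q2):

```python
import math

# Dynamic SN ratio with compounded noise, for the brake data of Table 12.21.
M = [0.008, 0.016, 0.032, 0.064]
rows = [[4.8, 8.5, 20.4, 36.9],
        [0.9, 6.5, 13.2, 32.7],
        [5.8, 11.5, 25.0, 43.5],
        [0.8, 6.8, 16.2, 34.5]]

l, k = len(rows), len(M)
r = sum(m * m for m in M)                        # 0.00544  (12.154)
L = [sum(m * y for m, y in zip(M, row)) for row in rows]     # (12.151)
ST = sum(y * y for row in rows for y in row)     # (12.164)
Sb = sum(L) ** 2 / (l * r)                       # (12.152)
SNb = sum(v * v for v in L) / r - Sb             # S_{N x beta}  (12.156)
Se = ST - Sb - SNb                               # (12.157)
Ve = Se / (l * (k - 1))                          # (12.158)
VN = (ST - Sb) / (k * l - 1)                     # (12.160)
eta = 10 * math.log10((Sb - Ve) / (l * r) / VN)  # (12.161)
```

It reproduces Sβ = 7147.5565, SN×β = 148.539, and VN = 12.9869 from the calculation above.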
When there are two signal factors, M and M*, the ideal function may be

y = βMM*   (12.172)

where y is the current, β the regression coefficient, M the voltage, and M* the line width.

Calculation of the SN ratio is done by considering the combinations of M and M* as one signal. Table 12.22 shows the layout of input/output for double signals with noise. Considering the product of M and M* as one signal factor, the linear equations of the proportional term at N1 and N2, L1 and L2, respectively, are calculated as follows:

L1 = M1*M1y11 + M1*M2y12 + ⋯ + M3*Mky5k
L2 = M1*M1y21 + M1*M2y22 + ⋯ + M3*Mky6k   (12.173)
The total variation, ST, is decomposed as shown in Figure 12.5:

ST = y11² + y12² + ⋯ + y6k²   (f = 6k)   (12.174)

Sβ = (L1 + L2)²/(2r)   (f = 1)   (12.175)

SN×β = (L1² + L2²)/r − Sβ   (f = 1)   (12.176)

Se = ST − Sβ − SN×β   (f = 6k − 2)   (12.177)

where r is the sum of the squares of the products M*M over all signal combinations.   (12.178)

Ve = Se/(6k − 2)   (12.179)

VN = SN/(6k − 1) = (ST − Sβ)/(6k − 1)   (12.180)

η = 10 log [(1/2r)(Sβ − Ve)/VN]   (12.181)

S = 10 log (1/2r)(Sβ − Ve)   (12.182)

Example [5]

In a laser welding experiment, the ideal function was defined as

y = βMM*
Table 12.22
Input/output table for double signals

               M1     M2     ...   Mk
M1*    N1      y11    y12    ...   y1k
       N2      y21    y22    ...   y2k
M2*    N1      y31    y32    ...   y3k
       N2      y41    y42    ...   y4k
M3*    N1      y51    y52    ...   y5k
       N2      y61    y62    ...   y6k
where y is the force, M the deformation, and M* the welding length. The force at each deformation level was measured for each test piece of different welding length. Table 12.23 shows the results of one of the runs.

L1 = M1*M1y11 + M1*M2y12 + ⋯ + M3*M2y52
   = (2)(20)(149.0) + (2)(40)(146.8) + ⋯ + (6)(40)(713.2) = 338,840   (12.183)

L2 = (2)(20)(88.6) + (2)(40)(87.0) + ⋯ + (6)(40)(304.4) = 152,196   (12.184)

ST = 149.0² + 146.8² + ⋯ + 304.4² = 1,252,436.16   (f = 12)   (12.185)

Figure 12.5
Decomposition of variation: ST (f = 6k) = Sβ (f = 1) + SN×β (f = 1) + Se (f = 6k − 2); SL (f = 2) = Sβ + SN×β; SN (f = 6k − 1) = ST − Sβ.
Table 12.23
Results of experiment

                      M1 (20)   M2 (40)
M1* (2 mm)    N1      149.0     146.8
              N2      88.6      87.0
M2* (4 mm)    N1      300.4     481.1
              N2      135.7     205.5
M3* (6 mm)    N1      408.0     713.2
              N2      207.5     304.4
Sβ = (L1 + L2)²/(2r) = (338,840 + 152,196)²/[(2)(112,000)] = 1,076,412.292   (f = 1)   (12.186)

where

r = (2 × 20)² + (2 × 40)² + (4 × 20)² + (4 × 40)² + (6 × 20)² + (6 × 40)² = 112,000   (12.187, 12.188)

SN×β = (L1² + L2²)/r − Sβ = (338,840² + 152,196²)/112,000 − 1,076,412.292 = 155,517.78   (f = 1)   (12.189)

Se = ST − Sβ − SN×β = 1,252,436.16 − 1,076,412.292 − 155,517.78 = 20,506.088   (f = 10)   (12.190)

SN = ST − Sβ = 1,252,436.16 − 1,076,412.292 = 176,023.868   (f = 11)   (12.191)

Ve = Se/10 = 20,506.088/10 = 2050.6088   (12.192)
VN = SN/11 = 176,023.868/11 = 16,002.1698   (12.193)

η = 10 log [(1/2r)(Sβ − Ve)/VN]
  = 10 log {[1/(2)(112,000)](1,076,412.292 − 2050.6088)/16,002.1698}
  = −35.23 dB   (12.194)

S = 10 log (1/2r)(Sβ − Ve)
  = 10 log [1/(2)(112,000)](1,076,412.292 − 2050.6088)
  = 10 log(4.796) = 6.81 dB   (12.195)
When there are two signals, the ideal function could also be

y = β(M/M*)   (12.196)
Table 12.24
Load, cross section, and deformation

              M1*    M2*    ...   Mk*
M1     N1     y11    y12    ...   y1k
       N2     y21    y22    ...   y2k
M2     N1     y31    y32    ...   y3k
       N2     y41    y42    ...   y4k
M3     N1     y51    y52    ...   y5k
       N2     y61    y62    ...   y6k
The linear equations are

L1 = (M1/M1*)y11 + (M1/M2*)y12 + ⋯ + (M3/Mk*)y5k
L2 = (M1/M1*)y21 + (M1/M2*)y22 + ⋯ + (M3/Mk*)y6k   (12.197)

Total variation:

ST = Σ yij²   (f = 6k)   (12.198)

Variation of L:

SL = (L1² + L2²)/r   (f = 2)   (12.199)

Variation of the proportional term:

Sβ = (L1 + L2)²/(2r)   (f = 1)   (12.200)

where

r = (M1/M1*)² + (M1/M2*)² + ⋯ + (M3/Mk*)²   (12.201)

Variation of β due to N:

SN×β = SL − Sβ = (L1² + L2²)/r − (L1 + L2)²/(2r)   (f = 1)   (12.202)

Variation of error:

Se = ST − SL = ST − Sβ − SN×β   (f = 6k − 2)   (12.203)

Figure 12.6
Decomposition of variation: ST (f = 6k) = Sβ (f = 1) + SN×β (f = 1) + Se (f = 6k − 2); SL (f = 2) = Sβ + SN×β; SN (f = 6k − 1) = ST − Sβ.
Variation of noise:

SN = ST − Sβ   (f = 6k − 1)   (12.204)

Variance of error:

Ve = Se/(6k − 2)   (12.205)

Variance of noise:

VN = SN/(6k − 1)   (12.206)

SN ratio:

η = 10 log [(1/2r)(Sβ − Ve)/VN]   (12.207)

Sensitivity:

S = 10 log (1/2r)(Sβ − Ve)   (12.208)
Example [6]

A process used to apply an encapsulant, which provides electrical isolation between elements on the outer surface of a night-vision image intensifier tube, was optimized using a double-signal-factor SN ratio. The ideal function was identified as

y = β(M/M*)   (12.196)

From the data of Table 12.26, the linear equations are

L1 = (M1/M1*)y11 + (M1/M2*)y12 + ⋯ + (M5/M3*)y53
   = (2/0.016)(9) + (2/0.030)(5) + ⋯ + (10/0.062)(34) = 217,781.7

L2 = 266,009.9
L3 = 3,687,281
L4 = 3,131,678   (12.209)
Table 12.25
Input/output table

             M1      M2      M3      M4      M5
N1    M1*    y11     y12     y13     y14     y15
      M2*    y21     y22     y23     y24     y25
      M3*    y31     y32     y33     y34     y35
N2    M1*    y41     y42     y43     y44     y45
      M2*    y51     y52     y53     y54     y55
      M3*    y61     y62     y63     y64     y65
N3    M1*    y71     y72     y73     y74     y75
      M2*    y81     y82     y83     y84     y85
      M3*    y91     y92     y93     y94     y95
N4    M1*    y101    y102    y103    y104    y105
      M2*    y111    y112    y113    y114    y115
      M3*    y121    y122    y123    y124    y125
ST = y11² + y12² + ⋯ + y125² = 20,974,160   (f = 60)   (12.210)

SL = (L1² + L2² + L3² + L4²)/r = 20,258,919   (f = 4)   (12.211)

where, with γ0 = 4r,

r = (M1/M1*)² + (M1/M2*)² + ⋯ + (M5/M3*)²
  = (2/0.016)² + (2/0.030)² + ⋯ + (10/0.062)² = 1,161,051   (12.212, 12.213)

Sβ = (L1 + L2 + L3 + L4)²/γ0
   = (217,781.7 + 266,009.9 + 3,687,281 + 3,131,678)²/[(4)(1,161,051)]
   = 11,483,166   (f = 1)   (12.214)
Table 12.26
Results of experiment

             M1     M2     M3     M4     M5
N1    M1*    9      26     53     92     152
      M2*    5      13     24     42     62     L1
      M3*    5      9      14     23     34
N2    M1*    13     36     66     100    170
      M2*    8      23     44     62     90     L2
      M3*    6      14     24     37     45
N3    M1*    209    517    915    1452   2266
      M2*    165    374    622    892    1186   L3
      M3*    110    235    385    515    655
N4    M1*    177    457    897    1321   2113
      M2*    100    234    408    621    895    L4
      M3*    50     104    171    262    361
SN×β = SL − Sβ = 20,258,919 − 11,483,166 = 8,775,753   (f = 3)   (12.215)

Se = ST − SL = 20,974,160 − 20,258,919 = 715,241   (f = 56)   (12.216)

SN = ST − Sβ = 20,974,160 − 11,483,166 = 9,490,994   (f = 59)   (12.217)

Ve = Se/56 = 715,241/56 = 12,772   (12.218)

VN = SN/59 = 9,490,994/59 = 160,864   (12.219)

η = 10 log [(1/γ0)(Sβ − Ve)/VN]
  = 10 log {[1/(4)(1,161,051)](11,483,166 − 12,772)/160,864}
  = −48.14 dB   (12.220)
S = 10 log (1/γ0)(Sβ − Ve)
  = 10 log [1/(4)(1,161,051)](11,483,166 − 12,772)
  = 10 log(2.470) = 3.93 dB   (12.221)
In split analysis, the total variation of the rows belonging to one control-factor level is decomposed as

ST = Sβ + Srow + Se,  with f = 1, 3, and 8, respectively

Srow is the variation between rows 1, 2, 3, and 4. It is the variation caused by control factors B, C, D, E, F, and G. This variation belongs neither to the signal nor to the noise, and therefore is separated out.

Total variation:

ST = y11² + y12² + ⋯ + y43²   (f = 12)   (12.222)
Table 12.27
Results without noise factors

(Control factors A–G are assigned to columns 1–7 of the array.)

No.    M1     M2     M3     Total
1      y11    y12    y13    y1
2      y21    y22    y23    y2
3      y31    y32    y33    y3
4      y41    y42    y43    y4
5      y51    y52    y53    y5
6      y61    y62    y63    y6
7      y71    y72    y73    y7
8      y81    y82    y83    y8
Totals of each signal level under A1:

y.1 = y11 + y21 + y31 + y41
y.2 = y12 + y22 + y32 + y42
y.3 = y13 + y23 + y33 + y43   (12.223)

Sβ = (1/γ0)(M1y.1 + M2y.2 + M3y.3)²   (f = 1)   (12.224)

where

γ0 = 4(M1² + M2² + M3²)   (12.225)
Table 12.28
Results of A1

No.     M1     M2     M3     Total
1       y11    y12    y13    y1
2       y21    y22    y23    y2
3       y31    y32    y33    y3
4       y41    y42    y43    y4
Total   y.1    y.2    y.3
Variation between rows:

Srow = (1/k)(y1² + y2² + y3² + y4²) − (1/(4k))(y1 + y2 + y3 + y4)²   (f = 4 − 1 = 3)   (12.226, 12.227)

Error variation:

Se = ST − Sβ − Srow   (f = 8)   (12.229)

Error variance:

Ve = Se/8   (12.230)

SN ratio:

η = 10 log [(1/γ0)(Sβ − Ve)/Ve]   (12.231)

The SN ratio of A2 is calculated similarly from the results for A2. The SN ratio of B1 is calculated from the results for B1, as shown in Table 12.29.
Split Analysis for Mixed Orthogonal Arrays

Calculation of the SN ratios for a three-level orthogonal array is exactly the same as that for a two-level orthogonal array. An L18 orthogonal array is used as an example to explain the calculations. Table 12.30 shows the layout of the experiment: a two-level control factor, A, and seven three-level control factors, B, C, D, E, F, G, and H, are assigned to an L18 orthogonal array. The signal factor M is assigned outside the L18 array. There are no noise factors.

First, the calculation of the SN ratio of A1 is explained. The results under A1 are shown in Table 12.31.

Total variation:

ST = y11² + y12² + ⋯ + y9k²   (f = 9k)   (12.232)
Table 12.29
Results of B1

No.     M1     M2     M3     Total
1       y11    y12    y13    y1
2       y21    y22    y23    y2
5       y51    y52    y53    y5
6       y61    y62    y63    y6
Total   y.1    y.2    y.3
Table 12.30
Results without noise factors

(Control factors A–H are assigned to columns 1–8 of the array.)

No.    M1      M2      ...   Mk      Total
1      y11     y12     ...   y1k     y1
2      y21     y22     ...   y2k     y2
⋮
18     y181    y182    ...   y18k    y18
Table 12.31
Results of A1

No.     M1     M2     ...   Mk     Total
1       y11    y12    ...   y1k    y1
2       y21    y22    ...   y2k    y2
⋮
9       y91    y92    ...   y9k    y9
Total   y.1    y.2    ...   y.k
Sβ = (1/γ0)(M1y.1 + M2y.2 + ⋯ + Mky.k)²   (f = 1)   (12.233, 12.234)

where the effective divider is

γ0 = 9(M1² + M2² + ⋯ + Mk²)   (12.235, 12.236)

Variation between rows:

Srow = (1/k)(y1² + y2² + ⋯ + y9²) − (1/(9k))(y1 + y2 + ⋯ + y9)²   (f = 9 − 1 = 8)   (12.237)

Error variation:

Se = ST − Sβ − Srow   (f = 9k − 9)   (12.238)

Error variance:

Ve = Se/(9k − 9)   (12.239)

SN ratio:

η = 10 log [(1/γ0)(Sβ − Ve)/Ve]   (12.240)
The SN ratio of B1 is calculated from the results under B1, as shown in Table 12.32:

ST = y11² + y12² + ⋯ + y12,k²   (f = 6k)   (12.241, 12.242)

Sβ = (1/γ0)(M1y.1 + M2y.2 + ⋯ + Mky.k)²   (f = 1)   (12.243)
Table 12.32
Results of B1

No.     M1      M2      ...   Mk      Total
1       y11     y12     ...   y1k     y1
2       y21     y22     ...   y2k     y2
3       y31     y32     ...   y3k     y3
10      y101    y102    ...   y10k    y10
11      y111    y112    ...   y11k    y11
12      y121    y122    ...   y12k    y12
Total   y.1     y.2     ...   y.k
where

γ0 = 6(M1² + M2² + ⋯ + Mk²)   (12.244, 12.245)

Srow = (1/k)(y1² + y2² + y3² + y10² + y11² + y12²)
       − (1/(6k))(y1 + y2 + y3 + y10 + y11 + y12)²   (f = 6 − 1 = 5)   (12.246)

Se = ST − Sβ − Srow   (f = 6k − 6)   (12.247)

Ve = Se/(6k − 6)   (12.248)

η = 10 log [(1/γ0)(Sβ − Ve)/Ve]   (12.249)
13
SN Ratios for Classified Attributes

13.1. Introduction
13.2. Cases with Two Classes, One Type of Mistake
13.3. Case with Two Classes, Two Types of Mistake
13.4. Chemical Reactions without Side Reactions
      Nature of Chemical Reactions
      Basic Function of Chemical Reactions
      SN Ratio of Chemical Reactions without Side Reactions
References
13.1. Introduction
Classified data consist of 0 and 1. For example, a defect can be expressed by 1 and a nondefect by 0. The yield calculated from a mixture of defects and nondefects looks like a continuous variable, but actually it is calculated from 0 and 1 data. Even in the case of chemical reactions, yield is the fraction of substance reacted within the whole substance, in units of molecules or atoms.
Classified attributes are used very widely for data collection and analysis, but from a quality engineering viewpoint they are not good quality characteristics. Using only such characteristics, much information cannot be picked up, so the study becomes inefficient and interactions between control factors appear.
There are strategies to avoid such interactions. First, avoid 0–1 data. Second, when continuous-variable data cannot be measured, due to lack of technology or for other reasons, try to provide more than two classes, such as good, normal, and bad, or large, medium, and small. The more levels that are classified, the more information is revealed. In this way the trend of a control factor's effect can be observed, and incorrect conclusions can be avoided to some extent. The third and best strategy is to use an SN ratio based on the function. This is discussed in Chapter 14.
13.2. Cases with Two Classes, One Type of Mistake
When 0 and 1 are used to classify attribute data, there are only two classes. In this case there are two situations: one type of mistake or two types of mistakes. Both situations are discussed below.
Suppose that three control factors are studied:

A: material   A1, A2
B: machine    B1, B2
C: method     C1, C2

For each condition, n products are inspected; each datum y is 1 for a defect and 0 for a nondefect. The fraction defective is

    p = (1/n)(y1 + y2 + ... + yn)   (13.1)

which, written out with its coefficients, is

    L = (1/n)y1 + (1/n)y2 + ... + (1/n)yn   (13.2)
To calculate the SN ratio, data are decomposed as follows. The total variation is
given by
    ST = y1² + y2² + ... + yn² = np   (f = n)   (13.3)
The effect of the signal is calculated as the variation of p. Since equation (13.2) is
a linear equation, denoted by L, its variation, denoted by Sp, is given by
    Sp = L²/D   (13.4)

where the effective divider is the sum of the squared coefficients,

    D = (1/n)² + (1/n)² + ... + (1/n)² = (1/n²)(n) = 1/n   (13.5)

so that

    Sp = p²/(1/n) = np²  (= Sm)   (13.6)

The error variation is

    Se = ST − Sp = np − np² = np(1 − p)   (13.7)
The SN ratio of digital data is calculated as the ratio of the variation of the signal to the variation of error, in decibel (dB) units:

    η = 10 log (Sp/Se) = 10 log [np²/np(1 − p)] = 10 log [p/(1 − p)] = −10 log (1/p − 1)   (13.8)

For example, suppose that under the current condition, T, the fraction defective is p = 0.10, and that the better levels A2, B2, and C2 give p = 0.02, 0.04, and 0.02, respectively. The corresponding values in decibels are

    ηT = −10 log (1/0.10 − 1) = −9.54 dB   (13.9)

    ηA2 = −10 log (1/0.02 − 1) = −16.90 dB   (13.10)

    ηB2 = −10 log (1/0.04 − 1) = −13.80 dB   (13.11)

    ηC2 = −10 log (1/0.02 − 1) = −16.90 dB   (13.12)

The process average under condition A2B2C2 is estimated additively:

    η = ηT + (ηA2 − ηT) + (ηB2 − ηT) + (ηC2 − ηT)
      = ηA2 + ηB2 + ηC2 − 2ηT
      = −16.90 − 13.80 − 16.90 − (2)(−9.54)
      = −28.52 dB   (13.13)

Converting back, the corresponding p in the fraction is

    p = 1/(1 + 10^(−η/10)) = 0.0014   (13.14)
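The decibel conversion and back-conversion of equations (13.8)–(13.14) can be sketched as follows (a minimal sketch; the function names are ours):

```python
import math

def omega_db(p):
    # Equation (13.8): eta = -10 log10(1/p - 1)
    return -10 * math.log10(1 / p - 1)

def db_to_fraction(eta):
    # Inverse of (13.8): p = 1/(1 + 10^(-eta/10))
    return 1 / (1 + 10 ** (-eta / 10))

# Current condition T and improved levels A2, B2, C2
eta_T  = omega_db(0.10)   # about -9.54 dB
eta_A2 = omega_db(0.02)   # about -16.90 dB
eta_B2 = omega_db(0.04)   # about -13.80 dB
eta_C2 = omega_db(0.02)   # about -16.90 dB

# Additive prediction of the process average, as in equation (13.13)
eta_opt = eta_A2 + eta_B2 + eta_C2 - 2 * eta_T   # about -28.52 dB
p_opt = db_to_fraction(eta_opt)                  # about 0.0014
```

The transformation makes the factor effects additive in decibels, which is why the prediction is made in dB and only then converted back to a fraction.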
13.3. Case with Two Classes, Two Types of Mistake
Consider a separating process such as copper smelting, which divides its input into product and slag. Let A be the input of copper, B the input of noncopper materials, q* the fraction of copper mistakenly sent to the slag, and p* the fraction of noncopper mistakenly left in the product. The observed fractions are

p: fraction of noncopper in the product
q: fraction of copper in the slag

From Table 13.1, the two types of mistakes are calculated as follows:

    p = Bp*/[A(1 − q*) + Bp*]   (13.15)

    q = Aq*/[Aq* + B(1 − p*)]   (13.16)
Table 13.1
Input/output of the copper smelting process

                          Output
Input        Product             Slag                Total
Copper       A(1 − q*)           Aq*                 A
Noncopper    Bp*                 B(1 − p*)           B
Total        A(1 − q*) + Bp*     Aq* + B(1 − p*)     A + B
294
Table 13.2
Input/output using p and q
Output
Input
Product
Slag
Total
Copper
1p
Noncopper
1q
Total
1pq
1p1
When the process is tuned so that less copper is lost to the slag, more noncopper material melts into the product: one mistake is traded for the other. A factor that reduces p but increases q is called a tuning factor. A tuning factor does not improve the separating function of the process; a good separating function reduces both p and q at the same time.
To find the control factors that can reduce p and q together, it is important to evaluate the function after tuning p and q to the same basis. For example, we want to compare which level of control factor A, A1 or A2, has the better separating function. Assume that the p and q values for A1 and A2 are p1, q1 and p2, q2, respectively. As described before, we cannot simply compare (p1 + q1) with (p2 + q2); we need to transform the data so that the comparison can be made on the same basis. For this purpose, the standard SN ratio is used. The SN ratio is calculated after tuning to the point where p = q. This fraction is denoted by p0, calculated as

    p0 = 1/{1 + √[(1/p − 1)(1/q − 1)]}   (13.17)

and the standard SN ratio is

    η = 10 log [(1 − 2p0)²/4p0(1 − p0)]   (13.18)
Example [1]
In a uranium concentrating process, the aim was to separate 235U from a mixture of 235U and 238U. The separating functions of two conditions were compared. Table 13.3 shows hypothetical results after separation. To calculate the standard SN ratio, the results in the table are converted into fractions to obtain Table 13.4.
Condition A1:

    p = 3975/5000 = 0.79500   (13.19)
Table 13.3
Results of experiment

                        A1                                  A2
Input      Product    Slag       Total       Product    Slag       Total
235U       1,025      38,975     40,000      1,018      38,982     40,000
238U       3,975      156,025    160,000     3,782      156,218    160,000
Total      5,000      195,000    200,000     4,800      195,200    200,000
    q = 38,975/195,000 = 0.19987   (13.20)

    p0 = 1/{1 + √[(1/p − 1)(1/q − 1)]}
       = 1/{1 + √[(1/0.79500 − 1)(1/0.19987 − 1)]}
       = 0.49602   (13.21)

    η = 10 log [(1 − 2p0)²/4p0(1 − p0)]
      = 10 log {[1 − (2)(0.49602)]²/(4)(0.49602)(1 − 0.49602)}
      = −41.981 dB   (13.22)
Table 13.4
Results in fractions

                        A1                                A2
Input      Product    Slag      Total      Product    Slag      Total
235U       0.20500    0.19987   0.40487    0.21208    0.19970   0.41178
238U       0.79500    0.80013   1.59513    0.78792    0.80030   1.58822
Total      1.00000    1.00000   2.00000    1.00000    1.00000   2.00000
Condition A2: similarly,

    η = −34.449 dB   (13.23)

Since its SN ratio is larger, condition A2 has the better separating function.
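Equations (13.17) and (13.18) can be checked numerically. This sketch (our own function name) reproduces the two conditions of the example to within rounding:

```python
import math

def standard_sn(p, q):
    """Standard SN ratio for two types of mistakes, equations (13.17)-(13.18)."""
    # Fraction at the tuned point where both mistakes are equal, equation (13.17)
    p0 = 1 / (1 + math.sqrt((1 / p - 1) * (1 / q - 1)))
    # Standard SN ratio in dB, equation (13.18)
    return 10 * math.log10((1 - 2 * p0) ** 2 / (4 * p0 * (1 - p0)))

eta_A1 = standard_sn(3975 / 5000, 38975 / 195000)   # about -42.0 dB
eta_A2 = standard_sn(3782 / 4800, 38982 / 195200)   # about -34.5 dB
```

The ratio is invariant to tuning along the trade-off between p and q, so the two conditions can be compared fairly even though their raw p and q values differ.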
13.4. Chemical Reactions without Side Reactions

Basic Function of Chemical Reactions
Letting the initial amount of the main raw material A be Y0 and the amount reacted by time T be Y, the fraction reacted by time T is Y/Y0. The reaction speed is this fraction differentiated by time, and it is proportional to the fraction not yet reacted:

    d(Y/Y0)/dT = β(1 − Y/Y0)   (13.25)

The amount shown in parentheses on the right side of (13.25) is the concentration of A remaining, denoted by p0:

    p0 = 1 − Y/Y0   (13.26)

so that

    dp0/dT = −βp0   (13.27)

Integrating with p0 = 1 at T = 0 gives

    p0 = e^(−βT)   (13.28)

or

    ln (1/p0) = βT   (13.29)

Setting y = ln(1/p0), the generic function is the zero-point proportional equation

    y = βT   (13.30)
The fraction remaining is measured at times T1, T2, ..., Tk and converted to y = ln(1/p), as shown in Table 13.5. Total variation:

    ST = y1² + y2² + ... + yk²   (f = k)   (13.31)

The total variation is decomposed into the variation due to the generic function and the variation caused by deviation from the generic function. Variation of proportional term:

    Sβ = L²/r   (f = 1)   (13.32)

Linear equation:

    L = y1T1 + y2T2 + ... + ykTk   (13.33)

Effective divider:

    r = T1² + T2² + ... + Tk²   (13.34)

Error variation:

    Se = ST − Sβ   (f = k − 1)   (13.35)

SN ratio:

    η = 10 log [(1/r)(Sβ − Ve)/Ve]   (13.36)
Table 13.5
Fraction remaining at different times

Time                                    T1    T2    ...   Tk
Fraction of remaining raw material      p1    p2    ...   pk
y = ln(1/p)                             y1    y2    ...   yk

SN Ratio of Chemical Reactions without Side Reactions
Figure 13.1
Fraction of remaining material and time

Error variance:

    Ve = Se/(k − 1)   (13.37)

Sensitivity:

    S = 10 log [(1/r)(Sβ − Ve)]   (13.38)

S is calculated to estimate the following:

    S = 10 log β²   (13.39)
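A numerical sketch of equations (13.31)–(13.39), using hypothetical fractions of remaining raw material (the variable names are ours):

```python
import math

# Hypothetical measurements: fraction of raw material remaining at each time
T = [1.0, 2.0, 3.0, 4.0]
p = [0.81, 0.68, 0.54, 0.44]   # roughly p = exp(-0.2 T) with small errors
y = [math.log(1 / pi) for pi in p]          # y = ln(1/p), equation (13.30)

r  = sum(t * t for t in T)                  # effective divider, (13.34)
L  = sum(yi * ti for yi, ti in zip(y, T))   # linear equation, (13.33)
ST = sum(yi * yi for yi in y)               # total variation, (13.31)
Sb = L * L / r                              # proportional term, (13.32)
Se = ST - Sb                                # error variation, (13.35)
Ve = Se / (len(T) - 1)                      # error variance, (13.37)
eta = 10 * math.log10((Sb - Ve) / r / Ve)   # SN ratio, (13.36)
S = 10 * math.log10((Sb - Ve) / r)          # sensitivity, (13.38)
```

Note that (Sb − Ve)/r estimates β², so S agrees with equation (13.39): here L/r ≈ 0.204, and S ≈ 10 log(0.204²).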
When substances A and B react to form the objective substance C and the side-reacted substance D, it is desirable to maximize the reaction speed for producing C and to minimize the one for D:

    A + B → C + D   (13.40)

Table 13.6
ANOVA table for chemical reactions

Source    f        S     V     E(V)
β         1        Sβ          σ² + rβ²
e         k − 1    Se    Ve    σ²
Total     k        ST
To do this, the following fractions are measured at time T, as shown in Table 13.7:

p0: fraction of unreacted raw material
p1: fraction of the objective product
p2: fraction of the side-reacted substance

From p0, p1, and p2, y1 and y2 are calculated to represent the two conditions M1 and M2, respectively. For the total reaction M1 and reaction speed β1,

    y1 = β1T   (13.41)

    y1 = ln (1/p0)   (13.42)

For the side reaction M2 and reaction speed β2,

    y2 = β2T   (13.43)

    y2 = ln [1/(p0 + p1)]   (13.44)

Since the objectives are to maximize the total reaction M1 and minimize the side reaction M2, the operating window is as shown in Figures 13.2 and 13.3. This is called the speed difference method.
The SN ratio is calculated from the following decomposition. Total variation:

    ST = y11² + y12² + ... + y2k²   (f = 2k)   (13.45)

Variation of proportional term:

    Sβ = (L1 + L2)²/2r   (f = 1)   (13.46)
Table 13.7
Data collection for dynamic operating window

Time                              T1     T2     ...   Tk
Fraction of raw material          p01    p02    ...   p0k
Objective product                 p11    p12    ...   p1k
Side-reacted substance            p21    p22    ...   p2k
M1: y1 = ln(1/p0)                 y11    y12    ...   y1k
M2: y2 = ln[1/(p0 + p1)]          y21    y22    ...   y2k

Figure 13.2
Dynamic operating window (shown in fraction)
Effective divider:

    r = T1² + T2² + ... + Tk²   (13.47)

Linear equations:

    L1 = y11T1 + y12T2 + ... + y1kTk   (13.48)

    L2 = y21T1 + y22T2 + ... + y2kTk   (13.49)

Figure 13.3
Dynamic operating window (after data transformation)

Variation of the operating window (difference between M1 and M2):

    SM = (L1 − L2)²/2r   (f = 1)   (13.50)

Error variation:

    Se = ST − Sβ − SM   (f = 2k − 2)   (13.51)

where Ve = Se/(2k − 2). Table 13.8 shows the ANOVA table for a dynamic operating window. SN ratio of the speed difference method:

    η = 10 log [(1/2r)(SM − Ve)/Ve]   (13.52)

Sensitivity of the operating window:

    S(M) = 10 log [(1/2r)(SM − Ve)]   (13.53)

Sensitivity of the total reaction:

    S(β) = 10 log [(1/2r)(Sβ − Ve)]   (13.54)
In chemical reactions, noise factors are usually not included in the experiment because of the cost increase. When there are side reactions, the reaction speed of the side-reacted product, β2, may be considered as noise. Therefore, the SN ratio may be written as the ratio of the two reaction speeds,

    η = β1²/β2²   (13.55)

or, in decibels,

    η = 10 log (β1²/β2²)   (13.56)
      = 10 log β1² − 10 log β2²   (13.57)
Table 13.8
ANOVA table for a dynamic operating window

Source    f         S     V     E(V)
β         1         Sβ          σ² + 2rβ²
M         1         SM          σ² + 2rβM²
e         2k − 2    Se    Ve    σ²
Total     2k        ST

Reaction Speed Ratio Method
For each time point, the reaction speed is estimated from the data y and the time T:

    β̂11 = y11/T1,  β̂12 = y12/T2,  ...,  β̂1k = y1k/Tk   (13.58)

    β̂21 = y21/T1,  β̂22 = y22/T2,  ...,  β̂2k = y2k/Tk   (13.59)

SN ratio for M1 (larger-the-better):

    η1 = −10 log (1/k)(1/β̂11² + 1/β̂12² + ... + 1/β̂1k²)   (13.60)

SN ratio for M2 (smaller-the-better):

    η2 = −10 log (1/k)(β̂21² + β̂22² + ... + β̂2k²)   (13.61)

Table 13.9
Data collection for the speed ratio method

            T1     T2     ...   Tk
y:   M1     y11    y12    ...   y1k
     M2     y21    y22    ...   y2k
β̂:   M1     β̂11    β̂12    ...   β̂1k
     M2     β̂21    β̂22    ...   β̂2k
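The two SN ratios of the speed ratio method, equations (13.60) and (13.61), can be sketched as follows. The β̂ values here are hypothetical (note that a zero estimate under M1 would break the larger-the-better form, which divides by β̂²):

```python
import math

def eta_larger_the_better(b):
    # Equation (13.60): larger-the-better SN ratio of the estimates under M1
    return -10 * math.log10(sum(1 / bi ** 2 for bi in b) / len(b))

def eta_smaller_the_better(b):
    # Equation (13.61): smaller-the-better SN ratio of the estimates under M2
    return -10 * math.log10(sum(bi ** 2 for bi in b) / len(b))

b1 = [0.10, 0.20, 0.40]   # hypothetical objective-reaction speed estimates
b2 = [0.01, 0.02, 0.04]   # hypothetical side-reaction speed estimates
eta1 = eta_larger_the_better(b1)    # about -16.41 dB
eta2 = eta_smaller_the_better(b2)   # about 31.55 dB
eta = eta1 + eta2                   # operating-window SN ratio, equation (13.85)
```

Adding the two decibel values corresponds to taking the ratio of the two speeds, which is why the method is called the speed ratio method.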
Example [2]
In a chemical reaction, the primary raw material, F0, reacts with another raw material to produce F3 after forming the intermediate products F1 and F2:

    F0 → F1 → F2 → F3   (13.63)

In the reaction, a side-reacted product was also produced. The fraction of side-reacted product is calculated by

    fraction of side-reacted product = 1 − (F0 + F1 + F2 + F3)   (13.64)

Table 13.10 shows the fractions of the constituents at different time segments.

SN Ratio Using the Speed Difference Method
The fraction of unreacted raw material, denoted by p, at time T is given by

    p = e^(−βT)   (13.65)

so that

    ln (1/p) = βT   (13.66)

Setting

    y = ln (1/p)   (13.67)

the ideal function, from equations (13.66) and (13.67), is written as a zero-point proportional equation:

    y = βT   (13.68)

The objectives are to maximize the reaction speed of the objective product (or, equivalently, the speed of reduction of the raw material) and to minimize the reaction speed of the side-reacted product.
In Table 13.10, F0 is the raw material, F1 and F2 are intermediate products, and F3 is the objective product. Therefore, the fraction of unreacted raw material is F0 + F1 + F2; let this total be p. Also, let the fraction of the objective product be q. The fraction of side-reacted product is then equal to 1 − p − q. Table 13.11 is calculated from Table 13.10.
The values of y to be used for the SN ratio are calculated as follows. At time T3, for example, the total reaction M1 is

    y13 = ln (1/p3) = ln (1/0.786) = 0.241   (13.69)
Table 13.10
Fractions (%) of constituents at time levels

                        Time (h)
             T0      T1      T2      T3      T4      T5      T6      T7      T8      T9      T10
             0.0     1.0     2.0     3.0     4.0     5.0     6.0     7.0     8.0     9.0     10.0
F0           100.00  36.88   5.52    0.94    0.39    0.11    0.04    0.03    0.01    0.0001  0.001
F1           0.00    51.86   46.04   19.08   9.59    2.85    1.13    0.64    0.37    0.32    0.12
F2           0.00    8.55    42.28   58.56   51.80   39.13   27.45   20.22   15.64   11.83   8.28
F3           0.00    2.71    5.80    20.42   36.21   53.72   65.21   71.93   76.19   79.80   83.39
Side-reacted
product      0.00    0.00    0.36    1.00    2.01    4.20    6.17    7.18    7.79    8.05    8.21
Total        100.00  100.01  100.00  100.00  100.00  100.00  100.00  100.00  100.00  100.00  100.00
Table 13.11
Fractions at time levels

                    Time (h)
            T1     T2     T3     T4     T5     T6     T7     T8     T9     T10
            1.0    2.0    3.0    4.0    5.0    6.0    7.0    8.0    9.0    10.0
M1: p       0.973  0.938  0.786  0.618  0.421  0.286  0.209  0.160  0.122  0.084
q           0.027  0.06   0.204  0.362  0.537  0.652  0.719  0.768  0.798  0.834
1 − p − q   0.000  0.004  0.010  0.020  0.042  0.062  0.072  0.078  0.081  0.082
M2: p + q   1.000  0.996  0.990  0.980  0.958  0.938  0.928  0.922  0.920  0.918
Similarly, the side reaction M2 at time T3 is

    y23 = ln [1/(p3 + q3)] = ln [1/(0.786 + 0.204)] = ln (1/0.990) = 0.0100   (13.70)
Table 13.12 shows the results of the conversion. The dynamic operating window is shown in Figure 13.4.
The SN ratio is calculated from Table 13.12. There are 10 time levels. Two linear equations, L1 and L2, are calculated:

    L1 = (0.0275)(1.0) + (0.0636)(2.0) + ... + (2.4770)(10.0) = 83.990   (13.71)

    L2 = (0.0000)(1.0) + (0.0036)(2.0) + ... + (0.0857)(10.0) = 3.497   (13.72)

Effective divider:

    r = 1² + 2² + ... + 10² = 385   (13.73)
Table 13.12
Results after conversion

                  Time (h)
          T1      T2      T3      T4      T5      T6      T7      T8      T9      T10
          1.0     2.0     3.0     4.0     5.0     6.0     7.0     8.0     9.0     10.0
M1: y1    0.0275  0.0636  0.2410  0.4815  0.8654  1.2518  1.5654  1.8326  2.1078  2.4770
M2: y2    0.0000  0.0036  0.0100  0.0203  0.0429  0.0636  0.0747  0.0812  0.0839  0.0857

Figure 13.4
Dynamic operating window
Variation of proportional term:

    Sβ = (L1 + L2)²/2r = (83.990 + 3.497)²/[(2)(385)] = 9.940   (f = 1)   (13.74)

Variation of the operating window:

    SM = (L1 − L2)²/2r = (83.990 − 3.497)²/[(2)(385)] = 8.4144   (f = 1)   (13.75)

Total variation:

    ST = 0.0275² + 0.0636² + ... + 2.4770² + 0.0000² + 0.0036² + ... + 0.0857²
       = 19.0262   (f = 20)   (13.76)

Error variation:

    Se = ST − Sβ − SM = 19.0262 − 9.9403 − 8.4144 = 0.6715   (f = 18)   (13.77)

Error variance:

    Ve = Se/18 = 0.6715/18 = 0.0373   (13.78)
Table 13.13
ANOVA table

Source    f     S          V
β         1     9.9403     9.9403
M         1     8.4144     8.4144
e         18    0.6715     0.0373
Total     20    19.0262

SN ratio of the speed difference method:

    η = 10 log [(1/2r)(SM − Ve)/Ve]
      = 10 log {[1/(2)(385)](8.4144 − 0.0373)/0.0373}
      = −5.35 dB   (13.79)

Sensitivity of the operating window:

    S(M) = 10 log [(1/2r)(SM − Ve)]
         = 10 log {[1/(2)(385)](8.4144 − 0.0373)}
         = −19.6 dB   (13.80)

Sensitivity of the total reaction:

    S(β) = 10 log [(1/2r)(Sβ − Ve)]
         = 10 log {[1/(2)(385)](9.940 − 0.0373)}
         = −17.6 dB   (13.81)
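The speed difference calculation of equations (13.71)–(13.79) can be replayed numerically from the converted data (a sketch with our own variable names; small differences in the last digit come from rounding in the tabulated y values):

```python
import math

T  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [0.0275, 0.0636, 0.2410, 0.4815, 0.8654,
      1.2518, 1.5654, 1.8326, 2.1078, 2.4770]   # M1: ln(1/p)
y2 = [0.0000, 0.0036, 0.0100, 0.0203, 0.0429,
      0.0636, 0.0747, 0.0812, 0.0839, 0.0857]   # M2: ln(1/(p+q))

r  = sum(t * t for t in T)                       # 385, equation (13.73)
L1 = sum(a * t for a, t in zip(y1, T))           # about 84.0, (13.71)
L2 = sum(a * t for a, t in zip(y2, T))           # about 3.50, (13.72)
Sb = (L1 + L2) ** 2 / (2 * r)                    # about 9.94, (13.74)
SM = (L1 - L2) ** 2 / (2 * r)                    # about 8.42, (13.75)
ST = sum(a * a for a in y1) + sum(a * a for a in y2)
Se = ST - Sb - SM                                # error variation, (13.77)
Ve = Se / 18                                     # error variance, (13.78)
eta = 10 * math.log10((SM - Ve) / (2 * r) / Ve)  # about -5.35 dB, (13.79)
```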
SN Ratio Using the Speed Ratio Method
The proportional constant is estimated at each time point by

    β̂ = y/T   (13.82)

where

    y = ln (1/p)   (13.83)

so that, for each signal level and time,

    β̂ij = yij/Tj   (13.84)

The values of β̂ estimated under M1 are larger-the-better, and the ones under M2 are smaller-the-better. Therefore, the larger-the-better SN ratio, η1, is calculated from the former values, and the smaller-the-better SN ratio, η2, is calculated from the latter values. The SN ratio of the operating window using the speed ratio method is calculated by

    η = η1 + η2   (13.85)

Table 13.14 shows the estimates of β̂ under M1 and M2.
From Table 13.14, η1 and η2 are calculated:

    η1 = −10 log (1/10)(1/0.0275² + 1/0.0318² + ... + 1/0.2477²) = −25.17 dB   (13.86)

    η2 = −10 log (1/10)(0.0000² + 0.0018² + ... + 0.0086²) = 42.57 dB   (13.87)

Therefore,

    η = η1 + η2 = −25.17 + 42.57 = 17.40 dB   (13.88)

Table 13.14
Estimates of proportional constant

                    Time (h)
          T1      T2      T3      T4      T5      T6      T7      T8      T9      T10
          1.0     2.0     3.0     4.0     5.0     6.0     7.0     8.0     9.0     10.0
M1: β̂1    0.0275  0.0318  0.0803  0.1204  0.1731  0.2085  0.2237  0.2289  0.2342  0.2477
M2: β̂2    0.0000  0.0018  0.0033  0.0051  0.0086  0.0106  0.0107  0.0101  0.0093  0.0086
References
1. Genichi Taguchi, 1987. System of Experimental Design. Livonia, Michigan: Unipub/American Supplier Institute.
2. Genichi Taguchi et al., 1999. Technology Development for Chemical, Pharmaceutical and Biological Industries. Tokyo: Japanese Standards Association.
Part V
Robust Engineering

14
System Design

14.1. Design and System Selection
14.2. System Functionality and Evaluation
References
The total variation of the output can be decomposed into a useful part and a harmful part:

    ST = Sβ + Se   (14.1)

The SN ratio is the first term on the right side of the equation divided by the second term. The actual calculation differs from one generic function to another; that is the main focus of quality engineering.
Many systems exist that can perform a specific objective function. If the wrong system is selected, its functionality cannot be improved. As a result, expensive tolerance design has to be performed, and the company loses its competitiveness in the market.
How much improvement can be accomplished depends not only on whether the system is new or old but also on the system's complexity. If a system is not complicated enough, it is difficult to improve its functionality by parameter design. It may seem that selecting and improving a complicated system would be costly. But if the gain (improvement in SN ratio) were 10 dB, the trouble caused in the market or during the production stage could be reduced to one-tenth through the process. If the gain were 20 dB, half of it (10 dB) could be consumed for quality improvement, and the other half could be used to loosen tolerances or increase production speed.
If a simple system were selected, there would be little possibility of improving its functionality, and the improvement would have to be made by conducting tolerance design, which controls the causes of variation and increases cost.
Example [3,4]
The resistance of a resistor can be measured by connecting its two ends to a power source, applying a certain voltage, and reading the current. Letting the resistance, voltage, and current be R, V, and I, respectively, R is given by

    R = V/I   (14.2)

This system is shown in Figure 14.1; the parameters are the unknown resistance, R, the power source, V, and the ammeter, X. Since it is so simple, it seems to be a good system from the cost viewpoint. However, parameter design cannot be conducted to improve the function of such a simple system; in this case it cannot be performed to improve the precision of measurement. Because there is no complexity in the system, there are no control factors to be studied for parameter design. In Figure 14.1 there is only one control factor: the power voltage. When there are variations either in the voltage or in the ammeter, there is no way to reduce their effects and thus reduce the error of the current measurement. The relative error of the resistance is given approximately by

    ΔR/R = ΔV/V − ΔI/I   (14.3)
Figure 14.1
Simplest system to measure a resistance
In a simple system such as that of Figure 14.1, the only way to reduce measurement error is to reduce the causes of variation: in this case, either to reduce the variation of the input voltage or to use a more precise ammeter.
To provide a constant voltage, we need a long-life power source with a small voltage variation, or we need to develop a constant-voltage power-supply circuit. No such ideal power source exists, and for the latter approach a complicated circuit is necessary. A precise ammeter must be a large, expensive unit. To improve quality, the use of a simple circuit would end up being too expensive. This is why a more complicated circuit such as the Wheatstone bridge (Figure 14.2) is used. In the figure, X is an ammeter, B is a rheostat (continuously variable resistor), C and D are resistors, E is the power voltage, A is the resistance of the ammeter, and F is the resistance of the power source.
A resistor of unknown resistance, y, is measured as follows: y is connected between a and b, and the rheostat, B, is adjusted until the reading of the ammeter, X, becomes zero. Then

    y = BD/C   (14.4)
Even when the ammeter indicates zero, there is actually a small amount of current flowing. There are also some variations of resistance in resistors C, D, and F, and there might be variation in the rheostat. In such cases,

    y = BD/C + (X/C²E)[A(D + C) + D(B + C)][B(C + D) + F(B + C)]   (14.5)
Compared with Figure 14.1, Figure 14.2 looks complicated, but by varying the values of control factors C or D, the impact of the variability caused by variations of the input power voltage or the ammeter can be reduced substantially.
Parameter design reduces the variation of an objective function by varying the levels of the control factors, without trying to reduce the sources of variation, which is costly and difficult to do. The more control factors in the system, the more possibility for improvement. The more complicated the relationship between the input signal (the unknown resistance to be measured) and the output (the reading of the rheostat), the higher the possibility for improvement.
Using a Wheatstone bridge, the variation of the power supply voltage has scarcely any effect on measurement error. Variation caused by the ammeter can often be reduced to one-hundredth. The effect of variation caused by an inexpensive battery that is difficult to improve can also be minimized to nearly zero.
But the variation caused by resistors C, D, and B cannot be improved through parameter design, so higher-grade resistors have to be used. How high the grade should be is determined in the last step of product design: tolerance design.
References
1. Genichi Taguchi, 1984. Parameter Design for New Product Development. Tokyo: Japanese Standards Association.
2. Nikkei Mechanical, February 19, 1996.
3. Genichi Taguchi et al., 1996. Management of Technology Development. Tokyo: Japanese Standards Association.
4. Yuin Wu et al., 1986. Quality Engineering: Product and Process Design Optimization. Livonia, Michigan: American Supplier Institute.
15
Parameter Design

15.1. Introduction
15.2. Noise
15.3. Parameter Design of a Wheatstone Bridge
15.4. Parameter Design of a Water Feeder Valve
References
15.1. Introduction
After a system is selected, a prototype is made for experimentation, or simulation is conducted without experimentation. If the prototype functions, or the results of the simulation are satisfactory, under certain (normal or standard) conditions, the system design is complete. "The prototype functions" indicates that it hits the objective target value under certain conditions.
The most important objective after the completion of system design is the determination of parameter midvalues (levels). This study is conducted by using the lowest-grade, least expensive raw materials and component parts. Tolerance design, which tightens tolerances to secure the function and better product quality, is accompanied by a cost increase. Therefore, tolerance design should be conducted after parameter design is complete.
In many cases of traditional product design, drawings and specifications are made for production immediately after the prototype functions under certain conditions only. No further studies are made until problems are reported or complaints occur in the marketplace. The product is then reviewed, adjusted, or redesigned, which is called firefighting. Firefighting is needed when there has not been robust design.
Good product design engineers study robustness after system design. The prototype is tested under some customers' usage conditions. Suppose that there are several extreme usage conditions (noise factors). Using the traditional approach, an engineer tests a product by varying one of the noise conditions. If the output response deviates from the target, the engineer adjusts one of the design parameters to hit the target. Then the second noise factor is varied. The output deviates again. The engineer then varies another control factor, or other control factors, to adjust the output to the target. Such procedures are repeated again and again until the target is hit under all extreme conditions. But this is not product design; it is merely operational work, called modification, adjusting, or tuning. It is extremely tedious, since this approach is similar to trying to solve several simultaneous equations, not mathematically but through hardware.
The foregoing approach is misguided because the engineer tries to hit the target first and reduce variability last. In parameter design, robustness must be improved first, and the target is adjusted last. This is called two-step optimization.
15.2. Noise
Variables that cause a product's function to vary are called noise. There are three types of noise:
1. Outer noise: variation caused by environmental conditions (e.g., temperature, humidity, dust, input voltage)
2. Inner noise: deterioration of elements or materials in the product
3. Between-product noise: piece-to-piece variation between products
Parameter design selects the best control-factor-level combination so that the effects of all of the noises above are minimized.
Example [1]
The objective of a TV set's power circuit is to convert a 100-V ac input into a 115-V dc output. If the power circuit maintains a 115-V output anytime, anywhere, the quality of this power circuit is perfect as far as voltage is concerned. For example, an element resistor (A) and the hFE value of a transistor (B) in a power circuit affect the output voltage as shown in Figure 15.1.
Suppose that a design engineer forecasts the midvalues of the elements in the circuit, assembles these to get a circuit, and puts 100 V ac into it. If the output voltage so obtained is only 100 V instead of 115 V, he then changes resistance A from 200 Ω to 250 Ω to adjust for the 15-V deviation from the target value. From the standpoint of quality control, this is very poor methodology. Assume that the resistance used in the circuit is the cheapest grade. It either varies or deteriorates within a maximum range of ±10% during its service period as the power source of a TV set. From Figure 15.1, we see that the output voltage then varies within a range of 6 V. In other words, the output voltage varies under the influence of inner noise, such as the original variation of the resistance itself, or due to deterioration.
If a resistance with a midvalue of 350 Ω is used instead, its influence on the output voltage is only 1 V, even if its variation is ±10% (±35 Ω). However, the output voltage then goes up by about 10 V, as seen from the figure. Such a deviation can be adjusted by a factor such as B, which has an almost linear influence on the output voltage. In this case, 200 is selected instead of 500 for the level of B. A factor such as B is used for this tuning.

Figure 15.1
Relationship between factors A and B and the output voltage
Parameter design is the most important step in developing stable and reliable products or manufacturing processes. With this technique, nonlinearity may be utilized positively. (Factor A has this property.) In this step we find a combination of parameter levels that is capable of damping the influences not only of inner noise but of all noise sources, while keeping the output voltage constant. At the heart of research lives a conflict: to design a product that is reliable within a wide range of performance conditions but at the lowest price. Naturally, elements or component parts with a short life and wide tolerance variation are used. The aim of the design of experiments here is the utilization of nonlinearity.
15.3. Parameter Design of a Wheatstone Bridge

Example
The purpose of a Wheatstone bridge is to measure a resistance denoted by y. Figure 15.2 shows the circuit diagram for the Wheatstone bridge. The task is to select the midvalues of parameters A, C, D, E, and F.
To measure the resistor y, the adjustable resistor B is adjusted to bring the reading of the ammeter, X, to zero. B is therefore not a control factor, but factors A, C, D, E, and F are. The levels of these control factors are selected as shown in Table 15.1.
Figure 15.2
Wheatstone bridge

Table 15.1
Three levels of control factors

Factor    Level 1   Level 2   Level 3
A (Ω)     20        100       500
C (Ω)     2         10        50
D (Ω)     2         10        50
E (V)     1.2       6         30
F (Ω)     2         10        50
Table 15.2
Three levels of noise factors

Factor    Level 1   Level 2   Level 3
A (%)     −0.3      0         +0.3
B (%)     −0.3      0         +0.3
C (%)     −0.3      0         +0.3
D (%)     −0.3      0         +0.3
E (%)     −5.0      0         +5.0
F (%)     −0.3      0         +0.3
X (mA)    −0.2      0         +0.2
When the bridge is balanced, the unknown resistance is

    y = BD/C   (15.1)

To investigate the error of measurement, assume that when the ammeter reads zero, there is actually a small amount of current flowing. In this case, the resistance, y, is not calculated by equation (15.1) but by

    y = BD/C + (X/C²E)[A(D + C) + D(B + C)][B(C + D) + F(B + C)]   (15.2)
Table 15.3
Direct product layout

Inner Array (L36)                        Outer Array (L36)
      A   e   C   D   E   F   ...  e
No.   1   2   3   4   5   6   ...  13    1       2       ...   36       SN Ratio
1     1   1   1   1   ...                y1,1    y1,2    ...   y1,36    η1
2     2   2   2   2   ...                y2,1    y2,2    ...   y2,36    η2
3     3   3   3   3   ...                y3,1    y3,2    ...   y3,36    η3
4     1   1   1   1   ...                y4,1    y4,2    ...   y4,36    η4
...                                      ...     ...     ...   ...      ...
36    3   2   3   1   ...                y36,1   y36,2   ...   y36,36   η36
Table 15.4
Actual levels of resistor A

                        Noise Factor A′
Control Factor A        Level 1, A′1    Level 2, A′2    Level 3, A′3
Level 1, A1 (20 Ω)      19.94           20.00           20.06
Level 2, A2 (100 Ω)     99.70           100.00          100.30
Level 3, A3 (500 Ω)     498.50          500.00          501.50
For each control-factor level, three-level noise factors are prepared: ±0.3% around the foregoing level of each resistor and ±5% around the 6 V of the battery. The midvalue of resistor B is always equal to 2 Ω, and the midvalue of the ammeter reading is always equal to zero. The three levels of the noise factors for experiment 2 of the inner orthogonal array are shown in Table 15.5.
For example, the actual levels of the noise factors for experiment 2 of the inner array and experiment 1 of the outer array are shown in the first column of Table 15.5: A = 99.7 Ω, B = 1.994 Ω, ..., X = 0.0002 A. Putting these figures into equation (15.2), y is calculated as

    y = (1.994)(9.97)/9.97 + {0.0002/[(9.97²)(5.7)]}
        × [(99.7)(9.97 + 9.97) + (9.97)(1.994 + 9.97)]
        × [(1.994)(9.97 + 9.97) + (9.97)(1.994 + 9.97)]
      = 2.1123   (15.3)

Table 15.5
Three levels of noise factors of experiment 2 of inner array

Factor    Level 1   Level 2   Level 3
A (Ω)     99.7      100.0     100.3
B (Ω)     1.994     2.0       2.006
C (Ω)     9.97      10.0      10.03
D (Ω)     9.97      10.0      10.03
E (V)     5.7       6.0       6.3
F (Ω)     9.97      10.0      10.03
X (A)     0.0002    0         −0.0002
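Equation (15.2) and the calculation in (15.3) can be replayed directly (a minimal sketch; the function name is ours):

```python
def bridge_reading(A, B, C, D, E, F, X):
    """Measured resistance of the Wheatstone bridge, equation (15.2)."""
    correction = (X / (C ** 2 * E)) \
        * (A * (D + C) + D * (B + C)) \
        * (B * (C + D) + F * (B + C))
    return B * D / C + correction

# Noise levels of experiment 2 (inner array), experiment 1 (outer array):
y = bridge_reading(A=99.7, B=1.994, C=9.97, D=9.97, E=5.7, F=9.97, X=0.0002)
# y is about 2.1123 ohms; against the true value of 2 the error is about 0.1123
```

With the ammeter offset X set to zero the correction term vanishes and the reading reduces to BD/C, as in equation (15.1).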
Assume that the true value of y is 2 Ω. Subtracting the true value gives the error entered in results column (1) of Table 15.6:

    2.1123 − 2 = 0.1123   (15.4)

Similar calculations are made for the other 35 configurations of the outer array to obtain the rest of results column (1) of Table 15.6. Since the configuration of experiment 2 represents the current levels (before parameter design), the current variability is calculated from the experiment 2 data. The total variation of the errors of these 36 pieces of data, denoted by ST, is

    ST = 0.1123² + 0.0000² + ... + (−0.0120)² = 0.31141292   (f = 36)   (15.5), (15.6)

These numbers were calculated by computer, with the control factors at the second level and the noise factors varying over the ranges given in Table 15.2. The total error variance, VT, is

    VT = ST/36 = 0.31141292/36 = 0.00865036   (15.7)

With a different combination of control factors, and the noise factors varying according to the levels in Table 15.2, the error variance of the measurements changes. For each of experiments 1 through 36 of the inner array, the sum of squares of the errors, Se, and the sum of squares of the general mean, Sm, are calculated:

    Se = (total of the squares of the 36 data) − (correction factor, Sm)   (f = 35)   (15.8)

    Ve = Se/35   (15.9)

    Sm = (total of the 36 data)²/36   (15.10)

The SN ratio, η, is

    η = [(1/36)(Sm − Ve)]/Ve   (15.11)

For experiment 2,

    Se = 0.31140717   (15.12)

    Ve = 0.31140717/35 = 0.008897347   (15.13)
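The SN ratio of equations (15.8)–(15.11) can be sketched as follows (the function name is ours; with the experiment 2 summary values quoted in the text, the last digits depend on rounding in Sm and Ve):

```python
import math

def nominal_the_best_sn(data):
    """SN ratio of equations (15.8)-(15.11) from one row of outer-array data."""
    n = len(data)
    total = sum(data)
    Sm = total ** 2 / n                    # correction factor, (15.10)
    Se = sum(d * d for d in data) - Sm     # error variation, (15.8), f = n - 1
    Ve = Se / (n - 1)                      # error variance, (15.9)
    return (Sm - Ve) / n / Ve              # SN ratio, (15.11)

# With the experiment 2 summary values quoted in the text:
Sm, Ve, n = 144.0024, 0.008897347, 36
eta = (Sm - Ve) / n / Ve                   # about 449.55
eta_db = 10 * math.log10(eta)              # decibel conversion
```

A larger η means the mean dominates the error variance, i.e., a more stable measurement.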
Table 15.6
Layout of noise factors and the data of measurement error

Noise factors A, B, C, D, E, F, and X are assigned to columns 1 to 13 of the L36 outer array.

No.    (1) Expt. 2    (2) Improved Condition
1      0.1123         0.0024
2      0.0000         0.0000
3      0.1023         0.0027
4      0.0060         0.0060
5      0.1079         0.0033
6      0.1252         0.0097
7      0.1188         0.0036
8      0.1009         0.0085
9      0.0120         0.0120
10     0.0120         0.0120
11     0.1012         0.0086
12     0.1079         0.0033
13     0.0950         0.0087
14     0.0120         0.0120
15     0.1132         0.0035
16     0.1241         0.0096
17     0.1317         0.0215
18     0.0120         0.0120
19     0.1201         0.0153
20     0.0000         0.0000
21     0.1250         0.0154
22     0.0060         0.0060
23     0.1247         0.0096
24     0.1138         0.0035
25     0.1129         0.0035
26     0.0951         0.0087
27     0.0120         0.0120
28     0.1186         0.0095
29     0.0060         0.0060
30     0.1197         0.0036
31     0.0060         0.0060
32     0.1133         0.0093
33     0.1194         0.0036
34     0.0957         0.0087
35     0.1194         0.0036
36     0.0120         0.0120
For experiment 2,

    Sm = [(2)(36) + 0.0006]²/36 = 144.00240000   (15.14)

    η = (1/36)(144.0024 − 0.008897347)/0.008897347 = 449.552   (15.15)

or, in decibels,

    η = 10 log [(1/36)(Sm − Ve)/Ve] = 10 log 449.552 = 26.7 dB   (15.16)
Analysis of SN Ratio
From the data for each of the 36 conditions of the inner orthogonal array, the SN ratios are calculated using equation (15.11). Table 15.7 shows the SN ratios converted to decibels. The SN ratio is the reciprocal measure of error variance. Accordingly, the optimum design is the combination of levels that maximizes the decibel value. For this purpose, the decibel value is used as the objective characteristic, and data analysis for the inner orthogonal array is based on that value. The method is the same as ordinary analysis of variance. For example, the main effect of A is (result from nontruncated data):
SA = (A1² + A2² + A3²)/12 − (ΣT)²/36     (15.17)
   = 3700.21     (f = 2)     (15.18)

where A1, A2, and A3 are the totals of the SN ratios at each of the three levels of A.
The other sums of squares, SC, SD, SE, SF, and ST, are calculated similarly. Table 15.8 is the ANOVA table for the SN ratio.
ST = 32.2² + 26.7² + ⋯ + 8.0² − (685.0)²/36     (15.19)
   = 11,397.42     (f = 35)     (15.20)
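The arithmetic of equations (15.14) to (15.16) can be sketched as follows. This is a minimal illustration using the values quoted above; the variable names are ours, not the handbook's.

```python
import math

# SN ratio of a measurement system: eta = (Sm - Ve) / (n * Ve),
# reported in decibels. Values are the handbook's example numbers.
n = 36
total = 72.0006                 # assumed sum of the 36 observations, so that
Sm = total ** 2 / n             # (15.14) gives the quoted 144.0024
Ve = 0.008897347                # error variance from the preceding ANOVA
eta = (Sm - Ve) / (n * Ve)      # (15.15): about 449.55
eta_db = 10 * math.log10(eta)   # (15.16): decibel value
```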
Table 15.7
Layout of control factors (inner array) and SN ratios (dB)
[L36 layout: factors A to F in columns 1 to 6 and error columns e in columns 7 to 13, with the resulting SN ratio for each of the 36 runs; the 26.7-dB entry is the example calculated above. Flattened table values omitted.]
Table 15.8
ANOVA table for the SN ratio

Source    f     S            V
A         2     3,700.21     1,850.10
C         2       359.94       179.97
D         2       302.40       151.20
E         2     4,453.31     2,226.65
F         2     1,901.56       950.97
e        25       680.00        27.20
Total    35    11,397.42
σ² = 0.00289623/36 = 0.00008045     (15.21), (15.22)
Compared with the result in equation (15.7), the error variance under the optimum condition is reduced to 1/107.5 of its original value, a gain of 20.32 dB. Such an improvement is much greater than the improvement obtainable by tolerance design when the varying range of each element is reduced to 1/10 (probably at a huge cost increase). In
Table 15.9
Estimate of significant factors: average of each level (decibels)
[Level averages of the significant factors; flattened table values omitted.]
Figure 15.3
Effects of significant factors
parameter design, all of the component parts used in the circuit are low-priced, with wide varying ranges. In fact, the role of parameter design is to reduce measurement error, achieve higher stability, and create a significant improvement in quality while using components and/or materials that are inexpensive and have widely varying ranges. In the case of product or process design, a significant quality improvement is expected by adopting this same method.
system harmonic was created, resulting in a loud, rolling noise through the heating pipes. The noise was considered by customers to be a significant nuisance that had to be minimized or eliminated. Figure 15.4 shows the configuration of the cartridge in the valve. A traditional one-factor-at-a-time approach was tried. The solutions, in particular the quad ring and the round poppet, caused ancillary difficulties and later had to be reversed.
Based on the Taguchi methods concept that "to get quality, don't measure quality," the engineers tried not to measure quality, that is, symptoms such as vibration and audible noise. Instead, the product function was discussed to define a generic function.
The generic function was defined as (Figure 15.5)

y = βMM*     (15.23)

where y is the flow rate, M the cross-sectional flow area, and M* the square root of the inlet pressure. The idea was that if the relationship above was optimized, problems such as vibration and audible noise should be eliminated. For calculation of the SN ratio, the zero-point proportional equation was used.
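Since the zero-point proportional equation is referenced here, a sketch of that SN-ratio calculation may help. The numbers below are illustrative, not the study's data; the effective signal is taken as the product of the two signal factors.

```python
import math

# Zero-point proportional SN ratio for y = beta * s, where the effective
# signal s = M * Mstar (flow area times square root of inlet pressure).
# All data values are hypothetical.
s  = [10.0, 20.0, 30.0, 10.0, 20.0, 30.0]   # effective signal levels
ys = [ 4.9, 10.2, 14.8,  5.2,  9.7, 15.3]   # measured flow rates

r  = sum(si * si for si in s)                        # effective divider
Sb = sum(si * yi for si, yi in zip(s, ys)) ** 2 / r  # variation due to slope
ST = sum(yi * yi for yi in ys)                       # total variation
Ve = (ST - Sb) / (len(ys) - 1)                       # error variance

beta   = sum(si * yi for si, yi in zip(s, ys)) / r
eta_db = 10 * math.log10((Sb - Ve) / (r * Ve))       # SN ratio (dB)
S_db   = 10 * math.log10((Sb - Ve) / r)              # sensitivity, 10 log beta^2
```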
Experimental Layout
Figure 15.6 illustrates the parameter diagram, which shows the control, noise, and signal factors. Table 15.10 lists the control, noise, and signal factors and their levels.
Orthogonal array L18 was used to assign the control factors. Noise and signal factors were assigned outside the array. Table 15.11 shows the layout, and Table 15.12 the calculated SN ratio and sensitivity. From Tables 15.11 and 15.12, response tables and response
Figure 15.4
Cartridge configuration
Figure 15.5
Ideal function
graphs for the SN ratio and sensitivity were prepared. Table 15.13 shows the response tables for the SN ratio and sensitivity. Figure 15.7 shows the response graph of the SN ratio.
Optimization and Confirmation
The optimum condition was selected as

A2 B2 C3 D2 E2 F2

The optimum SN ratio was predicted by adding together the larger effects, A, C, D, and E, where A2, C3, and so on denote level averages and T the overall average:

ηopt = T + (A2 − T) + (C3 − T) + (D2 − T) + (E2 − T)
     = A2 + C3 + D2 + E2 − 3T     (15.24)
     = −5.12 − 4.94 − 5.24 − 4.87 + (3)(5.59)
     = −3.40 dB     (15.25)

The optimum sensitivity was predicted using the larger effects on sensitivity, A, B, D, E, and F:

Sopt = T + (A2 − T) + (B2 − T) + (D2 − T) + (E2 − T) + (F2 − T)
     = A2 + B2 + D2 + E2 + F2 − 4T     (15.26)
     = 6.27 + 6.24 + 6.44 + 7.80 + 6.60 − (4)(6.45)
     = 7.55 dB     (15.27)
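The prediction above is just the additive model over the response-table averages. A sketch, with the level averages written as negative decibel values (the sign convention implied by the gain calculation against the initial -8.27 dB):

```python
# Additive prediction of the optimum SN ratio, equations (15.24)-(15.25):
# eta_opt = T + sum over significant factors of (level average - T).
T_bar = -5.59                                       # overall average SN (dB)
effects = {"A2": -5.12, "C3": -4.94, "D2": -5.24, "E2": -4.87}

eta_opt = T_bar + sum(v - T_bar for v in effects.values())
# eta_opt is -3.40 dB, matching equation (15.25)
```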
Figure 15.6
Parameter diagram
Table 15.10
Factors and levels

Control factors:
Factor                               Level 1        Level 2    Level 3
A: Fastening                         Not fastened   Fastened
B: Poppet design                     Hex            Square     *Hex
C: Seat design                       Flat           Conical    Spherical
D: Durometer                         80             60         90
E: Restrictor size (in.)             1/2            1/8        1/16
F: Cartridge side hole size (in.)    0.187          0.210      *0.187

Noise factor N (compounded): flow rate, drain rate (0.002/0.001), aging (new/aged), air cylinder speed (in./sec), and air cylinder direction (down, as valve closing / up, as valve opening).

Signal factors: M, flow area calculated as valve closing or opening; M*, square root of inlet pressure (levels 40, 80, and 150).
Table 15.11
Layout of experiments (inner array)

Trial   A             B         C          D           E            F
No.     Fastening     Poppet    Seat       Durometer   Restrictor   Cartridge Side
                      Design    Design                 Size         Hole Size
1       Not fastened  Hex       Flat       80          0.500        0.187
2       Not fastened  Hex       Conical    60          0.125        0.210
3       Not fastened  Hex       Spherical  90          0.063        *0.187
4       Not fastened  Square    Flat       80          0.125        *0.187
5       Not fastened  Square    Conical    60          0.063        0.187
6       Not fastened  Square    Spherical  90          0.500        0.210
7       Not fastened  *Hex      Flat       60          0.063        0.210
8       Not fastened  *Hex      Conical    90          0.500        *0.187
9       Not fastened  *Hex      Spherical  80          0.125        0.187
10      Fastened      Hex       Flat       90          0.125        0.210
11      Fastened      Hex       Conical    80          0.063        *0.187
12      Fastened      Hex       Spherical  60          0.500        0.187
13      Fastened      Square    Flat       60          0.500        *0.187
14      Fastened      Square    Conical    90          0.125        0.187
15      Fastened      Square    Spherical  80          0.063        0.210
16      Fastened      *Hex      Flat       90          0.063        0.187
17      Fastened      *Hex      Conical    80          0.500        0.210
18      Fastened      *Hex      Spherical  60          0.125        *0.187
The gain predicted for the SN ratio is calculated from equations (15.24) and (15.25) as

gain(η) = −3.40 − (−8.27) = 4.87 dB     (15.28)

The gain predicted for sensitivity is calculated from equations (15.26) and (15.27) as

gain(S) = 7.55 − 1.12 = 6.43 dB     (15.29)
Table 15.12
SN ratio and sensitivity (S = 10 log β²)
[SN ratio and sensitivity values for trials 1 to 18 omitted.]
Table 15.14 compares the gains between the optimum and initial conditions. Based on the new design, products were manufactured and shipped to the field. In the first year there were no problems with chattering. This success encouraged the company to start applying such robust design approaches to other projects.
Table 15.13
Response tables of SN ratio and sensitivity

Level     SN Ratio    Sensitivity (S = 10 log β²)
A1        −6.05       6.63
A2        −5.12       6.27
B1        −5.76       6.55
B2        −5.25       6.24
C1        −5.90       6.50
C2        −5.93       5.94
C3        −4.94       6.93
D1        −6.12       6.40
D2        −5.24       6.44
D3        −5.39       6.51
E1        −7.35       0.96
E2        −4.87       7.80
E3        −4.55       10.59
F1        −5.64       6.38
F2        −5.48       6.60
Average   −5.59       6.45
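Response tables like the one above are formed by averaging the SN ratios of the runs held at each factor level. A small sketch (layout and values are an illustrative subset, with signs assumed negative):

```python
# Form one column of a response table: average the SN ratio of every run
# held at each level of a chosen factor. Hypothetical 6-run subset.
level_of = {1: 1, 2: 2, 3: 3, 4: 1, 5: 2, 6: 3}          # trial -> level
sn = {1: -8.27, 2: -7.96, 3: -18.32, 4: -7.62, 5: -17.60, 6: -1.54}

response = {}
for level in (1, 2, 3):
    trials = [t for t, lv in level_of.items() if lv == level]
    response[level] = sum(sn[t] for t in trials) / len(trials)
# response[1] is the average of trials 1 and 4, and so on
```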
Figure 15.7
Response graph of SN ratio
Table 15.14
Results from the confirmation run

              SN Ratio (dB)            Sensitivity (dB)
           Predicted    Actual       Predicted    Actual
Initial      −8.27      −8.27          1.12        1.12
Optimum      −3.40      −4.25          7.55        8.64
Gain          4.87       4.02          6.43        7.52
After this success with quality improvement methods, the methods were studied further to see the influence of reducing the number of data points used for analysis. First, six data points for each noise factor condition were used for calculation. The results of optimization were the same; in other words, compared with the case where all data points were used, there was no difference in the selection of the optimum level for any individual control factor. Next, two data points, the maximum and minimum of each noise factor condition, were used for calculation. The results of optimization were again the same. These comparisons can be seen in Figures 15.8 and 15.9.
Figure 15.8
SN ratio comparison: all data points, six data points, and two data points
Figure 15.9
Sensitivity comparison: all data points, six data points, and two data points
References
1. Genichi Taguchi et al., 1979. Introduction to Off-Line Quality Control. Nagoya, Japan: Central Japan Quality Control Association.
2. Yuin Wu et al., 1986. Quality Engineering: Product and Process Optimization. Livonia, Michigan: American Supplier Institute.
3. Amjed Shafique, 2000. Reduction of chatter noise in 47-feeder valve. Presented at ASI's 18th Annual Taguchi Methods Symposium.
16
Tolerance Design

16.1. Introduction 340
16.2. Analysis of Variance 342
16.3. Tolerance Design of a Wheatstone Bridge 346
16.4. Summary of Parameter and Tolerance Design 349
Reference 351
This chapter is based on Yuin Wu et al., Quality Engineering: Product and Process Optimization. Livonia, Michigan: American Supplier Institute, 1986.
16.1. Introduction
Two specific sets of characteristics determine the quality level of a given product: the characteristics of the product's subsystems and/or component parts, called low-rank characteristics, and the characteristics of the product itself, called high-rank characteristics. Often, the specifications for the two come from different sources. The end-item manufacturer often provides the specifications for component parts, while the marketing or product planning activity might specify the end item itself. In any case, one function is responsible for determining the specifications of low-rank quality characteristics (or cause parameters), which will eventually affect high-rank quality characteristics. Determination of these low-rank specifications includes parameter design and tolerance design.
Parameter design, the subject of Chapter 15, is the determination of optimum component or parameter midvalues in such a way that variability is reduced. If, after these midvalues are determined, the level of variability is still not acceptable, components and materials must be upgraded (usually at a higher cost) to reduce variability further. The determination of acceptable levels of variability after midvalues have been established is tolerance design, the subject of this chapter.
Taguchi's Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury, and Yuin Wu.
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
Example
A 100-V, 50-Hz power supply across a resistance A in series with an inductance B results in a current of y amperes:

y = 100 / √{A² + [(2π)(50)B]²}     (16.1)
The customer's tolerance for this circuit is y = 10.0 ± 4.0 A. When the current is out of tolerance, the loss (including after-service), denoted by A, is $150. The annual production of this circuit is 200,000 units.
Currently, second-grade resistors, A, and inductors, B, are used. For resistor A (Ω), we have
Mean: mA = 9.92
Standard deviation: σA = 0.300
For inductor B (mH), we have
Mean: mB = 4.0
Standard deviation: σB = 0.80
The variation in the components due to 10 years of deterioration is included in the standard deviations σA and σB. The problem is to reduce the variability of the current by upgrading the resistor, the inductor, or both. If first-grade components were used, the varying ranges, including deterioration, would be reduced by one-half. The cost increases would be 12 cents for a resistor and $1 for an inductor. Taylor's expansion and Monte Carlo simulation have frequently been used to solve such problems of component upgrading to reduce variability.
Here the experimental design approach will be discussed. The noise factor levels are set in the following way:

Three-level factors: m − √(3/2)σ, m, m + √(3/2)σ     (16.2)
Two-level factors: m − σ, m + σ     (16.3)
The units for B are changed from millihenries to henries. Using the nine possible combinations of A and B, the output current is computed from equation (16.1). For example, the current that results when the resistor/inductor combination A1B1 is used is

y11 = 100 / √{A1² + [(2π)(50)B1]²} = 100 / √{9.55² + [(314.16)(0.00302)]²} = 10.42     (16.4)

Similar computations are made for each configuration to obtain Table 16.1. In the table, the target value of 10 A has been subtracted from each observation.
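The noise-level construction of equations (16.2) and (16.3) and the current calculation of equation (16.4) can be sketched directly:

```python
import math

# Equation (16.2): three noise levels at m -/+ sqrt(3/2)*sigma, and
# equation (16.1): y = 100 / sqrt(A^2 + (2*pi*50*B)^2), B in henries.
def three_levels(m, sigma):
    d = math.sqrt(1.5) * sigma
    return (m - d, m, m + d)

def current(A, B):
    return 100.0 / math.sqrt(A ** 2 + (2 * math.pi * 50 * B) ** 2)

A_levels = three_levels(9.92, 0.300)     # resistor levels (ohms)
B_levels = three_levels(0.004, 0.0008)   # inductor levels (henries)

y11 = current(A_levels[0], B_levels[0])  # about 10.42, as in (16.4)
```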
Example [1]
From Table 16.1, the following factorial effects (constituents of the orthogonal polynomial equation) are computed:
m: general mean
Al: linear term of A
Aq: quadratic term of A
Bl: linear term of B
Bq: quadratic term of B
AlBl: interaction of the linear terms of A and B
e: error term (including all higher-order terms)
The sums of squares of these terms are calculated as follows:

Sm = 0.06²/9 = 0.00040     (16.5)
Table 16.1
Results of two-way layout

         B1       B2       B3       Total
A1       0.42     0.38     0.33     1.13
A2       0.03     0.00    −0.04    −0.01
A3      −0.32    −0.35    −0.39    −1.06
Total    0.13     0.03    −0.10     0.06
Each sum of squares has the form S = (ΣWiTi)² / (r ΣWi²), where the Ti are level totals, the Wi are orthogonal polynomial coefficients, and r = 3 is the number of observations per total:

SAl = [(−1)(1.13) + (1)(−1.06)]² / [(3)(2)] = 0.79935     (16.6)

SAq = [(1)(1.13) + (−2)(−0.01) + (1)(−1.06)]² / [(3)(6)] = 0.00045     (16.7)

SBl = [(−1)(0.13) + (1)(−0.10)]² / [(3)(2)] = 0.00882     (16.8)

SBq = [(1)(0.13) + (−2)(0.03) + (1)(−0.10)]² / [(3)(6)] = 0.00005     (16.9)
To obtain the interaction AlBl, the linear effects of A at each level of B are written using coefficients W1 = −1, W2 = 0, and W3 = 1:

L(B1) = (−1)(0.42) + (0)(0.03) + (1)(−0.32) = −0.74
L(B2) = (−1)(0.38) + (0)(0.00) + (1)(−0.35) = −0.73     (16.10)
L(B3) = (−1)(0.33) + (0)(−0.04) + (1)(−0.39) = −0.72

SAlBl = [(−1)(−0.74) + (1)(−0.72)]² / [(2)(2)] = 0.00010     (16.11)

ST = 0.42² + 0.38² + ⋯ + (−0.39)² = 0.80920     (16.12)

Se = ST − (Sm + SAl + SAq + SBl + SBq + SAlBl) = 0.00003     (16.13)
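The orthogonal-polynomial sums of squares in equations (16.6) to (16.9) all follow one pattern, (ΣWiTi)² / (rΣWi²), which can be sketched as:

```python
# Sums of squares from level totals using orthogonal polynomial
# coefficients: (-1, 0, 1) for the linear term, (1, -2, 1) for the
# quadratic term; r = 3 observations per total (Table 16.1).
A = [1.13, -0.01, -1.06]       # row totals A1, A2, A3
B = [0.13, 0.03, -0.10]        # column totals B1, B2, B3
r = 3

def ss(totals, w):
    contrast = sum(wi * ti for wi, ti in zip(w, totals))
    return contrast ** 2 / (r * sum(wi * wi for wi in w))

S_Al = ss(A, (-1, 0, 1))       # 0.79935, equation (16.6)
S_Aq = ss(A, (1, -2, 1))       # 0.00045, equation (16.7)
S_Bl = ss(B, (-1, 0, 1))       # about 0.00882, equation (16.8)
S_Bq = ss(B, (1, -2, 1))       # 0.00005, equation (16.9)
```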
Table 16.2 is the ANOVA table. From an ANOVA table, a design engineer can decide what should be considered the error variance. This is exactly like the expansion of an exponential function: one can include terms up to a certain order as factors and assign the remainder to the error. However, it is important to evaluate the magnitude of the error based on such a consideration. In other words, there is freedom in selecting a model, but the error of the model has to be evaluated.
After comparing the variances, Bq and AlBl are pooled with the error to get (e). The pooled error sum of squares is 0.00018, with 5 degrees of freedom. The pooled error variance is

V(e) = 0.00018/5 = 0.000036     (16.14)
(16.14)
Sm Ve
0.00040 0.000036
(100)
(100) 0.045%
ST
0.80920
(16.15)
ST
0.89020
0.08991
9
9
(16.16)
Table 16.2
ANOVA table

Source    f     S           V           ρ (%)
m         1     0.00040     0.00040     0.045
Al        1     0.79935     0.79935     98.778
Aq        1     0.00045     0.00045     0.051
Bl        1     0.00882     0.00882     1.086
Bq        1     0.00005     0.00005
AlBl      1     0.00010     0.00010
e         3     0.00003     0.00001
(e)      (5)   (0.00018)   (0.000036)   0.040
Total     9     0.80920     0.08991     100.00
Resulting Loss
Tolerance design is done at the design stage, where the grades of the individual component parts are determined. For this purpose, it is necessary to acquire information regarding the loss function and the magnitude of the variability of each component part (standard deviation, including deterioration). In the current example, the loss function is

L = k(y − m)²     (16.17)

with k = 150/4.0², so that

L = (150/4.0²)σ² = $9.38σ²     (16.18)
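The loss-function constant in equation (16.18) is just the limit loss divided by the squared tolerance, k = A/Δ²; a one-liner makes the units explicit:

```python
# Quadratic loss: L = k * sigma^2, with k = (loss at tolerance limit) /
# (tolerance limit)^2. Here $150 at 4.0 A off target.
A0, Delta0 = 150.0, 4.0
k = A0 / Delta0 ** 2            # 9.375 dollars per (ampere squared)

def expected_loss(variance):
    """Average loss per unit for a given variance around the target."""
    return k * variance
```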
When components are upgraded, the new variance is

VN = VC[Al(HN/HC)² + Aq(HN/HC)⁴ + Bl(H′N/H′C)² + e]     (16.19)

where
VN = variance after upgrading (new)
VC = variance before upgrading (current)
Al = percent contribution of the linear term of A
Aq = percent contribution of the quadratic term of A
Bl = percent contribution of the linear term of B
HN = varying range of the new component A
HC = varying range of the current component A
H′N = varying range of the new component B
H′C = varying range of the current component B
In the case of component A, the loss due to using second-grade resistors is

L = 9.375VT(Al + Aq)/100 = (9.375)(0.08991)(0.98778 + 0.00051) = $0.8330     (16.20)

If a first-grade resistor is used, the varying range is halved, so the linear contribution is reduced to 1/4 and the quadratic contribution to 1/16:

L = (9.375)(0.08991)(0.98778/4 + 0.00051/16) = $0.2082     (16.21)

The quality improvement is therefore 0.8330 − 0.2082 = $0.6248.     (16.22)

Since there is a 12-cent cost increase for upgrading, the net gain would be

$0.6248 − $0.12 = $0.5048     (16.23)

A similar calculation for inductor B, whose linear contribution is 1.086%, gives a loss of (9.375)(0.08991)(0.01086) = $0.0092 for the second grade and $0.0023 for the first grade, an improvement that does not cover the $1 cost increase (see Table 16.3).     (16.24)
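The economics of equations (16.20) to (16.23) reduce to scaling each contribution by the range ratio (squared for linear terms, fourth power for quadratic terms) and netting out the upgrade cost:

```python
# Loss attributable to resistor A before and after halving its varying
# range; contributions are entered as fractions of the total variance.
k, VT = 9.375, 0.08991
rho_Al, rho_Aq = 0.98778, 0.00051

L_before = k * VT * (rho_Al + rho_Aq)              # (16.20), about $0.8330
L_after  = k * VT * (rho_Al / 4 + rho_Aq / 16)     # (16.21), about $0.2082
net_gain = (L_before - L_after) - 0.12             # (16.23), minus 12 cents
```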
Example
Consider the parameter design of the Wheatstone bridge example described in Chapter 15. It is assumed that the final optimum configuration is A1C3D2E3F1. To start the tolerance design, lower-priced elements are again considered, and the three
Table 16.3
Summary of tolerance design (cents)

Element    Grade     Price    Quality   Total
Resistor   Second    Base     83.30     83.30
           First     12       20.82     32.82
Inductor   Second    Base     0.92      0.92
           First     100      0.23      100.23
levels for each element are set closest to the optimum condition (with variations such as those shown in Table 15.2). In this step we usually cite more error factors than in the parameter design stage. In this example, however, the levels and error factors shown in Table 15.2 are investigated. Let the midvalue and the standard deviation of each element be m and σ, respectively. The three levels are prepared as follows:

First level: m − √(3/2)σ
Second level: m     (16.25)
Third level: m + √(3/2)σ

These are assigned in the orthogonal array L36. For this optimum configuration, the measurement errors of the 36 combinations of the outer-array factors are given in the last column of Table 15.6. To facilitate the analysis of variance, each of the 36 data points is multiplied by 10⁴ (so that the sums of squares become whole numbers), and the percent contribution of each factor is calculated. The main effects of these error factors are decomposed into linear and quadratic components using the orthogonal polynomial equation.
Table 16.4
ANOVA for tolerance design

Source    f      S
Al        1      1
Aq        1      0
Bl        1      86,482
Bq        1      2
Cl        1      87,102
Cq        1      0
Dl        1      87,159
Dq        1      0
El        1      0
Eq        1      0
Fl        1      0
Fq        1      1
Xl        1      28,836
Xq        1      0
e        21      41       (V = 1.95)
(e)     (32)    (44)      (V = 1.38)
Total    36      289,623
The level totals of factor A (after scaling) are A1 = −3, A2 = 3, and A3 = 3.     (16.26)

Sm = (A1 + A2 + A3)²/36 = (−3 + 3 + 3)²/36 = 9/36 = 0.25     (16.27)

SAl = [(−1)A1 + (1)A3]² / [(12)(2)] = [(−1)(−3) + 3]²/24 = 36/24 = 1.5     (f = 1)     (16.28)

SAq = [(1)A1 + (−2)A2 + (1)A3]² / [(12)(6)] = (−6)²/72 = 0.5     (f = 1)     (16.29)

Here the denominators are r(ΣWi²), with r = 12 observations per level total.
λ²S is the sum of squares of the Wi, which can be obtained from the orthogonal polynomial table.
Other factors are calculated similarly. Table 16.4 shows the results. Notice that the four linear effects, B, C, D, and X, are significantly large. Pool the small effects with the error to obtain Table 16.5. Because B, C, and D have large contributions, higher-quality (less varying) resistors must be used for these elements. Assume that premium-quality resistors with one-tenth of the original standard deviations are used for B, C, and D. For the ammeter, the first-grade product with one-fifth of the original reading error is used. As a result, the percent contributions of B, C, and D become one one-hundredth, and that of X one twenty-fifth, of the original. The error variance, Ve, is given as (see Table 16.5)
Ve = VT[(1/10)²(ρB + ρC + ρD) + (1/5)²ρX + ρe]     (16.30)

σ² = 0.000001058     (16.31)

The allowance, which is three times the standard deviation, becomes

Δ = 3σ = 3√0.000001058 = 0.0031     (16.32)
Table 16.5
Rearranged ANOVA table

Source    f     S          V          ρ (%)
Bl        1     86,482     86,482     29.86
Cl        1     87,102     87,102     30.07
Dl        1     87,159     87,159     30.09
Xl        1     28,836     28,836     9.96
(e)      32     44         1.38       0.02
Total    36     289,623    8,045.1    100.00
If only resistors B, C, and D are upgraded, the resulting error variance will be

Ve = (8045)[(0.01)(0.29860 + 0.30074 + 0.30093) + 0.09956 + 0.00017] = (8045)(0.108733) = 874.8     (16.33)
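Equation (16.30)'s bookkeeping, scaling each percent contribution by the squared ratio of new to old standard deviation, can be sketched as:

```python
import math

# Error variance after upgrading: B, C, D get 1/10 of their original
# standard deviations, the ammeter X gets 1/5 of its reading error.
# Contributions are Table 16.5's values, as fractions.
VT = 0.00008045
rho = {"B": 0.29860, "C": 0.30074, "D": 0.30093, "X": 0.09956, "e": 0.00017}

Ve = VT * ((1 / 10) ** 2 * (rho["B"] + rho["C"] + rho["D"])
           + (1 / 5) ** 2 * rho["X"]
           + rho["e"])
allowance = 3 * math.sqrt(Ve)       # (16.32), about 0.0031
```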
Example
A Wheatstone bridge is used in a resistor manufacturing plant to control the quality of resistors. The plant produces 120,000 resistors annually. The tolerance of the manufactured resistors is 2 ± 0.5 Ω. When a resistor is out of specification, the loss is $3. At the parameter design stage, there is no cost increase for optimization. At the tolerance design stage, the cost for upgrading components is $4 for a resistor and $20 for an ammeter. The annual cost for interest and depreciation combined is 50% of the upgrading cost.
Losses are calculated for the following cases:
1. Before parameter design [column (1) of Table 16.6]
2. After parameter design [column (2) of Table 16.6]
3. After tolerance design, upgrading only the resistors to the premium grade
4. After tolerance design, upgrading the resistors to the premium grade and the ammeter to the first grade
The loss due to measurement before parameter design is

L = (3/0.5²)σ² = (12)(0.00865036) = $0.103804     (16.34)

The loss after parameter design is

L = (3/0.5²)(0.00008045) = $0.000965     (16.35)

The loss after tolerance design, upgrading only the three resistors to the premium grade, is

L = (4)(0.5)(3)/120,000 + (3/0.5²)(0.00000875) = $0.000155     (16.36)
Table 16.6
Initial and optimum midvalues and variance of parameters

                  (1) Initial Midvalue    (2) Optimum Midvalue
A (Ω)             100                     20
C (Ω)             10                      50
D (Ω)             10                      10
E (V)             30
F (Ω)             10
Error variance    0.00865036              0.00008045
3σ                0.2790                  0.0269
Table 16.7
Product design stages and costs

Design Stage                 σ²           Quality L    Upgrading Cost   Total       Annual Loss
Before parameter design      0.00865036   $0.103804                     $0.103804   $12,456.00 (base)
After parameter design       0.00008045   0.000965                      0.000965    115.80
Tolerance design:
 Premium-grade resistors     0.00000875   0.000105     $0.000050        0.000155    18.60
 only
 Premium-grade resistors     0.00000106   0.000013     0.000883         0.000896    107.52
 plus first-grade ammeter
The loss after tolerance design, upgrading the resistors to the premium grade and the ammeter to the first grade, is

L = (upgrading cost per unit) + (3/0.5²)(0.000001058) = 0.000883 + 0.000013 = $0.000896     (16.37)

The best solution is therefore (3) above, that is, equation (16.36), as shown in Table 16.7.
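The comparison in equations (16.34) to (16.36) is a per-unit sum of quadratic quality loss and annualized upgrade cost; a sketch for the first three alternatives:

```python
# Per-unit loss = (A0 / Delta0^2) * sigma^2 + (annual upgrade cost) / units.
# A0 = $3 at the 0.5-ohm tolerance limit; 120,000 resistors per year.
A0, Delta0, units = 3.0, 0.5, 120_000
k = A0 / Delta0 ** 2                     # 12 dollars per ohm^2

def per_unit_loss(variance, annual_upgrade_cost=0.0):
    return k * variance + annual_upgrade_cost / units

before          = per_unit_loss(0.00865036)               # (16.34)
after_parameter = per_unit_loss(0.00008045)               # (16.35)
resistors_only  = per_unit_loss(0.00000875, 4 * 0.5 * 3)  # (16.36)
# Annual losses follow by multiplying each by 120,000
# (Table 16.7 lists $12,456.00, $115.80, and $18.60).
```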
Reference
1. Genichi Taguchi, 1987. System of Experimental Design. Livonia, Michigan: Unipub/American Supplier Institute.
17
Robust Technology Development

17.1. Introduction 352
17.2. Concepts of Robust Technology Development 353
    Two-Step Optimization 353
    Selection of What to Measure 354
    Objective Function versus Generic Function 355
    SN Ratio 355
    Orthogonal Array 356
    Flexibility 358
    Reproducibility 358
    Paradigm Shift 359
    Identification of the Generic Function 360
    SN Ratio and Sensitivity 360
    Use of Test Pieces 360
    Compounding Noise Factors 360
    Use of Orthogonal Arrays 361
    Calculation of Response Tables 361
    Optimization and Calculation of Gains 361
    Confirmatory Experiment 361
References 376
17.1. Introduction
Robust technology development is a revolutionary approach to product design. The idea is to develop a family of products so that development efforts do not have to be repeated for future products in the same family. This is possible because the generic function of the entire family of products is the same. Once the robustness of this generic function is maximized, all that remains to be done for a newly planned product is to adjust the output.
The optimization of a generic function can be started before product planning, in the research laboratory. Since there are no actual products, a generic function can be studied using test pieces, which are easier to prepare and less expensive.
Robust design is the design of a product that will be least affected by user conditions. Noise conditions must therefore be considered in the development. Since the development is conducted in a small-scale research laboratory, the conclusions obtained must be reproducible downstream, that is, in large-scale production and under a customer's conditions. To check reproducibility, the concepts of the SN ratio and the orthogonal array are used.
The most direct benefit of robust technology development is that the product development cycle time can be reduced dramatically. At the Quality Engineering Symposium in Nagoya, Japan, in 1992, the theme was: "Can we reduce R&D cycle time to one-third?" Case studies presented at the symposium offered convincing proof that it is possible to do so. If a product is shipped one year earlier than a competitor's, the profit potential is enormous.
Two-Step Optimization

Selection of What to Measure
In the 1989 ASI Taguchi Methods Symposium, the following was selected as the theme: "To get quality, don't measure quality!" The second "quality" here refers to the symptoms or subjects measured in firefighting. Typical examples are audible noise level and the chattering or vibration of a machine or a car.
In the case of a company manufacturing thermosetting and steel laminated sheets, cracking was measured before shipment [1]. No cracking was detected in the final product inspection, but cracking was noticed two or three years after shipment at a rate of 200 ppm in the market. With the method of testing used before shipment, the problem would not have been noticed even after an 80-hour test.
Parameter design was conducted by measuring the following generic function of the product: load is proportional to deformation.
In the study, evaluation was made using an 18-hour test instead of 80 hours, without observing cracks. A 4.56-dB gain was confirmed from the study.
Quality is classified into the following four levels:
1. Downstream quality (customer quality)
2. Midstream quality (specified quality)
3. Upstream quality (robust quality)
4. Origin quality (functional quality)
Downstream quality refers to the type of quality characteristics that are noticed by customers. Some examples in the auto industry are gasoline mileage, audible noise, engine vibration, and the effort it takes to close a car door. Downstream quality is important to management in an organization. However, it is of limited value to engineers in determining how to improve the quality of a product. Downstream quality serves to create a focus on the wrong thing and is the worst type of quality characteristic to use for quality improvement. Downstream quality, however, is easy to understand. If such quality characteristics were used for quality improvement, the statement would be: "To improve quality, measure that quality."
Midstream quality is also called specified quality, since it is specification-related quality, such as dimension, strength, or the content of impurities in a product. Midstream quality is important for production engineers, since it is essential to make the product to print. But many engineers today have begun to realize that making to specifications (print) does not always mean that good quality is achieved. It is slightly better to measure midstream quality characteristics than downstream ones, but not much.
Upstream quality is expressed by the nondynamic SN ratio. Since the nondynamic SN ratio relates to the robustness of a fixed output, upstream quality is the second-best quality characteristic to measure. It can be used to improve the robustness of a particular product instead of a group or family of products. However, the concept of the SN ratio is not easily understood by engineers.
Origin quality is expressed by the dynamic SN ratio. This is the best and most powerful type of quality and is the heart of robust technology development. In contrast to nondynamic SN ratios, which are used to improve the robustness of the average (the output of a particular product), the dynamic SN ratio is used to improve the generic function of a product (the outputs of a group of products). By using this type of quality characteristic, we can expect the highest probability of reproducing conclusions from the research laboratory downstream, that is, in large-scale production or the market. The use of origin quality therefore improves the efficiency of R&D. However, this type of quality level is the hardest for engineers to understand.
Why is downstream quality the worst, midstream the next worst, upstream the next best, and origin quality the best? This concept is based on philosophy as well as on actual experiments over the years. Origin quality can be described from the viewpoint of interactions between control factors, which we try to avoid, because the existence of such interactions implies that there is no reproducibility of conclusions. When downstream quality is used, these interactions occur most frequently. Such interactions occur next most frequently when midstream quality is used, less with upstream quality, and least with origin quality.
Objective Function versus Generic Function
The most important quality, origin quality, is the SN ratio of dynamic characteristics, the relationships between an objective function and a generic function. An objective function is the relationship between the signal factor input used by a customer and the objective output. In an injection molding process, for example, the user (production engineer) measures the relationship between mold dimension and product dimension. Mold dimension is the input signal, and product dimension is the objective output. It is the relationship between the user's intention and the outcome.
A generic function is the relationship between the signal factor input and output of the technical means (method) the engineer is going to use. In the case of an injection molding process, the process is the means (material) used to achieve a certain objective function. The engineer expects the material to have a certain physical property, such as the relationship between load and deformation.
In another example, the objective function of a robot is the relationship between the programmed spot the robot is supposed to travel to and the spot actually reached. If an engineer uses an electric motor to move the arm of the robot, this is the technical means. The generic function would be the relationship between the input rpm and the angle change of the robot's arm.
Generic function is related to physics. It is energy-related; therefore, there is additivity, or reproducibility. We want additivity for reaching conclusions before product planning. Conclusions from a small-scale study should be reproducible in large-scale manufacturing and under the customer's conditions. Therefore, generic functions are better to study than objective functions.
In some cases, a generic function cannot be used, for reasons such as a lack of technology to measure the input or output. In such cases, objective functions may be used, but it is always preferable to study generic functions.
SN Ratio
The SN ratio has been used since the beginning of the last century in the communications industry. For example, a radio receives the signal from a broadcasting station and reproduces the original sound. The original sound from the broadcasting station is the input, and the sound reproduced by the radio is the output. But there is noise mixed with the reproduced sound. A good radio catches mostly the original sound and is least affected by noise. Thus, the quality of a radio is expressed by the ratio of the power of the signal to the power of the noise. The decibel (dB) is used as the scale; for example, 40 dB indicates that the power of the signal is 10,000 times the power of the noise.
The useful part of a product's response is

y = βM     (17.1)

and the actual response is

y = f(M, x1, x2, ... , xn)     (17.2)

where y is the response, M the signal, and x1, x2, ... , xn are noise factors. In such a study it is important to find an equation that expresses the relationship precisely. A product with good functionality has a large portion of equation (17.1) included in equation (17.2). In quality engineering, the response of equation (17.2) is decomposed into the useful part, equation (17.1), and the harmful part, the rest. The latter is the deviation from the useful part:

f(M, x1, x2, ... , xn) = βM + [f(M, x1, x2, ... , xn) − βM]     (17.3)
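The decomposition in equation (17.3) can be sketched numerically: fit the useful part βM by least squares through the origin, and what remains is the harmful part.

```python
# Split observed responses into useful part beta*M and the deviation
# from it, per equation (17.3). Data are illustrative.
Ms = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]        # signal levels (two noise runs)
ys = [1.1, 1.9, 3.2, 0.9, 2.1, 2.8]        # observed responses

r = sum(M * M for M in Ms)
beta = sum(M * y for M, y in zip(Ms, ys)) / r    # slope through the origin
useful  = [beta * M for M in Ms]
harmful = [y - u for y, u in zip(ys, useful)]    # deviation from useful part
```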
Flexibility
The nondynamic SN ratio was used widely and successfully in the 1970s to optimize the robustness of one target or one particular product. But it would be much better and more powerful if multiple targets were optimized in one study. Dynamic SN ratios are used for this purpose. The dynamic SN ratio can improve linearity, improve sensitivity, and reduce variability, as we have said.
In the case of an injection molding process, the objective function is that the input dimension be proportional to the output dimension. If linearity, sensitivity, and variability are all improved, the change is as shown in Figure 17.1, from (a) to (b).
After optimization, the process can produce any product having dimensions within the range of the outputs studied. Therefore, one study is good enough to optimize a group of products. In other words, one dynamic SN ratio is used to evaluate all three aspects for the products within the ranges of the study.
A generic function may be used to optimize the same injection molding process. For example, the input and output may be load and deformation, respectively; or the input and output could be the weight of the product measured in and out of water. What type of generic function should be used depends on
Figure 17.1 Input/output for an objective function (output: product dimension;
noise conditions N1, N2; panels (a) and (b))

Reproducibility

Paradigm Shift

Figure 17.2 Input/output for a generic function (input: load; output:
deformation; noise conditions N1, N2)
Identification of the Generic Function

Problems cannot be solved by observing symptoms. In one case in the auto
industry, for example, the audible noise level of a brake system was observed to
solve the problem of squealing, but this did nothing to solve the problem in the
long term. In another case, symptoms such as wear and audible noise level were
observed in a study of timing belts. Those problems disappeared completely after
dynamic SN ratios were used for evaluation rather than inspection and
observation.

Ideally, the generic function would be used to calculate the dynamic SN ratio.
If technology is lacking, the objective function of a system may be used
instead. For computer simulation, a nondynamic SN ratio may be used, since
nonlinearity is not reflected in the software in most cases.
SN Ratio and Sensitivity

Compounding Noise Factors

To conduct a study efficiently, noise factors are compounded to set two or three
conditions, such as:

N1:
N2:

or

N1:
N2: standard condition
N3:
In the case of simulation, noise factors may not have to be compounded, since
calculations are easier than physically conducting experiments. But in
simulation, it is hard to include deterioration or customer conditions.
Piece-to-piece variability may be used to simulate these two types of noise
factors.

It is unnecessary to include all noise factors in a study. One can use just a
few important noise factors, compounded or not. Generally, noise factors related
to deterioration and customer conditions are preferable for experimentation over
factors related to piece-to-piece variation.

Use of Orthogonal Arrays

Orthogonal arrays are used to check the existence of interactions. It is
recommended that one use L12 and L18 arrays for experimentation or L36 arrays
for simulation.
Calculation of Response Tables

Response tables for the SN ratio and sensitivity are used for two-step
optimization. No ANOVA tables are necessary for selection of the optimum levels
of control factors.
Optimization and Calculation of Gains

The optimum condition is determined from the two response tables: the table of
SN ratios and the table of sensitivities. The SN ratios of the current condition
and the optimum condition are calculated, and the predicted gain is calculated
from the difference. The same is done for sensitivity.
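As a rough sketch of how a response table and a predicted gain might be computed: average the SN ratios of all runs at each level of a factor, pick the level with the highest average, and predict the optimum SN ratio from the grand mean plus the level effects (the standard additivity assumption). The factor names, levels, and SN values below are invented for illustration.

```python
# SN ratios (dB) from eight hypothetical runs of a small orthogonal-array
# style experiment, with two factors A and B at two levels each.
runs = [
    # (A level, B level, SN ratio of the run)
    (1, 1, 12.1), (1, 1, 11.8), (1, 2, 14.0), (1, 2, 13.6),
    (2, 1, 15.2), (2, 1, 15.5), (2, 2, 17.1), (2, 2, 16.9),
]

def response_table(runs, factor_index):
    """Average SN ratio at each level of one factor."""
    levels = {}
    for run in runs:
        level, sn = run[factor_index], run[-1]
        levels.setdefault(level, []).append(sn)
    return {lv: sum(v) / len(v) for lv, v in levels.items()}

table_A = response_table(runs, 0)
table_B = response_table(runs, 1)

# Step 1 of two-step optimization: choose the level with the highest SN ratio.
best_A = max(table_A, key=table_A.get)
best_B = max(table_B, key=table_B.get)

# Predicted SN ratio at the optimum = grand mean + sum of level effects.
grand = sum(r[-1] for r in runs) / len(runs)
predicted = grand + (table_A[best_A] - grand) + (table_B[best_B] - grand)

# The predicted gain is this value minus the same prediction made for the
# current condition; a confirmatory experiment then checks reproducibility.
```

The same calculation, repeated on the sensitivity values, gives the second response table used in step 2 (adjusting the output to the target).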
Confirmatory Experiment

Under the current and optimum conditions, confirmatory experiments are run and
their SN ratios and sensitivities calculated; the gain is then computed. The
gain must be close enough to the predicted gain to show good reproducibility.
How close it should be is determined from the engineering viewpoint rather than
by calculating percent deviation. If the two gains are not close enough, it
indicates poor reproducibility, and the quality characteristic used must be
reexamined.
Example [2]
To save energy in copy machines and printers, a new fusing system was developed
using a resistance heater that warms up in a short period of time. The project
was scheduled to be completed in one year. In the midst of development, there
were mechanical and electrical design changes that affected progress. By the use
of robust technology development approaches, however, development was completed
within the scheduled time. From past experience it was estimated that the
development would have taken two years if traditional approaches had been used.

The electrophotographic system requires a high-temperature roller that fixes
resin toner to the paper by heat fusion. In this system, 90% of the power is
consumed during idle time (the time spent waiting when the machine is not
running). To save energy, it would be ideal to apply no heating during waiting
and to apply heat only when needed. But heating takes time, which makes users
wait.

To solve this problem, a new system that heats up to a high temperature in a
short period of time was to be developed in one year. This system was named the
Minolta advanced cylindrical heater (MACH).
Traditional Fixing System
Figures 17.3 and 17.4 show the structure of the traditional and the MACH heat
roller, respectively. The differences between the two systems are in the heat
source and the method of power transformation. In the former system, heat is
transferred by radiation from a halogen heater; in the latter, it is transferred
directly by a resistor. To heat the roller surface to 180°C, the surface
temperature of the halogen lamp must reach 350°C, whereas the surface of the
resistor heater needs to reach only 180°C.

Figure 17.3 Traditional roller
Functions to Be Developed

Figure 17.5 shows the functions of the system. Of those functions, the following
four were selected for development:

Power supply function
Heat-generating function
Separating function
Temperature-measuring function

These functions were developed independently and concurrently because (1) there
was a short period of time for development, (2) several suppliers would be
involved, (3) any delay by one supplier would affect the entire project, and (4)
no accumulated technology was available.

Development of the Power Supply Function

This system was constructed of a carbon brush and power-receiving parts. It was
considered ideal that the power be transformed without loss.
Figure 17.4 MACH heat roller

Figure 17.5 Functions of the system

Figure 17.6 Ideal function of power supply

Figure 17.7 Test piece for power supply function

The SN ratios and sensitivities of the runs were calculated, and response curves
were drawn as shown in Figures 17.8 and 17.9.

The levels with circle marks in the figures were selected as the optimum
conditions. Table 17.2 shows the results of the confirmatory experiment. Factors
A, C, and E were used for estimation. The gain of the SN ratio was fairly
reproduced.
Table 17.1 Control factors for power supply function

Factor              Level 1    Level 2    Level 3
A                   Low        High
B  Brush shape                 II         III
C  Pressure         Weak       Medium     Strong
E  Brush material   Small      Medium     Large
G  Brush area       Small      Medium     Large
H  Holder distance  Small      Medium     Large
Figure 17.8 SN ratio of power supply function
The gain of sensitivity was small, but the power loss at the brush was reduced
by half.

Figures 17.10 and 17.11 show the results of confirmation under the current and
optimum conditions, where twice the range of the signal factor in the orthogonal
array experiment was used. They show a higher power transformation efficiency
and a smaller variation due to noise.

Development of the Heat-Generating Function

The heat-generating part is constructed of a lamination of core metal, an
insulating layer, and a heat-generating element. The ideal function is to
transfer power to heat efficiently and uniformly.

Input: square root of input power (three levels)
Output: square root of temperature rise
Noise: four measuring positions and one non-heat-generating position (five
levels)

Figure 17.9 Sensitivity of power supply function
Table 17.2 Results of confirmation

                     SN Ratio                    Sensitivity
                     Estimation  Confirmation   Estimation  Confirmation
Initial condition    12.18       6.16           0.89        0.93
Optimum condition    18.22       10.98          0.40        0.81
Gain                 6.04        4.82           -0.49       -0.12
Figure 17.10 Input/output of initial condition

Figure 17.11 Input/output of optimum condition
Figure 17.12 shows the ideal function, and Figure 17.13 shows the test piece for
the heat-generating function. Heat-generating positions and materials were cited
as control factors, as shown in Table 17.3. From the results for SN ratio and
sensitivity, response graphs were plotted as shown in Figures 17.14 and 17.15.
Factors A, B, C, D, and F were used to estimate the SN ratio and sensitivity of
the optimum condition. The results of the confirmatory experiments are shown in
Table 17.4. The input/output characteristics of the initial and optimum
conditions are plotted in Figures 17.16 and 17.17. From the figures, it can be
seen that the temperature variation of the optimum condition was improved
significantly compared with the initial condition.
Figure 17.12 Ideal function of heat-generating function

Figure 17.13 Test piece for heat-generating function

Table 17.3 Control factors for heat-generating function

Factor            Level 1    Level 2    Level 3
A                 Face       Reverse
B  Material I
C  Material II    Small      Medium     Large
D  Material III   Small      Medium     Large
F  Material IV

Figure 17.14 SN ratio of heat-generating function

Figure 17.15 Sensitivity of heat-generating function

Development of the Peel-Off Function

The peel-off unit is constructed of a core metal coated with primer and fluorine
resin layers. It is ideal that the peel-off force of melted toner from the unit
be small, as shown in Figure 17.18. For the test pieces, plain peel-off layers
were used for simplicity and easy preparation. The plates were heated, an
unfixed toner layer was pressed and melted, and then the peel-off force was
measured. Figure 17.19 shows the test piece.

Table 17.5 shows the control factors. Since the time to complete development was
limited, the control factors related to fluorine resin were not studied; only
manufacturing-related control factors were studied. The optimum condition was
determined from the response graphs in Figures 17.20 and 17.21.

Results of the confirmatory experiment are shown in Table 17.6. They show poor
reproducibility. But from Figures 17.22 and 17.23, one can see that the peel-off
force under the optimum condition became smaller. As a result, a peel-off force
that would satisfy the product requirement was obtained. The poor
reproducibility was probably caused by uneven setting of the noise factor
conditions in the experiment.
Development of the Temperature-Measuring Function

A unit constructed of a thermistor and a sponge measures temperature (Figure
17.24). The ideal function is that the temperature of the object measured be
proportional to the reading, as shown in Figure 17.25. The temperature reading
was transformed from the voltage reading. The construction of the thermistor,
resistance, and so on, was studied using the control factors shown in Table
17.7.

From the response graphs shown in Figures 17.26 and 17.27, the optimum
conditions were selected from the SN ratio. Factor B was selected based on
consideration of self-heat generation. Factors G and H were selected based on
cost.

Table 17.8 shows the results of confirmation. All factors were used to estimate
both the SN ratio and sensitivity. Both gains showed good reproducibility. From
Figures 17.28 and 17.29, the optimum condition has a better linearity. The study
Table 17.4 Results of confirmation

                     SN Ratio                    Sensitivity
                     Estimation  Confirmation   Estimation  Confirmation
Initial condition    -10.32      0.01           -0.02       -0.05
Optimum condition    -0.79       10.94          1.70        3.26
Gain                 9.53        10.93          1.72        3.31
Figure 17.16 Input/output of initial condition

Figure 17.17 Input/output of optimum condition

Figure 17.18 Peel-off function

Figure 17.19 Test piece for peel-off function
Table 17.5 Control factors for peel-off function

Factor                 Level 1    Level 2    Level 3
A  Heat treatment      Yes        No
B  Baking condition 1  Small      Medium     Large
C  Material
D  Baking condition 2  Small      Medium     Large
E  Film thickness      Small      Medium     Large
F  Baking condition 3  Small      Medium     Large
G  Baking condition 4  Small      Medium     Large
H  Pretreatment        No         Small      Large

Figure 17.20 SN ratio of peel-off function

Figure 17.21 Sensitivity of peel-off function
Table 17.6 Results of confirmation

                     SN Ratio                    Sensitivity
                     Estimation  Confirmation   Estimation  Confirmation
Initial condition    14.03       21.25          4.51        -0.23
Optimum condition    22.22       24.66          7.00        8.84
Gain                 8.19        3.41           2.49        9.07
Figure 17.22 Input/output of initial condition

Figure 17.23 Input/output of optimum condition

Figure 17.24 Test piece for temperature-measuring function

Figure 17.25 Ideal function of temperature measurement
Table 17.7 Control factors for the temperature-measuring function

Factor                         Level 1    Level 2    Level 3
A  Surface film
B  Pressure-dividing resistor  2 kΩ       4 kΩ       8 kΩ
C  Thermistor chip                        II         III
D  Thermistor resistor         0.3 kΩ     0.55 kΩ    1 kΩ
E  Sponge
F  Plate material
G  Plate thickness             Small      Medium     Large
H  Plate area                  Small      Medium     Large
Figure 17.26 SN ratio of the temperature-measuring function

Figure 17.27 Sensitivity of the temperature-measuring function
Table 17.8 Results of confirmation

                     SN Ratio                    Sensitivity
                     Estimation  Confirmation   Estimation  Confirmation
Initial condition    -16.39      -23.95         -2.39       4.2
Optimum condition    -13.01      -20.87         0.12        2.75
Gain                 3.38        3.08           2.51        1.67
Figure 17.28 Input/output of initial condition

Figure 17.29 Input/output of optimum condition

was continued, and the sensitivity could be improved almost to 1, so it was
concluded that no forecasting control or adjustment would be necessary.
References

1. Katsura Hino et al., 1996. Optimization of bonding condition of laminated
thermosetting decorative sheets and steel sheets. Presented at the Quality
Engineering Forum Symposium.

2. Eiji Okabayashi et al., 2000. Development of a new fusing system using four
types of test pieces for four generic functions. Journal of Quality Engineering
Society, Vol. 8, No. 1.
18

Robust Engineering: A Manager's Perspective
18.1. Introduction
Robust design, a concept as familiar as it is misunderstood, will be clarified
in this chapter. After we cover robust design, we discuss tolerance design,
which is used to optimize tolerances at the lowest cost increase. It is
important to note that robust design is not the same as tolerance design.

We conduct robust design optimization by following the famous two-step process
created by Genichi Taguchi: (1) minimize variability in the product or process
(robust optimization), and (2) adjust the output to hit the target. In other
words, first optimize performance to get the best out of the concept selected,
then adjust the output to the target value to confirm whether all the
requirements are met. The better the concept can perform, the greater our
chances to meet all requirements. In the first step we try to kill many birds
with one stone, that is, to meet many requirements by doing only one thing. How
is that possible?

We start by identifying the ideal function, which will be determined by the
basic physics of the system, be it a product or process. In either case, the
design will be evaluated by the basic physics of the system. When evaluating a
product or a manufacturing process, the ideal function is defined based on
energy transformation from the input to the output. For example, for a car to go
faster, the driver presses down on the gas pedal, and that energy is transformed
into increased speed by sending gas through a fuel line to the engine, where it
is burned, and finally to the wheels, which turn faster.

When designing a process, energy is not transformed, as in the design of a
product, but information is. Take the invoicing process, for example. The
supplier
Figure 18.1 Two field-goal kickers
This dissonance between the two perspectives demonstrates that the traditional
view of quality ("good enough") is not good enough for remaining competitive in
the modern economy. Instead of just barely meeting the lowest possible
specifications, we need to hit the bull's-eye. The way to do that is to replace
the oversimplified over/under bar with a more sophisticated bull's-eye design,
where the goal is not merely to make acceptable products but to reduce the
spread of darts around the target.

The same is also true on the other side of the mark. In the old system, once you
met the specification, that was that. No point going past it. But even if we're
already doing a good job on a particular specification, we need to look into
whether we can do it better and, if so, what it would cost. Would improving pay
off?

You might wonder why a company should bother making already good designs into
great designs. After all, there's no extra credit for exceeding specifications.
Or is there?

Robust design requires you to free your employees, and your imaginations, to
achieve the optimum performance by focusing on the energy/information
transformation described earlier. This notion of having no ceiling is important,
not just as a business concept but psychologically as well. The IRS, of course,
tells you how much to pay in taxes, and virtually no one ever pays extra. Most
taxpayers do their best to pay as little as legally possible. Charities, on the
other hand, never tell their donors what to pay, which might explain why
Americans are by far the most generous citizens in the world in terms of
charitable giving.

The point is simple: Don't give any employee, team, or project an upper limit.
Let them optimize and maximize the design for robustness. See what's possible,
and take advantage of the best performances you can produce! Let the sky be the
limit and watch what your people can do! A limitless environment is a very
inspiring place to work.

The next big question is: Once the energy/information transformation is
optimized, is the design's performance greater than required? If so, you've got
some decisions to make. Let's examine two extreme cases.

When the optimum performance exceeds the requirements, you have plenty of
opportunities to reduce real cost. For example, you can use the added value in
other ways, by using cheaper materials, increasing tolerances, or speeding up
the process.
Baseline Philosophy: Quality Loss Function
A tool called the quality loss function (QLF) is very helpful in estimating the
overall loss resulting from deviations of the function from the target. Of
course, before you decide to minimize function variation to make the function
more efficient, you need to be sure that the cost of doing so is less than the
cost of maintaining the status quo. If you're losing more money from the
variations than it would cost to reduce them, you have a very convincing case
for taking action. If not, you're better off spending the money elsewhere.

We refer to the quality loss function again in Section 18.3. Because the QLF is
used to estimate the cost of functional variations, it is an excellent tool to
assist in the decision making that evaluating the cost-performance trade-off
requires. We discuss next how the QLF philosophy can motivate corporations to
adopt the culture of robust design.
The quality loss function can be put to use for several tasks, including
estimating the loss associated with one part of a product or process, the entire
item, or even an entire population of products or services if, say, one is
looking at something like distribution that affects a wide range of products.

I'll show you how this works mathematically, but don't be discouraged if you're
not a math whiz. I'll also show how it works in everyday life, which is usually
all you'll need to know.

Let's call the target t and the response y. To estimate the loss for a single
product or part (or for a service or step, for that matter), we need to
determine two things: (1) how far off a product or process is from the target
(y - t), and (2) how much that deviation is costing in dollars. We accomplish
this by calculating the deviation (y - t), squaring it, and then dividing the
dollars lost by that figure. So it looks like this:

k = D / (y - t)²

where D is the cost of the loss in dollars and k is the loss coefficient of the
process.

Determining the first part, the degree of deviation from the target (y - t), is
relatively easy, and squaring it presents no problems, but determining the
second part of the equation, the dollars lost, is much trickier. The reason is
that to derive the loss coefficient, we must determine a typical cost of a
specific deviation from the target. These two estimates, cost and deviation,
ultimately produce the loss coefficient of the process (k).
Now here's how it works in everyday life. For an easy example, let's take the
old copier paper jam problem. A paper jam is simply a symptom of the variability
of energy transformation within the copier. For a copy to be acceptable, the
paper displacement rate must not deviate too much from its target; that is, the
deviation (y - t) must stay small. When the variability is too great, the
copier's performance becomes unacceptable. And when that happens, it will cost
the company approximately $300: the cost of service, plus lost productivity for
each half-day it's out of operation for service.
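The copier example can be put into numbers. This sketch derives the loss coefficient k from one known data point (a deviation large enough to trigger the roughly $300 service loss) and then evaluates the quadratic loss for other deviations; the deviation of 2.0 units is an assumed value for illustration, not from the text.

```python
def loss_coefficient(D, deviation):
    """k = D / (y - t)^2: dollars lost at one known deviation from target."""
    return D / deviation ** 2

def quality_loss(k, y, t):
    """Taguchi quality loss L = k * (y - t)^2 for a single unit."""
    return k * (y - t) ** 2

# Assume a deviation of 2.0 units in the paper displacement rate is where
# the ~$300 service loss occurs (hypothetical calibration point).
k = loss_coefficient(300.0, 2.0)          # 75 dollars per unit squared
on_target = quality_loss(k, 10.0, 10.0)   # no loss exactly on target
off_target = quality_loss(k, 11.0, 10.0)  # loss for a deviation of 1.0
```

Note how the loss grows with the square of the deviation: a unit twice as far from the target incurs four times the loss, which is the core of the QLF argument against the over/under "good enough" view.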
Now it's time to see how this formula applies to a small number of products (or
processes) with various deviations from the target. Those outputs that hit the
target cost the company nothing (apart from the original production costs),
whereas those that lie farther and farther from the target cost the company more
and more money.
One simple way to estimate the average loss per product is to square the
individual deviations from the target, total them, divide by the number of
products, and finally, multiply the resulting figure (the mean squared
deviation) by the loss coefficient k. When you compare the average loss of one
group to that of a group with a tighter spread, you will quickly see that the
average loss of the second group is much less. And that's the idea.
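The average-loss calculation just described can be sketched directly; the loss coefficient, target, and measurement values below are invented for illustration.

```python
def average_loss(values, target, k):
    """Average quality loss per unit: k times the mean squared deviation."""
    msd = sum((y - target) ** 2 for y in values) / len(values)
    return k * msd

k = 75.0                        # hypothetical loss coefficient ($ per unit^2)
wide = [9.0, 9.5, 10.5, 11.0]   # wider spread around a target of 10
tight = [9.8, 9.9, 10.1, 10.2]  # tighter spread around the same target

loss_wide = average_loss(wide, 10.0, k)
loss_tight = average_loss(tight, 10.0, k)
```

The group with the tighter spread incurs a far smaller average loss even though both groups are centered on the target, which is why robust design aims at reducing spread, not merely staying inside specification limits.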
Having covered how to calculate deviation, loss, average loss, and the loss
coefficient, we now need to ask the fundamental question: What creates the
variation in our products and processes in the first place? Solve that puzzle,
and we can adjust that variation as we see fit.
Noise Factors

The bugaboos that create the wiggles in the products and processes we create can
be separated into the following general categories:

Manufacturing, material, and assembly variations
Environmental influences (not ecological, but atmospheric)
Customer causes
Deterioration, aging, and wear
Neighboring subsystems

This list will become especially important to us when we look at parameter
design for robust optimization, whose stated purpose is to minimize the system's
sensitivity to these sources of variation. From here on, we will lump all these
sources and their categories under the title of noise, meaning not just unwanted
sound but anything that prevents the product or process from functioning in a
smooth, seamless way. Think of noise as the friction that gets in the way of
perfect performance.

When teams confront a function beset with excessive variation caused by noise,
the worst possible response is to ignore the problem: the slip-it-under-the-rug
response. Needless to say, this never solves the problem, although it is a
surprisingly common response.

As you might expect, more proactive teams usually respond by either attacking
the sources of the noise, trying to buffer them, or compensating for the noise
by

Figure 18.2 Robustness of energy transformation
3. Determine a tolerance upgrading action plan and estimate its cost for each of
the high contributors.

4. Determine a tolerance degrading action plan for the no/low-contributing
tolerances and estimate the cost reduction from using it.

5. Finalize your tolerance upgrading and degrading action plan based on the
percentage of contribution, the quality loss, and the cost.

6. Confirm the effectiveness of your plan.
Tolerance design will help teams meet one of the primary objectives of the
program: developing a product or process with six sigma quality while keeping
costs to a minimum. The steps of tolerance design are intended to help you and
your team work through the process of establishing optimal tolerances for
optimal effect.
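The upgrade/degrade logic of the steps above can be sketched as a simple decision rule: loosen tolerances that contribute little to variation, and tighten a high contributor only when the quality-loss reduction it buys exceeds its cost. The tolerance names, contribution percentages, and dollar figures below are invented for illustration.

```python
# Hypothetical tolerance decision sketch. Each entry gives a tolerance's
# percentage contribution to output variance, the estimated quality-loss
# reduction ($) from upgrading it, and the cost ($) of the upgrade.
tolerances = [
    # (name, % contribution, loss reduction ($), upgrade cost ($))
    ("shaft diameter", 62.0, 40000.0, 12000.0),  # high contributor, cheap fix
    ("bore finish",    30.0, 18000.0, 25000.0),  # high contributor, costly fix
    ("flange width",    3.0,   500.0,  1000.0),  # low contributor
]

plan = {}
for name, contribution, saving, cost in tolerances:
    if contribution < 5.0:
        plan[name] = "degrade"   # loosen no/low contributors to recover cost
    elif saving > cost:
        plan[name] = "upgrade"   # tighten when the loss reduction pays off
    else:
        plan[name] = "keep"      # upgrade too expensive; leave as is
```

The 5% threshold is arbitrary here; in practice, the contribution percentages come from the ANOVA of the tolerance design experiment, and the loss figures from the quality loss function.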
The goal is not simply to tighten every standard, but to make more sophisticated
decisions about tolerances. To clarify what we mean, let's consider a sports
analogy. Billy Martin was a good baseball player and a great manager. He had his
own off-field problems, but as a field general, he had no equal. One of the
reasons he was so good was that he was smart enough, first, to see what kind of
team he had, then to find a way to win with them, playing to their strengths and
covering their weaknesses, unlike most coaches, who have only one approach that
sometimes doesn't mesh with their players.

In the 1970s, when he was managing the Detroit Tigers, a big, slow team, he
emphasized power: extra-base hits and home runs. When he coached the Oakland A's
a decade later, however, he realized that the team could never match Detroit's
home-run power, but they were fast, so he switched his emphasis from big hits to
base stealing, bunting, and hitting singles. In both places he won division
crowns, but with very different teams.

It's the same with tolerance design. We do not impose on the product or process
what we think should happen. We look at what we have, surmise what improvements
will obtain the best results, and test our theories. In Detroit, Martin didn't
bother trying to make his team faster and steal more bases, because it wouldn't
have worked. He made them focus on hitting even more home runs, and they did. In
Oakland, he didn't make them lift weights and try to hit more homers, because
they didn't have that ability. He made them get leaner and meaner and faster and
steal even more bases. And that's why it worked: he played to his team's
strengths.
You don't want to spend any money at all to upgrade low-contributing tolerances;
you want to reduce cost by taking advantage of them. You don't want to upgrade a
high-contributing tolerance if it is too expensive, but if the price is right,
you will upgrade those high contributors. Tolerance design is all about
balancing cost against performance and quality.

Conceptual Design, Parameter Design, Then Tolerance Design

It is extremely important to follow the steps shown in Figure 18.3. One common
problem is that people skip parameter design and conduct tolerance design
directly. Be aware of the opportunities you are missing if you skip parameter
design: you are missing great opportunities for cost reduction. You may be
getting the best possible performance by optimizing for robustness, but if the
best is far better than required, there are plenty of opportunities to reduce
cost. Further, you are missing the opportunity to find a bad concept, so that
you can reject it at an early stage of product/process development. If the best
concept you can come up with is not good enough, you have to change the concept.

The result of tolerance design on designs that have not been optimized is far
different from the result of tolerance design after robust optimization has
taken place. You end up tightening tolerances that would have been unnecessary
if the design had been optimized for robustness in the first place. Think of all
the firefighting activities your company is doing today. If the design were
optimized, you would have fewer problems, and the problems would be different.
Hence, the solutions would be different.
19

Implementation Strategies

19.1. Introduction
19.2. Robust Engineering Implementation
    Step 1: Management Commitment
    Step 2: Corporate Leader and the Corporate Team
    Step 3: Effective Communication
    Step 4: Education and Training
    Step 5: Integration Strategy
    Step 6: Bottom-Line Performance

19.1. Introduction
In this chapter we introduce the process of implementing robust engineering in
the corporate environment and provide complete and detailed strategies behind
the process. We also show its compatibility with, and possible enhancement of,
any other quality program or process that an organization may already have in
place. Some organizations absolutely thrive on change and embrace the challenges
and opportunities that come with change and diversity. Those organizations will
have the foresight and vision to see that the robust engineering implementation
strategy can be a guide for them and, ultimately, can help them to stay on the
cutting edge and ahead of their competition.
Step 1: Management Commitment

Robust engineering is one of the most powerful tools for making higher-quality
products with little or no waste, and making them faster, too. However, as with
any new quality process, without the full support of managers, the organization
will not receive the full benefit of the program. Buy-in from top and middle
management is essential. No matter how good or how efficient a program might be,
it can be extremely difficult for personnel at the lower end of the organization
if management is not committed to supporting it and making it work. It is
essential that management fully understand all of the benefits of the process
and see that the program is in no way a threat to their position or to their
importance to the organization.

Today, any organization is dependent on the quality of its offerings. Management
wants everything to be more efficient, faster, cheaper, and of higher quality.
However, the only organizations that will achieve competitive status are those
willing to invest in a more advanced and efficient methodology to achieve that
quality. When top management is not only supportive but encouraging, the rest of
the organization will follow suit, and momentum will be gained for the process.
A negative attitude by the powers that be can cause the breakdown of the
program.
Step 2: Corporate Leader and the Corporate Team

Once all levels of management have bought into the idea of implementing the
robust engineering methodology, it is imperative that someone be chosen as the
driver, the leader of the project. This person should be a manager, highly
efficient as a leader, motivated, and able to motivate others. He or she should
be someone whom others would be honored to follow, dependable and committed to
the goals of the program, firm in delegation, very detail-oriented, and strong
on follow-up. Clearly, it is in the best interest of the organization that the
corporate leader be highly respected and have knowledge of the work of all
departments. He or she must be able to assert authority and have strong
project-management skills. In short, you are looking for a champion. Top
management must be certain that they have chosen the right person for the job.

At the start of the process, an open meeting among upper management, middle
management, and the chosen corporate leader should take place. At this meeting,
a complete understanding must be reached on all of the elements involved, and
the tasks must be broken down to be more manageable and less overwhelming. If
the methodology is not managed properly, the robust engineering project will not
succeed.

The next most significant contributing factor to the success of the robust
engineering implementation process is an empowered, multidisciplinary,
cross-functional team. Since robust engineering implementation is an engineering

Step 3: Effective Communication

Step 4: Education and Training
The importance of training and education can never be overstated, especially when new projects or programs are being introduced into an organization. A lack of understanding by even one employee can cause chaos and poor performance. The robust engineering process requires every employee to understand the entire process and then to understand completely how his or her part in the process contributes to the bottom line. To that end, some training in business terminology and sharing of profit and loss information should be undertaken for every employee. Training also can help to inspire and motivate employees to stretch their potential to achieve more. It is essential to a quality program such as robust engineering that everyone participate fully in all training. Engineers especially need to have a full understanding of the methodology and its potential rewards. But everyone, from top management to the lowest levels of the organization, must commit to focusing on the goals of the project and how they can affect the bottom line. Knowledge adds strength to any organization, and success breeds an environment conducive to happier workers.
Organizations the world over are discovering the value of looking at themselves to find out what makes them work and what would make them work better. Through the development and use of an internal expert, they are able to help their people work through changes, embrace diversity, stretch their potential, and optimize their productivity. These internal experts are corporate assets who communicate change throughout an organization and provide accurate information to everyone. Even when quality programs already exist, the internal experts can often work with the cross-functional team to communicate the right information in the right ways. These experts are also extremely efficient and helpful in spreading the goals and the purpose of robust engineering, as well as in explaining the quality principles on which the methodology is built.
Step 5: Integration Strategy
traditional tools and methods. Some companies have, in part, been able to change their perspectives on what constitutes failure and have been able to redefine reliability. With robust engineering, the emphasis is on variability.
When you make a product right in the first place, you don't have to make it again. Using the right materials and the right designs saves time, effort, and waste, and increases profitability.
When one measures the results of new ways against the results of old ways, it is usually easy to tell which is better. When there are considerable differences, a closer look at the new way is certainly merited. Systems that implement robust engineering come out ahead on the bottom line. The real question is: Would you rather have your engineers spend their days putting out fires, or would you rather have them prevent the fires by finding and eliminating the causes? What if you offered incentives to your workers, the ones who actually made it all happen? What if you motivated them by rewarding them for differences in the bottom line? Your employees will feel very important to the organization when they see how they have helped to enhance the bottom line, but they will feel sheer ecstasy when they are able to share in the increased profits.
Step 6: Bottom-Line Performance
Part VI

Mahalanobis–Taguchi System (MTS)

20. Mahalanobis–Taguchi System

Introduction 397
What Is the MTS? 398
Challenges Appropriate for the MTS 398
MTS Case Study: Liver Disease Diagnosis 401
Mahalanobis–Taguchi Gram–Schmidt Process 415
Gram–Schmidt's Orthogonalization Process 415
Calculation of MD Using the Gram–Schmidt Process 416
MTS/MTGS Method versus Other Methods 417
MTS/MTGS versus Artificial Neural Networks 417
MTS Applications 418
Medical Diagnosis 418
Manufacturing 418
Fire Detection 419
Earthquake Forecasting 419
Weather Forecasting 419
Automotive Air Bag Deployment 419
Automotive Accident Avoidance Systems 419
Business Applications 419
Other Applications 420
20.1. Introduction
Many people remember the RCA Victor Company, with its logo of a dog listening
to his masters voice. Dogs can easily recognize their masters voices, but it is not
easy for a machine to be created that recognizes its owner by voice recognition.
Painting lovers can easily recognize the impressionist paintings of Vincent van
Gogh. Can we develop a machine that will recognize van Gogh painting? Classical
music lovers can distinguish the music composed by Johann Sebastian Bach from
that of others. But even using the most advanced computers, it is not easy for
human-made machines to recognize human voice, face, handwriting, or printing.
To illustrate the difficulty of any of these, let's take a very simple example with two variables, x1 and x2. Let x1 be the weight of an American male and x2 be his height. Suppose that a man's weight is 310 pounds. If you look at this datum alone, you may think that this person (John) is very heavy but not necessarily abnormal, since there are many healthy American males who weigh around 310 pounds (Figure 20.1).
Suppose that John's height is 5 feet. If you look at his height by itself, you may think John is a very short person, but not that there is necessarily anything abnormal about John, since there are many healthy American males whose height is around 5 feet (Figure 20.2).
On the other hand, if you look at both pieces of data on John, x1 = 310 pounds and x2 = 5.0 feet, you would think something was definitely out of the ordinary about John: he is not necessarily normal. This is because, as everyone knows, there is a correlation between weight and height. Suppose that we can define the normal group in terms of weight and height (Figure 20.3). That group would have the shape of an ellipse because of the correlation between weight and height. We can clearly see that John is outside the normal group. The distance between the center of the normal group and a person under diagnosis is the Mahalanobis distance.
From this discussion, you can see the following points:
1. For an inspection system, two characteristics each meeting the specification do not guarantee a good product, because of correlation (interaction).
2. It is easy to see the pattern when the number of variables, k, is equal to 2. What if k = 5, 10, 20, 40, 120, 400, or 1000? The greater the number of variables, the more difficult it is to see the pattern.
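The weight–height intuition can be checked numerically. The sketch below uses synthetic data (all numbers are assumed purely for illustration): each of John's two measurements is only mildly unusual on its own, but the Mahalanobis distance, which accounts for the correlation between the variables, separates him sharply from an equally heavy but tall person.

```python
import numpy as np

# Synthetic "normal group": height (feet) and a weight (pounds) correlated with it
rng = np.random.default_rng(0)
height = rng.normal(5.8, 0.3, 500)
weight = 180 + 160 * (height - 5.8) + rng.normal(0, 25, 500)
X = np.column_stack([weight, height])

mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis_sq(x):
    """Squared Mahalanobis distance from the center of the normal group."""
    d = x - mean
    return float(d @ cov_inv @ d)

john = np.array([310.0, 5.0])        # heavy AND short: off the correlation ellipse
tall_heavy = np.array([310.0, 6.6])  # just as heavy, but tall: fits the pattern

# The joint view flags John far more strongly than either marginal view would
print(mahalanobis_sq(john) > mahalanobis_sq(tall_heavy))
```

The inverse covariance matrix is what encodes the correlation; with independent variables the distance would reduce to a simple sum of squared z-scores.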
The steps to design and optimize the MTS are as follows:

Task 1: Generation of the normal space
Step 1. Define the normal group.
Step 2. Define the k variables (1 ≤ k ≤ 1000).
Step 3. Gather data from the normal group on the k variables with sample size n, n > k.
Step 4. Calculate the MD for each sample from the normal group.

Task 2: Confirmation of discrimination power
Step 5. Gather data on the k variables for samples from outside the normal group; sample size r.
Figure 20.1: John's weight (x1 = weight). [Figure not reproduced.]

Figure 20.2: John's height (x2 = height). [Figure not reproduced.]
Step 6. Calculate the MD for each sample from outside the normal group.
Step 7. Evaluate the discrimination power.

Task 3: Identification of critical variables and optimization
Step 8. Optimize the MTS: evaluate the critical variables, optimize the number of variables, and define the optimum system.
There are two primary methods of computing the MD. One is to use an inverted matrix, and the other is to use Gram–Schmidt orthogonal vectors, which we refer to as MTGS. First, a simple case study with the inverted matrix method is illustrated. MTGS is introduced later.
Figure 20.3: Both height and weight. The normal space (Mahalanobis space, or reference space) is an ellipse in the plane of x1 = weight and x2 = height; MD is the Mahalanobis distance from its center to John. [Figure not reproduced.]
The 15 variables are:

x1 Age, x2 Gender, x3 TP, x4 Alb, x5 ChE, x6 LDH, x7 ALP, x8 GTP, x9 LAP, x10 Tch, x11 TG, x12 PL, x13 Cr, x14 BUN, x15 UA

Physicians identified these 15 variables, 13 of them from blood test results. Variables can be continuous or discrete; in this case, gender was a discrete variable.
Step 3. Generate a database of a normal group for the variables selected. For a discrete variable with two classes, such as gender, simply assign a number to each class: say, 1 for male and 10 for female (Table 20.1).
Step 4a. Compute the mean and standard deviation for each variable and normalize the data (see Table 20.2):
z = (x - mean) / (standard deviation)

for each variable, and

MD = (1/k) Z A^(-1) Z^T = (1/k) (z1, z2, ..., z15) A^(-1) (z1, z2, ..., z15)^T

where Z = (z1, z2, ..., z15) is the vector of normalized values, A is the correlation matrix of the normal group (Table 20.3), A^(-1) is its inverse (Table 20.4), and k = 15 is the number of variables.
The distribution of Mahalanobis distances for samples from the normal group typically looks like Figure 20.5 below, shown in Figure 20.6 as a curve. The average is 1.00. Note that it is very rare to have an MD value greater than 5.0, whereas the MD for an abnormal sample should become much larger than 5.0; MD thus provides a measurement scale for abnormality.
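The "average MD = 1.00" property can be verified directly. The sketch below (synthetic, correlated data, assumed only for illustration) carries out the normalization and scaled-MD computation of the preceding steps and confirms that averaging MD over the same normal samples used to build the scale gives exactly 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 15
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, k))  # correlated variables

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # normalize each variable (Step 4a)
A = np.corrcoef(Z, rowvar=False)           # correlation matrix (Table 20.3 analog)
A_inv = np.linalg.inv(A)                   # inverse matrix (Table 20.4 analog)

md = np.einsum('ij,jk,ik->i', Z, A_inv, Z) / k   # scaled MD for each normal sample
print(round(md.mean(), 6))   # 1.0: the zero point and unit of the scale
```

This is why the normal group defines both the zero point and the unit: the scaling by 1/k makes the scale independent of the number of variables.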
Table 20.1: Data for the normal group (200 samples on the 15 variables, x1 Age through x15 UA, with summary rows giving the average and standard deviation of each variable). [Table not legibly reproduced.]
Table 20.2: Mean and standard deviation for the normal group (the normalized data z1 through z15). [Table not legibly reproduced.]
Table 20.3: Correlation matrix (15 × 15, for the normalized variables z1 through z15). [Table not legibly reproduced.]
Table 20.4: Inverse matrix (the inverse of the correlation matrix of Table 20.3). [Table not legibly reproduced.]
Table 20.5: Mahalanobis distance for 200 samples from the healthy group

1.70 1.30 0.45 1.46 2.05 1.20 1.25 0.88 1.20 0.71 0.57 0.55 1.87 1.20 1.03 0.69 0.90 0.77 0.75 0.91
1.35 0.78 1.67 0.61 0.95 0.67 0.86 1.08 0.78 0.90 0.88 0.71 0.74 1.37 0.88 0.97 1.24 0.80 0.44 1.18
1.06 0.97 0.80 0.62 1.08 0.61 0.61 0.81 0.79 0.52 0.83 0.49 0.68 1.62 1.03 0.98 0.88 1.11 0.80 0.64
1.99 0.78 0.91 1.26 2.10 1.35 0.79 0.51 0.72 0.97 1.04 0.59 0.61 0.79 1.45 0.83 2.21 0.69 0.66 1.19
0.82 0.58 1.07 0.97 3.34 0.84 0.61 1.65 0.82 1.88 0.96 1.13 1.58 0.78 1.63 0.94 0.73 0.51 0.64 0.88
0.95 0.86 1.18 0.84 0.50 0.92 0.96 0.94 0.82 0.91 1.75 0.37 0.65 0.46 0.81 0.45 2.76 1.47 0.49 0.76
0.70 1.02 0.90 0.76 0.82 1.00 0.90 0.56 2.59 0.70 0.90 1.09 0.52 0.88 1.32 0.63 1.64 0.98 1.39 0.53
1.41 1.11 0.43 0.66 2.63 0.74 1.12 1.04 0.93 1.25 1.52 0.69 1.69 0.72 0.72 0.41 1.81 1.06 0.95 0.59
0.58 0.95 0.64 1.15 0.67 0.72 0.83 0.89 0.69 1.22 0.64 0.88 1.04 1.19 1.25 0.72 1.09 0.47 2.77 1.55
2.11 0.61 1.13 0.75 0.82 0.74 0.57 0.86 0.64 1.20 0.85 0.69 0.66 0.99 0.76 0.52 0.81 0.77 1.29 1.16

Figure 20.4: Sample showing Table 20.5 data. [Figure not reproduced.]
Figure 20.5: MD distribution (a frequency histogram of the MD values from 0.0 to 3.5). [Figure not reproduced.]
At this point it is important to recognize the following: The objective of MTS is to develop a measurement scale that measures abnormality. What we have just done is to define the zero point and one unit of that scale. MTS measures abnormality, and only the normal group constitutes a population. Now we will assess the discrimination power by gathering information.
Step 5. Gather data on the k variables for samples from outside the normal group. See Table 20.6.
Step 6. Calculate the MD for each sample from outside the normal group. See Table 20.7.
Step 7. Evaluate the discrimination power. See Figure 20.7. Just by eyeballing Figure 20.7, one can see that the discrimination power is very good. In some studies it may be sufficient to have an MTS that can discriminate x percent of bad samples. For example, there is a situation called "no trouble found" for samples that come back under warranty claims. In that situation, discriminating only 25% can be extremely beneficial.
After confirming good discrimination, we are ready to optimize the MTS.
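The optimization step can be sketched as follows, on an assumed, scaled-down setup (seven variables and an L8 array standing in for the 15 variables and larger array of this case study; the data are synthetic). Each variable is assigned to a two-level column, with level 1 meaning "use the variable" and level 2 "do not use it"; for each run, the MDs of the abnormal samples are computed from the selected variables only and summarized by a larger-the-better SN ratio. Variables whose "use" level raises the SN ratio are kept.

```python
import numpy as np

# Standard L8(2^7) two-level orthogonal array (1 = use variable, 2 = do not use)
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])

def scaled_md(train, test):
    """Scaled MD of test rows, using a unit space built from the train rows."""
    mu, sd = train.mean(0), train.std(0)
    z_tr = (train - mu) / sd
    a_inv = np.linalg.inv(np.corrcoef(z_tr, rowvar=False))
    z_te = (test - mu) / sd
    return np.einsum('ij,jk,ik->i', z_te, a_inv, z_te) / train.shape[1]

rng = np.random.default_rng(2)
normal = rng.normal(size=(100, 7))
abnormal = rng.normal(size=(10, 7)) + np.array([3, 0, 3, 0, 3, 0, 0])  # vars 1,3,5 carry signal

sn = []
for run in L8:
    use = run == 1
    md = scaled_md(normal[:, use], abnormal[:, use])
    sn.append(-10 * np.log10(np.mean(1.0 / md)))   # larger-the-better SN ratio
sn = np.array(sn)

# Gain of "use" over "do not use" for each variable; positive gain -> keep it
gain = np.array([sn[L8[:, v] == 1].mean() - sn[L8[:, v] == 2].mean() for v in range(7)])
print(np.round(gain, 2))
```

With this setup the informative variables (the first, third, and fifth) show positive gains, while the pure-noise variables dilute the scaled MD and show negative gains.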
Figure 20.6: The distribution of Figure 20.5 shown as a curve (MD from 0.0 to 5.0). [Figure not reproduced.]
Table 20.6: The k variables for samples from outside the normal group (abnormal samples with their values of x1 Age through x15 UA). [Table not legibly reproduced.]
Table 20.7: Normalized data and MD for the samples from outside the normal group. The MD values for these abnormal samples are 25.0, 52.8, 41.8, 17.0, 59.8, 16.9, 66.5, and 25.1, far larger than the values near 1 seen in the normal group. [Table not legibly reproduced.]
Figure 20.7: Discrimination power (normal versus abnormal MD values). [Figure not reproduced.]
Table 20.8: Two-level orthogonal array (16 runs; each variable x1 through x15 is assigned to a column, with level 1 = "Use" the variable and level 2 = do not use it). [Table not legibly reproduced.]
Table 20.9: Response data (the MD values of the abnormal samples under each of the 16 runs of Table 20.8, together with the SN ratio of each run). [Table not legibly reproduced.]
Figure 20.8: SN ratios (the average SN ratio at levels 1 and 2 of each variable, x1 through x15). [Figure not reproduced.]
Figure 20.9: Result of confirmation (left: MD, labeled D², for the 200 normal samples; right: MD for the abnormal samples). [Figure not reproduced.]
Table 20.10: Improvement in diagnosis of liver disease (negative and positive diagnosis counts under the traditional method versus the Mahalanobis–Taguchi method; 79 cases in each total). [Table not legibly reproduced.]
Figure 20.10: Gram–Schmidt's process (GSP). The normalized variables Z1, Z2, ..., Zk are transformed into mutually orthogonal vectors:

U1 = Z1
U2 = Z2 - (Z2'U1 / U1'U1) U1
...
Uk = Zk - (Zk'U1 / U1'U1) U1 - ... - (Zk'U(k-1) / U(k-1)'U(k-1)) U(k-1)
MD_j = (1/k) (u_1j²/s_1² + u_2j²/s_2² + ... + u_kj²/s_k²)
where j = 1, ..., n and s_i² is the variance of U_i. The values of MD obtained from the inverted matrix method and from the Gram–Schmidt process are approximately equal to each other. Table 20.11 below shows MD values by the inverted matrix method (MTS) and the MTGS method for the preceding example.
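The agreement between the two methods can be demonstrated with a small numerical sketch (synthetic data, assumed for illustration): the Gram–Schmidt form of MD reproduces the inverse-matrix form to within rounding.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 5
Z = rng.normal(size=(n, k)) @ rng.normal(size=(k, k))
Z = (Z - Z.mean(0)) / Z.std(0)          # normalized variables z1..zk

# Inverse-matrix method (MTS)
A_inv = np.linalg.inv(np.corrcoef(Z, rowvar=False))
md_mts = np.einsum('ij,jk,ik->i', Z, A_inv, Z) / k

# Gram-Schmidt method (MTGS): U1 = Z1, Ui = Zi minus its projections on U1..U(i-1)
U = Z.copy()
for i in range(1, k):
    for m in range(i):
        U[:, i] -= (U[:, i] @ U[:, m]) / (U[:, m] @ U[:, m]) * U[:, m]
s2 = (U ** 2).mean(axis=0)              # variance of each orthogonalized vector
md_mtgs = (U ** 2 / s2).sum(axis=1) / k

print(np.allclose(md_mts, md_mtgs))     # True: the two methods agree
```

The equivalence follows because the Gram–Schmidt transform is a triangular change of basis that diagonalizes the correlation matrix, so the quadratic form splits into the sum of squared orthogonal components.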
Table 20.11: MD values using MTGS and MTS

(a) Healthy group (samples 1, 2, ..., 200): average MD by MTGS, 0.99499991; by MTS, 0.99498413.
(b) Abnormal group: average MD by MTGS, 60.09035; by MTS, 60.08957.
Manufacturing
In manufacturing, pattern recognition is widely used. For example, many inspections are done by eyeball observation, such as inspecting the appearance of welded or soldered component parts. When the fraction defective in a production line is very low, it is difficult for a worker to inspect 300 pieces an hour, and therefore easy to overlook a defective piece. Moreover, meeting specifications one by one does not guarantee a good product, because of correlations. Many successful MTS studies have been reported from the mechanical, electrical, and chemical industries.
MTS is also very useful in monitoring a manufacturing process. For example, a semiconductor process can be very complex and consist of many variables, such as temperature, flow, and pressure. In that case, the normal group would be defined as the state existing when the process is producing perfect products.
Fire Detection

In public buildings or hotels, fire alarm systems are installed by law. Perhaps most of us have experienced a false alarm: an alarm sounds but there is no fire, or a smoke detector sounds as a result of cigarette smoking or barbecuing. In the development of fire alarm systems, data such as temperature or the amount of smoke are collected from situations without fire. These data are treated as a reference group for the construction of a Mahalanobis space.

Earthquake Forecasting

Weather Forecasting
Automotive Air Bag Deployment

In designing air bags for a car, we want to avoid both types of error: an air bag that deploys when there was no crash, and an air bag that does not deploy at the moment of a crash. To construct a Mahalanobis space, acceleration at several locations in a car is recorded at intervals of a few thousandths of a second. Such data are collected under various driving conditions: driving on a very rough road, quick acceleration, a quick stop, and so on. Those data are used with other sources of information, such as driving speed, to construct a Mahalanobis space. The computer in the air bag system constantly calculates a Mahalanobis distance; when the distance becomes greater than a predetermined threshold, the system sends a signal to deploy the air bag(s) before a crash. On the passenger side, you don't want to deploy the air bag when a child is sitting there or when an adult is sitting in the seat and bending over. Such MTS studies are currently being conducted within automotive companies.

Automotive Accident Avoidance Systems

Automotive manufacturers are developing sensors that detect a dangerous condition during driving. From various sensor outputs, we want to detect a situation immediately before an accident takes place, so that the system can notify the driver or even take over control of the car by braking, accelerating, and/or steering. Research is in the early stages.
Business Applications

Prediction is one of the most important areas of pattern recognition. In the business area, banks and credit card companies make predictions for loan or credit card applications.
Other Applications
Figure 20.11: Rating of abnormality level (MD plotted against the rating by a professional). [Figure not reproduced.]
Figure 20.12: Monitoring abnormal condition (MD over time for patient A and patient B). [Figure not reproduced.]
thousands of people are studied; sometimes such a study takes years. By use of the dynamic SN ratio, the trend can be observed in a short period of time. Such applications have huge potential for future research and would bring immeasurable benefits to society.
Of course, it is all right to discuss the two types of error, or the chi-square distribution, as a reference. But it is essential to realize that the objective of the MTS approach is to use the SN ratio to estimate the error of misjudgment and also for forecasting.
Part VII

Software Testing and Application
21. Application of Taguchi Methods to Software System Testing

21.1. Introduction 425
21.2. Difficulty in System Testing 425
21.3. Typical Testing 426
21.1. Introduction

System testing is a relatively new area within the Taguchi methods. Systems under test can be hardware and software or pure software: for example, printer operation, copy machine operation, automotive engine control systems, automotive transmission control, automotive antilock brake (ABS) modules, computer operating systems such as Windows XP, computer software such as Word and Excel, airline reservation systems, ATMs, and ticket vending machines. The objective of testing is to detect all failures before releasing the design. It is ideal to detect all failures and to fix them before design release. Obviously, you would rather not have customers detect failures in your system.
Example

The procedure for system testing using Taguchi methods is explained below, using a copy machine operation as an example.
Step 1: Design the test array. Determine the factors and levels. Factors are operation variables, and levels are alternatives for operations. Assign them to an orthogonal array.
Factors and Levels. When an operation such as paper tray selection has, say, six options, you may use just three of the six options for the first iteration. In the next iteration you can take other levels. Another choice is to use all six options (a six-level factor). It is not difficult to modify an orthogonal array to accommodate many multilevel factors.
A3 166%, A3 100%, A4 166%, A5 216%
Basically, you can take as many levels as you like. The idea is to select levels that cover the operating range as much as possible. As you can see in the design of experiments section of this book, you may choose an orthogonal array to cover as many factors and as many levels as you like.
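As a concrete sketch, the smallest three-level standard array, L9(3^4), can be generated from two base columns over GF(3); the check below confirms the defining property that every pair of columns contains each combination of levels the same number of times.

```python
from itertools import product

# L9(3^4): two base columns a, b and their two distinct sums modulo 3
rows = []
for a, b in product(range(3), repeat=2):          # 9 runs
    rows.append((a, b, (a + b) % 3, (a + 2 * b) % 3))

# Orthogonality: every pair of columns shows all 9 level combinations exactly once
for c1 in range(4):
    for c2 in range(c1 + 1, 4):
        assert len({(r[c1], r[c2]) for r in rows}) == 9

for r in rows:
    print([v + 1 for v in r])   # levels written 1..3, as in the arrays listed here
```

The same modular construction generalizes to other prime-power levels, which is why arrays such as L27 and L81 exist in whole families.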
Orthogonal Array. Recall the notation of an orthogonal array (Figure 21.1). Following is a list of some standard orthogonal arrays:

L8(2^7), L16(2^15), L32(2^31), L64(2^63)
L9(3^4), L27(3^13), L81(3^40)
L12(2^11), L18(2^1 × 3^7), L36(2^11 × 3^12), L54(2^1 × 3^25)
L32(2^1 × 4^9), L64(4^21), L25(5^6), L50(2^1 × 5^11)

Below is a list of some orthogonal arrays that you can generate from a standard array:

L8(4^1 × 2^4)
L16(4^1 × 2^12), L16(4^2 × 2^9), L16(4^3 × 2^6), L16(4^4 × 2^3), L16(4^5), L16(8^1 × 2^8)
L32(4^1 × 2^28), L32(4^4 × 2^19), L32(4^8 × 2^7), L32(8^1 × 2^24), L32(8^1 × 4^6 × 2^12)
L64(4^1 × 2^60), L64(4^12 × 2^27), L64(8^8 × 2^7), L64(8^4 × 4^5 × 2^20), L64(8^9)
L27(9^1 × 3^9)
L81(9^1 × 3^32), L81(9^6 × 3^16), L81(27^1 × 3^27), L81(9^10)
L18(6^1 × 3^6)
L36(6^1 × 3^13 × 2^2)
L54(6^1 × 3^24)

For example, L81(9^6 × 3^16) can handle up to six nine-level factors and 16 three-level factors in 81 orthogonal runs.
In the design of experiments section of the book, a technique called dummy treatment is described. This technique allows you to assign a factor that has fewer levels than those of a column.

Figure 21.1: Orthogonal array notation. [Figure not reproduced.]

Using the dummy treatment technique, you can assign:

a two-level factor to a three-level column
a two-level factor to a four-level column
a two-level factor to a five-level column

and so on.
Table 21.1: Factors and levels

A Staple: No / Yes
B Side: 2 to 1 / 1 to 2 / 2 to 2
C No. copies: 20 / 50
D No. pages: 20 / 50
E Paper tray: Tray 6 / Tray 5 (LS) / Tray 3 (OHP)
F Darkness: Normal / Light / Dark
G Enlarge: 100% / 78% / 128%
H Execution: From PC / At machine / Memory
Step 2: Conduct system testing. Table 21.2 shows the L18 test array. Eighteen tests are conducted, one according to each row of the L18. The response is simply a 0/1 response, where y = 0 if there is no problem and y = 1 if there is a problem.
430
0
2
0
0
1
1
0
1
1
1
0
1
1
0
1
1
1
0
1
1
0
B1
B2
B3
C1
C2
C3
D1
D2
D3
E1
E2
E3
F1
F2
F3
G1
G2
G3
H1
H2
H3
A1
Table 21.2
L18 array
1
0
0
0
1
0
0
0
1
0
1
0
1
0
0
0
0
1
0
1
0
A2
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
B1
2
1
0
1
2
0
1
0
2
1
1
1
1
1
1
0
1
2
B2
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
B3
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
C1
1
0
0
1
0
0
0
0
1
0
0
1
0
1
0
C2
1
1
0
0
2
0
1
0
1
1
1
0
1
0
1
C3
D1
1
0
0
0
1
0
0
0
1
0
1
0
D2
1
0
0
1
0
0
0
0
1
0
0
1
D3
0
1
0
0
1
0
1
0
0
1
0
0
E1
0
1
0
0
1
0
1
0
0
E2
1
0
0
0
1
0
0
0
1
E3
1
0
0
1
0
0
0
0
1
0
1
0
0
1
0
F1
0
0
0
0
0
0
F2
2
0
0
1
1
0
F3
G1
1
0
0
G2
1
1
0
G3
0
0
0
431
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
A
1
1
2
3
Table 21.2
(Continued )
3
3
3
2
2
2
1
1
1
3
3
3
2
2
2
1
1
1
B
2
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
C
3
3
1
2
2
3
1
3
1
2
2
3
1
1
2
3
1
2
3
D
4
2
3
1
3
1
2
3
1
2
1
2
3
2
3
1
1
2
3
E
5
3
1
2
1
2
3
2
3
1
3
1
2
2
3
1
1
2
3
F
6
1
2
3
3
1
2
2
3
1
2
3
1
3
1
2
1
2
3
G
7
2
3
1
2
3
1
1
2
3
3
1
2
3
1
2
1
2
3
H
8
0
0
0
0
0
1
0
0
0
0
0
0
0
1
1
0
0
0
1 Problem
Table 21.3: Percentage of failure (the percentage of problem responses for each factor-level combination). [Table not legibly reproduced.]
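The percentage-of-failure tabulation behind Table 21.3 can be sketched as follows (the run matrix here is a standard L9 and the 0/1 responses are hypothetical, standing in for the L18 results): for each factor level, the failure percentage is simply the share of runs at that level whose response was 1.

```python
import numpy as np

# Hypothetical test plan: a standard L9(3^4) run matrix and a 0/1 problem flag per run
runs = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
])
problem = np.array([0, 1, 0, 1, 0, 0, 1, 0, 0])

for f in range(runs.shape[1]):
    for level in (1, 2, 3):
        mask = runs[:, f] == level
        pct = 100.0 * problem[mask].mean()
        print(f"factor {f + 1} level {level}: {pct:.0f}% failure")
```

Levels whose failure percentage stands well above the others point to the region of the operating space where the problems reside, which is exactly how the table is read in the text that follows.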
It is important to recognize that this system test method will lead us to where problems may reside, but it does not pinpoint the problem or provide solutions.
Xerox, Seiko Epson, and ITT Defense Electronics are three companies that presented applications of this method at public conferences in the late 1990s. All three companies report that they achieved a 400% improvement in both area coverage and detection rate. For more case studies, see Section 2 of this book.
Part VIII

On-Line Quality Engineering
22. Tolerancing and Quality Level

22.1. Introduction 437
22.1. Introduction

The difference between parameter design and on-line quality engineering is that in parameter design, the parameters of the process are fixed at certain levels so that variability is reduced; parameter design is used during manufacturing to vary manufacturing conditions and reduce variability. In the on-line approach, a certain parameter level is varied from time to time, based on the manufacturing situation, to adjust the quality level to hit the target. In the case of a watch, for example, its error can be reduced by designing (purchasing) a more accurate watch; this is parameter design. Using the on-line approach, the inaccurate watch is used, and errors are reduced by checking it more frequently against the time signal and adjusting it.
Although it is important to improve working conditions and the methods of operating machines and equipment on the job, such issues are off-line improvements in terms of production technology. Instead of such process improvement, here we deal with process control through checking, as well as process maintenance through prevention. In other words, general issues related to the design of process control for on-line, real-time processing and production are covered in this chapter.
It is important to discuss two types of quality: design quality and production quality. Design quality is represented by the properties contained in a product. Such properties are a part of the product and can be described in catalogs. The marketing department determines which quality items should be in the catalogs. Design quality includes the following four aspects, along with usage conditions and problem occurrence:
1. Natural conditions: temperature, humidity, atmospheric pressure, sunlight, hydraulic pressure, rain, snow, chemical durability, etc.
2. Artificial conditions: impact, centrifugal force, excess current, air pollution, power failure, chemical durability, etc.
3. Individual differences or incorrect applications: differences in individual ability, taste, physical condition, etc.
4. Indication of problems: ease or difficulty of repair, disposability, trade-in, etc.
Production quality represents deviations from design quality, such as variations due to raw materials, heat treatment, machine problems, or human error. Production quality is the responsibility of the production department.
Quality Problems before and after Production

On-Line Approaches
The quality target must be selected by the R&D department based on feedback
from the marketing department about customer desires and competitors' products.
After the development of a product, the marketing department evaluates the design quality, and the production department evaluates the manufacturing cost.
Generally, production quality problems are caused either by variability or by mistakes, such as variations in raw materials or heat treatment or problems with
a machine. The production department is responsible not for deviation from the
drawings but for deviation from design quality.

Since there is responsibility, there is freedom. It is all right for the production
department to produce products that do not follow the drawings, but it must then
be responsible for the claims received due to such deviations. Production departments are also responsible for claims received due to variations, even if the product
was made within specifications.

Design quality also includes functions not listed in catalogs because they are
not advantageous for marketing. However, claims due to items included in catalogs
but not in the design must be the responsibility of the marketing department. To
summarize: design quality is the responsibility of the marketing department, and
production quality is the responsibility of the production department.
Various concerns of on-line production are discussed below.
QUANTITATIVE EVALUATION OF PRODUCT QUALITY

Since defective products found on the line are not shipped, the problem of these
defective products does not affect customers directly, but production costs increase. Product design engineers do not deal with process capability; they work to
design a product that functions under various customers' conditions for a certain
number of years. This is called parameter design. The point of parameter design for
production is to provide a wide tolerance for manufacturing, that is, to make it easy for
production personnel to make the product.

In addition, production engineering should design a stable manufacturing
process that balances cost against the loss due to variation. When a process is very stable,
engineers should try to increase the production speed to reduce the manufacturing cost. When production speed increases, variation normally increases. An engineer should adjust the production speed to the point where the manufacturing
cost and the loss due to variation balance. This is important because a product is
normally sold at three or four times its unit manufacturing cost (UMC). Therefore,
any reduction in the UMC yields a tremendous benefit to the manufacturer in
terms of profit.
DETERMINATION OF SPECIFICATIONS FOR MANUFACTURED PRODUCTS: TOLERANCING

Tolerancing is a method of quantitatively determining the specifications for transactions or a contract. A new and unique approach to tolerancing makes use of the safety
factor, a quantity that was previously vague.
FEEDBACK CONTROL

Using inspection, product quality characteristics are measured and compared with
the target. A product whose quality level deviates from the target is either adjusted
or discarded. This is inspection in a broad sense, in which individual products are
inspected.
(22.1)
Control costs consist of both production control and quality control costs. The
product design division is responsible for all of the costs on the right-hand side
of equation (22.1). When we say that a product designer is responsible for the
four terms of equation (22.1), we mean that he or she is responsible for designing
a product so that it functions under various conditions for the duration of its
design life (the length of time it is designed to last). In particular, the method of
parameter design used in product design enables the stability of the product function to be improved while providing a wide range of tolerance. For example, doubling the stability means that the variability of an objective characteristic does not
change even if the level of all sources of variability is doubled.

The production engineering division seeks to produce the initial characteristics
of the product as designed, with minimum cost and uniform quality. The production engineering division is responsible for the sum of the second to fourth terms
on the right-hand side of equation (22.1). In other words, it is responsible for
designing stable production processes without increasing cost and within the tolerance of the initial characteristics given by the product design division.

However, no matter how stable a production process is, it will still generate
defectives if process control is not conducted. Additionally, if production speed is
increased to reduce cost by producing more product within tolerance limits, more
loss will occur due to increased variability. Basically, it is difficult to balance production costs and loss due to variability in quality, so an appropriate level of process
control must be employed.
The loss due to the second and third items is evaluated using the mean square of the
difference from the target value. The quality level is evaluated using the equation

L = (A/Δ²)σ²    (22.2)

where A (dollars) is the loss when the initial characteristic value of a product does
not satisfy the tolerance, Δ is the tolerance of the characteristic, and σ² is the mean
square of the difference from the objective value of the initial characteristic.
If the cost is far more important than the quality and inventory losses, one has
to use a number several times the real cost as the cost figure in the following
calculations. According to Professor Tribus, the director of MIT's Advanced Technology Research Institute, Xerox Corporation used to set the product price at four
times the UMC. Since the UMC does not include the costs of development, business operations, head-office overhead, and so on, corporations cannot generate
profit if their product is not sold at a price several times higher than the UMC.
Production attempts to minimize the sum of process control costs and quality-related loss through process and product control. Process control costs are evaluated using real cost figures, although multiples of the figure are often used: if
the cost is several times more important than the quality-related loss, one can take
the cost figure times some number and use that instead. To balance the quality-related loss against the two types of
unit costs necessary for control, we consider the following cost terms:

B: checking cost
C: adjustment cost

If the control cost is far greater than the quality-related
loss, excessive control is indicated; conversely, if quality-related losses dominate,
a lack of control is indicated.
The main task of people on the production line is to control the process and
product while machines do the actual processing. Therefore, we can also use quality engineering methods to determine the optimal number of workers on the
production line.
Example

In the power circuit of a TV set, a 100-V ac input in a home should be converted
into a 115-V dc output. If the circuit generates 115 V of output constantly during its
design life, the power circuit has perfect functional quality. Thus, this power
circuit is a subsystem with a nominal-the-best characteristic whose target value is
115 V.
Suppose that the target value is m and the actual output voltage is y. Variability
of component parts in the circuit, deterioration, variability of the input power source,
and so on, create the difference (y − m). The tolerance for the difference (y − m) is
determined by dividing the functional limit by the safety factor φ, derived from past
experience. Suppose that the safety factor in this case is 10. When a 25% change
in output voltage causes trouble, the tolerance is derived as follows:

tolerance = functional limit / safety factor    (22.3)

Δ = Δ0/φ    (22.4)

  = 25/10 = 2.5%    (22.5)
Let Δ0 denote the functional limit, that is, the deviation from the target value at
which the product ceases to function. At Δ0, the product function breaks down. In this case,
other characteristics and usage conditions are assumed to be normal, and Δ0 is
derived as the deviation of y from m when the product stops functioning.

The assumption of normality for the other conditions in the derivation of the functional limit means that standard conditions are assumed for them.
The exact middle condition is where the standard value cuts the distribution in
half (for example, median and mode), and it is often abbreviated as LD50 (lethal
dosage), that is, a 50/50 chance of life (L) or death (D): 50% of people
die if the quantity is less than the middle value and the other 50% die if the
quantity is greater. Consequently, one test is usually sufficient to obtain the functional limit.
The value of Δ0 is one of the easiest values to obtain. For example, if the central
value of engine ignition voltage is 20 kV, LD50 is the value at which ignition fails after
changing the voltage level. (The central value is determined by parameter design
and not by tolerance design. It is better to have a functional limit as large as possible
in parameter design, but that is a product design problem and is not discussed
here.) If, after reducing the voltage, ignition fails at 8 kV, then 12 kV is the lower-side functional limit. On the other hand, the upper-side functional limit is obtained by raising
the voltage until a problem develops. A high voltage generates problems such as
corona discharge; let its limit be 18 kV. If the upper- and lower-side functional
limits are different, different tolerances may be given using different safety factors.
In such a case, the tolerance is indicated as follows:

m +Δ1, −Δ2    (22.6)

The upper and lower tolerances, Δ1 and Δ2, are obtained separately from the respective
functional limits Δ01 and Δ02 and safety factors φ1 and φ2:

Δi = Δ0i/φi    (i = 1, 2)    (22.7)
The functional limit Δ0 is usually derived through experiment and design calculations. The derivation of the safety factor, however, has been less clear. It can be determined economically as

φ = √(loss when the functional limit is exceeded / loss when the factory standard is exceeded)    (22.8)

Letting A0 be the average loss when the characteristic value falls outside the functional limit, and A the loss to the factory when the characteristic value falls
outside the factory standard:

φ = √(A0/A)    (22.9)
The reason for using this formula is as follows. Let y denote the characteristic
value and m the target value, and consider the loss when y differs from m. Let L(y)
be the economic net loss when a product with characteristic value y is delivered
to the market and used under various conditions. Assume that the total market
size of the product is N and that Li(t, y) is the actual loss occurring at location i
in the tth year. After the product stops functioning, the loss jumps from zero to a
substantial amount. With a given design life T, L(y) gives the average loss over the
entire usage period at all usage locations; that is,

L(y) = (1/NT) Σ (i = 1 to N) ∫ (0 to T) Li(t, y) dt    (22.10)

Even if an individual Li(t, y) is a discontinuous function, we can average over the usage
conditions of many individuals and thus approximate L(y) as a continuous function.
We can expand L(y) in a Taylor series around the target value m:

L(y) = L(m + (y − m))
     = L(m) + (L′(m)/1!)(y − m) + (L″(m)/2!)(y − m)² + ⋯    (22.11)

In equation (22.11) the loss is zero when y equals m, and L′(m) is zero, since
the loss is minimum at the target value. Thus, the first and second terms drop out
and the quadratic term becomes the leading term. Because deviations occur on both sides of the target, the cubic term need not be considered. Omitting the quartic term and above, L(y) is approximated as follows:

L(y) = k(y − m)²    (22.12)

The right-hand side of equation (22.12) is the loss due to quality when the
characteristic value deviates from the target value. In this equation the only unknown is k, which can be obtained if we know the loss at one point where
y differs from m. For that point we use the functional limit Δ0 (see Figure 22.1).
Assuming that A0 dollars is the average loss when the product does not function in
[Figure 22.1. The loss function L(y): zero at y = m and equal to A0 at y = m ± Δ0.]
the market, we substitute A0 for the left-hand side of equation (22.12) and Δ0 for
y − m on the right-hand side. Thus,

A0 = kΔ0²    (22.13)

k = A0/Δ0²    (22.14)
Therefore, the loss function, L(y), due to quality variability in the case of a
nominal-the-best characteristic becomes

L(y) = (A0/Δ0²)(y − m)²    (22.15)
As we can see from equation (22.10), the quality evaluation given by
equation (22.15) is the average loss when a product with characteristic value y
is used under various usage conditions, including major problems such as
accidents. Since it is not known under which conditions the product will be used, it is
important to evaluate the average loss by assuming that the product is used over
the entire range of usage conditions. Tolerance should be decided by balancing
average loss against cost.
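As a numerical sketch of the quadratic loss function (22.15), the few lines of Python below replay the power-circuit numbers; the function and variable names are ours, not the book's:

```python
# Quadratic (nominal-the-best) loss, eq. (22.15): L(y) = (A0/Delta0**2)*(y - m)**2
def loss(y, m, A0, delta0):
    """Average loss in dollars when the characteristic value is y."""
    k = A0 / delta0 ** 2       # eq. (22.14)
    return k * (y - m) ** 2    # eq. (22.12)

# Power-circuit example: target m = 115 V, functional limit 25% of m,
# A0 = $300 average loss when the product stops functioning in the market.
m, delta0, A0 = 115.0, 0.25 * 115.0, 300.0
print(loss(m, m, A0, delta0))                  # 0.0 (no loss exactly on target)
print(round(loss(m + delta0, m, A0, delta0)))  # 300 (full loss A0 at the limit)
```

Note that at half the functional limit the loss is only a quarter of A0, which is the practical force of the quadratic form.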
To determine the safety factor, φ, we need to evaluate one more economic
constant: the cost, A, when the product fails to meet factory standards. A product
that does not meet these standards is called defective. A defective item needs to be
repaired or scrapped. If it can be modified, the cost of modification is used for A.
But in some cases the product may not pass even after modification. If the rate of
acceptance is close to 100%, the product price is used as the cost of modification,
A. If the rate of acceptance is lower, say q, A is defined as

A = factory price / q    (22.16)
Example

Continuing the example introduced at the beginning of this section, the functional
limit of the power circuit is assumed to be 25%, and the loss to the company when
the product falls outside the functional limit in the market, A0, is assumed to be
$300 (this includes the repair cost and the replacement cost during repair). We assume
further that a power circuit that does not satisfy factory standards can be fixed
by changing the resistance of a resistor within the circuit. Let the sum of
labor and material cost for this repair, A, be $1. The safety factor, φ, is determined as
follows using equation (22.9):

φ = √(A0/A) = √(300/1) = 17.3    (22.17)

Δ = Δ0/φ = 25/17.3 = 1.45%    (22.18)

Therefore, with a target value of 115 V, the factory standard is just ±1.7 V:

115 ± (115)(0.0145) = 115 ± 1.7 V    (22.19)

Even though the functional limit in the market is 115 ± 29 V, the standard for the
production division of the factory is 115 ± 1.7 V, and products that do not
satisfy this standard need to be repaired.
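The safety-factor arithmetic of equations (22.17) to (22.19) can be replayed in a few lines of Python (a sketch with our own variable names; the second decimal of Δ differs slightly from the text because the text rounds φ to 17.3 before dividing):

```python
import math

A0 = 300.0      # $ loss when the circuit fails in the market
A = 1.0         # $ repair cost when it fails the factory standard
delta0 = 25.0   # functional limit, % deviation of output voltage
target = 115.0  # target output, volts

phi = math.sqrt(A0 / A)    # eq. (22.9)
delta = delta0 / phi       # eq. (22.18), in %
print(round(phi, 1))                   # 17.3
print(round(target * delta / 100, 1))  # 1.7 volts, as in eq. (22.19)
```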
A characteristic value that is nonnegative and preferably as small as possible is called a
smaller-the-better characteristic. The safety factor for a smaller-the-better characteristic is also found from equation (22.9). For example, if the fatal dose of a poisonous
element (LD50) is 8000 ppm, the loss, A0, when someone has died due to negligence is obtained using the following equation:

A0 = (annual national income) × (average life, or average life expectancy)
   = $20,000 × 77.5 years
   = $1,550,000    (22.20)

The safety factor, φ, and factory tolerance, Δ, are determined from equation (22.9),
assuming that the price of the product, A, is $3:

φ = √(A0/A) = √(1,550,000/3) = 719    (22.21)

Δ = Δ0/φ = 8000/719 = 11 ppm    (22.22)
For a larger-the-better characteristic such as strength, the loss function takes the form

L(y) = A0Δ0²/(bx)²    (22.23)

where the strength is proportional to the design variable x with coefficient b. Balancing this loss against a cost proportional to x with coefficient a gives the optimal value

x = [2A0Δ0²/(ab²)]^(1/3)    (22.24)

  = [(2)(300,000)(5000)²/((40)(80²))]^(1/3) = 388    (22.25)

Δ = φΔ0    (22.26)

φ = 4.4    (22.27)

Thus, the lower limit of the standard is 22 tonsf, that is, 4.4 times the strength
corresponding to the functional limit Δ0 = 5 tonsf.

The safety factor method can be applied to any component part and material
tolerance in the same way. In this case one uses the point of LD50, that
is, the characteristic value at which the objective characteristic stops functioning
(the value of the functional limit when the characteristics of other parts and usage
conditions are held at their standard values).
φ = √(A0/A)    (22.28)

Δ = Δ0/φ  (smaller-the-better)    (22.29)

Δ = φΔ0  (larger-the-better)    (22.30)

The quality level of a nominal-the-best characteristic is evaluated as

L = (A/Δ²)σ² = (A0/Δ0²)σ²    (22.31)

where σ² is the mean square of the difference from the target value m of n data,
y1, y2, …, yn, or the mean-squared error, or variance:

σ² = (1/n)[(y1 − m)² + (y2 − m)² + ⋯ + (yn − m)²]    (22.32)
Example

Consider a plate glass product having a shipment cost of $3 per unit, with a dimensional tolerance, Δ, of 2.0 mm. The dimensional data for glass plates delivered
from a factory, minus the target value m, are

0.3  0.6  0.5  0.2  0.0  1.0  1.2  0.8  0.6  0.9
0.2  0.8  1.1  0.5  0.2  0.0  0.3  0.8  1.3  0.0

To obtain the loss due to variability, we must calculate the mean-squared error from
the target value, σ², and substitute it into the loss function. We refer to this quantity
as the mean-squared error, σ² (variance):

σ² = (1/20)(0.3² + 0.6² + ⋯ + 1.3²) = 0.4795 mm²    (22.33)
L = (A/Δ²)σ² = (3/2²)(0.4795) = 36 cents    (22.34)
The mean of the deviations of glass plate size from the target value is 0.365,
indicating that the glass plates are slightly oversized on average. One can create an
ANOVA table to examine this situation (see Chapter 29):

ST = 0.3² + 0.6² + ⋯ + 1.3² = 9.59    (f = 20)    (22.35)

Sm = (0.3 + 0.6 + ⋯ + 1.3)²/20 = 2.66    (f = 1)    (22.36)

Se = ST − Sm = 9.59 − 2.66 = 6.93    (f = 19)    (22.37)

If the mean is adjusted to the target value, the remaining loss is estimated from the
corrected error variation S′e = Se + Ve shown in Table 22.1:

L = (3/2²)(0.365) = (36.0)(0.761) = 27.4 cents    (22.38)

The improvement is therefore

36.0 − 27.4 = 8.6 cents    (22.39)

per sheet of glass plate. If the monthly output is 100,000 sheets, the quality improvement will be worth $8600 per month.

No special tool is required to adjust the mean to the target value. One needs
only to compare the size with the target value occasionally and cut the edge of the plate.
Table 22.1
ANOVA table

Source     f     S       V        S′       ρ (%)
m          1    2.66    2.66     2.295     23.9
e         19    6.93    0.365    7.295     76.1
Total     20    9.59    0.4795   9.590    100.0
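The decomposition behind Table 22.1 generalizes to any set of deviations from target. The Python sketch below uses a small hypothetical data set (not the glass-plate data) to show how centering the mean reduces the loss from (A/Δ²)(ST/n) to (A/Δ²)(S′e/n):

```python
# Hypothetical deviations y_i - m (not the glass-plate data of the text).
devs = [0.5, 0.8, 0.2, 0.7, 0.4, 0.4]
n = len(devs)

ST = sum(d * d for d in devs)      # total variation,  f = n
Sm = sum(devs) ** 2 / n            # mean effect,      f = 1
Se = ST - Sm                       # error variation,  f = n - 1
Ve = Se / (n - 1)                  # error variance
Se_corr = Se + Ve                  # S'e, as in Table 22.1

A, Delta = 3.0, 2.0                # $3 loss, 2.0 mm tolerance, as in the example
L_before = (A / Delta ** 2) * (ST / n)       # eq. (22.34)
L_after = (A / Delta ** 2) * (Se_corr / n)   # eq. (22.38): mean moved to target
print(round(L_before, 4), round(L_after, 4))  # 0.2175 0.036
```

Because this hypothetical data set has a large mean offset, most of its loss disappears once the mean is adjusted, just as in the glass-plate example.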
The quality level of a smaller-the-better characteristic is evaluated as

L = (A0/Δ0²)σ²    (22.40)

where A0 is the loss incurred by the company if the functional limit is exceeded,
Δ0 the functional limit, and σ² the mean-squared error, or variance.
In production factories, the following Δ and A are used instead of Δ0 and A0,
respectively:

Δ: factory standard
A: loss when a product falls outside the factory standard

σ² = (1/n)(y1² + y2² + ⋯ + yn²)    (22.41)

L = (A/Δ²)σ²    (22.42)
Example

Suppose that the specification for roundness is less than 12 μm. The loss when the
product is not acceptable, A, is 80 cents. Two machines, A1 and A2, are used in
production, and the roundness data for samples taken two per day for two weeks
are summarized in Table 22.2. The unit is micrometers.
Table 22.2
Roundness of samples from two machines (μm)

A1:  0  6  5  0  4  3  2  10  3  4  1  5  7  3  6  2  8  0  4  7
A2:  5  2  4  1  0  3  4   0  2  2  1  4  0  1  2  6  5  2  3  1
We compare the quality levels of the two machines, A1 and A2. The characteristic
value in this example is a smaller-the-better characteristic with an upper standard
limit. Thus, equation (22.40) is the formula to use, with

Δ = 12 μm    (22.43)

A = 80 cents    (22.44)

For A1,

σ1² = (1/20)(0² + 6² + ⋯ + 7²) = 23.4 μm²    (22.45)

L1 = (A/Δ²)σ1² = (0.80/12²)(23.4) = 13 cents    (22.46)

For A2,

σ2² = (1/20)(5² + 2² + ⋯ + 1²) = 8.8 μm²    (22.47)

L2 = (0.80/12²)(8.8) = 4.9 cents    (22.48)

The difference between the two machines is

L1 − L2 = 13.0 − 4.9 = 8.1 cents    (22.49)

or A2 is

13.0/4.9 = 2.7    (22.50)

times better than A1. If production is 2000 units per day, the difference between the two
machines will amount to $40,500 per year, assuming that the number of working days per year is 250.
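Equations (22.45) to (22.50) are easy to verify with a short Python sketch using the Table 22.2 data (the function and variable names are ours):

```python
A, Delta = 0.80, 12.0   # 80-cent loss per defective, 12-um standard
A1 = [0, 6, 5, 0, 4, 3, 2, 10, 3, 4, 1, 5, 7, 3, 6, 2, 8, 0, 4, 7]
A2 = [5, 2, 4, 1, 0, 3, 4, 0, 2, 2, 1, 4, 0, 1, 2, 6, 5, 2, 3, 1]

def smaller_the_better_loss(data):
    sigma2 = sum(y * y for y in data) / len(data)   # mean of y_i**2, eq. (22.41)
    return (A / Delta ** 2) * sigma2                # eq. (22.42)

L1 = smaller_the_better_loss(A1)   # 13 cents
L2 = smaller_the_better_loss(A2)   # 4.9 cents
print(round(L1, 3), round(L2, 3), round(L1 / L2, 1))  # 0.13 0.049 2.7
```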
A characteristic that is nonnegative and preferably as large as possible is called a larger-the-better characteristic:

0 ≤ y ≤ ∞    (22.51)

Its loss function can be expanded in terms of 1/y around y = ∞:

L(y) = L(∞) + (L′(∞)/1!)(1/y) + (L″(∞)/2!)(1/y²) + ⋯    (22.52)

Since the loss is zero and minimum when y is infinitely large,

L(∞) = 0    (22.53)

L′(∞) = 0    (22.54)

Thus, by substituting the above into equation (22.52) and omitting the cubic terms
and beyond, we get the following approximation:

L(y) = k(1/y²)    (22.55)

The only unknown, k, is determined from the loss at one point; for that point we use
the deviation at which a problem actually occurs in the market, Δ0, with loss A0:

L(Δ0) = A0    (22.56)

k = Δ0²A0    (22.57)

L(y) = A0Δ0²(1/y²)    (22.58)

Let y1, y2, …, yn denote n observations. We can calculate the quality level per
product by substituting the data into equation (22.58). Let L be the quality level
per product. Then

L = A0Δ0²σ²    (22.59)

where

σ² = (1/n)(1/y1² + 1/y2² + ⋯ + 1/yn²)    (22.60)
Example

For three-layered reinforced rubber hose, the adhesive strength between layers
(K1, the adhesion of the rubber tube and reinforcing fiber, and K2, the adhesion of the
reinforcing fiber and cover rubber) is very important. For both adhesive strengths, a
lower standard, Δ, is set at 5.0 kgf. If a defective product is scrapped, its loss is
$5 per unit. Annual production is 120,000 units. Two different types of adhesive,
A1 (50 cents) and A2 (60 cents), were used to produce eight test samples each,
Table 22.3
Adhesive strength (kgf)

        A1              A2
     K1     K2       K1     K2
    10.2   14.6      7.6    7.0
     5.8   19.7     13.7   10.1
     4.9    5.0      7.0    6.8
    16.1    4.7     12.8   10.0
    15.0   16.8     11.8    8.6
     9.4    4.5     13.7   11.2
     4.8    4.0     14.8    8.3
    10.1   16.5     10.4   10.6
and the adhesive strengths K1 and K2 were checked. The data are given in Table
22.3.

Let us compare the quality levels of A1 and A2. Since this is a larger-the-better
characteristic, we want the mean to be large and the variability to be small. A
measure that takes both of these requirements into account is the variance,
that is, the mean square of the inverse, given by equation (22.60). The variances for
A1 and A2 are derived from equation (22.60) separately, and the loss function is
obtained from equation (22.59). For A1,

σ1² = (1/16)(1/10.2² + 1/5.8² + ⋯ + 1/16.5²) = 0.02284    (22.61)

L1 = A0Δ0²σ1² = (5)(5²)(0.02284) = $2.85    (22.62)

For A2,

σ2² = (1/16)(1/7.6² + 1/13.7² + ⋯ + 1/10.6²) = 0.01139    (22.63)

L2 = (5)(5²)(0.01139) = $1.42    (22.64)

Therefore, even if the adhesion cost (the sum of adhesive and labor) is 10 cents
higher in the case of A2, the use of A2 is better than the use of A1.
The amount of savings per unit is

(0.50 + 2.85) − (0.60 + 1.42) = $1.33    (22.65)
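The larger-the-better computation of equations (22.61) to (22.64) can be checked with the pooled Table 22.3 data; below is a Python sketch (names are ours, and the second decimal of L1 differs by a cent from the text because the text truncates σ1²):

```python
A0, delta0 = 5.0, 5.0   # $5 scrap loss, 5.0-kgf lower standard
A1 = [10.2, 14.6, 5.8, 19.7, 4.9, 5.0, 16.1, 4.7,
      15.0, 16.8, 9.4, 4.5, 4.8, 4.0, 10.1, 16.5]
A2 = [7.6, 7.0, 13.7, 10.1, 7.0, 6.8, 12.8, 10.0,
      11.8, 8.6, 13.7, 11.2, 14.8, 8.3, 10.4, 10.6]

def larger_the_better_loss(data):
    sigma2 = sum(1.0 / (y * y) for y in data) / len(data)  # eq. (22.60)
    return A0 * delta0 ** 2 * sigma2                       # eq. (22.59)

print(round(larger_the_better_loss(A1), 2))  # 2.86
print(round(larger_the_better_loss(A2), 2))  # 1.42
```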
Reference
1. Genichi Taguchi et al., 1994. Quality Engineering Series 2: On-Line Production. Tokyo:
Japanese Standards Association and ASI Press.
23
Feedback Control Based on Product Characteristics

23.1. Introduction
23.2. System Design of Feedback Control Based on Quality Characteristics
23.3. Batch Production Process
23.4. Design of a Process Control Gauge (Using a Boundary Sample)
23.1. Introduction

Under feedback control, the characteristics of a product produced by a production
process are checked, and if the difference between a characteristic and its target is
greater than a certain level, the process is adjusted. If the difference is not that
great, production continues without adjustment. This type of control system can
be applied without modification to cases where the process is controlled automatically. Even when a process is controlled automatically, routine control by human
beings is necessary.
23.2. System Design of Feedback Control Based on Quality Characteristics

If one unit is checked every n units produced, at a measurement cost B per check, the checking cost per unit product is

B/n  (dollars)    (23.1)

The mean adjustment interval is assumed to be proportional to the square of the adjustment limit:

u0 ∝ D0²    (23.2)

(Many Japanese firms use D0 = Δ/3, while many U.S. firms use D0 = Δ. Which is
more rational can be determined only through comparison with the actual optimal
value, D.) After deciding on an optimal adjustment limit, D, the mean adjustment
interval, u, is estimated by

u = u0(D²/D0²)    (23.3)

Under this assumption, the checking and adjustment costs are calculated with the
use of equations (23.1) and (23.3):

B/n + C/u = B/n + CD0²/(u0D²)    (23.4)
On the other hand, what would happen to the quality level if checking and adjustment were performed? Assuming the absence of checking error, the loss is
given by

(A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/u)]    (23.5)

The first term in the brackets is the mean square of a process mean distributed
uniformly between the adjustment limits:

(1/2D)∫ from m−D to m+D of (y − m)² dy    (23.6)

  = D²/3    (23.7)

The second term is the mean-squared drift accumulated between checks,
((n + 1)/2)(D²/u), and a time lag of l units between checking and adjustment adds

l(D²/u)    (23.8)

so that the quality level is

(A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/u)]    (23.9)
The total loss L, which is the sum of the checking and adjustment costs [i.e., equation
(23.4)] and the quality level [i.e., equation (23.9)], is

L = B/n + C/u + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/u)]    (23.10)

  = B/n + CD0²/(u0D²) + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D0²/u0)]    (23.11)
In particular, if n, u, and D are the current n0, u0, and D0, equation (23.11) becomes

L0 = B/n0 + C/u0 + (A/Δ²)[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)]    (23.12)
The optimal checking interval, n, and the optimal adjustment limit, D, are derived
by differentiating equation (23.11) with respect to n and D, setting the derivatives
equal to zero, and solving:

dL/dn = −B/n² + (A/Δ²)(1/2)(D0²/u0) = 0    (23.13)

n = √(2u0B/A)(Δ/D0)    (23.14)

dL/dD = −2CD0²/(u0D³) + (A/Δ²)(2D/3) = 0    (23.15)

D = [(3C/A)(D0²/u0)Δ²]^(1/4)    (23.16)
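As a sketch of equations (23.14), (23.16), and (23.3) in Python (the function and variable names are ours, not the book's), using the parameters of the dimension-control example that follows:

```python
import math

def optimal_control(A, B, C, Delta, D0, u0):
    """Optimal checking interval n and adjustment limit D, eqs. (23.14)/(23.16)."""
    n = math.sqrt(2 * u0 * B / A) * (Delta / D0)
    D = (3 * C / A * D0 ** 2 / u0 * Delta ** 2) ** 0.25
    return n, D

n, D = optimal_control(A=0.80, B=1.50, C=12.0, Delta=15.0, D0=5.0, u0=1200)
print(round(n), round(D, 1))      # 201 3.8

# Rounding D to a convenient 4.0 um, eq. (23.3) gives the new adjustment interval:
u = 1200 * 4.0 ** 2 / 5.0 ** 2
print(round(u))                   # 768
```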
Example

Consider the control of a certain component part dimension. The parameters are
as follows:

Specification: m ± 15 μm
Loss due to a defective: A = 80 cents
Checking cost: B = $1.50
Time lag: l = 1 unit
Adjustment cost: C = $12
Current checking interval: n0 = 600 units (once every 2 hours)
Current adjustment limit: D0 = 5 μm
Current mean adjustment interval: u0 = 1200 units (twice a day)

(a) Derive the optimal checking interval, n, and the optimal adjustment limit, D, and estimate the extent of improvement over the current situation.
(b) Estimate the process capability index Cp.
(c) Assuming that it takes 3 minutes for checking and 15 minutes for adjustment, calculate the required worker-hours for this process to two digits after the decimal point.
(a) We begin the derivation as follows:

n = √(2u0B/A)(Δ/D0) = √((2)(1200)(1.50)/0.80)(15/5) = 201 ≈ 200 units    (23.17)

D = [(3C/A)(D0²/u0)Δ²]^(1/4) = [((3)(12)/0.80)(5²/1200)(15²)]^(1/4) = 3.8 ≈ 4.0 μm    (23.18)

These are substituted into

L = B/n + C/u + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/u)]    (23.19)

together with

u = u0(D²/D0²) = (1200)(4²/5²) = 768 units    (23.20)
This gives

L = 1.50/200 + 12/768 + (0.80/15²)[4²/3 + (201/2 + 1)(4²/768)] = $0.0496    (23.21)

The loss under the current conditions is

L0 = B/n0 + C/u0 + (A/Δ²)[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)]
   = 1.50/600 + 12/1200 + (0.80/15²)[5²/3 + (601/2 + 1)(5²/1200)] = $0.0644    (23.22)
Assuming that the total hours of operation per year are 2000, a production rate of
300 units per hour will result in a yearly improvement of

(0.0644 − 0.0496)(300)(2000) = $8880    (23.23)
(b) The mean-squared deviation of the controlled characteristic is

σ² = D²/3 + ((n + 1)/2 + l)(D²/u)    (23.24)

so the process capability index is

Cp = (2)(15) / (6√(4²/3 + (201/2 + 1)(4²/768))) = 1.83    (23.25)

The process capability index is about 1.8. This number deteriorates if there is measurement error. The Cp value under the current conditions is

Cp = (2)(15) / (6√(5²/3 + (601/2 + 1)(5²/1200))) = 1.31    (23.26)
(c) Under the optimal checking interval, checking is done every 40 minutes, and each
check takes 3 minutes. Therefore, in one 8-hour day there will be 12 checks,
amounting to 36 minutes. The estimated adjustment interval is once every 768
pieces, or

(300)(8)/768 = 3.1    (23.27)

times a day, which takes (15 minutes)(3.1) = 46.5 minutes per day. Assuming that
the work minutes per day per worker are 480 minutes, the number of personnel required
is

(36 + 46.5)/480 = 0.17 person    (23.28)
Similar calculations are done for each process, and an allocation of processes
per worker is determined. For example, suppose one person is in charge of seven
processes (i.e., M1, M2, …, M7) whose checking intervals are 200, 150, 180, 300,
100, 190, and 250 pieces, respectively. These can be rearranged, allowing about 30% flexibility,
into the following schedule:

Once every 0.5 hour (once every 150 pieces): M1, M2, M3, M5, M6
Once every 1.0 hour (once every 300 pieces): M4, M7

One should create an easily understandable checking system, and based on the
system, the number of checking persons is determined.
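The loss comparison of equations (23.21) and (23.22) can be reproduced with a direct Python transcription of equation (23.10) (names are ours; the text truncates the current loss to 0.0644):

```python
def total_loss(n, D, u, A, B, C, Delta, l):
    """Total loss per unit, eq. (23.10): checking + adjustment + quality loss."""
    quality = (A / Delta ** 2) * (D ** 2 / 3 + ((n + 1) / 2 + l) * D ** 2 / u)
    return B / n + C / u + quality

A, B, C, Delta, l = 0.80, 1.50, 12.0, 15.0, 1
L_opt = total_loss(n=200, D=4.0, u=768, A=A, B=B, C=C, Delta=Delta, l=l)
L_cur = total_loss(n=600, D=5.0, u=1200, A=A, B=B, C=C, Delta=Delta, l=l)
print(round(L_opt, 4), round(L_cur, 4))  # 0.0496 0.0645
```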
23.3. Batch Production Process

When the product characteristic is estimated with a measurement error of standard
deviation σm, the total loss becomes

L = B/n + C/u + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/u) + σm²]    (23.29)
Example

In an injection molding process that produces 12 units in one shot, the average
product dimension is estimated once every 100 shots and adjusted relative to its
target value. It takes 1 hour of production to produce 100 shots, and the loss per
defective is 30 cents. The tolerance is 120 μm; the current adjustment limit is 50
μm; the adjustment cost is $18; the mean adjustment interval is 800 shots; the cost B for the
estimation of average size is $4; and the time lag l is four shots.

(a) Derive the optimal checking interval, n, and optimal adjustment limit, D, and obtain the possible gain per shot and per year (200,000 shots/year).
(b) Suppose that the standard deviation of the estimation error of the current measuring method is 15 μm. Compare this with a new measuring method whose estimation error is supposed to be one-third of the current error and whose cost is $7 ($3 more than the current cost). Which method is better, and by how much? Assume that the time lag is the same for both alternatives.
(c) The standard deviation between cavities is 6 μm. Obtain the process capability index when the process is controlled with the new measuring method.

(a) Since 12 units are produced at a time, 12 units are treated as one unit of production.
The same treatment would be used for a mass-produced chemical product (e.g., a
10-ton batch). A is the loss when all units in a batch are defective. Thus, the
parameters can be summarized as follows:

Tolerance of the dimension: Δ = 120 μm
Loss due to a defective batch: A = (30)(12) cents = $3.60
Cost of estimating the average dimension: B = $4
Adjustment cost: C = $18
Time lag: l = 4 shots
Current mean dimension checking interval: n0 = 100 shots
Current adjustment limit: D0 = 50 μm
Current mean adjustment interval: u0 = 800 shots
Derive the optimal checking interval, n, and optimal adjustment limit, D:

n = √(2u0B/A)(Δ/D0) = √((2)(800)(4)/3.60)(120/50) = 101 ≈ 100 shots    (23.30)

D = [(3C/A)(D0²/u0)Δ²]^(1/4) = [((3)(18)/3.60)(50²/800)(120²)]^(1/4) = 29 ≈ 30 μm    (23.31)
Thus, the adjustment limit of 30 μm, which is smaller than the current 50 μm, is better.
The loss under the current method, L0, is

L0 = B/n0 + C/u0 + (A/Δ²)[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)]
   = 4/100 + 18/800 + (3.60/120²)[50²/3 + (101/2 + 4)(50²/800)]
   = $0.31    (23.32)

The new mean adjustment interval is

u = u0(D²/D0²) = (800)(30²/50²) = 288 shots    (23.33)

and the loss under the optimal system is

L = 4/100 + 18/288 + (3.60/120²)[30²/3 + (101/2 + 4)(30²/288)]
  = $0.22    (23.34)

Therefore, use of the optimal system results in an improvement over the current
system of

(0.31 − 0.22)(100)(2000) = $18,000    (23.35)
(23.35)
(b) For the current checking method, the loss due to measurement error (its
standard deviation),
A 2
3.60
m
(152) 6 cents
2
1202
(23.36)
(23.37)
On the other hand, the new measuring method costs $7 rather than $4, and
thus it is necessary to derive the optimal checking interval, n, again:
n
(2)(800)(7)
3.60
120
50
(23.38)
Using equation (23.29), the loss with the new measuring method is

L = 7/150 + 18/288 + (3.60/120²)[30²/3 + (151/2 + 4)(30²/288) + 5²] = $0.25    (23.39)

Including its measurement-error loss of 6 cents, the current method costs 0.22 + 0.06 = $0.28 per shot. Because measuring accuracy is inferior using the current method, the new measuring method results in an improvement of

(0.28 − 0.25)(100)(2000) = $6000    (23.40)
(c) The on-line control of a production process does not allow control of the variability
among cavities created by one shot. Therefore, if σc is the standard deviation
among cavities, the overall standard deviation, σ, is

σ = √(D²/3 + ((n + 1)/2 + l)(D²/u) + σm² + σc²)
  = √(30²/3 + (151/2 + 4)(30²/288) + 5² + 6²) = 24.7 μm    (23.41)

Cp = (2)(120)/((6)(24.7)) = 1.6    (23.42)
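Equations (23.41) and (23.42) combine drift, measurement error, and cavity-to-cavity variation into one capability estimate; here is a Python sketch (the function and parameter names are ours):

```python
import math

def capability(Delta, D, n, u, l, sigma_m=0.0, sigma_c=0.0):
    """Cp = 2*Delta / (6*sigma), with sigma**2 as in eq. (23.41)."""
    var = (D ** 2 / 3 + ((n + 1) / 2 + l) * D ** 2 / u
           + sigma_m ** 2 + sigma_c ** 2)
    return 2 * Delta / (6 * math.sqrt(var))

Cp = capability(Delta=120, D=30, n=150, u=288, l=4, sigma_m=5, sigma_c=6)
print(round(Cp, 1))  # 1.6
```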
23.4. Design of a Process Control Gauge (Using a Boundary Sample)

This section treats the design of preventive maintenance based on quality data. The method of preventive maintenance can also be applied to the preventive treatment of delivered products.

Suppose that a tolerance, Δ, is specified for the appearance of a product, and that the
appearance is currently controlled at D0 using a boundary sample. As in the
example of Section 23.2, the parameters are defined as follows:

A: loss per defective
Δ: tolerance
B: checking cost
C: adjustment cost
n0: current checking interval
u0: current mean adjustment interval
D0: current adjustment limit
l: time lag

The optimal checking interval, n, and optimal adjustment limit, D, are given by
the following:

n = √(2u0B/A)(Δ/D0)    (23.43)

D = [(3C/A)(D0²/u0)Δ²]^(1/4)    (23.44)

If pass/fail is determined using a boundary or limit sample without an intermediate boundary sample, we have the following:

D0 = Δ (the tolerance)    (23.45)

u0 = u    (23.46)

n0 = √(2uB/A)    (23.47)
Substituting these into equation (23.44),

D = [(3C/A)(Δ²/u)Δ²]^(1/4)    (23.48)

  = Δ[3C/(Au)]^(1/4)    (23.49)

Thus, the ratio ψ = D/Δ is given by

ψ = [3C/(Au)]^(1/4)    (23.50)

Likewise, substituting into equation (23.43),

n = √(2uB/A)(Δ/D0)    (23.51)

By substituting

D0 = Δ,  u0 = u    (23.52)

this becomes

n = √(2uB/A)    (23.53)
Furthermore, the loss function, L, is obtained from the formula for continuous
values. That is,

L = B/n + C/u + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/u)]    (23.54)

With D = ψΔ and the mean adjustment interval uψ² under this tightened limit,

L = B/n + C/(uψ²) + ψ²A[1/3 + ((n + 1)/2 + l)(1/(uψ²))]    (23.55)
Example

Consider a characteristic value that cannot be measured easily, so that product pass/fail is evaluated using a boundary sample. The loss, A, when the product is not
accepted is $1.80; the mean failure interval, u, is 2300 units; the adjustment cost,
C, is $120; the measuring cost, B, is $4; and the time lag, l, is 2 units.

In the following, we design the optimal preventive maintenance method. The
parameters are:

A = $1.80
B = $4
C = $120
u = 2300 units
l = 2 units

From equation (23.50),

ψ = [3C/(Au)]^(1/4) = [(3)(120)/((1.80)(2300))]^(1/4) = 0.54 ≈ 0.5    (23.56)

This is about one-half of the unacceptable boundary sample. In other words, optimal control should be done with a boundary sample that is half as bad as the
unacceptable boundary sample. That level is D:

D = 0.5Δ    (23.57)

Next, derive the optimal checking interval, n, and the loss function, L. From equation (23.53),

n = √(2uB/A) = √((2)(2300)(4)/1.80) = 101 ≈ 100 units    (23.58)

This means that the optimal checking interval of inspection is 100 units. The loss does
not vary much even if the checking interval is changed by 20 to 30% around the
optimal value. In this case, if there are not sufficient staff, one can stretch the
checking interval to 150.
Next is the loss function. Under the current conditions,

L0 = B/n0 + C/u0 + (A/Δ²)[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)]    (23.59)
and since

u₀ = u = 2300 units    (23.60)

D₀ = Δ    (23.61)

n₀ = √(2u₀B/A) = √((2)(2300)(4)/1.80) ≈ 100 units    (23.62)

the current loss becomes
L₀ = B/n₀ + C/u₀ + A/3 + ((n₀ + 1)/2 + l)(A/u₀)
   = 4/100 + 120/2300 + 1.80/3 + (101/2 + 2)(1.80/2300)
   ≈ $0.73    (23.63)
For the optimal method,

L = B/n + C/ū + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/ū)]    (23.64)

With

D = φΔ    (23.65)

ū = u(D/Δ)² = uφ²    (23.66)

the loss becomes

L = B/n + C/ū + φ²A/3 + ((n + 1)/2 + l)(φ²A/ū)    (23.67)
Substituting the following,

ū = uφ² = (2300)(0.5²) = 575 units    (23.68)

φ²A = (0.5²)(1.80) = 45 cents    (23.69)

we obtain

L = 4/100 + 120/575 + 0.45/3 + (101/2 + 2)(0.45/575) ≈ $0.44    (23.70)
Compared with the current method, the optimal method results in a gain of

73 − 44 = 29 cents per unit    (23.71)
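The calculations of this example can be scripted as a cross-check. The sketch below assumes the notation of equations (23.50), (23.52), (23.53), and (23.55); the function name is illustrative, not from the handbook.

```python
import math

def boundary_sample_design(A, B, C, u, l):
    """Return (phi, n, L): relative severity of the optimal boundary sample
    (eq. 23.50), optimal checking interval (eq. 23.53), and per-unit loss
    (eq. 23.55)."""
    phi = (3 * C / (A * u)) ** 0.25        # D/Delta
    n = math.sqrt(2 * u * B / A)
    u_bar = u * phi ** 2                   # mean adjustment interval (23.52)
    L = (B / n + C / u_bar + phi ** 2 * A / 3
         + ((n + 1) / 2 + l) * phi ** 2 * A / u_bar)
    return phi, n, L

phi, n, L = boundary_sample_design(A=1.80, B=4, C=120, u=2300, l=2)
print(round(phi, 2), round(n))   # 0.54 101
print(round(L, 2))               # 0.43 (the text, rounding phi to 0.5, gets 0.44)
```

Using the unrounded φ gives a loss a cent lower than the $0.44 obtained with φ = 0.5, which illustrates how flat the loss is near the optimum.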
24
Feedback Control of a Process Condition

24.1. Introduction
24.2. Process Condition Control
24.3. Feedback Control System Design: Viscosity Control
24.4. Feedback Control System: Feedback Control of Temperature
24.1. Introduction

In this chapter the method for designing a feedback control system for changes
in process conditions is explained. A case in which changes in the process condition
affect product quality is treated here; the case in which deterioration of the process
condition leads to stoppage of the process is discussed in later chapters.

24.2. Process Condition Control

The quality control cost per unit of production for feedback control of a process
condition has the same form as for a product characteristic:

L = B/n + C/ū + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/ū)]    (24.1)

Loss A in the numerator is the loss when a change in the process condition creates
defectives. A process condition by itself is not connected with the concept of a
defective; it matters only through its effect on the product.

Taguchi's Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
The current loss, L₀, and the loss under a new checking interval n and adjustment
limit D are

L₀ = B/n₀ + C/u₀ + (A/Δ²)[D₀²/3 + ((n₀ + 1)/2 + l)(D₀²/u₀)]    (24.2)

L = B/n + C/ū + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/ū)]    (24.3)

where the mean adjustment interval after the limit is changed from D₀ to D is

ū = u₀(D²/D₀²)    (24.4)
Example
Consider a process of continuous emulsion coating. The object of control is the
viscosity of the emulsion. The standard of coating thickness is m₀ = 8 μm, and the
loss caused by failing to meet the standard is $3/m². When the viscosity of the
emulsion changes by 5.3 P, the coated film fails to meet the standard. The current
checking interval, n₀, is 6000 m² (once every 2 hours); the current adjustment limit
is 0.9 P; and the mean adjustment interval is 12,000 m² (twice a day). The cost B of
measuring the viscosity x is $2, and the adjustment cost, C, is $10. The time lag, l,
of the measurement is 30 m². Derive the optimal checking interval, n, the optimal
adjustment limit, D, and the profit per m² and per year (assume 2000-hour/year
operation).

According to the assumptions, the tolerance of viscosity x is 5.3 P. The parameters
can be summarized as follows.
Tolerance: Δ = 5.3 P
Loss due to a defective: A = $3
Measurement cost: B = $2
Time lag for measurement: l = 30 m²
Adjustment cost: C = $10
Current checking interval: n₀ = 6000 m²
Current adjustment limit: D₀ = 0.9 P
Current mean adjustment interval: u₀ = 12,000 m²
The current loss function, L₀, is

L₀ = B/n₀ + C/u₀ + (A/Δ²)[D₀²/3 + ((n₀ + 1)/2 + l)(D₀²/u₀)]
   = 2/6000 + 10/12,000 + (3/5.3²)[0.9²/3 + (6001/2 + 30)(0.9²/12,000)]
   ≈ $0.052 per m²    (24.5)
The optimal checking interval and adjustment limit are

n = (Δ/D₀)√(2u₀B/A) = (5.3/0.9)√((2)(12,000)(2)/3) ≈ 745, rounded to 1000 m²    (24.6)

D = [(3C/A)(D₀²/u₀)Δ²]^(1/4) = [((3)(10)/3)(0.9²/12,000)(5.3²)]^(1/4)
  = 0.37 ≈ 0.4 P    (24.7)
The new mean adjustment interval is

ū = u₀(D²/D₀²) = (12,000)(0.4²/0.9²) ≈ 2400 m²    (24.8)
The new loss function is

L = B/n + C/ū + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/ū)]
  = 2/1000 + 10/2400 + (3/5.3²)[0.4²/3 + (1001/2 + 30)(0.4²/2400)]
  ≈ $0.016 per m²    (24.9)

The gain is therefore about 0.052 − 0.016 = $0.036 per m². At 3000 m² per hour
and 2000 hours per year, production is 6,000,000 m² per year, so the yearly gain
is roughly

(0.036)(6,000,000) ≈ $216,000    (24.10)
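The viscosity-control design above can be verified with a short script; this is a sketch assuming equations (24.2) to (24.4), with illustrative variable names.

```python
import math

def loss_per_unit(A, B, C, Delta, n, u, D, l):
    """Loss per m^2 for checking interval n, adjustment limit D, and
    mean adjustment interval u (equation 24.3)."""
    return (B / n + C / u
            + (A / Delta**2) * (D**2 / 3 + ((n + 1) / 2 + l) * D**2 / u))

A, B, C = 3.0, 2.0, 10.0
Delta, lag = 5.3, 30
n0, D0, u0 = 6000, 0.9, 12000

L0 = loss_per_unit(A, B, C, Delta, n0, u0, D0, lag)        # current loss
n_opt = (Delta / D0) * math.sqrt(2 * u0 * B / A)           # (24.6), ~745
D_opt = ((3 * C / A) * (D0**2 / u0) * Delta**2) ** 0.25    # (24.7), ~0.37
n, D = 1000, 0.4                                           # rounded values used
u_bar = u0 * D**2 / D0**2                                  # (24.8), ~2370
L = loss_per_unit(A, B, C, Delta, n, u_bar, D, lag)
print(round(L0, 4), round(L, 4))   # 0.0518 0.0157, a gain of ~$0.036 per m^2
```

Note that the loss is evaluated with the rounded values n = 1000 and D = 0.4, as in the text; the unrounded optima give almost the same figure.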
Example
Consider a heat process where objects are fed continuously under 24-hour operation.
When the temperature changes by 40°C, the product becomes defective, and
the unit loss is $3. Although the temperature is controlled automatically at all times,
a worker checks the temperature with a standard thermometer once every 4 hours.
The cost of checking, B, is $5; the current adjustment limit, D₀, is 5°C; the mean
adjustment interval is 6 hours; the adjustment cost, C, is $18; the number of units
heat-treated in 1 hour is 300; the time lag of measurement is 20 units; and the
measurement error (standard deviation) of the standard thermometer is about 1.0°C.

(a) Design an optimal temperature control system and calculate the yearly
improvement over the current system. Assume that the number of operation days
per year is 250.

(b) If a new temperature control system is introduced, the annual cost will
increase by $12,000, but the degree of temperature drift will be reduced to
one-fifth of the existing level. Calculate the loss/gain of the new system.
Parameters are summarized as follows:

Δ = 40°C
A = $3
B = $5
C = $18
n₀ = (300)(4) = 1200 units
D₀ = 5°C
u₀ = (300)(6) = 1800 units
l = 20 units
σ = 1.0°C
The current loss, including the contribution of the measurement error σ, is

L₀ = B/n₀ + C/u₀ + (A/Δ²)[D₀²/3 + ((n₀ + 1)/2 + l)(D₀²/u₀) + σ²]
   = 5/1200 + 18/1800 + (3/40²)[5²/3 + (1201/2 + 20)(5²/1800) + 1.0²]
   ≈ $0.0478    (24.11)

The optimal checking interval and adjustment limit are

n = (Δ/D₀)√(2u₀B/A) = (40/5)√((2)(1800)(5)/3) ≈ 620,
    rounded to 600 units (once in 2 hours)    (24.12)

D = [(3C/A)(D₀²/u₀)Δ²]^(1/4) = [((3)(18)/3)(5²/1800)(40²)]^(1/4)
  ≈ 4.5°C, rounded to the current 5°C    (24.13)
Accordingly, the mean adjustment interval remains the same, and the new loss
function is

L = 5/600 + 18/1800 + (3/40²)[5²/3 + (601/2 + 20)(5²/1800) + 1.0²]
  ≈ $0.0441    (24.14)

The yearly improvement for part (a) is therefore about

(0.0478 − 0.0441)(300)(24)(250) ≈ $6,700    (24.15)

For part (b), the cost increase of the new system per unit of production is

12,000/((300)(24)(250)) ≈ $0.0067    (24.16)
Assuming that u₀ = (1800)(5) = 9000, the optimal checking interval, n, and
optimal adjustment limit, D, are

n = (40/5)√((2)(9000)(5)/3) ≈ 1390, rounded to 1200 units (once in 4 hours)

D = [((3)(18)/3)(5²/9000)(40²)]^(1/4) ≈ 3.0°C    (24.17)

ū = u₀(D²/D₀²) = (9000)(3.0²/5²) = 3240 units    (24.18)

Thus,

L = 5/1200 + 18/3240 + (3/40²)[3²/3 + (1201/2 + 20)(3²/3240) + 1.0²]
  ≈ $0.0205    (24.19)
Even if the increase in cost given by equation (24.16) (i.e., $0.0067) is
added, the loss is $0.0272. Therefore, compared to equation (24.14), a yearly
improvement of $30,420 can be expected:

(0.0441 − 0.0272)(300)(24)(250) = $30,420    (24.20)
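Both parts of this example can be reproduced with a small script. The sketch below assumes the loss function of equation (24.11), with the measurement error σ included; using unrounded losses it gives a figure close to, but not identical with, the text's $30,420, which is computed from rounded losses.

```python
def loss(A, B, C, Delta, n, u, D, lag, sigma):
    """Loss per unit with measurement error sigma included (cf. eq. 24.11)."""
    return (B / n + C / u + (A / Delta**2)
            * (D**2 / 3 + ((n + 1) / 2 + lag) * D**2 / u + sigma**2))

A, B, C, Delta, lag, sigma = 3, 5, 18, 40, 20, 1.0

L0 = loss(A, B, C, Delta, n=1200, u=1800, D=5, lag=lag, sigma=sigma)  # ~0.0478
La = loss(A, B, C, Delta, n=600,  u=1800, D=5, lag=lag, sigma=sigma)  # ~0.0442
# part (b): drift cut to one-fifth, so u0 = 9000; then D ~ 3.0 C, u_bar = 3240
Lb = loss(A, B, C, Delta, n=1200, u=3240, D=3, lag=lag, sigma=sigma)  # ~0.0205
extra = 12000 / (300 * 24 * 250)           # cost increase per unit, ~$0.0067
gain_b = (La - (Lb + extra)) * 300 * 24 * 250
print(round(gain_b))   # roughly $30,700 per year
```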
25
Process Diagnosis and Adjustment

25.1. Introduction
25.2. Quality Control Systems and Their Cost during Production
25.3. Optimal Diagnostic Interval
25.4. Derivation of Optimal Number of Workers
25.1. Introduction
Some quality characteristics, such as soldering and electric welding, cannot be
measured on a continuous scale; they either pass or fail. Although process conditions
can be adjusted to control these characteristics, costs would increase significantly
if we attempted to control all conditions. Frequently, we do not even know
which condition is causing problems, and one cannot possibly anticipate and
check for all potential future problems, such as arsenic contamination of milk or
PCB (polychlorinated biphenyl) contamination of food oil. For quality control
testing to find unknown items, it would be necessary to design and conduct a
performance test or animal test that would incorporate all possibilities. In this
chapter we examine the design of a process control system when measurement
technology is not available: the only possible test is to check whether or not a
product functions, or go/no-go gauges are used. In this situation the emphasis is
on minimizing the cost of control rather than the potential loss due to the
variability of products.
25.2. Quality Control Systems and Their Cost during Production

B: diagnosis cost per diagnosis
C: adjustment cost when the process is found abnormal
u: mean failure interval, the average number of units produced between two
   process failures

If the number of failures is zero from the start of production to the present, the
following is used:

u = 2 × (quantity of production in the period)

l: time lag, the number of output units produced between the time of sample
   taking and the time of process stoppage/adjustment if the process is
   diagnosed as abnormal
Therefore, L, the quality control cost per unit of production when process
diagnosis and adjustment are performed at diagnosis interval n, becomes the
following:

L = (diagnostic cost per unit of production) + (loss from producing defectives
between diagnoses at interval n) + (process adjustment cost) × (probability of
process adjustment) + (loss due to time lag)

  = B/n + ((n + 1)/2)(A/u) + C/u + lA/u    (25.1)
Example
An automatic welding machine sometimes fails due to welding-head abrasion and
carbon deposition. Currently, the welded spots of a product are inspected at
100-unit intervals. Tensile strength and the welding surface are inspected to find
nonwelded or imperfectly welded product. If the machine fails, the product is
scrapped, and the loss per unit is 50 cents. Diagnosis cost B is $1.60. Although it
takes no time to sample, inspection takes 4 minutes to perform, during which
thirty units can be produced; therefore, the time lag, l, for this inspection method
is 30 units. Furthermore, production in the past two months was 84,000 units, and
the number of failures during this period was 16.

The adjustment cost, C, when the machine fails is $31.70. It is derived in the
following way. First, when the machine fails, it has to be stopped for about 20
minutes; the loss due to stoppage (which has to be covered by overtime work,
etc.) is $47.10 per hour, or $15.70. The products were found to be acceptable at
the previous inspection, so the failure occurred somewhere among the 100 units
produced since then. A binary search procedure is used to screen these 100 units:
if the fiftieth is checked and found to be acceptable, the first 50 products are sent
to the next process or to the market; if the seventy-fifth, in the middle of the last
50, is found to be defective, the seventy-sixth to hundredth are scrapped. The
search is continued in this way until no more than four items remain; they are
scrapped, since further selection is uneconomical. This screening method requires
inspection of five products; the cost is $8. Direct costs such as repair and
replacement of the welding head, which can be done in 20 minutes, are $8. Thus,
the total adjustment cost is C = $31.70.
The parameters for this problem can be summarized as follows:

A: $0.50
B: $1.60
C: $31.70
u: 84,000/16 = 5250 units
l: 30 units
n: 100 units

L = B/n + ((n + 1)/2)(A/u) + C/u + lA/u
  = 1.60/100 + (101/2)(0.50/5250) + 31.70/5250 + (30)(0.50)/5250
  ≈ $0.0297    (25.2)
The quality control cost for failure of the automatic welding machine is $0.0297
per unit of output when diagnosis/adjustment is done at 100-unit intervals. (Note:
If we had a welding machine never known to fail, no diagnosis cost would be
necessary and no welding machine adjustment would be necessary, since no
defective products would ever be produced; the value of L in equation (25.2)
would be zero. In reality, such a machine does not exist, and even if it did, it
would be very expensive.) If the interest and depreciation of such a machine
exceeded those of the current machine by 10 cents per unit of output, its
introduction would actually increase the cost by about 7 cents per unit.

It is surprising that the processing cost difference is only 3 cents between a
machine that never fails, or a tool whose life is infinite, and a welding machine that
fails eight times a month and has a limited life. Of course, if the diagnosis/
adjustment method is irrational, the difference will be greater. Since the mean
failure interval corresponds to about one failure every three days, one may think
that diagnosis need be done only once a day. In such a case, the diagnostic interval
n is 1500 units (the quantity of daily production), and the quality control cost L
becomes
L = 1.60/1500 + (1501/2)(0.50/5250) + 31.70/5250 + (30)(0.50)/5250
  ≈ $0.0814    (25.3)

If, on the other hand, diagnosis is done at 50-unit intervals,

L = 1.60/50 + (51/2)(0.50/5250) + 31.70/5250 + (30)(0.50)/5250
  ≈ $0.0433    (25.4)
In this case, the cost increases due to excessive diagnosis by $0.0136 per product
compared to the current diagnosis interval. What is the optimal diagnosis interval?
This problem is addressed in the next section.
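Equation (25.1) is easy to evaluate for candidate intervals; the sketch below, using this example's parameters, reproduces the three costs just discussed.

```python
def qc_cost(n, A=0.50, B=1.60, C=31.70, u=5250, lag=30):
    """Equation (25.1): quality control cost per unit at diagnosis interval n."""
    return B / n + (n + 1) / 2 * A / u + C / u + lag * A / u

for n in (50, 100, 1500):
    print(n, round(qc_cost(n), 4))
# 50 0.0433    (excessive diagnosis)
# 100 0.0297   (current interval)
# 1500 0.0814  (diagnosing only once a day)
```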
25.3. Optimal Diagnostic Interval

The optimal diagnostic interval, n, is given by

n = √( 2(u + l)B / (A + C/u) )    (25.5)
Example
For the automatic welding machine example in Section 25.2,

n = √( (2)(5250 + 30)(1.60) / (0.50 + 31.70/5250) ) ≈ 185 units    (25.6)

L = 1.60/185 + (186/2)(0.50/5250) + 31.70/5250 + (30)(0.50)/5250
  ≈ $0.0263    (25.7)
Therefore, by altering the diagnostic interval from 100 to 185 units, we obtain
a cost reduction of 0.0297 − 0.0263 = $0.0034 per unit. This translates into a
cost reduction of (0.0034)(42,000) ≈ $143 per month. Although this number
appears to be small, if there are 200 machines of various types and we can improve
the cost of each by a similar amount, the total cost improvement will be significant.

It does not matter if there is substantial estimation error in parameters A, B, C,
u, and l, because the value of L does not change much even if the value of these
parameters changes by up to about 30%. For example, if loss A due to a defect is
estimated to be 70 cents, which is nearly a 40% overestimate of 50 cents, then

n ≈ 156 units    (25.8)
that is, a difference of about 15% from the correct answer, 185 units. Furthermore,
the quality control cost L in the case of n = 156 units is

L = 1.60/156 + (157/2)(0.50/5250) + 31.70/5250 + (30)(0.50)/5250
  ≈ $0.0266    (25.9)

that is, a difference of only $0.0003 per product compared to the optimal solution.
Therefore, the optimal diagnostic interval can be set anywhere within about 20%
of 185 (i.e., between 150 and 220), depending on the circumstances of the
diagnostic operation. For example, if there are not enough workers, choose 220
units; if there are surplus workers, choose 150 units.
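This robustness claim can be checked numerically; the sketch below assumes the notation of equations (25.1) and (25.5).

```python
import math

def n_opt(A, B, C, u, lag):
    """Equation (25.5): optimal diagnosis interval."""
    return math.sqrt(2 * (u + lag) * B / (A + C / u))

def qc_cost(n, A, B, C, u, lag):
    """Equation (25.1): quality control cost per unit."""
    return B / n + (n + 1) / 2 * A / u + C / u + lag * A / u

args = dict(B=1.60, C=31.70, u=5250, lag=30)
n_star = n_opt(A=0.50, **args)     # ~183 (the text rounds to 185)
n_wrong = n_opt(A=0.70, **args)    # with a 40% overestimate of A: ~155
L_star = qc_cost(185, A=0.50, **args)
L_wrong = qc_cost(round(n_wrong), A=0.50, **args)
print(round(L_wrong - L_star, 4))  # 0.0002 -- a negligible penalty
```

A 40% error in A moves the interval by about 15% but changes the cost by only a few hundredths of a cent, which is the point of the passage above.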
Equations (25.1) and (25.5) are the basic formulas for process adjustment.
25.4. Derivation of Optimal Number of Workers

Example

LP records are produced by press machines (this is an example from the 1960s)
in a factory that has 40 machines; each machine produces one record every
minute, and weekly operation is 40 hours. Dust on the press machines and
adjustment error sometimes create defective records. Process diagnosis/adjustment
is currently conducted at 100-record intervals. The loss per defective record, A, is
$1.20; the cost per diagnosis, B, is $8.00; the diagnosis time lag, l, is 30 records;
the mean failure interval of the press machines, u, is 8000 records; adjustment
time when a press machine breaks down is two hours on average; and the
adjustment cost, C, is $50. Derive the optimal diagnosis interval and the optimal
number of workers necessary for the factory operation.
Since there are 40 press machines, the number of records produced in one week
is (1 record/minute)(60 minutes)(40 hours)(40 machines) = 96,000 records.
Therefore, the number of diagnoses per week is 960, and the total diagnosis time is

(960)(30 minutes) = 28,800 minutes = 480 hours    (25.10)

The number of failures per week is 96,000/8000 = 12, so the total adjustment
time is

(12)(2 hours) = 24 hours    (25.11)

This is fewer than the weekly hours of one worker. Therefore, a total of 13 workers
are currently working in this factory for diagnosis and adjustment.
From equation (25.5), the optimal diagnostic interval is

n = √( (2)(8000 + 30)(8) / (1.20 + 50/8000) ) ≈ 330 units    (25.12)
This implies that the diagnostic interval can be extended to 3.3 times the
current level, so the number of diagnosing workers can be 1/3.3 of the current 12
workers, or 3.6. Thus, one worker should take 10 machines, checking one after
another. It is not necessary to be overly concerned about the figure of 330 records;
it can be 250 or 400 once in a while. A worker should diagnose each machine
about every 30 minutes.

On the other hand, the frequency of failure will not change even if the diagnostic
interval is changed, meaning that one person is still required for adjustment. Thus,
five workers are more than enough for this factory. If a diagnosing worker also takes
responsibility for adjusting the machines that fail, he or she can take eight machines
instead of 10, since the diagnostic interval can be changed by 20 to 30%. Therefore,
the maximum number of workers for diagnosis and adjustment of this press process
is four or five. Using five workers, the number of defectives produced between
diagnoses will increase by about 3.3 times, and the total quality control cost, L,
will be
L = B/n + ((n + 1)/2)(A/u) + C/u + lA/u
  = 8/330 + (331/2)(1.20/8000) + 50/8000 + (30)(1.20)/8000
  ≈ $0.0598    (25.13)
which is a cost improvement over the current quality control cost, L₀,

L₀ = 8/100 + (101/2)(1.20/8000) + 50/8000 + (30)(1.20)/8000
   ≈ $0.0983    (25.14)

of about $0.0385 per record. For one week, this means a profit from the quality
control system of

(0.0385)(60 minutes)(40 hours)(40 machines) ≈ $3700    (25.15)
The predicted fraction defective at the new interval is

p = ((n + 1)/2 + l)(1/u) = (331/2 + 30)(1/8000) ≈ 0.024 = 2.4%    (25.16)

If the actual defective ratio is far greater than 2.4%, there is not enough diagnosis;
if it is much less than 2.4%, there is excessive checking. Even at the new diagnostic
interval, a quality control cost of about 6 cents per record remains; it is necessary
to keep trying to reduce this cost closer to zero.
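The whole LP-record calculation can be reproduced in a few lines; this sketch uses the example's parameters and the formulas of equations (25.1), (25.5), and (25.16).

```python
import math

A, B, C, u, lag = 1.20, 8.00, 50.0, 8000, 30

def qc_cost(n):
    """Equation (25.1) with the LP-record parameters."""
    return B / n + (n + 1) / 2 * A / u + C / u + lag * A / u

n_star = math.sqrt(2 * (u + lag) * B / (A + C / u))   # ~326 (text rounds to 330)
L, L0 = qc_cost(330), qc_cost(100)
gain_week = (L0 - L) * 96000                          # weekly production
p = ((330 + 1) / 2 + lag) / u                         # predicted fraction defective
print(round(gain_week), round(p, 3))                  # about $3700/week, 2.4%
```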
Part IX
Experimental Regression

26
Parameter Estimation in Regression Equations

26.1. Introduction
26.2. Estimation of Operation Time and Least Squares Method
26.3. Experimental Regression Analysis
     Setting of Initial Values
     Selection of Orthogonal Array and Calculation
     Sequential Approximation
     Selection of Estimated Values
26.1. Introduction
Quite often we wish to express results or objectives as functions of causes or means
in various specialized fields, such as natural science, social science, or engineering.
If we cannot perform experiments that arrange the causes or means, that is, the
factors, in a particular manner, we often estimate the parameters of a regression
function from data obtained by research or observation. Although the least squares
method is frequently used to estimate the unknown parameters once a linear
regression model has been selected as the regression function, in many cases this
procedure is not used appropriately, and the predicted regression equations do not
function well from the standpoint of the specialized engineering field. Experimental
regression analysis is one of several methods of avoiding this situation.
26.2. Estimation of Operation Time and Least Squares Method

Suppose that there are k types of work and that each job of type i takes aᵢ minutes;
if xᵢ jobs of type i are done, the operation time should be

y = a₁x₁ + a₂x₂ + ⋯ + a_k x_k    (26.1)

In actuality, since there are jobs other than the k types of work, as well as slack
times, we should add a constant a₀ to equation (26.1). Therefore, the total
operation time y is

y = a₀ + a₁x₁ + a₂x₂ + ⋯ + a_k x_k    (26.2)
Table 26.1 shows the incidence of seven different types of work (x₁, x₂, ..., x₇) and
the corresponding operation time y (minutes) for each business office. For the sake
of convenience, we take data for a certain day; however, to improve the adequacy
of the analysis, we sometimes sum up all data for one month. Our objective is to
use these data to estimate a₀, a₁, a₂, ..., a₇ of the following equation:

y = a₀ + a₁x₁ + a₂x₂ + ⋯ + a₇x₇    (26.3)

This process is called estimation of the unknown parameters a₀, a₁, a₂, ..., a₇.
An equation such as equation (26.2), termed a single regression equation or linear
regression equation, takes y as the objective variable and x₁, x₂, ..., x₇ as the
explanatory variables with respect to the unknowns a₀, a₁, a₂, ..., a₇.

As one method of estimating the unknowns in a linear regression equation,
multivariate analysis based on the least squares method is well known. By applying
the least squares method to the data in Table 26.1, we obtain the following result:

y = 551.59 − 72.51x₁ + 25.92x₂ − 44.19x₃ + 65.55x₄
    + 6.58x₅ + 24.02x₆ + 11.35x₇

error variance Vₑ = 233,468.19    (26.4)

Although the true values of a₀, a₁, a₂, ..., a₇ are unknown, estimates such as
a₁ = −72.51 or a₃ = −44.19 are obviously unreasonable, because each of the values
indicates the operation time for one job and cannot be negative.

In the least squares method, the coefficients a₁, a₂, ..., a₇ are estimated in such
a way that they may take any value in (−∞, ∞). It is well known that the estimation
accuracy of a₀, a₁, a₂, ..., a₇ depends significantly on the correlation among
x₁, x₂, ..., x₇: if a certain variable xᵢ correlates strongly with the other variables,
the estimation accuracy of aᵢ is decreased; otherwise, it is improved.

Data actually observed always correlate with each other. In this case, since the
volume of all jobs increases on a busy day, the data x₁, x₂, ..., x₇ tend to correlate
with one another, reflecting the actual situation. When the least squares method is
applied to such a case, the estimates obtained quite often digress from the true
standard times. For calculating appropriate coefficients, the least squares method
is not effective.
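This instability can be demonstrated with a small self-contained simulation. The data below are synthetic, standing in for Table 26.1, and the coefficient values and the `solve` helper are illustrative assumptions, not the handbook's: a common "busyness" factor makes the job counts collinear, which inflates the variance of the least squares estimates.

```python
# Synthetic illustration of unstable least squares coefficients under
# collinearity: on busy days all job counts rise together.
import random

random.seed(1)
true_a = [120, 60, 30]                     # assumed true minutes per job
rows, X, y = 40, [], []
for _ in range(rows):
    busy = random.uniform(0.5, 2.0)        # shared "busyness" factor
    x = [busy * random.uniform(8, 12) for _ in true_a]
    X.append([1.0] + x)                    # leading 1 for the constant a0
    y.append(500 + sum(a * xi for a, xi in zip(true_a, x))
             + random.gauss(0, 200))

def solve(M, v):
    """Solve M @ b = v by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(n):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return [A[i][n] / A[i][i] for i in range(n)]

# normal equations: (X'X) b = X'y
k = 4
XtX = [[sum(X[r][i] * X[r][j] for r in range(rows)) for j in range(k)]
       for i in range(k)]
Xty = [sum(X[r][i] * y[r] for r in range(rows)) for i in range(k)]
coef = solve(XtX, Xty)
print([round(c, 1) for c in coef])   # estimates can wander far from 120, 60, 30
```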
Table 26.1
Incidence of works and total operation time: for each of 43 business offices, the
numbers of jobs of each type, x₁, ..., x₇, and the total operation time y (minutes).
(The 43 data rows are not reproduced here.)
26.3. Experimental Regression Analysis

Setting of Initial Values

First, by assuming how much time each piece of work adds to the total operation
time y when it increases by 1 unit (number of jobs or papers), that is, a rough
range of the standard operation time required for each piece of work, we set up
three levels within the range. More specifically:

1. After setting the minimum to level 1 and the maximum to level 3, we select
   the middle of the two values as level 2.
2. After setting the forecast value to level 2, by creating a certain range before
   and after it based on our confidence in level 2, we establish level 2 minus the
   range as level 1 and level 2 plus the range as level 3.

Either of the methods above can be used. For example, we assume that work 1
takes approximately 2 hours on average and has a range of 30 minutes:

Level 1 of a₁: a₁₁ = 90 minutes
Level 2 of a₁: a₁₂ = 120 minutes
Level 3 of a₁: a₁₃ = 150 minutes

These are called the initial values of a₁. Although the range should be as precise as
possible, we do not need to be too sensitive about it. Table 26.2 shows the initial values
Table 26.2
Initial values of coefficients (minutes)

Coefficient   Level 1   Level 2   Level 3
a₁            90        120       150
a₂            0         60        120
a₃            0         30        60
a₄            0         30        60
a₅            0         10        20
a₆            0         20        40
a₇            0         20        40
for the other work. Note that we do not set up a variable range for a₀ but calculate
it using the following relationship:

a₀ = y − (a₁x₁ + a₂x₂ + ⋯ + a₇x₇)    (26.5)

If, as for level 2, all values of a₁, a₂, ..., a₇ follow the second level in Table 26.2,
the total operation time of business office 1 is estimated as follows:

ŷ₁ = a₀ + a₁₂x₁ + a₂₂x₂ + ⋯ + a₇₂x₇
   = a₀ + (120)(4) + (60)(6) + ⋯ + (20)(22)
   = a₀ + 4980    (26.6)

However, since the actual time is 3667 minutes, y₁ − ŷ₁ indicates the difference
between the actual time and the estimate assuming a₁ = 120, a₂ = 60, ..., a₇ = 20
as unit operation times, and this difference contains the errors of those assumptions.
(Here we exclude a₀ from equation (26.3), so y₁ − ŷ₁ contains not only the
assumption errors regarding a₁ = 120, a₂ = 60, ..., a₇ = 20 but also the operation
time not attributable to x₁, x₂, ..., x₇.) On the other hand, if each of the assumed
coefficients has little error, y₁ − ŷ₁ should not vary greatly from office to office.
The magnitude of that variation can be evaluated by the sum of squares of the
residuals, S. Therefore, we do the same calculation and comparison for the other
business offices.

When we have three assumed values for each of the unknowns a₁, a₂, ..., a₇,
the total number of combinations amounts to 3⁷ = 2187. We therefore select a
certain number of combinations from them, and which combinations are chosen
is determined by an orthogonal array.
Selection of Orthogonal Array and Calculation

By taking into account the number of unknown parameters, we select the size
of the orthogonal array according to Table 26.3. The orthogonal arrays shown in
Table 26.3 have no particular columns where inter-column interactions emerge.
Since these arrays disperse the interaction effects between parameters across the
columns of the sum of squares of residuals Sₑ, we can perform the sequential
approximation even if there are interactions.
Table 26.3
Selection of orthogonal array

Number of Unknown Parameters   Array
1-2                            3 × 3 two-dimensional layout
3-6                            L18
7-13                           L36
14-49                          L108
Because the L18 orthogonal array has seven three-level columns, we can allocate
up to seven parameters to it. However, if the actual calculation does not take long,
we recommend that the L36 array be used in cases of seven parameters or more.
Here we utilize the L36 array, although there are only seven unknown parameters.
Table 26.4 illustrates the L36 array: each number from 1 to 13 in the top horizontal
row denotes an unknown, each digit 1, 2, or 3 in the body indicates the unknown's
level, and each row represents one combination of levels, 36 combinations in the
case of an L36 orthogonal array.

Now we assign each of the seven unknowns a₁, a₂, ..., a₇ to one of columns 1 to
7. This process is called assignment (layout) to an orthogonal array. As a
consequence, rows 1 to 36 represent the following 36 equations:

Row 1: y = 90x₁ + 0x₂ + ⋯ + 0x₇
Row 2: y = 120x₁ + 60x₂ + ⋯ + 20x₇
⋮    (26.7)

For each row we compute, office by office, the difference between the actual time
and the estimate,

Tᵢ = yᵢ − (a₁x₁ᵢ + a₂x₂ᵢ + ⋯ + a₇x₇ᵢ)    (26.8)

the variation (sum of squares of residuals)

S = Σᵢ (Tᵢ − T̄)²    (26.9)

and the averaged T over the 43 offices,

T̄ = (T₁ + T₂ + ⋯ + T₄₃)/43    (26.10)

Table 26.5 summarizes the first-round results. Note that the first column of Table
26.5 does not show T itself but the averaged T̄, which is equivalent to the constant
term a₀ in each equation.
Sequential Approximation

Using Table 26.5, we compute the level-by-level sums of S for each of the
unknown coefficients a₁, a₂, ..., a₇, as shown in Table 26.6. For each unknown we
compare the three level-by-level sums of S. If all of them are almost the same, any
of the three levels assumed for that coefficient leads to about the same amount of
Table 26.4
L36 orthogonal array

(Each of the 36 rows gives the levels, 1, 2, or 3, assigned to the 13 three-level
columns; the rows are not reproduced here.)
error. However, if S for level 2 is small while the S values for levels 1 and 3 are
large, the true value of the coefficient aᵢ will lie close to level 2. In that case, by
resetting three levels in the proximity of level 2, we repeat the calculation. For
practical calculation, we determine the new level values from the level-by-level
sums of S in Table 26.6 according to Table 26.7.

For the Vₑ shown in the determinant column of Table 26.7, we use the error
variance obtained by the least squares method: the coefficients calculated by the
least squares method are unreliable, but the error variance can be considered
trustworthy. For the sake of convenience, we recommend using 3 as the F-test
criterion, because a value of 2 to 4 has proved the most appropriate in our
experience.

For instance, coefficient a₁ has the three levels 90, 120, and 150, and Table 26.6
shows the following:

S(a₁₁) = 0.39111173 × 10⁹
S(a₁₂) = 0.34771989 × 10⁹
S(a₁₃) = 0.36822313 × 10⁹

The fact that the value of S at level 1 is the largest and the value at level 2 is the
smallest indicates a V-shaped type (type V), in particular pattern 3; applying the
corresponding rule of Table 26.7 gives the second-round levels of a₁ shown in
Table 26.8.

Based on the newly selected levels, we repeat the same calculation as in the first
round. After making the same judgment as in Table 26.7, we set up three new
levels for any coefficients that have not converged sufficiently and move on to the
third round. In this case, all coefficients converged by the fifth round, as
summarized in Table 26.9.
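The sequential-approximation loop can be sketched in a few lines. This is a simplified illustration: it sweeps one coefficient at a time over its three levels instead of evaluating a full L36 array, but the criterion is the same, keep the level with the smallest residual variation S, then narrow the range. All names and the toy data are assumptions for the sketch.

```python
def variation(data, coefs):
    """S: sum of squared deviations of T_i = y_i - sum(a_j * x_ij) about T-bar."""
    T = [y - sum(a * x for a, x in zip(coefs, xs)) for xs, y in data]
    Tbar = sum(T) / len(T)
    return sum((t - Tbar) ** 2 for t in T)

def fit(data, lo, hi, rounds=12):
    coefs = [(a + b) / 2 for a, b in zip(lo, hi)]
    width = [(b - a) / 2 for a, b in zip(lo, hi)]
    for _ in range(rounds):
        for j in range(len(coefs)):
            levels = (coefs[j] - width[j], coefs[j], coefs[j] + width[j])
            coefs[j] = min(levels, key=lambda v: variation(
                data, coefs[:j] + [v] + coefs[j + 1:]))
            width[j] /= 2              # narrow the search range each round
    return coefs

# toy data following y = 100 + 120*x1 + 30*x2 exactly (a0 absorbed by T-bar)
data = [((1, 2), 280), ((2, 1), 370), ((3, 3), 550), ((4, 2), 640), ((2, 4), 460)]
a = fit(data, lo=[0, 0], hi=[200, 100])
print([round(v) for v in a])   # close to [120, 30]
```

Because S is computed about the mean of the residuals, the constant term never enters the search; it is recovered afterward as the averaged T, exactly as in the text.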
Table 26.5
T̄ and S in the first round

(For each of the 36 combinations, the table lists the averaged T̄, equivalent to the
constant term a₀, and the variation S; the rows are not reproduced here.)
As a reference, Tables 26.10 and 26.11 show the fifth-round averaged T̄ and S
values and the level-by-level sums of S, respectively. When all coefficients have
converged, any of the three levels can be used; however, without a special reason
to select a specific level, we adopt level 2. Selecting level 2 for all of
a₁, a₂, ..., a₇, we can express the estimates as follows:

a₁ = 127.5 ± 7.5 minutes
a₂ = 30.0 ± 15.0 minutes
a₃ = 7.5 ± 7.5 minutes
a₄ = 37.5 ± 7.5 minutes
a₅ = 7.5 ± 2.5 minutes
a₆ = 12.5 ± 2.5 minutes
a₇ = 15.0 ± 5.0 minutes    (26.12)

The constant term is estimated as

a₀ = 573.5 minutes    (26.13)

This value is equal to the averaged T̄ in row 2 of Table 26.10, because all levels
in that row are set to level 2.
Table 26.6
Level-by-level sums of S values

Coefficient   Level 1           Level 2           Level 3
a₁            0.39111173×10⁹    0.34771989×10⁹    0.36822313×10⁹
a₂            —                 —                 —
a₃            0.30145602×10⁹    0.35595808×10⁹    0.44964065×10⁹
a₄            0.37578365×10⁹    0.33903175×10⁹    0.39223935×10⁹
a₅            0.32961513×10⁹    0.33368737×10⁹    0.44375224×10⁹
a₆            0.17658955×10⁹    0.24090200×10⁹    0.68956319×10⁹
a₇            0.29273406×10⁹    0.34254936×10⁹    0.47177133×10⁹

(The entries for a₂ could not be recovered.)
Table 26.7
New setting of levels

(For each pattern of the plot of the variation S over the three levels, types I
through VI, the table gives a determinant comparing differences such as
(1/r)(S₃ − S₁), (1/r)(S₂ − S₁), or (1/r)(S₁ − S₂) against F·Vₑ, and the resulting
new levels, expressed in units of the old levels: 0.5, 1, 1.5, 2, 2.5, 3, or 3.5.
Here r = N/3, where N is the size of the orthogonal array; level 1.5 means
creating an intermediate level between levels 1 and 2; values in parentheses
are used when the newly determined values exceed the initial variable range.)

(Sₑ = 9,279,244.98)    (26.14)
Table 26.8
Three levels for second round

Coefficient   Level 1   Level 2   Level 3
a₁            105       120       135
a₂            0         30        60
a₃            0         15        30
a₄            15        30        45
a₅            0         5         10
a₆            0         10        20
a₇            0         10        20

Table 26.9
Three levels for fifth round

Coefficient   Level 1   Level 2   Level 3
a₁            120.0     127.5     135.0
a₂            15.0      30.0      45.0
a₃            0.0       7.5       15.0
a₄            30.0      37.5      45.0
a₅            5.0       7.5       10.0
a₆            10.0      12.5      15.0
a₇            10.0      15.0      20.0
Table 26.10
T̄ and S in the fifth round

(For each of the 36 combinations, the averaged T̄ and the variation S of the fifth
round; the rows are not reproduced here.)
Table 26.11
Level-by-level sums of S values

Coefficient   Level 1           Level 2           Level 3
a₁            0.11687215×10⁹    0.11528044×10⁹    0.11644715×10⁹
a₂            0.11652424×10⁹    0.11450165×10⁹    0.11757385×10⁹
a₃            0.11467273×10⁹    0.11543298×10⁹    0.11849404×10⁹
a₄            0.11843048×10⁹    0.11506510×10⁹    0.11510416×10⁹
a₅            0.11969653×10⁹    0.11441208×10⁹    0.11449113×10⁹
a₆            0.11675750×10⁹    0.11378532×10⁹    0.11805692×10⁹
a₇            0.11660350×10⁹    0.11500287×10⁹    0.11699338×10⁹
Table 26.12
Comparison between least squares method and experimental regression analysis
with different business office data (residuals between estimate and actual y;
the job-count columns x₁-x₇ are not reproduced)

No.   y      Least Squares Method   Experimental Regression
1     2980   596.37                 651.02
2     2431   140.17                 127.48
3     1254   287.43                 112.02
4     1815    19.38                  16.48
5     2452   356.60                 145.98
6     1223   197.47                 200.52
7     1234   441.32                 279.52

Sum of squares       819,226.319    592,554.321
Variance (sum/7)     117,032.331     84,650.617
To check their predictive ability, both equations were applied to data from seven
other business offices, as shown in Table 26.12. This table reveals that the mean
sum of squares of the deviations between estimate and actual value (the variance)
obtained with the least squares method is 1.38 times as large as that obtained with
experimental regression analysis. This demonstrates that the least squares method
does not minimize the sum of squares of residuals for new data, even though it
does so by construction for the data used for coefficient estimation. This is
considered to be caused by the uncertainty of the parameters estimated by the
least squares method. Moreover, if the volume of jobs such as x₁ and x₃ increases
in the future, the equation based on the least squares method will lose its
practicality.
Part X
Design of Experiments
Taguchi's Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright © 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
27
Introduction to Design of Experiments
27.1.
27.2.
27.3.
27.4.

y = f(M, x1, x2, ... , xn)    (27.1)

where y is the output characteristic, M the input signal, and x1, x2, ... , xn the noise factors.
This approach is appropriate for scientific studies. But for engineering studies, where the objective is to develop a good product efficiently at low cost, the

y = M + [f(M, x1, x2, ... , xn) − M]    (27.2)

where M is the useful part and [f(M, x1, x2, ... , xn) − M] is the harmful part. The SN ratio is the ratio of the first term to the second. The larger the ratio, the better the functionality.
The basic difference between the traditional design of experiments and the design of experiments for quality engineering is that the SN ratio is used as an index of functionality instead of a response. In the traditional design of experiments, there is no distinction between control factors and noise factors. Error is considered to vary at random, and its distribution is discussed. But in quality engineering, neither random error nor its distribution is considered.
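As a toy illustration of this idea (not from the handbook), the decomposition of (27.2) into a useful part and a harmful part can be computed directly; the signal values and outputs below are invented for illustration:

```python
# Illustrative (invented) data: input signal M and measured outputs y.
M = [1.0, 2.0, 3.0]
y = [1.02, 1.98, 3.05]

# Power of the useful part, M, and of the harmful part, f(...) - M.
useful = sum(m * m for m in M)
harmful = sum((yi - m) ** 2 for m, yi in zip(M, y))

sn = useful / harmful   # larger ratio -> better functionality
print(useful, harmful, sn)
```

Because the outputs here track the signal closely, the harmful power is tiny and the ratio is large; a noisier system would give a smaller ratio.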
How can one assure that the conclusions from a test piece study in a laboratory
can be reproduced downstream, where the conditions are totally different? Since
the conditions are different, the characteristic (output) to be measured will be
different. Therefore, we have to give up reproducibility of the output characteristic. The characteristic will be adjusted or tuned at the second stage of parameter
design. At the development stage, only improvement of functionality is sought.
That is why the SN ratio is used in parameter design.
By use of the SN ratio, it is expected that the difference in the stability of a
function can be reproduced. However, there is no guarantee of reproducibility.
Therefore, a method to check reproducibility is needed. Orthogonal arrays are
used for this purpose. This is discussed in a later chapter.
Reference
1. Genichi Taguchi et al., 1973. Design of Experiments. Tokyo: Japanese Standards
Association.
28
Fundamentals of Data Analysis
28.1. Introduction   506
28.2. Sum and Mean   506
28.3. Deviation   507
28.4. Variation and Variance   509
28.1. Introduction
In this chapter, the fundamentals of data analysis are illustrated by introducing the
concepts of sum, mean, deviation, variation, and variance. This chapter is based
on Genichi Taguchi et al., Design of Experiments. Tokyo: Japanese Standards Association, 1973.
28.2. Sum and Mean

171.6  156.3  159.2  163.2  160.0  158.7  167.5  175.4  172.8  153.5

The sum, T, of these ten measurements is

T = 171.6 + 156.3 + ... + 153.5 = 1638.2    (28.1)

and the mean is

ȳ = T/10 = 1638.2/10 = 163.82    (28.2)
To simplify the calculation, the sum and mean can be obtained for the data after subtracting a figure that is probably in the neighborhood of the mean. This number is called a working mean. Obviously, the numbers become smaller after subtracting the working mean from each datum; thus the calculation is simplified. Also for simplification, the data after subtracting a working mean could be multiplied by 10, or 100, or 0.1, and so on.
This procedure is called data transformation. The height data after subtracting a working mean of 160.0 cm are:
11.6  −3.7  −0.8  3.2  0.0  −1.3  7.5  15.4  12.8  −6.5

Their total is 38.2, so the mean is recovered from the working mean:

ȳ = 160.0 + 38.2/10 = 163.82    (28.4)
(28.5)
The mean, ȳ, is

ȳ = (1/n)(y1 + y2 + ... + yn)    (28.6)
(28.7)
(28.8)
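The sum, mean, and working-mean shortcut above can be sketched in Python. This is a minimal illustration, not part of the handbook; the fourth datum, 163.2, is inferred here from the printed total 1638.2 and the deviation list:

```python
# Ten height measurements (cm); 163.2 is inferred from the published totals.
heights = [171.6, 156.3, 159.2, 163.2, 160.0, 158.7, 167.5, 175.4, 172.8, 153.5]

T = sum(heights)             # total, as in equation (28.1)
mean = T / len(heights)      # mean, as in equation (28.2)

# The same mean via a working mean of 160.0 (data transformation).
working = 160.0
devs = [h - working for h in heights]
mean2 = working + sum(devs) / len(devs)

print(T, mean, mean2)
```

Both routes give the same mean; the working-mean route simply keeps the intermediate numbers small.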
28.3. Deviation

There are two types of deviations: the deviation from an objective value and the deviation from the mean, ȳ. First, the deviation from an objective value is explained. Let n measurements be y1, y2, ... , yn. When there is a definite objective value, y0, the differences of y1, y2, ... , yn from the objective value, y0, are called the deviations from the objective value.
The following example illustrates calculation of the deviation from an objective value. In order to produce carbon resistors valued at 10 kΩ, 12 resistors were produced for trial. Their resistance was measured to obtain the following values in kilohms:
10.3  9.9  10.5  11.0  10.0  10.3  10.2  10.3  9.8  9.5  10.1  10.6

The deviations from the objective value, 10 kΩ, are:

0.3  −0.1  0.5  1.0  0.0  0.3  0.2  0.3  −0.2  −0.5  0.1  0.6
An objective value is not always limited to the value that a person would assign arbitrarily. Sometimes standard values are used for comparison, such as nominal values (specifications or indications, such as 100 g per container), theoretical values (values calculated from theories or from a standard method used by a
(28.9)
Table 28.1
Observational values y and theoretical values y0

R     V     y      y0     Deviation
10    50    5.10   5.00    0.10
20    20    0.98   1.00   −0.02
20    40    2.04   2.00    0.04
40    30    0.75   0.75    0.00
40    60    1.52   1.50    0.02
28.4. Variation and Variance

When the n measurements y1, y2, ... , yn are deviations from an objective value, the total variation is

ST = y1² + y2² + ... + yn²    (28.10)

The number of independent squares in the total variation, ST, is called the number of degrees of freedom. The number of degrees of freedom in equation (28.10) is therefore n.
The deviations of the resistance of the carbon resistors in Section 28.3 varied from −0.5 to 1.0. To express these different values by a single figure, the sum of the squared deviations is calculated:

ST = (0.3)² + (−0.1)² + (0.5)² + ... + (0.6)²
   = 0.09 + 0.01 + 0.25 + ... + 0.36
   = 2.23    (28.11)
(28.12)
ST = (y1 − ȳ)² + (y2 − ȳ)² + ... + (yn − ȳ)²    (28.13)

There are n squares in equation (28.13); however, its number of degrees of freedom is n − 1. This is because the following linear relationship exists among the n deviations:

(y1 − ȳ) + (y2 − ȳ) + ... + (yn − ȳ) = 0    (28.14)

Generally, the number of degrees of freedom for a sum of n squares is given by n minus the number of relational equations among the deviations before they are squared. There are n squares in the ST equation (28.13), but there is one relational equation, (28.14); therefore, the number of degrees of freedom is n − 1.
ST = (171.6 − 163.82)² + (156.3 − 163.82)² + ... + (153.5 − 163.82)² = 514.396    (28.15)

is the sum of the deviations (from the mean ȳ) squared, with nine degrees of freedom.
To obtain the variation from the arithmetic mean, ȳ, it is easier to calculate by subtracting a working mean from y1, y2, ... , yn, calculating the sum of the squares, and then subtracting the correction factor:

ST = (y1 − ȳ)² + (y2 − ȳ)² + ...
   = y1′² + y2′² + ... + yn′² − (y1′ + y2′ + ... + yn′)²/n    (28.16)

The measurements y1′, y2′, ... , yn′ are the values after subtracting a working mean. It is relatively easy to prove equation (28.16). In equation (28.16),

CF = (y1′ + y2′ + ... + yn′)²/n    (28.17)

is the correction factor. For the height data,

ST = 660.32 − 38.2²/10 = 660.32 − 145.924 = 514.396    (f = 9)    (28.18)
(28.19)
Sm = 38.2²/10 = 145.92    (28.20)

variance = variation / degrees of freedom    (28.21)
Variance signifies the mean of the deviations per unit degree of freedom. It really means the magnitude of error, or mistake, or dispersion, or change per unit. This measure corresponds to work or energy per unit time in physics.
Letting y1, y2, ... , yn be the results of n measurements, and letting the objective value be y0, the variance of these measurements is

V = ST / degrees of freedom    (28.22)

For the carbon resistors,

V = 2.23/12 = 0.186    (28.23)
Sometimes the value above is called the error variance with 12 degrees of freedom.
It is recommended that one carry out this calculation to one more decimal place.
This is shown in equation (28.23), which is calculated to the third decimal place.
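The variance calculation can be sketched as follows. This is an illustration, not from the handbook; the signs of the deviations are reconstructed here from the published sums:

```python
# Deviations of 12 carbon resistors from the 10 kOhm objective value.
devs = [0.3, -0.1, 0.5, 1.0, 0.0, 0.3, 0.2, 0.3, -0.2, -0.5, 0.1, 0.6]

ST = sum(d * d for d in devs)   # total variation, 12 degrees of freedom
V = ST / len(devs)              # variance, as in equation (28.23)

print(round(ST, 2), round(V, 3))  # 2.23 0.186
```

Because the deviations are taken from an objective value, all 12 squares are independent and the divisor is n, not n − 1.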
When the variation is calculated as the sum of the squared deviations from the arithmetic mean, as in the case of the heights, the variance is obtained as the variation divided by its degrees of freedom. Letting the arithmetic mean of y1, y2, ... , yn be ȳ, the variance, V, is then

V = ST / (n − 1)    (28.24)
ȳ = (y1 + y2 + ... + yn)/n    (28.25)

This would then be called the deviation or, more correctly, the estimate of the deviation. When the initial observational values, before subtracting an objective or theoretical value, are always larger than the objective value, the deviation ȳ in (28.25) is a large positive value, and vice versa.
In the case of the carbon resistors discussed earlier, the deviations from the objective value are

0.3  −0.1  0.5  1.0  0.0  0.3  0.2  0.3  −0.2  −0.5  0.1  0.6

The deviation ȳ is

ȳ = (1/12)(0.3 − 0.1 + 0.5 + ... + 0.6) = 2.5/12 = 0.208    (28.26)
It is also recommended that one calculate ȳ to one or two more decimal places. A deviation with a plus value indicates that the resistance values of the resistors are generally larger than the objective value. To express the absolute magnitude of a deviation, no matter whether it is plus or minus, ȳ is squared and then multiplied by n:

magnitude of deviation = n(ȳ)² = (y1 + y2 + ... + yn)²/n    (28.27)

This value signifies the variation of the mean, m, of y1, y2, ... , yn, which are the deviations from an objective value or from a theoretical value.
The following decomposition is therefore derived:

y1² + y2² + ... + yn² = (y1 + y2 + ... + yn)²/n + Se    (28.28)
If the total variation, ST, of the carbon resistances in (28.11) is 2.23, then Sm, the variation of the mean (m), is

Sm = n(ȳ)² = 2.5²/12 = 0.52    (28.30)

The remainder,

Se = ST − Sm = 2.23 − 0.52 = 1.71    (28.31)

is called the error variation. The number of degrees of freedom of the error variation is 12 − 1 = 11.
The error variation divided by its degrees of freedom is denoted by Ve. Using the example of resistance,

Ve = Se / degrees of freedom = 1.71/11 = 0.155    (28.33)
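The decomposition of ST into the variation of the mean and the error variation can be sketched as follows (an illustration, not from the handbook; deviation signs reconstructed from the published sums):

```python
# Resistor deviations from the objective value.
devs = [0.3, -0.1, 0.5, 1.0, 0.0, 0.3, 0.2, 0.3, -0.2, -0.5, 0.1, 0.6]
n = len(devs)

ST = sum(d * d for d in devs)   # total variation, ~2.23
Sm = sum(devs) ** 2 / n         # variation of the mean, ~0.52
Se = ST - Sm                    # error variation, ~1.71
Ve = Se / (n - 1)               # error variance, ~0.155

print(round(Sm, 2), round(Se, 2), round(Ve, 3))
```

Note that ST = Sm + Se holds exactly; only the printed values are rounded.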
29
Introduction to Analysis of Variance
29.1. Introduction   515
29.2. Decomposition of Variation   515
29.3. Analysis-of-Variance Table   521

29.1. Introduction

In this chapter, linear equations of observational data with constant coefficients and the method of calculating their variations are explained. Analysis of variance is also introduced. This chapter is based on Genichi Taguchi et al., Design of Experiments. Tokyo: Japanese Standards Association, 1973.
L = c1y1 + c2y2 + ... + cnyn    (29.1)

Some of the coefficients c1, c2, ... , cn could be zero, but assume that not all the coefficients are zero.
The sum of the squares of these coefficients is denoted by D, which is the number of units comprising the linear equation, L:

D = c1² + c2² + ... + cn²    (29.2)

For example, the mean is a linear equation:

ȳ = (y1 + y2 + ... + yn)/n = (1/n)y1 + (1/n)y2 + ... + (1/n)yn    (29.3)
For the mean, the number of units is

D = (1/n)² + (1/n)² + ... + (1/n)² = 1/n    (29.4)

The square of a linear equation divided by its number of units forms a variation with one degree of freedom. Letting it be SL, then

SL = L²/D = L²/(c1² + c2² + ... + cn²)    (29.6)

For the mean, this is

Sm = ȳ²/(1/n) = (y1 + y2 + ... + yn)²/n    (29.7)

This variation corresponds to the variation Sm, which is the correction factor. Following is an example to clarify linear equations and their variations.
Example 1
There are six Japanese and four Americans with the following heights in centimeters:

A1 (Japanese): 158, 162, 155, 172, 160, 168
A2 (American): 186, 172, 176, 180

The quantity of total variation among the 10 persons is given by the sum of the deviations (from the mean) squared. Since there is neither an objective value nor a theoretical value, a working mean of 170 is subtracted and the total variation among the 10 persons is calculated.

Deviations from the working mean for A1: −12, −8, −15, 2, −10, −2; total: −45
Deviations from the working mean for A2: 16, 2, 6, 10; total: 34

Therefore,

CF = (−45 + 34)²/10 = (−11)²/10 = 12.1    (29.8)

ST = (−12)² + (−8)² + ... + 10² − CF = 937 − 12.1 ≈ 925    (29.9)
Observing the data above, one must understand that the major part of the total variation of heights among the 10 persons is caused by the difference between the Japanese and the Americans. How much of the total variation of heights among the 10 persons, which was calculated to be 925, was caused by the difference between the two nationalities?
Assume that the difference between the mean heights of the Japanese and the Americans is expressed as follows:

L = (mean of Japanese) − (mean of Americans)
  = (170 − 45/6) − (170 + 34/4)
  = −45/6 − 34/4    (29.10)

Usually, the total of the working-mean deviations of the Japanese (A1) is denoted by the same symbol, A1, and that of the Americans (A2) by A2. Thus,

L = A1/6 − A2/4    (29.11)
Now let the heights of the 10 persons be y1, y2, y3, y4, y5, y6 (Japanese) and y7, y8, y9, y10 (Americans). The total variation, ST, is the total variation of the 10 measurements of height y1, y2, ... , y10. Equations (29.10) and (29.11) are linear equations of the 10 observational values y1, y2, ... , y10 with constant coefficients.
Accordingly, the variation SA = SL for equation (29.11) is

SA = SL = (A1/6 − A2/4)² / [(1/6)²(6) + (1/4)²(4)]
        = (−45/6 − 34/4)² / (1/6 + 1/4)
        = [(4)(−45) − (6)(34)]² / [(4²)(6) + (6²)(4)]
        = (−384)²/240 ≈ 614    (29.12)
It is seen that of the total variation among the 10 heights, which in this case is 925, the difference between the Japanese and the Americans accounts for as much as 614. By subtracting 614 from 925, we then identify the magnitude of the individual differences within the same nationality. This difference is called the error variation, denoted by Se:

Se = ST − SA = 925 − 614 = 311    (29.13)

The degrees of freedom are 9 for ST, 1 for SA, and 9 − 1 = 8 for Se. Therefore, the error variance Ve is

Ve = Se/8 = 311/8 = 38.9    (29.14)

The error variance shows the height differences per person within the same nationality. The square root of the error variance (Ve) is called the standard deviation of the individual difference of height, usually denoted by s:

s = √Ve = √38.9 = 6.24 cm    (29.15)
In this way, the total variation of height among the 10 persons, ST, is decomposed or separated into the variation of the difference between the Japanese and the Americans and the variation of the individual differences:

ST = SA + Se    (variation)    (29.16)
9 = 1 + 8    (degrees of freedom)    (29.17)

Since there are 10 persons, there must be 10 individual differences; otherwise, the data are contradictory. In Se, however, there are only eight individual differences. To make the total of 10, one difference is included in the correction factor, CF, and one difference in SA.
In general, one portion of the individual differences is included in SA, which equaled 614. Therefore, the real difference between the nations must be calculated by subtracting a portion of the individual differences (which is the error variance) from SA. The result after subtracting the error variance is called the pure variation between Japanese and Americans, distinguished from the original variation by a prime:

SA′ = SA − Ve = 614 − 38.9 = 575.1    (29.18)

Another portion of the individual differences is also in the correction factor. But in the case of the variation in height, this portion does not provide useful information. (It will be shown in Example 2 that the correction factor can indicate useful information.) For this reason, the total quantity of variation is considered to have nine degrees of freedom.
Accordingly, the pure variation of error, Se′, is calculated by adding the error variance, Ve (which is the unit portion subtracted from SA), to Se:

Se′ = Se + Ve = 311 + 38.9 = 349.9    (29.19)
ST = SA′ + Se′ = 575.1 + 349.9 = 925    (29.20)

Each term of equation (29.20) is divided by ST and then multiplied by 100. This calculation is called the decomposition into degrees of contribution.
In this case, the following relationship is concluded:

100 = ρA + ρe = (575.1/925)(100) + (349.9/925)(100) = 62.2 + 37.8    (29.22)

Equation (29.22) shows that of the total varying quantity of the 10 heights, 925, the difference between the Japanese and American nationalities accounts for 62.2%, and the individual differences constitute 37.8%.
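Example 1 can be reproduced end to end in a short script. This is an illustration, not from the handbook; note that the handbook rounds ST to 925 and SA to 614, while the unrounded values are 924.9 and 614.4:

```python
import math

# Heights of six Japanese (A1) and four Americans (A2), in cm.
A1 = [158, 162, 155, 172, 160, 168]
A2 = [186, 172, 176, 180]
data = A1 + A2
n = len(data)

mean = sum(data) / n
ST = sum((y - mean) ** 2 for y in data)   # total variation, ~924.9

L = sum(A1) / 6 - sum(A2) / 4             # contrast of the two means
SA = L ** 2 / (1 / 6 + 1 / 4)             # between-nationality variation, ~614.4
Se = ST - SA                              # individual differences
Ve = Se / 8                               # error variance
s = math.sqrt(Ve)                         # standard deviation, ~6.2 cm

rho_A = 100 * (SA - Ve) / ST              # ~62.2 %
rho_e = 100 * (Se + Ve) / ST              # ~37.8 %
print(round(rho_A, 1), round(rho_e, 1))
```

The two degrees of contribution sum to 100%, mirroring equation (29.22).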
Example 2
To improve the antiabrasive properties of a product, two versions were manufactured on a trial basis:

A1:
A2:

Six test pieces were prepared from each of the two versions, and wear tests were carried out under identical conditions. The quantities of wear (mg) were as follows:

A1: 26, 18, 19, 21, 15, 29
A2: 15, 8, 14, 13, 16, 9

A1 = 26 + 18 + ... + 29 = 128    (29.23)

A2 = 15 + 8 + ... + 9 = 75    (29.24)

T = A1 + A2 = 203    (29.25)

CF = 203²/12 = 3434    (29.26)
When there is an objective value, the correction factor, CF, is also called the variation of the general mean, denoted by Sm. It signifies the magnitude of the average quantity of abrasion of all the data.
The difference between the averages of A1 and A2, denoted by L, is

L = A1/6 − A2/6 = (A1 − A2)/6    (29.27)

The variation of the difference between A1 and A2, or SA, is calculated from the square of the linear equation L divided by the sum of the squares of the coefficients of L:

SA = SL = L²/(sum of coefficients squared)
        = [(A1 − A2)/6]² / [(1/6)²(6) + (1/6)²(6)]
        = (A1 − A2)²/12
        = (128 − 75)²/12 = 2809/12 = 234    (29.28)

The objective value in this case is zero, so the total variation is equal to the total sum of squares:

ST = 26² + 18² + ... + 9² = 3859    (29.29)
Se = ST − Sm − SA = 3859 − 3434 − 234 = 191    (29.30)

The degrees of freedom are 12 for ST, 1 for the correction factor (CF, or the general mean Sm), 1 for SA, and (12 − 2) = 10 for the error variation Se.
Accordingly, the decomposition of the variation as well as of the degrees of freedom is concluded as follows:

ST = Sm + SA + Se    (variation)    (29.31)
12 = 1 + 1 + 10    (degrees of freedom)    (29.32)

The error variance is

Ve = Se/10 = 191/10 = 19.1    (29.33)

The pure variations are

Sm′ = Sm − Ve = 3434 − 19.1 = 3414.9    (29.34)
SA′ = SA − Ve = 234 − 19.1 = 214.9    (29.35)

Also,

Se′ = Se + 2Ve = 191 + (2)(19.1) = 229.2    (29.36)
ρm = Sm′/ST = 3414.9/3859 = 0.885    (29.37)

ρA = SA′/ST = 214.9/3859 = 0.056    (29.38)

ρe = Se′/ST = 229.2/3859 = 0.059    (29.39)

Therefore,

100 = ρm + ρA + ρe = 88.5 + 5.6 + 5.9 (%)    (29.40)
Table 29.1
ANOVA table

Source   Degrees of Freedom, f   Variation, S   Variance, V   Pure Variation, S′   Degree of Contribution, ρ (%)
m        1                       CF             Vm            Sm′                  ρm
A        1                       SA             VA            SA′                  ρA
e        n − 2                   Se             Ve            Se′                  ρe
Total    n                       ST                           ST                   100.0
Table 29.2
ANOVA table for Example 2

Source   f    S      V      S′       ρ (%)
m        1    3434   3434   3414.9   88.5
A        1    234    234    214.9    5.6
e        10   191    19.1   229.2    5.9
Total    12   3859          3859.0   100.0
These results are summarized in Table 29.1, an analysis of variance (ANOVA) table. The symbols in the table are defined as follows:

Vm = CF/1    (29.41)
VA = SA/1    (29.42)
Sm′ = CF − Ve    (29.43)
SA′ = SA − Ve    (29.44)
Se′ = Se + 2Ve    (29.45)
ρm = (Sm′/ST) × 100    (29.46)
ρA = (SA′/ST) × 100    (29.47)
ρe = (Se′/ST) × 100    (29.48)

The calculating procedures above constitute the analysis of variance. The results for Example 2 are shown in Table 29.2.
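The ANOVA of Example 2 can be sketched as follows. This is an illustration, not from the handbook; the individual data lists are reconstructed here from the published level totals (128 and 75):

```python
# Wear data (mg) for the two versions; objective value is zero.
A1 = [26, 18, 19, 21, 15, 29]    # total 128
A2 = [15, 8, 14, 13, 16, 9]      # total 75
data = A1 + A2
n = len(data)                    # 12

ST = sum(y * y for y in data)            # 3859
Sm = sum(data) ** 2 / n                  # CF = 203**2/12, ~3434
SA = (sum(A1) - sum(A2)) ** 2 / n        # ~234
Se = ST - Sm - SA                        # ~191
Ve = Se / 10                             # ~19.1

rho_m = 100 * (Sm - Ve) / ST             # ~88.5 %
rho_A = 100 * (SA - Ve) / ST             # ~5.6 %
rho_e = 100 * (Se + 2 * Ve) / ST         # ~5.9 %
print(round(rho_m, 1), round(rho_A, 1), round(rho_e, 1))
```

The three degrees of contribution reproduce the last column of Table 29.2.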
30
One-Way Layout
30.1. Introduction   523
30.2. Equal Number of Repetitions   523
30.3. Number of Repetitions Is Not Equal   527

30.1. Introduction

The one-way layout is a type of experiment with one factor with k conditions (or k levels). The number of repetitions of each level may be the same or may be different; they are denoted n1, n2, ... , nk.
Experiments with k = 2 were described in Chapter 29. In this chapter, experiments with k equal to or larger than 3 are described. For a more precise comparison, it is desirable that the number of repetitions in each level be the same. However, if this is not possible, the number of repetitions in each level can vary. This chapter is based on Genichi Taguchi et al., Design of Experiments. Tokyo: Japanese Standards Association, 1973.
Table 30.1
Data with n repetitions

Level of A   Data                    Total
A1           y11, y12, ... , y1n     A1
A2           y21, y22, ... , y2n     A2
...
Aa           ya1, ya2, ... , yan     Aa
Total                                T

Suppose that datum yij denotes the deviation from the objective value, y0. Variations are calculated from A1, A2, ... , Aa:

Sm = (A1 + A2 + ... + Aa)²/an    (f = 1)    (30.1)

SA = (A1² + A2² + ... + Aa²)/n − Sm    (f = a − 1)    (30.2)

ST = y11² + y12² + ...    (f = an)    (30.3)

Se = ST − Sm − SA    [f = a(n − 1)]    (30.4)
The ANOVA table is shown in Table 30.2. The symbols in the table signify the following:

Vm = Sm/1    (30.5)
VA = SA/(a − 1)    (30.6)
Ve = Se/[a(n − 1)]    (30.7)
Sm′ = Sm − Ve    (30.8)
SA′ = SA − (a − 1)Ve    (30.9)
Se′ = Se + aVe    (30.10)
ρm = Sm′/ST    (30.11)
ρA = SA′/ST    (30.12)
ρe = Se′/ST    (30.13)
Table 30.2
ANOVA table

Source   f          S    V    S′    ρ (%)
m        1          Sm   Vm   Sm′   ρm
A        a − 1      SA   VA   SA′   ρA
e        a(n − 1)   Se   Ve   Se′   ρe
Total    an         ST        ST    100.0

(SA + Se)/[(a − 1) + a(n − 1)] = (SA + Se)/(an − 1)    (30.14)
Example
In a pinhole manufacturing process, the order of processing was changed over three levels (combinations of the steps: process the leading hole, process the lower hole to half-depth, ream the lower hole, and ream). The experiment was carried out with 10 repetitions for each level. The roundness of the pinholes was measured, as shown in Table 30.3.

Table 30.3
Data for roundness (μm)

Level   Data                             Total
A1                                       87
A2                                       85
A3      8, 2, 0, 4, 1, 6, 5, 3, 2, 4    35
Total                                    207
There is an objective value (i.e., zero) in this case, so the general mean, m, is tested:

Sm = 207²/30 = 1428    (30.15)

SA = (87² + 85² + 35²)/10 − Sm = 1602 − 1428 = 174    (30.16)

ST = 8² + 2² + ... + 4² = 2131    (30.17)

Se = ST − Sm − SA = 2131 − 1428 − 174 = 529    (30.18)

Ve = Se/27 = 529/27 = 19.6    (30.19)

The pure variations are Sm′ = 1428 − 19.6 = 1408.4, SA′ = 174 − (2)(19.6) = 134.8, and Se′ = 529 + (3)(19.6) = 587.8.    (30.20)

ρm = Sm′/ST = 1408.4/2131 = 0.661    (30.21)

ρA = SA′/ST = 134.8/2131 = 0.063    (30.22)

ρe = Se′/ST = 587.8/2131 = 0.276    (30.23)

The mean of the first level is

Ā1 = A1/n = 87/10 = 8.7    (30.24)
Table 30.4
ANOVA table

Source   f    S      V      S′       ρ (%)
m        1    1428   1428   1408.4   66.1
A        2    174    87     134.8    6.3
e        27   529    19.6   587.8    27.6
Total    30   2131          2131.0   100.0
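The one-way layout with an objective value can be sketched from the level totals alone. This is an illustration, not from the handbook; ST = 2131 is taken from the text, since the individual data for A1 and A2 are not shown here:

```python
# Pinhole roundness: level totals, a = 3 levels, n = 10 repetitions each.
totals = [87, 85, 35]
a, n = 3, 10
N = a * n
ST = 2131                                    # given in the text

Sm = sum(totals) ** 2 / N                    # ~1428
SA = sum(t * t for t in totals) / n - Sm     # ~174
Se = ST - Sm - SA                            # ~529
Ve = Se / (a * (n - 1))                      # ~19.6

Sm_p = Sm - Ve                               # pure variations
SA_p = SA - (a - 1) * Ve
Se_p = Se + a * Ve
print(round(100 * Sm_p / ST, 1), round(100 * SA_p / ST, 1), round(100 * Se_p / ST, 1))
```

The three degrees of contribution match the last column of Table 30.4 (66.1, 6.3, 27.6).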
Table 30.5
ANOVA table

Source   f        S    V    S′    ρ (%)
m        1        Sm   Vm   Sm′   ρm
A        a − 1    SA   VA   SA′   ρA
e        n − a    Se   Ve   Se′   ρe
Total    n        ST        ST    100.0
Similarly,

Ā2 = 85/10 = 8.5    (30.25)

Ā3 = 35/10 = 3.5    (30.26)

30.3. Number of Repetitions Is Not Equal

When the numbers of repetitions n1, n2, ... , na are not equal, the variations are

Sm = (total)²/(n1 + n2 + ... + na)    (f = 1)    (30.28)

SA = A1²/n1 + A2²/n2 + ... + Aa²/na − Sm    (f = a − 1)    (30.29)

Se = ST − Sm − SA    (f = n1 + n2 + ... + na − a)    (30.30)
31
Decomposition to Components with a Unit Degree of Freedom
31.1. Introduction   528
31.2. Comparison and Its Variation   528
31.3. Linear Regression Equation   534
31.4. Application of Orthogonal Polynomials   542

31.1. Introduction

In previous chapters, the total sum of squares was decomposed into the sum of the variations of a correction factor, the main effect of a factor, and the error. This decomposition can be extended by looking at what the researcher feels could be causing the variations; that is, the total sum of squares can be decomposed into the sum of the variations of causes, each with a unit degree of freedom. The latter method is described in this chapter. This chapter is based on Genichi Taguchi et al., Design of Experiments. Tokyo: Japanese Standards Association, 1973.
31.2. Comparison and Its Variation

In the pinhole example of Chapter 30, A1 and A2 would be compared by using the linear equation

L1 = A1/10 − A2/10    (31.1)

However, A3 and the mean of A1 and A2 would be compared by using the following linear equation:

L2 = (A1 + A2)/20 − A3/10    (31.2)
L1 and L2 are linear equations; the sum of the coefficients of A1, A2, and A3 is equal to zero in both L1 and L2:

L1: 1/10 − 1/10 + 0 = 0    (31.3)
L2: 1/20 + 1/20 − 1/10 = 0    (31.4)

Assume that the sums A1, A2, ... , Aa each have the same number of data, b, making up their respective totals.
In a linear equation with constant coefficients of A1, A2, ... , Aa,

L = c1A1 + c2A2 + ... + caAa    (31.5)

the variation of L is

SL = (c1A1 + ... + caAa)² / (c1² + c2² + ... + ca²)b    (31.8)

where SL has one degree of freedom. Calculation and estimation of a contrast are made in exactly the same way.
In two contrasts,

L1 = c1A1 + ... + caAa    (31.9)
L2 = c1′A1 + ... + ca′Aa    (31.10)

the contrasts are orthogonal when the sum of the products of their coefficients is zero:

c1c1′ + c2c2′ + ... + caca′ = 0    (31.11)

Their variations are

SL1 = L1² / (c1² + ... + ca²)b    (31.12)

SL2 = L2² / (c1′² + ... + ca′²)b    (31.13)
Example 1
In the example of pinhole processing,

L1 = A1/10 − A2/10    (31.15)

L2 = (A1 + A2)/20 − A3/10    (31.16)

The sums of the coefficients are zero:

L1: 1/10 − 1/10 + 0 = 0;  L2: 1/20 + 1/20 − 1/10 = 0    (31.17)

and the sum of the products of the coefficients is also zero:

(1/10)(1/20) + (−1/10)(1/20) + (0)(−1/10) = 0    (31.18)

so L1 and L2 are orthogonal contrasts. Their variations are

SL1 = (A1/10 − A2/10)² / [(1/10)²(10) + (1/10)²(10)]
    = (A1 − A2)²/20
    = (87 − 85)²/20 = 0.2    (31.19)

SL2 = [(A1 + A2)/20 − A3/10]² / [(1/20)²(10) + (1/20)²(10) + (1/10)²(10)]
    = (8.6 − 3.5)²/0.15 = 173.4    (31.20)

Thus, the magnitude of the variation in roundness caused by A1, A2, and A3, namely SA, is decomposed into the variation caused by the difference between A1 and A2, namely SL1, and the variation caused by the difference between A3 and the mean of A1 and A2, namely SL2:

SL1 + SL2 = 0.2 + 173.4 = 173.6 ≈ 174 = SA    (31.21)
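The contrast decomposition of Example 1 can be sketched as follows (an illustration, not from the handbook):

```python
# Pinhole level totals, b = 10 data per total.
A1, A2, A3 = 87, 85, 35
b = 10

# Contrast 1: A1 versus A2.
L1 = A1 / 10 - A2 / 10
D1 = (1 / 10) ** 2 * b + (1 / 10) ** 2 * b
SL1 = L1 ** 2 / D1                       # ~0.2

# Contrast 2: mean of A1, A2 versus A3.
L2 = (A1 + A2) / 20 - A3 / 10
D2 = (1 / 20) ** 2 * b + (1 / 20) ** 2 * b + (1 / 10) ** 2 * b
SL2 = L2 ** 2 / D2                       # ~173.4

# SA computed directly from the totals, for comparison.
SA = (A1**2 + A2**2 + A3**2) / 10 - (A1 + A2 + A3) ** 2 / 30
print(round(SL1, 1), round(SL2, 1), round(SL1 + SL2, 1), round(SA, 1))
```

Because the two contrasts are orthogonal, SL1 + SL2 equals SA exactly (173.6 before rounding).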
Example 2
We have the following four types of products:

A1: foreign products
A2:
A3:
A4:

Two, ten, six, and six products were sampled from each type, respectively, and a 300-hour continuous deterioration test was made. The percent deterioration (Table 31.1) was determined as follows:

Sm = 488²/24 = 9923    (31.22)

SA = A1²/2 + A2²/10 + A3²/6 + A4²/6 − Sm    (31.23)
   = 26²/2 + 175²/10 + 147²/6 + 140²/6 − 9923 = 346    (31.24)

Se = ST − Sm − SA = 10,426 − 9923 − 346 = 157    (31.26)
Table 31.1
Percent deterioration

Level   Data     Total
A1      12, 14   26
A2               175
A3               147
A4               140
Total            488
The comparison above can be made by using the following linear equations:

L1 = A1/2 − (A2 + A3 + A4)/22 = 26/2 − 462/22 = −8.0    (31.27)

L2 = A2/10 − (A3 + A4)/12 = 175/10 − 287/12 = −6.4    (31.28)

L3 = (A3 − A4)/6 = (147 − 140)/6 = 1.2    (31.29)
Table 31.2
ANOVA table

Source   f    S        V       S′       ρ (%)
m        1    9,923    9,923   9,915    95.1
A        3    346      115.3   322      3.0
e        20   157      7.85    189      1.8
Total    24   10,426           10,426   100.0
These equations are orthogonal to one another, and also orthogonal to the general mean,

Lm = (A1 + A2 + A3 + A4)/24    (31.30)

For example, for Lm and L1 the sum of the products of the coefficients (counting each datum) is zero:

(1/24)(1/2)(2) + (1/24)(−1/22)(22) = 0    (31.31)

The orthogonality between L1 and L2 is proven by

(1/2)(0)(2) + (−1/22)(1/10)(10) + (−1/22)(−1/12)(12) = 0    (31.32)
After calculating the variations of L1, L2, and L3, we obtain the following decomposition:

SA = SL1 + SL2 + SL3    (31.33)

SL1 = L1²/(sum of the coefficients squared) = (286 − 462)²/264 = 117    (31.34)

SL2 = (1050 − 1435)²/660 = 225    (31.35)

SL3 = (A3/6 − A4/6)² / [(1/6)²(6) + (1/6)²(6)] = (A3 − A4)²/12 = (147 − 140)²/12 = 4    (31.36)
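With unequal numbers of data per level, the number of units of a contrast weights each coefficient by its sample size. A sketch (an illustration, not from the handbook; the helper function name is mine):

```python
# Deterioration example: level totals and sample sizes from Table 31.1.
A = [26, 175, 147, 140]          # level totals A1..A4
n = [2, 10, 6, 6]                # data per level
N = sum(n)                       # 24

Sm = sum(A) ** 2 / N                               # ~9923
SA = sum(a * a / k for a, k in zip(A, n)) - Sm     # ~346

def contrast_variation(c):
    """Variation of a contrast of totals: L**2 / sum(ci**2 * ni)."""
    L = sum(ci * ai for ci, ai in zip(c, A))
    D = sum(ci * ci * ni for ci, ni in zip(c, n))
    return L ** 2 / D

SL1 = contrast_variation([1/2, -1/22, -1/22, -1/22])   # ~117
SL2 = contrast_variation([0, 1/10, -1/12, -1/12])      # ~225
SL3 = contrast_variation([0, 0, 1/6, -1/6])            # ~4
print(round(SL1), round(SL2), round(SL3), round(SA))
```

The three orthogonal contrasts recover SA exactly: 117.3 + 224.6 + 4.1 = 346.0.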
An ANOVA table with the decomposition of A into three components, each with a unit degree of freedom, is shown in Table 31.3.
From the analysis of variance, it has been determined that there is a significant difference between L1 and L2, but none between the other domestic companies. When there is no significance, as in this case, it is better to pool the effect with the error to show the miscellaneous effect. Pooling the effect of L3 with the error, the degree of contribution would then be 1.9%.
The error variance, Ve, calculated from the pooled error variation, is

Ve = (157 + 4)/21 = 7.67    (31.37)

The number of units of L1 is

D1 = (1/2)²(2) + (1/22)²(22) = 1/2 + 1/22 = 12/22 = 6/11    (31.38)

The other confidence intervals were calculated in the same way. Since L3 of equation (31.29) is not significant, it is generally not estimated.
Table 31.3
ANOVA table

Source   f    S        V       S′       ρ (%)
m        1    9,923    9,923   9,915    95.1
A
  L1     1    117      117     109      1.0
  L2     1    225      225     217      2.0
  L3     1    4        4
e        20   157      7.85    185      1.8
Total    24   10,426           10,426   100.0
31.3. Linear Regression Equation

The relationship of the tensile strength, y, to temperature, x, is usually expressed by a linear function:

y = a + bx    (31.39)

The n observational values (x1, y1), (x2, y2), ... , (xn, yn) are put into equation (31.39):

a + bxi = yi    (i = 1, 2, ... , n)    (31.40)

Equation (31.40) is called an observational equation. When the n pairs of observational values are put into the equation, there are n simultaneous equations with two unknowns, a and b.
When the number of equations exceeds the number of unknowns, a solution that perfectly satisfies all of these equations does not exist; however, a solution that minimizes the differences between the two sides of the equations can be obtained. That is, we find the a and b that minimize the residual sum of squares of the differences between the two sides:

Se = (y1 − a − bx1)² + (y2 − a − bx2)² + ... + (yn − a − bxn)²    (31.41)

This method is due to C. F. Gauss and is known as the least squares method. It is obtained by solving the following normal equations:
na + (x1 + x2 + ... + xn)b = y1 + y2 + ... + yn    (31.42)

(x1 + ... + xn)a + (x1² + ... + xn²)b = x1y1 + ... + xnyn    (31.43)

where n is the sum of the squares of the coefficients of a in (31.40); the coefficients are all equal to 1:

n = 1² + 1² + ... + 1²    (31.44)

From these,

b = [n(x1y1 + ... + xnyn) − (x1 + ... + xn)(y1 + ... + yn)] / [n(x1² + ... + xn²) − (x1 + ... + xn)²]    (31.49)

Using the notation

ST(xx) = x1² + x2² + ... + xn² − (x1 + x2 + ... + xn)²/n    (31.52)

ST(xy) = x1y1 + ... + xnyn − (x1 + ... + xn)(y1 + ... + yn)/n    (31.53)

this becomes

b = ST(xy)/ST(xx)    (31.54)

If x1 + x2 + ... + xn = 0, then

a = (y1 + ... + yn)/n = ȳ    (31.56)
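The normal equations can be solved directly. This sketch (an illustration, not from the handbook) uses the temperature and tensile-strength data of the worked example that appears later in this chapter:

```python
# Tensile strength (kg/mm2) measured at temperatures 0, 20, 40, 60 (deg C),
# two test pieces per temperature.
x = [0, 0, 20, 20, 40, 40, 60, 60]
y = [84.0, 85.2, 77.2, 76.8, 67.4, 68.6, 58.2, 60.4]
n = len(x)

Sx, Sy = sum(x), sum(y)
Sxx = sum(xi * xi for xi in x)
Sxy = sum(xi * yi for xi, yi in zip(x, y))

b = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)   # slope, as in (31.49)
a = (Sy - b * Sx) / n                           # intercept, from (31.42)

print(round(a, 2), round(b, 4))  # 84.96 -0.4245
```

The negative slope reflects tensile strength falling as temperature rises.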
Thus, the estimation of the unknowns a and b becomes very simple. For this purpose, equation (31.39) may be rewritten as follows:

y = m + b(x − x̄)    (31.57)

where x̄ is the mean of x1, x2, ... , xn. Such an expansion is called an orthogonal expansion.
The orthogonal expansion has the following meaning: either the general mean, m, or the linear coefficient, b, is estimated from a linear equation of y. Using equation (31.57), the two linear equations are orthogonal; therefore, the magnitudes of their influences are easily evaluated.
In the observational equations

m + b(xi − x̄) = yi    (i = 1, 2, ... , n)    (31.58)

since (x1 − x̄) + (x2 − x̄) + ... + (xn − x̄) = 0, the normal equations become

nm + 0·b = y1 + y2 + ... + yn
0·m + [(x1 − x̄)² + ... + (xn − x̄)²]b = (x1 − x̄)y1 + ... + (xn − x̄)yn    (31.59)

so that

m̂ = (y1 + y2 + ... + yn)/n    (31.60)

b̂ = [(x1 − x̄)y1 + ... + (xn − x̄)yn] / [(x1 − x̄)² + ... + (xn − x̄)²] = ST(xy)/ST(xx)    (31.61)

Not only m̂ but also b̂ is a linear equation of y1, y2, ... , yn, with coefficients

c1 = (x1 − x̄)/ST(xx),  c2 = (x2 − x̄)/ST(xx),  ... ,  cn = (xn − x̄)/ST(xx)    (31.62)
The number of units of b̂ is

D = [(x1 − x̄)/ST(xx)]² + [(x2 − x̄)/ST(xx)]² + ... + [(xn − x̄)/ST(xx)]²
  = [(x1 − x̄)² + (x2 − x̄)² + ... + (xn − x̄)²] / [ST(xx)]²
  = ST(xx)/[ST(xx)]² = 1/ST(xx)    (31.63)

The variations of the general mean and of b are

Sm = (y1 + ... + yn)²/n

Sb = (b̂)²/(number of units) = [ST(xy)/ST(xx)]²/(1/ST(xx)) = ST(xy)²/ST(xx)    (31.64)

Se = y1² + y2² + ... + yn² − Sm − Sb    (31.65)
Example
To observe the change in tensile strength of a product, given a change in temperature, the tensile strength of two test pieces was measured at each of four temperatures, with the following results (kg/mm²):

A1 (0°C): 84.0, 85.2
A2 (20°C): 77.2, 76.8
A3 (40°C): 67.4, 68.6
A4 (60°C): 58.2, 60.4

Subtracting a working mean, 70.0, the data in Table 31.4 are obtained.

CF = 17.8²/8 = 39.60    (31.66)
Assuming that tensile strength changes as a linear function of temperature, A, the observational equations become

y = m + b(A − Ā)    (31.68)

where A signifies temperature and Ā represents the mean of the temperatures used:

Ā = (1/8)[(2)(0) + (2)(20) + (2)(40) + (2)(60)] = 30°C    (31.69)

Table 31.4
Data after subtracting a working mean

Level   Data           Total
A1      14.0, 15.2     29.2
A2      7.2, 6.8       14.0
A3      −2.6, −1.4     −4.0
A4      −11.8, −9.6    −21.4
Total                  17.8
The unknowns are m and b, and there are eight equations. Therefore, the least squares method is used to find m and b:

n = 8,  Σxi = 240,  Σyi = (70.0)(8) + 17.8 = 577.8    (31.72)

ST(xx) = Σ(xi − 30)² = 4000    (31.74)

ST(xy) = Σ(xi − 30)yi = −1698.0    (31.76)

m̂ = 577.8/8 = 72.22    (31.77)

b̂ = −1698.0/4000 = −0.4245    (31.78)
The orthogonality of these equations is proved as follows. Let the eight observational data be y1, y2, ... , y8. Then

m̂ = (y1 + y2 + ... + y8)/8    (31.79)

b̂ = (1/4000)[(−30)(y1 + y2) + (−10)(y3 + y4) + (10)(y5 + y6) + (30)(y7 + y8)]    (31.80)

The coefficients of b̂ are −3/400, −3/400, −1/400, −1/400, 1/400, 1/400, 3/400, 3/400, and the sum of the products of the coefficients of m̂ and b̂ is

(1/8)(−3/400) + (1/8)(−3/400) + ... + (1/8)(3/400) = 0    (31.81)

so m̂ and b̂ are orthogonal. The variations are

Sm = CF = 577.8²/8 = 41,731.60    (31.82)

Sb = ST(xy)²/ST(xx) = (−1698.0)²/4000 = 720.80    (31.83)

ST = 84.0² + 85.2² + ... + 60.4² = 42,457.24    (31.84)

Se = ST − Sm − Sb = 42,457.24 − 41,731.60 − 720.80 = 4.84    (31.85)
Ve = Se/6 = 4.84/6 = 0.807    (31.86)

The pure variations of the general mean and the linear coefficient, b, are

Sm′ = Sm − Ve = 41,731.60 − 0.807 = 41,730.793    (31.87)

Sb′ = Sb − Ve = 720.80 − 0.807 = 719.993    (31.88)

The degrees of contribution are

ρm = 41,730.793/42,457.24 = 98.289%    (31.89)

ρb = 719.993/42,457.24 = 1.696%    (31.90)

ρe = [4.84 + (2)(0.807)]/42,457.24 = 0.015%    (31.91)

100 = ρm + ρb + ρe = 98.289 + 1.696 + 0.015    (31.92)
Table 31.5
ANOVA table

Source   f   S           V           S′           ρ (%)
m        1   41,731.60   41,731.60   41,730.793   98.289
b        1   720.80      720.80      719.993      1.696
e        6   4.84        0.807       6.454        0.015
Total    8   42,457.24               42,457.240   100.000
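The orthogonal-expansion fit and the decomposition behind Table 31.5 can be sketched as follows. This is an illustration, not from the handbook; the text rounds some intermediates (e.g., Sb to 720.80 and Se to 4.84), so the unrounded values here differ in the last digit:

```python
# Tensile strength vs. temperature; two test pieces at each of 0, 20, 40, 60 deg C.
x = [0, 0, 20, 20, 40, 40, 60, 60]
y = [84.0, 85.2, 77.2, 76.8, 67.4, 68.6, 58.2, 60.4]
n = len(x)
xbar = sum(x) / n                                      # 30

m = sum(y) / n                                         # ~72.22
ST_xx = sum((xi - xbar) ** 2 for xi in x)              # 4000
ST_xy = sum((xi - xbar) * yi for xi, yi in zip(x, y))  # ~-1698
b = ST_xy / ST_xx                                      # ~-0.4245

Sm = sum(y) ** 2 / n                                   # ~41,731.6
Sb = ST_xy ** 2 / ST_xx                                # ~720.8
ST = sum(yi * yi for yi in y)                          # ~42,457.24
Se = ST - Sm - Sb                                      # ~4.83
Ve = Se / (n - 2)                                      # ~0.81
print(round(m, 3), round(b, 4), round(Se, 2))
```

The decomposition ST = Sm + Sb + Se holds exactly by construction of the orthogonal expansion.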
31.4. Application of Orthogonal Polynomials

comparisons rather than simply on the theoretical methodologies. Only a researcher or an engineer who is thoroughly knowledgeable about the products or conditions being investigated can know what comparison would be practical. However, the contrasts of orthogonal polynomials (linear, quadratic, ...) are often used.
Let A1, A2, ... , Aa denote the values of the first, second, ... , ath level, respectively. Assuming that the levels of A are equally spaced, the levels are expressed as

A1 = A1
A2 = A1 + h
A3 = A1 + 2h
...
Aa = A1 + (a − 1)h    (31.93)

From each level A1, A2, ... , Aa, r data are taken. The sum of each level is denoted by y1, y2, ... , ya, respectively. When A has levels with the same interval, an orthogonal polynomial, called the orthogonal function of P. L. Chebyshev, is generally used.
The expanded equation is

y = b0 + b1(A − Ā) + b2[(A − Ā)² − (a² − 1)h²/12] + ...    (31.94)

where

Ā = A1 + [(a − 1)/2]h    (31.95)
b0 = (y1 + y2 + ... + ya)/ar    (31.97)

It is easy to calculate b0, since it is the mean of the total data. But it seems troublesome to obtain b1, b2, ... . Actually, it is easy to estimate them by using Table 31.6, which shows the coefficients of orthogonal polynomials when a = 3 (three levels). b1 and b2 are the linear and quadratic coefficients, respectively. These are given by

b1 = (W1A1 + W2A2 + W3A3)/(rSh)    (31.99)

b2 = (W1A1 + W2A2 + W3A3)/(rSh²)    (31.100)

In the equations above, A1, A2, and A3 are used instead of y1, y2, and y3. For b1, the values of W1, W2, and W3 are −1, 0, and 1, and S is 2. b1 is then

b1 = (A3 − A1)/(2rh)    (31.101)

Similarly,

b2 = (A1 − 2A2 + A3)/(2rh²)    (31.102)
Table 31.6
Coefficients of orthogonal polynomials for three levels

Coefficient   W1   W2   W3   S
b1            −1    0    1   2
b2             1   −2    1   3
The variations of b1 and b2 are denoted by SAl and SAq, respectively. These are given by the squares of equations (31.101) and (31.102) divided by their numbers of units, respectively:

SAl = (A1 − A3)²/2r    (31.103)

SAq = (A1 − 2A2 + A3)²/6r    (31.104)

The denominator of equations (31.103) and (31.104) is r times the sum of the squares of the coefficients W. The effective number of replications, ne, is given by

for b1:  ne = rSh²    (31.105)

for b2:  ne = rSh⁴    (31.106)

In general,

bi = (W1A1 + ... + WaAa)/(rShⁱ)    (31.107)

SAi = (W1A1 + ... + WaAa)²/(W1² + ... + Wa²)r    (31.108)
Example
To observe the change in the tensile strength of a synthetic resin due to changes in temperature, the tensile strength of five test pieces was measured at each of A1 = 5°C, A2 = 20°C, A3 = 35°C, and A4 = 50°C, to get the results in Table 31.7. The main effect A is decomposed into linear, quadratic, and cubic components to find the correct order of polynomial to be used. From Table 31.8, find the coefficients for the number of levels k = 4.

Table 31.7
Tensile strength (kg/mm²)

Level        Data   Total
A1 (5°C)            233
A2 (20°C)           209
A3 (35°C)           190
A4 (50°C)           172
Table 31.8
Orthogonal polynomials with equal intervals
(Coefficients W1, W2, ... and the constants S for the components b1 through b5, for numbers of levels k = 2 through k = 13; the entries of this multi-page table are not recoverable from this copy.)
SAl = (-3A1 - A2 + A3 + 3A4)²/[(5)(20)] = (172)²/100 = 296    (31.109)

SAq = (A1 - A2 - A3 + A4)²/[(5)(4)] = 1    (31.110)

SAc = (-A1 + 3A2 - 3A3 + A4)²/[(5)(20)] = 0    (31.111)

ST = 348  (f = 19)    (31.112)

It is known from Table 31.9 that only the linear term of A is significant; hence, the relationship between temperature, A, and tensile strength, y, can be deemed to be linear within the range of our experimental temperature changes. The estimate of the linear coefficient of temperature, b1, is

b1 = (-3A1 - A2 + A3 + 3A4)/[(5)(10)(15)] = -0.23    (31.113)
Table 31.9
ANOVA table

Source     f     S     V      ρ (%)
Linear     1     296   296    84.2
Quadratic  1     1     1
Cubic      1     0     0
e          16    51    3.2
(e)        (18)  (52)  (2.9)  (15.8)
Total      19    348          100.0

Adding back the working mean, the estimated relationship between temperature and tensile strength is

ŷ = 794/20 - 0.23(A - 27.5)
In general, for the ith-order coefficient (i = 1: linear; i = 2: quadratic; ...),

bi = (W1A1 + ... + WaAa)/[r(λS)h^i]    (31.114)

Var(bi) = σ²/[r(λ²S)h^(2i)],    Sbi = (W1A1 + ... + WaAa)²/[r(λ²S)]    (31.115)
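Equations (31.114) and (31.115) can be wrapped in a small helper. This sketch is not from the handbook; the coefficients W and the value λS are supplied from a table such as Table 31.8 (here the standard four-level linear row), and the level totals are made-up values.

```python
# Sketch of equations (31.114)-(31.115): the ith-order coefficient and
# its variation, computed from a contrast of level totals.
# W and lam_S come from a table of orthogonal polynomials; A holds the
# level totals (illustrative values below); r is the number of
# observations per level; h is the level interval.

def poly_coefficient(W, A, r, lam_S, h, i):
    contrast = sum(w * a for w, a in zip(W, A))
    lam2_S = sum(w * w for w in W)            # lambda^2 * S = sum of W^2
    b = contrast / (r * lam_S * h**i)         # equation (31.114)
    S_b = contrast**2 / (r * lam2_S)          # variation of b_i, (31.115)
    return b, S_b

# Four-level linear contrast: W = (-3, -1, 1, 3), lam_S = 10.
b1, S_b1 = poly_coefficient(W=(-3, -1, 1, 3), A=(1, 2, 4, 8),
                            r=1, lam_S=10, h=1, i=1)
```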
32
Two-Way Layout

32.1. Introduction  552
32.2. Factors and Levels  552
32.3. Analysis of Variance  555
32.1. Introduction
Let us discuss how to determine a way of increasing the production yield of a chemical product. Many factors might affect the yield of the product; in this case we discuss only an experiment designed for determining the catalyst quantity and synthesis temperature that give a high yield. This chapter is based on Genichi Taguchi et al., Design of Experiments. Tokyo: Japanese Standards Association, 1973.
32.2. Factors and Levels
The experiment is carried out as follows: First, the ranges of temperature and catalyst quantity are fixed, and two or three temperatures and catalyst quantities within the ranges are selected. For example, when the temperature range is between 200 and 300°C, the following temperatures may be selected: A1 = 200°C, A2 = 225°C, A3 = 250°C, A4 = 275°C, and A5 = 300°C.
After the temperature range is fixed, the remaining problem is to decide the number of points within the range to be used. The number of points needed depends on the complexity of the curve of temperature plotted against the yield. As in the experiment described above, three to five points are usually chosen, because the curve of temperature against yield normally has a mountainous shape or is a simple smooth curve. If there is a possibility of a curve with two or more mountains, more temperature points must be chosen.
Suppose that we have the range 200 to 300°C and take five points. In this case we define the temperature at five different levels at equal intervals. In the same way, levels are chosen for the catalyst quantity factor. If it is known before the experiment begins that the catalyst quantity must be increased when the temperature is increased, the range of catalyst quantity should be adjusted for each temperature level.
This is not likely to happen in the relationship between temperature and catalyst quantity, but it obviously does happen in the case of temperature and reaction time. This particular phenomenon often happens with various related factors of chemical reactions: for example, the reaction of hydrogen and oxygen to produce water,

2H2 + O2 → 2H2O

where two molecules of hydrogen and one molecule of oxygen react. Consequently, the quantity of oxygen is in the neighborhood of half the chemical equivalent of hydrogen.
To find the effect caused by different flows per unit of time, suppose that the quantity of H2 is varied as A1, A2, and A3. The quantity of oxygen is then set at three levels, B1, B2, and B3.
Why are levels B2 and B3 necessary? To minimize the productivity loss caused by the amount of unreacted material, we want to investigate how much excess oxygen can be added to increase profitability maximally, since oxygen happens to be cheaper than hydrogen. (The purpose of setting three levels for the quantity of hydrogen is to investigate the effect of hydrogen flow.)
In this particular experiment, we assume that it is not necessary to forecast the temperature range for each distinct catalyst quantity; therefore, the range of catalyst quantity can be independent of temperature. Suppose that the range is 0.2 to 0.8% and that four levels are chosen at equal intervals. Thus, B1 = 0.2%, B2 = 0.4%, B3 = 0.6%, and B4 = 0.8%.
Table 32.1
Experiment with a two-way layout

Experiment  Combination  Temperature  Catalyst      Yield
No.         of A and B   (°C)         Quantity (%)  (%)
1           A1B1         200          0.2           64
2           A1B2         200          0.4           65
3           A1B3         200          0.6           76
4           A1B4         200          0.8           64
5           A2B1         225          0.2           67
6           A2B2         225          0.4           81
7           A2B3         225          0.6           82
8           A2B4         225          0.8           91
9           A3B1         250          0.2           76
10          A3B2         250          0.4           81
11          A3B3         250          0.6           88
12          A3B4         250          0.8           90
13          A4B1         275          0.2           76
14          A4B2         275          0.4           84
15          A4B3         275          0.6           83
16          A4B4         275          0.8           92
17          A5B1         300          0.2           73
18          A5B2         300          0.4           80
19          A5B3         300          0.6           84
20          A5B4         300          0.8           91
In analyzing the data of Table 32.1, we want to answer the following questions:
1. How much influence on the yield will be caused when the temperature or catalyst quantity changes?
2. What is the appearance of the curve showing the relation of temperature and yield, or catalyst quantity and yield?
3. Which temperature and what catalyst quantity will give the highest yield?
An investigation of these questions follows.

32.3. Analysis of Variance
First, the correction factor is calculated:

CF = (total of all data)²/(number of data)    (32.1)

which is subtracted from the total sum of squares to get the total variation, ST; then ST is decomposed. First, a working mean, 80, is subtracted from each result in Table 32.1. The subtracted data are shown in Table 32.2.
Table 32.2
Subtracted data

       B1   B2   B3  B4   Total
A1     -16  -15  -4  -16  -51
A2     -13    1   2   11    1
A3      -4    1   8   10   15
A4      -4    4   3   12   15
A5      -7    0   4   11    8
Total  -44   -9  13   28  -12
CF = (-12)²/20 = 7  (f = 1)    (32.2)

ST = (-16)² + (-15)² + ... + 11² - CF = 1600 - 7 = 1593  (f = 19)    (32.3)

SA = (A1² + A2² + ... + A5²)/4 - CF = [(-51)² + 1² + 15² + 15² + 8²]/4 - 7 = 772  (f = 4)    (32.4)
The mean at each level of A is

Ā1 = A1/4 = -51/4 = -12.75
Ā2 = A2/4 = 1/4 = 0.25
Ā3 = A3/4 = 15/4 = 3.75
Ā4 = A4/4 = 15/4 = 3.75
Ā5 = A5/4 = 8/4 = 2.00    (32.5)

where A1, A2, A3, A4, and A5 indicate the totals of the results under conditions A1, A2, A3, A4, and A5.
Figure 32.1 shows the figures obtained after adding back the working mean, 80. The line T, which is 79.4, shows the average value of all experiments. Let ai be the difference between the mean at Ai and T:

ai = Ai/4 - T/20  (i = 1, 2, 3, 4, 5)    (32.6)

Figure 32.1
Effect of A

If all five points were on line T, everyone would agree that within this range there is no effect of changes in temperature on yield. It is therefore natural to measure the effect by the sum

a1² + a2² + ... + a5²    (32.7)

This is known as the effect of the average curve. Since there are four repetitions for each temperature, four times the magnitude of equation (32.7) is the effect of temperature A, which constitutes part of the total variation of the data. Therefore, SA, the effect of temperature A, is
SA = 4[(a1)² + (a2)² + (a3)² + (a4)² + (a5)²]
   = 4[(A1/4 - T/20)² + (A2/4 - T/20)² + ... + (A5/4 - T/20)²]    (32.8)

Expanding, and using T = A1 + A2 + A3 + A4 + A5,

SA = (A1² + A2² + ... + A5²)/4 - T²/20    (32.9)

Numerically,

SA = 3116/4 - (-12)²/20 = 772    (32.10)
In the calculation of SA, (A1² + A2² + ... + A5²) is divided by 4, since each of A1, A2, ..., A5 is the total of four observations. Generally, in such calculations of variation, each squared value is divided by its number of units, which is the total of the squares of the coefficients of its linear equation. Since A1 is the sum of four observational values, A1 is expressed by a linear equation of four observational values, where each value has a coefficient of 1. Therefore, A1² must be divided by 4, the number of units of the linear equation, or the total of the squares of each coefficient, 1.
Similarly,

SB = (B1² + B2² + B3² + B4²)/5 - CF
   = [(-44)² + (-9)² + 13² + 28²]/5 - 7 = 587  (f = 3)    (32.11)
All other factor-related effects (including the interaction A × B) are evaluated by subtracting SA and SB from ST, the total variation of all data. The residual sum of squares, Se, is called the error variation:

Se = ST - SA - SB = 1593 - 772 - 587 = 234  (f = 19 - 4 - 3 = 12)    (32.12)
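The decomposition in equations (32.2) through (32.12) can be reproduced directly from Table 32.1. The sketch below is not from the handbook; it carries one more decimal than the text, which rounds to whole numbers.

```python
# Variance decomposition of the Table 32.1 yields, equations (32.2)-(32.12).
yields = [
    [64, 65, 76, 64],   # A1 (200 C), columns B1..B4
    [67, 81, 82, 91],   # A2 (225 C)
    [76, 81, 88, 90],   # A3 (250 C)
    [76, 84, 83, 92],   # A4 (275 C)
    [73, 80, 84, 91],   # A5 (300 C)
]
data = [[y - 80 for y in row] for row in yields]      # subtract working mean 80
T = sum(sum(row) for row in data)                     # total of subtracted data
CF = T**2 / 20                                        # correction factor, (32.2)
ST = sum(y * y for row in data for y in row) - CF     # total variation, (32.3)
SA = sum(sum(row)**2 for row in data) / 4 - CF        # temperature effect, (32.4)
SB = sum(sum(col)**2 for col in zip(*data)) / 5 - CF  # catalyst effect, (32.11)
Se = ST - SA - SB                                     # error variation, (32.12)
```

The text's rounded values 7, 1593, 772, 587, and 234 correspond to CF = 7.2, ST = 1592.8, SA = 771.8, SB = 586.8, and Se = 234.2.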
There are four unknown items in SA; its degrees of freedom are 4. If all the items are negligible, or if the effect of temperature A is negligible, SA will simply be about equal to 4Ve.
There are 4 units of error variance in SA. Therefore, the net effect of A, or the pure variation S'A, is estimated by

S'A = SA - 4Ve    (32.13)

Therefore, with Ve = 234/12 = 19.5,

S'A = 772 - (4)(19.5) = 694.0    (32.14)

S'B = SB - 3Ve = 587 - (3)(19.5) = 528.5    (32.15)
The total effect of all factors, including A and B but excluding the general mean, is shown by ST, which is 1593. The magnitude of all factor-related effects excluding A and B is then

S'e = (total variation) - (pure variation of A) - (pure variation of B)
    = 1593 - 694.0 - 528.5 = 370.5    (32.16)

The degree of contribution of A is

ρA = (pure variation of A)/(total variation) = (SA - 4Ve)/ST = 694/1593 = 0.436 = 43.6%    (32.17)
That is, of the yield variation ranging from 64 to 92%, 43.6% is attributable to the change of temperature. Similarly,

ρB = 528.5/1593 = 33.2%    (32.18)

ρe = 370.5/1593 = 23.2%    (32.19)
There are similarities between the analysis of variance and spectrum analysis. The total amount of variation, ST, corresponds to the total power of the wave in spectrum analysis. In the latter, total power is decomposed into the power of each component with different cycles. Similarly, in analysis of variance, total variation is decomposed into a cause system. The ratio of the power of each component to total power is determined as the degree of contribution and is expressed as a percentage. The components for the purpose of our investigation are factor A, factor B, and other factors that change without artificial control. In the design of experiments, the cause system is designed to be orthogonal. Accordingly, calculation of the analysis of variance is easy and straightforward (Table 32.3).
Degrees of contribution can be explained as follows: In the case of a wave, when the amplitude of oscillation doubles, its power increases fourfold. When the power is reduced to half, its amplitude is 1/√2. In Table 32.1, the minimum value is 64 and the maximum value is 92.
We found from the analysis above that temperature and the quantity of the catalyst do indeed influence yield. What we want to know next is the relationship between temperature and yield, and between the quantity of catalyst and yield.
Table 32.3
Analysis of variance

Source  f   S     V     E(V)         S'     ρ (%)
A       4   772   193   σ² + 4σA²    694.0  43.6
B       3   587   196   σ² + 5σB²    528.5  33.2
e       12  234   19.5  σ²           370.5  23.2
Total   19  1593                     1593   100.0
The yield at each temperature is estimated by adding back the working mean:

200°C:  -51/4 + 80 = 67.25
225°C:  1/4 + 80 = 80.25
250°C:  15/4 + 80 = 83.75
275°C:  15/4 + 80 = 83.75
300°C:  8/4 + 80 = 82.00    (32.20)

Plotting these five points, a curve showing the relation of temperature and yield is obtained, as shown in Figure 32.2. It is important to stress here that these values show the main effect of temperature on yield. The main effect curve shown in Figure 32.2 is the curve of the average values at B1, B2, B3, and B4.
Next, we want to find the temperature and quantity of catalyst that will contribute to an increase in yield. This procedure is actually the determination of optimum conditions. These conditions are normally found by observing the graphs in Figures 32.2 and 32.3. The best temperature is the highest point of the curve. But the best temperature must result in the highest profit, and that may not necessarily mean the highest yield.
Actually, the heavy-oil consumption that may be necessary to maintain different temperatures is investigated first; this consumption is then converted to the same cost per unit as that of the yield, and those units are then plotted against temperature. The temperature with the largest difference between yield and fuel cost is read from the abscissa, and it is then established as the operating standard.
From Figure 32.2 it is clear that the maximum value probably lies in the neighborhood of 250 to 275°C. If the value does not vary significantly, the lower temperature, 250°C, is the best temperature for our discussion. Similarly, the catalyst quantity that will give us the highest yield is determined from Figure 32.3. Let us assume that 0.8% is determined to be the best quantity.
Figure 32.2
Relationship between temperature and yield

Figure 32.3
Relationship between catalyst and yield
Our next step is to find the actual value of the yield when production is continued at 250°C using a 0.8% catalyst quantity. To do this, we need to calculate the process average, expressed by the following equation, where μ̂ is the estimate of the process average of yield at A3B4:

μ̂ = (mean at A3) + (mean at B4) - T = 83.75 + 85.6 - 79.4 = 89.95    (32.21)
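As a sketch (not from the handbook), the estimate of equation (32.21) follows from the level means already computed:

```python
# Process-average estimate at A3B4, equation (32.21):
# mu_hat = (mean at A3) + (mean at B4) - (overall mean).
A3_mean = 80 + 15 / 4    # 83.75, from the A3 total of Table 32.2
B4_mean = 80 + 28 / 5    # 85.6, from the B4 total of Table 32.2
T_mean = 80 - 12 / 20    # 79.4, overall mean
mu_hat = A3_mean + B4_mean - T_mean
```

The estimated yield at 250°C with 0.8% catalyst is therefore about 90%.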
33
Two-Way Layout with Decomposition

33.1. Introduction  563
33.2. One Continuous Variable  563
33.3. Two Continuous Variables  567

33.1. Introduction
When a factor in an experiment is a continuous variable, such as temperature or time, it is often necessary to know the linear or quadratic effects of the factor. For this purpose, the orthogonal polynomial equation is used. In this chapter, methods of analysis for one or two continuous variables in two-way layout experiments are shown. This chapter is based on Genichi Taguchi et al., Design of Experiments. Tokyo: Japanese Standards Association, 1973.

33.2. One Continuous Variable
Table 33.1
Percent elongation

    B1  B2  B3  B4
A1  15  31  47  62
A2  11  20  37  45
A3  34  42  49  54
The procedure for this analysis is as follows: First, subtract a working mean of 40 from all data (Table 33.2). The factorial effect B is decomposed into components of linear, quadratic, and cubic effects. For this purpose, coefficients of the four-level orthogonal polynomial are used:

SBl = [(-3)(-60) + (-1)(-27) + (1)(13) + (3)(41)]²/[(3)(20)]
    = 343²/60 = 117,649/60 = 1961    (33.1)

SBq = [(-60) - (-27) - 13 + 41]²/[(3)(4)] = (-5)²/12 = 25/12 = 2    (33.2)

SBc = [(-1)(-60) + (3)(-27) + (-3)(13) + (1)(41)]²/[(3)(20)] = (-19)²/60 = 6.0    (33.3)
Table 33.2
Data after subtracting working mean, 40

       B1   B2   B3  B4  Total
A1     -25  -9    7  22   -5
A2     -29  -20  -3   5  -47
A3      -6   2    9  14   19
Total  -60  -27  13  41  -33
The variation of B as a whole is

SB = [(-60)² + (-27)² + 13² + 41²]/3 - (-33)²/12 = 6179/3 - 91 = 1969  (f = 3)    (33.4)

which agrees with the sum of the components:

SBl + SBq + SBc = 1961 + 2 + 6 = 1969    (33.5)

The interaction of A with the linear effect of B is obtained by comparing the linear contrasts of B at each level of A:

SABl = [L(A1)² + L(A2)² + L(A3)²]/20 - [L(A1) + L(A2) + L(A3)]²/[(20)(3)]    (33.7)

where

L(A1) = (-3)(-25) + (-1)(-9) + 7 + (3)(22) = 157
L(A2) = (-3)(-29) + (-1)(-20) + (-3) + (3)(5) = 119
L(A3) = (-3)(-6) + (-1)(2) + 9 + (3)(14) = 67    (33.8)

SABl = 43,299/20 - 117,649/60 = 2164.95 - 1960.82 = 204  (f = 2)    (33.9)
SA = [(-5)² + (-47)² + 19²]/4 - 91 = 2595/4 - 91 = 558  (f = 2)    (33.10)

The total variation, ST, and error variation, Se, are calculated:

ST = (-25)² + (-9)² + ... + 14² - (-33)²/12 = 2831 - 91 = 2740  (f = 11)    (33.11)

Se = ST - SA - SB - SABl = 2740 - 558 - 1969 - 204 = 9  (f = 4)    (33.12)
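The orthogonal-polynomial decomposition of equations (33.1) through (33.9) can be verified in a few lines. This sketch is not from the handbook; it reproduces the values above from the Table 33.2 data.

```python
# Decomposition of B into linear/quadratic/cubic components and the
# A x B(linear) interaction, equations (33.1)-(33.9), for Table 33.2.
d = [[-25, -9, 7, 22],    # A1 (data minus working mean 40)
     [-29, -20, -3, 5],   # A2
     [-6, 2, 9, 14]]      # A3
B_tot = [sum(col) for col in zip(*d)]        # -60, -27, 13, 41
Wl, Wq, Wc = (-3, -1, 1, 3), (1, -1, -1, 1), (-1, 3, -3, 1)

def contrast(W, t):
    return sum(w * x for w, x in zip(W, t))

r = 3                                        # repetitions per B level
S_Bl = contrast(Wl, B_tot)**2 / (r * 20)     # (33.1): 343^2/60
S_Bq = contrast(Wq, B_tot)**2 / (r * 4)      # (33.2): (-5)^2/12
S_Bc = contrast(Wc, B_tot)**2 / (r * 20)     # (33.3): (-19)^2/60
L = [contrast(Wl, row) for row in d]         # 157, 119, 67
S_ABl = sum(x * x for x in L) / 20 - sum(L)**2 / 60   # (33.7)-(33.9)
```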
Table 33.3
Analysis of variance

Source  f    S     V      S'    ρ (%)
A       2    558   279    552   20.1
Bl      1    1961  1961   1958  71.5
Bq      1    2     2
Bc      1    6     6
ABl     2    204   102    198   7.3
e       4    9     2.25
(e)     (6)  (17)  (2.83) 32    1.1
Total   11   2740         2740  100.0
Next, an estimation is made for the significant effects. The main effect of A at each level is

Ā1 = -5/4 + 40 = 38.75    (33.13)

Ā2 = -47/4 + 40 = 28.25    (33.14)

Ā3 = 19/4 + 40 = 44.75    (33.15)
This shows that the elongation of A3 is the highest. Next, the change of elongation due to changes in temperature is calculated. B is significant, as is ABl, so the linear coefficient at each level of A is calculated:

b1(A1) = [(-3)A1B1 + (-1)A1B2 + A1B3 + (3)A1B4]/[r(λS)h]
       = 157/[(1)(10)(15)] = 1.05    (33.16)

b1(A2) = 119/[(1)(10)(15)] = 0.79    (33.17)

b1(A3) = 67/[(1)(10)(15)] = 0.45    (33.18)

From the results above, it is known that the change due to temperature is the smallest at A3.
We can now conclude that additive A3 gives the highest elongation and the least change of elongation due to a change in temperature. If cost is not considered, A3 is the best additive.
33.3. Two Continuous Variables
When both A and B are continuous variables, the two-way data are fitted with the orthogonal polynomial

ŷ = m + a1(A - Ā) + a2[(A - Ā)² - ((a² - 1)/12)hA²]
      + a3[(A - Ā)³ - ((3a² - 7)/20)(A - Ā)hA²]
      + b1(B - B̄) + b2[(B - B̄)² - ((b² - 1)/12)hB²]
      + b3[(B - B̄)³ - ((3b² - 7)/20)(B - B̄)hB²]
      + c11(A - Ā)(B - B̄) + ...    (33.19)

where a and b are the numbers of levels of A and B, and hA and hB are the intervals of the levels of A and B.
The magnitude of each term is evaluated in the analysis of variance; then a
polynomial is written only from the necessary terms. Letting the linear, quadratic,
Table 33.4
Tensile strength data (kg/cm²)

    B1    B2    B3    B4
A1  64.9  62.6  61.1  59.2
A2  69.1  70.1  66.8  63.6
A3  76.1  74.0  71.3  67.2
A4  82.9  80.0  76.0  72.3
Table 33.5
Supplementary table [(y - 70) × 10]

       B1   B2   B3   B4    Total
A1     -51  -74  -89  -108  -322
A2     -9    1   -32  -64   -104
A3      61   40   13  -28     86
A4     129  100   60   23    312
Total  130   67  -48  -177   -28
and cubic terms of A be SAl, SAq, and SAc, and defining the term of the product of linear effects of A and B as SAlBl, the calculations are then made as follows:

SAl = [(-3)(-322) + 104 + 86 + (3)(312)]²/[(4)(20)]
    = 2092²/80 = 4,376,464/80 = 54,706    (33.20)

SAq = [(-322) + 104 - 86 + 312]²/[(4)(4)] = 8²/16 = 4    (33.21)

SAc = [322 + (3)(-104) - (3)(86) + 312]²/[(4)(20)] = 64²/80 = 51    (33.22)

SBl = [(-3)(130) - 67 + (-48) + (3)(-177)]²/[(4)(20)]
    = (-1036)²/80 = 1,073,296/80 = 13,416    (33.23)

SBq = [130 - 67 - (-48) + (-177)]²/[(4)(4)] = (-66)²/16 = 272    (33.24)

SBc = [(-1)(130) + (3)(67) - (3)(-48) + (-177)]²/[(4)(20)] = 38²/80 = 18    (33.25)

SAlBl = (-3L1 - L2 + L3 + 3L4)²/{[(-3)² + (-1)² + 1² + 3²](20)}
      = (-612)²/400 = 374,544/400 = 936    (33.27)

where Li is the linear contrast of B within level Ai.

CF = (-28)²/16 = 49    (33.28)

ST = (-51)² + (-74)² + ... + 23² - CF = 69,908 - 49 = 69,859  (f = 15)    (33.29)

The result of the calculation of SA is checked by adding SAl, SAq, and SAc. It means that the differences between A1, A2, A3, and A4 are decomposed into the components of a polynomial to achieve more analytical meaning. If necessary, the term SAqBl, corresponding to Aq × Bl, can be calculated. If it were calculated, we would find it to be very small, since it is merely a component of Se.
Table 33.6
Analysis of variance

Source       f     S       V       S'        ρ (%)
A  Linear    1     54,706  54,706  54,639.3  78.2
   Quadratic 1     4       4
   Cubic     1     51      51
B  Linear    1     13,416  13,416  13,349.3  19.1
   Quadratic 1     272     272
   Cubic     1     18      18
AlBl         1     936     936     869.3     1.2
e            8     456     57.0
(e)          (12)  (801)   (66.7)  1,001.1   (1.4)
Total        15    69,859          69,859    100.0
The result of the decomposition of variation and the analysis of variance are summarized in Table 33.6. The degree of contribution of A is calculated after pooling the insignificant effects, SAq, SAc, SBq, and SBc, with Se:

ρA = (SAl - Ve)/ST = (54,706 - 66.7)/69,859 = 78.2%    (33.30)

ρBl = (SBl - Ve)/ST = (13,416 - 66.7)/69,859 = 19.1%    (33.31)

In this calculation, remember to convert the values of the four unknowns, m, a1, b1, and c11, to their original units.
m = (working mean) + (1/10)(T/16) = 70.0 + (1/10)(-28/16) = 69.82    (33.32)

a1 = (-3A1 - A2 + A3 + 3A4)/[(10)r(λS)hA]
   = [(-3)(-322) + 104 + 86 + (3)(312)]/[(10)(4)(10)(10)] = 2092/4000 = 0.523    (33.33)

b1 = (-3B1 - B2 + B3 + 3B4)/[(10)r(λS)hB]
   = -1036/[(10)(4)(10)(50)] = -0.0518    (33.34)

c11 = (-3L1 - L2 + L3 + 3L4)/[(10)(λS)hA(λS)hB]
    = [(-3)(610) - 561 + 492 + (3)(429)]/[(10)(10)(10)(10)(50)] = -612/500,000 = -0.00122    (33.35)

The fitted polynomial is then

ŷ = 69.82 + 0.523(A - Ā) - 0.0518(B - B̄) - 0.00122(A - Ā)(B - B̄)    (33.36)
34
Two-Way Layout with Repetition

34.1. Introduction  573
34.2. Same Number of Repetitions  573
34.3. Different Numbers of Repetitions  578

34.1 Introduction
In this chapter we describe experiments with repetitions. This chapter is based on Genichi Taguchi et al., Design of Experiments. Tokyo: Japanese Standards Association, 1973.

34.2. Same Number of Repetitions
The data of Table 34.1 are the bouncing heights of two golf balls, A1 and A2, each measured twice at four temperatures, B1 through B4. A working mean of 105.0 is subtracted from the data (Table 34.2). The correction factor and total variation are

CF = (27.0)²/16 = 45.56  (f = 1)    (34.1)

ST = (-6.0)² + (-6.8)² + ... + 11.0² - CF = 833.50  (f = 15)    (34.2)
Table 34.1
Bouncing height of golf balls (cm)

    B1          B2            B3            B4
A1  99.0, 98.2  105.1, 104.6  110.3, 112.8  114.5, 116.1
A2  96.1, 95.2  101.6, 102.4  109.8, 108.2  117.1, 116.0
Next, the variation between the combinations of A and B, ST1, is obtained. ST1 can also be called either the variation between experiments (there are eight experimental combinations with different conditions) or the variation between the primary units:

ST1 = (1/2)[(-12.8)² + (-0.3)² + ... + 23.1²] - CF = 826.04  (f = 7)    (34.3)
Table 34.2
Supplementary table

(a) Data after subtracting the working mean, 105.0

    B1          B2          B3        B4
A1  -6.0, -6.8  0.1, -0.4   5.3, 7.8  9.5, 11.1
A2  -8.9, -9.8  -3.4, -2.6  4.8, 3.2  12.1, 11.0

(b) Cell totals

       B1     B2    B3    B4    Total
A1     -12.8  -0.3  13.1  20.6  20.6
A2     -18.7  -6.0   8.0  23.1   6.4
Total  -31.5  -6.3  21.1  43.7  27.0
The variation between the primary units is decomposed into the following components:

SA: difference between golf balls A1 and A2
SBl, SBq, SBc: linear, quadratic, and cubic effects of temperature B
SABl: interaction between the linear effect of B and the levels of A (A1 and A2)
Se1: primary error (error between experiments)

SA = (1/16)(20.6 - 6.4)² = 12.60  (f = 1)    (34.4)

SBl = [(-3)(-31.5) + 6.3 + 21.1 + (3)(43.7)]²/{[(-3)² + (-1)² + 1² + 3²](4)}
    = 253.0²/80 = 800.11  (f = 1)    (34.5)

SBq = (-31.5 + 6.3 - 21.1 + 43.7)²/[(4)(4)] = (-2.6)²/16 = 0.42  (f = 1)    (34.6)

SBc = [31.5 + (3)(-6.3) - (3)(21.1) + 43.7]²/80 = (-7.0)²/80 = 0.61  (f = 1)    (34.7)
Next, the comparisons of the linear effects of B for A1 and A2 are calculated:

L(A1) = (-3)(-12.8) + (-1)(-0.3) + 13.1 + (3)(20.6) = 113.6
L(A2) = (-3)(-18.7) + (-1)(-6.0) + 8.0 + (3)(23.1) = 139.4    (34.8)

SABl = [L(A1)² + L(A2)²]/40 - [L(A1) + L(A2)]²/80
     = (113.6² + 139.4²)/40 - 253.0²/80
     = 808.43 - 800.11 = 8.32  (f = 1)    (34.9)

Se1 = ST1 - SA - SBl - SBq - SBc - SABl
    = 826.04 - 12.60 - 800.11 - 0.42 - 0.61 - 8.32 = 3.98  (f = 2)    (34.10)
Next, the variation within repetitions (also called the variation of the secondary error), Se2, is calculated:

Se2 = [(-6.0)² + (-6.8)² + ... + 12.1² + 11.0²] - [(-12.8)²/2 + (-0.3)²/2 + ... + 23.1²/2]
    = ST - ST1 = 833.50 - 826.04 = 7.46  (f = 8)    (34.11)
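The split of the total variation into between-combination and within-repetition parts, equations (34.1) through (34.3) and (34.11), can be sketched as follows; this is not handbook code, just a check against Table 34.1.

```python
# Between-combination (ST1) and within-repetition (Se2) variation for
# the golf-ball data of Table 34.1, with working mean 105.0.
cells = [(99.0, 98.2), (105.1, 104.6), (110.3, 112.8), (114.5, 116.1),  # A1, B1..B4
         (96.1, 95.2), (101.6, 102.4), (109.8, 108.2), (117.1, 116.0)]  # A2, B1..B4
d = [tuple(y - 105.0 for y in cell) for cell in cells]
values = [y for cell in d for y in cell]
CF = sum(values)**2 / 16                          # correction factor, (34.1)
ST = sum(y * y for y in values) - CF              # total variation, (34.2)
ST1 = sum(sum(cell)**2 for cell in d) / 2 - CF    # between combinations, (34.3)
Se2 = ST - ST1                                    # within repetitions, (34.11)
```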
Table 34.3
ANOVA table

Source  f     S        V        S'      ρ (%)
A       1     12.60    12.60    11.56   1.4
Bl      1     800.11   800.11   799.07  95.9
Bq      1     0.42     0.42
Bc      1     0.61     0.61
ABl     1     8.32     8.32     7.28    0.9
e1      2     3.98     1.99
e2      8     7.46     0.932
(e)     (12)  (12.47)  (1.04)   15.59   1.8
Total   15    833.50            833.50  100.0
The relationship between bouncing height and temperature for ball A1 is estimated by

ŷ1 = Ā1 + b1(B - B̄)    (34.12)

Ā1 = (working mean) + A1/8 = 105.0 + 20.6/8 = 107.6    (34.13)

b1 = L(A1)/[r(λS)h] = 113.6/[(2)(10)(10)] = 113.6/200 = 0.568    (34.14)

B̄ = (1/4)(0 + 10 + 20 + 30) = 15    (34.15)

From the above, the relationship between the bouncing height of golf ball A1 and temperature, B, is

ŷ1 = 107.6 + 0.568(B - 15)    (34.16)

Similarly, for A2,

Ā2 = 105.0 + 6.4/8 = 105.8

b1 = L(A2)/200 = 139.4/200 = 0.697

ŷ2 = 105.8 + 0.697(B - 15)    (34.17)
The results above show that the effect of temperature on bouncing for A2 (Eagle) is about 23% greater than for A1 (Dunlop).
It is said that professional golfers tend to have lower scores during cold weather if their golf balls are kept warm. The following calculation shows how much the distance will be increased if a golf ball is warmed to 20°C.
To illustrate, B = 5°C and B = 20°C are placed into equations (34.16) and (34.17). When B = 5°C,

Dunlop:  107.6 + (0.568)(5 - 15) = 101.92
Eagle:   105.8 + (0.697)(5 - 15) = 98.83    (34.18)

When B = 20°C,

Dunlop:  107.6 + (0.568)(20 - 15) = 110.44
Eagle:   105.8 + (0.697)(20 - 15) = 109.28    (34.19)

Hence, the increase is

Dunlop:  110.44 - 101.92 = 8.52
Eagle:   109.28 - 98.83 = 10.45    (34.20)
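Equations (34.16) and (34.17) give the warming gain directly; a small sketch (not from the handbook):

```python
# Predicted bouncing heights from equations (34.16) and (34.17).
def dunlop(B):               # ball A1
    return 107.6 + 0.568 * (B - 15)

def eagle(B):                # ball A2
    return 105.8 + 0.697 * (B - 15)

gain_dunlop = dunlop(20) - dunlop(5)   # warming from 5 C to 20 C
gain_eagle = eagle(20) - eagle(5)
```

Warming from 5°C to 20°C raises the expected bounce by 0.568 × 15 ≈ 8.5 cm for the Dunlop ball and 0.697 × 15 ≈ 10.5 cm for the Eagle.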
34.3. Different Numbers of Repetitions
Table 34.4 shows the tensile strength of three products tested at four temperatures, B1 through B4, with different numbers of repetitions:

A1: foreign product
A2: our company's product
A3: another domestic company's product

Table 34.4
Tensile strength (kg/mm²)

    B1          B2         B3           B4
A1  20          8          0            -9
A2  22, 25, 28, 25, 26  12, 8, 10, 9, 12  -2, 0, 3, 0  -12, -14, -13
A3  17, 23, 20  6, 8, 4    -8, -6, -3   -20, -18, -22
Table 34.5
Table of means

       B1    B2    B3    B4     Total
A1     20.0   8.0   0.0   -9.0  19.0
A2     25.2  10.2   0.2  -13.0  22.6
A3     20.0   6.0  -5.7  -20.0   0.3
Total  65.2  24.2  -5.5  -42.0  41.9
When the numbers of repetitions differ, it is recommended that the means be calculated to at least one decimal place. Then do an analysis of variance using the means:

CF = 41.9²/12 = 146.30    (34.21)

SA = (19.0² + 22.6² + 0.3²)/4 - CF = 71.66    (34.22)

SB = [65.2² + 24.2² + (-5.5)² + (-42.0)²]/3 - CF = 2064.01    (34.23)

ST1 = 20.0² + 8.0² + ... + (-20.0)² - CF = 2175.31    (34.24)

ST1 is the total variation between the combinations of A and B. From ST1, subtract SA and SB to get the variation called the interaction between A and B, denoted by SAB. SA indicates the difference between the companies as a whole.
To make a more detailed comparison, the following contrasts are considered:

L1(A) = A1/4 - (A2 + A3)/8    (34.25)

L2(A) = A2/4 - A3/4    (34.26)

L1 shows the comparison between foreign and domestic products, and L2 compares our company and another domestic company. Since L1 and L2 are orthogonal to each other, their variations, SL1(A) and SL2(A), satisfy

SA = SL1(A) + SL2(A)    (34.27)

where

SL1(A) = (2A1 - A2 - A3)²/(sum of coefficients squared)
       = [(2)(19.0) - 22.6 - 0.3]²/[(2²)(4) + (1²)(8)]
       = 15.1²/24 = 228.01/24 = 9.50    (34.28)

SL2(A) = (A2 - A3)²/(sum of coefficients squared)
       = (22.6 - 0.3)²/[(1²)(4) + (1²)(4)]
       = 22.3²/8 = 62.16    (34.29)
Next, we want to know whether or not the relationship between temperature and tensile strength is expressed by a linear equation. The linear component of temperature B is given by

SBl = (-3B1 - B2 + B3 + 3B4)²/[r(λ²S)]
    = [(-3)(65.2) - 24.2 + (-5.5) + (3)(-42.0)]²/[(3)(20)]
    = (-351.3)²/60 = 2056.86    (34.30)

The remainder of SB is

SBres = SB - SBl = 2064.01 - 2056.86 = 7.15  (f = 2)    (34.31)

For the linear coefficients of temperature, their differences between foreign and domestic products, or between our company and another domestic company, are found by testing SL1(A)Bl and SL2(A)Bl, respectively:

SL1(A)Bl = [2L(A1) - L(A2) - L(A3)]²/{[2² + (-1)² + (-1)²][(-3)² + (-1)² + 1² + 3²]}
         = [(2)(-95.0) + 124.6 + 131.7]²/[(6)(20)]
         = 66.3²/120 = 36.63    (34.32)

SL2(A)Bl = [L(A2) - L(A3)]²/{[1² + (-1)²][(-3)² + (-1)² + 1² + 3²]}
         = (-124.6 + 131.7)²/40 = 1.26    (34.33)

where

L(Ai) = -3AiB1 - AiB2 + AiB3 + 3AiB4    (34.34)

computed from the cell means, so that L(A1) = -95.0, L(A2) = -124.6, and L(A3) = -131.7.
The primary error variation, Se1, is obtained from ST1 by subtracting all of the variations of the factorial effects that we have considered:

Se1 = ST1 - SA - SB - SL1(A)Bl - SL2(A)Bl
    = 2175.31 - 71.66 - 2064.01 - 36.63 - 1.26 = 1.75  (f = 4)    (34.35)
The secondary error variation, the error within repetitions, is

Se2 = (1/r̄)[S11 + S12 + ... + S34]    (34.36)

where Sij is the error variation within the repetitions of AiBj; for example,

S34 = (-20)² + (-18)² + (-22)² - (-20 - 18 - 22)²/3 = 8

Thus

Se2 = (1/r̄)[(sum of the individual data squared) - (variation between the combinations of A and B)] = (1/r̄)(93.02)

The effective number of repetitions, r̄, is given by

1/r̄ = [1/(number of combinations of A and B)](sum of reciprocals of the numbers of repetitions of AiBj)
    = (1/12)(1/1 + 1/1 + 1/1 + 1/1 + 1/5 + 1/5 + 1/4 + 1/3 + 1/3 + 1/3 + 1/3 + 1/3)    (34.37)

so that r̄ = 1.8997 and

Se2 = 93.02/1.8997 = 48.97  (f = 21)    (34.38)

Since SA and SB were obtained from the mean values, the within-cell variation is multiplied by 1/r̄. From the results above, the analysis of variance can be obtained, as shown in Table 34.6. In the table, ST was obtained by

ST = ST1 + Se2 = 2175.31 + 48.97 = 2224.28

The degrees of freedom for Se2 are obtained by summing the degrees of freedom for the repetitions of A1B1, A1B2, ..., A3B4: 0 + 0 + 0 + 0 + 4 + 4 + 3 + 2 + 2 + 2 + 2 + 2 = 21.
The analysis of variance shows that L2(A), Bl, and L1(A)Bl are significant. The significance of L2(A) means there is a significant difference between the two domestic companies. It has also been determined that the influence of temperature on tensile strength is linear; the extent to which temperature affects tensile strength differs between the foreign and domestic products.
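The effective number of repetitions in equation (34.37) is the harmonic mean of the cell repetition counts. A sketch (not handbook code), using the repetition counts implied by the data:

```python
# Effective repetition number r_bar (equation (34.37)) and the
# secondary error variation Se2 (equations (34.36) and (34.38)).
reps = [1, 1, 1, 1,        # A1B1..A1B4
        5, 5, 4, 3,        # A2B1..A2B4
        3, 3, 3, 3]        # A3B1..A3B4
inv_rbar = sum(1 / r for r in reps) / len(reps)
rbar = 1 / inv_rbar
Se2 = 93.02 * inv_rbar     # within-cell variation 93.02 scaled by 1/r_bar
```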
Table 34.6
ANOVA table

Source        f     S        V        S'         ρ (%)
A
  L1(A)       1     9.50     9.50a
  L2(A)       1     62.16    62.16    59.793     2.69
B
  l           1     2056.86  2056.86  2054.493   92.37
  Res         2     7.15     3.58a
A × B
  L1(A)Bl     1     36.63    36.63    34.263     1.54
  L2(A)Bl     1     1.26     1.26a
e1            4     1.75     0.438a
e2            21    48.97    2.332a
(e)           (29)  (68.63)  (2.367)  (75.731)   (3.40)
Total         32    2224.28           2224.28    100.00

a Pooled data.
In the case of the foreign product, the influence per 1°C, b1, is given by

b1 = L(A1)/[r(λS)h] = -95.0/300 = -0.317    (34.39)

For the domestic products, whose linear coefficients do not differ significantly, a common coefficient is used:

b2 = b3 = [L(A2) + L(A3)]/[(2)(10)(30)] = -256.3/600 = -0.427    (34.40)

The estimated relationships are then

ŷ1 = Ā1 + b1(B - B̄) = 80 + 4.75 - (0.317)(B - 15)
   = 84.75 - (0.317)(B - 15)    (34.41)

ŷ2 = Ā2 + b2(B - B̄) = 80 + 5.65 - (0.427)(B - 15)
   = 85.65 - (0.427)(B - 15)    (34.42)

ŷ3 = Ā3 + b3(B - B̄) = 80 + 0.075 - (0.427)(B - 15)
   = 80.075 - (0.427)(B - 15)    (34.43)
35
Introduction to Orthogonal Arrays

35.1. Introduction  584
35.2. Quality Characteristics and Additivity  584

35.1. Introduction
Earlier, reproducibility of conclusions was discussed, and it was recommended that one use orthogonal arrays to check the existence of interactions, the cause of nonreproducibility. The following are examples of poor quality characteristics that cause interactions.
35.2. Quality Characteristics and Additivity
At the stage of selecting a characteristic, it must be considered whether that characteristic is the most suitable one to use to achieve the purpose of the experiment. A characteristic may not always exactly express the purpose itself. As an example, let's discuss an experiment for curing the physical condition of a patient. Since the data showing physical condition are directly related to the purpose of the experiment, a characteristic "condition" is selected. Another example of such a characteristic would be data recorded under categories such as "delicious" or "unsavory."
Assume that there are three types of medicines: A, B, and C. First, medicine A was taken by the patient; then his or her physical condition became better. Such data can be expressed as:

A1: Bad condition
A2: Took A; better condition

Suppose that the results after taking medicines B and C were as follows:

B1: Bad condition
B2: Took B; better condition

C1: Bad condition
C2: Took C; better condition
If, just as each of the three medicines (A, B, and C) individually did the patient good, the patient became much better after taking the three medicines together, we say that there exists additivity in regard to the effects of the three medicines, A, B, and C. However, if the patient became worse or died after taking the three medicines together, we say that there is interaction (a large effect that reverses the results) between A, B, and C.
When there is interaction, it is generally useless to carry out the investigations in such a way that one factor (in this case, medicine) is introduced into the experiment followed by another factor.
For example, suppose that a patient is suffering from diabetes, which is caused by a lack of insulin being produced by the pancreas. Also suppose that medicine A contains 80%, B contains 100%, and C contains 150% of the amount of insulin the patient requires. In this case, interaction does occur. Taking A, B, and C together would result in a 330% insulin content, which would definitely be fatal.
As far as physical condition is concerned, there is no additivity. But if the amount of insulin is measured, it becomes possible to estimate the result of taking A, B, and C together, though the combination can never be tested. If effect A is reversed by the other conditions, solely researching effect A would turn out to be meaningless. Accordingly, we have to conduct the experiment with regard to all the combinations of A, B, and C.
When there are three two-level factors, there are only eight combinations in total; however, when there are 10 factors to be investigated, 1024 combinations should be included in the experiment, which would be impossible in practice.
If the amount of insulin is measured, there then exists additivity. That is, we can predict that physical condition becomes worse when A, B, and C are taken together. In other words, we do not need the data from all combinations. In this way, even when there are 10 two-level factors, we can estimate the results for all combinations by researching each individual factor, or by using orthogonal arrays (described in other chapters), and then finishing the research work by conducting only about 10 experiments.
Physical condition is not really an efficient and scientific characteristic, but quantity of insulin is. Research efficiency tends to drop if we cannot find a characteristic by which the results of the individual effects reappear, or have consistent effects, no matter how the other conditions, variables, or causes vary. It is therefore most important from the viewpoint of research efficiency to find a characteristic with the additivity of effects in its widest sense.
To measure noise level, it is usual to measure the magnitude of noise over the audible frequency range using the phon or the decibel as a unit. Generally, there are three types of factors affecting noise. Of these, additivity exists for factors of the following two types:
1. Factors that extinguish vibrating energy, the origin of noise.
2. Factors that do not affect vibrating energy itself but convert the energy caused by vibration into another type of energy, such as heat energy.
That is, the less the vibrating source, the lower the noise level. Also, the more we convert the energy that causes noise into other types of energy, the lower the noise level.
The third type comprises:
3. Factors that change the frequency characteristics to frequency levels outside the audible range: for example, changing the shape or dimensions of component parts, changing the gap between component parts, changing surface conditions, or switching the relative positions of the various component parts.
Suppose that the frequency characteristic of a certain noise is as shown in Figure 35.1. There is a large resonance frequency at 500 cycles, which forms half of the total noise. There are two countermeasures, A and B, as follows:
1. Countermeasure A converts the resonance frequency from 500 cycles to 50,000 cycles. As a result, the noise level decreases by 50%.
2. Countermeasure B converts the resonance frequency from 500 cycles to 5 cycles. As a result, again the noise level decreases by 50%.

Figure 35.1
Frequency and noise level
If the two countermeasures were instituted at the same time, the noise level might come back to the original magnitude.
Generally, most factors cited in noise research belong to the third type. If so, it is useless to use noise level as a characteristic. Unless frequency characteristics are researched, with conditions outside the audible range included, the research turns out to be inefficient; researching all combinations is then required to find the optimum condition.
In many cases, taking data such as the frequency of misoperation of a machine, the rate of error in a communications system or in a calculator, or the magnitude
of electromagnetic waves when there are frequency changes does not make sense.
These errors are not only the function of the signal-to-noise (SN) ratio, but may
also be caused by the unbalance of a setting on the machine. In the majority of
these cases, it is advisable to calculate the SN ratio, or it is necessary to nd another
method to analyze these data, which are called classied attributes.
Example: Data Showing Particle Size Distribution
The product becomes off grade when it is either too coarse or too fine; fineness within the range of 15 through 50 mesh is desirable, and it is most desirable to increase the yield. But additivity does not exist for yield or percentage in this case. One would not use such a characteristic for the purpose of finding the optimum condition efficiently.
Suppose that the present crushing condition is A1B1C1. The resulting yields after changing A1 to A2, B1 to B2, and C1 to C2 are as follows:

A1: 40%    A2: 80%
B1: 40%    B2: 90%
C1: 40%    C2: 40%
The data signify that the yield increases by 40% when crushing condition A is changed from A1 to A2; the yield also increases by 50% when B is changed from B1 to B2, but the yield does not change when C is changed from C1 to C2. Judging from these data, the optimum condition is either A2B2C1 or A2B2C2.
The yield under either of these optimum conditions is expected to be nearly 100%. But if the crushing process is actually operated under condition A2B2C2 and a yield of less than 10% is obtained, the researcher would be puzzled. However, it
Table 35.1
Particle size distribution (%)

Condition   Coarser than 15 mesh   15-50 mesh   Finer than 50 mesh
A1          60                     40           0
A2          0                      80           20
B1          60                     40           0
B2          0                      90           10
C1          60                     40           0
C2          0                      40           60
is very possible to obtain such a result. This is easy to understand from the particle size distribution shown in Table 35.1.
Under the present operating conditions, a low yield of 40% is obtained because the particles are too coarse. As for conditions C1 and C2, the percentage of particles in the range 15 to 50 mesh is equal, but factor C exerts the largest influence on the particle distribution: merely changing condition C from C1 to C2 makes the particles much finer. Adding the effects of A and B as well, the yield under condition A2B2C2 tends to be very small. Yield is therefore not a good characteristic for finding an optimum condition.
Even if the yield characteristic were used, the optimum condition could be obtained if all of the combinations were included in the experiment. In this case the optimum condition is obtainable because there are only eight experiments (2^3 = 8). When there are only three two-level factors, performing eight experiments is not impractical. But when these are three-level or four-level factors, the total number of combinations increases to 27 or 64, respectively.
If there are 10 three-level factors, experiments with 59,049 combinations are required. Assuming that 10 combinations are performed a day, about 20 years would be required to test all the combinations. Clearly, in order to find an optimum condition by researching one factor after another, or a few factors at a time, the characteristic must have additivity.
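The combinatorial growth described above can be checked with a short sketch; the rate of 10 runs a day is taken from the text, and the figure of roughly 300 working days per year is an assumption added here to reproduce the "about 20 years":

```python
# Sketch: full-factorial run counts grow exponentially with the number of factors.
def full_factorial_runs(levels: int, factors: int) -> int:
    """Number of runs needed to test every combination of the factors."""
    return levels ** factors

print(full_factorial_runs(2, 3))   # three two-level factors: 8 runs
print(full_factorial_runs(3, 3))   # three three-level factors: 27 runs
print(full_factorial_runs(4, 3))   # three four-level factors: 64 runs

runs = full_factorial_runs(3, 10)  # ten three-level factors: 59,049 runs
years = runs / 10 / 300            # 10 runs a day, ~300 working days a year (assumed)
print(runs, round(years, 1))       # about 20 years of experimentation
```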
The example above is a case where the yield becomes 100% when the particle size is in a certain range. In such a case it is meaningless to take yield alone as the data; we also need the percentages of the coarser and finer portions. When all three portions are shown, it is clear without conducting any experiment that condition A2B2C2 is the worst condition.
There is no additivity in the yield data within the range 15 to 50 mesh, but there is additivity if the total (100%) is divided into three classes, called classified variables.
When a physical condition is expressed as good, average, or bad, this type of data is not additive. The same holds true when yield is expressed as a percentage.
The yield of a reversible chemical reaction, or of a reaction that may be overreacted, would not be considered additive data either. Consider the scientific fields in which condition or balance is seriously discussed: the color matching of dyes or paints, the condition of a machine or a furnace, taste, flavor, or physical condition. In such fields, no one can be as competent as an expert worker who has experienced nearly all the combinations. As long as a characteristic with additivity has not been discovered, we can never rise above that working level.
In most cases it is difficult to judge within a specialized technical field whether a characteristic is additive. Accordingly, we need a way to judge whether or not we are lost in a maze. Evaluating experimental reliability means evaluating a characteristic with regard to its additivity. The importance of experiments using orthogonal arrays lies in the feasibility of such evaluations.
Example [1]
An early application of robust design involved the optimization of a tile manufacturing process in Japan in the 1950s. In 1953, a tile manufacturing experiment was conducted by the Ina Seito Company. The flow of the manufacturing process is as follows:

raw material preparation (crushing and mixing) → molding → calcining → glazing → calcining
The molded tiles are stacked in carts and move slowly through a tunnel kiln as burners fire the tiles. The newly constructed kiln did not produce tiles with uniform dimensions: more than 40% of the outside tiles were out of specification, and the inside tiles barely met the specification. It was obvious to the engineers at the plant that one of the causes of the dimensional variation was the uneven temperature distribution.
The traditional way of improving quality is to remove the cause of variation. In this case that would mean redesigning the tunnel kiln to make the temperature distribution more uniform, which was impossible because it was too costly. Instead of removing the cause of variation, the engineers decided to perform an experiment to find the formula for the tile materials that would produce consistently uniform tiles, regardless of their positions within the kiln.
In the study, control factors were assigned to an orthogonal array, and seven positions in the kiln were assigned to the outer array as noise factors. The interactions between each control factor and the noise factors were studied to find the optimum condition. Today, the SN ratio is used as a substitute for the tedious calculations needed to study the interactions between control and noise factors.
Following are the control factors and their levels:

A: amount of a certain material. A1: 5.0%; A2: 1.0% (current)
B: firmness of material. B1: fine; B2: (current); B3: coarse
C: amount of agalmatolite. C1: less; C2: (current); C3: new mix without additive
D: type of agalmatolite. D1: 0.0%; D2: 1.0% (current); D3
E: amount charged. E1: smaller; E2: current; E3: larger
F: amount of returned material. F1: less; F2: medium (current); F3: more
G: amount of feldspar. G1: 7%; G2: 4% (current); G3: 0%
H: clay type. H1: only K-type; H2: half-and-half (current); H3: only G-type
These were assigned to an L18 orthogonal array as the inner array, and the seven positions in the kiln were assigned to the outer array. The dimensions of the tiles for each combination of a control-factor run and a noise-factor level are shown in Table 35.2.
Data Analysis
As the quality characteristic, a nominal-the-best SN ratio is used for analysis. (For the SN ratio, see later chapters.) For experiment 1:

Sm = (10.18 + 10.18 + ⋯ + 10.20)²/7 = 714.8782   (35.1)

Se = ST − Sm = 0.0454   (35.2)

Ve = Se/6 = 0.0454/6 = 0.00757   (35.3)

η = 10 log [(1/n)(Sm − Ve)/Ve]   (35.4)

  = 10 log [(1/7)(714.8782 − 0.00757)/0.00757] = 41.31 dB   (35.5)

The SN ratios of the other 17 runs are calculated similarly, as shown in Table 35.2.
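The calculation of equations (35.1) to (35.5) can be sketched as a small function. The seven measurements used below are hypothetical, since the data of run 1 are only partly shown above:

```python
import math

def nominal_the_best_sn(data):
    """Nominal-the-best SN ratio (dB), following equations (35.1)-(35.5)."""
    n = len(data)
    sm = sum(data) ** 2 / n            # S_m: variation due to the mean
    st = sum(y * y for y in data)      # S_T: total variation
    se = st - sm                       # S_e: error variation
    ve = se / (n - 1)                  # V_e: error variance
    return 10 * math.log10((sm - ve) / n / ve)

# Hypothetical dimensions of seven tiles from one run:
print(round(nominal_the_best_sn([10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.0]), 2))
```

With the actual seven measurements of run 1 (Sm = 714.8782, Ve = 0.00757), this same formula gives the 41.31 dB of equation (35.5).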
Table 35.2
Layout and results: the L18 inner array carries control factors A through H in columns 1 through 8; columns P1 through P7 hold the tile dimensions measured at the seven kiln positions; the last two columns give the mean and the SN ratio (dB) of each run.
Tables 35.3 and 35.4 show the response tables for the SN ratio and the average. Figure 35.2 shows the response graphs of the SN ratio and the average. For example:

average SN ratio for A1 = 43.10   (35.6)

average SN ratio for A2 = 39.50   (35.7)

average SN ratio for B1 = 40.51   (35.8)
Table 35.3
Response table: SN ratio (dB)

Level   A       B       C       D       E       F       G       H
1       43.10   40.51   40.45   40.33   44.53   41.11   40.44   39.90
2       39.50   41.24   40.96   40.88   40.12   41.38   41.47   42.82
3               42.16   42.51   42.71   39.26   41.42   42.00   41.19
Delta   3.60    1.65    2.06    2.38    5.27    0.31    1.57    2.92
Optimization
The first step in parameter design is to reduce variability, that is, to maximize the SN ratio. From the SN ratio response table, the optimum combination is selected as A1B3C3D3E1F3G3H2. Factors A, C, D, E, and H had a relatively strong impact on variability, whereas B, F, and G had a relatively weak impact. Therefore, A1, C3, D3, E1, and H2 are definitely selected.
The second step is to adjust the mean. Here the mean is easily adjusted through the dimensions of the mold. In general, look for a control factor with a large impact on the mean and a minimal impact on variability.
Estimation and Confirmation
It is important to check the validity of the experiments, that is, the reproducibility of the results. To do so, the SN ratios of the optimum condition and the initial condition are estimated using the additivity of the factorial effects. Their difference, called the gain, is then compared with the gain calculated from confirmatory experiments run under the optimum and initial conditions.
Under additivity, the prediction is simply the sum of the factorial effects. Every factorial effect may be used (added) to estimate the SN ratios of the optimum and initial conditions, but it is recommended that one exclude weak factorial effects from the prediction to avoid overestimation. In this example, factors B,
Table 35.4
Response table: mean (the level averages of the mean tile dimension for factors A through H, together with the difference between the largest and smallest level average of each factor)
Figure 35.2 Response graphs of the SN ratio and the mean for levels A1-A2, B1-B3, C1-C3, D1-D3, E1-E3, F1-F3, G1-G3, and H1-H3
F, and G are excluded from the predictions. The SN ratios under the optimum and initial conditions, denoted by η_opt and η_initial, respectively, are predicted by

η_opt = T + (A1 − T) + (C3 − T) + (D3 − T) + (E1 − T) + (H2 − T)   (35.9)

      = A1 + C3 + D3 + E1 + H2 − 4T   (35.10)

      = 43.10 + 42.51 + 42.71 + 44.53 + 42.82 − (4)(41.30) = 50.47 dB   (35.11)
Confirmatory experiments are conducted under the optimum and initial conditions, and the two SN ratios are calculated from their results. Their difference is the confirmed gain. When the gain from the prediction is close enough to the gain from the confirmatory experiments, this indicates that there is additivity and that the conclusions are probably reproducible.
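The additive prediction of equations (35.9) to (35.11) can be sketched directly from the response-table values; T = 41.30 dB and the level averages are those of Table 35.3:

```python
# Additive model: predicted SN = T plus the deviation of each selected
# strong factor level average from the overall average T.
T = 41.30
strong_levels = {"A1": 43.10, "C3": 42.51, "D3": 42.71, "E1": 44.53, "H2": 42.82}

eta_opt = T + sum(avg - T for avg in strong_levels.values())
print(round(eta_opt, 2))  # 50.47 dB, as in equation (35.11)
```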
References
1. Genichi Taguchi et al., 1973. Design of Experiments. Tokyo: Japanese Standards
Association.
2. American Supplier Institute, Robust Design Workshop, 1998.
36
Layout of Orthogonal Arrays Using Linear Graphs
36.1. Introduction
36.2. Linear Graphs of Orthogonal Array L8
      Linear Graph (1)
      Linear Graph (2)
36.3. Multilevel Arrangement
36.4. Dummy Treatment
36.5. Combination Design
Reference
36.1. Introduction
Linear graphs were developed by Taguchi for the easy assignment of experiments. In the design of experiments, these graphs can be used to assign interactions between factors to the columns of an array so that the interactions can be calculated. In quality engineering, however, linear graphs are not used for assigning interactions but for special cases such as multilevel assignment. This chapter is based on Genichi Taguchi, Design of Experiments. Tokyo: Japanese Standards Association, 1973.
Figure 36.1 Linear graphs of orthogonal array L8
Figure 36.2 Information required for a linear graph
Linear graph (2) is used for an experiment in which the interactions between one particular factor and several other factors are important. This is illustrated by the following experiment, with two types of raw materials, A1 and A2; two annealing methods, B1 and B2; two temperatures, C1 and C2; and two treating times, D1 and D2.
It is expected that, since B1 and B2 use different types of furnaces, their operations are quite different, and the optimum temperature or optimum treating time might not be the same for B1 as for B2. On the other hand, a better raw material should be better for any annealing method; accordingly, only the main effect is cited for factor A. Thus, we need to obtain the main effects A, B, C, and D and the interactions B × C and B × D.
These requirements are shown in Figure 36.2. Such a layout is easily arranged using two of the three radial lines of linear graph (2) shown in Figure 36.1.
The remaining columns, 6 and 7, are removed from the form, as shown in Figure 36.3, with the independent points as indicated. An independent point is used for the assignment of a main effect; factor A is assigned to either of the two remaining points, since only the main effect is required for A. The layout is shown in Table 36.1.
Figure 36.3 Layout of the experiment

Such a layout can also be arranged from linear graph (1): remove column 6, which is the line connecting columns 2 and 4, indicate it as an independent point like column 7, and then assign factor A, as shown in Figure 36.4.
Table 36.1
Layout of experiment

Column:  1   2   3       4   5       6   7
Factor:  B   C   B × C   D   B × D   A   e
Figure 36.4 Layout using linear graph (1)
To obtain these main effects from an L8 array, it is necessary to prepare a four-level column in the array. For instance, line 3, with its two points 1 and 2, is selected in linear graph (1). These three columns are then removed from the linear graph and rearranged as one four-level column, as shown in Table 36.2. That is, from the two columns shown as the two points at the ends of the line (or any two of the three columns), four combinations (11, 12, 21, and 22) are obtained; these combinations are then relabeled 1, 2, 3, and 4, respectively, to form a four-level column. Columns 1, 2, and 3 are then removed from the table, as shown on the right side of Table 36.2.
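The construction can be sketched as follows; the L8(2^7) array written out here is the standard one, in which column 3 carries the interaction of columns 1 and 2:

```python
# Form a four-level column from columns 1 and 2 of L8(2^7); columns 1, 2, and 3
# (the two points and the line joining them) are removed afterward.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

relabel = {(1, 1): 1, (1, 2): 2, (2, 1): 3, (2, 2): 4}  # 11, 12, 21, 22 -> 1, 2, 3, 4
L8_4_24 = [[relabel[(row[0], row[1])]] + row[3:] for row in L8]

for row in L8_4_24:
    print(row)  # the four-level column followed by the four remaining two-level columns
```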
Table 36.2
Preparation of orthogonal array L8(4 × 2^4) from L8(2^7): the level pairs (1,1), (1,2), (2,1), and (2,2) in columns 1 and 2 appear in runs (1,2), (3,4), (5,6), and (7,8) and become levels 1, 2, 3, and 4 of the new four-level column; columns 1, 2, and 3 of the original array are then removed.
Figure 36.6 Linear graph from an L16 array
Table 36.3
Orthogonal array L16(8 × 2^8) from L16(2^15): columns 1 through 7 of L16(2^15) are combined into the eight-level column (new column 1), and the original columns 8 through 15 become the two-level columns 2 through 9.
Example
To obtain information concerning the tires for passenger cars, ve two-level factors
from the manufacturing process, A, B, C, D, and E, are cited. It is judged that factor
A is most important, followed by B and C. Accordingly, it is desirable to obtain the
interactions A B and A C. For the life test of tires, four passenger cars, R1,
R2, R3, and R4, are available and there are no driving restrictions during the testing.
In other words, once the tires for the test are put on the cars, there is no restriction
on the roads driven or the distance traveled, and the cars are allowed to run as
usual. As a result, the driving conditions among these four cars must differ
considerably.
On the other hand, it is known that the different positions of tires of a car cause
different conditions; the condition of the rear tires is worse than the front and there
is also a difference between the two front tires since a car is not symmetrical, a
driver does not sit in the middle, the road conditions for both sides are not the
same, and so on.
For these reasons, we must consider the difference among the four positions in
a car: V1: front right; V2: front left; V3: rear right; and V4: rear left. Such inuences
should not interfere with the effects of the ve factors and interactions A B and
A C.
The real purpose of the experiment is to obtain information concerning factors A, B, C, D, and E, not concerning R and V. The character of factors A, B, C, D, and E is different from that of factors R and V.
The purpose of factors A, B, C, D, and E, called control factors, is the selection of the optimum levels. There are four cars, R1, R2, R3, and R4, and for each level of R the number of tests is limited to four (the four wheels). A factor that restricts the number of tests for each of its levels, in this case to four, is called a block factor. From a technical viewpoint, the significance of the difference between the levels of a block factor is not important; therefore, the effect of such a factor is not useful for adjusting the levels of other factors. The purpose of citing a block factor is merely to avoid its effect being mixed with the control factors.
A factor like the tire position, V, is called an indicative factor. It is meaningless to look for the best level among the levels of an indicative factor.
As these explanations of block and indicative factors show, the information on R (cars) and V (tire positions) is not itself needed; these factors are cited to avoid mixing their effects with those of the control factors (A, B, C, D, and E).
This experiment is assigned as follows. The factors and their degrees of freedom are listed in Table 36.4. All factorial effects have either two or four levels; the total number of degrees of freedom is 13; hence L16 (with 15 total degrees of freedom) is probably an appropriate array. (The choice between a two-level-series and a three-level-series orthogonal array depends on the number of levels of the majority of the factors.)
R and V are both four-level factors. As described in Section 36.3, a four-level factor replaces three two-level columns, which consist of a line and the two points at its ends. Accordingly, the linear graph required in this experiment is shown in Figure 36.7.
Six standard types of linear graphs are given for an L16 array [1]. Normally, however, we cannot find a standard linear graph exactly identical to the one required in a particular experiment. What we must do is examine the standard types of linear graphs, compare them with the linear graph required for our purpose, and select the one that is most easily modified. Here, type 3 is selected and modified to obtain the linear graph we require. Figure 36.8 shows type 3.
Table 36.4
Distribution of degrees of freedom

Factorial effects A, B, C, D, and E: 1 each. Interactions A × B and A × C: 1 each. R and V: 3 each. Total: 13.
Figure 36.7 Linear graph for the experiment
Figure 36.8 Type 3 linear graph

SA = (A1 + A1′)²/6r + A2²/3r − (A1 + A1′ + A2)²/9r   (36.1)

or

SA = (A1 + A1′ − 2A2)²/18r   (36.2)
Figure 36.9 Layout of experiment
Table 36.5
Layout of experiment: the 16 runs of L16, with factor A and the interactions A × B and A × C assigned to columns, and with the car (R1 to R4) and tire position (V1 to V4) of each run indicated. Runs 1 through 8 use cars R1 and R2; runs 9 through 16 use cars R3 and R4.
Table 36.6
Assignment of A to column 3
there are so many two-level factors, it is better to assign these factors to an L32(2^31) array. But for an experiment of, say, the 2^10 × 3^6 type, it is more practical to combine the 10 two-level factors in pairs to get five three-level combined factors and to assign them, together with the other six three-level factors, to an L27 array.
Also, if it is necessary to obtain the interactions between a combined factor AB and a three-level factor C, AB and C are assigned to the two dots of a line to obtain the interactions A × C and B × C.
A three-level factor (A) and a two-level factor (B) can be combined to form a combined four-level factor:

(AB)1 = A1B1,  (AB)2 = A2B1,  (AB)3 = A3B1,  (AB)4 = A1B2

The variation of a combined three-level factor (AB), formed from two two-level factors (A and B), is calculated as

S(AB) = (1/r)[(AB)1² + (AB)2² + (AB)3²] − CF   (36.3)

CF = [(AB)1 + (AB)2 + (AB)3]²/3r   (36.4)
SA = [(AB)1 − (AB)2]²/2r   (36.5)

SB = [(AB)1 − (AB)3]²/2r   (36.6)

Sres = S(AB) − SA − SB   (36.7)

or

S(AB) = SA + SB + Sres   (36.8)

In the analysis of variance table, S(AB) is listed inside the table, while SA and SB are listed outside.
Reference
1. ASI, 1987. Orthogonal Arrays and Linear Graphs. Livonia, Michigan: American Supplier
Institute.
37
Incomplete Data
37.1. Introduction
37.2. Treatment of Incomplete Data
      Cases 1.1 to 1.4
      Case 2.1
      Case 2.2
      Case 2.3
      Cases 3.1 to 3.3
      Cases 4.1 to 4.3
      Case 4.4
      Case 4.5
37.3. Sequential Approximation
References
37.1. Introduction
Sometimes during experimentation or simulation, data cannot be obtained as planned because of lost data, equipment problems, or extreme conditions. Sometimes the SN ratios of a few runs of an orthogonal array cannot be calculated, for various reasons. The treatment of missing data in the traditional design of experiments makes no distinction between the different types of incomplete data. In this chapter, treatments of such cases are described.
Although it is desirable to collect a complete set of data for analysis, it is not absolutely necessary. In parameter design, it is recommended that control-factor-level intervals be set wide enough that a significant improvement in quality may be expected. Such determinations can be made only from existing knowledge or past experience. In product development, however, there is no existing knowledge with which to determine the range of the control factor levels. To avoid generating undesirable results, engineers tend to conduct research within narrow ranges.
If the control-factor-level intervals are set wide, the conditions of some control-factor-level combinations may become so extreme that there is no product or no measurement: for example, an explosion in a chemical reaction. That is all right, because important information was produced by the result. It must be
explosions or cracked test pieces. Such situations are allowed at the R&D stage. In an L18 orthogonal array experiment there may be as many as several runs with incomplete data, and the experiment is still good enough to draw valuable conclusions.

Case 2.1
An example of case 2.1 is when the number of signal-factor levels of run 4 is 3 while that of all other runs is 5. The SN ratio of run 4 is calculated as is and analyzed together with the other SN ratios.

Case 2.2
In case 2.2, the numbers of repetitions under the signal-factor levels differ, as shown in Table 37.1. In the case of the zero-point proportional equation, the SN ratio is calculated as follows:
ST = y11² + y12² + ⋯ + ykrk²,   fT = r1 + r2 + ⋯ + rk   (37.1)

Sβ = (M1y1 + M2y2 + ⋯ + Mkyk)²/(r1M1² + r2M2² + ⋯ + rkMk²),   fβ = 1   (37.2)

where yi is the total of the ri repetitions under signal level Mi.

Se = ST − Sβ,   fe = fT − 1   (37.3)

Ve = Se/fe   (37.4)

η = 10 log {[1/(r1M1² + r2M2² + ⋯ + rkMk²)](Sβ − Ve)/Ve}   (37.5)

Table 37.1
Incomplete data

Signal   Repetitions             Total
M1       y11, y12, ..., y1r1     y1
M2       y21, y22, ..., y2r2     y2
...
Mk       yk1, yk2, ..., ykrk     yk
For the data of Table 37.2:

fT = 4 + 6 + 6 = 16   (37.6)

r1M1² + r2M2² + r3M3² = (4)(0.1²) + (6)(0.3²) + (6)(0.5²) = 2.08

Sβ = [(0.1)(0.380) + (0.3)(1.750) + (0.5)(2.955)]²/2.08 = 2.00175012,   fβ = 1   (37.7)

Se = ST − Sβ = 2.002033 − 2.00175012 = 0.00028288   (37.8)

Ve = 0.00028288/15 = 0.00001886,   fe = 16 − 1 = 15   (37.9)

η = 10 log [(1/2.08)(2.00175012 − 0.00001886)/0.00001886]
  = 10 log 51,027.08 = 47.08 dB   (37.10)
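The calculation of equations (37.6) to (37.10) can be reproduced directly from the data of Table 37.2:

```python
import math

# Zero-point proportional SN ratio with unequal repetitions (Table 37.2 data).
data = {
    0.1: [0.098, 0.097, 0.093, 0.092],
    0.3: [0.294, 0.288, 0.288, 0.296, 0.297, 0.287],
    0.5: [0.495, 0.493, 0.489, 0.495, 0.495, 0.488],
}

st = sum(y * y for ys in data.values() for y in ys)               # S_T: total variation
denom = sum(len(ys) * M * M for M, ys in data.items())            # r1*M1^2 + r2*M2^2 + r3*M3^2
s_beta = sum(M * sum(ys) for M, ys in data.items()) ** 2 / denom  # equation (37.2)
f_e = sum(len(ys) for ys in data.values()) - 1                    # 16 - 1 = 15
ve = (st - s_beta) / f_e                                          # equation (37.4)
eta = 10 * math.log10((s_beta - ve) / denom / ve)                 # equation (37.5)

print(round(eta, 2))  # 47.08 dB, as in equation (37.10)
```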
Case 2.3
An example of case 2.3 is when there are two compounded noise factor levels, the positive and negative extreme conditions. If one of these two extreme conditions is missing entirely, do not use the results of the other extreme condition alone to calculate the SN ratio. Instead, treat the run as described for type 1.

Cases 3.1 to 3.3
Whether it is an explosion, a lack of current, or a lack of end product, each such problem indicates that the condition is very bad. Use negative infinity as the SN ratio. It is important to distinguish this case from type 1, in which it is not known whether the condition is good or bad.
There are two ways to treat these cases: (1) classify the SN ratios, including positive and/or negative infinity, into several categories and analyze the results by accumulation analysis [2]; or (2) subtract 3 to 5 dB from the smallest SN ratio in the orthogonal array, assign that number to the SN ratio of the missing run(s), and then follow with sequential approximation.
Table 37.2
Numerical example

Signal     Results                                      Total
M1 = 0.1   0.098, 0.097, 0.093, 0.092                   0.380
M2 = 0.3   0.294, 0.288, 0.288, 0.296, 0.297, 0.287     1.750
M3 = 0.5   0.495, 0.493, 0.489, 0.495, 0.495, 0.488     2.955
Cases 4.1 to 4.3
Cases 4.1 to 4.3 involve the error variance being greater than the sensitivity. Use negative infinity for the SN ratio and treat these cases in the same way as case 3.1.

Case 4.4
Case 4.4 is when all results are equal to zero in a smaller-is-better case. Use positive infinity as the SN ratio, classify the SN ratios into several categories, and conduct accumulation analysis; or add 3 to 5 dB to the largest SN ratio in the orthogonal array, assign that number to the SN ratio of the missing run(s), and follow with sequential approximation.

Case 4.5
Case 4.5 is when all results are equal to zero in a larger-is-better case. Use negative infinity, and treat the problem in the same way as case 3.1.
Table 37.3
Missing data in experiment 3: an L12-type layout with eleven two-level factors, A through K, in columns 1 through 11. For each run, four wear measurements (μm) were taken under the noise conditions N1 and N2, and the SN ratio (dB) was calculated from them. The data of experiment 3 are missing; the SN ratios of the other eleven runs range from 15.47 to 36.38 dB.
Following is the estimation of experiment 3 using sequential approximation. With the zeroth approximation entered for experiment 3, the grand total of the twelve SN ratios is 325.78, so T = 325.78/12. The first approximation, estimated from the factor effects, is

η̂3 = 29.78 dB   (37.11)

Repeating the procedure with this estimate entered in the data gives the second approximation:

η̂3 = 30.88 dB   (37.12)
Using the same procedure, the fifth approximation is obtained as 31.62 dB. This is close to the fourth approximation of 31.53 dB, so the calculation is discontinued. The actual result was 34.02 dB; the two are fairly close.
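The idea of sequential approximation can be sketched as follows. The layout and SN data below are a small hypothetical L4-type example, not the experiment of Table 37.3; the missing value starts at the average of the other runs and is repeatedly re-estimated from the additive model until it stabilizes:

```python
def estimate_missing(layout, sn, missing, strong, n_iter=50):
    """Sequentially approximate one missing SN ratio in an orthogonal array.

    layout:  list of runs, each a tuple of factor levels
    sn:      list of SN ratios with None at the missing run
    strong:  indices of the strong factors used in the additive estimate
    """
    known = [v for i, v in enumerate(sn) if i != missing]
    guess = sum(known) / len(known)                    # zeroth approximation
    for _ in range(n_iter):
        filled = [guess if i == missing else v for i, v in enumerate(sn)]
        t = sum(filled) / len(filled)                  # overall average
        new = t
        for f in strong:                               # add each factor-level deviation
            level = layout[missing][f]
            group = [filled[i] for i in range(len(filled)) if layout[i][f] == level]
            new += sum(group) / len(group) - t
        guess = new
    return guess

# Hypothetical L4 with purely additive effects; run 4 (true value 7.0) is "missing".
layout = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
sn = [13.0, 11.0, 9.0, None]
print(round(estimate_missing(layout, sn, 3, strong=[0, 1]), 2))  # converges to 7.0
```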
Table 37.4
ANOVA table using the zeroth approximation

Factor   f    S        V
A        1    38.16    38.16
B        1    10.64    10.64
C        1    55.64    55.64
D        1    4.84     4.84
E        1    5.91     5.91
F        1    0.01     0.01
G        1    0.08     0.08
H        1    7.68     7.68
I        1    53.68    53.68
J        1    139.40   139.40
K        1    9.97     9.97
Total    11   326.02

Note: the factors with small variation are pooled as error.
Table 37.5
Supplementary table (level totals)

Factor   Level totals              Factor   Level totals
A        A1 173.59   A2 152.19     G        G1 162.39   G2 163.39
B        B1 157.24   B2 168.54     H        H1 158.09   H2 167.69
C        C1 149.97   C2 175.81     I        I1 150.20   I2 175.58
D        D1 166.70   D2 159.08     J        J1 183.34   J2 142.44
E        E1 158.68   E2 167.10     K        K1 168.36   K2 157.42
F        F1 163.06   F2 162.72
Total: 325.78
The calculation above was terminated at the fifth approximation, using the four factors A, C, I, and J. Since the effect of E becomes larger and larger, the approximation would be better if E were included in the calculation.
Next, the optimum condition is estimated using the ANOVA and supplementary tables from the fifth approximation (Tables 37.6 and 37.7).
Table 37.6
Supplementary table using the fifth approximation (level totals)

Factor   Level totals              Factor   Level totals
A        A1 178.06   A2 152.19     G        G1 166.86   G2 163.39
B        B1 161.71   B2 168.54     H        H1 162.56   H2 167.69
C        C1 149.97   C2 180.28     I        I1 150.20   I2 180.05
D        D1 166.70   D2 163.35     J        J1 183.34   J2 146.92
E        E1 158.68   E2 171.57     K        K1 168.36   K2 161.89
F        F1 167.53   F2 162.72
Total: 330.25
Table 37.7
ANOVA table using the fifth approximation

Factor   f     S         V         ρ (%)
A        1     55.77     55.77     15.6
B*       1     3.89
C        1     76.56     76.56     21.6
D*       1     0.89
E        1     13.85     13.85     3.4
F*       1     1.93
G*       1     1.00
H*       1     2.19
I        1     74.25     74.25     20.9
J        1     110.60    110.60    31.5
K*       1     3.49
(e)      (6)   (13.33)   (2.22)    (7.0)
Total    11    344.36              100.0

*: Pooled as error.
The optimum condition is A2B1C1D2E1F2G2H1I1J2K2. Using the effects of A, C, E, I, and J (as level averages, with T the overall average), we have

η̂ = A2 + C1 + E1 + I1 + J2 − 4T
  = 25.36 + 25.00 + 26.45 + 25.03 + 24.48 − (4)(27.52)
  = 16.24 dB   (37.13)
Sequential approximation can also be used when there is more than one missing experiment, but it cannot be used when all of the data under a certain level of a certain factor are missing.
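The estimate of equation (37.13) follows directly from the level totals of Table 37.6:

```python
# Each level total in Table 37.6 is a sum over 6 of the 12 runs; T is the
# grand average of the 12 runs (fifth approximation included).
level_totals = {"A2": 152.19, "C1": 149.97, "E1": 158.68, "I1": 150.20, "J2": 146.92}
grand_total = 330.25

T = grand_total / 12
eta_hat = sum(total / 6 for total in level_totals.values()) - 4 * T
print(round(eta_hat, 2))  # 16.24 dB, as in equation (37.13)
```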
References
1. Yuin Wu et al., 2000. Taguchi Methods for Robust Design. New York: ASME Press.
2. Genichi Taguchi, 1987. System of Experimental Design. Livonia, Michigan: Unipub/American Supplier Institute.
3. Genichi Taguchi et al., 1993. Quality Engineering Series, Vol. 4. Tokyo: Japanese Standards Association.
38
Youden Squares
38.1. Introduction
38.2. Objective of Using Youden Squares
38.3. Calculations
38.4. Derivations
Reference
38.1. Introduction
Youden squares are a type of layout used in the design of experiments. A Youden square is a fraction of a Latin square; Youden squares are also called incomplete Latin squares. Table 38.1 shows a Latin square of the kind commonly seen in experimental design texts. Table 38.2 is another way of showing the contents of a Latin square, using only the first three columns of an L9 orthogonal array.
Table 38.3 shows a Youden square: a portion of the layout of Table 38.1 is missing. Table 38.4 is another way of showing the contents of a Youden square.
From Tables 38.1 and 38.2, it is seen that all three factors, A, B, and C, are balanced (orthogonal), whereas in Tables 38.3 and 38.4, factors B and C are unbalanced. Although factors B and C are unbalanced in a Youden square, their effects can be calculated by solving the equations generated from the relationships in this layout.
Table 38.1
Common Latin square

      B1   B2   B3
A1    C1   C2   C3
A2    C2   C3   C1
A3    C3   C1   C2
Table 38.5 is a Youden square in which an L18 array is assigned with an extra column; the ninth factor, denoted C, is placed in column 9. In the table, y1, y2, ..., y18 are the outputs of the 18 runs. Comparing Tables 38.4 and 38.5, the first two columns of the two arrays correspond to each other, except that there are three repetitions in Table 38.5. For example, combination A1B1 appears once in Table 38.4, while it appears three times (runs 1, 2, and 3) in Table 38.5; it is the same for the other combinations.
The number of degrees of freedom indicates the amount of information available. There are 18 runs in an L18 array, so its number of degrees of freedom is 17. When one two-level factor and seven three-level factors are assigned, the total number of degrees of freedom of this layout is 15. The difference, two degrees of freedom, is hidden inside. One way to utilize these two degrees of freedom to obtain more information without running more experiments is to substitute a six-level column for columns 1 and 2, creating a column that can carry a factor of up to six levels; this is called multilevel assignment. Another way of utilizing the two degrees of freedom is to assign an additional three-level column using a Youden square. Thus, all 17 degrees of freedom are utilized.
Table 38.2
Latin square using an orthogonal array

No.   A(1)   B(2)   C(3)   Result
1     1      1      1      y1
2     1      2      2      y2
3     1      3      3      y3
4     2      1      2      y4
5     2      2      3      y5
6     2      3      1      y6
7     3      1      3      y7
8     3      2      1      y8
9     3      3      2      y9
Table 38.3
Youden square

      B1       B2       B3
A1    C1(y1)   C2(y2)   C3(y3)
A2    C2(y4)   C3(y5)   C1(y6)
Table 38.4
Alternative depiction of a Youden square

No.   A(1)   B(2)   C(3)   Result
1     1      1      1      y1
2     1      2      2      y2
3     1      3      3      y3
4     2      1      2      y4
5     2      2      3      y5
6     2      3      1      y6
Table 38.5
Youden square in an L18 array

No.      A(1)   B(2)   C(9)   Results          Subtotal
1-3      1      1      1      y1, y2, y3       ȳ1
4-6      1      2      2      y4, y5, y6       ȳ2
7-9      1      3      3      y7, y8, y9       ȳ3
10-12    2      1      2      y10, y11, y12    ȳ4
13-15    2      2      3      y13, y14, y15    ȳ5
16-18    2      3      1      y16, y17, y18    ȳ6
38.3. Calculations
In Table 38.5, one two-level factor, A, and eight three-level factors, B, C, D, E, F, G, H, and I, are assigned. Eighteen outputs, y1, y2, ..., y18, are obtained, and the subtotals ȳ1, ȳ2, ..., ȳ6 are formed from them. The calculation of the main effects A, D, E, F, G, H, and I is exactly the same as the regular calculation. The main effects B and C are calculated after computing their level effects as follows:

ȳ1 = y1 + y2 + y3,   ȳ2 = y4 + y5 + y6,   ...,   ȳ6 = y16 + y17 + y18   (38.1)

b1 = (ȳ1 − ȳ2 + ȳ4 − ȳ6)/9
b2 = (ȳ2 − ȳ3 − ȳ4 + ȳ5)/9   (38.2)
b3 = (−ȳ1 + ȳ3 − ȳ5 + ȳ6)/9

c1 = (ȳ1 − ȳ3 − ȳ4 + ȳ6)/9
c2 = (−ȳ1 + ȳ2 + ȳ4 − ȳ5)/9   (38.3)
c3 = (−ȳ2 + ȳ3 + ȳ5 − ȳ6)/9
Example [1]
In a study of air turbulence in a conical air induction system (conical AIS) for
automobile engines, the following control factors were studied:
A: honeycomb, two levels
B: large screen, three levels
C: filter medium, three levels
D: element part support, three levels
E: inlet geometry, three levels
F: gasket, three levels
G: inlet cover geometry, three levels
Figure 38.1 Main effects of A, B, and C
b1 = (ȳ1 − ȳ2 + ȳ4 − ȳ6)/9 = (9.05 − 8.77 + 8.88 − 8.23)/9 = 0.1033   (38.4)

b2 = (ȳ2 − ȳ3 − ȳ4 + ȳ5)/9 = (8.77 − 8.31 − 8.88 + 8.58)/9 = 0.0177   (38.5)
622
Table 38.6
Layout and results
Column
No.
1
A
2
B
3
C
4
D
5
E
6
F
7
G
8
H
9
I
SN
1
2
3
1
1
1
1
1
1
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
1
1
4.24
2.21
2.60
y1 9.05
4
5
6
1
1
1
2
2
2
1
2
3
1
2
3
2
3
1
2
3
1
3
1
2
3
1
2
2
2
2
2.20
4.01
2.56
y2 8.77
7
8
9
1
1
1
3
3
3
1
2
3
2
3
1
1
2
3
3
1
2
2
3
1
3
1
2
3
3
3
2.20
3.87
2.24
y3 8.31
10
11
12
2
2
2
1
1
1
1
2
3
3
1
2
3
1
2
2
3
1
2
3
1
1
2
3
2
2
2
2.66
2.66
2.31
y4 8.88
13
14
15
2
2
2
2
2
2
1
2
3
2
3
1
3
1
2
1
2
3
3
1
2
2
3
1
3
3
3
2.20
2.35
4.03
y5 8.58
16
17
18
2
2
2
3
3
3
1
2
3
3
1
2
2
3
1
3
1
2
1
2
3
2
3
1
1
1
1
2.11
2.16
3.96
y6 8.23
b3
i1
i2
Subtotal
b3 = (−ȳ1 + ȳ3 − ȳ5 + ȳ6)/9 = (−9.05 + 8.31 − 8.58 + 8.23)/9 = −0.1211   (38.6)

i1 = (ȳ1 − ȳ3 − ȳ4 + ȳ6)/9 = (9.05 − 8.31 − 8.88 + 8.23)/9 = 0.0100   (38.7)

i2 = (−ȳ1 + ȳ2 + ȳ4 − ȳ5)/9 = (−9.05 + 8.77 + 8.88 − 8.58)/9 = 0.0022   (38.8)

i3 = (−ȳ2 + ȳ3 + ȳ5 − ȳ6)/9 = (−8.77 + 8.31 + 8.58 − 8.23)/9 = −0.0122   (38.9)
(38.10)
(38.11)
(38.12)
(38.13)
(38.14)
(38.15)
(38.16)
Other level totals are calculated in the normal way. Table 38.7 shows the results
of calculation.
38.4. Derivations
The derivations of equations (38.2) and (38.3) are illustrated below. Table 38.8 is constructed from Table 38.4 using the symbols a1, a2, b1, b2, b3, c1, c2, and c3 described in Section 38.3. It can be seen from the table that y1 is the result of a1, b1, and c1; y2 is the result of a1, b2, and c2; and so on. Therefore, the following equations can be written:
Table 38.7
Response table for SN ratio

Factor   Level 1   Level 2   Level 3
A         2.90      2.86
B         2.98      2.90      2.76
C         2.81      2.88      2.95
D         2.92      2.82      2.90
E         3.00      2.79      2.85
F         2.89      2.81      2.94
G         2.88      2.84      2.92
H         4.00      2.33      2.30
I         2.89      2.88      2.87
Table 38.8
Youden square

No.   A (1)   B (2)   C (3)   Result
1      a1      b1      c1      y1
2      a1      b2      c2      y2
3      a1      b3      c3      y3
4      a2      b1      c2      y4
5      a2      b2      c3      y5
6      a2      b3      c1      y6
y1 = a1 + b1 + c1   (38.17)

y2 = a1 + b2 + c2   (38.18)

y3 = a1 + b3 + c3   (38.19)

y4 = a2 + b1 + c2   (38.20)

y5 = a2 + b2 + c3   (38.21)

y6 = a2 + b3 + c1   (38.22)

In addition, the following three equations hold based on the definition of a1, a2, ..., c3:

a1 + a2 = 0   (38.23)

b1 + b2 + b3 = 0   (38.24)

c1 + c2 + c3 = 0   (38.25)

By solving these equations, b1, b2, b3, c1, c2, and c3 are obtained, as shown in equations (38.2) and (38.3).
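That solving equations (38.17)–(38.25) yields contrasts of this form can be checked numerically. The sketch below (illustrative, not from the handbook) builds y1–y6 from randomly chosen effects that satisfy the constraints and recovers them; because each y here is a single observation rather than a subtotal of three runs, the divisor is 3 instead of the 9 used in equations (38.2) and (38.3):

```python
import random

random.seed(0)
a1 = random.uniform(-1, 1)
a2 = -a1                                      # constraint (38.23)
b = [random.uniform(-1, 1) for _ in range(2)]
b.append(-sum(b))                             # constraint (38.24)
c = [random.uniform(-1, 1) for _ in range(2)]
c.append(-sum(c))                             # constraint (38.25)

# y1..y6 per equations (38.17)-(38.22) and Table 38.8
y = [a1 + b[0] + c[0], a1 + b[1] + c[1], a1 + b[2] + c[2],
     a2 + b[0] + c[1], a2 + b[1] + c[2], a2 + b[2] + c[0]]

b_hat = [( y[0] - y[1] + y[3] - y[5]) / 3,
         ( y[1] - y[2] - y[3] + y[4]) / 3,
         (-y[0] + y[2] - y[4] + y[5]) / 3]
c_hat = [( y[0] - y[2] - y[3] + y[5]) / 3,
         (-y[0] + y[1] + y[3] - y[4]) / 3,
         (-y[1] + y[2] + y[4] - y[5]) / 3]

assert all(abs(u - v) < 1e-12 for u, v in zip(b, b_hat))
assert all(abs(u - v) < 1e-12 for u, v in zip(c, c_hat))
```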
For example, the effect of C2 is derived as follows. Subtracting equation (38.18) from equation (38.17):

y1 − y2 = b1 − b2 + c1 − c2   (38.26)
(38.27)
(38.28)
(38.29)
(38.30)
(38.31)
(38.32)
Reference
1. R. Khami et al., 1994. Optimization of the conical air induction system's mass airflow sensor performance. Presented at the Taguchi Symposium.
Section 2
Application
(Case Studies)
Taguchi's Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu.
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
Part I
Robust
Engineering:
Chemical
Applications
Biochemistry (Cases 1–2)
Chemical Reaction (Cases 3–4)
Measurement (Cases 5–8)
Pharmacology (Case 9)
Separation (Cases 10–11)
CASE 1
1. Introduction
For growing bean sprouts, we use a series of production processes in which small beans are soaked in water, germinated, and grown in a lightless environment. As shown in Figure 1, this process is divided into germination, growth, and decay periods. The major traits of each period are described below.

Germination Period
Although the raw material for bean sprouts is often believed to be soybeans or small beans, in actuality they are specific types of small bean: black mappe (ketsuru adzuki) and green gram. The former is imported from Thailand and Burma, the latter mainly from China. Although green gram has become mainstream, its price is three to five times that of black mappe. Because the beans are stored under dry, dormant conditions, they must be soaked in water and heated for germination. The germination step has the important role of sprouting the beans; it also increases the germination rate and sterilizes accompanying bacteria.
Growth Period
After germination, mappes absorb a considerable amount of water. Regarding the three days before shipping as the growth period, we evaluate an SN ratio.
Decay Period
Although a plant needs photosynthesis after a growth period, bean sprouts normally decay and rot because no light is supplied in the production environment. Whether the preservation of bean sprouts is good or poor depends on the length of the decay period and the processing up to that point. On the basis of criteria that cannot judge whether growth should be fast or slow (for example, "a fast-growing bean sprout withers fast" or "a slowly growing bean sprout is immature and of poor quality"), a proper growth period for bean sprouts has been considered to be approximately seven days. Additionally, contradictory evaluations of appearance have been used: one insists that completely grown bean sprouts are better because they grow wide and long, the other that young sprouts are better because they last longer. We believe that our generic function (in this case, a growth rate by weight) can evaluate the ideal growth, as shown in Figure 1, solving these contradictions. A state in which no energy of the bean sprouts is wasted for purposes other than growth is regarded as suitable for healthy bean sprouts.
2. Generic Function
Bean sprouts grow by absorbing water. This study observes the process of bean sprout germination and growth as a change in weight and evaluates this change as a generic function of growth. Figure 1 shows three states regarding the weight change of normal bean sprouts: actual, ideal, and optimal.
Although perpetual growth is ideal, in actuality,
after using up nutrition, bean sprouts slow their
Figure 1
Growth curve of bean sprouts
growth and wither from around the seventh day because only water is supplied. This entire process, classified into germination, growth, and decay periods, cannot be discriminated perfectly. Then, for the period of water absorption and growth caused by germination, setting the initial weight of seeds to Y0 and the grown weight after T hours to Y, we define the following as an ideal generic function:

Y = Y0 e^(βT)   (1)

(2)

(3)
before putting it to practical use. This is the quality engineering way of thinking.
1. Selection of signal factor. Considering the ideal function, we choose time as the signal factor. More specifically, with 24 hours as one day, we select five, six, and seven days as the signal factor levels.
2. Selection of noise factors. Since simultaneous control of humidity and temperature is quite difficult, we focus on humidity, considered the more easily regulated, as an experimental noise factor.
N1:
N2:
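Under the ideal function of equation (1), taking logarithms gives the zero-point proportional form ln(Y/Y0) = βT, whose least-squares slope is Σ(T·y)/ΣT². A minimal sketch, with hypothetical weights rather than study data:

```python
import math

def fit_beta(T, Y, Y0):
    """Least-squares slope of the zero-point proportional form
    y = beta * T, where y = ln(Y / Y0)."""
    y = [math.log(Yi / Y0) for Yi in Y]
    return sum(t * yi for t, yi in zip(T, y)) / sum(t * t for t in T)

T = [5, 6, 7]                          # signal levels: days
Y0 = 10.0                              # hypothetical initial weight
beta_true = 0.3
Y = [Y0 * math.exp(beta_true * t) for t in T]
assert abs(fit_beta(T, Y, Y0) - beta_true) < 1e-12
```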
Total variation:

ST = 1.500² + 1.623² + ⋯ + 1.758² = 16.359796   ( f = 6)   (4)

Effective divider:

r = 5² + 6² + 7² = 110   (5)

Linear equations:

L1 = (1.500)(5) + (1.623)(6) + (1.692)(7) = 29.082   (6)

L2 = (1.626)(5) + (1.697)(6) + (1.758)(7) = 30.618   (7)

Variation of proportional term:

Sβ = (29.082 + 30.618)²/[(2)(110)] = 16.200409   ( f = 1)   (8)

Variation of differences between proportional terms:

SN×β = (29.082 − 30.618)²/[(2)(110)] = 0.010724   ( f = 1)   (9)

Error variation:

Se = 16.359796 − 16.200409 − 0.010724 = 0.148627   ( f = 4)   (10)

Error variance:

Ve = 0.148627/4 = 0.037157   (11)

Total error variance:

VN = (0.010724 + 0.148627)/(1 + 4) = 0.031870   (12)

SN ratio:

η = 10 log {[1/((2)(110))](16.200409 − 0.031870)}/0.031870 = 3.63 dB   (13)

Sensitivity:

S = 10 log [1/((2)(110))](16.200409 − 0.031870) = −11.34 dB   (14)
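This bookkeeping can be scripted from the linear-equation data; the decomposition below follows the standard zero-point proportional SN analysis, which is an assumption where the page layout is damaged:

```python
import math

M = [5, 6, 7]                              # signal levels (days)
data = [[1.500, 1.623, 1.692],             # noise condition N1
        [1.626, 1.697, 1.758]]             # noise condition N2

r = sum(m * m for m in M)                  # effective divider (110)
L = [sum(m * y for m, y in zip(M, row)) for row in data]
ST = sum(y * y for row in data for y in row)
S_beta = (L[0] + L[1]) ** 2 / (2 * r)
S_Nbeta = (L[0] - L[1]) ** 2 / (2 * r)
Se = ST - S_beta - S_Nbeta
Ve = Se / 4
VN = (S_Nbeta + Se) / 5
eta = 10 * math.log10(((S_beta - Ve) / (2 * r)) / VN)   # ~3.6 dB
S = 10 * math.log10((S_beta - Ve) / (2 * r))            # ~ -11.3 dB
```

Small differences from the printed 3.63 and −11.34 dB come from the rounded weights; the handbook computed from unrounded data.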
Table 1
Control factors and levels

Control Factor          Level 1       Level 2      Level 3
A: type of seed         Black mappe   Green gram
B:                      18            24
C:
D:                      10
E: sprinkling timing
F:
G:                      0.1           1.0

Figure 2
Response graphs
Particularly regarding the variability in mappe properties mentioned above, to mitigate the variability we should have increased the number of mappes or sieved only appropriate sizes. In handling organisms, it is difficult to select uniform samples because of the large variability among individuals. Therefore, we need to plan an experimental method in such a way that individual variability is negligible.

The improvement in SN ratio leads to (1) smooth growth (decline or elimination of frequent rot, prolonged period of preservation) and (2) improved controllability of the growth curve (controllability of growth rate, shipping, or surplus and shortage in volume). Meanwhile, the improvement in sensitivity brings the following: (1) a shortened period of growth and more accurate prediction of the shipping day; and (2) reduction in production cost, streamlining of production operations, retrenchment of production equipment, more effective use of space for other products, and mitigation of industrial waste. Shortening the production period by improving sensitivity, S, leads to significant cost reduction together with high productivity.

Meanwhile, among the unexpected significant benefits brought about by short-term growth or early shipping is a reduction in industrial waste. In producing bean sprouts, although little waste is caused by the rot that occurs in normal production processes, we have had a considerable amount of waste due to excessive production caused by incorrect prediction of shipped volume. The cost required to discard this waste (which is not a loss due to discarding the products themselves but the expense of turning the waste over to industrial waste disposal
Table 2
Estimation of optimal configuration

Combination   SN Ratio (dB)   Sensitivity (dB)
1             4.02            −9.24              0.325    11.20
2             5.30            −11.19             0.276    6.89
3             5.26            −9.24              0.345    11.20
Table 3
Results of confirmation experiment for combination 3 (dB)

                     SN Ratio                      Sensitivity
Configuration   Estimation   Confirmation     Estimation   Confirmation
Optimal           5.26          5.72            −9.24         −8.93
Current                         3.52                          −11.49
Gain                            2.20                          2.56
Reference

Setsumi Yoshino, 1995. Optimization of bean sprouting conditions by parameter design. Quality Engineering, Vol. 3, No. 2, pp. 17–22.
CASE 2
where both the SN ratio and sensitivity S of the proliferation curve are large are regarded as optimal.
2. Experimental Procedure

The factors affecting the proliferation speed of small algae include the environment, the cultivation apparatus, the bacteria type, and the medium. To study medium conditions for obtaining maximum biomass, we would need to repeat experiments under the conditions of high-density cultivation. However, since transparency deteriorates and light energy acts as a constraining factor, we had difficulty performing an experiment on the influence of the medium's ingredients. Conversely, whereas it is easy to establish an optimal medium level in the case of low-density cultivation, it is difficult to apply those results to high-density cultivation. It was expected in this experiment, however, that the results of small-scale, low-density cultivation would be applicable to large-scale cultivation.

Since there is a correlation between the amount of small algae produced (biomass, amount of dried cells per liter of culture solution) and optical density (OD680), the latter was adopted as an alternative characteristic for biomass. There are other substitutive characteristics, such as PCV (volume of cells per liter of culture solution) or cell
Figure 1
Tube cultivation apparatus (nutrients, temperature sensor, pH sensor, air, light, stirring device)
3. SN Ratio

To analyze the measurements of optical density using a linear relationship passing through the origin (zero-point proportional equation), we use their logarithmized values, included in Table 1. Here x0 and x indicate optical density (OD680) and y represents the logarithmized value:

y = 1000[log(1000x) − log(1000x0)]   (1)

( f = 8)   (2)
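Equation (1) in code form; a small illustrative helper (the argument values below are hypothetical, not measurements from Table 1):

```python
import math

def log_measure(x, x0):
    """Logarithmized optical-density measure of equation (1)."""
    return 1000 * (math.log10(1000 * x) - math.log10(1000 * x0))

assert log_measure(0.05, 0.05) == 0.0                 # no growth -> 0
assert abs(log_measure(0.5, 0.05) - 1000.0) < 1e-9    # one decade -> 1000
```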
Table 1
Logarithmized measurement of optical density

       Day →
No.    0.0    298.2   577.4   833.7    1002.8   1127.6   1265.2   1303.8   1319.0
       0.0    306.6   604.6   900.3    1089.9   1198.1   1285.8   1235.7   668.6
17     0.0    443.9   781.9   1020.9   1213.5   1314.4   1419.1   1497.5   1518.2
18     0.0    443.9   800.1   1089.9   1259.6   1291.6   1301.9   1340.6   1115.7
Effective divider:

r = 1² + 2² + ⋯ + 9² = 249   (3)

Variation of proportional term:

Sβ = 8,046,441.5662   ( f = 1)   (4)

Error variation:

Se = 8,434,868.8523 − 8,046,441.5662 = 388,427.2861   ( f = 7)   (5)

Error variance:

Ve = 388,427.2861/7 = 55,489.6123   (6)

SN ratio:

η = 10 log {[(1/249)(8,046,441.5662 − 55,489.6123)]/55,489.6123} = −2.38 dB   (7)

Sensitivity:

S = 10 log (1/249)(8,046,441.5662 − 55,489.6123) = 45.06 dB   (8)

4. Results of Experiment
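The numbers in these equations can be reproduced directly; the variation values are taken as printed, including the effective divider r = 249:

```python
import math

r = 249
S_beta = 8_046_441.5662        # variation of proportional term (f = 1)
ST = 8_434_868.8523            # total variation (f = 8)
Se = ST - S_beta               # error variation (f = 7)
Ve = Se / 7                    # error variance
eta = 10 * math.log10(((S_beta - Ve) / r) / Ve)   # ~ -2.38 dB
S = 10 * math.log10((S_beta - Ve) / r)            # ~45.06 dB
```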
Table 2 shows the control factors. Adding small algae (Nannochloropsis) as inoculants to 18 culture

Table 2
Control factors and levels

Control Factor               Level 1   Level 2   Level 3
A: illuminance (lx)          2000      4000
B:                           30        70
C: nitrogen coefficient      0.385     0.585     0.785
D: phosphorus coefficient    0.001     0.002     0.002
E:                           0.010     0.016     0.022
F:                           0.014     0.022     0.030
G:                           1.90      2.04      2.18
Figure 3
Response graphs for proliferation
(9)

Sensitivity:

S = 46.49 + 46.00 + 45.88 + 46.03 + 45.86 + 45.81 + 46.20 − (6)(45.62) = 48.55   (10)

(11)

Sensitivity:

S = 46.49 + 46.00 + 45.57 + 45.64 + 45.63 + 45.56 + 45.31 − (6)(45.62) = 46.48   (12)
Table 3
Estimation and confirmation of process average (dB)

                        SN Ratio                                Sensitivity S
Configuration   Estimation   Confirmation   Confirmation   Estimation   Confirmation   Confirmation
Optimal           1.10          −2.81          2.72           48.55        43.85          47.37
Initial          −6.28          −3.08         −3.08           46.48        43.02          43.02
Gain              7.38           0.27          5.80            2.07         0.83           4.35
Although it is possible to maintain the conditions of the control factors using current technology, it is costly, and perfect control is difficult to expect. Therefore, in the future, by attempting to use the control system to retain the same conditions to some extent, we might treat the control factors as noise factors in an economical study of an optimal configuration for biomass.

Reference

Makoto Kubonaga and Eri Fujimoto, 1993. Optimization of small algae production process by parameter design. Quality Engineering, Vol. 1, No. 5.
CASE 3
Abstract: In this research we applied quality engineering principles to a synthetic resin reaction known as a polymerization reaction, regarded as one of the slowest of all polymerization reactions and one with a wide molecular mass distribution, to enhance productivity and obtain a polymerized substance with a narrow molecular mass distribution.
1. Introduction

In manufacturing processes for chemical products, quality and cost depend strongly on the synthetic reaction conditions. The reaction handled in this research, a synthetic reaction of resin called a polymerization reaction, is one of the slowest of all such reactions and has a wide molecular weight distribution. Although we had tried many times to improve productivity and the molecular weight distribution, no significant achievement was obtained. In general cases of polymerization reaction, for a complete product we have conventionally measured the molecular weight distribution and analyzed the calculated average molecular weight. The objective of this study was to accomplish this unachieved goal by applying quality engineering to the polymerization reaction. Regarding the application of quality engineering to a chemical reaction, there is one case of doubling productivity by using a dynamic operating window for an organic synthesis reaction. However, that case contains some problems in reproducibility, such as having both peaks and V-shapes in the factor-effect plot of the SN ratio.

To solve these problems, we have taken advantage of the ratio of the rate of the main reaction, β1, to that of the side reaction, β2: β1/β2. By doing this we obtained fairly good reproducibility. In our research, in reference to an organic synthesis reaction, we applied quality engineering to a polymerization reaction by improving the method of handling the polymerization-degree distribution and of calculating the reaction speed and the SN ratio.
2. Experimental Procedure

Objective
Since, in general, a polymer substance consists of molecules of various molecular weights, it has a particular molecular weight distribution. The polymer in this experiment is an intermediate material used as a raw material in the following process. In this case, its molecular weight should be neither too large nor too small. Therefore, using the quality engineering method, we attempted to enhance productivity and obtain a polymerized substance with a narrow molecular weight distribution. To enhance productivity, a high reaction speed and a stable reaction are both essential. To obtain a polymerized substance with a narrow molecular weight distribution, the constitution of the target molecular weight range should be high. Therefore, by classifying an ingredient not reaching a certain molecular weight Mn as an underreacted substance, one ranging from Mn to Mm as a target substance, and one exceeding Mm as an overreacted substance, we proceeded with our analysis.

Experimental Device
We heated material Y, solvent, and catalyst in a 1-L flask. After a certain time, we added material X to the flask continuously. Next came the reaction of X and Y. When the addition of X was complete, dehydration occurred and polymerization took place.
Table 1
Control factors and levels for the first L18 orthogonal array (%)

Control Factor             Level 1   Level 2   Level 3
A: amount of material X    96        100       106
B: amount of agent 1       33        100       300
C: amount of agent 2       85        100       115
D: reaction speed          90        100       110
E: maturation time         33        100
F:                         33        100
Table 2
Control factors and levels for the second L18 orthogonal array (%)

Control Factor             Level 1   Level 2   Level 3
C: amount of agent 2       100       115
A: amount of material X    92        96        100
B: amount of agent 1       100       300       500
H: agent type
I:
J:
F:                         33        100       150
K: material type of Y
Table 3
Change in constitution of yields at each reaction time (%)

                          Reaction Time
                          0.3
Underreacted substance    97.1    30.9    20.7    19.1    12.3
Target substance          2.9     62.0    72.2    63.6    61.1
Overreacted substance             7.1     7.1     17.3    26.7
Total                     100     100     100     100     100
The molecular mass and its distribution were computed as values converted relative to the standard polystyrene molecular mass.
Figure 1
Dynamic operating window method
Figure 2
Reaction regarded as ideal
Generic Function
Since a chemical reaction is an impact phenomenon between molecules, setting the constitution of underreacted substances in the yielded substance to p, we defined the following equation as a generic function, on the basis that reaction speed is proportional to the density of the material:

p = e^(−βT)

Taking the reciprocal of both sides and taking logarithms, we have

ln(1/p) = βT

Setting Y = ln(1/p), we obtain the following zero-point proportional equation:

Y = βT

Here β is a coefficient of the reaction speed.
Concept of Operating Window
If we categorize the yield as underreacted, target, and overreacted based on molecular mass, then as the chemical reaction progresses, the constitution of the target substance rises (Figure 1).

Setting the constitution of an underreacted substance at reaction time T to p and that of an overreacted substance to q, we can express that of the target substance as 1 − p − q. Since an ideal reaction is regarded as one with a faster reaction speed and a higher constitution of the target substance (Figure 2), we should lower the constitution of the underreacted substance immediately and keep that of the overreacted substance as low as possible.

Then, taking into account the following two reaction stages, we estimated a reaction speed for each stage.

R1:
polymer, that is, a dimer consisting of two combined molecules) convert into target substances (molecular mass Mn to Mm) and overreacted substances (molecular mass more than Mm):

ln(1/pi) = β1i Ti

ln[1/(pi + qi)] = β2i Ti

In this case, the reaction speed β1 to yield a target substance should be larger, and the reaction speed β2 to yield an overreacted substance should be smaller:

β1i = Y1i/Ti = (1/Ti) ln(1/pi)

β2i = Y2i/Ti = (1/Ti) ln[1/(pi + qi)]

With K observation points, the rates are averaged:

β1 = (1/K)(β11 + ⋯ + β1K)

β2 = (1/K)(β21 + ⋯ + β2K)

When the reaction speed is stable throughout the total reaction, we can analyze the reaction more efficiently with only one β at a single observation point. If we use a single observation point, the calculation is

η = 10 log (β1²/β2²) = 10 log β1² − 10 log β2²

β1 = (1/T1) ln(1/p1) = (1/1) ln(1/0.309) = 1.1757
Figure 3
Factor effect plot of the SN ratio in the first L18 experiment
β2 = (1/T2) ln[1/(p2 + q2)] = (1/4) ln[1/(0.123 + 0.611)] = 0.0776

SN ratio:

η = 10 log (β1²/β2²) = 10 log [(1.1757)²/(0.0776)²] = 23.6105 dB

Sensitivity:

S = 10 log β1² = 10 log (1.1757)² = 1.4059 dB
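The two-rate operating-window calculation can be scripted as a sketch; the p and q values are the printed constitutions (30.9% underreacted at T1 = 1; 12.3% underreacted and 61.1% target at T2 = 4):

```python
import math

def rate_main(T, p):
    """beta1: disappearance rate of the underreacted fraction p."""
    return math.log(1 / p) / T

def rate_side(T, p, q):
    """beta2: formation rate of the overreacted fraction 1 - p - q."""
    return math.log(1 / (p + q)) / T

b1 = rate_main(1, 0.309)
b2 = rate_side(4, 0.123, 0.611)
eta = 10 * math.log10(b1**2 / b2**2)   # ~23.6 dB
S = 10 * math.log10(b1**2)             # ~1.4 dB
```

Slight differences from the printed 23.6105 and 1.4059 dB reflect rounding of the constitutions in the text.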
Figure 4
Factor effect plot of sensitivity in the first L18 experiment
Table 4
Optimal configuration in the first L18 experiment
Factor
SN ratio
(3)
(2)
(1)
Sensitivity
(2)
(3)
(3)
The sensitivity gain corresponds to 10 log (β²1,optimal/β²1,current). Since the gain is 5.0 dB, the corresponding real (antilog) number is 3.1:

β²1,optimal/β²1,current = 3.1

β1,optimal/β1,current = 1.8

Figures 5 and 6 summarize the factor-effect plots of the SN ratio, η, and sensitivity, S, in the second L18 experiment. According to the results for the SN ratio and sensitivity, we determined the optimal levels shown in Table 5.
For the second L18 experiment, we conducted a confirmatory experiment under the optimal configuration determined by both the SN ratio and sensitivity. As a result, the gain in actual sensitivity increased to 9.2 dB from that of the first L18 experiment. This is equivalent to a 2.9-fold improvement in reaction speed (i.e., the reaction time is reduced by about two-thirds).

On the other hand, if we compare the two molecular mass distributions of polymerized substances under the current configuration (8720) and the optimal configuration (8840) by adjusting the average molecular mass to the target value, we obtain almost the same result. That is, we could not find a way to narrow the range of the molecular mass distribution in this research.
Figure 5
Factor effect plot of the SN ratio in the second L18 experiment

Figure 6
Factor effect plot of the sensitivity in the second L18 experiment
7. Conclusions

Experimental Achievements
Productivity improved dramatically because the reaction speed doubled under the optimal configuration compared with the current configuration. It was difficult to improve the molecular mass distribution; consequently, although the reaction speed was enhanced, the molecular mass distribution was not. However, improving productivity threefold while retaining the current level of quality (molecular mass distribution) was regarded as a major achievement.

Future Research
Since the reaction speed was increased despite little improvement in molecular mass distribution, our research can be regarded as a significant success from a company-wide standpoint. However, judging from the definition of the SN ratio used, we expected
Table 5
Optimal configuration in the second L18 experiment
Factor
SN ratio
(3)
2
Figure 7
Example of lapse of reaction time
Table 6
Results of the first L18 confirmatory experiment

                                                 SN Ratio                   Sensitivity
Configuration   Factors                    Estimation  Confirmation   Estimation  Confirmation
Optimal η       A3 B3 C3 D3 E1 F3 G1         38.7        34.8           10.8        9.9
Current         A2 B2 C2 D2 E3 F3 G1         30.6        28.1           3.0         7.2
Gain                                         8.1         6.7            7.5         2.7
Optimal S       A1 B3 C3 D2 E3 F3 G1         30.4        29.6           16.3        12.2
Current         A2 B2 C2 D2 E3 F3 G1         30.6        28.1           3.0         7.2
Gain                                         −0.2        1.5            13.3        5.0
Table 7
Results of the second L18 confirmatory experiment

                                                    SN Ratio                   Sensitivity
Configuration   Factors                       Estimation  Confirmation   Estimation  Confirmation
Optimal         C2 A1 B3 H3 I3 J1 F2 K2         33.1        35.5           19.2        16.4
Current         C1 A3 B1 H2 I1 J1 F2 K2         23.0        28.1           5.8         7.2
Gain                                            10.1        7.4            13.4        9.2
continue the polymerization reaction until the average molecular mass attains the target. Our confirmatory experiment demonstrates that after doing so, the molecular mass distribution is no longer improvable. Therefore, to improve the molecular mass distribution, research on the design of experiments and analytical methods, including consideration of a reverse reaction and of a termination reaction, should be tackled in the future.

Achievement through Use of Quality Engineering
Experiments in production technology beginning at the beaker level in a laboratory using conventional development processes require a considerable amount of time. By evaluating a polymerization reaction with an SN ratio reflecting a generic function of a chemical reaction, we improved the reproducibility of factor effects and enhanced productivity threefold, obtaining a significant achievement in a short period of time.
References
Yoshikazu Mori, Koji Kimura, Akimasa Kume, and Takeo Nakajima, 1995. Optimization of synthesis conditions in chemical reaction (SN ratio analysis
example no. 1). Quality Engineering, Vol. 3, No. 1.
Yoshikazu Mori, Koji Kimura, Akimasa Kume, and Takeo Nakajima, 1996. Optimization of synthesis conditions in chemical reaction (SN ratio analysis
example no. 2). Quality Engineering, Vol. 4, No. 4.
1998 Report for NEDO (New Energy and Industrial
Technology Development Organization), 1998.
Globally Standardized Research and Development: Study
of Optimization Design and Evaluation Technique by
Quality Engineering. Tokyo: Japanese Standards
Association.
CASE 4
1/T E
(1)
(2)
p
2 log E A
1p
(3)
Figure 1
Characteristic curve of color negative lm
p
log E B
1p
(4)
Figure 2
Layer structure of color negative film: protection layer, blue-sensitive layer, yellow filter layer, green-sensitive layer, intermediate layer, red-sensitive layer, intermediate layer, anti-halation layer, film base

Figure 3
Characteristic curve of subsystem (photographic density D, omega-transformed value)
Figure 4
Input / output in the image transfer method
Figure 5
Fluctuation of characteristic curve for each development time
Figure 6
Development time and conventional SN ratio
2. Generic Function

In evaluating a photographic system using a dynamic operating window, we regard the photographic function as the relationship between the development reaction speed and light. Development is a chemical reaction in which a microcrystal of silver halide is reduced to metallic silver by a reducer, the developing chemical. Even unexposed photosensitive material will be developed sooner or later if the developing time is extended long enough. On the other hand, a microcrystal of

Figure 7
Signal range for calculating conventional SN ratios
(5)
(7)
SM1 = L²1/r   (8)

SN ratio:

η = 10 log {(SM − 2Ve)/[3r(Ve + SM1)]}   (9)

Sensitivity:

S = 10 log [(SM − 2Ve)/(3r)]   (10)
3. Analysis Example

Defining exposures M1, M2, and M3 and developing times T1, T2, ..., T6 as signals, we calculated an SN ratio from Table 1.

(11)

Effective divider:

(12)

Figure 8
Input/output relationship for a dynamic operating window
Table 1
Data for dynamic operating window

                        Developing Time                      Linear
Exposure          T1    T2    T3    T4    T5    T6           Equation
M1 (β = 0: fog)   y11   y12   y13   y14   y15   y16          L1
M2                y21   y22   y23   y24   y25   y26          L2
M3                y31   y32   y33   y34   y35   y36          L3
Total variation:

ST = y²11 + y²12 + ⋯ + y²21 + y²22 + ⋯   ( f = 4)   (13)

Error variation:

Se = ST − (Sβ + SM)   (14)

SM = (L1 + L2)²/[(2)(42,786)]   ( f = 1)   (15)

SM1 = L²1/42,786   ( f = 1)   (16)

Sβ = (L²1 + L²2)/42,786   (17)

Error variance:

Ve = Se/2   ( f = 2)   (18)

VN = Ve + SM1   (19)
Table 2
Control factors and levels

Control Factor            Level 1     Level 2   Level 3
A: density of ammonia     1.2
B: pH
C: temperature (°C)       75          65        55
D:                        0.55        0.65      0.75
E:                        6.5         8.1       9.1
F:                        10
G:                        Oxidation   None      Reduction
Figure 9
Actual plot for color negative lm (dynamic operating window)
Figure 10
Response graphs of SN ratio by dynamic operating window
Figure 11
Response graphs of SN ratio by conventional procedure
SN ratio:

η = 10 log {[1/((2)(42,786))](SM − Ve)}/VN   (20)
Reference
Shoji Matsuzaka, Keisuke Tobita, Tomoyuki Nakayama,
Nobuo Kubo, and Hideaki Haraga, 1998. Functional evaluation of photographic systems by dynamic operating window. Quality Engineering, Vol. 7,
No. 5, pp. 31–40.
CASE 5
Abstract: Ultratrace analysis with physical and chemical methods is of considerable importance in various fields, including medicine, the food industry, solid-state physics, and electronics. For manufacturing electronic devices, ultrapure water has to be monitored for metallic contamination. Extreme demands exist related to the reliability and reproducibility of quantitative results at the lowest possible concentration level. With such data, interpretation of the linear response with the least variability and the highest sensitivity can be achieved. For proof, the lowest detectable concentration is calculated, and a robust detection limit below 1 ppt has been confirmed for some elements.
1. Introduction
The analytical tool considered here is an inductively coupled plasma mass spectrometer (ICP-MS), used widely for the determination of metal impurities in aqueous solutions. The output signal of the measurement system responds to the concentration level of the metallic elements. For higher metal concentrations, the error portion of the signal can essentially be neglected, whereas for lower concentrations signal deviations become dominant. To avoid unwanted noise, ultrapure water sample preparation was carried out under flow-box conditions (class 10). The response (intensity, counts per time) is investigated for a variety of measurement conditions, equipment components, and sample preparations (10 control parameters and two noise parameters) as a function of the signal factor. Signal factor levels are ultrapure water solutions with calibrated metal concentrations from 2.5 to 100 ppt (10⁻⁹ g/L) and additional water samples for reference (blanks). This type of system, with its steadily increasing relative noise toward lower signal values, requires appropriate dynamic SN ratio calculations. Analytical tools with more demanding requirements are used to elucidate process monitoring of metallic contaminations. Quantitative results must be reliable and reproducible from high metal concentrations down to the lowest possible limit of detection (LOD). In the mass production of silicon wafers in semiconductor manufacturing, most metallic surface contaminations have to be below 10¹⁰ atoms/cm² to avoid malfunction of the final device due to subsequent metal diffusion into the bulk.

In manufacturing, every process step is accompanied by or finishes with a water rinse. For this purpose, ultrapure water is prepared and used. Consequently, the final silicon wafer surface is only as clean (i.e., metal free) as the ultrapure water available. Therefore, water purity with respect to such common elements as Na, K, Cu, Fe, and others must be monitored routinely.

For the stringent purity demands of semiconductor processing (metal concentrations of 2 to 10 ppt), simple methods (e.g., resistivity measurements) are not at all sufficient; the ion product of water at pH 7 corresponds to a salt concentration of less than 10 ppm. Instead, Hewlett-Packard's ICP-MS 4500 analytical equipment is used, where the notation stands for the physical principle of an inductively coupled plasma mass spectrometer.
2. Equipment

The starting point for the raw material water is drinking quality. The following salt contents are present, with typical concentrations:

Salts > 10 ppm: Na⁺, K⁺, Ca²⁺, Mg²⁺
Salts < 10 ppm: Fe²⁺, Sr²⁺, Mn²⁺, NH₄⁺
Salts < 0.1 ppm: As³⁺, Al³⁺, Ba²⁺, Cu²⁺, Zn²⁺, Pb²⁺, Li⁺, Rb⁺

From this level, any metal contamination has to be removed down to ppt values for use in semiconductor cleaning. With numerous chemical and physical principles applied in multiple process steps (Figure 1), this can be achieved.

Figure 1
Ultrapure water supply system

The ultrapure water output quality must be monitored (i.e., controlled routinely) to maintain the required low contamination level. For detection, liquid samples are fed into the ICP-MS 4500. Figure 2 shows the components of the inductively coupled plasma mass spectrometer. After ionization, the ions are focused and separated according to their charge/mass ratio, and the intensity is measured in counts per time.

Various equipment components can be tested in combination with various measurement modes and methods of sample preparation. Altogether, 10 parameters have been defined, composed of
Figure 2
ICP-MS 4500 schematic
3. Concept of Experiments

Mass production of semiconductor wafers requires a constant low-level ultrapure water supply. In principle, it is sufficient to prove that the contamination level stays below an upper limit for the elements monitored. In common analytical practice, the measured sample is compared with a blank (a sample without intentional content of the elements to be detected). The count rate of the blank alone does not allow one to calculate the lowest possible detection limit. Therefore, sample series with a calibrated content of elements are prepared and measured. Thus, a concentration range is covered and the deviation of the signal output becomes visible. For physical reasons, the system is expected to behave linearly (plasma equilibrium), which means that the count rate (output) is proportional to the concentration of the elements. In designed experiments with 10 parameters, dynamic optimization enables one to reduce variation at the low end of the concentration range.
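The linear behavior just described can be exercised numerically. The following minimal sketch, with invented count rates for a 2.5–100 ppt calibration series (the counts are illustrative, not data from this study), fits the slope beta through the origin and forms the dynamic SN ratio 10 log(beta squared over error variance):

```python
import math

# Hypothetical ICP-MS calibration series: concentrations (ppt) and count
# rates for two runs. The counts are invented for illustration only.
M = [2.5, 10, 25, 50, 100]
counts = [
    [31, 118, 302, 598, 1204],   # run 1
    [28, 124, 295, 610, 1189],   # run 2
]

r = sum(m * m for m in M)                                   # effective divider
L = [sum(m * y for m, y in zip(M, run)) for run in counts]  # linear equations

ST = sum(y * y for run in counts for y in run)              # total variation
Sb = sum(L) ** 2 / (2 * r)                                  # proportional-term variation
Ve = (ST - Sb) / (len(M) * 2 - 1)                           # pooled error variance

beta = sum(L) / (2 * r)                                     # slope: counts per ppt
eta = 10 * math.log10(((Sb - Ve) / (2 * r)) / Ve)           # dynamic SN ratio (dB)
```

A higher eta means the count rate tracks concentration more tightly, which is exactly what the parameter study below optimizes.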
4. Experimental Design
Equipment parameters (hardware), measurement parameters (count sampling), and preparation steps total 10 parameters, each on two levels:
Figure 3
Element-specific count-rate deviations
A to D: equipment components (D: component 4, the nebulizer)
E: cleaning time
F: stabilization time
G: integration time
H to K: measurement mode and sample preparation (including probe cleaning and spectrum scan)
L: error
The L12 layout has enough space for the parameters and additional error calculations. The calibration series are prepared with concentrations of 2.5, 10, 25, 50, and 100 ppt, plus two blanks (blank 1 and blank 2), for two noise conditions (four combinations): long-term stability and dilution steps. In total, 12 experimental configurations (L12) are carried out for eight samples with four replications (two noise factors) and one repetition; in each case, 21 elements (under cold plasma conditions) are characterized, resulting in 16,128 calculations.

Figure 4
Results for aluminum and potassium
Count-rate deviations were element-specific. Results are presented for selected elements, classified by environmental occurrence: for example, Al and K compared with Li and Co (Figures 4 and 5).
SN Linear Function

In a coarse estimation, hardware parameters A to D (i.e., equipment components) have a greater influence than parameters E to K, which define measurement mode and preparation techniques. In addition, parameter settings for E to K have to be at the low level, which means short and quick measurements. For example, extended integration time does not improve the SN ratio of the output signal, because with increasing time and repeated cleaning procedures, the probability of contamination from the environment increases as well.
SN Zero-Point Proportional

Compared to the previous situation, the stronger effects now come from the software parameters, E to K.

Figure 5
Results for lithium and cobalt
7. Conclusions

With appropriately tuned analytical equipment, detection limits below 1 ppt can be achieved for relevant metallic contaminations in ultrapure water:
Table 1
Predicted and confirmed detection limits (3σ) (ppt)

Table 1 lists predictions and confirmations for Ag, Al, Ca, Co, Cr, Cs, Cu, Fe, Ga, In, K, Li, Mg, Mn, Na, Ni, Pb, Rb, Sr, Ti, and Zn. The confirmed detection limits group as follows:

0.1–0.2 ppt: Co
0.3–0.4 ppt: Al, Mg, Li, Cr, Mn, Rb, Sr, In, Cs
0.5–0.6 ppt: Fe, Ni, Ga, Ti, Pb
0.7–1.0 ppt: K, Zn, Ag, Cu, Na

Requirements in the semiconductor industry are thus fulfilled and can be guaranteed. Further improvements concern sampling techniques to benefit from this ultra-trace analytical capability. Future steps will lead toward new industrial cleaning procedures below the ppt level for the most important raw material: water.
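A 3σ detection limit like those in Table 1 is conventionally obtained from the scatter of repeated blank measurements and the calibration slope, DL = 3 s_blank / beta. The numbers below are invented for illustration; they are not data from this study:

```python
import math

# Invented blank measurements (count rate) and calibration slope; these are
# illustrative values, not data from the study.
blank = [4.1, 5.0, 3.8, 4.6, 4.4, 4.9, 4.2, 4.6]   # counts/s for repeated blanks
beta = 12.0                                         # counts/s per ppt (slope)

n = len(blank)
mean = sum(blank) / n
s_blank = math.sqrt(sum((x - mean) ** 2 for x in blank) / (n - 1))

detection_limit = 3 * s_blank / beta   # 3-sigma detection limit in ppt
```

With these invented numbers the limit comes out near 0.1 ppt, the same order as the best entries in Table 1; the dynamic-SN optimization in this case is what pushes the real equipment into that range.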
References

Balazs, 1993. Balazs Analytical Laboratory's Pure Water Specifications and Guidelines. Austin, TX: Balazs Analytical Laboratory.

Hewlett-Packard, 1994. HP 4500 ChemStation and Application Handbook. Palo Alto, CA: Hewlett-Packard.

K. E. Jarvis, A. L. Gray, and R. S. Houk, 1992. Handbook of Inductively Coupled Plasma Mass Spectrometry. Glasgow: Blackie.
CASE 6
1. Introduction
In a high-performance liquid chromatograph (HPLC), components carried in a liquid called the mobile phase are separated by taking advantage of their differing degrees of adsorption onto a solid phase, called the carrier, packed in the column. A flow rate characterizes the mobile phase. The result of separation is recorded as peaks by a detector sensing an optical deflection or a difference in the wavelength of a desorbed component in the liquid. Although components are separated according to their degree of adsorption, in actuality the separation is determined by the type and composition of the mobile phase (primarily, organic solvent), pressure, temperature, and type of column (i.e., the solid phase's capability of adsorption and separation inside the column). Therefore, using parameter design, we selected from various variables the levels and types of parameters leading to an appropriate degree of separation.
As shown in Figure 1, in separating two peaks we used the separability, Rs, as the degree of separation of two components, which is defined as

Rs = 2(tR2 − tR1)/(W1 + W2)   (1)

where tR1 and tR2 are the retention times of the two peaks and W1 and W2 are their base widths.
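Equation (1) is easy to exercise directly; the peak values below are hypothetical, chosen only to show the calculation:

```python
def separability(t_r1: float, t_r2: float, w1: float, w2: float) -> float:
    """Resolution of two peaks per equation (1): Rs = 2(tR2 - tR1)/(W1 + W2)."""
    return 2.0 * (t_r2 - t_r1) / (w1 + w2)

# Hypothetical peaks: retention times 4.0 and 5.2 min, base widths 0.5 and 0.7 min
rs = separability(4.0, 5.2, 0.5, 0.7)
```

Values of Rs around 1.5 or more are generally taken to mean baseline separation, which is why larger Rs is better here.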
The larger Rs becomes, the better the degree of separation of the two components. However, since there are numerous factors related to the separation of components' peaks, it is difficult to conduct a simultaneous multiple-factor evaluation of the relationship between the separability and the influential factors. In our study we attempted to evaluate the relationship quantitatively using the SN ratio in parameter design for the separation of components.

The HPLC method detects the peak value of a component. In this case, since the desorbing time of a component was unknown, we determined that the reciprocal of a flow rate M in separation, 1/M, was proportional to the desorbing time Y (the time when a peak shows up), and clarified that the dynamic operating window method for components M1*, M2*, and M3* can be utilized with the SN ratio based on desorbing time Y as the output. Yet we have come to the conclusion that it is more natural to choose the flow rate M per se, in response to the function of the HPLC, instead of selecting the reciprocal 1/M as the signal.
Figure 1
Correspondence of evaluation scale and separability
Figure 2
Chromatography and conceptual diagram of separation
y = βM   (2)

Based on this equation, our objective was to compute the variation of the differences in proportional terms between components' M* values, denoted by SM*. As the signal component in the dynamic operating window method, we studied the following SN ratio η*:

η* = β²ΔM*/σN²   (3)
Signal (flow rate): M1 = 5.0, M2 = 4.2, M3 = 3.3

Table 1
Results of experiment 2 (mm)

Component         M1 (5.0)   M2 (4.2)   M3 (3.3)   Linear Equation
M1*   N1          64         76         96         L1
      N2          73         85         107        L2
M2*   N1          84         98         124        L3
      N2          96         110        138        L4
M3*   N1          89         106        132        L5
      N2          93         116        145        L6

From these data:

ST = 13.4211   (6)
Sβ = (L1 + L2 + ⋯ + L6)²/[(3)(2r)] = 12.5375   (7)
SN×β = 0.2462   (8)
Se = ST − Sβ − SM* − SN×β = 0.0056   (9)
Ve = Se/14 = 0.0004   (10)
VN = 0.0350   (11)

SN ratio:
η* = 10 log [(1/2r)(SM* − 2Ve)]/VN = −0.71 dB   (13)

Sensitivities:
S* = 10 log [1/((3)(2r))](Sβ − Ve) = 14.09 dB   (14)
S = 10 log [(1/2r)](SM* − 2Ve) = 26.40 dB   (15)

Table 2
Control factors and levels

Control Factor    Levels
A:                error
B:                low, mid, high
C:                low, mid, high
D:                low, high
E:                low, high
F:                low, mid, high
G:                error
H:                error
Figure 3
Factor effect plot of SN ratio *
Figure 4
Factor effect plot of sensitivity S
Figure 5
Factor effect plot of sensitivity S*
Table 3
Results of confirmatory experiment (dB)

Characteristic    Condition       Optimal   Current   Gain
SN ratio η*       Estimation      23.98     1.23      22.74
                  Confirmation    28.96     −11.99    40.95
Sensitivity S     Estimation      0.43      −9.86     10.29
                  Confirmation    2.78      8.20      −5.42
Sensitivity S*    Estimation      11.82     −27.35    39.17
                  Confirmation    7.60      −42.59    50.19
Reference

Kouya Yano, Hironori Kitazaki, and Yoshinobu Nakai, 1995. Evaluation of liquid chromatography analysis using quality engineering. Quality Engineering, Vol. 3, No. 4, pp. 41–48.
CASE 7
Abstract: Our company produces granule products from herbal extracts. The granules are produced by a dry granulating process, including press molding, crushing, and classifying. The granules require a certain amount of hardness and fragility to meet the specifications and to make pills easier to swallow. Granules are difficult to dissolve if too hard and become powdery if too soft. In the current measuring method, granules are vibrated; then the powder is separated by screening. The amount of powder remaining on the screen is measured to indicate the hardness. This method is time consuming and the measurement error is high. This case reports the development of a new method with a short measurement time and high precision.
1. Introduction

Our research dealt with granules manufactured from solidified essence extracted from herbal medicines. Measurement of the granular strength was performed as follows. After sieving granules and removing fine powders, we forcibly abraded them with a shaker and then removed the abraded powder by sieving and suction. Measuring the remainder after suction, we defined the granular strength as the fraction of the sample weight remaining after abrasion.
2. Abrasion and classification. By sucking the inside air from an exhaust outlet using a vacuum cleaner, we made air flow out of a rotating slit nozzle. Driven by this outflowing air, the test sample hits the sieve frame and lid and is thus abraded. The abraded powders were removed from the exhaust outlet through suction.

3. Measurement of the remainder. After a certain period, we measured the weight of the sample remaining on the sieve.

This measurement instrument was considered to have two types of functions. The first function was the ratio of the amount of abrasion to the operation time: The shorter the operation time, the less the abrasion; the longer the time, the greater the abrasion. In terms of this function, we evaluated not hardness as a quality characteristic but pulverization of granules as the measurement instrument's function. Considering the time-based change in the number of granules or their shapes, we supposed that, like a chemical reaction, the amount of abrasion changes exponentially with operation time.

Figure 1
Schematic of new measurement instrument for granular strength
2. SN Ratio

Considering the above as functions of a new measuring instrument, we developed a zero-point proportional equation based on two types of signal factors: operation time and abrasive resistance (hardness) of the test sample. Since it was difficult to collect and measure the abraded powders, we measured primarily the weight of the test sample (W1) and, secondarily, the weight of the remainder left on the sieve net (W2). The remaining rate was defined as
Table 1
Remaining fractions for experiment 1

Abrasive Resistance    T1 (15 s)   T2 (30 s)   T3 (60 s)
M1 (hard)              0.992       0.991       0.989
M2 (medium)            0.990       0.988       0.986
M3 (soft)              0.988       0.986       0.980

Table 2
Logarithmized values for experiment 1

Abrasive Resistance    T1 (15 s)   T2 (30 s)   T3 (60 s)   Linear Equation
M1 (hard)              0.00783     0.00944     0.01065     1.03961
M2 (medium)            0.00984     0.01186     0.01389     1.33681
M3 (soft)              0.01207     0.01388     0.02039     1.82062
fraction remaining = W2/W1   (2)

W2/W1 = e^(−βT)   (3)

ln(W2/W1) = −βT   (4)

y = −ln(W2/W1)   (5)

y = βT   (6)
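The chain from Table 1 through equations (2) to (6) and on to the SN ratio can be replayed in a few lines. The sketch below recomputes y = −ln(W2/W1) from the remaining fractions in Table 1 and carries out the zero-point proportional analysis; because the table values are rounded, the results land close to, but not exactly on, the −24.32 dB and −70.65 dB reported in the text:

```python
import math

# Operation times (s) and remaining fractions W2/W1 from Table 1
T = [15, 30, 60]
fractions = {
    "M1 (hard)":   [0.992, 0.991, 0.989],
    "M2 (medium)": [0.990, 0.988, 0.986],
    "M3 (soft)":   [0.988, 0.986, 0.980],
}

# y = -ln(W2/W1) linearizes the assumed exponential abrasion law, eqs. (3)-(6)
y = {k: [-math.log(f) for f in fs] for k, fs in fractions.items()}

r = sum(t * t for t in T)                                            # eq. (8): 4725
L = {k: sum(t * yi for t, yi in zip(T, ys)) for k, ys in y.items()}  # eq. (9)

ST = sum(yi * yi for ys in y.values() for yi in ys)   # eq. (7)
Sb = sum(L.values()) ** 2 / (3 * r)                   # eq. (10)
SMb = sum(Li ** 2 for Li in L.values()) / r - Sb      # eq. (11)
Se = ST - Sb - SMb                                    # eq. (12)
Ve = Se / 6                                           # eq. (13)

eta = 10 * math.log10(((Sb - Ve) / (3 * r)) / Ve)     # eq. (14): about -24.3 dB
S_dB = 10 * math.log10((Sb - Ve) / (3 * r))           # eq. (15): about -70.6 dB
```

The three linear equations come out near the 1.03961, 1.33681, and 1.82062 of Table 2, confirming the reconstruction.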
Table 3
Control factors and levels

Control Factor    Level 1                    Level 2              Level 3
A:                low                        mid                  high
B:                L1 − 10, M1 − 10, H1 − 10  L1, M1, H1           L1 + 10, M1 + 10, H1 + 10
C:                0.297                      0.355                0.500
D:                L2, M2, H2                 1.5L2, 1.5M2, 1.5H2  2L2, 2M2, 2H2
E:                57                         42                   27
F:                low                        mid                  high
Total variation:
ST = 0.00783² + 0.00944² + ⋯ + 0.02039² = 0.001448   (f = 9)   (7)

Effective divider:
r = 15² + 30² + 60² = 4725   (8)

Linear equations:
L1 = (0.00783)(15) + (0.00944)(30) + (0.01065)(60) = 1.03961
L2 = 1.33681,  L3 = 1.82062   (9)

Variation of proportional term:
Sβ = (L1 + L2 + L3)²/3r = 0.001243   (f = 1)   (10)

Variation of differences between proportional terms:
SM×β = (L1² + L2² + L3²)/r − Sβ = 0.000066   (f = 2)   (11)

Error variation:
Se = 0.001448 − 0.001243 − 0.000066 = 0.00014   (f = 6)   (12)

Error variance:
Ve = 0.00014/6 = 0.000023   (13)

SN ratio:
η = 10 log [(1/3r)(Sβ − Ve)/Ve]
  = 10 log {[1/((3)(4725))](0.001243 − 0.000023)/0.000023} = −24.32 dB   (14)

Sensitivity:
S = 10 log (1/3r)(Sβ − Ve)
  = 10 log [1/((3)(4725))](0.001243 − 0.000023) = −70.65 dB   (15)

For detecting differences between materials, the same data are analyzed with SM×β as the signal component:

η′ = 10 log [(1/3r)(SM×β − Ve)/Ve] = −38.90 dB   (16)
S′ = 10 log (1/3r)(SM×β − Ve) = −85.23 dB   (17)

Figure 2
Response graphs for determining good pulverization conditions

Figure 3
Response graphs for increasing detectability of differences between materials

The configurations selected from the two response-graph analyses were A3B1C1D1E3F1G3 and A2B1C1D1E2F1G3.
Table 4
Confirmation of SN ratio and sensitivity (dB)

Configuration                 SN Ratio (Est / Conf)   Sensitivity (Est / Conf)
Optimal                       −18.38 / −13.13         −63.54 / −56.23
Worst                         −26.79 / −21.54         −73.61 / −75.63
Comparative                   −22.09 / −16.71         −67.05 / −68.38
Gain (optimal − worst)        8.41 / 8.41             10.07 / 19.40
Gain (optimal − comparative)  3.71 / 3.58             3.51 / 12.15
Confirmatory Experiment

To conduct a confirmatory experiment, comprehensively judging all configurations obtained from the two types of analyses, we selected the following optimal, worst, and comparative configurations:

Optimal configuration: A1B3C2D3E3F3G1
Worst configuration: A3B1C1D1E1F1G3
Reference

Yoshiaki Ohishi, Toshihiro Ichida, and Shigetoshi Mochizuki, 1996. Optimization of the measuring method for granule fragility. Quality Engineering, Vol. 4, No. 5, pp. 39–46.
CASE 8
1. Introduction
Thermoresistant bacteria form heat-resistant spores that survive in food after cooking and spoil heat-treated food. To detect a specific bacterium causing food poisoning, methods have been developed that detect the toxin produced by a bacterium or that use an antigen–antibody reaction. However, since heat-resistant bacteria are common decomposing bacteria, a method of detecting cultivated bacteria is widely used. In this case, the key issues were to grow the bacteria as quickly as possible and to detect them accurately. If the growth of the bacteria is slow, accuracy in detection is reduced; in turn, poor detection accuracy leads to a larger error in confirming growth. This relationship between a measuring method and a measured result cannot be separated in technological development, and each should be analyzed individually. Whereas cultivation of bacteria is a growth-related phenomenon, detection is a measurement- or SN-ratio-related issue. In our study we dealt with both in a single experiment.

In cultivating bacteria, time is regarded as the signal factor (Figure 1). Since a certain number of bacteria exist before starting, we used a reference-point proportional equation with the ninth hour from the start as the reference time. On the other hand, because the real number of bacteria was unknown, the dilution of the bacteria solution was used as the signal for measurement (Figure 2), with noise factor levels

N1: diluted solution, pH 7
N2: diluted solution, pH 3
Table 1
Measured data of number of bacteria in experiment 1

        M1 (N1/N2)   M2 (N1/N2)   M3 (N1/N2)
T1      496/337      119/35       11/8
T2      564/425      137/50       23/12
T3      612/480      153/63       27/13
T4      663/525      164/76       34/14
T5      710/558      171/86       40/15
T6      746/578      180/92       41/17
T7      763/605      186/96       41/18

Figure 1
Cultivation of bacteria using time as the signal

To compute the SN ratio using a reference-point proportional equation whose reference point is set to T1, we created an auxiliary table (Table 2).
Figure 2
Measurement of bacteria using dilution of bacteria as the signal

Table 2
Calibrated data for the reference-point proportional equation

        M1 (N1/N2)      M2 (N1/N2)     M3 (N1/N2)
T1      79.5/−79.5      42.0/−42.0     1.5/−1.5
T2      147.5/8.5       60.0/−27.0     13.5/2.5
T3      195.5/63.5      76.0/−14.0     17.5/3.5
T4      246.5/108.5     87.0/−1.0      24.5/4.5
T5      293.5/141.5     94.0/9.0       30.5/5.5
T6      329.5/161.5     103.0/15.0     31.5/7.5
T7      346.5/188.5     109.0/19.0     31.5/8.5

The linear equations L1 to L6 correspond to (M1, N1), (M1, N2), (M2, N1), (M2, N2), (M3, N1), and (M3, N2), with the elapsed time from the reference point (0, 1, ..., 6 hours) as the signal.

Total variation:
ST = 79.5² + 147.5² + ⋯ + 8.5² = 603,265   (1)

Effective divider:
r = 0² + 1² + ⋯ + 6² = 91   (2)

Linear equations:
L1 = (0)(79.5) + (1)(147.5) + ⋯ + (6)(346.5) = 6178.5
L2 = 2965.5,  L3 = 2018.0,  L4 = 167.0,  L5 = 590.5,  L6 = 133.5   (3)

Variation of proportional term:
Sβ = (L1 + L2 + L3 + L4 + L5 + L6)²/6r = 266,071.08   (4)

SM×β = [(L1 + L2)² + (L3 + L4)² + (L5 + L6)²]/2r − Sβ = 222,451.65   (5)

SN×β = [(L1 + L3 + L5)² + (L2 + L4 + L6)²]/3r − Sβ = 55,826.82   (6)

Error variation:
Se = ST − Sβ − SM×β − SN×β = 58,915.45   (7)

Error variance:
Ve = Se/38 = 1550.41   (8)

Total variance:
VN = (SN×β + Se)/39 = 2942.11   (9)

SN ratio:
η = 10 log [(Sβ − Ve)/2r]/VN = −3.06 dB   (10)

Sensitivity:
S = 10 log (Sβ − Ve)/2r = 31.62 dB   (11)

For the measurement analysis, the dilution of the bacteria is the signal (Figure 2), with levels M1 = 1, M2 = 0.2, and M3 = 0.04. Table 3 shows the calibrated data.

Table 3
Calibrated data

      M1 (1)   M2 (0.2)   M3 (0.04)
N1    1.191    0.286      0.026
N2    0.809    0.084      0.019

Total variation:
ST = 1.191² + 0.286² + ⋯ + 0.019² = 2.163   (12)

Effective divider:
r = 1² + 0.2² + 0.04² = 1.042   (13)

Linear equations:
L1 = (1)(1.191) + (0.2)(0.286) + (0.04)(0.026) = 1.249
L2 = 0.827   (14)

Variation of proportional term:
Sβ = (L1 + L2)²/2r = 2.068   (15)

SN×β = (L1² + L2²)/r − Sβ = 0.086   (16)

Error variation:
Se = ST − Sβ − SN×β = 0.009   (17)

Error variance:
Ve = Se/4 = 0.002   (18)

Total variance:
VN = (SN×β + Se)/5 = 0.019   (19)

SN ratio:
η = 10 log [(Sβ − Ve)/2r]/VN = 17.216 dB   (20)

Sensitivity:
S = 10 log (Sβ − Ve)/2r = −0.04 dB   (21)
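The reference-point proportional analysis can be reproduced directly from the raw counts of Table 1; the sketch below recomputes the calibration, the linear equations, and the SN ratio and sensitivity of equations (10) and (11):

```python
import math

# Bacteria counts from Table 1; columns are (M1,N1),(M1,N2),(M2,N1),(M2,N2),(M3,N1),(M3,N2)
data = [
    [496, 337, 119, 35, 11,  8],   # T1 (reference, 9th hour)
    [564, 425, 137, 50, 23, 12],   # T2
    [612, 480, 153, 63, 27, 13],   # T3
    [663, 525, 164, 76, 34, 14],   # T4
    [710, 558, 171, 86, 40, 15],   # T5
    [746, 578, 180, 92, 41, 17],   # T6
    [763, 605, 186, 96, 41, 18],   # T7
]

# Reference-point calibration: subtract the T1 average of each M pair (Table 2)
refs = [(data[0][2 * m] + data[0][2 * m + 1]) / 2 for m in range(3)]
cal = [[row[c] - refs[c // 2] for c in range(6)] for row in data]

times = list(range(7))                 # hours elapsed from the reference point
r = sum(t * t for t in times)          # effective divider, eq. (2): 91
L = [sum(times[i] * cal[i][c] for i in range(7)) for c in range(6)]

ST = sum(v * v for row in cal for v in row)                               # eq. (1)
Sb = sum(L) ** 2 / (6 * r)                                                # eq. (4)
SMb = ((L[0]+L[1])**2 + (L[2]+L[3])**2 + (L[4]+L[5])**2) / (2*r) - Sb     # eq. (5)
SNb = ((L[0]+L[2]+L[4])**2 + (L[1]+L[3]+L[5])**2) / (3*r) - Sb            # eq. (6)
Se = ST - Sb - SMb - SNb                                                  # eq. (7)
Ve = Se / 38                                                              # eq. (8)
VN = (SNb + Se) / 39                                                      # eq. (9)

eta = 10 * math.log10(((Sb - Ve) / (2 * r)) / VN)   # eq. (10): about -3.06 dB
S_dB = 10 * math.log10((Sb - Ve) / (2 * r))         # eq. (11): about 31.62 dB
```

Every intermediate value (L1 = 6178.5, Sβ = 266,071, and so on) matches the figures in the text, which confirms the table reconstruction above.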
The optimal configurations obtained were:

Analysis 1: A1B3C1D3E2F1H2
Analysis 2: A1B3C1D3E2F1H2

Under both optimal and current configurations, we performed a confirmatory experiment. The results are shown in Table 5.
5. Results of Experiment

According to the results of the confirmatory experiment shown in Table 5, we discovered that satisfactory reproducibility in the gain of the SN ratio could not be obtained. However, since the trend of increasing SN ratio with time in the results of analysis 2 was similar to that of the estimation, we concluded that our experimental results were fairly reliable. On the other hand, a number of peaks and V-shapes in the response graphs for the control factors were regarded as problems to be solved in a future study.

Despite several remaining problems, the following improvements were achieved:

1. We reduced the experimentation time by two days because the bench time allocated as one of the control factors was not necessary.
Table 4
Control and indicative factors and levels

A: type of bacterium (B. subtilis)
B: dilution solution: sterile water; phosphate buffer solution; peptone phosphate buffer solution
C: ?
D: type of medium: standard; glucose tryptone; trypticase soy
E: 50, 100
F: 0.01, 0.1
G: 0.00, 0.10, 0.20
H: 20, 40
Figure 3
Analysis 2: Response graphs for factor A
Moreover, whereas five days are used as the cultivation period in our current process, we found such long cultivation not meaningful for experimentation. Therefore, we finished an inspection that normally takes one week in only two days.
2. The improvement above contributed not only to reducing the cost but also to taking countermeasures more swiftly. If we can detect the occurrence of bacteria sooner, we can retrieve defective products immediately.
Table 5
Confirmatory experiment (dB)

Analysis 1

               Optimal (Est/Conf)   Current (Est/Conf)   Gain (Est/Conf)
SN ratio       12.85 / 2.53         14.01 / 17.58        26.87 / 15.05
Sensitivity    31.02 / 30.84        28.62 / 23.30        2.40 / 7.54

Analysis 2: SN ratio

      Optimal (Est/Conf)   Current (Est/Conf)   Gain (Est/Conf)
T1    29.67 / 20.01        5.33 / 7.61          24.34 / 12.40
T2    37.77 / 22.20        5.08 / 7.88          32.69 / 14.32
T3    35.47 / 23.84        6.54 / 7.87          29.93 / 16.44
T5    40.66 / 25.65        6.33 / 7.68          34.33 / 17.97
T6    43.20 / 26.75        6.32 / 7.69          36.88 / 19.05
T7    42.77 / 28.55        6.40 / 7.70          36.37 / 20.85

Analysis 2: Sensitivity

      Optimal (Est/Conf)   Current (Est/Conf)   Gain (Est/Conf)
T1    0.19 / 0.07          0.14 / 0.10          0.05 / 0.02
T2    0.13 / 0.04          0.10 / 0.08          0.03 / 0.04
T3    0.12 / 0.03          0.08 / 0.06          0.04 / 0.03
T4    0.10 / 0.03          0.04 / 0.05          0.05 / 0.02
T5    0.08 / 0.02          0.02 / 0.04          0.06 / 0.02
T6    0.07 / 0.02          0.0 / 0.02           0.07 / 0.01
T7    0.06 / 0.01          0.01 / 0.02          0.05 / 0.01
and the resulting loss totals 150 million yen (for loss of life). On the other hand, the loss due to discarding a product is 300 yen:

A0 = 150 million yen
A = 300 yen
Δ0 = 100,000 products

The tolerance is

Δ = √(A/A0) Δ0 = √(300/150,000,000) (100,000) = 141.42

Where the daily labor cost is 10,000 yen and the machine running cost is 155 yen, we obtain the following costs:

Current configuration: 235 + (10,000)(5) + (155)(5) = 51,010 yen
Optimal configuration: 221 + (10,000)(2) + (155)(2) = 20,531 yen

The cost benefit per inspection is 30,479 yen.

Risk Aversion due to Time Reduction

Now suppose that it takes two days from the point when the production of a product is completed to the point when it starts to be sold at a shop. If the optimally produced product contains a harmful bacterium, despite being in the process of shipment, we can withdraw it shortly after checking the result of the inspection. We assume that the loss from such a withdrawal is 1 million yen. Under the current configuration, on the other hand, the product reaches the consumer. In this case, we suppose that the loss to a consumer amounts to 150 million yen per person for a death, or 100,000 yen for hospitalization (the number of victims being derived from a certain past record):

Current:
[(180 persons × 100,000 yen) + (3 persons × 150 million yen)]/183 = 2,557,377 yen/product

Optimal:
1 million yen/183 = 5,264 yen/product

Supposing that the current average number of bacteria is 100, we have the following loss function:

L = (A/Δ²)(y²) = [300/(141.42)²](100²) = 150 yen
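The tolerance and loss-function arithmetic above can be checked in a few lines (values taken from the text):

```python
import math

# Taguchi loss-function figures from the text (yen)
A0 = 150_000_000      # societal loss at the functional limit (loss of life)
A = 300               # loss of discarding one product at the factory
Delta0 = 100_000      # functional limit (bacteria count)

# Tolerance (shipping limit): Delta = sqrt(A/A0) * Delta0
Delta = math.sqrt(A / A0) * Delta0      # about 141.42

# Quality loss at the current average bacteria count y = 100
y = 100
loss = A / Delta ** 2 * y ** 2          # about 150 yen per product
```

The quadratic form means the loss falls with the square of the average count, so halving the bacteria count would cut the 150-yen loss to about 37.5 yen.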
Reference

Eiko Mikami and Hiroshi Yano, 2001. Studies on the method for detection of thermoresistant bacteria. Proceedings of the 9th Quality Engineering Symposium, pp. 126–129.
CASE 9
Abstract: Evaluation of percutaneous permeation of a transdermal drug delivery system usually requires expensive and repeated animal experimentation, which also requires confirmation of species differences between animals and human beings. In this study, considering these issues, we applied the quality engineering method to technological development in prescription optimization of an effective transdermal drug delivery system.
1. Introduction

In recent years, transdermal drug delivery, which supplies medicine through the skin, has attracted a great deal of attention because it causes few side effects, such as gastrointestinal disorders, compared with a conventional drug delivery system via the digestive tract. The new method can be used with patients or aged people who have a feeble swallowing movement (movement of the muscles of the throat while swallowing food).

However, in general it is not easy for medicine to permeate the skin. Since normal skin is equipped with a barrier that blocks alien substances from the outside, penetration by medicine is also hampered. Therefore, to obtain sufficient permeation for medical treatment, several additives, including a drug absorption enhancer, need to be compounded. As a result, the prescription becomes more complex and a longer period of research on prescription optimization is required. Moreover, for the evaluation of percutaneous permeation of a transdermal drug delivery system (e.g., an ointment), animal experimentation is needed. This animal experimentation not only is economically unfavorable, owing to the recommendation that four or more repeated experiments be performed to eliminate individual differences, but also requires confirmation of species differences between animals and human beings. In this study, considering these issues, we used the quality engineering method for the technological development of an effective transdermal drug delivery system.
2. Experimental Procedure

Among the several existing methods for evaluating percutaneous permeation of a transdermal drug delivery system, we selected an in vitro percutaneous permeation experiment, the most inexpensive and handiest. Figure 1 shows the structure of the diffusion cell used in the in vitro percutaneous permeation experiment.

Between a donor cell (supplying medicine) and a receptor cell (receiving medicine), we place a horny layer of skin extracted from an animal, on the side of the donor cell. Skin extracted from various types of animals was used for the horny layer. We started with an application of ointment on the extracted skin, on the side of the donor cell. Since the medical essence (the principal agent in the medicine) that penetrates the skin dissolves in the receptor solution, by measuring the amount of the principal agent in the solution at each point in time, we calculated the cumulative amount of skin permeation per unit area.
Figure 1
In vitro skin permeation experiment
When the optimal configuration is determined from the results of animal experimentation, the animal species difference comes into question, as is often the case with other experiments. If we could use human tissues, this problem would be solved easily. Yet for ethical reasons, it is difficult to do this in Japan. To solve this problem, we used the animal species difference as a noise factor. By doing so, we attempted to obtain an optimal prescription that achieves skin permeation that is not only independent of the animal species difference but also superior for a human being with respect to the effects of barrier ability and absorption enhancer on skin permeation.
Figure 2
Relationship between cumulative amount of skin permeation (y) and time (T): the ideal and actual curves, with lag time T0 (time to saturation density inside the skin)
688
Case 9
(L1 L2)2
2,002,184
1 2
(6)
4. SN Ratio
(f 1)
(L1 L2)2
375,496
1 2
(f 1)
(7)
Error variation:
Table 2 shows part of the data regarding the principal agents cumulative amount of skin permeation
per skin area for N1 and N2 as the output. Based on
this result, we performed the following analysis
(analysis 1).
Below is an example of calculation for experiment 2.
Se ST S SN 84,220 (f 7)
(8)
Error variance:
Ve
Se
84,220
12,031
7
7
(9)
SN ratio:
Total variation:
10 log
(1)
12.84 dB
Sensitivity:
Effective divider:
1 22 42 62 82 102 220
(2)
2 4 8 24 48 2960
(3)
Linear equations:
L1 (2)(26.5) (4)(152.8) (10)(748.3)
142,892.2
(4)
(5)
Table 1
Signal factors and levels (sampling point) (h)
Noise
Factor
T1
T2
T3
T4
T5
N1
10
N2
24
48
S 10 log
(10)
1
(S Ve)
1 2
27.97 dB
(11)
Table 2
Principal agent's cumulative amount of skin permeation per unit skin area (μg/cm²)

N1 (sampling times 2, 4, 6, 8, 10 h):
No. 1:   0.0    0.0     2.3     4.6     7.7
No. 2:   26.5   152.8   319.3   528.3   748.3
No. 18:  15.0   142.8   378.5   545.7   718.8

N2 (sampling times 4, 8, 24, 48 h):
No. 1:   0.2    1.0     25.4    123.8
No. 2:   1.0    13.4    447.4   1138.7
No. 18:  2.3    20.1    335.7   1059.7

The calculation procedure of analysis 2 was basically the same as that for analysis 1. However, since the amount of permeation differed greatly from experiment to experiment, the range of the signal factor was determined by picking an overlapping range between N1 and N2 for each experimental run. Thus, while the range of the signal factor differed from run to run in analysis 2, the effective divider for each row, r, was common to N1 and N2.
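Analysis 1 for experiment 2 can be replayed directly from the Table 2 row. Rounding in the printed data shifts the intermediate sums slightly, but the SN ratio and sensitivity come out at the reported values of equations (10) and (11) (units assumed μg/cm²):

```python
import math

# Experiment 2 from Table 2 (cumulative permeation; units assumed ug/cm^2)
t1 = [2, 4, 6, 8, 10]                       # N1 sampling times (h)
y1 = [26.5, 152.8, 319.3, 528.3, 748.3]
t2 = [4, 8, 24, 48]                         # N2 sampling times (h)
y2 = [1.0, 13.4, 447.4, 1138.7]

r1 = sum(t * t for t in t1)                 # eq. (2): 220
r2 = sum(t * t for t in t2)                 # eq. (3): 2960
L1 = sum(t * y for t, y in zip(t1, y1))     # eq. (4)
L2 = sum(t * y for t, y in zip(t2, y2))     # eq. (5)

ST = sum(y * y for y in y1 + y2)            # eq. (1)
Sb = (L1 + L2) ** 2 / (r1 + r2)             # eq. (6)
SNb = L1 ** 2 / r1 + L2 ** 2 / r2 - Sb      # eq. (7)
Se = ST - Sb - SNb                          # eq. (8)
Ve = Se / 7                                 # eq. (9)

eta = 10 * math.log10(((Sb - Ve) / (r1 + r2)) / Ve)   # eq. (10): about -12.84 dB
S_dB = 10 * math.log10((Sb - Ve) / (r1 + r2))         # eq. (11): about 27.97 dB
```

Because the two noise conditions have different sampling schedules, the effective dividers r1 and r2 differ, and the N-by-beta interaction of equation (7) uses each linear equation with its own divider.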
5. Results of Experiment

The control factors were allocated as shown in Table 3 to determine an ointment prescription with good skin permeation.

Table 3
Control factors and levels

A: ?, mid, large
B: small, mid, large
C: low, mid, high
D: none, mid, high
F: none, mid, high
G: low, mid, high
H: low, mid, high
Table 4
SN ratios and sensitivities (dB)

No.   SN Ratio   Sensitivity
1     −16.74     6.79
2     −12.84     27.96
3     −20.84     20.26
4     −20.59     3.75
5     −19.58a    17.64a
6     −20.04     10.38
7     −21.32     9.37
8     −22.12     25.67
9     −13.56     38.46
10    −19.58a    17.64a
11    −14.87     22.73
12    −23.29     24.52
13    −24.89     6.36
14    −19.65     17.59
15    −19.58a    17.64a
16    −16.78     17.33
17    −32.45     6.64
18    −13.79     27.25
Figure 3
Analysis 1 of in vitro skin permeation experiment (signal: time)
Figure 4
Analysis 2 of in vitro skin permeation experiment
Figure 5
Graph for conrmatory experiment
Comparing the optimal configurations from the two analyses, we could see differences at A and E. Since factor A was an experimental condition of the in vitro skin permeation experiment, and since E1 and E2 were nearly equivalent, we concluded that the optimal configurations obtained from the two analyses were almost consistent. For the confirmatory experiment, we adopted the optimal configuration obtained from analysis 1, because analysis 2 was performed after all other experiments, including the confirmatory experiment, were completed.

Initial configuration: A1B3C2D2E1F1G1H1
Table 5
Estimation and confirmation of SN ratio and sensitivity (dB)

Analysis 1

               SN Ratio (Est / Conf)   Sensitivity (Est / Conf)
Optimal        25.8 / 18.9             36.0 / 27.9
Initial        10.3 / 6.5              14.5 / 12.4
Gain           15.5 / 12.4             21.6 / 15.5

Analysis 2

               SN Ratio (Est / Conf)   Sensitivity (Est / Conf)
Optimal        9.6 / 8.0               12.1 / 12.1
Initial        46.1 / 53.9             40.5 / 30.2
Gain           −36.5 / −45.9           −28.4 / −18.1
Figure 5 shows the results of the confirmatory experiment under both initial and optimal configurations for N1 and N2.

Under the optimal configuration, we confirmed excellent skin permeation for both a rat and a minipig, which was a remarkable improvement over the initial configuration. Additionally, combining this result with that of analysis 2 and plotting each data point with the amount of permeation as the signal and time as the output characteristic, we obtained Figure 5c and d. These results reveal that the time to achieve a given amount of permeation was reduced considerably.

Table 5 shows each pair of estimations and confirmations of the SN ratio, η, and sensitivity, S, for analyses 1 and 2. All factors were used for the estimation. In analysis 2, since a shorter permeation time was the desired characteristic (i.e., a smaller sensitivity was preferable), the estimated gain turns out negative. The reason that the estimated gain of the SN ratio also becomes negative is that the decrease in β was larger than the improvement in σ; since the SN ratio is defined as β²/σ², when the improvement in σ is relatively smaller than the decrease in β, the SN ratio eventually diminishes.

Comparing the estimated and measured values of the SN ratio and sensitivity, we found that both were nearly consistent in terms of the gain of sensitivity for both analyses 1 and 2; and for analysis 2, the reproducibility of the SN ratio also improved. In analysis 2, the gain in the SN ratio was negative for both estimation and confirmation. However, as the standard deviations in Figure 5c and d show, although the variability under the optimal configuration was less than or equal to that under the current one, the improvement in sensitivity, which was larger than that of the variability, turned the gain of the SN ratio negative.

Although animal testing usually requires four or more repetitions because of individual differences, we confirmed that a single experiment allowed us to obtain good reproducibility, thereby streamlining the experimental process.
7. Conclusions

Using the characteristics of in vitro skin permeation, we optimized the prescription of an ointment. Setting a rat and a minipig as noise factor levels and treating the animal species difference as an indicative factor, we calculated an SN ratio using Ve. As a result, we obtained an ideal, optimal prescription with a high skin permeation speed and a short lag time.

Although we have shown two different types of analyses in this study, considering the quality engineering concept that a signal factor is in essence a customer's demand, we have realized that analysis 2 is more appropriate. In fact, reproducibility of the SN ratio is better in analysis 2 than in analysis 1, where time was selected as the signal factor.

In addition, although the details are omitted here, response graphs based on the conventional SN ratio calculation method that uses VN as the error variance agree almost completely with those calculated using Ve (Figure 5). This implies that even with significant noise conditions, such as a rat and a minipig, the resulting optimal configurations are not very different for the two types, and these optimal configurations are not susceptible to species differences. Therefore, we can predict that only a simple adjustment, such as of the principal agent, the amount of applied medicine, or the application time, is needed to apply a transdermal drug delivery system effectively.

When optimizing an ointment prescription, not only skin permeation but also the problems of skin stimulation and stable storage need to be taken into consideration. Our research succeeded in reducing development cycle time drastically. In addition, for the characteristics obtained from the animal experiment, we confirmed that only one experiment is necessary for optimization and, consequently, achieved a considerable economic benefit. Therefore, we believe that the quality engineering method is applicable to prescription optimization of other types of drugs and to other animal experiments.
Reference

Hatsuki Asahara, Kouya Yano, Yuichiro Ota, Shuichi Takeda, and Takashi Nakagawa, 1998. Optimization of model ointment prescription for the improvement of in vitro skin permeability. Quality Engineering, Vol. 6, No. 2, pp. 65-73.
CASE 10
Abstract: This study applies the dynamic operating window analysis method to granulation technology. The original SN ratio, calculated from M, was named the speed difference method, whereas the new one, calculated from the ratio of the larger-the-better and smaller-the-better characteristics, was called the speed ratio method. In this study, parameter design was used for the granulation of herbal medicines.
1. Introduction
In 1994, Taguchi introduced the dynamic operating window method for chemical reactions. Since then, several applications have been reported. Later, a new analysis method was introduced to combine the larger-the-better and smaller-the-better characteristics, the former representing the main chemical reaction and the latter, side reactions. This new method was based on the practice that no noise factors are assigned for chemical reactions, since side reactions behave as noise to the main reaction.
2. Granulation
Granulation is a universal technology, used widely in various industries, such as fertilizer, food, catalysts, agricultural chemicals, and ceramics. In this study, parameter design was used for the granulation of herbal medicines. There are two types of granulation: dry and wet. Most granulation processes for medicines use the wet method. Since this study belongs to upstream technology development and no restrictions had to be followed, laboratory-scale equipment was used, without considering the scale of manufacturing. Instead, efficiency of development was considered the top priority.
Taguchi's Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu. Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
3. Chemical Reactions

To correspond to chemical reactions from the viewpoint of quality engineering, an example of dissolution of a substance is used for explanation. Consider a reaction

aA + bB -> pP + qQ

When the reaction progresses, the consumption rates of the reactants are

-d[A]/dt = -d[B]/dt = k[A][B]

and, for a first-order reaction,

-d[A]/dt = k[A]

Solving the rate equation gives the fraction of remaining substance, Y/Y0, as an exponential function of time:

Y/Y0 = e^(-βt)    (10)

Equation (10) is described simply as the relationship between the fraction of reduction of concentration and time. Since the equation essentially expresses the function only, one may judge the equation to be insufficient. But that is the difference between science and engineering.

Let the left side of equation (15.11) be p0 and solve the equation:

1 - Y/Y0 = 1 - e^(-βt)

β = (1/t) ln(1/p0)    (11)
4. Generic Function
Figure 1 shows the fluidized-bed granulating equipment used in this experiment. Powder is fed into the chamber, and heated air is blown from the bottom to fluidize the powder. While the particles continue to keep contact with each other, a liquid containing binder is sprayed to make the particles grow. The granulated particles sink by weight; the light particles flow up in the air and continue granulation. The process ends when the blowing air stops. As in the case of spray drying, it is difficult to balance heat energy and blowing energy by adjusting the temperature, liquid feeding speed, or air-blowing speed. Parameter design is an effective approach in such cases.
The growing mechanism in fluidized-bed granulation can be considered in the same way as a chemical reaction that follows an exponential function. Letting the fraction of reduction of fine powder be y2 and time be t,

y2 = 1 - e^(-βt)    (15)

Thus, it is ideal that the fine particles in the chamber decrease following equation (15).
Among many available granulating methods, a laboratory-scale fluidized-bed granulating device was selected, using about 100 g of powder to conduct one run of the experiment. It was low in cost and easy to operate. The device was small and could be put on a table, as shown in Figure 1. Air was blown from the bottom of the chamber to fluidize the powder.
Figure 1
Fluidized-bed granulation equipment
5. Experimental Design
An L18 orthogonal array was used for the experiment. The factors and levels are given in Table 1. Particle size distribution was measured by the screening method shown in Figure 2.
Table 1
Factors and levels

Control Factor               Level 1     Level 2          Level 3
A:                           Close       Far
B: concentration of binder   Standard    Slightly high    High
C:                           Small       Standard         Large
D: air flow                  Small       Standard         Large
E: spraying pressure         Low         Standard         High
F: air temperature           Low         Standard         High
G:                           Small       Standard         Large
H: flow of binding liquid    Small       Standard         Large
Unreacted part:

q = e^(-β1 t)    (16)

Overreacted part:

1 - p = e^(-β2 t)    (17)

Here β1, for the underreacted (q, fine) part, is larger-the-better, and β2, for the overreacted part, is smaller-the-better.
Figure 2
Specification of particle size: overreacted substance (rough granule; screens #10 = 1700 µm, #12 = 1400 µm) less than 5%; target substance; underreacted substance (smaller than fine granule; screen #42 = 355 µm) less than 15%. The aim is enlargement of the operating window over reaction time.
Figure 3
Dynamic operating window threshold (specification of particle size): fractions of overreacted (coarse) particles p, target product 1 - p - q, and underreacted (fine) particles q versus time T
The reaction speeds and the SN ratio are calculated as

β1 = (1/T) ln(1/q)    (18)

β2 = (1/T) ln[1/(1 - p)]    (19)

η1 = 10 log β1²    (20)

η2 = 10 log β2²    (21)

η = η1 - η2 = 10 log(β1²/β2²)    (22)

Sensitivity:

S = 10 log β1²    (23)
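Equations (18)-(23) can be sketched in a few lines of Python; the function name is ours, and the inputs below are the single-point optimum data of Table 4 (p = 0.0539, q = 0.1967, T = 37 min), which reproduce the reported η = 29.35 dB and S = -27.14 dB.

```python
import math

def speed_ratio_sn(p, q, T):
    """Dynamic operating window, speed-ratio method, equations (18)-(23).

    p : fraction of overreacted (coarse) substance
    q : fraction of underreacted (fine) substance
    T : granulation time (min)
    """
    beta1 = math.log(1.0 / q) / T          # (18) main-reaction speed, larger-the-better
    beta2 = math.log(1.0 / (1.0 - p)) / T  # (19) side-reaction speed, smaller-the-better
    eta1 = 10.0 * math.log10(beta1 ** 2)   # (20)
    eta2 = 10.0 * math.log10(beta2 ** 2)   # (21)
    eta = eta1 - eta2                      # (22) speed-ratio SN ratio (dB)
    S = eta1                               # (23) sensitivity (dB)
    return beta1, beta2, eta, S

# single-point optimum data of Table 4: p = 0.0539, q = 0.1967 at T = 37 min
beta1, beta2, eta, S = speed_ratio_sn(0.0539, 0.1967, 37.0)
print(round(eta, 2), round(S, 2))  # 29.35 -27.14
```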
Table 2
Calculation of experiment 13: granulation progressed

Reaction                     T1 (25 min)   T2 (40 min)
Overreacted substance p      0.0060        0.0099
Target substance 1 - p - q   0.0930        0.0960
Underreacted substance q     0.9010        0.8941

After Conversion             T1            T2
Overreacted substance p (β2) 0.0002423     0.0002488
Underreacted substance q (β1) 0.0041715    0.0027974
8. Confirmatory Experiment and Improvement

Table 4 shows the results under the optimum conditions, using the data of one point. If the entire reaction is stable, the analysis may be made from one point only. For the confirmatory experiment, condition A1B1C1D3E3F3G3H2 was used, which is the optimum SN ratio condition selected from the smaller-the-better characteristic.

Under the optimum conditions, the fraction of targeted particle size was high, with a small unreacted fraction. The SN ratio was slightly lower, due to a larger overreacted portion (side reaction). In the confirmatory experiment, a gain of 20.86 dB was obtained, which is seen by comparing the particle-size distributions. As shown in Figure 6, nearly 80% was unreacted (i.e., fine particles) under the initial conditions, but under the optimum conditions, over 80% was targeted product.

In the confirmatory experiment, the SN ratio of the speed ratio increases, so the reaction speed increases. But observing the particle-size distribution shown in Figure 7, it appears that the distributions of the two cases were not really very different; only the center of the distribution was shifting. In such a case, it is better to compare both after adjusting the center of distribution. But from the viewpoint of granulation, it looks as if the growth of particles is being hindered due to an im-
Table 3
Calculation of experiment 8: granulation did not progress
Figure 4
Response graphs of SN ratio
Figure 5
Response graphs of sensitivity
Table 4
Results of optimum configuration

Reaction                     Time (37 min)
Overreacted substance p      0.0539
Target substance 1 - p - q   0.7494
Underreacted substance q     0.1967

After Conversion
β2                           0.0014978
β1                           0.0439504

The results are η1 = -27.14 dB, η2 = -56.49 dB, η = 29.35 dB, and S = -27.14 dB.

The average reaction speeds over the two measuring times are

β1 = (β11 + β12)/2 = 0.02272    (24)

β2 = (β21 + β22)/2 = 0.002545    (25)
Figure 6
Granule-size distribution (%) before and after optimization: coarse granules, target product, and fine granules under the initial and optimal conditions
Figure 7
Improved granule-size distribution (%) before and after optimization: coarse, proper-size, and fine granules under the initial and optimal conditions
Adjusting the reaction time at the initial-condition speed,

0.8406 = 0.002545T,  so  T = 330 min    (27)

Table 5
Gain of SN ratio from confirmatory experiment (dB)

Configuration   Estimation   Confirmation
Initial         13.78         8.49
Optimal         41.11        29.35
Gain            27.33        20.86
Using the initial-condition data in Table 6, the reaction speeds are β1 = 0.0037012 and β2 = 0.0013882, giving

η = 10 log(0.0037012²/0.0013882²) = 8.52 dB    (28)
Table 6
Data on initial conditions

Reaction                     T1 (10 min)   T2 (20 min)   Tx (? min)
Overreacted substance p      0.0001        0.0385        0.3675
Target substance 1 - p - q   0.0258        0.4671        0.3377
Underreacted substance q     0.9741        0.4944        0.2948

After Conversion             T1            T2            Average
β2                           0.00001       0.001963      0.001388
β1                           0.002624      0.03522       0.003701
Table 7
SN ratio of the smaller-the-better characteristic (dB); qualitative scores were given for particle size, fluidity, and cohesion

No.   SN Ratio
 1     -4.7
 2     -6.0
 3      0.0
 4     -9.5
 5     -3.0
 6     -9.5
 7     -8.7
 8     -3.0
 9     -6.7
10     -3.0
11     -6.7
12     -8.6
13      0.0
14     -9.5
15     -3.0
16     -6.0
17     -9.5
18     -6.0
9. Smaller-the-Better SN Ratio Using Scores

Because of time restrictions, an attempt was made to estimate the optimum conditions by eyeball observation of the results instead of using the particle-size distribution. Qualitative scores were given by observing items such as particle size, fluidity, and cohesion, with the smaller-the-better SN ratio being used for evaluation:

η = -10 log [(1/n)(y1² + y2² + ⋯ + yn²)]    (29)
Figure 8
Response graphs of the smaller-the-better SN ratio (factor levels A1-A2, B1-B3, C1-C3, D1-D3, E1-E3, F1-F3, G1-G3, H1-H3)
For example, for scores of 2, 1, and 2,

η = -10 log [(2² + 1² + 2²)/3] = -4.7 dB    (30)
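Equation (29), with the worked example of equation (30), can be sketched as follows (the function name is ours):

```python
import math

def smaller_the_better_sn(scores):
    """Smaller-the-better SN ratio of equation (29): -10 log10 of the mean square."""
    mean_sq = sum(y * y for y in scores) / len(scores)
    return -10.0 * math.log10(mean_sq)

# the worked example of equation (30): scores 2, 1, 2
eta = smaller_the_better_sn([2, 1, 2])
print(round(eta, 2))  # -4.77 (printed as -4.7 dB in the text)
```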
10. Conclusions

Use of the dynamic operating window in conjunction with the speed-ratio method was applied to the granulation of herbal medicines.
CASE 11
1. Introduction
A prototyping device used in the research stage is a combination of grinding and separating devices, which has the advantages of requiring less material for experiments and enabling us to arrange various factors easily. Using a parameter design based on an L9 orthogonal array with easily changeable factors in the first experiment, we obtained a sufficiently good result in particle-size adjustability. However, since we could not achieve an adequate improvement in particle-size distribution, we tackled a second parameter design with an L18 orthogonal array, taking in more control factors to improve both particle-size adjustment and distribution.
Figure 1 shows the grinding and separating device utilized. At the supply area (1), roughly ground material (sample) is input; subsequently, it is dispersed at the cyclone (2) and carried from the separating area to the grinding area. At the separating area, an airflow type of separating machine classifies particles of different diameters by taking advantage of a balance of centrifugal and suction forces. The material transported to the grinding area strikes an impact plate and is crushed into fine particles. In the separating area, only particles of a target size are sorted out; larger granules are reground in the grinding area. Thus, in the grinding and separating process, a granule is pulverized repeatedly until it matches the target size.
Figure 1
Schematic drawing of grinding and separating device
The signal factor levels are the grinding pressures: M1, high pressure; M2, medium pressure; M3, low pressure.

Figure 2
Relationship between grinding energy and particle size (particle size versus 1/grinding energy for M1, M2, M3)
In this experiment, not only particle size but also production capacity and yield have to be considered, so we must make an overall judgment. However, it was quite difficult to satisfy all of them, so in most cases we had to compromise.

We needed to compute an average y and a variance V from the particle-size distribution data in Figure 3. First, we tabulated the values of the particle diameters (x1, ..., x14) and their fractions (p1, ..., p14, in %):

y = (x1p1 + x2p2 + ⋯ + x14p14)/100    (1)

V = (x1²p1 + x2²p2 + ⋯ + x14²p14)/100 - y²    (2)

Figure 3
Particle-size distribution after grinding

Using the linear equations L1 and L2 for noise levels N1 and N2, the SN ratio was computed as follows. Effective divider:

r = M1² + M2² + M3²    (3)

Variation of proportional term:

Sβ = (L1 + L2)²/2r    (4)

Variation of sensitivity difference:

SN = (L1 - L2)²/2r    (5)

Error variation:

Se = ST - Sβ - SN    (6)

Error variance:

Ve = Se/4    (7)

Total noise variance:

VN = (SN + Se)/5    (8)

V = (V11 + V12 + V13 + V21 + V22 + V23)/6    (9)
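Equations (1) and (2) amount to a weighted mean and variance over the measured size classes; here is a minimal sketch with a hypothetical three-class distribution (the function name and data are ours):

```python
def size_stats(x, p):
    """Average diameter y and variance V of equations (1) and (2).

    x : particle diameters x1..x14; p : fractions p1..p14 in percent.
    """
    y = sum(xi * pi for xi, pi in zip(x, p)) / 100.0
    V = sum(xi * xi * pi for xi, pi in zip(x, p)) / 100.0 - y * y
    return y, V

# hypothetical three-class distribution (diameters in micrometers, fractions in %)
y, V = size_stats([5.0, 10.0, 20.0], [20.0, 50.0, 30.0])
print(y, V)  # 12.0 31.0
```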
Table 1
Average particle diameter (y) and variance (V) for each experiment

Noise   M1         M2         M3         Linear Equation
N1      y11, V11   y12, V12   y13, V13   L1
N2      y21, V21   y22, V22   y23, V23   L2
Table 2
Control factors and levels

Control Factor                              Level 1   Level 2   Level 3
A:                                          Cone 1, Cone 2, Trapezoid 1, Trapezoid 2, Triangle 1, Triangle 2
B:                                          1.0       1.5       2.0
C: amount of material supplied              1.0       1.5       2.0
D: supply time                              10        20        30
E: damper condition                         10        25        40
F:                                          20        18        16
G: particle diameter of supplied material   1.0       1.5       2.0

Figure 4
Response graphs
SN ratio:

η = 10 log [(1/2r)(Sβ - Ve)/VN]    (10)

Sensitivity:

S = 10 log (1/2r)(Sβ - Ve)    (11)

Estimation of the sensitivity at the optimal configuration:

S = A4 + C2 + F1 + G2 - 3T = 46.69 dB    (13)
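The decomposition of equations (4)-(11) can be sketched as below; the function name and numeric inputs are ours (a hypothetical run with signal levels M = 1, 2, 3, so r = 14, and six measured values giving L1, L2, and ST):

```python
import math

def zero_point_sn(L1, L2, r, ST):
    """SN ratio and sensitivity from two noise-level linear equations.

    Follows equations (4)-(11): Sbeta = (L1+L2)^2/2r, SN = (L1-L2)^2/2r,
    Se = ST - Sbeta - SN with f = 4, Ve = Se/4, VN = (SN + Se)/5.
    """
    S_beta = (L1 + L2) ** 2 / (2.0 * r)
    S_N = (L1 - L2) ** 2 / (2.0 * r)
    S_e = ST - S_beta - S_N
    V_e = S_e / 4.0
    V_N = (S_N + S_e) / 5.0
    eta = 10.0 * math.log10((S_beta - V_e) / (2.0 * r) / V_N)  # (10)
    S = 10.0 * math.log10((S_beta - V_e) / (2.0 * r))          # (11)
    return eta, S

# hypothetical run: signals M = 1, 2, 3 (so r = 14); outputs close to y = 2M at
# both noise levels give L1 = 28.7, L2 = 27.8 and total variation ST = 114.08
eta, S = zero_point_sn(28.7, 27.8, 14.0, 114.08)
```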
Figure 5
Improvement in particle-size distribution
Table 3
Results of confirmatory experiment (dB)

Optimal           SN Ratio                    Sensitivity
Configuration   Estimation   Confirmation   Estimation   Confirmation
L18             22.15        22.19          46.69        46.00
L9              17.13        17.13          45.29        45.31
Gain             5.02         5.07           1.40         0.69
Table 4
Data for dynamic operating window

Signal            Grinding Pressure (Threshold)
(particle size)  Noise   T1     T2     T3     Linear Equation
M1               N1      y111   y112   y113   L1
                 N2      y121   y122   y123   L2
M2               N1      y211   y212   y213   L3
                 N2      y221   y222   y223   L4
In the next analysis it was assumed that the relationship between grinding energy, T, and the speed of reduction of raw material follows equation (16.4), where Y and Y0 denote the amount of after-ground product and before-ground material, respectively. The grinding efficiency should be highest when the grinding process follows the equation

Y/Y0 = 1 - e^(-βT)    (14)

When performing an analysis based on the dynamic operating window, an important key point is how to determine the portions corresponding to side reactions and underreactions in chemical reactions. Since we produce small particles from raw material in a grinding system, we can assume that a side-reacted part, p, is the corresponding part in the particle-size distribution.

M1 (small particles):

y1 = ln [1/(1 - p)],   1 - p = Y1/Y0    (15)

M2 (large particles):

y2 = ln (1/q),   q = Y2/Y0    (16)
Total variation:

ST = Σ y²    ( f = 12)    (17)

Variation of proportional term:

Sβ = (L1 + L2 + L3 + L4)²/4r    ( f = 1)    (18)
Figure 6
Response graphs of dynamic operating window
Variation of signal difference:

S*M = [(L1 + L2) - (L3 + L4)]²/4r    ( f = 1)    (19)

Variation of noise difference:

S*N = [(L1 - L2) + (L3 - L4)]²/4r    ( f = 1)    (20)

Error variation:

Se = ST - Sβ - S*M - S*N    ( f = 9)    (21)

Error variance:

Ve = Se/9    (22)

Total noise variance:

VN = (Se + S*N)/10    (23)

SN ratio:

η = 10 log [(1/4r)(S*M - Ve)/VN]    (24)

S = 10 log (1/4r)(Sβ - Ve)    (25)

Sensitivity:

S* = 10 log (1/4r)(S*M - Ve)    (26)
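Equations (17)-(24) can be sketched as follows. The function name and numbers are ours, and the exact contrasts used for S*M and S*N are our reading of the garbled original:

```python
import math

def operating_window_sn(L, r, ST):
    """Operating-window SN ratio from four linear equations, equations (17)-(24).

    L = (L1, L2, L3, L4) for (M1,N1), (M1,N2), (M2,N1), (M2,N2);
    r = T1^2 + T2^2 + T3^2; ST = total variation of the 12 data points.
    """
    L1, L2, L3, L4 = L
    S_beta = (L1 + L2 + L3 + L4) ** 2 / (4.0 * r)            # (18)
    S_M = ((L1 + L2) - (L3 + L4)) ** 2 / (4.0 * r)           # (19) signal difference
    S_N = ((L1 - L2) + (L3 - L4)) ** 2 / (4.0 * r)           # (20) noise difference
    S_e = ST - S_beta - S_M - S_N                            # (21), f = 9
    V_e = S_e / 9.0                                          # (22)
    V_N = (S_e + S_N) / 10.0                                 # (23)
    return 10.0 * math.log10((S_M - V_e) / (4.0 * r) / V_N)  # (24)

# hypothetical data: y1 (small particles) near 0.1T and y2 (large) near T,
# measured at T = 1, 2, 3 under two noise levels
eta = operating_window_sn((1.4, 1.52, 14.0, 13.4), 14.0, 27.1352)
```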
Figure 7
Comparison of particle-size distribution
Table 5
Confirmation by dynamic operating window method (dB)

Configuration   Estimation   Confirmation
Optimal          9.97        11.53
Current          6.76         5.75
Gain            11.36        10.59
Figure 8
Relationship between grinding pressure and particle diameter
References

Hiroshi Shibano, Tomoharu Nishikawa, and Kouichi Takenaka, 1994. Establishment of particle size adjusting technique for a fine grinding process for developer. Quality Engineering, Vol. 2, No. 3, pp. 22-27.

Hiroshi Shibano, Tomoharu Nishikawa, and Kouichi Takenaka, 1997. Establishment of control technique for particle size in roles of fine grinding process for developer. Quality Engineering, Vol. 5, No. 1, pp. 36-42.
Part II
Robust Engineering: Electrical Applications

Circuits (Cases 12-15)
Electronic Devices (Cases 16-20)
Electrophoto (Cases 21-22)
CASE 12
Abstract: The authors have been studying some applications of the reproducibility of conclusions in electrical fields. One of the conclusions was that the characteristics of the measuring device, the settings of the signal factor, and the control factors from the circuit elements affected the SN ratio. In this study, a new amplifier was designed considering the foregoing points. Parameter design was conducted by varying two combinations of control factors.
1. Introduction
A circuit designed for our research is illustrated in Figure 1. This is a typical transistor amplification circuit whose function is expressed by y = βM, where M is the input voltage and y is the output voltage. We selected eight factors from resistances R, capacitors C, and inductors L as control factors and assigned them to an L18 orthogonal array. Among the control factors shown in Table 1, factors C, D, E, and F were determined in the design phase by considering the technical relationship of circuit characteristics. As the signal factor, the input voltage of the amplification circuit, we allocated the values shown in Table 2 to each of four levels. In addition, we defined the input frequency as an indicative factor and assigned three levels. As a noise factor, we selected the environmental temperature of the circuit and set up two levels, room and high temperature. The former is approximately 25°C, and the latter is heated to around 80°C using a hair dryer.
2. Experimental Procedure
Using the RC oscillator depicted in Figure 2, we applied a sinusoidal input signal to the circuit and measured the input voltage with a voltmeter as the signal factor. Setting the frequency of the input signal as an indicator, we proceeded with our experiment.
Figure 1
Transistor amplication circuit
3. Analysis of Experiment 1

Total variation:

ST = 510² + 610² + ⋯ + 3930² + 3810² = 125,390,500    ( f = 24)    (1)

Effective divider:

r = 5.0² + 10.0² + 20.0² + 40.0² = 2125    (2)

Linear equations:

L11 = (5.0)(510) + (10.0)(1020) + (20.0)(2010) + (40.0)(3690) = 200,550
L21 = 217,250    L31 = 210,900
L12 = 200,700    L22 = 218,600    L32 = 212,550    (3)

Variation of proportional term:

Sβ = (L11 + L21 + L31 + L12 + L22 + L32)²/6r = 124,626,376.7    ( f = 1)    (4)

Variation due to frequency (indicative factor):

SF = [(L11 + L12)² + (L21 + L22)² + (L31 + L32)²]/2r - Sβ = 144,608.6    ( f = 2)    (5)
Table 1
Control factors

Control Factor       Level 1     Level 2     Level 3
A: resistance R1     10          33
B: resistance R2     3           6.8         22
C:                   0.1         0.2         0.4
D: resistance R5     Ic = 1 mA   Ic = 3 mA   Ic = 5 mA
E: inductor L (µH)   10          22          56
F: capacitor C       20          20
G: resistance R4     470         820         200
H: resistance R6     22          47
Variation due to noise within frequency:

SN(F) = [(L11 - L12)² + (L21 - L22)² + (L31 - L32)²]/2r = 1074.7    ( f = 3)    (6)

Error variation:

Se = ST - Sβ - SF - SN(F) = 618,440    ( f = 18)    (7)

Error variance:

Ve = Se/18 = 34,357.78    (8)

Total noise variance:

VN = (SN(F) + Se)/21 = 29,500.7    ( f = 21)    (9)

SN ratio:

η = 10 log [(1/6r)(Sβ - Ve)/VN] = -4.80 dB    (10)

Sensitivity for each frequency, with Sβi = (Li1 + Li2)²/2r:

S1 = 10 log (1/2r)(Sβ1 - Ve) = 39.50 dB    (11)

S2 = 40.22 dB    (12)

S3 = 39.96 dB    (13)
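The whole calculation of equations (1)-(11) can be reproduced from the Table 3 data; running this sketch (the variable names are ours) returns the values reported above:

```python
import math

# measured data of Table 3 (mV): signal levels M1-M4 for each noise N and frequency F
M = [5.0, 10.0, 20.0, 40.0]
data = {
    ("N1", "F1"): [510, 1020, 2010, 3690],
    ("N1", "F2"): [610, 1180, 2300, 3910],
    ("N1", "F3"): [580, 1140, 2250, 3790],
    ("N2", "F1"): [520, 1010, 2000, 3700],
    ("N2", "F2"): [700, 1170, 2310, 3930],
    ("N2", "F3"): [590, 1160, 2280, 3810],
}

r = sum(m * m for m in M)                                    # (2): 2125
ST = sum(y * y for ys in data.values() for y in ys)          # (1): 125,390,500
L = {k: sum(m * y for m, y in zip(M, ys)) for k, ys in data.items()}  # (3)

S_beta = sum(L.values()) ** 2 / (6 * r)                      # (4)
SF = sum((L[("N1", f)] + L[("N2", f)]) ** 2
         for f in ("F1", "F2", "F3")) / (2 * r) - S_beta     # (5)
SNF = sum((L[("N1", f)] - L[("N2", f)]) ** 2
          for f in ("F1", "F2", "F3")) / (2 * r)             # (6): 1074.7
Se = ST - S_beta - SF - SNF                                  # (7): 618,440
Ve = Se / 18                                                 # (8)
VN = (SNF + Se) / 21                                         # (9)
eta = 10 * math.log10((S_beta - Ve) / (6 * r) / VN)          # (10)
S1 = 10 * math.log10(((L[("N1", "F1")] + L[("N2", "F1")]) ** 2 / (2 * r) - Ve)
                     / (2 * r))                              # (11)
print(round(eta, 2), round(S1, 2))  # -4.8 39.5
```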
Table 2
Signal, indicative, and noise factors

Factor                                  Level 1   Level 2   Level 3   Level 4
Signal factor M: input voltage          5.0       10.0      20.0      40.0
Indicative factor F: frequency (kHz)    125.0     250.0     500.0
Noise factor N: temperature             Room      High
Figure 2
Measuring apparatus
4. Experiment 2: Amplification Circuit Design Reflecting Effects on Measurement

In experiment 2, we adopted a common method of placing a 50-Ω impedance between the input and output terminals of a circuit. As Figure 4 illustrates, we connected a 50-Ω resistor between the input and output terminals. This is called terminal resistance. Other conditions were exactly as in experiment 1, as was the analysis procedure. Table 5 shows the data of experiment 2 from the L18 orthogonal array. Due to the effect of the added terminal resistor, each measured datum changed. This is because the circuit load changes and, eventually, the measurement apparatus characteristics also vary. Although
Table 3
Measured data for experiment 1 (mV)

Noise    Indicative                                  Linear
Factor   Factor       M1     M2     M3     M4        Equation
N1       F1           510    1020   2010   3690      L11
         F2           610    1180   2300   3910      L21
         F3           580    1140   2250   3790      L31
N2       F1           520    1010   2000   3700      L12
         F2           700    1170   2310   3930      L22
         F3           590    1160   2280   3810      L32
Figure 3
Response graphs of experiment 1
Table 4
Gain in SN ratio of confirmatory experiment for experiment 1 (dB)

Configuration   Estimation   Confirmation
Optimal          1.87        -0.26
Current         -8.09        -6.88
Gain             9.96         6.62
Conrmation
we changed only the terminal resistance in this experiment, the factor-effect plot in Figure 5 shows characteristics similar to those of experiment 1 in Figure 3. Nevertheless, since the V-shape of factor H is somewhat less significant, the effect of terminal resistance was not negligible.

The optimal and current configurations obtained from the response graph in Figure 5 were the same as those of experiment 1. Table 6 shows the result of a confirmatory experiment. The fact that we did not obtain sufficient reproducibility is reasonable, since the optimal and current configurations were unchanged. Although this experiment was performed on the assumption that insufficient reproducibility is caused by measurement apparatus characteristics, experiment 2 revealed that there are factors other than the characteristics affected by
Figure 4
Terminal resistance-added amplication circuit for experiment 2
Table 5
Measured data of experiment 2 (mV)

Noise    Indicative                               Linear
Factor   Factor       M1    M2    M3    M4        Equation
N1       F1           53    109   212   399       L11
         F2           68    134   262   460       L21
         F3           74    142   280   468       L31
N2       F1           55    110   219   410       L12
         F2           69    134   265   473       L22
         F3           75    144   282   485       L32
Figure 5
Response graphs of experiment 2
Table 6
Results of confirmatory experiment for experiment 2 (dB)

Configuration   Estimation   Confirmation
Optimal          0.58        -1.42
Current         -7.09        -6.40
Gain             7.67         4.98

Table 7
Signal factors reflecting amplification capability

Signal Factor   M1    M2     M3
                5.0   10.0   20.0

Figure 6
Nonlinearity of output characteristic for signal factor level
Table 8
Measured data of experiment 3 (mV)

Noise    Indicative                        Linear
Factor   Factor       M1    M2    M3       Equation
N1       F1           65    128   258      L11
         F2           88    175   348      L21
         F3           98    196   380      L31
N2       F1           66    130   262      L12
         F2           87    170   340      L22
         F3           97    190   372      L32
Figure 7
Response graphs of experiment 3
Table 9
Result of confirmatory experiment for experiment 3 (dB)

Configuration   Estimation   Confirmation
Optimal         16.55        17.07
Current          4.19         7.36
Gain            12.36         9.71
voltage and factors G and H and the terminal resistance RL, as discussed before.

2. Analysis with terminal resistance as a signal factor. To clarify the mutual relationship, by combining all the resistances of factors H and G and the terminal resistance into a single resistance, we
Table 10
Levels of terminal resistance RL

Level             1     2     3
Resistance (Ω)    50    200   1000
Figure 8
Response graphs for each terminal resistance of experiment 4
The combined resistance (Figure 9) is

Rout = 1/[1/R4 + 1/(R6 + RL)]    (14)

M* = M(Rout/Rall)    (15)

and the fraction of the output appearing across the terminal resistance is RL/(R6 + RL).    (16)
Figure 9
Schematic of combined resistances
Table 11
Results of experiment 4 (mV)

                  N1                      N2
Signal       F1     F2     F3       F1     F2     F3
M1 (226)     65     88     98       66     87     97
M2 (452)     128    175    196      130    170    190
M3 (702)     192    263    302      190    258    288
M4 (904)     258    348    380      262    340    372
M5 (1403)    382    525    590      380    510    580
M6 (1599)    440    590    680      438    570    670
M7 (2350)    630    840    930      620    820    910
M8 (2806)    790    1040   1160     785    1020   1140
M9 (3197)    880    1170   1345     870    1150   1310
M10 (4700)   1270   1680   1820     1250   1620   1780
M11 (6394)   1770   2350   2590     1770   2300   2550
M12 (9400)   2540   3310   3500     2550   3210   3400

Linear equations: L11, L21, L31 for N1 (F1, F2, F3); L12, L22, L32 for N2.
Figure 10
Response graph of experiment 4
Figure 11
Irregularity unveiled by two-signal factor analysis
Then we conducted two confirmatory experiments to check reproducibility: the first was performed using the optimal configuration of experiment 1 after removing the data causing this irregularity; the second used the optimal configuration selected according to the response graphs of experiment 4, shown in Figure 8, for four different terminal resistances. (Note: For the four different resistances, each optimal level was also different, but we chose the level that was selected most frequently as the optimal, to observe reproducibility.)
Table 12
Result of confirmatory experiment 1 (dB)

Configuration   Estimation   Confirmation
Optimal         -34.02       -39.93
Current         -49.01       -48.94
Gain             14.99         9.01
The optimal configuration was A1B1C1D3E3F2G1H1. Using this configuration, we conducted a confirmatory experiment and summarized the results in Table 13. We obtained good reproducibility.

Table 13
Result of confirmatory experiment 2 (dB)

Configuration   Estimation   Confirmation
Optimal         -48.05       -47.47
Current         -50.99       -52.45
Gain              2.94         4.98
Reference

Naoki Kawada, Yoshisige Kanemoto, Masao Yamamoto, and Hiroshi Yano, 1996. Design for the stabilization of an amplifier. Quality Engineering, Vol. 4, No. 3, pp. 69-81.
CASE 13
1. Introduction

Oscillating frequency, accuracy, and oscillator type differ depending on usage. In general, compared with a crystal oscillator, a ceramic oscillator tends to be used for a circuit that does not require high accuracy, because of its lower price and accuracy (Figure 1). In this study we designed parameters to maintain stable oscillation even if the temperature and voltage vary.
Effect of mean:

Sm = (Σ yij)²/9    ( f = 1)    (1)

Error variation:

Se = Σ y²ij - Sm    ( f = 8)    (2)

Error variance:

Ve = Se/8    (3)

SN ratio:

η = 10 log [(1/9)(Sm - Ve)/Ve]    (4)

Sensitivity:

S = 10 log (1/9)(Sm - Ve)    (5)
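Equations (1)-(5) are the standard nominal-the-best decomposition; here is a minimal sketch (the function name is ours), applied to the sample 1 row of Table 2 as printed (the case's reported decibel values evidently use a different scaling of the data):

```python
import math

def nominal_the_best_sn(y):
    """Nominal-the-best decomposition of equations (1)-(5) for n measurements."""
    n = len(y)
    Sm = sum(y) ** 2 / n                       # (1) effect of mean, f = 1
    Se = sum(v * v for v in y) - Sm            # (2) error variation, f = n - 1
    Ve = Se / (n - 1)                          # (3)
    eta = 10 * math.log10((Sm - Ve) / n / Ve)  # (4) SN ratio (dB)
    S = 10 * math.log10((Sm - Ve) / n)         # (5) sensitivity (dB)
    return eta, S

# sample 1 of Table 2 (kHz): 3 temperatures x 3 supply voltages
eta, S = nominal_the_best_sn(
    [100.05, 100.04, 100.03, 100.10, 100.09, 100.08, 100.10, 100.09, 100.09])
```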
Figure 1
Ceramic oscillation circuit
Table 1
Factors and levels

Control factors                      Level 1                    Level 2    Level 3
A: resistance                        200 kΩ                     1 MΩ       4.7 MΩ
B: resistance                        100 Ω                      1 kΩ       10 kΩ
C: capacitance (pF)                  33                         100        330
D: capacitance (pF)                  33                         100        330
E: manufacturer of oscillator        K                          N
F:                                   Temperature compensating   Generic
G:                                   Temperature compensating   Generic

Noise factors
H: environmental temperature (°C)    0     25    50
I: supply voltage (V)                12    15    18
Figure 2
SN ratios and sensitivities of effective factors
Table 2
Confirmatory experiment result at optimal configuration (kHz)

               0°C                       25°C                      50°C
Sample   12 V    15 V    18 V     12 V    15 V    18 V     12 V    15 V    18 V
1        100.05  100.04  100.03   100.10  100.09  100.08   100.10  100.09  100.09
2        99.83   99.82   99.82    99.85   99.84   99.83    99.89   99.88   99.88

Optimal configuration: A1 = 200 kΩ, B1 = 100 Ω, C3 = 330 pF, D1 = 33 pF, E2 = N, F2 = generic, G2 = generic.
3. Confirmatory Experiment

The SN ratios and sensitivities computed from the data in Table 2 [equations (6) and (7)] were:

Sample   η (dB)   S (dB)
1        80.80    49.55
2        80.04    49.53
Reference

Kiyohiro Inoue and Genichi Taguchi, 1984. Reliability Design Case Studies for New Product Development, pp. 169-173. Tokyo: Japanese Standards Association.
CASE 14
Abstract: In this report a new method of evaluating the performance of four-terminal circuits is proposed. In circuit design, sinusoidal waves are frequently used as the input signal to generate responses. From the viewpoint of research efficiency, such responses are not good outputs to use. Here, pulses that include wave disorder are used as input instead of sinusoidal waves. An SN ratio was used to evaluate the robustness of four-terminal circuits. The results showed good reproducibility of the experimental conclusions.
1. Introduction
The most fundamental elements in an electric circuit are the capacitor, coil, and resistor, expressed as C, L, and R. These circuit elements represent the relationship between the current, i, and the terminal voltage, e. For instance, the resistance, R, whose unit is the ohm, is the coefficient relating i to e. The coil's terminal voltage is proportional to the rate of current change, and its coefficient is denoted as L. Similarly, the coefficient of the capacitor, in which the current is proportional to the rate of voltage change, is represented as C.

In general, although these coefficients are designed on the assumption that they are constant in any case, they are not invariable in actuality. For example, they vary in accordance with environmental temperatures, frequencies, and magnitudes of input. Therefore, we design these coefficients such that the overall system can be robust. However, in most cases this robust design cannot be repeated.
Figure 1
Relationship between the voltage and current of typical elements
Figure 2
Examples of waveforms in four terminal circuits
Figure 3
Input and output of an electronic circuit
Table 1
Control factors and levels

Control Factor   Level 1   Level 2   Level 3
A: capacitor     0.1       0.22
B: coil          10        10
C: coil          470       1000      2200
D: coil          4.7       10        20
E: coil          D(0.5)    D(1.0)    D(2.0)
F: coil          D(0.5)    D(1.0)    D(2.0)
G: resistor      430       820       1600
H: As type       H1        H2        H3

Figure 4
Filter circuit

4. Experimental Procedure

Next we considered an LC-type high-pass filter circuit (Figure 4). L is a simulated inductor consisting of two operational amplifiers, four resistors, and one capacitor. We measured the outputs by generating pulses with a function generator, then synchronizing the waveforms of the input and output of the filter with a storage oscilloscope (Figure 5).

Figure 5
Measurement circuit for a filter circuit

As control factors we assigned six elements chosen from the resistances and capacitances to an L18 orthogonal array, since the levels of the elements can be selected independently. The types and levels of the control factors are given in Table 1. We set 250 and 500 mV as the levels of the signal factor. For the time, t, as an indicative factor, we measured 15 points at a time interval of 8 µs between 8 and 120 µs. Finally, as a noise factor, we picked low and high environmental temperatures, N1 and N2.
Total variation:

ST = Σ y² = 9,395,685.60    ( f = 60)    (1)

Effective divider (for each time T and noise N; for example, T1, N2):

r12 = M²112 + M²212    (2)

Linear equations (for example, T1, N2):

L12 = M112 y112 + M212 y212    (3)

Sums over the 15 time points:

ΣL (N1) = 4,474,591.41    (4)

ΣL (N2) = 4,814,868.42    (5)

Σr = r11 + r12 + ⋯ + r152 = 9,494,979.22    (6)

Variation of proportional term:

Sβ = (L11 + L12 + ⋯ + L151 + L152)²/(r11 + r12 + ⋯ + r152)
   = (4,474,591.41 + 4,814,868.42)²/9,494,979.22 = 9,088,388.92    ( f = 1)    (7)
Table 2
Results of experiment 1 (mV)

Time   Noise   Measured M1      y1              Measured M2      y2              Effective Divider   Linear Equation
T1     N1      M111 (263.2)     y111 (284.2)    M211 (500.0)     y211 (578.9)    r11 (319,274)       L11 (364,251)
       N2      M112 (257.9)     y112 (294.7)    M212 (513.2)     y212 (605.3)    r12 (329,886)       L12 (386,643)
T2     N1      M121 (257.9)     y121 (284.2)    M221 (500.0)     y221 (565.8)    r21 (316,512)       L21 (356,195)
       N2      M122 (257.9)     y122 (331.6)    M222 (513.2)     y222 (592.1)    r22 (329,887)       L22 (389,385)
⋮
T15    N1      M1151 (252.6)    y1151 (226.3)   M2151 (486.8)    y2151 (328.9)   r151 (300,781)      L151 (217,272)
       N2      M1152 (252.6)    y1152 (236.8)   M2152 (500.0)    y2152 (328.9)   r152 (313,807)      L152 (224,266)
Figure 6
Response graph of SN ratio
Variation of time (indicative factor):

ST×β = Σ [(Lt1 + Lt2)²/(rt1 + rt2)] - Sβ = 9,318,717.24 - 9,088,388.92 = 230,328.32    ( f = 14)    (8)

Variation of noise:

SN = 4,474,591.41²/4,651,433.52 + 4,814,868.42²/4,843,545.71 - 9,088,388.92 = 2444.63    ( f = 1)    (9)

Error variation:

Se = ST - Sβ - ST×β - SN = 9,395,685.60 - 9,088,388.92 - 230,328.32 - 2444.63 = 74,523.73    ( f = 44)    (10)

Error variance:

Ve = Se/44 = 1693.72    (11)

Total noise variance:

VN = (SN + Se)/45 = (2444.63 + 74,523.73)/45 = 1710.40    (12)

SN ratio:

η = 10 log [(1/Σr)(Sβ - Ve)/VN] = -32.52 dB    (13)

Sensitivity at each time point (i = 1, 2, 3, ..., 15):

Si = 10 log β²i    (14)

For T = 1:

β²1 = [1/(r11 + r12)][(L11 + L12)²/(r11 + r12) - Ve]    (15)

Table 3
Results of SN ratio confirmatory experiment (dB)

Configuration   Estimation   Confirmation
Optimal         -12.47       -17.69
Current         -19.69       -24.44
Gain              7.22         6.75
Figure 7
Response graph of sensitivity
For T = 15:

β²15 = [1/(r151 + r152)][(L151 + L152)²/(r151 + r152) - Ve]    (16)
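Equations (14)-(16) can be checked against the T = 1 values of Table 2, with Ve = 1693.72 from equation (11); the function name is ours:

```python
import math

def time_sensitivity(L1, L2, r1, r2, Ve):
    """Sensitivity S_i = 10 log beta_i^2 at one time point, equations (14)-(16)."""
    beta_sq = ((L1 + L2) ** 2 / (r1 + r2) - Ve) / (r1 + r2)
    return 10.0 * math.log10(beta_sq)

# T = 1 values of Table 2, with Ve = 1693.72 from equation (11)
S1 = time_sensitivity(364251.0, 386643.0, 319274.0, 329886.0, 1693.72)
print(round(S1, 2))  # about 1.26 dB, i.e., a voltage gain near 1.16 at T = 1
```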
Reference

Yoshishige Kanemoto, Naoki Kawada, and Hiroshi Yano, 1998. New evaluation method for electronic circuit. Quality Engineering, Vol. 6, No. 3, pp. 57-61.
CASE 15
Abstract: In our research we verified highly repeatable experimentation (measurement characteristics) with regard to resonant circuits, such as oscillator and modulation circuits, typical in the electrical engineering field.
1. Introduction
The majority of tasks requisite for electrical circuit design involve adjustment of quality characteristics to target values. More specifically, by implementing product confirmatory experiments called environmental or reliability tests, we have made a strong effort to match the quality characteristics to final specifications through modification of part characteristics if the quality characteristics go beyond the standards. However, since this procedure is regarded only as adjustment, and robustness cannot be built in, complaints in the marketplace regarding products have not been reduced significantly. This is the reason that parameter design in quality engineering is needed desperately. On the other hand, most conventional examples to which parameter design has been applied have not had sufficient repeatability, because their designs are implemented based on measurement of powers such as output voltage. We assume that one of the most important characteristics related to experimentation is reproducibility. Even if a certain function is stabilized in a laboratory, if it cannot be reproduced under mass-production conditions, the experiments will be wasted.
2. Generic Function
As shown in Figure 1, the experimental circuit is designed around a combination of a Colpitts-type ceramic oscillator and a variable capacitor (voltage-variable capacitor). It consists of an inverter IC, a ceramic oscillator (C3), two resistors (R1 and R2), two capacitors (C1 and C2), and two variable capacitors (Cv1 and Cv2), whose capacitance varies according to the voltage.

As illustrated in Figure 2, in actual modulation, when an alternating current such as a human voice is superimposed on a direct bias voltage, the output voltage alternates between the lower and upper voltages around the bias voltage. This alternating signal makes the terminal capacitance, and consequently the frequency, change.

Therefore, the proportional relationship between the voltages imposed on the variable capacitors and the frequency is the function of the modulation circuit. We selected the voltage, Vt, of the variable capacitor as the signal factor, set the dc bias voltage to 2 V, and altered this voltage within a range of ±1.0 V at intervals of 0.5 V, each interval equivalent to one level. In sum, as shown in Table 1, we laid out five levels.

The generic function of a frequency-modulation circuit is to maintain the relationship between a signal voltage, Vt, and a frequency deviation (the difference between an initial and a deviated frequency; Figure 3). Obviously, the frequency deviation should not fluctuate with usage and environmental conditions. In our study, noise factors such as environmental temperature or time drift can be regarded as contained in the fluctuation of the power supply's voltage. In the end, we chose only the power source's voltage as a noise factor, ranging ±0.5 V (N1 and N2) around the power source's initial voltage.
Based on the design of experiments mentioned above, we collected measured data. As an example, the data of experiment 1 are analyzed below (Table 2).

Figure 1
Experimental circuit

Figure 2
Frequency modulation: voltage Vt and terminal capacitance

Figure 3
Generic function

Linear equations:

L1 = M1y11 + M2y12 + M4y14 + M5y15
   = (−1.0)(−588) + (−0.5)(−315) + (0.5)(254) + (1.0)(581) = 1453.5  (1)

L2 = (−1.0)(−542) + (−0.5)(−264) + (0.5)(322) + (1.0)(661) = 1496.0  (2)
Table 1
Signal factor and levels (V)

Signal             M1     M2     M3    M4    M5
Absolute voltage   1.0    1.5    2.0   2.5   3.0
Signal voltage    −1.0   −0.5    0.0   0.5   1.0
Effective divider:

r = M1² + M2² + M3² + M4² + M5² = (−1.0)² + (−0.5)² + 0² + (0.5)² + (1.0)² = 2.5  (3)
Table 2
Results of experiment 1

                 Raw Data               Converted Data
Signal           N1         N2          N1     N2
M1 (−1.0)        452,698    452,744    −588   −542
M2 (−0.5)        452,971    453,022    −315   −264
M3 (0.0)         453,246    453,304     −40     18
M4 (0.5)         453,540    453,608     254    322
M5 (1.0)         453,867    453,947     581    661
Total variation:

ST = Σy² = 1,753,035

Variation of proportional term:

Sβ = (L1 + L2)²/2r = (1453.5 + 1496.0)²/(2)(2.5) = 1,739,910.05  (4)

Noise variation:

SN = (L1 − L2)²/2r = (1453.5 − 1496.0)²/(2)(2.5) = 361.25  (5)

Error variation:

Se = ST − (Sβ + SN) = 1,753,035 − (1,739,910.05 + 361.25) = 12,763.70  (6)

Error variance:

Ve = Se/8 = 12,763.70/8 = 1595.46  (7)
Table 3
Control factors and levels

Control Factor                  1                     2                    3
A:                              CSB 455               B456 F15
B: resistance                   680                   820                  1000
C: resistance                   3.3                   4.7                  5.6
D: capacitance                  560                   680                  820
E: capacitance                  Two levels below C1   One level below C1   Same as C1
F: capacitor type               Ceramic               Chip                 Film
G: capacitor type               Ceramic               Chip                 Film
Figure 4
Response graph of SN ratio
Total noise variance:

VN = (Se + SN)/9 = 1458.33  (8)

SN ratio:

η = 10 log {[(1/2r)(Sβ − Ve)]/VN} = 23.77 dB  (9)

Sensitivity:

S = 10 log (1/2r)(Sβ − Ve) = 55.41 dB  (10)
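The whole decomposition above can be checked end to end with a short script. The converted data are taken from Table 2; all variable names are ours, not the handbook's notation.

```python
import math

# Converted frequency deviations from Table 2 (rows M1..M5, columns N1, N2)
y_n1 = [-588, -315, -40, 254, 581]
y_n2 = [-542, -264, 18, 322, 661]
M = [-1.0, -0.5, 0.0, 0.5, 1.0]           # signal voltage levels (V)

r = sum(m * m for m in M)                  # effective divider, eq. (3)
L1 = sum(m * y for m, y in zip(M, y_n1))   # linear equation for N1, eq. (1)
L2 = sum(m * y for m, y in zip(M, y_n2))   # linear equation for N2, eq. (2)

ST = sum(y * y for y in y_n1 + y_n2)       # total variation
S_beta = (L1 + L2) ** 2 / (2 * r)          # proportional term, eq. (4)
S_N = (L1 - L2) ** 2 / (2 * r)             # noise variation, eq. (5)
Se = ST - (S_beta + S_N)                   # error variation, eq. (6)
Ve = Se / 8                                # error variance, eq. (7)
VN = (Se + S_N) / 9                        # total noise variance, eq. (8)

eta = 10 * math.log10(((S_beta - Ve) / (2 * r)) / VN)  # SN ratio, eq. (9)
S_db = 10 * math.log10((S_beta - Ve) / (2 * r))        # sensitivity, eq. (10)
```

Running it reproduces L1 = 1453.5, L2 = 1496.0, an SN ratio of about 23.77 dB, and a sensitivity of about 55.41 dB, matching the values in the text.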
3. Results of Experiment
As control factors we selected seven part constants and part types, shown in Figure 1. Additionally, the voltage of the power supplied to the inverter IC was chosen as another control factor. All the control factors were assigned to an L18 orthogonal array. Table 3 lists the control factors and levels.

The optimal configuration was A1B2C2D2E2F2G2H2.
Table 4
Confirmatory experiment (dB)

                 SN Ratio                     Sensitivity
Configuration    Estimation   Confirmation   Estimation   Confirmation
Optimal          37.53        37.27          60.25        60.32
Current          29.78        29.77          57.32        57.24
Gain              7.74         7.5            2.94         3.08
745
However, because there is a radical change near the
boundary between the pass and attenuation bands,
the output voltage changes from 0 to 1 under a certain use and consequently does not function well as
an addable characteristic. Our procedure can be applied to a wide range of devices, including digital
circuits.
Reference
Yoshishige Kanemoto, 1998. Robust design for frequency modulation circuit. Quality Engineering, Vol. 6, No. 5, pp. 33–37.
CASE 16
1. Introduction
As a measurement device for the amount of toner charge, we selected a blow-off charge measurement apparatus, used widely because of its quick measurability and portability. Figure 1 depicts the magnified toner separator of the measurement system.
Our measurement procedure was as follows. Developing solvent consisting of toner and carrier was poured into a measurement cell that included a mesh at the bottom through which only the toner could pass. Then compressed air was blown into the cell, pushing the toner out while leaving the carrier, which has a charge equivalent but opposite in polarity to that of the separated toner. Finally, the amount of the charge was detected with an electrometer, and the polarity was read as opposite.
2. Generic Function
The blow-off charge measurement separates the toner by stirring the developing solvent with blown air. However, this process might alter the state of charge in the developing solvent to be measured and disturb the actual measured values.
We considered it ideal that the blown-air energy be used to separate the toner with as little disturbance of the developing solvent as possible. In this ideal state, we hoped to find the true charge, because there is no unnecessary subsequent variation of charge.
The mass separation rate is assumed to follow

Y/M* = 1 − exp(−βM)  (1)

so that

y = −ln(1 − Y/M*) = βM  (2)

Now the total mass of toner, M*, could be measured from the mass density of toner in the developing solvent measured. On the other hand, the separated mass of toner, Y, is the total mass of toner separated from the beginning to the end of the blown air (Figure 3).
Additionally, we assumed that the mass separation rate, Y/M*, is affected by the total amount of developing solvent measured. From our experience we assumed a relationship between blown-air time, M, and mass separation rate, Y/M* (Figure 2).

Figure 1
Toner separator

Figure 2
Blown-air time and mass separation rate: Y/M* = 1 − exp(−βM)
Then we made signal factor M* and control factor B independent by setting the nominal, 90%, and 110% values of factor BW as the levels of total mass of toner, M*. The signal and noise factors are shown in Table 1.
On the other hand, to optimize the measurement apparatus by focusing on its separation function, we dealt with the toner charge amount as a noise factor, N. With the true values of its levels unknown, we prepared two levels close to the minimum and maximum.
As an example, we chose experiment 9; the converted data are shown in Table 2. First, we calculated signal factor M*. Setting the mass density of toner at W = 6 wt % and using 898.0 mg as the amount of developing solvent, we computed the total mass of toner, M*:

M*1 = (amount of developing solvent used)(W) = (898.0)(0.06) = 53.88  (6)

As a next step, we converted the mass of toner separated, Y11, into the output, y11. The mass of toner separated was obtained from the experiment as Y11 = 2.0 mg, so

y11 = −ln(1 − Y11/M*1) = −ln(1 − 2.0/53.88) = 0.0378  (7)

Total variation:

ST = y11² + y12² + ⋯ + y33² = 0.3117  (fT = 18)  (8)

Effective divider:

ri = (M1M*i)² + (M2M*i)² + (M3M*i)²  (9)

Linear equation:

Li = (M1M*i)y1i + (M2M*i)y2i + (M3M*i)y3i  (10)

Variation of proportional term:

Sβ = (L1 + L2 + ⋯ + L6)²/(r1 + r2 + ⋯ + r6) = 0.2703  (f = 1)  (11)

Figure 3
Total mass of toner and mass separation rate: Y/M* = 1 − exp(−βMM*)
Table 1
Signal and noise factors

Factor                           1          2      3
Signal factors
  M: blown-air time              1          2      3
  M*: total mass of toner        BW − 10%   BW     BW + 10%
Noise factor
  N: toner charge amount         High       Low
Table 2
Logarithmized output data of experiment 9

Signal        Noise        Signal Factor, M             Effective    Linear
Factor, M*    Factor, N    M1       M2       M3         Divider, r   Equation, L
54            N1           0.0378   0.1358   0.1343     40,713       38.38
              N2           0.1027   0.1570   0.2086     28,641       47.14
60            N1           0.0479   0.1045   0.1074     50,528       34.81
              N2           0.1236   0.1340   0.2000     35,092       49.63
66            N1           0.0370   0.1001   0.1197     60,582       39.24
              N2           0.0897   0.1379   0.2106     42,151       54.72
Noise variation (grouping the N1 rows and the N2 rows):

SN = (L1 + L3 + L5)²/(r1 + r3 + r5) + (L2 + L4 + L6)²/(r2 + r4 + r6) − Sβ = 0.0297  (fN = 1)  (12)

Error variation:

Se = ST − Sβ − SN = 0.0117  (fe = 16)  (13)

Error variance:

Ve = Se/fe = 0.0117/16 = 0.00073  (14)

Total noise variance:

VN = (SN + Se)/17 = 0.00244  (15)

SN ratio:

η = 10 log {[1/(r1 + r2 + ⋯ + r6)](Sβ − Ve)/VN} = −33.7 dB  (16)

Sensitivity:

S = 10 log [1/(r1 + r2 + ⋯ + r6)](Sβ − Ve) = −59.8 dB  (17)
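The SN ratio and sensitivity above can be reproduced directly from the effective dividers and linear equations of Table 2; the grouping of N1 and N2 rows follows equation (12). Variable names are ours.

```python
import math

# Effective dividers r and linear equations L from Table 2 (experiment 9),
# rows ordered (M* = 54, 60, 66) x (N1, N2)
r = [40713, 28641, 50528, 35092, 60582, 42151]
L = [38.38, 47.14, 34.81, 49.63, 39.24, 54.72]
ST = 0.3117                              # total variation, eq. (8), fT = 18

r_tot = sum(r)
S_beta = sum(L) ** 2 / r_tot             # proportional term, eq. (11)

# Noise variation, eq. (12): N1 rows are even indices, N2 rows odd indices
L_N1, r_N1 = L[0] + L[2] + L[4], r[0] + r[2] + r[4]
L_N2, r_N2 = L[1] + L[3] + L[5], r[1] + r[3] + r[5]
S_N = L_N1 ** 2 / r_N1 + L_N2 ** 2 / r_N2 - S_beta

Se = ST - S_beta - S_N                   # error variation, eq. (13)
Ve = Se / 16                             # error variance, eq. (14)
VN = (S_N + Se) / 17                     # total noise variance, eq. (15)

eta = 10 * math.log10(((S_beta - Ve) / r_tot) / VN)  # SN ratio, eq. (16)
S_db = 10 * math.log10((S_beta - Ve) / r_tot)        # sensitivity, eq. (17)
```

The result is about −33.7 dB for the SN ratio and −59.8 dB for the sensitivity, as stated in the text.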
Table 3
Control factors and levels

Control Factor            1        2                3
A:                        Yes      No
B:                        Small    Mid              Large
C: primary pressure       Low      Mid              High
D: secondary pressure     Low      Mid              High
E: airflow amount         Low      Mid              Large
F: airflow length         Short    Mid              Long
G:                        No. 1    No. 2            No. 3
H:                        Closed   Closed (dummy)   Open
Figure 4
Response graphs

Current configuration: A1B3C2D2E2F1G2H1
Table 4
Confirmatory experiment (dB)

                 SN Ratio                     Sensitivity
Configuration    Estimation   Confirmation   Estimation   Confirmation
Current          −29.8        −28.5          −55.8        −54.3
Optimal          −12.4        −17.4          −38.0        −41.0
Gain              17.4         11.1           17.8         13.3
To ascertain our achievement, we compared the optimal and current configurations using the unit-mass charge amount q/m, expressed as

q/m = [charge amount detected, q (μC)]/[separated mass of toner, m (g)]  (18)

The loss function was then used for an economic comparison, with loss coefficient

A0 = 17,778 yen  (21)

and measured variances of

σ0² = 0.794  (22)

for the current configuration and

σ² = 0.047  (23)

for the optimal configuration.
Assuming a type of developing solvent whose production volume is 500 lots per year, we obtained the corresponding annual savings.

Reference
Kishio Tamura and Hiroyuki Takagiwa, 1999. Optimization of blow-off charge measurement system. Quality Engineering, Vol. 7, No. 2, pp. 45–52.
CASE 17
1. Introduction
2. Study of Measurement
Figure 1 illustrates the measurement circuit, and Figure 2 depicts the waveforms measured. Figure 2a indicates the waveform of the terminal voltage applied to a capacitor, and Figure 2b shows the waveform of the corresponding charging or discharging current. We divided the voltage waveform of one charge-to-discharge cycle (Figure 2a) into 10 equal time frames and set each of the 10 voltage values (V1 to V10) at each point of time (T1 to T10) as 10 different levels. However, when we analyzed them, to assess the linearity of the waveform, we associated the voltage values V1 to V5 at charging points of time T1 to T5 with signal factors M6 to M10. On the other hand, by subtracting V5 from V6 to V10 at discharging points of time T6 to T10, we created V*6 to V*10 and then related each of V*10 to V*6 with M1 to M5 in reverse order.
Since we established the relationship between terminal voltage and electric charge,

Q = CV  (1)

as the generic function, we needed to integrate the current over time. Dividing the current waveform shown in Figure 2b at the same 10 points of time as those used for the voltage, we calculated the integral of the current from zero (T0) to each point of time (the accumulated area between the current waveform and the time axis) and set each integral to each of the electrical charges Q1 to Q10. In actuality, we measured Q1 to Q10 in Figure 2b by reading the waveform from a digital oscilloscope, computing the area in each of the 10 equal time frames, and summing the areas. When analyzed, for the same reason as in the case of the signal factors, the accumulated electrical charges were likewise related in reverse order.
Each product has variations in the designed voltage, and we often adjust them. The voltage levels, taken as an indicative factor, were:

P1: 1 kV
P2: 2 kV
P3: 3 kV

and the noise factor levels were:

N1: predegradation state
N2: postdegradation state

Figure 1
Measurement circuit
3. SN Ratio
Table 2 shows the data for experiment 1. We computed the SN ratio and sensitivity of experiment 1 as follows.

Total variation:

ST = (11.67)² + (11.05)² + ⋯ + (127.44)² = 169,473.52  (f = 60)  (3)

Effective dividers:

r3 = 569.16    r4 = 869.50
r5 = 1097.47   r6 = 1697.48  (4)

Linear equations:

L3 = 3945.39   L4 = 4558.54
L5 = 7542.77   L6 = 8726.45  (5)

Variation of proportional term:

Sβ = (L1 + L2 + L3 + L4 + L5 + L6)²/(r1 + r2 + r3 + r4 + r5 + r6) = 160,059.15  (f = 1)  (6)

Figure 2
Waveform measured
Table 1
Control factors

A: conducting film material
B: pretreatment condition
C: forming temperature
D: forming pressure
E: forming time
F: posttreatment A
G: posttreatment B
H: impregnation condition

Variation of indicative factor:

SP = (L1 + L2)²/(r1 + r2) + (L3 + L4)²/(r3 + r4) + (L5 + L6)²/(r5 + r6) − Sβ = 197.16  (f = 2)  (7)

Variation between linear equations:

SN(P) = L1²/r1 + L2²/r2 + L3²/r3 + L4²/r4 + L5²/r5 + L6²/r6 − Sβ − SP = 4124.22  (f = 3)  (8)

Error variation:

Se = ST − Sβ − SP − SN(P) = 5092.99  (9)

Error variance:

Ve = Se/54 = 94.31  (10)

Total noise variance:

VN = (Se + SN(P))/57 = 161.71  (11)

SN ratio (with r = r1 + r2 + ⋯ + r6):

η = 10 log {[(1/r)(Sβ − Ve)]/VN} = −6.74 dB  (12)

Sensitivity:

S = 10 log (1/r)(Sβ − Ve) = 15.35 dB  (13)
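The decomposition with an indicative factor can be checked with the printed values alone; the arithmetic below reproduces equations (9) to (11). Names are ours.

```python
# Variance decomposition for experiment 1 (values as printed in the text)
ST = 169473.52    # total variation, f = 60
S_b = 160059.15   # proportional term S_beta, f = 1
S_P = 197.16      # indicative-factor (voltage) effect, f = 2
S_PN = 4124.22    # remaining between-L variation S_N(P), f = 3

Se = ST - S_b - S_P - S_PN   # error variation, eq. (9)
Ve = Se / 54                 # error variance, eq. (10)
VN = (Se + S_PN) / 57        # pooled noise variance, eq. (11)
```

This yields Se = 5092.99, Ve = 94.31, and VN = 161.71, matching the text.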
Table 2
Results of experiment 1 (rows M1 to M10; each cell lists the signal M and the measured charge Y)

        P1                          P2                            P3
        N1           N2             N1            N2              N1            N2
        M     Y      M     Y       M      Y      M      Y        M      Y      M      Y
M1      3.01  11.67  4.26  25.45   7.17   30.25  7.48   48.59    8.98   38.81  8.98   38.31
M2      2.70  11.05  3.63  28.03   6.86   28.03  6.55   42.38    8.04   36.13  8.04   34.69
⋮
M10     7.11  50.98  8.36  39.38   13.74 105.81  15.30  81.84    21.99 147.50  21.99 127.44
r, L    r1    L1     r2    L2      r3     L3     r4     L4       r5     L5     r6     L6
Figure 3
Response graphs

Optimal configuration: A1B1C2D2E1F1G1H1
Table 3
Results of confirmatory experiment (dB)

                 SN Ratio                     Sensitivity
Configuration    Estimation   Confirmation   Estimation   Confirmation
Optimal           9.81        10.69          21.43        12.69
Current          −3.63        −2.41          15.83        11.66
Gain             13.44        13.10           5.60         1.03
757
the optimal conguration. On the other hand, we
noticed that it was unnecessary to study the indicative factor. We could conduct an equivalent experiment with only a noise factor by xing the indicative
factor at a maximum voltage (3 kV). By doing so,
we could have scaled down the experiment by onethird.
Owing to the improved linearity of the generic
function by parameter design, tolerable voltage, one
of the important quality characteristics, was also
ameliorated as follows:
Optimal conguration: no failures with all samples
up to 5.00 kV
Current conguration: smoke-and-re breakdowns
with all samples at 4.25 kV
Reference
Yoichi Shirai, Tatsuru Okusada, and Hiroshi Yano, 1999. Evaluation of the function for film capacitors. Quality Engineering, Vol. 7, No. 6, pp. 45–50.

Figure 4
Characteristic of film capacitor
CASE 18
1. Introduction
Miniaturizing the width of an electric wire is an essential technology not only for downsizing and high integration of integrated circuits (ICs) but also for performance enhancement of transistors. Currently, a 1.0-μm-wide wire is used as the smallest-width wire for a circuit on a printed circuit board. However, variability in the wire width causes fluctuation of a transistor's threshold voltage, and the ICs malfunction. Additionally, since the fluctuation of a transistor's threshold voltage is affected more strongly by variability in width as the width decreases, stabilizing the width is regarded as important in improving the degree of integration.
Until now we have focused especially on the patterning process in IC manufacturing, to improve the ability to pattern circuits from a mask to a photoresist using the line-and-space pattern of one-to-one wire width. Figure 1 shows the process that we researched. First, a photoresist applied onto a wafer was exposed to ultraviolet light through a mask. Only the exposed part of the photoresist was dissolved with solvent, and as a result, a pattern was formed. As a next step, after metal material was sputtered onto the patterned wafer, a metal pattern (circuit) was created through dissolution of the remaining photoresist.
On the other hand, considering that this experiment was also conducted to accumulate technical know-how that could be widely used, we laid out not only specific circuit patterns but also various patterns designed as shown in Figure 2, whose wire widths range from 0.4 to 1.2 μm, in the middle and at the edge of the mask surface, grouped by type.
By utilizing simple patterns, we also tested the functionality of the electrical characteristics of a pattern, together with patternability, in accordance with the wire width. We set the photoresist type, prebaking temperature, and developing time as control factors, assigned them to an L18 orthogonal array, and investigated the patternability. As a result, we found that exposure time, focusing depth, developing time, and descum time greatly affect patternability.

2. Evaluation Characteristics and Experiment
On the basis of our experimental results and experience, we selected four control factors and levels (Table 1). As a noise factor, we chose two pattern positions, in the center and at the edge of the wafer. These factors were allocated to an L9 orthogonal array. Although an L18 is recommended for such a case in the field of quality engineering, we used an L9 because we knew that there was no interaction among the control factors. As evaluation characteristics, we selected the current and voltage characteristics of a circuit. More specifically, we proceeded by setting the wire width, M, and the applied voltage, M*, as signal factors.
Figure 1
Patterning process

With resistivity ρ, circuit length L, and wire height t, let

k = ρL/t  (1)

Then the current y through a wire of width M at applied voltage M* is

y = M*/R = (1/k)MM*  (2)

Total variation:

ST = 162,302  (f = 98)  (3)

Effective divider:

r = (2){[(1.2)(0.5)]² + [(1.2)(1.0)]² + ⋯ + [(0.4)(4.0)]²} = 336.35  (4)
Table 1
Control factors and levels

Control Factor        1          2        3
C: exposure time      Short      Middle   Long
D: focusing depth     Standard   Deep     Deeper
G: developing time    Short      Middle   Long
H: descum time        No         Yes

Variation of proportional term:

Sβ = [(1.2)(0.5)(12.50 + 12.53) + ⋯ + (0.4)(4.0)(41.98 + 40.25)]²/r = 160,667  (f = 1)  (5)

Error variation:

Se = ST − Sβ = 162,302 − 160,667 = 1635  (f = 97)  (6)

Error variance:

Ve = Se/(n − 1) = 1635/(98 − 1) = 16.86  (7)

SN ratio:

η = 10 log {[(1/r)(Sβ − Ve)]/Ve} = 10 log {[(1/336.35)(160,667 − 16.86)]/16.86} = 10 log 28.3 = 14.52 dB  (8)

Sensitivity:

S = 10 log (1/r)(Sβ − Ve) = 26.79 dB  (9)

Since the control factors and levels for our experiment did not include the values for the current configuration, we conducted an extra experiment for the current configuration, whose results are shown at the bottom of Table 3. The SN ratio of the current configuration is

η0 = 8.29 dB  (10)
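The effective divider and the SN ratio of experiment 7 can be verified numerically; the script below uses the signal levels and the printed variations. Variable names are ours.

```python
import math

# Signal levels: wire widths M (um) and applied voltages M* (V); M* = 0 excluded
widths = [1.2, 1.0, 0.8, 0.7, 0.6, 0.5, 0.4]
voltages = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0]

# Effective divider for the double signal with 2 noise positions, eq. (4):
# r = 2 * sum over all (width * voltage)^2 products
r = 2 * sum(w * w for w in widths) * sum(v * v for v in voltages)

# Printed intermediate results for experiment 7
S_beta, ST = 160667.0, 162302.0
Se = ST - S_beta                # error variation, eq. (6), f = 97
Ve = Se / (98 - 1)              # error variance, eq. (7)

eta = 10 * math.log10(((S_beta - Ve) / r) / Ve)   # SN ratio, eq. (8)
S_db = 10 * math.log10((S_beta - Ve) / r)         # sensitivity, eq. (9)
```

This reproduces r = 336.35, an SN ratio of about 14.52 dB, and a sensitivity of about 26.79 dB.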
3. Optimal Configuration
Based on the SN ratios and sensitivities illustrated in Table 3, the level-by-level averages of the SN ratio and sensitivity are shown in Table 4 and in the factor-effect diagram (Figure 3). According to Figure 3, the optimal configuration is C3D1G3H2. We estimated the process average by selecting the two effective factors, C and D.

4. Sensitivity Analysis
The resistance of a formed circuit can be expressed as follows, with circuit length L, wire width M, wire height t, and resistivity ρ:
Table 2
Data for experiment 7 (mA)

                        M*1     M*2      M*3      M*4      M*5      M*6      M*7      M*8
                        (0 V)   (0.5 V)  (1.0 V)  (1.5 V)  (2.0 V)  (2.5 V)  (3.0 V)  (4.0 V)
M1 (1.2 μm)   Middle    0       12.50    24.90    37.12    49.10    60.71    71.91    92.87
              Edge      0       12.53    24.99    37.28    49.30    60.97    72.23    93.32
M2 (1.0 μm)   Middle    0       10.74    21.41    31.95    42.28    52.34    62.07    80.47
              Edge      0       10.77    21.49    32.06    42.44    52.55    62.33    80.77
M3 (0.8 μm)   Middle    0       9.210    18.38    27.42    36.32    45.01    53.45    69.44
              Edge      0       9.246    18.46    27.56    36.51    45.25    53.75    69.84
M4 (0.7 μm)   Middle    0       8.525    17.00    25.36    33.61    41.65    49.49    64.39
              Edge      0       8.528    17.00    25.37    33.62    41.68    49.51    64.43
M5 (0.6 μm)   Middle    0       7.746    15.46    23.09    30.61    37.96    45.14    58.82
              Edge      0       7.762    15.48    23.11    30.63    37.99    45.18    58.88
M6 (0.5 μm)   Middle    0       6.864    13.71    20.48    27.15    33.70    40.11    52.36
              Edge      0       6.824    13.61    20.32    26.96    33.46    39.82    52.00
M7 (0.4 μm)   Middle    0       5.461    10.91    16.30    21.65    26.90    32.05    41.98
              Edge      0       5.222    10.43    15.61    20.73    25.77    30.71    40.25
Table 3
Analysis of control factors (dB); factors C, D, G, and H were assigned to columns 1 to 4 of the L9

No.                      SN Ratio   Sensitivity
1                         8.13      24.69
2                         4.34      24.18
3                         1.60      21.29
4                         8.46      25.02
5                         8.46      25.23
6                         1.15      25.39
7                        14.52      26.79
8                         8.12      26.44
9                         4.11      25.03
Current configuration     8.29      25.22
Table 4
Level-by-level averages of SN ratio and sensitivity (dB)

Factor    Level   SN Ratio   Sensitivity
C         1        3.62      23.39
          2        6.01      25.21
          3        8.92      26.10
D         1       10.37      25.50
          2        6.96      25.28
          3        1.22      23.92
G         1        5.80      25.50
          2        5.64      24.76
          3        7.11      24.44
H         1        5.94      24.62
          2        6.67      25.45
Average            6.18      24.90
R = ρL/(tM)  (13)

Figure 3
Response graphs

Substituting the design values (ρL/t = 0.004928) and the signal voltage M* = 1.0 V for a 1.0-μm-wide wire gives the ideal current

y = M*M/(ρL/t) = 0.02029 A = 20.29 mA  (14)
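The ideal-current calculation can be sketched as below. We assume here that the printed constant 0.004928 is ρL/t in Ω·cm and that the width M is expressed in centimeters; these unit choices are ours, made so that the printed 20.29 mA is reproduced.

```python
# Ideal current through a patterned line, from R = rho*L/(t*M), eq. (13).
# Assumption: k = rho*L/t = 0.004928 ohm*cm as printed, width M in cm.
k = 0.004928      # rho*L/t (ohm*cm), value as printed
M = 1.0e-4        # wire width: 1.0 um expressed in cm
V = 1.0           # signal voltage M* (V)

R = k / M                   # line resistance (ohm)
I_mA = (V / R) * 1000.0     # ideal current (mA), eq. (14)
```

Under these assumptions the current is about 20.29 mA, matching equation (14).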
Table 5
SN ratio comparison of optimal and current configurations (dB)

Configuration   Estimation   Confirmation
Optimal         13.11        14.52
Current                       8.29
Gain                          6.23
Reference
Yosuke Goda and Takeshi Fukazawa, 1993. Parameter design of fine line patterning for IC fabrication. Quality Engineering, Vol. 1, No. 2, pp. 29–34.
CASE 19
1. Introduction
The objective of this study was to minimize inductance variation from transformer to transformer after processing. The inductance should be within calculated limits. A ferrite pot core transformer (Figure 1) was the subject of the experiment.
Manufacturing specifications are determined as follows:

L = N²AL × 10⁻⁶  (1)

where L is in millihenries, N is the number of primary turns, and AL is the inductance factor from the core manufacturer's data sheets (specified with a tolerance). Initial out-of-tolerance inductance indicates an incorrect number of primary turns or incorrect core material. Large drops in inductance after processing indicate separated or cracked cores. This results in high magnetizing currents and flatter hysteresis loops. The present distribution of transformer inductance is shown in Figure 2, where 8.2% of the products are below the lower specification limit.
3. Results of Experiment
A regular analysis was conducted using inner and outer orthogonal array concepts. This analysis identified the most significant factors and the interactions between noise and control factors. The following equations were the basis for the variation calculations:

ST = Σ yi² − CF  (2)

Sm = T²/n  (3)

η = 10 log {[(1/n)(Sm − Ve)]/Ve}  (4)

Ve = [1/(n − 1)] Σ (yi − ȳ)²  (5)

Figure 1
Pot core transformer

Figure 2
Present PS6501 auxiliary distribution curve

For run 1:

η = 10 log {[(1/4)(378.70 − 0.117)]/0.117} = 29.08 dB
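The nominal-the-best SN ratio of equations (3) to (5) can be packaged as a small function and checked against run 1 of Table 2; the function name is ours.

```python
import math

def sn_nominal_best(y):
    """Nominal-the-best SN ratio, following eqs. (3)-(5)."""
    n = len(y)
    T = sum(y)
    Sm = T * T / n                                   # mean variation, eq. (3)
    ybar = T / n
    Ve = sum((v - ybar) ** 2 for v in y) / (n - 1)   # error variance, eq. (5)
    return 10 * math.log10(((Sm - Ve) / n) / Ve)     # SN ratio, eq. (4)

# Run 1 of Table 2: four inductance measurements (mH)
eta1 = sn_nominal_best([9.44, 10.21, 9.54, 9.73])
```

For run 1 this gives Sm = 378.70, Ve = 0.117, and an SN ratio of about 29.08 dB, matching the worked example.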
Figure 3
Transformer process: incoming inspection of cores and materials (factor A); measurement of L and R of core halves (factors L and C); measurement of L0; assembly factors B, D, E, F, G, and H; measurement of L1 after 1 hour; transformer inductance test
Table 1
Factors and levels (factors A to H, two levels each; factor L, eight levels)
Figure 4
Linear graph
Table 2
Orthogonal array and data

No.   Data                              Total   SN (dB)
1      9.44   10.21    9.54    9.73    38.92   29.08
2      9.07    9.68    8.82    8.84    36.41   11.42
3      8.41    7.23    8.87    8.17    32.68   21.42
4     10.20   10.48   10.62   11.08    42.38   29.20
5      9.56    8.39    8.85    7.87    34.67   21.64
6      9.08    9.18    9.30    8.94    36.50   35.59
7      9.30    8.11    9.43    9.04    35.88   23.55
8      9.72    9.83    9.92    9.85    39.32   41.40
9      9.10    8.88    9.43   10.08    37.49   25.07
10     9.63    9.77    9.90    9.73    39.03   38.82
11     9.94    9.17   10.40    9.15    38.66   23.96
12     9.63    7.85    9.52    7.87    34.87   18.87
13    10.11    8.52    9.84    7.83    36.30   18.46
14     9.89   10.65   10.19   10.71    41.44   28.49
15    10.20    9.87   10.87   10.73    41.67   27.00
16     8.72    8.94    9.14    8.91    35.71   34.24
Figure 5
Response graphs

not just the noise factors controlled in the experiment. A confirmation experiment (Table 4) was designed to identify the optimum settings for the conflicting factors.
As a result of the confirmation experiment, optimum factor levels L4, B1, C1, E1, F2, G1, D1, A2, and H1 were selected for production and cost reasons. The process average equation for each combination of noise factors is

u = YL + YB + YC + YE + YF + YG + XD + XG − 5Y − X  (6)
Table 3
Summary table

Factor L (levels 1 to 8):
Level         1      2      3      4      5      6      7      8
X             9.35   9.06   9.00  10.12   9.62   9.90  10.35   9.25
Y             9.05   9.16   8.14  10.31   8.83  10.21   9.73   8.39
Main effect   9.20   9.11   8.57  10.21   9.22  10.06  10.04   8.82
SN            6.11  12.13   3.86  15.21   2.47  13.61   5.45   7.69

Two-level factors (levels 1 and 2; X, Y, and main effect, with SN where printed):
X: 9.62, 9.54   Y: 9.47, 8.98   Main: 9.55, 9.26
X: 9.51, 9.65   Y: 9.51, 8.94   Main: 9.51, 9.30   SN: 11.25, 5.38
X: 9.44, 9.23   Y: 9.92, 9.53                      SN: 6.90, 9.72
X: 9.44, 9.72   Y: 9.35, 9.10                      SN: 10.48, 6.14
X: 9.62, 9.54   Y: 9.08, 9.37                      SN: 9.44, 7.18
X: 9.73, 9.43   Y: 9.59, 8.87   Main: 9.51, 9.17
X: 9.82, 9.13   Y: 9.66, 9.15                      SN: 9.60, 7.02

u(x2, y1) = 10.17 mH
u(x2, y2) = 11.56 mH
Table 4
Confirmation experiment
Figure 6
Comparison of transformer inductance distributions

The confirmation experiment variation was calculated from only two samples; therefore, more data should be obtained to determine the actual inductance distribution. The variation exhibited in the experiment falls within the range predicted in the analysis, although at the upper limit of the confidence interval.
Implementing the new factor levels will improve the capability of the process. A comparison of transformer inductance distributions is shown in Figure 6. Engineering and assembly time will not be wasted on questions regarding out-of-specification transformers. Scrap transformers and production delays will be eliminated.
Savings using the conventional method were calculated to be about 28% annually. This calculation considers yearly production volume, rejection percentage, cost of scrap, and engineering and assembly time. If the costing information, specification limits, and target value are valid, the savings calculated by the quality loss function (QLF) will be reflected in real life. In this case, the annual savings calculated by the QLF are 30.60%. After a more extensive confirmation/production run, the mean can be adjusted to the target value. This will increase the savings from minimizing quality loss to 99.67% annually (when the quality loss after optimization is compared to the quality loss before optimization).
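The QLF comparison above follows the standard quadratic loss L = (A0/Δ0²)σ². The sketch below illustrates the calculation; the cost and variance numbers are hypothetical, not the figures from this case study.

```python
def quality_loss(A0, delta0, sigma2):
    """Average quadratic quality loss L = (A0/delta0^2) * sigma^2 per unit."""
    return (A0 / delta0 ** 2) * sigma2

# Hypothetical illustration: cost A0 incurred at the specification limit
# delta0, with the variance before and after optimization (numbers are ours)
before = quality_loss(A0=100.0, delta0=1.0, sigma2=0.25)
after = quality_loss(A0=100.0, delta0=1.0, sigma2=0.05)
saving_pct = 100.0 * (1 - after / before)   # relative reduction in loss
```

With these made-up numbers the loss drops from 25 to 5 per unit, an 80% reduction; the case study's 99.67% figure comes from the same kind of before/after comparison.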
CASE 20
1. Introduction
In the development of automobile electronics, new technical issues, such as increasing space for control units and growing energy consumption, have emerged. To solve them, the power MOSFET, which is easy to obtain and is driven by low electrical power, is attracting much attention as a key next-generation device in the electronics field and is being developed as a reliable switching element to control large electric currents.
Since a large current flows in an automobile's power MOSFET, its electric resistance (ON resistance) must be lowered to reduce the electrical power loss. In our research we attempted to reduce the ON resistance by looking at the contact resistance of the back gate of a power MOSFET and optimizing the conditions for forming the back gate.
Figure 1
Structure of power MOSFET (gate, aluminum contact, source, silicon substrate, back electrode, drain, load, and power source)

Figure 2
Structure of electric resistances (source to drain; dimensions on the order of 1–10 μm and 100–1000 μm)
Table 1
Factors and levels

Factor                           1         2       3
Signal factor
  M: current                     0.5       1.0     1.5
Noise factors
  I: environmental temperature   Low       Room    High
  J: heat cycle                  Initial   1       2

Effective divider:

r = M1² + M2² + M3² = 0.5² + 1.0² + 1.5² = 3.50  (2)

Linear equations:

L1 = (0.5)(428) + (1.0)(523) + (1.5)(597) = 1632.5  (3)

and similarly for L2 to L9.

Variation of proportional term:

Sβ = (L1 + L2 + ⋯ + L9)²/9r = (1632.5 + 1178.0 + ⋯ + 1167.5)²/9r = 4,399,364.5710  (f = 1)  (4)

Error variation:

Se = ST − Sβ = 4,761,567 − 4,399,364.5710 = 362,202.4290  (f = 26)  (5)

Error variance:

Ve = Se/26 = 362,202.4290/26 = 13,930.8627  (6)

Figure 3
Voltage measurement
Table 2
Voltage data

                          Signal
Noise Factor              M1    M2    M3    Linear Equation
I1: low temperature
  J1 (initial)            428   523   597   L1
  J2 (cycle 1)            295   378   435   L2
  J3 (cycle 2)            365   440   495   L3
I2: room temperature
  J1                      385   485   565   L4
  J2                      288   365   424   L5
  J3                      295   371   425   L6
I3: high temperature
  J1                      372   474   548   L7
  J2                      276   356   418   L8
  J3                      295   375   430   L9
SN ratio:

η = 10 log {[(1/9r)(Sβ − Ve)]/Ve}
  = 10 log {[1/(9)(3.50)](4,399,364.5710 − 13,930.8627)/13,930.8627}
  = 10.00 dB  (7)

Sensitivity:

S = 10 log (1/9r)(Sβ − Ve)
  = 10 log [1/(9)(3.50)](4,399,364.5710 − 13,930.8627)
  = 51.44 dB  (8)
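Because Table 2 gives all nine rows, the full chain for experiment 1 can be recomputed from the raw voltages. Variable names are ours.

```python
import math

# Voltage data from Table 2: rows (I1J1 ... I3J3), columns (M1, M2, M3)
data = [[428, 523, 597], [295, 378, 435], [365, 440, 495],
        [385, 485, 565], [288, 365, 424], [295, 371, 425],
        [372, 474, 548], [276, 356, 418], [295, 375, 430]]
M = [0.5, 1.0, 1.5]                        # signal: current levels

r = sum(m * m for m in M)                  # effective divider, eq. (2)
L = [sum(m * y for m, y in zip(M, row)) for row in data]  # eq. (3)

ST = sum(y * y for row in data for y in row)
S_beta = sum(L) ** 2 / (9 * r)             # proportional term, eq. (4)
Se = ST - S_beta                           # error variation, eq. (5)
Ve = Se / 26                               # error variance, eq. (6)

eta = 10 * math.log10(((S_beta - Ve) / (9 * r)) / Ve)  # SN ratio, eq. (7)
S_db = 10 * math.log10((S_beta - Ve) / (9 * r))        # sensitivity, eq. (8)
```

This reproduces L1 = 1632.5, ST = 4,761,567, an SN ratio of about 10.00 dB, and a sensitivity of about 51.44 dB.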
Table 3
Factors and levels

Control Factor                  1      2              3
A:
B: device temperature 1         20     150            350ᵃ
B′: sputtering temperature 1    Low    Mid            High
C:
D:                              20ᵃ    40
E:
F: organic cleansing            None   IPA            Methanol
G:                              10     20
H: device temperature 2         Low    Low (dummy)    Highᵃ

ᵃ Current condition.
Table 4
SN ratio and sensitivity per experiment (dB); factors A, B (with dummy B′), and C to H were assigned to columns 1 to 8 of the L18

Exp.   SN Ratio   Sensitivity
1      10.00      51.44
2      22.39      29.93
3      23.06      28.19
4      11.77      48.08
5       9.80      25.28
6       9.68      42.23
7       5.27      38.19
8       5.17      38.46
9      17.67      29.81
10     11.12      59.42
11     10.52      61.12
12     14.66      52.72
13      8.14      50.09
14      7.85      48.48
15     10.23      55.78
16     11.80      44.94
17      9.64      59.55
18      7.75      48.55
Table 5
Level-by-level averages of SN ratio and sensitivity (dB)

Control Factor                  SN Ratio               Sensitivity
A:                              12.76  10.19           36.93  53.41
B: device temperature 1         14.95  10.70   8.76    46.74  44.00  44.75
B′: sputtering temperature 1    13.72   9.90  10.79    43.18  47.94  44.38
C:                               9.68  10.89  13.84    47.81  43.80  42.88
D:                              11.64  11.33  11.45    50.96  40.91  43.62
E:                               8.51  12.67  13.24    48.45  44.99  42.06
F: organic cleansing             9.55  13.09  11.78    49.08  44.05  42.37
G:                              11.96  11.39  11.07    42.11  47.63  45.75
H: device temperature 2         11.19  12.04           44.75  45.99
Total average                   11.47                  45.17

The level averages were computed in the usual way; for example, the level average B1 = 14.95 dB, and the total average is T = 11.47 dB.
Figure 4
Response graph of SN ratio and sensitivity
Table 6
SN ratio and sensitivity estimation from confirmatory experiments (dB)

             SN Ratio                     Sensitivity
Condition    Estimation   Confirmation   Estimation   Confirmation
Optimal      22.63        22.96          21.41        23.99
Current       4.62         4.77          39.25        40.35
Gain         18.01        18.19         −17.84       −16.36

To estimate the sensitivity at the optimal condition, we calculated the process average using the six factors A, C, D, E, F, and G, whose differences between the level sensitivity and the total average were large:

S = (A1 − T) + (C3 − T) + (D2 − T) + (E3 − T) + (F3 − T) + (G1 − T) + T
  = A1 + C3 + D2 + E3 + F3 + G1 − 5T
  = 36.93 + 42.88 + 40.91 + 42.06 + 42.37 + 42.11 − (5)(45.17)
  = 21.41 dB  (17)
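The process-average estimate of equation (17) is a one-liner once the level averages are in hand; the values below are taken from Table 5.

```python
# Process-average estimate of sensitivity at the optimal condition, eq. (17).
# Level averages (dB) and total average T are taken from Table 5.
A1, C3, D2, E3, F3, G1 = 36.93, 42.88, 40.91, 42.06, 42.37, 42.11
T = 45.17

# Sum of the six selected level averages, corrected by 5 copies of T
S_hat = A1 + C3 + D2 + E3 + F3 + G1 - 5 * T
```

This gives 21.41 dB, the estimated optimal sensitivity in Table 6.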
Figure 5
Confirmatory experiment results (estimation and confirmation)

confirmatory experiments. For the sake of convenience here, we omit details of the measured data and the calculation procedure. The results are shown in Table 6.
Looking at the results, we see that both the SN ratio and the sensitivity are consistent with the estimation; in fact, the SN ratio was improved by 18.19 dB. This indicates that the standard deviation of the resistance was lowered by approximately 87.5%. On the other hand, the sensitivity at the optimal condition was reduced by 16.36 dB, or approximately 85.7% relative to the current resistance.
Figure 5 shows the results of the confirmatory experiment. We concluded that the back contact resistance is dramatically better stabilized against the fluctuations of environmental temperature selected as noise factors, while the resistance at the optimal condition is considerably reduced compared to the current resistance.
Through our research, we developed a new manufacturing technology that achieves a back contact resistance equal to the current one without the impurity doping and heat treatment processes regarded as essential for reducing back contact resistance in conventional manufacturing. Furthermore, abolishing these processes enabled us to simplify the manufacturing process.

Reference
Koji Manabe, Takeyuki Koji, Shigeo Hoshino, and Akio Aoki, 1996. Characteristic improvement of back contact of power MOSFET. Quality Engineering, Vol. 4, No. 2, pp. 58–64.
CASE 21
1. Introduction
An electrophotography process consists of (1) electrical charging, (2) exposure, (3) development, (4) transfer/separation, and (5) fixing (Figure 1). In developing this electrophotography process, we utilized the magnetic brush development method with a two-component developer comprising toner and carrier (called simply developer). In magnetic brush development, the carrier, a magnetic material, is held on a sleeve, and toner adheres to the carrier electrostatically. In the development process, toner is developed on photosensitive material, the carrier remains in the developing apparatus, and consequently, the same amount of toner as that developed is supplied from a stirring area.
The carrier charges the toner positively through friction. In this case, if the electrically charged amount of toner is small, overlap and spill will occur, due to toner scattering. On the other hand, if the amount of charge is large, insufficient image density occurs. Therefore, we need to stabilize the electrically charged toner to obtain a high-quality image.
However, the charging force of the carrier decreases over time (durability), and the charged amount of the carrier varies due to changes in environment such as temperature or humidity (environmental characteristics).

Figure 1
Electrophotography process

Figure 2
Generic function required for the carrier

Figure 3
Adjusting the function required for the carrier

Figure 4
Function of the electrophotographic developer
781
782
Case 21
Table 1
Output data of experiment 1 of an L18 orthogonal array

                M1 = 1    M2 = 10   M3 = 20   Linear Equation
M*1    N1       12.9      275.7     596.7     L1 = 14,703.9
       N2       10.5      189.1     373.1     L2 = 9,363.6
M*2    N1       17.2      341.7     718.2     L3 = 17,798.2
       N2       11.9      226.5     458.5     L4 = 11,446.9
M*3    N1       19.8      391.5     818.0     L5 = 20,294.8
       N2       14.2      269.3     549.3     L6 = 13,693.2
Figure 5
Developing function analysis
Figure 6
Response graphs of charging characteristic
Total variation:

ST = 12.9² + 10.5² + ⋯ + 549.3² = 2,699,075    (f = 18)    (1)

Effective divider:

r = M1² + M2² + M3² = 1² + 10² + 20² = 501    (2)
Linear equations:

L1 = (1)(12.9) + (10)(275.7) + (20)(596.7) = 14,703.9
L2 = 9,363.6
L3 = 17,798.2
L4 = 11,446.9
L5 = 20,294.8
L6 = 13,693.2    (3)
Figure 7
Response graphs of treatment condition
Variation of signal (proportional term):

Sβ = (L1 + L2 + ⋯ + L6)² / [(2)(3)(M1² + M2² + M3²)]
   = (14,703.9 + 9,363.6 + ⋯ + 13,693.2)² / [(2)(3)(1² + 10² + 20²)]
   = 2,535,417    (4)

Variation due to charging level M*:

S* = (L1 + L2 − L5 − L6)² / [(2)(2)(M1² + M2² + M3²)] = 49,105    (5)

Variation due to noise:

SN = (L1 − L2 + L3 − L4 + L5 − L6)² / [(2)(3)(M1² + M2² + M3²)] = 111,322    (6)
Error variation:

Se = ST − Sβ − S* − SN = 3231    (7)

Error variance:

Ve = Se/15 = 215    (8)

Total error variance:

VN = (SN + Se)/16 = 7160    (9)

SN ratio:

η = 10 log {[1/((2)(3r))](Sβ − Ve)} / VN = −9.57 dB    (10)

η* = 10 log {[1/((2)(2r))](S* − Ve)} / VN = −24.96 dB    (11)
Sensitivity:

S = 10 log [1/((2)(3r))](Sβ − Ve) = 29.26 dB    (12)

S* = 10 log [1/((2)(2r))](S* − Ve) = 13.87 dB    (13)

Table 2
Control factors of developer

                                Level
Control Factor                  1     2     3
A: formulation                  A1    A2
B: manufacturing condition      B1    B2    B3
C: formulation                  C1    C2    C3
D: manufacturing condition      D1    D2    D3
E: manufacturing condition      E1    E2    E3
F: manufacturing condition      F1    F2    F3
G: manufacturing condition      G1    G2    G3
H: manufacturing condition      H1    H2    H3
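As an illustration (not part of the original study), the decomposition in equations (1) through (13) can be reproduced in a short Python sketch from the Table 1 data; small differences from the printed values come from rounding of the linear equations in the text, and the printed SN ratio corresponds to −9.57 dB.

```python
import math

# Table 1 of Case 21: outputs at signal levels M = 1, 10, 20,
# for charging levels M*1-M*3 under noise conditions N1/N2.
M = [1.0, 10.0, 20.0]
rows = [
    [12.9, 275.7, 596.7],   # M*1, N1 -> L1
    [10.5, 189.1, 373.1],   # M*1, N2 -> L2
    [17.2, 341.7, 718.2],   # M*2, N1 -> L3
    [11.9, 226.5, 458.5],   # M*2, N2 -> L4
    [19.8, 391.5, 818.0],   # M*3, N1 -> L5
    [14.2, 269.3, 549.3],   # M*3, N2 -> L6
]

r = sum(m * m for m in M)                                  # effective divider = 501
L = [sum(m * y for m, y in zip(M, row)) for row in rows]   # linear equations

ST = sum(y * y for row in rows for y in row)               # total variation (f = 18)
S_beta = sum(L) ** 2 / (2 * 3 * r)                         # proportional term
S_star = (L[0] + L[1] - L[4] - L[5]) ** 2 / (2 * 2 * r)    # charging-level effect
S_N = (L[0] - L[1] + L[2] - L[3] + L[4] - L[5]) ** 2 / (2 * 3 * r)  # noise effect
Se = ST - S_beta - S_star - S_N                            # error variation
Ve = Se / 15                                               # error variance
VN = (S_N + Se) / 16                                       # total error variance

eta = 10 * math.log10((S_beta - Ve) / (2 * 3 * r) / VN)    # SN ratio (dB)
sens = 10 * math.log10((S_beta - Ve) / (2 * 3 * r))        # sensitivity (dB)
```

The recomputed sensitivity agrees with equation (12) to two decimal places, which supports the reading of the contrasts above.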
Table 3
Estimation and confirmation of SN ratio and sensitivity of developer (dB)

                 SN Ratio                  Sensitivity               SN Ratio*                 Sensitivity*
Configuration    Estimation  Confirmation  Estimation  Confirmation  Estimation  Confirmation  Estimation  Confirmation
Current          −28.49      −27.85        9.25        6.32          −24.36      −25.07        0.34        0.75
Optimal          −26.08      −27.68        9.55        9.33          −9.03       −10.17        13.78       13.08
Gain             2.41        0.17          0.30        3.01          15.33       14.90         13.44       12.33
Figure 8
Results of confirmation experiment at current and optimal configurations
5. Economic Calculation

Using the results of the experiment on actual imaging, we calculated the economic effect. Overlap and spill due to toner scattering occur when the amount charged falls below 10 μC/g, and there is insufficient image density when it exceeds 40 μC/g. Thus, we can define the functional limits as 10 μC/g and 40 μC/g (range: 30 μC/g), respectively. The expected cost at the functional limits was considered to be approximately 20,000 yen ($160), which includes both the parts cost for replacement developer and the service cost for repair personnel.
Thus, the loss per photocopy machine can be calculated as follows:

L(current) = [20,000 yen / (30/2)²](14/2)² = 4356 yen (difference due to
environment: 14 μC/g)    (14)

L(optimal) = [20,000 yen / (30/2)²](6/2)² = 800 yen (difference due to
environment: 6 μC/g)    (15)
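The arithmetic of equations (14) and (15) follows the standard Taguchi quality-loss function, L = (A0/Δ0²)σ², where A0 is the loss at the functional limit Δ0 and σ is half the observed charge spread. A minimal, illustrative sketch:

```python
# Quality-loss sketch for equations (14)-(15): L = (A0 / D0^2) * s^2,
# with A0 = 20,000 yen at the functional limit D0 = 30/2 uC/g and
# s = half the observed charge spread.  Illustrative, not from the text.
A0 = 20_000.0          # yen, cost at the functional limit
D0 = 30.0 / 2.0        # uC/g, half of the 30 uC/g functional range
k = A0 / D0**2         # loss coefficient, yen per (uC/g)^2

loss_current = k * (14.0 / 2.0) ** 2   # 14 uC/g spread before optimization
loss_optimal = k * (6.0 / 2.0) ** 2    # 6 uC/g spread after optimization
```

The optimal-configuration loss reproduces the 800 yen of equation (15).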
Reference

Hiroyuki Kozuru, Yuji Marukawa, Kenji Yamane, and Tomoni Oshiba, 1999. Development of high quality and long life developer. Quality Engineering, Vol. 7, No. 2, pp. 60–66.
CASE 22
Functional Evaluation of an
Electrophotographic Process
Abstract: In this study, methods of developing an electrophotographic process, which consists of many elements, were discussed. There are two approaches: to divide the entire system into several subsystems and evaluate each one individually, or to evaluate the entire system as a whole. From the study we determined that the function of the entire system can be evaluated and part of the subsystem can be optimized at the same time.
1. Introduction
Electrophotography is widely used for plain-paper photocopy machines and laser printers. Figure 1 outlines a popular imaging device. As photosensitive material turns in the direction of the arrow, it is evenly charged on the surface, exposed by a laser beam modulated according to an image, developed by toner, transferred to transfer material such as paper, and the surface cleaned. The toner image transferred onto the transfer material is fixed and carried out of the machine. The imaging device in Figure 1 can be regarded as consisting of functional subsystems such as electrical charge, exposure, development, transfer, and fixing.
In developing this device, two different approaches are possible: (1) organize a total system after optimizing each subsystem, or (2) evaluate the functionality of the entire system continuously from the beginning. The former can be ideal if there is no interaction among the separated subsystems. However, we should expect some interaction in a complex system such as an electrophotography device. In the following, we explain the method used to integrate subsystems.
2. Generic Function
1. Paper type (popular noise factor, very influential on transfer and fixing)
Figure 1
Electrophotography device
Figure 2
Generic function of imaging
3. SN Ratio
After producing images using preset conditions, we measured magnified dot areas using image analysis software. The results are shown in Table 1. We surmised that the proportionality of input and output dot sizes would deviate if the beam diameter of the exposure light were large. This theory will be evaluated in the analysis. In short, we removed the divergence from proportionality, Mres, from a signal
Figure 3
Noise factor in electrophotography image
Total variation:

ST = 54,167    (f = 32)    (2)

Error variance:

Ve = (ST − SM)/28 = 13.82    (3)

SN ratio:

η = 10 log {[1/(8r)](SM − Ve)} / VN = 12.02 dB    (4)

Sensitivity:

S = 10 log [1/(8r)](SM − Ve) = 23.43 dB    (5)
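The SN ratio above is the zero-point proportional form, y = βM. As a hedged sketch of that calculation with hypothetical signal values (the excerpt does not give the actual M levels, the square roots of the commanded dot counts):

```python
import math

# Zero-point proportional SN ratio sketch for dot-size data, y = beta * M.
# The signal values M below are hypothetical placeholders.
M = [20.0, 35.0, 47.0, 55.0]
y_N1 = [19.7, 33.9, 42.0, 51.9]   # noise condition N1 (e.g., minimum gap)
y_N2 = [23.5, 35.9, 49.0, 56.7]   # noise condition N2 (e.g., maximum gap)

r = sum(m * m for m in M)                       # effective divider
L1 = sum(m * v for m, v in zip(M, y_N1))        # linear equations
L2 = sum(m * v for m, v in zip(M, y_N2))

ST = sum(v * v for v in y_N1 + y_N2)            # total variation (f = 8)
S_beta = (L1 + L2) ** 2 / (2 * r)               # proportional term
S_N = (L1**2 + L2**2) / r - S_beta              # noise (N1 vs. N2) effect
Se = ST - S_beta - S_N                          # residual error
Ve = Se / 6                                     # error variance
VN = (Se + S_N) / 7                             # total error variance

eta = 10 * math.log10((S_beta - Ve) / (2 * r) / VN)   # SN ratio (dB)
beta = (L1 + L2) / (2 * r)                            # slope estimate
```

With real signal levels substituted for M, the same decomposition yields the values reported in equations (2) through (5).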
Table 2 shows control factors assigned to an L27 orthogonal array. As a result of parameter design, we
Table 1
Example of square root of dot area of toner image

                                 Signal (Square Root of Number of Dots)
Paper Type          Gap                                            Linear Equation
Plain      Total    Min.    19.7    33.9    42.0    51.9    420.9
                    Max.    23.5    35.9    49.0    56.7    469.0
           Core     Min.    13.7    31.4    39.9    49.9    395.9
                    Max.    22.1    35.0    47.8    55.2    456.4
Overhead   Total    Min.    23.5    37.0    47.4    53.4    453.3
Projector           Max.    27.1    41.6    51.1    56.0    487.6
           Core     Min.    13.7    34.3    43.9    49.8    413.2
                    Max.    22.5    39.3    48.8    54.6    465.9
Table 2
Control factors

                                          Level
Control Factor                       1      2      3
Developer
  A: ingredient content              A1^a   A2     A3
  B: ingredient content              B1^a   B2     B3
  C: ingredient type                 C1^a   C2     C3
  D: ingredient content              D1     D2^a   D3
  E: manufacturing condition         E1     E2^a   E3
     (quantity)
Device
  F: device condition (quantity)     F1^a   F2     F3
  G: device condition (quantity)     G1     G2^a   G3
  H: device condition (quantity)     H1     H2^a   H3
  I: device condition (quantity)     I1     I2^a   I3
  J: device condition (quantity)     J1^a   J2     J3
  K: device condition (quantity)     K1^a   K2     K3
  L: device condition (type)         L1     L2^a   L3

Table 3
Result of confirmatory experiment (dB)

(a) Original analysis
Configuration   Estimation   Confirmation
Current         10.33        9.39
Optimal         16.61        13.21
Gain            6.28         3.82

(b) Reanalysis
Configuration   Estimation   Confirmation
Current         12.94        10.80
Optimal         19.71        16.57
Gain            6.77         5.77
Figure 4
Response graphs of original analysis corresponding to Table 3a
Figure 5
Response graphs of reanalysis corresponding to Table 3b
Serror = Sgap + Se    (f = 27),    Verror = Serror/27 = 8.43    (6)
Reference

Hisashi Shoji, Tsukasa Adachi, and Kohsuke Tsunashima, 2000. A study of functional evaluation for electrophotographic process. Quality Engineering, Vol. 8, No. 5, pp. 54–61.
Part III
Robust
Engineering:
Mechanical
Applications
Biomechanical (Case 23)
Machining (Cases 2426)
Material Design (Cases 2730)
Material Strength (Cases 3133)
Measurement (Cases 3435)
Processing (Cases 3646)
Product Development (Cases 4765)
Other (Case 66)
Taguchis Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
CASE 23
Abstract: The aim of this study was to determine the optimal method for flexor tendon repair. We used the Taguchi method of analysis to identify the strongest and most consistent repair. The optimum combination of variables was determined by the Taguchi method to be the augmented Becker technique. Five tendons repaired with the optimized combination were then tested and compared to the set of tendons repaired initially using the standard modified Kessler technique. The strength of the flexor tendon repair improved from an average of 17 N to 128 N, with the standard deviation improving from 17% of the mean to 4% of the mean. Stiffness of the optimized repair technique improved from an average of 4.6 N/mm to 16.2 N/mm.
1. Introduction
Flexor tendon repair continues to evolve. Despite years of research by many investigators, the optimum suture technique remains elusive. As our understanding of tendon healing and biomechanics advances, new goals are set for ultimate motion. Historically, treatment involved delayed primary tendon grafting. Currently, primary repair is done, followed by a postoperative active extension–passive flexion protocol [13]. There is now a trend toward primary repair followed by early active flexion, which aims to improve final results [24].
The premise is that early active flexion reduces adhesions and improves the ultimate range of motion and function [6]. Toward that end, investigators have tried to determine the strongest method of tendon repair [5,8,14,19,27,28,31]. Because of the many variables investigated, the various testing methods used, and the different focus of each study, a review of the literature does not provide the optimum combination of factors for the best repair technique.
The purpose of our experiment was to identify
the factors that have the greatest effect on outcome
tendons were each divided transversely into two sections of equal length. Comparison of proximal and distal sections has shown no substantial difference in tendon diameter or stiffness characteristics [3]. A total of 14 flexor tendon specimens were available per hand for testing. Tendon size is not under the control of the surgeon and was considered extraneous (so-called noise) according to the Taguchi method.
Static Testing
After each specific tendon repair, specimens were mounted in a servohydraulic mechanical testing machine (Instron, Canton, Massachusetts) for tension testing to failure. A jig was specially designed to grip the tendon ends so that minimal slippage would occur. The gripper was first tested with whole tendons and noted to withstand tensile loads of more than 1000 N before slippage was identified visually. Elongation was produced at a constant speed of 0.33 mm/s. A preload of 2.0 N was applied prior to testing to remove slack in the system. Ultimate strength was determined by the peak load recorded.
Real-time recording was made using an x–y plot for displacement–load analysis. This revealed that the initial portion of each curve was linear. Stiffness was determined from the slope of this initial curve. The load was determined at 2.0 mm displacement, and the result was divided by 2.0 to yield a stiffness in newtons per millimeter. We recognize that crosshead displacement provides only an estimate of the true gap. During specimen testing we noted that the overwhelming majority of displacement was from the relatively weak and compliant repair. Increased stiffness is therefore reflective of less gap formation.
Variables under Study
Five control factors were studied: core tendon repair technique, core suture type, core suture size, epitenon technique, and distance from the repair site for core suture placement (Table 1).
Eight techniques of core tendon repair were examined, as shown in Figure 1: modified Kessler 1, consisting of a two-strand repair with a single knot within the repair site; modified Kessler 2, consisting of a two-strand repair with an epitenon-first repair with the core suture knot away from the repair site;
Table 1
Summary of factors under study

Control Factor               Levels^a
Core suture technique        Modified Kessler 1 (MK1), modified Kessler 2 (MK2),
                             double modified Kessler (Dbl. MK), Savage, Lee,
                             Tsuge, Tajima, augmented Becker (Aug. Beck)
Core suture type             Ethilon, Nurolon, Prolene, Mersilene
Core suture size             4-0, 3-0
Epitenon suture technique    None, volar only, simple, crossed
Suture distance (cm)         0.5, 1.0

^a Modified Kessler 1, two-strand repair with a single knot within the repair site; modified Kessler 2, epitenon-first two-strand repair with a single knot outside the repair site; Lee, double-loop locking four-strand repair with two knots within the repair site; Savage, six-strand repair with a single knot within the repair site; Tsuge, two-strand repair with a single knot outside the repair site; augmented Becker, four-strand repair with two knots outside the repair site; double modified Kessler, four-strand repair with a single knot outside the repair site; Tajima, modified Kessler grasp with two knots within the repair site.
Figure 1
Eight repair techniques
Four core suture materials were examined: monofilament nylon (Ethilon), braided nylon (Nurolon), polypropylene monofilament (Prolene), and braided polyester (Mersilene) (Ethicon, Inc., Somerville, New Jersey). Two core suture sizes were examined: 4-0 and 3-0. Four epitenon techniques were examined: no epitenon, volar only, simple, and crossed. Suture distances of 0.5 and 1.0 cm were examined. (This refers to the distance from the repair site to the entry or exit and the transverse course of the core suture.)
All knots were three-throw square knots. All repairs were done by a single surgeon. All modified Kessler techniques used the locking-loop modification described by Pennington [17].
Experimental Design: Taguchi Method
The set of 16 experiments (Table 2) was repeated
four times, for a total of 64 experiments. Within
each set of 16 experiments, the order was randomized. In this case, 64 experiments gave a minimum
Table 2
Test matrix

No.   Suture Type   Epitenon   Technique     Core Size   Distance (cm)
1     Ethilon       None       MK1           4-0         0.5
2     Ethilon       Volar      MK2           3-0         1.0
3     Nurolon       None       Lee           3-0         1.0
4     Nurolon       Volar      Savage        4-0         0.5
5     Nurolon       Simple     MK2           4-0         0.5
6     Nurolon       Cross      MK1           3-0         1.0
7     Ethilon       Simple     Savage        3-0         1.0
8     Ethilon       Cross      Lee           4-0         0.5
9     Prolene       None       Tsuge         4-0         1.0
10    Prolene       Volar      Aug. Becker   3-0         0.5
11    Mersilene     None       Dbl. MK       3-0         0.5
12    Mersilene     Volar      Tajima        4-0         1.0
13    Mersilene     Simple     Aug. Becker   4-0         1.0
14    Mersilene     Cross      Tsuge         3-0         0.5
15    Prolene       Simple     Tajima        3-0         0.5
16    Prolene       Cross      Dbl. MK       4-0         1.0
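A defining property of such an orthogonal test matrix is balance: each level of each factor appears equally often. A small sketch (illustrative, transcribed from Table 2) checks this for two of the factors:

```python
from collections import Counter

# Factor assignments transcribed from Table 2 (runs 1-16).
suture = ["Ethilon", "Ethilon", "Nurolon", "Nurolon", "Nurolon", "Nurolon",
          "Ethilon", "Ethilon", "Prolene", "Prolene", "Mersilene", "Mersilene",
          "Mersilene", "Mersilene", "Prolene", "Prolene"]
epitenon = ["None", "Volar", "None", "Volar", "Simple", "Cross", "Simple",
            "Cross", "None", "Volar", "None", "Volar", "Simple", "Cross",
            "Simple", "Cross"]

suture_counts = Counter(suture)      # each material appears 4 times
epitenon_counts = Counter(epitenon)  # each technique appears 4 times
```

This balance is what allows each factor's average SN ratio to be compared fairly across levels.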
3. Results
SN Ratio
Traditional statistical methods use the mean to compare results. Using the standard deviation, one may then determine whether the difference between two groups is significant. With the Taguchi method, one uses a different statistic, the signal-to-noise (SN) ratio, to compare results.
In the Taguchi method, variables under study are divided into factors that can be controlled (control factors) and factors that either cannot be controlled or are too expensive to control (noise factors). The greater the effect of the noise, the greater the inconsistency. The goal of the Taguchi method is to choose control factors that not only produce the desired result (such as a stronger repair) but also yield a process that is less sensitive to noise. Although noise cannot be eliminated, the effect of
SN ratio = −10 log10 [(1/y1² + 1/y2² + ⋯ + 1/yn²)/n]
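This is the standard larger-the-better SN ratio; a minimal sketch (illustrative, with example replicate values in newtons):

```python
import math

def sn_larger_the_better(values):
    """Larger-the-better SN ratio: -10 * log10[(1/n) * sum(1/y_i^2)]."""
    n = len(values)
    return -10 * math.log10(sum(1 / y**2 for y in values) / n)

# Example with four tensile-strength replicates (N):
sn = sn_larger_the_better([19, 16, 21, 26])
```

Because the reciprocal squares dominate the sum, an occasional low replicate pulls the SN ratio down far more than a high replicate raises it, which is exactly the penalty structure discussed below.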
Suture Technique
Of all the variables studied, suture technique had
the greatest effect on both strength (change in ratio
10.5 dB, maximum 36.6 dB) and stiffness of
the repair (change in ratio 5.9 dB, maximum
16.2 dB) (Table 4).
Core Suture
The SN ratios for the various core suture materials were similar, with the exception of Nurolon (Table 4). Several of the repairs with Nurolon failed because the knot untied. This did not occur with any of the other sutures. Three-throw square knots were used throughout the study; four throws would be unrealistic given the bulk of the knot. This led to occasional low values, as reflected in the low SN ratio.
Epitenon
Moderate improvement was obtained with epitenon repair compared to no epitenon repair. Simple epitenon was 31.2 dB − 28.7 dB = 2.5 dB better than no epitenon repair (Table 4). This is approximately 10^(2.5/20), or 33%, stronger.
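The dB-to-percent conversions used here can be sketched as follows (illustrative, using the 20-dB-per-decade convention applied in the text):

```python
def db_gain_to_percent(db):
    """Convert an SN-ratio gain in dB to a percent change in amplitude:
    ratio = 10 ** (db / 20)."""
    return (10 ** (db / 20) - 1) * 100

improvement = db_gain_to_percent(2.5)   # simple epitenon vs. no epitenon
```

The same conversion reproduces the other gains quoted in the discussion, for example the roughly 22% improvement from a 1.7 dB gain in core suture size.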
Table 3
Tensile strength, stiffness, and SN ratios

TS1–TS4: tensile strength (N); Stiff1–Stiff4: stiffness (N/mm); SN ratios for 4 repetitions (dB).

No.  Suture Type  Epitenon  Technique    Core  Dist (cm)  TS1  TS2  TS3  TS4  Stiff1  Stiff2  Stiff3  Stiff4  SN TS  SN Stiff
1    Ethilon      None      MK1          4-0   0.5        19   16   21   26   1.75    2.40    2.57    2.05    25.8   6.5
2    Ethilon      Volar     MK2          3-0   1.0        34   32   36   32   4.51    4.55    2.53    3.78    30.4   10.9
3    Nurolon      None      Lee          3-0   1.0        26   16   49   48   4.81    4.81    4.72    4.51    28.1   13.5
4    Nurolon      Volar     Savage       4-0   0.5        52   50   46   26   5.71    9.51    5.20    3.95    31.7   14.5
5    Nurolon      Simple    MK2          4-0   0.5        13   18   22   22   5.50    6.88    2.96    3.95    24.8   12.4
6    Nurolon      Cross     MK1          3-0   1.0        21   15   24   37   6.06    7.18    3.99    4.55    26.3   14.0
7    Ethilon      Simple    Savage       3-0   1.0        95   88   59   65   2.74    4.77    3.73    4.59    37.2   11.3
8    Ethilon      Cross     Lee          4-0   0.5        38   44   29   39   4.37    4.98    6.23    5.93    31.2   14.4
9    Prolene      None      Tsuge        4-0   1.0        21   19   25   24   2.48    2.66    2.01    2.53    27.0   7.5
10   Prolene      Volar     Aug. Becker  3-0   0.5        72   80   67   90   7.83    7.57    5.67    5.93    37.6   16.3
11   Mersilene    None      Dbl. MK      3-0   0.5        39   45   58   73   7.14    6.88    5.93    7.14    33.9   16.5
12   Mersilene    Volar     Tajima       4-0   1.0        14   27   27   25   5.24    4.64    6.66    3.52    26.3   13.3
13   Mersilene    Simple    Aug. Becker  4-0   1.0        53   70   66   60   5.89    7.65    5.28    7.22    35.7   16.0
14   Mersilene    Cross     Tsuge        3-0   0.5        21   33   20   21   7.14    7.50    7.78    8.82    27.0   17.8
15   Prolene      Simple    Tajima       3-0   0.5        21   26   27   19   8.86    6.49    3.35    5.84    27.1   14.1
16   Prolene      Cross     Dbl. MK      4-0   1.0        31   49   46   46   6.62    6.66    8.34    4.21    32.2   15.4
SN ratio shows that a smaller suture distance gives
greater stiffness, an intuitively apparent result.
Table 4
SN ratios for strength and stiffness (dB)

                       SN Ratio
                   Strength   Stiffness
I. Technique
  MK1              26.1       10.3
  Tajima           26.7       13.9
  Tsuge            27.0       12.7
  MK2              27.6       11.7
  Lee              29.7       14.0
  Dbl. MK          33.0       16.0
  Savage           34.4       12.9
  Aug. Becker      36.6       16.2
II. Suture type
  Nurolon          27.7       13.6
  Prolene          30.7       15.9
  Mersilene        30.9       13.3
  Ethilon          31.1       10.8
III. Core size
  4-0              29.3       12.5
  3-0              31.0       14.3
IV. Epitenon
  None             28.7       11.0
  Cross            29.2       15.4
  Simple           31.2       13.5
  Volar            31.5       13.8
V. Suture distance
  0.5 cm           29.9       14.1
  1.0 cm           30.4       12.7
4. Discussion
The goal of flexor tendon repair is restoration of full motion of the finger. Historically, repair was followed by postoperative immobilization [10,11]. Healing of flexor tendons was thought to occur via an extrinsic process mediated by the flexor sheath [18]. This was logically thought to require immobilization of the tendon, with adhesion formation a necessary consequence. Final motion of the digit was, not surprisingly, limited. Studies by Lundborg and Rank [12] and Gelberman et al. [6] provide evidence that the flexor tendon has an intrinsic repair capability. This revision of the understanding of the healing process gave impetus to the study of postoperative motion protocols.
In a study of canine tendon healing, Gelberman et al. [6] found that tendon healing could occur without adhesion formation. In addition, mobilized tendons healed more rapidly than immobilized repairs and had greater ultimate strength [6]. In a clinical study, Kleinert et al. [9], using postoperative active extension and passive flexion, produced substantially improved results. In a subsequent clinical study, Strickland et al. [25] confirmed the benefits
Table 5
Comparison of standard and optimum combinations of variables

                           Standard      Optimum Combination
                           Method        Predicted    Tested
Tensile strength (N)       17.2 ± 2.9    94           128 ± 5.6
Minimum strength (N)       13.1 (24%)                 121 (5%)
Stiffness (N/mm)           4.6 ± 1.0     10           16.2 ± 5.8
SN ratio (dB)
  Tensile strength         24.4          40           42.1
  Stiffness                12.7          20           22.8
Braided suture has more friction and tends to deform the tendon more than monofilament suture. Technical considerations may therefore lead to the choice of Prolene. Ethilon would have been chosen if strength alone were used for selection. Ethilon performed poorly with respect to stiffness and therefore is not the optimum choice when considering overall repair performance.
Increasing the core suture size substantially improved both strength and stiffness (Table 4), but not as much as would have been predicted from the material properties alone. Ethicon, Inc. [4] reports a suture strength for 4-0 Mersilene of 13.0 N and for 3-0 Mersilene of 18.0 N. This reflects an improvement in strength of 38% in going from 4-0 to 3-0 Mersilene. The Taguchi method showed that the 3-0 core suture improved strength 1.7 dB compared with the 4-0 core suture (31.0 dB − 29.3 dB). This translated into a difference of approximately 22%.
Injury to flexor tendons outside zone II would certainly be better repaired with the large suture [26]. In zone II lacerations, additional studies would have to be made to evaluate the gliding characteristics before the 3-0 suture could be recommended, because of the added bulk. There is a clinical precedent for using this size suture in zone II injuries. Cullen et al. [2] report their results of early active motion in zone II repairs using a 3-0 Tycron modified Kessler technique. They had 77% (24 of 31 digits) excellent or good results, with a rupture rate of 6.5% (two of 31 digits).
The addition of a simple epitenon suture has been shown in previous studies to have an impact on strength [15,29]. Volar epitenon was chosen to test the clinical situation where suturing only the volar epitenon is technically feasible. Simple epitenon repair was chosen over volar only because performance was similar mechanically and the biology of tendon healing suggests that unexposed tendon leads to less scar production [30]. Simple epitenon was stronger than no epitenon by 2.5 dB, or approximately 33%. Wade et al. [29] found that adding a peripheral 6-0 polypropylene stitch improved strength by 12.7 N, with an ultimate strength of 31.3 N, a difference of 41%. The results here agree with Wade et al. that a peripheral stitch adds substantial strength to the repair.
Comparing modified Kessler 1 (core suture first) to modified Kessler 2 (epitenon suture first) repairs, the SN ratio for strength improved from 26.1 dB to 27.6 dB, respectively. This translates to an expected improvement in strength of 10^(1.5/20), or 19%, by suturing the epitenon first. This correlated well with the study by Papandrea et al. [15] using 26 matched canine tendons. They found an improvement of 22% when performing epitenon-first repair using 4-0 braided polyester core suture and 6-0 braided polyester epitenon suture.
Standard versus Optimum Combination
The Taguchi method was used to identify a stronger and less variable repair method. The confirmation experiment determines whether the Taguchi method of study was accurate. Variation may be measured by the standard deviation, but in the spirit of attaining quality, the minimum result is also important; it is the occasional low value that is associated with the occasional failure.
The optimal flexor tendon repair improved in strength from 17.0 N to 128 N, with the low value going from 24% below the mean to 5% below the mean (Table 5). Anticipating stress on a repair of up to 35 N during unopposed active flexion, a repair that can resist 128 N of tension with minimal variation should give the surgeon enough confidence to begin an early postoperative active flexion protocol.
The increase in stiffness from 4.6 N/mm to 16.2 N/mm substantially reduces the gap under physiologic load. With the standard combination, a load of 34.0 N would exceed the expected strength of the repair, but if the repair remained intact, a gap of 7.4 mm is predicted. With the optimum combination, a maximum gap of 2.1 mm is expected at 34.0 N of load.
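The gap predictions above follow from treating the repair as linearly elastic, so that gap = load / stiffness. A minimal sketch of that arithmetic:

```python
# Predicted gap = load / stiffness, treating the repair as linearly elastic
# over the initial portion of the load-displacement curve (an approximation).
load = 34.0                   # N, anticipated physiologic load
gap_standard = load / 4.6     # mm, standard repair (4.6 N/mm)
gap_optimal = load / 16.2     # mm, optimized repair (16.2 N/mm)
```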
Taguchi Method
The Taguchi method differs from traditional statistical methods in its focus on identifying a solution that is, in this instance, both a stronger and a less variable method of repair. The parameter used for optimization is the SN ratio, in which a low value is penalized more heavily by the formula than a high value is rewarded. In other words, it is the low values that are associated with failure. Not only does the SN ratio help reduce variability, but it does so by identifying the control factors that lead to those low values. The Taguchi method identifies which factors provide the greatest contribution to variation and determines those settings or values that result in the least variability. However, the method does not allow easy analysis of interactions between factors.
The optimum technique for flexor tendon repair shown in this study still may not satisfy many surgeons. It may, for example, be unacceptable because of the time or effort required to perform the augmented Becker technique. One may in fact decide that any four-strand repair technique is either technically unappealing, injurious to the tendon, or both. Inspection of the signal-to-noise ratio graph, however, allows one to realize that the two-strand techniques are simply too weak to allow a reliable early active motion protocol. The clinical series by Small and Colville [24] and Cullen et al. [2] both used a two-strand repair technique followed by early active flexion. Small reported 77% (90 of 117 digits) excellent or good results with a dehiscence rate of 9.4% (11 of 117 digits). Cullen reported 77% (24 of 31 digits) excellent or good results with a rupture rate of 6.5% (two of 31 digits). The Taguchi method suggests that by using the optimum combination, a better range of motion and a smaller rupture rate are possible. The results show that only a four-strand technique is strong enough to support an immediate active flexion rehabilitation protocol reliably.
References
1. H. Becker and M. Davidoff, 1977. Eliminating the gap in flexor tendon surgery. Hand, Vol. 9, pp. 306–311.
2. K. W. Cullen, P. Tolhurst, D. Lang, and R. Page, 1989. Flexor tendon repair in zone 2 followed by controlled active mobilization. Journal of Hand Surgery, Vol. 14-B, pp. 392–395.
3. E. Diao, J. S. Hariharan, O. Soejima, and J. C. Lotz, 1996. Effect of peripheral suture depth on strength of tendon repairs. Journal of Hand Surgery, Vol. 21-A, pp. 234–239.
4. Ethicon, 1983. Ethicon Knot Tying Manual. Somerville, NJ: Ethicon, Inc., p. 39.
5. R. H. Gelberman, J. S. Vandeberg, G. N. Lundborg, and W. H. Akeson, 1983. Flexor tendon healing and restoration of the gliding surface. Journal of Bone and Joint Surgery, Vol. 65-A, pp. 70–80.
CASE 24
1. Functional Evaluation by
Energy Conversion
To assess the machining of stainless steel used for mass production, we conducted a practical experiment using an L18 orthogonal array. We surmised that there were certain technical issues, because we had not been able to obtain satisfactory reproducibility even after implementing several different analyses following extremely poor reproducibility at first. Considering that there had been some problems with the variability of energy during machine idling, and referring to the research of Ford, which deals with energy evaluation during idle time, we added electrical power during idling to the analysis and examined the relationships among time, material removed, and electrical power by use of the SN ratio. For electrical power, we calculated the product of time and power as an area so as to reflect its variability effectively. For a noise factor, we selected the difference between maximum and minimum electrical power. Using all of these, we computed SN ratios.
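The "product of time and power as area" can be sketched in a few lines: the cumulative values in Table 3 appear to be running sums of the per-interval watt readings multiplied by the 11-s spacing of that table's time axis (an inference from the data, not stated explicitly in the text):

```python
# Accumulate per-interval watt readings into cumulative "areas" (W*s),
# using the idling minima of cutting run 1 from Table 2.
interval = 11.0                                  # s, spacing of Table 3's time axis
watts = [1430, 1430, 1420, 1390, 1390, 1390]     # W, minimum per interval

areas, total = [], 0.0
for w in watts:
    total += w * interval                        # area of this interval's strip
    areas.append(total)
```

The resulting sequence reproduces the P01/N1 row of Table 3 (15,730; 31,460; ...; 92,950), which supports this reading.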
2. Generic Function
The objective of machining is to cut a product or part cost-effectively and accurately to realize a target shape. Therefore, machining engineers select optimal conditions by changing the conditions of the machines and tools used, or cutting conditions such as cutting or feeding speeds, and measuring the eventual dimensions and roughness of a product or part. In contrast, the objective of machining evaluation by energy is to assess the general functions of machines and secure final quality characteristics (machining accuracy or surface roughness).
As an effective evaluation method of cutting, including machine performance, we can pick up the change between electrical power supplied to a machine and power used during cutting. In other words, we assumed that cutting efficiency can be assessed by the relationship between the time consumed for cutting and the electrical power consumed by a machine. We concluded that unsatisfactory precision of work is caused by inefficient consumption of energy for a target amount of material to be removed, due to a factor such as unevenness of material, tool condition, or cutting setup. Generic function 1 is expressed as y = β1T by the relationship between cutting time, T, and electrical power, y. In this case, the greater the SN ratio and sensitivity, the better. Generic function 2 is expressed as y = β2M, where the amount removed is M and the power is y. For this, less sensitivity and a higher SN ratio are desirable. Figures 1 and 2 illustrate these relationships.
Figure 3
Shape of the test piece
Figure 1
Generic function 1
3. Measurement and
Experimental Procedure
A common manual lathe was used for this experiment. A wattmeter connected to a three-phase distribution board located at the power source for the lathe measured the effective power (W) consumed for cutting. Figure 3 depicts the shape of a test piece. Its material is SUS 304. Two grooves were added
4. Design of Experiments
Figure 2
Generic function 2
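For a zero-point proportional generic function such as y = β1T, the slope can be estimated by least squares through the origin, β1 = Σ(T·y)/Σ(T²). A hedged sketch with data patterned on Table 3 (the text itself does not show this exact fit):

```python
# Least-squares slope through the origin for generic function 1 (y = beta1 * T).
T = [11.0, 22.0, 33.0, 44.0, 55.0, 66.0]                     # s
y = [15730.0, 31460.0, 47080.0, 62370.0, 77660.0, 92950.0]   # cumulative W*s

beta1 = sum(t * v for t, v in zip(T, y)) / sum(t * t for t in T)
```

The estimated slope lands near the 1390 to 1430 W idling readings of Table 2, as expected for cumulative power plotted against time.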
Figure 4
Cutting processes
Assuming that causes of errors affect the variability of power in a direct or indirect manner, we set the difference between the maximum and minimum values of electrical power as a noise factor, which should be small. In addition, we selected the minimum value of power, ymin, as N1 and the maximum, ymax, as N2. Since the diameter of a test piece becomes smaller as we repeat cutting, the change in electrical power for each cutting run was also chosen as a noise factor.
As control factors, we picked various factors, such as machining setup and tool condition. We confirmed with a tachometer that the revolution speed does not vary during machining. Control factors are summarized in Table 1.
For generic function 1, we calculated the maximum and minimum of electrical power, W, for each divided time interval (Figure 7). Table 2 shows a sample of the electrical power measured for cutting run 1. Idling power means before- or after-cutting power. Moreover, by including power during idling, we show the cumulative relationship between time, T, and electrical power, W, in Table 3 and Figure 8, which is a schematic of Table 3. For generic function 2, we computed the electrical power for each time duration from start to finish of cutting (Figure 9). Using electrical power during idling as a standard, we accumulated each area of power (Figure 10); these represent the data of experiment 1 of the L18 orthogonal array. The symbol P0 indicates idling, and P1, P2, and P3 indicate the cutting run number. T, M, and y are the point of time, amount removed, and cumulative value of electrical power calculated as area, respectively. In addition, to evaluate the linearity of these data, we plotted Figure 11 for the change of electrical power during idling (Table 4), Figure 12 for the change of electrical power during cutting (Table 5), and a plot for the change in electrical power versus mass removed. As a result, we can see the linearity in each case.
Figure 5
Fluctuation of electrical power at cutting run 1

SN Ratio for Generic Function: Time versus Electrical Power

By calculating the square root of each data point in Table 3, we obtained the converted data in Table 6. We computed SM*, which is the effect due to the difference between idling and cutting. For energy consumption, it should be smaller during idling and greater during cutting. By regarding this difference of effect as an effective portion of energy, we calculated the SN ratio. Based on these results, we describe our calculation process in the following section.

Total variation:
Table 1
Control factors and levels

                     Level
Control Factor       1        2      3
A:                   Little   Mid
B:                   0.5      1.0    2.0
C:                   Small    Mid    Large
D:                   Small    Mid    Large
E:                   Small    Mid    Large
F:                   Slow     Mid    Fast
G:                   Slow     Mid    Fast
SM* = [(L1 + L2 + ⋯ + L6)² + (L7 + L8 + ⋯ + L12)²] / [(3)(2r)] − Sβ = 27,056,211

SP = [(L1 + L2 + L7 + L8)² + ⋯ + (L5 + L6 + L11 + L12)²] / [(2)(2r)] − Sβ = 1041.748

Figure 6
Minimum and maximum values of electrical power

Error variation:

Se = ST − Sβ − SN − SM* − SP = 64.434
L1 (3.32)(125.419) (4.69)(177.370)
(8.12)(304.877) 8692.862
Error variance:
Ve
L12 9940.218
Total error variance:
Effective divider:
VN
Se
0.962
67
Se SN SP
70
SN ratio:
(L1 L12)
454,651,059
(2)(3)(2)r
2
2.90 dB
SN
(L1 L3 L11)2 (L2 L4 L12)2
(2)3r
S 225.049
Sensitivity:
S 10 log
[1/(2)(3r)](SM* Ve)
VN
10 log
19.018
1
(S Ve)
(2)(3r)
32.15 dB
Figure 7
Electrical power (W) for each time interval (T01 to T03 for idling, T1 to T6 for cutting) for cutting run 1 (generic function 1)
Table 2
Measured data for cutting run 1 (W) (generic function 1)

                    T01     T02     T03     T04     T05     T06
Time (s)          13.00   13.00   13.00   13.00   13.00   13.00
M*1: idling
  N1: Wmin         1430    1430    1420    1390    1390    1390
  N2: Wmax         1500    1490    1480    1440    1440    1440
M*2: cutting
  N1: Wmin         1960    1950    1950    1950    1940    1940
  N2: Wmax         2030    2020    2010    2010    2000    2010
Table 3
Data of time versus electrical power

Time:                      T1       T2       T3       T4        T5        T6
                         11.00    22.00    33.00    44.00     55.00     66.00
M*1: idling
 P01: cutting 1  N1 ymin  15,730   31,460   47,080   62,370    77,660    92,950
                 N2 ymax  16,500   32,899   49,170   65,010    80,850    96,690
 P02: cutting 2  N1 ymin  15,180   30,360   45,650   60,500    75,350    90,200
                 N2 ymax  15,620   31,350   46,970   62,040    77,220    92,400
 P03: cutting 3  N1 ymin  14,740   29,840   44,330   58,960    73,590    88,000
                 N2 ymax  15,070   30,140   45,320   60,280    75,240    90,200
M*2: cutting
 P1: cutting 1   N1 ymin  21,560   43,010   64,460   85,910   107,250   128,590
                 N2 ymax  22,330   44,550   66,660   88,770   110,770   132,880
 P2: cutting 2   N1 ymin  20,680   41,360   61,930   82,390   102,850   123,420
                 N2 ymax  21,230   42,350   63,360   84,370   105,270   126,170
 P3: cutting 3   N1 ymin  19,910   39,930   59,840   79,750    99,550   119,350
                 N2 ymax  20,460   40,810   61,160   81,510   101,860   122,100
Figure 8
Change in electrical power during idling and cutting (generic
function 1)
L2 = 16,811.217

Effective divider:
r = 13.753² + 18.912² + 22.505² = 1053.284

Variation of proportional term:
Sβ = (L1 + L2)²/(2r) = 502,688.4487

Variation due to noise:
SN = (L1² + L2²)/r − Sβ = 554.6836

Error variation:
Se = ST − Sβ − SN = 6334.4629

Error variance:
Ve = Se/6 = 1055.7438

Total error variance:
VN = (Se + SN)/7 = 984.1638

Figure 9
Electrical power for each cutting run, including the idling run (generic function 2)
Figure 10
Time change in electrical power for each idling and cutting run (generic function 2): electrical power y versus amount removed (Mass 0 for idling; Mass 1 to Mass 3 for cutting runs 1 to 3)
Figure 11
Change in electrical power during idling
Table 4
Measured data for each cutting run (W) (generic function 2)

M:           P0: Idling   P1: Cutting Run 1   P2: Cutting Run 2   P3: Cutting Run 3
                 0             189.140             168.510             148.830
N1 (min)       1310            1940                1860                1800
N2 (max)       1500            2030                1930                1860
SN ratio:
η = 10 log {[1/(2r)](Sβ − Ve)}/VN = −6.16 dB

Sensitivity:
S = 10 log [1/(2r)](Sβ − Ve) = 23.77 dB
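The SN ratio and sensitivity above follow from the quantities already computed (Sβ, Ve, VN, and the effective divider r). The following sketch, ours rather than the authors', confirms the decibel values, including the sign of the SN ratio:

```python
import math

r = 1053.284           # effective divider
S_beta = 502_688.4487  # variation of proportional term
Ve = 1055.7438         # error variance
VN = 984.1638          # total error variance

signal = (S_beta - Ve) / (2 * r)
eta = 10 * math.log10(signal / VN)  # SN ratio, dB
S = 10 * math.log10(signal)         # sensitivity, dB

print(round(eta, 2), round(S, 2))  # -6.16 23.77
```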
Figure 12
Change in electrical power during cutting
Table 5
Amount removed and electrical power (W) for each cutting run (generic function 2)

M:           P0: Idling   P1: Cutting Run 1   P2: Cutting Run 2   P3: Cutting Run 3
                 0             189.140             357.650             506.480
N1            86,133          213,687.5           335,982.5           454,442.5
N2            98,625          232,097.5           358,995             481,290
Table 6
Converted data of time versus electrical power (postconversion)

Time:        T1      T2      T3      T4      T5      T6
            3.32    4.69    5.74    6.63    7.42    8.12
M*1: idling  — P01: cutting run 1 (N1, N2); P02: cutting run 2 (N1, N2); P03: cutting run 3 (N1, N2)
M*2: cutting — P1: cutting run 1 (N1, N2); P2: cutting run 2 (N1, N2); P3: cutting run 3 (N1, N2)

Each entry is the square root of the corresponding value in Table 3.
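As the text states, Table 6 holds the square roots of the Table 3 data, for the time axis and the accumulated power alike. The conversion for the idling run P01 under N1, as our own illustration:

```python
import math

times = [11.00, 22.00, 33.00, 44.00, 55.00, 66.00]  # T1..T6 from Table 3
power = [15730, 31460, 47080, 62370, 77660, 92950]  # P01, N1 row of Table 3

t_conv = [round(math.sqrt(t), 2) for t in times]
y_conv = [round(math.sqrt(y), 3) for y in power]

print(t_conv)                  # [3.32, 4.69, 5.74, 6.63, 7.42, 8.12]
print(y_conv[0], y_conv[-1])   # 125.419 304.877
```

The first and last converted values (125.419 and 304.877) are exactly the coefficients that appear in the linear equation L1 above.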
Table 7
Converted data of amount removed versus electrical power (W) (postconversion)

M:           P0: Idling   P1: Cutting Run 1   P2: Cutting Run 2   P3: Cutting Run 3
                 0             13.753              18.912              22.506
N1: ymin       293.483         462.263             574.640             674.042
N2: ymax       314.046         481.765             599.162             693.751
Table 8
Data for reference-point proportional equation (W)

M:           P0: Idling   P1: Cutting Run 1   P2: Cutting Run 2   P3: Cutting Run 3
                 0             13.753              18.912              22.505
N1            -10.282         158.498             270.875             370.277
N2             10.282         178.000             295.397             389.986
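Table 8 is obtained from Table 7 by subtracting the mean idling (P0) response from every value, which is what a reference-point proportional equation requires. A small sketch of that conversion (our illustration, not from the original study):

```python
# Mean of the two P0 (idling) entries in Table 7 is the reference point.
ref = (293.483 + 314.046) / 2   # 303.7645

n1 = [293.483, 462.263, 574.640, 674.042]  # N1 row of Table 7
n2 = [314.046, 481.765, 599.162, 693.751]  # N2 row of Table 7

n1_shifted = [v - ref for v in n1]  # ~[-10.282, 158.498, 270.875, 370.277]
n2_shifted = [v - ref for v in n2]  # ~[ 10.282, 178.000, 295.397, 389.986]
```

The shifted values reproduce the entries of Table 8.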
Figure 13
Response graphs of SN ratio of time versus electrical power
Figure 14
Response graphs of sensitivity of time versus electrical power
Table 9
Results of gain in confirmatory experiments

(a) Time vs. Electrical Power
                            Optimal   Initial    Gain
SN ratio     Estimation       2.40     -5.53     7.93
             Confirmation     5.96     -4.92    10.88
Sensitivity  Estimation      32.40     30.36     2.04
             Confirmation    31.96     30.10     1.86

(b) Amount Removed vs. Electrical Power
                            Optimal   Initial    Gain
SN ratio     Estimation       0.74     -7.25     7.99
             Confirmation     0.27     -6.93     7.20
Sensitivity  Estimation      24.56     24.97    -0.41
             Confirmation    23.98     24.87    -0.89
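Since gain = optimal − initial, the signs shown for the initial configuration in Table 9(a) can be recovered from the printed gains. A quick consistency check (our sketch, with the signs as restored above):

```python
# (optimal, initial) pairs from Table 9(a)
pairs = {
    "SN estimation": (2.40, -5.53),
    "SN confirmation": (5.96, -4.92),
    "Sensitivity estimation": (32.40, 30.36),
    "Sensitivity confirmation": (31.96, 30.10),
}
gains = {k: round(opt - init, 2) for k, (opt, init) in pairs.items()}
# {'SN estimation': 7.93, 'SN confirmation': 10.88,
#  'Sensitivity estimation': 2.04, 'Sensitivity confirmation': 1.86}
```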
R represents repetition of measurement, and N indicates the number of test pieces machined. Table 12 shows the results calculated as a nominal-the-best characteristic. Consequently, we can confirm that we can estimate the final quality and machine products in a stable manner once we improve the cutting process based on energy evaluation.
Table 10
Dimensional data at optimal configuration (mm)
J2
J1
N1
P1
P2
P3
N2
P1
P2
P3
J3
J4
R1
R2
R1
R2
R1
R2
30.023
39.031
38.022
38.020
37.023
37.023
39.025
39.031
38.023
38.020
37.023
37.024
39.019
39.029
38.018
38.021
37.019
37.024
39.018
39.029
38.018
38.019
37.018
37.022
39.015
39.019
38.015
38.012
37.015
37.015
39.017
39.027
38.016
38.020
37.016
37.021
39.015
39.030
38.014
38.022
37.014
37.025
39.014
39.027
38.014
38.020
37.014
37.021
R1
R2
R1
R2
R1
R2
39.032
39.013
38.052
38.020
37.038
37.024
39.025
39.015
38.045
38.022
37.027
37.026
39.032
39.020
38.050
38.026
37.034
37.029
39.033
39.019
38.050
38.024
37.034
37.026
39.027
39.028
38.045
38.028
37.029
37.032
39.026
39.026
38.044
38.027
37.028
37.030
39.026
39.026
38.043
38.027
37.026
37.033
39.025
39.029
38.043
38.030
37.026
37.034
Table 11
Roughness data at optimal configuration (μm)
N1
P1
P2
P3
N2
P1
P2
P3
J1
J2
J3
J4
R1
R2
R1
R2
R1
R2
2.500
2.910
2.500
2.930
2.240
3.030
2.460
2.950
2.450
2.990
2.500
3.010
2.450
3.020
2.500
3.080
2.480
3.100
2.450
2.940
2.530
2.940
2.500
3.080
R1
R2
R1
R2
R1
R2
3.510
1.980
3.560
2.211
3.980
2.487
3.580
2.030
3.650
2.360
4.160
2.512
3.640
2.060
3.780
2.240
4.140
2.604
3.670
2.120
3.820
2.780
4.010
2.564
Table 12
Gain of the SN ratio of dimension and roughness (dB)

                          Optimal   Initial   Gain    Improvement of Variance
SN ratio of dimension      72.86     60.20    12.66         1/18.45
SN ratio of roughness      13.64     11.98     1.66         1/1.47
Reference

Kazuhito Takahashi, Shinji Kousaka, Kiyoharu Hoshiya, Kouya Yano, Noriaki Nishiuchi, and Hiroshi Yano, 2000. Optimization of machining conditions through observation of electric power consumption. Quality Engineering, Vol. 8, No. 1, pp. 24–30.
CASE 25
1. Introduction
Because component parts used for power train or steering systems require high strength and durability, we have secured such requirements by cutting and carburizing low-carbon steel, which is easy to cut. However, since carburization takes approximately 10 hours and lowers productivity, high-frequency heating is currently used to reduce process time drastically, to 1 minute. Yet high-frequency-heated material registers 30 on the Rockwell C scale and is difficult to cut, whereas carburized material is quite easy to cut, at almost zero on the same scale. In this research we focused on developing cutting technology for difficult-to-cut material.
2. Shape to Be Machined
When we develop cutting technology, paying too much attention to an actual product shape tends to make us digress from our original technological purpose. To avoid this situation and to develop accurate and stable machining technology, a development method using the idea of transformability is extremely effective.
Applying the idea of transformability to our research, we pursued cutting conditions with the proportionality

y = βM     (1)

where M is a signal factor, the input data from a numerically controlled (NC) machine, and y is the output product dimension corresponding to the input.
As a product shape to be machined, we used an easy-to-measure shape, without sticking to the product shape to be manufactured, because our objective was technological development. In addition, to pursue cutting technology robust to the control-shaft direction of a cutting machine and to simplify the experimental process by using analogous solid geometry, we selected the model detailed in Figure 1, which depicts the measured points a1 to a4, b1 to b4, and c1 to c4. The signal factors are defined as follows:
Signal factor (linear distance between two vertices):
M1 = a1–a2
M2 = a1–a3
⋮
M66 = c3–c4
Figure 1
Test piece shape and measured points (a1 to a4, b1 to b4, c1 to c4, and the vertices of the product)
A signal factor can be calculated from the coordinates of each vertex of the model shown in Figure 1 and is defined as the linear distance between all possible combinations of two vertices. Since we have 12 vertices, the number of signal factor levels amounts to 66. By measuring the coordinates (X, Y, Z) of each vertex with a coordinate measuring machine, we computed each linear distance as the output.

From our technical knowledge, we chose as control factors eight factors that affect machining accuracy and tool life. For the noise factor, we picked material hardness and set up the maximum and minimum values assumed to occur in actual manufacturing processes. Table 1 summarizes the control and noise factors and the levels selected.
Total variation:
ST = 250,021.740385 (f = 132)     (2)

Linear equations:
L1 = (71.000)(70.992) + (84.599)(84.607) + ⋯ + (11.000)(10.958) = 125,043.939335     (3)
L2 = (71.000)(70.991) + (84.599)(84.607) + ⋯ + (11.000)(10.955) = 125,040.360611     (4)

Effective divider:
r = (2)(71.000² + 84.599² + ⋯ + 11.000²) = 250,146.969846     (5)
Table 1
Factors and levels

Control Factor               Level 1   Level 2    Level 3
A: cutting direction         Up        Down       —
B: cutting speed (m/min)     Slow      Standard   Fast
C: feeding speed (m/min)     Slow      Standard   Fast
D: tool material             Soft      Standard   Hard
E: tool rigidity             Low       Standard   High
F: twisting angle (deg)      Small     Standard   Large
G: rake angle (deg)          Small     Standard   Large
H: depth of cut (mm)         Small     Standard   Large

Noise Factor
N: material hardness         Soft      Hard
Table 2
Layout of control factors and results of analysis (dB)

No.    SN Ratio    Sensitivity
 1      31.41        0.0022
 2      39.70        0.0058
 3      39.68        0.0028
 4       9.25        0.0730
 5      44.56        0.0001
 6      42.02        0.0020
 7      33.75        0.0057
 8      44.59        0.0003
 9      19.18        0.0114
10      42.80        0.0011
11      30.55        0.0145
12      26.41        0.0166
13      25.86        0.0148
14      35.24        0.0056
15      42.52        0.0022
16      41.01        0.0009
17       2.63        0.1801
18      39.30        0.0025
Table 3
Results of experiment 1 (mm)

                        Signal Factor
Noise      M1: a1a2    M2: a1a3    …    M66: c3c4
Factor     (71.000)    (84.599)         (11.000)
N1           70.992      84.607           10.958
N2           70.991      84.607           10.955
Total       141.983     169.214           21.913

Variation of proportional term:
Sβ = (1/250,146.969846)(125,043.939335 + 125,040.360611)² = 250,021.645747 (f = 1)     (6)

Variation due to noise:
SN = (1/250,146.969846)(125,043.939335 − 125,040.360611)² = 0.000051 (f = 1)     (7)

Error variation:
Se = 250,021.740385 − 250,021.645747 − 0.000051 = 0.094587 (f = 130)     (8)

Error variance:
Ve = 0.094587/130 = 0.000728     (9)

Total error variance:
VN = (0.094587 + 0.000051)/131 = 0.000722     (10)

SN ratio:
η = 10 log [(1/250,146.969846)(250,021.645747 − 0.000728)]/0.000722 = 31.41 dB     (11)

Sensitivity:
S = 10 log (1/250,146.969846)(250,021.645747 − 0.000728) = −0.0022 dB     (12)

We summarized these results in Table 2. Additionally, using these results, we created Table 4 as a level-by-level supplementary table of the SN ratio and sensitivity. Further, we plotted the factor effects in Figure 2. The SN ratios estimated from these results are 57.24 dB for the optimal configuration (13) and 33.73 dB for the current configuration (14).
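The chain from the two linear equations to the SN ratio can be replayed directly. This Python sketch (ours, not part of the case study) reproduces equations (6) through (12) from the printed L1, L2, r, and ST:

```python
import math

L1 = 125_043.939335
L2 = 125_040.360611
r = 250_146.969846   # effective divider, eq. (5)
ST = 250_021.740385  # total variation, eq. (2)

S_beta = (L1 + L2) ** 2 / r   # eq. (6)
S_N = (L1 - L2) ** 2 / r      # eq. (7)
Se = ST - S_beta - S_N        # eq. (8)
Ve = Se / 130                 # eq. (9)
VN = (Se + S_N) / 131         # eq. (10)

eta = 10 * math.log10(((S_beta - Ve) / r) / VN)  # eq. (11)
S = 10 * math.log10((S_beta - Ve) / r)           # eq. (12)

print(round(eta, 2), round(S, 4))  # 31.41 -0.0022
```

Note that the sensitivity comes out slightly negative (β is just below 1), which matches the confirmed β values near 0.999 in Table 5.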
Table 4
Supplementary table of the SN ratio and sensitivity (dB)

                          SN Ratio                  Sensitivity
Control Factor         1       2       3         1        2        3
A: cutting direction  33.79   31.81    —       0.0110   0.0263    —
B: cutting speed      35.09   33.24   30.08    0.0065   0.0162   0.0332
C: feeding speed      30.68   32.88   34.85    0.0153   0.0344   0.0063
D: tool material      22.59   34.93   40.89    0.0465   0.0076   0.0018
E: tool rigidity      35.38   33.91   29.12    0.0047   0.0162   0.0350
F: twisting angle     29.82   30.91   38.68    0.0353   0.0166   0.0040
G: rake angle         32.97   33.90   31.54    0.0051   0.0328   0.0180
H: depth of cut       40.86   33.05   24.49    0.006    0.0079   0.0473
Average               32.80
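The grand average at the bottom of Table 4 is simply the mean of the 18 SN ratios in Table 2, which is easy to verify (our own check):

```python
# SN ratios of the 18 runs from Table 2
sn = [31.41, 39.70, 39.68, 9.25, 44.56, 42.02, 33.75, 44.59, 19.18,
      42.80, 30.55, 26.41, 25.86, 35.24, 42.52, 41.01, 2.63, 39.30]
average = sum(sn) / len(sn)
print(round(average, 2))  # 32.8
```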
The sensitivity at the optimal configuration is estimated from the level averages of the significant factors:

Ŝ = B2 + D2 + E2 + F2 + H2 − 4T̄
  = 0.0162 + 0.0076 + 0.0162 + 0.0166 + 0.0079 − (4)(0.0186)
  = −0.0099 dB     (15)

Since β = 0.9939, the machined dimension relates to the NC input as y = 0.9939M, so the input can be corrected by the factor 1/0.9939 = 1.0061.     (16)
5. Confirmatory Experiment

We performed confirmatory experiments for both the optimal and initial configurations. Measured data and calculation procedures are tabulated in Table 5.
6. Research Process

This example began with the following requests from a manager and an engineer in charge of developing cutting technology: to coach them on the Taguchi method in developing cutting technology for high-strength steel, and to focus on two evaluation characteristics, the surface roughness of a gear after machining and tool life.
Figure 2
Response graphs of the SN ratio and coefficient of proportionality
Table 5
Results of estimation and confirmatory experiment (dB)

                     SN Ratio                     Sensitivity
Configuration   Estimation  Confirmation    Estimation  Confirmation
Optimal           57.24        54.09          0.9935       0.9939
Current           33.73        34.71          0.9989       0.9992
Gain              23.51        19.38
Figure 3
Final product and test piece
Therefore, we regarded this experiment as quite meaningful.

3. Although you prove that a technological development has good results, can you expect that the result will hold true for a real product? Because we obtained the result by using proportionality between various input signals, covering all directions x, y, and z, and product dimensions, we are confident that this method can work for any type of model shape. Yet the absolute values of the SN ratio of a test piece may not be consistent with those of an actual product. Even in this case, we ask readers to confirm the results, because good reproducibility of gain can be obtained.
Reference

Kenzo Ueno, 1993. Machining technology development for high-performance steel to which the concept of transformability is applied. Quality Engineering, Vol. 1, No. 1, pp. 26–30.
CASE 26
1. Introduction
Small, precise plastic gears require an extremely rigorous tolerance, between 5 and 20 μm. Furthermore, as the performance of a product equipped with gears improves, the tolerance becomes even more severe. Traditionally, in developing a plastic-molded part, we have repeated the following processes to determine mold dimensions:

1. Decide the plastic molding conditions based on previous technical experience.
2. Mold and machine the products.
3. Inspect the dimensions of the products.
4. Modify and adjust the mold.

For certain product shapes, we currently struggle to modify and adjust a die because of different shrinkage among different portions of a mold. To solve this, we need to clarify such technical relationships, thereby reducing product development cycle time, eliminating waste of resources, and improving product accuracy. In this experiment we applied the idea of transformability to assess our conventional, empirical method of determining molding conditions.
2. Transformability Process

As signal factors, we chose mold dimensions to evaluate transformability, and a parameter in the production process for adjusting. Transformability corresponds to the dimensions of a model gear (Figure 1). More specifically, M1, M2, M3, M4, M5, and M6 were selected; in particular, each of M1, M2, M3, and M4 contains two directions, X and Y. In sum, since one model has six signal factor levels and one mold produces two pieces, 2 × 6 = 12 signal factors were set up in total. We chose holding pressure as a three-level adjusting signal factor.

On the other hand, for all control factors, we set the current factor levels to level 2. As a noise factor, we selected the number of plastic molding shots completed and set the third and twentieth shots to levels 1 and 2, respectively. They represented the noise at the initial and stable stages of the molding process. Table 1 summarizes the signal and noise factors.
Figure 1
Section of model gear
3. SN Ratio

We show some of the experimental data of the L18 orthogonal array in Table 2. Based on these, we proceeded with the analysis.

Table 1
Signal and noise factors

Factor                                        Level
Signal: transformability (mold dimensions)    M1 to M12
Signal: adjusting (holding pressure)          three levels (M*1, M*2, M*3)
Noise: number of shots                        N1 (third shot), N2 (twentieth shot)

Total variation:
ST = 7049.764914 (f = 72)     (1)

Linear equations:
L1 = (9.996)(9.782) + ⋯ + (0.925)(0.900) + ⋯ + (20.298)(19.818) = 1195.315475     (2)
L2 = (9.996)(9.786) + ⋯ + (0.925)(0.890) + ⋯ + (20.298)(19.820) = 1195.845021     (3)
L3 = (9.996)(9.819) + ⋯ + (0.925)(0.900) + ⋯ + (20.298)(19.894) = 1199.881184     (4)
L4 = (9.996)(9.808) + ⋯ + (0.925)(0.896) + ⋯ + (20.298)(19.892) = 1199.350249     (5)
L5 = (9.996)(9.832) + ⋯ + (0.925)(0.910) + ⋯ + (20.298)(19.941) = 1202.810795     (6)
L6 = (9.996)(9.829) + ⋯ + (0.925)(0.906) + ⋯ + (20.298)(19.931) = 1202.271255     (7)
Table 2
Example of one run of an L18 orthogonal array (mm)

Mold dimension:      M1      M2      M3      M4      M5      M6
                   9.996   9.989   1.018   1.026   0.978  20.004
Adjusting  Noise
M*1        N1      9.782   9.768   0.903   0.905   0.955  19.545
           N2      9.786   9.770   0.901   0.894   0.954  19.565
M*2        N1      9.819   9.805   0.900   0.900   0.928  19.630
           N2      9.808   9.808   0.896   0.900   0.959  19.616
M*3        N1      9.832   9.836   0.908   0.892   0.977  19.680
           N2      9.829   9.833   0.905   0.904   0.970  19.666

Mold dimension:      M7      M8      M9     M10     M11     M12    Linear Equation
                  10.164  10.142   1.041   1.043   0.925  20.298
M*1        N1      9.924   9.901   0.883   0.864   0.900  19.818        L1
           N2      9.921   9.910   0.875   0.864   0.890  19.820        L2
M*2        N1      9.950   9.9934  0.880   0.857   0.900  19.894        L3
           N2      9.950   9.921   0.879   0.866   0.896  19.892        L4
M*3        N1      9.979   9.954   0.878   0.869   0.910  19.941        L5
           N2      9.970   9.964   0.873   0.869   0.906  19.931        L6
Effective divider:
r = 9.996² + 9.989² + ⋯ + 20.298² = 1224.108656     (8)

Variation of proportional term:
Sβ = (1/6r)[(9.996)(58.856) + (20.004)(117.70) + ⋯ + (10.164)(59.694) + (20.298)(119.296)]² = 7049.325990 (f = 1)     (9)

Variation of linear equations:
SL = (L1² + L2² + ⋯ + L5² + L6²)/r = 7049.366256 (f = 6)     (10)

Variation due to compounded noise:
SN = (LN1 − LN2)²/[(2)(3r)] = 0.000039 (f = 1)     (11)

where
LN1 = (9.996)(29.433) + (20.004)(58.855) + ⋯ + (10.164)(29.853) + (20.298)(59.653)
LN2 = (9.996)(29.423) + (20.004)(58.847) + ⋯ + (10.164)(29.841) + (20.298)(59.643)

Variation of the first-order term of the adjusting signal:
S* = (L5 + L6 − L1 − L2)²/[(2)(2r)] = 0.039582 (f = 1)     (12)

Residual variation:
Sres = SL − Sβ − SN − S* = 0.000645 (f = 3)     (13)
Table 3
ANOVA table of one run of the L18 orthogonal array

Source                       f         S              V
β: proportional term         1    7049.325990    7049.325990
M: positional noise          1       0.377766       0.377766
N: compounded noise          1       0.000039       0.000039 a
β*: proportional term        1       0.039582       0.039582
res: residual                3       0.000645       0.000215 a
e: error                    65       0.020853       0.000321 a
(e): pooled error           69       0.021537       0.000312
Total                       72    7049.764914

a Factors to be pooled.
Now, looking at these experimental data in detail, we notice that the contraction rates for die dimensions corresponding to signal factors M3, M4, M9, and M10 differ systematically from the other dimensions because of the mold structure, including gate position and thickness. Since we have a similar tendency for the other experiments of the L18 orthogonal array, by substituting M′1 for M1, M2, M5, M6, M7, M8, M11, and M12, and M′2 for M3, M4, M9, and M10, we calculated the variation of the interaction between the M′s and β and removed it from the error variation:

SM = (variation of proportional term for factors other than M3, M4, M9, and M10)
   + (variation of proportional term for M3, M4, M9, and M10) − Sβ
   = [(9.996)(58.856) + ⋯ + (20.298)(119.296)]²/[(6)(9.996² + ⋯ + 20.298²)]
   + [(1.018)(5.413) + ⋯ + (1.043)(5.189)]²/[(6)(1.018² + ⋯ + 1.043²)] − Sβ
   = (7173.532160)²/[(6)(1219.848126)] + (21.941819)²/[(6)(4.26053)] − 7049.325990
   = 7030.870285 + 18.833471 − 7049.325990 = 0.377766 (f = 1)     (14)

Error variation:
Se = ST − Sβ − SM − SN − S* − Sres = 0.020853 (f = 65)     (15)

SN ratio of transformability, with the pooled error variance Ve = 0.000312 from Table 3:
η = 10 log {[1/(6r)](Sβ − Ve)}/Ve = 10 log 3076.25 = 34.88 dB     (16)

SN ratio of adjustability:
η* = 10 log {[1/((2)(2)(1224.108656))](S* − Ve)}/Ve = 15.90 dB     (17)

For the sensitivity of transformability, we calculated the sensitivity S1 of the signal factors other than M3, M4, M9, and M10, and the sensitivity S2 of M3, M4, M9, and M10.

Sensitivity of dimensions M′1:
S1 = [1/((6)(1219.848126))](7030.870285 − 0.000154) = 0.960621
10 log S1 = −0.17 dB     (18)

Sensitivity of dimensions M′2:
S2 = [1/((6)(4.26053))](18.833471 − 0.000787) = 0.736711
10 log S2 = −1.33 dB     (19)
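The grouped proportional terms in equation (14) can be recomputed from the printed numerators and effective dividers. The sketch below (ours, not from the original study) reproduces SM to within the rounding of the published intermediates:

```python
S_beta = 7049.325990   # overall proportional term, eq. (9)

# Signals other than M3, M4, M9, M10 (divider 6 x 1219.848126)
S_beta1 = 7173.532160 ** 2 / (6 * 1219.848126)   # ~7030.870285
# Signals M3, M4, M9, M10 (divider 6 x 4.26053)
S_beta2 = 21.941819 ** 2 / (6 * 4.26053)         # ~18.833471

S_M = S_beta1 + S_beta2 - S_beta                 # ~0.377766
```

Splitting the proportional term this way moves the systematic gate/thickness effect out of the error term, which is exactly the "positional noise" row in Table 3.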
Table 4
Control and noise factors and levels

Factor                       Level 1         Level 2     Level 3
A:                           Low (A1)        High (A2)   —
B:                           A1 − 5 / A2 − 5   A1 / A2   A1 + 5 / A2 + 5
C: cylinder temperature      Low             Current     High
D: injection speed           Slow            Current     Fast
E: injection pressure        Low             Current     High
F: cooling time              Short           Current     Long
G: part placement            I               II          III
Table 5
SN ratios of the L18 orthogonal array (dB)

No.  Transformability  Adjustability    No.  Transformability  Adjustability
 1        34.88            15.90         10        33.53            24.26
 2        34.38            17.82         11        36.07            14.38
 3        32.73            13.78         12        30.07            21.60
 4        34.96            15.87         13        35.46            14.81
 5        34.69            16.41         14        33.39            22.37
 6        35.57            21.92         15        36.29            14.55
 7        35.38            14.82         16        33.17            22.37
 8        28.97            31.51         17        35.49            16.22
 9        36.54            14.91         18        31.14            32.88
Estimates at the optimal configuration:
Transformability: η̂ = 37.41 dB     (20)
Adjustability: η̂* = 15.42 + 16.05 + 18.26 − (2)(19.28) = 11.17 dB     (21)
Table 6
Average SN ratios by level (dB)

                           Transformability           Adjustability
Factor                     1       2       3        1       2       3
A:                       34.23   33.87    —       18.18   20.38    —
B:                       33.66   35.06   33.45    18.07   17.66   22.12
C: cylinder temperature  34.56   33.83   33.77    18.12   19.79   19.94
D: injection speed       35.70   33.57   32.89    15.42   19.72   22.70
E: injection pressure    34.40   33.02   34.74    20.50   20.62   16.73
F: cooling time          33.46   33.99   34.72    20.44   21.35   16.05
G: part placement        33.84   35.11   33.22    19.04   18.26   20.54
Figure 2
Response graphs
Estimates at the current configuration:
Transformability: η̂ = 33.57 + 33.99 + 35.11 − (2)(34.06) = 34.55 dB     (22)
Adjustability: η̂* = 19.72 + 21.35 + 18.26 − (2)(19.28) = 20.77 dB     (23)

Figure 3
Interaction between A and B (SN ratio, dB, of B1, B2, and B3 at A1 and A2)
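Equations (21) to (23) all use the standard additive estimate, η̂ = (sum of the chosen level averages) − (n − 1) × (overall average), with the level averages taken from Table 6. A compact check (our sketch):

```python
def estimate(level_averages, overall):
    """Additive prediction from factor level averages."""
    return sum(level_averages) - (len(level_averages) - 1) * overall

adj_optimal = estimate([15.42, 16.05, 18.26], 19.28)    # eq. (21)
trans_current = estimate([33.57, 33.99, 35.11], 34.06)  # eq. (22)
adj_current = estimate([19.72, 21.35, 18.26], 19.28)    # eq. (23)

print(round(adj_optimal, 2), round(trans_current, 2), round(adj_current, 2))
# 11.17 34.55 20.77
```

The current-configuration estimates use the level-2 averages throughout, consistent with the statement in Section 2 that all control factors were set to level 2 for the current condition.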
Table 7
ANOVA table of confirmatory experiment (optimal configuration)

Source                       f         S              V
β: proportional term         1    7075.810398    7075.810398
M: positional noise          1       0.318380       0.318380
N: compounded noise          1       0.000202       0.000202 a
β*: first-order term         1       0.038291       0.038291
res: residual                4       0.000442       0.000111 a
e: error                    64       0.013529       0.000211 a
(e): pooled error           69       0.014173       0.000205
Total                       72    7076.181243

a Factors to be pooled.
Table 8
ANOVA table of confirmatory experiment (current configuration)

Source                       f         S              V
β: proportional term         1    7065.759455    7065.759455
M: positional noise          1       0.311792       0.311792
N: compounded noise          1       0.000128       0.000128 a
β*: first-order term         1       0.044723       0.044723
res: residual                4       0.002015       0.000504 a
e: error                    64       0.017974       0.000281 a
(e): pooled error           69       0.020117       0.000292
Total                       72    7066.136083

a Factors to be pooled.
Table 9
Estimation and confirmation (dB)

                    Transformability              Adjustability
Configuration   Estimation  Confirmation     Estimation  Confirmation
Optimal           37.41        36.72           11.17        16.08
Current           34.55        35.18           20.77        16.84
Gain               2.86         1.54           -9.6         -0.76

Table 10
Sensitivity S for confirmatory experiment

Configuration             Current     Optimal
Transformability  S1     0.962783    0.964159
                  S2     0.757898    0.757105
Adjustability     S1     0.044507    0.031529
                  S2     0.000166    0.000152
Sensitivities were also calculated for the confirmatory experiment, as shown in Table 10. No significant differences were found.
Reference

Takayoshi Matsunaga and Shinji Hanada, 1992. Technology development of transformability. Quality Engineering Application Series. Tokyo: Japanese Standards Association, pp. 83–95.
CASE 27
1. Introduction
Partial felting is a forming process based on felting shrinkage: using a paste made primarily of heat-sensitive polymers as an antifelting agent, a predetermined pattern is printed onto wool cloth, which is then felted above the temperature at which the heat-sensitive polymers become insoluble. Through this process, the part not printed shrinks, whereas the printed part is not felted; as a result, both felted and unfelted parts remain, creating an expressive appearance that includes rugged surface areas and various fluffs.
Felting here refers to a phenomenon unique to wool (knitted) cloth: because of the cuticles on the surface of wool, its fibers become interwoven with each other, the wool shrinks in an irreversible manner, and subsequently its texture becomes thicker and finer. This is similar to a wool sweater that shrinks after being washed and can never be returned to its original size. Unlike other common substances, heat-sensitive polymers are soluble in cold water but insoluble in hot water. PVME [poly(vinyl methyl ether)], in particular, which was used in our study, is a very heat-sensitive polymer that quite sharply alternates between the reversible states of dissolving and clouding around 35°C.
Another heat-sensitive polymer is a mixture of PNIPAM [poly(N-isopropylacrylamide)] with
Table 1
Noise factors

              N1           N2
Cloth type    Mousseline   Serge
              50           70

Total variation:
ST = … (f = 24)     (1)

Effective divider:
r = 127.5² + 207.0² + ⋯ + 90.5² = 276,021.8     (2)

Linear equations:
L1 = (127.5)(123.0) + ⋯ + (90.5)(88.0) = 269,879.3
L2 = (127.5)(118.9) + ⋯ + (90.5)(83.4) = 251,651.5     (3)

Variation of proportional term:
Sβ = (L1 + L2)²/(2r) = 492,704.6 (f = 1)     (4)

Variation due to noise:
SN = (L1 − L2)²/(2r) = 601.86 (f = 1)     (5)

Error variation:
Se = ST − Sβ − SN = 97.44 (f = 22)     (6)

Error variance:
Ve = Se/22 = 4.43     (7)

Total error variance:
VN = (SN + Se)/23 = 30.40     (8)

SN ratio:
η = 10 log {[1/(2r)](Sβ − Ve)}/VN = −15.32 dB     (9)

Sensitivity:
S = 10 log [1/(2r)](Sβ − Ve) = −0.49 dB     (10)
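Starting from the two linear equations and the effective divider, the whole decomposition above can be replayed. This sketch (ours, using the printed Ve and VN) reproduces the SN ratio and sensitivity, including their signs:

```python
import math

L1, L2 = 269_879.3, 251_651.5  # linear equations, eq. (3)
r = 276_021.8                  # effective divider, eq. (2)
Ve, VN = 4.43, 30.40           # eqs. (7) and (8)

S_beta = (L1 + L2) ** 2 / (2 * r)  # eq. (4): ~492,704.6
S_N = (L1 - L2) ** 2 / (2 * r)     # eq. (5): ~601.86

eta = 10 * math.log10(((S_beta - Ve) / (2 * r)) / VN)  # eq. (9)
S = 10 * math.log10((S_beta - Ve) / (2 * r))           # eq. (10)

print(round(eta, 2), round(S, 2))  # -15.32 -0.49
```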
Figure 1
Signal factors (M1 to M12, grouped into deep, intermediate, and light colors)
Table 2
Experimental data (mm)

        M1        M2        M3        M4        M5        M6      Visual
      (127.5)   (207.0)   (194.0)   (101.0)   (92.5)   (150.5)   Judgment
N1     123.0     201.0     188.0      99.0     90.0     146.6       5
N2     118.9     187.4     173.0      89.6     86.0     136.0       5

        M7        M8        M9        M10       M11       M12     Visual
      (230.5)   (204.5)   (71.0)    (126.5)   (117.0)   (90.5)   Judgment
N1     228.0     202.5     69.0      123.5     113.5     88.0      3, 4
N2     210.8     184.8     68.0      119.9     108.9     83.4      5, 5
Table 3
Control factors and levels

Control Factor              1            2            3
A:                          LN: Type-1   YH: Type-2   —
B: print-dyeing paste       SM 4000      SH 4000      SH 4000
C: paste concentration      Low          Mid          High
D: PVME concentration       Low          Mid          High
E:                          —            Mid          High
F:                          —            Mid          High
G: pH adjustment agent      None         Malic acid   Ammonium sulfate
H: dry temperature          High         Low          (Low)
Figure 2
Response graphs
Table 4
Results of confirmatory experiment (dB)

                     SN Ratio                     Sensitivity
Configuration   Estimation  Confirmation    Estimation  Confirmation
Optimal          -11.90       -12.28          -0.37       -0.36
Worst            -15.87       -16.20          -0.51       -0.51
Gain               3.97         3.92           0.14        0.15
Reference

Katsuaki Ishii, 2001. Optimization of a felting resist paste formula used in the partial felting process. Quality Engineering, Vol. 9, No. 1, pp. 51–58.
CASE 28
Abstract: In the conventional development process, friction materials have been developed in conjunction with the development of automatic transmissions so that the performance of the friction material meets the requirements for ease of use. In this research, besides developing an automatic transmission, we attempted to develop a friction material that has high functionality.
1. Introduction

Figure 1 illustrates the structure of an automatic transmission. At the circled portions, friction materials are required for smooth clutching and releasing actions to meet targeted performance goals, such as reduction of shift shock or vibration and improvement in fuel efficiency.

2. Generic Function

Traditionally, we have evaluated friction materials primarily by using the μ–V characteristic, representing the coefficient of dynamic friction corresponding to the relative rotating speed between a friction material and its contact surface (Figure 2). The μ–V characteristic must not have a negative slope if shift shock and vibration are to be prevented. However, it tends to become negative. Thus, through many repeated experiments, engineers have made final decisions by looking at μ–V characteristics.

Fundamentally, the friction force follows Coulomb's law, expressed by F = μW, where F is the friction force, W the vertical load, and μ the friction coefficient. Taking into consideration the fact that a friction material inside an automatic transmission is used in a rotational direction, a commonly used experimental device (Figure 3) was utilized for our experiment.
3. SN Ratio

Based on the format shown in Table 2, we obtained measurement data. To add the slope of the μ–V characteristic to our evaluation, we prepared the summary table of linear equations illustrated in Table 3 and proceeded with our analysis based on the following procedure. Figure 4 indicates measured data under a certain noise condition. With respect to μ, we can replot these data in Figure 5 by setting the relative rotating speed, M*, on the horizontal axis. Next, we defined the slope in Figure 5 as β*; a positive β* is equivalent to a positive slope of the μ–V characteristic. Sβ* corresponds to the variation of the first-order element among the variation SM* due to the relative rotating speed M*. Orthogonal polynomial equations are used to decompose these data.

Figure 1
Structure of automatic transmission (sectional view)

Total variation:
ST = y111² + y112² + ⋯ + y135² + ⋯ + y611² + ⋯ + y635² = 1032.4750 (f = 90)     (1)

Effective divider:
r = M1² + M2² + M3² = 3.50     (2)

Variation of proportional term:
Sβ = (L1 + L2 + ⋯ + L6)²/(30r) = 1029.1453 (f = 1)     (3)

Variation due to relative rotating speed:
SM* = (K1² + K2² + ⋯ + K5²)/(6r) − Sβ = 0.0319 (f = 4)     (4)

Variation of the first-order term:
Sβ* = 0.0030 (f = 1)     (5)

Residual variation:
SM* − Sβ* = 0.0289 (f = 3)     (6)
Figure 3
Experimental device
Error variation:
Se = ST − Sβ − SM* = 3.2977 (f = 85)     (7)

Error variance:
Ve = Se/85 = 0.0388     (8)

Total error variance:
VN = [(SM* − Sβ*) + Se]/88 = (0.0290 + 3.2977)/88 = 0.0378     (9)

SN ratio:
η = 10 log {[1/(30r)](Sβ − Ve)}/VN = 24.14 dB     (10)

Sensitivity:
S = 10 log [1/(30r)](Sβ − Ve) = 9.91 dB     (11)

Slope:
β* (first-order coefficient; values are listed in Table 5)     (12)
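The decibel values in equations (10) and (11) can be verified from the printed variations; this short sketch (ours, not from the original study) replays the computation:

```python
import math

r = 3.50            # effective divider, eq. (2)
S_beta = 1029.1453  # proportional term, eq. (3)
Ve = 0.0388         # error variance, eq. (8)
VN = 0.0378         # total error variance, eq. (9)

signal = (S_beta - Ve) / (30 * r)
eta = 10 * math.log10(signal / VN)  # eq. (10)
S = 10 * math.log10(signal)         # eq. (11)

print(round(eta, 2), round(S, 2))  # 24.14 9.91
```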
Table 1
Signal and noise factors

Factor                                     Level
Signal factors
  Main signal M: surface pressure          Low, Mid, High
  Subsignal M*: relative rotating speed    M*1, M*2, M*3, M*4, M*5
Noise factors
  O: ATF oil temperature                   Low, High
  P: fit-in, deterioration                 Initial, Fit-in, After degradation
Table 2
Examples of torque data (y for each noise condition, surface pressure M, and relative rotating speed M*)

Noise Condition (oil temperature)     M*1    M*2    M*3    M*4    M*5
N1: low, initial                      y111   y112   y113   y114   y115
N2: low, fit-in                       y121   y122    ⋯
N3: low, deterioration                 ⋯
N4: high, initial                      ⋯
N5: high, fit-in                       ⋯
N6: high, deterioration               y631   y632   y633   y634   y635

These columns are repeated for each surface pressure level M1, M2, and M3.
Table 3
Summary of linear equations

           M*1    M*2    M*3    M*4    M*5    Linear Equation for Each Error Condition
N1         L11    L12    L13    L14    L15    L1 = 55.0480
N2         L21    L22    L23    L24    L25    L2 = 56.8663
N3         L31    L32    L33    L34    L35    L3 = 57.6040
N4         L41    L42    L43    L44    L45    L4 = 51.7542
N5         L51    L52    L53    L54    L55    L5 = 52.3932
N6         L61    L62    L63    L64    L65    L6 = 55.0596
Subtotal   K1 = 65.6614   K2 = 66.4108   K3 = 65.6962   K4 = 65.6730   K5 = 65.2838
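Table 3 has a useful consistency property: the row totals L1 to L6 and the column subtotals K1 to K5 are two groupings of the same data, so they share a grand total, and both Sβ and SM* can be recomputed from them. Our check (accurate to the rounding of the published subtotals):

```python
L = [55.0480, 56.8663, 57.6040, 51.7542, 52.3932, 55.0596]  # rows N1..N6
K = [65.6614, 66.4108, 65.6962, 65.6730, 65.2838]           # columns M*1..M*5
r = 3.50

grand_L, grand_K = sum(L), sum(K)  # both ~328.725

S_beta = grand_L ** 2 / (30 * r)                     # eq. (3): ~1029.1453
S_Mstar = sum(k * k for k in K) / (6 * r) - S_beta   # eq. (4): ~0.0319
```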
4. Results of Analysis

Figure 6 summarizes the response graphs of the results obtained by the analysis procedure described above.

Figure 4
Typical measured data

Figure 5
Relationship between relative rotating speed and β*
Figure 6
Response graphs
Table 4
Control factors and levels

Control Factor          1       2       3
A: fiber diameter       Small   Large   —
B:                      Small   Mid     Large
C: amount of fiber      Small   Mid     Large
D: fiber ratio          Small   Mid     Large
E: amount of resin      Small   Mid     Large
F: compression rate     Small   Mid     Large
G: plate thickness      Small   Mid     Large
H: surface treatment    None
Table 5
Results of confirmatory experiment

                SN Ratio, η (dB)           Sensitivity, S (dB)          Slope, β*
Configuration  Estimation  Confirmation  Estimation  Confirmation  Estimation  Confirmation
Optimal          25.80        27.29         9.82        9.74        0.00024     0.00017
Initial          22.72        23.64         9.79        9.65        0.00018     0.00026
Gain              3.08         3.65         0.03        0.09        0.00006    -0.00009
Reference
Makoto Maeda, Nobutaka Chiba, Tomoyuki Daikuhara,
and Tetsuya Ishitani, 2000. Development of wet fric-
CASE 29
Abstract: Because of its low cost and easy recycling, green sand casting (a casting method using green sand) is used for many cast products. Since the caking strength of green sand tends to vary widely when mixed, and fluctuates over time after being mixed, it is considered difficult to control on actual production lines. Therefore, by applying parameter design we attempted to determine an optimal manufacturing condition such that we can improve the caking strength after kneading and make it more robust against change over time.
1. Introduction
Total variation:
ST = 28.3² + 47.9² + ⋯ + 2.4² + 1.5² = 6880.46 (f = 10)     (1)

Linear equations:
L1 = (0.02)(28.3) + (0.04)(47.9) + ⋯ + (0.10)(8.3) = 7.168
L2 = (0.02)(34.2) + (0.04)(18.1) + ⋯ + (0.10)(1.5) = 2.002     (2)

Effective divider:
r = 0.02² + 0.04² + ⋯ + 0.10² = 0.022     (3)

Variation of proportional term:
Sβ = (L1 + L2)²/(2r) = 1911.111 (f = 1)     (4)
Figure 1
Method for measurement of caking strength
Variation due to noise:
SN = (L1² + L2²)/r − Sβ = 606.536 (f = 1)     (5)

Error variation:
Se = ST − Sβ − SN = 4362.819 (f = 8)     (6)

Error variance:
Ve = Se/8 = 545.352     (7)

Total error variance:
VN = (SN + Se)/9 = 552.151     (8)

SN ratio:
η = 10 log {[1/(2r)](Sβ − Ve)}/VN = 17.50 dB     (9)

Sensitivity:
S = 10 log [1/(2r)](Sβ − Ve) = 44.92 dB     (10)
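With only two noise conditions and five signal levels, the entire analysis fits in a few lines. This sketch (ours, not from the original study) reproduces equations (4) through (10) from the printed L1, L2, r, and ST:

```python
import math

L1, L2 = 7.168, 2.002  # linear equations, eq. (2)
r = 0.022              # effective divider, eq. (3)
ST = 6880.46           # total variation, eq. (1)

S_beta = (L1 + L2) ** 2 / (2 * r)   # eq. (4)
S_N = (L1**2 + L2**2) / r - S_beta  # eq. (5)
Se = ST - S_beta - S_N              # eq. (6)
Ve = Se / 8                         # eq. (7)
VN = (S_N + Se) / 9                 # eq. (8)

eta = 10 * math.log10(((S_beta - Ve) / (2 * r)) / VN)  # eq. (9)
S = 10 * math.log10((S_beta - Ve) / (2 * r))           # eq. (10)

print(round(eta, 2), round(S, 2))  # 17.5 44.92
```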
Table 1
Results of pull test (mm)

                   Signal Factor (Displacement of Green Sand)
Noise Factor     M1 (0.02)  M2 (0.04)  M3 (0.06)  M4 (0.08)  M5 (0.10)
N1 (0 min)         28.3       47.9       44.4       14.8        8.3
N2 (60 min)        34.2       18.1        4.2        2.4        1.5
Table 2
Control factors and levels

Control Factor        1       2       3
A:                    Small   Large   —
B:                    Low     Mid     High
C:                    No      Small   Large
D:                    No      Small   Large
E:                    No      Small   Large
F:                    No      Small   Large
G:                    Small   Mid     Large
H: maturation time    Short   Mid     Long
Figure 2
Response graphs
Table 3
SN ratios and sensitivities: prediction and confirmation (dB)

                   SN Ratio                   Sensitivity
Configuration   Prediction  Confirmation   Prediction  Confirmation
Optimum           27.61       33.89          55.55       55.96
Baseline          22.20       29.13          48.51       49.55
Gain               5.41        4.76           7.04        6.41
Table 4
Jolt toughness values

Configuration   Right after Kneading (N1)   60 min after Kneading (N2)
Optimal                   64                          48
Initial                   33                          16
Reference

Yuji Hori and Satomi Shinoda, 1998. Parameter design on a foundry process using green sand. Quality Engineering, Vol. 6, No. 1, pp. 54–59.
Figure 3
Jolt toughness test
CASE 30
Abstract: New materials composed of metal and ceramic have many interesting and useful properties in the mechanical, electrical, and chemical fields. Recently, a great deal of attention has been paid to the potential utilization of these properties. In this study, the forming process for such a material was researched using low-pressure plasma spraying equipment, including two independent metal and ceramic powder supply devices and a plasma jet flame. Using quality engineering approaches, the generic function of plasma spraying was used for evaluation. After optimization, metal and ceramic materials can be sprayed under the same conditions. It is also possible to produce a sprayed deposit layer with a metal/ceramic mixing ratio for both dispersed-type and inclined-type products.
1. Introduction

Major technical problems in developing a functional material by plasma spraying were as follows:

1. The conditions for spraying metal are different from those for ceramic because the two have extremely different heat characteristics.

2. Metal and ceramics are so different in specific gravity that they cannot commingle.

To tackle these issues, by utilizing two separate powder supply systems connected to a single decompressed plasma thermal spraying device, and by supplying metal and ceramic simultaneously to a plasma heat source, we created conditions that can develop a compound thin film.
Figure 1
Experimental device
Thickness Experiment

Total variation:
$$S_T = 78^2 + 80^2 + \cdots + 269^2 = 3{,}081{,}538 \quad (f = 54) \tag{1}$$

Effective divider:
$$r = 1^2 + 3^2 + 5^2 = 35 \tag{2}$$

Linear equations (from the totals in Table 2):
$$L_1 = (1)(714) + (3)(2336) + (5)(3575) = 25{,}597$$
$$L_2 = (1)(454) + (3)(1501) + (5)(2542) = 17{,}667 \tag{3}$$

Variation of proportional term:
$$S_\beta = \frac{(L_1 + L_2)^2}{(9)(2r)} = 2{,}971{,}069.3 \quad (f = 1) \tag{4}$$
Table 1
Factors and characteristics

Noise factor: compound ratio (in volume), Ni vs. alumina (7:3 and 3:7)
Signal factor: number of reciprocating sprays (1, 3, 5)
Characteristics: thickness, weight
Variation of sensitivity due to the noise factor:
$$S_{N\beta} = \frac{L_1^2 + L_2^2}{9r} - S_\beta = 99{,}817.30 \quad (f = 1) \tag{5}$$

Error variation:
$$S_e = S_T - S_\beta - S_{N\beta} = 10{,}651.34 \quad (f = 52) \tag{6}$$

Error variance:
$$V_e = \frac{S_e}{52} = 204.83 \tag{7}$$

Total noise variance:
$$V_N = \frac{S_{N\beta} + S_e}{53} = 2084.31 \tag{8}$$

SN ratio:
$$\eta = 10 \log \frac{[1/(9)(2r)](S_\beta - V_e)}{V_N} = 10 \log 2.26 = 3.55 \text{ dB} \tag{9}$$

Sensitivity:
$$S = \frac{1}{(9)(2r)}(S_\beta - V_e) = 4715.66 \tag{10}$$

Weight Experiment

Figure 2
Thicknesses to be measured

Total variation:
$$S_T = 0.3630^2 + 1.0292^2 + \cdots + 1.1260^2 = 5.9201730 \quad (f = 6) \tag{11}$$

Linear equations:
$$L_1 = (1)(0.3630) + (3)(1.0292) + (5)(1.7196) = 12.0486$$
$$L_2 = (1)(0.1842) + (3)(0.6858) + (5)(1.1260) = 7.8716 \tag{12}$$

Variation of proportional term:
$$S_\beta = \frac{(L_1 + L_2)^2}{2r} = 5.6687766 \quad (f = 1) \tag{13}$$

Variation of sensitivity due to the noise factor:
$$S_{N\beta} = \frac{L_1^2 + L_2^2}{r} - S_\beta = 0.2492475 \quad (f = 1) \tag{14}$$

Error variation:
$$S_e = S_T - S_\beta - S_{N\beta} = 0.0021488 \quad (f = 4) \tag{15}$$
Table 2
Data for experiment 16 of the L18 orthogonal array (µm)

           M1 (First Round Trip)         M2                            M3
           J1    J2    J3    Total       J1    J2    J3    Total       J1    J2    J3    Total
N1  I1     78    75    78    231         282   270   240   792         434   408   385   1227
    I2     80    81    82    243         272   263   241   776         419   400   375   1194
    I3     79    81    80    240         274   257   237   768         401   385   368   1154
    Total  237   237   240   714         828   790   718   2336        1254  1193  1128  3575
N2  I1     48    56    51    155         174   175   160   509         293   294   288   875
    I2     47    53    49    149         167   178   160   505         292   281   270   843
    I3     51    52    47    150         160   172   155   487         273   282   269   824
    Total  146   161   147   454         501   525   475   1501        858   857   827   2542
Table 3
Data for experiment 16 of the L18 orthogonal array (g)

      M1        M2        M3
N1    0.3630    1.0292    1.7196
N2    0.1842    0.6858    1.1260

Error variance:
$$V_e = \frac{S_e}{4} = 0.0005372 \tag{16}$$

Total noise variance:
$$V_N = \frac{S_{N\beta} + S_e}{5} = 0.0502792 \tag{17}$$

SN ratio:
$$\eta = 10 \log \frac{(1/2r)(S_\beta - V_e)}{V_N} = 2.07 \text{ dB} \tag{18}$$

Sensitivity:
$$S = \frac{1}{2r}(S_\beta - V_e) = 0.0809748$$
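The weight-experiment values in equations (11) to (18) can be reproduced from the Table 3 data. A minimal Python sketch (assuming signal levels of 1, 3, and 5 reciprocating sprays, consistent with r = 35):

```python
import math

M = [1, 3, 5]                        # signal: number of reciprocating sprays
N1 = [0.3630, 1.0292, 1.7196]        # weight data (g), Table 3, noise level N1
N2 = [0.1842, 0.6858, 1.1260]        # noise level N2

r = sum(m * m for m in M)                        # effective divider, 35
ST = sum(y * y for y in N1 + N2)                 # total variation (f = 6)
L1 = sum(m * y for m, y in zip(M, N1))           # 12.0486
L2 = sum(m * y for m, y in zip(M, N2))           # 7.8716

S_beta = (L1 + L2) ** 2 / (2 * r)                # proportional term
S_Nbeta = (L1**2 + L2**2) / r - S_beta           # noise-by-slope variation
Se = ST - S_beta - S_Nbeta                       # error variation (f = 4)
Ve = Se / 4
VN = (S_Nbeta + Se) / 5

sens_lin = (S_beta - Ve) / (2 * r)               # sensitivity before taking logs
eta = 10 * math.log10(sens_lin / VN)             # SN ratio (dB)
```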
Table 4
Control factors and levels

Control Factor               Level 1    Level 2    Level 3
A:                           Hydrogen   Helium
B:                           25         35         45
C: current/voltage (A/V)     12         15         18
D:                           50         200        400
E:                           0.8
F:                           13         24
G:                           30         60
H:                           0.1        0.5        1.2
Figure 3
Response graphs of thickness experiment (SN ratio and sensitivity, dB)
Figure 4
Response graphs of weight experiment (SN ratio and sensitivity, dB)
Table 5
Estimation and confirmatory experiment results (dB)

               Thickness                     Weight
Configuration  Estimation   Confirmation    Estimation   Confirmation
Optimal        9.83         12.70           12.16        15.12
Worst          -6.49        -3.04           -8.48        -1.61
Gain           16.32        15.74           20.64        16.73
Figure 5
Dispersed and inclined films. Dispersed type: nickel:alumina = 3:7. Inclined type: composition graded from the nickel side (nickel:alumina = 7:3) toward the alumina side on the base side. Optimal configuration: A1B1C2D1E3F1G3H2.
For the inclined thermal-sprayed thin film, we gradually alter the supply ratio from the nickel-abundant state to the alumina-abundant state, from right to left in the figure. These results enabled us to deal with metal and ceramics under the same conditions and to develop compounded functional materials by plasma spraying.

Reference
Kazumoto Fujita, Takayoshi Matsunaga, and Satoshi Horibe, 1994. Development of functional materials by spraying process. Quality Engineering, Vol. 2, No. 1, pp. 22–29.
CASE 31
1. Introduction
A two-piece gear consists of two parts. After charging a circular brazing material into a groove of gear 2, we press-fit gear 1 to gear 2 and braze them together in a furnace. Figure 1 outlines the brazing process.

Good brazing that results in constant strength requires a state in which the brazing material has permeated the entire brazing surface. In other words, the joint should ideally have the same characteristics as a one-piece gear. As shown in Figure 2, when pressure is applied, a one-piece gear is supposed to have proportionality between the load and the displacement, following Hooke's law within the elastic range. Therefore, to establish the proportionality between the load, y, and the displacement, M, regardless of gear size, we select the proportionality between them and the proportionality between the load and the brazing area, M*, as the generic function of brazing.

As signal factors, the displacement, M, and the brazing area, M*, were chosen according to the strength standard and preliminary experimental results, as illustrated in Table 1. As a noise factor, temperature (Table 1) was chosen because the variability in temperature inside the brazing furnace greatly affects brazing permeation.

The experiment was performed according to the pulling strength test shown in Figure 3.
Total variation:
$$S_T = 0.00^2 + 0.10^2 + 2.08^2 + \cdots + 0.00^2 = 99.29 \quad (f = 24) \tag{1}$$

Effective divider:
$$r = [(0.1)(554)]^2 + [(0.1)(679)]^2 + [(0.1)(804)]^2 + \cdots + [(0.4)(804)]^2 = 423{,}980 \tag{2}$$

Variation of proportional term:
$$S_\beta = \frac{1}{2r}[(0.1)(554)(0.00 + 0.13) + (0.1)(679)(0.10 + 0.05) + \cdots + (0.4)(804)(7.33 + 0.00)]^2 = 24.54 \quad (f = 1) \tag{3}$$

Variation of sensitivity due to the noise factor:
$$S_{N\beta} = \frac{1}{r}\{[(0.1)(554)(0.00) + \cdots + (0.4)(804)(7.33)]^2 + [(0.1)(554)(0.13) + \cdots + (0.4)(804)(0.00)]^2\} - S_\beta = 22.61 \quad (f = 1) \tag{4}$$

Figure 1
Gear brazing process

Figure 2
Material property of a one-piece gear
Table 1
Signal and noise factors and levels

Signal factors:
  Displacement, M (mm): 0.1, 0.2, 0.3, 0.4
  Brazing area, M* (mm²): 554, 679, 804
Noise factor:
  Furnace temperature, N: High, Low
Error variation:
$$S_e = S_T - S_\beta - S_{N\beta} = 52.14 \quad (f = 22) \tag{5}$$

Error variance:
$$V_e = \frac{S_e}{22} = 2.370 \tag{6}$$

Total noise variance:
$$V_N = \frac{S_{N\beta} + S_e}{23} = 3.25 \tag{7}$$

SN ratio:
$$\eta = 10 \log \frac{(1/2r)(S_\beta - V_e)}{V_N} = -50.95 \text{ dB} \tag{8}$$

Sensitivity:
$$S = 10 \log \frac{1}{2r}(S_\beta - V_e) = -45.83 \text{ dB} \tag{9}$$

Figure 3
Testing method for a two-piece gear (pulling strength test)
Figure 4
Test piece shape of a two-piece gear
Table 2
Strength test measurements

      M1 (0.1 mm)           M2 (0.2 mm)           M3 (0.3 mm)           M4 (0.4 mm)
      M*1   M*2   M*3       M*1   M*2   M*3       M*1   M*2   M*3       M*1   M*2   M*3
N1    0.00  0.10  2.08      0.00  0.23  3.55      0.00  0.25  5.33      0.00  0.00  7.33
N2    0.13  0.05  0.00      0.15  0.08  0.00      0.20  0.10  0.00      0.00  0.00  0.00
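Because this case uses a double signal, each observation is weighted by the product M × M* when forming the linear equations. The sketch below replays the computation from Table 2 under that assumption; the published r = 423,980 was rounded, so the last digits differ slightly.

```python
import math

M = [0.1, 0.2, 0.3, 0.4]             # displacement (mm)
Ms = [554, 679, 804]                 # brazing area M* (mm^2)
# Table 2 data, ordered M1(M*1..M*3), M2(M*1..M*3), M3(...), M4(...)
N1 = [0.00, 0.10, 2.08, 0.00, 0.23, 3.55, 0.00, 0.25, 5.33, 0.00, 0.00, 7.33]
N2 = [0.13, 0.05, 0.00, 0.15, 0.08, 0.00, 0.20, 0.10, 0.00, 0.00, 0.00, 0.00]

signals = [m * s for m in M for s in Ms]          # combined signal per cell
r = sum(x * x for x in signals)                   # effective divider
ST = sum(y * y for y in N1 + N2)                  # total variation (f = 24)
L1 = sum(x * y for x, y in zip(signals, N1))      # linear equation, noise N1
L2 = sum(x * y for x, y in zip(signals, N2))      # linear equation, noise N2

S_beta = (L1 + L2) ** 2 / (2 * r)                 # ~24.54 (f = 1)
S_Nbeta = (L1**2 + L2**2) / r - S_beta            # ~22.61 (f = 1)
Se = ST - S_beta - S_Nbeta                        # error variation (f = 22)
Ve = Se / 22
VN = (S_Nbeta + Se) / 23
eta = 10 * math.log10((S_beta - Ve) / (2 * r) / VN)   # SN ratio (dB)
S_db = 10 * math.log10((S_beta - Ve) / (2 * r))       # sensitivity (dB)
```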
Table 3
Control factors and levels

Control Factor             Level 1   Level 2   Level 3
A: flux supply position    Bottom    Top
B: flux concentration      Thin      Med.      Concentrated
C: amount of flux          Less      Med.      More
D:                         Less      Med.      More
E:                         Less      Med.      More
F:                         Fine      Med.      Rough
G: tightening amount       Small     Med.      Large
H: pressure                Low       Med.      High

Figure 5
Response graphs
Table 4
Estimation and confirmation of SN ratio and sensitivity (dB)

               SN Ratio                      Sensitivity
Configuration  Estimation   Confirmation    Estimation   Confirmation
Optimal        -34.5        -36.0           -32.4        -30.7
Initial        -41.6        -43.4           -38.3        -34.6
Gain           7.1          7.4             5.9          3.9
$$10^{0.745} = 5.55 \tag{10}$$

That is, the error variance under the optimal configuration is 1/5.55 of that under the initial configuration. Applying this result to an actual product, we obtain the histogram in Figure 7, based on a torsion test to evaluate the original function of a gear. This reveals that we can improve both the variability and the strength under the optimal configuration as compared with the initial configuration.
Reference
Akira Hashimoto, 1997. Optimization of two-piece gear brazing condition. Quality Engineering, Vol. 5, No. 3, pp. 47–54.
Figure 7
Result of torsion test
CASE 32
Abstract: In the manufacture of automobile switches using electronic components such as light-emitting diodes or resistors, a good electrical connection between the component and the terminal is essential. Soldering has long been used to make this connection, but resistance welding is becoming more and more popular, due to easy automation and the shorter time required for the process. Currently, evaluation of resistance welding relies on a strength test; therefore, evaluation cannot be made without destroying the delicate wire in the component. In this study, evaluation is made by the generic function of welding current–voltage characteristics rather than by strength.
1. Introduction

2. Evaluation Method
Figure 1
Measuring method of current and voltage characteristics

We examined the measurement method and positions. Because of the tiny resistance at the joint, to measure the voltage drop properly we selected three levels of current, 5, 10, and 20 A, which is a wider range than the normal tolerable current of electronic parts. Additionally, we prepared a special measurement device to achieve accuracy in locating the voltage-measuring probe.

3. SN Ratio

Using test pieces for each welding condition laid out in an L18 orthogonal array, we measured the current and voltage characteristics. Table 1 gives one example of the experimental data.

Total variation:
$$S_T = 0.020^2 + 0.041^2 + \cdots + 0.058^2 = 0.091721 \quad (f = 24) \tag{1}$$

Linear equations:
$$L_{11} = (5)(0.020) + (10)(0.041) + (20)(0.093) = 2.370$$
$$L_{12} = 3.045, \quad \ldots, \quad L_{24} = 1.560 \tag{2}$$
Table 1
Example of voltage drop at welding joint (mV)

Noise factors: I, contamination of electrode (I1: contaminated; I2: not contaminated); J, measurement position (J1: edge 1; J2: center 1; J3: center 2; J4: edge 2).

           M1 (5 A)   M2 (10 A)   M3 (20 A)   Linear Equation
I1   J1    0.020      0.041       0.093       L11
     J2    0.029      0.058       0.116       L12
     J3    0.024      0.050       0.105       L13
     J4    0.018      0.037       0.070       L14
I2   J1    0.020      0.040       0.086       L21
     J2    0.027      0.055       0.108       L22
     J3    0.024      0.049       0.101       L23
     J4    0.016      0.032       0.058       L24
Effective divider:
$$r = 5^2 + 10^2 + 20^2 = 525 \tag{3}$$

Variation of proportional term:
$$S_\beta = \frac{(L_{11} + L_{12} + \cdots + L_{24})^2}{(8)(525)} = 0.088229 \quad (f = 1) \tag{4}$$

Variation due to electrode contamination:
$$S_I = \frac{(2.370 + \cdots + 1.860)^2 + (2.220 + \cdots + 1.560)^2}{(4)(525)} - 0.088229 = 0.000130 \quad (f = 1) \tag{5}$$
(4)
Table 2
Control factors and levels
Level
Control Factor
1
1
2
2
A:
error column
B:
electrode material
B1
B2
B2
C:
electrode diameter
Small
Large
Large
D:
upslope time
Short
Mid
Long
E:
Short
Mid
Long
F:
welding current
Small
Mid
Large
G:
applied pressure
Small
Mid
Large
H: downslope time
Short
Mid
Long
Figure 2
Response graphs

Table 3
Results of confirmatory experiment (dB)

               SN Ratio                      Sensitivity
Configuration  Estimation   Confirmation    Estimation   Confirmation
Worst          -16.02       -13.22          -46.27       -47.05
Optimal        3.68         1.95            -45.83       -45.68
Gain           19.70        15.17           0.44         1.37

Figure 3
Characteristics of current and voltage in confirmatory experiment
Variation due to measurement position:
$$S_J = \frac{(L_{11} + L_{21})^2 + \cdots + (L_{14} + L_{24})^2}{(2)(525)} - S_\beta = 0.003275 \quad (f = 3) \tag{6}$$

Error variation:
$$S_e = 0.091721 - 0.088229 - 0.000130 - 0.003275 = 0.000087 \quad (f = 19) \tag{7}$$

Error variance:
$$V_e = \frac{0.000087}{19} = 0.000005 \tag{8}$$

Total noise variance:
$$V_N = \frac{S_I + S_J + S_e}{23} = 0.000152 \tag{9}$$

SN ratio:
$$\eta = 10 \log \frac{[1/(8)(525)](0.088229 - 0.000005)}{0.000152} = -8.59 \text{ dB} \tag{10}$$

Sensitivity:
$$S = 10 \log \frac{1}{(8)(525)}(0.088229 - 0.000005) = -46.78 \text{ dB} \tag{11}$$
By observing cross sections of the joints, we confirmed that all of them have good joint conditions. Additionally, a normal strength test under the optimal configuration, used as a double check, revealed rupture at the lead, which indicates sufficient strength at the joint. Although this evaluation method is not easy to implement in terms of measurement, it is regarded as applicable to other types of joints.

Reference
Takakichi Tochibora, 1999. Optimization of resistance welding conditions for electronic components. Quality Engineering, Vol. 7, No. 3, pp. 52–58.

This case study is contributed by Takakichi Tochibora.
CASE 33
Abstract: In this study we performed experiments based on the generic function of tile manufacture, forming a tile with evenly distributed density, with the intention of saving material in order to avoid waste. As the signal factor and output, we selected weight in the air and weight in the water, and measured the proportionality of the underwater weight to the in-the-air weight.
1. Generic Function

We filled a press die evenly with material; that is, we formed a tile with evenly distributed density. Then, as the signal factor and output, we selected the weight in the air and the weight in the water, respectively, and measured the proportionality of the underwater weight to the in-the-air weight. To measure the underwater weight, we used underwater weighing. Although it was not easy to calculate the underwater weight of an absorptive tile, we were able to do so through our technical know-how.

In actuality, we split a formed tile into several pieces, as shown in Figure 1a, and defined y = βM as the ideal relationship between the in-the-air weight, M, and the underwater weight, y, according to Figure 1b. In addition, since measuring the geometric dimensions of a formed tile and judging its appearance are relatively easy, for the purpose of fundamental research we also checked the transformability of its shape and its appearance. As illustrated in Figure 2, setting the dimensions of a die as the signal factor, M, and each corresponding dimension of a formed tile as y, we defined y = βM as the ideal relationship. A caliper was used for measurement.

On the other hand, since a tile that is not easily separated from the base plate after being baked tends to have a poor surface, we performed a smaller-the-better analysis by setting the value of an easily separated tile to 0, that of a not-easily-separated tile to 100, and that of a tile in between to 50.

For noise factors, we selected the following four: N, N*, N**, and N*** (see Table 1).
The total variation S_T (1) and the sixteen linear equations L_1, ..., L_16 (3) are computed from the data in Table 1, one linear equation for each noise combination.

Effective divider (for the first noise combination):
$$r_1 = 12.91^2 + 6.87^2 + 3.81^2 + 1.78^2 + 1.73^2 = 234.5424 \tag{2}$$

Variation of proportional term:
$$S_\beta = \frac{(L_1 + L_2 + \cdots + L_{16})^2}{r_1 + r_2 + \cdots + r_{16}} = 459.9931 \quad (f = 1) \tag{4}$$
Figure 1
Proportionality of divided tile

Figure 2
Measurement of tile dimensions
Table 1
Data examples of in-the-air and underwater weights (experiment 1). Each row is one combination of the noise factors N, N*, N**, and N***; "air" is the in-the-air weight and "water" the underwater weight.

       M1              M2             M3             M4             M5
Row    air     water   air    water   air    water   air    water   air    water
1      12.91   5.40    6.87   2.81    3.81   1.59    1.78   0.76    1.73   0.72
2      10.61   4.40    4.58   1.87    2.72   1.13    1.48   0.62    1.25   0.58
3      12.85   5.86    6.62   3.00    3.23   1.44    2.04   0.92    1.53   0.70
4      11.06   4.98    6.74   3.08    3.52   1.59    1.56   0.72    1.29   0.60
5      11.92   3.65    7.15   2.12    3.89   1.15    2.20   0.69    1.36   0.56
6      10.24   3.04    5.43   1.56    3.05   0.92    1.80   0.48    1.13   0.33
7              3.69           2.13           0.97           0.68           0.44
8      10.64   3.70    5.69   1.93    2.58   0.99    1.56   0.67    1.01   0.38
9      12.17   4.33    6.88   2.39    3.27   1.19    1.85   0.70    1.60   0.62
10     9.98    3.73    5.24   1.88    2.43   0.92    1.45   0.51    1.30   0.48
Table 2
Control factors and levels

Control Factor        Level 1   Level 2   Level 3
A: pulverizing time   Long      Short
B: pressing pressure  Small     Mid       Large
C: clay 1 (%)         10        15
D: waste 1 (%)
E: waste 2 (%)
F: waste 3 (%)        10        15        20
G: waste 4 (%)        10        15        20
H: clay 2 (%)

Variation of sensitivity due to each noise factor:
$$S_N = \frac{(L_1 + \cdots + L_8)^2}{r_1 + \cdots + r_8} + \frac{(L_9 + \cdots + L_{16})^2}{r_9 + \cdots + r_{16}} - S_\beta = 2.1038 \quad (f = 1) \tag{5}$$

$$S_{N^*} = \frac{(L_1 + L_2 + L_3 + L_4 + L_9 + L_{10} + L_{11} + L_{12})^2}{r_1 + r_2 + r_3 + r_4 + r_9 + r_{10} + r_{11} + r_{12}} + \frac{(L_5 + \cdots + L_8 + L_{13} + \cdots + L_{16})^2}{r_5 + \cdots + r_8 + r_{13} + \cdots + r_{16}} - S_\beta = 6.8997 \quad (f = 1) \tag{6}$$

Similarly,
$$S_{N^{**}} = 1.6786 \quad (f = 1) \tag{7}$$
$$S_{N^{***}} = 0.0035 \quad (f = 1) \tag{8}$$
Figure 3
Response graphs of in-the-air versus underwater weights

Figure 4
Response graphs of shape transformability

Figure 5
Response graph of separability
Table 3
Optimal and compared configurations (factor levels)

Characteristic                         Selected levels
In-the-air vs. underwater weight       (1) (2) (2) (2)
Transformability of shape              (1) (2) (2) (2)
Appearance                             (1) (3)
Optimal configuration                  (1) (3) (3) (2)
Compared configuration                 (2) (2) (3) (1)

Error variation:
$$S_e = S_T - S_\beta - S_N - S_{N^*} - S_{N^{**}} - S_{N^{***}} = 0.2641 \quad (f = 75) \tag{9}$$

Error variance:
$$V_e = \frac{S_e}{75} = 0.0035 \tag{10}$$

Total noise variance:
$$V_N = 0.1122 \tag{11}$$

SN ratio:
$$\eta = 10 \log \frac{(1/r)(S_\beta - V_e)}{V_N} = 10 \log 1.3312 = 1.24 \text{ dB} \tag{12}$$

Sensitivity:
$$S = 10 \log \frac{1}{r}(S_\beta - V_e) = 10 \log 0.1493 = -8.26 \text{ dB} \tag{13}$$
Table 4
SN ratio and sensitivity for in-the-air versus underwater weights (dB)

               SN Ratio                     Sensitivity
Configuration  Estimation   Confirmation   Estimation   Confirmation
Optimal        8.83         8.63           7.21         6.87
Compared       2.17         4.48           6.40         6.57
Gain           6.66         4.15
Table 5
SN ratio and sensitivity for transformability of shape (dB)

               SN Ratio                     Sensitivity
Configuration  Estimation   Confirmation   Estimation   Confirmation
Optimal        -1.58        -2.26          0.32         0.33
Compared       -7.66        -7.44          0.84         0.96
Gain           6.08         5.18
Reference
Toshichika Hirano, Kenichi Mizuno, Kouji Sekiguchi, and Takayoshi Matsunaga, 2000. Make tile of industry rejection by quality engineering. Quality Engineering, Vol. 8, No. 1, pp. 31–37.
CASE 34
1. Introduction
An imaging method in electrophotography used for
photocopy machines or printers is to develop an
electrostatic latent image that is transferred onto
photosensitive material to obtain an objective image
using charged toner particles. Therefore, in designing electrophotographic developer to reproduce an
electrostatic latent image accurately, control of the
amount of charge of toner particles is a major technical problem. To solve this, a measuring technology for a charging function focusing on individual
toner particles, which can be used to design a charging function for toner particles, needs to be established. Since we have realized the importance of
developing a measuring technology through this
research, we developed the technology ahead of
designing functions for the developer and the
developing device.
Here, as a typical example, we describe two-ingredient developer. This is developer that supplies a target amount of charge to a toner particle by mixing a charged particle (called a carrier particle) with a toner particle and charging the toner by friction. Figure 1 shows an outline of electrophotography. A toner particle is required to have a function, particularly in the developing area. This is a process that uses a toner's characteristic of being charged.

In the developing area, a toner particle, the particle to be developed (Figure 2), is transferred onto photosensitive material and developed in accordance with the strength of an electric field formed by imaging information between a developing sleeve and the photosensitive material. That is, development is a system of controlling toner particles by the strength of an electric field. Since each toner particle is expected to be transferred to a target position and developed according to the strength of the electric field, a constant amount of charge in a toner particle is considered ideal. In other words, the ideal function of a toner particle is that each toner particle has an identical amount of charge. To proceed with the functional design of a toner particle, we need to measure the amount of charge of each toner particle and to evaluate its uniformity. In the following sections we discuss optimization of a measuring system.
Figure 1
Electrophotography process
In the electric field, each particle runs through with oscillation. By detecting this particle movement using a laser beam, we can simultaneously measure the amount of charge and the diameter of each particle.

In general, the generic function of a measuring system is regarded as a proportionality between the true value, M, and the measurement, y. However, since it is realistically difficult to prepare the true amount of charge as a signal factor, we need to arrange an alternative signal factor for the true value.
Figure 2
Movement of toner particles in developing area
Figure 3
Measuring system for amount of charge
Total variation:
$$S_T = \sum y^2 \quad (f = 54) \tag{1}$$

Linear equations:
$$L_1 = (7750)(216.43) + (13{,}269)(385.09) + (18{,}957)(498.32) = 16{,}233{,}744$$
$$L_1' = (8440)(95.41) + (12{,}522)(135.99) + (17{,}875)(173.65) = 5{,}612{,}121$$
$$\vdots$$
$$L_9' = (7996)(65.42) + (12{,}243)(112.96) + (16{,}032)(148.09) = 4{,}280{,}246 \tag{2}$$

Effective divider:
$$r_1 = 7750^2 + 13{,}269^2 + 18{,}957^2 = 595{,}496{,}710 \tag{3}$$

Variation of proportional term:
$$S_\beta = \frac{(L_1 + L_1' + \cdots + L_9')^2}{r_1 + r_1' + \cdots + r_9'} = 2{,}269{,}031 \quad (f = 1) \tag{4}$$

Variation between the two charge-amount levels, M*:
$$S_{M^*} = \frac{(L_1 + \cdots + L_9)^2}{r_1 + \cdots + r_9} + \frac{(L_1' + \cdots + L_9')^2}{r_1' + \cdots + r_9'} - S_\beta = 582{,}097 \quad (f = 1) \tag{5}$$

Variation between rows:
$$S_{\text{row}(M^*)} = \frac{L_1^2}{r_1} + \frac{L_1'^2}{r_1'} + \cdots + \frac{L_9'^2}{r_9'} - S_\beta - S_{M^*} = 31{,}846 \quad (f = 16) \tag{6}$$

Error variation:
$$S_e = S_T - S_\beta - S_{M^*} - S_{\text{row}(M^*)} = 13{,}518 \quad (f = 36) \tag{7}$$

Error variance:
$$V_e = \frac{S_e}{36} = 376 \tag{8}$$

SN ratio:
$$\eta_{A_1} = 10 \log \frac{[1/(r_1 + r_1' + \cdots + r_9')](S_\beta - V_e)}{V_e} = -61.2 \text{ dB} \tag{9}$$

Sensitivity:
$$S_{A_1} = 10 \log \frac{1}{r_1 + r_1' + \cdots + r_9'}(S_\beta - V_e) = -35.5 \text{ dB} \tag{10}$$
Table 1
Data in each experiment under control factor A1 (M, µm²; y, fC)

No. 1, M*1 (large charge amount):
  M: 7,750; 13,269; 18,957      y: 216.43; 385.09; 498.32     r1 = 595,496,710    L1 = 16,233,744
No. 1, M*2 (small charge amount):
  M: 8,440; 12,522; 17,875      y: 95.41; 135.99; 173.65      r1' = 547,549,709   L1' = 5,612,121
  ...
No. 9, M*1 (large charge amount):
  M: 5,387; 12,337; 18,259      y: 140.38; 301.42; 455.37     r9 = 363,931,139    L9 = 9,442,478
No. 9, M*2 (small charge amount):
  M: 7,996; 12,243; 16,032      y: 65.42; 112.96; 148.09      r9' = 470,852,089   L9' = 4,280,248
Table 2
Control factors and levels

Control Factor                 Level 1    Level 2*      Level 3
A:                             Standard   Standard + 5
B:                             9          12
C: air-supplying condition 1   Standard   Standard + 5
D: air-supplying condition 2   0.2        0.3           0.4
E: air-supplying condition 3
F:                             20         35
G: detector condition 1
H: detector condition 2        10         15

* Current level.
880
Case 34
Figure 4
Response graphs
Table 3
Confirmation of SN ratio and sensitivity (dB)

Configuration         SN Ratio   Sensitivity
Current               -57.7      -36.5
Optimal (estimated)   -46.9      -36.1
Optimal (confirmed)   -48.7      -37.7
Gain (estimated)      10.8       0.4
Gain (confirmed)      9.0        -1.2
Applying the loss function, with A0 = 100,000,000 yen and Δ0 = 75:

$$k = \frac{A_0}{\Delta_0^2} = \frac{100{,}000{,}000}{75^2} = 17{,}778 \text{ yen} \tag{12}$$

$$\sigma^2_{\text{current}} = 0.8 \tag{13}$$

$$\frac{\sigma^2_{\text{optimal}}}{\sigma^2_{\text{current}}} = 0.0956 \tag{14}$$

The cost benefit per lot is

$$\Delta L = L_{\text{current}} - L_{\text{optimal}} = (17{,}778)(0.8)(1 - 0.0956) = 12{,}863 \text{ yen/lot} \tag{15}$$

Now, setting the annual production volume of developer to 500 lots, the annual cost benefit is

$$(12{,}863 \text{ yen/lot})(500 \text{ lots/year}) \approx 6{,}430{,}000 \text{ yen/year} \tag{16}$$
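The arithmetic of equations (12) to (16) is a direct application of the quadratic loss function L = k·σ². A sketch, where A0, Δ0, and the variance figures are the values given in the text:

```python
# Quadratic loss function: k = A0 / Delta0^2
A0 = 100_000_000          # yen, loss at the functional limit (from the text)
Delta0 = 75               # functional limit (from the text)
k = A0 / Delta0**2        # loss coefficient, ~17,778 yen

var_current = 0.8         # error variance at the current configuration
var_ratio = 0.0956        # sigma^2(optimal) / sigma^2(current)

# Saving per lot is the difference between current and optimal losses
saving_per_lot = k * var_current * (1 - var_ratio)

lots_per_year = 500
annual_saving = saving_per_lot * lots_per_year
```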
Reference
Kishio Tamura and Hiroyuki Takagiwa, 1999. Optimization of a charge measuring system for electrophotographic toner focused on each individual particle. Quality Engineering, Vol. 7, No. 5, pp. 47–54.
CASE 35
Abstract: Clear vision (CV) is the perceived angle of the steering wheel while a vehicle is being driven in a straight line. Since the process of measuring CV introduces measurement variability, and to ensure that the setting process is stable, the CV angle is audited on several vehicles during each shift, and corrective action is taken if the CV average is outside SPC limits. A Ford gauge repeatability study indicated that gauge measurement error was approximately 100% of the tolerance of the CV specification. Due to the large measurement error, corrective actions were taken too often, not often enough, or incorrectly. This study used methods developed by Taguchi to reduce the gauge error by developing a process robust to the noise factors that are present during the measurement process.
1. Introduction
Clear vision (CV) is the perceived angle of the steering wheel while a vehicle is being driven in a straight line (Figure 1). The perception of this angle is influenced strongly by the design of the steering wheel hub as well as by the design of the instrument panel. Clear vision for each vehicle is set in conjunction with the vehicle's alignment in the assembly plant at a near-zero setting determined by a customer acceptance study. Since the process of measuring CV introduces measurement variability, to ensure that the setting process is stable the CV angle is audited on several vehicles during each shift using the following audit process:
1. A CV audit tool is mounted to the steering wheel. The tool measures the inclination (i.e., rotation) angle of the wheel.

2. The vehicle is driven along a straight path while the CV tool collects and stores data.

3. At the completion of the drive, the CV tool algorithm calculates the average CV angle, and corrective action is taken if the CV average is outside SPC limits.
Currently, CV is audited by driving a vehicle along a straight path (Figure 2).

Quality Concerns
Historical gauge repeatability and reproducibility metrics indicated that gauge measurement error was approximately 100% of the tolerance of the CV specification. Due to the large measurement error in the facility, corrective actions were taken too often, not often enough, or incorrectly.

Measurement studies conducted in two different facilities indicated errors of 95 and 108%, respectively. These errors indicated that the facilities were not capable of accurately tracking their own performance in setting clear vision. Most important, without the ability to track its own progress accurately, a facility was unable to respond properly to CV setting problems. Improving the robustness of the measurement system would reduce the variability of the audit measurements and allow the facilities to monitor and correct the setting process more accurately when required.
883
Positive
Clear
Vision
Clear Vision = 0
Negative
Clear
Vision
Figure 1
Clear vision: drivers view
2. Parameter Design

Ideal Function

The objective of this study was to reduce the measurement error of the CV tool from 100% of the clear vision tolerance. The ideal function (Figure 3) relates the measured CV angle to the actual CV angle and is reference-point proportional (Figure 4).

Figure 2
Testing diagram

Figure 3
Ideal function

Figure 4
Measured CV angle versus actual CV angle: reference-point proportional

Noise Strategy

As in every design, noise factors can induce a relatively large amount of variability and cannot be controlled easily. The noise strategy selected for this study included driver skill and driver weight. These factors were compounded as N1 = skilled, lightweight driver and N2 = unskilled, heavyweight driver.

Experimental Layout

The experimental layout was completed in two phases. The first phase was the execution of parameter design to determine the best combination of control factors to reduce the measurement system variability. An L18 orthogonal array (Figure 5) was used to define the control factor configuration for each of the 18 tests. The experiment was conducted at two signal levels, M1 and M2. The control factors and levels are given in Table 1.

Figure 5
Experimental layout
Table 1
Control factors and levels

Factor   Level 1     Level 2      Level 3
A        Smooth      Rough
B        Line        Diam. sign   Goalpost
C        Loop        Modified     Modified
D        50 ft       150 ft       300 ft
E        10 mph      20 mph       30 mph
F        BHND WHL    FRT WHL      BHND WHL
G        Bottom      Center       Top
H        Current     New 1        New 2
Table 2
Raw data, SN ratio, and beta values (two repetitions at each noise level)

      M1                             M2
No.   N1     N1     N2     N2        N1     N1     N2     N2        SN      Beta
1     2.80   5.90   7.30   5.10      10.70  9.20   11.10  9.40      15.38   17.27
2     6.30   4.10   5.60   5.50      9.90   10.40  10.10  11.40     15.83   22.87
3     4.90   5.80   7.40   6.20      12.20  9.80   9.10   9.10      16.13   19.04
4     2.10   6.40   1.60   3.80      11.10  9.90   10.90  17.80     15.90   11.44
5     5.10   5.80   4.60   3.40      8.80   10.60  11.10  12.20     15.40   18.93
6     7.20   7.40   6.40   7.20      8.80   9.10   9.00   8.60      15.93   30.13
7     9.10   9.50   6.20   11.60     13.00  4.60   12.10  17.80     20.98   10.95
8     6.50   6.70   6.00   5.80      8.00   8.80   9.40   9.50      15.18   25.48
9     0.40   1.00   8.30   7.30      15.70  11.50  17.10  17.80     11.98   6.44
10    5.90   4.60   5.10   5.40      9.40   9.70   10.50  9.00      14.90   25.01
11    6.10   6.90   5.70   6.00      7.30   7.80   8.80   8.10      14.18   24.86
12    5.30   5.30   6.20   6.80      9.70   8.20   10.00  10.30     15.45   22.28
13    5.50   4.70   3.70   1.10      7.00   11.20  13.90  11.90     14.75   12.52
14    6.50   7.60   5.60   7.90      10.20  13.60  11.50  2.90      16.45   10.66
15    4.30   4.60   3.40   1.60      9.50   8.50   11.40  12.90     14.05   15.41
16    5.30   5.50   5.70   6.10      8.80   10.30  10.00  9.80      15.38   26.42
17    7.30   5.00   17.80  5.40      10.20  11.30  17.80  17.80     11.55   0.80
18    7.90   6.50   5.40   5.40      8.50   8.50   9.00   10.00     15.30   20.89
In the following equations, m is the number of signal levels, r0 the number of repetitions at each noise level, and n the number of noise levels.

$$M_i' = M_i - M_{\text{ref}}, \quad i = 1, 2, \ldots, m$$

$$Y_{ij} = \sum_{k=1}^{r_0} (y_{ijk} - Y_{\text{ref}}), \quad i = 1, \ldots, m; \; j = 1, \ldots, n$$

$$r = n r_0 \sum_{i=1}^{m} M_i'^2$$

$$L_j = \sum_{i=1}^{m} M_i' Y_{ij}, \quad j = 1, 2, \ldots, n$$

$$S_T = \sum_{k=1}^{r_0} \sum_{j=1}^{n} \sum_{i=1}^{m} (y_{ijk} - Y_{\text{ref}})^2 \quad (f_T = m n r_0)$$

$$S_\beta = \frac{\left(\sum_{j=1}^{n} L_j\right)^2}{r} \quad (f = 1)$$

$$S_N = \sum_{j=1}^{n} \frac{L_j^2}{r/n} - S_\beta \quad (f_N = n - 1)$$

$$S_e = S_T - S_N - S_\beta \quad [f_e = n(m r_0 - 1)]$$

$$V_e = \frac{S_e}{f_e}, \qquad V_N = \frac{S_e + S_N}{f_e + f_N}$$

$$\text{SN} = 10 \log_{10} \frac{(1/r)(S_\beta - V_e)}{V_N}, \qquad \hat\beta = \sqrt{\frac{1}{r}(S_\beta - V_e)}$$

Figure 6
SN response plot

Sample Calculations

We have m = 2, n = 2, r0 = 2, and Mref = M1.

$$M_1' = M_1 - M_1 = 0, \qquad M_2' = M_2 - M_1 = 15$$

$$Y_{\text{ref}} = \text{average response at } M_1 = \tfrac{1}{4}[(2.8) + (5.9) + (7.3) + (5.1)] = 5.275$$

$$Y_{11} = (2.8 - 5.275) + (5.9 - 5.275) = -1.85$$
$$Y_{12} = (7.30 - 5.275) + (5.10 - 5.275) = 1.85$$
$$Y_{21} = (10.70 - 5.275) + (9.20 - 5.275) = 9.35$$

$$S_\beta = \frac{(456.75 + 465.75)^2}{900} = 945.5625$$

$$S_N = \frac{(456.75)^2 + (465.75)^2}{900/2} - 945.5625 = 0.09 \quad (f_N = 1)$$

$$V_e = \frac{13.2175}{6} = 2.202916667$$

$$V_N = \frac{13.2175 + 0.09}{7} = 1.901071429$$

$$\hat\beta = \sqrt{\tfrac{1}{900}(945.5625 - 2.202916667)} = 1.023805311$$

$$\text{SN} = 10 \log_{10} \frac{(1.023805311)^2}{1.901071429} = -2.59 \text{ dB}$$
Appendix

The original analysis set the value for M1 at -1 and M2 at +1. See Figures 1 to 5 and Tables 1 and 2. Note that C2 and C3 had similar levels and were grouped in the analysis. The same is true for F1 and F3.

Figure 7
Beta response plot
$$\text{SN} = 10 \log_{10} \frac{\tfrac{1}{4}(S_\beta - V_e)}{V_N}$$

$$Y_1 = \sum_{i=1}^{4} y_i, \qquad Y_2 = \sum_{i=5}^{8} y_i$$

$$V_N = \frac{S_N}{f_N}$$
Table 3
Values predicted and confirmed

                          Std. Dev. Relative to Spec. (%)
Configuration   SN        Predicted    Confirmed
Baseline        -16.42    100          100
Optimum         -6.06     30           39
Gain            10.36     70           61

Table A1
Values predicted and confirmed (at one signal level only)

                          Std. Dev. Relative to Spec. (%)
Configuration   SN        Predicted    Confirmed
Baseline        24.19     100          100
Optimum         34.54     30           39
Gain            10.35     70           61
$$T^2 = (Y_1 - Y_2)^2$$

$$S_T = \sum_{i=1}^{8} y_i^2 - \frac{T^2}{8}$$

$$S_\beta = \frac{Y_1^2}{4} + \frac{Y_2^2}{4} - \frac{T^2}{8}$$

$$S_N = S_T - S_\beta$$

Sample Calculations

$$Y_1 = 2.8 + 5.9 + 7.3 + 5.1 = 21.1$$
$$Y_2 = 10.7 + 9.2 + 11.1 + 9.4 = 40.4$$
$$T^2 = (21.1 - 40.4)^2 = 372.49$$
$$S_T = [2.8^2 + 5.9^2 + 7.3^2 + 5.1^2 + 10.7^2 + 9.2^2 + 11.1^2 + 9.4^2] - \frac{372.49}{8} = 486.09$$
$$S_\beta = \frac{(21.1)^2}{4} + \frac{(40.4)^2}{4} - \frac{372.49}{8} = 472.78$$
$$S_N = 486.09 - 472.78 = 13.31, \qquad V_N = \frac{13.31}{6} = 2.22$$
$$\text{S/N} = 10 \log_{10} \frac{\tfrac{1}{4}(472.78 - 2.45)}{2.22} = 17.24$$
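The appendix sample calculation can be replayed for run 1 of Table 2. A sketch, assuming the simplified formulas above with signals M1 = -1 and M2 = +1:

```python
y_M1 = [2.8, 5.9, 7.3, 5.1]       # run 1 responses at M1 (2 reps per noise level)
y_M2 = [10.7, 9.2, 11.1, 9.4]     # run 1 responses at M2

Y1, Y2 = sum(y_M1), sum(y_M2)
Yref = Y1 / 4                     # average response at M1 (reference point)
T2 = (Y1 - Y2) ** 2
ST = sum(y * y for y in y_M1 + y_M2) - T2 / 8   # total variation
S_beta = Y1**2 / 4 + Y2**2 / 4 - T2 / 8          # proportional term
Se = ST - S_beta                                 # noise/error variation
Ve = Se / 6                                      # error variance
```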
CASE 36
1. Introduction
Electronic parts installed on resin boards (e.g., motherboards) need shape stability, flatness, or heat radiation, depending on how they will be used. To this end, we need to fix a resin plate onto a more rigid plate made of a material that has good heat conductivity. Therefore, a product combining a resin board and a copper plate is used. As a typical method of evaluating good adhesion, we have conventionally used the peel test to judge good or poor adhesion. This method is still regarded as the most common way to indicate adhesion conditions per se and as the easiest to use to control an adhesion process.

In the meantime, other methods exist, such as observation of the occurrence of bubbles displayed on a monitor by an ultrasonic detection technique. However, since they are not easy to quantify, we use these methods primarily for inspection. The adhesion structure studied in our research is shown in Figure 1. Using adhesive, we glued a copper plate onto a resin board that had a metal pattern on the surface. The most critical issue was that a conventional adhesion strength test could not be used because the plastic board used is a thin film. In the peel test, after adhesion the film tends to break down when either the resin board or the copper plate is peeled off.
2. Generic Function
Taking into consideration the function of a copper plate as a heat radiator, we would usually evaluate the relationship between electricity consumption and temperature on the copper side. However, since errors are often generated due to the attachment of the copper to a plastic board, in this research we focused on stable adhesion quality and defined a capacitance characteristic of the adhesion area as the generic function.
Looking at the fact that the copper plate and metal pattern shown in Figure 1 clamp the adhesive and plastic board, we note that this structure forms a capacitor. Although we attempted to assess Q = CV as a generic function, we could not prepare a proper measuring instrument for this. Therefore, we took advantage of the capacitance characteristic C = εS/d (where C is the capacitance, ε the permittivity, S the surface area of an electrode, and d the distance between electrodes; Figure 2). We regarded a small change in capacitance (a high SN ratio) as a good condition both before and after an accelerated test that places high-temperature stress on the sample.
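The parallel-plate relation C = εS/d implies that, for a fixed adhesive thickness d and permittivity ε, capacitance is proportional to the electrode area S, which is why the metal pattern area can serve as the signal factor. A minimal sketch (the ε and d values below are illustrative, not taken from the study):

```python
# Parallel-plate capacitance: C = eps * S / d
# (C in pF when S is in mm^2, d in mm, and eps in pF/mm).
def capacitance(S_mm2, d_mm, eps_pF_per_mm):
    return eps_pF_per_mm * S_mm2 / d_mm

# Illustrative values (NOT from the case study): relative permittivity 4,
# vacuum permittivity ~8.854e-3 pF/mm, adhesive thickness 0.05 mm.
eps = 4 * 8.854e-3
areas = [69.25, 141.50, 213.75]   # the study's signal-factor areas (mm^2)
caps = [capacitance(S, 0.05, eps) for S in areas]

# Capacitance scales linearly with electrode area: doubling S doubles C.
assert abs(capacitance(2 * 69.25, 0.05, eps) - 2 * caps[0]) < 1e-9
```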
Figure 1
Joint area of resin board and copper plate (copper plate bonded by adhesive onto the plastic board carrying the metal pattern)
3. SN Ratio

Noise Factor
High-temperature stress before and after the accelerated test was selected to set the noise factor levels:
N1: shortly after adhesion (after the adhesive solidifies)
N2: after the accelerated test (three cycles of water absorption and reflow at a temperature of 245°C)

Signal Factor
The metal pattern area was used as the signal factor (Figure 3): M1 = 69.25, M2 = 141.50, M3 = 213.75 (mm²).

Figure 2
Generic function of resin board and copper plate

Figure 3
Signal factor for adhesion (metal pattern area)

Table 1 shows the calculations for the SN ratio and sensitivity.

Total variation:
ST = 44.14² + 40.33² + ⋯ + 103.88² = 42,972.83   (f = 6)   (1)

Effective divider:
r = 69.25² + 141.50² + 213.75² = 70,506.88   (2)

Linear equations:
L1 = (69.25)(44.14) + (141.50)(86.63) + (213.75)(128.51) = 42,783.85   (3)
L2 = (69.25)(40.33) + (141.50)(67.73) + (213.75)(103.88) = 34,581.00

Variation of proportional term:
Sβ = (L1 + L2)²/2r = 42,444.94   (f = 1)   (4)
Table 1
Capacitance data (pF)

        M1 (69.25)        M2 (141.50)       M3 (213.75)
No.     N1       N2       N1       N2       N1        N2
1       44.14    40.33    86.63    67.73    128.51    103.88
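The chain of calculations in equations (1) to (10) can be reproduced directly from the Table 1 data. A sketch in Python (written for this excerpt, not part of the original case study):

```python
import math

M = [69.25, 141.50, 213.75]    # signal: metal pattern areas (mm^2)
N1 = [44.14, 86.63, 128.51]    # capacitance (pF) before the accelerated test
N2 = [40.33, 67.73, 103.88]    # capacitance (pF) after the accelerated test

ST = sum(y * y for y in N1 + N2)            # total variation, f = 6
r = sum(m * m for m in M)                   # effective divider
L1 = sum(m * y for m, y in zip(M, N1))      # linear equations
L2 = sum(m * y for m, y in zip(M, N2))
S_beta = (L1 + L2) ** 2 / (2 * r)           # variation of proportional term
S_N = (L1 - L2) ** 2 / (2 * r)              # variation between noise levels
Se = ST - S_beta - S_N                      # error variation, f = 4
Ve = Se / 4                                 # error variance
VN = (S_N + Se) / 5                         # total error variance
eta = 10 * math.log10(((S_beta - Ve) / (2 * r)) / VN)   # SN ratio (dB)
S = 10 * math.log10((S_beta - Ve) / (2 * r))            # sensitivity (dB)
```

This reproduces the printed values: ST ≈ 42,972.83, r ≈ 70,506.88, η ≈ −25.45 dB, and S ≈ −5.22 dB.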
Table 2
Control factors and levels

Factor                    Level 1                             Level 2                             Level 3
A:                        Normal temperature and humidity     High temperature and humidity
B:                        Dry                                 High temperature and humidity       Normal temperature and humidity
C: adhesion load          Small                               Mid                                 Large
D: adhesion temperature   Low                                 Mid                                 High
E: adhesion time          Short                               Mid                                 Long
F: leave-as-is time       Short                               Mid                                 Long
Table 3
Confirmatory results (dB)

                SN Ratio                      Sensitivity
Configuration   Estimation   Confirmation     Estimation   Confirmation
Optimal         -11.97       -13.74           -3.89        -4.50
Current         -19.72       -19.17           -4.79        -4.66
Gain              7.75         5.43            0.89         0.16
Variation of differences of proportional terms:
SN = (L1 − L2)²/2r = 477.165   (f = 1)   (5)

Error variation:
Se = ST − Sβ − SN = 50.72878   (f = 4)   (6)

Error variance:
Ve = Se/4 = 12.6822   (7)

Total error variance:
VN = (SN + Se)/5 = 105.5788   (8)

SN ratio:
Figure 4
Response graphs
η = 10 log [(1/2r)(Sβ − Ve)]/VN = −25.4514 dB   (9)

Sensitivity:
S = 10 log (1/2r)(Sβ − Ve) = −5.2156 dB   (10)

σ²optimal = 10^(−ηoptimal/10) = 6.426   (11)
σ²current = 10^(−ηcurrent/10) = 31.12   (12)

Loss function:
Loptimal = (A0/Δ0²)σ²optimal = 106.2   (13)
Lcurrent = (A0/Δ0²)σ²current = 514.4   (14)
Reference
Yugo Koyama, Kazunori Sakurai, and Atsushi Kato,
2000. Optimization of adhesion condition of resin
board and copper plate. Proceedings of the 8th Quality
Engineering Symposium, pp. 262-265.
CASE 37
Abstract: In optimizing manufacturing conditions for soldering, quality characteristics such as bridges or no-solder joints are traditionally measured for
evaluation. By using the characteristics of current and voltage as an ultimate
objective function of soldering, we optimized manufacturing conditions in the
automated soldering process.
1. Introduction
For optimization of manufacturing conditions in
the automated soldering process, it is important to
make the process robust not only to the products of
current-level density but also to those with high density, to arrange conditions for manufacturing various
types of motherboards without adjustment. The
process studied is outlined in Figure 1.
In optimizing manufacturing conditions for soldering, quality characteristics such as bridges or
no-solder joints are traditionally measured for
evaluation. Soldering is used to join a land pattern
printed on the board to a lead of an electronic part
(Figure 2). Defects called no-solder indicate that the
two parts are not joined completely; bridge is the
name given to the defect of unwanted joints.
In one printed board, some portions need considerable energy for soldering and others need less energy. This manufacturing process provides the same amount of energy to each portion. Our technical objective is to solder circuit boards evenly no matter how much energy is needed. In addition, ideally, the amount of current flowing in solder is proportional not only to its area but also to the corresponding voltage. Therefore, by using the characteristics of current and voltage as an ultimate objective function of soldering, we optimize manufacturing conditions in the automated soldering process.
Total variation:
ST = 18² + 16² + ⋯ + 178² = 241,073   (f = 24)   (1)

Linear equations:
L1 = (5)(15)(18) + (5)(50)(44) + ⋯ + (20)(62)(192) = 796,540   (2)
L2 = 737,770   (3)
Figure 1
Automated soldering process
Effective divider:
r = Σ(M*M)² = 4,926,750   (4)

Variation of proportional term:
Sβ = (L1 + L2)²/2r = 238,910.76   (f = 1)   (5)

Variation of differences of proportional terms:
SN = (L1² + L2²)/r − Sβ = 350.53   (fN = 1)   (6)

Error variation:
Se = ST − Sβ − SN = 1811.71   (fe = 22)   (7)

Error variance:
Ve = Se/fe = 1811.71/22 = 82.35   (8)

Total error variance:
VN = (SN + Se)/23 = (350.53 + 1811.71)/23 = 94.01   (9)

Figure 2
No-solder and bridge
Table 1
Signal and noise factors

Signal factors:
  Sectional area M* (10⁻¹ mm²):   15, 50, 62
  Voltage M (mV):                 5, 10, 15, 20

Noise factor:
  Energy demand:   N1 (much): narrow, many   N2 (little): wide, one
Table 2
Current data (10⁻² A)

Sectional Area:   M*1 (15)       M*2 (50)       M*3 (62)
Voltage           N1     N2      N1     N2      N1     N2
M1 (5)            18     16      44     42      61     55
M2 (10)           33     28      84     79      118    105
M3 (15)           44     38      116    112     163    145
M4 (20)           52     47      143    138     192    178
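Equations (1) to (11) can be checked against Table 2: the effective signal for each cell is the product of sectional area and voltage, M* × M. A Python sketch (ours, not the original authors'):

```python
import math

areas = [15, 50, 62]       # sectional area M* (10^-1 mm^2)
volts = [5, 10, 15, 20]    # voltage M (mV)
# Current data (10^-2 A); rows follow voltage levels, columns follow areas:
y_N1 = [[18, 44, 61], [33, 84, 118], [44, 116, 163], [52, 143, 192]]
y_N2 = [[16, 42, 55], [28, 79, 105], [38, 112, 145], [47, 138, 178]]

signals = [v * a for v in volts for a in areas]        # effective signal M* x M
ST = sum(y * y for row in y_N1 + y_N2 for y in row)    # total variation, f = 24
r = sum(s * s for s in signals)                        # effective divider
L1 = sum(v * a * y_N1[i][j] for i, v in enumerate(volts) for j, a in enumerate(areas))
L2 = sum(v * a * y_N2[i][j] for i, v in enumerate(volts) for j, a in enumerate(areas))
S_beta = (L1 + L2) ** 2 / (2 * r)                      # proportional term
S_N = (L1 ** 2 + L2 ** 2) / r - S_beta                 # noise-level difference
Se = ST - S_beta - S_N                                 # error variation, fe = 22
Ve = Se / 22
VN = (S_N + Se) / 23
eta = 10 * math.log10(((S_beta - Ve) / (2 * r)) / VN)  # SN ratio (dB)
sens = 10 * math.log10((S_beta - Ve) / (2 * r))        # sensitivity (dB)
```

This reproduces L1 = 796,540, L2 = 737,770, Sβ ≈ 238,910.76, η ≈ −35.89 dB, and S ≈ −16.16 dB.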
Table 3
Control factors and levels

Factor                        Level 1         Level 2        Level 3
A: amount of flux air flow    1.5             Current
B: specific gravity of flux   -0.01           Current        +0.01
C:                            Fully closed    Half closed    Fully open
D:                            Close           Mid            Far
E: wave height                Low             Mid            High
F: solder temperature         Low             Mid            High
G: preheating temperature     Low             Mid            High
H: conveyor speed             Slow            Mid            Fast
SN ratio:
η = 10 log [(1/2r)(Sβ − Ve)]/VN = −35.89 dB   (10)

Sensitivity:
S = 10 log (1/2r)(Sβ − Ve) = −16.16 dB   (11)
In other words, if we set the levels of these factors independently, we cannot provide adequate energy to each factor-level combination (Figure 3). Thus, as a result of studying levels along energy contours, we used a sliding-level method (Figure 4 and Table 4). According to the results in the L18 orthogonal array, we calculated an SN ratio and sensitivity for each experiment. Their response graphs are shown in Figure 5.
Table 5 shows the estimates of the SN ratio under the current and optimal configurations. The fact that the gains in the confirmatory experiment were consistent with those estimated demonstrates success in our experimental process. Next, we calculated the gains obtained by the loss function. Assuming that the resulting social loss when the function reaches its limit, A0, is 50,000 yen, we have
Figure 3
Energy balance by combination of solder temperature, conveyor speed, and preheating
temperature
Figure 4
Selection of values at factor levels for soldering by sliding-level method
Table 4
Relative factor levels for solder temperature

Solder         Conveyor Speed            Preheating Temperature
Temperature    Slow    Mid     Fast      Low    Mid    High
F1: low        0.63    0.9     1.125     90     120    160
F2: mid        0.7     1.0     1.25      70     100    130
F3: high       0.742   1.06    1.325     40     90     120
Figure 5
Response graphs
L0 = kσ0² = [50,000/(60.2)²](2951.9)(3000)   (12)
L1 = kσ1² = [50,000/(60.2)²](935.4)(3000)   (13)
Table 5
Results of confirmatory experiment

Configuration    Optimal    Current    Gain
Estimation       -30.70     -36.09     5.39
Confirmation     -29.71     -34.70     4.99

Reference
Shinichi Kazashi and Isamu Miyazaki, 1993. Process optimization of wave soldering by parameter design. Quality Engineering, Vol. 1, No. 3, pp. 22-32.
CASE 38
1. Introduction
A camshaft, one of the most vital parts of an automobile engine, is commonly made of cast iron. If there are casting defects on the machined surfaces of frictional areas such as a cam, functional deterioration of the engine takes place, thereby incurring quality assurance cost. To reduce this, we need to eliminate casting defects in essential formed areas of a camshaft.
A camshaft is cast by pouring cast iron melted at a high temperature, called molten iron, into the cavity of a die called a shell mold. This casting method and the flow of molten iron are shown in Figure 1. Molten iron poured from a pour basin is gradually charged from a gate located at the bottom into a hollow area for a camshaft. Since casting conditions such as the constituents or temperature of molten iron vary within a certain range under control, we need to design a casting method robust to variability in these conditions.
The Reynolds number, Re, expressed as an index, indicates the degree of fluid turbulence:

Re = VD/ν   (1)

where V is the flow velocity, D a representative length, and ν the kinematic viscosity.
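As a quick numerical check of this index (the symbols above are reconstructed from the standard definition of the Reynolds number; the study's own notation may differ slightly), the calculation is a single ratio:

```python
# Reynolds number: Re = V * D / nu (dimensionless).
# V: flow velocity (m/s), D: representative length (m),
# nu: kinematic viscosity (m^2/s).
def reynolds(V_m_per_s, D_m, nu_m2_per_s):
    return V_m_per_s * D_m / nu_m2_per_s

# Illustrative (hypothetical) values, not from the case study:
Re = reynolds(V_m_per_s=0.5, D_m=0.02, nu_m2_per_s=1e-6)
```

Larger Re indicates more turbulent charging flow, which is why runner and gate geometry appear among the control factors below.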
Figure 1
Method and ow of camshaft casting
Figure 2
Control factors
Total variation:
ST = 5749² + 5732² + ⋯ + 3879² = 545,917,217   (f = 20)   (2)

Variation of mean:
Sm = (5749 + 5732 + ⋯ + 3879)²/20 = 531,738,281   (f = 1)   (3)

Error variation:
Se = ST − Sm = 545,917,217 − 531,738,281 = 14,178,936   (f = 19)   (4)

Error variance:
Ve = Se/19 = 14,178,936/19 = 746,260   (5)
Table 1
Control and noise factors and levels

Control Factor               Level 1    Level 2    Level 3
A: choke                     Down       Std.*      Up
B: vertical runner           Down       Std.*      Up
C: swirl horizontal runner   Down       Std.       Up*
D: swirl entrance gate       Down       Std.       Up
E: swirl lower gate          Down       Std.*      Up
F: horizontal runner         Down       Std.*      Up
G: undergate                 Down       Std.*      Up
H: gate                      Down       Std.*      Up

Noise Factor                                Levels
I: temperature of molten iron when poured   Down    Std.*
J: filling rate of molten iron (%)          35      40      45      50      55
K: side of a runner                         Left    Right

* Initial level.
Table 2
Analysis results for casting of camshaft (experiment 1)

                               J1 (35%)   J2 (40%)   J3 (45%)   J4 (50%)   J5 (55%)
I1 (lower temperature):
  K1 (left)                    5749       5900       4722       4552       4070
  K2 (right)                   5732       5728       5484       4967       4712
I2 (higher temperature):
  K1 (left)                    6162       6127       6298       5138       5062
  K2 (right)                   6069       5278       4392       3104       3879
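Equations (2) to (7) pool all 20 observations from Table 2 into a nominal-the-best SN ratio. A sketch reproducing them (ours; the row grouping above is a reconstruction, but the SN ratio depends only on the pooled 20 values):

```python
import math

y = [5749, 5732, 5900, 5728, 4722, 5484, 4552, 4967, 4070, 4712,
     6162, 6069, 6127, 5278, 6298, 4392, 5138, 3104, 5062, 3879]

n = len(y)                        # 20 observations
ST = sum(v * v for v in y)        # total variation, f = 20
Sm = sum(y) ** 2 / n              # variation of the mean, f = 1
Se = ST - Sm                      # error variation, f = 19
Ve = Se / (n - 1)                 # error variance
eta = 10 * math.log10(((Sm - Ve) / n) / Ve)   # SN ratio (dB)
sens = 10 * math.log10((Sm - Ve) / n)         # sensitivity (dB)
```

This reproduces the printed values ST = 545,917,217, Sm ≈ 531,738,281, η ≈ 15.51 dB, and S ≈ 74.24 dB.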
SN ratio:
η = 10 log (1/20)(Sm − Ve)/Ve
  = 10 log [(531,738,281 − 746,260)/20]/746,260
  = 15.51 dB   (6)

Sensitivity:
S = 10 log (1/20)(Sm − Ve)
  = 10 log (1/20)(531,738,281 − 746,260)
  = 74.24 dB   (7)
Figure 3
Response graphs
Table 3
Results of confirmatory experiment (dB)

                SN Ratio                      Sensitivity
Configuration   Estimation   Confirmation     Estimation   Confirmation
Optimal         17.10        18.09            75.67        75.27
Current         14.52        12.95            75.73        75.78
Gain             2.58         5.15            -0.06        -0.51
Reference
Kazyuki Shiino and Yasuhiro Fukumoto, 2001. Optimization of casting conditions for camshafts. Quality Engineering, Vol. 9, No. 4, pp. 68-73.
CASE 39
Abstract: In this optimization case, our objective was to shorten the product development cycle drastically and reduce the operating cost by optimizing the stability of a photoresist shape through simulation, confirming the shape with an actual pattern. In particular, we optimized a process dealing with relatively thick (2.2 μm) photoresist on a metal film that greatly affects the shape of the photoresist when etched.
1. Introduction
In the photomask patterning process using exposure, a pattern of light (the width of a light shield) generated by a mask (light shield) is transferred onto photoresist. In the case of the positive photoresist used for this research, photoresist in the area that does not receive any light remains after development. In this case, the ideal situation is that a 1-μm-wide pattern in the design is transferred as 1 μm and a 0.5-μm-wide pattern as 0.5 μm.
Therefore, in this study, we regarded it as the generic function that a dimension of the transferred pattern is proportional to that of the mask. As the ideal input/output relationship, we considered the proportionality between a dimension of a mask, M, and the counterpart of a pattern, y (y = βM) (Figure 2).
A dimension of a mask pattern was chosen as the signal factor, M, with three values: 0.6, 0.8, and 1.0 μm. We defined as an ideal condition that the photoresist shape becomes rectangular and does not change at any position in the direction of the photoresist's thickness. Then, as a noise factor, we selected a dimension of photoresist, setting the dimension of the boundary between the photoresist and the metal film below to Lbot, the dimension of the photoresist at the top to Ltop, and that in the middle to Lmid (Figure 3). When the shape is to be controlled separately, it was analyzed as an indicative factor. Eight factors that affect dimension and shape were selected as control factors.

2. Simulation
Figure 1
Photoresist and etched shapes
Total variation:
ST = 0.325² + 0.568² + 0.783² + ⋯ + 1.041² = 4.670   (f = 9)   (1)

Effective divider:
r = 0.6² + 0.8² + 1.0² = 2.0   (2)

Linear equations:
L1 = (0.6)(0.325) + (0.8)(0.568) + (1.0)(0.783) = 1.432   (3)

Figure 2
Generic function

Figure 3
Dimensions checked
Table 1
Control factors and levels

Factor                    Level 1    Level 2    Level 3
A: photoresist type       Current    New
B:                        80         90         100
C:                        -0.05                 +0.05
D: numerical aperture     0.50       0.54       0.57
E: focus (μm)             -0.3                  +0.3
F:                        -10%       EOP        +10%
G:                        100        110        120
H: development time       60         90         120
L2 = 1.636
L3 = 2.135

Variation of proportional term:
Sβ = (L1 + L2 + L3)²/3r = (L1 + L2 + L3)²/[(3)(2.0)] = 4.513   (f = 1)   (4)

Variation of differences of proportional terms:
SN = (L1² + L2² + L3²)/r − Sβ = 0.131   (f = 2)   (5)

Error variation:
Se = ST − Sβ − SN = 4.670 − 4.513 − 0.131 = 0.026   (f = 6)   (6)

Error variance:
Ve = Se/6 = 0.026/6 = 0.00438   (7)

Total error variance:
VN = (ST − Sβ)/8 = (4.670 − 4.513)/8 = 0.0196   (9)

SN ratio:
η = (1/3r)(Sβ − Ve)/VN = [1/(3)(2.0)](4.513 − 0.00438)/0.0196 = 38.30 (15.83 dB)   (10)
Table 2
Data for one run in an L18 orthogonal array

Noise    0.6 μm    0.8 μm    1.0 μm    Linear Equation
N1       0.325     0.568     0.783     L1
N2       0.422     0.645     0.867     L2
N3       0.694     0.8477    1.041     L3

Sensitivity:
S = (1/3r)(Sβ − Ve) = [1/(3)(2.0)](4.513 − 0.00438) = 0.751 (−1.241 dB)   (11)
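The full analysis for this run follows from the Table 2 data alone. A sketch in Python (ours; the assignment of rows N1 to N3 to the positions Lbot, Lmid, and Ltop is assumed):

```python
import math

M = [0.6, 0.8, 1.0]                      # mask dimensions (signal, micrometers)
N = {"N1": [0.325, 0.568, 0.783],        # transferred dimensions per noise level
     "N2": [0.422, 0.645, 0.867],        # (position-in-thickness assignment assumed)
     "N3": [0.694, 0.8477, 1.041]}

r = sum(m * m for m in M)                                  # effective divider, 2.0
ST = sum(y * y for row in N.values() for y in row)         # total variation, f = 9
L = [sum(m * y for m, y in zip(M, row)) for row in N.values()]
S_beta = sum(L) ** 2 / (3 * r)                             # proportional term
S_N = sum(l * l for l in L) / r - S_beta                   # noise-level difference
Se = ST - S_beta - S_N                                     # error variation, f = 6
Ve = Se / 6
VN = (ST - S_beta) / 8                                     # total error variance
eta = 10 * math.log10((S_beta - Ve) / (3 * r) / VN)        # SN ratio (dB)
sens = 10 * math.log10((S_beta - Ve) / (3 * r))            # sensitivity (dB)
```

This reproduces L1 ≈ 1.432, η ≈ 15.83 dB, and S ≈ −1.24 dB.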
Figure 4
Response graphs using position in thickness direction as a noise factor
The SN ratio depends greatly on the type of photoresist, and we can improve the SN ratio by overexposing the photoresist to light under the conditions of a positive mask bias.
η = (1/3r)(Sβ − Ve)/Ve = [1/(3)(2.0)](4.513 − 0.00438)/0.00438 = 171.5 (22.34 dB)   (12)
Figure 5
Response graphs using position in thickness direction as indicative factor
S1 = L1²/r = 1.432²/2.0 = 1.026
S2 = L2²/r = 1.338
S3 = L3²/r = 2.279   (13)

Sensitivity:
S1 = (1/r)(S1 − Ve) = (1/2.0)(1.026 − 0.00438) = 0.511 (−2.92 dB)   (14)
Table 3
Confirmation of SN ratio and sensitivity (dB)

                SN Ratio                      Sensitivity
Configuration   Estimation   Confirmation     Estimation   Confirmation
Optimal         26.63        26.73            -0.19        -0.27
Current         18.76        19.37            -1.13        -1.10
Gain             7.87         7.36             0.94         0.83
Figure 6
Comparison of dimensions before and after optimization
Figure 7
Comparison of shapes of photoresist before and after optimization
Table 4
Reproducibility of shapes (dB)

              Optimal    Current    Gain
              29.59      16.80      12.79
Simulation    27.55      15.59      11.96
Reference
Isamu Namose and Fumiaki Ushiyama, 2000. Optimization of photoresist profile using simulation. Proceedings of the 8th Quality Engineering Symposium, pp. 302-305.
CASE 40
1. Introduction
In general, a manufacturing process to form a cylindrical container with a bottom from sheet metal is called a drawing process (Figure 1). In most cases, a deep-drawn product is rarely formed in a single process but is stamped to the required depth in multiple processes. To reduce the production cost for deep-drawn products, we minimize the number of processes through optimization of manufacturing conditions for deep drawing.
A drawing defect (i.e., a crack) is caused by partial strain concentration following uneven stretch of a material under an inappropriate forming condition. Ideal stamping is considered to be forming any shape with uniform stretch and no cracks in the material. In a deep-drawing process, we obtain the required depth through multiple drawing processes. In the processes following process 2, the rate of drawing is regarded to decrease, compared with that in process 1, under the influence of work hardening. Therefore, since we need to evaluate deep drawing in the processes after process 2, the number of drawings was selected as the signal factor. At each drawing, we measured material thickness and defined a logarithmized value of the ratio of the before- and after-drawing thickness as the output, y. That is, setting the number of drawings to a signal factor, M, and using the initial thickness, Y0, and after-drawing thickness, Y, we regarded a zero-point proportional equation of y and M as the generic function:

y = log(Y/Y0)   (1)
Total variation:
ST = Σy²   (f = 15)   (2)

Effective divider:
r = 1² + 2² + 3² = 14   (3)

Linear equations:
L1 = (1)(0.021) + (2)(0.029) + (3)(0.057) = 0.250
L2 = (1)(0.021) + (2)(0.026) + (3)(0.058) = 0.247
⋮
L5 = (1)(0.023) + (2)(0.027) + (3)(0.061) = 0.260   (4)

Variation of proportional term:
Sβ = (L1 + ⋯ + L5)²/5r = 0.024816   (f = 1)   (5)
Figure 1
Deep-drawing process
Variation of differences of proportional terms:
SN = (L1² + ⋯ + L5²)/r − Sβ = 0.000079   (f = 4)   (6)

Error variation:
Se = ST − Sβ − SN = 0.00043   (f = 10)   (7)

Error variance:
Ve = Se/10 = 0.000043   (8)

Total error variance:
VN = (SN + Se)/14 = 0.000036   (9)

SN ratio:
η = 10 log (1/5r)(Sβ − Ve)/VN = 9.88 dB   (10)
Sensitivity:
Table 1
Measured data of experiment 1 in the L18 orthogonal array (converted)

Noise Factor    M1 (1)    M2 (2)    M3 (3)    Linear Equation
N1              0.02      0.03      0.06      L1
N2              0.02      0.03      0.06      L2
N3              0.02      0.04      0.06      L3
N4              0.02      0.03      0.06      L4
N5              0.02      0.03      0.06      L5
S = 10 log (1/5r)(Sβ − Ve) = −34.51 dB   (11)
Table 2
Control factors and levels

Factor                 Level 1      Level 2      Level 3
A: error
B: punch type
C: dice type
D: clearance           Small        Mid          Large
E:                     Small        Mid          Large
F: lubricant type      Company A    Company B    Company C
G: knockout pressure   Small        Mid          Large
H: material type
Figure 2
Response graphs
The loss when the function reaches its limit for the customer in this case is defined as A0 yen. The production cost, which has to date amounted to 8.5 yen/product, can be reduced to 4 yen under the optimal configuration. The improvement is calculated as follows.

Loss function:
L = (A0/Δ0²)σ² + product cost

Lcurrent = (8.5/0.2²)(0.0032²) + 8.5 = 8.502   (12)
Loptimal = (8.5/0.2²)(0.0068²) + 4 = 4.01   (13)

Improvement: 8.502 − 4.01 ≈ 4.49 yen/product   (14)
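The two loss evaluations can be checked numerically. A sketch (the tolerance Δ0 = 0.2 and the σ values are read back from the printed totals, so treat them as reconstructed inputs rather than original study data):

```python
# Quality loss plus production cost: L = (A0 / D0**2) * sigma**2 + cost.
def total_loss(A0, D0, sigma, cost):
    return (A0 / D0 ** 2) * sigma ** 2 + cost

# Reconstructed inputs (see lead-in): A0 = 8.5 yen, D0 = 0.2.
L_current = total_loss(8.5, 0.2, 0.0032, 8.5)   # ~8.502 yen/product
L_optimal = total_loss(8.5, 0.2, 0.0068, 4.0)   # ~4.01 yen/product
saving = L_current - L_optimal                  # ~4.49 yen/product
```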
(14)
Table 3
Estimation and confirmation of SN ratio and sensitivity

                SN Ratio                      Sensitivity
Configuration   Estimation   Confirmation     Estimation   Confirmation
Optimal         19.13        18.63            -34.47       -30.26
Current          2.17         3.40            -30.26       -30.79
Gain            16.96        15.23             -4.21         1.41
Reference
Satoru Shiratsukayama, Yoshihiro Shimada, and Yasumitsu Kawasumi, 2001. Optimization of deep drawing process. Quality Engineering, Vol. 9, No. 2, pp. 42-47.

This case study is contributed by Satoru Shiratsukayama.
CASE 41
1. Introduction
Robust technology development of an encapsulation process was conducted at LSI Assembly Engineering's Research and Development Facility, located in Fremont, California. The experiment focused on minimizing encapsulation height variation for the single-tier cavity-down EPBGA electronic package. The encapsulation process is a critical process for ensuring surface mountability [1].
Objectives
Determining optimum process parameters to reduce encapsulation height variation for the EPBGA package was the main purpose of this experiment. Optimum process parameters for the EPBGA encapsulation process require (1) encapsulation height standard deviations of less than 1.5 and on target, and (2) satisfying a capacity requirement of 150 units per hour. The function of the EPBGA encapsulation process is to dispense a flow control dam ring and a glob-top liquid epoxy (encapsulant) over the silicon die and delicate bonding wires [3]. Encapsulation offers several basic functions for an electronic package: (1) filling the cavity to create a seal, (2) adding mechanical strength to the package, and (3) providing good moisture resistance.
Figure 1
Top view of a single-tier cavity-down electronic process
2. Design of Experiment
Parameter Diagram
A parameter diagram (Figure 3) represents the
EPBGA encapsulation process. Noise, signal, and
controllable factors are inputs to the EPBGA encapsulation process. The output of the EPBGA encapsulation process is called the response (encapsulation
height).
Figure 2
Side view of encapsulant process
Figure 3
EPBGA encapsulation process parameter design
Figure 4
Ideal function between encapsulation weight and height
Figure 5
Actual function between encapsulation weight and height
Noise factors are process parameters whose settings are difficult or expensive to control.
Controllable factors are parameters that can be specified by the designer.
Signal is the parameter set by the operator to achieve the intended value for the response.
Response is the measured output of the EPBGA encapsulation process.
Cured encapsulation height is measured by utilizing a mechanical height indicator and a designed fixture. The fixture contains five measuring holes and four zeroing holes located on the top plate to ensure repeatable encapsulation height measurements. The drop indicator is zeroed on the substrate and is then moved on top of the cured encapsulant for a digital display representing encapsulation height [6].
Figure 6
Sliding level settings
Ideal Function
The ideal function of the encapsulation process is a linear function between the encapsulation weight and height (Figure 4). A correlation exists between the weight and height: increasing the weight increases the height [5].
The EPBGA encapsulation process is sensitive to the following noise factors:
1. Package-to-package variation. The volume of the cavity could be different for every package, due to variations in the die attach material.
2. Encapsulation viscosity variation. The viscosity decreases when heat is applied and increases over time.
Figure 5 represents an undesirable function between encapsulation weight and height. The experiment considered several process parameters to make this function linear (Figure 4). The zero-point-proportional SN ratio was utilized. A large SN ratio decreases the encapsulation height variance.
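A zero-point proportional model y = βM fits a line through the origin; the least-squares slope is β = Σ My / Σ M². A small illustration with made-up weight/height pairs (not measured data from this study):

```python
# Zero-point proportional fit: y = beta * M, beta = sum(M*y) / sum(M*M).
def fit_beta(M, y):
    return sum(m * v for m, v in zip(M, y)) / sum(m * m for m in M)

# Hypothetical encapsulation weights (g) and cured heights (mm):
weights = [0.5, 1.0, 1.5, 2.0]
heights = [0.62, 1.21, 1.79, 2.42]
beta = fit_beta(weights, heights)

# For exactly proportional data the slope is recovered exactly:
assert abs(fit_beta([1, 2, 3], [2, 4, 6]) - 2.0) < 1e-12
```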
Layout of Experiment
An L18 orthogonal array was selected. A sliding level was utilized between the gelling time and temperature to enlarge the range of experimental settings. The low and high gelling times were paired with low, medium, and high gel temperature settings (Figure 6).
The sliding level was incorporated into run 1 of the inner array, as seen in Table 1. During the experiment the encapsulant's viscosity (noise factor) was controlled by the thaw time in hours prior to dispensing. Controlling the low-viscosity level, the thaw time was 1.5 hours. Controlling the high-viscosity level, the thaw time totaled 6 hours. Two encapsulation viscosity levels (in centipoise) were incorporated into the experiment:

Low (0-6 hours): 40,600
High (6 hours): 61,200

SN ratio = 10 log10 (useful energy/not useful energy) = 10 log10 (β²/σ²)
Table 1
Control factors and levels for the inner array

Control Factor         Level 1      Level 2      Level 3
Gel time
Gel temperature        GT - 40      GT           GT + 25
Dispense pressure      DP - 4       DP           DP + 3
Dispense temperature   DT - 15      DT           DT + 50
Dispense pattern       Pattern A    Pattern B    Pattern C
Needle temperature     NT           NT + 10      NT + 20
Needle diameter        ND - 0.010   ND           ND + 0.010
                       DR - 0.003   DR           DR + 0.003
Figure 7
Experimental design and layout
Table 2
SN ratio and linear coefficient per run

Run    SN Ratio (dB)    Linear Coefficient
1      24.20            33.77
2      25.58            32.64
3      20.84            34.45
4      25.11            31.02
5      22.17            24.13
6      24.98            31.42
7      23.73            30.76
8      24.33            32.19
9      24.85            32.52
10     26.59            34.48
11     27.37            30.60
12     26.38            29.77
13     26.48            31.24
14     24.23            33.92
15     24.28            33.00
16     24.87            31.06
17     24.69            31.79
18     25.45            31.21
4. Confirmation
Prior to the confirmation run, factors containing strong effects (a large SN ratio) from the choice table were placed into the following SN ratio prediction equation (Tav of SN ratio = 24.79):

SN ratio_prediction = T + (GTI2 − T) + (DH2 − T) + (TT1 − T) + (GTE1 − T)
= 24.79 + (25.59 − 24.79) + (25.69 − 24.79) + (25.18 − 24.79) + (25.16 − 24.79)
= 27.32

The process factors with strong effects included gel time, dam ring height, tip temperature, and gel temperature.

Figure 8
Level average for SN ratios and linear coefficients
Table 3
Level averages for SN ratio (dB)

Control Factor         Symbol    Level 1    Level 2    Level 3
Gel time               GTI       23.98      25.59
Gel temperature        GTE       25.16      24.54      24.46
Dispense pressure      DPS       25.17      24.73      24.46
Dispense temperature   DT        25.09      24.96      24.31
Dispense pattern       DPA       25.00      25.09      24.27
Needle heater          TT        25.18      25.30      23.88
Needle diameter        TD        24.40      24.90      24.90
Dam ring               DH        24.50      25.60      24.16
Table 4
Level averages for linear coefficient

Control Factor         Symbol    Level 1    Level 2    Level 3
Gel time               GTI       32.54      31.89
Gel temperature        GTE       32.61      32.45      31.58
Dispense pressure      DPS       32.05      32.54      32.06
Dispense temperature   DT        32.11      31.62      32.91
Dispense pattern       DPA       31.94      31.61      33.10
Needle heater          TT        31.69      32.63      32.33
Needle diameter        TD        32.58      32.34      31.78
Dam ring               DH        33.13      31.57      31.95
5. Summary
The experiment's process settings show that encapsulation height variation was reduced by optimizing the dispense and gelling parameters.
Table 5
Control factors and chosen process levels

Control Factor         Code    Basis for Chosen Level
Gel time               GTI     Highest SN ratio
Gel temperature        GTE
Dispense pressure      DPS
Dispense temperature   DT
Dispense pattern       DP
Tip temperature        TT
Needle diameter        TD
Dam ring               DH      Highest SN ratio
Control Factor
Table 6
SN ratios for various settings

Setting        SN Ratio_prediction (dB)    SN Ratio (dB)    Units/h
                                           24.33            150
Choice table   27.32                       31.85            60
Trade off      27.31                                        150
References
1. Fred W. Billmeyer, Jr., 1984. Textbook of Polymer Science. Hoboken, NJ: Wiley.
2. Hal F. Brinson, 1990. Engineered Materials Handbook, Vol. 3: Adhesives and Sealants. Materials Park, OH: ASM International.
3. Marilyn Hwan, 1996. Robust technology development: an explanation with examples. Presented at the ASI/Western Product Development Symposium, Pomona, CA, November 6-8.
4. Brian Lynch, 1992. Asymtek automatic encapsulation coating for COT packages. LSI Logic, May.
5. Madhav S. Phadke, 1989. Quality Engineering Using Robust Design. Upper Saddle River, NJ: Prentice Hall.
6. Felipe Sumagaysay and Brent Bacher. Repeat a fixture. U.S. patent pending, November 22, 1995.
CASE 42
Abstract: This paper describes a parameter design experiment utilized to optimize a gas-arc stud welding process. The desired outcome of the project was to determine the optimum process parameter levels leading to minimized variation and maximized stud weld strength across a wide array of real-world processing conditions. The L18 orthogonal array experiment employed a zero-point proportional dynamic SN ratio, with signal levels varying across runs. The optimized process parameters yielded a confirmed SN ratio improvement of 6.4 dB, thus reducing the system's energy transformation variability by 52%. Implementation of the optimized process parameters is expected to result in significant cost savings, due to the elimination of a secondary welding operation and a significant reduction in downstream stud strength failures.
1. Introduction
Welding Process
Stud welding is a general term for joining a metal
stud to a metal base material. In gas-arc stud welding,
the base of the stud is joined to the base material
by heating both parts with an arc drawn between
them and then bringing the parts together under
pressure.
The typical steps in gas-arc stud welding (Figure
1) are as follows:
1. The stud is loaded into the chuck and the stud gun is positioned properly for welding by inserting the spark shield into the welding fixture bushing until the stud base contacts the workpiece.
2. The stud gun's trigger is depressed, starting the automatic welding cycle. Immediately upon triggering the weld cycle, the original atmosphere within the spark shield is purged with a controlled gas mixture. Upon purge completion, the arc is drawn and the parts are brought together under pressure.
Figure 1
Gas-arc stud welding process
Desired Outcome
The desired outcome of the project was to determine the optimum process parameter levels, leading to minimized variation and maximized stud weld strength across a wide array of real-world processing conditions.

2. Experimental Procedure
The study was performed using low-carbon steel studs 10 mm in diameter and 28 mm long.

Engineered System
The stud welding process can be broken down into an engineered system with three types of input variables and one output (response) variable (Figure 2). Each variable type is discussed in detail below.

Signal Factor
The signal factor is typically the primary energy input into the engineered system. For a gas-arc stud welding system the signal factor (M) is defined as

M = MI × MT × MV

where MI is the average current, MT the welding time, and MV the welding voltage.
Figure 2
Engineered system
Table 1
Signal factor settings

Trial          MI: Average Current (A)    MT: Welding Time (s)
M1 and M2      560                        0.35
M3 and M4      560                        0.40
M5 and M6      560                        0.45
M7 and M8      700                        0.35
M9 and M10     700                        0.40
M11 and M12    700                        0.45
M13 and M14    840                        0.35
M15 and M16    840                        0.40
M17 and M18    840                        0.45
Response Factor
The response factor (Y) is the measurable intended output of the engineered system. The use of maximum applied torque prior to stud breakage as the response factor was considered. This approach would work well if weld strength is less than the stud material yield strength; however, it would not give an accurate indication if weld strength equaled or exceeded the stud material yield strength. In addition, it could not detect instances of excessive applied energy. Therefore, a more suitable response factor had to be found.
Based on discussions with several welding experts, it was determined that measuring the annular weld fillet area at the base of the stud would provide a reasonable estimate of the amount of energy applied to make a weld. Therefore, annular weld fillet area was chosen as the response factor (Y).
Ideal Function
In an ideal stud welding system, 100% of the welding input energy (signal) would be efficiently transformed into the weld, bonding the stud and base material. However, under real-world conditions, the efficiency of the energy transformation will vary due to the presence of noise factors. The goal of robust design is to minimize the effect of noise factors and to maximize the efficiency of the energy transformation. The ideal function is

y = βM

where y is the weld fillet area, M the welding energy, and β the slope of the best-fit line between y and M.
Noise Strategy
Noise factors cause variability in the energy transformation of the engineered system. These are factors that are difficult, impossible, or too expensive to control. Robust design strives to make the engineered system insensitive to these noise factors. Based on brainstorming, the project team determined that the following noise factors were the most important: collet condition, spark shield condition, stud gun angle, and workpiece surface contamination. A compound noise strategy was chosen to simulate mild (N1) and severe (N2) noise conditions. Table 2 shows the noise factor settings for each of the compound noise levels.

Control Factors
Control factors are the parameters that can be specified by process designers. The project team's brainstorming efforts yielded the following critical factors: gas mixture, spring damper, bushing material, preflow gas duration, postflow gas duration, plunge depth, and gas flow rate.
In addition, the team felt that arc length was an important factor. However, it was excluded from the experiment because changing the arc length would increase arc resistance and thus interact significantly with the feedback control loop of the welding power supply. Therefore, based on existing engineering knowledge, the arc length was held constant at 3/
Orthogonal Array
The use of an orthogonal array allows for efficient sampling of the multidimensional design space. An L18 (2¹ × 3⁷) orthogonal array was chosen for this experiment. The L18 array is shown under the inner array portion of Table 4. Each run within the inner array was repeated for each signal and noise level, as shown in the outer array portion of the table. The experiment design required a total of 324 stud welds (18 runs × 9 signal levels × 2 noise levels).
Table 2
Compound noise factor settings

Noise Factor             N1 (mild)        N2 (severe)
Collet condition         New
Spark shield condition   New
Stud gun angle           Perpendicular
Surface contamination    Clean part       Oil-coated part
Table 3
Control factors and levels

Control Factor        Level 1              Level 2             Level 3
A: gas mixture        90% argon, 10% CO2   98% argon, 2% O2
B: spring damper      None                 Weak                Mid
C: bushing material   Plastic              Steel               Brass
D:                    0.50                 0.75                1.0
E:                    0.25                 0.50                1.0
F:                    0.040                0.070               0.100
G*:
H:
SN Ratio
The signal-to-noise ratio (SN or η) is the best indicator of system robustness. As this ratio increases, the engineered system becomes less sensitive to noise factor influences, and the efficiency of the energy transformation increases. The goal of SN analysis is to determine the set of control factor settings that maximize the SN ratio.
Zero-Point Proportional SN Ratio
The experiment was designed to treat the response factor (fillet area) as a zero-point proportional dynamic characteristic. The formula for the zero-point proportional SN ratio is

η = 10 log [(1/r)(Sβ − Ve)/Ve]

where

r = M1² + M2² + ··· + Mk²
β = (1/r)(M1y1 + M2y2 + ··· + Mkyk)
Sβ = (1/r)(M1y1 + M2y2 + ··· + Mkyk)²

and yi is the fillet area. To calculate the error variance (Ve), the total variation sum of squares (ST) and the error variation (Se) need to be calculated as

ST = y1² + y2² + ··· + yk²
Table 4
Experiment array

Inner array (control factor levels). Each run within the inner array was repeated for each signal level (M1–M9) and noise level (N1, N2) of the outer array.

Run  A  B  C  D  E  F  G  H
 1   1  1  1  1  1  1  1  1
 2   1  1  2  2  2  2  2  2
 3   1  1  3  3  3  3  3  3
 4   1  2  1  1  2  2  3  3
 5   1  2  2  2  3  3  1  1
 6   1  2  3  3  1  1  2  2
 7   1  3  1  2  1  3  2  3
 8   1  3  2  3  2  1  3  1
 9   1  3  3  1  3  2  1  2
10   2  1  1  3  3  2  2  1
11   2  1  2  1  1  3  3  2
12   2  1  3  2  2  1  1  3
13   2  2  1  2  3  1  3  2
14   2  2  2  3  1  2  1  3
15   2  2  3  1  2  3  2  1
16   2  3  1  3  2  3  1  2
17   2  3  2  1  3  1  2  3
18   2  3  3  2  1  2  3  1
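As an illustrative aside (a sketch, not part of the original study), the standard L18 inner array can be written out and its balance property checked in a few lines of Python:

```python
from collections import Counter

# Standard L18 (2^1 x 3^7) orthogonal array used as the inner array.
L18 = [
    [1,1,1,1,1,1,1,1], [1,1,2,2,2,2,2,2], [1,1,3,3,3,3,3,3],
    [1,2,1,1,2,2,3,3], [1,2,2,2,3,3,1,1], [1,2,3,3,1,1,2,2],
    [1,3,1,2,1,3,2,3], [1,3,2,3,2,1,3,1], [1,3,3,1,3,2,1,2],
    [2,1,1,3,3,2,2,1], [2,1,2,1,1,3,3,2], [2,1,3,2,2,1,1,3],
    [2,2,1,2,3,1,3,2], [2,2,2,3,1,2,1,3], [2,2,3,1,2,3,2,1],
    [2,3,1,3,2,3,1,2], [2,3,2,1,3,1,2,3], [2,3,3,2,1,2,3,1],
]

# Every three-level column is balanced: each level appears six times.
for col in range(1, 8):
    assert Counter(row[col] for row in L18) == {1: 6, 2: 6, 3: 6}

# Total welds = 18 inner-array runs x 9 signal levels x 2 noise levels.
total_welds = len(L18) * 9 * 2
print(total_welds)  # 324
```

The balance of each column is what allows the level averages in the response tables to be compared fairly.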
Sβ = (1/r)(M1y1 + M2y2 + ··· + M18y18)²

where

r = M1² + M2² + ··· + M18² = 2,158,878,662

so that

Sβ = [1/(2,158,878,662)][(6232)(16.48) + (6611)(27.93) + ··· + (15,365)(93.61)]² = 62,058.0

ST = y1² + y2² + ··· + y18² = 16.48² + 27.93² + ··· + 93.61² = 73,397.8

Se = ST − Sβ = 73,397.8 − 62,058.0 = 11,339.8

Ve = Se/(k − 1) = 11,339.8/(18 − 1) = 667.0

η = 10 log [(1/r)(Sβ − Ve)/Ve]
  = 10 log {[1/(2.16 × 10⁹)](62,058.0 − 667.0)/667.0}
  = −73.70 dB
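The run-3 result can be reproduced directly from the Table 6 data; the following sketch (not from the original text) recomputes the zero-point proportional SN ratio:

```python
import math

# Energy M_i (J) and fillet area y_i (mm^2) for run 3 (Table 6).
M = [6232, 6611, 7139, 7585, 7786, 8460, 9935, 8958, 10875,
     10185, 8305, 11112, 13060, 12655, 14517, 13894, 16482, 15365]
y = [16.48, 27.93, 29.56, 34.47, 29.56, 56.45, 16.48, 43.43, 78.04,
     29.56, 105.78, 37.74, 121.93, 38.46, 89.55, 75.36, 75.36, 93.61]

r = sum(m * m for m in M)                      # effective divider
S_beta = sum(m * yi for m, yi in zip(M, y)) ** 2 / r
ST = sum(yi * yi for yi in y)                  # total variation
Se = ST - S_beta                               # error variation
Ve = Se / (len(y) - 1)                         # error variance
eta = 10 * math.log10((S_beta - Ve) / Ve / r)  # SN ratio (dB)
print(round(eta, 2))  # approximately -73.70 dB
```

The same loop applied to the other 17 runs yields the SN ratios in Table 7.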
The same methodology was used to calculate SN
ratios for all 18 runs.
Response Analysis
The calculated SN ratio and beta values for each
experimental run are displayed in Table 7. The average SN ratio for each control factor level is shown
Figure 3
Annular weld fillet area calculation
η̂current = B̄1 + C̄2 + H̄1 − 2(av. η)
= (−71.59 dB) + (−69.38 dB) + (−70.68 dB) − (2)(−69.48 dB)
= −72.69 dB
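The additive-model prediction above is simply the sum of the relevant response-table averages minus twice the grand average; a minimal check:

```python
# Additive-model prediction of the current-configuration SN ratio from
# the response-table level averages (dB, from Table 8).
B1, C2, H1 = -71.59, -69.38, -70.68
grand_avg = -69.48

eta_hat = B1 + C2 + H1 - 2 * grand_avg
print(round(eta_hat, 2))  # -72.69
```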
4. Beta Analysis
β = (1/r)(M1y1 + M2y2 + ··· + Mkyk)
Table 5
Maximum torque prior to stud failure (ft-lb), for runs 1–18 at each signal level M1–M9 under noise conditions N1 and N2
Figure 4
Weld fillet area versus maximum torque prior to stud failure
Table 6
Data for run 3

Trial   Actual Current (A)   Actual Time (s)   Voltage (V)   Energy, M (J)   Fillet Area, y (mm²)
M1      584.2                0.35              30.5           6,232            16.48
M2      566.1                0.35              33.3           6,611            27.93
M3      575.1                0.39              31.8           7,139            29.56
M4      560.1                0.38              35.5           7,585            34.47
M5      583.5                0.43              31.0           7,786            29.56
M6      563.2                0.43              34.8           8,460            56.45
M7      737.3                0.35              38.5           9,935            16.48
M8      760.4                0.35              33.7           8,958            43.43
M9      731.9                0.38              39.1          10,875            78.04
M10     756.6                0.39              34.3          10,185            29.56
M11     745.2                0.30              37.2           8,305           105.78
M12     761.6                0.44              33.3          11,112            37.74
M13     885.5                0.35              42.1          13,060           121.93
M14     904.7                0.35              39.7          12,655            38.46
M15     887.1                0.39              42.0          14,517            89.55
M16     909.8                0.39              38.9          13,894            75.36
M17     884.5                0.44              42.4          16,482            75.36
M18     909.9                0.43              39.0          15,365            93.61
Table 7
SN ratio and beta values for each run

Run       SN Ratio, η (dB)    β
 1        −71.74              0.00542
 2        −69.56              0.00544
 3        −73.70              0.00536
 4        −64.27              0.00476
 5        −67.56              0.00478
 6        −68.07              0.00438
 7        −68.34              0.00397
 8        −69.63              0.00382
 9        −68.69              0.00420
10        −72.39              0.00522
11        −70.76              0.00504
12        −71.40              0.00540
13        −65.87              0.00450
14        −70.45              0.00417
15        −70.34              0.00426
16        −67.19              0.00421
17        −68.31              0.00407
18        −72.45              0.00347
Average   −69.48              0.00458
Table 8
SN ratio response table (dB)

Level   A         B         C         D         E         F         G*        H
1       −69.06    −71.59    −68.30    −69.02    −70.30    −69.17    −69.51    −70.69
2       −69.91    −67.76    −69.38    −69.20    −68.73    −69.64    −69.50    −68.36
3       –         −69.10    −70.78    −70.24    −69.42    −69.65    −69.45    −69.41
Δ       0.84      3.83      2.48      1.22      1.57      0.48      0.06      2.33
Figure 5
SN ratio effect plot
Table 9
Optimal control factor settings and SN ratio predictions

Setting                       Optimal    Current
Gas mixture                   90/10      98/2
Spring damper                 Weak       None
Bushing material              Plastic    Steel
Preflow gas duration (s)      0.50       0.50
Postflow gas duration (s)     0.50       0.50
Plunge depth (in.)            0.040      0.070
Gas flow rate (ft³/h)         20         10
Predicted SN ratio (dB)       −65.46     −72.69

Gain predicted: 7.23 dB
Table 10
Beta response table

Level   A         B         C         D         E         F         G*        H
1       0.00468   0.00531   0.00468   0.00463   0.00441   0.00460   0.00470   0.00450
2       0.00448   0.00448   0.00455   0.00459   0.00465   0.00454   0.00456   0.00463
3       –         0.00396   0.00451   0.00453   0.00469   0.00460   0.00449   0.00462
Δ       0.00020   0.00136   0.00017   0.00010   0.00028   0.00006   0.00020   0.00013
Figure 6
Beta effect plot
Table 11
Predicted versus confirmed results

                Predicted                     Confirmed
Configuration   SN Ratio (dB)    β            SN Ratio (dB)    β
Optimal         −65.46           0.00458      −66.85           0.00420
Current         −72.69           0.00520      −73.23           0.00518
Gain            7.23                          6.38
Beta Predictions
In the production environment, the input energy will be set to an optimum level and held constant. Therefore, β is relatively unimportant in this experiment. However, a comparison of predicted β values to confirmed values provides increased insight into the validity of the experimental results. Therefore, β values were predicted using the optimal control factor levels identified in the SN ratio analysis. The beta predictions are as follows:

Optimal: β̂ = 0.00458

β̂current = B̄1 + C̄2 + H̄1 − 2(av. β)
= 0.00531 + 0.00455 + 0.00450 − (2)(0.00458)
= 0.00520
5. Confirmation Test
The confirmation test was conducted in three steps:
1. Testing at current control factor settings across
all 18 energy input levels:
Control factor settings: A1B1C2D1E1 F2H1
2. Testing at optimal control factor settings predicted across all 18 energy input levels:
Control factor settings: A1B2C1D2E2F1H2
3. Testing at optimal control factor settings predicted at the optimum energy input level:
Table 12
Constant set point versus full energy input range

Condition                                     SN Ratio (dB)    β
Confirmed optimal (full energy range)         −66.85           0.00420
Constant set point (840 A, 0.35 s)            −66.86           0.00475
Although the input energy set point was held constant, the actual energy input varied from stud to stud due to the inherent variability of the power supply. The one positive effect of this inherent variability was that it enabled calculation of an SN ratio and β for the run. Table 12 compares the results of the constant-set-point run and the original optimal-condition confirmation run utilizing the full input energy range. The results of the two runs were virtually identical.
Figure 7 shows the torque testing results from
studs welded under optimal control factor settings
and the constant input energy setpoint of 840 A,
0.35 s. For all studs in this run, the weld yield
strength was greater than the stud material yield
strength.
The average observed maximum torque prior to failure (39.6 ft-lb) is 2.6 times larger than the assembly torque upper specification limit. The optimal process data exhibit a state of statistical control; therefore, the process capability index (Cp) can be calculated as follows:

Cp = (average max. torque − assembly torque upper spec. limit)/3σ
   = (39.6 − 15.0)/2.72
   = 9.03

Figure 7
Maximum torque prior to stud failure: optimal process
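The capability calculation can be checked numerically; note that the 3σ value of 2.72 ft-lb used below is an assumption read back from the quoted result, not a value stated explicitly in the recovered text:

```python
# Process capability for the optimal stud-welding process.
avg_max_torque = 39.6   # ft-lb, average maximum torque prior to failure
upper_spec = 15.0       # ft-lb, assembly torque upper specification limit
three_sigma = 2.72      # ft-lb (assumed 3*sigma, inferred from the quoted Cp)

Cp = (avg_max_torque - upper_spec) / three_sigma
print(round(Cp, 2))  # close to the 9.03 quoted in the text
```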
6. Conclusions
Process parameter optimization utilizing robust design methodology clearly resulted in a substantial
improvement in the gas-arc welding process. The
optimal combination of control factor settings was
determined, resulting in a 52.1% process variability
reduction and an average stud weld strength of 2.6
times the level required by the assembly operation.
Of the seven control factors tested, only two factors had been set at their optimal level. Additionally, the preexperiment welding energy input settings (700 A, 0.4 s) were found to be suboptimal. Implementation of the optimized process parameters is expected to result in significant cost savings (greater than $30,000), due to the elimination of a second-
References
American Supplier Institute, 1998. Robust Design Using Taguchi Methods: Workshop Manual. Livonia, MI: American Supplier Institute, pp. I-1–II-36.
American Welding Society, 1993. ANSI/AWS C5.4-93: Recommended Practices for Stud Welding. Miami, FL: American Welding Society.
Yuin Wu, 1999. Robust Design Using Taguchi Methods. Livonia, MI: American Supplier Institute, pp. 47–51.
CASE 43
1. Introduction
Traditionally, the generic function of injection molding has been considered to be transformability to a mold, or the capability of molding a product proportional to the mold shape. However, the resin caster discussed in this study is required to have sufficient strength because it is used in a cart carrying baggage. In addition, since internal voids occur quite often because the caster has a considerably thick wall, instead of the concept of transformability we selected as the generic function even filling of the inside of the mold with resin; that is, uniformity of density (specific gravity).
As the measuring method for analyzing density uniformity, we used a specific gravity measurement based on an underwater weighing method. A molded product was cut into five pieces (Figure 1). We chose in-the-air weight as the signal factor, M, and underwater weight as the output, y. More specifically, splitting up a molded product and measuring both the in-the-air weight (M) and the underwater weight (y), we fit the data to the ideal relationship equation (Figure 2).
The mixing ratio of recycled material was used as the noise factor; it affects the fluidity of the resin.
2. SN Ratio
The measured data are listed in Table 2. The data
analysis procedure was as follows.
Total variation:

ST = 19.1² + 22.3² + ··· + 82.8² + 176.6² = 82,758.13   (f = 10)   (1)

Effective dividers:

r1 = 22.2² + 25.1² + ··· + 202.8² = 54,151.02
r2 = 23.3² + 26.0² + ··· + 200.9² = 52,852.51   (2)

Linear equations:

L1 = (22.2)(19.1) + ··· + (202.8)(178.5) = 47,668.88
L2 = (23.3)(20.2) + ··· + (200.9)(176.6) = 46,434.04   (3)

Variation of proportional term:

Sβ = (L1 + L2)²/(r1 + r2) = 82,757.64   (f = 1)   (4)

Variation of differences between proportional terms:

SN×β = L1²/r1 + L2²/r2 − Sβ
     = 47,668.88²/54,151.02 + 46,434.04²/52,852.51 − 82,757.64
     = 0.08   (f = 1)   (5)

Figure 2
Ideal relationship

Error variation:

Se = ST − Sβ − SN×β = 82,758.13 − 82,757.64 − 0.08 = 0.41   (f = 8)   (6)

Error variance:

Ve = Se/8 = 0.41/8 = 0.05   (7)

Total error variance including nonlinearity:

VN = (Se + SN×β)/9 = (0.41 + 0.08)/9 = 0.05   (8)

SN ratio:

η = 10 log [(1/(r1 + r2))(Sβ − Ve)/VN]   (9)

Sensitivity:

S = 10 log [(1/(r1 + r2))(Sβ − Ve)] = −1.11 dB   (10)
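The decomposition above can be verified from the Table 2 data; this sketch (an illustration, not part of the original text) recomputes the main quantities:

```python
import math

# In-the-air weight M (signal) and underwater weight y for the two noise
# levels of Table 2 (N1 and N2: mixing ratios of recycled material).
M1 = [22.2, 25.1, 50.3, 96.8, 202.8]
y1 = [19.1, 22.3, 44.3, 85.3, 178.5]
M2 = [23.3, 26.0, 48.4, 94.5, 200.9]
y2 = [20.2, 23.0, 42.6, 82.8, 176.6]

ST = sum(v * v for v in y1 + y2)                     # total variation
r1 = sum(m * m for m in M1)                          # effective dividers
r2 = sum(m * m for m in M2)
L1 = sum(m * v for m, v in zip(M1, y1))              # linear equations
L2 = sum(m * v for m, v in zip(M2, y2))
S_beta = (L1 + L2) ** 2 / (r1 + r2)                  # proportional term
S_Nbeta = L1**2 / r1 + L2**2 / r2 - S_beta           # N x beta variation
Se = ST - S_beta - S_Nbeta                           # error variation
Ve = Se / 8
S_sens = 10 * math.log10((S_beta - Ve) / (r1 + r2))  # sensitivity (dB)
print(round(S_sens, 2))
```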
The injection molding conditions were listed as control factors (Table 1). Figure 3 shows the shape of the molded product. Nylon plastic was used as the resin material. Control factors were assigned to an L18 orthogonal array and the noise factor to the outer array. The eighteen experiments were each conducted twice.

Due to poor setup conditions in experiments 1, 4, 11, and 17, the resin was not fully charged in the mold, so no products were obtained. Because no data were available, these conditions were treated as incomplete data. Analyses were made using the following two methods:

1. Find the worst SN ratio among experiments 1 to 18; experiment 8 is the worst (−6.5 dB). Double the value (−13 dB). Put this value into experiments 1, 4, 11, and 17 to calculate the level averages.

2. Ignore the missing values when calculating level averages. For example, the average of A1 is calculated as A1 = (total of experiments 1 to 9, except experiments 1 and 4)/7.
Based on these analyses, we created the response
graphs in Figure 4.
The optimal configuration is estimated as
follows:
1. Select levels with less missing data for each
factor.
Figure 1
Division of thick-walled molded product
942
Case 43
Table 1
Control factors and levels

Control Factor   Level 1    Level 2    Level 3
A:               Low        High       –
B:               Low        Mid        High
C:               B − 200    B − 190    B − 180
D:               Low        Mid        High
E:               –          –          –
F:               F1         F2         –
G:               Low/long   Mid/mid    High/short
H:               Short      Mid        High
Table 2
Example of measured data

Noise factor: mixing ratio of recycled material (wt %)

N1:        M    22.2    25.1    50.3    96.8    202.8
           y    19.1    22.3    44.3    85.3    178.5
N2 (50):   M    23.3    26.0    48.4    94.5    200.9
           y    20.2    23.0    42.6    82.8    176.6
Table 3
Estimation and confirmation (dB)

                          SN Ratio                       Sensitivity
Configuration             Estimation    Confirmation     Estimation    Confirmation
Optimal configuration 1   20.1662       14.2205          −1.0344       −1.1160
Optimal configuration 2   –             14.5307          –             −1.1170
Comparison                6.4157        9.8173           −1.1193       −1.0925
Gain 1                    13.7505       4.4032           0.0850        0.0235
Gain 2                    –             4.7134           –             0.0245
Figure 3
Shape of molded product
Figure 4
Response graphs
The loss function is

L = (A/Δ²)σ²

where A (the loss when a defective product is produced) is assumed to be 400 yen/unit, Δ (the tolerance in the process) is assumed to be 0.1, and σ² (the variance) is computed from the result of the confirmatory experiment; that is, converting η = β²/σ², we obtain σ² = β²/η.

The loss for the reference configuration is

L1 = (400/0.1²)(0.0811) = 3244 yen/unit

and for the optimal configuration,

L2 = (400/0.1²)(0.0272) = 1088 yen/unit
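The quadratic loss calculation above is a one-liner; in the sketch below the variance values 0.0811 and 0.0272 are approximations read back from the quoted results:

```python
# Quadratic loss function L = (A / Delta^2) * sigma^2.
A = 400.0       # yen/unit, loss when a defective product is produced
Delta = 0.1     # process tolerance

def loss(sigma2):
    """Expected loss per unit for a given variance sigma2."""
    return (A / Delta**2) * sigma2

print(round(loss(0.0811)))  # 3244 yen/unit (reference configuration)
print(round(loss(0.0272)))  # 1088 yen/unit (optimal configuration)
```

The difference, roughly 2156 yen/unit, is the per-unit benefit of the optimal configuration.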
Reference
Shinji Hanada, Takayoshi Matsunaga, Hirokazu Sakou,
and Yoshihisa Takano, 1999. Optimization of molding condition for thick-walled mold. Quality Engineering, Vol. 7, No. 6, pp. 56–62.
This case study is contributed by Shinji Hanada, Takayoshi Matsunaga, Hirokazu Sakou, and Yoshihisa
Takano.
CASE 44
Abstract: In a conventional development process of electrodeposition, improvement has been focused mainly on pinholes as one of the quality characteristics. However, in our research, we regarded a function of coating
formation in the electrodeposition process as an input and output of energy.
By reducing the variability in coating thickness on each side and minimizing
the difference in coating thickness on both sides of a magnet, we aimed to
minimize pinholes as a quality characteristic.
1. Introduction
The magnet for motors is electrodeposited to prevent it from releasing dust and from becoming rusty.
There are coating methods other than electrodeposition: for example, spraying and plating. One of
the major advantages of electrodeposition is its
good throwing power, which enables us to obtain
uniform coating thickness even on a complicated
shape. However, since the rare-earth-bonded magnet used is compression molded from metal powder,
it has plenty of hollow holes inside. Therefore, when
its coating is baked, gas frequently comes out of the
inside, leading to the development of tiny pinholes on the coat, ranging in diameter from a few micrometers to a few dozen micrometers.
To electrodeposit a magnet, water-soluble resin
made primarily of epoxy resin is used. After charging the magnet negatively and the electrode positively by applying voltage in its solution, we can
obtain water electrolysis, which raises the pH of the
solution, with hydrogen generated around the magnet. On the other hand, since the coating material
in the solution is charged positively, it is attracted
by the magnet and consequently condensed and deposited on the surface of the magnet due to the alkali.
The coating thus formed on the surface has
many minute holes generated by O2 gas. Because of
Table 1
Signal and noise factors

Signal factor: coulomb value        M1: 2    M2: 3    M3: 4

Noise factor (compounded)           N1            N2
1: measured side                    Outer side    Inner side
2: deterioration of solution        Yes           No
3: coating thickness                Maximum       Minimum
3. Calculation of SN Ratio
Table 2 shows data examples of experiment 1 in an
L18 orthogonal array. Using these data we computed
the SN ratios and sensitivities as follows.
Total variation:

ST = 9.5² + 9.3² + ··· + 13.6² + 14.5² = 2188.01   (f = 12)   (1)

Linear equations:

L1 = (2)(9.5 + 9.3) + (3)(14.6 + 14.5) + (4)(21.1 + 20.6) = 291.7
L2 = (2)(6.7 + 6.7) + (3)(10.7 + 10.9) + (4)(13.6 + 14.5) = 204.0   (2)

Effective divider:

r = 2² + 3² + 4² = 29   (3)

Figure 1
Experimental device
Table 2
Data of experiment 1 on coating thickness (μm)

      M1 (2 C)      M2 (3 C)        M3 (4 C)
N1    9.5   9.3     14.6   14.5     21.1   20.6
N2    6.7   6.7     10.7   10.9     13.6   14.5
Variation of proportional term:

Sβ = (L1 + L2)²/[(2)(2r)] = (291.7 + 204.0)²/[(2)(2)(29)] = 2118.26   (f = 1)   (4)

Variation of differences between proportional terms:

SN×β = (L1² + L2²)/(2r) − Sβ = (291.7² + 204.0²)/[(2)(29)] − 2118.26 = 66.31   (f = 1)   (5)

Error variation:

Se = ST − Sβ − SN×β = 2188.01 − 2118.26 − 66.31 = 3.44   (f = 10)   (6)

Error variance:

Ve = Se/[(2)(2)(3) − 2] = 3.44/10 = 0.34   (7)

Total error variance including nonlinearity:

VN = (Se + SN×β)/[(2)(2)(3) − 1] = (3.44 + 66.31)/11 = 6.34   (8)

SN ratio:

η = 10 log [(1/4r)(Sβ − Ve)/VN]
  = 10 log {[1/(4)(29)](2118.26 − 0.34)/6.34}
  = 4.59 dB   (9)

Sensitivity:

S = 10 log [(1/4r)(Sβ − Ve)]
  = 10 log {[1/(4)(29)](2118.26 − 0.34)}
  = 12.61 dB   (10)
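The full chain of equations (1)–(10) can be reproduced from the Table 2 data; the following sketch (not from the original text) does so:

```python
import math

# Coating-thickness data of experiment 1 (Table 2); signal levels are the
# coulomb values 2, 3, 4, with two measurements per signal and noise level.
Ms = [2, 3, 4]
N1 = [(9.5, 9.3), (14.6, 14.5), (21.1, 20.6)]
N2 = [(6.7, 6.7), (10.7, 10.9), (13.6, 14.5)]

ST = sum(v * v for pair in N1 + N2 for v in pair)        # f = 12
L1 = sum(m * (a + b) for m, (a, b) in zip(Ms, N1))
L2 = sum(m * (a + b) for m, (a, b) in zip(Ms, N2))
r = sum(m * m for m in Ms)                               # 29
S_beta = (L1 + L2) ** 2 / (4 * r)
S_Nbeta = (L1**2 + L2**2) / (2 * r) - S_beta
Se = ST - S_beta - S_Nbeta
Ve = Se / 10
VN = (Se + S_Nbeta) / 11
eta = 10 * math.log10((S_beta - Ve) / (4 * r) / VN)      # SN ratio (dB)
S = 10 * math.log10((S_beta - Ve) / (4 * r))             # sensitivity (dB)
print(round(eta, 2), round(S, 2))
```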
Table 3
Control factors and levels

Control Factor           Level 1    Level 2    Level 3
A:                       Far        Close      –
B: temperature           Low        Mid*       High
e: (error column)        –          –          –
C: NV value              Small      Mid*       Large
D: amount of ash         Small      Mid*       Large
E: amount of solvent     Small      Mid        Large
F: flow of solution      Small      Mid        High
G: voltage (V)           115        175        235

* Current level.
Figure 2
Response graphs
Table 4
Results of estimation and confirmatory experiment

                SN Ratio                       Sensitivity
Configuration   Estimation    Confirmation    Estimation    Confirmation
Optimal         8.49          11.70           13.95         14.28
Current         3.01          7.42            13.27         14.16
Gain            5.48          4.28            0.68          0.12
Figure 3
Results of confirmatory experiment
Reference
Hayato Shirai and Yukihiko Shiobara, 1999. Quality improvement of electro-deposited magnets. Quality Engineering, Vol. 7, No. 1, pp. 3842.
CASE 45
1. Introduction
We used Taguchi's dynamic parameter design approach to optimize an electrical encapsulation process. Here we present a parameter design effort focused on reducing the process cycle time and curing temperature while maintaining the same reliability performance for the image sensor module
used in the night-viewing product lines at ITT Night
Vision.
Figure 1 shows an F5050 system that has two sensor modules (Figure 2), one located behind each
objective lens opposite the eyepiece. At the heart of
each module is an image intensifier tube (Figure 3),
whose outer surface has exposed electrical elements
that must be covered to provide environmental
isolation.
In addition, the material covering the tube's surface must isolate the exposed metallic elements
electrically. The space allowed for the encapsulant
material is very small, to reduce sensor module
weight and size, increasing the challenge of introducing a new encapsulant material. A paramount
Figure 1
F5050 night vision goggle
Figure 2
Cross section of sensor module
Figure 3
Cross section of image-intensifier tube
Figure 4
Test coupon to simulate the product
Figure 5
Encapsulation system
3. Parameter Design
Experimental Approach
Figure 5 presents the encapsulating/isolating engineered system. The signal factor is applied voltage. The response is the resultant current flow between the metallic elements. Figure 6 presents the ideal function for the system. The compound noise factor was thermal/humidity exposure, performed cyclically over many days, together with the condition of the coupon simulating a reworking situation. The electrical data were collected before and after this cycling. The data were analyzed using the zero-point proportional dynamic SN ratio, with applied voltage as the signal factor. The objective was to minimize the slope, β. For the ideal function of voltage versus current, the slope is inversely proportional to resistance; for this system, the objective was therefore to maximize resistance in order to increase the electrical isolation properties of the encapsulant system.
Figure 6
Ideal function
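The slope of the ideal function is the least-squares line through the origin; a minimal sketch with purely illustrative numbers (not measured data from this study) shows the relation between β and the effective resistance:

```python
# For the ideal function I = beta * V, the zero-intercept least-squares
# slope is beta = sum(V*I) / sum(V^2); a smaller beta means a larger
# effective resistance R = 1/beta, i.e., better electrical isolation.
V = [2000, 4000, 6000, 8000]   # applied voltage (V)
I = [0.10, 0.21, 0.29, 0.42]   # leakage current (hypothetical units)

beta = sum(v * i for v, i in zip(V, I)) / sum(v * v for v in V)
R_eff = 1 / beta
print(beta, R_eff)
```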
Table 1
Factors and levels

Factor                                 Levels
Signal factor
M: voltage between electrodes (V)      1: 2000;  2: 4000;  3: 6000;  4: 8000

Controllable factors
A: cure temperature (°C)               1: 60;  2: 80
B:                                     1: 2;  2: 4;  3: 6
C:                                     1: 5;  2: Std.;  3: 30
D:                                     1: 0;  2: 5;  3: 10
E:                                     1: 0.5;  2: 12;  3: 24
F:                                     1: 250 μm;  2: 2;  3: 200
G: postcleaning agent                  1: IPA;  2: Std.;  3: Acetone
H: sandblast surface                   1: Light;  2: Std.;  3: Heavy

Noise factors
X: humidity/temperature exposure       1: Before;  2: After
Y: coupon condition                    1: New;  2: Reworked
Table 3
Experimental results

Run    SNR (dB)     Slope, β
 1     −16.5406     1.9895
 2     −15.6297     0.7347
 3     −14.5655     0.6423
 4     −15.3339     0.9280
 5     −14.9945     0.6889
 6     −18.3440     0.7758
 7     −15.4606     0.9989
 8     −15.1447     0.8003
 9     −15.7700     0.2967
10     −14.9883     0.4587
11     −15.7399     0.1666
12     −16.3414     0.5862
13     −18.0843     0.4627
14     −15.2836     0.3669
15     −17.7041     0.5584
16     −15.9816     0.1713
17     −16.9162     0.8485
18     −14.4645     0.2468
4. Experimental Layout
Table 1 lists the factors and levels used in the L18
experimental layout. Thirty-six coupons were fabricated, half of which were treated to represent one
of the noise factors called reworked. Each was encapsulated according to the orthogonal combinations using the process developed for sensor module
Table 2
Baseline raw data

         2 kV               4 kV               6 kV               8 kV
         New      Rework    New      Rework    New      Rework    New      Rework
Before   0.041    0.056     0.131    0.118     0.252    0.205     –        1.25
After    1.59     5.05      3.61     8.01      6.02     10.93     –        9.13
Sensitivity Analysis
Figure 7
Average response graph for β
Table 4
Process average estimates and confirmed values

Condition   SN Ratio (dB)    β
Baseline    −14.61           0.6298
Optimal     −7.42            0.1985
Gain        7.19             0.4313
6. Confirmation
Table 4 presents the process average estimates and confirmation results for η and β. Figure 8 compares
Figure 8
Optimum versus standard process conditions
7. Conclusions
As a result of the parameter design effort focused on the electrical encapsulation engineered system, the SN ratio increased 7.2 dB while the slope, β, decreased 68%, from 0.6298 to 0.1985. This translated
CASE 46
1. Introduction
CFRP is a material made of a mixture of super-engineering plastics and carbon fibers. Compared with materials used for plastic injection molding, it has a much higher melting point and becomes carbonized and solidified at temperatures 50 to 60°C above the melting point. Since a conventional plastic injection machine cannot mold this material, we adjusted the shapes of the nozzle and screw in a preliminary experiment and confirmed the factors and ranges of possible molding conditions.
As the shape to be molded, we selected an easily measurable shape for both the mold and the product, without considering actual product shapes to be manufactured in the future, because our objective was technological development of plastic injection molding. Taking into account a shape with which we can analyze the proportionality (resemblance relationship), we prepared the shape shown in Figure 1 by using a changeable mold. For reference, the die structure is shown in Figure 2. The measurement points are as follows:

Bottom stage: points A, B, C, D, E
Third stage: points F, G, H, I, J, K

Using a coordinate measuring machine, we read each set of x, y, and z coordinates for both the mold and the product. Then we defined the linear distance between every pair of measurement points in the mold as a signal factor.

As control factors, we selected from the factors confirmed in the preliminary experiment eight molding conditions believed to greatly affect moldability and dimensions. For the noise factor, we chose the number of shots, whose levels are the second and fifth shots. Table 2 summarizes all levels of the control and noise factors.

Total variation:

ST = y1² + y2² + ··· = 179,318.477815   (f = 110)   (1)
Linear equations:

L1 = (34.729)(34.668) + (60.114)(59.966) + ··· + (36.316)(35.877) = 90,229.763098
L2 = 90,265.630586   (2)

Effective divider:

r = 181,682.363768   (3)

Variation of proportional term:

Sβ = (1/r)(L1 + L2)²
   = (1/181,682.363768)(90,229.763098 + 90,265.630586)²
   = 179,316.178332   (f = 1)   (4)

Variation of differences between proportional terms:

SN×β = (1/r)(L1 − L2)²
     = (1/181,682.363768)(90,229.763098 − 90,265.630586)²
     = 0.007108   (f = 1)   (5)

Error variation:

Se = ST − Sβ − SN×β = 179,318.477815 − 179,316.178332 − 0.007108 = 2.292402   (f = 108)   (6)

Error variance:

Ve = Se/108 = 2.292402/108 = 0.021226   (7)

Total error variance including nonlinearity:

VN = (Se + SN×β)/109 = (2.292402 + 0.007108)/109 = 0.021096   (8)

Figure 1
Shape of molded product and measurements

Figure 2
Model mold used for injection molding of CFRP
Table 1
Signal factors

Planar distance on bottom stage:   M1 = AB, M2 = AC, ..., M10 = DE
Planar distance on third stage:    M11 = FG, M12 = FH, ..., M25 = JK
Vertical direction:                M31 = AF, M32 = AG, ..., M60 = EK
SN ratio:

η = 10 log [(1/r)(Sβ − Ve)/VN]
  = 10 log [(1/181,682.363768)(179,316.178332 − 0.021226)/0.021096]
  = 16.70 dB   (9)

Sensitivity:

S(β) = 10 log [(1/r)(Sβ − Ve)]
     = 10 log [(1/181,682.363768)(179,316.178332 − 0.021226)]
     = −0.0569 dB   (10)

Initial configuration: A1B2C2D2E2F2G2H2
Table 2
Control factors and levels

Control Factor                                   Level 1    Level 2    Level 3
A: cylinder temperature (°C)                     Low*       Mid        High
B: mold temperature (°C)                         Low        Mid*       High
C: nozzle temperature (°C)                       Low        Mid*       High
D: injection speed (%)                           5          8*         15
E: switching position of holding pressure (mm)   5          10*        15
F: holding pressure (%)                          70         85*        99
G: holding time (s)                              15         30*        45
H: cooling time (s)                              40         110*       180

Noise factor
N: number of shots                               Second shot    Fifth shot

* Initial conditions.
Table 3
Distance data of molded product for transformability (experiment 1) (mm)

Dimensions of mold (signal factor; number of levels: 55):
          M1        M2         M11       M12       M31       M60
Mold      34.729    60.114     14.447    25.018    28.541    36.316
N1        34.668    59.966     14.294    24.769    28.104    35.877
N2        34.672    59.970     14.302    24.780    28.137    35.885
Total     69.340    119.936    28.596    49.549    56.241    71.762
Figure 3
Response graphs (SN ratio and contraction rate for factors A–H; total average contraction rate = 0.9936)
The goal was to determine an optimal configuration where the shrinkage remains constant, thereby designing a mold that allows us to manufacture a product with high accuracy. Nevertheless, in actuality, we modified the dimensions of the mold through tuning processes after calculating the shrinkage for each portion. Table 7 shows the decomposition of Sportion, the variation caused by differences among portions. Only Se is treated as the error. For the calculation procedure, we rewrote the distance data obtained from the test piece prepared for the optimal configuration in Table 8. Based on these, we proceeded with the calculation as follows.
4. Analysis Procedure

The results summarized in Table 6 imply that the differences between the shrinkages generated in each pair of portions or directions lower the SN ratio of the whole. Next, we computed the SN ratio for each portion separately:

1. Planar bottom stage: analysis using planar distance dimensions on the bottom stage
2. Planar top stage: analysis using planar distance dimensions on the third stage
3. Vertical direction: analysis using distance dimensions between the bottom and third stages

Analysis of Planar Bottom Stage

Total variation:

ST1 = 34.705² + 34.697² + ··· + 34.690² = 57,884.917281   (f = 20)   (11)

Linear equation:

L1 = (34.729)(69.402) + (60.114)(120.149) + ··· + (34.729)(69.383) = 57,931.241963   (12)

Effective divider:

r1 = (2)(34.729² + 60.114² + ··· + 34.729²) = 57,977.605121   (13)

Variation of proportional term:

S1 = L1²/r1 = 57,931.241963²/57,977.605121 = 57,884.915880   (f = 1)   (14)

Error variation:

Se1 = ST1 − S1 = 57,884.917281 − 57,884.915880 = 0.001401   (f = 19)   (15)
Table 4
Results of confirmatory experiment (dB)

      SN Ratio, η         Sensitivity, S(β)
      H1       H2         H1         H2
E1    17.16    18.32      −0.0478    −0.0305
E2    18.76    17.88      −0.0345    −0.0340
E3    19.19    18.63      −0.0320    −0.0346
Error variance:

Ve1 = Se1/19 = 0.001401/19 = 0.0000738   (16)
Table 5
Results of estimation and confirmatory experiment (dB)

                SN Ratio                      Shrinkage
Configuration   Estimation    Confirmation    Estimation    Confirmation
Optimal         18.52         19.19           0.995         0.996
Initial         16.39         16.44           0.993         0.994
Gain            2.13          2.75
Shrinkage:

β1 = √[(1/r1)(S1 − Ve1)]
   = √[(1/57,977.605121)(57,884.915880 − 0.0000738)]
   = 0.999   (17)

Similar analyses are made for the planar top stage and the vertical direction.

Overall Analysis

Total variation:

ST = 180,847.541216   (f = 110)   (18)

Linear equation:

L = L1 + L2 + L3 = 57,931.241963 + 14,944.716946 + 108,637.586755 = 181,513.545664   (19)

Effective divider:

r = r1 + r2 + r3 = 57,977.605121 + 15,021.647989 + 109,183.504276 = 182,182.757386   (20)

Variation of proportional term:

Sβ = (1/r)(L1 + L2 + L3)² = 181,513.545664²/182,182.757386 = 180,846.792157   (f = 1)   (21)
Table 6
SN ratio and shrinkage for each portion of product

Portion                SN Ratio (dB)    Shrinkage
Planar bottom stage    41.31            0.999
Planar top stage       38.83            0.995
Vertical direction     31.01            0.995
Variation among portions:

Sportion = L1²/r1 + L2²/r2 + L3²/r3 − Sβ
         = 57,931.241963²/57,977.605121 + 14,944.716946²/15,021.647989
           + 108,637.586755²/109,183.504276 − 180,846.792157
         = 0.702439   (f = 2)   (22)
Table 7
ANOVA table separating portion-by-portion effects

Factor           f        S           V
β                1        Sβ
β1, β2, β3       2        Sportion
e                n − 3    Se          Ve
Total            n        ST
Error variation:

Se = ST − Sβ − Sportion
   = 180,847.541216 − 180,846.792157 − 0.702439
   = 0.046620   (f = 107)   (23)

Error variance:

Ve = Se/107 = 0.046620/107 = 0.0004357   (24)

SN ratio:

η = 10 log [(1/r)(Sβ − Ve)/Ve]
  = 10 log [(1/182,182.757386)(180,846.792157 − 0.0004357)/0.0004357]
  = 33.58 dB   (25)
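The overall SN ratio in equation (25) follows directly from the quoted aggregate sums; this one-line check (an illustration, not part of the original text) confirms it:

```python
import math

# Aggregate quantities quoted in the text for the optimal-configuration data.
S_beta = 180846.792157   # variation of proportional term
r = 182182.757386        # effective divider r1 + r2 + r3
Ve = 0.0004357           # error variance

eta = 10 * math.log10((S_beta - Ve) / (r * Ve))
print(round(eta, 2))  # 33.58
```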
Table 8
Distance data under optimal configuration

Planar bottom stage:
Signal (mold)    M1: 34.729    M2: 60.114    M10: 34.729
N1               34.705        60.075        34.693
N2               34.697        60.074        34.690
Total            69.402        120.149       69.383

Planar top stage:
Signal (mold)    M11: 14.447   M12: 25.016   M25: 14.450
N1               14.347        24.873        14.368
N2               14.357        24.887        14.364
Total            28.704        49.760        28.732

Vertical direction:
Signal (mold)    M31: 28.541   M32: 36.277   M60: 36.316
N1               28.313        36.032        36.082
N2               28.315        36.024        36.122
Total            56.628        72.056        72.204
Table 9
Results of confirmatory experiment (dB)

                                 Shrinkage
Configuration   SN Ratio (dB)    Planar Bottom Stage    Planar Top Stage    Vertical Direction
Optimal         33.58            0.999                  0.995               0.995
Initial         28.19            0.999                  0.993               0.993
Gain            5.39
Reference
Tamkai Asakawa and Kenzo Ueno, 1992. Technology Development for Transformability. Quality Engineering Application Series. Tokyo: Japanese Standards Association, pp. 61–82.
CASE 47
1. Introduction
1. Another part not shown pushes here in the arrow's direction
2. Starts to rotate
3. Starts to open
4. Contacts a stopper and
starts to close
Figure 1
Shutter mechanism
Total variation:

ST = y11² + y12² + ··· + y35²   (f = 15)   (1)

Effective divider:

r = M1² + M2² + ··· + M5²   (2)

Linear equations:

L1 = M1y11 + M2y12 + ··· + M5y15, and similarly L2 and L3 for noise conditions N2 and N3   (3)
(2)
(3)
Figure 3
Data conversion of shutter angle
Figure 4
Input / output relationship in method 1
Figure 5
Input / output relationship in method 2
Table 1
Factors and levels

Control Factor         Level 1                  Level 2    Level 3
A: part A's weight     0.5                      Current    –
B: part B's weight     0.5                      Current    2.0
C:                     Current                  Outside    More outside
D: spring constant     Current                  1.3        1.5
E:                     –                        –          –
F:                     On gravity center path   Current    –
G: part A's stroke     Short                    Current    Long
H: motion input        Weak                     Current    Strong
Table 2
Data format

No.    Control Factors (A–H)   Noise Condition                  T1     T2     T3     T4     T5
1–18   (L18 assignment)        N1: standard condition           y11    y12    y13    y14    y15
                               N2: positive-side condition      y21    y22    y23    y24    y25
                               N3: negative-side condition      y31    y32    y33    y34    y35

Figure 6
Response graphs of method 1
Table 3
SN ratio in confirmatory experiment (dB)

Condition   Estimation    Confirmation
Current     53.26         51.72
Optimal     63.08         56.07
Gain        9.82          4.35
Sβ = (L1 + L2 + L3)²/3r   (f = 1)   (4)

SN×β = (L1² + L2² + L3²)/r − Sβ   (f = 2)   (5)

Se = ST − Sβ − SN×β   (f = 12)   (6)

Ve = Se/12   (7)

VN = (Se + SN×β)/14   (8)

η = 10 log [(1/3r)(Sβ − Ve)/VN]   (9)

The optimal configuration is A2B2C1D1E1F3G2H2.
Figure 7
Angle change using method 1
Setup
We defined the simulation output (cumulative value of the sector's motion angle) under standard conditions and the simulation output under noise conditions as the output value, y (Table 4).
Figure 8
Angle under optimal condition in method 1
Standard SN Ratio
The calculations are as follows:

ST = y11² + y12² + ··· + y25²   (f = 10)   (10)

r = M1² + M2² + ··· + M5²   (11)

Sβ = (L1 + L2)²/2r   (f = 1)   (12)

SN×β = (L1² + L2²)/r − Sβ   (f = 1)   (13)

Se = ST − Sβ − SN×β   (f = 8)   (14)

Ve = Se/8   (15)

VN = (Se + SN×β)/9   (16)

η = 10 log [(1/2r)(Sβ − Ve)/VN]   (17)
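The standard-SN-ratio procedure can be sketched end to end; the numbers below are purely illustrative (not data from this study), with the standard-condition output serving as the signal M:

```python
import math

# Standard SN ratio of method 2: the output under the standard condition is
# the signal M; y1 and y2 are the outputs under the positive- and
# negative-side noise conditions. Illustrative numbers only.
M  = [10.0, 20.0, 30.0, 40.0, 50.0]   # standard-condition output
y1 = [10.2, 20.3, 30.9, 40.8, 51.0]   # positive-side condition
y2 = [ 9.7, 19.6, 29.4, 39.2, 49.1]   # negative-side condition

ST = sum(v * v for v in y1 + y2)                  # total variation, f = 10
r = sum(m * m for m in M)                         # effective divider
L1 = sum(m * v for m, v in zip(M, y1))
L2 = sum(m * v for m, v in zip(M, y2))
S_beta = (L1 + L2) ** 2 / (2 * r)                 # proportional term, f = 1
S_Nbeta = (L1**2 + L2**2) / r - S_beta            # N x beta variation, f = 1
Se = ST - S_beta - S_Nbeta                        # error variation, f = 8
Ve = Se / 8
VN = (Se + S_Nbeta) / 9
eta = 10 * math.log10((S_beta - Ve) / (2 * r) / VN)
print(round(eta, 2))
```

A higher η means the output tracks the standard-condition behavior closely even under the noise conditions.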
Table 4
Data format

No.    Signal (output under N, the standard condition)    M1     M2     M3     M4     M5
1–18   Output under N1 (positive-side condition)          y11    y12    y13    y14    y15
       Output under N2 (negative-side condition)          y21    y22    y23    y24    y25
Figure 9: Response graphs of method 2 (optimal condition: A2B2C1D1E1F3G2H2)
Table 5: SN ratio in confirmatory experiment (dB)

Condition       Current    Optimal    Gain
Estimation      −13.49      6.49      19.98
Confirmation    −20.36     −7.65      12.71
Figure 10: Angle change under optimal condition using method 2
Reference
Shuri Mizoguchi, 2001. Optimization of parameters in shutter mechanism designing. Proceedings of the 9th Quality Engineering Symposium, pp. 230–233.
CASE 48
Abstract: During the development of a new clutch disk damping system and
after three months of hard work, the engineering team got stuck with unsatisfactory system performance (rattle noise) despite the help of computer simulation software. The only alternative seemed to be using a more expensive damping concept, but that option would increase part cost by $200 plus
development cost. The team then decided to try Taguchis robust engineering
optimization approach. In just two weeks we got a gain of 4.6 dB for the SN
ratio. That allowed us to keep the system proposed initially, thus avoiding
the cost increase and keeping the development schedule on track. Perhaps
even more important, we can now apply our engineering knowledge more
effectively and make better use of the computer simulator, which results in
a faster and more reliable development process.
1. Introduction
More and more attention is being paid to audible
noise caused by torsional vibrations in the engine–clutch–transmission–rear axle system (the drive
train). On the one hand, modern automotive design is producing stronger mechanical excitation
due to more powerful engines; on the other hand,
transmissions that have been optimized for weight
and efficiency are highly sensitive to excitation. This
situation brings a number of comfort problems, including rattle noise.
Many sources of excitation must be considered
related to torsional vibrations. The main source is
engine irregularity resulting from the ignition cycle.
This basic excitation in the motor vehicle drive train
is the focus of this study. Irregular ignition or even
misfiring can also lead to serious vibration problems, but those factors were not included in this
work because they can usually be eliminated by optimizing the ignition itself. The main source of excitation for torsional vibrations is the fact that
Figure 1: Engine excitation due to gas forces
Figure 2: Analytical model for vehicle drive train vibrations
Figure 3: Torsion damper characteristic curve in analytical model
2. Experimental Procedure
Scope of Experiment
Figure 4 shows the engineering system under study.
Its parts are a spring set and friction components. The system boundaries are the engine flywheel at the input and the transmission input shaft at the output.
System Function
The objective function is to minimize the angular
variation from the engine while transferring torque
through the system. The basic function (work) is to
Figure 4: Damping system
Figure 5: P-diagram for the engineering system (input: engine angular variation; output: attenuated angular variation; control factors and noise factors act on the system)
Figure 6: Ideal function (signal levels M1–M4)
Figure 7: Example of simulator response
4. Results
The SN results (in decibels) are:
Initial configuration:      −23.46
Predicted optimum:          −18.58
Confirmed by simulation:    −18.90
Gain:                         4.56

Table 1: Example of response for each experimental run

Run    Signal (rpm)    Response, y (units/min)
25     M7: 1050        16.5
27     M6: 1000        18
28     M5: 950         17.5
31     M4: 900         16
As expected, due to the good correlation between the simulator and reality, the SN gain was
confirmed subsequently in vehicle testing. Figures 8
and 9 show the positive impact obtained in the
damping system performance by comparing the initial situation (Figure 8) with the optimized design
(Figure 9).
Table 2: Compound noise (K, elastic constant)

Noise Factor             N1 (y high)    N2 (y low)
Variation of K_spring    Min.           Max.
Variation of K_vehicle   Min.           Max.
5. Conclusions
In the process of conventional design using simulation, the control factors were being analyzed separately.
Table 3: Control factors and levels

A: spring's K: 4.6, 9.2, 13.8, 18.4, 23, 27.6 (six levels)
B: angle (deg) at which second friction starts
C: second friction (N)
D: first friction (% of C): 25, 50, 75
(remaining level values for B and C include 14, 15, and 27)
Table 4: L18 orthogonal array and responses under noise extremes. Each of the 18 runs combines a level of A (spring K: 4.6, 9.2, 13.8, 18.4, 23, or 27.6; three runs per level) with levels of B, C, and D, and records ten response values y under the compound noise extremes N1 and N2 (run 1, for example: 120, 95, 60, 37, 36, 26, 22.5, 22.5, 21, 21).
Table 5: Control factors and calculated values of SN and average (smaller-the-better); A is assigned to column 1, B to column 3, C to column 4, and D to column 5 of the L18

No.    SN (dB)    Average
1      −34.66     42.93
2      −29.85     29.23
3      −28.30     25.15
4      −33.52     38.70
5      −29.83     29.10
6      −28.07     24.48
7      −28.40     24.00
8      −27.62     23.13
9      −28.52     19.43
10     −27.06     21.58
11     −17.50      6.65
12     −25.51     17.93
13     −25.82     18.65
14     −26.76     20.80
15     −17.30      6.35
16     −27.04     21.58
17     −19.51      8.33
18     −19.85      8.75

Grand averages: SN = −26.40 dB; average = 21.48
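The SN column is the smaller-the-better ratio of the ten responses per run, η = −10 log10(mean of y²). A minimal check in Python, using run 1's responses as read from the flattened Table 4 (the exact values are only as reliable as the extraction, so the result is close to, not identical with, the tabulated −34.66 dB):

```python
import math

def smaller_the_better_sn(y):
    """Smaller-the-better SN ratio: eta = -10 log10(mean of y^2), in dB."""
    ms = sum(v * v for v in y) / len(y)
    return -10 * math.log10(ms)

# Run 1 responses under the compounded noise extremes (illustrative):
y_run1 = [120, 95, 60, 37, 36, 26, 22.5, 22.5, 21, 21]
eta = smaller_the_better_sn(y_run1)   # about -35.1 dB
avg = sum(y_run1) / len(y_run1)       # about 46.1
```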
Table 6: Response table for the SN ratio (dB)

Level    A         B         C         D
1        −30.94    −29.42    −25.17    −25.87
2        −30.47    −25.18    −26.54    −26.81
3        −28.18    −24.59    −27.48    −26.51
4        −23.35
5        −23.29
6        −22.13
Diff.      8.80      4.82      2.31      0.94

Table 7: Response table for the mean

Level    A        B        C        D
1        32.43    27.90    20.40    21.27
2        30.76    19.54    21.28    22.82
3        22.18    17.01    22.78    20.37
4        15.38
5        15.27
6        12.88
Diff.    19.55    10.89     2.39     2.45
Figure 8: Damping system performance before study
Figure 9: Damping system performance after study
CASE 49
1. Introduction
The common rail direct-injection diesel fuel system
is an important technology for Delphi. For this reason, a product and process engineering team, dedicated to the design and implementation of the best
possible common rail system with respect to both
product and process, was set up at the European
Technical Center. The direct-injection diesel common rail system comprises the following core
components of the engine management system:
Injectors
High-pressure pump
Fuel rail and tubes
High-pressure sensor
Electronic control module
Pressure control system
The main challenge with diesel common rail systems as opposed to indirect diesel injection systems
is the continuous high operating pressures. Current
Figure 1: Injector and key variability sources (components: solenoid valve; bleed orifice and hydraulic valve; high-pressure connection; feed orifice in control piston; edge filter; control piston; needle; fuel return. Variability sources: environment, customer use, materials, manufacturing and assembly. Performance issue: too much variability.)
Figure 2: Optimization process (Parts I–V: scope/objective; ideal function with input signal M and output response y; data analysis and two-step optimization; results of parameter design and confirmation run; comparison of dynamic versus traditional; conclusions)
Figure 3: Modeled system (connection line, common rail, DDI injector)
Figure 4: Injector (return line; connection to the common rail; control-piston area Ac; needle; feed line to nozzle; needle area AN; nozzle volume at pressure PN; nozzle variable orifice)
Figure 5: Mathematical model (input: rail pressure; the control valve model yields the magnetic force and armature motion, feeding the hydraulic and mechanical models, which produce the outputs)
Model outputs include time variation of the following elements:
Magnetic force
Fuel pressure and velocity in lines
Flows through orifices
Displacement of moving parts
Approach to Optimization
The difficulty in building and then optimizing such
a model is that some features have a very strong
effect on the results. Imprecision about their model
representation can affect results in an unacceptable
way. Experimental investigations have been used to
thoroughly study the characteristics of internal flows.
Figure 6: Quantity injected versus pulse width for various rail pressure levels
Results
Figure 7 shows the correlation between model and
experiment for quantity injected, for rail pressures
and pulse widths covering the full range of injector
operation, and for three different injectors with different settings of control factor values. The agreement between model and hardware is good over the
full range of operation. Figure 7 is the basis of the
high confidence level we have in the model outputs.
3. Parameter Design
Ideal Function
The direct-injection diesel injector has several performance characteristics:
Figure 7: Comparison of model output versus experimental data from the test rig (quantity injected, 0–60 mm³) for various rail pressure, pulse width, and injector control factor settings (reference; control factors A, B, G, E, and F changed; control factors A, B, G, E, and I changed)
Figure 8: Ideal function (actual quantity y versus quantity desired, M, in mL, with slope β = 1, the 45° line; recirculated quantity also indicated)
Table 1: Noise factors. Each factor's level variation (magnitudes from ±0.6 to ±30, depending on the factor) is compounded into two noise levels, N1 and N2, according to its relationship with quantity injected; the rationale for each is manufacturing variation (Mfg.), aging (Age), or both.
Table 2: Control factors and levels (%). Levels are percent deviations (for example, ±16.6, ±11.1, ±28.6, ±18.5, and ±2.3) from a reference value X; the current design level is marked for each factor. An L27(3^13) orthogonal array was used to perform the experiment, with two columns unused (see Figure 9).
(Signal: quantity desired, M = 2, 5, 15, 25, and 40 mL; response: actual quantity, y; ideal function y = βM with β = 1.)
Table 3: Experimental layout. Control factor array L27 (columns A–K; columns 12 and 13 unused), each run evaluated at the signals 2, 5, 15, 25, and 40 mL under compounded noise levels N1 and N2, with an S/N ratio per run.

No.    A B C D E F G H I J K
1      1 1 1 1 1 1 1 1 1 1 1
2      1 1 1 1 2 2 2 2 2 2 2
3      1 1 1 1 3 3 3 3 3 3 3
…
25     3 3 2 1 1 3 2 3 2 1 2
26     3 3 2 1 2 1 3 1 3 2 3
27     3 3 2 1 3 2 1 2 1 3 1
The SN ratio for each run was computed as η = 10 log [(Sβ − Ve)/(rVe)].
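A minimal sketch of that per-run computation in Python (a hedged reading of the formula fragment; the signal values follow the 2–40 mL levels, and the response data are invented for illustration):

```python
import math

def dynamic_sn_ratio(M, y):
    """Zero-point proportional SN ratio, eta = 10 log10[(S_beta - Ve)/(r * Ve)],
    for one response y_i per signal M_i. (The handbook layout pairs each
    signal with noise levels N1/N2, which simply doubles the rows.)"""
    n = len(M)
    r = sum(m * m for m in M)                    # effective divider
    L = sum(m * v for m, v in zip(M, y))         # linear equation
    ST = sum(v * v for v in y)                   # total variation
    S_beta = L * L / r                           # proportional term
    Ve = (ST - S_beta) / (n - 1)                 # error variance
    return 10 * math.log10((S_beta - Ve) / (r * Ve))
```

Responses close to the ideal y = M give a large ratio; scatter around the proportional line drives it down.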
Figure 10: SN ratio and sensitivity plots (level averages for factors A–K; the SN ratio axis spans roughly −21.4 to −18.6 dB and the sensitivity axis roughly 0.885 to 0.95)
Table 4: Two-step optimization

Factor            A    B    C    D    E    F    G    H    I    J    K
Initial design    A2   B1   C1   D1   E3   F2   G3   H2   I3   J3   K3
Step 1 (max η)    A3   B2   C3   D3   E?   F2   G1   H3   I1   J1   K3
Step 2 (β → 1)    A3   B2   C3   D3   E1   F2   G1   H3   I1   J1   K3

E is set at step 2 (E1) to bring the slope β toward 1 while keeping Z1 at its minimum.

             SN (dB)      β (slope)
Initial      −20.7118     0.8941
Optimized    −15.0439     1.0062
Gain           5.66       0.1121
4. Tolerance Design
Table 5: Prediction. Predicted gain: 5.66 dB in the SN ratio and 0.1121 in the slope β.
Table 6: Model confirmation

           Predicted              Model-Confirmed        Hardware
Design     SN (dB)     β          SN (dB)     β          SN / β
Initial    −20.7118    0.8941     −20.2644    0.9150     Ongoing
Optimum    −15.0439    1.0062     −15.8048    0.9752     Ongoing
Gain         5.66      0.1121       4.46      0.06
(Figures: quantity injected versus quantity desired, 0–40 mL, for the initial design, SN = −20.2644 dB and slope = 0.9150, and for the optimized design.)
Dynamic Loss Function. The current loss distribution is shown in Table 16 and the dynamic tolerance design in Figure 20. The dynamic characteristic is the slope β; the analysis is the same as signal-point-by-signal-point tolerance design, with β as the response and ρ% the percent contribution of each design parameter to the variability.
Table 7: Slope specifications

Quantity      Minimum Acceptable (at −3σ)    Target    Maximum Acceptable (at +3σ)
25 mL         21.5                           25        28.5
40 mL         36.5                           40        43.5
β (slope)     0.945                                    1.054
(Figure: quantity injected versus quantity desired, 0–50 mL, showing the minimum acceptable, target, and maximum acceptable quantity lines.)
z x /2
e
dx
(z) = [1/(2
)2]
2
[z = (x )/]
FTQ
min
Z
Figure 13
FTQ representation
Z1
1 max
Z2
Case 49
First-Time Quality (%)
998
100
80
60
40
20
0
0
0.2
0.4
0.6
0.8
1.2
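Under a normal assumption on the slope, the FTQ integral reduces to the standard normal CDF, and the Table 8 percentages can be reproduced directly (specification limits taken from Table 7; `phi` and `ftq` are illustrative helper names):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ftq(sigma, mean=1.0, lo=0.945, hi=1.054):
    """First-time quality: fraction of slopes falling inside the
    specification [lo, hi], assuming the slope ~ N(mean, sigma)."""
    z1 = (lo - mean) / sigma
    z2 = (hi - mean) / sigma
    return phi(z2) - phi(z1)

print(round(100 * ftq(0.10)))  # 41  (matches Table 8)
print(round(100 * ftq(0.05)))  # 72  (matches Table 8)
```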
Table 8: End-of-line FTQ as a function of the standard deviation σ of the slope. For σ = 0.10 (optimized: 0.076): FTQ 41%, 52%, 11, 26; for σ = 0.05 (optimized: 0.038): FTQ 72%, 84%, 12, 16.
(Figure: quantity injected versus quantity desired after tolerance design, with the minimum acceptable, target, and maximum acceptable quantity lines.)
Table 9: Factors and levels for tolerance design. For each factor, level 1 = X − √(3/2)σ, level 2 = X (the optimized level), and level 3 = X + √(3/2)σ, where σ is the current tolerance of the design parameter. Current tolerances (%): 3.810, 3.704, 2.381, 1.333, 0.010, 0.004, 0.002, 2.667, 6.250, 0.002, 0.114, 0.022, 2.632.
Table 10: Tolerance design experimental layout (tolerance factors over runs 1–27, each evaluated at the signals 2, 5, 15, 25, and 40 mL)
Table 11: ANOVA for signal point 1. For each source: degrees of freedom, S, V, F, S′, and percent contribution. The dominant contributions are 21.30% and 20.93%, followed by 12.63%, 10.61%, and 9.53%; smaller sources contribute 4.50%, 2.83%, 2.25%, 1.79%, and 1.68%, and three sources were pooled into the error term, (e), which accounts for 10.95%. Total: 26 d.f., S = 19.6087.
60
Signal point 1 = 2 mL
40
Signal point 2 = 5 mL
Signal point 3 = 15 mL
20
Signal point 4 = 25 mL
Signal point 5 = 40 mL
0
A
Design Factor
Figure 16
Percent contribution of design factors to variability per signal point
Table 12: Percent contribution ρ of design factors to variability (signal point 1: 2 mL desired), with the current tolerance of each factor and its share Lcp = Ltc × ρ of the current total loss Ltc = $6.9235C. The contributions are 12.63, 21.30, 10.61, 3.83, 4.5, 2.25, 1.79, 9.53, 20.93, and 1.68%, giving losses of $0.8744C, $1.4747C, $0.7346C, $0.2652C, $0.3116C, $0.1558C, $0.1239C, $0.6598C, $1.4491C, and $0.1163C, respectively. Δ0, specification on β: 0.02 at 1σ; VT, current total variance from ANOVA: 0.0012.
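The per-factor losses in Table 12 are just the total loss prorated by percent contribution; a quick check (ρ values from the table; the cost is expressed in units of the unstated constant C):

```python
# Apportion the current total tolerance loss among design parameters in
# proportion to their percent contributions (a subset of Table 12).
Ltc = 6.9235                                 # current total loss, $C
rho = [12.63, 21.30, 10.61, 9.53, 20.93]     # percent contributions
Lcp = [Ltc * p / 100 for p in rho]           # per-parameter loss, $C
# Lcp[0] is about 0.8744 and Lcp[4] about 1.4491, matching the table.
```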
(Figure: quadratic loss, in dollars, versus response y, rising from zero at the target to A0 at the tolerance limits.)
Table 13: Summary of current total loss for some signal points. 2 mL: $6.9235C; 25 mL: $0.5541C; 40 mL: $0.5541C.
Figure 18: Chart to support manufacturing tolerance decision making (signal point 1: 2 mL), plotting loss against tolerance change cost for factors B and L (highest), then A, D, K, G, F, I, M, and J
5. Conclusions
A gain of 4.46 dB in the SN ratio was realized in the
modeled performance of the direct-injection diesel
injector. This gain represents a reduction of about
Table 14: Experimental layout for dynamic tolerance design (tolerance factors over runs 1–27, each run evaluated twice)
Table 15: Dynamic tolerance design ANOVA. For each source: degrees of freedom, S, V, F, S′, and percent contribution. The dominant contributions are 35.50%, 19.73%, 13.39%, 10.82%, 10.39%, and 8.17%, with 1.99% from one small source; several sources were pooled into the error term (14 d.f., S = 0.0023). Total: 26 d.f., S = 0.0320.
Figure 19: Percent contribution of design factors A″ through K″ on the slope (dynamic tolerance design)
Table 16: Total current loss distribution among parameters

ρ (%)     Lcp ($C)
35.50     1.065
 1.99     0.0597
19.73     0.5919
 8.17     0.2451
10.82     0.3246
K, L      0
10.39     0.3117

(Lcp = ρ × a total current loss of $3.00C.)
Several charts were developed to support manufacturing process tolerance decision making. As information on tolerance upgrade or degrade cost is
made available, the charts will be used to reduce
cost further by considering larger tolerances where
appropriate.
Recommendations
We recommend the use of dynamic tolerance design for dynamic responses. Optimization at individual signal points may not always lead to a clear and
consistent design recommendation. This case study
proposes a solution for a nominal-the-best situation
for tolerance design. A standardized approach
needs to be developed to cover dynamic tolerance
design.
Hardware cost is one of the roadblocks to Taguchi robust design implementation. The development of good simulation models will help overcome
this roadblock while increasing the amount of good
information that one can extract from the projects.
With a good model, we can conduct parameter design and tolerance design with a great number of
control factors. We are also able to simulate control
factor manufacturing variation as noise factors. The
ability to consider manufacturing variation is an important advantage compared to full hardware-based
robust engineering. Hardware confirmation should
be conducted to ascertain the results and should
not lead to a surprise if a good robustness level is
achieved up front on the simulation model.
Acknowledgments We would like to acknowledge
Dr. Jean Botti, Dr. Martin Knopf, Aparicio Gomez,
Mike Seino, Carilee Cole, Giulio Ricci from Delphi
Automotive Systems and Shin Taguchi and Alan Wu
from the American Supplier Institute for their technical support.
CASE 50
1. Introduction
Figure 1 shows the structure of a disk blade mobile cutter. We move a carriage with a disk blade along a fixed blade with an edge on one side by using a motor and cut a label placed on the fixed blade. The carriage has a shaft for rotating the disk blade and a compressed spring to press the disk blade onto the fixed blade.
A good label cutter is considered to be one that can cut a label with less energy regardless of the width or material of the label. Therefore, the generic function of label cutting is defined as proportionality between the cutting resistance of a cutter and the width of a cutout label within a cutting range, in accordance with the edge's movement. In other words, if cutting length and resistance are set as an input signal, M, and an output, y, respectively, the equation y = βM holds true. In addition, the sensitivity, β, should be smaller.

For measurement of cutting resistance, the load imposed on the carriage from start to end of cutting is measured in the cutting direction (X-direction in Figure 1) and in the direction perpendicular to it (Y-direction in Figure 1). As shown in Figure 2, the cutting resistance when the carriage cuts a label is calculated by subtracting the load with no label from the total load and integrating the difference over the range of the label width. Then, by measuring both cutting resistances X and Y in the cutting and perpendicular directions, we define y (equation (1)).
As noise factors, we chose edge deterioration, N, and carriage moving direction, N′. For the former, we deteriorated the edge at an accelerated rate by cutting polish tape with it. Additionally, as an indicative factor, we selected P1, paper that is difficult to cut due to its thickness, and P2, cloth that is difficult to cut due to its softness.
2. SN Ratio
Table 1 shows the data of cutting resistance for row
1 in the L18 orthogonal array. The following is a
calculation method for the SN ratio and sensitivity.
Total variation:

ST = 7.71² + 9.47² + ⋯ + 24.44² = 3274.8  (f = 12)  (2)

Effective divider:

r = 30² + 67² + 100² = 15,389  (3)

Linear equations:
Figure 1: Structure of a disk blade mobile cutter
L1 = (30)(7.71) + (67)(15.21) + (100)(21.69), L2 = 3422.28, L3 = 3583.16, L4 = 3732.99  (4)

Sβ = (L1 + L2 + L3 + L4)²/(4r) = 3256.55  (f = 1)  (5)

Figure 2: Cutting resistance generated during carriage movement

SN×β = [(L1 + L2)² + (L3 + L4)²]/(2r) − Sβ = 3.65  (f = 1)  (6)
Table 1: Cutting resistance data for row 1 in the L18 orthogonal array (M, signal factor; N, edge deterioration; N′, carriage movement direction; P1, thick paper; P2, cloth)

                M1 (30 mm)    M2 (67 mm)    M3 (100 mm)    Linear Equation
P1  N1  N′1      7.71          15.21         21.69          L1
        N′2      9.47          14.52         21.65          L2
    N2  N′1      8.10          15.28         23.16          L3
        N′2      8.67          15.35         24.44          L4
P2  N1  N′1      1.01           1.31          1.88          L5
        N′2      2.11           1.44          1.31          L6
    N2  N′1     11.69          18.10         25.83          L7
        N′2     12.96          19.52         29.64          L8
Figure 3: Response graphs (level averages for factors A1–H3, SN ratio and sensitivity)
Table 2: Control factors and levels

Control Factor       Level 1            Level 2            Level 3
A:                   A1 (current)       A2
B:                   Small (current)    Mid                Large
C:                   Large              Mid                Small
D:                   D1 (current)       D2                 D3
E:                   E1                 E2                 E3
F:                   Low                Same as current    High
G: pressing force    Small              Mid                Large
H:                   H1 (current)       H2                 H3
Table 3: Confirmatory experimental results (dB)

             SN Ratio                     Sensitivity
Condition    Estimation   Confirmation   Estimation   Confirmation
Optimal      −45.26       −37.94         −47.89       −48.60
Current      −54.06       −52.86         −26.80       −31.67
Gain           8.80        14.92          21.09        16.93

Figure 4: Measured data in confirmatory experiment
Table 4: Confirmatory experimental results. Number of cuts of polish tape (50, 100, 150, 200, 250) until the cutter becomes unable to cut standard cloth, under the current and optimal conditions.

SN′×β = [(L1 + L3)² + (L2 + L4)²]/(2r) − Sβ = 0.38  (f = 1)  (7)

Error variation:

Se = ST − Sβ − SN×β − SN′×β = 14.30  (f = 9)  (8)

Error variance:

Ve = Se/9 = 1.59  (9)

Total error variance:

VN = (SN×β + SN′×β + Se)/11 = 1.67  (10)

SN ratio:

η = 10 log [(1/4r)(Sβ − Ve)/VN] = −14.98 dB  (11)

Sensitivity:

S = 10 log (1/4r)(Sβ − Ve) = −12.77 dB  (12)
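The whole chain from equation (2) to (12) can be checked numerically from the P1 (thick paper) block of Table 1. The noise groupings below (N over rows L1, L2 versus L3, L4; N′ over L1, L3 versus L2, L4) reproduce the quoted 3.65, 0.38, −14.98 dB, and −12.77 dB to within data rounding:

```python
import math

# Cutting-resistance data for P1 from Table 1: four noise combinations
# (N x N') across the signals M = 30, 67, 100 mm.
M = [30, 67, 100]
y = [[7.71, 15.21, 21.69],   # L1: N1, N'1
     [9.47, 14.52, 21.65],   # L2: N1, N'2
     [8.10, 15.28, 23.16],   # L3: N2, N'1
     [8.67, 15.35, 24.44]]   # L4: N2, N'2

ST = sum(v * v for row in y for v in row)               # (2): ~3274
r = sum(m * m for m in M)                               # (3): 15,389
L = [sum(m * v for m, v in zip(M, row)) for row in y]   # (4)
S_beta = sum(L) ** 2 / (4 * r)                          # (5): ~3256
S_Nb = ((L[0] + L[1]) ** 2 + (L[2] + L[3]) ** 2) / (2 * r) - S_beta   # (6): ~3.65
S_Npb = ((L[0] + L[2]) ** 2 + (L[1] + L[3]) ** 2) / (2 * r) - S_beta  # (7): ~0.38
Se = ST - S_beta - S_Nb - S_Npb                         # (8): ~14.3
Ve = Se / 9                                             # (9): ~1.59
VN = (S_Nb + S_Npb + Se) / 11                           # (10): ~1.67
eta = 10 * math.log10((S_beta - Ve) / (4 * r) / VN)     # (11): ~ -14.98 dB
S = 10 * math.log10((S_beta - Ve) / (4 * r))            # (12): ~ -12.77 dB
```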
Table 5: Evaluation of cutting performance for thick paper, standard cloth, soft cloth, and extremely soft film under the current and optimal conditions (noise levels N1 and N2)
to be cut, and increase versatility of a cutter under
the optimal condition.
In this study we attempted initially to consider
the function as input/output of energy, as we studied in machining. However, we could not grasp the
difference in a motor's power consumption when a
cutter is moved without and with cutting. Then we
gave up an experiment based on input/output of
energy and conducted an experiment using cutting
resistance as output. As a consequence, we succeeded in optimizing each parameter in the cutter.
Reference
Genji Oshino, Isamu Suzuki, Motohisa Ono, and
Hideyuki Morita, 2001. Robustness of circle-blade
cutter. Quality Engineering, Vol. 9, No. 1, pp. 37–44.
CASE 51
Abstract: In the case where a capstan and head cylinder rotate and a tape
travels in an ideal manner under a proper condition, the output waveform of
a D-VHS tape is determined theoretically. Thus, we supposed that we could
evaluate the travel stability of a tape by assessing this difference between
ideal and actual waveforms. First focusing on time assigned to the horizontal
axis of the output waveform plot, we considered the relationship between the
time the output reaches the peak point and the ideal time to reach as the
generic function.
1. Introduction
What controls travel of a VTR tape is a pinch roller
and capstan (Figure 1). Therefore, the generic
function is regarded as travel distance of a tape for
the number of revolutions of a capstan (Figure 2).
However, since the travel stability of the tape used
for this study is a few dozen micrometers and
too small compared to the travel distance, we cannot evaluate it. By focusing on a shorter time interval, we considered some generic functions that can
be used to evaluate stability at a minute level of a
few dozen micrometers. Finally, we selected output
waveform during tape playing.
Figure 3 shows the relationship between a signal
track and head when a tape travels, and Figure 4
represents the output waveform. If a head traces a
record track, output proportional to an area traced
is produced.
When a capstan and head cylinder rotate and a
tape travels in an ideal manner under proper conditions, the output waveform is determined theoretically. Thus, we supposed that we could evaluate
the travel stability of a tape by assessing this difference between ideal and actual waveforms.
First focusing on time assigned to the horizontal
axis of the output waveform plot, we considered the
relationship between the time the output reaches
the peak point and the ideal time to reach as the generic function.
2. SN Ratio
We show the data for experiment 4 as an example
in Table 2.
Figure 1: VTR mechanism
Figure 2: Generic function
Figure 3: Relationship between signal track and head
Figure 4: Output waveform during tape traveling

Linear equations:

L1 = (32.1)(25) + ⋯ + (320.5)(308)  (3)

Total variation:

ST = y1² + y2² + ⋯  (f = 4000)  (1)

Effective divider:

r = 377,307.7  (2)

Sβ = (L1 + L2 + ⋯ + L400)²/(400r) = 153,671,134.7  (f = 1)  (4)

SO = 19,468  (f = 1)  (5)

SP = 1964  (f = 1)  (6)

SQ = 5584  (f = 99)  (7)

Se = ST − Sβ − SO − SP − SQ = 53,309  (f = 3898)  (8)

Error variance:

Ve = Se/3898 = 13.676  (9)

Magnitude of noise:

VN = (SO + SP + SQ + Se)/3999 = 20.086  (10)

SN ratio:

η = 10 log [(1/400r)(Sβ − Ve)/VN] = −13.16 dB  (11)

Sensitivity:

S = 10 log (1/400r)(Sβ − Ve) = 0.126 dB  (12)
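The variance pooling in equations (9) and (10) can be verified directly from the quoted variations (the values below are taken from the text; only the pooling arithmetic is checked here):

```python
# Pool the noise-related variations for the D-VHS tape-travel analysis.
S_O, S_P, S_Q, Se = 19_468, 1_964, 5_584, 53_309

Ve = Se / 3898                         # (9): error variance, ~13.68
VN = (S_O + S_P + S_Q + Se) / 3999     # (10): magnitude of noise, ~20.09
```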
Table 1: Control factors and levels

Control Factor    Level 1    Level 2    Level 3
A:                Low        Current
B:                Low        Current    High
C:                Low        Current    High
D:                Low        Current    High
E:                Small      Current    Large
F:                Small      Current    Large
G:                Small      Current    Large
H:                Small      Current    Large
optimal condition (for experiments 1 and 8, we obtained no data because of the serious damage of a
tape). This estimation showed the same results as
the case using no-damage data only.
To confirm the effect under the optimal condition, we conducted a confirmatory experiment. The
result is shown in Table 4 (for sensitivity, we do not
use the data as judging criteria because there is no
significant difference around 0 dB). From this we
4. Reproducibility
Control factor D, which is likely to have an interaction, has to be determined by the level of control
Table 2: Raw data for experiment 4 of the L18 orthogonal array, in bits (1 bit = 4 × 10⁻⁵ s). Error factors: tape (O), head (P1, P2), and position (Q, start to end; 100 measurements each). For each of the signals M1 = 32.1 through M10 = 320.5, the four measurement rows give linear equations L1–L100, L101–L200, L201–L300, and L301–L400; sample readings run from 25–32 bits at M1 to 308–323 bits at M10.
Table 3: SN ratio of control factor levels for tape traveling and damage data. NG marks levels for which no data were obtained because of tape damage; the remaining level averages lie between 12.96 and 14.23 dB.
As a reference, we calculated the eventual economic effect. The loss is expressed by L = (A/Δ²)σ², where A is the loss when the travel exceeds a tolerance, Δ the tolerance, and σ² the variance. As an example, we supposed that when the travel exceeds Δ = 30 μm, the VTR cannot display any picture; as a result, the customer incurs A = 10,000 yen of repair cost.

Taking into account the fact that the difference in travel between the current and optimal conditions is approximately 2.4 μm, we obtained the following economic effect per unit:

Lcurrent − Lopt = (A/Δ²)(σ²current − σ²opt) = (10,000/30²)(2.4²) = 64 yen  (13)
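The 64-yen figure follows directly from the quadratic loss function with the stated tolerance and repair cost:

```python
# Quadratic loss: L = (A / Delta^2) * sigma^2, with A = 10,000 yen of
# repair cost at the tolerance Delta = 30 um. The per-unit benefit of
# reducing travel variation by about 2.4 um, per equation (13):
A, Delta = 10_000, 30
benefit = A * 2.4 ** 2 / Delta ** 2   # yen per unit, about 64
```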
Table 4: Confirmatory experimental results for the L18 orthogonal array (dB)

Configuration                        Estimation    Confirmation
Optimal: A2B3C1D1E3F2G2H2            −6.52         −11.06
Current: A2B2C2D2E2F2G2H2            −12.50        −12.41
Gain                                   5.98           1.35
Table 5: Control factors for the L9 orthogonal array (reexperiment): B, C, E, and I (tension A), each at its current level plus alternatives.

Figure 5: Response graphs of the SN ratio for the L9 orthogonal array
Table 6: Confirmatory experimental results for the L9 orthogonal array (dB)

             SN                           Sensitivity
Condition    Estimation   Confirmation    Estimation   Confirmation
Optimal      −10.17       −10.38          0.22          0.01
Current      −12.17       −12.41          0.06          0.06
Gain           2.00         2.03          0.16         −0.05
Reference
Hideaki Sawada and Yoshiyuki Togo, 2001. An investigation on the stability of tape run for D-VHS. Quality
Engineering, Vol. 9, No. 2, pp. 63–70.
CASE 52
Abstract: In the development of spindles for machining, many quality characteristics, such as temperature rise of bearings, vibration noise, or deformation, have traditionally been studied, spending a long time for evaluation.
Recently, the evaluation of machining by electricity was developed, which
makes the job easier. This study was conducted from the energy transformation viewpoint, aiming for the stability of machining and reducing development cycle time.
1. Introduction
Although in most studies of machine tools, we evaluate performance by conducting actual machining,
essentially it is important to change the design of
machine tools. However, since this type of study can
be implemented only by machine manufacturers,
we tentatively conduct research on a main spindle
as one of the principal elements in a machine tool.
Figure 1 depicts a main spindle in a machining center. A tool is to be attached at the main spindle end
and a motor is connected to its other end. The main
spindle is required to perform stably for various tools and cutting methods under conditions ranging from low to high numbers of revolutions.
In general, manufacturers and users stress quality
of a product for evaluating performance. In the traditional study of high-speed machining, temperature rise at bearings receives the most attention, and
quality characteristics such as vibration, noise, and
deformation are measured separately, thereby leading to an enormous amount of labor hours and cost.
But to secure its stability in the development phase
and shorten the development cycle from the standpoint of energy conversion, we studied the evaluation method grounded on functionality as well as
quality characteristics.
value of electric power (N1) and the maximum value (N2) at each number of revolutions. For the number of revolutions (k indicates 1000 revolutions), three values, 8k, 12k, and 16k per minute, were allocated to the outer array as the second signal. As control factors, we selected eight factors, such as type of bearing, part dimension, part shape, or cooling method, and assigned them to an L18 as the inner array.

Figure 1: Main spindle in machining center

Total variation:

ST = y1² + y2² + ⋯  (f = 120)  (1)

Effective divider:

r = 5.477² + 7.746² + ⋯ + 12.247² = 450.00  (2)

Figure 2: Generic function

Linear equations:
Figure 3: Transition of electric power at on- and off-states when main spindle is spinning

L1 = (0.410)(5.477) + ⋯ + (0.913)(12.247) = 32.566, …, L24 = 107.842  (3)

Sβ = (L1 + L2 + ⋯ + L24)²/(24r) = 219.13  (f = 1)  (4)

SM = [(sum of L at 8k)² + (sum of L at 12k)² + (sum of L at 16k)²]/(8r) − Sβ = 12.00990  (f = 2)  (5)

Figure 4: Data selection of minimum and maximum electric power values when main spindle is spinning

Variation of the difference of proportional terms between motor only and motor plus main spindle:
Table 1: Square root data of cumulative electric power at times T1 = 5.477, T2 = 7.746, T3 = 9.487, T4 = 10.954, and T5 = 12.247. For M*1 (motor only) and M*2 (motor and main spindle), rows cover 8k, 12k, and 16k rpm, each increasing and decreasing, each with maximum and minimum power values. The rows give linear equations L1–L12 (motor only: 33.566, 32.305, 33.546, 32.488, 43.399, 42.080, 43.452, 42.206, 54.991, 53.051, 55.409, 53.883) and L13–L24 (motor and main spindle: 69.279, 61.680, 53.230, 51.557, 98.801, 87.186, 76.994, 74.964, 114.838, 110.398, 111.231, 107.842).
SM* = [(L1 + L2 + ⋯ + L12)² + (L13 + L14 + ⋯ + L24)²]/12r − Sβ = 22.92859  (f = 1)  (6)

Variation between increasing and decreasing revolutions:
SN = [(sum of the 12 increase-side linear equations)² + (sum of the 12 decrease-side linear equations)²]/12r − Sβ = 0.38847  (f = 1)  (7)

Variation between maximum and minimum electric power:
SN′ = [(sum of the 12 maximum-side linear equations)² + (sum of the 12 minimum-side linear equations)²]/12r − Sβ = 0.14159  (f = 1)  (8)

Variation of the interaction between revolutions and M*:
SMM* = [sum of squares of the six (revolution, M*) group totals]/4r − Sβ − SM − SM*  (f = 2)  (9)

Error variation:
Se = ST − Sβ − SM* − SM − SN − SN′ − SMM* = 0.91277  (f = 112)  (10)

Error variance:
Ve = Se/112 = 0.00815  (11)
Table 2
Control factors and levels

Control Factor             Level 1   Level 2   Level 3
A: bearing type            A1        A2        -
B: housing dimension       Small     Mid       Large
C:                         Small     Mid       Large
D:                         Small     Mid       Large
E:                         E1        E2        E3
F: tightening dimension    Small     Mid       Large
G: bearing shape           Small     Mid       Large
H: cooling method          H1        H2        H3
Figure 5
Factor effects plot (levels A1 through H3 on the horizontal axis)
Table 3
Confirmatory experimental results (dB)

Condition   SN Ratio (Estimation / Confirmation)   Sensitivity (Estimation / Confirmation)
Optimal      4.33 /  1.20                          −28.80 / −28.25
Current     −2.71 / −3.23                          −26.55 / −27.54
Gain         7.04 /  4.43                           −2.25 /  −0.71
Total error variance:
VN = (SN + SN′ + Se)/114 = 0.012656  (12)

SN ratio:
η = 10 log [(1/24r)(SM* − Ve)/VN] = −7.76 dB  (13)

Sensitivity:
S* = 10 log (1/24r)(SM* − Ve) = −26.73 dB  (14)
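The computation in equations (3) through (14) can be reproduced from the linear equations of Table 1. The following is a minimal sketch; the grouping of L1 to L24 into revolution, increase/decrease, and maximum/minimum sets follows the table layout, and Se is taken from equation (10) since the raw y data are not repeated here.

```python
import math

# Linear equations L1-L24 from Table 1 (L1-L12: motor only; L13-L24: motor and spindle)
L = [33.566, 32.305, 33.546, 32.488, 43.399, 42.080, 43.452, 42.206,
     54.991, 53.051, 55.409, 53.883,
     69.279, 61.680, 53.230, 51.557, 98.801, 87.186, 76.994, 74.964,
     114.838, 110.398, 111.231, 107.842]
r = 450.0          # effective divider, eq. (1)
Se = 0.91277       # error variation, eq. (10); needs the raw data, so taken as given

S_beta = sum(L) ** 2 / (24 * r)                                        # eq. (3)
S_Mstar = (sum(L[:12]) ** 2 + sum(L[12:]) ** 2) / (12 * r) - S_beta    # eq. (6)

# each block of four rows is (increase max, increase min, decrease max, decrease min)
inc = sum(L[i] for i in range(24) if i % 4 in (0, 1))
dec = sum(L[i] for i in range(24) if i % 4 in (2, 3))
S_N = (inc ** 2 + dec ** 2) / (12 * r) - S_beta                        # eq. (7)

mx = sum(L[i] for i in range(24) if i % 2 == 0)
mn = sum(L[i] for i in range(24) if i % 2 == 1)
S_N2 = (mx ** 2 + mn ** 2) / (12 * r) - S_beta                         # eq. (8)

Ve = Se / 112                                                          # eq. (11)
VN = (S_N + S_N2 + Se) / 114                                           # eq. (12)
eta = 10 * math.log10((S_Mstar - Ve) / (24 * r) / VN)                  # eq. (13)
S_sens = 10 * math.log10((S_Mstar - Ve) / (24 * r))                    # eq. (14)
# eta and S_sens come out close to the reported -7.76 dB and -26.73 dB
```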
Figure 6
Temperature change of main spindle
A2B3C3D2E3F3G1H2
main spindle. By measuring the electric power of the motor and evaluating its functionality, we found the optimal condition, which was confirmed against the quality characteristics.
Although it has long been believed that experiments on a main spindle should be continued until it breaks down, no breakdown occurred in our study. We were therefore able to reduce the evaluation time drastically.
Reference
Machio Tamamura, Makoto Uemura, Katsutoshi Matsuura, and Hiroshi Yano, 2000. Study of evaluation
method of spindle. Proceedings of the 8th Quality Engineering Symposium, pp. 3437.
CASE 53
1. Introduction
This project was initiated in October 1997. At that
time, the project was at its early development stage;
as a result, there was still much design freedom allowed for optimization of the latch system. The basic design of the latch is a four-bar linkage with a
detent inside the handle to lock or unlock the latch
(Figure 1).
Taguchis Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
Figure 3
Real function (output energy vs. input energy; signals M1–M4, noise levels N1 and N2)
Figure 1
Rear window latch of a minivan

3. Design Constraints
In addition to its basic function, this latch has a design constraint: it needs a certain amount of self-locking energy to prevent itself from unlocking. There is no strict requirement for this self-locking energy; however, the maximum opening effort is strongly correlated with it (Figure 4).
Figure 4
Design constraint of self-locking energy (output energy vs. input energy, with the self-locking energy indicated)
5. Dynamic SN Ratio
The following equations were applied to calculate the dynamic SN ratio for the real function of Figure 3. In this calculation, k = 4 and r0 = 2.

y_ij = βM_i  (i = 1, …, k; j = 1, …, r0)  (1)

β = Σ M_i Y_i / (r0 Σ M_i²),  where Y_i = Σ_j y_ij  (2)

ST = Σ y²_ij  (3)

Sβ = r0 β² Σ M_i²  (4)

Ve = (ST − Sβ)/(kr0 − 1)  (5)

SN = 10 log (β²/Ve)  (6)
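A sketch of the calculation in equations (1) through (6), with hypothetical input/output energy values rather than the latch data:

```python
import math

def dynamic_sn(M, y, r0):
    """Zero-point proportional dynamic SN ratio, following eqs. (1)-(6).
    M: k signal levels; y: y[i][j] responses with j = 1..r0 repetitions."""
    k = len(M)
    r = sum(m * m for m in M)                       # sum of squares of signals
    Y = [sum(row) for row in y]                     # Y_i = sum over repetitions
    beta = sum(Mi * Yi for Mi, Yi in zip(M, Y)) / (r0 * r)   # eq. (2)
    ST = sum(v * v for row in y for v in row)       # total variation, eq. (3)
    S_beta = r0 * beta ** 2 * r                     # eq. (4)
    Ve = (ST - S_beta) / (k * r0 - 1)               # error variance, eq. (5)
    return 10 * math.log10(beta ** 2 / Ve), beta    # SN ratio (dB), slope

# hypothetical data: k = 4 signal levels, r0 = 2 repetitions, y roughly 2*M
M = [1.0, 2.0, 3.0, 4.0]
y = [[2.1, 1.9], [4.0, 4.2], [5.9, 6.1], [8.1, 7.8]]
eta, beta = dynamic_sn(M, y, r0=2)
```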
Table 1
Control and noise factors

Control factors:
C:  handle length       Low / High
Ax: x-coordinate of A   Low / Mid / High
Ay: y-coordinate of A   Low / Mid / High
Bx: x-coordinate of B   Low / Mid / High
By: y-coordinate of B   Low / Mid / High

Noise factor:
Friction coefficient of the latch   Low / High
Table 2
L18 DOE matrix (control factor columns C, Ax, Ay, Bx, By; output energy, in energy units, at signal levels M1–M4 under noise levels N1 and N2; seal energy; average self-locking energy at the locking position; and the SN ratio for each run)
Figure 5
Main effects on the SN ratio (dB)

Figure 6
Main effects on beta (energy efficiency)
Table 3
ANOVA for the SN ratio

Source              d.f.   Percent Contribution
C (handle length)    1       0.00
Ax                   2      78.50
Ay                   2       2.07
Bx                   2       5.26
By                   2       0.57
Error                8      13.60
Total               17     100.00

Table 4
ANOVA for beta

Source              d.f.   Percent Contribution
C (handle length)    1       0.00
Ax                   2      31.40
Ay                   2      11.10
Bx                   2       1.45
By                   2       6.80
Error                8      49.25
Total               17     100.00
Table 5
ANOVA for the self-locking energy

Source              d.f.   Percent Contribution
C (handle length)    1       0.02
Ax                   2      59.40
Ay                   2      13.30
Bx                   2       9.71
By                   2      17.00
Error                8       0.57
Total               17     100.00
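The percent contributions in Tables 3 to 5 follow the usual Taguchi ANOVA convention: each factor's sum of squares is corrected by its degrees of freedom times the error variance, and the corrected amount is returned to the error term. A minimal sketch with hypothetical sums of squares (not the latch values):

```python
def percent_contributions(ss, df, ss_error, df_error):
    """Pure percent contribution rho = (S_f - df_f * Ve) / S_T * 100,
    with the subtracted portions pooled back into the error row."""
    Ve = ss_error / df_error
    ST = sum(ss.values()) + ss_error
    rho = {f: (ss[f] - df[f] * Ve) / ST * 100 for f in ss}
    rho["error"] = 100 - sum(rho.values())   # error picks up the remainder
    return rho

# hypothetical ANOVA input: factor sums of squares and degrees of freedom
ss = {"C": 2.0, "Ax": 120.0, "Ay": 30.0}
df = {"C": 1, "Ax": 2, "Ay": 2}
rho = percent_contributions(ss, df, ss_error=10.0, df_error=8)
```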
8. Conclusions
Considering the SN ratio, beta, and the self-locking energy, the authors determined that the optimal design is C = 1, Ax = 1, Ay = 2, Bx = 1, By = 1. Table 6 compares the initial design (2,3,3,3,3) with the optimal design (1,1,2,1,1).
Table 6
Comparison between initial and optimal designs

                             Initial    Optimal    Improvement
SN ratio (dB)                −86.44     −63.56     22.88
Beta (energy efficiency)      0.2715     0.6116     0.3410
                              1.30       1.28       0.02
                              0.68       0.84       0.16
                              0.47       0.33       0.14
Table 7
Validation test results

Design    Water Leak
Initial   Questionable   95.70   31.05
Optimal   Pass           44.73   10.52
Tony Sajdak of MDI for his technical support of
ADAMS/AVIEW. Finally, the authors wish to thank
Dan Drahushak for his support in making this project a reality.
References
Genichi Taguchi, 1993. Taguchi Methods: Design of Experiments. Quality Engineering Series, Vol. 4. Livonia, Michigan: ASI Press, pp. 34–38.
Kenzo Ueno and Shih-Chung Tsai (translator), 1997. Company-Wide Implementations of Robust-Technology Development. New York: ASME Press.
CASE 54
1. Introduction
The principal components of an evaporative emissions control system are a carbon canister, a tank pressure transducer, a rollover orifice, a liquid separator, and the purge solenoid, a valve that meters fuel vapor from the fuel tank and carbon canister to the intake manifold (Figure 1). The linear proportional purge solenoid project was initiated to develop a linear purge solenoid that offers continuous proportional flow with a high degree of flow controllability.
This solenoid design is intended to meet or exceed the existing flow and noise specifications and has the potential to replace or compete with comparable products currently in the marketplace.
2. Problem Statement
The objective of using robust engineering methods in the design of the linear proportional purge solenoid was to reduce part-to-part variation in flow and opening point.
Figure 1
System hardware
Figure 2
Current level maintained using a high-frequency PWM signal (voltage and current vs. time)
3. Opening point stability. The solenoid has to control the flow of fuel vapor at a low duty-cycle percentage.
4. Low part-to-part variation. Reduction of part-to-part variation of flow and opening point is required.

Figure 3
Linear proportional flow (flow, in slpm, vs. percent duty cycle, with the opening point marked)

4. Simulation Models
Every subsystem was optimized using a specific simulator. For the magnetic package, Ansoft Maxwell was used to solve an axisymmetric finite-element model of the magnetic circuit. Ansoft Maxwell simulators are in current use and have been validated in the past; the error estimate for Ansoft Maxwell is about 1.5%, based on comparing other models to actual test data.
For the spring package, a mathematical model was developed. The general equation for a forced spring–mass system is

m x″ + c x′ + k x = F(t)  (1)

Figure 4
Functional decomposition into three subsystems (magnetic package, spring package, and flow package)
Figure 5
Ideal function (magnetic force vs. source)
Expanding the displacement x in a Taylor series forward and backward about point i,

x(i+1) = x(i) + (dx/dt)Δt + (d²x/dt²)(Δt²/2!) + ⋯ + (dⁿx/dtⁿ)(Δtⁿ/n!)  (2)

x(i−1) = x(i) − (dx/dt)Δt + (d²x/dt²)(Δt²/2!) − ⋯ + (−1)ⁿ(dⁿx/dtⁿ)(Δtⁿ/n!)  (3)

Combining (2) and (3) gives the central-difference approximations

dx/dt ≈ [x(i+1) − x(i−1)]/2Δt  (4)

d²x/dt² ≈ [x(i+1) − 2x(i) + x(i−1)]/Δt²  (5)

Substituting (4) and (5) into (1) and solving for x(i+1) gives an explicit recursion for the displacement at the next time step.  (6)

Table 1
Control factors and levels (three levels, low/average/high, for each design parameter)

5. Functional Decomposition
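Equations (2) through (6) amount to an explicit finite-difference integrator for equation (1). A minimal runnable sketch follows; the mass, damping, stiffness, and initial deflection are hypothetical, not the spring-package values.

```python
# Explicit finite-difference integration of m*x'' + c*x' + k*x = F(t)
m, c, k = 0.05, 0.8, 500.0      # hypothetical kg, N*s/m, N/m
dt = 1.0e-4                     # time step, well below 2/omega_n for stability
F = lambda t: 0.0               # free vibration (no forcing)

x_prev = x = 0.010              # released from 10 mm with zero initial velocity
history = [x]
for i in range(10000):          # 1 s of simulated time
    # x'' by central difference, eq. (5); x' by a backward difference
    # so the update stays explicit
    x_next = (F(i * dt) - k * x - c * (x - x_prev) / dt) * dt * dt / m \
             + 2 * x - x_prev
    x_prev, x = x, x_next
    history.append(x)
# damped free vibration: the response decays toward zero
```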
1036
Case 54
Noise Factor: Initial Point to Five Levels
Signal
Magnetic Force
Magnetic Force, y
Source
(Amp-Turns)
Level 1
Level 2
Level 3
Response
Source, M
6. Parameter Design

Table 2
Experiment layout (36 runs; control factor settings and the SN ratio for each run)
Figure 7
SN ratio and sensitivity plots for the 13 control factors (levels A1 through M3; initial vs. optimized design)
Table 3
Two-step optimization (control factors A–M: original levels, levels chosen to maximize the SN ratio in the first step, and levels adjusted for sensitivity in the second step)
Table 4
Model confirmation

Condition          SN (dB): Prediction / Confirmation   Sensitivity: Prediction / Confirmation
Optimum            −29.1 / −32.2                         0.0048 / 0.0049
Current            −35.65 / −38.02                       0.0041 / 0.0041
Improvement         6.55 / 5.82 (dB)                     0.0007 / 0.0008
Improvement (%)     53 / 49                              17 / 19
η = (S − Ve)/(r Ve)  (7)

where S is the sum of squares of the distances between zero and the least-squares best-fit line (forced through zero) at each data point, Ve the mean square (variance), and r the sum of squares of the signals. (Note: The same dynamic SN ratio cited earlier was used in each experiment.)
Figure 7 illustrates the main effects of each control factor at the three different sources. The effect of a factor level is given by the deviation of the response from the overall mean due to that factor level. The main-effects plots show which factor levels are best for increasing the SN ratio; these should give the best response with the smallest effect due to noise.
Control factor settings were selected to maximize the SN ratio, minimizing the sensitivity to the noise factor (initial point) under study and providing uniform linearity regardless of the travel position. The optimum nominal settings selected are shown in Table 3, and the model confirmation is given in Table 4.
In robust design experiments, the level of improvement is determined by comparing the SN ratio of the current design to that of the optimized design. An improvement in SN ratio signifies a reduction in variability. In this case the gain is 5.82 dB, which represents a reduction in variation of about 49% from the original design, using the following formula:

remaining variability = (1/2)^(Gain/6) × initial variability  (8)
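The 49% figure follows directly from equation (8):

```python
# Remaining variability after a gain G in dB: sigma_new = (1/2)**(G/6) * sigma_old
gain = 5.82                      # confirmed SN-ratio gain, dB
remaining = 0.5 ** (gain / 6)    # fraction of the initial variability that remains
reduction = 1 - remaining        # about 0.49, i.e. roughly a 49% reduction
```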
Figure 8
Performance of the magnetic package before and after robust engineering (before: 0.7 mm linear nominal travel; after: 1.5 mm linear nominal travel)
(main excitation). The ideal function for this package is shown in Figure 10. The second robust design experiment we conducted was on the travel (millimeters) of the spring package.
Signal and Noise Strategies. Three levels of the magnetic force (newtons) were selected: force 1, force 2, and force 3, representing the total range of magnetic force over operating conditions. The model (vacuum/area) was the principal source that cannot be controlled during normal operation of the solenoid and was treated as the noise factor for this experiment. This strategy reduced the mechanical response variation due to the pneumatic forces generated by orifice diameter variation and vacuum variation.
Control Factors and Levels. Table 5 gives the layout of control factors and levels. The parameter diagram, illustrating the relationship of the control factors, noise factors, signal factor, and response for the spring-package subsystem, is shown in Figure 11.
Experimental Layout. An L18 orthogonal array was selected to perform the experiment (Figure 12), allowing us to study one control factor at two levels and three control factors at three levels each.
Data Analysis and Two-Step Optimization. The data were analyzed using the dynamic SN ratio of equation (7).
Figure 13 illustrates the main effects of each control factor at the three sources. The effect of a factor level is given by the deviation of the response from the overall mean due to that factor level. The main-effects plots show which factor levels are best for increasing the SN ratio; these should give the best response with the smallest effect due to noise.
Control factor settings were selected to maximize the SN ratio, minimizing the sensitivity to the noise factor (vacuum/area) under study and providing uniform linearity of travel regardless of the pneumatic force. The optimum nominal settings selected are shown in Table 6.
Confirmation and Improvement in the SN Ratio. Table 7 shows the predictions and confirmatory data. In robust design experiments, the level of improvement is determined by comparing the SN ratio of the current design to that of the optimized design.
Figure 9
Ideal function (response vs. source, in amp-turns)

Figure 10
Ideal function (travel vs. magnetic force)
Table 5
Control factor levels (one factor at two levels, with/without; three factors at three levels, low/average/high)
Figure 11
Parameter design P-diagram for the spring package (signal: magnetic force, M; response: travel, y; noise: vacuum/area; control factors A, B, C, D, one at two levels and three at three levels each)
Figure 12
Experimental layout

Run   A  B  C  D
 1    1  1  1  1
 2    1  1  2  2
 3    1  1  3  3
 4    1  2  1  1
 5    1  2  2  2
 6    1  2  3  3
 7    1  3  1  2
 8    1  3  2  3
 9    1  3  3  1
10    2  1  1  3
11    2  1  2  1
12    2  1  3  2
13    2  2  1  2
14    2  2  2  3
15    2  2  3  1
16    2  3  1  3
17    2  3  2  1
18    2  3  3  2

Data columns: magnetic force levels 1–3, each at vacuum low and high, followed by the SN ratio.
Figure 13
SN ratio and sensitivity plots (factor levels A1 through D3; initial vs. optimized design)
Table 6
Two-step optimization

Step                        A    B    C    D
Initial design              A1   B3   C1   D1
SN ratio (first step)       A1   B1   C1   D1
Optimized                   A1   B1   C1   D1

7. Summary
The largest contributors to the SN ratio for the magnetic package were, in order of importance, E, L, A,
Table 7
Model confirmation

Condition          SN (dB): Prediction / Confirmation   Sensitivity: Prediction / Confirmation
Optimum            11.11 / 10.66                         1.62 / 1.63
Current             7.73 /  7.04                         1.37 / 1.37
Improvement (dB)    3.38 /  3.62                         0.21 / 0.26
Improvement (%)        - / 34                            15 / 19
Figure 14
Improvement results (travel under pneumatic forces 1 and 2, before and after optimization)
Figure 15
Ideal function (flow vs. travel)

Table 8
Control factors and levels (factors at three levels each, small/medium/large or low/average/high)
(P-diagram for the flow package: signal, travel M in mm, levels 1–4; response, flow y in slpm)
Figure 17
Experimental layout

Run   A  B  C  D  E
 1    1  1  1  1  1
 2    1  2  2  2  2
 3    1  3  3  3  3
 4    2  1  1  2  2
 5    2  2  2  3  3
 6    2  3  3  1  1
 7    3  1  2  1  3
 8    3  2  3  2  1
 9    3  3  1  3  2
10    1  1  3  3  2
11    1  2  1  1  3
12    1  3  2  2  1
13    2  1  2  3  1
14    2  2  3  1  2
15    2  3  1  2  3
16    3  1  3  2  3
17    3  2  1  3  1
18    3  3  2  1  2

Data columns: travel signal levels 1–4, each under the low/high combinations of temperature and vacuum noise, followed by the SN ratio.
Figure 18
SN ratio and sensitivity plots (factor levels A1 through E3; initial vs. optimized design)
D. A contributed little to the SN ratio. The optimized control factor settings for the spring package are shown in Table 6.
The largest contributors to the SN ratio for the flow package were, in order of importance, B, C, and D; A contributed little to the SN ratio. The optimized control factor settings for the flow package are shown in Table 9.

Table 9
Two-step optimization

Step                         A    B    C    D    E
Original                     A1   B2   C1   D2   E1
SN ratio (first step)        A1   B3   C1   D2   E2
Sensitivity (second step)    -    -    -    D2   E3
Optimized                    A1   B3   C1   D2   E3

8. Conclusions
Table 10
Model confirmation

Condition          SN (dB): Prediction / Confirmation   Sensitivity: Prediction / Confirmation
Optimum            20.37 / 21.57                         30.64 / 32.52
Current            12.31 / 13.44                         29.32 / 29.61
Improvement (dB)    8.06 /  8.13                          1.32 /  2.91
Improvement (%)     60.5 / 61                             4.5 / 10
Figure 19
Performance of the flow package before and after robust engineering (flow, in slpm, vs. travel, in mm, at 15 kPa and 60 kPa)
Reference
Steven C. Chapra and Raymond P. Canale, 1988. Métodos Numéricos para Ingenieros. New York: McGraw-Hill Mexico.
CASE 55
Abstract: Since the structure of magnetic circuits is advanced and complicated, it is difficult to design magnetic circuits without finite-element method (FEM) analysis. In our research, by integrating FEM magnetic field analysis and quality engineering, we designed the parameters of a magnetic circuit to secure its robustness, evaluating reproducibility not only in simulation but also in an actual device. Additionally, we found that reducing cost and development cycle time was possible using this process.
1. Introduction
Figure 1 depicts a circuit of a magnetic drive system. The magnetic circuit used for this research was cylindrical and horizontally symmetric. Therefore, when electric current is not applied, static thrust is balanced in the center position by the magnetic force generated by the magnets. Once electric current is applied, one-directional thrust is produced by the imbalance of force caused by magnetic flux around a coil; the direction of thrust can be controlled by the direction of the current. As illustrated in Figure 2, ideally the static thrust, y, is proportional to the input current, M, with a large amount of thrust produced by a small input current.
In our research, using static simulation software for the magnetic drive developed in-house instead of conducting an actual experiment, we implemented parameter design. After selecting an optimal configuration, we confirmed reproducibility in simulation and in an actual device.
Figure 1
Magnetic drive system (plunger, coil, yoke, and magnets, with the magnetic flux generated by the magnets)
3. SN Ratio
Based on the L18 orthogonal array, we obtained the data shown in Table 7 and analyzed them with the zero-point proportional equation, using the following calculation procedure.

Total variation:
ST = Σ y²_ij  (f = 30)  (1)

Effective divider:
r = Σ M²_j  (2)

Linear equations:
L1 = M1 y11 + M2 y12 + M3 y13, and similarly for L2, …, L10  (3)
Figure 3
Magnetic drive and design parameters
Figure 4
Y-type cause-and-effect diagram
Table 1
Control factors and levels

Control Factor                        Level 1   Level 2   Level 3
A: dimension e                        A1        A2        -
B: dimension b + c + d                B1        B2        B3
C: (dimension c + d)/factor B         C1        C2        C3
D: dimension d/(dimension c + d)      D1        D2        D3
E: dimension a                        E1        E2        E3
F: dimension f                        F1        F2        F3
G: dimension g                        G1        G2        G3
H: dimension h/factor B               H1        H2        H3
Table 2
Layout and results of the L12 orthogonal array (factor assignment and response data for the 12 preliminary runs)
Table 3
Factor levels of preliminary experiment

Control Factor      Positive Side (1)   Negative Side (2)
P: dimension e      e + Δe              e − Δe
Q: dimension c      c + Δc              c − Δc
R: dimension d      d + Δd              d − Δd
S: dimension a      a + Δa              a − Δa
T: dimension b      b + Δb              b − Δb
U: dimension f      f + Δf              f − Δf
V: dimension g      g + Δg              g − Δg
W: dimension h      h + Δh              h − Δh
Slope:
β_m = (L_m + L_(m+5))/2r  (4)

Variation of proportional term:
Sβ = (L1 + L2 + ⋯ + L10)²/10r  (f = 1)  (5)

Variation among slopes at the stroke positions:
SK = Σ_m (L_m + L_(m+5))²/2r − Sβ  (f = 4)  (6)
Table 4
Factor effects of preliminary experimental data

Factor   Level 1   Level 2
P        3.17      3.17
Q        3.18      3.04
R        3.19      2.90
S        3.29      3.30
T        3.18      3.19
U        3.18      3.32
V        3.17      3.46
W        3.07      3.06
Variation between compounded noise levels:
SN = [(L1 + ⋯ + L5)² + (L6 + ⋯ + L10)²]/5r − Sβ  (f = 1)  (7)

Error variation:
Se = ST − Sβ − SK − SN  (f = 24)  (8)

Error variance:
Ve = Se/24  (9)

Total error variance:
VN = (SN + Se)/25  (f = 25)  (10)

SN ratio:
η = 10 log [(1/10r)(Sβ − Ve)/VN]  dB  (11)

Sensitivity:
S = 10 log (1/10r)(Sβ − Ve)  dB  (12), (13)

SN ratio of slope:
η_m = 10 log [(S_m − VN)/VN]  dB,  where S_m is the variation of the slope at each stroke position  (14)
Figure 5
Response graphs for preliminary experiment

Table 5
Compounded noise factors and levels

Control Factor      N1 (Positive Side)   N2 (Negative Side)
S: dimension a      a + Δa               a − Δa
U: dimension f      f + Δf               f − Δf
V: dimension g      g + Δg               g − Δg
W: dimension h      h + Δh               h − Δh
A1B2C2D2E2F2G2H2
Using the optimal and current configurations obtained from simulation, we prepared an actual device and tested it under actual conditions of use and stress (before and after thermal shock and continuous operation). As a result, we obtained a large gain (Table 10). Despite different absolute values of SN ratios and sensitivities, we found a correlation between the results in simulation and in the actual device, which enabled us to ensure robustness through simulation. Figure 7 shows actual data for the optimal and current configurations.
The fact that the difference between the slopes at each stroke was also improved, by approximately 3 dB, shows that we do not have any problems in dealing with dynamic movement. Since we confirmed that simulation enables us to ensure robustness and reproducibility, by implementing product development on a small scale for the same series of products without prototyping, we can expect to reduce development labor hours and innovate our development process.
The expected benefits include elimination of the prototyping process (cutting development cost in half) and a shortened development cycle time (cutting development labor hours in half). Although we focused on static simulation in this research, we will attempt to apply this technique to dynamic simulation and other mechanisms to innovate our development process and, consequently, standardize our de-
Table 6
Layout of the L18 orthogonal array (control factors A–H in columns 1–8; for each run, linear equations computed at signals M1–M3 over stroke positions k1–k5 under noise levels N1 and N2)
Table 7
Results of experiments

Noise   Stroke   M1       M2       M3       Linear Equation
N1      k1       y11      y12      y13      L1
        k2       y21      y22      y23      L2
        k3       y31      y32      y33      L3
        k4       y41      y42      y43      L4
        k5       y51      y52      y53      L5
N2      k1       y61      y62      y63      L6
        k2       y71      y72      y73      L7
        k3       y81      y82      y83      L8
        k4       y91      y92      y93      L9
        k5       y10,1    y10,2    y10,3    L10
Table 8
Results of SN ratio and sensitivity analysis

No.       SN Ratio   Sensitivity   SN Ratio of Slope
1         17.97       3.55         30.10
2         25.79       8.72         18.55
3         21.42       7.49         17.15
4         18.51       8.54         14.96
5         17.28       3.15         26.28
6         28.60       9.67         24.28
7         28.09       8.71         22.38
8         23.08       9.51         25.23
9         37.14       8.15         23.12
10        23.65       5.83         25.10
11        20.33       7.71         15.33
12        29.83       7.71         24.57
13        29.41      10.20         14.10
14        29.66       7.74         26.29
15        25.02       7.26         20.82
16        21.37       4.76         22.53
17        21.10       8.56         21.36
18        30.88       9.68         17.28
Average   24.95       7.61         21.64
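Level averages for the response graphs of Figure 6 can be recomputed from the SN-ratio column of Table 8. The sketch below assumes the standard L18 (2¹ × 3⁷) array with factors A–H assigned to columns 1–8 in order; the actual assignment in Table 6 did not survive extraction and may differ.

```python
# Standard L18 orthogonal array (columns 1-8); an assumption here,
# since the layout of Table 6 did not survive extraction.
L18 = [
    [1,1,1,1,1,1,1,1],[1,1,2,2,2,2,2,2],[1,1,3,3,3,3,3,3],
    [1,2,1,1,2,2,3,3],[1,2,2,2,3,3,1,1],[1,2,3,3,1,1,2,2],
    [1,3,1,2,1,3,2,3],[1,3,2,3,2,1,3,1],[1,3,3,1,3,2,1,2],
    [2,1,1,3,3,2,2,1],[2,1,2,1,1,3,3,2],[2,1,3,2,2,1,1,3],
    [2,2,1,2,3,1,3,2],[2,2,2,3,1,2,1,3],[2,2,3,1,2,3,2,1],
    [2,3,1,3,2,3,1,2],[2,3,2,1,3,1,2,3],[2,3,3,2,1,2,3,1],
]
# SN ratios of the 18 runs (Table 8)
sn = [17.97, 25.79, 21.42, 18.51, 17.28, 28.60, 28.09, 23.08, 37.14,
      23.65, 20.33, 29.83, 29.41, 29.66, 25.02, 21.37, 21.10, 30.88]

factors = "ABCDEFGH"
effects = {}
for col, name in enumerate(factors):
    levels = sorted({row[col] for row in L18})
    # average SN ratio of the runs at each level of this factor
    effects[name] = [
        sum(s for row, s in zip(L18, sn) if row[col] == lv)
        / sum(1 for row in L18 if row[col] == lv)
        for lv in levels
    ]
mean = sum(sn) / len(sn)   # overall mean, about 24.95 dB
```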
Figure 6
Response graphs
Table 9
Estimation and confirmation of gain

             Estimation                 Confirmation
Condition    SN Ratio   Sensitivity    SN Ratio   Sensitivity
Optimal      37.54      10.56          34.08       9.01
Current      29.41       9.33          27.82       9.22
Gain          8.14       1.23           6.16      −0.21
Table 10
Results confirmed by actual experimental device

Condition   SN Ratio   Sensitivity   SN Ratio of Slope
Optimal     36.43      8.97          24.22
Current     25.50      8.65          21.15
Gain        10.93      0.32           3.07
Figure 7
Confirmation of simulated results using actual device
Reference
Tetsuo Kimura, Hidekazu Yabuuchi, Hiroki Inoue, and
Yoshitaka Ichii. 2001. Optimization of linear actua-
CASE 56
Abstract: In this study we assessed robot functionality. A number of problems remain to be solved before a clear-cut conclusion can be drawn; however, we have determined that we may be able to evaluate robot functionality by measuring the electricity consumption of an articulated robot.
1. Introduction
Many multijoint robots have the function of grasping and moving an object located at an arbitrary position with perfect freedom, as human arms do. Robots used primarily for product assembly are called industrial robots. Each arm of an articulated robot moves in accordance with each joint angle changed by a motor and aligns its end effector to a target; that is, the movement of a robot's end effector is controlled by the indicated motor angles. We used the vertical articulated robot shown in Figures 1 and 2 and instructed it with angular values so that it could move by recognizing both angular and positional values.
Using Cartesian coordinates (X, Y, Z) (Figure 1) and joint coordinates (J1, …, J5) (Figure 2), we can express the position of this robot. In this case, the locating point of the robot is the center point of the flange at the end of the robot's arm (point P in Figure 1), whose coordinates are indicated on the controller.
Figure 2
Robot's movement in the joint coordinate system

Figure 3
Positions of robot and box (wall) relative to the XY and XZ planes, point P, and the robot base

Figure 4
Plotting sequence for displacement of robot arm

Figure 5
Relationship among the pen's center point, point P, and the rotational center of joint J4

Equations (1)–(4) determine the joint angles from this geometry; for example, J4 is obtained from the components GH_y and GH_z and the link length L4 via an inverse cosine.

Figure 6
Relationship between the pen's center point and the rotational center of joint J4

Y = O_Y + L2 cos θ2 + (L3 + L4 cos θ4) cos θ3 + L4 sin θ4 sin θ3  (5)

with an analogous expression for the remaining coordinate.  (6)

Figure 7
Relationship between the pen's center point and the rotational center
Table 1
Signal factors regarding arm's angle (deg)

Factor                  Level 1   Level 2   Level 3
Arm angle (per joint)   5         10        15

The generic function of each joint is

y = βM  (7), (8)
Table 2
Indicative factors

Factor                    Level 1     Level 2        Level 3
A: arm's posture          Folded      Intermediate   Extended
B: acceleration           Low         Mid            High
C: velocity (constant)    6%          12%            18%
Total variation:
ST = Σ y²_ij = 1030.1378  (f = 9)  (9)

Effective divider:
r = 5² + 10² + 15² = 350  (10)

Linear equations:
L1 = (5)(4.98) + (10)(9.83) + (15)(14.87) = 346.250
L2 = 348.950
L3 = 344.800  (11)

Variation of proportional term:
Sβ = (L1 + L2 + L3)²/3r = 1030.0952  (f = 1)  (12)

Variation among proportional terms:
SJ3 = (L1² + L2² + L3²)/r − Sβ  (f = 2)  (13)

Error variation:
Se = ST − Sβ − SJ3 = 0.0172  (f = 6)  (14)

Error variance:
Ve = Se/6 = 0.0029  (15)

Total error variance:
VN = (SJ3 + Se)/7 = 0.0061  (f = 7)  (16)

SN ratio:
η = 10 log [(1/3r)(Sβ − Ve)/VN] = 26.85 dB  (17)

Sensitivity:
S = 10 log (1/3r)(Sβ − Ve) = 4.69 dB  (18)

Figure 8
Position of origin for robot locating
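The decomposition in equations (9) through (15) can be reproduced from the A1B1C1 data of Table 3 (a minimal sketch of the calculation procedure):

```python
# Measured angles (deg) for A1B1C1 at signals M = 5, 10, 15 (Table 3)
M = [5.0, 10.0, 15.0]
y = [[4.98, 9.83, 14.87],   # second-signal level 1 -> L1
     [4.96, 9.96, 14.97],   # second-signal level 2 -> L2
     [4.85, 9.81, 14.83]]   # second-signal level 3 -> L3

r = sum(m * m for m in M)                                # eq. (10): 350
L = [sum(m * v for m, v in zip(M, row)) for row in y]    # eq. (11)
ST = sum(v * v for row in y for v in row)                # eq. (9)
S_beta = sum(L) ** 2 / (3 * r)                           # eq. (12)
S_J3 = sum(Li * Li for Li in L) / r - S_beta             # eq. (13)
Se = ST - S_beta - S_J3                                  # eq. (14)
Ve = Se / 6                                              # eq. (15)
```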
Analyzed Results
The response graphs of SN ratio and sensitivity for joint J2 are shown in Figures 9 and 10, and those for joint J3 in Figures 11 and 12. From the SN-ratio factor-effect plots, we can see that factor C (constant velocity) greatly affects the locating performance.
Table 3
Measurement data for A1B1C1 (deg)

            M1 (5)   M2 (10)   M3 (15)   Linear Equation
M1 (5)      4.98     9.83      14.87     L1
M2 (10)     4.96     9.96      14.97     L2
M3 (15)     4.85     9.81      14.83     L3
Figure 9
SN ratio for joint J2

Figure 10
Sensitivity for joint J2

Figure 11
SN ratio for joint J3

Figure 12
Sensitivity for joint J3
Table 4
Estimation of gains and results in confirmatory experiment (dB)

             Joint J2                    Joint J3
Condition    Estimation   Confirmation   Estimation   Confirmation
Optimal      25.13        25.33          24.25        23.81
Initial      20.69        23.17          18.44        19.28
Gain          4.44         2.16           5.81         4.53
Table 5
Indicative factors

Control Factor        Level 1      Level 2        Level 3
A: teaching method    Controller   Digit          -
B: arms used          J2, J3       J2, J4         J3, J4
C: posture            Folded       Intermediate   Extended
D: weight (kg)        None         -              -
E: acceleration       Low          Mid            High
F: velocity           Low          Mid            High
G: deceleration       Low          Mid            High

Figure 13
Electric power data
The fact that functionality evaluation based on electric power can be applied to an articulated robot demonstrates the validity of the energy-based functionality evaluation method.

Reference
Takatoshi Imai, Naoki Kawada, Masayoshi Koike, Akira Sugiyama, and Hiroshi Yano, 1997. A study on the measurement of the generic function of articulated robots. Quality Engineering, Vol. 5, No. 6, pp. 30–37.
CASE 57
Abstract: A robust design effort was initiated using Taguchi's dynamic parameter design approach. An L18 orthogonal array was used to evaluate the control factors with respect to the signal factor (dome travel), the response (actuating force), and the noise factors (process and material variations and number of operational cycles). The numerical experiments were carried out using computer simulations only. The signal-to-noise ratio increased 48.7 dB as a result of this optimization. The analysis and selection of the best system parameters confirmed the process estimates and resulted in a design with dramatically improved mechanical and reliability performance.
1. Introduction
The newest multifunction switch at ITT Cannon,
the KMS switch, was designed specically to meet
phone market requirements. This switch is a new
type of ultraminiature tact switch (Figure 1).
An engineering team was created to address the
switch design and to improve its mechanical and
electrical performances. A parameter design approach was selected using the method of maximizing the SN ratio and of optimizing the forces and
the tactile feel.
The essence of the switch is the K2000 dome
(Figure 2). The dome provides the spring force, the
tactile feel, and the electrical contact. A properly
designed dome functions in a unique manner: As
the dome is compressed, a point is reached where
the actuating force actually declines and goes
through a minimum value. The person actuating
the switch feels the change in force, so it is called a
tactile feel. The tactile feel is usually quantied by the
ratio of actuating force to forward force (Figure 3).
2. Background
The KMS switch uses a very small dome with the following specifications:
3. Objectives
The objective of the parameter design effort using Taguchi's approach was to determine a set of design parameters that would result in the largest number of operating cycles while keeping a good tactile feel and a constant actuating force for the life of the product. The function of the switch is to give a tactile feel at a predetermined actuating force and to transmit an electrical current when it is actuated during the life of the product. This can be described using the force–deflection (F/D) curve (Figure 3); the ideal switch would exhibit an F/D curve that does not change during the life of the product.
The analytical objective of Taguchi's parameter design approach is to maximize the SN ratio. The SN ratio is a metric that measures the switch
Figure 1
KMS tact switch

Figure 2
K2000 dome (4.6 mm × 3.5 mm)
4. Finite-Element Analysis (Computer Simulations)
All the experiments in this study are numerical experiments conducted using computer simulation (finite-element analysis). A different F/D curve was calculated for each combination of factors. The aging effect (due to the number of operations) corresponding to each dome design was obtained from the calculated maximum stress level by using a model of the Wöhler curve (Figure 4).
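The stress-to-life mapping can be sketched with a Basquin-type power law; the constants below are hypothetical and only illustrate the shape of a Wöhler curve, not the dome material data.

```python
def cycles_to_failure(stress_mpa, sigma_ref=3500.0, n_ref=1.0e4, slope=-8.0):
    """Basquin-type Wohler curve: N = N_ref * (sigma / sigma_ref)**slope.
    Higher maximum stress gives fewer operating cycles.
    All constants are hypothetical, for illustration only."""
    return n_ref * (stress_mpa / sigma_ref) ** slope

life_low = cycles_to_failure(2000.0)    # lower stress, longer life
life_high = cycles_to_failure(3000.0)   # higher stress, shorter life
```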
Figure 3
F/D curve for an electromechanical switch (force vs. deflection; actuating force Fa at event 1 (M1), forward force at event 2 (M2))

Figure 4
Operating life obtained from the Wöhler curve (operating life vs. maximum stress, with parameter variation)
Figure 5
Ideal F/D curve (load, in N, vs. deflection, in mm)

Figure 6
Simulated F/D curve (corresponding to one design)

Figure 7
Simulated F/D curve versus the ideal switch F/D curve
Figure 8
Stress curves at different points of the dome, corresponding to one dome design (stress, in MPa, vs. deflection, in mm, at points tp1–tp3, mp1–mp3, and bp1–bp3)
Figure 9
P-diagram for the KMS switch (control factors A–H; signal S: deflection, two levels; responses M: force characteristics Fa (M1) and Fra (M2); noise factors at two levels each: number of operations (0/300,000), material variation, and process variation (−0.01/+0.01 mm))
Figure 10: Experimental layout showing the L18 inner array and the outer noise array

L18 inner array:
No.   A  B  C  D  E  F  G  H
 1    1  1  1  1  1  1  1  1
 2    1  2  2  2  2  2  2  1
 3    1  3  3  3  3  3  3  1
 4    2  1  1  2  2  3  3  1
 5    2  2  2  3  3  1  1  1
 6    2  3  3  1  1  2  2  1
 7    3  1  2  1  3  2  3  1
 8    3  2  3  2  1  3  1  1
 9    3  3  1  3  2  1  2  1
10    1  1  3  3  2  2  1  2
11    1  2  1  1  3  3  2  2
12    1  3  2  2  1  1  3  2
13    2  1  2  3  1  3  2  2
14    2  2  3  1  2  1  3  2
15    2  3  1  2  3  2  1  2
16    3  1  3  2  3  1  2  2
17    3  2  1  3  1  2  3  2
18    3  3  2  1  2  3  1  2

Outer array (noise factors T, Q, and P define conditions N1 to N8; all eight conditions are run at each signal level, M1 and M2):
      T  Q  P
N1    1  1  1
N2    1  1  2
N3    1  2  1
N4    1  2  2
N5    2  1  1
N6    2  1  2
N7    2  2  1
N8    2  2  2
Table 1: Control factors and levels

Control Factor        Level 1      Level 2      Level 3
A:                    25           30           35
B:                    0.12         0.15         0.2
C:                    0.02         0.05         0.1
D:                    -            -            -
E:                    0/0          0.25/0.025   0.5/0.05
F:                    -            -            -
G:                    -            12.5         -
H: material type      1            2            -
5. P-Diagram
The switch product, generally called an engineered system, can be described by a visual method called a P-diagram. This diagram indicates the relationship between parameter types and the inputs and outputs of the switch system. The dome deflection is the signal factor.

6. Experimental Layout

Table 2: Noise factors and levels

Noise Factor                 Level 1    Level 2
T: number of operations      0          300,000
P: process variation (mm)    -0.01      +0.01
Figure 11: Numerical results. For each of the 18 runs, the responses under noise conditions N1 to N8 at signal level M1 (first block) and at M2 (second block), followed by 10 log Vm.

M1 block:
No.    N1     N2     N3     N4     N5     N6     N7     N8
 1    1.40   1.60   1.95   2.20   1.27   1.41   1.44   1.86
 2    2.10   2.20   2.75   3.15   1.14   1.18   0.95   0.90
 3    2.95   3.80   3.85   4.40   0.50   0.50   0.50   0.50
 4    3.60   4.00   4.00   4.20   2.52   2.78   2.08   2.35
 5    3.90   4.50   5.40   6.00   1.44   1.50   1.85   1.88
 6    4.30   4.95   5.35   6.00   0.50   0.50   0.50   0.50
 7    6.00   6.60   6.00   7.00   2.84   2.94   2.15   2.13
 8    3.20   3.75   4.35   4.40   0.50   0.50   0.50   0.50
 9    6.90   7.80   7.70   8.50   1.14   0.87   0.86   0.50
10    1.80   2.10   2.15   2.40   0.86   0.82   0.91   0.79
11    1.90   2.05   2.70   3.00   1.20   1.28   1.71   1.25
12    4.20   4.80   4.50   5.15   2.17   1.58   0.90   0.73
13    4.00   4.30   4.00   4.60   1.73   1.64   1.38   0.71
14    3.40   3.90   4.60   5.10   1.08   0.93   0.50   0.50
15    8.00   6.10   6.05   7.40   1.63   0.50   0.50   0.50
16    3.10   3.70   3.00   3.45   0.76   0.58   1.13   0.50
17    5.40   6.00   4.70   6.00   2.22   2.15   0.92   1.19
18    5.30   6.00   6.35   7.00   0.50   0.50   0.50   0.50

M2 block:
No.    N1     N2     N3     N4     N5     N6     N7     10 log Vm
 1    1.10   1.30   1.00   1.20   1.01   1.16   0.83    14.72
 2    1.60   2.00   1.55   1.90   0.94   1.10   0.71    15.87
 3    2.90   3.80   2.95   3.60   0.50   0.50   0.50    18.13
 4    3.61   4.01   4.01   4.21   2.53   2.79   2.09    22.13
 5    2.20   2.30   2.40   2.20   0.97   0.95   1.02    19.87
 6    2.10   2.90   4.90   5.35   0.50   0.50   0.50    19.97
 7    6.01   6.61   6.01   7.01   2.85   2.95   2.16    25.03
 8    2.70   3.20   2.60   2.55   0.50   0.50   0.50    17.72
 9    4.40   6.95   7.20   8.20   0.89   0.82   0.84    24.09
10    1.80   2.10   1.65   2.20   0.86   0.82   0.79    15.12
11    1.90   2.00   1.70   2.05   1.20   1.25   1.16    16.69
12    3.30   0.95   3.95   4.50   1.76   0.61   0.85    20.14
13    4.01   4.31   4.01   4.61   1.74   1.65   1.39    20.98
14    2.30   2.80   4.00   4.50   0.86   0.79   0.50    19.15
15    4.00   5.00   3.20   1.50   1.03   0.50   0.50    21.38
16    3.10   3.70   2.95   3.40   0.76   0.58   1.11    18.15
17    5.41   6.01   4.71   6.01   2.23   2.16   0.93    23.11
18    4.90   2.40   4.80   1.90   0.50   0.50   0.50    20.56
Table 3: ANOVA for the SN ratio

Source   f    S            V           F          S'           rho (%)
A        2    1286.0658    643.0329    80.2653    1270.0431    23.84
B        2    1413.2788    706.6394    88.2049    1397.2561    26.22
C        2    0.5300       0.2650      (pooled)
D        2    376.9884     188.4942    23.5284    360.9657     6.77
E        2    36.5609      18.2804     (pooled)
F        2    380.5481     190.2740    23.7506    364.5254     6.84
G        2    1718.4728    859.2364    107.2525   1702.4501    31.95
H        1    104.5913     104.5913    13.0554    96.5800      1.81
H×A      2    10.9772      5.4886      (pooled)
(e)      6    48.0681      8.0113                 136.1928     2.56
Total    17   5328.0132    313.4125

Variance definitions: Vm = Sm (f = 1); VM = SM (f = 1); VN = SN/7 (f = 7); VN×M = SN×M/7 (f = 7); total f = 16.
To optimize the system, Taguchi recommended the following data transformations, used in a two-step optimization process:

1. Signal-to-noise ratio: SN = 10 log(VM/VN)
2. Sensitivity: SM = 10 log VM
3. S = 10 log Vm, to optimize the actuation force (Fa)

The SN ratio had to be maximized first. The sensitivity, SM, had to be maximized next, and last, the S parameter had to be optimized to focus on the nominal force value. The data transformations resulted in the ANOVA tables and response graphs shown in Tables 3 to 5 and Figures 12 to 14:

1. Signal-to-noise: 10 log(VM/VN) transformation (Table 3 and Figure 12)
2. Sensitivity 1: SM = 10 log VM transformation (Table 4 and Figure 13)
3. Sensitivity 2: S = 10 log Vm transformation (Table 5 and Figure 14)

Table 6 summarizes the results and the best level choice.
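The three transformations above can be sketched in code. The reading of VM as the mean square between the two signal levels, VN as the pooled within-level (noise) variance, and Vm as the mean square of the overall average is an assumption consistent with the text, not a verbatim reproduction of the authors' ANOVA:

```python
import math

def two_step_metrics(y_m1, y_m2):
    """Compute SN = 10*log10(VM/VN), SM = 10*log10(VM), S = 10*log10(Vm)
    from responses at the two signal levels M1 and M2 across the noise
    conditions. VM/VN/Vm definitions are plausible assumptions."""
    n1, n2 = len(y_m1), len(y_m2)
    n = n1 + n2
    total = sum(y_m1) + sum(y_m2)
    mean = total / n
    m1, m2 = sum(y_m1) / n1, sum(y_m2) / n2
    vm_mean = total ** 2 / n                                   # Sm (1 d.f.) -> Vm
    v_signal = n1 * (m1 - mean) ** 2 + n2 * (m2 - mean) ** 2   # VM (1 d.f.)
    ss_noise = (sum((y - m1) ** 2 for y in y_m1)
                + sum((y - m2) ** 2 for y in y_m2))
    v_noise = ss_noise / (n - 2)                               # pooled VN
    return (10 * math.log10(v_signal / v_noise),
            10 * math.log10(v_signal),
            10 * math.log10(vm_mean))
```

A run whose forces separate cleanly between M1 and M2 (large VM) and vary little across N1 to N8 (small VN) gets a large SN, which is exactly what the first optimization step rewards.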
8. Confirmation
Table 4: ANOVA for 10 log VM

Source   f    S            V            F          S'           rho (%)
A        2    424.8138     212.4069     9.4948     380.0722     7.73
B        2    2461.2643    1230.6322    55.0106    2416.5227    49.13
C        2    16.8809      8.4404       (pooled)
D        2    223.2589     111.6294     4.9900     178.5172     3.63
E        2    33.0715      16.5358      (pooled)
F        2    328.7046     164.3523     7.3467     283.9630     5.77
G        2    1323.5088    661.7544     29.5812    1278.7672    26.00
H        1    55.7912      55.7912      (pooled)
H×A      2    50.8520      25.4260      (pooled)
(e)      7    156.5957     22.3708                 380.3037     7.73
Total    17   4918.1461    289.3027
Table 5: ANOVA for 10 log Vm

Source   f    S           V          F          S'         rho (%)
A        2    73.9262     36.9631    39.6632    72.0624    48.30
B        2    12.2648     6.1324     6.5804     10.4010    6.97
C        2    21.9876     10.9938    11.7969    20.1238    13.49
D        2    3.4679      1.7339     (pooled)
E        2    0.6887      0.3444     (pooled)
F        2    2.0802      1.0401     (pooled)
G        2    28.8232     14.4116    15.4643    26.9594    18.07
H        1    0.2866      0.2866     (pooled)
H×A      2    5.6751      2.8376     3.0448     3.8113     2.55
(e)      7    6.5235      0.9319                15.8427    10.62
Total    17   149.2005    8.7765
Figure 12: Response graph for the SN ratio (dB) by control factor level, A1 to H2; vertical scale 10 to 24 dB
Figure 13: Response graph for 10 log VM by control factor level, A1 to H2; vertical scale 10 to 24 dB
Figure 14: Response graph for 10 log Vm by control factor level, A1 to H2; vertical scale 16.5 to 21.5 dB
Table 6: Best-level-choice table

         SN = 10 log(VM/VN)      10 log VM             10 log Vm             Best
Factor   rho (%)   Best Level    rho (%)  Best Level   rho (%)  Best Level   Choice
A        23.84     A1            7.73     A1           48.30    A3           A1
B        26.22     B3            49.13    B3           6.97     B3           B3
C        -         -             -        -            13.49    C2           C2
D        6.77      D1            3.63     D1           -        -            D1
E        -         -             -        -            -        -            E1
F        6.84      F1            5.77     F1           -        -            F1
G        31.95     G1            26.00    G1           18.07    G3           G1
H        1.81      H1            -        -            -        -            H1
9. Conclusions
This study was carried out using only computer simulations to decrease development costs and time, but an experimental confirmation was made and was positive. The designs for the switch and manufacturing equipment have been made accordingly.
Table 7: Comparison of process average estimates and confirmation values

                     Process Estimates          Confirmation Values
Initial product      SN = -20.4 dB              SN = -20.4 dB
                     10 log VM = -8.5 dB        10 log VM = -8.5 dB
                     10 log Vm = 23.7 dB        10 log Vm = 23.7 dB
Optimized product    SN = 33.0 dB               SN = 28.3 dB
                     10 log VM = 32.0 dB        10 log VM = 28.6 dB
                     10 log Vm = 16.8 dB        10 log Vm = 16.5 dB
Improvement          SN = 53.4 dB               SN = 48.7 dB
                     10 log VM = 40.5 dB        10 log VM = 37.1 dB
                     10 log Vm = 6.9 dB         10 log Vm = 7.2 dB
References
S. Rochon and P. Bouysses, October 1997. Optimization of the MSB series switch using Taguchi's parameter design method. Presented at the ITT Defense and Electronics Taguchi Symposium '97, Washington, DC.
S. Rochon and P. Bouysses, October 1999. Optimization of the PROXIMA rotary switch using Taguchi's parameter design method. Presented at the ITT Defense and Electronics Taguchi Symposium '99, Washington, DC.
S. Rochon and T. Burnel, October 1995. A three-step method based on Taguchi design of experiments to optimize robustness and reliability of ultraminiature SMT tact switches: the top-actuated KSR series. Presented at the ITT Defense and Electronics Taguchi Symposium '95, Washington, DC.
S. Rochon, T. Burnel, and P. Bouysses, October 1999. Improvement of the operating life of the TPA multifunction switch using Taguchi's parameter design method. Presented at the ITT Defense and Electronics Taguchi Symposium '99, Washington, DC.
Genichi Taguchi, 1993. Taguchi on Robust Technology Development. New York: ASME Press.
Genichi Taguchi, Shin Taguchi, and Subir Chowdhury, 1999. Robust Engineering. New York: McGraw-Hill.
CASE 58

1. Introduction
A new generation of CCM02 electrical connectors has been developed within Cannon for use with Smart Cards: credit cards with an embedded computer chip capable of direct communication with a host computer. The connector insulator part (Figure 1) is produced using plastic injection molding technology. For the connector to function reliably, the insulator part needs to be exceptionally flat for surface mounting, allowing the Smart Card to be inserted and interfaced electrically with the contacts mounted in the insulator.
This study presents a comparison between three approaches based on Taguchi's parameter design methods used to optimize the electrical connector product and the associated plastic injection molding process. The three approaches were studied and compared to determine which provides the best set of product/process design parameter values: smaller-the-better Z-axis deflection, dynamic Z-axis position, and dynamic all-axes dimension.

2. Objective
The objective of the robust engineering effort was to select the best product and process parameter value set that maximizes the function of the injection molding process and results in electrical connector parts that are flat and distortion free. The goal of the study was to explore different robust engineering approaches [i.e., signal-to-noise (SN) ratios] to determine which one resulted in the best product/process parameter configuration.
Figure 1: CCM02 MKII electrical connector insulator part
Figure 2: FEA mesh
Figure 3: P-diagram for the smaller-the-best approach
  Noise parameters: N: plastic material; M: mold position on Z-axis; P: location on part
  Control parameters: A: beam length; B: inside dimension; C: outside dimension; D: rib width; E: plane thickness; F: beam thickness; G: gate location; H: gate diameter
  Response: change in position on the Z-axis (deflection)
Figure 4: Electrical connector insulator and factor labels
3. Approach 1: Smaller-the-Best Deflection on the Z-Axis
After brainstorming the system and comparing past analyses on injection-molded parts of previous versions, the parameter diagram shown in Figure 3 was developed. The static system response was dimensional change along the Z-axis from a reference plane parallel to the flat plane of the part. This plane would be analogous to the part lying flat on
Table 1: Factor levels for the smaller-the-best approach

Control factors                   Level 1    Level 2    Level 3
A: beam length (mm)               -          10         12
B: inside dimension (mm)          11.3       13.6       15.9
C: outside dimension (mm)         5.68       7.98       -
D: rib width (mm)                 -          14         -
E: plane thickness (mm)           0.6        0.88       1.25
F: beam thickness (mm)            0.6        0.88       1.25
G: gate location                  -          -          -
H: gate diameter (mm)             0.4        0.8        1.2

Noise factors
N: plastic material
M: Z-axis mold position (mm): 0.44, 1.04, 1.34
P: location on part (node no.)
Table 2: Orthogonal layout for the control factors, with the smaller-the-best SN ratio for each run

Row    SN (dB)
 1     10.30
 2     14.24
 3     15.27
 4     9.54
 5     15.43
 6     9.49
 7     9.01
 8     12.04
 9     17.75
10     17.08
11     8.84
12     12.03
13     13.38
14     10.30
15     12.53
16     13.94
17     17.30
18     7.50
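The per-run figures in Table 2 are smaller-the-better SN ratios. Assuming the standard Taguchi formula eta = -10*log10(mean of y^2), a run's SN can be reproduced from its deflection readings (the data below are illustrative, not the study's):

```python
import math

def smaller_the_better_sn(deflections_mm):
    """Smaller-the-better SN ratio: eta = -10*log10((1/n) * sum(y^2)).
    `deflections_mm` holds the Z-axis deflections observed across the
    noise conditions for one L18 run (illustrative values)."""
    n = len(deflections_mm)
    return -10 * math.log10(sum(y * y for y in deflections_mm) / n)

eta = smaller_the_better_sn([0.30, 0.35, 0.28, 0.32])
```

Smaller deflections push the mean square down and the SN ratio up, which is why the best runs in Table 2 are the ones with the largest SN.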
Figure 5: Smaller-the-best control-factor-level average graphs (SN ratio in dB, levels A1 to H3)
Figure 6: P-diagram for the dynamic Z-axis position approach
  Noise parameters: N: plastic material; P: location on part
  Signal M: Z-axis mold position
  Control parameters: A: beam length; B: inside dimension; C: outside dimension; D: rib width; E: plane thickness; F: beam thickness; G: gate location; H: gate diameter
  Response: Z-axis part position

Verification studies were conducted on this set of control factor levels, the results of which are presented along with the other approaches later.
Figure 7: Ideal function for the Z-axis position approach
Table 3: Factor levels for the Z-axis position approach

Control factors                   Level 1    Level 2    Level 3
A: beam length (mm)               -          10         12
B: inside dimension (mm)          11.3       13.6       15.9
C: outside dimension (mm)         5.68       7.98       -
D: rib width (mm)                 -          14         -
E: plane thickness (mm)           0.6        0.88       1.25
F: beam thickness (mm)            0.6        0.88       1.25
G: gate location                  -          -          -
H: gate diameter (mm)             0.4        0.8        1.2

Noise factors
N: plastic material
P: location on part (node no.)

Signal factor
M: Z-axis mold position (mm): 0.44, 1.04, 1.34
Figure 8: Finite-element node mesh for the Z-axis position approach (signal levels: Z1 = 0.44, Z2 = 1.04, Z3 = 1.34, Z4 = 1.64, Z5 = 1.955, Z6 = 2.23, Z7 = 2.63, Z8 = 3.04, Z9 = 3.38)
Table 4: Orthogonal layout and results for the Z-axis position approach

Row   Zero-Point SN (dB)   Slope, β
 1    10.22                0.969
 2    14.24                0.981
 3    15.22                0.990
 4    9.43                 0.967
 5    15.44                0.984
 6    9.36                 0.969
 7    8.89                 0.961
 8    11.96                0.983
 9    17.87                0.985
10    17.12                0.986
11    8.69                 0.970
12    11.97                0.978
13    13.35                0.982
14    10.22                0.969
15    12.49                0.978
16    13.98                0.977
17    17.42                0.984
18    7.32                 0.971
Figure 9: Control-factor-level average graphs for the zero-point proportional SN ratio (dB), levels A1 to H3
Figure 10: Control-factor-level average graphs for the zero-point proportional sensitivity, β (mm/mm), levels A1 to H3
Figure 11: Parameter diagram for the all-axes dimension approach
  Noise parameter: N: plastic material
  Signal: mold dimension
  Control parameters: A: beam length; B: inside dimension; C: outside dimension; D: rib width; E: plane thickness; F: beam thickness; G: gate location; H: gate diameter
  Response: part dimension
Table 5: Factors and levels for the all-axes dimension approach

Control factors                   Level 1    Level 2    Level 3
A: beam length (mm)               -          10         12
B: inside dimension (mm)          11.3       13.6       15.9
C: outside dimension (mm)         5.68       7.98       -
D: rib width (mm)                 -          14         -
E: plane thickness (mm)           0.6        0.88       1.25
F: beam thickness (mm)            0.6        0.88       1.25
G: gate location                  -          -          -
H: gate diameter (mm)             0.4        0.8        1.2

Noise factor
N: plastic material

Signal factor
M: mold dimension (mm): 0.48, 0.60, 0.65, 0.80, 0.90, 1.07, 2.94, 3.10, 6.42, 20.0, 35.1, 55.4
Figure 12: Ideal function for the all-axes dimension approach

Figure 11 presents the P-diagram for this approach, and Table 5 the list of factors and levels. Figure 12 shows the ideal function for this approach, and Figure 13 the finite-element node mesh associated with the signal factor levels (1 to 18). A slope of 1.00 would result in the mold and the part having identical dimensions.
Table 6 presents the orthogonal layout for this approach, along with the resulting zero-point proportional SN ratio and slope, β. Figures 14 and 15 present the zero-point SN and sensitivity (slope, β) control-factor-level average graphs for the dynamic all-axes dimension approach using the data in Table 6. From these graphs, the best set of control factor levels is A2B3C2D1E3F2G1H1. Verification studies using this set were done and are reported along with the other approaches later.
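The zero-point proportional SN ratio and slope reported in Tables 4 and 6 can be sketched with the standard Taguchi dynamic analysis for y = beta*M; the exact pooling of noise terms in the handbook's own computation may differ:

```python
import math

def zero_point_proportional(signal, response):
    """Zero-point proportional analysis of y = beta*M for one run.
    Returns (SN ratio in dB, slope beta). A standard dynamic-SN sketch."""
    r = sum(m * m for m in signal)                  # effective divider
    l = sum(m * y for m, y in zip(signal, response))
    beta = l / r                                    # least-squares slope
    s_beta = l * l / r                              # proportional-term variation
    s_t = sum(y * y for y in response)              # total variation
    v_e = (s_t - s_beta) / (len(signal) - 1)        # error variance
    sn = 10 * math.log10(((s_beta - v_e) / r) / v_e)
    return sn, beta

# Near-proportional data gives a slope close to 1 and a high SN ratio.
sn, beta = zero_point_proportional([1.0, 2.0, 3.0], [0.97, 1.98, 2.95])
```

A slope near 1.00 with a large SN is the goal here: the part tracks the mold dimension closely and consistently across noise.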
Figure 13: FE node mesh for the all-axes dimension approach (18 measurement nodes on the x-, y-, and z-axes, e.g., Lx04 to Lx18, Ly01 to Ly17, Lz02 to Lz13)
Table 6: Orthogonal layout and results for the all-axes approach

Row   Zero-Point SN (dB)   Slope, β
 1    32.32                0.985
 2    32.80                0.983
 3    34.17                0.982
 4    33.41                0.985
 5    38.93                0.982
 6    26.07                0.984
 7    26.45                0.984
 8    33.23                0.984
 9    38.23                0.981
10    30.88                0.982
11    29.71                0.986
12    33.15                0.984
13    30.31                0.985
14    35.31                0.982
15    33.95                0.983
16    36.13                0.981
17    32.40                0.983
18    32.21                0.983
7. Conclusions
1. All the computer simulation confirmed values are within the predicted confidence intervals.
2. Approach 3, all-axes dimension zero-point proportional, results in the largest undesirable deflection on the electrical connector insulator part in the Z-axis.
3. Approach 1, smaller-the-best, and approach 2, Z-axis position zero-point proportional, results
Figure 14: Control-factor-level average graphs for the zero-point SN ratio (dB), all-axes dimension approach, levels A1 to H3
Figure 15: Control-factor-level average graphs for the zero-point slope, β (mm/mm), all-axes dimension approach, levels A1 to H3
Table 7: Confirmation predictions and verification

Parameter Design Approach     Gate   Predicted SN (dB)   Predicted β      Verified SN (dB)   Verified β   Z-Axis Deflection from Mold Reference, Delta (mm)
Smaller-the-best deflection   G1     18.99 ± 1.53        -                17.76              -            1.03
on Z-axis                     G2     18.28 ± 1.53        -                19.30              -            0.93
Zero-point proportional,      G1     18.80 ± 1.49        0.991 ± 0.004    17.86              0.986        1.03
Z-axis position               G2     18.77 ± 1.49        0.990 ± 0.004    19.33              0.992        0.93
Zero-point proportional,      G1     40.76 ± 4.42        0.981 ± 0.001    35.08              0.983        1.37
all-axes dimension            G2     35.50 ± 4.42        0.982 ± 0.001    31.61              0.984        1.17
Figure 16: Verification of approach 1 (smaller-the-best Z-axis deflection) and approach 2 (Z-axis position zero-point proportional)
Figure 17: Verification of approach 3 (all-axes dimension zero-point proportional)
Figure 18: L18 worst case
CASE 59

1. Introduction
As shown in Figure 1, an intercooler (I/C) is placed in the suction air path between a turbocharger (T/C) and the engine. Although two cooling methods (air and water) exist, the former is more widely used because its structure is simple, its capacity is easy to increase, and it needs no maintenance.
Figure 2 outlines the structure of an air-cooled I/C. Compressed and heated by the T/C, the intake air is cooled as it passes through tubes inside the I/C. If there is a large amount of resistance against the airflow traveling through each tube, the cooling performance deteriorates, and at the same time the charging efficiency of air into the engine cylinders decreases. In addition, if an airflow imbalance occurs in some portions, the cooling performance worsens and, in some cases, a noise problem called airflow noise takes place.
Solving the problem of airflow noise motivated us to apply the quality engineering technique. We attempted not only to take measures against this noise problem but also to establish a design technique applicable to future products and to reduce airflow noise by improving the function of an air-cooled I/C.
2. SN Ratio
The function of an I/C is to cool air that is compressed and heated by the compressor of a turbocharger.

Figure 1: Turbocharger and intercooler
Using an L18 orthogonal array, we assigned control factors to its inner array and signal and error factors to its outer array for experiments 1 to 18. Table 1 shows the measured data. Based on these data, we proceed with the analysis for computing the SN ratio and sensitivity (Figure 3) using the following calculations.

Total variation:
ST = 3.4² + 5.6² + ... + 34.4² = 11,009.41  (f = 28)  (1)

Linear equations:
L1 = (29.0)(3.4) + (30.1)(5.6) + ... + (58.1)(12.2) = 2539.58
L2 = 2003.07
L3 = 8772.67
L4 = (29.0)(13.6) + (30.1)(18.3) + (40.0)(20.0) + ... + (58.1)(34.4) = 7340.69  (2)

Effective divider:
r = 29.0² + 30.1² + ... + 58.1² = 13,003.77  (3)

Variation of proportional term:
Sβ = (L1 + L2 + L3 + L4)² / 4r = 8202.828  (f = 1)  (4)

Variation between proportional terms:
SN×β = (L1² + L2² + L3² + L4²)/r − Sβ = 2663.807  (f = 3)  (5)

Error variation:
Se = ST − Sβ − SN×β = 142.775  (f = 24)  (6)

Error variance:
Ve = Se/24 = 5.9490  (7)

Total error variance:
VN = (ST − Sβ)/27 = 103.9475  (8)

Figure 2: Structure of intercooler
Table 1: Measured data of airflow (m/s)

                             M1     M2     M3     M4     M5     M6     M7
M: theoretical velocity:     29.0   30.1   40.0   46.1   41.3   49.5   58.1
Amount of airflow (kg/min):  6      6      8      10     6      8      10

I1 (upper), J1 (max.):       3.4    5.6    6.7    9.0    8.5    10.7   12.2
I1 (upper), J2 (min.):       2.7    4.4    5.4    7.0    6.5    8.4    9.8
I2 (lower), J1 (max.):       15.4   21.8   24.0   26.3   31.3   36.0   41.7
I2 (lower), J2 (min.):       13.6   18.3   20.0   21.8   26.8   30.0   34.4
SN ratio:
η = 10 log {[(1/4r)(Sβ − Ve)] / VN} = −28.19 dB  (9)

Sensitivity:
S = 10 log (1/4r)(Sβ − Ve) = −8.02 dB  (10)
3. Confirmation of Improvement
Figure 5 shows the improvement confirmed by measuring airflow noise under the current and optimal conditions. As the figure shows, under the optimal condition we can reduce the noise level by 6 dB at the
Table 2: Control factors and levels

Control Factor    Level 1           Level 2      Level 3
A:                Standard(a)       Standard 20  -
B:                5*                -            -
C:                -                 -            -
D:                1(a)              2(a)         3(a)
E: tube length    120               140          160
F:                30                45           60
G:                Standard − 7.5    Standard     Standard + 7.5
                  45                55(a)        65(a)

a. Current level.
Figure 3: Response graphs

Under the optimal condition, the cooling performance was improved by 20% compared to that under the current condition. This performance improvement means that the total efficiency of the engine was also improved by 2%.
In addition, if the cooling performance required for a certain I/C were the same as before, we could reduce the size of the I/C, leading not only to cost reduction but also to higher flexibility in the layout of the engine room and easier development.
Table 3: Estimation and confirmation of SN ratio and sensitivity (dB)

             SN Ratio                     Sensitivity
Condition    Estimation   Confirmation   Estimation   Confirmation
Optimal      -19.07       -19.74         -4.33        -4.22
Current      -26.55       -28.51         -5.76        -5.15
Gain         7.48         8.77           1.43         0.93
Figure 4: Airflow in confirmatory experiment
Figure 5: Airflow noise under the current and optimal conditions (1 graduation: 2 dB; measured at 8 kg/min and 100,000 rpm of the T/C, and at 10 kg/min and 100,000 rpm)
Figure 6: Confirmatory result for cooling performance
Reference
Hisahiko Sano, Masahiko Watanabe, Satoshi Fujiwara, and Kenji Kurihara, 1998. Airflow noise reduction of intercooler system. Quality Engineering, Vol. 6, No. 1, pp. 48-53.

This case study is contributed by Hisahiko Sano, Masahiko Watanabe, Satoshi Fujiwara, and Kenji Kurihara.
CASE 60

1. Introduction
The brake booster for the automotive brake system under study, located between the brake pedal and the master cylinder, boosts a driver's pedal force to generate enough braking force to stop a vehicle (Figure 1). It uses the vacuum pump system of the engine (the intake manifold or a built-in vacuum pump) to boost the pedal force, which is why it is called a vacuum servo or master vac. To check brake booster quality, booster forces at two given input forces were inspected to see whether or not they met the target values (Figures 2 and 3). However, unacceptably large variations were found in the booster forces. Therefore, it was decided to use robust design to solve this chronic problem.
Figure 1: Automotive brake system (1: booster; 2: master cylinder; 3: brake pedal; 4: caliper; 5: drum; 6: pressure limit valve)
Figure 2: Brake booster (vacuum line, valve body, reaction disk; input force and output force indicated)
Figure 3: Output characteristics (output force vs. input force, showing the jump-in force, the knee point, and the inspection points M1 and M2)
Figure 4: P-diagram
  Signal factor: input force
  Response: output force
  Noise factors: 1: reaction disk hardness; 2: valve body inner diameter; 3: poppet protrusion height
  Control factors: A: reaction disk vent hole; B: reaction disk thickness; C: number of greased surfaces; D: quantity of grease; E: grease application method
Table 1: Control factors

Control Factor                   Level 1     Level 2
A: reaction disk vent hole       Yes         No
B: reaction disk thickness       Current     Thicker
C: number of greased surfaces    One side    Both sides
D: quantity of grease            Current     More
E: grease application method     Current     Better
Table 2: Noise factors

Noise Factor                  N1       N2
Reaction disk hardness        Soft     Hard
Valve body inner diameter     Large    Small
Poppet protrusion height      Small    Large
Table 3: L12 orthogonal array (12 runs; the response was measured at M1 and M2, each under noise conditions N1 and N2)
Figure 5: Booster performance tester
Table 4: SN ratio response

Level    A          B          C          D          E
1        -2.11      -1.31      -2.55      -0.65      -1.32
2        0.40       -0.41      0.84       -1.06      0.39
Delta    2.51216    0.89757    3.38785    0.40884    0.92659
Rank     2          4          1          5          3
Table 5: β response

Level    A          B          C          D          E
1        7.86959    7.80028    7.92845    7.85619    7.80320
2        7.76598    7.83529    7.70712    7.77938    7.83237
Delta    0.10361    0.03501    0.22132    0.07681    0.02917
Rank     2          4          1          3          5

Figure 6: SN ratio response graph
Figure 7: β response graph
Table 6: SN ratio predicted for the derived optimal condition

Condition    Control Factors    SN Predicted (dB)
Current      A2B1C1D1E1         -2.00
Optimal      A2B1C2D1E2         2.31
Gain                            4.31
Orthogonal Array
Since five control factors were identified, the L12 orthogonal array was selected for the experiments. The control and noise factors were allocated as shown in Table 3.

Ideal Function
The following ideal function, which can easily be derived from Figure 3, was used for this study:

y = βM  (1)

The response tables (Tables 4 and 5) and response graphs (Figures 6 and 7) summarize our results. The results show that control factors A, C, and E had large effects on the SN ratio. Factor B was shown to have a minor influence on the variation but a large effect on cost. The SN ratio gain of the optimum condition was predicted to be 4.31 dB (Table 6).
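Response tables like Tables 4 and 5 are built by averaging the per-run metric over each level of each factor. A minimal helper (illustrative, not the authors' code) shows the computation for one factor column of the L12:

```python
def response_table(levels, sn):
    """Average SN ratio at each level of one control factor.
    `levels` is the factor's column of the orthogonal array (one entry
    per run), `sn` the per-run SN ratios."""
    groups = {}
    for lv, value in zip(levels, sn):
        groups.setdefault(lv, []).append(value)
    return {lv: sum(v) / len(v) for lv, v in groups.items()}

# A two-level factor over four runs (made-up numbers):
avg = response_table([1, 1, 2, 2], [-2.0, -1.0, 1.0, 2.0])
delta = avg[2] - avg[1]   # the "Delta" row of the response table
```

The factor rankings in Tables 4 and 5 are simply the deltas sorted from largest to smallest.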
5. Confirmation Test
A confirmation test revealed good reproducibility:
Gain predicted (dB): 4.31
Gain confirmed (dB): 5.78
Reproducibility (%): 134

6. Conclusions
The robust design method was used to solve a chronic brake booster problem. Using this highly efficient optimization process, considerable improvement in reducing the defect rate due to booster force variation was achieved. It should also be very helpful in enhancing productivity.
SN ratio:
η = 10 log {[(1/rr₀)(Sβ − Ve)] / VN}  (2)

Sensitivity:
β² = (1/rr₀)(Sβ − Ve)  (3)
CASE 61

1. Introduction
Series 47 and Series 51 mechanical water feeder and low-water cutoff combinations, as well as Series 101A electric water feeders, are essential to our product mix. All three series incorporate the same feeder valve. The valve cartridge that controls the water flow is shown in Figure 1. This valve handles cold water feed at inlet pressures up to 150 psig. This project was undertaken to correct a field condition related to the valve, commonly referred to as chattering.

2. Background
To increase its reliability and life and to make it more installer-friendly, the feeder valve was redesigned in 1995. The new design incorporates a plastic (Noryl) replaceable cartridge. Valve shutoff is achieved when the moving poppet inside the cartridge is locked into place mechanically by an external stem that exerts force on the diaphragm. The configuration of the cartridge within the valve is shown in Figure 2. Internal components of the cartridge are shown in Figure 3.
Upon its introduction to the market, we began receiving complaints of valve chattering in the 47-LWCO series. The 51-LWCO series received almost no complaints. As the Series 47 is used in residential boilers and the Series 51 in industrial boilers, we surmised that the high complaint level on the Series 47 was due to homeowners' sensitivity to noise. The 101A feeder was unaffected because it has a quick-shutoff mechanism. The chattering could not
Figure 1: Feeder valve (Series 47, 51, and 101; valve cartridge indicated)
Figure 2: Cartridge within the valve
3. Objectives
The objective of this product optimization was twofold: (1) to reduce variability under field noise conditions (i.e., to maximize the SN ratio), and (2) to optimize the design for a minimum flow rate of 2.0 gal/min at 40 psig (i.e., to adjust the sensitivity). This would be achieved by obtaining a consistent flow rate across the valve in proportion to the inlet water pressure and the cross-sectional flow area (Figure 4).

4. The Team

Figure 3: Internal components of the valve

5. Experimental Layout
Figure 5 is a P-diagram that shows the control, noise, and signal factors. Factor definitions follow. Factor levels are shown in Table 1.

Figure 4: Ideal function (y = flow rate; M = cross-sectional flow area; M* = square root of inlet pressure)
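The double-signal ideal function of Figure 4 treats the product M·M* as the effective signal, so the slope can be fitted by least squares over both signals at once. The variable names and synthetic data below are illustrative:

```python
import math

def double_signal_beta(area, pressure_psig, flow_rate):
    """Least-squares slope for the double-signal ideal function
    y = beta * M * M_star, where M is the cross-sectional flow area and
    M_star the square root of inlet pressure. Sketch of the analysis
    implied by Figure 4."""
    x = [m * math.sqrt(p) for m, p in zip(area, pressure_psig)]
    return (sum(xi * yi for xi, yi in zip(x, flow_rate))
            / sum(xi * xi for xi in x))

# Perfectly proportional synthetic data recovers the slope:
beta = double_signal_beta(
    [0.001, 0.002], [40.0, 160.0],
    [0.5 * 0.001 * math.sqrt(40.0), 0.5 * 0.002 * math.sqrt(160.0)])
```

Folding the two signals into one combined signal is what lets the standard zero-point proportional machinery be applied unchanged.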
Figure 5: P-diagram
  Signal factors: inlet pressure; cross-sectional flow area
  System: valve operation
  Output: flow rate across the valve
  Noise factors: aging; movement direction; air cylinder speed
  Control factors: fastening; poppet design; seat design; seat durometer; restrictor orifice size; side hole opening size
Control Factors
Fastening: the cartridge poppet is either mechanically fastened to the diaphragm or unfastened and allowed to move freely in the cylinder (see Figure 3 for a graphical depiction)
Poppet design: the shape of the poppet, determining the amount of flow around the poppet

Noise Factors
Aging: controls whether or not the cartridge had been in use (the effect of age is recognized by the lasting impression of the inlet orifice on the poppet seat)
Movement direction: controls whether the response is being measured as the valve is opening or closing
Table 1: Factors and levels

Control factors                     Level 1         Level 2     Level 3
A: fastening                        Not fastened    Fastened    -
B: poppet design                    Hex             Square      -
C: seat design                      Flat            Conical     Spherical
D: durometer                        80              60          90
E: restrictor orifice size (in.)    1/16            1/8         1/2
F: side hole size (in.)             0.187           0.210       -

Noise factors
N: aging                            New             Aged
FR:                                 0.002           0.001
DR:

Signal factors
M: inlet pressure (psig): 40, 80, 150
M*: cross-sectional flow area
6. Test Setup
The test setup is shown in Figures 6 and 7. Important notes:
- Inlet feed pressure (one of the dynamic signals) and air cylinder speed (the rate at which the force gauge closes and opens the valve) were regulated as part of the setup.
- Flow rate (the output response) and the force to close or open the valve (used to calculate the cross-sectional flow area) were taken in real time from a flow meter and a force gauge, respectively.
- As the valve opens and shuts, the flow rate and shutoff force were taken simultaneously at a rate of twice per second. The data were downloaded electronically directly into a computer spreadsheet.
Figure 6: Test setup 1 (force gauge, feeder valve, data recorder, data acquisition board)
Figure 7: Test setup 2 (air cylinder, force gauge, feeder valve, pressure gauge)
Table 2: SN response

Level      A: Fastening   B: Poppet Design   C: Seat Design   D: Durometer   E: Restrictor Size   F: Side Hole Size
1          6.05           5.76               5.90             6.12           7.35                 5.64
2          5.12           5.25               5.93             5.24           4.87                 5.48
3          -              -                  4.94             5.39           4.55                 -
Delta      0.93           0.51               0.99             0.88           2.80                 0.15
Ranking    3              5                  2                4              1                    6

SN ratio: η = 10 log {[(1/rr₀)(Sβ − Ve)] / Ve}; sensitivity: β² = (1/rr₀)(Sβ − Ve)
Table 3: Sensitivity response

Factors: A: fastening; B: poppet design; C: seat design; D: durometer; E: restrictor size; F: side hole size
Level values: 0.4665, 0.4702, 0.4734, 0.4788, 0.8957, 0.4800, 0.4856, 0.4877, 0.4680, 0.5046, 0.4764, 0.4075, 0.4501, 0.4728, 0.1248
Delta      A: 0.0574      B: 0.0751          C: 0.0342        D: 0.0635      E: 0.7134            F: 0.0455
Figure 8: Response graph (SN ratio, 3.00 to 8.00 dB, for levels A1-A2, B1-B2, C1-C3, D1-D3, E1-E3, F1-F2)

Optimum condition:
A2: fastened
B1: hex
C3: spherical
D2: 60
E2: 0.125 in.
F1: 0.187 in.
8. Confirmation
Based on the robust design experimentation, design optimization would predictably result in an SN ratio

9. Conclusions
To solve the water feeder valve chattering problem, a dynamic Taguchi L18 experiment was performed. The experiment analyzed the level effects of six control factors and three noise factors. The zero-point proportional equation was used for the analysis. The study employed a double signal, inlet pressure into the feeder and cross-sectional flow area, with flow rate through the valve used as the response.
Experimental results showed a predicted SN ratio gain of 5.90 dB. This indicates that design optimization and a significant improvement in the chattering problem could be achieved. A confirmation study using the optimal design factors resulted in a 4.63-dB gain.
Table 4: Results of confirmatory run

                            SN (dB)   SN Gain   Beta   Beta Gain
Initial production design   -8.87     -         0.88   -
Optimum prediction          -2.97     5.90      0.46   0.42
Optimum confirmed           -4.25     4.63      0.37   0.51
The Series 47 chattering experiment was the first Taguchi robust design experiment for McDonnell & Miller. We put the robust design technique to the test by applying it to a product that has been in the field for several years and has shown a sporadic chattering problem from the beginning. Utilizing robust design techniques, we were able to predict the quality of the optimum design via the SN ratio gain. Based on the 4.63-dB gain, we felt that this design needed to be tested in the field for customer approval. At the same time, we started working on a new cartridge that will eliminate altogether the free-moving part in the cartridge. With the traditional design-test-build method we could not pinpoint control factors that would make the product robust against field noise conditions as explicitly as we were able to do using the Taguchi robust design method.

Acknowledgments: The author expresses his sincere appreciation to Shin Taguchi of American Supplier Institute and Tim Reed of ITT Industries for their guidance throughout the experiment.
References
J. L. Lyons, 1982. Lyons' Valve Designer's Handbook. New York: Van Nostrand Reinhold.
R. L. Panton, 1984. Incompressible Flow. New York: Wiley.
G. H. Pearson, 1972. Valve Design. New York: Pitman.
Genichi Taguchi, Shin Taguchi, and Subir Chowdhury, 1999. Robust Engineering. New York: McGraw-Hill.
Yuin Wu and Alan Wu, 1998. Taguchi Methods for Robust Design. Livonia, Michigan: ASI Press.
Appendix
L18 inner array and results: for each of the 18 runs, the levels of A (fastening: fastened/not fastened), B (poppet design: hex/square), C (seat design: spherical/conical/flat), D (durometer: 60/80/90), E (restrictor size: 0.063/0.125/0.500), and F (cartridge side hole size: 0.187/0.210; level 2 dummy treated), together with the sensitivity and SN ratio of each run and their averages.
CASE 62
1. Function of a DC Motor
A dc motor has a mechanism that converts electric power (current, I; voltage, E) to motive power (2π × revolutions, n; torque, T), and their relationship is expressed as follows:

2πnT = ηIE   (watts)   (1)
Now η corresponds to energy conversion efficiency, and equation (1) represents a technical means per se. On the other hand, since the requirement of a motor as a product is motive power to drive a target object, if we cannot obtain the power, the motor does not work. Therefore, we regard the motive power required for a product as a signal and the required motive power with less electric power as the dc motor's function. This relationship is expressed as

IE = (1/η)(2πnT)   (watts)   (2)
Taguchis Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
Using the data of experiment 1 shown in Table 1, we detailed the calculation procedure of SN ratio and sensitivity as shown below.

Total variation:

ST = 8.56² + 8.50² + ⋯ + 10.14² + 10.06² = 7413.58   (f = 90)   (3)

Effective divider:

r = 3.95² + 3.98² + ⋯ + 4.70² + 4.73² = 1854.37   (4)

Linear equation:

L = (3.95)(8.56) + (3.98)(8.50) + ⋯ + (4.70)(10.14) + (4.73)(10.06) = 3701.03   (5)

Variation of proportional term:

Sβ = L²/r = 3701.03²/1854.37 = 7386.65   (f = 1)   (6)

Error variation:

Se = ST − Sβ = 7413.58 − 7386.65 = 26.93   (f = 89)   (7)

Figure: input/output relationship (y = βM), plotting electric power consumption (W) against motive power (W) at load torques of 2, 3, and 4 N·m.
1124
Signal M
Output y
Signal M
Output y
Signal M
Output y
Signal M
Output y
Signal M
Output y
Signal M
Output y
Signal M
Output y
Signal M
Output y
Signal M
Output y
180-s-lapsed
90-s-lapsed
Initial
180-s-lapsed
90-s-lapsed
Initial
180-s-lapsed
90-s-lapsed
Initial
Measurement
Point of Time
Data
Typea
Load
(Nm)
Table 1
Results of experiment 1
4.90
10.57
4.81
10.28
4.73
10.09
4.61
9.28
4.75
8.70
4.64
8.98
3.95
8.56
4.10
7.68
4.09
7.64
4.91
10.50
4.76
10.33
4.73
10.00
4.62
9.31
4.73
8.76
4.65
9.01
3.98
8.50
4.10
7.61
4.13
7.61
4.93
10.43
4.76
10.28
4.80
10.01
4.63
9.28
4.73
8.76
4.66
8.90
3.98
8.42
4.09
7.80
4.11
7.61
4.95
10.38
4.82
10.11
4.72
10.19
4.67
9.25
4.73
8.67
4.64
8.93
3.97
8.35
4.10
7.72
4.11
7.59
4.95
10.36
4.88
10.05
4.64
10.29
4.65
9.15
4.76
8.72
4.64
8.96
3.98
8.32
4.11
7.66
4.13
7.68
4.96
10.37
4.85
10.19
4.68
10.11
4.65
9.15
4.78
8.73
4.67
8.80
3.97
8.43
4.09
7.70
4.12
7.58
4.97
10.40
4.82
10.16
4.77
10.01
4.66
9.16
4.76
8.70
4.65
8.80
4.01
8.38
4.11
7.75
4.12
7.51
4.64
9.17
4.76
8.74
4.63
8.91
4.00
8.31
4.10
7.69
4.11
7.69
4.96
10.39
4.83
10.19
4.70
10.20
4.94
10.37
4.84
10.04
4.70
10.14
4.64
9.18
4.74
8.74
4.66
8.92
3.99
8.26
4.11
7.56
4.13
7.55
4.96
10.37
4.90
10.00
4.73
10.06
4.65
9.12
4.75
8.73
4.69
8.88
4.01
8.23
4.09
7.76
4.13
7.56
Error variance:

Ve = Se/89 = 26.93/89 = 0.3026   (8)

SN ratio:

η = 10 log [(1/r)(Sβ − Ve)/Ve]
  = 10 log [(1/1854.37)(7386.65 − 0.3026)/0.3026]
  = 11.19 dB   (9)

Sensitivity:

S = 10 log (1/r)(Sβ − Ve)
  = 10 log [(1/1854.37)(7386.65 − 0.3026)]
  = 6.00 dB   (10)
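The arithmetic of equations (3) to (10) can be scripted directly. The sketch below feeds in only the four (M, y) pairs quoted in equations (3) to (5), so its results differ from the full 90-point values above; it illustrates the mechanics rather than reproducing the study's numbers:

```python
import math

def sn_and_sensitivity(M, y):
    """SN ratio and sensitivity for zero-point proportional data,
    following the pattern of equations (3)-(10)."""
    f_total = len(y)
    ST = sum(v * v for v in y)                        # total variation
    r = sum(m * m for m in M)                         # effective divider
    L = sum(m * v for m, v in zip(M, y))              # linear equation
    S_beta = L * L / r                                # proportional-term variation
    Se = ST - S_beta                                  # error variation
    Ve = Se / (f_total - 1)                           # error variance
    eta = 10 * math.log10((S_beta - Ve) / r / Ve)     # SN ratio (dB)
    S = 10 * math.log10((S_beta - Ve) / r)            # sensitivity (dB)
    return eta, S

# The four (M, y) pairs quoted in equations (3)-(5); this subset alone
# does not reproduce the full-data eta of 11.19 dB.
M = [3.95, 3.98, 4.70, 4.73]
y = [8.56, 8.50, 10.14, 10.06]
eta, S = sn_and_sensitivity(M, y)
```

On the full 90-point data set this procedure yields the η = 11.19 dB and S = 6.00 dB reported above.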
We calculated the SN ratio and sensitivity for each experiment using the calculation method discussed in the preceding section. Based on the results, we computed the level-by-level average SN ratio and sensitivity and plotted the factor effects (Figure 3). The optimal condition determined by the response graphs of the SN ratio was compared with the current condition.
Table 2
Factors and levels

Control factors                  Level 1     Level 2   Level 3
A: fixing method for part A      Current     Rigid
B: thickness of part B           Small       Mid       Large
C: shape of part B               1           2         3
D: width of part D               Small       Mid       Large
E: shape of part E               1           2         3
F: inner radius of part F        Small       Mid       Large
G: shape of part G               1           2         3
H: thickness of part G           Small       Mid       Large

Signal factor
M: motive power (2πnT)

Noise factor
I: time (initial, 90-s-lapsed, 180-s-lapsed)
Figure 3
Response graphs (SN ratio and sensitivity, dB)
The SN ratio under the optimal condition was improved compared with that under current conditions. In addition, we can downsize components such as a magnet or yoke without losing the current level of the motive power and, moreover, can slash material and other costs by several dozen million yen on a yearly basis.
Table 3
SN ratio and sensitivity in confirmatory experiment (dB)

             SN Ratio                    Sensitivity
Condition    Estimation  Confirmation   Estimation  Confirmation
Optimal      16.88       16.43          5.99        6.11
Current      10.06       11.73          6.31        6.93
Gain         6.82        4.70           -0.32       -0.82
Benchmark                16.77                      6.62
Figure 4
Results of confirmatory experiment: power consumption y (W) versus motive power M (2πnT) under the current (β = 4.93), optimal (β = 4.08), and benchmark (β = 4.59) conditions
Reference
Kanya Nara, Kenji Ishitsubo, Mitsushiro Shimura, Akira
Harigaya, Naoto Harada, and Hideki Rikanji, 2000.
CASE 63
1. Introduction
The goal of this project was to determine the vehicle
steering and suspension system combination that
optimizes the robustness of on-center steering performance. An ADAMS dynamic simulation computer model of a light truck vehicle was used to
generate the data necessary for evaluation of the
design. The model consisted of a detailed representation of the steering system combined with a full
vehicle model. To establish the validity of the
ADAMS model, the steering system and vehicle responses predicted were correlated with on-center
road test data from a baseline vehicle prior to performing this study.
Many steering systems fail to achieve good on-center steering system performance, due partly to large amounts of lash and friction. A steering system with good road feel at high speeds must have maximum stiffness and minimum friction. These parameters are especially important in the on-center region, so that the driver can detect and respond to extremely small changes in road reaction forces.
Figure 1
ADAMS full vehicle model
to generate an approximate 0.2-g peak lateral acceleration at a forward vehicle speed of 60 mph. A
low level of lateral acceleration was chosen to reduce the nonlinear effects of vehicle dynamics on
the on-center performance, so that the steering system performance could be isolated and evaluated.
Outputs from the simulation were characterized by plotting the vehicle lateral acceleration versus the steering wheel angle (Figure 2). The speed of the vehicle was varied, as in an ordinary on-center test, while the steering wheel angle amplitude was held constant.

Figure 2
Typical data for on-center ay versus θ cross plot (ay = lateral acceleration; θ = steering wheel angle)

Figure 3
Ideal function for on-center steering: ay = β(v)θ
3. Ideal Function
Much discussion and effort went into defining the ideal function for the case study. After careful consideration of many possible alternatives, it was decided that the ideal function characterizing the steering system model on-center behavior would be best represented by an equation of the form

ay = β(v)θ

where ay is the lateral acceleration of the vehicle (g), β(v) the sensitivity coefficient as a function of vehicle speed (g/deg) (v is the vehicle speed in mph), and θ the steering wheel angle (deg). This formulation, shown in Figure 3, allowed for consideration of vehicles with both constant and varying understeer. The formulation also assumed that hysteresis is a condition to be minimized in the steering system. In fact, hysteresis was treated as part of the noise in formulating the system's SN ratio. Nonetheless, it was recognized that hysteresis is inherent in the vehicle dynamics and cannot be eliminated entirely from the steering system performance.

Selection of the ideal function was based on the assumption that the steering system open-loop function is as shown in Figure 3, where steering wheel angle (θ) is the driver input and lateral acceleration (ay) of the vehicle is the output.
The ideal function selected for this study was derived from the results of an on-center steering test.
The steering wheel angle versus lateral acceleration
cross plot shown in Figure 2 is representative of a
typical curve obtained from a vehicle on-center test.
It can be seen that the ideal function shown in Figure 3 is a representation of the overall slope of the
loop, but without hysteresis.
The response of the system in Figure 4 can be
affected by environmental and other factors outside
the direct control of the design engineer. Such factors are referred to as noise factors. Examples of
noise factors are tire inflation pressure, tire wear,
and weight distribution. It is necessary to design the
system to achieve the most consistent input/output
response regardless of the operating conditions experienced by the customer. Figure 5 shows the overall system diagram, indicating the noise factors and
control factors considered in this study.
4. Noise Strategy
Several noise factors are shown in Figure 5. However, it is not necessary that all noise factors be included in the analysis. Typically, if a system is robust
against one noise factor, it will be robust against
other noise factors having a similar effect on the
system. Therefore, the noise factor strategy involves
selecting a subset of noise factors from all possible
noise factors affecting the system. The noise factors
selected should be those factors that tend to excite
the greatest variation in the system response.
The steering system response to noise has two
significant modes, one being the change in slope of
the ideal function (sensitivity), the other a change
in the size of the hysteresis loop. Therefore, it is
necessary to group the noise factors into subsets,
each of which excites one or the other mode of response to variations in noise levels. For example,
some noise levels may expand the hysteresis loop
Figure 4
Steering system function: θ (input) → steering system → ay (output)
Figure 5
Engineering system for the steering system model (signal: θ; response: ay)
Table 1
Noise strategy

Noise Factor          N1 (P1, Q1)   N2 (P1, Q2)   N3 (P2, Q1)   N4 (P2, Q2)
Rack friction         Low           Low           High          High
Column friction       Low           Low           High          High
Weight distribution   High          Low           High          Low
Valve areas           High          Low           High          Low
6. Analysis

For each run in the L27 array used in this experiment, an evaluation of the SN ratio can be made using the relationship

SN = 10 log (β²/σ²)

Figure 6
Best-fit approximation: a best-fit line through the zero point for the ay versus θ data points at each noise level (N1 to N4)
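This evaluation can be sketched in code: fit a best-fit line through the zero point to the (θ, ay) data pooled over the noise conditions, then form SN = 10 log(β²/σ²). The data below are hypothetical, not the ADAMS simulation outputs:

```python
import math

def on_center_sn(points):
    """SN = 10*log10(beta^2 / sigma^2) for (theta_deg, ay_g) pairs:
    beta is the least-squares slope through the zero point, and
    sigma^2 is the mean squared residual about that line."""
    num = sum(t * a for t, a in points)
    den = sum(t * t for t, _ in points)
    beta = num / den
    mse = sum((a - beta * t) ** 2 for t, a in points) / len(points)
    return 10 * math.log10(beta ** 2 / mse), beta

# Hypothetical (steering angle deg, lateral accel g) pairs pooled
# over several noise runs
data = [(5, 0.051), (10, 0.098), (15, 0.152), (5, 0.047),
        (10, 0.104), (15, 0.149), (5, 0.050), (10, 0.101)]
sn, beta = on_center_sn(data)
```

Hysteresis and noise both inflate the residual term, so a design with a tighter, more repeatable ay-θ relationship gets a higher SN ratio.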
With the average SN ratios, the optimum configuration was selected for maximizing the robustness
of the system by choosing the level of each control
factor with the highest SN ratio. An estimate of the
overall system SN ratio was then obtained by summing the improvements due to each optimal control factor setting from the overall average of SN
ratios for all 27 runs.
The level-by-level average SN ratios were computed for each control factor; for the first factor:

Level 1:
(S/N)av,1 = (1/9)(18.6 + 18.4 + 18.3 + 18.9 + 18.9 + 18.9 + 19.8 + 19.9 + 19.9) = 19.06

Level 2:
(S/N)av,2 = (1/9)(20.0 + 20.0 + 20.2 + 18.4 + 18.4 + 18.2 + 18.5 + 18.7 + 18.6) = 19.02

Level 3:
(S/N)av,3 = (1/9)(18.9 + 19.1 + 19.0 + 19.8 + 19.6 + 19.9 + 18.4 + 18.2 + 18.3) = 19.01

The optimum SN ratio is then estimated as

(S/N)opt = (S/N)av + Σn [(S/N)n,opt − (S/N)av]

where (S/N)av is the average SN ratio over all 27 runs of the L27 array and (S/N)n,opt is the average SN ratio for the nth control factor at its optimal level.
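The level-averaging arithmetic can be scripted directly. The sketch below uses the 27 run SN ratios from Table 2 and assumes the 9-runs-per-level grouping used in the computation above for the first factor; small differences from the printed averages come from rounding in the table:

```python
# 27 run SN ratios (Table 2), grouped 9 runs per level for the
# first control factor of the L27 array.
sn = [18.6, 18.4, 18.3, 18.9, 18.9, 18.9, 19.8, 19.9, 19.9,   # level 1
      20.0, 20.0, 20.2, 18.4, 18.4, 18.2, 18.5, 18.7, 18.6,   # level 2
      18.9, 19.1, 19.0, 19.8, 19.6, 19.9, 18.4, 18.2, 18.3]   # level 3

overall_avg = sum(sn) / len(sn)
level_avgs = [sum(sn[i * 9:(i + 1) * 9]) / 9 for i in range(3)]

# Predicted optimum: overall average plus the improvement of the
# best level over the overall average (one factor shown here; the
# full prediction sums this deviation over all significant factors).
best = max(level_avgs)
predicted = overall_avg + (best - overall_avg)
```

The same loop extended over every factor column reproduces the additive prediction (S/N)opt described above.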
Table 2
Summary of results for the L27 array at 60 mph

Run   SN     β
1     18.6   0.724
2     18.4   0.730
3     18.3   0.740
4     18.9   0.749
5     18.9   0.802
6     18.9   0.755
7     19.8   0.802
8     19.9   0.754
9     19.9   0.803
10    20.0   0.669
11    20.0   0.669
12    20.2   0.729
13    18.4   0.818
14    18.4   0.775
15    18.2   0.741
16    18.5   0.826
17    18.7   0.852
18    18.6   0.830
19    18.9   0.668
20    19.1   0.730
21    19.0   0.722
22    19.8   0.785
23    19.6   0.762
24    19.9   0.791
25    18.4   0.852
26    18.2   0.808
27    18.3   0.771

a A, torsion bar rate; B, gear ratio; C, valve curve; D, yoke friction; E, column bearing radial stiffness; F, column bearing radial damping; G, rag joint torsional stiffness; H, gear housing support radial stiffness; I, suspension torsion bar rate; J, shock rate; K, toe alignment; L, camber alignment.
Table 3
SN ratios for control factor levels at 60 mph: level averages and deltas for factors A to L. Most deltas are small (0.00 to 0.22); one factor shows a delta of 1.54, with level averages of 19.89, 18.84, and 18.35.
Table 4
Sensitivities for control factor levels at 60 mph

Level   A      B      C      D      E      F      G      H      I      J      K      L
1       0.762  0.709  0.782  0.773  0.766  0.764  0.762  0.751  0.761  0.765  0.756  0.789
2       0.768  0.775  0.756  0.771  0.765  0.765  0.766  0.765  0.764  0.764  0.766  0.764
3       0.765  0.811  0.757  0.752  0.765  0.767  0.768  0.779  0.770  0.766  0.773  0.742
Delta   0.006  0.102  0.026  0.021  0.001  0.003  0.006  0.028  0.009  0.002  0.017  0.047
The predicted optimum SN ratio, (S/N)opt, was obtained from the overall average (S/N)av = 19.03 by adding the deviations (S/N)n,opt − 19.03 for the selected control factor levels.
Figure 7
SN ratio comparison for control factor levels at 60 mph
8. Conclusions

The close correlation between predicted and actual SN ratios and sensitivities for the optimum, nominal, and worst-case configurations is shown in Table 5.
Figure 8
Sensitivity (β) comparison for control factor levels at 60 mph
Table 5
Predicted versus actual SN ratios (dB)

                45 mph               60 mph               75 mph
Configuration   Predicted  Actual    Predicted  Actual    Predicted  Actual
Optimum         17.75      17.80     17.94      18.01     17.85      17.95
Nominal         19.47      19.41     19.65      19.58     19.57      19.52
Worst-case      19.94      19.90     20.33      20.27     20.31      20.31
Acknowledgments We would like to thank the following additional case study team members from
the Ford Motor Company, Car Product Development, for their contributions to the project: Chris
D. Clark, Paul Hurley, Phil Lenius, and Bernard Los.
We would also like to thank Don Tandy and the
people from the Vehicle Dynamics Section, Chassis
Engineering, Light Truck Engineering, at Ford Motor Company for their work in developing the
ADAMS vehicle model that made this case study
possible. Finally, we would like to thank Genichi
Taguchi and Shin Taguchi of the American Supplier
Institute and John King and John Coleman at the
Ford Design Institute for their guidance and assistance with this project.
CASE 64
Figure 1
Measurement method for texture of omelet

Figure 2
Comparative measurement of omelet

Using the data of experiment 1 (Tables 1 and 2), the SN ratio and sensitivity were calculated as follows. For area A:

Total variation:

ST = 9194² + 9133² + ⋯ + 70,978² + 63,439²   (f = 30)   (1)

Effective divider:

r = 0.05² + 0.10² + 0.15² + 0.20² + 0.25² = 0.1375   (2)

Linear equations:

L1 = (0.05)(9194) + ⋯ + (0.25)(68,710) = 36,384   (3)
⋮
L6 = (0.05)(3371) + ⋯ + (0.25)(63,439) = 30,662   (4)

Variation of proportional term:

Sβ = (L1 + L2 + ⋯ + L6)²/6r = 59,129,381,251   (f = 1)   (5)

Variation of differences between proportional terms:

SN×β = 13,926,068   (f = 1)   (6)

Error variation:

Se = ST − Sβ − SN×β − SP×β = 947,935,606   (f = 26)   (7)

Error variance:

Ve = Se/26 = 36,459,062   (8)

Total noise variance:

VN = (SN×β + Se)/27 = 35,624,506   (9)

SN ratio:

η1 = 10 log [(1/6r)(Sβ − Ve)/VN] = 33.03 dB   (10)

Sensitivity:

S1 = 10 log (1/6r)(Sβ − Ve) = 108.55 dB   (11)

For area B, the y's in Table 2 give

y1 = 102,728 + 121,484 + 100,767 + 126,572 = 451,551   (12)

Total variation:

ST = 102,728² + ⋯ + 150,599² = 221,531,535,262   (f = 12)   (13)

Sm = (102,728 + ⋯ + 150,599)²/(3)(4) = 213,712,627,107   (f = 1)   (14)

SP = (y1² + y2² + y3²)/4 − Sm = 4,011,867,234   (f = 2)   (15)
Table 1
Measured texture data for area A (experiment 1) (dyn)
M1
M2
M3
M4
M5
Linear
Equation
P1
N1
N2
9,194
9,133
23,721
24,395
38,309
38,860
53,142
53,080
68,710
67,484
L1
L2
P2
N1
N2
7,171
15,323
22,066
33,405
38,983
49,893
55,839
65,952
72,572
81,888
L3
L4
P3
N1
N2
4,107
3,371
14,098
11,830
32,424
28,379
52,038
45,970
70,978
63,439
L5
L6
Error variation:

Se = ST − Sm − SP = 3,807,040,921   (f = 9)   (16)

Error variance:

Ve = Se/9 = 423,004,547   (17)

SN ratio:

η2 = 10 log [(1/12)(Sm − Ve)/Ve] = 16.23 dB   (18)

Sensitivity:

S2 = 10 log (1/12)(Sm − Ve) = 102.50 dB   (19)
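The area B figures can be checked numerically from the twelve measured values in Table 2. A short Python check; the computed quantities agree closely with the printed values:

```python
import math

# The twelve area B measurements (Table 2): three areas P1-P3,
# each with N1/N2 min and max values.
P1 = [102728, 121484, 100767, 126572]
P2 = [130004, 158015, 92860, 139750]
P3 = [159609, 181613, 137421, 150599]
data = P1 + P2 + P3

ST = sum(v * v for v in data)                     # total variation (13)
total = sum(data)
Sm = total ** 2 / 12                              # mean variation (14)
y1, y2, y3 = sum(P1), sum(P2), sum(P3)
SP = (y1**2 + y2**2 + y3**2) / 4 - Sm             # P effect (15)
Se = ST - Sm - SP                                 # error variation (16)
Ve = Se / 9                                       # error variance (17)
eta2 = 10 * math.log10((Sm - Ve) / 12 / Ve)       # SN ratio (18)
S2 = 10 * math.log10((Sm - Ve) / 12)              # sensitivity (19)
```

Running this reproduces η2 ≈ 16.23 dB and S2 ≈ 102.50 dB, confirming the decomposition above.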
Table 2
Measured texture data for area B (experiment 1) (dyn)
P1
P2
P3
N1
min.
max.
102,728
121,484
130,004
158,015
159,609
181,613
N2
min.
max.
100,767
126,572
92,860
139,750
137,421
150,599
y1
y2
y3
Total
Table 3
Control factors and levels: eight control factors A to H, including A (yes/no), B (15/30), C (50/100), E (10/20), and F (low/mid/high); the levels of H are set according to the level of F (F1/F2/F3).
Figure 3
Response graphs (SN ratio and sensitivity, dB, by control factor level)
Table 4
Estimation and confirmation of gain (dB)

            SN Ratio                    Sensitivity
Condition   Estimation  Confirmation   Estimation  Confirmation
Optimal     -51.10      -51.90         213.13      212.41
Initial     -58.08      -54.60         204.84      203.35
Gain        6.98        2.70           8.29        9.06
Figure 4
Measured data under current and optimal conditions
Reference
Toshio Kanatsuki, 2000. Improvement of the taste of
thick-baked eggs. Quality Engineering, Vol. 8, No. 4,
pp. 23-30.
CASE 65
1. Introduction
Denition of Chatter
Chatter occurs when a wiper blade does not take
the proper set and skips across the windshield during operation, causing both acoustic and wipe quality deterioration, which significantly affect customer satisfaction. From the results of previous studies it was determined that the components that contribute most to chatter are the arms and blades. This study was conducted to understand more thoroughly the specific design factors relating to arm/blade combinations that affect system performance.
Figure 1
Engineered system
The ideal function is

Yi = βMi
where Yi is the actual time it takes the blade to reach a fixed point on the windshield at the ith cycle, Mi the theoretical time for the blade to reach a fixed point on the windshield at the ith cycle, with Mi = i/(motor rpm). The ideal system function is shown in
Figure 2.
Three rpm values were used in the experiment: 40, 55, and 70. In general, due to the noise effects, the actual time would always be longer than the theoretical time determined by the design (i.e., β ≥ 1, and β = 1 would be the ideal case).
In this experiment, other measurements were
also observed: (1) the lateral load on the wiper arm
and (2) the normal load on the wiper arm.
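Under this formulation, β is simply the least-squares slope through the origin of actual versus theoretical times. A minimal sketch with hypothetical timing data (rpm = 40, one of the experiment's settings):

```python
# Estimate beta for Y_i = beta * M_i, where M_i = i / rpm is the
# theoretical time (min) to reach a fixed point at the ith cycle.
# The measured times Y below are hypothetical, not the study's data.
rpm = 40.0
cycles = [1, 2, 3, 4, 5]
M = [i / rpm for i in cycles]                    # theoretical times
Y = [0.0265, 0.0528, 0.0799, 0.1062, 0.1331]     # measured times

# Least-squares slope through the origin; beta > 1 means the blade
# lags the theoretical schedule, beta = 1 is ideal.
beta = sum(m * y for m, y in zip(M, Y)) / sum(m * m for m in M)
```

Repeating this fit for each run and noise condition yields the β values tabulated later, and the scatter about the fitted line feeds the SN ratio.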
Noise Strategy
In general, noises can be categorized as follows:
Design of Experiment
The L18 orthogonal array shown in Table 2 was selected for the design of experiment. It may be seen
that under each run there are 12 tests at different
rpms and combined noise conditions.
Test Setup
A special test fixture was built for this experiment,
and three sensors were attached to the windshield
to record a signal when a wiper blade passes them.
Strain gauges were attached to the wiper arm for
continuous recording of lateral and normal loads to
the wiper arm. Typical testing results are shown in
Figure 3.
Test Results
In this study, only the analysis of time measurement
data is presented. Under each test, the system was
run for a certain period of operating time, and the
actual accumulated operating times for the blade to
pass a xed point were recorded against the ideal
operating time, determined by the nominal rpm setting of the constant-rpm motor. A testing data plot
for this measurement is shown in Figure 4.
The regression analyses were conducted for
those data and the slopes of the regression lines
Table 1
Control factors and levels
Level
Control Factor
A:
B:
superstructure rigidity
Median
High
Low
C:
vertebra shape
Straight
Concave
Convex
D:
spring force
Low
High
E:
prole geometry
New design
F:
rubber material
II
III
G:
graphite
Current
Higher graphite
None
H: chlorination
Median
High
Low
I:
Current clip
Modied
Twin screw
attachment method
Table 2
Design of experiment (under each run, 12 tests were performed: rpm 1-3 × sensors S1/S2 × wet/dry)

Run   D A B C E F G H I
1     1 1 1 1 1 1 1 1 1
2     1 1 2 2 2 2 2 2 1
3     1 1 3 3 3 3 3 3 1
4     1 2 1 1 2 2 3 3 2
5     1 2 2 2 3 3 1 1 2
6     1 2 3 3 1 1 2 2 2
7     1 3 1 2 1 3 2 3 3
8     1 3 2 3 2 1 3 1 3
9     1 3 3 1 3 2 1 2 3
10    2 1 2 3 3 2 2 1 2
11    2 1 2 1 1 3 3 2 2
12    2 1 3 2 2 1 1 3 2
13    2 2 1 2 3 1 3 2 3
14    2 2 2 3 1 2 1 3 3
15    2 2 3 1 2 3 2 1 3
16    2 3 1 3 2 3 1 2 1
17    2 3 2 1 3 1 2 3 1
18    2 3 3 2 1 2 3 1 1
Figure 3
Plot of typical testing data
Figure 4
Typical testing data plot
Table 3
SN ratio and β value for each of the 18 runs, with the corresponding control factor levels.
were estimated; the mean square was used to estimate the variation. In Table 3, the SN ratio and β values are evaluated by

SN ratio = 10 log10 (β²/σ²)

The optimal and baseline conditions were predicted by summing level-average deviations from the overall averages (SN average −34.647 dB; β average 1.08249):

β (optimal) = 1.08249 + (1.0783 − 1.08249) + (1.0792 − 1.08249) + (1.0765 − 1.08249) + (1.0767 − 1.08249) + (1.0752 − 1.08249) + (1.0763 − 1.08249) + (1.0807 − 1.08249) + (1.0768 − 1.08249) + (1.0737 − 1.08249) = 1.033

β (baseline) = 1.08249 + (1.0898 − 1.08249) + (1.0792 − 1.08249) + (1.0885 − 1.08249) + (1.0852 − 1.08249) + (1.0752 − 1.08249) + (1.0763 − 1.08249) + (1.0788 − 1.08249) + (1.0780 − 1.08249) + (1.0904 − 1.08249) = 1.082

The SN ratios for the optimal and baseline conditions were predicted in the same way from the run SN ratios, using the overall average of −34.647 dB.
Figure 5
SN ratio chart
6. Conclusions

This robust design study for a windshield wiper system indicates that the chlorination, graphite, arm rigidity, and superstructure rigidity have significant impacts on the optimal wiper system. Load distribution on the blade has a minimal influence. As a
Figure 6
β value chart
Table 4
Optimal conditions and prediction: factor settings for the optimal and baseline designs (optimal settings include low spring force, the Geo 1 blade profile, material 1 rubber, current graphite, median chlorination, and the current clip attachment), with predicted β = 1.033 (optimal) versus 1.082 (baseline) and predicted SN = −24.71 dB versus −35.13 dB.
Table 5
Confirmation test results

            Predicted            Confirmed
Condition   SN Ratio    β        SN Ratio    β
Optimal     -24.71      1.033    -25.41      1.011
Baseline    -35.13      1.082    -36.81      1.01
Gain        10.42                11.4
and Brian Thomas and Don Hutter for their support during the course of this work.
CASE 66
1. Introduction
We established a fabrication facility for accelerometer sensors, evolving from a pilot line to moderate production levels. We now required a substantial increase in production capacity. To help define the equipment and workforce additions needed to maintain the target production levels, we used a simulation software package to model the fabrication line and robust design techniques to find an optimum stable configuration.

We developed a model of the present fabrication line and validated its behavior against our existing
Figure 1
Flow diagram
workforce and equipment availability variations combined into the outer array noise.
The process flow for the simulation model (Figure 1) was derived from the product traveler (the formal step-by-step definition of the process sequence). The accelerometer sensor fabrication process resembles semiconductor wafer fabrication in that many workstations (e.g., cleaning, masking, and inspection) are visited multiple times by a product wafer as it is processed to completion. The product wafers start in lots processed together until the structures are fully formed. They are then regrouped into smaller batches for the remaining tail-end processing.
The key response parameter of a fabrication line is the number of products shipped. The number of products shipped will ideally be proportional to the number of product lot starts, at least for light-to-moderate production levels. The degree to which the product ship rate deviates from this proportionality defines the basic line stability. The onset of line saturation, when further increases in the rate of lot starts do not increase the rate of products shipped, defines line capacity. Therefore, we selected the lot start rate as the primary signal factor (M1) for the fabrication ideal function.
We selected the strategy of changing the workweek schedule to adjust line capacity while keeping
workforce and equipment constant. We included
days per workweek as the second signal factor (M2)
to evaluate its effectiveness and sensitivity.
Figure 2
P-diagram for the fabrication line: signal factors (lot starts, work schedule), control factors (line element configuration), noise factors (resource availability variation), response (product shipped)
3. Design of Experiment
Figure 2 shows the P-diagram for the fabrication
line ideal function. The zero-point proportional
double dynamic model was selected to establish the
product ship rate dependence on lot start rate (start
none, ship none). The effect of varying the schedule was represented as

Y = [β1 + β2(M2 − M20)]M1

where Y is the product ship rate (wafers/day), M1 the lot start rate (lots/week), M2 the scheduled workweek (workdays/week), and M20 the nominal workweek.
Table 1
Signal and noise factors

Factor                        Levels (value / code)
M1: lot start rate            2, 3, 4, 5, 6 lots/week (codes 2-6)
M2: scheduled workweek        5, 6, 7 workdays/week (codes -1, 0, 1)
N: resource availability      Worst (-1), Best (+1)
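Once (M1, M2, Y) simulation results are in hand, the two coefficients of the double-dynamic form can be estimated by least squares. The sketch below uses hypothetical data and assumes the slope-adjustment form Y = (β1 + β2(M2 − M20))M1 with M20 = 6:

```python
# Hypothetical (lot starts/week, workdays/week, wafers shipped/day)
# observations; M20 = 6 is taken as the nominal workweek.
M20 = 6
obs = [(2, 5, 2.8), (4, 5, 5.9), (6, 5, 8.7),
       (2, 6, 3.1), (4, 6, 6.2), (6, 6, 9.2),
       (2, 7, 3.3), (4, 7, 6.6), (6, 7, 10.1)]

# Normal equations for Y = b1*x1 + b2*x2 with x1 = M1 and
# x2 = (M2 - M20) * M1 (both regressors pass through zero).
s11 = sum(m1 * m1 for m1, m2, y in obs)
s12 = sum(m1 * (m2 - M20) * m1 for m1, m2, y in obs)
s22 = sum(((m2 - M20) * m1) ** 2 for m1, m2, y in obs)
t1 = sum(m1 * y for m1, m2, y in obs)
t2 = sum((m2 - M20) * m1 * y for m1, m2, y in obs)
det = s11 * s22 - s12 * s12
b1 = (t1 * s22 - t2 * s12) / det
b2 = (s11 * t2 - s12 * t1) / det
```

Here b1 estimates the ship rate per lot start at the nominal workweek, and b2 how much that slope changes per extra scheduled workday.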
Table 2
Control factors

A: meas; B: spcr1; C: spcr2; D: etch1; E: etch2; F: snsr1; G: snsr2; H: PC cln; I: D cln; J: B cln; K: oven - Baseline / Added
L: metal dep - three metal deposition strategies
M: coater; O: stepper; P: develop; Q: descum; T: batch - baseline and augmented levels
N: process - Baseline / Improved / Improved HD
R: screen - screening-station configurations (R2 = 2/1, R3 = 3/2)
S: saw - number of saws
U: staff - Base / Base+1 / Base+2
V, W: empty
noise cases required that 60 simulations be evaluated for each line configuration.

The inner array factors are described in Table 2. Control factors A to K, M, and O to S represent fabrication line elements viewed as determiners of production capacity. They were evaluated with baseline values as level 1 and augmented values as level 2 (and 3). Three strategies involving the number and type of metal deposition systems used in the process were represented in factor L. Two process improvement strategies were compared with the current
Table 3
Inner array (L36) response summary: β1, β2, SN1 (dB), and SN2 (dB) for runs 1 to 18.
Table 3 (Continued)
β1, β2, SN1 (dB), and SN2 (dB) for runs 19 to 36.
4. Experimental Procedure

All simulations and analyses were run using Pentium-class personal computers. Experiment definitions and results were handled using Quattro Pro for Windows 5.0 [4], and the model simulations were created and executed using Taylor II software [1]. We used a previously developed spreadsheet template with associated macro programs to generate the simulation model control factor combinations according to the L36 inner array. Each of these 36

5. Results
Figure 4
Average factorial effects
Table 4
Summary of inner array response ANOVAs: percent contribution of each factor to β1, β2, SN1, and SN2, with the chosen level and reason for each optimization decision (A1 baseline; B1 baseline, no effect; C2, D2, E2, F2 added; G1, H1 baseline, no effect; I1, J1 baseline; K2 added; L3; M3; N2 improved; O1; P2; Q2; R3; S2; T1; U3 base+2, treated as a scaling factor). Factor U (staff) has by far the largest contributions.
concerns. Factor A (number of verification measurement stations) had been an occasional bottleneck in the past, so we changed to level A2 (added). The largest gain in performance due to factor R (number of screening stations) is between R1 and R2; R3 shows less improvement over R2. Since R2 (2/1) demands less equipment and less space, we selected that level. Because the product
Table 5
Confirmation run results summary: β1, β2, SN1 (dB), and SN2 (dB) for confirmation runs 37 and 38.
Figure 5
Optimum conditions run results
References
1. Taylor II for Windows 4.2, 1997. Orem, UT: F&H
Simulations.
2. L. Farey, 1994. TRW/TED accelerometer wafer
process production facility; manufacturing simulation. TRW internal report for R. Bilyak.
3. G. Taguchi and S. Konishi, 1987. Orthogonal Arrays
and Linear Graphs. Dearborn, MI: ASI Press.
4. Quattro Pro for Windows 5.0, 1993. Scotts Valley, CA:
Borland International.
5. ASI, 1997. Robust design using Taguchi methods.
Handout for the Robust Design Workshop.
Part IV

Mahalanobis-Taguchi System (MTS)

Human Performance (Cases 67-68)
Inspection (Cases 69-73)
Medical Diagnosis (Cases 74-78)
Product (Cases 79-80)
CASE 67
1. Objective of Experiment
To collect the necessary data, we asked 83 testees to answer a questionnaire and create a program. The 83 testees broadly covered programmers, businesspeople, undergraduate students, and college students, all with some knowledge of programming. The questionnaire consisted of 56 questions related to programming, and each testee was asked to respond on the following seven-point scale: positively no (-3), no (-2), somewhat no (-1), yes or no (0), somewhat yes (1), yes (2), and positively yes (3). The program that each testee worked on was a simple mask calculation (BASE + BALL = GAMES: each letter corresponds to a single digit, and the same letter must always stand for the same digit). That is, we believed the results would show that people who were able to write such a program would answer in a certain way. Our model is shown in Figure 1.
Figure 2 shows a histogram of Mahalanobis distances for all data. Given an appropriate threshold, we can completely separate persons with a sense of programming from those with no sense of programming. According to this result, we concluded that the Mahalanobis distance can be applied to questionnaire data.
However, when we judge whether or not a person has a sense of programming, the probability of making a correct judgment is significant. That is, we need to ascertain the reliability of the base space that we have formed.
Provided that we have numerous data, for example, 300 normal data, using 285 normal data to create a base space, we compute the distance from the base space for each of the remaining 15 data. As long as the distances of these 15 sets of data are distributed around 1, the base space created turns out to be reliable. If the distance becomes larger, we conclude that the number of data needed to construct the base space is lacking or that the items selected are inappropriate.
Yet, due to the insufficient number of data for creation of the base space in this study, even if only five pieces of data are removed from the base space, the reliability of the space tends to be lowered dramatically. Therefore, by excluding only one data point from the base space, we created a base space without the one point excluded and calculated the distance of the point excluded. As a next step, we removed another data point and restored the point excluded previously to the base space. Again creating a new base space without one piece of data,
Figure 1
Base space for programming ability
we computed the distance for the data currently removed. By reiterating this process, we evaluated the
reliability of the standard space.
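The leave-one-out procedure described above can be sketched in a few lines (an illustrative sketch, not the study's code; the data are synthetic, and the division by the item count follows the usual MTS convention that a reliable base space yields distances around 1):

```python
import numpy as np

def loo_mahalanobis(data):
    """For each observation, build a base space from the remaining rows and
    return the Mahalanobis distance of the excluded row, scaled by the
    number of items so that a reliable base space gives values around 1."""
    n, k = data.shape
    d2 = np.empty(n)
    for i in range(n):
        rest = np.delete(data, i, axis=0)
        mean = rest.mean(axis=0)
        cov = np.cov(rest, rowvar=False)
        inv = np.linalg.pinv(cov)        # pseudo-inverse guards against singularity
        diff = data[i] - mean
        d2[i] = diff @ inv @ diff / k    # divide by item count (MTS convention)
    return d2

rng = np.random.default_rng(0)
base = rng.normal(size=(300, 5))         # 300 normal observations, 5 items
print(round(loo_mahalanobis(base).mean(), 2))  # distances distributed around 1
```

With ample, well-behaved normal data the leave-one-out distances cluster near 1; a drifting average signals too few data or poorly chosen items, exactly the diagnosis described above.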
Figure 3 shows the results obtained using the reliability evaluation method described above. The result reveals that the current base space cannot be used for a judgment, because persons with no sense of programming are also distant from the base space. Such a base space cannot be used for judgment, primarily because of the small number of data.
3. Selection of Items
There are some reasons that the distance for a person with no sense of programming resulted in a
Figure 3
Data in base space for reliability evaluation
Figure 2
Distances from base space for persons with no sense of
programming
(1/n) Σ_{i=1}^{n} 1/Di²   (1)
Figure 4
Response graphs for item selection
Figure 4 demonstrates that all of the items affected a judgment positively. Now, by eliminating the least effective item, for question 9, we selected effective items once again. In this second item selection, only question 13 showed an increasing trend from left to right. Then, excluding question 13, we returned question 9 to the base space because it did not have an increasing trend but only a small effect (in fact, question 9 was removed in a later item selection because it did show a rising trend from left to right).
In the third and later item selections, every time we had at least one item with a rising trend from left to right, we eliminated it (or them) from the base space and selected other items. Nevertheless, once the total number of items was reduced to about 50, all of the items turned out to be effective, or to show a decreasing trend from left to right, once again. At this point, fixing the 20 largest items and keeping them fixed without assigning them to an orthogonal array, but assigning the remaining 30 items to an L32 orthogonal array, we selected effective items. In sum, by changing the orthogonal array used, we attempted to seek out items that showed poor reproducibility. Through this process, we expected to find only high-reproducibility items. Nevertheless, since we removed items, even if all the items indicated were more or less effective, the distance calculated from abnormal data was reduced. In terms of the SN ratio, the reduction in the number of items leads to a smaller SN ratio. However, since, compared to this diminution of the SN ratio, the average distance outside the base space (with no sense) approximates that in the base space, we concluded that this analysis procedure was improved as a whole.
Through this procedure, we reduced the number of items from 60 to 33, as shown in Figure 5. By doing so, persons with no sense of programming who were excluded from the base space started to overlap 25% of the base space. To make them overlap more, we needed to increase the data. The contents of the questionnaire are abstracted in the list below. All of the questions were composed of items that some software- or programming-related books regard as important for programmers and that were considered essential from our empirical standpoint. The list includes only items that have high reproducibility and are associated with a sense of programming.
Figure 5
After-item-selection data
Table 1
Average and standard deviation for each effective question

            With No Sense          With Sense
Question    Average   Std. Dev.    Average   Std. Dev.    Difference between Averages
            0.17      1.61         0.18      1.51         0.01
            0.67      1.42         0.29      1.40         0.37
            0.05      1.55         0.12      1.76         0.16
            2.12      0.92         1.76      1.09         0.36
            0.41      1.53         0.35      1.11         0.76
10          0.58      1.56         0.65      1.90         0.07
12          1.29      1.41         1.53      1.37         0.24
14          0.47      1.60         1.24      1.39         0.77
16          0.32      1.63         1.35      0.86         1.03
17          0.32      1.97         0.06      2.14         0.38
19          0.24      1.48         0.18      1.55         0.07
20          0.53      1.25         0.29      1.53         0.24
21          0.77      1.41         0.40      1.22         0.37
22          1.44      1.45         1.14      1.50         0.30
23          0.85      1.42         0.29      1.36         0.55
24          0.41      1.93         0.53      1.81         0.12
26          0.39      1.50         0.76      1.52         0.37
27          0.14      1.82         0.41      1.87         0.55
30          0.08      1.71         0.41      1.97         0.34
31          0.31      1.76         0.29      1.76         0.02
35          1.09      1.57         0.76      1.68         0.33
37          1.33      1.40         1.18      1.29         0.16
38          0.03      1.69         0.41      1.28         0.38
39          0.65      1.45         0.29      1.45         0.36
41          0.33      1.27         0.35      1.54         0.02
47          0.75      1.19         0.24      1.52         0.52
50          0.67      1.14         0.29      1.05         0.37
51          0.63      1.77         0.53      1.87         1.15
54          0.19      1.87         0.53      1.94         0.72
55          0.19      1.58         0.18      1.38         0.01
56          0.16      1.60         0.06      1.78         0.10
58          1.12      0.33         1.00      0.00         0.12
Table 2
Distances from base space, using data classified by type of program (distance classes 0 to more than 7; total 66)

Table 3
Distances from base space, using after-item-selection data classified by type of program (distance classes 0 to more than 7; totals 66 and 17)
56. Do you dislike having to socialize with others, for example, at a party?
58. Gender
References
Kei Takada, Muneo Takahashi, Narushi Yamanouchi, and Hiroshi Yano, 1997. The study on evaluation of capability and errors in programming. Quality Engineering, Vol. 5, No. 6, pp. 38-44.
Kei Takada, Kazuhito Takahashi, and Hiroshi Yano, 1999. The study of reappearance of experiment in evaluation of programmers' ability. Quality Engineering, Vol. 6, No. 1, pp. 39-47.
Kei Takada, Kazuhito Takahashi, and Hiroshi Yano, 1999. A prediction on ability of programming from questionnaire using MTS method. Quality Engineering, Vol. 7, No. 1, pp. 65-72.
Muneo Takahashi, Kei Takada, Narushi Yamanouchi, and Hiroshi Yano, 1996. A basic study on the evaluation of software error detecting capability. Quality Engineering, Vol. 4, No. 3, pp. 45-53.
CASE 68
1. Introduction
Table 1
Experimental difference between two programs
Testee: sorting program, students; mask calculation program, businesspeople and students.
Programming time: sorting program, limited; mask calculation program, unlimited.
Programming language: any language.
variables, X1, X2, ..., X60. Then, with X1, X2, ..., X60, we created orthogonalized variables, x1, x2, ..., x60:

x1 = X1                                    (1)
x2 = X2 - b21 x1                           (2)
...
x60 = X60 - b60,1 x1 - ... - b60,59 x59    (3)

Y1 = x1 / sqrt(V1)                         (4)
Y2 = x2 / sqrt(V2)                         (5)
...
Y60 = x60 / sqrt(V60)                      (6)

D² = (1/60)(Y1² + Y2² + ... + Y60²)        (7)
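The Schmidt orthogonalization and normalization of equations (1) to (7) can be sketched as follows (an illustrative sketch, not the study's code; it assumes the columns of X have already been normalized, and the b coefficients are the regression slopes of each variable on the previously orthogonalized ones):

```python
import numpy as np

def schmidt_orthogonalize(X):
    """Successively remove from each variable its regression on the
    previously orthogonalized variables (x1 = X1, x2 = X2 - b21*x1, ...),
    then divide each x by the square root of its variance to obtain Y."""
    n, k = X.shape
    x = np.empty_like(X, dtype=float)
    for j in range(k):
        x[:, j] = X[:, j]
        for i in range(j):
            # regression coefficient b_ji of X_j on the orthogonalized x_i
            b = x[:, j] @ x[:, i] / (x[:, i] @ x[:, i])
            x[:, j] = x[:, j] - b * x[:, i]
    V = (x ** 2).mean(axis=0)      # variance of each orthogonalized variable
    Y = x / np.sqrt(V)             # normalized orthogonal variables
    return Y

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))      # stand-in data, not the 60-item questionnaire
Y = schmidt_orthogonalize(X)
print(np.round(Y.T @ Y / 100, 1))  # approximately the identity matrix
```

Because each x is orthogonal to all earlier ones and each Y has unit variance, Yᵀ Y / n is the identity, which is what makes D² in equation (7) a simple average of squared components.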
Table 2
Classification of normal and abnormal data for software programming

                     Normal Data,       Abnormal Data:   Abnormal Data:
                     Normal Program     Good Program     Poor Program
Judgment criterion   A program after removing abnormal data
Number of data       102                14               23
Table 3
Database for the base space (Items 1 to 60): raw measurements; average and standard deviation of each item; normalized variable X; regression coefficient b; variance V; and normalized orthogonal variable Y.
Table 4
Normalized orthogonal variables of abnormal data: raw measurements and orthogonal variables for Items 1 to 60, for testees who wrote good programs and testees who wrote poor programs.
Table 5
Mahalanobis distance of abnormal data calculated from a base space constructed by the group of people who wrote normal programs (distance classes 0 to 10; totals: base space 102, good program 14, poor program 23)

In accordance with the procedure below, using orthogonal normalized variables Y1, Y2, ..., Y60, we computed larger-the-better SN ratios η1, η2, ..., η60 for A1, A2, ..., A60, and rearranged each item's order such that we could eliminate items leading to a smaller SN ratio:

A1: only Y1 is used
A2: Y1 and Y2 are used
...

Then we detail the calculation of the SN ratio for testees creating a good program (refer to Table 6) as follows.
SN ratio for A1:
η1 = -10 log [(1/14)(1/1.20² + 1/1.20² + ... + 1/4.99²)] = -5.90 dB    (8)

SN ratio for A2:
η2 = -10 log [(1/14)(1/0.86² + 1/1.38² + ... + 1/3.57²)] = -5.21 dB    (9)

SN ratio for A60:
η60 = -10 log [(1/14)(1/2.18² + 1/2.52² + ... + 1/1.54²)] = -4.07 dB    (10)
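The larger-the-better SN ratio of equations (8) to (10) can be checked numerically (a sketch; the six distances below are only the legible A1 values from Table 6, not the full set of 14, so the result differs from equation (8)):

```python
import math

def sn_larger_the_better(values):
    """eta = -10 log10((1/n) * sum(1/y_i^2)): small distances are penalized,
    so a subset of items that keeps abnormal data far away scores higher."""
    n = len(values)
    return -10 * math.log10(sum(1 / y ** 2 for y in values) / n)

# Hypothetical subset of distances (partial A1 column of Table 6):
d = [1.20, 1.20, 0.69, 1.16, 0.22, 4.99]
print(round(sn_larger_the_better(d), 2))  # -6.19
```

Note how the single small distance 0.22 dominates the sum of 1/y² and drags the SN ratio down, which is exactly why item subsets that leave some abnormal testees close to the base space are eliminated.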
Figure 1
Response graphs for item selection (for testees creating a good program)
Figure 2
Response graphs for item selection (for testees creating a poor program)
Table 6
Rearranging data for testees creating a good program

No.    A1      A2      A60
1      1.20    0.86    2.18
2      1.20    1.38    2.52
       0.69    1.94    1.53
       1.16    0.96    2.03
10     0.22    0.18    1.72
14     4.99    3.57    1.54
Ȳ1,m1 = (1.20 + 1.20 + ... + 0.69)/8 = 0.38      (11)
Ȳ1,m2 = (1.16 + ... + (-4.99))/6 = 0.49           (12)
Ȳ1,m3 = (0.73 + ... + 0.22)/8 = 0.02              (13)
Ȳ1,m4 = (0.26 + ... + 1.16)/15 = 0.34             (14)
β = 0.19                                           (15)
Figure 3
Rearrangement of items by Schmidt orthogonal expansion
Table 7
Normalized orthogonal variables of abnormal data

No.   m Level   Y1     Y2     Y60
1     m1        1.20   0.35   2.84
2     m1        1.20   0.35   5.38
8     m1        0.69   2.04   0.98
9     m2        1.16   1.59   0.85
10    m2        0.22   1.98   2.87
14    m2        4.99   1.39   0.92
1     m2        0.73   1.07   1.71
2     m3        0.22   1.98   0.53
15    m3        0.22   0.23   2.06
16    m4        0.26   0.46   3.67
17    m4        0.22   0.96   1.69
23    m4        1.16   0.85   2.19
judged correctly. However, the data do not necessarily indicate an accurate judgment.

Total variation:
ST = Ȳ1,m1² + Ȳ1,m2² + Ȳ1,m3² + Ȳ1,m4² = 0.50    (16)

Variation of β:
Sβ = 0.36    (17)

Effective divider:
r = M1² + M2² + M3² + M4² = 10    (18)

Error variation:
Se = ST - Sβ = 0.14    (19)

Error variance:
Ve = Se / f = 0.05  (f = 3)    (20)

SN ratio:
η = 10 log {[(1/r)(Sβ - Ve)] / Ve} = -1.62 dB    (21)
Table 8
Directional judgment using all items

Good Program                          Poor Program
No.   M (Good)    M (Poor)     No.   M (Good)    M (Poor)
1     2125.61     780.12       1     12.51       391.49
2     3973.87     527.55       2     118.96      10.84
3     4842.26     428.87       3     7785.43     316.39
4     1027.02     140.29       4     2097.13     1702.27
5     3437.98     897.78       5     3453.95     1185.03
6     3258.63     631.64       6     2755.32     1036.88
7     5805.01     594.37       7     1602.08     966.77
8     5062.20     753.75       8     1070.22     731.54
9     2984.79     1713.75      9     7412.83     192.16
10    4531.27     464.39       10    2571.95     713.33
11    1095.78     321.21       11    1006.82     1890.32
12    4209.38     1084.77      12    905.96      216.42
13    1011.24     670.23       13    6814.81     2039.88
14    2580.05     170.22       14    1184.24     143.26
                               15    2647.86     1052.03
                               16    5393.82     121.56
                               17    589.65      964.47
                               18    3003.96     248.19
                               19    350.94      385.61
                               20    456.86      10.69
                               21    3193.06     71.63
                               22    3413.29     253.36
                               23    1104.85     287.06
Table 9
Directional judgment using only stable items

Good Program                          Poor Program
No.   M (Good)    M (Poor)     No.   M (Good)    M (Poor)
1     11.92       48.48        1     60.26       19.46
2     0.64        6.49         2     39.77       4.19
3     104.51      72.78        3     80.13       38.91
4     10.62       13.21        4     10.76       33.91
5     95.44       7.22         5     28.88       4.31
6     11.47       14.04        6     14.21       7.94
7     16.24       24.18        7     26.12       18.67
8     34.97       5.35         8     88.30       25.71
9     29.00       31.36        9     44.31       19.67
10    37.79       22.85        10    72.51       53.59
11    40.22       32.68        11    16.22       3.78
12    26.22       1.93         12    24.20       25.94
13    33.57       14.21        13    13.18       17.04
14    87.14       61.13        14    52.91       3.27
                               15    12.71       2.45
                               16    10.68       11.67
                               17    32.93       12.76
                               18    66.87       93.26
                               19    59.76       33.73
                               20    61.35       22.68
                               21    15.74       16.04
                               22    74.36       10.28
                               23    38.94       26.52
7. Results
Observing the Mahalanobis distances of normal
(base space) and abnormal data, it was found that
Figure 4
Response graphs for item selection
Figure 5
Rearrangement of items by Schmidt orthogonal expansion
Table 10
Directional judgment using abnormal data

Good Program              Poor Program
No.   Direction of        No.   Direction of
      Distance M                Distance M
1     53.17               1     40.63
2     18.92               2     9.31
3     55.64               3     45.33
4     3.11                4     18.44
5     28.49               5     23.91
6     7.04                6     19.25
7     17.66               7     7.10
8     12.49               8     38.26
9     29.90               9     17.12
10    15.67               10    35.41
11    23.18               11    5.86
12    0.47                12    29.15
13    7.77                13    12.16
14    60.18               14    9.74
                          15    4.83
                          16    3.24
                          17    4.79
                          18    61.08
                          19    29.90
                          20    11.70
                          21    16.68
                          22    3.69
                          23    22.33
References
1. Kei Takada, Kazuhito Takahashi, and Hiroshi Yano, 1999. A prediction on ability of programming from questionnaire using MTS method. Quality Engineering, Vol. 7, No. 1, pp. 65-72.
2. Genichi Taguchi, 1997. Application of Schmidt's orthogonalization in Mahalanobis space. Quality Engineering, Vol. 5, No. 6, pp. 11-15.
3. Hiroshi Miyeno, 1991. Aptitude Inspection of Programmers and SE. Tokyo: Ohmsha, pp. 105-106.
4. Takayuki Suzuki, Kei Takada, Muneo Takahashi, and Hiroshi Yano, 2000. A technique for the evaluation of programming ability based on MTS. Quality Engineering, Vol. 8, No. 4, pp. 57-64.
CASE 69
Abstract: In order to solve a problem with an appearance inspection of soldering using lasers, we applied the Mahalanobis-Taguchi system (MTS) to a soldering fine-pitch quad flat package (QFP) implementation process where erroneous judgment occurs quite often, and attempted to study (1) the selection of effective inspection logic and (2) the possibility of inspection using measurement results.
1. Introduction
Figure 1 illustrates the appearance of a QFP. The QFP studied in our research has 304 pins with a 0.5-mm pitch and is reflow soldered using tin-lead alloy solder paste onto a copper-foil circuit pattern on a glass epoxy board.
The NLB-5000, a laser inspection apparatus developed by Nagoya Electric Works Co., Ltd., was used. Figure 2 outlines this apparatus's basic principle of inspection. A He-Ne laser, following a prescribed route, is swept over a soldered area of an inspection subject. The solder forms a specific fillet shape whose angle changes the reflection angle of the laser. Since several receptor cells are located separately and independently, we can quantitatively capture the solder's shape according to changes in a laser swept over the solder's surface. Therefore, by using this inspection apparatus, we can grasp the entire shape, from the copper-foil pattern's end to the QFP lead's end, in the form of sequential data at the resolution of the laser sweeping pitch. Calculating these sequential data in accordance with a particular inspection logic and comparing them with the judging standard, we can make a judgment as to a good or defective solder.
An example of sequential data is shown in Table 1. From left to right in the table, each sequence represents a periodic change of reflection angle as the laser is swept. A blank in the middle indicates the tip of a lead. Each numeral signifies an allocated value that corresponds uniquely to the angle of a solder's fillet. The larger it becomes, the more gentle the fillet's slope; conversely, the smaller, the steeper. The most gentle slope, close to a flat surface, is denoted by 6; the steepest is represented by 2. Therefore, a downward trend in a sequence indicates that a solder's fillet retains a concave shape. By analyzing the numerical order and magnitude of a sequence, we attempted to organize an appropriate inspection logic.
Figure 3 shows the cross section of a soldered area on a QFP lead, and Figure 4 shows good and defective soldering.
Table 1
Example of sequential data
No.
Data
666666666554443332 34466666
666666666554433322 34446666
66666666654433332 33466666
Figure 1
Appearance of QFP implementation
Figure 3
Cross section of soldering on QFP lead
Figure 2
Basic principle of laser appearance inspection of soldering
Figure 4
Good and defective soldering
Frequency of Occurrence
Figure 5
Bonding strength of normal and abnormal units
Table 2
Classification of inspection logic items (category and frequency; frequencies of 77 and 49 appear, and three categories are marked "do not use")
Table 3
Mahalanobis distances of normal and abnormal data. Normal data: frequencies by distance class include 0, 267, 1626, 423, 175, 79, 36, 21, 11, and 10 across classes 0 through 4.5 and above. Abnormal data: distances include 37.09, 28.24, 100.39, 37.91, 31.96, 89.38, 89.75, 29.49, 36.64, 49.19, 25.17, 29.20, 10.12, 11.92, 12.69, and 10.39.
Figure 6
Response graphs of the SN ratio for inspection logic item
Table 3 shows the distribution of the Mahalanobis distances calculated. This table reveals high discriminability between normal data (proper products) and abnormal data (defective products) when the 77 pieces of data are used as the base space. In the table, the bottom part of the longer middle column shows the distribution of the higher classes.
In an actual inspection process, a smaller number of inspection logic items is desirable for a high-performance inspection, due to the shortened
Table 4
Inspection logic items after selection: Nos. 21, 28, 29, 44, 45, and 51
Figure 7
Response graphs of SN ratio by laser reection data
selected extremely effective inspection logic items (Table 4). Since these items were selected from the 11 abnormal data obtained for our research, they do not reflect the whole diversity of defects that take place in an inspection process on a mass production line. However, together with three inspection logic items (height and length of a solder's fillet and convexity of a solder's shape) for category 3 in Table 2,
Figure 8
Mahalanobis distances for normal and abnormal units (frequency versus distance, shown separately for normal and abnormal units): (a) using the base space after item selection; (b) using the base space before item selection
Reference
Akira Makabe, Kei Takada, and Hiroshi Yano, 1998. An
application of the Mahalanobis distance for solder
joint appearance automatic inspection. Presented at
the Quality Engineering Forum Symposium, 1998.
Japanese Standards Association.
CASE 70
Abstract: Efficient conversion of electrical and thermal energy into robust ink drop generation (and subsequent delivery) to a wide variety of substrates requires target directionality, drop velocity, drop volume, image darkness, and so on, over a wide range of signal and noise space. One hundred percent inspection is justified based on loss function estimates, considering inspection cost, defective loss, and quality loss after shipment. The Mahalanobis-Taguchi system (MTS) was applied to the final inspection of image quality characteristics for a number of reasons: (1) to summarize multivariate results with a continuous number expressing distance from the centroid of good ones; (2) to reduce measurement cost through identification and removal of measurements with low discrimination power; (3) to balance misclassification cost for both type I and type II errors; and (4) to develop a case study from which practitioners and subject-matter experts in training could learn the MTS.
1. Introduction
Classification of thermal ink jet printheads as either acceptable (for shipment to customers) or rejectable (for scrap) is a decision with two possible errors. A type I error occurs when a printhead is declared nonconforming when it is actually okay. A type II error occurs when a printhead is declared conforming when actually it is not. A type I error means that scrap costs may be high or orders cannot be filled due to low manufacturing process yield. A type II error means that the quality loss after shipment may be high when customers discover they cannot obtain promised print quality, even with printhead priming and other driver countermeasures. Over time, type II errors may be substantially more costly to Xerox than type I errors. Consequently, engineers are forced to consider a balance between shipping bad printheads and scrapping good ones.
A Mahalanobis space was constructed using multivariate image quality data from 144 good print-
3. Mahalanobis Distance
The quantity D² is called the Mahalanobis distance from the measured feature vector x to the mean vector mx, where Cov x is the covariance matrix of x. The feature vector represents the vector of measured characteristics for a given printhead in this study. The question is: How far is a given printhead from the database of good ones? If the distance is greater than some threshold, it is classified as defective. If the distance is less than some threshold, it is classified as no different from the population of good ones. It can be shown that the surfaces on which D² is constant are ellipsoids that are centered about the mean mx. In the special case where the measurements are uncorrelated and the variances in all directions are the same, these ellipsoids become spheres, and the Mahalanobis distance becomes equivalent to the Euclidean distance.

Figure 2
Hypothetical plot of aspect ratio versus dot diameter
The Mahalanobis distance, D², was calculated as:

D² = (x - mx)ᵀ Cx⁻¹ (x - mx)    (1)

c(i, j) = (1/m) Σ_{ℓ=1}^{m} (x(ℓ,i) - mi)(x(ℓ,j) - mj)    (2)
For i = j this becomes the usual equation for variance. D² is the generalized distance between the ℓth product and the mean feature vector for the population of good printheads.
The covariance of two measures captures their tendency to vary together (i.e., to co-vary). Where the variance is the average of squared deviations of a measure from its mean, the covariance is the average of the products of the deviations of measured values from their means. To be more precise, consider measure i and measure j. Let {x(1,i), x(2,i), ..., x(n,i)} be a set of n examples of characteristic i, and let {x(1,j), x(2,j), ..., x(n,j)} be a corresponding
Cx = | c(1,1)  c(1,2)  ...  c(1,d) |
     | c(2,1)  c(2,2)  ...  c(2,d) |
     |  ...     ...    ...   ...   |
     | c(d,1)  c(d,2)  ...  c(d,d) |    (3)
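Equations (1) to (3) combine into a few lines of code (a sketch with random stand-in data, not the printhead measurements; `bias=True` gives the 1/m divisor of equation (2)):

```python
import numpy as np

def mahalanobis_d2(x, good):
    """D^2 = (x - m)^T C^-1 (x - m), with the mean m and covariance C
    estimated from the database of good units (equations (1)-(3))."""
    m = good.mean(axis=0)
    C = np.cov(good, rowvar=False, bias=True)  # c(i,j) with 1/m divisor
    Cinv = np.linalg.inv(C)
    diff = x - m
    return float(diff @ Cinv @ diff)

rng = np.random.default_rng(2)
good = rng.normal(size=(145, 6))    # stand-in for 145 good printheads, 6 features
typical = good[0]
outlier = good[0] + 8.0             # shift every feature far from the centroid
print(mahalanobis_d2(typical, good) < mahalanobis_d2(outlier, good))  # True
```

A unit near the centroid of the good population gets a small D², while a shifted unit gets a large one, which is the basis of the threshold classification described above.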
condition. Only the normal condition produces a uniform state, and infinite abnormal conditions exist, so it is impractical to collect data for all those abnormal conditions; only normal-condition data produced the Mahalanobis space. For new data we calculated the Mahalanobis distance, and by judging whether it exceeds the normal range, we can distinguish between normal and abnormal printhead conditions.
The traditional way is to discriminate which of multiple preexisting categories the actual device belongs to. For printheads, where one has to decide whether a unit is good or not, a large volume of defective-condition data and normal-condition data would be measured beforehand, and a database constructed with both. For newly measured data (for which you do not know whether it is a good one or a bad one), you calculate which group the data belongs to.
However, in reality, there are a multitude of image-quality problems. So if information about the various types of images is not obtained beforehand, printheads cannot be distinguished. Because the traditional discrimination method requires this information, it is not practical.
5. Results
A d × d correlation matrix of the 58 observed characteristics is partially shown in Table 1. These correlations make looking at individual measured characteristics problematic for classifying printheads. When there is correlation between characteristics, a diagnosis cannot be made from single characteristics. Table 2 shows the calculated D² values for the 145 good printheads. Table 3 shows the Mahalanobis distance for the 44 bad printheads, using the inverse correlation matrix from the good printheads.
Figure 3 shows the Mahalanobis distances D² for both the good and bad printheads. A wide range of D² values was observed for the nonconforming printheads. Note that the good ones are quite uniform. This is usually the case. There are one or two cases of printheads classified as bad that are more likely part of the good ones. The threshold value for D² was determined by economic considerations. Misclassification countermeasure costs are usually considered.
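The economic choice of threshold can be sketched as a simple cost comparison (hypothetical distances and costs, not the study's data; the 1:10 scrap-to-field cost ratio echoes the approximate ratio reported in the conclusions):

```python
def expected_cost(threshold, good_d2, bad_d2, scrap_cost=1.0, field_cost=10.0):
    """Total misclassification cost at a given D^2 threshold: good units
    above it are scrapped (type I); bad units at or below it ship and
    fail in the field (type II)."""
    type1 = sum(d > threshold for d in good_d2) * scrap_cost
    type2 = sum(d <= threshold for d in bad_d2) * field_cost
    return type1 + type2

good_d2 = [0.9, 1.1, 0.8, 1.5, 2.1, 0.7]   # hypothetical D^2 for good units
bad_d2 = [22.3, 2.6, 15.8, 47.7, 9.3]      # hypothetical D^2 for bad units
best = min((expected_cost(t, good_d2, bad_d2), t) for t in [1.0, 2.0, 4.0, 8.0])
print(best)  # (1.0, 2.0): lowest cost at threshold 2.0
```

Because field failures cost roughly ten times as much as scrap here, the cheapest threshold sits low enough that no bad unit ships, even at the price of scrapping an occasional good one.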
η = -10 log [(1/m)(1/D1² + 1/D2² + ... + 1/Dm²)]    (4)
Table 1
Print quality inspection correlation matrix (partial, for 58 variables; diagonal entries are 1.00, and off-diagonal magnitudes are mostly below 0.5, with a few near 0.97, 0.94, 0.82, and 0.72)
7. Conclusions
The Mahalanobis-Taguchi system was used to analyze image-quality data collected during final inspection of thermal ink jet printheads. A Mahalanobis space was constructed using results from a variety of printheads classified as good by all criteria,
Table 2
Print quality inspection Mahalanobis distances for 145 good printheads

0.9788539  0.7648473  0.8365425  1.1541308  1.5237813  0.6360767
1.9261662  0.6253195  1.8759381  0.6151459  0.8409623  2.3004483
0.8254882  0.9997392  0.8141697  1.0554864  1.3502383  1.4477616
1.575254   0.6408639  0.9636183  0.6377904  1.1194686  0.7462556
1.0316818  1.8069662  1.9241475  1.0319332  0.9899644  1.2182822
1.2182822  1.1331171  0.8370348  2.2826932  0.9207974  1.1467803
1.0811343  1.005008   0.9170464  1.2295005  0.5892113  0.4927686
0.8277777  0.9806735  0.9319074  0.9434869  0.8671445  0.8750086
0.8219719  1.2870564  0.6186984  1.5087154  1.0080104  1.343378
1.3982634  1.338509   0.5055908  0.9222672  1.2616672  2.0557088
0.4774843  0.7116405  0.4109008  0.9073186  1.0098421  0.610628
1.8020641  0.7664054  0.5304222  1.22455    0.6284057  1.407896
0.9961997  0.6662852  0.8032502  1.2031408  0.9997437  1.2198565
1.0614666  0.5896821  0.4663961  1.63111    0.7763159  1.0746786
0.744278   0.5047632  1.0467326  1.0163124  0.9345957  0.6408224
0.5880229  1.3450012  0.7046379  1.4418307  0.7789877  1.0798978
0.6764585  0.8649927  0.4805313  0.4805313  1.0123116  0.7207266
1.3288415  0.6955458  0.9248074  1.2550094  0.9059842  1.1129645
0.7871329  0.6158813  0.762891   0.7765336  0.5245411  0.5759779
1.0465223  1.2335352  1.6694431  0.9200819  1.4373301  0.8082051
0.9922023  0.9889715  0.5822106  1.5744132  0.5348402  1.0790753
0.6256058  0.8307524  0.8303376  1.3077433  0.6317183  1.2935359
0.509095   1.2105074  1.1197155  0.7580503  0.9266635  0.6304593

Self-check: 58.00000
Table 3
Mahalanobis distances for 44 bad printheads

22.267015  2.5930287  15.757659  47.675354  9.2641579  14.052908
11.691446  15.090696  13.894336  8.6390361  179.43874  20.041785
10.695167  29.33736   34.2429    9.3282624  35.30109   18.388437
55.787193  78.914301  29.200082  42.105292  28.056818  33.730109
22.46681   9.3395192  153.76889  13.926274  6.0059889  63.12819
6.777362   22.493329  16.813476  46.802471  121.26875  16.492573
4.6645559  44.442126  45.355212  8.1542113  26.666635
Figure 3
Mahalanobis distance versus printhead number for both passing and failing printheads
establishing an origin and unit number. The Mahalanobis distance for nonconforming printheads was calculated, demonstrating the ability to discriminate normal and abnormal printheads. Among the 58 product image quality attributes investigated, only a fraction of the total were doing the work of discriminating good from bad for actual production, identifying the opportunity to reduce measurement and overhead cost for the factories. Misclassification of printheads was checked, and as expected, scrapping good printheads was more prevalent than shipping bad printheads. The scrap cost was approximately one-tenth the cost of a type II error, so naturally one would be more likely to scrap good ones.
Other opportunities to apply the Mahalanobis-Taguchi system are being explored for asset recovery. For example, when copiers and printers are returned from customer accounts, motors, solenoids, clutches, lamps, and so on, that are still classified as good can be recovered and reused in new or refurbished machines. Other opportunities include image segmentation algorithm development, pattern recognition to prevent counterfeit copying of currencies, outlier detection in data analysis, system test improvement, diagnosis and control of manufacturing processes, and chemical spectral analysis,
Table 4
SN ratios for each L64 combination

Expt.   η (dB)      Expt.   η (dB)
1       16.1455     33      13.7804
2       15.1823     34      15.8187
3       13.4513     35      11.5528
4       14.7418     36      14.3503
5       13.0265     37      14.3373
6       13.9980     38      14.3891
7       14.8928     39      14.0671
8       15.1808     40      14.8759
9       14.3932     41      15.3622
10      15.7381     42      15.4632
11      16.0573     43      12.5654
12      13.2949     44      13.9813
13      11.3572     45      16.3452
14      12.5372     46      13.0218
15      14.0662     47      15.3362
16      15.9616     48      15.8292
17      10.4232     49      12.2162
18      12.4296     50      14.5625
19      14.1015     51      15.3548
20      11.5102     52      16.6430
21      15.1508     53      13.9844
22      14.3844     54      8.9751
23      12.2187     55      8.3045
24      15.3981     56      13.8523
25      13.3026     57      11.0553
26      9.4273      58      14.9251
27      12.2470     59      15.4196
28      15.1932     60      11.7635
29      14.4695     61      9.8751
30      14.4875     62      14.5514
31      14.8804     63      13.5409
32      14.6857     64      4.7039
Table 5
Mahalanobis distances for a sample of L64 rows (rows 1, 2, 3, and 64). Each listed row gives the recalculated distances under that row's item subset; the row 64 distances match those of Table 3, with values ranging up to 179.43874, while rows 1 to 3 give values ranging from roughly 0.2 to about 32.
Table 6
ANOVA procedure for L64 results
Dependent variable: SN ratio, larger-the-better form

Source            d.f.   Sum of Squares   Mean Square   F Value   Pr > F
Model               58     275.93384768    4.75748013      1.06   0.5408
Error                5      22.47309837    4.49461967
Corrected total     63     298.40694604

R²: 0.924690   C.V.: 15.50422   Root MSE: 2.12005181   Mean: 13.67403198
Table 7
ANOVA table (each source has 1 d.f., so Mean Square = ANOVA SS)

Source   ANOVA SS       F Value   Pr > F
C1        2.86055676     0.64     0.4612
C2       34.60858396     7.70     0.0391
C3        0.18724428     0.04     0.8463
C4        1.49095175     0.33     0.5896
C5        0.37607339     0.08     0.7840
C6        1.14916741     0.26     0.6346
C7        0.16314465     0.04     0.8564
C8       29.49021893     6.56     0.0506
C9        0.82834518     0.18     0.6856
C10       8.20578845     1.83     0.2346
C11       0.19192667     0.04     0.8444
C12       0.11836547     0.03     0.8774
C13       0.34995540     0.08     0.7914
C14       2.07436008     0.46     0.5271
C15      33.60580077     7.48     0.0411
C16       7.51486220     1.67     0.2525
C17      13.34268884     2.97     0.1455
C18       9.47454201     2.11     0.2062
C19      10.53510713     2.34     0.1863
C20       1.69292927     0.38     0.5662
C21       1.27346455     0.28     0.6173
C22       0.00896986     0.00     0.9661
C23       0.37507340     0.08     0.7843
C24       0.70398097     0.16     0.7086
C25       0.13738739     0.03     0.8681
C26       0.00175670     0.00     0.9850
C27       0.09980698     0.02     0.8874
C28       0.01056163     0.00     0.9632
C29       9.74085337     2.17     0.2010
C30       0.68426748     0.15     0.7125
C31       0.05612707     0.01     0.9154
C32       0.26460629     0.06     0.8179
C33       2.35115887     0.52     0.5019
C34       5.05833213     1.13     0.3373
C35       0.04174250     0.01     0.9270
C36       0.69712660     0.16     0.7099
C37       0.00644011     0.00     0.9713
C38       0.17791141     0.04     0.8501
C39       0.21159149     0.05     0.8368
C40       0.00395108     0.00     0.9775
C41       0.06935294     0.02     0.9060
C42       0.03615277     0.01     0.9320
C43      32.02214448     7.12     0.0444
C44       0.93469126     0.21     0.6675
C45       0.01268770     0.00     0.9597
C46       1.39043263     0.31     0.6020
C47       2.01474985     0.45     0.5328
C48       5.24140378     1.17     0.3295
C49       0.03621724     0.01     0.9320
C50       0.31732921     0.07     0.8011
C51       0.61788448     0.14     0.7260
C52       5.03501292     1.12     0.3383
C53       0.00618522     0.00     0.9718
C54       0.38732998     0.09     0.7809
C55       0.91071239     0.20     0.6715
C56       0.00739509     0.00     0.9692
C57      46.41832533    10.33     0.0236
C58       0.31011793     0.07     0.8033
References
T. Kamoshita, K. Tabata, H. Okano, K. Takahashi, and H. Yano, 1996. Optimization of multi-dimensional
CASE 71

1. Introduction

2. Background

The KSM6 is a detector switch with the following parameters (Figure 2): (1) force characteristics, (2) travel characteristics, (3) hysteresis performance between the ON and OFF curves, and (4) noise and bounce characteristics.

3. Objectives
4. Experiment

The measurements were conducted in the ITT lab in Dole using the F/D method (force–deflection electrical and mechanical measurements). A traction/compression machine with an actuator traces the evolution of the component force as a function of the applied travel. A connection let us record the electrical state of the product over the same travel in the same way.
The F/D curve gave the points necessary to establish the mechanical and electrical characteristics
Taguchis Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
Figure 1
Three-dimensional and exploded assembly views of the KSM6 tact switch

Figure 2
Typical force–deflection curve of the KSM6 switch
[Axes: force (N) versus travel (mm); the plot shows the ON and OFF curves, the hysteresis between them, the electrical travel, the maximum travel, and the actuating direction.]
of the product. These characteristics allowed the validation of a product against the specification (see Figure 3). This evaluation was based on switches coming from the assembly line. There were two groups: (1) good switches and (2) scrapped switches (rejected because one or more parameters were outside the specification).
Following are the 19 specified parameters selected for the KSM6 switch:
1. Contact preload (N)
2. Fa (N)
3. Electrical travel ON (mm)
4. Electrical travel OFF (mm)
5. Mechanical travel (mm)
6. Electrical force ON (N)
7. Electrical force OFF (N)
8. Return force (N)
9. Force (at 1.85 mm)
10. Return force (at 1.85 mm)
The correlation matrix R of the normalized data and its coefficients are

    R = [ 1     r_12  ...  r_1k
          r_21  1     ...  r_2k
          ...
          r_k1  r_k2  ...  1   ],    r_ij = (1/n) Σ_{l=1}^{n} x_il x_jl        (2)

Figure 3
System, subsystems, and components
[Block diagram: the KSM6 switch (inputs: travels; outputs: forces) feeds data acquisition of the specified parameters, then data recording, data analysis, and data interpretation.]
Table 1
Reference group output data normalization
[Schematic: for the 80 reference samples, the raw values of variables 1 to 19 (X1, ..., X19, each column with its mean and standard deviation) are transformed into normalized values Z1, ..., Z19, each with mean 0.0 and standard deviation 1.0.]
Table 2
Correlation matrix results for the reference group
[The 19 × 19 correlation matrix, with variables including PRL, FA, CE ON, CE OFF, CM, FM ON, FM OFF, FR, HYST1, HYST2, HYST3, NB ON, NB OFF, NT ON, NT OFF, RC, BOUN, FCE ON, and FCE OFF, is garbled in extraction; individual coefficients are not recoverable in order.]
The inverse of the correlation matrix and the Mahalanobis distance are then

    R⁻¹ = [ a_11  a_12  ...  a_1k
            a_21  a_22  ...  a_2k
            ...
            a_k1  a_k2  ...  a_kk ]        (3)

    MD = (1/k) Z R⁻¹ Zᵀ        (4)

Table 3
Mahalanobis distance values for the reference and abnormal groups
[Table garbled in extraction: the 80 good parts show MD values roughly between 0.09 and 1.28, while the rejected parts show distinctly larger values, roughly 1.7 to 4.6.]
also used. The resulting MDs of the abnormal samples are summarized in Table 3.
6. Discussion
    SN = −10 log [ (1/n) Σ_{i=1}^{n} (1/y_i²) ]        (5)
7. Confirmation

The confirmation method consists of performing the same MTS calculations after removing the nonsignificant factors. By selecting only the significant factors, 12 out of 19 in our case, we created a new MTS graph (Figure 7). MD values for the normal and abnormal samples are shown in Table 6. The optimization evaluation gives very good results. Indeed, we can confirm that there is still a very good separation between the bad and the good pieces, even though we eliminated seven nonsignificant parameters.
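A minimal sketch of this confirmation step follows. The data are synthetic and the 12 retained column indices are hypothetical stand-ins, not the case's actual selection; the point is only the mechanics of rebuilding the Mahalanobis space from a subset of parameters:

```python
import numpy as np

# Synthetic sketch: rebuild the Mahalanobis space from only the retained
# parameters and recompute scaled distances. Indices are illustrative.
rng = np.random.default_rng(0)
reference = rng.normal(size=(80, 19))             # 80 good switches, 19 parameters
keep = [0, 1, 3, 4, 6, 8, 9, 11, 13, 14, 16, 17]  # hypothetical 12 significant parameters

ref = reference[:, keep]
mean = ref.mean(axis=0)
std = ref.std(axis=0, ddof=1)
r_inv = np.linalg.inv(np.corrcoef(ref, rowvar=False))

def md(sample):
    """Scaled Mahalanobis distance using only the retained parameters."""
    z = (sample[keep] - mean) / std
    return float(z @ r_inv @ z) / len(keep)

mean_md = float(np.mean([md(s) for s in reference]))
print(round(mean_md, 2))   # reference group averages close to 1
```

Distances for the reference group average close to 1 by construction; a good confirmation run keeps the rejected parts' distances well above that even with the reduced parameter set.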
Figure 4
Mahalanobis distance for normal and abnormal groups
[Plot of D² (0 to 5) versus sample number (1 to 89); the normal group stays low while the abnormal group stands out.]
Table 4
MDs calculated for abnormal group within the L16 orthogonal array
[Table garbled in extraction: the 16 rows of the L16 array assign levels 1 and 2 to factors A through O, and for each run the MDs of the abnormal samples were recalculated; individual values are not recoverable in order.]
Figure 5
Response graph for the SN ratio
[Factor-level response chart for A1/A2 through O1/O2.]

Figure 6
Response graph for the mean values
[Factor-level response chart for A1/A2 through O1/O2.]
Table 5
ANOVA for the SN ratio
[Table garbled in extraction: per-factor variance decomposition with pooled error terms e1 and e2; total d.f. 15, total SS 144.7111. Individual factor values are not recoverable in order.]
Figure 7
Mahalanobis distance for normal and abnormal groups
[Plot of D² (0 to 8) versus sample number after parameter reselection; the normal group stays low while the abnormal group stands out.]
Table 6
Mahalanobis distance values for the reference and abnormal groups
[Table garbled in extraction: with only the 12 selected parameters, the good parts again show small MD values (roughly 0.09 to 1.8), while the rejected parts range from roughly 2.7 to 7.2.]

8. Conclusions

The feasibility of applying the MTS approach has been demonstrated. In the present case we used it to characterize and select the parameters specified for a detector switch. Indeed, thanks to this method, we were able to keep 12 specific parameters out of the 19 initially selected. The confirmation of the discrimination between the good and the bad switches without using the nonselected parameters was very good.
The MTS method was very helpful in eliminating the seven parameters from the specification and realizing a significant reduction of the checking costs. In our case, we could reduce these costs by 37%, which corresponds to a $200,000 yearly profit.

References

S. Rochon and P. Bouysses, October 1997. Optimization of the MSB series switch using Taguchi's parameter design method. Presented at the ITT Defense and Electronics Taguchi Symposium '97, Washington, DC.
S. Rochon and P. Bouysses, October 1999. Optimization of the PROXIMA rotary switch using Taguchi's parameter design method. Presented at the ITT Defense and Electronics Taguchi Symposium '99, Washington, DC.
... method. Presented at the ITT Defense and Electronics Taguchi Symposium '99, Washington, DC.
S. Surface and J. Oliver II, 2001. Exhaust sensor output characterization using MTS. Presented at the 18th ASI Taguchi Methods Symposium.
G. Taguchi, 1993. Taguchi on Robust Technology Development. New York: ASME.
G. Taguchi, S. Chowdhury, and Y. Wu, 2001. The Mahalanobis–Taguchi System. New York: McGraw-Hill.
G. Taguchi, S. Chowdhury, and S. Taguchi, 1999. Robust Engineering. New York: McGraw-Hill.
CASE 72
1. Introduction
Delphi Automotive Systems manufactures exhaust
oxygen sensors for engine management feedback
control. The stoichiometric switching sensor is located in the exhaust stream and reacts to rich and
lean exhaust conditions. The sensor output signal
(0 to 1 V) must maintain a consistent response
throughout its life to ensure robust operation and
allow tight engine calibrations that minimize tailpipe emissions.
Test Engineering at Delphi performs a variety of
accelerated test schedules to expose the sensor realistically to representative vehicle conditions. Sensor performance measurements are conducted to
monitor the sensor output characteristics throughout its life. Characterizing the sensor performance
is often accomplished by recording and analyzing
the sensor output voltage under a range of controlled exhaust conditions.
As emission control standards become more
stringent and sensor technology improves to meet
these demands, the testing community needs to improve its techniques to describe product perform1220
4. Experiment
Traditional methods of sensor performance analysis consider key characteristics of interest to customers and ensure that product specifications are met. Product development teams, however, want to understand not just time to failure but also the initiation and rate of signal degradation. Sensor output characteristics must indicate this response change over time.
The goals of this evaluation were to determine whether discrimination among aged test samples could be achieved, whether the analysis could comprehend both product and test setup variations, and whether it would provide a tool capable of detecting early sensor signal change. The MTS method used here generates a numerical comparison of a reference group to an alternative group to detect levels of abnormality. The method also identifies the key factors associated with these differences.

Definition of Groups

This evaluation was based on sensors with differing levels of aging. Twenty-six oxygen sensors were chosen as the reference (normal) group. These sensors had no aging and were of the same product design with similar fresh performance. These sensors were characterized in two groups with similar test conditions.
Figure 1
Sensor output for different sensors after various aging tests
[Voltage (mV, 0 to 1000) versus time (ms, 0 to 200) traces for fresh, Aging 1, and Aging 2 sensors.]
Figure 2
System, subsystems, and components
[Block diagram: engine exhaust (heat) drives the oxygen sensor, whose output signal (new/aged) passes through data acquisition, data recording, data analysis, and data interpretation.]
Definition of Characteristics

As discussed, the open-loop engine-based performance test generates an exhaust stream to which the test sensors respond. Traditional sensor output characteristics that are often specified include maximum voltage, minimum voltage, voltage amplitude, response time in the lean-to-rich direction, and response time in the rich-to-lean direction. These parameters are denoted in Figure 4 and were included in the evaluation. One test-related parameter indicating the location of the test sensor (nest position) was also included, as multiple sensors were tested simultaneously. Considering the reference
Figure 3
Sensor output voltage traces before and after aging
[Voltage (mV, 0 to 1000) versus time (ms, 0 to 200) traces for fresh, aged, and degraded sensors.]
group sample size of 26, only nine additional characteristics were selected, for a total of 15 (referred to as factors A to O). The other characteristics, although not defined here, comprise a best attempt at variables that describe the waveform.
Data for the 15 characteristics of interest were organized for the 26 reference (nonaged) sensors. The data were normalized for this group (Table 1) by considering the mean and standard deviation of this population for each variable of interest:
    Z_i = (x_i − x̄_i) / σ_i        (1)

5. Mahalanobis Distance

The purpose of an MTS evaluation is to detect signal behavior outside the reference group. Existing

Figure 4
Sensor output parameters during the engine perturbation test
[Voltage (mV, 0 to 1000) versus time (ms, 0 to 2000) over one rich/lean cycle, annotated with maximum voltage, minimum voltage, voltage amplitude, lean-to-rich time, and rich-to-lean time.]
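The normalization in equation (1) can be sketched as follows. The numbers are synthetic stand-ins for two of the sensor characteristics, not data from the case:

```python
import numpy as np

# Synthetic stand-in for the reference (fresh-sensor) data: rows are
# sensors, columns are two characteristics (e.g., max voltage, RL time).
reference = np.array([[905.0, 12.0],
                      [890.0, 14.0],
                      [910.0, 13.0],
                      [895.0, 15.0]])
mean = reference.mean(axis=0)
std = reference.std(axis=0, ddof=1)

def normalize(x):
    """Equation (1): Z_i = (x_i - mean_i) / std_i, per characteristic."""
    return (x - mean) / std

Z = normalize(reference)
print(Z.mean(axis=0).round(6), Z.std(axis=0, ddof=1).round(6))
```

By construction, each normalized reference column has mean 0 and standard deviation 1, which is exactly the property summarized in Table 1.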
Table 1
Reference group output data normalization
[Schematic: for the 26 reference samples, the raw values of variables 1 to 15 (X1, ..., X15, each column with its mean and standard deviation) are transformed into normalized values Z1, ..., Z15, each with mean 0.0 and standard deviation 1.0.]
Table 2
Correlation matrix results for the reference group
[The 15 × 15 correlation matrix for factors A through O, including Vmin, Vmax, Vampl, LRTime, RLTime, and Position, is garbled in extraction; individual coefficients are not recoverable in order.]
Table 3
Mahalanobis distances for the reference and abnormal groups

No.   SN       Normal group MD (0 hours)   Abnormal group MD (aged)
1     24,720   0.9                          14.1
2     24,716   1.2                          15.7
3     24,730   0.8                           9.6
4     24,719   0.6                          14.3
5     24,728   1.4                          15.1
6     24,723   1.6                           5.3
7     22,963   0.8                      79,651.8
8     23,013   1.5                      86,771.7
9     23,073   1.1                      84,128.8
10    24,673   1.4
11    24,700   0.8
12    24,694   0.9
13    24,696   1.1
14    24,701   1.2
15    24,697   1.1
16    24,633   0.6
17    24,634   1.0
18    24,635   1.1
19    24,636   1.0
20    24,637   0.6
21    24,593   1.1
22    24,595   0.5
23    24,598   0.7
24    24,599   1.2
25    24,602   1.0
26    24,603   0.7
Figure 5
Power of discrimination for the reference to abnormal groups
[Two panels of MD versus sample number (1 to 30): fresh samples stay near MD ≈ 1, while aged and degraded samples reach roughly 5 to 20 in one panel and up to roughly 100,000 in the other.]
    R = [ 1     r_12  ...  r_1k
          r_21  1     ...  r_2k
          ...
          r_k1  r_k2  ...  1   ]        (2)

    r_ij = (1/n) Σ_{l=1}^{n} x_il x_jl
Upon review of the correlation matrix (Table 2), it is clear that correlations exist between parameters. For this reason, application of the multivariable MTS approach makes sense, because no single characteristic can describe the output fully.
The inverse of the matrix was then calculated [equation (3)] and, finally, the Mahalanobis distance [equation (4)], denoted MD. This completes the calculations for the normal group. All reference samples had MD values of less than 2 (Table 3):
    R⁻¹ = [ a_11  a_12  ...  a_1k
            a_21  a_22  ...  a_2k
            ...
            a_k1  a_k2  ...  a_kk ]        (3)

    MD = (1/k) Z R⁻¹ Zᵀ        (4)
6. Selection of Characteristics
Optimization with L16 Orthogonal Array
To reduce data processing complexity, it is desirable
to consider fewer characteristics and eliminate those
Figure 6
Discrimination not evident with traditional single-variable approach
[Two panels of RL time (ms) versus sample number (1 to 30) before and after aging: the fresh, aged, and degraded groups overlap, so RL time alone does not separate them.]
    SN = −10 log [ (1/n) Σ_{i=1}^{n} (1/y_i²) ]        (5)
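Equation (5) is the larger-the-better SN ratio; applied to the abnormal samples' Mahalanobis distances, a run that keeps those distances large scores a high SN. A minimal sketch, using illustrative distance values:

```python
import math

def sn_larger_the_better(y):
    """Equation (5): SN = -10 log10[(1/n) * sum(1/y_i^2)], in dB."""
    n = len(y)
    return -10.0 * math.log10(sum(1.0 / yi ** 2 for yi in y) / n)

# Doubling every distance raises the SN ratio by about 6 dB.
print(round(sn_larger_the_better([14.1, 15.7, 9.6]), 2))
print(round(sn_larger_the_better([28.2, 31.4, 19.2]), 2))
```

Because the ratio is dominated by the smallest y values, it rewards runs whose worst-separated abnormal sample is still far from the reference group.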
7. Conclusions

The feasibility of using the MTS multivariable approach has been demonstrated. The combination of 15 variables from the sensor output waveform allowed much improved discrimination compared to
Table 4
MDs calculated for abnormal group within the L16 orthogonal array
[Table garbled in extraction: for each of the 16 runs (factors A through O at two levels), the MDs of the aged samples were recomputed; individual values are not recoverable in order.]
Figure 7
SN and means response charts
[Factor-level response charts for A1/A2 through O1/O2: an SN ratio analysis (roughly 6 to 10 dB) and a mean analysis (roughly 0 to 6000).]
Table 5
SN and means response tables
[Table garbled in extraction: level-1 and level-2 averages, deltas, and ranks for factors A through O (including Vmin, Vmax, Vampl, LRTime, RLTime, and Position), for both the SN response and the means response; individual values are not recoverable in order.]
Figure 8
Used in conjunction with product optimization, MD can confirm improvements in aging response over time
[Schematic of MD versus time M (percent test completion, M1 through M5 to end of test); the aims are to (1) reduce variability and (2) minimize the MD growth.]
product engineers with more than 100 data variables related to a waveform, the MD distance could help them make decisions during product development evaluations.
This study used existing data, with no new experimental tests needed. The study points out the potential of the application, as no new test procedures are required, with the exception of additional data
CASE 73
1. Introduction
Since an inspector responsible for a visual inspection is considered to apply many subconscious inspection standards, to realize an automated inspection process we need to build up a system integrating multidimensional information. On the other hand, for the automation of an inspection process, the necessary multidimensional information, or group of characteristics, is not necessarily identical to that used by an inspector. That is, although the characteristics used by an inspector are unknown in some cases, even characteristics easily handled by a computer can be utilized successfully if they lead to the same results as those determined by a visual inspection.
In our research, selecting differential and integral characteristics calculated from a pattern as feature values, we take advantage of the MTS method, which creates a Mahalanobis space from normal products (base data). In this research, using disks judged as normal by inspectors, we created a base space.
To obtain a highly accurate image at low cost, we adopted a method of capturing an image with a linear CCD (charge-coupled device) camera while rotating a disk at constant speed (Figure 1). The captured image is input into a personal computer in real time by way of a special image input board. By doing so we can obtain extremely thin and long image data, that is, (width of a disk) × (circular
Figure 1
Appearance inspection system for disks
A key point in our research was to treat the image data of a disk as a group of waveforms. By recognizing a disturbance in a waveform, we attempted to detect a defect. As feature values for a waveform pattern, we calculated the differential and integral characteristics explained below.
Next, we defined a waveform for one unit (a waveform consisting of the pixels for one rotation) as Y(t). As Figure 3 illustrates, we drew p parallel lines to the t-axis at equal intervals along the Y-axis. Counting the number of intersections between each line and the waveform Y(t), line by line, we set this as a differential characteristic. In addition, calculating the sum of all intervals between intersections, we defined this as an integral characteristic.
The differential characteristic can be regarded as indicating the frequency of fluctuations in a waveform (i.e., information equivalent to the frequency of a waveform). Since we can obtain a frequency for each amplitude, the distribution of frequency in the direction of amplitude can be known. On the other hand, the integral characteristic indicates the amount of occupation for each amplitude. Together, the differential and integral characteristics were considered to cover the frequency and amplitude of a waveform and to provide useful information about them. In addition, since the differential and integral characteristics can be calculated more simply than traditional characteristics such as a frequency, we can expect a faster processing speed.
Furthermore, in addition to the differential and integral characteristics, in this research we added four characteristics that are regarded as effective for a disk appearance inspection.
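The two characteristics can be sketched for a single threshold line as follows. This is a simplified discrete version (the case draws p such lines and repeats the computation per line), and the occupation count is an approximation of the summed intervals at or above the line:

```python
def characteristics(y, level):
    """For one horizontal line at `level`: the differential characteristic
    counts crossings of the waveform with the line, and the integral
    characteristic approximates the total time spent at or above it."""
    above = [yi >= level for yi in y]
    crossings = sum(1 for a, b in zip(above, above[1:]) if a != b)
    occupation = sum(above)
    return crossings, occupation

smooth = [0, 1, 2, 3, 3, 2, 1, 0]    # one slow bump
noisy = [0, 2, 0, 2, 0, 2, 0, 2]     # fast fluctuation
print(characteristics(smooth, 1.5))  # few crossings
print(characteristics(noisy, 1.5))   # many crossings
```

A smooth bump and a fast oscillation can occupy the same amplitude band yet differ sharply in crossing count, which is why the pair of characteristics captures both frequency and amplitude information.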
Figure 2
Waveform of disk image
Figure 3
Differential and integral characteristics
Table 1
Feature values in the wave pattern of a disk
[Table garbled in extraction: differential and integral characteristic values per threshold line; individual values are not recoverable in order.]
sists of 36 waveforms. Then we computed 240 feature values, including the differential and integral characteristics, for each waveform. However, after excluding characteristics that do not obviously contribute to image recognition, we finally obtained 160 feature values (items). Therefore, using the 1000 × 160 base data, we computed 36 correlation matrices. A typical example of feature values that are not con-
Figure 4
Distances of disk and wave patterns
Mahalanobis distance, respectively. In (a), representing base data, all Mahalanobis distances show a small value. Part (b) represents the distances of a nondefective disk; almost all lie below 2. In (c), for a disk with a small amount of adhesive stuck to it, many data exceed the value of 4; this cannot be regarded as normal. In (d), for a disk with friction material peeled off, many high peaks can be seen on the left side of the plot, which implies that relatively large defects exist at the corresponding positions. In addition, as a result of recognizing various types of disks, we observed that most of the recognition results were consistent with the results of human visual inspection.
    SN = −10 log [ (1/n) Σ_{i=1}^{n} (1/y_i²) ]   (dB)        (1)
Figure 5
Response graphs for item selection
Since the Mahalanobis distances for defective areas in this case were increased by the item selection, we can conclude that the detectability of defects is enhanced. This is presumably because we have retained only the items that were effective for defect recognition, thereby mitigating noise in recognition. On the other hand, no increase in Mahalanobis distance can be seen in the normal areas; since the base space is created from the data for a normal disk, its Mahalanobis distance stays around 1.
Our research demonstrates that by narrowing down the original items, we can enhance the capability of recognition with less calculation.
Reference

Shoichi Teshima, Tomonori Bando, and Dan Jin, 1997. A research of defect detection using the Mahalanobis–Taguchi system. Quality Engineering, Vol. 5, No. 5, pp. 38–45.
Figure 6
Recognition result by selected items
CASE 74
1. Introduction

In general, few drugs are effective for chronic hepatitis. Even Interferon, which has lately been said to be a cure for hepatitis C, has only a 40 to 50% efficacy rate; for hepatitis B the rate is believed to be much smaller. Moreover, Interferon causes significantly strong side effects and, at the same time, cannot be given to all patients suffering from hepatitis. In our research we use Azelastine, an antiallergy medicine. We examined two groups: (1) healthy persons and (2) patients with chronic hepatitis.

Healthy Persons (Normal Group)

We examined 200 people who took a biochemical test consisting of 16 examination items (described later) and who, from the diagnostic history at Tokyo Teishin Hospital, were diagnosed as being in good health with no disease. Although initially we wished to increase the number of examinees, because of the limited capacity of the
Items Studied

The healthy persons and the patients with chronic active hepatitis had already had the following 16 blood tests: total protein (TP), albumin (Alb), A/G (albumin/globulin) ratio, cholinesterase (ChE), glutamate oxaloacetate transaminase (GOT), glutamate pyruvate transaminase (GPT), lactate dehydrogenase (LDH), alkaline phosphatase (ALP), γ-glutamyltranspeptidase (γ-GTP), leucine aminopeptidase (LAP), total cholesterol (TCh), triglyceride (TG), phospholipases (PL), creatinine (Cr), blood urea nitrogen (BUN), and uric acid (UA).
In addition, assuming the age and gender of the examinees to be relevant and that no extra cost was
Analysis Method

The Mahalanobis distance D² was not convenient to analyze as raw data, so we divided D² by its degrees of freedom, 18; the average of the resulting values for the normal group is equal to 1. Next, by taking the logarithm of this value and multiplying it by 10, we obtained the value used in our analysis. The datum for the ith patient was thus computed as

Y_i = 10 log(D_i²/18)
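As a minimal sketch of this decibel transformation, Y_i = 10 log(D_i²/18); the D² values below are invented for illustration, not the case data:

```python
import math

# Hypothetical squared Mahalanobis distances (NOT the case data).
d_squared = [1554.0, 903.1, 18.0]
f = 18  # degrees of freedom: 16 blood tests + age + gender

# Y_i = 10 * log10(D_i^2 / f), the decibel value used in the analysis
y = [10 * math.log10(d2 / f) for d2 in d_squared]
```

A person near the normal group (D²/f close to 1) scores near 0 dB, while seriously ill patients score far above it.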
Table 1
Data for normal group

Item          No. 1   No. 2   No. 3   No. 4
Age           59      51      38      56
Gender        Male    Female  Male    Female
TP (g/dL)     6.5     7.0     7.1     7.2
Alb (g/dL)    3.7     4.4     4.3     4.0
A/G           1.32    1.69    1.54    1.25
ChE (ΔpH)     0.70    0.98    0.92    0.93
GOT (IU/L)    12      21      16      22
GPT (IU/L)    18      15      16      -
LDH (IU/L)    190     247     152     188
ALP           7.7     6.3     4.8     6.1
γ-GTP (IU/L)  12      23      40      16
LAP (U/dL)    275     340     355     304
TCh (mg/dL)   235     225     177     216
TG (mg/dL)    140     87      93      86
PL (mg/dL)    238     227     185     213
Cr (mg/dL)    0.8     1.1     1.4     1.0
BUN (mg/dL)   13      15      15      13
UA (mg/dL)    3.8     4.6     4.6     4.2
2. Analysis Results
From here on, we detail the actual data and analysis
results.
Data for the Normal Group
Table 1 shows a part of the data for 200 healthy
persons.
Data for Patients with Chronic Active Hepatitis
Since the total data for patients with chronic active
hepatitis were composed of 32 examinees, 36
months, and 18 items, we show only a part of them
in Table 2.
Mahalanobis Distance

Since D² for the normal group comprises variances with 18 degrees of freedom and is considered rela-
Table 2
Data for patients with chronic active hepatitis (months 1M–4M)

Item          1M     2M     3M     4M
TP (g/dL)     7.0    7.9    7.9    7.8
Alb (g/dL)    3.6    4.0    3.6    4.1
A/G           1.06   1.03   0.97   1.08
ChE (IU/L)    303    275    312    297
GOT (IU/L)    178    168    160    174
GPT (IU/L)    225    228    218    202
LDH (IU/L)    446    429    454    467
ALP (IU/L)    152    146    146    162
γ-GTP (IU/L)  27     31     28     30
LAP           59     69     64     68
TCh (mg/dL)   186    185    181    197
TG (mg/dL)    153    148    100    157
PL (mg/dL)    193    222    198    216
Cr (mg/dL)    0.7    0.7    0.7    0.7
BUN (mg/dL)   11     12     11     15
UA (mg/dL)    5.9    0.7    5.6    5.5

Table 3
Mahalanobis distance for patients with chronic active hepatitis

Month   No. 17    No. 18    No. 19    No. 20
1       1554.0    1873.6    1216.5    1418.6
2       1741.7    2869.2    2550.1    1164.8
3       1531.9    1412.8    1190.5    1239.7
4       1248.8    1877.5    1656.3    903.1
5       1871.2    1756.8    2558.0    533.3
6       1362.9    1881.7    2100.2    896.3
7       2346.0    2346.3    2808.8    623.7
8       2489.1    1195.0    2207.3    710.8
9       2492.7    790.5     1843.8    790.3
10      3380.7    990.1     367.7     918.0
11      3196.8    1416.1    1784.8    862.9
12      2815.7    1301.7    2398.7    811.8
13      2191.2    131.2     473.6     739.5
14      2218.3    979.4     204.1     690.8
15      1930.3    1306.5    560.2     884.2
16      1787.3    1662.5    157.2     332.8
17      2572.1    1348.3    178.1     393.9
18      1893.3    1650.7    498.1     604.6
19      1298.3    1223.9    738.0     520.2
20      1508.3    1329.2    948.3     969.2
21      1634.1    1180.4    1175.2    274.6
22      1785.5    1106.5    942.8     339.6
23      1546.5    560.4     366.6     269.9
24      1284.4    782.4     1049.5    56.6
25      2037.8    1286.7    1652.9    59.2
26      2294.8    588.8     432.9     84.1
Table 4 shows, for each case, the value of the linear equation and the value obtained by dividing it by the total of the squares of its coefficients (which is equivalent to a monthly slope for the linear trend). Performing an analysis of variance for these linear trend values, we obtain Table 5. Based on this result, we calculated the estimations shown in Table 6. From these we can conclude that since a Mahalanobis distance is considered a distance from the center of gravity of a multidimensional space, if the distance of a certain patient from the normal group increases, the degree of his or her disease is regarded to increase. On the contrary, as the distance decreases, the patient becomes better and closer to the state of a normal person. As a whole, compared with the contrast group of chronic active patients, the Azelastine group obviously has a good linear trend. However, this trend does not necessarily hold true for all patients. Yet if we look at the values beyond the 95% confidence limits, the values of the linear trend reveal which patients are getting better. For example, for the linear trend of the contrast group, the average is -19.9572 and the 95% confidence interval is -202.1376 to 162.2232. On the other hand, the average for the Azelastine group is -115.5413 and its confidence interval is -297.7217 to 66.6391. Therefore, patients 17, 19, 20, and 32, those whose linear trend value falls below the 95% confidence interval, are judged to be getting better.
Figure 1
Plots for contrast and Azelastine groups
Table 4
Linear trend of 10 log(D²/f) (for one month)

Contrast Group             Azelastine Group
CN    L        L/D         CN    L        L/D
1     58.3     0.0065      17    419.8    0.0470
2     51.3     0.0057      18    121.8    0.0136
3     67.9     0.0076      19    249.0    0.0279
4     87.7     0.0098      20    289.6    0.0324
5     116.4    0.0130      21    189.8    0.0213
6     46.9     0.0053      22    123.5    0.0138
7     27.1     0.0030      23    82.5     0.0092
8     69.2     0.0078      24    185.7    0.0208
9     68.6     0.0077      25    48.9     0.0055
10    48.2     0.0054      26    160.5    0.0180
11    33.9     0.0038      27    22.3     0.0025
12    144.45   0.0162      28    27.2     0.0030
13    122.0    0.0137      29    75.5     0.0085
14    182.8    0.0205      30    72.3     0.0081
15    2.1      0.0002      31    48.8     0.0055
16    17.1     0.0019      32    203.5    0.0228
Table 5
ANOVA for linear trend (L)

Source           f     S            V           F0
Between groups   1     73,090.5     73,090.5    9.89**
Within C1        15    110,878.7    7,391.9
Within C2        15    300,000.6    20,000.0    2.71*
Total            32    630,848.5
Table 6
Estimations of linear trend (L) and L/D with 95% confidence limits

                                 L            L/D
Contrast group
  Lower limit                    -202.1376    -0.022646
  Estimation                     -19.9572     -0.002236
  Upper limit                    162.2232     0.018174
Azelastine group
  Lower limit                    -297.7217    -0.033354
  Estimation                     -115.5413    -0.012944
  Upper limit                    66.6391      0.007466
Average
  Lower limit                    -242.8962    -0.027212
  Estimation                     -67.7492     -0.007590
  Upper limit                    107.3977     0.012032
3. Conclusions
Reference

Tatsuji Kanetaka, 1992. Application of Mahalanobis distance to measurement of drug efficacy. Standardization and Quality Control, Vol. 45, No. 10, pp. 40–44.
CASE 75
Abstract: This case deals with use of the Mahalanobis–Taguchi system (MTS) and multivariate analysis together with a special medical examination focusing on liver disease. More specifically, as a medical application of statistics, 16 blood biochemical data, ages, and genders were analyzed to evaluate examinees' health conditions. This is the first research case study of the MTS method used in an application of Mahalanobis distance (D²) to a multivariate analysis.
1. Introduction
Generally speaking, when analyzing multiple variables, we should above all create a good database. A good database is generally homogeneous. In the case of biochemical data for human beings, while data for healthy people have little variability and are distributed homogeneously, those for patients suffering from diseases vary widely and cannot be viewed as homogeneous.

For these reasons it is desirable to prepare a database by using the examination results of healthy people (normal persons). However, when attempting to diagnose a specific disease, unless the examination items necessary to diagnose the disease are included, we cannot make use of the database. In a periodic medical checkup, due to constraints of budget, personnel, or time, few examination items are selected for diagnosis. Even in a complete physical examination, only common items are checked. Therefore, if the data for a regular medical checkup or complete physical examination are used as a normal contrast, we sometimes lack examination items. Then it is quite difficult to obtain enough data on normal persons. However, when we determine a standard value in certain examination facilities, we can obtain healthy people's data. Yet it is not indisputable whether the judgment of health conditions is made accurately enough. In this research, as our
Table 1
Age, gender, and biochemical data of normal persons

No.  Age  Gender  TP   Alb  A/G   ChE   GOT  GPT  LDH  γ-GTP  LAP  TCh  TG   PL   Cr   BUN  UA
1    59   Male    6.5  3.7  1.32  0.70  12   18   190  12     275  235  140  238  0.8  13   3.8
2    51   Female  7.0  4.4  1.69  0.98  21   15   247  23     340  225  87   227  1.1  15   4.5
3    53   Female  6.7  4.2  1.68  0.83  18   -    220  16     278  230  67   232  0.8  17   3.2
4    52   Female  6.6  4.3  1.87  0.90  12   -    244  15     289  171  59   175  0.8  13   3.8
5    52   Female  6.7  3.9  1.39  0.97  13   -    198  13     312  192  51   203  1.0  14   4.1
6    56   Female  7.2  4.0  1.25  0.93  22   -    188  16     304  216  86   213  1.0  13   4.2
7    51   Female  7.1  4.0  1.29  0.88  16   -    187  13     272  235  96   251  1.0  14   3.5
8    50   Female  6.6  3.6  1.20  0.71  14   -    190  12     270  149  57   165  0.9  14   3.4
9    41   Female  6.9  4.3  1.65  0.81  16   -    195  13     319  160  55   175  1.0  12   3.0
10   48   Female  7.0  4.2  1.50  0.93  21   -    230  35     411  197  110  213  1.0  -    5.2
Table 2
Normalized data of normal persons (the normalized values Y for persons 191–200; for every item, the Total and Average rows are 0.000)
Table 3
D²/f and 10 log(D²/f) of normal persons

No.  D²/f   10 log(D²/f)   No.  D²/f   10 log(D²/f)   No.  D²/f   10 log(D²/f)
1    1.989  2.987          36   0.513  -2.897         71   0.722  -1.413
2    0.609  -2.153         37   0.530  -2.761         72   1.165  0.66
3    1.104  0.428          38   0.567  -2.465         73   0.927  -0.328
4    0.747  -1.267         39   0.675  -1.704         74   0.946  -0.242
5    0.571  -2.432         40   0.746  -1.273         75   1.186  0.741
6    0.893  -0.491         41   1.587  2.006          76   0.557  -2.545
7    0.779  -1.087         42   0.859  -0.662         77   1.47   1.674
8    0.872  -0.593         43   0.391  -4.053         78   1.244  0.947
9    0.587  -2.316         44   0.616  -2.102         79   1.719  2.352
10   1.274  1.053          45   0.724  -1.400         80   0.440  -3.567
11   0.698  -1.564         46   0.813  -0.901         81   0.999  -0.005
12   0.596  -2.251         47   1.027  0.117          82   0.707  -1.505
13   0.623  -2.053         48   1.107  0.443          83   1.128  0.524
14   1.249  0.965          49   0.907  -0.422         84   0.713  -1.468
15   0.673  -1.722         50   1.773  2.486          85   0.755  -1.219
16   0.482  -3.174         51   1.129  0.527          86   1.086  0.357
17   1.144  0.586          52   0.634  -1.979         87   0.842  -0.749
18   0.529  -2.766         53   0.786  -1.047         88   0.972  -0.124
19   1.131  0.535          54   1.368  1.360          89   0.555  -2.560
20   1.299  1.135          55   1.184  0.733          90   0.761  -1.183
21   0.341  -4.668         56   0.324  -4.900         91   1.90   2.801
22   1.028  0.118          57   1.412  1.499          92   0.576  -2.395
23   0.588  -2.33          58   1.039  0.165          93   0.775  -1.106
24   1.776  2.495          59   0.782  -1.068         94   0.417  -3.803
25   0.657  -1.825         60   1.140  0.570          95   1.682  2.746
26   0.600  -2.222         61   1.213  0.839          96   0.571  -2.433
27   0.720  -1.427         62   0.629  -2.015         97   2.251  3.523
28   1.084  0.349          63   1.135  0.548          98   1.195  0.775
29   3.442  5.368          64   0.652  -1.858         99   0.678  -1.690
30   1.082  0.343          65   0.811  -0.910         100  1.286  1.093
31   0.960  -0.177         66   1.609  2.06
32   0.521  -2.831         67   0.905  -0.436
33   1.076  0.319          68   0.999  -0.003
34   1.629  2.120          69   2.264  3.549
35   0.896  -0.475         70   0.978  -0.098
Denoting the average of the jth examination item by m_j, the standard deviation by σ_j, and the datum from the ith person's jth examination item by X_ij, we calculated the normalized data

Y_ij = (X_ij - m_j)/σ_j    (1)

and evaluated each person by the Mahalanobis distance divided by its degrees of freedom,

D²/f    (2)
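Equations (1) and (2) can be sketched as follows, using a synthetic two-item normal group in place of the study's 18 items (all values here are illustrative, not the study's data):

```python
import math
import random

random.seed(0)
# Synthetic "normal group": 200 persons x 2 correlated items
# (a stand-in for the 18 items of the case; NOT the study's data).
base = [random.gauss(0, 1) for _ in range(200)]
data = [(50 + 10 * b, 20 + 5 * (0.6 * b + 0.8 * random.gauss(0, 1)))
        for b in base]

n, k = len(data), 2
m = [sum(row[j] for row in data) / n for j in range(k)]
s = [math.sqrt(sum((row[j] - m[j]) ** 2 for row in data) / (n - 1))
     for j in range(k)]

# Y_ij = (X_ij - m_j) / sigma_j  -- eq. (1)
Y = [[(row[j] - m[j]) / s[j] for j in range(k)] for row in data]

# Correlation of the normalized data, and the 2x2 inverse of R
r12 = sum(y1 * y2 for y1, y2 in Y) / (n - 1)
det = 1 - r12 * r12
Rinv = [[1 / det, -r12 / det], [-r12 / det, 1 / det]]

def scaled_md(x):
    """Mahalanobis distance divided by degrees of freedom, D^2/f -- eq. (2)."""
    z = [(x[j] - m[j]) / s[j] for j in range(k)]
    d2 = sum(z[a] * Rinv[a][b] * z[b] for a in range(k) for b in range(k))
    return d2 / k

# Averaged over the base (normal) group, D^2/f comes out close to 1.
avg = sum(scaled_md(row) for row in data) / n
```

The average of D²/f over the base group itself is (n - 1)/n, which is why the normal group clusters near 1 in Table 3.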
Figure 2
10 log(D2 / f ) of normal persons
Table 4
Data of special medical examination

No.  Age  Gender  TP   Alb  A/G   ChE   GOT  GPT  LDH  ALP   γ-GTP  LAP  TCh  TG   PL   Cr   BUN  UA
1    51   Male    7.0  4.0  1.33  0.67  17   10   144  4.1   19     265  151  53   183  1.5  14   5.7
2    48   Female  7.5  4.2  1.27  0.67  15   15   187  5.4   14     315  202  148  218  1.3  14   5.6
3    41   Female  7.1  4.1  1.37  0.64  19   10   295  11.4  13     230  191  384  270  1.3  16   3.5
4    41   Female  6.6  3.9  1.44  0.88  14   15   243  5.8   10     289  185  96   218  1.3  13   4.7
5    45   Female  7.4  4.2  1.31  0.77  16   14   204  6.1   44     408  283  344  300  1.4  14   5.3
6    53   Female  7.3  4.3  1.43  0.81  15   12   196  5.4   15     312  169  70   181  0.9  11   3.3
7    52   Female  6.7  4.0  1.48  0.74  16   -    178  4.5   13     273  135  98   160  1.0  15   3.1
8    41   Female  6.7  3.7  1.23  0.82  26   -    180  6.3   -      269  214  56   218  1.1  14   4.4
9    42   Female  6.7  3.8  1.31  0.73  12   -    119  6.3   -      251  176  69   190  0.9  14   3.6
10   35   Female  7.1  4.3  1.54  0.99  -    -    220  6.1   -      305  305  114  203  1.2  15   4.7
Table 5
D²/f and 10 log(D²/f) of special medical examination

No.  D²/f    10 log(D²/f)   No.  D²/f   10 log(D²/f)   No.  D²/f   10 log(D²/f)
1    1.894   2.774          21   3.114  4.933          41   1.594  2.025
2    1.320   1.207          22   3.280  5.159          42   1.872  2.723
3    18.591  12.683         23   3.797  5.794          43   2.366  3.741
4    2.160   3.345          24   5.305  7.247          44   0.959  -0.181
5    5.648   7.519          25   1.537  1.867          45   4.609  8.813
6    0.630   -2.006         26   4.611  6.638          46   4.274  6.308
7    1.075   0.314          27   2.856  4.557          47   5.847  7.670
8    0.846   -0.725         28   1.706  2.320          48   1.653  2.184
9    2.956   4.70           29   4.848  6.855          49   7.325  8.648
10   0.815   -0.888         30   5.498  7.402          50   2.076  3.172
alcohol the previous night or meals taken before the test, and there was no necessity of considering liver disease, the examinee was diagnosed as normal.

3. If there were data out of the limit even in the second test, the examinee took a precise examination later.

4. Among the people who took precise examinations, those who were judged to have certain signs that indicated the potential for a
Table 6
Threshold for 10 log(D²/f) and final diagnosis: for each final diagnosis (WNL, SAB, SLI, LVD), the estimated percentage of cases with 10 log(D²/f) below and above the threshold, with 95% confidence limits (lower limit, estimation, upper limit)
Table 7
Biochemical data and final diagnosis (%)

                  Within Target Limits          Beyond Target Limits
95% Confidence
Interval          WNL    SAB   SLI    LVD       WNL    SAB    SLI    LVD
Lower limit       87.46  0.00  3.44   0.00      48.05  8.08   5.94   4.56
Estimation        93.33  0.00  6.67   0.00      65.00  15.00  11.25  8.75
Upper limit       96.56  2.09  12.54  2.09      78.85  26.16  20.29  16.14
Figure 4
Within and beyond target limits and final diagnosis

Figure 5
Within and beyond target limits using five test items and final diagnosis

Figure 6
Within normal limits or others
Industrial Safety and Health Law. Of the cases diagnosed as normal, 5.55% (2.71 to 10.94%) and 2.68% (1.23 to 5.35%) were actually SAB and SLI, respectively. Additionally, 55.93% (37.57 to 72.80%) of the cases judged as abnormal were categorized as WNL. Considering these results, we cannot say that constraining the number of examination items leads to better judgment.

Determination of Data within and beyond the Target Limits

Since we were dealing with a medical checkup, we studied the relationships among the final diagnosis (of whether data were within or beyond a normal

Figure 7
Percent contribution for biochemical data limits and 10 log(D²/f) threshold
Table 8
Data within or beyond target limit versus threshold for 10 log(D²/f) versus final diagnosis (estimation and 95% confidence interval): for thresholds of 5, 6, 7, and 10 dB, the percentage of cases within and beyond the normal limit falling below and above each threshold, together with the SN ratio and percent contribution
2. Threshold for 10 log(D²/f) set to 5. All of the cases below the threshold are WNL (i.e., no abnormality is mingled with the cases with a value less than or equal to the threshold). In contrast, the fact that 29.27% (20.47 to 39.95%) of the cases with a value greater than the threshold are diagnosed as WNL was regarded as a problem. A higher percent contribution and SN ratio, 57.87% and 1.38, respectively, were obtained.
Figure 8
SN ratio for liver disease, biochemical data limits, and 10 log(D²/f) threshold

Figure 9
Percent contribution for liver disease, biochemical data limits, and 10 log(D²/f) threshold
Table 9
Data within or beyond target limit versus threshold for 10 log(D²/f) versus final diagnosis of liver disease: for each threshold, the percentage of cases with liver disease falling below and above the threshold (estimation and 95% confidence interval), together with the SN ratio and percent contribution
Figure 10
Transition of chronic active hepatitis
5. Another Application of
Mahalanobis Distance
To date, we have attempted to apply the D² value to the medical field by judging whether or not certain data belong to a certain group, as Mahalanobis originally intended in his research. Here, our discussion is based on the idea that D² is regarded as the distance from the center of gravity of the group forming a database.

Let's take the case of people with a certain disease: for example, patients with chronic active hepatitis. Defining their distance from normal people as D², we can see that their distance from the center of gravity of the normal group is equivalent to how serious their disease is. Since a decreasing D² value indicates proximity to the normal group, when we keep track of the condition of patients suffering from chronic active hepatitis, their health is expected to improve. On the contrary, when D² increases gradually, their disease is considered to be aggravated, because the distance from the normal group grows by increments.

Figure 10 shows the 18-item data measured for 24 months for 21 patients with chronic active hepatitis: eight for group 1, seven for group 2, and six for group 3. Each of the three groups received a different treatment in the last 12 months.

Looking at D² and the linear effect line for each group, we can see that whereas health degraded during the first 12 months, for the last 12 months the D² value decreased, due to the active therapy (i.e., health was improved). This implies that a time-based analysis of D² enables us to judge the transition of data. That is, in the medical science field, we can make a judgment on medical efficacy.

Although the Mahalanobis distance was complicated and impractical as developed by Mahalanobis, today, when computer technology is widely used, even a microcomputer can easily calculate the distance. As a result, it is regarded as one of the most broadly applicable techniques for multivariate analysis.
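The time-based judgment of D² described above amounts to fitting a linear effect to the monthly distances; a minimal sketch, with an invented monthly D² series (not the study's data):

```python
# Hypothetical monthly D^2 values for one patient over 24 months (NOT the
# study's data): worsening for 12 months, then improving under therapy.
months = list(range(1, 25))
d2 = [3.0 + 0.3 * t for t in range(12)] + [6.3 - 0.4 * t for t in range(12)]

def slope(xs, ys):
    """Least-squares slope: the linear effect of D^2 against time."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

first_year = slope(months[:12], d2[:12])   # positive: moving away from normal
last_year = slope(months[12:], d2[12:])    # negative: approaching normal
```

A negative slope means the distance from the normal group is shrinking, i.e., the patient's condition is judged to be improving.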
References

Tatsuji Kanetaka, 1987. An application of Mahalanobis distance: an example of chronic liver disease during active period. Standardization and Quality Control, Vol. 40, No. 11, pp. 46–54.

Tatsuji Kanetaka, 1997. An application of Mahalanobis distance to diagnosis of medical examination. Quality Engineering, Vol. 5, No. 2, pp. 35–44.
CASE 76
1. Introduction

For our research, we collected data on 232 patients with brain disease, whom we classified into four groups, each a possible combination of whether urinary continence was attained by a patient at the end of the first or fourth week after he or she suffered brain trauma (Table 1). Group I consisted of 150 patients who had recovered continence by the end of both the first and fourth weeks. Group II contains only one patient, who had recovered continence at the end of the first week but had not at the end of the fourth week. Eighty-one patients who had not recovered continence at the end of the first week belong to groups III and IV. Among them, 30 patients who had recovered at the end of the fourth week were classified as group III, and the remaining 51 patients who could not recover were categorized as group IV.

The most interesting groups in our pattern recognition were groups III and IV, because if we can predict precisely whether a patient who has not been able to recover continence after one week will recover it after four weeks, we can use such information to establish a nursing procedure for each patient.

A Mahalanobis space was constructed using the data of the first week from group III: the patients in this group did not recover continence at the end of the first week but recovered by the end of the fourth week.
Items

Twelve factors, including the six diseases shown in Table 2, were used to calculate Mahalanobis distances. Disturbance of consciousness and urinary continence were, according to their degrees, classified into the following categories and quantified as ordinal numbers:

1. Disturbance of consciousness 1 (eye-opening level): 4 points

2. Disturbance of consciousness 2 (utterance level): 5 points

3. Disturbance of consciousness 3 (movement level): 6 points

6. Urinary continence level after one week: 8 points

For each of the above, a larger number indicates a better condition. Since 8 points were added to a state of complete continence recovery, all other possible scores from 1 to 7 indicated incomplete recovery.
Table 1
Classification of patients by pattern in starting time of urinary continence recovery

Group   End of First Week   End of Fourth Week   Number of Patients
I       Recovered           Recovered            150
II      Recovered           Not recovered        1
III     Not recovered       Recovered            30
IV      Not recovered       Not recovered        51
Total                                            232

Table 2
Items used for calculation of Mahalanobis distance

No.    Factor
1      Disturbance of consciousness 1 (eye-opening level)
2      Disturbance of consciousness 2 (utterance level)
3      Disturbance of consciousness 3 (movement level)
4a     Age
7–12a  Six types of diseases (see Table 3)

a Categorical variable.

Table 3
Binary expression for six types of diseases
Figure 1
Mahalanobis distances after the end of the first and fourth weeks and the progress of urinary continence recovery for disease 1 (head injury, operation group)

Figure 2
Mahalanobis distances after the end of the first and fourth weeks and the progress of urinary continence recovery for disease 2 (head injury, nonoperation group)

Figure 3
Mahalanobis distances after the end of the first and fourth weeks and the progress of urinary continence recovery for disease 3 (brain tumor, operation group)

Figure 4
Mahalanobis distances after the end of the first and fourth weeks and the progress of urinary continence recovery for disease 4 (cerebral aneurysm, operation group)

Figure 5
Mahalanobis distances after the end of the first and fourth weeks and the progress of urinary continence recovery for disease 5 (brain infection, nonoperation group)

Figure 6
Mahalanobis distances after the end of the first and fourth weeks and the progress of urinary continence recovery for disease 6 (brain hemorrhage, operation and nonoperation groups)
3. Selection of Variables by Orthogonal Array

The numbers listed vertically in Table 4 are experiment numbers. For example, in experiment 1 we calculated a Mahalanobis distance with all 11 factors, and in experiment 2 we computed the MD with five variables: factors 3, 4, 6, 8, and 12.

The right side of Table 4 shows the SN ratio and sensitivity. Let the Mahalanobis distances of the 30 patients who belong to group III, the patients who recovered continence at the end of the fourth week, be y_i,1, y_i,2, ..., y_i,30. Also let the Mahalanobis distances of the 51 patients who belong to group IV, the patients who could not recover continence, be y_i,31, y_i,32, ..., y_i,81. The SN ratio and sensitivity were defined as follows. We evaluated the certainty of judgment by the SN ratio, η, and the distance between the two groups by the sensitivity, S:
η = 10 log10 [(1/r)(SG - Ve)/Ve]   (dB)   (1)

S = G4 - G3   (2)

1/r = 1/30 + 1/51   (3)

where G3 and G4 are the average Mahalanobis distances of groups III and IV.
Table 4
SN ratio and sensitivity

No.   η (dB)   S
1     1.47     4.92
2     2.70     4.78
3     6.61     2.66
4     5.07     3.30
5     6.91     2.07
6     4.54     2.97
7     4.54     3.17
8     5.88     2.35
9     3.58     4.82
10    7.82     2.02
11    10.35    2.20
12    3.87     4.42
Sm = (y1 + y2 + ... + y81)²/81   (f = 1)   (4)

SG = (y1 + ... + y30)²/30 + (y31 + ... + y81)²/51 - Sm   (f = 1)   (5)

ST = y1² + y2² + ... + y81²   (f = 81)   (6)

Se = ST - Sm - SG   (f = 79)   (7)

Ve = Se/79   (8)
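The decomposition above can be sketched in code; the distances below are invented, and the form 1/r = 1/n3 + 1/n4 is our assumption for eq. (3):

```python
import math

def sn_and_sensitivity(y3, y4):
    """Two-group SN ratio (dB) and sensitivity from Mahalanobis distances.

    y3: distances of group III (recovered); y4: group IV (not recovered).
    The unit 1/r = 1/len(y3) + 1/len(y4) is an assumption about eq. (3).
    """
    n3, n4, n = len(y3), len(y4), len(y3) + len(y4)
    sm = (sum(y3) + sum(y4)) ** 2 / n                    # eq. (4)
    sg = sum(y3) ** 2 / n3 + sum(y4) ** 2 / n4 - sm      # eq. (5)
    st = sum(v * v for v in y3 + y4)                     # eq. (6)
    se = st - sm - sg                                    # eq. (7)
    ve = se / (n - 2)                                    # eq. (8)
    inv_r = 1 / n3 + 1 / n4
    eta = 10 * math.log10(inv_r * (sg - ve) / ve)        # eq. (1)
    s = sum(y4) / n4 - sum(y3) / n3                      # eq. (2)
    return eta, s

# Toy distances: group III close to the normal space, group IV far from it.
eta, s = sn_and_sensitivity([1.0, 2.0, 1.0, 2.0], [8.0, 9.0, 10.0, 9.0])
```

A larger η means a more certain separation; a larger S means the two groups sit farther apart.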
Figure 7
Response graphs for item selection
Figure 8 shows the relationship between Mahalanobis distances after one week and continence recovery after four weeks when using all items for all diseases. In this case, the SN ratio, η, and sensitivity, S, result in 1.47 dB and 4.92. In contrast, according to the result in the preceding section, Figure 9 illustrates the relationship between the Mahalanobis distances after one week and the recovery after four weeks when using all items except items 7, 8, and 9. The resulting SN ratio, η, and sensitivity, S, are computed as 0.40 dB and 5.30.

Excluding the three items yields a better sensitivity, S, but a 1.07 dB lower SN ratio than when using all factors. A comparison of Figures 8 and 9 demonstrates that taking all items can lead to better discriminability.
Figure 8
Mahalanobis distances after the end of the first and fourth weeks and the progress in urinary continence recovery for all diseases, including all items

Figure 9
Mahalanobis distances after the end of the first and fourth weeks and the progress in urinary continence recovery for all diseases when items were selected

The items excluded, 7, 8, and 9, are some of the items of diseases expressed as binary variables. This example highlights that when we select some categorical variables, it is regarded as inappropriate to exclude only some of the variables from the full set. If a selection cannot be avoided, we should carefully exclude each variable in turn. This is because each binary variable contains other factors' effects.

In our research, using all the factors was chosen as the optimal selection for creating a Mahalanobis space.
References

Yoshiko Hasegawa, 1995. Pattern recognition by Mahalanobis distance: application to the medical field, prediction of urinary continence recovery among patients with brain disease. Standardization and Quality Control, Vol. 48, No. 11, pp. 40–45.

Yoshiko Hasegawa and Michiyo Kojima, 1997. Prediction of urinary continence recovery among patients with brain disease using Mahalanobis distance. Quality Engineering, Vol. 5, No. 5, pp. 55–63.

Michiyo Kojima, 1986. Basic research on prediction of urinary continence recovery among patients with brain disease using conscious urinary diagram. Kango Kenkyu/Igaku Shoin, Vol. 19, No. 2, pp. 161–189.
CASE 77
1. Introduction

In studies utilizing the Mahalanobis–Taguchi system (MTS), missing data sometimes occur that make it impossible to calculate a Mahalanobis distance. For example, the inverse of the correlation matrix may not be computable because of multicollinearity; and if such data are left untreated, most computer software automatically treats them as zero. Calculation of an inverse matrix then becomes possible, but the results are meaningless.

As a solution to this problem, the following two methods are proposed:

1. If it is possible to collect enough information with no missing data for normal people, we can create a Mahalanobis space with them.

2. Using only data for items other than those with missing data, we form a Mahalanobis space.

Method 2 is considered intricate and impractical because we would need to re-create a Mahalanobis space for each combination of items with no missing data. In addition, although theoretically method 2 does not require any countermeasures for missing data, accuracy in judgment could be lowered because of the decreased number of examination items.

Since we could secure data with no missing values for 354 normal persons in our research, we created a Mahalanobis space by adopting method 1. The main
Table 1
Definition of normal and abnormal persons in comprehensive judgment

Definition         Comprehensive Judgment       Frequency   Proportion (%)
Normal persons     A1: normal                   59          4.3
                   A2: healthy with comments    295         21.4
Abnormal persons   B1: observation needed       345         25.1
                   B2: under observation        6           0.4
                                                56          4.1
                                                165         12.0
                                                15          1.1
                                                436         31.7
Total                                           1377        100.1
Mahalanobis Distance Application for Health Examination and Treatment of Missing Data
Table 2
Blood test items used for our study

No.   Test Item                       Abbreviation
1     White blood cell count         WBC
2     Red blood cell count           RBC
3     Hemoglobin count               Hb
4     Hematocrit count               Hct
5     Aspartate aminotransferase     AST
6     Alanine aminotransferase       ALT
7     Alkaline phosphatase           ALP
8     γ-Glutamyltranspeptidase       γ-GTP
9     Lactate dehydrogenase          LDT
10    Total bilirubin                TB
11    Thymol turbidity test          TTT
12    Zinc sulfate turbidity test    ZTT
13    Total protein                  TP
14    Albumin                        Alb
15    α2-Globulins                   α2-Gl
16    β-Globulins                    β-Gl
17    γ-Globulins                    γ-Gl
18    Amylase                        AMY
19    Total cholesterol              TC
20    Triglyceride                   TG
21    HDL cholesterol                HDL-C
22    Fasting blood sugar            FBS
23    Blood urea nitrogen            BUN
24    Creatinine                     Cr
25    Uric acid                      UA

a Requisite: examination items that the Industrial Safety and Health Regulations (Article 44) require to be checked during a periodical medical checkup.
Table 3
Simulation model of missing data (1377 checkups × 25 blood test items)
Procedure 1

Regarding the proportion of missing data as a parameter, we calculated the total number of missing data. Since the items age and gender are seldom missed, we excluded both:

total number of missing data = (total number of checkups) × (number of blood test items) × (proportion of missing data)
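With the figures used in this study (1377 checkups, 25 blood test items), the count works out as:

```python
n_checkups = 1377          # all checkups in the simulation (Table 3)
n_blood_items = 25         # blood test items; age and gender are excluded
proportion_missing = 0.10  # the parameter varied in the simulation

# Total number of data to delete in the simulation
total_missing = int(n_checkups * n_blood_items * proportion_missing)
```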
Procedure 2

We generated two types of random numbers, R1 and R2, and deleted data until the count reached the required total number of missing data calculated in procedure 1.

In the case of taking countermeasures for missing data, in procedure 3 we complemented a missing measurement for checkup number j and item number i with one of the following three types of averages as a complementary value:

1. Average from all examinees. When there are missing measurements, they are quite often complemented with the average of all data because of its handiness. Our research also studied this method.

2. Complement with the item-by-item average for each comprehensive judgment. Where a comprehensive judgment had been made by company B, it was expected that by complementing the data with the average of an examination item stratified by comprehensive judgment, we could improve the discriminability more than by using an item-by-item average over all examinees.

3. Complement with the item-by-item average from normal persons. Because the group of normal persons is regarded as homogeneous, an item-by-item average from normal persons was expected to be significant, stable, and reliable.

This strategy is based on our supposition that examination items in which even a group of abnormal persons has missing data are likely to have almost the same values as those for a group of normal persons. Table 4 shows item-by-item averages from normal persons.
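Complement method 3 can be sketched as follows; the item names and values are illustrative only, not the study's averages:

```python
# Item-by-item averages from the normal group (illustrative values only).
normal_avg = {"AST": 19.6, "ALT": 14.9, "TP": 7.37}

def complement(record, averages):
    """Fill each missing measurement (None) with the normal-group average."""
    return {item: averages[item] if value is None else value
            for item, value in record.items()}

checkup = {"AST": 35.0, "ALT": None, "TP": 7.1}   # one checkup with a gap
filled = complement(checkup, normal_avg)
```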
Mahalanobis Distance Application for Health Examination and Treatment of Missing Data
Table 4
Item-by-item average from normal persons (units omitted)

No.  Item    Average
1    WBC     5.85
2    RBC     476.7
3    Hb      14.93
4    Hct     44.74
5    AST     19.6
6    ALT     14.9
7    ALP     104.2
8    γ-GTP   27.3
9    LDT     343.3
10   TB      0.84
11   TTT     1.11
12   ZTT     7.27
13   TP      7.37
14   Alb     68.61
15   α2-GI   8.03
16   β-GI    7.84
17   γ-GI    12.53
18   AMY     156.9
19   TC      201.9
20   TG      94.9
21   HDL-C   58.5
22   FBS     92.6
23   BUN     16.09
24   Cr      0.87
25   UA      5.56

Number of data: 354
Table 5 shows the average Mahalanobis distance for each comprehensive judgment as the proportion of missing data is changed, in the case where missing data are complemented by item-by-item averages from normal persons.
When the proportion of missing data is 10%, the
averages of Mahalanobis distances for the two
groups of normal persons, A1 and A2, were 1.27 and
1.60, respectively, both of which are regarded as relatively small. On the other hand, those for the two
groups of abnormal persons, C1 and C2, were 16.11
and 6.01, respectively, which are viewed as large.
Therefore, we can discriminate between normal and
abnormal persons by the averages of Mahalanobis
distances.
Even when the proportion of missing data is 20
or 30%, the averages of Mahalanobis distances for
A1 and A2 are obviously smaller than those for C1
and C2. As a result, normal and abnormal persons
can be discriminated.
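The discrimination above rests on the scaled Mahalanobis distance D² = zR⁻¹z′/k computed from normal-group statistics. A minimal sketch follows; the group size and the random placeholder data are assumptions, since the study's actual item values and correlations are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 25                              # number of examination items
normal = rng.normal(size=(354, k))  # placeholder data standing in for the normal group

mean = normal.mean(axis=0)
std = normal.std(axis=0, ddof=1)
Z = (normal - mean) / std           # standardize with normal-group statistics
R_inv = np.linalg.inv(np.corrcoef(Z, rowvar=False))

def mahalanobis_d2(x):
    """Scaled squared Mahalanobis distance: near 1 on average for members
    of the base (normal) space, large for abnormal patterns."""
    z = (x - mean) / std
    return float(z @ R_inv @ z) / k

d2_normal = np.array([mahalanobis_d2(row) for row in normal])
```

By construction the average of `d2_normal` over the base-space members is close to 1, which is why the normal groups A1 and A2 in Table 5 sit near 1 while the abnormal groups C1 and C2 are far larger.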
Proportion of Missing Data and Change in Discriminability
A 2 × 2 table (Table 6) summarizes the results of discriminability based on thresholds selected so that a type I error occurs
as often as a type II error does, in the case of complementing missing data with item-by-item averages
from normal persons.
1. Proportion of missing data is 1%. When the threshold is 1.28, the occurrences of type I and type II errors are 16.95% and 16.74%, respectively, and the contribution, ρ, is 0.43 (43%). Since the SN ratio, η, is reduced by 0.220 dB, from −1.077 dB when the proportion of missing data is zero to −1.297 dB, the resulting discriminability decreases slightly, to 95.06% of that in the case of no missing data.
2. Proportion of missing data is 5%. If the threshold is set to 1.35, the occurrences of type I and type II errors are 20.62% and 19.00%, respectively, whereas the contribution is 0.35 (35%). The SN ratio declines by 1.605 dB, from −1.077 dB when the proportion of missing data is zero to −2.682 dB. The eventual discriminability is then lowered to 69.01% of that when there are no missing data.
3. Proportion of missing data is 10%. If the threshold is set to 1.55, type I and type II errors occur with probabilities of 20.06% and 19.00%,
Table 5
Average of Mahalanobis distances in case of complement with item-by-item averages from normal persons

Percent of              Comprehensive Judgment
Missing
Measurements   A1    A2    B1    B2    C1     C2    G1    G2
0              0.92  1.02  2.41  1.35  16.16  5.08  9.28  4.62
1              0.92  1.05  2.51  1.35  16.29  5.12  9.32  4.67
5              1.03  1.22  2.76  1.85  16.26  5.35  9.33  4.89
10             1.27  1.60  2.86  2.26  16.11  6.01  9.29  5.72
20             1.85  1.98  3.51  1.62  16.65  5.36  8.70  5.78
30             1.97  2.38  4.11  1.72  17.02  5.72  8.27  5.62
Table 6
Result of discriminability in case of complement with item-by-item averages from normal persons

Percent of               Comprehensive   Mahalanobis Distance            Error
Missing Data  Threshold  Judgment        Below    At or Above    Total   (%)     ρ      η (dB)
0             D² = 1.26  A1, A2          297      57             354     16.10   0.44   −1.077
                         C1, C2          37       184            221     16.74
1             D² = 1.28  A1, A2          294      60             354     16.95   0.43   −1.297
                         C1, C2          37       184            221     16.74
5             D² = 1.35  A1, A2          281      73             354     20.62   0.35   −2.682
                         C1, C2          42       179            221     19.00
10            D² = 1.55  A1, A2          283      71             354     20.06   0.36   −2.545
                         C1, C2          42       179            221     19.00
20            D² = 1.80  A1, A2          249      105            354     29.66   0.17   −6.766
                         C1, C2          61       160            221     27.60
30            D² = 2.05  A1, A2          248      106            354     29.94   0.15   −7.504
                         C1, C2          67       154            221     30.32

(For A1, A2 the error is the type I rate, the fraction at or above the threshold; for C1, C2 it is the type II rate, the fraction below it.)
respectively. Perhaps because of an uneven generation of random numbers, the resulting contribution is 0.36, and the SN ratio rises to −2.545 dB, up slightly from its value when the proportion is 5%. At the same time, since the SN ratio drops by 1.468 dB, from −1.077 dB when the proportion of missing data is zero to −2.545 dB, the discriminability diminishes to 71.32% of that when there are no missing data.
In addition, when the proportions of missing data are 20 and 30%, the SN ratios are −6.766 and −7.504 dB. These results reveal that when the proportion of missing data exceeds 10%, the discriminability tends to deteriorate drastically.
Figure 1
Proportion of missing data for each case of countermeasure and change in threshold
Figure 2
Proportion of missing data for each case of countermeasure and change in SN ratio
On the other hand, we compared the case of complement with item-by-item averages from all examinees (case 2) with that of complement with item-by-item averages from normal persons (case 4). The SN ratio for complement with item-by-item averages from normal persons (case 4) shows a better value (i.e., better discriminability) than that for complement with item-by-item averages from all examinees (case 2).
Since an item-by-item average for all examinees
denotes a mean value for all normal, abnormal, and
even other-type persons, its stability depends greatly
on the constitution of data collected. In contrast,
because a group of normal persons is homogeneous, an item-by-item average from normal persons
can be considered stable.
Considering all of the above, as the most appropriate method of complementing missing data for
medical checkups, we selected the complement with
item-by-item averages from normal persons (case 4).
Setup of Threshold
For the sake of convenience, our research determined thresholds on the assumption that type I and type II errors carry the same loss. Although a threshold should essentially be computed from the loss function, it is not easy to determine a threshold that balances the future losses of type I and type II errors. Thus, focusing on our objective of taking effective measures for missing data in medical checkups, we determined thresholds on the assumption that both errors' losses are equal. Since the issue of threshold selection is regarded as quite crucial, we continue to work on it.
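Under this equal-loss assumption, a threshold can be picked where the two error rates balance. The sketch below searches the observed distances for the threshold with the smallest gap between the type I and type II rates; the gamma-distributed synthetic distances are illustrative stand-ins, not the study's data.

```python
import numpy as np

def equal_error_threshold(d2_normal, d2_abnormal):
    """Pick the threshold at which the type I rate (normal persons at or
    above it) is closest to the type II rate (abnormal persons below it)."""
    candidates = np.sort(np.concatenate([d2_normal, d2_abnormal]))
    best_t, best_gap = candidates[0], np.inf
    for t in candidates:
        type1 = np.mean(d2_normal >= t)
        type2 = np.mean(d2_abnormal < t)
        gap = abs(type1 - type2)
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t

# Synthetic distances mimicking Table 5's pattern (normal near 1, abnormal near 16)
rng = np.random.default_rng(2)
d2_a = rng.gamma(2.0, 0.6, size=354)   # normal group
d2_c = rng.gamma(2.0, 8.0, size=221)   # abnormal group
t = equal_error_threshold(d2_a, d2_c)
```

A loss-function-based threshold would instead weight the two rates by their respective losses before minimizing.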
6. Countermeasures for Missing Data

Missing Data Occurrence and Simulation Model
In our research, we performed a simulation based on a model that generates missing measurements equally and randomly for any examination item. However, the actual occurrence of missing data for each examination item in medical checkups is not uniform but is rather low for the particular requisite items prescribed by the Industrial Safety and Health Regulations (Table 2). This implies that important items tend to have fewer missing data.

In addition, when we complemented missing measurements with item-by-item averages from normal persons (case 4), because an item-by-item average from normal persons was used even for abnormal persons' missing data, the threshold resulted in a smaller value than when complementing with the other types of averages (cases 2 and 3). The reason is likely to relate to our setup of a simulation model that randomly generates missing data for the 25 blood test items. Moreover, a group of abnormal persons is so heterogeneous that its average hardly has significance; thus, the method of complementing missing data with item-by-item averages from comprehensive judgment is not considered appropriate for cases other than A1 and A2.

7. Conclusions

Taking all of the above into account, we can observe that by setting up a simulation model that creates missing data in accordance with the item-by-item proportion of actual missing data, we can properly obtain a Mahalanobis distance with a minimal loss of information by complementing with item-by-item averages from normal persons when data are missing from medical checkups. The result gained from our research is believed to contribute to the rationalization of medical checkups: that is, reduction in medical expenses through the elimination of excessively detailed examinations, mitigation of the time and economic cost to examinees, alleviation of examinee anxiety, and assistance with information or instructions from doctors, nurses, and health workers.

Pattern recognition using a Mahalanobis distance can be applied not only to medical checkups but also to comprehensive judgments in many fields. Our research could contribute to the problem of missing measurements in any such application.
References

Yoshiko Hasegawa, 1997. Mahalanobis distance application for health examination and treatment of missing data: 1. The case with no missing data. Quality Engineering, Vol. 5, No. 5, pp. 46–54.

Yoshiko Hasegawa, 1997. Mahalanobis distance application for health examination and treatment of missing data: 2. The case with no treatment and substituted zeros for missing data. Quality Engineering, Vol. 5, No. 6, pp. 45–52.
CASE 78

Forecasting Future Health from Existing Medical Examination Results Using the MTS
Abstract: For our research, we used medical checkup data stored for three
years by a certain company. Among 5000 examinees, there were several
hundred sets of data with no missing measurements. In this case, since the determination of a threshold between A and B was quite difficult, different doctors sometimes classified the same examinee into different categories, A or B. It was the aim of this study to improve the certainty of such judgments.
1. Introduction
The medical checkup items can be classified roughly into the following 19 groups:

1. Diseases under medical treatment (maximum of three types for each patient)
2. Diseases experienced in the past (maximum of three types for each patient)
3. Diseases that family members had or have (categorized by grandparents, parents, and brothers)
5. Subjective symptoms
8. Eyesight
9. Blood pressure
10. Urinalysis
11. Symptoms of other senses
12. Hearing ability
13. Chest x-ray
14. Electrocardiogram
15. Blood (19 items)
16. Photo of eyeground

Each examinee also receives a comprehensive judgment in one of four categories: A, normal; B, observation needed; C, under treatment; D, detailed examination needed.
Taguchis Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
Figure 1
Base space I for one-year-healthy persons (persons judged as unhealthy lie outside the base space)
healthy). In addition, items that lead to multicollinearity were also omitted. Eventually, the original nearly 100 items were reduced to 66.

Base Space I
We used the one-year-healthy persons' data (685 persons diagnosed as A and 735 as C). According to Figure 1, we formed base space I with the data for the 685 persons judged as A and studied whether they could be discriminated from the data for the 735 under-treatment patients identified as C. Furthermore, to improve discriminability, we selected examination items.
Base Space II
With the data for two-year-healthy persons, we
looked for further improvement in the accuracy of discriminability. According to Figure 2, we created base space II with the data for the 159 persons diagnosed as A in two consecutive years (two-year-healthy persons) and calculated the distances for these 159 persons diagnosed as A and for the 37 persons diagnosed as C. Since the result within one year was treated as one data point, each two-year-healthy person diagnosed as A yielded two data points. Because missing data lead to lower accuracy in judgment, we did not use records with missing data.

Figure 2
Base space II for two-year-healthy persons. The base space is formed from persons judged A (normal) in both the preceding and current years; combinations other than A-A (B, observation needed; C, under treatment; D, detailed examination needed) are not regarded as healthy.
Base Space III
After studying discriminability using the Mahalanobis–Taguchi system, we investigated predictability, which characterizes a Mahalanobis distance. That is, we studied the possibility not only of judging whether a certain person is currently healthy but also of foreseeing whether he or she will be healthy the next year based on the current medical checkup data.

First, combining the two-year medical checkup data, we created base space III based on Figure 3. The persons who were diagnosed as category A this year were defined as healthy persons. Base space III
was prepared from the preceding year's data for these healthy persons.

Figure 3
Base space III: the base space is created only with the preceding year's data from persons judged as healthy (A, normal) in the current year, to predict health for the next year; persons under treatment (C) in the current year lie outside the space.

Figure 4
Base space IV: predicting the current year's health condition with three-year-healthy persons' data. The base space is created only with the preceding two years' data from persons currently judged as healthy (A, normal); the other current-year categories (B, observation needed; C, under treatment; D, detailed examination needed) are not regarded as healthy.
data that were expected to be distant (abnormal
data). The greater the number of abnormal data,
the more desirable. However, under some circumstances we cannot collect a sufficient number of abnormal data; in this case, even a few data are regarded as applicable. We defined the distance for each abnormal data item, calculated from the base space for the first row, as D1, D2, ..., Dn.
Now, since the abnormal data should be distant
from the normal space, we used a larger-the-better
SN ratio for evaluation:
η = −10 log [ (1/n) Σ_{i=1}^{n} 1/D_i² ]   (dB)
To select necessary items, we calculated the average for each level and created the response
graphs where an item whose value decreases from
left to right was regarded as essential, whereas one
with a contrary trend was considered irrelevant to a
judgment.
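The larger-the-better SN ratio above can be computed directly. A minimal sketch follows; the distance values in the usage example are made up for illustration.

```python
import math

def sn_larger_the_better(distances):
    """Larger-the-better SN ratio, eta = -10 log10[(1/n) sum(1/D_i^2)], in dB.
    Distances far from the base space give a higher (less negative) eta."""
    n = len(distances)
    return -10.0 * math.log10(sum(1.0 / d**2 for d in distances) / n)

# Abnormal data far from the base space score higher than data near it.
far = sn_larger_the_better([4.0, 5.0, 6.0])
near = sn_larger_the_better([1.1, 1.3, 0.9])
```

In item selection, the levels of an orthogonal array are compared by this ratio, and an item whose removal lowers η is kept as essential.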
4. Results of Analysis
The possibility of judging health examinations using
base spaces prepared in different ways was studied.
Prediction by Base Space I
Using only the one-year data, we investigated
whether or not we could make a medical checkup
judgment. The data have no missing data and consist of those for 685 persons judged as healthy (A)
and 735 persons judged as unhealthy (C), with 109
items in total. In the hope of improving discriminability, we selected the following necessary items for
a judgment by excluding unnecessary ones:
Table 1
Distribution of one-year data's distances from base space I after item selection (counts of A and C persons per distance class)
Since discriminability deteriorates if item selection is iterated more than four times, we created base space I with the items that yielded the highest discriminability before any downward trend appeared.
Figure 5
Distances from base space II using two-year data
Table 2
Distribution of distances from base space II for the next year using two-year data (totals: 159 A persons, 37 C persons)
Table 3
Distribution of distances from base space III using two-year data for the prediction of next year (percentages of A and C per distance class)

Table 4
Distribution of distances from base space IV from three-year data before item selection (percentages of A and C per distance class)
6. Discussion
In our first analysis, we studied base space I, generated from the one-year data for persons judged as healthy, and created a base space with the items that maximized discriminability after item selection. However, despite the large number of data, including those for 685 persons judged as A and 735 as C, an overlap area occurred in the distributions and caused insufficient discriminability.
When we took advantage of a Mahalanobis distance from base space II, which was formed from the two-year data for persons diagnosed as healthy, judgments A and C could be distinguished.
Figure 6
Distances from base space IV using three-year data after item
selection
Table 5
Distribution of distances from base space IV using three-year data after item selection (percentages of A and C per distance class)

Table 6
Distribution of distances from base space III using two-year data after item selection (percentages of A and C per distance class)
Figure 7
Distances from base space III using two-year data after item
selection
Table 7
Prediction of medical checkup for threshold 1.5 (%)

Threshold       Expected Judgment for Next Year
(distance)      A       C       Total
D < 1.5         90      10      100
D ≥ 1.5         15      85      100
We have investigated whether we can raise data reliability by including time-series data in the calculation. However, since there are numerous changes in the examination items of a company medical checkup because of budgetary issues, job rotations, or employees' ages, the number of data that cover all medical checkups for three years in a row without missing data is very limited. Although we created a base space using the three-year data for 120 persons, the number of usable items eventually decreased because the number of data turned out to be small. Therefore, considering the stability of base space IV, we reduced the number of items to 101. As a result, we were able to separate judgments A and C completely. However, a situation occurred in which data that should be judged as A were distant from the base space: the small number of data in the base space made the space sensitive, so judgments A and C overlapped.
In most cases, to confirm whether abnormal data's distances become distant after creating a base space, we check that the distances of normal data not used to construct the base space lie around 1.0. In this case we faced two typical phenomena: an overlap between the abnormal data and the base space, and an increase in the normal data's distances. In our research, since the distances of the abnormal data also increased, the distances calculated from normal data not used to construct the base space were not close to 1.0. One possible reason is that the excessive number of items in the base space led to a difficult judgment. Therefore, we considered that we should select only the items effective for discriminating abnormal data. Yet the most important issue is how many items should be eliminated: removal of too many items will blur the standard space. We left 60 items in our study. Through the item selection addressed above, we eventually obtained a fairly good improvement in prediction accuracy, even though the normal data not used for the base space were somewhat distant from it. By increasing the number of data, we can expect to realize a highly accurate prediction in the future. In addition, all of the items retained through item selection are regarded as essential from the medical viewpoint. By improving the accuracy of item selection with an increased number of data points, we can obtain proof of medically vital items, thereby narrowing down the examination items and streamlining the medical checkup process.
7. Conclusions
According to the results discussed above, we confirmed primarily that a Mahalanobis distance is highly applicable to medical checkup data. For the prediction of health conditions for the next year, despite its incompleteness, we concluded that we obtained relatively good results. Additionally, although we attempted an analysis of the three-year data, we considered that a larger number of data are needed to improve the prediction accuracy.

We summarize all of the above as follows:

1. Although we cannot fully separate healthy and unhealthy persons by using Mahalanobis distances based on the one-year medical checkup data, despite the large amount of data used, if the two-year data are used we can distinguish both groups accurately.
References

Yoshiko Hasegawa, 1997. Mahalanobis distance application for health examination and treatment of missing data: III. The case with no missing data. Quality Engineering, Vol. 5, No. 5, pp. 46–54.

Yoshiko Hasegawa, 1997. Mahalanobis distance application for health examination and treatment of missing data: II. A study on the case for missing data. Quality Engineering, Vol. 5, No. 6, pp. 45–52.

Yoshiko Hasegawa and Michiyo Kojima, 1994. Prediction of urinary continence recovery among patients with brain disease using pattern recognition. Standardization and Quality Control, Vol. 47, No. 3, pp. 38–44.

Yoshiko Hasegawa and Michiyo Kojima, 1997. Prediction of urinary continence recovery among patients with brain disease using Mahalanobis distance. Quality Engineering, Vol. 5, No. 5, pp. 55–63.

Tatsuji Kanetaka, 1997. An application of Mahalanobis distance to diagnosis of medical examination. Quality Engineering, Vol. 5, No. 2, pp. 35–44.

Hisato Nakajima, Kei Takada, Hiroshi Yano, Yuka Shibamoto, Ichiro Takagi, Masayoshi Yamauchi, and Gotaro Toda, 1998. Forecasting of the future health from existing medical examination results using Mahalanobis–Taguchi system. Quality Engineering, Vol. 7, No. 4, pp. 49–57.
CASE 79
1. Introduction
In order for a computer to recognize a handwritten character by taking advantage of multidimensional information processing, we create a base space. Considering that the base space needs to recognize various types of commonly used characters, we collected as many samples of a particular character, written in different shapes, as possible by asking a number of subjects to write characters by hand. Each subject wrote a hiragana character (Japanese syllabary) on a 50 mm by 50 mm piece of paper as a sample character. The total number of writings amounted to approximately 3000. Figure 1 shows a sample character.
A sample character is saved as graphical data according to the procedure shown in Figure 2, and its features are extracted. The graphical data are divided into n × n cells (Figure 3). If a part of the character lies on a certain cell, the cell is set to 1; otherwise, it is set to 0. This process is regarded as binarization of the graphical data. Since the number of binarized cells is n × n without special treatment, if n is equal to 50 the number adds up to 2500. In such a case it is quite difficult to calculate with a personal computer the 2500 × 2500 inverse matrix required to create a base space. Therefore, by extracting the character's features, we reduce the number of binarized cells.
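The cell-wise binarization just described can be sketched as follows. The synthetic image, the cell count, and the gray threshold of 128 are hypothetical choices for illustration, not values from the study.

```python
import numpy as np

def binarize_cells(gray, n=25, threshold=128):
    """Divide a grayscale character image into n x n cells and set a cell
    to 1 if any pixel in it is darker than the threshold (part of a stroke)."""
    h, w = gray.shape
    cells = np.zeros((n, n), dtype=int)
    for r in range(n):
        for c in range(n):
            block = gray[r * h // n:(r + 1) * h // n, c * w // n:(c + 1) * w // n]
            if (block < threshold).any():
                cells[r, c] = 1
    return cells

# Hypothetical 50 x 50 image: white background with one dark vertical stroke
img = np.full((50, 50), 255)
img[:, 24:26] = 0
cells = binarize_cells(img, n=25)
```

The resulting 25 × 25 binary grid is the kind of data on which the feature extraction below operates.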
For the 25 × 25 binary data for the character shown in Figure 4, we performed feature extraction in accordance with the following three procedures:
Figure 1
Handwritten character "ah"

Figure 2
Reading of handwritten characters and extraction of features (handwritten character → scanner → personal computer: reading of image, binarization, feature extraction → memory device)

Figure 3
Binarization of the handwritten character in Figure 1
Figure 4
Japanese character whose features are extracted in 25 × 25 cells, with row-by-row counts and feature values computed by integral method I, integral method II, and the differential method
Extraction of Features
Figure 6
Distribution of characters by the Mahalanobis distance
Table 1
Frequency of occurrence of target character for base space of character "ah"

(a) Occurrences of "ah" and "oh"

Character   MD < 1.5   MD ≥ 1.5
ah          0.8        0.2
oh          0.1        0.9

(b) Combined probabilities

                oh: MD < 1.5   oh: MD ≥ 1.5
ah: MD < 1.5    0.08           0.72
ah: MD ≥ 1.5    0.02           0.18

Figure 7
Relationship between Mahalanobis distance and cumulative frequency using the base space for "ah"
4. Conclusions
Table 1 shows the frequencies of occurrence for "ah", "oh", and "nu" when the Mahalanobis distance threshold is 1.5. Using part (a) of Table 1, we obtain part (b) for the character recognition rate, which indicates 0.72 as the probability of clearly correct recognition and 0.02 as that of clearly incorrect recognition. The hatched areas represent the probabilities that the Mahalanobis distances for "ah" and "oh" are both above or both below 1.5. Comparing the Mahalanobis distances calculated for "ah" and "oh", we make a correct judgment if the distance for "ah" is smaller than that for "oh", and an incorrect judgment if the distance for "ah" is larger. Since the probability of correct recognition in the hatched areas can be assumed to be more than half of their total probability, or 0.13, adding this to the aforementioned 0.72 gives an overall correct recognition rate of about 0.85.
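The arithmetic behind these rates can be checked in a few lines; treating the two distances as independent is an assumption made only for this back-of-envelope check.

```python
# Probabilities read from Table 1(a) for the base space of "ah"
p_ah_below = 0.8   # MD("ah") < 1.5
p_oh_above = 0.9   # MD("oh") >= 1.5

# Assuming independence, the clear-cut cases of Table 1(b):
p_correct = p_ah_below * p_oh_above              # unambiguous correct recognition
p_wrong = (1 - p_ah_below) * (1 - p_oh_above)    # unambiguous misrecognition

# Hatched (ambiguous) area: both distances below or both above the threshold
p_ambiguous = p_ah_below * (1 - p_oh_above) + (1 - p_ah_below) * p_oh_above

# If at least half the ambiguous cases are resolved correctly by comparing
# the two distances, the overall correct rate is roughly:
p_total = p_correct + p_ambiguous / 2
```

This reproduces the 0.72, 0.02, and 0.13 figures quoted in the text.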
Reference

Takashi Kamoshita, Ken-ichi Okumura, Kazuhito Takahashi, Masao Masumura, and Hiroshi Yano, 1998. Character recognition using Mahalanobis distance. Quality Engineering, Vol. 6, No. 4, pp. 39–45.
CASE 80
1. Introduction
Since the number of cellular phones has increased rapidly in recent years, not only size and weight reduction but also the phones' appearance is under active research. More attractive, easy-to-understand printed characters and labels are highly sought after.

A keypad character is normally integrated into a keypad. While the size of a character changes to some degree with the type of cellular phone, the digits from 0 to 9 and symbols like "*" and "#", which were studied in our research, are approximately 3 mm × 2 mm with a linewidth of 0.4 mm. Kana (Japanese syllabary) or alphabetical characters, on the other hand, are about 1 mm × 1 mm with a linewidth of approximately 0.16 mm. While fade resistance is required for all of these characters, keypad characters use a material and printing method that let a backlight pass through, so that a cellular phone can be used in a dark environment.
The keypad characters are printed in white on a blackish background. Various defects are caused by a variety of errors in the printing method. Figure 1 shows a typical example. While Figure 1a is judged as a proper character, Figure 1b, having a chipped area on the left, is judged as a defect: a part of the line is obviously concave. As long as this defect lies within 10% of the linewidth, it does not stand out much; however, if it exceeds 30%, the keypad is judged as defective because the defect gives a poor impression of the character. In addition, there are other types of defects, such as extreme swelling of a line, an extremely closed round area of an "8", or unnecessary extra dots in a free space on the keypad.
2. Image-Capturing System
In order to capture a highly accurate image of a character, we organized an image-capturing system with a CCD (charge-coupled device) camera, an image input board, and a personal computer. Figure 2a shows a photo of the image-capturing device, and Figure 2b outlines the overall system. The CCD camera used for our research can capture an image of 1000 × 1000 pixels at a time. Using this system, we can capture a 3 mm × 2 mm character as an image of 150 × 100 pixels and a 0.4-mm linewidth as roughly 20 pixels.
Because the light and shade of a captured image are represented on a 256-level scale, we binarize the image data so that only the character portion is extracted properly. By setting up an appropriate threshold, we obtained a clear-cut binary image.
Furthermore, we performed noise-reduction processing.

Figure 1
Printed character: (a) proper character; (b) defective character with a chipped area

Figure 3
Extraction of feature values
3. Base Space
A judgment on whether a certain character is proper or defective is made in accordance with the inspection standard. However, as discussed above, it is quite difficult to prescribe detailed inspection standards for all defect types, such as a chipped area or swelling, or for all defect cases on a character. In addition, even if a printed character is too bold or fine on the whole, when it is well balanced we may
Figure 2
Character inspection system: (a) image-capturing device (area camera, keypad character); (b) overall system (area camera → image input board → personal computer)
4. Results of Inspection
After creating software covering the aforementioned workflow, ranging from image capturing to character extraction, division of the character into unit regions, extraction of feature values, generation of the base space, and calculation of Mahalanobis distances, we performed an experiment. For the division into unit regions, the software automatically split up an image into unit regions of 30 × 30 pixels each. The number of unit regions therefore varies with the size of a character or sign. For instance, when a character is 150 pixels high × 100 pixels wide, its image is divided into 5 × 4 unit regions, whereas a character of 120 pixels high × 80 pixels wide is split into 4 × 3 unit regions.
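The division into unit regions can be sketched as below. Zero-padding the partial edge regions is an assumption: the text does not say how the software handles a 100-pixel width that is not a multiple of 30, only that such an image yields 5 × 4 regions.

```python
import math
import numpy as np

def unit_regions(binary_img, size=30):
    """Split a binarized character image into size x size unit regions.
    The image is zero-padded so that partial edges still form full regions
    (a 150 x 100 image yields 5 x 4 regions, as stated in the text)."""
    h, w = binary_img.shape
    rows, cols = math.ceil(h / size), math.ceil(w / size)
    padded = np.zeros((rows * size, cols * size), dtype=binary_img.dtype)
    padded[:h, :w] = binary_img
    return [padded[r * size:(r + 1) * size, c * size:(c + 1) * size]
            for r in range(rows) for c in range(cols)]

regions = unit_regions(np.zeros((150, 100), dtype=int))    # 5 x 4 regions
regions2 = unit_regions(np.zeros((120, 80), dtype=int))    # 4 x 3 regions
```

Each region is then summarized by its feature values before the Mahalanobis distance is computed.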
Figure 4 shows an example of a displayed inspection result. The binarized character is indicated on the left of the screen; on the right, the Mahalanobis distance for each unit region is shown. In addition, if a Mahalanobis distance exceeds a predetermined threshold, the corresponding area in the character on the left lights up. All Mahalanobis distances surpassing 1000 are displayed as 1000.

As discussed before, we used 200 sets of characters that had been preselected as proper by the two inspectors to create the base space.
Table 1: Feature values extracted from a character (differential and integral characteristics per unit region)
Figure 4: On-screen inspection result for character recognition by MTS
Figure 5: Inspection results (Mahalanobis distances per unit region)
Table 2: Comparison of inspectors' judgment and Mahalanobis distance

Mahalanobis Distance    Proper    Defective
0 <= MD < 1               16         -
1 <= MD < 2               78         -
2 <= MD < 3               62         -
3 <= MD < 4               35        14
4 <= MD < 10              22        19
10 <= MD                  36        44
Total                    249        81
5. Conclusions
As shown in Table 2, since the Mahalanobis distances indicate a clear-cut trend for judging whether a certain character is proper or defective, the evaluation method grounded in the MTS method in our research can be regarded as valid for detecting a defective character. Although this method needs further improvement because it sometimes judges a proper character as defective, we have successfully demonstrated that the conventional evaluation of quality in character printing, which has long relied heavily on human senses, can be quantified.
The following are possible reasons that a proper character is judged as defective:
1. The resolution of character image capture (a character of 3 mm × 2 mm is captured by an image of 150 × 100 pixels) is regarded as insufficient, so higher resolution is needed.
2. Although 200 sets of proper characters were used in this research as the data for creating a Mahalanobis space, the number is still insufficient to take in all patterns that are tolerable as a proper character.
At this time, we believe that the second problem is more significant. Considering that research on character recognition deals with 3000 handwritten characters for a standard space, we need to collect at least 1000 characters as the standard data.
Reference
Shoichi Teshima, Dan Jin, Tomonori Bando, Hisashi Toyoshima, and Hiroshi Kubo, 2000. Development of printed letter inspection technique using Mahalanobis-Taguchi system. Quality Engineering, Vol. 8, No. 2, pp. 68-74.
Part V
Software Testing and Application
Algorithms (Cases 81-82)
Computer Systems (Case 83)
Software (Cases 84-87)
CASE 81
1. Introduction
What makes a problem difficult? Suppose that you are assigned to work on a situation where (1) the phenomenon is relatively rare; (2) the phenomenon involves not only the entire drive train hardware and software of a vehicle, but specific road conditions are required to initiate it; (3) even if all conditions are present, the phenomenon is difficult to reproduce; and (4) if a vehicle is disassembled and then reassembled with the same parts, the phenomenon may disappear completely!
For many years, various automobile manufacturers have occasionally experienced a phenomenon like this associated with slow oscillation of vehicle rpm under steady pedal position (ringing) or cruise-control conditions (hitching). Someone driving a vehicle would describe hitching as an unexpected bucking or surging of the vehicle with the cruise control engaged, especially under load (as in towing). Engineers define hitching as a vehicle in speed-control mode with engine speed variation of more than 50 rpm (peak to peak) at a frequency below 16 Hz.
narrowed the possibilities to one probable hypothesis: instability in the controlling system.
By instrumenting a vehicle displaying the hitching phenomenon, Tananko was able to produce the plot shown in Figure 1. This plot of the three main signals of the control system (actual rpm, filtered rpm, and MF_DES, a command signal) verified the AFD hypothesis by showing the command signal out of phase with filtered rpm when the vehicle was kept at constant speed in cruise-control mode.
Actual rpm is out of phase with the command signal because of delays associated with mass inertia. In addition, the filtered rpm is delayed from the actual rpm because of the time it takes for the filtering calculation. The specific combination of these delays, a characteristic of the unified control system coupled with individual characteristics of the drive train hardware, produces the hitching phenomenon. The solution lies in using Taguchi's techniques to make the software/hardware system robust.
2. System Description
A simple schematic of the controlling system is shown in Figure 2. The mph set point is determined ...

3. P-Diagram
The parameters studied in this project are given in the P-diagram shown in Figure 3.

4. Noise Factors
Figure 1: Hitching phenomena (actual rpm, filtered rpm, and command signal MF_DES plotted against time, 7.00 to 9.50 s)
Figure 2: Simplified functional flow (cruise control, governor feedback, feedforward functions FN1100/FN1300/FN1400, ICP sensor and control loops, injection timing, engine and vehicle)
Figure 3: P-diagram for diesel engine speed control (noise factors: driving profiles; signal factors: engine rpm(t), set-point mph(t); control factors A-F: rpm measurement, ICP loop Kp, ICP loop Ki, CG loop Kp, CG loop Ki, KP_CRUISE; response: vehicle speed, mph(t))
Figure 4: Ideal function 1, hitching: mph = β(rpm), with SN ratio η = 10 log(β²/σ²)
Figure 5: Ideal function 2, tracking: mph = β(s_mph), with SN ratio η = 10 log(β²/σ²)

6. Control Factors
The six control factors listed in Table 1 were selected for the study. These factors, various software ...
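The dynamic SN ratio used for both ideal functions can be computed directly from signal/response pairs. In this sketch the numeric data are invented for illustration; β is the least-squares slope through the origin and σ² is the scatter about the ideal line:

```python
import numpy as np

# Hypothetical signal/response data for one run of the hitching ideal
# function: signal values M_i vs. measured vehicle speed y_i (mph).
signal = np.array([40.0, 50.0, 60.0, 70.0])
response = np.array([39.2, 50.5, 59.1, 71.0])

# Least-squares slope through the origin: beta = sum(M*y) / sum(M^2)
beta = np.sum(signal * response) / np.sum(signal**2)

# Error variance around the ideal line y = beta * M
sigma2 = np.mean((response - beta * signal) ** 2)

# Dynamic SN ratio, eta = 10 log10(beta^2 / sigma^2), in decibels
eta = 10.0 * np.log10(beta**2 / sigma2)
print(round(beta, 4), round(eta, 2))
```

A higher η means the response follows the signal more faithfully under noise, which is exactly what the confirmation runs later in the case compare.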
Table 1: Control factors and levels

A: Rpm measurement (2 levels): 6 teeth, 12 teeth
B: ICP loop Kp: 0.0005, 0.0010, 0.0015
C: ICP loop Ki: 0.0002, 0.0007, 0.0012
D: CG loop Kp: 0.8, fn(a), 1.2
E: CG loop Ki: 0.027, 0.032, 0.037
F: KP_CRUISE: 0, 0.5, 3
(a) Current level.
Table 2: Control factor orthogonal array (L18); levels shown in parentheses, (b) = current condition

No.  A: rpm Measurement  B: ICP loop Kp  C: ICP loop Ki  D: CG loop Kp  E: CG loop Ki  F: KP_CRUISE
1    (1) 6 teeth         (1) 0.0005      (1) 0.0002      (1) 0.8        (1) 0.027      (1) 0
2    (1) 6 teeth         (1) 0.0005      (2) 0.0007      (2) fn(b)      (2) 0.032      (2) 0.5
3    (1) 6 teeth         (1) 0.0005      (3) 0.0012      (3) 1.2        (3) 0.037      (3) 3
4    (1) 6 teeth         (2) 0.0010      (1) 0.0002      (1) 0.8        (2) 0.032      (2) 0.5
5    (1) 6 teeth         (2) 0.0010      (2) 0.0007      (2) fn(b)      (3) 0.037      (3) 3
6    (1) 6 teeth         (2) 0.0010      (3) 0.0012      (3) 1.2        (1) 0.027      (1) 0
7    (1) 6 teeth         (3) 0.0015      (1) 0.0002      (2) fn(b)      (1) 0.027      (3) 3
8    (1) 6 teeth         (3) 0.0015      (2) 0.0007      (3) 1.2        (2) 0.032      (1) 0
9    (1) 6 teeth         (3) 0.0015      (3) 0.0012      (1) 0.8        (3) 0.037      (2) 0.5
10   (2) 12 teeth        (1) 0.0005      (1) 0.0002      (3) 1.2        (3) 0.037      (2) 0.5
11   (2) 12 teeth        (1) 0.0005      (2) 0.0007      (1) 0.8        (1) 0.027      (3) 3
12   (2) 12 teeth        (1) 0.0005      (3) 0.0012      (2) fn(b)      (2) 0.032      (1) 0
13   (2) 12 teeth        (2) 0.0010      (1) 0.0002      (2) fn(b)      (3) 0.037      (1) 0
14   (2) 12 teeth        (2) 0.0010      (2) 0.0007      (3) 1.2        (1) 0.027      (2) 0.5
15   (2) 12 teeth        (2) 0.0010      (3) 0.0012      (1) 0.8        (2) 0.032      (3) 3
16   (2) 12 teeth        (3) 0.0015      (1) 0.0002      (3) 1.2        (2) 0.032      (3) 3
17   (2) 12 teeth        (3) 0.0015      (2) 0.0007      (1) 0.8        (3) 0.037      (1) 0
18   (2) 12 teeth        (3) 0.0015      (3) 0.0012      (2) fn(b)      (1) 0.027      (2) 0.5
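The defining property of an L18 array is balance: each level appears equally often in every column, and every pair of columns contains each level combination equally often. The level codes below are transcribed from Table 2, and the check sketches how that balance can be verified:

```python
from itertools import combinations
from collections import Counter

# Level codes (1, 2, 3) for factors A-F, transcribed from Table 2.
L18 = [
    (1, 1, 1, 1, 1, 1), (1, 1, 2, 2, 2, 2), (1, 1, 3, 3, 3, 3),
    (1, 2, 1, 1, 2, 2), (1, 2, 2, 2, 3, 3), (1, 2, 3, 3, 1, 1),
    (1, 3, 1, 2, 1, 3), (1, 3, 2, 3, 2, 1), (1, 3, 3, 1, 3, 2),
    (2, 1, 1, 3, 3, 2), (2, 1, 2, 1, 1, 3), (2, 1, 3, 2, 2, 1),
    (2, 2, 1, 2, 3, 1), (2, 2, 2, 3, 1, 2), (2, 2, 3, 1, 2, 3),
    (2, 3, 1, 3, 2, 3), (2, 3, 2, 1, 3, 1), (2, 3, 3, 2, 1, 2),
]

# Single-column balance: each level appears equally often.
for col in range(6):
    counts = Counter(row[col] for row in L18)
    assert len(set(counts.values())) == 1

# Strength-2 balance: every pair of columns sees each level
# combination equally often (the defining property of an OA).
for c1, c2 in combinations(range(6), 2):
    pairs = Counter((row[c1], row[c2]) for row in L18)
    assert len(set(pairs.values())) == 1

print("L18 is balanced and pairwise orthogonal")
```

This balance is what lets factor effects be estimated by simple level averages later in the case.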
Figure 6: Data plots for hitching ideal function: (a) experiment 6; (b) experiment 5 (response vs. signal, 40-70 mph)
8. Factor Effects
Data from the L18 experiment were analyzed using the rdExpert software developed by Phadke Associates, Inc. The control factor orthogonal array is given in the appendix to the case. The signal-to-noise (SN) ratio for each factor level is shown in Figure 7. From the analysis shown in the figure, the most important factors are A, D, and F.
1. Factor A is the number of teeth in the flywheel associated with rpm calculations. The ...
Table 3: SN ratios

        Hitching SN               Tracking SN
No.     Noises 1-4  Noises 5-6    Noises 1-4  Noises 5-6
1        2.035       9.821         3.631       2.996
2       11.078       4.569        11.091       7.800
3        4.188       4.126         9.332       9.701
4       15.077       7.766        11.545       8.256
5       11.799       3.908        12.429       9.233
6        1.793       3.001         3.415       1.390
7        9.798       4.484        11.841       9.793
8        5.309       6.212         4.392       2.053
9        8.987       8.640         9.618       9.324
10      13.763      12.885        10.569      10.267
11      18.550      18.680        12.036      14.106
12       2.538      15.337         1.826       3.128
13       0.929      16.065         1.492       1.200
14       9.022       9.501         8.688       7.856
15      18.171      18.008        11.260      14.804
16      11.734      11.823        12.031      11.852
17      18.394      16.338         7.000       1.881
18      13.774      17.485         9.943       9.142
Average  9.631      10.480         8.452       6.083
Figure 7: Factor effects for ideal function 1, hitching (SN ratio, dB). F values and percent sum of squares: A: 5.9 (13.6%); B: 0.8 (3.9%); C: 2.3 (10.7%); D: 4.6 (21.3%); E: 0.4 (2.0%); F: 7.5 (34.6%)
Table 4: Results of confirmatory experiment (SN ratio, dB)

                        Ideal Function 1: Hitching     Ideal Function 2: Tracking
                        Noises 1-4   Noises 5-6        Noises 1-4   Noises 5-6
Best      Observed      18.44        19.01             11.80        15.39
          Predicted     21.25        17.85             12.31        12.37
Worst     Observed       0.04         6.45              4.04         1.56
          Predicted      2.26         3.28              2.74         2.89
Baseline  Observed      14.88         9.56             12.86        10.31
          Predicted      8.08         5.66             11.55        10.8
9. Further Improvements
The factor effect plots of Figures 7 and 9 indicate that improvements beyond the confirmation experiment can be achieved by exploring beyond level A2 for factor A, below level D1 for factor D, and beyond level F3 for factor F. These extrapolations were subsequently tested and validated.
10. Conclusions
The team now knew how to eliminate hitching completely. Many members of this team had been working ...
Figure 8: Plot of ideal function 1 (hitching) with best factor combination (response vs. signal, 40-70 mph)
Figure 9: Factor effects for the tracking ideal function (SN ratio, dB). F values and percent sum of squares: A: 4.2 (1.7%); B: 0.1 (0.0%); C: 0.2 (0.2%); D: 1.6 (1.3%); E: 0.1 (0.1%); F: 118.3 (94.4%)

... this manuscript. The data for this case were analyzed using the rdExpert software developed by Phadke Associates, Inc. rdExpert is a trademark of Phadke Associates, Inc.
References
M. S. Phadke, 1989. Quality Engineering Using Robust Design. Prentice Hall, Englewood Cliffs, NJ.
CASE 82
1. Introduction
ITT Industries Aerospace/Communications Division (IIN/ACD), located in Clifton, New Jersey, is a leader in the design and manufacture of secure communications systems for military and government applications. The video codec investigated in this study is currently in use for the HMT and SUO programs.

Source Coding Algorithm
The H.263 video codec is the ITU standard for very low bit rate video compression, which makes it a prime candidate for use in a band-limited military environment. Currently, it is used primarily for videoconferencing over analog telephone lines. Tests performed by the MPEG-4 committee in February 1996 determined that H.263 video quality outperformed all other algorithms optimized for very low bit rates.
The basic form of the H.263 encoder is shown in Figure 1. The video input to the codec consists of a series of noninterlaced video frames occurring approximately 30 times per second. Images are encoded using a single luminance (intensity) and two chrominance (color difference) components. This ...
Figure 1: H.263 video codec system (video in, DCT, quantizer, inverse quantizer, inverse DCT, picture memory, video out)
previous frame. The best-fit prediction is found, usually as the result of an optimized search algorithm, and its location is transmitted as a motion vector that indicates its change from the target macroblock location. The macroblock is therefore encoded using two pieces of information: its motion vector and a prediction error.
Each macroblock undergoes a frequency-domain transform using a discrete cosine transform (DCT). The DCT is a two-dimensional transform that maps values from the spatial domain into the frequency domain. For interframe macroblocks, the prediction error is calculated and transformed, while for intraframe macroblocks the pixel values themselves are transformed. The DCT coefficients are then quantized, mapping them into discrete steps. The step size, or Q-factor, remains constant throughout the video transmission and can range from 1 to 31. Because quantization introduces loss, higher Q values introduce more loss than lower Q values. Finally, the quantized coefficients are entropy encoded and transmitted to the decoder along with the motion vectors and other relevant formatting information.
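The transform-and-quantize step can be illustrated with a small self-contained sketch. It uses an orthonormal DCT-II and a plain uniform quantizer; the 8 × 8 block and the two step sizes are arbitrary examples, not H.263's actual tables:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row f, column m = c_f cos(pi(2m+1)f/2n)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2(block):
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T        # 2-D forward transform

def idct2(coeffs):
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C       # 2-D inverse transform

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)

for q in (2, 16):  # step size ("Q-factor"): a larger step means more loss
    quantized = np.round(dct2(block) / q)    # quantize the coefficients
    recon = idct2(quantized * q)             # dequantize + inverse DCT
    print(q, round(float(np.abs(block - recon).mean()), 2))
```

Running this shows the mean reconstruction error growing with the step size, which is the loss/bit-rate trade-off the Q-related control factors in this study explore.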
The main purpose of the picture memory is to provide a reference image for the construction of P-frames. It accomplishes this by decoding the residual ...
Figure 2: Parameter diagram (noise factors: spatial detail, foreground/background movement, lighting conditions; signal: digitized video signal; control factors: unrestricted motion vectors, syntax arithmetic coding, advanced prediction mode, PB frames, double-precision DCT arithmetic, integer pel search window, I and P frame quantization levels, PB quantization level, frame skip rate, I/P frame repeat ratio; response: reconstructed digitized video signal)
2. Objectives
The objective of this experiment is to determine the sensitivity of the H.263 video codec parameters with respect to overall video quality and to identify the parameters that will best increase video quality without sacrificing the compressed video bit rate.

Parameter Diagram
To facilitate the robust design process, a parameter diagram of the system was generated; it is shown in Figure 2. The system noise and control parameters are as follows.

Noise Factors
Spatial detail: the amount of spatial detail in the image
Foreground motion: the amount of motion present in the primary or foreground object in the video scene
Background motion: the amount of motion present in the secondary or background objects in the video scene
Lighting conditions: the quality and direction of lighting in the video scene
I/P ratio: this parameter controls the number of P-frames transmitted between successive I-frames. It can range from 5 through 40.

Signal Factor
Digitized video image: the signal factor is the pixel values for the luminance and two chrominance components of the video signal. The signal is measured on a frame-by-frame basis, and a composite SNR is calculated from these measurements.

Response Variable
Reconstructed video image: the response variable is the pixel values for the luminance and two chrominance components of the reconstructed video signal.
Ideal Function
The effects of the control parameters on video performance were studied using an ideal function that measured the intensities of the reference video image pixels versus the uncompressed, reconstructed H.263 video image pixels for each of the video components (i.e., luminance and the two chrominance components; Figure 3). This ideal function was developed under this project and measures overall video quality in an objective manner. Previously developed image quality metrics were based largely on the subjective techniques of a juried evaluation system. The ideal function has many advantages over a perceptive evaluation system. First, the measurement is repeatable: given an image, the ideal function will always return the same measurement for image quality, whereas under a juried system the quality metric can vary considerably, depending on the makeup and composition of the jury. Second, the ideal function takes less time to score than the juried system. Typically, a video sequence of over 1000 frames took only 10 minutes of postprocessing analysis to calculate the quality metric. Third, the ideal function examines and scores each image in the sequence independent of the preceding image. This is an important distinction from a juried system, in which the human jurors' vision can integrate variations from frame to frame and miss some of the losses in image quality.
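A minimal sketch of such an objective, repeatable per-frame score, assuming the dynamic SN form η = 10 log(β²/σ²) used elsewhere in this handbook; the frames below are synthetic stand-ins for a reference image and its reconstruction:

```python
import numpy as np

def frame_score(reference, reconstructed):
    """Slope and SN ratio of reconstructed vs. reference pixel intensities,
    measured about the ideal line of slope 1 through the origin."""
    ref = reference.astype(float).ravel()
    rec = reconstructed.astype(float).ravel()
    beta = np.sum(ref * rec) / np.sum(ref**2)    # least-squares slope
    sigma2 = np.mean((rec - beta * ref) ** 2)    # scatter about the line
    return beta, 10.0 * np.log10(beta**2 / max(sigma2, 1e-12))

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(16, 16))
noisy = np.clip(ref + rng.normal(0, 8, ref.shape), 0, 255)

# Repeatable: the same frame pair always yields the same score,
# unlike a juried evaluation.
assert frame_score(ref, noisy) == frame_score(ref, noisy)
beta, snr = frame_score(ref, noisy)
print(round(beta, 3), round(snr, 1))
```

Averaging such per-frame scores over a sequence gives an unjuried quality metric of the kind this project used.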
Figure 3: Image quality ideal function (reconstructed image pixel intensity vs. reference image pixel intensity; slope = 1)
Table 1: Noise factors and levels

Noise Factor         Level 1    Level 2
Spatial detail       Low        High
Lighting variation   Low        Medium
Movement             Low        High
3. Experimental Layout
Through a series of brainstorming sessions, noise factors and control factors were identified and levels assigned to these parameters, as described below.

Noise Factor Levels
Levels for the noise factors are shown in Table 1. Since there were a limited number of noise factors, a full factorial noise source was created, which used two video sequences. The first video sequence, called claire, is a talking-head type of video with a low amount of both foreground and background movement and spatial detail. The lighting condition for this sequence is low and does not vary. The second sequence, called car phone, contains a high amount of spatial detail and movement.

Control Factor Levels
Table 2 indicates the control factors, their mnemonics, and the levels used in this experiment.
Table 2: Control factors and levels

A: Unrestricted motion vectors (UVecFlag): 0 (off), 1 (on)
B: Syntax arithmetic coding (SCodFlag): 0 (off), 1 (on)
C: Advanced prediction (AFlag): 0 (off), 1 (on)
D: PB frames (PBFlag): 0 (off), 1 (on)
E: Double-precision DCT arithmetic (DCTFlag): 0 (off), 1 (on)
F: Integer pel search window (IPelWin): 15, ...
G: I-frame quantization (IQP): 10, ..., 30
H: P-frame quantization (PQP): 10, ..., 30
I: Frame skip rate (FSkip): ...
J: PB quantization (DBQuant): ...
K: I/P ratio (IPRatio): 4/1, 19/1, 39/1
Experimental Design
Based on the identified control factors, an L36 orthogonal array was chosen as the test configuration for the Taguchi experiments. Since there are five two-level factors and six three-level factors, an L36 (2^11 × 3^12) array was chosen to hold the parameters and was modified as follows. The first five columns in the standard L36 array were used to house the two-level control factors, and columns 12 through 17 were used to house the three-level control factors. Columns 6 through 11 and 18 through 23 were not used in the experimental design. All subsequent calculations were modified as appropriate to take into account the missing columns in the experiment design.
Each set of L36 Taguchi experiments was repeated six times, once for each of the three video components (Y, U, V) and once for each of the two video scenes (car phone and claire), for a grand total of 216 experiments. In each experiment, the pixel intensity levels and the compression bit rates were measured and recorded. For each set of experiments, a data reduction and a slope and SN ratio analysis were performed. Finally, a composite SN ratio analysis was performed for overall video quality and for the bit rate.
The final orthogonal array with the control factor levels is shown in Table 3.
Table 3: Control factors L36 orthogonal array
Table 4: Y, U, V, composite SN ratio, and BPS results (SNR, dB)

No.    Y      U      V      Composite   BPS
1     17.4    6.4    3.8    15.7        55.0
2     18.9   11.1    9.5    17.4        44.7
3     21.0   14.8   12.4    19.7        38.3
4     17.3    6.4    3.9    15.7        54.1
5     17.6   10.4    9.4    16.2        43.4
6     21.6   14.9   12.2    20.2        41.9
7     19.9   10.4    8.4    18.3        46.1
8     19.5   12.5   10.5    18.0        41.1
9     19.2   10.2    7.5    17.6        52.8
10    19.5   10.7   10.0    17.9        41.9
11    19.6    8.7    6.4    18.0        51.9
12    18.6   11.8    9.7    17.2        45.2
13    19.8   12.1   10.2    18.3        42.2
14    16.5    5.3    4.6    14.8        51.4
15    19.7    8.8    6.5    18.1        48.9
16    19.8   11.5    9.6    18.3        44.8
17    19.6    8.6    6.0    18.0        51.4
18    17.8    9.7    8.1    16.3        45.6
19    18.5    6.6    5.2    16.8        51.0
20    19.1   12.5   10.3    17.7        47.1
21    19.9   11.0    9.0    18.3        44.7
22    20.0   11.3    9.3    18.4        46.0
23    20.6   14.8   12.1    19.2        41.3
24    16.3    4.2    4.0    14.7        51.6
25    18.2   11.4    9.7    16.8        44.9
26    19.5    8.8    6.2    17.9        50.0
27    19.5    8.0    5.5    17.8        51.7
28    19.8   12.6   10.3    18.4        45.9
29    20.7   11.3    9.0    19.1        43.5
30    16.9    6.2    4.2    15.2        53.0
31    21.1   14.8   12.3    19.7        38.1
32    17.4    6.4    3.8    15.7        54.3
33    18.9   11.1    9.5    17.4        44.3
34    18.3    7.7    5.4    16.6        52.4
35    19.1    9.9    8.6    17.5        43.9
36    19.0   11.2    9.2    17.5        45.9
Avg.  19.1   10.1    8.1    17.5        47.0
Min.  21.6   14.9   12.4    14.7        55.0
Max.  16.3    4.2    3.8    20.2        38.1
Range  5.3   10.6    8.5     5.5        16.9
The composite bit-rate SN ratio was computed as

η_BPS = 10 log(1/β²comp)

where η_BPS is the composite bit rate SN ratio and βcomp is the compressed video stream bit rate (bits/s).

Experimental Results
The results of the Taguchi experiments are shown in Table 4, which lists the results of the SN calculations for the Y, U, V, and composite video quality metrics and the compressed video stream bit rate (BPS).
Factor Effects Plots
Based on the results of the Taguchi experiments and data reduction efforts, factor effects plots were generated for the composite image quality and BPS SN ratios. These results are shown in Figures 4 and 5.
Figure 4: Composite image quality factor effects (SN ratio, dB)
Figure 5: BPS factor effects (SN ratio, dB)
The data also suggest that larger values of the I/P ratio are more desirable. It should be noted that this is the only win-win factor (i.e., larger values of the I/P ratio reduce BPS and at the same time increase image quality). Larger values of the I/P ratio should be investigated to confirm that this trend is accurate. However, I believe that there comes a point of diminishing returns, especially in noisy environments. Transmission errors may have a major interaction with the I/P ratio; therefore, this factor should be studied when a simulation with transmission errors becomes available.
Table 5: ANOVA results

Factor      Composite SN Ratio    BPS SN Ratio
            (% sum of squares)    (% sum of squares)
UVecFlag     0.0                   0.3
SCodFlag     0.3                   0.0
AFlag        0.0                   0.2
PBFlag       1.1                   0.1
DCTFlag      0.0                   0.2
IPelWin      0.1                   0.0
IQP          7.3                   4.7
PQP         50.3                  78.8
FSkip       26.0                   1.9
DBQuant      0.0                   0.0
IPRatio      8.5                   8.9
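The percent-sum-of-squares entries in Table 5 come from a standard decomposition: each factor's between-level sum of squares divided by the total corrected sum of squares. A toy illustration of the arithmetic (the runs and SN values below are invented, not the study's data):

```python
import numpy as np

# Two 2-level factors over 8 runs, with a made-up SN ratio per run.
levels = np.array([
    (1, 1), (1, 1), (1, 2), (1, 2),
    (2, 1), (2, 1), (2, 2), (2, 2),
])
sn = np.array([15.7, 17.4, 19.7, 15.7, 16.2, 20.2, 18.3, 18.0])

grand = sn.mean()
total_ss = np.sum((sn - grand) ** 2)   # total corrected sum of squares

pct = {}
for name, col in (("A", 0), ("B", 1)):
    # Between-level SS: n_level * (level mean - grand mean)^2, summed.
    ss = sum(
        np.sum(levels[:, col] == lvl) * (sn[levels[:, col] == lvl].mean() - grand) ** 2
        for lvl in np.unique(levels[:, col])
    )
    pct[name] = 100.0 * ss / total_ss
print({k: round(v, 1) for k, v in pct.items()})
```

The same decomposition applied to the L36 results is what singles out PQP, FSkip, IQP, and IPRatio as the dominant factors in Table 5.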
7. Discussion
The factor effects plot for BPS is shown in Figure 6. This plot shows factor effect sensitivity behavior similar to that exhibited by the L36 experiment. The predictions for the image quality and BPS SN ratios were confirmed by the actual experimental results. The experimental results confirm that users can expect a 2.5-dB improvement in image quality when using the optimum factor settings. The image quality and BPS SNR forecasts correlate well with the experimental results in all cases.
Finally, the difference in image quality can readily be seen in Figure 7. This figure shows one image
Table 6: Confirmation experiment orthogonal array (factors A: IQP, B: PQP, C: FSkip, D: IPRatio)
Table 7: Confirmation experiment results summary

          Composite SNR (dB)      BPS SNR (dB)
No.       Measured   Forecast     Measured   Forecast
1         15.7       15.7         55.9       55.3
2         17.2       17.0         46.6       46.2
3         18.9       18.7         41.0       40.8
4         14.7       15.4         50.1       50.2
5         18.5       18.6         47.0       45.2
6         18.3       18.3         43.4       42.0
7         17.9       18.1         52.9       49.6
8         16.8       16.8         45.1       42.5
9         19.8       19.0         43.7       43.0
Optimum   14.5       14.9         53.0       52.8
Baseline  17.6       17.3         44.8       44.6
Figure 6: BPS factor effects (SN ratio, dB)
Figure 7: Verification experiment images
from the original video sequence and the reconstructed images from two of the validation experiments (i.e., the baseline and the optimal experiments). The baseline image suffers a considerable quality loss, primarily in loss of facial detail, which can readily be seen by comparing the facial details in the baseline image with those in the original image. In the optimal image sequence, the facial ...
8. Conclusions
In conclusion, the objective video scoring methods developed under this project, in conjunction with robust design techniques and Taguchi methods, have been shown to be an unbiased, cost-effective approach to parameter design for the H.263 video codec. Using these techniques, an improvement of 2.3 dB over the baseline approach was observed in terms of the image quality metric derived. The analysis in this paper demonstrates the importance of applying robust design techniques early in the design process.
Additionally, the image quality SN ratio metric developed in this project is independent of any compression technology and can be applied to a wide range of still and motion video codecs. Through the use of an objective quality metric, the time and costs involved with evaluating and quantifying video codecs have been reduced significantly.

Acknowledgments
The author expresses his sincere thanks to Sessan Pejan of Sarnoff Research Labs, Princeton, New Jersey, for his help and expertise in video processing.
Reference
ITU-T, 1996. Video Coding for Low Bit Rate Communication, ITU-T Recommendation H.263.
CASE 83
1. Introduction
ITT Industries A/CD is a world leader in the design
and manufacture of tactical wireless communications systems. ITT was awarded a major design contract for a very high frequency (VHF) radio. This
radio requires a complex controller. As a result of a
trade study, the CMX real-time operating system
(RTOS) was selected as the scheduler for the radio
controller. The CMX RTOS provides many real-time
control structure options. Selecting these structures
based on objective criteria early in the design phase
is critical to the success of this real-time embedded
application.
2. Background
Although the RTOS is a commercial off-the-shelf (COTS) product, there are still many features and options to select to achieve an optimal result. The features and options selected are highly dependent on the system architecture. For this reason, a parameter ...
Figure 1: Event processing in the controller architecture (A: speed, 12 or 30 MHz; B: messaging, mailbox or queue; C: signaling, event or mailbox; D: resource lock, resource or semaphore; an ISR receives events, commands, and traffic, which are queued to the radio control task, and the lock mechanism prevents access to hardware by multiple operations)
As part of the design phase, functional flows were constructed for every radio control and traffic operation. These functional flows focused on how the architecture processed commands. The ultimate test of the architecture is to handle each functional flow efficiently. Figure 2 shows an example of the volume functional flow.
There is a direct correlation between the functional flow and the architecture. The ISR can be seen as the input mechanism that starts a thread. The queue is represented by the queue column on the functional flow; it queues up requests going to radio control. The signaling mechanism in the architecture is shown in the queue request. Volume is a benign event that is performed as a background process by radio control. The resource lock is shown in the radio control column of the thread; it indicates where the hardware must be locked to prevent hardware contention. If contention were allowed to occur, the radio would lock up, thus failing catastrophically. Each of the primary communications mechanisms is identified clearly in the flow: messaging, signaling, and resource lock. The architecture shown in Figure 1 implements these mechanisms.
Since events are constantly being passed to radio control at a variable rate, they must be queued so that they are not lost. The RTOS provides two mechanisms for passing these events to radio control: mailboxes and queues. The various processing tasks must communicate among themselves to send terminate messages or wait for traffic to complete. Interprocess communication can be achieved using mailboxes or events. Since each of the processing tasks must have exclusive access to hardware output registers, a lock must be implemented so that there is never simultaneous access to the output registers by more than one task. This can be accomplished by using semaphores or by defining a resource.
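The event path just described — ISR posts to a queue, a control task drains it, and a lock guards the shared hardware — can be sketched in a host-side simulation. This is a hypothetical illustration of the pattern, not CMX RTOS code; the event IDs, sentinel, and "register" list are stand-ins:

```python
import queue
import threading

events = queue.Queue()        # FIFO event queue (cf. the RTOS queues)
hw_lock = threading.Lock()    # resource lock for the output register
register_writes = []

def isr(event_id):
    """ISR-side: events arrive at a variable rate and are queued, not lost."""
    events.put(event_id)

def radio_control():
    """Task-side: drain the queue; take the lock before touching hardware."""
    while True:
        ev = events.get()
        if ev is None:        # sentinel playing the role of a terminate signal
            break
        with hw_lock:         # never two tasks on the hardware at once
            register_writes.append(ev)

task = threading.Thread(target=radio_control)
task.start()
for i in range(100):
    isr(i)
isr(None)
task.join()
print(len(register_writes))  # all 100 events processed, in arrival order
```

With a single consumer and a FIFO queue, no event is dropped and hardware accesses are serialized, which is precisely the contention failure mode the resource lock is meant to prevent.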
3. Objectives
The objective of this experiment was to determine the parameter settings that give maximum RTOS performance for the embedded application. The parameter diagram is given in Figure 3. The noise factors are as follows:
Interrupt mix: the mixture of different types of interrupts entering the system
Interrupt rate: the rate at which interrupts enter the system
The control factors are as follows:
Processor speed: the speed at which the processor is set. The radio design is trading off performance at two speeds, 12.8 and 25.6 MHz (12- and 30-MHz oscillators are available and will be used for the parameter design).
Messaging: the system architecture is based on the passing of messages between tasks. Requests enter the system via interrupts, which are transformed into message requests that are acted upon by the system. The two types of messaging schemes being investigated are mailboxes and queues. Mailboxes allow a task to wait for a message, returning a pointer to the message passed. Any task can fill a mailbox through the use of a mailbox ID. Mailboxes are implemented as first in, first out and can have multiple ownership. Queues are circular first in, first out and last in, first out buffers with up to 32k slots. Slot sizes remain constant and are fixed at compile time. There are many manipulation functions for queues.
Signaling: there are several signals that must be implemented in the radio controller. An example is for an event to send a terminate signal to the currently running traffic thread. The two methods for efficiently sending messages to tasks are events and mailboxes. Events use a 16-bit mask to interrupt a task or to allow a task to wait for a specified event or condition to be present. It should be noted that this limits the design to 16 signals per task. Mailboxes can be used to allow a task to wait for and then check the message when it is received, which allows more information to be passed.
Resource locking: a key problem with previous controllers was the inability to block competing operations from seizing a resource. This is a particular problem between the controller and the digital signal processors (DSPs). If a second command is sent to the DSPs before the current one is acted upon, the system can, and often will, lock up. By using a resource-locking mechanism, this detrimental behavior ...
Figure 2: Volume functional flow (columns: external events, HCI, database, display, queue, lockout, and radio control (PP, WP); a volume/whisper request is range-checked against the database, queued as a benign request, and commanded by radio control, with acknowledgments and display updates along the way)
Figure 3: RTOS parameters (noise factors: interrupt rate, interrupt mix; signal: input events (data arrival); control factors: processor speed, messaging, signaling, resource locking; output: task control, intertask messaging)
4. Experimental Layout
Test conditions are as follows:
Average case:
Table 1
Control factors and levels

Control Factor    Level 1    Level 2
Speed (MHz)       12         30
Messaging         Mailbox    Queue
Signaling         Event      Mailbox
Resource lock     Resource   Semaphore

The response analyzed is (t_end - t_ISR)^2.
Table 2
L8 design experiment

No.   Speed   Messaging   Signaling   Resource Lock
1     12      Mailbox     Event       Resource
2     12      Mailbox     Semaphore   Semaphore
3     12      Queue       Event       Semaphore
4     12      Queue       Semaphore   Resource
5     30      Mailbox     Event       Semaphore
6     30      Mailbox     Semaphore   Resource
7     30      Queue       Event       Resource
8     30      Queue       Semaphore   Semaphore
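The structure of such a two-level orthogonal array can be illustrated with a short sketch (the standard XOR construction of an L8; the column assignment shown is a common convention and is not claimed to reproduce the exact run order of Table 2):

```python
from itertools import product

def l8():
    """Build the 8 runs of an L8 from three independent binary columns
    (a, b, c) and their XOR interaction columns; every pair of columns
    is balanced, which is what makes the array orthogonal."""
    rows = []
    for a, b, c in product((0, 1), repeat=3):
        rows.append((a, b, a ^ b, c, a ^ c, b ^ c, a ^ b ^ c))
    return rows

# Factor names and levels follow Table 1.
speeds = ("12", "30")
messaging = ("Mailbox", "Queue")
signaling = ("Event", "Mailbox")
locks = ("Resource", "Semaphore")

for row in l8():
    # assume speed on column 1, messaging on column 2, signaling on
    # column 4, and resource lock on column 7 (a common L8 assignment)
    print(speeds[row[0]], messaging[row[1]], signaling[row[3]], locks[row[6]])
```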
Table 3
SN ratio for t_end - t_ISR

No.   SN 7    SN 30
1     27.7    27.7
2     27.7    27.7
3     32.0    32.3
4     32.0    32.6
5     21.7    21.8
6     21.8    21.8
7     28.7    28.4
8     28.5    28.3
Figure 4
Factor effects for t_end - t_ISR: response graphs of the SN ratio (24.0 to 31.0 dB) at factor levels A1/A2, B1/B2, C1/C2, and D1/D2; panel (a) shows the factor effects at 7 events per second.
Table 4
Design confirmation using factors A and B

                     7 Events per Second   30 Events per Second   Delta
Prediction           21.7                  32.0                   10.3
Confirmation check   21.8                  32.5                   10.7
7. Performance Analysis/System
Stress Test
the heart of the reason that products become obsolete and require major upgrades prematurely.
Margin is the ability of a system to accept change.
Far too often, margin is measured as a static number
early in the design phase of a project. Tolerances
are projected and the margin is set. This answer
does not take into account the long-term changes
that a system is likely to see toward the end of its
life or the enhanced performance required to penetrate new markets.
Additional information needs to be gathered. How will the system perform when stressed to its limits? Where does the system fail? Figure 5 shows how this system performs past the expected limit of 30 events per second. The system was set to use mailboxes, events, and resources and was run at 12 and 30 MHz. The message rate was increased steadily, and the root sum square (RSS) of the event time was computed and plotted versus the message rate. The figure clearly shows that 12 MHz will lead to a much shorter product life, since the performance cannot be increased significantly and the failure curve is steep. However, 30 MHz yields a system that can almost triple its target event rate and still maintain acceptable performance. The shallow ascent of the event-processing-time RSS indicates that the system will behave more robustly: the 30-MHz system will degrade gracefully rather than catastrophically.
Initially, it was expected that the two graphs would have an identical shape displaced by the 2.5-times speed increase. This is not the case because the real-time interrupts controlling RTOS scheduling operate at a constant rate of 1 millisecond. Since the timer rate is constant, it has a larger effect on the slower clock rate than on the faster clock. This
Figure 5
Performance versus event rate: event-time RSS (0 to 150) versus tasks per second (0 to 160) for the 12-MHz and 30-MHz systems.
L(t) = (A0 / D0^2) t^2 = 0.18t^2
where A0 is the cost of replacing the controller module and D0 is the response time that 50% of users would find unacceptable. In this case the quality loss function represents the cost of the quality loss as the response time grows: the user becomes increasingly dissatisfied with the product as the response time increases. Table 5 shows the quality loss for the current system. The message traffic and re-
Figure 6
Quality loss function: quality loss (QLF, $0 to $1000) versus response time (0 to 80).
Table 5
Current system

          Events per Second
          7 (50%)    10 (40%)    20 (10%)    Quality Loss
12 MHz    24         24          24          $103.68
30 MHz    12         12          12          $25.92
sulting loss expected were computed. Table 6 provides the same information for a future system with anticipated growth in processing functions. The quality loss is computed by summing the quality loss at each event rate times the percentage of time at that rate. As an example, the quality loss of the future system at 12 MHz is

$241.29 = (0.5)[(0.18)(24^2)] + (0.4)[(0.18)(35^2)] + (0.1)[(0.18)(75^2)]

These data provide another view of the cost-benefit trade-off, as the difference in quality loss between 12 and 30 MHz:

Current system: $77.76
Future system: $215.37
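The loss figures above can be reproduced with a short sketch (assuming the quadratic loss L(t) = 0.18t^2 and the usage profiles of Tables 5 and 6):

```python
# Weighted quadratic quality loss: each (fraction_of_time, response_time)
# pair contributes fraction * k * t**2, with k = 0.18 from the loss function.

def quality_loss(profile, k=0.18):
    """profile: list of (fraction_of_time, response_time_ms) pairs."""
    return sum(frac * k * t ** 2 for frac, t in profile)

current_12mhz = [(0.5, 24), (0.4, 24), (0.1, 24)]   # Table 5
future_12mhz = [(0.5, 24), (0.4, 35), (0.1, 75)]    # Table 6
future_30mhz = [(0.5, 12), (0.4, 12), (0.1, 12)]

print(round(quality_loss(current_12mhz), 2))  # 103.68
print(round(quality_loss(future_12mhz), 2))   # 241.29
# cost/benefit of the faster clock: difference in loss
print(round(quality_loss(future_12mhz) - quality_loss(future_30mhz), 2))  # 215.37
```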
Table 6
Future system

          Events per Second
          20 (50%)    40 (40%)    50 (10%)    Quality Loss
12 MHz    24          35          75          $241.29
30 MHz    12          12          12          $25.92
9. Conclusions
In conclusion, applying robust design techniques and Taguchi methods to the selection of RTOS features for the controller architecture has yielded an optimal selection of key communication structures. These techniques allowed an optimal design using objective methods for parameter selection. In addition to gaining valuable insight into the internal operation of the RTOS, several key observations were made and problems were discovered early in the design cycle. This study demonstrates that robust design techniques and Taguchi methods can be applied successfully to software engineering to yield optimal results and replace intuition and guesswork. The technique can be applied to all types of operating environments: although this application used an embedded RTOS, the approach has also been applied successfully to the UNIX operating system.
Taguchi's techniques have not only allowed selection of optimum performance parameters for the VHF radio but have also provided a tool for expanding the functionality of this radio and future product lines. The performance analysis and the accumulation analysis provided valuable insight into the robustness of future enhancements. Looking at the quality loss function, a designer can objectively predict how the system will behave under increased workloads. The accumulation analysis shows that the parameters selected will behave optimally even at the system failure point. This gives added confidence that the correct parameters have been chosen, not only for this immediate project but also for future projects.
Additionally, it was learned that the RTOS exhibits catastrophic failures in two ways when the queues become full. One type of failure is when requests continue to be queued without the system performing any work; this is observed as idle time increasing while tasks are only queued, not executed. The other type of failure is when the queue fills and wraps around, causing requests to become corrupted. By performing these experiments during the design phase, such failures can be avoided long before code and integration.
Finally, it was observed that when events have properties that couple them, as when one event waits on the result of another event to complete, the dominance of processor speed diminishes at the slower event rates. This confirms that the system would respond well to processor-speed throttling at low event rates.
Using these optimization techniques resulted in a 10.7-dB improvement in system response time with the optimal parameter set. Using the optimal parameters reduces the message latency from 40.6 ms to 12.1 ms, a decrease of 28.5 ms. The performance analysis and the quality loss function showed that there are significant cost and performance benefits in choosing 30 MHz over 12 MHz for both the current system and future systems. The quality loss for the current and future systems is $78 and $215, respectively; since the added cost of changing to a 30-MHz processor is minor compared to the quality loss, it is a highly desirable modification. Finally, getting the equivalent power of a 30-MHz processor, with speed throttling during idle, will require only a 1% power increase with the same board area.
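The relationship between the reported dB gain and the latency reduction can be checked with a short sketch (assuming the smaller-the-better form, in which the SN ratio is based on squared response time):

```python
import math

# For a smaller-the-better characteristic based on t^2, an SN-ratio gain is
# gain = 10*log10(t_before^2 / t_after^2) = 20*log10(t_before / t_after).

def gain_db(t_before_ms: float, t_after_ms: float) -> float:
    return 10 * math.log10(t_before_ms ** 2 / t_after_ms ** 2)

print(round(gain_db(40.6, 12.1), 1))  # ~10.5 dB, consistent with the reported 10.7-dB gain
```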
This case study is contributed by Howard S.
Forstrom.
CASE 84
Abstract: In assuring software quality, there are few evaluation methods with established grounds; previously, we have relied on a trial-and-error process. One reason is that, unlike hardware, which machines produce, software development involves many human beings, who are regarded as having a relatively large number of uncertain factors. Therefore, the Sensuous Value Measurement Committee was founded in the Sensory Test Study Group of the Japanese Union of Scientists and Engineers, which has been dealing with the application of quality engineering to these issues. This research, as part of that work, focuses on applying quality engineering to a production line that includes human beings with many uncertain factors.
1. Introduction
Because of the diversity of software production lines, it is quite difficult to perform all evaluations at the same time. Figure 1 summarizes the flow of a software production line. We focused on software production that can be evaluated relatively easily. Figure 2 shows the details of software production. By applying the standard SN ratio, we can obtain a good result [4]. On the basis of that experiment, and delving more deeply into this issue, we attempted to evaluate the relationship between capability in programming (coding) and errors.
ST = y1^2 + y2^2 + ... + yn^2    (1)

p = (y1 + y2 + ... + yn)/n    (2)

According to the variation in equation (1), the effect of the signal factor results in
Figure 1
Work flow of software production (product planning: users' demand, with an evaluation method of the target; product design: the software company's design target; process planning: the software company's design specs, with an evaluation scale of the specs; production engineering: the software production process, with an evaluation scale of the process; inspection: software production and product shipment, with an evaluation scale of the product and an inspection method)
Sp = [(y1 + y2 + ... + yn)/n]^2 / (total of squares of coefficients)
   = [(y1 + y2 + ... + yn)/n]^2 / [(1/n)^2 + (1/n)^2 + ... + (1/n)^2]
   = (y1 + y2 + ... + yn)^2 / n = np^2 = Sm    (3), (4)

With Se = np(1 - p),

eta = 10 log (Sp/Se) = 10 log [np^2 / np(1 - p)] = 10 log [1 / (1/p - 1)]  dB    (5)
Figure 2
Details of software production (programming: specs, coding, debugging, program)
Table 1
Error types and levels

Factor            Level 1   Level 2
Logical error
  A: Mistake      Use       No use
  B: Unknown      Use       No use
Syntax error
  C: Mistake      Use       No use
  D: Unknown      Use       No use

The necessary data are A, B, and C; D is not used for the analysis of row 2 because its number of levels is two. The sum of errors for experiment 1 in the L8 orthogonal array is

A + B + C = 2 + 15 + 3 = 20

Therefore, the number of lines where correct code is written is

(total number of lines - number of errors for D) - number of errors = (86 - 7) - 20 = 59

Then the proportion of correct code, p, is computed as

p = 59/(86 - 7) = 0.747    (6)

Total variation:

ST = np = (79)(0.747) = 59.0    (7)

Error variation:

Se = np(1 - p)    (8)

Thus,

Sp = np^2 = (79)(0.747^2) = 32.91    (9)

eta = 10 log (Sp/Se) = 10 log [1 / (1/p - 1)] = 4.70 dB    (10)
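The computation above can be sketched in Python (using the fraction-correct SN ratio of equation (5); the line and error counts are those of experiment 1):

```python
import math

# Omega-transform SN ratio for a fraction-correct p:
# eta = 10*log10(1/(1/p - 1)) dB.

def sn_ratio(p: float) -> float:
    return 10 * math.log10(1 / (1 / p - 1))

errors = 2 + 15 + 3              # errors A + B + C for experiment 1
usable_lines = 86 - 7            # total lines minus D-type errors
correct = usable_lines - errors  # 59 correctly coded lines
p = correct / usable_lines
print(round(p, 3))               # 0.747
print(round(sn_ratio(p), 2))     # 4.7
```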
4. Capability Evaluation of Software Production
With an L12 orthogonal array, we studied an evaluation method for the capability of software production. As factors regarding instructional method and evaluation, 10 types of factors were selected (Table 3). The C language was used, and the program consists of about 100 steps. As testees, six engineering students who had had 10 hours of training in the C language were chosen. We then tested these testees according to the levels allocated in the L12 orthogonal array.
Table 2
Error types and number of errors assigned to L8

15  27  15  15  15  20  7  9  2  22  15  10  3
Table 3
Factors and levels

Factor                             Level 1    Level 2
A: type of program                 Business   Mathematical
B: instructional method            Lecture    Software
C: logical technique               F.C.       PAD
D:
E: coding guideline                Yes        No
F: ability in Japanese language    High       Normal
G: ability for office work         High       Normal
H: number of functions             Small      Large
I: intellectual ability            High       Normal
J: mathematical ability            High       Normal
F: ability in Japanese language. The ability to understand Japanese words and relevant concepts and to utilize them effectively is judged.
G: ability for office work. The ability to comprehend words, documents, or bills accurately and in detail is judged.
H: number of functions. As long as the difficulty level of each program is not changed, there are two types of specifications, minimal and redundant, in which a testee writes a program.
I: intellectual ability. The ability to understand explanations, instructions, or principles is judged.
J: mathematical ability. The ability to solve mathematical problems is judged.
For the human abilities (F, G, I, and J), we examined the testees using the general job aptitude test issued by the Labor Ministry. After checking each testee's ability, we classified the testees into high- and low-ability groups.
5. Experimental Results
We computed interactions between the factors selected and the error types. More specifically, an L8 or-
Figure 3
Layout of direct-product experiment using an L12 orthogonal array to evaluate
the capability of software production and an L8 orthogonal array to evaluate
error types
Table 4
Number of errors obtained from experiment (L12 x L8)

Total number of lines, experiments 1-12: 86, 96, 266, 113, 146, 60, 104, 84, 300, 94, 103, 116
Table 5
ANOVA with respect to SN ratio based on capability of software production and error types (dB)

Main effects of production factors: A 22.9, B 41.8, C 178.5, D 31.1, E 77.2, F 106.8, G 17.5, H 929.8, I 33.7, J 22.9.
Main effects of error types: A 66.4, B 2408.7, C 6.5, D 236.7.
Interactions with error type A: AxA 11.0, BxA 1.5, CxA 90.5, DxA 2.4, ExA 85.2, FxA 141.3, GxA 47.5, HxA 123.1, IxA 50.1, JxA 27.5.
Interactions with error type B: AxB 29.6, BxB 44.4, CxB 23.5, DxB 1.5, ExB 48.1, FxB 21.1, GxB 147.1, HxB 77.0, IxB 0.6, JxB 39.3.
Interactions with error type C: AxC 12.8, BxC 0.0, CxC 109.5, DxC 70.3, ExC 79.7, FxC 218.4, GxC 146.7, HxC 11.4, IxC 21.4, JxC 79.2.
Interactions with error type D: AxD 7.6, BxD 8.1, CxD 0.1, DxD 9.1, ExD 0.1, FxD 0.9, GxD 9.7, HxD 14.7, IxD 32.2, JxD 17.8.
e: f = 41, S = 938.4; (e): f = 86, S = (2466.8); ST: f = 95, S = 6980.9.
Figure 4
Response graphs of the SN ratio for capability of software
production
errors and, subsequently, to a high SN ratio. In addition, logical technique (C) and ability in the Japanese language (F) have a large effect. Since factors other than these have small effects, good reproducibility cannot be expected. On the other hand, since we observed no significant effect for intellectual ability, even in the experiment on error-finding ability, this factor's effect in programming was considered trivial.
In the response graphs for error types shown in Figure 5, level 1 represents an error used for analysis and level 2 an error not used. Since the SN ratio when not using an error is high, all the errors in this study should be included in the analysis. The unknown error among the logical errors (B) turned out to be particularly important.
Furthermore, according to the analysis of variance in Figure 6, while some factors generate significant interactions with A, B, and C, interactions of D with other factors were trivial, but D per se was relatively large compared
Figure 5
Response graphs of the SN ratio for error type
Figure 6
Interaction between SN ratios for capability of software
production and SN ratios for error type
with other error types. By combining large-interaction errors into one error and making a different classification, we will be able to categorize errors into those with clear interactions and those without interactions in the next experiment.
It was observed that interactions occur more frequently for the factors ability in the Japanese language and ability for office work. Considering that a similar result was obtained in a confirmatory experiment on error finding [3], we supposed that the reason was a problem relating to the ability test itself.
6. Conclusions
As a result of classifying the capability of software production by error type and evaluating it with an SN ratio for the number of errors, we concluded that our new approach is highly likely to evaluate the true capability of software production. We are currently performing a confirmatory experiment on its reliability.
Since students were used as testees in this study, we cannot necessarily say that our results would hold true for actual programmers. Therefore, we need to test this further in the future.
The second problem is that a commercial ability test was used to quantify each programmer's ability in our experiment. Whether this ability test is itself reproducible is questionable. An alternative method of ability evaluation needs to be developed.
On the other hand, while we have classified errors into four groups, we should prepare a different error classification method in accordance with the experimental objective. For example, if we wish to check only main effects, we need to classify the errors so that interactions do not occur. On the contrary, if interactions between error types and each factor are of more interest, a more mechanical classification that tends to trigger the occurrence of interactions is necessary.
References
1. Kei Takada, Muneo Takahashi, Narushi Yamanouchi, and Hiroshi Yano, 1997. The study of evaluation of capability and errors in programming. Quality Engineering, Vol. 5, No. 6, pp. 38-44.
2. Genichi Taguchi and Seiso Konishi, 1988. Signal-to-Noise Ratio for Quality Evaluation, Vol. 3 in Quality Engineering Series. Tokyo: Japanese Standards Association and ASI Press.
3. Norihiko Ishitani, Muneo Takahashi, Narushi Yamanouchi, and Hiroshi Yano, 1996. A study of evaluation of capability of programmers considering time changes. Proceedings of the 26th Sensory Test Symposium, Japanese Union of Scientists and Engineers.
4. Norihiko Ishitani, Muneo Takahashi, Kei Takada, Narushi Yamanouchi, and Hiroshi Yano, 1996. A basic study on the evaluation of software error detecting capability. Quality Engineering, Vol. 4, No. 3, pp. 45-53.
CASE 85
1. Evaluation of Software Error-Finding Ability
We asked testees to debug a program that contained intentional errors. The results were two types of output data (Table 1). Whether an error was found can be expressed by a 0/1 value. A mistake occurs when a line with no error is judged to have an error. It is essential to judge an error as an error and also to judge no error as no error. Since Genichi Taguchi has proposed a method using the standard SN ratio for this type of case, we attempted to use it.
The calculation of the standard SN ratio is shown below. We set the total number of lines to n, the number of correct lines to n0, the number of lines including errors to n1, the number of correct lines judged as correct to n00, the number of correct lines judged as incorrect to n01, the number of incorrect lines judged as correct to n10, and the number of incorrect lines judged as incorrect to n11. The input/output results are shown in Table 2.
Now the fraction of mistakes is

p = n01/n0    (1)

q = n10/n1    (2)

p0 = 1 / {1 + sqrt[(1/p - 1)(1/q - 1)]}    (3)

eta = 10 log {1 / [1/(1 - 2p0)^2 - 1]}  dB    (4)
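A sketch of this standard SN ratio (only the formulas follow the derivation above; the counts used for p and q here are invented for illustration):

```python
import math

# Standard SN ratio for two error types: p = fraction of correct lines
# judged incorrect, q = fraction of incorrect lines judged correct.
# Both are combined into a standardized error rate p0, then converted
# to decibels.

def standard_sn_ratio(p: float, q: float) -> float:
    p0 = 1 / (1 + math.sqrt((1 / p - 1) * (1 / q - 1)))
    return 10 * math.log10(1 / (1 / (1 - 2 * p0) ** 2 - 1))

# Hypothetical counts: 100 correct lines with 5 judged incorrect;
# 20 buggy lines with 2 judged correct.
p = 5 / 100
q = 2 / 20
print(round(standard_sn_ratio(p, q), 2))  # 4.45
```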
Table 1
Results of software error-finding experiment

                         Judged as Correct   Judged as Incorrect
Input: Correct
Input: Incorrect (bug)
examining testees' ability in the Japanese language, business ability, and intellectual ability, and allocation of the results to an L12 orthogonal array
H: target value: because errors are included in a program intentionally, whether or not to inform testees of the number of errors to be found
J: work intermission: whether or not to give the testee a 10-minute pause to do other work that has no relation to the review work
K: number of errors: the number of errors included intentionally in a program
Using the control factors above, we performed the following experiment. First, we provided four hours of instruction in the C language to six testees to conduct the experiments of the six combinations numbered 1 to 6 of the L12. Then we provided another four hours of training to the same testees (in total, eight hours of instruction) and conducted an experiment based on combinations 7 to 12 of the L12 orthogonal array. However, because each testee was dealing with three characteristics at the same time, it was difficult to allocate them in-
Table 2
Input/output for two types of mistakes
Input            Output: 0 (Correct)    1 (Incorrect)    Total
0 (correct)      n00                    n01              n0
1 (incorrect)    n10                    n11              n1
Total            r0                     r1

Table 3
Factors and levels for experiments on error finding

Factor                             Level 1     Level 2
A: instruction time (hours)
B: instructional method            Lectured    Self-taught
C: number of reviewers
D:                                 20          30
E: checklist                       Yes         No
F: ability in Japanese language    High        Normal
G:                                 High        Normal
H: target                          Yes         No
I: intellectual ability            High        Normal
J: work intermission               Yes         No
K: number of errors
Figure 1
Response graphs for previous and confirmatory experiments
Table 4
Factors and levels for experiment and programming

Factor                       Level 1     Level 2
A: type of program           Business    Mathematical
B: instructional method      Lectured    Self-taught
C: logical technique         F.C.        PAD
D:
E: coding guideline          Yes         No
F:                           High        Normal
G:                           High        Normal
H: number of specs           Small       Large
I: intellectual ability      High        Normal
J: mathematical ability      High        Normal
Table 5
ANOVA for experiment on programming

Factor    f    S          Factor    f    S
E         1    270.2      GxA       1    45.4
H         1    972.0      HxA       1    44.0
J         1    152.2      IxA       1    45.7
A         1    124.4      JxA       1    44.8
B         1    1461.1     ExB       1    319.7
AxA       1    58.6       IxB       1    115.4
BxA       1    51.8       DxC       1    93.8
DxA       1    67.9       ExC       1    69.4
e         76   767.7      FxC       1    59.2
Total     95   4899.2     GxC       1    62.0
                          IxC       1    73.9

Ve = 767.7/76 = 10.1.
Table 6
Factors and levels for confirmatory experiment

Factor                       Level 1          Level 2
A: type of program           With objective   With objective
B: instructional method      Lectured         Self-taught
C: specs                     With flowchart   With no flowchart
D:
E: test pattern              Yes              No
F:                           High             Normal
G:                           High             Normal
H: number of specs           Small            Large
I: intellectual ability      High             Normal
J: mathematical ability      High             Normal
p = (y1 + y2 + ... + yn)/n    (5)

eta = 10 log (Sp/Se) = 10 log [1 / (1/p - 1)]  dB    (6)
An L12 orthogonal array was used for this experiment. As control factors, 10 types of factors were selected (Table 4). As an ability test, a commercial aptitude test was prepared. The C language was used, and the program consisted of about 100 steps. As testees, six engineering students who had taken 10 hours of training in the C language were chosen. We tested them according to the levels allocated in the L12 orthogonal array.
The following is an explanation of the factors selected:
A: type of program: type of program created by a testee
B: instructional method: lecture on the C language from a teacher, or the use of instructional software
C: logical technique: schematic of a program's flow
D: period before deadline: requirement that the program be handed in within two or three days after specifications are given
E: coding guideline: material that states ways of thinking in programming
H: number of specs: as long as the difficulty level of each program is not changed, there are two types of specifications, minimal and redundant, in which a testee writes a program.
Table 7
Classification of errors in confirmatory experiment

Factor            Level 1   Level 2
Logical error
  A: mistake      Use       No use
  B: unknown      Use       No use
Syntax error
  C: mistake      Use       No use
  D: unknown      Use       No use
Figure 2
Comparison of response graphs between previous and conrmatory experiments
Table 8
ANOVA for experiment on programming

Factor       f    S           Factor    f    S
A            1    425.5       AxA       1    655.6
B            1    266.2       GxA       1    327.1
D            1    474.5       AxB       1    820.0
F            1    369.4       GxB       1    398.5
G            1    2063.1      AxC       1    409.5
K            1    319.4       AxD       1    211.7
A (error)    1    1952.3      CxD       1    229.0
D (error)    1    911.2       DxD       1    382.9
e            77   3061.8      HxD       1    417.0
Total        95   13,910.3    KxD       1    215.7

Ve = 3061.8/77 = 39.8.
periment for the upper and lower halves of the orthogonal array. The results reveal that the main effects are brought about by the human abilities and the number of specs. In addition, in terms of interactions between factors and errors, A, B, or the error of C demonstrated a relatively large effect; that is, it was assumed that these errors were affected by individual differences (Tables 4 and 5).
Table 9
Gain obtained from experiments on error finding and programming

             Experiment on Error Finding      Experiment on Programming
             Previous      Confirmatory       Previous      Confirmatory
All factors
  Optimal    22.6          13.3               18.4          12.9
  Worst      -19.1         0.0                14.9          0.4
  Gain       41.6          13.4               12.6          12.6

  Optimal    17.6          31.0               13.0          22.2
  Worst      -7.3          0.5                2.7           9.4
  Gain       24.9          30.5               19.0          12.8
Reference
Kei Takada, Muneo Takahashi, Narushi Yamanouchi, and Hiroshi Yano, 1998. The study of reappearance of experiment in evaluation of programmer's ability of software. Quality Engineering, Vol. 6, No. 1, pp. 39-47.
CASE 86
1. Introduction
The development of EW systems includes testing and evaluation of new systems against a wide and varied range of radar emitter types, which takes a significant amount of time and effort. To minimize development time while improving product quality and performance, robust engineering methods were implemented. The objective was to reduce EW system test time for a large quantity of emitters by defining a reduced set of orthogonal emitter types.
2. Experimental Approach
Most quality engineering methods employ control factors to determine the sensitivity of system performance to various design parameters. In this situation the approach was instead used to evaluate a system's performance against a wide variety of simulated field conditions (noise factors) in a time-efficient manner. A secondary goal was to determine system deficiencies in order to develop a more robust system design. A third result of the task was a single measure of system performance (mean signal-to-noise ratio) with which to track hardware and software improvements. Since the objective of this task was to permit detection, noise factors were selected in lieu of
Figure 1
Electronic warfare scenario. EW system: technique generator, receiver, signal processor, transmitter, diplexer, and display; function: identification, mode, and location of the emitter. Radar: receiver, transmitter, synchronizer, and display; function (modes): search for targets, acquisition of targets, and tracking of targets.
The challenge for this project was to select a signal (a limited group of emitters) that represents the functional variation of the entire ensemble of radar types. The transfer function is to identify each emitter in the signal set without error. The Taguchi problem type was selected to be maximum-the-best, given that correct identification of the emitter is considered a correct signal and any error in the system output is considered noise. In this case the system must detect all signals without radar-type or mode-identification error.
It is recognized that the EW system will need to be tested against the entire group of radar systems. However, since software upgrades are made on a nearly weekly basis, the 16 hours required to evaluate the entire ensemble of emitter types becomes expensive and time consuming. The goal of this project was therefore to cut the weekly 16 hours of test time at least in half while maintaining test integrity.
3. P-Diagram
Figure 2 illustrates the P-diagram for this project. The input signal is the domain of emitters as described by the L18 array. The output signal is correct identification of each radar emitter and mode to which the EW system is subjected. The control factors consist of the EW system hardware and software configuration being tested. The system configuration varies with project phase due to improvements in hardware and software.
Noise Factors
- Frequency, Diversity, and Variation
- PRI Type
- Scan Type
- Peak Amplitude Variation
- Angle of Arrival
- Background Noise
EW System Outputs
- Threat Identification
- Threat Mode
- Angle of Arrival
- EW System Stability
EW System
with specific OFP/UFP
Inputs
- Emitter Types
* Frequency
* PRI
* Scan
- Peak Amplitude
Control Factors
- User Data File Parameters
- Operational Flight Program
- HW / SW Configuration
Figure 2
P-diagram
Table 1
Selection of array

Factor                 Level 1    Level 2       Level 3        d.f.
Overall mean                                                   1
Frequency diversity    Single     Multiple                     1
Frequency              Low        Mid           High           2
PRI type               CW         Stable        Agile          2
Scan type              Steady     Circular      Raster         2
Peak power             Nominal    Medium        Low            2
AOA                    Boresite   Offset 1      Offset 2       2
Illumination           Short      Medium        100%           2
Background             None       Low density   High density   2
Total                                                          16

Growth factor: the remaining degrees of freedom of the L18.
Table 2
Selection of 18 robust emitters

(For each experiment, the table lists the target factor combination from the L18 row and the closest matching library item, selected by minimum distance: for example, item 76 matches single RF diversity, high RF, stable PRI type, circular scan, medium peak power, and short illumination. Matched items include 66, 70, 74, 76, 80, 82, 83, 84, 88, and 90.)
6. Signal-to-Noise Ratio
The ideal transfer function for this problem is to identify each emitter and its mode without error over the domain of emitter types. Each emitter was tested and evaluated to determine whether its radar type, radar mode, and angle of arrival were correct. Each incorrect output was assigned a count of 1. Since there may be subjective considerations during this evaluation, the operator also
Table 3
L18 array for robust testing of EW systems

(The 18 experiments assign levels of the Table 1 factors, RF diversity, RF, PRI type, scan type, peak power, and illumination, and record for each row the selected library item, its distance, and the observed defects m1. Defects for experiments 1 to 18: 4, 2, 3, 2, 0, 2, 0, 4, 0, 6, 0, 1, 0, 0, 1, 4, 0, 2. Selected items include 76, 80, 61, 12, 70, 33, 147, 141, 82, and 89 for experiments 1 to 12 and 65, 69, 149, 32, 134, and 155 for experiments 13 to 18.)
The SN ratio is computed from the defect counts of Table 4:

SN = -20 log [(total defects)/n1]

Table 4
Calculation of emitter performance (defects per experiment/emitter by radar type, mode, AOA, and color, with totals)

For the overall emitter set, SN = -10.6 dB. For the parameter single (emitters M1 to M10, M14, M16, and M18, 13 in all), SN = -7.8 dB.
7. Confirmation
The reduction in test time surpassed the goal of a 50% decrease in weekly test time: the 18 emitters can be tested in only four hours, a 4:1 reduction in effort. This is quite significant given that the test results did not reveal any additional system weakness when the monthly test sequence was conducted. The monthly test sequence extended over a 16-hour period, with a number of emitters remaining to be evaluated.
Table 5 shows test results during several weeks of development but does not represent the final system configuration. Several important results can be observed from this table:
1. The overall SN ratio increased from -9.4 dB to -5.0 dB. This shows a steady improvement for each system upgrade, which can be used to track system performance.
2. From the first test conducted, the system demonstrated a sensitivity to input-signal peak power. This was significant because peak power is one parameter that is not normally evaluated in the conventional sequence, due
Table 5
System performance summary (SN ratio, dB)

Factor/Parameter          Test 1   Test 2   Test 3   Test 4   Test 5   Test 6
Mean SN ratio             -9.4     -8.2     -8.7     -6.9     -5.8     -5.0
RF diversity: Single      -8.9     -5.0     -7.8     -5.0     -5.3     -3.3
RF diversity: Multiple    -10.6    -13.3    -10.6    -10.6    -6.9     -8.3
RF frequency: Low         -8.8     -2.0     -8.8     -4.9     -3.5     0.0
RF frequency: Mid         -10.9    -9.5     -10.6    -8.8     -7.5     -8.8
RF frequency: High        -7.4     -9.1     -5.3     -5.3     -4.4     0.0
PRI type: CW              -9.5     -2.5     -8.0     -4.4     -1.3     -6.0
PRI type: Stable          -8.2     -9.5     -6.0     -8.0     -8.0     -2.5
PRI type: Agile           -10.0    -10.5    -11.3    -8.0     -6.7     -6.0
Scan type: Steady         -13.3    -5.1     -7.6     -6.0     -6.8     -5.1
Scan type: Circular       -7.6     -11.6    -8.3     -9.5     -7.6     -6.8
Scan type: Raster         -7.0     -7.0     -9.6     -5.5     -3.5     -3.5
Peak power: Nominal       -4.4     -6.7     -5.3     -5.3     -1.3     0.0
Peak power: Medium        -8.5     -10.9    -8.0     -6.7     -7.4     -4.4
Peak power: Low           -13.0    -6.0     -11.7    -8.5     -7.4     -8.5
8. Conclusions
An L18 orthogonal array was employed successfully to yield a robust test methodology for EW systems. The robust group of 18 emitter types, coupled with operationally based noise factors, has surpassed the capability established by testing each radar type individually. Analysis of the L18 test results compared with test results from the entire domain of emitters demonstrates that the L18 can find more system sensitivities with a smaller, albeit not real, emitter set. The L18 orthogonal array has effectively demonstrated the capability to test more system functionality in significantly less time than is possible using conventional methods.
This case study is contributed by Stan Goldstein and Tom Ulrich.
CASE 87
1. Introduction
No effective debugging method has been developed to date. Since a debugging procedure and its scope depend completely on each debugging engineer, similar products may end up with different quality in some cases. In addition, because it is so labor intensive, debugging is costly. Thus, a more efficient method of finding bugs could lead to significant cost reduction.
In the context addressed above, Genichi Taguchi has proposed an effective debugging method in two articles titled "Evaluation of objective function for signal factor" [1, 2] and "Evaluation of signal factor and functionality for software" [3]. With reference to these, we detail the results of our experiment on debugging using an orthogonal array.
Since this methodology is a technique for evaluating an objective function for multiple signals, it is, in a sense, related not to quality engineering but rather to the design of experiments.
Taguchis Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
3. Analysis of Bugs
From the results of Table 2, we created approximate two-way tables for all combinations. The upper left part of Table 3 shows the number of bugs for each combination of A and B: A1B1, A1B2, A1B3, A2B1, A2B2, and A2B3. Similarly, we created a whole table for all combinations. A part of this table where many bugs occur on one side was regarded as a location with bugs.
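The approximate two-way tables can be generated mechanically from the run records; a minimal sketch, assuming each run is stored as its factor levels plus a bug count (hypothetical data, not the study's Table 2):

```python
from collections import Counter
from itertools import combinations

# One record per L18 run: assigned level of each factor and the bug count.
# Hypothetical results for illustration only.
runs = [
    {"A": 1, "B": 1, "H": 3, "bugs": 1},
    {"A": 1, "B": 2, "H": 3, "bugs": 1},
    {"A": 2, "B": 1, "H": 3, "bugs": 1},
    {"A": 2, "B": 3, "H": 1, "bugs": 0},
]

def two_way_tables(runs, factors):
    """Bug totals for every level pair of every factor pair (cf. Table 3)."""
    tables = {}
    for f, g in combinations(factors, 2):
        cells = Counter()
        for run in runs:
            cells[(run[f], run[g])] += run["bugs"]
        tables[(f, g)] = dict(cells)
    return tables

tables = two_way_tables(runs, ["A", "B", "H"])
# Bugs piling up on one side of a table flag a suspect level combination.
print(tables[("B", "H")])   # {(1, 3): 2, (2, 3): 1, (3, 1): 0}
```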
Looking at the overall result, we can see that bugs occur at H3. After investigation it was found that bugs do not occur in the one-factor test of H but do occur in its combination with G1 (= G1', the same level because of the dummy treatment used) and B1 or B2. Since B3 is a factor level whose selection blocks us from choosing (or annuls) factor levels of H and has interactions among the signal factors,
Table 1
Signal factors and levels

Factor   Level 1   Level 2   Level 3
A        A1        A2        --
B        B1        B2        B3
C        C1        C2        C3
D        D1        D2        D3
E        E1        E2        E2' (dummy)
F        F1        F2        F1' (dummy)
G        G1        G2        G1' (dummy)
H        H1        H2        H3
Table 2
L18 orthogonal array and output (run number, assigned factor levels, and observed output for runs 1 to 18)
Table 3
Binary table created from the L18 orthogonal array (rows A1, A2, B1, B2, B3, C1, C2, C3, D1, D2, D3, E1, E2, E2', F1, F2, F1', G1, G2, G1'; columns B1 through H3; each cell holds the bug count for that level combination)
Variation of the A-by-B combinations (six cells, three runs per cell):

S_AB = (1^2 + 1^2 + 0^2 + 1^2 + 1^2 + 0^2)/3 - 4^2/18 = 0.44   (f = 5)   (1)

Variation of A:

S_A = (2^2 + 2^2)/9 - 4^2/18 = 0.00   (f = 1)   (2)

Variation of B:

S_B = (2^2 + 2^2 + 0^2)/6 - 4^2/18 = 0.44   (f = 2)   (3)

Variation of the A-by-B interaction:

S_AxB = S_AB - S_A - S_B = 0.44 - 0.00 - 0.44 = 0.00   (f = 2)   (4)
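The variation decomposition above can be checked numerically; a short sketch using the six A-by-B cell totals (three runs per cell) read off the two-way table:

```python
# A-by-B cell totals of bug counts; three L18 runs fall in each cell.
cells = {(1, 1): 1, (1, 2): 1, (1, 3): 0,
         (2, 1): 1, (2, 2): 1, (2, 3): 0}
N, runs_per_cell = 18, 3
cf = sum(cells.values()) ** 2 / N            # correction factor, 4^2/18

def level_totals(axis):
    """Total bugs at each level of factor A (axis 0) or B (axis 1)."""
    totals = {}
    for key, v in cells.items():
        totals[key[axis]] = totals.get(key[axis], 0) + v
    return totals

s_ab = sum(v ** 2 for v in cells.values()) / runs_per_cell - cf   # f = 5
s_a = sum(v ** 2 for v in level_totals(0).values()) / 9 - cf      # f = 1
s_b = sum(v ** 2 for v in level_totals(1).values()) / 6 - cf      # f = 2
s_axb = s_ab - s_a - s_b                                          # f = 2

print(round(s_ab, 2), round(s_a, 2), round(s_b, 2), round(s_axb, 2))
# 0.44 0.0 0.44 0.0
```

The divisors 9 and 6 are the number of runs at each level of the two-level factor A and the three-level factor B, respectively.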
Table 4
Main effects

Factor:       A      B      C      D      E      F      G      H
Main effect:  0.0    0.44   0.11   0.11   0.03   0.03   0.44   1.77

Interaction effect: S_AxB/2 = 0.00/2 = 0.00   (5)

Combinational effect: S_AB/5 = 0.44/5 = 0.09   (6)
4. Bug Identification
Finally, we succeeded in finding bugs by taking advantage of each combination of factors (Table 5). Using the method described, bugs can thus be found from an observation of specific combinations. The following are the differences between our current debugging process and the method using an orthogonal array:
1. Efficiency of finding bugs
a. Current process. What can be found through numerous tests are mainly independent
Table 5
Combinational and interaction effects

Factor   Combination   Interaction
AB       0.09          0.00
AC       0.09          0.17
AD       0.09          0.17
AE       0.09          0.25
AF       0.04          0.00
AG       0.15          0.00
AH       0.36          0.00
BC       0.26          0.39
BD       0.14          0.14
BE       0.17          0.19
BF       0.42          0.78
BG       0.22          0.11
BH       0.39          0.22
CD       0.14          0.22
CE       0.17          0.36
CF       0.22          0.44
CG       0.12          0.03
CH       0.26          0.06
DE       0.07          0.11
DF       0.12          0.19
DG       0.12          0.03
DH       0.26          0.11
EF       0.12          0.22
EG       0.16          0.01
EH       0.23          0.01
FG       0.20          0.06
FH       0.42          0.11
GH       0.62          0.44
References
1. Genichi Taguchi, 1999. Evaluation of objective function for signal factor (1). Standardization and Quality Control, Vol. 52, No. 3, pp. 62–68.
2. Genichi Taguchi, 1999. Evaluation of objective function for signal factor (2). Standardization and Quality Control, Vol. 52, No. 4, pp. 97–103.
3. Genichi Taguchi, 1999. Evaluation of signal factor and functionality for software. Standardization and Quality Control, Vol. 52, No. 6, pp. 68–74.
4. Kei Takada, Masaru Uchikawa, Kazuhiro Kajimoto, and Jun-ichi Deguchi, 2000. Efficient debugging of software using an orthogonal array. Quality Engineering, Vol. 8, No. 1, pp. 60–69.
Part VI
On-Line Quality
Engineering
On-Line (Cases 88–92)
Taguchis Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
CASE 88
2. Obstacle in Application to an
Automobile Production Line
To perform an On-QE analysis with current data, we collected actual data regarding 18 major characteristics in a process. Table 1 reveals that an obvious imbalance between control cost and quality loss leads to a large loss under current conditions; in particular, the quality loss accounts for the majority of the total loss. The optimal configuration, on the other hand, improves the balance of loss, reducing the proportion of quality loss to 50%. In addition, by adopting the calculated optimal measurement interval, adjustment interval, and adjustable limit, we could compress the total loss to 1/33 and gain a considerable improvement.
However, judging from our current facility's capability, we realized that the calculated optimal adjustable limit contains infeasible requirements, such as adjustment at the 1-μm level. Therefore, determining an optimal adjustable limit that is practical under the current conditions, we had to recalculate the optimal adjustment interval and measurement interval. As shown in Figure 1, although we needed
Table 1
Evaluation of balance between control cost and quality loss(a)

                               Current Condition             Optimal Condition
                               Monetary    Proportion to     Monetary    Proportion to
                               Value(b)    Total Loss (%)    Value(b)    Total Loss (%)
Control cost:
  Measurement cost (1)             5             0.5             3            10.0
  Adjustment cost (2)              1             0.1            12            40.0
Quality loss (3), (4), (5)       994            99.4            15            50.0
Total loss [sum of (1)-(5)]     1000           100.0            30           100.0

(a) (1) Measurement cost in process, (2) adjustment cost in process, (3) loss within adjustable limit, (4) loss beyond adjustable limit, (5) loss due to measurement error.
(b) The monetary value is estimated by taking the total loss under current conditions as 1000.
Figure 1
Limitation of application of analytical result to actual manufacturing line
Figure 2
Relationship between quality loss and Cp and Cpk in the manufacturing process
Figure 3
Work ow based on real-time quality data trend management and analysis system
Figure 4
QTS: real-time display of control charts
4. Achievements and
Future Considerations
Figure 5
QTS: display of histograms for all characteristics
Figure 6
Example of X control chart for each process stability
1. The loss by process management in the current manufacturing lines was clarified, and a drastic improvement in process stability was obtained (Figure 8). Consequently, we reduced problems in the process dramatically, thereby diminishing quality problems (Figure 9). In addition, by prolonging the measurement cycle (from 1/50 in our conventional process to 1/100 or 1/350), we achieved a 55% reduction in labor hours (Figure 10).
2. As a basis for a systematic activity, we established a business work flow that can improve quality continuously.
Adoption of the optimal measurement cycle classifies each characteristic (e.g., hole diameter, with its changes over tool life) into four process-stability levels on the X-bar/R control chart and histogram:

[Level 0] The process needs to be investigated, analyzed, and corrected immediately (trend confirmed).
[Level 1] The process needs to be controlled by trend characteristics within current control limits (Cp < 1.33 and Cpk < 1.33): correction of variability (process improvement).
[Level 2] Plant administration needs to be improved, for example by intentionally biased adjustment of tools when changed (Cp >= 1.33 and Cpk < 1.33): correction of deviation k.
[Level 3] Trend management with a control limit of +/-3σ can be implemented because of process stability (Cp >= 1.33 and Cpk >= 1.33): longer measurement cycle (optimization).

Figure 7
Flowchart of plant administration/process improvement based on process stability focused on trend values
Figure 8
Transition of process stability for characteristics in the 1/50 sampling test (18 characteristics classified into levels 0 to 3 at Nov. 1999, Apr. 2000, July 2000, and the future state)
Figure 9
Transition of quality problems
Figure 10
Labor reduction for sampling test (On-QE-applied seven characteristics)
Reference
Yoshito Ida and Norihisa Adachi, 2001. Progress of on-line quality engineering. Standardization and Quality Control, Vol. 54, No. 5, pp. 24–31.
CASE 89
1. Introduction
Preventive maintenance is a scheduled maintenance method to prevent failures during use of an item and to maintain the item's availability. A key practical issue is how to conduct maintenance optimally.
When operating a casting machine (Figure 1), we suffered considerable damage because of an apparent failure caused when a roller of a chain (Figure 2b) used for a bucket elevator (Figure 2a) wore out and suddenly tore apart. Like a tray elevator, a bucket elevator transports a workpiece by hanging it on a chain. An apparent failure is defined as a case where a machine stops when an automated measurement system fails.
To prevent the same failures from occurring again, referring to the history of repairs in the maintenance record book, we designed an optimal maintenance method in accordance with the following procedures:
1. We understand the current situation correctly.
2. Using on-line quality engineering, we analyze preventive maintenance from the viewpoint of the following three types and select one optimal maintenance method (Figure 3):
a. Periodic checking
b. Periodic maintenance
c. Both periodic checking and maintenance
From the n = 19 repair records, the mean interval between failures was 6.9 months.
Current checkup interval: n0 = 21 days
Current maintainable limit: D0 = 80%
Figure 1
Vpro casting machine
Figure 2
Vertical bucket elevator
Figure 3
Types of preventive maintenance
Table 1
Maintenance data for bucket elevator (n = 19)

Months until Next   Cumulative   Cumulative
Replacement         Frequency    Percentage
 3.5                    2           10.5
 5.3                    7           36.8
 6.0                   10           52.6
 7.1                   13           68.4
 8.2                   16           84.2
11.0                   19          100.0
Repair cost:

C = (loss by machinery stoppage) + (current maintenance cost)
  = 3,600,000 + 870,000 = 4,470,000 yen   (2)

Current loss per day, with the current mean time between failures u0 = 147 days:

L0 = (checkup cost) + (loss by machinery stoppage) + (repair cost) + (time lag loss)
   = C/u0 = 4,470,000/147 = 30,408.2 yen/day   (3)
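The figures in equations (2) and (3) chain together directly; a minimal sketch:

```python
# Case 89 current condition: breakdown (ex post facto) maintenance only.
stoppage_loss = 3_600_000     # yen lost per machinery stoppage
maintenance_cost = 870_000    # yen per repair under current practice
mtbf_days = 147               # current mean time between failures, u0

repair_cost = stoppage_loss + maintenance_cost   # C, equation (2)
loss_per_day = repair_cost / mtbf_days           # L0, equation (3)

print(f"C = {repair_cost:,} yen, L0 = {loss_per_day:,.1f} yen/day")
# C = 4,470,000 yen, L0 = 30,408.2 yen/day
```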
3. Optimal Preventive Maintenance Method
To choose an optimal maintenance method, we used the loss function to analyze the following three types of preventive maintenance methods.

Using Only Periodic Checkups
We calculated the optimal periodic checkup interval, n, and maintainable limit, D, that minimize the loss. A checkup every three days and a maintainable limit of 83% are optimal. As a next step, we calculated the loss function L1 under optimal conditions:

L1 = (checkup cost) + (maintenance cost) + (loss by machinery stoppage due to maintainable limit) + (loss by machinery stoppage due to periodic checkup)
Figure 4
Wear and tear of link plate

With checkup cost B = 500 yen, maintenance cost C = 870,000 yen, and stoppage loss A = 3,600,000 yen, the optimal checkup interval is

n = sqrt(2B/A) u0 = sqrt[(2)(500)/3,600,000] (147) = 2.45 days, rounded to 3 days

and the optimal maintainable limit is

D = (3C/A)^(1/4) (90) = [(3)(870,000)/3,600,000]^(1/4) (90) = 83.04%, rounded to 83%

With the maintainable limit tightened to 83%, the mean failure interval becomes

u = u0 (D^2/90^2) = (147)(83^2/90^2) = 125 days

and the loss under periodic checkups only evaluates to L1 = 14,319.4 yen/day.

Using Only Periodic Maintenance
With a planned (periodic) maintenance cost of 670,000 yen, the loss under periodic maintenance alone evaluates to

7976.2 + 6997.1 = 14,973.3 yen/day
Under periodic maintenance only, the optimal maintenance interval is

sqrt(2C'/A) u0 = sqrt[(2)(670,000)/3,600,000] (147) = 89.7 days

where C' = 670,000 yen is the planned maintenance cost.

Using Both Periodic Maintenance and Checkups
Under the combined method, the optimal periodic maintenance interval evaluates to 117.3 days, which we round to six months (126 working days). Periodic replacement at this interval raises the effective mean time between failures to

u* = 2u0^2/126 = (2)(147^2)/126 = 343 days

The optimal checkup interval under the combined method is then

n = sqrt(2B/A) u* = sqrt[(2)(500)/3,600,000] (343) = 5.7 days
5.7 days is rounded up to 1 week; thus, we can perform one checkup a week. The optimal maintainable limit D is calculated as before, giving 83.04%, rounded to 83%.
Table 2
Cost comparison among all maintenance methods (thousand yen/year)

Maintenance Method                       Checkup   Maintenance   Periodic       Machinery   Total   Proportion
                                                                 Maintenance    Stop                (%)
Ex post facto maintenance (baseline)        --         --            --           7660      7660      100.0
Only periodic checkup                      1754         42           --           1812      3608       47.1
Only periodic maintenance                   --          --          2010          1763      3773       49.2
Both periodic checkup and maintenance       753         21          1340           773      2887       37.7
Figure 5
Effects of preventive maintenance methods
D = (3C/A)^(1/4) (90) = [(3)(870,000)/3,600,000]^(1/4) (90) = 83.04%   (24)

u = u* (D^2/90^2) = (343)(83^2/90^2) = 291 days   (25)
4. Conclusions
Table 2 and Figure 5 show the cost calculation results for each type of maintenance method. Looking at the ex post facto maintenance method, we can see that the repair cost amounts to 7.66 million yen. Among all the maintenance methods, simultaneous use of the periodic checkup and maintenance methods was regarded as the most effective in overall cost and the best balanced. Considering all of the above, we come to the following conclusions:
1. As a maintenance method for chains, we perform periodic maintenance and checkups simultaneously. The following are the principal management concerns:
a. Optimal periodic maintenance interval: once in six months
b. Optimal checking interval: once a week
c. Maintainable limit: 83%
2. By taking advantage of both periodic maintenance and checkups, we can obtain an annual reduction in total loss (Table 2).
Reference
Tadao Kondo, 1996. Design of preventive maintenance: combination of periodic maintenance and regular check of bucket elevator. Quality Engineering, Vol. 4, No. 6, pp. 30–35.
CASE 90
1. Introduction
In off-line quality engineering, since we consider a generic function, signal factor, control factor, or noise factor and optimize design parameters with an orthogonal array, we can easily and intuitively follow the PDCA (plan-do-check-act) cycle. But in on-line quality engineering, to derive a loss function we need to scrutinize and clarify measurement and adjustment costs through on-site research at production lines. Figuratively speaking, on-line quality engineering requires untiring, low-key groundwork, but only a company that wrestles actively with both off-line and on-line quality engineering can survive fierce competition.
Many companies, including ours, are striving consistently to reduce waste in direct and indirect workforce departments on a daily basis. However, we have been urged to innovate not only with new products and processes but also with existing ones by making the most of quality engineering techniques. So, to promote on-line quality management in our company, we hold training seminars directed to managers and supervisors in the production division. The objective of the seminars is to understand the basic concepts and use of the loss function as a first step toward improving quality and reducing cost.
For building a case study as a typical example for members of our in-house training seminar, we picked the following case: "Feedback Control by Quality Characteristics in Machining Component K," under the slogan "Do it first."
3. Optimal System
The current loss L0 (Figure 1) is evaluated with the standard feedback-control loss function

L0 = B0/n0 + C0/U0 + (A/Δ0^2)[D0^2/3 + ((n0 + 1)/2 + l0)(D0^2/U0) + σm0^2]   (1)

The optimal measurement interval is

n1 = sqrt(2U0B0/A) (Δ0/D0) = 1215 components, rounded to 1250 components (every 5 hours)   (2)

and the optimal adjustable limit is

D1 = [(3C0/A)(D0^2 Δ0^2/U0)]^(1/4) = 9.4 μm, rounded to 10 μm   (3)

Figure 1
Current loss, L0
Once the optimal measurement interval and adjustable limit were determined, we could predict the average adjustment interval U1 under the optimal condition:

U1 = U0 (D1^2/D0^2) = 19,560 (10^2/25^2) = 3130 components   (4)
The loss by feedback control under the optimal configuration can be calculated by substituting the newly computed optimal measurement interval n1, adjustable limit D1, and average adjustment interval U1 into equation (1):

L1 = B0/n1 + C0/U1 + (A/Δ0^2)[D1^2/3 + ((n1 + 1)/2 + l0)(D1^2/U1) + σm0^2]   (5)
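Equations (1) and (5) follow the usual on-line QE feedback-control loss; a sketch of that standard form with hypothetical parameter values (illustrative only, not the study's figures):

```python
def feedback_loss(B, C, A, tol, n, u, D, lag, sigma_m):
    """Standard on-line QE feedback-control loss per component:
    L = B/n + C/u + (A/tol^2) [D^2/3 + ((n+1)/2 + lag) D^2/u + sigma_m^2].
    B: measurement cost, C: adjustment cost, A: loss at tolerance tol,
    n: measurement interval, u: mean adjustment interval, D: adjustable
    limit, lag: time lag, sigma_m: measurement error std. deviation.
    """
    quality = (A / tol ** 2) * (D ** 2 / 3
                                + ((n + 1) / 2 + lag) * D ** 2 / u
                                + sigma_m ** 2)
    return B / n + C / u + quality

# Hypothetical figures for illustration only.
loss = feedback_loss(B=500, C=3000, A=800, tol=25, n=750, u=3130,
                     D=10, lag=1, sigma_m=3.0 ** 0.5)
print(round(loss, 2))
```

Tightening the adjustable limit D shrinks the quality-loss term but raises the adjustment-cost term C/u, which is exactly the balance the optimal configuration strikes.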
Figure 2
Comparison of loss between current and optimal conditions
Detailed Measures
Renewal of the current measuring instrument, jigs, and tools
Preparation of measurement standards

After adding two 5-mm peaks on the raw material of component K, we used it as our actual standard. In that way we could calibrate environmental errors, such as temperature, in the machining processes.

Estimation of Measurement Error Variance (σm1^2) after Taking Measures toward Improvement
Throughout the improvement process, we can ameliorate each parameter and obtain new values, including the measurement error variance σm1^2.   (6)

We investigated modification of the optimal adjustable limit D1. It turns out to be difficult to secure D1 = 10 μm for all dimension Hs, even if we assume the technical measures above. Therefore, we set D1* = 15 μm on a practical basis.
5. Estimation of Improvement of Feedback Control
After taking the aforementioned improvement measures, we obtained the following parameters. Then we estimated the improvements in loss and the process capability index.

Improved Loss
Each Parameter Obtained after Improvement. The following parameters changed after the technical improvement measures; all others are the same as those for the optimal configuration.

Measurement cost: B1 = 524 yen
Adjustment cost: C1 = 4110 yen
Measurement error variance: σm1^2 = 3.0 μm^2

The optimal measurement interval after improvement is

n1* = sqrt(2U0B1/A) (Δ0/D1*) = 701 components, rounded to 750 components (every 3 hours)   (7)

and the predicted average adjustment interval is

U1* = U0 (D1*^2/D0^2) = 19,560 (15^2/25^2) = 7042 components   (8)

Estimation of Loss
The loss after improvement is

L2 = B1/n1* + C1/U1* + (A/Δ0^2)[D1*^2/3 + ((n1* + 1)/2 + l0)(D1*^2/U1*) + σm1^2]   (9)
Comparison of Improved Benefits. For the improvements discussed above, we compared losses under current conditions, under optimal current conditions, and after improvement. Figure 3 is an analytical chart for losses L0, L1, and L2. Comparing the loss function under current conditions with the practical loss function after improvement, we can save 19.18 yen per component, or 10.1 million yen annually. However, even after improvement, the loss inside the adjustable limit accounts for a large portion (68%) of the total, so further improvement is required.
Process Capability Index
Comparing the process capability indexes under the current and optimal conditions, we obtained an improvement in Cp from 0.56 to 1.06.
In our case, selecting one of the nine quality characteristics, we attempted to reduce cost by using the loss function. For the other quality characteristics, by taking full advantage of each specific technology, we can build an optimal system for the entire machine through cost reduction based on a loss function. At this point we need to take notice of the following.
For measurement costs, we should consider a feasible and efficient setup for reducing measurement time and an optimal measurement interval for all characteristics, including the use of specialized measurement technicians.
As for adjustment cost and inside- and outside-adjustable-limit costs, the key point is to establish component production processes with less variability, as well as processes that are fast, low-cost, and safe, through use of off-line quality engineering. The resulting benefits of these improvements are unveiled gradually in terms of workforce reduction or enhanced company credibility. From the viewpoint of management, the most vital issue is to estimate how much the reduction in labor cost, users' complaints, or repair cost contributed to cost reduction. That is, instead of stressing workforce reduction after improvement, we should be able to
Figure 3
Cost and loss reduction through on-line quality engineering (baseline loss L0; optimum L1 = 13.31 yen; after kaizen activities L2 = 7.22 yen)
Reference
Toshio Kanetsuki, 1996. An application of on-line quality engineering. Quality Engineering, Vol. 4, No. 1, pp. 13–17.
CASE 91
1. Introduction
The following are products that we used and their
dimensions. The dimensions that we used for our
study are boldfaced.
Metal plate (thickness): 2.0 10.0 20.0 mm
Plastic tube A (height): 19.5 8.255 mm
Plastic tube B (height): 19.3 10 mm
Metal cap (inside diameter): 29.55 3.9 mm
Using on-line quality engineering, we performed
several studies regarding these products and dimensions. Control constants (parameters) related to the
analysis were researched by the suppliers.
Δ = Δ0 sqrt(A/A0)   (1)

where A is the producer's loss, A0 the average consumer's loss, and Δ0 the consumer's functional limit.
For the metal cap, we can see a remarkable difference between the calculated optimal and current tolerances. Our investigation revealed that the reason was that the tolerance was arbitrarily set to 50 μm for a functional limit of 100 μm. In addition, the difficulty of measuring the metal cap's inner diameter was also a factor. In this type of case, where there is a considerable difference between the current and optimal situations, tolerance design was implemented improperly.
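Equation (1) is a one-line computation; a sketch using the metal cap's 100-μm functional limit (the loss values here are illustrative assumptions, not the study's):

```python
import math

def producer_tolerance(delta0, producer_loss, consumer_loss):
    """Tolerance from equation (1): delta = delta0 * sqrt(A / A0)."""
    return delta0 * math.sqrt(producer_loss / consumer_loss)

# Metal cap: functional limit delta0 = 100 um; A and A0 are assumed values.
delta = producer_tolerance(delta0=100, producer_loss=25, consumer_loss=1000)
print(round(delta, 1))   # 15.8 -- far tighter than the arbitrary 50 um used
```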
Table 1
Comparison of tolerances of mechanical component parts (nominal-the-best characteristics), for the metal plate (thickness), plastic tube A (height), plastic tube B (height), and metal cap (inside diameter)
50
B
C
A
n0
u0
0
D 20
n D2
0 0 s2
3
2 u0
(2)
2uA B D
(3)
3C D 20 2
A u0
1/4
(4)
D2
D 02
(5)
2T
2
2c
20
(6)
1391
Table 2
Comparison in control of the measuring instruments used for mechanical component parts: a snap micrometer for the metal plate (thickness), low-measuring-force micrometers for plastic tubes A and B (height), and a caliper for the metal cap (inside diameter)
The caliper's measurement error is remarkably large. Considering that it exceeds the estimated optimal tolerance, we should use a more accurate measuring instrument in place of the caliper. However, a part can be judged as proper in a functional inspection process when it can be assembled or inserted, even if its error is larger in the actual process, indicating that dimensional inspection is not greatly emphasized. In other words, the manufacturer does not trust the tolerance to a great extent.
As for control practice, because the optimal checking (measurement) interval and other values are calculated numbers, we should adjust them into values that can be managed more easily. For instance, looking at the measuring instrument control of plastic tube A, we can see that one checkup is conducted every day because the current checkup and correction interval is 23,000 units, and 23,000 units are produced daily by this process. Under optimal conditions, these numbers are estimated as 6780 for checking and 16,610 for correction. Since these numbers are not convenient for checking operations, we changed them slightly to 6710-unit (7-hour) and 16,290-unit (17-hour) intervals. Furthermore, with careful consideration of the loss, we can also change them to 7670-unit (8-hour) and 15,330-unit (16-hour) intervals, taking into account that three 8-hour shifts are normally used in a day.
Loss:

L0 = B/n0 + C/u0 + (A/Δ^2)[D0^2/3 + ((n0 + 1)/2 + l)(D0^2/u0) + σm^2]   (7)

n = sqrt(2u0B/A) (Δ/D0)   (8)

D = [(3C/A)(D0^2 Δ^2/u0)]^(1/4)   (9)

u = u0 (D^2/D0^2)   (10)

L = (A/Δ0^2)(1/n) Σ(yi − m)^2   (11)

In addition, we decomposed the error in equation (11) into a factor of deviation from the target value and a dispersion factor. Using the resulting equation, we calculated the split-up losses:

L = (A/Δ0^2)(1/n)[n(ybar − m)^2 + Σ(yi − ybar)^2]   (12)
Table 3
Comparison in process control of mechanical component parts, for the metal plate (thickness), plastic tube A (height), plastic tube B (height), and metal cap (inside diameter); rows include 1, the producer's loss (yen/unit), through 6, the time lag
Table 4
Summary of quality losses in process control of mechanical component parts

                   Metal Plate   Plastic Tube A   Plastic Tube B   Metal Cap
                   (Thickness)   (Height)         (Height)         (Inside Diameter)
1. Quality loss       0.227         1.6074           0.1469           0.5713
2. (μm)               0.14        200.9            144.0            285.7
3. (μm)               0.37         14.2             12.0             16.9
4.                    0.219         0.4128           0.0416           0.2385
5.                    0.008         1.1946           0.1053           0.3328
6. Conclusions
Through a comparative investigation of actual processes and optimal results based on the ideas of on-line quality engineering, we have shown how to utilize this management tool.
Reference
Yoshitaka Sugiyama, Kazuhito Takahashi, and Hiroshi Yano, 1996. A study on on-line quality engineering for mechanical component parts manufacturing processes. Quality Engineering, Vol. 4, No. 2, pp. 28–38.
This case study is contributed by Yoshitaka Sugiyama.
CASE 92
Abstract: In quality engineering, the optimum control conditions are determined by balancing the cost of control and the cost of quality. This approach is effective and widely applicable in all manufacturing processes. This study reports a successful case applied to a manufacturing process for semiconductor rectifiers. In the process, the temperature-measuring interval was determined such that cost was reduced while quality was maintained, which contributed to customer satisfaction.
1. Introduction
Where leads and pellets are soldered in a semiconductor manufacturing process, we place the product in a heat-treating furnace. Figure 1 outlines the semiconductor manufacturing process. To control the furnace, we take periodic measurements with a separate temperature-recording instrument (Figure 2). However, while the adjustable limits are set the same as in the manufacturing specifications, we have never done a thorough economic study of the checking interval, because it had always been determined according to past experience and results. To verify its validity, we applied the concept of feedback control of process conditions from quality engineering.
We have:

1. Measurement cost: B = (2400 yen hourly labor cost) × (15 minutes/60 minutes) = 600 yen   (1)

2. Measurement interval: n units. The hourly number of heat-treated products is 170,000, and they are measured every eight hours. Thus,

n0 = (170,000)(8) = 1,360,000 units   (2)
Figure 1
Quality control process chart for semiconductor manufacturing (lead and pellet mounting, etching, resin encapsulation molding, lead-solder deburring, dipping, marking, forming)
Substituting the current values into the feedback-control loss function gives the current loss per product:

L0 = B/n0 + C/u0 + (A/Δ^2)[D0^2/3 + ((n0 + 1)/2 + l)(D0^2/u0) + σm^2]
   = 600/1,360,000 + 3000/408,000,000 + (3/10^2)[5^2/3 + ((1,360,000 + 1)/2 + 5900)(5^2/408,000,000) + 1.98^2]

with measurement cost B = 600 yen, adjustment cost C = 3000 yen, loss per product at the tolerance A = 3 yen, tolerance Δ = 10, current adjustable limit D0 = 5, current adjustment interval u0 = 408,000,000 units, time lag l = 5900 units, and measurement error σm = 1.98.

Figure 2
Temperature recording of the heat-treating furnace in the mounting process
Under the optimal configuration, the same loss function evaluates to L1 = 0.178 yen per product   (7)

Figure 3
Overall loss for each product in the mounting process

Optimal measurement interval:

n = sqrt(2u0B/A) (Δ/D0) = sqrt[(2)(408,000,000)(600)/3] (10/5) ≈ 800,000 units   (8)

Optimal adjustable limit:

D = [(3C/A)(D0^2 Δ^2/u0)]^(1/4) = [((3)(3000)/3)((5^2)(10^2)/408,000,000)]^(1/4) = 0.37   (9)

Optimal adjustment interval:

u1 = u0 (D^2/D0^2) = (408,000,000)(0.37^2/5^2) = 2,234,208 units   (10)
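The optimal-interval expressions above are the usual on-line QE forms; a sketch of the measurement-interval and adjustable-limit formulas using the rectifier line's reconstructed figures, which should be treated as approximate:

```python
import math

def optimal_measurement_interval(u, B, A, tol, D0):
    """n* = sqrt(2uB/A) * (tol/D0), the usual on-line QE checking interval."""
    return math.sqrt(2 * u * B / A) * (tol / D0)

def optimal_adjustable_limit(u, C, A, tol, D0):
    """D* = [(3C/A) * (D0^2 * tol^2 / u)]^(1/4)."""
    return ((3 * C / A) * (D0 ** 2 * tol ** 2 / u)) ** 0.25

# Figures as reconstructed for the mounting furnace (approximate).
u0, B, C, A, tol, D0 = 408_000_000, 600, 3000, 3, 10, 5
n_star = optimal_measurement_interval(u0, B, A, tol, D0)
d_star = optimal_adjustable_limit(u0, C, A, tol, D0)
print(round(n_star), round(d_star, 2))   # about 800,000 units and 0.37
```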
Table 1
Current losses versus improvements (yen/product)

                    Current      After Improvement
                    0.117612     0.117612
                    0.00126      0.00075
                    0.25         0.009
Adjustment loss     0.000006     0.00134
Measurement loss    0.000441     0.04905
1398
instrument, we would be able to obtain an improvement of 0.25 yen per product, which is equal to approximately 88 million yen on a yearly basis. We
need to improve or develop the measuring instrument in-house, since such an instrument is not currently available in the marketplace.
Reference
Nobuhiro Ishino and Mitsuo Ogawa, 1999. Study of on-line QE for manufacturing of the rectifier semiconductor. Quality Engineering, Vol. 7, No. 6, pp. 51–55.
Part VII
Miscellaneous
Miscellaneous (Cases 93–94)
CASE 93
Abstract: This study involves the estimation of total working hours in software development. The objective was to determine the coefficients of the items used for the calculation of total working time. This method was developed by Genichi Taguchi and is called experimental regression.
1. Introduction
In many cases, office computer sales companies receive an order for software as well as one for hardware. Since functional software is not well developed at that point in time, the gap between the contract fee and the actual expense quite often tends to be significant. Therefore, if the accuracy of the rough estimation (also called the initial estimation) of working hours, based on the tangible information available at the point of making a contract, were improved, we could expect a considerable benefit from the viewpoint of sales.
On the other hand, once a contract is completed, detailed functional design is started. After the functional design is completed, actual development is under way. In this phase, to establish a development organization, development schedule, and productivity indexes, higher accuracy in labor-hour estimation than that of the initial estimation is required. Thus, it is regarded as practically significant to review the estimated number of development working hours after completion of the contract. This is called working-hour estimation for actual development.
Table 1
Initial values of the coefficients ai in the equation for actual labor-hour estimation, for variables xi including the number of files, the number of items, work experience as SE, and the number of subsystems (i = 1 to 13, three candidate levels per coefficient)
1403
ST 12,734,085
(3)
(4)
Se
ST
(100) 94.6%
(5)
Now, since a linear proportional regression equation was used, we dened a ratio to the total sum
of squares as the contribution. In addition, the standard deviation of errors, , is
Sn 135.73
e
(6)
Table 2
Converged values after the sixth calculation

Coefficient   Level 1    Level 2    Level 3
a1            3.0        3.1875     3.375
a2            0.28125    0.3125     0.34375
a3            5.0        5.625      6.25
a4            3.0        3.1875     3.375
a5            1.5        1.75       2.0
a6            5.0        5.3125     5.625
a7            2.0        2.125      2.25
a8            0.4375     0.46875    0.5
a9            168.75     159.375    150.0
a10           23.4375    25.0       26.5625
a11           18.75      20.0       21.25
a12           15.0       15.9375    16.875
a13           56.25      58.125     60.0
Table 3
Part of the deviations and sums of residuals squared over the trial combinations (the residual sums of squares range from 667,459.864 to 780,436.115, with the best combination reaching 681,703.051)
3. Rough Estimation

In contrast to the working-hour estimation for actual development, the initial estimation made at the time of receiving an order is discussed below.

Selection of Items
As we stated at the beginning, we needed to determine a contract fee when receiving an order for software. However, most items are not yet determined at this point. More specifically, the only items that could be clarified were the (1) number of files, (2) number of table creation programs, (3) number of subsystems, and (4) number of special devices. We therefore studied whether we could roughly estimate the development working hours using only these four items.
Setup of Initial Values
A linear proportional regression equation based on the four selected items is assumed. Since all factors other than these four are represented by only these four, the initial values of the coefficients are set with wide intervals (Table 4). The suffixes of the coefficients follow those in the estimation equation of actual development working hours.
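The level-narrowing search used in this kind of sequential approximation can be sketched as follows. This is a hedged illustration, not the case study's actual procedure: for clarity it tries every combination of the candidate levels directly rather than assigning them to an orthogonal array, and the demonstration data are synthetic.

```python
import itertools
import numpy as np

def sequential_approximation(x, y, levels, n_iter=6, shrink=0.5):
    """Sequential approximation for a proportional model y = sum(a_i * x_i):
    evaluate the residual sum of squares Se for every combination of the
    three candidate levels per coefficient, keep the best combination, and
    then narrow each coefficient's levels around its winning value."""
    best, best_se = None, np.inf
    for _ in range(n_iter):
        best_se = np.inf
        for combo in itertools.product(*levels):
            resid = y - x @ np.array(combo)
            se = float(resid @ resid)          # Se, sum of squared residuals
            if se < best_se:
                best_se, best = se, combo
        # shrink the interval around the winning level of each coefficient
        half = [shrink * (lv[2] - lv[0]) / 2.0 for lv in levels]
        levels = [[b - h, b, b + h] for b, h in zip(best, half)]
    return np.array(best), best_se

# Two-variable demonstration with data generated from a1 = 3, a2 = 10.
x = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [1.0, 4.0]])
y = x @ np.array([3.0, 10.0])
coef, se = sequential_approximation(x, y, [[2.0, 3.0, 4.0], [8.0, 10.0, 12.0]])
```

With wide starting intervals that bracket the true values, the repeated narrowing converges on them, mirroring how the converged coefficients in Table 2 were obtained after six calculations.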
Figure 1
Estimation by observation and result using equation (2)
Table 4
Initial values of coefficients for the initial estimation equation: three candidate levels of each coefficient ai for the variables xi.
Table 6
Coefficients of common items in two equations

                                Development Working Hours
 i   Variable                   Actual      Estimated
 1   Number of files             3.1875     15.46875
 4   Number of files             3.1875     11.71875
12   Number of subsystems       15.9375     14.0625
13   Number of subsystems       58.125      50.0
Table 5
Converged values of initial estimation equation after six trials

Coefficient   Level 1    Level 2     Level 3
a1            15.0       15.46875    15.9375
a4            11.25      11.71875    12.1875
a12           12.5       14.0625     15.625
a13           43.75      50.0        56.25
As the coefficients of the initial estimation equation, we adopted the values at level 2. The sum of squared residuals, Se, is 977,692.517 when the level 2 values are used. The percent contribution was 92.4%, and the standard deviation of errors was 162.55. This is approximately 1.2 times 135.73, the corresponding value for the estimation equation of actual development working hours based on 13 items.
In a conventional process, most companies have determined a contract fee according only to the numbers of different screen displays and documents, so we also calculated a regression equation based on these. Even using a general linear regression equation computed by the least squares method that minimizes the errors,

y = 216.68 + 16.9 × (number of documents) + 9.82 × (number of screen displays),

the resulting standard deviation of errors amounts to 256.33, which is 1.6 times that of the initial estimation equation and about twice that of the actual estimation equation. Moreover, since conventional practice did not use a regression equation grounded in actual labor-hour results, the resulting standard deviation of errors would be even larger. Therefore, our initial estimation equation is worthwhile to adopt.
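The conventional comparison above can be reproduced with a short script: an ordinary least-squares regression of labor hours on the numbers of documents and screen displays, followed by the standard deviation of errors. The data below are synthetic stand-ins, not the case-study records.

```python
import numpy as np

# Synthetic stand-ins for the conventional estimation inputs.
rng = np.random.default_rng(0)
docs = rng.integers(5, 40, size=30).astype(float)      # number of documents
screens = rng.integers(10, 80, size=30).astype(float)  # number of screen displays
hours = 200.0 + 15.0 * docs + 10.0 * screens + rng.normal(0.0, 50.0, size=30)

# General linear regression with an intercept: y = b0 + b1*docs + b2*screens.
X = np.column_stack([np.ones_like(docs), docs, screens])
coef, *_ = np.linalg.lstsq(X, hours, rcond=None)

resid = hours - X @ coef
sigma_e = float(np.sqrt(resid @ resid / len(hours)))   # standard deviation of errors
```

Comparing sigma_e across candidate equations, as the case study does with 256.33 versus 162.55 versus 135.73, is what justifies adopting the initial estimation equation.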
This case study is contributed by Yoshiko Yokoyama.
CASE 94
Table 1
Layout and results of experiment

Column assignment: A: columns 1, 2, 3; B: columns 4, 8, 12; C: 5; D: 6; E: 10; F: 11; G: 13; H: 14; error (e): 7, 9, 15.

No.    x     y
 1    138    54
 2    120    55
 3     95    24
 4     95    41
 5    105    66
 6     90    49
 7    129    58
 8    130    96
 9    130    63
10    124    73
11    102    31
12     95    34
13    104    91
14    123    54
15    119    50
16     95    61
m: estimate of the general mean
a2: estimate of A2 - A1
a3: estimate of A3 - A1
a4: estimate of A4 - A1
b2: estimate of B2 - B1
b3: estimate of B3 - B1
b4: estimate of B4 - B1
c2: estimate of C2 - C1
d2: estimate of D2 - D1
e2: estimate of E2 - E1
f2: estimate of F2 - F1
x: supplementary characteristic
Table 2
Integer variables x1 through x12 for each row of the experiment, together with the supplementary characteristic x and the observation y (the x and y values are the same as in Table 1).
g2: estimate of G2 - G1
h2: estimate of H2 - H1
a: coefficient of the supplementary characteristic x
Table 3
Initial values and values after convergence

              Initial Level                 Level after Fifth Iteration
Parameter     1       2       3             1       2       3
a2            0.0    16.0    32.0          16.0    24.0    32.0
a3            0.0    16.0    32.0           0.0     8.0    16.0
a4            0.0    16.0    32.0          16.0    24.0    32.0
b2          -32.0   -16.0     0.0         -16.0    -8.0     0.0
b3          -32.0   -16.0     0.0         -32.0   -16.0     0.0
b4           -8.0     0.0     8.0          -8.0     0.0     8.0
c2           -8.0     0.0     8.0          -8.0     0.0     8.0
d2           -8.0     0.0     8.0          -8.0     0.0     8.0
e2           -8.0     0.0     8.0          -8.0     0.0     8.0
f2           -8.0     0.0     8.0          -8.0     0.0     8.0
g2           -8.0     0.0     8.0          -8.0     0.0     8.0
h2           -8.0     0.0     8.0           4.0     6.0     8.0
a             0.0     0.8     1.6           0.4     0.6     0.8
Table 4
Comparison between estimation and observation

No.   Observation   Estimation   (Est.) - (Obs.)
 1       54.0          52.6          -1.4
 2       55.0          41.4         -13.6
 3       24.0          39.4          15.4
 4       41.0          34.4          -5.6
 5       66.0          67.4           1.4
 6       49.0          33.4         -15.6
 7       58.0          73.0          15.0
 8       96.0          93.4          -2.6
 9       63.0          49.4         -13.6
10       73.0          75.0           2.0
11       31.0          42.2          11.2
12       34.0          34.4           1.4
13       91.0          75.0         -16.0
14       54.0          54.6           0.6
15       50.0          77.0          27.0
16       61.0          55.4          -5.6
Table 5
SN ratios

Column assignment: A: 1; B: 2; C: 3; D: 4; E: 5; F: 6; e: 7, 8.

SN ratio (dB) by row: 39.79, 42.74, 46.52, 44.33, 46.65, 34.92, 43.75, 32.81, 40.98.
Table 6
Placement of factors in Table 5 as integer-type variables

x1: B2, x2: B3, x3: C2, x4: C3, x5: D2, x6: D3, x7: E2, x8: E3, x9: F2, x10: F3

SN ratio (dB) by row: 39.79, 42.74, 46.52, 44.33, 46.55, 34.92, 43.75, 32.81, 40.98.
m: estimate of (A1)B1C1D1E1F1G1
b2: estimate of B2 - B1
b3: estimate of B3 - B1
c2: estimate of C2 - C1
c3: estimate of C3 - C1
d2: estimate of D2 - D1
d3: estimate of D3 - D1
e2: estimate of E2 - E1
e3: estimate of E3 - E1
f2: estimate of F2 - F1
f3: estimate of F3 - F1
3. Other Applications

Ranges of Initial Values
The initial values of b2, ... , f3 are determined based on engineering knowledge (Table 7). Using the level 2 results of the eighth iteration, m is calculated as

m = ȳ - (b2x1 + b3x2 + ⋯ + f3x10) = 38.71   (4)
In the section above, only the optimum configuration was estimated, but it is possible to estimate any combination. Sometimes there are incomplete (missing) data in an orthogonal experiment, and sequential analysis is conducted followed by analysis of variance. Even in such a case, the estimate for the missing combination(s) can be made by calculating
Table 7
Initial and converged (eighth-iteration) values of the parameters b2 (= B2 - B1), b3, c2, c3, d2, d3, e2, e3, f2, and f3.
the coefficients using experimental regression analysis from the rest of the data.
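The missing-combination idea can be sketched as follows: fit the additive model to the remaining runs by least squares on indicator variables, then predict the absent combination. All factor names and data below are hypothetical, invented only to illustrate the technique.

```python
import numpy as np

# Hypothetical two 3-level factors B and C in an additive model y = m + b_i + c_j.
# One run, (B3, C3), is missing; fit the rest and predict it.
runs = [  # (B level, C level, observation)
    (1, 1, 40.0), (1, 2, 44.0), (1, 3, 48.0),
    (2, 1, 50.0), (2, 2, 54.0), (2, 3, 58.0),
    (3, 1, 60.0), (3, 2, 64.0),            # (3, 3) is the missing combination
]

def row(b, c):
    # Indicator coding like the x_i variables in the text: x1=B2, x2=B3, x3=C2, x4=C3.
    return [1.0, float(b == 2), float(b == 3), float(c == 2), float(c == 3)]

X = np.array([row(b, c) for b, c, _ in runs])
y = np.array([obs for _, _, obs in runs])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # m, b2, b3, c2, c3

missing_estimate = float(np.array(row(3, 3)) @ coef)   # predicted y for (B3, C3)
```

Because the toy data are exactly additive, the prediction for the missing run is recovered exactly; with real data the same calculation gives the least-squares estimate.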
In experiments in manufacturing areas, data are collected according to the layout of the experimental design. In some cases, however, we want to utilize existing data observed prior to experimentation. In most cases there is no orthogonality
Table 8
Comparison between estimation and observation

No.   Observation   Estimation   (Est.) - (Obs.)
1       39.79         38.71         -1.08
2       42.74         42.71         -0.03
3       46.52         46.21         -0.31
4       44.33         43.71         -0.62
5       46.55         45.71         -0.84
6       34.92         35.21          0.29
7       43.75         44.21          0.46
8       32.81         33.21          0.40
9       40.98         42.71          1.73
y = a1 D^a2 H^a3   (7)

If the coefficients a1, a2, and a3 are known, the volume of a tree can be estimated from its chest-height diameter, D, and its height, H. To determine a1, a2, and a3, 52 trees were cut down, and D, H, and the volume, y, were measured (Table 9).
Although it is possible to apply a logarithmic transformation and then estimate the coefficients by linear regression analysis, the nonlinear equation (7) is used without transformation in this case.

Sequential Approximation
Since D and H are a diameter and a height, respectively, D should enter with a power close to 2 and H with a power close to 1. In other words, a2 is around 2 and a3 is
Table 10
Initial coefficients

Coefficient   Level 1    Level 2    Level 3
a1            0.00005    0.00010    0.00015
a2            1.5        2.0        2.5
a3            0.5        1.0        1.5
Figure 1
Response graphs of partial data
around 1. Considering that the unit of D is centimeters, H is in meters, and y is in cubic meters, the unit of a1 is probably around 0.0001. The initial values are set as shown in Table 10.
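The fit of the nonlinear model can be sketched with the same level-narrowing search, starting from the grids of Table 10. This is an illustrative sketch: the volumes below are synthetic, generated with the round values a1 = 0.0001, a2 = 2.0, a3 = 1.0 so the search recovers them exactly, whereas the case study's converged values were a1 = 0.0001096, a2 = 1.8398, a3 = 0.8398.

```python
import numpy as np

# Fit y = a1 * D**a2 * H**a3 without log transformation, by repeatedly
# scanning a 3x3x3 grid of candidate coefficients and halving each grid's
# width around the best combination found.
def fit_power_model(D, H, y, grids, n_iter=9):
    best, best_se = None, np.inf
    for _ in range(n_iter):
        best_se = np.inf
        for a1 in grids[0]:
            for a2 in grids[1]:
                for a3 in grids[2]:
                    resid = y - a1 * D**a2 * H**a3
                    se = float(resid @ resid)      # sum of squared residuals
                    if se < best_se:
                        best_se, best = se, (a1, a2, a3)
        # narrow each grid to half its width around the current best value
        grids = [np.linspace(b - (g[-1] - g[0]) / 4.0, b + (g[-1] - g[0]) / 4.0, 3)
                 for b, g in zip(best, grids)]
    return best, best_se

D = np.array([36.3, 24.5, 26.3, 29.5, 14.5])       # chest-height diameters (cm), Table 9
H = np.array([22.50, 18.83, 17.35, 19.22, 14.37])  # tree heights (m)
y = 0.0001 * D**2.0 * H**1.0                       # synthetic volumes (m^3)

grids = [np.linspace(0.00005, 0.00015, 3),         # a1 levels (Table 10)
         np.linspace(1.5, 2.5, 3),                 # a2 levels
         np.linspace(0.5, 1.5, 3)]                 # a3 levels
(a1, a2, a3), se = fit_power_model(D, H, y, grids)
```

Nine iterations of halving reduce each grid's width by a factor of 2^9, which is why the case study reports convergence to four significant digits after the ninth iteration.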
Converged Values and Their Evaluation
After conducting the sequential approximation, the calculation converged after nine iterations, giving the results shown in Table 11. The volume, y, is given by the following equation using the values at the second level:

y = 0.0001096 D^1.8398 H^0.8398   (8)
Table 9
Observational results for trees

No.   D (cm)   H (m)   y (m³)
 1     36.3    22.50   1.161
 2     24.5    18.83   0.461
 3     26.3    17.35   0.535
 4     29.5    19.22   0.701
 ⋮
52     14.5    14.37   0.124

V_e = (1/10)(0.003² + 0.060² + ⋯ + 0.009²) = 0.000554   (9)

The standard deviation was 0.023, showing good reproducibility of the conclusions for the data that were not used for the equation of estimation, shown in Tables 11, 12, and 13.

log y = log a1 + a2 log D + a3 log H   (10)

If a2 and a3 are known, log a1 is calculated as a constant. The following equation, obtained from the least squares method, does not have any technical contradictions and is considered one of the solutions.

Table 11
Converged coefficients after ninth iteration

Coefficient   Level 1      Level 2      Level 3
a1            0.0001094    0.0001096    0.0001098
a2            1.8379       1.8398       1.8418
a3            0.8387       0.8398       0.8418
Table 12
Data for verification and estimated values

No.   D (cm)   H (m)    y (m³)   ŷ (m³)   ŷ - y
 1     46.3    21.90    1.701    1.698    -0.003
 2     25.8    20.65    0.491    0.551     0.060
 3     27.7    17.60    0.566    0.549    -0.017
 4     23.6    20.05    0.473    0.456    -0.017
 5     21.2    14.50    0.272    0.285     0.013
 6     20.2    17.90    0.337    0.312    -0.025
 7     18.1    17.33    0.264    0.248    -0.016
 8     14.8    13.51    0.128    0.139     0.011
 9     14.4    13.83    0.126    0.135     0.011
10     11.8    12.11    0.075    0.083     0.008
(11)

V_e = (1/10)(0.052² + 0.064² + ⋯ + 0.004²) = 0.000864   (12)
Table 13
Data for verification and estimated values

No.   D (cm)   H (m)    y (m³)   ŷ (m³)   ŷ - y
 1     46.3    21.90    1.701    1.753     0.052
 2     24.8    20.65    0.491    0.555     0.064
 3     27.7    17.60    0.566    0.549    -0.017
 4     23.6    20.05    0.473    0.458    -0.015
 5     21.2    14.50    0.272    0.279     0.007
 6     20.2    17.90    0.337    0.309    -0.028
 7     18.1    17.33    0.264    0.244    -0.020
 8     14.8    13.51    0.128    0.134     0.006
 9     14.4    13.83    0.126    0.130     0.006
10     11.8    12.11    0.075    0.079     0.004
T0   (13)

(1/…) ln{1 − [θ0 n / k(1 − A)]}   (14)
Table 14
Results of observation (extract)

No. 1-3 (Tuesday, cloudy; outdoor temperature 4.0, starting room temperature 18.5): room temperatures 20.0, 21.7, 22.5 at T = 1.00, 2.00, 3.00 hours
No. 4-6 (Wednesday, snow; outdoor 0.0, starting 19.5): 21.0, 22.5, 23.0 at T = 1.00, 2.00, 3.00 hours
Thursday (cloudy; outdoor 0.0, starting 20.0): 20.5, 22.0, 23.0 at T = 0.17, 0.33, 0.50 hours
⋮
No. 76-78 (Monday, clear; outdoor 2.5, starting 21.5): 22.0, 22.8, 23.3 at T = 0.25, 0.50, 0.75 hours
Table 15
Variables, including integer types, for room temperature (extract)

No. 1:  4.0, 20.0, 1.00, 18.5
No. 2:  4.0, 21.7, 2.00, 18.5
No. 3:  4.0, 22.5, 3.00, 18.5
No. 4:  0.0, 21.0, 1.00, 19.5
No. 5:  0.0, 22.5, 2.00, 19.5
No. 6:  0.0, 23.0, 3.00, 19.5
⋮
No. 76: 2.5, 22.0, 0.25, 21.5
No. 77: 2.5, 22.8, 0.50, 21.5
No. 78: 2.5, 23.3, 0.78, 21.5
Table 16
Initial and convergent values of parameters for room temperature

Parameter   Initial Values     After 16th Iteration
a1          0.8, 1.6           0.3, 0.4, 0.5
a2          0.8, 1.6           0.7, 0.75, 0.8
a3          0.8, 1.6           0.7, 0.75, 0.8
a4          0.8, 1.6           0.4, 0.5, 0.6
a5          0.8, 1.6           0.4, 0.45, 0.5
a6          -0.8, -0.4         -0.275, -0.2625, -0.25
a7          -0.8, -0.4         -0.4, -0.35, -0.3
a8          0.4, 0.8           0.7, 0.725, 0.75
a9          1.4, 12            5.0, 5.25, 5.5
a10         0.016, 0.032       0.0, 0.001, 0.002
x8: A
x9: r(y)
x10: T
x11: n
α, the heat transfer coefficient, seemingly has a small value if the heater was not operated on the preceding day. β is the coefficient that calibrates the outer temperature, a constant for a particular building. k is a proportionality constant for the temperature rise with the passage of time; it is affected by the climate.
The equations for α and k are set as

α = a8(1 + a1x1 + a2x2 + a3x3 + a4x4 + a5x5)   (15)

(16)

(17)

where

α = a8(1 + a1x1 + a2x2 + a3x3 + a4x4 + a5x5)   (18)

k = a9(1 + a6x6 + a7x7)   (19)
Table 17
Deviations and total residuals squared after the sixth iteration (extract)

Combination   Deviation        Total Residuals Squared
 1            0.296194191      33.0726635
 2            0.0881743940     28.4722355
 3            0.132098118      31.0850840
 4            0.0372788513     29.2814816
 ⋮
35            0.0146478391     27.7087209
36            0.149078921      29.7143803
The initial parameter values and the values after convergence are shown in Table 16. For the sequential approximation, the data of twenty-three out of twenty-six days were used. Some of the deviations and the total residuals squared are shown in Table 17. Using the second level, the values of α, β, and k are shown in Table 18.
The proportional coefficient of heat transfer (α) for the day after a holiday is about 30% smaller than on other days. The result that the coefficients three and four days after a holiday are larger than those five
Table 18
Values of α, β, and k

Parameter   x                     Relation to Unknown Parameter ai   Value
α                                 a8                                 0.725
α           Day after holiday     a8(1 + a1)
α                                 a8(1 + a2)
α                                 a8(1 + a3)
α                                 a8(1 + a4)
α                                 a8(1 + a5)
β                                 a10                                0.001
k           Clear                 a9                                 5.25
k           Cloudy                a9(1 + a6)
k           Snow                  a9(1 + a7)
Section 3
Taguchi's Methods versus Other Quality Philosophies
39
Quality Management in Japan

Period Following World War II
Many books have already detailed quality management in Japan after the war. In 1946, the GHQ of the Allied Forces summoned experts in statistical research from the United States to study the resources and telecommunication network systems needed for reviving Japan. The investigation significantly stimulated Japanese statisticians and quality management researchers. In 1950, W. Edwards Deming, a U.S. specialist in sampling theory and statistical quality management, instructed Japanese quality management researchers in statistical quality management. His key advice was: (1) product development and production is an endless cycle (shown in Figure 39.1), and by repeating this cycle we can advance our products and the corporate activity that produces them (this cycle is called the Deming cycle in Japan); and (2) statistical procedures can be applied to various stages of corporate management. After his instruction in Japan, various companies put his advice into practice.
Based in part on money Deming donated from the royalties on the textbook used for his lectures, the Deming Prize was founded in 1951 [3]; it is still awarded to companies that accomplish significant results through strenuous quality management. The first companies to win the prize, in 1951, were Yahata Steel, Fuji Steel, Showa Denko, and Tanabe Seiyaku. Since the Deming Prize played an essential role in developing Japan's industry worldwide, it was regarded as a messiah by Japan's industries.
Between the end of the war and the 1960s, statistical quality management methods, developed primarily in Europe and the United States, were studied vigorously, subsequently adopted by many companies, and formulated as the Japanese Industrial Standards (JIS). In this period, companies focused mainly on introducing
Figure 39.1
Deming cycle
statistical methods such as the Shewhart control chart, statistical testing, sampling inspection, and Fisher's design of experiments. This period is often dubbed the SQC (statistical quality control) period.
While the Shewhart control chart [4] (Figure 39.2) contributed considerably to the production process, it brought about another result. The fundamental managerial concept behind the control chart is that we should recognize that any type
Figure 39.2
X̄-R control chart (X̄ chart: UCL = 305.4, X̄ = 291.5, LCL = 277.6; R chart: UCL = 27.3, R̄ = 12.9; horizontal axis: number of lots)
In the 1960s, trade in products such as automobiles, machinery, and electric appliances, all of which had been protected by the Japanese government during Japan's postwar rehabilitation period, was liberalized. Although quality management activity was reinforced to make these products more competitive, it was also necessary to enhance the products' attractiveness as well as their quality. Therefore, all departments from product planning to sales, and all people from top management to manufacturing operators, were encouraged to participate in quality management activity; at the same time, interdepartmental activity was reinforced. This was TQC (later renamed TQM).
Figure 39.3
PDCA cycle
TQC stresses not only the presentation of the PDCA cycle as top management's policy and the deployment of that policy to the middle management in charge of the core of corporate administration, but also companywide promotion of job improvement. To filter job improvement activity down to bottom-level employees, the QC circle was introduced and subsequently spread nationwide. The QC circle, founded by the Japanese Union of Scientists and Engineers in 1962, contributed tremendously to quality improvement.
The other characteristic of TQC was to synthesize quality management procedures by introducing quality assurance and reliability management, which had been pioneered in the United States. The concept of quality assurance, including compensation to customers, was introduced to solve the problems caused by huge, complicated public systems or their failures. The mileage assurance for automobiles was a typical example. Since indemnification incurred by system failures diminishes corporate profit, each company was urged to reduce system failures. As a result, quality assurance systems were established and reliability management was introduced in many companies. For example, reliability management was used in the Apollo program developed by NASA at that time. In Japan it was introduced in large-scale public works such as the Shinkansen bullet train program of the National Railway (currently, JR) and the satellite program of the National Space Development Agency (NASDA), in the automotive industry, and in private joint enterprises with U.S. companies. A reliability management program manages what tasks should be completed, when they should be implemented, which departments should take responsibility, and how problems should be solved, from the product planning stage through use by consumers. Reliability management techniques such as FMEA, FTA, and design review were introduced in the 1970s and 1980s.
At the same time, quality function deployment (QFD) was developed by Shigeru Mizuno and Yoji Akao and adopted immediately by various companies. The quality table had been tried since 1966 and was proposed in the official book published in 1978 [5]. QFD and quality engineering (to be detailed later) are the two major powerful quality management techniques that were born in Japan and spread worldwide.
Since improving and assuring product quality requires improvement and assurance of material and part quality, quality management and assurance were expanded to suppliers of materials and parts. This trend diffused from assembly manufacturers to part suppliers, from part suppliers to material suppliers, and then to all industries. As a consequence, many Japanese products are accepted worldwide. To illustrate, one of every four automobiles driven in the United States is produced by Japanese manufacturers. Ironically, self-regulation of automobile exports to the United States was continued for a long time because of complaints by the U.S. automotive industry. Sony and Panasonic brand products are sold throughout the world, and the electric scoreboard in Yankee Stadium, a shrine of U.S. baseball, was replaced by a Japanese product.
Since quality management activity evolved into TQC and contributed greatly to the prosperity of the entire country, led by the manufacturing industry, TQC became known as a crucial means of managing a corporation in Japan. In the 1970s, nonmanufacturing industries such as the building, electric power, sales, and service industries adopted TQC. In the building industry: Takenaka Corp. and Sekisui Chemical in 1979, Kajima Corp. in 1982, Shimizu Corp. in 1983, Hazama
Diffusion of Quality Management Activity
Stagnation of Quality Management in the Second Half of the 1990s
When the Japanese economy, led by the manufacturing industry, reached a peak in the latter half of the 1980s, a number of companies attained top-rank status in the world. At that time, the income of the Japanese people also became the highest in the world. As a consequence, a "bubble economy" was born whereby, instead of making a profit through strenuous manufacturing activity (or value added), surplus money was invested in land, stocks, golf courses, leisure facilities, or paintings to make money in the short term. The value of major products and services was determined only by supply and demand. The key players in the bubble economy were the financial industry, including securities companies and banks, the real estate industry, the construction industry, and the leisure industry, all of which introduced TQM late or not at all. Since the bubble burst in 1992 and the prices of stocks and land slumped, these industries have been suffering. Unlike the manufacturing industry, these industries had blocked foreign companies from making inroads into the Japanese market.
The tragedy was that many manufacturers rode the wave of the bubble economy and invested vast amounts of money in stocks and land for short-term profit. These manufacturers suffered tremendous losses and consequently lost time and money they could have used to maintain quality management activities.
Since the mid-1990s, Japanese quality management activity has lost momentum. In fact, the quality management movement itself has sometimes been regarded as a bubble, just at the time when quality management is desperately needed.
The first step in quality management is to solve quality problems quickly and thus to improve quality. Most quality management techniques were invented and developed for this purpose. While high-level statistical methods such as control
Quality Function Deployment to Incorporate the Customers' Needs
After trial and error in the second half of the 1960s, quality function deployment was proposed in the Japanese quality management field in 1978 [5]. Proposed by Shigeru Mizuno and Yoji Akao, quality function deployment is a method of reflecting customers' voices in product design and of clarifying the production management items needed to satisfy them. After its release, this method was adopted in the development process by many corporations.
Quality function deployment is implemented in accordance with Figure 39.4. In the first step, by allocating customers' voices to the rows of a matrix and product quality characteristics to the columns, we determine the corresponding product quality
Figure 39.4
Structure of quality function deployment
In 1925, Fisher's Statistical Methods for Research Workers was released [7]. Ten years later, the Faculty of Agriculture of Hokkaido University led the way in applying Fisher's design of experiments (experimentation using a small number of samples) to agricultural production. In 1939, Motosaburo Masuyama applied this technique to medical science.
APPLICATION OF DESIGN OF EXPERIMENTS TO INDUSTRIAL PRODUCTION
The first industrial experiment using an orthogonal array was performed during World War II in the United States, to improve the yield in penicillin production. Shortly after the war, Morinaga Pharmaceutical conducted an experiment under the direction of Motosaburo Masuyama. Thereafter, the design of experiments came to be used on a regular basis in postwar Japan. Motosaburo Masuyama, Genichi Taguchi, and other experts in the design of experiments, as well as applied statisticians, helped various industries conduct a great number of experiments. The number of applications of the design of experiments increased in line with the boom in quality management activity in Japan, thereby

Design of Experiments
At the same time, by taking advantage of research on the columns where interactions emerge, he devised diagrams called linear graphs for assigning interactions, and he recommended multilevel assignment to improve ease of use. In his books he illustrated actual applications to explain how to use the method. His development of easy-to-use orthogonal arrays and of linear graphs for allocating interactions and multiple levels, together with his demonstration of actual applications, boosted the popularity of orthogonal arrays in Japan. Sample arrays are shown in Table 39.1 and Figure 39.5.
While he assigned interactions to linear graphs, Taguchi had a negative opinion about selecting interactions in actual experiments. He insisted that he had devised linear graphs to allocate interactions easily only in response to many engineers' keen interest, but that we should not pursue interactions. He also maintained that since an orthogonal array was used not for the purpose of partial experimentation but for checking the reproducibility of factor effects, main effects should be allocated to the orthogonal array. As a result, other researchers who were using Fisher's design of experiments stood strongly against Taguchi's ideas, and this controversy continued until the 1980s. The dispute derived from the essential difference in objectives between Taguchi's and Fisher's experiments: whereas Taguchi emphasized design optimization, Fisher looked at quantifying the contribution of each factor. Because the controversy was caused partially by Taguchi's avoidance of the conditional "in case of," the dispute gradually calmed down after his concept was systematized as quality engineering in the mid-1980s.
In addition to Taguchi's linear graphs, other assignment techniques to deftly allocate factor interactions were studied. For example, to vie with Western research, techniques such as resolution III, IV, and V designs were released and applied to actual experiments.
BEGINNINGS OF QUALITY ENGINEERING
Table 39.1
Taguchi's orthogonal array (L8)
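The L8(2^7) array can be generated programmatically and its defining property checked. The construction below is one common way to build it from three basic binary digits; published L8 tables may permute rows or columns relative to this sketch.

```python
import numpy as np
from itertools import product

# Standard L8(2^7) orthogonal array with levels coded 1/2. Row i is derived
# from the binary digits of i-1; column j (1..7) takes the parity of the
# digits selected by the bits of j.
def l8():
    rows = []
    for i in range(8):
        bits = [(i >> k) & 1 for k in range(3)]          # 3 basic binary digits
        row = []
        for j in range(1, 8):                            # columns 1..7
            sel = [(j >> k) & 1 for k in range(3)]       # which digits column j mixes
            row.append(1 + (sum(b * s for b, s in zip(bits, sel)) % 2))
        rows.append(row)
    return np.array(rows)

A = l8()
# Orthogonality check: every pair of columns contains each of the four
# level combinations (1,1), (1,2), (2,1), (2,2) exactly twice.
for c1 in range(7):
    for c2 in range(c1 + 1, 7):
        pairs = list(zip(A[:, c1], A[:, c2]))
        assert all(pairs.count(p) == 2 for p in product([1, 2], repeat=2))
```

This balance between every pair of columns is exactly what lets main effects be estimated independently when factors are assigned to the columns.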
Figure 39.5
Taguchi's linear graphs (L8)
design of experiments described the experiment, after which it drew much attention from some Japanese engineers.
Seito had baked tiles in an expensive tunnel kiln. Because of the large positional variability of the temperature inside the tunnel, they had faced much variation in dimensions, gloss, and warping after baking. Although the common countermeasure in quality management was to reduce the temperature variability, they modified the mixture of materials (the design) instead. As a consequence, once a certain mixture was fixed and a certain "secret medicine" was added at 0.5% by weight, the tiles baked evenly, and the variability in dimensions, gloss, and warping was drastically reduced. Since Seito kept this experiment confidential afterward, even Taguchi, who cooperated in the experiment, did not disclose the details.
This experiment was very significant. At that time, there were two known methods of improving quality: (1) finding the causes of variability and managing them, or (2) a feedback (or feedforward) method of making repeated adjustments to eliminate eventual variability. The Seito experiment shed light on a third option: attenuation of causal effects.
From this time, Taguchi began to dedicate much of his research to applying the attenuation of causal effects to other engineering fields. His efforts became known as parameter design (robust design) in the latter half of the 1970s.
CLASSIFICATION OF FACTORS
Quality Engineering
After the Seito experiment in 1953, Taguchi developed the attenuation of causal effects into parameter design by helping to conduct experiments at several companies. His motivation was to apply the design of experiments to incorporating quality at the initial design stage instead of quantitatively analyzing the causes of quality problems after they occurred. He thought that a method of attenuating causal effects must be used to incorporate quality. Taking into account that the mainstream of quality management at that time was problem solving, Taguchi's idea can be seen not as quality management but as technological improvement. This was the main reason that some engineers began to pay attention to it. Figure 39.6 shows the debug model.
Initially, after the conditions regarding environment or use were assigned to the outer array of an orthogonal array and the control factors (design parameters) were allocated (a direct product experiment), whether there existed interactions

Figure 39.6
Development using the debug model (problem selection, design, prototyping, evaluation, cause analysis, countermeasure)
Research was done on the accuracy of measuring instruments in the 1960s and 1970s, where the scale of an SN ratio represented the intensity of the signal relative to the noise. Noises such as variability in the measuring environment conditions or in the measurement methods alter the output values and degrade measurement accuracy. Ideally, we would expect that the output, y, is proportional to the objective value, M, and has no error. Thus, the ideal condition can be expressed as

y = βM   (39.1)

and the input is estimated as

M̂ = y/β   (39.2)

In reality, the output contains a measurement error e(M):

y = βM + e(M)   (39.3)

Thus, to more precisely estimate the input M, we use the following transformed version of equation (39.3):

M̂ = y/β = M + e(M)/β   (39.4)

This implies that when we estimate the input using equation (39.2), the following estimation error exists:

M̂ − M = e(M)/β   (39.5)
Table 39.2
Example of a direct product experiment

Inner-array column assignment: A: 11; B: 4; C: 5; D: 3; E: 9; A×C: 14; A×B: 15; e: 12, 13. Outer array: R (car): 1, 6, 7; V (position): 2, 8, 10. Data K1, K2, K3: distance, for rows 1 through 16.
(39.6)

σ² = Σ[e(M)]² / (n − 1)   (39.7)

we obtain

σ²(M̂) = σ²/β²   (39.8)

The value of equation (39.8) becomes larger as the error increases. Thus, to convert it into an inverse scale, we take its reciprocal:

η = β²/σ²   (39.9)

The value computed by equation (39.9) is called the SN ratio in the field of measurement.

When we analyze data using the SN ratio, we often make use of the following scale, called the decibel value of the SN ratio, obtained by taking the logarithm of equation (39.9) because that equation has an unwieldy ratio form:

η (dB) = 10 log(β²/σ²)   (39.10)
The reason we multiply the logarithm by 10 is to unify this form with the SN ratio used to indicate the magnitude of noise in electric appliances, and to transform the original data into a handier form for analysis.
In the initial parameter design introduced earlier, we determined the optimal levels using plots of the interactions between control and noise factors. However, as the number of control and noise factors increases, the number of graphs also rises. For this reason it is painstaking to find the best level for each control factor in actual analysis. For example, even if the effect of a certain noise diminishes at a certain control factor level, another noise quite often strengthens its effect there. In this case we cannot easily judge which control factor levels are best. Since the SN ratio is a scale representing the entire effect of all noises, we can determine the best levels simply by selecting the control factor levels leading to the largest SN ratio, which makes it quite a handy scale.
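The decibel SN ratio of equations (39.9) and (39.10) can be sketched in a few lines. This is a simplified illustration: the slope β is estimated by least squares through the origin and σ² from the residuals, whereas the formal quality-engineering estimate applies a further correction for the error variance, which is omitted here. The calibration data are invented.

```python
import numpy as np

# Decibel SN ratio for a measurement system y = beta*M + e(M).
def sn_ratio_db(M, y):
    beta = float(M @ y) / float(M @ M)        # least-squares slope through the origin
    resid = y - beta * M                      # e(M), the measurement errors
    sigma2 = float(resid @ resid) / (len(M) - 1)
    return 10.0 * np.log10(beta**2 / sigma2)  # eta (dB) = 10 log(beta^2 / sigma^2)

M = np.array([10.0, 20.0, 30.0, 40.0])        # known inputs (signal levels)
y = np.array([20.1, 39.8, 60.3, 79.9])        # measured outputs, roughly y = 2M
```

Because a single number summarizes the whole noise effect, comparing sn_ratio_db across control-factor levels is all that is needed to pick the best level, as the paragraph above describes.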
Nevertheless, it took years until this SN ratio was understood completely and applied to practical uses. One reason is that equation (39.10) itself has been difficult for ordinary engineers to comprehend. The other is the issue of its statistical background: since it was not known what type of statistical quantity the SN ratio of equation (39.9) was supposed to be, no method of significance testing or confidence interval calculation for the SN ratio could be introduced when it was invented. Although it was afterward proved that if the error is an accidental (repeated) error, the noncentral chi-square distribution is applicable, the issue was not easily solved. Because Taguchi only proposed calculating the SN ratio
η = m²/σ²   (39.11)
Since the examples in the parameter design of an electric circuit were extremely easy to understand, an increasing number of companies introduced
Parameter design in their development process in the 1980s. One typical company
was Fuji Xerox, a joint corporation founded by Xerox Corp. of the United States
and Fuji Film of Japan. In 1985 the company implemented parameter design and
prescribed parameter design procedures for new product development. Other
companies gradually began to adopt the technique as a method of improving
product design [13].
In those days, systematization of parameter design advanced, and templates for procedures were also created so that anyone could use the method. Additionally, by taking advantage of examples of an electric circuit, Taguchi showed not merely a parameter design procedure but also tactics to incorporate quality into product design: two-step optimization. This consists of the following two steps: (1) determine design parameters so as to enhance the SN ratio (i.e., to reduce noise effects), and (2) adjust the output to a target value using design parameters that can change the average output but are unrelated to the SN ratio. Since two-step optimization reduces noise effects first, we can expect to minimize the incidence of quality troubles caused by noises. Consequently, we can reallocate personnel from following up on quality problems to productive design tasks. This change can be appealing to managers in a development department.
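As a concrete illustration of the two steps described above, the following sketch (the measurement data and setting names A1 to A4 are invented for illustration, not from the text) picks the control-factor setting with the largest nominal-the-best SN ratio and then computes a scaling adjustment that moves the mean onto the target without affecting the SN ratio:

```python
import math

# Hypothetical repeated measurements of a target-is-best response under four
# candidate control-factor settings; the data and names A1-A4 are invented.
measurements = {
    "A1": [9.8, 10.3, 9.6, 10.5],
    "A2": [12.1, 12.2, 11.9, 12.0],
    "A3": [7.9, 10.8, 9.1, 12.4],
    "A4": [10.1, 10.0, 9.9, 10.2],
}

def sn_ratio(data):
    """Nominal-the-best SN ratio in decibels: 10*log10(mean^2 / variance)."""
    n = len(data)
    m = sum(data) / n
    var = sum((y - m) ** 2 for y in data) / (n - 1)
    return 10 * math.log10(m * m / var)

# Step 1: pick the setting that maximizes the SN ratio (least noise effect).
best = max(measurements, key=lambda k: sn_ratio(measurements[k]))

# Step 2: move the mean onto the target with a scaling adjustment, which
# changes the average output but leaves the SN ratio unchanged.
target = 10.0
mean_best = sum(measurements[best]) / len(measurements[best])
adjustment = target / mean_best

print(best)                  # "A2": highest SN ratio of the four settings
print(round(adjustment, 4))  # scale factor bringing the mean onto target
```

In practice the adjustment in step 2 would be realized by a control factor known to shift the mean without changing the SN ratio, which is exactly what makes the two steps separable.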
TRANSITION OF PARAMETER DESIGN FROM PRODUCT DEVELOPMENT TO
TECHNOLOGICAL DEVELOPMENT
If we call the 1980s a period of parameter design in product design, the following decade of quality engineering can then be dubbed a period of parameter design in technological development.
Since the late 1990s, Taguchi has begun to propose applying the multiple-dimensional distance (commonly known as the Mahalanobis distance) advocated by Mahalanobis, an Indian statistician. Indeed, this is not a means of quality design; however, it is applicable to a wide variety of applications, such as inspection, prediction, or pattern recognition. Due to space limitations, we do not cover it here, but refer the reader to Part VI in Section 2 of this handbook.
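As a brief sketch of the Mahalanobis distance mentioned above (the reference data are invented for illustration, and the full Mahalanobis–Taguchi system involves additional steps not shown here), the distance measures how far an observation lies from the center of a "normal" group, taking the correlations among characteristics into account:

```python
import numpy as np

# Hypothetical "normal group" observations (rows) on two characteristics;
# the reference data below are invented for illustration only.
normal = np.array([[5.1, 3.5], [4.9, 3.0], [5.4, 3.9],
                   [5.0, 3.4], [4.6, 3.1], [5.0, 3.6]])

mean = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis(x):
    """Distance of observation x from the group center, scaled by the
    group's covariance structure (correlations between characteristics)."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# A point near the group center scores low; an unusual point scores high,
# even when each of its coordinates alone is not extreme.
print(mahalanobis(np.array([5.0, 3.4])) < mahalanobis(np.array([6.5, 2.0])))  # True
```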
References
1. Takanori Yoneyama, 1979. Talk about Quality Management. Tokyo: Japan Union of Scientists and Engineers.
2. Various authors, 1977. Quality Management Handbook, 2nd ed. Tokyo: Japanese Standards Association.
3. Deming Prize Committee. A Guide to the Deming Prize.
4. W. A. Shewhart and W. E. Deming, 1960. Fundamental Concept of Quality Management. Tokyo: Iwanami Shoten.
5. Shigeru Mizuno, Yoji Akao, et al., 1978. Quality Function Deployment: Approach to Company-Wide Quality Management. Tokyo: Japan Union of Scientists and Engineers.
6. Don Clausing, 1994. Total Quality Development. New York: ASME Press.
7. R. A. Fisher, 1951. Design of Experiments. London: Oliver & Boyd.
8. R. L. Plackett and J. P. Burman, 1946. The design of optimum multifactorial experiments. Biometrika, Vol. 33.
9. Genichi Taguchi, 1957. Design of Experiments, Vols. 1 and 2. Tokyo: Maruzen.
10. Genichi Taguchi, 1976. Design of Experiments, Vols. 1 and 2, 3rd ed. Tokyo: Maruzen.
11. Vijayan Nair, 1992. Taguchi's parameter design: a panel discussion. Technometrics, Vol. 34, No. 2.
12. Genichi Taguchi et al., 1992. Technological Development of Transformability. Tokyo: Japanese Standards Association.
13. Genichi Taguchi et al., 1994. Quality Engineering for Technological Development. Tokyo: Japanese Standards Association.
40
Deming and Taguchi's Quality Engineering

40.1. Introduction
40.2. Brief Biography of William Edwards Deming
40.3. Deming's Key Quality Principles
40.1. Introduction
Many modern quality engineering and quality management philosophies originated in the United States throughout the twentieth century. Several people played key roles in the development and implementation of these revolutionary methods for managing organizations. Among the most influential of these people was William Edwards Deming. His philosophies on quality broadened the scope of quality engineering from a tactical nature to a strategic management way of life.
Taguchi's Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury, and Yuin Wu. Copyright © 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
Deming advocated a radical change in management philosophy based on his observations in applying statistical quality principles to common business practices. To make this transformation, people must understand four key elements of change, which he termed profound knowledge.

The layout of profound knowledge appears here in four parts, all related to each other: appreciation for a system, knowledge about variation, theory of knowledge, and psychology.

One need not be eminent in any part nor in all four parts in order to understand it and to apply it. The 14 points for management in industry, education, and government follow naturally as application of this outside knowledge, for transformation from the present style of Western management to one of optimization.

The various segments of the system of profound knowledge proposed here cannot be separated. They interact with each other. Thus, knowledge of psychology is incomplete without knowledge of variation. [2, p. 93]
Deming realized that most scientific management philosophies did not take into account variation in people. To implement the changes he advocated, management needs to understand the psychology of its workers and the variation in psychology among them individually.

A leader of transformation, and managers involved, need to learn the psychology of individuals, the psychology of a group, the psychology of society, and the psychology of change.

Some understanding of variation, including appreciation of a stable system, and some understanding of special causes and common causes of variation are essential for management of a system, including management of people. [2, p. 95]
Deming's 14 Points for Management

Although the 14 points of management are numbered, they are not listed in any specific priority. Each of the points must be implemented within an organization to achieve the highest level of performance.

Deming's Seven Deadly Diseases
So obvious, so fruitless. The vice president of a huge concern told me that he has a strict schedule of inspection of final product. To my question about how they use the data came the answer: The data are in the computer. The computer provides a record and description of every defect found. Our engineers never stop until they find the cause of every defect.

Why was it, he wondered, that the level of defective tubes had stayed relatively stable, around 4 1/2 to 5 1/2 percent, for two years? My answer: The engineers were confusing common causes with special causes. Every fault was to them a special cause, to track down, discover, and eliminate. They were trying to find the causes of ups and downs in a stable system, making things worse, defeating their purpose. [2, pp. 181-182]
References
1. R. Aguayo, 1990. Dr. Deming: The American Who Taught the Japanese about Quality. New York: Simon & Schuster.
2. W. E. Deming, 1994. The New Economics. Cambridge, MA: MIT Press.
3. W. E. Deming, 1982. Out of the Crisis. Cambridge, MA: MIT Press.
41
Enhancing Robust Design with the Aid of TRIZ and Axiomatic Design

41.1. Introduction
41.2. Review of the Taguchi Method, TRIZ, and Axiomatic Design
      Taguchi Method
      TRIZ
      Axiomatic Design
      Comparison of Disciplines
41.3.
41.4. Examples
41.5.
41.6.
41.7.
41.1. Introduction
The importance and benefits of performing robust parameter design advocated by G. Taguchi are well known [1,2]. Many people are familiar with Taguchi's robust parameter design through such terminologies as orthogonal array, signal-to-noise ratio, and control and noise factors. However, a generally ignored but most important task for a successful robust parameter design project is to select an appropriate system output characteristic.

The identification of a proper output characteristic is a key step to having a higher success rate for robust design projects. To identify a proper output characteristic, Taguchi suggests the following guidelines [2,3]:

Identify the ideal function or ideal input/output relationship for the product or process. The quality characteristic should be related directly to the energy transfer associated with the basic mechanism of the product or process.
Select quality characteristics that are continuous variables, as far as possible.
Select additive quality characteristics.
Quality characteristics should be complete. They should cover all dimensions of the ideal function or input/output relationship.
Quality characteristics should be easy to measure.

According to Taguchi, it is important to avoid using quality symptoms, such as reliability data, warranty information, scrap, and percent defective in the late product development cycle and manufacturing environment, as the output characteristic. Improving a symptom may not be helpful in improving the robustness of a system's ability to deliver its functions, which is really the key objective of a robust design project. Understanding a system function, especially the basic function, is the key for robust technology development [1]. Defining the ideal state of the basic function, called the ideal function, is the centerpiece for identifying output characteristics.
The reason for using an energy-related system output response, according to the discussion of Pahl and Beitz [8] and Hubka and Eder [9], is that an engineering system is always designed for delivering its basic function. To deliver its basic function, at least one of three types of transformation must be used (Figure 41.1):

1. Energy: mechanical, thermal, electrical, chemical; also force, current, heat, and so on
2. Material: liquid, gas; also raw material, end product, component
3. Signal: information, data, display, magnitude

For example, in a machining process as an engineering system, the ideal relationship between output and input should be that the output dimensions are exactly the same as the dimensions intended. This type of transformation is the material transformation. Since energy transformation is a very important type of transformation, and since there are many similarities in using these three types of transformation to identify the appropriate output characteristic, energy transformations are used, without loss of generality, as examples throughout this chapter.
Some of the published literature points out that an energy-related characteristic is very helpful for identifying a proper quality characteristic and should be considered. Nair [4] cites Phadke's discussion: finding a system output response that meets all of these guidelines is sometimes difficult or simply not possible with the technical know-how of the engineers involved. In general, it will be quite challenging to identify system output responses that will meet all of these criteria. Taguchi acknowledges this fact and states that the use of Taguchi methods will be inefficient to a certain extent if these guidelines are not satisfied. Revelle et al. [5] cite Shin Taguchi, Verdun, and Wu's work and point out that the selection of a system output response that properly reflects the engineering function of a product or process is the most important and perhaps the most difficult task of the quality engineer. The choice of an energy-related system output response is vital to ensure that the system output response is monotonic.

Figure 41.1. Technical system: conversion of energy, material, and signals (inputs of energy, material, and signal are converted by the technical system into outputs of energy, material, and signal).

According to Box
and Draper [6], the monotonicity property requires that the effects of control factors be both linear and additive. Based on Box and Draper's study, Wasserman [7] concludes that from a response surface methodology perspective, the requirement of the monotonicity property is equivalent to an assumption that the true functional response is purely additive and linear. The reconciliation of Taguchi's viewpoint is possible based on the assumption that energy-related characteristics are used to ensure that interactions are minimal.

Therefore, identification of the key transformation process is very important to understanding and identifying the ideal functions of the engineering system. By the choice of a good functional output, there is a good chance of avoiding interactions [2,3]. Without interactions, there is additivity, or consistency, or reproducibility. Laboratory experiments will be reproduced and research efficiency improved.
However, the foregoing guidelines for selecting an appropriate output characteristic are still very conceptual, and their implementation is highly dependent on the project leader's personal experience. There is very little literature that shows how a system output response can be designed and selected in a systematic fashion. In this chapter we address these shortcomings. With an emphasis on robustness at the early stages of product development, the proposed methodology integrates the concept of Taguchi methods with the aid of TRIZ and axiomatic design principles. The proposed methodology has the following three mechanisms: (1) definition and identification of different system architectures, inputs/outputs, and the ideal function for each of the system/subsystem elements; (2) systematic attempts to facilitate a design that is insensitive to various variations caused by inherent functional interactions or user conditions; and (3) bridging the gap between robust conceptual design and robust parameter design through proper identification and selection of a system/subsystem output response.
41.2. Review of the Taguchi Method, TRIZ, and Axiomatic Design

Taguchi Method
Quality professionals have shifted their quality paradigm from defect inspecting and problem solving to designing quality and reliability into products or processes.
Taguchi's approach to design emphasizes continuous improvement and encompasses different aspects of the design process, grouped into three main stages:

1. System design. This corresponds broadly to conceptual design in the generalized model of the design process. System design is the conceptual design stage in which scientific and engineering expertise is applied to develop new and original technologies. Robust design using the Taguchi method does not focus on the system design stage.

2. Parameter design. Parameter design is the stage at which a selected concept is optimized. Many variables can affect a system's function, and these variables need to be characterized from an engineering viewpoint. The goals of parameter design are to (a) find the combination of control factor settings that allows the system to achieve its ideal function, and (b) remain insensitive to those variables that cannot be controlled. Parameter design provides opportunities to reduce product and manufacturing costs.

3. Tolerance design. Although generally considered to be part of the detailed design stage, Taguchi views this as a distinct stage to be used when sufficiently small variability cannot be achieved within a parameter design. Initially, tolerances are usually taken to be fairly wide, because tight tolerances often incur high supplier or manufacturing costs. Tolerance design can be used to identify those tolerances that, when tightened, produce the most substantial improvement in performance.
Taguchi offers more than techniques of experimental design and analysis. He has a complete and integrated system to develop specifications, engineer the design to specifications, and manufacture the product to specifications. The essence of Taguchi's approach to quality by design is this simple principle: Instead of trying to eliminate or reduce the causes of product performance variability, adjust the design of the product so that it is insensitive to the effects of uncontrolled (noise) variation. The losses incurred to society by poor product design are quantified using what Taguchi calls a loss function, which is assumed to be quadratic in nature. The five principles of Taguchi's methods are:

1. Select the proper system output response.
2. Measure the function using the SN ratio.
3. Take advantage of interactions between control and noise factors.
4. Use orthogonal arrays.
5. Apply two-step optimization.
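The quadratic loss function mentioned above can be illustrated with a small numeric sketch; the dollar figure and tolerance used to calibrate the constant k below are invented assumptions, not values from the text:

```python
def quality_loss(y, target, k):
    """Taguchi quadratic loss: L(y) = k * (y - target)**2.
    Loss grows with any deviation from target, not only outside spec limits."""
    return k * (y - target) ** 2

# Invented calibration: if a deviation of 0.5 from target costs $20 to remedy,
# then k = 20 / 0.5**2 = 80 (dollars per squared unit of deviation).
k = 20 / 0.5 ** 2

print(quality_loss(10.0, 10.0, k))   # 0.0: on target, no loss
print(quality_loss(10.25, 10.0, k))  # 5.0: a small deviation already carries cost
```

The key contrast with a pass/fail specification is that loss accumulates continuously: a unit just inside the tolerance limit is nearly as costly to society as one just outside it.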
TRIZ
TRIZ is a Russian acronym that stands for the theory of inventive problem solving, originated by Genrikh Altshuller [18]. How can the time required to invent be reduced? How can a process be structured to enhance breakthrough thinking? It was Altshuller's quest to facilitate the resolution of difficult inventive problems and to pass the process for this facilitation on to other people. In trying to answer these questions, Altshuller realized how difficult it is for scientists to think outside their fields of reference, because that involves thinking with a different technology or language. In the course of the study of some 400,000 inventions as depicted in patent descriptions, Altshuller noticed a consistent approach used by the best inventors.
Substance-field (S-F) analysis is a TRIZ analytical tool for modeling problems related to an existing technological system. A substance-field is a model of a minimal, functioning, and controllable technical system [11]. Every system is created to perform some functions. The desired function is the output from an object or substance (S1), caused by another object (S2) with the help of some means (a type of energy, F). The general term substance has been used in the classical TRIZ literature to refer to some object. Substances are objects of any level of complexity; they can be single items or complex systems. The action or means of accomplishing the action is called a field. Within the database of patents, there are 76 standard substance-field solutions that permit the quick modeling of simple structures for analysis. If there is a problem with an existing system and any of the three elements is missing, substance-field analysis indicates where the model requires completion and offers directions for innovative thinking. In short, S-F analysis is a technique used to model an engineering problem: it looks at the interaction between substances and fields (energy) to describe the situation in a common language. In cases where the engineering system is not performing adequately, the S-F model leads the problem solver to standard solutions to help converge on an improvement. There are four steps to follow in making the substance-field model [12]:

1. Identify the elements.
2. Construct the model.
3. Consider solutions from the 76 standard solutions.
4. Develop a concept to support the solution.
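Steps 1 and 2 above can be reduced to a toy sketch; this minimal model (the class, names, and completeness check are my own illustration, not a TRIZ tool) captures only the idea that a complete S-F model needs all three elements, and that a missing element marks where innovation is needed:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SFModel:
    """Minimal substance-field triad: S1 is acted upon, S2 acts, F links them."""
    s1: Optional[str]     # substance receiving the action
    s2: Optional[str]     # substance performing the action
    field: Optional[str]  # type of energy/means (mechanical, thermal, ...)

    def missing_elements(self):
        """A missing element marks where the model requires completion."""
        elements = {"S1": self.s1, "S2": self.s2, "F": self.field}
        return [name for name, value in elements.items() if value is None]

# A complete triad (the plastic-molding bend test discussed later in 41.4):
bend_test = SFModel(s1="plastic molding", s2="push bar", field="mechanical")
print(bend_test.missing_elements())  # []: nothing missing, model is complete

# An incomplete model flags where innovative thinking should be directed:
print(SFModel(s1="plastic molding", s2=None, field=None).missing_elements())  # ['S2', 'F']
```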
OTHER TRIZ TOOLS, STRATEGIES, AND METHODS
The TRIZ innovative process consists of two parts: the analytical stage and the synthesis stage. A basic description of some of the instruments/tools is as follows:
1. Ideality concept. Every system performs functions that generate useful and
harmful effects. Useful effects are the desirable functions of the system; harmful
effects are the undesirable effects of the system. When solving problems, one
of the goals is to maximize the useful functions of a system. The ideality
concept has two main purposes. First, it is a law that all engineering systems
evolve to increasing degrees of ideality. Second, it tries to get the problem
solver to conceptualize perfection and helps him or her to break out of
psychological inertia or paradigms.
2. ARIZ. This algorithm of inventive problem solving is a noncomputational
algorithm that helps the problem solver take a situation that does not have obvious contradictions and answer a series of questions to reveal the contradictions, making it suitable for TRIZ. There are four main steps in ARIZ.
3. Contradiction table. This is one of Altshuller's earliest TRIZ tools to aid inventors. It shows how to deal with 1263 common engineering contradictions (e.g., when improving one parameter, another is degraded).
4. Inventive principles. These are the principles in the contradiction table. There
are 40 main principles and approximately 50 subprinciples. These are proposed solution pathways or methods of dealing with or eliminating engineering contradictions between parameters.
5. Separation principles. This technique has been used with great success to deal
with physical contradictions. The most common separation principles can
take place in space, time, or scale.
6. Laws of evolution of engineering systems. Altshuller found through his study of
patents that engineering systems evolve according to patterns. When we understand these patterns or laws and compare them to our engineering system, we can predict and accelerate the advancement of our products.
7. Functional analysis and trimming. This technique is helpful in defining the problem and improving the ideality or value of the system. The functions of a system are identified and analyzed with the intent of increasing the value of the product by eliminating parts while keeping the functions. Functionality is maximized and cost is minimized.
Axiomatic Design
Design is attained by the interactions between the goal of the designer and the method used to achieve the goal. The goal of the design is always proposed in the functional domain, and the method of achieving the goal is proposed in the physical domain. The design process is the mapping or assigning of relationships between the domains for all levels of design.

Axiomatic design is a principle-based design method focused on the concept of domains. The primary goal of axiomatic design is to establish a systematic foundation for design activity by two fundamental axioms and a set of implementation methods [13]. The two axioms are:

Axiom 1: Independence Axiom. Maintain the independence of the functional requirements.
Axiom 2: Information Axiom. Minimize the information content in design.
In the axiomatic approach, design is modeled as a mapping process between a set of functional requirements (FRs) in the functional domain and a set of design parameters (DPs) in the physical domain. This mapping process is represented by the design equation

{FR} = [A]{DP}   (41.1)

where

Aij = ∂FRi/∂DPj   (41.2)

The information content is stated in terms of the probability p of satisfying the functional requirements:

I = log2(1/p)   (41.3)

In the simple case of a uniform probability distribution, equation (41.3) can be written as

I = log2(system range/common range)   (41.4)
where the system range is the capability of the current system, given in terms of tolerances; the common range refers to the amount of overlap between the design range and the system capability; and the design range is the acceptable range associated with the DP specified by the designer. If a set of events is statistically independent, the probability of their joint occurrence is the product of the probabilities of the individual events.

Figure 41.2. Structure of the design matrix: uncoupled, decoupled, and coupled.
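A numeric sketch of equations (41.3) and (41.4) and the independence statement above; the range values are invented for illustration:

```python
import math

def information_content(system_range, common_range):
    """Equation (41.4): I = log2(system range / common range).
    Less overlap between system capability and the design range means
    higher information content, i.e., a worse design."""
    return math.log2(system_range / common_range)

# Invented ranges: the system delivers the DP over 0.8 units, of which
# 0.6 units overlap the designer's acceptable range.
i1 = information_content(0.8, 0.6)

# A second, statistically independent functional requirement.
i2 = information_content(1.0, 0.9)

# Because the probabilities of independent events multiply, the
# information contents of independent FRs simply add.
total = i1 + i2
print(round(i1, 4), round(i2, 4), round(total, 4))
```

A perfect design, where the common range equals the system range, gives I = log2(1) = 0: no extra information is needed to satisfy the FR.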
Comparison of Disciplines
The purpose of such a comparison is to point out the strengths and focuses of different contemporary disciplines: axiomatic design (Suh), robust design (Taguchi), and TRIZ (Altshuller).

A product can be divided into functionally oriented operating systems. Function is a key word and the basic need for describing our product's behavior. Regardless of what method is used to facilitate a design, they all have to start with an understanding of functions. However, what is the definition of function? How is the function defined in these disciplines? Understanding the specific meanings of function (or the definition of function) within each of these disciplines could help us to take advantage of tools to improve design efficiency and effectiveness.
According to Webster's dictionary, function has three basic explanations: (1) the acts or operations expected of a person or thing, (2) the action for which a person or thing is specially fitted or used, or (3) to operate in the proper or expected manner. Generally, people would agree that a function describes what a thing does and that it can be expressed as the combination of noun and verb: for example, creating a seal or sending an e-mail.

In axiomatic design, function is defined as desired output, which is the same as the original definition. However, the relative importance of functional requirements is not identified in the axiomatic design framework. There are no guidelines or termination criteria for functional requirement decomposition. Functional requirements are treated as equally important, which is not necessarily practical and feasible.
In robust design, the definition of function has the same general meaning but with more meaning in terms of the ideal function, which is concerned with what fundamental things a system is supposed to do so that the energy can be transferred smoothly. For example, how can a seal be formed effectively? What is the basic function of an engineered seal system? Therefore, the definition of function in robust design using the Taguchi method may best be defined as energy transformation.

In TRIZ methodology, the definition of function also has the same general meaning, with negative thinking in terms of physical contradictions. Altshuller sought to deliver all system functions simultaneously with maximization of existing resources.

Table 41.1 shows a comparison of axiomatic design, TRIZ, and robust design; Table 41.2 shows the comparison using design axioms. Based on the comparisons, we can see that these three disciplines have their own focuses. They complement each other. The strengths and weaknesses are summarized in Table 41.3.
Table 41.1
Comparison of axiomatic design, TRIZ, and robust design

Approach: Axiomatic design. Function focus: desired output.
Approach: TRIZ. Function focus: basic function. Thought process: attacking contradictions; start with design parameters, then back to functional requirements.
Approach: Robust design. Function focus: energy transformation. Thought process: effective application of engineering strategies; identify the ideal function (ideal relationship); start with a proper system output response, then maximize the system's useful function.
Table 41.2
Comparison of axiomatic design, TRIZ, and robust design with respect to the Independence Axiom and the Information Axiom.
Table 41.3
Summary of strengths and weaknesses of axiomatic design, TRIZ, and robust design

Approach                               Weaknesses
Axiomatic design                       Customer attributes are vague.
Robust design using Taguchi methods    It is a black-box approach.
Figure 41.3. Plastic molding strength test: a push force is applied to the molding.
In Figure 41.4, there is one substance (a piece of plastic molding) at the beginning; in the end there are two substances (a piece of plastic molding and a push bar), a force field, and the bent (not broken) piece of plastic molding. We use the following symbols to represent the initial situation:

S1: piece of plastic molding
S2: push bar
F: push force

Let us now look at how the system works. The mechanical field (F, push force) acts on push bar S2, which, in turn, acts on the piece of plastic molding (S1). As a result, S1 is deformed (bent) to S1'. Graphically, the operation can be represented as in Figure 41.4.
Now can we see the alternative system output characteristic? Can S1' be used to evaluate system behavior instead of push force? Let's validate this idea: Can the evaluation system work if we take away any of the substances? No, the system will fall apart and cease to apply the force to the piece of plastic molding. Does this mean that the evaluation system's operation is secured by the presence of all three of its elements? Yes. This follows from the main principle of materialism: Substance can be modified only by material factors [i.e., by matter or energy (field)]. With respect to technical systems, substance can be modified only as a result of direct action performed by another substance (e.g., an impact) or by a field (e.g., a mechanical field). S1' is modified from S1 and is the output due to the system input force F (push force). The characteristic S1' is closer to the structure of the plastic molding than is the push force.
Figure 41.4. System diagram: the mechanical field F (push force) acts on push bar S2, which acts on plastic molding S1, deforming it to S1' (withstand force).

According to Hubka and Eder [9], to obtain a certain result (i.e., an intended function), various phenomena are linked together into an action chain in such a way that an input quantity is converted into an output quantity. This chain describes the mode of action of a technical system. The mode of action describes the way in which the inputs to a technical system are converted into its output
effects. The mode of action is characterized by the technical system's internal conversion of inputs of material, energy, and information. The output effect is achieved as the output of an action process (through an action chain) internal to the technical system in which the input measures are converted into the effects (output measures) of that technical system. The action process is a direct consequence of the structure of the technical system. Every technical system has a purpose, which is intended to exert some desired effects on the objects being changed in a transformation process. The behavior of any technical system is closely related to its structure.
As a consequence, S1' (the bent S1) in terms of displacement (bent distance) is a better system output response (Figure 41.5). As a matter of fact, the displacement of S1' is proportional to the push force, which enhances the effectiveness of robust parameter design efforts. A robust parameter design case study has been developed successfully in an automotive company using the output characteristic of displacement [14,15].

In a robust design approach using the Taguchi method, the displacement, M, can also be used as an input signal, and the spring force, Y, within the elastic limit, can be used as the system output response. The ideal function will be given by

Y = βM   (41.5)

Y will be measured over the range of displacement (Figure 41.6). The signal-to-noise (SN) ratio will be optimized in the space of noise factors such as environment and aging.
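A sketch of evaluating the ideal function Y = βM from signal-response data follows; the displacement and force values are invented, and the simplified zero-point proportional SN form used here is one common formulation, not necessarily the exact computation used in the cited case studies:

```python
import math

# Invented (signal M = displacement, response Y = force) pairs measured
# under two noise conditions, N1 and N2.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2),   # N1
        (1.0, 1.8), (2.0, 4.2), (3.0, 5.8)]   # N2

# Least-squares slope through the origin: beta = sum(M*Y) / sum(M*M).
beta = sum(m * y for m, y in data) / sum(m * m for m, _ in data)

# Error variance of the deviations from the ideal line Y = beta * M.
n = len(data)
ve = sum((y - beta * m) ** 2 for m, y in data) / (n - 1)

# Dynamic SN ratio (simplified zero-point proportional form), in decibels:
# larger means the response tracks the signal with less noise.
sn = 10 * math.log10(beta ** 2 / ve)
print(round(beta, 3), round(sn, 1))
```

In parameter design, this SN ratio would be computed for each run of an orthogonal array, and the control-factor levels maximizing it would be selected before adjusting the slope β to its target.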
Identification of system output response using S-field models sheds light on the essence of transformation of engineered systems and allows one to use universal technical or engineering terminology rather than customer perceptions, such as percent of failures or good or bad, to evaluate the system's behavior. The key idea is how the material, information, and energy are formed or transferred.

Searching for system output response based on S-field model analysis presents a general formula that shows the direction of identifying the possible system output characteristic. This direction depends heavily on the design intent of the system.
Figure 41.5. Better system output response: displacement measured under push force.

Figure 41.6. Range of displacement: push force Y versus displacement M, with ideal function Y = βM.
41.4. Examples
To illustrate this, let's use as an example a case study on a temperature-rising problem in a printer light-generating system [16]. During the development stage of a printer, it was noticed that the temperature in the light source area was much higher than expected. To solve this problem, there are some possible countermeasures, such as upgrading the resin material to retard flammability or adding a certain heat-resisting device. Since these countermeasures would result in a cost increase, it was decided to lower the temperature. However, trying to lower the temperature creates the need to measure temperature. Such an approach is not recommended, for two reasons. First, the environmental temperature must be controlled during experimentation. Second, the selection of material must consider all three
Figure 41.7. Harmful side effect: S1 (lamp), S2 (fan), F1 = temperature (heat).
Figure 41.8. Rotated S-field: S1 (lamp), S2 (fan), F1 = temperature (heat), F2 = rpm.
Rule 4: Chain of Action and Effect for System Output Response. If an output characteristic conflicts with another output characteristic in terms of the same design parameters, it is necessary to improve the efficiency by introducing a substance or a sub-S-field and to consider the chain of action in the technical system.

Rules 3 and 4 are often used together to identify a proper system output characteristic. For example, in a mechanically crimped product case study [17], pull strength and voltage drop have to be optimized simultaneously (Figure 41.12). But the optimized design parameters are not the same with respect to the two system output responses, so something would have to be compromised.
In this case study, the reliability of complex electrical distribution systems can be dramatically affected by problems in the elements connecting wires to the terminal. Minimum voltage drop is the design intent, and maximum pull strength is required for long-term reliability. In this example, the pull strength is created by a crimping force (F1) acting on the wire (S1) and terminal (S2). The S-field system diagram may be expressed as shown in Figure 41.13.
The pull strength, F2, is not a good system output response for two reasons: first, pull strength has to be traded off against voltage drop; second, pull strength does not take long-term reliability into consideration in terms of gas holes, voids, and so on. According to rule 4, we could introduce an output response and consider the chain of action modes and the chain of effects. What effect can
[Figure 41.9: Changed boundary of the technical system; S1 (lamp), S2 (fan), S3 (rpm), F1 temperature, F2 (rpm), F voltage]

[Figure 41.10: Relationship between voltage and speed (air speed Y versus motor voltage M, curves Y1 and Y2, with the line Y = M)]
we find before the effect of pull strength is formed? When we crimp the wires and terminal, the wires and terminal are compressed into a certain form. Such a form can be measured by compactness. Can compactness be used as a system output response? Let's validate this idea. Compactness is formed before pull strength, and compactness takes gas holes and voids into consideration. What is the relationship between compactness and pull strength? The data show that compactness is strongly related to both pull strength and voltage drop. Therefore, compactness can be used as the system output characteristic.
The S-field diagram can be modified as shown in Figure 41.14.
Identification of the system output response using substance-field analysis is based on the law of energy transformation and the law of energy conductivity. Selecting a proper system output response using S-field analysis is one approach grounded in this energy-transformation thought process. Any technical system consists of three elements: two substances and a field. Identifying the system output response through substance-field analysis furnishes a clue to the direction for identifying a system output response for the purpose of conducting robust parameter design through a dynamic approach. This approach is very helpful when it is not clear how an object or a system is related to the energy transformation, especially in the process of identifying a system output response for design optimization.
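Where such a dynamic approach is adopted, the zero-point proportional ideal function y = βM leads to the dynamic signal-to-noise ratio. The following is a minimal sketch with invented numbers, not the handbook's case-study data:

```python
# Sketch: dynamic S/N ratio for a zero-point proportional ideal function
# y = beta * M. All numbers are invented for illustration; this is not
# the handbook's case-study data.
import math

def dynamic_sn_ratio(signals, responses):
    """Least-squares slope through the origin and S/N = 10*log10(beta^2/sigma^2)."""
    beta = sum(m * y for m, y in zip(signals, responses)) / sum(m * m for m in signals)
    n = len(signals)
    sse = sum((y - beta * m) ** 2 for m, y in zip(signals, responses))
    sigma2 = sse / (n - 1)          # error variance around the fitted line
    return beta, 10 * math.log10(beta ** 2 / sigma2)

# Hypothetical signal levels M (e.g., a compression setting) and responses y
M = [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]
y = [2.1, 1.9, 4.2, 3.8, 6.3, 5.7]
beta, sn = dynamic_sn_ratio(M, y)
print(f"beta = {beta:.3f}, S/N = {sn:.1f} dB")  # → beta = 2.000, S/N = 18.5 dB
```

A larger S/N means the response tracks the signal with less noise-induced scatter; the usual two-step optimization improves S/N first and then adjusts β to the target sensitivity.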
[Figure 41.11: System output response; an S-field of S1 and S2 under F1, extended with substances S3 and S4 and field F2]

[Figure 41.12: Pull strength (N) and voltage drop (mV) versus compression rate, measured on channels CH1 and CH2]

[Figure 41.13: S-field diagram; F1 (crimping force) acting on S1 (wire) and S2 (terminal), producing F2 (pull strength)]
with many variables is very complicated. To prioritize tasks and maintain the proper focus, it is necessary to sort the primary and secondary functional requirements and handle each functional requirement according to its importance. For the purpose of design evaluation and optimization, it is essential to select a proper system output response with which to evaluate and understand an engineered system's or a product's functional behavior. Such system output characteristics (responses) should be related to basic functions. A basic function is a function that transfers material, energy, and information from the input of the system to the output of the system. Obviously, the basic function of a product or process technology is related to its capability (highest probability) to transform input to output in terms of material, information, and energy.
Functional requirements are included in the functional domain. The designer should thoroughly understand problems in the functional domain and should not limit the possible selections without a special reason. Clearly defining the problem is closely related to defining the functional requirements. On the other hand, the designer should select the design elements in the physical domain by specifying the functional requirements. Selecting a system output response characteristic is closely related to the physical domain, reflecting how material, information, and energy are transferred smoothly from input to output in the technical system.
According to axiomatic design principles, the essence of the design process lies in the hierarchies. The designer begins with functional requirements (a top-down approach); and because the functional requirements have different priorities, the designer can categorize them into different hierarchies. The important point in this process is that the functional requirements must be satisfied with specific design parameters. At lower levels, more details should be considered. This can be a very effective way of considering all details of the design. The functional requirements of the higher level must be satisfied through the appropriate design parameters in order for the lower-level functional requirements to be satisfied.
By using an axiomatic approach, the ideas in the initial stages of the design can be brought to bear in a scientific way once the design's zigzagging mappings have been completed according to the design axioms. To evaluate the system's functional behavior, of course, a key system output response has to be identified.
[Figure 41.14: Modified S-field diagram; S3 (compactness) introduced between S1 (wire) and S2 (terminal)]

The lowest level of functional requirement in the axiomatic design framework is not necessarily the best system output response for the purpose of system evaluation, but it is certainly the proper starting point for identifying or developing a proper system output characteristic. Additional creativity in the design can be induced while working through this task. A bottom-up approach is necessary to identify a system output response based on the result of zigzagging mapping.
41.7. Conclusions

In this chapter we suggest an approach for identifying a proper system output response using substance-field analysis along with analysis of the chain of action modes. The approach presented consists of four elements: (1) system output response-focused substance-field model development; (2) change in the scope or boundary of a technical system; (3) efficiency of the system output response-focused substance-field model; and (4) chain of action and effect for the system output response.
The law of energy transformation and the law of energy conductivity guide the identification of a system output response using substance-field analysis. One of the biggest advantages of this approach is that the signal factor comes along with the system output response identified. With proper identification of the signal factor and system output response, the chance of using dynamic robust design is increased, and of course the effectiveness of the robust parameter design is improved.
Compared with other approaches to the identification of a system output response, the approach presented in this chapter provides specific and detailed directions not only to search for but also to create an energy-related system output response. The approach has been applied successfully to several challenging case studies at automotive companies. The findings from these case studies motivated the researchers to bridge the gap between robust conceptual design and robust parameter design.
References

1. G. Taguchi, 1993. Taguchi on Robust Technology Development: Bringing Quality Engineering Upstream. New York: ASME Press.
2. G. Taguchi, 1987. System of Experimental Design. White Plains, NY: Unipub/Kraus International Publications.
3. M. S. Phadke, 1989. Quality Engineering Using Robust Design. Englewood Cliffs, NJ: Prentice Hall.
4. V. Nair, 1992. Taguchi's parameter design: a panel discussion. Technometrics, Vol. 32, No. 2.
5. J. B. Revelle, J. W. Moran, and C. A. Cox, 1998. The QFD Handbook. New York: Wiley.
6. G. E. Box and N. R. Draper, 1988. Empirical Model-Building and Response Surface Methodology. New York: Wiley.
7. G. S. Wasserman, 1997-1998. The use of energy-related characteristics in robust product design. Quality Engineering, Vol. 10, No. 2, pp. 213-222.
8. G. Pahl and W. Beitz, 1988. Engineering Design: A Systematic Approach. New York: Springer-Verlag.
9. V. Hubka and W. E. Eder, 1984. Theory of Technical Systems: A Total Concept Theory for Engineering Design. New York: Springer-Verlag.
10. D. Clausing, 1998. Product development, robust design, and education. Presented at the 4th Annual Total Product Development Symposium, American Supplier Institute.
11. Y. Salamatov, 1999. TRIZ: The Right Solution at the Right Time. The Netherlands: Insytec.
12. J. Terninko, A. Zusman, and B. Zlotin, 1996. Step-by-Step TRIZ: Creating Innovative Solution Concepts. Nottingham, NH: Responsible Management.
13. N. P. Suh, 1990. The Principles of Design. New York: Oxford University Press.
14. Improvement of an aero-craft material casting process. Presented at the 1995 Quality Engineering Forum.
15. M. Hu, 1997. Reduction of product development cycle time. Presented at the 3rd Annual International Total Product Development Symposium, American Supplier Institute.
16. A research on the temperature-rising problem for a printer light-generating system. Presented at the 1996 Quality Engineering Forum.
17. M. Hu, A. Meder, and M. Boston, 1999. Mechanical crimping process improvement using robust design techniques. Robust Engineering: Proceedings of the 17th Annual Taguchi Methods Symposium, Cambridge, MA.
18. G. Altshuller (translated by L. Shulyak), 1996. And Suddenly the Inventor Appeared: TRIZ, the Theory of Inventive Problem Solving. Worcester, MA: Technical Innovation Center.
This chapter is contributed by Matthew Hu, Kai Yang, and Shin Taguchi.
42

42.1. Introduction 1470
42.2. Personnel 1470
42.3. Preparation 1472
42.4. Procedures 1473
42.5. Product 1474
42.6. Analysis 1475
42.7. Simulation 1475
42.8. Other Considerations 1476
42.9. Conclusions 1477
Reference 1477
42.1. Introduction

Successful testing is a crucial component of any robust design project. Even if all other related activities are executed flawlessly, poorly conducted testing and evaluation can result in suboptimal results or failure of the project. Despite its critical nature, execution of the experimental design is sometimes treated as a "black box" process. Successful execution of a robust design experiment depends on attention to the four Ps of testing: personnel, preparation, procedure, and product.
42.2. Personnel

Numerous quality methodologies have received considerable attention in the past decade. Robust design, total quality management, six sigma: to many engineers these are perceived as nothing more than the latest in a never-ending stream of quick-fix fads adopted by management. Each technique has its proponents, appropriate applications, and case history; each also has its skeptics, misapplications, and
anecdotal failures. The interaction between these philosophies and the existing
corporate culture of the adopting organization can either lead to empowering
synergies or nigh-impenetrable barriers to success. Because of the team-oriented
interdisciplinary nature of modern engineering, successful testing in a robust design context cannot ignore the psychology of the people involved in the process.
This issue is of particular importance in organizations that give development responsibility to one functional area and testing and/or validation duties to another. It must also be addressed if the predominant engineering mindset is focused on design-test-fix and testing to targets (or "bogeys"). To obtain full participation and commitment from the team, one must eliminate misconceptions and ensure that each team member has an appropriate level of understanding of robust engineering techniques. It is not necessary for each team member (whether from design, quality, or testing activities) to be an expert in robust design principles; however, each should be familiar enough with the methodology to be able to contribute and apply his or her knowledge and experience to the project.
One positive outcome of the recent attention that quality management has
received is that most people in engineering and testing activities are more attentive
to quality issues and are somewhat familiar with the associated terms, principles,
and methodologies. However, some of this knowledge may be secondhand and not
entirely accurate. Although robust design and engineering techniques (Taguchi
methods) have received considerable attention in the United States, particularly
in the past decade, many engineers and technicians do not have a full understanding of the underlying principles.
One common misconception is that quality is "tested in." This mindset is often found within organizations that separate design and testing activities and use the inefficient design-test-fix cycle to develop products. It is essential to convince engineers in this type of environment that quality is primarily a function of their designs and not something added as an afterthought by testing. Once this hurdle is overcome, one can demonstrate how robust engineering practices can assist them in their design activities.
Another frequent belief is that robust engineering is just about design of experiments. Although classical and Taguchi experimental design techniques have
some similarities, the philosophical differences are considerable. It is important to
educate people clinging to this point of view about the underlying conceptual
differences between the two methodologies.
Some people may be skeptical of any test plan other than one-factor-at-a-time
evaluation. The best way to illustrate the power of robust engineering principles
to adherents of this outlook is to share appropriate case studies with them. The
ever-growing published body of work available about robust engineering applications should yield a number of suitable illustrations that can be used to educate
the team.
Finally, "quality is conformance to specifications" is a mantra that will survive well into the twenty-first century, despite its inherent flaws. People with this type of "goalpost mentality" believe that all product within specification limits is of equal quality; an explanation of the loss function and appropriate case study examples should eliminate this misconception.
Some of the most critical members of the team are those who will be performing the tests. The engineers and/or technicians responsible for conducting the experiment and collecting data must be included in the planning process. Many
42.3. Preparation
Few oversights will limit the success of a quality engineering project more than improper scoping. The focus of the project, whether on a component, subsystem, system, or entire product, must be defined clearly at the outset. The development of P-diagrams, ideal functions, and experimental designs depends on the boundaries specified by the team. The robust engineering process is also heavily influenced by the nature of the issue: research and development activities validating new technologies will have different needs and requirements than will design engineers creating a product for an end user.

Once the region of interest within the design space has been identified, the team must define the ideal function to be studied and construct an appropriate P-diagram. Existing documents can provide a wealth of information of use in this development: warranty information, failure mode and effects analyses (FMEAs), design guides, specifications, and test results from similar products can all contribute to a comprehensive P-diagram. It is difficult to develop an exhaustive list of error states, noise factors, and control factors without examining this type of supporting information. Published case studies and technical papers may also yield insights into similar products or technologies.
Although the primary focus during this exercise must be the operating environment and the customer's use of the product, some attention should be paid to the capabilities of the testing infrastructure to be used. Although it is important to specify an ideal function that is energy-based (to minimize the possibility of interactions between control factors), it does little good to select one that cannot be measured readily. Input from the engineer or technician who will perform the trials can assist the team in developing a feasible concept. Many enabling test technologies have been developed (such as miniature sensors, noncontact measuring techniques, and digital data collection and analysis), and the people who use them are best able to communicate their capabilities and availability accurately. If the input signal or output response cannot be measured easily, appropriate surrogates must be selected. These alternative phenomena should be as closely coupled to the property of interest as possible; second- and third-order effects should be avoided whenever possible.
Once the ideal function has been specified and the P-diagram constructed, the team can begin development of the experimental design. The control factors to be studied should be selected based on the team's engineering knowledge and data from the sources listed above. Techniques such as Altshuller's theory of inventive problem solving (TRIZ/TIPS), structured inventive thinking (SIT), Pugh's total design, or Suh's axiomatic design can be employed in the development of design alternatives and the specification of control factor levels.

Some consideration should be given to resource constraints at this time. Test length, cost, and facility and/or instrument availability may restrict the magnitude
of the overall experiment. It is important to consider these feasibility issues without sacrificing parameters that the team feels are critical. If a meaningful test plan cannot be developed within these limits, it may be necessary to request additional resources or to select an alternative test method.

Testing personnel should be involved in the selection of signal and noise factors. Their insight and knowledge of available test capabilities and facilities can facilitate this phase of test plan construction. Analysis of customer usage profiles and warranty data will also help the team compound appropriate noise factors. Once this phase is finished, the experimental design should be complete, with signal and noise factors identified and the inner and outer arrays established.
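A minimal sketch of such a crossed layout, using an L4(2^3) orthogonal array as the inner array and a compounded two-level noise condition as the outer array (the factor names are placeholders, not from any case study):

```python
# Sketch: crossing an inner (control-factor) array with an outer (noise)
# array. L4(2^3) orthogonal array; factor names A, B, C are placeholders.

L4 = [  # columns: control factors A, B, C at levels 1/2
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]
noise = ["N1", "N2"]  # compounded noise conditions (outer array)

# Each experimental run is one inner-array row under one noise condition
runs = [(row, n) for row in L4 for n in noise]
for (a, b, c), n in runs:
    print(f"A{a} B{b} C{c} under {n}")
print(len(runs), "total runs")  # 4 rows x 2 noise conditions = 8 runs
```

Each column of the L4 is balanced (two runs at each level), and every pair of columns contains all four level combinations, which is what lets main effects be estimated independently.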
42.4. Procedures
As the team develops the experimental design, it must match its expectations with
the capabilities of the test facility. Some robust design case studies showcase experiments done on a tabletop with virtually no test equipment; others describe
studies that required elaborate measurement devices. Sophistication is not a guarantee of success; appropriate instrumentation maximizes the probability of a favorable outcome.
As the team considers the test procedures to be used, its members should discuss the following questions:
What ancillary uses are anticipated for the data?
What measurements will be taken?
What is the desired frequency of measurements?
What measurement system will be used?
What test procedure is appropriate?
When establishing the test plan, it is important to consider what other uses may
be appropriate for the data collected. For example, the development of useful
analytical models depends on correlation with observations and measurements of
actual performance. Data obtained by the team in the course of the robust engineering process may be useful in the development of analytical tools that can speed
future design of similar components or systems. Therefore, collection of data not
directly related to the control and noise factors being studied may be desirable; if
this unrelated information can be recorded without disruption of the main experiment, it may be cost-effective to do so.
Once the superset of data has been defined, the team must determine what measurements will be taken. The test plan should clearly define what data are to be collected; it may be useful to develop a check sheet or data collection form to simplify and standardize the recording of information. Parameters such as technician identity, ambient temperature, relative humidity, and measurement time should be considered for inclusion if they are relevant to the experiment.
Measurement frequency must also be established. If the intent of the study is to examine the functions of components that do not degrade appreciably, simple one-time measurements may be appropriate (e.g., airflow through a variety of duct configurations). However, if the system is expected to degrade over its service life, time and/or cycle expectations should be included in the experimental design as noise factors, and the measurement interval must be specified. Some consideration
42.5. Product
Once test planning is complete and the procedures have been established, the
team must procure the materials to be tested. A number of questions must be
addressed:
Are surrogate parts appropriate?
What adjacent components are required?
Is the testing destructive?
The nature of the study and the noise factors involved will determine whether
surrogate parts are suitable or design-intent components must be fabricated. For
example, if the study is attempting to measure airow in a duct, a proxy may be
appropriate. An existing duct may be reshaped to reflect the desired configurations specified by the experimental design. However, care must be taken not to introduce unintended noise. In this example, if the reshaping process introduced surface finish changes or burrs that affected the airflow, the testing would not accurately reflect the performance of a normally manufactured component. It may be more appropriate to conduct the tests using samples made by stereolithography or three-dimensional printing processes. This would normalize the induced noise, affecting each sample equally, and allow the testing to reflect more accurately the impact of each control factor.
Adjacent components may also be required for the testing. Use of nearby components or systems may simplify fixturing and improve the simulation of the test sample's operating environment. If the surrounding items generate heat or electromagnetic interference, for example, omitting them may cause the test to reflect the actual in-service performance of the components inaccurately. In the case of software testing, processor loading and memory consumption of other executing programs may influence the results. Although the project scope may be limited to a certain subsystem or component, the team must remain aware of the systems with which it interfaces.
If the testing is destructive, the team must consider how to minimize unwanted
variability. Certain types of piece-to-piece variation can be compounded into the
noise factors; if this is not practical, every effort must be made to make the test
samples as uniform as possible to assure test outcomes are a result of changes to
control factors and are not due to component variation.
42.6. Analysis
Once testing is complete, the team must analyze the results. Some postprocessing
of test data may be necessary: for example, frequency analysis of recorded sounds.
All data should be subjected to identical processing to assure that no spurious
effects are introduced by these procedures.
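As an illustration of identical postprocessing, the same windowed magnitude-spectrum pipeline can be applied to every recording. This sketch assumes NumPy is available and uses a synthetic test tone rather than real data:

```python
# Sketch: apply one frequency-analysis pipeline to every recording so no
# spurious differences are introduced by processing. Synthetic data.
import numpy as np

def magnitude_spectrum(x, fs):
    """One-sided magnitude spectrum; identical windowing for all samples."""
    x = np.asarray(x, dtype=float)
    windowed = x * np.hanning(len(x))      # same window for every sample
    spec = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, spec

fs = 1000.0                                # Hz, hypothetical sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 50 * t)          # 50 Hz synthetic test tone
freqs, spec = magnitude_spectrum(tone, fs)
print("dominant frequency:", freqs[np.argmax(spec)], "Hz")
```

Because every sample passes through the same `magnitude_spectrum` call (same window, same FFT length), any difference between spectra reflects the samples themselves, not the processing.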
Commercial products are available for the calculation of signal-to-noise ratios, main effects, and analysis of variance (ANOVA), or the team may elect to compute these values themselves using spreadsheets or other computational aids. Once the analysis is complete, the team will conduct confirmation and verification of the optimal configuration.
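As a hedged sketch of such a spreadsheet-free computation, the following computes smaller-the-better S/N ratios and factor main effects for a hypothetical L4 experiment (all observations invented):

```python
# Sketch: smaller-the-better S/N ratios and main effects for a
# hypothetical L4 experiment (two observations per run, under noise N1/N2).
import math

L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]  # factors A, B, C
data = [  # observations y under the two noise conditions (invented)
    [3.1, 3.5],
    [2.2, 2.6],
    [4.0, 4.8],
    [2.9, 3.3],
]

def sn_smaller_better(ys):
    """S/N = -10 log10(mean of y^2): a larger S/N means less loss."""
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

sn = [sn_smaller_better(ys) for ys in data]

# Main effect of each factor: average S/N at level 1 versus level 2
for f, name in enumerate("ABC"):
    lvl1 = [s for row, s in zip(L4, sn) if row[f] == 1]
    lvl2 = [s for row, s in zip(L4, sn) if row[f] == 2]
    print(name, round(sum(lvl1) / 2, 2), round(sum(lvl2) / 2, 2))
```

The level with the larger average S/N would be the candidate setting for each factor, subject to the confirmation run described above.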
42.7. Simulation
Although they are not considered to be testing in the traditional sense, analytical
models and computer-aided engineering (CAE) have their place in the robust
engineering process.
If a simulation or model has been correlated adequately to real-world performance and behavior, it can be used to conduct the tests required during the optimization phase of robust engineering. An obvious advantage of this approach is that
the use of computational modeling minimizes the costs associated with fabricating
test samples, and typically reduces the time to conduct the tests. The sophistication
of commercially available modeling software continues to increase, and the power
42.9. Conclusions

Testing is the heart of the robust engineering process. It is the fulfillment of the experimental design that allows subsequent analysis to determine optimal configurations and to confirm and validate those conclusions. If testing resources are appropriately managed and utilized, a robust engineering team maximizes its probability of success.
Reference
1. Eugene S. Ferguson, 1992. Engineering and the Mind's Eye. Cambridge, MA: MIT Press.
43

Total Product Development and Taguchi's Quality Engineering
43.1. Introduction 1478
43.2. The Challenge 1478
43.3. Model of New Product Development 1479
43.4. Event Thinking in New Product Development 1480
43.5. Pattern Thinking: Big Ideas Make Big Differences 1481
43.6. Taguchi's System of Quality Engineering 1485
43.7. Structure Thinking in New Product Development 1486
43.8. Synergistic Product Development 1490
43.9. Conclusions 1491
References 1491
43.1. Introduction

Quality in product development began with attempts to inspect quality into products or services in the process domain, the design domain, or the customer domain. The evolution of quality involves a significant change in thinking: from reacting to inspection events to utilizing process patterns in engineering and manufacturing to build quality into the product. The use of pattern- and structure-level tools, such as the Taguchi system of quality engineering, TRIZ, and axiomatic design, provides a foundation for world-class product development.
with bulging inventories of full-sized carryover models. These automobile manufacturers found themselves in a full-blown economic recession, with consumers purchasing significant numbers of smaller, more fuel-efficient imported automobiles. Consumers discovered that these imported vehicles also had superior quality, and the challenge began.
From 1975 to 1979, total auto sales in the United States were 51.8 million, with U.S.-made vehicles accounting for 42.3 million. From 1985 to 1989, 53.3 million vehicles were sold in the United States. Compared to 1975-1979, U.S.-made vehicles dropped 10%, to 38.1 million, of which 3.3 million were from U.S. plants built by importers, while imported vehicles rose 60%, to 15.2 million [1]. In 1984, Louis Ross of Ford Motor Company said [2]: "Imports in 1984 took 26 percent of the U.S. car market. So here in our home market, domestic auto manufacturers and their supply base are playing in the World Series, and in order to score we have to be able to meet or beat the world's best in quality, in cost, and in productivity, and in product content."
This challenge continues today. So far, much of the work in improving quality and productivity has been in implementing lean manufacturing and assembly systems. The players' scorecard is market share [3]. Following is a summary of U.S. automotive market share (%):

                    1988   1995   2001
GM                    36     33     28
Ford                  25     26     23
DaimlerChrysler       15     15     14
Japanese              20     23     27
European               2      2      4
Korean                 2      1      4
The challenge will be won by the companies that can build quality and productivity into the design (product and process) during the product development
process.
[Figure 43.1: Senge's levels of thinking (events, patterns, structure) overlaid on Suh's domain model of product development: customer domain (customer attributes), functional domain (functional requirements), physical domain (design parameters), process domain (process variables)]
to ship. At a company like GE, the cost of quality associated with event thinking in 1996 was estimated to be as high as $10 billion each year in scrap, reworking of parts, correction of transactional errors, inefficiencies, and lost productivity [6]. Problem-solving opportunities associated with event thinking in product development are illustrated in Figure 43.2.
[Figure 43.2: Problem-solving opportunities associated with event thinking in the various design domains: warranty and customer complaints (customer domain), failed verification tests (functional domain), and inspection and scrap/rework (physical and process domains)]
[Figure 43.3: Pattern-level big ideas that make big differences. A 15-step product development flow: (1) form program team; (2) program information center; (3) program input; (4) customers' wants/needs/delights; (5) customer-driven specifications; (6) select product/process concept; (7) concurrent product/process design; (8) prevent failure modes and decrease variability; (9) optimize function in the presence of noise; (10) tolerance design; (11) finalize process/control plans; (12) design verification; (13) design/manufacturing confirmation; (14) launch/mass production confirmation; (15) update corporate memory. Steps 3 to 5 constitute design architecture (define requirements); steps 6 to 12 constitute product/process design (design to requirements).]
[Figure 43.4: Pattern-level big ideas associated with program input. Step 1 (form program team): high-performance teams emerge and grow through systematic application of learning-organization and team disciplines to foster synergy, shared direction, interrelationships, and a balance between intrinsic and extrinsic motivation factors.]
[Figure 43.5: Pattern-level big ideas associated with design architecture (define requirements): steps 3 (program input), 4 (customers' wants/needs/delights), and 5 (customer-driven specifications), leading to system architecture and function, with step 15 (update corporate memory)]
[Figure 43.6: Pattern-level big ideas associated with product/process design (design to requirements): steps 6 (select product/process concept), 7 (concurrent product/process design), 8 (prevent failure modes and decrease variability), 9 (optimize function in the presence of noise), 10 (tolerance design), 11 (finalize process/control plans), and 12 (design verification), with step 15 (update corporate memory)]
[Figure 43.7: Pattern-level big ideas associated with prelaunch and launch. Step 13 (design/manufacturing confirmation): confirm manufacturing capability. Step 14 (launch/mass production confirmation): implementation of robust processes and upfront launch planning, with just-in-time training, will promote a smooth launch and rapid ramp-up to production speed; final confirmation of deliverables enables team learning with improved understanding of cause/effect relationships. Step 15 (update corporate memory): retain what's been learned; create and maintain the capability for locating, collecting, and synthesizing data and information into profound knowledge; communicate all technical, institutional, and social lessons learned in a timely, user-friendly manner.]
Figure 43.8
Examples of pattern thinking for problem prevention in various design domains (customer, functional, physical, and process domains at the structure, patterns, and events levels; entries include understanding customer needs/wants/delights, customer-driven specifications, systems engineering, concurrent engineering, Pugh's concept selection, FMEA, PFMEA, and Taguchi's system of quality engineering)
effective. When pattern-level methods work well, the event outcomes become
world-class.
In the evolution of quality, two very powerful design methods have emerged at
the structural level: axiomatic design and TRIZ (see Figure 43.9).
Axiomatic design is the result of work by Nam Suh [11]. In the late 1970s, he asked himself the question: Can the field of design be scientific? Suh wanted to establish a theoretical foundation for design by identifying the underlying principles, or axioms, that every great design must have in common. He knew that he could never prove that these principles were true, but could he find a set of principles for which no one could find a counterexample? After a decade of work, two principles emerged. From these two principles, theorems and corollaries could be derived that together form a scientific basis for the structure of great design.
The first principle that Suh discovered was the principle of independence. Consider Suh's domain model shown in Figure 43.1. Mapping between domains represents a mapping of "whats" to "hows." The independence axiom states that in great designs, the "hows" are chosen such that the "whats" maintain independence; that is, design parameters must be chosen in such a way that functional requirements maintain independence. For example, consider the water faucet designs shown in Figure 43.10.
The functional requirements for a water faucet are to control the flow rate and the water temperature. As functional requirements, these are independent by definition. A design that obeys the independence axiom will maintain this independence.
The faucet on the left of Figure 43.10 has two design parameters: a hot water knob and a cold water knob. What is the relationship between these design parameters and the functional requirements? When the hot water knob is turned, temperature is affected, but so is flow. Turning the cold water knob also affects temperature and flow. Therefore, this design is coupled, and the independence of the functional requirements has not been maintained. If a consumer has optimized flow rate, then turns one of the knobs to optimize temperature, the flow rate is changed and is no longer optimal. Designs of this type can eventually satisfy customers only by iterating between the two design parameters.
Consider the design on the right of Figure 43.10. This faucet has one handle, and the design parameters are to lift the handle to adjust flow and to move the handle from side to side to adjust temperature. In this design, adjusting temperature does not affect flow, and adjusting flow does not affect temperature. From an axiomatic design point of view, this design is superior because it maintains the independence of the functional requirements.

Figure 43.9
Quality evolution in the various design domains (customer, functional, physical, and process domains at the structure, patterns, and events levels; structure-level methods include axiomatic design, TRIZ, TRIZ directed evolution, and lean manufacturing; event-level entries include warranty and customer complaints, failed verification tests, and inspection and scrap/rework)

Figure 43.10
Water faucet example (design 2: DP1 = handle lifting; DP2 = handle moving side to side)
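The coupled/uncoupled distinction can be made concrete with design matrices, the standard axiomatic-design formalism in which {FR} = [A]{DP}. The coefficient values below are illustrative assumptions, not taken from the text:

```python
# Axiomatic-design view of the two faucets. FR1 = flow rate,
# FR2 = temperature; {FR} = [A]{DP}.

# Design 1: hot and cold knobs. Each knob changes both flow and
# temperature, so every matrix entry is nonzero (coupled design).
A_coupled = [[1.0, 1.0],
             [1.0, -1.0]]

# Design 2: single-handle mixer. Lifting changes only flow;
# side-to-side movement changes only temperature (diagonal,
# hence uncoupled).
A_uncoupled = [[1.0, 0.0],
               [0.0, 1.0]]

def is_uncoupled(A, tol=1e-9):
    """True when the design matrix is diagonal, i.e., each design
    parameter drives exactly one functional requirement."""
    return all(abs(A[i][j]) < tol
               for i in range(len(A))
               for j in range(len(A[i]))
               if i != j)

print(is_uncoupled(A_coupled))    # False
print(is_uncoupled(A_uncoupled))  # True
```

A diagonal matrix is the ideal (uncoupled) case; a triangular matrix would be decoupled, workable if the design parameters are adjusted in the right order.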
Imagine what happens when a designer is working in a situation with a dozen or more functional requirements. If the design is coupled, optimization of one function may adversely affect several other functions. When those functions are fixed, the original function no longer works well. The designer is always tuning and Band-Aiding such a design, and the customer will never be completely happy. However, if the design is created in such a way that each of the functional requirements is handled independently by the design parameters, each function of the design can easily be optimized with pattern-level tools. Taguchi's additive model allows us to reach the same conclusion. Functional independence (Suh) or the absence of control factor interactions (Taguchi's additive model) provides the opportunity to design and manufacture products that will attenuate all forms of unwanted variation, including variation due to conditions of customer use.
The independence axiom can be used to evaluate how good a design will be when the design is still on paper! But suppose that you have two design alternatives that both follow the independence axiom. Which one is better? Suh's second principle states that the better design will minimize the information content necessary for implementation; that is, the better design has a higher probability of meeting the required functional performance [4].
This second principle is related to Taguchi's application of the loss function. When a manufacturing process is composed of parameters with broad, shallow loss functions, implying wide tolerances, the process will function with minimum effort or controls, and the required information content is low. If the process has many parameters with steep, narrow loss functions, implying very tight tolerances, it must be operated with many rules or constraints and must be tuned constantly. Required information content is high.
When a product design, like a copy machine, consists of parameters with broad, shallow loss functions, the design will function with minimum effort or controls over a wide range of conditions (paper type, environmental temperature, etc.), so the information content required is low. On the other hand, if the copy machine has many parameters with steep, narrow loss functions, it must be operated with many rules or constraints, has very high downtime, and must be tuned continually.
could do is to make products (or services) that cause customers to brag to their friends and neighbors. When it comes time to purchase another product, not only do they come back to buy yours, but they bring friends and neighbors with them. The industry challenge will finally be won by those companies that integrate and synergistically implement the big ideas that make big differences into the way they design products and services. People will go out of their way to find such products and services.
43.9. Conclusions
Quality tools and methods have evolved utilizing three stages of thinking (event, pattern, and structure) across four domains associated with product development. Taguchi's system of quality engineering plays a central role at the pattern, or process, level of thinking. Implementation of axiomatic design and TRIZ at the structure level will enhance the effectiveness of pattern-level processes. When the pattern-level processes work to their capability, the customer-related events are world-class. Companies that wish to accelerate development of their own quality programs can utilize the concepts (big ideas) explained in this chapter to understand their current level of evolution and to implement focused actions that can move them quickly past their competition.
ACKNOWLEDGMENT
The authors would like to thank Dr. Carol Vale of the Ford Motor Company for
her assistance in editing this document and making the content more readable.
References
1. Ward's Automotive Yearbook, 1979, 1985, 1990. Southfield, MI: Ward's Communications.
2. Ward's Automotive Yearbook, 1985. Southfield, MI: Ward's Communications.
3. Time, January 14, 2002, p. 43.
4. Nam P. Suh, 2001. Axiomatic Design: Advances and Applications. New York: Oxford University Press.
5. Peter M. Senge, 1990. The Fifth Discipline. New York: Doubleday.
6. M. Harry and R. Schroeder, 2000. Six Sigma. New York: Doubleday.
7. K. Ishikawa, 1988. Quality Analysis. ASQC Conference Transactions, Philadelphia, pp. 423-429.
8. Dan Clausing and John R. Hauser, 1998. House of Quality. Cambridge, MA: Harvard Business School Press.
9. Stuart Pugh, 1991. Total Design. Reading, MA: Addison-Wesley.
10. Genichi Taguchi, 1993. Taguchi on Robust Technology Development: Bringing Quality Engineering Upstream. New York: ASME Press.
11. Nam P. Suh, 1990. The Principles of Design. New York: Oxford University Press.
12. G. Altshuller, 1998. 40 Principles: TRIZ Keys to Technical Innovation. Worcester, MA: Technical Innovation Center.
44. Role of Taguchi Methods in Design for Six Sigma
44.1. Introduction
44.2. Taguchi Methods as the Foundation of DFSS
    Overview of DFSS (IDDOV)
    Overview of Quality Engineering
    Quality Management of the Manufacturing Process
    Linkage between Quality Engineering and IDDOV
44.7. Conclusions
References
44.1. Introduction
Quality engineering (also called Taguchi methods) has enjoyed legendary successes in many Japanese corporations over the past five decades and dramatic successes in a growing number of U.S. corporations over the past two decades. In 1999, the American Supplier Institute Consulting Group [1] introduced the IDDOV formulation of DFSS, which builds on Taguchi methods to create an even stronger methodology that is helping a growing number of corporations deliver dramatically stronger products.
Overview of DFSS (IDDOV)

The American Supplier Institute Consulting Group's formulation of DFSS [1] is structured into five phases:

1. Identify project. Select the project, refine its scope, develop the project plan, and form a high-powered team.
2. Define requirements. Understand customer requirements and translate them into technical requirements using quality function deployment.
3. Develop concept. Generate concept alternatives using Pugh concept generation and TRIZ, and select the best concept using Pugh concept selection methods; conduct FMEA [1]. Develop concept may be conducted at several levels, starting with the system architecture for the entire product; concepts are then developed for the various system elements as needed.
4. Optimize design. Optimize the technology set and concept design using robust optimization, parameter design, and tolerance design. Optimization is conducted at the system element levels.
5. Verify and launch. Finalize the manufacturing process design using Taguchi's on-line quality engineering, conduct the prototype cycle and pilot run, ramp up to full production, and launch the product.
The first two phases, identify project and define requirements, focus on getting the right product. The last two phases, optimize design and verify and launch, focus on getting the product right.

The middle phase, develop concept, is the bridge between getting the right product and getting the product right. Bridging across external customer requirements and internal technical requirements makes development of conceptual designs a difficult and critical step in the product development process. As the bridge between upstream and downstream requirements, conceptual designs should creatively respond to upstream requirements developed through the quality function deployment (QFD) process and to downstream engineering, manufacturing, and service requirements that do not necessarily flow out of QFD. The central technical requirement, that the selected concept and technology set should optimize well to provide good robustness (a high signal-to-noise ratio), leads to the potential loopback from optimization to concept development indicated below.
The five phases, IDDOV, are segmented into 20 steps.

Identify Project
1. Refine charter and scope.
2. Develop project plan (plan, do, check, act).
3. Form high-powered team.
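The phase-and-step breakdown lends itself to a simple lookup structure. The sketch below lists only the identify-project steps shown in this excerpt and deliberately leaves the other phases' step lists empty rather than guessing them:

```python
# IDDOV phases as a lookup table (partial sketch: only the
# identify-project steps appear in this excerpt, so the other
# phases' steps are intentionally left empty).

iddov_phases = {
    "Identify project": [
        "Refine charter and scope",
        "Develop project plan (plan, do, check, act)",
        "Form high-powered team",
    ],
    "Define requirements": [],
    "Develop concept": [],
    "Optimize design": [],
    "Verify and launch": [],
}

def phase_of(step):
    """Return the IDDOV phase containing a given step, or None."""
    for phase, steps in iddov_phases.items():
        if step in steps:
            return phase
    return None

print(phase_of("Form high-powered team"))  # Identify project
```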
Overview of Quality Engineering

Quality engineering (called Taguchi methods in the United States) consists of off-line and on-line quality engineering. (Off-line quality engineering is called robust engineering in the West.) The purpose of robust engineering is to complement traditional engineering methods with robust methods to increase the competitiveness of new products by reducing their cost and improving their quality, starting with research and development prior to specific product applications. A central focus of robust engineering is robust optimization of conceptual designs and technology sets.

Quality engineering is applicable to all aspects of R&D, product development, and manufacturing process development through three phases: system/concept design, parameter design, and tolerance design. The three phases are common to technology, product, and manufacturing applications. Off-line quality engineering (robust engineering) encompasses technology development and product design, which is characterized in more detail below.

Technology Development (R&D Prior to Knowledge about Requirements)

System/concept design. Select the best technology set from all the technology alternatives that might be able to perform the objective function.

Parameter design. Determine the optimal values of the parameters that affect the performance (robustness) of the selected technology set.

Tolerance design. Find the optimal trade-off between the cost of quality loss due to variation of the objective functions and the cost of technology set materials and processes.
Quality Management of the Manufacturing Process

Taguchi's on-line quality engineering differs from the typical American practice of controlling the manufacturing process against tolerance specification limits. Taguchi's methodology focuses on keeping the process close to target rather than merely within control limits. Taguchi observes [5]: "Right now, many American industries use very sophisticated equipment to measure the objective characteristics of their products. However, because some quality managers of these companies do not interrupt the production process as long as their products are within control limits, the objective characteristics of their products are usually uniformly distributed, not normally distributed."

It is clear from studies and common sense that products with all parameters set close to targets are more robust under actual customer usage conditions than products with some portion of parameters set close to control limits. Characteristics set close to control limits are sometimes suggestively called latent defects, since they can rapidly become very real defects (or premature failures) under customer usage.
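Taguchi's observation about uniformly distributed characteristics can be quantified with the quadratic loss function, since expected loss is proportional to variance. For the same tolerance of plus or minus delta, a process spread uniformly across the limits carries three times the expected loss of one held near target with sigma = delta/3 (the k and delta values below are illustrative assumptions):

```python
# Expected quadratic loss E[L] = k * E[(y - m)**2] = k * variance
# for a process centered on target m. Compare a process merely
# kept "within limits" (roughly uniform over the tolerance) with
# one held close to target (normal with sigma = delta / 3).

k = 2.0      # loss coefficient (cost per unit deviation squared)
delta = 3.0  # tolerance half-width; limits at m +/- delta

# Uniform on [m - delta, m + delta]: variance = delta**2 / 3.
loss_uniform = k * delta**2 / 3

# Normal centered on m with sigma = delta / 3: variance = (delta / 3)**2.
loss_targeted = k * (delta / 3) ** 2

print(loss_uniform)                  # 6.0
print(loss_targeted)                 # 2.0
print(loss_uniform / loss_targeted)  # 3.0
```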
Linkage between Quality Engineering and IDDOV

The American Supplier Institute Consulting Group (ASI CG) structure of DFSS expands the system/concept design phase of off-line quality engineering into three phases: identify project, define requirements, and develop concept. Parameter design and tolerance design are aggregated under optimize design. On-line quality engineering pertains to the verify and launch phase of IDDOV. Off-line quality engineering pertains to the following areas:

System/concept design. Identify project; define requirements; develop concept.
Parameter design. Optimize design.
Tolerance design. Perform on-line quality engineering; verify and launch.

The correlation among the three phases of quality engineering and the five phases of DFSS suggests that IDDOV can be regarded as an implementation of quality engineering. The relationship is shown in Figure 44.1.
Figure 44.1
Aligning QE and DFSS (system/concept design spans identify project, define requirements, and develop concept; parameter design aligns with optimize design; tolerance design and on-line quality engineering align with verify and launch)
refined process will be identified as the new robust product development (NRPD) process.

Some of the problems addressed by the new process are illustrated in the style of a case study that is actually an amalgamation of case studies conducted during the 1990s and early 2000s in several large manufacturing corporations. The products ranged across the automotive, office equipment, computer, medical diagnostic equipment, and chemical industries. The synthesized case study sets the stage for developing the NRPD process.
This study is derived from experience with several large corporations. Each study of an existing NPD process was conducted as background for improving the corporation's capability to deliver winning products consistently. The NPD processes in the study tended to focus more on management of the development process than on management of the engineering process. The NPD processes were universally designed to coordinate the myriad of details related to the development of complex products, guided by a plan that contained the types of information identified in the reporter's credo of 5Ws1H: who, what, when, where, why, how. The process starts with why the product is needed for the business, as described in a business case document. The remaining four Ws address information about what needs to be done, where, by when, and by whom. Where identifies physical locations, organizations, and suppliers. When identifies inch-stones and milestones in the plan. How refers to the engineering activities, the weakest element in the process.
Managers and engineers, through an elegant software program, could readily access any element of the NPD process. Drop-down menus identified the engineering methodologies and tools that should be used at any point in the process. The 5Ws1H and go/kill phase gate criteria and process were also available in drop-down menus. A red-yellow-green traffic light evaluation was used on all projects within the program and accumulated to indicate the status of the total program.
Initial management presentations explaining the in-place NPD processes seemed very orderly and, in some cases, very impressive. However, interviews and focus-group-style discussions revealed a surprisingly high level of chaos in the engineering process. A seemingly endless series of build-test-fix cycles were conducted throughout the development process: build and test to see if the product met requirements and specifications, and fix the shortfalls, over and over and over.

Engineers and managers alike identified the popular phase gate model that defines every phase gate as a go/kill decision [6] as a major source of problems. If the model (hardware and/or simulation) designed and constructed for a build-test-fix cycle in a particular phase of NPD did not meet specifications to the degree required by the phase gate criteria, the team members feared that the program would be killed during the next phase gate review. The continuous series of go/kill tests throughout the development process put the kind of pressure on the team that destroyed confidence and fostered "look good" rather than "do good" behaviors. For example, the teams often struggled to appear as if they were meeting requirements and specifications just before each phase gate review. What better way to look good than to build and show off a sophisticated-looking model that appeared to meet specifications? The new demonstration model was often wasted effort, since its purpose was to help the team get through the phase gate rather than to advance the engineering process.
Synthesized Case Study
Conclusions
The engineering activities within NPD are strongly influenced by the specifics of the engineering methodologies that are used. Most in-place NPD processes do not encompass the newer methodologies, such as robust engineering or IDDOV.
Product development involves two very different activities: (1) concept development and (2) detailed product and process design. These activities are usually further partitioned into four to six phases linked by phase gates, as shown in Figure 44.2. The new product development (NPD) process shown in the figure is representative of the many variants. The phase/phase-gate construct is a broadly accepted model for NPD [6].

A new product development process is effectively characterized through four elements.
Developing Linkages between NPD and IDDOV

The relationship of IDDOV to a typical NPD process is indicated in Figure 44.3. The dashed lines indicate the relationship of the IDDOV and NPD phases. The stars represent phase gates. The phase gate indicated by the double star and solid line delineates completion of concept development activities and the beginning of detailed product and process design activities. As mentioned previously, the first four phases of DFSS are crowded into the preconcept and concept phases of NPD.

While IDDOV can significantly improve the capability to deliver competitive products using the existing NPD process, the alignment with IDDOV is at best awkward. The new engineering methodologies contained within IDDOV suggest changes to the phase/phase-gate structure of development processes. Popular NPD processes simply do not capture the benefits that derive from the relatively new capability to optimize concept designs and technology sets. This important new capability provides competitive benefits that must be captured to enjoy leadership. NPD needs to be modified to capture the benefits of contemporary best-practice engineering methodologies. Progressive corporations that refined their NPD process to capture the benefits of IDDOV over the last several years are benefiting from improved competitiveness.
Two-step optimization creates the opportunity to conduct robust optimization (step 1) during concept development and then set the system to target (step 2) during detailed product and process design. Taguchi introduced the notion of optimizing concepts and technology sets in the early 1990s [6]. He states (p. 90): "Research done in research departments should be applied to actual mass production processes and should take into account all kinds of operating conditions. Therefore, research departments should focus on technology that is related to the functional robustness of the processes and products, and not just to meeting customers' requirements. Bell Laboratories called this type of research two-step design methods. In two-step design methods, the functional robustness of products and processes is first maximized, and then these products and processes are tuned to meet customers' requirements."

Figure 44.2
Phases of a typical new product development process (concept development spans the preconcept and concept phases, followed by verify and validate and ramp-up and launch)

Figure 44.3
Linkage between IDDOV and NPD (the IDDOV phases, identify project, define requirements, develop concept, optimize design, and verify and launch, mapped against the NPD phases)
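The two-step method described above can be sketched numerically. The candidate levels and their (mean, standard deviation) responses below are invented for illustration; the nominal-the-best signal-to-noise ratio, 10 log10(mean^2/variance), is the standard Taguchi form:

```python
import math

# Two-step optimization sketch (illustrative data). Step 1: pick
# the control-factor level with the best signal-to-noise ratio.
# Step 2: move the mean onto target with an adjustment (scaling)
# factor; for a true scaling factor, the standard deviation
# scales with the mean, leaving the S/N ratio unchanged.

target = 50.0

# Simulated (mean, standard deviation) responses under noise for
# three candidate levels of a control factor (hypothetical names).
levels = {"A1": (40.0, 8.0), "A2": (55.0, 2.0), "A3": (48.0, 5.0)}

def sn_nominal_best(mean, std):
    """Nominal-the-best S/N ratio in decibels."""
    return 10 * math.log10(mean**2 / std**2)

# Step 1: maximize robustness.
best = max(levels, key=lambda name: sn_nominal_best(*levels[name]))

# Step 2: tune to target with a scaling adjustment factor.
mean, std = levels[best]
scale = target / mean

print(best)             # A2
print(round(scale, 3))  # 0.909
```

Note that step 1 selects A2 even though A3's mean is closer to target: robustness is secured first, and the mean is adjusted afterward.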
This quote correctly suggests that two-step optimization can be brought all the way forward to R&D activities that precede NPD concept development activities. A refined new product development process explicitly identifies optimization up-front in the development process. Figure 44.4 depicts one possible structure for a refined NPD process, supported by IDDOV, that will serve as the foundation for building a winning development process.
Elevating optimization to appear explicitly in what is now properly identified as the new robust product development (NRPD) process is an important refinement. Optimization serves somewhat different purposes in its two appearances in the NRPD phases. Some subsystems may involve new concept designs and/or new technology sets, while other subsystems may involve carryover designs from precedent products.

New concept designs should be optimized (step 1 of the two-step optimization process) in the concept optimization phase and adjusted to target (step 2) either in the concept optimization phase or in the optimization, design, and development phase.

Candidate carryover designs should be assessed for potential robustness prior to accepting them as elements of the concept design. Most carryover designs can be improved by conducting robust optimization experiments. Such carryover designs might be both optimized and set to target in the optimization, design, and development phase.
The differences between NPD and NRPD depicted in Figures 44.3 and 44.4, respectively, have huge implications.

Figure 44.4
Optimization cited explicitly in NRPD (phases: identify project, define requirements, develop concept, optimize design, and verify and launch, with concept development beginning in the preconcept phase)

As noted earlier, optimization is normally regarded as just one of many functions performed during detailed design that are too numerous to be identified at the high level of NPD phases. Elevating optimization to appear explicitly in the phase structure of NRPD is a big change. Pulling optimization forward to become part of the concept development activities is an even bigger change. The ability to optimize concepts and technology sets early in the development process streamlines the NRPD engineering process to significantly reduce the risk of innovation, enhance engineering excellence, reduce the likelihood of canceling a development program, reduce the number of build-test-fix cycles needed, shorten development schedules, and reduce development, product, and warranty costs.
Figures 44.3 and 44.4 provide a reasonable representation of the linkages between IDDOV, NPD, and NRPD, respectively, for relatively simple products or for subsystems within a more complex product. The figures provide a good representation for suppliers of functional subsystems to original equipment manufacturers that deliver complex products. However, a different representation is needed for complex products.

The development of complex products usually involves a number of IDDOV projects at various points within the NRPD process, as depicted in Figure 44.5. The activities in the verify and launch phase of IDDOV differ among the different applications. In the concept development phases, a lowercase v indicates the conduct-confirmation-run step of the robust optimization process. In the detailed design and development phases, V references all of the activities necessary to verify and validate the product and process design, ramp up production, and launch the product. In the verify and validate phase of NRPD, the activities indicated by V? are situational, as suggested by the question mark. Complexity determines how to align IDDOV projects with the product or subsystem under development.
PHASE PROCESSES

Xerox and numerous other corporations have encompassed off-line quality engineering (robust engineering) in their NPD [7]. In this section, the activities within each of the phases of NRPD are described briefly. The full significance of the NRPD process may not become evident until the compatible phase gate processes are introduced in Phase Gate Processes.

In the first three phases of the NRPD process (preconcept, concept, concept optimization), the team focuses on getting the right product concept up-front for a flawless GOOD START transition from concept development activities to detailed product and process design. In the three detailed product and process design
Figure 44.5
IDDOV projects conducted in different phases of NRPD for a complex product (multiple I-D-D-O-v and D-D-O-V? sequences run across the concept development and detailed design phases)
The phase gate and project review processes are closely linked. The project review process is broken out because it can be used at numerous points during the development process beyond the formal phase gates. The phase gate processes need to be designed to be compatible with the engineering and management processes. The ability to establish the technical sturdiness of the combination of a conceptual design and its technology set through robust optimization early in the development process opens up the opportunity to streamline the phase gate processes. The ability to verify the robustness of a concept and technology set dramatically decreases the likelihood of unanticipated downstream technical problems. Hence, the need for demoralizing, downstream go/kill-style phase gates [6] is largely eliminated. Proactive help/go-style phase gates become more effective in the context of early optimization. The intent of help/go-style phase gates is to help teams accelerate progress by resolving issues and demonstrating readiness to move forward into the next phase. Forward-looking preparation for entry into the next phase creates a more positive environment than backward-looking, inquisitional evaluations against exit criteria.
The only go/(not-ready-yet or kill) phase gate resides between concept and technology activities and detailed product and process design activities, as indicated in Figures 44.4 and 44.5. In many corporations, the creative advanced development activities are conducted in different organizations, by different teams, from the product development activities. The two-organization model makes the transition from conceptual development activities to detailed product and process design activities a significant event with the commitment of major resources. In the early 1980s, Xerox identified this event as GOOD START.

The primary criteria for a GOOD START are (1) business readiness and (2) technical readiness. Business readiness requires that completed business plans project financial attractiveness. Technical readiness requires that a strong product concept that meets upstream customer requirements and downstream technical requirements has been selected, and that technical feasibility based on mature technologies has been demonstrated.
The ability to optimize concepts and technology sets prior to transitioning to detailed product and process design supports the effective implementation of the GOOD START discipline. Robust optimization provides solid evidence about the strengths of concept designs and technology sets. This is an enormously important new capability. The engineering and management teams can know whether a new technology or a new concept design will be robust under actual customer usage conditions very early in the development process, before making the major resource commitment necessary to support detailed product and process design. In addition, robust optimization assessment methods can be used to evaluate potential carryover designs prior to making reuse-versus-new-concept-design decisions.
Range of Variation and Sigma Level

Figure 44.7
Improvement due to robust optimization (one robust design project produces a step-function improvement: a baseline S/N of 24 dB at sigma level 3.5 rises to an optimized S/N of 30 dB at sigma level 7.0, a 6-dB gain, with intermediate values of 27 dB and 4.9 across design-cycle changeovers and a later value of 8.5; the chart tracks the LSL/LCL/UCL/USL range of variation over time with a reduced 1.5-sigma shift, per OP/BL = (1/2)^(gain/6))
element of a system, a more representative choice for the initial sigma level might be 2; then the sigma level would increase to 4 for a gain of 6 dB.

Dealing with the famous 1.5-sigma shift can be troublesome. Perhaps the shift was not present prior to optimization. Does it apply to distributions of variation in products as well as processes? Six sigma programs often assume that the 1.5-sigma shift is always present. This assumption does not work in the context of Taguchi methods.
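The relation OP/BL = (1/2)^(gain/6) cited with Figure 44.7 can be checked numerically. Since the sigma level scales inversely with the range of variation, the figure's move from 3.5 to 7.0 and the subsystem example's move from 2 to 4 both follow from a 6-dB gain (the helpers below are a sketch, not code from the handbook):

```python
# Range-of-variation reduction implied by an S/N gain:
# OP/BL = (1/2) ** (gain / 6). A 6-dB gain halves the range of
# variation, and the sigma level doubles because it varies
# inversely with the range.

def range_ratio(gain_db):
    """Optimized/baseline (OP/BL) range of variation."""
    return 0.5 ** (gain_db / 6)

def improved_sigma(initial_sigma, gain_db):
    """Sigma level after the range shrinks by range_ratio."""
    return initial_sigma / range_ratio(gain_db)

print(range_ratio(6.0))          # 0.5
print(improved_sigma(3.5, 6.0))  # 7.0 (Figure 44.7)
print(improved_sigma(2.0, 6.0))  # 4.0 (subsystem example)
```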
MANUFACTURING PROCESS PERSPECTIVE

Both the similarities and the differences between Figure 44.7 and the Juran trilogy [8] shown in Figure 44.8 are striking. A small difference is the choice of metrics: the vertical axis in the Juran trilogy is the cost of poor quality, which, of course, is related directly to the S/N and sigma level (added to the figure) used in Figure 44.7. The most striking difference is the slight tilt in the control charts in Figure 44.7 that results from continuous improvement actions. As the range of variation is reduced over time, the sigma level increases to create the slight tilt.
Juran's trilogy contains three steps for improving quality:

1. Quality planning. Establish plans to improve quality to meet requirements.
2. Quality control. Get the process in control before trying to improve it.
3. Quality improvement. Make the improvement and maintain process control at the new level.

The Juran trilogy represents one action in a sequence of continuous improvement actions. The axis of the control chart remains horizontal between improvement actions. The 1.2 improvement indicated in Figure 44.8 is normally regarded as a large improvement for a manufacturing process but is a small step compared to the gain typically achieved using robust optimization. The tilted lines in Figure 44.7 represent a series of Juran trilogies (i.e., the continuous improvement that results from a sequence of small step-function improvements smoothed into lines).
Figure 44.8
Juran's quality trilogy with representative sigma levels added, showing quality planning, quality control (the original zone of quality control, with sporadic peaks and chronic waste), and quality improvement. (Adapted from Joseph M. Juran and A. Blanton Godfrey, Juran's Quality Handbook, McGraw-Hill, New York, 1999, Section 2.7.)
Gain (dB)    Reduction in Range of Variation (%)
0.1           1
1.0          11
1.5          16
2.0          21
2.5          25
3.0          29
6.0          50
10.0         69
18.0         88
24.0         94
(The entries are consistent with the range-reduction relation: reduction = [1 − (1/2)^(gain/6)] × 100%.)
Figure 44.9
Poor concepts cannot be recouped. Before and after optimization relative to the specification limits (LSL/USL): a good concept can be optimized, will not require expensive tolerance control in manufacturing, and is robust under real-world usage conditions; a poor concept cannot be optimized, cannot be corrected by detailed design, and will create persistent problems in manufacturing and reliability.
Taguchi asserts that robust optimization reduces the failure rate (FR) by a factor of (1/2)^(gain/3), leading to the equation FR_optimized = FR_initial × (1/2)^(gain/3).
Again, assuming an average gain in the SN ratio of 6 dB, the failure rate is reduced by a factor of 4. As with the range of variation, the failure rate of a poor concept may not be improved significantly by optimization (i.e., the gain realized is substantially less than 1.5 dB). In Figure 44.10, the traditional bathtub curve is used to provide a familiar graphical representation of the impact of optimizing a good concept design.
Many teams build in a safety factor by using the more conservative gain/6 rather than the gain/3 relation to project improvements in failure rates. In Figure 44.10, the more conservative gain/6 relation is used, which yields a factor-of-2 reduction in failure rate rather than the more aggressive gain/3 relation, which would yield a factor-of-4 improvement. Whether to use gain/3 or gain/6 to make failure-rate predictions is a matter of judgment.
The (1/2)^(gain/3) or (1/2)^(gain/6) factor applies to the failure rate, not to initial quality, reliability, or durability, which are identified in Figure 44.10 with portions of the bathtub curves. Reliability is calculated from the failure rate. Condra [9] defines reliability as quality over time, a definition that Juran repeats in his handbook [8]. For a constant failure rate typical of electronic systems, reliability is R(t) = e^(−FR·t). For failure rates that change over time, FR(t), typical of mechanical systems and certain electronic components, reliability changes with time according to R(t) = exp[−∫ FR(t) dt], where the integral runs from 0 to t. Reliability predictions are typically made using the Weibull function.
In either case, changes in the failure rate [normally called the hazard rate, h(t), in reliability engineering] are related by an exponential function to changes in reliability. This brief discussion of reliability is presented only to clarify how the failure-rate bathtub curves in Figure 44.10 relate to Taguchi's (1/2)^(gain/3) factor.
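The failure-rate and reliability relations above are straightforward to apply numerically. A sketch assuming a constant failure rate, with illustrative names and an arbitrary example rate:

```python
import math

def optimized_failure_rate(fr_initial, gain_db, divisor=3.0):
    """FR_optimized = FR_initial * (1/2)**(gain/divisor); use divisor=3
    for Taguchi's factor, divisor=6 for the conservative projection."""
    return fr_initial * 0.5 ** (gain_db / divisor)

def reliability(fr, t):
    """Constant-failure-rate reliability: R(t) = exp(-FR * t)."""
    return math.exp(-fr * t)

fr0 = 0.01  # illustrative failures per unit time (not from the chapter)
print(optimized_failure_rate(fr0, 6.0))               # 0.0025 (factor-of-4 reduction)
print(optimized_failure_rate(fr0, 6.0, divisor=6.0))  # 0.005 (factor-of-2, conservative)
```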
Figure 44.10
Good concepts optimized to deliver improved initial quality, reliability, and durability: before- and after-optimization bathtub curves of failure rate versus time, spanning initial quality (manufacturing startup), reliability (random failures), and durability (wear-out).
Innovation
Calculations of field cost start with the now familiar relation for failure rate. To illustrate, a reasonable assumption adjusts the failure-rate formula by taking an exponential factor between gain/6 and gain/3 (e.g., gain/4.5):
FR_new = FR_initial × (1/2)^(gain/4.5)
This relationship provides the basis for making financial projections of service cost. Service cost is inclusive of warranty cost and customer cost.
Define: annual service cost = (no. failures/year) × (average cost per failure)
Then:
annual service cost_new = annual service cost_initial × (1/2)^(gain/4.5)
Assuming an initial annual service cost of $500, a gain of 6 dB reduces the annual service cost to about $200:
annual service cost_new = $500 × (1/2)^(6/4.5) ≈ $200
This simple means of projecting changes in service cost (warranty and customer cost) provides better estimates than more complex alternative methods.
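As a check on the arithmetic, the service-cost projection can be computed directly; the function below is an illustrative sketch of the chapter's formula:

```python
def projected_service_cost(cost_initial, gain_db, divisor=4.5):
    """Annual service cost projection:
    cost_new = cost_initial * (1/2)**(gain/divisor), with the
    compromise exponent gain/4.5 between gain/6 and gain/3."""
    return cost_initial * 0.5 ** (gain_db / divisor)

# The chapter's example: $500 with a 6-dB gain projects to roughly $198,
# which the text rounds to $200.
print(round(projected_service_cost(500.0, 6.0), 2))
```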
Product Cost
Taguchi suggests that when the gain in SN ratio is sufficiently large, half of the gain should be allocated to reducing cost. Consider, for example, an automotive rotor/brake pad subsystem. Optimization motivated by the need to reduce audible noise yielded a 20% improvement in performance in addition to eliminating the audible noise problem. The 20% performance improvement was captured as a 20% weight reduction that translated into about a 12% cost reduction.
Loss to Society
Minimize loss to society by using tolerance design to balance internal company cost and external field cost. External field cost may be warranty cost or customer cost. For example, starting with current internal cost, balance company and customer costs by conducting tolerance design using the quality loss function. The meth-
Figure 44.11
System architecture block diagram: the diagram digs down from the product and its product problem, through the problem area, to the root cause.
fix process. The case study involves a type C chronic, recurring problem. Because the source of the problem was found to be in the software, the choice of control factors was not constrained by sunk investments in tooling. The freedom to select the appropriate control factors facilitated a complete resolution of the problem.
The case study, entitled "Optimization of a Diesel Engine Software Control Strategy," by Larry R. Smith and Madhav S. Phadke [12], was presented at the American Supplier Institute Consulting Group's 19th Annual Taguchi Symposium, held in 2002. Genichi Taguchi selected the paper for the Taguchi Award as the symposium's best case study.
The case study addresses a long-standing, intermittent problem in a diesel engine power train control system called hitching under cruise control and ringing in idle mode. Hitching and ringing refer to oscillations in engine rpm. The intermittent character and relatively rare occurrence of the problem had confounded engineers in numerous previous attempts to solve it.
The essence of this six sigma style case study was the unusual use of engineering methodologies in a DMAIC-like find-and-fix process to eliminate completely a long-standing and expensive problem that many previous efforts had failed to fix. TRIZ was used to analyze the problem and identify software as the location of the cause. Robust optimization was used to improve performance by eliminating the problem permanently. Elimination of the hitching/ringing problem saved $8 million in warranty cost.
A fascinating element of the case study that is not emphasized in the paper is the method used to develop a noise strategy. Several engineers had the patience to drive several trucks for extended periods under a variety of conditions. Different driving profiles constitute important noise factors because they cause major changes in the load on the engine. The noise factors were selected from driving conditions that the team of drivers eventually found would initiate hitching. Six noise factors were identified, relating to acceleration and deceleration in 1-mph increments for different speed bands and to rolling hills at different speeds.
The case study illustrates the potential power of an engineering-oriented DMAIC process for improving existing products. The notion is straightforward: the DMAIC process works best when engineering methods are used to improve existing engineered products and statistical methods are used to improve existing statistical processes. The heart of the new robust six sigma process would be robust engineering, just as it is in IDDOV. The new process would have the power to change "find and fix, fix, fix" for a chronic, recurring problem into "find and eradicate the problem and prevent its recurrence," provided that the selection of control factors is not overly constrained.
In the conclusions section, the authors state:
Many members of this team had been working on this problem for quite some time. They believed it to be a very difficult problem that most likely would never be solved. The results of this study surprised some team members and made them believers in the Robust Design approach. In the words of one of the team members, "When we ran that confirmation experiment and there was no hitching, my jaw just dropped. I couldn't believe it. I thought for sure this would not work. But now I am telling all my friends about it and I intend to use this approach again in future situations."
44.7. Conclusions
The dramatic benefits of robust optimization and IDDOV are characterized through quantification of improvements in terms of familiar metrics such as sigma level, product failure rate, and financial projections, assuming an average gain in SN ratio of 6 dB.
In addition, four new developments are introduced:
1. The IDDOV formulation of design for six sigma is shown to be a powerful implementation of Taguchi's quality engineering.
2. A new NPD process is described that takes advantage of IDDOV and two-step optimization.
3. An explicit formula for projecting the reduction in warranty cost due to optimization is presented.
4. Innovation is fostered by the ability to optimize concepts and technology sets to reduce risk prior to a GOOD START commitment in NPD.
Finally, the tantalizing notion of creating a new robust DMAIC process focused on improving existing engineered products is introduced. Although this chapter focused only on product applications, nearly all of the notions and methodologies discussed are equally applicable to business, engineering, manufacturing, and service processes.
ACKNOWLEDGMENTS
The incredibly talented members of the ASI CG family made this chapter possible.
I am especially indebted to my friends and colleagues, Subir Chowdhury, Shin
Taguchi, Alan Wu, and Jim Wilkins for support, suggestions, and corrections. We
are all indebted to Genichi Taguchi for his continuing stream of many outstanding
contributions focused on helping corporations and engineers reduce the loss to
society. I am also indebted to my good friend, Larry R. Smith, for continuing
support over many years and for reading this manuscript and making valuable
suggestions.
References
1. Subir Chowdhury, 2002. Design for Six Sigma: The Revolutionary Process for Achieving Extraordinary Profits. Chicago: Dearborn.
2. Genichi Taguchi, Subir Chowdhury, and Shin Taguchi, 2000. Robust Engineering: Learn How to Boost Quality While Reducing Cost and Time to Market. New York: McGraw-Hill.
3. Yuin Wu and Alan Wu, 2000. Taguchi Methods for Robust Design. New York: ASME Press.
4. Stuart Pugh, 1990. Total Design: Integrated Methods for Successful Product Engineering. Harlow, Essex, England: Addison-Wesley.
5. Genichi Taguchi, 1993. Taguchi on Robust Technology Development: Bringing Quality Engineering Upstream. New York: ASME Press.
6. Robert G. Cooper, 1993. Winning at New Product Development: Accelerating the Process from Idea to Launch. Reading, MA: Addison-Wesley.
7. How to bring out better products faster. Industrial Management & Technology issue of Fortune, November 23, 1998.
8. Joseph M. Juran and A. Blanton Godfrey, 1998. Juran's Quality Handbook, 5th ed. New York: McGraw-Hill, p. 2.7.
9. Lloyd W. Condra, 1993. Reliability Improvement with Design of Experiments. New York: Marcel Dekker.
10. Mikel Harry and Richard Schroeder, 2000. Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations. New York: Doubleday.
11. Subir Chowdhury, 2001. The Power of Six Sigma: An Inspiring Tale of How Six Sigma Is Transforming the Way We Work. Chicago: Dearborn.
12. Larry R. Smith and Madhav S. Phadke, 2002. Optimization of a diesel engine software control strategy. Proceedings of the 19th Annual Taguchi Symposium, sponsored by the American Supplier Institute Consulting Group.
Appendixes
Taguchis Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
Appendix A
Orthogonal Arrays and Linear
Graphs: Tools for Quality
Engineering*
Introduction 1526
Orthogonal Arrays 1526
Assigning Interactions between Two Columns 1527
Some General Considerations 1527
Linear Graphs 1527
Linear Graph Symbols 1527
Suggested Steps in Designing Experiments 1528
General Considerations 1529
Arrays 1530
L4(2^3) 1530
L8(2^7) 1530
L12(2^11) 1531
L16(2^15) 1532
L32(2^31) 1536
L64(2^63) 1550
L9(3^4) 1564
L18(2^1 × 3^7) 1564
L27(3^13) 1565
L54(2^1 × 3^25) 1568
L81(3^40) 1570
L16(4^5) 1586
L32(2^1 × 4^9) 1587
L64(4^21) 1588
L25(5^6) 1591
L50(2^1 × 5^11) 1592
L36(2^11 × 3^12) 1593
L36(2^3 × 3^13) 1594
L9(221) 1595
L27(322) 1596

* This material is from the appendixes of Genichi Taguchi's System of Experimental Design (White Plains, NY: Unipub/American Supplier Institute, 1987).
Introduction
This appendix has been produced to provide a convenient and easily accessible reference source for students and practitioners of quality engineering as developed by Dr. Genichi Taguchi. It contains the most commonly used orthogonal arrays and applicable linear graphs. The specific arrays and graphs were compiled by Dr. Taguchi and Shozo Konishi, and the American Supplier Institute wishes to acknowledge the importance and value of their contribution to quality engineering.
Experience has shown that students and practitioners in plants need a separate and convenient collection of the arrays and graphs to save time when developing experiments. The American Supplier Institute has, therefore, produced this appendix with the invaluable assistance of Professor Yuin Wu, Shin Taguchi, and Dr. Duane Orlowski.
Orthogonal Arrays
In design of experiments, orthogonal means balanced, separable, or not mixed. A major feature of Dr. Taguchi's utilization of orthogonal arrays is the flexibility and capability for assigning a number of variables to them. An even more important feature, however, is the reproducibility or repeatability of the conclusions drawn from small-scale experiments, in research and development work for product/process design, as well as in actual production and in the field.
Orthogonal arrays have been in use for many years, but their application by Dr. Taguchi has some special characteristics. At first glance, Dr. Taguchi's methods seem to be nothing more than an application of fractional factorials. In his approach to quality engineering, however, the primary goal is the optimization of product/process design for minimal sensitivity to noise.
In Dr. Taguchi's methodology, the main role of an orthogonal array is to permit engineers to evaluate a product design with respect to robustness against noise, and the cost involved. It is, in effect, an inspection device to prevent a poor design from going downstream.
Quality engineering, according to Dr. Taguchi, focuses on the contribution of factor levels to robustness. In contrast with pure research, quality engineering applications in industry do not look for cause-and-effect relationships. Such a detailed approach is usually not cost-effective. Rather, good engineers should be able to bring their skills and experience to designing experiments by selecting characteristics with minimal interactions in order to focus on pure main effects.
Dr. Taguchi stresses the need for a confirmatory experiment. If there are strong interactions, a confirmatory experiment will show them. If predicted results are not confirmed, the experiment must be redeveloped.
The convention for naming arrays is La(b^c), where a is the number of experimental runs, b the number of levels of each factor, and c the number of columns in the array.
Arrays can have factors with many levels, although two- and three-level factors are most commonly encountered. An L8 array, for example, can handle seven factors at two levels each, under eight experimental conditions.
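The naming convention can be illustrated by constructing a small two-level array directly. The sketch below uses one standard construction (not necessarily the run order printed in this appendix): the entry for run r, column c comes from the parity of the bitwise AND of r and c, which yields balanced, mutually orthogonal columns:

```python
def two_level_array(k):
    """Construct a two-level orthogonal array with 2**k runs and
    2**k - 1 columns (k=2 gives an L4(2^3), k=3 an L8(2^7)).
    Entry for run r, column c is 1 or 2 from the parity of popcount(r & c)."""
    runs = 2 ** k
    return [[bin(r & c).count("1") % 2 + 1 for c in range(1, runs)]
            for r in range(runs)]

L4 = two_level_array(2)  # 4 runs x 3 columns
# Each column is balanced: levels 1 and 2 each appear in half the runs.
for j in range(3):
    column = [row[j] for row in L4]
    assert column.count(1) == column.count(2) == 2
```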
The L12, L18, L36, and L54 arrays are among a group of specially designed arrays that enable the practitioner to focus on main effects. Such an approach helps to increase the efficiency and reproducibility of small-scale experimentation as developed by Dr. Taguchi.
Assigning Interactions between Two Columns
As noted above, Dr. Taguchi recommends that experiments be designed with emphasis on main effects, to the extent possible. Because there will be situations in which interactions must be analyzed, Dr. Taguchi has devised specific triangular tables for certain arrays to help practitioners assign factors and interactions in a non-arbitrary way that avoids possible confounding.
There are six such triangular matrices or tables in this appendix, for arrays L16(2^15), L32(2^31), L64(2^63), L27(3^13), L81(3^40), and L64(4^21). These triangular tables have been placed on pages facing their respective arrays for convenient reference.
The method for using them is as follows. Let's use array L16(2^15) and its triangular table as an example. When assigning an interaction between two factors, designated A and B, factor A is assigned to column 1 of the array and factor B to column 4. Their interaction effect is assigned to column 5: where row (1) and column 4 of the triangular table intersect is the number 5, indicating that the interaction A×B is assigned to column 5. Likewise, for factors assigned to columns 5 and 4, their interaction would be assigned to column 1.
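For the two-level arrays in their standard column numbering, the triangular tables can be generated mechanically: each column index, written in binary, labels the column, and the interaction of columns i and j falls in column i XOR j. A short sketch (the function name is illustrative):

```python
def interaction_column(i, j):
    """For standard two-level Taguchi arrays (L4, L8, L16, ...), the
    interaction between columns i and j lands in column i XOR j,
    because column k is labeled by the binary representation of k."""
    return i ^ j

# Reproduces the L16(2^15) triangular-table example from the text:
print(interaction_column(1, 4))  # 5
print(interaction_column(5, 4))  # 1
```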
Some General Considerations
As conditions permit, focus on main effects. The most frequently used arrays are the L16, L18, L27, and L12.
Linear Graphs
One of Dr. Taguchis contributions to the use of orthogonal arrays in the design
of experiments is the linear graph. These graphs are pictorial equivalents of triangular tables or matrices that represent the assignment of interactions between
all columns of an array. Linear graphs are, thus, a graphic tool to facilitate the
often complicated assignment of factors and interactions, factors with different
numbers of levels, or complicated experiments such as pseudo factor design to an
orthogonal array.
A linear graph is used as follows:
1. Factors are assigned to points of the graph.
2. An interaction between two factors is assigned to the line segment connecting the two respective points.
Linear Graph Symbols
The special symbols and the groups they represent are used for split-unit design. For all arrays except L32(2^31) and L64(2^63), we use the following designations: Group 1, Group 2, Group 3, Group 4. For L32(2^31) and L64(2^63), we use the following designations: Groups 1 and 2, Group 3, Group 4, Group 5.
SN analysis:
SN response tables
SN response graphs
ANOVA
SN ANOVA
7. Interpret results.
Select optimum levels of control factors. (For nominal-the-best, use mean response analysis in conjunction with SN analysis.)
Predict results for the optimal condition.
8. ALWAYS, ALWAYS, ALWAYS run a confirmatory experiment to verify predicted results. If results are not confirmed or are otherwise unsatisfactory, additional experiments may be required.
General Considerations
It is generally preferable to consider as many factors (rather than many interactions) as economically feasible for the initial screening. Remember that the L12 and L18 arrays are highly recommended, because interactions are distributed uniformly to all columns.
Arrays
L4(2^3)
L8(2^7)
L12(2^11)
L16(2^15)
L32(2^31)
L64(2^63)
L9(3^4)
L18(2^1 × 3^7)
L27(3^13)
L54(2^1 × 3^25)
L81(3^40)
L16(4^5)
L32(2^1 × 4^9)
L64(4^21)
L25(5^6)
L50(2^1 × 5^11)
L36(2^11 × 3^12)
L36(2^3 × 3^13)
L27(322)
Appendix B
Equations for On-Line
Process Control
Feedback Control by Quality Characteristics (Continuous Variables; Chapter 24)
Loss Functions and Equations
Feedback Control by Quality Characteristics (Using Limit Samples or Gauges; Chapter 24)
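The equation layout in this appendix did not survive extraction intact. As a point of reference only — a reconstruction stated as an assumption, not a transcription of the printed appendix — the widely published form of Taguchi's feedback-control relations for a measurable quality characteristic uses A (loss when the tolerance Δ is exceeded), B (checking cost), C (adjustment cost), n (checking interval), u (mean adjustment interval), D (adjustment limit), and l (time lag), with subscript 0 marking current values:

```latex
L \;=\; \frac{B}{n} \;+\; \frac{C}{u}
  \;+\; \frac{A}{\Delta^{2}}
  \left[\frac{D^{2}}{3}
  + \left(\frac{n+1}{2} + l\right)\frac{D^{2}}{u}\right]
\qquad \text{(loss per unit)}
```

with the corresponding optimal checking interval, adjustment limit, and projected mean adjustment interval

```latex
n \;=\; \sqrt{\frac{2\,u_{0}B}{A}}\;\frac{\Delta}{D_{0}},
\qquad
D \;=\; \left(\frac{3C}{A}\cdot\frac{D_{0}^{2}}{u_{0}}\cdot\Delta^{2}\right)^{1/4},
\qquad
u \;=\; u_{0}\,\frac{D^{2}}{D_{0}^{2}}.
```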
Appendix C
Orthogonal Arrays and Linear
Graphs for Chapter 38
L4(2^3)
L8(2^7)

Interaction between Two Columns
Column  2  3  4  5  6  7
(1)     3  2  5  4  7  6
(2)        1  6  7  4  5
(3)           7  6  5  4
(4)              1  2  3
(5)                 3  2
(6)                    1
L12(2^11)
L16(2^15)

Interaction between Two Columns
Column  2  3  4  5  6  7  8  9 10 11 12 13 14 15
(1)     3  2  5  4  7  6  9  8 11 10 13 12 15 14
(2)        1  6  7  4  5 10 11  8  9 14 15 12 13
(3)           7  6  5  4 11 10  9  8 15 14 13 12
(4)              1  2  3 12 13 14 15  8  9 10 11
(5)                 3  2 13 12 15 14  9  8 11 10
(6)                    1 14 15 12 13 10 11  8  9
(7)                      15 14 13 12 11 10  9  8
(8)                          1  2  3  4  5  6  7
(9)                             3  2  5  4  7  6
(10)                               1  6  7  4  5
(11)                                  7  6  5  4
(12)                                     1  2  3
(13)                                        3  2
(14)                                           1
L9(3^4)
L18(2^1 × 3^7)
L27(3^13)
L27(3^13) (Continued)
Interaction between Two Columns [three-level triangular table: each pair of columns yields two interaction columns; e.g., columns 1 and 2 interact in columns 3 and 4]
L36(2^11 × 3^12)*

* By introducing columns 1′, 2′, 3′, and 4′ in place of columns 1, 2, ..., 11, one obtains L36(2^3 × 3^13).
L54(2^1 × 3^25)
Glossary
additivity A concept relating to the independence of factors. The effect of additive factors occurs in the same direction (i.e., they do not interact).
adjusting factor A signal factor used to adjust the output.
analysis of variance (ANOVA) Analysis of the impacts of variances caused by factors. ANOVA is performed after the decomposition of the total sum of squares.
array A table of rows and columns used to make the combinations of an
experiment.
base space A normal group selected by specialists to construct Mahalanobis
space.
CAMPS Computer-aided measurement and process design and control system.
classified attribute The type of quality characteristic that is divided into discrete classes rather than being measured on a continuous scale.
compounding of noise factors A strategy for reducing the number of experimental runs by combining noise factors into a single compounded factor for experimentation.
confirmation experiment A follow-up experiment run under the conditions defined as optimum by a previous experiment. The confirmation experiment is intended to verify the experimental predictions.
confounding A condition in which experimental information on variables cannot
be separated. The information becomes intermingled with other information.
control factor A product or process parameter whose values can be selected and
controlled by the design or manufacturing engineer.
data analysis The process performed to determine the best factor levels for reducing variability and adjusting the average response toward the target value.
decomposition of variation The decomposition of collected data into the sources
of variation.
degrees of freedom The number of independent squares associated with a given
factor (usually, the number of factor levels minus 1).
design of experiments Experimental and analysis methods for constructing mathematical models that relate the response to the variables.
direct product design A layout designed to determine the interaction between any control factor assigned to an inner orthogonal array and any noise factor assigned to the outer array.
distribution A way of describing the output of a common-cause system variation
in which individual values are not predictable but in which the outcomes as a
group form a pattern that can be described in terms of its location, spread, and
shape. Location is commonly expressed by the terms of the mean or median,
and spread is commonly expressed in terms of the standard deviation or the
range of a sample. Shape involves many characteristics, such as symmetry and
peakedness. These are often summarized by using the name of a common distribution, such as the normal, binomial, or Poisson.
downstream quality Also called customer quality, such as car noise, vibration, or
product life. It is the type of quality easily noticed by the customer. This is the
type of quality to avoid using for quality improvement.
dummy treatment A method used to modify an orthogonal array to accommodate
a factor with fewer levels.
dynamic characteristic A characteristic that expresses the functionality, transformation, and adjustability of the output of a system or subsystem.
dynamic operating window The gap between the total reaction rate and the side-reaction rate in chemical reactions. The greater the gap, the more desirable the result.
environmental noise factors Sources of variability due to environmental conditions when a product is used.
error sums of squares The total of the sums of squares that are considered as
residual.
expected value of a variance The mean of an infinite number of variances estimated from collected data.
factor A parameter or variable that may affect product or process performance.
factorial effect The effect of a factor or an interaction or both.
feedforward control A way to compensate for the variability in a process or product by checking noise factor levels and sending signals that change an adjustment factor.
finite element method A computer-aided method used in many areas of engineering and physics to determine an approximate solution to problems that are continuous in nature, such as in the study of stress, strain, heat transfer, or electrical fields.
fractional factorial layout An experimental design that consists of a fraction of
all factor-level combinations.
generic function The relationship between the objective output and the means
as the signal that engineers use in order to generate the objective output.
go/no-go specifications The traditional approach to quality, which states that a part, component, or assembly that lies between upper and lower specification limits meets quality standards.
hi-low table A table showing the magnitude of effects in descending order.
ideal function The ideal relationship, under the standard condition, between the output and the signal used by the users.
indicative factor A factor that has a technical meaning but has nothing to do
with the selection of the best level.
inner array A layout or orthogonal array for the control factors selected for experimentation or simulation.
interaction A condition in which the impact of a factor or factors on a quality characteristic changes depending on the level of another factor (i.e., the interdependent effect of two or more factors).
larger-the-better characteristic The type of performance parameter that gives improved performance as the value of the parameter increases (e.g., tensile strength, power, etc.). This type of characteristic belongs to the category of quality characteristics that has infinity as the ideal value and has no negative value.
least squares method A method to express the results by a function of causes, using existing data in such a way that the residual error of the function is minimized.
linear equation The equation showing the case when the input/output regression
line does not pass through the origin.
linearity A measure of the straightness of a response plot. Also the extent to
which a measuring instruments response is proportional to the measured
quantity.
loss function See quality loss function.
Mahalanobis distance A statistical tool introduced by P. C. Mahalanobis in 1930
to distinguish the pattern of one group from others.
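Concretely, for k variables with reference-group mean vector μ and covariance matrix Σ, the squared distance of an observation x is D² = (x − μ)ᵀΣ⁻¹(x − μ); in MTS practice it is commonly divided by k so that observations in the normal group average about 1. A minimal sketch for k = 2 (the reference data here are illustrative, not from the handbook):

```python
def mahalanobis_sq_2d(x, mean, cov, scale_by_k=True):
    """Squared Mahalanobis distance for two variables.

    cov is ((s11, s12), (s12, s22)); in MTS the distance is commonly
    divided by k (here k = 2) so the normal group averages about 1."""
    (s11, s12), (_, s22) = cov
    det = s11 * s22 - s12 * s12
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    # (x - mu)^T * inv(cov) * (x - mu), with inv(cov) written out for 2x2
    d2 = (s22 * dx * dx - 2 * s12 * dx * dy + s11 * dy * dy) / det
    return d2 / 2 if scale_by_k else d2

# Illustrative "normal" reference group (k = 2 variables).
normal = [(5.0, 3.1), (5.2, 3.0), (4.8, 2.9), (5.1, 3.2), (4.9, 3.0)]
n = len(normal)
mx = sum(p[0] for p in normal) / n
my = sum(p[1] for p in normal) / n
sxx = sum((p[0] - mx) ** 2 for p in normal) / (n - 1)
syy = sum((p[1] - my) ** 2 for p in normal) / (n - 1)
sxy = sum((p[0] - mx) * (p[1] - my) for p in normal) / (n - 1)
cov = ((sxx, sxy), (sxy, syy))

d2_member = mahalanobis_sq_2d((5.0, 3.0), (mx, my), cov)
d2_outlier = mahalanobis_sq_2d((7.0, 1.0), (mx, my), cov)
print(d2_member, d2_outlier)  # the outlier's distance is far larger
```

A point resembling the reference group scores near or below 1; a point outside the pattern scores far higher, which is what makes the distance usable as an abnormality scale.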
Mahalanobis–Gram–Schmidt process (MTGS) One of the processes used to compute a Mahalanobis distance.
Mahalanobis space (MS) A normal space, such as a healthy group in a medical
study.
Mahalanobis–Taguchi system (MTS) A measurement method for recognizing patterns using a Mahalanobis distance.
manufacturing tolerance The assessment of tolerance prior to shipping. The manufacturing tolerance is usually tighter than the consumer tolerance.
mean The average value of some variable.
mean-squared deviation (MSD) A measure of variability around the mean or target value.
mean-squared error variance The variance considered as the error.
mean sum of squares The sum of squares per unit degree of freedom.
measurement accuracy The difference between the average result of a measurement with a particular instrument and the true value of the quantity being
measured.
measurement error The difference between the actual value and measured value
of a measured quantity.
measurement precision The extent to which a repeated measurement gives the
same result. Variations may arise from the inherent capabilities of the instrument, from changes in operating condition, and so on. See also repeatability
and reproducibility.
midstream quality Also called specified quality, such as dimension or specification. This is the type of quality to avoid using for quality improvement.
multiple correlation Correlation by multiple variables.
noise factor Any uncontrollable factor that causes product quality to vary. There
are three types of noise: (1) noise due to external causes (e.g., temperature,
humidity, operator, vibration, etc.); (2) noise due to internal causes (e.g., wear,
deterioration, etc.); and (3) noise due to part-to-part or product-to-product
variation.
nominal-the-best characteristic The type of performance characteristic
that has an attainable target or nominal value (e.g., length, voltage, etc.).
nondynamic operating window A gap between the maximum and the minimum
functional limits. A wide gap means a more robust function.
number of units The total of the squares of the coefficients in a linear equation.
It is used to calculate the sum of squares.
objective function The relationship between the objective characteristic and the
signal used by the users.
off-line quality control Activities that use design of experiments or simulation to
optimize product and process designs. These activities include system design,
parameter design, and tolerance design.
on-line quality control Activities that occur at the manufacturing phase and include use of the quality loss function to determine the optimum inspection
interval, control limits, and so on. On-line quality control is used to maintain
the optimization gained through off-line quality control.
origin quality Also called functional quality. This is the type of quality used to
improve the robustness of product functions.
orthogonal array A matrix of numbers arranged in rows and columns in such a
way that each pair of columns is orthogonal to each other. When used in an
experiment, each row represents the state of the factors in a given experiment.
Each column represents a specic factor or condition that can be changed from
experiment to experiment. The array is called orthogonal because the effects
of the various factors in the experimental results can be separated from each
other.
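As an illustration (not drawn from the handbook's arrays), the smallest two-level orthogonal array, L4(2³), holds three two-level factors in four runs; in every pair of columns each of the four level combinations appears exactly once, which is the balance property that lets factor effects be separated:

```python
from itertools import combinations

# L4(2^3): 4 runs x 3 two-level columns (levels coded 1 and 2).
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

def is_orthogonal(array):
    """Check pairwise balance: in every pair of columns, every
    level combination occurs equally often."""
    cols = list(zip(*array))
    for a, b in combinations(range(len(cols)), 2):
        pairs = list(zip(cols[a], cols[b]))
        counts = {p: pairs.count(p) for p in set(pairs)}
        # For two-level columns: all 4 combinations, all equally often.
        if len(counts) != 4 or len(set(counts.values())) != 1:
            return False
    return True

print(is_orthogonal(L4))  # True: each of the 4 level pairs occurs once
```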
orthogonal polynomial equation An equation using orthogonal polynomial
expansion.
outer array A layout or orthogonal array for the noise factors selected for
experimentation.
parameter design The second of three design stages. During parameter design,
the nominal values of critical dimensions and characteristics are established to
optimize performance at low cost.
P-diagram A diagram that shows control, noise, and signal factors along with
output response.
percent contribution The pure sum of squares divided by the total sum of squares
to express the impact of a factorial effect.
point calibration Calibration of a specic point in measurement.
pooled error variance The error variance calculated by pooling the smaller factorial effects.
preliminary experiment An experiment conducted with only noise factors to determine direction and tendencies of the effect of these noise factors. The results
of the preliminary experiment are used to compound noise factors.
pure (net) error sum of squares The sum of squares after adding the error terms
originally included in the regular sums of squares of factorial effects.
pure (net) sum of squares The sum of squares of a factorial effect after subtracting the error portion.
quality The loss imparted by a product to society from the time the product is
shipped.
quality characteristic A characteristic of a product or process that defines product
or process quality. The quality characteristic measures the degree of conformity
to some known standard.
quality engineering A series of approaches to predict and prevent the problems
that might occur in the marketplace after a product is sold and used by the
customer under various environmental and applied conditions for the duration
of designed product life.
quality function deployment (QFD) The process by which customer feedback is
analyzed and results are incorporated into product design. The QFD process is
often referred to as determining the voice of the customer.
quality loss function (QLF) A parabolic approximation of the quality loss that
occurs when a quality characteristic deviates from its best or target value. The
QLF is expressed in monetary units: the cost of deviating from the target increases quadratically the farther it moves from the target. The formula used to
compute the QLF depends on the type of quality characteristic being used.
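For a nominal-the-best characteristic with target m, tolerance Δ, and loss A at the tolerance limit (the cost of repair or replacement), the loss is L(y) = k(y − m)² with k = A/Δ². A minimal sketch with illustrative numbers:

```python
def quality_loss(y, target, tol, cost_at_tol):
    """Nominal-the-best quadratic loss L(y) = k*(y - m)^2, k = A/tol^2."""
    k = cost_at_tol / tol ** 2
    return k * (y - target) ** 2

# Illustrative: target 10.0 V, tolerance +/-0.5 V, $3 loss at the limit.
print(quality_loss(10.0, 10.0, 0.5, 3.0))   # 0.0  (on target: no loss)
print(quality_loss(10.5, 10.0, 0.5, 3.0))   # 3.0  (at the limit: full cost)
print(quality_loss(10.25, 10.0, 0.5, 3.0))  # 0.75 (halfway: a quarter of the cost)
```

Halving the deviation quarters the loss, which is the quadratic behavior the definition describes: loss accrues continuously, not only when a specification limit is crossed.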
reference-point proportional equation The equation showing the case when a
signal level is used as a reference input.
repeatability The variation in repeated measurements of a particular object with
a particular instrument by a single operator.
reproducibility The state whereby the conclusions drawn from small-scale laboratory experiments will be valid under actual manufacturing and usage conditions (i.e., consistent and desirable).
response factor The output of a system or the result of a performance.
response table A table showing level averages of factors.
robust design A process within quality engineering of making a product or process insensitive to variability without removing the sources.
robust technology development An approach to maximize the functionality of a
group of products at the earliest stage, such as at research-and-development
stage, and to minimize overall product development cycle time.
robustness The condition used to describe a product or process design that functions with limited variability despite diverse and changing environmental conditions, wear, or component-to-component variation. A product or process is robust when it has limited or reduced functional variation, even in the presence of noise.
sensitivity The magnitude of the output per input shown by the slope of the
input/output relationship.
sensitivity analysis Analysis performed to determine the mean values of experimental runs used when the means are widely dispersed.
sequential approximation A method of providing figures to fill the open spots in an incomplete set of data to make data analysis possible.
signal factor A factor used to adjust the output.
signal-to-noise (SN) ratio Any of a group of special equations that are used in
experimental design to find the optimum factor-level settings that will create a
robust product or process. The SN ratio originated in the communications field,
in which it represented the power of the signal over the power of the noise. In
Taguchi methods usage, it represents the ratio of the mean (signal) to the
standard deviation (noise). The formula used to compute the SN ratio depends
on the type of quality characteristic being used.
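For the common static cases the SN ratio (in decibels) is: nominal-the-best, η = 10 log₁₀(ȳ²/s²); smaller-the-better, η = −10 log₁₀((1/n)Σy²); larger-the-better, η = −10 log₁₀((1/n)Σ(1/y²)). A sketch with illustrative data:

```python
import math

def sn_nominal(data):
    """Nominal-the-best: 10*log10(mean^2 / variance)."""
    n = len(data)
    mean = sum(data) / n
    var = sum((y - mean) ** 2 for y in data) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

def sn_smaller(data):
    """Smaller-the-better: -10*log10(mean of y^2)."""
    return -10 * math.log10(sum(y ** 2 for y in data) / len(data))

def sn_larger(data):
    """Larger-the-better: -10*log10(mean of 1/y^2)."""
    return -10 * math.log10(sum(1 / y ** 2 for y in data) / len(data))

runs = [9.8, 10.1, 10.0, 10.2]  # illustrative repeated measurements
print(sn_nominal(runs))  # high value: little variability around the mean
```

In each form, a larger η means a more robust condition, so optimization always pushes the SN ratio upward regardless of the characteristic type.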
signal-to-noise (SN) ratio analysis Analysis performed to determine the factor
levels required to reduce variability and achieve the ideal function of a
characteristic.
six sigma A quality program developed by Motorola Corporation with a detailed
set of quality processes and tools.
slope calibration Calibration of slope in measurement.
smaller-the-better characteristic The type of performance characteristic that has zero as the best value (e.g., wear, deterioration, etc.). This type of
characteristic belongs to the category of quality characteristics that has zero as
the best value and has no negative value.
speed difference method The SN ratio of a dynamic operating window calculated
from the difference of two quality modes.
speed ratio method The SN ratio of a dynamic operating window calculated from
the ratio of two quality modes.
split-type analysis A method to determine SN ratios without noise factors.
standard deviation A measure of the spread of the process output or the spread
of a sampling statistic from the process (e.g., of subgroup averages). Standard
deviation is denoted by the Greek letter σ (sigma).
standard rate of mistake The rate of mistake when type I and type II mistakes
are adjusted to be equal.
standard SN ratio The SN ratio calculated from the standard rate of mistake.
subsystem A system that constitutes part of the total system.
system design The first of three design stages. During system design, scientific and engineering knowledge is applied to produce a functional prototype design. This prototype is used to define the initial settings of a product or process design characteristic.
Taguchi methods Methods originated by Genichi Taguchi, such as quality engineering, MTS, or experimental regression analysis.
tolerance design The third of three design stages. Tolerance design is applied if
the design is not acceptable at its optimum level following parameter design.
During tolerance design, more costly materials or processes with tighter tolerances are considered.
transformation The function of transforming the input signal, such as mold dimension or CNC machine input program, into the output, such as product
dimension.
tuning factor A signal factor used to adjust the output.
two-step optimization An approach used in parameter design: first, find the optimum control factor set points that maximize robustness (the SN ratio) by minimizing sensitivity to noise; second, adjust the mean response to target.
upstream quality Also called robust quality. This is the type of quality used to
improve the robustness of a specic product.
variability The property of exhibiting variation (i.e., changes or differences).
variance The mean-squared deviation (MSD) from the mean. The sum of squares
divided by the degrees of freedom.
variation The inevitable differences among individual outputs of a process.
variation of a factor The sum of squares of a factorial effect.
Youden square An incomplete Latin square.
zero-point proportional equation The equation showing the case when the input/output regression line passes through the origin. It is the equation most frequently used with dynamic characteristics.
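For the zero-point proportional case y = βM with one observation per signal level, the least squares slope is β = ΣMy/ΣM², and the standard dynamic SN ratio is η = 10 log₁₀[((S_β − V_e)/r)/V_e], where r = ΣM², S_β = (ΣMy)²/r, and V_e is the error variance. A minimal sketch (data illustrative):

```python
import math

def dynamic_sn_zero_point(M, y):
    """Slope and SN ratio for y = beta*M through the origin,
    one reading per signal level (no noise repetitions)."""
    n = len(M)
    r = sum(m * m for m in M)                    # effective divider, sum(M^2)
    lin = sum(m * v for m, v in zip(M, y))       # linear form, sum(M*y)
    s_beta = lin ** 2 / r                        # variation due to the slope
    s_total = sum(v * v for v in y)
    ve = (s_total - s_beta) / (n - 1)            # error variance V_e
    beta = lin / r
    eta = 10 * math.log10(((s_beta - ve) / r) / ve)
    return beta, eta

M = [1.0, 2.0, 3.0]
y = [2.1, 3.9, 6.1]  # roughly y = 2M
beta, eta = dynamic_sn_zero_point(M, y)
print(round(beta, 3), round(eta, 1))
```

The slope recovers the sensitivity (here about 2), and η measures how tightly the data hug the proportional line through the origin.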
Bibliography

Genichi Taguchi's Publications

[Table of 67 publications, 1951–2002. Titles and column alignment (No., Year, Title, Note, Publisher) are largely unrecoverable in this copy. Recoverable fragments: publishers include Maruzen, JSA (Japanese Standards Association), Corona, Unipub, ASME, ASI, Keizaikai, San-nou University, and McGraw-Hill; many entries are coauthored, and for two Taguchi was chief of the publication committee. Identifiable titles include Statistical Analysis (1966), Process Control (1967), Quality Control (1969), Quality Assessment (1972), Robust Engineering (1999), and Mahalanobis–Taguchi System (2000, McGraw-Hill).]
Index
A
Abnormality:
measurement scale for, 398. See also Mahalanobis–Taguchi System
methods for identifying, 417
Active dynamic SN ratio, 132–133
Active signal factors, 39, 97
Active SN ratios, 236
Additive model (Taguchi), 1488
Additivity:
of factorial effects, 592
and noise level, 586
with particle size data, 588
and physical condition, 585
Adhesion condition of resin board and copper plate, 890–894
Adjustability, 225
Adjusting factors, 60. See also Control factors
Adjustment, 60, 319. See also Tuning
basic equations for, 1601
in feedback control:
based on product characteristics, 455–459
of process condition, 468–473
linearity in, 227–228
in on-line quality engineering, 116
process, 439
Adjustment cost, 441
Advanced Machining Tool Technology Promotion Association, 137
Airflow noise reduction of intercoolers, 1100–1105
Akao, Yoji, 1427, 1430
Algorithms case studies (software):
optimization of a diesel engine software control strategy, 1301–1309
ASI, see American Supplier Institute
ASI CG (American Supplier Institute Consulting Group), 1498
Assignment (layout) to an orthogonal array, 490
Attributes:
classified, 234, 587
signal-to-noise ratios for, 234–236, 290–308
value of continuous variables vs., 239
customer, 1482
Attribute data, disadvantages of using, 234–235
Automated process management, 31–33
Automatic inspection of solder joints, Mahalanobis distance in, 1189–1195
Automatic transmissions, friction material for, 841–847
Automotive industry, 7–8, 419. See also specific companies
Automotive industry case studies:
airflow noise reduction of intercoolers, 1100–1105
brake boosters, reduction of boosting force variation of, 1106–1111
casting conditions for camshafts, simulation of, 900–903
clear vision by robust design, 882–889
clutch disk torsional damping system design, 973–983
DC motor, optimal design for, 1122–1127
diesel engine software control strategy, 1301–1309
direct-injection diesel injector optimization, 984–1004
exhaust sensor output characterization with MTS, 1220–1232
friction material for automatic transmissions, 841–847
linear proportional purge solenoids, 1032–1049
minivan rear window latching improvement, 1025–1031
on-line quality engineering in, 1367–1375
product development challenge in, 1478–1479
steering system on-center robustness, 1128–1140
two-piece gear brazing conditions, optimization of, 858–862
wiper system chatter reduction, 1148–1156
Axiomatic design, 1454–1458
comparison of TRIZ, robust design, and, 1454–1458
function in, 1456
and structure thinking, 1487–1489
B
Baba, Ikuo, 140
Banks, 10, 11
Base space, 143
Batch production, 459
Bean sprouting conditions, optimization of, 631–636
Bebb, Barry, 158
Becker technique, 795
Bell Laboratories, 41, 57, 60, 154–157, 1501
Benchmarking, 10, 30, 96
Between-product noise, 176, 319
Biochemistry case studies:
bean sprouting conditions, optimization of, 631–636
small algae production, optimization of, 637–642
Biomechanical case study, 795–805
Block factors, 603
Blow-off charge measurement systems, optimization of, 746–752
Bobcat Division, Ingersoll-Rand, 161
Boosting force variation in brake boosters, reduction of, 1106–1111
Bossidy, Larry, 161
Box, George, 1479
Brain disease, urinary continence recovery in patients with, 1258–1266
Brake boosters, reduction of boosting force variation of, 1106–1111
Brazing conditions for two-piece gear, 858–862
Bucket elevator, preventive maintenance design for, 1376–1382
Build-test-fix cycles, 1480
Business applications of MTS, 419–420
Business readiness, 1505
C
CAE, see Computer-aided engineering
Calibration, 116, 177
linearity in, 227–228
point, 247–250
reference-point, 247–250
slope, 228, 247
zero-point, 228
Cameras, single-use, stability design of shutter mechanisms for, 965–972
CAMPS (Computer-Aided Measurement and Process Design and Control System), 140
Camshafts, simulation of casting conditions for, 900–903
Cannon, 1084
CAs (customer attributes), 1482
Casting conditions for camshafts, simulation of, 900–903
Cause, special vs. common, 1446–1447
Cause-and-effect diagrams, 1429
Cause parameters, 340. See also Low-rank characteristics
Central Japan Quality Control Association (CJQCA), 148, 155
Central Japan Quality Management Association, 144
Ceramic oscillation circuits, parameter design of, 732–734
CF, see Correction factor
Champion, corporate, 390
Change management, 390–391
Character recognition, Mahalanobis distance in, 1288–1292
Chebyshev, P. L., 543
Chebyshev's orthogonal polynomial equation, 254–257, 543
Checking interval:
feedback control based on product characteristics, 455–459
feedback control of process condition, 468–473
Check sheets, 1429
Checkup interval, 117
Chemical applications. See also Chemical reactions
for on and off conditions, 83–85
pharmacology, 686–694
Chemical applications case studies, 629–714
biochemistry:
bean sprouting conditions, optimization of, 631–636
small algae production, optimization of, 637–642
dynamic operating window:
component separation, evaluation of, 666–671
herbal medicine granulation, use of, 695–704
photographic systems, evaluation of, 651–658
measurement:
dynamic operating window, evaluation of component separation with, 666–671
granule strength, optimization of measuring method for, 672–678
thermoresistant bacteria, detection method for, 679–685
ultra trace analysis, dynamic optimization in, 659–665
separation:
developer, particle-size adjustment in fine grinding process for, 705–714
herbal medicine granulation, use of dynamic operating window for, 695–704
in vitro percutaneous permeation, optimization of model ointment prescriptions for, 686–694
Chemical engineering, Taguchi method in, 85–93
evaluation of images, 88–91
functionality, such as granulation or polymerization distribution, 91
function of an engine, 86–87
general chemical reactions, 87–88
separation system, 91–93
Chemical reactions:
basic function of, 296–298
general, 87–88
ideal functions of, 230–232
and pharmacy, 142
with side reactions:
reaction speed difference method, 298–301
reaction speed ratio method, 301–308
without side reactions, 296–298
Chemical reactions case studies:
evaluation of photographic systems with dynamic operating window, 651–658
polymerization reactions, optimization of, 643–650
Chowdhury, Subir, 168, 1493, 1496
Circuits case studies:
design for amplifier stabilization, 717–731
evaluation of electric waveforms by momentary values, 735–740
parameter design of ceramic oscillation circuits, 732–734
robust design for frequency-modulation circuits, 741–745
CJQCA, see Central Japan Quality Control Association
Clarion, 133, 141
Classified attributes, 587
signal-to-noise ratios for, 234–236, 290–308
for chemical reactions without side reactions, 296–298
for chemical reactions with side reactions, 298–308
for two classes, one type of mistake, 291–293
for two classes, two types of mistake, 293–296
three-or-more-class, 234
two-class, 234
value of continuous variables vs., 239
Clausing, Don, 158
Clear vision by robust design, 882–889
Clutch disk torsional damping system design optimization, 973–983
Common cause, 1446–1447
Communication (in robust engineering implementation), 391–392
Comparison, 529, 542–543
Computer-aided engineering (CAE), 1475–1476
Computer-Aided Measurement and Process Design and Control System (CAMPS), 140
Computer systems case study, 1324–1334. See also Software testing and application case studies
Concepts:
creation of, 39
optimization of, 1497, 1512–1514
selection of, 26–28, 40
Concept design, 1496, 1497. See also System design
Concept optimization phase, 1503, 1504
Concept phase, 1503, 1504
Concurrent engineering, 276, 1451
Conformance to specifications, quality as, 171
Consumer loss, quality level and, 17–18
Continuous improvement cycle, 1447
Continuous variables:
signal-to-noise ratios for, 234–236, 239–289
double signals, 275–284
estimation of error, 271–275
with known signal factor level interval, 256–260
with known signal factor level ratio, 262–263
linear equation, 250–254
linear equation using tabular display of orthogonal polynomial equation, 254–256
with no noise, 284–289
with no signal factor, 264–271
reference-point proportional equation, 247–250
split analysis, 284–289
with unknown true values of signal factor levels, 264
when signal factor levels can be set up, 260–262
zero-point proportional equation, 241–247
in two-way layout experiments, 563–572
value of classified attributes vs., 239
Contradiction table, 1454
Contrast, 529, 542
Contribution, 107
Control charts, 1429
Control costs, 17
Control factors, 30, 603, 1435. See also Adjusting factors; Indicative factors
avoiding interactions between, 290, 356
definition of, 74
interactions of noise factors and, 227
optimum levels for, 212
in two-stage optimization, 40, 41
Copper, separation system for, 91–93
Copper plate, adhesion condition of resin board and, 890–894
Corporate leader/corporate team, 390–391
Corporate strategy, R&D investment in, 26
Correction factor (CF, CT), 510–511
Correction (in on-line quality engineering), 116
Cost(s):
adjustment, 441
balance of quality and, 18–22, 440–441
control, 17
and DFSS, 1516–1517
for loss of quality, 171
process control, 441
production, 17, 442
of quality control systems during production, 474–477
and quality level, 18–22
and quality loss function, 380
unit manufacturing cost, 18, 121, 439, 441
Coupled design, 1455
Covariance, 536
Credit, 10–11
Criminal noises, 15–16
Cross-functional team, 390–392
CT, see Correction factor
Customer's tolerance, 182
Customer attributes (CAs), 1482
Customer loss, 193
Customer quality, see Downstream quality
Customer satisfaction, 134
Cycle for continuous improvement, 1447
Cycle time reduction, 39–40, 228
D
Data analysis, 506–514
deviation, 507–508
with incomplete data, 609–616
sequential approximation, 613–616
treatment, 611–613
sum and mean, 506–507
variation and variance, 509–514
Data transformation, 507
DC motor, optimal design for, 1122–1127
Debugging software, streamlining of, 1360–1364
Debug model, 1435
Decibel value of the SN ratio, 1438
Decomposition, 64. See also Decomposition of variation
of degrees of contribution, 520–521
of degrees of freedom, 518
of indicative factors, 563–565
two-way layout with, 563–572
one continuous variable, 563–567
two continuous variables, 567–572
Decomposition of variation, 505, 513
and analysis of variance, 515–521
to components with unit degree of freedom, 528–551
comparison and its variation, 528–534
linear regression equation, 534–543
orthogonal polynomials, application of, 542–551
between experiments, 574–575
Decoupled design, 1455
Deep-drawing process, optimization of, 911–915
Defect detection, MTS in, 1233–1237
Defective items, 445
Degrees of contribution:
calculation of, 521–522
decomposition of, 520–521
explanation of, 560
Degrees of freedom, 509–510, 618
decomposition of, 518
decomposition of components with, 528–551
comparison and its variation, 528–534
linear regression equation, 534–543
orthogonal polynomials, application of, 542–551
Delphi, 161
Delphi Automotive Systems, 984, 1220
Deming, Pluma Irene, 1442–1443
Deming, William Albert, 1442–1443
Deming, William Edwards, 1424, 1442–1448
biographical background, 1442–1444
continuous improvement cycle, 1447
as father of quality in Japan, 157
at Ford Motor Company, 157
14 points for management, 1445–1446
on making excellent products, 1491
and pattern-level thinking, 1481
quality principles of, 1444–1447
seven deadly diseases, 1446
Taguchi methods vs. teachings of, 1447–1448
Deming cycle, 1424, 1426, 1443
Deming Prize, 1424, 1428
Derivation(s):
of quality loss function, 440
of safety factors, 443–448
with Youden squares, 623–625
Design. See also specific topics
interaction between signals/noises and, 57
management for quality engineering in, 30–31
Design-build/simulate-test-fix cycle, 164
Design constants:
selection of, 40
tuning as alteration of, 57
Design for assembly (DFA), 1482
Design for manufacturing (DFM), 1482
Design for six sigma (DFSS, IDDOV), 161–163, 1492–1520
aligning NPD process with, 1498–1507
financial benefits, 1515–1517
IDDOV structure, 1494–1496
and intersection of six sigma and robust engineering, 1517–1519
and NPD, 1501–1507
and quality engineering, 1498
Taguchi methods as foundation for, 1493–1494
technical benefits, 1507–1515
Design for Six Sigma (Subir Chowdhury), 1493
Design life, 57
Design method for preventive maintenance, 463–467
Design of experiments (DoE), 503–505
decomposition of variation, 505
definition of, 503–504
development of, 154
at Ford Motor Company, 158
functionality, evaluation of, 504
in Japan, 1431–1435
objective of, 178, 356
quality engineering vs., 356
and reproducibility, 504–505
in United States, 159
Design of Experiments (Genichi Taguchi), 131, 515, 523, 552, 563, 573, 597
Design process:
analysis stage of, 40
stages of, 40
synthesis stage of, 40
tools for, 30
Design quality, 438
Design quality test, 38
Design review, 1427
Design-test-fix cycle, 1471
Detector switch characterization with MTS, 1208–1219
Deteriorating characteristics, 205–207
Developer:
high-quality, development of, 780–787
particle-size adjustment in fine grinding process for, 705–714
Deviation, 507–508
estimate of, 512
magnitude of, 509
from main effect curve, 557
from mean, 507, 508
from objective value, 507–508
standard, 517
in variance, 511–514
in variation, 509
DFA (design for assembly), 1482
DFM (design for manufacturing), 1482
DFSS, see Design for six sigma
Diagnosis. See also Medical diagnosis
basic equations for, 1601
MTS for, 398
to prevent recalls, 33–38
process, 439, 477–479
Diagnosis interval, 117
Diesel engine software control strategy, 1301–1309
Direct-injection diesel injector optimization, 984–1004
Direct product design, 227
Direct product layout, 323
Discrimination and classification method, 417
Disk blade mobile cutters, optimization of, 1005–1010
Distribution of tolerance, 204–205
DMAIC (six sigma), 1517–1519
Documentation of testing, 1477
DoE, see Design of experiments
Dole, 1208
Domain model of product development, 1479–1480
Double signals, 275–284
Downstream quality, 269, 354, 355
Drug efficacy, Mahalanobis distance in measurement of, 1238–1243
Dummy treatment, 427, 428, 605
D-VHS tape travel stability, 1011–1017
Dynamic operating window method, 142
for chemical reactions with side reactions, 298–308
for component separation evaluation, 666–671
for herbal medicine granulation, 695–704
for photographic systems evaluation, 651–658
Dynamic SN ratio, 139, 225
nondynamic vs., 139, 225
for origin quality, 354–355
in U.S. applications, 160
uses of, 139, 236
E
Earthquake forecasting, MTS in, 419
ECL, see Electrical Communication Laboratory
Economy:
direct relationship of SN ratio and, 225
and productivity, 57
ECs (engineering characteristics), 1482
Education (in robust engineering implementation), 392
Effective power, 61
Effect of the average curve, 558–559
Elderly:
improving standard of living for, 12
predicting health of, 15
Electrical applications case studies, 715–792
circuits:
amplifier stabilization, design for, 717–731
ceramic oscillation circuits, parameter design of, 732–734
electric waveforms, evaluation of, 735–740
frequency-modulation circuits, robust design for, 741–745
electronics devices:
back contact of power MOSFETs, optimization of, 771–779
blow-off charge measurement systems, optimization of, 746–752
fine-line patterning for IC fabrication, parameter design of, 758–763
generic function of film capacitors, evaluation of, 753–757
pot core transformer processing, minimizing variation in, 764–770
electrophotography:
functional evaluation of process, 788–792
high-quality developer, development of, 780–787
Electrical Communication Laboratory (ECL), 154–156
Electrical connector insulator contact housing, optimization of, 1084–1099
Electrical encapsulants:
ideal function for, 234
optimization of process, 950–956
robust technology development of process, 916–925
Electric waveforms, momentary value in evaluation of, 735–740
Electro-deposited process for magnet production, 945–949
Electronic and electrical engineering, Taguchi method in, 59–73
functionality evaluation of system using
power, 6165
quality engineering of system using
frequency, 6573
Electronic components:
adhesion condition of resin board and
copper plate, 890–894
on and off conditions, 83–85
resistance welding conditions for, 863–868
Electronic devices case studies:
back contact of power MOSFETs,
optimization of, 771–779
blow-off charge measurement systems,
optimization of, 746–752
fine-line patterning for IC fabrication,
parameter design of, 758–763
generic function of film capacitors,
evaluation of, 753–757
variation in pot core transformer processing,
minimizing, 764–770
Electronic warfare systems, robust testing of,
1351–1359
Electrophotography case studies:
functional evaluation of an
electrophotographic process, 788–792
high-quality developer, 780–787
toner charging function measuring system,
875–881
Emission control systems, 1032–1049
Employment, productivity and, 6
Engines:
function of, 86–87
idle quality, ideal function of, 230
Engineering and the Mind's Eye (Eugene S.
Ferguson), 1476
Engineering characteristics (ECs), 1482
Engineering quality, 10
Engineering systems, laws of evolution of, 1454
Enhanced plastic ball grid array (EPBGA)
package, 916–925
Environmental noises, 14–15
EPBGA package, see Enhanced plastic ball grid
array package
Equalizer design, ideal function in, 233
Error, human, 13–14
Error factors, 1434, 1436
Error variance, 240, 420, 511
Error variation, 513, 517, 518, 559
Estimated values, 492, 494–499
Estimate of deviation, 512
Estimation of error, 271–275
Europe, paradigm shift in, 29
Evaluation technologies, 26
Event thinking, 1480–1481, 1490
Evolution of engineering systems, laws of, 1454
Exchange rates, 7
Exhaust sensor output characterization with
MTS, 1220–1232
Experimental regression analysis, 149, 484,
488–499
estimated values, selection of, 492, 494–499
initial values, setting of, 488–489
orthogonal array, selection and calculation of,
489–494
parameter estimation with, 484, 488–499
sequential approximation, 490, 492–496
Experimentation:
confirmatory, 361
development of experimental design,
1472–1473
orthogonal array L18 for, 357
successful use of orthogonal arrays for, 358
Experts, internal, 392
External noise, 1512
External test facilities, 1476–1477
F
Fabrication line capacity planning using a
robust design dynamic model, 1157–1167
Factors. See also specific types, e.g., Control factors
classification of, 1434–1435
definition of, 552
Failure mode and effects analysis (FMEA), 392,
1427, 1482
Failure rates, 1514–1515
FDI (Ford Design Institute), 161
Feedback control, 32, 79, 439
based on process condition, 468–473
temperature control, 471–473
viscosity control, 469–470
based on product characteristics, 454–467
batch production process, 459–462
process control gauge design, 462–467
system design, 454–459
designing system of, 118–120
measurement error in, 459
by quality characteristics, 1383–1388
Feeder valves, reduction of chattering noise in,
1112–1121
Feedforward control, 32, 79, 439
Felt-resist paste formula, optimization of,
836–840
FEM (finite-element method) analysis, 1050
Ferguson, Eugene S., 1476
Film capacitors, generic function of, 753–757
Financial system product design, 10–11
Find-and-fix improvement efforts, 1515
Fine-line patterning for IC fabrication, 758–763
Finite-element method (FEM) analysis, 1050
Fire detection, MTS in, 419
Firefighting, 159, 167, 318
Fire prevention, 167, 168
Fisher, Ronald A., 503, 1424, 1431–1434
Fixed targets, signal-to-noise ratios for, 236
Flexible manufacturing systems (FMSs), 12
Flexor tendon repairs, biomechanical
comparison of, 795–805
Flex Technology, 160
FMEA, see Failure mode and effects analysis
FMSs (flexible manufacturing systems), 12
Ford Design Institute (FDI), 161
Ford Motor Company, 16, 81, 157–158, 160,
161, 229, 230, 1431
Ford Supplier Institute (FSI), 157, 158
Forecasting. See also Prediction
of earthquakes, 419
of future health, 1277–1287
MTS for, 398
of weather, 419
Foundry process using green sand, 848–851
Four Ps of testing, 1470
personnel, 1470–1472
preparation, 1472–1473
procedures, 1473–1474
product, 1474–1475
14 points for management, 1445–1446
Freedoms, individual, 9, 11
Frequency modulation, 67–69, 741–745
Friction material for automatic transmissions,
841–847
FSI, see Ford Supplier Institute
Fuel delivery system, ideal function in, 229–230
Fuji Film, 24, 135, 1439
Fujimoto, Kenji, 84
Fuji Steel, 1424
Fuji Xerox, 149, 158, 428, 1439
Function(s):
definitions of, 1456
in design, 1456
generic, 8
definition of, 85, 229, 355
determining evaluation scale of, 148
identification of, 360
machining, 80–83
objective function vs., in robust technology
development, 355
optimization of, 352
robustness of, 352
selecting, 313–314
SN ratio for, 133
technological, 1440
ideal, 60, 229–234
actual function vs., 1508–1509
based on signal-to-noise ratios, 228–234
chemical reactions, 230–232
definition of, 1450, 1508
with double signals, 279
electrical encapsulant, 234
engine idle quality, 230
equalizer design, 233
fuel delivery system, 229–230
grinding process, 232
harmful part of, 1509
identifying, 377–378
injection molding process, 229
intercoolers, 232
low-pass filters, 233–234
machining, 229
magnetooptical disk, 233
power MOSFET, 234
printer ink, 234
specifying, 1472
transparent conducting thin films, 233
useful part of, 1509
voltage-controlled oscillators, 234
wave soldering, 233
welding, 230
wiper systems, 230
loss, 1452, 1488
modulation, 67–69
objective, 10
definition of, 229, 355
and design of product quality, 26
generic function vs., 355
standard conditions for, 40
of subsystems, 313
transformability as, 140
tuning as, 57
on and off, 138
orthogonal, 543
phase modulation, 68–72
quality loss function, 133–134, 171–179, 380
classification of quality characteristics, 180
derivation of, 440
equations for, 1598–1601
justice-related aspect of, 175–176
for larger-the-better characteristic, 188, 191
for nominal-the-best characteristic, 180–186,
190, 445
and process capability, 173–176
quadratic representation of, 174
quality aspect of, 176
for smaller-the-better characteristic, 186–188,
190
and steps in product design, 176–179
for tolerance design optimization, 386
in TRIZ, 1456
Functional analysis and trimming (TRIZ), 1454
Functional evaluation:
of electrophotographic process, 788–792
SN ratio for, 133
Functional independence, 1488
Functionality(-ies):
evaluation, functionality, 28–30, 147, 504
of articulated robots, 1059–1068
of spindles, 1018–1024
standardization draft for, 148
in trading, 148
improvement of, 27
and system design, 314–317
Functionality design, 11, 73
Functional limit, 443
Functional quality, see Origin quality
Functional risk, 11
Functional tolerance, 193
Function limit, 196–201
G
Gain, 79
Gas-arc stud weld process parameter
optimization, 926–939
Gauss, K. F., 535
GDP, see Gross domestic product
General Electric (GE), 161, 1481
Generic function(s), 8
definition of, 85, 229, 355
determining evaluation scale of, 148
identification of, 360
machining, 80–83
objective function vs., in robust technology
development, 355
optimization of, 352
robustness of, 352
selecting, 313–314
SN ratio for, 133
technological, 1440
Generic technology, 26, 39, 59
Global Standard Creation Research and
Development Program, 148
GNP (gross national product), 6
GOOD START model, 1505–1507
Gram–Schmidt orthogonalization process
(GSP), 400, 415–418. See also Mahalanobis–
Taguchi Gram–Schmidt process
Granule strength, optimization of measuring
method for, 672–678
Graphs, 1429. See also Linear graphs
Greco-Latin square, 1432
Green sand, foundry process using, 848–851
Grinding process:
for developer, particle-size adjustment in,
705–714
ideal function of, 232
Gross domestic product (GDP), 11, 12
Gross national product (GNP), 6
GSP, see Gram–Schmidt orthogonalization
process
H
Hackers, 15
Handwriting recognition, 1288–1292
Hara, Kazuhiko, 141
Hardware design, downstream reproducibility
in, 58–59
Harmful effects (TRIZ), 1453
Harmful part (ideal functions), 1509
Harry, Mikel, 161, 1517
Hazama, 1427
Health. See also Medical diagnosis
of elderly, 12, 15
forecasting, 1277–1287
MT method for representing, 102–104
Herbal medicine granulation, dynamic
operating window method for, 695–704
Hermitian form, 62–64. See also Decomposition
of variation
Hewlett-Packard, 27–29, 73
Hicks, Wayland, 158
Higashihara, Kazuyuki, 84
High-performance liquid chromatograph
(HPLC), 666–671
High-performance steel, machining technology
for, 819–826
High-rank characteristics, 201
definition of, 340
relationship between low-rank characteristics
and, 203–204
tolerance for, 205
Histograms, 1429
History of quality engineering, 127–168
in Japan, 127–152
application of simulation, 144–147
chemical reaction and pharmacy, 142
conceptual transition of SN ratio, 131–133
design of experiments, 1431–1435
development of MTS, 142–143
electric characteristics, SN ratio in,
140–141
evaluation using electric power in
machining, 138–139
machining, SN ratio in, 135–137
on-line quality engineering and loss
function, 133–135
origin of quality engineering, 127–131
QFD for customers' needs, 1430–1431
seven QC tools and QC story, 1428–1430
since 1950s, 1435–1440
transformability, SN ratio of, 139–140
in United States, 153–168
from 1980 to 1984, 156–158
from 1984 to 1992, 158–160
from 1992 to 2000, 161–164
from 2000 to present, 164–168
and history of Taguchi's work, 153–156
Holmes, Maurice, 158
Hosokawa, Tetsuo, 27, 28
House of quality, 164–166
HPLC, see High-performance liquid
chromatograph
Human errors, 13–14
Human performance case studies:
evaluation of programming ability, 1178–1188
prediction of programming ability, 1171–1177
I
ICP-MS (inductively coupled plasma mass
spectrometer), 659
IDDOV, see Design for six sigma
Ideal function(s), 60, 229–234
actual function vs., 1508–1509
based on signal-to-noise ratios, 228–234
chemical reactions, 230–232
definition of, 1450, 1508
with double signals, 279
electrical encapsulant, 234
engine idle quality, 230
equalizer design, 233
fuel delivery system, 229–230
grinding process, 232
harmful part of, 1509
identifying, 377–378
injection molding process, 229
intercoolers, 232
low-pass filters, 233–234
machining, 229
magnetooptical disk, 233
power MOSFET, 234
printer ink, 234
specifying, 1472
transparent conducting thin films, 233
useful part of, 1509
voltage-controlled oscillators, 234
wave soldering, 233
welding, 230
wiper systems, 230
Ideality:
concept of, 1453
principle of, 1489
Ideation, 1301
IHI, see Ishikawajima-Harima Heavy Industries
IIN/ACD, see ITT Industries Aerospace/
Communications Division
Image, definition of, 88
IMEP (indicated mean effective pressure), 230
Implementation of robust engineering, 389–393
and bottom-line performance, 393
corporate leader/corporate team in, 390–391
critical components in, 389–390
education and training in, 392
effective communication in, 391–392
and integration strategy, 392–393
management commitment in, 390
Ina, Masao, 1433
Inao, Takeshi, 131
Ina Seito Company, 155, 589, 1433, 1434
Incomplete data, 609–616
sequential approximation with, 613–616
treatment of, 611–613
Incomplete Latin squares, 617. See also Youden
squares
Independence axiom, 1454, 1487–1488
Indian Statistical Institute, 155, 398
Indicated mean effective pressure (IMEP), 230
Indicative factors, 30, 68, 563, 603, 1434, 1436.
See also Control factors
Individual freedoms, 9, 11
Inductively coupled plasma mass spectrometer
(ICP-MS), 659
Industrial Revolution, 9. See also Second
Industrial Revolution
Industrial waste, tile manufacturing using,
869–874
Information axiom, 1454
Information technology, 10–11
Ingersoll-Rand, 161
Initial values (experimental regression analysis),
488–489
Injection molding process. See also Plastic
injection molding
ideal function in, 229
thick-walled products, 940–944
Ink, printer:
ideal function for, 234
MTS in thermal ink jet image quality
inspection, 1196–1207
Inner noise, 176, 319
Inner orthogonal arrays, 322–323
Input, signal-to-noise ratios based on, 236–237
Inspection, traditional approach to, 378–379
Inspection case studies:
automatic inspection of solder joints,
Mahalanobis distance in, 1189–1195
defect detection, 1233–1237
detector switch characterization, 1208–1219
exhaust sensor output characterization,
1220–1232
printed letter inspection technique,
1293–1297
thermal ink jet image quality inspection,
1196–1207
Inspection design, 32–33, 80, 463–467
Inspection interval, 117
Latent defects, 1498
Latin squares, 617, 1432
Laws of evolution of engineering systems, 1454
Layout to an orthogonal array, 490
LC (liquid chromatographic) analysis, 666
Leadership for robust engineering, 390–391
Least squares, 60, 535
for estimation of operation time, 485–486
parameter estimation with, 485–489
LIMDOW disk, 27, 28
Limits, 379
Linear actuator optimization using simulation,
1050–1058
Linear equations, 240, 241
number of units in, 559
for signal-to-noise ratios for continuous
variables, 250–256
zero-point proportionality as replacement for,
137
Linear graphs, 597–608, 1433, 1527–1528
combination design, 606–608
dummy treatment, 605
for L4, 1530
for L8, 597–600, 602, 604, 1530, 1603
for L9, 1564, 1609
for L16, 1532–1535, 1586, 1606–1608
for L18, 1564, 1610
for L25, 1591
for L27, 1567, 1613
for L27, 1597
for L32, 1538–1549, 1587
for L36, 1594
for L50, 1592
for L54, 1568
for L64, 1554–1563, 1590
for L81, 1572–1585
for multilevel factors, 599–606
Linearity (SN ratios), 225, 226
in adjustment and calibration, 227–228
dynamic, 236
Linear proportional purge solenoids,
1032–1049
Linear regression equations, 1406–1419
Liquid chromatographic (LC) analysis, 666
Liver disease diagnosis case studies, 401–415,
1244–1257
Loss:
definition of, 171
due to evil effects, 18
due to functional variability, 18–19, 440–442
due to harmful quality items, 440–441
to society, 1516–1517
Loss function, 1452, 1488. See also Quality loss
function
Low-pass filters, ideal function for, 233–234
Low-rank characteristics. See also Parameter
design; Tolerance design
definition of, 340
deterioration of, 205–207
determination of, 201–204
and high-rank characteristics, 203–204
specification tolerancing with known value of,
201–204
M
MACH, see Minolta advanced cylindrical heater
Machine and Tool, 136
Machine Technology, 136
Machining, 80–83
analysis of cutting performance, 81, 82
ideal function, 229
improvement of work efficiency, 82–83
Machining case studies:
development of machining technology for
high-performance steel by
transformability, 819–826
optimization of machining conditions by
electrical power, 806–818
spindles, functionality evaluation of,
1018–1024
transformability of plastic injection-molded
gear, 827–835
Maeda, 1428
Magnetic circuits, optimization of linear
actuator using simulation, 1050–1058
Magnetooptical disk, ideal function for, 233
Magnet production, electro-deposited process
for, 945–949
Mahalanobis, P. C., 103, 155, 398, 1440
Mahalanobis distance (MD), 102–103, 114–116,
398, 1440
applications of, 398
in automatic inspection of solder joints,
1189–1195
in character recognition, 1288–1292
computing methods for, 400
Gram–Schmidt process for calculation of, 416
for health examination and treatment of
missing data, 1267–1276
in measurement of drug efficacy, 1238–1243
in medical diagnosis, 1244–1257
in prediction:
of earthquake activity, 14
of urinary continence recovery in patients
with brain disease, 1258–1266
and Taguchi method, 114–116
Mahalanobis space, 114–116
in predicting health of elderly, 15
in prediction of earthquake activity, 14
and Taguchi method, 114–116
Mahalanobis–Taguchi Gram–Schmidt (MTGS)
process, 415–418
Mahalanobis–Taguchi (MT) method, 102–109
general pattern recognition and evaluation
procedure, 108–109
and Mahalanobis distance/space, 114–116
for medical diagnosis, 104–108
Mahalanobis–Taguchi System (MTS), 102–103,
109–116, 397–421
applications, 418–420
in automotive industry:
accident avoidance systems, 419
air bag deployment, 419
base space creation in, 143
in business, 419–420
definition of, 398
discriminant analysis vs., 143
early criticism/misunderstanding of, 142–143
in earthquake forecasting, 419
in fire detection, 419
general pattern recognition and evaluation
procedure, 109–114
and Gram–Schmidt orthogonalization
process, 415–418
liver disease diagnosis case study, 401–415
and Mahalanobis distance/space, 114–116
in manufacturing, 418
in medical diagnosis, 418
other methods vs., 417–418
in prediction, 15, 419
sample applications of, 398–400
for study of elderly health, 12
in weather forecasting, 419
Mahalanobis–Taguchi System (MTS) case
studies, 1169–1297
human performance:
evaluation of programming ability,
1178–1188
prediction of programming ability,
1171–1177
inspection:
automatic inspection of solder joints,
Mahalanobis distance in, 1189–1195
defect detection, 1233–1237
detector switch characterization, 1208–1219
exhaust sensor output characterization,
1220–1232
thermal ink jet image quality inspection,
1196–1207
medical diagnosis:
future health, forecasting of, 1277–1287
health examination and treatment of
missing data, 1267–1276
liver disease, 401–415
measurement of drug efficacy, Mahalanobis
distance for, 1238–1243
urinary continence recovery among
patients with brain disease, prediction
of, 1258–1266
use of Mahalanobis distance in, 1244–1257,
1267–1276
products:
character recognition, Mahalanobis
distance for, 1288–1292
printed letter inspection technique,
1293–1297
Main effect curve, 557
Maintenance design, 439. See also Preventive
maintenance
Maintenance system design, 33
Management:
commitment to robust engineering by, 390
general role of, 23
for leadership in robust engineering,
390–391
major role of, 10
in manufacturing, 16–22, 120–123
consumer loss, evaluation of quality level
and, 17–18
cost vs. quality level, 18–22
at Ford Motor Company, 16
tolerance, determination of, 16–17
for quality engineering, 25–38
automated process management, 31–33
design process, 30–31
functionality, evaluation of, 28–30
recalls, diagnosis to prevent, 33–38
research and development, 25–28, 39–40
and robust engineering, 377–388
parameter design, 382–386
tolerance design, 386–388
strategic planning in, 13
Management by total results, 149
Management reviews, 1507
Manufacturing. See also Manufacturing processes
management in, 16–22
balance of cost and quality level, 18–22
determining tolerance, 16–17
evaluation of quality level and consumer
loss, 17–18
at Ford Motor Company, 16
MTS in, 418
of semiconductor rectifier, 1395–1398
Manufacturing case studies. See also Processing
case studies
control of mechanical component parts,
1389–1394
on-line quality engineering in automobile
industry, 1367–1375
semiconductor rectifier manufacturing,
1395–1398
Manufacturing processes:
automated, 31–33
on-line quality engineering for design of,
1497–1498
quality management of, 1498
stages of, 441
Manufacturing tolerance, 194–195
Market noises, 13
Market quality, items in, 18
Martin, Billy, 387
Masuyama, Motosaburo, 1431
Material design case studies:
felt-resist paste formula used in partial
felting, optimization of, 836–840
foundry process using green sand, parameter
design for, 848–851
friction material for automatic transmissions,
841–847
functional material by plasma spraying,
852–857
Material strength case studies:
resistance welding conditions for electronic
components, optimization of, 863–868
tile manufacturing using industrial waste,
869–874
two-piece gear brazing conditions,
optimization of, 858–862
Matsuura Machinery Corp., 138
Maxell, Hitachi, 27
McDonnell & Miller Company, 330–337
MD, see Mahalanobis distance
Mean, 506–507
deviation from, 507, 508
as variance, 509
variation from, 510
working, 506–507
Mean-squared deviation (MSD), 183–184
Measurement. See also Measurement systems
of abnormality, 398
on continuous scale vs. pass/fail, 474
establishing procedures for, 1473–1474
in on-line quality engineering, 116
technologies for, 26
Measurement case studies:
clear vision by robust design, 882–889
component separation using a dynamic
operating window, 666–671
drug efficacy, Mahalanobis distance in
measurement of, 1238–1243
electrophotographic toner charging function
measuring system, 875–881
granule strength, optimization of measuring
method for, 672–678
thermoresistant bacteria, detection method
for, 679–685
ultra trace analysis, dynamic optimization in,
659–665
Measurement error, 459
Measurement interval, 117
Measurement systems:
efficiency of, 228
for electrophotographic toner charging
function, 875–881
in MTS, 420. See also Mahalanobis–Taguchi
System
SN ratio evaluation of, 225
types of calibration in, 247
Mechanical applications case studies, 793–1167
control of mechanical component parts in
manufacturing, 1389–1394
fabrication line capacity planning,
1157–1167
flexor tendon repairs, biomechanical
comparison of, 795–805
machining:
electric power, optimization of machining
conditions by, 806–818
high-performance steel by transformability,
technology for, 819–826
plastic injection-molded gear,
transformability of, 827–835
material design:
automatic transmissions, friction material
for, 841–847
felt-resist paste formula used in partial
felting, optimization of, 836–840
foundry process using green sand, 848–851
plasma spraying, development of functional
material by, 852–857
material strength:
resistance welding conditions for electronic
components, optimization of, 863–868
tile manufacturing using industrial waste,
869–874
two-piece gear brazing conditions,
optimization of, 858–862
measurement:
clear vision of steering wheel by robust
design, 882–889
electrophotographic toner charging
function measuring system,
development of, 875–881
processing:
adhesion condition of resin board and
copper plate, optimization of, 890–894
casting conditions for camshafts,
optimization of, 900–903
deep-drawing process, optimization of,
911–915
electrical encapsulation process,
optimization of, 950–956
electro-deposited process for magnet
production, quality improvement of,
945–949
encapsulation process, development of,
916–925
gas-arc stud weld process parameter
optimization, 926–939
molding conditions of thick-walled
products, optimization of, 940–944
photoresist profile, optimization of,
904–910
plastic injection molding technology,
development of, 957–964
wave soldering process, optimization of,
895–899
product development:
airflow noise reduction of intercoolers,
1100–1105
articulated robots, functionality evaluation
of, 1059–1068
brake boosters, reduction of boosting force
variation of, 1106–1111
chattering noise in Series 47 feeder valves,
reduction of, 1112–1121
clutch disk torsional damping system
design, optimization of, 973–983
DC motor, optimal design for, 1122–1127
direct-injection diesel injector optimization,
984–1004
disk blade mobile cutters, optimization of,
1005–1010
D-VHS tape travel stability, 1011–1017
electrical connector insulator contact
housing, optimization of, 1084–1099
linear actuator using simulation,
1050–1058
linear proportional purge solenoids,
1032–1049
minivan rear window latching,
improvement of, 1025–1031
omelets, improvement in the taste of,
1141–1147
shutter mechanisms of single-use cameras
by simulation, 965–972
spindles, functionality evaluation of,
1018–1024
steering system on-center robustness,
1128–1140
ultraminiature KMS tact switch
optimization, 1069–1083
wiper system chatter reduction, 1148–1156
Mechanical engineering, Taguchi method in,
73–85
conventional meaning of robust design,
73–74
functionality design method, 74–79
generic function of machining, 80–83
problem solving and quality engineering,
79–80
signal and output, 80
when on and off conditions exist, 83–85
Medical diagnosis:
Mahalanobis–Taguchi method for, 104–108
MTS in, 418
Medical diagnosis case studies:
future health, forecasting of, 1277–1287
health examination and treatment of missing
data, Mahalanobis distance for,
1267–1276
liver disease, 401–415, 1244–1257
measurement of drug efficacy, Mahalanobis
distance for, 1238–1243
urinary continence recovery among patients
with brain disease, prediction of,
1258–1266
use of Mahalanobis distance in, 1244–1257
Medical treatment and efficacy
experimentation, Taguchi method in,
93–97
Metrology Management Association, 134,
140–141
Midstream quality (specified quality), 354, 355
Minivan rear window latching improvement,
1025–1031
Minolta advanced cylindrical heater (MACH),
361–376
functions of, 362, 363
heat generating function, 365, 367–370
peel-off function, 367, 369, 371–373
power supply function, 362–366
temperature measuring function, 369,
374–376
traditional fixing system vs., 361–362
Minolta Camera Company, 232
Mitsubishi Heavy Industries, 1482
Mitsubishi Research Institute, 26
Mixed-level orthogonal arrays, 596
Miyamoto, Katsumi, 136
Mizuno, Shigeru, 1427, 1430
Modification, 177, 319
Modulation function, 67–69
Momentary values, evaluation of electric
waveforms by, 735–740
Monotonicity property, 1451
Monte Carlo method, 442
Moore, Willie, 16, 158
Morinaga Pharmaceutical, 1431
Moritomo, Sadao, 136
Motorola, 1517
MSD, see Mean-squared deviation
MTGS process, see Mahalanobis–Taguchi
Gram–Schmidt process
MT method, see Mahalanobis–Taguchi method
MTS, see Mahalanobis–Taguchi System
Multilevel assignment, 618, 1433
Multiple factors, tolerance design for, 212–220
Multiple targets, signal-to-noise ratios for, 236
Multivariate charts, MTS/MTGS vs., 417
Musashino Telecommunication Laboratory,
127–129
My Way of Thinking (Genichi Taguchi), 127–129
N
Nachi-Fujikoshi Corp., 137, 138, 141
NASA, 1427
NASDA (National Space Development Agency),
1427
National Railway, 81, 83, 1427
National Research Laboratory of Metrology, 140
National Space Development Agency (NASDA),
1427
Natural environment, 14
Natural logarithms, 142
New product development (NPD), 1498–1507
and design for six sigma, 1498–1507
and IDDOV, 1496, 1501–1507
leveraging of DFSS by, 1493–1494
phase gate process, 1505–1506
phase-phase gate structure, 1501–1503
phase processes, 1503–1505
project review process, 1506–1507
structure thinking in, 1486–1490
Suh/Senge model of, 1479–1480
synthesized case study, 1499–1500
New robust product development (NRPD),
1499, 1502–1503, 1507
Night-vision image-intensifier tube, 950
Nikkei Mechanical, 27, 28
Nikon Corp., 27, 28
Nippon Denso, 135, 157, 1439
Nippon Electric, 1439
Nippon Telegraph and Telephone Public
Corporation (NTT), 15, 73, 127–129, 199
Noise(s), 13. See also Signal-to-noise ratio
between-product, 319
criminal, 1516
definition of, 319, 381
environmental, 14–15
external, 1512
human errors, 13–14
inner, 319
interaction between design and, 57
internal, 1512
outer, 319
parameter design for minimization of,
319–320
types of, 319
and types of variability, 224
Noise factors, 29, 30, 381–382, 1434
compounded, 40–41, 360–361
indicative, 30
interactions of control factors and, 227
in robust design, 353
testing personnel in selection of, 1473
in tolerance design, 208–210
true, 30
in two-stage optimization, 40, 41
variation caused by repetitions, 242
Noise level, additivity for, 586–587
Noise variables, 176
Nominal system parameters, 40
Nominal-the-best characteristic, 17, 180
definition of, 442
quality level at factories for, 448–450
quality level calculation for, 19–21
quality loss function for, 180–186, 190, 445
tolerance design for, 208–210
Nominal-the-best scale, 1439
Nominal-the-best SN ratio, 139
Nomura Research Institute, 26
Nondynamic characteristics, 236
Nondynamic SN ratio, 139, 225, 264–271
dynamic vs., 139, 225
larger-the-better applications, 269
nominal-the-best applications, 265–268
operating window, 269–271
smaller-the-better applications, 268
for upstream quality, 243
uses of, 236
Nonlinear regression equations, 1406–1419
Normalization, 103
Normal state, 196
NPD, see New product development
NRPD, see New robust product development
NTT, see Nippon Telegraph and Telephone
Public Corporation
Number of workers, optimal, 469–471
O
Objective function(s), 10
definition of, 229, 355
and design of product quality, 26
generic function vs., 355
standard conditions for, 40
of subsystems, 313
transformability as, 140
tuning as, 57
Objective value, deviation from, 507–508
Observational equation, 535
Off functions, 138
Off-line quality engineering, 135, 504,
1496–1497. See also Robust engineering
for product design, 1497
role of, 1493
steps in, 504
for technology development, 1496
Ohm's law, 508
Oken, 143
Oki Electric Industry, 44–55, 144
Omega transformation, 292–293
Omelets, improvement in taste of, 1141–1147
On and off conditions, 83–85
One-factor-at-a-time approach, 291, 357
One-factor-by-one method, orthogonal arrays
vs., 594–595
One-way layout, 523–527
with equal number of repetitions, 523–527
with unequal number of repetitions, 527
On functions, 138
On-line process control:
equations for, 1598–1601
feedback control:
of process conditions, 1600–1601
by quality characteristics, 1598–1600
process diagnosis and adjustment, 1601
On-line quality engineering (OQE), 504,
1497–1498
applications of, 135
definition of, 14, 79, 116
feedback control:
process condition, based on, 468–473
product characteristics, based on, 454–467
major concept in, 135
for manufacturing process design, 1497–1498
origin of field, 135
parameter design vs., 437
process condition, feedback control based
on, 468–473
temperature control, 471–473
viscosity control, 469–470
process diagnosis and adjustment, 474–481
and cost of quality control systems during
production, 474–477
diagnostic interval, optimal, 477–479
number of workers, optimal, 469–471
product characteristics, feedback control
based on, 454–467
batch production process, 459–462
process control gauge design, 462–467
system design, 454–459
role of, 1493
Taguchi method in, 116–123
tolerancing and quality level in, 437–453
determination of tolerance, 442–448
larger-the-better characteristics, quality level
at factories for, 451–453
nominal-the-best characteristics, quality
level at factories for, 448–450
product cost analysis, 440
production division responsibilities,
440–441
related issues, 437–439
and role of production in quality
evaluation, 441–442
smaller-the-better characteristics, quality
level at factories for, 450–451
On-line quality engineering case studies,
1365–1398
automobile manufacturing process,
application to, 1367–1375
feedback control by quality characteristics,
1383–1388
manufacturing process, control of mechanical
component parts in, 1389–1394
preventive maintenance of bucket elevator,
1376–1382
semiconductor rectifier manufacturing,
1395–1398
Op-amp, 27
Operating cost, 18, 440
Operating window:
dynamic, 142, 298–308
for chemical reactions with side reactions,
298–308
for component separation evaluation,
666–671
for herbal medicine granulation, 695–704
for photographic systems evaluation,
651–658
nondynamic, 269–271
Optimal diagnostic interval, 477–479
Optimization. See also specific case studies
of concept/technology sets, 1497, 1512–1514
evaluation technique for, 148
of generic functions, 352
in research and development, 1502, 1515
robust, 1497, 1501, 1502. See also Parameter
design
improvement in, 1508
and innovation, 1515
for reliability improvement, 1515
in robust technology development, 361
stage, 40–55
example of, 44–55
formula of orthogonal expansion, 43–44
orthogonal expansion, 40–41
testing method and data analysis, 42–43
variability improvement, 41–55
of technology, 1497, 1512–1514
of tolerance design, 386
two-stage:
control factors in, 40, 41
noise factors in, 40, 41
two-step, 178, 353–354, 377, 1502. See also
Parameter design
concept step in, 1504
definition of, 60
in new product development, 1501–1502
in product design, 1439, 1497
in robust technology development,
353–354
in technological development, 1440
Optimum performance, exceeding, 379–380
OQE, see On-line quality engineering
Origin quality (functional quality), 354–355
Orthogonal arrays, 30, 356–358, 584–596,
1526–1527
assignment (layout) to, 490
inner, 322–323
Japanese experimentation using, 1432–1433
L4, 1530, 1602
L8, 597–600, 602, 604, 1530, 1603
L9, 1564, 1609
L9, 1595
L12, 589, 1531, 1604
L16, 1532, 1586, 1605
L18, 74, 589–594, 1564, 1610
L25, 1591
L27, 1565–1566, 1611–1612
L27, 1596
L32, 1536–1537, 1587
L36, 491–492, 589, 1593, 1614–1617
L50, 1592
L54, 1568
L64, 1550–1553, 1588–1589
L81, 1570–1571
layout of signal factors in, 97–98
linear graphs of, see Linear graphs
mixed, 286–289
objective of using, 357
one-factor-by-one method vs., 594–595
outer, 322–323
quality characteristics and additivity, 584–588
for reproducibility assessment, 30
split analysis for:
mixed, 286–289
two-level, 284–286
two-level, 284–286
types of, 596
Orthogonal expansion:
formula of, 43–44
meaning of, 537
for standard SN ratios and tuning robustness,
40–41
Orthogonal function, 543
Orthogonal polynomial equation, 254–257
Outer noise, 176, 319
Outer orthogonal arrays, 73, 322–323
Out of the Crisis (W. Edwards Deming), 1448
Output, signal-to-noise ratios based on, 236–237
Output characteristics, 1449–1450
P
Panasonic, 1427
Paradigm shift, 29
Parameter design, 318–338, 1439–1440. See also
Two-step optimization
applied to semiconductors, 141
control-factor-level intervals in, 609
definition of, 40, 177, 314, 1452
evaluation for, 39
history of, 140–141
initial example of, 140
key steps of, 387
management perspective on, 382–386
for manufacturing process design, 1498
for noise minimization, 319–320
objective of, 284, 438–439
on-line quality engineering vs., 437
optimum control factor levels in, 212
origin of, 1435–1436
primary uses of, 356
for product design, 18, 177–178, 1497
purpose of, 504
and quality loss function, 177–178
research on, 177
and robust design, 382, 1435, 1452. See also Robust design
in robust engineering, 382–386
for selected system, 40
in simulation, 144
systematization of, 1439
systems for, 39
for technology development, 1496
transition to technological development, 1439–1440
of water feeder valve, 330–338
of Wheatstone Bridge, 320–330, 349–350
Parameter Design for New Product Development (Genichi Taguchi), 140–141
Parameter estimation:
with experimental regression analysis, 484, 488–499
with least squares method, 485–489
Pareto charts, 1429
Particle size distribution, data showing, 587–588
Pass/fail criteria, specification limits vs., 181–182
Passive dynamic SN ratio, 132
Passive signal factors, 39, 97
Passive SN ratios, 236
Patterning:
fine-line patterning for IC fabrication, 758–763
transformability, patterning, 141
Pattern recognition, 397–398
design of general procedure for, 108–114
in earthquake-proof design, 14
in manufacturing, 418
with MT method, 108–109
with MTS, 109–114, 398
MTS/MTGS vs. other techniques for, 417
Pattern thinking, 1480–1485, 1490
PCA (principal component analysis), 417
PDCA (plan, do, check, action) cycle, 1426, 1427
P-diagrams, 1472
Pearson, Karl, 1424
Performance, 393. See also Functionality(-ies)
Personnel (for testing), 1470–1472
Peterson, Don, 157
Phadke, Madhav, 156, 1519
Pharmacology case studies:
drug efficacy, Mahalanobis distance in measurement of, 1238–1243
optimization of model ointment prescriptions for in vitro percutaneous permeation, 686–694
Phase gate process, 1505–1506
Phase modulation function, 68–72
Phase-phase gate structure, 1501–1503
Phase processes, 1503–1505
Photography:
evaluation of photographic systems with dynamic operating window, 651–658
photographic film, 28, 88–91
stability design of shutter mechanisms of single-use cameras, 965–972
Photoresist profile using simulation, 904–910
Physical condition:
additivity with, 585
data showing, 584–585
Physics, quality engineering vs., 31
Plan, do, check, action cycle, 1426, 1427
Plan-do-study-act cycle, 1481
Plasma spraying, development of functional material by, 852–857
Plastic injection molding:
development of technology by transformability, 857–864, 957–964
electrical connector insulator contact housing, optimization of, 1084–1099
gear, transformability of, 827–835
Point calibration, 247–250
Pollution:
monitoring, 24
post-World War II, 23–24
and quality improvement, 7–9
Polymerization reactions, optimization of, 643–650
Pot core transformer processing, variation in, 764–770
Power factor, 61
Power MOSFET:
ideal function for, 234
optimizing back contact of, 771–779
Power spectrum analysis, 513
Preconcept phase, 1503, 1504
Prediction. See also Forecasting
MTS for, 398
of urinary continence recovery in patients with brain disease, 1258–1266
Premature failures, 1498
Preparation (for testing), 1472–1473
Preventive maintenance, 79
for bucket elevator, simultaneous use of periodic maintenance and checkup, 1376–1382
design method for, 463–467
Price:
and production quality, 439
and total cost, 121
and unit manufacturing cost, 18, 439, 441
Primary error variation, 574–576, 580
Princeton University, 155
Principal component analysis (PCA), 417
Principle of ideality, 1489
Printed letter inspection, MTS for, 1293–1297
Procedures, testing, 1473–1474
Process capability, quality loss function and, 173–176
Process capability index, 173
Process condition control, 468–473
Process control, 32, 441
Process design:
robust, 58
steps in, 314. See also specific steps
value of SN ratios in, 227
Process development, SN ratio and reduction of cycle time, 228
Process diagnosis and adjustment, 474–481
and cost of quality control systems during production, 474–477
optimal diagnostic interval, 477–479
optimal number of workers, 469–471
Processing case studies:
adhesion condition of resin board and copper plate, optimization of, 890–894
casting conditions for camshafts by simulation, 900–903
deep-drawing process, optimization of, 911–915
electro-deposited process for magnet production, improvement of, 945–949
encapsulation process:
optimization of, 950–956
robust technology development of, 916–925
gas-arc stud weld process parameter optimization, 926–939
molding conditions of thick-walled products, optimization of, 940–944
photoresist profile using simulation, 904–910
plastic injection molding technology by transformability, 957–964
wave soldering process, optimization of, 895–899
Process maintenance design, 32
Product(s):
product planning:
definition of, 108
design life in, 57
and production engineering, 12–13
product quality, 10, 171
definition of, 442
design of, 26
quantitative evaluation of, 32, 438–439
standards for, 32
product species, 171
for testing, 1474–1475
Product case studies:
character recognition, Mahalanobis distance for, 1288–1292
printed letter inspection technique, 1293–1297
Product costs, 15–16
Product cost analysis, 440
Product design:
direct, 227
for financial systems, 10–11
focus of, 18
off-line quality engineering for, 1497
orthogonal array for modification in, 74
paradigm shift for, 359–360
parameter design in, 177–178. See also Parameter design
quality loss function and steps in, 176–179
robust, 58
robustness in, 318
steps in, 314. See also specific steps
system design in, 176–177. See also System design
tolerance design in, 179. See also Tolerance design
value of SN ratios in, 227
Product development, 1478–1491. See also New product development; Robust technology development
control-factor-level intervals in, 609–610
event thinking in, 1480–1481
paradigm shift for, 359–360
pattern thinking in, 1481–1485
product vs. engineering quality in, 10
with robust engineering, 167
SN ratio and reduction of cycle time, 228
structure thinking in, 1486–1490
Suh/Senge model of, 1479–1480
synergistic, 1490–1491
Taguchi system for, 1485–1486
Product development case studies:
airflow noise reduction of intercoolers, 1100–1105
articulated robots, functionality evaluation of, 1059–1068
brake boosters, reduction of boosting force variation of, 1106–1111
chattering noise in Series 47 feeder valves, reduction of, 1112–1121
clutch disk torsional damping system design, optimization of, 973–983
DC motor, optimal design for, 1122–1127
direct-injection diesel injector optimization, 984–1004
disk blade mobile cutters, optimization of, 1005–1010
D-VHS tape travel stability, 1011–1017
electrical connector insulator contact housing, optimization of, 1084–1099
linear actuator, simulation for optimization of, 1050–1058
linear proportional purge solenoids, 1032–1049
minivan rear window latching, improvement in, 1025–1031
new ultraminiature KMS tact switch optimization, 1069–1083
omelets, improvement in the taste of, 1141–1147
spindles, functionality evaluation of, 1018–1024
stability design of shutter mechanisms of single-use cameras by simulation, 965–972
steering system on-center robustness, 1128–1140
wiper system chatter reduction, 1148–1156
Production control, 439
Production control costs, 17
Production cost, 17, 18, 442
Production division responsibilities:
in quality evaluation, 441–442
for tolerancing in testing, 440–441
Production engineering, 12–13
for reduction of corporate costs, 18
responsibilities of, 17
Production processes, 18
Production quality, 438
Productivity:
definition of, 11
developing, 10–13
and economy, 5–7
improvement in, 6–7
and individual freedom, 9
and quality, 7–8
Profound knowledge, 1444
Programming ability:
evaluation of, 1343–1350
evaluation of capability and error in, 1335–1342
MTS evaluation technique for, 1178–1188
questionnaire for prediction of, 1171–1177
Project leaders, 391
Project review process, 1505–1507
Proportional equation:
reference-point, 247–250
zero-point, 241–247
Proseanic, Vladimir, 1301
Prototypes, 318
Pugh, Stuart, 164, 1482, 1495
Pugh analysis, 392
Pugh concept, 1495
Pure variation, 518, 559
Q
QC circle, 1427
QCRGs, see Quality Control Research Groups
QC story, 1428–1430
QE, see Quality engineering
QFD, see Quality function deployment
QFP, see Quad flat package
QLF, see Quality loss function
Quad flat package (QFP), 1189, 1190
Quadratic planning, 488
Quality. See also specific topics
as conformance to specifications, 171
and corporate organization, 10
and cost, 16, 440–441
definition of, 171, 314, 442
Deming's key principles of, 1444–1447
design, 438
downstream, 354, 355
engineering, 10
expressing, through loss function, 133–134
four levels of, 354
as generic term, 5–7
measurement of, 138
midstream, 354, 355
origin, 354–355
product, 10, 171
in product design for financial systems, 10–11
production, 438
and productivity, 7–8
risks to, 13–16
Taguchi's definition of, 171, 193
and taxation, 8–9
as tested in, 1471
types of, 10
upstream, 354, 355
Quality assurance department, 23–24, 38
Quality assurance (in Japan), 1427. See also Quality control
Quality characteristics, 16–17. See also specific types
and additivity, 584–588
classification of, 180
downstream, 269
high-rank, 340
larger-the-better, 17
low-rank, 340
nominal-the-best, 17
quality loss function and classification of, 180
smaller-the-better, 17
true, 1482
Quality control:
cost of, during production, 474–477
paradigm shift for, 359–360
specification limits vs., 182
traditional method of, 378–379
Quality control costs, 17
Quality Control (Ina Seito), 1433, 1434
Quality control in process, 116. See also On-line quality engineering
Quality Control Research Groups (QCRGs), 148–149
Quality Engineering for Technological Development (Genichi Taguchi), 133, 147
Quality engineering (QE). See also Robust design; Taguchi method
conventional research and design vs., 77, 78
definition of, 314
and design for six sigma, 1498
at Ford Motor Company, 16
functionality design in, 11
generic (essential) functions in, 8
history of, 127–168
in Japan, 127–152, 1428–1440
in United States, 153–168
in Japan, 1427, 1435–1440
main objective of, 23
management for, 25–38
automated process management, 31–33
in design process, 30–31
diagnosis to prevent recalls, 33–38
evaluation of functionality, 28–30
management role in research and development, 25–28, 39–40
off-line, 1496–1497. See also Off-line quality engineering
on-line, 1497–1498. See also On-line quality engineering
physics vs., 31
in research and development, 13
research and development strategy in, 39–55
cycle time reduction, 39–40
stage optimization, 40–55
responsibilities of, 23–24
as robust design, 58
standardization of, 148
Taguchi method vs., 58
timeline of, 129–130
transition and development of, 135
tuning in, 57–58
U.S. Taguchi method vs., 58
Quality Engineering Series, 133
Quality Engineering Society (Japan), 147, 149, 161, 162, 168, 1435, 1440
Quality function deployment (QFD), 164, 392
and DFSS process, 1494
in Japan, 1427, 1430–1431
origin of, 1482
Quality improvement, 7–9
Quality level:
and consumer loss, 17–18
and cost, 18–22
for larger-the-better characteristic, 19, 21–22, 451–453
for nominal-the-best characteristic, 19–21, 448–450
for smaller-the-better characteristic, 19, 21, 450–451
in tolerance design, 437–453
Quality loss function (QLF), 133–134, 171–179, 380
and classification of quality characteristics, 180
derivation of, 440
equations for, 1598–1601
justice-related aspect of, 175–176
for larger-the-better characteristic, 188, 191
for nominal-the-best characteristic, 180–186, 190, 445
and process capability, 173–176
quadratic representation of, 174
quality aspect of, 176
for smaller-the-better characteristic, 186–188, 190
and steps in product design, 176–179
parameter design, 177–178
system design, 176–177
tolerance design, 179
for tolerance design optimization, 386
Quality management:
design of experiments, 1431–1435
history of, 1423–1428
in Japan, 1423–1440
of manufacturing processes, 1498
quality engineering, 1435–1440
quality function deployment, 1430–1431
seven QC tools and QC story, 1428–1430
Quality programs, 392
Quality tables, 1482
Quality target, 438
R
Radio waves, 65
RCA Victor Company, 397
R&D, see Research and development
Reactive power, 61
Reagan, Ronald, 1444
Real-time operating system optimization, 1324–1334
Recalls, prevention of, 33–38
Reference-point calibration, 247–250
Reference-point proportional equation, 240, 241, 247–250
Regression analysis:
experimental, 149, 484, 486, 488–499
estimated values, selection of, 492, 494–499
initial values, setting of, 488–489
orthogonal array, selection and calculation of, 489–494
parameter estimation with, 484, 488–499
sequential approximation, 490, 492–496
MTS/MTGS vs., 417
Regression equations, parameter estimation in, 485–489
Reliability, 40. See also Functionality
Reliability analysis, 392
Reliability and Maintainability 2000 Program (U.S. Air Force), 159
Reliability design, Monte Carlo method in, 442
Reliability management, 1427
Repeatability, definition of, 224
Repetition, 1436
Reproducibility, 504–505
definition of, 224
in design of experiments, 504–505
evaluation of, 30–31
for hardware design, 58–59
meaning of term, 356–357
with orthogonal arrays, 594
Research and development (R&D), 6
application of quality engineering to, 1496
in banks, 11
cycle time reduction, 39–40
functionality prediction in, 28–30
management of, 10, 25–28, 39–40
need for, in Japan, 7
quality engineering strategy in, 39–55
cycle time reduction, 39–40
stage optimization, 40–55
robust optimization in, 1515
stage optimization, 40–55
orthogonal expansion for standard SN ratios and tuning robustness, 40–41
variability improvement, 41–55
strategy in, 13
two-step optimization in, 1502
management role in, 39–40
Research Institute of Electrical Communication, 57
Resin board, adhesion condition of copper plate and, 890–894
Resistance welding conditions for electronic components, 863–868
Resource constraints, 1472
Responses, 228, 503
Response tables, 361
Ringer Hut, 1428
Risk(s):
countermeasures against, 13–16
functional, 11
to quality, 13–16
as signals vs. noises, 13
Robots, articulated, 1059–1068
Robust design. See also Quality engineering; Robust engineering; Robust technology development
conventional meaning of, 73–74
function in, 1456
noise conditions in, 353
objective of, 58
origin of, 154, 155
as parameter design, 382. See also Parameter design
quality engineering as, 58
Taguchi method for, 1451–1452
as term, 57
TRIZ and axiomatic design for enhancement of, 1449–1468
comparison of disciplines, 1451–1458
further research, possibilities for, 1466–1468
limitations of proposed approach, 1465–1466
system output response, identification of, 1456, 1459–1465
two-step process for, 377
using Taguchi methods, 1451–1452
value of SN ratios in, 227
Robust engineering. See also Off-line quality engineering
development of, in U.S., 161, 162
implementation strategies for, 389–393
and bottom-line performance, 393
corporate leader/corporate team in, 390–391
critical components in, 389–390
education and training in, 392
effective communication in, 391–392
and integration strategy, 392–393
management commitment in, 390
institutionalization of, 164–168
managers' perspective on, 377–388
parameter design, 382–386
tolerance design, 386–388
misconceptions about, 1471
parameter design in, 382–386
product development with, 167
purpose of, 1496
and six sigma, 1517–1519
steps in, 387, 388
thinking approach in, 392
tolerance design in, 386–388
Robust engineering case studies:
chemical applications, 629–714
biochemistry, 631–642
chemical reaction, 643–658
measurement, 659–685
pharmacology, 686–694
separation, 695–714
electrical applications, 715–792
circuits, 717–745
electronics devices, 746–779
electrophoto, 780–792
mechanical applications, 793–1167
biomechanical, 795–805
fabrication line capacity planning using a robust design dynamic model, 1157–1167
machining, 806–835
material design, 836–857
material strength, 858–874
measurement, 875–889
processing, 890–964
product development, 965–1156
Robust Engineering (Genichi Taguchi, Subir Chowdhury, and Shin Taguchi), 1493
Robust optimization, 1497, 1501, 1502. See also Parameter design
improvement in, 1508
and innovation, 1515
for reliability improvement, 1515
in research and development, 1515
Robust quality, see Upstream quality
Robust technology development, 352–376
advantages of, 358–359
encapsulation process, 916–925
guidelines for application of, 359–361
key concepts in, 353
Minolta advanced cylindrical heater, 361–376
objective function vs. generic function, 355
orthogonal array, 356–358
selection of what to measure, 354–355
SN ratio, 228, 355–356
two-step optimization, 353–354
Rolls-Royce, 33–35
Ross, Louis, 1479
Rules of Dimensional Tolerances of a Plastic Part (JIS K-7109), 140
S
Safety design, 199–201
Safety factor(s), 32
derivation of, 443–448
equation for, 197–198
for larger-the-better characteristic, 447
for smaller-the-better characteristic, 446
in specification tolerancing, 195–201
Scatter diagrams, 1429
Scoping of project, 1472
Second Industrial Revolution, 5–24
automated management operations in, 31–32
elements of productivity in, 5–9
and management in manufacturing, 16–22
productivity development in, 10–13
and quality assurance departments, 23–24
risks to quality in, 13–16
Seed technology, 164
Seiki, Sanjo, 137
Seiko Epson, 433
Sekisui Chemical, 1427
Semiconductor rectifier manufacturing, 1395–1398
Senge, Peter, 1479, 1480
Sensitivity (SN ratios), 225, 226, 236, 360
Separation case studies:
herbal medicine granulation, dynamic operating window for, 695–704
particle-size adjustment in a fine grinding process for a developer, 705–714
Separation principles, 1454
Sequential approximation, 610, 613–616
in experimental regression analysis, 490, 492–496
with incomplete data, 613–616
Service costs, 15–16
Seven deadly diseases (Deming), 1446
Seven QC tools, 1428–1430
S-F analysis, see Substance-field analysis
S-field analysis, 1461–1468
Sharp Corp., 141
Shewhart, Walter, 155, 1424, 1443, 1446, 1481
Shewhart control charts, 1424–1426
Shimizu Corp., 1427
Shoemaker, Anne, 156
Showa Denko, 1424
Sigma level, 1510, 1511
Signals, 13, 57
Signal factors, 29–30
active vs. passive, 29, 97
for degree of disease, 143
in orthogonal array, 97–98
in software testing, 97–98
testing personnel in selection of, 1473
Signal-to-noise (SN) ratio, 13, 223–237
active, 236
active dynamic, 132–133
based on input and output, 236–237
based on intention, 236
benefits to using, 225, 227–228
classifications of, 234–237
for classified attributes, 234–236, 290–308
chemical reactions without side reactions, 296–298
chemical reactions with side reactions, 298–308
two classes, one type of mistake, 291–293
two classes, two types of mistake, 293–296
conceptual transition of, 131–133
for continuous variables, 234–236, 239–289
double signals, 275–284
estimation of error, 271–275
with known signal factor level interval, 256–260
with known signal factor level ratio, 262–263
linear equation, 250–254
linear equation using tabular display of orthogonal polynomial equation, 254–256
with no noise, 284–289
with no signal factor, 264–271
reference-point proportional equation, 247–250
split analysis, 284–289
with unknown true values of signal factor levels, 264
when signal factor levels can be set up, 260–262
zero-point proportional equation, 241–247
correspondence course on, 132
decibel value of, 1438
definition of, 224, 225, 314
determining evaluation scale of, 148
differences in (gain), 79
direct relationship of economy and, 225
dynamic, 139, 225
nondynamic vs., 139, 225
for origin quality, 354–355
in U.S. applications, 160
uses of, 139, 236
in electric characteristics, 140–141
elements of, 224–226
fixed vs. multiple targets, 236
functional evaluation, 133
generic function, 133
ideal functions based on, 228–234
improving, 41–55
larger-the-better, 95, 139
in machining, 135–137
in measurement systems evaluation, 225
in metrology field, 132
in MTS, 398
new method of calculating, 144–147
nominal-the-best, 139
nondynamic, 139, 225, 264–271
dynamic vs., 139, 225
larger-the-better applications, 269
nominal-the-best applications, 265–268
operating window, 269–271
smaller-the-better applications, 268
for upstream quality, 243
uses of, 236
origin of, 1436, 1438–1439
orthogonal expansion for, 40–41
passive, 236
passive dynamic, 132
in process design, 227
in product design, 227
for reactions without side reactions, 296–298
for reactions with side reactions, 298–308
and reduction of process/product development cycle time, 228
and reproducibility, 505
in robust design, 227
in robust technology development, 355–356
selection of, 240
for shape retention, 140
in simulations, 144
smaller-the-better, 95, 139
standard, orthogonal expansion and tuning robustness for, 40–41
in technology development, 225
and traditional approach for variability, 224
of transformability, 139–140
Significance test, 521
Significant factorial effects, 521
Simulation(s), 1475–1476
casting conditions for camshafts, 900–903
design, 40, 72–73
linear actuator, optimization of, 1050–1058
orthogonal array L36 for, 357
parameter design in, 144
photoresist profile, optimization of, 904–910
SN ratio applied to, 144
stability design of shutter mechanisms of single-use cameras, 965–972
technological development using, 148
Simultaneous engineering, 276
Single-use cameras, stability design of shutter mechanisms of, 965–972
Six sigma, 161, 162. See also Design for six sigma
improvement in, 1508
and robust engineering, 1517–1519
thinking levels used in, 1490
and whack-a-mole engineering, 1508
Slope calibration, 228, 247
Small algae production, optimization of, 637–642
Smaller-the-better characteristic, 17, 180
quality level at factories for, 450–451
quality level calculation for, 19, 21
quality loss function for, 186–188, 190
safety factor for, 446
tolerance design for, 208–210
Smaller-the-better SN ratio, 95, 139
Smart Cards, 1084
Smith, Larry R., 1519
SN ratio, see Signal-to-noise ratio
SN Ratio Manual (Genichi Taguchi), 131
Software:
simulation, 1475–1476
for test results analysis, 1475
Software testing, 425–433
main difficulty in, 425–426
Taguchi method in, 97–102, 425–433
layout of signal factors in orthogonal array, 97–98
software diagnosis using interaction, 98–102
system decomposition, 102
two types of signal factor and software, 97
typical procedures for, 426–433
Software testing and application case studies, 1299–1364
algorithms:
diesel engine software control strategy, optimization of, 1301–1309
video compression, optimization of, 1310–1323
debugging, streamlining of, 1360–1364
electronic warfare systems, robust testing of, 1351–1359
prediction of programming ability from questionnaire, 1171–1177
programmer ability in software production, evaluation of, 1343–1350
programming, evaluation of capability and error in, 1335–1342
real-time operating system, robust optimization of, 1324–1334
Soldering:
automatic inspection of joints, Mahalanobis distance in, 1189–1195
optimization of wave soldering process, 895–899
Sony, 134, 1427
SPC, see Statistical process control
Special cause, 1446–1447
Specifications, conformance to, 171
Specification limits:
pass/fail criteria vs., 181–182
quality control vs., 182
Specification tolerancing, 192–207
deteriorating characteristics in, 205–207
distribution of tolerance, 204–205
safety factor in, 195–201
when central value of low-rank characteristics has been determined, 201–204
Specified quality, see Midstream quality
Spectrum analysis, ANOVA and, 560
Speed difference method, 142, 299
Speed ratio method, 142, 302
Spindles, functionality evaluation of, 1018–1024
Split analysis, 284–289
SQC, see Statistical quality control
Stability, definition of, 224
Stage optimization, 40–55
example of, 44–55
formula of orthogonal expansion, 43–44
orthogonal expansion, 40–41
testing method and data analysis, 42–43
variability improvement, 41–55
Standards, quality, 32
Standard deviation, 517
Standardization and Quality Control, 133, 147, 156
Standardization and Quality Management, 147
Standard of living, 7, 12
Statistical Methods for Research Workers (Ronald A. Fisher), 1431
Statistical process control (SPC), 135, 1481. See also On-line quality engineering
Statistical quality control (SQC), 157, 1425, 1426
Steel, high-performance, machining technology for, 819–826
Steering system on-center robustness, 1128–1140
Stepwise regression, MTS/MTGS vs., 417
Strategic planning, 13
Strategic planning team, 391, 392
Strategy, 13
definition of, 23
for R&D, 25, 26
Sun Tzu on the Art of War as, 25
technological, 39
Structure thinking, 1480, 1486–1490
Substance-field model, 1453
Substance-field (S-F) analysis, 1453, 1459, 1462
Subsystems, 313
Suh, Nam P., 1454–1456, 1479–1480, 1487
Suh/Senge model, 1479–1480
Sullivan, Larry, 157
Sum, 506–507
Sumitomo Electric Industries, 141
Sum of squares, 509
Sun Tzu on the Art of War, 25
Synergistic product development, 1490–1491
Synthesis stage, 40, 1453
System(s), 102
creation of, 39
optimization of, 30
of profound knowledge, 1444
selection of, 26–28, 40, 176–177, 313
System design, 313–317, 387–388, 504
definition of, 313, 1452
and functionality of selected system, 314–317
manufacturing process design, 1497
product design, 176–177, 1497
and quality loss function, 176–177
robust design, 1452
and selection of system, 313
technology development, 1496
System engineering, 1482
System output responses, 1449–1451, 1456, 1459–1465
System testing, 425. See also Software testing
T
Tactics, 13
management systems as, 23
technology as, 25
Taguchi, Fusaji, 153
Taguchi, Genichi. See also Taguchi method
on American quality management, 1498
at Bell Labs, 156–157
biography of, 153–156
on bridging MTS methods, 143
and cells in studying main/side effects, 142
and cost vs. quality, 138
and design of experiments, 149, 1431, 1432, 1434–1436
and development of quality engineering, 127–131
at Ford Motor Company, 157
foresight of, 131
on function in plastic injection molding, 140
history of work, 153–156
honors received by, 162
impact of work, 148
at Konica, 133
and Mahalanobis distance, 1440
and management by total results, 149
and MTS application to fire alarm, 143
and natural logarithms, 142
and on/off functions, 138
and origin of parameter design, 1434
and output characteristic identification, 1449–1450
and parameter design, 141
and pattern level of thinking, 1482
publications by, 1625–1628
in Quality Engineering Society, 161
quality engineering's reliance on, 144
and relationships between time/work and electric power, 137
and selection of interactions for experiments, 1433
semiannual U.S. visits by, 160
and sensor welding study, 141
and signal and noise factors as consumer conditions, 138
and simulation, 144
and two-step optimization, 1439, 1440
at Xerox, 158
Taguchi, Shin, 157, 158, 1493
Taguchi method, 56–133. See also Quality engineering
in chemical engineering, 85–93
evaluation of images, 88–91
functionality such as granulation or polymerization distribution, 91
function of an engine, 86–87
general chemical reactions, 87–88
separation system, 91–93
and Deming's teachings, 1447–1448
in design for six sigma, 1493–1494
in electronic and electrical engineering, 59–73
functionality evaluation of system using power, 61–65
quality engineering of system using frequency, 65–73
essence of, 58
five principles of, 1452
and leveraging of DFSS, 1493–1494
and Mahalanobis distance/space, 114–116
and Mahalanobis–Taguchi method, 102–109
and Mahalanobis–Taguchi System, 102–103, 109–116
in mechanical engineering, 73–85
conventional meaning of robust design, 73–74
functionality design method, 74–79
generic function of machining, 80–83
problem solving and quality engineering, 79–80
signal and output, 80
when on and off conditions exist, 83–85
in medical treatment and efficacy experimentation, 93–97
misconceptions of, 163, 164
naming of, 158
in on-line quality engineering, 116–123
in product development, 1485–1486
quality engineering vs., 58
for robust design, 1451–1452
in software testing, 97–102, 425–433
layout of signal factors in orthogonal array, 97–98
software diagnosis using interaction, 98–102
system decomposition, 102
two types of signal factor and software, 97
Taguchi Methods for Robust Design (Yuin Wu and Alan Wu), 1493
Taguchi paradigm, see Taguchi method
Taguchi quality engineering, 1482. See also Taguchi method
Takahashi, Kazuhito, 84
Takenaka Corp., 1427
Takeyama, Hidehiko, 136
Tanabe Seiyaku, 1424
Tananko, Dmitry, 1301
Target value/curve, 60–61
Taxation:
and quality, 8–9
for quality improvement, 8–9
Teams:
familiarity with robust design methods, 1471
for robust engineering implementation, 390–392
self-assessment of, 1506
and strategic planning, 391, 392
testing considerations for, see Testing
Technical Committee of the Quality Engineering Society, 149
Technical readiness, 1505
Technological Development in Electric and Electronic Engineering (Kazuhiko Hara), 141
Technological development methodology, 147
Technological Development of Transformability (Ikuo Baba), 140
Technological strategy, 39. See also Generic technology
Technology(-ies):
generic, 26
optimization of, 1497, 1512–1514
as tactics, 25
Technology development. See also Robust technology development
dynamic SN ratios in, 225
off-line quality engineering for, 1496
transition of parameter design to, 1439–1440
using simulation, 148
Technometrics, 1435
Temperature control, 471–473
Test for additional information, MTS/MTGS vs., 417
Testing, 1470–1477
analysis of results, 1475
design quality test, 38
during development, 360
documentation of, 1477
by external service providers, 1476–1477
four Ps of, 1470
measurement and evaluation technologies, 26
personnel for, 1470–1472
preparation for, 1472–1473
procedures for, 1473–1474
product for, 1474–1475
simulation, 1475–1476
of software, see Software testing
of systems, 425
tolerancing in, 437–453
determination of tolerance, 442–448
issues related to, 437–439
product cost analysis, 440
production division responsibilities for, 440–441
quality level at factories for larger-the-better characteristics, 451–453
quality level at factories for nominal-the-best characteristics, 448–450
quality level at factories for smaller-the-better characteristics, 450–451
role of production in quality evaluation, 441–442
Test planning, 392
Theory of inventive problem solving, see TRIZ
Thermal ink jet image quality, 1196–1207
Thermoresistant bacteria, detection method for, 679–685
Thermotherapy, 96, 97
Thick-walled products, optimization of molding conditions for, 940–944
Thinking, Senge's levels of, 1480
event thinking, 1480–1481
pattern thinking, 1480–1485
structural thinking, 1480, 1486–1490
Three-level orthogonal arrays, 596
Three-or-more-class classified attributes, 234
Tile manufacturing using industrial waste, 869–874
Toagosei and Tsumura & Co., 133, 142
Toa Gosei Chemical Company, 232
Tokumaru, Soya, 132
Tolerance, 192–193
definition of, 193
determination of, 16–17, 442–448
functional, 193
manufacturing, 194–195
Tolerance design, 208–220, 340–351. See also Specification tolerancing
after parameter design, 318
analysis of variance in, 342–346
definition of, 79, 340
for larger-the-better characteristics, 210–211
management perspective on, 386–388
for manufacturing process design, 1498
for multiple factors, 212–220
for nominal-the-best characteristics, 208–210
optimization of, 386
for product design, 179, 1497
purpose of, 504
quality and cost balanced by, 16
and quality characteristics, 16–17
and quality loss function, 179
in robust design, 1452
in robust engineering, 386–388
for smaller-the-better characteristics, 208–210
steps in, 212–220, 386–387
for technology development, 1496
two methods for, 442
of Wheatstone bridge, 346–351
Tolerance specification, 1497. See also Specification tolerancing
Tolerancing, 439
Toner charge measurement, 746–752
Tools:
design, 30
preparation of, 39
Tosco and Sampo Chemical, 133
Total output, 314
Total quality control (TQC), 1426–1428
Total quality management (TQM), 1426, 1428,
1435
Total social productivity, 1112
Total sum squared, 509
Total variation, 509
Toyota Motor Company, 12, 26, 135, 155
TQC, see Total quality control
TQM, see Total quality management
Trading, functionality evaluation in, 148
Training (in robust engineering
implementation), 392
Transformability, 229
machining technology for high-performance steel by, 819–826
as objective function, 140
origin of, 137
of plastic injection-molded gear, 827–835
plastic injection molding technology development by, 957–964
Transmitting wave, stability of, 65–67
Transparent conducting thin films, ideal function for, 233
TRIZ (theory of inventive problem solving), 1452–1454, 1456–1458
comparison of axiomatic design, robust
design, and, 1456
function in, 1456
and pattern-level thinking, 1490
and structure thinking, 14891490
True noise factors, 30
True quality characteristics, 1482
Tuning, 57–58, 319
cause-and-effect relationship in, 41
orthogonal expansion for robustness in, 40–41
in quality engineering, 57–58
in two-stage optimization, 60
Two-class classified attributes, 234
Two-level control factors, 357
Two-level orthogonal arrays, 596
Two-piece gear brazing conditions, optimization of, 858–862
Two-stage optimization, 40–41
Two-step optimization, 178, 353–354, 377. See also Parameter design
concept step in, 1504
definition of, 60
in new product development, 1501–1502
in product design, 1439, 1497
in robust technology development, 353–354
in technological development, 1440
Two-way layout, 552–562
analysis of variance for, 555–562
with decomposition, 563–572
one continuous variable, 563–567
two continuous variables, 567–572
factors and levels in, 552–555
with repetitions, 573–583
different number of, 578–583
same number of, 573–578
U
Uehara, Yoshito, 136
Ueno, Kenzo, 139, 140
Ultraminiature KMS tact switch optimization, 1069–1083
Ultra trace analysis, dynamic optimization in, 659–665
UMC, see Unit manufacturing cost
Uncoupled design, 1454, 1455
Unemployment:
and productivity improvement, 6
and quality improvement, 8
Union of Japanese Scientists and Engineers, see
Japanese Union of Scientists and Engineers
United States:
history of quality engineering in, 153–168
from 1980 to 1984, 156–158
from 1984 to 1992, 158–160
from 1992 to 2000, 161–164
from 2000 to present, 164–168
and history of Taguchi's work, 153–156
paradigm shift in, 29
productivity improvement in, 6
quality engineering in, 5–8
quality targets in, 16
U.S. Air Force, 159
U.S. Census Bureau, 1443
U.S. Department of Agriculture, 1443
U.S. Department of Defense, 159
Unit manufacturing cost (UMC), 18, 121, 439,
441
University of Electro-Communications, 133, 137,
138, 141, 143
Upstream quality (robust quality), 354, 355
Urinary continence recovery in patients with brain disease, prediction of, 1258–1266
Useful effects (TRIZ), 1453
Useful part (ideal functions), 1509
User friendly design, 14
Utilization of interaction between control and
noise factors, see Parameter design
Utilization of nonlinearity, see Parameter design
V
Values, objective, 507–508
Value analysis (VA), 1482
Value engineering (VE), 1482
Variability, 225, 226
dynamic, 236
improving, 4155
nondynamic, 236
reducing, 40
repeatability as, 224
reproducibility as, 224
SN ratios and traditional approach for, 224
stability as, 224
Variables:
continuous:
signal-to-noise ratios for, 234–236, 239–289
in two-way layout experiments, 563–572
value of classified attributes vs., 239
noise, 176
Variance, 509–514. See also Analysis of variance
definition of, 509
error, 511. See also Error variance
Variation, 509–514
decomposition of, see Decomposition of variation
definition of, 509
error, 513, 517, 518, 559
between experiments, 574
of primary error, 574–576
between primary units, 574
pure, 518, 559
range of, 1510–1513
within repetitions, 575–576, 581
of the secondary error, 575–576, 581
total, 509
Variation of the general mean, 520
VE (value engineering), 1482
Video compression, optimization of, 1310
Virtual testing, 14751476
Viscosity control, 469470
Vision, project, 391
Visnepolschi, Svetlana, 1301
Visual pattern recognition, 397398
Voltage-controlled oscillators, ideal function for,
234
W
Wakabayaski, Kimihiro, 137
Warranty costs, 15–16
Water feeder valve, parameter design of, 330–338
Wave soldering:
ideal function for, 233
optimization of process, 895–899
Weather forecasting, MTS in, 419
Welch, Jack, 161
Welding:
gas-arc stud weld process parameter optimization, 926–939
ideal function of, 230
resistance welding conditions for electronic components, 863–868
Whack-a-mole engineering, 1508
Whack-a-mole research, 73, 74
Wheatstone bridge:
parameter design of, 320–330, 349–350
purpose of, 320
tolerance design of, 346–351
Window latching, minivans, 1025–1031
Wiper systems:
chatter reduction in, 1148–1156
ideal function of, 230
Working hours, estimation of, 1401–1405
Working mean, 506–507
Wu, Alan, 1493
Wu, Yuin, 157–158, 1493
X
Xerox Corporation, 18, 158, 161, 433, 441,
1439, 1503, 1505
Y
Yaesu Book Center, 1428
Yahata Steel, 1424
Yano, Hiroshi, 132–133, 136, 140, 161
Yano Laboratory, 133
Yield, 235, 293
Yokogawa Hewlett-Packard, 141, 313
Yoshino, Fushimi, 142
Youden squares, 617–625
calculations with, 620–623
derivations with, 623–625
objective of using, 617–619
Z
Zero approximation, 613
Zero operating window, 269, 270
Zero-point calibration, 228
Zero-point proportional equation, 240247
Zero-point proportionality, 137
Zlotin, Boris, 1301
Zusman, Alla, 1301