
20TH ISPE INTERNATIONAL CONFERENCE

ON CONCURRENT ENGINEERING


20th ISPE International Conference
on Concurrent Engineering
Proceedings
Edited by
Cees Bil
RMIT University
John Mo
RMIT University
and
Josip Stjepandić
PROSTEP AG

Amsterdam Berlin Tokyo Washington, DC


© 2013 The Authors and IOS Press.
This book is published online with Open Access by IOS Press and distributed under the terms of the
Creative Commons Attribution Non-Commercial License.
ISBN 978-1-61499-301-8 (print)
ISBN 978-1-61499-302-5 (online)
Library of Congress Control Number: 2013948068
Publisher
IOS Press BV
Nieuwe Hemweg 6B
1013 BG Amsterdam
Netherlands
fax: +31 20 687 0019
e-mail: order@iospress.nl
Distributor in the USA and Canada
IOS Press, Inc.
4502 Rachael Manor Drive
Fairfax, VA 22032
USA
fax: +1 703 323 3668
e-mail: iosbooks@iospress.com
LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.
PRINTED IN THE NETHERLANDS



Letter from the Editors
Dear CE2013 delegates,
It is our pleasure to present to you the proceedings of the 20th ISPE International Con-
ference on Concurrent Engineering. We are delighted that the International Society for
Productivity Enhancement (ISPE) has chosen Melbourne as the venue for their anniversary
conference. This is the first time the conference has been held in Australia, and we hope that
you will also find the time to explore the sights of our beautiful city, state and country.
Concurrent Engineering (CE), as a concept, initiates processes with the goal of
improving product quality, production efficiency and overall customer satisfaction. The
definition of a product has evolved from manufacturing and supplying goods only, to
providing goods with added value, to eventually promoting a complete service business
solution with support from introduction into service and operations through to decommissioning.
Services are becoming increasingly important to the economy: the service industry has
grown significantly, and even companies that fall outside the traditional service indus-
try are more and more reliant on service-based business. For example, in Japan, USA,
Germany and Russia more than 60% of the GDP is due to service-based activities.
The 20th ISPE International Conference on Concurrent Engineering will carry the
theme "Product and Service Engineering in a Dynamic World". This theme was chosen
to celebrate the first students graduating from the new Master of Engineering (System
Support Engineering) (MSSE) at RMIT University. This programme was a response to
a requirement for an educational programme specifically focused on training industry
leaders in the design and implementation of support solutions for complex engineering
systems, as organisations are becoming increasingly sophisticated, performance-based
and cost competitive. The Master degree was developed in cooperation with industry
partners.
We are looking forward to hearing about new developments and insights in the
various sessions including service engineering, cloud computing and digital manufac-
turing, knowledge-based engineering and sustainability in concurrent engineering.
Again, welcome to Melbourne and we wish you an inspiring conference and a
wonderful stay in Australia.
Cees Bil
John Mo
Conference Chairmen

Contents
Letter from the Editors v
Cees Bil and John Mo
System Support Engineering Application: A Refinery Case 1
Mohammed Alsaidi and John P.T. Mo
Software Tool Development to Improve the Airplane Preliminary Design Process 12
W.A.J. Anemaat, B. Kaushik, J. Carroll and J. Jeffery
A Software Architecture to Synchronize Interactivity of Concurrent Simulations
in Systems Engineering 19
Christian Bartelt, Volker Böß, Jan Brüning, Andreas Rausch,
Berend Denkena and Jean Paul Tatou
Learning and Concurrent Engineering in the Development of a High Technology
Product/Service System 30
Ronald C. Beckett
Cloud Automatic Software Development 40
Hind Benfenatki, Hamza Saouli, Nabila Benharkat, Parisa Ghodous,
Okba Kazar and Youssef Amghar
A Hybrid Model for New Product Development – A Case Study in the Brazilian
Telecommunications Segment 50
Odivany P. Sales, Teófilo M. de Souza and Osiris Canciglieri Júnior
Improved Engineering Design Strategy Applied to Prosthesis Modelling 60
Thiago Greboge, Marcelo Rudek, Andreas Jahnen and
Osiris Canciglieri Júnior
Understanding the Customer Involvement in Radical Innovation 72
Danni Chang and Chun-Hsien Chen
A Novel System for Customer Needs Management in Product Development 81
Wunching Chang, Chun-Hsien Chen and Xingyu Chen
Kansei Clustering Using Design Structure Matrix and Graph Decomposition
for Emotional Design 91
Chun-Hsien Chen, Yuexiang Huang, Li Pheng Khoo and Danni Chang
Research on a Framework of Task Scheduling and Load Balancing in
Heterogeneous Server Environment 101
Tifan Xiong, Chuan Wang, Li Wan and Qinghua Liu
Empirical Performance Evaluation in Collaborative Aircraft Design Tasks 110
Evelina Dineva, Arne Bachmann, Erwin Moerland, Björn Nagel and
Volker Gollnick




A Task Oriented Approach to Documentation and Knowledge Management of
Systems Enabling Design and Manufacture of Highly Customized Products 119
Fredrik Elgh
Beyond Concurrent Engineering: Parallel Distributed Engineering for More
Adaptability and Less Energy Consumption 129
Shuichi Fukuda
Business-Product-Service Portfolio Management 137
Giuliani Paulineli Garbi and Geilson Loureiro
Development of Three Dimensional Measured Data Management System in
Shipbuilding Manufacturing Process 147
Kazuo Hiekata, Hiroyuki Yamato and Shogo Kimura
A Design Method for Unexpected Circumstances: Application to an Active
Isolation System 155
Masato Inoue, Masaki Takahashi and Haruo Ishikawa
An Approach of Body Movement-Based Interaction Towards Remote
Collaboration 163
Teruaki Ito
How to Successfully Implement Automated Engineering Design Systems:
Reviewing Four Case Studies 173
Joel Johansson and Fredrik Elgh
Service Process Estimation and Improvement on Verbal Characteristics 183
Leonid Kamalov, Alexander Pokhilko, Ivan Gorbachev and
Evgeny Kamalov
Lean Approach in Concurrent Engineering Applications 190
Şenay Karademir and Can Cangelir
Physics-Based Distributed Collaborative Design for Aerospace Vehicle
Development and Technology Assessment 198
Raymond M. Kolonay
Provisioning Service Resources for Cloud Manufacturing 216
Lingjun Kong, Wensheng Xu and Jianzhong Cha
A Virtual Environment for Collaborative Engineering with Formal Verification 225
Wolfgang Herget, Christopher Krauß, Andreas Nonnengart,
Torsten Spieldenner, Stefan Warwas and Ingo Zinnikus
Development of a Parametric Form Generation Procedure for
Customer-Oriented Product Design 235
Ming-Chyuan Lin, Yi-Hsien Lin, Ming-Shi Chen and Jenn-Yang Lin
The Design of Production Strategy Based on Risk Analysis Using Process
Simulation 244
Taiga Mitsuyuki, Hiroyuki Yamato, Kazuo Hiekata and Bryan Moser
Development of Support System Solutions for Capability Transition 254
Kevin Downey and John P.T. Mo

A Study on Method of Measuring Performance for Project Management 264
Shinji Mochida
A Simulation-Based Approach to Decision Support for Lean Practitioners 274
Effendi Bin Mohamad, Teruaki Ito and Dani Yuniawan
Focussed Web Based Collaboration for Knowledge Management Support 284
Marc Oellrich and Frank Mantwill
QFD Application on Developing R&D Project Proposal for the Brazilian
Electricity Sector: A Case Study – System Assets Monitoring and Control for
Power Concessionaires 293
João Adalberto Pereira, Osíris Canciglieri Júnior,
Juliana Pinheiro de Lima and Samuel Bloch da Silva
Methodological Proposal to Determine a Suitable Implant for a Single Dental
Failure Through CAD Geometric Modelling 303
Anderson Luis Szejka, João Adalberto Pereira, Marcelo Rudek and
Osiris Canciglieri Júnior
Design for Sustainability of Product-Service Systems in the Extended
Enterprise 314
Margherita Peruzzini, Michele Germani and Eugenia Marilungo
A Case Study on Implementing Design Automation: Identified Issues and
Solution for Documentation 324
Morteza Poorkiany, Joel Johansson and Fredrik Elgh
A Framework and Generator for Large Parameterized Feature Models 333
Robert Rüger and Georg Rock
Visual Planning and Scheduling of Industrial Projects with Spatial Factors 343
Vitaly Semenov, Anton Anichkin, Sergey Morozov, Oleg Tarlapan and
Vladislav Zolotov
DMU Management – Product Structure and Master Geometry Correlation 353
Gülden Şenaltun and Can Cangelir
Implementation of an Artificial Neuro-Electronic System for Moisture Content
Determination of Subbase Soil 361
N.S. Shetu and M.A. Masum
An Electricity Market Trade System for Next Generation Power Grid 371
Kyohei Shibano, Kenji Tanaka and Rikiya Abe
Parametric Mogramming with Var-Oriented Modeling and Exertion-Oriented
Programming Languages 381
Michael Sobolewski, Scott Burton and Raymond Kolonay
Conceptual Design of Sustainable Liquid Methane Fuelled Passenger Aircraft 391
M. Burston, T. Conroy, L. Spiteri, M. Spiteri, C. Bil and G.E. Dorrington
Securing Data Quality Beyond Change Management in Supply Chain 401
Sergej Bondar, Christoph Ruppert and Josip Stjepandić


Multi-Objective Optimization of Low-Floor Minibus Suspension System
Parameters 411
Goran Šagić, Zoran Lulić and Josip Stjepandić
Prospective Evaluation of Assembly Work Content and Costs in Series
Production 421
Ralf Kretschmer, Stefan Rulhoff and Josip Stjepandić
FDMU – Functional Spatial Experience Beyond DMU? 431
Shuichi Fukuda, Zoran Lulić and Josip Stjepandić
Automatic Generation of Curved Shell Plates Processing Plan Using Virtual
Templates for Knowledge Extraction 441
Jingyu Sun, Kazuo Hiekata, Hiroyuki Yamato, Norito Nakagaki and
Akiyoshi Sugawara
Global Logistic Management for Overseas Production Using a Bulk Purchase
4PL Model 451
Amy J.C. Trappey, Charles V. Trappey, Ai-Che Chang, W.T. Lee and
Hsueh-Yi Cho
Constructing a Hierarchical Learning Cost Curve for Photovoltaic System 461
Amy J.C. Trappey, Charles V. Trappey, Penny H.Y. Liu, Lee-Cheng Lin
and Jerry J.R. Ou
Process Modeling for Supporting Risk Analysis in Product Innovation Chain 469
Germán Urrego-Giraldo and Gloria Lucía Giraldo G.
Sustainability Indicators for the Product Development Process in the Auto Parts
Industry 481
Paulo Roberto Savelli Ussui and Milton Borsato
An Ontology-Based Approach for Aircraft Maintenance Task Support 494
Wim J.C. Verhagen and Richard Curran
A Predictive Method for the Estimation of Material Demand for Aircraft
Non-Routine Maintenance 507
M. Zorgdrager, R. Curran, W.J.C. Verhagen, B.H.L. Boesten and
C.N. Water
A Modelica-Based Modeling, Simulation and Knowledge Sharing Web
Platform 517
Li Wan, Chao Wang, Tifan Xiong and Qinghua Liu
Challenges of Online Trade Upon Retail Industry 526
Kin Kong Wu and Chun Hei Wu
Cloud Technology for Service-Oriented Manufacturing 539
Xun Xu
Overview on the Development of Concurrent Design Facility 550
Dajun Xu, Cees Bil and Guobiao Cai


A Low Cost CDF Framework for Aerospace Engineering Education Based on
Cloud Computing 560
Dajun Xu, Cees Bil and Guobiao Cai
A Framework for Completeness in Requirements Engineering: An Application
in Aircraft Maintenance Scenario 568
Marina M.N. Zenun and Geilson Loureiro
Aero-Structure Direct Operating Cost Estimation and Sensitivity Analysis
Within a Knowledge Based Engineering System 578
Xiaojia Zhao and Richard Curran
Heat Diffusion Method for Intelligent Robotic Path Planning 588
Jeremy Hills and Yongmin Zhong
Subject Index 603
Author Index 605
System Support Engineering Application: A Refinery Case

Mohammed ALSAIDI (a,1) and John P.T. MO (a)

(a) School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University, Australia
Abstract. Modern refineries are complex and very high in value and production. They are expected to function for years to come, with the ability to handle changes in technology and feed quality. The aging of a refinery and the continuous increase in the number of vendors and contractors force the refinery's operation management to design a support system which can capture these changes. Furthermore, accurate performance measurement and risk evaluation processes are highly needed. Therefore, this paper explores the nature of support system design for a refinery. The research work explores the operation support system from a range of perspectives, interviewing managers from across the refinery organization. The factors contributing to the complexity of a support system are described in context, and a framework is presented which clusters them into several key areas. It is proposed that this framework may then be used as a tool for the analysis and management of support systems. The paper concludes with a discussion of potential applications of the framework and opportunities for future work.
Keywords: System support engineering, refinery, performance, complexity, support system management.
Introduction
Modern refineries are complex and very high in value and production [1]. They are expected to function for years to come, with the ability to handle changes in technology and feed quality. Refinery stakeholders are demanding more value out of their assets by ensuring sustainability in operation. This includes availability, readiness, extended operation and other value schemes. The literature shows that the complex engineering industry is adopting the whole-of-system approach to satisfy customers' needs. Support systems have to focus on the links, interactions and alignments of the elements [2]. As refinery stakeholders intend (and in some cases have already decided) to outsource support services and activities, the service provider will take a significant part of the risk of sustaining the capabilities of the refinery for the duration of the service contract [3-9]. In other words, the performance of the refinery relies on, and is directly affected by, the service of the support provider(s). It is in the interest of the refinery owners (operators) that the refinery performs as they wish. Hence, the relationship between the support service stakeholders should be clearly drawn and understood with regard to the implications and the nature of performing together to get the most out of the system.

1 Corresponding Author: Mohammed ALSaidi, RMIT University, Australia; E-mail: mohammed.s.alsaidi@gmail.com
The aging of a refinery and changes in feed quality (crude oil) lead to a continuous increase in the number of contractors and processing units. This increase forces the refinery's operation management to design a support system which can capture these changes. Furthermore, accurate performance measurement and risk evaluation processes need to be developed in alliance with the support system development.
Therefore, this paper explores the nature of support system design for a refinery. The research work explores the operation support system from a range of perspectives, interviewing managers from across the refinery organization. The factors contributing to the complexity of a support system are described in context, and a framework is presented which clusters them into several key areas. It is proposed that this framework may then be used as a tool for the analysis and management of support systems. The paper concludes with a discussion of potential applications of the framework and opportunities for future work.
1. The refinery case study
The refining process simply produces petroleum products and by-products by treating crude oil [10, 11] through three key processes: distillation, conversion and clean-up. A petroleum refinery is an industrial process plant where crude oil is processed [12] into products such as petroleum naphtha, gasoline, diesel fuel, asphalt base, heating oil, kerosene, and liquefied petroleum gas [13, 14].


Figure 1: A general layout of a refinery

Sohar refinery is owned and operated by Oman Oil Refineries and Petroleum Industries Company (ORPIC). ORPIC was created from the integration of three companies [15]:
1. Oman Refineries and Petrochemicals Company LLC (ORPC)
2. Aromatics Oman LLC (AOL)
3. Oman Polypropylene (OPP)
ORPIC is one of Oman's largest companies and one of the most rapidly growing businesses in the Middle East's oil industry. It employs more than 1,600 employees [16]. Sohar refinery is a combination of three major complexes:
1) On-site process units
   - Units where all chemical reactions occur
2) Utilities facilities
   - Power plant, electricity receiving and distribution system
   - Sea water intake station
   - Water system
   - Steam and condensate system
   - Fuel gas and natural gas system
   - Instrument air and plant air system
   - Nitrogen system
   - Chemicals preparation and injection facilities
3) Offsite facilities [17]
   - Feedstock and slops tankage
   - Product tankage
   - Marine loading
   - Truck loading
   - Waste water treating system
   - Sulfur granulation
   - Bagging system
   - Others


Figure 2: Overall process flow diagram for Sohar refinery

Sohar refinery is at the heart of the other chemical industry complexes at the Sohar port site, being the main supplier of their raw petrochemical materials. Hence, the criticality of Sohar's operative performance rises significantly. Therefore, in order to meet the functional demand of the end users, the capability and efficiency of the system should keep increasing [18]. As a result, the management of the Sohar refinery needs to measure the performance of the support system to ensure operations meet the demands. Performance measurement depends on good operation support data that is analyzed with sound methods and translated into information and knowledge that allow decisions to take place. Refinery officials often complain that information is overwhelming and difficult to locate, and that they do not have all the relevant information to make sound and well-informed decisions.
To identify what parameters to measure, it is necessary first to understand what to change to improve performance and, subsequently, to identify the measuring parameters. After reasonable investigation, data analysis and staff interviews, the main challenges were highlighted as:
- People's working behavior, cultural understanding and training within the organization.
- Process and system integration and harmonization as a whole, coherent systemic approach.
- Maintaining ongoing performance sustainability and improvement.



Figure 3: Management view of the targeted operation support system.

This supports the indication of a need to develop a structure that practitioners in the refinery can use to help in designing a support system for operating the refinery as a long-term service that maintains optimized performance and achieves the best return on investment. This structure should integrate industry domain knowledge to create and deliver a specific support solution for the in-service refinery, as circumstances require.
Classical techniques in refinery management involve performance monitoring, process control and fault diagnosis techniques that aim to determine the limit of a unit's service life. Theoretically, replacement should be made at the time when the unit facility is about to fail, so that the full service value of the unit can be utilized. However, this is not possible, as modern petrochemical processing systems [19] are of increasing complexity and sophistication. Many other factors govern the operations of the refinery. Most of these factors, such as opportunity costs or lost customers, are difficult to quantify and measure. Many asset management decisions are made on rules of thumb rather than using analyzed system performance data. Decisions such as asset replacement, upgrade or system overhaul are in many respects equivalent to a major investment, which is risk sensitive. Therefore, a solution-centered proposition is needed.
It is proposed that the system support concept could be a guide in providing a systematic modeling approach [20]. Therefore, a proposition was made to apply the generic framework of system support engineering to the design of the operation system support of the refinery.
2. Concepts of system support engineering (SSE)
The SSE concept involves the integration of service and system engineering to design support solutions. It incorporates a core knowledge base, drawing upon principles derived from a wide range of business and engineering disciplines. SSE is solution centered, delivering output solutions which are a mix of service and product. Service is a dynamic and complex activity. In all services, irrespective of industry sectors or types of customers, services are co-produced with, and truly involve, consumers. In support solutions, service engineering and system engineering are used together as critical knowledge agents to guide the solution design. Service engineering emphasizes customization of solution designs to meet service needs, while system engineering accentuates technical performance of the solution. Service and support is a strategic business model. The customer/supplier relationship is different from that of transactional service offerings, where interactions are limited mainly to episodic experiences. In this model, the interactions with the customer are enduring, like the systems they support, and a support solution seeks to cement a constructive long-term customer relationship. To simplify this process, a generic framework of SSE was drawn up through empirical research [21].
The SSE framework consists of three elements (People, Process and Product) in an operation environment, arranged in a three-level structure (Execution, Management and Enterprise). The SSE framework model is called the 3PE model, as shown in figure 4. This model was verified through multiple industrial visits and professionals' contributions during the data collection process. The SSE framework was able to outline the relations between the elements of system support. However, the detailed interactions are yet to be investigated further.


Figure 4: General vision of system support engineering framework (multi-level 3PE).
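As an illustration only (the paper gives no data model for the 3PE framework), the multi-level structure in Figure 4 can be pictured as a grid of element/level cells, each holding its own measurable factors. The sketch below uses Python; all example entries are hypothetical.

    # Hypothetical sketch of the multi-level 3PE grid: three elements
    # (People, Process, Product) viewed at three levels (Execution,
    # Management, Enterprise). The example factors are invented.
    from dataclasses import dataclass, field

    ELEMENTS = ("People", "Process", "Product")
    LEVELS = ("Execution", "Management", "Enterprise")

    @dataclass
    class Cell:
        """One element viewed at one level, with its measurable factors."""
        element: str
        level: str
        factors: list = field(default_factory=list)

    # Build the empty 3PE grid: one cell per (level, element) pair.
    grid = {(lv, el): Cell(el, lv) for lv in LEVELS for el in ELEMENTS}

    # Hypothetical entries for a refinery support system.
    grid[("Execution", "People")].factors += ["training hours", "handover quality"]
    grid[("Enterprise", "Process")].factors += ["contract KPIs", "risk tolerance policy"]

    for (lv, el), cell in sorted(grid.items()):
        if cell.factors:
            print(f"{lv}/{el}: {cell.factors}")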
The system support engineering model could systematically empower the application and implementation of ORPIC's practical modern strategies, as it clearly indicates the type, the level of detail, the interacting elements and the operational environment. ORPIC is intending to:
1) Hire highly skilled, trained and experienced employees who have the ability to respond to the pressure of change and keep up with the dynamics of the system and, in some cases, the uncertainty of particular situations; that is, employees able to use the available information to deal with whatever the day could throw at them. This requires that employees clearly understand how the system works and interacts, and the routes and formats of information flow.
2) Adopt experience and knowledge sharing systems and exercises.
3) Increase the role of cooperation, to the extent of partnership in some cases, with its main stakeholders, especially licensors and contractors. This basically aims to increase the focus and operation of the organization, which could positively reflect on the quality of performance; to cut or minimize cost by introducing savings on some activities; and, strategically, to obtain continuous feedback and suggestions from the key stakeholders while keeping the gates open for extra business opportunities. This requires a clear understanding of interaction and communication routes, methods and formats. It also requires a clear identification of each party's obligations, responsibilities and expectations in case of an extraordinary event.
4) Adopt a holistic systemic approach to support high performance and reduce uncertainties.
The benefits of the system support engineering model in relation to sustaining and supporting operation are:
I. The performance elements in the system are independently measurable.
II. The measures are meaningful to the people who use them, capturing a dimension of their performance in a way that they can understand.
III. The measures are continually evaluated in reference to the organization's short and long term goals.
IV. The measurement method depends on the measured element: the most suitable and accurate method is applied to each element, and later all the results are collected together for an overall system performance analysis (a sketch follows this list). This process may sound very lengthy, but it is effective, and it becomes faster as practice continues and information starts to accumulate.
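A minimal sketch of point IV, assuming a simple weighted average as the combination rule (the paper does not specify one); the element scores and weights below are invented.

    # Combine per-element scores (each from its own measurement method,
    # normalized to 0..1) into one overall performance figure.
    def overall_performance(scores, weights):
        total_weight = sum(weights[name] for name in scores)
        return sum(scores[name] * weights[name] for name in scores) / total_weight

    scores = {"People": 0.82, "Process": 0.74, "Product": 0.91}   # element-specific results
    weights = {"People": 1.0, "Process": 1.5, "Product": 1.0}     # criticality weighting

    print(f"Overall system performance: {overall_performance(scores, weights):.2f}")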
In the case of contracting, the system support framework is used to identify and manage the relationship with each element. Inevitably, the planning process begins by identifying the requirements and the operation environment, and then simultaneously considers the changes in requirements over time and the contribution potential of customers.
3. Results and Discussion
The framework provided three increasingly detailed views, or levels of abstraction, from three different perspectives. It allows different people to look at the same system from different perspectives. This creates a holistic view of system support. The framework in this regard helped to:
- Guide the requirements identification procedure for the development process of an operational support system in the refinery.
- Provide an overview of the behavior vector of the support system development process, with clearly drawn relations between elements.
- Capture the strategic decisions, inventions and engineering trade-offs.
- Give an appreciation of the technical and commercial issues that are linkable from the maintenance point of view.
Using the philosophies of SSE, a standard development procedure was developed, as shown in figure 5.


Figure 5: Overview of the development process of a support system using the SSE concepts.
This procedure was discussed with the professionals in the refinery, and their opinions and feedback were incorporated; it was also analyzed through a review of the literature. This is the start-up guideline for applying the SSE concept in the refinery case.
Then, a standard risk analysis process (shown in figure 6) was developed. In support system engineering, the risk is the effect of cost, technical and safety uncertainties on the support system outcomes and performance. Risk tolerance will depend on the criticality of the unit or process that the support system is designed for.


Figure 6: Standard risk analysis procedure
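The paper does not give a scoring formula for Figure 6, so the sketch below assumes the common likelihood x impact model, with the acceptance threshold tightening as unit criticality rises, as the text describes; all numbers are hypothetical.

    def risk_score(likelihood, impact):
        """Both inputs on a 0..1 scale; higher means worse."""
        return likelihood * impact

    def is_acceptable(score, unit_criticality):
        # More critical units (criticality near 1) tolerate less residual risk.
        tolerance = 0.25 * (1.0 - unit_criticality)
        return score <= tolerance

    # Example: a cost uncertainty affecting a highly critical process unit.
    score = risk_score(likelihood=0.3, impact=0.6)
    print(score, is_acceptable(score, unit_criticality=0.9))  # 0.18 False -> mitigate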
The third stage was to develop a standard design and implementation process to fit into the development procedure of a support system, in consultation with the refinery professionals. The investigations indicate that the standard design and implementation method should:
- Organize and cover all the requirements, in order to avoid misperceptions and shortfalls and to minimize reliance on expert judgment.
- Present the nature of the interaction and interface between the elements in the support system in a way that is clearly identified and gives a clear meaning to all participants.
- Give an allocation of objectives and outcomes which are clearly defined and established, structured in a standard way for use in the decision-making process.
Some of the key elements are the order information and feedback information, which are grouped in the same classification at each level (i.e., enterprise, management and process) with different depths of detail. This will provide an easier allocation mechanism for future reference. All the information should be structured in order to provide the basic building unit for the design and implementation method. Figure 7 shows the design and implementation method.


Figure 7: Standard design and implementation method

Now that the obligatory procedures and methods are available to the refinery professionals, the next step is to develop an information structure format which will carry information through the development process of a support system. Several versions of information structuring tables were developed and tested against proposed or planned projects in the refinery. The table in Figure 8 showed the best results so far and was implemented by practitioners in a project. This gives a unified information arrangement in which each information category is defined, to avoid misunderstanding or confusion.



Figure 8: Standard information structure

This format can be uploaded to and integrated with the Enterprise Resource Planning (ERP) system, which is SAP in the Sohar refinery case, where the information can be made available to a variety of users and controlled through classified access gates.
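As a hypothetical rendering of the Figure 8 structure: each record carries its information category and level plus an access class, so an upload to the ERP can filter visibility through classified access gates. The field names and the access rule are assumptions, not taken from the paper.

    from dataclasses import dataclass

    @dataclass
    class SupportInfoRecord:
        level: str         # "Enterprise", "Management" or "Process"
        category: str      # e.g. "order information" or "feedback information"
        content: str
        access_class: int  # access gate: lower number = wider access

    def visible_to(records, clearance):
        """Filter the records a user may see, mimicking classified access gates."""
        return [r for r in records if r.access_class <= clearance]

    records = [
        SupportInfoRecord("Process", "feedback information", "pump vibration trend", 1),
        SupportInfoRecord("Enterprise", "order information", "contract renewal terms", 3),
    ]
    print([r.content for r in visible_to(records, clearance=2)])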
4. Conclusion and Future Research
This paper explored the nature of support system design for a refinery. The research work explored the operation support system from a range of perspectives, interviewing managers from across the refinery organization. The factors contributing to the complexity of a support system were described in context. It has been proposed that the generic system support engineering framework may be used as a tool for developing, analyzing and managing the support system design procedure.
Further investigation is suggested for future research:
- The information exchange system needs to be further investigated from the IT point of view, concentrating on the technical requirements to develop the logic gates and the automatic information system software which will control and filter the flow of information.
- The key performance indicators (KPIs) need to be further investigated to improve the detail and accuracy of measurement of the developed procedures, and these KPIs should be integrated into the ERP system.
5. Acknowledgment
The authors acknowledge and deeply appreciate the support provided by Oman Oil Refineries and Petroleum Industries Company (ORPIC).
6. Authors' Biography
6.1. First author
Mohammed S. ALSaidi is a practicing engineer and a PhD candidate at RMIT University under the supervision of Prof. John Mo. He received a B.Eng. (Hons) in Manufacturing Engineering and Engineering Management (2009). He holds a number of professional memberships. He has attended several industrial training programs and successfully completed industrial projects. His research interests are systems engineering, engineering and operations management, and manufacturing systems.
6.2. Second author
Prof. John Mo is Discipline Head of Manufacturing and Materials Engineering at RMIT University, Australia. Prior to joining RMIT, he was a Senior Principal Research Scientist in CSIRO and led research teams including Manufacturing and Infrastructure Systems. In his 11 years in CSIRO, his team worked on many large-scale government and industry sponsored projects, including electricity market simulation, infrastructure protection, wireless communication, fault detection and operations scheduling. He was the project leader promoting productivity improvement in the furnishing industry and the consumer goods supply chain. John has over 200 publications in refereed journals, conferences and book chapters, and close to 100 confidential reports.
References

[1] Plant Management and Economics, in Fundamentals of Petroleum and Petrochemical
Engineering. 2010, CRC Press. p. 343-373.
[2] ALSaidi, M.S., J.P.T. Mo, and A.S.B. Tam, Systemic approach to strategic performance
sustainability and evaluation, in PMA 2012 Conference – From Strategy to Delivery, 2012:
Cambridge University, UK.
[3] Li, Y., X. Wang, and T.M. Adams, Ride service outsourcing for profit maximization.
Transportation Research Part E: Logistics and Transportation Review, 2009. 45(1): p. 138-148.
[4] Feng, B., Z.-P. Fan, and Y. Li, A decision method for supplier selection in multi-service
outsourcing. International Journal of Production Economics, 2011. 132(2): p. 240-250.
[5] Lin, S. and A.C. Ma, Outsourcing and productivity: Evidence from Korean data. Journal of Asian
Economics, 2012. 23(1): p. 39-49.
[6] Görg, H. and A. Hanley, Services outsourcing and innovation: An empirical
investigation. Economic Inquiry, 2011. 49(2): p. 321-333.
[7] Bustinza, O.F., D. Arias-Aranda, and L. Gutierrez-Gutierrez, Outsourcing, competitive
capabilities and performance: an empirical study in service firms. International Journal of
Production Economics, 2010. 126(2): p. 276-288.
[8] Lee, H.-H., E.J. Pinker, and R.A. Shumsky, Outsourcing a Two-Level Service Process.
Management Science, 2012. 58(8): p. 1569-1584.
[9] Cai, S., K. Ci, and B. Zou, Producer Services Outsourcing Risk Control Based on Outsourcing
Contract Design: Industrial Engineering Perspective. Systems Engineering Procedia, 2011. 2(0):
p. 308-315.
[10] Crude Petroleum Oil, in Fundamentals of Petroleum and Petrochemical Engineering. 2010, CRC
Press. p. 1-23.
[11] Fahim, M.A., T.A. Alsahhaf, and A. Elkilani, Fundamentals of Petroleum Refining. Chemical,
Petrochemical & Process. 2010: Elsevier. 1-487.
[12] Processing Operations in a Petroleum Refinery, in Fundamentals of Petroleum and
Petrochemical Engineering. 2010, CRC Press. p. 49-82.
[13] Sadeghbeigi, R., Fluid Catalytic Cracking Handbook - An Expert Guide to the Practical
Operation, Design, and Optimization of FCC Units (3rd Edition), Elsevier.
[14] Petroleum Products and Test Methods, in Fundamentals of Petroleum and Petrochemical
Engineering. 2010, CRC Press. p. 25-48.
[15] Oman Oil Refineries and Petroleum Industries Company (ORPIC). Our company. 2011 [cited 2013 Mar 2]; Available from:
http://orpic.om/page/details/key/our-company#rights.
[16] Gulf Petrochemicals and Chemicals Association (GPCA). ORPIC. 2012 [cited 2013 Mar 3]; Available from:
http://gpca.org.ae/congulf/blog/orpic/.
[17] Offsite Facilities, Power and Utilities, in Fundamentals of Petroleum and Petrochemical
Engineering. 2010, CRC Press. p. 131-150.
[18] Mo, J., Services and support supply chain design for complex engineering systems, in Supply Chain
Management, P. Li, Editor. 2011, InTech: Rijeka. p. 515-532.
[19] Instrumentation and Control in a Refinery, in Fundamentals of Petroleum and Petrochemical
Engineering. 2010, CRC Press. p. 297-324.
[20] Mo, J.P.T., System Support Engineering: The Foundation Knowledge for Performance Based
Contracting, in ICOMS 2009, Sydney, Australia, 2009.
[21] ALSaidi, M.S. and J.P.T. Mo, An Empirical Approach to Model Formulation for System Support
Engineering. International Journal of Engineering Business Management, 2013.


Software Tool Development to Improve the
Airplane Preliminary Design Process
W.A.J. Anemaat (a,1), B. Kaushik (a), J. Carroll (a) and J. Jeffery (b)

(a) DARcorporation, 1440 Wakarusa Drive, Suite 500, Lawrence, KS 66049, USA
(b) j2 Aircraft Dynamics, The Innovation Centre, Sci-Tech Daresbury, Keckwick Lane, Daresbury, Cheshire, WA4 4FS, United Kingdom

Abstract. Aircraft design is a compromise of many different disciplines. Yet the history books are littered with projects that failed because something was overlooked in the early design stages and came back to haunt them in the later stages. This is very evident in the evaluation of flying qualities and aircraft behavior, as this is almost always left out of the overall picture in the early design phases. It has long been considered that first order approximations are sufficient to indicate any issues in the early stages of design, but these can only show so much, being built on simplifications and approximations of a standard set of modes of motion. As airframes move away from classical designs because of improved materials, advanced manufacturing techniques or the improved efficiency associated with unconventional designs, the approximations break down further. At this point it becomes prudent to perform more detailed assessment earlier in the project lifecycle. However, this too can have issues, as it may be viewed that there is insufficient data, or that the airframe is too complex to build a mathematical model. Then there is the question of what to test, and as such 6-DoF flight modeling is left until later in the process. This can have obvious consequences further down the project, as 80% of the project and lifecycle costs are committed in the first 20% of the design. This immediately identifies that more effort should be put into the initial 20% to evaluate the complete design, including flight modeling.

All the problems and issues presented above are now solvable through modern modeling techniques and software tools. This paper describes the tools developed to integrate flight simulation early in the design process. Aerodynamics, stability and control estimates from the Advanced Aircraft Analysis software are corrected with the use of wind tunnel data, and the wind tunnel data is scaled to the full-size airplane and actual flight conditions. This data is then fed into the j2 Universal Toolkit to actually fly the airplane. Quality 6-DoF models can be built with minimal data and relative ease, allowing engineers to start running detailed analyses across multiple ideas and options very quickly and much earlier in the design process, combining the more detailed handling qualities assessment with the aerodynamic evaluation, performance, propulsion and weight calculations right from the beginning of the design. Each point in the regulations has a configuration and maneuver associated with proving compliance; these configurations and maneuvers can be set up in the modeling tool, and all ideas and options can be evaluated. This very quickly identifies areas where the aircraft cannot get certified, and these ideas can then be either eliminated or modified. Following an integrated approach and implementing full 6-DoF flight modeling from the early stages of the design, using simple methods initially, looking at sensitivity studies and the impact of tolerances throughout the process, and flying the complete certification envelope throughout the design provides a truly concurrent engineering approach, as all other disciplines feed into and have an impact on the behavior and flying qualities. This method allows more ideas to be evaluated earlier, enables the impact of changes to be tracked, and ensures that no surprises remain by the time the first flight comes around. This can reduce timescales, reduce the amount and cost of re-work to fix issues following flight test, and result in a better all-round design.

The paper shows the tools developed, the processes followed and an example airplane design using these tools.
Keywords. airplane design, flight dynamics, wind tunnel testing, design tools

1 Corresponding Author.
Introduction
Early in the design stages of an aircraft, the primary focus is on the aerodynamics, stability & control and the weight/structural elements of the aircraft. While it is true that the flying qualities are checked, the depth of the analysis into the flying qualities does not reach that of the aerodynamics, stability & control and weight in the conceptual and preliminary design stages. Incorporating a detailed flying qualities analysis in the conceptual and preliminary design phases will decrease the number of configurations tested in the wind tunnel and/or analyzed in CFD, and will give the pilots a better idea of the flight characteristics of the aircraft before the aircraft is even built and flight tests are carried out.
Modern modeling tools allow the dynamic simulations of the proposed aircraft configurations to be incorporated sooner in the design and development phases, such that there are no surprises in the later design phases and flight testing. The tools used to accomplish this task are the Advanced Aircraft Analysis (AAA) software and the j2 Universal Toolkit. The AAA software is used to generate the aerodynamic, mass and stability & control properties of the conceptual and preliminary design configurations. These aircraft properties are then fed into the j2 Universal Toolkit to perform a dynamic simulation of the aircraft.
1. Tools
The tools used in the analysis are outlined in the subsequent sections.
1.1. Advanced Aircraft Analysis
The Advanced Aircraft Analysis (AAA) software (Reference 1) provides a
framework to support the iterative and non-unique process of aircraft conceptual and
preliminary design. AAA provides an analysis method based on semi-empirical
relationships to take an aircraft from early weight sizing through open loop and closed
loop dynamic stability and sensitivity analysis while working within regulatory and
cost constraints. The AAA program consists of 10 modules that perform tasks
necessary to evaluate the characteristics of a given aircraft at each stage in the
conceptual and preliminary design process. The AAA software also has methods for
correcting data obtained from wind tunnel tests and scaling them to full scale
parameters.


1.2. j2 Universal Tool-kit
The j2 Universal Tool-kit (Reference 2) is a complete set of design and analysis tools for flight physics. It includes aircraft model building (from any data source, from conceptual design to flight test), system integration (external models, FCS, landing gear, weights and balance, aircraft systems, etc.), flight mechanics (static/dynamic, lateral/longitudinal, linear/non-linear), performance analysis, and flight test analysis. The whole system is a data-driven solution, effectively divorcing the aircraft model from the analyses. This means the model is self-contained and is responsible for calculating its own states and parameters from the inputs and environmental conditions that are specified. The user interface for the j2 Universal Tool-kit is shown in Figure 1.

Figure 1: j2 Universal Tool-kit User Interface

Using an integrated environment avoids the continual movement of data around different applications and the possible errors that can result from data transfer steps. Being able to simulate the aircraft through offline tests and a real-time environment can help engineers and pilots to understand behavior and rehearse tests, and to compare their experience on the ground to the real aircraft for additional feedback.

2. Application
An interface is developed between the j2 Universal Toolkit and the AAA software to facilitate the inclusion of flight simulation techniques at the earliest stages of the design process. This interface allows quick transfer of information from the AAA software to the j2 Universal Toolkit, increasing the number of configurations considered as well as providing another basis of comparison for many different design configurations to narrow down the design field. It will also detect issues with current design configurations and allow the designers to fix potential problems before any further testing is done. For these examples, only a cruise condition with landing gear retracted at a single center of gravity location is considered. The method applies for each flight condition and center of gravity location.
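As a rough illustration of this hand-off, the sketch below bundles one flight condition's derivatives into a file. Neither AAA nor the j2 Universal Toolkit exposes a Python API like this; the function, field names and file name are invented to show the data flow only.

    import json

    def export_flight_condition(cl_alpha, cm_alpha, cg_location, mach, altitude_ft):
        """Bundle one flight condition's stability derivatives for the 6-DoF model."""
        return {
            "condition": {"mach": mach, "altitude_ft": altitude_ft, "gear": "retracted"},
            "cg_location_frac_mac": cg_location,
            "derivatives": {"CL_alpha": cl_alpha, "Cm_alpha": cm_alpha},
        }

    # One cruise point at a single c.g. location, as in the paper's examples.
    packet = export_flight_condition(cl_alpha=5.2, cm_alpha=-0.9,
                                     cg_location=0.25, mach=0.6, altitude_ft=30000)
    with open("aaa_to_j2_cruise.json", "w") as f:
        json.dump(packet, f, indent=2)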
2.1. Conceptual Design Phase
During the conceptual design phase, aircraft designers attempt to come up with configurations that satisfy a particular mission. There are many configurations that will satisfy a particular mission, with pros and cons for each. This is where an analysis of the configurations' flying qualities using flight simulation software is utilized. While modern modeling techniques can increase the design field, usually the designer can down-select four to five configurations that warrant further investigation. Including the flight simulation and flying qualities at this stage of the design provides another basis of comparison for the aircraft configurations and may narrow the design configurations further based on the handling qualities of each configuration.
As an example, the following geometries are determined to be adequate to perform a given mission. Three-views of the aircraft configurations are shown in Figure 2.

Figure 2: Initial VLJ Configuration


This data is fed into the j2 Universal Toolkit and various maneuvers are performed to determine handling qualities and whether the configuration is certifiable. The longitudinal and lateral-directional flight characteristics for the various configurations are given in Table 1 and Table 2, respectively.






Table 1: Longitudinal Flying Qualities

Configuration   Phugoid Stability Level   Short Period Damping Level
Conventional    Level 1                   Level 2
Canard          Level 1                   Level 1
T-Tail          Level 1                   Level 1

Table 2: Lateral-Directional Flying Qualities

Configuration   Spiral Stability Level   Roll Performance Level   Dutch Roll Damping Level
Conventional    Level 1                  Level 1                  Level 1
Canard          Level 1                  Level 1                  Level 1
T-Tail          Level 1                  Level 1                  Level 1


2.2. Preliminary Design Phase
It is assumed that, following the conceptual design phase, wind tunnel tests of the down-selected configurations (based on the dynamic analysis) are available. The wind tunnel data is then transferred to the AAA wind tunnel module, where the aerodynamic and stability & control derivatives are scaled to the full-scale aircraft. Selected flight response modes are shown in Figure 3 through Figure 5.


Figure 3: Phugoid Response Mode

Figure 4: Dutch Roll Response Mode

Figure 5: Roll Response Mode


For different configurations, these time responses can then be compared at various flight conditions, aiding in the down-selection process for the aircraft.
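For orientation, the first order approximations that such simulations supersede can be reproduced in a few lines. The sketch below implements the textbook Lanchester approximation of the phugoid mode; the airspeed and lift-to-drag values are hypothetical, not taken from the paper.

    import math

    G = 9.81  # m/s^2

    def phugoid_period(true_airspeed_ms):
        """Lanchester approximation: natural frequency ~ sqrt(2)*g/V."""
        omega_n = math.sqrt(2.0) * G / true_airspeed_ms
        return 2.0 * math.pi / omega_n

    def phugoid_damping_ratio(lift_to_drag):
        """Approximate damping ratio ~ 1/(sqrt(2)*(L/D))."""
        return 1.0 / (math.sqrt(2.0) * lift_to_drag)

    # Hypothetical light-jet cruise point: 180 m/s true airspeed, L/D of 14.
    print(f"period  ~ {phugoid_period(180.0):.0f} s")       # ~82 s
    print(f"damping ~ {phugoid_damping_ratio(14.0):.3f}")   # ~0.051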

3. Conclusion
Incorporating flight simulation into the early stages of the design process increases the number of configurations considered, provides an additional basis of comparison for configurations that satisfy the mission requirements, provides insight into design choices that could potentially cause costly design changes later in the life of the design, and provides flight test pilots an opportunity to get a feel for how the aircraft will act in the air before the aircraft's first flight. The speed of modern modeling tools and techniques provides avenues to perform detailed flight handling analyses during the beginning stages of an aircraft design, when many configurations are considered and when even the selected configuration is under constant development and change. This could potentially lead to fewer flight tests, which translates into lower development costs of the airframe.

References
[1] Anon., Advanced Aircraft Analysis, version 3.5, DARcorporation, 2013.
[2] Anon., j2 Universal Tool-kit, version 5.1, j2 Aircraft Dynamics, 2013.


A Software Architecture to Synchronize
Interactivity of Concurrent Simulations in
Systems Engineering
Christian BARTELT (a,1), Volker BÖß (b), Jan BRÜNING (b), Andreas RAUSCH (a), Berend DENKENA (b) and Jean Paul TATOU (a)

(a) Software Systems Engineering (SSE), University of Clausthal, Germany
(b) Institute of Production Engineering and Machine Tools (IFW), Leibniz Universität Hannover, Germany
Abstract. Due to the distributed development of complex technical systems like machine tools, different system components are modeled and simulated in independent program suites. Several standards specify the exchange of model data, but communication during concurrent simulations is not standardized yet. Therefore, the SimBus (Simulation Bus) was developed to close this gap. This novel software architecture allows flexible coupling and implementation of existing simulation software suites.
Keywords. System Simulation, Software Architecture, Machine Tool, Multi Domain Simulation
Introduction – Integration of Concurrently Running Simulations
Nowadays, design and simulation software systems have become an absolutely essential part of the development of complex products. This trend is promoted by concepts for integrated development processes like concurrent engineering or the digital factory (German engineering guideline VDI 4499) [1]. Established methods like product data management (PDM) and model data standards like STEP (ISO 10303) [2] are part of these efforts to realize continuous data exchange.

Problem of Synchronization of Concurrent Interacting Simulations
Engineering modern CNC machine tools with mechanical, electrical and
mechatronic components is a typical example of systems engineering. During the
development of machine tools, several R&D departments and suppliers are involved.
Every participant uses specialized software for his subject and, depending on the
development progress, models with a different degree of detail. To represent the
behavior of the machine tool, all sub-models have to interact with each other in a so-
called all-in-one system simulation scenario.
Due to different complexity and simulation technologies of partial simulations in
such a scenario, it is necessary to represent the system by a distributed simulation.

1 Corresponding Author.


Therefore, definite simulation modules have to represent the several physical and logical system components (Figure 1). The dynamic interaction of modules and the implementation of basic simulation functions, e.g. job control, need a common communication interface which is binding on all attending simulation systems and tools. As a consequence of the integrated development process, it is necessary to realize a scalable and reconfigurable simulation platform. To this end, each attending module type has to act as a black box, with its unique functions exposed as specialized interfaces.

Figure 1: Physical and logical system components of machine tool

The virtual model of the machine tool has to be applicable and reusable in different phases of the machine tool's life cycle, such as development and process planning. Different models of predefined system components can be realized with a varying degree of detail. A library of available system component models allows flexible configuration of simulation scenarios fitted for special use cases. For example, during dimensioning of a customized machine tool, simulation will be mainly focused on the mechanical reliability of the machine structure and less on precise prediction of tool paths. In this case the user can choose a detailed FEA model of the structure's mechanical behavior and a simplified model of the numerical control and drive system. In another example, simulations for process planning will be mainly focused on process stability and the quality of the work piece. There it would be necessary to choose detailed models which are able to simulate the dynamic behavior of the control and drive system, to analyze the compliance with the tool path.

Shortcomings of Systems Simulation Infrastructures
Nevertheless, typical simulation scenarios are limited to single system components, which are developed independently by different R&D departments. To realize basic software communication between (distributed and concurrently running) simulation components, several open and proprietary standards and architectures exist, generally using tailored software interfaces built on bus software architectures. A common use case is coupling Finite Element Analysis (FEA) and Multi Body Simulation (MBS) systems by using proprietary interfaces of an integrated software suite, or by using commercial Computer Aided Control Engineering (CACE) resp. Digital Block Simulation (DBS) software like Matlab/Simulink [3], [4]. Due to the high degree of specialization, the interaction of these interfaces is mostly limited to bidirectional communication. Beside the predestined way of connecting the simulation software components technically by a standard middleware platform, there are two further ambitious challenges: first, the design of correct interrelations between a priori independent simulation domain models, and secondly, the synchronization of concurrently clocked simulation modules.
Related Work – Software Architectures for Integrated Systems Simulations
Looking for architecture patterns to use as a starting point when designing a middleware infrastructure for the integration of different concurrently running simulations quickly leads to the High Level Architecture (HLA) [5]-[7]. One main intention of the HLA is the collaborative interaction simulation of different autonomous system simulations (traffic simulation, military tactics simulation, etc.). Hence, the HLA is an architecture in which the sender of a request does not need to know its receiver. The HLA is thereby an event-based architecture that supports the emergent interconnectivity of components. For the simulation of machine tool processes, we do not have any problem with emergent interconnectivity: all simulated machine modules are hard-wired at simulation start-up time.
The other approach arising in recent years in the research communities on integration technology for simulation software is the use of a unified language for system modeling [5], [7]. That means the description of the whole system could be done in one modeling language like Modelica [8], [9]. Thereby, the simulation of the whole system in an environment like OpenModelica should not pose a problem of model interoperability. Although this approach is ambitious and promising, it remains less practicable in the short term, because its success depends largely on the ability of the unified language to model all aspects of a system from all systems engineering disciplines. However, reducing the need to model the whole system in a unified language to the specification of a standard interface for model interchange and co-simulation between the simulation tools seems to be more practicable. Examples of such attempts using the XML description language can be found in [10]-[12]. As explained in [10], [11], the Functional Mockup Interface (FMI) describes a model interchange interface between simulation environments. The intent of the FMI is that a modeling and simulation environment (acting as slave) can generate C code of a system model that can be utilized by other modeling and simulation environments (acting as master), either in source or binary form. Therefore, the so-called slave components are not directly coupled with each other but only with the so-called master simulators. Furthermore, a master algorithm that coordinates all interactions has to be developed. But we are more interested in a system simulation environment in which it is not necessary to choose one simulation tool as a master (a toy illustration follows).
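The master-slave pattern described above can be illustrated with a toy fixed-step co-simulation loop. This is not FMI code (real FMI slaves are compiled C components behind a standardized stepping interface), but the control flow of a simple master is the same; all models and numbers are invented.

    class Slave:
        """Minimal stand-in for a co-simulation slave: output y follows input u."""
        def __init__(self, gain):
            self.gain, self.y = gain, 0.0
        def do_step(self, u, dt):
            self.y += dt * self.gain * (u - self.y)  # first-order lag
            return self.y

    controller, plant = Slave(gain=8.0), Slave(gain=2.0)
    setpoint, dt = 1.0, 0.01

    for _ in range(500):                                      # master loop, 5 s total
        u_plant = controller.do_step(setpoint - plant.y, dt)  # exchange outputs
        plant.do_step(u_plant, dt)

    print(f"plant output after 5 s: {plant.y:.3f}")  # settles near 0.5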


2. A Software Architecture for Coupling Concurrent Heterogeneous System Simulations
The proposed software architecture is driven by a concept for the configuration of simulation scenarios. A basic aspect of this concept is the reusability of simulation modules regarding execution in different simulation scenarios, analogous to enabling interconnections between mechatronic modules in the real world. Each module realizes a reactive behavior. After the manually configured interconnection of modules and their initialization/activation, the modules interact self-managed and realize the intended simulation scenario.
In the following subsection the configuration concept for interacting simulation modules is presented in detail. Afterwards the software architecture SimBus, which realizes the middleware to manage the interaction between several heterogeneous simulations, is explained. In Subsection 2.3 the required interaction scheduling is described in detail.
2.1. Configuration of Simulation Scenarios
The configuration concept considers three engineering levels, which are depicted in
Figure 2. At the lowest level, generic module interfaces (e.g. controller, integrator,
mechanical structure, etc.) are described by a formal interface description language.
These interfaces are designed in a simulator-independent manner, that is, without
considering subsequent simulator-specific requirements. Modules designed at the next
higher engineering level with interfaces from the underlying level therefore have to be
suitable for use in any simulation scenario.
Figure 2. Configuration Levels
On this basis, one can realize or instantiate concrete modules for use in a specific
simulator (e.g. a Beckhoff controller or a Siemens controller with the same generic
controller interface). All specified modules are collected in a global module library,
which can be used to define concrete simulation scenarios by interconnecting modules
from the library. The third engineering level, simulation project definition, is dedicated
to concrete scenario specification. At this level, the engineer selects, according to his
simulation scenario, a set of modules from the module library and interconnects them
following the corresponding interface descriptions from the lowest level.
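As a rough illustration of the three levels (Python stand-ins only; the actual interfaces
are specified in the formal interface description language mentioned above, see
Section 3, and all class and key names here are invented):

    from abc import ABC, abstractmethod

    # Level 1: generic, simulator-independent module interface
    class ControllerInterface(ABC):
        @abstractmethod
        def target_axis_value(self, t: float) -> float: ...

    # Level 2: concrete module realizations, collected in a module library
    class BeckhoffController(ControllerInterface):
        def target_axis_value(self, t: float) -> float:
            return 0.0  # would delegate to the tool-specific simulator

    class SiemensController(ControllerInterface):
        def target_axis_value(self, t: float) -> float:
            return 0.0

    module_library = {"controller.beckhoff": BeckhoffController,
                      "controller.siemens": SiemensController}

    # Level 3: a simulation project picks modules and wires them up
    scenario = {"modules": ["controller.beckhoff"],
                "connections": []}  # filled from the interface descriptions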
2.2. An Architecture for Integrated Simulation of Mechatronic Components
Our proposed software architecture describes a virtual communication bus that
interconnects the different simulation modules (cf. Figure 3). The virtual bus is
realized by a middleware, which provides a set of communication management services.
These services are necessary to coordinate the interactions between the simulation
modules. The simulation modules and their interconnectivity realize the container
design pattern and are thereby able to concurrently execute simulation tasks based on
domain-specific models. Each of these containers includes one simulator and one
SimBus Adapter. For the communication between containers, a SimBus Module
Interface is provided.

Figure 3. SimBus Architecture - Simulator Interaction

Let's consider an example of a simulation involving two simulators (Simulators A
and B in Figure 3). Normally, simulators are based on tool-specific software
technologies (e.g. Multi Body Simulation, Finite Element Analysis, Digital Block
Simulation, etc.) and are able to simulate domain-specific models, each simulator
having a provided and a required interface. All provided interface functions are
implemented by a model (e.g. Model X and Y in Figure 3), which is simulated by the
simulator (e.g. a controller model in Matlab Simulink). Required interfaces describe
functions that must be called at external modules to execute the model simulation.
The SimBus Adapters support the communication between modules via the bus
and schedule the execution of requested functions within the simulator. A SimBus
Adapter consists of a SimBus Connector and an Execution Thread Scheduler. The
connector implements a wrapper that requests functions of the simulation model via a
tool-specific software interface (output) and prepares responses from external modules
for the executed simulation model (input). The Execution Thread Scheduler manages
external function requests that are implemented in the simulation model. The
scheduling mechanism is described in detail in Subsection 2.3.
Supported by this architecture, engineers can use independently developed
simulation models based on heterogeneous software simulators (Simulink, ASCET,
CAx tools, CutS [13]) using the same software container. The connector that wraps
these simulators, however, is implemented differently for each tool, because simulation
tools use different simulation models and offer different integration interfaces (DLLs,
plug-ins, etc.).
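The division of labor inside a container can be sketched as follows (Python sketch; the
simulator.invoke and bus.async_call calls are invented placeholders for whatever
tool-specific binding, e.g. a DLL entry point, the wrapped tool actually offers):

    class SimBusConnector:
        """Wraps one tool-specific simulator behind the generic bus interface."""
        def __init__(self, simulator, bus):
            self.simulator = simulator  # tool-specific handle (placeholder API)
            self.bus = bus

        # output side: forward a scheduled bus request to the simulation model
        def call_model_function(self, name, *args):
            return self.simulator.invoke(name, *args)

        # input side: translate an outgoing model call into a bus request
        def request_external(self, module_id, name, *args):
            return self.bus.async_call(module_id, name, *args)  # non-blocking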
Figure 3 depicts the representative processing of a request: Module B
initiates a request towards Module A. In Step 1, the Execution Thread Scheduler
processes an external request of Function 4 within Module B. It calls the corresponding
simulator function through its tool-specific connector. The implementation of
Function 4 needs the external service of Function 2 from Module A during its
execution (Step 2). Therefore, Simulator B calls the function at Module A using the
input wrapper of its SimBus Connector in Step 3. The request of Function 2 is then
queued in the first state, "Waiting", of the Execution Thread Scheduler of Module A in
Step 4. Finally, when the Function 2 request is switched to the third state, "Running",
the Execution Thread Scheduler calls the corresponding function on Simulator A using
the output wrapper of its SimBus Connector.

Figure 4. SimBus Architecture - Generic Module Scheduling
Besides the simulation-model-specific Data Exchange Interface, each module
realizes the same Execution Management Interface (cf. Figure 4). The Execution
Thread Scheduler controls the processing of requests on the simulator. It synchronizes
the processing of asynchronous requests between different modules via the bus. Each
external request processed by the Execution Thread Scheduler can assume one of the
three states "Waiting", "Runnable", and "Running" (cf. Figure 5). The state change is
controlled by the SimBus management component called SimBus Manager. This
component triggers the state change of all requests by a broadcast signal via the bus to
all modules. This mechanism is described in detail in the following subsection.
2.3. SimBus Synchronized Interaction between Executed, Heterogeneous System Models
As previously mentioned, the SimBus platform provides a master component called
SimBus Manager, which assumes the role of a scheduler during the execution time
span of the simulation modules. In other words, the SimBus Manager broadcasts an
activation signal via the execution management interface to all modules connected to
the bus (cf. Figure 4). For each module, receiving this signal means that it may execute
pending jobs. The communication between the SimBus Manager and the modules on
the bus corresponds to the control flow on the bus.

Figure 5. Processing within Execution Thread Scheduler

We differentiate between the control flow and the data flow (the data exchange
between the simulation components). This separation is useful: the simulation
components are allowed to communicate with each other via function calls without
using the SimBus Manager as a transfer buffer, and by controlling the function
execution in the components through the SimBus Manager we ensure a deadlock-free
execution of the models in a distributed environment in which remote function calls between
modules are asynchronous (non-blocking) and each initiated remote function call runs
in its own thread until the call is completed. Thus, 10 calls of the same function
initiate 10 different threads. Processing a remote function call at the receiver side
assumes that the corresponding thread is first put in the state "Waiting", secondly in the
state "Runnable" and finally in the state "Running". The function execution takes place
only in the "Running" state. If the execution cannot complete because some input data is
not yet available (the corresponding function calls have been initiated), the thread is
sent back to the "Waiting" state, meaning that the thread sleeps until its input data is
available. Each component gets its input data by calling the corresponding producer in
an asynchronous manner; it may continue its execution or wait for the response if the
requested data is needed immediately. The main advantage of our proposed software
architecture is the decentralized data communication between components: each
component requests its input data when needed, while the progress of the component
execution is controlled by a master component, which assumes the coordination of the
concurrent execution.
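A compact sketch of this scheduling discipline, deliberately simplified (plain Python
threading, no idle detection, queue and method names invented), may help to fix the
idea:

    import threading
    from queue import Queue, Empty

    class ExecutionThreadScheduler:
        """Requests pass through Waiting -> Runnable -> Running."""
        def __init__(self):
            self.waiting, self.runnable = Queue(), Queue()

        def enqueue(self, request):          # every remote call starts Waiting
            self.waiting.put(request)

        def on_manager_broadcast(self):
            # SimBus Manager signal: Runnable requests start Running
            # (one thread per call), then Waiting requests become Runnable.
            while True:
                try:
                    threading.Thread(target=self.runnable.get_nowait()).start()
                except Empty:
                    break
            while True:
                try:
                    self.runnable.put(self.waiting.get_nowait())
                except Empty:
                    break

In the real platform the SimBus Manager only issues this broadcast once all Running
threads are terminated or idle, which is what guarantees the deadlock-free progression
described above.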
In Figure 5 an exemplary extract of request processing within the Execution
Thread Scheduler is depicted. Following the interaction between the modules in
Figure 3, the processing of the Function 4 request is shown. At the initial stage of the
figure, the execution of Function 4 is in the state "Running". This means that the
processing of Function 4 within Simulator B is active. During the execution of
Function 4, Function 2 of Module A is requested (Steps 2, 3, 4 and 5 in Figure 3). This
function request is queued in the "Waiting" state of the Execution Thread Scheduler of
Module A, as depicted in the second stage of Figure 5. Afterwards the execution of
Function 4 is idle until it receives the requested data from Module A. When all
execution threads in the state "Running" are terminated or idle, the SimBus Manager
sends the state change signal. The third and fourth stages show the state changes of the
Function 2 and Function 4 executions in both modules. At the final, fifth stage, the
request of Function 2 calls its implementation within Simulator A using the SimBus
Connector.
3. Evaluation by Machine Tool Simulation
The developed software architecture was implemented on a CORBA-like
platform using ICE [14]. The implementation provides a scheduling service (SimBus
Manager) and a framework that contains an abstract module container (cf. Modules A
and B in Figure 3). Furthermore, a SimBus Connector was implemented for all required
simulation tools (Simulink connector, CutS connector, etc.). At runtime each
instantiated module hosts a certain connector to communicate with the tool-specific
simulator. Within the abstract module container, the Execution Thread Scheduler and
the universal Execution Management Interface are implemented. The Data Exchange
Interface was designed for each module type (lowest level in Figure 2). The design of
the Data Exchange Interfaces is based on the Interface Description Language of ICE.
Using this interface description, skeleton code to bind the interface to the Execution
Thread Scheduler and the SimBus Connector can be generated automatically by the
ICE tools. For the SimBus Manager a user interface was implemented. With this tool
the user can initiate and start all required modules and can control the simulation step
by step via broadcasts on the SimBus. To realize the upper level of Figure 2, a scenario
configuration editor was implemented based on the Graphical Modeling Framework of
Eclipse/EMF. With this configuration editor the user can create new simulation
scenarios using a predefined library of simulation modules.
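For a flavor of this tool chain, a Data Exchange Interface might be declared in ICE's
Slice IDL and bound from Python roughly as follows. The interface shown is a made-up
example, not the project's actual definition; slice2py is ZeroC's real Slice-to-Python
compiler, and Ice.initialize/stringToProxy are real ICE run-time calls:

    # Hypothetical Slice definition, compiled with slice2py:
    #   module SimBus {
    #       interface PositionController {
    #           double targetAxisValue(double t);
    #       };
    #   };
    import sys
    import Ice  # ZeroC Ice run-time for Python

    with Ice.initialize(sys.argv) as communicator:
        base = communicator.stringToProxy("PositionController:default -p 10000")
        # SimBus.PositionControllerPrx.checkedCast(base) would yield a typed
        # proxy here, using the skeletons generated by slice2py; it is omitted
        # because the Slice module above is only a hypothetical example.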
The use case "process machine interaction" was built up for the evaluation of the
presented approach. Four modules, based on the machine tool sub-systems described in
the first section, were defined in this scenario setup. As shown in Figure 6, there are
separate modules for the physical and logical sub-systems: a numerical control, a
separate position controller, a structural model and a material removal module
representing the results of machining. The objective of this scenario setup is the
simulation of the process machine interaction based on real tool paths.

Figure 6. Use case "process machine interaction": the Numerical Control, Position
Controller, Structural Model and Material Removal modules exchange target and
current axis values, axis accelerations and process forces.
The abstract modules were realized in several software implementations, i.e. models
with different degrees of detail. Each module is thereby represented by two or three
exchangeable models using the common module interfaces. Due to this variety of
available models, it is possible to configure the shown scenario setup in 36 different
ways without formal restrictions of compatibility (two modules with two models each
and two with three: 2 × 2 × 3 × 3 = 36). The number of alternatives increases with the
number of models representing each module. Regarding the quality of the simulation
results, the user has to ensure that the chosen configuration affords adequate results.
4. Conclusions
A flexible integration of several heterogeneous simulators into holistic machine
simulations requires a suitable middleware platform that manages the interactivity
between the simulated machine modules. Commonly used (event-based) reference
architectures for simulator integration (e.g. the HLA) do not overcome this challenge.
For this reason a software architecture that schedules the interactivity between
concurrently running but pre-connected simulators had to be researched. In the
previous sections such a software architecture, the Simulation Bus (SimBus), was
described in detail. This architecture was implemented as a middleware platform for
the flexible configuration of simulation scenarios based on a predefined pool of
simulation modules. Subsequently a
representative simulation scenario, "process machine interaction", was deployed on
that platform and successfully evaluated.
Acknowledgments
This research work was supported by the Niedersächsisches Ministerium für
Wissenschaft und Kultur (NMWK) within the project "Progression - Diligent
Production", subproject "FleXimPro - Flexible software architecture for the integrated
simulation of manufacturing processes of hybrid machine tools".
References
[1] O. Sauer, M. Schleipen, and C. Ammermann, "Digitaler Fabrikbetrieb. Virtual
Manufacturing," in 4. ASIM Fachtagung: Simulation in Produktion und Logistik -
Integrationsaspekte der Simulation: Technik, Organisation und Personal, 7-8 October
2010, Karlsruhe, 2010.
[2] ISO 10303-1:1994, Industrial automation systems and integration - product
data representation and exchange - Part 1: Overview and fundamental principles.
International Organization for Standardization, 1994.
[3] M. Zaeh and M. Hennauer, "Prediction of the dynamic behaviour of machine
tools during the design process using mechatronic simulation models based on finite
element analysis," Production Engineering - Research and Development, Vol. 5, pp.
315-320, 2011.
[4] C. Brecher and S. Witt, "Simulation of machine process interaction with
flexible multi-body simulation," in Proceedings of the 9th CIRP International
Workshop on Modeling of Machining Operations, 2006, pp. 171-178.
[5] D. Chen, L. Wang, and J. Chen, Large-Scale Simulation: Models, Algorithms,
and Applications, 1st ed., CRC Press, 2012.
[6] J. S. Dahmann, F. Kuhl, and R. Weatherly, "Standards for Simulation: As
Simple As Possible But Not Simpler - The High Level Architecture For Simulation,"
SIMULATION, Vol. 71, No. 6, pp. 378-387, Jan. 1998.
[7] Pitch - HLA Tutorial. [Online]. Available: http://www.pitch.se/hlatutorial.
[Accessed: 14-Mar-2013].
[8] Modelica and the Modelica Association - Modelica Association. [Online].
Available: https://www.modelica.org/. [Accessed: 21-Mar-2013].
[9] R. Kossel, W. Tegethoff, M. Bodmann, and N. Lemke, "Simulation of
complex systems using Modelica and tool coupling," in The 5th International Modelica
Conference, 2006, pp. 485-490.
[10] T. Blochwitz, M. Otter, J. Akesson, M. Arnold, C. Clauß, H. Elmqvist, M.
Friedrich, A. Junghanns, J. Mauss, D. Neumerkel, et al., "Functional Mockup
Interface 2.0: The Standard for Tool independent Exchange of Simulation Models," in
9th International Modelica Conference, Munich, 2012.
[11] O. Enge-Rosenblatt, C. Clauß, A. Schneider, P. Schneider, and O. Enge,
"Functional Digital Mockup and the Functional Mock-up Interface - Two
Complementary Approaches for a Comprehensive Investigation of Heterogeneous
Systems," in 8th International Modelica Conference, Dresden, 2011.
[12] V. Böß, J. Brüning, and B. Denkena, "Standardized Communication in
Simulation of Interacting Machine Tool Components," in Concurrent Engineering
Approaches for Sustainable Product Development in a Multi-Disciplinary
Environment: Proceedings of the 19th ISPE International Conference on Concurrent
Engineering, 2012, pp. 825-836.
[13] B. Denkena and V. Böß, "Technological NC Simulation for Grinding and
Cutting Processes Using CutS," in Proceedings of the 12th CIRP Conference on
Modelling of Machining Operations, Donostia-San Sebastián, Spain, 2009, Vol. II, pp.
563-566.
[14] ZeroC, Inc., "Internet Communications Engine (Ice)," Welcome to ZeroC, the
Home of Ice, 2013. [Online]. Available: http://www.zeroc.com/. [Accessed: 10-Apr-
2013].


Learning and Concurrent Engineering in
the Development of a High Technology
Product/Service System

Ronald C. BECKETT 1
School of Management and Marketing
Deakin University, Victoria, Australia

Abstract. This paper explores project management techniques that can support the
development of novel product-service systems. Some observations from the
development of an airborne earth properties measurement system are provided. The
intellectual property and the data this system could potentially deliver were more
important than the potential commercial value of the product itself. What was sought
was a complete business service solution. A concurrent engineering approach was
implemented linking both product development and survey data/analysis services.
The blend of product and service was integrated using a function modeling technique.
It was observed that the implementation of some functions required radical
innovation whilst others could be implemented through incremental improvements to
current practice. It is suggested in the paper that adapting production learning curve
concepts to reflect the relative degrees of uncertainty involved in individual
subsystems can enhance project management forecasting practice.
Keywords. Concurrent engineering, radical innovation, product/service systems

1. Introduction
Hara et al. [13], [14] considered the design of product-service systems in response to
market trends towards service-based solutions rather than products. They noted that
this requires a particular kind of value proposition that may combine tangible products
with intangible services, and that new kinds of design tools may be required. Potential
problems were seen as a gap between customer analysis and product/service activity
design, and the separation of product and service design activities. Menora et al. [10]
explored ways in which new service development might be different from product
development, starting with consideration of what constitutes a new service. A radical
innovation may be a new kind of service for undefined markets delivered via ICT, a
new entry into an existing market, or a new offering to an existing market. Incremental
innovation may be a service line extension, a service improvement or a style change.
They suggested three research challenges:

1 Corresponding author: ron.beckett@deakin.edu.au
20th ISPE International Conference on Concurrent Engineering
C. Bil et al. (Eds.)
2013 The Authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-302-5-30
• Specifying a priori the type(s) of new service to be studied in order to design a
study around that new service and frame the implications of research findings.
• Integrating understanding of relevant facets of new product development
(process and performance) that are most applicable to furthering the study
and understanding of new systems development.
• Choosing the appropriate unit of analysis that facilitates the research design,
analysis and answering of the specific new system design research
question(s) investigated.
This paper presents observations from a case study of radical product innovation
(that could support a new service offering to an existing market) where both the
product and the service aspects were developed in parallel. The unit of analysis was an
individual project sponsored by a firm that was primarily interested in the data
provided by the total system. This differs from most product-service cases reported in
the literature where adding a service to a firm's product is seen as a way of capturing
additional value. In the case study, ownership of the associated intellectual property
and the data delivered was more important than the potential commercial value of the
product itself. What was sought was a complete business service solution. The radical
nature of the product and the concurrent engineering development of quite different
kinds of subsystems involved: different professional communities, in some cases close
collaboration with key technology providers, a high rate of learning, and the evolution
of a form of agile project management approach.
The paper begins with some observations from the literature on concurrent
engineering practice in these circumstances, followed by a brief case study description,
and reflections on the case related to project management aspects.
2. Some Observations from the Literature
Valle & Vázquez-Bustelo [16] analyzed 134 responses from Spanish firms utilizing
concurrent engineering techniques, which suggested that reduced development time and
a superior product were the outcomes in an incremental innovation environment, whilst
lower cost was the dominant outcome associated with radical innovation. Ellram et al.
[6] considered the interaction of product, process and supply chains in a concurrent
engineering environment to more effectively deliver a new product (see Figure 1).
They saw the potential barriers as a lack of top-level commitment, a failure to
integrate historical and new practices, and a lack of alignment between the professional
communities involved.
In the context of the increasing economic influence of the services industry
sector, Tien & Berg [15] suggest a systems engineering approach drawing on emerging
information, communication and decision technology tools to develop new services.
Yang [19] proposed a systems approach to service development in a concurrent
engineering environment, noting that service quality was the measure of success. Yang
suggested that a number of design activities had to be integrated: (i) process
design; (ii) quality design; (iii) production-management design; (iv) capacity design;
(v) management design; and (vi) physical and technical design.
The points taken from this brief overview are that what makes sense in the parallel
development of a product/service system is context-sensitive, but that adopting a
systems engineering approach may support integrated development. These points are
illustrated in the following case study.
3. The VK1 Case
The case describes a product/service in transition from concept to an operational tool.
The concept of an advanced type of gravity gradiometer sensor emerged from basic
research at an Australian university in the 1990s [8]. The technology offered promise
as a superior form of earth property aerial survey mapping tool that would supplement
other kinds of data to help geophysicists identify significant mineral deposits. The
aerial survey instrument must discriminate weak signals from substantial background
noise, with high volume data processing arrangements to incorporate some
transformations and corrections to yield usable data. This led to modeling, simulation
and system tuning requirements and an iterative approach to system development [3].
The researchers had developed a prototype to demonstrate what they called "proof
of concept" in that the soundness of the underlying theory was demonstrated. On this
basis, engineering development of the instrument was funded. An iterative approach to
project management became the norm. At this point, it was again declared that "proof
of concept" had been demonstrated, in that it had been shown that a suitable instrument
could be made. This did not necessarily impress the geologists who wanted to utilize
data collected using the instrument. To them, what had been provided at this point was
the equivalent of a medical CAT scan instrument without any imaging software. In
their view, "proof of concept" would occur once the instrument had collected data from
a region with well-understood earth properties, and this data was presented in the form
of a map that could be interpreted in geological terms.
A development program was initiated in the 2000s following a scientific expert
panel review of possibilities. Scientists from a university, engineers recruited by an
industry sponsor, and university technicians and tradesmen were organized as a project
team. The university hosted the team, and a project manager employed by the industry
client managed it. A number of complex technologies had to be combined in a unique
way to achieve the desired outcome, but the project sponsor was experienced in the use
of sophisticated sensors and complex data processing of aerial survey data. A pre-
project review of the underlying science suggested that whilst all of the advanced
technologies to be integrated had been used somewhere else before, their integration
may prove problematic, and this was indeed the case. The iterative nature of the
development process raised a number of issues. Whilst the researchers were quite
comfortable with the process, and had formal procedures for capturing test data, etc.,
when it came to clearly defining what had to be made and how it could be consistently
produced, there was an apparent lack of system, leading to misunderstandings and
mistakes in manufacture. The client, who was familiar with stage-gate management,
found it difficult to understand where and how progress was being made and how client
requirements were being met in this iterative environment. Subsequently some systems
thinking project lifecycle tools were introduced.

Figure 1. (from Ellram et al. [6])
The systems thinking practices, focused on the end-game of providing an aerial
survey service, also facilitated:
• The development of hierarchical functional specifications using an IDEF(0)
modeling tool [1] without presuming the system configuration (people/hardware/
software mix) - see Figure 2.
• The declaration and specification of interfaces at an early stage to support the
parallel development of different components of the whole system.
• A life-cycle reference architecture that was re-used at multiple hierarchical
levels as an evolutionary pattern within development phases as well as across phases
[7].
• Adoption of the Plan-Do-Check-Act ISO 9000 philosophy. The objective
was to assure the quality of the underlying science, the quality of the system
engineering, the reliability of production processes, the quality and reliability of the
product, and the quality and reliability of the field data collection service and data
processing operations. A dedicated Wiki-based system was used to both satisfy ISO
9000 data management requirements and support project knowledge capture [4].
Even at the high level representation shown in Figure 2, the significant number of
influence factors, and the fact that a preceding function may provide both inputs to the
next function and conditions governing its operation, are evident. There were five major
sub-tier functions associated with the survey logistics service system, and three
associated with the survey data processing and interpretation subsystem. Functional
descriptions were taken down to sub-sub-tier elements and proved relatively stable
even though design details continued to change. An example of change in the
descriptions was the addition of system trouble-shooting capability. Some subsystems
were regarded as associated with incremental innovation, some with radical innovation,
and all required some form of collaboration with a technology or service provider (see
Figure 1). In one way or another there was a degree of uniqueness about the
management requirements of each subsystem, and each had a designated development
team, which generally included a member from one of the other teams.
By the end of the 2000s initial flight trials involving some subsystems had
commenced [2]. A significant number of patents relating to ways of making this unique
product had been filed. There were some interesting discussions about that time as to
the readiness of the equipment to fly. Should more lab work be done first, or would
things be learned that could not be learned in the lab? Subsequent experience suggested
that lab and flight trial iterations informed each other: there was learning across sub-
systems as well as learning within each one.

4. Some Observations from the Case
The project management arrangements that evolved blended a conventional mix of
milestone/high-level activity identification with less common iterative, agile
management practices (see Figure 3) within this broad framework. Although it was not
seen in this context at the time, the approach simply made sense.



Figure 3. A representation of iterative development (adapted from Virine [17])



Figure 2. IDEF(0) Top Level Model
An illustrative iteration strategy was a series of tests designed to help fine-tune
simulation models used in optimizing subsystem configurations. A standard agenda
was adopted for the weekly product development team meetings that addressed both
administrative and development activities. Team members that generally worked
independently reported on progress against a succession of short-term assigned tasks
and were assigned new ones. Information about each task was placed on a board in the
project room, and through the week each team member updated information about
their task. This practice has parallels with agile project management techniques used in
software development (see e.g. Planbox [11]), where an initiative may involve a
number of projects, each of which has a backlog of items to be worked on
progressively involving a number of tasks. Backlog items are scheduled into an
iteration cycle having a preset duration (e.g. a week).
In the context of this project management perspective the question of how to plan
for iterations arose, recognizing this might only be evident in retrospect. Conventional
planning identifies a linked series of requisite activities, with an estimate of time and
resources required based on the assumption that all will go according to plan. Worst
case scenarios may also be considered, but how can these be imagined? Savci &
Kayis [12] suggested that concurrent engineering practice, running many activities in
parallel, may have its own risks. They suggested that pooling experience from multiple
projects to identify potential areas of risk and identifying management responses may
help keep things on track. There is anecdotal evidence from the operation of other
kinds of aerial survey platforms that system reliability improves over time as
experience is gained. In other words, there is a kind of learning curve. Learning curves
have been observed in other situations like the repetitive construction of complex
objects such as aircraft, and empirical formulae based on these curves are used in
forecasting total cost over many years of production.
Records from VK1 monthly and quarterly project meetings over several years were
available in the sponsor firm's document repository, and these were analyzed to assess
estimates to complete compared with actuals as a particular activity progressed. For the
more complex activities, the estimate tended to increase soon after work had started,
based on what had been learned at that time. As an activity progressed, the residual
time reduced, but not at the rate forecast. As might be expected, more iterations were
required in the more complex systems, with higher rates of learning being indicated.
In the notion of a learning curve historically associated with aircraft production,
there is a characteristic reduction in cost at every doubling of the production number.
With an 85% learning curve, the second aircraft will be made in 85% of the time of the
first one, and the fourth aircraft will be made in 85% of the time of the second one. The
400th aircraft will be made in 85% of the time of the 200th one, and so on. Mapping
actual numbers on a log-log plot often reveals such patterns.
Table 1 below provides an order-of-magnitude summary of the effect of
different learning curves. A 95% factor is appropriate where there will be limited scope
for learning, for example in a highly automated process or when the task is a very
familiar one. 75% is appropriate where there is a high rate of learning, for example
where the task is complex or there is uncertainty about what has to be done or how to
do it. For conditions of substantial uncertainty a 50% curve may be appropriate. By
way of example, in a high learning rate environment (75%) the level of effort to
support the first flight may be 12.3 times that required after experience has been gained
over 400 flights. Using another example, in making custom parts, experienced people
may have made say ten parts broadly similar to the one now under consideration. If the
part or the process is quite complex (75% curve) it may take 2.6 times the effort
estimated based on the ten-part experience, but if the part is simple or the process is
automated (95% curve) it may only take 20% longer.

Number of    Factor 95 on  Factor 95 on  Factor 85 on  Factor 85 on  Factor 75 on  Factor 75 on
repetitions  400 base      10 base       400 base      10 base       400 base      10 base
1            1.56          1.19          4.16          1.73          12.3          2.6
10           1.31          1             2.4           1             4.7           1
100          1.11          -             1.38          -             1.79          -
400          1             -             1             -             1             -

Table 1. Worst case multipliers based on learning curve factors (the base estimate of 1
is shown for each scenario; "-" marks repetition counts beyond the scenario's base)
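These multipliers follow from the classic Wright form of the learning curve,
T_n = T_1 * n^(log2 p), where p is the learning factor, so the worst-case multiplier for
repetition n against a base of N repetitions is (n/N)^(log2 p). A few lines of Python
reproduce the order of magnitude of the Table 1 values (assuming, as seems likely, that
this is the form the author used):

    import math

    def multiplier(n, base, p):
        """Effort at repetition n relative to repetition `base`
        on a Wright learning curve with factor p (e.g. 0.85)."""
        return (n / base) ** math.log2(p)

    for p in (0.95, 0.85, 0.75):
        print(p, round(multiplier(1, 400, p), 2), round(multiplier(1, 10, p), 2))
    # prints roughly: 0.95 1.56 1.19 / 0.85 4.08 1.72 / 0.75 12.02 2.6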

The idea here is to identify some rationale for estimating the impact of learning in
complex development projects, recognizing that different parts of a total system may
have different levels of learning associated with them. By way of example, an existing
gravity gradiometer system uses a sensor based on US military technology purchased
as a black box from a military supplier. Due to the prior military application testing
and experience, the sensor is very reliable (save for infant mortality problems often
associated with complex systems), and it might be expected that the level of adaptation
associated with its use might be low. A learning factor scenario of 95 % might describe
that situation. The primary cause for concern with the total system has related to the
correction and interpretation of data collected in terms of its geophysical implications.
Anecdotal evidence suggests that software refinement took 4 5 years with an
accumulated effort that might imply a learning factor of 75%.
5. Discussion
Badham et al. [5] noted that studies of CE implementation effectiveness gave mixed
results. They saw the influence factors as: (a) senior management commitment to new
product introduction, (b) preparation for implementation as a political resource
allocation process, (c) a focus on organizational issues to avoid cross-functional
conflicts, and (d) a project leader acting as product champion, team member motivator
and stakeholder manager. The case study sponsor organization was experienced in the
management of complex projects, and factors (a), (b) and (c) were well attended to.
The VK1 project leadership function involved multiple champions at different levels
within the organization drawn from different professional communities, and a hierarchy
of weekly, monthly and quarterly review meetings to facilitate project communication
and integration. The development status of competing technologies was also discussed
at the quarterly reviews, with the possible option of putting development effort behind
another technology - but then IP access may have become problematic. This grounded
view helped revisit the value proposition promised by the project, even if development
time was longer than hoped for.
Yan and Jiang [18] observed some issues associated with the use of concurrent
engineering practice that they suggest might be accommodated by blending agile
management and concurrent engineering concepts. They saw the potential benefits as
giving resource sharing special consideration, providing organizational flexibility
through the use of agile teams, and fitting in with the firm's existing organizational
structure with little change being required. In the VK1 case study, both the project
sponsor and the university provided some kinds of resources to support a dedicated
project team located within the university. The University Head of School involved
suggested the collaborative working arrangements that evolved provided more effective
technology transfer than licensing or spinoff company strategies. Karlström and
Runeson [9, p. 49] have suggested that "Agile methods give the stage-gate model
powerful tools for micro planning, day-to-day work control, and progress reporting.
The functioning product and face-to-face meetings, for example, support much more
powerful communications than written documents. The stage-gate model, in turn, gives
agile methods a means to coordinate with other development teams and
communicate with functions such as marketing and senior management." This is also
consistent with the practices that evolved in the VK1 project.
Chachere et al. [6] studied a process for rapid concept development at NASA's Jet
Propulsion Laboratory using a combination of expert designers, advanced modeling,
visualization and analysis tools, and social processes, plus a specialized design facility.
Planning involved a focus on average and worst case scenarios to clarify what had to be
managed - an exception handling orientation rather than a best practice one. They
discuss the impact of latency (a measure of time delay in a system) in information or
decision flows. They suggest Just-In-Time knowledge flows with short lead times, plus
facilitation to make the next step clear, plus team autonomy and keeping it simple. These
features were evident in the VK1 project, but rather than being designed-in, they were
the result of several team members having worked together for a long time with
minimal financial resources. This raises the interesting question of whether adding
more resources would have significantly sped things up, as then the current experts
would have had to devote part of their time to bringing others up to speed. On the other
hand, there is anecdotal evidence that collaborating with others through social networks
has been beneficial.
6. Concluding Remarks
There is increasing interest in the fast-track development of product/service systems
and new services. Some academic literature suggests this may be achieved by a
combination of concurrent engineering project practices and systems engineering
methods. The literature also suggests the nature of, and benefits derived from,
concurrent engineering practices are contingent on the nature of the innovation sought
(incremental or radical) and the extent of collaboration involved (supply chain and joint
development). This paper describes project management techniques that evolved to
support the development of a novel product-service system - an airborne earth
properties measurement system for use in mining exploration. The intellectual property
and the data this system could potentially deliver were more important than the
potential commercial value of the product itself. What was sought was a complete
business service solution. The blend of product and service was integrated using a
function modeling technique. The high level functional descriptions and the
management of interfaces between the subsystems provided a stable platform for
relatively independent subsystem development. It was observed that the implementation
of some functions required radical innovation, sometimes in conjunction with specialist
technology providers, whilst others could be implemented through incremental
improvements to current practice, sometimes in collaboration with established service
providers. A point to be made here is that without getting into the subsystem level, just
what has to be managed may not be evident.
The concurrent engineering approach that evolved included some attributes
found in agile project management practices now often used in software development.
This involves managing a series of iterations that facilitate fast learning about what
works and what doesn't. But when developing a complex system, how many iterations
are required and how is this influenced by the rate of learning needed? Some ideas
based on learning curve concepts used in other settings are presented. There is some
evidence from the literature that the project management practices that evolved in the
case study project have been observed in other project settings, and thus may have
more general application.
References
[1] AIWIN Automated Function Modeling for Windows, Knowledge Based Systems Inc.,
http://www.kbsi.com (last accessed May 20, 2012)
[2] Anstie, J., Aravanis, T., Johnston, P., Mann, A., Longman, M., Sergeant, A., Smith, R., van Kann, F.,
Walker, G., Wells, G. and Winterflood, J. (2010) Preparation for flight testing the VK1 gravity
gradiometer. In R. J. L. Lane (editor), Airborne Gravity 2010 - Abstracts from the ASEG-PESA
Airborne Gravity 2010 Workshop: Published jointly by Geoscience Australia and the Geological
Survey of New South Wales, Geoscience Australia Record 2010/23 and GSNSW File GS2010/0457.
[3] Beckett, R.C. (2008) An Integrative Approach to Project Management in a Small Team Developing a
Complex Product. International Conference on Industrial Engineering and Engineering Management,
Singapore, December 8-11 (ISBN 978-1-4244-2630-0)
[4] Beckett, R.C. (2009) Capturing knowledge during a dynamically evolving R&D project: A particular
application of Wiki software. International Journal of Knowledge, Culture and Change Management,
Vol 9, No 2, pp 59-68
[5] Badham, R., Couchman, P. and Zanko, M. (2000) Implementing Concurrent Engineering. Human
Factors and Ergonomics in Manufacturing, Vol 10, No 3, pp 237-249
[6] Chachere, J., Kunz, J. and Levitt, R. (2004) Observation, Theory, and Simulation of Integrated
Concurrent Engineering: Grounded Theoretical Factors that Enable Radical Project Acceleration.
Center for Integrated Facility Engineering, Stanford University, CIFE Working Paper #WP087, August
[6] Ellram, L.M., Tate, W.L. and Carter, C.R. (2007) Product-process-supply chain: an integrative approach
to three-dimensional concurrent engineering. International Journal of Physical Distribution & Logistics
Management, Vol 37, No 4, pp 305-330
[7] GERAM (1999) Industrial Automation Systems - Requirements for Enterprise Reference Architectures
and Methodologies. Annex A GERAM: Generalized Enterprise Reference Architecture and
Methodologies, ISO/FDIS 15704, National Institute of Standards and Technology, USA.
[8] van Kann, F.J., Buckingham, M.J., Edwards, C. and Mathews, R. (1994) Performance of a
superconducting gravity gradiometer. Physica B: Condensed Matter, Volumes 194-196, Part 1, pp 61-62
[9] Karlström, D. and Runeson, P. (2005) Combining Agile Methods with Stage-Gate Project Management.
IEEE Software, May/June, pp 43-49
[10] Menora, L.J., Tatikonda, M.V. and Sampson, S.E. (2002) New service development: areas for
exploitation and exploration. Journal of Operations Management, Vol 20, pp 135-157
[11] Planbox - Agile Project Management Tool, https://www.planbox.com (last accessed May 20, 2013)
[12] Savci, S. & Kayis, B. (2006) Knowledge elicitation for risk mapping in concurrent engineering projects.
International Journal of Production Research, Vol 44, No 9, pp 1739-1755
[13] Tatsunori Hara, Tamio Arai, Yoshiki Shimomura and Tomohiko Sakao (2007) Service/Product
Engineering: a new discipline for value production. 19th International Conference on Production
Research, Valparaiso, Chile, July 29 - August 2
[14] Tatsunori Hara, Tamio Arai & Yoshiki Shimomura (2009) A CAD system for service innovation:
integrated representation of function, service activity, and product behaviour. Journal of Engineering
Design, Vol 20, No 4, pp 367-388
[15] Tien, J.M. and Berg, D. (2003) A Case for Service Systems Engineering. Journal of Systems Science
and Systems Engineering, Vol 12, No 1, pp 13-38
[16] Valle, S. and Vázquez-Bustelo, D. (2009) Concurrent engineering performance: Incremental versus
radical innovation. Int. J. Production Economics, Vol 119, pp 136-148
[17] Virine, L. (2008) Adaptive project management. PM World Today, Vol X, No V, May
[18] Yan, H.S. and Jiang, J. (1999) Agile concurrent engineering. Integrated Manufacturing Systems,
Vol 10, No 1, pp 103-112
[19] Yang, C-C. (2007) A Systems Approach to Service Development in a Concurrent Engineering
Environment. The Service Industries Journal, Vol 27, No 5, pp 635-652
Cloud Automatic Software Development
Hind BENFENATKI a,1, Hamza SAOULI b, Nabila BENHARKAT c,
Parisa GHODOUS a, Okba KAZAR b and Youssef AMGHAR c

a Université de Lyon, CNRS, Université Claude Bernard Lyon 1, LIRIS UMR 5205, France
b Computer Science Department, University of Biskra
c INSA-Lyon, LIRIS UMR 5205, F-69621, France
Abstract. Software Engineering must face the new challenges imposed by the
Cloud Computing paradigm, and new methodologies for software development must
be proposed. For this purpose, this paper presents a specific methodology for
collaborative software development in the Cloud, and then describes the architecture
of Automatic Software Development as a Service (ASDaaS). The goal of ASDaaS is
to popularize software development in the Cloud and make it accessible to non-IT
professionals. In fact, with Cloud Computing and the convergence toward
"Everything as a Service", we no longer consider the classical context of software
development, where IT teams or integrators are solicited to perform software
development. ASDaaS allows a stakeholder without computer skills to perform
automatic developments from functional requirements, SLA (Service Level
Agreement) requirements, and business rules definitions. ASDaaS promotes the
discovery and composition of web services. It is itself composed of a set of services
which can carry out and cover the whole process of software development. ASDaaS
also allows the automatic development on Cloud platforms of undiscovered services
by model transformation. Indeed, for each new development, a choice of PaaS
(Platform as a Service) is performed by matching the development constraints
imposed by the stakeholder with the features and services offered by the Cloud
platform.
Keywords. Collaborative Development, Cloud Computing, Business Rules,
Business Process
Introduction
The main ideas of the "Cloud Computing" paradigm are to enable companies to
acquire computing resources on demand, to pay according to use, and to be relieved of
concerns about the provenance of the resources [1].
Cloud Computing offers many advantages for software development [2],
particularly because it offers the possibility of elastic resource allocation. It promotes a
simple and ergonomic use, without having to worry about the underlying infrastructure
or the deployed development environment. Another advantage of this new paradigm
for software engineering is that companies and organizations can develop

1 hind.benfenatki@universite-lyon.fr
20th ISPE International Conference on Concurrent Engineering
C. Bil et al. (Eds.)
2013 The Authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-302-5-40
massively distributed software systems by dynamically assembling services. These
services may come from different providers [3].
With the Cloud Computing paradigm, traditional software development
methodologies gradually give way to the composition of services that represent
business software artifacts. However, the immaturity of the Cloud model in the field of
software engineering leads to a lack of specific and adapted methodologies for
developing appropriate software. Indeed, the methodologies known to software
engineering do not address the distributed and collaborative nature, the simple and
user-friendly approach, and the flexibility in the allocation of resources offered by
Cloud Computing.
In this paper, we propose a methodology for the collaborative development of
Cloud-based service-oriented software, and an architecture for Automatic Software
Development as a Service (ASDaaS). ASDaaS allows the development of business
processes with minimal intervention of the business stakeholder and without the
intervention of developers. It promotes the discovery and the composition of the
services described in the business process, and the automatic development of
undiscovered services on different Cloud platforms according to the APIs (Application
Programming Interfaces) and services proposed by the Cloud platforms, and the SLA
(Service Level Agreement) and KPIs (Key Performance Indicators) described by the
user. ASDaaS also allows the selection of a Cloud infrastructure for the software
deployment.
The rest of this paper is structured as follows. Section 1 describes the software
engineering issues vis-à-vis the Cloud. Section 2 addresses related work. Section 3
describes the proposed methodology. Section 4 introduces ASDaaS's architecture.
Section 5 draws final conclusions and describes our future work.
1. Cloud Computing: Software Engineering Challenges
Cloud Computing is a model for network access, on demand, to a set of shared and
configurable computing resources (e.g. networks, servers, storage, applications, and
services) that are rapidly deployable and releasable with minimal administration effort
or provider interaction. This paradigm considers three main service models: SaaS
(Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a
Service) [4], onto which other services such as "Data as a Service" and "Desktop as a
Service" are grafted. Four deployment models are possible with Cloud Computing:
public, private, community and hybrid.
From a software engineering point of view, Cloud Computing faces several
challenges related in particular to:
• Distributivity: the modularity imposed by Cloud Computing is a challenge in
itself, which leads to dividing the application into units that can be deployed in a
distributed environment [5]. The debug and test operations can therefore be difficult.
The distribution of data in the Cloud must also meet the challenge of access to data [6].
• Security: storing data on the Cloud infrastructure creates security challenges with
respect to data access and confidentiality, due partially to the multi-tenant
architecture [7].
• Cloud service composition: developers are brought to discover and compose web
services to avoid reinventing existing services [2].
• Interoperability: the migration of developed applications from one Cloud platform
to another poses interoperability and portability problems, due to the advocated
architecture and the API compatibility between the different Cloud providers [6].
• Dependence on PaaS: the technological advancement heavily relies on the PaaS
support technologies [6].
• Elasticity and scaling: another challenge is that the hardware architectures are
not fixed but rather flexible, due to the property of elasticity in Cloud Computing
technologies.
• Evaluation: the evaluation of applications developed on the Cloud is difficult due
to the inability to assess the complexity of the APIs provided by the PaaS, since their
implementation is hidden from users [6].
2. Related Work
Due to the recent nature of the Cloud Computing paradigm, few methodologies and
approaches exist in literature. In [8], the authors proposed an approach that used the
Domain Specific Languages (DSL) within the process of development and deploy-
ment of software on the Cloud. This approach relies on two basic steps. The first con-
cerns the development of a DSL for a given domain in order to facilitate the modeling
of an application. In the second step, the DSL language is used by Application De-
signer to model applications. These models are automatically translated into specific
executable code to a target Cloud platform, and then this code is automatically de-
ployed on the Cloud platform. This approach can be costly in DSL development time
during the first phase.
In [9], Ardagna et al. proposed the MODACLOUDS system issued from an Eu-
ropean project [10] that uses the principle of MDD (Model Driven Development) for
the development of applications on the Cloud. MODACLOUDS comprises a design
environment and a runtime environment. The design environment includes the model-
ing and code generation part, and includes a DSS (Decision Support System) module
that allows risk analysis for the selection of Cloud providers and evaluation of the
impact of the adoption of Cloud in the internal business processes. The runtime envi-
ronment allows to observe the system running and to provide feedback to the design
environment. This allows developers to react to performance fluctuations and rede-
ploy applications on different Clouds over the long term. This approach is very inter-
esting; however, it does not exploit the APIs and services provided by Cloud plat-
forms, and returns the final choice of Cloud platform to the developer (human inter-
vention).
The work proposed by [11] describes the SaaS Development Life Cycle
(SaaSDLC) and outlines six phases. During the envisioning phase, an identification
of new business opportunities and applications that can benefit from the characteris-
tics of the Cloud is made. A platform evaluation is then performed on the basis of
cost and capabilities. The planning phase includes overall functionality require-
ments and design specifications, and establishes project plans. Once the Cloud plat-
form is selected, a subscription is made. The Subscription phase involves the Cloud
provider and the client in the feasibility assessment of the application security archi-
tecture, and data architecture on the Cloud platform. Then comes the service devel-
H. Benfenatki et al. / Cloud Automatic Software Development 42
opment phase which is composed of a series of iterations. Deployment and testing
are performed continuously throughout each iteration. The last phase operations
includes the creation of process deployment and operation for the functioning of the
service hosted on the Cloud. The SaaSDLC does not consider reuse of Cloud services.
It promotes the development to a specific platform, making application portability
more difficult.
As part of IBM Research to provide a Software Development as a Service
SDaaS, the authors in [12] propose a hybrid development method (agile & workflow)
for the large projects. Two separate tracks characterize this method: Prototype and
Release Tracks. Prototype Track comprises a pure agile development, with short seg-
ments of development of specific features, customer demonstrations and iterations.
The Release Track includes integration and testing of all the features that have been
prototyped and completed in the previous cycle prototype. This work does not consid-
er the Cloud Computing aspect in the development process, but is mainly focused on
the adaptation of agile and workflow methods for projects development.
In [13], Guha et al. advocate the involvement of the Cloud provider in the Agile eXtreme Programming software development process, especially in the planning, design, building, testing and deployment phases, to mitigate the challenges associated with Cloud software development and make it more advantageous. The roles and activities of the Cloud provider and developers are pre-defined. The authors thus consider the division of roles between the various stakeholders in the agile development process of Cloud applications, but do not consider the other characteristics of Cloud applications.
In [14], the authors describe the Service-Oriented Software Development Cloud (SOSDC), a Cloud platform for developing service-oriented software together with a dynamic hosting environment. The SOSDC adopts a system architecture covering the three levels of Cloud services. The IaaS level, "Dynamic Provisioning Software Appliance", is primarily responsible for providing software appliances. The PaaS level, "App Engine: Dynamic Hosting Environment for Service Oriented Software", provides an App Engine for testing, deploying and monitoring applications without having to consider the technical details. The SaaS level aims to provide an "Online Service-Oriented Software Development Environment". Once an application is developed, the developer may request an App Engine hosting environment by specifying the deployment requirements. This approach provides a dynamic development environment by offering on-demand appliances for developers, but it does not exploit public Cloud platforms.
The methodology proposed in this work (i) maintains interoperability through the use of web services and the modeling of the functionalities to be developed; (ii) meets the requirements of the distributed nature of the Cloud; and (iii) does not depend on a particular platform.
3. The proposed methodology
We no longer consider the traditional way of development, where the customer goes to a software integrator or mobilizes his/her IT department to develop software that meets their needs. In this work, we do not consider developers as the key stakeholders, but rather business stakeholders who feel the need to automate Business Process Management (BPM). Unlike other applications, BPM considers the organization of the company; therefore, the definition of business rules is paramount.
Our methodology benefits from Cloud Computing and Web services to automatically build and deploy service-oriented software. It promotes the discovery and composition of services, but also entails the development on Cloud platforms of new services that cannot be discovered.
The result of this methodology is an Automatic Software Development as a Ser-
vice (ASDaaS). ASDaaS is a development environment in which the different ser-
vices are developed and deployed on various Cloud platforms, amounting to interact-
ing decentralized and interdependent systems. The ASDaaS uses three types of data
input (requirements in terms of functionality, the SLA and the business rules that
describe the business constraints) to generate a software release.
ASDaaS is an upper layer above SaaS. This positioning means that users of ASDaaS do not depend on a particular platform. In addition, it allows an appropriate platform to be chosen for each development: each platform offers different APIs, and the choice of platform should depend on the needs of the development.
The idea is that, via a browser, a business stakeholder can access an environment for the automatic development of business processes (ASDaaS). ASDaaS incorporates the composition of web services, the automatic development of services on Cloud platforms (PaaS: Google App Engine, Windows Azure), and the deployment of services on Cloud infrastructures (IaaS: Amazon Web Services).
The originality of our approach lies in the following facts: (i) it does not require deep knowledge of software engineering to gain access to development; (ii) it promotes the composition of services developed on multiple Cloud platforms (Inter-Clouds); (iii) it allows selection of a development platform (PaaS) according to the development; (iv) it allows selection of an IaaS for deployment that meets the SLA and pre-defined KPIs; and (v) it automatically deploys the developed applications on a preselected Cloud infrastructure.
Figure 1 illustrates the different phases of the proposed methodology. We note that the business stakeholder can, at any time, introduce changes as his or her feature needs evolve.
3.1. ASDaaS Subscription.
To initiate the project, a project identity sheet is created by the project creator. This
sheet contains an identification of the different organizations involved in the project,
and the various stakeholders of each organization. A profile is assigned to each stakeholder; the creator of the project has an administrator profile, which allows him or her to allocate tasks to the different stakeholders and to receive feedback, through operating reports, showing the progress of the work.
3.2. Requirements expression.
This phase consists of capturing the ASDaaS inputs described by the business stakeholder and translating the service needs into WSDL form for searching purposes. Three types of inputs are identified (a minimal data sketch follows the list):
• The requirements and expectations of the customer, expressed in terms of the functionalities (services) of a business process. This consists of defining the inputs/outputs of each service, without having to write the algorithms, and involves i) modeling the workflow description, ii) defining the number of processes, iii) describing the features included in each process, in addition to their inputs/outputs, and iv) defining the context and launch architecture.
• SLA: the stakeholder describes his/her requirements in terms of quality of service and of Cloud platform and infrastructure security.
• Business rules that define the business constraints.
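As a minimal illustration only, the three inputs could be captured in a structure such as the following Python sketch; the class and field names are hypothetical and not part of the methodology's specification.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRequirement:
    """One functionality of a business process, described by its I/O only."""
    name: str
    inputs: list[str]        # input parameter names
    outputs: list[str]       # output parameter names

@dataclass
class ASDaaSRequest:
    """The three ASDaaS inputs: functionalities, SLA and business rules."""
    services: list[ServiceRequirement] = field(default_factory=list)
    sla: dict[str, str] = field(default_factory=dict)        # quality/security requirements
    business_rules: list[str] = field(default_factory=list)  # business constraints

# Hypothetical example: one weather-estimation functionality
request = ASDaaSRequest(
    services=[ServiceRequirement("EstimateWeather",
                                 inputs=["destination", "period"],
                                 outputs=["forecast"])],
    sla={"availability": ">= 99.5%"},
    business_rules=["if the weather is good, favour outdoor tours"],
)
```

Such a record would then be translated into WSDL-oriented queries for the discovery phase.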
3.3. Application interface creation.
The client creates the interface through an easy-to-use tool: the business stakeholder has at his or her disposal a variety of form elements (windows, tabs, buttons, check boxes, etc.). The creation of the interface allows the stakeholder to see his or her requirements more clearly.
3.4. Service discovery.
In this phase, we discover, through a web services search engine, the services corresponding to the features included in the different business processes. The advantage of this phase is to reduce the development workload and improve the reusability of web services. Complex features that have not been discovered as web services are broken down, if possible, to start a finer-grained search, as sketched after this paragraph. If no further decomposition is possible and no web service has been found for a feature, we move on to the automatic service development phase.
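The decomposition-and-retry logic can be pictured with the following sketch; `search_registry` and `decompose` are hypothetical helpers standing in for the paper's search engine and feature-decomposition step, not an actual API.

```python
def discover(feature, search_registry, decompose):
    """Recursively search for web services covering `feature`.

    Returns the list of (sub-)features still left for automatic development.
    """
    if search_registry(feature) is not None:
        return []                        # covered by an existing web service
    sub_features = decompose(feature)    # break the feature down, if possible
    if not sub_features:
        return [feature]                 # undiscovered and indivisible
    missing = []
    for sub in sub_features:             # finer-grained search on each part
        missing.extend(discover(sub, search_registry, decompose))
    return missing
```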
3.5. Service composition.
This phase composes the discovered and developed services, proceeding as the features are discovered and implemented.
3.6. Automatic development of undiscovered services.
This phase identifies and develops the features that are described in the requirements but have not been discovered as web services. It involves three key steps:
1. Undiscovered services modeling: undiscovered services are modeled according to the UML (Unified Modeling Language) in order to allow the automatic generation of code for a targeted Cloud platform, with full knowledge of the tools it offers. The model-driven approach allows one to "model once and run everywhere" [9].
2. Discovery and selection of development platforms for each service: this step allows us to choose a Cloud platform based on the module to be developed and the technologies proposed by the PaaS platform. It includes the participation of the Cloud provider in the process of establishing the contract between the customer and the provider.
3. Automatic deployment and publication of the software artifact as a web ser-
vice.
3.7. Tests and validation.
The tests are made once the services are discovered and developed. Two types of tests are considered: (i) tests of the application features and of compliance with the business rules, with the aim of validating or amending the expression of needs; (ii) application availability tests, performed automatically at various moments (a small probe sketch follows). Validation is made by the customer; it takes place after the deployment of the application and after the final tests.
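A sketch of such an automatic availability test is given below; the URL and probing times are illustrative, and a production test would of course be richer than this single HTTP check.

```python
import time
import urllib.request

def availability_probe(url, epochs, timeout=5):
    """Check that the deployed application answers at the given epoch times."""
    results = []
    for t in epochs:
        time.sleep(max(0.0, t - time.time()))   # wait until the next test moment
        try:
            urllib.request.urlopen(url, timeout=timeout)
            results.append((t, True))
        except OSError:                          # network error or HTTP failure
            results.append((t, False))
    return results
```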
3.8. IaaS selection for application deployment.
A Cloud infrastructure is selected according to the performance indicators (KPIs) and the SLA required by the client. A sketch of such a selection is given below.
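The selection can be pictured as a constraint filter followed by a weighted ranking; the following sketch is illustrative only, with hypothetical provider names and indicators rather than the paper's actual selection algorithm.

```python
def select_iaas(candidates, sla, kpis):
    """Choose the Cloud infrastructure matching the client's SLA and KPIs.

    `candidates` maps provider names to measured indicator values, `sla`
    gives hard minimum constraints and `kpis` gives score weights.
    """
    eligible = {name: m for name, m in candidates.items()
                if all(m.get(k, 0) >= v for k, v in sla.items())}
    if not eligible:
        return None                      # no infrastructure satisfies the SLA
    # Weighted score over the pre-defined KPIs
    return max(eligible, key=lambda name: sum(w * eligible[name].get(k, 0)
                                              for k, w in kpis.items()))

# Hypothetical use
providers = {"iaas_a": {"availability": 99.9, "throughput": 80},
             "iaas_b": {"availability": 99.5, "throughput": 95}}
best = select_iaas(providers, sla={"availability": 99.5},
                   kpis={"throughput": 1.0})    # -> "iaas_b"
```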
3.9. Automatic deployment.
This phase consists of automatically deploying the application on a Cloud infrastructure (IaaS).
In our methodology, maintenance is not an occasional service; it takes place in a continuous way. Maintenance includes both the changes made at any time by the business stakeholder to the expression of his or her needs and web service changes (adding, deleting or modifying features). In other words, the service discovery and composition phases do not stop. In the next section, we describe an architecture for Automatic Software Development as a Service.
Figure 1. Cloud-based Service-Oriented software development methodology.
We propose an example to illustrate our methodology. We want to implement a service that generates an itinerary of sightseeing tours for a chosen destination and period, depending on the weather. As a first step, we define (i) the functional
requirements and business rules, (ii) the SLA, (iii) the KPIs for IaaS selection for
deployment, and for PaaS selection for automatic development of services. Figure 2
shows the functional requirements of a process.
These rules must be respected: (i) if the weather is good, we will favor outdoor
tours; (ii) otherwise we will favor covered visits.
We put at the user's disposal a tool to create an interface that can, in this case, generate a questionnaire where the user enters the destination and the dates of the trip. Then, based on the KPIs and SLA, we proceed to the selection of a Cloud infrastructure for deployment. In parallel, we discover services: we search for a service S1 to estimate the weather for the given period and destination, a service S2 to identify the sights of the destination, and finally a service S3 to establish a visit itinerary. Service S3 was not found, so we model it and generate its code automatically on a previously chosen Cloud platform, selected according to the parameters (SLA, the pre-cited KPIs, and the PaaS APIs). These services are composed as they are discovered and developed, and the resulting application is then deployed on the preselected Cloud infrastructure, as sketched below.
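As an illustration of the composition, the example could be orchestrated as in the sketch below; the three callables stand in for the discovered (S1, S2) and automatically developed (S3) web services and are not an actual implementation.

```python
def build_itinerary(destination, period, s1_weather, s2_sights, s3_itinerary):
    """Compose S1, S2 and S3 into the sightseeing-itinerary application."""
    forecast = s1_weather(destination, period)        # S1: weather estimate
    sights = s2_sights(destination)                   # S2: sights of the destination
    # Business rule: favour outdoor tours in good weather, covered visits otherwise
    want_outdoor = (forecast == "good")
    preferred = [s for s in sights if s["outdoor"] == want_outdoor]
    return s3_itinerary(preferred or sights, period)  # S3: build the visit itinerary
```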
Figure 2. Functional requirements.
4. Automatic Software Development as a Service (ASDaaS)
ASDaaS is an automatic business process development environment. Its architecture
is illustrated in figure 3.
The project management service essentially takes care of two activities: (i) the creation of new projects, and (ii) the monitoring of existing projects.
Prototype as a Service allows the creation of a prototype from the needs expression and the interface creation. The features described by the business stakeholder are discovered by Discovery as a Service. For undiscovered services, PaaS Discovery as a Service offers the possibility of developing them on a specific PaaS platform according to: (i) the PaaS characteristics; (ii) the APIs offered by the PaaS; and (iii) respect for the SLA imposed by the client. Once the PaaS is selected, Automatic Development as a Service provides automatic development from models: source code is generated by a code generator that transforms models into code (the MDD approach), exploiting the APIs of the chosen PaaS and respecting the architecture imposed by the Cloud provider. The developed services are then published.
Composition as a Service composes the discovered and the developed services. Service discovery and composition are continuous processes that do not stop even after the deployment of the application, with a view to its ongoing improvement. A version control system (CVS) manages the different versions of the composition in order to allow backtracking, if necessary.
The IaaS for deployment is selected by IaaS Discovery as a Service. This selection depends on a number of parameters, such as (i) the characteristics of the different IaaS offerings, assessed against the previously defined KPIs; and (ii) respect of the SLA imposed by the client. The various prototypes corresponding to the various compositions are deployed on the preselected infrastructure. Deployment is done throughout the development process to allow stakeholders to perform tests on the resulting prototypes.
Figure 3. ASDaaS architecture.
5. Conclusion and future work
In this paper, we have described a methodology for Cloud-based collaborative soft-
ware development, and then presented the ASDaaS architecture. This architecture is
composed of eight services: Project Management, Prototype as a Service, Service
Discovery as a Service, Composition as a Service, PaaS Discovery as a Service, Au-
tomatic Development as a Service, IaaS Discovery as a Service, and Deployment as a
Service. These services collaborate to provide composite software based on discovered and developed web services. We believe that the ASDaaS paradigm will change the role of software developers, who will no longer have to worry about developing the functionalities required by the client, but rather about ensuring compliance with security settings.
In our future work, we will focus on providing a security layer in the Service Discovery as a Service. Then, we will use the QoS-based approach of Iordache and Moldoveanu [24] to select among web services that have the same functionalities, in order to enhance the results of the discovery engine.
References
[1] Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy H. Katz, Andrew Konwinski, Gunho Lee, David A. Patterson, Ariel Rabkin, Matei Zaharia: Above the Clouds: A Berkeley View of Cloud Computing. Technical Report No. UCB/EECS-2009-28. (2009).
[2] Bharat Chhabra, Dinesh Verma, Bhawna Taneja: Software Engineering Issues from the Cloud Application Perspective. International Journal of Information Technology and Knowledge Management, pp 669-673. (2010).
[3] Yi Wei, Blake, M.B.: Service-Oriented Computing and Cloud Computing: Challenges and Opportunities. Internet Computing, IEEE, pp 72-75. (2010).
[4] Peter Mell, Timothy Grance. The NIST Definition of Cloud Computing. National Institute of Standards and Technology. (2011).
[5] Jan S. Rellermeyer, Michael Duller, Gustavo Alonso. Engineering the Cloud from Software Modules.
CLOUD '09 Proceedings of the 2009 ICSE Workshop on Software Engineering Challenges of Cloud
Computing. pp 32-37. (2009).
[6] Muhammad Ali Babar, Muhammad Aufeef Chauhan. A Tale of Migration to Cloud Computing for
Sharing Experiences and Observations. SECLOUD '11 Proceedings of the 2nd International Work-
shop on Software Engineering for Cloud Computing. pp 50-56. (2011).
[7] Chunming Rong, Son T. Nguyen, Martin Gilje Jaatun. Beyond lightning: A survey on security challenges in Cloud Computing. Elsevier: Computers and Electrical Engineering. pp 47-54. (2012).
[8] Krzysztof Sledziewski, Bordbar, B.; Anane, R. A DSL-based Approach to Software Development and
Deployment on Cloud. 24th IEEE International Conference on Advanced Information Networking
and Applications. pp 414-421. (2010).
[9] Danilo Ardagna, di Nitto, E.; Mohagheghi, P.; Mosser, S.; Ballagny, C.; D'Andria, F.; Casale, G.;
Matthews, P.; Nechifor, C.-S.; Petcu, D.; Gericke, A.; Sheridan, C. MODACLOUDS: A Model-
Driven Approach for the Design and Execution of Applications on Multiple Clouds. Modeling in
Software Engineering (MISE), 2012 ICSE Workshop pp 50-56. (2012).
[10] MODAClouds: http://www.modaclouds.eu/.
[11] Hanu Kommalapati, William H. Zack. The SaaS Development Lifecycle. InfoQ:
http://www.infoq.com/articles/SaaS-Lifecycle. (2011).
[12] Tobin J. Lehman, Sharma, A. Software Development as a Service: Agile Experiences. SRII Global
Conference (SRII), 2011 Annual. pp 749-758. (2011).
[13] Radha Guha, Al-Dabass, D. Impact of Web 2.0 and Cloud Computing Platform on Software Engi-
neering. International Symposium on Electronic System Design. pp 213-218 (2010).
[14] Hailong Sun, Xu Wang, Chao Zhou, Zicheng Huang, Xudong Liu. Early Experience of Building a Cloud Platform for Service Oriented Software Development. Cluster Computing Workshops and Posters (CLUSTER WORKSHOPS). pp 1-4. (2010).
[15] http://www.auml.org/
[16] Mohammed AbuJarour, Felix Naumann, Mircea Craculeac. Collecting, Annotating, and Classifying
Public Web Services. Proceeding OTM'10 Proceedings of the 2010 international conference on On
the move to meaningful internet systems. pp 256-272. (2010).
[17] Joël Plisson, Nada Lavrac, Dunja Mladenic. A Rule Based Approach to Word Lemmatization. SiKDD multiconference, 12-15 October, Ljubljana, Slovenia. (2004).
[18] Cohen, W. W., Ravikumar, P., Fienberg, S. E. A Comparison of String Distance Metrics for Name-Matching Tasks. American Association for Artificial Intelligence (www.aaai.org). (2003).
[19] Lei Chen, Geng Yang, Dongrui Wang, Yingzhou Zhang. WordNet-powered Web Services Discovery Using Kernel-based Similarity Matching Mechanism. 2010 Fifth IEEE International Symposium on Service Oriented System Engineering. pp 64-68. (2010).
[20] Saouli Hamza, Kazar Okba, Benharkat Aïcha-Nabila, Amghar Youssef. Web Services Discovery, Selection and Ranking Based Multi-Agent System in Cloud Computing Environment. International Journal of Information Studies, 4, pp 123-147. (2012).
[21] http://www.cloudbus.org/cloudsim/
[22] The Aglets Users 2.0.2 Manual, Aglets Development Group. (2009).
[23] Wickremasinghe, B., Calheiros, R. N., Buyya, R. CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications, http://www.gridbus.org/reports/CloudAnalyst2009.pdf. (2010).
[24] Raluca Iordache and Florica Moldoveanu. A Conditional Lexicographic Approach for the Elicitation of QoS Preferences. On the Move to Meaningful Internet Systems: OTM 2012, pp 182-193. (2012).
A hybrid model for new product
development - a case study in the Brazilian
telecommunications segment
Odivany P. SALES a,1, Teófilo M. de SOUZA b,3 and Osíris CANCIGLIERI JÚNIOR a,2

a Professor in the Production Engineering Department at Pontifical Catholic University of Paraná (PUCPR)
b Professor in the Electrical Engineering Department at UNESP – São Paulo State University
Abstract. This paper presents a hybrid model for the integrated development of products, based on the Funnel and Stage Gate decision models. This hybrid model was designed in complete and simplified versions, with the main purpose of technologically supporting the process of developing new products according to the company's level of development maturity. The simplified model was applied to the development of a new product in a Brazilian telecommunications company through a case study; its results are described and analyzed, and recommendations for the continuation of the research are presented.
Keywords: Product Development, Stage Gate, Decision Funnel Model, Service
Development, Case Study.
1. Introduction
In recent years, the Brazilian telecommunications industry has experienced great evolution in the technological and marketing areas. The major changes undergone by this market can be credited to modifications in the competitive environment, where the opening of the segment and the privatization of large enterprises contributed significantly to the entry of new players in the region. Another contributing factor was the change in consumer behaviour, partly influenced by other market sectors where new products are launched with a focus on meeting consumers' needs and creating new opportunities for change in their lifestyle and consumption. In this scenario, a large number of new product launches has occurred simultaneously with a reduction in product life cycles [10, 9], which requires from companies constant
1 Ph.D. Research Student of the Graduate Program in Production Engineering and Systems (PPGEPS) at Pontifical Catholic University of Paraná (PUCPR), Rua Imaculada Conceição, 1155, Prado Velho, Curitiba, CEP 80215-901, PR, Brazil; Tel: +55 (0) 32711304; Fax: +55 (0) 32711345; Email: odivany@hotmail.com.
2 Professor in the Department of Production Engineering at Pontifical Catholic University of Paraná (PUCPR), Rua Imaculada Conceição, 1155, Prado Velho, Curitiba, CEP 80215-901, PR, Brazil; Tel: +55 (0) 32711304; Fax: +55 (0) 32711345; Email: osiris.canciglieri@pucpr.br.
3 Professor in the Electrical Engineering Department at UNESP – São Paulo State University, State of São Paulo, Brazil; Email: teofilo@feg.unesp.br.
updating and research into new releases, so that these companies remain a consumption option for their customers.
According to [2], in many industrialized countries and in more developed and mature segments, market share gains are costly and acquisitions of other companies do not usually work as expected. In this scenario, companies rely on their own internal processes to launch successful new products into the market and to meet the needs of their customers.
The development of a consistent methodology for launching new products should take into consideration the fulfilment of consumer expectations. For that, the whole process, from the conception of the idea through concept development and the filtering of the best development options, must be tightly controlled, and a project view should be established to improve the chances of success through the achievement of the pre-set objectives.
The definition of success in a design [7] includes, among others, the following factors: reaching the time and cost initially proposed, meeting the required performance or specification, and obtaining customer acceptance. Methodologies for product development that prioritize rapid decision making within the process can contribute to the launch of successful products. Reducing product development time, from the idea to its release, has become common practice within the market [6].
Having in mind the importance of decision-making processes within product development, this paper analyzes two models that address this issue from different perspectives. The Decisions Funnel is analyzed as a model for the initial stages of the design, where ideas must be prioritized in order to decide which product will be developed. The Stage Gate applies after the product definition and is used in the various stages of the design to ensure quality and minimal rework, so that the product development cycle is as short as possible. Considering these two models, it was possible to build a hybrid model in which the decisions funnel and the stage gate act in a coordinated manner within a design, aiming at the development of quality products in a short time that meet the expectations of consumers and of the internal customers of the process.
At the end of this article, a case study is presented to demonstrate the simplified hybrid model; the case study method was chosen because the issue is recent and this method seeks to explain a particular phenomenon in depth [15].
2. Literature Review
2.1 Decisions Funnel Method
Innovation, linked to the reduction of the product development cycle and to meeting consumer expectations, is a key factor for success in product launching. A clear definition of objectives at the beginning of the design is fundamental to an adequate decision-making process. Companies with a low level of maturity in product development do not focus on this type of definition [9].
In many cases and companies, the decision not to innovate in new product development is simply too risky. This type of decision can reduce the competitiveness of the company and make room for the growth of its competitors: it is not known which products competitors will launch, nor what the consumer market's reaction will be.
Thus, within the innovation process, in all stages of decision making, collaboration between the different areas of the company is vital to achieving satisfactory results [8]. According to [1], the design risk is reduced as decisions are taken. Therefore, in view of the need to reduce uncertainties in product development, the right decisions about what should be developed must be taken as soon as possible.
The cost of change tends to be higher as the design progresses [12]; when changes are implemented early in the design, they tend to be more easily assimilated, with respect both to costs and to the work team. Considering that only companies offering innovative, good-quality products developed in a short period of time will survive [5], the establishment of clear targets in the early design phases, so that strategic decisions can be taken, together with the formation of the design team and the identification of its interests, becomes a major factor within any methodology for product development.
In this scenario, this paper considers the use of the decisions funnel early in any new product development process, aiming at a consistent decision on what should be developed among all the possibilities presented by the design team. According to Baxter [1], out of every 10 ideas for new products, only 3 will be developed, 1.3 will become releases, and only one single idea will become a successful product (Figure 1).
Figure 1: Quantity of ideas that become successful products.
Source: adapted from [1].
Thus, considering all the costs involved in the development process and also the
time spent in this process, the ideal would be to eliminate the options without success
possibilities as soon as possible, that is, early in the process. At this time, the decision
funnel works to reduce uncertainties and to consolidate a clear idea of what needs to be
developed.
With the Decisions Funnel it is possible to move from an environment of deep uncertainty at the start of the process to an environment of controlled risk at the end of the funnel; that is, as decisions are taken within the process, development options are selected, concepts are defined, products are chosen or abandoned, and configuration details are clarified until, finally, a product is designed. This entire process is driven by a great quantity of decisions taken by the design team and by the company executives involved in product development (Figure 2).
Figure 2: Decision Funnel.
Source: adapted from [1].
Considering the decision funnel, the risk of failure in the product development process should be reduced as decisions are made; however, it is important to emphasize that it is not possible to develop something without risk. The very fact of doing nothing exposes the company to the risk of being overtaken by its competitors. The decisions funnel organizes the decision-making process so that it can be understood and executed with the support of the company and its executives. At the end of the process there is a decision aligned with the entire team.
2.2 Stage Gate Method
Since there is a correlation between product release speed and product quality [11], it is necessary to make the right decisions early in the process and to possess a coherent and organized product development process that ensures quality with little rework, in order to achieve a short product cycle.
Obtaining a consistent process is important, but insufficient if the company fails to reproduce it across its developments. In a constantly renewing market, it is not enough to have a single successful product; what is needed is a sequence of products that achieve success and meet the expectations of consumers. Generating great ideas is only half of the battle; the other half is taking the initial concept through product development to a successful launch [2].
Given an organizational environment that promotes the growth of good ideas and a decision-making culture that can select the ideas and concepts most appropriate to the strategic moment experienced by the company (the decisions funnel), the next step is to obtain a reliable process with clearly defined decision moments for conducting product development once the product to be developed has been defined. This process is called Stage Gate [4]: it comprises steps, and at the end of each one there is a moment of analysis and decision making aimed at approving the start of the next stage. Each step is called a STAGE (where the product development activities planned for that time are executed) and each analysis and decision-making moment is known as a GATE. In the standard model there are five stages and five decision points (Figure 3).
Figure 3: STAGE GATE.
Source: [4]
The stages in the Stage Gate model correspond to well-defined work stages within product development, and within each stage different areas of the company perform activities in multidisciplinary teams. The existence of previously defined stages does not prevent the company from adapting its process to take into consideration its culture and its segment. However, the decision-making model at the end of each stage must be followed. The Stage Gate is defined by the following stages:
- Scoping;
- Build Business Case;
- Development;
- Testing & Validation;
- Launch.
The Stage Gate model can be considered a process that guides the development of a new product from the idealization stage until its launch [3, 12]. The model includes a step for reviewing the results and benefits achieved by the design, known as the post-launch review.
Because the decision funnel acts strongly on decision-making strategy, the Stage Gate can act on the organization of the product development process through the execution of multidisciplinary activities in each step, with a subsequent Go / No-Go decision for the next step (a minimal sketch of this control flow is given below). In this view, the Stage Gate model helps even companies that already have a consistent methodology for managing new product development designs [11, 13, 14].
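The Go / No-Go control flow can be pictured as follows; the stage and gate callables are placeholders for the multidisciplinary activities and decision moments of the model, not a prescribed implementation.

```python
def run_stage_gate(stages, gates, project):
    """Execute STAGEs in order; each GATE decides Go / No-Go for the next one.

    `stages` is an ordered list of callables performing the work of one STAGE
    and `gates` the matching list of decision callables returning True for Go.
    """
    for stage, gate in zip(stages, gates):
        project = stage(project)    # execute the planned development activities
        if not gate(project):       # analysis and decision moment (GATE)
            return None             # No-Go: the design is stopped or reworked
    return project                  # all gates passed: the product is launched
```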
3. Proposed Model
In view of the need to reproduce successful new product launches that meet customer expectations with high quality while getting ahead of the competition, this article explores the use of a hybrid decision-making model that combines the decisions funnel (in the early stages of the design) and the Stage Gate. The hybrid model is customized for each company and fits the design management methodology used by the company; that is, it combines a consistent approach to strategic decision making in the selection of the product to be developed (funnel) with a process run in well-defined stages by multidisciplinary teams, including criteria for the transition to the next step (Figure 4).
Figure 4: Adaptation of the STAGE GATE to use with the funnel.
Source: adapted from [4].
From the use of the hybrid decision-making model, benefits are expected in terms of product development aligned with company strategy, designs running more easily across several areas, and less chance of completing designs that are useless for the organization [13, 14].
In the complete hybrid model, the funnel takes into account the validation of aspects related to innovation, product benefits and internal processes; within the STAGE GATE, the stage related to scope detailing is also used as a form of validation and of greater detailing of the design and expected results (Figure 5).
Figure 5: Complete Hybrid Model for product development.
Source: author
It is also possible to use a simplified hybrid model for the development of new products when the company has decided to develop a product whose processes and results are not very uncertain for the company, so that some simplifications can be made within the process.
In this research, the simplifications consist of removing the validations related to product innovation within the funnel and also removing the stage related to product scope detailing, given that this detailing had already occurred within the decisions funnel during the investigation of the benefits and internal processes of the telecommunications company studied (Figure 6).
Figure 6: Simplified Hybrid Model for product development.
Source: author
Depending on the complexity of the product being developed, it is up to the design team to decide that some steps of the hybrid model can be simplified or even omitted. A clear example is the development of a product that is an extension of a family of products that the company already sells. In this case, the aspects related to innovation are limited and the risks assumed by the company are known.
4. Case Study
The development of a product was analyzed within a company of the Brazilian telecommunications segment that supplies phone service and internet access; due to the product's low complexity, the simplified hybrid model described in this paper was applied, as follows:
a) Company description
The company studied in this research is a large company founded more than 10 years ago (over 5000 employees), active in most parts of Brazil, that sells telephone services, provides high-speed internet access and supplies a TV subscription service.
b) Product description
The studied product is oriented to the segment of internet consumers who need a high-speed connection and use bandwidth-demanding applications such as video, online games, real-time applications, etc. The product's distinguishing characteristic is entirely related to its connection speed. All applications added on top of the connection speed, such as antivirus, online backup and telephone support, are similar to those already available at lower speeds and previously released with great success by the company.
The expected benefits of launching this product are: improvement of the company's innovation image and its positioning as a company that cares about meeting customers' needs for high-speed internet access; an increase in profits derived from acquisition of the cable TV product (since, with new broadband speeds available, the customer may also become interested in other products of the company); and differentiation from its main competitors.
c) Applying the Decisions Funnel Model
Analysis of the stages of the decision-making process within the funnel:
1) Stage 1 – Innovation
In the case of the analysed product, the innovative nature is limited, since the company already offers other speeds. Thus, the main concern was related to quality aspects and not to innovation.
2) Stage 2 – Benefits
Considering that the benefits of the product must be measurable, an analysis was made of the expected new sales for this product and also of its attractiveness to consumers who are already customers of other broadband speeds. All analyses were presented to and approved by the company's management, to make clear the priority that this development should have among the more than 30 areas participating at some point in the process.
3) Stage 3 – Processes and Concepts
For this product, a low need for constructing new operational processes and procedures was verified, since it can be seen within the company as an extension of an already consolidated family of products. As the processes were well defined and worked well, a deep analysis in the STAGE GATE of the product development process was not required.
4) Expected Benefits (Decisions Funnel Model)
Among the expected benefits of using this decision-making process, the following can be mentioned: a) a reduction in the amount of design uncertainty, mainly because the product is an extension of the current family of the company's broadband products; and b) a reduction in the product development cycle: in previous designs this pre-analysis was not performed and there was no concern about how benefits would be measured, so the designs were constantly subjected to new analyses of commercial viability, which delayed their release.
d) Applying the Stage Gate Model
In the analysis of the present case with the simplified hybrid model, it was observed that the product's degree of maturity was high and did not demand intensive work in the detailed scope steps of the STAGE GATE model; that is, the model was initiated from the second phase (detailed specification and construction of the business case).
1) Benefits (STAGE GATE)
a) Participation of a multidisciplinary team throughout the product development;
b) Consistency in the approval of the process's continuation to the next development phase: the reduction of rework is evident, mainly due to the high degree of maturity achieved in the decisions funnel.
5. Conclusion
This paper presented an analysis of decision processes DECISIONS FUNNEL and
STAGE GATE within the context of new product development in a company from the
telecommunications segment. After the analysis of the decisions funnel, this model was
identified as suitable for use in the initial phases of the product development process
since its proximity to strategic definitions that need to be taken even before the
decisions that implicate in the expenditure of a large amount of financial resources are
taken. Next, the STAGE GATE model was analysed and it was recommended as the
most suitable for the tactical / operational development process after the approval of the
product development. The running the design by phases view and the use of an
approval model to proceed to the next step showed to be ideal for reducing rework and
for using parallel done by multidisciplinary teams.
In this context, this article proposed a hybrid model, in complete and simplified versions, in order to use the best of the two models presented above, with the purpose of developing products in a shorter time and with higher quality. The simplified hybrid model, the focus of the case study, showed itself to be adequate for the development of products with a modest degree of innovation (for example, a product that is an extension of an existing product family). In this model, some simplifications were made to shorten or eliminate steps unnecessary for a product of low complexity. Aspects of the detailed scope were considered in only one process phase, and the premise of this simplified model is that the other areas can have a clear idea of how the product should be developed.
The factors that initially motivated this work made it possible to highlight the benefits of using the simplified hybrid model in the development of a product related to the provision of internet connections for customers of a Brazilian telecommunications company. This model showed the possibility of reducing the development cycle, ensuring a sustainable competitive advantage for the companies that use it.
Improvement in product quality can also be achieved, since the hybrid model reduces rework and improves the running of designs with multidisciplinary teams. Regarding the future possibilities that this research can open, it is believed that some items may be further researched and detailed, such as:
a) The case study took into account development carried out in a single company of the telecommunications segment; however, as the number of companies within this segment is large, it would be important to apply this hybrid model in other companies within the same sector;
b) Another possibility would be to go beyond the telecommunications segment and verify the applicability of this model in other areas such as energy, food, education and so on;
c) The application of the complete and simplified hybrid models requires certain characteristics of the development, and it can take some time to differentiate them so that the best model can be chosen. It would be interesting and useful to provide more detailed information, such as checklists, giving a starting point as to which model would be most appropriate for a particular development.
References
[1] BAXTER, Mike. Projeto de Produto – Guia prático para o design de novos produtos. São Paulo: Editora Edgard Blücher Ltda, 2000.
[2] COOPER, Robert G. The Innovation Dilemma: How to Innovate when the Market is Mature. Journal of Product Innovation Management, v. 28, p. 2-27, 2011.
[3] COOPER, Robert G. The Stage-Gate Idea-to-Launch Process – Update, What's New, and NexGen Systems. Journal of Product Innovation Management, v. 25, p. 213-232, 2008.
[4] COOPER, Robert G. Winning at New Products – Accelerating the Process from Idea to Launch, 3rd ed. Addison-Wesley Publishing Company, 2001.
[5] DUHOVNIK, Jozef; ZARGI, Urban; KUSAR, Janez; STARBEK, Marko. Project-driven Concurrent Product Development. Concurrent Engineering: Research and Applications, v. 17, 2009.
[6] ELING, Katrin; LANGERAK, Fred; GRIFFIN, Abbie. A Stage-Wise Approach to Exploring Performance Effects of Cycle Time Reduction. Journal of Product Innovation Management, v. 30, 2013.
[7] KERZNER, Harold. Project Management: A Systems Approach to Planning, Scheduling and Controlling, 8th ed. Ohio: John Wiley & Sons, Inc, 2003.
[8] KESTER, Linda; GRIFFIN, Abbie; HULTINK, Erik Jan; LAUCHE, Kristina. Exploring Portfolio Decision-Making Processes. Journal of Product Innovation Management, v. 28, p. 641-661, 2011.
[9] KAHN, Kenneth B.; BARCZAK, Gloria; NICHOLAS, John; LEDWITH, Ann; PERKS, Helen. An Examination of New Product Development Best Practice. Journal of Product Innovation Management, v. 29, p. 180-192, 2011.
[10] KOTLER, Philip. Administração de Marketing – Análise, Planejamento, Implementação e Controle, 5a ed. São Paulo: Editora Atlas SA, 2009.
[11] MCNALLY, Regina; AKDENIZ, M. Billur; CALANTONE, Roger J. New Product Development Processes and New Product Profitability: Exploring the Mediating Role of Speed to Market and Product Quality. Journal of Product Innovation Management, v. 28, p. 63-77, 2011.
[12] PMI. PMBOK Guide: A Guide to the Project Management Body of Knowledge. Newtown, PA: Project Management Institute, 2008.
[13] SALES, Odivany; CANCIGLIERI JR, Osíris. O Modelo STAGE GATE dentro do Processo de Desenvolvimento de um Produto – Uma Análise Comparativa com o Desenvolvimento de um Produto de uma Empresa de Telecomunicações. 8º Congresso Brasileiro de Gestão de Desenvolvimento de Produto – CBGDP 2011, Porto Alegre, RS, Brasil, 12-14 setembro 2011.
[14] SALES, Odivany; CANCIGLIERI JR, Osíris. Proposta Conceitual para o Desenvolvimento de Produtos no Segmento de Telecomunicações – Uma Abordagem Comparativa Utilizando os Processos de Tomada de Decisões Funil e STAGE-GATE. XXXI Encontro Nacional de Engenharia de Produção, Belo Horizonte, MG, Brasil, 4-7 outubro 2011.
[15] YIN, Robert K. Estudo de Caso – Planejamento e Métodos, 4a ed. Porto Alegre: ArtMed Editora, 2010.
Improved Engineering Design Strategy
Applied to Prosthesis Modelling
Thiago GREBOGE a,1, Marcelo RUDEK a,2, Andreas JAHNEN b,3 and Osiris CANCIGLIERI JÚNIOR a,2

a Pontifical Catholic University of Paraná – PUCPR, R. Imaculada Conceição, 1155, CEP 80215-901, Brazil
b Public Research Centre Henri Tudor, Resource Centre for HealthCare Technologies (SANTEC), 29, Avenue John F. Kennedy, L-1855, Luxembourg
Abstract. Prosthesis design is a delicate and accurate engineering task that can be automated in the early steps before manufacturing. The challenge in creating the biomedical model is the complexity of the geometrical modelling, as we have to deal with natural shapes. The representation of human bones in terms of machining parameters is the bottleneck in the design of complex products, and concurrent procedures can aid in this task. This work addresses the design requirements for building an anatomical skull prosthesis piece in Computer-Aided Design (CAD) systems. A novel methodology based on ellipse adjustment has been investigated in order to define the manufacturing parameters. In geometric terms, an ellipse resembles the bone border shape in a Computed Tomography (CT) slice. The arc that fills the corresponding failure in the bone border is extracted from the ellipse adjusted to each CT slice, and the set of extracted arcs can be superimposed to define the stack of images from which a 3D CAD model is built. Evolutionary Algorithms were also applied to improve the quality of the generated data. A prototype was implemented with an open source Java-based tool (ImageJ) in order to create synthetic defects that simulate problems in the 3D virtual skull model. In the context of product development, this approach brings an essential integration between the design and manufacturing processes, reducing the elapsed time between the medical procedure, modelling and machining.
Keywords. Product development, computed tomography, human prosthesis,
geometric prosthesis modeling.
1. Introduction
Engineering principles in a concurrent context can be applied to product development in the medical area. Due to the relationship between the medical and engineering areas, collaborative tasks are necessary to improve development in all steps of production. In terms of engineering requirements, a prosthesis piece can be thought of as a complex product, and modelling is an essential phase. Skull
1 Ph.D. Research Student of the Graduate Program in Production Engineering and Systems (PPGEPS) at Pontifical Catholic University of Paraná (PUCPR), Rua Imaculada Conceição, 1155, Prado Velho, Curitiba, CEP 80215-901, PR, Brazil; Tel: +55 (0) 32711304; Fax: +55 (0) 32711345; Email: tgreboge@gmail.com.
2 Professor in the Department of Production Engineering at Pontifical Catholic University of Paraná (PUCPR), Rua Imaculada Conceição, 1155, Prado Velho, Curitiba, CEP 80215-901, PR, Brazil; Tel: +55 (0) 32711304; Fax: +55 (0) 32711345; Email: marcelo.rudek@pucpr.br, osiris.canciglieri@pucpr.br.
3 Researcher at the Public Research Centre Henri Tudor, Resource Centre for HealthCare Technologies (SANTEC), 29, Avenue John F. Kennedy, L-1855, Luxembourg.
prosthesis has been receiving special attention because it deals with aesthetic and functional problems, and these issues define the level of possible restrictions in machining.
Nowadays the literature demonstrates a gradual evolution of methods for bone modelling and anatomical prosthesis conception. Different methods have been tried, as in [1-9], where the objective is to develop a tool to aid the manual development of prostheses, given the level of difficulty and the limitations imposed by artisanal production, as exposed by [10].
There is a natural difficulty with production requirements and process organization, because a prosthesis is a customized product for each individual problem rather than a serial production. A facilitation was proposed in [5,11], based on the concept of adjusting ellipses to the skull slices. An ellipse can be found in an intuitive and natural way using the superformula equation described by Gielis [12] in his study of leaf shapes in plants; this concept was adapted to skull modelling by [9]. The skull bone curvature registered in a Computed Tomography (CT) scan has a roughly circular form with variations among the tomography slices. In this approach, the basic concept is to adjust ellipses with different parameters to each bone border in all CT slices. The adjusted ellipse is a facilitator in the process, because it is possible to derive a shape descriptor from the ellipse parameters. This shape descriptor can be exported to a CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) environment as a step prior to machining.
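For concreteness, the sketch below evaluates the superformula of Gielis [12]; with m = 4 and n1 = n2 = n3 = 2 it reduces to an ellipse with semi-axes a and b, the shape adjusted to the skull border. The semi-axis values only echo the pixel ranges quoted later in the text and are not taken from a real CT slice.

```python
import math

def superformula(phi, a, b, m, n1, n2, n3):
    """Radius of the Gielis superformula at angle phi."""
    t = m * phi / 4.0
    return (abs(math.cos(t) / a) ** n2 + abs(math.sin(t) / b) ** n3) ** (-1.0 / n1)

# Elliptical special case: m = 4, n1 = n2 = n3 = 2
contour = []
phi = 0.0
while phi < 2 * math.pi:
    r = superformula(phi, 120.0, 90.0, 4, 2, 2, 2)
    contour.append((r * math.cos(phi), r * math.sin(phi)))
    phi += 0.001                      # same angular step used later in the text
```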
The main problem with this approach is that several ellipses with shapes similar to the skull bone border can be created for the same CT slice, differing only by slight displacements in their parameters. A selection rule is necessary as a decision factor, and existing techniques based on evolutionary algorithms, such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and Harmony Search (HS), discussed and exemplified in [9] and [11], have proved promising in the generation of descriptors of transversal sections of the skull.
It is also true that some skull problems could be solved by symmetry, where one side of the skull (the good side) is mirrored to cover the failure on the opposite side using image processing techniques; however, failures in the frontal region, for instance, cannot be solved by mirroring. The proposal here overcomes this limitation and produces a general-case application.
In this process, CAD modelling is an essential tool in concurrent engineering terms, because medical requirements need to be correlated with engineering procedures. The specialized methodology proposed in [11], based on the Elliptical Adjustment Algorithm (EAA), was built to link them. This algorithm is applied as a possible concurrent engineering strategy that intersects the fundamental steps of prosthesis modelling.
2. Proposed Method
A previous method proposed in [5] described some essential stages in prosthesis modelling, as suggested in figure 1. Some improvements over it were made in the generation of the 3D model. Basically, the improvement in engineering terms is in the field of software requirements strategies for designing the prosthesis piece.
Figure 1. Overview of the modelling process context, adapted from [5].
As shown in the context overview in figure 1, the method comprises three main parts:
• Acquisition of Tomography Images from DICOM files.
From a computed tomography device, it is possible to apply computing techniques to map the patient's body structure into images. The images are transferred and stored in DICOM format [13]; this standard defines a medical communication protocol and a file format including defined metadata. It is used for medical intercommunication inside and outside the clinical environment. The aim of this model is to keep the information standardized among manufacturers, covering the stored image format, data compression, information about examination procedures, and the resolution parameters between images with the respective aspect ratio of the physical information. The images and their dimensional information are extracted to be used as input data for the virtual model creation.
• Generation of the Virtual Prosthesis Model in 3D.
The second part of the process is to build the 3D model of the prosthesis piece. This phase has three distinct steps:
i. The selection of defective CT slices: in this step, the proposed method identifies and separates the group of defective slices. Those having an interrupted bone contour are selected as input data for the next stage;
ii. The application of the Ellipse Adjustment Algorithm (EAA): this is the kernel of the proposed method and deals with the representation of the bone border shape by an elliptical descriptor;
iii. The generation of the 3D model of the prosthesis piece: this deals with exporting the data and the 3D reconstruction in CAD.
• Machining of 3D Model.
The last step is the machining. All data from the previous step must be prepared to build the customized prosthesis as a real product. This phase involves various medical requirements, which are not addressed at this moment.
The generation of the virtual prosthesis model is the main objective of this research, and the steps of the second phase are detailed in the following text.
3. Generation of Virtual Prosthesis Model
3.1 Proposition of problem
A hypothetical condition, as presented in figure 2.a, can be used to simulate a problem in the skull. An application developed with the Java-based tool ImageJ [14] permits the creation of a synthetic failure directly in the 3D visualization. This is an important condition, because this previously known information can be used to evaluate the method, and different cases of skull repair can be created for study.
Figure 2. (a) ImageJ interface for 3D visualization and the respective simulated failure region. (b) Skull slices viewed in 3D and the missing bone region.
The main question is to find a way to fill the defective region automatically. The design strategy adopted is to decompose the 3D view into 2D representations of the defective area. The best way is to use the original CT slice information to fill the contour of each separate slice, and afterwards rebuild the complete 3D information for prosthesis modelling. This strategy is intended to be linked with manufacturing procedures during the engineering phases.
The adopted approach is based on the adjustment of ellipses, as proposed in [5], because that method was conceived to provide the linkage with the 3D modelling and machining steps.
3.2 Proposition of the Ellipse Adjustment Algorithm (EAA)
The objective of the Ellipse Adjustment Algorithm (EAA) is to find an ellipse capable of adjusting itself to the skull border. As discussed in [9], some CT slices near the middle of the skull resemble an ellipse. The EAA uses this principle to create a synthetic contour with a shape similar to the internal and external skull borders for each CT slice that contains a failure. The EAA has two main parts: first, the failure position identification, and second, the estimation of the ellipse parameters based on an optimization method.
3.2.1. Failure Position Identification
For each CT slice, a fundamental step is finding the failure position. The Region of Interest (ROI) is delimited by the discontinuity of the skull border. The position where the skull edge is interrupted must be identified in order to define the limits of the solution space. The solution space is the arc that fills the incomplete region of the CT slice. Figure 3.a represents a polar mapping of the bone border coordinates and the respective dashed solution arc.
Figure 3. (a) Edge point positions in the polar coordinate system. (b) Representative scheme, in histogram format, of the radius (r) mapping and the failure position identification (ROI – Region of Interest) for the inner skull edge.
This example shows the inner edge of the skull. Each point on this edge can be mapped by a radius (r) and its respective coordinate (x, y) on the edge. For convenience, the initial angle θ = 0 rad is defined pointing straight down from the centre to the base of the skull edge. In the figure, this starting position is denoted cp1 (contour point 1), and all other contour points are named cpi, with i = 1,...,n, where n is the total number of pixels on the edge. The sequence of cpi values may be interrupted if a gap occurs in the border. If there is an interruption in the border continuity, there is a failure in the respective skull slice, and the r value at those positions tends to infinity (r → ∞). The set of points along the region of missing pixels is denoted dpi (disconnected points), with i = 1,...,m, where m depends on the size of the failure. Applying the variation of θ with a step length Δθ (for example 0.001 rad) over the range [0 rad < θ < 2π rad] creates the set of cpi points, which contains the subgroup of dpi points. Without loss of generality, the dpi values can be assigned r = 0, because at those positions no border pixels exist and the radius is null.
Figure 3.b shows a scheme, in histogram format, of the radius (r) mapping and the
respective failure position identification (ROI, Region of Interest). With this approach
it is possible to gather the exact coordinates (respective θ) of the break edge points.
This process must be repeated for all CT slices. If the dpi set is empty, there is no
interruption (failure) in the respective slice. Using this approach, we can
select only the defective slices among all other CT slices of the exam.
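A minimal sketch of this screening step is given below, assuming the border pixels of one slice and a centre point are given; the helper name, its signature and the min_gap noise threshold are illustrative assumptions, not the authors' implementation:

import numpy as np

def find_failure_roi(border_xy, centre, step=0.001, min_gap=0.05):
    """Polar mapping of one slice's border pixels (section 3.2.1).
    Returns the angular limits of the largest empty run of bins, or None.
    (Wrap-around at 0 rad is ignored for brevity.)"""
    dx = border_xy[:, 0] - centre[0]
    dy = border_xy[:, 1] - centre[1]
    angles = np.mod(np.arctan2(dy, dx), 2 * np.pi)    # angle of each cp_i
    nbins = int(round(2 * np.pi / step))
    counts, edges = np.histogram(angles, bins=nbins, range=(0.0, 2 * np.pi))
    empty = counts == 0                               # dp_i bins: r treated as 0
    # the longest run of consecutive empty bins is the candidate ROI
    best_len, best_start, run_len = 0, 0, 0
    for i, e in enumerate(empty):
        run_len = run_len + 1 if e else 0
        if run_len > best_len:
            best_len, best_start = run_len, i - run_len + 1
    if best_len * step < min_gap:                     # tiny gaps: pixel noise only
        return None
    return edges[best_start], edges[best_start + best_len]

Slices for which the function returns None would be skipped, mirroring the screening described above.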

3.2.2. Ellipse Adjustment Definitions

Figure 4 shows an example of a CT slice with an incomplete bone region, and the
respective parameters that can be found by the Ellipse Adjustment Algorithm (EAA).
The parameters to be found are the minor (a) and major (b) axis lengths, and the centre
point coordinate P(x0, y0) for both bone borders. Those parameters correspond to
two ellipses, the inner (E1) and the external (E2) one.

Figure 4. Identification of parameters used in the Ellipse Adjustment Algorithm.

Also in figure 4, a limited range is defined to restrict the set of values between the
lower and upper bounds. This range limits the values of the coordinates of the ellipse
centres (P) and the respective allowed variations of the a and b lengths. This initial setup is applied
in order to reduce the processing time. The conditions are defined in equation 1 for the
ellipse radii rk and in equation 2 for the centres Pk, for k possible ellipses with k = 1,…,
kmax, where kmax is the number of desired iterations. These values represent the limits
for the internal E1 and external E2 sets of parameters.

(1)

(2)
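The bodies of equations (1) and (2) did not survive the text extraction; a plausible reconstruction, assuming they simply express the box limits described in the next paragraph (the original notation may differ), is:

a_{\min} \le a_k \le a_{\max}, \quad b_{\min} \le b_k \le b_{\max} \tag{1}

x_{0,\min} \le x_{0,k} \le x_{0,\max}, \quad y_{0,\min} \le y_{0,k} \le y_{0,\max} \tag{2}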
Based on several simulations, the lower and upper values of the range can be defined
within some limits, for example ±10 pixels (in both x and y) for the axis sizes, and ±15 pixels
(in both x and y) for the variation of the central point position. Through
experimentation, the conditions are that the a value is between 120 and 200, and the b
value is between 90 and 200. The coordinates of the centre of the image will be
between 240 and 275. The unit of measurement is pixels. If necessary, these range values
can be changed according to size modifications of the image or to particular cases (for
example, in the slices closer to the top of the skull).

3.2.3. Optimization to Ellipse Adjustment

Slight changes in the values of these parameters generate many possible
arcs, and the most feasible answer must be found. The main objective is to find the
best ellipse that fits the skull border for each CT slice. To perform this, we need to find
the best values of a, b, x0, y0 in order to obtain both ellipses E1 and E2 as similar
as possible to the original inner and external border edges. Equation 3 shows the
polar formulation used to create an ellipse.


(3)
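The formula body was lost in extraction; a standard polar form for an ellipse with semi-axes a, b centred at P(x0, y0), consistent with the parameters above (a reconstruction, not necessarily the authors' exact notation), is:

r(\theta) = \frac{a\,b}{\sqrt{(b\cos\theta)^2 + (a\sin\theta)^2}}, \qquad x(\theta) = x_0 + r(\theta)\cos\theta, \quad y(\theta) = y_0 + r(\theta)\sin\theta \tag{3}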
By changing the values of a, b, x0, y0 (as represented in figure 4) we can create an
ellipse to be superimposed on the original contour. The problem is to find the best
combination of those parameters. An objective function is proposed for the evaluation of
both ellipses E1 and E2. Equation 4 shows the fitness function F, whose objective is
to minimize the distance between the original edge and the respective created ellipse.
This is a reformulated equation based on [5,9,11]. This improved approach permits
measuring the distance between a pixel of the generated ellipse E and its corresponding
pixel on the original contour. The F value can be evaluated for each possibility, and the
best one identifies the corresponding values of a, b, x0, y0 that are closest to the
original edge.


(4)
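The body of equation (4) was likewise lost in extraction; given the symbol definitions in the next paragraph (n sampled border pixels, ellipse pixels (xE, yE), contour pixels (xC, yC)), a plausible reconstruction of the fitness is the summed Euclidean pixel distance:

F = \sum_{i=1}^{n} \sqrt{\left(x_{E,i} - x_{C,i}\right)^2 + \left(y_{E,i} - y_{C,i}\right)^2} \tag{4}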
The fitness F must be evaluated for each ellipse E (internal and external) in each
slice k, using the information about the pixel positions. In this definition we have xE
and yE as the pixel coordinates of the generated ellipse E1 or E2, and xC, yC as the
coordinates of the pixels on the respective original CT contour. The evaluation is
performed for both the internal and external ellipses and contours. An important parameter
to define the quality of the adjustment is the value n, which represents the number of border
pixels used. If n assumes the total number of pixels of the contour, the
processing time cost is high. Experimentation shows it is not necessary to use all border pixels,
and n can assume smaller values to avoid long processing times. Also, by observation,
some more interesting points can be selected from the contour. For example, the
points nearest to the ROI have more influence on the result. Thus, the coordinates of the
selected contour pixels produce the same ellipse solution, and they can be chosen based
on the histogram position (figure 3.b).
Due to the many possible values assumed by the ellipse parameters, an
optimization method is necessary to estimate the best of them. The investigation in [11]
shows that Genetic Algorithms (GA) or Particle Swarm Optimization (PSO) can be
applied to this matter. A generic formulation of the optimization algorithm to
evaluate the value of F has the following general guidelines:

1. Initialize the parameters a, b, x0, y0 and also the length (n) of the solution vector;
2. Execute the analysis of the initial set of values, evaluating the fitness F;
3. Update the possible solution following the rules of the optimization method to generate a new set of testing values;
4. Evaluate the new fitness value. Update the best stored solution if the new value is better than the previously stored one;
5. If the maximum number of iterations is not achieved, return to step 3.
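These guidelines map directly onto any perturbation-based optimizer. A minimal sketch under stated assumptions follows (a plain random-perturbation search rather than the GA/PSO variants of [11]; fitness stands for equation 4 and bounds for equations 1-2, both assumed supplied):

import numpy as np

def adjust_ellipse(fitness, bounds, max_iter=1000, sigma=2.0, seed=None):
    """Steps 1-5 above: propose (a, b, x0, y0), keep the best fitness F.

    fitness : callable mapping (a, b, x0, y0) -> F (equation 4).
    bounds  : [(lo, hi)] * 4, the limits of equations 1 and 2, in pixels.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best = rng.uniform(lo, hi)            # step 1: initialise parameters
    best_f = fitness(*best)               # step 2: evaluate initial set
    for _ in range(max_iter):             # step 5: iterate until budget spent
        cand = np.clip(best + rng.normal(0.0, sigma, 4), lo, hi)  # step 3
        f = fitness(*cand)                # step 4: evaluate, keep if better
        if f < best_f:
            best, best_f = cand, f
    return best, best_f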
With the best F found, the corresponding values of a, b, x0, y0 can be stored for
each evaluated CT slice. These values define the most appropriate ellipse that
subscribes the real bone edge. After all ellipses are found, these values can be exported to
the CAD system and the ellipses can be created in that environment. A file in ASCII
coding can be used to export the ellipse values to the CAD system. Data
stored in .txt files have a commonly accepted format, where data can be organized in
a table form using a known character as separator. This avoids incompatibility
problems among different software packages and versions. This file might be operated through
intervention of the user, or handled by a script of instructions in a plug-in added into
the CAD menu commands.
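A sketch of such an export is shown below; the file name, semicolon separator and column order are illustrative assumptions rather than the authors' specification:

import csv

def export_ellipses(path, ellipses):
    """Write one row per CT slice: k, a, b, x0, y0 (semicolon-separated).

    ellipses : iterable of (k, a, b, x0, y0) tuples from the EAA.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, delimiter=";")
        writer.writerow(["k", "a", "b", "x0", "y0"])   # header row
        writer.writerows(ellipses)

# e.g. export_ellipses("ellipses.txt", [(1, 150.2, 120.7, 256.0, 251.3)])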
4. Application of Method and Discussion

The example of the skull problem addressed in figure 2.b is used for
demonstration. A total of 26 CT slices were extracted from the DICOM file, and the k=20
intermediate slices were used because those are the slices closest to the failure position.
Figure 5 shows a sampled group of CT slices from the skull failure.

Figure 5. An example group of CT sample slices from the testing skull.

The CT slices with a respective open border were processed by the EAA, and the
values of the ellipse parameters (a, b, x0, y0) for each k = 1,…,20 are presented in table 1.

Table 1. The parameters found for each adjusted inner ellipse.

From table 1, we have all dimensional information about each adjusted ellipse
applied to the internal border. Note that k represents the number of tested slices. By
convention we use k to represent the sequence of slices and z as the range
coordinate that carries the distance (in mm) between slices. The transformation k→z
depends on the pixel-to-mm relationship defined in the DICOM file. The FE1 values are the
fitness results for each slice k applied to edge E1. As the objective of F is to measure the
smallest distance difference between the generated ellipse and the bone border,
small values of F represent the best adjustment. In table 1, the slice k=8 has the best
adjustment (F=31) and the worst are the slices k=13 and k=14 with F=69.

Figure 6 shows a representation of the generated ellipses after application of the EAA, and
the images present the influence of edge thickness. For instance, in figure 6.a the
thickness on the left is larger than on the right and an unbalanced positioning of E1 occurs. In this
case the inner surface of the skull differs from a symmetry viewpoint, and the right side of
L does not represent the natural behaviour of the bone in that region. The high fitness
value F=69 obtained proves the difficulty for the EAA to find the best solution. Otherwise,
figure 6.b shows the similarity between left and right thickness. A good proportion
of the thickness measure performs a better adjustment for ellipse creation, as proved by
the fitness value F=31. Even with existing differences from the original characteristics
of the surface, the individual ellipses can be superimposed to generate a 3D
representation of the volumetric model, as shown in figure 7.

Figure 6. Example of adjustment solution after EAA processing. (a) Solution to slice k=8; (b) solution to slice k=13.

Figure 7. (a) Top view of the generated ellipses superimposed. (b) Corresponding arcs extracted from each ellipse.
Figure 7.a presents all ellipses obtained for each CT slice from the image of figure
2. The respective centres are denoted by P = {(x1, y1, z1), (x2, y2, z2), …, (xn, yn, zn)}
where n = 20 CT slices tested. The symmetry line (L) defines two separated hemispheres
to identify the cut positions in the ellipses. The cut positions in the ellipses are the same as the
interruption positions in the original bone edge, extracted by min(dpi) and max(dpi).
These dpi coordinate values are obtained from the histogram of positions (as in figure
3.b) and define the arcs of the ROI. The complementary part of the ellipse outside the ROI is
discarded. Figure 7.b shows the extracted arcs from ellipses E1 to E20 and the respective
centres C1 to C20. These arcs are the solutions that represent the missing region on
the skull, as an example for the inner borders.

The same procedure is performed for both edges (internal and external) and all arcs
can be superimposed to build a 3D view of the prosthesis in CAD, as in figure 8. Figure
8.a shows the superimposed arcs obtained from each best ellipse. With the ellipse
parameters, a geometric model can be built in the CAD system for 3D visualization, as in
figure 8.b. The 3D model is an important pre-machining step, because the piece can be
evaluated together with all medical requirements. Depending on the resulting 3D model, a
new configuration of parameters can be submitted to the algorithm to improve the shape of the
piece.

Figure 8. (a) Superimposed solution found after ellipse adjustment and (b) its corresponding 3D surface.

A geometric model can be built in a CAD system to perform the visualization of the
skull shape and the prosthesis piece as well.

Figure 9. (a) Reconstructed defective skull. (b) The hole filled by the prosthesis.
Figure 9.a shows the 3D defective skull reconstructed in CAD. There is
a large failure region to be filled, which occupies the lateral and frontal sides. The prosthesis
piece was superimposed on the skull, and the resulting image is presented in figure
9.b.
The quality of the adjustment is better for small failures. For large failures, as in the presented
example, some junction problems are more visible, as in figure 9.b. There is a junction
difference between the skull and the prosthesis around the entire piece border, as
signalled by number (1). This is caused by the arc points being totally flat, without the
surface details. In addition, the light and shadow positions enhance the
discontinuity. The regions pointed to by numbers (2) and (3) show a difference
between the skull base and the first arc (from the first ellipse Ek=1). In the same way, there is no
interpolation at this joint position and the difference is apparent as well. By analysing the
images and parameters, some specific new improvements are necessary, and they will be
provided in the next steps of this research.
5. Conclusion
This paper presented an improvement to a new technique for skull border
modelling. The approach is based on the ellipse adjustment algorithm (EAA), and the
method demonstrates that it is possible to build a virtual model of a prosthesis using the
CT slices taken directly from the medical exam file. The presented example shows that the
method is feasible for large defective areas in the skull, in particular when the failure
cannot be solved by symmetry. When the problem in the skull is asymmetric, there is not
enough information to rebuild the failure region, and the EAA provides a way to
generate the missing information. The link between problem identification and the early
steps before machining is well defined in the method. A CAD system is a necessary tool in
the middle of the process to perform the connection between the bone shape
characterisation and the definition of machining parameters. As shown in the example,
there are still some open points to be solved, which constitute a large field of research for new
improvements. The method is still evolving, and the next step is to
experiment with an interpolation method between the extreme points of the arc and the skull break
positions to smooth the discontinuity. After this, the prosthesis piece might be better
evaluated by a specialized doctor before the implementation of an automatic manufacturing
procedure.
6. References
[1] Aquino, L. C. M., Giraldi, G. A., Rodrigues, P.S.S., Lopes Jr, A., Cardoso, J. S., Suri, J. S.;
Surface Reconstruction and Geometric Modeling for Digital Prosthesis Design. Multi-
Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies,
cap 8, (2011), 187-225.
[2] You, F., Hu, Q., Yao, Y.; Lu, Q.; A New Modeling Method on Skull Defect Repair, IEEE
International Conference on Measuring Technology and Mechatronics Automation, 2009.
[3] Lee, S.-C., Wu, C.-T., Lee, S.-T., & Chen, P.-J.; Cranioplasty using polymethyl methacrylate
prostheses. Journal of Clinical Neuroscience, 16, (2009), 56-63.
[4] Chen, M. X., Guan T. M., Shan L. J., Digital Design of the Customized Cranial Prosthesis.
IEEE 2nd International Conference on Information and Computer Science, Wuhan, China
(ICIECS), (2010), 1-4.
[5] Canciglieri Jr., O., Rudek, M., Greboge, T., A Prosthesis Design Based on Genetic
Algorithms in the Concurrent Engineering Context. In: ISPE Concurrent Engineering 2011
- CE2011, Boston, v. 1, (2011), 12-24.
[6] Huang, G. Y., Shan, L. J., Research on the Digital Design and Manufacture of Titanium
Alloy Skull Repair Prosthesis. IEEE 5th International Conference on Bioinformatics and
Biomedical Engineering, Wuhan, China (ICBBE), (2011), 1-4.
[7] Saldarriaga, J. F. I., Vélez, S. C., Posada, A. C., Henao, B. B., Valencia, C. A. T., Design and
Manufacturing of a Custom Skull Implant. American J. of Engineering and Applied
Sciences, v. 4, n. 1, (2011), 169-174.
[8] Jin, G. Q., Li, W. D., Gao, L., An adaptive process planning approach of rapid prototyping
and manufacturing. Robotics and Computer-Integrated Manufacturing, v. 29, n. 1, (2013),
23-38.
[9] Rudek, M., Canciglieri Jr., O., Greboge, T., A PSO Application in Skull Prosthesis
Modelling by Superellipse. Electronic Letters on Computer Vision and Image Analysis
12(2) in proof, (2013),1-12.
[10] Francesconi, T., Proposta metodológica para modelagem geométrica a partir de imagens
médicas. Dissertação de Mestrado, Programa de Pós-Graduação em Engenharia de
Produção e Sistemas (PPGEPS), Curitiba, PR, Pontifícia Universidade Católica do Paraná,
master dissertation (in Portuguese), 2008.
[11] Greboge, T., Rudek, M., Canciglieri Jr., O., Geometric Prosthesis Modelling to Skull
Repairing Using Artificial Intelligence Methods. International Conference of Computers &
Industrial Engineering (CIE41), Los Angeles, USA, 2011.
[12] Gielis, J., A Generic Geometric Transformation That Unifies a Wide Range of Natural and
Abstract Shapes, American Journal of Botany, v.90, (2003), 333-338.
[13] (DICOM) Digital Imaging and Communications in Medicine Part 5: Data Structures and
Encoding, National Electrical Manufacturers Association, 17th Street Rosslyn, Virginia
22209 USA, 2011.
[14] Schindelin, J., Carreras, I. A., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., Preibisch, S.,
Rueden, C., Saalfeld, S., Schmid, B., Tinevez, J.-Y., White, D. J., Hartenstein, V., Eliceiri,
K., Tomancak, P. and Cardona, A., Fiji: an open-source platform for biological-image
analysis, Nature Methods 9(7), (2012), 676-682.
[15] Lin, L., Zhang, J., Fang, M., Modelling the bioscaffold for repairing symmetrical and
unsymmetrical defective skull. In Bioinformatics and Biomedical Engineering, ICBBE
(2008), 905-908.


Understanding the Customer Involvement
in Radical Innovation
Danni Chang a,1 and Chun-Hsien Chen a,b
a School of Mechanical and Aerospace Engineering, Nanyang Technological University,
50 Nanyang Avenue, Singapore 639798, Singapore
b Logistics Engineering School, Shanghai Maritime University, 1550 Haigang Avenue,
Shanghai 201306, PR China
Abstract. This study aims to identify the factors fostering radical innovations
during new product development (NPD), and to investigate the importance of
customer involvement. Based on an analysis of a large body of relevant
research, the following hypothesis is established: H1, the customer is not the most or the
only significant factor affecting radical innovation performance in NPD. To test
H1, an interactive multiple regression model is adopted to detect the impacts of
innovation-related factors. Through calculation and comparison, the results
show that radical innovations are more sensitive to professional consulting
institutions, such as consultants, commercial labs or private R&D institutions,
than to customers, clients or end users. Based on the analysis, the conclusion is
drawn that firms should properly distribute their research focus across all important aspects,
and carefully control customer involvement so as to achieve the best benefits.
Keywords. Customer involvement; Radical innovation; Hypothesis testing;
Interactive multiple regression model
1. Introduction
Nowadays, the market is dynamic and constantly changing. Without adaptability to
outside challenges, enterprises will eventually fail in competition. Product innovation
seeks solutions for product improvements through the
creation and introduction of a good that is either new or improved on previous goods.
In particular, radical innovation, which is new to the market, offers more chances to be
innovative and competitive. Considering the key factors of successful product
innovation, customers occupy a crucial position. For radical innovation, careful
consideration is needed to account for consumers' resistance to novelties.
Normally, customers are assumed to be beneficial to product innovation. Various
studies have demonstrated that customers, rather than manufacturers, often serve as
the idea generators and initial developers of products that later become commercially
significant (Enos 1962; Freeman 1968; Shaw 1985; von Hippel 1988; Lilien et al.
2002). It is revealed that customers help firms face changing market
conditions and survive in a competitive market environment. Therefore, much
attention has been placed on customers in related research, and firms
heavily rely on customers in design and innovation activities (e.g. participatory design,

1 Danni Chang, Email: dchang1@e.ntu.edu.sg
customer co-creation). Moreover, the openness of enterprises to customers or external
sources has become an important predictor of innovation performance
(Huang & Rice, 2012). It appears that more customer involvement and wider relevant
cooperation with customers may induce better innovation solutions.
However, further investigation is lacking on whether more participation of
customers will bring out more innovative or radical ideas. In most cases, the
engagement of customers in the innovation process means a great consumption of
resources in terms of time and effort (Lilien et al. 2002). Furthermore, the quality of
customer involvement cannot be ensured, owing to the great difficulties in the selection of
customers and the identification of customers' qualifications and intentions. If customers
are involved improperly, the information provided by them may be invalid. These
problems have been noticed by some researchers (e.g. Brockhoff, 2003). Although
some viewpoints have been stated about the disadvantages of customer involvement, there is still
a lack of sufficient study disclosing the relationships between customer
involvement and innovation performance from a quantitative perspective.
In this paper, a study based on innovation survey data is presented, attempting to
identify the influence of customer involvement on radical innovation performance in
NPD. The remainder is organized as follows: existing work is introduced in section 2,
where problems are uncovered and the hypothesis is established accordingly. In
section 3, the proposed models are explained in detail. Results are given in section 4
with specific illustrations and discussions. According to the results, conclusions are
derived in section 5. Finally, the limitations of this work are analyzed and future
work is outlined in section 6.
2. Literature review and Hypothesis development
In this section, relevant research is presented mainly from the perspective of customers
in product innovation. Based on existing research, two problems are uncovered, and
one hypothesis is developed accordingly to lay out our research focus.
2.1. Literature review
As the vital design participator, customers have become the significant factor of
product innovation (Ngo and O'Cass, 2012; Szainfarber, et al., 2010). It has been
demonstrated that the degree of customer satisfaction determines the success of a new
product. Therefore, various innovation models are proposed based on the assumption
that the customer is the starting point and the ending point of a design process.
However, customers do not always positively facilitate product innovation. For
instance, customer involvement in product innovation is synonymous to a considerable
amount of resource investment (Lilien, et al., 2002). To control the investment within
the competence of a company, the scale of customer involvement should be ensured
with a good balance between the cost and expected benefits.
Radical innovation is high-degree innovation with new features or functions.
Brockhoff (2003) stated that higher degrees of innovation demand more careful
management, which indicates that deliberate attention should be paid to customer
participation in radical innovation. However, there are no sufficient studies
emphasizing this problem.
On the other hand, the selection of customers who are actually able to contribute to
new product development is, in practice, very challenging (Brockhoff, 2003). It cannot
be guaranteed that the right partner will be found, and the consequences of a poor
collaboration can be harmful. Particularly for radical innovation, the confidentiality
issue is very important. Nevertheless, customers have no immediate responsibility towards
design projects, and wrong participation or information disclosure, which impairs
the market performance of radical innovation, can easily happen.
Additionally, customers are not always trustworthy. In some cases, they are not
even clear about what they really want. Even when they have a clear understanding of
their preferences, there is no guarantee that they can articulate themselves clearly and
exactly. Therefore, the information gathered from customers should be handled
carefully. Considering these respects, customer involvement does not always equal
good innovation performance.
Based on these understandings, customer involvement is vital to product
innovation. Small-scale involvement of customers cannot take full advantage of the
customers' value. However, heavy customer involvement consumes too many
economic funds and human efforts (Sandmeier, 2008). Furthermore, vague
customer demands will even raise bias and uncertainties, which make it more
difficult for designers to use them. Therefore, customer involvement in the
product innovation process deserves serious consideration.
Based on an analysis of related research, two issues are revealed as follows:
1. The research focus is mostly placed on customer involvement, so other
potentially significant factors may be neglected;
2. There is a lack of sufficient study focusing on identifying the importance of
customer involvement in radical innovation.
Hence, the objective of this study is to explore the importance of customer
involvement in radical innovation in NPD in a quantitative and analytical
manner. A basic hypothesis testing method is applied in this work to examine the above
issues.
2.2. Hypothesis development
Based on the above literature review, customers attract most of the focus of firms in the
product design and innovation process, as can be seen from relevant research and projects.
However, there are also other factors showing effective influence on radical
innovation. For example, competitors are a necessary concern in innovation
management (Sarpong and Maclean, 2012). Cooperation with competitors through
partially sharing market information can attain more initiative to face changing
market conditions, and reach more openness to the market. This can help reach more
satisfactory innovation performance. However, cooperation with competitors contains
the risks of valuable resource loss and barriers to developing products new to the market
(Wu, 2012). In the same sense, professional consultants or research institutions also
have influence on innovation performance, since they are able to provide more
professional suggestions and technical support (Sarpong and Maclean, 2012). Based
on the above examples, it is indicated that there are other factors affecting innovation
performance besides customers. Hence, the hypothesis is proposed as:
Hypothesis 1: The customer is not the most or the only significant factor
affecting radical innovation performance in NPD.
According to the study of existing research, two issues of customer involvement
in radical product innovation are uncovered and lead to the following study. In the next
section, the method to test this hypothesis is explained in detail.
3. Research method
The core of this work is to reveal the importance of innovation-related factors across
inbound and outbound sources, in order to identify the factors with a significant positive
correlation with radical innovation performance (in this work, the number of business
cases of products new to the market is measured as radical innovation performance),
and to verify the importance of customer involvement.
3.1. Data
In particular, this study adopts the 2011 UK innovation survey data (The National
Archives). This survey covered the period from 2008 to 2010, and consists of a
nationally representative sample of businesses with 10 or more employees in sections B-
N of the Standard Industrial Classification (SIC) 2007. In total, 28,079 questionnaires
were distributed and valid responses were received from 14,342 enterprises, giving a
response rate of 51.1%. The core questionnaire covers a broad range of innovation-
related concepts. Amongst them, the important information sources are the focus of this
study, since they give the inputs of innovation activities. In addition, the factor "business
cases of products new to the market" (the introduction of a new good or service to the
market before competitors) is processed as the radical innovation performance in NPD.
3.2. Descriptive statistics
The survey shows that only 7.3% of firms have products new to the market. This percentage is
low, indicating the lack of radical innovation and the necessity of related research. The
standard errors of the collected data for every attribute vary from 0.31 to 0.80. The errors
are mainly caused by the vagueness of questionnaire questions and cannot be avoided.
3.3. Pre-model steps
Firstly, the factors to be studied are identified. These factors are mainly selected
from the sources of important information. Based on the 2011 UK innovation survey
database, a total of 8 elements are selected as estimated variables to be processed. The
number of business cases of products new to the market is processed as the response,
namely the output of radical innovation. However, these data are not all suitable for
quantitative processing (e.g. invalid data, incomplete data). Thus, necessary pre-
processing is needed, which includes:
1. Cleaning - The survey results include invalid data (some items received no
answers from the investigated firms). In this work, the data sets with void items are
discarded since they are fuzzy and uncertain.
2. Coding - For this survey, most attributes are collected through a choice of "yes" or
"no", which is qualitative. In order to achieve quantitative analysis, these expressions
or qualitative formats should be coded into numerical formats. In this work, "yes" is
marked as 1 and "no" is marked as 0. For attributes which are assessed
through a rating system, the rating number set by the participating firms is regarded as
the numerical representation of the related attribute.
3. Weighing - According to the research focus, attributes deserving more innovation
effort are assigned more weight. For example, the factor "cooperation with
other institutions" has various levels: local, UK national, European, and worldwide.
Wider cooperation indicates a stronger capability of a company in product
innovation, and thus should be assigned more weight.
4. Scoring - To simplify the calculation, the dimensions of every observation
should be kept under a proper level. The main attributes of every observation are
8, and it is better to avoid more detailed sub-dimensions under every attribute.
Therefore, the sub-attributes should be combined and integrated into one score which
reflects the overall performance of the related attribute. The score can be computed
through the formula below.

Score_i = \sum_{j=1}^{n} Code_{ij} \cdot W_{ij} \tag{1}

where Code_ij is the coding number of the jth dimension of the ith attribute; W_ij is the weight
of the jth dimension of the ith attribute; and n is the total dimension number of the ith attribute.
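A compact sketch of the coding and scoring chain follows; the attribute structure and weights are illustrative, while the steps mirror the cleaning, coding and scoring rules above:

def attribute_score(answers, weights):
    """Equation (1): weighted sum of the coded sub-dimensions of one attribute.

    answers : list of raw survey answers for the sub-dimensions
              ("yes"/"no" or an integer rating).
    weights : list of weights W_ij, one per sub-dimension.
    Returns None when any sub-dimension is void (the record is discarded).
    """
    codes = []
    for a in answers:
        if a in ("yes", "no"):
            codes.append(1 if a == "yes" else 0)   # coding step
        elif isinstance(a, int):
            codes.append(a)                        # rating kept as-is
        else:
            return None                            # cleaning: void item
    return sum(c * w for c, w in zip(codes, weights))

# e.g. attribute_score(["yes", "no", 3], [1.0, 1.0, 2.0]) -> 7.0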
3.4. Model specifications
Through pre-processing, the data are now consistent and can be calculated and
compared. Since there are 8 estimated variables (attributes), an interactive multiple
regression model is preferred to deal with the multi-dimensional inputs. As the intention is
to extract the most significant factor, iterative computations are conducted to identify
the influence of every factor step by step. Amongst them, the influence of the customer-
related dimension "clients, customers or end users" is the focus. The algorithm of
this model makes use of interactive linear regression to compute the beta
coefficients of every estimated variable; a larger beta magnitude means a larger
influence of the variable. The t-distribution is adopted to judge whether a variable
has a significant influence on the response.
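A numpy-only sketch of that computation is shown below: OLS coefficients with t-statistics and two-sided p-values, the kind of output Table 1 reports. The variable layout is an assumption; the paper's own computations were done in Matlab.

import numpy as np
from scipy import stats

def ols_with_tstats(X, y):
    """Fit y = b0 + X·b and return coefficients, t-stats and p-values."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # least-squares fit
    resid = y - Xd @ beta
    s2 = resid @ resid / (n - k - 1)               # residual variance
    cov = s2 * np.linalg.inv(Xd.T @ Xd)            # coefficient covariance
    se = np.sqrt(np.diag(cov))
    t = beta / se
    p = 2 * stats.t.sf(np.abs(t), df=n - k - 1)    # two-sided p-values
    return beta, t, p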
4. Experiment results
Through simulations using Matlab, the interactive multiple regression gives the effects
of these factors related to product innovation.
4.1. Significance of every influential factor
In this step, every observation consists of eight attributes, which are all important
information sources for firms. Hence the inputs are multi-dimensional. Products new to
the market are the output of radical innovation, which is one-dimensional. Through the first-
round calculation, the factor "within business or enterprise group" (-0.0035952,
p=0.97253) is not reliable, since it has a very large p-value. To improve the confidence
of this experiment and the final result, this factor is discarded as an accidental case.
In the second-round computation, the dimensions of the inputs are reduced to seven.
Applied algorithms are similar and the results are shown in Table 1. From the
perspective of p-values, the results are acceptable.
Table 1 Results of the second-round computation through interactive multiple regression model

Factor                                                      Coef       StdErr     tStat     pVal
Constant                                                    3.5752     2.6155     1.3669    0.17928
Suppliers of equipment, materials, services or software    -0.26595    0.083115  -3.1998    0.002692***
Clients, customers or end users                             0.18029    0.068758   2.622     0.012304**
Competitors or other businesses in your industry           -0.42843    0.13507   -3.1719    0.0029065***
Consultants, commercial labs or private R&D institutes      1.1134     0.23622    4.7133    2.9383e-005***
Technical, industry or service standards                    0.52831    0.307      1.7209    0.092996*
Conferences, trade fairs, exhibitions                       0.25726    0.23696    1.0857    0.28412
Scientific journals and trade/technical publications       -0.22341    0.15051   -1.4844    0.14554

*p<0.1, **p<0.05, ***p<0.01
Amongst these factors, the factor "Suppliers of equipment, materials, services or
software" (-0.26595, p<0.01), the factor "Competitors or other businesses in your industry"
(-0.42843, p<0.01) and the factor "Consultants, commercial labs or private R&D institutes"
(1.1134, p<0.01) have very high reliability, so the coefficients can reliably reflect the
correlation between the factors and the response. Specifically, the factor "Suppliers of equipment,
materials, services or software" has a negative influence on the response, and its absolute
value is not large, which means the influence is not very obvious. The factor "Competitors
or other businesses in your industry" also has a negative effect on the response, and the
effect is not large. This phenomenon supports the viewpoints of some researchers
(e.g. Wu, 2012) that cooperation with competitors can damage the market
performance of product innovation. The factor "Consultants, commercial labs or private
R&D institutes" has the greatest positive coefficient, which implies that it stimulates the
response significantly. This result indicates the importance of consultants for radical
innovation from an analytical perspective. Actually, cooperation with outside
consultants or commercial R&D institutions has attracted some industries. For example, the
banking industry often outsources part of its business (e.g. risk management) to consulting
companies with which it has long-term and trusted cooperation, to seek professional
guidance and pursue promising solutions. Therefore, consultants or commercial R&D
institutions can also support product innovation by providing professional
knowledge and methods, ensuring the confidentiality of the project and avoiding the risks
of wrong and invalid participation.
The estimation of the factor "Clients, customers or end users" (0.18029, p<0.05) is
reliable under the significance level α=0.05, and the influence is positive. However, the
absolute value is small, which means the effect is not major. This result reveals that the
influence of customers is not as significant as expected. Although some work suggests
that "a strong focus on the customer organization perilously can alienate the
manufacturer from its inherent core competencies" (Lilien et al. 2002), further and
deeper consideration is still lacking. This experiment demonstrated that there are other
significant factors which have positive impacts on radical innovation performance.
The factor "Technical, industry or service standards" (0.52831, p<0.1) is reliable
under the significance level α=0.1. In fact, this estimation is not ideal, since normally 0.05 is
the acceptable threshold, but it can be accepted. The estimations of the factor "Conferences, trade
fairs, exhibitions" (0.25726, p>0.1) and the factor "Scientific journals and trade/technical
publications" (-0.22341, p>0.1) indicate poor reliability, thus the implication of the
analysis of these two results is limited. Moreover, the magnitudes of the coefficients
are small. Therefore, it is reasonable to neglect these two factors.
From the results, H1 is supported: the factor "Consultants, commercial labs or
private R&D institutes" shows a more significant positive influence on radical innovation
performance. Therefore, it may not be wise to simply emphasize customer
involvement in the product innovation process, and consultants can be a promising way to
help improve radical innovation.
4.2. Influence tendency of essential factors
Based on the computation of the related factors, a focused study is performed to
disclose the essential correlation between the important factors. Thus, the third-round
calculation is based on the factors in which we are interested. The factor "Clients, customers or
end users" is undoubtedly included. The factor "Consultants, commercial labs or private
R&D institutes" is also included, since it has the most significant influence. In addition,
the factor "Competitors or other businesses in your industry" is analyzed, as it is an
important concern for decision managers. Therefore, the inputs are three-dimensional.

Figure 1 Quadratic response surface of interactive multiple regression model
(x-clients, customers or end users; y-competitors or other businesses in your industry; z-consultants,
commercial labs or private R&D institutions)
To understand the comprehensive correlation of these three factors with the
response, a quadratic surface model is preferred. In this model, the linear correlation,
interactive correlation, and square correlation are all considered, thus a convincing
comprehensive estimation is achieved. The result is presented in Fig. 1. The X-label is the
factor "Clients, customers or end users", the Y-label is the factor "Competitors or other
businesses in your industry", and the Z-label is the factor "Consultants, commercial labs or
private R&D institutes". The HSV (Hue-Saturation-Value) grades reflect the products
new to the market, which are the measurements of radical innovation performance.
From the figure, it is clear that the centric part of this cube, where all dimensions are set
at a medium level, has very low innovation performance. Extending to the edges, the
innovation performance becomes better. The reason may be that the effects of the y-
label are contrary to the effects of the x- and z-labels. They cannot lead to optimal innovation
in a consistent direction. The possible way to achieve optimization is to set some
factors at extreme values and offset the opposite effects caused by the other factors.
Furthermore, the distribution of the potential best radical innovation is centralized around
one corner where the X and Z values are high and the Y value is low. This indicates that radical
innovation will be promoted by customers and consultants but weakened by competitors.
This conclusion is in line with the computation results of the above step.
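A sketch of how a full second-order (quadratic) design matrix for the three retained factors might be assembled before least-squares fitting; the column layout is an assumption, not the paper's exact model specification:

import numpy as np

def quadratic_design(X):
    """Full second-order terms for 3 factors x, y, z:
    [1, x, y, z, xy, xz, yz, x^2, y^2, z^2]."""
    x, y, z = X.T
    return np.column_stack([
        np.ones(len(X)),        # constant
        x, y, z,                # linear terms
        x * y, x * z, y * z,    # interaction terms
        x**2, y**2, z**2,       # square terms
    ])

# beta, *_ = np.linalg.lstsq(quadratic_design(X), response, rcond=None)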
5. Discussion and conclusion
Customer involvement has become an important way for firms to improve product
design and innovation in order to face changing market conditions and survive in a
competitive business environment. In particular, radical innovation endows firms with
sharp advantages that differentiate them from competitors. This study investigates
customer involvement in radical product innovation; the major result is that the
potential influential factors in radical product innovation are detected, and the
importance of customer involvement is verified.
Based on this study, innovation performance is more sensitive to
"consultants, commercial labs or private R&D institutions" than to the direct
involvement of customers. This does not deny the importance of customers, but points out
more opportunities to reach innovative products. Actually, consultants can
provide professional knowledge and solutions for the development of an innovation
project which cannot be achieved on a firm's own. However, many firms neglect the
importance of consultants. In the same sense, there are also other factors lacking
sufficient attention due to too much focus on customers. Thus, a reasonable
redistribution of research focus is recommended to address all significant aspects
sufficiently. Especially for firms stuck in a bottleneck, there is little space to extend the
benefits from customers; thus, promising directions for further improvement may be to
improve other factors, such as cooperation with consultants or research institutions.
By and large, the significance of customers is detected. It is proven that careful
consideration is worthwhile when developing radical innovations. Therefore,
controlling customer involvement by estimating the expected benefits and forecasting
the potential risks is important. Furthermore, research effort should be properly
distributed across the related aspects to pursue the best benefits.
6. Limitation and Further work
This work was based on the 2011 UK innovation survey data, which were collected through
questionnaires. The answers unavoidably contain errors and uncertainties,
such as the bias caused by misunderstanding of the questions and the implicit uncertainties
caused by void answers, to name a few. In addition, the data do not explicitly focus
on customer involvement. This constraint implies that the analysis may not provide a
complete picture of the antecedents of a firm's customer involvement, because certain
relevant variables may not have been included. In further research, the error sources
will be investigated and the processing methods will be improved to be more
accurate and avoid potential errors. In addition, this study also provides interesting
directions for product design and product innovation, which can be studied through
empirical study as well as theoretical work.
Acknowledgement
Research was supported by The Program for Professor of Special Appointment
(Eastern Scholar) at Shanghai Institutions of Higher Learning.
References
Brockhoff, K., 2003, "Customers' perspectives of involvement in new product development." International
Journal of Technology Management, 26(5/6): 464-481.
Enos, J. L., 1962, Petroleum progress and profits: A history of process innovation. Cambridge, MA: MIT
Press
Freeman, C., 1968, "Chemical process plant: Innovation and the world market." National Institute Economic
Review, 45: 29-57
Huang, F., Rice, J., 2012, Openness in product and process innovation, International Journal of Innovation
Management, 16(4): 1250020-1 to 1250020-24
Kaulio, M. A., 1998, "Customer, consumer and user involvement in product development: A framework and
a review of selected methods." Total Quality Management, 9(1): 141-149
Lilien, G. L., Morrison, P. D., Searls, K., Sonnack, M., and von Hippel, E., 2002, "Performance assessment
of the lead user idea-generation process for new product development." Management Science, 48(8):
1042-1059
Ngo, L.V., and O'Cass, A., 2012, Innovation and business success: The mediating role of customer
participation. Journal of Business Research
Sandmeier, P., 2008, Customer Integration in Industrial Innovation Projects, 1st Edition,
Betriebswirtschaftlicher Verlag Dr. Th. Gabler / GWV Fachverlage GmbH, Wiesbaden
Sarpong, D., Maclean, M., 2012, Mobilising differential visions for new product
innovation. Technovation
Shaw, B., 1985, "The role of the interaction between the user and the manufacturer in medical equipment
innovation." R&D Management, 15(4): 283-292
Szainfarber, Z., Stringfellow, M. V., et al., 2010, The impact of customer-contractor interactions on
spacecraft innovation: Insights from communication satellite history. Acta Astronautica, 67(9-10): 1306-
1317
von Hippel, E., 1988, The sources of innovation. New York: Oxford University Press
Wu, J., 2012, Technological collaboration in product innovation: The role of market competition and sectoral
technological intensity. Research Policy, 41(2): 489-496
A Novel System for Customer Needs
Management in Product Development
Wunching CHANG a,1, Chun-Hsien CHEN b and Xingyu CHEN b
a Department of Mechanical Engineering, Ming Chi University of Technology, Taiwan
b School of Mechanical and Aerospace Engineering, Nanyang Technological University,
Singapore
Abstract. Understanding and fulfilling each individual customer's needs has been
recognized as a great challenge for companies across industries. Customer needs
management, which is essentially concerned with the relationship between
customer needs in the customer domain and product specifications or function
requirements in the product domain, needs to be well addressed in the product
development process. However, there is a lack of effective knowledge-based
techniques supporting the implementation of a customer needs management
system to obtain accurate customer needs statements. In this paper, a customer
needs management system (CNMS) that combines the ontology customer needs
representation (OCNR) system with the Involvement/Thinking/Feeling (ITF)
customer segmentation model is presented to obtain accurate customer needs
statements. By classifying different types of customers based on innovativeness
characteristics, acquiring customer needs by involving the identified innovative
customers, and using the obtained customer needs to generate more accurate needs
statements, the CNMS manages customer needs in a novel way. The detailed process
of the overall system implementation is presented. System evaluations, including
terminology evaluation and high-level needs statements evaluation, were conducted.
Results and findings from the customer needs management system implementation
are also discussed and summarized.
Keywords. customer needs management, ITF model, OCNR system, product
development
1. Introduction
Product development process is defined as a sequence of steps or activities including
conceiving, designing and commercializing a product. With the increased complexity
of products, sophisticated customer needs and intense competition in todays market,
the issue of defining product specifications by capturing, analyzing, understanding and
rejecting customer needs or voices of customer (VoC) has received great attention in
recent years [1].
To obtain a better understanding of customer needs and product specifications, and
to explicitly signify the relationships between the customer domain and the product design
domain, several issues must be made clear to designers: 1) how to identify innovative
customers who act as the mainstream for product innovation; 2) how to represent the

1 Corresponding Author: Wunching CHANG, Department of Mechanical Engineering, Ming Chi University of Technology, 84 Gungjuan Road, Taishan, New Taipei City 24301, Taiwan; E-mail: wlylechang@mail.mcut.edu.tw
relationship between high-level customer needs and low-level product characteristics;
and 3) how to obtain more accurate needs statements based on the existing information.
Customer needs management, which is essentially concerned with the relationship
between customer needs in the customer domain and product specifications or function
requirements in the product domain, needs to be well addressed in the product
development process.
In this paper, a customer needs management system (CNMS) that combines the
ontology customer needs representation (OCNR) system with the
Involvement/Thinking/Feeling (ITF) customer segmentation model is presented to
obtain accurate customer needs statements. By classifying different types of
customers based on innovativeness characteristics, acquiring customer needs by involving
the identified innovative customers, and using the obtained customer needs to generate
more accurate needs statements, the CNMS can manage customer needs in a
novel way.
2. Related Works
A large number of customer needs management approaches from multiple disciplines
have been reported over the past two decades. Jiao and Chen [2] classified these
approaches into three groups: psychology-based approaches, artificial intelligence-
based approaches, and knowledge discovery approaches, as shown in Figure 1.
Figure 1. Multidisciplinary concerns on customer needs management
Compared to psychology-based approaches, techniques based on artificial
intelligence (AI) are much more accurate and objective [2]. Integrated approaches that
integrated picture sorts and laddering methods, fuzzy evaluation, and neural network
techniques were also proposed to solicit customer needs [3].
However, a common limitation of current customer needs management
research is that the existing methods are not able to derive accurate customer needs,
with precise definitions of specific terms, from vague customer statements. In
addition, these methods cannot account for the relations between these terms. Therefore,
the challenge regarding the translation and representation of customer needs cannot be
well addressed. Besides, existing research cannot identify the different types of innovative
customers and get them involved in the product design process. These gaps provide the
motivation to carry out an in-depth study of customer needs management.
3. Methodology
3.1 Overview
The Customer Needs Management System (CNMS), which is proposed to obtain more
accurate customer needs statements, was implemented in four steps, as shown in Figure
2.
Figure 2. Framework of customer management system
First, raw customer data was obtained from customer reviews. Second, a customer
segmentation model named the ITF model was applied to identify innovative customers.
Raw customer statements were then obtained from the innovative customers. After this
stage, these raw customer statements were put into the customer representation system,
called the OCNR system, to obtain a customer needs ontology. Finally, based on the concepts
and relations in the customer needs ontology, customer needs statements were
generated.
3.2 ITF Segmentation Model
Market segmentation seeks to handle customers as distinct entities through the
identification and understanding of their different needs, preferences, and behaviors [4].
Many market segmentation studies were based on traditional variables such as
geographic and demographic variables [5]; a limitation of using traditional
variables is that many characteristics related to innovativeness may not be
discovered [6].
In this work, a new customer segmentation model named the ITF model was built
(Figure 3). Customers are categorized according to variables in three dimensions:
involvement, thinking, and feeling [7]. These three variables, which are independent of
each other, describe the innovativeness characteristics of customers from three different
perspectives. At the same time, they are closely related to the customers' characteristics of
innovativeness as a whole [8]. In this study, each variable has both a high and a low level.
Thus, a total of eight different segments are generated.
Figure 3. Dimensions of customer segmentation ITF model
The first four segments are the passionate innovators, involved innovators, devoted
innovators and amateur innovators, who show innovativeness in at least two
dimensions based on the ITF model and are defined as innovative customers. The
remaining four segments, which correspond to neutral innovators and indifferent
innovators, are defined as non-innovative customers. The identified innovative
segments can lead to accurate innovative customer orientation and further enhance
customer satisfaction with products.
3.3 OCNR System
The OCNR system, which has the functions of concept extraction and semantic relation
extraction, was also built in this study. In the OCNR system, domain-specific terms are
interpreted semantically for concept extraction; a string matching mechanism and
lexico-syntactic pattern recognition are used to extract taxonomic relations; and a rule-
based method combined with a full-text parsing technique is adopted for non-taxonomic
relation extraction. The framework of the OCNR system is shown in Figure 4.
Figure 4 Framework of the ontology learning system
3.3.1 Concept Extraction
In order to establish a domain ontology to account for customer needs for product
characteristics, the first step is to determine which concepts are important in the domain
of interest. Terms are linguistic representations of concepts in the text [9], so concept
extraction identifies the domain-specific terms in the text. This study uses a two-
phase concept extraction: candidate term extraction from text with certain linguistic
filters, and domain-specific term selection with statistical measures. A group of NLP
methods (e.g., part-of-speech [POS] tag-based rules) is employed to extract terms
from the text. Statistical measures are then applied to evaluate the extracted terms, from
which only candidates with high ranks are selected as concepts.
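A minimal sketch of the two-phase idea, a POS-based linguistic filter followed by a frequency ranking; NLTK is assumed as the toolkit here, while the actual OCNR pipeline uses its own rules and statistical measures:

import nltk
from collections import Counter

# requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

def extract_concepts(reviews, top_k=20):
    """Phase 1: noun candidates via POS tags; phase 2: rank by frequency."""
    counts = Counter()
    for text in reviews:
        tags = nltk.pos_tag(nltk.word_tokenize(text.lower()))
        counts.update(w for w, t in tags if t.startswith("NN"))  # nouns only
    return [term for term, _ in counts.most_common(top_k)]

# e.g. extract_concepts(["The battery life of this camera is great."])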
3.3.2 Semantic Relation Extraction
There are two important types of semantic relations in a domain ontology: taxonomic
and non-taxonomic relations. Taxonomic relations are the basic ones. String matching
based on term structure is a simple method for taxonomic relation extraction, and several
rules are applied in the string matching when extracting taxonomic relations. For
non-taxonomic relation extraction, a rule-based method and a word-property based method
are applied. The rule-based method is used frequently in the literature for
non-taxonomic relation extraction. The word-property based method is originally designed
in this research, as a complement to the rule-based method, to extract more
non-taxonomic relations [7]. In this method, a non-taxonomic relation
(Concept1, Concept2) is extracted if these two concepts can frequently be found in the
text simultaneously.
An association rule mining mechanism is developed to reveal the relation between
various words. A customer review can be considered a transaction record. The
association rules are extracted from transaction data based on certain parameters such
as support and confidence. The confidence denotes the strength of an association, and
the support indicates the frequency of the occurring patterns in the rule. Each
association rule extracted from transaction data indicates a particular relationship
between the transaction data sets.
The OCNR system uses support and confidence to determine if there are potential
non-taxonomic relations between words. A data mining tool, Weka 3.7
(http://www.cs.waikato.ac.nz/ml/weka/), is employed to find the non-taxonomic
relationships between concepts, with an empirically set minimum support of 0.1 and a
minimum confidence of 0.2.
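As an illustration of this step, a minimal sketch using Weka's Java API follows (Weka 3.7 ships an Apriori implementation); the file name and the transaction layout are assumptions, not details from the paper.

import weka.associations.Apriori;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RelationMiner {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF file: one row per customer review (transaction),
        // one binary attribute per concept indicating its occurrence.
        Instances data = DataSource.read("reviews.arff");
        Apriori apriori = new Apriori();
        apriori.setLowerBoundMinSupport(0.1); // minimum support, as in the text
        apriori.setMinMetric(0.2);            // minimum confidence, as in the text
        apriori.buildAssociations(data);
        // Each mined rule suggests a candidate non-taxonomic relation between
        // the concepts on its two sides.
        System.out.println(apriori);
    }
}

Rules that clear both thresholds can then be screened as candidate non-taxonomic relations.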
4. Illustrative Case Study
4.1 Process
The implementation of the system using a case study is presented in detail as follows:

Step 1: Acquiring raw customer data.
The raw customer data for the implementation process was obtained from
Epinions.com, which can provide relevant customer data as well as customer review
data. Specifically, the data for analysis was self-summary data collected from 500
members of Epinions.com. The customer data came from the "author popularity" and
"about author" categories under the customer profiles; both categories were written by
the customers themselves.
Step 2: Innovative customer identification with ITF model.
In this step, the ITF model was applied to identify innovative customers, using the
keyword-based segmentation method discussed earlier. As a result, 292
customers were identified as innovative customers and 208 customers were identified
as non-innovative customers in this study. Among the innovative customers, 106 were
identified as passionate innovators, 69 were involved innovators, 91 were devoted
innovators, and 26 were amateur innovators. Among the non-innovative customers, 155
were neutral innovators and 53 were indifferent innovators. The distribution of the
different types of innovators can be found in Figure 5.
Figure 5. Customer segmentation results based on ITF model
Step 3: Customer representation with OCNR system
After innovative customers were identified, the customer reviews written by these
innovative customers were selected as inputs to the OCNR system. At this stage, the OCNR
system was applied to automatically analyze the customer reviews and represent them
using a customer needs ontology, as indicated in section 3.
Step 4: Customer needs statements generation
After building the customer needs ontology, customer needs statements can be
generated based on the concepts and relations in the customer needs ontology. The
statement can account for either a taxonomic or a non-taxonomic relation. Based on
the guidelines from Ulrich and Eppinger's research [10], each relation specifies only
one needs statement. After the process, a group of customer needs statements were
obtained. The extracted concepts and relations were organized in the customer needs
ontology for need statements generation. Examples of the need statements generated
from customer needs ontology are shown in Table 1.
Table 1. Example of needs statements generated from customer needs ontology
Number Customer Needs Statement
1 The digital camera provides fast AF.
2 The digital camera takes HD Video.
3 The digital camera has long battery life.
4 The digital camera is lightweight.
5 The digital camera is professional.
6 The digital camera is for typical family usage.
7 The digital camera is comfortable for easy grip.
8 The digital camera works in low light locations.
9 The digital camera indicates battery power level.
10 The digital camera is convenient with auto control.
11 The digital camera is professional with manual control.
12 The digital camera is convenient camera and lightweight.
4.2 System Evaluations
A subjective evaluation method was used to determine whether the terminology of the
customer needs statements was good enough compared to an expert-generated ontology.
Moreover, the quality of the needs statements mapping high-level customer needs to
product characteristics was also evaluated.
The evaluation criteria were the accuracy of terminology and the accuracy of high-
level needs statements based on product design experience and personal understanding.
The detailed explanations of these criteria are given as follows:
Higher accuracy of the terminology indicates that the selected terminology is
better at reducing confusion among different groups.
Higher accuracy of high-level needs indicates that the high-level needs
statements are expressed more clearly with related product characteristics.
4.2.1 Terminology Evaluation
Before the terminology evaluation, two groups of senior students with industrial
backgrounds were asked to interpret customer needs directly based on 40 randomly
selected customer statements from customer reviews. The results were used as
comparison results for terminology evaluation.
Table 2 shows the examples of customer needs statements generated by both the
proposed system and human designers or engineers.
Compared to the need statements interpreted by designers and engineers, the need
statements generated from the customer needs ontology used well-defined
terminologies such as "fast AF", "typical family usage", and "battery power level",
and delivered clearer relations between terminologies such as "camera --> take -->
HD video" and "auto control --> convenient camera". These features make the need
statements generated from the customer needs ontology more easily accepted by
participating engineers.
Table 2. Examples of the generated customer needs statements

Customer statement: "I need its grip; it feels comfortable"
Interpreted need (by Designer 1): The camera is comfortable when gripped.
Interpreted need (by Engineer 2): A grip is needed.
Interpreted need (by Ontology System): The camera is comfortable for easy grip.

Customer statement: "I use it for shooting perfect photos for typical family or kids events even in dim light."
Interpreted needs (by Designer 1): 1. The camera can shoot perfect photos in typical family or kids events. 2. The camera can shoot perfect photos in dim-light situations.
Interpreted needs (by Engineer 2): The camera can work in typical family or kids events. The camera can work in dim-light places.
Interpreted needs (by Ontology System): The camera is for typical family usage. The camera can work in low-light conditions.
4.2.2 High-Level Statements Evaluation
In the second stage of evaluation, 48 NTU students with backgrounds in mechanical
engineering and product design were asked to evaluate the accuracy of the generated
customer needs statements from customer needs ontology. Each student was asked to
rate 40 randomly selected customer needs statements. The students were required to
use 5-point Likert scales to rate the needs statements. The students were only required
to evaluate the accuracy of the generated customer needs statements. Part of the
subjective evaluation results is shown in Table 3. The complete results can be found in
the appendix.
Table 3. Part of Subjective evaluation results
Customer Needs Statement Rating Average by Criterion I
The camera should have shutter control. 3.33
The camera takes high-definition video. 3.42
The camera has long battery life. 3.53
The camera works in low light. 3.42
The camera is decent with fast AF and manual control. 3.53
The camera is fashionable with nice looking. 3.36
The camera is professional with fast AF and wide length range. 3.70
The camera is typical family with auto control. 3.45
5. Results and Discussions
In this paper, an overall framework of CNMS, which combines ITF models and the
OCNR system, was illustrated. The validation of the whole system was based on the
assumption that better identifying innovative customers and better representing their
needs lead to more accurate needs statements.
Using the ITF model, customers were classified into six types, including 292
innovative customers. Among the innovative customers, 106 were identified as
passionate customers who have full innovative characteristics, and 155 were average
innovative customers who may lack one or two dimensions of innovative
characteristics. Although the average innovative customers were often ignored in other
lead user studies, they can still provide very valuable inputs for customer needs
statement generation.
Using the OCNR system, a customer needs ontology for the product of digital
cameras was built. In the OCNR system, a group of NLP methods (i.e., the POS tag-
based rules) was used to extract single-word terms and multiword terms from text, after
which the extracted terms were ranked using statistical measures. Only terms with high
ranks were selected as concepts. Based on the established theories and case studies,
these features can help to define the terminologies and relations well. As the online
information is easily accessed and processed through automated NLP tools embedded
with the linguistic filters and statistical measures, the use of the OCNR system can help
save both time and cost.
In addition, in the OCNR system, string matching in WordNet and lexico-syntactic
pattern recognition were used to extract taxonomic relations, and rule-based methods
and word-property methods were applied to extract non-taxonomic relations. These
relations could be well defined and served as bases for the customer needs statements.
Unlike in other similar systems, the high-level customer needs could be embodied
through the non-taxonomic relations.
The obtained customer needs ontology can facilitate generating needs statements
in a straightforward and easy manner. For example, by using the relation "camera -->
work_in --> low light conditions" presented in the customer needs ontology, a needs
statement can be generated directly as "The camera works in low light conditions".
Further, both the concepts and relations in the ontology are accurate, which greatly
enhances the accuracy of the generated needs statements.
Compared to needs statements generated by designers and engineers, the customer
needs generated from the customer needs ontology used concepts (e.g., "comfortable
camera", "easy grip", "typical family usage", "low light conditions", etc.) and relations
("work_in", "is_for", etc.) that were well defined in the customer needs ontology.
Therefore, use of customer needs ontology for needs statements generation can reduce
the confusion in the process of customer statements translation. Further, since all the
terms and relations were obtained from a well-established customer needs ontology, the
accuracy of the obtained concepts and relations could be expected to be high, which
indicates that the accuracy of the customer needs statements can accordingly be
ensured.
By organizing the important concepts and relations through the customer needs
ontology, a knowledge-based guide for the product development process is available to
map customer needs and product characteristics. As discussed earlier, a common
problem in the current product design is that it is very difficult to identify the
relationship between the high-level customer needs and the low-level product
characteristics. The non-taxonomic relations obtained from the customer needs
ontology can help discern this relationship. For example, the non-taxonomic relation
"manual control --> professional camera" indicates that a high-level customer need (i.e.,
a professional camera) can be accounted for by selecting certain product characteristics
(i.e., manual control). This information is very important, especially for QFD or a
related translation process.
6. Conclusions
In this paper, a case study was used to illustrate how the proposed overall system works.
It was also examined whether the proposed process can improve the whole design
process and enhance the fidelity of customer needs statements. It was found that the
established CNMS is able to identify innovative customers and obtain customer needs
from them. At the same time, the CNMS can help obtain well-defined terminologies
and relations, and further helps to obtain accurate high-level needs statements mapped to product
characteristics. These findings suggest that use of the CNMS can enhance the fidelity
of customer needs statements. Thus, the whole implemented framework has great
potential in facilitating customer needs management in the product development
process.
References
[1] A. McKay, A. de Pennington, and J. Baxter, Requirements management: A representation scheme for
product specifications. CAD Computer Aided Design, 33(7) (2001), 511-520.
[2] J. Jiao, and C. H. Chen, Customer requirement management in product development: A review of
research issues. Concurrent Engineering Research and Applications, 14(3) (2006), 173-185.
[3] C. H. Chen, L. P. Khoo, and W. Yan, A strategy for acquiring customer requirement patterns using
laddering technique and ART2 neural network. Advanced Engineering Informatics, 16(3) (2002),
229-240.
[4] K. Tsiptsis, and A. Chorianopoulos, Data Mining Techniques in CRM: inside Customer Segmentation
(Vol. 357), Wiley Publishing, 2010.
[5] S. J. Phua, W. K. Ng, H. Liu, X. Li, and B. Song, Customer information system for product and service
management: Towards knowledge extraction from textual and mixed-format data. Paper presented at
the 4th IEEE International Conference on Service Systems and Service Management (ICSSSM2007),
Chengdu, China, Jun 9-11, 2007.
[6] X. Y. Chen, C. H. Chen, and K. F. Leong, Automated Ontology-based Customer Needs Translation and
Representation, IEEE 2011, Beijing, China (2011), 907-910.
[7] X. Y. Chen, C. H. Chen, and K. F. Leong, Automated customer statement translation using online
customer review, ICPD 2011, Tainan, Taiwan (2011), 87-89.
[8] V. Bilgram, A. Brem, and K. I. Voigt, User-centric innovations in new product development - Systematic
identification of lead users harnessing interactive and collaborative online-tools. International Journal
of Innovation Management, 12(3) (2008), 419-458.
[9] J. C. Sager, D. Dungworth, and P. F. McDonald, English Special Languages: Principles and Practice in
Science and Technology. Oscar Brandstetter, Wiesbaden, Germany, 1980.
[10] K. T. Ulrich, and S. D. Eppinger, Product design and development (4th ed.). New York: McGraw-Hill,
2008.
Kansei Clustering Using Design Structure
Matrix and Graph Decomposition for
Emotional Design
Chun-Hsien CHEN a,1, Yuexiang HUANG a, Li Pheng KHOO a and Danni CHANG a
a School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
1 Corresponding Author: Associate Professor Chun-Hsien Chen, School of Mechanical & Aerospace Engineering, Nanyang Technological University, North Spine (N3), Level 2, 50 Nanyang Avenue, Singapore 639798. E-mail: mchchen@ntu.edu.sg, Phone: +65 6790-4888, Fax: +65 6792-4062.
Abstract. Conventionally, Kansei engineering relies heavily on the intuition of the
person who uses the method in clustering the Kansei. As a result, the selection of
Kansei adjectives may not be consistent with consumers' opinions.
Nevertheless, to obtain a consumer-consistent result, all of the collected Kansei
adjectives (usually hundreds) might need to be evaluated by every survey
participant, which is impractical in most design cases. Accordingly, a Kansei
clustering method based on design structure matrix (DSM) and graph
decomposition (GD) is proposed in this work. The method breaks the Kansei
adjectives down into a number of subsets for the ease of management among the
survey participants. In so doing, each participant deals with only a portion of the
collected words and the subsets are integrated using a DSM-based algorithm for an
overall Kansei clustering result. In order to differentiate the groups in the
combined DSM further, graph decomposition (GD) is used to yield non-exclusive
Kansei clusters. The hybrid approach, i.e., using DSM and GD, is able to handle
the Kansei clustering problem. A case study on cordless battery drills is used to
illustrate the proposed approach. The obtained results are compared and discussed.
Keywords. Emotional product design, Kansei engineering, Kansei clustering,
design structure matrix, graph decomposition.
1. Introduction
Kansei engineering has been well perceived as an effective tool in the realm of
emotional design over the past few decades. Accordingly, a large number of Kansei
engineering related methods/systems have been developed. However, most of these
methods rely heavily on the intuition of the person who uses the method in clustering
the Kansei. Therefore, the selection of Kansei adjectives may not be consistent with
consumers' opinions. Nevertheless, to obtain a consumer-consistent result, all of the
collected Kansei adjectives (usually hundreds) might need to be evaluated by every
collected Kansei adjectives (usually hundreds) might need to be evaluated by every
survey participant, which is impractical in most design cases.
The design/dependency structure matrix (DSM) proposed by Steward [1] has the
distinct advantage of showing the information cycles or connections among all of the

units in a system in a visual and traceable way. It lists all of the units or nodes in an n-
by-n square matrix, where n is the number of nodes. The manipulation of the DSM
enables the identification of loops or iterative connections among the nodes. Therefore,
DSM can be used to partition or cluster Kansei adjectives. The use of various
partitioning methods, e.g., binary matrix algebra [2], the powers of adjacency matrix
method [5], a loop tracing procedure [1], and a triangularization algorithm [6], which
rearrange a DSM into a lower triangular form, allows the minimizing of the loops
located along the diagonal line. Nonetheless, the lower triangular form is not preferred
in dealing with Kansei clustering problems because a Kansei DSM is symmetric and
requires heavy weights close to and along the diagonal. Hence, new partitioning methods
based on the DSM are needed to cluster Kansei adjectives.
Graph decomposition (GD) is a useful technique in graph theory [12]. Owen
developed an algorithm for non-directed GD [13]. Chen and Occeña improved the
algorithm and successfully employed it in a procedure for decomposing a group of
interrelated design knowledge facets into a number of subgroups for the easier
construction of a product design blackboard system [8, 9]. The revised algorithm could
also be used for clustering Kansei adjectives because of its two advantages [10]. First,
the correlation matrix of the Kansei adjectives is symmetrical. It can be treated as a
non-directed graph in such a way that the Kansei adjectives are the vertices and the
correlation coefficients between the Kansei adjectives are the links of the graph.
Second, the resulting subgraphs or Kansei clusters are non-exclusive. In other words,
the vertices could be shared by different subgraphs. Therefore, novel and promising
methods for clustering Kansei adjectives using GD could be developed.
Accordingly, a Kansei clustering method based on a design structure matrix
(DSM) and graph decomposition (GD) is proposed in this work. The method breaks the
Kansei adjectives down into a number of subsets for the ease of management among
the survey participants. In so doing, each participant deals with only a portion of the
collected words and the subsets are integrated using a DSM-based algorithm for an
overall Kansei clustering result. In order to further differentiate the groups in the
combined DSM, graph decomposition (GD) is used to yield non-exclusive Kansei
clusters. The hybrid approach, i.e., using DSM and GD, is able to handle the Kansei
clustering problem. Section 2 depicts the proposed hybrid approach to clustering the
Kansei adjectives. Section 3 presents a case study on cordless battery drills to illustrate
the proposed approach. The obtained results are compared and discussed. Section 4
summarizes the whole work.
2. Method
A method that integrates DSM and GD is proposed in this work to handle Kansei
clustering problems. The proposed method comprises three parts. The first part obtains
a Kansei adjectives matrix (KAM) by using DSM techniques [3, 4]. The second part
generates Kansei adjective sets (KAS) using GD procedures. Based on KAM and KAS,
the third part analyzes and identifies Kansei tags (KT).
In order to obtain a KAM, a DSM-based Kansei clustering method is proposed. It
involves eight steps: (1) collecting Kansei adjectives, (2) building subsets of Kansei
adjectives, (3) collecting product samples, (4) evaluating survey questionnaires, (5)
handling correlations among the Kansei subsets, (6) building a DSM of the subsets, (7)
processing the combined DSM, and (8) analyzing and manipulating the results. A step
by step description of the method follows.
Step 1: Collecting Kansei adjectives
Kansei adjectives are collected from various resources. As many Kansei adjectives
as possible should be collected in this step. Usually, the number can reach 300 to 500
words for a product.
Step 2: Building subsets of Kansei adjectives
The collected Kansei adjectives are divided into a number of subsets. Each subset
contains a portion of the Kansei adjectives. The subset-forming process involves three
sub-steps:
1. List all Kansei adjectives on a paper in random sequence.
2. Define the number of adjectives in one subset. Each subset may contain 10-25
Kansei adjectives depending on the design of the survey questionnaires and
the total number of Kansei adjectives. (Note: The higher the number of
subsets, the more participants are required.)
3. Fill each subset with the Kansei adjectives from the list. When a subset has
reached the pre-set number of Kansei adjectives, proceed to fill up the next
subset. In doing this, any two adjacent subsets share half (50%) of the same
Kansei adjectives. The identical Kansei adjectives are used as links to connect
the different subsets. (Note: If more than 50% of the adjectives overlap, e.g.,
75%, the survey may not be effective to conduct due to too many subsets; with
less than 50% overlapping, e.g., 25%, some adjectives might not be clustered.
Therefore, in this study, 50% overlapping was used. A sketch of this
subset-forming procedure is given below.)
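A minimal sketch of this subset-forming procedure, under the assumptions stated in the comments; the class and method names are illustrative, and leftover adjectives that do not fill a final subset are not handled.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SubsetBuilder {
    // Build subsets of `size` adjectives where each subset reuses the second
    // half of the previous one, giving the 50% overlap described above.
    public static List<List<String>> build(List<String> adjectives, int size) {
        List<String> shuffled = new ArrayList<>(adjectives);
        Collections.shuffle(shuffled);          // sub-step 1: random sequence
        int step = size / 2;                    // 50% overlap between neighbours
        List<List<String>> subsets = new ArrayList<>();
        for (int start = 0; start + size <= shuffled.size(); start += step) {
            subsets.add(new ArrayList<>(shuffled.subList(start, start + size)));
        }
        return subsets;
    }
}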
Step 3: Collecting product samples
In this step, products are collected for evaluation. The purpose is to examine
representative products or product samples in the market. Virtual models, e.g., future or
ideal products depicted by photorealistic renderings, can also be used.
Step 4: Evaluating survey questionnaires
The representative products are evaluated with respect to every Kansei subset by
the corresponding customer groups. The evaluation involves four sub-steps as follows.
1. Prepare the evaluation forms. Place every representative product together with
each Kansei subset. Therefore, if there are m Kansei subsets and n
representative products to be evaluated, m × n different evaluation forms would
be required. A 7-point scale is employed in this step. The scale uses a uniform
bipolar system for evaluating antonym pairs of Kansei adjectives. In this
system, one extreme of the scale indicates that there is no such feeling
(Kansei) at all, while the other extreme reverses it [11].
2. Manage the participants. If there are m Kansei subsets, the participants are
randomly divided into m groups. Each group should comprise at least fifteen
people. Because 50% overlapping is employed, each shared Kansei adjective is
actually evaluated by thirty survey participants.
3. Evaluate the forms. Each group evaluates one Kansei subset for every
representative product. The meanings of the Kansei adjectives should be
clarified explicitly by the researcher(s) or designer(s), i.e., an explanation of
Kansei adjectives is given to the survey participants before the evaluation.
4. Collect the evaluation results and calculate the mean values.
Step 5: Handling correlations among Kansei subsets
In this step, statistical methods are employed to compute the correlation
coefficients of the Kansei adjectives within each subset. The Pearson product-moment
correlation is utilized to measure the distance (similarities) between the Kansei
adjectives in each subset. Compared with other types of correlation measurement, the
Pearson method provides the most convenient and straightforward way to exhibit the
similarity relationship between any two Kansei adjectives in a standardized interval (-1
to 1). The value of 1 (or close to 1) indicates that the two Kansei adjectives are
positively correlated. The value of -1 (or close to -1) denotes a negative correlation
between two Kansei adjectives.
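A minimal sketch of the Pearson product-moment computation used in this step, assuming x and y hold the mean survey scores of two Kansei adjectives over the same set of representative products.

public final class Pearson {
    public static double correlation(double[] x, double[] y) {
        int n = x.length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= n;
        meanY /= n;
        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < n; i++) {
            double dx = x[i] - meanX, dy = y[i] - meanY;
            cov += dx * dy;   // co-variation of the two adjectives
            varX += dx * dx;
            varY += dy * dy;
        }
        return cov / Math.sqrt(varX * varY); // standardized to [-1, 1]
    }
}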
Step 6: Building DSM of subsets
The correlation coefficients between the Kansei adjectives in each subset obtained
in Step 5 are used to construct DSM subsets. The Kansei adjectives on which the
correlation coefficients are based are the nodes in each DSM. The overall or combined
DSM is obtained.
Step 7: Processing the combined DSM
The DSM subsets and the combined DSM are partitioned in this step. Obviously,
the positively correlated Kansei adjectives should be kept in one block (or cluster),
while those negatively correlated should be separated into different partitions. Here, a
DSM for Kansei Partitioning (DSMKP) pseudo-code is developed to partition the
DSM subsets using computer programming (Figure 1), where n is the total number of
Kansei adjectives and v_ij is the value of element (row i, column j) in the DSM subset.


Figure 1. Pseudo-code for DSMKP [4].

Next, a Combined DSM for Kansei Partitioning (CDSMKP) pseudo-code is
developed (Figure 2) to partition the overall DSM using computer programming, where
n is the total number of Kansei adjectives, v_ij-p is the value of element (row i,
column j) in the p-th sub-DSM, and v_ij is the value of element (row i, column j) in the combined
DSM. It differs from the DSMKP in two ways: first, it merges the sub-DSMs into a
combined DSM and the overlapping values are averaged; and second, the combination
effect of the Kansei groups will not be considered for those values that are not available
in the combined DSM.
Basically, the proposed partitioning algorithms arrange the Kansei adjectives from
column 1 to n in such a way that the corresponding values follow a descending
sequence in the top-down direction. In addition, the combination effect of the Kansei
groups should be considered. In other words, the values of former Kansei adjectives are
added together and the summations are compared. Therefore, heavy weights, i.e., large
correlation values, could be located along the diagonal line, while light weights are on
the sides.
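The authors' exact DSMKP and CDSMKP pseudo-codes are those of Figures 1 and 2 (only the captions survive here). Purely as a rough illustration of the arrangement just described, and not the authors' algorithm, the sketch below orders a symmetric correlation DSM greedily: at each step it appends the adjective whose summed correlation with the already-placed adjectives is largest, so that heavy weights gather along the diagonal.

import java.util.ArrayList;
import java.util.List;

public class DsmOrdering {
    public static List<Integer> order(double[][] dsm) {
        int n = dsm.length;
        List<Integer> placed = new ArrayList<>();
        boolean[] used = new boolean[n];
        placed.add(0);               // seed with the first adjective
        used[0] = true;
        while (placed.size() < n) {
            int best = -1;
            double bestSum = Double.NEGATIVE_INFINITY;
            for (int j = 0; j < n; j++) {
                if (used[j]) continue;
                double sum = 0;      // "combination effect": sum over placed adjectives
                for (int i : placed) sum += dsm[i][j];
                if (sum > bestSum) { bestSum = sum; best = j; }
            }
            placed.add(best);
            used[best] = true;
        }
        return placed;               // new row/column sequence for the DSM
    }
}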
Step 8: Analyzing and manipulating the results
After partitioning the combined DSM, the so-called KAM is obtained and possible
Kansei clusters are formed. The purpose of this step is to refine the Kansei clusters
obtained in the previous step. The refinement helps to eliminate insignificant Kansei
adjectives and reach a balance among the clusters.


Figure 2. Pseudo-code for CDSMKP.

In order to obtain the KAS, which defines the Kansei boundaries, a GD-based
Kansei clustering method is proposed. It comprises thirteen (13) consecutive steps. A
step by step description of the method is presented in Chen and Occeña [8, 9].
The GD algorithm described above is a computer-oriented technique. An
application program implementing the algorithm is available for the efficient and
effective decomposition of graphs [10]. The values of the connection ratio and link
weights can be adjusted in the advanced setting. A relationship matrix that describes
the relationships between the vertices in a graph is shown in the middle of the main
window. After decomposition, the subgraphs are listed in a tree structure on the right
side of the main window.
Using the KAM and the KAS, Kansei tags (KT) can be identified through four
consecutive steps. A step by step description of the identification procedure is
presented as follows.
Step 1: Converting the KAM into a color map
A black and white gradient color scheme is set up to convert the KAM. An
increment value is used to define intervals of the color scheme so that each interval can
correspond to a mixture of black and white colors with certain percentages. For
example, the interval from 1 to 0.9 corresponds to 100% black + 0% white, while the
interval from 0.9 to 0.8 corresponds to 90% black + 10% white.
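A minimal sketch of this conversion, assuming 0.1-wide intervals as in the example: a correlation in (0.9, 1] maps to 100% black, (0.8, 0.9] to 90% black, and so on, clamped to [0, 100].

public final class ColorMap {
    // Map a correlation value to the percentage of black in the grayscale mix.
    public static int blackPercentage(double correlation) {
        int pct = (int) Math.ceil(correlation * 10) * 10;
        return Math.max(0, Math.min(100, pct));
    }
}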
Step 2: Addressing the KAS on the color map
All of the vertices in a subgraph, i.e., a Kansei adjective set, are marked on the
color map. If the vertices are located nearby, a square frame is used to enclose them.
The enclosed adjectives are grouped under one cluster. Vertices that are away from the
square are considered branches. A line is used to connect the branches with the
corresponding square.
Step 3: Merging overlaps and removing branches
If a square overlaps another, the two squares are merged to form a bigger cluster. If
branches are located nearby, a square frame is used to enclose them and form a new
cluster. The remaining branches are removed.
Step 4: Checking the individual clusters
The remaining vertices are checked, i.e., those that have not been involved in any
cluster. New clusters can be established if strong correlations exist among the
remaining vertices. Otherwise, individual vertices form individual clusters. Finally, all
resulting clusters, i.e., Kansei tags, are identified.
3. Case Study
To illustrate how the proposed method works, a number of participants were
involved in the case study. Thirty-two adjectives were randomly chosen for this study,
as shown in Table 1. The Kansei adjectives were randomly divided into four subsets
with 50% overlapping in each subset.

Table 1. Thirty-two Kansei adjectives collected from the customers.


The representative battery drills were evaluated by four groups of participants,
each using one of the four Kansei subsets. In other words, only a portion of the Kansei
adjectives were evaluated by any given participant. Each group comprised fifteen
university students. A brief introduction as well as the purpose of the survey was given
to the subjects. After the survey procedures and the meaning of the Kansei adjectives
were explained, the subjects marked their survey forms according to their own feelings.
The average scores given by the fifteen participants in each group were calculated.
Based on the survey results, the Pearson product-moment correlations were calculated
to measure the distance (similarities) between the Kansei adjectives in each subset. The
overall DSM was partitioned using the proposed CDSMKP algorithm.
These results were compared with the results obtained using another approach, in
which the survey participants (the 30 who had conducted the survey mentioned above)
were asked to evaluate all thirty-two Kansei adjectives for each representative product
(13 products in total). In this case, the participants needed a much longer time and greater
patience to complete the survey. The data were collected and processed after the survey.
It can be found that although the correlation coefficients between the same pairs of
Kansei adjectives are different, which is reasonable because not all Kansei-Product
pairs are evaluated in the proposed approach, the resulting sequences of Kansei
adjectives in both cases are similar. In other words, the resulting KAMs from both
cases are about the same (it could be defined as similar if more than 80% of the Kansei
adjectives have the same sequence in both cases). This indicates that the approach
whereby each participant deals with only a portion of the Kansei adjectives and where
the subsets are integrated using the proposed DSM-based algorithm could yield the
same results as the approach in which all of the Kansei adjectives are evaluated by
every survey participant.
As too many Kansei adjectives may cause unnecessary complexity when
illustrating the procedure of generating the KAS, as well as the analysis of the KT, a
Kansei set which involves only sixteen adjectives was chosen. The Kansei adjectives
were denoted from A to P to facilitate the demonstration.
Threshold values were defined to convert the Kansei correlation coefficients into
various types of relationships, i.e., correlations from -1 to 0, from 0 to 0.4, from 0.4 to
0.7, from 0.7 to 0.9, and from 0.9 to 1 are considered as no relationship (N.R.),
relationship (R.), important relationship (I.R.), very important relationship (V.I.R.), and
critically important relationship (C.I.R.), respectively. This configuration was defined
as the "normal" threshold values in this case.
The GD algorithm was applied to the converted relationship matrix using an
application program [10]. The connection ratio (CR) was set between 0.6 and 0.9 with
an increment of 0.05. The four value sets assigned to the different relationships were
{0,1,2,3,4}, {0,1,3,5,7}, {0,1,4,7,10}, and {0,1,5,9,13}. Based on the decomposition
results, i.e. the subgraphs, several observations can be made. First, the number of
subgraphs increases when the connection ratio is set to a larger value, no matter what
value set is used. Second, the number of vertices contained in each subgraph decreases
when the number of subgraphs in an experiment increases. Third, the subgraphs
obtained using different connection ratios are consistent in terms of the vertices
involved. For example, the subgraph contains the Kansei adjectives M, F, E, H, I, and
G when CR is 0.6. When CR is set to 0.7, the subgraph contains the Kansei adjectives
M, F, E, H, I, G, and N. The only difference between the two subgraphs is N. Fourth,
the change in the value sets seems independent of the generation of the subgraphs. For
example, when CR is set to 0.9, the number of subgraphs, as well as the vertices in
each subgraph, is identical, no matter what value set is applied.
The same Kansei correlations matrix was converted into relationships using
tightened threshold values for the purpose of comparison, i.e., correlations from -1 to
0.2, from 0.2 to 0.5, from 0.5 to 0.8, from 0.8 to 0.9, and from 0.9 to 1 were considered
as N.R., R., I.R., V.I.R., and C.I.R., respectively. This configuration was denoted as the
"tightened" threshold values in this case. "Tightened" means that a larger correlation
value is required to establish the relationships. For example, only correlations larger
than 0.8 can be considered as V.I.R or C.I.R. using the tightened configuration of
threshold values, while in the normal case, 0.7 was used.
The result shows that the number of subgraphs increases when the connection ratio
is set to a larger value, no matter what value set is used. In addition, the number of
vertices contained in each subgraph decreases when the number of subgraphs in an
experiment increases. Furthermore, the subgraphs obtained using different connection
ratios are consistent in terms of the vertices contained. Finally, the value sets are
independent of the subgraphs.
Comparing the "normal" and "tightened" results reveals that the number of
subgraphs produced when using tightened threshold values is greater than that of using
normal threshold values, given the same connection ratio and value sets. For example,
when the connection ratio is 0.85, and the value set is chosen as {0,1,2,3,4}, six and
four subgraphs are generated using the tightened and normal threshold values,
respectively. In addition, the subgraphs using the tightened and normal threshold values
are consistent in terms of the vertices involved. For instance, when the connection ratio
is 0.85 and the value set is {0,1,2,3,4}, the subgraph contains P, A, D, O, C, K, N, L,
and J using normal threshold values. This corresponds to two subgraphs using the
tightened threshold values, i.e., a subgraph containing D, C, N, L, A, and O, and
another subgraph containing P, D, C, O, and K. Furthermore, the selection of the value
sets is irrelevant to the resulting subgraphs when the connection ratio is set to a high
value. For example, in the situation of 0.9 CR, the resulting subgraphs are the same
using any value set. In summary, the changes in the connection ratio and threshold
values only affect the number of subgraphs. Larger connection ratios and tightened
threshold values will result in more subgraphs that contain fewer vertices. In general,
the vertices in the subgraphs are consistent in all cases. The selection of the value sets
has little effect on the resulting subgraphs. Therefore, to obtain a suitable number of
Kansei clusters, this case used 0.9 CR, tightened threshold values, and the {0,1,2,3,4}
value set. The resulting subgraphs of Kansei, i.e., KAS, are shown in Table 2.

Table 2. Kansei adjective sets.


Based on the obtained results, the Kansei clusters, i.e., the so-called Kansei tags
(KT), can be identified. First, the KAM is converted into a color map, which can be
produced using chromatic colors or achromatic colors. In this case, a black and white
gradient achromatic color scheme is used. The increment is set to 0.2. In other words,
there are ten intervals and each interval corresponds to a mixture of black and white.
Figure 3 shows the color map for the KAM.


Figure 3. The color map for the KAM.

Second, the subgraphs from the KAS on the color map of the KAM are indicated.
Third, the subgraphs that overlap one another are merged and the branches that are
located away from main clusters are removed. Last, the remaining vertices, i.e., those
that have not been contained in any cluster, are checked in the synthesized color map.
New clusters can be established if strong correlations, e.g., larger than 0.8, exist among
the remaining vertices. Otherwise, individual vertices form individual clusters. In this
case, there is only one remaining vertex and it forms a cluster by itself. The final four
clusters are obtained, i.e., Kansei Tag 1 (control, functional, speed, ergonomics,
classic, professional, and comfort), Kansei Tag 2 (futuristic and masculine), Kansei Tag
3 (torque), and Kansei Tag 4 (cartoony, smooth, plastic, personal, portable, and clean).
4. Conclusion
A Kansei clustering method composed of three parts is proposed in this work. The
first part incorporates DSM in realizing the map of Kansei adjectives, which positions
all Kansei adjectives based on the similarity of their meanings. The method breaks
Kansei adjectives down into manageable subsets so that each participant deals with
only a portion of the collected Kansei words. The results are similar to those obtained
using conventional approaches. The second part generates Kansei adjective sets using a
GD-based approach to address Kansei boundaries in the map of Kansei adjectives. The
third part addresses the Kansei tags based on the KAM and KAS identified in the
previous two parts. Details of the proposed hybrid approach, which integrates DSM
and a graph decomposition method, are presented. The proposed approach was
illustrated using an example on cordless battery drills. Based on the results of the case
study, it is expected that this work may contribute comprehensively to the management
of Kansei adjectives in emotional product design.
References
[1] D.V. Steward, Systems Analysis and Management: Structure, Strategy, and Design. New York: Petrocelli
Books, Princeton, 1981.
[2] J.N. Warfield, Binary matrices in system modeling. IEEE Transactions on Systems, Man and Cybernetics
SMC-3 (1973), 441-449.
[3] Y. Huang, C.-H. Chen, & L.P. Khoo, Kansei clustering for emotional design using combined design
structure matrix. International Journal of Industrial Ergonomics 42(5) (2012), 416-427.
[4] Y. Huang, C.-H. Chen, & L.P. Khoo, A Kansei clustering method for emotional design using design
structure matrix. In Proceedings of the 17th ISPE International Conference on Concurrent Engineering.
Cracow, Poland (2010), 113-120.
[5] D.A. Gebala, & S.D. Eppinger, Methods for analyzing design procedures. In Proceedings of the 3rd
International Conference on Design Theory and Methodology 31 (1991), 227-233.
[6] A. Kusiak, & J. Wang, Decomposition of the design process. Journal of Mechanical Design.
Transactions of the ASME 115 (1993), 687-695.
[7] J.A. Bondy, & U.S.R. Murty, Graph theory. London: Springer, 2008.
[8] C.-H. Chen, & L.G. Occeña, Graph decomposition of product design expert systems for a blackboard
environment. In Proceedings of the Fourth International Conference on Automation Technology,
AUTOMATION 96 vol. 1 (1996), Industrial Technology Research Institute, Hsinchu, Taiwan, 449-456.
[9] C.-H. Chen, & L.G. Occeña, Knowledge decomposition for a product design blackboard expert system.
Artificial Intelligence in Engineering 14 (2000), 71-82.
[10] C.-H. Chen, T. Wu, & L.G. Occeña, Knowledge organisation of product design blackboard systems via
graph decomposition. Knowledge-Based Systems 15 (2002), 423-435.
[11] S.T.W. Schütte, Engineering Emotional Values in Product Design - Kansei Engineering in Development.
Ph.D. dissertation, Department of Mechanical Engineering, Linköping: Linköping Universitet, 2005.
[12] J.A. Bondy, & U.S.R.Murty, Graph Theory. London: Springer. 2008.
[13] C.L. Owen, An algorithm for the decomposition of non-directed graphs. In: Moore, G.T. (Ed.)
Emerging Methods in Environmental Design and Planning. Cambridge: MIT Press, (1970), 133-46.
Research on a Framework of Task
Scheduling and Load Balancing in
Heterogeneous Server Environment
Tifan XIONG a,1, Chuan WANG a, Li WAN a and Qinghua LIU a
a CAD Center, Huazhong University of Science and Technology, Wuhan, Hubei, China, 430074
1 Xiong Tifan, born in 1973. Doctor. His main research interests include mechanical design and theory, collaborative manufacturing, and PLM. E-mail: xiongtf@hustcad.com.
Abstract. In this paper, we study task scheduling and load balancing in a hetero-
geneous server environment. Based on a study of the dynamic requirements for the
servers' load balancing service, we form an adaptive task scheduling and load
balancing service framework based on customizable load feedback and a dynamic
scheduling strategy. Finally, we build and test a solution within WebMWorks, a
multi-domain physical modeling and simulation service platform in the web
environment.
Keywords. Task scheduling, load balancing
Introduction
Task scheduling means transferring the tasks submitted by users to an appropriate
server or load node for execution, and its effectiveness has an impact on overall system
performance. Load balancing is a technique of spreading a single computer's work
between two or more computers in order to obtain optimal resource utilization, and it is
the key to solving the performance bottlenecks caused by a large amount of concurrency.
For a heterogeneous server environment with a large number of users and tasks,
reasonable task scheduling and load balancing can increase resource utilization,
increase system throughput, and reduce response time, thereby improving overall
system performance.
There is a diversity of tasks and servers in the application systems in a heterogeneous
server environment, mainly as follows:
Task types are different. Task complexity, resource utilization, processing
progress, and processing time vary.
Tasks have an impact on server performance, which needs to be taken into
consideration.
The performance of the servers themselves and their ability to provide hardware
and software resources are different.
The runtime environments of tasks on the servers vary, and the key performance
factors that affect task execution are also quite different.

Due to the number of tasks and the efficiency of task processing, the numbers of
the different types of servers are quite different.
Meanwhile, in a heterogeneous server environment there are groups of servers of
different architectures for processing certain types of tasks, and each group of servers
needs different scheduling policies. Even within the same group of servers, the
scheduling policy should be dynamically adjusted as the external requests change
under different operating conditions.
In the different aspects of load balancing and task scheduling, researchers have done
a lot of research work and produced a large number of excellent load balancing
algorithms, such as Round-Robin/Weighted Round-Robin [1], a scheduling policy that
sequentially assigns client requests to each member server; Min-Min [2], a scheduling
policy that assigns client requests to the earliest available and fastest server; a
scheduling policy based on dynamic resource allocation [3]; a scheduling policy that
focuses on I/O [4]; a scheduling policy for homogeneous servers [5]; a scheduling
policy using two hierarchical algorithms [6]; and so on.
However, these studies generally use a single scheduling policy. For the different
types and structures of servers in the same system, different strategies need to be
customized, and the task scheduling and performance monitoring strategies should be
adjusted depending on the operating status of the servers. In this paper, a task
scheduling and load balancing framework is formed that is based on customizable task
scheduling and load feedback for each server group within one system, and that
dynamically adjusts the task scheduling strategy according to the operating status.
1. Architecture
As shown in Fig. 1, the task scheduling and load balancing structure is divided into
three main types of application nodes: the Server Load Service on the central server,
the Executor Load Service on the work server, and the Executor on the work server.
The Server Load Service is responsible for scheduling a task to a specific work server;
the Executor Load Service is responsible for scheduling a task to a specific Executor;
the Executor is the place where the task is executed. The scheduling processes are as
follows:
1) The Preprocessor component in the Server Load Service preprocesses the request,
changing the request into a task, and then transfers the task to the Scheduling
component.
2) The Scheduling component obtains scheduling information from the Container
through the Analyzer component.
3) The Scheduling component locates the work server using an appropriate
scheduling algorithm, and then dispatches the task to the Executor Load Service
through the Proxy.
4) Following steps 1), 2), and 3), the Executor Load Service dispatches the task to an
Executor.
5) The task is executed on the chosen Executor.
There is a Monitor component in the Executor and the Executor Load Service. This
component has two main features:
Detecting the performance conditions (such as CPU, memory, I/O, task
complexity, processing time, the number of tasks, etc.) of the work server or
executor according to a certain performance measurement method.
Feeding the performance conditions back to the Load Service through a
communication proxy.
2. Key Technologies
2.1. Pretreatment of Task
Some tasks such as data query, document management, do not require a specific pro-
cessor for processing, some tasks, such as simulation tasks in the online modeling plat-
form, require a specific compiler that can compile the executable program. Due to di-
versity of the tasks, there is a big difference in task runtime environment, Task
processes are largely fixed. Therefore, tasks can be pre-processed in order to gain the
relationship between the type of task and the main performance factors of the load
nodes. The relationship is stored in the configuration file named as task-config.xml.
When the task request received, the main performance factors can be generated accord-
ing to the task type from the configuration file. Thus, it can reduce the number of load
nodes that be scheduled, and improve the efficiency of the actual operation.
In this paper, the k-means algorithm [7] is used for task type pretreatment.
Firstly, the request type is divided into four types: high CPU usage, long process
time, expressed as CPU-long; high CPU usage, short process time, expressed as CPU-
short; high I/O traffic, long process time, expressed as I/O-long; high I/O traffic, short
process time, expressed as I/O-short.

Figure 1. Task scheduling and load balancing structure.
Secondly, determine the initial values of the performance factors. The task
performance factors are represented as an n × m matrix $X = [x_1, x_2, \ldots, x_n]^T$,
where the vector $x_i = (x_{i1}, x_{i2}, \ldots, x_{im})$, $1 \le i \le n$, denotes the
m-dimensional resource utilization of the i-th task sample. In this case six main factors
have been considered, as follows: CPU usage, memory usage, I/O traffic, network
connections, processing time, and the complexity of the task.
Thirdly, calculate K samples $\mu_k$, $1 \le k \le K$, as initial cluster centers by the
k-means++ algorithm [8].
Finally, perform the pretreatment by the k-means algorithm. The objective function
of the k-means algorithm is as follows:

$$V = \sum_{k=1}^{K} \sum_{x_i \in S_k} \lVert x_i - \mu_k \rVert^2 \qquad (1)$$

where $S_k$ is the k-th cluster and $\mu_k$ is its centroid.
The algorithm is as follows:
Input: n m-dimensional task samples $x_i$; the number of categories, K; the K initial
samples $\mu_k$, $1 \le k \le K$.
Output: an n × 1 vector storing the cluster label of each sample.
The processes are as follows:
(1) calculate the K samples to serve as initial cluster centers by the k-means++ algorithm;
(2) calculate the distance of each object from each centroid and reassign the object to
the cluster with the minimum distance;
(3) recalculate the mean of each cluster as the new centroid;
(4) repeat (2) to (3) until the objective function converges or a given criterion is met.
(A minimal code sketch of this loop follows.)
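As an illustration, here is a minimal, self-contained k-means sketch of the loop in steps (2) to (4); the rows of x stand for the m-dimensional task samples, and plain random seeding (which may pick duplicate rows) is used in place of the k-means++ initialization of step (1).

import java.util.Random;

public class KMeans {
    public static int[] cluster(double[][] x, int k, int iterations) {
        int n = x.length, m = x[0].length;
        double[][] centers = new double[k][];
        Random rnd = new Random(42);
        for (int c = 0; c < k; c++) centers[c] = x[rnd.nextInt(n)].clone();
        int[] label = new int[n];
        for (int it = 0; it < iterations; it++) {
            // (2) assign each sample to its nearest centroid
            for (int i = 0; i < n; i++) {
                double best = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double d = 0;
                    for (int j = 0; j < m; j++) {
                        double diff = x[i][j] - centers[c][j];
                        d += diff * diff;
                    }
                    if (d < best) { best = d; label[i] = c; }
                }
            }
            // (3) recompute each centroid as the mean of its cluster
            double[][] sum = new double[k][m];
            int[] count = new int[k];
            for (int i = 0; i < n; i++) {
                count[label[i]]++;
                for (int j = 0; j < m; j++) sum[label[i]][j] += x[i][j];
            }
            for (int c = 0; c < k; c++)
                if (count[c] > 0)
                    for (int j = 0; j < m; j++) centers[c][j] = sum[c][j] / count[c];
        }
        return label; // cluster label per task sample
    }
}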
An example of the configuration file named task-config.xml is shown below:

<task>
<task-name>db</task-name>
<task-type>common-db</task-type>
</task>
<task-mapping>
<task-name>db</task-name>
<factor-pattern>io-short</factor-pattern>
</task-mapping>
<task>
<task-name>graph</task-name>
<task-type>compile</task-type>
</task>
<task-mapping>
<task-name>graph</task-name>
<factor-pattern>cpu-long</factor-pattern>
</task-mapping>
2.2. Customizable Load Feedback
In reality, task types are diverse. There are differences in resource utilization,
processing time, complexity, priority, and operating environment. With a static, single
performance monitoring strategy, monitoring the load nodes of different task types
with the same methods may not achieve the desired effect of improving task
processing efficiency. Also, different load nodes, which may be designed for a specific
task, have different main performance factors, and the various task types have different
impacts on server performance; thus, these load nodes should have appropriate
monitoring strategies. This means the system needs to be able to customize the load
feedback strategy and integrate it properly with the task scheduling and load balancing
system.
As shown in Fig. 2, the Configuration class loads the configuration file, gets the
strategy information, and then sends the information to the Monitor component. The
Monitor component calls the Factory interface to generate the different strategies,
which implement the IStrategy interface, according to the strategy information. Thus,
the Monitor component can carry out performance monitoring through the strategy.
During the design, the Strategy Pattern was chosen for generating the strategies.
All load feedback strategies implement the same interface but provide different load
feedback methods. With this design pattern, different monitoring strategies can be
customized based on usage scenarios or needs, monitoring different types of resources
and load nodes. Thanks to its good scalability, a load feedback strategy can be
dynamically replaced by another load feedback strategy. An example of the
configuration file named strategy-config.xml is shown below, followed by a sketch of
the pattern:

<strategy name="" class="">
<init-param key="">value</init-param>
</strategy>
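A minimal sketch of this design, assuming the names used below: only IStrategy and the Strategy Pattern itself come from the paper, while CpuMemoryStrategy and StrategyFactory are illustrative.

interface IStrategy {
    double measureLoad();                       // returns a load indicator
}

class CpuMemoryStrategy implements IStrategy {  // hypothetical concrete strategy
    public double measureLoad() {
        Runtime rt = Runtime.getRuntime();
        double used = rt.totalMemory() - rt.freeMemory();
        return used / rt.maxMemory();           // crude memory-pressure proxy
    }
}

class StrategyFactory {
    // Instantiate the strategy class named in strategy-config.xml via reflection,
    // so strategies can be swapped without recompiling the Monitor component.
    static IStrategy create(String className) throws Exception {
        return (IStrategy) Class.forName(className)
                                .getDeclaredConstructor()
                                .newInstance();
    }
}

With this arrangement, replacing a monitoring strategy only requires editing strategy-config.xml.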
2.3. Dynamic Scheduling Strategy
Due to the differences in operating system, quality of resources, ability to support the
executors, and number of the different kinds of load nodes, the load balancing system
should set different scheduling algorithms for load nodes that handle different types of
tasks. Even for the same type of load nodes, the scheduling algorithm should be
dynamically replaced under different scenarios. The task scheduling and load balancing
system should be able to plug in a suitable scheduling algorithm under different
scenarios.

Figure 2. Monitor component structure.
The scheduling algorithm measured for each type of task, together with its
conditions of use, is stored in the configuration file. Assume that the main factors for
selecting the algorithm consist of the following parts: the number of tasks (expressed
as num), the number of load nodes (expressed as replica), the number of current
connections (expressed as connect), the performance averages of the load nodes
(expressed as avg), etc. The most efficient scheduling algorithm under a specific
scenario is determined by experiment, and then the scheduling algorithm and its
conditions of use are added to the configuration file named task-config.xml, as shown
below:

<algorithm-mapping>
<task-name>graph</task-name>
<algorithm name="" class="" default="">
<init-param key="num">value</init-param>
<init-param key="replica">value</init-param>
<init-param key="connect">value</init-param>
<init-param key="avg">value</init-param>
</algorithm>
<algorithm name="" class="" default="">
<init-param key="num">value</init-param>
<init-param key="replica">value</init-param>
<init-param key="connect">value</init-param>
<init-param key="avg">value</init-param>
</algorithm>
</algorithm-mapping>

As shown in Fig. 3, the scheduling processes are as follows:
1) The Preprocessor component transforms the request into a specific task and then
transfers the task to the Scheduling component;
2) At runtime, the Configuration class loads the configuration file named
task-config.xml, collects the information on the main factors and the scheduling
algorithms, and then sends this information to the Scheduling component;
3) The Supervision class generates the current system status information and then
searches the information collection from step 2) for an algorithm whose conditions
of use satisfy the current system status. If one exists, its algorithm information is
returned; if there is none, the algorithm whose default attribute equals true is
chosen;
4) The Invoker obtains the algorithm information according to step 3) and then
generates the algorithm using reflection;
5) The Scheduling component dispatches the task using the algorithm generated in
step 4).
During the design, the Strategy Pattern was chosen for generating the algorithm. All scheduling algorithms implement the same interface but differ in their load analysis and scheduling methods. Various researchers have designed a variety of task scheduling and load balancing algorithms, such as Round-Robin, First Available, Weight Based, Least Connection, Least Load, etc. Most of these algorithms can be integrated into the task scheduling and load balancing system using the Strategy Pattern (sketched below), and a scheduling algorithm can even be customized to meet special requirements in certain circumstances.
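A minimal sketch of this use of the Strategy Pattern follows; the class and attribute names are illustrative assumptions, not the framework's actual ones.

# Minimal Strategy Pattern sketch: one interface, interchangeable algorithms.
from abc import ABC, abstractmethod

class SchedulingStrategy(ABC):
    """Common interface implemented by every scheduling algorithm."""
    @abstractmethod
    def choose_node(self, task, load_nodes):
        ...

class RoundRobin(SchedulingStrategy):
    def __init__(self):
        self._next = 0
    def choose_node(self, task, load_nodes):
        node = load_nodes[self._next % len(load_nodes)]
        self._next += 1
        return node

class ResponseTime(SchedulingStrategy):
    """Prefer the node with the shortest recent average response time."""
    def choose_node(self, task, load_nodes):
        return min(load_nodes, key=lambda n: n.avg_response_time)

class Scheduler:
    """The strategy can be swapped at runtime without touching the scheduler."""
    def __init__(self, strategy: SchedulingStrategy):
        self.strategy = strategy
    def dispatch(self, task, load_nodes):
        return self.strategy.choose_node(task, load_nodes)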
3. Application Verification
A task scheduling and load balancing system with recovery mechanisms has been designed and built on this framework for WebMWorks, a multi-domain physical modeling and simulation service platform for the web environment. In this collaborative visualization modeling and simulation platform, user requests can be divided into modeling, compiling and solving task types. There are many compilers and solvers on the platform's servers, and different types of executors require servers of various types, numbers, performance levels and runtime environments. An executor can only perform one task at a time, while large numbers of concurrent tasks are sent to the servers for execution. How to reasonably match tasks with servers and executors is therefore the key to solving the performance problems.
The structure of the WebMWorks task scheduling and load balancing system is shown in Fig. 4. The system provides double load balancing at the server level and the application process level, customizable feedback load for servers and executors, dynamic scheduling strategies, and automatic scalability of the servers and the executors.

Figure 3. Scheduling component structure.
Because WebMWorks has several server groups that handle different tasks, including parser server groups, compiler server groups, solver server groups and so on, the load balancing in WebMWorks first sets a different default load balancing algorithm for each service group. The groups handle entirely different tasks, so processing complexity, resource occupation and processing time differ greatly between them: the compilers usually consume small amounts of resources and time, while the solvers consume much more of both. The compile servers and the parse servers therefore use Response Time [9] by default so that they can finish tasks as soon as possible, while the solver servers use Round Robin [1] by default. At the same time, the scheduling strategies should depend on the number of tasks currently in the task queue. When the number of current tasks is very large, the load balancing algorithm used by the compile servers and the parse servers is changed to Round Robin; when it is very small, the scheduling strategy of the solver servers changes to Response Time (a sketch of this switching rule follows). This load balancing method currently shows good performance and efficiency in testing and production environments.

Figure 4. WebMWorks scheduling and load balancing structure.
4. Conclusion and Future Work
Task scheduling and load balancing is an effective way to improve system performance. To solve task scheduling and load balancing problems in different environments and scenarios, a task scheduling and load balancing framework with customizable load feedback and dynamic scheduling strategies has been designed for heterogeneous load nodes and diverse task types.
Meanwhile, the load balancing mechanism based on periodic feedback still needs to be optimized for effectiveness, and the stability and efficiency of the network is also an important factor. These will be the focus of future research.
References
[1] Heisswolf J., König R. and Becker J., A Scalable NoC Router Design Providing QoS Support Using Weighted Round Robin Scheduling. In: Parallel and Distributed Processing with Applications (ISPA), 2012 IEEE 10th International Symposium on, 2012.
[2] He X., Sun X. and Gregor V.L., QoS Guided Min-Min Heuristic for Grid Task Scheduling, Journal of Computer Science and Technology, 2003, 18(4):442–451.
[3] Wenyu Zhou and Shoubao Yang, VMCTune: A Load Balancing Scheme for Virtual Machine Cluster Based on Dynamic Resource Allocation. In: Proceedings of the 2010 Ninth International Conference on Grid and Cloud Computing.
[4] Xiao Qin, Hong Jiang, Adam Manzanares, Xiaojun Ruan and Shu Yin, Dynamic Load Balancing for I/O-Intensive Applications on Clusters. ACM Transactions on Storage, Vol. 5, No. 3, Article 9, November 2009.
[5] Tang Q., Gupta S.K. and Varsamopoulos G., Energy-efficient thermal-aware task scheduling for homogeneous high-performance computing data centers: A cyber-physical approach. IEEE Trans Parallel Distrib Syst, 2008, 19(11):1458–1472.
[6] Barazandeh I. and Mortazavi S.S., Two Hierarchical Dynamic Load Balancing Algorithms in Distributed Systems. In: Second International Conference on Computer and Electrical Engineering, 2009:516–521.
[7] http://en.wikipedia.org/wiki/K-means_clustering.
[8] http://rosettacode.org/wiki/K-means++_clustering.
[9] http://en.wikipedia.org/wiki/Response_time.
Empirical Performance Evaluation in Collaborative Aircraft Design Tasks

Evelina DINEVA a,1, Arne BACHMANN a, Erwin MOERLAND a, Björn NAGEL a,
and Volker GOLLNICK a
a Air Transportation Systems, Deutsches Zentrum für Luft- und Raumfahrt e.V.
(German Aerospace Center)
Abstract. The overarching goal at the Integrated Design Laboratory (IDL) is to understand the mechanisms of decision making and exchanges among engineers. In this study a toolbox for the assessment of engineering performance in a realistic aircraft design task is presented. It allows for the assessment of participants in different experimental conditions. The degree of task difficulty and the amount and quality of visualization are systematically varied across conditions. Using a graphical user interface the participants' mouse trajectories can be tracked. These data, together with performance evaluation of the generated aircraft design, can help uncover details about the underlying decision making process. The design and the evaluation of the experimental toolbox are presented. This includes the number, specificity, and ranges of design variables that can be manipulated by a participant. The major difficulty thereby is to find a sweet spot where the task is just difficult enough, such that participants display progress in their performance. Too easy or too difficult a task would lead to flooring or ceiling effects, where most participants will always fail or, respectively, perform perfectly. The decisions about the aircraft design parameters are therefore based on a numerical analysis of the design space. With this analysis nonlinearities and interdependencies of design parameters are revealed. The experimental toolbox will be utilized to measure design performance of individuals and groups. The results are expected to reveal ways to support multidisciplinary collaboration.
Keywords. Aircraft design, collaborative engineering, empirical performance
evaluation, multidisciplinary collaboration
Introduction
Engineering graduates, when starting their professional careers, will face a challenge they most likely are not yet familiar with. They will need to work in multidisciplinary teams and thereby interact with experts with whom they do not share a common understanding and often not even a common technical language. This is not just an individual challenge, but also a real problem in many engineering projects. Progress is often slow and difficult due to the sheer complexity of many current day projects [9]. For example, aircraft design requires of engineers external exchanges with stakeholders like airlines
1 Corresponding Author: Evelina Dineva, Blohmstraße 18, 21079 Hamburg, Germany; E-mail: evelina.dineva@dlr.de.
and governments to specify design projects. Projects then typically require that experts from diverse Science, Technology, Engineering, and Mathematics (STEM) fields collaborate on a high technical level.
Researchers at the Institute for Air Transportation Systems work at exactly both these levels of external and internal cooperation. Externally, they cooperate closely with stakeholders like airports, airlines, aircraft and air traffic management to develop a holistic understanding of air transportation. Internally, this knowledge provides significant constraints for aircraft design projects within the German Aerospace Center (DLR). The holistic understanding provides a framework for integrative aircraft design. In particular it allows to (a) coordinate contributions from different DLR projects and (b) provide unified software tools to serve the multitude of sub-projects. The day-to-day practice of high-level integrative work exposed the need to gain explicit knowledge about the mechanisms of collaborative engineering. This realization is shared with Ilan Kroo and Juan Alonso at Stanford University, who proposed a third generation multidisciplinary aircraft design view, highlighting the need for understanding decision processes in multidisciplinary teams [cite]. We consider the improvement of collaboration among engineers from different subdisciplines and between engineers and stakeholders as a major challenge toward improving engineering education and toward providing conditions for better engineering practice and success. Toward that end, the goal of this paper is to address the question of how engineering performance can be assessed. Note that to limit the scope, the focus will be on methods that measure individual performance, which later should be extended to collaborative engineering. Understanding engineering performance is clearly an empirical question that needs to be addressed with methods from the psychological and sociological sciences.
1. Related Research
Engineering practice is a very complex endeavor that is difficult to study, even more so when it comes to the field of air transportation systems, which is complex in its own right [9]. Diverse innovative methods emerge from educational research; for instance, [11] developed a method to introduce multidisciplinary collaboration into project-based teaching such that diverse STEM disciplines can be learned successfully in concert. From an analysis of the skill requirements of newly graduated engineers, [8] deduce guidelines for the graduates' further professional education. By observing engineering students over multiple design sessions, [10] propose how a design task sequence can be structured to facilitate creativity.
Creativity is critical to engineering practice because it involves exploration and innovation. Standard approaches, however, have been developed to assess routine work, as [1] argue. Thus, the authors develop novel methods to observe and analyze engineers in their natural work environment. Creativity, on the other hand, has been well studied in laboratory settings (cf. [12] for a review), and there are ideas how standard methods from creativity research can be applied to study professional practice (entrepreneurship in that case) [5,13]. The design process, in contrast, has hardly been studied experimentally. A literature search yielded only two studies [2,7]. Both provide only first insights into how to correlate performance measures with other behavioral or biological data; both however are insufficient in that they test a single condition and thus do not identify how external factors may influence behavior.
The picture painted by the approaches mentioned so far is that scientists are just beginning to develop empirical methods to reveal the process of design. Creating research standards to assess collaborative engineering performance is, in fact, a critical scientific challenge. Researchers at the Institute of Air Transportation Systems are currently embarking on tackling this challenge and seek to introduce methods of cognitive science to investigate the process of engineering. This paper is about the prerequisites, technical challenges and solutions of developing novel experimental studies.
2. Prerequisites for Experimental Research
2.1. Integrated Design Laboratory
The Institute for Air Transportation Systems established the Integrated Design Laboratory (IDL) in 2012. The IDL is dedicated to investigating aspects of collaboration in the air transport system, thus emphasizing the experimental or laboratory character within its premises and research staff [4]. Its capacious main room of about 190 m² comprises technical equipment such as a large high-resolution display wall, several secondary display systems for stereo-projection and for touch-sensitive computer interaction, advanced wireless inertia-triggered remote input devices, and diverse presentation support tools.
The IDL is designed and equipped for maximum flexibility in order to support a wide range of work sessions that might require different seating arrangements, numbers of participants, types and durations of meetings, and moderation styles. Integrated, movable and easily network-enabled working desks and on-site computing facilities support ad-hoc collaborative design tasks to the degree of purely technical feasibility. However, this technical environment is only as powerful as the experience and methods which are put into practice. Of course, acceptance of and knowledge about the available opportunities need to be communicated and cultivated, which leads directly to the future experimental research that will be performed in the Integrated Design Lab.
2.2. Experimental Software Tools
To keep laboratory investigation close to actual work scenarios, experimental studies are based upon VAMPzero [3,6], which is the current software tool used to study preliminary aircraft design configurations at DLR. VAMPzero calculates a mass breakdown and global performance data by using a mix of statistical (handbook) and low-fidelity physical methods and models. Calculations are initialized with a data set comparable to the Airbus A320 airplane. To create new designs, control parameters in the data set are modified and VAMPzero is re-run.
Participants in the experiments will use VAMPzero through a Graphical User Interface (GUI), as illustrated in Figure 1. The GUI is programmed in MATLAB®, through which parameters are communicated to VAMPzero. The GUI serves several important roles for experimentation. Firstly, and importantly for the experimental design, only a specified number of control parameters within predefined ranges can be controlled. To avoid confusion, please note that the notions control parameters and control variables are used in two different but related contexts: (a) the participants define their designs by setting control parameters via the GUI; and (b) the specific number, initial
[Figure 1 shows two GUI plots: the top plot displays the control parameters (wing span [m/10], bypass ratio [-], design range [m/10^6]) and the bottom plot the output parameters (fuel mass [l^3/10^3], OEM [t], DOC [EUR/bh/10^2]), each against iteration numbers 1 to 10.]
Figure 1. GUI to interact with the aircraft design software tool VAMPzero. Control parameters can be changed (pseudo-)continuously or in discrete steps with the sliders and the drop down menu, respectively. When pressing the large red button at the bottom of the screen, these values are passed on as inputs to a VAMPzero iteration. When the calculation is complete, the control parameters (y-axis, top plot) and resulting output parameters (y-axis, bottom plot) are displayed against the iteration number (x-axis). The results are left empty (here, iteration numbers six and nine) when the control parameters specify an infeasible aircraft (i.e., VAMPzero does not converge).
setting, and ranges of the control parameters are control variables of the experimental
design, through variation of which different experimental conditions can be compared.
Secondly, the GUI provides participants with critical feedback about their designs. Like the control parameters, both the design goals and the amount of visualization are critical factors in the experimental design. All of these can be control variables that, when manipulated, create different experimental conditions.
Thirdly, the GUI is a simple interface that participants can intuitively interact with. Easy access to the design software shifts the focus to design skills in terms of conceptual understanding of the relationships between control parameters and objectives derived from the output aircraft design values. This opens up the opportunity for experimental variation in terms of participant groups. For instance, differences in design skills can be tested for novices versus experienced engineers, independently of their familiarity with the specific design software or data structures.
3. Experimental Control
The task at hand is to find the right set of experimental control variables, which can be any combination of the items mentioned in Section 2.2: (a) control and output parameters of the aircraft design, (b) feedback and information about participants' design solutions, or (c) composition of participant groups.
3.1. Control and Output Parameters
Selecting the control and output parameters and the right ranges for the control parameters is critical for the experimental work. Namely, these parameters define a task that participants need to solve by obtaining feasible and as efficient as possible aircraft designs. Whether the experiment can yield meaningful evidence about work performance depends in the first place on the task difficulty. This is because a too easy or a too difficult task will lead to, respectively, flooring or ceiling performances, whereby most participants fail or succeed perfectly. The task difficulty needs to be adjusted such that the other experimental variables that we are interested in (e.g., type of feedback) may show effects.
3.2. Feedback and Other Information
Narrowing down a good task in terms of its parameters is mandatory for the success of the experiment. Once this preliminary work is accomplished, we choose to focus on investigating the level and type of information the participants are provided with. This is because we see the question of how ergonomic factors like visualization and type of information (or the lack thereof) facilitate performance as most relevant. The results of these experiments are expected to provide insights that will support our efforts to improve the IDL as a work environment.
In the example of Figure 1, the entire history of designs (i.e. choices of control parameters and the resulting outputs) is provided throughout the experiment. If used strategically, this may allow participants to investigate relationships between control and output parameters. Other types of feedback that, depending on experimental conditions, may be available are: (a) mathematical formulas that describe physical relationships; (b) the geometry of the aircraft designs produced by the participants; (c) partial derivatives indicating the directions and magnitudes of changes in output parameters with respect to the control parameters; information about the derivatives is ignored by novices [2,7], but we expect experts to use it. At the current stage, the effects of these factors have not been tested.
3.3. Participant Groups
Although we are considering extending our research to test experience levels, our initial focus group is students. Whether testing different participant groups is a feasible experimental condition hinges on the selected design task as defined by the other experimental variables. For example, a task that is adjusted to work well with students might be too easy for experienced engineers.
4. Systematic Test of Parametric Conditions
The selection of adequate control and output parameters is mandatory for a good experimental design. In several discussion sessions, our team (of engineers and cognitive scientists) deliberately settled on a relatively small number of control and output parameters to set up the systematic testing. As control parameters, the aircraft's design range, wing span, and the engine's bypass ratio were tested. From the output parameters, fuel mass, Take-Off Mass (TOM), Operating Empty Mass (OEM) and Direct Operating Costs (DOC) were investigated in detail. For these output parameters lower values are better; therefore the optimization goal of the task is to adjust the control parameters such that the output parameters are minimized. Global dependencies of the control on the output parameters were tested and analyzed.
4.1. Testing Procedure
To evaluate whether a given subset of control parameters indeed circumscribes a reasonable design task for engineering students, VAMPzero was repeatedly iterated with different input parameters. Combinations of three control parameters, the values of which were varied in small steps, were tested. Input settings and resulting output values were recorded for each iteration. Tests and the subsequent analysis were performed with MATLAB®; a rough sketch of such a sweep is given below.
4.2. Analysis and Results
For the analysis, the dependencies of the output parameters on a large set of control parameters were visualized. Most indicative were three-dimensional (3D) carpet plots in which the local optima per design variable are highlighted. Examples are shown in Figures 2 and 3. Each subplot in Figure 2 shows how the DOC changes for different wing span and bypass ratio settings at a fixed design range; the design range is varied between subplots and increases from the top left to the bottom right subplot. Figure 3 represents the same data, however rearranged such as to vary the engine's bypass ratio from low to high values across the subplots.
Comparison of the subplots in Figure 2 shows a stronger curvature of the DOC surface for increasing design range. Therefore, the DOC shows a larger sensitivity to the remaining two parameters (wing span and bypass ratio) for increasing design range settings. The plots also show that overall costs are lower for designs that use engines with a high bypass ratio. For fixed design range and bypass ratio, a clear minimum along the wing span dimension is observed. Adjusting the latter variable will be trickier for the test persons, since the minimum is not located at the edge of the parameter range under consideration. Furthermore, the value of the absolute minimum to be attained will depend on the settings of the other values. The behaviour of the DOC minimum as a function of the wing span setting is caused by two conflicting optimization criteria from aerodynamics and structural mechanics. Introducing a larger wing span will make the aircraft aerodynamically more efficient: a larger amount of air is deflected with a relatively lower velocity over the wing to generate the required lift force. Since the kinetic energy required to deflect the oncoming air is linearly dependent on the mass and quadratically dependent on the velocity, it is energetically more advantageous to have a large wing span. From a structural point of view, having a larger wing span is however disadvantageous: larger bending moments will occur due to the increased moment arms of the lift forces, leading to higher material stress and a correspondingly heavier structure. Conflicts that tap into knowledge from different sub-disciplines involved in aircraft design are necessary for the experimental task.
Figure 2. Parametric interdependencies: DOC levels (z-axes) depending on bypass ratio (x-axes), wing span
(y-axes), and design range (varied over the subplots). White circles indicate the DOC minima for each tested
bypass ratio value, whereas black crosses indicate the DOC minima for each tested wing span value. The global
minimum is highlighted using a large black square. Default A320-type setting is for design range = 3 500 km
(middle subplot).
Different challenges concerning the design task are evident from Figure 3. For one, it shows that the choice of design range has a non-linear effect on the DOC. This might seem counter-intuitive, as one might assume that the shorter the design range, the cheaper both production and operation are. However, the direct operating costs are reported in Euros (EUR) per business hour, with a [EUR/bh] unit. Shorter ranges mean shorter periods in the air and more ground time, which is in fact a cost factor for the operating airline. As an engineering task, this is interesting since it introduces a component from a different discipline (economics) which in fact poses relevant constraints for engineering practice. The perspective of Figure 3 also exposes an interaction between wing span and design range, both of which have a non-linear influence on the DOC. The associated values in the subplots indicate that the aircraft design can be optimized globally by adjusting wing span and design range for the given engine at hand. All subplots show a similar shape; furthermore, it is seen that the engine's bypass ratio does not largely affect the location of the global minimum (with respect to DOC). This can be explained by the underlying calculation software, in which the relation between the bypass ratio of the engine and the corresponding effect on the engine mass is still to be incorporated. For the experiment, it is interesting to see whether participants will exploit this independence; note that they will not have the global relationships among parameters available as displayed here (the task would be trivial otherwise).
Recall that participants will need to optimize for more than just DOC. After sim-
ilar analyses of the effects of the control parameters on fuel mass, TOM, and OEM,
the output parameters fuel mass and OEM were also selected for the experimental task.
Traditionally, reducing mass is seen as critical within aircraft design. Optimizing for a
combination of both OEM or fuel mass and DOC is then particularly interesting, since
counter-intuitive relations might occur. For example, for a set of aircraft requirements
Figure 3. Parametric interdependencies: DOC levels (z-axes) depending on design range (x-axes), wing span (y-axes), and bypass ratio (varied over the subplots). White circles indicate the DOC minima for each tested design range value, and white crosses indicate the DOC minima for each tested wing span value. The global minimum is highlighted using a filled white square. Default A320 setting is for bypass ratio = 4.8 (top right subplot).
one could obtain a more DOC-efficient aircraft which is heavier than the configuration at the global mass minimum.
The main result from our analysis is a potentially good set of control and output parameters. In addition, meaningful ranges for the control parameters could be identified. A summary is given in Table 1. These ranges largely depend on the software used to calculate the aircraft properties according to the provided control parameters. Since VAMPzero is an empirical tool, the equations based on statistics limit the feasible values of the control parameters. No ranges are reported for the output parameters since these are actual outcomes of the calculations and not predefined like the controls.
description    control parameters                                output parameters
name           wing span    bypass ratio    design range         fuel mass    OEM    DOC
range          14–44        3.5–7           350–7 000            n/a          n/a    n/a
unit           [m]          [-]             [km]                 [l^3]        [t]    [EUR/bh]
Table 1. Details for the control and output parameters, as selected for the experimental design task.
4.3. Follow-up: Pilot Studies
The next step is to conduct preliminary tests of the experiment with participants. These so-called pilot studies are required to evaluate whether the current task comprises a proper design exercise. Fine-tuning aside, the pilot studies will also serve to improve the GUI by surveying participants about their experiences in using it. This is particularly relevant in order to find proper variations of feedback levels and potentially additional information that might be displayed, too. Form and amount of feedback are the most relevant experimental variables to be tested, as argued in Section 3.2.
Conclusion
Science is just beginning to unveil the process of collaborative engineering. One missing link is the lack of experimental laboratory testing, which we aim to close by developing rigorous cognitive science methods. Experimental research which taps into the thought process of engineering requires innovative techniques. Innovation, in turn, requires preliminary work such as was presented in the current paper. The central result was to identify a design task which we believe poses the right level of difficulty for undergraduate students. Whether this indeed is the case will be tested with pilot studies, which will also serve to find relevant feedback variables for the experiment. Understanding the role of feedback is central to our approach, and we anticipate the experiments to provide insights that will help to improve the IDL as an environment for collaborative engineering. This research is a first step toward finding new ways to enhance work experiences for individuals and outcomes for their institutions.
References
[1] Saeema Ahmed, Ken Wallace, and Lucienne Blessing. Understanding the differences between how novice and experienced designers approach design tasks. Research in Engineering Design, 14(1):1–11, February 2003.
[2] Jesse Austin-Breneman. Observations of designer behaviors in complex system design. Master of Science thesis, Massachusetts Institute of Technology, 2011. Yang, Maria C. (Thesis Supervisor).
[3] Arne Bachmann, Markus Kunde, Markus Litz, Daniel Böhnke, and Stefan König. Advances and work in progress in aerospace predesign data exchange, validation and software integration at the German Aerospace Center. In Product Data Exchange Workshop 2010, Oslo, Norway, 2010.
[4] Arne Bachmann, Jesse Lakemeier, and Erwin Moerland. An integrated laboratory for collaborative design in the air transportation system. In Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Trier, Germany, Sep 2012. 19th ISPE International Conference on Concurrent Engineering, Springer.
[5] Robert A. Baron and Thomas B. Ward. Expanding entrepreneurial cognition's toolbox: Potential contributions from the field of cognitive science. Entrepreneurship Theory and Practice, 28(6):553–573, 2004.
[6] Daniel Böhnke, Björn Nagel, and Volker Gollnick. An approach to multi-fidelity in conceptual aircraft design in distributed design environments. In Aerospace Conference, pages 1–10. IEEE, 2011.
[7] R.J. de Boer and P. Badke-Schaub. Emotional alignment in teams: How emotions support the design process. In International Design Conference DESIGN, pages 1079–1086, Dubrovnik, Croatia, 2008.
[8] David G. Dowling and Roger G. Hadgraft. A systematic consultation process to define graduate outcomes for engineering disciplines. In Wilmar Hernandez, editor, REES 2011: Research in Engineering Education Symposium, pages 525–533, Madrid, Spain, 10 2011. Research in Engineering Education Symposium.
[9] Wilson N. Felder and Paul Collopy. The elephant in the mist: What we don't know about the design, development, test and management of complex systems. Journal of Aerospace Operations, 1(4):317–327, 2012.
[10] Justin Y. Lai, E. Taylor Roan, Holly C. Greenberg, and Maria C. Yang. Prompt versus problem: Helping students learn to frame problems and think creatively. In Third International Conference on Design Computing and Cognition, 2nd Design Creativity Workshop, Atlanta, GA, USA, 2008.
[11] N. Mathers, A. Goktogan, J. Rankin, and M. Anderson. Robotic mission to mars: Hands-on, minds-on, web-based learning. Acta Astronautica, 80:124–131, 2012.
[12] S. M. Smith, T. B. Ward, and R. A. Finke. The creative cognition approach: Cognitive processes in creative contexts. In S. M. Smith, T. B. Ward, and R. A. Finke, editors, The Creative Cognition Approach. MIT Press, Cambridge, MA, 1995.
[13] Thomas B. Ward. Cognition, creativity, and entrepreneurship. Journal of Business Venturing, 19(2):173–188, March 2004.
A Task Oriented Approach to Documentation and Knowledge Management of Systems Enabling Design and Manufacture of Highly Customized Products

Fredrik ELGH a,1
a School of Engineering, Jönköping University, Sweden
Abstract. A rapidly growing approach in product design and manufacture, with great potential to improve customer value, is mass customization. The possibility to design and manufacture highly customer adapted products brings a competitive edge to manufacturing companies and is in some areas a necessity for doing business. In this paper, an approach for documentation and knowledge management of systems supporting the design and manufacture of customized products is explored. As the governing framework and models are updated and refined due to shifting prerequisites, the system, and hence the solutions generated for a single specification, will change over time. This affects product management and the ability to meet legislation and customers' requirements regarding documentation and traceability, as well as the company's ability to provide services, maintenance and spare parts. A solution has been developed for an industrial case with the required functionality for capturing, structuring, searching, retrieving, viewing, and editing a system's embedded information and knowledge. The objective is to enable and facilitate system maintenance and updating and to support the reuse of functions and system encapsulated generic design descriptions in future systems.

Keywords. Customized products, system maintenance, documentation, knowledge management
Introduction
The ability to efficiently and quickly design and manufacture highly customized products can provide a competitive advantage for companies acting on a market with shifting customer demands. A business model based on highly customized products requires advanced application systems for automating the work of generating product variants based on different customer specifications in the quotation and order processes. There are examples of companies that have adopted this approach for many years; nevertheless, as these and other companies want to cut lead time and increase the customization level, more effective use of the systems is required. From a scientific viewpoint, most research has focused on system functionality and in some cases on system development, whereas methods to support efficient maintenance, updating and reuse of these systems are not fully developed.
In this paper, an approach for documentation and knowledge management of systems supporting the design and manufacture of customized products is explored. Two different life-cycle perspectives have to be considered when addressing documentation and management: a knowledge perspective and a product perspective.

1 Corresponding Author: Fredrik Elgh, School of Engineering, Jönköping University, P.O. Box 1026, 551 11 Jönköping, Sweden; e-mail: Fredrik.elgh@jth.hj.se.
The knowledge perspective includes the adaptation of rules and models to changes in production technology, new product knowledge, new markets, changes in legal requirements, etc. Issues related to flexibility, stability, quality assurance, traceability and documentation of a system's different constituting parts and underlying knowledge can be critical unless adequate measures have been taken in the development phase. The product perspective focuses mainly on documentation, traceability and version control. As the governing framework and models are updated and refined due to shifting prerequisites, the system, and hence the solutions generated for a single specification, will change over time. This affects product management and the ability to meet legislation and customers' requirements regarding documentation and traceability, as well as the company's ability to provide services, maintenance and spare parts. Of central importance are issues relating to methods of generating and managing documentation such as engineering calculations and simulations, combined with the principles of traceability from the product to the underlying knowledge and vice versa, and versioning of rules, models and systems.
The scope and the purpose of this research originate from industrial problems and needs which have been identified within research projects carried out in close collaboration with industrial partners. New concepts, perceived as prescriptive models, have in this work been introduced, evaluated, and refined, which is in accordance with the design modeling approach [1]. The focus of this paper is a case study carried out in collaboration with industry. A solution has been developed for the industrial case with the required functionality for capturing, structuring, searching, retrieving, viewing, and editing the system information and knowledge. The objective is to enable and facilitate system maintenance and updating and to support the maintenance and reuse of general functions and system encapsulated generic design descriptions.


1. Case study - automatic design of seat heaters

A general tool, ProcedoStudio, has been developed and evaluated in collaboration with a manufacturing company. The tool has been used for setting up a pilot application system for automatic design of seat heaters to be used in the quotation process. The scope of the system was to support automatic variant design of heating elements for car seat heaters based on varying customer specifications and seat geometries. A heating element consists of a carrier material, a wire and a connecting cable. The wire is laid out and glued in a pattern of sinusoidal loops between two layers of carrier material (Figure 1). The pattern design is based on company-aggregated knowledge. Approximately 75 new variants are designed on a yearly basis and hundreds of requests for quotations are answered. Accuracy in quotations and short quotation preparation lead-time are key success factors for the company regarding quotations. Of great importance is also that the final design should comply with the initial decisions made in the quotation preparation. This requires decisions that are correct compared to the final design and the existence of tools for documentation.
Figure 1. A car seat with a heating element in the backrest.
In Figure 2, the principle system architecture for the automated system generating variant designs of car seat heaters is depicted.

Figure 2. Principle system architecture [2].

The system is fed with customer-specific input (parameters with associated values together with a 2D outline of the heating element). The main output includes a pattern for the heating wire's centre line, an amplitude factor for the sinusoidal loops, the wire specification, detailed manufacturing preparation and a cost estimation. The application for car seat heaters corresponds to a knowledge domain modeled as a KnowledgeBase in the tool. Presently, there are 20 KnowledgeObjects for input specification, file management, electrical calculations, geometry design, manufacturing preparation, and cost estimation. The number of variables managed by the database is 66, although the total number of variables residing in all of the KnowledgeObjects is much higher. Application programs used are MS Access 2007, MS Excel 2007, MathCAD 13, and Catia V5R18. Screenshots from the running system are collected in Figure 3. Fundamental system principles and functions are:

- Implemented with a modular architecture.
- Resides upon a database.
- Based upon a CoTS approach.
- Supports process oriented modeling.
- Includes an adaptive user interface.
- Incorporates functions to ensure system completeness and functionality.
- Based on the separation of knowledge and execution.
- Supports documentation of design rationale associated to rule statements.
- Installed as a client-server solution.


Figure 3. Screenshots from the system for automatic design of seat heaters.
The system output has been compared with final designs of seat heaters. One example is illustrated in Figure 4. The system generated pattern and wire data are shown to the right in the figure. Although it does not generate the same pattern type as the final design, the precision is acceptable and more accurate compared to what can be manually estimated in the quotation preparation.
[Figure 4 data. Final design (left): wire length = 12,255 m; CL length = 4,835 m; no. of strands = 72; UL = 2,56; LL = 1,97; amplitude factor = 2,53. System generated design (right): wire length = 12,444 m; CL length = 4,559 m; no. of strands = 72; UL = 2,56; LL = 1,97; amplitude factor = 2,73.]

Figure 4. Comparison between a final design (left) and system generated design (right) [2].
1.1. Problem - maintenance and reuse

During practical use of the system in the company, the lack of documentation of the whole system and of knowledge management creates obstacles when company engineers try to reuse existing knowledge and information without help from the original developers. This can significantly delay the company's development activities, and engineers would face great challenges in completing design tasks and the quotation process. From a long-term point of view, reuse of the existing knowledge is a critical issue, since new products, new variants of existing products, new manufacturing processes, and additional or modified design rules will be introduced due to new insights or changes in standards or legislation. To deal with these new factors, the adaptability of the entire system needs to be considered, and supporting documentation and knowledge management need to be put in place to enable maintenance of the system, updating, and reuse of system embedded information and knowledge. Hence, it is essential to gain a full comprehension of the system. This includes both an overview of the entire system and the detailed relationships between the different knowledge and information residing within the system.


2. Theoretical foundation and candidate solutions

Two concepts that are important to consider when developing a support that enables access to preceding decisions and argumentations for a design are design rationale and traceability. These concepts are briefly described in the following section, followed by an inventory of candidate solutions for system development.

2.1. Design rationale and traceability

Design rationale is the set of reasons behind the decisions made during the design of an artifact (e.g. a product or an application system). Access to design rationale can support the development of new artifacts, the modification of existing artifacts (design changes) or the reuse of an existing solution in a new context. The realization of a design rationale system includes methods and tools to capture, structure, manage and share information across organizations, processes, systems and products. The requirements concerning the scope and the granularity of the design rationale to be captured depend on future needs. These can be difficult to foresee; however, a limitation has to be set, as it is not feasible to capture everything during the design process. Two different approaches to represent design rationale are argumentation-based and template-based [3]. Argumentation-based representation uses nodes and links, whilst template-based representation makes use of predefined standard templates. The selection of approach will affect the scope, the granularity and the structure of the captured design rationale; however, the key factor for successful implementation of a design rationale recording tool is simplicity [4].
The development of a design automation system is preferably a part of, or integrated with, the development of the actual product. Four sub-processes can be identified within such a development process, resulting in four different outputs: the product design, the design space, the system adapted definition of the design space, and the system implementation. Traceability, defined as "the ability to describe and follow the life of a conceptual or physical artifact" [5], across these sub-processes is essential. The artifact of concern in this study is mainly the design automation system. The design automation system encapsulates product knowledge that has been expanded and transformed into different levels of completeness and generalization throughout the four sub-processes. Traceability, both forward and backward, across different knowledge levels would support the work of pursuing affected objects when changes occur in the premises of a design, or the work of using an existing solution in a new context; i.e. knowledge traceability, defined as "the ability to follow the life of a knowledge component from its origins to its use" [5], is required.

2.2. Candidate solutions

A supporting tool could either be realized by the development of a special purpose application or by the use of an available application with functionality suitable for the purpose. A special purpose solution can be based on one of four principles: Systems Modelling Language (SysML) [6], MOKA [7], Product Variant Master (PVM) [8] and CommonKADS [9]. Four candidate applications with some relevant functionality are: PCPACK [10], Design Rationale Editor (DRed) [4], Product Model Manager (PMM) [11], and Semantic MediaWiki (SMW) [12].
When comparing the four stated principles, SysML seems an easy way to show the rationale, requirements, constraints and rules by using the concept of block diagrams, while CommonKADS looks more like a dominant method to manage the knowledge. In CommonKADS, all the information from design to delivery is shown in a simple way. Storing the experience, geometry and data related to a product and showing them within different classes and views are strengths of MOKA. When it comes to reducing costs, risks and lead time in a project, providing a way of developing and maintaining KBE systems makes MOKA more specific. Product Variant Master (PVM) gives a general overview of the product according to sub- and super-parts, with the relations between the different components, all of which can be seen on a big piece of paper.
Regarding the four specific applications, PCPACK has an integrated suite of ten knowledge tools designed to support the acquisition and use of knowledge. Analyzing knowledge from text documents and structuring knowledge using various knowledge models make PCPACK a powerful system. DRed is a simple and unobtrusive software tool that allows engineering designers to record their rationale as the design proceeds. It allows the issues addressed, the options considered, plus the associated pro and con arguments (arguments for or against an answer), to be captured in the form of a directed graph of dependencies. PMM is a tool built upon the principles of PVM. PMM is an easy tool to learn, with an intuitive structure and graphical notation; however, support for advanced queries, revisions, and authorization is not included. Improved data structures by using categories and access to information according to users' specific queries are the advantages of SMW. Revisions and authorization are also supported by SMW.
3. Solution

Based on the required system functionality and the comparison above, the foundation for a system realization can be outlined. The system has to be able to provide a general view with all relations and constraints. This is supported by all four of the general principles outlined in the previous section; however, additional elements have to be added to support the structuring of design rationale and relations to other domains and supporting documents. Of great importance are also the functionality and the mechanisms enabling querying or aggregation of information within and across the documentation, together with support for versioning and authorization control; this is all supported by SMW.
The main focus is on the mapping between the scattered system associated and encapsulated knowledge and information, which is presented in Figure 5.



Figure 5. System associated and encapsulated knowledge and information.

There are two main approaches to structuring the knowledge: either based on the product structure or on the design process. Depending on the system used for implementation, both views can be supported and even other views introduced. In this case, the design process was selected as the master, as it would result in a higher level of granularity of the system encapsulated information and knowledge than a solution based on the product structure. In the planning of the system, all design tasks were documented and their relations modeled using the principles of Dependency Structure Matrices (DSM). The DSM was analyzed using a tool for partitioning, with subsequent manual work resulting in two principal sub-processes that can run separately until the final task, except for one parameter that has to be transferred (Figure 6). These tasks can also be re-grouped into a limited number of main activities; a minimal sketch of such a DSM partitioning is given below. For the macro generating the wire layout, a subsequent detailed process description was required to enable a clear overview of the execution flow, combined with a very detailed description of the macro modules, procedures and functions.
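The following Python sketch shows the kind of DSM partitioning (sequencing) referred to above; the 4-task matrix is a made-up example, not the actual seat heater DSM.

# Minimal DSM partitioning sketch: reorder tasks so information flows forward.
import numpy as np

dsm = np.array([          # dsm[i, j] = 1: task i needs output from task j
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
])

def partition(dsm):
    """Repeatedly schedule tasks whose inputs are already available; any
    leftover cycle is emitted as one coupled block."""
    remaining = set(range(dsm.shape[0]))
    order = []
    while remaining:
        ready = [i for i in remaining
                 if not any(j in remaining and j != i
                            for j in np.nonzero(dsm[i])[0])]
        if not ready:                  # coupled tasks (a cycle in the DSM)
            ready = sorted(remaining)
        order.extend(sorted(ready))
        remaining.difference_update(ready)
    return order

print(partition(dsm))                  # -> [1, 2, 0, 3]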



Figure 6. DSM modeling

The structure of the information and knowledge entered into the system is based upon the principle information model presented in Figure 7, and the main page of the prototype system is depicted in Figure 8. The main principle of structuring the knowledge and information is to sub-divide the process into different tasks and functions at different levels, in order to support both a contextual meaning and access to detailed descriptions. The Rationale class can be used to describe why a Process/Task/Function exists, or to describe in detail the set of Input, the set of Output, and the transformation associated with a specific Process/Task/Function. The SupportingObject enables traceability to reports, protocols, guidelines, standards, legislation etc. The information model also describes the main content of the wiki pages, i.e. it defines a template; a rough sketch of the model's classes follows below.
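As an illustration only, the principle information model of Figure 7 could be rendered in code roughly as follows; any field beyond the Input, Output, Rationale and SupportingObject elements named in the text is an assumption.

# Rough rendering of the principle information model (Figure 7); field names
# beyond those named in the text are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SupportingObject:
    """Traceability to reports, protocols, guidelines, standards, legislation."""
    title: str
    location: str                      # e.g. a wiki link or document path

@dataclass
class Rationale:
    """Why a Process/Task/Function exists, or a detailed description of its
    inputs, outputs and transformation."""
    text: str
    supporting_objects: List[SupportingObject] = field(default_factory=list)

@dataclass
class Task:
    """A Process/Task/Function node; sub-tasks give the different levels."""
    name: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    rationale: Optional[Rationale] = None
    sub_tasks: List["Task"] = field(default_factory=list)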

Wiki pages for the electrical calculations, process planning and cost estimation were quite easily done, whereas the documentation of the macro for the wire lay-out required a lot of effort due to its size and the number of internal relations. One main principle of the automatic system for design of seat heaters is to sub-divide the design process into design tasks and use applications to define executable files that automate each one of these tasks. The applications preferably provide means to enter text and illustrations in order to describe, in natural language, the principles of the defined algorithms. The macro, however, was programmed in CATIA VBA, with no support for sub-dividing the macro into a number of separate files or for adding illustrations to the code. The macro was nevertheless divided into modules that were copied into separate wiki pages and annotated with descriptions and figures. Links are one of the most powerful tools in Semantic MediaWiki and were extensively used to create relations between different pages, allowing for mapping between concepts. The search facilities also provide a means to find and track knowledge and information in the domain, supporting both detailed selection and aggregation of information.
Figure 7. Principle information model
Figure 8. Main page of the system [13]
4. Conclusion

In this paper, an approach for documentation and knowledge management of systems supporting the design and manufacture of customized products has been explored. The main objective was to develop a support that enables and facilitates system maintenance and the reuse of general functions and system encapsulated generic design descriptions. A principle solution and a prototype system have been outlined and discussed. One of the central parts is that the knowledge and information have been structured based upon the design process. Semantic MediaWiki has more or less all the required functionality to set up such a system, including history management, access management, linking tools and advanced search mechanisms. Fully validating the presented explorative work and its feasibility entails future studies and improvements that require the development of systems targeting other domains; this will be the subject of future work.


Acknowledgement

The author would like to express his gratitude to KA for information and knowledge about the case of application. Finally, Jie Nan and Qian Li conducted the development of the prototype system, and their contribution is greatly acknowledged.


References

[1] A. Duffy, M.M. Andreasen, Enhancing the Evolution of Design Science, Proceedings of Conference on
Engineering Design 1 (1995), 29-35.
[2] F. Elgh, Decision Support in the Quotation Process of Engineered-to-order Products, Advanced
Engineering Informatics 26(1) (2012) 66-79.
[3] A. Tan, Y. Jin, J. Han, A Rational-based Architecture Model for Design Traceability and Reasoning, The
Journal of Systems and Software 80 (2007), 918-934.
[4] R. Bracewell, K. Wallace, M. Moss, D. Knott, Capturing Design Rationale, Computer Aided Design
41(3), (2009), 173-186.
[5] K. Mohan, B. Ramesh, Traceability-based Knowledge Integration in Group Decision and Negotiation Activities, Decision Support Systems 43 (2007), 968-989.
[6] S. Friedenthal, A. Moore, A. Steiner, A Practical Guide to SysML: the Systems Modeling Language,
Morgan Kaufmann, San Francisco, US, 2008.
[7] M. Stokes, Managing Engineering Knowledge MOKA, Prof Eng Publications ltd, London, UK, 2001.
[8] L. Hvam, N.H. Mortensen, J. Riis, Product Customization, Springer Verlag, Berlin, Germany, 2008.
[9] G. Schreiber, H. Akkermans, A. Anjewierden, R. Hoog, N. Shadbolt, W. Velde, Knowledge Engineering
and Management: The CommonKADS Methodology, The MIT Press, Cambridge, US, 2000.
[10] Epistemics, PCPACK, http://www.epistemics.co.uk/Notes/55-0-0.htm (Acc. 9 December 2011), 2008.
[11] A. Haug, L. Hvam, N.H. Mortensen, Implementation of Conceptual Product Models into Configurators:
From Months to Minutes, Proceedings of MCPC 2009 (2009).
[12] Semantic MediaWiki, Introduction to Semantic MediaWiki, http://www.semantic-mediawiki.org (Acc.
9 December 2011), 2011.
[13] J. Nan, Q. Li, Design Automation Systems - Supporting Documentation and Knowledge Management, Master's Thesis, Jönköping University, Jönköping, 2012.


Beyond Concurrent Engineering:
Parallel Distributed Engineering for
More Adaptability and Less Energy
Consumption


Shuichi FUKUDA a,1
a Stanford University, USA and Keio University, Japan

Abstract. A Parallel Distributed Processing (PDP) or Neural Network model is proposed to re-organize industries so that they can share knowledge and experience among them. This re-organized industry framework, called Parallel Distributed Engineering here, develops interchangeable components at its intermediate level and combines them into final products to meet the requirements of customers. Thus, it brings forth greater flexibility to adapt to very frequently and extensively changing situations and, what is more important, it reduces time, cost and energy consumption and increases productivity considerably. It will also satisfy customers more because their diverse requirements can be more precisely met.

Keywords. Parallel Distributed Processing, Neural Network, interchangeable common components, reduction of time, cost and energy consumption, increase of productivity


Introduction

This paper discusses the need for a new design and manufacturing approach that can be applied across different industries. The current industry framework carries the history of inventions, so each industry makes efforts only in its own field. But as concurrent engineering demonstrated that knowledge and experience can be shared across processes, they can also be shared across industries. Such sharing of knowledge and experience across different industries calls for modular design and manufacturing, and it will bring forth greater flexibility in design and manufacturing, a greater reduction of cost and energy consumption and a greater increase in productivity. It will not only benefit industries, but will also provide greater satisfaction to customers.





1 Shuichi Fukuda, Consulting Professor, Dr., Stanford University, USA; Adviser, System Design and Management Research Institute, Keio University, Japan; E-mail: shufukuda@cdr.stanford.edu.


Figure 1. Sequential Engineering



1. Sequential Engineering

Our hardware products used to be developed sequentially as shown in Figure 1,
because they are physical and it was thought to be difficult to process them in another
way.


2. Concurrent Engineering (CE)

Concurrent Engineering (CE) was proposed to reduce time to market in order to meet producers' requests. The success of CE can be attributed to the fact that it recognized that knowledge is non-physical and can be shared, although hardware itself is physical and cannot be shared.
CE may be best understood by comparing it to a packaging problem (Figure 2). Since knowledge is non-physical, it can be packed into a smaller box than physical objects by sharing common knowledge.




Figure 2. Concurrent Engineering as a Packaging Problem

CE has so far been applied within one industry. But if we expand the idea to multiple industries, we could reduce time and cost much more. In fact, if we process knowledge and experience across industries, we could pack them into a much smaller box, as shown in Figure 3.
Therefore, we should make efforts to share our knowledge across different industries, beyond any single industry, in order to achieve a greater reduction of time, cost and energy consumption and a greater increase in productivity.


3. Parallel Distributed Engineering (PDE)

If we represent multiple industries in the same manner as shown in Figure 1, their framework can be represented as shown in Figure 4. This is nothing other than a Neural Network. Therefore, we can process tasks more effectively utilizing techniques

Figure 3. Concurrent Engineering as a Packaging Problem across Industries

Figure 4. Neural Network Representation of Multiple Industry Framework

developed in Neural Network technology. Then, we can reduce time, cost and energy consumption and increase productivity considerably. Besides, a Neural Network structure is very flexible, so it is easier to adapt to very frequent and extensive changes and to meet diversifying requirements. As a Neural Network used to be called Parallel Distributed Processing, let us call such an engineering framework Parallel Distributed Engineering (PDE).
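As a toy illustration of this idea, and not part of the original proposal, the following Python sketch models a shared registry into which different industries publish interchangeable intermediate-level components and from which final products are composed; all names are invented.

    # Toy model of PDE component sharing: industries publish interchangeable
    # intermediate-level components under standardized interfaces, and final
    # products are composed from them regardless of the industry of origin.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Component:
        name: str
        interface: str  # a standardized interface makes components interchangeable

    @dataclass
    class SharedRegistry:
        by_interface: dict = field(default_factory=dict)

        def publish(self, industry, component):
            # Any industry can contribute a component under a shared interface.
            self.by_interface.setdefault(component.interface, []).append(
                (industry, component))

        def compose(self, *interfaces):
            # A final product takes one component per required interface.
            return [self.by_interface[i][0] for i in interfaces]

    registry = SharedRegistry()
    registry.publish("automotive", Component("common chassis", "platform"))
    registry.publish("wheelchair", Component("custom seat unit", "body"))
    print(registry.compose("platform", "body"))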



4. Examples of PDE

Let us see what illustrative examples we have for PDE.


4.1 Automotive Industry

Trucks have been designed and manufactured in this way for a long time. Trucks are designed and manufactured with a common chassis (C in the figure), and a cargo body (A in the figure) is designed and manufactured separately to meet the requirements of customers (Figure 5).
In most cases, cargo bodies for trucks are designed and manufactured by another company, different from the truck company producing the common chassis. Thus, in the area of trucks, knowledge and experience are shared across companies, although within a single industry.




Figure 5. Truck



4.2 Wheelchair and Personal Mobility

If we look at things from a different angle, there are areas where much knowledge and experience can be shared.
Let us take a wheelchair and a personal urban transport as an example. Wheelchairs are designed from the outset for the handicapped, and wheelchair industries produce only wheelchairs. This is primarily because requirements vary very extensively from person to person.
But if we compare a wheelchair with P.U.M.A. (Personal Urban Mobility and Accessibility) [1], the personal urban transport from Segway and GM, we immediately find that there is essentially no difference between a wheelchair and a personal urban transport (Figure 6).
If we can introduce a common chassis or a common platform, it would reduce time, cost and energy consumption and would increase the productivity of wheelchair industries, because all they would then have to develop is a body on top of it. The body might vary extensively from person to person, depending on whether the user is handicapped or not. But this is the same situation as truck production with special cargo bodies.
If we go further, our cars are nothing other than a chair on wheels. So cars, wheelchairs and personal urban transports share the same idea of a chair on wheels. There is a great possibility of sharing knowledge and experience among these industries. Of course, this kind of car might not work on highways. But it might be useful to re-think whether cars are the best vehicle for highways. If our cars become smaller and get closer to personal mobility, then there might be another possibility of integrating cars with rails or other transportation vehicles, and their shapes could be very different from the current ones.
What is important in introducing PDE is to come back to the basics and to think afresh. We are too stuck with the history of inventions, and we have been developing industries along these tracks. If we look at our goal, there are many other shortcuts. We have to get off the beaten tracks.




Figure 6. Wheelchair and P.U.M.A.


4.3 Integration of Transportation Industry

The typical example is the transportation industry. There are the automotive industry, the airplane industry, the rail industry and many other kinds of industries within transportation. But if we come back to our basics, our basic need is to travel comfortably and without trouble from A to Z.
Nobody likes to change transports between land, air and water. These industries operate separately only because they were invented independently. All of them are focusing on how much further they can go along the same track.
But if they look aside and get off the beaten track, they will find that the PDE approach will not only benefit them, but will also provide much greater satisfaction to their customers. From a business point of view, they have good reason to charge more because they provide more comfort and convenience. So the introduction of PDE will bring forth win-win relations for everyone concerned.
In fact, such integration is really called for in some countries. In a country as big as Brazil, you can fly to a distant location, but if a car is not available there, you cannot go any further. So the integration of air and land transports is a prerequisite.
Figure 7 shows the steps of integration in the transportation industry. At the first step, land and air, and air and water transports are integrated. Then, at the next step, the same transports will be used for land, air and water.



Figure 7. Integration of Transportation Industry


5. Standardization for PDE

It naturally follows that, in order to share knowledge and experience, their standardization is called for. But we have to remember that most current standards are not well linked between design and manufacturing. Most of them standardize either design or manufacturing. But to develop interchangeable components for many different industries, design and manufacturing must be integrated and standardized for each component. Otherwise, a component in one industry cannot be used in another industry.
One AWS (American Welding Society) standard is a good example in this direction (Figure 8). It standardizes welding procedures in terms of a box-type component. Box-type components are used in many applications as basic structural elements, such as in ships, containers, trains, trucks, etc. Other standards, such as JIS (Japan Industrial Standard), standardize welding procedures in terms of weld lines. But in that case, if a structure changes, so do the procedures. So knowledge and experience accumulated in terms of weld lines cannot be applied to other applications. When customers' requirements were not diversified, such standards were effective, because welding engineers could focus their attention only on welding. But as diversification increases, such an approach is no longer effective. Even welding engineers have to have a much broader perspective and must accumulate their knowledge and experience so that they can apply them to much wider applications.
If welding procedures are standardized in terms of box components, welding engineers and design engineers can work together more effectively, because such structural elements are widely used and they do not have to examine conditions from scratch. They may need some modifications, but those take much less time and effort to adapt the standard to the current product.
Better still, if manufacturing engineers would like to introduce a welding robot, it is quite easy, because the conditions are defined in terms of structural elements, so operating conditions for a robot can be easily determined.
We have to standardize our knowledge and experience in the form that allows the widest applicability and greatest interchangeability.




Figure 8. Example of AWS Standard


6. Summary

A Parallel Distributed Processing (PDP) or Neural Network model is proposed to re-organize industries so that they can share knowledge and experience among them. This re-organized industry framework, called Parallel Distributed Engineering here, develops interchangeable components at its intermediate level and combines them into final products to meet the requirements of customers. Thus, it brings forth great flexibility to adapt to very frequently and extensively changing situations and, what is more important, it reduces time, cost and energy consumption and increases productivity considerably. It will also satisfy customers more because their diverse requirements can be more precisely met.


References
[1] http://www.segway.com/puma/
Business-Product-Service Portfolio
Management
Giuliani Paulineli GARBI a,1 and Geilson LOUREIRO b
a Engineering Department, College Anhanguera of São José
b Integration and Testing Laboratory, Brazilian Institute of Space Research
Abstract. Product and service development is critical because new products and services are becoming the nexus of competition for many companies. Product and service development is thus a potential source of competitive advantage and is among the essential processes for the success, survival and renewal of companies, particularly in competitive markets. Nowadays, the design and development of new products and services, or the modification of existing ones (redesign), is a key and fundamental element in enhancing the innovation and competitiveness of industrial companies.
This paper presents Business-Product-Service Portfolio Management (BPSPM) as an approach to manage the business portfolio for versioning and variation of product and service development, in order to evolve a business into a more marketable set of products and services, jointly capable of fulfilling the stakeholders' needs. BPSPM is a framework that encompasses the versioning and variation of product and service development concurrently, analysing the products, services and their performing organisations over the perspectives of strategic marketing, project management, engineering design and operations management.
There are five issues that the company must consider in BPSPM: the activities of the relationship between the global market and the company; the activities of the relationship between the providing company and the stakeholders, representing a product-oriented and a service-oriented view respectively; the design activities for versioning and variety of the products and services; the specification activities for versioning and variety of the products and services throughout all stages of the life cycle, regarding the product and service life cycle and the marketing and sales life cycle; and the representation activities of the specifications and the elements of the systems architecture for versioning and variety of the products and services throughout all stages of the life cycle, in the form of hierarchy and heterarchy structures. The BPSPM framework uses the concepts of Systems Engineering, Concurrent Engineering, Service Engineering, Project Management and Business Portfolio, presented in five dimensions: the Business dimension, Outcome dimension, Differentiation dimension, Life dimension and Structure dimension. The framework is described with SysML block definition diagrams.
Keywords. Systems engineering, concurrent engineering, service engineering, product and service portfolio

1 Engineering Department, College Anhanguera of São José, Avenue Dr. João Batista de Sousa Soares, 4121, Jardim Morumbi, São José dos Campos, São Paulo, Brazil, CEP: 12223-660; +55 (12) 3512-1300; Fax: +55 (12) 3512-1316; Email: giuliani.garbi@anhanguera.com
Introduction
Changes in the global market environment leave companies needing a continuous flow of new innovations; only by creating new innovations are companies able to defeat competitors by successfully bringing products and services to global markets in less time than their competitors [1]. Nowadays, the design and development of new products and services, or the modification of existing ones (redesign), is a key and fundamental element in enhancing the innovation and competitiveness of industrial companies. Design is the process of specifying a description of a product and service that satisfies a set of requirements. Redesign is the process of changing the description of an existing product and service to satisfy a new set of requirements [2] [3]. The choice of projects and their development process are important issues for the success of new products and services [4]. We identified a first opportunity: a framework that concurrently encompasses the development of new products and services, in this paper called versioning, and the modification of existing products and services, in this paper called variety, which must be evaluated and selected in order to meet the stakeholders' needs.
Integrated Product Development (IPD) has traditionally focused on the development activities relating to physical technological artefacts. With the advent of business approaches for manufacturing firms based on providing customers with the utility of integrated products and services, termed product-service systems (PSS), companies need to extend their activities to include new dimensions of development. Within the paradigm of mass production and consumption, traditional product-oriented business strategies regarded physical technological artefacts (products) as the mediators of customer value. Value was based on the exchange of products between a providing company and a receiving customer. A customer would buy a product because it represented potential valuable benefits. PSS approaches are business strategies where companies provide value to customers by supporting and enhancing the utility of products throughout their entire life cycle [5]. We identified a second opportunity: a PSS approach that includes the Service Engineering processes in Systems Concurrent Engineering. The result is a modeling framework that integrates the products, services and their performing organizations concurrently.
Product and service development is defined as the transformation of a market opportunity and a set of assumptions about product and service technology into a product available for sale. The existing literature on product and service development is vast. To sharpen our understanding, it is useful to organize this literature into a few competing paradigms. Such a clustering is an attempt on our part to elucidate differences, and may lead in some cases to an exaggeration of these perspectives. There are at least four common perspectives in the design and development research community: marketing, organizations, engineering design, and operations management [6]. We identified a third opportunity: an approach that concurrently encompasses the versioning and variation of product and service development over the perspectives of strategic marketing, project management, engineering design and operations management.
This paper presents Business-Product-Service Portfolio Management (BPSPM) as an approach to manage the business portfolio for versioning and variation of product and service development, in order to evolve a business into a more marketable set of products and services, jointly capable of fulfilling the stakeholders' needs. BPSPM is a framework that encompasses the versioning and variation of product and service development concurrently, analysing the products, services and their performing organisations over the perspectives of strategic marketing, project management, engineering design and operations management. Section 1 presents the Business-Product-Service Portfolio Management (BPSPM) approach modeled with SysML. Section 2 presents the concepts of the Business dimension. Section 3 presents the concepts of the Outcome dimension. Section 4 presents the concepts of the Differentiation dimension. Section 5 presents the concepts of the Life dimension. Section 6 presents the concepts of the Structure dimension. And Section 7 draws some conclusions.
1. Business-Product-Service Portfolio Management (BPSPM)
The BPSPM approach uses the concepts of Business Portfolio, Systems Engineering, Concurrent Engineering, Service Engineering, Project Management and Strategic Marketing, presented in five dimensions: the Business dimension, Outcome dimension, Differentiation dimension, Life dimension and Structure dimension. BPSPM is an iterative, collaborative, multidisciplinary approach to derive, develop, evaluate and select balanced value innovations of the versioning and variation of products and services during the life cycle, over the perspectives of strategic marketing, project management, engineering design and operations management.


Figure 1 Business-Product-Service Portfolio Management (BPSPM) modeled with SysML.
The diagram in Figure 1 shows the structure view for Business-Product-Service Portfolio Management (BPSPM). The Business dimension issues a request for proposals of versioning and variety of the products and services to the Outcome dimension. The Outcome dimension delivers the versioning and variety of the product and service scopes for evaluation and selection by the Business dimension. The Outcome dimension translates the request for proposals of versioning and variety of the products and services into systems requirements, which are designed by the Differentiation dimension. The Differentiation dimension translates the systems requirements into a systems architecture for versioning and variety of the products and services, which is specified over programmatic, technical and operational aspects by the Life dimension. The Life dimension translates the systems architecture into programmatic, technical and operational specifications for versioning and variety of the products and services throughout all stages of the life cycle, which are represented by the Structure dimension. The Structure dimension represents the hierarchy and heterarchy structures of the elements of the systems architecture and the specifications for versioning and variety of the products and services throughout all stages of the life cycle, which are organised by the Outcome dimension.
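A minimal sketch of this request/delivery cycle, reflecting our reading of Figure 1 rather than any implementation by the authors, is given below; the function bodies are simple placeholders.

    # Placeholder pipeline for one pass through the BPSPM cycle:
    # Business -> Outcome -> Differentiation -> Life -> Structure,
    # with Structure feeding back to Outcome and Outcome delivering
    # scopes back to Business for evaluation and selection.
    def business(opportunity):
        return {"request_for_proposal": opportunity}

    def outcome(request):
        return {"system_requirements": "requirements for " + request["request_for_proposal"]}

    def differentiation(requirements):
        return {"system_architecture": "architecture for " + requirements["system_requirements"]}

    def life(architecture):
        return {"specifications": ("programmatic", "technical", "operational")}

    def structure(spec):
        return {"hierarchy": spec["specifications"], "heterarchy": spec["specifications"]}

    request = business("versioning of product/service X")
    structures = structure(life(differentiation(outcome(request))))
    print(structures)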
2. Business Dimension
The Business dimension deals with the activities of the relationship between the global market and the company. It applies the concepts of Strategic Marketing and Business Portfolio with two main objectives: identifying opportunities for the Outcome dimension, and evaluating and selecting the versioning and variety of the product and service scopes proposed by the Outcome dimension. The Business dimension aims to apply a strategy based on value innovation rather than on competition [1]. The identified opportunity must be made available in the form of a mission statement considering the main constraints of the programmatic, technical and operational aspects. The main approaches for understanding user and customer needs are the following: surveys and focus groups, latent needs, lead users, customer-developers, competitive analysis of competing products, industry experts or consultants, extrapolating trends, building scenarios, market experimentation, and others [3].


Figure 2 Business dimension modeled with SysML.
The portfolio management process is defined as a dynamic decision-making process for evaluating, selecting, requesting adequacy of, approving or cancelling the versioning and variety of the product and service scopes. In order to have a successful portfolio management process, it is important to take care of three goals: value maximization, balancing the portfolio, and creating alignment with the business strategy [1]. A range of criteria is used to screen projects prior to development. An early study found financial criteria to be the most common, with Net Present Value/Internal Rate of Return having the highest usage, followed by cost-benefit analysis and payback period. However, most firms use a range of additional criteria: ranking, profiles, simulated outcomes, strategic clusters, interactive methods, and others [3].
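As a hedged numerical illustration of the financial screening criteria mentioned above, the sketch below computes Net Present Value and payback period for a candidate project; the cash flows and discount rate are invented example values.

    # Financial screening of a candidate project: Net Present Value and
    # payback period. Cash flows and the discount rate are made-up values.
    def npv(rate, cash_flows):
        """NPV of cash_flows[0..n], where cash_flows[0] is the initial outlay."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def payback_period(cash_flows):
        """Number of periods until the cumulative cash flow turns non-negative."""
        cumulative = 0.0
        for t, cf in enumerate(cash_flows):
            cumulative += cf
            if cumulative >= 0:
                return t
        return None  # the project never pays back

    flows = [-100.0, 30.0, 40.0, 50.0, 30.0]  # initial investment, then returns
    print(round(npv(0.10, flows), 1))  # about 18.4: a positive NPV passes the screen
    print(payback_period(flows))       # 3 periods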
The diagram in Figure 2 shows the structure view for the Business dimension. The Business dimension is made up of one or more "Identify the opportunities" and one or more "Evaluating and selecting of versioning and variety of the products and services" blocks. "Identify the opportunities" is performed by one or more "Approaches for understanding user and customer needs". "Evaluating and selecting of versioning and variety of the products and services" is performed by one or more "Range of criteria". The diagram shows that the Business dimension has two flow ports: the Business dimension port requests proposals of versioning and variety of the products and services from the Outcome dimension; and the Outcome dimension port delivers the versioning and variation of the product and service scopes for evaluation and selection by the Business dimension.
3. Outcome Dimension
The Outcome dimension deals with the activities of the relationship between the providing company and the stakeholders, representing a product-oriented and a service-oriented view respectively. The Outcome dimension performs the stakeholder and requirements analysis for the products, services and their performing organisations concurrently. It also organises the elements and specifications for versioning and variety of the products and services throughout all stages of the life cycle in the form of scopes. The Outcome dimension applies the Systems Concurrent Engineering approach with Service Engineering concepts. The Systems Concurrent Engineering approach is a modeling framework that integrates the product and its performing organizations [7].

Figure 3 Outcome dimension modeled with SysML.

Stakeholder analysis makes available the stakeholder requirements and measures of effectiveness (MOEs) for requirements analysis. Requirements analysis makes available the measures of performance (MOPs) and technical performance measures (TPMs) [8]. From the hierarchy and heterarchy structures of the elements and specifications for versioning and variety of the products and services throughout all stages of the life cycle, the Outcome dimension organises the product, service and organisation elements and the programmatic, technical and operational specifications in the form of scopes, considering the main objectives of the mission statement.
The diagram in Figure 3 shows the structure view for the Outcome dimension. The Outcome dimension is made up of one or more "Systems requirements" and one or more "Organise the elements and specifications" blocks. "Systems requirements" is made up of one or more "Stakeholders analysis" and one or more "Requirements analysis". "Stakeholders analysis" identifies one or more "Product and service mission", one or more "Product, service, marketing and sales life cycle processes" with their scenarios, and one or more "Scope of the organisation effort". "Requirements analysis" defines one or more "Functions", one or more "Performance", and one or more "Conditions". "Organise the elements and specifications" is made up of one or more "Integration of the elements" and one or more "Integration of the specifications". "Integration of the elements" is performed with one or more "Product", one or more "Service", and one or more "Organisation". "Integration of the specifications" is performed with one or more "Programmatic factors", one or more "Technical factors", and one or more "Operational factors". The diagram shows that the Outcome dimension has four flow ports: the Business dimension port requests proposals of versioning and variety of the products and services from the Outcome dimension; the Outcome dimension port translates the request for proposals into systems requirements that are designed by the Differentiation dimension; the Structure dimension port represents the hierarchy and heterarchy structures of the elements and specifications for versioning and variety of the products and services that are organised by the Outcome dimension; and the Outcome dimension port delivers the versioning and variation of the product and service scopes for evaluation and selection by the Business dimension.
4. Differentiation Dimension
The Differentiation dimension deals with the design of versioning and variety of the products and services. It aims to provide the design of a range of products and services that are based on a core of company competences. Differentiation can generate new profits and growth in two distinct manners: versioning, which offers new products and services, and variety, which offers variations of the products and services for a version that will satisfy the systems requirements [9]. The Differentiation dimension performs the functional and implementation analysis of versioning and variety of the products and services and their performing organisations concurrently. It performs the design for versioning and variety of the product, service and organisation elements, which must be made available in the form of a systems architecture.
Functional analysis identifies the functional context for the product and service at each life cycle process scenario, and for the organization at each life cycle process scenario, within the scope of the development effort [7]. Implementation analysis identifies the implementation context for the product and service at each life cycle process scenario, and for the organization at each life cycle process scenario, within the scope of the development effort. Physical connections between the system and the environment elements define the physical external interface requirements [8].



Figure 4 Differentiation dimension modeled with SysML.
The diagram in Figure 4 shows the structure view for the Differentiation dimension. The Differentiation dimension is made up of one or more "Versioning" and one or more "Variety" blocks. "Versioning" is made up of one or more "Functional analysis" and one or more "Implementation analysis". "Functional analysis" identifies one or more "Product, service and organisation structure", one or more "Product, service and organisation behaviour", and one or more "Product, service and organisation hazard and risk". "Implementation analysis" identifies one or more "Product, service and organisation internal interfaces", one or more "Product, service and organisation external interfaces", and one or more "Product, service and organisation architecture connections and flows". "Variety" is made up of the same "Functional analysis" and "Implementation analysis" blocks, identifying the same elements. The diagram shows that the Differentiation dimension has two flow ports: the Outcome dimension port translates the request for proposals of versioning and variety of the products and services into systems requirements that are designed by the Differentiation dimension; and the Differentiation dimension translates the systems requirements into a systems architecture for versioning and variety of the products and services that is specified over programmatic, technical and operational aspects by the Life dimension.
5. Life Dimension
The Life dimension deals with the programmatic, technical and operational specifications for versioning and variety of the products and services throughout all stages of the life cycle. It regards the product and service life cycle (concept, development, production, utilization, support and retirement) and the marketing and sales life cycle (development, introduction, growth, maturity and decline) for products, services and their performing organisations concurrently [10].
Programmatic specifications are related to the management of cost, schedule, quality, risk, configuration and interfaces, or other issues determined by the Business dimension. Technical specifications are related to the development capacity of the company, analysing the human abilities and competencies, mastery of the technologies and the available facilities. Operational specifications are related to the production capacity of the company and its partners, marketing campaigns, sales and logistics channels, and communication channels with customers and providers.



Figure 5 Life dimension modeled with SysML.
The diagram in Figure 5 shows the structure view for the Life dimension. The Life dimension is made up of one or more "Versioning for product, service, marketing and sales life cycle" and one or more "Variety for product, service, marketing and sales life cycle" blocks. Each of these is made up of one or more "Programmatic specifications for product, service and organisation", one or more "Technical specifications for product, service and organisation" and one or more "Operational specifications for product, service and organisation". The diagram shows that the Life dimension has two flow ports: the Differentiation dimension translates the systems requirements into a systems architecture for versioning and variety of the products and services that is specified over programmatic, technical and operational aspects by the Life dimension; and the Life dimension translates the systems architecture into programmatic, technical and operational specifications for versioning and variety of the products and services throughout all stages of the life cycle that are represented by the Structure dimension.
6. Structure Dimension
The Structure dimension represents the programmatic, technical and operational specifications and the elements of the systems architecture for versioning and variety of the products and services throughout all stages of the life cycle in the form of hierarchy and heterarchy structures. The Structure dimension makes available the realization of the implementation architecture and attributes, the implementation requirements and the lower-level system requirements.
The hierarchy structure defines how the programmatic, technical and operational specifications and the elements of the systems architecture are structured in a layered pyramidal structure. The layers of the hierarchy correspond to the end products and services breakdown structure. The performing organizations for those end products and services are in the same building block and, consequently, the same layer as those products and services [7]. The heterarchy structure defines how the collaborative and multidisciplinary teams are structured in a layered concurrent structure. The layers of the heterarchy correspond to the end products and services breakdown structure. The performing organizations for those end products and services are in the same building block and, consequently, the same layer as those products and services.


Figure 6 Structure dimension modeled with SysML.
The diagram in Figure 6 shows the structure view for the Structure dimension, which is made up of one or more "Versioning structure" and one or more "Variety structure" blocks. Each of these is made up of one or more "Hierarchy of the specifications and elements" and one or more "Heterarchy of the collaborative and multidisciplinary team". "Hierarchy of the specifications and elements" is structured in a layered pyramidal structure of the products, services and their organisations. "Heterarchy of the collaborative and multidisciplinary team" is structured in a layered concurrent structure of the products, services and their organisations. The diagram also shows that the Structure dimension has two flow ports: the Life dimension translates the systems architecture into programmatic, technical and operational specifications for versioning and variety of the products and services throughout all stages of the life cycle that are represented by the Structure dimension; and the Structure dimension represents the hierarchy and heterarchy structures of the elements and specifications for versioning and variety of the products and services that are organised by the Outcome dimension.
7. Conclusions
Business-Product-Service Portfolio Management (BPSPM) can be used to address the identified opportunities: as a framework that concurrently encompasses the versioning and variety of product and service development; as a Product-Service Systems approach; and as an approach for the concurrent development of the versioning and variation of products and services over the perspectives of strategic marketing, project management, engineering design and operations management. The SysML structural diagrams allowed a high-abstraction visualization of the activities of the relationship between the global market and the company, of the relationship between the providing company and the stakeholders, of the specifications for versioning and variety of the products and services, and of the representation of the specifications and the elements of the systems architecture for versioning and variety of the products and services throughout all stages of the life cycle in the form of hierarchy and heterarchy structures. Further steps of this work are to demonstrate how to move from the MBSCE modeled in SysML to a given application domain.
References
[1] P.H.F. Pennings, Portfolio Management Process: A case study after the performance of portfolio management, TUE, Department Technology Management, Eindhoven, 2008.
[2] A.I. Lopez, V. Sosa, S.L. Arevalo, Modelling Approach for Redesign of Technical Processes, Advances in Chemical Engineering, Z. Nawaz (ed.), 2012.
[3] J. Tidd, K. Bodley, The Effects of Project Novelty on the New Product Development Process, R&D Management, 2002.
[4] R.G. Cooper, S.J. Edgett, E.J. Kleinschmidt, Portfolio Management: Fundamental for New Product Success, PDMA ToolBook for New Product Development, Wiley & Sons, 2002.
[5] A.R. Tan, T.C. McAloone, M.M. Andreasen, What Happens to Integrated Product Development Models with Product/Service-System Approaches?, 6th Integrated Product Development Workshop IPD 2006, Schönebeck/Bad Salzelmen, Magdeburg, October 2006.
[6] V. Krishnan, K.T. Ulrich, Product Development Decisions: A Review of the Literature, Management Science, Vol. 47, No. 1, INFORMS, 2001.
[7] G. Loureiro, Lessons Learned in 12 Years of Space Systems Concurrent Engineering, 61st International Astronautical Congress, Prague, Czech Republic, 2010.
[8] G.P. Garbi, G. Loureiro, Model-Based System Concurrent Engineering, Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, ISPE Concurrent Engineering 2012 (CE 2012), Trier, Germany, 2012.
[9] M. Holweg, A. Greenwood, Product Variety, Life Cycles and Rate of Innovation: Trends in the UK Automotive Industry, Proceedings of the Logistics Research Network Conference, Cardiff, September 2000.
[10] V. Mahajan, E. Muller, F. Bass, New Product Diffusion Models in Marketing: A Review and Directions for Research, Journal of Marketing, 1990.

Development of Three Dimensional
Measured Data Management System in
Shipbuilding Manufacturing Process
Kazuo Hiekata a,1, Hiroyuki Yamato a and Shogo Kimura a
a Graduate School of Frontier Sciences, The University of Tokyo, Japan.
Abstract. In this paper, a data management system for measurement data and accuracy evaluation results of shipbuilding assemblies in the shipbuilding manufacturing process is proposed. In the accuracy evaluation system, the accuracy of assemblies is calculated by comparing measurement data obtained by a laser scanner to design data. The analysis process for the accuracy can be defined for each structural member. The system is evaluated in an empirical case study, and some assumptions for improvement of the manufacturing process are found.
Keywords. Data management, three dimensional measurement, laser scanner, shipbuilding
Introduction
At each fabrication stage in the shipbuilding process, the shapes of components are measured and the accuracy is evaluated, for managing the whole shape of a ship and reducing the cost of rework in post-fabrication stages. For example, the shapes of shell plates are evaluated by fitting wooden models, or the gap and misalignment of shipbuilding blocks are calculated based on measured data obtained from a laser range radar (total station). These traditional instruments measure only selected points on an assembly, so they cannot be applied to accuracy evaluation of the whole shape. Standardization of the evaluation process has not progressed because measurement instruments differ between fabrication stages. Heo et al. employed a total station to measure the shipbuilding blocks for accuracy control [1], but the evaluation depends on the spacing of the measured points.
Recently, accuracy evaluation systems using measured data of assemblies obtained from laser scanners have been proposed [2]. A laser scanner measures the whole surface of the members as point cloud data. The measured data can be used for evaluation of the surfaces of shell plates or the welding surfaces of shipbuilding blocks. The measured data and evaluation results have a high information content, so these data are expected to help discover knowledge about the manufacturing process. However, in most shipyards, the search and reuse of evaluation results are difficult because large amounts of accuracy information are stored without adequate data management.

1 Associate Professor, Graduate School of Frontier Sciences, The University of Tokyo, Building of Environmental Studies, Room #252, 5-1-5, Kashiwanoha, Kashiwa-city, Chiba 277-8563, Japan; Tel: +81 (4) 7136 4611; Fax: +81 (4) 7136 4611; Email: hiekata@k.u-tokyo.ac.jp; http://www.nakl.t.u-tokyo.ac.jp/
In this paper, a data management system using measurement data and accuracy evaluation results of shipbuilding assemblies measured in the shipbuilding manufacturing process is proposed. The proposed system has three functions: (1) accuracy evaluation, (2) accuracy data accumulation, and (3) search and reuse of accuracy data. The objective of this study is to build a method for identifying knowledge, know-how and techniques in the field, based on the data managed by the developed system, and to evaluate it using the three-dimensional measured data from the ship construction process.
In the accuracy evaluation system, the accuracy of assemblies is calculated by comparing measurement data obtained by a laser scanner to design data. The methodologies for accuracy evaluation differ according to the assembly, and some existing methods can be applied for evaluation.
In the accuracy data accumulation system, measured data, design data, and evaluation results are accumulated in a database with metadata about the name, features, or evaluation results of assemblies. Metadata is attached in RDF format and has a URI for identifying the accumulated data. The relationships of each assembly are structured in RDFS format, and users can edit the relationships.
In the accuracy data search system, accuracy data are searched by querying the RDF metadata attached to the accuracy data. The values of the attached metadata and the names of assemblies defined in the RDFS relationships, visualized as a tree structure, are used for metadata search with SPARQL [3]. Search results are displayed as a summary of the metadata, and the accuracy data identified by the searched metadata are loaded and compared.
The accuracy of sub-assembly members manufactured in a shipyard is evaluated and the measured data are accumulated in the proposed system. In these experiments, a decrease in the distortion of the panel surface is confirmed by comparing the accuracy of the panel between heating processes. The system is also helpful in identifying areas where distortion is large, by searching data extracted from the measured data and comparing it to the evaluation results. The findings obtained by these comparisons can be utilized for redesign of the manufacturing process.
1. Proposed System
1.1. Overview
The overview of the proposed system is shown in Figure 1. Measured data and design data of panels and other primitive parts, small assemblies and large building blocks of ships are within the scope of the system. As for the measurement process, the measurement procedure, quality standard and analysis method for the accuracy must be defined for each object. The results of the accuracy analysis will be visualized for the workers in the manufacturing workshops. Metadata will be assigned to all the data related to the analysis process to facilitate data retrieval. With the metadata, the measured data, related design data and other useful information can be retrieved efficiently.
Figure 1. System overview.
1.2. Software Architecture
The data often requires special processing for analysis. A structural member requires the analysis procedure for structural members, and a flat panel requires the one for flat plates. The system provides users with an extensible application program interface. Users can develop procedures to analyze the data as plug-ins for the whole system. The architecture is designed for deployment of the system into the production phases.
1.3. Data Accumulation and Management
1.3.1. Metadata assignment for accumulated measured data
Metadata will be assigned to all the measured data of the different phases; then the data and metadata will be stored in the database. The identifiers of the assembly parts and the relationships between those parts can be described in RDF format. The relationship between an assembly part and the components of the assembly part is described by rdfs:subClassOf in RDF, as shown in Figure 2. Employing the metadata written in RDF, the background of all the data can be managed efficiently. The configuration of the product, i.e. the shipbuilding blocks, is described, and the measured data will be attached to a node to show the correspondence between measured data and design data.

Figure 2. Relationship between parts configuration described in RDF.
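A minimal sketch of this metadata pattern, written in Python with the rdflib library under the assumption of a hypothetical namespace and invented part names, might look as follows.

    # Describe a parts configuration in RDF following the rdfs:subClassOf
    # pattern of Figure 2, and attach measured-data metadata to a node.
    # The namespace and identifiers are hypothetical placeholders.
    from rdflib import Graph, Namespace, Literal, RDFS

    SHIP = Namespace("http://example.org/ship#")
    g = Graph()

    # A block is composed of a floor panel and a girder.
    g.add((SHIP.FloorPanel_01, RDFS.subClassOf, SHIP.Block_A))
    g.add((SHIP.Girder_03, RDFS.subClassOf, SHIP.Block_A))

    # Measured data and the analysis type are attached as metadata.
    g.add((SHIP.FloorPanel_01, SHIP.measuredData, Literal("scan_2013_05_01.pts")))
    g.add((SHIP.FloorPanel_01, SHIP.analysisType, Literal("flatness")))

    print(g.serialize(format="turtle"))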
1.3.2. Data retrieval with the help of metadata
The data can be retrieved via the user interface shown in Figure 3. Users can find the appropriate data from all the accumulated measured point cloud data using the metadata. The upper-left pane of the window shows the configuration of assembly parts in a tree view. The data requested by a user can be easily found in the tree view. The results can be filtered using the upper-right pane, where the metadata field names and values are specified and employed for the filter. The queries for retrieval and filtering are formed in SPARQL, and the results are shown in the lower left of the window. The user reviews the search results in the lower right; the selected data is then read and displayed in the window.

Figure 3. Metadata search interface.
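Continuing the rdflib sketch shown after Figure 2, a metadata search of the kind formed by this interface could be expressed as a SPARQL query over the same graph; the property names remain hypothetical placeholders.

    # SPARQL metadata search over the graph g built in the previous sketch:
    # find measured-data files of parts of Block_A analysed for flatness.
    query = """
    PREFIX ship: <http://example.org/ship#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?part ?file WHERE {
        ?part rdfs:subClassOf ship:Block_A .
        ?part ship:measuredData ?file .
        ?part ship:analysisType "flatness" .
    }
    """
    for row in g.query(query):
        print(row.part, row.file)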
1.4. Accuracy Management Procedure
1.4.1. Accuracy measurement for shipbuilding blocks
After building huge shipbuilding blocks, the accuracy of the assembled blocks is evaluated by the dimensions calculated from the coordinates of the end points and edges of the outer plates and structural members [4]. Because of the complexity and the data size of the point cloud data from laser scanners, the analysis results for shipbuilding blocks vary even for the same measured data. In other words, as for the end points and the edges, the robustness of the measurement and analysis process is not sufficient for the daily operations in shipyards.
On the other hand, the extraction of surfaces from the point cloud data is stable, and the reproducibility of the measurement and analysis results is good. So, in this paper, an accuracy measurement method based on the surfaces of floor panels is proposed to avoid the variance caused by the measurement and analysis process. A shipbuilding block is shown in the left side of Figure 4, and the edges and floor panels are illustrated in the right side of the figure.
In the proposed method, the edges of the shipbuilding blocks are extracted in the following steps:
1. The user picks a point of a floor panel in the measured point cloud data.
2. The measured points of the floor panel are extracted by the region growing method, and a plane is fitted to the data.
3. The edges are extracted based on the distances between the projection points of the point cloud data and the fitted surface.
The difference between the design dimension and the measured data extracted from the point cloud will be visualized.
Figure 4. A shipbuilding block and floor panel
1.4.2. Accuracy analysis for the floor panels
As for the floor panels and other surfaces in the shipbuilding blocks, the fitting planes can be calculated based on principal component analysis. The flatness of the planes can be visualized using the distance between the fitting plane and each measured point. The visualization is based on the method proposed in prior research on the accuracy measurement system for curved shell plates [5].
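A minimal sketch of the plane fitting and flatness evaluation described in Sections 1.4.1 and 1.4.2 is given below; it illustrates the principal-component approach with synthetic points standing in for real scanner data, and is not the authors' implementation.

    # Fit a plane to floor-panel points by principal component analysis
    # (the plane normal is the direction of smallest variance) and use
    # signed point-to-plane distances as the flatness map.
    import numpy as np

    def fit_plane(points):
        """Return (centroid, unit normal) of the least-squares plane."""
        centroid = points.mean(axis=0)
        centered = points - centroid
        # The singular vector of smallest singular value is the normal.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centroid, vt[-1]

    def flatness(points):
        centroid, normal = fit_plane(points)
        # Signed distance of each point from the fitted plane.
        return (points - centroid) @ normal

    rng = np.random.default_rng(0)
    panel = np.column_stack([rng.uniform(0.0, 5.0, 1000),
                             rng.uniform(0.0, 2.0, 1000),
                             rng.normal(0.0, 0.002, 1000)])  # ~2 mm noise
    d = flatness(panel)
    print("deviation range: %.4f m to %.4f m" % (d.min(), d.max()))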
2. Case Study
2.1. Overview
The measured data of shipbuilding blocks is stored in the proposed system and analyzed in the case study. The data is stored in the database with metadata. The raw data of the shipbuilding block is shown in Figure 5. Then, the search function is employed to find the related data in the database. To support the assumptions made from the data, other related data must be retrieved easily and efficiently. The tendency of the deformation caused by the welding process is found through the exploration of the data with metadata.

Figure 5. Measured point cloud data of a shipbuilding block
2.2. Results
The shipbuilding block shown in the left side of Figure 4 was measured by a laser scanner. In this case study, the distances between the floor panel and each point are calculated as an accuracy control index, and the flatness of the floor plates is also evaluated using the software system. The flatness is calculated as the difference between the best-fit plane and the point cloud data of the floor panel, so kinks and other discontinuities can be found by visualization of the flatness.
Both the raw measured data and the analyzed results are stored with metadata to improve the accessibility of the data. Metadata fields such as the name of the shipbuilding block and the type of the analysis are assigned to the raw data and the analyzed data. The data can be retrieved via the user interface shown in Figure 3.
2.3. Trial Analysis
Measured data for the shipbuilding blocks are stored in the system to evaluate the accuracy of the blocks. The edges of the block are extracted from the data shown in Figure 5. The measured data also contains the data of the inner floor plate. The accuracy of the edges is important for joining the blocks, so the edges are evaluated by the method described in Section 1.4.1; the flatness of the floor plate is evaluated as described in Section 1.4.2. The gap on the edges and the flatness of the floor plate are shown in Figure 6. The horizontal axis of the graph corresponds to the position of the block joints shown in Figure 5. The vertical axis is the difference between the edges of the design data and the point cloud data; if the value is positive, the edge is longer than the design dimension.
Figure 6. Measured point cloud data of a shipbuilding block
According to the results shown in Figure 6, a periodic pattern in the floor panel and a correlation among the three data histories can be found.
As for the periodic pattern, the principal beam girders are found at the local minima. So, it is assumed that the floor panel is constrained by the girders, and that the periodic pattern is caused by the welding process and the girders. This assumption is supported by the other data retrieved by the system based on the metadata, so the periodic deformation by the welding process occurs in the shipyard. The comparison of two measured data sets is shown in Figure 7.
The correlation between the floor panels and the edges is also supported by the other data. By retrieving the appropriate data using the metadata search system, measured data from the real field becomes accessible to support assumptions.
This kind of finding may contribute to the quality of the manufacturing process of the shipbuilding blocks.
Figure 7. Periodical deformation pattern of floor plate
3. Conclusion and Future Work
In this study, an accuracy measurement and analysis method and a data management system are proposed. The accuracy evaluation methods for the edges of shipbuilding blocks and the flatness of floor panels are discussed. By giving appropriate metadata to the raw measured point cloud data, the accessibility of the raw data and the analysis results is improved. In the empirical case study, the related data for supporting assumptions made during the analysis process were retrieved by the metadata search function.
The periodic welding deformation pattern caused by the principal beam girders and the correlation between the floor panels and the edges of the shipbuilding blocks are proposed and supported in the empirical case study.
The measured data from laser scanners is increasing rapidly, so data management methods will become more and more important. Metadata must be the key technology for managing this kind of complex and huge data.
4. Acknowledgement
The authors would like to thank the shipyard and UNICUS Co.,Ltd, which gave a lot of
support to this project.
References
[1] H. Heo, J. Yang, S. Won and Y. Kim, Dimensional Management System of Hull Blocks in Ship Production, 11th International Symposium on Practical Design of Ships and Other Floating Structures (2010), 1-6.
[2] Y. Kobayashi et al., Useful Application of a Non-contact 3D Measuring System for Nissan's Monodukuri, Nissan Technical Review 62 (2008), 56-60.
[3] G. Klyne and J. Carroll (eds.), Resource Description Framework (RDF): Concepts and Abstract Syntax, W3C Recommendation (2004), http://www.w3.org/TR/rdf-concepts/
[4] K. Hiekata, H. Yamato, M. Enomoto and S. Kimura, Accuracy Evaluation System for Shipbuilding Blocks Using Design Data and Point Cloud Data, Proceedings of the 18th ISPE International Conference on Concurrent Engineering (2011), 377-384.
[5] K. Hiekata, H. Yamato, M. Enomoto, Y. Oida, Y. Furukawa, Y. Makino and T. Sugihiro, Development of Accuracy Evaluation System of Curved Shell Plate by Laser Scanner, Proceedings of the 17th ISPE International Conference on Concurrent Engineering (2010), 47-54.
A Design Method for Unexpected
Circumstances: Application to an Active
Isolation System
Masato INOUE a,1, Masaki TAKAHASHI b and Haruo ISHIKAWA c
a Department of Mechanical Engineering Informatics, Meiji University, Japan
b Keio University, Japan
c The University of Electro-Communications (UEC Tokyo), Japan
Abstract. The early phase of design contains multiple sources of unexpected
circumstances, including the change or addition of performance requirements or
design conditions as the design phases proceed. Therefore, designers are required
to design considering those unexpected circumstances in the later phases.
Previously, we proposed a preference set-based design (PSD) method, which
obtains a unique design solution set from the point of view of design preference
and robustness under various sources of uncertainty, while incorporating the
designer's preferences in the early phase of design. The PSD method represents
uncertain design information as interval sets. This paper proposes a new design
method, based on the PSD method, which can obtain diverse possible design
solution sets for unexpected circumstances by deriving the multiple feasible
design domains that satisfy the required performances. Obtaining diverse possible
design solution sets makes it possible to deal with unexpected circumstances by
selecting the optimal design solution from the obtained sets in the early phase. In
this paper, the proposed design method is applied to the design problem of an
active isolation system for a 4-story building, to cope with an additional,
unexpected earthquake ground motion which the designer could not consider in
the early phase of design.
Keywords. Unexpected circumstance, diverse possible design solution sets, active
isolation system
Introduction
Generally, the early phase of design, which comprises conceptual design and
embodiment design [1], contains multiple sources of unexpected circumstances,
including the change or addition of design constraint conditions as the design phases
proceed. The early decision-making process has the greatest effect on the lead time of
the development process, overall cost, and product quality. Therefore, designers are
required to design considering those unexpected circumstances in the later phases. It is
also desirable that designers make decisions quickly, even under uncertain and
unexpected circumstances, to achieve lower cost and faster product development.
1 Corresponding Author: Senior Assistant Professor, Meiji University, Japan; E-mail: m_inoue@meiji.ac.jp.
The traditional design practice often considers engineering design as an iterative
process. That is, it quickly develops a point solution, evaluates it on the basis of
multi-objective criteria, and then iteratively moves to other points until it reaches a
satisfactory point solution. In this iterative process, there is no theoretical guarantee
that the process will ever converge and produce an optimal solution. In addition, the
use of a point solution does not provide information about uncertainty. To cope with
this uncertainty, a large design space needs to be explored to obtain a set of feasible
design solutions instead of a point solution. Furthermore, preparing diverse possible
design solutions beforehand makes it possible to respond to the addition of an
unexpected design condition by selecting a design solution from the prepared sets.
In contrast to the traditional point-based design, Ward et al. advocated the set-
based concurrent engineering paradigm [2]. A set-based approach presents many
possibilities with respect to the handling of various sources of engineering uncertainties
that are intrinsic in the early phase of design [3-5]. The representation of uncertainties
is a topic that researchers have approached from many different directions. In the
engineering design community, there have been rigorous research efforts such as fuzzy
set-based approaches [6], interval set-based approaches [2, 7], probabilistic-based
approaches [8], a set-based approach based on the multi-attribute utility theory [9], and
a set-based multiobjective optimization approach [10].
Previously, the authors proposed a preference set-based design (PSD) method
[11][12], which obtains a unique design solution set from the point of view of design
preference and robustness under various sources of uncertainty, while incorporating
the designer's preferences in the early phase of design. The PSD method represents
uncertain design information as interval sets.
This paper proposes a new design method, based on the PSD method, which can
obtain diverse possible design solution sets for unexpected circumstances by deriving
the multiple feasible design domains that satisfy the required performances. Obtaining
diverse possible design solution sets makes it possible to deal with unexpected
circumstances by selecting the optimal design solution from the obtained sets in the
early phase of design. In this paper, the proposed design method is applied to the
design problem of an active isolation system for a 4-story building, to cope with an
additional, unexpected earthquake ground motion which the designer could not
consider in the early phase of design.
1. Proposal of a Design Method for Unexpected Circumstances
1.1. Preference Set-based Design (PSD) Method
The PSD method consists of four steps: set representation, set propagation, set
modification, and set narrowing, which are described below [11].
Firstly, to capture the designer's preference structure, both an interval set and a
preference function defined on this set are used. The preference function is used to
specify the design variables and performance requirements, and any shape is allowed
so as to model the designer's preference structure on the basis of his/her knowledge,
experience, or know-how. The interval set at preference level 0 is the allowable
interval, while the interval set at preference level 1 is the target interval that designers
would like to meet, as shown in Figure 1.
Figure 1. Propagation from the initial design variables to possible distribution.
Figure 2. Elimination of the infeasible design subspaces.
Secondly, the possible distributions that are achievable using the given initial design
variables are calculated by a theoretical formula or an approximation expression. For
example, the design variables x1 and x2 are related to the performance Yi by the
mapping function Yi = fi(x1, x2), as shown in Figure 1. Then, if all the possible
distributions have common domains (i.e., acceptable performance domains) with the
required distributions, there are feasible design domains within the initial design
variables. Otherwise, the initial design variables should be modified. However, if the
possible distributions are not a subset of the required performances, as shown in
Figure 1, infeasible subspaces also exist within the initial design variables, which
produce performances outside the performance requirements.
Thirdly, the initial design variables are narrowed to eliminate inferior or
unacceptable design domains. As shown in Figure 2, the initial design variables are
partitioned into two or more levels, where each level has the same interval width at
preference level 0. Figure 2 shows an example of subsets (i.e., subspaces I and II) of
the initial design variables partitioned into two levels, and the possible distributions
obtained by combinations of the decomposed design spaces. When
there are two design variables, the four combinations give the possible distributions.
In this case, the designer can select the upper side of the design subspaces, as shown
in Figure 2, because the possible distribution obtained by combining design subspace
I of design variable x1 with that of design variable x2 exists within the required
performance.
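As a rough illustration of set representation and propagation, the sketch below computes a possible performance interval from two design-variable intervals and checks it against a required interval, as in Figure 1. It assumes a mapping that is monotonic in each variable, so that interval endpoints suffice; the function and numbers are illustrative, not the authors' implementation.

```python
# Sketch of set propagation: a possible performance interval is obtained
# from design-variable intervals through Yi = fi(x1, x2). Monotonicity in
# each variable is assumed so endpoint combinations bound the interval.

from itertools import product

def propagate(f, *intervals):
    """Possible performance interval from design-variable intervals."""
    values = [f(*combo) for combo in product(*[(lo, hi) for lo, hi in intervals])]
    return min(values), max(values)

x1 = (2.0, 6.0)          # allowable interval at preference level 0
x2 = (1.0, 3.0)
required = (5.0, 12.0)   # required performance interval

possible = propagate(lambda a, b: a * b, x1, x2)
feasible = possible[0] <= required[1] and required[0] <= possible[1]
inside = required[0] <= possible[0] and possible[1] <= required[1]
print(possible, feasible, inside)  # (2.0, 18.0) True False -> infeasible subspaces exist
```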
Figure 3. Procedure of our proposed design method.
1.2. Design Method for Obtaining Diverse Design Solution Sets
This paper proposes a new design method, based on the PSD method, which can
obtain diverse possible design solution sets by deriving the multiple feasible design
domains that satisfy the required performances.
Figure 3 shows the procedure of our proposed design method for obtaining diverse
possible design solution sets. To derive the multiple feasible design domains, the
subsets of the initial design variables are subdivided into two or more levels. Figure 4
shows an example of sub-subsets (i.e., sub-subspaces I-i, I-ii, II-i, and II-ii) of the
subsets partitioned into two levels, and the possible distributions obtained by
combinations of the decomposed sub-subsets. The narrowing of the design variables
is repeated until a possible distribution exists within the required performance. In this
case, the upper side of the design sub-subspaces could be one of the feasible design
domains, as shown in Figure 4, because the possible performance obtained by
combining design sub-subspace I-i of design variable x1 and design sub-subspace II-i
of design variable x2 exists within the required performance space.
Figure 4. Elimination of the infeasible design sub-subspaces.
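The narrowing into sub-subspaces can be pictured with the following self-contained sketch, which recursively bisects the design-variable intervals and keeps every combination whose possible distribution lies entirely inside the required performance, yielding multiple feasible design domains. The same monotonicity assumption as in the previous sketch applies, and the mapping and numbers are again illustrative.

```python
# Sketch of set narrowing: bisect design-variable intervals and collect
# all sub-subspace combinations whose possible performance lies inside
# the required interval. Assumes a mapping monotonic in each variable.

from itertools import product

def propagate(f, ivs):
    vals = [f(*c) for c in product(*[(lo, hi) for lo, hi in ivs])]
    return min(vals), max(vals)

def halves(iv):
    lo, hi = iv
    mid = (lo + hi) / 2.0
    return [(lo, mid), (mid, hi)]

def feasible_domains(f, ivs, req, depth=3):
    out = []
    for combo in product(*[halves(iv) for iv in ivs]):
        lo, hi = propagate(f, combo)
        if req[0] <= lo and hi <= req[1]:
            out.append(combo)                 # entirely feasible domain
        elif depth > 1 and lo <= req[1] and req[0] <= hi:
            out += feasible_domains(f, combo, req, depth - 1)  # partial overlap: subdivide
    return out

domains = feasible_domains(lambda a, b: a * b, [(2.0, 6.0), (1.0, 3.0)], (5.0, 12.0))
print(len(domains))   # several distinct feasible design domains
```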
2. Application to Active Seismic Isolation System
2.1. Setting of Design Problem
The present paper applies the proposed design method to an active seismic isolation
system, which can reduce the building vibration associated with earthquake ground
motions. In the active seismic isolation system, a sensor detects the seismic ground
motion, and the control force generated by an active damper then negates the input
from the earthquake ground motion to the building.
An active seismic isolation system based on the conventional point-based design
method might not achieve the required performance under unexpected circumstances,
because the conventional design method obtains a unique optimum design solution
under the assumption of a particular ground motion, including its level and periodic
band. It is therefore necessary to design for various kinds of earthquake vibrations in
the early phase of design.
This study defines two design variables of the active damper for a 4-story
building: the weighting factor Q for the state variable and the weighting factor R for
the control force. Table 1 shows the performance requirements, comprising three
performances: the acceleration on the 4th floor a, the displacement of the base-isolated
layer x, and the control force u. Table 2 shows the two design variables.
We assume the following design situation:
- In the early phase of design, diverse possible design solution sets of the two design variables are obtained in consideration of two earthquake motions: the Hachinohe earthquake motion and the Taft earthquake motion.
- The designer then needs to consider the El Centro earthquake motion as an additional, unexpected circumstance arising as the design phases proceed.
- The design solution sets are obtained for the three different earthquake motions.
This study serves as a rough guideline for determining the values of the design
variables in the early phase of design.
Table 1. Performance requirements.
Performance requirement | Interval at preference level 0 | Interval at preference level 1
Acceleration on 4th floor a [cm/s^2] | [0, 200] | [0, 100]
Displacement of a base-isolated layer x [cm] | [0, 50] | [0, 20]
Control force u [N] | [0, 6.4 x 10^6] | [0, 10^6]
Table 2. Design variables.
Design variable | Interval at preference level 0 | Interval at preference level 1
Weighting factor Q for state variable | [10^10, 10^15] | [10^10, 10^12]
Weighting factor R for control force | [1, 10] | [5, 10]
2.2. Design Solution Sets for Expected Circumstances
First, the design solution sets for two earthquake motions, the Hachinohe earthquake
motion and the Taft earthquake motion, are obtained.
Figure 5 shows the design solution sets for the active damper corresponding to the
Hachinohe earthquake motion and those corresponding to the Taft earthquake motion.
From these results, the intersection of the design solution sets that satisfies the
required performances for both earthquake motions was obtained, as shown in Figure 5.
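The intersection in Figure 5 can be thought of as filtering a common grid of (Q, R) candidates against the requirements for each ground motion, as sketched below. Here simulate() is a toy stand-in, not the structural response analysis of the 4-story building, and only the level-0 intervals of Tables 1 and 2 are used.

```python
# Sketch of intersecting per-earthquake solution sets on a (Q, R) grid.
# simulate() is a placeholder with toy trends, not real building physics.

import itertools, math

def simulate(q, r, quake):
    """Stand-in returning (acceleration [cm/s^2], displacement [cm], force [N])."""
    scale = {"Hachinohe": 1.0, "Taft": 1.2, "El Centro": 1.5}[quake]
    a = 500.0 * scale / math.log10(q)
    x = 100.0 * scale / r
    u = 1.0e5 * math.log10(q) * r
    return a, x, u

def solution_set(quake, q_grid, r_grid):
    """All (Q, R) pairs meeting the level-0 intervals of Table 1."""
    ok = set()
    for q, r in itertools.product(q_grid, r_grid):
        a, x, u = simulate(q, r, quake)
        if a <= 200.0 and x <= 50.0 and u <= 6.4e6:
            ok.add((q, r))
    return ok

q_grid = [10 ** e for e in range(10, 16)]   # Q in [10^10, 10^15]
r_grid = range(1, 11)                       # R in [1, 10]
sets = [solution_set(m, q_grid, r_grid) for m in ("Hachinohe", "Taft")]
print(len(set.intersection(*sets)))         # designs valid for both motions
```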
2.3. Design Solution Sets for Unexpected Additional Circumstances
Next, the El Centro earthquake motion is taken into consideration as an additional
design situation that newly arises as the design phases proceed.
Figure 6 shows the design solution set for the active damper corresponding to the
El Centro earthquake motion, together with the intersection of the design solution sets
that satisfies the required performances for the Hachinohe, Taft, and El Centro
earthquake motions.
These results show the availability of diverse possible design solution sets for
unexpected circumstances, obtained by deriving the multiple feasible design domains
that satisfy the required performances. Obtaining diverse possible design solution sets
makes it possible to deal with unexpected circumstances by selecting the optimal
design solution from the obtained sets in the early phase. Moreover, when no design
solution corresponding to an unexpected situation exists within the possible design
solution sets, the designer can immediately judge whether a design solution for the
unexpected circumstances could exist within them.
Figure 5. Intersection of design solution sets for Hachinohe and Taft earthquake motion.
Figure 6. Design solution sets for expected (Hachinohe and Taft earthquake motion)
and unexpected circumstances (El Centro earthquake motion).
3. Conclusions
This paper proposed a new design method, based on the PSD method, which can
obtain diverse possible design solution sets for unexpected circumstances by deriving
the multiple feasible design domains that satisfy the required performances. The
proposed design method was applied to the design problem of an active isolation
system for a 4-story building, to cope with an additional, unexpected earthquake
ground motion which the designer could not consider in the early phase of design.
Future research should address evaluation indices that support decision-making
within the possible design solution sets.
References
[1] G. Pahl, W. Beitz, J. Feldhusen, and K.H. Grote, Engineering Design: A Systematic Approach, 3rd ed., Springer-Verlag, London, UK, 2007.
[2] A. Ward, J.K. Liker, J.J. Cristiano, and D.K. Sobek II, The Second Toyota Paradox: How Delaying Decisions Can Make Better Cars Faster, Sloan Management Review, 36, 3 (1995), 43-61.
[3] T. McKenney, L.F. Kemink, and D.J. Singer, Adapting to Changes in Design Requirements Using Set-Based Design, Naval Engineering Journal, 123, 3 (2011), 67-77.
[4] T. McKenney and D.J. Singer, Determining the Influence of Variables for Functional Design Groups in the Set-Based Design Process, In: ASNE Day 2012: Proceedings of the American Society of Naval Engineers Day 2012, 2012.
[5] D.J. Singer, C.N. Doerry, and M.E. Buckly, What is Set-Based Design?, In: ASNE Day 2012: Proceedings of the American Society of Naval Engineers Day 2012, 2012.
[6] S-K. Oh, W. Pedrycz, and S-B. Roh, Hybrid Fuzzy Set-Based Polynomial Neural Networks and Their Development with the Aid of Genetic Optimization and Information Granulation, Applied Soft Computing, 9 (2009), 1068-1089.
[7] W.W. Finch and A.C. Ward, Quantified Relations: A Class of Predicate Logic Design Constraints Among Sets of Manufacturing, Operating and Other Variables, In: DETC96: Proceedings of ASME Design Engineering Technical Conference, 1996.
[8] W. Chen and C. Yuan, A Probabilistic-Based Design Model for Achieving Flexibility in Design, ASME Journal of Mechanical Design, 121, 1 (1999), 77-83.
[9] R.J. Malak Jr, J.M. Aughenbaugh, and C.J.J. Paredis, Multi-Attribute Utility Analysis in Set-Based Conceptual Design, Computer-Aided Design, 41 (2009), 214-227.
[10] E. Zitzler, L. Thiele, and J. Bader, On Set-Based Multiobjective Optimization, IEEE Transactions on Evolutionary Computation, 14, 1 (2010), 58-79.
[11] M. Inoue, Y-E. Nahm, S. Okawa, and H. Ishikawa, Design Support System by Combination of 3D-CAD and CAE with Preference Set-Based Design Method, Concurrent Engineering: Research and Applications, 18, 1 (2010), 41-53.
[12] M. Inoue, K. Lindow, R. Stark, K. Tanaka, Y-E. Nahm, and H. Ishikawa, Decision-Making Support for Sustainable Product Creation, Advanced Engineering Informatics, 26, 4 (2012), 782-792.
An approach of body movement-based
interaction towards remote collaboration
Teruaki ITO a,1
a The University of Tokushima, Japan
Abstract. Concurrent Engineering (CE) has been one of the major topics in the
last few decades to achieve the goal of cost and time reduction as well as quality
improvement. Achievement of CE is based on the collaboration of various
activities ranging from design disciplines, manufacturing and assembly, marketing
and purchasing, all the way to the end users. In this respect, collaboration of
people from various activities among different locations is crucial to the success of
CE. To support collaboration between two different places, various types of
communication tools are available these days, including business video conference
systems such as Polycom and proprietary Voice over IP services such as Skype.
Even though remote communication is possible with these tools, participants who
are physically separated during a meeting do not get the same feeling as in a face-
to-face meeting, for several reasons, among them the lack of
presence and immersion. This study proposes an idea of body movement-based
interaction during a remote meeting to feel the presence of remote participants, and
to experience immersion in the virtual space. According to the literature, four
movement features potentially influence immersion in virtual space: natural
control, mimicry of movements, proprioceptive feedback, and physical challenge.
This study focuses on two types of body movement, namely
hand gesture and head movement, to implement the idea of body movement-based
interaction. Hand gesture covers the natural control and mimicry of movement
towards a distant object. This study uses a pair of acceleration sensors and an
inclined plastic panel for movement-based interaction. The acceleration sensors
detect hand motion and position, and the detected signal is recognized by a
signal-recognition algorithm. The plastic panel is used to rest both hands on
during manipulation, providing comfortable and easy operation. The panel also
serves to project a hand gesture in three-dimensional space onto a two-dimensional
gesture, which makes the detection algorithm easier to design [23]. In a bench-level
experiment, it was confirmed that the hand motion controlled the distant
object for simple manipulation. Head movement covers the physical challenge as
well as the mimicry of head movement using a physical object. This study uses a
gyro sensor attached near the ear to detect the head movement of the subject
during conversation. The signal detected by the sensor was used to control a
remote robotic arm, which holds a smart phone for Voice over IP
communication, and to mimic the head movement of the subject in a remote place.
As a result, the presence of the remote subject was recognized significantly more
with the physically active movement of the phone than with the phone in a fixed
position. The idea of body movement-based interaction was thus proposed and
implemented for the two types of movement, hand gesture and head movement.
Based on the preliminary experiments for these two types, the feasibility of the
idea is discussed in this paper.
Keywords. Body movement-based interaction, hand gesture, head movement,
remote collaboration
1 Corresponding Author: Teruaki ITO, Institute of Technology and Science, The University of Tokushima, 2-1 Minami-Josanjima, Tokushima, 770-8506, Japan; E-mail: tito@tokushima-u.ac.jp
1. Introduction
Concurrent Engineering (CE) has been one of the major topics in the last few decades
to achieve the goal of cost and time reduction as well as quality improvement.
Achievement of CE is based on the collaboration of various activities ranging from
design disciplines, manufacturing and assembly, marketing and purchasing, all the way
to the end users. In this respect, collaboration of people from various activities among
different locations is crucial to the success of CE. To support collaboration between
two difference places, various types of communication tools[1][2] are available these
days, including business video conference systems such as Polycom, or proprietary
Voice over IP services such as Skype. Even though remote communication is available
with these tools[9][10][15], participants who are physically separated during a meeting
do not have the same feeling as a face-to-face meeting because of several reasons.
Some of them would be the lack of presence and immersion[5].
This paper proposes an idea of body movement-based interaction during a remote
meeting to feel the presence of remote participants, and to experience the immersion
into the virtual space. According to the literature, the four movement features
potentially influence the immersion in virtual space; namely natural control, mimicry
of movements, proprioceptive feedback, and physical challenge. The study presented in
this paper focuses on two types of body movement, namely hand gesture and head
movement, to implement the idea of body movement-based interaction[12].
Hand gesture covers the natural control and mimicry of movement towards a distant
object[3]. This study uses a pair of acceleration sensors and an inclined plastic panel
for movement-based interaction. The acceleration sensors detect hand motion and
position, and the detected signal is recognized by a signal-recognition algorithm[21]. A
plastic panel is used to fold both hands during manipulation by providing comfortable
and easy operation. The panel also works to project the hand gesture in three
dimensional space onto its two dimensional gesture, which makes it easier to design the
detection algorithm. As a bench level experiment, it was recognized that the hand
motion controlled the distant object for simple manipulation. The idea was
implemented as a prototype system named Acceleration sensor-based Touch Panel
Interface (ATPI).
Head movement covers the physical challenge as well as the mimicry of head
movement using a physical object[11]. This study uses a gyro sensor attached to
around ear in order to detect head movement of the subject during the conversation.
The detected signal by the sensor was used to control the remote robotic arm, which
holds the smart phone for Voice over IP communication, and to mimic the head
movement of the subject in a remote place. As a result, presence of a remote subject
was significantly recognized in the physically active movement of the phone, which
was quite different in comparison with the fixed position of the phone. This idea was
implemented as a prototype system named ARM-COMS.
The idea of body movement-based interaction was proposed and implemented in the
two types of movement, or hand gesture and head movement. This paper presents the
preliminary experiments for these two types, and discusses the feasibility of the idea.
2. Hand gesture interface - ATPI
The first body movement-based interaction proposed in this paper is based on hand
gesture. The interaction uses acceleration sensors to provide intuitive control for users
sharing a remote display [17][18][19]. The idea aims at a touch-less hand gesture
interface [4] called the Acceleration sensor-based Touch Panel Interface (ATPI).
Using a pair of wireless acceleration sensors, the movement signal of a hand gesture
can be detected and recognized in real time based on the algorithm proposed in this
study. Since hand gesture operation using both hands is not comfortable for a long
time, the system uses a plastic panel on which the hands can be placed to keep a
comfortable hand position during operation. The panel is fixed at a certain slope angle,
so that the 3D acceleration data can be projected onto the 2D plane, which makes the
calculation easier. Figure 1 shows the basic outline of the system, which is composed
of a PC, a projector, a screen, a pair of wireless 3D acceleration sensors, and an
operation board. This section describes the overview of hand gesture recognition.
Figure 2 shows the ATPI gestures corresponding to typical finger operations.
Figure 1. System overview of ATPI.
Figure 2. Gesture operation of ATPI vs. finger operation
2.1. Nine types of basic hand gestures
To design the signal-recognition algorithm, nine basic hand gesture motions were
defined as follows: scrolling in four directions (up, down, right, and left), spread to
zoom in, pinch to zoom out, click to select, and motions to start/end (Table 1).
These gestures are analyzed in real time and mapped to one of the functions.
Table 1. Nine gestures for each manipulation function.
Gesture | Function
G1 Put two hands | Start
G2 Release two hands | End
G3 Up-down motion | Selecting
G4 Upward slide | Scroll-up
G5 Downward slide | Scroll-down
G6 Leftward slide | Scroll-left
G7 Rightward slide | Scroll-right
G8 Pinch | Zoom-in
G9 Spread | Zoom-out
2.2. Starting motion gesture (G1)
ATPI is based on hand gesture only. Therefore, starting and ending signals are critical
for recognizing a chunk of gestures, and a specific gesture was defined for the starting
and ending signals. This section explains the starting motion, which is performed
prior to the hand gesture.
For the starting motion, both hands are put on the operation panel with the palm side
down, which is defined as the initial stable hand position (SP1). Here, the z-axis
acceleration signals are zero, and the combination of the x/y-axis acceleration signals
is almost identical to the gravity acceleration value. Then both hands are moved in a
tilting motion with the palms facing each other.
The hand inclination is then almost equal to the slant angle of the operation board, as
shown in schemes (1) and (2) (SP2). By combining SP1 and SP2, the starting motion
can be defined.
2.3. Ending motion gesture (G2)
The ending motion is defined as the reverse of the starting motion. In other words,
both hands start on the panel with the palms facing each other in the SP2 position and
are moved to the SP1 position, which is defined as the ending position. The z-axis
acceleration values shift from negative values to zero, and a threshold value of
-100 mG is used to determine the ending position.
2.4. Sliding motion (G4, G5, G6 & G7)
Sliding motion covers four different hand gestures: up, down, left, and right sliding
motions. All four sliding motions are detected by the following procedure: (i) hand
position detection, (ii) sliding motion detection, (iii) sliding direction detection, and
(iv) sliding distance detection.
(i) Hand position detection
Hand position is calculated based on the x/y acceleration signals and the hand slope
angle. When the difference of the x/y acceleration values over the latest 5 samples is
below the threshold value (20 mG), the hand is recognized as not moving. The slope
angle is calculated using the latest 5 samples of acceleration values.
(ii) Sliding motion detection
When the hand is detected as moving, a slide is detected from the combination of the
acceleration values projected onto the gravity acceleration and the distribution values
given in schemes (3) and (4), with a threshold value of 300 mG. When the distribution
values over the latest five samples exceed the threshold value, the hand gesture is
recognized as a sliding motion.
(iii) Sliding direction detection
If the hand gesture is recognized as a sliding motion, the sliding direction is then
determined. When the distribution value falls below the threshold value, an FFT is
computed from the latest 32 samples of x/y-axis acceleration values, as shown in
Fig. 11, and the power spectrum is calculated, as shown in Fig. 12.
(iv) Sliding distance calculation
The sliding acceleration is calculated as the difference between the sliding motion
acceleration value and the gravity acceleration, with the sign converted to match the
direction. The calculation of the sliding distance in schemes (6) and (7), shown in
Fig. 13, enables the sliding distance to be determined.
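The window-and-threshold logic of steps (i) and (ii) can be sketched as follows. The 5-sample window and the 20 mG and 300 mG thresholds follow the text; the direction rule is a simplified stand-in for the FFT-based detection of step (iii), since schemes (3)-(7) are not reproduced here.

```python
# Sketch of the sliding-motion thresholds. Values follow the text; the
# direction rule replaces the FFT step with a simple dominant-axis test.

def is_stationary(window, thresh_mg=20.0):
    """Not moving: spread of the last 5 samples stays below 20 mG on both axes."""
    xs = [x for x, _ in window]
    ys = [y for _, y in window]
    return max(xs) - min(xs) < thresh_mg and max(ys) - min(ys) < thresh_mg

def is_sliding(window, thresh_mg=300.0):
    """Sliding: spread over the window exceeds the 300 mG threshold."""
    xs = [x for x, _ in window]
    ys = [y for _, y in window]
    return max(max(xs) - min(xs), max(ys) - min(ys)) > thresh_mg

def slide_direction(window):
    """Dominant axis and sign decide the scroll direction (simplified)."""
    dx = window[-1][0] - window[0][0]
    dy = window[-1][1] - window[0][1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

window = [(0, 0), (80, 10), (210, 15), (390, 20), (520, 30)]  # x/y accel in mG
if not is_stationary(window) and is_sliding(window):
    print("slide:", slide_direction(window))   # slide: right
```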
2.5. Pinch and spread motion (G8 & G9)
Spread and pinch motions are detected by the same procedure as the sliding motion.
2.6. Selection motion (G3)
The selection motion is recognized by the simultaneous up-down motion of both
hands, as shown in Fig. 16. Fig. 17 shows how the acceleration signal changes during
this motion.
2.7. Feasibility experiments
A user experiment was conducted to measure the recognition performance.
Manipulation instructions were given to 5 subjects (male, in their 20s), and 7 gesture
motions were performed by those 5 subjects using a plastic panel (350 mm x 250 mm)
at a slope angle of 30 deg. The gesture recognition rate averaged 87% for those 7
gestures, G3-G9 (Fig. 18); of these, the spread (G9) and selection (G3) gestures were
completely recognized, while some others were not. A manipulation experiment was
conducted to assess the usability of the interface. Questionnaires were given to 7
subjects in the manipulation experiments to review the usability, and two issues
became clear: the reaction speed was regarded as slow, and the manipulation did not
always follow the gesture.
3. Head motion interface - ARM-COMS
The second body movement-based interaction proposed in this paper is based on head
motion. The interaction is based on the idea of mimicking a person's head movements
in a remote place, implemented in a prototype system called ARM-COMS
(ARm-supported eMbodied COmmunication Monitor System) [6]. Since a detailed
description of ARM-COMS can be found in the literature [22][24][25], this section
covers just a general overview.
Figure 3. General overview of ARM-COMS.
ARM-COMS is composed of two sub-systems: a tablet PC for video communication
and a robotic arm that controls the physical motion of the tablet PC as an avatar [7].
ARM-COMS traces the head movement of a person during conversation, interprets
the meaning of the motions, and makes the tablet PC physically behave as if the
person were actually there. As opposed to conventional tele-presence robots,
ARM-COMS pursues the realization of entrainment in video conversation with a
remote person. The idea of ARM-COMS addresses two types of issues. One is motion
control, to mimic the movement of a remote person and enable entrainment in
conversation. The other is position control, to show the dynamic relationship with the
participants in a conversation [13] or to show the level of interest in the topics under
discussion.
Challenge 1: Entrainment movement control
It has been reported that entrainment among participants emerges during conversation
if the participants get together in the same physical space and engage in the
conversation [24]. Tracking the head movement of a speaking person, ARM-COMS
mimics the head movement with a robotic arm [20][26] in remote communication to
address this issue.
Challenge 2: Entrainment position control
In a face-to-face meeting, each person takes a meaningful physical position to
represent the relationship with the other participants, or to send a non-verbal message
to others [13]. A closer position would be taken for friends, showing a close
relationship, whereas a more distant position would be taken for strangers, showing an
unfamiliar relationship [14]. ARM-COMS controls the tablet PC to dynamically take
an appropriate position in space, addressing this issue by explicitly representing the
relationship with the other participants.
3.1. Basic structure of motion control
During conversation, various types of body/head movements can be observed. In
order to mimic some of these movements, this study focuses on three types of head
movement, namely nodding, head-tilting, and head-shaking. All of these are very
typical non-verbal expressions in Japan during conversation. Nodding means
affirmative, agreeing, listening, etc. Head-tilting means ambiguous, not sure,
impossible to answer, etc. Head-shaking means negative, disagreeing, etc. Fig. 3
shows the corresponding physical motions implemented by the robotic arm control. If
the monitor behaves like this in conversation, it is assumed that the physical
movements can send a non-verbal message. Technically speaking, these three types of
movement can be regarded as rotations around each axis, as shown in Fig. 3.
Therefore, the rotation angles of the three motions can be calculated as in schemes (1),
(2) and (3).
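Since schemes (1), (2) and (3) are not reproduced in this extract, the following minimal sketch shows one plausible reading: each head motion is treated as a rotation about one axis, and the rotation angle is accumulated from the angular velocities reported by the gyro sensor at a fixed sampling rate. The axis assignments and units are assumptions.

```python
# Sketch: accumulate head rotation angles from gyro angular velocities.
# Axis mapping (x: nod, y: tilt, z: shake) and deg/s units are assumed.

def integrate_angles(gyro_samples, dt):
    """Sum angular velocity * dt per axis; a real implementation would
    also correct for sensor drift."""
    nod = tilt = shake = 0.0
    for wx, wy, wz in gyro_samples:
        nod += wx * dt      # rotation about x-axis -> nodding
        tilt += wy * dt     # rotation about y-axis -> head-tilting
        shake += wz * dt    # rotation about z-axis -> head-shaking
    return nod, tilt, shake

samples = [(30.0, 0.0, 2.0)] * 10          # 10 samples of mostly nodding
print(integrate_angles(samples, dt=0.02))  # (6.0, 0.0, 0.4) degrees
```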
3.2. Prototype system of ARM-COMS
A prototype robotic arm system was built to mimic the head motions based on
schemes (1), (2) and (3). Fig. 4 shows the overview of the system architecture of the
prototype. The prototype is composed of a table-top robotic arm (Lynxmotion) with a
motor controller board (SSC-32 Ver. 2.0), which is connected to a PC (Windows 7)
by a serial connection. The robotic arm is controlled from the PC using an integrated
sensor (TSND-121, ATR Promotion) through a Bluetooth connection.
3.3. Feasibility study of video-communication using ARM-COMS
A feasibility experiment was conducted to compare video communication with and
without ARM-COMS, in order to clarify the effectiveness of the ARM-COMS idea.
Four pairs of participants were recruited from among students, and two video
conversations were conducted by each pair. One conversation was a regular video
communication, and the other was based on the ARM-COMS idea. Fig. 6 shows the
overview of the experimental setup. The video communication was conducted
between Site-A and Site-B by a pair of subjects, Subject-A and Subject-B, located in
separate places.
Figure 4. Feasibility experiments.
Site-A is regarded as the local site where ARM-COMS is installed. For a tentative
purpose, a smart phone is attached to the robotic arm to be used as a pseudo-active
display. Subject-A communicates with Subject-B via Skype on the smart phone. A
magnetic sensor (Fastrak, POLHEMUS) is attached to the head of Subject-A during
conversation to detect the head motion of Subject-A.
Site-B is regarded as the remote site where Subject-B communicates with Subject-A
in Site-A through video communication (Skype) on a laptop PC. The integrated
sensor TSND-121 was attached to the head of Subject-B to trace the head movement
during conversation, which was also used to control ARM-COMS in Site-A. The
sensing data from the integrated sensor was transmitted to the client program on the
laptop through Bluetooth. The socket program communicates with the server program
on the desktop PC at Site-A, and controls ARM-COMS via a Wi-Fi network. The
data collected from the integrated sensor was also used to analyze the head movement
of Subject-B during the video conversation.
Subject-A and Subject-B were video-recorded at both Site-A and Site-B during the
whole conversation, and the recordings were used to synchronize the head
movements of Subject-A and Subject-B in the conversation.
Several topics for video conversation were prepared from daily conversation, for
example: "Which animal do you like?", "What did you eat for breakfast?", "How did
you come to the university today?", "Which convenience store do you often go to?",
etc. Each pair of subjects was asked to perform a one-minute video conversation
about one of the selected topics twice. The first conversation was without the robotic
arm manipulation, similar to a regular Skype conversation, and the second was with
arm manipulation based on the idea of ARM-COMS.
3.4. Results and discussion
Video conversation was smooth in all four pairs, with or without ARM-COMS. It
was observed that most of the four pairs conducted their conversation without
dynamic head movement, simply because that is what they normally do in regular
conversation.
Figure 7 shows an example of the comparison between conversation with and
without ARM-COMS. As opposed to the dynamic head movement of Subject-B, that
of Subject-A was stable. Entrainment of the head was not significantly observed.
However, synchronization of the two subjects was observed in the bowing at the
beginning and at the end of the conversation.
For smooth conversation, a topic memo was given to each subject. As a result, each
subject quite often looked down at the memo during the conversation, rather than
looking at the counterpart on the video screen. The subjects did not use the memo
during the greetings at the beginning and the end. The memo made it possible to
perform the conversation in a smooth manner; however, it might also have hindered
entrainment with the counterpart. Future work needs further discussion of the
experimental setup.
4. Concluding remarks
The idea of body movement-based interaction for remote communication was
proposed in this study, and two types of movement, namely hand gesture and head
movement, were presented to demonstrate the idea. As for the hand gesture-based
interaction, this paper presented the idea of ATPI, by which hand gesture covers the
natural control and mimicry of movement towards a distant object. As for the head
movement-based interaction, this paper presented the idea of ARM-COMS, which
not only presents the tele-presence of a remote person, but also explicitly shows the
relationship between the remote person and the local participants. Based on the
preliminary experiments for these two types of interaction using the prototype
systems, the feasibility of the idea was discussed in this paper. For supporting
collaboration between two different places, which is very critical in CE
implementation, the results of the preliminary experiments in this paper suggest one
clue to supporting remote collaboration.
References
[1] Andries van Dam, "POST-WIMP User Interfaces", Communications of the ACM (ACM Press) 40 (2) (1997), pp. 63-67.
[2] Abowd, G.D. and Mynatt, E.D., 2000, Charting past, present, and future research in ubiquitous computing, ACM Transactions on Computer-Human Interaction (TOCHI), v.7 n.1, pp.29-58.
[3] S. Chu and J. Tanaka, "Hand Gesture for Taking Self Portrait", Proceedings of 14th International
Conference on Human-Computer Interaction (HCI International 2011), Human-Computer Interaction,
Part II, LNCS 6762, pp.238-247, Orlando, Florida, USA, July 9-14, 2011
[4] Fukuoka, Y., K. Komuro, and M. Ishikawa, Zooming Touch Panel, Interaction 2007, pp.33-34, March
2007. (in Japanese)
[5] Greenberg, S., 1996, Peepholes: low cost awareness of one's community, Conference companion on
Human factors in computing systems: common ground, Vancouver, British Columbia, Canada, pp.206-
207.
[6] Ito, T. and T. Watanabe, ARM-COMS: Arm-supported embodied communication monitor system, Procs.
of HCI international, Las Vegas, NV, U.S.A., 2013.
[7] Kashiwabara, T., H. Osawa, K. Shinozawa, M. Imai, TEROOS: a wearable avatar to enhance joint
activities, Annual conference on Human Factors in Computing Systems, pp. 2001-2004, 2012/5.
[8] Kim, K., Bolton, J., Girouard, A., Cooperstock, J., and Vertegaal, R.: TeleHuman: Effects of 3D
Perspective on Gaze and Pose Estimation with a Life-size Cylindrical Telepresence Pod, Proc. of
CHI2012, pp.2531-2540 (2012).
[9] Kinect, available at http://www.microsoft.com/en-us/kinectforwindows/
[10] Kitaguchi T., S. Torikoshi, H. Takada, A shared display operation environment using mobile devices equipped with camera and touch panel for face-to-face collaborative work, Interaction 2011. (in Japanese)
[11] Kubi: http://revolverobotics.com/meet-kubi/
[12] P. Mistry, P. Maes. SixthSense A Wearable Gestural Interface, In the Proceedings of SIGGRAPH
Asia 2009, Sketch. Yokohama, Japan. 2009.
[13] Okada, K., F. Maeda, Y. Ichikawa and Y. Matsushita, Multiparty videoconferencing at virtual social
distance: MAJIC design, SCW '94 Proceedings of the 1994 ACM conference on Computer supported
cooperative work (CSCW 94), 1994, pp.385-393.
[14] Osawa, T., Yuji Matsuda, Ren Ohmura, Michita Imai, Embodiment of an agent by
anthropomorphization of a common object, Web Intelligence and Agent Systems: An International
Journal, vol. 10, pp. 345-358, 2012.
[15] Otsuka, T., S. Araki, K. Ishizuka, M. Fujimoto, M. Heinrich, and J. Yamato, A Realtime Multimodal
System for Analyzing Group Meetings by Combining Face Pose Tracking and Speaker Diarization,
Proc. of the 10th International Conference on Multimodal Interfaces (ICMI'08), pp. 257-264, Chania, Crete, Greece, 2008.
[16] Pencil project, available at http://www.evolus.vn/en-US/Home.aspx
[17] Peltonen, P., E. Kurvinen, A. Salovaara, G. Jacucci, T. Ilmonen, J. Evans, A. Oulasvirta, and P.
Saarikko, It's Mine, Don't Touch!: Interactions at a Large Multi-Touch Display in a City Centre,
CHI2008, pp.1285-1294, April 2008.
[18] Sirkin, D. and W. Ju, Consistency in physical and on-screen action improves perceptions of
telepresence robots, HRI '12 Proceedings of the seventh annual ACM/IEEE international conference on
Human-Robot Interaction, 2012, pp.57-64.
[19] Sugita K., J. Takakura, and Y. Yamauchi, A remote display operation interface using the see-through touch panel, IEICE Technical Report MVE2008-121 (2009-3), pp.57-60. (in Japanese)
[20] Tariq, A. M. and T. Ito, Master-slave robotic arm manipulation for communication robot, Japan
Society of Mechanical Engineer, Proceedings of 2011 Annual meeting, Vol.11, No.1, p.S12013, Sep.
2011.
[21] Tokoro, Y., T. Terada, and M. Tsukamoto, A pointing method with two accelerometers, 16th Workshop on Interactive Systems and Software (WISS2008), November 2008.
[22] Tomotoshi, M. and T. Ito: A study on awareness support method to improve engagement in remote
communication, the first International Symposium on Socially and Technically Symbiotic System
(STSS2012), 39, pp.1-6, Okayama, Aug. 2012.
[23] http://blog.use-design.com/category/design/
[24] Watanabe, T., Human-entrained Embodied Interaction and Communication Technology, Emotional
Engineering, Springer, pp.161-177, 2011.
[25] Watanabe, T., Okubo, M., Nakashige, M., and Danbara, R.: InterActor: Speech-Driven Embodied
Interactive Actor; International Journal of Human-Computer Interaction, Vol.17, No.1, pp.43-60 (2004).
[26] Wongphati, M., Y. Matsuda, H. Osawa, M. Imai, Where do you want to use a robotic arm? And what do you want from the robot?, International Symposium on Robot and Human Interactive Communication, pp. 322-327, 2012/9.
How to Successfully Implement Automated
Engineering Design Systems: Reviewing
Four Case Studies
JOEL JOHANSSON, FREDRIK ELGH
Mechanical Engineering, School of Engineering,
Jönköping University, Sweden
Abstract. Introducing computerized systems to automate engineering design
activities (design automation) in manufacturing companies promises improved
product quality, shortened time to production, increased cost control, and less
effort to adapt products to new customer requirements. Motivated by these
benefits, considerable effort has been put into developing computer systems
automating a variety of engineering design activities throughout the product and
production development process. The question now is: is design automation
ready to launch yet? In this paper, we review four cases of design automation for
engineer-to-order products to give guidelines for developing engineering design
automation systems.
Keywords. Automated Engineering Design, Knowledge Object, Inference engine
1. Introduction
Introducing computerized systems to automate engineering design activities (design
automation) in manufacturing companies promises improved product quality,
shortened time to production, increased cost control, and less effort to adapt products
to new customer requirements. Motivated by these benefits, considerable effort has
been put into developing computer systems automating a variety of engineering
design activities throughout the product and production development process. Here,
we only consider the design automation of products that are engineered-to-order, i.e.
products for which it is not possible to develop pre-defined sets of configurations to
select from.
To develop guidelines for the selection of technical solutions, four design automation
projects (driven by the authors during 2002-2012) are reviewed in this paper. The
systems were rated according to defined criteria for good design automation systems.
The scores were then compared to the content of each system (the automated
engineering knowledge) in order to see how the content affects the resulting system.
The systems were evaluated based on the following criteria [1]:
Transparency: The level of clearness and accessibility of the documentation and
visualization of the product and its product structure, design process, design tasks,
and design knowledge. A highly transparent process is to be seen as an antonym to a
black-box process.
User readable and understandable knowledge: The level to which the knowledge
(e.g. design rules) is expressed in a user-readable and understandable format. Rules
expressed in formal language are, for example, easier to read and understand, but
less efficient in execution than rules expressed in some computer programming
language.
Scalability: The possibility of expanding the system towards higher system
complexity through a realization architecture that allows the application to grow and
expand with emerging details, additional or refined tasks to be performed, additional
knowledge to be added, and additional application modules to be implemented.
Flexibility: The possibility of expanding the system within the same level of system
complexity through a realization architecture that allows the application to grow and
expand with additional variants, products, and/or sub-systems.
Longevity: Factors that can affect system longevity include dependence on single
specialized vendors, the level of transparency, the level of user-readable and
understandable knowledge, and the ease of application overview and maintenance.
Investment: The level of initial cost for implementation and the use of capital
resources (in relation to a predicted total cost of system development and operation).
Effort of development: The level of development (and expansion) effort in terms of
the use of human resources.
The following systems are detailed and evaluated in the paper:
The CoRPP system (2003): This system was developed as a research case study and
targets the preliminary design of a bulkhead part of a submarine escape section for
subsequent cost calculation [2].
The Kongsberg-Automotive system (2007): This system automated the layout of
heating elements for car seats in order to support cost calculation and production
planning [3].
The BendIT system (2008): This system targets the development of toolsets for the
rotary draw bending of aluminum profiles. It combined KBE, CAD, and FEM to
make design proposals for the tools and to subsequently simulate and analyze the
production outcome.
The TRackS system (2010): This system targets the development of ski-racks for cars
without rails. Specifically, the system is used to retrieve existing components to make
new combinations for new car models. When the combinations of components are
established, the behavior of the complete ski-racks during a car collision is
automatically simulated using FEM simulations [4].
2. System descriptions
This study is based on four cases, which are briefed in this section. All the systems
were developed as parts of research projects, as is common for many automated
design systems targeting products that are engineered to order.
2.1. System 1: CoRPP, knowledge processing, static flow
The primary purpose of the CoRPP (Coordinated Realisation of Products and
Processes) system was to support the company in its effort to obtain design solutions
with enhanced producibility through studies of variations in cost, weight, and
operation time.
The main element of the bulkhead is a circular plate with vertical structural members,
which consist of cut, rolled, and welded steel plating, as shown in Figure 1.
Figure 1. A bulkhead and examples of stiffener variants.
The system architecture was modular, with the knowledge captured in knowledge
objects grouped into separate modules.
The knowledge base (the knowledge objects in the different modules) is executed on
the basis of different customer specifications. The product design module generates
parameters that serve as input to product geometry, process planning, and cost
estimation. Product geometry, process planning, and cost estimation consist of a
number of interrelated knowledge objects (generic templates) that are instantiated and
then executed and configured in accordance with the input parameters.
The system was developed together with an industrial partner and a research institute
using a commercial off-the-shelf (COTS) approach (comprising MS Access, MS
Excel, MS Visual Basic, Mathsoft Mathcad, and PTC Pro/Engineer). The modules for
process planning and cost estimation were developed by one of the authors. The
system was considered to have many areas of use at the company: design calculations,
design optimization, geometry modeling, automated CAD generation, knowledge
repository, design manual, process planning, cost estimation, operation time
estimation, and weight calculations.
The system includes a geometry modeler separated from the commercial software for
solid modeling. An extended product model was implemented in the geometry
modeler, supporting the process planning and cost estimation of the product.
The bulkhead was modeled in a software application as parametric solid models,
using methods that permit dimensional and topological changes [5]. The geometry
modeler drives the parametric solid models. A nomenclature was defined and
implemented, which enabled the mapping between the geometry modeler and the
standard process plans. Standard process plans, with an integrated system for cost
estimation, were created in a common spreadsheet application. The operations in the
process plans were activated in either of two ways: if there was a corresponding
feature in the geometry model, or in accordance with rules by which operations are
interrelated. Geometrical and topological cost drivers were identified and the
corresponding parameters stated in the standard process plan. Production data and
costs for production resources were gathered in tables.
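A minimal sketch of the two activation mechanisms is given below: an operation becomes active if its feature appears in the geometry model, or if an interrelation rule ties it to another active operation, and the active operations then drive the cost estimate. The operation names, the rule, and the cost figures are illustrative, not CoRPP data.

```python
# Sketch of activating standard process-plan operations and summing cost.
# Features, operations, rules and costs are illustrative values only.

features = {"plate", "stiffener", "hole"}          # from the geometry modeler

plan = [
    {"op": "cut plate",      "feature": "plate"},
    {"op": "roll plate",     "feature": "plate"},
    {"op": "weld stiffener", "feature": "stiffener"},
    {"op": "drill",          "feature": "penetration"},   # feature absent -> inactive
]
rules = {"weld stiffener": "inspect weld"}   # interrelated operations

# Mechanism 1: activate by matching feature; mechanism 2: activate by rule.
active = [step["op"] for step in plan if step["feature"] in features]
active += [rules[op] for op in active if op in rules]

costs = {"cut plate": 120, "roll plate": 80, "weld stiffener": 300, "inspect weld": 50}
print(active, sum(costs.get(op, 0) for op in active))
```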
Figure 2: Heat elements
2.2. System 2: KA, knowledge processing, static flow, and information handling
The scope of the KA system was to generate variant designs of heating elements
based on different customer specifications and seat geometries. The heating elements
are part of a car seat heater. A heating element consists of a carrier material, a wire,
and a connecting cable. The wire is laid out and glued in a pattern of sinusoidal loops
between the two layers of carrier (Figure 2).
The pattern is calculated on the basis of company-aggregated knowledge. The
purpose was to combine some of the functions and properties relating to information
handling and knowledge processing into one system. The objectives of the system
were: to cut quotation lead-time, allow for evaluation of different design alternatives,
quality-ensure the design process, capture design knowledge, ensure producibility,
and provide design documentation.
The system was developed by one of the authors in cooperation with a programming
consultant. The knowledge base comprises rules in Catia Knowledge Ware Advisor
(KWA). The rules are linked (through an Access database) to different Knowledge
Objects. The Knowledge Objects can be of different types (e.g. Catia KWA rules,
Mathcad worksheets), in which the methods of the different Knowledge Objects are
implemented. The rule firing, invoking the Knowledge Objects, is controlled by an
inference engine (CATIA KWA in early versions, and in-house developed in later
versions of the system). The company resources with associated manufacturing
requirements are stored in an Access database together with the Knowledge Objects.
The graphical user interface and the interfaces to different software applications and
databases are programmed in Visual Basic. The system is fed with customer-specific
input (parameters with associated values, together with a 2D outline of the heated
seat areas). The main output is the pattern for the heating wire's centerline, an
amplitude factor for the sinusoidal loops, and the wire specification.
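The role of the amplitude factor can be illustrated with a small arc-length calculation for a sinusoidal wire layout, sketched below. The actual pattern rules are company-aggregated knowledge and are not reproduced; the formula and dimensions here are generic assumptions.

```python
# Sketch: wire length of a sinusoidal layout y = A*sin(2*pi*s/L) along a
# centerline, computed as a numerical arc length. Dimensions are made up.

import math

def wire_length(centerline_mm, amplitude_mm, wavelength_mm, n=10000):
    ds = centerline_mm / n
    k = 2 * math.pi / wavelength_mm
    length = 0.0
    for i in range(n):
        dyds = amplitude_mm * k * math.cos(k * i * ds)   # slope of the loop
        length += math.hypot(1.0, dyds) * ds             # local arc length
    return length

# A 400 mm centerline with 10 mm loops repeating every 25 mm needs about
# twice the wire of a straight run.
print(wire_length(400.0, 10.0, 25.0))
```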
Figure 3: Suggested design as CAD model (left), and analysis model (right) of a rotary draw bending toolset.
2.3. System 3: BendIT, knowledge processing, dynamic flow
The target for the BendIT system was to design tool-sets for the rotary draw bending of
aluminum profiles with general sections. The complete process was fully automated
including initial estimations of spring back, required bending moment, need for section
support and other phenomena based on handbook formulas and formulas derived from
fundamental physical laws to generate a design proposal represented in CAD software
(left in Figure 3). To render the CAD-model, first the volume allocated by the profile during all the manufacturing steps had to be generated (this was done using automated CAD-functionalities); subsequently, template CAD-models of tool-sets were retrieved and the previously generated geometry was removed using Boolean operations to obtain the tool cavities. The design proposal was then used to generate simulation models for
each manufacturing step in the tool-set (right in Figure 3). The results from the simula-
tions, the simulated production outcome, were automatically analyzed for wrinkling of
the profiles.
The structure of the system was completely modular based on knowledge objects.
The solution path of the knowledge base was dynamic, so that knowledge objects
were executed on demand, controlled by an inference engine developed by one of the
authors. The knowledge objects were used to connect to MS Excel, CATIA, MS Ac-
cess, PTC MathCAD, and LS-Dyna. Additionally, routines were developed and auto-
mated through knowledge objects to convert CATIA mesh models to LS-Dyna, to
make suggestions on where to support the profiles, and to detect wrinkles.
In the system, it was possible to add redundant knowledge. In other words, knowledge based on rules of thumb, knowledge based on formulas analytically derived from fundamental laws of physics, knowledge based on experiments, and knowledge based on simulations could all exist for the same phenomenon at the same time. For example, there were three knowledge objects calculating the developed length of a circular aluminum tube. Meta-knowledge was added so that the special inference engine could execute the appropriate knowledge objects in the different contexts of running the system.
The system was finally used to investigate the design space of general aluminum
profiles.

Figure 4: Thule clamp development process.
2.4. System 4: TRackS, information handling and visualizing
The Thule Rack System (TRackS) was developed targeting the automation of adapting
a special product to new specifications. The system utilizes the Case Based Reasoning method to retrieve existing components to assemble into new product variants. The method was applied to two targeted components: one where the search could be performed directly on component geometry and one where the search was based on clearance analyses.
The system was developed using procedural programming and was embedded as an add-in to the SolidWorks CAD software. The user creates a new project in the add-in and selects the roof model, from which the system makes a new assembly, inserting the roof in the correct position. The user then creates two datum planes to indicate where to place the racks, whereupon the system automatically searches for existing footpads that would fit the roof at the given positions. When the search is finished, the user can evaluate the footpads based on fast in-context previews. The well-fitting footpads are automatically retrieved and placed on the roof in the assembly. Subsequently the racks are mounted on the footpads and a new search procedure, for suitable brackets, starts, also including fast in-context previews.
When the rack model is complete, including footpads and brackets, a simulation model is automatically created in order to run crash simulations. The simulation models are generated using a naming convention in the CAD files, which defines how each part should be idealized, together with macro programming for the FEM pre-processor (ANSA).
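A minimal sketch of how such a naming convention can drive the idealization step may help; the prefixes and idealization names below are hypothetical assumptions for illustration, since the paper does not give the actual convention used with ANSA:

IDEALIZATION_BY_PREFIX = {
    "SHL_": "shell",   # thin-walled parts meshed with shell elements
    "SOL_": "solid",   # bulky parts meshed with solid elements
    "BLT_": "beam",    # bolts replaced by beam/connector elements
}

def idealization_for(part_name: str) -> str:
    """Return the idealization implied by the part's naming convention."""
    for prefix, idealization in IDEALIZATION_BY_PREFIX.items():
        if part_name.startswith(prefix):
            return idealization
    return "solid"  # conservative default when no convention matches

print(idealization_for("SHL_footpad_left"))  # -> shell

The pre-processor macro would then branch on the returned idealization when meshing each part.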
3. System contents
If the design automation system somehow deals with the embodiment of the product, it
needs to be capable of geometrical modeling. The authors' experience is that introducing geometrical modeling into a design automation system affects criteria 1, 2, 6
and 7 negatively. All the reviewed systems included geometrical modeling (as geometry is fundamental to design, this is what makes the big difference compared to other computer systems; see Table 1). The geometrical modeling capability can be implemented using either
commercially available CAD-systems or in-house developed routines. When using
commercial CAD-systems the resulting product geometries can be rendered either by
adapting pre-defined template-models that might vary parametrically and topologically,
or by generating them by macro programming.
The first system used parameterized CAD templates adapted to new product specifications, while the second system instead generated the geometry. The third system was a hybrid, using templates adapted through Boolean operations on generated geometry. The last system used the CAD-system for previewing geometry only, since the geometrical functions were instead implemented in the system's core code. The main reasons were the lack of necessary functionality (clearance analyses of footpads and brackets) and that communication through the API made the performance poor.
The engineer-to-order process often includes the simulation and analysis of the suggested geometry through FEM calculations. The system would then include automatic idealization and meshing of the product. In addition, boundary conditions, constraints and other definitions have to be generated. The last two systems included the automation of FEM analyses based on naming CAD features and macro programming.

Table 1. Categorizing the knowledge content of the systems.

             Geometry   CAD   In-house   Templates   Generative   FEM
 1. CoRPP       X        X                   X
 2. KA          X        X                               X
 3. SAPA        X        X                   X           X         X
 4. TRackS      X        X       X           X           X         X
4. System structure
The system structure can be either modular or not. If the system is modular, its execu-
tion flow can be fixed predefined, runtime static or runtime dynamic. A fixed prede-
fined flow means that the execution sequence is determined during the design of the
system and hard coded into the machine code. A runtime static execution flow means
that the modules are executed in a predefined order that is editable in the system with-
out rebuilding it. A runtime dynamic flow means that the modules are executed based
on the current status of the system, either whenever there is enough information (as soon as possible) or on demand (as late as possible).
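The run-time dynamic, as-soon-as-possible variant can be sketched in a few lines of Python; this is an illustration under assumed interfaces (modules declared as name/inputs/outputs/function tuples), not the inference engine of any of the reviewed systems:

def run_asap(knowledge_objects, known):
    """Execute each module the moment all of its inputs are known.
    knowledge_objects: list of (name, inputs, outputs, func) tuples.
    known: dict mapping parameter names to values available at start."""
    pending = list(knowledge_objects)
    progress = True
    while pending and progress:
        progress = False
        for ko in list(pending):
            name, inputs, outputs, func = ko
            if all(p in known for p in inputs):    # enough information present?
                results = func(**{p: known[p] for p in inputs})
                known.update(dict(zip(outputs, results)))
                pending.remove(ko)
                progress = True                    # new data may enable more modules
    return known

# Two chained, illustrative knowledge objects:
kos = [
    ("bend_moment", ["radius", "section"], ["moment"],
     lambda radius, section: (section / radius,)),
    ("springback", ["moment"], ["angle_correction"],
     lambda moment: (0.1 * moment,)),
]
print(run_asap(kos, {"radius": 50.0, "section": 1200.0}))

An on-demand (as late as possible) engine would instead start from the requested outputs and recurse backwards through the dependencies.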
The first three systems were based on a modular structure called knowledge
objects, of which the first used a fixed pre-defined execution flow, the second used an
inference mechanism that resolved the execution order when the system was invoked, based on the knowledge objects' dependencies, and the third system executed the knowledge objects dynamically whenever enough information was present at run-time,
see Table 2.

Table 2. Categorizing the structures of the systems.

             Modular   Fixed pre-     Run-time      Run-time       As soon as   On
                       defined flow   static flow   dynamic flow   possible     demand
 1. CoRPP       X           X
 2. KA          X                          X
 3. SAPA        X                                        X             X
 4. TRackS                  X

 (Suggested increased difficulty from left to right.)

5. Evaluation of the systems
The four systems were compared based on the criteria mentioned in the introduction and assigned the values poor, moderate, and good; see Table 3.
5.1. Transparency
The first three of the systems were based on a modular structure, i.e. knowledge objects. The first two systems were built on knowledge objects automating widespread commercial software, making the human readability of the knowledge high, whereas the knowledge automated in the last system was built into machine code. Even though the third system was modular and built on knowledge objects, some of the knowledge chunks were hard coded into machine code, making the readability of the system moderate.
The first three systems also had in common the use of the DSM (design structure matrix). In the first two systems the design process was visualized using a DSM that provided access to underlying executable rules and documentation (both general and variant specific). In the third system, the dependencies of knowledge objects and/or design parameters were visualized through DSMs.
The product structure in CoRPP was not explicit as it was in the KA system, where an explicit product structure proved to enhance the transparency of the system.
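A DSM of knowledge-object dependencies is simply a square matrix with one row and column per object, where entry (i, j) marks that object i depends on object j. A minimal sketch with illustrative object names (not the actual contents of the systems described here):

objects = ["geometry", "process_plan", "cost"]
depends_on = {"process_plan": {"geometry"}, "cost": {"geometry", "process_plan"}}

# Build the matrix: row i lists the objects that objects[i] depends on.
dsm = [[1 if objects[j] in depends_on.get(objects[i], set()) else 0
        for j in range(len(objects))] for i in range(len(objects))]

for name, row in zip(objects, dsm):
    print(f"{name:>12} {row}")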
5.2. User readable knowledge
The knowledge for design calculations was explicit in the first three cases. It was made explicit by widespread commercial software for spreadsheets and mathematics. The CAD modeling rationale was treated differently, though. In system 1, the geometry calculations were separated from the CAD software, which made it possible to add notes on the rationale. However, this was very time consuming and required quite a lot of effort in the development phase. In the second system all geometry was treated in CAD software. An extensive macro was developed that required separate documentation for future maintenance. Even though the geometrical calculations were made explicit, some of the routines were developed in-house and compiled to machine code. In system 4, the underlying knowledge was completely hard coded, making it hard to read.
5.3. Scalability
Systems 1 and 4 had hard-coded execution sequences and product structures. System 2 had an execution sequence resolved at run-time by an inference mechanism and a flexible product structure. In system 3 the execution of knowledge objects was based on the current system status. It was possible to add redundant knowledge objects, making for a highly scalable and flexible system.

J. Johansson and F. Elgh / How to Successfully Implement Automated Engineering Design Systems 180
5.4. Flexibility
Systems 1 and 4 were special-purpose systems, while systems 2 and 3 were built using in-house developed platforms for design automation applications that are general and could be applied to a variety of design activities.
5.5. Longevity
The longevity scores follow from the flexibility and scalability results above.
5.6. Investment
The development of systems 1 to 3 included developing platforms for automating engineering design, which required quite a lot of man-hours. However, the investments in software were at a minimum. System 4 targeted the automation task directly, without the development of a platform, and took about one man-year to develop. That system was integrated as an add-in to the CAD-system already existing at the company.
5.7. Effort of development
The design calculations and the separation of geometry algorithms required quite a lot of effort in systems 1 and 4, whilst the programming of the CAD macros and the in-house routines for geometry handling were the most time-consuming tasks in the development of systems 2 and 3.

Table 3. The different systems were scored based on the criteria mentioned in the introduction.
Criteria CoRPP KA BendIT TRackS
1. Transparency Good Good Moderate Poor
2. User readable knowledge Moderate Moderate Moderate Poor
3. Scalability Moderate Good Good Poor
4. Flexibility Poor Good Good Poor
5. Longevity Moderate Good Good Poor
6. Investment Moderate Moderate Moderate Good
7. Effort of development Moderate Moderate Moderate Moderate
6. Conclusions
When comparing the contents of the systems to the evaluations it seems that, when
possible, the use of template-models in commercial CAD-systems to automate the embodiment of engineer-to-order products increases the possibility of a successful design
automation project. It is also seen that the introduction of automated analysis through
FEM-simulations is difficult and affects criteria 1 to 5 negatively.
We can conclude that when planning the automation of engineering design pro-
cesses it is important to consider what types of content the final system will have. The
answers to the following questions can serve as an indication of how well the result will meet the criteria stated in the introduction:
• Does the system need to interact with geometry?
• Is it possible to utilize commercially available CAD-systems, or is there a need for specialized geometrical functions?
• If CAD-software may be used, can the geometries be represented using adaptable templates, or should they be generated?
• Is there a need for frequent geometrical evaluation? If so, will the performance of the system when using CAD-software, compared to developing one's own functions, affect the usability of the system?
• Will the system automate FEM-simulations?

The more affirmative answers, the harder it will be to implement a system meeting all
the criteria.
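As a small illustration of this indication (the thresholds are assumptions, not taken from the evaluation above), the affirmative answers can be counted into a rough difficulty score:

ANSWERS = {  # one boolean per question above, for a hypothetical project
    "interacts with geometry": True,
    "needs specialized geometrical functions": False,
    "geometries must be generated": True,
    "frequent geometrical evaluation": False,
    "automates FEM-simulations": True,
}

def difficulty(answers):
    score = sum(answers.values())
    return "low" if score <= 1 else "moderate" if score <= 3 else "high"

print(difficulty(ANSWERS))  # -> moderate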
Comparing the system structures to the evaluations shows that the system structure
should as far as possible be modular and the execution flow should not be fixed but
runtime static or runtime dynamic.
Of course, many factors affect the result of a design automation project, of which this paper touches just a few. However, since the computerization of engineering design is rapidly increasing, further investigation is necessary. It seems, for instance, hard to decide the size of knowledge objects, how to document them, and how to manage the organization around the design automation system.
Further, the development and commercialization of software facilitating all the functionality necessary to support the automation of engineering design is demanded.
7. Acknowledgements
The authors express their gratitude to all sponsors for funding and to all participants for technical support in the projects mentioned in this paper.
References
1. Cederfeldt, M., Planning Design Automation: A Structured Method and Supporting Tools, Institutionen för produkt- och produktionsutveckling, Chalmers University of Technology, Gothenburg, 2007.
2. Elgh, F. and M. Cederfeldt, Concurrent cost estimation as a tool for enhanced producibility - System development and applicability for producibility studies, International Journal of Production Economics, 2007, 109(1-2), pp. 12-26.
3. Elgh, F., Decision support in the quotation process of engineered-to-order products, Advanced Engineering Informatics, 2012, 26(1), pp. 66-79.
4. Johansson, J., Combining Case Based Reasoning and Shape Matching Based on Clearance Analyzes to Support the Reuse of Components, 2012.
5. Cederfeldt, M. and S. Sunnersjö, Solid Modelling with Dimensional and Topological Variability, in Research for Practice - Innovations in Products, Processes and Organisations: Proceedings of the 14th International Conference on Engineering Design, August 19-21, 2003, Stockholm, Sweden, The Design Society, Glasgow, 2003.

Service Process Estimation and
Improvement on Verbal Characteristics
Leonid KAMALOV a,1, Alexander POKHILKO a, Ivan GORBACHEV a and Evgeny KAMALOV b
a Ulyanovsk State Technical University, 32 North Venets st., 432027 Ulyanovsk, Russian Fed.
b Ulyanovsk Regional Clinical Hospital, 7 3rd International sq., 432048 Ulyanovsk, Russian Fed.
Abstract. The paper considers problems of service process improvement. Process characteristics, expressed by means of verbal expressions, are the key features that lead to consumer satisfaction. A mathematical model of verbal estimation and a method of service process improvement are proposed. The verbal estimation is based on fuzzy logic. The paper provides a software simulation of process verbal analysis and improvement. General results and an application in the health service are described.
Keywords. Service improvement, process estimation, linguistic variables, Bellman's optimality principle.
Introduction
A service is a sequence of activities that lead to some result. In health treatment the result is complete recovery, but in most cases the aim of the treatment is the elimination of some disease. Results may differ, and it is necessary to understand what it takes, in a wide sense, to achieve the result. In other words, there are always some constraints that must be observed during the process. These constraints determine which activities may be executed and which may not. Constraints are represented by limited resources, time, laws, physical conditions and so on. In health treatment the physical condition, age, patient's habits and chronic ailments must be taken into account.
This research deals with known processes, in particular with known health problems and their treatment processes. The authors focus on ways of providing better medical services by means of process improvement on different characteristics that are described in verbal messages. Not only problem elimination, but also good treatment and perfect service will give medical companies an opportunity to produce greater added value.
The service must be useful, and the service company must fulfill the consumer's requirements. To become successful and gain competitive advantages, the service company must exceed the consumer's requirements and provide better

1 Leonid KAMALOV, Ulyanovsk State Technical University, 32 North Venets st., 432027 Ulyanovsk, Russian Fed., e-mail: l.kamaloff@gmail.com
service. To understand ways of service process improvement, the process must be estimated first.
Each activity in a process contributes to the result. Therefore the result may be estimated through the activities that lead to it. New ways of treatment are not considered in this work, because that is the business of scientific research institutes. In medical services, as in other services offered for mass consumption, the sequence of activities is well known and the steps are predetermined. Some companies become more successful than others because of the way they provide the services.
In automatic control theory there is a similar concept known as the control function. It determines the way some function is executed. When applied to a service process, the control functions might be represented in table form, as a set of parameters, or as a text with instructions to the executor. So the activities may differ in their control functions.
1. Linguistic characteristics of a process and of a result
Today medical services are estimated by statistical indexes. These are intended to report to the government about the medical funds spent. The estimation of a process from a consumer's point of view is mostly described by verbal characteristics. They have a complex nature and are represented by numerical and non-numerical expressions. The characteristics are represented by such terms as cost, staff attitude, medication quality, ward comfort, time-to-wait, diagnosis precision, personnel professional skills, amount of pain sensation, equipment, sanitary conditions, and so on.
The authors propose a method of linguistic characteristics representation in the form of an Entity-Relationship model, as shown in Figure 2.
The estimation of these non-numerical characteristics may be made by linguistic variables, as humans always do. These estimations may be made in such linguistic terms as "good", "average", "poor", and so on. In addition, each linguistic variable may correspond to some triangular fuzzy number, as shown in Figure 1.

Figure 1. Linguistic dictionary.
2. Mathematical model of process estimation method
2.1. Estimation on one criterion
To estimate the process on one criterion, each activity must be estimated. For each activity a triangular fuzzy number is put into correspondence. To get a complex estimation of a process, some logical operation must be executed over the estimations.
Of all the fuzzy logic operations, the most appropriate is the minimum. This conclusion is made upon the following reasoning: if a maximum operation is used, the total estimation of a process will be at the level of the one best activity. This conflicts with human reasoning, because negative occasions are memorized better than good ones, and humans aspire to escape negative things more than to gain positive things. The other logical operations are close in sense to minimum or maximum, so the minimum is chosen.
From a mathematical point of view there is no difference between these operations because, as is clear from what follows, the estimation model deals with a function extremum. When applied to other processes, the maximum operation might be useful.
If the estimation of the first activity is designated as $C_1$, the second as $C_2$, and so on, the total process estimation (with $n$ being the number of activities) is:

$C = C_1 \wedge C_2 \wedge \dots \wedge C_n = \min(C_1, C_2, \dots, C_n)$   (1)

Figure 2. Service characteristics ER-model




In some cases the result of the minimum operation is a void set. For the formula to keep its sense it must then be written as

$C = C_{\min}$,   (2)

$C_{\min}$ being the least of the activity estimations $C_1, \dots, C_n$.

2.2. Estimation on many criteria
The process estimation on many criteria is represented as a vector of such linguistic estimations (with $k$ being the number of criteria):

$\bar{C} = (C^1, C^2, \dots, C^k)$   (3)

To compare two processes on one criterion, the cardinal number of a fuzzy estimation is useful:

$|C^j| = \int \mu_{C^j}(x)\,dx$   (4)

For fuzzy numbers represented by two-dimensional characteristic (membership) functions, this formula gives the area of the figure bounded by the function. For triangular and trapezoidal characteristic functions, as in other cases, the areas may be equal; moreover, direct addition of the areas will not allow recognizing which estimation is better.
2.3. Integral fuzzy estimation
To solve this problem the integral process index is proposed:

$M = \sum_{j=1}^{k} |C^j|$   (5)

Or, in conjunction form, expanding each $C^j$ as the conjunction of the activity estimations as in (1):

$M = \sum_{j=1}^{k} \int_{a_j}^{b_j} \big(\mu_{C_{1j}} \wedge \mu_{C_{2j}} \wedge \dots \wedge \mu_{C_{nj}}\big)(x)\,dx$   (6)

$[a_j, b_j]$ being the basis of the triangular fuzzy numbers $C^j$. This integral index estimates the whole process, taking all estimations into account. It is shown below how the integral index is used to improve the process.
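A minimal numerical sketch of the scheme may clarify it; the linguistic dictionary values below are illustrative assumptions (not the paper's dictionary), and the integration is a simple midpoint rule:

def tri(a, b, c):
    # Membership function of a triangular fuzzy number with basis [a, c], peak b.
    def mu(x):
        if x == b:
            return 1.0
        if a < x < b:
            return (x - a) / (b - a)
        if b < x < c:
            return (c - x) / (c - b)
        return 0.0
    return mu

TERMS = {"poor": tri(0.0, 0.0, 0.5),      # illustrative dictionary
         "average": tri(0.0, 0.5, 1.0),
         "good": tri(0.5, 1.0, 1.0)}

def fuzzy_min(mus):
    # Pointwise minimum of membership functions: the worst activity dominates.
    return lambda x: min(mu(x) for mu in mus)

def area(mu, lo=0.0, hi=1.0, n=1000):
    # Numerical integral of a membership function (the cardinal number |C|).
    h = (hi - lo) / n
    return sum(mu(lo + (i + 0.5) * h) for i in range(n)) * h

# A process estimated on two criteria, each over three activities:
criteria = [["good", "average", "good"], ["average", "average", "poor"]]
M = sum(area(fuzzy_min([TERMS[t] for t in acts])) for acts in criteria)
print(round(M, 3))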
3. Process improvement by means of control function altering
3.1. Process control functions
As has been mentioned above, in the service area it is more important how certain things are done than what things are done. In such a critical area as health treatment, all the processes are already described in the form of instructions to medical personnel. Deviation from the instructions may cause serious consequences. These instructions and rules also provide the control functions: the description of "how". And of course the physician's experience and knowledge help when no instruction describes the case.
Table 1 describes the sequence of actions that an anesthetist must execute during the process of epidural anesthesia.

Table 1. The process of epidural anesthesia and control function variants

The right column describes the variants of how an action is executed. These are descriptions of the control functions, designated Tb(u). The choice of function is determined by the current constraints, which may be the patient's condition, age, local laws, the pharmacopeia and so on. Each control function corresponds to a specific estimation on a specific criterion, as shown in Table 2.






Table 2. Control function table form.

                Control function Tb(u(t))
 Criteria       u(t_1)   u(t_2)   ...   u(t_p)
 Criterion 1    C_11     C_12     ...   C_1p
 Criterion 2    C_21     C_22     ...   C_2p
 ...
 Criterion j    C_j1     C_j2     ...   C_jp

3.2. Process improvement
Altering a control function influences the set of estimations on the different criteria. This influence on the estimation is described by the formula ($C_{ij}$ being the estimation of the $i$-th activity on the $j$-th criterion):

$C_{ij} = C_{ij}\big(T_b(u(t_i))\big)$   (7)

Substituting this expression into the integral index $M$ formula yields:

$M(u) = \sum_{j=1}^{k} \int_{a_j}^{b_j} \Big(\mu_{C_{1j}(T_b(u(t_1)))} \wedge \dots \wedge \mu_{C_{nj}(T_b(u(t_n)))}\Big)(x)\,dx$   (8)
The process optimization task consists in finding the extremum of this function. To solve this task, Bellman's principle of optimality is used. The algorithm is shown in Figure 3.
After the process is modeled and estimated, the integral index M is calculated. Then the control function of the last activity is altered and the process is estimated again. When the maximum of M is found, these steps are repeated with the previous activity, and so on until the first activity is reached. According to Bellman's optimality principle, the resulting process may be considered optimal.
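As an illustration only (the variant data and the scalar reduction of the estimations are assumptions made for brevity), the backward pass can be sketched as follows:

# variants[i] lists candidate contributions of activity i's control-function
# variants, already reduced to scalars for brevity.
variants = [
    [0.6, 0.8],        # activity 1: two control-function variants
    [0.5, 0.7, 0.9],   # activity 2: three variants
    [0.4, 0.6],        # activity 3: two variants
]

def M(choice):
    # Toy integral index: min aggregation, i.e. the worst activity dominates.
    return min(variants[i][c] for i, c in enumerate(choice))

choice = [0] * len(variants)                # initial control functions
for i in reversed(range(len(variants))):    # start with the last activity
    best = max(range(len(variants[i])),
               key=lambda c: M(choice[:i] + [c] + choice[i + 1:]))
    choice[i] = best                        # keep the variant maximizing M

print(choice, M(choice))  # -> [0, 1, 1] 0.6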

4. Software implementation
The mathematical model of service process estimation and improvement is implemented as a software application. It is capable of interacting with business-process modeling applications and obtains the sequence of activities in the form of a .csv file. It consists of the linguistic variables dictionary, an estimation block and the optimization module; the last implements the above-mentioned algorithm.
On process model import, the control functions are assigned. Then an estimation is assigned to each control function. After that, the algorithm picks the control functions that lead to the maximum value of the M index; these form an optimal process.
This software is used in Ulyanovsk Regional Clinical Hospital (Russian Federation) to estimate and improve anesthesia and operative invasion processes.
The proposed model and software are also applicable in all areas that deal with business processes, including PLM-software deployment, consulting and business process reengineering services.

Figure 3. Process optimization algorithm.
Lean Approach in Concurrent
Engineering Applications
Şenay KARADEMİR a,1 and Can CANGELİR b,2
a Concurrent Engineering Section, Turkish Aerospace Industries, Inc., Turkey
b Concurrent Engineering Section, Turkish Aerospace Industries, Inc., Turkey
Abstract. Early on, with technological developments, machines replaced human workers, paving the way to mass production for decades. With the increasing variety of goods and number of producers, competition has grown to produce cheap goods in high numbers and with high quality. This has led producers to think about ways to lessen scrap and prevent problems. Thus, techniques and approaches like Total Quality Management, Lean Manufacturing and Six-Sigma have been developed and grown in importance.
In today's competitive world, however, what counts as success is no longer only the ability to produce but also the ability to design. The world trend is toward finding ways to design the most rapid, cost-effective and high-quality product. In a similar way to the progress of production, new approaches, methodologies and principles are being generated and initiatives are established to fulfill this need in the engineering area.
Challenges increase in diversity and difficulty in parallel with the complexity of the design. Until now, companies have used product data management methods and tools in order to integrate and run their tasks. Further developments in information technology shall be used to put more knowledge into these systems, lessening the routine design work while enabling the designer to tap tacit knowledge.
This paper will examine lean thinking in engineering and try to harmonize the Lean Engineering Approach with the Concurrent Engineering (CE) Philosophy. Principles to enable CE applications in the Aerospace Industry will be developed and an analogy with Lean techniques will be outlined.
Keywords. Concurrent Engineering Principles, Knowledge Based Engineering,
Lean Engineering, Concurrent Engineering in Aerospace Industry, Formal
Methods in Concurrent Engineering, Concurrent Engineering in Practice

1 Engineer, Concurrent Engineering Section, Turkish Aerospace Industries, Inc., Fethiye Mahallesi, Havacılık Bulvarı No:17, 06980 Kazan-ANKARA / TÜRKİYE, e-mail: skarademir@tai.com.tr
2 Chief, Concurrent Engineering Section, Turkish Aerospace Industries, Inc., Fethiye Mahallesi, Havacılık Bulvarı No:17, 06980 Kazan-ANKARA / TÜRKİYE, e-mail: ccangelir@tai.com.tr

Introduction
The industry's former approach concentrated intensively on improving the production and service of a quality product. In parallel with this purpose, Lean Manufacturing techniques have made a strong contribution to improving manufacturing efficiency. Amid newly developing technology, new product introduction and development (NPID) has increased in criticality and importance for firms in order to sustain a competitive advantage. As also stated by McManus and Haggerty [1], production and service of a quality product, usually seen as the delivered value, are not valuable if the product itself does not please the customer. If lean improvements are confined to manufacturing, they will represent only islands of success in a sea of inefficiency. Thus, lean thinking concepts might be applied to all stages of new product design and development progressively, in order to enhance performance and provide efficient products. This idea is also supported by Haque and Moore [2], who state that the success of lean in NPID depends on following the footsteps of manufacturing and identifying metrics that are conducive to lean thinking.

Information plays the same role in the product development value stream that material plays in the manufacturing value stream. Product development activities transform information. Information in many forms converges to define a design, just as many parts come together to make a product.
Lean Manufacturing principles aim to increase efficiency through inventory control and production process improvements, whereas engineering does not have inventory and in most cases is not a production environment. In addition, NPID processes are mostly uncertain and confined by tacit knowledge.
1. Base of Lean Techniques in Engineering
Engineering is fundamentally different from the ways applied in factories. However, it is still possible to link the lean thinking concepts with engineering processes. As a starting point, Womack and Jones's five steps to lean, which are value, value stream, flow, pull and perfection, might be considered. The following examples can be considered the adjusted steps of lean engineering.
• Value: a producible, low-cost design; a design that is expected to satisfy customer requirements with an acceptable level of risk; or a supplier infrastructure which supports production as well as operations and sustainment [3].
• Value Stream: NPID processes act upon information and knowledge (which is mostly tacit) to produce a product specification.
• Flow: Wastes like early/late or overqualified/not-yet-mature delivery of information should be avoided. This can be achieved by defining the satisfactory information and using an iterative method to provide only the satisfactory information to the following step.
• Pull: Defining the required documents/maturity of information precisely, so that information generation is driven only by the needs of the next step.
• Perfection: An efficient product development process.
Re-imagining these concepts, the study of McManus has revealed the conclusions shown in Table 1 [4].

Table 1. Applying the lean steps to Engineering

               Manufacturing                        Engineering
 Value         Visible at each step, defined goal   Harder to see, emergent goals
 Value Stream  Parts and material                   Information and knowledge
 Flow          Iterations are waste                 Planned iterations must be efficient
 Pull          Driven by takt time                  Driven by needs of enterprise
 Perfection    Process repeatable without errors    Process enables enterprise improvement
2. Concurrent Engineering (CE) Approach
Concurrent Engineering is an engineering management philosophy and a set of operating principles that guide a product development process through an accelerated successful completion. The overall CE philosophy rests on a single, but powerful, principle that promotes the incorporation of downstream concerns into the upstream phases of a product development process [5]. This approach is intended to cause the developers from the outset to consider all elements of the product lifecycle from conception to disposal, including quality, cost, schedule, and user requirements [6]. The concept of Concurrent Engineering has been widely recognized as a major enabler of fast and efficient product development [7].
CE paves the way for real-time collaborative work environments where multidisciplinary teams can work to generate solutions for design problems. In order to provide the required environment for CE, basic principles might be adopted which are rooted in the company requirements, harmonized with the capabilities of computer-based tools. Each principle aims at either reducing the time spent and costs or increasing the quality.

Figure 1. Principles adopted in TAI (Turkish Aerospace Industries, Inc.) in the concept of Concurrent
Engineering.
2.1. Standard Set of Process and Information Product
An important criterion for CE is that an organization systematically identifies and defines its Standard Information Products. Without defining a standard set of products, it is impossible to systematize the process and gain productivity benefits from the application of CE [8]. In this context, the maturation path of the information products shall be managed in order to describe the required degree of sufficiency of the outcomes of different processes. The maturation path represents the iterations toward the desired final product, which also need to be reviewed and assessed at defined checkpoints (milestones). These are cross-checks among the various disciplines to either monitor/observe or confirm the sufficiency of the information for their own tasks and processes. Determination of the cross flow of information products is a must in order to manage all these maturities and milestones.
2.2. Unique, Up-to-Date and Shared Product Data
Standard information products will ensure the content but not the uniqueness. In order to keep track of the change and development of an information product, it has to be generated and published uniquely. Unnecessary duplications have to be avoided by all means in order to avoid waste. Information products have to be traceable and accessible during the processes. An additional must is that access to the information products shall be limited in order to prevent them from being used or updated improperly.
2.3. Multi-functional Work Environment
CE deals with multifunctional project groups where complex products are produced. It is beyond the imagination of a single person, a single team, or even a single department to comprehend fully all aspects of the product needs [9]. Thus, a main requirement of the CE approach is to provide the designers, analysts, managers etc. with an environment where they can easily communicate, collaborate and compromise. These multi-functional work environments enable the parallel evolution of the design. An absolute must for these teams is that they can easily access the up-to-date data and feed and respond to each other's work instantly.
2.4. Integrated Computer-Based Solutions
The base of CE, multi-functional teams, is in today's world mostly spread around. It is usually not possible to bring all the disciplines required to build the whole product into one premises of any company. Developing IT technologies enable the teams to work together without being physically together. End-to-end information flow among the teams can be provided digitally. All the processes can be fed from a common database seamlessly by the computer-based solutions. With the integration of computer-based solutions, CE aims to minimize the manual work, to share the data instantly with the teams, and to exchange the information products among tools. Computer-based solutions shall also be in line with the processes, to provide results for monitoring the program and to store and collect the company know-how.
3. Application of Concurrent Engineering in the Aerospace Industry: TAI example
The following section will examine how CE is perceived and how the aforementioned principles are applied in TAI (Turkish Aerospace Industries, Inc.).
3.1. Standard Set of Process and Information Product
A Standard Information Product is perceived as any type of document, CAD model, 2D drawing, etc., i.e., mainly various forms of information. A Standard Process is perceived as the lifecycle of these products during the development of the end product. The stakeholders and the relations between them are defined in order to put the whole process on a logical maturation path. As a result of a stage-by-stage modular approach, as explained by Karademir and Cangelir [10], two maturity levels are defined for both Structural Design and Systems Design.
Further study was carried out to define the milestones of these maturity levels, where the cross-checks of the stakeholders are required. As a result of the study, 14 milestones were adopted in order to establish the sustainability, sufficiency and correctness of the information products.
A similar study was also carried out by Airbus, as stated by Landeg and Ash [11], resulting in a Concurrent Engineering Model with 3 maturity levels and 14 milestones.
3.2. Unique, Up-to-Date and Shared Product Data
In order to ensure the uniqueness of the information products, all items are numbered in a logical manner whose standardization is familiar to every stakeholder. All types of information products are defined digitally (called "items") in computer-based systems. Each and every item has its unique specifications depending on its type and content, such as "Structural Item", "Harness Item" or "Vendor Item". All the items are kept in a Product Structure or linked to the corresponding documentation in the system. Role management is the essential part of this scenario, where the items (Standard Information Products) are only available to the relevant roles. Availability rules also vary among roles, so that different roles can add different value to the items.
3.3. Multi-functional Work Environment
By keeping all information products in a common database, it is possible for various teams to access the data without the necessity of being physically together. Workflows are defined in the computer-based solutions, enabling the relevant role to access the data with the corresponding user rights at the required time, without any delays. The use of workflows enables the teams to work on, analyze and assess the information product at the right time with the right detail. Information products are released and published with different statuses during the process in order to establish a baseline from which other teams can start their own work.
The aim of a workflow can be either to involve parallel processes in the evolution decisions or to confirm the sufficiency of the information product for use in the teams' own processes.
The 3D CAD model is the main information product of the design groups, which in total forms the Digital Mock-up (DMU). All items of the DMU are preserved in computer-based tools so that all the teams can access the up-to-date, unique data.
3.4. Integrated Computer-Based Solutions

Product Data Management tools are mainly used as the enablers of Concurrent Engineering applications. A seamless flow of information via the computer-based tools is essential to lessen the time spent on communication and the waste caused by miscommunication. Computer-based solutions shall also be improved in accordance with the process and business requirements to minimize the manual and routine work.
Standard information products are defined digitally in computer-based solutions so that a common, easy-to-access, role-based environment is provided to the teams. The nature of product data management brings out the need for various software applications. The variability of the software raises no problems as long as the systems can communicate seamlessly.
4. Concurrent Engineering Approach based upon Lean Engineering

As the Concurrent Engineering Approach aims to prevent problems, it coincides with the aim of Lean Engineering by eliminating the time and resource wastes caused by unnecessary knowledge generation and lack of communication, which are the root causes of complex design and poor compatibility with manufacturing processes. Studies indicate that 40%-60% of the typical engineer's or designer's time is spent on non-value-added activities [12]. These may represent tasks such as operating system functions (integrating multiple platforms, multiple operating systems), business interfacing (data transfer, file management, backup), communication (fixing network, protocols) and others [12].


Figure 2: Typical Team's Time Distribution Chart (Current Process: As-Is, Modified Process: To-Be) [12]

With the principles mentioned above, CE in application parallels the lean way of thinking significantly. It is possible to use lean techniques to provide continuous improvement in communication, business interfacing and operating system functions. This section will try to make an analogy between the lean techniques and the applications of CE in TAI.
During the process of maturity and milestone definition, all processes are to be visualized by value stream mapping. The bottlenecks of the processes are to be identified and root causes revealed. Critical-to-Quality analyses are to be performed to find out the actual wants and needs of the tasks. Once these are determined, a maturation path can be specified for the standard information products (Requirement Documents, Specification Documents, CAD Models etc.). In accordance with the pull requirements of the next or parallel tasks, using JIT methods, sufficiency levels of the information products can be determined and milestones can be settled (checklists, assessment lists etc.). All these studies will result in a maturity model which will provide a visual supplier-input-process-output-customer loop. The way of communicating among teams will be standardized and classified.
Item, Role and Product Structure Management can be seen as a 5S application. Information products are sorted (1st S) in accordance with their content and application area. With the aid of the Product Structure, all the items are straightened and set in order (2nd S) in a way that makes it easy and logical to find the location of the information product. Via roles, only the relevant information product is available to the users, which is in tune with the 3rd S: Sweeping or Shine. Although the contents differ between items, information products are standardized (4th S) such that they hold the same specifications and relations in the system. Lastly, the 5th S is provided by managing all of these in a computer-based solution common to all.
Workflow management can be perceived as a representation of the Value Stream Maps of the approval process for each information product. The rules of the workflows are the results of Poka-Yoke studies. Workflows also provide a balanced line for the information products to follow. Business interfacing and communication are improved via workflows and the DMU, since there is a single medium in which to perform all activities.
Integrated computer-based solutions are the enablers of the whole environment. With the help of SIPOC and VSM, the gaps between the software applications can be identified and solutions provided. For engineering, software applications are analogous to manufacturing task centers. In order to provide a seamless flow of information, single-point flow and line balancing techniques can be applied.
Table 2. Analogy between Concurrent Engineering Application and Lean Engineering

 Principle of CE | Application of CE | Lean Techniques | Improvement Area
 Standard Set of Process and Information Product | Maturity Management, Milestone Definition | Value Stream Mapping, Root Cause and Bottleneck, CTQ, Just in Time, Kanban | Business Interfacing, Communication
 Unique, Up-to-Date and Shared Product Data | Item Management, Product Structure Management, Role Management | 5S, Poka Yoke | Communication, Operating system functions
 Multi-functional Work Environment | Workflow Management, DMU Management | Poka Yoke, VSM, SIPOC, Line Balancing | Business Interfacing, Communication
 Integrated Computer-Based Solutions | PLM/PDM Solutions, Data Exchange, Digital Product Management | VSM, Standardization, Line Balancing, Best Practice, Single Point Flow, Kaizen | Communication, Operating system functions
5. Conclusion
In this paper, the basis of lean techniques in engineering and the application of Concurrent Engineering in the aerospace industry are discussed. Four basic principles are determined for the proper implementation of the CE philosophy. A discussion is carried out on lean techniques and their scope of application. Lastly, an analogy between the lean techniques and the CE principles determined in the paper is provided, showing the improvement areas.
References
[1] McManus, H., Haggerty, A., and Murman, E., "Lean Engineering: Doing the Right Thing Right", Paper delivered to the 1st International Conference on Innovation and Integration in Aerospace Sciences, August 2005.
[2] Haque, B., and James-Moore, M., "Applying Lean Thinking to New Product Introduction", Journal of Engineering Design, Volume 15, No. 2, March 2004.
[3] Walton, M., Strategies for Lean Product Development, Lean Aerospace Initiative, Center for Technology, Policy and Industrial Development, Massachusetts Institute of Technology, Cambridge, MA, 1999.
[4] McManus, H., Product Development Value Stream Mapping, Beta Release, Lean Aerospace Initiative, MIT, Cambridge, MA, March 2004.
[5] Yassine, A. and Braha, D., Complex Concurrent Engineering and the Design Structure Matrix Method, Massachusetts Institute of Technology, Cambridge, MA, September 2003.
[6] Kamrani, A.K. and Nasr, E.S.A., Collaborative Engineering: Theory and Practice, 2008.
[7] Tenkorang, R.A., "Concurrent Engineering (CE): A Review Literature Report", Proceedings of the World Congress on Engineering and Computer Science, San Francisco, October 2011.
[8] Parkin, K., Sercel, J.C., Liu, M.J., Thunnissen, D.P., "ICEMaker: An Excel-Based Environment for Collaborative Design", Division of Engineering and Applied Science, California Institute of Technology, January 2003.
[9] Prasad, B., Concurrent Engineering Fundamentals, Vol. 1: Integrated Product and Process Organization, Prentice Hall PTR, Upper Saddle River, New Jersey 07458, 1996.
[10] Karademir, Ş., Cangelir, C., "Determining Concurrent Engineering Maturity Levels", Proceedings of the 19th ISPE International Conference on Concurrent Engineering, September 2012.
[11] Landeg, B. and Ash, S., "Implementation of Airbus Concurrent Engineering", AGARD SMP Meeting on Virtual Manufacturing, Aalborg, Denmark, October 1997.
[12] Prasad, B., Concurrent Engineering Fundamentals, Vol. 1: Integrated Product and Process Organization, Prentice Hall PTR, Upper Saddle River, New Jersey 07458, 1996.

Physics-Based Distributed Collaborative
Design for Aerospace Vehicle Development
and Technology Assessment
Raymond M. Kolonay a
a United States Air Force Research Laboratory, WPAFB, Ohio 45433
Abstract. One of the missions of the United States Air Force Research Lab
(AFRL) is to develop and assess technologies for next generation aerospace
systems. Currently, the assessment is achieved using empirical relationships and
historical data associated with systems developed previously. The assessment is
done in this fashion due to resource constraints on time, personnel, and funding.
Performing technology assessment in such a fashion, although timely, is not
necessarily accurate. This is due to the fact that many of the technologies and
system configurations being evaluated have no historical or empirical information
associated with them. Hence, traditional assessment techniques produce
misleading results and subsequently ill-informed decisions by Air Force leadership
associated with technology investment and potential future system capabilities. To
address this issue the Multidisciplinary Science and Technology Center within
AFRL's Aerospace Systems Directorate is developing physics-based design
exploration and technology assessment methods and processes. The new methods
and processes utilize physics-based analyses and a distributed collaborative
computational environment to predict vehicle performance which in turn is used in
mission level simulations to assess the impact of a given configuration or
technology on the combat effectiveness of a system. The new methods and
processes will be executable within the same time and resource constraints of the
traditional process. This enables AFRL technology developers to have a quantifiable and traceable trail from the impact of their technologies on system performance parameters such as weight, lift, and drag to the terms by which Air Force leadership measures system effectiveness: lethality, survivability, sustainability, and affordability. This leads to well-informed decisions concerning technology investment and achievable capabilities.
Keywords. multidisciplinary design optimization, collaborative design, network
computing, physics-based design, Service ORiented Computing EnviRonment
(SORCER)
Introduction
One of the missions of the United States Air Force Research Lab (AFRL) is to develop
and assess technologies for next generation aerospace systems. Currently the majority
of the assessments are achieved using empirical relationships and historical data
associated with systems developed previously. The assessments are done in this fashion
due to resource constraints on time, personnel, and funding. Performing technology
assessment in such a fashion, although timely, is not necessarily accurate. This is
attributed to the fact that many of the technologies and system configurations being
evaluated have no historical or empirical information associated with them. Hence the
traditional assessment techniques produce misleading results leading to ill-informed
decisions by Air Force leadership associated with technology investment and potential
future system capabilities. To address this issue the Multidisciplinary Science and
Technology Center within AFRL's Aerospace Systems Directorate is developing
physics-based design exploration and technology assessment methods and processes to
support Air Force leadership decisions on potential system capabilities and technology
investments. The new methods and processes utilize physics-based analysis methods
and a distributed collaborative computational environment to predict vehicle
performance which in turn is used in mission level simulations to assess the impact of a
given configuration or technology on the combat effectiveness of the system. The new
methods and processes will be executable within the same time and resource
constraints of the traditional process. A high level representation of the desired
technology assessment process is depicted in Figure 1. It consists of the following
areas: strategic guidance, system specification and concept of operations, mission
assessment/combat effectiveness, and physics-based system and technology
performance. The primary differences in the proposed process compared to current
practice occur in two significant ways. First the use of a physics-based system and
technology performance instead of empirical or historical information and second the
feed forward of this information to evaluate mission assessment and to influence the
concept of operations and system specifications. What follows in this manuscript is a
brief description of each of the areas in Figure 1 with a detailed discussion on what is
contained in the Physics-based System & Technology Performance area, its
relationship to Mission Assessment and Concept of Operations and System
Specifications, and examples of its implementation and usage.
Figure 1. Technology Assessment Process
1. Strategic Guidance
To obtain an understanding of the area of strategic guidance it is best to take a brief look at the US DoD's acquisition process, depicted in Figure 2. AFRL's technology development and assessment role takes place primarily pre-milestone B (Concept Refinement and Technology Development). The following description of the interaction between the customer (those giving strategic guidance, typically one of the Air Force's Major Commands such as Air Combat Command (ACC) or Air Mobility Command (AMC)) and the acquisition community (Air Force Materiel Command) is summarized here from Reference 1. Once the warfighter (ACC, AMC,
etc.) has identified a need, the early Systems Engineering and JCIDS (Joint Capabilities Integration Development System) processes begin, where DoD strategic guidance, joint operating concepts, and joint functional concepts are considered as inputs to the DOTMLPF (Doctrine, Organization, Training, Materiel, Leadership & Education, Personnel, and Facilities) evaluation to determine if indeed a materiel solution is needed. A gap analysis is performed while considering user needs and technology opportunities. Operational requirements are then generated. This is the first decision point in the process. If it is determined that a materiel solution is required to meet the needs and required capability, the formal acquisition process for the system(s) is kicked off. Otherwise, the need is satisfied with existing capabilities within the DoD. If it is determined that a materiel solution is required, this leads into the conceptual design phase of the system and eventually into an Analysis of Alternatives (AoA) process, where the most promising solutions are compared in detail to select a preferred concept to move forward into technology development processes. There is, and should be, continuous communication between the acquisition community and the customer (those providing the strategic guidance) to ensure that the customer capability requirements are well defined, and for the acquisition community to convey to the customer whether the desired capability can be delivered with existing technology, what the cost will be, and what risk is associated with delivering that capability.
Figure 2. US DoD Acquisition Process
2. Concept of Operations and System Specifications
Returning to Figure 1 and focusing on the Concept of Operations and System
Specifications, once it is determined that a materiel solution is required and the desired capability is identified, the acquisition community begins to develop the concept of operations and system specifications. At this point in the process a rigorous Systems
Engineering Approach is employed (Figure 3).
Figure 3. Acquisition Process Systems Engineering V
Shown in Figure 3 is the Systems Engineering V that represents the system
throughout the acquisition process. In this particular phase the portion of the V that is
being determined is ConOps or concepts of operation and system specifications. The
goal is to use Systems Engineering to transform the operational needs identified by the
strategic guidance into a description of system performance parameters and a system
configuration through the use of an iterative process of definition, synthesis, analysis,
and design. As Figure 1 indicates, this takes place through an iterative process with the customer, to identify and refine operational needs/desirements, and with the mission assessment team. Determining the concept of operations and system specifications requires the consideration of multiple alternatives and architectures. These alternatives must be modeled, evaluated, and scored based on customer needs. Currently, the modeling
modeled and evaluated and scored based on customer needs. Currently, the modeling
of the system is performed using traditional conceptual design information based on
historical and or empirical information. A goal of the new process is to utilize physics-
based models in this process. This is indicated by the feedback arrow between the
ConOps/System Specifications block and the Physics-based System & Technology
Performance block. At a minimum the physics-based models should be used to
validate the conclusions made for concepts of operation and system specification
before they are finalized.
3. Mission Performance/Assessment
Once a set of concepts of operations and system specifications is identified, the
mission performance and combat effectiveness of a given system(s) is assessed. This is
achieved using modeling and simulation (M&S) tools [2],[3],[4],[5]. These tools are
typically capable of performing three levels of assessments: single-sortie analysis,
mission-area analysis, and campaign analysis. A taxonomy for classifying M&S is live,
virtual, and constructive. Live M&S implies a human operating a physical system.
Virtual represents a human operating within a virtual environment, and constructive
consists of a completely computer simulated environment (both user and system
response). The M&S environment has representative models for both the friendly "blue" components and the adversary or "red" components for a given mission. As an
example Reference [3] gives a use case for a penetrating intelligence, surveillance, and
reconnaissance (PISR) mission. Models of blue components would consist of an air-
breathing platform equipped with an electro-optical (EO) sensor that provides high-
detail imagery for final target identification. Representative red components would
consist of the integrated air defense system (IADS). This would include modeling
commanders, weapons managers, surface-to-air missile (SAM) sites, sensor managers, and
radar sites. For assessments carried out for such a single-sortie analysis or mission-
area analysis, the performance is usually measured in terms of warfighter/customer
capability such as lethality, survivability, sustainability, availability, and affordability.
These measures can be quantified by determining target kills per sortie, probability of
target acquisition, probability of availability, probability of detection, and probability
of survivability. In order for the M&S tools to perform the mission assessment they
require inputs that describe the physical features and capabilities of the system. For
example: vehicle speeds, turn rates, range, loiter time, specific excess power for a given
maneuver, weapons load, weapons performance, vehicle radar signature, and sensor
performance. This information is required throughout the entire mission profile. The
fidelity of the air vehicle data cited above affects the accuracy of the performance
assessments. The interest in this work is associated with the modeling of the vehicle
performance such as speeds, turn rates, range, etc., and not with the representation of the sub-systems such as the onboard radar or weapons. Today's common practice is to model
the vehicle performance using traditional conceptual design models that are based on
empirical equations and historical databases for the vehicle capabilities throughout the
mission. There is little or no physics-based analyses carried out to obtain this
information. As mentioned previously, this is due to resource constraints on time,
personnel, and funding associated with creating and executing the models.
Unfortunately, performing vehicle capability and technology assessment in such a
fashion, although timely, is not necessarily accurate. This is due to the fact that many of the technologies and systems being designed or evaluated have no historical or
empirical information associated with them.
As an example consider the calculation of cruise range using the Breguet range
equation [6]:

$$ R = \frac{V}{C}\,\frac{L}{D}\,\ln\!\left(\frac{W_{i-1}}{W_i}\right) \qquad \text{Eq. (1)} $$

where $R$ is the range, $C$ the specific fuel consumption, $V$ the velocity, $L/D$ the lift-to-drag ratio, and $W_{i-1}$ and $W_i$ the weights of the vehicle at the beginning and end of the mission segment. Let's consider the vehicle weight. It will change during the mission segment due to fuel burn and potentially stores release (external fuel tanks in the case of PISR), but the empty weight $W_e$ will remain constant. In Reference [6], Raymer gives a series of weight equations for $W_e$, broken out into components and subsystems of the vehicle, depending on the class of vehicle. Raymer identifies three classes of vehicles: Fighter/Attack, Cargo/Transport, or General Aviation. For the Fighter/Attack class the wing weight sub-component is given as
$$ W_{wing} = 0.0103\, K_{dw} K_{vs}\, (W_{dg} N_z)^{0.5}\, S_w^{0.622}\, A^{0.785}\, (t/c)_{root}^{-0.4}\, (1+\lambda)^{0.05}\, (\cos\Lambda)^{-1.0}\, S_{csw}^{0.04} \qquad \text{Eq. (2)} $$

where $K_{dw} = 0.768$ for delta wings and 1.0 otherwise, $K_{vs} = 1.19$ for variable-sweep wings and 1.0 otherwise, $W_{dg}$ is the design gross weight, $N_z$ the ultimate load factor, $S_w$ the trapezoidal wing area, $A$ the aspect ratio, $(t/c)_{root}$ the thickness-to-chord ratio at the wing root, $\lambda$ the taper ratio, $\Lambda$ the wing sweep at quarter chord, and $S_{csw}$ the control surface area.
This is a parametric equation based on a curve fit derived from historical data from fighter/attack aircraft for which Raymer had data available. It may or may not be applicable to the current configuration that one is designing. In addition, if the designer is attempting to evaluate a new technology that affects wing weight but has never been used on a previous aircraft, it will not be accounted for in the above equation.
One common approach is to estimate the impact of the technology under consideration on the wing weight using expert opinion and k-factors. An expert might estimate, for example, that a given technology would reduce the wing weight by 10%; with that information, the wing weight equation is simply multiplied by a k-factor of 0.9 to indicate the impact of that technology. This type of approach more often than not compromises the mission effectiveness analysis and leads to erroneous conclusions concerning the impact of a given technology.
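To make this zeroth-order bookkeeping concrete, the following minimal Java sketch, assuming the reconstructions of Eqs. (1) and (2) above, evaluates the Breguet segment range and the empirical wing weight and applies an expert-opinion k-factor of the kind just described. All numerical inputs are illustrative placeholders, not data from any actual study, and the class and method names are hypothetical.

// Minimal sketch of zeroth-order (empirical) sizing: Breguet range (Eq. 1)
// and the fighter/attack wing weight (Eq. 2) with a technology k-factor.
// All input values below are illustrative placeholders.
public final class ZerothOrderSizing {

    // Eq. (1): R = (V/C)(L/D) ln(W_{i-1}/W_i), in consistent units.
    static double breguetRange(double v, double c, double lOverD,
                               double wStart, double wEnd) {
        return (v / c) * lOverD * Math.log(wStart / wEnd);
    }

    // Eq. (2): empirical fighter/attack wing weight (English units).
    static double wingWeight(double kdw, double kvs, double wdg, double nz,
                             double sw, double a, double tcRoot,
                             double taper, double sweepRad, double scsw) {
        return 0.0103 * kdw * kvs
                * Math.pow(wdg * nz, 0.5)
                * Math.pow(sw, 0.622)
                * Math.pow(a, 0.785)
                * Math.pow(tcRoot, -0.4)
                * Math.pow(1.0 + taper, 0.05)
                * Math.pow(Math.cos(sweepRad), -1.0)
                * Math.pow(scsw, 0.04);
    }

    public static void main(String[] args) {
        // Baseline wing weight for a notional fixed, non-delta wing.
        double wWing = wingWeight(1.0, 1.0, 30000.0, 12.0, 300.0, 3.0,
                                  0.04, 0.25, Math.toRadians(40.0), 60.0);
        // Expert opinion: technology X reduces wing weight 10% -> k = 0.9.
        double wWingWithTech = 0.9 * wWing;
        System.out.printf("Wing weight: %.0f lb baseline, %.0f lb with k-factor%n",
                          wWing, wWingWithTech);
        // Cruise range for one mission segment.
        double range = breguetRange(800.0, 0.9, 10.0, 28000.0, 22000.0);
        System.out.printf("Breguet segment range (consistent units): %.0f%n", range);
    }
}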
4. Physics-Based System Design & Technology Assessment
Examining the far right-hand side of Figure 1, we now discuss the development of physics-based system design and technology assessment, which is depicted in Figure 3.
As discussed in sections 1 through 3, for the development of a new air vehicle, a compendium of complex requirements and objectives is set forth with the specification of, among other items, the aircraft performance, safety, reliability, maintainability, and the subsystems' properties and performance. Once the high-level set of requirements is established, the conceptual design of potential configurations that meet those requirements is explored. The results of these conceptual designs are
used to feed the input requirements for the mission assessments described in section 3.
As mentioned previously, in the conceptual phase, the vehicle and its performance are
represented by a series of parametric equations and empirical relations such as the wing
weight equation in Eq. 2. Typically in this phase of the design process the number of
design parameters is on the order of a few dozen. A representative set of design parameters in such a study is: wingspan, thickness-to-chord ratios, engine location, engine maximum thrust, and average cruising altitude. Examples of constraints on the design problem include: maximum take-off distance, maximum landing distance, and minimum Cl/Cd. Two programs that perform conceptual design of aircraft in this fashion are the FLight Optimization System (FLOPS) [7] and the AirCraft SYNThesis (ACSYNT) [8] programs.
Figure 3. Physics-Based System Design & Technology Assessment
During this phase of the development process a technology suite that will potentially be
included in the system is identified. The set of technologies selected depends on the time frame in which the capability is desired. In general, technologies are classified as near term, mid term, and far term. Although there are no standard times associated with these, typically near term is within 4 years, mid term 5-10 years out, and far term 10-20 years on the horizon. Hence the technology suite chosen is time dependent. As an example, a set of technologies identified for the PISR mission for the mid term may be: active aeroelastic wing, advanced laminar flow - distributed roughness
elements, dielectric barrier discharge actuation for separation control, and ultra-light
multi-function airframe concepts - integrated structural antennas. Once the suite of
technologies is identified, a conceptual design study is carried out. In the conceptual design phase the vehicle and its performance are represented by a series of parametric
equations and empirical relations such as the wing weight equation (Eq. 2) (Top of
Figure 3). This representation will be referred to as zeroth order fidelity. As cited
earlier, examples of software applications that perform air vehicle conceptual design
are FLOPS and ACSYNT. As stated in section 3, the current practice for representing a given technology in the conceptual design phase is a series of knock-down or knock-up factors (k-factors). For example, active aeroelastic wing technology, based on expert opinion, is believed to enable a 10% reduction in gross take-off weight (GTOW) for a PISR configuration. This factor is then applied to the appropriate weight equation (such as Eq. 2) in the
conceptual analysis application. With the selected technologies and their associated
effect factors a conceptual design is performed. Often the zeroth order application is
connected to a formal optimization algorithm to produce a concept that has the
minimum GTOW with the maximum range. Again, the number of design parameters in such a study is on the order of a few dozen (wingspan, chords, thickness-to-chord ratios, engine location, engine maximum thrust, and average cruising altitude), and any gradients required are determined analytically or by finite differences. During this phase of the design space exploration, tens of thousands of configurations of the system are explored. This is possible due to the small computational cost (a few seconds on a
desktop machine) of determining the vehicle performance using historical databases,
parametric equations, and empirical relations.
Once the conceptual design study is completed, the performance and several attributes
of the vehicle are obtained. Figure 4 illustrates the resulting output from a typical
conceptual study. It is important to note that this information is output as a series of
real and integer numbers. There is no physical geometry associated with the results.
Hence, a great deal of effort is required before any physics-based analyses can be
performed. From the conceptual design, the necessary performance parameters (vehicle speeds, turn rates, range, loiter time, specific excess power for a given maneuver, weapons load, etc.) required for the mission M&S can be extracted.
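The following sketch is a hedged illustration of this handoff: a plain data record for the vehicle performance parameters that the conceptual design emits and the mission-level M&S consumes. The field list follows the examples given in the text; the record name and units are hypothetical.

// Illustrative container for the conceptual-design outputs handed to
// mission-level M&S. Field list follows the text; names/units are hypothetical.
public record VehiclePerformance(
        double maxSpeedKts,            // vehicle speed
        double turnRateDegPerSec,      // sustained turn rate
        double rangeNm,                // mission range
        double loiterTimeHr,           // loiter time on station
        double specificExcessPowerFps, // Ps for a given maneuver
        double weaponsLoadLb) {        // weapons load

    // M&S tools need these values over the whole mission profile, so a
    // real interface would carry one record per mission segment.
}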
Figure 4. Conceptual Design Outputs
Standard industry practice is to primarily use the results obtained from this zeroth order
analysis/design to evaluate the mission performance and technology assessment in the
M&S phase of Figure 1. Little or none of the boxed-in process labeled "Physics-Based Design & Assessment" in Figure 3 is used to impact the conceptual design or the
M&S mission assessment. The purpose of this work is to propose and demonstrate a
process that uses higher fidelity models based on physics to perform the conceptual
design and compute the information required for the M&S analysis. The vision is to
merge conceptual design with the following aspects of preliminary design:
• Increased fidelity of the disciplines
• Increased number of disciplines considered
• Increased chaining and coupling of the disciplines
• Design optimization performed considering aerodynamic, structural, and control effector design variables simultaneously
The overall goal is to perform design studies and M&S mission assessments with physics-based models within the same resources and time with which traditional conceptual design is achieved today. In addition, it will enable the evaluation and maturation of tens of design configurations at a high level of fidelity rather than the one or two that are typical of a traditional process. If accomplished, three primary benefits can be obtained: data with less uncertainty for making decisions concerning system capabilities and technology assessment; a reduction in the discovery of late defects within the system due to physics; and an opening up of the design space to
enable novel concepts and otherwise unobtainable capability by leveraging the
discipline couplings.
4.1. Merging Conceptual and Preliminary Design
Figure 5 illustrates a design process flow diagram or an N2 diagram for a traditional
conceptual design process. The blocks on the diagonal represent engineering
disciplines (with a level of fidelity) and lines on the upper right represent the feed
forward of information or chaining of disciplines while lines on the lower left indicate
the feedback or coupling of disciplines. Coupling is defined as the need to
simultaneously solve disciplines such as aerodynamics and structures to perform an
aeroelastic analysis. Coupling implies a bi-directional dependency. Chaining is defined
as the need to sequentially chain analyses together to obtain the necessary result.
Performing a pre-stressed structural analysis prior to executing an eigenvalue analysis
is an example of chaining. This is a one way dependency. Finally, fidelity refers to the
level of physics included in a specific domain.
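The two dependency types can be sketched in miniature as below: chaining as sequential calls, and coupling as a fixed-point iteration between two disciplines (here stand-ins for aerodynamics and structures, as in an aeroelastic analysis). The discipline models are trivial placeholders; only the data-flow pattern is the point.

// Chaining vs. coupling, in miniature. The "disciplines" are trivial
// stand-in functions; only the data-flow pattern matters here.
public final class ChainVsCouple {

    // Chaining: one-way dependency, run in sequence.
    static double chained(double designVar) {
        double preStress = 2.0 * designVar;   // pre-stressed structural analysis
        return Math.sqrt(1.0 + preStress);    // eigenvalue analysis uses its result
    }

    // Coupling: bi-directional dependency, solved by fixed-point iteration.
    static double coupledAeroelastic(double q) {
        double deflection = 0.0;
        for (int i = 0; i < 100; i++) {
            double load = q * (1.0 + 0.3 * deflection); // aero load on deformed shape
            double next = load / 50.0;                   // structural response to load
            if (Math.abs(next - deflection) < 1e-9) return next;
            deflection = next;
        }
        throw new IllegalStateException("aeroelastic iteration did not converge");
    }

    public static void main(String[] args) {
        System.out.println("chained: " + chained(3.0));
        System.out.println("coupled: " + coupledAeroelastic(10.0));
    }
}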
The disciplines represented in this process are propulsion, weights/mass properties,
aerodynamics, mission performance/range, and an optimizer. Each discipline has a
level of fidelity of 0 indicating the use of empirical equations and historical
information to represent the necessary information for the respective disciplines. Also,
this process will be executed in a single location on a single compute resource, usually
a desktop.
Figure 5. Traditional Conceptual Design Process
Figure 6. Multi-Fidelity Physics-Based Distributed Collaborative Design
Figure 6 depicts the desired process; it is an expanded view of the boxed-in area of Figure 3, with increased detail of the Multi-Fidelity Analysis for Design portion of that figure. Here it can be seen that, compared to Figure 5, additional disciplines have been added, such as structures, stability and control, aeroelasticity (coupled structures and aerodynamics), and configuration and geometry. Also, the concept of
multi-fidelity analysis is introduced. Multi-fidelity for aerodynamics is indicated by Levels 0-3, where level 0 is the traditional empirical representation of aerodynamics, level 1 indicates linear potential/panel methods, level 2 represents the Euler equations, and level 3 would employ the Reynolds-Averaged Navier-Stokes (RANS) equations for computing the aerodynamic quantities. The selection of the fidelity,
coupling, and chaining is critical. The appropriate levels of fidelity, coupling, and
chaining are those that are required to capture the phenomena that are critical in
designing the specified configuration and to accurately assess the specified technology
suite selected. This implies that the appropriate fidelity, coupling, and chaining are
dependent on the configuration, the flight conditions, and the technology suite selected.
Recall that for the PISR example the technology suite selected consisted of active
aeroelastic wing, advanced laminar flow - distributed roughness elements, dielectric
barrier discharge actuation for separation control, and ultra-light multi-function
airframe concepts - integrated structural antennas. Each of these technologies chosen
requires a level of fidelity, coupling, and chaining. For example, active aeroelastic wing
requires a nonlinear aeroservoelastic analysis capability where the transient coupled
non-linear aeroelastic analysis will need to be able to capture structural geometric non-
linearities and aerodynamic non-linearities possibly including viscous effects.
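As a hedged illustration of this dependence, the sketch below encodes the Level 0-3 aerodynamic fidelity ladder and selects a level from the technology suite. The technology names echo the PISR suite above, but the mapping rules are invented for illustration and are not prescribed by the process described here.

import java.util.Set;

// Illustrative encoding of the aerodynamic fidelity ladder (Levels 0-3).
// The technology-to-fidelity mapping is a made-up example only.
public final class AeroFidelity {

    enum Level { EMPIRICAL_0, LINEAR_PANEL_1, EULER_2, RANS_3 }

    static Level requiredFor(Set<String> technologySuite) {
        // Technologies with viscous effects (laminar flow, separation
        // control) are mapped here to the RANS level.
        if (technologySuite.contains("distributed-roughness-laminar-flow")
                || technologySuite.contains("dbd-separation-control")) {
            return Level.RANS_3;
        }
        // Active aeroelastic wing needs coupled nonlinear aero; mapped
        // to the Euler level as a stand-in.
        if (technologySuite.contains("active-aeroelastic-wing")) {
            return Level.EULER_2;
        }
        return Level.LINEAR_PANEL_1;
    }

    public static void main(String[] args) {
        System.out.println(requiredFor(Set.of("active-aeroelastic-wing",
                                              "dbd-separation-control")));
    }
}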
With the process in Figure 6 vehicle requirements associated with strength, stiffness,
buckling, cruise performance, maneuver performance, static aeroelastic stability,
dynamic aeroelastic stability, and controllability can be assessed for a given
configuration. This is done by carrying out a loads survey, based on the mission
profile determined in the concept of operations, to identify the critical set of flight
conditions that drive the design of the specified configuration. This results in 100-200
critical conditions that must be considered when performing the design and evaluating
technologies. These critical conditions are associated with different ground and air
maneuvers throughout the mission and have a wide range of Mach number, altitude,
dynamic pressure and control surface settings. In order to perform the design
refinement at the critical set of flight conditions, higher fidelity, coupled, and/or
chained analyses are required. Now the design parameters are not only the conceptual
design variables cited earlier (wingspan, chords, thickness to chord ratios, engine
location, engine maximum thrust, and average cruising altitude) but also structural
sizing parameters (skin thicknesses, spar thicknesses, spar cap cross-sectional areas and
moments of inertia) and control effector parameters (the number, size, and location of
control effectors). This brings the total number of design parameters from a few dozen
up to thousands. This produces a multidisciplinary, multi-fidelity optimization
problem that needs to be solved during the design space exploration.
A final distinction between Figures 5 and 6 is the fact that in Figure 5 the design is
carried out in a single location on a single compute resource, usually a desktop machine.
Figure 6 in contrast shows that different discipline blocks may reside at different
geographic locations and execute on vastly different hardware. This enables distributed
collaborative analysis and design space exploration. This will be covered in further
detail later.
The last process component to discuss in Figure 3 is "Modeling for Design". This part
of the process represents the bridge between the conceptual representation of the
system and a representation that is required to perform physics-based analysis and
design. As shown in Figure 4, the conceptual representation of the vehicle is a series of
real and integer numbers. There is no physical geometry associated with this
representation. Performing physics-based analysis and design requires the solution of integral, ordinary, and partial differential equations to obtain the necessary system responses. In addition to the responses (pressures, aerodynamic coefficients, structural deflections, structural stresses, etc.), the sensitivities of these responses with respect to the set of design parameters under consideration are also required for gradient-based design space exploration and uncertainty quantification. These can be computed by analytic, semi-analytic, finite-difference, automatic-differentiation, direct, and adjoint methods. For complex geometries closed-form solutions to
the response and sensitivity equations do not exist, hence numerical procedures are
used that require representation and discretization of the domain. Finite element and
finite difference techniques are commonly used for computing the response quantities
of interest. Currently, the process of moving from the conceptual representation to a
representation that is necessary to further evaluate the vehicle at a higher level of
fidelity is a choke point. Indeed, the current state-of-the-art is extremely time
consuming and hands-on intensive. It is typically accomplished by a designer and/or
analyst taking the conceptual design information and using a CAD system to generate a
parametric associative model. The model development is based on years of experience
and company standards and practices. The parametric associative model must at a
minimum have the following attributes: a smooth, watertight outer mold line; an internal structural layout; and subsystem volumes, locations, and mass properties.
Many in industry are pursuing a single parametric associative model referred to as a "Master Model". The Master Model concept traditionally contained only geometric information but has now been extended to contain any critical information that may be needed throughout the life of a product. The single Master Model is a single logical representation of the product that may be distributed geographically or between several different databases or applications. The point is that there is a single representation of the product without any duplication of information. All users begin from and update a single representation of the product to ensure consistency. A CAD system (UniGraphics, Pro/E, CATIA, etc.) and a PDM system (eMatrix, Windchill, etc.) are typically combined to create a Master Model. Many companies are also coupling the Master Model with knowledge-based engineering (KBE) systems, resulting in what is called an Intelligent Master Model (IMM) [9],[10] or Smart Product Model
(SPM) [11]. This allows design intent and rules to be maintained with the model along
with the model representation itself. Typical KBE systems employed are AML [12],
Intent [13] and UG Knowledge Fusion [14]. A few features that are desirable for the
IMM are:
• Ability to quickly generate a representation of the product.
• Support for parametric and topological changes.
• Ability to quickly generate the domain-specific analysis & design models.
• Capture of the knowledge and design intent of the product.
Within the Multidisciplinary Science & Technology Center (MSTC), two approaches are being explored to address the first three items listed above. One approach is "CAD light", focused on high-fidelity geometry, specifically Constructive Solid Geometry (CSG), based on the OpenCSM code that can be driven by the Electronic Geometry Aircraft Design System (EGADS) [15]. The work in Reference [15] currently focuses on the ability to quickly generate attributed, parametric, associative models of the system that can be used for higher-fidelity analysis models (level 3 and 4 fidelity) and can eventually have a linkage to manufacturing. The second approach, developed by Alyanak [16] and called MSTC-GEOM, is not CAD-based and is focused on the generation of level 1 and level 2 fidelity models, specifically aerodynamic, structural, and mass property components for analysis and design models. The primary goal of MSTC-GEOM is to automate the creation of analysis models and structural design models for well-accepted tools such as MSC Nastran [17], ASTROS [18], ZAERO [19] or
ZEUS [20]. Figure 7 shows a representative wing structural layout that can be generated by
MSTC-GEOM.
Figure 7. MSTC-GEOM Wing Structural Layout [16]
To summarize, the goals associated with merging conceptual and preliminary design are to enable the following:
1. Use an appropriate level of fidelity, coupling, and chaining that is necessary to capture the phenomena that are driving the design of a specified configuration.
2. Use an appropriate level of fidelity, coupling, and chaining that is necessary to capture the physics associated with the technology suite that is being evaluated.
3. Increase the number of configurations that can be prototyped, that is, the number of configurations that can be carried beyond the conceptual level of design.
4. Use physics-based models to perform M&S.
With the results of #1 and #2, a designer can evaluate and refine the configuration and determine whether the selected configuration and technology suite have the performance predicted in the conceptual design phase. The designer can confirm or update the knock-down/knock-up factors used in the conceptual design phase, rerun the conceptual studies, and determine whether the same configuration is produced. The designer can also identify any phenomena that may be systemic to the chosen configuration and decide whether a new configuration should be chosen or whether the systemic phenomena should be addressed in the design. Also, the use of #1 should help eliminate the discovery of late defects. These should now be predicted and
help eliminate the discovery of late defects. These should now be predicted and
accounted for earlier in the design process with the use of the appropriate fidelity,
coupling, and chaining. The use of #2 should help reduce the risk of new technology,
again by evaluating it with a higher level of fidelity earlier in the process giving the
designer a more accurate representation of its requirements and performance.
4.2. Distributed Collaborative Design
To identify the computational framework requirements, refer to Figures 5 and 6. In
Figure 5, for the traditional conceptual design process, it can be seen that the design is
carried out in a single location on a single compute resource, usually a desktop machine.
Figure 6 in contrast shows that different engineering discipline blocks may reside at
different geographic locations and execute on vastly different hardware. This enables
distributed collaborative analysis and design space exploration. This is a key enabler
for performing physics-based design of tens of configurations with the same amount of
resources that are allocated for traditional conceptual design. Such a process as
identified in Reference [21] produces the following requirements for the computational
framework:
• Seamless access to varying-fidelity, best-in-class tools to evaluate/modify the design.
• Process representation with secure communication between all tools, data, and vested parties involved in the product development process, regardless of their geographic location.
• Modularity that enables a high level of reuse when moving from one study to the next.
These requirements are illustrated in Figure 8. The multi-colored ellipses represent
engineering methods, data, and applications that are modular and can be reused
depending on the study being conducted. They are distributed across a heterogeneous
computing network depending on the computational needs of a given piece of software
along with the corporate security requirements of the owner of the application, model,
or data. The interconnecting lines indicate that a process is being executed to perform analysis or design computations such as those found in the N2 diagram in Figure 6. Although Figure 6 shows only nine blocks in the N2 diagram, it is felt that to perform a fully physics-based design the number of blocks in the resulting N2 diagram will be on the order of hundreds. The run times of a given block will range from seconds to days or even weeks, with data sets ranging from kilobytes to terabytes. The computational framework will have to accommodate such scales.
Figure 8. Seamless Access to All Methods, Models, and Compute Resources across the Network
The Multidisciplinary Science and Technology Center is using and developing the
Service ORiented Computing EnviRonment (SORCER)[22],[23] to address the
aforementioned computational framework requirements for distributed collaborative
design. SORCER is a Java-based, network-centric computing platform that enables
engineers to perform analyses and design studies in a very flexible, robust, secure, and
distributed computing environment. SORCER federates distributed services in real
time and orchestrates the communication between the services (engineering methods
and models) based on a control strategy algorithm. It provides a common way to
model analysis and design processes in conjunction with the system/product data.
5. Example: Physics-Based Distributed Collaborative Design
Recent studies [24],[25],[26] performed within the Multidisciplinary Science & Technology Center will be cited as examples of the impact of performing physics-based conceptual design and of the usage of the SORCER framework to perform distributed collaborative computing. These studies focus on the same vehicle class, an efficient supersonic air vehicle (ESAV). The configuration studied is a single-engine fighter with a gross take-off weight of approximately 30,000 pounds.
5.1. Physics-Based Design
In References [24] and [25], Alyanak demonstrates the impact of increasing the fidelity of the vehicle weight computations and of including static and dynamic aeroelastic analysis in the conceptual design phase, evaluating two technologies using physics-based analysis: active aeroelastic wing technology [27] and active flutter suppression. Figure 9 summarizes Alyanak's findings.
Figure 9. Physics-Based Design & Technology Assessment Results
Figure 9 illustrates nine different designs. They are identified by Mach number and by
technology suite. For a selected cruise Mach number a multi-objective bi-level
optimization problem is constructed and solved. The objectives are to minimize gross
take-off weight while maximizing vehicle range. The design variables are a
combination of conceptual design variables and preliminary design variables. The
conceptual design variables are aspect ratio, inboard and outboard sweep angle, inboard
and outboard taper ratio, wing break location, and thickness over chord ratio. The
preliminary design variables are wing skin, spar, and ribs thicknesses. For a given
cruise Mach number three separate optimizations are carried out to evaluate the impact
of active aeroelastic wing technology and active flutter suppression on vehicle range
and weight. The blue diamond labeled "Raymer wt" uses the vehicle weight equations found in Reference [6]. These weight equations are based on historical data whose underlying aircraft used neither active aeroelastic wing technology nor active flutter suppression. Essentially, with those equations there is really no way to evaluate the selected technologies with a traditional conceptual design approach. What has been done in similar studies to account for active flutter suppression is to develop a k-factor based on the weight report of a similar class of aircraft. In the report the designer
identified the amount of structural weight that was added to eliminate flutter. In one
study with a similar class of vehicle this was found to be 0.5% of the gross take-off
weight (GTOW) of the vehicle. This k-factor was applied to the empirical weight equation. But as one can see from Figure 9, if physics-based analysis is used to evaluate the impact of the technology, the actual weight savings for active flutter suppression ranges from 7% to 10% depending on the cruise Mach number.
Decision makers would make drastically different investments based on these numbers.
At 0.5% GTOW savings there would be no sense in considering or investing in active
flutter suppression for this vehicle. But at 7%-10% GTOW savings this technology
would have a significant impact on the vehicle performance.
5.2. Distributed Design
In Reference [26], Burton performs design studies of an ESAV configuration utilizing
SORCER in conjunction with a mix of Linux-based cluster computers, desktop Linux-
based PCs, Windows PCs, and Macintosh PCs. The ability of SORCER to leverage
these resources is significant to MDO applications in two ways: 1) it supports platform-
specific executables that may be required by an MDA; and 2) it enables a variety of
computing resources to be used as one entity (including stand-alone PCs, computing
clusters, and high-performance computing facilities). SORCER also supports load
balancing across computational resources via the JavaSpaces technology, making the
evaluation of objective and constraint functions in parallel a simple and dynamically scalable process. In [26] a GTOW minimization is performed while a range constraint
is enforced. A bi-level optimization procedure is carried out with the outer loop design
variables being wing area, taper ratio and aspect ratio, while the inner loop design
variables are wing skin thicknesses. A sequential linear programming (SLP) algorithm
is employed, which requires sensitivity calculations. The SLP method used is tailored to take advantage of SORCER's parallel computing capability on a large number of CPU cores, such that gradient and line-search calculations are executed in parallel. This resulted in significant computational savings: in this case it reduced the computational time to perform the optimization from 24 hours to approximately 2 hours, an order-of-magnitude reduction due to using the SORCER computational framework. This
enables a conceptual designer to use physics-based models when performing their
design space exploration within the same time frame and resources (assuming the
computational resources are available) as the traditional conceptual design process.
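SORCER's actual API is not reproduced here; the following sketch only illustrates, with the standard Java executor, the embarrassingly parallel pattern that the SLP driver exploits: each finite-difference gradient component (and, analogously, each line-search point) is an independent function evaluation that can be farmed out. The toy objective stands in for a GTOW evaluation.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;
import java.util.function.Function;

// Pattern sketch only: parallel finite-difference gradients, as an SLP
// driver would farm them out. A plain thread pool stands in here for the
// distributed service federation.
public final class ParallelGradient {

    static double[] gradient(Function<double[], Double> f, double[] x,
                             double h, ExecutorService pool)
            throws InterruptedException, ExecutionException {
        double f0 = f.apply(x);
        List<Future<Double>> parts = new ArrayList<>();
        for (int i = 0; i < x.length; i++) {
            final int k = i;
            parts.add(pool.submit(() -> {       // each component in parallel
                double[] xp = x.clone();
                xp[k] += h;
                return (f.apply(xp) - f0) / h;  // forward difference
            }));
        }
        double[] g = new double[x.length];
        for (int i = 0; i < g.length; i++) g[i] = parts.get(i).get();
        return g;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        // Toy objective standing in for a GTOW evaluation.
        Function<double[], Double> gtow =
                x -> x[0] * x[0] + 3.0 * x[1] + Math.sin(x[2]);
        double[] g = gradient(gtow, new double[] {1.0, 2.0, 0.5}, 1e-6, pool);
        for (double gi : g) System.out.println(gi);
        pool.shutdown();
    }
}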
6. Concluding Remarks
A physics-based distributed collaborative design for aerospace vehicle development
and technology assessment has been presented. The new methods and processes utilize
physics-based analyses and a distributed collaborative computational environment to
predict vehicle performance which in turn is used in mission level simulations to assess
the impact of a given configuration or technology on the combat effectiveness of a
system. This enables AFRL technology developers to have a quantifiable and traceable trail from the impact of their technologies on system performance parameters such as weight, lift, and drag to the terms in which Air Force leadership measures system effectiveness: lethality, survivability, sustainability, and affordability. The overall goal
is to perform design studies and M&S mission assessments with physics-based models
with the same resources and time with which traditional conceptual design is achieved today, and to evaluate tens of configurations at the preliminary level of fidelity rather than the current practice of one or two. Three primary benefits can be obtained from the new process: generation of information with less uncertainty for making decisions concerning system capabilities, technology assessment, and technology risk reduction; a reduction in the discovery of late defects within the system due to physics; and an opening up of the design space to enable novel concepts and otherwise unobtainable capability by leveraging the discipline couplings.
The process utilizes the SORCER computing infrastructure to enable collaborative design across organizational boundaries and full usage of all compute resources on the network, ranging from desktops to high-performance computing machines. This is the
key to executing the process within the same amount of time and resources as a
traditional conceptual design process.
References
[1] Gregory L. Roth, John W. Livingston, Maxwell Blair, and Raymond Kolonay, CREATE-AV DaVinci: Computationally Based Engineering for Conceptual Design, AIAA 2010-12332.
[2] Dan Caudill and Jim Zeh, Aerospace Vehicle Technology Assessment & Simulation (AVTAS) Mission Level Simulation System (MLS2), AIAA Modeling and Simulation Technologies Conference and Exhibit, 16-19 August 2004, Providence, Rhode Island, AIAA 2004-4933.
[3] Dan Caudill, Eric Like, and James Zeh, Capability Focused Modeling, Simulation, and Analysis, AIAA
Modeling and Simulation Technologies Conference and Exhibit, 15-18 August 2005, San Francisco,
CA, AIAA 2005-6012.
[4] Andrew Cowen, Up and Away Air Vehicle Performance Assessment, 46th AIAA Aerospace Sciences Meeting and Exhibit, 7-10 January 2008, Reno, Nevada, AIAA 2008-205.
[5] J. V. Kitowski, Combat Effectiveness Methodology as a Tool for Conceptual Fighter Design, 1992 Aerospace Design Conference, February 3-6, 1992, Irvine, CA, AIAA 92-1197.
[6] Daniel P. Raymer, Aircraft Design: A Conceptual Approach, AIAA, Inc, Washington, DC, 1989.
[7] L. A. McCullers, Flight Optimization System (FLOPS) Version 8.20 User's Guide, ATK Space Division,
NASA Langley Research Center, 2011.
[8] Gelhausen, P., Moore, M., and Gloudemans, J., "Overview of ACSYNT for Light Aircraft Design," SAE
Technical Paper 951159, 1995
[9] http://www.ugs.com/products/nx/docs/wp_intelligent_master_model.pdf
[10] Peter J. Rohl, Raymond M. Kolonay, Rohinton K. Irani, Michael Sobolewski, Kevin Kao, and Michael W. Bailey, A Federated Intelligent Product Environment, 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, 6-8 September 2000, Long Beach, CA, AIAA-2000-4902.
[11] http://www.atl.lmco.com/programs/smart_product.php
[12] http://www.technosoft.com/aml.php
[13] http://www.engineeringintent.com/intentSales.html
[14] http://www.ugs.com/products/prod_index.shtml
[15] Haimes, R., & Drela, M. (2012). On the Construction of Aircraft Conceptual Geometry for High-Fidelity Analysis and Design, 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Nashville, TN.
[16] Edward J. Alyanak and Raymond M. Kolonay, Efficient Supersonic Air Vehicle Structural Modeling for Conceptual Design, 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 17-19 September 2012, Indianapolis, Indiana, AIAA 2012-5519.
[17] Reymond, M., & Miller, M. (1996). MSC/NASTRAN Quick Reference Guide Version 68. Los Angeles,
CA: The MacNeal-Schwendler Corporation.
[18] Neill, D., & Herendeen, D. (1995). ASTROS Enhancements: Volume I - ASTROS User's Manual. Torrance, CA: Wright Laboratory, WL-TR-96-3004.
[19] ZAERO User's Manual: Engineers' Toolkit for Aeroelastic Solutions. Scottsdale, AZ: ZONA Technology, Inc.
[20] ZEUS User's Manual Version 3.1. (2009). Scottsdale, AZ: ZONA Technology, Inc.
[21] R.M. Kolonay, Functional Requirements for Next Generation Engineering Analysis and Design Integration Environments, 12th ISPE International Conference on Concurrent Engineering: Research and Applications, Fort Worth, TX, 25-29 July 2005.
[22] M. Sobolewski and R. Kolonay, Service-oriented Programming for Design Space Exploration, in J. Stjepandic et al. (eds.), Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Vol. 2, pp. 995-1007, DOI: 10.1007/978-1-4471-4426-7_84, Springer-Verlag London, 2013.
[23] Michael Sobolewski, Scott Burton and Raymond Kolonay, Parametric Mogramming with Var-oriented Modeling and Exertion-Oriented Programming Languages, 20th ISPE International Conference on Concurrent Engineering, 2-6 September 2013, RMIT University, Melbourne, Australia.
[24] Edward Alyanak, Raymond Kolonay, Peter Flick, Ned Lindsley, and Scott Burton, Efficient Supersonic Air Vehicle Preliminary Conceptual Multi-Disciplinary Design Optimization Results, 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 17-19 September 2012, Indianapolis, Indiana, AIAA 2012-5518.
[25] Edward Alyanak, Multidisciplinary Design and Optimization of Efficient Supersonic Air Vehicles, FY13 Scientific Advisory Board S&T Quality Review Presentation, 20-23 May 2013.
[26] Scott A. Burton, Edward J. Alyanak, and Raymond M. Kolonay, Efficient Supersonic Air Vehicle Analysis and Optimization Implementation using SORCER, 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 17-19 September 2012, Indianapolis, Indiana, AIAA 2012-5520.
[27] Ed Pendleton, Pete Flick, Donald Paul, Dave Voracek, Eric Reichenbach, and Kenneth Griffin, The X-53: A Summary of the Active Aeroelastic Wing Flight Research Program, 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 23-26 April 2007, Honolulu, Hawaii, AIAA 2007-1855.
Provisioning Service Resources for Cloud
Manufacturing
Lingjun KONG, Wensheng XU and Jianzhong CHA
School of Mechanical, Electronic and Control Engineering,
Beijing Jiaotong University, Beijing 100044, China
Abstract. Cloud manufacturing is a new service-oriented networked manufactur-
ing paradigm which can integrate various physical manufacturing resources and
manufacturing capacities and provide manufacturing services across the whole product lifecycle. It is a new research direction in the field of advanced manufactur-
ing. In this paper, a provisioning method of service resources for cloud manufac-
turing is studied. Firstly, the service-oriented architectures are investigated to de-
cide the service architecture for the encapsulation of manufacturing resources.
Then a three-step provisioning method of manufacturing services is proposed, a
function-classification based manufacturing service interface is defined, and the
encapsulation strategies for four categories of manufacturing resources including
intelligence resource, knowledge resource, tool resource and manufacturing capac-
ity are put forward, and the dynamic provisioning process of manufacturing ser-
vice is described. By the proposed method, the four categories of manufacturing
resources can be dynamically provisioned as services with well-defined service
interfaces for cloud manufacturing.
Keywords. Cloud manufacturing, service-oriented architecture, encapsulation,
provision
1. Introduction
Networked manufacturing (NM) [1] represents a manufacturing pattern which can de-
liver products to the market in time by sharing manufacturing resources from different
manufacturing enterprises connected by network technology, information technology
and manufacturing technology. In order to realize NM, various NM paradigms have
been proposed such as application service provider (ASP) [2], manufacturing grid
(MGrid) [3], agile manufacturing (AM) [4], global manufacturing (GM) [5], etc. By
these NM paradigms, a lot of research results have been achieved in the field of re-
source modeling, service encapsulation, collaborative design, supply chain manage-
ment, etc. In order to further expand and deepen the application of NM, a new NM pa-
radigm, called cloud manufacturing (CMfg) [6,7], is proposed. CMfg is a new ser-
vice-oriented model developed from existing NM paradigms (e.g. ASP, MGrid, AM,
GM) to provide on-demand, high quality, low consumption, reliable and safe services
by the network and service platform under the support of cloud computing [8], internet
of things (IoT) [9], cyber physical system (CPS) [10], service oriented technologies,
enterprise information technologies and so on. It is also an important direction to be
followed in the field of advanced manufacturing [11].
The goal of CMfg is manufacturing tertiarization [12] which includes two aspects
of meaning. The first aspect is manufacturing inner-tertiarization which means the
manufacturing enterprises provide the manufacturing resources as manufacturing ser-
vices among the manufacturing industry to increase the resource utilization rate, and its
essence is to provide services of the whole manufacturing lifecycle for producers. The
second aspect is manufacturing outer-tertiarization, which means the manufacturing
enterprises expand their business to product-related services such as installation,
maintenance, recycling, etc. in order to get higher additional profits, and its essence is
to provide services of the whole product lifecycle for customers. The two aspects not
only support and promote each other, but also develop together.
The inner-tertiarization is the foundation of manufacturing tertiarization, and how
to provision service resources for cloud manufacturing is a key research topic for the
inner-tertiarization. In this paper, a study of provisioning manufacturing services is
carried out. The paper is organized as follows. Section 2 selects the service architecture
for manufacturing resources according to the resource characteristics. Section 3 pro-
poses a three-step provisioning method of manufacturing services, which includes three
technologies: manufacturing service interface, four encapsulation strategies for manu-
facturing resources and dynamic provisioning of manufacturing service. Section 4
gives the conclusions.
2. Selection of Service-Oriented Architecture
In order to provision service resources for cloud manufacturing, a service-oriented ar-
chitecture (SOA) is needed as the underlying service architecture. SOA is a software
architecture that uses loosely coupled software services and integrates them into a distributed computing system by means of service-oriented programming [13], and it has
three elements: service provider, service requestor and service registry. According to
whether the communication protocol between service provider and service requestor is
fixed or not, SOA can be classified into service protocol-oriented architecture (SPOA)
and service object-oriented architecture (SOOA) as shown in Fig. 1.
In SPOA, the service provider publishes the service description to the service regi-
stry, and the service requestor looks up the service in the service registry then gets the
service description. By the service description, the service requestor constructs a ser-
vice proxy to communicate with the service provider, and the communication protocol
in SPOA is fixed like SOAP (Simple Object Access Protocol) in Web Services and
IIOP (Internet Inter-ORB Protocol) in CORBA. In SOOA, the service provider pub-
lishes the service proxy to the service registry, and the service requestor looks up the
service in the service registry then gets the service proxy. The service requestor can
communicate with the service provider by the service proxy without knowledge of the
communication protocol, and the service provider can use any communication protocol
according to its particular distributed application.
Most manufacturing resources, especially manufacturing equipment, use exclusive communication protocols, because the manufacturer may not have foreseen that all the manufacturing resources would be connected in the future, or may want to retain its share of key customers. So SOOA, which is protocol-neutral, is more suitable for manufacturing resources than SPOA, which uses a fixed protocol.
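The proxy-based lookup that distinguishes SOOA can be sketched as below; a plain in-memory map stands in for the service registry (in practice something like a Jini-style lookup service), and all names are hypothetical. The point is that the requestor receives a live proxy object and never sees the provider's wire protocol. In SPOA, by contrast, the registry would hold only a service description from which the requestor builds a protocol-specific proxy.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// SOOA in miniature: the provider publishes a *proxy object* to the
// registry; the requestor looks it up by interface and invokes it with
// no knowledge of the underlying protocol. Names are hypothetical.
public final class SooaSketch {

    interface MillingService {          // standardized service interface
        String mill(String partSpec);
    }

    static final Map<Class<?>, Object> registry = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // Provider side: publish a proxy that hides its own protocol.
        registry.put(MillingService.class, (MillingService) partSpec ->
                "milled(" + partSpec + ") via provider-specific protocol");

        // Requestor side: look up by interface, invoke through the proxy.
        MillingService proxy =
                (MillingService) registry.get(MillingService.class);
        System.out.println(proxy.mill("bracket-v2"));
    }
}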
3. Provisioning Method of Manufacturing Services
Based on SOOA, a three-step provisioning method of manufacturing services is pro-
posed: (i) Interface defining: to define manufacturing service interface which needs to
be implemented by the manufacturing resources, so the service resources can be recog-
nized and invoked correctly in SOOA; (ii) Resource encapsulation: according to the
characteristics of manufacturing resources, to encapsulate different manufacturing re-
sources as services and implement the interface in the service program; (iii) Dynamic
provisioning: to deploy the service program into the service container, and then register
the service in the service registry.
3.1. Manufacturing Service Interface
Crucial to the success of SOOA is interface standardization [13]. Service providers can
publish their services and also be identified by the standardized interface. In order to
define standardized interface for manufacturing resources, a manufacturing service
interface is proposed based on the function classification as shown in Fig. 2.
The manufacturing service interface has two parts: a manufacturing function tree and a service interface. The manufacturing function tree is a classification system of manu-
facturing functions, and its basic element is manufacturing function (MF). Each MF
node can be divided into sub MF classifications, e.g. a manufacturing function can be
divided into design function, processing function, assembling function, transportation
function, etc., and the design function can be divided into requirement analysis, con-
ceptual design, structure design, etc., and the structure design can be divided into geo-
metric modeling, structure analysis, digital simulation, etc. Thus a multi-dimensional
and multi-level manufacturing domain-oriented tree can be constructed.
Service interface is the description model of an MF node; it has three parts: description, signature, and method. Description is an extensible node which can describe the function information. The signature is the identifier of the service interface. The method represents the invoking information of the service interface; it can have multiple operations, and each operation can have zero or more inputs and outputs. Service interface is the model of the manufacturing function and is not related to the manufacturing resources. Each service interface can be implemented by multiple manufacturing resources.
(a) SPOA architecture; (b) SOOA architecture.
Figure 1. Comparison between SPOA and SOOA.

And the service interface is connected to the MF tree by the function path. The manufacturing service interface needs to be implemented by the resource provider according to the resource function, so the manufacturing service can be identified and invoked correctly.
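A hedged sketch of this two-part structure follows: an MF-tree path identifies the function, and the service interface carries a description, a signature, and methods whose operations have inputs and outputs. All names are hypothetical illustrations of the structure in Figure 2, not a normative schema.

import java.util.List;

// Sketch of the two-part manufacturing service interface of Figure 2:
// an MF-tree path identifying the function, plus description, signature,
// and methods whose operations carry inputs and outputs.
public final class MfServiceInterface {

    record Operation(String name, List<String> inputs, List<String> outputs) {}

    record ServiceInterface(
            String functionPath,   // position in the MF tree
            String description,    // extensible function information
            String signature,      // identifier of the interface
            List<Operation> methods) {}

    public static void main(String[] args) {
        ServiceInterface structureAnalysis = new ServiceInterface(
                "/manufacturing/design/structure-design/structure-analysis",
                "Linear static structural analysis",
                "struct-analysis-v1",
                List.of(new Operation("analyze",
                        List.of("geometryModel", "loadCase"),
                        List.of("stressField", "deflectionField"))));
        System.out.println(structureAnalysis);
    }
}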
3.2. Encapsulation Strategies for Manufacturing Resources
According to the formational relations between manufacturing resources, the manufac-
turing resources are classified into four categories: intelligence resource, knowledge
resource, tool resource and manufacturing capacity. Because the content of each cate-
gory is very rich and has many types of resources, it is infeasible to study the encapsu-
lation method of every type of manufacturing resource. So the encapsulation strategies
for the four categories of manufacturing resources are proposed below. When encapsu-
lating a specific type of manufacturing resource, the corresponding encapsulation
strategy can be extended to achieve the encapsulation of the specific resource.
Encapsulation Strategy for Intelligence Resource
Intelligence resource (IR) is a kind of manufacturing resource which exhibits human wisdom behavior, and has the features of dynamism, mobility and autonomy. IR includes
domain engineer, expert advisor, manufacturing activity coordinator, manufacturing
requestor, product customer, etc. Generally speaking, intelligence resource is a kind of
offline resource; for example, the domain engineer might not be online all the time, but might be working in the workshop or on a regular rest schedule, so the IR needs to be notified to come online on time when a task arrives. If the IR has an intelligent terminal,
such as smart phone, PDA or tablet computer, the manufacturing task can be directly
pushed to the intelligent terminal for the IR. Meanwhile, since the engineering background and knowledge structure of different IRs differ a lot, customized human-computer interaction interfaces need to be constructed for different IRs.

Figure 2. Structure of the manufacturing service interface.
According to the analysis above, the encapsulation strategy for IR is shown in
Fig. 3, which has three layers: (i) Connectivity layer. The SOOA service connects to the network dynamically and is used as the connector between the IR and the IR service; when the IR service is requested, it submits the request to the upper layer. (ii) Management
layer. The task manager is responsible for the management of submitted service re-
quests, forms the task queue according to the task strategy, and provides task informa-
tion to the upper layer. (iii) Interface layer. The interface manager gets the task from
the management layer and invokes suitable interface to communicate with IR according
to the task status and IR features. If the task status is uninformed, the interface manager invokes the notification interface to ask the IR to work online on time; if the IR has an intelligent terminal, the interface manager invokes the push interface to send the task directly to the terminal of the IR and provides customized human-machine interaction interfaces to help the IR finish the manufacturing task successfully.
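The three-layer strategy can be sketched as below; the class and channel names are hypothetical, and a plain queue stands in for the task manager's strategy-driven queue.

import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the three-layer IR encapsulation: a connectivity layer
// submits requests, a task manager queues them, and an interface
// manager chooses notification vs. direct push depending on whether
// the IR has an intelligent terminal. All names are hypothetical.
public final class IrService {

    record Task(String description) {}

    private final Queue<Task> taskQueue = new ArrayDeque<>(); // management layer
    private final boolean hasIntelligentTerminal;

    IrService(boolean hasIntelligentTerminal) {
        this.hasIntelligentTerminal = hasIntelligentTerminal;
    }

    // Connectivity layer: a service request arrives from the network.
    void submit(Task task) {
        taskQueue.add(task);
        dispatchNext();
    }

    // Interface layer: pick the channel appropriate to the IR.
    private void dispatchNext() {
        Task task = taskQueue.poll();
        if (task == null) return;
        if (hasIntelligentTerminal) {
            System.out.println("push to terminal: " + task.description());
        } else {
            System.out.println("notify IR to come online for: " + task.description());
        }
    }

    public static void main(String[] args) {
        new IrService(true).submit(new Task("review conceptual layout"));
        new IrService(false).submit(new Task("approve tooling plan"));
    }
}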
Encapsulation Strategy for Knowledge Resource
Knowledge resource (KR) is generated by intelligent activity of IR. KR is a kind of
manufacturing resource which is knowledge-based and can be reused in the manufac-
turing process. KR can be classified into explicit knowledge and tacit knowledge. Ex-
plicit knowledge is a kind of KR which has physical form such as engineering case,
working standard, simulation model and so on. Tacit knowledge is a kind of KR which
still exists in IR, such as project experience, operation experience, individual method,
etc. Assistant methods are needed to help the IR convert tacit knowledge to explicit
knowledge in order to avoid repeated intelligent activity. For example, the knowledge
engineer acquires domain knowledge from the domain engineer by means of meetings or questionnaires.

Figure 3. Encapsulation strategy for intelligence resource.

The digitization is the precondition of the service encapsulation of KR.
Because the same KR can provide different KR services to different target users, the
manufacturing function needs to be identified in order to confirm the manufacturing
function interface, and the service process is the query and reasoning process of KR.
Take the KR service of material properties for instance: the service target user is the finite element analysis engineer, the service input is the material name and analysis type, the service output is the set of material properties, and the service process is a criteria query of the material property database.
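A minimal sketch of this material-property KR service, assuming an in-memory table in place of the material property database, is given below; the material values are illustrative placeholders.

import java.util.Map;

// The material-property KR service from the text, in miniature: input is
// material name and analysis type, output is a property set, and the
// service process is a query of a (here in-memory) material database.
// Material values are illustrative placeholders.
public final class MaterialPropertyService {

    record Properties(double youngsModulusGPa, double densityKgM3) {}

    private static final Map<String, Properties> DB = Map.of(
            "Al-2024", new Properties(73.0, 2780.0),
            "Ti-6Al-4V", new Properties(114.0, 4430.0));

    static Properties query(String material, String analysisType) {
        Properties p = DB.get(material);
        if (p == null) throw new IllegalArgumentException("unknown material");
        // A real service would tailor the property set to the analysis type.
        return p;
    }

    public static void main(String[] args) {
        System.out.println(query("Al-2024", "static-structural"));
    }
}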
The encapsulation strategy for KR has three steps as shown in Fig. 4: (i) Materia-
lization. By the methods of knowledge engineering, convert the tacit knowledge such
as project experience, operation experience, individual method, etc. to explicit know-
ledge such as working standard, operation manual, engineering case, etc. (ii) Digitiza-
tion. Convert the explicit knowledge to digital knowledge resource such as electronic
document, knowledge database, case database, etc., and the knowledge resource in this
stage is a localized electronic resource or a networked information resource which can
be accessed through a particular tool or interface. (iii) Tertiarization. Convert the digi-
tal knowledge resource to a knowledge service by means of SOOA technology, implement the manufacturing service interface according to the resource function, and implement the service execution process; the knowledge resource in this stage is a knowledge service which has well-defined service semantics on the network.

Figure 4. Encapsulation strategy for knowledge resource.
Encapsulation Strategy for Tool Resource
Tool resource (TR) is developed by IR for a specific purpose; it is a kind of manufacturing resource which has specific manufacturing functions and can be classified into software tools and hardware tools. Software tools include office software such as word processors, presentation software, etc., and engineering software such as CAx, PDM and PLM software. Hardware tools include computing resources such as computers, storage and networks, and manufacturing equipment such as NC lathes, machining centers, drilling machines, etc. The service encapsulation of TR has two preconditions: (i) the TR can connect to the network. Software tools installed on a computer get their network connection through the resource host computer; most hardware tools need to be made network-connectable by adding a network connection module, for instance adding a ZigBee wireless module to a fatigue-testing machine. (ii) The TR provides an open interface. The open interface is needed as the encapsulation interface of the TR, so that the TR can be invoked externally.
Figure 4. Encapsulation strategy for knowledge resource.

The encapsulation strategy for TR has three steps, as shown in Fig. 5: (i) Connect the TR to the network. According to the resource features, add the network connection module to the TR. (ii) Choose the resource function. Most TRs have multiple functions; for example, the ANSYS software can perform geometric modeling, meshing and finite element analysis, and a multifunctional NC machine can perform drilling, milling, grinding and laser processing, so the resource function needs to be chosen according to the application requirements. (iii) Build the SOOA service program. Based on the open interface of the TR, such as an application programming interface, command-line interface, database interface, etc., develop the local program for the chosen resource function, then encapsulate the program by means of SOOA technology.
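As an illustration of step (iii), the sketch below wraps one chosen tool function behind an assumed command-line interface. The tool name and its flags are hypothetical; the resulting local program would then itself be encapsulated as an SOOA service and deployed as described in Section 3.3.

```typescript
// Hypothetical local program driving one chosen TR function (meshing) through
// an assumed command-line interface; "mesh-tool" and its flags are invented.
import { execFile } from "node:child_process";

function meshService(modelPath: string, elementSize: number): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile("mesh-tool",
      ["--input", modelPath, "--element-size", String(elementSize)],
      (err, stdout) => (err ? reject(err) : resolve(stdout)));
  });
}
// The function above is the local program of step (iii); SOOA encapsulation
// would expose it as a networked manufacturing service.
```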
Encapsulation Strategy for Manufacturing Capacity
Manufacturing capacity (MC) is a kind of intangible resource which is composed of IR, KR and TR according to certain logical relations and constraints. MC can provide the capacity to finish a type of manufacturing task, such as design capacity, processing capacity, transportation capacity and so on. Organizing IR, KR and TR as MC has two benefits. First, the manufacturing enterprise can provide the overall solution to a problem in the form of MC and so attain higher profits. Second, resources which are difficult to integrate by computers can be shared indirectly through the IR in the MC.
The encapsulation strategy for MC has two steps, as shown in Fig. 6: (i) Inner encapsulation. Based on the encapsulation strategies for KR and TR, encapsulate the easily encapsulated resources inside the MC as services to improve the operating efficiency of the MC. (ii) Outer encapsulation. Based on the encapsulation strategy for IR, encapsulate the IR inside the MC as an IR service. The IR service is taken as the service agent: the requestor can submit a task to the IR service, and the IR service will inform the representative IR to coordinate the resources inside the MC to finish the task collaboratively.
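A compact sketch of the outer encapsulation with invented names: the IR service acts as the agent through which a requestor reaches the interior resources of the MC.

```typescript
// Illustrative outer encapsulation of an MC; the interior services are the
// already-encapsulated KR/TR services from the inner encapsulation step.
interface InnerService { run(task: string): string; }

class IRServiceAgent {
  constructor(private interior: InnerService[]) {}
  // The requestor submits a task to the IR service, which coordinates the
  // interior resources to finish it collaboratively.
  submit(task: string): string[] {
    return this.interior.map(service => service.run(task));
  }
}
```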
Figure 5. Encapsulation strategy for tool resource.

3.3. Dynamic Provisioning of Manufacturing Services

After the encapsulation of manufacturing resources, the SOOA service programs need to be deployed into the SOOA service container and provisioned as manufacturing services on the network. The SOOA service container is a light-weight container for SOOA services which supports distributable hot deployment, e.g. the Cybernode service in Rio [14]. The provisioning process of a manufacturing service has six steps, as shown in Fig. 7: (i) the instantiation instruction, with the location of the code server and the service interface, is sent to the SOOA service container; (ii) the SOOA service container deploys the service program by downloading it from the code server according to the service interface; (iii) the SOOA service container starts the deployed service program; (iv) the started service program registers its service proxy in the service registry; (v) after the service has been invoked, the destroy instruction is sent to the SOOA service container; (vi) the service container destroys the service instance, and the service is unregistered from the service registry.
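The six steps can be sketched as a container/registry lifecycle; the classes below are illustrative stand-ins and deliberately not the actual Rio/Cybernode API.

```typescript
// Stand-in sketch of the six-step provisioning lifecycle.
class ServiceRegistry {
  private proxies = new Map<string, string>();
  register(name: string, proxy: string): void { this.proxies.set(name, proxy); } // (iv)
  unregister(name: string): void { this.proxies.delete(name); }                  // (vi)
}

class ServiceContainer {
  constructor(private registry: ServiceRegistry) {}
  instantiate(codeServerUrl: string, serviceInterface: string): void {
    console.log(`deploy ${serviceInterface} from ${codeServerUrl}`);             // (ii)
    console.log(`start ${serviceInterface}`);                                    // (iii)
    this.registry.register(serviceInterface, `proxy-of-${serviceInterface}`);    // (iv)
  }
  destroy(serviceInterface: string): void {
    console.log(`destroy instance of ${serviceInterface}`);                      // (vi)
    this.registry.unregister(serviceInterface);
  }
}

const container = new ServiceContainer(new ServiceRegistry());
container.instantiate("http://code-server.example/mesh", "MeshService");         // (i)
container.destroy("MeshService");                                                // (v)-(vi)
```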

Figure 6. Encapsulation strategy for manufacturing capacity.

Figure 7. Provisioning process of a manufacturing service.
4. Conclusions
Provisioning diversified service resources is the key to the implementation of cloud manufacturing. Owing to the exclusive characteristics of the communication protocols of manufacturing resources, SOOA is selected as the underlying service architecture. An SOOA-based provisioning method for manufacturing services is proposed, by which the four categories of manufacturing resources, namely intelligence resource, knowledge resource, tool resource and manufacturing capacity, can be encapsulated and dynamically provisioned as services with well-defined interfaces for cloud manufacturing. This work forms the basis for future studies of CMfg.
Acknowledgements
This research is supported by the National Natural Science Foundation of China (No. 51175033).
References
[1] Y.S. Fan and G.Q. Huang. Networked manufacturing and mass customization in the e-commerce era: The Chinese perspective. International Journal of Computer Integrated Manufacturing, 2007, 20(2-3): 107-114.
[2] G. Flammia. Application service providers: challenges and opportunities. IEEE Intelligent Systems and Their Applications, 2001, 16(1): 22-23.
[3] F. Tao, L. Zhang and A.Y.C. Nee. A review of the application of grid technology in manufacturing. International Journal of Production Research, 2010, 49(13): 4119-4155.
[4] Y.Y. Yusuf, M. Sarhadi and A. Gunasekaran. Agile manufacturing: The drivers, concepts and attributes. International Journal of Production Economics, 1999, 62(1/2): 33-43.
[5] M. Ulieru, D. Norrie, R. Kremer et al. A multi-resolution collaborative architecture for Web-centric global manufacturing. Information Sciences, 2000, 127(1/2): 3-21.
[6] B.H. Li, L. Zhang, S.L. Wang et al. Cloud manufacturing: A new service-oriented networked manufacturing model. Computer Integrated Manufacturing Systems, 2010, 16(1): 1-7, 16 (in Chinese).
[7] X. Xu. From cloud computing to cloud manufacturing. Robotics and Computer-Integrated Manufacturing, 2012, 28(1): 75-86.
[8] M. Armbrust, A. Fox, R. Griffith et al. A view of cloud computing. Communications of the ACM, 2010, 53(4): 50-58.
[9] International Telecommunication Union. ITU Internet reports 2005: The Internet of things. (2005-11) [2013-01-13]. http://www.itu.int/osg/spu/publications/internetofthings/InternetofThings_summary.pdf.
[10] E.A. Lee. Cyber physical systems: Design challenges. In: Proceedings of the 11th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing, Orlando, USA, 2008: 363-369.
[11] B. Huang, C. Li, C. Yin et al. Cloud manufacturing service platform for small- and medium-sized enterprises. International Journal of Advanced Manufacturing Technology, 2012, doi: 10.1007/s00170-012-4255-4.
[12] A. Szalavetz. Tertiarization of manufacturing industry in the new economy: Experiences in Hungarian companies. Budapest: Hungarian Academy of Sciences, 2003.
[13] M. Sobolewski. SORCER: Computing and metacomputing intergrid. In: Proceedings of the 10th International Conference on Enterprise Information Systems, Barcelona, Spain, 2008: 74-85.
[14] Rio Project. (2012-12-31) [2013-01-13]. http://www.rio-project.org/.
A Virtual Environment for Collaborative Engineering with Formal Verification

Wolfgang Herget a, Christopher Krauß a,1, Andreas Nonnengart a, Torsten Spieldenner a, Stefan Warwas a and Ingo Zinnikus a

a DFKI, Campus D3.2, 66123 Saarbrücken, Germany
1 Corresponding Author.
Abstract. In this paper, we present a collaborative, web-based framework to create 3D scenarios for product design, simulation and training assisted by animated avatars. To support correct design and to anticipate critical design decisions, we employ a verification approach to check for safety and reachability properties. By animating the 3D model based on prover results (trace witnesses out of constructive proofs), the system provides tangible feedback of the verification to the designer. We describe the components of the framework and illustrate the functionality on an existing factory production line.
Keywords. Collaborative engineering, 3D editor, formal verification, agents
Introduction
Concurrent Engineering demands solutions to several engineering issues. These include the definition of a suitable design process, adequate compatibility of the design software to be used, and well-defined, reliable communication between engineering teams that include specialists from the various disciplines involved in the process. In this paper, we present a collaborative, web-based framework to support engineering teams in this effort. Within this framework, 3D scenarios for product design and simulation are designed, and training - assisted by animated avatars - is made possible.

To support correct design and to anticipate critical design decisions, a verification approach is employed that allows one to check for safety (invariant and reachability) properties already during the design phase. The underlying formal model consists of a collection of hybrid automata that cover a description of the (behavior of the) objects under consideration and their potential interaction (sensor perception, collision, etc.), together with controller software (centralized or decentralized).

By animating the 3D model based on verification results (trace witnesses out of constructive proofs), the system provides tangible feedback of the verification to the designer. The presented approach implements a common management of the 3D model and the formal model, including an implicit adaptation of the formal model whenever changes are applied to the 3D model. Thereby, formal representations are hidden from system designers as much as possible. The general idea behind this approach is to allow non-specialists in the various areas involved to make use of techniques that otherwise would require high expertise. For instance, a 3D expert will probably have no problem manipulating the 3D scene according to her desires. The annotated

formal models, on the other hand, allow her to check the required safety properties even
without an explicit need to become an expert in formal verification. And, in particular, reachability proofs can be visualized as an animation within the 3D scene, thus providing comprehensive feedback on the verification results.
This paper is organized as follows: after introducing our running example, we present the framework's editor. This is followed by a more elaborate chapter in which we outline the aspects that deal with formal analysis and its interplay with the editor. After that, we briefly discuss our simulation of human/agent behavior and conclude with a section on related and further work. The work presented in this paper was supported by the research projects c3d and ISReal [3], funded by the German BMB+F.
1. Running Example - Key Chain Assembly
The example we use to demonstrate our approach (see Figure 1 for a 3D version) consists of a production line in the SmartFactoryKL [6]. It produces individualized key fobs with an LED flashlight; the fobs can be customized with an engraved name. The installation consists of four larger modules: order picking, milling, automated assembly and a manual working station.
Product assembly starts at the order picking module. The fob's cover, which contains an RFID module, is imprinted with the information necessary for this fob's production, for example the customized engraving. A robot arm then takes the cover from storage to the milling module. The milling station reads the relevant product information from the product itself and performs the requested engravings. After engraving, the case cover is picked up again by the robot arm at the order picking station and delivered to the automated assembly machine by a carriage that runs on a conveyor belt system.

At the heart of the assembly station is a pick-and-place (PNP) robot. It retrieves the covers from the carriage and the other parts necessary for assembly from two dispenser units, and distributes them in the required order to up to three pressing machines. Once all parts are placed, these presses compress them into the completed fob. Finally, the PNP robot delivers the key fob to an exit chute which leads to the manual working station. At this station, quality assurance as well as individualized post-processing steps take place.
2. Editor
2.1. Technologies
2.1.1. XML3D
XML3D is used as the core technology to display and interact with 3D graphics in the web browser. With the help of XML3D, the usual elements of web pages - which mostly consist of HTML, CSS and XML elements - are enriched by DOM objects that represent 3D graphics in a scene-graph-like structure [21]. All nodes within this graph are also nodes in the web site's DOM tree representation, and can be accessed and changed via JavaScript like any other common DOM element. Moreover, numerous HTML events can be registered directly on these nodes. By this, all 3D data is entirely merged
into the website, and not encapsulated in additional applets or specialized software. Interaction with the 3D scene therefore does not differ from interacting with a common 2D website.
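Since XML3D nodes are ordinary DOM nodes, they can be queried and mutated with standard DOM calls; in this sketch the element id and the transform reference are assumptions, not values from a real scene.

```typescript
// Standard DOM access on an XML3D scene node; id and transform are hypothetical.
const robot = document.getElementById("PickNPlace");
if (robot) {
  // HTML events can be registered directly on 3D scene nodes.
  robot.addEventListener("click", () => console.log("robot selected"));
  // Changing an attribute updates the 3D scene like any other DOM mutation.
  robot.setAttribute("transform", "#t_PickNPlace_moved");
}
```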
2.1.2. Data Storage and Retrieval
Information about all data concerning a project or scenario is stored in a NoSQL data-
base. For the current application, Apache CouchDB [7] is used. The NoSQL approach
allows a very flexible layout of database documents. In addition, CouchDB pro- vides a
REST API [19] to retrieve, create or update database entries. Running instances of the
editor can start a long poll request to automatically receive any changes made to any of
the database objects [7]. The Backbone model-view-controller Javascript frame- work
[1] is used to render objects and object collections from the database to the website, in-
cluding the 3D visualization of the scenario.
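The long-poll mechanism can be used directly against CouchDB's changes feed (the _changes endpoint with feed=longpoll); the database name and the handling of received documents in the sketch below are illustrative.

```typescript
// Long-poll loop on CouchDB's changes feed; the db name and update handling
// are illustrative, the _changes parameters are standard CouchDB.
async function watchScenario(baseUrl: string, db: string): Promise<void> {
  let since: string | number = 0;
  for (;;) {
    const res = await fetch(
      `${baseUrl}/${db}/_changes?feed=longpoll&include_docs=true&since=${since}`);
    const body = await res.json();
    since = body.last_seq;                       // resume point for the next poll
    for (const change of body.results) {
      // Here the Backbone models and the 3D view would be updated.
      console.log("changed object:", change.id);
    }
  }
}

watchScenario("http://localhost:5984", "scenario-db").catch(console.error);
```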
2.2. The Editing Process
The editing process starts from a static scenery that is fixed for each scenario (like the empty factory facility in the example). The user can navigate freely through the scene using keyboard and mouse. One or more sets of 3D models are provided for every scenario. Components from these sets can be instantiated in and moved through the scenery by mouse, similar to known level-editor tools for gaming (see Figure 1 for an example).

Each change made to a scenario, like adding or deleting instances of an asset, or changing an existing instance's position or orientation, leads to an implicit save of these changes to the database, where the overall system is stored as a composition of instantiated components. In turn, all running instances of the web editor that have subscribed to the database's long poll immediately receive these changes and update their version of the scene accordingly. By this multicast approach, the version of the system displayed in the editor is always kept synchronized with the database [14].
Figure 1. 3D Editor: A factory module is placed and marked for interaction
2.3. Collaborative Editing
In order to work jointly on the design of a system, users have to authenticate using their user account data. User authentication is realized by CouchDB's cookie authentication, and CouchDB's user documents are used for user management. Having performed this step, a user is provided with a list of all projects he is participating in. A project groups one or more of the already described scenarios that share common parameters - for example, the same static scenery to start from - but differ in the concrete arrangement of the final system. Choosing a project lets the user select a scenario to enter for modification.

The automatic synchronization mentioned in 2.2 turns a scenario into a shared workspace as soon as two or more users enter the same scenario. Each change made by one user is directly displayed in the application instances of all joined users. To avoid conflicting database updates, objects that are currently undergoing modification are locked against editing by other participants.

In addition to the concurrent editing, an integrated messaging system allows communication during asynchronous work. Users can send messages to all of their co-workers to inform them about recent changes, future work or problems that occurred during the design process. New messages for a user are indicated as soon as the receiving user enters the workspace again, or immediately if the recipient is already logged in. Analogous to all other data of a scenario, messages are also stored in CouchDB. As the model-view-controller framework can be served by any storage that provides a REST API [19], access control and messaging can also be handled by existing project management systems that support REST access, such as Redmine [4].
3. Formal Analysis
The systems of interest here are physical systems controlled by embedded software, like flight control systems, automatic braking systems, and production lines in factory environments. Such systems are called hybrid systems in that they involve both discrete and continuous behavior. A major goal in the design and implementation of hybrid systems is the ability to reliably verify functional properties of the system at hand. The solution presented here is based on the use of formal methods for the semantically unambiguous modeling of systems and their verification. Thereby we try to hide formal modeling and verification from the end users of our system, e.g. system engineers, as far as possible. In our approach, we automatically generate the formal models from the 3D models of systems constructed in the editor from Section 2 and present verification results as animated behaviors in the 3D representation of the system.
3.1. Background
3.1.1. Hybrid Automata
Hybrid automata are a language particularly well suited to formally modeling hybrid systems, in that they allow specifying both the continuous and the discrete behavior parts of the system in one model. Our version of hybrid automata is very similar to the language of rectangular hybrid automata as known from [12], with some extensions to provide an easy-to-use and extensive approach to modeling hybrid systems. Their main contribution is the support of a high degree of modularity: all possible system behaviors are defined by the composition of the different components. Since the language of hybrid automata is a formal one with formal semantics, verification is possible on systems specified in this language. For a detailed introduction to hybrid automata we refer to [15].
3.1.2. Formal Verification
The verification algorithm works on a logical representation of the composed system [17]. The hybrid system, modeled as a set of hybrid automata, is composed and translated into a logical formula on the fly during the verification process. The algorithm tries to prove or refute safety properties described in the ICTL language [17]. Technically, this results in states that have to be, or must not be, reachable in the system. The verification produces a trace witness [15] if the proof of a safety property fails or the proof of a reachability property succeeds. This witness forms the core of our transformation, as it contains a description of how the system has to behave to reach the (un)desired state.
3.1.3. Animations and XFlow
As with ordinary video, an animation in our context means a sequence of images shown in rapid enough succession to evoke the illusion of continuous motion. The keyframing approach to animation works by specifying two still images (the eponymous key frames, i.e. states of the system) at certain points in time; all states, and consequently images, in the time interval between those two points can be recomputed.

In our approach, we encode animations of 3D objects using such a keyframing technique. For each degree of freedom required by the mapping from the formal to the visual model (as will be presented in 3.3), the corresponding object's animation is encoded by a set of keyframes. A single value, the animation key, is then used to access intermediate system states. To recompute an animation from the keyframes, we use XFlow [9]. XFlow is a declarative data-flow language for three-dimensional web content, and takes care of computing, from that single value, the state of possibly highly complex animation sequences. Through keyframing and suitable interpolation operations, any intermediate state of such an animation sequence can be directly addressed and displayed.

In the simplest of cases, we input a time-derived value into these keyframe value containers, which results in an XFlow-encoded animation running in real time. Conversely, by adding a scaling factor, arbitrary velocities of the animated structures can be visually reproduced.
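The core idea can be shown without XFlow itself: a single key selects an interpolated state between two keyframes. The sketch below does plain linear interpolation in application code, whereas XFlow declares equivalent operators in the scene; all values are illustrative.

```typescript
// Minimal keyframing: one key value addresses an intermediate state.
type Pose = { x: number; y: number; z: number };

const lerp = (a: number, b: number, t: number): number => a + (b - a) * t;

function samplePose(k0: Pose, k1: Pose, key: number): Pose {
  const t = Math.min(1, Math.max(0, key));       // clamp the key into [0, 1]
  return { x: lerp(k0.x, k1.x, t), y: lerp(k0.y, k1.y, t), z: lerp(k0.z, k1.z, t) };
}

// Real-time playback: derive the key from elapsed time; the scaling factor
// reproduces an arbitrary velocity.
const speedScale = 0.5;
const key = (performance.now() / 1000) * speedScale % 1;
console.log(samplePose({ x: 0, y: 0, z: 0 }, { x: 230, y: 0, z: 0 }, key));
```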
3.2. From XML3D to Hybrid Automata
Our approach aims at automatically generating the formal model from the 3D model of a system constructed in the editor presented in Section 2. Obviously, for constructing a system in the editor, every component needs to come with a three-dimensional description of its physical characteristics and its looks. We apply the same idea for the formal model: every component is additionally equipped with a description of its possible behavior, given as a hybrid automaton template.
Now, whenever a new component is added to the system in the editor (and thereby instantiated as a 3D object), this functional description is instantiated accordingly and added to the formal model of the system. A mapping that also comes with the component describes how the parameters of the 3D object correlate with the parameters and the initial states of the formal model. It translates position and rotation of the 3D object into values for the template parameters and initial locations of the automaton template.

Figure 2 depicts such a mapping for the PNP robot from our production line example. On the left it shows the 3D model of the robot and part of its XML3D representation, on the right it displays a simplified description of the robot's behavior given as a hybrid automaton template, and in the middle it sketches the mapping between the 3D and the formal model. The PNP robot can move to a provided destination, it can be in picking or placing mode, or it can idle until it gets a new request. Moving to a destination is modeled by the locations MoveLeft and MoveRight, where the robot moves with the velocity defined by the template parameter SPEED. The destination is given through the synchronization label movePPToPos?(targetPos) on the transitions from Idle to MoveLeft and MoveRight, where, depending on the current position (currentPos), the appropriate transition is taken. Picking and placing are modeled by corresponding locations that are entered through transitions from the location Idle. Those transitions are triggered when picking or placing is demanded through the synchronization labels pick?() or place?(), respectively. After the process of picking or placing (modeled through the progress of time by PICK_TIME or PLACE_TIME, respectively), the automaton returns to the location Idle.
Through the mapping we can then compute the initial state of the automaton: if the robot's arm in the 3D model is in its basic position (the arm is not rotated or extended, the picker is retracted), the initial state is Idle. If it is not in its basic position, it is either in Picking or Placing, depending on whether an object is attached to its picker. The initial position INIT_POS of the robot is computed from the x value of the position of the robot's 3D model. The other parameters cannot be deduced from the 3D model alone and are set to default values (given in square brackets).
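The sketch below illustrates such a component mapping along the lines of the PNP robot example. The pose test, the direction of the Picking/Placing decision and the concrete defaults are assumptions made for illustration.

```typescript
// Illustrative mapping from a component's 3D state to an automaton instance.
type Location = "Idle" | "Picking" | "Placing";

interface AutomatonInstance {
  initialLocation: Location;
  params: { SPEED: number; PICK_TIME: number; PLACE_TIME: number; INIT_POS: number };
}

interface Robot3DState {
  position: { x: number };      // 3D position of the robot
  armInBasicPose: boolean;      // arm neither rotated nor extended, picker retracted
  objectAttached: boolean;      // is something attached to the picker?
}

function mapRobotToAutomaton(s: Robot3DState): AutomatonInstance {
  // Assumed orientation of the test: an attached object means Placing.
  const initialLocation: Location =
    s.armInBasicPose ? "Idle" : (s.objectAttached ? "Placing" : "Picking");
  return {
    initialLocation,
    params: {
      SPEED: 200, PICK_TIME: 50, PLACE_TIME: 50, // defaults, cf. the square brackets in Fig. 2
      INIT_POS: s.position.x,                    // deduced from the 3D model
    },
  };
}
```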
Applying this for all objects added to the system allows us to automatically generate a projection of the formal model from the system modeled in the 3D editor onto its visual parts.

Figure 2. The Pick and Place robot: from 3D model to formal model (XML3D scene-graph excerpt, parameter mapping, and hybrid automaton template).

Systems can then be extended by formal descriptions of non-physical
components, like e.g. controllers. Based on complete models of systems, modifications in the 3D model - like moving, adding, and deleting components - are directly and automatically reflected in the formal model through changing initial states and parameter values and adding or deleting automata.
3.3. From Verification Results to 3D Animations
In addition to the automated generation of formal models, we also aim at providing feedback from the verification process in a generally understandable manner. Verification results are visualized as animated behaviors in the originating 3D model. Our approach presented in [15] was based on the interpretation of descriptions of behaviors in the formal model, so-called traces. We directly translated values of variables from the formal model into position and rotation values of the objects in the 3D model. This was successful in reaching our goal, but at the same time very cumbersome: we had to (re-)program all animations that were necessary for displaying all possible behaviors defined through formal traces, without having access to the tool support common for 3D modeling. As a result, in actual applications of our method we often only visualized very simple behaviors. We have simplified our approach by generating the animations for the trace visualization from basic predefined animations that are provided with the 3D models of the components as XFlow animations. Non-basic behaviors of the components and common behaviors of the whole system are then visualized through an appropriate combination of the basic animations. This makes the definition of the mapping from formal traces to 3D animations much simpler and shorter, as the basic animations had to be provided with the actual mapping in our previous approach. Additionally, it is much easier to provide elaborate and complex animations, as we can now use graphical modeling tools like Cinema 4D [2] to pre-build the basic animations.
Our approach is still based on a step-by-step interpretation of the formal trace: for every new picture to be displayed in the trace visualization, the formal state is computed for the current time and then translated into the corresponding state of the 3D model using the mapping. However, we now use the basic animations to effectively modify the 3D model. To do so, we don't actually run the animations; instead, we use the possibility to jump to a particular state in the progress of an animation (by calling the animation with the appropriate key). Basing the visualization of the trace on a walkthrough of the formal trace is necessary: we need to be able to start and stop basic animations, or jump to particular points in them, as in general we cannot foresee when a basic animation has to be started or stopped, or whether it has to be interrupted or restarted. E.g., the animation of a carriage running on a belt has to be interrupted when the carriage reaches a stopper and restarted at the appropriate point when the stopper is released. Applying this technique gives us all the freedom necessary for generically interpreting formal traces of the model at hand.
In the visualization procedure of the trace, all currently active animations are stored in a list. Given the current point in time, for every active animation the appropriate key for the current progress of that animation is computed, and the animation is triggered with that key to produce the modifications in the 3D model for that state. Animations can be always active if they depend on the values of variables (variable activated), or be activated by a change of location (location activated). In the latter case, they either stay active for a predefined amount of time or for as long as the component resides in the location. The kind of an animation, its association with a variable or a location, and the starting and end keys are defined in the mapping.
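The per-frame key computation for the two activation kinds might look as follows. The scale factors echo the mapping of Figure 3 (location-triggered key = time/50, variable-triggered key = currentPos/230); the surrounding data structures are invented.

```typescript
// Per-frame key computation for location- and variable-activated animations.
interface FormalState {
  location: string;
  vars: Record<string, number>;
  timeInLocation: number;                // time since the location was entered
}

type Animation =
  | { kind: "location"; location: string; timeScale: number; apply: (key: number) => void }
  | { kind: "variable"; variable: string; scale: number; apply: (key: number) => void };

function step(state: FormalState, active: Animation[]): void {
  for (const anim of active) {
    if (anim.kind === "location" && state.location === anim.location) {
      anim.apply(state.timeInLocation / anim.timeScale);       // e.g. key = time / 50
    } else if (anim.kind === "variable") {
      const value = state.vars[anim.variable];
      if (value !== undefined) anim.apply(value / anim.scale); // e.g. key = currentPos / 230
    }
  }
}
```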
Figure 3 sketches the translation of a state in the formal trace (left) into a 3D state (right). It again focuses on the PNP robot component of the example system. The animation for picking is location activated: it is activated whenever the PNP robot enters the Picking location and lasts as long as the component remains in this location. The value of the key depends on the time that has passed since the system entered the location, scaled for the appropriate duration. The animation that defines the position of the component is variable activated: the corresponding key is computed from its current position in the formal trace, given through PickAndPlace::currentPos, and scaled appropriately.
3.4. Verification in the Editor
To provide the users of the 3D editor with the results presented above, we extended it with a verification view. This view allows the user to verify functional properties of the formal model associated with the current 3D model, created or modified in the editor. Settings of the verifier - like the algorithm that is applied and whether a trace output for visualization should be produced - can be configured, and the properties to be verified can be selected from a list of predefined or previously proven properties.

The verification interface then provides feedback on whether the verification of the currently selected property was successful and - if one was produced - a textual representation of the formal trace that describes how the system has to behave to reach the (un)desired state. Finally, an animation of this formal trace can be displayed in the 3D representation of the modeled system.
Figure 3. The Pick and Place robot: from formal model to 3D model (formal trace, mapping with location-triggered key = time/50 and variable-triggered key = currentPos/230, and XML3D scene excerpt).
4. Behavior Simulation
In order to simulate interactions between workers and the production assets, the editor allows positioning animated avatars in the scene. The behavior of the avatars is controlled by an agent platform which is provided as a service. Agents in our context are the abstract entities representing the avatars in the scene. Agent models containing the behavior can be modeled in advance and assigned to virtual characters.

Since the objects in the scene can be moved, they constitute possible obstacles for the avatars. Therefore, the framework associates a navigation service with the agent platform which generates a navigation mesh on demand and prevents collisions of the avatars with each other and with objects in the scene.
5. Related Work
There are several approaches that deal with collaborative 3D editing from the web browser. Pappas et al. introduce DiCoDev, a collaborative tool for project and process design evaluation [18]. The Collaborative Prototype Designer by Smparounis, Alexopoulos and Xanthakis combines collaborative design and review, also using 3D graphics in the browser, and adds a decision support module for product evaluation [20]. Menck et al. proposed a collaborative factory planning tool based on VR technology [16]. All these approaches include user management with access control and communication layers. The mentioned approaches make use of Java applets, plugins or third-party libraries for interactive 3D visualization in the browser. By using XML3D, our approach, in contrast, brings 3D graphics directly to any browser that supports WebGL, without any additional applets. The 3D editor itself and the objects displayed in it are directly part of the web page and the underlying DOM tree, which drastically simplifies information exchange between the 3D editor and the remaining 2D part of the web site.

There are quite a few tools for modeling and simulating hybrid systems that also include the possibility to visualize simulations in 2D or 3D [5]. However, their focus is on simulation rather than on verification. On the formal verification side, there are several verification tools for hybrid systems, like HyTech [13], PHAVer [10], SpaceEx [11] and d/dt [8], all of which are based on languages that have formal semantics and provide verification. However, they neither provide the generation of the formal model and the visualization of proof results in our way, nor are they integrated into a collaborative modeling tool.
6. Conclusion & Further Work
We have presented a browser-based 3D editor for the modeling of hybrid systems with an integrated verifier for analyzing the functional behavior of the modeled system. All modifications of the model are performed collaboratively and seamlessly synchronized in a virtual environment. Simulation of human interaction is possible through the integrated agent platform. A key feature of our approach is the hiding of the formal model through its automatic generation from the 3D model of the system and the visualization of verification results as animations of the 3D model.

Further steps in our work will concentrate on an even higher degree of automation in the generation of the formal model and the visualization of proof results as animations in the 3D model. Our approach would also benefit from a natural-language-like
property language for specifying new desired properties of the system. With respect to collaboration, we plan an integration of the Redmine project management framework and thereby a direct coupling of 3D models with other documents for the system under consideration.
References
[1] Backbone.js. http://www.backbonejs.org, 2013.
[2] Cinema 4D. www.maxon.net/products/cinema-4d-studio/who-should-use-it.html, 2013.
[3] Stefan Nesbigall, Stefan Warwas, Patrick Kapahnke, René Schubotz, Matthias Klusch, Klaus Fischer, and Philipp Slusallek. Intelligent Agents for Semantic Simulated Realities - The ISReal Platform. ICAART (2) 2010: 72-79.
[4] Redmine. www.redmine.org, 2013.
[5] Simulink: Simulation and model-based design. www.mathworks.de/products/simulink/, 2013.
[6] SmartFactoryKL. www.smartfactory-kl.de, 2013.
[7] J. Chris Anderson, Jan Lehnardt, and Noah Slater. CouchDB: The Definitive Guide - Time to Relax. O'Reilly Media, Inc., 1st edition, 2010.
[8] Eugene Asarin, Thao Dang, and Oded Maler. d/dt: A tool for reachability analysis of continuous and hybrid systems. In 5th IFAC Symposium on Nonlinear Control Systems (NOLCOS), 2001.
[9] Felix Klein, Kristian Sons, Dmitri Rubinstein, Sergiy Byelozyorov, Stefan John, and Philipp Slusallek. XFlow - declarative data processing for the web. In Proceedings of the 17th International Conference on Web 3D Technology, Los Angeles, California, 2012.
[10] Goran Frehse. PHAVer: Algorithmic verification of hybrid systems past HyTech. In Manfred Morari and Lothar Thiele, editors, Hybrid Systems: Computation and Control, 8th International Workshop, HSCC, volume 3414 of Lecture Notes in Computer Science, pages 258-273. Springer, 2005.
[11] Goran Frehse, Colas Le Guernic, Alexandre Donzé, Scott Cotton, Rajarshi Ray, Olivier Lebeltel, Rodolfo Ripado, Antoine Girard, and Thao Dang. SpaceEx: Scalable verification of hybrid systems. In Proceedings of the International Conference on Computer Aided Verification, 2011.
[12] T. A. Henzinger. The theory of hybrid automata. In Proceedings of the 11th Annual IEEE Symposium on Logic in Computer Science, LICS '96, pages 278-292, Washington, DC, USA, 1996. IEEE Computer Society.
[13] Thomas A. Henzinger, Pei-Hsin Ho, and Howard Wong-Toi. HyTech: a model checker for hybrid systems. International Journal on Software Tools for Technology Transfer (STTT), 1:110-122, 1997.
[14] Mohsen Kahani and H. W. Peter Beadle. Collaboration in persistent virtual reality multiuser interfaces: Theory and implementation.
[15] Christopher Krauß and Andreas Nonnengart. Formal analysis meets 3D-visualization. In Josip Stjepandić, Georg Rock, and Cees Bil, editors, Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, pages 145-156. Springer London, 2013.
[16] N. Menck, X. Yang, C. Weidig, P. Winkes, C. Lauer, H. Hagen, B. Hamann, and J.C. Aurich. Collaborative factory planning in virtual reality. In Procedia CIRP, volume 3, pages 317-322, 2012.
[17] Andreas Nonnengart. A deductive model checking approach for hybrid systems. Research Report MPI-I-1999-2-006, Max-Planck-Institut für Informatik, Stuhlsatzenhausweg 85, 66123 Saarbrücken, Germany, November 1999.
[18] M. Pappas, V. Karabatsou, D. Mavrikios, and G. Chryssolouris. Development of a web-based collaboration platform for manufacturing product and process design evaluation using virtual reality techniques. International Journal of Computer Integrated Manufacturing, 19(8): 805-814, 2006.
[19] Alex Rodriguez. RESTful web services: The basics. IBM developerWorks, 2008. www.ibm.com/developerworks/webservices/library/ws-restful/.
[20] Konstantinos Smparounis, Kosmas Alexopoulos, and Vagelis Xanthakis. A web-based platform for collaborative product design and evaluation. In 15th International Conference on Concurrent Enterprising (ICE), 2009.
[21] Kristian Sons, Felix Klein, Dmitri Rubinstein, Sergiy Byelozyorov, and Philipp Slusallek. XML3D: interactive 3D graphics for the web. In Web3D '10: Proceedings of the 15th International Conference on Web 3D Technology, pages 175-184, New York, NY, USA, 2010. ACM.
Development of a Parametric Form Generation Procedure for Customer-oriented Product Design

Ming-Chyuan Lin a,1, Yi-Hsien Lin b, Ming-Shi Chen c and Jenn-Yang Lin d

a Department of Creative Product Design, College of Arts, Nanhua University
b Department of Educational Administration, College of Education and Human Services, Texas A&M University-Commerce
c Department of Product Design, TransWorld University
d Department of Creative Product Design and Management, College of Commerce & Management, Far East University
Abstract. The involvement of consumer requirements in the early stage of product development has become an important issue in product design. The designer needs to analyze customer requirements correctly and immediately and make decisions in the process of recommending design alternatives. However, it is usually difficult for the designer to grasp customer requirements. Fortunately, the enhancement of computer software and the upgraded efficacy of computer hardware allow an embodied representation of design alternatives to link customers with designers. Therefore, the objective of this research is to develop a parametric product design system that provides designers with an interactive interface to consider customer requirements in the early stage of product development. The design of ear phones is used as a case to explore the applicability of the parametric product design system. The parametric design function of the CATIA software is also used to help construct the design appearance of the generated ear phones. It is expected that parametric design incorporated with a data mining system on the web will enhance product design efficiency in grasping the key product design factors and parameters at the initial stage. Designers can not only generate products quickly in the process of product development, but also obtain appropriate product forms that match the demands and preferences of consumers.
Keywords. Customer-Oriented Product Design, Parametric Design, Data Mining,
Decision Making

1 Ming-Chyuan Lin, Ph.D., Professor, Department of Creative Product Design, College of Arts, Nanhua University, Chiayi, Taiwan 622. Tel: 886-5-2721001 Ext 56435, Fax: 886-6-2522609; e-mail: minglin@mail.nhu.edu.tw
1. Introduction
The progress of technology and competition in the global market have shortened the product life cycle. Owing to the significant improvement of digital technology and computer performance, product designers can use computers to generate solid models of design alternatives and rapidly revise and detail the generated design alternatives. The digital data can even be transferred directly to the engineering design and manufacturing departments. As such, the introduction of computer-aided design and the application of knowledge bases have made product development more efficient. With this application of computer information technology in the design process, the generation of product design alternatives is no longer a bottleneck for designers [1]. However, involving customer requirements in the first stage of the design process and making quick analyses and decisions have become an important issue for designers. To face this trend, the designer needs to improve the product development process and enhance efficient collaborative design. In collaborative product design, customer requirements are involved to ensure that the developed products are close to the expectations of the targeted market. Research has shown that the cognition of designers and customers still differs significantly [2]. This might be because customers cannot explicitly depict the expected product form and requirements in a representational medium, which makes it difficult for designers to catch the voice of the customer. To help reduce the sales losses of newly developed products incurred by communication obstacles between designers and customers, an efficient design assistance system is needed in the design process. Currently, the enhancement of computer software functions and the improvement of computer hardware efficiency can provide designers with a friendly interface that properly links customer requirements with designer ideas [3]. The objective of the proposed research is to construct a parametric product design system that provides customers and designers with a communicative interface to help present different customer requirements and the corresponding graphic product representations. In order to explain how the proposed approach is developed, the design of ear phones will be used as an example to illustrate the steps of the approach.
2. The Development Procedure
As mentioned before, an interactive interface is designed in the parametric product design system to allow designers to involve specific customer requirements in the design process. To help develop the system, the proposed approach also considers the data connections between customer requirements and customer attributes. A relational database is developed, and the technique of data mining is then used to explore useful customer preferences. The identified customer requirements then proceed to product characteristics analysis, evaluation and design.

In the parametric product design development procedure, the designer conducts a systematic analysis of the product structure to identify and set up the key product design parameters for the generation of a graphic representation. The parametric data are transformed into the construction of the component characteristics of a product form. Each generated component is then integrated into a complete product form. The developed design database can be stored for further analysis. Data mining techniques are also applied to product classification [4] and product design assessment to assist designers in extracting the customer-oriented product design for a specific customer category [5, 6]. The approach of this research includes: (1) identification of product parameters for customer requirements, (2) decomposition of the identified product parameters into several sections, (3) construction of product components and complete forms, (4) definition of the range of numeric data for each product component and (5) generation of design alternatives for a specific set of customer requirements. The research uses ear phone design as a case to help explain the development procedure. Note that a preliminary exploration of parametric product design is conducted, as illustrated in Figure 1. It helps designers learn how to use simple geometric forms, positions of control points, proportional adjustment, numeric change, and even form transformations such as reduction-enlargement, revolution, translation, gradual change, transition and twisting in the generation of a variety of product forms [7-9].

Figure 1. General concept of parametric product design: (a) an ashtray design; (b) an iPod design.
3. Implementation of the Parametric Ear Phone Design
Since customer requirements and product characteristics are the two major parts of product design development, the research focuses on a parametric adjustment of product form in response to a specific set of customer requirements [5, 10, 11]. A six-pronged procedure for the development of the parametric product design system is employed. In order to explain how the proposed approach is developed, the design of an ear phone is used as an example to illustrate the steps of the approach.
3.1. Identification of Product Parameters for Customer Requirements
After a collection and classification of 65 ear phone samples, the research targeted the design of one-piece mould ear phones, as illustrated in Figure 2, in which three design parameters are identified [10]: (1) head, (2) neck and (3) body.

Figure 2. Characteristics of a one-piece mould ear phone.
3.2. Decomposition of Identified Product Parameters into Several Sections
In developing the parametric product design system, the research uses the computer software CATIA to construct ear phone images for graphic representation. Based on observation of the ear phone appearance shown in Figure 2, the research decomposed the ear phone design parameters into five composite sections [12]. The design parameter "Head" consists of the first, second and third sections, "Neck" consists of the fourth section, and "Body" consists of the fifth section. All sectional graphics are constructed with circles. Note that the fourth section, the "Neck", defines two control points (above and under the neck), and the fifth section, the "Body", also defines two control points (above and under the body). In the construction of a 3D one-piece mould ear phone, the research first connects the end points of the first, second and third sections, then connects the corresponding control points of the fourth and fifth sections [12]. A conceptual construction of a one-piece mould ear phone graphic representation is shown in Figure 3.
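As a plain data sketch (not the CATIA API), the five-section construction can be captured as follows; every coordinate and radius value is illustrative.

```typescript
// Data sketch of the five-section one-piece mould ear phone construction.
interface Point { x: number; y: number; z: number; }
interface Section { center: Point; radius: number; }

interface EarPhoneForm {
  head: [Section, Section, Section];                       // sections 1-3, connected at end points
  neck: { section: Section; above: Point; under: Point };  // section 4 with two control points
  body: { section: Section; above: Point; under: Point };  // section 5 with two control points
}

const sample: EarPhoneForm = {
  head: [
    { center: { x: 0, y: 0, z: 0 }, radius: 6 },
    { center: { x: 2, y: 1, z: 0 }, radius: 7 },
    { center: { x: 4, y: 2, z: 0 }, radius: 6 },
  ],
  neck: { section: { center: { x: 5, y: -4, z: 0 }, radius: 3 },
          above: { x: 5, y: -2, z: 0 }, under: { x: 5, y: -6, z: 0 } },
  body: { section: { center: { x: 5, y: -10, z: 0 }, radius: 4 },
          above: { x: 5, y: -7, z: 0 }, under: { x: 5, y: -13, z: 0 } },
};
```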
3.3. Construction of Product Components and Complete Forms
To apply the numeric data in the CATIA software to the construction of product components and the adjustment of control points to fit a complete product form, the research identifies 18 graphic parameters, as illustrated in Figure 4. The advantages of using parametric graphics include: (1) product sample forms are constructed with numeric data, avoiding the subjective deviation of different designers in constructing graphics, and (2) designers can control certain design factors themselves based on the classification rules of the samples.

Figure 3. Decomposition of one-piece mould ear phone characteristics.

Figure 4. Decomposition of one-piece mould ear phone characteristics.
3.4. Definition of the Range of Numeric Data for Each Product Component
In constructing the one-piece mould ear phone, the proposed approach uses the concept of line connections between circular contours. To allow the developed samples to cover the different ear phone characteristics demanded by customer requirements, and to assist designers in expeditiously generating recommended design alternatives, the research defines variations for the graphic parameters. It is noted that a single variation such as radius, length, width or height is not enough to represent the characteristics of a product form. The research therefore transforms the identified graphic parameters into characteristics of a product form, which are then transformed into numeric definitions. Figure 5 illustrates the conceptual transformation of graphic parameters into characteristics of a product form and the corresponding numeric definitions. Based on the conceptual transformation shown in Figure 5, the ranges of the numeric definitions can then be determined, as illustrated in Table 1. Note that the numeric definition is based on dimensional measurements of different types of marketed one-piece mould ear phones.

Figure 5. Conceptual transformations of graphic parameters into numeric definition.

Table 1. Determination of ranges of numeric definition for a one-piece mould ear phone.
3.5. Generation of Design Alternatives for a Specific Set of Customer Requirements
According to the determined ranges of the numeric definitions, the research planned the generation of a number of experimental design alternatives. Figure 6 shows the operational interface of the parametric product design system, in which an experimental design alternative is generated for a specific set of numeric data [13]. The research generated 40 experimental design alternatives that will be used to explore the close linkage between customer requirements and the recommended design alternatives. Figure 7 illustrates the generation of the 40 experimental design alternatives.
Figure 6. Operational interface of the parametric product design system (the interface lists the sectional parameters: distances between section centers, section-center coordinates, section radii and control-point coordinates).
Figure 7. Generation of 40 experimental one-piece mould ear phones.
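One plausible way to realize this generation step is to sample every graphic parameter uniformly within its defined range; the ranges below are invented stand-ins for the actual values of Table 1.

```typescript
// Sample each graphic parameter within its range to produce design alternatives.
type Ranges = Record<string, [number, number]>;

const ranges: Ranges = {
  radiusSection1: [5, 8],        // invented; the real ranges come from Table 1
  radiusSection4: [2, 4],
  neckLength: [8, 14],
  // ... the remaining graphic parameters would be listed here
};

function sampleAlternative(r: Ranges): Record<string, number> {
  const alt: Record<string, number> = {};
  for (const [name, [lo, hi]] of Object.entries(r)) {
    alt[name] = lo + Math.random() * (hi - lo);   // uniform draw within the range
  }
  return alt;
}

const alternatives = Array.from({ length: 40 }, () => sampleAlternative(ranges));
console.log(alternatives[0]);
```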
4. Conclusions
Product design is an activity that relies on the experience, thought and knowledge of the designer. Each designer might have his or her own subjective opinion, but integrating the knowledge, experience and customer requirements of designers of similar products will significantly improve the quality and reliability of product design. When a product designer or design team develops a product design using classical methods, the range of design alternatives that can be developed is limited by the creativity of the designer or team members, and the processes used in evaluating alternative designs may require a considerable amount of time. The research described here explores the possible connection of the parametric product design system with customer requirements. An evaluation of customers' preferences and an analysis of market trends via the Internet will be conducted in future development. The evaluation will be conducted with an Internet survey questionnaire. The semantic differential technique from Kansei engineering will be applied in the design of the questionnaire. In the questionnaire, a set of adjectives incorporated with the 40 experimental ear phone design alternatives is arranged to investigate customers' preferences. The stored data will be analyzed with the statistical software SPSS. The results of the questionnaire will assist designers to identify specific groups of customers and the particular type of one piece mould ear phone each prefers. Knowing the requirements of the different customer groups, the parametric product design system can quickly search for the most suitable interface and product type, as shown in Figure 6. The customer can then generate a preferable product form. The development of the parametric product design system has shown the potential benefits that can be achieved with the involvement of customer requirements.
5. Acknowledgement
The authors are grateful to the National Science Council, Taiwan for supporting this
research under the grant number NSC97-2221-E-343-009-MY3.
References
[1] Huang GQ, Huang J and Mak KL (2000) Early supplier involvement in new product development on the
internet: implementation perspectives. Concurrent Engineering: Research and Applications, 8(1): 40-49
[2] Du XH, Jiao JX and Tseng MM (2006) Understanding customer satisfaction in product customization.
Advanced Manufacturing Technology, 31(3-4): 396-406
[3] Yan W, Chen CH and Khoo LP (2007) Identification of different demographical customer preferences
for product conceptualization. Engineering Design, 18(1): 39-54
[4] Witten IH and Frank E (2000) Data Mining, Academic Press, San Diego
[5] Chen CH, Khoo LP and Yan W (2005) PDCS: a product definition and customization system for product
concept development. Expert Systems with Applications, 28(3): 591-602
[6] Gologlu C and Mizrak C (2011) An integrated fuzzy logic approach to customer-oriented product design.
Journal of Engineering Design, 22(2): 113-127
[7] Chen SE and Parent RE (1989) Shape averaging and its applications to industrial design. Computer
Graphics and Applications, IEEE, 9(1): 47-54
[8] Gologlu C and Mizrak C (2010) Customer driven product determination with fuzzy logic and taguchi
approaches. Journal of The Faculty of Engineering and Architecture of Gazi University, 25(1): 9-19
[9] Liao SH, Chen YJ and Deng MY (2010) Mining customer knowledge for tourism new product
development and customer relationship management. Expert Systems with Applications, 37: 4212-4223
[10] Zwicky F (1967) The morphological approach to discovery, invention, research and construction, new
method of thought and procedure. Symposium on Methodologies, Pasadena, 316-317
[11] Liu QS and Xi JT (2011) Case-based parametric design system for test turntable. Expert Systems with
Applications, 38(6): 6508-6516
[12] Lin MC and Chen LA (2005) An integrated prototyping and knowledge representation procedure for
customer-oriented product design, Product Development, 2(4): 353-370
[13] Zhou S, Chin KS and Prasad KDV (2003) Internet based intensive product design platform for product
design. Knowledge-Based System, 16: 7-15
The Design of Production Strategy based
on Risk Analysis using Process Simulation
Taiga Mitsuyuki a,1, Hiroyuki Yamato a, Kazuo Hiekata a and Bryan Moser b
a Graduate School of Frontier Sciences, The University of Tokyo, Chiba, Japan
b Global Project Design, Boston, USA
Abstract. This paper proposes a methodology to design production strategy based
on risk analysis using an integrated organization and process simulation. A process
simulator is developed to evaluate the risk emerging from the integration of an
organization model, a production model, feasibility constraints of the factory, and
teams' production strategies. The process simulation iterates with the following
procedure: (1) extract the class of available workers, facilities, and activities in a
time horizon, (2) prioritize the work using parameters of production strategy, (3)
allocate workers and facilities to activities under the constraints of simulation, (4)
determine if each worker can work or not based on an uncertainty ratio, (5) renew
the condition of all activities, workers, facilities and time. Better production
strategies are searched using a random key based genetic algorithm which
minimizes the average total labor cost. The designer selects a production strategy
considering average total labor cost and standard deviation of total labor cost. The
proposed methodology is applied to case studies of the block assembly process in a
shipbuilding company. Results show that the proposed methodology can evaluate
organizational performance based on total labor cost including risk. Furthermore,
the case studies confirm that an adequate production strategy can be selected given
the constraints of a specific factory.
Keywords. Project design, risk analysis, production strategy, genetic algorithm
1. Introduction
In the aftermath of the 2011 Tohoku earthquake and tsunami, the Japanese government
has called on all factories in Japan to reduce electricity consumption by 15%
compared to peak demand during the previous summer season. To achieve this objective,
reconsideration and design of adequate production strategies are important for all
factories in Japan.
For evaluating an organization structure and allocation strategy, many process
simulations have been developed. Process models and simulation can determine
bottlenecks when numerous objective functions are used.
(VDT) [1-3] addresses the design coordination of work by considering complicated
communications between each designer. Recently, Suzuki et al [4] developed the
Process Management Tool (PMT) which simulates enterprise business processes that
consider organization hierarchy and communication between each member. A
methodology for evaluating and optimizing process architecture is the Design Structure
Matrix (DSM) [5], which optimizes processes based on sequential and technical

1 Corresponding Author.
relationships among system features or tasks. Eppinger and his colleagues [6] have
developed various DSM methodologies. There has also been research on creating a
project schedule with uncertain disturbances and behaviors [7-8]. Moser [9] has
developed a socio-technical approach leveraging activity models, simulation, and
project design, demonstrating usefulness on many case studies in the real world.
In a previous paper (Mitsuyuki et al [10]), we proposed a method to evaluate
organizational performance by calculating optimal production strategy, but did not yet
consider the risk driven by organization uncertainty. For application in the field, it is
important to evaluate a production strategy considering the designer's tolerance for risk.
This paper proposes a methodology to design a production strategy which balances
total labor cost with risk.
2. Proposed Methodology
A methodology for design of production strategy based on risk analysis including
variations in organization performance is shown in Figure 1. We first create an
enterprise model representing the status of the factory to be analyzed. The enterprise
model is composed of an organization model, production model, and constraints. A
production strategy is described by setting each parameter. Next, a process simulation
is repeated using the Monte Carlo method on each production strategy. In this paper,
uncertainty is introduced on the performance variables of each worker and each facility
in the factory, with trouble occurring with some probability. The process simulator creates
multiple work plans for each production strategy. The simulation results of the work
plans generate average total labor cost and standard deviation of total labor cost. Better
production strategies are searched by using a random key based genetic algorithm
which minimizes these two functions. By repeating these steps, the evaluation results
of each production strategy are accumulated. The designer can select the final
production strategy and then adjust the enterprise model by using these results.

Figure 1. Overview of proposed methodology
2.1. Enterprise model
An enterprise model represents the capability or status of the factory to be analyzed. In
this research, the enterprise model is composed of an organization model, production
model, and constraints in the factory.
The organization model represents workers and facilities, defined by their amount
of skill, cost and electricity consumption. Type 1 indicates a worker and type -1
indicates a facility. In this study, there are two types of cost: constant labor cost and
variable labor cost. By using the concepts of constant labor cost and variable labor cost,
the effects of a regular or temporary employee and depreciation cost can be determined.
In addition, the value of electricity consumption per hour is defined in each facility.
Furthermore, each worker and facility is defined with a parameter of efficiency for each
skill. Table 1 is an example of the organization model. Worker A, who is a regular
employee, can use facility D, but 10kW is consumed per hour.
The production model is a class of workflows. A workflow is composed of
activities and their attributes, including name, basic period needed to finish, relation
with the previous activity, and deadline. The activity name is related to the skill set in
the organization model.
Constraints are composed of an electricity constraint and an area constraint. The
electricity constraint defines the maximum electricity consumption per unit of time that
can be used. The area constraint expresses the available stockyard area for working. In
other words, the area constraint indicates the maximum number of workflows that can be
performed per unit time.
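As a minimal sketch, assuming illustrative names and limits (the actual caps belong to the enterprise model of a specific factory), an allocation can be tested against both constraints as follows:

def allocation_feasible(active_facilities, electricity_kw, running_workflows,
                        max_kw=200, max_workflows=2):
    # True if the allocation respects the electricity and area constraints.
    total_kw = sum(electricity_kw[f] for f in active_facilities)
    return total_kw <= max_kw and running_workflows <= max_workflows

print(allocation_feasible({"F1", "F9"}, {"F1": 10, "F9": 90}, 2))  # True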

Table 1. Organization model in table form
Name | Type | Constant labor cost (JPY/hr) | Variable labor cost (JPY/hr) | Electricity (kW/hr) | Efficiency | Skill 1 | Skill 2 | Skill 3
Worker A 1 2500 0 - 0.90 1 1 1
Worker B 1 2000 0 - 0.90 1 1 0
Worker C 1 0 1500 - 0.85 0 1 0
Facility D -1 700 500 10 0.90 1 0 0
Facility E -1 1200 700 40 0.95 0 1 0
Facility F -1 2500 800 100 0.95 0 0 1

2.2. Production strategy
The production strategy consists of the weights of nine dispatching rules [11]. A
dispatching rule is a rule that gives priority to all jobs that are waiting for processing by
workers or facilities. The dispatching rules inspect jobs that are waiting and select the
job with the highest priority. Table 2 shows a detailed explanation of each parameter
for the dispatching rules of the production strategy.







Table 2. Production strategy parameters

Name | Detail | Parameter
EDD | Workflow tasks with an earlier deadline are performed preferentially. | w_EDD
FIFO | Tasks that are available and have an earlier deadline are performed preferentially. | w_FIFO
SPT | Tasks with a short completion time are performed preferentially. | w_SPT
SST | Tasks for which the sum of the earliest and latest start times is small are performed preferentially. | w_SST
TSLACK | Tasks on a critical path are performed preferentially. | w_TSLACK
SPN | Tasks with a lower number of workers having the required skills are performed preferentially. | w_SPN
SSP | Workers and facilities that have available tasks requiring many skills are allocated preferentially. | w_SSP
VCP | Variable labor cost is lessened as much as possible. | w_VCP
ECP | Electric power consumption is lessened as much as possible. | w_ECP

2.3. Process simulation
From the organization model, production model, constraints, and production strategy, a
work plan is calculated by process simulation. A simulator developed for this research
allocates each task to each worker and facility by using PERT (Program Evaluation and
Review Technique) and a leveling method. Allocating is completed by the following
processes:
a) Create a class of the free workers and facilities that can do a task at time t.
b) Create a class of the free activities that are not yet completed and can be done by the workers and facilities in (a).
c) A prioritized order of activities is made by arranging the activities in ascending order of the score calculated in equation (1).
d) Each worker and facility is given the score calculated in equation (2).
e) By priority, each task is allocated to workers and facilities from the class in (a) that can do the task, choosing those with the lower scores from (d).
f) Judge whether each worker and facility has trouble or not.
g) The duration value of an allocated activity whose worker and facility both have no trouble in (e) is decreased by 1.
h) Renew the condition of each activity, worker and facility.
i) t = t + 1 and go back to (a).
f_j(t) = w_EDD·EDD_j(t) + w_FIFO·FIFO_j(t) + w_SPT·SPT_j(t) + w_SST·SST_j(t) + w_TSLACK·TSLACK_j(t) + w_SPN·SPN_j(t)    (1)

g_k(t) = w_SSP·SSP_k(t) + w_VCP·VCP_k(t) + w_ECP·ECP_k(t)    (2)

The notation j is the activity number and the notation k is the worker or facility number. At each time step, the parameters of all dispatching rules in the production strategy are calculated. Equations (3)-(11) express the parameter of each dispatching rule. Limit_j is the deadline of the parent workflow. h represents each of the input activities for activity j and x_j(t) equals 1 when activity j is finished; in other words, equation (4) expresses the latest finish time of all previous activities of activity j. ES_j is the earliest start time of activity j and LS_j is the latest start time of activity j. d_j expresses the duration of activity j. RA(t) expresses the activities available, which can be allocated to workers and facilities at time t. r_{j,k} equals 1 when worker or facility k has the skill required by activity j. VC_k expresses the variable cost of worker or facility k. EC_k expresses the electricity consumption of facility k.

EDD_j(t) = Limit_j    (3)
FIFO_j(t) = max{ t' | x_h(t') = 1 }    (4)
SPT_j(t) = d_j    (5)
SST_j(t) = ES_j + LS_j    (6)
TSLACK_j(t) = LS_j - ES_j    (7)
SPN_j(t) = Σ_k r_{j,k}    (8)
SSP_k(t) = Σ_{j∈RA(t)} r_{j,k}    (9)
VCP_k(t) = VC_k    (10)
ECP_k(t) = EC_k    (11)
The process simulator includes random variation based on the behavior choices in steps
(f) and (g). At each time step, the simulator judges whether trouble occurs or not for
each worker and facility by using the efficiency parameter in the organization model.
When a worker works on an activity using a facility in a unit of time, the duration value
of the activity is decreased only if there is no trouble for this worker and this facility.
After this step, the condition of each activity, worker and facility is renewed.
By repeating these steps until all activities are finished, one work plan is created.
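The following minimal Python sketch illustrates steps (a)-(i) under heavy simplification: the two scoring functions are injected stand-ins for equations (1) and (2), each worker or facility handles one activity per time step, and trouble occurs with probability 1 - efficiency as in steps (f)-(g). All names are illustrative, not taken from the simulator itself:

import random

class Activity:
    def __init__(self, name, duration, preds):
        self.name, self.remaining, self.preds = name, duration, list(preds)

    def ready(self, done):
        return self.remaining > 0 and all(p in done for p in self.preds)

class Resource:  # a worker or a facility
    def __init__(self, name, skills, efficiency):
        self.name, self.skills, self.efficiency = name, skills, efficiency

def simulate(activities, resources, f_score, g_score, rng):
    # Run one Monte Carlo work plan and return its makespan in time steps.
    done, t = set(), 0
    while len(done) < len(activities):
        ready = [a for a in activities if a.ready(done)]     # steps (a)-(b)
        free = list(resources)
        for act in sorted(ready, key=lambda a: f_score(a, t)):   # step (c)
            capable = [r for r in free if act.name in r.skills]
            if not capable:
                continue
            res = min(capable, key=lambda r: g_score(r, t))      # steps (d)-(e)
            free.remove(res)
            if rng.random() < res.efficiency:    # steps (f)-(g): no trouble
                act.remaining -= 1
            if act.remaining == 0:
                done.add(act.name)
        t += 1                                   # steps (h)-(i)
    return t

acts = [Activity("cut", 3, []), Activity("weld", 2, ["cut"])]
res = [Resource("W1", {"cut", "weld"}, 0.9)]
print(simulate(acts, res, lambda a, t: a.remaining, lambda r, t: 0.0,
               random.Random(0)))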
2.4. Risk analysis
A process simulator creates many kinds of work plans for one production strategy.
These work plans are analyzed for average total labor cost and standard deviation of
total labor cost.
The results of this analysis are plotted on a graph with the average total labor cost on
the x-axis and the standard deviation of total labor cost on the y-axis. Pareto efficient
strategies can be found from this graph. The designer can select the best strategy among
the Pareto efficient strategies after analyzing many kinds of production strategies.
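A strategy is Pareto efficient here when no other strategy is at least as good on both the average and the standard deviation of total labor cost and strictly better on one of them. A minimal sketch of this filter, with illustrative sample values:

def pareto_front(points):
    # Keep (mean cost, std of cost) pairs not dominated on both objectives.
    front = []
    for i, (m_i, s_i) in enumerate(points):
        dominated = any(
            m_j <= m_i and s_j <= s_i and (m_j < m_i or s_j < s_i)
            for j, (m_j, s_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((m_i, s_i))
    return front

strategies = [(5.92, 0.128), (5.84, 0.131), (5.86, 0.114), (5.90, 0.140)]
print(pareto_front(strategies))  # only the non-dominated strategies remain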
2.5. Design of production strategy
Based on this methodology of calculating the work plan by process simulation, the
applicable parameters of the production strategy can be searched by using a random-
key based genetic algorithm [12]. Genes are expressed by the parameters of the
production strategy, which express priority by a real number from 0 to 1.
The objective function is expressed by equation (12). In this study, the objective is
to minimize the average total labor cost over the N work plans simulated for one
production strategy with weight vector w:

minimize  C_avg(w) = (1/N) Σ_{i=1}^{N} C_i(w)    (12)

where C_i(w) is the total labor cost of the i-th simulated work plan.
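A minimal random-key GA sketch for this search; it assumes an evaluate(weights) function that runs the Monte Carlo process simulation and returns the average total labor cost, and its selection, crossover and mutation operators are simplified stand-ins for those of Bean's random-key GA [12]:

import random

RULES = ["EDD", "FIFO", "SPT", "SST", "TSLACK", "SPN", "SSP", "VCP", "ECP"]

def evolve(evaluate, pop_size=10, generations=500, cx=0.7, mut=0.2, seed=0):
    rng = random.Random(seed)
    # each gene is a random key in [0, 1], one per dispatching rule
    pop = [[rng.random() for _ in RULES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate)        # lower average total labor cost is better
        nxt = pop[:2]                 # elitism: carry the two best forward
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[: pop_size // 2], 2)  # parents from top half
            child = [x if rng.random() < cx else y for x, y in zip(a, b)]
            if rng.random() < mut:    # mutate one randomly chosen key
                child[rng.randrange(len(RULES))] = rng.random()
            nxt.append(child)
        pop = nxt
    return min(pop, key=evaluate)

# Toy objective standing in for the simulator: optimum at all weights = 0.5.
best = evolve(lambda w: sum((x - 0.5) ** 2 for x in w), generations=50)
print(dict(zip(RULES, (round(x, 2) for x in best))))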
2.6. Comparative analysis of production strategies
A comparative analysis between production strategies is proposed. Firstly, production
strategies are compared by the logarithmic value of each parameter. After this step, the
logarithmic values of the parameters are normalized so that their average is 0 and their
variance is 1. With this analysis, quantitative information about which dispatching rules
are emphasized or neglected can be obtained.
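A minimal sketch of this normalization, assuming all rule weights are strictly positive so that the logarithm is defined:

import math

def normalized_log_profile(weights):
    # Log-transform the rule weights, then z-score them (mean 0, variance 1).
    logs = [math.log(w) for w in weights]
    mean = sum(logs) / len(logs)
    var = sum((x - mean) ** 2 for x in logs) / len(logs)
    return [(x - mean) / math.sqrt(var) for x in logs]

# A strategy whose FIFO weight dominates stands out as a large positive value.
profile = normalized_log_profile([0.1, 0.9, 0.2, 0.1, 0.1, 0.3, 0.2, 0.1, 0.1])
print([round(v, 2) for v in profile])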
3. Case studies
The proposed methodology was applied to sample scenarios with different case studies.
In this paper, this methodology was applied to case studies of a block assembly in a
fabrication shop of a shipbuilding company. Figure 2 shows the workflow for
producing one block. The duration of each activity was 1 hour. Tact time to produce
one fabricated block was 25 hours. The organization model of these case studies is
expressed in Table 3. PL, LO, TR, and PI indicate PLATE, LONGITUDINAL,
TRANSVERSE, and PIPE, respectively. In these case studies, 10 blocks were to be
fabricated with an outer and inner panel.
In this paper, two case studies are discussed. In case 1, the production strategy was
designed on the assumption that the factory has no constraint on electricity
consumption and that two workflows can be performed simultaneously. In case 2, a
restriction reducing peak electricity consumption by 15% of the maximum electric
consumption of case 1 was added to the situation of case 1. In these case studies, the
parameters of the genetic algorithm are as follows: population size is 10, last
generation number is 500, crossover probability is 0.7 and mutation probability is 0.2.
Figure 2. Workflow of producing one fabricated block

Table 3. Organization model
Name | Type | Constant labor cost [JPY/hour] | Variable labor cost [JPY/hour] | Electricity consumption [kW/hour] | Efficiency | skill flags for: MATERIAL (PL), FITTING (PL), WELDING (PL), PATCHING (LO), MAKING (LO), FITTING (LO), WELDING, EXPORTING 1, MAKING (TR), FITTING (TR), WELDING (TR), MAKING (PI), WELDING (PI), EXPORTING 2, UPENDING, FITTING, WELDING, COATING, CHECKING
W1 1 0 1300 - 0.90 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 0
W2 1 0 1300 - 0.90 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 0
W3 1 1800 0 - 0.95 1 1 0 1 1 0 1 1 1 1 1 1 1 0 0 0 0 0 0
W4 1 1800 0 - 0.90 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 0 0 0 0
W5 1 0 1300 - 0.90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0
W6 1 1800 1800 - 0.95 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1
W7 1 2500 0 - 0.95 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1
F1 -1 700 500 10 0.90 0 1 0 1 0 1 0 0 0 1 1 0 1 0 0 0 0 0 0
F2 -1 700 500 10 0.90 0 1 0 1 0 1 0 0 0 1 1 0 1 0 0 0 0 0 0
F3 -1 700 500 10 0.90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
F4 -1 700 500 10 0.90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
F5 -1 600 500 10 0.95 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
F6 -1 600 500 10 0.95 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
F7 -1 1500 700 20 0.95 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
F8 -1 1500 700 20 0.95 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
F9 -1 2000 500 90 0.85 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0
F10 -1 2000 1000 60 0.95 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0
F11 -1 2500 1000 80 0.95 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0
F12 -1 2500 1000 100 0.95 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
F13 -1 3500 1000 100 0.95 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
F14 -1 3500 1000 120 0.95 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0
F15 -1 0 0 0 1.00 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1

3.1. Case 1
Figure 3 shows the search results for obtaining the optimal production strategy. Four
Pareto efficient strategies can be found. In this paper, the difference between the
production strategy considering risk and the one without considering risk was evaluated.
The average total labor cost using the optimal production strategy without considering
risk is 5.916 million JPY and the standard deviation is 0.128 million JPY. From this
figure, it could be said that the proposed methodology produces a list of better
production strategies compared to the one without considering risk.
Table 4 shows the result of the comparison analysis between the optimal strategy
without considering risk and each Pareto efficient strategy considering risk. In P1-1 and
P1-2, the EDD and FIFO parameters increase and the SPT, SST and TSLACK parameters
decrease when compared to the optimal production strategy without considering risk. In
other words, this factory preferentially performs available tasks with earlier deadlines.
On the other hand, the VC parameter increases in P1-3 and P1-4. From Table 3, it could
be said that expensive workers and expensive facilities tend to be low risk. This is why
the factory changed its production strategy to allocate tasks to expensive workers and
facilities to reduce risk.


Figure 3. Search results for getting optimal production strategy in Case 1

Table 4. Result of comparison analysis between optimal strategy without considering risk and each Pareto
efficient strategies considering risk in Case 1
Strategy | Average total labor cost [million JPY] | Standard deviation [million JPY] | EDD | FIFO | SPT | SST | TSLACK | SPN | SSP | VC | EC
P1-1 5.840 0.131 0.90 1.82 -1.38 -0.82 -0.85 0.76 0.64 -0.67 -0.39
P1-2 5.848 0.116 0.64 1.80 -1.52 -0.83 -0.97 1.18 -0.11 -0.11 -0.09
P1-3 5.856 0.114 0.71 -1.08 -0.87 -0.67 -0.71 -0.04 0.20 2.36 0.12
P1-4 5.868 0.109 0.14 -0.04 -0.80 -0.78 -0.79 0.08 -0.20 2.64 -0.26

3.2. Case 2
Figure 4 shows the search results for obtaining the optimal production strategy. In this
figure, eight Pareto efficient strategies can be found. The average total labor cost of the
optimal production strategy is 6.047 million JPY and the standard deviation is 0.126
million JPY.
Table 5 shows the result of the comparison analysis between the optimal strategy
without considering risk and each Pareto efficient strategy considering risk. In all Pareto
efficient strategies, the SPT and EDD parameters increase and the FIFO and TSLACK
parameters decrease when compared to the optimal production strategy without
considering risk. In other words, this factory changed its production strategy to allocate
tasks with earlier deadlines to workers and facilities.


Figure 4. Search results for getting optimal production strategy in Case 2

Table 5. Comparison analysis between optimal strategy without considering risk and each Pareto efficient
strategies in Case 2
Strategy | Average total labor cost [million JPY] | Standard deviation [million JPY] | EDD | FIFO | SPT | SST | TSLACK | SPN | SSP | VC | EC
P2-1 5.986 0.141 0.93 -1.45 1.55 -0.88 -1.15 -0.46 -0.12 0.64 0.94
P2-2 5.994 0.129 0.81 -1.16 1.74 -0.58 -0.62 -1.51 0.12 0.95 0.24
P2-3 5.997 0.128 1.42 -1.06 1.52 -0.55 -0.79 -1.37 -0.16 0.82 0.19
P2-4 5.999 0.124 0.80 -0.88 1.87 -0.39 -0.62 -0.20 -1.51 -0.17 1.10
P2-5 6.001 0.118 1.57 -1.32 1.68 -0.72 -0.77 -0.10 -0.71 0.56 -0.18
P2-6 6.003 0.116 0.81 -1.12 1.72 -0.55 -0.82 -1.46 0.23 0.95 0.25
P2-7 6.025 0.115 0.61 -0.80 1.72 -0.42 -0.60 0.34 -0.02 -1.83 1.01
P2-8 6.036 0.114 1.15 -1.39 1.88 -0.61 -1.09 -0.18 -0.41 0.67 -0.02
4. Discussion
In Case 2, the SPT parameter shows the most significant increase when compared to the
optimal production strategy without considering risk. It should be noted that the
duration value of all activities is 1 hour. These results show that the difference between
an adequate production strategy considering risk and one without considering risk is
small. In other words, the Case 2 factory cannot afford to reduce risk because it already
works under severe constraints.
5. Conclusion
This paper proposes a methodology to design production strategy based on risk
analysis using process simulation. A process simulator including the Monte Carlo
method evaluates the impact of uncertainty in performance on an organization model,
production model, constraints, and production strategy. Better production strategies can
be searched using a random key based genetic algorithm which minimizes the average
total labor cost and standard deviation of total labor cost. A designer can select the final
production strategy considering the results of simulations.
The proposed methodology is applied to case studies of the block assembly process in
a shipbuilding company. Results show that the proposed methodology generates a set of
better production strategies compared to those without consideration of risk.
Furthermore, we can confirm that an adequate production strategy can be designed
given the constraints and differences of a specific factory.
Acknowledgement
This research was supported by JST.
References
[1] Jin Y, Levitt R.E, The virtual design team: a computational model of project organizations,
Computational and Mathematical Organizational Theory 2, 3 (1996) 171-196.
[2] Kunz J.C., Christiansen T.R., Cohen G.P., Jin Y., Levitt R.E., The virtual design team: a computational
simulation model of project organizations, Communications of the Association for Computing
Machinery(CACM), 41, 11 (1998) 84-91.
[3] Lankhorst M.M. Enterprise architecture modelling: the issue of integration, Advanced Engineering
Informatics, 18, 4 (2004) 205-216.
[4] Suzuki Y., Jin Y., Koyama H., Kang G, An Application of Simulation Based Process Design,
Proceedings of 17th ISPE International Conference on Concurrent Engineering (2010)
[5] Browning T.R. Applying the design structure matrix to system decomposition and integration problems: a
review and new directions, IEEE Transactions on Engineering Management, 48, 3 (2001) 292-306.
[6] Unger D., Eppinger S. Improving product development process design: a method for managing
information flows, risks, and iterations, Journal of Engineering Design, 22, 10 (2011) 689-699.
[7] Luh P.B., Liu F., and Moser B., Scheduling of design projects with uncertain number of iterations,
European Journal of Operational Research, 113, 3 (1999) 575-592.
[8] Levitt R.E., Thomsen J., Christiansen T.R. et al, Simulating project work processes and organizations:
Toward a micro-contingency theory of organizational design, Management Science, 45, 11 (1999)
1479-1495.
[9] Moser B.R. The Design of Global Work: Simulation of Performance Including Unexpected Impacts of
Coordination across Project Architecture, Ph.D thesis, The University of Tokyo (2012)
[10] Taiga Mitsuyuki, Kazuo Hiekata, Hiroyuki Yamato, Kazuki Haijima. A Study on Evaluation of
Organizational Performance considering the Workers and Facilities, 19th ISPE International
Conference on Concurrent Engineering, 2 (2012) 533-544.
[11] Blackstone, J.H., Phillips, D.T. and Hogg, G.L. A state-of-the-art survey of dispatching rules for
manufacturing job shop operations, International Journal of Production Research, 20, 1 (1982) 27-45.
[12] Bean.J.C. Genetics and random keys for sequencing and optimization, ORSA Journal on Computing, 6,
(1994) 154-160.
Development of Support System Solutions
for Capability Transition
Kevin DOWNEY a and John P.T. MO b,1
a BAE Systems Australia
b RMIT University, Australia
Abstract. The Australian Defence Force and industry are undergoing significant
changes in the way they work together in capability enhancement programs. In
order to manage major asset acquisition to transition new capability into the front
line of Royal Australian Air Force fighter groups whilst maintaining and supporting
its current obligations, this paper looks at the steady state support solution and
argues that, in order to interchange from one support solution to a new architecture,
there must be a period of transition which may need its own short-term business
model and operational service. Preliminary study of several existing support
solutions reveals the generic elements that need to be parameterised and traced
through the trajectory. Research is continuing with detailing these parameters and
validating them through practical application of the methodology proposed in this
paper.
Keywords. Transitional system architecture, enterprise trajectory, support solution,
capability enhancement, change management
Introduction
The RAAF is in the process of a major reworking of its air force structure [1]. Its entire
air combat fleet has embarked on a decade long process of renewal and augmentation.
The F-111 long-range strike aircraft has been retired and the replacement Super
Hornets have begun to arrive. The F-35 Lightning II Joint Strike Fighter is scheduled to
be delivered progressively from 2014 to 2020 and the classic Hornets will be phased
out over the same period. A range of supporting aircraft and associated equipment is
acquired to provide a significant capability boost. In addition to relying on manned
aircraft, the RAAF has been employing, leased unmanned aerial vehicles (UAVs)
overseas. The UAVs provide real-time intelligence, surveillance and reconnaissance
capabilities for deployed Australian forces. These acquisitions constitute important
components of the overall security patrol capability but will now not be complete until
sometime after 2020.
A recent trend around the world among the owners of complex engineering systems,
such as defence, is to include consideration of the sustainment of the system at the very
early stages of system development. The Ministry of Defence in the UK [2] has adopted a
new approach to managing capability enhancement around the objective of achieving
through-life support and sustainability. From industry's point of view, this shift in the
defence acquisition process means longer, more assured revenue streams based on

1 Corresponding Author.
long-term support and ongoing development instead of a series of big "must win"
procurements [3].
Therefore, major capability acquisition programs need to transition from the
existing support solutions, which were designed for specific aircraft types, into a new
air capability and operational solution in order to maintain defence duties during this
changing role. The challenge in this transition is to maintain viable working support
solution architectures in line with the requirements of the changing strategy towards
integrated holistic capacity. It is crucial to use a systematic design methodology that
helps management develop well-defined policies and processes across the
organisational boundaries and implement the changes in all enterprises of the service
supply chain.
1. Literature Review
Support system architecture modelling is a multi-disciplinary engineering process.
This literature review attempts to examine those factors that are closely related to
enterprise change and sustainable system requirements.
1.1. Transition management
The notion of Transition Management is largely the work of Bridges [16]. Transition is
different from change, and it is very often the transition that people resist, not the
change itself. The transition needs to be understood and managed, especially where the
change is radical. Staff at different stages along the change trajectory will have
emotional responses that need to be recognised [5].
Ng et al [6] described the planning and implementation of a new service
transformation of an organisation towards an effective service capability. It is essential
to consider both the current operating state of the product-based service and the
required future operating state. Later, Ng et al [7] argued that understanding both states
would lead to a more informed planning process structured around the
transformation requirements. While they looked at the needs for transformation, they
only considered the capability aspects needed to produce effective value co-creation.
Chattopadhyay et al [8] studied a global engineering company and showed that
organisational capability should first become adaptive by establishing internal
structures and processes that aid the creation of competence and hence the ability to
transform into a service provider.
1.2. Knowledge retention
According to Nonaka et al [9], the capacity to generate, renew, and use knowledge
constitutes an important strategic source for driving organisations towards attaining
sustainable competitive advantage. In technologically complex industries, the tacit
knowledge residing within workers is the chief asset as well as the strategic advantage
for organisations. Thus, ensuring the productivity of the knowledge workforce is the
greatest challenge of the twenty-first century [10]. In the new economy, it has become
the top priority for leadership to retain the knowledge workforce and nurture its
continued commitment and loyalty to the organization. This is especially the case for
industries such as aerospace and defence, which deal with cutting-edge technologies
and large-scale systems, with significant complexity.
1.3. External influences
Although it is still at an early stage of process development, the Australian Defence
intends to adopt a more integrated approach by contracting for acquisition and
sustainment simultaneously in some of its new system acquisitions. According to the Defence
Materiel Organisation in Australia (DMO) [11], the capability system lifecycle is
considered a continuum of four phases: requirements, acquisition, sustainment and
disposal. To meet Government expectations the DMO becomes more business-like in
its operations and is accountable for acquisition and sustainment of ADF equipment.
1.4. Product Service System
The shift of acquisition paradigm forced industry to develop the concept of the
product-service system (PSS) [12]. The PSS concept was initially developed around the
optimisation of sustainability criteria applied to operations, maintenance, and
environmental issues around the product. It extends, on the basis of an existing complex
product, the provision of support services on that complex product when it is in
operation [13].
2. Enterprise change trajectory
Many interacting factors influence business performance and the ability of a company
to deliver an acceptable PSS. Research has shown that any change in an enterprise is
associated with risks [14]. To analyse a business and assess its opportunity, an enterprise
model is used. An enterprise model is developed from a modelling framework known as
an enterprise architecture, as shown in Figure 1.

Figure 1. Generic Enterprise Reference Architecture (GERA) [15]

Any unplanned change to the support system is an impact of uncertainty on system
performance. The enterprise architecture approach provides a structured system to
manage change activities, for example, promoting planning, reducing risk,
implementing new standard operating procedures and controls, and rationalizing
supporting facilities. Enterprise engineering methodologies help to guide these changes
in order to minimize enterprise design modifications and the associated rework of the
systems governing information and material flows.
Essentially, the enterprise architecture creates a baseline support system model
(Figure 2). The enterprise model enables interpolation between snapshots, which leads
to the identification of trends and changes in the enterprise architecture. By carefully
analysing the evolution of, and interlinks between, different functions, data and
processes, a development continuum can be mapped out to form a trajectory.


Figure 2. Enterprise models in a transition trajectory: the enterprise model captured as a baseline at stage t1 evolves, through changes of enterprise structures over time, into the enterprise model of the future support system at stage t4.

Creation of the trajectory is an iterative process. This paper examines some of the
key elements that form the enterprise change modelling methodology.
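As an illustrative sketch only (the paper does not prescribe an implementation), the snapshots along such a trajectory can be treated as comparable data structures, so that diffing consecutive snapshots exposes the transition tasks between stages; all names below are hypothetical:

from dataclasses import dataclass, field

@dataclass
class Snapshot:
    stage: str
    elements: dict = field(default_factory=dict)  # element name -> configuration

def transition_tasks(before, after):
    # List the elements added, removed and changed between two snapshots.
    tasks = []
    for name in after.elements.keys() - before.elements.keys():
        tasks.append("introduce " + name)
    for name in before.elements.keys() - after.elements.keys():
        tasks.append("retire " + name)
    for name in before.elements.keys() & after.elements.keys():
        if before.elements[name] != after.elements[name]:
            tasks.append("modify " + name)
    return tasks

t1 = Snapshot("t1", {"maintenance": "on-base", "training": "type-specific"})
t4 = Snapshot("t4", {"maintenance": "contractor PSS", "logistics": "pooled"})
print(transition_tasks(t1, t4))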
3. Research Methodology
The study identifies the key elements of a support system and subsequently develops
transition models to guide changes between different support system stages. In order
provide a clear set of measureable outcomes, the transition is studied from current
support solutions. The concept of the support solution post transition will be a
theoretical model similar to the current support solution but with variations in operating
parameters. The intent is to show the need of a transition phase, how the operation and
architecture can be used to ensure that the transition has been successful.
3.1. Systems engineering approach
The use of a systems engineering approach will help to define input states, output
criteria, critical success factors and operational requirements. Measurement against these
requirements may prove useful in determining if the transition model is effective.
Guidelines on how a project should be measured and evaluated are developed as the
research progresses.
To assist with the selection of the preferred strategy, the future scenario should
use evaluation criteria against which each strategy alternative can be assessed. In
presenting the arguments for the preferred strategy in the future scenario, a clear flow
of argument must be presented.
3.2. Establish the baseline
Current support systems where the product support solution has been introduced are
studied and assessed. The research builds upon the extensive contextual data and
information that has been collected from different sources. It uses the collected
material, which documents the current support solution, as the baseline for a new
support solution matching the changing Air Force. Major influences of change in a
support solution are identified from the established support service cases.
3.3. Generic approach to develop new support
Studies of international support solutions have focused on the requirements for a
sustainable support solution and how the contractual constraints shape the architecture
of the support solution. A generic approach is used to investigate important questions
and potential topics on the development and delivery of support and service solutions
in specific sectors. The research process looks at the steady state support solution and
argues that, in order to interchange from one support solution to a new architecture,
there must be a period of transition which may need its own short-term business model
and operational service.
4. Review of current transition models
It is possible that a holistic approach to the transition period may provide the RAAF
with a solution that limits the risk of reduced capability, whilst maintaining a high level
of technology insertion. Several existing support solutions are reviewed.
4.1. Hawk 127
The Hawk 127 project is a complex business model that draws on the experience of the
international Hawk user group. The business model reflects on the needs of the
Government to provide trained fast jet pilots to the operational Australian Air Force.
The contract requires a level of aircraft availability from the support contractor; failure
to meet this metric may incur severe financial penalties. In the earlier phases of the
contract this drove the culture of the business model, which largely focused on
delivering a product as opposed to a service.
There are four key processes by which the PSS provides support:
- Fleet Management
- Logistics Management
- Engineering Support
- Deeper Level Maintenance
Once the baseline has been identified the proposal will review a selection of other
operating models that have transitioned from one business model to another and how
the transition was managed without major impact on the customer and end users.
4.2. Hornet F18
The acquisition of the Hornet fleet between 1984 and 1990 occurred prior to the
introduction of the current ADF Airworthiness Management System in 1993. DMO
has overarching responsibility for providing the Air Force with project management of
Hornet fleet engineering, logistic support and acquisition. In-service support of
individual F/A-18A/B aircraft is provided at two levels:
- Operational Maintenance, undertaken by several Air Combat Groups' Operational Conversion Units. This includes aircraft flight-line servicing and fault diagnosis, and aircraft condition inspections and repairs at the line replaceable unit level.
- Deeper Maintenance, undertaken both by contractors and by the ACG personnel employed in the Wing F/A-18 Deeper Maintenance Combined Workshop at Williamtown.
There remain significant challenges for Defence in achieving the planned transition to
an F-35 based air combat capability in the required timeframe. A capability gap can
arise between the withdrawal from service of the F/A-18A/B fleet and the achievement
of full operational capability for the F-35 fleet.
4.3. Collins Class Submarine
The Coles report [17] into the support service and management of the Collins class
submarine has led to far-ranging and far-reaching changes in the way the DMO manages
contracts. This is a useful source of relevant and topical information that supports the
need both for careful management of a new capability and for a strong transitional
support program.
The support contract for the Collins class submarine was initially not expected to
be engaged directly through one contractor. In the early operational life of the fleet,
Defence separately contracted with several dozen suppliers of equipment. There was
also an early piecemeal contract involving a team of 28 ASC personnel that, given its
size, was insufficient to provide the level of support needed. There were many IP issues,
as there was no consistent contractual approach. The level of support required to firstly
transition and then support the Collins Class was a step change from that required
for the previous Oberon Class fleet.
5. Key elements of a generic solution identified
The study of current support solutions reveals common elements of a support
solution. These elements allow the structure and operations of a generic solution to be
identified and carried through a transitional period. Without such a framework, or
acknowledgement that these critical elements need to be maintained, the transitional
period cannot bridge and support the operational period between the two support
solutions. These elements are discussed below:
5.1. Enterprise Management
Traditional enterprise architectures are based on a top-down approach. They
emphasise uniformity throughout the organization. As such, the structure is
inflexible; changing it in order to respond to fast-changing dynamic issues for
in-service engineering systems will take too long to fix any problem. The transitional
enterprise will by its very nature need to be flexible, but it must also provide a strong
framework that allows the other elements to transition.
5.2. Engineering Management
From the systems engineering point of view, efficiency of the interface requires a
systematic development of the functions and the respective relationships. The ideal
support system is one that has clearly defined responsibility and accountability for each
function so each can operate in a complementary manner with suitable information
exchange and synchronization of activities.
5.3. Configuration Management
In order to ensure that the service solution has transferred all elements of the PSS,
there needs to be strong and consistent configuration management, both of the product
and of the support service itself. The role of configuration management, in ensuring
that all the changed requirements have been fully implemented, needs careful planning
and is at the very center of the transitional process.
5.4. Maintenance Management
While change during transition is inevitable, the management of maintenance needs
to be fixed, changing only as and when appropriate. Maintenance availability requires
planning and the procurement of necessary spare parts, and ensuring that the correct
data or instructions are always provided to the maintenance or technical staff
performing the physical operations.
5.5. System Safety Management
Step changes in capability, new or altered environments, and the use of different
materials need to be considered prior to the actual roll-out of a new product service. As
the service solution builds up, or before the product environment is fully functional,
several mitigations may need to be in place to ensure the short-term transition is safe to
operate until the full or final solution is released.
5.6. Publication Management
As with configuration management, the control of technical data must also be of
high importance during a transitional period. If a product or service needs a level of
process instruction, then both the pre- and post-transition services will need to be run
in parallel until the new process or service is fully operational.
5.7. Training
Specific training that allows the product service to transition from the current state
to the future state needs to be identified and pro-actively managed.
5.8. Risk Management
The fundamental change within a transitional architecture is the ability to react to
customer operational needs in a more responsive manner. Risk management and offset
must be a high priority in this set-up, with lifecycle costing and reliability data
providing a large input into the model. The risk management issues pertinent to support
operations should have been assessed earlier in the project lifecycle during the
development phase. However, this is not always the case; therefore support solutions
fall into two broad groups: the first is a support contract where the residual risk
associated with the product is fully understood, and the second is a support contract for
a product where the residual risk is not fully understood.
Therefore the risk funding for each phase of the contract should be set at a realistic
level to overcome known issues. A matrix of major components will provide the input
into the risk funding and mitigation for each phase of the transition.
5.9. Logistics Management
While most of the above elements have a short transitional life, the management of
logistics during this phase is more related to long-term planning. The phase-out of one
solution and the introduction of a new service may not need to be as integrated as the
other elements, but the two do need to be aware of each other's needs. KPIs related to
the phase-out or phase-in of a solution will be different from those in the steady state
support solution.
6. Investigating the Transitional Architecture
6.1. Applying a systems approach
Within the Hawk 127 case study there is evidence of an applied systems
engineering approach having been taken in the architectural design of the product
support service. Its operational capability, logistics support and supply chain
management all show a large element of systems engineering influence. This will be
extended and built upon to examine the requirements and solution within a transitional
operational environment.
The evidence of measuring success via KPIs and the introduction of many
processes within the project's Business Management system are all hallmarks of a
holistic approach to complex system design that has been planned and systematically
captured. It highlights the main points in relation to the design of new support systems.
Using this baseline, along with information from other sources, the project will
hypothesise the future scenario that the RAAF plans to achieve. A new support solution
is then developed for this new scenario.
6.2. Defining transitional system architectures
Changing a project from its current enterprise model takes a paradigm shift in
culture, behavior and relationships if the desired value proposition is to be met. The
enterprise model needs to move to an integrated service enterprise where the customer
is part of the partnered process of service delivery. Some of the solutions may not be
palatable to the customer for political, geographical or historical reasons. However,
they do need to be explored as part of the process to identify the optimised enterprise
model and the co-creation of value.
A crossover plan for the phase-out of a generic support solution and the phase-in of
an alternative support solution needs to be modeled and defined within the context of
the system architecture. Techniques and models introduced will be developed and
evaluated during this stage to understand their applicability to a transitional model.
The transition architecture will identify the high-level transition tasks and the
authorities that have responsibility for in-service support tasking and for rectifying any
outstanding issues existing at the time of handover.
6.3. Defining a systems operational environment
The role of the airworthiness regulator is the other defining factor in the shape and
nature of the support solution. It sets the engineering and maintenance regulations via
the Technical Airworthiness Manual. Compliance with its regulations is required to
ensure that the AEO and AMO retain their status on an annual basis. Under the
Strategic Reform Program the DMO [18] is seeking greater accountability and
transparency in the way Defence manages its budget, and advice on which to base its
capability investment decisions.
This has forced projects to look at initiatives such as Lean, Kaizen and others in an
effort to remain competitive and to remain the customer's supplier of choice. While
these initiatives have brought immediate cost-based savings, what is not clear is the
impact on the total support solution.
7. Conclusion
This paper outlines the components of a transitional architecture. Defence and
industry are undergoing significant changes in the way they work together in capability
enhancement programs. In order to manage major asset acquisition without degrading
the level of capability through a transition spanning decades, a series of transitional
support models, which can be seen as a trajectory of enterprise architectures, can be
created by the support system design team for planning and monitoring. Preliminary
study of several existing support solutions reveals the generic elements that need to be
parameterised and traced through the trajectory. Research is continuing with detailing
these parameters and validating them through practical application of the methodology
proposed in this paper.
8. References
[1] Davies, A. (2010). RAAF Capability review, Australian Strategic Policy Institute Limited, accessible
from http://www.aspi.org.au/publications/publication_details.aspx?ContentID=259&pubtype=0
[2] Ministry of Defence (2005). Defence Industrial Strategy, Defence White Paper, December, 145 pages,
United Kingdom, accessible from http://www.mod.uk/nr/rdonlyres/f530ed6c-f80c-4f24-8438-
0b587cc4bf4d/0/def_industrial_strategy_wp_cm6697.pdf.
[3] Brammer, S., Walker, H. (2011). Sustainable procurement in the public sector: an international
comparative study. International Journal of Operations & Production Management, Vol.31, No.4,
pp.452-476
[4] Tukker, A. (2004). Eight types of product-service system: eight ways to sustainability? Experiences
from SusProNET. Business Strategy and the Environment, Vol.13, No.4, pp.246-260
[5] Karkach, A.S. Trajectories and models of individual growth. Demographic Research, Vol.15, Art.12,
2006, pp.347-400
[6] Ng, I.C.L., Parry, G., Smith, L., Maull, R. (2010). Value Co-creation in Complex Engineering Service
Systems: Conceptual Foundations. Forum on Markets and Marketing: Extending the Service Dominant
Logic, Cambridge, UK, 24-26 September
[7] Ng, I.C.L., Parry, G, McFarlane, D, Tasker, P, (2011) Towards A Core Integrative Framework For
Complex Engineering Service Systems, in Complex Service Systems: Concepts and Research, Eds. Ng,
I.C.L., Wild, P., Parry, G., McFarlane, D., Tasker, P. Pub. Springer, UK, ISBN: 0857291882
[8] Chattopadhyay, S., Chan, D.S.K., Mo, J.P.T. (2012) Modelling the disaggregated value chain: the
new trend in China, International Journal of Value Chain Management, Vol.6, No.1, pp.47-60
[9] Nonaka, I., Toyama, R., Noboru, K. (2000). SECI, Ba and leadership: A united model of dynamic
knowledge creation. https://agileconsortium.pbworks.com/f/Nonaka_etal_2000_SECI.pdf
[10] Drucker P.F. (1999) Management Challenges for the 21st Century, Butterworth-Heinemann, Oxford 2
June 2010
[11] Defence Materiel Organisation (2007). Performance Based Contracting Handbook: Guiding
Principles and Performance Framework, Version 2.0, Department of Defence, Australia, February, 138
pages, accessible from http://www.defence.gov.au/dmo/asd/publications/asd_pbc_v2.pdf
[12] Mo, J.P.T. (2012). Performance Assessment of Product Service System from System Architecture
Perspectives. Advances in Decision Sciences, Volume 2012, Article ID 640601
[13] Baines, T.S., Lightfoot, H.W., Evans, S., Neely, A., Greenough, R., Peppard, J., Roy, R., Shehab, E.,
Braganza, A., Tiwari, A., Alcock, J.R., Angus, J.P., Bastl, M., Cousens, A., Irving, P, Johnson, M.,
Kingston, J., Lockett, H., Martinez, V., Michele, P, Tranfield, D., Walton, I.M., Wilson, H. (2007).
State-of-the-art in product-service systems, Journal of Engineering Manufacture, Vol.221, No.10,
pp.1543-1552
[14] Beasley, M.S., Clune, R., Hermanson, D.R. Enterprise risk management: An empirical analysis of
factors associated with the extent of implementation. Journal of Accounting and Public Policy, Vol.24,
2005, pp.521-531
[15] IFIPIFAC Task Force (1999). GERAM: Generalised Enterprise Reference Architecture and
Methodology. Version 1.6.3, Annex to ISO WD15704, Requirements for enterprise-reference
architectures and methodologies, March, 39 pages
[16] Bridges, W. 2003. Managing Transitions: Making the Most of Change. Nicholas Brealey Publishing,
USA
[17] Coles, J. (2012). Study into the business of sustaining Australia's strategic Collins class submarine
capability, Ministerial and Executive Coordination and Communication Division, Defence,
Commonwealth of Australia, November, 176 pages report
[18] Strategic Reform Programme (2009), Defence Materiel Organisation, Commonwealth of Australia
2012

A Study on Method of Measuring Performance for Project Management

Shinji Mochida a,1

a Faculty of Commerce, University of Marketing and Distribution Sciences

Abstract. Efficient project management is necessary to solve the various problems associated with the completion of a project. Measuring the project's progress and performance accurately is very important for managing projects and for detecting the factors limiting completion of the project. We have investigated the work performance at the beginning of each work on the project. As a result, it has become clear that work performance at the start of the work is not efficient, as can be seen in a performance S-curve. This suggests that the project could be completed more quickly if work performance could be raised at the beginning of the work. In this study we attempt to raise the work performance when the project is initiated. We have constructed a trial system for collecting knowledge and distributing the knowledge to the staff, based on information about faults or defects on past projects and important information about the project being undertaken. By providing appropriate knowledge and applying critical thinking, project worker productivity levels will be raised and enhanced. As a result it will be possible to manage multiple projects, even if workers are taking charge of two or more projects and subsystems at the same time.

Keywords. Project Management, work performance, Gantt chart, critical thinking

Introduction

In what is now a familiar process, a project is first planned and the project then begins to solve various problems. The project could be any activity to invent a new product or new service. For the purpose of this investigation, the number of project members, the budget and the term of the project have been limited. The purpose of the management of the project is to ensure the quality of the work, its timely completion and a reasonable cost of completion. Much research has therefore been devoted to project management in the past, especially research concerning scheduling that uses PERT or other methods. For example, there was an optimization of project management that used the Gauss method [1]. Table 1 shows the use state of PERT charts found in our investigation by questionnaire. It seems that PERT charts are not used very much. Because specification changes occur frequently in a project, especially in a system development project, and because worker productivity changes too, it is difficult to manage a system development project. In particular, worker productivity has a great

1 S. Mochida, Gakuen Nishimachi, Nishi-ku, Kobe, Hyogo, Japan; e-mail: shinji_mochida@umds.ac.jp
20th ISPE International Conference on Concurrent Engineering
C. Bil et al. (Eds.)
2013 The Authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-302-5-264
influence on the project, but it is very difficult to estimate the productivity of the project staff accurately. When the project manager assigns staff to the project, the project manager must consider balancing the work with other projects. He guesses the productivity of his workers on the basis of his experience and management skills. However, because workers are frequently involved in the completion of more than one project at any particular time, it is difficult to estimate productivity for specific projects. As project specifications and the working environment change over time, the productivity of the staff involved with the project will also change. Any change in one project will affect all other projects the worker may be involved with. Since the productivity of the staff is an uncertain variable, it is extremely difficult to measure the effect of additional work undertaken by project workers on other projects […]. Improving work performance is therefore important to achieve the planned productivity. By providing appropriate knowledge at the appropriate time, and with the application of critical thinking, project staff productivity will be raised and enhanced. Critical thinking is necessary to encourage workers to think carefully and reasonably about problems, to judge correctly and to avoid preconceptions [7][8]. This study proposes a project launch date management system and knowledge management. The work launch date is a day when appropriate knowledge is given, information is gathered and a preliminary examination of the process or activities is begun. The project launch date management system takes care of the process or activities launch date. A process includes some activities. The process or activities launch date is simply called the work launch date in the following. The work launch date must be an appropriate day, usually about one week before work on the process or activities is initiated, according to our past investigation. The project launch date management system seeks to achieve high productivity at the start of the project by giving appropriate knowledge, and to shorten the time required for completion of the project […].

Table 1. Use state of PERT charts and Gantt charts

Company  Type of business               PERT chart  Gantt chart
…        System development             not use     use
…        System development, logistics  not use     use
…        Machine factory, traffic       not use     use
…        Machine factory, railroad      not use     use
…        Machine factory                not use     use

Productivity of staff and process planning

In our example, two or more subsystem developments are executed for a large-scale project, and one worker takes charge of two or more projects and subsystems at the same time. There is a scramble for equipment and human resources. Since the time allocation for each of the projects is left to the discretion of the worker, it is difficult to accurately forecast the productivity of the worker for any one project. We can illustrate this in Figure 1, where a worker is assigned to two projects and the order for beginning the projects is left to the discretion of the worker.

Figure 1. Activities assigned to the same staff

Working hours and productivity

When the project worker is engaged in the activity, it is thought that productivity increases as the time spent on the project passes. This also assumes that productivity is low at the beginning of the work, because necessary skills have not yet been developed and there is a paucity of the detailed information necessary for completing the work. If the productivity at a certain time is given by x(t) as a function of time t, the rate of change in productivity is shown by expression (1):

dx/dt = m x    (1)

where m is an increasing rate of productivity. Productivity increases as the time spent on the project passes and as necessary skills and information are accumulated. When we replace m with function expression (2) and substitute expression (2) into expression (1), expression (3) is obtained:

m = r (k − x)    (2)

where r is an increasing rate of productivity and k is a constant.

dx/dt = r x (k − x)    (3)

Expression (3) is transformed and expression (4) is obtained:

dx / (r x (k − x)) = dt    (4)

Next, expression (4) is made into a partial fraction and expression (5) is obtained:

(1 / (r k)) (1/x + 1/(k − x)) dx = dt    (5)

If both sides of expression (5) are integrated, expression (6) is obtained:

x(t) = k / (1 + b e^(−k r (t − c)))    (6)

where b and c are constants determined by the initial productivity. The data for productivity collected from a project on computer system development was substituted into expression (6), and Figure 2 was obtained.

Figure 2. Curve of productivity
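As a quick numerical illustration of expression (6), the S-curve can be evaluated directly (a minimal sketch only; the parameter values below are invented, since the fitted values behind Figure 2 are not recoverable from this copy):

    import math

    def productivity(t, k=1.0, r=0.5, b=1.0, c=5.0):
        # S-curve of expression (6): x(t) = k / (1 + b*exp(-k*r*(t - c))).
        return k / (1.0 + b * math.exp(-k * r * (t - c)))

    # Productivity is low at the start of the work and saturates near k,
    # which is the behaviour read off the performance S-curve above.
    for day in (0, 5, 10, 15, 20):
        print(f"day {day:2d}: productivity = {productivity(day):.3f}")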

Table 2 shows the difference between actual working days and planned working days. In this case the working days in the plan were estimated at … days, but the actual working days were … days. The curve in Figure 2 shows productivity. The actual working days shown in Table 2 are obtained as an area, by multiplying days by this curve X. The number of actual working days is thus … days, while the planned working days were … days. In this example, therefore, the actual working days are nearly equal to the planned working days. Figure 3 shows the relation between planned working days and actual working days. This data was taken from an activity in the system development. The work consisted of making the specifications for the interface of the system. Since creating the specifications is difficult work and productivity didn't rise to the expected value, the work was delayed. Figure 3 shows this situation.

Table 2. Work days comparison

Planned working days  Measured working days  Delay in working days  Actual working days (X days)
… days                … days                 … days                 … days

Figure 3. Curve of productivity and work delay
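The planned-versus-actual comparison behind Table 2 can be mimicked numerically: the work delivered is the area under the productivity curve, so the calendar days needed for a given amount of planned standard-productivity days follow from integrating that curve (a sketch with invented inputs, as the original figures are not recoverable):

    import math

    def productivity(t, k=1.0, r=0.5, c=5.0):
        # Illustrative logistic S-curve from expression (6).
        return k / (1.0 + math.exp(-k * r * (t - c)))

    def calendar_days_needed(planned_days, step=0.1):
        # Accumulate the area under the productivity curve until it
        # matches the planned work content in standard-productivity days.
        t = done = 0.0
        while done < planned_days:
            done += productivity(t) * step  # rectangle-rule integration
            t += step
        return t

    # Low productivity early in the work stretches the calendar time.
    print(f"planned 10 standard days -> {calendar_days_needed(10):.1f} calendar days")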
Process planning and productivity

In normal progress management, only the completion day of each activity is considered. This paper proposes a system to manage the work launch date. The work launch date is a day when information is gathered and a preliminary examination of the project is undertaken. In the project launch date management system, a preparation day is added prior to the work launch date and, following completion of the work, an evaluation day is also included. Figure 4 shows a Gantt chart for traditional project management. Figure 5 shows a Gantt chart using the project launch date management system.

Figure 4. Gantt chart for traditional project management

Figure 5. Gantt chart for the project launch date management system

Table 3 shows a Gantt chart to which a work launch date has been added. Table 3 shows the start date, end date and productivity of each task in the system development. Productivity here means standard productivity: for programming, for example, it is measured against a standard number of lines a day, and work at exactly that rate defines the reference productivity. Table 3 shows the actual day work is started, rather than the planned start day in the Gantt chart. In addition, Table 3 shows that productivity on the planned start day doesn't reach the standard value. Since generally two or more projects are executed at the same time and each worker is responsible for two or more activities, the productivity of each worker doesn't reach the standard value. The allocation of working hours for each project and the work order, as shown in Figure 6, are left to the worker's discretion. Human factors involved in these decisions might tend to lower productivity. However, with this system it becomes possible to start work with a high productivity by providing workers with the knowledge and information they need to begin their activities. In project management it is important to avoid the delays and troubles common when activities cannot be executed. By decreasing these delays and troubles, projects become more likely to be completed on time. The required knowledge and information should be given as a check sheet. By presenting the necessary knowledge prior to the day work is actually initiated, delays and problems become less of an issue.

Table 3. Productivity, by staff's sense, at each date on the Gantt chart

Work item (with, per item: work launch date, real start of work day*, real completion day, start date of early stage, and the productivity at each date; the numeric cells are not recoverable here)
In case of system development: Investigation, Basic design, Detailed design, Programming, Functional test, Total test, Local placing
In case of machine factory: Investigation, Basic design, Detailed design, Manufacturing, Functional test, Total test, Local placing
* The actual day work is started, rather than the planned start day.

Figure 6. Activities for which the same worker takes charge
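A minimal sketch of the date bookkeeping implied by this section (the Activity class and the one-day evaluation offset are illustrative assumptions; the one-week lead for the work launch date follows the past investigation cited above):

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Activity:
        name: str
        start: date  # planned start of the work itself
        end: date    # planned completion

        @property
        def launch_date(self) -> date:
            # Knowledge is handed out and preliminary examination begins
            # about one week before the work starts.
            return self.start - timedelta(days=7)

        @property
        def evaluation_date(self) -> date:
            # An evaluation day follows completion of the work.
            return self.end + timedelta(days=1)

    act = Activity("Basic design", date(2013, 9, 2), date(2013, 9, 13))
    print(act.launch_date, act.start, act.end, act.evaluation_date)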
Registration of process information and knowledge

Project management and management of the work launch date

Productivity depends on a worker's ability and skill, and so it can only be estimated when a project is planned. It is important, however, to improve that productivity when work actually begins. This study proposes that a project launch date management system be used to attain those productivity gains. With this system it becomes possible to start the work with high productivity by presenting the required knowledge and information to the worker before the start of the work. By providing the appropriate knowledge and applying critical thinking, the worker's productivity will be raised and enhanced. Appropriate knowledge consists of information about faults or defects on past projects, or useful information about the project about to be undertaken. Many difficulties can be avoided by providing this information before the project begins, on an appropriate work launch date. The work launch date will usually be about one week before work on the project begins, according to our past investigation. The best day for this information to be given is expressed numerically by the questionnaire shown in Figure 7. Table 4 shows an example of the functions of systems sold.

The project launch date management system made for trial purposes

The system that achieves the project launch date management has been made for trial purposes in this study. Knowledge and activities can be registered in the same way in this system. Figure 8 shows the functions of this trial system.

Figure 7. Expressing sense numerically

Table 4. Function comparison of software systems sold

System  Vendor  Knowledge registration  Milestone registration  EVM  Status that can be displayed
A       …       …                       …                       …    plan, start, finish
B       …       …                       …                       …    plan, start, finish, delay
Completed volume percent estimate: milestone method. The completed volume is subjectively allocated before reaching the milestone.

Process information and the knowledge and information registered in the corresponding step of Figure 8 are put into the plan spool. Next, the Gantt chart is displayed. Process information in the plan spool is distributed to each staff spool. The work knowledge linked with the process is extracted and distributed to each staff spool in the same manner. A starting status is displayed in the Gantt chart when the process information distributed to each staff spool has started. However, some of these functions are still under development. Because knowledge and work information are described with XML in this system, they can be treated equally, and registered knowledge and process information can be retrieved in the same way. As an example of registering work information, the expected start date can be registered on the process information registration screen. Registered process and knowledge information is created as word processor data. The word processor data can contain links to image data and reference files, and the registered data can be displayed and revised in the word processor.

Figure 8. System configuration
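A minimal sketch of the uniform XML treatment described above (the element and attribute names are hypothetical; the trial system's actual schema is not given in the paper):

    import xml.etree.ElementTree as ET

    # Knowledge items and process (work) items share one record format,
    # so both can be registered and retrieved in the same way.
    spool = ET.Element("spool")
    for kind, title, body in [
        ("knowledge", "Interface spec pitfalls", "Faults seen on past projects ..."),
        ("process", "Basic design", "Expected start date: 2013-09-02"),
    ]:
        item = ET.SubElement(spool, "item", kind=kind, title=title)
        item.text = body

    # Retrieval is identical for both kinds: one query over the spool.
    for item in spool.findall(".//item[@kind='knowledge']"):
        print(item.get("title"), "->", item.text)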
Discussion

In this study we took data from an actual system development project. It was shown in a Gantt chart that work productivity doesn't reach an optimal level at the beginning of the work. In order to address this problem, this study proposes the introduction of a project launch date management system. The work launch date is a day when information gathering and a preliminary examination of the project take place. In the project launch date management system, a launch day is added at the beginning of the process or activities. This results in two extra days being used for project management. The required knowledge and information should be given as a check sheet on the work launch date in order to prevent delays or mistakes. As there is not yet a system that can register the work launch date for project management, this study has used XML to create a trial system that implements the project launch date management. Knowledge and process information are described with XML, treating them equally. Registered knowledge and process information can be retrieved in the same way on this system. By using this system, it becomes possible for knowledge and process information to be given to workers before project work begins. In the future it will be necessary to register much more knowledge and process information in addition to work information, and it will be necessary to confirm the effectiveness of this system […].



Conclusion

Meeting budgets, remaining on schedule and achieving quality are important in project management, and project managers need to control costs and resources in order to achieve these goals. We cannot be certain, however, that the planned schedule for a project is an appropriate work period. The productivity of workers is the most uncertain variable in project management. If it is possible to increase worker productivity, the time for completion of a project will decrease and costs will be reduced. If knowledge and work information could be treated and registered equally, useful knowledge and information could be extracted from them. However, further research and study are needed to refine and improve the project launch date management described here in order to achieve those goals.

Acknowledgment

This work was supported by JSPS KAKENHI Grant Number ….
References

[1] Takayuki Kurata, Masahiro Nagamatsu, Optimization of project management in offshore development, …th Anniversary of BMFSA.
[2] Quentin W. Fleming, Joel M. Koffleman, Earned Value Project Management, Japanese translation, Nihon Nouritsu Kyoukai.
[3] Parviz F. Rad (Kou Itou, translation), PROJECT ESTIMATING COST MANAGEMENT, Japanese translation, Seisansei Shuppan.
[4] Shinji Mochida, Dynamic Knowledge Collection System Using Web Technology, Journal of Biomedical Fuzzy Systems Association, Vol. …, No. …, August, BMFSA.
[5] Norihiko Kaneko, Text for Project Management, Nihon Keizai Shimbun, Inc.
[6] IT Associate Conference, Guidelines for Enterprise Architecture, Ver. …, Ministry of Economy, Trade and Industry.
[7] Eugene B. Zechmeister, James E. Johnson, Critical Thinking: A Functional Approach, Japanese translation, Kitaooji Shobou.
[8] Globis Management School, Globis MBA Critical Thinking, DIAMOND, Inc.
[9] Project Management Institute, A Guide to the Project Management Body of Knowledge, official Japanese translation.
[10] Paul S. Royer (Nobuo Minemoto, translation), Project Risk Management, Seisansei Shuppan.
[11] Parviz F. Rad (Kou Itou, translation), PROJECT ESTIMATING AND COST MANAGEMENT, Seisansei Shuppan.
[12] Timothy J. Kloppenborg, Joseph A. Petrick (Jyuro Miura, translation), MANAGING PROJECT QUALITY, Seisansei Shuppan.
[13] G. Michael Campbell, S. Baker (Hidetaka Nakajima, translation), The Complete Idiot's Guide to Project Management, SOGO HOREI PUBLISHING.
[14] Paul S., Ralph Y., Performance-Based Earned Value, IEEE.
[15] Shinji Mochida, Knowledge Collection System for Project Management, Journal of Biomedical Fuzzy Systems Association, Vol. …, No. …, BMFSA.
[16] Shinji Mochida, Acquisition Method for Knowledge Based on Action Script, Journal of Biomedical Fuzzy Systems Association, Vol. …, No. …, BMFSA.
[17] Shinji Mochida, Knowledge Mining for Project Management and Execution, Journal of Advanced Computational Intelligence.
[18] Shinji Mochida, Knowledge Retrieval for Project Management, Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Springer.
A simulation-based approach to decision
support for lean practitioners
Effendi Bin Mohamad a,d,1, Teruaki Ito b and Dani Yuniawan c

a,c Graduate School of Advanced Technology and Science, University of Tokushima, Tokushima, 770-8506, Japan
b Institute of Technology and Science, University of Tokushima, Tokushima, 770-8506, Japan
d Faculty of Manufacturing Engineering, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100, Melaka, Malaysia
Abstract: In today's global competition, having a lean production system is a must for companies to remain competitive. By identifying and eliminating waste throughout a product's entire value stream by means of a set of LM tools, companies are able to produce and assemble any product range in any order or quantity. In order to do this, personnel need to have the expertise to decide which LM tool to implement at the right time and in the right place. However, this expertise is not always available. Therefore, this paper proposes a simulation-based decision support (SDS) tool to assist decision making in LM tool implementation. The SDS tool provides five functions through an interactive use of process simulation. The functions are layout, zoom-in/zoom-out, task status, Key Performance Indicators (KPI) status and RAG (Red, Amber and Green) status (quantifying waste). These functions are incorporated into a process model of a coolant hose manufacturing (CHM) factory which was developed in this study. The layout function provides a bird's-eye view of the whole process model and shows how the manufacturing process runs with the flow of materials and products. The zoom-in/zoom-out function provides a detailed view of the manufacturing processes of the factory. For the KPI and RAG status functions, examples of LM tool implementations are used to show how different parameters affect the outcome of the manufacturing process. Bar charts of KPIs are also available during simulation. A feasibility study showed how the SDS tool enhances the visual perception and analysis capabilities of lean practitioners through the availability of specific functions in the simulation model. Hence, decisions in LM implementation could be made correctly and with increased confidence by lean practitioners.
Keywords: Simulation, Lean Manufacturing, Decision support
Introduction and Research Background
To date, the lean manufacturing (LM) philosophy has been applied to many
manufacturing processes and its feasibility has been reported so far [1]. By identifying
and eliminating waste throughout a product's entire value stream by means of a set of

1
Corresponding Author: Effendi Bin Mohamad, Graduate School of Advanced Technology and
Science, University of Tokushima, Tokushima, 770-8506,Japan ; Email : effendi@utem.edu.my
20th ISPE International Conference on Concurrent Engineering
C. Bil et al. (Eds.)
2013 The Authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-302-5-274
LM tools, companies are able to produce and assemble any product range in any order
or quantity. In order to do this, personnel need to have the expertise to decide
which LM tool to implement at the right time and in the right place. However, this
expertise is not always available [2, 3]. The decision making in manufacturing systems
is becoming more difficult nowadays due to the increasing amount of data and the complex
interrelations between manufacturing processes [4].
Simulation has been asserted as a tool to quantify the effectiveness of LM tool
implementation and assist lean practitioners with the decision to implement LM [1, 5-
6]. Simulation is an effective method of supporting and evaluating LM tools, assessing
current and future state of manufacturing process, performing what-if analysis and
measuring impact of improvement after LM implementation. Most importantly,
simulation could represent a large number of interdependent input parameters and
manage the complexity of interactions effectively [7]. Using simulation to analyse real
data enables lean practitioners to forecast the output of manufacturing processes based
on the input values. This provides the lean practitioners time to react to emerging
problems, evaluate potential solutions and decide on LM implementation.
However, most studies use simulation to design, test and improve lean systems. Yet, studies on the usage of simulation to support decision-making in replacing an existing manufacturing process with a lean system are still lacking [1]. Lean practitioners' (decision-makers') understanding of how to implement LM and of the impact of LM on performance measures is also still lacking [8]. Thus, the decisions to adopt LM are
often made based on their own intuitions, faith in LM philosophy, consulting the
experts, utilizing handbooks, experiences of other management teams who have
implemented LM and using their own calculation methods [4, 9].
There are research attempts which present the application of simulation-based
approaches to decision making issues in LM implementation. Research conducted by [10] uses simulation to support decision-makers in production design and operations, while the study of [4] deployed simulation in an operational scheduling system and
concluded that simulation-based approaches could alleviate the works required to plan
day-to-day scheduling, ensure conformance of customer order due date, synchronize
flow through the plant, reduce changeover time and forecast potential problems.
Nevertheless, there are minor drawbacks associated with these simulation-based
approaches to decision making in LM implementation. As far as the limitations of these
approaches are concerned, the biggest obstacle is to develop a system capable of
supporting operational (real-time) decision making as opposed to strategic
manufacturing decision making [11]. Another obstacle is the gap which exists
between lean practitioners and simulation-based approaches in terms of expertise in
utilising the simulation software tools. The simulation software tools are generally
more suitable for simulation engineers who know how to design/build/analyse a
simulation model, and how to integrate it to LM tool software [12]. Basically,
simulation studies in lean projects are managed by simulation engineers and real time
updating of simulation model is also performed by them [13]. Therefore, these
approaches are not suitable for lean practitioners who are familiar with neither
simulation software, nor LM tool software. Misunderstanding between simulation
engineers and other lean practitioners may lead to development of a biased simulation
model [14]. Therefore, a structured approach to using simulation software tools is
required to support the decision-making process in manufacturing and increase the
understanding of decision-makers in the company because it will determine the future
of the company [15].
Motivated to address this gap, [16] developed a decision support system targeted for users who are not experts in simulation. The decision support system was developed using Visual Basic Application and designed to work with the Witness package and Superscape VRT software to enable non-expert users to develop simulation and virtual models. Users are also able to interact with the simulation models in real time using voice commands and observe the virtual model using head mounted displays. With this decision support system, the targeted users could develop simulation models of manufacturing systems and address their behaviour.

In this study, a simulation-based decision support system (SDSS) is proposed to address the gap between lean practitioners and simulation-based approaches in terms of expertise in utilising simulation software tools. SDSS is a system capable of supporting operational (real-time) decision making by providing five functions through an interactive use of process simulation, to assist lean practitioners (who are not experts in simulation) in their decision to implement LM tools.

The functions are layout, zoom-in/zoom-out, task status, Key Performance Indicators (KPI) status and RAG (Red, Amber and Green) status. Following simulation runs, results (SDSS output) will be saved in an independent database. These results could be retrieved during or at the end of simulation runs in the form of total production output, total production time, changeover time, bar chart, Work in Progress (WIP) and Inbound/Outbound buffer values. From the results, lean practitioners could detect problems in the simulated production line and select the most suitable LM tool to be applied to solve the problems. For example, if the result shows high changeover time, lean practitioners could choose Single Minute Exchange of Die (SMED), implement it into the simulation model and conduct the simulation run again to observe the improvement brought by the chosen LM tool (Figure 1). The process could be repeated countless times until the desired results are achieved. By using this approach, lean practitioners are able to forecast the output of manufacturing processes and the effectiveness of LM tools based on the input values. This provides the lean practitioners time to react to emerging problems, evaluate potential solutions and decide on LM implementation. The feasibility of SDSS is studied using a process model of a Coolant Hose Manufacturing (CHM) factory which was developed in this study. The details of SDSS will be elaborated in the next section.

Figure 1. The simulation-based decision support system (SDSS) architecture
1. Overview of SDSS
As mentioned earlier, this research proposes SDSS to address the gap between lean
practitioners and simulation-based approaches in terms of expertise in utilising
simulation software tools. SDSS plays a critical role to support lean practitioners in
real-time decision making and selection of LM tools. SDSS provides five functions
through an interactive use of process simulation to assist lean practitioners (who are not
experts in simulation) in their decision to implement LM tools.
The layout function of SDSS provides a bird's-eye view of the simulated factory
floor. By using this function, lean practitioners could observe how the manufacturing
process runs with the flow of materials and products throughout the manufacturing
process. They could identify workstations that cause bottlenecks, movement of operators, movement of material transportation and other problems.
On the other hand, the zoom-in/zoom-out function is designed for obtaining a detailed view of each section of the manufacturing processes. For example, if the lean practitioners
noted a section with product congestion during simulation run, they could click on the
zoom-in button to get a better view of that particular section and find out the cause of
product congestion. To find the cause of product congestion, they are provided with the
third function of SDSS, which is the task status function. The task status function of SDSS provides three status illustrations, i.e. busy, idle and fail, to represent operator
status in every workstation in the factory. The three task statuses of operator are
differentiated by means of colours and location of the operator from the machine. By
observing these status illustrations, lean practitioners would be able to understand the
changing task status in real time during the simulation runs. Once they have understood
the problem at the workstation, they could resume viewing the layout view by clicking
on the zoom-out button. Following that, they could proceed to the next function of
SDSS which is KPI status function to acquire more information on the existing
problems.
The KPI status function, which includes total production output, total production time and changeover (C/O) task time, is presented by means of KPI tables.
KPI values in this simulation model are generated and updated in real time during
simulation. By conducting what-if analysis and observing the KPI, the lean
practitioners could see the performance of the existing production line and compare it
with the performance post LM tool implementation. For visual understanding of KPI,
bar charts of KPI tables are also generated and updated in real time during simulation.
These bar charts also provide information on WIP and Inbound/Outbound buffer which
assist lean practitioners in their decision to implement LM tools.
Apart from providing KPI status function, SDSS also provides RAG status
function which is capable of quantifying waste in manufacturing process. RAG status
function continuously monitors the status of waste quantitatively during simulation
runs. RAG status is developed in the following three steps. Step 1 is collection of
observation data. Step 2 is performance level (PL) calculation of workstation (WS) in
manufacturing line using mathematical calculation. Step 3 is determination of waste
level by quartile calculation method. To determine waste level, distribution pattern of
PL was assessed by using quartile calculation to attain Q1, Q2 and Q3 of each WS in
the simulation study. The method of quartile calculation is described below.
A set of data from each WS is arranged in ascending order of magnitude X(1), X(2), …, X(n). The median (middle value of the data set) is determined, followed by calculation of each quartile. Quartile calculation is executed for even and odd sample size (n) accordingly.

i. For even sample size (n):

Q2 (second quartile) = (X(n/2) + X(n/2+1)) / 2    (1)
Q1 (first quartile) = median of X(1), …, X(n/2)    (2)
Q3 (third quartile) = median of X(n/2+1), …, X(n)    (3)

ii. For odd sample size (n):

Q2 (second quartile) = X((n+1)/2)    (4)
Q1 (first quartile) = median of X(1), …, X((n−1)/2)    (5)
Q3 (third quartile) = median of X((n+3)/2), …, X(n)    (6)
Table 1. Waste level for different conditions of the manufacturing line

Waste level   R (Red)    A (Amber)       G (Green)
Condition A   PL ≤ Q1    Q1 < PL < Q3    PL ≥ Q3
Condition B   PL ≥ Q3    Q1 < PL < Q3    PL ≤ Q1

After determining Q1, Q2 and Q3, the waste level is set depending on the condition of the manufacturing line (Table 1). During simulation runs, the RAG status function continuously monitors the waste level and displays it in the form of a graphical image. A green status indicates that waste is not present. An amber status indicates that waste exists but is still within acceptable limits and warrants attention. A red status indicates that waste is beyond the acceptable limits.
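The three steps and the Table 1 mapping condense into a short sketch (a minimal illustration; condition B is the variant applied to WS1 later in the paper, and the PL values are placeholders):

    from statistics import median

    def quartiles(data):
        # Q1, Q2, Q3 by the median-of-halves rule of expressions (1)-(6).
        xs = sorted(data)
        n = len(xs)
        q2 = median(xs)
        lower = xs[:n // 2]
        upper = xs[n // 2:] if n % 2 == 0 else xs[n // 2 + 1:]
        return median(lower), q2, median(upper)

    def rag_status(pl, q1, q3, condition="B"):
        # Map a performance level to Red/Amber/Green as in Table 1.
        if condition == "B":
            return "R" if pl >= q3 else "G" if pl <= q1 else "A"
        return "R" if pl <= q1 else "G" if pl >= q3 else "A"

    pl_history = [0.02, 0.01, 0.03, 0.13, 0.19, 0.22, 0.10, 0.07]
    q1, q2, q3 = quartiles(pl_history)
    print(q1, q2, q3, rag_status(0.05, q1, q3))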
2. CHM factory simulation model
The CHM factory simulation model is developed in this study using Arena simulation
software [17]. This factory produces four types of coolant hose products, which are
called CH4, CH6, CH8 and CH10.








The factory floor is divided into six sections from Section 1(S1) to Section 6(S6). S1
(supplier section) supplies raw materials to S2, S3, S4, and S5. Then, S2, S3, S4 and S5
supply their processed parts to S3/S4, S4, S5 and S6, respectively as shown in the
process model of CHM factory (Figure 2). Material handling of these parts is
Figure 2. Process model of the CHM factory
performed by either forklift or trolley. Production capacity for each product is 150 units/day in nine hours of operation.
Following the process model, layouts and model logic of CHM factory were then
created. Figure 3 shows the layout of CHM factory using S4 as an example. The
simulation model for the CHM factory was designed based on certain assumptions: all
workstations operate at full capacity; all workstations have triangular distribution
process time; product arrival time is based on a deterministic arrival pattern; and all
results are reported at a confidence interval level of 95%.






Verification of the model was proved by tracing all the products from the point of their creation (S1: Incoming warehouse) to the point of their disposal from the system (S6: Outgoing warehouse), to ensure that the simulation model closely approximates the real system. Validation of the model was also proved by comparing the output of the simulation (total production time) with mathematical calculation results obtained by applying Little's Law [18]. Total production time is obtained from the WS with the longest φtot (total mean flow time) in the production line. φtot is calculated by considering the buffer, batch size, process time and route time for each WS:

φtot = φB + φBq + φBk + t0 + troute    (7)

where
troute: route time between workstations (in time units)
t0: process time for a workstation (in time units)
φB: mean flow time for waiting in buffer (in time units)
φBq: mean flow time for queuing on the inter-arrival of a batch (in time units)
φBk: mean flow time for wait-to-batch time (in time units)

To calculate total production time, this formula is used:

Total production time = φtot × total demand / number of batches    (8)
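Expressions (7) and (8) amount to a few lines of arithmetic (a sketch; the variable names mirror the symbols above, and the input values are placeholders rather than the CHM factory's data):

    def total_mean_flow_time(phi_b, phi_bq, phi_bk, t0, t_route):
        # Expression (7): total mean flow time through one workstation.
        return phi_b + phi_bq + phi_bk + t0 + t_route

    def total_production_time(phi_tot, total_demand, batches):
        # Expression (8): scale the bottleneck flow time by demand per batch.
        return phi_tot * total_demand / batches

    phi_tot = total_mean_flow_time(phi_b=12.0, phi_bq=3.5, phi_bk=2.0,
                                   t0=6.0, t_route=1.5)
    print(total_production_time(phi_tot, total_demand=150, batches=15), "minutes")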


Table 2. Validation of the CHM factory model

Section  Simulation result (minute)  Mathematical calculation result (minute)  Similarity (%)  Confidence interval range (95%)  Status
S2       385.59                      380.02                                    98.56           342.13–519.58                    Valid
S3       834.61                      853.60                                    97.77           639.43–1001.3                    Valid
S4       887.14                      853.60                                    96.22           572.08–989.3                     Valid
S5       118.89                      111.40                                    96.70           91.36–203.70                     Valid

Figure 3. Layout of S4 of the CHM factory
The similarity of the simulation results and the mathematical results for total production time for each section in the CHM factory model was above 96%, and each simulation result lies within the 95% confidence interval range (Table 2). Therefore the CHM factory model was considered valid.
3. Feasibility of SDSS in CHM factory simulation model

A feasibility study of SDSS was done using S4 of the CHM factory as an example. By using the layout and zoom-in/zoom-out functions and bar charts (Figures 4 and 5), a bottleneck is observed at WS1 of S4. The reason for this bottleneck situation is acquired from the KPI status function, which showed a high changeover time (51 minutes). This has caused a low total production output (100 units/day) and a high total production time (531.33 minutes), as can be seen in Figure 6.

































Figure 4. Snapshots of S4 by zoom-in function
Figure 5. Snapshots of bar charts for S4
Figure 6. Snapshots of KPI table for S4
Figure 7. Task status illustration

To react to this problem, one of the potential solutions is implementing SMED at WS1 and WS6 of S4 to reduce changeover time. Following SMED implementation, the
total production output is increased by 9% while the total production time is reduced by
4%. For further improvement of S4, the functions of SDSS are observed continuously
during simulation runs. Another problem detected in S4 is prolonged idle status of
operators in WS4, WS5 and WS6. A potential solution for this problem is to implement
cellular manufacturing (CM) in S4. By implementing CM, the total production output
is increased by 1% while total production time is reduced by 1.14%. Despite the minor
improvements, the number of operators has been reduced by 33.33% (from six to four people).
This feasibility study is also used to show the RAG status function in the CHM factory simulation model, using WS1 of S4 and one of the seven wastes of manufacturing
(waiting) as an example. In this study, waiting is defined as an idle status of operator
due to starvation of parts/materials and high changeover task time in WSs. As
mentioned earlier, S4 consists of six WSs, produces two types of products (CH8 and
CH10) and has a scheduled changeover process at WS1 and WS6.

Table 3. PL for WS1 of S4 within time range t30 to t540

Time range   t30     t60     t90     t120    t150    t180    t210    t240    t270
PL           0.0207  0.0103  0.0069  0.0348  0.1279  0.1899  0.2199  0.1015  …
Time range   t300    t330    t360    t390    t420    t450    t480    t510    t540
PL           0.0681  0.0619  0.0568  0.0524  …       …       …       …       …















After conducting a series of simulation runs with different time ranges between t30 and t540, PL values were calculated as shown in Table 3. Then, Q1, Q2 and Q3 for WS1 with sample size (n = 18) were calculated using quartile calculation. The results are 0.0370, 0.0513 and 0.0750, respectively (Table 4). Based on the Q1, Q2 and Q3 values, the waste level is determined (Table 5) using condition B (please refer to Table 1). These waste levels were presented in real time in the form of RAG status.

Table 4. PL quartiles for WS1: Q1 (first quartile) = 0.0370, Q2 (second quartile) = 0.0513, Q3 (third quartile) = 0.0750
Table 5. Waste level of WS1

The customized RAG status was then incorporated into the WS1 simulation model, followed by implementation of the SMED LM tool. The PL of WS1 with and without SMED implementation was updated in real time during simulation from t30 to t540, as shown in Table 6 and Figure 8. Figure 8 shows that the RAG status remains the same from t30 to t300 because no C/O process took place in WS1 within this time range. However, the RAG status changes from amber to green at t390, when SMED was implemented, and the green colour continued until t450. This behaviour of RAG status
detected the process improvement by the SMED LM tool and proved that the process improvement was successfully achieved. This behaviour of RAG status, coupled with quantitative information on the percentage of PL improvement (Table 6), was designed to provide pro-active assistance to the LM practitioner so that decision making and selection of LM tools could be made appropriately.

Table 6. PL improvement of WS1, W-SMED and WO-SMED

Figure 8. Performance level of WS1, W-SMED and WO-SMED

4. Conclusion

This research proposed a simulation-based decision support system (SDSS) for the implementation of LM. SDSS plays a critical role to support lean practitioners in their decision making and selection of LM tools through an interactive use of process simulation. SDSS provides five functions, namely layout, zoom-in/zoom-out, task status, Key Performance Indicators (KPI) status and RAG (Red, Amber and Green) status. Using a process model of the CHM factory, the feasibility of SDSS was studied. The feasibility study showed that SDSS plays an indispensable role in enabling lean practitioners to detect problems and quantify the effectiveness of LM tools on manufacturing processes. However, the results can be further validated if they are reproduced experimentally in a real case study.

Acknowledgement

The researchers would like to thank the Malaysian Government, Universiti Teknikal Malaysia Melaka (UTeM), and University of Tokushima Japan for their financial
support and provision of facilities to carry out this study. The researchers are also
grateful to the anonymous reviewers for the comments and input to the earlier version
of this paper.
References
[1] R.B.Detty and J.C.Yingling, Quantifying benefits of conversion to lean manufacturing with discrete
event simulation: a case study, International Journal of Production Research, Vol.38, No.2, (2000), pp.
429-445.
[2] P.Achanga, E.Shehab, R.Roy and G.Nelder, Critical success factors for lean implementation within
SMEs, Journal of Manufacturing Technology Management, Vol. 17, No. 4, (2006), pp. 460–471.
[3] Y.C. Wong, K.Y. Wong and A. Ali, A Study on Lean Manufacturing Implementation in the Malaysian
Electrical and Electronics Industry, European Journal of Scientific Research, Euro Journals Publishing,
Inc. Vol. 38 No.4, pp. 521-535.
[4] J. Heilala, J. Montonen, P. Järvinen, and S. Kivikunnas, Decision Support Using Simulation for
Customer-Driven Manufacturing System Design and Operations Planning, Book Chapter in Decision
Support Systems, Advances in, edited by: Ger Devlin. ISBN: 978-953-307-069-8, (2010).
[5] S.Ramakrishnan, C.M. Drayer, P.F. Tsai and K.Srihari, Using Simulation with design for six sigma in a
server manufacturing environment, Winter Simulation Conference, (2008), pp. 1904-1912.
[6] F. Sevillano, M. Serna, M. Beltran and A. Guzman, A simulation framework to help in lean manufacturing
initiatives, Proceedings 25th European Conference on Modelling and Simulation , Simulation in
Industry, Business and Services (IBS 30), 7-10 June 2011, (2011),Krakow, Poland.
[7] T.C. Papadopoulou and A.Mousavi, Scheduling of non-repetitive lean manufacturing systems under
uncertainty using intelligent agent simulation, The 6th International Conference on Manufacturing
Research (ICMR08), Brunel University,UK,9-11 September 2008,(2008), pp.207-215.
[8] G. Anand, and K.Rambabu, (2011), Design of lean manufacturing systems using value stream mapping
with simulation: A case study, Journal of Manufacturing Technology Management, Vol. 22, No. 4, pp.
444-473.
[9] F. A., Abdulmalek, and J. Rajgopal (2007), Analyzing the benefits of lean manufacturing and value
stream mapping via simulation: a process sector case study, International Journal of production
economics, Vol.107 No.1, pp. 223-236.
[10] F. K., Schramm, G. L.,Silveira, H., Paez, H.Mesa, , C. T. Formoso, and D. Echeverry (2007), Using
Discrete-Event Simulation to Support Decision-Makers in Production System Design and Operations,
In Proceedings of the 15th Annual Conference of the International Group for Lean Construction , pp.
131-141.
[11] J. W. Fowler, and O. Rose (2004), Grand challenges in modeling and simulation of complex
manufacturing system, Simulation, Vol. 80 No.9, pp. 469-476.
[12] P.C.Janca, and D.Gilbert, Practical design of intelligent agent systems, in: N.R.Jennings and M.J.
Wooldridge (eds.), Agent Technology: Foundations, Applications, and Markets, Springer-Verlag,
Berlin, Germany, (1998), pp. 73-89.
[13] R. Gourdeau, (1997), Object-oriented programming for robotic manipulator simulation, IEEE Robot.
Automation Magazine Sept 1997, Vol.4, No.3, pp. 21–29.
[14] A.A.West, S.Rahimifard,R.Harrison and D.J.Williams,(2000), The development of a visual interactive
simulation of packaging flow lines, International Journal of Production Research, Vol .38,No.18,
(2000), pp.4717-4741.
[15] P. Klingstam and P. Gullander, (1999), Overview of simulation tools for computer-aided production
engineering, Computers in Industry, Vol.38, Issue 2, pp. 173–186.
[16] T. S., Mujber, T.Szecsi, and M. S. J. Hashmi, (2005), Design and development of a decision support
system for manufacturing systems, In Intelligent Production Machines and Systems-First I* PROMS
Virtual Conference: Proceedings and CD-ROM set , pp.91-96, Access Online via Elsevier.
[17] W.D Kelton, R.P. Sadowski, and N.B. Swets, (2010) Simulation with Arena, 5th ed., McGraw-Hill,
International Edition, New York, NY.
[18] J.E. Rooda and J. Vervoort, Analysis of Manufacturing Systems using χ 1.0, Technische Universiteit
Eindhoven, Department of Mechanical Engineering Systems Engineering Group, The Netherlands,
(2007).
Focussed Web Based Collaboration for
Knowledge Management Support
Marc Oellrich a,1 and Frank Mantwill a

a Helmut-Schmidt-University, Germany

Abstract. Knowledge Management is one of the key abilities of an enterprise to face a future in which competition in the globalised world gets harder. Web based systems can support this challenge and help to succeed. This concept presents a web based system that allows working directly inside the browser. It is project based and includes several of the product development methodology tools, which are combined and extended by already established internet technologies. As will be shown, these features facilitate focussed information streams and collaboration to support Knowledge Management and, with it, product development quality. It also sketches the possibilities of an in-house Open Innovation system in which all employees can participate.
Keywords. Web based, Product Development Methodology, Collaborative
Engineering, Knowledge Management, Decision Support, Open Innovation
Introduction
Many enterprises today have to compete with rivals from all over the world. Getting their own development processes lean seems difficult and hard, but it is necessary. Keeping the know-how, or at least most of it, inside the company, for instance to counter the consequences of demographic change, is a main topic in most enterprises and a key ability to ensure survival. Web based systems can support several parts of the development process and can help to win these challenges.
The Product Development Methodology, developed in the last century, contains
several tools like the Requirements List, Morphological Boxes, Development
Catalogues and decision supporting elements like the Pairwise Comparison and the
Value Analysis (s. [1], [2]). Usual PLM systems support many steps during the product development process, but the conventional methodological part near the beginning of the process, where a problem is penetrated and different solution variants are developed before rating them, is not included.
Classic web based approaches to support the development methods were often file based, so that users could share the same data online by up- and downloading files. Often this software gave users the possibility to communicate with a messaging system and perhaps to follow a workflow, but users were not able to work inside the systems in a really comfortable way. Eversheim et al. showed at the beginning of the millennium, in different projects, that development costs could be reduced by 20% and

1
Corresponding Author: Marc Oellrich, Helmut-Schmidt-University, Machine Parts and Computer
Aided Development, Holstenhofweg 85, 22043 Hamburg, Germany: E-mail: marc.oellrich@hsu-hh.de
20th ISPE International Conference on Concurrent Engineering
C. Bil et al. (Eds.)
2013 The Authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-302-5-284
284
the development time by 25% (projects in plant engineering, production of domestic appliances and railway vehicle manufacturing) [3] with such a system, but at that time web technologies were only used statically. Even though those results show that a development methodology approach can be helpful, it is still only sporadically applied [4].
Today's established web technologies can raise the efficiency of the Product Development Process by increasing the application of the Development Methodology as well as the number of participating users, while minimizing the information exchange intervals. Especially for young employees the use is familiar, and they could assist older employees in overcoming possible inhibitions [5].
The following sections sketch the system, starting with the components of the system in section 1, the advantages of their combination in section 2 and data protection in section 3, and ending with the conclusion in section 4. The presented system is currently being developed and is extended over time by the components shown.
1. Components of the Web-Based System
It might be possible to reuse existing components like online project management systems (e.g. www.xelos.net) or to define and implement interfaces to existing PLM systems, but to generate the wanted benefit it is easier to conceive of a completely integrated system. Figure 1 gives an overview of the system components, which are explained in the next two sub sections. The development methods in Figure 1 have been selected by their allocation to the steps of the development process (s. [6]), and the Web-Technologies because of their expected benefit.

Figure 1. System Components.
Because the current state of the system already contains 64 database tables, the following Figure 2 gives only a very abstract view of the system, but it should help to understand its structure and functioning.

Figure 2. Abstract view at the structure of the system.
1.1. General Components
The total system is based on a web-based project management system which leads through the workflow. It is implemented in a master copy way, so that every project is a derived version. The master copies can be configured and define different project phases and milestones, and they can also map checklists and specific useful components to each phase. As an example one could think of the four phases of the VDI 2221 [6] for new development projects (s. Figure 3).
Figure 3. View of Toolbox, Project-Phaseplan and an exemplary Checklist.
After a user login, the system is able to generate dynamic user and role specific
views. With an Activity Stream the user gets specific information about the projects he
is assigned to. By entering such a project he can get more detailed information
regarding this project. He is then also able to take a look at project elements which
have been added before.

There are some additional implemented elements and technologies that help to
increase the opportunities:
A Wiki to have a central access point for information and knowledge which is
also linked to different methodology components (s. 1.2);
A Blog system to communicate but also to collect information like comments
or ideas of improvement and assign those blogs to information regarding
elements;
A Social Tagging system that enables users to improve the information classification by adding tags on their own;
An Evaluation system so users can evaluate ideas, posts, etc., which usually increases their interest in and acceptance of a system;
A Document Management System to enable file exchange by checking files in
and out;
A Versioning System to keep all outdated information and files restorable.
The listed and described elements also enable users to use the system for standard project management alone, so that it is not necessary to use the methodology components of the next section, which might overcome possible inhibitions.
1.2. Design Methodology Components
The following listed methodology tools are part of standard literature for product
development methodology like [1] and [2] and have been chosen by their assignation
over the development timeline as shown in the VDI 2221 [6] (s. Table 1), considering
the estimated less implementation effort and the expected benefit.

Table 1. Mapping of Tools to Project Steps (excerpt of [6]).

Step:                  1 Clarify the task  2 Discover functions and their structure  3 Find solution principles and their structure
Requirements List      x                   x                                         x
Checklist              x*                  x*                                        x
Brainwriting                               x                                         x
Morphological Box                          x                                         x
Development Catalogue                      x                                         x
Pairwise Comparison                                                                  x
Benefit Analysis                                                                     x
FMEA                   x*                  x*                                        x
* own assumption
1.2.1. Requirements Lists
Requirements Lists are basic elements to organise product requirements. To improve the lists with every new development project and to follow a continuous improvement process (CIP), they are based, like the project types, on master copies. Only those master copies can be changed, while copies are only derived and updated if necessary.
Furthermore it will be possible to add a comment blog to each entry, keeping ideas and comments where they belong.
1.2.2. Checklists
With Checklists, recurring tasks and information to consider can easily be organised in list form [7].
Like the Requirements Lists, the Checklists are implemented in a master copy way with derived versions. The possibility to add comments is also given.
1.2.3. Brainwriting
To support creativity techniques (e.g. for the search of functions and solutions) the Brainwriting method will be implemented. The implementation is planned in two different ways: a synchronous one, which could be seen as a chat, where every post will be immediately visible to the participating members of the creativity session, and an asynchronous one, where posts will be collected in a definable time window. To overcome possible inhibitions all posts will be shown anonymously.
1.2.4. Morphological Boxes
With Morphological Boxes, already found functions/tasks and part-solutions can be visualised in a tabular and clear manner. The build-up process of these boxes can nowadays take place directly inside the browser, so that additional software is no longer necessary.

Figure 4. Principle of connections between Morphological Boxes and Wikis.

Previous computer supported approaches have often been file based, had less intuitive operation and had no or only few interfaces to other project tasks (e.g. [8]).
M. Oellrich and F. Mantwill / Focussed Web Based Collaboration 288
The explicit advantage of the web-based solution is the opportunity to access the Morphological Boxes from everywhere and at any time, always in the latest state, and to be able to extend them.
Another advantage can be generated if those boxes are linked with related Development Catalogues and Wiki-Pages (s. Figure 4). Those catalogues and pages can also be generated and linked in a partly automated way. Each row of a Morphological Box can be seen as a first state of a Development Catalogue, and a Wiki-Page could be provided in a first state with information like title, image, social tags (based on function and solution name) and links to the Morphological Box and maybe a Development Catalogue.
The central stored data in a database facilitates an accelerated build-up process of
further Morphological Boxes. Helpful would be in this case auto-completion
functionality so similar functions could be proposed and after the choice of the user
assigned part-solutions could be offered for selection. This would enable a build-up
time of only a few minutes to get an at least partly filled Morphological Box.
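Such an auto-completion could be sketched as follows (Python; the example store and the function names are invented, and a real implementation would query the central database instead of an in-memory dictionary):

```python
import difflib

# Hypothetical central store: function name -> previously assigned part-solutions.
solution_store = {
    "transfer torque":  ["gear pair", "belt drive", "chain drive"],
    "transform energy": ["electric motor", "combustion engine"],
    "store energy":     ["battery", "flywheel", "spring"],
}

def suggest_functions(fragment: str, limit: int = 3):
    """Propose similar, already known functions while the user types."""
    return difflib.get_close_matches(fragment, list(solution_store), n=limit, cutoff=0.6)

def proposed_solutions(function_name: str):
    """After the user's choice, offer the assigned part-solutions for selection."""
    return solution_store.get(function_name, [])

print(suggest_functions("transfer torq"))    # ['transfer torque']
print(proposed_solutions("transfer torque")) # ['gear pair', 'belt drive', 'chain drive']
```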
Morphological Boxes are the basis for developing solution variants, which should
afterwards be compared in order to take the best decision (see also 1.2.6 and 1.2.7).

1.2.5. Development Catalogues
In addition to Morphological Boxes, Development Catalogues are helpful for
organising functions/tasks and possible solutions. The difference between the two is
that the boxes give an overview of a complete machine, or at least of one of its
components, and support the development of solution variants, whereas the catalogues
give an overview of the possible solutions for a single function or task while
providing the user with additional detailed information.
The build-up can take place separately and manually, but a catalogue can also be
derived from a Morphological Box row; this derivation ability in particular should
reduce the build-up time. The possibility to extend each catalogue independently of
time and location is also very helpful and might increase acceptance, because
classic catalogues had to be largely complete from the beginning to avoid otherwise
necessary updates and thus version control.
1.2.6. Pairwise Comparison
Pairwise Comparisons can be used to generate weights and to differentiate better
between criteria. The classic manual approach takes a lot of time, which is one
reason why it is rarely applied. The optimised version, the Improved Pairwise
Comparison shown in [9], is applied computer-aided and needs dramatically less time
while providing the same results, which is why it will be implemented in this form.
By connecting the Pairwise Comparison directly with other components, the workload
can be reduced further; for example, the wishes of the Requirements List could be
transferred directly, so that the criteria need not be entered again.
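For illustration, the sketch below shows only the classic pairwise comparison; the Improved Pairwise Comparison of [9] is not reproduced here, and the criteria and preference order are invented:

```python
def pairwise_weights(criteria, prefer):
    """Classic pairwise comparison: every criterion is compared with every
    other one, and the normalised scores serve as weights. `prefer(a, b)`
    returns the preferred criterion; as suggested above, the criteria could
    be taken over directly from the wishes of the Requirements List."""
    scores = {c: 0 for c in criteria}
    for i, a in enumerate(criteria):
        for b in criteria[i + 1:]:
            scores[prefer(a, b)] += 1
    total = sum(scores.values()) or 1
    return {c: s / total for c, s in scores.items()}

order = ["low cost", "low maintenance", "easy handling"]  # assumed ranking
weights = pairwise_weights(order,
                           lambda a, b: a if order.index(a) < order.index(b) else b)
print({c: round(w, 2) for c, w in weights.items()})
# {'low cost': 0.67, 'low maintenance': 0.33, 'easy handling': 0.0}
```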
1.2.7. Benefit Analysis
For decision support, the Benefit Analysis has been chosen as a tabular tool. It
allows quantified and more objective decisions, because the different decision
options are evaluated on a defined scale and the column sums then yield the benefit
values.
As described before, the evaluation can be accelerated if the weighted criteria
and the decision options are imported from another component such as the Pairwise
Comparison.
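The underlying calculation can be illustrated as follows (a minimal sketch; the weights and scores are invented, and in the described system they would be imported from the Pairwise Comparison component):

```python
# Decision options evaluated per criterion on a defined scale (here 1-10);
# the weighted column sums yield the benefit values.
weights = {"low cost": 0.5, "low maintenance": 0.3, "easy handling": 0.2}
options = {
    "variant A": {"low cost": 8, "low maintenance": 5, "easy handling": 6},
    "variant B": {"low cost": 6, "low maintenance": 9, "easy handling": 7},
}
benefit = {
    name: round(sum(weights[c] * score for c, score in scores.items()), 2)
    for name, scores in options.items()
}
print(benefit)                        # {'variant A': 6.7, 'variant B': 7.1}
print(max(benefit, key=benefit.get))  # variant B
```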

1.2.8. Failure Mode and Effect Analysis (FMEA)
The CE declaration of conformity is required for every product sold by an enterprise
in the European Union and guarantees conformity with all applicable regulations and
laws; the enterprise is obliged to put the CE mark on all such products. To comply
with this regulation it is necessary to check possible failures in detail, and an
FMEA already fulfils most of those requirements.
Comments and also ideas for improvement can be added to each entry of the FMEA
component.
2. Advantages of the Component Combination
Several advantages can be realised with the described system.
By using the Web 2.0 principle, where content can not only be consumed but also
added asynchronously by everyone at any time (see [1] and [10]), the stored
conceptual know-how, ideas, requirements and (possible) failures become easily
accessible and findable and can always be extended. This does not take place only in
the Wiki: content can also be added in the described components such as the
Morphological Boxes, Development Catalogues, Requirements Lists and FMEAs (see 1.2).
Accessibility and findability are further supported by the social tagging component
and the partly automated content creation described above. These freely extendable
solution collections can furthermore accelerate and improve future development
projects. Although the system starts without content, the content grows with each
project and improves the system. To accelerate this process at the start, employees
could fill the system with initial content, e.g. apprentices or trainees, who would
thereby also gain a better understanding of the development history.
All users can communicate via the blog system, and they stay informed through the
Activity Stream, which aggregates the news from all components.
The possibility to add comments to (nearly) every discrete piece of content
focusses the information where it belongs. This accelerates access to information
during the project and might help to improve the development process afterwards
through better analysis abilities. The easy access and focussed information should
also ensure that the most relevant facts are always considered, which could decrease
the number of iterations during the project and thereby shorten the development time.
By implementing creativity-supporting elements like Brainwriting, the idea of Open
Innovation can be realised. Open Innovation is normally public, so that every
interested person can contribute ideas and advice (e.g. www.quirky.com). Mechanical
engineering companies, however, are usually very conservative and suspicious
regarding their confidential knowledge. An in-house Open Innovation variant, in
which only employees can join the process, should overcome these inhibitions. The
ability to reach many more people, e.g. all 100 employees instead of only the six
project members, enables an enterprise to get more and possibly better ideas faster
than before [11].
3. Data Protection
Data protection and confidentiality are very important, especially in technology-
and research-intensive branches: no enterprise can accept knowledge leaking to its
competitors while development periods get shorter. To counter this difficulty, a
rights and role management should be implemented which, on the one hand, enables
employees to participate in the system and allows collaboration but, on the other
hand, secures confidential information (part-solutions, functions, wiki-pages, full
morphological boxes, etc.). As an example, one could think of contract workers and
integrated suppliers who should take part in creativity sessions but should not be
able to access all solutions, morphological boxes, etc. A rights and role management
of this kind should also avoid the difficulties described in [13], where the usage
of Wikis in enterprises was analysed: the author found that departments working with
confidential information do not want to put it into an enterprise-wide accessible
Wiki, and the usual workarounds were local department Wikis or abandoning the Wiki
altogether.
The possibility to download these contents is already dramatically reduced by
storing the information web-based in a database instead of in downloadable files.
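A deliberately simplified sketch of such an access check might look as follows (Python; the roles and artefact types are assumptions chosen to match the example above):

```python
# Contract workers and integrated suppliers may join creativity sessions,
# but must not see full Morphological Boxes or confidential part-solutions.
PERMISSIONS = {
    "employee":        {"wiki_page", "morphological_box", "creativity_session"},
    "contract_worker": {"creativity_session"},
    "supplier":        {"creativity_session"},
}

def may_access(role: str, artifact_type: str) -> bool:
    return artifact_type in PERMISSIONS.get(role, set())

assert may_access("contract_worker", "creativity_session")
assert not may_access("contract_worker", "morphological_box")
```

A real implementation would additionally attach rights to individual artefacts, since single wiki-pages or part-solutions can be confidential on their own.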
4. Conclusion
The presented system is a web-based project management system extended by product
development methods (e.g. Requirements Lists, Morphological Boxes), selected
according to their application possibilities along the development process and
combined with established web technologies (e.g. Wikis, Blogs).
Improvements are expected in the efficiency of development projects, in knowledge
management and in innovative strength. The increased efficiency should be realised
through faster information flows via Activity Streams and linked components (e.g. a
Morphological Box with a Wiki-Page), which also improves knowledge management. In
particular, the central storage of functions and part-solutions should accelerate
the build-up of new Morphological Boxes and with it the development projects. The
focussed information provided by comment blogs, attachable to nearly every discrete
piece of content, will also accelerate the search for information and concentrate
information where it belongs. The linking between the components' contents and the
aggregated overviews of the blog contents support knowledge management further. The
implementation of the Open Innovation idea through creativity sessions, which e.g.
all employees of an enterprise can join, should furthermore lead to more ideas in
less time and increase the innovative strength of the applying company.
The focussed information might also affect other areas: the amount of
project-relevant emails will probably decrease, which would also change how
communication works. Meetings would still be necessary, but with this information
system their number might be reduced and their efficiency improved.
References
[1] K. Ehrlenspiel, Integrierte Produktentwicklung, Hanser-Verlag, Munich, 2003.
[2] G. Pahl, W. Beitz, J. Feldhusen, K. Grote, Konstruktionslehre: Grundlagen erfolgreicher Produktentwicklung; Methoden und Anwendung, Springer-Verlag, Berlin, 2005.
[3] W. Eversheim, J. Schröder, C. Voigtländer, Intranetbasiertes Entwicklungshandbuch - der schnelle Weg zu Neuprodukten, Konstruktion 5-2001, Springer-VDI-Verlag, 2001.
[4] M. Oellrich, Webbasierte Unterstützungsmöglichkeiten des Konstruktionsprozesses, Master thesis, TU Berlin, Berlin, 2011.
[5] M. Oellrich, F. Mantwill, Concept for a web based Support of the Product Development Process, BTW 2013, Gesellschaft für Informatik, Magdeburg, 2013.
[6] Verein Deutscher Ingenieure, VDI 2221: Methodik zum Entwickeln und Konstruieren technischer Systeme und Produkte, VDI-Richtlinien, Berlin, 1993.
[7] P. Conrad, H. Schiemann, P.G. Vömel, Erfolg durch methodisches Konstruieren, Lexika-Verlag, Grafenau, 1978.
[8] TRIGON Software, Prosecco - Programm zur strukturierten Erfassung von Konstruktionsdaten, Darmstadt, 1997.
[9] M. Oellrich, F. Mantwill, Improved Pairwise Comparison, WASET, Amsterdam, 2012.
[10] M. Koch, A. Richter, Enterprise 2.0, Oldenbourg-Verlag, 2005.
[11] O. Gassmann, Crowdsourcing - Innovationsmanagement mit Schwarmintelligenz, Hanser-Verlag, München, 2010.
[12] A. Back, N. Gronau, K. Tochtermann, Web 2.0 in der Unternehmenspraxis, Oldenbourg-Verlag, 2009.
[13] I. Hackermeier, Wikis im Wissensmanagement: Determinanten der Akzeptanz eines Web 2.0 Projektes innerhalb eines internationalen Zulieferers der Automobilindustrie, Dissertation, Munich, 2012.
QFD Application on Developing R&D
Project Proposal for the Brazilian
Electricity Sector: A Case Study - System
Assets Monitoring and Control for Power
Concessionaires
João Adalberto PEREIRA a,b,1, Osíris CANCIGLIERI JÚNIOR b,
Juliana Pinheiro de LIMA a and Samuel Bloch da SILVA c
a Companhia Paranaense de Energia - COPEL
b Pontifícia Universidade Católica do Paraná - PUCPR
c Flextronics Instituto de Tecnologia - FIT
Abstract. This work shows how to conduct the transition from a technological need
of a power utility to the adequate planning of an innovative system R&D project
proposal. In the highly competitive world of electricity commercialization, and in
accordance with ANEEL (Brazilian Electricity Regulatory Agency) principles, new R&D
project proposals need to present innovative technological solutions that fill
technical gaps in operating systems. Due to the multidisciplinary nature and
technological complexity of these kinds of projects, project management by
Concurrent Engineering, specific models for product development and their
associated tools are considered fully applicable to this proposal. In this context,
established models and tools created for the Product Development Process were
recently adapted by Pereira & Canciglieri Júnior (CE2012) for projects of this
kind. Among the tools, the QFD (Quality Function Deployment) stands out, having
been successfully used in the initial phases of new projects to specify the project
requirements and the product characteristics from the needs of the customers. In
this sense, this work presents a case study in which QFD was applied to establish
the requirements for a new R&D project proposal for the Brazilian electrical
sector. The work illustrates the Pre-Development phase of a Product Development
Model, in which the characteristics of an R&D project are defined. Its main
objective is an Experimental Development project for an innovative technological
system to control assets in the maintenance processes of energy power utilities.
The unit of analysis was the process of elaborating a new R&D project proposal for
Brazilian power utilities, in which QFD was applied to reach the concepts of the
R&D project proposal. These concepts are: the expertise of the multidisciplinary
team; the project development steps and their simultaneity; the allocation of the
gates; and the technologies that may be used to solve the problem. This phase was
conducted by the development project team coordinator, closely watched by the
project manager, who keeps the energy company's strategic plan for the project in
mind.
Keywords. Needs of customer, quality function deployment, product development
process

1 Corresponding Author: João Adalberto Pereira, Companhia Paranaense de Energia, Rua Emiliano Perneta, 756, 5º andar, 80420-080, Curitiba, PR, Brasil.
Introduction
Being a reference for the national electrical sector is a strategic goal for the
power companies. That is why it is crucial to constantly seek the mastery and use of
innovative technologies in their processes, adding new functions and aiming to
increase the quality of the services rendered to the community.
In this scenario, COPEL, seeking to solve an internal technical problem, proposed,
through an R&D project, the development of an automated system for the
identification and control of electrical network assets at their points of use,
fully connected to its database.
It is known, however, that the creation of new technologies in the context of
power companies depends, above all, on their strategic decisions, the guidelines of
the regulatory agency [1] and power market trends. And when one decides to execute
an R&D project, it is necessary to plan it according to the actual needs of the
power company and the expectations of the final product users.
As in industry, QFD was applied to assist in the elaboration of the R&D project
proposal precisely to meet this premise [2].
This work presents the Informational phase [3] of the elaboration of a new R&D
project proposal for the Brazilian electric sector, for which QFD spreadsheets were
applied in a Concurrent Engineering approach. It is also a case study of the
application of a framework recently proposed by the authors at CE2012 [2].
In a synthetic way, the proposal for the R&D project is presented at the end of
this article according to the criteria of the ANEEL R&D Program [1] and based on
the information provided by the QFD application.
The team responsible for this phase was formed by employees of the power company
from maintenance, logistics, Information Technology (IT) and R&D management (all of
them clients) and also by researchers from the institution that executes the
project, which has great experience in R&D and industrial processes.
1. Considerations about ANEEL Criteria for R&D Projects
In Brazil, ANEEL (Brazilian Electricity Regulatory Agency) is the government agency
that regulates the R&D program of the electrical energy sector (established by
Federal Law nº 9.991 [4]) and provides the criteria for the elaboration of R&D
project proposals [1].
Innovation is the mainspring of the ANEEL R&D Program, which stimulates, through
research, the development of innovative technologies via projects of Basic Research
(BR), Applied Research (AR) and Experimental Development (ED) and, from these, the
development of practical solutions that can be applied on a daily basis by the
energy utilities through projects such as Head Production Series (HS), Pioneer
Production Lot (PL) and Market Product Insertion (MI) (Figure 1), all of them
thoroughly commented on by Neves (2011) [5] and Pereira & Canciglieri Júnior
(CE2012) [2].


Figure 1. ANEEL Innovation Chain
2. QFD in the Product Development Process
In the Product Development Process (PDP), projects can be of three different kinds:
Improvement Projects (or derivative), Platform Projects (or next generation) and
Radical Projects (or rupture) [6]. For the last ones, which establish new products
and processes, an analogy is made here to the R&D projects of the ANEEL program,
whose goals are to promote the creation of new processes and new products or the
improvement of their characteristics, and whose related activities are of a creative
or entrepreneurial nature aimed at the investigation of new applications [1].
Focusing on the final product, its success depends directly on satisfying the
clients. In other words, the emergence of new ideas within the power company is
usually driven by the technological needs of its processes, and these ideas can
become R&D projects, which must be properly planned and managed to meet the needs
of their potential clients.
The PDP is a set of activities that allows reaching the product specifications
considering the customer needs, the technological constraints and possibilities, as
well as the competitive strategies adopted [3]. According to recent studies [2],
the Unified Model [3], originally proposed for the development of industrial
products, has proved adequate for R&D projects, not least because it was developed
to be applied to designs classified as the Radical type. Its structure combines
concepts proposed by Pahl & Beitz [7] with the concepts of Concurrent Engineering
[8, 9]. The main feature of this model is the division of the development process
into three macro-phases: Pre-Development, Development and Post-Development.
For these phases there are appropriate tools [2, 3] to support the execution of
the activities. Among them, there is one that can be used across several phases of
the PDP: the Quality Function Deployment (QFD), whose goal is to assist in the
management of the development process while keeping the focus on attending to the
customer needs [10].
In the case study, the R&D project proposal is unfolded through QFD. Its
spreadsheets are incorporated in the form of analyses which increase the knowledge
about the project variables and which guide and facilitate decision making during
its execution. In this case, QFD served as a visual, connective, prioritizing
methodology which allowed the connections between the customer desires and the R&D
project guidelines [11] to be made explicit in the shape of the Necessary
Technology Specification, the Team Expertise and the definition of the Concept for
the Product.
3. QFD in the ANEEL R&D Project Proposals
Following these considerations, the framework previously proposed by Pereira &
Canciglieri Júnior [2] was used in the informational phase of the planning
(Figure 2), resulting in the detailed proposal in the shape of three contractual
documents:
Descriptive Document: contains the project description and the justifications
that characterize it within the ANEEL R&D Program.
Cost Spreadsheet (not addressed in this work): contains the distribution of the
resources necessary for the development.
XML File (not described in this work): necessary for the submission of the
project to ANEEL.

Figure 2. Framework for ANEEL R&D project proposal [2]
4. Methodological Approaches
The research strategy used in this work was a case study with a qualitative
approach, and the unit of analysis was the elaboration process of a new R&D project
proposal for a Brazilian power company.
As a technical procedure, a literature review was carried out on QFD in the
context of the Product Development Process and on the newest technologies for
identifying and tracking parts in electrical power distribution systems.
The work then proceeded with the application of QFD planning to obtain the
preliminary concept of the system to be developed, to set up the multidisciplinary
team to run the project, to define the project steps and the simultaneity between
them, and to identify the technologies that can probably be used to solve the
problem.
5. Customer Definition
The importance of defining the customers for a new product lies in allowing their
needs to be translated into desired product requirements.
According to [12], customers can be categorized as External, Internal and
Intermediary. However, it is worth pointing out that the goal of the R&D project is
not serial production but the Experimental Development (ED) of a new technology,
still treated strategically as internal to the power company. For this reason,
external and intermediary customers have not been established; this will occur when
one decides to follow the ANEEL innovation chain with the project proposals that
come after the ED, in other words, with a focus on product development for the
market.
The internal customers, however, have been recategorized according to their
activities of origin: MANAGEMENT, FIELD activities and INTERFACE with the power
company data network, as shown in Table 1.
Table 1. Kinds of customers (all internal).
MANAGEMENT (managers, coordinators): they use all the systemic functionality for
identification, control and asset traceability.
FIELD (electricians): they perform installation, maintenance and asset inspection
activities in the field.
INTERFACE (IT team): they perform the data interface from the system with the
power company database.
6. Customer Requirements
The qualitative research, under the Concurrent Engineering approach, was performed
through meetings with the team (the customers) and through observations and
comments made during field visits.
The customers' desires, still in informal language, were organized as shown in
Table 2, where the disciplines necessary for the development of the desired
solution are also associated with them.
These pieces of information, however, are often not completely related to each
other and can even be conflicting [13], such as 'innovative solution' and 'low-cost
solution'. Thus, with the help of QFD, priorities were established, adjusting the
common terms in an objective way and compiling and reorganizing the data based on
the researchers' experience, the state of the art and previous knowledge.
This procedure, named Demanded Quality Tree, aims to hierarchize the needs in
order to ground the work necessary for the development without losing focus on the
customer perceptions.
Table 3 illustrates the needs, now reorganized in levels.
7. Competitive Evaluation
Searches in journals and in the ANEEL database revealed few projects handling asset
management for the electrical sector, and none of those found handles the subject
in the systemic way proposed here.
One initiative stands out: a project started in 2009 by the Department of Public
Utilities in Orangeburg, USA [14], which uses tracking technology for the exclusive
identification of wooden poles. In this case, although the data collection process
is not automated and the identification of other items is not contemplated, the
company registered a significant reduction in inspection time, showing that the
solution is promising if applied on a massive scale.
Table 2. Customer desires (desire - associated disciplines).
MANAGEMENT:
  Low maintenance of its components - Design, Manufacturing
  Independent audit of human - Telecommunications, RF
  Traceability - Identification Systems
  Controlling of scraps return - Logistics, Automation
  Online visibility of facilities and SO - Computation
  Enable inventory control - Software, IT Systems
  Geographically locate materials in field - Geoprocessing, Software
  Update register automatically - Computation, Automation
  Online inventory of installed materials - Computation, Automation
  Innovative solution - R&D Management
  Allow control of staff, materials, SO, etc. - Computation, Automation
  Low-cost solution - Administration, Logistics
FIELD:
  Operation in hostile environment - Electromagnetism
  Identify various materials - Embedded Systems
  Interface display location - Electrical, Geoprocessing
  Easy installation and simplified operation - Mechanics, Design
  Remote reading distance - RF, Antennas, Telecom
  Low power consumption - Energy Sources, Electronics
  Easy to carry and hold - Mechanics, Design
  Adaptable to the diversity of materials - Electrical, Materials Science
  Reader must be lightweight - Materials Science, Design
  Safe against electrical shock - Electrical, Materials Science
  Accuracy in data reading - Electronics, Electricity
INTERFACE:
  Integration with the geographic location system - Software, Computation
  Integration with existing infrastructure - Software, IT Systems
  Integration with logistics systems - Software, Computation
  Predict scalability (system expansion) - Computation, Electronics

Table 3. Demanded Quality Tree x Quality Features (secondary level - quality feature).
Economic:
  Price - Value of the project
  Standardized components - Number of standardized components
  Licensed software - Standards of the power utilities
Performance:
  Low consumption - Low-power devices
  Low maintenance - Alternative source of energy
  Remote reading - Communication system
  Identify materials - Electromagnetic compatibility
  Information control - Specialized software
  Monitoring and traceability - Geoprocessing
  Communication with central - Communication protocol
  Automatic update - Automation software
  Visibility (installation and SO) - Interface software
  Integration - Intelligent software
  Interface with GDEMAN and WebGeo - Data protocols
Endurance:
  Durability - Used materials
  Robustness - Mechanical design
Accuracy:
  Accuracy of information - Precision components
  Lossless data communication - IEEE 802.15.4 protocol
Security:
  Electrical insulation - Insulating materials
  Installation process - Safety rules
  Do not interfere with existing systems - Electromagnetic compatibility
Environment:
  Control of scraps - Data integration software
  Recyclable materials - Weight of recycled material
Standardization:
  Standards and protocols - Types of norms and standards used
Ergonomics:
  Effort to transport - Power to the transport
  Effort to installation - Power for installation
  Position of use - Mechanical design
8. R&D Project Requirements: Specifications of the Project Proposal
From the disciplines necessary for the project (Table 2), it is possible to
establish the expertise necessary for its development (Table 4). It is through this
study of competences that it can be verified whether the company that executes the
project is able to carry out the development, or whether it will be necessary to
hire new researchers or even change the company.
Table 4. Definition of competences.

Technical Area: HARDWARE
Disciplines: Normalization; Electrical Maintenance; Logistics; Identification
Systems; Mechanics; Mechatronics; Electrical; Electronics; Alternative Energy
Sources; Antennas and RF; Electromagnetism; Materials Science.
Expertise: Normalization specialist; Maintenance specialist; Logistics specialist;
Identification Systems specialist; Mechanical Engineer; Mechatronics Engineers;
Electrical Engineers; Electronic Engineers; Alternative Energy Sources specialist;
RF and Antennas specialists; Electromagnetism specialist; Materials specialist.

Technical Area: SOFTWARE
Disciplines: Geoprocessing; Specific software; Design; Computation; Information
systems.
Expertise: Geoprocessing engineer; Software programmer; Designer; Computation
Engineers; Software Programmer.

Technical Area: INTEGRATION
Disciplines: IT Systems; Industrial Automation; Embedded Systems;
Telecommunications; Automation; Manufacturing.
Expertise: IT Systems specialist; Industrial Automation specialists; Embedded
Systems specialists; Telecommunications Engineers; Automation specialists;
Manufacturing specialists.

From the demanded quality tree (Table 3), where each primary level can cover
several secondary levels, the customer requirements could be quantified, which
guided the team in the definition of the technical details of the project to be
executed.
With the data organized in the form of an internal questionnaire, each customer
could attribute degrees of importance (grades from 1 to 10), so that the project
requirements could be reorganized by degree of importance, according to Table 5.

Table 5. Prioritization of Demanded Quality for the R&D project proposal
Order Quality Characteristics
1 Update the logistics system (SO) and georeferencing system of COPEL
2 Traceability of assets installed in the field (search for components in the database)
3 Control of assets installed in the field (substitution of components per SO in the infrastructure)
4 Propose a model for the control of outgoing and returning material according to the SO installation
5 Present a low-cost technology for application at scale
6 Present an asset identification technology adaptable to different kinds of materials
7 The solution must be easy to implement
8 The solution must provide for the control of construction residues

At this point, the quality features (the R&D project requirements) become
measurable, and qualitative information is turned into quantitative information.
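This prioritization step can be illustrated with a short sketch (Python; the requirement labels and grades are invented for the example and are not the values collected in the case study):

```python
from statistics import mean

# Grades (1-10) given by the internal customers MANAGEMENT, FIELD and INTERFACE.
grades = {
    "update logistics and georeferencing systems":  [9, 8, 10],
    "traceability of assets installed in field":    [9, 9, 8],
    "low cost technology for application at scale": [7, 8, 6],
}
ranking = sorted(grades, key=lambda r: mean(grades[r]), reverse=True)
for order, requirement in enumerate(ranking, start=1):
    print(order, requirement, round(mean(grades[requirement]), 1))
```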
Adding the information in Table 4 regarding the needed disciplines, three
development macro-phases could be established, which are characterized as Systems
Engineering inside the concept of Concurrent Engineering [11] (Figure 3):
MACRO-PHASE 1: Design of tags and Ad Hoc reader
MACRO-PHASE 2: Design of monitoring and control software
MACRO-PHASE 3: Integration in a pilot design


Figure 3. Simultaneity between the stages of the project (Source: FIT)
9. Defining the Concept for the Final Product
From the initial QFD, the preliminary concept of the equipment to be developed
could be obtained, as illustrated in Figure 4, in which the customer desire, which
must be the main goal of the research project to be developed, is displayed
quantitatively.


Figure 4. Conceptual model (Source: adapted by FIT)
10. R&D Project Proposal
The R&D project proposal, described according to the ANEEL criteria and the QFD
application, is presented in summarized form in Table 6.
Figure 5 shows the sequence of steps defined for the project in accordance with
the simultaneity and multidisciplinarity concepts discussed and detailed in
[7, 8, 9].
Table 6. Proposal for R&D project according to the ANEEL criteria [1].
Project name: Development of a System to Identify and Control Assets in Processes
of Inspection and Maintenance of Electrical Power Utilities

1 Descriptive Overview
1.1 Preamble: duration 30 months; ED (Figure 1); product: system prototype [1].
1.2 Research Team: institutions: FIT (executor) and COPEL (proponent); team:
according to Table 4.
1.3 Motivation: needs of the utilities.
1.4 Objectives: system to identify and control assets.
1.5 Justifications: difficulty in managing inventory.
1.6 Expected Benefits: reliability of the service.
1.7 Methodology: QFD to identify the project requirements; macro-phases of Figure 3.
1.8 State of the Art: there is only one initiative, specific to wooden poles.

2 Analytic Overview
2.1 Proposal Originality: challenges: SmartGrids adaptation; automation; low costs.
Innovation: systemic solution; robustness; visualization; IEEE 802.15.4.
2.2 Applicability: context: administration of assets by the power utilities. Scope:
national and international electric utilities.
2.3 Relevance: qualification: Master degree in Polymeric Materials and
Photovoltaics. Technological capabilities: acquisition of equipment to increase the
institutions' infrastructure; patent application and software registration. Social
and environmental impacts: ISA1 (a) and ISA4 (b) [1]. Economic impacts: efficiency
increment with cost reduction.
2.4 Costs Reasonability: HR (c) = 65.7%; TS (d) = 13.8%; ME (e) = 8.7%;
CM (f) = 2.2%; OT (g) = 9.8%. An economic feasibility study estimated that the
project will pay for itself within two years of its massive deployment.

(a) ISA1 = Environmental Impacts. (b) ISA4 = Safety or Quality of Community Life
Impacts. (c) HR = Human Resources. (d) TS = Third-Party Services. (e) ME = Material
or Equipment. (f) CM = Consumption Materials. (g) OT = Others.


Figure 5. Steps sequence
11. Conclusion
In this informational project phase (Pre-Development in the Unified Model), the QFD
Requirement List was applied to establish the multidisciplinary team [15], the
dimensioning of and simultaneity between the steps, as well as the concept for the
product according to the customer desires. The QFD Relationship Matrix has not been
necessary yet; however, it will become crucial in the execution of the first steps
of the project, since it validates the customer requirements and establishes the
product specifications [2, 11]. As in industrial processes, the use of the PDP with
QFD as a tool has proved crucial in the elaboration of this R&D project proposal,
since it allowed the qualitative information obtained from the customers to be
converted into quantitative technical information for structuring the R&D project.
Acknowledgments
The authors are thankful for the financial and technical support provided by the
Companhia Paranaense de Energia (COPEL), Pontifícia Universidade Católica do
Paraná (PUCPR), Flextronics Instituto de Tecnologia (FIT) and Agência Nacional de
Energia Elétrica (ANEEL), all of them in Brazil.
References
[1] Manual do programa de pesquisa e desenvolvimento tecnológico do setor de energia elétrica, ANEEL. Available at: http://www.aneel.gov.br. Brasília-DF, Brasil, 2012.
[2] J.A. Pereira, O. Canciglieri Júnior, Multidisciplinary systems concepts applied to R&D projects promoted by Brazilian Electricity Regulatory Agency (ANEEL), 19th ISPE International Conference on Concurrent Engineering, CE2012, Trier, Germany, 2012.
[3] H. Rozenfeld, F.A. Forcellini, D.C. Amaral, et al., Gestão de desenvolvimento de produtos: Uma referência para a melhoria do processo, 1st ed., Saraiva Press, São Paulo-SP, Brasil, 2006.
[4] Lei nº 9.991 de 24 de julho de 2000, Diário Oficial da União, Brasília, DF, Brasil, 2000.
[5] N. Neves, Critérios de avaliação e seleção de projetos para o programa de P&D da ANEEL, Dissertação de mestrado, Programa de Pós-graduação em Tecnologia da Universidade Tecnológica Federal do Paraná - UTFPR, 2011.
[6] K.B. Clark, S.C. Wheelwright, Managing new product and process development: Text and cases, Free Press, New York, 1993.
[7] G. Pahl, W. Beitz, Engineering design: A systematic approach, 2nd ed., Springer Press, Darmstadt, Germany, 1988.
[8] F.N. Casarotto, J.S. Favero, J.E.E. Castro, Gerência de projetos/Engenharia simultânea, 1st ed., Atlas Press, São Paulo-SP, Brasil, 1999.
[9] J.R. Hartley, Engenharia simultânea, 1st ed., Bookman Press, São Paulo-SP, Brasil, 1997.
[10] P.A.C. Miguel, Implementação do QFD para o desenvolvimento de novos produtos, 1st ed., Atlas Press, São Paulo-SP, Brasil, 2008.
[11] Y. Akao, Quality function deployment: Integrating customer requirements into product design, Productivity Press, Portland, USA, 1990.
[12] M.L. Stedile, C.A. Costa, J.L. Kalnin, M.A. Luciano, Convertendo necessidades dos clientes em especificações de projeto: Uma aplicação no projeto de cabines para caminhões, XVIII SIMPEP - Simpósio de Engenharia de Produção, Bauru, São Paulo, Brasil, 2011.
[13] J.V. Batista, A.M. Araújo, QFD/Capture - Guia de Operação: Desdobramento da função qualidade auxiliado por software, Escola de Engenharia, Universidade do Rio Grande do Sul, Brasil, 2004.
[14] C. Swedberg, RFID tracks wooden utility poles at the factory and in the field. Available at: http://lovespss.blog.51cto.com/1907593/518965, 2011.
[15] K. Craig, M. Nagurka, Multidisciplinary engineering systems, 2nd and 3rd Year College-Wide Course, Marquette University, Milwaukee, USA, 2011.
Methodological Proposal to Determine a
Suitable Implant for a Single Dental Failure
Through CAD Geometric Modelling
Anderson Luis SZEJKA a,1, João Adalberto PEREIRA b,1, Marcelo RUDEK a,2 and
Osiris CANCIGLIERI JÚNIOR a,2
a Production Engineering Department at Pontifical Catholic University of Paraná
(PUCPR)
b Companhia Paranaense de Energia - COPEL
Abstract. The integration of different areas of knowledge to reach new
technological solutions has become a reality. This integration has been improving
the surgical process of dental implantation by applying the concepts and methods of
product engineering. In this context, this work proposes a methodology to determine
a dental implant for a single dental failure through CAD geometric modelling. The
article presents two case studies of single dental failures located between two
teeth, which validate the proposed methodology using an expert system developed in
the Matlab environment, applying Product Engineering principles within a Concurrent
Engineering setting. The results demonstrate the methodology's potential to support
dentists in determining the dental implant set most adequate for the patient.
Keywords. Concurrent Engineering, Product Engineering, Dental Implant, Expert
Systems, Product Development.
Introduction
Recent computer technology advances, in both hardware and software, have enabled
computer-aided design (CAD) to develop from analysis to modeling. This evolution
has also opened the way for the integration of engineering with other areas,
particularly medicine and dentistry within bioengineering. According to [1], CAD
systems have been extensively applied, from the design customization of prostheses
and implants to tissue engineering.
Medical image processing and concurrent engineering, allied to the development of
computer-aided design, have allowed an improvement in computer-aided diagnosis [2],
since image processing extracts information that is not obtained directly from the
images, such as density and bone geometry.

1 Ph.D. Research Student of the Graduate Program in Production Engineering and Systems (PPGEPS) at Pontifical Catholic University of Paraná (PUCPR), Rua Imaculada Conceição, 1155, Prado Velho, Curitiba, CEP 80215-901, PR, Brazil; Tel: +55 (0) 32711304; Fax: +55 (0) 32711345; Email: anderson.szejka@pucpr.br, japereira@creapr.org.br.
2 Professor in the Department of Production Engineering at Pontifical Catholic University of Paraná (PUCPR), Rua Imaculada Conceição, 1155, Prado Velho, Curitiba, CEP 80215-901, PR, Brazil; Tel: +55 (0) 32711304; Fax: +55 (0) 32711345; Email: marcelo.rudek@pucpr.br, osiris.canciglieri@pucpr.br; http://www.pucpr.br/cursos/programas/ppgeps/corpo_docente.php.
Concurrent engineering systematizes the integration of these different tools into a
single tool which supports decision making in diagnostic processes and surgical
procedures.
In dentistry, the dental implant process is a multivariable process with a large
dependence on the expertise of the dentist who will perform the procedure. Some
computer systems help in the visualization of CT images obtained from patients, but
they do not provide essential information for planning the dental implant process
and do not support the selection of the implant that best suits the patient,
causing premature failures and implant rejection. In the worst cases, the implant
insertion can sever a nerve and may result in partial or complete paralysis of the
patient's mouth. Thus, this paper presents a methodological proposal for
determining the implant that best suits a patient with a single dental failure
through geometric modeling using computer-aided design, concurrent engineering and
medical image processing.
The article presents the current state of the dental implant process and of image
recognition in computed tomography, and the influences used as inputs for the
development of the conceptual model. The main contributions of this research are:
(i) improvement of the dental implant process, with decisions based on information
extracted from images (the patient's bone arch); (ii) reduction of the surgical
time, as well as of the implant absorption time, since there is less trauma;
(iii) reduction in the risk of dental implant rejection.
1. Research Methodology
This research is of an applied nature, since it seeks to understand, explain and
produce knowledge for practical application, oriented towards the solution of
specific problems through existing theories. Regarding the approach, it is
qualitative, because it seeks a deep comprehension of a specific phenomenon through
descriptions, comparisons and exploratory interpretations, providing closer
familiarity with the problem. The scientific aim of this research is exploratory,
because it provides more knowledge of a phenomenon whose definition or problem is
not totally explicit, since new variables need to be evaluated in order to
understand the impact they have on the problem solution. The technical procedures
adopted in this work are literature review and experimental research: a literature
review because the knowledge needed to develop the methodology was built from the
review of the literature, and experimental research because it was necessary to
determine the object of study and its variables, making it possible to control the
object of study.
The main objective of the research is to propose a conceptual methodology, based
on techniques of medical and dental implant image processing, that is able to
support the decisions taken by the implant dentist surgeon in the definition of the
most appropriate single implant. For this purpose it is necessary to investigate
techniques for processing images in the DICOM file format and the protocols used in
this area; the techniques of the dental implant process; and methodologies of
computer-aided design and computer-aided diagnosis. In the experimental
development, the model was validated by implementation and tests in experimental
case studies. For the implementation, the Matlab platform from Mathworks was used,
as it provides commands that allow the development of algorithms for image
processing and analysis. Finalizing the research, the results were analysed and
future directions for the continuity of the research were identified.
2. Background
The evolution of computer systems is enabling the development of increasingly
complex algorithms that perform processing almost instantaneously and with a high
degree of accuracy. These systems are increasingly used in medicine and dentistry,
helping in diagnostic planning and disease forecasting [3].
Computed tomography is a radiographic technique that consists in the acquisition
of images in axial cuts that can be reconstructed three-dimensionally [4]. It has
enabled advances in imaging diagnostics and revolutionized the practice of
radiology, as well as the areas of medicine and dentistry, combining image
processing techniques in the development of tools that provide medical data to
assist decision-making processes [5].
The DICOM standard made the evolution of image processing algorithms possible,
since the information obtained from the hardware is the same regardless of the
manufacturer, allowing efforts to be concentrated on the development of systems to
support doctors, dentists and nurses. Furthermore, DICOM image files can be
converted into different formats, enabling visualization on computers without a
dedicated application and compacting the image file size so that it can be sent
through the internet to remote computers [6]. However, depending on the format
chosen, there may be a considerable loss of information important for the image
analysis [7].
In oral implantology, the branch of dentistry concerned with the treatment of
edentulism through rehabilitation via dental implants, an increase in the use of
this equipment can be seen, mainly in the area of 3D image reconstruction through
computed tomography, which provides the dentist with a better view of the patient's
bone structure. This has overcome some limitations in the planning of conventional
dental implant treatments, mainly in the pre-implantation stages, which used to be
based on 2D data obtained by computed tomography. Thus, in the graphical
multi-visualization environment provided by the image reconstruction, the dentist's
interactivity with the surgical planning increases, making the process increasingly
safe and reliable [3].
In implantology, prostheses are divided into fixed and cemented types. According to
[8], the advantage of using the fixed (screwed) prosthesis over the partially fixed
(screwed and cemented) prosthesis is its longevity, since it reduces the risk of
cavities, improves hygiene, reduces the risk of sensitivity and of contact with the
root of the existing teeth, improves the aesthetics of the abutments and, through
the cleaning of the bone in the edentulous space, reduces the loss of the
prosthesis tooth, besides the psychological aspect. The disadvantages are the high
cost, the long treatment time and the possibility of implant insertion failures due
to poor planning or execution. A further advantage is the non-occurrence of
reabsorption of the structures surrounding the missing dental element, i.e. there
is no absorption of the soft bone present in this region; for this reason, this
research has chosen to use the fixed prosthesis [9].
The implanted fixed prosthesis can be divided into two types: segmented and
non-segmented, as illustrated in Figure 1. The segmented prosthesis is composed of
three distinct parts: the implant, the abutment and the crown. The non-segmented
prosthesis consists of only two parts: the implant and the crown (built from a
pillar connected to the prosthesis), which facilitates the aesthetic result [10].
Figure 1. Schematic representation of the prostheses classification (Source:
adapted from [10]).
The use of computed tomography in the dental implant process has made the procedure
safer, as in other areas that already use these images for three-dimensional (3D)
modeling, for example skull reconstruction, where the bone reconstruction and the
correction of all missing parts of the bone are made virtually and the information
is then exported to a CAD file, making possible the manufacture of the part and,
after that, its insertion.
Another aspect to be considered is the use of simultaneous engineering concepts
and computer-aided diagnosis in the medical field. The concept of simultaneous
engineering was defined by the Institute for Defense Analysis (IDA) as a systematic
approach to the integrated, simultaneous conception of a product and its related
processes, including manufacture and support. This approach requires the developers
to take into consideration, from the beginning, all the elements of the product
life cycle [11]. In the medical sciences, systems that use this philosophy improve
the results, as different variables are investigated simultaneously, converging to
an ever more reliable solution and enhancing the diagnostics.
The integration of all these areas requires the system to have expertise in
searching for integration solutions, as in the problem-solving modeling attributed
to the system. Thus, the conception of these systems, called specialized systems,
can use the concept of inference mechanisms to structure decision rules, which
helps in the solution of multi-variable systems [12].
3. Methodological Proposal to Determine a Suitable Implant for a Single Dental
Failure through CAD Geometric Modeling
The dental implant process is multivariable and complex, because multiple variables
such as bone density, the geometry of the dental arch and the region of the nerves,
among others, need to be analysed simultaneously to determine the dental implant
that best suits the characteristics of the patient. Traditional dental implant
procedures rely on visual analysis of tomographic images or on limited
computational systems. According to [13], the existing systems do not provide
sufficient informational support for the correct determination of the dental
implant, causing premature failures. The inaccurate and reduced information makes
the dental implant definition difficult and imprecise, which may cause premature
failure, bone loss, implant rejection and infection, as seen in the work of [14]
and [12], compromising the treatment of partially and/or totally edentulous
patients.
The selection of the dental implant is a process of simultaneous and
interdependent analysis of aspects such as bone structure, nerve positioning, and
the geometry of the mouth and teeth; in the case of single implants placed between
teeth, it is also necessary to check the space available for inserting an implant
with physical properties that can support the masticatory load of the tooth.
Figure 2 presents the conceptual methodological approach for determining the single
dental implant, in which the mark ('?') indicates the investigation necessary to
build an informational structure that supports the process of determining the
dental implant.
This methodological proposal is divided into the design system oriented to the
process of the single dental implant (DOSDI) and the product model. The product
model provides informational support to the DOSDI, which uses the inference
mechanism concept to determine the dental implants most adequate for the patient.
These two elements, product model and DOSDI, form macro areas that contain
representations of the various stages of the dental implant and their
interdependent functions, holding all information relating to products, processes
and procedures connected with dental implant protocols.
Figure 2. Methodological Proposal to Determine a Suitable Implant for a Single Dental Failure.
Product Model: in this macro area, the requirements and specifications of all
information necessary to support the functions that compose the DOSDI are defined.
Each representation confines the information related to the product and to the
procedures or techniques of the dental implant; for example, the DICOM
representation comprises the tomographic files and the patient's information and
can be accessed by the DOSDI function application according to necessity.
Design Oriented for Single Dental Implant: this macro area is the phase named the
application of the DOSDI. In the DOSDI, the inference mechanisms between
representations are defined for information conversion, translation and sharing.
This research explores the implant determination phase and the representations that
support this application.
3.1. Product Model
The product model contains the informational requirements necessary to support the
decision-making processes of the DOSDI function, such as the tomographic images
stored in the DICOM representation and the information on diameter, length and
density applied to the dental implant stored in the dental implant representation.
DICOM Representation: this representation contains the tomographic images
acquired from patients and stored in the DICOM standard. It holds images of axial
cuts, transverse cuts obtained by processing the axial cuts, and the patient's
control images, providing informational input to the other representations. By
processing these images it is possible to extract sufficient characteristics and
information for the dental implant definition (diameter, length and bone density),
which will be used for the selection of the implant that best suits the patient, as
illustrated in Figure 3.
Figure 3. DICOM Representation Structure.
Dental Implant Representation: this representation presents the information
concerning the dental implant models (type, diameter, length, among others) which
form the database. This information was obtained from a national dental implant
manufacturer [15] whose implants fit most dental failures. It is worth noting that
other manufacturers can be added to this database, provided the same information
pattern is respected.
3.2. Design for Single Dental Implant
The concept of inference mechanisms was used for the development of the DOSDI.
Inference mechanisms are elements of specialized systems capable of searching for
the rules necessary to evaluate and logically order the heuristic process of
inference. The function Dental Implant Determination (Function 01, Figure 4)
comprises the inference mechanisms that translate, convert and share the
information contained in the DICOM Representation (control information; axial cut;
transverse cut) in order to mathematically determine the parameters of diameter,
length and bone density that will be used for the selection of the group of
implants that best meet the desired requirements and are available in the Dental
Implant Representation.
Figure 4. Dental Implant Determination Structure.
The methodology uses the information conversion, translation and sharing mechanisms
contained in a representation to support the decision-making function of
determining the dental implant. The DOSDI was structured into six inference
mechanisms: definition of the region of interest and symmetry axis (Detail A,
Figure 2); definition of the dental implant diameter (Detail B, Figure 2);
definition of the transverse cut; definition of the dental implant length;
definition of the bone density; and selection of the dental implant. This article
deals with the mechanism for the definition of the region of interest and the
geometric modeling of the symmetry axis, and with the mechanism for defining the
implant diameter, since these two mechanisms are responsible for the geometrical
definition of the implant.
4. Inference mechanism of the region of interest and symmetry axis
This mechanism is designed for selecting the region of interest and for the
geometric modeling of the symmetry axis (Figure 5.1) where the insertion of the
dental implant occurs, allowing the conversion of axial cuts into a transverse cut
of this region. The definition of the region of interest is made by the oral-facial
dental surgeon through observation and analysis, identifying the image that
presents in detail the gap between the two teeth. From this image, the system
intervenes and performs the geometric modeling of the dental arch.
(1) Region of interest selection. (2) Detection of the bone geometry.
Figure 5. The structure of the function of the determination of the dental implant.
From the region of interest, only the bone information is extracted by processing
these images, using the Hounsfield scale as the segregation parameter. As a result
of this process, only the information on the bone geometry and on the teeth
surrounding the failure is obtained (Figure 5.2), allowing a geometric analysis of
the dental arch. In this analysis, the geometric center is identified through two
reference lines based on the inner edges of the teeth neighbouring the failure; the
intersection of these reference lines identifies the geometric center, as shown in
Figure 6.1. Using this geometric center as a reference, a symmetry line to the
neighbouring teeth is generated, allowing the extraction of the insertion center
and the implant diameter (Figure 6.2).
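The bone segregation step can be illustrated with a minimal NumPy sketch; the 300 HU threshold is a common rule of thumb for bone and is an assumption here, since the paper does not state the exact value used:

```python
import numpy as np

def bone_mask(ct_slice_hu: np.ndarray, lower_hu: float = 300.0) -> np.ndarray:
    """Keep only the pixels whose Hounsfield value indicates bone."""
    return ct_slice_hu >= lower_hu

# Synthetic 2D 'axial cut': soft tissue (~40 HU) around a bone-like region (~700 HU).
ct = np.full((64, 64), 40.0)
ct[20:44, 20:44] = 700.0
print(bone_mask(ct).sum(), "bone pixels")  # 576 bone pixels
```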
(1) Symmetry axis construction. (2) Identification of the insertion point.
Figure 6. Geometrical analysis of the dental arch.
5. Discussion of results
The development of the methodology using experimental cases allowed the assessment of a
partially edentulous case with a single failure in the canine region (Figure 7.1).
Interacting with the dentist, the system outlined the axis of symmetry of the dental
failure, creating an accurate axis based only on images, with an uncertainty of 0.25 mm,
which can be considered insignificant from the implantology standpoint. Beyond this
point, the system defined the implant diameter (Figure 7.2) based on the bone thickness
obtained by geometric modeling, respecting the minimum area for osseointegration, which
is 1 mm around the implant [16,17].
To determine the implant diameter, the system used the shortest distance between the
outer and inner edges of the bone, as it is smaller than the distance between the two
teeth, obtaining as a result an implant 3.85 mm in diameter (a worked sketch of this
rule follows Figure 7).
(1) Dental failure between two teeth. (2) Implant models obtained for this dental
failure, from the methodology concept.
Figure 7. Experimental analysis of the proposed methodology.
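A worked sketch of the diameter rule and the subsequent catalogue filtering, under the
assumption that the implant diameter equals the shortest bone distance minus the 1 mm
osseointegration margin on each side [16,17]; the 5.85 mm input and the catalogue
entries are invented placeholders, not actual manufacturer data.

    # Diameter rule: shortest bone distance minus 1 mm of bone on each side
    def implant_diameter(min_bone_distance_mm: float, margin_mm: float = 1.0) -> float:
        return min_bone_distance_mm - 2.0 * margin_mm

    # e.g. 5.85 mm of available bone would yield the 3.85 mm implant diameter
    # reported in the case study (the 5.85 mm figure is an assumed input)
    d = round(implant_diameter(5.85), 2)

    # Hypothetical stand-in for the query to the Dental Implants Representation
    catalogue = [
        {"model": "X-3.5", "diameter": 3.5, "length": 10.0},
        {"model": "X-3.85", "diameter": 3.85, "length": 11.5},
        {"model": "X-4.3", "diameter": 4.3, "length": 13.0},
    ]
    candidates = [m for m in catalogue if m["diameter"] <= d]
    print(d, [m["model"] for m in candidates])  # 3.85 ['X-3.5', 'X-3.85']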
With these data, the system queries the dental implant representation database. It
returns a selection of 12 dental implant models that meet the design requirements,
leaving to the dentist the identification of which implants are most suitable for the
patient. In this research, the manufacturer offers approximately 150 implant models,
and the system reduced the options by 92%, making the process faster and more reliable.
6. Conclusion
This paper proposes a methodology for determining the most suitable implant for a
single dental failure, since traditional procedures do not provide the informational
requirements that support the dentist's decision-making process, leaving the selection
of the best implant to be made on little information. Thus, this methodology aims to
assist and support the process of determining the dental implant based on tomographic
image processing and on the analysis of the existing dental implant models, extracting
their most important characteristics.
This supporting information is stored in the product model, and it is up to the design
function, via inference mechanisms, to convert, share and translate information from
one representation to another in order to select the set of implants best suited to the
patient. As a result of this process, an experimental case of a single failure in the
mandible, in the canine region, was analysed. A set of 12 implants suitable for use was
obtained, and it is up to the dental surgeon to identify the best implant for the
patient.
The results show the potential of the system as a tool for computer-aided diagnosis
through the analysis of bone geometry and other parameters, making the dental implant
procedure less traumatic and reducing the implant rejection level. For further research,
the need was identified to explore in more detail the process of dental implantation in
fully edentulous patients, with the construction of a guide mask for the implant
insertion procedure, and to correctly plan the dental implant insertion process,
identifying the depth of insertion as well as the drilling and thread-cutting rotation
process.
Acknowledgment
The authors would like to thank the Pontifical Catholic University of Paraná (PUC-PR)
for the financial support of this research.
References
[1] W. Sun, B. Starly, J. Nam, A. Darling, Bio-CAD modeling and its application in computer-aided tissue engineering, Computer-Aided Design 37 (2005), 1097-1114.
[2] Z. Zhou, B.J. Liu, A.H. Le, CAD-PACS integration tool kit based on DICOM secondary capture, structured report and IHE workflow profiles, Computerized Medical Imaging and Graphics 31 (2007), 346-352.
[3] D. Grauer, L.S.H. Cevidanes, W.R. Proffit, Working with DICOM craniofacial images, American Journal of Orthodontics and Dentofacial Orthopedics 136 (2009), 460-470.
[4] S.E. Duff, et al., Computed tomographic colonography (CTC) performance: one-year clinical follow-up, Clinical Radiology 61 (2006), 932-936.
[5] S. Jivraj, W. Chee, Rationale for dental implants, British Dental Journal 200 (2006), 661-665.
[6] R.N.J. Graham, R.W. Perriss, A.F. Scarsbrook, DICOM demystified: A review of digital file formats and their use in radiological practice, Clinical Radiology 60 (2005), 1133-1140.
[7] R.H. Wiggins, H.C. Davidson, H.R. Harnsberger, J.R. Lauman, P.A. Goede, Image file formats: Past, present, and future, RadioGraphics 21 (2001), 789-798.
[8] C.E. Misch, Implantes Dentários Contemporâneos, 2nd ed., Santos Livraria, São Paulo-SP, Brasil, 2000.
[9] M.A. Bottino, M.K. Itinoche, L. Buso, R. Faria, Estética com implantes na região anterior, Revista ImplantNews 3 (2006), 560-568.
[10] K. Ochiai, S. Ozawa, A.A. Caputo, R.D. Nishimura, Photoelastic stress analysis of implant-tooth connected prostheses with segmented and non-segmented abutments, The Journal of Prosthetic Dentistry 89 (2003), 495-502.
[11] D. Tang, L. Zheng, L. Zhizhong, L. Dongbo, S. Zhang, Re-engineering of the design process for concurrent engineering, Computers and Industrial Engineering 38 (2000), 479-491.
[12] T. Li, et al., Optimum selection of the dental implant diameter and length in the posterior mandible with poor bone quality - A 3D finite element analysis, Applied Mathematical Modelling 35 (2010), 446-456.
[13] C.G. Galanis, M.M. Sfantsikopoulos, P.T. Koidis, N.M. Kafantaris, P.G. Mpikos, Computer methods for automating preoperative dental implant planning: Implant positioning and size assignment, Computer Methods and Programs in Biomedicine 86 (2006), 30-38.
[14] A.D. Pye, D.E.A. Lockhart, M.P. Dawson, C.A. Murray, A.J. Smith, A review of dental implants and infection, Journal of Hospital Infection 72 (2009), 104-110.
[15] Neodent, Catálogo de produtos (press) 1 (2011), 1-164.
[16] J.H. Lee, V. Frias, K.W. Lee, R.F. Wright, Effect of implant size and shape on implant success rates: A literature review, Journal of Prosthetic Dentistry 94 (2005), 377-381.
[17] J. Brink, S.J. Meraw, D.P. Sarment, Influence of implant diameter on surrounding bone, Clinical Oral Implants Research 18 (2007), 563-568.
Design for sustainability of product-service
systems in the extended enterprise
Margherita PERUZZINI a,1, Michele GERMANI a and Eugenia MARILUNGO a
a Università Politecnica delle Marche, via Brecce Bianche 12, 60131 Ancona, Italy
Tel. +39 071 220 4799 / +39 071 220 4969, Fax +39 071 220 4801
[m.peruzzini, m.germani, e.marilungo]@univpm.it
Abstract. A recent trend in modern manufacturing companies is moving from
products to services. Indeed, services allow creating new business opportunities
and increasing the value perceived by the customers. At the same time,
sustainability is a crucial aspect for industry, which pays more and more attention
to realizing efficient and sustainable solutions. The research challenge is defining a
structured methodology to understand how to design for sustainability considering
Product-Service Systems (PSS) and evaluating the effect of shifting from products
to services. While product sustainability can be assessed by several tools, the
sustainability of PSS is almost unexplored. Furthermore, PSS require creating an
extended value creation network. This paper defines an integrated product-service
lifecycle and proposes a methodology to identify a set of KPIs for both PSS and
products and to compare different use scenarios. It adopts a holistic approach to
assess sustainability on the basis of the three main impacts: environmental,
economic and social. The methodology is illustrated by means of an industrial
case study focusing on water heaters; it analyses an innovative PSS, "Hot Water as
a Service", supported by an extended network, and compares it with the traditional
scenario based on product selling supported by a vertical supply-chain. The final
aim is to evaluate the service benefits and to support company decision-making.
Keywords. Design for Sustainability, Product-Service Lifecycle, Product-Service
Systems (PSS), Extended Enterprise (EE), Service Engineering.
Introduction
An interesting business trend involving manufacturing enterprises is the transition from
products to Product-Service Systems (PSS), which mainly consist of adding a wide
range of services to increase the value perceived by the customers and better satisfy
their needs over time [1]. It implies an evolution from a traditional product-oriented
model to an extended product/service-oriented ecosystem. In the manufacturing
industry, PSS are mostly realized by providing technical services, which are easy to
implement and can create new market potentials as well as generate higher profit
margins. Furthermore, product-enhancing services (e.g. maintenance, user training,
retrofitting, product monitoring, etc.) can significantly influence product performance
and improve PSS sustainability. At the same time, PSS require creating new
relationships between the different stakeholders of the Extended Enterprise (EE) to add
value with low impact thanks to the exploitation of the ecosystem capabilities.
The interrelations between products and non-physical services are complex to
model and require an integrated lifecycle considering all the activities related to both
product and service. A reliable PSS sustainability assessment can be achieved only by

1 Corresponding Author.
considering a new integrated lifecycle and investigating the impact of each phase
according to lifecycle design approaches. Such an analysis allows understanding the
effective advantages in respect with traditional products to guide the evolution of the
EE and support strategic decision-making.
In this context, the research aims to support the EE in PSS conception and design
embracing sustainability principles. The paper proposes a methodology for the holistic
sustainability assessment of manufacturing products and technical PSS. The method
defines the new integrated lifecycle phases and, for each of them, indicates some
sustainability objectives belonging to three main factors (environment, economics and
social wellbeing). Objectives can be concretized by a set of KPIs that can be measured
by specific techniques (i.e. LifeCycle Assessment (LCA), LifeCycle Cost Assessment
(LCCA), Social LifeCycle Assessment (SLCA)) to obtain a unique Sustainability
Assessment value (SA).
The proposed method combines a set of assessment techniques and applies them as
early as the preliminary design stages to envisage the global impacts of different PSS
design solutions on sustainability, to highlight the most critical phases and the
objectives at risk (economic, ecological or social), to verify the PSS benefits compared
to the corresponding traditional product, and finally to support Design for
Sustainability in general. The method is validated by an industrial case study focusing
on hot water as a service. It analyses a new service idea where hot water is paid for as
a service and heaters are no longer purchased by the consumers. The analysis allows
identifying the best commercial strategy for the companies involved in the EE,
optimizing the design to maximize sustainability, supporting design for sustainability
decision-making, and evaluating whether and when services are more convenient than
products for the global EE.
1. Research Background
1.1. Lifecycle design approaches for sustainability
Sustainability is a fundamental guiding principle for achieving highly competitive
solutions and creating added value. In industry, it can be achieved by adopting
lifecycle design approaches that allow quantifying the sustainability of products,
identifying the most advantageous trends in product innovation, and giving a tangible
commercial value in terms of efficiency and costs to support customers'
decision-making [2]. They consider the entire lifecycle from cradle to grave to
estimate the impacts at each phase, from the time natural resources are extracted and
processed through each subsequent stage of manufacturing, transportation, use,
recycling and, ultimately, disposal. In design approaches, the designer describes a
specific lifecycle scenario and determines a lifecycle strategy. Design strategies deal
with the environmentally conscious selection of materials and components, the
definition of the end-of-life scenario and the robust analysis of consumption during use
(energy, fuel, water, etc.). All relevant lifecycle phases are designed, specified,
analyzed and made available for simulation purposes. The lifecycle design approach
also includes the definition of key parameters and indicators as metrics to assess the
lifecycle performance (e.g. functionality, manufacturability, serviceability,
environmental impact) [3].
The main scope of lifecycle approaches is avoiding potential shifting of
environmental consequences from one lifecycle stage to another, from one geographic
area to another, and from one environmental medium to another. Different methods
have been developed for these purposes. LifeCycle Assessment (LCA) takes a holistic
approach and considers the environmental impact during all phases of the product
lifecycle [4,5]. LifeCycle Costing (LCC) considers the total cost associated with an
activity performed over a fixed time horizon, providing a global vision of cost spread
over the whole product lifetime [6]. Its application during product and system design
and development is realized through the LifeCycle Cost Analysis (LCCA), which is
consistent with the LCA approach [7]. Recently, the social dimension has also been
included in modern sustainability thinking [8]. In this direction, some recent works
explore the social dimension through the so-called Social LifeCycle Assessment
(SLCA) [9,10].
It has been demonstrated that lifecycle approaches offer a structured methodology
for comparative analyses estimating all major impacts in the choice of alternative
courses of action, rather than for absolute evaluation [11]. Recently, some research
has coupled LCA, LCC and SLCA analyses to achieve eco-efficient solutions [12,14].
However, their applications generally refer to physical products: they are strongly
product-centred and focus on the company perspective. Only a few examples
addressing PSS assessment are presented in [15,16].
1.2. Product-service sustainability assessment
A product-service consists of a mix of tangible products and intangible services
designed and combined to increase the value for customers. Value creation can be
provided through an extended business network involving different stakeholders, which
concur to create the services. Product-service starts from the idea of extended product
[17], based on adding value by incorporating intangible services into a core product,
which is the physical item traditionally offered on the market. Creating extended
products implies the involvement of organizations, public bodies, tertiary service
providers and customers to create a unique business framework [18], moving from a
vertical supply-chain to an extended collaborative network able to support both product
and service lifecycles. A PSS includes the product-service itself, the enterprise network
and the infrastructures needed [19].
Services are often added to products precisely to achieve sustainability advantages,
if properly designed. From the economic viewpoint, services are able to create new
market potentials and higher profit margins, and can contribute to higher productivity
by means of reduced investment costs along the lifetime as well as reduced operating
costs for the final users. From the ecological viewpoint, product-services can be more
efficient thanks to a more conscious product usage, an increased resource productivity
and closed-loop manufacturing, as reported by some examples [20,15]. Finally,
PSS can also be socially advanced, as services are able to support the building up and
securing of knowledge-intensive jobs, and can contribute to a more geographically
balanced wellbeing distribution [21]. While product lifecycle modelling is a well-
known technique, service lifecycle modelling is a rather new idea. It aims at
representing all service data relating to its design, implementation, operation and final
disposal. The ISO 15704:2000 standard [22] defines the generic entity/system lifecycle
phases and their evolution in time. However, the biggest challenge remains performing
a real and substantial assessment of sustainability for PSS and extended networks.
Numerous methods to analyse service products have been recently proposed (i.e.
modularization-focused, stochastic behaviour-focused, and lifecycle-focused) [23];
however, some of them are very theoretical and hard to implement in practice, while
others focus on the analysis of specific cases with a limited perspective [24]. None of
them provides a concrete sustainability evaluation applicable in manufacturing EEs.
In this context, lifecycle design approaches could be extended to product-service
solutions and distributed enterprises. In this case, the design process chain and the
product-service lifecycle converge and create interdependencies: on one hand the
design choice determines the product-service lifecycle configuration as well as the EE
configuration; on the other hand, the product-service relations and the EE partners'
characteristics affect the PSS performance and, consequently, the early design
decision-making.
2. Integrated product-service sustainability assessment
The starting point to apply lifecycle design to PSS in EE contexts is defining an
appropriate design for sustainability methodology.
The proposed method can be summarized in the following steps:
1. Definition of an integrated Product-Service Lifecycle, able to support and manage
all the activities related to product design and development, service ideation and
implementation, system infrastructure design and creation, product-service
delivery, until PSS disposal. Lifecycle modelling considers the product as well as
the technological infrastructure and the services;
2. Identification of the sustainability objectives for each lifecycle phase. Three main
aspects have been considered: environment, economics and social wellbeing;
3. Definition of the relevant lifecycle phases and, for each of them, of a set of KPIs.
Regarding sustainability, the relevant impacts arise from the end of the design stages
until the system end-of-life;
4. Definition of reliable measuring techniques to assess the relevant KPIs. According
to lifecycle design, LCA, LCCA and SLCA are chosen: LCA focuses on the
impact on environmental resources and the ecosystem, LCCA estimates the total costs
by considering the companies involved, the consumers and the dismantling
consortia, and SLCA estimates the impact on human resources and human
health;
5. Measurement of the sustainability impacts by applying the selected techniques.
Impacts are measured separately for each relevant stage and for any design solution
as well as EE scenario. The scenario depends on the companies involved in the
EE, the user typologies, profiles and behaviours, as well as the considered lifetime,
in order to carry out targeted analyses. The KPI measurement allows quantifying the
achievement of the defined objectives;
6. Calculation of the global sustainability assessment of product-services by
combining the selected techniques and normalizing the single indexes to obtain a
global Sustainability Assessment (SA), as expressed by Eq. (1):

SA = LCA_norm + LCCA + SLCA_norm   (1)

where LCA_norm and SLCA_norm are the environmental and social impacts after
monetization (see Section 2.2) and LCCA is the economic impact, all expressed in euro.
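A minimal sketch of step 6, assuming the normalization of Section 2.2 has already
converted the environmental (EI-99 Pt) and social (QALY) results into euro; the SA is
then the plain sum of the three monetary impacts.

    def sustainability_assessment(env_eur: float, eco_eur: float, soc_eur: float) -> float:
        """Global SA in euro (Eq. 1): the higher the value, the lower the sustainability."""
        return env_eur + eco_eur + soc_eur

    # New-apartment scenario over a 15-year lifetime (values from Table 2)
    print(round(sustainability_assessment(153.26, 36573.76, 2250.42), 2))  # 38977.44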
The proposed methodology is schematized in Fig. 1. The sustainability assessment
focuses on the operative phases; indeed, the impact of the ideation and design stages is
limited and almost similar for different solutions, so their contributions can be
neglected as far as the sustainability investigation is concerned.
Thanks to its pragmatic procedure, the method is general and can be
straightforwardly carried out for assessing PSS as well as traditional products: for the
latter, the lifecycle is simplified by considering only the product-oriented activities,
but the method does not change. This method extends some recent works focused on
product-service assessment [15,16]. It aims to define a structured methodology to
benchmark different design solutions from a sustainability viewpoint, as already done
in product design for other specific purposes, for instance collaboration [25]. Such a
method has three main advantages: it properly addresses product-services as it exploits
the lifecycle approach; it can be adopted as early as the preliminary design stage to
support top managers' decisions, to objectively compare the impact of different
solutions and to validate the technicians' choices; and it can be easily applied to both
products and PSS to compare design alternatives and alternative scenarios, evaluating
the consumed resources and choosing the one with the lowest impact.
Figure 1. Methodology for Product-Service sustainability assessment
2.1. Lifecycle sustainability impacts
The lifecycle analysis considers all significant data referring to the analysed phases for
product (manufacturing, use, end-of-life), service (implementation, operation,
decommission) and product-service system (creation, commercialization and delivery).
For each phase, three impacts are estimated separately. The environmental impact is
measured by Eco-Indicator 99 (EI-99), considering the Ecosystem Quality impact and
the Resources consumption; the unit of measurement is the EI-99 point (Pt). The
economic impact considers the lifecycle costs in terms of resource consumption
(MegaJoule) as well as material use and transformation (MegaJoule or euro/dollars),
from the product manufacturing to the service implementation phases, considering all
the companies involved in the EE, up to the end-of-life and decommissioning in charge
of the dismantling consortium and the involved entities. It adopts the Equivalent
Annual Cash Flow technique (EA) to transform a generic cash flow distribution into an
equivalent annual distribution by cost actualization according to Eq. (2):

EA = P · i(1 + i)^n / [(1 + i)^n - 1]   (2)

where n is the number of lifetime years, i is the generic discount rate (for example 3%),
and P is the value over the entire lifetime. The impact is expressed in euro. Finally,
the social impact separately considers the Human Health contributions according to the
EI-99 methodology, as before; the impact is expressed in QALYs (Quality Adjusted
Life Years). Such values can be calculated by LCA and LCCA software tools (e.g.
SimaPro, GaBi, Relex).
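The EA of Eq. (2) can be checked with a short worked example, under the assumption
that it is the standard capital-recovery annuity of a present value P over n years at
rate i.

    def equivalent_annual(P: float, i: float, n: int) -> float:
        """Equivalent annual cash flow of a present value P (Eq. 2)."""
        return P * i * (1 + i) ** n / ((1 + i) ** n - 1)

    # e.g. 2000 euro actualized over 15 years at the 3% rate quoted above
    print(round(equivalent_annual(2000.0, 0.03, 15), 2))  # 167.53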
2.2. Normalization procedure
The results calculated by means of LCA, LCCA and SLCA are coupled to obtain a
unique sustainability index via proper data normalization. Starting from the different
units of measurement (i.e. EI-99 Pt, euro, QALYs), the environmental and social
impacts are monetized to obtain a final monetary value (expressed in euro). Then, the
three monetary values can be summed to assess the overall sustainability: the higher
the impact, the lower the sustainability. Normalization is achieved by the following
equations, which are based on European average data for a consistent redefinition.
The environmental impact, originally expressed in EI-99 Pt, can be translated into
PDF·m²·yr (Potentially Disappeared Fraction of species per square metre per year) and
MJ (MegaJoule), and normalized by Eq. (3) and Eq. (4):

Pt → PDF·m²·yr → €   (3)
Pt → MJ → €   (4)

The social impact, originally expressed in QALYs, can be multiplied by the estimated
cost per year according to recent European data, by Eq. (5):

€ = QALYs × c_QALY   (5)

where the conversion factors in Eqs. (3)-(5) are taken from European average data.
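The normalization chain of Eqs. (3)-(5) can be sketched as follows; the monetization
factors are placeholders, since the European-average coefficients used by the authors
are not reproduced in the text.

    # Placeholder monetization factors (assumptions, not the paper's values)
    EUR_PER_PDF_M2_YR = 0.1     # assumed cost of ecosystem damage
    EUR_PER_MJ = 0.02           # assumed cost of resource consumption
    EUR_PER_QALY = 75_000.0     # assumed cost of one quality-adjusted life year

    def monetize(pdf_m2_yr: float, mj: float, qaly: float) -> tuple[float, float]:
        """Translate environmental and social impacts into euro (Eqs. 3-5)."""
        env_eur = pdf_m2_yr * EUR_PER_PDF_M2_YR + mj * EUR_PER_MJ
        soc_eur = qaly * EUR_PER_QALY
        return env_eur, soc_eur

    print(monetize(pdf_m2_yr=50.0, mj=1200.0, qaly=9.18e-3))  # (29.0, 688.5)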
3. The industrial case study
3.1. Hot water as a service
The case study has been realized in collaboration with an Italian company producing
heating and hot water systems, which is a world leader in this field. The company is
currently organized in a vertical supply-chain and adopts a product-oriented
development process. Collaboration with partners and suppliers is limited to design
innovation and cost reduction.
The case study focuses on a new PSS idea consisting of selling hot water as a
service instead of heater products (i.e. condensing boilers). The idea starts from the
high inefficiency of current product solutions, due to incorrect assistance actions during
the product lifetime, which consequently lead to a significant increase in energy
consumption, costs and environmental impact. In the case of the traditional product,
customers usually purchase the heater from a dealer (at a final price of about 2.000,00
euro), a technician installs the product at home and a third company carries out
maintenance annually (costing about 100 euro/year). In contrast, the PSS solution
consists of providing the heater for free and guaranteeing the hot water service by
remotely monitoring the product, verifying its performance, and caring about its
functioning also by predictive maintenance. The consumer pays a single monthly
fee (e.g. 36 euro/month). The lead company remains the product owner and plans
specific interventions or product substitution when necessary (for instance, every
5 years). Affiliated partners provide product monitoring and maintenance actions.
After a certain lifetime, the consumer can choose whether to renew the service contract
or to buy the product at a special price. The considered lifetime is 15 years. The PSS
scenario implies strong collaboration between the lead company and the technical
assistance centres, as well as the third parties caring about maintenance and about
renewal or dismantling. Fig. 2 shows the conceptual PSS framework and highlights the
novelties with respect to the traditional product (a rough consumer cash-flow
comparison follows Figure 2).
Figure 2. The case study framework: PSS and traditional product comparison
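A rough, undiscounted comparison of the two consumer cash flows just described
(purchase at about 2.000 euro plus about 100 euro/year of maintenance, versus a
36 euro/month fee) gives a feel for the numbers; it deliberately ignores discounting,
energy savings and product substitutions, which the full assessment of Section 3.2
accounts for.

    def product_cost(years: int, price: float = 2000.0, maint_per_year: float = 100.0) -> float:
        """Cumulative consumer outlay in the traditional purchase scenario."""
        return price + maint_per_year * years

    def pss_cost(years: int, fee_per_month: float = 36.0) -> float:
        """Cumulative consumer outlay in the 'hot water as a service' scenario."""
        return fee_per_month * 12 * years

    for years in (5, 10, 15):
        print(years, product_cost(years), pss_cost(years))
    # 5 2500.0 2160.0 / 10 3000.0 4320.0 / 15 3500.0 6480.0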
The research questions are: what is the global impact on sustainability of the new
PSS solution? Which is the most sustainable model to sell the new PSS (years to
product substitution, buy-back price, monthly fee)? What are the achievable
benefits with respect to the traditional product? Such questions are
investigated for four different use scenarios, differing in electrical and thermal energy
efficiency: 1) existing apartment, 2) new apartment, 3) existing detached house, 4) new
detached house.
3.2. PSS sustainability assessment and comparison with traditional product
The sustainability assessment is achieved by applying the proposed method to the PSS
lifecycle stages as reported in Fig. 1. For the product manufacturing phase, the LCA
considers the environmental impact produced by all the components used for
production and final assembly, and data are organized according to the main
functional entities (e.g. pressure vessel, heat exchanger, burner, embedded electronics,
external case, interface, etc.). A 5% cut-off is applied so as not to consider those parts
that have a limited impact. The LCCA considers the global manufacturing costs (about
600 euro/product). The SLCA considers the human health impact similarly to the
LCA. For the service implementation phase, LCA and SLCA consider the impacts of
all the items necessary to create the service infrastructure and the service operational
aspects (e.g. central gateway, connection board, ZigBee module, electronics for control
units, etc.); the LCCA considers their costs. For product-service system creation, the
analyses consider the impacts and costs of the additional components and the system
infrastructure that must be added to the traditional stand-alone product (e.g. the
call-centre, the personnel employed there, the wiring network). Product-service system
commercialization and delivery comprehend the transportation impacts and costs as
well as the point-of-sale impacts. The analysis of the product use and service operation
phases considers the habits of the consumers in the four use scenarios, according to
European average data, over the considered lifetime (1-15 years). For the LCA, both
electric and thermal gas energy consumptions are considered. While for the traditional
product there is a decrease of performance over time, as suggested by real monitoring
data, the PSS shows higher and constant performance due to continuous control of the
machine status, real-time monitoring, predictive maintenance and constant assistance
(i.e. the PSS machine is monitored and parts can be substituted in advance to guarantee
high-quality performance for the whole lifetime). The LCCA considers the costs
generated by the resource consumption (i.e. electric energy at about 0,2 euro/kWh,
thermal gas energy at about 0,08 euro/kWh). The product-service end-of-life considers
the impacts and the costs for product regeneration or substitution as well as service
decommissioning. The present PSS case considers 5-year regeneration: impacts are
optimized as they are directly managed by the EE, which cares about all phases (i.e.
product regeneration, component substitution and reuse, service updating, service
decommissioning).
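The 5% cut-off mentioned above for the manufacturing inventory can be pictured as a
simple filter: components whose contribution to the total impact falls below the
threshold are dropped before running the LCA. The component values below are
illustrative, not the case-study inventory.

    def apply_cutoff(components: dict[str, float], cutoff: float = 0.05) -> dict[str, float]:
        """Keep only components contributing at least `cutoff` of the total impact."""
        total = sum(components.values())
        return {name: v for name, v in components.items() if v / total >= cutoff}

    inventory = {"heat exchanger": 420.0, "burner": 250.0, "external case": 180.0,
                 "embedded electronics": 210.0, "fasteners": 18.0}
    print(apply_cutoff(inventory))  # 'fasteners' (~1.7% of the total) is excluded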
The following tables summarize the main research results. Tab. 1 shows the
results obtained for the PSS case over a 15-year lifetime. It investigates a specific use
scenario (i.e. new apartment) and shows the separate analysis values for LCA, LCCA
and SLCA, and the global Sustainability Assessment (SA). In this way the impact of
each phase of the integrated product-service lifecycle is expressed in terms of the three
categories (environment, economics, social wellbeing). Tab. 2 compares the PSS
impacts for different scenarios: it contains the global SA results obtained by summing
the three contributions after normalization. Tab. 3 compares the PSS case and the
traditional product case considering different lifetimes for two different scenarios (i.e.
new home and existing home). It is worth noticing that the PSS is more convenient for
both scenarios, regardless of the user habits. Furthermore, it is evident how new
buildings allow maximizing the PSS sustainability. Data can also be investigated over
the years to better highlight the PSS advantages in relation to the traditional product
and to understand how the benefits evolve along the lifetime.
Table 1. Lifecycle analysis results for PSS (LCA + LCCA + SLCA)

PSS CASE - New apartment (15-year lifetime)

Lifecycle phase                              LCA (Pt)    LCCA (€)    SLCA (QALY)
PRODUCT manufacturing                         1099,67       1.800      9,18 E-03
SERVICE implementation                        1986,84       1.500
PRODUCT-SERVICE SYSTEM creation                 51,42         250
PRODUCT-SERVICE comm. & delivery                 1,01          15
PRODUCT-SERVICE SYSTEM use/operation           675,33    6.541,84      5,87 E-03
PRODUCT-SERVICE SYSTEM EoL/decommission      -2687,74      -1.650     -4,79 E-03
Table 2. Sustainability assessment for different scenarios

PSS - SUSTAINABILITY ASSESSMENT (SA), 15-year lifetime

                       Existing      New           Existing          New detached
                       apartment     apartment     detached house    house
Env. impact (norm.)       107,42       153,26          104,83            135,46
Eco. impact (€)         9.941,76    36.573,76        8.441,84         26.233,20
Soc. impact (norm.)       838,01     2.250,42          758,46          1.702,02
GLOBAL SA (€)          10.887,19    38.977,44        9.305,13         28.070,68
Table 3. Comparative sustainability assessment for PSS and traditional product (SA, €)

            PSS                                  Traditional product
Lifetime    New detached   Existing detached     New detached   Existing detached
(years)     house          house                 house          house
15           10.887,19        38.977,44           11.719,33        41.341,75
10            7.147,98        25.189,29            7.971,47        27.484,29
5             3.653,52        12.469,84            4.177,07        13.402,05
The proposed analysis can be repeated for other use scenarios to simulate different
service features (e.g. monthly rate, buy-back period, product substitution period, etc.)
and finally understand when the product-service solution is particularly advantageous.
Furthermore, it allows easily comparing PSS solutions with traditional product
solutions on the basis of both the single impacts and the global sustainability.
The application of the methodology to an industrial case study verifies the fulfilment
of the following purposes: 1) identification of the most sustainable strategy for a
specific PSS and an EE made up of the lead company, several partners, the dismantling
consortium and the consumer; 2) easy comparison between different design solutions
in order to identify the most sustainable one; 3) support to design for sustainability
decision-making; 4) identification of those conditions in which services are more
convenient than products by considering the impacts on the entire EE.
4. Conclusions
The paper proposes a methodology to support design decision-making for PSS by
assessing the global sustainability. It allows a deep sustainability investigation by
highlighting the impacts for the different phases of an integrated lifecycle with respect
to three categories: environment, economics and social wellbeing. Its validity is
demonstrated by an industrial case study proposing a new service idea (hot water as a
service): it compares different PSS scenarios and highlights the benefits with respect
to the traditional product. Future work will focus on applying the method to different
industrial scenarios and on its validation and optimization, also through effective
performance evaluation and feedback.
References
[1] M.J. Goedkoop, C.J.G. Van Halen, H.R.M. Riele, P.J.M. Rommens, Product-Service Systems - Ecological and Economic Basics, PWC, The Hague, 1999.
[2] J. Jeswiet, A definition for life cycle engineering, Proc. 36th International Seminar on Manufacturing Systems, Saarbrücken, Germany, 2003.
[3] K. Melk, R. Anderl, A Generic Framework for Life cycle Applications, Proc. LCM conference, 2007.
[4] ISO 14040:2006 Environmental Management - Life Cycle Assessment - Principles and Framework, 2006.
[5] ISO 14044:2006 Environmental Management - Life Cycle Assessment - Requirements and Guidelines,
2006.
[6] D.G. Woodward, Life cycle costing theory, information acquisition and application, Int J Project
Management, 15 (6) (1997), 335-344.
[7] S. Nakamura, Y. Kondo, Hybrid LCC of Appliances with Different Energy Efficiency, Int J Life Cycle
Assess, 11 (5) (2006), 305-314.
[8] W.M. Adams, The Future of Sustainability: Re-thinking Environment and Development in the Twenty-
first Century, Report of the IUCN Renowned Thinkers Meeting, 2006.
[9] B. Weidema, The integration of economic and social aspects in life cycle impact assessment, Int J Life
Cycle Assess, 11(1) (2006), 89-96.
[10] G.A. Norris, Integrating Life Cycle Cost Analysis and LCA, Int. J LCA, 6 (2) (2001), 118-120.
[11] M.A. Curran, Environmental Life Cycle Assessment, McGraw-Hill, 1996.
[12] A. Kicherer, S. Schaltegger, H. Tschochohei, B. Ferreira Pozo, Eco-Efficiency, Combining Life Cycle
Assessment and Life Cycle Costs via Normalization, Int. J LCA, 12 (7) (2007), 537-543.
[13] J. Parent, C. Cucuzzella, J.P. Revéret, Impact assessment in SLCA: sorting the sLCIA methods according to their outcomes, Int J Life Cycle Assess, 15 (2010), 164-171.
[14] A. Dobon, P. Cordero, F. Kreft, S.R. Østergaard, H. Antvorskov, M. Robertsson, M. Smolander, M. Hortal, The sustainability of communicative packaging concepts in the food supply chain. A case study: part 2. Life cycle costing and sustainability assessment, Int J Life Cycle Assess, 16 (2011), 537-547.
[15] C. Favi, M. Peruzzini, M. Germani, A lifecycle design approach to analyse the eco-sustainability of
industrial products and product-service systems, Proc. of International Design Conference DESIGN
2012, Marjanovic, Storga, Pavkovic, Bojcetic (eds.), (2012), 879-888.
[16] M. Peruzzini, M. Germani, Investigating the Sustainability of Product and Product-Service Systems in
the B2C Industry, in Product-Service Integration for Sustainable Solutions LNPE 6, H. Meier (Ed.),
Springer-Verlag Berlin Heidelberg (2013), 421-434.
[17] K.D. Thoben, H. Jagdev, J. Eschenbaecher, Extended Products: Evolving Traditional Product Concepts,
Proc. 7th International Conference on Concurrent Enterprising, Bremen, 2001.
[18] S. Balin, V. Giard, A process oriented approach to service concepts, Proc. 8ème Conférence Internationale de Génie Industriel, Tarbes, France, 2009.
[19] SUSPRONET final report: http://www.suspronet.org/
[20] T.S. Baines, H. Lightfoot, S. Evans, A. Neely, R. Greenough, J. Peppard, R. Roy, E. Shehab, A.
Braganza, A. Tiwari, J.R. Alcock, J.P. Angus, M. Bastl, A. Cousens, P. Irving, M. Johnson, J. Kingston,
H. Lockett, V. Martinez, P. Michele, D. Tranfield, I.M. Walton, H. Wilson, State of the art in Product-
Service System, Journal of Engineering Manufacture, 221 (2007), 1543-1552.
[21] W. Stahel, The Utilization-Focused Service Economy, Resource Efficiency and Product-Life Extension,
The greening of industrial ecosystem, Allenby B., Richard, D. eds., Washington, DC, National
Academy Press, 1994, 178-190
[22] ISO 15704:2000, Industrial automation systems - Requirements for enterprise - Reference architectures
and methodologies, 2000.
[23] M. Garetti, MP. Rosa, S. Terzi, Life Cycle Simulation for the design of ProductService Systems,
Computers in Industry, 63 (2012), 361-369.
[24] J.C. Aurich, C. Fuchs, C. Wagenknecht, Life Cycle oriented design of technical Product-Service
Systems, Journal of Cleaner Production, 14 (2006), 1480-1494.
[25] M. Germani, M. Mengoni, M. Peruzzini, A benchmarking method to investigate co-design virtual
environments for enhancing industrial collaboration, Proc. ASME WINVR2010, Ames, IOWA (USA),
2010.
A Case Study on Implementing Design
Automation: Identified Issues and Solution
for Documentation
Morteza POORKIANY a,1, Joel JOHANSSON b and Fredrik ELGH c
a PhD Candidate, Jönköping University, Sweden
b Assistant Professor, Jönköping University, Sweden
c Associate Professor, Jönköping University, Sweden
Abstract. Computer supported engineering design systems are used as support for
designers by automating some tasks/activities of the design process. From an
industrial perspective, the implementation of a developed prototype system is a
critical task. User acceptance is of high importance and strongly related to the
access to and understanding of the knowledge, which requires a high level of
system transparency. In addition, the integration of the system in the environment
and its compatibility with other systems/tools should be considered. Our
experiences in industry show that two major issues usually arise when
implementing a design automation system: documentation and organization.
Documentation concerns the way of capturing, storing and distributing the
information in systems, and organization concerns the alignment of the system
with other systems or tools as well as communication and collaboration among
system participants and users. The focus of this paper is on documentation, and
the importance of reuse, design rationale and traceability is discussed. In order to
align closely with industry practice, the thoughts are presented along with an
on-going case study, where the development and analysis of roof racks for cars
are being automated, and a number of challenges are discussed.
Keywords. Computer supported engineering design systems, Documentation,
Design Rationale and Traceability
Introduction
Many companies put much effort and investment in order to develop computer
supported engineering design systems automating a variety of engineering design
activities throughout the development process and production preparation. For
example, Sellgren developed a framework for simulation-driven design [1], in which
simulation models were extracted based on the CAD-model relationships. Also,
Chapman and Pinfold described how to use KBE and FEA for the design automation of
a car body [2], and a system was presented by Hernández and Arjona that automatically
designs distribution transformers and that also uses FEM automatically [3]. The design
process of different jet engine components has also been the subject for design
automation using KBE (or KEE) integrated with FEA [4, 5]. Stolt developed methods
to automatically develop FEM-models for die-cast components [6], and so on.

1 PhD Candidate, Department of Mechanical Engineering, Jönköping University,
P.O. Box 1026, SE-551 11 Jönköping, Sweden. Phone: +46 (0)36 101571, Fax: +46 (0)36 125331,
Email: morteza.poorkiany@jth.hj.se.
The mentioned system architectures and solving methods have been tested through
developed prototype systems. From an industrial perspective, the implementation of an
idealized prototype is a critical process and of high importance for the actual use and,
consequently, for the benefits achieved and the future return on the investment. There
is a need to ensure the compatibility of the system with dependent methods and tools
as well as with the company's IT infrastructure. User acceptance is of high importance
and strongly related to the access to and understanding of the underlying knowledge,
which requires a high level of system transparency. Experience shows that when a
system is being used in a company, extracting and utilizing the information and
knowledge is an important task. The effective utilization and application of this
information and knowledge assists the decision-making process [7]. Moreover, the
system should be expanded and updated over time by adding the information and
knowledge of new tasks/activities or adding new knowledge sources in order to keep
the system useful [8].
Our experience in industry shows that, in order to successfully utilize a system and
keep it useful, two major issues usually have to be given significant focus while
implementing the system. One issue is documentation, which concerns the way of
capturing, structuring, storing and distributing the required information within the
system. When documenting the knowledge, one should identify what knowledge
should be captured and at which level.
The second issue is organization, which concerns the integration of the system in
its environment and the alignment of the system with other systems or application
software. From the organizational aspect, processes, methods and tools have to be
systematically addressed and handled to ensure proper and effective usage. In addition,
according to Turban and Aronson [9], in modern organizations groups make major
decisions and, therefore, communication and collaboration among system users is
important, especially when the users are in different locations.
Each of these issues, on its own, provides sufficient material for an entire research
paper; therefore, the focus of this paper will be only on documentation. The paper
provides a framework for documenting engineering knowledge, discussing the main
challenges. Then a pilot system with an information model is developed for a case
study, presenting the way of modelling the knowledge considering the type and level
of knowledge.
1. Documentation
An effective way of capturing and storing knowledge is using computer supported
systems. By this, the collective mind of individuals is transferred into a computerized
system. A major aspect that should always be considered is that these two sources,
the collective mind of individuals and the collective knowledge captured in computers,
also require updates over time (see Figure 1).
Figure 1. Changes in individuals' sources and computer sources over time.
Based on the type and objective of the system, many sources and types of
information and knowledge can be used or produced while utilizing the system, such
as: product information, process information, the required knowledge describing
assemblies and parts, records of previous activities, as well as catalogued information,
CAD files, features, rules, bills of material and so on. The system user collects,
verifies and stores this knowledge in different repositories and formats, and associates
it with different processes and knowledge sources. The main challenge regarding
system use and longevity is the access to the information and knowledge generated or
utilized while using the system. As Baxter et al. [10] note, around 20% of the
designer's time is spent searching for information, and only 40% of design information
requirements are met by documentation sources. This implies that design information
and knowledge is not represented in an easily accessible knowledge base.
Because of the diversity of this knowledge, capturing, structuring, storing and
representing it requires significant effort. Stokes [11] described a methodology for the
development of knowledge based engineering applications called MOKA. He states
that capturing engineering knowledge consists of several steps: 1) prepare for
collection, 2) collect the required knowledge, 3) structure the raw knowledge, 4) check
for fitness of purpose, and 5) annotate and file models in the knowledge repository. In
addition, structuring the knowledge is the way to represent the knowledge in a form
that makes it easy to reuse existing knowledge in future activities. Reuse of knowledge
is perceived to significantly increase efficiency and is a means to reduce the product
development lead time. In order to easily reuse the knowledge, the availability and
relevance of the knowledge should be considered when storing it.
1.1 Design Rationale and Traceability
During product development, decisions are made and rules are created; access to the
design rationale gives insight into the reasons for making those decisions or rules,
which will support engineers in reusing or revising the product in the future [12].
Falessi et al. [13] define rationale as not only the reasons behind a decision but also
the justification for it, the other alternatives considered, the trade-offs evaluated and
the argumentation that led to the decision. Access to design rationale can support the
development of a new artifact, the modification of existing artifacts (design changes)
or the reuse of an existing solution in a new context.
Generally, it is hard to obtain design rationale from design specifications because
there is no systematic practice to capture it. Tang et al. [14] mention that even when
some design rationale is captured, it is not structured in such a way that it can be
retrieved and tracked easily. The realization of a design rationale system includes
methods and tools to capture, structure, manage and share information across
organizations, processes, systems and products.
Since the knowledge is stored in different levels of sources and repositories,
traceability is the key for supporting the ability to follow the origin of a knowledge
component and to pursue the affected objects when changes occur in the design. The
information is traceable if one can detect (adapted from Kirkman [15]; a minimal
data-structure sketch follows the list):
• the source of the information
• the reason why the information exists
• what other information is related to it, or how the information is related to
other knowledge.
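One way to make these three conditions concrete is to attach a small metadata record
to every stored knowledge item; the fields below map one-to-one onto the list above.
This is a sketch, not the actual TRackS schema, and the example values are invented.

    from dataclasses import dataclass, field

    @dataclass
    class TraceableKnowledge:
        content: str                     # the rule, parameter or document itself
        source: str                      # where the information came from
        rationale: str                   # the reason why the information exists
        related: list[str] = field(default_factory=list)  # links to other items

    rule = TraceableKnowledge(
        content="bracket_thickness >= 2.5 mm",
        source="interview with senior designer (hypothetical example)",
        rationale="thinner brackets failed fatigue testing (hypothetical)",
        related=["bracket CAD model", "test report"],
    )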
1.2 User Guides
Information and knowledge should be captured and stored in such a way as to make
them flexible for reuse and revision. To be able to reuse a solution, access to the
knowledge that was once used is required. Finding the desired knowledge is easier
with structured documentation. The documentation includes explanations about the
product, parts, assemblies and parameters, and the relations between different parts
and rules. Capturing the knowledge should be done in a way that keeps the knowledge
bases current and relevant as well as updated and modified.
The requirements concerning the scope and the granularity of the design rationale
to be captured depend on the future needs of the knowledge. In many companies,
the documentation of automated systems is directed towards describing the final
results of different activities (answering the question What?) rather than describing the
reason and origin behind those activities (answering the questions Why?, When?,
How?). The former description is the definition of the design (design definition) and
the latter is the rationale of the design (design rationale). The documentation should
be constituted by the design definition and completed by the design rationale,
considering traceability for detecting the source and origin of the information.
1.3 Developer Guides
To build and run a system, one matter is extracting the design knowledge from
designers in order to execute it. According to Sunnersjö [16], transforming product
knowledge from individuals' minds into executable code is often an extensive task,
and many gaps or weaknesses in the existing knowledge are often revealed. He states
that design knowledge is a mixture of company policies and rules, designer experience,
design rules and so on, and it soon becomes clear to the system developer that
extracting design knowledge cannot be done simply by interviewing the designers,
because part of the design knowledge and rules is based on the designers' experience
and does not exist in any documentation.
Another matter is the rationale behind the design and development of the system
itself. The system developer uses a large amount of information and knowledge and
makes decisions while developing the system. Knowing the reason and origin of these
decisions will be important in the future for modifying or maintaining the system in
response to new changes (in the product, process or technology) or for updating the
software or hardware of the system. This becomes more critical when the system
developer is not a member of the organization and might not be available in the future
to support system maintenance.
In order to explore the discussed issues, a design automation system was selected
as a case study. The selected system is part of a project running at a company with an
effort on reusing the knowledge in a new context or in a modified solution.
2. Case Study: Thule Rack System (TRackS)
As a research case, an ongoing design automation project was selected. The project,
running at the Thule Group company, aims to automate the development process of
car roof racks. The automation specifically targets roof racks that are mounted directly
on the car's roof, i.e. where there are no rails on the car. Consequently, the roof rack
product has to be adapted to every car model it supports. The adaption is done by
changing two components, the footpad and the bracket (see Figure 2). The footpad is a
rubber pad on which the rack stands on the roof, and the bracket is used to fix the rack
by gripping around the roof edge where the doors are.
The company acts on the open market, competing with car manufacturers, and
therefore gets no nominal data on car roofs. Instead, the engineers have to collect
geometrical information about car roofs by measuring. When the roof geometry has
been collected for a particular car model (A in Figure 2), a footpad (B) is retrieved or
developed in the design automation system. The rack is subsequently placed on the
footpad in the virtual model (C), and finally a bracket is retrieved or developed in the
design automation system (D).
Since the number of developed brackets and footpads increases as new cars enter
the market, searching among the existing brackets and footpads for ones that can be
used for new cars (maybe with some minor changes) is a demanding task. As an
example, reusing an existing bracket cuts the overall lead-time by up to 40%. However,
a time-consuming step during the development process of a ski-rack is the manual
search among existing brackets and footpads, taking up to several hours. Since manual
search is a painstaking task with an ever-increasing list of products, the engineers tend
to skip that step and draw new components instead.
Figure 2. The roof rack product is adapted to new car models by changing the footpad
and bracket components: (A) car roof, (B) footpad, (C) rack placed on the footpad,
(D) bracket.
In order to search automatically among existing components, a computer-based
system (TRackS) was developed [17, 18]. The system works as an add-in to SolidWorks
and uses the recorded design information. TRackS has the ability to search among the
existing components and check the applicability of previous solutions for a new car
roof based on shape matching. TRackS utilizes a database, set up by the system
developer, for storing design information. Further, to run TRackS, access to roof data,
drawings, CAD files and the bill of material is a necessity. To update TRackS, the
files and knowledge should be updated, and that is possible only when the rationale
part of the knowledge is captured; it is then easier for the system user to reuse or
update the documentation by knowing the origin of and reason behind each activity.
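The automatic search can be pictured as comparing the measured profile of a new roof
against the stored profiles of existing components and accepting those within a
tolerance. The RMS criterion and the 0.5 mm tolerance below are assumptions for
illustration; the actual shape-matching algorithm of TRackS [17, 18] is not detailed here.

    import math

    def rms_deviation(profile_a: list[float], profile_b: list[float]) -> float:
        """Root-mean-square deviation between two sampled roof profiles."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(profile_a, profile_b))
                         / len(profile_a))

    def find_reusable(new_roof: list[float], existing: dict[str, list[float]],
                      tol_mm: float = 0.5) -> list[str]:
        """Names of stored components whose profile matches within tol_mm."""
        return [name for name, prof in existing.items()
                if rms_deviation(new_roof, prof) <= tol_mm]

    existing_footpads = {"FP-102": [0.0, 1.2, 2.6, 3.9],   # invented profiles
                         "FP-117": [0.0, 0.8, 1.9, 3.1]}
    print(find_reusable([0.0, 1.1, 2.5, 4.0], existing_footpads))  # ['FP-102']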
On the other hand, documenting the design information of the whole rack product
traditionally takes place by creating a folder in the company's database and saving all
relevant documents and files there. The folder includes test reports, checklists,
drawings, CAD files, features, the BOM and so on. Although the engineers at the
company try to describe and define the activities by writing reports, it seems that
capturing design rationale is the missing piece in documenting the design knowledge.
Geometries and CAD files are the types of information most used by TRackS. This
type of information mainly describes the results of the activities for a context. Such
information might be enough if the context is to be used as it is, but if the context has
to be modified and adapted to specific circumstances, even more information is
required to support the adaptation.
3. Pilot System
A system founded on the presented documentation framework for modeling the design
knowledge of TRackS was developed. A recently developed rack product was selected
to be modeled in the system application (see Figure 3). The application used is based
on wiki pages. CAD models, the bill of material, features, rules and roof data are the
required information for TRackS, and a wiki page is associated with each of them.
The page contains all the required information for that context, such as Excel files,
Word documents, figures, etc. These can be added to a page by uploading the specific
files and then creating links to them. The information and knowledge can be described
using text, figures, tables and rules. The page also includes the principle and function
of that item in the product, and the rules and their validity for the product family.
Since design rationale is not recorded in the company's current documentation, the
rationale behind every rule and piece of knowledge can be discussed and documented
during meetings and discussions with the designers.
The representation of knowledge, including design rationale, for TRackS is based
on the information model depicted in Figure 4. The model is implemented in Microsoft
Visual Studio and uses the concept of classes for describing the items (rule, BOM,
etc.). Of central importance is the Rationale class, which connects to all other classes
within the design process. Basically, the type of rationale would be different for each
class stated in Figure 4. The required information for the bracket, footpad and roof
data is inherited from the CAD-model class, which is not shown in this picture. The
model shows the relations between the different classes and also the classes required
to create an item; the item would be the final assembly model of a product variant.
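The description of Figure 4 can be sketched as a set of classes with a central Rationale
that every design object references. The class and attribute names below are
reconstructed from the description for illustration; they are not the actual Visual
Studio implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Rationale:
        reason: str                       # why the rule/model/decision exists
        alternatives: list[str] = field(default_factory=list)

    @dataclass
    class Rule:
        expression: str
        rationale: Rationale

    @dataclass
    class CADModel:
        file_name: str
        rationale: Rationale

    # Bracket, footpad and roof data inherit the CAD-model information
    class Footpad(CADModel): ...
    class Bracket(CADModel): ...

    @dataclass
    class Item:
        """Final assembly model of a product variant."""
        bom: list[str]
        rules: list[Rule]
        models: list[CADModel]

    bracket = Bracket("bracket_v12.sldprt",
                      Rationale("reused from an earlier car model (hypothetical)",
                                ["draw a new bracket", "modify another variant"]))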
Figure 3. Main page of the developed system's knowledge repository.
4. Conclusion
Documentation and organization are the two major issues identified during the
implementation of engineering design automation systems. The objective of this paper
was to illustrate the challenges of documenting design knowledge in such systems,
expressed through a case study. As a general overview, the engineers at the company
are satisfied with the results of TRackS: a 50% reduction in the cost of making new
tools and a 40% reduction in the lead time for developing the product are the two
great benefits of using TRackS at the company.
In most design automation systems, reusing product knowledge for modifying an
existing product or developing a new product variant is a significant task. Therefore,
structuring and organizing documentation, including design rationale, and considering
traceability is a necessity in order to reuse and maintain the generic product family
objects embedded in design automation systems. Design knowledge can be modeled in a
system application in order to support the capture of design definition and design
rationale, facilitate high-quality documentation, link models, items and supporting
documents together, and update the documentation easily.
The system developer of TRackS confirms the applicability of the proposed framework
for documenting and modeling the knowledge in general, but further research is
required to fully validate and evaluate the approach and supporting tools for feeding
the system with the right level and quality of information. More study is also
required regarding organization, the integration and dependencies of the system, as
well as communication and collaboration among system users.
Figure 4. Information model for TRackS.
Acknowledgments
The authors express their gratitude to Thule Group Corporation and the Knowledge
Foundation (KK-stiftelsen) in Sweden for financing the research project. Special
thanks are also given to the engineers and managers at Thule for their technical
support.
A Framework and Generator for Large
Parameterized Feature Models
Robert RÜGER and Georg ROCK
Trier University of Applied Sciences, Schneidershof, Trier, Germany
Abstract. The customer-oriented individualization of products is becoming more and
more important for the production industry. Especially in the car manufacturing
industry we can observe a dramatically increasing number of product variants, related
not only to different car concepts but also to different functionality, for example
in car entertainment. In order to cope with this increasing complexity in terms of
product features and their interrelationships, manufacturers increasingly build on a
formal approach called feature modeling that allows for a formal analysis of the
specified variability artifacts with the help of specialized proving engines such as
SAT-solvers. The development and testing of such proving engines is quite complicated,
also because manufacturers do not disclose their real development data for
understandable reasons. Thus, a framework is needed that enables proving-engine
developers to test their engines on nearly real data and to show the potential and
possibilities of their engines without having the real development data at hand. This
paper presents a framework for generating especially large parameterized feature
models, used for load testing and benchmarking feature model analysis tools, as well
as two usage scenarios: the first runs a typical benchmark with large feature models
on two versions of the theorem prover SPASS; the second shows the integration of the
generator in a client-server environment where its functionality is hosted on a
website, i.e. using the browser as a frontend working on tablets and modern
smartphones.
Keywords. variability management, product line engineering, feature model,
parameterized, generator, benchmark
Introduction
The current change towards creating and selling products perfectly suited to the
demands of individual customers has led, and still leads, to an ever growing diversity
of product variants, resulting in increasing complexity in managing the
interdependencies between the products' building blocks, restrictions due to
production and assembly processes, as well as legal regulations. The FODA approach
introduced by Kang [1] and the process of Product Line Engineering as described, for
example, in [2] suggest how to master the described problems by developing a
variability model based on the early requirements, where so-called feature models are
used. Feature models are a means to describe and specify product variability in a model-based
fashion, where abstraction is one of the main advantages. Furthermore, feature models
are simple enough that engineers do not waste time understanding them and can apply
them with an immediate benefit. On the other hand, they are expressive enough to
specify the variability typically arising in product development processes.
Feature models do not only describe the variability-dependent structure; they also
provide a formally sound foundation for the automated analysis of properties by means
of a formal proving approach, for example automatic theorem provers or SAT-solvers.
Thus, feature models become a kind of operational concept that can be executed to
formally discover problems during the very early stages of development.
1. Related Work
Our initial approach focuses on the generation of especially large feature models,
without guaranteeing certain properties such as satisfiability or the absence of dead
features. These models can be used for load testing and benchmarking analysis and
visualization tools.
The Betty-Framework [3] provides similar functionality, but operates on extended
models with attributes and cardinalities and takes a more controlled approach to the
generation of test instances via metamorphic testing, i.e. small feature models with
well-known properties are transformed into larger ones by operations whose effects on
these properties are known. There, models are generated for correctness testing,
solving the oracle problem [4]. This exact approach results in long runtimes and is
not yet feasible for very large feature models.
The tool S.P.L.O.T. (Software Product Lines Online Tools,
http://www.splot-research.org) offers an online feature model editor for manual
creation, a generator for 3-CNF feature models, as well as an analyzer based on binary
decision diagrams and SAT-solvers. In [5] the authors describe how a consistency check
of feature models can be considered as a SAT problem and show that such problems are
especially hard in the transition between underconstrained and overconstrained
instances. They also show empirically that 3-CNF feature models should not fall into
this category.
2. Feature Models
Feature models consist of a hierarchy of features spanning a tree, as depicted in
Figure 1 for a small example mobile phone product line. Furthermore, so-called
orthogonal cross-tree-constraints between arbitrary features are used. Each feature is
represented by a rectangle; relations between features are shown as lines. The
relation type is indicated by the line type:

- line ending: a filled circle at the line end represents a mandatory feature, an
empty circle describes an optional feature.
- arcs: an arc spanning across multiple lines below a feature describes a selection
of child features, with a filled arc meaning a logical OR, i.e. one to all child
features can be selected, and an empty arc representing a logical XOR, i.e. exactly
one child feature has to be selected.
- cross-tree-constraints: dashed lines linking features in a non-hierarchical way,
i.e. running across different subtrees; they can be either requires-constraints,
described by a one-headed arrow, or excludes-constraints, with a double-headed arrow.

This set of relations between features describes all possible products, so-called
configurations. A configuration is a selection out of a set of features $F$, denoted
by a pair $(S, R)$ with $S$ being the selected and $R$ the unselected features, i.e.
$S, R \subseteq F$ and $S \cap R = \emptyset$. A configuration is called complete if
$S \cup R = F$; it is called a partial configuration if $S \cup R \subset F$.
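A minimal sketch of this definition, with illustrative names of our own choosing (the paper itself defines only the mathematical notion):

import java.util.Set;

// Hypothetical encoding of a configuration (S, R) over a feature set F.
record Configuration(Set<String> selected, Set<String> unselected) {

    // Well-formedness: S and R must be disjoint (S ∩ R = ∅).
    boolean isValid() {
        return selected.stream().noneMatch(unselected::contains);
    }

    // Complete if every feature of F is either selected or unselected.
    boolean isComplete(Set<String> allFeatures) {
        return allFeatures.stream()
                .allMatch(f -> selected.contains(f) || unselected.contains(f));
    }
}

For example, over F = {Phone, Calls, GPS}, the pair S = {Phone, Calls}, R = {} is a valid but partial configuration, since GPS is still undecided.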
2.1. Analysis of Feature Models
Table 1. Mapping of feature model relations to propositional logic.
Feature models can be formally transformed to propositional logic (see Table 1) by
assigning a literal to each feature and transforming each relation into a term [6, 7].
Analysis can then be accomplished using tools operating on propositional logic, e.g.
SAT-solvers like SAT4j [8] and theorem provers like SPASS [9]. An example of a typical
analysis is checking whether the model is consistent, i.e. whether there exists a
configuration with the root feature selected such that the corresponding propositional
logic expression evaluates to true.
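As an illustration, the sketch below encodes a tiny model (a root with one mandatory and one optional child) as clauses and runs a consistency check with SAT4j [8]. The mandatory/optional mapping follows the usual translation; the variable numbering and clause construction are our own illustration, not the paper's code.

import org.sat4j.core.VecInt;
import org.sat4j.minisat.SolverFactory;
import org.sat4j.specs.ISolver;

public class ConsistencyCheck {
    public static void main(String[] args) throws Exception {
        // Variables: 1 = Phone (root), 2 = Calls (mandatory), 3 = GPS (optional)
        ISolver solver = SolverFactory.newDefault();
        solver.newVar(3);
        solver.addClause(new VecInt(new int[]{1}));      // root is selected
        solver.addClause(new VecInt(new int[]{-1, 2}));  // Phone -> Calls
        solver.addClause(new VecInt(new int[]{-2, 1}));  // Calls -> Phone (mandatory)
        solver.addClause(new VecInt(new int[]{-3, 1}));  // GPS -> Phone (optional child)
        System.out.println(solver.isSatisfiable()
                ? "model is consistent" : "model is void");
    }
}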
As variability in products and processes increases enormously, feature models have
nowadays grown to several tens of thousands of feature nodes with tens of thousands
of constraints, which makes them hard to analyze with current tools. Analyzing feature
models means solving NP-hard problems, which has to be tackled with specialized
algorithms and/or heuristics.

Figure 1. Feature diagram of a feature model (source: [10]).

2.2. Providing Test Data for Tool Developers
A common problem for tool developers in this field of variability management is the
lack of test data, as the variability described by feature models is typically the
intellectual property of its respective owners. Thus, the question remains how to test
analysis tools with nearly realistic test data.
This paper presents an initial approach providing a framework and a generator for
large parameterized feature models. The framework can simply be extended by further
self-implemented generators, and it comes with a set of predefined parameterized
generators that can be used to directly generate huge sets of test data. The generated
models are relevant not only to the analysis domain but also to the specification and
visualization of large feature models.
In contrast to other approaches, such as the one taken by the Betty-Framework [11], we
do not try to synthesize feature models with well-known properties, because generating
large models with known properties is as hard a problem as analyzing them. Instead we
focus on a fast and stable yet configurable algorithm to provide explorable, easily
reproducible and step-by-step extendable test cases. Stable in this context means that
the generation itself is deterministic and monotone: each model is reliably reproduced
by its parameter set, and model sizes can be stepwise increased or decreased, resulting
in only atomic differences. Thus, analysis problems and singularities in tools can be
narrowed down by tailoring the generated feature models towards the problem cases. To
make the generator universally applicable and easy to extend, it is implemented as a
command-line tool programmed in Java, with a query-able, self-describing service
interface.
We currently concentrate on two example usage scenarios: the first runs a typical
benchmark with large feature models on two versions of a theorem prover (as a test
scenario we have taken two versions of the theorem prover SPASS). The second scenario
shows the integration of the generator in a client-server environment where its
functionality is hosted on a website, i.e. using the browser as a frontend working on
tablets and modern smartphones, making the generator framework available to all
researchers in the described field.
3. Problem Statement
Tools working with feature models basically need to cover three use-cases: (i) the
creation and editing of feature models, (ii) the analysis of properties and flaws, and
(iii) the visualization for the user. Realistic feature models for complex high-tech
products, such as those of the automotive and aeronautics sectors, have reached sizes
not manageable by today's tools, which affects all of the aforementioned use-cases:

- The manual creation and maintenance of feature models by gathering corporate data
from various departments is not feasible for feature models with more than some
hundred features.
- Today's analysis tools are not well suited to large feature models, resulting in
long execution times, as the underlying problems to solve are NP-hard, i.e. the
algorithms' runtime is over-polynomial. Heuristics and specialized algorithms
exploiting inherent properties of feature models might improve on this in the future.
- A static visualization of feature models with more than some hundred features is
not feasible for working with them as a user, and dynamically fading out single
feature groups won't improve on this either, because of the orthogonality of the
cross-tree-constraints [12,13,14].

Tools for feature modeling and analysis are to some extent a relatively new field of
research, and therefore little reference data from real applications exists,
especially for large feature models. Additionally, real-world feature models are
mostly intellectual property and therefore unavailable for testing tools. As a
consequence, research groups tend to build their own test suites consisting of rather
small feature models. A commonly accessible database of test instances
(http://www.splot-research.org) has only just started to evolve.
4. Solution Description
In our approach the generation of feature models fulfills the following initial
requirements:

- The generation process is configurable by a parameter set. This parameter set is
able to approximate industrially sized and structured feature models.
- Feature model generation is randomized, yet reproducible and deterministic, i.e. the
generation process is a function $gen\colon P \times A \to FM$, with $P$ the parameter
set, $A$ the specific generator algorithm used and $FM$ the resulting feature model.
We use the Java pseudo-random number generator with seeding to guarantee reproducible
results.
- The generation process is stable, i.e. the feature model growth is structurally
monotone: a feature model with $n + 1$ features is identical to a feature model with
$n$ features except for one additional feature. Although randomized, the generation
results are not chaotic.
- The generation is fast, i.e. 100,000 features are generated in about 5 seconds on
current consumer-level hardware.
- The framework supports multiple different generators and different export formats.
- A component for transforming feature models to propositional formulas is included,
currently used for the DFG [15] export format.
- The framework is easily extendable with new generators, their parameter sets and new
export formats.
- The framework can be used cross-platform, stand-alone or in a client-server setup.
4.1. Structure of the Framework
As the long-term goal is to provide generation functionality for different
characteristic feature models, we chose a modular, component-based structure, where
individual components can be added and replaced; thus different generators with
individual parameter sets and various export formats can be plugged in.

Figure 2. Structure of the framework.

Currently the framework supports direct usage as a desktop application via a graphical
user interface as well as command-line usage. The main components are: (i) the service
interface, which can be queried and returns a JSON-format descriptor of the available
generators, their respective parameter sets and a list of export formats, and (ii) the
generator interface for generating a feature model in a specified format. For handling
future extensions there are factory components where new generator classes and export
formats can be registered by name and are immediately available through the service
descriptor. An additional transformer component can be used to translate the feature
model IR (Intermediate Representation) to propositional logic. Figure 2 shows the
structure of the framework.
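A possible shape of such a factory component is sketched below; the interface and method names are our own assumptions, intended only to illustrate registration by name.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical factory: generators are registered by name and become
// visible through the queried service descriptor.
final class GeneratorFactory {
    private static final Map<String, Supplier<FeatureModelGenerator>> REGISTRY =
            new LinkedHashMap<>();

    static void register(String name, Supplier<FeatureModelGenerator> ctor) {
        REGISTRY.put(name, ctor);
    }

    static FeatureModelGenerator create(String name) {
        return REGISTRY.get(name).get();
    }

    // These names feed directly into the JSON service descriptor.
    static Iterable<String> availableGenerators() {
        return REGISTRY.keySet();
    }
}

interface FeatureModelGenerator {
    FeatureModel generate(long seed, Map<String, Integer> parameters);
}

class FeatureModel { /* feature tree plus cross-tree-constraints */ }

A call such as GeneratorFactory.register("default", DefaultGenerator::new) would then expose a new generator to the service interface without touching the rest of the framework.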

4.2. The Default Generator
As a proof of concept, a default generator instance was implemented, taking the
following parameters as inputs (a sketch of the resulting algorithm follows below):

- the seed
- the total number of features
- the maximum branching factor, i.e. the maximum number of child features per single
feature
- the maximum number of children in a feature group, i.e. the maximum number of child
features per feature group (or/xor)
- the total number of cross-tree-constraints (cross-tree-constraints are evaluated
concerning their usefulness before they are added to the model)
- the ratio between excludes- and requires-constraints

During generation the maximum-parameters are evaluated per feature node, and the
probability for the instanced number of child features per node is distributed
uniformly.
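A minimal sketch of such a seeded, deterministic tree generation is given below. It is our own illustration of the described behaviour (uniform child counts, reproducibility via java.util.Random seeding), not the paper's implementation.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical default generator: builds a feature tree breadth-first,
// drawing each node's child count uniformly from [0, maxBranching].
class DefaultGenerator {
    static List<int[]> generate(long seed, int totalFeatures, int maxBranching) {
        Random rnd = new Random(seed);           // same seed => same model
        List<int[]> edges = new ArrayList<>();   // parent -> child pairs
        List<Integer> frontier = new ArrayList<>(List.of(0));
        int next = 1;
        while (next < totalFeatures && !frontier.isEmpty()) {
            int parent = frontier.remove(0);
            int children = rnd.nextInt(maxBranching + 1);   // uniform child count
            if (frontier.isEmpty() && children == 0)
                children = 1;                    // keep the tree growing
            for (int c = 0; c < children && next < totalFeatures; c++) {
                edges.add(new int[]{parent, next});
                frontier.add(next++);
            }
        }
        return edges;
    }
}

Because the random draws depend only on the seed and visit order, requesting totalFeatures + 1 replays the same sequence and simply appends one node, which matches the stability requirement stated above.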
5. Application to Real World Examples
This section shows two typical use-cases for the generation of synthetic feature
models: (i) the inclusion in a benchmarking setup testing execution times and
consistency between the results of two different versions of SPASS [9], and (ii)
providing feature model generation via the world wide web.


Figure 3. Process chain for benchmarking.
5.1. Scripted Usage for Benchmarking
The tools SPASS 3.7 and SPASS-SATT can be used to analyze properties of feature
models. SPASS is a first-order logic theorem prover and can take a feature model as a
propositional formula as input. Analyses like checking for model consistency can be
performed by adding rules, e.g. setting the root feature to true. SPASS-SATT is a
specialized version of SPASS tailored to SAT problems. Please note that we only want
to present an example scenario with two different provers involved; we know, of
course, that comparing a generalized first-order theorem prover with a specialized
SAT solver does not make sense and that the outcome is clear. We focus on the scenario
settings. When our framework is publicly available, it can be used to benchmark each
combination on an arbitrarily large set of arbitrarily large feature models.
R. Rger and G. Rock / A Framework and Generator for Large Parameterized Feature Models 339
In this use-case we show how to use the feature model generator in a scripted chain
of various tools to benchmark SPASS 3.7 and SPASS-SATT. Figure 3 depicts the steps in
building the tool-chain. Each step is executed by a command-line tool, and the results
are either transferred directly to the next step via input and output streams or via
temporary files.
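Such a chain can be scripted in many ways; the sketch below drives hypothetical command-line invocations from Java. All tool names, flags and file names are placeholders of our own, not the actual interfaces of the generator or of SPASS.

import java.io.File;
import java.util.List;

// Hypothetical benchmark driver: generate a model, then time a prover run,
// doubling the model size each iteration.
public class BenchmarkChain {
    public static void main(String[] args) throws Exception {
        for (int features = 512; features <= 1_048_576; features *= 2) {
            File dfg = new File("fm_" + features + ".dfg");
            run(List.of("java", "-jar", "fmgen.jar",          // placeholder CLI
                    "--seed", "42", "--features", String.valueOf(features),
                    "--format", "dfg", "--out", dfg.getPath()));
            long t0 = System.nanoTime();
            run(List.of("SPASS", dfg.getPath()));             // placeholder invocation
            System.out.printf("%d features: %.2f s%n",
                    features, (System.nanoTime() - t0) / 1e9);
        }
    }

    private static void run(List<String> cmd) throws Exception {
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}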
As a test suite we used successively growing feature models, starting with 512
features and 5 cross-tree-constraints and doubling these numbers for each instance,
up to about 1,000,000 features. The measured generation timings on our reference
machine (Core i7 M620 @ 2.67 GHz, 8 GB RAM, Windows 7) went up to 22 seconds for the
largest instance. Figure 4 shows the timings for the feature model generation.
Figure 5 compares the timings for SPASS 3.7 and SPASS-SATT. The analysis task was to
check whether the models are consistent, i.e. whether the respective propositional
formula is satisfiable. As we used a timeout of 5 minutes, there are no results for
SPASS 3.7 on the larger feature models. One can see that SPASS-SATT is, as expected,
considerably faster and shows approximately linear runtime in the number of features.
SPASS 3.7 shows approximately quadratic runtime (note the logarithmic scales in
Figures 4 and 5), and its application to larger instances is not feasible. As
mentioned before, SPASS 3.7 is a general first-order theorem prover.


Figure 4. Benchmarking results for the feature model generation.


Figure 5. Benchmarking results for the feature model analysis with SPASS 3.7 and SPASS-SATT.

Another scenario for load testing with this tool chain is comparing the results
returned by each tool. In our case, both tools returned identical results. This is not
a proof of correctness, but had we found differences, it would have clearly shown that
one of the tools was not working correctly. Thus automatic testing with mass data can
serve as a regression test between versions, which would hardly be possible with
handcrafted feature models.
5.2. Feature Model Generator as Web Service
As the generator framework is implemented as a cross-platform console application, it
is easily integrated as a service in any web server that can run Java. In our
prototype we used Node.js (a server-side JavaScript runtime based on Google's V8
engine, http://nodejs.org) to implement a web server which accepts service queries and
feature-model-generation requests from a website.
A typical communication from a website consists of two message round-trips. First, the
client sends a service query asking for the available generators, their parameter sets
and the available export formats. The Node.js web server validates the request, passes
it to the framework and serves the result back to the client. As the service
descriptor returned by the framework is already in JSON format, no conversion is
necessary at this point. When the client receives the service descriptor object, it
can dynamically construct the website's GUI, i.e. showing comboboxes for generator and
format selection and input fields for parameterization. In a second step, the client
sends the user's selection to the server, where it is validated and passed through to
the framework. A feature model is generated and written to the standard output stream.
The web server listens on this stream, encapsulates the outgoing result into a JSON
object and sends it back to the client, where the feature model might be shown as text
or processed further. Figure 6 shows a simple website which adapts the input fields
for the user according to the returned service-descriptor information and shows the
returned feature model in DFG format.
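From the client's perspective, the two round-trips could look as follows. The endpoint paths, parameter names and the JSON body are placeholders of our own; the real names are defined by the service descriptor.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client side of the two round-trips described above.
public class GeneratorClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Round-trip 1: query the self-describing service interface.
        HttpRequest query = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/service")).GET().build();
        String descriptor =
                client.send(query, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println("descriptor: " + descriptor); // JSON: generators, formats

        // Round-trip 2: request a model with the user's parameter selection.
        String params = "{\"generator\":\"default\",\"seed\":42,"
                + "\"features\":1024,\"format\":\"dfg\"}";
        HttpRequest generate = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(params)).build();
        System.out.println(client.send(generate,
                HttpResponse.BodyHandlers.ofString()).body());
    }
}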


Figure 6. Sample website demonstrating client-server usage of the framework.
6. Conclusion
Our generator framework provides a fast method for the automated creation of large,
reproducible feature models, usable for load testing and benchmarking analysis tools.
It is easily extendable with new generators and export formats, and works
cross-platform, stand-alone or in client-server setups.
The next steps for improving its utility are: (i) making the generator functionality
publicly available and providing documentation for usage and extensibility, and (ii)
developing more specialized generators which create feature models fitted to new
industries or use-cases.
References
[1] K.C. Kang, S.G. Cohen, J.A. Hess, W.E. Novak and A.S. Peterson, Feature-Oriented Domain Analysis (FODA) Feasibility Study. Technical Report, Carnegie Mellon University Software Engineering Institute, Nov. 1990.
[2] K. Pohl, G. Böckle and F. van der Linden, Software Product Line Engineering: Foundations, Principles and Techniques. Springer, 2005.
[3] S. Segura, J.A. Galindo, D. Benavides, J.A. Parejo and A. Ruiz-Cortés, BeTTy: benchmarking and testing on the automated analysis of feature models. Proceedings of the Sixth International Workshop on Variability Modeling of Software-Intensive Systems, pp. 63-71, 2012.
[4] E.J. Weyuker, On Testing Non-Testable Programs. The Computer Journal, 25(4):465-470, 1982.
[5] M. Mendonca, A. Wasowski and K. Czarnecki, SAT-based analysis of feature models is easy. In Proceedings of the 13th International Software Product Line Conference, SPLC 2009, pp. 231-240, Pittsburgh, PA, USA, 2009.
[6] D. Batory, Feature models, grammars, and propositional formulas. Proceedings of the 9th International Conference on Software Product Lines, SPLC'05, pp. 7-20, Rennes, France, 2005.
[7] T. Thüm, D. Batory and C. Kästner, Reasoning about edits to feature models. Proceedings of the 31st International Conference on Software Engineering, pp. 254-264, Washington, DC, 2009.
[8] SAT4j, the boolean satisfaction and optimization library in Java: http://www.sat4j.org
[9] C. Weidenbach, D. Dimova, A. Fietzke, R. Kumar, M. Suda and P. Wischnewski, SPASS Version 3.5. In 22nd International Conference on Automated Deduction, CADE 2009, LNCS 5663, pp. 140-145, 2009.
[10] D. Benavides, S. Segura and A. Ruiz-Cortés, Automated analysis of feature models 20 years later: A literature review. Information Systems, 35(6):615-636, 2010.
[11] S. Segura, R.M. Hierons, D. Benavides and A. Ruiz-Cortés, Automated Metamorphic Testing on the Analyses of Feature Models. Information and Software Technology, 53(6):245-258, 2011.
[12] C. Junk, Konzeption und Entwicklung einer Visualisierung varianter Entwicklungsstrukturen (Concept and Development of a Visualization of Variant Development Structures). Master Thesis, University of Applied Sciences Trier, Trier, 2011.
[13] D. Blanke, Konzeption und prototypische Realisierung einer Visualisierung varianter Entwicklungsstrukturen (Concept and Prototypical Realization of a Visualization of Variant Development Structures). Master Thesis, University of Applied Sciences Trier, Trier, 2013.
[14] G. Botterweck, S. Thiel, D. Nestor, S. bin Abid and C. Cawley, Visual Tool Support for Configuring and Understanding Software Product Lines. In Proceedings of the 12th International Software Product Line Conference, SPLC 2008, pp. 77-86, Limerick, Ireland, 2008.
[15] R. Hähnle, M. Kerber, C. Weidenbach and R.A. Schmidt, Common Syntax of the DFG-Schwerpunktprogramm "Deduktion", Version 1.5.
Visual Planning and Scheduling of
Industrial Projects With Spatial Factors
Vitaly SEMENOV (1), Anton ANICHKIN, Sergey MOROZOV, Oleg TARLAPAN and Vladislav ZOLOTOV
Institute for System Programming, Russian Academy of Sciences, Russia
(1) Corresponding author: Professor, Dr.Sc.; Institute for System Programming of the Russian Academy of Sciences, 25 Alexander Solzhenitsyn st., Moscow, 109004, Russia; Phone: +7 (495) 9125317; Fax: +7 (495) 9121524; E-mail: sem@ispras.ru; Web: http://www.ispras.ru
Abstract. With the increasing complexity and scale of industrial projects, the need
for comprehensive and trustworthy methods for planning and scheduling becomes more and
more apparent. Implemented in recent project management systems, the traditional
methods aid in project scheduling based on activity durations, precedence
relationships, explicit timing constraints, and resource limits, all assumed by the
Resource-Constrained Project Scheduling Problem (RCPSP). However, these methods have
many shortcomings for industrial projects where spatial factors play a critically
important role. The objective of this paper is to present an advanced Visual
Scheduling Method (VSM) for solving the Generally Constrained Project Scheduling
Problem (GCPSP), which extends the classical RCPSP statement by taking into account
additional spatial factors such as product element collisions, missing supporting
neighbouring elements, and workspace congestion. In the paper we provide a holistic
framework of the method and illustrate how feasible project schedules can be generated
under complex spatio-temporal constraints in a highly automatic and visually
interpretable way. A software implementation of the method and its prospects for
application in industrial practice are discussed too.
Keywords. Planning and scheduling, resource-constrained project scheduling problem,
4D modelling, spatio-temporal validation
Introduction
With the increasing complexity of large-scale industrial projects, such as building a
new skyscraper, manufacturing an airplane or deploying an oil platform, the need for
comprehensive and trustworthy methods for project planning and scheduling becomes more
and more apparent. Ultimately, such methods would make it possible to anticipate and
avoid potential problems at the early planning phases and to reduce risks and waste at
the implementation phases, which are often subject to delays and reworks.
Project planners typically use traditional network techniques, Gantt charts,
line-of-balance diagrams and earned value analysis plots, as well as the fundamental
Critical Path Method (CPM), the Program Evaluation and Review Technique (PERT), and
various heuristic methods for the Resource-Constrained Project Scheduling Problem
(RCPSP) and the Time-Constrained Project Scheduling Problem (TCPSP) [1,2,3].
Implemented in recent project management systems such as Microsoft Project, Oracle
Primavera and Asta Powerproject, these methods aid in project scheduling based on
activity durations, precedence relationships, explicit timing constraints, and
resource utilization limits. Nevertheless, these methods and tools have many
shortcomings for complex industrial projects where divergent spatial factors play a
critically important role. Their use cannot guarantee the correctness of prepared
schedules in terms of the absence of spatial conflicts, commonly related to technology
violations and management imperfections.
Previous attempts to consider these factors were successful only for very particular
statements, such as space scheduling [4], dynamic layout planning [5], horizontal and
vertical logic scheduling [6], workspace congestion mitigation [7], scheduling
multiple projects with movable resources [8], spatial scheduling of repeated and
grouped activities [9], and motion planning [10], and did not result in a holistic
framework for project planning and scheduling under complex spatio-temporal
constraints.
Improved spatial reasoning and increased communication efficiency are key attractive
features of the emerging 4D modelling technologies [11,12]. These technologies have a
tremendous potential to raise the certainty of whole industrial programmes due to the
ability of the project stakeholders to simulate and visualize project progress in the
space dimensions and across time. Recent 4D modelling systems like Autodesk
Navisworks, Bentley Schedule Simulator and Intergraph Schedule Review provide basic
functions for these purposes. Nevertheless, these systems are quite limited in the
detection of sophisticated spatial conflicts and are incapable of producing schedules
free of such defects.
The objective of this paper is to present an advanced Visual Scheduling Method (VSM)
capable of generating feasible schedules under complex spatio-temporal constraints in
a highly automatic and visually interpretable way. In Section 1 we start with the
classical RCPSP statement and then provide a mathematical formalization of the
Generally Constrained Project Scheduling Problem (GCPSP) that extends it by taking
into account additional spatial constraints. In Section 2 we provide a holistic
framework of the VSM method and explain how the GCPSP problem can be reduced to
well-studied RCPSP statements and resolved through a consolidated process of planning,
traditional scheduling and visual modelling. In the Conclusion we outline the
prospects for application of the method in industrial practice.
1. Generally Constrained Project Scheduling Problem
The classical RCPSP problem can be stated as follows. We assume that a single project
is represented by a network with $N$ activities on its nodes and $M$ links on its
arcs. Every activity $a_n$, $n = 1,\ldots,N$, implies an uninterrupted process
beginning at the time $t_n$ and having the fixed duration $d_n > 0$. Every link $l_m$,
$m = 1,\ldots,M$, reproduces the finish-start precedence relation between a
predecessor activity $a_{Pr(m)}$ and a successor activity $a_{Sc(m)}$ and forces the
successor activity not to be started earlier than the given lag $dl_m$ after its
predecessor has been finished. A successor activity having only zero-lag links cannot
start until all its predecessors have been finished. For the formalization we
introduce unique dummy source and sink activities $a_1$ and $a_N$ of zero duration,
$d_1 = 0$ and $d_N = 0$, and link them with the project activities having open starts
and open ends respectively. In order to be processed, an activity $a_n$ may require
$u_{nk}$ units of the renewable resource $r_k$ during its execution. We assume a
constant availability of every resource $r_k$, $k = 1,\ldots,K$, denoted as $U_k$, and
require that it is not exceeded at any time point $t$ with $t_1 \le t \le t_N$
throughout the whole project. In order to keep the problem simple, activity splitting
and resource levelling are not considered. The objective of the RCPSP is to schedule
the activities such that the makespan of the project is minimized, all the precedence
relations are satisfied and the resource availability limits are not exceeded. Let
$A(t) = \{\, n \mid n = 1,\ldots,N,\; t_n \le t < t_n + d_n \,\}$ denote the index set
of the activities in progress at the time $t$; then the RCPSP problem can be
mathematically formulated as follows:

$\min t_N$   subject to   (1)

$t_{Sc(m)} \ge t_{Pr(m)} + d_{Pr(m)} + dl_m, \quad m = 1,\ldots,M$   (2)

$\sum_{n \in A(t)} u_{nk} \le U_k, \quad k = 1,\ldots,K, \quad \forall t\colon t_1 \le t \le t_N$   (3)

The objective function (1) minimizes the completion time $t_N$ of the unique sink
activity and thereby the makespan of the whole project. Constraints (2) take into
consideration the links between each pair of preceding and succeeding activities.
Finally, constraints (3) limit the total resource utilization at each time point to
the available amounts. To be correct from a mathematical point of view and to have a
solution, the RCPSP must avoid any link cycles and exclude excessive resource
utilization by individual activities: $u_{nk} \le U_k$, $n = 1,\ldots,N$,
$k = 1,\ldots,K$.
We extend the RCPSP problem by associating the project activities with product and
process elements $s_l$, $l = 1,\ldots,L$, and by defining behavioural patterns for the
associated elements. For generality, we suggest that the elements are geometrically
represented by solids, being connected, compact, orientable 3-dimensional manifolds in
Euclidean space. Typically they are objects of simple shape: cuboids, cylinders,
prisms, pyramids, spheres, cones, polyhedra, extrusions. But they can also be compound
objects constructed from primitives by means of the set-theoretic operations of union,
intersection and difference adopted by constructive solid geometry (CSG) modelling and
traditionally denoted as $\cup$, $\cap$ and $\setminus$ correspondingly [13].
The patterns are necessary to define the dynamic behaviour of the associated elements
as the activities are performed. As an example, the pattern "install" may correspond
to the appearance of the element in the given position at the beginning of the
performed activity, the pattern "remove" to the disappearance of the element at the
end of the activity, the pattern "deploy" to the temporary appearance of the element,
and "displace" to the movement of the element along a predefined path, etc. Figure 1
gives an example of visual modelling of a project plan presented in a Gantt chart.
Being associated with the project activities $A_1$, $A_2$, $A_3$, $A_4$ through
assigned patterns, the elements $S_1$, $S_2$, $S_3$, $S_4$ exhibit proper dynamic
behaviour. Being not associated, the element $S_0$ remains static throughout the
project duration. A sequence of 3D scenes at the bottom of Figure 1 illustrates the
project progress at the chosen focus times $T_1$, $T_2$, $T_3$, $T_4$, $T_5$, thereby
allowing the user to reason about the project using a clearly interpretable visual
model rather than the symbols of the Gantt chart.
Here we admit that several elements can be associated with the same activity and the
same element can be assigned to different activities. To avoid ambiguity in the visual
representation of the scene and to resolve the situations when the same element $s_l$
is associated with several activities $a_{n_1}, a_{n_2}, \ldots, a_{n_m}$
simultaneously, we require that these activities are connected by finish-start links
of zero lag and form a sequential chain, so that
$t_{n_1} + d_{n_1} \le t_{n_2} \le t_{n_2} + d_{n_2} \le t_{n_3} \le t_{n_3} + d_{n_3} \le \ldots \le t_{n_m}$.
Then the pattern associated with the activity $a_{n_1}$ predetermines the visual state
of the element $s_l$ in the interval $t_{n_1} \le t < t_{n_2}$, the next pattern in
the interval $t_{n_2} \le t < t_{n_3}$, and the final pattern in the semi-infinite
interval $t \ge t_{n_m}$.
Figure 1. Visual modelling of the project plan.

The appearance of the element $s_l$ at the time moment $t$, denoted as $s_l(t)$,
implies its presence status in a scene, scale, position, orientation, and perhaps
colour attributes. A scene is called pseudo-dynamic if the changes occur only at
discrete points in time, coinciding with the beginnings and ends of the activities.
Further we restrict the consideration to the case of pseudo-dynamic scenes originated
from the emerging 4D modelling technologies as applied to large-scale industrial
projects [14].
The elements not associated with activities are a priori static elements of the scene,
which are not subject to any schedule changes. An index set of static elements will be
denoted as $S$, an index set of the dynamic elements as $D$, and an index set of the
dynamic elements present in the scene at the time $t$ will be denoted as $D(t)$.
Spatial issues caused by static elements are beyond our further consideration, as
they can be identified and corrected at the problem statement phase, before the
scheduling process runs.
In order to avoid clashes among dynamic elements we impose the following additional
requirement upon the project schedule:

$s_{l'}(t) \cap s_{l''}(t) = \emptyset, \quad \forall l', l'' \in D(t),\; l' \ne l'', \quad \forall t \ge t_1$   (4)
We admit here that under certain behaviour patterns and element positions the
constraints (4) cannot be satisfied. For example, if two elements must be installed at
the same position and be left in that position for the whole project period, any
sequence of
the associated activities cannot avoid a clash between the elements. It is worth
noting here that the resource constraints (3) also cannot be satisfied under certain
parameters. For example, if the resources utilized by any activity exceed the
available total limit, no schedule resolves this situation. Nevertheless, the
conditions (4) remain quite meaningful for most scheduling problems, to the same
degree as the conditions (3).
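To make condition (4) concrete, the following sketch checks pairwise clashes among the dynamic elements present at a time t, using axis-aligned bounding boxes as a simplified stand-in for the solid-solid intersection tests assumed above. The class and method names are our own illustration.

import java.util.List;

// Axis-aligned bounding box as a simplified stand-in for a solid s_l(t).
record Box(double minX, double minY, double minZ,
           double maxX, double maxY, double maxZ) {
    boolean intersects(Box o) {
        return minX < o.maxX && o.minX < maxX
            && minY < o.maxY && o.minY < maxY
            && minZ < o.maxZ && o.minZ < maxZ;
    }
}

class ClashCheck {
    // Condition (4): no two distinct dynamic elements present at time t overlap.
    static boolean clashFree(List<Box> presentAtT) {
        for (int i = 0; i < presentAtT.size(); i++)
            for (int j = i + 1; j < presentAtT.size(); j++)
                if (presentAtT.get(i).intersects(presentAtT.get(j)))
                    return false;
        return true;
    }
}

Since the scenes are pseudo-dynamic, it suffices to run such a check only at the discrete time points where activities begin or end.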
Another important requirement is that the schedule must reproduce physically and
technologically feasible sequences of activities. In our paper [11] so-called "join"
constraints have been introduced and investigated to identify spatial issues when a
newly installed element "hangs in the air" and cannot be supported by neighbouring
elements from different sides. If an element is uninstalled, a similar requirement
should be imposed upon the remaining elements to avoid their fall. To define join
constraints we prescribe a set of vectors $E = \{ e_1, e_2, \ldots, e_k \}$ and
require the existence of neighbours along these directions at an appropriate distance.
To simulate a gravity factor, having an obvious physically sound meaning, one vector
in the top-down direction must be defined in the join constraint. The vector length
should match the tolerance of the performed collision analysis. Here we suggest that
any static element is rigidly fixed in a predefined position and can serve as a
foundation supporting other elements; therefore, join constraints should be stated
only for dynamic elements. These requirements can be formalized as follows:

$\forall l' = 1,\ldots,L,\; l' \in D(t),\; \forall t \ge t_1\colon\; \exists l'' = 1,\ldots,L,\; \exists e \in E \;|\; T_e(s_{l'}(t)) \cap s_{l''}(t) \ne \emptyset$   (5)

where the operator $T_e(s)$ results in the translation of the element $s$ by the
vector $e$. As opposed to the clash constraints (4), which assume the absence of
neighbouring elements, the join conditions (5) require the existence of such elements
in nearby positions along one of the prescribed directions.
To take workspace mitigation factors into account, the RCPSP statement is extended by
the next group of spatial constraints. Let $w_i$, $i = 1,\ldots,I$, be project
workspaces which are geometrically represented as solids. By consuming $u_{nk}$ units
of the resource $r_k$ with the corresponding spatial rate $v_k$, the activity $a_n$
utilizes the workspace $w_{i(n,k)}$ with the factor
$\nu_{nk} = u_{nk} v_k / v(w_{i(n,k)})$, where the function $v(w)$ returns the volume
of the corresponding workspace. The notation $i(n,k)$ is used here to emphasize that
the workspace $w_i$ is associated with the activity $a_n$ and the related resource
$r_k$. To reflect the fact that the workspace $w_i$ is utilized only while the
activity performs, we write $w_{i(n,k)}(t)$.
Two activities $a_n$, $a_{n'}$ are interfering if they overlap in time and some of
their workspaces intersect, so that $w_{i(n,k)} \cap w_{i(n',k')} \ne \emptyset$ or,
equivalently, $v(w_{i(n,k)} \cap w_{i(n',k')}) > 0$. To be processed concurrently, the
activities must avoid workspace competition and congestion. By limiting the
utilization and congestion factors we thus require that the workspace capacity be
large enough to allocate all the needed resource units, including the units consumed
by other interfering activities. Under the suggestion that the workspace capacity is
consumed by different activities and resources additively, this requirement takes the
form: for $(n,k) \in I(A,U)$ and $\forall t$, $t_n \le t < t_n + d_n$,

$\sum_{(n',k') \in I(A,U)} u_{n'k'} v_{k'} \, \frac{v(w_{i(n',k')}(t) \cap w_{i(n,k)}(t))}{v(w_{i(n',k')})} \;\le\; v(w_{i(n,k)})$   (6)
where $I(A,U)$ is the set of index pairs of all the related activities and resources,
i.e. those with $u_{nk} \ne 0$. The summation on the left side of the conditions (6)
is taken over all pairs of related activities and resources whose workspaces can
intersect the workspace $w_{i(n,k)}$ utilized by the activity $a_n$ and the resource
$r_k$ during the execution interval $t_n \le t < t_n + d_n$. The constraints (6) can
be seen as very similar to the collisionless conditions (4) if we interpret workspaces
as elements with the "deploy" behavioural pattern. However, the constraint expression
(6) makes it possible to quantify the workspace congestion and admits that workspace
solids may intersect each other to a certain extent, as opposed to the conditions (4),
which prevent any clashes among them. More details about the conditions formalized
above can be found in [11].
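A minimal sketch evaluating the left side of condition (6) for one target workspace is given below, assuming that volumes and pairwise intersection volumes are supplied by a geometry kernel; all names are illustrative.

import java.util.List;

// One (activity, resource) pair's workspace demand at time t.
record Demand(double units,          // u_{n'k'}
              double spatialRate,    // v_{k'}
              double wsVolume,       // v(w_{i(n',k')})
              double overlapWithTarget) { } // v(w_{i(n',k')}(t) ∩ w_{i(n,k)}(t))

class CongestionCheck {
    // Left side of (6): volume claimed inside the target workspace by all
    // related demands, each scaled by its overlap fraction.
    static boolean withinCapacity(List<Demand> related, double targetVolume) {
        double claimed = 0.0;
        for (Demand d : related)
            claimed += d.units() * d.spatialRate()
                     * (d.overlapWithTarget() / d.wsVolume());
        return claimed <= targetVolume; // capacity v(w_{i(n,k)})
    }
}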
Returning to the original RCPSP formulation: it assumes minimization of the makespan
of the whole project (1) under the precedence relations (2) and resource limits (3).
By relaxing the resource constraints (3), the RCPSP reduces to the CPM case, which can
be solved by forward recursion in polynomial time. But in the general statement the
RCPSP belongs to the class of NP-hard problems [15,16]. An RCPSP problem extended by
the spatial constraints (4)-(6) is called the GCPSP problem, assuming that both
resource limits and spatial factors induce a more general class of mathematical
constraints. Any such constraint can be non-trivial, like those formalized above, and
quite difficult computationally, even to check. The only assumption made here is that
the constraints can be resolved by identifying conflicting activities and prioritizing
their execution. Being a generalization of the RCPSP problem, the GCPSP belongs to the
class of NP-hard problems too.
2. Visual Scheduling Method
Existing analytical methods, like dynamic programming procedures and branch-and-bound
techniques, are too computationally expensive to find optimal solutions for the
NP-hard scheduling problems in most practical cases. On the other hand, heuristic
approaches, in particular priority-rule-based scheduling methods, provide reasonable
solutions for RCPSP problems in reasonable time, which makes it possible to use them
within commercial project management systems [17].
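For orientation, the sketch below outlines a priority-rule-based serial schedule generation scheme of the kind referenced above. It handles precedence only and assumes an acyclic precedence graph; a full RCPSP variant would additionally delay activities that violate resource limits. All names are our own.

import java.util.Arrays;
import java.util.List;

// Serial schedule generation: repeatedly pick the eligible activity with the
// best priority value and start it at the earliest precedence-feasible time.
class SerialSGS {
    static int[] schedule(int[] dur, List<List<Integer>> preds, double[] priority) {
        int n = dur.length;
        int[] start = new int[n];
        Arrays.fill(start, -1);                          // -1 = not yet scheduled
        for (int step = 0; step < n; step++) {
            int best = -1;
            for (int a = 0; a < n; a++) {
                if (start[a] >= 0) continue;
                boolean eligible = preds.get(a).stream()
                        .allMatch(p -> start[p] >= 0);   // all predecessors placed
                if (eligible && (best < 0 || priority[a] > priority[best]))
                    best = a;
            }
            int es = 0;                                  // max finish time of preds
            for (int p : preds.get(best))
                es = Math.max(es, start[p] + dur[p]);
            start[best] = es; // an RCPSP SGS would also shift for resource limits
        }
        return start;
    }
}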
However, these methods cannot be applied to the specified GCPSP problem for several
reasons. First of all, the spatial constraints (4)-(6) cannot be interpreted in terms
of renewable resources. Workspaces might seem similar to independent renewable
resources, but the analogy holds only in very special cases where workspace solids do
not intersect or overlap entirely. Another reason is the underdetermined character of
most project plans, which do not necessarily contain all the technological links.
Additional spatial constraints can help to identify possible problems in activity
sequencing and to fix them, for example by restoring the lost precedence relations;
however, the existing scheduling methods do not provide proper mechanisms to do that.
And finally, as opposed to the resource constraints (3), which can always be resolved
by delaying any of the conflicting activities, the prioritization order in GCPSP
solving may be fatal not only for the search for an optimal or suboptimal solution,
but also for finding any solution satisfying all the imposed constraints. Often this
is impossible without manual interventions by a planner skilled in advanced 4D
modelling tools.
The proposed VSM method is intended to solve GCPSP problems iteratively by alternating
and combining three underlying phases:

- Planning Phase, in which the user forms a work breakdown structure for the whole
project, fills it with individual activities, establishes precedence relations between
them, defines resource utilization limits, and imposes spatial constraints. The
original plan may be corrected in subsequent iterations by adding or removing links;
- Scheduling Phase, in which an equivalent RCPSP (a CPM problem if resources are
unlimited) is solved by one of the existing methods under the optimistic assumption
that the spatial constraints are automatically satisfied by accounting for the
available links;
- Modelling Phase, in which the current schedule is visually simulated and checked
against the spatial constraints. If all the constraints are satisfied, then the
original GCPSP problem has been solved. Otherwise, the method returns to the Planning
Phase and repeats the iterations until all the constraints have been satisfied.
The VSM method assumes that any violated GCPSP constraint can be resolved by
identifying the conflicting activities in the Modelling Phase and prioritizing their
execution in the Planning Phase by recovering the lacking technological links between
them. As more and more links are added, the project schedule may undergo significant
changes, which in turn may lead to new spatial issues. Such issues have to be fixed by
introducing new links, at the risk of forming link loops. A loop would mean that the
plan is impossible to realize and, therefore, the user must decide which activity
sequences are really feasible and which redundant links should be removed. If no loops
are formed, the equivalent RCPSP problem can always be resolved in the Scheduling
Phase. Thus, the VSM method tends to solve the GCPSP problem by replacing spatial
constraints with corresponding precedence relations and reducing the original problem
to a well-studied RCPSP or CPM statement through a consolidated process of planning,
scheduling and visual modelling.
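The overall iteration can be pictured as the following skeleton, under the simplifying assumption that conflict resolution yields additional precedence links; the interfaces are our own illustration, not the actual implementation.

import java.util.List;

// Hypothetical skeleton of the VSM iteration: schedule, simulate, check the
// spatial constraints, add recovered links, and repeat until no conflicts remain.
class VsmLoop {
    interface Plan { void addLinks(List<int[]> links); }
    interface Scheduler { int[] solveRcpsp(Plan p); }       // Scheduling Phase
    interface Checker {                                     // Modelling Phase
        List<int[]> findConflicts(Plan p, int[] start);
    }

    static int[] run(Plan plan, Scheduler scheduler, Checker checker) {
        while (true) {
            int[] start = scheduler.solveRcpsp(plan);       // may fail on link loops
            List<int[]> missingLinks = checker.findConflicts(plan, start);
            if (missingLinks.isEmpty()) return start;       // GCPSP solved
            plan.addLinks(missingLinks);                    // Planning Phase (user-guided)
        }
    }
}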
Figure 2 illustrates the VSM method for a project plan represented in a Gantt chart by
the activities $A_1$, $A_2$, $A_3$, $A_4$, $A_5$ and the links connecting the
predecessor $A_2$ with the successors $A_3$, $A_4$, $A_5$. The elements $S_1$, $S_2$,
$S_3$, $S_4$, $S_5$ are associated with the activities through dynamic behavioural
patterns labelled at the activity bars. The element $S_0$ remains static throughout
the project. The plan also assumes the reservation of the workspaces $W_3$, $W_4$ for
the activities $A_3$, $A_4$ respectively. For brevity, we suggest here that resources
are unlimited and the workspace utilization factors are high enough to exclude any
overlap of the workspaces. The project is scheduled as a GCPSP problem under the
constraints avoiding clashes of elements (4), simulating the gravity factor (5), and
preventing workspace congestion (6).
In the first iteration, the method generates a schedule for the original plan in the
CPM formulation without spatial constraints. When checked, however, the schedule
reveals one clash between the elements $S_1$, $S_3$ and one congestion issue for the
workspaces $W_3$, $W_4$ (see Figure 2a). It is easily identified that the detected
issues are due to the conflicting activity pairs $A_1$, $A_3$ and $A_3$, $A_4$
respectively.
Figure 2. Project scheduling using the VSM method: a) clash and workspace conflicts are detected, links are created and the plan is rescheduled; b) a gravity conflict is detected, a link is created and the plan is rescheduled; c) no conflicts remain.

In the second iteration, the conflicting activities are prioritized and linked to
avoid the issues. The activity $A_3$, having the pattern "deploy", must be finished
before the activity $A_1$ with the pattern "install" starts. Taking into account the
latest finish time among the conflicting activities, the activity $A_3$ should be
prioritized over the activity $A_4$, as the inverse order would shift the activity
$A_1$ to a later time and would prevent the minimization of the entire makespan. The
added links are shown in Figure 2b by bold curves. Then the updated plan is
rescheduled and validated again. The newly revealed issue is connected with the
violated gravity constraint for the element $S_5$ deployed by the activity $A_5$.
Spatial analysis of the scene shows that the element $S_1$, installed by the activity
$A_1$, could support it, but only if the activity $A_1$ precedes the activity $A_5$.
In the third and final iteration, represented in Figure 2c, the required link between
the activities $A_1$ and $A_5$ is added and the plan is rescheduled again. The
validation does not detect new issues and the method completes successfully, thereby
resolving the originally stated GCPSP problem.
Thus, the presented method enables the generation of feasible schedules in a highly
automatic and visually interpretable way. Indeed, the computationally expensive phases
of the method, namely the Scheduling Phase and the Modelling Phase, are performed
automatically. User interventions are basically required in the Planning Phase to
prioritize the execution of conflicting activities. Under the assumption that the
original plan has been prepared carefully enough and contains most of the
technological links, we can expect that the number of spatial issues is relatively
small and that their correction will not require considerable effort. Facilities to
automatically generate an issue report and to visualize individual issues as
transparently interpretable 3D scenes simplify the Planning Phase and enable this work
to be carried out in a disciplined manner.


Figure 3. Graphic user interface of the system prototype implementing the VSM method.

To validate the described VSM method, we have implemented a prototype system and
conducted a series of computational experiments for industrial projects of variable
scale. The system is capable of importing construction project data supplied in
standard IFC files and visualizing them concordantly in the Gantt chart, traditional
for most project management applications, and in 3D views, typical of CAD systems. By
shifting the focus time line in the Gantt chart manually, or by running the simulation
in automatic mode, the user can observe the project progress in multiple views from
the most convenient perspectives (see Figure 3). The system is also capable of
rescheduling project plans in the CPM formulation and of checking the obtained
schedules against the spatial constraints assumed by the introduced GCPSP statement. A
generated report helps to identify the product elements that caused the spatial issues
as well as the activities affecting these elements. With the report, the user can
resolve the detected issues by adding links to the original plan and continue the
iterations.
Conclusions
The conducted computational experiments have confirmed the effectiveness and
feasibility of the proposed VSM method for large-scale projects represented by
hundreds and thousands of activities and product elements. However, a significant part
of the CPU resources was spent performing spatial checks and rescheduling the project
many times; these phases appear to be the most expensive and need to be optimized.
This work is planned for the near future. Once optimized, the VSM method looks very
promising for use in industrial practice.
References
[1] D. F. Cooper, Heuristics for scheduling resource-constrained projects: An experimental investigation,
Management Science 22(11) (1976), 1186-1194.
[2] P. Brucker, A. Drexl, R.H. Möhring, K. Neumann, E. Pesch, Resource-constrained project scheduling:
Notation, classification, models and methods, European Journal of Operational Research 112(1)
(1999), 3-41.
[3] T. A. Guldemond, J. L. Hurink, J. J. Paulus, J. M. J. Schutten, Time-constrained project scheduling,
Journal of Scheduling 11 (2008), 137-148.
[4] H. J. Choo, I. D. Tommelein, Space Scheduling Using Flow Analysis, Proceedings IGLC-7 (1999), 299-
312.
[5] P. P. Zouein, I. D. Tommelein, Dynamic Layout Planning Using a Hybrid Incremental Solution Method,
Journal of Construction Engineering and Management 125(6) (1999), 400-408.
[6] W. Y. Thabet, Y. J. Beliveau, HVLS: Horizontal and Vertical Logic Scheduling for Multistory Projects,
Journal of Construction Engineering and Management 120(4) (1994), 875-892.
[7] K.W. Yeoh, David K H Chua, Mitigating Workspace Congestion: A Genetic Algorithm Approach,
EPPM 2012 Conference (2012), 107-118.
[8] T. Hegazy, Optimization of resource allocation and leveling using genetic algorithms, Journal of
Construction Engineering and Management 125(3) (1999), 167-175.
[9] W. Y. Thabet, Y. J. Beliveau, Modeling Work Space to Schedule Repetitive Floors in Multistory
Buildings, Journal of Construction Engineering and Management 120(1) (1994), 96-116.
[10] M. Ellips, S. Davoud, Classic and Heuristic Approaches in Robot Motion Planning - A Chronological
Review, Proceedings of world academy of science, engineering and technology 23 (2007), 101-106.
[11] K. A. Kazakov, V. A. Zolotov, V. A. Semenov, Virtual construction: 4D planning and validation,
Proceedings of the XI International Conference on Construction Applications of Virtual Reality (2011),
135-142.
[12] V. A. Semenov, K. A. Kazakov, V. A.Zolotov, Combined strategy for efficient collision detection in
4D planning applications, In Computing in Civil and Building Engineering, Proceedings of the
International Conference (2010), 31-39.
[13] D.M. Conway, Constructive Solid Geometry Using the Isoluminance Contour Model, Computers and
Graphics 15(3) (1991), 341-347.
[14] V. A. Semenov, K. A. Kazakov, S. V. Morozov, O. A. Tarlapan, V. A. Zolotov, T. Dengenis, 4D
modeling of large industrial projects using spatio-temporal decomposition, eWork and eBusiness in
Architecture, Engineering and Construction (2010), 89-95.
[15] R. Kolisch, A. Sprecher, A. Drexl, Characterization and generation of a general class of resource-
constrained project scheduling problems, Management Science 41(10) (1995), 1693-1703.
[16] S. M. LaValle, Planning algorithms, Cambridge University Press, UK, 2006.
[17] R. Kolisch, Efficient priority rules for the resource-constrained project scheduling problem, Journal of
Operations Management 14 (1996), 179-192.
DMU Management – Product Structure and Master Geometry Correlation

Gülden ŞENALTUN a and Can CANGELİR b
a gsenaltun@tai.com.tr, Turkish Aerospace Industries Inc., Ankara, Turkey 06980
b ccangelir@tai.com.tr, Turkish Aerospace Industries Inc., Ankara, Turkey 06980
Abstract. One of the main principles of Concurrent Engineering is access to up-to-date design information at any moment. 3D models are the basic design information for today's industry. For different departments to work with 3D models concurrently, every model should be managed as positioned in 3D space relative to other models. This brings the term Digital Mock-Up (DMU), the digital design of the product to be developed. In order to use the DMU effectively, design data should be managed in Product Lifecycle Management (PLM) tools with the help of the Product Structure. In addition, 3D modeling rules are also important parameters for an effective DMU. Therefore, these rules should be set and every model should be created according to them.
In this paper, the definition and management of the DMU in the aerospace industry are explained. The relation between the Product Structure and the DMU is detailed, considering different DMU management methods such as the Bottom-Up Method, Top-Down Method and Hybrid Method. These methods are also explained in this paper. Moreover, the Master Geometry is defined and its role in DMU creation is explained using specific design examples from industry.
Keywords. Digital Mock-Up, DMU, Concurrent Engineering, CE, Bottom-Up
Method, Top-Down Method, Hybrid Method, Product Structure, Master Geometry
Introduction
Today's economic environment demands high-quality products at lower costs and in a shorter time than conventional methods allow. In order to establish this type of production environment, the Concurrent Engineering approach should be adopted by organizations, since Concurrent Engineering aims at reducing the total effort in bringing the product from concept to delivery, while meeting the needs of both consumers and industrial customers [1].
According to Prasad, Concurrent Engineering replaces the traditional sequential 'over the wall' approach with a simultaneous design and manufacture spectrum with parallel, less interrelated processes [1].
For the aerospace industry, the development of an aircraft is a very complex undertaking from concept to delivery. All disciplines should work in harmony and there should be continuous and up-to-date information flows between departments. A main principle of

Concurrent Engineering, working with up-to-date data in the same database by all departments in the organization, meets this collaborative work requirement. This is done with the help of Product Lifecycle Management (PLM) tools, the digital environment that keeps all design data and provides collaborative work between all related departments. All related departments access the design data via the PLM tools at any time in the product lifecycle, comment on the design, or carry out their own work, such as fatigue analysis of the design.
For a complex product (e.g. an aircraft), the design group has plenty of outputs, mainly 3D models and other metadata such as part numbers, material information, versions, etc. All this information should be kept in the PLM tools in an organized way. This is done with the help of the product structure. Moreover, the 3D models are brought together via the product structure, and the Digital Mock-Up (DMU), the digital representation of the product, is created.
In the following sections of this paper, since the DMU evolves during aircraft development, the design evolution methods, namely the bottom-up, top-down and hybrid methods, and their effects on the product lifecycle are explained. Additionally, the product structure and the master geometry are defined to explain their correlation with the DMU.
1. DMU Management
As mentioned above, the DMU is the virtual representation of the product. Figure 1 shows an example of a DMU.

Figure 1. DMU Example

The DMU provides the main information that departments other than design need for their analyses, such as maintainability and producibility analyses. Song states that digital mock-up (DMU) is used widely to prevent interferences and mismatches during precision design and assembly processes without physical mock-ups [2].
In order to create the DMU, all design should be done in a 3D design environment with CAD tools, and the design should be managed with the product structure, which is the basis of the DMU. Moreover, a master geometry should be used and the design should be related to the master geometry.
1.1. Design Evolution Methods
Choosing an appropriate design method is an important decision, since the design method affects time, cost and quality. In this chapter, three different design methods and their effects on the product lifecycle are explained.
Method 1-Bottom-Up Method.
As seen in Figure 2, the evolution of the design starts from detail parts and proceeds towards assemblies. The individual detail parts of the assembly are first designed in great detail. These elements are then linked together to form larger sub-assemblies. This is done, in some cases, at many levels, until a complete top assembly is formed.


Fig. 2. Bottom-Up Design Method
This method makes the manufacturing of the product easy and meets the physical requirements properly. However, the functional requirements have to be transferred from the assembly to the detail parts, and the bottom-up method cannot guarantee that the functional requirements are met by the top assembly.
According to Terpenny, the bottom-up design method is characteristic of traditional engineering: designs are built from known components in anticipation of satisfying functional requirements. In this scenario, physical realizability is guaranteed, but there is no immediate assurance that functional requirements are met [3].
In this method, since satisfying the functional requirements is not guaranteed, there will be a large number of design changes during verification of the product. Moreover, starting the design from detail parts brings assembly design problems: there may be inconsistencies in the assembly's 3D model, and detail parts may have to be redesigned. Bottom-up design is described as a highly iterative method by Terpenny [3]. As a result, bottom-up design is a time-consuming process and it leads to a long product lifecycle.
Method 2-Top-Down Method.
In the top-down method, the evolution of the design starts from assemblies and proceeds towards detail parts. First the top assembly is designed and its components are decided. Then the components, and after them the sub-components down to the detail parts, are designed. The systems engineering approach is adopted in this method. The design evolution of the top-down method is represented in Figure 3.
The top-down method is explained by Chen as an assembly design process that is refined from the traditional product design process to better exhibit the recursive-execution and structure-evolvement characteristics of product design [4].


Fig. 3. Top-Down Design
Functional requirements are transferred from the top assembly down to the detail parts during the design process. However, the manufacturing concept is not considered during this process.
According to Terpenny, in the top-down method, design is driven from functional requirements toward solution alternatives. While design solutions using this approach are likely to meet functional requirements, there is no guarantee that solutions are realizable in terms of physical manifestations [3].
Starting from the top assembly brings full consistency of the interfaces between detail parts and/or sub-assemblies. Hence, no extra time is spent assembling the detail parts to obtain the top assembly. This results in a shorter design time compared to the bottom-up method. On the other hand, the top-down method does not guarantee that physical requirements are met; thus the manufacturing process could be long and complex. Moreover, there could be design changes because of physical constraints, and a long production time means a long product lifecycle.

Fig. 4. Example of Top-Down Method [4]
Figure 4 represents the design of a cylindrical piston system using the top-down method. As seen in the figure, the design starts from the assembly and proceeds through the detail parts. After all detail parts are designed, the assembly's 3D model is completed.
Method 3-Hybrid Method.
The hybrid method is the combination of the bottom-up and top-down methods. It brings together the advantages of both.
Physical design starts from the detail parts and proceeds towards the top assembly, and during physical design, functional requirements are transmitted from the top assembly down to the detail parts. This makes the product lifecycle shorter compared to the other two methods.
In this method, both physical and functional requirements are considered during the design phase, which means that meeting all requirements is guaranteed. Moreover, this results in fewer changes in the design and manufacturing process.
During aircraft design, top-level analyses are done and, at the same time, detail parts are designed. For example, for aerodynamic analysis, the frame locations and landing gear system locations are defined. Concurrently, the landing gear and frames are designed in detail. This detail design gives input to the aerodynamic analysis. This means the design is done both top-down and bottom-up, i.e. hybrid.
1.2. Product Structure Correlation
The product structure, as defined in EIA-649-B, represents the composition, relationships and quantities of a product and its components, and it is determined from product configuration information [5]. According to Military Handbook (MIL-HDBK) 61B, the product structure is derived from the functional analysis and allocation process of systems engineering, and may be depicted graphically as a tree structure or as an indentured listing [6].
Dolezal states that the product structure is composed of main and sub-components in a hierarchical way. It refers to the system architecture, internal structure, relationships of system components and associated configuration documentation [7].
According to all these references, it can easily be said that the product structure is the basis for the management of all design information, including 3D models, material information, change information, etc. The product structure is also the basis for DMU creation and management. 3D models are kept up to date and in an organized way with the help of the product structure. The DMU is created and managed with design information through the product structure.
Moreover, there are different types of parts in the aerospace industry, such as structural parts, systems, equipment, harnesses, software, etc. The product structure brings together all these different kinds of parts, and this is important information for the DMU. Analyses done with the DMU differ by part type, and this information is kept on the DMU via the product structure.
The simplest product structure of an aircraft is seen in Figure 5.
The product structure construction process is also important for DMU management. All related departments should be involved in this process, and the product structure should be constructed according to their needs. For example, for some analyses the fuselage and wing models should be accessible separately via the PLM tools. In this case, the product structure should be constructed according to this requirement.
Additionally, the DMU and the analyses done by using the DMU should refer to one product. This is controlled by the effectivity information, which is the manufacturing serial number of the product. Parts can be filtered according to the effectivity information via the product structure (see the sketch below), and this is a significant feature of the product structure in DMU management.
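As a loose illustration (not any particular PLM tool's API; the part list and field names are hypothetical), effectivity-based filtering can be sketched as follows in Python:

    # Keep only the parts whose effectivity range covers a given serial number.
    parts = [
        {"number": "FRAME-01",   "effectivity": (1, 9999)},
        {"number": "RIB-12-OLD", "effectivity": (1, 25)},
        {"number": "RIB-12-NEW", "effectivity": (26, 9999)},
    ]

    def filter_by_effectivity(parts, serial):
        return [p for p in parts
                if p["effectivity"][0] <= serial <= p["effectivity"][1]]

    # For manufacturing serial number 30, the old rib is filtered out:
    print([p["number"] for p in filter_by_effectivity(parts, 30)])
    # -> ['FRAME-01', 'RIB-12-NEW']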
1.3. Master Geometry Correlation
To manage the DMU, all parts should be located at their exact locations on the product in the top assembly model. This can be done in two ways:
Design all detail parts at their exact locations, using only the product axis system. This is suitable for the bottom-up method, since detail parts are designed first.
Keep the location information in the assembly models. This is suitable for the top-down method, since assemblies are designed first.
In the hybrid method, both ways are used for creating the DMU. For complex products, mostly, the detail parts are designed at their exact locations and the parts used more than once are located in the assembly model.
In both ways, to locate parts in the 3D model, auxiliary products named master geometry should be used.
The Master Geometry can be defined as one of the auxiliary products of the DMU. The Master Geometry is a 3D CAD model which presents the outer geometry and the elements forming the structure of the aircraft all together. This geometrical composition consists of surfaces, planes, lines and points which include all the geometrical dependencies and constraints required for the design of the aircraft. Dolezal describes the master geometry content as indications, e.g. planes and coordinate systems, of main structural components, e.g. frames, ribs, spars, cut-outs like doors or windows, and the positions of main sections [7].
During the design process, part locations should be linked to auxiliary geometries in the master geometry, such as frame positions. This enables easy control of part locations and of the location changes of critical components. Hence, if one frame location changes, the locations of the parts linked to that frame change automatically, as the sketch below illustrates. This is the main advantage of using master geometry.
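A minimal sketch of this associativity, with hypothetical names and a simplified one-dimensional station axis (real master geometry links full 3D constraints in CAD):

    # Part positions derived from master-geometry frame planes: changing a
    # frame station automatically relocates every part linked to it.
    master_geometry = {"FRAME_10": 5200.0, "FRAME_11": 5850.0}  # stations in mm

    class LinkedPart:
        def __init__(self, name, ref_plane, offset):
            self.name, self.ref_plane, self.offset = name, ref_plane, offset
        def station(self):
            # The position is always computed from the reference, never stored.
            return master_geometry[self.ref_plane] + self.offset

    bracket = LinkedPart("BRACKET-A", ref_plane="FRAME_10", offset=120.0)
    print(bracket.station())              # 5320.0
    master_geometry["FRAME_10"] = 5250.0  # the frame location changes...
    print(bracket.station())              # 5370.0 -- the linked part follows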
Fig. 5. Product Structure

1.4. Conclusion
In this paper, the 3D design evolution methods for DMU creation, named bottom-up, top-down and hybrid, are explained. Details of the three design methods are given and their effects on the product lifecycle are studied. If physical constraints are more important than functional constraints, the bottom-up method is suitable for the design. However, the top-down method should be used if functional requirements are more important. Both methods negatively affect the product lifecycle time. Because of this, using these methods for complex products like aircraft is not suitable. Moreover, for an aircraft, all physical and functional requirements are important. In that case, the hybrid method is the most suitable one.
In addition, DMU management and its relation to the product structure and master geometry have been studied. The product structure keeps all design data up to date and in an organized way. Furthermore, the design data can be filtered according to the effectivity information. Both of these reasons demonstrate the importance of using the product structure in DMU management. The master geometry is as important as the product structure. Parts are located in the 3D environment on the product by using the master geometry. This simplifies the design process, and location changes can be controlled easily.
References
[1] Biren Prasad, Concurrent Engineering Fundamentals, 1996
[2] In-Ho Song, Sung-Chong Chung, Synthesis of the digital mock-up system for heterogeneous CAD
assembly
[3] Terpenny, Janis P., Nnaji, Bartholomew O., Bøhn, Jan Helge, Blending Top-Down and Bottom-Up
Approaches in Conceptual Design, May 1998
[4] X. Chen, S. Gao, Y. Yang, S. Zhang, Multi-level assembly model for top-down design of mechanical
products, Computer Aided Design, In Press, Available online 24 December 2010.
[5] EIA-649-B, National Consensus Standard for Configuration Management Department of Defence
[6] Military handbook 61B, Configuration Management Guidance, 2002, page 5-5
[7] Dolezal, W., Success Factors for DMU in Complex Aerospace Product Development, Technische
Universität München, November 2007

Implementation of an Artificial Neuro-Electronic System for Moisture Content Determination of Subbase Soil

N. S. SHETU a,1 and M. A. MASUM b
a Associate Member, IEB, Bangladesh
b Graduate Student Member, IEEE


Abstract. In this paper, a new approach is proposed to determine the moisture content of subbase soil, with a view to overcoming the limitations of existing methods while maintaining better accuracy. This innovation embeds an automatic electronic control as well as an artificial neural network (ANN) in the framework for time optimization. The artificial neural network and the automatic electronic control together can be termed artificial neuro-electronic control. The artificial neural network has been trained by mapping the weights of soil samples at specific time steps to the respective final moisture contents. As a result, the system is able to predict the final moisture content by analysing a few data samples at the very beginning of the moisture content determination test. Validation of the predictive results has also been conducted in real time for soil samples suitable for the subbase layer of a pavement, to ensure the system's feasibility for laboratory and field use. Experiments show that this fully automatic system can exhibit significant accuracy and precision in the evaluation of moisture content in about 50% less time compared to the standard microwave based method.
Keywords. artificial neural network, automatic electronic control, subbase,
moisture content, microwave heating
Introduction
Moisture content is one of the most influential factors in evaluating the strength of soil. The planning of any structure and its foundation demands consideration of the strength of the underlying soil, which comes from test results and highly depends on the moisture content [1]. However, without knowing the moisture content, the optimal strength of the soil cannot be ensured reliably [2] on the construction site by merely applying the so-called soil compaction method [3] or soil stabilization technique [4].
Moisture content can be determined in accordance with any of the standards AASHTO T-265, ASTM D2216 or ASTM D4643 [5]. ASTM D2216-10 and AASHTO T-265 outline the technical know-how for the determination of the moisture content of soil and are the most widely exercised procedures. In these standards, the weight of a


suitable aluminium can with some soil sample is taken, which is then heated in a conventional oven at 110 °C for 24 hours. After 24 hours of drying, the can with the dry sample is weighed again, and from the difference between this measurement and the previous one the moisture content is estimated [6]. In this procedure, only a small amount of the energy is actually utilized in the drying process and the remaining, significant portion is lost. Therefore, this is not an energy-efficient and environment-friendly method.

1 Corresponding Author: N. S. Shetu, Present Address: Unit-11, 46 Trinculo Place, Queanbeyan, NSW 2620, Australia; E-mail: mashetu@gmail.com

Figure 1. Illustration of the overall experimental setup
Moreover, spending 24 hours may have detrimental effects on the construction process if the evaluation is necessary during this time or in subsequent phases. Another standard procedure is based on ASTM D4643-08, where the drying process of the soil sample involves microwave energy absorption. In this method, the first drying cycle of the soil sample continues for 3 minutes under a constant microwave power of 700 W, after which both weighing and mixing of the sample are conducted. For subsequent cycles, the drying time is reduced to 1 minute, and both weighing and mixing are repeated until two consecutive weights differ by 0.1% or less of the initial weight of the soil taken [7]. This microwave based moisture content determination is considerably faster than the conventional oven based method, but it requires frequent manual intervention and there is no automatic control for data acquisition and processing. As a consequence, inaccuracies may arise from human factors. Some variations of this test procedure can be found in the literature [8,9], but all of these methods suffer from the same drawbacks mentioned earlier. Another approach for the determination of the moisture content of soil, mainly used for irrigation purposes, is the neutron scattering method [10]. In this method, radiation is the key to measuring the volumetric moisture content, and hence the bulk specific gravity must be considered to determine it. Moreover, this approach suffers from high instrument cost, radiation hazard and an error of up to 15 percent [11], which make this method inappropriate for this particular context.
In the last few years, the artificial neural network has become a powerful tool in almost every field of the modern world, such as aerospace, automotive, telecommunications and transportation systems [12,13,14]. In geotechnical engineering, ANNs have been applied successfully for the prediction of the lateral load capacity of piles [15] and for the modelling of the maximum dry density and optimum moisture content of soil [16].
In this paper we propose a novel approach for determining the moisture content of soil using a custom-built system outlined in Figure 1. The main system components are a laptop computer, a microwave oven and electronic interfacing circuitry. In this assembly, the computer runs our software, which incorporates a neural network and a convenient graphical user interface, and the microwave oven acts as a drying machine. All the communications between the PC and the test equipment are handled by the interfacing data acquisition module (DAQ module). All these components together make the system fully automatic, thus avoiding manual interactions and the possible inaccuracies arising from them.
Figure 2. Neural network architecture (style inspired from MATLAB Neural Network documentation)

The fully trained embedded neural network offers an additional predictive capability which can provide the result in advance, without going through the entire length of the test period; thus the moisture content determination procedure is facilitated both in terms of reduced test time and power consumption. Experiments show that time and power consumption are lowered to about half of those of the existing microwave oven based methods.
The remainder of this paper is organized as follows. Section 1 presents some background theory; detailed explanations of the system setup and the procedures are outlined in Section 2. The test results are presented and discussed in Section 3, and finally a conclusion with possible future directions is drawn in Section 4.


1. Background Theory

1.1. Subbase Soil

The subbase soil is located immediately above the subgrade soil and consists of material of superior quality to the soil generally used for the subgrade (the bottom-most layer). To maintain the requisite properties (e.g. strength) of subbase soil, specific treatments such as stabilization and artificial compaction may be viable means, and this choice (e.g. chemical or mechanical stabilization, compaction effort) depends comprehensively on the existing and optimum moisture contents [17]. Soil classified as A-1-a, A-1-b, A-2-4, A-2-5 or A-3 by the American Association of State Highway and Transportation Officials (AASHTO) can satisfactorily be used as subbase [17].

1.2. Moisture Content of Soil

Soil is composed of solid particles, water and air. The ratio of the weight of water present in soil to the weight of solids is called the moisture content [3] and is formulated as follows:

w = (W_wc - W_dc) / (W_dc - W_c) × 100%    (1)

where W_wc, W_dc and W_c are the weights of the wet and dry soil samples with the container, and the weight of the container, respectively.
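A one-line implementation and a worked example of equation (1), with made-up example weights, may make the bookkeeping concrete:

    # Equation (1): moisture content from the three weighings.
    def moisture_content(w_wet_can, w_dry_can, w_can):
        """Moisture content in percent of dry soil weight."""
        return (w_wet_can - w_dry_can) / (w_dry_can - w_can) * 100.0

    # e.g. container 120.0 g, wet soil + container 220.0 g, dried 204.0 g:
    print(round(moisture_content(220.0, 204.0, 120.0), 2))  # 19.05 (%)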




Figure 3. Experimental setup: developed GUI on the computer, microwave oven, soil sample on the electronic balance, and data acquisition module.

1.3. Neural Network

A neural network can adapt to any nonlinear system to extract underlying patterns and detect trends. This is achieved through a heuristic learning process that continues until the network reaches a steady state (MATLAB Neural Network Toolbox 1984-2011).
The generalized neural network architecture is depicted in Figure 2. In this study, a feedforward backpropagation neural network has been incorporated because of the supervised learning nature of the input datasets, which propagate only in the forward direction to yield the moisture content as output. The overall output of this trained ANN is the moisture content w, and it can be expressed in terms of the input P (weights of soil mass), the transfer functions (f_2 and f_1), the weight matrices (LW and IW) and the biases (LB and IB) as follows:


w = f_2( Σ_{i=1..N} LW_i · f_1( Σ_{j=1..M} IW_ij · P_j + IB_i ) + LB )    (2)




Here, f_1 is a log-sigmoid transfer function that maps the inputs (soil weights) to the normalised output of the hidden layer. Finally, the linear transfer function f_2 outputs the predicted moisture content (w). The weights and biases stabilize the ANN to enhance its predictability by reducing the mean squared error.
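As an illustration of equation (2), a minimal forward pass can be written directly; the weights below are random placeholders (in the actual system they come from training), and the 12-input/16-neuron shape follows the dimensions reported later in this paper:

    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 12, 16                    # 12 weight records in, 16 hidden neurons
    IW = rng.normal(size=(N, M))     # input weight matrix
    IB = rng.normal(size=N)          # hidden-layer biases
    LW = rng.normal(size=N)          # output-layer weights
    LB = 0.0                         # output bias

    def logsig(x):                   # f1: log-sigmoid transfer function
        return 1.0 / (1.0 + np.exp(-x))

    def predict_moisture(P):
        # w = f2( sum_i LW_i * f1( sum_j IW_ij * P_j + IB_i ) + LB ), f2 linear
        return float(LW @ logsig(IW @ P + IB) + LB)

    P = rng.uniform(180.0, 220.0, size=M)  # one sample's 12 weight readings (g)
    print(predict_moisture(P))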





Figure 4. Developed GUI with a plotted sample dataset

1.4. Microwave Heating Fundamentals

Microwaves occupy the frequency range of approximately 0.3 to 300 GHz in the electromagnetic spectrum. The most common commercial microwave generator, the magnetron found in microwave ovens, has a rated frequency of 2.45 GHz. This frequency is very close to the natural frequency of water molecules. Due to resonance at this frequency, vigorous vibration of the water molecules present in a soil sample causes intermolecular friction, which in turn generates heat [18]. As a result, the temperature of the soil mass increases gradually towards the boiling point of water and eventually evaporation occurs, allowing the sample to dry in a comparatively short time [19]. According to ASTM D4643-08, microwave oven drying gives rapid results in the determination of soil moisture content. As the microwaves have to penetrate into the soil, attenuation of the microwave power occurs in every consecutive layer [20]. Because of this requirement of microwave penetration into the soil sample, the sample particles should not be too coarse.


2. Experimental Setup and Procedure

2.1. Setup

The experimental setup, consisting of a microwave oven, an electronic balance, a bi-directional data communication module and a laptop computer (2.66 GHz Intel i5 processor), is shown in Figure 3. A Panasonic microwave oven NNSD691S with dimensions of 525 mm x 401 mm x 310 mm has been used. It has a rated input power of 700 W, which meets the power requirement of ASTM D4643-08. A CB-V electronic balance with a precision of 0.01 g and a capacity of 2000 g has been employed for measuring the weights of the samples. A USB-1602HS data acquisition (DAQ) module from Measurement Computing has been used as an interfacing board for bi-directional communication between the PC, microwave oven, balance and temperature sensor. This module can sample voltage and temperature signals at up to 1 mega-sample per second (Ms/s), and can also interpret the commands issued by the developed GUI to the microwave oven using an opto-coupler (MOC5007). The opto-coupler isolates the high-voltage (230 V) side of the oven from the low-voltage (5 V) side of the computer and DAQ module. The GUI has been designed using MATLAB to act as an interface for the human operator to run the experiment successfully. A screenshot of the GUI is shown in Figure 4. The bottom of the microwave oven has been cut off and the oven attached to a four-leg stand, so that the balance can be placed between the stand and the oven. All this engineering work has been done to facilitate the continuous recording of the weight signal from the electronic balance as a voltage, using the DAQ module, and its passing to the computer for analysis. A temperature sensor has been placed in the microwave oven to protect the oven from excess heat in case the sample is too dry or contains some mineral which may react strongly with microwaves.
If the temperature rises above 110 °C (which is the maximum acceptable temperature stated in ASTM D2216), then the absorbed water is driven off from the soil mass, which is not accounted for in moisture content determination. In this case the computer automatically stops the oven by analysing the data from the temperature sensor, to avoid any accidental damage.

2.2. Procedure

2.2.1. Sample Preparation

190 subbase soil samples, each weighing approximately 100 g, have been collected in a 120 ml Pyrex container, successively following the criterion stated in ASTM D4643-00: the sample weight should be 100-200 g with particle size < 2 mm. Another 10 sample pairs (each sample weighing approximately 100 g) have also been taken, for microwave oven and conventional oven experiments separately. A Pyrex container has been chosen because of its microwave and thermal resistance, and the shape of the container favours the effective vaporization of water. These soil samples have been collected from a road construction site near Rajshahi, Bangladesh, and were classified as A-2 by the AASHTO Soil Classification System.

Figure 5. The algorithmic workflow of ANN development and error calculation

2.2.2. Data Acquisition

The proposed system has been operated algorithmically by the software incorporating the GUI. Data acquisition and storage in computer memory have been done for each sample as a dataset. At the beginning of an experiment, the oven and the clock are turned on and the computer starts storing the weight of the sample every 30 seconds as it progressively loses moisture. When two successive weights differ by 0.1% or less, the process is automatically terminated and the moisture content is calculated using equation (1) in software; a minimal sketch of this loop is given below. The same experiment has been repeated for all of the 190 soil samples. For the last 10 samples, datasets (each containing the initial 12 records, without performing the full test) have been acquired separately for both the microwave oven test and the conventional oven test, following the outlines of ASTM D2216-10. Each of these datasets contains not only the microwave test records but also the final moisture content reading from the conventional oven test. The entire test time in the microwave oven was about 11 to 15 minutes for each sample, depending on its moisture content.
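In the sketch below, read_balance simulates the DAQ-module weight readings with an artificial drying curve, so the hardware calls are stand-ins; only the stopping rule and the final computation follow the procedure described above:

    import random

    def read_balance(step):
        """Simulated 30-second weight readings (g) of a drying sample."""
        return 204.0 + 16.0 * (0.7 ** step) + random.uniform(-0.005, 0.005)

    weights, step = [read_balance(0)], 0
    while True:
        step += 1                               # one 30-second interval
        weights.append(read_balance(step))
        if abs(weights[-1] - weights[-2]) <= 0.001 * weights[0]:
            break                               # 0.1% stopping criterion
    w_can = 120.0                               # container weight (g)
    w = (weights[0] - weights[-1]) / (weights[-1] - w_can) * 100.0
    print(f"{len(weights)} records, moisture content {w:.2f}%")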

2.2.3. Training, validation and test of neural network

For this work, a two-layer (i.e. a sigmoid hidden layer and a linear output layer) feedforward backpropagation network has been developed. This network architecture has been chosen because one hidden layer is sufficiently capable of approximating a function representing a continuous relationship between input and output (Heaton 2008). Among the many available optimization algorithms, the Levenberg-Marquardt backpropagation algorithm has been selected to optimize the errors because of its robustness, efficiency and capability of handling nonlinear problems [21].
The initial 12 records (each dataset contains 22 to 30 records) from the first 194 datasets of soil samples have been considered for training the network by adjusting its weights, aiming at the lowest mean squared error (MSE) and the highest regression value (R2). The highest regression value, R2 = 1, indicates an exact relationship between outputs and targets, while the lowest, R2 = 0, means no relationship exists at all. The datasets have been passed through the MATLAB neural network fitting tool (MATLAB Neural Network Toolbox 1984-2011), which performed the training, validation and preliminary testing of the ANN and produced a trained ANN. As a rule of thumb, the optimum number of neurons in the hidden layer should be between 2/3 of and twice the number of input layer elements [22]. For this experiment, the optimum number of neurons in the hidden layer has been found to be 16.
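A rough open-source analogue of this training step is sketched below with scikit-learn. Note the hedges: scikit-learn provides no Levenberg-Marquardt solver, so L-BFGS is used as a stand-in, and the data here are synthetic placeholders rather than the measured datasets:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.uniform(180.0, 220.0, size=(194, 12))          # 12 initial records
    y = (X[:, 0] - X[:, -1]) / (X[:, -1] - 120.0) * 100.0  # pseudo targets

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15,
                                              random_state=1)
    net = MLPRegressor(hidden_layer_sizes=(16,),  # 16 hidden neurons, as above
                       activation="logistic",     # log-sigmoid hidden layer
                       solver="lbfgs",            # stand-in for Levenberg-Marquardt
                       max_iter=5000, random_state=1)
    net.fit(X_tr, y_tr)
    print("held-out R^2:", net.score(X_te, y_te))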

2.2.4. Final test using microwave and conventional oven data

The ultimate test has been carried out using the last 10 datasets obtained from the microwave oven, each of which also has a counterpart obtained from conventional oven drying. The datasets acquired from the microwave oven testing have been passed through the developed neural network. The outputs of the network are the predicted values of the moisture contents, which have been compared with the moisture contents of the respective soil samples obtained from the conventional oven drying process. The whole algorithmic workflow, as shown in Figure 5, has been encapsulated inside the developed software. The discrepancy between the actual moisture contents from the conventional oven drying process and the predicted moisture contents is shown in Figure 6(b).

Figure 6. (a) Target vs. NN Output; (b) Discrepancy between actual and predicted moisture content.


3. Result

The performance of the developed network for all the datasets (training, validation and test) is shown in Figure 6(a). The data points plot the neural network outputs against the targets; the solid line indicates where the network outputs are equal to the target moisture contents. It is obvious from the relationship R = 1 that there exists an exact linear relationship between the ANN outputs and the targets, which is a merit of the proposed system. From this graph, it is evident that the trained network worked precisely for soil samples having moisture contents up to 95%. Figure 6(b) reveals the discrepancy between the actual moisture contents obtained from the conventional oven process and the moisture contents predicted by the ANN. It should be noted that the maximum difference is only 0.008%, which can be overlooked considering the large time saving (almost 50% less than the existing microwave oven based method) achieved during the test procedure using the proposed system.


4. Conclusion

The results of this work indicate that the soil drying process for moisture content determination can be realized efficiently using a group of algorithmic actions. The implemented system not only produces precise results in the shortest possible time but also automates the whole test procedure. Although the microwave drying equipment and the algorithm developed in this study have been operated safely under normal conditions, careful monitoring is required if the soil sample contains significant minerals, because of their possible interactions with microwaves. In this study, soil samples with no unusual properties have been tested, but real scenarios may be different. Therefore, generalization of the proposed system, considering all types of soil samples with various minerals, should be the ultimate goal of future work.


References

[1] M.V. Carlos, H.B. Luis and W.H. Jan. Contribution of water content and bulk density to field soil
penetration resistance as measured by a combined cone penetrometer-TDR probe. Soil and Tillage
Research 60 (2001), 35-42.
[2] S. Vanapalli, D. Fredlund and D. Pufahl. The relationship between the soil-water characteristic curve and
the unsaturated shear strength of a compacted glacial till. Geotechnical Testing Journal (GTJ),
19 (1996).
[3] M. Budhu. Soil Mechanics and Foundations. John Wiley & Sons, 2011.
[4] N. Garber and L. Hoel. Traffic and highway engineering. Cengage Learning, 4th edition, 2008.
[5] Nebraska Department of Roads (NDOR). Geotechnical Policies and Procedures Manual, 2012.
[6] ASTM D2216-10. Standard test methods for laboratory determination of water (moisture) content of
soil and rock by mass. ASTM D2216 - 10, ASTM, West Conshohocken, USA, 2010.
[7] ASTM D2487. Standard practice for classification of soils for engineering purposes (unified soil
classification system). ASTMD2487 - 11, ASTM, West Conshohocken, USA, 2011.
[8] D. Hagerty, C. Ullrich and M. Denton. Microwave drying of soils. Geotechnical Testing Journal
(GTJ), 13, 1990.
[9] W. Philip, Chung, and T.Y. Ho. Study on the determination of moisture content of soils by microwave
oven method. Geo report No. 221, Civil Engineering and Development Department, The
Government of The Hong Kong, 2008.
[10] B.C. Van, D. Nielsen and J. Davidson. Calibration and characteristics of two neutron moisture probes.
Soil Science Society of America Proceedings, 25 (1961).
[11] F.S. Zazueta and J. Xin. Soil moisture sensors. Bulletin 292, 1994.
[12] B. Smith and M. Demetsky. Short-term traffic flow prediction models-a comparison of neural network
and nonparametric regression approaches. Systems, Man, and Cybernetics, 1994. Humans, Information
and Technology., 1994 IEEE International Conference on, Vol. 2(1994), 1706 1709.
[13] H. Demuth and M. Beale. Neural Network Toolbox for Use with MATLAB. The MathWorks, Inc,
2002.
[14] D. Chaturvedi, R. Chauhan and P. Kalra. Applications of generalised neural network for aircraft
landing control system. International Journal on Soft Computing, A Fusion of Foundations,
Methodologies and
Applications 6.6, 2004.
[15] S.K. Das and P.K. Basudhar. Undrained lateral load capacity of piles in clay using artificial neural
network. Computers and Geotechnics, 33(8), 454 459, 2006.
[16] A. Alavi, A.H. Gandomi, A. Mollahassani, A.A. Heshmati and A. Rashed. Modelling of
maximum dry density and optimum moisture content of stabilized soil using artificial neural
networks. Journal of Plant Nutrition and Soil Science, 173 (2010).
[17] N. Garber and L. Hoel. Traffic and highway engineering. Cengage Learning, 4th edition, 2008.
[18] M. Cann. Microwave Heating as a Tool for Sustainable Chemistry. CRC Press, Taylor & Francis Group,
2011.
[19] ASTM D4643-08. Standard test method for determination of water (moisture) content of soil by mi-
crowave oven heating. ASTM D4643 - 08, ASTM, West Conshohocken, USA, 2008.
[20] A.E. Lord, R. Korner, and J. Reif. Determination of attenuation and penetration depth of microwaves
in soil. Geotechnical Testing Journal, 2(1979).
[21] MathWorks Inc. Neural network toolbox. The Matlab Online Documentation,1984-2012
http://www.mathworks.com.au/help/nnet/index.html.
[22] J. Heaton. Introduction to Neural Networks with C#. Heaton Research, Inc., 2nd edition, 2008.
An Electricity Market Trade System for Next Generation Power Grid

Kyohei Shibano a,b, Kenji Tanaka a,c and Rikiya Abe a,b
a Presidential Endowed Chair for Electric Power Network Innovation by Digital Grid, The University of Tokyo
b Department of Technology Management for Innovation, Graduate School of Engineering, The University of Tokyo
c Department of Systems Innovation, Graduate School of Engineering, The University of Tokyo
Abstract. In recent years, the power sources in the electric power system have been diversified for various reasons, such as electricity liberalization, greenhouse gas reduction, and so on. To adapt to these various energy sources, the electric power system needs a more flexible electricity trading system between those resources. Hence there is a new trend for every power consumer to identify and select its power sources. This report proposes an electricity trading system based on the Digital Grid, in which individuals can trade identified electricity, time by time and source by source. Individual participants can trade electricity on a computer-based market. The executed transaction of electricity is delivered physically and automatically via Digital Grid Routers through the existing transmission and distribution lines. In the proposed market, participants can trade smaller amounts of electricity, which were not suitable for conventional wholesale electricity markets. Additionally, it is considered suitable for them to trade electricity via battery storage, especially for renewable energy, because its future power output is difficult to predict. Furthermore, buyers can buy specific electricity satisfying their needs. This leads to the activation of renewable energy and of smaller-size distributed energy participating in the electricity market. As a result, a new type of Electricity Market Trade System is proposed in this study, with simulation results showing that identified electricity is traded accurately.
Keywords. smart grid, storage battery, renewable energy, distributed generation,
electricity market, electricity coloring, electricity service, Digital Grid
Introduction
Recently, renewable energy sources as distributed generation have been introduced all over the world. To utilize large proportions of renewable energy, there is a strong need to develop a new electric power grid, and various approaches are being considered [1][2]. The smart grid is one such new electric power grid that uses information and communications technology to manage distributed generation, battery storage and other power devices, and much research has been done on it in recent years [3][4][5].
Much research has also been devoted to electricity markets [6][7][12][13]. In the existing wholesale electricity market, where electricity for a future time is traded, it is difficult to trade renewable energy. In the Danish electricity market, if an imbalance occurs and smaller renewable and distributed generation unit owners are not able to
fulfill their obligations, they will be penalized [8]. In the UK, smaller renewable distributed generation cannot participate in electricity markets, as it is unable to fulfill the requirements of the markets [9].
There is research on the effects of introducing renewable and distributed generation into the existing wholesale power market. The effects of large-scale development of microgeneration in the Netherlands were evaluated with a qualitative scenario analysis [8]. Perspectives and approaches aiming to help stand-alone small-size and clustered renewable and distributed generation units to participate in the UK electricity market, drawing on relevant experience from Denmark, were discussed [9]. The impacts of the introduction of wind power on negative prices and clearing prices in the wholesale electricity market in the United States were discussed [9].
In this study, we propose a new electricity trading system over the Digital Grid [10], which is a new type of electricity grid. The Digital Grid can accept high penetrations of renewable power. Power sources in the electric power system have been diversified for various reasons, such as electricity liberalization, greenhouse gas reduction, and so on. Accordingly, power consumers come to identify their power sources. The number of people who have specific electric power needs, such as those who want to live only on renewable energy and those who want to live nuclear-free, has been increasing. The proposed electricity trading system makes it possible for individual participants to trade electricity generated by any power source, such as home solar systems, through Energy Service Providers (ESPs). We introduce the Digital Grid electricity market system and verify that identified electricity can be traded in several computer simulation case studies.
1. Electricity Identification with Digital Grid
In the Digital Grid, the existing large synchronous grids are to be divided into smaller segments, named Cells, which are connected asynchronously. Each cell has an autonomous supply/demand matching mechanism through suitable energy storage.
Cells are connected by devices called Digital Grid Routers (DGRs). We can control the amount of electricity interchanged among cells by operating multiple DGRs at one time. It is possible to interchange electricity among cells for various purposes, such as balancing supply and demand, controlling the remaining battery capacity of each cell, electricity trading and so on.
Each cell has three functions: supply, demand and storage. Cells are connected by DGRs, and control of arbitrary amounts of bidirectional electricity interchange can be conducted by operating the DGRs. It is possible to transfer electricity among indirectly connected Cells through other cells, as in a bucket brigade. Namely, even among indirectly connected Cells, we can transfer and trade arbitrary amounts of electricity.
The size of a cell can be as large as a state, as medium as a city or as small as a house. It is possible to interchange electricity among Cells of different sizes which are connected by DGRs.
The Digital Grid aims for a user-participating type of system management. In a conventional grid system, large-scale power stations such as thermal power plants or nuclear plants generate electricity in accordance with consumption. In the Digital Grid system, on the other hand, individual users participate in system management using solar home systems, home battery storage and so on.
The Digital Grid system can be applied to the existing grid step by step. It can be applied to the existing grids of developed countries, and to those of developing countries undergoing electrification.
1.1. Functions of the Digital Grid
The representative functions of the Digital Grid are as follows.
1.1.1. Identification of Electricity
All energy flows through the DGRs are monitored and recorded. Information about the transaction time, place, source, destination and other metadata is also recorded. In this way, the Digital Grid can identify electricity.
1.1.2. Electricity Trading
As above, the Digital Grid can identify electricity, and the identified electricity is traded on the Digital Grid electricity market. Trading and execution are realized by using a web system based on information and communication technology (ICT).
1.1.3. Delivery of Electricity
Electric power is delivered based on the trading result. We can deliver the traded volume of electricity from seller to buyer by operating several DGRs at one time. Minimum-cost searching of transmission routes in the transmission network is discussed in [14].
1.2. Players in the Digital Grid
The players in the Digital Grid are divided into the three types described below: the Digital Grid Power Exchange, the Energy Service Providers and the Users. Each role is described as follows.
Figure 1 Players structure
1.2.1. The Digital Grid Power Exchange, DGPX
The Digital Grid Power Exchange (DGPX) is the electricity trading exchange and is assumed, for now, to be the only such institution. Electricity trading is carried out here. Based on an execution rule, which is introduced in this report, the matching of a seller and a buyer and the execution price are decided.
1.2.2. Energy Service Providers, ESP
Energy Service Providers (ESPs) provide electricity services over the Digital Grid for users. The electricity services include selling electricity to users, buying electricity generated by the power generation equipment which users possess, employing users' storage batteries, and so on. While the DGPX is assumed to be unique, several companies are assumed to enter as ESPs. We consider a service in which ESPs perform the operations to trade users' electricity in the DGPX.
ESPs operate the DGRs deployed for each user. In electricity trading in the DGPX, an ESP secures a power transmission route from the seller to the buyer, operates the DGRs within the route, and delivers the electricity.
1.2.3. Users
A user can receive Digital Grid electricity services by joining an ESP. Users are prosumers of electricity, that is, both producers and consumers. It is possible to utilize home storage batteries and EV batteries for system management through the ESPs. Users can trade electricity in the DGPX through the ESPs which they have joined.
2. Electricity Market System Design
The electricity market currently being examined for the Digital Grid differs from the conventional wholesale electricity markets around the world. In a conventional wholesale electricity market, suppliers purvey electricity to satisfy the demand in the synchronous grid. In the DGPX, on the other hand, consumers can procure electricity according to their tastes in power sources. For example, users who want to live using 100% solar power can realize their needs by procuring solar electricity and consuming it. Participants can trade only electricity in battery storage, in order to enable smooth trading of renewable energy, whose future power output is difficult to predict. Participants can trade their electricity freely in the DGPX. The execution mechanism is the continuous session, as adopted in the stock market.
Sellers of electricity place orders, releasing the information about the electricity they own. Buyers can place bids, consulting the board (see Figure 2), which displays the conditions buyers may specify, such as renewable energy or photovoltaics.
Electricity trading between users who belong to different ESPs is also assumed.
Figure 2 Electricity trading in continuous session
2.1. Designing of Identified Electricity Products
In designing the electricity market, it is important that the electricity products to be traded are identifiable. When electricity is interchanged through DGRs, the transmitted power is tagged with property information. Electric power is identified using this property information, and it is traded in the DGPX. We suppose here that three kinds of property information are given: power source, area and delivery time. Power source information is described by a hierarchical structure, such as renewable energy, solar power, silicon type and monocrystalline silicon type: solar power is renewable energy and wind power is also renewable energy; similarly, silicon type is solar power and compound semiconductor type is also solar power. The seller of electric power can offer a bid selling a "monocrystalline silicon type" when it is known that the solar energy system is of the monocrystalline silicon type; when this is unknown, a bid will be offered selling "photovoltaics". Conversely, a buyer can offer a bid specifying a monocrystalline silicon type or, when this is not necessary, can offer a bid for photovoltaics. When a bid is offered without specifying a power source, all the sell bids serve as candidates for execution. In general, when a bid is specified more deeply, it will be executed at a higher price.
Area information is also described by a hierarchical structure, such as a state and a city.
Delivery time specifies the time of the charge and discharge of a storage battery. A bid is offered with a delivery window between "within 30 minutes" and "within 7 days". For instance, when a bid is offered as "within 3 days", delivery of the electricity will be performed within 3 days of the execution time. ESPs deliver electricity avoiding the time zones when the transmission network is crowded, and this enables transmission costs to be reduced.
2.2. Execution Algorithm
Execution is performed by the Zaraba method, or continuous auction method. In this method, the bids are executed in accordance with price priority and time priority, as in the stock market. Moreover, since, as mentioned above, bids have depth, we introduce the principle of depth priority, which means that the deepest sell and deepest buy orders take precedence over other orders. The priority order is price, time and then depth. The Zaraba method is explained in detail in [16].
For a buy bid for "photovoltaics", the sell bids not only for "photovoltaics" but also for "monocrystalline silicon type" serve as candidates for execution. From the principle of depth priority, a buy order for "monocrystalline" shall take precedence when two buy orders of the same price, one for "monocrystalline" and one for "photovoltaics", and one sell order for "monocrystalline silicon type" are offered.
We introduce the principle of depth priority not only for the power source but also for the area. Depth priority is also introduced for the delivery time, which is likewise separated into a hierarchy.
Execution is performed, taking the above into consideration, for each combination of power source, area and delivery time; a minimal matching sketch is given below. Therefore, there exist as many boards as there are combinations of these three elements.
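The priority rule can be made concrete with a small sketch. The order representation is illustrative only: the hierarchy is encoded as a slash-separated path, depth is the number of path levels, and real boards would of course carry volumes and delivery windows as well:

    # Price, then time, then depth priority among matching buy orders.
    def matches(sell_source, buy_source):
        # A buy order matches sell orders at its own or any deeper level.
        return sell_source.startswith(buy_source)

    def pick_counterparty(sell, buy_orders):
        candidates = [b for b in buy_orders if matches(sell["source"], b["source"])]
        candidates.sort(key=lambda b: (-b["price"],               # highest price first
                                       b["time"],                 # then earliest bid
                                       -b["source"].count("/")))  # then deepest
        return candidates[0] if candidates else None

    sell = {"source": "renewable/solar/silicon/monocrystalline", "price": 40}
    buys = [
        {"source": "renewable/solar",                         "price": 42, "time": 1},
        {"source": "renewable/solar/silicon/monocrystalline", "price": 42, "time": 1},
    ]
    print(pick_counterparty(sell, buys)["source"])  # the deeper bid wins the tie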
3. Delivery of Electricity to Storage Batteries
The remaining capacity of a battery changes for many reasons, including trading. Some users may use a battery for electricity trading as well as for daily use; others may use an EV as storage for trading. In many cases delivery is not possible because the remaining capacity of the storage is insufficient for the trade. Therefore, it is necessary to specify a rule for the remaining capacity of the storage. The following two delivery methods are considered:
(1) Lock capacity of storage:
lock the executed amount of electricity in the batteries of the buyer and the seller until delivery. This spoils the users' convenience, since it freezes the executed amount of electricity in the users' storage until delivery is performed.
(2) Deliver as possible:
deliver a part of the executed amount of electricity whenever the buyer's capacity is available for acceptance and the seller's capacity is available for delivery. If an executed amount remains, delivery is performed again until it is finished. In this case, unlike (1), the users' convenience is not spoiled.
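One round of method (2) can be sketched as follows (a simplified illustration, not the authors' implementation):

def deliver_as_possible(executed_kwh, buyer_free_kwh, seller_avail_kwh):
    """Move only what both batteries currently allow; the remainder is
    delivered in later rounds, so no capacity is ever locked."""
    delivered = min(executed_kwh, buyer_free_kwh, seller_avail_kwh)
    return delivered, executed_kwh - delivered

delivered, remaining = deliver_as_possible(10.0, buyer_free_kwh=4.0, seller_avail_kwh=6.0)
print(delivered, remaining)   # 4.0 6.0 -> a later round delivers the rest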
Figure 3 Storage management for electricity trading
4. Case Study
Analysis of the Digital Grid system is conducted based on possible applications.
4.1. Power consumption ratio of users who prefer solar energy
Under the condition that the trading system has been set up with a sufficiently large number of participants, we simulate the behavior of users who prefer to cover their electricity consumption with solar energy. If the price of solar energy is too high, the users buy other renewable energy instead. There are five electricity products: thermal, nuclear, hydro, solar and wind. In this case, area and delivery time information is not used.
The trading price of each supply is calculated as the cost of power generation reported by the Japanese government panel [17] plus a 20% profit. We model electricity prices as stochastic processes based on geometric Brownian motion with mean 0 and standard deviation 0.2; the initial values are shown in Table 1. Prices change daily and the simulation is run for 365 days. Figure 4 illustrates how the prices change.
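A minimal sketch of such a price path, interpreting the stated mean and standard deviation as the drift and volatility of the daily log-return (our reading; the paper does not give the discretization):

import math, random

def gbm_path(p0, days=365, mu=0.0, sigma=0.2):
    """Daily GBM step: p_{t+1} = p_t * exp(mu - sigma^2/2 + sigma * z)."""
    path = [p0]
    for _ in range(days):
        z = random.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp(mu - 0.5 * sigma ** 2 + sigma * z))
    return path

solar_prices = gbm_path(42.0)   # initial solar price from Table 1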
Table 1. Initial Price (JPY)
Thermal: 12   Nuclear: 11   Hydro: 12   Solar: 42   Wind: 18
Families that have batteries with enough capacity buy the amount of electricity they consume once per day and charge their batteries. Based on the price of the day, a family buys solar energy if the solar price is below 44 yen; otherwise it buys the cheaper of hydro and wind energy. The daily electricity consumption of each family is generated randomly, calibrated to the average household electricity consumption measured in Kyoto in April 2012 [18]; in other words, it follows a normal distribution with a daily mean of 14.5 kWh and a standard deviation of 4.0 kWh.
After the 365-day simulation, the electricity consumption ratio of solar is 80.6% and that of hydro is 19.4%. The overall electricity consumption is 5,353 kWh and the cost of the purchased electricity is 174,150 yen. Figure 5 shows the electricity consumption ratio.
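The daily purchase rule can be sketched as follows (illustrative only; the prices shown are hypothetical):

import random

def daily_choice(prices):
    """Buy solar if it is below 44 yen; otherwise the cheaper of hydro and wind."""
    if prices["solar"] < 44.0:
        return "solar"
    return min(("hydro", "wind"), key=lambda k: prices[k])

prices = {"solar": 45.1, "hydro": 12.3, "wind": 18.9}   # one simulated day
source = daily_choice(prices)
demand = max(0.0, random.gauss(14.5, 4.0))              # daily demand in kWh
cost = demand * prices[source]                          # accumulated over 365 days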
Figure 4 Price transition
Figure 5 (left) Power consumption ratio of the user preferring solar power
Figure 6 (right) Power consumption ratio of the user with the preference that the cheaper is better
4.2. Power consumption ratio of the user with the preference that the cheaper is better
Under conditions similar to the above, we assume the users buy the cheapest electricity each day. In this case the electricity consumption ratio of thermal is 70.0%, nuclear is 16.0% and hydro is 14.0%. The cost of the purchased electricity is 155,479 yen. Figure 6 shows the electricity consumption ratio.
4.3. Electricity exchange trade between two smart cities
Assume that somewhere in Japan there are two smart cities whose electricity demand is supplied 100% by solar power. The two smart cities have the same electricity demand and the same amount of solar radiation, and each has installed photovoltaic capacity corresponding to its demand. As the output of the solar power systems varies from day to day, if the installed batteries are not sufficient, a shortage of electricity will occur on some days. However, the amount of installed batteries should be kept as small as possible because of their high price. For such a situation, we propose connecting the two smart cities by the Digital Grid, keeping their supply-demand balance by exchanging electricity. In this way, solar energy can meet the electricity demand of both smart cities.
Three cases are assumed for electricity interchange between the smart cities:
(1) Electricity cannot be interchanged; in other words, each smart city operates as an independent cell.
(2) When one of the smart cities is short of electricity, it can obtain interchanged electricity from the other smart city.
(3) In addition to (2), when one of the smart cities generates surplus electricity that cannot be stored in its battery, the surplus can be exported to the other smart city.
We examine how many times electricity shortages occur, in order to determine which of these three cases is better and how much storage capacity is needed. Case (1) serves to check that each smart city has suitable generation equipment when enough storage is introduced. As in Section 4.1, the electricity demand of one family is built from the average electricity demand in Kyoto in April 2012 [18], expressed as a random number with a normal distribution. Each smart city is composed of 100 such families.
The output of solar electricity generation is calculated from the recorded output of a 10 kW solar electricity generating system at a location in Japan. The average daily generation is 28.7 kWh with a standard deviation of 16.3 kWh, and the generated amount is simulated as a random number with a normal distribution based on these values.
A photovoltaic installation sized to the average daily demand of 100 families is introduced in each smart city. We checked how many times electricity shortages occur in each case while varying the amount of storage capacity, and we also counted the number of electricity interchanges. We assume there are no storage or transmission losses.
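A compact sketch of this simulation (our reconstruction under stated assumptions: batteries start half full, a short city is topped up from the external grid after the shortage is counted, and surplus beyond capacity is otherwise curtailed):

import random

MEAN_D, SD_D = 100 * 14.5, 4.0 * 100 ** 0.5   # city demand per day (kWh)
MEAN_G, SD_G = MEAN_D, MEAN_D * 16.3 / 28.7   # PV sized to average demand

def simulate(case, capacity, days=365):
    """Count shortage days for two identical solar smart cities (cases 1-3)."""
    soc = [capacity / 2, capacity / 2]         # battery state of charge per city
    shortages = 0
    for _ in range(days):
        for i in (0, 1):                       # add generation minus demand
            soc[i] += max(0.0, random.gauss(MEAN_G, SD_G)) - random.gauss(MEAN_D, SD_D)
        if case >= 2:                          # a short city draws on the other
            for i, j in ((0, 1), (1, 0)):
                if soc[i] < 0 < soc[j]:
                    t = min(-soc[i], soc[j]); soc[i] += t; soc[j] -= t
        if case == 3:                          # surplus over capacity is exported
            for i, j in ((0, 1), (1, 0)):
                if soc[i] > capacity and soc[j] < capacity:
                    t = min(soc[i] - capacity, capacity - soc[j])
                    soc[i] -= t; soc[j] += t
        for i in (0, 1):
            if soc[i] < 0:
                shortages += 1
                soc[i] = 0.0                   # shortfall made up from the grid
            soc[i] = min(soc[i], capacity)     # remaining overflow is curtailed
    return shortages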
Figure 7 (left) The number of times of shortages
Figure 8 (right) The number of times of interchange
Figure 7 (left) and Figure 8 (right) show the simulation results. Figure 7 shows the number of electricity shortages in the two smart cities. If battery capacity is sufficient, there is no electricity shortage even in case (1). If battery capacity is smaller, case (3) yields fewer days of electricity shortage than case (1).
The number of electricity interchanges decreases as battery capacity increases. However, in order to eliminate power shortages completely, a sufficient amount of storage battery must be introduced in every case. If 5 or 6 days of electricity shortage per year are permitted, the sufficient storage capacity in cases (2) and (3) turns out to be about two thirds of that in case (1); in this case, the power shortfall is made up from the existing grid.
Various practical use cases can be considered in addition to the cases examined in
this report.
5. Conclusion
In this paper, the design of an electricity trading system, its execution method, and the electricity products traded over the Digital Grid are proposed. The proposed system has the following features, which enable electricity trading under the introduction of decentralized energy resources: electricity identification, an execution algorithm, delivery of electricity, and player design. Using the proposed system, we examined three cases and confirmed that electricity trading is accomplished.
References
[1] Y. Hayashi, S. Kawasaki, J. Baba, A. Yokoyama, et al., Active Coordinated Operation of Distribution Network System for Many Connections of Distributed Generators, IEEJ Trans. PE, Vol. 127, No. 1, pp. 41-52, 2007.
[2] A. Shiki, A. Yokoyama, J. Baba, T. Takano, T. Goda, Y. Izui, Autonomous Decentralized Control of
Supply and Demand by Inverter Based Distributed Generations in Isolated Microgrid, IEEJ Trans.PE,
Vol.127, No.1, pp.95-103, 2007.
[3] H. Farhangi, The Path of the Smart Grid, IEEE Power and Energy Magazine, Vol. 8, 2010, pp. 18-
28.
[4] W. Wang, Y. Xu, M. Khanna, A survey on the communication architectures in the smart grid, Computer Networks, Vol. 55, Issue 15, pp. 3604-3629, 2011.
[5] S. G. Kim, S. I. Hur, Y. Chae, Smart Grid and Its Implications for Electricity Market Design, Journal of
Electrical Engineering & Technology, Vol. 5, No. 1, pp.1-7, 2010.
[6] F. P. Sioshansi, W. Pfaffenberger, Electricity Market Reform an International Perspective, Elsevier,
2006.
[7] F. P. Sioshansi, Competitive Electricity Markets Design, Implementation, Performance, Elsevier, 2011.
[8] R.A.C. Van der Veen, L.J. De Vries, The impact of microgeneration upon the Dutch balancing market,
Energy Policy Vol. 37, Issue 7, pp.2788-2797, 2009.
[9] G. Romanovsky, G. Xydis, J. Mutale, Participation of Smaller Size Renewable Generation in the
Electricity Market Trade in UK: Analyses and Approaches, Innovative Smart Grid Technologies (ISGT
Europe), 2011 2nd IEEE PES International Conference and Exhibition on, 2011
[10] R. Abe, H. Taoka, D. McQuilkin, Digital Grid: Communicative Electrical Grids of the Future, IEEE Transactions on Smart Grid, Vol. 2, Issue 2, 2011.
[11] P. Brown, U.S. Renewable Electricity: How Does Wind Generation Impact Competitive Power
Markets?, US Congressional Research Service Report, 2012.
[12] A. Ciarreta, M.P. Espinosa, C. Pizarro-Irizar, The Effect of Renewable Energy in the Spanish Electricity
Market, 2012 International Conference on Future Electrical Power and Energy Systems Lecture Notes
in Information Technology, Vol. 9, 2012.
[13] I. MacGill, Electricity market design for facilitating the integration of wind energy: Experience and
prospects with Australian National Electricity Market, Energy Policy, Vol. 38, Issue 7, pp.3180-3191,
2010.
[14] K. Shibano, R. Abe, H. Hirai, M. Hasegawa, K. Aihara, A Study on Minimization of Transmission Cost over Electricity Routing among Asynchronous Grids, Technical Committee on Communication Sciences, 2011, in Japanese.
[15] K. Shibano, Electric Network Innovation by Digital Grid (5): Trading, The 2013 IEEJ annual meeting, DVD: 6-059, pp. 108-109, 2013, in Japanese.
[16] Tokyo Stock Exchange Home Page, What is the zaraba method?,
http://www.tse.or.jp/english/faq/list/stockprice/p_c.html.
[17] Energy and Environment Councils Cost Review Committee Dec. 2011 (Cost To Kenshoo Iinkai
Hokokusyo 12/2011 Sanko Shiryo 2 Hatsuden Cost No Shisan Ichiran),
http://www.cas.go.jp/jp/seisaku/npu/policy09/pdf/20111221/hokoku_sankou2.pdf , in Japanese.
[18] Kyoto city Home Page, Monthly consumption amount of electricity and gas in home use in Kyoto City
2012 (Kyoto Shinai No Gokatei Ni Okeru Denki Toshi Gas Gekkan Shiyouryo 2012),
http://www.city.kyoto.lg.jp/kankyo/page/0000126392.html , in Japanese.
Parametric Mogramming with Var-oriented
Modeling and Exertion-Oriented
Programming Languages
Michael Sobolewski a,b, Scott Burton a,c and Raymond Kolonay a
a Air Force Research Laboratory, WPAFB, Ohio 45433
b Polish Japanese Institute of IT, 02-008 Warsaw, Poland
c American Optimization LLC, Liberty Township, Ohio 45044
Abstract. The Service ORiented Computing EnviRonment (SORCER) targets
service abstractions for transdisciplinary concurrent engineering with support for
true service-oriented (SO) computing. SORCER's models are expressed in a top-
down Var-oriented Modeling Language (VML) unified with programs in a
bottom-up Exertion-Oriented Language (EOL). In this paper the basic concepts of
mogramming are presented. On the one hand, modeling with service variables
allows for computational fidelity within multiple types of evaluations. On the other
hand, any combination of local and remote services can be described in EOL as a
collaborative federation of engineering applications, tools, and utilities. An example of an aircraft conceptual design application is given to illustrate how parametric models can participate in service-oriented engineering analyses.
Keywords. transdisciplinary concurrent engineering, service-oriented mogramming;
var-oriented modeling; exertion-oriented programming; SOA; SORCER
Introduction
A transdisciplinary computational model [4] requires extensive computational
resources to study the behavior of a system by computer simulation. The large system
under study that consists of thousands or millions of variables is often a complex
adaptive system for which simple, intuitive analytical solutions are not readily
available. Experimentation with the distributed model is usually done by adjusting the parameters of the system in the computer network. Such experimentation, for example with multi-fidelity aerospace models, involves best-of-breed applications, tools,
and utilities considered as heterogeneous services of the model. The modeling services
are used in local/distributed service collaboration to calculate and/or optimize the
model across multiple disciplines fusing their domain-specific services running on
laptops, workstations, clusters, and supercomputers.
An elementary service is the work performed in which a service provider (one that
serves) exerts acquired abilities to execute a computation. Elementary services are
autonomous units of functionality and can be either local or distributed. Elementary
services have no calls to each other embedded in them. By contrast a compound service
is a composition of elementary and other compound services that exerts acquired
abilities of collaborating service providers. Each elementary service implements
multiple actions of a cohesive (well integrated) service type, usually defined by an
doi:10.3233/978-1-61499-302-5-381
interface type in an underlying programming language, e.g. a Java interface. A service
provider can implement multiple service types, and thus can provide multiple
elementary services to be offered. An elementary service reference with operation of its
service type, complemented by its QoS parameters is called a service signature.
Service signatures are used to reference local or remote service providers. Different
instances of a service provider are equivalent units of functionality identified by the
same signature.
In most service systems the focus is on back-end aggregation of services into a single provider, thus having more services performed by the same provider or by the same provider node, e.g., an application server. In either case these new services are still elementary services to the end user.
software developers and deployers, is called service assembly in contrast to the
aggregation of services at the front-end accomplished by the end user. The front-end
aggregation is called service composition and requires service-oriented (SO) languages
to express actualization of compound services. Service compositions are called
exertions and use service signatures to bind at runtime to corresponding service
providers. A dynamic collection of service providers requested for the actualization of
exertion is called a service federation.
In concurrent engineering computing systems each local or distributed service
provider in the collaborative federation performs its services in an orchestrated
interaction of applications, tools, and utilities. Once the service collaboration is
complete, the federation dissolves and the providers disperse and seek other federations
to join. These service providers have to be managed by a relevant service-centric
operating system with programming environment to express complex interactions of
providers in dynamic virtual federations [1].
The SORCER platform [4][5][6] (open source project [7]) introduces front-end mogramming languages [3][8] with a modular SO Operating System (SOOS). It adds two entirely new layers of abstraction to the practice of SO computing: SO models expressed in a Var-oriented Modeling Language (VML) in concert with SO programs
expressed in an Exertion-Oriented Language (EOL). The unification of VML and EOL
has been verified and validated in research projects at Air Force Research Lab and
SORCER Lab at TTU [1][9][11][12][13].
The remainder of this paper is organized as follows: Section 1 describes briefly
var-oriented modeling; Section 2 describes exertion-oriented programming; Section 3
introduces the SORCER SOOS; Section 4 demonstrates parametric modeling for
aircraft conceptual design; finally Section 5 concludes with final remarks and
comments.
1. Var-oriented Modeling
A computation is a relation between a set of inputs and a set of corresponding outputs.
There are many ways to describe or represent a computation and a composition of them.
Two types of computations are considered in this paper: var-oriented and exertion-
oriented. A front-end service composition with its own control strategy created by the
end user in Exertion-Oriented Language (EOL) is called an exertion. A service variable,
called a var and var-model are front-end modeling services in the Var-Oriented
Language (VOL) and the Var-oriented Modeling Language (VML), respectively.
The exertions are drawn primarily from the semantics of a routine. The vars and
var-models are drawn primarily from the semantics of a variable and function
composition. Either one of these process expressions can be mixed with another
depending on the direction of the problem being solved: top down or bottom up. The
top down approach usually starts with var-oriented modeling in the beginning focused
on relationships of vars in the model with no need to associate them to services. Later
the var-model may incorporate relevant services (evaluators, getters, and setters)
including exertions as evaluators. In var-oriented modeling three types of models can
be defined (response, parametric, and optimization). EOL distinguishes three types of
exertions: elementary exertions (tasks), batch tasks (batches), and hierarchical
exertions (jobs). The functional composition notation has been used for expressions in
VOL, VML, and EOL that is usually complemented with the Java API.
1.1. Var-Oriented Programming (VOP)
In every computing process variables represent data elements and the number of
variables increases with the increased complexity of problems being solved. The value
of a computing variable is not necessarily part of an equation or formula as in
mathematics. In computing, a variable may be employed in a repetitive process:
assigned a value in one place, then used elsewhere, then reassigned a new value and
used again in the same way. Handling large sets of interconnected variables for
transdisciplinary computing requires adequate programming methodologies.
A service variable (var) is a collection of triplets: { <evaluator, getter, setter> },
where:
1. an evaluator is a service with the argument vars that define the var
dependency chain;
2. a getter is a pipeline of filters processing the result of evaluation; and
3. a setter assigns and returns a value that is a quantity filtered out from the
output of the current evaluator.
The var value is invalid when the current evaluator, getter, or setter is changed, the current evaluator's arguments change, or the value is undefined. VOP is a programming paradigm that uses vars to design var-oriented multifidelity compositions. An <evaluator, getter, setter> triplet is called a var fidelity. VOP is based on dataflow principles, where changing the value of any argument var should automatically force recalculation of the var's value. VOP promotes values defined by selectable var fidelities, with their dependency chains of argument vars, as the central concept of processing.
Evaluators, getters, and setters can be executed locally or remotely. An evaluator may use a differentiator to calculate the rates at which the var quantities change with respect to the argument vars. Multiple associations of <evaluator, getter, setter> can be used with the same var, providing the var's multifidelity. The semantics of the value, whether the var represents a mathematical function, subroutine, coroutine, or data, depends on the evaluator, getter, and setter currently used by the var. Var dependency chaining provides the integration framework for all possible kinds of computations represented by various types of evaluators, including the exertions described in Section 2.
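To make the triplet concrete, here is a minimal Python sketch of a var with lazy evaluation over its dependency chain (an illustration of the idea only, not the SORCER API; invalidation is triggered manually here, whereas a full dataflow implementation would propagate it automatically):

class Var:
    def __init__(self, name, evaluator=None, getter=None, args=()):
        self.name, self.evaluator, self.getter = name, evaluator, getter
        self.args = list(args)            # dependency chain of argument vars
        self._value, self._valid = None, False

    def set(self, value):                 # setter: assign and validate a value
        self._value, self._valid = value, True
        return self._value

    def invalidate(self):                 # e.g. after an argument var changes
        self._valid = False

    def get(self):                        # evaluate lazily along the chain
        if not self._valid:
            raw = self.evaluator(*[a.get() for a in self.args])
            self._value = self.getter(raw) if self.getter else raw
            self._valid = True
        return self._value

x1, x2 = Var("x1"), Var("x2")
y = Var("y", evaluator=lambda a, b: a * a + b, args=(x1, x2))
x1.set(3.0); x2.set(4.0)
print(y.get())                            # 13.0
x1.set(1.0); y.invalidate()               # argument change forces recalculation
print(y.get())                            # 5.0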
1.2. Var-Oriented Modeling (VOM)
Var-Oriented Modeling is a modeling paradigm using vars in a specific way to
define heterogeneous var-oriented models, in particular large-scale multidisciplinary
models including response, parametric, and optimization models. The programming
style of VOM is declarative; models describe the desired results of the output vars
without explicitly listing instructions or steps that need to be carried out to achieve the
results. VOM focuses on how vars connect (compose) in the scope of the model, unlike
imperative programming, which focuses on how evaluators calculate. VOM represents
models as a series of interdependent var connections, with the evaluators, getters, and
setters between the connections being of secondary importance.
A var-oriented model or simply var-model is an aggregation of related vars. A var-
model defines the lexical scope for var unique names in the model. Three types of
models: response, parametric, and optimization have been studied to date [9]. In the
model hierarchy, optimization models are also parametric and response models, and parametric models are also response models. These models are declared in VML using the functional
composition syntax with VOL and possibly with EOL and the Java API to configure
the vars [3]. Consider the Rosen-Suzuki optimization problem, where:
design variables: x1, x2, x3, x4; response variables: f, g1, g2, g3, and
f = x1^2-5.0*x1+x2^2-5.0*x2+2.0*x3^2-21.0*x3+x4^2+7.0*x4+50.0
g1 = x1^2+x1+x2^2-x2+x3^2+x3+x4^2-x4-8.0
g2 = x1^2-x1+2.0*x2^2+x3^2+2.0*x4^2-x4-10.0
g3 = 2.0*x1^2+2.0*x1+x2^2-x2+x3^2-x4-5.0
The goal is to minimize f subject to g1 <= 0, g2 <= 0, and g3 <= 0.
In VML this problem is expressed as a var-model. [The VML listing is garbled in the source.] All vars in the model are configured with the needed evaluators, getters, setters, and differentiators by a configuration method. Having the model declared and configured, we can set the values of the input vars, get the output value of f, or get the value of the constraint var g2c.
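For reference, the response functions themselves in plain Python (a transcription of the formulas above, not VML):

def f(x1, x2, x3, x4):
    return (x1**2 - 5*x1 + x2**2 - 5*x2 + 2*x3**2 - 21*x3
            + x4**2 + 7*x4 + 50)

def g1(x1, x2, x3, x4):
    return x1**2 + x1 + x2**2 - x2 + x3**2 + x3 + x4**2 - x4 - 8

def g2(x1, x2, x3, x4):
    return x1**2 - x1 + 2*x2**2 + x3**2 + 2*x4**2 - x4 - 10

def g3(x1, x2, x3, x4):
    return 2*x1**2 + 2*x1 + x2**2 - x2 + x3**2 - x4 - 5

x = (0.0, 1.0, 2.0, -1.0)                            # a feasible design point
print(f(*x), [g(*x) <= 0 for g in (g1, g2, g3)])     # 6.0 [True, True, True]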
Var-models with no constraints and no objective are parametric models. A parametric task (see Section 2) allows for specifying a parametric table with rows of values of input vars and calculating the corresponding output table, as illustrated in Fig. 1.
Var-models support the multidisciplinary and multifidelity traits of transdisciplinary computing. Var compositions across multiple models define multidisciplinary problems; multiple evaluators per var and multiple differentiators per evaluator define a var's multifidelity. These are called amorphous models. For the same var-model, an alternative triplet <evaluator, getter, setter> (a new fidelity) can be selected or added at runtime to evaluate an updated analysis ("shape") of the model and quickly update the related computations in an evolving or new direction. Var-models can be used as local object models or as network service providers. In either case modeling tasks (exertions) are used to specify modeling services, as illustrated in Section 2.
2. Exertion-oriented Programming
The central exertion principle is that a computation can be expressed and actualized by
the interconnected federation of simple, often uniform, and efficient service providers
that compete with one another to be exerted for their services in the dynamically
created federation. Each service provider implements multiple actions of a cohesive
(well integrated) service type, usually defined by an interface type. A service provider
implementing multiple service types provides multiple services. Its service type
complemented by its QoS parameters, can identify the functionality of a provider. In an exertion-oriented language (EOL) a service exertion can be used as a closure over free variables in the exertion's data and control contexts. In exertion-oriented programming
everything is a service. Exertions can be used directly as service providers as well.
In EOL service providers are uniformly accessed through two types of references:
class and interface signatures. Class and interface signatures are also called object and
net signatures correspondingly. Exertion-oriented programming (EOP) is a SO
programming paradigm using service providers and exertions. Exertions can be created
with textual language (netlets), API (exertlets), and user agents that behind visual
interactions create exertlets. Netlets are interpreted scripts and executed by the network
shell of the SORCER Operating System (SOS). Invoking the exert operation on the
exertlet (Java object) returns the collaborative result of the requested service federation.
Netlets are executed with a SORCER network shell (nsh) the same way Unix scripts
are executed with any Unix shell.
Exertions explicitly encapsulate data, operations, and a control strategy for the collaboration. The SOS dynamically binds the signatures to corresponding service providers, the members of the exerted federation. The exerted members of the federation collaborate transparently according to the exertion's control strategy managed by the SOS. The SOS invocation model is based on the Triple Command Pattern that defines the federated method invocation (FMI) [5].
Three types of service exertions are distinguished: tasks, batches and jobs, each defined with its corresponding EOL operator. [The EOL operator listings are garbled in the source.]
Consider a parametric task that specifies a parametric model in the network by its service type and provider name, with the parametric and response tables indicated by table URLs. The returned output is specified in the task according to the structure shown in Fig. 1 and is calculated by exerting the task; at the same time the output table is written to the response-table URL.
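The shape of such a parametric evaluation can be sketched in plain Python (illustrative only; in SORCER the rows would be exerted against a remote var-model rather than called locally):

def run_parametric(model, input_rows):
    """Evaluate one response row per input row, as in the table of Fig. 1."""
    return [{name: fn(**row) for name, fn in model.items()} for row in input_rows]

model = {"f": f, "g1": g1, "g2": g2, "g3": g3}    # functions from Section 1.2
response_table = run_parametric(model, [
    {"x1": 0.0, "x2": 1.0, "x3": 2.0, "x4": -1.0},
    {"x1": 1.0, "x2": 1.0, "x3": 1.0, "x4": 1.0},
])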
3. The SORCER Operating System (SOS)
In SORCER the provider container is responsible for deploying services in the network, publishing their proxies to one or more registries, and allowing
requestors to access its proxies. Providers advertise their availability in the network;
registries intercept these announcements and cache proxy objects to the provider
services. The SOS looks up proxies by sending queries to registries and making
selections from the available service types. Queries generally contain search criteria
related to the type and quality of service. Registries facilitate searching by storing
proxy objects of services and making them available to the SOS. Providers use
discovery/join protocols [2] to publish services in the network and the SOS uses
discovery/join protocols to obtain service proxies in the network. While an exertion
defines the orchestration of its service federation, the SOS implements the service
choreography in the federation defined by its FMI [5].
The SOS allows execution of netlets (interpreted mograms) by exerting the specified federation of service providers. The overlay network of the service providers defining the functionality of the SOS is called the sos-cloud, and the overlay network of application providers is called the app-cloud, the service processor [4]. The instruction set of this service processor consists of all operations offered by all service providers in the app-cloud. Thus, an exertion is composed of instructions specified by service signatures, with its own control strategy per service composition and a data context representing the shared data for the underlying federation. The signatures (instances of the signature type) specify the participants of the collaboration in the app-cloud.
Figure 1. Output table of parametric analysis with configurable parameters and responses.
Sos-providers and app-providers have no mutual associations prior to the execution of an exertion; they come together dynamically (federate) for all nested tasks and jobs in the exertion. Domain-specific servicers within the app-cloud, called taskers, execute task exertions. Rendezvous peers (jobbers for synchronous service coordination, spacers for asynchronous service coordination [10], and catalogers for dynamic network service catalogs) manage service collaborations. Providers of the tasker, jobber, and spacer types are the basic service containers.
4. Aircraft Conceptual Design Application using SORCER
The Air Force Research Labs (AFRL) Multidisciplinary Science and Technology
Center (MSTC) is investigating conceptual design processes and computing
frameworks that could significantly impact the design of the next generation efficient
supersonic air vehicle (ESAV). To make the technological advancements required of a
new ESAV, the conceptual design process must accommodate both low- and high-
fidelity multidisciplinary engineering analyses. These analyses may be coupled and
computationally expensive, which poses a challenge since a large number of
configurations must be analyzed. In light of these observations, the ESAV design
process was implemented using the SORCER Operating System (SOS) to combine
propulsion, structures, aerodynamics, performance, and aeroelasticity in a
multidisciplinary analysis (MDA). The SORCER platform provides the MDA
automation and tight integration to distributed computing resources necessary to
achieve the volume of analyses required for conceptual design.
The MDA is a blend of conceptual and preliminary design methods from
propulsion, structures, aerodynamics, performance, and aeroelasticity disciplines. The
analysis process and data flow are shown in the ESAV N2 diagram in Fig. 2. The process
begins by parametrically generating discretized geometry suitable for several different
analyses at varying fidelities. The geometry is used as input to compute several figures
of merit of the aircraft, which include the aircraft drag polars, design mass, range, and
aeroelastic performance. The different responses are evaluated for several flight
conditions and maneuvers. These responses are then used to construct the objective and
constraints of the multidisciplinary optimization (MDO) problem.
MDO generally requires a large number of MDAs to be performed. This significant
computational burden is addressed by using the SORCER platform. The network-
centric approach of SORCER enables the use of heterogeneous computing resources,
including a variety of operating systems, hardware, and software. Specifically, the
ESAV studies performed herein use SORCER in conjunction with a mix of Linux-
based cluster computers, desktop Linux-based PCs, Windows PCs, and Macintosh PCs.
The ability of SORCER to leverage these resources is significant to MDO applications
in two ways: 1) it supports platform-specific executable codes that may be required by
an MDA; and 2) it enables a variety of computing resources to be used as one entity
(including stand-alone PCs, computing clusters, and high-performance computing
facilities). The main requirements for using a computational resource in SORCER are
network connectivity and Java platform compatibility. SORCER also supports load
balancing across computational resources using space computing, making the
evaluation of MDO objective and constraint functions in parallel a simple and a
dynamically scalable process.
SORCER employs Jini [1] technology with its JavaSpaces service [14] to
implement loosely coupled space-based service federations. The SOS via its Spacer
providers [10] enables different processes on different computers to communicate
asynchronously in a reliable manner. Using Spacer services, SOS implements a self-
load balancing service cloud that can dynamically grow and shrink during the course of
an optimization study, see Fig 3-left.
An exertion space, or simply space, is exertion storage in the network that is managed by the SOS (its Spacer providers). The space provides a type of shared memory
where requestors, e.g., vars, can put exertions they wish to be processed by service
providers. Service providers, in turn, find spaces in the network and monitor them for
exertion tasks with their service type. If a service provider sees a task it can operate on
in a space, and the task has a flag indicating it has not been processed, the provider
takes the task from the space. The provider then performs the requested service and
returns the task to the space with a flag indicating the task has been processed. Once
the task has been returned to the space, the Spacer that initially wrote the task to the
space detects the returned task and checks to see if it has been processed. If the task
indicates it has been processed, the Spacer removes the task from the space and returns
it to the submitting service requestor.
To achieve the load balancing across multiple computers, a service provider may
be configured to have a fixed number of worker threads. The number of worker threads
determines the number of tasks from the space a provider can process in parallel. By
configuring the number of worker threads for a specific service provider on a specific
computer, the provider can self-load balance the computer it is hosted on.
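The take-process-return cycle, including per-node worker threads, can be sketched as follows (a Python analogue for illustration only; SORCER itself uses JavaSpaces, not this code):

import queue, threading

space, results = queue.Queue(), queue.Queue()   # space + the requestor's return path

def start_provider(n_threads=2):
    """Each worker thread takes one task at a time from the space, performs the
    service, and returns the processed task; the thread count bounds how many
    tasks this node processes in parallel (self-load balancing)."""
    def worker():
        while True:
            task = space.get()
            task["result"] = task["payload"] ** 2   # placeholder analysis
            task["done"] = True
            results.put(task)                       # collected by the requestor
    for _ in range(n_threads):
        threading.Thread(target=worker, daemon=True).start()

start_provider()
for x in range(4):
    space.put({"payload": float(x), "done": False})
print(sorted(results.get()["result"] for _ in range(4)))   # [0.0, 1.0, 4.0, 9.0]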
Figure 2. The ESAV MDA N2 diagram includes geometry generation, aerodynamic analysis, aeroelastic analysis, and performance analysis. (Each box in the figure represents a SORCER provider.)
The SORCER platform is then used with an external optimization program to optimize an ESAV for range. The results from the optimization are shown in Fig. 3 (right). The optimized design has a higher aspect ratio than the baseline design. This
feature is consistent with historical aircraft design trends for long-range aircraft. The
results provide a degree of validation of the implementation of the optimization code,
the SORCER ESAV parametric model, the SORCER providers, and the SOS.
The use of the space computing proved reliable and efficient. It was a
straightforward process to add computers to the SORCER service cloud as needed
during the course of the two optimization studies. This flexibility proved valuable as
the number of computers available varied from day-to-day.
5. Conclusions
As we move from computing of the information era to advanced computing of the
service era, it is becoming evident that new SO mogramming languages are required.
Through higher-level abstractions, these languages reduce the complexity of transdisciplinary designs performed by hundreds of people working together and using thousands of services (programs) already written in legacy languages and dispersed across the global network. Domain-specific SO languages are for humans, unlike
software languages for computers, intended to express domain specific complex
processes and related solutions. Three programming languages for SO computing are
described in this paper: VOL, VML, and EOL. The network shell (nsh) interprets
netlets in these languages and the SOS manages corresponding service federations.
The concept of the var fidelities in the EGS framework combined with exertions
provides the uniform modeling technique for SO interoperability and integration with
various applications, tools, utilities, and data formats.
Figure 3. Left: SORCER uses exertion space to provide a flexible, dynamic space computing facility for ESAV optimization studies. Right: the ESAV optimization result half-span planforms: baseline 1550 mi range (top); optimized 2500 mi range (bottom).
The SORCER operating system supports the two-way convergence of modeling and programming for SO computing, as presented in the ESAV parametric model. On one hand, EOP is uniformly converged
with VOM to express a front-end SO procedural federation. On the other hand, VOM is
uniformly converged with EOP to express a front-end declarative SO modeling. Both
front-end exertions and var-models can be used as service providers directly. The
evolving SORCER platform (open source project [7]) with its SO computational model
has been successfully verified and validated in ESAV and other concurrent engineering
distributed applications [1][9][11][12] [13].
6. Acknowledgements
This work was partially supported by Air Force Research Lab, Aerospace Systems
Directorate, Multidisciplinary Science and Technology Center, the contract number
F33615-03-D-3307, Algorithms for Federated High Fidelity Engineering Design
Optimization and by SMT Software S.A., the contract number POIG.01.04.00-14-
062/12 Engineering Toolkit.
References
[1] Burton, S.A., Alyanak, E.J., and Kolonay, R.M. (2012). Efficient Supersonic Air Vehicle Analysis and Optimization Implementation using SORCER, 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, AIAA 2012-5520
[2] Jini Network Technology Specifications v2.1. Available at: http://www.jiniworld.com/doc/spec-index.html. Accessed 5 June 2013
[3] Sobolewski, M. and Kolonay, R., 2012. Unified Mogramming with Var-Oriented Modeling and
Exertion-Oriented Programming Languages, Int. J. Communications, Network and System Sciences,
2012, 5, 9. Published online http://www.scirp.org/journal/PaperInformation.aspx?paperID=22393
[4] Sobolewski, M., 2012, Object-Oriented Service Clouds for Transdisciplinary Computing, in I. Ivanov et
al. (eds.), Cloud Computing and Services Science, DOI 10.1007/978-1-4614-2326-3_1, Springer
Science + Business Media New York 2012
[5] Sobolewski, M., 2010. Object-Oriented Metacomputing with Exertions, Handbook On Business
Information Systems, A. Gunasekaran, M. Sandhu (Eds.), World Scientific Publishing Co. Pte. Ltd,
ISBN: 9789812836052
[6] SORCERsoft.org. Available at: http://sorcersoft.org. Accessed 5 June 2013
[7] SORCER Project. Available at: http://sorcersoft.github.io. Accessed 5 June 2013
[8] Kleppe A., 2009. Software Language Engineering, Pearson Education, ISBN: 9780321553454
[9] Kolonay, R. M. and Sobolewski M., 2011. Service ORiented Computing EnviRonment (SORCER) for Large Scale, Distributed, Dynamic Fidelity Aeroelastic Analysis & Optimization, International Forum on Aeroelasticity and Structural Dynamics, IFASD 2011, 26-30 June, Paris, France
[10] Sobolewski, M., 2008. Federated Collaborations with Exertions, 17th IEEE International Workshop on Enabling Technologies: Infrastructures for Collaborative Enterprises (WETICE), pp. 127-132
[11] Goel, S., Talya, S. S. and Sobolewski, M., 2008. Mapping Engineering Design Processes onto a Service-Grid: Turbine Design Optimization, International Journal of Concurrent Engineering: Research & Applications, Concurrent Engineering, Vol. 16, pp. 139-147
[12] Kolonay, R. M., Thompson, E. D., Camberos, J. A. and Eastep, F., 2007. Active Control of Transpiration Boundary Conditions for Drag Minimization with an Euler CFD Solver, AIAA-2007-1891, 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Honolulu, Hawaii
[13] Xu, W., Cha, J., Sobolewski, M., 2008. A Service-Oriented Collaborative Design Platform for Concurrent Engineering, Advanced Materials Research, Vols. 44-46 (2008), pp. 717-724
[14] Freeman, E., Hupfer, S., and Arnold, K., 1999. JavaSpaces Principles, Patterns, and Practice, Addison
Wesley Longman, Inc.
Conceptual Design of Sustainable Liquid
Methane Fuelled Passenger Aircraft
M. Burston 1, T. Conroy 2, L. Spiteri 2, M. Spiteri 2, C. Bil 3 and G. E. Dorrington 3
RMIT University, Melbourne, Victoria, 3083, Australia
Abstract. Motivated by concerns over rising costs of Jet-A fuel and the current
limitations of drop-in fuel substitutes, it is proposed that biomethane (or Bio-
LNG) provides a promising sustainable aviation fuel. This paper discusses some
technical considerations for converting a jet airliner to biomethane fuel. Following
consideration of aircraft configuration alternatives and performance issues, a
conceptual design is presented where cryogenic methane is stored in both an
insulated wing-box and under-wing pods. It is concluded that the weight penalty of
such a cryogenic fuel system would be relatively modest, hence the range and
payload capability of existing Jet-A aircraft can be matched.
Keywords: Liquefied Natural Gas, Biomethane, Biofuel, Air Transportation
Introduction
In 2012, the aviation sector consumed $209 billion (US) of Jet-A (Avtur) fuel [1], emitting 634 million tonnes of CO2 into the atmosphere [2]. In the next 20 years, it is predicted that the annual demand for commercial airline passenger transport will grow from 5.1% to 12.8% revenue passenger-km [3]. However, IATA and ACARE have set
challenging targets for the reduction of carbon emissions by 2050. To meet these
targets, net carbon-emissions per seat-km must be substantially reduced, without
imposing any significant increase in Direct Operating Cost (DOC).
1. Future Aviation Fuel Options
1.1. Drop-in Fuels
To achieve carbon-emission targets many drop-in biofuels have been proposed
[4-6], however current production rates and market prices limit their near-future use as
a blend with Jet-A [7]. Drop-in biofuel production requires large areas of land and
extensive use of fertilizer, pesticide and water, etc., and is therefore not capable of
supporting the global aviation fleet [5, 8]. Furthermore, the cost of existing drop-in
biofuels is significantly higher than Jet-A fuel [9]. Another option as a drop-in solution
involves the use of synthetic Jet-A or Syn-Jet made from natural gas through the Fischer-Tropsch (FT) process [10]. However, the FT process does not result in any net CO2 emission reduction. Synthetic fuels are therefore not considered a viable solution for sustainable aviation [11].
1 Undergraduate, Bachelor of Engineering (Aerospace)
2 Undergraduate, Bachelor of Engineering (Aerospace) and Business (Management)
3 Academic staff member, School of Aerospace, Mechanical and Manufacturing Engineering, contact for correspondence: graham.dorrington@rmit.edu.au
doi:10.3233/978-1-61499-302-5-391
1.2. Liquefied Natural Gas (LNG), Biomethane and Bio-LNG
IATA data show that the global price of Jet-A fuel has risen substantially in recent years [12], essentially following the fluctuating price of crude oil. Meanwhile, US Energy Information Administration (EIA) data suggest that the US import price of Liquefied Natural Gas (LNG) has become decoupled from crude oil [13] and is presently only about 20% of the Jet-A price on an equal-energy basis (Figure 1).
LNG consists of more than 90% liquid methane (LCH4), but also includes small fractions of liquid ethane, propane, nitrogen and other impurities. The obvious disadvantage is that LNG is a cryogen and storage tanks need to be thermally insulated [14], since LCH4 boils at 111-126 K at 1-3 bar [14, 15]. In order to operate LNG fuelled aircraft, a global infrastructure change to supply and store LNG at airports is required. Despite the high infrastructure costs, LNG aircraft operations could offer a profitable and prudent investment due to the abundance of low-cost LNG fuel [16].

Figure 1. Jet-A Fuel Price (upper) vs. LNG Price (lower). Data sourced from the US EIA [13].

Stoichiometric fuel-air combustion equations show that a 20% reduction in CO2 emission may be achieved using LNG instead of Jet-A (for the same heat release). To achieve further reductions, liquid biomethane would have to be blended with LNG, to create Bio-LNG. Biomethane is produced from biogas and potentially reduces net CO2 emissions by up to 97% relative to petroleum fuels [17], in line with the EU Flightpath 2050 objectives for CO2 emissions and sustainable biomass fuel derivation [18, 19]. Biomethane is a relatively energy efficient biofuel per hectare of land
available and is already used in the automotive and maritime sectors [19]. For example,
liquid biomethane is already used safely in airports to fuel passenger buses [20].
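As a rough cross-check of the 20% figure, using representative fuel properties (the heating values and carbon ratios below are our assumptions, not data from the paper):

# Lower heating value (MJ/kg) and kg of CO2 emitted per kg of fuel burned.
fuels = {
    "Jet-A":   {"lhv": 43.2, "co2_per_kg": 3.16},
    "methane": {"lhv": 50.0, "co2_per_kg": 44.0 / 16.0},   # CH4 -> CO2 mass ratio
}

intensity = {k: v["co2_per_kg"] / v["lhv"] for k, v in fuels.items()}   # kg CO2/MJ
saving = 1.0 - intensity["methane"] / intensity["Jet-A"]
print(f"{saving:.0%}")   # ~25% for pure methane; the ethane and propane
                         # fractions in LNG pull this back toward 20%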
1.3. Previous Proposals for LNG Aircraft
The use of LNG in aviation has been considered by Beech, Lockheed and Tupolev. In 1980, Beech successfully flew a Beech Sundowner aircraft on LCH4 [21]. Lockheed performed a major LCH4 aircraft study in 1980 [22] and Tupolev flew a Tu-156 test aircraft on LNG in 1986 [23].
More recently, LNG in aviation has gained renewed interest: Greitzer [24] proposes LNG as a future fuel in the NASA SUGAR N+4 research initiative, and Kawai presents compelling arguments for a dual-fuel (LNG plus Jet-A) Blended Wing Body (BWB) aircraft [25]. It is interesting to note that Kawai was influenced by Gibbs et al. [26], who submitted a proposal for an LNG fuelled aircraft to the 2011 Airbus Fly-Your-Ideas competition (see acknowledgements).
2. System Requirements and Performance
2.1. System Requirements
The following top-level system requirement targets were set:
1) The LNG fuelled aircraft shall offer Airbus A320-A350 sized aircraft at least a 20% reduction in net CO2 emission per seat-km.
2) The Bio-LNG fuelled aircraft shall offer Airbus A320-A350 sized aircraft at least a 50% reduction in net CO2 emission per seat-km.
3) The payload and range of the (Bio-)LNG aircraft shall not be inferior to those of an equivalent-sized Jet-A fuelled aircraft.
4) Operating safety levels shall exceed those of Jet-A aircraft.
5) LNG shall be supplied at all airports, such that the delivery price is sufficient to bring about a DOC reduction compared to the equivalent-sized Jet-A aircraft, allowing for the development costs of the LNG aircraft.
2.2. System Architecture
A system overview is presented for Bio-LNG fuelled aircraft in Figure 2. The
shaded subsystems represent the primary subsystems that were considered during the
design process.


Figure 2. System architecture for the Bio-LNG aircraft
2.3. Range and Payload
The performance of the Bio-LNG fuelled aircraft was investigated using Airbus
A320 and A350-900 baselines [27]. A comparison of the range-payload characteristics
of Jet-A and Bio-LNG variants with equal gross take-off weight is provided in Figure 3.
To achieve this performance, it was found that the cruise lift-to-drag (L/D) value of the modified aircraft cannot fall more than approximately 7% below that of the Jet-A equivalent, while the dry mass penalty of the LNG subsystem changes must be less than about 2 tonnes (A320 case), assuming a specific fuel consumption gain of about 10% (Section 3.2).


Figure 3. Range and Payload for different A320 fuel configurations, GTOW 73.5 tonnes.
3. Aircraft Configuration and Subsystems
3.1. Configurations Options and Evaluation Methods
Previous studies of cryogenic fuel aircraft have placed fuel tanks primarily within
the fuselage [21, 22, 24, 26, 28-30]. However, such configurations have disadvantages.
In particular, placing fuel tanks within the fuselage compromises useful space and
requires high load factor mountings, i.e., there is a weight penalty [24]. Also any
leakage of CH
4
vapour inside the fuselage could result in the accumulation of an
explosive mixture (Section 4.3). Early in this design study, it was therefore decided to
store Bio-LNG

within the wing-box, also noting the advantage of distributed span-
loading. However, to match Jet-A on an equivalent energy content basis, the wing-box
volume of existing aircraft would have to be increased by about 45%, hence under-
wing mounted fuel pods are proposed (Section 3.3).
Although tail engine configurations have recently been considered by Airbus [31], in the past two decades Airbus and Boeing have selected the under-wing engine arrangement. Given the aforementioned leakage issue, through-fuselage LCH4 piping is rejected and under-wing engine configurations are preferred (Figure 4).



Figure 4. Bio-LNG aircraft concepts to achieve increased fuel tank volume

3.2. Propulsion
Necessary changes to the turbofan engine technology are relatively minor. Bio-LNG fuelled turbofan engines will have at least 10% reduced specific fuel consumption compared with Jet-A [14]. Along with the decrease in CO2 (and CO) emissions, a 30-50% reduction in NOx emissions is also reported by Fulton [32]. Introduction of compressor intercooling and integration with direct methane-fed Solid Oxide Fuel Cells could offer further performance gains [33, 34].

3.3. Use of a Goldschmied-Type Annular Suction-Slot to Reduce External Pod Drag
Using Goldschmied's experimental data [35-37], the predicted drag penalty of the under-wing pods can be substantially reduced by using a small turbofan (or APU-powered air pump) mounted aft of each pod (Figure 5). This would result in a minimal increase in overall fuel consumption (in comparison with a non-integrated system). It is estimated that the effective drag penalty caused by the under-wing pods is less than 5% of overall cruise drag [38]. However, it is also recognized that aerodynamic testing of this promising concept at Reynolds numbers in excess of 10^7 and at Mach 0.8 is required.


Figure 5. Under-wing pod cutaway, with boundary layer suction and active drag reduction

4. Technical Challenges
Many technological challenges associated with the introduction of Bio-LNG were identified, including: on-ground and in-flight boil-off of Bio-LNG; prevention of wing and pod icing; specific safety issues concerning Bio-LNG aircraft operations; necessary propulsion system changes (within the powerplant itself); Bio-LNG pumping requirements and the need for insulated fuel piping; monitoring of Bio-LNG fuel usage and fuel tank state; airframe changes and unique structural issues such as cyclic thermal shock; centre-of-gravity management by fore-aft Bio-LNG fuel transfer; and Bio-LNG ground refuelling operations and the offloading of Bio-LNG as required. Only a few of these items are addressed here.
4.1. Boil-off and Thermal Management
According to Kawai [25], foam insulated tanks would not permit a sufficiently low LNG boil-off rate, and he recommends heavy vacuum-insulated (Dewar-type) tanks. However, in reaching this decision, Kawai assumed that the boil-off rate on the ground (without any cryo-cooling system) would have to be limited to 0.1% per day, or just 0.0011 kg/s (4 kg/h) for an A350-900. If, instead, it is assumed that the aircraft has to supply a CH4 vapour fed APU system with an output of 127 kW, then the gas feed rate would need to be at least 0.122 kg/s (440 kg/h). Thus, the boil-off rate during periods of unsupported ground operation can be two orders of magnitude higher than Kawai estimates. Moreover, if CH4 vapour feed is also assumed during take-off and climb periods, then there is a demand for much higher boil-off rates. For an A350-900 sized aircraft, the high-thrust boil-off rate needs to be approximately 1.8 kg/s (6500 kg/h). A dual-mode fuel subsystem, whereby Bio-LNG is pumped between a well-insulated under-wing pod and a less insulated wing-box tank with a much higher surface-area-to-volume ratio, is attractive.
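The order-of-magnitude comparison can be checked directly (the LNG load below is our assumed energy-equivalent A350-900 fuel mass, not a figure from the paper):

lng_mass = 95_000                           # kg, assumed energy-equivalent LNG load
ground_boiloff = 0.001 * lng_mass / 86_400  # 0.1% per day expressed in kg/s
print(f"{ground_boiloff:.4f} kg/s")         # ~0.0011 kg/s, matching Kawai's figure

apu_feed = 0.122                            # kg/s vapour feed for the 127 kW APU case
print(f"{apu_feed / ground_boiloff:.0f}x")  # ~111x: two orders of magnitude higher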
4.2. Icing Prevention
Icing is not thought to be a major problem at one third of chord, where the in-wing-box cryogenic Bio-LNG storage is proposed. Icing occurs at the leading edge, and heat transfer between the wing-box and the leading edge is relatively minor. Hence, conventional leading-edge de-icing systems [39] should be sufficient to prevent in-flight icing [38], despite predictions that central wing region surface temperatures could drop to 180 K in the stratosphere. Icing is predicted to occur on the nose of each under-wing pod. To address this, a free-spinning aerodynamic tip on the pod nose to shed any ice accumulation is proposed. This design solution was inspired by the rubber nose tip used by Rolls-Royce on the spinners of their aero-engines [40].
4.3. Safety Issues
There are many safety considerations associated with the use of Bio-LNG. Two
are prominent. Firstly, vapour leakage from pipes into closed areas containing air might
allow an explosive gaseous mixture to form. To mitigate this during the configuration
study, it was decided that elimination of in-fuselage tank storage was necessary.
Internal wing zones filled with air (either side of wing box) can be readily flushed with
secondary airflow. Secondly, impacts with ground vehicles, bullets and bird strikes
were considered. Further work is needed to determine acceptable structural design
solutions.
5. Design Study Outcomes
A simplified decision matrix (Figure 6) was used to select the final preferred
configuration. Of course, this is just a down-selection procedure, but it serves the
purpose of illustrating the impact of pertinent design issues.
Configuration Z (Figure 7) was provisionally found to best satisfy the system requirements and technology challenges (Section 2.1 and Section 4). In particular, this configuration appears to offer sufficiently low boil-off rates for ground storage and higher boil-off rates during flight (Section 4.1), whilst having acceptable cruise L/D and dry mass penalties (Section 2.3).
The evaluation methods used in this study were commensurate with a conceptual
design study, and therefore require further substantiation. In particular, key technical
areas that require attention are primarily concerned with icing (Section 4.2) and safety
issues (Section 4.3).


Figure 6. Configuration Decision Matrix




Figure 7. Selected Design (Configuration Z)
6. Recommendations
Despite the use of simplified methodologies and assumptions, the design outcome
appears to be realistic and promising in terms of feasibility and sustainability.
It is therefore recommended that a preliminary concurrent design study of the
promising concept(s) that are presented is justified and should be undertaken. A
concurrent approach is necessary, since many inter-related pertinent issues need to be considered simultaneously. These issues include not only extensive technical changes to aircraft systems, but also changes to ground infrastructure, including the whole matter of sustainable biomethane production. In summary, a comprehensive eco-economic (Life Cycle Cost) super-system model is needed to support concept design selection and preliminary aircraft design optimization.

Acknowledgements

Authors MB, TC, LS and MS undertook the work presented here as part of a proposal bid
for Round 2 of the Airbus Fly-Your-Ideas Competition in 2013 and received useful feedback
from Dr. Paulo Lage and Enrique Tobias-Pena. The same authors wish to thank Prof.
Aleksandar Subic, Head of the School of Aerospace, Mechanical and Manufacturing Engineering
at RMIT, for supporting the project including the opportunity to display the design concept at the
2013 Avalon Airshow.
References

[1] IATA. Financial forecast. 2012 [cited 2013 March 3rd]; Available from:
http://www.iata.org/whatwedo/Documents/economics/Industry-Outlook-Dec2012.pdf.
[2] IATA. Fact sheet: climate change. 2012 [cited 2013 March 1st]; Available from:
https://www.iata.org/pressroom/facts_figures/fact_sheets/pages/environment.aspx.
[3] Airbus S.A.S. Global market forecast 2012-2031: Navigating the future. 2012 [cited 2013 20th
March]; Available from: http://www.airbus.com/company/market/forecast/.
[4] W. Gibbons and S. Hughes, Distributed, integrated production of second and third generation biofuels, in Economic Effects of Biofuel Production, 2011, National Center for Agricultural Utilization Research.
[5] D.L. Daggett, et al., Alternate fuels for use in commercial aircraft, in ISABE, 2007, NASA / The
Boeing Company: Beijing (China).
[6] T.F. Rahmes, et al., Sustainable bio-derived synthetic paraffinic kerosene (Bio-SPK) jet fuel flights and engine tests program results, in 9th AIAA Aviation Technology, Integration, and Operations Conference, 2009: Hilton Head, South Carolina.
[7] Australian Aviation. Qantas spruiks biofuels with A330 flight. 2012 [cited 2013 March 27th];
Available from: http://australianaviation.com.au/2012/04/qantas-spruiks-biofuels-with-a330-
flight/.
[8] A. Petrova, Biofuels as Alternative Sources of Energy, Saint-Petersburg State University of Aerospace Instrumentation, 2012: Saint-Petersburg.
[9] P. O'Brien and B. Fargher, Biofuels in Australia - issues and prospects, 2007, Commonwealth Scientific and Industrial Research Organisation (CSIRO).
[10] J. Hileman, D.S. Ortiz, J.T. Bartis, H.M. Wong, P.E. Donohoo, M.A. Weiss, I.A. Waitz, Near-Term Feasibility of Alternative Jet Fuels, F.A. Administration, Editor 2009, RAND: Santa Monica, CA.
[11] P. Jaramillo, W.M. Griffin, and H.S. Matthews, Comparative analysis of the production costs and
life-cycle GHG emissions of FT liquid fuels from coal and natural gas. Environmental Science
and Technology, 2008. 42(20): p. 7559-7565.
[12] IATA. Fuel price analysis. 2013 [cited 2013 20 March]; Available from:
http://www.iata.org/publications/economics/fuel-monitor/Pages/price-analysis.aspx.
[13] EIA. Price of US Natural Gas LNG Imports. 2013 [cited 2013 March 15th]; Available from:
http://www.eia.gov/dnav/ng/hist/n9103us3m.htm.
[14] H.F. Brady and D. Del Duca, Insulation systems for liquid methane fuel tanks for supersonic
cruise aircraft, 1972, NASA.
[15] GIIGNL. Basic properties of LNG. 2009 [cited 2012 October 16th]; Available from:
http://www.giignl.org/fileadmin/user_upload/pdf/LNG_Safety/1-
LNG_Basics_8.28.09_Final_HQ.pdf.
[16] U.S Energy Information Administration. The global LNG market - status and outlook. 2003
[cited 2012 February 16th]; Available from:
http://www.eia.gov/oiaf/analysispaper/global/lngmarket.html.
[17] Airbus, A319/A320/A321 Flight deck and systems briefing for pilots, 1999, Airbus Industrie:
France.
[18] E. Commission, Flightpath 2050 - Europe's vision for aviation. 2011.
[19] M. Lage. The use of natural gas in the transportation industry: European LNG Blue Corridors.
2012 [cited 2013 March 15th]; Available from:
http://www.apvgn.pt/documentacao/lage_agn_out12.pdf.
[20] Gasrec. Gasrec begins trial of liquid biomethane fuel for ground vehicle at east midlands airport.
2010 [cited 2013 April 2nd]; Available from: http://cnch4.com/mediadetails.php?ID=19.
[21] FLIGHT International. Beech flies with methane. FLIGHT International 1981 10 October [cited
2013 16 March]; Available from:
http://www.flightglobal.com/FlightPDFArchive/1981/1981%20-%203155.PDF.
[22] L.K. Carson, et al., Study of methane fuel for subsonic transport aircraft, NASA CR-159320, 1980, Lockheed-California Company: California.
[23] D. Kaminski-Morrow, Tupolev's cryogenic Tu-155 - 20 years on!, Flight Global, 2008. 16.
[24] E.M. Greitzer, et al., N+3 Aircraft concept designs and trade studies. NASA/CR-2010-216794/VOL2, 2010.
[25] R.T. Kawai, Benefit Potential for a Cost Efficient Dual Fuel BWB, in 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, 2013, AIAA: Texas.
[26] J. Gibbs, D. Seigel, and A. Donaldson, A natural gas supplementary fuel system to improve air
quality and energy security, in 50th AIAA Aerospace Sciences Meeting including the New
Horizons Forum and Aerospace Exposition. 2012, American Institute of Aeronautics and
Astronautics.
[27] Airbus S.A.S. A320 Family: the market leader. 2012; Available from:
http://www.airbus.com/fileadmin/media_gallery/files/brochures_publications/aircraft_families/A3
20_Family_market_leader-leaflet.pdf.
[28] Tupolev. Cryogenic aircraft: development of cryogenic fuel aircraft. 1989 [cited 2013 10th
February]; Available from: http://www.tupolev.ru/english/Show.asp?SectionID=82.
[29] A. Westenberger, et al., Liquid hydrogen fuelled aircraft - system analysis (cryoplane final
technical report), 2003, Airbus Deutschland GmbH.
[30] D. Verstraete, et al., Hydrogen fuel tanks for subsonic transport aircraft. International Journal of
Hydrogen Energy, 2010. 35: p. 11085-98.
[31] Airbus. Future Concepts. 2013 [cited 2013 27 March]; Available from:
http://www.airbus.com/innovation/eco-efficiency/design/future-concepts/.
[32] K. Fulton, Cryogenic-fueled turbofans: Kuznetsov Bureau's pioneer work on LH2 and LNG dual-fueled engines. Aircraft Engineering, 1993: p. 8-11.
[33] G. Wilfert, et al., New environmental friendly aero engine core concepts, in International Symposium on Air Breathing Engines, 2007, AIAA: Beijing, China.
[34] E.P. Murray, T. Tsai, and S. Barnett, A direct-methane fuel cell with a ceria-based anode. Nature,
1999. 400(6745): p. 649-651.
[35] F.R. Goldschmied, Wind Tunnel Demonstration of an Optimized LTA System with 65% Power Reduction and Neutral Static Stability, in Lighter-than-Air System Conference, 1983, AIAA 83-38910: Anaheim, CA.
[36] F. Goldschmied, Integrated hull design, boundary-layer control, and propulsion of submerged bodies. Journal of Hydronautics, 1967. 1(1): p. 2-11.
[37] F. Goldschmied, Aerodynamic Hull Design for HASPA LTA Optimization. Journal of Aircraft, 1978. 15(9): p. 634-638.
[38] F. Goldschmied, Wind tunnel test of the modified Goldschmied model with propulsion and empennage: analysis of test results, 1986: Monroeville, PA.
[39] S.K. Thomas, R.P. Cassoni, and C.D. MacArthur, Aircraft anti-icing and de-icing techniques and modeling. Journal of Aircraft, 1996. 33(5): p. 841-854.
[40] K. Bottome, Gas turbine engine nose cone, U.S.P.A. Publication, Editor 2011, Rolls-Royce PLC:
Great Britain.

Securing Data Quality beyond Change
Management in Supply Chain
Sergej Bondar a, Christoph Ruppert b and Josip Stjepandić a,1
a PROSTEP AG, Darmstadt, Germany
b Heidelberger Druckmaschinen AG, Heidelberg, Germany
Abstract. Concurrent engineering in distributed development environments like
those in the automotive or aerospace industry sets enormous demands on the
organization of collaboration between the companies and departments involved.
Seamless data communication in all phases of the product development process is
a prerequisite for cost-optimal and successful collaboration processes. Automotive
suppliers that develop system components for a number of different OEMs or tier-
1 suppliers face the challenge of ensuring that they make the CAD data available
in the format required by their customers and with a high level of reliability and, if
data translation is involved, that they take the system configuration of the
respective customer into consideration. The main obstacle in this process chain is
the insufficient data quality. This paper describes the approaches conducted in
cooperation between the supplier portal www.opendesc.com and Heidelberger
Druckmaschinen AG.
Keywords. CAD, Data Quality, Supply Chain, Engineering Collaboration,
Migration.
1. Introduction
In the past decades, manufacturing industries such as automotive and aerospace have been shifting ever more to distributed environments such as the extended enterprise, with increasing agility of all stakeholders. Local customer satisfaction was put first in order to win assignments against many competitors. This has led to mass customization at a high level and to ever more complex development, manufacturing and logistics processes along the manufacturing supply chain. The resulting outsourcing has produced a multi-tier supply network structure involving numerous enterprises around the globe. Product development likewise takes place in global development partnerships. OEMs accomplish the development of new products at many locations in several countries across the world [1]. Furthermore, a variable number of external service providers and suppliers take part in individual projects, contributing their project-specific know-how. The organizational structure has evolved from the hierarchical and modular form to the virtual network structure which characterizes the virtual enterprise (Figure 1).
Suppliers are involved in product development as early as possible, because they mostly possess the greater depth of domain expertise needed for best product development. The OEM-supplier relationship is characterized by a sequential

1 Corresponding Author; E-mail: josip.stjepandic@opendesc.com
interaction whereby the OEM gives clear product and production requirements to the supplier and the supplier delivers the product or service to the OEM. Supplier integration is a crucial method for incorporating a supplier's innovativeness in the product development process and reduces costs and risks [2]. Due to the complex development cycle, the OEM has taken the lead and begun to adopt supplier integration into its product development process. To respond to this trend, the collaboration and partnership management between the OEM and suppliers needs to be continuously improved to reduce costs and time. Regarding the depth of collaboration, the integration of suppliers into the OEM process chain can be defined in many ways, depending on the corresponding work package and type of collaboration [3].

Figure 1. Organizational types for industrial collaboration.
Many industry associations, like the German automotive manufacturers' association (VDA), have accomplished the basic development work to define and classify the typical collaboration models and corresponding processes [4]. Six supplier types are defined according to the criteria of production-technical integration, process integration, functional integration, and geometrical (spatial) integration of the whole product (car). Apart from the prime contractor, who can be defined as a clone of the OEM without the product management, sales and marketing functions, all other types of suppliers (system supplier, module supplier, component supplier, part supplier, and engineering service provider) maintain a high level of independence in their corresponding processes. Taking into account that a supplier serves many customers, each with its own, varied processes and infrastructures, there is a strong need for a comprehensive integration approach based primarily on standards and serving the relationships to all customers. Engaging an IT service provider can be a suitable way to set up the processes and manage the outsourcing activities. Many approaches and best practices have been developed and implemented, like the eSCM model as an everyday guide for developing and maturing a successful one-to-one relationship [5].
In the context of concurrent engineering, the validity and consistency of product information become important. However, it is difficult for current computer-aided systems to check information validity and consistency, because the engineer's intent is not fully represented in a consistent product model. Due to the different approaches and IT systems in the automotive OEM industry, a unified solution is not possible at this time. In particular, automotive suppliers that develop system components for a number of different OEMs or tier-1 suppliers face the challenge of ensuring that they make the CAD data available in the format required by their customers and with a high level of reliability and, if data translation is involved, that they take the system configuration of the respective customer into consideration. To enable the success of supplier integration, this work describes how to improve the collaboration between the OEM and its suppliers by ensuring the appropriate data quality, which is primarily the supplier's responsibility.
2. CAD Data Quality
In the product life cycle, various quality requirements are defined and necessary at the individual stages. These include requirements such as the functional requirements of the product, the type of presentation form, the production requirements or the data quality in data exchange.

Figure 2. Exemplary data quality problem: Consistency.
Within the design and manufacturing process chains, there are different phases that make demands on the geometric and organizational data quality and the data size. A CAD model must both allow multiple uses and meet a certain compatibility. The multiple use is mainly based on the modeling, while the compatibility refers more to the geometrical and topological characteristics. In downstream applications, it may happen that the designer considers the CAD model good, while the downstream users of this CAD model in their process chains often describe it as poor or even unusable. Model errors can be grave, whether intentional (e.g. use of rough tolerances, Figure 2) or unintentional (e.g. multiplication of tolerances in a feature tree). The same situation can occur in the collaboration context.
Therefore, CAD data should ensure reusability in the native CAD system, but also the derivation of the resulting geometry for downstream data processing. Although the level of detail may be low or high at different stages, the requirements for geometry and topology remain the same.
A categorization of CAD data matches criteria and characteristics to reasonable and memorable sets of 3D CAD models. There are different approaches, such as the division into "components" or "resources" (for producing components), into various process chains (depending on the material or the manufacturing method), or in accordance with the stages of development (concept, coordination, approval, detailing, etc.). The working group of the German Automotive Industry Association (VDA) recommends a two-dimensional matrix [6]. Meanwhile, these basic criteria have been adopted by almost all OEMs.
Finally, the term CAD data quality covers the fulfillment of a set of predefined criteria, known as mathematical-technical and organizational criteria. To the first set belongs the geometrical quality of CAD models, which depends on the fulfillment of basic accuracy criteria (coincidence, continuity) relative to the accuracy setup of the CAD system. The second set comprises the relevant rules for the structure of CAD models. Almost every large enterprise has defined its own CAD methods, which include a set of prerequisites. The fulfillment of the rules is checked by special CAD checkers, which are controlled by profiles for each model purpose.
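As an illustration of how such profile-controlled checking can be organized, the following sketch pairs rule functions with purpose-specific profiles. The rule names, limits and model attributes are assumptions for illustration and do not reflect any particular commercial checker.

```python
# Sketch of profile-driven CAD quality checking: each model purpose
# selects a profile, i.e. a subset of rules with their own limits.
# Rule names and limit values are illustrative assumptions.

RULES = {
    "min_edge_length":   lambda model, limit: model["min_edge"] >= limit,
    "gap_tolerance":     lambda model, limit: model["max_gap"] <= limit,
    "naming_convention": lambda model, limit: model["name"].startswith(limit),
}

PROFILES = {
    "concept":  {"gap_tolerance": 0.1},
    "approval": {"gap_tolerance": 0.01, "min_edge_length": 0.05,
                 "naming_convention": "PRJ_"},
}

def check(model, purpose):
    """Run all rules of the profile selected for this model purpose."""
    profile = PROFILES[purpose]
    return {rule: RULES[rule](model, limit) for rule, limit in profile.items()}

model = {"name": "PRJ_bracket", "min_edge": 0.08, "max_gap": 0.005}
print(check(model, "approval"))  # every rule should report True here
```

The same model can thus pass the loose concept profile yet fail the stricter approval profile, which mirrors the stage-dependent requirements described above.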
For the translation from the original into the target CAD format, direct or neutral interfaces are used. The advantage of a direct interface is the higher translation speed. The neutral interfaces are more robust because they use special healing algorithms for corrupt geometry, and they allow better logging through neutral formats like STEP and JT.
In most cases, the main problem in CAD translation is insufficient data quality after the translation. The requirement for a successful translation process can thus be redefined as the fulfillment of all relevant data quality requirements, independent of the source system and the stage of the product development process. In the context of collaboration, this requirement can be extended to the level that the CAD conversion has to successfully pass the examination with each CAD checker on the target site, independent of the source CAD system [7].
3. Solution Approach
Since the problem has been identified and analyzed, a concept for ensuring data quality in CAD data exchange has to be developed. For this purpose, measures to ensure data quality are first recorded, and then the phase model for the development of methods is applied.
Measures to prevent loss of information can be taken by considering functional aspects, verification aspects and application aspects. The functional aspects include the division of the total amount of an element specification into compatible element subsets that are adopted in individual performance levels with their associated functional content. The verification aspects include procedures to verify and validate the performance of a specification.
The application aspects include specific requirements such as the implementation
of a new interface or the use of a specific setup. Due to the wealth and diversity of the
internal data models of CAD systems a loss of information during the data exchange
cannot be completely avoided. Therefore, specific agreements between data sender and
receiver must contribute to minimizing the loss of information in the data exchange, defining the best way based on previously gathered experience. Appropriate checklists are required; from them, criteria for boundary conditions can be derived. These checklists should be created as a function of the gradual approach of the functional performance of the interface [8].
The methodological concept development can be understood as a decision-making process and consists of a sequence of iterative steps. These steps include extensive analytical and synthesizing activities and the appropriate documentation of the analysis and synthesis results. A methodical approach is necessary because all possible requirements have to be considered and fulfilled in a process. Analogous to the phases for the implementation of IT projects, the concept phase, the specification phase, and the implementation phase are introduced as part of the methodological concept development to ensure data quality.
The essential task in the concept phase is the creation of the requirement profile, which includes the performance range. The requirement profile includes a checklist, a sort of guide for a meaningful modeling technique that can prevent problems in data exchange. Herein, the previously mentioned causes are considered. This checklist has been created after many investigations and is continuously adapted to users. In case of a software update, it must be reviewed against the new software.
Based on extensive experience, the necessary guidelines were defined similarly to the VDA recommendation 4955 [6], which describes the content and the quality of CAD data used in collaboration. Here, best practices are set up in an attempt to establish common guidelines that are either independent of the CAD systems used or specific to a system pairing (e.g. CATIA (Creo) - NX), processors and interface formats (e.g. STEP, or JT as an alternative).
The implementation of design guidelines must be supported by appropriate check tools. Checking is done with native checking capabilities following the progress of the design process, pointing mainly to possible errors in the data exchange; here the application is very helpful. The final check can be done in batch mode to save time and costs. A positive result is the prerequisite for the approval of the data exchange.
The check tools are usually modular. The runs are executed by the user individually or in any combination. Limits and other underlying settings for each run are proposed through a configuration file and can be changed interactively. A selection or limitation of the scope of testing is possible, but only a test of all elements can give the complete picture.
The log of the whole test session, including the indication and highlighting of corrupt elements, is stored in a log file. The respective defective items must be clearly identifiable.
Further, the visualization of certain entities (e.g. transferred and arrived surfaces) from the log file needs to be written out after the data export to instantly identify potential losses (e.g. in the surface count). This required comparative studies with respect to the different log files of the respective CAD systems and the meaningfulness of those log files. For example, if a face must be broken down so that G0 continuity can be preserved, the face count becomes inconsistent, and additional compensation is necessary.
Finally, a figure is required indicating the number of data elements transferred during the export of native data formats to the neutral file, as well as potential losses. This is a comparison of the elements between the states "before" and "after".
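A minimal sketch of this before/after comparison is given below, assuming a simple log structure; the field names and the split-face compensation are illustrative, not the format of any specific translator log.

```python
# Compare element counts before export and after import, compensating
# for faces that were legitimately split to preserve G0 continuity.
# Log structure and field names are illustrative assumptions.

def compare_counts(export_log, import_log, split_map):
    """split_map: exported face id -> number of faces it was split into."""
    report = {}
    for etype, exported in export_log.items():
        arrived = import_log.get(etype, 0)
        if etype == "faces":
            # Each split face adds (n - 1) extra faces on the target side.
            exported += sum(n - 1 for n in split_map.values())
        report[etype] = {"expected": exported, "arrived": arrived,
                         "lost": exported - arrived}
    return report

export_log = {"faces": 412, "edges": 1630, "solids": 3}
import_log = {"faces": 414, "edges": 1630, "solids": 3}
print(compare_counts(export_log, import_log, {"f_107": 3}))
```

With the compensation applied, the two extra faces on the target side are recognized as legitimate splits rather than reported as a count mismatch.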
The final concept for the test procedure, broken down into a flow diagram, is shown in Figure 3.

Figure 3. Final concept for the test procedure.
The validation phase is used to check and, if necessary, improve the concept specification. It must be checked whether the developed method meets the requirements profile from the concept phase. To perform the test, it is necessary to establish criteria and procedures. Based on the test criteria and methods, check tools can be developed that allow computer-aided validation. The result of the validation phase may eventually induce a revision of the concept.
4. Validation
Validation methods target the proof of the validity of the specification and implementation. They use methods of verification, confirming the completeness and correctness of a system with respect to a reference system, and methods of falsification, which serve to demonstrate the faultiness of a system. Several methods are used for validation: test methods are used to demonstrate conformity and to detect stable behavior, check methods are used for the qualitative examination of behavior at different levels of complexity in accordance with predetermined criteria, and finally measurement methods are used for the quantitative determination of behavior on the basis of predetermined measurement criteria.
The use of validation methods differs in the diverse phases of the product development model. Depending on the degree of product specification (based on the reference model, the formal specification and the implementation), increasingly quantifying validation procedures can be used. In our case, all methods have been
integrated into a fully automated workflow, which is based on the software OpenDXM from PROSTEP (Figure 4). Thus, traceability at the highest level is achieved [9].

Figure 4. CAD translation in www.opendesc.com.
5. Use Case Daimler
Since Daimler disclosed its decision to switch to the CAD system NX from Siemens PLM instead of CATIA V5 from Dassault Systemes, a huge change has arisen in the customer processes of almost every supplier, because they are forced to keep the current process running and to ramp up the new process. The overall situation is shown in Figure 5 [10]. Each supplier has to serve two target systems simultaneously (CATIA for ongoing and NX for upcoming projects) with the same or similar content [11]. This procedure includes many CAD translation steps (scenarios 4 to 9), which in principle are not beneficial for good data quality. The challenge lies in preserving a level of data quality that ensures all translation processes are successful. The data quality in CATIA is checked again with Q-Checker, in NX by using the newly deployed Heidelberg CAx Quality Manager (HQM).

Figure 5. The change in collaboration between Daimler and suppliers.
6. Results
The procedure shown above was applied to more than 60 different use cases covering different design content (powertrain, interior, electrics, chassis). Most of them were Creo data, modeled and prepared for translation to CATIA V5 as actually used in ongoing development projects at Daimler. Therefore, a good comparability between the old scenario and the new scenario is given. The results were thoroughly checked by the Heidelberg CAx Quality Manager (Figure 6).

Figure 6. Typical results in Heidelberg CAx Quality Manager.
The translators in CATIA and NX show a similar level of performance and robustness. The base system tolerance, at 0.001 mm, lies on the same level in both. Initially, all models could be transferred losslessly and without exception to CATIA and NX. However,
it appears in some cases that automatic healing algorithms have slightly adjusted the geometry to satisfy the continuity condition. To what extent this leads to problems in further processing could not be predicted definitely.
Further comparisons of model properties such as center of gravity, moments of inertia, as well as point clouds were also executed systematically. All values remained within the allowed tolerances and showed no abnormalities, which indicates a mature and stable process.
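Such a property-based comparison can be sketched as follows; the property names, the scalar simplification and the tolerance values are assumptions for illustration.

```python
# Sketch of a property-based validation step: compare mass properties
# of source and translated model against allowed tolerances.
# Properties are simplified to scalars; names and tolerances are
# illustrative assumptions.

TOLERANCES = {"center_of_gravity": 1e-3, "moment_of_inertia": 1e-2}

def validate_properties(source, target, tolerances=TOLERANCES):
    deviations = {}
    for prop, tol in tolerances.items():
        dev = abs(source[prop] - target[prop])
        deviations[prop] = (dev, dev <= tol)
    return all(ok for _, ok in deviations.values()), deviations

src = {"center_of_gravity": 123.4567, "moment_of_inertia": 88.21}
tgt = {"center_of_gravity": 123.4571, "moment_of_inertia": 88.22}
ok, detail = validate_properties(src, tgt)
print("translation accepted:", ok, detail)
```

A translation is accepted only when every monitored property stays within its tolerance, which is the automated counterpart of the systematic comparisons described above.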
Further investigations were carried out with models that had already been converted once, because that is the typical implication of the complex scenario shown in Figure 5. Here, considerable problems and losses occurred that had to be corrected manually. Such models generally reveal significant quality problems and should be avoided, because their repair is expensive. The Heidelberg CAx Quality Manager (HQM) was very helpful in identifying the problems and making repair easier. A typical application is shown in Figure 7, where a folded surface is marked as a potential issue. Such a surface should not be considered an isolated issue, because it emerges unintentionally during feature modeling. For the repair of such portions of CAD models, various methods were derived in previous work [8].

Figure 7. Typical quality issue detected by Heidelberg CAx Quality Manager.
7. Conclusions and Outlook
In a dynamic collaborative environment like the global automotive industry, the working conditions undergo continuous change. Suppliers who work together with different OEMs and tier-1 suppliers have to constantly cope with new requirements relating to exchange partners, data formats, system environments to be supported, quality and security requirements, etc. If they take data communication with their customers into their own hands, this means that they have to constantly adapt their data translation and exchange processes to the ever-changing requirements. This involves considerable administrative overhead in terms of time and money, which can on occasion have a negative impact on quality and adherence to deadlines. Collaboration with a competent service provider is therefore an interesting alternative, as it not only cuts costs but also facilitates making the exchange processes uniform and ensures a higher level of reliability and traceability.
A good example of long-term stability is the proposed approach supporting the recent move of Daimler to replace CATIA with NX. While many suppliers which
handle data communication on their own are forced to adapt their infrastructure, processes and methods to the new environment at Daimler, spending a significant amount of money and time and needing long to reach a high level of maturity and stability, the developed solution allows them to keep working in the same environment by using a predefined interface in the customer process, the supplier portal www.opendesc.com. The check tool Heidelberg CAx Quality Manager (HQM) makes a significant contribution to successful data exchange by supporting a high level of data quality.
A weakness of the proposed solution is the lack of a solution for parametric data translation, which still has to be developed.
Future development shall encompass the further automation of the whole communication process and could take multiple directions. First, the collaborative decision-making process can be enhanced by using an agent-based distributed architecture [12]. The setup and implementation of the infrastructure can be simplified by the provision of a communication plugin [13] or a service hierarchy for collaboration [14] for each OEM based on recent standards (STEP AP242 and JT) [15], to avoid exclusive and hence expensive point-to-point connections.
References
[1] J. Stark, Global Product. Strategy, Product Lifecycle Management and the Billion Customer Question,
Springer-Verlag, London, 2007.
[2] J.H. Dyer, Collaborative advantage: winning through extended enterprise supplier networks, Oxford
University Press, 2000.
[3] D. Tang, K.-S. Chin, Collaborative Supplier Integration for Product Design and Development, in L. Wang, A.Y.C. Nee (eds.), Collaborative Design and Planning for Digital Manufacturing, Springer-Verlag, London, 2009, 99-116.
[4] N.N., VDA-Empfehlung 4961/3, Abstimmung der Datenlogistik in SE-Projekten, VDA, Frankfurt, 2012.
[5] B. Hefley, E.A. Loesche, eSourcing Capability Model for Client Organizations (eSCM-CL), Van Haren Publishing, Zaltbommel, 2009.
[6] N.N., VDA-Empfehlung 4955: Umfang und Qualität von CAD/CAM-Daten, Version 3, VDA, Frankfurt, 2002.
[7] S. Bondar, L. Potjewijd, J. Stjepandić, Globalized OEM and Tier-1 Processes at SKF. In J. Stjepandić et al. (eds.), Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Springer-Verlag, London, 2013, 789-800.
[8] T. Fischer, H.P. Martin, M. Endres, J. Stjepandić, O. Trinkhaus, Anwendungsorientierte Optimierung des neutralen CAD-Datenaustausches mit Schwerpunkt Genauigkeit und Toleranz, VDA, Frankfurt, 2000.
[9] R. Reim, P. Cordon, A. Hund, J. Stjepandić, CAD-Konvertierung: Motivation, Probleme, Lösungen, CADCAM Report, 2007, 9.
[10] M. Wendenburg, Konvertierung als Dienstleistung, CADCAM Report, 2007, 9-10.
[11] S. Bondar, C. Rupprecht, J. Stjepandić, Securing Data Quality along the Supply Chain, PLM Conference, 2013, Nantes, July 9-10.
[12] Y. Ouzrout, A. Bouras, E.H. Nfaoui, O. El Beqqali, A Collaborative Decision-making Approach for Supply Chain Based on a Multi-agent System. In L. Benyoucef, B. Grabot (eds.), Artificial Intelligence Techniques for Networked Manufacturing Enterprises Management, Springer-Verlag, London, 2010, 107-128.
[13] F. Mervyn, A.S. Kumar, A.Y.C. Nee, A Plug-and-Play Computing Environment for an Extended Enterprise. In W.D. Li et al. (eds.), Collaborative Product Design and Manufacturing Methodologies and Applications, Springer-Verlag, London, 2007, 71-91.
[14] S. Silcher, J. Minguez, B. Mitschang, A Novel Approach to Product Lifecycle Management based on Service Hierarchies. In T. Özyer et al. (eds.), Recent Trends in Information Reuse and Integration, Springer-Verlag, Wien, 2012, 343-362.
[15] C. Emmer, A. Fröhlich, J. Stjepandić, Advanced Engineering Visualization with Standardized 3D Formats, PLM Conference, 2013, Nantes, July 9-10.
Multi-objective Optimization of Low-floor
Minibus Suspension System Parameters
Goran Šagić b,c, Zoran Lulić b,c and Josip Stjepandić a,1
a PROSTEP AG, Germany
b University of Zagreb, Zagreb, Croatia
c Adolo7 d.o.o., Zagreb, Croatia
Abstract. This paper describes improvements and interaction of multi-objective
optimization and simulation tools for the analysis of suspension system kinematics
and vehicle dynamics in the conceptual phase of low-floor minibus development.
Achieving optimum parameters of the vehicle at this stage of development reduces
the possibility of wrong solutions or concepts. Suspension system development
process is a challenging task due to the existence of many influential parameters,
complex and often conflicting objectives related to stability, handling, ride comfort
and other aspects of vehicle dynamics. Also this is a multidisciplinary task which
presents a computational and modeling challenge. This task has three fundamental
steps: multi-objective problem definition, multi-objective optimization process and
multi-criteria decision making. In this research, the focus is on the first two steps.
The definition of design variables, objectives and constraints is necessary in the
problem definition process. After that in optimization process, the goal is to
determine optimal suspension system parameters of low-floor minibus using newly
developed optimization model, based on evolutionary algorithms and vehicle
dynamics simulations. This paper gives a contribution to the validation of
simulation models, the fine adjustment of optimization algorithm parameter values
(population size, probability of mutation, crossover or selection, etc.), the analysis
of convergence of an evolutionary algorithm, the comparison of the results
obtained by evolutionary algorithms with results of other multi-objective
optimization methods and the implementation of proposed optimization model for
determining optimal suspension system parameters of low-floor minibus.
Keywords. Concurrent Engineering, Vehicle Dynamics, Low-floor Minibus,
Multi-objective Optimization, Evolutionary Algorithms, Suspension System
Parameters
Introduction
Competition in the automotive industry imposes a constant improvement of driving
performance of vehicles. Even when considering only vehicle dynamics, vehicles must
meet various requirements related to stability, handling and ride comfort. These
requirements are often conflicting. Improvements achieved through one parameter cannot be made without affecting the influence of the others, which in most cases have contradictory tendencies. Thus, the goal is to find a suitable compromise.

1 Corresponding Author, Mail: Josip.stjepandic@opendesc.com.
This compromise should, if possible, already be achieved in the concept phase, so that the solution space is tightly narrowed down to be suitable for series development. One of the most important tasks is the selection of reasonable starting values for the simulation, so that the optimization can be completed in a shorter time. This is mostly based on historical data from previous or similar vehicles. The task is especially critical for new concepts for which there is no predecessor model. Lacking suitable starting values, the simulation could take an unnecessarily long time. The production of physical prototypes could also run in the wrong direction and cause unnecessary costs.
The challenge here is to set up the virtual prototyping so that the simulation disciplines are closely connected and influence each other. This approach has been validated in the definition of the suspension, where the task was to define the suspension geometry that achieves the best dynamic driving performance. In addition to the use of virtual simulation models of the suspension system and the complete vehicle, the conceptual phase of development was expanded by an optimization process. To achieve the optimum, evolutionary algorithms should be used. They reduce the solution space to a few good compromise solutions [1] (Fig. 1).
Figure 1. Optimization domain.
1. Related Works
Handling, stability and ride comfort play a crucial role in vehicle performance, depending on a well-balanced suspension. Over the years, simulation packages have emerged that can make a very good prediction of the actual driving performance. A solid overview of the theoretical foundations can be found in the literature [2] [3] [4]. Several important papers that deal with the analysis of the influence of suspension system parameters on vehicle behavior, and with the optimization of those parameters for different types of suspension systems using evolutionary algorithms, are described in [5], [6], [7] and [8]. Examples of multi-objective optimization of the geometric
parameters of a double wishbone suspension using a genetic algorithm, with the goal to improve vehicle handling and stability, were shown by Hwang et al. [5]. Khajavi et al. [6] showed multi-objective optimization of suspension parameters to improve vehicle handling and ride comfort. In that research, the NSGA-II (Non-dominated Sorting Genetic Algorithm) and vehicle models with 8 degrees of freedom were used. Multi-objective optimization of vehicle parameters with the goal to improve vehicle handling was shown by Fadel et al. [7]. A vehicle passing through three test procedures related to handling was simulated in that research. A review and comparison of multi-objective optimization methods, including evolutionary algorithms and their application to vehicle development problems, was given by Gobbi et al. [8]. According to this survey, no method turned out to offer advantages concerning all criteria for all types of problems. Evolutionary algorithms have been evaluated as robust algorithms that can manage a large number of objective functions, given appropriate adjustment of several key parameters (population size, mutation and crossover probability, etc.) to achieve the desired convergence.
The main goal of this research was to combine the capabilities of the leading standard tools for suspension analysis and vehicle dynamics simulation with multi-objective optimization methods based on evolutionary algorithms [9] [10], to foster this approach in the concept phase of vehicle development.
2. Solution Approach
The basic concept for software interaction is shown in Figure 2. The vehicle model was
built in CarSim which is one of the most used software packages for entire vehicle
dynamics simulation. One of the main reasons for choosing CarSim is the extendibility
of vehicle model. The CarSim math models cover the entire vehicle system and its
inputs from the driver, ground, and aerodynamics. The models can be extended by
using built-in VehicleSim commands or custom programs written in MATLAB /
Simulink, Visual Basic and other languages. By using this option it is possible to
thoroughly extend the subsystem or the component models such as suspension system,
brakes, powertrain, etc.
Figure 2. Interactions between simulation packages.
CarSim uses a parametric suspension model. This model defines the kinematic characteristics of the suspension, such as gradients or curves (tables) of wheel rotation angles (camber, caster, toe, etc.), related to the vertical motion of the wheel centre. This type of modeling approach is suitable for fast simulation, but does not provide insight into the suspension system geometry or the positions of the suspension system hard points. The appropriate tire model has a great influence on the quality of the results.
Lotus, a kinematics analysis program, is used to generate kinematic curves from suspension system hard point data. Its camber, caster, toe, and other kinematic curves are implemented in the CarSim model. In this way, the suspension kinematics was made interchangeable in terms of optimization.
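The hand-over between the two tools can be pictured as generating lookup tables from the hard points and feeding them to the vehicle model. The routine below is a simplified placeholder for the kinematic analysis, not the Lotus or CarSim API; the hard-point names and the camber relation are assumptions for illustration.

```python
# Sketch of handing kinematic curves from a suspension-kinematics tool
# to a vehicle-dynamics model as lookup tables (wheel travel -> camber).
# The kinematics below is a simplified placeholder, not real geometry.

def camber_curve(hard_points, travel_range_mm=(-80, 80), step=10):
    """Return (wheel_travel_mm, camber_deg) pairs for a given geometry."""
    curve = []
    for z in range(travel_range_mm[0], travel_range_mm[1] + 1, step):
        # Placeholder relation: camber gain driven by the lateral offset
        # between upper and lower ball joints of the hypothetical wishbone.
        gain = (hard_points["upper_y"] - hard_points["lower_y"]) / 100.0
        curve.append((z, -gain * z / 10.0))
    return curve

hard_points = {"upper_y": 620.0, "lower_y": 700.0}
table = camber_curve(hard_points)
print(table[:3])  # fed to the vehicle model as a camber-vs-travel table
```

Because the vehicle model only consumes such tables, the optimizer can move hard points freely and regenerate the curves for every candidate design.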
Coupling the simulation tools for the analysis of vehicle suspension system kinematics and vehicle dynamics with a programming tool for multi-objective optimization is performed by implementing the simulation tools into the optimization tool. The optimization process in this research is multi-objective and multidisciplinary. The interaction between CarSim and Lotus is provided by using the modeFrontier software [11], which covers the multi-objective optimization methods with various algorithms (Fig. 3). The advantage of this software is its capability for easy, user-friendly, flexible change of the optimization method [1]. For better interaction of the software packages used, a suitable integration layer in modeFrontier was developed. Finally, the genetic algorithms NSGA-II and FMOGA-II as well as the (μ/ρ +, λ) evolution strategies were chosen for the optimization. For comparison, a typical deterministic optimization algorithm (NBI-NLPQLP) was also implemented.
Figure 3. Multi-objective optimization within multiple disciplines.
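The structure of such an optimization loop can be sketched in a self-contained way. The sketch below uses a generic Pareto-dominance-based evolutionary step rather than the actual NSGA-II or FMOGA-II implementations, and simulate() is a stand-in surrogate for the Lotus/CarSim coupling.

```python
import random

# Self-contained sketch of a multi-objective evolutionary loop.
# simulate() stands in for the Lotus/CarSim coupling; the analytic
# surrogate below merely produces two conflicting minimization objectives.

def simulate(x):
    f1 = sum((xi - 0.3) ** 2 for xi in x)   # e.g. handling-related objective
    f2 = sum((xi - 0.7) ** 2 for xi in x)   # e.g. comfort-related objective
    return (f1, f2)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def pareto_front(pop):
    objs = {tuple(x): simulate(x) for x in pop}
    return [x for x in pop
            if not any(dominates(objs[tuple(y)], objs[tuple(x)]) for y in pop)]

def evolve(pop, mutation=0.1):
    """Keep the non-dominated designs, refill by mutating them."""
    front = pareto_front(pop)
    children = []
    while len(children) < len(pop) - len(front):
        parent = random.choice(front)
        children.append([min(1.0, max(0.0, xi + random.gauss(0, mutation)))
                         for xi in parent])
    return front + children

random.seed(1)
pop = [[random.random() for _ in range(4)] for _ in range(20)]
for _ in range(50):
    pop = evolve(pop)
print("non-dominated designs:", len(pareto_front(pop)))
```

In the real setup, the design vector holds the hard-point coordinates and the spring and damper rates, and each evaluation of simulate() triggers the kinematics and vehicle dynamics simulations through the integration layer.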
The proposed approach was evaluated by simulating the driving performance of a vehicle from series production with known test results. The comparison between the simulation and the test results demonstrated good consistency. The small deviations can be attributed to simplifications made in the description of the behaviour of tyres and shock absorbers.
3. Use Case Low-Floor Minibus
The low-floor minibus used as the first application scenario is shown in Figure 4. This is a new vehicle concept that combines many advantages for passenger transport as well as for use by carrier companies. This vehicle type addresses a market niche and should be produced in low-volume series production. For this reason, the concept development should be accomplished with virtual methods. Physical prototypes should be avoided in this phase. One of the main tasks is to define the suspension to reach the required driving performance for this vehicle class. For this purpose, our optimization approach is applied using the CAD model geometry from CATIA.
Figure 4. Low-floor minibus, source: www.adolo7.hr
4. Results
Depending on the vehicle class, various test procedures are defined to give a comprehensive picture of the real driving performance. Many of them (steady-state circular driving, double lane change, obstacle avoidance, etc.) are standardized by ISO [1]. For this research, 10 basic test procedures were used to evaluate the driveability, stability and ride comfort of the low-floor minibus. In the optimization process, the following test procedures were simulated:
- double lane change, a test procedure related to the analysis of vehicle handling,
- sine with dwell (ESC test), braking on μ-split and crosswind, test procedures related to the analysis of vehicle stability,
- steady-state circular driving, sine wave steer input and fishhook test, test procedures related to vehicle handling and stability,
- bounce sine sweep, driving over a small sharp bump and driving on a real road surface profile, test procedures related to the analysis of vehicle ride comfort.
For the front suspension of the minibus, 36 variables for the optimization were defined. These comprise the rates of the spring and the shock absorber as well as 36 characteristic x, y and z coordinates of the suspension system hard points (Fig. 5).
Figure 5. Variables: positions of suspension system hard points.
For the dynamic simulation, 42 objectives as well as 28 constraints were defined. The objectives are related to the maxima and minima of the driving characteristics and to limitations from the test procedure definition. The definition of objectives and constraints for the example of the double lane change test procedure is shown in Table 1.
Table 1. Double lane change test procedure: objectives and constraints

Objectives:
- Lateral offset from design path should be minimized
- Lateral acceleration should be minimized
- Vertical wheel force fluctuations should be minimized
- Transient roll gain (roll angle to lateral acceleration ratio) should be minimized

Constraints:
- Lateral offset limit is 0.25 m
- Lateral acceleration limit is 0.66 g (initial configuration value)
- Vertical wheel force fluctuation limit is 40% (initial configuration value)
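How such objectives and constraints could be evaluated from a simulated time history is sketched below for a subset of Table 1 (the vertical wheel force criterion is omitted for brevity); the trace field names, the simplified roll-gain definition and the dummy data are assumptions for illustration.

```python
# Evaluate a subset of Table 1 objectives/constraints from a simulated
# time history. Limits follow Table 1; the trace holds dummy values.

def evaluate_dlc(trace, limits={"offset": 0.25, "lat_acc": 0.66}):
    peak_offset = max(abs(v) for v in trace["lateral_offset_m"])
    peak_lat_acc = max(abs(v) for v in trace["lateral_acc_g"])
    # Simplified transient roll gain: peak roll angle / peak lateral acc.
    roll_gain = max(abs(v) for v in trace["roll_angle_deg"]) / peak_lat_acc
    objectives = (peak_offset, peak_lat_acc, roll_gain)
    feasible = (peak_offset <= limits["offset"]
                and peak_lat_acc <= limits["lat_acc"])
    return objectives, feasible

trace = {"lateral_offset_m": [0.05, 0.12, -0.14, 0.09],
         "lateral_acc_g":    [0.31, 0.58, -0.61, 0.44],
         "roll_angle_deg":   [1.8, 3.0, -3.1, 2.2]}
print(evaluate_dlc(trace))
```

The objectives feed the optimizer's ranking, while the feasibility flag discards designs that violate the test procedure limits.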
The optimization runs quickly: on a standard desktop PC it takes about 36 hours to get the first set of optimized results. In the comparison of the algorithms, the best results were achieved with the FMOGA-II algorithm, which reached convergence after 300 iterations (Fig. 6).
Figure 6. Convergence of 2 objectives with FMOGA-II algorithm.
During the results analysis, some results were selected and analyzed thoroughly. Three configurations close to the Pareto front were compared exactly (Fig. 7). On the left, the objectives deviation of lateral offset from design path and lateral acceleration are depicted. On the right side, the yaw rate and lateral acceleration are shown.
Figure 7. Three optimal solutions from the Pareto front.
The driving performance was improved significantly compared to the initial configuration. As shown in Fig. 8, in the double lane change test procedure the lateral offset from the design path was reduced from 0.33 m down to the range 0.11 to 0.15 m for the first offset, and from 0.45 m down to the range 0.15 to 0.19 m for the second offset (the limit is 0.25 m).
Figure 8. Double lane change test procedure: lateral offset from design path.
Also, the peak value of the roll angle is taken as the dominant criterion for the evaluation of vehicle handling in the double lane change test procedure. This value should be minimal. The peak value of the roll angle was reduced from 4.97° for the initial configuration down to the range 3.02° to 3.05° for the optimal solutions (Fig. 9).
Figure 9. Double lane change test procedure: roll angle.
The fishhook test procedure is related to the analysis of vehicle handling and stability. Primarily, the vehicle response to steering wheel input should be observed. One of the objectives in this test procedure refers to the peak value of the yaw rate, which should be minimal (Fig. 10). Also, the response time and the overshoot of the yaw rate to a steering wheel input should be minimal.
Figure 10. Fishhook test procedure: yaw rate.
In the ride comfort related test procedures, the vehicle acceleration in all directions is the main criterion to be observed. The bounce sine sweep test procedure is used to rate a vehicle's ride comfort on a sine wave road of decreasing amplitude and period. The excitation from the road in this procedure is in the vertical direction. The objective and constraint are primarily set on the vertical acceleration of the vehicle, which shall be as low as possible (Fig. 11). Based on the vehicle acceleration, the RMS value of acceleration can be calculated and used for further analysis of the vehicle's ride comfort.
Figure 11. Bounce sine sweep test procedure: vertical acceleration.
Concerning the other objectives, the improvements achieved were in a similar range.
After the optimization is done and a set of feasible solutions arises, the only remaining task is the selection of the best solution by using appropriate decision support methods like self-organizing maps [12] [13].
5. Conclusions and Outlook
This research deals with the better use and integration of multi-objective optimization tools and simulation tools for the analysis of suspension system kinematics and vehicle dynamics in the conceptual phase of vehicle development. Using standard, well-proven simulation tools with a suitable degree of accuracy eliminates the need for a physical vehicle prototype in the early stage of development, which reduces unnecessary development costs and shortens the lead time. Achieving optimal parameters of the vehicle at this stage of development lowers the probability of wrong solutions or concepts.
The development of an optimization model, capable of handling a large number of
variables, constraints and objectives, is a prerequisite for a complete solution in the
conceptual phase of vehicle development. The model has been built by using modern
evolutionary algorithms and vehicle dynamics simulation tools. In addition, the
influential parameters have been identified and analyzed; the criteria for the evaluation
of vehicle dynamics characteristics, together with the optimization algorithms have
been selected.
The application of the optimization model to appropriate vehicle examples will lead to the evaluation of algorithm applicability and the determination of the key algorithm parameters which contribute to finding optimal solutions to the selected problems. The proposed approach yields acceptable results in a short time with minimal effort.
References
[1] G. Šagić, Višekriterijsko optimiranje u konceptualnom razvoju cestovnih vozila, PhD thesis, University of Zagreb, Croatia, 2013.
[2] K. Popp, W. Schiehlen, Ground Vehicle Dynamics, Springer-Verlag, Berlin Heidelberg, 2010.
[3] R. Rajamani, Vehicle Dynamics and Control, second edition, Springer-Verlag, New York, 2010.
[4] H.B. Pacejka, Tyre and Vehicle Dynamics, second edition, Butterworth-Heinemann, Burlington, 2006.
[5] J.R. Hwang, S. R. Kim, S. Y. Han, Kinematic design of a double wishbone type front suspension
mechanism using multi-objective optimization, 5th ACAM Australasian Congress on Applied
Mechanics, Brisbane, 2007.
[6] M. N. Khajavi, B. Notghi, G. Paygane, A Multi Objective Optimization Approach to Optimize Vehicle
and Handling Characteristics, World Academy of Science, Engineering and Technology 62, 2010.
[7] G. Fadel, I. Haque, V. Blouin, M. Wiecek, Multi-criteria Multi-scenario Approaches in the Design of
Vehicles, 6th World Congresses of Structural and Multidisciplinary Optimization, Rio de Janeiro, May
30 - June 3, 2005.
[8] M. Gobbi, I. Haque, P. Papalambros, G. Mastinu, A Critical Review of Optimization Methods for Road
Vehicles Design, 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference,
Portsmouth, Virginia, 2006.
[9] T. Weise, Global Optimization Algorithms: Theory and Application, http://www.it-weise.de/, 2009.
[10] L. Wang, A.H.C. Ng, K. Deb, Multi-objective Evolutionary Optimisation for Product Design and Manufacturing, Springer-Verlag, London, 2011.
[11] modeFrontier, http://www.esteco.com/modefrontier/, Accessed 15 May, 2013.
[12] S. Samarasinghe, Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition, Taylor & Francis, Boca Raton, 2006.
[13] S. Noguchi, O. Yuko, Improved Kohonen Feature Map Probabilistic Associative Memory Based on Weights Distribution, in Artificial Neural Networks: Architectures and Applications, Edited by Kenji Suzuki, InTech, Rijeka, 2013.
Prospective Evaluation of Assembly Work
Content and Costs in Series Production
Ralf Kretschmer a, Stefan Rulhoff b and Josip Stjepandić b,1
a Miele & Cie. KG, Germany
b PROSTEP AG, Germany
Abstract. Strategic decisions in early production planning phases have a high
impact on various production aspects. Decision making is often based on vague
expert knowledge due to lack of a reliable knowledge base. Implications of this
problem are especially observable in the field of assembly planning, which
integrates results from various planning disciplines. This paper introduces a new
concept and the corresponding data model for application of Data Mining (DM)
methods in the field of production assembly planning and product design. The
approach contains the usage of existing planning data in order to extrapolate
assembly processes. Especially linked product and process data allow the
innovative usage of Data Mining methods. The concept presents assistance
potentials for the development of new product variants along the product emergence
process (PEP). With this approach an early cost estimation of assembly processes
in series production can be achieved using innovative Data Mining methods.
Furthermore, design and planning processes can be supported effectively.
Keywords. Product Realization, Manufacturing, Digital Factory, Assembly,
Process Planning, Data Mining
Introduction
Today, globally operating companies face additional challenges due to the increasing variability of products and complexity of processes. Therefore, there are growing demands on the flexibility of the production system for the economic deployment of new products in an existing production line. In the modern product emergence process, production planning becomes increasingly important and has to be executed in parallel to product development [1]. In this early phase of product creation, a first step for planning processes is a cost calculation for the industrialization of the product in existing production lines under given basic conditions [2]. The economic feasibility of series production must be assured with vague information on the product and given general conditions, e.g. the shift model [3]. This is a great challenge, especially for the planning of the cost-intensive assembly of the product [4] [5].
In order to meet this challenge, PROSTEP AG supports Miele & Cie. KG, one of the leading manufacturers of domestic appliances, in developing innovative methods in the research project ProMondi. The aim of this project is the accurate estimation of the expected assembly work content and the resulting costs in an early stage of product development. The approach comprises the usage of existing planning data in order to

1 Corresponding Author, Mail: Josip.stjepandic@opendesc.com.
extrapolate assembly processes. Especially linked product and process data allow the innovative usage of data mining methods. New processes appropriate for assembling the given new product shall be designed based on this existing linked product and process data. Automatic analysis with a specific data mining model can be used to create a first draft of the assembly process and to estimate the expected costs. Additional use cases can be addressed: subsequent production planning processes can be supported by automatic proposals of adequate assembly processes, which can then be customized. Moreover, the design engineer can be supported in the selection of appropriate joining elements. With this approach, assembly-knowledge-based support of the designer in series production can be achieved using innovative data mining methods.
1. Use Case Miele
In order to address the challenges of data mining and the integration of various planning tasks within the PEP, new concepts are necessary. As part of integrated product and process development, there are different definitions for the various phases and aspects of planning activities along the PEP. Regardless of the specific definition of these phases and aspects, the analysis makes it certain that a great amount of the information and knowledge they contain is either utilized insufficiently and ineffectively or remains unused [6]. In this regard, the presented concept focuses on product design and production assembly planning. Consequently, for the product designer and production planner, there is a variety of applications which can assist the design or the planning process through information gathered by data mining.
Assembly process estimation: The focus is on the creation of an assembly process for a new product. Based on existing product and process data, a first approximated assembly process for a new product can be compiled. From this, the production planner can specify further details and thus determine a first estimation of the assembly time. Based on the assembly time and the associated calculation scheme, the planner can perform a first cost estimation in a very early planning phase.
1.1. Preparation and Requirements
The information in production planning and engineering processes can mutually enrich each other. Additionally, intelligently interconnecting information from both areas creates added value. The newly obtained information supports the workflow throughout the PEP. Therefore, as part of this concept, some requirements need to be met: the pre-conditions attached to both systems as well as to their respective processes have to be fulfilled [7].
1.1.1. Attributes and Data Sources
Data mining is a process of discovering valuable information from observational data sets; it is an interdisciplinary field bringing together techniques from databases, machine learning, optimization theory, statistics, pattern recognition, and visualization. Data mining has been widely used in various areas such as business, medicine, science, and engineering. Many books have been published introducing data mining concepts, implementation procedures and application cases [8] [9]. The overall goal of the data mining process is to extract information from a data set and to transform it into an understandable structure for further use.
Data mining methods can be used for data clustering and classification; however, criteria for the comparison of data sets have to be identified. To determine these criteria within the scope of the ProMondi project, a survey of users as well as an analysis of various digital manufacturing (DM) tools was performed. The objective of this analysis was to identify attributes relevant for assembly processes that could be assigned to products and parts in CAD [10], PDM and production planning systems. In CAD systems, the attributes assigned to parts contain mainly geometric information, including volume and weight. The PDM systems contain organizational information, such as creator, version and revision, as well as the mentioned part information from CAD [11]. In addition to the conventional systems for designing and storing product parts and assemblies, systems for process planning and time measurement were also taken into account. They contain a comprehensive portfolio of information and can therefore be used to distinguish different product parts and assemblies. The results of this analysis are encapsulated in an object-oriented data model, further described in Section 2.1.
1.1.2. Data Collection and Availability
The enrichment of product and process data on the fly, which is necessary for the presented concept, requires additional effort in the design. This additional expenditure also relates to the assembly connections and includes the acquisition of new information from the designer's know-how. The designer usually defines assembly connections either implicitly, through form-locked joints created by the shaping of the parts, or explicitly, by connecting elements such as screwed fasteners.
The designer considers all this information when designing the assembly connection but cannot store it in the CAD model, because the CAD tools for the most part are not able to define the necessary attributes.
To overcome this problem, as part of the concept presented in this paper, the designer is provided with an additional tool in the CAD system. It can be used to create assembly connections and offers additional information and explicit design possibilities. This additional assembly information is referred to below as product assembly information. Thus, data will be collected in the source system, in particular the CAD system. Since the defined objects are not part of PDM systems, an extension is necessary in order to implement connections as objects and to store them persistently in the PDM system after the transfer. In further processing, the product information is linked to the planning processes. Unless the product data are stored in the same system as the production planning data, the information flow from the PDM system to the planning system as well as to the data mining tool has to be ensured for further analysis.
1.1.3. Aggregation of the Existing Data
In current planning systems, a direct linking of processes to products is often possible [11]. Thus, an allocation of the product to be assembled to the associated assembly processes is realized. In the assembly, however, parts are joined with other parts or products. These assembly connections have no digital equivalent object yet. However, by means of an object such as the product assembly information, it is possible to store useful additional connection information that relates directly to the respective assembly connection. As part of this concept, the combination of products and processes does not take place directly but through the product assembly information. The linking of product and process does not necessarily need to occur at the part level.
2. Solution Concept
The concept presented in this paper describes an assisting workflow to support the designer (Figure 1). As part of a new or modified design, the designer creates new product data. When creating the assembly connections, a software assistant supports the designer and enriches the CAD model with product assembly information for each connection. This product assembly information includes additional connection details, e.g. the tightening torque of screwed fasteners, the type and form of a welded joint, or information about other connection types. In the ongoing design process, the designer can trigger an evaluation of the assembly connections in the model.
For this purpose, the characteristics of the CAD model are first prepared and analyzed with data mining. The analysis focuses on the product assembly information and its properties. The parts associated with the product assembly information and their geometric properties are also included in the analysis as an additional information set. Furthermore, an extended database is provided, consisting of the product and process data of existing products, which are linked via the product assembly information. The characteristics of the product assembly information of the new product are compared with the properties of the product assembly information of the existing products in the extended database. Then the most similar product assembly information from the existing products is determined. This analysis can be restricted to a class of connection types (screw, weld, rivet) or deliberately left open to widen the solution space and to provide the designer with information about other assembly connections.
Figure 1. Design optimization with additional time data.
A restriction to a particular type of connection yields the closest realized assembly connection of the same kind as its result. Depending on the properties of the parts, other mounting connections can also be found and offered to the designer in a proposal list.
The presented application for the support of the design process uses the product assembly information identified in the analysis of the PDM database to determine the respective associated and related sub-processes. Each product assembly information object represents one assembly connection. Through multiple connections within the assemblies, multiple sub-processes for the assembly can be determined. These processes contain the time data relevant for the new product design. Therefore, the corresponding time information of the existing products and, if requested, an alternative proposal list are transferred to the CAD system and displayed. This assembly time information of the existing products represents a first approximated assembly time for the new product. The designer is thus provided with this additional information regarding the assembly time and, via an enterprise-specific factor, the corresponding cost of the current design solution. In the final step, the designer is able to optimize the product iteratively on the basis of the anticipated assembly time and costs for each design.
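The arithmetic behind this first estimate is a simple roll-up. The following minimal sketch (in Python, with hypothetical sub-process times and an assumed cost rate, since the paper does not prescribe an implementation) sums the matched sub-process times and applies the enterprise-specific factor:

```python
def estimate_assembly(matched_times_s, cost_rate_per_hour):
    """Sum the sub-process times of the matched connections (in seconds)
    and derive a first cost figure via an enterprise-specific rate."""
    total_s = sum(matched_times_s)
    cost = total_s / 3600.0 * cost_rate_per_hour
    return total_s, cost

# hypothetical values: three matched connections, 75 EUR/h assembly rate
time_s, cost = estimate_assembly([12.5, 8.0, 30.2], cost_rate_per_hour=75.0)
print(f"first estimate: {time_s:.1f} s, {cost:.2f} EUR")
```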
2.1. Data model
Based on the determined assembly characteristics, a range of attributes is derived to classify the assembly of the parts. Figure 2 shows an overview of the generated data model for the data mining analysis.
Figure 2. Data model overview.
The ProductAssemblyInformation (PAI) is the central element in this data scheme and represents the assembly of the product parts. References for time analysis, assembly requirements, designed parts or products, as well as a wide range of metadata including the assembly department and other information, are stored here. This element is supplemented with attributes of the different connection types (see Figure 3). Further connection types can be added to the data model. To provide the required information for the time analysis, a standardized data model is applied. In this regard, application-specific data models from the ADiFa project, the so-called ADiFa Application Protocols, were used, which offer the integration of processes and data for different DM systems [12].
The second fundamental object in the data model is the Item. It contains references to existing sub-assembly units and geometrical characteristics as well as to the ProductAssemblyInformation. Each Item refers to the ProductAssemblyInformation, which in turn refers to the further Items used. This construct is chosen to enable data mining methods to determine exact similarities between new parts and/or products and existing ones. Furthermore, it makes the comparison of parts and products, new and existing ones, possible in any order and combination.
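To make these mutual references concrete, here is a minimal sketch of the two objects as Python dataclasses; the field names beyond those mentioned in the text are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Item:
    """A part or sub-assembly with its geometric characteristics."""
    item_id: str
    volume: float    # geometric information from CAD
    weight: float
    sub_items: List["Item"] = field(default_factory=list)
    pais: List["ProductAssemblyInformation"] = field(default_factory=list)

@dataclass
class ProductAssemblyInformation:
    """Central element: one assembly connection between Items."""
    pai_id: str
    connection_type: str                               # e.g. "screw", "weld", "rivet"
    items: List[Item] = field(default_factory=list)    # the connected parts
    assembly_time_s: Optional[float] = None            # reference to time analysis
    department: str = ""                               # example meta data

# a screw connection linking two hypothetical parts
housing = Item("P-100", volume=2.0e-3, weight=1.4)
bracket = Item("P-101", volume=4.0e-4, weight=0.2)
pai = ProductAssemblyInformation("PAI-1", "screw", [housing, bracket], 12.5)
housing.pais.append(pai)
bracket.pais.append(pai)
```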
In a first approach, the attributes for screw connections are clustered and evaluated regarding their relevance for the assembly operation. Figure 3 shows the identified attributes classified into the categories fasteners, installation/assembly situation, tools/equipment, installation regulations and additional assembly elements. These attributes are represented in the data model in different object classes, illustrated here by color. The evaluation of the influence on the assembly time provides a first indication of the relevance in the data mining analysis. Which attributes are really significant for the similarity of assembly connections has to be determined in a data mining analysis with a large quantity of product data.
Category: fasteners
screw head diameter (low); thread type (medium); number of thread transitions, used (low); screw diameter (low); screw length, thread (medium); screw type (medium); material (low); magnetic screw (high); chamfer on screw (high); output of the screw (low)
Category: installation / assembly situation
additional elements (high); lack of space (high); visual disability (high); risk of injury (low); additional fixing of the added part (medium); working with both hands feasible (medium); screwing in (medium); threaded sleeves used (high)
Category: tools / equipment
glove used (low); equipment used (low); tool (high)
Category: installation regulations
additionally tighten (low); tightening torque (low); check torque (low); assembly sequence (medium); multi-stage screwdriving (medium)
Category: additional assembly elements
flat washer (medium); nut (medium); LocTite (low); grease (low)
Figure 3. PAI attributes for the example of a screw connection; the rating in parentheses is the estimated influence on the assembly time.
2.2. Data mapping and data mining
After aggregating and appending the data subsets from the different sources and systems, it is necessary to remove redundant data sets [13]. Data removal for the presented concept is based only on syntactic similarities of attribute structures and data sets. The next step is converting and porting the data into the presented data model. Depending on the data source, the conversion is either fully automated or partially automated with further manual adjustment. Value ranges and scales of different attributes are often heterogeneous (Figure 4). In these cases, a normalization of the ratings prevents an undesired high or low impact of certain attributes on the results and the evaluation process. In this regard, a [0, 1] linear normalization has been used. Additionally, a further prioritization of the attributes via weighting can be necessary to define the importance of each attribute for the evaluation. An automated learning of the weights via machine learning methods is possible, depending on the existing data sets and their quality. Otherwise, the weights are determined based on expert knowledge or a combination of both methods. To prevent a further expansion of the scope and complexity of the existing problem, expert knowledge was applied to determine the attribute weights. Furthermore, it is possible to have more than a single weight vector. This approach is useful if there are various object types or parts which have different prioritizations for their attributes [14].
To identify the objects with the most similar product assembly information for a new object, the classification algorithm k-nearest neighbor (kNN) [15] with the Euclidean distance as evaluation function is used. From the identified objects a list is generated, and the most related one can be chosen manually; it passes its assembly process data to the new object. To assure the reliability of the presented method and to prevent overfitting, cross-validation [16] is used.
Figure 4. Weighting of Product Assembly Information (PAI).
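A minimal sketch of this similarity search, assuming purely numeric PAI attribute vectors with illustrative values (the real attribute set is the one shown in Figure 3):

```python
import numpy as np

def min_max_normalize(X):
    """Linear [0, 1] normalization per attribute column."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return (X - lo) / span

def weighted_knn(database, weights, query, k=5):
    """Indices and distances of the k most similar PAI vectors
    under a weighted Euclidean distance."""
    diff = database - query
    dist = np.sqrt(((diff ** 2) * weights).sum(axis=1))
    order = np.argsort(dist)[:k]
    return order, dist[order]

# hypothetical PAI attribute vectors (rows) and expert-chosen weights
pai_db = min_max_normalize(np.array([[6.0, 20.0, 1.2],
                                     [4.0, 12.0, 0.8],
                                     [8.0, 30.0, 2.5]]))
w = np.array([0.2, 0.5, 0.3])
idx, d = weighted_knn(pai_db, w, pai_db[1], k=2)
print(idx, d)   # the nearest neighbour of row 1 is row 1 itself (distance 0)
```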
2.3. Aggregation of Information and utilization of the concept
The implementation of the presented approach was challenging due to the high requirements for interconnection and for the overall quality of the existing data in the different DM systems. In particular, the number of realized and existing assembly connections, and thus of the necessary instances of product assembly information, as well as the quality of the data are important. As a proof of concept, the feasibility of the presented approach was verified with artificial test data. In order to evaluate the quality of the results, however, it is necessary to rerun the analyses with production data. Furthermore, the selection of the properties and attributes for the analysis in particular also has to be determined based on production data to ensure the reliability of the generated results. In this scope, a special focus is on the characteristics of the parts and of the connection itself. In conformity with the presented objectives and concept, the utilization of the methodology is described as follows.
Suggesting assembly connections: The designer creates a new module with already known and new assembly connections in the CAD system. He designs the assembly connections and complements their connection properties in the context of the new module. Via the automated data mining process, he is provided with various information about the assembly connections. Moreover, for each assembly connection a list of alternative or previously realized connections can be created. Depending on the product properties, the five most similar product assembly information instances are made available to the designer as a prepared proposal list, which is generated through a cluster analysis of the existing product data. If the analysis dispenses with the filtering of the connections associated with the product assembly information, the designer can also be provided with other, non-associated connections as alternatives.
Estimation of assembly process and information: The production planner drafts an initial assembly process for a new assembly at an early stage of product development. Analogous to the use case of the designer, for known assembly connections that are implemented in the new product as well as in the old product data, the right product assembly information and thus the assembly processes are found. For new, unknown connections, the most similar product assembly information instances and the related assembly processes are determined from the database and duplicated. Each of the found product assembly information instances represents a single connection, and the linked process represents precisely the assembly work content for this connection. The sum of the individual connections for the new product is its first assembly process. Thereby an initial draft of the assembly process of the new module can be generated. The found individual connections, the individual processes, as well as the overall process can be used in different ways to assist the designer and the production planner. The planner and the designer also get a first estimation of the expected assembly time and cost in the automated process. In addition, the production planner can increase the quality of the process by manual intervention. On the one hand, he adapts the product assembly information created by the designer before the data mining analysis. On the other hand, he can complete the attributes of the product assembly information with practical knowledge. Thus he has an impact on the input of the data mining analysis and thereby increases the quality of the result. Furthermore, the designer has a first draft of the assembly process at his disposal and a first estimated assembly time in the current CAD system. Via a company-specific factor, the designer also receives information about the cost of the connection in the assembly. By verifying this information, the designer can evaluate and compare the alternatives for different connections.
3. Conclusions and Outlook
Through the utilization of data mining tools, the quality of planning results and planning processes can be increased, while time and cost reductions can be realized simultaneously. In this regard, the presented approach contributes important added value to production design and planning through the usage of the knowledge in the existing systems. The consequences are a reduction of planning time, an increased availability of information in product design, and an easier cooperation between the designer and the production planning teams. The technical feasibility of the proposed solution has been shown by a prototypical implementation of the concept in CAD and PDM systems. However, to produce reliable outcomes, the product data have to fulfil high requirements with regard to the connection elements. Concurrently, the necessary data model and some tool sets are provided to make the data integration easier. In the future, the further development of tool sets and methods could help to reduce the high initial effort for the adjustment of the data even more. Besides the evaluation of the results based on product data, it is important to investigate the behavior and results of the methodology for new and innovative assembly technologies. Furthermore, for analyzing more complex data sets as well as obtaining better results, it is important to develop and refine the concept and to apply further data mining methods.
Acknowledgements
The research project Prospective Determination of Assembly Work Content in Digital Manufacturing (ProMondi) is supported by the German Federal Ministry of Education and Research (BMBF) within the framework concept Research for Tomorrow's Production (funding number 02PJ1110) and managed by the Project Management Agency Karlsruhe (PTKA). The authors are responsible for the contents of this publication.
References
[1] U. Bracht; T. Masurat, The Digital Factory between vision and reality, Computers in Industry 56, pp.
325-333, 2005.
[2] H. Bley; C. Franke, Integration of Product Design and Assembly Planning in the Digital Factory, Annals
of the CIRP, Vol. 53/1, pp. 25-30, 2004.
[3] G. Boothroyd: Assembly Automation and Product Design, Second Edition, Taylor & Francis Group,
Boca Raton, 2005
[4] B. Lotter, H.-P. Wiendahl, Montage in der industriellen Produktion, Ein Handbuch für die Praxis, 2. Auflage, Springer-Verlag, Berlin-Heidelberg, 2013.
[5] B. Rekiek, A. Delchambre: Assembly Line Design - The Balancing of Mixed-Model Hybrid Assembly
Lines with Genetic Algorithms, Springer-Verlag London, 2006
[6] O. Erohin; P. Kuhlang; J. Schallow; J. Deuse: Intelligent Utilisation of Digital Databases for Assembly
Time Determination in Early Phases of Product Emergence, 45th CIRP Conference on Manufacturing
Systems 2012, Vol. 3, pp. 424-429, 2012.
[7] J. Schallow; K. Magenheimer; J. Deuse; G. Reinhart: Application Protocols for Standardising of
Processes and Data in Digital Manufacturing, in: ElMaraghy, H. A. (Hrsg.): Enabling Manufacturing
Competitiveness and Economic Sustainability - Proceedings of 4th CIRP Conference on Changeable,
Agile, Reconfigurable and Virtual Production (CARV2011), 2.-5. October 2011, Montreal, Canada,
Springer, Berlin, Heidelberg, New York, pp. 648-653, 2011.
[8] J. Han; M. Kamber; J. Pei: Data Mining: Concepts and Techniques, third edition, Morgan Kaufmann
Publishers, Waltham, 2012.
[9] Y. Yin; I. Kaku; J. Tang; J. M. Zhu: Data Mining: Concepts, Methods and Applications in Management
and Engineering Design, Springer-Verlag, London, 2011.
[10] J. Hartung; J. Schallow; S. Rulhoff: Moderne Produktionsplanung - Integration in der
Produktentstehung, ProduktDaten Journal 19 1, pp. 20-21, 2012
[11] M. Eigner; R. Stelzer: Product Lifecycle Management - Ein Leitfaden für Product Development und
Life Cycle Management, Springer-Verlag, Berlin, Heidelberg, 2009.
[12] D. Petzelt; J. Schallow; J. Deuse; S. Rulhoff: Anwendungsspezifische Datenmodelle in der Digitalen
Fabrik, in: ProduktDaten Journal 16 1, pp. 45-48, 2009.
[13] L. Ohno-Machado; H. S. Fraser; A. Øhrn: Improving Machine Learning Performance by Removing
Redundant Cases in Medical Data Sets, AMIA Fall Symposium, pp. 523-527, 1998.
[14] D. Zhang; P. L. Yu; P. Z. Wang: State-dependent weights in multicriteria value functions, Journal of
Optimization Theory and Applications, Vol.74, No.1, pp. 1-21, 1992
[15] S. Dhanabal; S. Chandramathi: Review of various k-Nearest Neighbor Query Processing Techniques,
International Journal of Computer Applications Vol. 31, No.7, 2011
[16] R. Kohavi: A study of cross-validation and bootstrap for accuracy estimation and model selection, in: 14th International Joint Conference on Artificial Intelligence, Vol. 2 (IJCAI'95), Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, pp. 1137-1143, 1995.
FDMU – Functional Spatial Experience beyond DMU?
Shuichi Fukuda b, Zoran Lulić c, Josip Stjepandić a,1
a PROSTEP AG, Darmstadt, Germany
b Stanford University, USA
c University of Zagreb, Croatia
Abstract. To stay competitive, companies have to respond quickly to the changing demands of customers. At the same time, products become more and more complex, including more complex functionalities, and enterprises now have to deal with concurrent multi-disciplinary environments if they want to optimize their products globally. This comes with the development of in-process simulations, but new methods and new tools are needed in order to bridge the development domains. An important approach is the Digital Mock-up (DMU), which provides a robust development method to enable spatial integration in a concurrent environment. In the past decade the DMU has been implemented as a mandatory development method ensuring good project progress within distributed collaborative development. Nevertheless, there is a strong need to pursue product development with additional methods which progress beyond DMU. The development of mechatronic systems involves many disciplines, which utilize their own specific methods, processes and software tools in order to create partial models of an overall system. A very tight collaboration of the disciplines is essential, since all the partial models are interdependent. However, information between these engineering domains is exchanged only periodically. Progressing rapidly in short steps, the developers need an assisting tool to vividly obtain a first impression of the functional behavior of their products (physicalisation of data) in each stage of product development. This paper describes a new approach of cross-skill engineering cooperation between various engineering domains (mechanical, electrical, software, etc.) called Functional DMU, which provides a first, quick insight (functional spatial experience) into the recent progress of singular development tasks and the corresponding results in the context of the whole product.
Keywords. Concurrent Engineering, Digital Mock-up, Functional DMU, Functional spatial experience
Introduction
Digital mock-up (DMU) has become a key method in product development for many industries (automotive, aerospace, transportation, etc.). DMU is a virtual representation of the entire product model (e.g. with all variants, options and versions) throughout the product life cycle and serves as a visualization, validation, communication and decision platform [1]. The DMU is derived from the CAD data and is generated directly from the CAx tools through the use of data reduction methods such as tessellation.
1 Corresponding Author, E-mail: josip.stjepandic@opendesc.com
The creation of DMUs for variants and versions is no special overhead (depending on the size of the data and the hardware used, the process usually runs overnight and is started by the system automatically). In addition, archiving goes smoothly, directly from the PLM systems, because the mock-ups are already available in digital form [2]. The data can come from different CAD systems.
Visualization and validation can be carried out in the context of the whole product and in each phase of product development, because the DMU is based on a geometric product structure with full structural integrity. The DMU provides different views of the future product, e.g. from the design, manufacturing planning and validation points of view. Depending on the perspective, the relevant information is displayed. This facilitates early frontloading and a framework for the methodological support of concurrent engineering. Regardless of the level of development and the location (with the additional support of multi-site communication and collaboration tools), the development teams review the compatibility of their developments, detect errors and conflicts early, and consider alternative solutions. Suppliers can be entirely integrated into this process with respect to the confidentiality rules (they have access only to the data they need).
Figure 1. Functional DMU versus DMU.
The expansion of the digital mock-up to functional aspects (Functional Digital Mock-up, FDMU) is an attempt to create more powerful tools for product development. The virtual products stored in the DMU should be enriched with information which describes their functionality with respect to the environment (Fig. 1). Based on the DMU, the FDMU comprises the results of all simulations needed for a full presentation of the behavioral system description. Literally speaking, FDMU extracts the data from all virtual models of a product and gives them a physical meaning. It makes the product function experienceable. With this in mind, FDMU facilitates the physicalisation of data by setting the physical effects in the context of a product. As a prerequisite, one has to ensure a deep interaction between visualization and numerical simulation with respect to product life-cycle management. An FDMU application requires three basic components: a description of the geometry, a description of the behavior and a visualization of the results.
1. Related Work
In many scientific works, the questions of reducing or preventing inconsistencies between partial models have been analysed [3]. However, there is still optimisation potential regarding the prevention of inconsistencies between models, especially between CAD models and the corresponding behaviour models. An approach for a continuous information exchange between both types of models is needed. The existing approaches can basically be divided into four groups.
One of the main approaches currently used consists of extracting information from geometrical data, including its constraints, and inserting it into the simulation model. Following this approach, Modelica models have been associated with CAD models (CATIA). Thereby, an interface extracts the properties of the CATIA models in order to integrate them into a Modelica model. The involved CAD program CATIA does not play any role during the simulation of the Modelica model. The main weakness of this approach is the size limitation of most of the simulation software packages when visualising the whole product.
A further approach has been presented in order to exchange data between the CAD and the simulation software. The principle also consists of extracting information from CAD and putting it into Modelica via a database. However, model parameters that have been stored in the database may be modified arbitrarily. Therefore, there is a risk of running simulations on the basis of wrong parameters. To avoid this risk, a parametric link between both models can be used. Component objects are created, containing information related to a CAD model and its corresponding behavioural model. Therefore, the data related to the behavioural model can be extracted from the CAD model. This approach has significant complexity and performance limitations.
Additionally, there are some integrative approaches. All these approaches have the characteristic in common that the functionality of one system (often the CAD system), and therefore its advantages, may not be accessible during the simulation. Simulation tools are not focused on managing geometric information and its visualization in 3D environments. Moreover, modifications of the CAD models from which information has been extracted for input into the simulation environment can lead to inconsistencies.
Last but not least, there is the CAx system CATIA V6 from Dassault Systèmes, with a comprehensive new approach (Requirements, Functional, Logical, Physical) to support product development in all phases with a high level of interaction between the singular modules. It also comprises the design and various types of simulation in a unique platform. The problem of CATIA V6 is its low acceptance, based on the fear of dropping many applications at once and replacing them at the same time with a new, not yet mature software.
Significant support for all these FDMU approaches is given by the Modelica Association with the Functional Mock-up Interface (FMI) [4]. In this approach, many tools for modeling the geometry and the product behavior are equipped with a common interface (Fig. 2), listed in [5]. This interface exports the model output containing both the behavior and the input parameters of a simulation: these are practically the calculation method and the initial values of the equations. A simulation tool with the FMI interface can then read these models to simulate a particular behavior. This approach is therefore particularly interesting in that several models and several simulations can be run in parallel, and thus the overall behavior of a complex system can be represented [6].
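As a hedged illustration of how such an FMI coupling can look in practice, the following sketch uses the open-source FMPy package to inspect and simulate a hypothetical FMU; the file name and the start value are assumptions, not artifacts of the approaches cited above:

```python
from fmpy import read_model_description, simulate_fmu

fmu = "volume_control.fmu"                      # hypothetical behaviour model
md = read_model_description(fmu)                # exported variables and solver info
print([v.name for v in md.modelVariables][:5])  # inspect the model interface

# run the behaviour model; the results can then feed the DMU/VR visualization
result = simulate_fmu(fmu,
                      stop_time=10.0,
                      start_values={"listener.x": 0.4})  # assumed parameter name
print(result["time"][-1])                       # structured array with a time column
```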
Nevertheless, all described approaches have significant weaknesses in representing the performance in the entire product context (e.g. a passenger car) with a short lead time (quasi online). Therefore, there is a need for further research and development work to calculate, or to extract, the minimal subset of data needed for a sufficient visualization of a physical effect in such a context. While zooming makes it possible to visualize each relevant detail within the DMU online, a similar capability is needed for physical effects within the FDMU.
Figure 2. Functional Mock-up Interface.
2. Use Case
Typical use cases for FDMU arise whenever several different physical effects appear in the tight space of a product. One such example is the passenger car, wherein the comfort of a living room is expected, which is affected by many vehicle-related as well as environmental influences. Therefore, the design of the vehicle interior becomes correspondingly difficult. A central component of comfort optimization concerns the vehicle acoustics, which is considered in the NVH complex (noise, vibration, harshness). While noise and vibration can be determined by appropriate experimental methods, harshness is a subjective property and reflects human subjective impressions [7] [8]. The scientific analysis of sound perception is the subject of psychoacoustics.
The psychoacoustic characteristics of a vehicle are a decisive factor for almost every buyer of a premium car [9]. Therefore, appropriate experimental and simulation methods have been developed that help to control, to mitigate or to combat the noise [10] [11] [12] [13].
No less important is to tune the properties of the sound system in the vehicle so that an optimal subjective perception is achieved for every passenger in every interior variant. Here, both the position (driver seat, passenger seat, rear seat) and the fine adjustment of the seat position are of great importance.
This is an area for Functional DMU: the acoustics of the vehicle interior are depicted for each vehicle variant, for each passenger and for each seat position (Fig. 3), so that decisions on the design of the overall system can be supported. The scenario describes a system for the automatic volume control of (at least) two active sound sources. In this scenario, a person is in the vicinity of the two sound sources. His position, and therefore the distances from the sound sources, can change. Both sound sources emit varying signals, e.g. music from a stereo system. In the area, noise is present too, compromising the clarity of the music. Another part of the system is a microphone which detects the total sound pressure in the interior of the vehicle. The total sound pressure is composed of both the music and the noise. Figure 3 illustrates this scenario from a bird's-eye perspective.
Figure 3. Use case: acoustics setup in a passenger car.
Depending on how a person changes his position in space, he receives the sound from the speakers with different loudness. At the same time, the sound sources play the music differently: stereo sound is not created from exactly the same signal on both channels.
The input parameters of the scenario are therefore the positions of the person, the speakers and the microphone, the sound power of the two speakers, and the overall sound pressure that is detected by the microphone. The total sound pressure arises from the sound of the speakers as well as from the remaining (interfering) noise in the environment.
In addition, knowledge of the position of the microphone is required to calculate the speakers' share of the total sound pressure. A sound pressure level limit is introduced into the system, with the ability to perform an adjustment in order to protect human health.
From these parameters, it is determined which sound power change is needed in each of the speakers to provide a balanced listening experience for the passengers. This means that both speakers are used to equalize the loudness, and the volume of the music adapts to the disturbing ambient noise. A control system has the task of controlling the speakers so that a balanced listening experience (balance) is created for the passenger, no matter what position he takes in the room. Moreover, the system reacts to changes in the noise, so that the combined sound pressure of the speakers is always in a fixed ratio to the sound pressure of the noise. Thus, the clarity of the music and the listening experience for the passengers remain constant. The acoustic performance of the speakers is known to the system, or it is calculated by the system in real time. From the overall sound pressure measured by the microphone and the subsequently determined position of the passenger, the sound pressure level of the noise can be determined.
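A minimal numerical sketch of this control idea, assuming simple free-field 1/r propagation and illustrative positions and levels (the actual system relies on a dedicated acoustic solver rather than this simplification):

```python
import numpy as np

def required_gains(listener, speakers, noise_spl_db, ratio_db=10.0):
    """Per-speaker gain in dB so that each speaker reaches the same target level
    at the listener, ratio_db above the ambient noise (free-field 1/r law)."""
    target_spl = noise_spl_db + ratio_db
    gains = []
    for pos, spl_at_1m in speakers:
        r = np.linalg.norm(np.asarray(listener) - np.asarray(pos))
        spl_here = spl_at_1m - 20.0 * np.log10(max(r, 1e-6))
        gains.append(target_spl - spl_here)
    return gains

# hypothetical geometry: a passenger between two door speakers (SPL at 1 m known)
speakers = [((0.0, 0.0), 80.0), ((2.0, 0.0), 80.0)]
print(required_gains((0.5, 0.3), speakers, noise_spl_db=60.0))
```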
3. Concept
To achieve this goal, the architecture was defined according to Figure 4 [14]. Geometry definition and visualization can be accomplished either with the CAD system CATIA V5 or via the JT format with an appropriate viewer. The processing of the acoustic models is carried out with a specially developed solver based on the theoretical fundamentals described in [8]. Microsoft Excel supplies the input/output and the low-level standard calculations.
Figure 4. FDMU architecture.
The Excel file consists of two sheets. The first sheet is used as a design table, which is accessed by CATIA V5 and the JT viewer. In summary, this sheet comprises the important geometric information that is visualized in CATIA V5 and the JT viewer; for instance, this sheet calculates the mid-position of the ear. CATIA V5 can use the parameters in this sheet to externally control the parametric models. The input of parameters doesn't take place in CATIA V5 but in Excel. Via the update function, the parameter values are imported into CATIA V5 again and the entire model is updated. This approach facilitates the use of many geometric variants within only one CATIA V5 model.
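A sketch of how such a design table could be filled programmatically, assuming the openpyxl package and a hypothetical workbook layout (the cell mapping and file name are not from this work):

```python
from openpyxl import load_workbook

wb = load_workbook("fdmu_design_table.xlsx")   # hypothetical file name
sheet = wb["DesignTable"]                      # the sheet read by CATIA V5 / JT viewer
sheet["B2"] = 0.35                             # assumed cell: seat position in m
sheet["B3"] = 1.78                             # assumed cell: body height in m
wb.save("fdmu_design_table.xlsx")
# CATIA V5 then re-imports these values via its update function.
```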
The user enters data in the design table, including the position and kinematics of the seats. These data affect the behavior as well as the geometry in the selected use case. The user can change the CAD model if necessary without influencing the behavior. One sheet contains the result for the behavior, with the required changes in the sound power of the speakers. The body height of the passenger as well as his seat position (driver, passenger, rear seat) are parameters too. The list of parameters is completed by the positions of the loudspeakers and the microphone.
The second sheet contains the description of the behavior in the use case. First, the user must enter amplitude values measured by the microphone, distributed over several frequency ranges. These measurements represent the total sound pressure of all sounds in the application scenario. Secondly, the sound power levels of both speakers must be registered across multiple frequency ranges.
The need to record the sound performance and the overall sound pressure manually is the main weakness of this architecture. If the sound system really existed, this would not be necessary and it would happen automatically. This enhancement should be implemented in the next project phase.
Furthermore, the visualization of the behavior is limited by the capabilities of Excel. Data series in Excel sheets can be represented in a variety of charts or graphs. Excel also allows sorting of information by freely selectable criteria. For example, values can be marked when they exceed certain limits or be placed in a hierarchy, as in the evaluation matrix of the use cases in this work. It is desirable that the behavior of the system can also be shown directly on the geometry model, as in an application scenario for heat distribution on surfaces.
Such representations of the model are actually not intended in CATIA V5 and JT. However, workarounds through various visualization software packages are possible. These can be used to illustrate a specific behavior on such geometric elements. In the use case of this work it could be visualized, for example, at what distance from the speakers a certain sound pressure level occurs. The typical color scheme can be used to mark and distinguish the singular parameter values. JT facilitates similar visualization options.
In Figure 5, two spheres are modeled whose centers are the mid-points of the speakers from the use case. The radius of each sphere depends on the acoustic performance of the speaker and is transmitted via the design table to CATIA V5. In addition, these spheres are rendered transparent in order to improve the clarity of the presentation. This image is created only for clarity; in a real system, the sound pressure distribution has a substantially different shape, since the speakers are aligned in a certain direction. The distribution would also be significantly affected if the structure of the car interior were taken into account.
All means needed for the visualization of such fields are available in the proposed concept. For the implementation, only a refinement of the simulation algorithms is necessary.
A similar procedure can be used for the demonstration of further interior equipment (e.g. sensors, heating, air conditioning, environmental effects, etc.).
Figure 5. Illustration of spaces with equal noise pressure level.
4. Further Development
The FDMU solution described here demonstrates all the advantages (low cost, rapid implementation and deployment, user-friendly interface and application) and disadvantages (difficult coordination, functional overlap, costly maintenance, data exchange losses) of the bottom-up principle in the development of complex IT applications. As an integrated standard FDMU system in the market offering of a software vendor seems unlikely at the present time, further development must obviously run in two main directions.
First, the candidate applications must acquire better interoperability capabilities. For this purpose, the consistent implementation of FMI in each simulation tool would be very helpful and shall become one of the purchasing criteria.
Second, the loose coupling of the individual components must be hierarchically controlled. FDMU is therefore distributed over four levels (Figure 6): CAD/PDM system, HIL/SIL system, DMU/VR system and, finally, the FDMU environment. To achieve a higher performance of the overall system, it is necessary to strengthen the component "FDMU environment" so that it provides the user interface and the control
of the remaining components. A similar application is described in [15]. This component shall also provide a template by which individual applications are inserted into the FDMU system.
Finally, the open question remains of how the FDMU can be integrated into an enterprise-wide PLM concept to ensure quick access, data consistency and broad data availability [16].
Figure 6. Information flow within an integral interactive FDMU application.
5. Conclusions and Outlook
FDMU is an attempt to breathe life into the 3D geometric models customary in modern product development: with the addition of models describing the behavior of a product to the purely geometric models, it is not only the appearance but also the function of a product which is shown virtually. A product becomes a virtual experience and does not have to actually exist.
FDMU allows the product developer to make much more precise predictions about the future product than before, and facilitates the search for failure sources during development. This work represents a contribution to the implementation of FDMU and shows how an FDMU architecture can be realized relatively simply and with means widely used in an enterprise. In addition, various approaches are possible for the implementation and dissemination of FDMU in product development.
During this work, an architecture has been developed which fulfills the main requirement of FDMU: the coupling of digital geometry and behavior models. By using a CAD system, the creation and manipulation of the geometric model structure is possible. All variable parameters that impact both the geometry and the behavior of the system are stored in Excel. One can define many behavior scenarios in Excel, as long as they can be represented by mathematical or logical operations.
The possibilities for the visualization of the behavior are limited in this architecture, but they can be extended through workarounds. For a larger-scale deployment of the architecture, one can also consider simplifying the workarounds by using macros. In addition to this architecture, another approach is presented in which the advantages of neutral 3D interface formats such as JT [17] are used. In this alternative architecture, the modeling of the geometry is limited; however, the monetary benefits are very promising in this case. The implementation of this architecture can provide points of contact for further scientific work.
References
[1] B. Balasubramanian, Entwicklungsprozess für Kraftfahrzeuge unter den Einflüssen der Globalisierung und Lokalisierung, in: V. Schindler, I. Sievers, Forschung für das Auto von morgen, Springer-Verlag, Berlin Heidelberg, 2008, pp. 359-372.
[2] W.R. Dolezal, Success Factors for Digital Mock-ups (DMU) in complex Aerospace Product Development, PhD Thesis, TU München, 2008.
[3] A. Biahmou, A. Fröhlich, J. Stjepandic, Improving interoperability in mechatronic product development, Proceedings of PLM 10 - International Conference on Product Lifecycle Management, Inderscience, 2013, pp. 510-521.
[4] N.N., Functional Mockup Interface (FMI) Version 1.0, https://www.fmi-standard.org/downloads, Accessed 15 April, 2013.
[5] T. Blochwitz, M. Otter, J. Akesson, M. Arnold, C. Clauß, H. Elmqvist, M. Friedrich, A. Junghanns, J. Mauss, D. Neumerkel, H. Olsson, A. Viel, Functional Mockup Interface 2.0: The Standard for Tool independent Exchange of Simulation Models, 9th International Modelica Conference, Munich, Sep 3-5, 2012, https://trac.fmi-standard.org/export/700/branches/public/docs/Modelica2012/ecp12076173_BlochwitzOtter.pdf, Accessed 15 April, 2013.
[6] T. Blochwitz, M. Otter et al.: The Functional Mockup Interface for Tool independent Exchange of Simulation Models, 8th International Modelica Conference, Dresden, 2011, https://trac.fmi-standard.org/export/700/branches/public/docs/Modelica2011/The_Functional_Mockup_Interface_paper.pdf, Accessed 15 April, 2013.
[7] M. Möser, Technische Akustik, Springer, Berlin Heidelberg, 2012.
[8] M. Pflüger, F. Brandl, U. Bernhard et al., Fahrzeugakustik, Springer, Wien, 2009.
[9] D. M. Howard, J. A. S. Angus, Acoustics and Psychoacoustics, 4th edition, Elsevier, Oxford, 2009.
[10] D.A. Bies, C.H. Hansen, Engineering Noise Control, 3rd edition, Spon Press, London, 2003.
[11] A. Hepberger, B. Pluymers, K. Jalics et al., Validation of a Wave Based Technique for the analysis of a multi-domain 3D acoustic Cavity with interior damping and loudspeaker excitation, Inter-Noise 2004 - The 33rd International Congress and Exposition on Noise Control Engineering, Prague, Czech Republic, 2004.
[12] B. Brähler, C. Bertolini, SEA-Modellierung zur Schallpaketentwicklung unter Einbeziehung simulierter Luftschallanregung, ATZ, Ausgabe 12/2008, Springer Vieweg, Wiesbaden, 2008.
[13] M. Luegmaier, M. Trost, Status und Trends der NVH-Simulation im Automobilumfeld aus Anwendersicht, NAFEMS Magazin 4/2012, 24. Ausgabe, NAFEMS, Bernau am Chiemsee, 2012.
[14] J. Schulz, Erweiterung des Digital Mock-up um funktionale Aspekte, Diploma thesis, TU Magdeburg, 2013.
[15] A. Stork et al., FunctionalDMU: towards experiencing the behavior of mechatronic systems in DMU, Fraunhofer IGD, Darmstadt, 2009, http://www.igd.fraunhofer.de/sites/default/files/FDMU%20Pr%C3%A4sentation.pdf, Accessed 15 April, 2013.
[16] M. Eigner, T. Hollerith, Concept for an Integrated Mechatronic Simulation by a Cross Domain Function Model, ProSTEP Science Days, Sep 26, 2007, Lehrstuhl für Virtuelle Produktentwicklung, TU Kaiserslautern, 2007.
[17] S. Handschuh, R. Dotzauer, A. Fröhlich, Standardized formats for visualization - application and development of JT, 19th ISPE International Conference on Concurrent Engineering, Trier, 2012, in: J. Stjepandic et al. (eds.), Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Springer-Verlag, London, 2013, pp. 741-752.
Automatic Generation of Curved Shell Plates Processing Plan Using Virtual Templates for Knowledge Extraction
Jingyu Sun a,1, Kazuo Hiekata a, Hiroyuki Yamato a, Norito Nakagaki b, Akiyoshi Sugawara b
a Graduate School of Frontier Sciences, The University of Tokyo, Japan
b Sumitomo Heavy Industries Marine & Engineering Co., Ltd, Japan
Abstract. An approach for extracting the tacit knowledge involved in the curved shell plate manufacturing process by automatically generating the processing plan using virtual templates is discussed. First, the curved shell plates are extracted from 3D measured data divided into many regions by obstacles. Then, the virtual templates are generated from the ship's design data. Finally, the processing plans, consisting of the heating areas for bending the curved shell plates, are automatically generated. By analyzing the correlation between the generated plan and the actual plan, the tacit knowledge, such as the processing rules and habits under different situations, which used to be hardly discoverable or teachable during the manufacturing process, is extracted and represented using Ripple-Down Rules.
Keywords. tacit knowledge, curved shell plate, processing plan, virtual template
1. Introduction
The curved shell plates, which constitute a ship's bow and stern, are thick, large steel plates with arbitrary 3D shapes. Thus, they can only be plastically deformed by a process of heating and water-cooling based on a check of the wooden bending template (Kikata) with human eyes. This process has no clear-cut methodology and carries great risks for the subsequent heat sealing process. The wooden templates used in this process to evaluate the curved surface of a curved shell plate are made up of a bottom line and perspective sticks derived from the ship's design data. During the processing, the parameters (e.g. the angle between the wooden template's perspective sticks, the grade of the gap between the wooden template's bottom line and the curved shell plate, etc.), which are required for the design of the processing plan and are based on a personal check-with-eyes result, are quantitatively immeasurable. In addition, there is a lot of tacit knowledge, such as processing rules and habits, which is hardly discovered or taught during the processing. Thus, individual differences exist among the processing plans designed by the workers, and variation in the after-processing shapes arises due to the heavy dependence on the tacit knowledge, skill and experience of the craftsmen during the processing.
1 Student, Graduate School of Frontier Sciences, the University of Tokyo, Building of Environmental Studies, Room #274, 5-1-5, Kashiwanoha, Kashiwa-city, Chiba 277-8563, Japan; Tel: +81 (4) 7136 4626; Fax: +81 (4) 7136 4626; Email: sun@is.k.u-tokyo.ac.jp; http://www.nakl.t.u-tokyo.ac.jp/
On the other hand, the non-contact 3D scanning technique using 3D scanning equipment such as laser scanners has started to be used in modern engineering. A laser scanner analyzes a real-world object by emitting laser light and receiving the reflection to collect data on its shape. The measured results are formatted as point cloud data, which include the three-dimensional coordinates representing the points of the measured object's surface, the reflection intensity and color information.
This paper presents an automatic processing plan generation system for curved shell plates using virtual templates for knowledge extraction during the processing. The system generates virtual templates based on the design data of the ship, registers them with the measured point cloud data representing the curved shell plates in processing obtained by a laser scanner, and generates the processing plan by calculating the curvature differences between the virtual templates' bottom lines and the measured data. Finally, the tacit knowledge applied during the processing is extracted by analyzing the correlation between the automatically generated plan and the actual plan.
2. Related Work
Hiekata et al. calculated the surface displacement of curved shell plates. The curved shell plates are measured by a laser scanner. The accuracy of a curved shell plate is computed by registering the measured point cloud data of the curved shell plate with the design data in NURBS representation and calculating the displacement between these two data sets [1].
3. Proposed Automatic Processing Plan Generation System
3.1. Overview
The processing flow of this system is illustrated in Figure 1. There are five engines in this system: (1) the Curved Shell Plate Measuring Engine, (2) the Virtual Templates (Kikata) Generating Engine, (3)-a the Heating Factor (processing plan) Proposing Engine, (3)-b the Virtual Template Control Engine, and (4) the Automation Engine of the measurement/analysis flow.
Figure 1. System Overview.
The interfaces of this system include the Measuring Parameter Select Interface for the craftsman who produces the curved shell plates and the Automation Flow Setting Interface for the administrator of the factory. These components are called engines because each of them can operate independently, so that every step can be reproduced immediately instead of running the whole flow all over again.
3.2. Curved Shell Plate Measuring Engine
Because there are many obstacles, such as the floor and the wooden templates, in the measured point cloud data of the curved shell plates obtained by the laser scanner, as shown in Figure 2 (left), a curved shell plate measuring engine which can extract the curved shell plates by removing these obstacles automatically was developed.
Figure 2. Curved Shell Plates Extraction.
First, as shown in Figure 3, one point on the curved shell plate is arbitrarily selected as the start point. Then, the continuous domain A around the start point is obtained by the domain growing method. After that, the edge of domain A is extracted using the edge detecting method from Kalogerakis [2].
Next, the neighborhood points are computed for every point of each edge. The judgment about whether these neighborhood points belong to the same curved shell plate as domain A is carried out using 4th-order curved surface fitting and cross-sectional B-spline curve fitting [3]. The new domain B of the same curved shell plate is recognized by comparing the fitted curved surface with the actually measured point cloud data. Finally, the points representing the whole curved shell plate without obstacles are extracted by repeating the above process.
Figure 3. Obstacle Overstriding.
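A minimal sketch of the domain growing step on a raw point cloud, using SciPy's KD-tree; the connection radius is an assumption, and the surface-fitting test that separates the plate from obstacles is omitted here:

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def grow_region(points, seed_idx, radius=0.05):
    """Collect all points reachable from the seed through
    neighbourhoods of the given radius (domain growing)."""
    tree = cKDTree(points)
    in_region = np.zeros(len(points), dtype=bool)
    queue = deque([seed_idx])
    in_region[seed_idx] = True
    while queue:
        i = queue.popleft()
        for j in tree.query_ball_point(points[i], radius):
            if not in_region[j]:
                in_region[j] = True
                queue.append(j)
    return np.flatnonzero(in_region)

cloud = np.random.rand(1000, 3)       # stand-in for a laser scan
print(len(grow_region(cloud, seed_idx=0, radius=0.08)))
```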
3.3. Virtual Templates Generating Engine
The bottom line of the virtual template is generated from the 3D coordinates of the frames' design data, and the representative upper plane and the perspective plane are generated to constitute the virtual template's perspective stick. The details are given below.
(1) Generation of the Virtual Template's Bottom Line
Lagrange interpolation is used to generate the virtual template's bottom line, representing the shape of the curved shell plate, from the discrete frame points of the design data.
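For illustration, a sketch of this interpolation with SciPy on hypothetical frame points (the real input is the discrete frame data of the ship's design):

```python
import numpy as np
from scipy.interpolate import lagrange

# hypothetical discrete frame points (x, z) of one frame line
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
z = np.array([0.00, 0.12, 0.20, 0.24, 0.25])

poly = lagrange(x, z)                            # Lagrange interpolating polynomial
xs = np.linspace(x.min(), x.max(), 50)
bottom_line = np.column_stack([xs, poly(xs)])    # densely sampled bottom line
print(bottom_line[:3])
```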
(2) Generation of the Representative Upper Plane
The representative upper plane is required in order to generate the perspective stick of the virtual template.
Figure 4. Upper Plane's Generation.
First, the normal vectors of the i-th frame line's center are calculated and added. The sum is normalized and multiplied by a fixed distance H, and the result is taken as the offset vector. Next, plane fitting is performed for the curved shell plate's frame point group. The representative upper plane of the virtual template group is generated by moving the obtained plane by this offset vector, as shown in Figure 4.
(3) Generation of Perspective Plane
The evaluation of the curved shell plate during processing is performed by checking
the position of the virtual template's perspective plane.


Figure 5. Perspective Planes generation.

As shown in Figure 5, the center point of the 1st frame line is projected onto the
representative upper plane to obtain the projection point. The perspective plane is then
defined by this projection point and the center point of the 2nd frame line.
Finally, the generated virtual templates, with the same perspective plane as a real
wooden template group, are drawn on the computer.
3.4. Processing Plan Generating Engine
The Processing Plan Generating Engine is developed to generate the curved shell plate's
processing plan automatically by comparing the curvature difference between the virtual
templates and the measured point cloud data of the curved shell plate. The automatic
generation flow of the processing plan is shown in Figure 6.
First, registration of the virtual templates and the measured curved shell plate point cloud
data is performed using the ICP algorithm [4].
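
A compact sketch of one ICP iteration conveys the idea: match each measured point to its nearest template point with a k-d tree, then solve the best-fit rigid motion in closed form. This is a generic single-resolution ICP for illustration, not the hierarchical M-ICP of [4]:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: nearest-neighbor matching followed by the
    closed-form (Kabsch) rotation/translation that best aligns the
    matched pairs; returns the transformed source cloud."""
    tree = cKDTree(target)
    _, idx = tree.query(source)               # nearest target point per source point
    matched = target[idx]
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)  # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t

# Repeat icp_step(measured, template) until the update becomes negligible.
```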


Figure 6. Automatic Generation of Processing Plan.

Since the measured frame line is a discrete point group from the measured point cloud,
curve fitting is carried out using a B-spline curve.
Then, the amount of contraction by heating needed at each frame point is calculated for
every frame line. The section of the measured frame line and the virtual template's
bottom line are shown in Figure 7 (left), together with the plate thickness at this frame
line. A 1/n part of the frame line is shown in Figure 7 (right). During the plastic
deformation, if heat is applied to a point, the length of the upper side of the plate at
that point contracts while the lower side expands, and the length of the neutral axis,
displayed as a dotted line, does not change.


Figure 7. Processing Plan Generating Method

Therefore, heat is applied to the point until the plate reaches the design shape. The
surface contraction by heating required at each frame point for correction processing is
expressed as Formula (1), and the value that serves as the standard of the necessary heat
input at each point of one frame line is computed using Formula (2).

(1)

(2)
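
As a hedged sketch consistent with the description above (writing $l_j$ for the measured upper-side length of the $j$-th 1/n part and $\hat{l}_j$ for the corresponding design length; the normalization in (2') is an assumption made for illustration), the two quantities could take the form:

$$\Delta l_j = l_j - \hat{l}_j \qquad (1')$$

$$r_j = \Delta l_j \Big/ \max_k \Delta l_k \qquad (2')$$

so that $r_j$ plays the role of the relative value compared against the threshold in the following sections.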

As shown in Figure 8, the heating line is located between adjacent points whose
computed values exceed the threshold.


Figure 8. Processing Plan
3.5. Virtual Template Control Engine and Automation Engine
In the present processing plan design process, the wooden template is rotated to a proper
position in order to make the curvature difference at each frame point observable.
Moreover, the heating line is located between the contact points of the template and the
plate when the perspective sticks of every template are rotated to constitute one plane.
An engine which rotates each virtual template freely on the measured curved shell plate
is developed by applying the ICP algorithm to each 1/n part of the frame line, as shown
in Figure 9. In addition, in order to make the whole system easy for workers to use, the
automation engine of the measurement/analysis flow sends every system operation
command, instead of a human, using functions of the operating system. It can therefore
detect and control both the system and the scanner, making the system completely
automated.


Figure 9. Virtual template control (rotate)
3.6. Knowledge Extraction
In order to extract the tacit knowledge in the heating process, interviews were
conducted about the differences between the automatically generated plan and the
actual plan in different cases. The knowledge in specific situations during the
manufacturing process is extracted and represented using Ripple-Down Rules
(RDR) [5], as shown in Figure 10.
The curved shell plates can be roughly divided into 4 types: Saddle Type, Dish
Type, Twist Type, and S Word Type. During the curved shell plate manufacturing
process, when the workers have doubts about the generated processing plan, the curved
shell plate's type is first identified. Then the region where the curvature error occurs is
checked to confirm that the workers' modification to the generated plan is positionally
reasonable. The size of the curvature error is also reviewed to look for a correlation
between the curvature error and the heating technique. Finally, some other factors
during processing, such as the arrangement of the roller lines, are also considered to
affect the workers' decision.


Figure 10. Knowledge extraction using Ripple-Down Rules
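
A minimal sketch of the Ripple-Down Rules structure used for this representation (the class layout and the exception-chaining style are illustrative assumptions, not the authors' implementation):

```python
class RDRNode:
    """One rule in a Ripple-Down Rules tree.

    If the condition holds, the node's conclusion applies unless a
    `refine` child (an exception learned later) overrides it; if the
    condition fails, evaluation falls through to the `alt` sibling.
    """
    def __init__(self, condition, conclusion, refine=None, alt=None):
        self.condition, self.conclusion = condition, conclusion
        self.refine, self.alt = refine, alt

    def classify(self, case, default=None):
        if self.condition(case):
            refined = self.refine.classify(case) if self.refine else None
            return refined if refined is not None else self.conclusion
        return self.alt.classify(case, default) if self.alt else default

# Rule 0 (plate is dish type) refined by Rule 1 (large curvature-error region).
rule1 = RDRNode(lambda c: c["error_region"] == "large", "parallel line heating")
rule0 = RDRNode(lambda c: c["plate_type"] == "dish", "follow generated plan",
                refine=rule1)
print(rule0.classify({"plate_type": "dish", "error_region": "large"}))
# -> parallel line heating
```

New rules are attached as further `refine` or `alt` nodes at the point where a misclassified case was encountered, which is how interview findings accumulate in an RDR tree.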
4. Case Study
4.1. Automatic Generation of Processing Plan
The graph showing the relative curvature errors at the 5 frames of curved shell plate A
is given in Figure 11. The horizontal axis shows the direction of the virtual template's
bottom line, and the vertical axis shows the relative curvature error. If a point has a
value of 0.5 or more, the bending at this point is insufficient and correction heating is
required.
The color map of the curvature error is shown in Figure 11 (lower left), where red shows
that the position has insufficient bending. As shown in Figure 11 (lower right), the
heating line was located between the points with errors exceeding the threshold value.


Figure 11. Automatically generated processing plan
4.2. Experiment Using Real Curved Shell Plate in Factory
As shown in Figure 12, the proposed processing plan generation system was verified
using the point cloud data of the curved shell plate measured during the heating process
and the processing plans recorded at each step in the factory.
Heating was performed 27 times across four steps, as shown in Figure 12 (right):
(2) transverse bending, (3) longitudinal bending (upside), (5) longitudinal bending
(underside), and (7) longitudinal bending (upside). The point cloud data of the shell
before heating was measured at every step, and the processing plan was generated using
this system. The differences between the generated plan and the actual plan are then
discussed to extract the tacit knowledge in the process.


Figure 12. Manufacturing processing flow with virtual template
4.3. Knowledge Extraction
The processing plan automatically generated by the proposed system (left) and the
processing plan the worker actually used (right) are shown in Figure 13. The
arrangement of the heating lines is mostly the same, apart from slight differences at
heating lines 1, 4, 5, 20 and 21, which were also verified as appropriate heating lines
by workers using the proposed virtual template control engine.
Table 1. Rules used in RDR


The rules used to extract the tacit knowledge are shown in Table 1. In this case
study, because the curved shell plate is dish type, Rule 0 is always true. Whether
the curvature error region is large enough to perform parallel line heating is decided
by Rule 1, and Rules 2 and 3 are used to identify the tacit knowledge in specific
steps of this curved shell plate's manufacturing process.


Figure 13. Automatically generated and practical plan during the manufacturing process

In this case study, the knowledge acquired during this curved shell plate's manufacturing
process is extracted and represented in Table 2.
(1) When a generated heating line intersects a roller line, which is decided based
on the curvature distribution at the beginning of the heating process, the heating line is
usually located along the roller line in order to avoid improper torsion.
For example, correction heating was performed at heating lines 1, 4, and 5 as
two straight parallel lines (along the roller lines), as shown in Figure 14 (left),
instead of the single lines connecting the points with large errors shown in Figure 13 (left).
(2) When a nub (a relatively small region) occurs because of improper heating,
correction heating is needed at the same position from the opposite side of the curved
shell plate. For example, at the step of back heating after reversal, in order to correct
the position of a nub, line heating was performed between points which have no obvious
curvature error over a large region (heating lines 20 and 21 in Figure 13).



Table 2. Knowledge Extraction Result



Figure 14. Roller line heating and nubs
5. Conclusion and Future Work
The processing plan of a curved shell plate, which used to be designed solely by
visual inspection with wooden templates, was automatically generated by the
proposed system. The generated plan has almost the same heating line arrangement as
the actual plan, with only slight differences. Interviews about these slight differences
were conducted and structured using Ripple-Down Rules, and the tacit knowledge in
different situations during the manufacturing process was extracted and represented.
As future work, experiments on other types of curved shell plates will be carried
out, and a knowledge database is to be constructed. Based on a complete knowledge
database, the automatic processing plan generation flow proposed in this paper can be
expected to be optimized, and the design of the processing plan could be standardized.
6. Acknowledgement
This manuscript is an output of the Joint Study supported by NIPPON KAIJI KYOKAI
(ClassNK). The authors would like to thank UNICUS Co., Ltd. and FARO Japan, Inc.
for the use of their large point cloud processing system, Pupulpit.
References
[1] K. Hiekata, H. Yamato, Y. Oida, M. Enomoto, Y. Furukawa: Development and Case Studies of Accuracy
Evaluation System for Curved Shell Plates by Laser Scanner, Journal of Ship Production and Design,
vol. 27, (2), 2011, pp. 84-90.
[2] E. Kalogerakis, D. Nowrouzezahrai, P. Simari, K. Singh: Extracting lines of curvature from noisy
point clouds, Computer-Aided Design, vol. 41, 2009.
[3] E. Catmull, J. Clark: Recursively generated B-spline surfaces on arbitrary topological meshes,
Computer-Aided Design, 1978.
[4] H. Okuda, M. Hashimoto, S. Kitaaki, S. Kaneko: Fast and High-precision 3-D Registration Algorithm
using Hierarchical M-ICP, The Institute of Electronics, Information and Communication Engineers,
Technical Report of IEICE, PRMU2003-54, pp. 1-8, 2004.
[5] B.R. Gaines, P. Compton: Induction of Ripple-Down Rules Applied to Modeling Large
Databases, Journal of Intelligent Information Systems, 5, 211-228 (1995).
Global Logistic Management for Overseas
Production Using a Bulk Purchase 4PL
Model
Amy J.C. TRAPPEY a,1, Charles V. TRAPPEY b, Ai-Che CHANG c, W.T. LEE d and Hsueh-Yi CHO e
a Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan
b Department of Management Science, National Chiao Tung University, Hsinchu, Taiwan
c Advanced Research Institute, Institute for Information Industry, Taipei, Taiwan
d Technology Center for Service Industries, Industrial Technology Research Institute, Chutung, Taiwan
e Institute of Industrial Engineering and Management, National Taipei University of Technology, Taipei, Taiwan
Abstract. In order to improve the quality of the components supplied by key
suppliers and deliver them to the final assembly sites overseas efficiently and on
time, a new logistics business model and assessment analysis are proposed using
services provided by fourth-party logistics (4PL) companies. In this research, a
Petri Net process modeling approach is adopted to construct the model and
identify the bottlenecks in the current as-is (existing) logistic processes.
With the proposed e-logistics information system and its to-be process
improvement, total operating time and total operating costs can be reduced by
41% and 22% respectively. The global logistics information platform is built to
enhance the logistics management efficiency of manufacturers with multi-national
operations and improve their global competitiveness.
Keywords. e-logistics information system, fourth-party logistics (4PL), logistics
service provider (LSP)
1. Introduction
In order to increase and sustain its competitiveness in logistics management, Taiwan has
begun to link global production and sales distributors. Since China is geographically
close to Taiwan, most Taiwanese manufacturers have migrated production to China, yet
most high value-added components required in final product manufacturing are still
made in Taiwan. However, the final assembly, the quality of the components, and
the delivery dates are not easy to control, affecting production scheduling and product
yield and decreasing profits.

1 Corresponding Author: trappey@ie.nthu.edu.tw
To solve the above problems, many enterprises build collaborative manufacturing
networks across multinational regions to reduce production costs and gain access to
new and often unfamiliar markets. Developing Special Economic and Trade Zones
(SETZ) is one useful approach to provide reciprocal and convenient logistics services
for multinational corporations [1]. Optimizing operative efficacy and supporting the
logistics of final product manufacturing is the desired goal. The focus of this research
is the cross-strait logistic process between Taiwan and China, in which components
are produced or provided in Taiwan, delivered to factories in China, and then
assembled into final products. Many companies migrate to developing countries to
take advantage of cheaper land, lower labor costs, and favorable tax incentives [2].
This study proposes a feasible logistics operation model for manufacturers and
investigates the service niche of domestic logistic practitioners within this operation
model to further develop commercial opportunities.
The paper is organized as follows. Section 2 reviews and discusses the background
literature. The methodology is described in Section 3. Section 4 presents the case
implementation, including as-is and to-be models, the logistic system platform and the
comparison of results. Finally, the conclusions are provided in Section 5.
2. Literature Review
This section reviews the fundamental concepts and related research in the definition of
logistics and global logistics management.
2.1 Logistics and Evolution of Logistic Services
The Council of Logistics Management (CLM) in 1991 defined logistics as the process
of planning, implementing and controlling the efficient, cost-effective flow and storage
of raw materials, in-process inventory, finished goods and related information from
point-of-origin to point-of-consumption for the purpose of conforming to customer
requirements. Therefore, materials, commercial processes, accounts payable, and
information flow are included in logistics. Coleman defined logistics as an overall
process that starts with ordering, and then involves other processes such as production,
inventory, delivery, and the integration of other services [3]. The prevailing belief was
that back-office management support should be the focus for satisfying customers.
Considering the evolution of logistics services, Su categorized logistics
management into five types: first party logistics (1PL), second party logistics
(2PL), third party logistics (3PL), fourth party logistics (4PL), and fifth party logistics
(5PL) [4]. 1PL is the seller in the business transaction, who provides the logistic services
for producers or suppliers. 2PL differs from 1PL in that the second party refers to the
buyer in the business transaction, who provides clients with traditional storage and
transportation management. 3PL provides the most professional logistic services in the
supply chain, so 3PL is also called logistics outsourcing. 4PL offers comprehensive
solutions for members of the supply chain, linking one or more 3PL companies with
management consultants, technology consultants, and financial service companies (as
proposed and registered by the globally noted management consulting company
Accenture in 1996 [5]). 5PL provides e-services and information services for the supply
chain and integrates all the members of the supply chain, including sellers, buyers, 3PLs,
and 4PLs. According to research by Morgan Stanley, the progression from 1PL to 5PL
indicates an integration level running from low to high [6].
Accenture further defined the 4PL model, proposing four required factors for
4PL: an architect and integrator, an intelligence control room, a supply-chain
intermediary, and resource providers [7]. Accenture emphasizes the relationship with
customers and a long-term relationship with the supply chain management system to
enhance risk-and-reward sharing with the customers [8]. 4PL, as the mediator, provides
comprehensive logistic services and integrated resource planning between 3PLs and
customers [9].
In the current logistics industry, information systems play a crucial role in the
success of logistics operations. A coordinating platform for logistic information
processing can analyze, program, and coordinate all operations to enhance information
sharing across the supply chain. Some researchers suggest that using logistics
information systems (LIS) not only enhances information sharing, but also helps to better
control logistic activities in the supply chain, enhance flexibility, improve efficiency,
reduce costs, and shorten delivery times [1, 10, 11].
2.2 Global Logistics Management
Global logistics management (GLM) evolved from logistics management [12]. Since
1980, global brand competition has been increasing and product life cycles are
becoming shorter, with global logistics playing a critical role in supporting these changes.
In order to better allocate the global resources of an enterprise, GLM applies a
consumer-oriented approach and is open to flexibly adjusting business operation flows
and processes [13]. The development of global logistics requires that logistic
providers build hubs at strategic positions around the world. This globally distributed
network provides better access to the customer with rapid service delivery while
reducing stocks and even achieving the goal of zero stock [14].
The two major problems encountered by transnational logistic service providers are
as follows. First, a single logistic provider may be unable to provide all the services.
Second, the performance of logistic service providers may not meet expectations,
which impacts the service delivery image, brand, and reputation. Global logistics is a
comprehensive service composed of multiple services provided by different service
providers. Any delay or improper handling of the service by any member may cause
losses that further affect follow-up service outcomes and the brand equity of the
member firms [15]. Due to the fierce competition among logistic suppliers, Li and Lin
proposed several dimensions to measure and increase a GLM's competitiveness,
including IT infrastructure capacity, cross-organizational resource integration,
manufacturing flexibility, information sharing, and asset specificity [16]. Improving
the relationships among the supply chain network members also increases the
efficiency of GLM.
3. Methodology
This study applies Petri Net modeling to construct a model depicting the as-is logistic
service model [1, 17, 18], search for bottlenecks, and then experiment with changes in
processes to derive improved to-be processes. Petri Nets were proposed by the German
mathematician Carl Adam Petri in 1962 and offer two features: (1) a graphic
notation to describe and visualize the operations of a system, such as sequence,
synchronization, and conflict, and (2) the support of mathematical theory to seek
satisficing (not always optimal but better) solutions. The Petri Net construction rules
have a solid mathematical foundation verified by well accepted and widely published
algorithms. Therefore, Petri Nets are used to model the structure and operation
of the system and also to evaluate and adjust the system. Petri Nets are most frequently
used for modeling discrete event dynamic systems (DEDS), which emphasize the
description of causal relationships among events. Petri Nets are also commonly applied
to study flexible manufacturing systems, logistics systems, electric systems, and
computer systems. The resulting model is used to improve scheduling and control, and
to conduct performance evaluation of new process configurations.
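
For readers unfamiliar with the notation, a token-firing Petri net can be simulated in a few lines. The following is a minimal sketch with invented place and transition names; it is not the INCOME 4 SUITE model itself:

```python
# Minimal Petri net: places hold token counts; a transition is enabled when
# every input place holds at least one token, and firing moves tokens downstream.
marking = {"order_received": 1, "customs_cleared": 0, "shipped": 0}
transitions = [
    {"name": "declare", "inputs": ["order_received"], "outputs": ["customs_cleared"]},
    {"name": "ship",    "inputs": ["customs_cleared"], "outputs": ["shipped"]},
]

def fire_enabled(marking, transitions):
    """Fire the first enabled transition; return its name, or None on deadlock."""
    for t in transitions:
        if all(marking[p] > 0 for p in t["inputs"]):
            for p in t["inputs"]:
                marking[p] -= 1
            for p in t["outputs"]:
                marking[p] += 1
            return t["name"]
    return None

while (fired := fire_enabled(marking, transitions)):
    print(fired, marking)
# declare {'order_received': 0, 'customs_cleared': 1, 'shipped': 0}
# ship {'order_received': 0, 'customs_cleared': 0, 'shipped': 1}
```

Timed and stochastic extensions of this idea are what allow a simulation tool to attach operating times and costs to each transition.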
The model construction tool uses systematic quantification to express the as-is
logistics models and processes. Data collected from the case company was used as the
reference for the system simulation, and the benefits of the to-be model were evaluated.
The study uses the Process Designer of the INCOME 4 SUITE, which provides methods,
tools and services for modeling, simulating and implementing business processes,
to improve the logistics management process. This software was developed by the
German software company PROMATIS [19].
4. Case Study
This section presents the case study, focusing on 3C products (computer, communication
and consumer products) in the global supply chain. The preliminary case study and its bulk
purchase logistic model were first reported in [20]. The research shows how methodical
data collection and diagnosis of the current as-is problems are carried out for
the 3C industry. The goal is to identify the root causes of the problems, derive a to-be
global logistics model solution, build the global logistic system platform, and
demonstrate the improvements in the new logistic operations. INCOME 4 SUITE is
used to organize, archive, and analyze the collected data, simulate the processes, and
benchmark the as-is and to-be models.
4.1 Case Description and Current (As-Is) Model
This research uses data from Company A, a global manufacturer and marketer of
digital image processing devices established in 1991. Over 80% of the company's
business focuses on the development and production of image components for 3C
products, particularly a wide range of scanners. Company A has four strategic global
business units accepting orders from Taiwan, the USA, Germany and Shanghai. The
company has complete distribution networks in America, Europe and China. All
business functions, such as order fulfillment and supply chain management, are
conducted through the Taiwan headquarters, excluding the service functions of
accepting orders and after-sales maintenance.
Currently, Taiwanese 3C manufacturers control the volume of key components and
the frequency of shipments from hubs to manufacturing plants for final assembly. Due
to information non-transparency, these manufacturers and their logistics service
providers (LSPs) can only collect partial shipping status and updates. The
bottlenecks of the existing global logistics model are illustrated using INCOME 4
SUITE in Figure 1 [20] and are described as follows:
- Unable to reach economies of scale with individual small order quantities.
- No integrated information system is used.
- Difficult to get real-time shipping status, since several LSPs are used and
shipments are at different stages of processing.
- Slow customs clearance causes delivery delays.
The process design of the as-is model is depicted in Figure 2. In the as-is model,
the buyers of the company contact suppliers when the central factories of the 3C and
peripheral industry place large demands on the component manufacturers. Multiple
buyers communicate with different suppliers through fax or e-mail and provide
purchase lists or request price quotations from the suppliers. The buyers select the
suppliers based on their quotations and then fax or e-mail the orders. The supplier
evaluates the content of the orders and relays the relevant information to the central
factory. The central factory then commissions the 3PL to deliver, post customs
declarations, and ship the components or semi-finished products to the customers.
4.2 To-Be Global Logistics Models for the 3C Supply Chain
Improving and eliminating the problems that Taiwanese 3C and peripheral companies
face requires reengineering of the global logistic business processes. Thus, bulk purchase,
fourth party logistic services (4PL), and the use of authorized economic operator
(AEO) certifications are identified as the three key strategies for improvement. Bulk
purchase requires aggregating 3C manufacturers with the same key component
demands to place bulk orders with suppliers for better unit prices and delivery schedules.
4PL service providers are authorized to integrate order information, negotiate ordering
prices and delivery schedules with suppliers, and self-manage shipping processes,
including truck and container loading, customs brokerage services, and air/ocean/land
freight, until the shipments arrive at the overseas manufacturing plants. Furthermore,
Taiwan Customs gives preferential treatment to enterprises with AEO certification, and
security-accredited AEOs receive speedy customs clearance for cross-border
transportation of cargo.
Bulk purchases made by 4PL service providers are emphasized in the to-be models.
The target customers are 3C companies with the same key component demands. With
the same key component demands, 3C manufacturers are able to reach economies of
scale, and bulk purchase can be used to resolve small order quantity issues. Collective
3C enterprises, suppliers, and LSPs can log into the global information platform built
by 4PL service providers and share real-time delivery information to avoid information
inconsistency and non-transparency. Government assistance is essential for successful
to-be models. Well-designed logistic centers combining ocean freight, air freight and
tax preferences will attract overseas investors and encourage Taiwanese LSPs to provide
value-added services. Training programs developed by the government strengthen 4PL
service involvement, which supports the formation of alliances between Taiwanese LSPs
and overseas LSPs for agile global logistics. AEO certification is promoted by the
Taiwan government for speedy customs processing and supply chain security. The
to-be models and the steps taken for implementation are shown in Figure 3.
Figure 1. The existing as-is global logistics model [20]
Figure 2. The process design of the as-is logistics model
Figure 3. The process design of the to-be logistics model
4.3 Global Logistic System Platform
According to the process design of the to-be logistics model, the essential modules and
functions are proposed in this sub-section. The major users include suppliers, the
central factory, 4PLs, and overseas customers. Therefore, corresponding forms for
each entity must be established to record their respective information and contacts. For
the security and efficiency of system operation, a user information table is also
maintained, and users' browse and download authority is controlled to avoid
over-transparency of competitive business information. The main function and module
of the system is designed to speed up the process of customs declaration. Hence, the
documents required for customs declaration must be entered into the database.
Detailed shipping information must also be included, such as the container number,
flight number, and the time and distance of the shipping (for the system tracking
function). Figure 4 depicts the functions and modules of the proposed logistic
information system. The system includes purchase management, inventory
management, customs management, tracking, sales management and financial
management. The integrated information system is a web-based supply chain platform
that can be used to improve management efficiency for the corporation. Critical
contributions are implementing an information sharing mechanism in the supply chain
and speeding up problem feedback, resolution, and overall processing time.
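
As a hedged sketch, a shipment record stored by such a platform for the tracking function might look as follows (the field names are assumptions drawn from the description above, not the actual schema):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ShipmentRecord:
    """One tracked shipment on the global logistic information platform."""
    purchase_order: str           # links the shipment to a bulk purchase order
    container_number: str         # for ocean freight tracking
    flight_number: Optional[str]  # set only for air freight legs
    declared_at: datetime         # customs declaration timestamp
    eta: datetime                 # estimated arrival at the overseas plant
    status: str                   # e.g. "in transit", "customs hold", "delivered"

record = ShipmentRecord(
    purchase_order="PO-2013-0001", container_number="TEMU1234567",
    flight_number=None, declared_at=datetime(2013, 5, 1, 9, 30),
    eta=datetime(2013, 5, 3, 18, 0), status="in transit")
```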

Figure 4. Modules and functions of the logistic information system platform
4.4 Comparison of As-Is and To-Be Model
The model has been verified using the INCOME 4 SUITE. The data were provided by
the LSPs [21], information industry companies [22, 23] and the Industrial Technology
Research Institute. The evaluation range, limitations, and data are as follows:
- Evaluation range: a cross-strait express model is used to deliver parts and
semi-finished products to the manufacturing factories in Mainland China.
- Hypothesis and limitation: fixed costs are not discussed, and the stocks of
components in Taiwan and China are assumed to be sufficient.
Based on the evaluation range and the cross-strait logistics model mentioned above,
the study categorizes the logistic business process into four types: order, inventory,
customs declaration/clearance, and shipping. From the viewpoint of the improvement
method, order and inventory belong to the e-system, whereas customs declaration,
clearance, and shipping belong to process improvement. The simulation analysis of
cost and time is shown in Table 1.
According to the analytical results, the operation time and cost demonstrate a
satisficing decrease. The total operation time dropped from 338 hours to 198 hours,
an approximately 41% decrease. The total operation cost dropped from TWD
$19,445 to TWD $15,173, a 22% decrease.
Table 1. Quantified benefit results

Category                Quantified index                            As-Is     To-Be     Benefits
Benefit of e-system     Order process time (a)                      69.68     52.16     25.14%
                        Order process cost (b)                      $2,214    $1,950    12.00%
                        Inventory process time                      34.7      21.97     36.69%
                        Inventory process cost                      $2,961    $2,348    20.70%
Benefit of process      Custom declaration/clearance process time   108.01    25.2      76.67%
improvement             Custom declaration/clearance process cost   $4,920    $2,525    48.68%
                        Shipping process time                       125.5     98.5      21.51%
                        Shipping process cost                       $9,350    $8,350    10.70%
Total benefits          Total operating time                        337.89    197.83    41.45%
                        Total operating costs                       $19,445   $15,173   21.97%

a. Unit of time: hour; b. Unit of cost: TWD
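
The benefit percentages in Table 1 follow directly from the as-is and to-be figures; a quick arithmetic check on the totals (values copied from the table):

```python
def benefit(as_is, to_be):
    """Relative improvement of the to-be value over the as-is value."""
    return (as_is - to_be) / as_is * 100

print(f"{benefit(337.89, 197.83):.2f}%")  # total operating time  -> 41.45%
print(f"{benefit(19445, 15173):.2f}%")    # total operating cost  -> 21.97%
```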
The proposed to-be model improves production lead-time, increases the inventory
turnover rate, provides better control of purchase and transportation costs, and
maintains consistent quality; these are critical means of improving the sustainable
competitive advantages of 3C manufacturers. The comparison of business processes
between the as-is and to-be models is shown in Table 2, and the four benefits of the
to-be models are explained as follows:
- Shorten the lead time and increase the inventory turnover rate using the improved
global logistic service model.
- Reduce procurement and transportation costs by using bulk purchase.
- Maintain consistent quality of final products by providing high quality components.
- Maintain competitive advantages by manufacturing critical components
domestically and assembling final products overseas.

Table 2. Comparisons of operating processes between as-is and to-be models

Material Purchase
  As-Is: Individual 3C manufacturers place orders with suppliers regardless of economies of scale.
  To-Be: The 4PL service provider integrates orders and places bulk orders with suppliers to reach economies of scale.
Shipping
  As-Is: Individual 3C manufacturers centralize key parts in Taiwan and make overseas shipments according to order demand.
  To-Be: 4PL service providers centralize key parts in Taiwan and arrange scheduled overseas freight shipments.
Information Integration
  As-Is: 3C manufacturers, suppliers, and LSPs communicate with each other via email, phone or fax and update order status manually.
  To-Be: All parties involved gain access to a global information platform which provides real-time information.
The Role of LSPs
  As-Is: LSPs are only responsible for logistic services.
  To-Be: 4PL service providers are authorized to negotiate prices, procure key parts, and manage shipping processes.
5. Conclusion
In order to maintain the consistent quality of 3C final assemblies and the competitive
advantages of 3C manufacturers, global logistics management reengineering is
conducted to ensure that critical component production remains in Taiwan while final
assembly takes place overseas. The improved global logistics model is constructed to
reduce transportation lead times, increase the inventory turnover rate, and solve delivery
problems of small-quantity freight. The benefits of bulk purchase of 3C components
and semi-finished products are devised to accelerate global logistics management
reengineering for domestic component supplies of overseas 3C product final
assemblies. Bulk purchase generates economies of scale that contribute to production
cost control as well. In addition, 3C companies are better able to maintain consistent
quality of final assemblies by executing the global logistic business and operation
models. Taiwanese 3C companies are encouraged to manufacture critical components
domestically, since the supply chain becomes more efficient. In summary, bulk
purchase contributes to purchase and transportation cost control, and implementing the
to-be models not only improves production lead-time and inventory turnover rate but
also expedites global collaborative manufacturing.
References
[1] C.V. Trappey, A.J.C. Trappey, G.Y.P. Lin, W.T. Lee and T.H. Yang, SETZ logistics models and system
framework for manufacturing and exporting large engineering assets, Journal of Systems and Software
(2012), online published: http://dx.doi.org/10.1016/j.jss.2012.09.032
[2] B.M. Beamon, Supply chain design and analysis: models and methods, International Journal of
Production Economics 55 (1998), 281-294.
[3] P.V. Coleman, B2B e-commerce e-logistics: The back office of the new economy, Banc of America
Securities (2000), 4.
[4] S.L. Su, Logistics and Logistics Management: Concepts, Functions, Integration, Hwatai, Taipei, 2007.
[5] Accenture Inc., Accessed: 18/9/2012. [Online] Available: http://www.accenture.com/
[6] Morgan Stanley Research, The Logistics Players: From 1PL to 5PL, China Logistics, 2001. Accessed:
24/3/2013. [Online] Available: http://doc.mbalib.com/view/0fd88cbdf1dfc7f6d5c67c8144b84651.html
[7] J. Bumstead and K. Cannons, From 4PL to managed supply-chain operations, Focus Magazine 4 (2002).
[8] G. Buyukozkan, O. Feyzioglu and M.S. Ersoy, Evaluation of 4PL operating models: A decision making
approach based on 2-additive Choquet integral, International Journal of Production Economics 121
(2009), 112-120.
[9] S.M. Rutner, B.J. Gibson and S.R. Williams, The impacts of the integrated logistics systems on
electronic commerce and enterprise resource planning systems, Transportation Research Part E:
Logistics and Transportation Review 32 (2003), 83-93.
[10] E.W.T. Ngai, T.C.E. Cheng, S. Au and K.H. Lai, Mobile commerce integrated with RFID technology in
a container depot, Decision Support Systems 43 (2007), 62-76.
[11] C.V. Trappey, G.Y.P. Lin, A.J.C. Trappey, C.S. Liu and W.T. Lee, Deriving industrial logistics hub
reference models for manufacturing based economies, Expert Systems with Applications 38 (2011),
1223-1232.
[12] M.G. Harvey and R.G. Richey, Global supply chain management: The selection of globally competent
managers, Journal of International Management 72 (2001), 105-128.
[13] D. Locke, Global Supply Management: A Guide to International Purchasing, Irwin Professional,
Chicago, 1996.
[14] J.C. Jan and L.-H. Lin, The Current Global Logistics Management Models of Taiwan OEM/ODM
Firms, in Department of Commerce for Ministry of Economic Affairs: Proceedings of the 1999
International Logistics Conference, by the Tonhwa university (Hwalan: the Tonhwa university).
[15] N.K. Srivastava, N. Viswanadham and S. Kameshwaran, Procurement of global logistics services using
combinatorial auctions, 4th IEEE Conference on Automation Science and Engineering (2008), August
23-26, Arlington, USA.
[16] P.C. Li and B.W. Lin, Building global logistics competence with Chinese OEM suppliers, Technology
in Society 28 (2006), 333-348.
[17] S.J. Lee and Y.K. Park, OPNets: An object-oriented high-level Petri net model for real-time system
modeling, Journal of Systems and Software 20 (1993), 69-86.
[18] F. Ahmad, H. Huang and X.L. Wang, Petri net modeling and deadlock analysis of parallel
manufacturing processes with shared-resources, Journal of Systems and Software 3 (2010), 675-688.
[19] PROMATIS Software GmbH, Accessed: 27/6/2012. [Online] Available: http://www.promatis.com
[20] C.V. Trappey, A.J.C. Trappey, H.Y. Cho, E. Chen and W.T. Lee, Bulk purchase logistic management
for part supplies of overseas 3C product final assemblies, Western Decision Sciences Institute (WDSI
2012), April 3-6, Hawaii, USA.
[21] Air Sea Group, Cross-strait quick service description, Conference on Purchasing and Supply
Management (2011).
[22] Prolink Solutions Co., Ltd, Accessed: 27/6/2012. [Online] Available: http://www.pllink.com/
[23] eChannelOpen Inc., Accessed: 27/6/2012. [Online] Available: http://www.echannelopen.com/
Constructing a Hierarchical Learning Cost
Curve for Photovoltaic System
Amy J.C. Trappey a,1, Charles V. Trappey b, Penny H.Y. Liu a, Lee-Cheng Lin c and Jerry J.R. Ou d
a Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan
b Department of Management Science, National Chiao Tung University, Hsinchu, Taiwan
c Green Energy and Environment Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
d Department of Business Administration, Southern Taiwan University, Tainan, Taiwan
Abstract. Developing renewable energy is an effective approach to reduce global
warming and satisfy increasing energy demands. Among all renewable
energy sources, solar power has the greatest potential because it is derived
naturally from the sun. Photovoltaic methods convert solar radiation into direct
current electricity via semiconductors. Although the energy coming from the sun is
abundant, the cost of generating electricity using photovoltaic methods remains
higher than conventional means of generating electricity. Often, expensive
manufacturing and installation costs reduce the benefits of photovoltaic systems.
In order to encourage the use of photovoltaic systems, several countries have
used a feed-in tariff, calculated based on the installation costs, as an incentive.
Considering this important incentive, a learning curve model is constructed to
model the installation costs. The cost and benefit model helps evaluate profitability
as well as determine feasible buyback prices and future installation plans.
Since many factors influence the costs, this study uses Taiwan as a case study
and develops a hierarchical installation cost learning curve model for
photovoltaic facility construction. Finally, the progression rate, which represents
the cost trend, is calculated using the case data.
Keywords. Learning Curve, Photovoltaic Electricity Generation, Hierarchical
Linear Model
1. Introduction
Global warming and diminishing sources of oil are two critical problems that remain to
be solved for continued economic growth and development. According to the
International Energy Association, the projected primary demand for world energy will
increase 45% between the years 2006 and 2030, an average annual rate of growth of
1.6%. Crude oil remains the dominant fuel in the primary energy mix [1], which leads
to a rapid growth in carbon dioxide (CO2) emissions. CO2 concentrations had
increased to over 390 ppm, or 39% above preindustrial levels, by the end of 2010 [2].
Combined with other sources for energy generation such as coal, global climate change

1
Corresponding Author: trappey@ie.nthu.edu.tw

is causing a greenhouse effect that affects the stability of the earth's temperature. Since
1978, the annual average extent of Arctic sea ice has shrunk by 2.7% per decade [3].
A reasonable response to global warming is to actively develop renewable energy and
replace the outdated energy sources that are linked to the problem. Renewable energy
is generated from natural resources such as sunlight, wind, ocean tides, and geothermal
heat. The primary advantages of renewable energy are that it has no negative effects on
the environment and is non-depletable.
Natural disasters play a key role in advancing renewable energy development. For
example, in March 2011, a 9.0 magnitude undersea earthquake struck Japan, resulting
in a tsunami that brought destruction along the Pacific coastline of Japan's northern
islands. Conventional power was affected but, most importantly, nuclear power plants
were severely impaired after the earthquake. Approximately 4.4 million households in
northeastern Japan were affected and left without electricity [4] and, due to the
shutdown and damage of the nuclear power facilities, over 200,000 people were
evacuated from the area [5]. As a consequence of the risks and damages sustained, the
Japanese government will likely eliminate all nuclear power over the next two decades
as part of a new long-term energy plan and double the use of renewable energy from
10% to 20% [6]. An advantage of using renewable energy is the increased stability of
the energy supply through decentralization. In July 2012, India experienced large
blackouts twice in two days, leaving more than 300 million people without electricity
for several hours [7]. If India develops renewable energy systems to supply sufficient
energy, the impact of such blackouts will be minimized.
Many countries are formulating energy policies to stimulate the development of
renewable energy. According to the Renewable Energy Policy Network for the 21st
Century [8], total global investment in renewable energy was $22 billion US dollars
in 2004 and reached $211 billion US dollars in 2010, an average annual increase of
47.5%. Two of the most popular policies used by governments to stimulate the
deployment of renewable energy are Renewable Portfolio Standards (RPS) and
Feed-in Tariffs (FIT). RPS requires electricity supply companies to provide a specified
percentage of their electricity from renewable energy sources, which are usually
purchased from renewable energy generators. FIT offers producers long-term contracts,
lasting twenty years on average, with guaranteed prices. Past studies [9][10][11][12]
support that the FIT policy is increasingly considered the most effective policy for
stimulating rapid development of renewable energy sources; it is currently implemented
in 63 jurisdictions worldwide. Taiwan passed the Statute for Renewable Energy
Development in 2009 [13] and applied the FIT as the primary incentive policy to
promote investments in renewable energy.
In August 2010, the New Energy Development Impetus Committee of the Taiwan
Executive Yuan announced goals for renewable energy. By the year 2030, the
cumulative installed capacity of hydropower, onshore wind, offshore wind, photovoltaic
(PV), geothermal, biogas, waste and ocean energy is required to reach 2502 MW, 2000
MW, 2500 MW, 3000 MW, 200 MW, 31 MW, 1369 MW and 600 MW respectively.
Photovoltaic systems are targeted as the largest source of renewable energy and will
receive the highest proportion of renewable energy subsidies from the electric power
companies.
The use of a feed-in tariff price for photovoltaic power has a significant (but
unknown) impact on the government. The willingness to invest in PV decreases if the
FIT price of PV is set too low, while the electric power companies bear a greater
financial load from PV subsidies if the FIT price is set too high. Thus, the change in
PV cost requires modeling and analysis when setting the PV pricing scale. In order to
estimate whether the FIT price of wind power was set reasonably in Germany,
Ibenholt [14] utilized a learning curve analysis and measured the change in wind power
cost as the cumulative installed capacity of wind power increased.
Most research literature uses a single learning curve model to describe the
relationship between the cost and the cumulative installed capacity of renewable energy.
There are still very few studies introducing noise factors that influence the costs in
learning curve models. Using the characteristics and the renewable energy data for PV
power generation, this paper builds a hierarchical PV cost learning curve model that
includes the factors with the greatest impact on the relationship between the installed
costs and the cumulative installed capacity. The results provide useful information for
policy makers in designing optimal PV energy measures.
2. Literature Review
A learning curve offers a means of analyzing past cost development that has been
adapted to analyze future cost developments [15]. First reported by Wright [16], a
mathematical model was proposed to describe the labor hours needed to produce one
unit of product. The research found a decrease at a constant rate over time, which is
called the learning effect. Learning curve models are a useful approach that has been
widely applied to analyze the cost trends of PV systems, electricity generated by PV,
and PV modules. Poponi [17] uses experience curves to predict the different levels of
cumulative world PV shipments required to reach the calculated break-even price of
PV systems, assuming different trends in the relationship between price and the
increase in cumulative shipments. Shum and Watanabe [18] study the non-module
balance-of-system costs for grid-connected small PV systems using experience curves.
Whereas most research uses a single learning curve to analyze the cost trends of
renewable energy, Nemet [19] notes that the learning curve only weakly explains the
change in important factors that might influence costs. Yu et al. [20] also note that
single factor learning curves overlook several factors and uncertainties. Thus, this
paper combines the basic learning curve with a hierarchical linear model to expand
previous research to include multiple curves.
The Hierarchical Linear Model (HLM) is a statistical model with parameters that
vary at more than one level. First developed by Lindley and Smith [21], the HLM
model is generally classified as a full model, an intercept model, or a coefficient
model (Table 1).
Table 1. HLM model types

Type                Level 1                                                     Level 2
Full model          $Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + \varepsilon_{ij}$   $\beta_{0j} = \gamma_{00} + \gamma_{01} Z_j + u_{0j}$, $\beta_{1j} = \gamma_{10} + \gamma_{11} Z_j + u_{1j}$
Intercept model     (same Level 1)                                              $\beta_{0j} = \gamma_{00} + \gamma_{01} Z_j + u_{0j}$
Coefficient model   (same Level 1)                                              $\beta_{1j} = \gamma_{10} + \gamma_{11} Z_j + u_{1j}$
Level 1 is the basic structure of HLM in these three models, denoting the
relation between the dependent variable $Y_{ij}$ and the explanatory variable $X_{ij}$. The
difference between the three models is seen in level 2. The intercept ($\beta_{0j}$) and the
coefficient ($\beta_{1j}$) in level 1 of the full model are both influenced by $Z_j$, while only
the intercept or only the coefficient is influenced in the intercept model or the
coefficient model, respectively. HLM is appropriate for research where the data exist
in a nested structure. It has been applied to education studies due to the classical nested
structure of the research [22]. Perry et al. [23] applied HLM and regression techniques
to explore the effects of teachers promoting student academic achievement, behavioral
adjustment, and feelings of competence. Additionally, other researchers have applied
HLM as a method to explore applications in management. The approach appears
successful, as Gentry and Martineau [24] describe HLM as an example of a multilevel
methodological approach applicable for examining change over time in the evaluation
of leadership development in teams.

3. Methodology
In past studies, researchers utilized a single learning curve to explore the relationship
between the cost and the cumulative installed capacity, without considering that some
factors may interfere with that relationship. As a result, the learning coefficient can be
biased. This paper proposes a hierarchical cost learning curve model, written as:

Level 1: $C = \beta_0 X^{\beta_1} \varepsilon$ (1)

Level 2: $\beta_1 = \gamma_0 + \gamma_1 Z_1 + \gamma_2 Z_2 + \dots + \gamma_m Z_m$ (2)

Level 1 is a typical one-factor learning curve model. The natural log-linear form of
the learning curve is common because of its simplicity and goodness of fit. Thus the
function can be rewritten as:

Level 1: $\ln C = \ln \beta_0 + \beta_1 \ln X + \ln \varepsilon$ (3)

Level 2: $\beta_1 = \gamma_0 + \gamma_1 Z_1 + \gamma_2 Z_2 + \dots + \gamma_m Z_m$ (4)

where $C$ and $X$ in level 1 represent the installation costs and the cumulative capacity,
respectively. The learning coefficient ($\beta_1$) is interfered with by the factors
($Z_1, Z_2, \dots, Z_m$) in level 2, and $\beta_0, \gamma_0, \dots, \gamma_m$ are the parameters to be
estimated. Using case study data, this paper implements the above model for the PV
system adoption described in Section 4.
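
In practice the log-linear form can be estimated as a single regression: substituting level 2 into level 1 gives $\ln C = \ln \beta_0 + \gamma_0 \ln X + \gamma_1 (Z_1 \ln X) + \dots$, i.e. an interaction term between each noise factor and $\ln X$. A minimal sketch with numpy follows; the observations are placeholders, not the Taiwan case data:

```python
import numpy as np

# Placeholder observations: cumulative capacity X, silicon price Z1, cost C.
X  = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
Z1 = np.array([4.0, 3.5, 3.0, 5.0, 4.5])
C  = np.array([30.0, 26.5, 23.4, 22.9, 20.2])

# Design matrix for ln C = ln(b0) + g0 * ln X + g1 * (Z1 * ln X).
lnX = np.log(X)
A = np.column_stack([np.ones_like(lnX), lnX, Z1 * lnX])
coef, *_ = np.linalg.lstsq(A, np.log(C), rcond=None)
ln_b0, g0, g1 = coef
print(f"intercept={ln_b0:.3f}, learning index g0={g0:.3f}, interference g1={g1:.4f}")
```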

4. Case Study
In order to construct the cost learning curve of PV systems in Taiwan, the
relationship between the cumulative capacity ($X$) and the installation costs ($C$) is
considered. Because silicon is the most important material in the PV module, the model
considers the silicon price ($Z_1$) as a noise factor. The installed capacity of PV in Taiwan
is collected from the Bureau of Energy [25]. The installation costs are sourced from a
report of the Lawrence Berkeley National Laboratory [26], and the silicon price is
obtained from iSuppli [27] and Yu et al. [28]. The result of the hierarchical learning
curve model is shown in Table 2.

Table 2. The result of the hierarchical learning curve model

Model                                   Coefficient   Standard Error   t          P Value
Intercept                               3.167         0.016            203.039    <0.05
Cumulative Capacity                     -0.186        0.012            -15.502    <0.05
Silicon Price x Cumulative Capacity     0.013         0.002            5.163      <0.05

Based on the results, the fitted function is written as Equations (5) and (6). The
coefficient -0.186 is the learning index and is used to calculate the progression rate,
which expresses the increase or decline in cost when the cumulative capacity is
doubled [14]. The calculated progression rate (PR) is shown in Equation (7).

Level 1: $\ln C = 3.167 + b_1 \ln X$ (5)

Level 2: $b_1 = (-0.186) + 0.013 Z_1$ (6)

$PR = 2^{-0.186} = 87.7\%$ (7)
A PR value of 87.7% indicates that when the cumulative capacity doubles, the
installation costs fall to 0.877 times their former level. That is, the installation costs of
PV systems in Taiwan are estimated to decrease. Additionally, the silicon price ($Z_1$)
has a significant positive effect that interferes with the relationship between the cost of
installing PV systems and the accumulated installed capacity. That is, when silicon
prices increase, the interference effect is enhanced and the cost decline slows.
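
The interference effect can be read off Equation (6) directly: the effective learning index, and hence the progression rate, shifts with the silicon price. A small sketch using the fitted coefficients:

```python
def progression_rate(z1, g0=-0.186, g1=0.013):
    """Cost multiplier per doubling of cumulative capacity (Eqs. 6 and 7)."""
    b1 = g0 + g1 * z1
    return 2 ** b1

print(f"{progression_rate(0):.3f}")  # ~0.879 with the rounded coefficient
                                     # (the paper reports 87.7%)
print(f"{progression_rate(5):.3f}")  # a higher silicon price slows the cost decline
```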
To verify the statistical fit of the hierarchical learning curve model, this study
compares the $R^2$ and the adjusted $R^2$ of the basic learning curve model and the
hierarchical model, listed in Table 3. Both the $R^2$ and the adjusted $R^2$ of the
hierarchical model are higher than those of the basic model, which indicates that the
proposed model fits the case data better and provides a better estimation. Figure 1
depicts the installation costs estimated using the hierarchical model and the basic model
as compared to the case data.

Table 3. R² and adjusted R² comparison of the hierarchical and basic learning curve models

           Hierarchical learning curve   Basic learning curve
R²         97.5%                         88.1%
Adj. R²    96.8%                         86.6%
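The adjusted R² penalizes the hierarchical model for its extra interaction term, so the gain in Table 3 is not an artifact of simply adding parameters. A small helper reproduces the Table 3 figures under the assumption of 10 yearly observations (2001-2010) and the stated predictor counts:

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 for a model with n observations and p predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# With the 10 yearly observations of the case study (2001-2010):
print(adjusted_r2(0.975, n=10, p=2))  # hierarchical model (2 predictors) -> ~0.968
print(adjusted_r2(0.881, n=10, p=1))  # basic model (1 predictor)         -> ~0.866
```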



Figure 1. Comparison of the case data with the installation costs estimated by the hierarchical and basic models (vertical axis: ln cost in ten thousand NTD/kW; horizontal axis: years 2001-2010)

5. Conclusion
To understand the installation cost of PV systems, this research proposes a hierarchical learning curve model, since a single-factor learning curve cannot fully explain the cost trends. The model improves on the single-factor learning curve, which merely studies the relationship between the cumulative capacity and the installation costs. The hierarchical model considers noise factors, which are important elements in the system and interfere with the cost computations. The case study of PV system construction in Taiwan demonstrates that the silicon price is a significant noise factor. In addition, the case study shows that the hierarchical learning curve model fits the data better than the basic model, and therefore, in this case, provides more reliable information. Based on the results, governments can set an appropriate feed-in tariff and PV investors can better plan their investments. Most importantly, the hierarchical model can be extended and applied to other renewable energy areas, such as wind power and geothermal energy, to help decision makers develop strategic plans and successfully promote the use of renewable energy.
References
[1] IEA, World Energy Outlook 2008, Paris, France: International Energy Agency, 2008. Accessed:
5/12/2012. [Online]. Available: http://www.iea.org/textbase/nppdf/free/2008/weo2008.pdf
[2] IPCC, IPCC Special Report on Renewable Energy Sources and Climate Change Mitigation, Prepared by
Working Group III of the Intergovernmental Panel on Climate Change [O. Edenhofer, R. Pichs-
Madruga, Y. Sokona, K. Seyboth, P. Matschoss, S. Kadner, T. Zwickel, P. Eickemeier, G. Hansen, S.
Schlömer, C. von Stechow (eds)]. Cambridge University Press, Cambridge, United Kingdom and New
York, NY, USA, 2011. Accessed: 5/10/2012. [Online]. Available: http://srren.ipcc-
wg3.de/report/IPCC_SRREN_Full_Report.pdf
[3] IPCC, Climate Change 2007: Synthesis Report. Contribution of Working Groups I, II and III to the
Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Core Writing Team,
R.K. Pachauri, and A. Reisinger (eds.), Cambridge University Press, 2007. Accessed: 5/10/2012.
[Online]. Available: http://www.ipcc.ch/publications_and_data/ar4/syr/en/mains1.html#1-1
[4] Inajima, T. and Okada, Y., Japanese Quake Forces Evacuation Near Nuclear Reactor; Oil Refinery Burns,
Bloomberg, 2011. Accessed: 27/8/2012. [Online] Available: http://www.bloomberg.com/news/2011-
03-11/cosmo-oil-refinery-set-on-fire-nuclear-power-reactors-shut-by-earthquake.html
[5] Sample, I., Japan's nuclear fears intensify at two Fukushima power stations, The Guardian (London),
2011. Accessed: 27/8/2012. [Online] Available: http://www.guardian.co.uk/world/2011/mar/13/japan-
nuclear-plants-fukushima-earthquake
[6] Iwata, M. and Mochizuki, T., Japan Weighs End to Nuclear Power, The Wall Street Journal, 2012.
Accessed: 27/8/2012. [Online] Available:
http://online.wsj.com/article/SB10000872396390443855804577603051383403854.html
[7] Romero, J., Lack of Rain a Leading Cause of Indian Grid Collapse, IEEE Spectrum, 2012. Accessed:
27/8/2012. [Online] Available: http://m.spectrum.ieee.org/energywise/energy/the-smarter-
grid/disappointing-monsoon-season-wreaks-havoc-with-indias-grid/
[8] REN21, Renewables 2011 Global Status Report, Paris, France: REN21 Secretariat, 2011. Accessed:
5/10/2012. [Online] Available: http://www.ren21.net/Portals/97/documents/GSR/REN21_GSR2011.pdf
[9] Klein, A., Held, A., Ragwitz, M., Resch, G., Faber, T., Evaluation of Different Feed-in Tariff Design
Options: Best Practice Paper for the International Feed-in Cooperation, Energy Economics Group &
Fraunhofer Institute Systems and Innovation Research, Germany, 2008.
[10] REN21. Renewables Global Status Report: 2009 Update, Paris, France: REN21 Secretariat, 2009.
Accessed: 5/10/2012. [Online] Available:
http://www.ren21.net/Portals/97/documents/GSR/RE_GSR_2009_Update.pdf
[11] Ernst & Young, Renewable Energy Country Attractiveness Indices Q1-Q2 2008, Accessed: 5/10/2012.
[Online]. Available:
http://www.frankhaugwitz.info/doks/general/2008_08_China_Industry_Utilities_Renewable_energy_c
ountry_attractiveness_indices_Ernst_Young.pdf
[12] Mendonca, M., Feed-in Tariffs: Accelerating the Deployment of Renewable Energy, EarthScan,
London, 2007.
[13] Legislative Yuan of Republic of China, The Status for Renewable Energy Development, The
Legislative Yuan of Republic of China, 2009. Accessed: 5/10/2012. [Online]. Available:
http://web3.moeaboe.gov.tw/ECW/RENEWABLE/content/wHandMenuFile.ashx?menu_id=774
[14] Ibenholt, K., Explaining learning curves for wind power, Energy Policy 30 (2002), 1181-1189.
[15] Neij, L., Cost development of future technologies for power generation - A study based on experience
curves and complementary bottom-up assessments, Energy Policy 36 (2008), 2200-2211.
[16] Wright, T. P., Factors affecting the cost of airplanes, Journal of the Aeronautical Sciences 3 (1936),
122-128.
[17] Poponi, D., Analysis of diffusion paths for photovoltaic technology based on experience curves, Solar
Energy 74 (2003), 331-340.
[18] Shum, K. L. and Watanabe, C., Towards a local learning (innovation) model of solar photovoltaic
deployment, Energy Policy 36 (2008), 508-521.
[19] Nemet, G.F., Beyond the learning curve: factors influencing cost reductions in photovoltaics, Energy
Policy 34 (2006), 3218-3232.
[20] Yu, C.F., van Sark, W.G.J.H.M. and Alsema, E.A., Unraveling the photovoltaic technology learning
curve by incorporation of input price changes and scale effects, Renewable and Sustainable Energy
Reviews 15 (2011), 324-337.
[21] Lindley, D. V., Smith, A. F. M., Bayes estimates for the linear model, Journal of Royal Statistical
Society. Series B (Methodology) 34 (1972), 1-41.
[22] Wen, F. H., Chiou, H. J., Methodology of multilevel modeling: The key issues and their solutions of
hierarchical linear modeling, New Asia Institute, 2011. Accessed: 5/10/2012. [Online]. Available:
http://140.136.247.242/~nutr2027/trail.pdf
[23] Perry, K. E., Donohue, K. M., Weinstein, R. S., Teaching practices and the promotion of achievement
and adjustment in first grade, Journal of School Psychology 45 (2007), 269-292.
[24] Gentry, W. A., Martineau, J. W., Hierarchical linear modeling as an example for measuring change
over time in a leadership development evaluation context, The Leadership Quarterly 21 (2010), 645-
656.
[25] Bureau of Energy, Energy statistical hand book 2010, 2011. Accessed: 5/10/2012. [Online]. Available:
http://www.moeaboe.gov.tw/English/Statistics/EnStatistics.aspx
[26] Barbose, G., Darghouth, N., Wiser, R. and Seel, J., Tracking the Sun IV: An Historical Summary of the
Installed Cost of Photovoltaics in the United States from 1998 to 2010, Lawrence Berkeley National
Laboratory, 2011. Accessed: 5/10/2012. [Online]. Available: http://eetd.lbl.gov/ea/ems/reports/lbnl-
5047e.pdf
[27] iSuppli, Polysilicon pricing peaks and supply chain immaturity and inflexibility induce significant
supply/demand imbalances in 2009, 2009. Accessed: 11/16/2012. [Online]. Available:
http://seekingalpha.com/article/105936-polysilicon-prices-head-for-steep-fall-good-news-for-solar.
[28] Yu, C.F., van Sark, W.G.J.H.M. and Alsema, E.A., Unraveling the photovoltaic technology learning
curve by incorporation of input price changes and scale effects. Renewable and Sustainable Energy
Reviews 15 (2011), 324-337.

Process Modeling for Supporting Risk
Analysis in Product Innovation Chain
Germán Urrego-Giraldo a and Gloria Lucía Giraldo G. b

a Engineering Faculty, University of Antioquia, Medellín, Colombia
b Mines Faculty, National University of Colombia, Medellín, Colombia

Abstract. The management of product innovation projects entails the adoption of a product innovation chain and of ways to develop its processes. There are many approaches for managing the processes, and these approaches may be applied in any phase of the innovation chain. That means that each phase may apply one process management approach independently of those used in other phases. Traditional and new approaches may coexist along the innovation chain and could be combined to create other approaches. The transformation of inputs into outputs, the central aspect of the process concept, involves a set of activities in terms of which the processes are modeled. Risk analysis in the innovation chain refers to process risk analysis, considering different process management approaches, such as concurrent engineering, co-creation, lean production, etc. These process management approaches are not, in general, characterized in a way that supports risk analysis. In our innovation model, process management approaches are characterized separately in terms of, on one hand, their quality attributes and, on the other hand, the specific and distinct activities of every process management approach. In this work, we characterize and conceptualize some of these approaches for supporting risk analysis in a product innovation chain.
Keywords. Process modeling, domain modeling, system functionalities, process
management, activity categories, innovation chain, risk analysis.
Introduction
In today's changing world, facing the rapid increase of knowledge and technology, market globalization and greater cultural interaction, innovation is the path that organizations must follow to achieve sustainability. The powerful strategy of developing new products must be complemented with innovation strategies, going further than the initiative of offering new products. Innovation is centered, on one hand, on the appropriation and new application of existing concepts and, on the other hand, on the creation and application of new concepts. Looking back, some antecedents of the modern innovation concept are found in the laws of imitation [1]. This author, known as the creator of the innovation diffusion curve, or S curve, states that human evolution is imitation and innovation. Another contribution to formal reasoning is the TRIZ methodology (Russian acronym for the Theory of Inventive Problem Solving) [2], which leads to systematic innovation, indicating that the evolution of technical systems may be governed by a set of rules. This theory provides a methodical treatment of problems
requiring inventive solutions. Contributions to innovation from the social sciences are found in [3] and [4], among others. In economics and management, important contributions appear in [5], [6], [7], [8] and [9], among others. The evolution of innovation models is described in [10]. Important points of this evolution are highlighted in [11], [12] and [13].
Innovation is driven by processes in a product innovation chain. The quality and success of innovation are founded on the efficiency and quality of its processes. Many process management approaches have been proposed for supporting process execution along the product innovation chain. Concurrent Engineering, Lean Production, Co-creation, etc., are process management approaches offering alternative specific activities and particular process quality attributes, aiming to obtain competitive products. Concurrent Engineering is characterized in the literature by practices such as decomposition, overlap, interaction and iteration of activities, as well as the transition from sequential phases of product development to concurrent phases. Some of these characteristics are highlighted in common definitions of Concurrent Engineering [14], [15].
Lean thinking, or lean production, is normally identified as an approach derived from the Toyota production system [16], but [17] are considered the originators of the term and of the five lean thinking principles: specify the value desired by the customer; identify the value stream for each product providing that value and challenge all of the wasted steps; make the product flow continuously; introduce pull between all steps where continuous flow is impossible; and manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls.
The application of lean thinking in service organizations is supported, among others, by [18]. Others, for example [19], point to adverse aspects, such as: increased vulnerability of lean systems to errors or resource shortages; divergence of lean systems under demand variability; potential failure to address the human dimensions of work; and implementation of lean techniques and tools without a strategic perspective.
The term Co-creation was introduced by [20] in the sense of creating new products with the customers. Aspects related to collaborative and creative work are considered by other innovation initiatives under different denominations, such as collective intelligence, collaborative knowledge, collaborative creation, etc. In our work, co-creation convokes interested internal and external agents in any phase of the product innovation chain, adopting and using new concepts in a collaborative or cooperative way.
Risk management in the product innovation chain is associated with the central elements and characteristics of the process concept. We characterize a process in two ways: by its specific activities and by its quality features. The quality features of the process are the characteristics that the management approach seeks to assure in the execution of activities and the use of resources, whilst the characterization with specific activities shows the actual execution of activities and use of resources. Both characterizations cover the whole process.
Risks in co-creation processes may be associated with the quality features sought by this process management approach for the process activities, or with the activities of agents, executed in a collaborative space, elaborating new concepts based on existing or new knowledge. Risk is the probability of having an adverse consequence from the occurrence of an event. Vulnerability is a measure of the propensity to suffer an adverse consequence, and constitutes a level of risk. The concepts of risk and vulnerability are defined in the social and environmental sciences in [21] and [22], among others. These concepts are adapted and complemented here in order to manage risks in an agent innovation chain.
Co-creation may be applied in all or part of the product innovation chain. The challenge of putting products and/or services in the hands of consumers and/or users involves risks in the execution of processes in the innovation chain. Methods oriented to managing innovative projects do not, in general, consider the risks associated with the collaborative activities that aim at an innovative co-created product.
Our ongoing research project seeks to contribute to the management of risks in the innovation chain. In this article we present a risk analysis model, which may be instantiated with the characteristic activities, or with the quality features, of any process management approach. Instantiation with quality features is illustrated with the Concurrent Engineering approach, and instantiation with specific activities is illustrated with the Co-creation approach.
After this Introduction, Section 1 describes the adopted product innovation chain. Section 2 presents the characterization of some process management approaches. Section 3 addresses the configuration of the risk analysis model for different process management approaches. Problem identification according to the characteristics of process management approaches is treated in Section 4. Conclusions and future work appear in Section 5. Section 6 contains the acknowledgements to the research supporters. The references constitute the final section.
1. Adopted Product Innovation Chain
The adoption and creation of process management approaches and the study of risk in product innovation are referred, in our research project, to the product innovation chain. The comprehension and analysis of process management approaches and of their associated vulnerabilities and risks are favored by the disaggregation of the phases of the innovation chain. The extended product innovation chain contains eighteen phases, which could be divided into processes or regrouped into bigger phases, and ordered and overlapped in different ways. These treatments, and the way agents interact, give rise to different innovation models and process management approaches (co-creation, lean production, concurrent engineering, etc.). The adopted eighteen phases, depicted in Figure 1, are: a) opportunity identification, b) idea collection, c) idea elaboration, d) selection of elaborated ideas, e) proposal for developing elaborated ideas (including product definition), f) knowledge availability and research, g) obtaining product requirements, h) product conceptual modeling, i) product design, j) product construction, k) product testing, l) product tuning, m) product promotion, n) product distribution, o) post-distribution services, p) product observation and evaluation, q) product impacts and consumer satisfaction, and r) product elimination.

Figure 1. Product innovation phases

Innovation aims to put in the hands of consumers or users a new product or process, or to introduce a new characteristic into a product or process, at any phase of the product innovation chain. The Oslo Manual [23] also considers marketing and organizational innovation. A finer disaggregation of phases allows our research project to trace, in any process of an innovation chain or of a traditional product development chain, the changes in specific characteristics: agents, resources, means, methods, agent interventions, product evolution, and the understanding of the value added to the product. A characterization of process management approaches is explained in the next section.
2. Characterizations of Process Management Approaches
The risk analysis model proposed here is centered on the specific characteristics of an adopted process management approach, and may be applied to any process of any phase of the product innovation chain. A particular process management approach could be used in one phase, in a set of processes of different phases, in all phases, or alternated with other process management approaches.
Two ways, one strategic and one operational, are used in our work to characterize the processes in the phases of a product innovation chain. The first is centered on the quality features that a process management approach seeks to assure in the realization of a process; this is an intentional characterization. The second is centered on the specific activities that a process management approach introduces into the realization of a process; this is an operational characterization. The first drives the analysis of risks on the things that an innovation team intends to do in a process; the second leads to the analysis of risks on the things that an innovation team actually performs in a process. These equivalent ways of analyzing risks in the processes of the phases of a product innovation chain do not require a delay between them for the application of the analysis model. The first, the "must be", allows abstract analysis and reflection, considering intentions and goals. The second, the "to be", supports concrete analysis and observation, considering activities, measures and achievements.
In this section the two process characterizations are presented for Concurrent Engineering, Lean Production, and Co-creation. For each of these process management approaches, a definition, a set of characteristic quality features, and a set of characteristic activities are elaborated, considering the concepts commonly used in the literature.
2.1. Concurrent Engineering
Definition: CE is a management approach oriented to obtaining opportune and qualified products, based on an integral, static and/or dynamic configuration of processes, activities, and actions, interrelated in a sequential and/or non-sequential, simultaneous and/or delayed, parallel and/or convergent way, and coordinating the optimization of resources and the responsibilities, objectives, decisions, and operations of internal and external agents.

Characteristic Non-functional requirements (quality attributes): Opportunity,
Parallelism, Overlap, Interaction, Iteration, Intention (decision).

Variations applied on the activities of a basic process:
- Identify and extract actions and/or activities, disaggregating processes
- Identify and extract actions, disaggregating activities
- Statically reconfigure activities
- Dynamically reconfigure activities
- Statically reconfigure processes
- Dynamically reconfigure processes
- Reconfigure activities and processes, interrelating actions, activities, and processes in a sequential and/or non-sequential, simultaneous and/or deferred, parallel and/or convergent way
- Coordinate the optimal use of the resources of internal and external agents
- Coordinate the definition and execution of the objectives, responsibilities, decisions, and operations of internal and external agents

2.2. Lean Production
Definition: a management approach based on optimizing the value that customers can extract, by eliminating wasted flows and improving the flow and performance in the value chain of a product or product family.

Characteristic non-functional requirements (quality attributes): performance, measurability, explicitness, continuity, essentiality, comparability, fluency, flexibility (ability to self-reshape or to be recomposed).

Variations applied on the activities of a basic process:
- Arrange activities in order to make the value chain explicit and controllable
- Identify and extract actions and/or activities
- Analyze, modify, or delete actions and/or activities, and restore the continuous flow along the value chain
- Store the value chain

2.3. Co-creation
Definition: the collaborative or cooperative invention, discovery, development, consultation, or recall of new concepts, arranged to be incorporated into processes and/or products or services, from the individual and collective contributions, spontaneous or foreshadowed, free or structured, concrete or abstract, of the internal and external stakeholders of an organization.

Characteristic non-functional requirements (quality attributes): collaboration, cooperation, creativity, novelty, predictability, accessibility, communicability, reliability, security, equity, clarity, attraction, interactivity, preservation, commitment, elaboration, conceptualization, sociability, abstraction.

Variations applied on the activities of a basic process:
- Receive individual or collective, spontaneous or foreshadowed, free or structured, concrete or abstract contributions from internal and external stakeholders of an organization
- Introduce into a domain one or multiple new concepts, coming from new knowledge or from applying existing knowledge not previously used in this domain, from one or more internal or external agents, in relation to a process, an activity, or an action, or to the resources involved in these, in order to be incorporated into processes and/or products or services
- Modify or complement one's own contributions or those of other internal or external agents, related to a process, an activity, or an action, or to the resources involved in these
- Submit elaborations on contributions, or on elaborations made on previous contributions, by internal or external agents, relating to a process, an activity, or an action, or to the resources involved in these
- Introduce ideas, concepts, contributions, or elaborations into processes, activities, or actions, or into the resources involved in these, in a product innovation chain
- Incorporate new concepts in the generation or modification of processes and/or products or services

The double characterization of process management approaches described above makes it possible to improve existing approaches and to create new ones, by introducing new quality features and/or new variations to process activities, and by extracting and combining quality features or variations of activities. A risk model to be instantiated with quality features, or with variations of process activities, is the subject of the next section.
3. Configuration of Risk Analysis Model for Process Management Approaches
Product development along the innovation chain is exposed to events which menace the product evolution in each phase of the chain. The adopted risk model considers two types of events and four risk levels for identifying risks in processes led by different management approaches. Process risks are associated with the agents and objects involved in the activities of processes. Each process management approach considers types of coordination, agent participation modes, the use of means and methods, product states, etc., involving specific activities and a particular set of quality features. The terms and categories involved in the risk model are described in the following paragraphs.
3.1. Definition of Risk Model Concepts
Risk is defined as the probability of damage to an exposed object when an event occurs, where the probability of an adverse consequence is associated with a conscious action of an agent in a process. An accident is the probability of adverse consequences of unconscious actions of an agent in the process, when an event occurs. Anything that happens is considered an event. Risk is determined by the nature of the event and the intrinsic nature of the exposed object.
The event causing a risk is an internal event: either a process event, which arises from internal agents in a process, or a contextual event, which arises from an agent's intervention in the context.
A threat, analogous to a risk, is the probability of damage that an external event, that is, a supra-process event, may cause to an exposed object of the process.
Vulnerability, or risk level, is the degree of fragility, a measure of the propensity to be deteriorated. It may be associated with the process or with the process context (supra-process): internal and external vulnerability, respectively.
Extending the concepts of [22], external vulnerability (extrinsic, expositive) has four risk levels: a) fragility to surpass the magnitude, severity, and amplitude of sieges; b) fragility to overcome the frequency and dynamicity of sieges; c) fragility to control the nature, object, means, and way of sieges; and d) fragility to manage the expansion and consequences of sieges and the re-establishment of equilibrium after them.
Internal vulnerability (intrinsic, proper) also considers four risk levels: a) the fragility of agents and objects to repel the magnitude, severity, and amplitude of attacks; b) the fragility of agents and objects to resist the frequency and dynamicity of attacks; c) the fragility of agents and objects to handle impacts, due to the nature, object, means, and way of attacks; and d) the fragility of agents and objects to confront the expansion and consequences of attacks and achieve a rapid and complete re-establishment of equilibrium after them.
Risk is associated with the activities of agents at the process level. Events may occur at the internal level, in the process or its context, or at the external level, in the supra-process or process context. The former determine risks; the latter define threats.

3.2. Risk Analysis Model
The external and internal vulnerability categories are centered on the process concept and may be related to the particular activities of co-creation processes. This concept supports the management of risks along the product innovation phases.
The construction of the risk model depicted in Figure 3 is founded on the process model and on the concepts of vulnerability and risk.
The risk model is an agent-centered approach, which considers the risks associated with the agents' interventions in a process. In the first column of Figure 2, two types of events bring risks to the agents and objects involved in a process: preventable and unpreventable. Each type is related, in column 2, to two intrinsic vulnerabilities, each one connected, in turn, with two categories of risk, in column 3. The first risk category focuses on the nature of the agents and involved objects. The second category considers the behavior or reaction capacity of the agents and involved objects. The risk, in both categories, could be perceived as the intervention of a disturbing element, or as an unfavorable (aggressive) intervention of a disturbing element (agent). These two risk categories are applied to the four risk types considered in column 3.


Figure 2. Risk model

In the second risk category, under an unfavorable (aggressive) intervention of a disturbing element (agent), as it appears in the four risk types in column 3, a behavior rated "insufficient" gives rise to an invasive action of the disturbing element (agent). A behavior rated "weak" generates a persistent action of the disturbing agent. A "low" behavior allows an excessive action. A "very low" behavior corresponds to an expansive action.
Many risks may be recognized for the two risk categories, but here only one risk is recognized for each of the four risk types.
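To illustrate how the model is instantiated, the sketch below crosses one characterization (the six Concurrent Engineering quality features of Section 2.1) with the four risk types to enumerate generic risk statements, in the spirit of Figures 3 and 4. The exact phrasing of the generated statements is an assumption of this example, not the paper's wording.

```python
from itertools import product

# Characterization being instantiated: the six CE quality features (Section 2.1)
quality_features = ["opportunity", "parallelism", "overlap",
                    "interaction", "iteration", "intention"]

# Four risk types: behavior level of the exposed agents/objects and the
# corresponding action of the disturbing agent (column 3 of Figure 2)
risk_types = {"insufficient": "invasive", "weak": "persistent",
              "low": "excessive", "very low": "expansive"}

# One generic risk statement per (feature, risk type) pair, in the spirit
# of the fourth row of Figure 3
for feature, (behavior, action) in product(quality_features, risk_types.items()):
    print("risk to %-12s: %s reaction capacity -> %s action of disturbing agent"
          % (feature, behavior, action))
```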
The risk categories described in Figure 2 allow the risks listed in the fourth row of Figure 3 to be expressed. This representation shows four risk levels, which could affect the achievement of each of the quality features, or the accomplishment of the specific activities, characterizing a process management approach (Concurrent Engineering, Lean Production, Co-creation, etc.).
This disaggregation of risk categories facilitates the statement of risks, the later allocation of probabilities in a quantitative approach, and the analysis of the risks and the associated problems for different process management approaches. In fact, each process management approach is characterized, on one side, by a set of proper quality features and, on the other side, by a set of specific activities indicating variations on the activities of a traditional process management approach. In Figure 3, for example, the fifth row shows the six quality features characterizing the Concurrent Engineering approach.

Figure 3. Risk model
4. Problem Identification for Process Management Approaches
Risks express the probability of having an adverse consequence, and a problem is the materialization of a risk. To treat risks is, in fact, to define preventive and corrective measures for preventing and correcting the eventual problems generated when a probable impact on an exposed object occurs.
Every risk gives rise to a generic problem for every quality feature, or specific activity, characterizing a particular process management approach. Figure 4 depicts the generic problems corresponding to the six quality features characterizing the Concurrent Engineering approach. For the sake of simplicity, Figure 4 contains only the problems for one of the four risk levels represented in Figures 2 and 3.

Figure 4. Problems associated with the first risk type (unpreventable, corruptible) for the quality features of Concurrent Engineering
5. Conclusion and Future Work
The characterization of different process management approaches in terms of quality features and specific activities contributes to identifying the advantages and constraints of applying these approaches. These characteristics, related to the process, support the analysis and improvement of the approaches and the derivation of new ones.
The proposed risk models offer a rich gamut of differentiated analysis categories centered on the process concept. In fact, there are two types of events, two risk categories, and four risk types, applicable to the quality features or specific activities characterizing a process management approach.
Centering the characterization of process management approaches on the intentional (quality features) and operational (specific activities) aspects of these processes facilitates taking advantage of tangible elements of the processes, aiming at the quantitative analysis of probabilities, risks, problems and costs. In this direction, a complementary result achieved in the project presented here is a model for calculating the cost involved in risk, and the cost of preventive and corrective measures, for any process management approach.
Ongoing work seeks to validate a risk management model for analyzing, evaluating, and treating risks in different innovation projects. It is important to include in future work empirical studies for validating the characteristic features of process management approaches.
Figure 5. Problems associated with the first risk type (unpreventable, corruptible) for the specific activities of Co-creation
6. Acknowledgements
This work was conducted within the project "Management of Risks in Innovation Projects under the Co-creation Approach", code 111552129062, supported by COLCIENCIAS. The models were elaborated by the ITOS research team of the University of Antioquia and by the Software Engineering research team of the National University of Colombia.
7. References
[1] Tarde, G. (1903). The Laws of Imitation. Translated by Elsie Clews Parsons. H. Holt & Co., New York. 404 pp.
[2] Altshuller, G., The Innovation Algorithm: TRIZ, Systematic Innovation and Technical Creativity. Translated by Lev Shulyak and Steve Rodman. Technical Innovation Center, Inc., Worcester, MA, 2000.
[3] Ogburn, W.F. and Gilfillan, S.C. (1933). The Influence of Invention and Discovery, in Recent Social Trends in the United States, Report of the President's Research Committee on Social Trends, New York: McGraw-Hill, Volume 1, p. 132.
[4] Subcommittee on Technology, National Resources Committee. 1937. Technological Trends and
National Policy. Washington.
[5] Schumpeter, J. (1935). Análisis del cambio económico. The Review of Economics and Statistics. Retrieved from http://www.eumed.net/cursecon/textos/schump-cambio.pdf
[6] Maclaurin, W. R. (1949), Invention and Innovation in the Radio Industry, New York: Macmillan, p. xvii-xx.
[7] Brozen, Y. (1951), Invention, Innovation, and Imitation, American Economic Journal, May, pp. 239-257.
[8] Maclaurin W. R. (1953) The Sequence from Invention to Innovation and its Relation to Economic
Growth, Quarterly Journal of Economics, 67 (1), pp. 97-111.
[9] Drucker, P. (1985). The discipline of innovation. Harvard business review. Retrieved from
http://ukpmc.ac.uk/abstract/MED/10272260
[10] Godin, B. 2006. The Linear Model of Innovation: The Historical Construction of an Analytical Framework. Science, Technology & Human Values 31 (2006), 639-667.
[11] Kelly, P., Kranzberg, M., Rossini, F. A., Baker, N. R., Tarpley, F. A., and Mitzner, M. (1975). Technological Innovation: A Critical Review of Current Knowledge, volume 1, Advanced Technology and Science Studies Group, Georgia Tech, Atlanta, Georgia, Report submitted to the NSF, p. 33.
[12] Kline, S. J. 1985. Innovation is not a Linear Process, Research Management, July-August, pp. 36-45.
[13] Rothwell, R. 1992. Successful Industrial Innovation: Critical Factors for the 1990s, R&D Management, 22, pp. 221-239.
[14] Kamara, J.M., Anumba, C.J. & Evboumwan, N. F.O. 2000. Developments in the implementation of
Concurrent Engineering in Construction. International Journal of Computer-Integrated Design and
Construction; 2: 68-78.
[15] Lindquist A., Berglund F. & Johannesson H. 2008. Supplier Integration and Communication Strategies
in Collaborative Platform Development. Concurrent Engineering; 16: 23-35.
[16] Monden, Y. 1983. Toyota production system: Practical approach to production management. Industrial
Engineering and Management Press. ISBN 0898060346. 247pgs.
[17] Womack, J. P. and D. T. Jones (1996). Lean Thinking. New York, Simon & Schuster.
[18] Bowen, D.E. and Youngdahl, W.E. (1998). Lean Service: In Defense of a Production-Line
Approach. International Journal of Service Industry Management 9, 3.
[19] Hines, P., Holweg, M. and Rich, N. (2004). Learning to Evolve. A review of Contemporary Lean
Thinking. International Journal of Operations and Production Management, 24, 10.
[20] Prahalad, C.K. and Ramaswamy, V., Co-opting customer competence, Harvard Business Review 78 (1) (2000), 79-87.
[21] Chambers, R. 1989. Vulnerability, Coping and Policy, IDS Bulletin, vol. 20, no. 2, Institute of Development Studies, University of Sussex, Brighton (England), April, pp. 1-7.
[22] Bohle, H. G., T. E. Downing, & M. J. Watts. 1994. Climate change and social vulnerability: the
sociology and geography of food insecurity. Global Environmental Change 4:37-48.
[23] OECD. Oslo Manual: Guidelines for Collecting and Interpreting Innovation Data. Third edition, 2005.
Sustainability Indicators for the Product
Development Process in the Auto Parts
Industry

Paulo Roberto Savelli Ussui 1 and Milton Borsato 2

Federal University of Technology - Paraná (UTFPR), Curitiba, Brazil

1 Corresponding author. Tel.: +55 41 9969 0205, Email: paulo_ussui@hotmail.com
2 Corresponding author. Tel.: +55 41 3310 4941, Email: borsato@utfpr.edu.br

Abstract. The severe environmental degradation caused by human activities in recent decades has forced organizations, especially those related to the auto parts industry, to take actions and measures to reduce or eliminate the negative effects caused by industrial activities. Those measures aim to cover various aspects within companies, and require an appropriate measurement system for the verification of their effectiveness. In this sense, sustainability indicators have grown in importance because they allow assessing the level of sustainability of various entities, identifying the potential for improvement and the progress achieved. Despite the great variety of existing indicators, there are still unexplored areas, such as indicators for product development. The present paper proposes a set of sustainability indicators, aiming to contribute to the creation of more sustainable products and to guide the development process from its earliest stages. This set of indicators, divided into product and design indicators, was defined based on best practices and recommendations from various techniques, such as design for environment and design for sustainability, and on the study of existing sustainability indicators. While product indicators have a technical character, as they provide means for evaluating product characteristics in a quantitative way, design indicators take a managerial approach, assessing in a qualitative way whether the main aspects and techniques of design for sustainability are being considered throughout the development process. The result is a set of 23 product indicators and 26 design indicators, including metrics that were simplified and adapted according to the characteristics and requirements of each phase of a product development process. The proposed indicators were applied to the development of a new component for the auto parts industry and proved very useful in guiding the development team to design sustainable products.
Keywords. Sustainability indicators, Product Development Process, Ecodesign
1. Introduction
The great economic, scientific and technological progress that occurred in the twentieth century caused great changes in various aspects of people's lives. On one hand, they provided a major improvement in the quality of life of people with access to these advances, due to the benefits provided by new technologies and services. On the other
hand, they have caused great social differences and significant damage to the environment, due to the increasing consumption of materials and energy and the increased emission of waste and pollution into the environment [1].
Many companies and organizations are working to reduce the negative impacts caused by their products and activities, including those related to the automotive industry, as it is responsible for a large environmental impact [2]. That work requires, among other needs, a reliable measurement system to verify the level of sustainability of those various institutions, in order to assess the current situation, set goals and follow up on the implementation of improvements. In this direction, sustainability indicators provide an important framework for identifying strengths and weaknesses, monitoring sustainability goals and plotting future plans [3].
There are currently many sustainability indicators developed for various purposes, such as indicators for sustainable manufacturing, products, companies, cities and nations, just to name a few [3]. However, there are still unexplored areas, such as the product development process. During the development of a new product, many decisions are made, and these decisions can affect, among other aspects, the sustainability of that product [4]. Therefore, it is important to define sustainability indicators from the preliminary design phases, in order to guide the design team towards developing a sustainable product.
In this context, this paper aims to develop a set of sustainability indicators for the product development process in the auto parts industry. In Section 2, sustainability concepts are presented, including some methods for the development of sustainable products. The concept of indicator and correlated terms are presented in Section 3, as well as some existing sets of sustainability indicators. The proposed sustainability indicators are presented in Section 4, and the final comments and conclusions are presented in Section 5.
2. Fundamental concepts
Sustainability can be defined as a pattern of natural resource use that aims to meet the human needs of current generations without compromising the ability of future generations to meet their own needs [5]. For that to happen, it is necessary to consider social and economic aspects together with environmental actions, in an integrated, three-dimensional model.
On the environmental aspect, the aim is to consume natural resources in a rational and efficient way, reducing undesirable emissions into the environment. On the economic aspect, the objective is to achieve sustainable profit, maximizing financial returns with a constant capital. The social aspect considers the equality of conditions and rights provided to people, such as quality of life, education and health [3].
There are many methods and recommendations for the development of sustainable products. DFE (Design for Environment) is one of these methods and combines a variety of design approaches for reducing the environmental impact of a product, including the use of recyclable materials, mass reduction and easier disassembly to enable remanufacturing and material separation at end of life [6]. Another method is D4S (Design for Sustainability), which consists of several design guidelines for developing a new product or reviewing an existing one, with focus on the three aspects of sustainability [7]. And LCD (Life Cycle Design) is a method that presents several proposals for the reduction of environmental impact in each phase of a typical product life cycle [8]. The recommendations suggested by these methods were considered in the development of the sustainability indicators proposed in this paper.
3. Existing sustainability indicators and indexes
Several sets of indicators and indexes assess the sustainability of various entities, with different objectives. For assessing the sustainability of manufacturing processes, the Lowell Center for Sustainable Production has developed a set of 22 indicators distributed in 6 distinct groups, which evaluate the use of materials and energy, environmental damage, economic performance, final product performance and social aspects [9]. General Motors has also employed a set of manufacturing indicators, aiming to evaluate the sustainability of their production processes using 30 indicators distributed in 6 groups, based on the three aspects of sustainability [10].
For product assessment, Ford has developed a set of 8 indexes for entire-vehicle appraisal, named the Product Sustainability Index (PSI) [11]. They are based on LCA (Life Cycle Assessment) guidelines and evaluate life cycle cost, safety, noise level, passenger capacity, use of recycled and toxic materials, and emission levels during the whole life cycle. Another set of indicators for product assessment is the Environmental Product Declaration (EPD), which aims to inform the final customer of the environmental impact produced by a vehicle during its life cycle. It evaluates the impacts of production, material use, fuel consumption, emissions, maintenance and end of life [12]. The LiDS Wheel is another example of a set of indicators, designed to evaluate the performance of a product according to 8 recommended design strategies, using qualitative metrics, based on the various phases of its life cycle [13].
Other sets of indicators have been developed to evaluate entire companies, industries and other contexts, such as cities and nations. For example, the Global Reporting Initiative (GRI) provides a set of indicators that reports the sustainability levels of entire companies. It consists of more than 100 indicators to evaluate environmental and economic performance and social aspects, such as working conditions, social responsibility practices and human rights, just to name a few [14]. The ecological footprint is an example of an indicator for the environmental impact evaluation of cities and nations, based on the consumption of renewable and non-renewable resources [15].
Despite the great variety of existing indicators and indexes for different purposes, few sustainability indicators are designed for application during the product development process. For that reason, and considering the importance of indicators for the decision-making process during product development, there is an opportunity to be explored, and the indicators proposed in this paper aim to fill that gap.
4. Proposed sustainability indicators
Based on the concepts, methods and recommendations for the development of sustainable products, a set of sustainability indicators was developed for application during the product development process in the auto parts industry.
The proposed set was divided into two major groups. The first group is called Product Indicators; they take a technical approach, in order to evaluate the product and related processes. The second group is called Design Indicators; they take a managerial approach, focusing on the application of recommended design practices and guidelines for the development of sustainable products. Both groups are based on the three dimensions of sustainability.
These major groups are presented in detail in the following sections.
4.1. Product Indicators
Product Indicators have been defined based on the most important design characteristics that a design team must address when developing a sustainable product, throughout its life cycle. They may be used as pass/no-pass criteria on gates at the end of each product development phase, along with other phase approval criteria. Such characteristics have been identified based on recommendations derived from methods for the development of sustainable products and from existing sets of sustainability indicators.
The metric for each proposed indicator has also been simplified to allow data gathering from the initial product development phases, thus avoiding complex calculations or searches for information that may not be available at the beginning of a product development process.
Product Indicators are considered performance indicators, because they may be used for evaluating the design progress towards a goal. Therefore, it is recommended to set a goal for each indicator, preferably in the informational design phase. In this case, the information available from the strategic and project planning phases, besides the data gathered during the informational phase itself, may be used. It is also recommended to select an existing product to be used as a reference in setting goals for the indicators, as well as in evaluating the design progress.
For a better understanding of each indicator's function and characteristics, Product Indicators have been divided into seven distinct categories, namely: Use of Materials, Manufacturing Processes, Social Manufacturing, Economics, Logistics, Product Use and End-of-Life. Product Indicators are presented in Table 1.
The first category, Use of Materials, consists of indicators defined to evaluate the materials embedded in the product or component itself. The first indicator in this category is Mass, an important parameter for evaluating the amount of material in the product. Mass reduction is one of the most recommended strategies for developing sustainable products, due to its implications over the whole life cycle. Another indicator included in this category is the toxicity of materials, which is considered a social indicator, because it may affect the health of workers and customers. The basis for this indicator is the Global Automotive Declarable Substance List (GADSL), and the indicator metric is the percentage of the total mass that is considered declarable or prohibited in this list [16]. The last indicator included in this category is the number of different materials present in the component or product. This indicator has implications for the end of life of a product, because the higher the number of different materials, the more difficult it will be to separate them for recycling or reuse.
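As a worked illustration of the toxicity metric (indicator 2 in Table 1), the share of GADSL-declarable mass can be tallied from a bill of materials; the materials and masses below are hypothetical.

```python
# Indicator 2 (materials toxicity): % of total mass that is declarable or
# prohibited according to GADSL. The bill of materials below is hypothetical.
bom = {                       # material -> (mass in g, flagged on GADSL?)
    "steel housing": (420.0, False),
    "lead solder":   (6.0,   True),
    "PA66 bracket":  (95.0,  False),
}
total_mass = sum(mass for mass, _ in bom.values())
flagged_mass = sum(mass for mass, flagged in bom.values() if flagged)
print("materials toxicity: %.1f%% of %.0f g" % (100 * flagged_mass / total_mass, total_mass))
```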
The second category is Manufacturing Processes. Manufacturing processes have a major influence on sustainability, because they may generate waste, consume energy and cause health risks to workers, as well as social problems [9]. The indicators included in this category are Electrical Energy Consumption, Water Consumption and Fossil Fuel Consumption. These are process inputs that need to be reduced due to their related environmental impact. Electrical energy consumption, for example, may be related to several environmental impacts, such as fossil fuel consumption or land use, depending on the way the energy is generated. Water is also an important natural resource, and many manufacturing processes use this input, such as boilers or part cleaning machines. Fossil fuels may be used in some processes, such as heat treatment, welding and water heating. They are very harmful to the environment, so the consumption of this kind of energy source needs to be reduced or eliminated.

Table 1. Product indicators

Use of materials:
1. Total mass. Metric: gram.
2. Materials toxicity. Metric: % of total mass containing toxic materials, according to GADSL.
3. Amount of different materials. Metric: qty. of different materials in the component.

Manufacturing process:
4. Electrical energy consumed at manufacturing processes. Metric: Wh.
5. Water consumption. Metric: l/component.
6. Fossil fuel consumption. Metric: (Nm³/h)/component.

Social manufacturing indicators:
7. Total number of workers. Metric: total number of workers in the production line.
8. Number of accidents at work. Metric: total number of accidents (with or without lost time) per year.

Economics:
9. Total cost. Metric: $.
10. Productivity. Metric: qty. of products (h) / workers (h).
11. Maintenance cost. Metric: $ (repair kit).

Logistics:
12. Fossil fuel consumption during distribution per product. Metric: [transported distance (km) / fuel consumption (km/l)] / total qty. of products transported.
13. Packaging mass. Metric: gram.
14. Packaging's end of life. Metric: 1 - package cannot be reused nor recycled; 3 - package cannot be reused but can be recycled; 5 - package can be reused and recycled.

Product use:
15. Part fuel consumption caused by the addition of mass to the vehicle (Cp). Metric: km/l, Cp = (Ct x Mp)/Mt, where Ct = total vehicle consumption, Mp = part mass, Mt = total vehicle mass.
16. Driving power consumption (for automotive components that consume engine power). Metric: CV.
17. Durability. Metric: km.
18. Maintenance. Metric: % of components that can be repaired or replaced in the field.
19. Field complaints. Metric: qty. of field complaints per year.

End of life:
20. Disassembly. Metric: 1 - product cannot be disassembled; 2 - product can be disassembled with effort and use of specific tools; 3 - product can be disassembled with effort and use of common tools; 4 - product can be disassembled with effort but manually (no tools required); 5 - product can be manually disassembled without effort (no tools required).
21. Amount of fixing elements. Metric: qty. of screws and rivets.
22. Remanufacturing. Metric: % of components that can be reused or remanufactured at end of life.
23. Materials recyclability. Metric: % of recycled materials.

The next category is Social Manufacturing. According to the social aspect of sustainability, manufacturing processes have to contribute to the progress of local communities. Two indicators have been defined in this category, namely the number of workers in the production line and the number of accidents at work. The first is usually defined during the design process, at the conceptual and detailed design phases, and it has great influence on the social development of local communities. The latter has been conceived for use in the Product and Process Monitoring phase, after product launch. It is an important social aspect, as identified by many guidelines for the development of sustainable products.
In the Economics category, indicators for the evaluation of Manufacturing Cost, Productivity and Maintenance Cost have been included. Manufacturing cost represents the total product cost, and its metric is the sum of all component costs, whether purchased from suppliers or produced internally. Productivity basically measures the number of workers required to produce a certain amount of products in a given period [17]. It is an important indicator of production line performance. And maintenance cost is an important parameter for making the maintenance process economically feasible, in order to extend a product's life. As its metric, one could consider the cost of a maintenance kit that contains the most frequently used field repair components.
Logistics includes 3 indicators: fossil fuel consumption during distribution, packaging mass and end of life of packaging. All these indicators focus on the reduction of the environmental impact caused by distribution processes. Fossil fuel consumption per product is related to transportation efficiency and distance, as well as the number of units transported simultaneously: the higher the number of units, the lower the fossil fuel consumption per part will be. Packaging indicators, on the other hand, aim to evaluate the amount of material required to produce the package and its destination at end of life, considering its reuse and/or recycling.
The Product Use category brings indicators to assess the environmental impact during the product use phase, which is one of the most harmful phases of a product's life cycle. Included in this category are the additional fuel consumption that an auto part may cause by adding mass to the vehicle (Cp); the driving power consumption (only for auto parts that need engine power to work, such as alternators or compressors); durability (km); the percentage of components that can be repaired or replaced in the field; and the quantity of field complaints. The last indicator is designed for use at the product monitoring phase, after a product is launched in the market, while the others can be used from the informational design phase onwards.
Finally, in the End-of-Life category, indicators have been created to assess end-of-life strategies; they consider disassembly, the amount of fixing elements, remanufacturing potential and recyclability. These indicators aim to evaluate ease of disassembly, in both qualitative and quantitative ways, as well as reuse and recyclability of the materials when it is no longer possible to reuse a component.
4.2. Design indicators
A series of design indicators has been defined to evaluate a project at each phase of the product development process, with a managerial approach. The purpose is to guide a design team through the application of recommended tools, methods and best practices for the development of sustainable products, in all three aspects of sustainability, namely environmental, social and economic. The metrics are mostly qualitative, defined individually for each proposed indicator, and take the form of a grade from 1 to 5, assigned to each indicator according to the answer given to a specific question.
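As an illustration of this grading scheme, the sketch below represents one indicator as a question with a discrete scale, assuming a simple Python representation; the class and field names are not from the paper.

```python
from dataclasses import dataclass

@dataclass
class DesignIndicator:
    number: int
    name: str
    phase: str
    question: str
    scale: dict[int, str]       # grade -> meaning; many scales skip values
    grade: int | None = None

    def assign(self, grade: int) -> None:
        # Only grades defined for this indicator are valid (e.g. 1, 2, 3, 5)
        if grade not in self.scale:
            raise ValueError(f"grade {grade} not in scale {sorted(self.scale)}")
        self.grade = grade

# Example: indicator 1 receives grade 3, as in the application example below
strategic = DesignIndicator(
    1, "Strategic business planning", "Product strategic planning",
    "Does the strategic business planning contain elements related to "
    "sustainability in the 3 aspects?",
    {1: "does not contain", 2: "one aspect only",
     3: "two aspects", 5: "all 3 aspects"})
strategic.assign(3)
```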
The sustainable design indicators were divided into eight distinct categories, one for each phase of the product development process, based on the reference model proposed by Rozenfeld et al. [4]. These categories are: Strategic Planning, Project Planning, Informational Design, Conceptual Design, Detailed Design, Production Preparation, Product Launch and Product Monitoring.
In the Strategic Planning phase, the aim is to review a company's product portfolio in order to put together business plans for the products to be delivered in the coming years [4]. The first indicator in this category ensures that strategic business planning is oriented towards sustainability in its three dimensions. The second indicator evaluates whether the product portfolio has been developed with sustainability-oriented thinking, resulting in the definition of products with the potential to improve sustainability in comparison with the company's current products. The last indicator included in Strategic Planning is related to the new project protocol, as it assesses elements of sustainability in its three dimensions.
The next category is Project Planning. In this phase, responsibilities, activities and
resources required to implement a project are defined. According to the reference
model [4], this phase starts with the identification of the project stakeholders, and the
first indicator assesses if they are concerned with the three aspects of sustainability [16].
Another important activity of Project Planning is the definition of the project scope and deliverables; accordingly, the second and third indicators in this category assess whether the scope and deliverables consider elements of sustainability. The last indicator assesses whether sustainability indicators have been defined for the next phases of the design process, considering the three dimensions of sustainability.
The objective of the Informational Design phase is to collect and analyze data in order to define target specifications for the product, which will guide the development process until product launch [4]. The indicators defined for this phase are related to the product's life cycle, assessing whether sustainable solutions have been considered for its end-of-life and whether target specifications have been defined for improving sustainability.
With the target specifications for the new product defined, the next phase is
Conceptual Design. It consists of creating new concepts for meeting the project
proposal [4]. There are several sustainability methods developed to assist the process of
defining solution principles and new concepts, such as DFE, D4S and LCD, as
described in section 2. The first indicator in this category evaluates whether these
methods have been considered. The second indicator checks whether the new concept selected for development has the potential to improve sustainability in all its dimensions.
After defining the concept to be developed, the project team starts the Detailed
Design phase, which consists of the development and definition of all product
specifications and related processes. The first indicator in this category is related to the
supplier definition. It assesses whether the selected suppliers are ISO 14000 certified.
The second indicator evaluates if ergonomics and operator safety have been considered
during the manufacturing process definition. The third indicator is related to packaging,
assessing if sustainable alternatives for its end-of-life have been considered, such as
reuse and recycling. The product's end-of-life is also an important aspect, and the fourth indicator analyzes whether sustainable solutions have been considered for it as well. The fifth indicator assesses whether maintenance processes are economically feasible.
The last indicator was developed to verify if the new product is economically viable.
Once product specifications and manufacturing processes are defined, the next phase of the development process is Preparation for Production. The indicators defined in this category assess the efficiency of manufacturing processes, ergonomics and operator safety, compliance with environmental legislation for products and processes, and the product's economic feasibility after the investments in this phase are made.
After the implementation of manufacturing processes, a design team starts the Product Launch phase. In order to evaluate whether the most efficient means of transportation and distribution for a given product have been selected, an indicator for evaluating logistics processes has been added.
When a product is launched in the market, the Product Monitoring phase starts. It
consists of collecting and processing information about product performance. The
indicators defined for this phase were Economical Feasibility, Customer Safety and
Compliance to Existing Environmental Legislation.
The indicators for phase Product Monitoring can be tracked while a product is
continuously supplied to the market, according to the frequency defined by a project
team, until a company decides to withdraw it from the market (discontinuation), driven
by factors such as declining sales or reduced profit margins.
Table 2 contains all indicators presented in this section, including the proposed
qualitative metrics for each indicator.
Table 2. Proposed design indicators

Product strategic planning:
1. Strategic business planning. Does the strategic business planning of the company contain elements related to sustainability in the 3 aspects (social, environmental and economic)?
   1 - Does not contain; 2 - Contains only one aspect of sustainability; 3 - Contains two aspects; 5 - Contains all 3 aspects.
2. Product portfolio. Does the product portfolio have new products with potential for improving sustainability in the 3 aspects?
   1 - No potential for improvement; 2 - Potential for improvement in just one aspect; 3 - Potential for improvement in two aspects; 5 - Potential for improvement in all 3 aspects.
3. Project protocol. Does the project protocol have elements of sustainability in the 3 aspects?
   1 - Has no elements of sustainability; 2 - Elements in only one aspect; 3 - Elements in two aspects; 5 - Elements in all 3 aspects.

Project planning:
4. Project stakeholders. Are the stakeholders concerned about producing products that include improvements to issues of sustainability?
   1 - Not concerned; 2 - Concerned with only one aspect; 3 - Concerned with two aspects; 5 - Concerned with all three aspects.
5. Project scope. Does the project scope consider elements of sustainability?
   1 - Does not include elements of sustainability; 2 - Includes elements in only one aspect; 3 - Includes elements in two aspects; 5 - Includes elements in all 3 aspects.
6. Project deliverables. Were the project deliverables conceived to be more sustainable?
   1 - Not conceived to be sustainable; 2 - Conceived to be sustainable in only one aspect; 3 - In two aspects; 5 - In all 3 aspects.
7. Performance indicators. Were indicators defined related to the 3 main aspects of sustainability?
   1 - Not defined; 2 - Defined for only one aspect; 3 - Defined for 2 aspects; 5 - Defined for all 3 aspects.

Informational design:
8. Product life cycle. Does the definition of the product life cycle consider elements that contribute to improving sustainability at the end of life?
   1 - Does not include such elements; 3 - Partially includes such elements; 5 - Fully includes such elements.
9. Product requirements. Do the product requirements have the potential to improve sustainability?
   1 - No potential to improve sustainability; 2 - Potential in only one aspect; 3 - Potential in two aspects; 5 - Potential in all 3 aspects.

Conceptual design:
10. Development of new concepts. Does the development of new concepts consider methods that support the development of sustainable products, such as DFE, LCD and D4S?
    1 - Not considered; 3 - Partially considered; 5 - Fully considered.
11. Concept selected. Does the selected concept have the potential to improve sustainability?
    1 - No potential; 2 - Potential in only one aspect; 3 - Potential in 2 aspects; 5 - Potential in all 3 aspects.

Detailed design:
12. Suppliers definition. Are the selected suppliers ISO 14000 certified?
    1 - 0-20% of suppliers certified; 2 - 21-40%; 3 - 41-60%; 4 - 61-80%; 5 - 81-100%.
13. Manufacturing processes safety. Are the defined manufacturing processes safe for the operators?
    1 - Unsafe for operators; 3 - Partially safe; 5 - Completely safe.
14. Packaging. Were sustainable alternatives for packaging verified, such as returnable, recyclable or biodegradable packaging?
    1 - Sustainable alternatives not considered; 3 - Partially considered; 5 - Fully considered.
15. End-of-life planning. Does the end-of-life planning consider sustainable alternatives?
    1 - Not considered; 3 - Partially considered; 5 - Fully considered.
16. Maintenance process. Is the maintenance process economically feasible?
    1 - Not feasible; 2 - Feasible only for the company; 3 - Feasible only for the customer; 5 - Feasible for both the company and the customer.
17. Economic feasibility (in detailed design). Is the new product economically feasible?
    1 - Not feasible; 3 - Partially feasible; 5 - Fully feasible.

Production preparation:
18. Efficiency of manufacturing processes. Are the selected processes, equipment and manufacturing technologies the most energy-efficient in the market?
    1 - Not the most efficient in the market; 3 - Partially the most efficient; 5 - The most efficient in the market.
19. Ergonomics and operator safety. Were ergonomics and operator safety considered in the development of the production process?
    1 - Not considered; 3 - Partially considered; 5 - Fully considered.
20. Environmental laws for processes. Will the current environmental laws for manufacturing processes be met?
    1 - Will not be met; 3 - Partially met; 5 - Fully met.
21. Environmental laws for products. Will the current environmental laws for products be met?
    1 - Will not be met; 3 - Partially met; 5 - Fully met.
22. Economic feasibility (at preparation for production). Is the product economically feasible?
    1 - Not feasible; 3 - Partially feasible; 5 - Fully feasible.

Product launch:
23. Logistics processes. Were efficient solutions considered for the logistics process, such as rail or maritime transportation instead of trucking?
    1 - Not considered; 3 - Partially considered; 5 - Fully considered.

Product monitoring:
24. Financial return. Is the product achieving the expected financial return?
    1 - The product is experiencing financial loss; 3 - Profitable, but below expectations; 5 - Financial return as expected or above.
25. Safety of the product's end users. Is the product hazardous for users in the field?
    1 - Safety issues have been reported in the field; 3 - The product has safety warnings, but the issues were detected early; 5 - No safety issues reported.
26. Legislation and environmental demands. Did the product meet environmental legislation requirements?
    1 - Did not meet environmental legislation demands; 3 - Partially met; 5 - Fully met.
4.3. Application example
The proposed indicators were applied to the development of a new diesel fuel injection pump for commercial vehicles by an auto parts company, in order to demonstrate their applicability during the design process. The design indicators applied during the project are shown in Table 3, while the product indicators are presented in Table 4.
Design indicators were applied from the beginning of the development process. Using the proposed indicators, the team identified that the strategic planning contained only two aspects of sustainability: there was a strong concern with the economic and environmental aspects, but the social aspect was missing. For that reason, grade 3 was assigned to the first design indicator, and the design team decided to include a social requirement at that step, regarding the maintenance of jobs. This is an example of the guidance provided by the proposed indicators throughout the project.
During the Project Planning phase, the indicators helped the team to analyze the project scope, which led them to conclude that all three aspects of sustainability had been considered. However, even though the project deliverables addressed the economic and environmental aspects of sustainability, social aspects were covered only by a maintenance target.
Table 3. Design indicators assigned during the project (indicator numbers as in Table 2)

# | Indicator | Strategic planning | Project planning | Info design | Conceptual design | Detailed design | Production preparation
1 | Strategic business planning | 3 | - | - | - | - | -
3 | Project protocol | 5 | - | - | - | - | -
5 | Project scope | - | 5 | - | - | - | -
6 | Project deliverables | - | 3 | - | - | - | -
8 | Product life cycle | - | - | 5 | - | - | -
9 | Product requirements | - | - | 5 | - | - | -
11 | Concept selected | - | - | - | 3 | - | -
12 | Suppliers definition | - | - | - | - | 5 | -
14 | Packaging | - | - | - | - | 3 | -
17 | Economic feasibility | - | - | - | - | 5 | -
18 | Efficiency of manuf. processes | - | - | - | - | - | 3
19 | Ergonomics and operator safety | - | - | - | - | - | 5
22 | Economic feasibility | - | - | - | - | - | 5
In the Informational Design phase, product indicators were applied for the first time. In order to support the definition of targets, some product indicators were calculated for an existing product. Total mass was measured using a scale. The percentage of toxic materials was calculated as the mass of toxic materials divided by the total mass. The electrical energy consumed by the machines during production was measured in the plant, where data such as the number of workers, cost and productivity were also gathered. Packaging mass was measured with a scale, and the maintenance and remanufacturing indicators were calculated as the number of parts that can be repaired or remanufactured, divided by the total number of parts. This information from a reference product, along with QFD and the project scope, helped the team to define technical goals and sustainability indicators for the new product. Design indicators were applied at the end of the Informational Design phase to evaluate its deliverables.
Table 4. Product indicators

# | Indicator | Info design (ref. product) | Info design (goal) | Conceptual design | Detailed design | Production preparation
1 | Total mass (g) | 2766 | 2490 | 2378 | 2490 | 2490
2 | Toxicity (%) | 3.0 | 2.7 | 1.1 | 1.1 | 1.1
3 | Electrical energy (Wh) | 3183 | 2865 | 2865 | 3025 | 2893
4 | Number of workers | 40 | 40 | 40 | 40 | 40
5 | Total cost (R$) | 156.50 | 140.85 | 142.15 | 141.95 | 140.85
6 | Productivity (parts/operator) | 5.0 | 5.5 | 5.0 | 5.3 | 5.3
7 | Packaging mass (g) | 0.7 | 0.63 | 0.65 | 0.66 | 0.66
8 | Maintenance (%) | 79 | 86.9 | 85 | 85 | 85
9 | Remanufacturing (%) | 50 | 55 | 50 | 50 | 50
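Using the Table 4 figures, the comparison of the final (production preparation) values against the reference product and the goals can be scripted directly. The sketch below assumes that lower values are better for the selected indicators; the variable names are illustrative.

```python
indicators = {
    # name: (reference product, goal, production preparation)
    "Total mass (g)":         (2766.0, 2490.0, 2490.0),
    "Toxicity (%)":           (3.0, 2.7, 1.1),
    "Electrical energy (Wh)": (3183.0, 2865.0, 2893.0),
    "Total cost (R$)":        (156.5, 140.85, 140.85),
    "Packaging mass (g)":     (0.7, 0.63, 0.66),
}

for name, (ref, goal, final) in indicators.items():
    improvement = 100.0 * (ref - final) / ref   # gain vs. reference product
    gap = 100.0 * (final - goal) / goal         # distance from the goal
    print(f"{name}: {improvement:+.1f}% vs. reference, {gap:+.1f}% from goal")
```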
In the Conceptual Design phase, product indicators were used to evaluate the new product concept, even though little technical information was available at that stage. The lack of data required the use of computer simulations for those estimates. A comparison between the estimates and the goals helped the design team to conclude that the new concept had the potential for improvement in the economic and environmental dimensions of sustainability, so they took the concept to the next design phase.
In the Detailed Design phase, product indicators could be recalculated based on more precise product technical specifications. The indicators showed the team that the new product reached its goals for total mass, toxicity, number of workers and total cost. The indicators related to manufacturing processes would still be optimized in the next design phase, but most of them were near the target and much better than those of the reference product. Design indicators helped the team to conclude that the suppliers were environmentally conscious and the product was economically feasible, but that the packaging end-of-life could be more sustainable, as biodegradable materials had not been considered.
In the Production Preparation phase, product indicators related to manufacturing were updated. The final result showed that production parameters were near the target, but continuous improvement was still required. Design indicators showed the team that the manufacturing processes were not the most efficient in the industry, as existing machines were used to avoid further investments. However, they were considered the best available solutions for ergonomics and safety, and the product was economically feasible.
At the end of the project, the team concluded that the proposed indicators presented appropriate metrics, which helped them to identify areas for improvement and provided guidance in the decision-making process. For that reason, the indicators can be incorporated into the development process of auto parts companies.
5. Conclusion
The proposed indicators fulfil the initial objective, which was the development of a set of sustainability indicators for the auto parts industry, as they proved to be a valuable tool for guiding design teams in developing more sustainable products.
The application example demonstrated that the proposed indicators can help a design team to define goals and identify areas for improvement during each phase of the design process. Design indicators can be applied from the preliminary phases onwards, while product indicators can be applied from the informational design phase, to aid the team in defining goals based on the project scope and data from an existing product. From the conceptual design phase onwards, product indicators are used to guide a design team in evaluating the new development, as presented in the application example.
Further research is suggested to define additional sustainability indicators for application during the development process, further improving the metrics and optimizing the calculations for a more accurate estimation of the important aspects of product design, starting at the strategic planning phase, when little information about a new product is available. Additionally, quantitative metrics can be studied for the design indicators, as opposed to qualitative ones, in order to eliminate any subjectivity that might arise during the estimation of indicators.
Acknowledgements
The authors wish to thank the Araucaria Foundation for providing financial
support for publishing the results of the present research to the scientific community.
References
[1] HOBSBAWM, E., Era dos Extremos: O breve século XX, Companhia das Letras, São Paulo, 1995.
[2] ORSATO, R.J.; WELLS, P., The Automobile Industry & Sustainability, Journal of Cleaner Production, Volume 15, Issues 11-12, 2007.
[3] VAN BELLEN, H.M., Indicadores de Sustentabilidade: Uma análise comparativa, 2nd ed., FGV Publisher, Rio de Janeiro, 2006.
[4] ROZENFELD, H. et al., Gestão de Desenvolvimento de Produtos: Uma referência para a melhoria do processo, 1st ed., Saraiva Publisher, São Paulo, 2006.
[5] BRUNDTLAND Commission, Our Common Future, Report of the World Commission on Environment and Development, UN General Assembly, 42nd session, 1987.
[6] ROSE, C.M., Design for Environment: A Method for Formulating Product End-of-Life Strategies, Doctoral thesis, Department of Mechanical Engineering, Stanford University, Palo Alto, 2001.
[7] UNEP, Design for Sustainability: A Step-by-Step Approach, UNEP and TU Delft, Paris, 2009.
[8] EPA, Life Cycle Design Guidance Manual: Environmental Requirements and the Product System, Environmental Protection Agency, Washington DC, 1993.
[9] VELEVA, V.; ELLENBECKER, M., Indicators of sustainable production: framework and methodology, Journal of Cleaner Production, Volume 9, Elsevier, 2001.
[10] DREHER, J. et al., General Motors Metrics for Sustainable Manufacturing, Laboratory for Sustainable Business, MIT Sloan Management, 14 May 2009.
[11] FORD, Product Sustainability Index, Ford of Europe, Köln, DE, 2007.
[12] MAYYAS, A. et al., Design for sustainability in automotive industry: A comprehensive review, Renewable and Sustainable Energy Reviews, v. 16, pp. 1845-1862, 2012.
[13] BREZET, H.; VAN HEMEL, C., Ecodesign: A Promising Approach to Sustainable Production and Consumption, UNEP, Paris, 1997.
[14] GRI, Sustainability Reporting Guidelines, version 3.1, Global Reporting Initiative, Amsterdam, 2011.
[15] WACKERNAGEL, M., Ecological Footprint and Appropriated Carrying Capacity: A Tool for Planning Toward Sustainability, Doctoral thesis, The University of British Columbia, Vancouver, Canada, 1994.
[16] GASG, Global Automotive Declarable Substance List, Version 1.0, Global Automotive Stakeholder Group Steering Group, 2013.
[17] RANFTL, R., Improving Business Productivity, APICS, Fraser Valley Chapter, 2008.
[18] CARVALHO, M.M.; RABECHINI JR., Fundamentos em Gestão de Projetos, 3rd ed., Atlas S.A. Publisher, São Paulo, 2011.
An Ontology-based Approach for Aircraft Maintenance Task Support

Wim J.C. VERHAGEN a and Richard CURRAN b
a PhD candidate, Air Transport & Operations, Delft University of Technology
b Chairholder, Air Transport & Operations, Delft University of Technology
Abstract. A relatively low level of digitalization in the maintenance domain, a reliance on legacy, paper-based work processes and systems, and a lack of information exchange across stakeholders work together to complicate the consistent execution and record keeping of maintenance tasks. This has a negative effect on the efficiency and costs of product support and phase-out. This paper moves towards a 'push-of-the-button' digital solution for capturing and using aircraft maintenance task knowledge, processes and history to support maintenance execution and prove continued airworthiness compliance. It proposes the use of Enterprise Knowledge Resources (EKRs): engineering task representations containing the required knowledge, process and outputs for a task, embedded within a semantic context model based on a Product-Process-Resource structure. A proof-of-concept application has been developed that incorporates a sample EKR within a knowledge management solution to support the capture, use and maintenance of the required elements for executing a specific maintenance task: the modification and detailed inspection of the main track downstop of the leading edge slats of the Boeing B737 aircraft. The developed system has the potential to improve the efficiency of maintenance task execution and record keeping. Future work includes extension of the proof of concept, such that it enables the user to automatically import knowledge from aircraft manufacturers as well as automatically record the results of maintenance task execution.
1. Introduction
Literature for the aircraft maintenance domain tends to focus on performance
measurement and optimization of maintenance processes [1] and on the relation
between maintenance and safety [2]. In marked contrast to the design and
manufacturing phases of the product lifecycle, literature regarding the development and
use of advanced information technology (IT) such as Product Lifecycle Management
systems (PLM) or Knowledge-Based Systems (KBS) for the maintenance lifecycle
phase is very limited. This is supported by Lee et al. [3], who note the low adoption of
PLM technology in maintenance when compared to other lifecycle phases, as shown in
Figure 1 under 'Service'.
A fair representation of current practice in aircraft maintenance is provided by Lampe et al. [4], who maintain that aircraft maintenance is currently supported by a mixture of paper and digital documentation as well as the tools, materials and parts required for the job. Within aircraft maintenance processes, an increasing share of the supporting documentation is offered digitally. This includes the Airplane Maintenance Manual (AMM), Maintenance Planning Document (MPD), Illustrated Parts Catalogue (IPC), Structural Repair Manual (SRM) and Service Bulletins (SB): all OEM documentation that can be offered through the OEM's web portal (e.g. Baker et al. [5]) or as part of OEM software [6].

However, a significant share of aircraft maintenance processes is still largely manual [3] and frequently paper-based [2, 7]. Similar findings are reported by Lampe et al. [4], who point out the labor-intensive manual documentation and check procedures at aircraft maintenance providers. The time spent searching for appropriate documentation can amount to 15-20% of a mechanic's total work time [4]. Consequently, a number of major issues remain to be addressed:
Legacy work processes & systems: the remaining aspects of a paper-based approach to aircraft maintenance lead workers to shortcut the process, as it takes too long to collect the relevant documents; safety and efficiency are thereby compromised [2].
Information exchange across stakeholders: various stakeholders hold different information necessary for the successful execution and record keeping of aircraft maintenance tasks. For instance, MyBoeingFleet.com can provide the OEM information, the FAA or EASA holds the regulatory information (Airworthiness Directives), and the airline holds engineering orders (EO) and maintenance records. This information needs to be exchanged and made available to the end user in an integral way.
Maintenance report keeping and data accuracy: proof-of-concept research
regarding the use of RFID tags to support automatic maintenance
documentation has been performed [4]. However, recent findings [7] suggest
that report keeping is still a manual job that has only partly transferred into
digital format. The manual entry of maintenance data is error-prone and may
cause issues with data accuracy and completeness.

A more structured approach to data, information and knowledge capture, storage
and use in aircraft maintenance processes may resolve these issues while aiding data-
driven research and improvement [8].
This paper consequently aims to make a contribution by moving towards a 'push-of-the-button' digital solution for capturing and using aircraft maintenance task knowledge, processes and history to support maintenance execution and prove continued airworthiness compliance. A proof-of-concept solution for the digitalization, use and maintenance of knowledge, processes and reports related to a specific maintenance task has been developed and is discussed as part of this paper. The maintenance task considers wing maintenance for the Boeing B737: modification and detailed inspection of the main track downstop assembly of the leading edge slats [9]. This task is associated with a revised Service Bulletin (SB) issued by the OEM, Boeing [9], and an FAA Airworthiness Directive (AD) [10]. In the remainder of the paper, the theoretical context is discussed first, followed by the development of context and task models to represent the engineering task related to the proof-of-concept solution. The implementation of this solution is then presented, followed by conclusions.

Figure 1: Adoption of PLM in the MRO domain [3]
2. Theoretical context
The application of knowledge within engineering systems is studied within the field of
Knowledge-Based Systems (KBS). KBS are systems that use an acquired set of knowledge to offer problem-solving advice (Expert Systems) or to solve tasks directly. A KBS typically comprises a structured knowledge base containing the required set of knowledge, acquisition and inference mechanisms to solve the task(s) at hand, and a user interface [11]. Though KBS have been developed for all phases of the product lifecycle, literature regarding KBS adoption in aerospace and general maintenance, repair and overhaul (MRO) is extremely scarce (save a few exceptions, e.g. Painter et al. [12]).
Recent work in the KBS domain has explored an ontology-based approach that
uses an engineering task representation as a central aspect in the development of
knowledge-based applications [13, 14]. In this work, the concept of a task is defined
using Schreiber et al. [15]s description of a task as 'a subpart of a business process that
represents a goal-oriented activity adding value to the organization; handles inputs and
delivers desired outputs in a structured and controlled way; consumes resources;
requires knowledge and other competences; is carried out according to given quality
and performance criteria; and is performed by responsible and accountable agents'. As
such, an engineering task includes the central aspects of input, activity / process, output
and goal. In the ontology-based approach [13, 14], the authors propose the use of an
Enterprise Knowledge Resource (EKR): a modular 'container' of knowledge elements,
process elements and cases that can be used to represent an engineering task by
modelling the task goal(s), inputs and process and capturing the outputs. Besides
EKRs, the ontology-based approach rests upon the development of domain-specific
metamodels based on the Product-Process-Resource (PPR) paradigm [16, 17] to
facilitate annotation of the EKRs for improved traceability and use.
The ontology-based approach has the potential to be extended to the maintenance
domain. In the following sections, the ontology-based approach is adopted for the
development of a KBS for the maintenance engineering task introduced before:
modification and detailed inspection of the main track downstop assembly of the
leading edge slats of the Boeing 737.
3. Modeling
To prepare the development of a proof-of-concept knowledge-based application for the
maintenance engineering task, a number of models are developed based on the
knowledge acquired from the Service Bulletin [9] and Airworthiness Directive [10].
Knowledge acquisition consisted of report analysis as well as maintenance engineer
interviews at a Dutch MRO provider to provide additional insights into the domain.
Based on the understanding of the relevant concepts and relations, models have been
developed. Modeling is discussed in two steps: modeling of the maintenance
engineering task using the EKR concept, and modeling of the domain context using the
PPR paradigm.
3.1. Modeling the maintenance engineering task using the EKR concept
The modification and inspection task for the B737 slat track main downstop can be described in IDEF0 terms using its inputs (the Service Bulletin [9] and Airworthiness Directive [10]), its output (a modified and inspected B737 slat track main downstop assembly), its controls (the Airworthiness Directive) and its mechanisms (mechanic, tooling). The AD serves as both input and control to the task: it offers input information, such as aircraft type applicability, and controls the task, for instance through the mandatory compliance time.
The subtasks are represented in Figure 2 as an A0 IDEF0 diagram, based upon the
task description as included in Boeing Service Bulletin 737-57A1302.


Figure 2: IDEF0 A0 diagram for B737 slat track main downstop subtasks

To model the maintenance task in preparation for development of the proof-of-
concept solution, the EKR concept has been used. An EKR class diagram has been
modeled for the maintenance domain and related tasks. It is shown in Figure 3 and
further explained below.

Enterprise Knowledge Resource: the 'container' level of the EKR contains
general attributes such as authorship as well as specific maintenance attributes,
including effective date, applicability, subject, unsafe condition and
compliance time. These attributes have been identified as common attributes
in Airworthiness Directives and Service Bulletins.
EKR_Knowledge: The EKR_Knowledge class has general attributes and
includes common maintenance attributes (effective date, applicability, subject,
unsafe condition, compliance time) as identified from regulatory documents
such as ADs and SBs. The Knowledge_Element class inherits the same
attributes. Instances of this class can be used to capture knowledge related to
the problem, mainly product knowledge such as drawings and specifications.
EKR_Process: the EKR uses a process representation for the maintenance
activities that must be completed to fulfil a maintenance task and comply with
any related ADs or SBs. Figure 2 shows the activities for the specific
maintenance task studied in this paper. The individual activities can be
modeled using the Process_Element class. As for the previous classes, the
common maintenance attributes are included into the EKR_Process and
Process_Element classes.
EKR_Case: a central case repository is set up that can hold the results from
the modification and inspection task. Individual case reports are filled into the
repository. The class for individual reports includes maintenance-specific
attributes. Besides the common maintenance attributes previously identified,
other report-specific attributes such as the maintenance visit number, aircraft
registration, flight hours and flight cycles of the aircraft, start date, completion
date, task status, order number and order description are included into the
class.
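A minimal sketch of this class structure is given below, assuming Python dataclasses. The attribute names follow the common maintenance attributes and case report attributes listed above; everything else (types, class layout) is an illustrative assumption, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MaintenanceAttributes:
    """Common attributes identified from Airworthiness Directives and SBs."""
    effective_date: str
    applicability: str
    subject: str
    unsafe_condition: str
    compliance_time: str

@dataclass
class CaseReport:
    """One report per executed maintenance task, filed in the repository."""
    common: MaintenanceAttributes
    visit_number: str
    aircraft_registration: str
    flight_hours: float
    flight_cycles: int
    start_date: str
    completion_date: str | None
    task_status: str
    order_number: str
    order_description: str

@dataclass
class EnterpriseKnowledgeResource:
    """'Container' level: general attributes plus the common maintenance
    attributes, shared with knowledge and process elements."""
    author: str
    common: MaintenanceAttributes
    knowledge_elements: list[str] = field(default_factory=list)
    process_elements: list[str] = field(default_factory=list)
    case_reports: list[CaseReport] = field(default_factory=list)
```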
3.2. Modeling the domain context: Domain ontology development
Figure 3: EKR class diagram (UML) for generic maintenance task

The second step towards the development of a proof-of-concept solution is to provide a knowledge structure that can be used to store the captured knowledge and can serve as the semantic backbone for the knowledge-based application. A domain-specific set of concepts and relationships (a domain ontology) has been developed. To elicit the
applicable concepts and relationships for the domain, various sources have been
employed. This includes the regulatory and OEM documents [9, 10, 18] as well as
literature [1, 3, 4, 7, 8, 19]. The high-level concepts of the PPR paradigm (Product,
Process and Resource) have been used as a starting point for domain ontology
development. These concepts have been extended into domain-specific class
hierarchies. In this section, excerpts of the domain-specific class hierarchies are given
to explain how the domain ontology is composed.
First of all, the domain-specific class hierarchy for the Product class is shown in Figure 4. The Part and Assembly classes are critical for representing aircraft products; they can be used to represent the product breakdown structure. Relative to the B737 maintenance task, Figure 4 includes the slat assembly (through aggregation), which comprises the slat track assembly and the slat can (track housing) assembly; the former contains the downstop assembly. These assemblies contain parts. To satisfy the requirements of the developed proof-of-concept solution, the parts that make up the downstop assembly have been added to the part hierarchy and the aggregation relationships are shown.

Figure 4: Product class hierarchy
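The Part/Assembly aggregation of Figure 4 can be illustrated as a simple composite structure. The sketch below mirrors the slat breakdown described in the text; the individual part names are placeholders, since the actual part list of the downstop assembly is not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str

@dataclass
class Assembly:
    name: str
    parts: list[Part] = field(default_factory=list)
    subassemblies: list["Assembly"] = field(default_factory=list)

downstop = Assembly("Downstop assembly",
                    parts=[Part("downstop (placeholder)"),
                           Part("attachment bolt (placeholder)")])
slat_track = Assembly("Slat track assembly", subassemblies=[downstop])
slat_can = Assembly("Slat can (track housing) assembly")
slat = Assembly("Slat assembly", subassemblies=[slat_track, slat_can])
```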
Second, the class hierarchy for the Process class relative to the maintenance proof-
of-concept development is shown in Figure 5. The Process class has been extended to
include Maintenance_Process, which in turn is a parent for the Inspection_Process,
Modification_Process and Repair_Process classes. As these are all subclasses of the
parent Process class, they inherit the aggregation with the Activity class (in other
words, all of the process classes contain one or more activities).


Figure 5: Process class hierarchy

The Resource class hierarchy is shown in Figure 6. It contains a number of general resource types, which have been extended with maintenance concepts. In particular, the Document_Resource class is of note, as it includes the various document types from the regulator and OEM side.

Figure 6: Resource class hierarchy

The maintenance domain ontology has been used to structure the captured knowledge and will be used to annotate (elements of) the developed solution. This is further explained in Section 4.
4. Implementation of a proof-of-concept solution
This section describes the development of a proof-of-concept knowledge-based
application for the previously mentioned maintenance task: the modification and
detailed inspection of the main track downstop of the leading edge slats of the Boeing
B737.
One EKR has been developed and implemented for this maintenance domain
proof-of-concept. To implement the EKR and the annotation models presented in the
previous section, a solution has been developed on the basis of the Ardans Knowledge
Maker (AKM) knowledge management tool [14]. This tool consists of a web-based
interface on top of a knowledge base implemented in MySQL. Technological
alternatives include semantic wiki tools such as Confluence [20] and dedicated
maintenance management tools (e.g. Maintenix [21]). AKM has been chosen as it
offers the possibility to implement semantic structuring and ontology techniques while
offering the record keeping facilities necessary in the maintenance domain. The web-
based nature of the tool allows for user interaction wherever an internet connection and
device is available.
The following implementation architecture has been devised; see Figure 7.
A number of AKM models have been developed for the EKR class and its
subsidiary classes (knowledge, knowledge element, process, process element, case and
case report). For each class, a single model is made that contains fields. These fields
represent the attributes of the classes. The relations between the classes are represented
through the addition of direct links between related AKM models. Some automated
functionality is added by using the XPATH language to identify and fill model fields.
For instance, XPATH expressions are used to let the knowledge, knowledge element,
process and process element models inherit the common maintenance attributes
(effective date, applicability, subject, unsafe condition, compliance time) from the EKR
container class. Furthermore, metadata such as author, date and status is automatically
added. XPATH is also used to facilitate the implementation of 'templates' that
guarantee a consistent representation of model instances.
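The following sketch illustrates the idea behind this XPATH-based inheritance on a generic XML analogue: a child model pulls a common maintenance attribute from its EKR container. This is not AKM's actual model format; the element names and values are invented for illustration.

```python
from lxml import etree

doc = etree.fromstring(
    "<ekr>"
    "  <effective_date>2010-01-01</effective_date>"
    "  <process_element name='downstop inspection'/>"
    "</ekr>")

element = doc.find("process_element")
# Relative XPath: go up to the container element and read its field
inherited = element.xpath("ancestor::ekr/effective_date/text()")[0]
print(inherited)  # 2010-01-01
```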
The AKM models are used to generate knowledge articles; they are in effect
instances of the EKR classes implemented in AKM. The process of creating articles
and generating the article content is currently largely manual. Though the AKM models
take away much work by offering a consistent representation and filling some article
fields automatically, the remaining fields must be filled manually with the appropriate
knowledge. Figure 8 gives an example of an implemented EKR article.
As a subsidiary part of the general EKR structure, the case report model and associated
articles are particularly important from the perspective of documentation management
for maintenance compliance. The format of these case reports can easily be changed to
fit company specifications. The AKM tool includes functionality to export articles and
article information into Word or Excel directly. This makes it possible to completely
digitalize the generation, storage and management of maintenance documentation.


Figure 7: Implementation architecture
To enable the search and retrieval of EKRs in the maintenance domain, semantic
annotation is used. Annotation of EKRs and its subsidiary elements is achieved through
applying the PPR maintenance domain ontology concepts and relationships to the EKR
classes. An (only partially complete) example for the slat main track downstop
assembly EKR is given in Figure 9. In the figure, only the EKR is annotated to
maintain clarity. In reality, all classes are annotated. In practice, annotation is achieved
through article tags in AKM, which associate an article (be it an EKR article, a
knowledge element article, a process element article or a case report) with a number of
semantic tags.
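The tag-based annotation and retrieval mechanism can be illustrated as follows, assuming tags drawn from the PPR domain ontology concepts. The article names, tag strings and lookup function are illustrative assumptions, not AKM functionality.

```python
articles = {
    "EKR: slat track downstop modification/inspection": {
        "Product: Downstop_Assembly",
        "Process: Modification_Process",
        "Process: Inspection_Process",
        "Resource: Service_Bulletin",
    },
    "Case report: maintenance visit 0001": {
        "Product: Downstop_Assembly",
        "Process: Inspection_Process",
    },
}

def find_by_tag(tag: str) -> list[str]:
    """Return all articles annotated with the given ontology concept."""
    return [name for name, tags in articles.items() if tag in tags]

print(find_by_tag("Process: Inspection_Process"))  # both articles match
```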
Figure 8: Example of an implemented EKR article for the maintenance case study (slat track downstop assembly modification and inspection EKR)

Figure 9: Semantic annotation of EKR
5. Conclusions
With respect to the functionality of the developed proof-of-concept solution relative to
the objective (capturing and using aircraft maintenance task knowledge, processes and
history to support maintenance execution and prove continued airworthiness
compliance), the following observations can be made.
First, the solution provides knowledge life-cycle management through the EKR
approach. In particular, the knowledge and process elements can be captured, used,
maintained, updated and retired independently of each other. There is also the
possibility to track the change of knowledge through the retention of historical
knowledge articles.
Second, the solution facilitates knowledge use through the exploitation of model
'templates' that ensure a consistent representation of knowledge, processes and case
reports. The web-based character of the AKM tool makes it feasible for end users (e.g.
mechanics) to perform a task and immediately fill in a digital report in a consistent
way. Furthermore, the use of a dedicated knowledge base, with the associated
provisions to ensure the availability of trustworthy knowledge, assures users that the
right knowledge is available at the right time. This can also support staff in execution
of maintenance processes. It is useful to have discrete process element and knowledge
element representations. This allows a mechanic or engineer to find and inspect exactly
those elements that he/she needs support on. Finally, the current drawbacks of the
paper-based approach are avoided as documentation is stored digitally. With some
additional functionality (e.g. electronic signatures), the solution can fairly easily be
used to manage airworthiness compliance.
Third, the solution addresses knowledge transparency by including semantic annotation and provision for knowledge explication.
To sum up, the developed solution improves upon the current approach by
facilitating a digital approach to maintenance task support and record keeping.
Consistency of record keeping is maintained through the application of models within a
knowledge base. The contents of reports can be checked manually, with the potential
for automatic checks and completion through future extension of the solution (see
below). The solution allows for digital data exchange between various stakeholders.

There are a number of disadvantages and challenges related to the currently implemented solution. First of all, the solution requires manual interaction, primarily in setting up EKRs but also in completing maintenance reports. Despite the low time needed to implement a single EKR (up to a few hours), the sheer number of ADs and SBs available for the various B737 types (up to 330 per type) implies a significant investment of resources to set up a complete knowledge base with EKRs for each maintenance task. There is, however, some potential to automate knowledge article generation by linking AKM with AD and SB information retrieved from myboeingfleet.com and FAA/EASA databases, as information in XML format can be imported to and exported from AKM. The completion of case reports also requires manual input; similarly, automated case report generation is technically feasible by linking AKM with external maintenance programs. These options will be explored in future research.
References
[1] Garg, A. and S.G. Deshmukh, Maintenance management: literature review
and directions, Journal of Quality in Maintenance Engineering 12 (2006): pp.
205 - 238.
[2] Wartan, S., Sharing Safety Knowledge for Aircraft Maintenance. Delft
University of Technology, M.Sc. thesis, (2010).
[3] Lee, S.G., Y.S. Ma, G.L. Thimm, and J. Verstraeten, Product lifecycle
management in aviation maintenance, repair and overhaul, Computers in
Industry 59 (2008): pp. 296-303.
[4] Lampe, M., M. Strassner, and E. Fleisch, A Ubiquitous Computing
environment for aircraft maintenance. Proceedings of the 2004 ACM
symposium on Applied computing, Nicosia, Cyprus, 2004, ACM.
[5] Baker, M., T. Dowling, W. Martinez, T. Medejski, D. Pedersen, and D. Rockwell, New Enhanced Service Bulletins, Aero Quarterly, Boeing, 2006, pp. 12-15.
[6] Airbus. (2012). AIRMAN. Retrieved 13-11-2012, from
http://www.airbus.com/innovation/proven-concepts/in-fleet-support/airman/.
[7] Burhani, S., Compliance during Aircraft (Component) Redeliveries. Delft
University of Technology, M.Sc. thesis, (2012).
[8] Jagtap, S. and A. Johnson, In-service information required by engineering
designers, Research in Engineering Design 22 (2011): pp. 207-221.
[9] Boeing, Boeing Service Bulletin 737-57A1302, Revision 1, 2010.
[10] FAA, Airworthiness Directive FAA AD 2007-18-52, 2007.
[11] Studer, R., V.R. Benjamins, and D. Fensel, Knowledge Engineering:
Principles and methods, Data and Knowledge Engineering 25 (1998): pp. 161-
197.
[12] Painter, M.K., M. Erraguntla, G.L. Hogg Jr, and B. Beachkofski, Using
simulation, data mining, and knowledge discovery techniques for optimized
aircraft engine fleet management. Proceedings of the Winter Simulation
Conference (WSC), Monterey, CA, USA, 2006.
[13] Bermell-Garcia, P., W.J.C. Verhagen, S. Astwood, K. Krishnamurthy, J.L.
Johnson, D. Ruiz, G. Scott, and R. Curran, A framework for management of
Knowledge-Based Engineering applications as software services: Enabling
personalization and codification, Advanced Engineering Informatics 26
(2012): pp. 219-230.
[14] Verhagen, W.J.C., P. Bermell-Garcia, P. Mariot, J.-P. Cotton, D. Ruiz, R.
Redon, and R. Curran, Knowledge-based cost modelling of composite wing
structures, International Journal of Computer Integrated Manufacturing 25
(2012): pp. 368-383.
[15] Schreiber, G., H. Akkermans, A. Anjewierden, R. de Hoog, N. Shadbolt, W.
Van de Velde, and B. Wielinga, Knowledge engineering and management: the
CommonKADS methodology, MIT Press, Cambridge, MA, 1999.
[16] Butterfield, J., W. McEwan, P. Han, M. Price, D. Soban, and A. Murphy,
Digital Methods for Process Development in Manufacturing and Their
Relevance to Value Driven Design. Air Transport and Operations -
Proceedings of the Second International Air Transport and Operations
Symposium 2011, Delft, The Netherlands, 2012, IOS Press.
[17] Curran, R., J. Butterfield, Y. Jin, R. Collins, and R. Burke, Value-Driven Manufacture: Digital Lean Manufacture. R. Blockley and W. Shyy (eds.), Encyclopedia of Aerospace Engineering, John Wiley & Sons, Ltd, 2010.
[18] FAA, Airworthiness Directive FAA AD 2011-06-05, 2011.
[19] Tsang, A.H.C., Condition-based maintenance: Tools and decision making,
Journal of Quality in Maintenance Engineering 1 (1995): pp. 3-17.
[20] Confluence. (2013). Team Collaboration Software - Atlassian Confluence.
Retrieved 07-08-2013, from
http://www.atlassian.com/software/confluence/overview/team-collaboration-
software.
[21] Mxi Technologies. (2013). Mxi Technologies - Aviation Maintenance
Management Software. Retrieved 08-07-2013, from
http://www.mxi.com/products/maintenix/overview/.
A Predictive Method for the Estimation of Material Demand for Aircraft Non-Routine Maintenance

M. ZORGDRAGER a, R. CURRAN a, W.J.C. VERHAGEN a, B.H.L. BOESTEN b and C.N. WATER b
a Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629 HS Delft, the Netherlands
b KLM Royal Dutch Airlines Engineering & Maintenance, Postbus 7700, 1117 ZL Schiphol, the Netherlands

Abstract. A method is developed to forecast the material demand caused by aircraft non-routine maintenance. Non-routine material consumption is linked to scheduled maintenance tasks to gain insight into demand patterns. Subsequently, a suitable prediction model can be applied to forecast material demand. To test this approach, a structural part selection of the Boeing 737NG fleet of KLM Royal Dutch Airlines has been sampled to form a test case. Several regression and stochastic models have been applied to the part selection to judge model fit and validity. Resulting from this analysis, the Exponential Moving Average (EMA) was chosen as the superior model for its small error values and its ability to capture general demand trends. The forecast method incorporating the EMA model has been validated by forecasting and comparison against an independent dataset. Concluding, the non-routine maintenance forecast method, comprising non-routine material consumption forecasts linked to scheduled maintenance tasks, can be used to produce material predictions expressed in probability and average quantity figures for upcoming maintenance checks.

Nomenclature

t = time
n = number of periods
E_a = average demand
E_i = actual demand
1. Introduction
Due to economic effects such as rising fuel costs and increasing competition from Maintenance, Repair and Overhaul (MRO) organizations in low-wage countries, MRO organizations in Western Europe are continuously striving to minimize costs while delivering maximum service. Cost savings may, however, in no way compromise the service level of aircraft. This is an everlasting trade-off between availability and costs, which is also reflected in the supply chain management of MRO organizations. From the production point of view it is crucial to have all required parts in stock, while from the financial side it is important not to have too many or too costly parts in stock, as this represents dead capital. Data on material consumption, aircraft utilization and an airline's maintenance program can be captured and used to estimate material demand in advance of maintenance checks, allowing for more optimal purchasing of parts and an associated shift from reactive, stock-level oriented planning to proactive, maintenance activity based planning.
The maintenance of aircraft can be divided into routine and non-routine maintenance. Routine maintenance consists of a standard number of maintenance tasks, performed when aircraft reach a certain number of flight hours, flight cycles or a certain calendar age. These routine maintenance tasks are bundled together in packages such as FA and FC checks. Routine maintenance is planned well ahead and the availability of required routine parts approaches 100%. Non-routine maintenance is additional maintenance found during routine maintenance tasks, which is not included in the scheduled task requirements [1]. Non-routine maintenance shows unpredictable behavior and the availability of parts is therefore much lower than for routine tasks.
It is a considerable challenge to bring some predictability to the material demand caused by non-routine maintenance tasks, due to the large number of factors contributing to material demand, the sporadic nature of non-routine material demand and the large number of different part numbers present in the aircraft. Currently, MRO organizations such as KLM Engineering & Maintenance (E&M) do give forecasts for non-routine maintenance tasks. However, these forecasts are based only on the number of required man-hours and are not given at a material-specific level. In case a part number required for non-routine maintenance is not available in the stock of the MRO, it has to be ordered from a vendor during the maintenance check. Hence there is a risk that the part cannot be delivered and installed within the time of the maintenance check, with an Aircraft On Ground (AOG) as a result. MRO organizations require methods and/or models to accurately predict material demand in advance of the maintenance check, to reduce the risk of expensive AOG orders and to allow for more accurate inventory management.
Consequently, the goal of the present research is to develop a method and tool able
to predict material demand for aircraft non-routine maintenance. To be able to sample
and test the proposed method and associated models, the research scope is limited to
ATA structure chapters 51-57 for the Boeing B737 NG aircraft (encompassing the
main fuselage parts), for which a dataset is available from KLM Engineering &
Maintenance (E&M).
2. Literature
The spare parts demand for aircraft maintenance can be characterized using time
intervals and quantity variation. Typically, the demand type can be identified by
considering the Coefficient of Variation (CV) and Average inter-Demand Interval (ADI)
values [2]; see Equations (1), (2) and (3), where t is time, n is the number of periods,
E_a is the average demand and E_i is the actual demand.
$$ADI = \frac{\sum_{i=1}^{n} t_i}{n} \qquad (1)$$

$$CV = \frac{1}{E_a}\sqrt{\frac{\sum_{i=1}^{n}\left(E_i - E_a\right)^2}{n}} \qquad (2)$$

$$E_a = \frac{\sum_{i=1}^{n} E_i}{n} \qquad (3)$$

The ADI and CV values can be used to identify demand patterns [2]:
- Slow moving, or smooth demand: regular demand with a limited variation in quantity
- Intermittent demand: extremely sporadic demand, with no accentuated variability in the quantity of the single demand
- Erratic demand: large variation in quantity, but constant distribution over time
- Lumpy demand: great number of zero-demand periods and large variation in quantity.

These demand patterns (or types) are shown in Figure 1. Ghobbar et al. [3] suggest cut-
off values for ADI (1.32) and CV (0.49) to categorize demand as having a constant
distribution over time (ADI < 1.32) or a variable distribution over time (ADI > 1.32),
with limited variation in quantity (CV < 0.49) or a large variation in quantity (CV >
0.49). The cut-off values are given in Figure 1.
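
To make this classification concrete, the following is a minimal Python sketch (our own illustration, not part of the original paper; the function name, default thresholds and example series are invented) that computes ADI and CV in the sense of Equations (1)-(3) and applies the cut-off values of Ghobbar et al. [3]:

```python
import math

def classify_demand(demand, adi_cut=1.32, cv_cut=0.49):
    """Classify a per-period demand history as smooth, intermittent,
    erratic or lumpy, using the ADI/CV cut-offs of Ghobbar et al. [3]."""
    n = len(demand)
    nonzero_idx = [i for i, q in enumerate(demand) if q > 0]
    if len(nonzero_idx) < 2:
        raise ValueError("need at least two non-zero demand periods")

    # ADI: average inter-demand interval, i.e. the mean number of
    # periods between successive non-zero demands (Equation (1))
    intervals = [b - a for a, b in zip(nonzero_idx, nonzero_idx[1:])]
    adi = sum(intervals) / len(intervals)

    # CV: standard deviation of demand over its mean (Equations (2)-(3))
    e_a = sum(demand) / n
    cv = math.sqrt(sum((e - e_a) ** 2 for e in demand) / n) / e_a

    if adi > adi_cut:
        return "lumpy" if cv > cv_cut else "intermittent"
    return "erratic" if cv > cv_cut else "smooth"

# Many zero-demand periods and large quantity variation -> lumpy
print(classify_demand([0, 0, 4, 0, 0, 0, 1, 0, 12, 0, 0, 2]))  # lumpy
```

For the non-routine material considered in this study, such a check would be expected to return "lumpy" or "intermittent" for most part numbers.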
Non-routine maintenance (and associated material demand) is characterized by
many instances of zero-demand and significant variation in quantity. As such, the
demand classification of non-routine material is typically intermittent or lumpy; this
assumption will be checked for the dataset considered in this study. Traditional
stochastic forecasting methods give accurate results with smooth, regular demand, but
provide inaccurate results with intermittent and lumpy data [4], as the special role of
zero values is ignored in analyzing and forecasting demand. Furthermore, traditional
forecasting methods assume a normal, classic bell-shaped curve between the likelihood
of the value and the demand. However, this normal distribution assumption is not valid
for intermittent and lumpy demand [5].
Ghobbar et al. [3] consider a range of forecasting models for this demand type.
Based on the characteristics of the dataset (see Section 4), a variety of regression and
stochastic models have been selected for subsequent analysis of model fit. The
regression models taken into account for evaluation are a weighted mean and linear,
exponential and polynomial models, while the stochastic models consist of Single
Exponential Smoothing, Moving Average, Exponential Moving Average, Savitzky-
Golay filter, Croston's method and the Syntetos-Boylan Approximation [3, 6, 7].
3. Method
The first step in the general method for the prediction of material demand for non-
routine maintenance is to link non-routine material demand with scheduled
maintenance tasks. An overview of the relation between non-routine material demand
and its linked scheduled maintenance task is provided in Figure 2.


Figure 2: Link between FC-check, MRI maintenance tasks and non-routine material orders
A maintenance check such as the FC-check consists of a set of scheduled
maintenance tasks given out by Boeing in the Maintenance Planning Document (MPD)
or given out by the MRO organization as Maintenance Required Inspection (MRI)
tasks. Each scheduled task contains job elements provided as Routine Jobcards (RC). If
during the performance of the maintenance task a part defect such as corrosion or
cracks is found, it is written down on a Non-Routine Jobcard (NRC). In order to restore
the functionality of the part, material is required, given as the non-routine material.

Figure 1: Spare part classification [2]

By linking non-routine material demand to scheduled maintenance tasks, insight is
acquired into which parts are inspected when, and hence which parts may be demanded.
Furthermore, data points are acquired for inspections with zero demand by looking up
how many times the maintenance task has been performed. Subsequently, the inspection
interval of the scheduled maintenance task has been looked up, from which the individual
tasks can each be assigned to a maintenance check. With this information, a list of part
numbers with accompanying probability and average quantity figures can be obtained
per maintenance check.
As lumpy demand by definition consists of a small number of nonzero data points,
a binning strategy has been incorporated into the method to enable reliable forecasting
of demand. If too few data points for a specific individual part number are present, the
part number is binned into a group of similar part numbers and the demand pattern of
the group is analyzed. By comparing the data points of the individual part number with
the data points of the part group a prediction can still be made. When too few data
points are present for the similar part group, the part number is subsequently binned
into a Job Instruction Card (JIC) zone and the demand pattern of all parts in the JIC
Zone is analyzed. It has to be emphasized that an individual part is always part of a
similar part group and that specific parts can be located in different JIC zones.
Different JIC zones never overlap. This grouping strategy is visualized in Figure 3.

Figure 3: Grouping strategy of individual part numbers into (1) similar part group, or (2) JIC zone part group
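
A minimal Python sketch of this grouping fallback follows (our own illustration; the minimum number of data points is a hypothetical threshold, since the paper does not state the value used, and simple dictionary lookups stand in for the real maintenance database):

```python
MIN_POINTS = 5  # hypothetical threshold; the paper does not state a value

def select_demand_history(part_no, part_history, group_history, jic_history,
                          min_points=MIN_POINTS):
    """Pick the demand history to forecast from, following Figure 3:
    individual part number -> similar part group -> JIC zone part group.

    Each *_history argument maps a part number to a list of demand
    data points at that binning level. Returns (level, history) so the
    caller knows which bin the prediction is based on.
    """
    history = part_history.get(part_no, [])
    if len(history) >= min_points:
        return "part", history

    # Too few points: fall back to the group of similar part numbers
    group = group_history.get(part_no, [])
    if len(group) >= min_points:
        return "similar part group", group

    # Last resort: all parts inspected in the same JIC zone
    return "JIC zone", jic_history.get(part_no, [])
```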
4. Case study: Boeing 737NG maintenance data
To analyze the suitability of the developed method, a research sample has been
constructed using maintenance data related to structural parts (ATA chapters 51-57) of
the Boeing 737NG fleet maintained at KLM E&M. The data is first checked for
completeness, after which the various regression and stochastic models have been
applied to the dataset. Finally, the results were analyzed leading to selection of the
most suitable forecasting model to be used in conjunction with the non-routine material
demand forecasting method. This selection has been validated using an independent
data sample.
4.1. Data analysis
Maintenance data from KLM's fleet of 46 Boeing 737NGs has been collected from the
recently implemented system Maintenix, as well as from the Aircraft Maintenance
Program of KLM, and imported into a Microsoft Excel database. As the Maintenix
software was only implemented in January 2010, a limited amount of data is available
for research, as is shown in Figure 4.


Figure 4: KLM B737NG fleet age including period implemented in Maintenix
A total of 44 FC-checks has been stored in Maintenix since its implementation in January 2010,
divided over six different checks, as shown in Figure 5. The inspection interval of each
FC-check is given by either 24 calendar months, 6000 flight hours, or 4000 flight
cycles, whichever boundary condition is reached first.
After cleaning the data, material ordered for non-routine maintenance work has
been linked to scheduled maintenance tasks. By looking up the inspection intervals of
each maintenance task, it is possible to determine what material demand can be
expected in which maintenance checks. This furthermore provides valuable information
when specific parts have been inspected but have not been replaced, representing zero
demand data points. Using the demand intervals and quantities, it has proven possible
to characterize material demand per part number.
To improve anticipated forecasting performance and focus research efforts, a
Pareto chart of the structural part numbers has been made based on the cost impact,
defined as the installed part quantity times the unit cost¹. The top 10 cost impact part
numbers, representing 60% of the total structural material costs, have been selected to
test the prediction models on. The material demand is lumpy (ADI > 1.32, CV > 0.49)
for all of these part numbers.
For these part numbers both the probability per FC-check, given as the hit-rate per
FC-check, and the average quantity per FC-check have been calculated based on
historical consumption data. Subsequently, the various regression and stochastic
models have been applied to identify demand patterns for both the probability and
average quantity plots. An example is given in the following Section.
4.2. Example of analysis: Forecasting demand for a single part number
As an example, demand for cabin windows is forecasted using the various regression
and stochastic methods. Following the proposed forecasting method, the scheduled
maintenance task for which cabin windows are replaced is first determined, as well as
the interval of the task and the FC-checks in which it is embedded. This information is

¹ Given the sensitive nature of cost information, this Pareto chart is not represented here.
provided in Table 1. The scheduled MRI task is performed in all FC-checks and hence
demand can be expected in every FC check.

Table 1: Scheduled MRI tasks and intervals for which cabin windows are replaced


Hereafter, historical maintenance data is used to calculate both the probability per
FC-check and average quantity per FC-check, which the models are subsequently
applied to. The stochastic probability plot is given in Figure 6.


Figure 6: Stochastic probability plot per FC-check for the cabin windows (part number 140N2139-1)
In order to evaluate the accuracy of each model based on the consumption data
included in the database, the Sum of Squared Error (SSE) and Root Mean Squared
Error (RMSE) are calculated for all FC-checks for both probability and quantity
forecasts. Equations (4) and (5) express the SSE and RMSE. The forecast error e_t is
given in Equation (6), where Y_t is the actual quantity at time t and F_t represents the
forecasted value for time t.

Figure 5: Number of FC-checks performed with a structural finding

$$SSE = \sum_{t=1}^{n} e_t^2 \qquad (4)$$

$$RMSE = \sqrt{\frac{\sum_{t=1}^{n} e_t^2}{n}} \qquad (5)$$

$$e_t = Y_t - F_t \qquad (6)$$
The evaluation results for the cabin windows are shown in Table 2.

Table 2: Accuracy of forecasting models for cabin windows

Similar analyses have been performed for the demand probabilities and quantities
of the top 10 part numbers by cost impact. As mentioned, various regression and
stochastic models have been applied to forecast demand probability and quantity. For
each of these models, the forecasting errors have been evaluated.
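
As an illustration of this evaluation step, the following Python sketch (our own; the smoothing constant and the example series are assumptions, since the paper does not report the EMA parameter used) produces one-step-ahead exponentially weighted forecasts and evaluates them with the SSE and RMSE of Equations (4)-(6):

```python
import math

def ema_forecast(history, alpha=0.3):
    """One-step-ahead Exponential Moving Average forecasts.
    Returns a list f where f[t] is the forecast for history[t]; the
    first observation seeds the average. alpha is an assumed value."""
    forecasts = [history[0]]
    ema = history[0]
    for y in history[1:]:
        forecasts.append(ema)              # forecast made before observing y
        ema = alpha * y + (1 - alpha) * ema
    return forecasts

def sse_rmse(actual, forecast):
    """Sum of Squared Errors and Root Mean Squared Error,
    per Equations (4)-(6) with e_t = Y_t - F_t."""
    errors = [y - f for y, f in zip(actual, forecast)]
    sse = sum(e ** 2 for e in errors)
    return sse, math.sqrt(sse / len(errors))

# Example: hit-rate (probability) of a part over successive FC-checks
probabilities = [0.6, 0.2, 0.5, 0.0, 0.4, 0.3]
f = ema_forecast(probabilities)
print(sse_rmse(probabilities, f))
```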
4.3. Results & Validation
When considering the overall forecasting results and errors, the regression
forecasting models have turned out to be unreliable in predicting non-routine material
demand per FC-check. This is caused by the fact that not every part is inspected in all
FC-checks due to the different inspection intervals of the maintenance tasks. The only
regression model that has enough degrees of freedom to capture this reactiveness is the
5th degree polynomial. However, this forecasting model gives unrealistic values when
extrapolating.
The stochastic models show significantly better forecasts. Due to their high
reactiveness, these models capture the irregular demand patterns. For the
stochastic forecasting models, a clear distinction can be seen between the MA models
(MA, EMA, SG) and the SES models (SES, Croston, SBA) in all plots. The MA
models are reactive and approximately follow the path of the actual observed values.
Moreover, the MA models are able to capture general trends rather than simply
connecting historical values. The SES model as used in this research gives the lowest
prediction errors, which can be explained by the smoothing factor of one that was used.
As the calculated average probabilities and quantities per FC-check are based on actual
historical data, it makes no sense to scale down the values using a smoothing constant.
Therefore, the smoothing factor alpha was chosen as one, with the consequence that
the SES model exactly follows the historical data.
The EMA method has been chosen as the most suitable model for use in the
forecasting method, given its low error values for the part number selection and ability
to capture general demand trends. The EMA model has subsequently been verified and
validated by forecasting material demand for an FC07 check at KLM. This check is
independent from the previously introduced dataset. For this specific FC07 check,
seven part numbers have been demanded that were included in the part selection of this
research. Using the forecasting method and the EMA model, demand for five of seven
parts has been forecasted with a probability above 50% and with specific quantities.
Moreover, all parts that have been forecasted by the EMA method with a probability
above 50% have indeed been demanded at or near the quantities forecasted.
5. Conclusion
In this research a method has been developed to forecast material demand related to
non-routine maintenance tasks. To test the methodology, a part selection was made
based on the cost impact, defined by the installed part quantity times the unit cost. The
top 10 part numbers amounting to 60% of the total structural part costs were selected as
test case. Historical maintenance data has been consulted to calculate the probability of
demand per maintenance check. Subsequently a variety of forecasting models have
been applied to the data. The stochastic models have shown satisfactory results. The
Exponential Moving Averages (EMA) model has been chosen as the best forecasting
model as it produces low error values whilst still being able to capture general demand
trends. For validation of the EMA model, an FC07 check which was not included in the
original dataset has been used for forecasting material demand, after which the
predictions have been compared to the actual demand. Using the forecasting method
and the EMA model, demand for five of seven parts has been forecasted with a
probability above 50% and with specific quantities. Moreover, all parts that have been
forecasted by the EMA method with a probability above 50% have indeed been
demanded at or near the quantities forecasted. In conclusion, it is shown that it is
indeed possible to bring a measure of predictability into the demand for parts due to non-
routine maintenance by linking it to the scheduled maintenance tasks.

A recommendation following from this research is to investigate the demand
distributions per specific FC-check further. In this research the average values per
FC-check have been used, which might include data outliers. Furthermore, if
maintenance data of different operators are used, a different distribution of flight hours,
flight cycles and aircraft age might be seen per maintenance check. Moreover, the
scheduled maintenance tasks embedded in FC-checks might vary per operator. It is
therefore recommended to link non-routine material demand solely to scheduled
maintenance tasks when comparing different operators.
References
[1] Aungst, J., M.E. Johnson, S.S. Lee, D. Lopp, and M. Williams, Planning of
Non-routine Work for Aircraft Scheduled Maintenance. Proceedings of the
2008 IAJC-IJME International Conference, Purdue University, 2008.
[2] Haneveld, M., Inventory Planning and Control of Expendable Spare Parts at
KLM E&M, University of Twente, Faculty of Mechanical Engineering,
Twente, the Netherlands, 2005.
[3] Ghobbar, A.A. and C.H. Friend, Evaluation of forecasting methods for
intermittent parts demand in the field of aviation: a predictive model,
Computers & Operations Research 30 (2003): pp. 2097-2114.
[4] Rahman, M.A. and B.R. Sarker, Intermittent demand forecast and inventory
reduction using Bayesian ARIMA approach. Proceedings of the 2010
International Conference on Industrial Engineering and Operations
Management Dhaka, Bangladesh, 2010.
[5] Smart, C.N., Accurate intermittent demand forecasting for inventory
planning: new technologies and dramatic results, 2005. Retrieved 13/05/2012, from
http://smartcorp.com/pdf/Intermittent_Demand_Forecasting_WhitePaper.pdf.
[6] Croston, J.D., Forecasting and stock control for intermittent demands,
Operational Research Quarterly 23 (1972): pp. 289-303.
[7] Syntetos, A., Forecasting of intermittent demand (unpublished PhD
dissertation), Buckinghamshire Business School, Brunel University, UK, 2001.


A Modelica-Based Modeling, Simulation
and Knowledge Sharing Web Platform
Li Wan a,1, Chao Wang a, Tifan Xiong a and Qinghua Liu a

a CAD Center, Huazhong University of Science & Technology, Wuhan, China, 430074

Abstract. In order to adapt to the changes brought by the mobile office, reduce the initial cost of modeling and
simulation, share knowledge resources, and attract more customers through the resulting platform-community
effect, WebMWorks is presented in this paper. Deployed on a cloud-computing platform, it provides users
with a low-cost Modelica-based modeling, simulation and knowledge-library sharing service for multi-domain
physical systems. It is web-based and supports multi-tenancy, collaborative design, textual and visual
modeling, knowledge publishing, leasing and purchasing, model commenting and communication between
users. Adopting SOA, the basic function components of WebMWorks are packed into stateless service
composites, which can be deployed on a cloud-computing platform and extended easily. WebMWorks is
assembled from these composites and provides its service online through an RIA based on the browser.
Keywords. Modeling and Simulation, Knowledge Sharing, Web Platform, Cloud Computing
Introduction
The greatest stumbling block for enterprises is a shortage of innovation, the engine of
development, due to their lack of innovation tools or of a sufficient accumulation of knowledge.
For reasons of money or time, it is hard for enterprises, especially SMEs (Small or
Medium Enterprises), to build an innovation tool and a knowledge library of a certain
scale by themselves. Traditional modeling and simulation tools run locally and have to
download the knowledge models, which may result in a leak of knowledge. Users of
traditional tools therefore find it hard to obtain enough knowledge models, even at a high
price, and knowledge is difficult to reuse and share to a greater degree.
Cloud computing promises huge benefits for enterprises. Pay-per-use payment models,
dynamic scaling of services, multi-task concurrent processing and the outsourcing of
infrastructure lead to better resource utilization and lower cost. Based on cloud
computing, software in the SaaS or PaaS mode attracts more users and is easy to
manage and maintain. It has become the new direction of software development.
In this trend, transforming the traditional innovation tool into a web application based on
a cloud-computing platform, providing services at a low initial cost, has become the
inevitable choice for suppliers of innovation tools. Meanwhile, because knowledge models
are stored, modeled and simulated on the cloud server, leaks from knowledge sharing are
avoided. In this way, knowledge owners can share their knowledge for a fee without
worrying about leaks when it is being reused.
Up to now, most of the web-based modeling and simulation tools are still prototype
systems in the research or experimental stage, as presented by Eva-Lena [2] and Oscar [4].
Existing knowledge modeling tools on the web, such as the CADren platform [5] of Autodesk,
only support knowledge modeling in a specific domain in a specific innovation design
period. OMWeb [6], a Modelica-based modeling and simulation tool on the web, only

¹ Corresponding Author.
supports textual modeling. Knowledge modelers have to be specialists in both domain
knowledge and the Modelica language, and it is very hard to build complex system models.
Aiming to solve these problems, WebMWorks is presented in this paper. Our purpose
is to build a web platform deployed on a cloud-computing platform, integrating Modelica-
based visual modeling, simulation, knowledge sharing, community, and team
collaboration tools, supporting modeling and simulation of complex physical systems,
to provide enterprises with innovation tools and knowledge models at a low cost.
1. Modelica, MWorks, and WebMWorks
1.1. Modelica
Modelica is a non-proprietary, object-oriented, equation-based language to
conveniently model complex physical systems containing, e.g., mechanical, electrical,
electronic, hydraulic, thermal, control, electric power or process-oriented
subcomponents [1]. A Modelica model is meta-knowledge. It is saved as a text file
(.mo file), similar to a class in Java. From the user's point of view, models are described
by schematics, also called object diagrams, as shown in Figure 1. What a model
schematic MS consists of can be simply described with the following tuple:
MS = {Attribute, Imports, Extends, Variables, Components, Annotation, Equations,
Connects}.
Internally, a component is defined by another schematic or, on the bottom level, by an
equation-based description of the model in Modelica syntax [1].

Figure 1 Schematic of Modelica Model
1.2. MWorks
MWorks is a general Modelica-based modeling and simulation platform for
engineering systems which supports visual modeling, automatic translation and
solving, as well as convenient post-processing. The main process of MWorks is similar
to that described in the book by Peter Fritzson [3], as shown in Figure 2.
Figure 2 Main Process of MWorks
1.3. WebMWorks
As the name implies, WebMWorks is a web application similar to MWorks. Deployed
on a cloud-computing platform, it provides an online visual modeling, simulation and
knowledge sharing service in Modelica. Through the internet, users can get the services
at a low cost, whenever and from wherever. What's more, WebMWorks provides
knowledge model sharing and team collaboration services. Users can upload their
knowledge models to the cloud server and share some knowledge models with others for a
fee. Since knowledge models are stored at the server and both modeling and simulation
are processed at the server, knowledge models cannot be downloaded by knowledge
users, which ensures that knowledge models will not be disclosed when they are reused.
As a web platform, WebMWorks supports multi-user modeling and simulation at the same
time. Managed by the model-based collaboration service, a team can work together
to model and simulate a complex physical system.
2. Analysis
Compared to MWorks, the WebMWorks platform should have some different features, as
shown in Table 1.
Table 1 the Compare between MWorks and WebMWorks
Feature MWorks WebMWorks
(1)Type Desktop Application Service-based Web Application
(2)Run Environment Single Machine Cloud Platform
(3)Use Environment Local Any Time and Any Place
(4)Available Library Local Library All Sharing Library on Server
(5)Approach to Library Build/Purchase Build/Rent/Purchase
(6)Amount of User Single User Multi-Tenant and Multi-User
(7)Amount of Task Single Task Multi-Task Concurrent Process
(8)Scale of Data Data of single user Mass Data from Multi-Tenant
(9)Expandability Need Redeploy User Unaware
(10)Views of Modeling Text/Diagram/Icon Text/Diagram/Icon
(11)Key Factor Machine Performance Net Performance
In order to achieve features (1), (2) and (3), the most appropriate approach is to build the
client as a web application presented in the browser. The browser is widely used and
provides cross-platform support. Presented as web pages, the software service can be
accessed through browsers on various operating systems, wherever and whenever possible.
Features (4) and (5) require that the knowledge library be stored on the cloud server to
achieve a greater degree of sharing and data safety. Because of feature (6), we need to
adopt a multi-tenancy data management solution to isolate data from different organizations.
On the server side, we should pack the function modules of MWorks into stateless
service modules to achieve concurrent multi-task processing, as feature (7) requires,
by separating user data and request-processing logic. All data generated by a user
request should be stored or returned to the client after the request is processed, to avoid
loss or leakage of data when services are requested by different users at the same time.
To improve the user experience, WebMWorks should provide asynchronous processing
and load balancing for service modules which may take a lot of time or involve massive
calculation.
WebMWorks has to integrate a distributed file system and database to store the huge
amount of user data pointed out by feature (8), and provide a general data access
interface for the upper service modules.
We adopt SOA to reuse modules and ensure better expandability of the WebMWorks
platform by wrapping each of the function modules as a service. Changes and expansions
can be deployed without users being aware, as feature (9) describes.
As feature (11) implies, the key factor in WebMWorks performance is the network
status. Visual modeling on the web, presented in feature (10), is the core function of
WebMWorks. Limited by the network status, it is hard to get model information
immediately when the network is bad. We must reduce the data exchanged for modeling
between client and server and solve how to model under a bad network status when
implementing WebMWorks. We can eliminate the influence of network status and
improve the user experience in the following directions:
A. Reduce the data exchange when separating the platform into client and server,
such as packing the visual modeling graphic process module, which has a lot of
interaction with the user, into the client page;
B. Avoid transferring big data where possible;
C. Reduce the data size and the number of communications between client and server by
caching data on the client and compressing data before they are sent, as sketched below.
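
A minimal sketch of direction C follows (our own illustration; zlib and JSON are chosen only for concreteness, as the paper does not name the serialization or compression scheme used by WebMWorks):

```python
import json
import zlib

def pack(payload):
    """Serialize and compress a message before sending it over the wire."""
    return zlib.compress(json.dumps(payload).encode("utf-8"))

def unpack(blob):
    """Inverse of pack: decompress and deserialize a received message."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))

# A schematic update with many repeated tags compresses well
message = {"components": [{"type": "Resistor", "R": 100}] * 50}
blob = pack(message)
print(len(json.dumps(message)), "->", len(blob), "bytes")
assert unpack(blob) == message
```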
3. Architecture
According to the preceding analysis, the architecture of WebMWorks is designed as
shown in Figure 3.

Figure 3 Logic Architecture of Platform
The logic architecture of WebMWorks adopts SOA, which is easy to
change and to deploy on a cloud-computing platform. The architecture is divided into 6
layers: (1) the Client Layer deals with user input and provides the web interface to the
application; (2) the Present Layer is the entry layer of web pages and web services; users
can access the WebMWorks platform through the different service entrances in this
layer; (3) the Business Service Layer provides processing services for user requests,
including the user management service, permission management service, model
management service, modeling service, compile service and simulation management
service; (4) the Underlying Service Layer provides basic task processing services for the
Business Service Layer, such as translate, compile and simulate; (5) the Data Access
Layer provides a general file and data access interface, and in particular implements an
access module for model files and data; (6) the Data Storage Layer includes the
distributed database and file base, which are the base infrastructure of the platform.
Supported by the general data access interface, WebMWorks can provide user
management, permission management, model management, modeling, compile and
simulation services, which are stateless, to multiple users at the same time, no matter
where and when a user requests them.
For tasks needing massive calculation, such as compilation and simulation, WebMWorks
provides an asynchronous and concurrent processing method to ensure that the user gets
a better experience, as shown in Figure 4.
Figure 4 Synchronous and Asynchronous Task Processing
A user request sent to the platform is packed into a task by a business service. These
tasks are dispatched according to their complexity (calculation quantity and average
processing time). Simple tasks are responded to by the task processor immediately,
while complex tasks are added into a task queue to wait for asynchronous processing. A
task processor can be any service module in the underlying layer; it is the final processor
of the user request.
4. Implementation and Key Techniques
We implemented the WebMWorks platform on the .NET Framework. Adopting SOA,
all modules of the platform are wrapped as services with WCF. The following sections
give an overview of our major work and key techniques.
4.1. Service Encapsulation
We use WCF (Windows Communication Foundation) to implement communication and
service encapsulation. WCF is Microsoft's unified programming model for building
service-oriented applications. It enables developers to build secure, reliable, transacted
solutions that integrate across platforms and interoperate with existing investments [12].
The structure of a service module encapsulated with WCF is shown in Figure 5.

Figure 5 WCF-based Service Encapsulation
A service node consists of 3 parts: the host process, the service contract, and endpoints.
The service contract defines service-level settings, such as the namespace of the service.
It is defined by creating an interface and applying the ServiceContractAttribute attribute,
the WCF service contract flag, to the interface. The actual service code results from
implementing the interface. An endpoint comprises a location (an address) that defines
where and how a message should be sent. A WCF service is exposed to the world as a
collection of endpoints. A host is the process the service is hosted in, usually an
application that controls the lifetime of the service.
4.2. Visual Modeling on Web
The modeling page is developed in an MVVM [9] architecture using the Silverlight
technology. The vector graphics and asynchronous communication of Silverlight make
it easy to create an interactive graphical application in the browser. As a large number
of graphical operations are transferred from the server to the client, the burden on the
server is lightened.
We establish a mapping between an expanded SVG file and the knowledge model to
reduce the size of the data exchanged. On the client side, a model is presented as an icon
tag described in expanded SVG, which includes the icon, attributes and parameters of the
model; the user can build a new model schematic (an XML document) with it. On the
server side, we translate the model schematic into a Modelica code file, which is the final
file of a model, and we can generate any information about this model from its Modelica
code file, including its SVG file. With a mapping like this, a user can use the models
shared on the server to build a new knowledge model and simulate it, but he cannot get
the details of the knowledge models. We can even generate the icons and variables of
completed models and store them in the database in advance to improve access speed.
The processing of visual modeling is shown in Figure 6.

Figure 6 Visual Modeling Processing
When the user is modeling, the SVG tags of the models on the model tree are cached in
client storage. The user mostly need not get any information from the server while
modeling, except when importing a new model library. If the user completes a model, he
can upload it to the server, where it will be translated into a Modelica code file for
compilation and simulation. The user can also download the Modelica code file of an
available model for study or research.
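
The following Python sketch illustrates the server-side half of such a mapping, translating a schematic of components and connections into Modelica source. The XML schema here is invented purely for illustration; the actual schematic format of WebMWorks is not published in this paper:

```python
import xml.etree.ElementTree as ET

# A made-up schematic format for illustration only
SCHEMATIC = """
<model name="SimpleCircuit">
  <component name="R1" type="Modelica.Electrical.Analog.Basic.Resistor">
    <parameter name="R" value="100"/>
  </component>
  <component name="G" type="Modelica.Electrical.Analog.Basic.Ground"/>
  <connect from="R1.n" to="G.p"/>
</model>
"""

def schematic_to_modelica(xml_text):
    """Translate a component/connect schematic into Modelica source."""
    root = ET.fromstring(xml_text)
    lines = ["model %s" % root.get("name")]
    # Each component becomes a typed declaration with its parameters
    for comp in root.findall("component"):
        params = ", ".join("%s=%s" % (p.get("name"), p.get("value"))
                           for p in comp.findall("parameter"))
        decl = "  %s %s" % (comp.get("type"), comp.get("name"))
        lines.append(decl + ("(%s)" % params if params else "") + ";")
    # Each connection becomes a connect equation
    lines.append("equation")
    for con in root.findall("connect"):
        lines.append("  connect(%s, %s);" % (con.get("from"), con.get("to")))
    lines.append("end %s;" % root.get("name"))
    return "\n".join(lines)

print(schematic_to_modelica(SCHEMATIC))
```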
4.3. Task Concurrent Processing
Facing a large number of user requests, WebMWorks should process users' requests
concurrently. In order to share the load and achieve dynamic load balancing, we have
adopted a three-tier distribution mechanism, as shown in Figure 7. First, user requests
are dispatched among different business services. If it is a simple task request, such as a
query of data in the database, it is responded to immediately; otherwise the request is
added to a queue as a task. When the dispatcher of a queue gets the task, including the
user request, it is dynamically dispatched among different underlying service node
servers according to the load of the servers. After receiving a task, an underlying service
node dispatches it again among the actual processors running on that server. In this way,
large numbers of requests can be processed concurrently by different task processors.

Figure 7 Task Concurrent Processing
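
A minimal sketch of this dispatch idea follows (our own illustration; the split between simple and complex task kinds and all names are assumptions, and a real deployment would dispatch across server nodes rather than threads):

```python
import queue
import threading

task_queue = queue.Queue()
# Assumed split; the platform dispatches on calculation quantity
# and average processing time rather than on a fixed kind list
COMPLEX_TASKS = {"compile", "simulate"}

def handle(task):
    # Stand-in for the real underlying service (translate/compile/simulate)
    return {"status": "done", "task_id": task["id"]}

def submit(task):
    """Entry point of a business service: answer simple tasks at once,
    queue complex ones for asynchronous processing."""
    if task["kind"] not in COMPLEX_TASKS:
        return handle(task)            # synchronous response
    task_queue.put(task)               # asynchronous: client polls later
    return {"status": "queued", "task_id": task["id"]}

def worker():
    """A task processor node draining the queue."""
    while True:
        task = task_queue.get()
        handle(task)
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
print(submit({"id": 1, "kind": "query"}))     # answered immediately
print(submit({"id": 2, "kind": "simulate"}))  # queued for a worker
task_queue.join()
```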
4.4. Multi-Tenant Data Management
As a web platform, there are huge amounts of data from users. To ensure that data from
different users is independent, we must adopt a multi-tenancy data management solution.
There are three approaches to implementing a multi-tenant database [10]: (1) shared
machine, (2) shared database, (3) shared table. Since the knowledge models in
WebMWorks are modeled in Modelica and have the same data schema, we can
implement the multi-tenant database by sharing tables. Sharing tables is the most suitable
multi-tenancy solution for an application used by a large number of small organizations,
because of its better performance and higher resource utilization.
Table 2 Example of Multi-tenant Table Scheme

RowId (PK)   TenantId (FK)   ModelName                     Val1   Val2
...          3               MultiBody.Forces.WorldForce   ...    ...
...          2               Analog.Ideal.IdealThyristor   ...    ...

Sharing tables means that data from different users is stored in the same tables, as
illustrated in Table 2. A TenantId column is added to each table to identify the owner
of each row. Every application query is expected to specify a single value for this
column. To allow customers to extend the base schema, each table is given several
additional generic columns. These columns are of type VARCHAR. The data for the n-
th new column of a table for each customer is placed in the n-th generic column of the
appropriate type, after performing any necessary type conversions.
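
A minimal sketch of the shared-table approach using SQLite follows (our own illustration; the real platform uses a distributed database, and the column names follow Table 2 loosely):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Shared table: rows of all tenants side by side, plus generic
# VARCHAR columns (Val1, Val2) for tenant-specific schema extensions
db.execute("""CREATE TABLE models (
    RowId     INTEGER PRIMARY KEY,
    TenantId  INTEGER NOT NULL,
    ModelName TEXT,
    Val1 VARCHAR, Val2 VARCHAR)""")
db.executemany(
    "INSERT INTO models (TenantId, ModelName, Val1) VALUES (?, ?, ?)",
    [(3, "MultiBody.Forces.WorldForce", None),
     (2, "Analog.Ideal.IdealThyristor", "Vknee=0.8")])

def models_of_tenant(tenant_id):
    """Every application query specifies a single TenantId value,
    so tenants never see each other's rows."""
    cur = db.execute(
        "SELECT ModelName FROM models WHERE TenantId = ?", (tenant_id,))
    return [row[0] for row in cur]

print(models_of_tenant(2))  # ['Analog.Ideal.IdealThyristor']
```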
5. Case of Modeling and Simulation
Now let us take a test. Before modeling, the user must select the libraries that may be
imported into the model. Go to the modeling page, drag icon tags from the model tree to
the diagram area, connect them, and set the values of parameters; then we get a model
diagram similar to the one shown in Figure 8.

Figure 8 Modeling Page
Click the Simulate button to start the simulation; after checking and compiling, the
simulation page is loaded. Select a variable in the variable tree, and we will see the
result plot of the variable, just like in Figure 9.

Figure 9 Simulation Page
We can summarize our platform as follows: it achieves most of our purposes except
the collaboration tool. When uploading or downloading a model, the user may have to
wait for a while, depending on its size. Moreover, we should continue to strengthen
security.
6. Conclusion
This paper presents WebMWorks, a Modelica-based visual modeling, simulation and
knowledge sharing web platform for multi-domain physical systems. Deployed on the
cloud, it can provide users with online services at a low cost, and it processes user
requests concurrently. All users, such as SMEs, colleges, and academies, can get the
service anytime from anywhere possible. By sharing knowledge on the platform,
knowledge owners can benefit from the reuse of knowledge models, while users of
knowledge can use lots of knowledge models at a low price.
Compared to other web modeling and simulation platforms, WebMWorks supports
multi-domain modeling and simulation in Modelica. What's more, it provides a visual
modeling service by establishing a mapping between icon tags and models, and it allows
team collaboration based on models, which makes the modeling and simulation of huge
and complex physical systems easier.
However, our work is just at the beginning. We should perfect it in the future in the
following aspects:
A. Optimize the data exchange and caching strategy, for example by compressing
data before they are sent;
B. Improve the security management capability;
C. Implement account management;
D. Integrate workflow and collaborative design modules;
E. Integrate 3D visualization of simulation results.
References
[1] Modelica Association. Modelica-A Unified Object-Oriented Language for Physical Systems Modeling
Language Specification Version 3.2. http://www.modelica.org , 2010.3.24
[2] Eva-Lena Lengquist Sandelin, Susanna Monemar, etc. DrModelica A Web-Based Teaching Environment
for Modelica. PELAB, Programming Environment Laboratory Department of Computer and
Information Science.
[3] Peter Fritzson. Principles of Object-Oriented Modeling and Simulation with Modelica 2.1. Wiley-IEEE
Press, 2003.
[4] Oscar Duarte. UN-VirtualLab: A web simulation environment of OpenModelica models for educational
purposes. Universidad Nacional de Colombia, Department of Electrical and Electronics Engineering.
[5] http://www.cadren.com
[6] Mohsen Torabzadeh-Tari, Zoheb Muhammed Hossain, Peter Fritzson, Thomas Richter. OMWeb
Virtual Web-based Remote Laboratory for Modelica in Engineering Courses. Proceedings 8th
Modelica Conference, Dresden, Germany, March 20-22, 2011
[7] FAN-LI Zhou, LI-PING Chen, YI-ZHONG Wu, etc. MWorks: a Modern IDE for Modeling and
Simulation of Multi-domain Physical Systems Based on Modelica. Modelica Association, Modelica
2006, September 4th-5th.
[8] Peter Fritzson. Principles of Object-Oriented Modeling and Simulation with Modelica 2.1. Wiley-IEEE
Press, 2003.
[9] http://en.wikipedia.org/wiki/Model_View_ViewModel
[10] Dean Jacobs, Stefan Aulbach. Ruminations on multi-tenant databases. Database system in Business,
Technology on Web, BTW 2007 - 12th Fachtagung des GI-Fachbereichs "Database on Information
System" (DBIS), Proceedings, 514-521.
[11] Stefan Aulbach, Torsten Grust, Dean Jacobs etc. Multi-Tenant Databases for Software as a Service:
Schema-Mapping Techniques. Proceedings of the 2008 ACM SIGMOD international conference on
Management of data, 1195-1206. ACM New York, 2008.
[12] MSDN. Windows Communication Foundation. http://msdn.microsoft.com/zh-cn/library/dd456779.aspx

Challenges of Online Trade upon Retail
Industry

Dr. KIN KONG WU a and Mr. CHUN HEI WU b

a M.Sc., Ph.D., MIEAust
b B.Sc. Hons
Abstract: Facing the challenge of online trade, there are threats of
job losses in the retail industry and of retail shops closing down. What
should retail staff and management do? What should the Australian
government do? What roles should the Australian government play? Are
we ready for the challenge?
Keywords: Online trade, retail outlets, customer satisfaction, customer
expectation, customer perception, customer experience, government
responsibility.
Introduction
With the decline of sales revenues in the retail marketplace and the liquidations of book
store and boutique chains, it is found that many customers have changed their purchasing
behaviours. Customers search and shop in online virtual stores instead of visiting retail
outlets.
In online trade, customers can choose and purchase desired products from the
vast number of suppliers in the world via the World Wide Web (WWW). Customers
can make the purchase based on different criteria, such as the best price, availability,
delivery time and so on. They can make their choices at any time, anywhere, as long as
they feel it appropriate. The impression of online trading is: easy, convenient, best price,
lots of choices, products delivered to the customer's nominated address, and so on.
In the retail marketplace, customers have to travel to the store and check out the
product. The price of the product will include the product cost, in-store labour cost,
store overheads and profit margin. Customers will ask why they should pay more for
the same product. In addition, customers have to spend time, effort and resources to
do their shopping in a retail outlet with limited choices.
The above provides a superficial comparison between online and retail. If
these are the only factors, the death of the retail marketplace will come soon.
1. Impact of the Death of Retail Marketplace
According to the National Retail Association of Australia, in the NRA media release
"Retail Job Losses" [1], 118,700 Australian retail workers' jobs will be lost; 1 in
11 of the traditional retail jobs will be lost by 2015 (around 10% of the retail workforce).
However, there are indirect casualties: fewer tenants in shopping centres and street
strip stores, and rents will be rationalized in the long run. There will be a structural change
in our workforce. What will be the government's action? What should the existing
retail management do?
On the other hand, there is a significant increase in online trade. What about
the frauds that happen on the WWW? Is there any legislation in place to safeguard the
rights of both the purchaser and the seller? Can our National Broadband Network
(NBN) fulfill the future demand for internet data transmission?
Should we just wish that the death of the retail marketplace will not come? Or
should we simply ignore the challenge of online trade by preventing it from taking root in
Australia? If we cannot change the situation, we have to adapt to it.
Otherwise, we are bound to fail.
2. Comparison between Online Trade and Retail Outlet
To compare online trade with retail outlets, there are three parties involved. They are (a)
the customer, (b) the owner of the operations (both online and retail) and (c) the
government. Each party plays a significant role in those operations.
2.1 Customer Aspect
Customers can select and purchase the desired product via the internet at any time,
anywhere, whenever customers feel it convenient, provided the customer has an
appropriate device with internet access. The customer does not need to find out the
trading hours of the outlet or the means of transportation to the store. In fact, online
trade operates 24 hours a day and is always at the customer's fingertips.
Customers will find that the online price of a product is relatively cheaper than
that of a retail outlet. It is because of the following reasons: (1) the relatively low cost to
set up and maintain an online trade operation; (2) no GST (Goods and Services Tax)
applied to purchases from international websites below A$1,000; (3) lower labor
cost in manning the operation; (4) less stock carrying cost; and (5) lower overheads,
such as rental, facilities, utilities and staff wages, etc.
However, in online trade, there is neither advice nor any consultation for
customers. The customer has to make his/her own choice, and customers are fully
responsible for their choices. As there is no point of reference, the choice can be a
right choice or a wrong one.
Online customers have to wait for products to be sent to their nominated
addresses, while retail customers can pick up the product immediately.
In general, online trade lacks after-sales service. The most common
follow-up is a customer service helpdesk number and a reference number. In
contrast, customers who purchase from retail outlets can go to the outlet and talk to the
sales consultant (face to face) in order to rectify any issues immediately.
Lastly, there is the reliability of the financial transaction involved in online trade,
as there is not sufficient legislation to protect customers from online trade frauds.
In summary, customers may enjoy convenience and cheaper prices from online
trade. However, they do sacrifice some of the rights and protections to which they are

entitled. They miss out on the satisfaction of enjoying the product immediately, and
suffer a lack of professional consultation, a lack of after-sales service and a lack of
security and reliability.
2.2 Owner Aspect
There are three types of ownerships. They are (a) the owner of the online trade
operation; (b) the owner of retail outlet; and (c) the owner of both online trade
operation and retail outlet.
Each type of owner has different degrees of commitment in terms of
resources.
(a) The owner of an online trade operation: with a computer with internet access,
a money transaction channel set up (such as PayPal), a system to
capture online orders, and a supplier and delivery arrangement, the
online trade can be started easily. The online trade can be very flexible: it
can range from a small home business to a multi-national online trade
corporation. In other words, anyone who has any stock can
put it on eBay or elsewhere and start an online business. This is a low-cost and
flexible operation.
(b) The owner of a retail outlet: a suitable (high customer traffic flow) location
for the retail outlet, equipment, furniture and facilities in the outlet, a
management team and staff to man the outlet, stock for display
and sale, and also some sort of marketing activities are required. To start up a
retail outlet, the owner has to deploy a lot more resources and effort in
comparison with an online operation. In addition, the retail outlet
cannot escape the GST (Goods and Services Tax), while online trade
can escape GST as long as the selling price is less than A$1,000. The retail
owner bears higher costs and a higher commitment.
(c) The owner of both an online trade operation and a retail outlet: the owner
would like to capture the advantages of both online trade and the retail outlet.
But the owner also faces other problems, such as (i) how to allocate
costs between the two operations; (ii) which operation the owner should
place more emphasis on; (iii) how to resolve conflicts between the two
operations; and (iv) how to co-ordinate the two operations in order to
make them work in harmony and gain synergy from the two.
The reality is that there is always direct competition between the two types
of operations. One has a lower cost, no GST, and no or minimal government
monitoring/control, while the other has to carry all the mentioned burdens. The worst
situation is when the two operations have to compete at the same price level. These are
areas where conflicts arise.
2.3 Government Aspect
In the above analysis, there are areas where the Australian government should play a
key role in the retail industry.


Table 1. Comparison between Online Trade and Retail Outlets

Accessibility
- Online channel: sales transactions can be done at any time, anywhere, as long as there is WWW access; only a basic device with internet access is needed, so shopping is low cost. Disadvantage: a lonely activity. Outcome: easy, convenient and cost effective.
- Retail outlet: takes time, effort and resources (travelling and transportation) to visit stores; comparatively inconvenient and time consuming. Advantage: an opportunity to talk and share experience with people. Outcome: a socialising activity; friends shop, eat and play together.

Presentation and customer response
- Online channel: static, rigid, one-way communication; no interruption, but a lack of explanation and clarification, and impulsiveness can lead to a wrong selection. Outcome: the lack of reference may lead to a wrong selection.
- Retail outlet: dynamic, flexible, two-way communication; staff in store explain the product and respond interactively, although the customer may disagree with the staff's viewpoints; impulsiveness is tempered by interactive explanations, so the appropriate selection is more likely. Outcome: sharing and communicating to select the most appropriate choice.

Cost
- Online channel: low maintenance cost; simple hardware requirements (internet access and a device); no GST; minimal warehouse stock cost. Outcome: lower selling price.
- Retail outlet: variable labour cost and high running cost for manning the store; high cost to set up the outlet (store display, furniture, maintenance, marketing expenses); GST adds 10% to the selling price; direct in-store stock cost. Outcome: more likely to have a higher selling price.

Timing
- Online channel: delivery lead time to the customer's address; the customer has to wait for the delivery. Outcome: lack of passion.
- Retail outlet: products are received immediately. Outcome: immediate fulfillment attains maximum customer satisfaction.

Service quality
- Online channel: after-sales service is only a helpline, or nothing. Outcome: lack of confidence in the purchase.
- Retail outlet: staff are available to help and rectify issues immediately. Outcome: more confidence in the purchase made.

Security & reliability
- Online channel: financial transactions may easily be subject to fraud. Outcome: lack of confidence in the purchase.
- Retail outlet: financial transactions are relatively reliable and traceable. Outcome: more confidence in the purchase made.

Firstly, regarding the fairness of trade between the two operations: should the
government provide a fair environment for both operations? This is the GST issue.
Secondly, is there any legislation in the online trade area to protect the rights of both
the purchaser and the seller? Thirdly, has the government prepared for the structural
the purchaser and the seller? Thirdly, has the government prepared for the structural
change in the retail industry? What policy should be in place? What support should be
provided? Fourthly, are we going to embrace the challenge? What infrastructure should
be built and what type of technical personnel should be trained and developed? Can the
NBN cope with the challenge? And so on.
In fact, there are a lot of questions to be asked and considered. More
importantly, actions must be taken in due course. It will be too late when the
dislocation of the retail industry happens.
3. Factors Affect Customer Decision
In the above, we have been talking about conflicts between online trade and the retail
outlet, what the government's responsibility is, and so on.
The fundamental question is: WHO chooses to buy from online or retail?
The answer is plain and simple. The decision maker is the CUSTOMER.
3.1 Measurable Factors
Online shopping has an economic advantage compared to shopping in a retail outlet.
Travelling expenses, a lower selling price and convenience stand out as the major
justification to shop online since they save the customer's money and time.
3.2 Intangible Factors
While there are advantages to shopping at a retail outlet, most of those advantages are
intangible and are only recognized when problems arise. They exist as an after-thought
benefit.
3.2.1 Better Understanding about Product and Service
The understanding of the terms and conditions involved in the product and service
purchased, the after-sales service (exchange and warranty procedures), and the techniques
and knowhow for applying the purchased product in the customer's situation:
those advantages are only realized and felt when they happen to be there. This is the
reason why we call them after-thought benefits. In contrast, in online trade it is the
potential customers who do the shopping by themselves; customers believe that they
understand and know what the product and service include. However, in most cases,
customers are confused or misled by their own ignorance or by the vendor's
advertisement.
3.2.2 Expected Service Quality
After-sales service quality is another intangible after-thought factor. Let us make it
simple: no one believes the product will be faulty after reading the product information
on the website. In fact, the online seller will only say how superb the product is.
Nearly none of them will mention that customers might receive a faulty product, or what
the return or refund policy and procedure are.
In most cases, customers do not just purchase physical products (the tangible part).
In fact, customers purchase the benefits (the intangible part) that the product can bring
along. Customer satisfaction is generated by the applications of the product. The
degree of satisfaction depends on the level of matching or exceeding the customer's
expectation. The value of the purchased item to the customer consists of both the
physical product (tangible part) and the fulfillment of the customer's expectations
(intangible part).

Chart 1. A successful sale is a consequence of matching the customer's expectation.
4. Analysis of Customer Expectation
There are three types of customer expectations: the positive expectation, the
neutral expectation and the negative expectation.
Positive expectations are the expected outcomes when the tangible product
or the intangible service accomplishes what the customer expected. As an illustration, a
young man bought a new car. The customer's satisfaction is generated by (a) owning the
car as a prestigious status symbol; (b) driving the car as a kind of freedom to go wherever
he wants; (c) sharing his new powerful car with friends; (d) the convenience and
time efficiency it brings; and so on. The owner will have a long list of benefits to justify
why he decided to purchase the car. In most cases, those satisfactions would not be
realized if the man bought the car and kept it in his garage. This demonstrates that
customer satisfaction is generated by applying the product and service (the intangible
part), not just by owning the product (the tangible part).
The neutral expectation is not related to any positive or negative feelings of
the customer. It includes the way staff greet the potential customer, an inquiry about
the product and service, the procedure to purchase the product and so on. A typical
example is the application process for a mortgage: the customer has to bring income
proof documents and taxation receipts and complete the application form for the
application process.
The negative expectations are the expected rectification actions from the
vendor in case of any malfunction, missing shipment or delivery, warranty and
exchange procedures, etc. Every customer is an individual who may have different
interpretations, expectations and degrees of tolerance and patience. It is very difficult for
the supplier to have a written document that explains the company policy and makes every
customer happy when something goes wrong. In this case, a sales consultant will be
able to explain and inform all potential customers what the company policy is, before
any purchase is made. In this way, most trading conflicts can be avoided or
minimized. In general, conflicts happen when there is a gap between the supplier's
fulfillment and the customer's expectation. The negative expectations can be as simple as
having a refund or a replacement. All these factors should be considered by the
customer before a purchase decision is made.
Those positive, neutral and negative expectations will change according to
the customer's perception in different circumstances. As an illustration, when a customer
purchases a product during his lunch break, a quick and efficient transaction becomes
a positive expectation.
5. Customer Perception, Expectation and Satisfaction
The customer's perception dictates the customer's expectation and subsequently defines
the customer's satisfaction. It will be very difficult to fulfill the customer's satisfaction
without any knowledge of the customer's perception.
5.1 Customer Perception
The customer's perception is what customers believe the important attributes of the
product to be. The perception arises in several areas: (a) the expected
benefits that the product and service will generate; (b) the expected support from
suppliers for applying the product and service in the customer's daily usage; (c)
the expected time taken for the transaction; (d) the expected time for the product and
service to be in working mode; and (e) the expected response from suppliers when
abnormalities happen. The perception is an assumed and unspoken expectation
in the customer's mind. The degree of expectation and the weighting of each area differ
from customer to customer.
Similarly, the above perceptions map directly onto the three expectations
in Section 4.
5.1.1 Perception in Product Features
This perception is generated from the product features. When customers purchase a
mobile phone, its ability to take high quality photos and videos, its ability to
communicate with friends via social networks, its ability to record messages and so
on are included in this perception. This perception is measurable and easily
recognized by both customers and suppliers. But which features do customers value
more? This depends on customers' perceptions which, in turn, depend on the customer's
experience and personal situation.
5.1.2 Perception in Applications of the Products and Services
This perception follows from the above perception. Customers know the product
can do this and that, but how can the customer make it work? That is, the support and
know-how where the supplier can play a key role during and after the sales transaction.
After-sales service is the way to demonstrate that the supplier will look after the
customer even after the product is sold. After-sales service can be any support,
from how to use the product, to setup, to troubleshooting with customers throughout
the product's lifetime.
5.1.3 Perception in Process
This is the customer's perception of how easy and simple it is to purchase or apply for
the product and service: the simplicity, clarity and ease of understanding the terms and
conditions involved in the transaction, and the assurance that there are no hidden costs
and traps in the purchased products and services. As mentioned in the service quality
section above, the sales consultant will interpret and explain the details of the
application process, the terms and conditions involved, the commitments from both the
supplier and the customer and, more importantly, the legal rights of the customer.
5.1.4 Perception in Time
In general, most customers believe that the shorter the time taken to activate the
product, the better the service will be. Some customers may expect the product and
service to be activated in a particular time slot; in this case, the more accurate the better.
Meanwhile, some customers visit retail outlets and talk to sales consultants because they
want to share their experiences and stories with someone who feels trustworthy. In this
case, the longer the process takes, the better the customers feel.
Again, this demonstrates that the customer's perception of time is subjective and
situational [2].
5.1.5 Customer Perception of Time and Worthiness
Time, unlike price, is not as well defined by our senses. Our sense of time can be
manipulated in countless ways: it could be the fact that the kids are in a playground, or
being in a comfortable environment, or that the customer is in a rush to attend a party.
The sense of time can vary according to the customer's priorities and environmental
circumstances.
Time is a constant factor. Whether we use it with care or not, time passes as it is.
We cannot stop it and save it for some other important moment when we need it. In fact,
we are always using our time, one way or another. The feeling of worthiness in spending
time with a sales consultant or a medical consultant, doing physical exercise to keep fit,
simply idling without doing anything and so on all depend on the perception of
the person. One believes it is worthwhile to spend time with loved ones enjoying a
lunch or dinner, while some may believe it is worthwhile to spend time exercising for
personal health. Some may believe it is worthwhile to spend time shopping.
Emotion is one of the factors that can influence the perception of time. It
could be as simple as the sales consultant being grumpy, or another customer
yelling because of waiting too long. In fact, emotions can influence the perception of
time multiplicatively [3].
Again, the perception of time and worthiness is situational and personally
dependent.
6. From Suppliers Attitude to Customers Expectation
In short, the supplier sells satisfaction to the customer. Only when the satisfaction
generated by the retail outlet exceeds the monetary benefit of online trading will the
retail outlet survive.
There are several key factors that control the overall outcome of the
transaction: the attitudes of the supplier and the customer, and the expectations of
the supplier and the customer. They are summarized in Table 2.
Supplier's Attitude               Economic Model (Profit & Loss)   Economic and Caring      Caring (Empathical)
Cost to provide the service       Low                              Medium                   High
Customer's Attitude               Rational (commodity exchange)    Rational and Emotional   Empathic
Price to pay for the service      Low                              Medium                   High
Degree of Customer Satisfaction   Low                              Medium                   High
Level of Commitment               Low                              Medium                   High
Level of Experience               Low or No                        Neutral                  High level of sharing

Table 2. Factors that generate the outcome of the sales transaction.
Customers would like to pay the minimum and receive a high level of
satisfaction and commitment. On the other hand, the supplier would like to keep the
cost to a minimum and the profit to a maximum. There is a balance between the price
that customers are willing to pay and the cost that the supplier is willing to bear to
achieve the desired outcome.
According to our findings, it is not a good choice for retail outlets to compete
with online trading by simply reducing prices. Instead, retail outlets should reduce
the cost of providing a high degree of customer satisfaction. Then customers
feel the supplier's commitment and enjoy the service experience at a reasonable price.
In this situation, the retail outlet will win the competition.

7. Ideal Retail Model (Extreme Situation)
The experience chain starts from the customer's perception and leads to the customer's
expectation. When the customer's expectation is fulfilled, satisfaction follows. At a high
level of satisfaction, this becomes a memorable experience. Customers are really proud
of, and willing to share and demonstrate, the memorable experience.
Based on the above logic, we are able to project an ideal retail model
based on the customer's perception and experience.

Chart 2. Customer perception to experience cycle

One of the extreme examples of high-level involvement and commitment is
the marriage vow. It seems absurd to set up a retail outlet with a level of
involvement and commitment as high as that of a marriage commitment.
Nevertheless, as an illustration of the ideal retail model, the analogy of the marriage
commitment is an excellent guideline for retailing organizations. We start with the most
common marriage vow:
I (name)
Take you (name)
To be my wife/husband
I promise to share my life with you
And be true to you through the good times and the bad
Through sickness and in health
In poverty and prosperity
I will love, honour and cherish you
And remain forever faithful
As long as we both shall live. [4]
In a marriage, it starts from a relationship and grows into a commitment with the time
factor (always = forever). This is a commitment for a lifetime, regardless of
circumstances, until the end of the two parties involved. If we can apply this practice
in the retail industry, we shall be able to capture lifetime customers under any
circumstances.
The moment of truth always comes from adverse situations. As an illustration,
one of my friends fell sick. She could not work under high pressure and stress. Her
health deteriorated; even simple home tasks were hard for her to accomplish. It took
nearly one and a half years for my friend to recover from the disease.
During this period, her husband's behaviour changed substantially.
Beforehand, when he came back from his office, he would neither help his wife
prepare dinner nor wash dishes. His main activities were reading the newspaper,
checking the share market and visiting websites.
Since my friend got the disease, her husband has taken over all the workload left
over from her. At one stage, her husband suggested she quit her job. His sole objective
was to provide the best environment for his wife to recover. My friend feels her husband's
love and commitment, as stated in the marriage vow above.
From time to time, my friend shares her experience with her friends. Her
conclusion is that the moment of truth always happens in adverse situations.
Realistically, it is impossible to have an organization that treats every
customer with such a deep and devoted commitment. However, the closer the
organization approaches the ideal retail model, the stronger the binding force
between the supplier and the customer will be.
8. Real Life Examples
The 2012 Satmetrix Net Promoter Benchmark study [5] unveils the customer
loyalty leaders for 22 U.S. industry sectors.
In the technology sector: Apple's loyalty performance matched its financial
performance this year, again leading the computer hardware
sector with an NPS of 71%. The company also performed well for its consumer
software applications, scoring an NPS of 68%.
Why are some companies able to achieve higher customer loyalty while others
cannot?
Apple stores provide training for customers who do not know how to use the
product. This helps customers enjoy the benefits of applying the product. Subsequently,
this not only satisfies the customer's need but also makes customers feel that the
company supports them even though the product has been sold. In this way, the
company converts a negative feeling (which, potentially, may develop into a negative
experience) into a positive experience. Apple fans are happy to pay a premium price
for the premium service which they receive.
The other area where Apple does things differently is that there are technicians in
Apple stores. If there is a faulty Apple product, a technician will diagnose the
faulty device immediately and advise the customer of the possible
solutions on the spot, so customers do not need to wait for several days or weeks.
Again, this arrangement not only reduces the waiting time and inconvenience caused
by the faulty unit, but also converts the negative experience into a positive experience
by providing a solution immediately. More importantly, customers feel that they are
being looked after by the company.
The above are part of the reasons why Apple has a higher customer loyalty score
in NPS (Net Promoter Score). There are other factors, such as company image, brand
name and store merchandising standards and so on. They all add up to provide an
overall customer impression and experience.
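For readers unfamiliar with the metric, NPS is derived from a 0-10 survey question:
the percentage of promoters (scores 9-10) minus the percentage of detractors (scores
0-6). A minimal sketch in Python (our own illustration; the sample data is invented
to echo the 71% figure above):

```python
def net_promoter_score(ratings):
    """NPS from 0-10 ratings: % promoters (9-10) minus % detractors (0-6)."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Invented sample: 75% promoters, 21% passives, 4% detractors -> NPS of 71.
sample = [10] * 75 + [8] * 21 + [5] * 4
print(round(net_promoter_score(sample)))  # 71
```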
Lastly, the Apple store demonstrates that price is not the only factor. Apple
aims at providing the best service at reasonable prices.
Recently, I attended a national franchisee conference. John Lees was one
of the professional speakers. During the presentation, John talked about driving
customers, driving staff, driving economic results and so on. Right after the
presentation, I bought a training DVD and a book from John. Why? Because
most of John's ideas are interesting and inspiring. With the training DVD and the book,
I shall gain new insights into both my career and my research. So what I bought is my
potential success, not just a DVD and a book.
Similarly, this applies to any commodity, from books to electrical appliances
and clothes. What customers really buy is the expected satisfaction and success.
9. Conclusions
According to what we have found in our research, we are able to draw the
following conclusions:
(a) Price is not the only determinant of the survival of the retail outlet;
(b) To provide an excellent customer experience, suppliers should start with the
customer's perception (if I were the customer, what would my expectation be?
Why should I buy from you? And so on.)
(c) Face and address the negative expectations (What is your plan in case of a
faulty unit? What is the company policy in case of a request for a refund?
And so on. Again, if I were the customer, would I accept this kind of
arrangement?)
(d) Identify every possible situation to convert a negative expectation into a
positive expectation. As illustrated above, customers feel the supplier's
integrity and commitment to provide the best possible solution for
them.
(e) Provide the best possible solution to customers at the best possible
price. This means the supplier must improve operational productivity
in order to maintain a reasonable profit margin to sustain the
business. So a balance between customers and suppliers is important.
(f) Government should provide a fair and open environment for all business
types to survive.
The retail outlet faces a tremendous price threat from online trade. We
do not believe that reducing prices to compete with online trade is a viable choice.
Instead, retail outlets should develop and strengthen the areas which online trade
cannot provide or has not yet developed.
History has demonstrated that market leaders often fall behind not
because they have done something wrong but because they cannot catch up with
customers' expectations. Customer expectation is a dynamic entity; it varies from
time to time. In addition, customers are not always rational; sometimes they are
emotional as well.
If we choose to survive, we should start with understanding the customer's
perception and then the customer's expectations, instead of just doing what we believe
to be right.
In this way, we believe the most adaptive organizations will be able to
convert deadly threats into opportunities.
References:
[1] National Retail Association Media Release, Retail Job Losses Looming.
[2] S. Droit-Volet and W.H. Meck, How emotions colour our perception of time, Trends in
Cognitive Sciences 11 (2007), No. 12.
[3] G. Antonides, P.C. Verhoef and M. van Aalst, Consumer perception and evaluation of waiting
time: a field experiment, Journal of Consumer Psychology 12 (2002), No. 3, 193-202.
[4] www.brideguide.com.au
[5] http://www.satmetrix.com/net-promoter/net-promoter-benchmarking-2/
[6] K.K. Wu and R.B. Davison, Productivity management: linking producer and end users,
International Journal of Technology Management, Inderscience Enterprises Ltd., 1998.
[7] J.J. Lynch, Customer Loyalty and Success, Macmillan Press Ltd., 1995.
[8] B.H. Schmitt, Experiential Marketing: How to Get Customers to Sense, Feel, Think, Act, and
Relate to Your Company and Brands, The Free Press, 1999.
[9] T. Levitt, The Marketing Imagination, The Free Press, 1983.

Cloud Technology for Service-Oriented
Manufacturing

Xun Xu

Department of Mechanical Engineering
University of Auckland, Auckland 1142, New Zealand




Abstract. Cloud computing is changing the way industries and enterprises do their
business in that dynamically scalable and virtualized resources are provided as
services, mostly over the Internet. Cloud Computing is also emerging as one of the
major enablers for the manufacturing industry; it can transform the traditional
manufacturing business model, help it align product innovation with business
strategy, and create intelligent factory networks that encourage effective
collaboration. Two types of Cloud Computing adoption in the manufacturing
sector have been suggested: manufacturing with direct adoption of Cloud
Computing technologies, and Cloud Manufacturing, the manufacturing version of
Cloud Computing. Cloud Computing has been implemented in some key areas
of manufacturing such as IT and pay-as-you-go business models. In Cloud
Manufacturing, distributed resources are encapsulated into cloud services and
managed in a centralized way. Clients can use cloud services according to their
requirements. A cloud manufacturing platform has been proposed to provide users
with a wide range of flexible and sustainable manufacturing capabilities.
Manufacturing capabilities and business opportunities are integrated and
broadcast in a larger resource pool, which can enhance the competitiveness of the
entire consortium.

Keywords. Cloud Computing, Cloud Manufacturing, Service-Oriented Business
Model



Nomenclature

B2B Business-to-business
BPM Business Process Management
CAD Computer-Aided Design
CAE Computer-Aided Engineering
CAM Computer-Aided Manufacturing
CNC Computer Numerical Control
CRM Customer Relationship Management
DAMA Design Anywhere, Manufacture Anywhere
DARPA Defense Advanced Research Projects Agency, USA
ERP Enterprise Resource Planning
IaaS Infrastructure as a Service
IT Information Technology
MaaS Manufacturing as a Service
MDSL Manufacturing Description Service Language
MGrid Manufacturing Grid
NIST National Institute of Standards and Technology, USA
openCBM open Computer-Based Manufacturing
OWL Web Ontology Language
PaaS Platform as a Service
RFID Radio-Frequency IDentification
SaaS Software as a Service
SHOE Simple HTML Ontology Extension
SMC Sustainable Manufacturing Cloud
STEP Standard for the Exchange of Product model data
XaaS Everything as a Service


1. Introduction

In the recent past, the manufacturing industry has undergone a major transformation
enabled by information technology. Cloud Computing is one such technology. The
main thrust of Cloud Computing is to provide on-demand computing services with high
reliability, scalability and availability in a distributed environment. In Cloud
Computing, everything is treated as a service (i.e. XaaS), e.g. SaaS (Software as a
Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service). These
services define a layered system structure for Cloud Computing (Figure 1). At the
infrastructure layer, processing, storage, networks, and other fundamental computing
resources are offered as standardized services over the network. Cloud providers'
clients can deploy and run operating systems and software on the underlying
infrastructure. The middle layer, i.e. PaaS, provides abstractions and services for
developing, testing, deploying, hosting, and maintaining applications in an
integrated development environment. The application layer provides a complete
application set of SaaS. The user interface layer at the top enables seamless interaction
with all the underlying XaaS layers [1].
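The layered structure just described can also be pictured as data; the following Python
fragment is only our illustration of the stacking relationship, not a definition taken
from the cited literature:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceLayer:
    """One layer of the XaaS stack (names and offerings are illustrative)."""
    name: str
    offerings: list = field(default_factory=list)

# Bottom-up stack of Figure 1: each layer consumes the services
# standardized by the layer beneath it.
stack = [
    ServiceLayer("IaaS", ["processing", "storage", "networks"]),
    ServiceLayer("PaaS", ["develop", "test", "deploy", "host", "maintain"]),
    ServiceLayer("SaaS", ["end-user applications"]),
]

for lower, upper in zip(stack, stack[1:]):
    print(f"{upper.name} builds on services from {lower.name}")
```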
Cloud Computing is sometimes considered a multidisciplinary research field
resulting from the evolution and convergence of several computing trends such as
Internet delivery, pay-as-you-go/use utility computing, elasticity, virtualization,
distributed computing, storage, content outsourcing, Web 2.0 and grid computing. In
fact, Cloud Computing can be considered the business-oriented evolution of grid
computing [2]. Implementing Cloud Computing means a paradigm shift of business and
IT infrastructure, where computing power, data storage and services are outsourced to
third parties and made available as commodities to enterprises and customers.
There are valid reasons, and perhaps a requirement, for manufacturing businesses
to embrace Cloud Computing and to borrow the concept of Cloud Computing to give
rise to Cloud Manufacturing, i.e. the manufacturing version of Cloud Computing.
Such lateral thinking is considered logical and natural as manufacturing businesses in
the new millennium become increasingly IT-reliant, globalised, distributed and
demanding of agility.
Figure 1. Cloud computing: everything is a service [3]


2. Cloud Computing in the Context of Manufacturing

The philosophy of design anywhere, manufacture anywhere (DAMA) has emerged in
recent years [3-6]. DAMA also helps establish links between manufacturing resource
planning, enterprise resource planning, engineering resource planning and customer
relationship management. It is believed that Cloud Computing can play a critical role in
the realization of DAMA. In general, there are two types of Cloud Computing
adoption in the manufacturing sector: manufacturing with direct adoption of some
Cloud Computing technologies, and Cloud Manufacturing, the manufacturing version
of Cloud Computing.

2.1. Smart Manufacturing with Cloud Computing

Cloud Computing is rapidly moving from early adopters to mainstream organizations. It
has become one of the top priorities of many CIOs for strategic business considerations.
Some manufacturing industries are starting to reap the benefits of Cloud adoption today,
moving into an era of smart manufacturing with new agile, scalable and efficient
business practices, replacing traditional manufacturing business models.
In terms of Cloud Computing adoption in the manufacturing sector, the key areas
are around IT and new business models, because Cloud Computing can readily support
business models and operations such as pay-as-you-go, the convenience of
scaling up and down on demand, and flexibility in deploying and customizing
solutions. The adoption is typically centred on BPM applications such as HR, CRM,
and ERP functions, with Salesforce and Model Metrics being two of the popular PaaS
providers. The cost benefits of adopting Clouds in a typical manufacturing enterprise
can be multiple. The savings obtained from the elimination of some of the functions
that were essential in traditional IT can be significant. With Cloud-based solutions,
some application customizations and tweaks that the company needs at the process
level may be assisted by some of the smart Cloud Computing technologies. When it
comes to supporting smart business processes, Cloud Computing can be effective in
offering business-to-business (B2B) solutions for commerce transactions between
businesses, such as between a manufacturer and a wholesaler, or between a wholesaler
and a retailer. Cloud-based solutions enable better-integrated and more efficient
processes.
Collaboration at scale using Cloud technology is an emerging business trend. By
adopting Cloud technologies, enterprise collaboration can happen at a much broader
scale. Within the organization, demand planning and the supply chain organization can
be tied into a Cloud-based system, allowing different parts of the organization to take a
peek into the opportunities that their sales teams are working on. In a more traditional
environment, that would involve a few sit-down meetings, several face-to-face
discussions, or phone conversations. The Cloud in this case provides a collaborative
environment that can give people agility, more transparency, and empowerment
through more effective collaboration.
Typically, there are some parts of a manufacturing firm that can quickly and
easily adopt Cloud-based solutions, whereas other areas are better left traditional.
Hence, what a Cloud-adopting manufacturing enterprise also requires is a smart
mechanism to deal with integration.

2.2. Cloud Manufacturing

Moving from production-oriented manufacturing to service-oriented manufacturing,
and inspired by Cloud Computing, Cloud Manufacturing offers an attractive and natural
solution. Like Cloud Computing, Cloud Manufacturing is also considered a new
multidisciplinary domain that encompasses technologies such as networked
manufacturing, manufacturing grid (MGrid), virtual manufacturing, agile
manufacturing, the Internet of things and, of course, Cloud Computing. Cloud
Manufacturing reflects both the concept of integration of distributed resources and the
concept of distribution of integrated resources. Mirroring NIST's definition of Cloud
Computing, Cloud Manufacturing may be defined as "a model for enabling ubiquitous,
convenient, on-demand network access to a shared pool of configurable manufacturing
resources (e.g., manufacturing software tools, manufacturing equipment and
manufacturing capabilities) that can be rapidly provisioned and released with minimal
management effort or service provider interaction".
In Cloud Manufacturing, distributed resources are encapsulated into Cloud
services and are managed in a centralized way. Clients can use the Cloud services
according to their requirements. Cloud users can request services ranging from product
design, manufacturing and testing to management and all other stages of a product life
cycle. A Cloud Manufacturing service platform performs search, intelligent mapping,
recommendation and execution of a service. Figure 2 illustrates a Cloud Manufacturing
system framework, which consists of four layers: the manufacturing resource layer, the
virtual service layer, the global service layer and the application layer.
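As a minimal illustration of the platform functions just listed (search, mapping,
recommendation and execution), consider the following Python sketch; it is our own
toy code under invented names, not part of any cited platform:

```python
from dataclasses import dataclass

@dataclass
class CloudService:
    name: str
    capability: str   # e.g. "milling", "structural-analysis"
    cost: float       # notional cost per job

class CloudMfgPlatform:
    """Toy stand-in for the global service layer's registry."""
    def __init__(self):
        self.registry = []

    def publish(self, service):
        self.registry.append(service)

    def search(self, capability):
        # Map a client requirement onto matching encapsulated services.
        return [s for s in self.registry if s.capability == capability]

    def recommend(self, capability):
        # Trivial recommendation rule: cheapest matching service.
        matches = self.search(capability)
        return min(matches, key=lambda s: s.cost) if matches else None

platform = CloudMfgPlatform()
platform.publish(CloudService("ShopA-5axis", "milling", 120.0))
platform.publish(CloudService("ShopB-3axis", "milling", 80.0))
print(platform.recommend("milling").name)   # -> ShopB-3axis
```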

2.2.1. Manufacturing Resource Layer

Manufacturing resources may take two forms: manufacturing physical resources and
manufacturing capabilities. Manufacturing physical resources can exist in hardware
or software form. The former includes equipment, computers, servers, raw materials,
etc. The latter includes, for example, simulation software, analysis tools, know-how,
data, standards, employees, etc. Manufacturing capabilities are intangible
and dynamic resources representing the capability of an organization to undertake a
particular task with competence. These may include product design capability,
simulation capability, experimentation, production capability, management capability
and maintenance capability. The types of service delivery models that may exist at this
layer are IaaS and SaaS.


Figure 2. Layered architecture of a Cloud Manufacturing system

2.2.2. Virtual Service Layer

The key functions of this layer are to (a) identify manufacturing resources, (b)
virtualize them and (c) package them as Cloud Manufacturing services. Compared
with a typical Cloud Computing environment, it is more challenging to realise these
functions for a Cloud Manufacturing application. Manufacturing resource virtualization
refers to the abstraction of logical resources from their underlying physical resources.
The quality of virtualization determines the robustness of a Cloud infrastructure. The
next step is to package the virtualized manufacturing resources to become Cloud
Manufacturing services. To do this, resource description protocols and service
description languages can be used. The latter may include different kinds of ontology
languages, e.g. Simple HTML Ontology Extension (SHOE), DARPA Agent Markup
Language (DAML) and Web Ontology Language (OWL).
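As a sketch of what such a resource description might look like, the snippet below uses
rdflib, a widely used Python RDF toolkit; the mfg vocabulary is invented for the
example and is not a published manufacturing ontology:

```python
from rdflib import Graph, Namespace, Literal, RDF

MFG = Namespace("http://example.org/mfg#")   # hypothetical vocabulary

g = Graph()
g.bind("mfg", MFG)

# Describe one physical resource so it can be packaged as a Cloud service.
g.add((MFG.Lathe01, RDF.type, MFG.MachineTool))
g.add((MFG.Lathe01, MFG.capability, Literal("turning")))
g.add((MFG.Lathe01, MFG.maxWorkpieceDiameterMM, Literal(400)))

print(g.serialize(format="turtle"))
```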

2.2.3. Encapsulating Manufacturing Resources with Mapping

The process of virtualizing a manufacturing resource can also be viewed as an
encapsulation process, which can be carried out using three different mapping methods:
one-to-one, many-to-one and one-to-many. One-to-one mapping applies to
manufacturing resources that can only provide a single function and can therefore
be directly encapsulated into one service. The CAD and CAE data format exchange
service is one of the common types of such resources. In a many-to-one mapping,
multiple resources (each providing a specific function) may be combined to create a
more powerful or functional resource form. In Cloud Manufacturing, when multiple
manufacturing resources are combined, more comprehensive manufacturing resource
services, called resource service compositions, can be provided to users to enable
value-added services. The one-to-many mapping concerns a single resource that
appears to a client as multiple resources. The client interfaces with the virtualized
resources as though he or she were the only consumer; in fact, the client is sharing the
resource with other users. For example, ANSYS software can provide structural
analysis, thermal analysis, magnetic analysis and computational fluid dynamics
analysis. Therefore, ANSYS software can be encapsulated into many different services.
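The three mappings can be summarized in a few lines of Python; again, this is our own
toy illustration of the idea, with invented names:

```python
def encapsulate(resource_functions, service_name):
    """Package one or more resource functions as a single cloud service."""
    def service(job):
        return [f(job) for f in resource_functions]
    service.__name__ = service_name
    return service

# One-to-one: a single-function resource becomes a single service.
cad_exchange = encapsulate([lambda f: f"converted:{f}"], "cad_exchange")

# Many-to-one: several resources composed into one value-added service.
design_check = encapsulate(
    [lambda f: f"designed:{f}", lambda f: f"analysed:{f}"], "design_check")

# One-to-many: one shared back end (e.g. one analysis licence) exposed
# as several apparently independent services.
def analysis_backend(kind, model):
    return f"{kind} analysis of {model}"

def structural(model):
    return analysis_backend("structural", model)

def thermal(model):
    return analysis_backend("thermal", model)

print(cad_exchange("bracket.step"))
print(design_check("bracket.step"))
print(structural("bracket.step"), "|", thermal("bracket.step"))
```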

2.2.4. Enterprise Requirements: Global Service Layer

The Global Service Layer relies on a suite of Cloud deployment technologies (i.e.
PaaS). The Internet of things has advanced to a new level with RFID, intelligent sensors
and nano-technology as supporting technologies. Interconnections between physical
devices or products are made easier because of the Internet of things. This said, a
centralized and effective management regime needs to be in place to provide
manufacturing enterprises with agile and dynamic Cloud services. Based on the nature
of the provided Cloud resources and the user's specific requirements, two types of
Cloud Manufacturing operation mode can take place at the Global Service Layer:
complete service mode and partial service mode.

2.2.5. User Requirements: Application Layer

The Application Layer serves as an interface between the user and the manufacturing
Cloud resources. This layer provides client terminals and computer terminals. Some
examples of interfaces are complex system modelling tools, generic simulation
terminals and new product development utilities. The user can define and construct a
manufacturing application through the virtualized resources. Such a manufacturing
application often involves more comprehensive manufacturing resource services that
provide users with a value-added service [7,8]. Similar to Cloud Computing, end-user
consumption-based billing and metering in Cloud Manufacturing resembles the
consumption measurement and cost allocation of water, gas and electricity. The
issue of user-centric privacy is a thorny one. The main concern is related to the storage
of sensitive personal/enterprise data. This data includes not only product information
but also information on some of the high-end manufacturing resources. A rigorous
Service Level Agreement for Cloud Manufacturing is a must to win any end-user's
trust and confidence in the services.
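A utility-style metering scheme of the kind alluded to might look like the following;
this is a deliberately simplified sketch of ours, not a description of any existing
billing system:

```python
class UsageMeter:
    """Record machine-hour consumption per client and bill like a utility."""
    def __init__(self, rate_per_hour):
        self.rate_per_hour = rate_per_hour
        self.records = []            # (client, hours) pairs

    def record(self, client, hours):
        self.records.append((client, hours))

    def bill(self, client):
        used = sum(h for c, h in self.records if c == client)
        return used * self.rate_per_hour

meter = UsageMeter(rate_per_hour=35.0)   # notional machine-hour tariff
meter.record("enterprise-A", 2.5)
meter.record("enterprise-A", 1.0)
print(meter.bill("enterprise-A"))        # -> 122.5
```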


3. Research into the Concept of Cloud Manufacturing

Although the concept of Cloud Manufacturing is relatively new, virtual enterprise and
distributed manufacturing concepts have been around for a while, and some of the
proposed systems and frameworks bear visible traces of Cloud Manufacturing or make
contributions to a Cloud Manufacturing system. This section discusses some of these
research outcomes.
Brecher et al. [9] recognised that applications in an information-intensive
manufacturing environment can be organized in a service-oriented manner. They
proposed a module-based, configurable platform for interoperable CAD-CAM-CNC
planning. The approach is called open Computer-Based Manufacturing (openCBM) in
support of co-operative process planning (Figure 3). The STEP standard is utilized to
preserve the results of manufacturing processes that are fed back to the process
planning stage [10]. The openCBM platform is organized through a service-oriented
architecture providing the abstractions and tools to model the information and connect
the models. It is much like the Platform as a Service concept and resembles an
Application Layer, where applications are not realised as monolithic programs, but as a
set of services that are loosely connected to each other, guaranteeing the modularity
and reusability of the system. The module providers shown in the figure form the
Manufacturing Virtual Service Layer and the module database forms a Global Service
Layer.



Figure 3. Module users and providers of the openCBM approach

More significantly, a Cloud-based manufacturing research project was launched in
2010, sponsored by the European Commission [11]. The goal of this project
(named ManuCloud) is to provide users with the ability to utilize the manufacturing
capabilities of configurable and virtualized production networks, supported by a set of
SaaS applications. In the proposed system, customized production of technologically
complex products is enabled by dynamically configuring a manufacturing supply chain
[12-14]. It is considered that the development of a front-end system with next-level
integration to a Cloud-based manufacturing infrastructure is able to better support the
specification and on-demand manufacture of customized products. Based on the
conceptual architecture, two main types of users who interact with the front-end system
are identified: the manufacturing service consumer (e.g. a product designer) and the
manufacturing service provider (e.g. a lighting product manufacturer). Compared with
the service consumer, more interactions are required between the service provider and
the manufacturing Cloud. Nevertheless, there is still a lack of methods to support the
provider's activities. In this research work, Manufacturing-as-a-Service (MaaS) was
proposed to achieve configurable and customized manufacturing (Figure 4) [14].
The Manufacturing Description Service Language (MDSL) was developed to model
and represent different types of product characteristics, for example shape, size, and
mechanical and electrical properties. However, it is envisaged that this language may
have difficulties integrating with existing CAD models because of the different data
syntax.


Figure 4. Processing of Manufacturing Service Descriptions in the MaaS Environment

A Sustainable Manufacturing Cloud (SMC) has been proposed (Figure 5). This
platform aims to provide a solution for the rapid development of customized products,
with the intention of minimizing costs for consumers and maximizing profits for
service providers, as well as taking environmental issues into account. Consumers only
need to submit their service requests, and the platform takes care of the remaining
procedures, such as cost estimation, time estimation, service selection optimization,
sustainability evaluation, and other aspects related to the manufacturing service. Once
the solution is composed, it is delivered to the demanders automatically.
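The service-selection step can be pictured with a toy optimization; the SMC's actual
algorithms are not specified here, so the rule below (cheapest provider within a
delivery-time limit) and all data are our own assumptions:

```python
# (provider, cost, delivery days) per task; all values are invented.
providers = {
    "machining": [("ShopA", 120, 5), ("ShopB", 90, 9)],
    "assembly":  [("PlantC", 60, 3), ("PlantD", 45, 8)],
}

def select(task, max_days):
    """Cheapest feasible provider for a task, subject to a time limit."""
    feasible = [p for p in providers[task] if p[2] <= max_days]
    if not feasible:
        raise ValueError(f"no provider can deliver {task} in {max_days} days")
    return min(feasible, key=lambda p: p[1])

print(select("machining", max_days=7))   # -> ('ShopA', 120, 5)
print(select("assembly", max_days=10))   # -> ('PlantD', 45, 8)
```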
The platform consists of four layers:
Resource Layer - enveloping the resources required by the platform, including
manufacturing resources, customer demands and their virtual mapping
information;
Infrastructure Layer - the hardware environment of the platform, comprising
cloud servers, a cloud database and the Internet/Intranet;
Global Service Layer - coalescing all task-processing procedures into an integrated
intelligent package, including order processing, resource retrieval and matching,
manufacturing resource monitoring, service quality evaluation and data security
protection;
Application Layer - an interface between the users, i.e. service demanders and
service providers, and the platform. The interface for service demanders offers a
toolkit for product design optimization and service provider selection. The interface
for service providers can be used to aid manufacturing simulation and
manufacturing process optimization.

Figure 5. A Sustainable Manufacturing Cloud: a solution for the rapid development of customized products

Intelligence, user experience and resource sharing are enhanced in this platform
by providing a one-stop service, from product design to product delivery. Having the
capabilities common to Cloud Manufacturing systems, this platform is seen as an ideal
and mature scenario for implementing Cloud Manufacturing. With this platform,
customized product fabrication, environmental protection and energy saving can be
achieved.
Some substantial impacts on industry may emerge with the proposed platform.
Traditional industry companies can be classified into three categories: companies
only engaged in product design, companies only engaged in fabrication, and
companies engaged in both. By implementing this platform, the boundaries between
these companies would become more explicit, as the division of work throughout the
life cycle of product development becomes more distinct, as enabled by Cloud
Manufacturing. In the long run, companies that used to carry out both design and
fabrication would evolve into ones that undertake either design or fabrication according
to their expertise. An industry network consisting of heterogeneous nodes, being either
design agencies or fabrication workshops, facilitates product customization and user
experience improvement on the customer side, and service upgrades and product
innovation on the service provider side.
4. Conclusions

Cloud Computing is emerging as one of the major enablers for the manufacturing
industry, transforming its business models, helping it align product innovation with
business strategy, and creating intelligent factory networks that encourage effective
collaboration. This pay-by-use scenario will revolutionize manufacturing in the same
way that the Internet revolutionized our everyday and business lives. Manufacturing
shops are starting to take advantage of Cloud Computing because it simply makes good
economic sense. Two types of Cloud Computing adoption in the manufacturing sector
are suggested: manufacturing with direct adoption of Cloud Computing technologies,
and Cloud Manufacturing, the manufacturing version of Cloud Computing.
In terms of direct adoption of Cloud Computing in the manufacturing sector, the
key areas are around IT and new business models, e.g. pay-as-you-go, production
scaling up and down on demand, and flexibility in deploying and customizing
solutions. The HR, CRM, and ERP functions may benefit from using some emerging
PaaS. Cloud Computing can be effective in offering business-to-business solutions for
commerce transactions between businesses, such as between a manufacturer and a
wholesaler, or between a wholesaler and a retailer.
Moving from production-oriented manufacturing to service-oriented
manufacturing, Cloud Manufacturing can offer an attractive and natural solution. In
Cloud Manufacturing, distributed resources are encapsulated into cloud services and
managed in a centralized way. Clients can use cloud services according to their
requirements. Cloud users can request services ranging from product design,
manufacturing and testing to management and all other stages of a product life cycle.
The Cloud Manufacturing service platform performs search, mapping, recommendation
and execution of a service. Two main types of manufacturing resources can be
considered at the manufacturing resource layer: manufacturing physical resources and
manufacturing capabilities.
A Cloud Manufacturing platform is proposed. This platform is designed to
facilitate the rapid development of customized products. It is envisioned as the future
business model and implementation strategy for Cloud Manufacturing. Customized and
original requirements can be realized easily, compared with traditional manufacturing
practices. The proposed platform offers new opportunities, especially for SMEs. With
an industry network consisting of heterogeneous nodes, being either design agencies or
fabrication workshops, product innovation and customization can be achieved with
minimum investment and effort.
It can be anticipated that Cloud Manufacturing will provide effective solutions to
a manufacturing industry that is becoming increasingly globalised and distributed.
Cloud Manufacturing means a new way of conducting manufacturing business, that
is, everything is perceived as a service, be it a service you request or a service you
provide.


References

[1] G. Pallis, Cloud computing: the new frontier of internet computing, IEEE Internet Computing 14
(2010), No. 5, 70-73.
[2] I. Foster, Y. Zhao, I. Raicu and S. Lu, Cloud computing and grid computing 360-degree compared,
in: Grid Computing Environments Workshop, 2008.
[3] X. Xu, From cloud computing to cloud manufacturing, Robotics and Computer-Integrated
Manufacturing 28 (2012), No. 1, 75-86.
[4] W. Heinrichs, Do it anywhere, IEE Electronics Systems and Software 3 (2005), No. 4, 30-33.
[5] S. Venkatesh, D. Odendahl, X. Xu, J. Michaloski, F. Proctor and T. Kramer, Validating portability
of STEP-NC tool center programming, 2005 ASME International Design Engineering Technical
Conferences & Computers and Information in Engineering Conference, Long Beach, California,
September 24-28, 2005, IDETC/CIE 2005, DETC2005-84870.
[6] P. Manenti, Building the global cars of the future, Managing Automation 26 (2011), No. 1, 8-14.
[7] L. Zhang, Y.-L. Luo, F. Tao, L. Ren and H. Guo, Key technologies for the construction of
manufacturing cloud, Computer Integrated Manufacturing Systems 16 (2010), No. 11, 2510-2520.
[8] H. Guo, L. Zhang, F. Tao, L. Ren and Y.-L. Luo, Research on the measurement method of
flexibility of resource service composition in cloud manufacturing, in: Proc. of Int. Conf. on
Manufacturing Engineering and Automation (ICMEA 2010), Guangzhou, China, December 10-12,
2010.
[9] C. Brecher, W. Lohse and M. Vitr, Module-based platform for seamless interoperable
CAD-CAM-CNC planning, in: X.W. Xu and A.Y.C. Nee (eds.), Advanced Design and
Manufacturing Based on STEP, Springer, London, 2009.
[10] C. Brecher, M. Vitr and J. Wolf, Closed-loop CAPP/CAM/CNC process chain based on STEP and
STEP-NC inspection tasks, International Journal of Computer Integrated Manufacturing 19
(2006), 570-580.
[11] W. Terkaj, G. Pedrielli and M. Sacco, Virtual factory data model, in: Proceedings of the Second
International Workshop on Searching and Integrating New Web Data Sources (VLDS 2012),
Istanbul, Turkey, 2012.
[12] A.L.K. Yip, A.P. Jagadeesan, J.R. Corney, Y. Qin, U. Rauschecker and I. Fraunhofer, A front-end
system to support cloud-based manufacturing of customized products, in: Proceedings of the 9th
International Conference on Manufacturing Research (ICMR 2011), 2011.
[13] O.E. Ruiz, S. Arroyave and J. Cardona, EGCL: an extended G-code language with flow control,
functions and mnemonic variables, World Academy of Science, Engineering and Technology 67
(2012), 455-462.
[14] S. Abdul-Ghafour, P. Ghodous and B. Shariat, Integration of product models by ontology
development, in: Proceedings of the 2012 IEEE 13th International Conference on Information
Reuse and Integration (IRI 2012), 2012, 548-555.
Overview on the Development of
Concurrent Design Facility
Dajun Xu a,1, Cees Bil b and Guobiao Cai a
a School of Astronautics, Beihang University, Beijing, China
b School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University,
Melbourne, Australia
1 Corresponding Author: xdj@buaa.edu.cn
Abstract. A Concurrent Design Facility (CDF) is a workspace and information
system allowing multidisciplinary experts to work in a focused environment and
collaborate on a design. The CDF has proved to be an effective and
efficient way to implement the Concurrent Engineering methodology. The first
prototype of a CDF was the Project Design Center established by the Jet Propulsion
Laboratory (JPL) in 1994 for the purpose of developing and implementing new
tools and processes centred on concurrent engineering for space systems. The
best-known CDF was established at the European Space Agency (ESA) in 1998,
and it has been a template for subsequent CDF developments. To date, more than
20 concurrent design environments have been developed around the world by
industry, governments and universities. This paper gives an overview of the
background, history and status of CDF development. Some successful applications
of CDF in the field of aerospace and its benefits are outlined. The key elements of
a CDF, including a process, a multidisciplinary team, an integrated data model, an
appropriate facility and a software infrastructure, are summarized and discussed.
Some other topics related to CDF, such as Integrated Design Tools and
Multidisciplinary Design Optimization (MDO), are clarified by distinguishing
their characteristics from those of a CDF. Since the interaction and collaboration
among experts are prominent aspects of a CDF, the effects of personal behaviour
and culture in a CDF are also discussed. In engineering education, a CDF is
invaluable to lecturers by enabling an entire student team to gain cross-discipline
skills and at the same time stay at the cutting edge of technology. The paper
concludes with a discussion of proposed future trends for CDF.
Keywords. Concurrent Design Facility, Concurrent Engineering
Introduction
System engineering has the characters of both art and science: good system engineering
requires the creativity and knowledge of systems engineers, but it also requires systems
management, i.e. the application of a systematic, disciplined approach. The traditional,
or most classical, design methodology is the sequential approach, which means a
sequence of specialists working in series. The overall design passes, during the
various design steps, from one technical domain specialist (working in isolation from
the rest of the design team) to another, in successive time intervals. The lack of
communication among the specialists means incorrect assumptions may be adopted and
the main system parameters are not monitored in real time. This method reduces the
opportunity to find interdisciplinary solutions and to create system awareness in the
specialists. An improved method is centralized design, where the various technical
domain specialists provide subsystem design information and data to a core team of
one or more system engineers. It also shares the shortcomings of the sequential
approach [1].
Concurrent Engineering is offered as an alternative to the classical approach, and it
provides better performance by taking full advantage of modern information
technology (IT). Experts from various disciplines in a co-located environment can
communicate in real time and face to face. Because many disciplines are involved in
the design process of complex systems, the concurrent approach has proven
particularly effective [1].
A Concurrent Design Facility (CDF) is a workspace and information system
allowing multidisciplinary experts to work in a focused environment and collaborate
on a design. This paper gives an overview of the background, history and status of
CDF development. Some successful applications of CDF in the field of aerospace and
its benefits are outlined. The key elements of a CDF, including a process, a
multidisciplinary team, an integrated data model, an appropriate facility and a software
infrastructure, are summarized and discussed. Some other topics related to CDF, such
as Integrated Design Tools and Multidisciplinary Design Optimization (MDO), are
clarified by distinguishing their characteristics from those of a CDF. Since the
interaction and collaboration among experts are prominent aspects of a CDF, the
effects of personal behaviour and culture in a CDF are also discussed. In engineering
education, a CDF is invaluable to lecturers by enabling an entire student team to gain
cross-discipline skills and at the same time stay at the cutting edge of technology.
Finally, the paper concludes with a discussion of proposed future trends for CDF.
1. Background, History and Status of CDF
1.1. Some Terminologies
There are several terminologies related to the Concurrent Design Facility, and they
form the theoretical basis supporting the development of CDF. Reviewing and
explaining these terminologies will help us to understand the connotations of CDF.
System Engineering
Chambers Science and Technology Dictionary provides the following very apt
definition of the term 'system engineering' as used in the space field [1]: "A logical
process of activities that transforms a set of requirements arising from a specific
mission objective into a full description of a system which fulfills the objective in an
optimum way. It ensures that all aspects of a project have been considered and
integrated into a consistent whole."
Concurrent Engineering (CE)
The definition of CE adopted by ESA is [2]: "Concurrent Engineering is a
systematic approach to integrated product development that emphasizes the response
to customer expectations. It embodies team values of cooperation, trust and sharing in
such a manner that decision making is by consensus, involving all perspectives in
parallel, from the beginning of the product life-cycle."
Concurrent Engineering Methodology (CEM)
The CEM is "a collection of techniques, lessons learned, rules of thumb, algorithms,
and relationships that has been developed for conceptual space system design. When
applied, the CEM makes it possible to rapidly generate processes and tools that are
customized to meet the specific requirements of a study" [3].
1.2. History of CDF
Some attempts at CE began in the 1980s in the aerospace and defense
industries. The results of a survey about CE were presented in 1993 by the Integrated
Process Laboratory at the Concurrent Engineering Research Center (CERC), which was
established at West Virginia University in 1988 by the Defense Advanced Research
Projects Agency (DARPA) to promote CE in US industry. The results showed that the
major impetus in moving to a CE environment was seen to be the promise of
reductions in overall costs and design costs; another impetus was the need to be
competitive and to improve product quality. This survey clearly indicated that the most
pressing need was to foster a teamwork environment, and that the greatest leverage
exists in teamwork and process improvement [4].
According to the literature, the first CDF with full features, named
the Project Design Center (PDC), was opened by the Jet Propulsion Laboratory (JPL)
in June 1994 [5]. The PDC provides a facility, with multiple rooms, for design teams
to use to conduct concurrent engineering sessions. The Aerospace Corporation had
developed processes and tools for CE at almost the same time, and they had been
successfully applied to several programs. Based on the experience of The Aerospace
Corporation, JPL contracted The Aerospace Corporation to develop CEM processes
and tools for the PDC. The Concept Design Center (CDC) was developed by The
Aerospace Corporation in 1997 to enhance support to its customers by providing a
process for bringing together conceptual design capabilities and experts [3].
In the European space industry, concurrent engineering was also applied to
spacecraft design from the beginning of the 1990s. The first example is provided by
the Satellite Design Office (SDO) at DASA/Astrium, with the cooperation of the
System Engineering (SE) group at the Technical University of Munich. An
experimental design facility, the Concurrent Design Facility (CDF), was created at the
ESA Research and Technology Centre (ESTEC) at the end of 1998 and used to
perform assessments of several missions. The CDF is in effect an Integrated Design
Environment (IDE) based on the concurrent engineering methodology [6].
1.3. Status of CDF
Up to now, more than 20 CDFs have been established around the world, as shown in
Table 1. These CDFs are scattered across the United States [3][5][7]-[19], Germany
[20][21], France [23], Italy [24][25], Switzerland [26][27], Britain [28] and Japan [29],
and they can be classified by owner into government, industry and university facilities.
1.4. Applications and Benefits of CDF
At ESA, concurrent design is primarily used to assess the technical, programmatic and
financial feasibility of future space missions and new spacecraft concepts. Additionally,
the ESA CDF is also used for many other multi-disciplinary applications, such as

Table 1. List of Concurrent Design Facilities around the world

Abb.      Full Name                                                                 Affiliation
PDC       Project Design Center                                                     Jet Propulsion Laboratory, USA
CDC       Concept Design Center                                                     The Aerospace Corporation, USA
ASDL      Aerospace System Design Laboratory                                        Georgia Institute of Technology, USA
SRDC      Space Research and Design Center Laboratory                               Naval Postgraduate School, USA
ICDF      Integrated Concept Design Facility                                        TRW, USA
SSAL      Space System Analysis Laboratory                                          Utah State University, USA
SSR       Space System Rapid Design Center                                          Ball Aerospace, USA
IMDC      Integrated Mission Design Center                                          Goddard Space Flight Center, USA
LSMD      Laboratory for Spacecraft and Mission Design                              California Institute of Technology, USA
DE-ICE    Design Environment for Integrated Concurrent Engineering                  MIT, USA
Center    The Center                                                                Boeing Military Aircraft Company, USA
HEDS-IDE  Human Exploration and Development of Space Integrated Design Environment  Johnson Space Center, USA
NAC       NRO Analysis Center                                                       National Reconnaissance Office, USA
COMPASS   Collaborative Modeling and Parametric Assessment of Space Systems         Glenn Research Center, USA
CDF       Concurrent Design Facility                                                ESA, EU
S2C2      Space System Concept Center                                               Technical University of Munich, Germany
ISU CDF   International Space University Concurrent Design Facility                 International Space University, France
CEF       Concurrent Engineering Facility                                           DLR German Aerospace Center, Germany
ISDEC     Integrated System Design Center                                           Thales Alenia Space Italia, Italy
EPFL CDF  Concurrent Design Facility                                                École Polytechnique Fédérale de Lausanne, Switzerland
CDL       Concurrent Design Laboratory                                              University of Glasgow, UK
MDC       Mission Design Center                                                     JAXA, Japan

payload instrument preliminary design, System of Systems (SoS) architectures, space
exploration scenarios, etc. [30]
Since 1994, two research teams, Team X and Team I, have conducted concurrent
engineering design for space missions and space instruments in the PDC at JPL.
The application of modern information systems enabled fundamental improvements to
the system engineering process through the use of real-time concurrent engineering.
Many design teams have demonstrated dramatic savings in time and money compared
with the traditional process for space system conceptual design. In reference [5],
metrics of the improvements in efficiency resulting from Team X and the PDC were
presented; of note are the dramatic reduction in the average time to prepare proposals
and the very significant decrease in cost per proposal.
2. Five CE Key Elements
ESA/ESTEC concluded that the key elements on which the CDF implementation has
been based are: a process, a multidisciplinary team, an integrated design model, a
facility, and a software infrastructure [6]. These elements are described in order below.
2.1. Process
It is a fact the space system has many interdependencies between components. This
implied that the definition and evolution of each component has an impact on other
components and that any change will propagate through the system. Early assessment
of the impact of changes is essential to ensure that the design process converges on an
optimized solution.
The process starts with a preparation phase in which some representatives of the
engineering team (team leader, system engineer, and selected specialists) and of the
customer meet to refine and formalize the mission requirements, to define the
constraints, to identify design drivers, and to estimate the resources needed to achieve
the study objectives. Then the study kick-off takes place and the design process starts.
It is conducted in a number of sessions in which all specialists must participate. This is
an iterative process that addresses all aspects of the system design in a quick and
complete fashion. One key factor is the ability to conduct a process that is not dependent on the path followed. At any stage it must be possible to take advantage of alternative paths, or to use professional estimates, so that the process is not blocked by a lack of data or decisions.
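The convergence behavior of such a session-based iteration can be illustrated with a short sketch. The Python fragment below is only a schematic illustration, not part of the process described here; the domain names, coupling coefficients and convergence tolerance are invented for the example.

```python
# Schematic sketch of the iterative session process: each domain revises
# its estimate from the current system total, and the sessions repeat
# until the system-level mass budget stops changing. All numbers and
# domain names are invented for illustration.
def run_design_session(mass_estimates):
    total = sum(mass_estimates.values())
    mass_estimates["structure"] = 0.25 * total      # structures domain rule of thumb
    mass_estimates["power"] = 120.0 + 0.05 * total  # power domain rule of thumb
    return mass_estimates

estimates = {"structure": 200.0, "power": 150.0, "payload": 300.0}
previous_total = 0.0
for session in range(1, 20):
    estimates = run_design_session(estimates)
    total = sum(estimates.values())
    if abs(total - previous_total) < 0.1:           # convergence criterion
        print(f"converged after {session} sessions: total mass {total:.1f} kg")
        break
    previous_total = total
```

Because each session only needs the current state of the shared budget, the loop can start from estimates obtained along any path, mirroring the path independence described above.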
2.2. A multi-disciplinary team
Human resources are the most important and crucial element. A fundamental part of the CE approach is to create a highly motivated multi-disciplinary team that performs the design work in real time. The challenge, the novelty of the method, the collective approach, the co-operative environment, the intense and focused effort, and a clear short-term goal are all elements that contribute to personal motivation.
To work effectively, the team members must accept a new method of working: co-operate, perform design work and give answers in real time, and contribute to the team spirit. For each discipline a position is created within the facility
contribute to team spirit. For each discipline a position is created within the facility
and assigned to an expert in that particular technical domain. Each position is equipped
with the necessary tools for design modeling, calculations and data exchange. The
choice of disciplines involved depends on the level of detail required and on the
specialization of the available expertise. On the other hand, the number of disciplines
has to be limited, especially in the first experimental study, to avoid extended debate
and to allow a fast turn-around of design iterations.
2.3. An Integrated Data Model
The design process is model-driven using information derived from the collection and
integration of the tools used by each specialist for his or her domain. A parametric-
model-based approach allows generic models of various mission/technological
scenarios to be characterized for the study being performed. A parametric approach
supports fast modification and analysis of new scenarios, which is essential for the real-
time process. It acts as a means to establish and fix the ground rules of the design and to formalize the responsibility boundaries of each domain. Once a specific model is established, it is used to refine the design and to introduce further levels of detail.
Each model consists of an input, output, calculation and results area. The input and
output areas are used to exchange parameters with the rest of the system (i.e. other
internal and external tools and models). The calculation area contains equations and
specification data for different technologies in order to perform the actual modeling
process. The results area contains a summary of the numeric results of the specific
design to be used for presentation during the design process and as part of the report at
the end of the study.
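The four-area model structure lends itself to a very small code sketch. The following is a minimal illustration under our own assumptions (class and parameter names such as PowerModel or cell_efficiency are invented), not the actual spreadsheet implementation used in the CDF.

```python
# Minimal sketch of a parametric domain model with input, output and
# results areas plus a calculation step, mirroring the structure above.
from dataclasses import dataclass, field

@dataclass
class DomainModel:
    inputs: dict = field(default_factory=dict)   # parameters received from other domains
    outputs: dict = field(default_factory=dict)  # parameters published to other domains
    results: dict = field(default_factory=dict)  # summary values for presentation/reporting

class PowerModel(DomainModel):
    def calculate(self):
        # calculation area: technology-specific equations (illustrative only)
        area = self.inputs["power_demand_W"] / (
            self.inputs["solar_flux_W_m2"] * self.inputs["cell_efficiency"])
        self.outputs["array_area_m2"] = area
        self.results["summary"] = f"Solar array area: {area:.2f} m^2"

pm = PowerModel()
pm.inputs.update(power_demand_W=1500.0, solar_flux_W_m2=1367.0, cell_efficiency=0.28)
pm.calculate()
print(pm.results["summary"])
```

The input and output areas are the only parts visible to the rest of the system, which keeps the responsibility boundary of each domain explicit.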
2.4. An Appropriate Facility
The team of specialists meets in the Concurrent Design Facility (CDF) to conduct
design sessions. The accommodation generally comprises a design room, a meeting
room and project-support office space. The equipment location and the layout of the CDF are designed to facilitate the design process and the interaction, co-operation and involvement of the specialists. The facility is equipped with computer workstations, each dedicated to a technical discipline. At the front, a multimedia wall supports two or three large projector screens. Each screen can show the display of any workstation, so that the specialists can present and compare design options or proposals and highlight any implications imposed on, or by, other domains.
2.5. A Software Infrastructure
An infrastructure to implement the Concurrent Design Facility outlined above requires
tools for the generation of the model, integration of the domain models with a means to
propagate data between models in real time, a means to incorporate domain-specific
tools for modeling and/or complex calculations, a documentation-support system, and
storage capability. The infrastructure must allow its users to work remotely from other
Facilities, and exchange information easily between the normal office working
environment and the Facility environment.
For the system model, the Microsoft Excel spreadsheet was chosen for its availability and the existing skills of the team. The distribution of the model required a mechanism
to exchange relevant data between domains. This was solved by preparing a shared
workbook to integrate the data to be exchanged, with macros to handle the propagation
of new data in a controlled way. In some specific cases it was found more convenient
not to use centralized data exchange, but rather to create a direct interface between applications, such as the transfer of geometrical 3-D spacecraft configuration data to the simulation system.
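The shared-workbook mechanism with controlled propagation can be sketched as a tiny publish/subscribe exchange. This Python fragment illustrates the idea only, with invented names; the implementation described here actually used Excel workbooks and macros.

```python
# Sketch of controlled data exchange: domains stage new parameter values
# in a shared "workbook", and changes propagate to subscribers only when
# explicitly committed, mimicking the macro-driven propagation above.
class SharedWorkbook:
    def __init__(self):
        self._params = {}       # parameter name -> current value
        self._staged = {}       # changes awaiting controlled propagation
        self._subscribers = {}  # parameter name -> list of callbacks

    def subscribe(self, name, callback):
        self._subscribers.setdefault(name, []).append(callback)

    def stage(self, name, value):
        self._staged[name] = value

    def commit(self):
        # propagate all staged values in one controlled step
        for name, value in self._staged.items():
            self._params[name] = value
            for cb in self._subscribers.get(name, []):
                cb(value)
        self._staged.clear()

wb = SharedWorkbook()
wb.subscribe("dry_mass_kg", lambda v: print(f"propulsion model received dry mass: {v} kg"))
wb.stage("dry_mass_kg", 412.0)   # structures domain proposes a new value
wb.commit()                      # the change is released to all subscribers
```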
3. Other Topics in CDF
3.1. Integrated Design Tool vs. CDF
An integrated design tool is a piece of software, or a multidisciplinary software environment, developed to design aircraft or spacecraft, analyze performance, or even conduct optimization [31]-[33]. It differs from a CDF in many respects. A CDF emphasizes specialists in different domains working together and contributing their knowledge and experience to a design project; it is a dynamic, real-time work process. An integrated design tool, by contrast, is usually developed by a small research team, or even a few research fellows, with design and analysis methods drawn from textbooks or published
technical articles. It therefore lacks the direct input of practicing engineers and scientists.
3.2. MDO in CDF
There are various definitions of Multidisciplinary Design Optimization (MDO). The AIAA defines it as the optimal design of complex engineering systems that requires analysis accounting for interactions amongst the disciplines (or parts of the system) and that seeks to synergistically exploit these interactions [34]. An AIAA white paper [35] characterizes MDO as a human-centered environment for the design of complex systems, in which conflicting technical and economic requirements must be rationally balanced. A CDF is essentially a multidisciplinary design environment and can usually reach a feasible design; the application of MDO methods in the CDF has begun, with the aim of obtaining optimal designs. Combined with the development of the integrated design tools described above, MDO will make the CDF more efficient and effective.
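To make the idea concrete, the toy problem below couples two invented "disciplines" through one shared design variable and lets a numerical optimizer balance them, instead of the manual trade-off a team would perform. It is a sketch under our own assumptions (the mass models are fabricated), not an MDO method from the cited references.

```python
# Toy MDO illustration: structures and propulsion both depend on a shared
# tank-radius variable x, and the optimizer trades them off automatically.
from scipy.optimize import minimize

def structures_mass(x):
    return 50.0 + 30.0 * x**2        # structure grows with tank radius

def propulsion_mass(x):
    return 200.0 / (0.5 + x)         # propellant penalty for a small tank

def total_mass(x):
    return structures_mass(x[0]) + propulsion_mass(x[0])

res = minimize(total_mass, x0=[1.0], bounds=[(0.1, 3.0)])
print(f"optimal radius: {res.x[0]:.3f} m, total mass: {res.fun:.1f} kg")
```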
3.3. Culture in CDF
Ogawa presented an interesting study of concurrent engineering in different cultures in his master's thesis [36]. He found that CE has achieved remarkable success in the United States and Europe, but that it is neither used nor well known in other parts of the world. His thesis analyzed the CE approach to identify the key factors for successful implementation and operation, from both systems engineering and cultural perspectives, through case studies of an implementation failure in a Japanese organization and several successes in Euro-American organizations. The CE approach is not a one-size-fits-all design tool, and each organization needs its own clear goals and objectives when implementing it. In the Japanese organization, the ambiguity of the design process, the system models and the responsibilities of each engineer made it difficult to build a real-time design environment, and the bottom-up culture and the organizational structure prevented the formation of a dedicated, fixed team. These findings and conclusions may also apply to other Asian countries, such as China.
3.4. Human Behaviors in CDF
The design of large-scale engineering systems requires design teams to balance a
complex set of considerations. Formal approaches for optimizing complex system
design assume that designers behave in a rational, consistent manner. However,
observation of design practice suggests that there are limits to the rationality of
designer behavior. The paper [37] explored the gap between complex system designs generated via formal design processes and those generated by teams of human designers. The results show that human design teams employed a range of strategies but arrived at suboptimal designs. Analysis of their design histories suggested three possible causes of the teams' performance: poorly executed global searches rather than well-executed local searches, a focus on optimizing single design parameters, and sequential implementation rather than concurrent optimization strategies.
3.5. CDF in Education
The CDF has been successfully implemented in aerospace engineering education, for example in the Collaborative Design Environment (CoDE) at the Georgia Institute of Technology [17] and the Concurrent Design Facility of the International Space University (ISU) [23]. Through these applications, processes and collaborative models have been developed. In CoDE, a generalized model was formulated highlighting the key concepts and challenges of collaborative design. The model identifies communication and group cognition, problem-solving and decision-making as interdependent and critical elements. Using the generalized model as a starting point and frame of reference, a detailed process was constructed by strategically aligning pertinent models of collaborative design from a variety of fields. It is important to note that the process is not specific to CoDE and that the generalized model is flexible enough to explain collaborative design of varying degrees of complexity and scope. Many other universities are preparing to implement CDFs in aerospace engineering education, such as RMIT University in Australia [38] and BUAA in China.
4. Trends in the Development of CDF
With the rapid development of new technologies, the aerospace industry faces the major challenge of designing aircraft and spacecraft missions quickly and at low cost. Many design alternatives need to be evaluated and screened, and the CDF, based on the concurrent engineering methodology, is an effective and efficient approach to this problem. Many cases and much experience have shown that CDFs dramatically reduce the time and cost needed to complete design missions compared with the traditional design process. Many industrial and academic research institutes in the aerospace field are implementing or developing their own CDFs. It is clear that more aerospace vehicle designs and flight mission assessments will be conducted in CDFs, and that aerospace engineering education in the CDF environment will become a trend at many universities.
Aerospace vehicle design involves different disciplines and integrates them into a complex engineering problem. In the CDF environment, specialists in each domain contribute their knowledge and experience to solve this problem, but human design decisions are always affected by subjective personal factors, and it is difficult to obtain optimal results. The application of MDO methodologies in the CDF will be a powerful way to accelerate convergence to optimal results, through automatic data exchange and search rather than hands-on trade-offs.
Future aircraft and aerospace vehicle design will also need to consider the costs of operation, turn-around, maintenance, and so on; the CDF should therefore extend its applicability across the project life cycle. Large-scale aerospace mission projects are generally undertaken through international cooperation, so collaborative distributed design capability will be a requirement for future CDFs [39]-[41].
5. Conclusion
The development of the CDF has a history of nearly 20 years since the first facility, the PDC, opened in 1994. To date, more than 20 CDFs have been established around the
world and applied to the design of aircraft, aerospace vehicles and space missions. The application of modern information systems has enabled fundamental improvements to the systems engineering process through real-time concurrent engineering, and many design teams have demonstrated dramatic savings in time and money compared with the traditional process for systems conceptual design. The effectiveness and efficiency of the CDF have been proven by the design cases and experience of the research teams that apply it in their work.
CDF implementation is based on five key elements: a process, a multidisciplinary team, an integrated design model, a facility, and an infrastructure. The multidisciplinary team, in other words the human resources, is the core of these elements, because the CDF is characterized by real-time communication between specialists in different disciplinary domains. The process determines the efficiency of design in the CDF environment, so its system engineer or project lead must have some management knowledge and skills. The other three elements, the model, the facility and the infrastructure, support the smooth and effective operation of the CDF.
The CDF is not just a facility equipped with workstations, projectors and a SmartBoard, nor is it a computer environment installed with an integrated design tool that generates design results automatically. The CDF is essentially a multidisciplinary design environment, and Multidisciplinary Design Optimization methods can help it obtain optimal or better results. Since the human factor is very important for the CDF, national cultures and human decision behaviors also influence its implementation.
The CDF will continue to develop, and more and more aircraft, aerospace vehicles and space missions will be designed, simulated and evaluated within it. Aerospace engineering education also needs to transform its student training patterns, so as to adapt to the new requirements of industry.
References
[1] Fortescue, Peter, Graham Swinerd, and John Stark. Ch20 Spacecraft System Engineering. Spacecraft
Systems Engineering. John Wiley & Sons, Ltd., Publication. 2011.
[2] M. Bandecchi, B. Melton, F. Ongaro. Concurrent Engineering Applied to Space Mission Assessment and
Design. ESA Bulletin No.99. September 1999, pages:34-40.
[3] Aguilar, Joseph A., Andrew B. Dawdy, and Glenn W. Law. The Aerospace Corporation's Concept Design Center. 1998.
[4] M. Lawson, H.M. Karandikar. A Survey of Concurrent Engineering. Concurrent Engineering: Research and Applications, No. 2, pages: 1-6, 1994.
[5] Jeffrey L. Smith. Concurrent Engineering in the Jet Propulsion Laboratory Project Design Center.
98AMTC-83.
[6] M. Bandecchi, B. Melton, B. Gardini. The ESA/ESTEC Concurrent Design Facility. Proceedings of
EuSEC 2000, pages: 329-336.
[7] Julie C. Heim, Kevin K. Parsons, Sonya F. Sepahban. TRW Process Improvements for Rapid Concept
Designs. 1999, pages: 325-333.
[8] Joseph A. Aguilar, Andrew Dawdy. Scope vs. Detail: The Teams of the Concept Design Center. IEEE
Aerospace Conference Proceedings 2000, pages: 465-481.
[9] Robert Shishko. The Proliferation of PDC-Type Environments in Industry and Universities. 2000.
[10] F. Peña-Mora, K. Hussein, S. Vadhavkar. CAIRO: a concurrent engineering meeting environment for virtual design teams. Artificial Intelligence in Engineering, No. 14, 2000, pages: 203-219.
[11] Donald W. Monell, William M. Piland. Aerospace Systems Design in NASA's Collaborative Engineering Environment. IAF-99.U.1.01, 1999.
[12] Michael N. Abreu. Conceptual Design Tools for the NPS Spacecraft Design Center. Master Thesis,
NAVAL POSTGRADUATE SCHOOL, 2001.
D. Xu et al. / Overview on the Development of Concurrent Design Facility 558
[13] Charles M. Reynerson. Developing an Efficient Space System Rapid Design Center. IEEE Aerospace
Conference Proceedings 2001, pages: 3517-3522.
[14] Karpati, G., J. Martin, and M. Steiner. The Integrated Mission Design Center (IMDC) at NASA Goddard
Space Flight Center. IEEE Aerospace Conference Proceedings 2002, pages: 3657-3667.
[15] Linda F. Halle, Michael J. Kramer, M. Denisa Scott. Space Systems Acquisitions Today: Systems
Modeling, Design and Development Improvements, Integrating the Concept Design Center (CDC) and
the NRO Analysis Center (NAC). IEEE Aerospace Conference Proceedings 2003, pages: 3647-3656.
[16] Todd J. Mosher, Jeffrey Kwong. The Space Systems Analysis Laboratory: Utah State University's New Concurrent Engineering Facility. IEEE Aerospace Conference Proceedings 2004, pages: 3866-3872.
[17] Jan Osburg, Dimitri Mavris. A Collaborative Design Environment to Support Multidisciplinary Conceptual Systems Design. AIAA 2005-01-3435.
[18] Thomas Coffee. The Future of Integrated Concurrent Engineering in Spacecraft Design. Research Report
of Massachusetts Institute of Technology, 2006.
[19] Hernando Jimenez, Dimitri N. Mavris. A Framework for Collaborative Design in Engineering
Education. AIAA 2007-301.
[20] Schaus, V., Fischer, P., Lüdtke, D. Concurrent Engineering Software Development at German Aerospace Center: Status and Outlook. Engineering for Space, No. 1, 2010.
[21] Daniel Schubert, Oliver Romberg, Sebastian Kurowski. A New Knowledge Management System for Concurrent Engineering Facilities. 4th International Workshop on System & Concurrent Engineering for Space Applications, SECESA 2010.
[22] Fischer, Philipp M., Volker Schaus, and Andreas Gerndt. Design Model Data Exchange Between Concurrent Engineering Facilities By Means of Model Transformation. 13th NASA-ESA Workshop on Product Data Exchange 2011.
[23] Paulo Esteves, Emmanouil Detsis. Concurrent Engineering at the International Space University. 2011.
[24] M. Marcozzi, G. Campolo, L. Mazzini. TAS-I Integrated System Design Center Activities for Remote
Sensing Satellites. SECESA 2010.
[25] First Studies of ASI Concurrent Engineering Facility (CEF). SECESA 2010.
[26] A. Ivanov, M. Noca, M. Borgeaud. Concurrent Design Facility at the Space Center EPFL. SECESA
2010.
[27] CO2DE: A Design Support System for Collaborative Design. Journal of Engineering Design, Vol.21,
No.1, 2010. Pages: 31-48.
[28] Vasile, Massimiliano. Concurrent Design Lab in Glasgow. 2006.
[29] Kazuhiko Yotsumoto, Atsushi Noda, Masashi Okada. Introduction of Mission Design Center in JAXA. 2005.
[30] M. Fijneman, A. Matthyssen. Application of Concurrent Design in Construction, Maritime, Education
and Other Industry Fields. 2010.
[31] Thomas S. Richardson, Cormac McFarlane, Askin Isikveren. Analysis of conventional and asymmetric
aircraft configurations. Progress in Aerospace Sciences, Vol.47, No.8, 2011. Pages: 647-659.
[32] Arthur Rizzi. Modeling and simulating aircraft stability and control: the SimSAC project. Progress in Aerospace Sciences, Vol. 47, No. 8, 2011, pages: 573-588.
[33] Arthur Rizzi, Peter Eliasson, Tomasz Goetzendorf-Grabowski. Design of a canard configured
TransCruiser using CEASIOM. Progress in Aerospace Sciences. Vol.47, No.8, 2011. Pages: 695-705.
[34] Anonymous, AIAA MDO technical committee, www.aiaa.org, accessed November 2004
[35] J.P. Giesing, J.F.M. Barthelemy, A summary of applications and needs, 1998, AIAA.
[36] Akira Ogawa. Concurrent Engineering for Mission Design in Different Cultures. Master of Science in
Engineering and Management. Massachusetts Institute of Technology, 2008.
[37] Jesse Austin-Breneman. Observations of Designer Behaviors in Complex System Design.
[38] Cees Bil, Lachlan Thompson. Aerospace Design Education at RMIT University. AIAA 2010-9066.
[39] Liqing Fan, Huabing Zhu, Shung Hwee Bok. A Framework for Distributed Collaborative Engineering
on Grids. Computer-Aided Design & Applications, Vol.4, Nos.1-4, 2007, pages: 353-362.
[40] E. Rowland Watkins, Mark McArdle, Thomas Leonard. Cross-middleware Interoperability in
Distributed Concurrent Engineering. Third IEEE International Conference on e-Science and Grid
Computing Proceedings, 2007, pages: 561-568.
[41] Daniel Böhnke, Björn Nagel, Volker Gollnick. An Approach to Multi-fidelity in Conceptual Aircraft Design in Distributed Design Environments. IEEE Aerospace Conference 2011.
A Low Cost CDF Framework for
Aerospace Engineering Education based on
Cloud Computing
Dajun Xu a,1, Cees Bil b and Guobiao Cai a
a School of Astronautics, Beihang University, Beijing, China
b School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University, Melbourne, Australia
Abstract. The Concurrent Design Facility (CDF) is an effective and efficient means of implementing the Concurrent Engineering methodology. In aerospace engineering education, a CDF is invaluable to lecturers, enabling an entire student team to gain cross-discipline skills while staying at the cutting edge of technology. Establishing a CDF, however, usually requires considerable expenditure on hardware and software. This paper presents a low-cost CDF framework, based on cloud computing, which is suitable for classroom aerospace engineering education. An important aspect of the CDF is collaboration between multidisciplinary specialists, or virtual specialists in an engineering education setting. Collaboration in a CDF traditionally requires dedicated hardware or software to exchange files, manage knowledge, work collaboratively on reports, and even communicate remotely with other teams. The emergence and development of cloud computing have made these requirements very easy to fulfill. Public cloud services such as Google Drive, SkyDrive, Dropbox and Mendeley can be used in an educational CDF to save investment in hardware and software for data, file and information exchange, while Google Talk and Skype can be used for remote communication with teams at other locations. This CDF framework has many benefits, including low hardware, software and staffing costs, reduced preparation time, and easy deployment in classroom education.
Keywords. Concurrent Design Facility, Aerospace Engineering Education
Introduction
A Concurrent Design Facility (CDF) is a workspace and information system that allows multidisciplinary experts to work in a focused environment and collaborate on design. The development of the CDF has a history of nearly 20 years since the first facility, the PDC, opened in 1994 [1]. To date, more than 20 CDFs [2]~[26] have been established around the world and applied to the design of aircraft, spacecraft and space missions.
With the rapid development of new technologies, the aerospace industry faces the major challenge of designing aircraft and spacecraft missions quickly and at low cost. Many design alternatives need to be evaluated and screened. A CDF
1 Corresponding Author: xdj@buaa.edu.cn
based on the concurrent engineering methodology is an effective and efficient approach to this problem. The application of modern information systems has enabled fundamental improvements to the systems engineering process through real-time concurrent engineering, and many design teams have demonstrated dramatic savings in time and money compared with the traditional process for systems conceptual design. The effectiveness and efficiency of the CDF have been proven by the design cases and experience of the research teams that apply it in their work. Many industrial and academic research institutes in the aerospace field are implementing or developing their own CDFs. It is clear that more aerospace vehicle designs and flight mission assessments will be conducted in CDFs, and that aerospace engineering education in the CDF environment will become a trend at many universities.
This paper summarizes some university CDFs for aerospace engineering education and, based on an analysis of the essential requirements of a general CDF, presents a low-cost, cloud-based CDF framework suitable for classroom aerospace engineering education. An important aspect of the CDF is collaboration between multidisciplinary specialists, or virtual specialists in an engineering education setting. Collaboration in a CDF traditionally requires dedicated hardware or software to exchange files, manage knowledge, work collaboratively on reports, and even communicate remotely with other teams. The emergence and development of cloud computing have made these requirements very easy to fulfill. Public cloud services such as Google Drive, SkyDrive, Dropbox and Mendeley can be used in an educational CDF to save investment in hardware and software for data, file and information exchange, while Google Talk and Skype can be used for remote communication with teams at other locations. This CDF framework has many benefits, including low hardware, software and staffing costs, reduced preparation time, and easy deployment in classroom education.
1. CDF for Aerospace Education
Universities, as academic research powerhouses, stand at the leading edge of new technology. Some universities paid attention to the CDF from the moment the concept emerged, and they have established their own CDFs to study this new design methodology for aircraft and spacecraft. These CDFs are also applied to aerospace engineering education.
1.1. Design Environment for Integrated Concurrent Engineering (DE-ICE) at MIT
A teaching concurrent engineering environment can be found in the Design Environment for Integrated Concurrent Engineering (DE-ICE) at MIT. This center has 14 design stations and two projectors. PCs are not provided in the environment, as each student receives a campus laptop upon entering the college. The facility is designed around two modes: design mode and teaching mode [6].
1.2. Space System Concept Center (S2C2) at Technical University of Munich
The Technical University of Munich has also developed a concurrent engineering environment as a teaching tool. With approximately 10 user stations, the environment
provides students with hands-on exposure to the tools and methodologies used in the aerospace industry. Excel-based models are used to integrate the design, and MuSSat allows students to work on the design as they find time [6].
1.3. Laboratory for Spacecraft and Mission Design (LSMD) at California Institute of
Technology
The Laboratory for Spacecraft and Mission Design (LSMD) at the California Institute of Technology was developed in 1999 and is modeled after JPL's PDC. It currently houses three Macintoshes and five PCs and is primarily used as a teaching tool. The LSMD uses self-developed tools to teach students about concurrent engineering design over the course of a semester. Since the design is drawn out over a long period of time, little automation of the processes has been required [6].
1.4. Space Systems Analysis Laboratory (SSAL) Concurrent Engineering Facility at
Utah State University
Utah State University has a growing interest in space system design and has established a concurrent engineering environment for two reasons. The first and foremost is to augment the existing space research teaching at the university; the second is to perform system-level designs of space systems. The PDC and CDC were chosen as models for the development of an in-house center, which intends to team with other centers to test distributed concurrent design in the near future [13].
1.5. The Collaborative Design Environment (CoDE) at Georgia Institute of Technology
CoDE belongs to the Aerospace Systems Design Laboratory (ASDL) of the Georgia Institute of Technology. The objective of CoDE is to rapidly execute collaborative design conceptualizations by fostering designers' creativity in multidisciplinary design teams. The environment set out with two missions: to enhance the fidelity of simulation models for design space exploration and robust design methodologies, and to create a national asset for the development of next-generation conceptual design facilities and approaches [14][16].
1.6. Concurrent Design Facility at International Space University
The International Space University (ISU) received its own Concurrent Design Facility (CDF) with the continued support of the European Space Agency (ESA). This facility opens up the possibility for ISU's students to get to know the principles of Concurrent Engineering and its means of application. During the two years of operation of the ISU CDF, workshops and assignments for some of ISU's programs were devised and put into practice, in which technical and non-technical students are exposed to the process of space mission design applying Concurrent Engineering, in particular to remote sensing and telecommunications spacecraft design [20].
2. Essential Requirements of a General CDF
2.1. Team, Hardware, and Software
The paper [27] compared the collaborative engineering environments reported in the literature with respect to three specific aspects: software, hardware, and peopleware configurations. A taxonomy was presented to fully describe each of the different environments. Using this taxonomy, an intersecting set of features from these environments may be used to develop future environments for customized purposes.
In modern engineering, design software has taken on an enormous role. These tools are now commonplace and are used to communicate business, financial, and technical information. Numerous software packages are required or desired to operate a successful concurrent engineering environment, including software to facilitate collaboration, support analysis, support integration, perform modeling, and support visualization. Furthermore, these packages can be commercial off-the-shelf (COTS) items, modified COTS, or custom in-house tools. Different combinations of software are found in each CEE.
Another key consideration in establishing a concurrent engineering environment is
the electronic/computational hardware. The hardware serves many different functions
within the environment including supporting the individual engineer/designer, servers
to tie the individual hardware components together, visualization hardware,
communication hardware, and individual domain specific pieces of hardware. All of
these hardware items work in concert to support the concurrent engineering activities
within the environment. Hardware for the individual engineer may include permanent
desktop systems, mobile preconfigured systems within the CEE, and support for
external mobile systems. Like the software, multiple combinations of hardware
solutions are deployed at the concurrent engineering facilities around the world and no
one solution stands out as the best.
The final key aspect, peopleware, concerns how human beings interact with each other and with the design. Although engineering design is meant to be a technical activity, it truly functions as a social activity. It has been confirmed that team introductions, pooling of knowledge, and team maintenance account for 10-20% of design time. At the heart of concurrent engineering lie five distinct decision areas when establishing a concurrent engineering environment: the roles of the team members, the definition of the process, team formation strategies, who addresses conflict, and how concurrent the operation of the environment is.
2.2. Essential Requirements of a General CDF
A survey of concurrent engineering environments (CEEs) was presented in the paper [27], which summarized their key similarities and differences. Peopleware is a key aspect of a CDF, but the first step in establishing one is to prepare the software and hardware. The larger part of the investment in establishing a CDF goes into hardware and software, so the essential requirements of a general CDF are tabulated in Table 1. Satisfying these requirements gives a CDF the basic capabilities and functions to analyze, simulate, integrate, exchange data, visualize design status, and communicate with remote design centers.
Table 1. Essential Requirements of a General CDF

Hardware:
Workstation | PCs; interface for laptops
Server | Information server
Visualization | Projectors; Smart Board
Communication | Audio systems

Software:
Collaboration | Commercial: [Novell]
Analysis | Commercial: [...]; in-house tools
Visualization | Commercial: [Pro/E; CATIA; SolidWorks]
Integration | Commercial: [iSight; ModelCenter]
Modeling | In-house tools: [Excel+VB]
3. A Collaborative Architecture based on Cloud Computing
3.1. About Cloud computing
The term cloud computing became widespread in publications around 2009. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [28].
3.2. Requirements of Collaboration in CDF
An important aspect of the CDF is collaboration between multidisciplinary specialists, or virtual specialists in an engineering education setting. Collaboration in a CDF requires dedicated hardware or software to exchange files, manage knowledge, work collaboratively on reports, and even communicate remotely with other teams. The requirements for collaboration in a CDF can be summarized in four items: document collaboration, file exchange, knowledge management, and remote communication. In a CDF, spreadsheets are usually used as a simple integrated model, collecting data from each specialist and calculating the performance of the vehicle or system. Many files, such as CAD files, need to be sent to other specialists for flow field simulation or structural analysis. Literature related to the current project needs to be managed and classified. Sometimes remote communication is also necessary to connect with people at other locations.
3.3. Collaboration based on Cloud Computing
A general CDF is usually equipped with dedicated hardware and software to meet the requirements mentioned above, such as an information server and communication software, and this equipment consumes considerable funds. This problem can now be solved at low cost by cloud computing. Table 2 shows a solution for collaboration in a CDF based on cloud technology.
Table 2. Collaboration based on Cloud Technology

Document Collaboration | Google Drive (Google Docs)
File Exchange | Dropbox or SkyDrive
Knowledge Management | Mendeley
Remote Communication | Google Talk or Skype
Google Drive is a file storage and synchronization service provided by Google, released on April 24, 2012, which offers users cloud storage, file sharing and collaborative editing. Google Drive is now the home of Google Docs, a suite of productivity applications that offer collaborative editing of documents, spreadsheets, presentations, and more [29][30].
Dropbox is a file hosting service operated by Dropbox, Inc., that offers cloud
storage, file synchronization, and client software. Dropbox allows users to create a
special folder on each of their computers, which Dropbox then synchronizes so that it
appears to be the same folder (with the same contents) regardless of which computer is
used to view it. Files placed in this folder also are accessible through a website and
mobile phone applications [31]. SkyDrive is also a file hosting service with functions similar to Dropbox, but it can be integrated with Microsoft Office [32].
Mendeley is a desktop and web program for managing and sharing research papers,
discovering research data and collaborating online. It combines Mendeley Desktop, a
PDF and reference management application (available for Windows, Mac and Linux)
with Mendeley Web, an online social network for researchers. Mendeley requires the
user to store all basic citation data on its servers - storing copies of documents is at the
user's discretion. Upon registration, Mendeley provides the user with 2 GB of free web
storage space, which is upgradeable at a very low cost [33].
Google Talk is an instant messaging service that provides both text and voice
communication [34]. Skype allows users to communicate with peers by voice using a
microphone, video by using a webcam, and instant messaging over the Internet. Phone
calls may be placed to recipients on the traditional telephone networks. Calls to other
users within the Skype service are free of charge, while calls to landline telephones and
mobile phones are charged via a debit-based user account system. Skype has also
become popular for its additional features, including file transfer, and
videoconferencing [35] .
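As an illustration of how such services can replace a dedicated information server, the sketch below merges per-discipline parameter files dropped into a cloud-synced folder. The folder path, file naming convention and CSV layout are assumptions made for the example; they are not prescribed by the services themselves.

```python
# Sketch of file-based data exchange through a cloud-synced folder
# (Dropbox, SkyDrive and Google Drive all expose such a local folder).
# Each discipline writes "name,value" rows to its own <discipline>_params.csv;
# a simple script merges them into one design-state dictionary.
import csv
import glob
import os

SHARED = os.path.expanduser("~/Dropbox/cdf_study")  # assumed synced folder

def merge_parameters(shared_dir):
    state = {}
    for path in glob.glob(os.path.join(shared_dir, "*_params.csv")):
        discipline = os.path.basename(path).replace("_params.csv", "")
        with open(path, newline="") as f:
            for name, value in csv.reader(f):
                state[f"{discipline}.{name}"] = value  # e.g. "power.array_area_m2"
    return state

for key, value in sorted(merge_parameters(SHARED).items()):
    print(key, "=", value)
```

Because the synchronization itself is handled by the cloud client, the classroom setup needs no server administration at all.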
3.4. Benefits
Applying cloud technology in the CDF environment brings several benefits. First, the costs of hardware, software and staffing are reduced, as dedicated equipment and software need not be purchased and there is no need to employ people to maintain the computer system. Second, preparation time is saved, so the project of establishing the CDF can be completed sooner. Third, all these cloud technologies are familiar to almost everyone, so they are very easy to use and collaboration in the CDF can be realized without
any special training. If a classroom is equipped with a projector and a large screen, and Wi-Fi is provided on campus, a CDF education environment can be built easily and quickly in the classroom using the cloud services mentioned above.
4. Conclusion
The effectiveness and efficiency of the CDF have been proven by the design cases and experience of many research teams over the past twenty years. Some universities have also established their own CDFs for academic research and aerospace engineering education. The essential requirements of a general CDF are analyzed here by comparing the collaborative engineering environments reported in the literature with respect to three specific aspects: software, hardware, and peopleware configurations. Several cloud computing technologies, including Google Drive, Dropbox, SkyDrive, Mendeley, Google Talk and Skype, are presented to realize collaboration in the CDF environment, with many benefits such as reduced hardware, software and staffing costs, reduced preparation time, and ease of use. This simple, low-cost CDF framework is well suited to classroom education in aerospace engineering.
References
[1] Jeffrey L. Smith. Concurrent Engineering in the Jet Propulsion Laboratory Project Design Center.
98AMTC-83.
[2] Aguilar, Joseph A., Andrew B. Dawdy, and Glenn W. Law. The Aerospace Corporation's Concept Design Center. 1998.
[3] M. Bandecchi, B. Melton, B. Gardini. The ESA/ESTEC Concurrent Design Facility. Proceedings of
EuSEC 2000, pages: 329-336.
[4] Julie C. Heim, Kevin K. Parsons, Sonya F. Sepahban. TRW Process Improvements for Rapid Concept
Designs. 1999, pages: 325-333.
[5] Joseph A. Aguilar, Andrew Dawdy. Scope vs. Detail: The Teams of the Concept Design Center. IEEE
Aerospace Conference Proceedings 2000, pages: 465-481.
[6] Robert Shishko. The Proliferation of PDC-Type Environments in Industry and Universities. 2000.
[7] F. Peña-Mora, K. Hussein, S. Vadhavkar. CAIRO: a concurrent engineering meeting environment for virtual design teams. Artificial Intelligence in Engineering, No. 14, 2000, pages: 203-219.
[8] Donald W. Monell, William M. Piland. Aerospace Systems Design in NASA's Collaborative Engineering Environment. IAF-99.U.1.01, 1999.
[9] Michael N. Abreu. Conceptual Design Tools for the NPS Spacecraft Design Center. Master Thesis,
NAVAL POSTGRADUATE SCHOOL, 2001.
[10] Charles M. Reynerson. Developing an Efficient Space System Rapid Design Center. IEEE Aerospace
Conference Proceedings 2001, pages: 3517-3522.
[11] Karpati, G., J. Martin, and M. Steiner. The Integrated Mission Design Center (IMDC) at NASA Goddard
Space Flight Center. IEEE Aerospace Conference Proceedings 2002, pages: 3657-3667.
[12] Linda F. Halle, Michael J. Kramer, M. Denisa Scott. Space Systems Acquisitions Today: Systems
Modeling, Design and Development Improvements, Integrating the Concept Design Center (CDC) and
the NRO Analysis Center (NAC). IEEE Aerospace Conference Proceedings 2003, pages: 3647-3656.
[13] Todd J. Mosher, Jeffrey Kwong. The Space Systems Analysis Laboratory: Utah State University's New Concurrent Engineering Facility. IEEE Aerospace Conference Proceedings 2004, pages: 3866-3872.
[14] Jan Osburg, Dimitri Mavris. A Collaborative Design Environment to Support Multidisciplinary Conceptual Systems Design. AIAA 2005-01-3435.
[15] Thomas Coffee. The Future of Integrated Concurrent Engineering in Spacecraft Design. Research Report
of Massachusetts Institute of Technology, 2006.
[16] Hernando Jimenez, Dimitri N. Mavris. A Framework for Collaborative Design in Engineering
Education. AIAA 2007-301.
[17] Schaus, V., Fischer, P., Lüdtke, D. Concurrent Engineering Software Development at German Aerospace Center: Status and Outlook. Engineering for Space, No. 1, 2010.
[18] Daniel Schubert, Oliver Romberg, Sebastian Kurowski. A New Knowledge Management System for Concurrent Engineering Facilities. 4th International Workshop on System & Concurrent Engineering for Space Applications, SECESA 2010.
[19] Fischer, Philipp M., Volker Schaus, and Andreas Gerndt. Design Model Data Exchange Between Concurrent Engineering Facilities By Means of Model Transformation. 13th NASA-ESA Workshop on Product Data Exchange 2011.
[20] Paulo Esteves, Emmanouil Detsis. Concurrent Engineering at the International Space University. 2011.
[21] M. Marcozzi, G. Campolo, L. Mazzini. TAS-I Integrated System Design Center Activities for Remote
Sensing Satellites. SECESA 2010.
[22] First Studies of ASI Concurrent Engineering Facility (CEF). SECESA 2010.
[23] A. Ivanov, M. Noca, M. Borgeaud. Concurrent Design Facility at the Space Center EPFL. SECESA
2010.
[24] CO2DE: A Design Support System for Collaborative Design. Journal of Engineering Design, Vol.21,
No.1, 2010. Pages: 31-48.
[25] Vasile, Massimiliano. Concurrent Design Lab in Glasgow. 2006.
[26] Kazuhiko Yotsumoto, Atsushi Noda, Masashi Okada. Introduction of Mission Design Center in JAXA. 2005.
[27] Jonathan Osborn, Joshua D. Summers, and Gregory M. Mocko. Review of Collaborative Engineering
Environments: Software, Hardware, Peopleware. International Conference on Engineering Design,
ICED11, 2011.
[28] Moises Dutra, Minh Tri Nguyen, Parisa Ghodous. An approach to adapt collaborative architectures to
cloud computing. Advanced Concurrent Engineering, DOI: 10.1007/ 978-0-85729-799-0_19, Springer-
Verlag London Limited 2011.
[29] http://en.wikipedia.org/wiki/Google_Docs
[30] http://en.wikipedia.org/wiki/Google_Drive
[31] http://en.wikipedia.org/wiki/Dropbox_(service)
[32] http://en.wikipedia.org/wiki/SkyDrive
[33] http://en.wikipedia.org/wiki/Mendeley
[34] http://en.wikipedia.org/wiki/Google_Talk
[35] http://en.wikipedia.org/wiki/Skype
A Framework for Completeness in
Requirements Engineering: An Application
in Aircraft Maintenance Scenario
Marina M.N. Zenun a,1 and Geilson Loureiro b
a Aeronautics and Mechanical Dept. - ITA
b Aeronautics and Mechanical Dept. - ITA
Abstract. This paper presents a framework for Completeness in Requirements Engineering (RE) to be used by the civil aircraft industry in order to achieve a sufficient degree of completeness in requirements. In the civil aircraft industry, completeness is crucial to the success of product development, where a missing requirement can mean a missing attribute. Completeness of requirements is both a necessity and a huge challenge. To improve the requirements elicitation process so that all requirements receive the same level of attention, this work proposes a framework that considers the entire system life cycle, all processes, and the people involved. The proposed framework is applied to a civil aircraft maintenance scenario, starting at the early stages of development, to achieve a sufficient degree of completeness in civil aircraft maintenance requirements.
Keywords. Completeness, requirements elicitation, Requirements Engineering,
Systems Engineering, aircraft maintenance scenario.
Introduction
This paper presents a framework for Completeness in Requirements Engineering (RE) to be used in the development of transport category airplanes, as categorized by the FAA in 14 CFR Part 25 (FAA, 2013), in order to achieve a sufficient degree of completeness in requirements in the context of the civil aircraft industry.
Completeness of requirements is a very important attribute for the success of any product and, at the same time, according to Wiegers (2003), a huge challenge. While missing requirements are frequent, they are hard to detect, because they are invisible (WIEGERS, 2003). Moreover, a missing requirement results in a missing attribute in complex products (KOSSMANN et al., 2007).
Furthermore, the development of systems for transport category airplanes is guided
by standards, such as SAE ARP 4754A (Guidelines for development of civil aircraft
and systems), which requires completeness in the set of requirements (SAE, 2011). So,
1 Marina M. N. Zenun, Instituto Tecnológico de Aeronáutica, Praça Mal. Eduardo Gomes, 50, SJC, SP, Brazil, 12228-900; e-mail: marina.zenun@embraer.com.br
completeness is crucial to success in the development of civil aircraft, especially transport category airplanes.
Given this relevance, this paper aims to answer the following question: how can a sufficient degree of completeness in RE be achieved during the development of civil aircraft? To answer this question, this work applies exploratory research involving a literature review.
This paper presents the concepts of completeness, including completeness in the development of transport category airplanes, and proposes a framework to achieve a sufficient degree of completeness in RE. The proposed framework is to be used from the early stages of aircraft development; by following it, a complete set of requirements can be produced.
The paper presents concepts of completeness (Section 1), related work (Section 2), the aircraft maintenance process (Section 3), the framework for CoRE (Section 4), the method for CoRE and the application of the framework in the maintenance scenario (Section 5), results and discussion (Section 6), and conclusions and future work (Section 7).
1. Concepts of completeness
Completeness for transport category airplanes is defined by Aerospace Recommended Practice (ARP) 4754A as the degree to which a set of correct requirements, when met by a system, satisfies the interests of customers, users, maintainers and certification authorities, as well as aircraft, system and item developers, under all modes of operation and life cycle phases for the defined operating environment (SAE, 2011).
In the sense of whether the requirements set contains all requirements, absolute completeness is a theoretical but probably unattainable goal for most requirements documents: the only truly complete specification of something would be the thing itself. It may also be unnecessary and uneconomical in most situations (LEVESON, 2000). So the trick is to determine whether a requirements document is sufficient, no more and no less (LEVESON, 2000; GRADY, 1997).
According to Carson et al. (2004), completeness of the problem statement step means that all stakeholder interfaces are identified and quantified for all applicable life cycle phases (development, production, certification, training, operation, maintenance, and disposal) and the related operating modes.
2. Related works
Some authors, and also standards, have presented approaches to requirements completeness in the Systems Engineering area. However, with few exceptions, the general inclination has been that completeness is a probable result of following a process consisting of a combination of templates, checklists, and stakeholder involvement. Few papers have demonstrated processes sufficient to confidently provide complete requirements according to the above definitions (CARSON et al., 2004). Examples of these approaches are:
A template or checklist approach: the Volere requirements techniques (volere is an Italian verb meaning to wish or to want) (ROBERTSON and ROBERTSON, 2006); requirements categorization (GABB et al., 2001); templates in military standard documents (e.g. MIL-STD-961E) (U.S. DoD, 2003); and checklists in civil aviation
standard documents such as ARP 4754A (SAE, 2011). The theory is that addressing all elements of the checklist will ensure completeness (CARSON et al., 2004).
Requirements-elicitation basis: approaches that involve users, for example Quality Function Deployment (QFD) and prototyping. They can be effective in gaining stakeholder involvement and gathering user requirements from the stakeholders (CARSON et al., 2004).
Operating concepts and use cases: use cases and other functional analyses, based on the mission and concept of operations, establish what the system must do. While useful for functional requirements, these approaches do not typically address design constraints, although some may include environmental considerations, i.e., under what conditions a specific requirement is applicable (CARSON et al., 2004).
Review method: Review several times until no one can think of anything else
(GRADY, 1993).
Context analysis: a formal approach proposed by Carson et al. (2004), consisting basically of stakeholder identification and interface quantification. These techniques still need to be validated (CARSON et al., 2004).
3. Aircraft maintenance process
Because the proposed framework is applied to the aircraft maintenance scenario as an example of its implementation, this Section provides a brief summary of that subject.
The purpose of a generic maintenance process is to sustain the capability of the
system to provide a service (ISO, 2008). In the case of civil aviation, the purpose of the
maintenance process is to sustain the capability of the aircraft to operate.
The maintenance process includes the activities to provide operations support,
logistics, and material management. Based on feedback from ongoing monitoring of
the operational environment, problems are identified and corrective, remedial, or
preventive actions are taken to restore full system operational capability. This process
contributes to the Requirements Engineering Process when considerations of
constraints imposed in later life cycle stages are used to elicit system requirements and
to influence architectural design (INCOSE, 2011). The aircraft maintenance scenario analysis contributes to requirements for the system-of-interest (the aircraft), for special maintenance tools, and for enabling systems. An enabling system supports a system-of-interest during its life cycle stages but does not necessarily contribute directly to its function during operation (e.g. Ground Support Equipment, GSE). Provisions for enabling systems, such as external test equipment signals and connections, should be defined in the maintenance requirements (SAE, 2011).
Types of maintenance in civil aviation are: over-night checks, A checks, B checks,
C checks, unscheduled maintenance, and fixed interval checks. The overnight check is
done when the aircraft is in service at any airport and consists of routine tasks, such as
fluid checks. The A checks occur at predetermined intervals and consist of preventive
maintenance tasks. B checks and C checks are at incrementally longer intervals than A
checks. Each level is progressively deeper in its diagnostic inspections. Unscheduled
maintenance is driven by the MTBUR (Mean Time Between Unscheduled Removals) and involves procedures such as troubleshooting to find faulty equipment or the cause of a problem (JACKSON, 1997).
Maintainability, a specialty engineering requirement, appears in the maintenance scenario (YOUNG, 2004). The Business Dictionary (2013) defines maintainability as the characteristic of design and installation that determines the probability that a failed system can be restored to its normal operable state within a given timeframe, using the prescribed practices and procedures.
Maintainability requirements therefore include scheduled and unscheduled maintenance requirements and any links to specific safety-related functions. Factors such as the percentage of failure detection or the percentage of fault isolation may also be important aids to the maintenance task (SAE, 2011).
4. Framework for CoRE
This work presents a structured process to be followed in order to elicit and write requirements in a way that achieves a sufficient degree of completeness in the requirements set. The framework for Completeness in Requirements Engineering is named the framework for CoRE in this work, using the first letters of the words Completeness in Requirements Engineering. In the same way, the method used to apply the framework is named the Method for CoRE.
The framework for CoRE is derived from the Total View Framework proposed by Loureiro (1999). Loureiro's framework was later evolved and presented again in 2010 (Loureiro et al., 2010). However, from the time the framework was conceived until now, it has not addressed completeness. The framework has therefore been modified and extended to address completeness in requirements, resulting in the framework for CoRE.
The framework for CoRE aims to achieve sufficient completeness by performing analysis in three dimensions: the analysis, integration, and structure dimensions. The framework for CoRE is presented in Figure 1.
The analysis dimension, or requirements analysis, is the dimension related to the analysis of the elements that will enable the creation of the requirements set. It comprises stakeholder analysis, system analysis, and architectural analysis (Figure 1).
Requirements analysis is performed simultaneously for the product under development and for the organizations that perform the life cycle processes. Product and organizations are the elements to be integrated in the integration dimension (Figure 1).
The first application of these two analyses (requirements and integration analysis), that is, the first application of requirements analysis for the product and for the organizations that perform the life cycle processes, results in the first layer of a structured hierarchy of requirements (layer i). Allocations shall then be performed by applying requirements analysis, for both product and organizations, to produce requirements for the lower levels. The structure dimension consists of the layers of a hierarchy that starts with high-level requirements (layer i), passes through intermediate levels of the system breakdown structure (layer i + j), and ends with low-level requirements (layer n).
As a result of these three dimensions (analysis, integration, and structure), the framework for CoRE provides elements, flows, and attributes, which are the basis for the requirements statements at all layers of the product under development. The way elements, flows and attributes are elicited, for both product and organization, while also observing the interfaces between them, is the key to achieving
a sufficient degree of completeness, both in the requirements set and in each requirement itself. This is the main contribution of this work; it was not present in Loureiro's total view framework.
The final output of the framework is the requirements set. Each requirement is created by adding elements, flows, and attributes gathered from the three dimensions of the framework for CoRE. Figure 1 shows the framework for CoRE with a zoom on the requirements.
Figure 1. Framework for CoRE.
5. Method for CoRE and application of framework in maintenance scenario
The Method for CoRE explains how to apply the Framework for CoRE. The steps of
the method are explained and illustrated by applying the framework in the maintenance
scenario.
The method's steps and the items to be identified are listed in Table 1. In order to illustrate how the framework addresses sufficient completeness in RE, this paper presents an example of the framework's application in an aircraft maintenance scenario. Within the maintenance scenario, the operating mode of unscheduled maintenance was chosen. The development of a Ground-Based Augmentation System (GBAS) to be installed in a transport category airplane was selected as the complex product for this example. The framework steps and the example implementation are presented in the next paragraphs.
Table 1. Method steps and items to be identified

Step 1: Mission
Step 2: Product life cycle processes
Step 3: Stakeholders (product scenario / organization scenario)
Step 4: System (product scenario / organization scenario)
Step 5: Architecture (product scenario / organization scenario)
Step 6: Attributes for elements (product scenario / organization scenario)
Step 7: Write requirements
Step 8: Add traceability
Step 9: Add traceability to the architecture
Step 1: Identify the product mission. The mission of an Aircraft (A/C) with the GBAS System is defined as: to provide capabilities to the A/C to make a landing at airports with a GBAS Ground Subsystem, under coverage of GNSS satellites (ZENUN and LOUREIRO, 2013).
The mission of the aircraft maintenance process is to sustain the capability of the aircraft to operate. Specifically, in the case of a GBAS installed in an A/C, the mission of the maintenance process is to sustain the capability of the embedded GBAS to operate.
Step 2: Identify the product's entire life cycle processes. The life cycle processes of an A/C with the GBAS System highlight the processes to be executed by the organizations. These processes are: Development, Production, Integration, Certification, Operation, Maintenance and Discard (ZENUN and LOUREIRO, 2012).
Step 3: Identify product stakeholders and organization stakeholders, and their concerns, for each product life cycle process scenario. Figure 2 shows stakeholders and their concerns for the A/C with GBAS in the maintenance scenario, and Figure 3 highlights organization stakeholders and their concerns for the A/C with GBAS in the Maintenance Organization (Repair Station) scenario. Concerns such as safety, maintainability, ease of maintenance, conformity with regulations, and profit appear for the stakeholders in Figure 2. The terms safety, conformity with regulations, and profit also appear in the organization stakeholder scenario (Figure 3).
Figure 2. Stakeholders and their concerns for Product in Maintenance scenario.
Figure 3. Stakeholders and their concerns for Maintenance Organization scenario.
Step 4: Identify the system context for the product at each life cycle process scenario and for the organization at each life cycle process scenario, within the scope of the development effort.
Figure 4 shows the system context for the A/C with GBAS system in maintenance, and Figure 5 shows the system context for the Maintenance Organization in the maintenance scenario. The system context defines the function performed by the system element and identifies the elements in the environment of the system. The environment of the system contains the elements outside the system function scope, which may be other systems or users. These elements interact with one another. In these interactions, they can exchange material, information and energy flows with the system. Those flows define logical interface requirements for the system (LOUREIRO et al., 2010, SAE, 2011). In this example, the identified elements for the product context (Figure 4) are the inspector, the mechanic, and the interface between mechanic and A/C. These elements exchange material, information, and energy with the system and among themselves. The identified elements for the organization context are suppliers, airlines, A/C builders, and Certification Agencies.
Environment elements may have different relevant states. Sets of environment element states are called circumstances. The system must have different modes depending on the circumstances. Behavior modeling is required to show under which conditions system mode and system state transitions occur. Functions are identified per mode, from the outside in, by identifying which responses the system is supposed to give to each stimulus provided by the environment elements (LOUREIRO et al., 2010). From the model (Figure 4), it can be observed that the mechanic can command a built-in test at the interface between mechanic and A/C. The interface shall send a response to this stimulus. In turn, the interface will send a stimulus to the GBAS in maintenance, and the GBAS shall send a response to this stimulus back to the interface. These analyses provide the subject matter for writing maintenance functional requirements.
Figure 4. System (Product) context for Product in Maintenance scenario.
Figure 5. System (Organization) context for Maintenance Organization scenario.
Step 5: Identify the implementation architecture context for the product at each life cycle process scenario and for the organization at each life cycle process scenario, within the scope of the development effort.
Physical connections between the system and the environment elements define the physical external interface requirements. Physical parts are defined. Physical internal interfaces are defined by the architecture connections and architecture flows among those parts (LOUREIRO et al., 2010). Figure 6 shows the product architectural context for the A/C with GBAS in the maintenance scenario, and Figure 7 shows the organization architectural context for the Organization in the maintenance scenario. Figure 6 shows the connections between the external elements and the A/C with GBAS. At this level, the model (Figure 6) shows, among others, the interface between the mechanic and the maintenance computer (the architecture solution chosen for the interface between mechanic and A/C). Traditionally, the mechanic operates directly on the maintenance computer. However, if a means to perform data download is desired, a download connection will drive interface requirements.
Figure 6. Architectural context for Product in Maintenance scenario.
Figure 7. Architectural context for Organization in Maintenance scenario.
Step 6: Identify attributes for the elements of product and organization. In this step, all elements, flows, and attributes gathered from all analyses shall be listed. Table 2 lists some examples derived from the product stakeholder and system analyses for the Product in Maintenance.
Table 2. Product stakeholders, their concerns and attributes for the Product in Maintenance analysis scenario

Elements               | Flows         | Attributes
Airlines               | Profit        | Low maintenance cost
Certification Agencies | Safety        | Extremely small rate of A/C accidents
Maintenance computer   | Send response | Low delay
Step 7: Write requirements. The requirements are derived from Table 2 by combining the three columns <elements>, <flows>, and <attributes>. However, this is not a direct conversion: in some cases, it is necessary to convert flows into actions in the requirement statement. For example, if the airlines' concern is profit and there is a failure in the GBAS, unscheduled maintenance is required to restore it to its normal operable state; the required action is to perform maintenance on the GBAS and, in order to preserve profit, the maintenance cost should be minimized. A requirement may therefore be written: "The airlines shall be able to perform maintenance in the GBAS within 1 hour from receipt of the failure report."
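To make Step 7 concrete, the following is a minimal sketch, in Java, of how an <element, flow, attribute> triple from Table 2 could be turned into a requirement statement. The class name, the record layout and the sentence template are illustrative assumptions, not part of the published method.

// Illustrative sketch: composing a requirement statement from an
// <element, flow, attribute> triple, as described in Step 7.
// The sentence template and all names are assumptions.
public class RequirementBuilder {

    // One row of Table 2: an element, the flow (or concern) it holds,
    // and the attribute that qualifies the flow.
    record Triple(String element, String flow, String attribute) {}

    // Converts a triple into a requirement statement; the flow is
    // first rephrased as an action, as Step 7 requires.
    static String toRequirement(Triple t, String action) {
        return "The " + t.element() + " shall be able to " + action
             + ", such that the attribute \"" + t.attribute() + "\" is achieved.";
    }

    public static void main(String[] args) {
        Triple airlines = new Triple("Airlines", "Profit", "Low Maintenance Cost");
        // The flow "Profit" is converted into the action of performing maintenance.
        System.out.println(toRequirement(airlines, "perform maintenance in the GBAS"));
    }
}

In a real application the template would of course be refined per flow type; the point here is only that the triple, not free-form prose, is the unit from which the requirement is assembled.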
After applying these seven steps, the first layer of the requirements set is complete for the maintenance scenario. It is then necessary to repeat Steps 3, 4, 5, 6 and 7 (which are highlighted in Table 1) to decompose requirements for the layers below, before moving to Step 8.
Step 8: Add traceability. At this point, requirements exist for the high level (layer i) and for the next level (layer i + j). Traceability should now be added from each high-level requirement to the low-level requirements that exist to satisfy it.
Step 9: Add traceability to the architecture. The last step of the method is to add traceability to the architecture. If a requirement is detailed enough to be implemented in the architecture, there will be no lower-level requirement to satisfy it. In that case, a link from the low-level requirement to the architecture shall be added. In the Method for CoRE, this link (requirement to architecture) means that the requirement will not be flowed down any further.
The purpose of the traceability added in Steps 8 and 9 is to guarantee that the complete set of requirements gathered by following Steps 3 to 7 is implemented in the solution in a complete way.
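The traceability of Steps 8 and 9 can be pictured as a simple tree in which every leaf must end in an architecture allocation. The sketch below is an assumption about how such links could be represented; the Method for CoRE itself does not prescribe a data structure.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of Steps 8 and 9: a requirement either flows
// down to lower-level requirements (Step 8) or, when detailed enough,
// links to an architecture element and is flowed down no further (Step 9).
class Requirement {
    final String id;
    final List<Requirement> children = new ArrayList<>(); // Step 8 links
    String architectureElement;                           // Step 9 link

    Requirement(String id) { this.id = id; }

    void traceTo(Requirement lower)      { children.add(lower); }
    void allocateTo(String architecture) { architectureElement = architecture; }

    // The guarantee that Steps 8 and 9 aim for: every branch of the
    // requirements tree ends in an architecture allocation.
    boolean fullyTraced() {
        if (children.isEmpty()) return architectureElement != null;
        return children.stream().allMatch(Requirement::fullyTraced);
    }
}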
6. Results and discussions
In the maintenance scenario, all stakeholders and system elements were identified, and their interfaces were identified and quantified. The application of the framework for CoRE in the maintenance scenario, for the unscheduled maintenance mode, was beneficial and provided a complete understanding of all elements of this phase and mode, considering product and organization simultaneously. The framework application provided requirements for the system-of-interest (the aircraft) and also for the maintenance procedures in the manuals (gathered from the organizational context). This case did not provide requirements for enabling systems, owing to its characteristics. If the framework for CoRE were applied to the maintenance scenario in scheduled maintenance mode, additional requirements would likely be elicited.
7. Conclusions and future work
This paper has presented a framework for completeness in RE to be used in the development of civil aircraft, in the context of the aircraft industry.
A complete pilot study is planned to be finished this year to confirm the validation of the method. After the pilot study, the advantages and disadvantages will be evaluated to validate the framework's applicability. Metrics to demonstrate the framework's effectiveness will also be presented. Notwithstanding, the results presented in this work show that it is heading in the right direction. In the near future, the framework could be used as a method for requirements elicitation to achieve a sufficient degree of completeness in requirements in the development of civil aircraft. The framework for CoRE may also be applied to the development of other complex products.
References
[1] A. Gabb, D. Haines, D. Jones, J. van Gaasbeek, W. Vietinghoff, P. Davies, G. Caple, S. Eppig, A. Hall, D. Lamont, Requirements Categorisation, In: 11th INCOSE Annual International Symposium, Melbourne, 2001, pp. 1-8.
[2] Business Dictionary, Maintainability, Available at: <http://www.businessdictionary.com/definition/maintainability.html>, Accessed on: 04 Apr 2013.
[3] Federal Aviation Administration (FAA), CFR 14 Part 25: Airworthiness standards: Transport category airplanes, Washington D.C., 2013.
[4] G. Loureiro, C.E.V. Ribeiro, A.G. Adinolfi, R.C.B. Andrade, Systems Concurrent Engineering to Develop an Aeronautical Navigation System, Product (IGDP) 8 (2010), 16-31.
[5] G. Loureiro, A Systems Engineering and Concurrent Engineering framework for the integrated development of complex products, Thesis (Ph.D.), Loughborough University, 1999.
[6] International Council on Systems Engineering (INCOSE), Systems Engineering Handbook: A guide for system life cycle processes and activities, Version 3.2.2, Edited by Cecilia Haskins, San Diego, 2011, v.1, 386 p.
[7] International Organization for Standardization (ISO/IEC), Software and systems engineering 15288:2008, Geneva, 2008.
[8] J.O. Grady, System Validation and Verification, Boca Raton: CRC Press, 1997, v.1, 327 p.
[9] K.E. Wiegers, Software Requirements, 2nd Edition, Redmond: Microsoft Press, 2003, v.1, 544 p.
[10] M. Kossmann, M. Odeh, A. Gillies, C. Ingamells, 'Tour D'Horizon' in Requirements Engineering - Areas Left for Exploration, In: 17th INCOSE Annual International Symposium, San Diego, 2007, pp. 1-21.
[11] M.M.N. Zenun, G. Loureiro, A Framework for Dependability and Completeness in Requirements Engineering, In: 6th Latin American Symposium on Dependable Computing, Rio de Janeiro, 2013, pp. 1-4.
[12] M.M.N. Zenun, G. Loureiro, A Framework for Requirements Concurrent Engineering, In: 19th ISPE International Conference on Concurrent Engineering (CE), Trier, 2012, London: Springer, pp. 133-144.
[13] N. Leveson, Completeness in formal specification language design for process-control systems, In: 3rd Workshop on Formal Methods in Software Practice, New York, ACM Press, 2000, pp. 1-8.
[14] R.S. Carson, E. Aslaksen, G. Caple, P. Davies, R. Gonzales, R. Kohl, A. Sahraoui, Requirements Completeness, In: 14th INCOSE Annual International Symposium, Toulouse, 2004, pp. 1-15.
[15] S. Jackson, Systems Engineering for Commercial Aircraft, 1st Edition, Brookfield: Ashgate Publishing Company, 1997, v.1, 194 p.
[16] S. Robertson, J. Robertson, Mastering the Requirements Process, 2nd Edition, Boston: Addison Wesley Professional, 2006, v.1, 592 p.
[17] Society of Automotive Engineers (SAE), Aerospace Recommended Practice (ARP) 4754A: Development of civil aircraft and systems, Warrendale, 2011.
[18] U.S. Department of Defense (DoD), Military Standard: Defense and Program-Unique Specifications Format and Content, Mil-Std-961E, Arlington, 2003.
Aero-structure Direct Operating Cost
Estimation and Sensitivity Analysis within
a Knowledge Based Engineering System
Xiaojia ZHAO a,1 and Richard CURRAN b
a Ph.D. student, Air Transport and Operations section, Flight Performance and Propulsion, Faculty of Aerospace Engineering, Delft University of Technology
b Professor, Air Transport and Operations section, Faculty of Aerospace Engineering, Delft University of Technology
Abstract. This paper presents a methodology to incorporate Direct Operating Cost (DOC), Surplus Value (SV) and Sensitivity Analysis (SA) into a Knowledge Based Engineering (KBE) system. Based on a parametrically modeled product geometry, the system automatically generates manufacturable geometry as well as the material and production data required by the Product Breakdown Structure (PBS) and the Bill Of Materials (BOM). By taking each item from the BOM, DOC is estimated using a production cost module and a simplified weight estimation module. The presented application is able to perform an SA with respect to the geometric factors. The parameterization of the geometry definition and the subsequent integration of automated DOC estimation provide a practical example of an advanced concurrent engineering application. It strengthens aero-structure conceptual design from the cost perspective. Because this design and analysis system is highly integrated and automated, the benefits in terms of short design time and moderate effort are highlighted. The overall findings lead to further analysis of aero-structure performance from a life cycle and total value perspective.
Keywords. Aero-structure, Direct Operating Cost, Sensitivity Analysis, Knowledge Based Engineering
Introduction
In the early conceptual design stage, an integrated and concurrent design approach is required to fulfill the complex requirements of aerospace engineering system development. Additionally, in order to explore most of the design space, it is necessary to speed up the design process. Knowledge Based Engineering (KBE) is proposed as a solution to assist the aforementioned design process. It is based on Knowledge Based Systems (KBS) and has strong roots in Artificial Intelligence (AI) [1]. As an evolution of Computer-Aided Engineering (CAE), it combines product design knowledge, Computer-Aided Design (CAD), Object-Oriented Programming (OOP) and AI methodology, assisting design automation and saving development time and cost [2].
1 Corresponding Author. Kluyverweg 1 (building 62), 2629 HS Delft, The Netherlands; e-mail: X.Zhao-1@tudelft.nl
On the one hand, manual work within repetitive design and analysis processes is simplified and reduced by Information and Communications Technology (ICT) solutions, thereby saving development cost. On the other hand, extensive design knowledge is extracted and stored in the modeling system and utilized automatically; therefore, the time spent looking up specific knowledge and data is reduced.
In the aviation industry, product competition is primarily related to the aircraft acquisition cost, operating cost and other operating performance indices. Cost is often the decisive factor in a bid. Therefore, Direct Operating Cost (DOC) estimation is implemented as a dedicated analysis module combined with the geometric aero-structure model in the KBE system. In addition, sensitivity analysis can compensate for a lack of understanding of each variable's influence: the design factors are highly interconnected, yet the degree to which they influence the output is often neglected. Sensitivity Analysis (SA) is applied to enable this capability. Instead of focusing on all factors, the design parameters that primarily drive cost are identified. With a proper focus on these factors, the design process becomes more efficient.
This study originates from research carried out as part of the Thermoplastic Affordable Primary Aircraft Structure (TAPAS) project [3], which focuses on the design of thermoplastic primary aero-structures. It is also inspired by a cooperative project between Aerostructures B.V. (Fokker) and Delft University of Technology (DUT), which focuses on the design of aircraft box structures. Collectively, both projects aim at improving designer and product competitiveness in the future operating and service stage. Therefore, in order to quickly measure economic performance, a production-cost-integrated KBE system has been employed, as documented by X. Zhao et al. [4]. In addition to the project work, it is proposed to further evaluate the associated product DOC, which can explicitly indicate the operating and service capability. Moreover, designers need a sensitivity indication with respect to the design factors, which enables them to focus deliberately on the main factors and to simplify design models in a reasoned way. In summary, the purpose of this research is to integrate DOC analysis into the aero-structure conceptual design phase by applying KBE techniques, while simultaneously analyzing the sensitivity of the design factors that drive DOC in the system. This paper illustrates how the integration of KBE technology, DOC and SA supports the early design decision-making process.
The paper is structured as follows. The research objective is clarified in this introduction. Chapter 1 constructs the framework of the application; the detailed methodology and approach are illustrated step by step. Chapter 2 presents case studies on a stiffened-panel design: panel materials of aluminum, thermoset and thermoplastic are investigated, the DOC and SV performance of stiffened panel designs with L, T, I, U and Omega stringer types is explored, and initial results are obtained. Chapter 3 presents the conclusions and future steps.
1. Methodology and Approach
1.1. KBE system integrating DOC, SV and SA
A previously developed KBE system, the Design and Engineering Engine (DEE), is chosen as the platform to capture implicit design knowledge and automate the analysis process within the aerospace design domain. The DOC, SV and SA capabilities are built into this system.
Figure 1. Adapted DEE according to La Rocca [5]
Figure 1 shows the DEE. It starts with an initiator, which presents a product instance with initial input values. The core of the DEE is a Multi-Model Generator (MMG): a KBE application of a parametric product model, comprising High Level Primitives (HLPs) and Capability Modules (CMs). HLPs are defined as the basic design elements of a geometric model. CMs operate on engineering rules and automatically generate disciplinary product representations, the so-called disciplinary views. Disciplinary views are the combination of the geometric views and attributes, stored in formatted files for the disciplinary analyses. For cost estimation, a Bill Of Materials (BOM) including part shape parameters, materials and manufacturing/assembly processes is generated. For weight estimation, a list of part shape parameters and material densities is generated.
Analysis tools include cost estimation, weight estimation, etc. For cost estimation, a list of corresponding Cost Estimation Relationships (CERs) is selected according to each item in the BOM; CERs are parameterized functions for cost analysis. For weight estimation, a list of corresponding structural and non-structural weight functions is collected according to the weight CM. Extra CMs are developed specifically for deriving DOC, SV and SA. DOC is analyzed by integrating the production cost and weight influences. SV, which reflects the financial property of the aero-structure, is incorporated in simplified form with the DOC analysis. SA is performed by analyzing the partial derivatives of the design parameters, which requires several iterations. Analysis data files store the performance of each discipline; in this paper, the relevant performance parameters are the values of production cost, weight, DOC, SV and sensitivity.
By evaluating the automatically generated analysis data, the convergence of the result is checked in the Converger & Evaluator.
The procedure for implementing KBE is summarized as follows:
1) Extract and formalize design knowledge, including product topology, product geometry, material and production attributes, and disciplinary design rules.
2) Define the design process, i.e. the workflow from the List of Requirements (LOR) to the design solution. Within this step, the tools for knowledge storage, geometric modeling, disciplinary analysis and optimization are selected.
3) Automate the design process, which mainly includes automating the design workflow and the repetitive analysis processes.
4) Define the design solution, i.e. find the feasible and optimal design solution by applying optimization analyses or trade-off studies.
The application is realized via various tools. In this study, the design requirements, the analysis input files from the disciplinary views and the analysis data files are stored in text and Microsoft Excel files. The MMG is implemented in the General-purpose Declarative Language (GenDL) [6]. The Converger & Evaluator are built in MATLAB.
1.2. DOC and Surplus Value Modeling
1.2.1. DOC modeling
For the sake of illustrating the relation between a component and its induced DOC, an estimation method relating design factors and DOC is employed. DOC is associated directly with the operating processes and is often used for airplane comparative analysis and design trade-offs. In general, DOC includes the costs of crew (flight crew and cabin crew), landing fees, navigation fees, maintenance, fuel, depreciation, insurance and interest, see Eq. (1) [7,8,9]:

$DOC = C_{crew} + C_{fees} + C_{maintenance} + C_{fuel} + C_{depreciation} + C_{insurance} + C_{interest}$  (1)
The elements of crew, fees, fuel and maintenance cost are operations-oriented and are consequently based on the aircraft weight; the items of depreciation, insurance and interest are financially oriented and are consequently based on the aircraft acquisition cost. The cost driving parameter for each item can be summarized according to Liebeck et al. [7]: the crew cost, landing fees and fuel cost are driven by the Maximum Take-off Gross Weight (MTOGW) and the Fuel Burn Weight (FBW); the maintenance cost is driven by the Air Frame Weight (AFW); the depreciation, insurance and interest are percentages of the Acquisition Cost (ACC). In practice, MTOGW and FBW are estimated as functions of AFW, and ACC is proportional to the production cost. As this research is concerned with linking the design factors and the DOC performance, the AFW and the production cost can be modeled according to the product properties; therefore, the costs of crew, fees, fuel and maintenance are simplified to the product of a weight penalty ($p$) and the AFW, whereas the costs of depreciation, insurance and interest are a function of the production cost with a weight factor ($n$). The DOC is thus adapted as Eq. (2):

$DOC = p \cdot AFW + n \cdot ProductionCost$  (2)
Here the AFW is estimated by a simplified analytical bottom-up weight estimation, and the production cost is calculated by a detailed analytical bottom-up cost estimation. The weight penalty is set to 532/kg (as of April 2013), based on the following assumptions for the G650 aircraft: the minimum fuel consumption is 7304 kg over a 12960 km range, and the density of kerosene is 0.81 kg/L, which results in a fuel consumption of 0.7 L/km. Assuming further that the aircraft flies for 20 years, 200 days/year at 12000 km/day, the total distance flown in the life of the airplane is 48 million km. The kerosene price is 0.658/L (as of April 2013). The weight penalty is then calculated as the product of the fuel consumption per kilometer, the total distance flown and the fuel price [10]. Production cost is composed of recurring and nonrecurring production cost; for practical reasons, only the recurring cost was considered in this research. The actual recurring cost is influenced by the material usage, the production labor consumption, the machine/equipment usage and the overhead. Although the influence of the machine/equipment usage and the overhead exists, especially when comparing automatic manufacturing processes with manual ones, these two items were excluded owing to limited data access. Considering the influence of rising fuel prices, a weight factor value of 3 is chosen according to Castagne et al., Curran and Rothwell, and Kaufmann et al. [8,9,10].
The general calculations of weight and cost for each BOM item are given in Eqs. (3) to (12). Eqs. (3) and (4) estimate weight, including the optimum weight ($W_{optimum}$), such as part weight calculated from part length ($l$), width ($w$), height ($h$) and material density ($\rho$), and the additional weight ($W_{additional}$), such as the weight of assembly fasteners [11]. Eqs. (5) to (10) estimate production cost, including manufacturing cost ($C_{manufacturing}$) and assembly cost ($C_{assembly}$). Manufacturing cost involves material cost ($C_{material}$) and labor cost ($C_{labor}$). The former is modeled as the product of the material price ($P_{material}$) and the airframe weight ($AFW$), incorporating the chipped material rate ($r_{chipped}$) (Eq. (8)); the latter as the product of the labor hours ($H_{labor}$) and the labor rate ($R_{labor}$) (Eq. (9)). The labor hours are estimated by one of the three forms in Eqs. (10) to (12), where $x$ stands for the cost driving parameter, such as part/connection length, contact area or fly-weight. Eq. (10) is the CER based on the power law model, where $A$ is a cost coefficient and $b$ an exponential coefficient. Eq. (11) is the CER based on the first-order law model, where $\tau_0$ is the manufacturing/assembly delay time and $v_0$ the manufacturing/assembly steady-state speed. Eq. (12) is the CER based on the hyperbolic function model [12].

$AFW = W_{optimum} + W_{additional}$  (3)

$W_{modeled} = l\,w\,h\,\rho$  (4)

$ProductionCost = C_{manufacturing} + C_{assembly}$  (5)

$C_{manufacturing} = C_{material} + C_{labor}$  (6)

$C_{assembly} = C_{material,additional} + C_{labor}$  (7)

$C_{material} = AFW\,(1 + r_{chipped})\,P_{material}$  (8)

$C_{labor} = H_{labor}\,R_{labor}$  (9)

$H_{labor} = A\,x^{b}$  (10)

$x = v_0\left[H_{labor} - \tau_0\left(1 - e^{-H_{labor}/\tau_0}\right)\right]$  (11)

$H_{labor} = \sqrt{\left(\frac{x}{v_0}\right)^{2} + \frac{2\,\tau_0\,x}{v_0}}$  (12)
The procedure for implementing the DOC estimation is summarized as follows (a code sketch is given after the list):
1) Derive the PBS according to the master geometry.
2) Generate the BOM.
3) Calculate the cost items based on the shape, material and production properties of each BOM item.
4) Estimate the airframe weight of each BOM item.
5) Aggregate the component production cost and the component AFW.
6) Select the weight penalty and weight factor, and obtain the component DOC.
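As an illustration of how steps 3) to 6) combine Eqs. (2) to (9), the sketch below aggregates weight and production cost over BOM items and applies Eq. (2). The class layout and the sample numbers are assumptions for illustration; only the constants p = 532/kg and n = 3 come from the text above.

import java.util.List;

// Illustrative sketch of the DOC estimation procedure (Eqs. (2)-(9)).
public class DocEstimator {

    // One BOM item with its shape, material and production properties.
    record BomItem(double l, double w, double h, double density,
                   double materialPrice, double chippedRate,
                   double laborHours, double laborRate) {

        double weight()       { return l * w * h * density; }            // Eq. (4)
        double materialCost() { return weight() * (1 + chippedRate)
                                       * materialPrice; }                 // Eq. (8)
        double laborCost()    { return laborHours * laborRate; }          // Eq. (9)
        double productionCost() { return materialCost() + laborCost(); }  // Eqs. (5)-(6)
    }

    // Eq. (2): DOC = p * AFW + n * ProductionCost.
    static double doc(List<BomItem> bom, double p, double n) {
        double afw  = bom.stream().mapToDouble(BomItem::weight).sum();          // step 5
        double cost = bom.stream().mapToDouble(BomItem::productionCost).sum();  // step 5
        return p * afw + n * cost;                                              // step 6
    }

    public static void main(String[] args) {
        // A single hypothetical aluminum skin item (all values assumed).
        List<BomItem> bom = List.of(
            new BomItem(1.0, 0.5, 0.002, 2780, 8.0, 0.5, 2.0, 60.0));
        System.out.println(doc(bom, 532.0, 3.0));
    }
}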
1.2.2. Surplus Value
Along with the DOC analysis, Surplus Value (SV) is introduced in the application. SV was developed to enable a relatively simple ranking of different engineering options on a financial basis, without performing detailed future forecasts [13], see Eq. (13):

$SV_{cost} = -D_P\,N_{market}\left(D_C\,U\,C_{flight} + C_{Man}\right) - C_{Dev}$  (13)

where $D_P$ is the producer discount multiplier, $D_C$ is the customer discount multiplier, $N_{market}$ is the annual market size, $U$ is the annual product utilization, $C_{flight}$ is the operating cost per flight, $C_{Man}$ is the manufacturing cost, and $C_{Dev}$ is the development cost. Following the definitions of $C_{flight}$ and $C_{Man}$ by Hollingsworth and Patel [13,14], the DOC and the production cost are used in Eq. (13). As this work focuses on the design influence on the financial property and on the trade-off study, the fixed item $C_{Dev}$ can be left out; therefore, Eq. (13) is transformed into Eq. (14):

$SV_{cost} = -D_P\,N_{market}\left(D_C\,U\,DOC + ProductionCost\right)$  (14)
Comparing Eq. (2) and Eq. (14), DOC and SV are interrelated values: one stresses the cost spent on operating the airplane, while the other indicates the benefit obtained from the operating process. Both reflect the value integrated with the design and the production.
1.3. Sensitivity analysis
In order to provide a sensitivity ranking of the design parameters of interest and to assist the designer in reasoning about the behavior of the modeled system, a sensitivity analysis is performed for the DOC. For $X = (x_1, x_2, \ldots, x_n)$ and $DOC = f(X)$, the sensitivity ($\Phi_i$) is calculated with Eq. (15) [15]:

$\Phi_i = \dfrac{\partial DOC}{\partial x_i} \cdot \dfrac{x_i}{DOC}$  (15)
The general procedure of the SA is as follows (a sketch is given below):
1) Define the target function and the essential inputs.
2) Apply a perturbation to the inputs.
3) Evaluate the model to obtain the output change.
4) Assess the sensitivity of each input with respect to the target function.
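A minimal sketch of this procedure, using the 2% forward perturbation applied in Section 2.4 and the normalized sensitivity of Eq. (15), is given below; the function interface and the toy DOC model are assumptions.

import java.util.function.Function;

// Illustrative finite-difference SA: Phi_i = (dDOC/dx_i) * (x_i / DOC), Eq. (15).
public class SensitivityAnalysis {

    static double[] sensitivities(Function<double[], Double> docModel,
                                  double[] x, double perturbation) {
        double base = docModel.apply(x);                      // step 1: target function
        double[] phi = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double[] xp = x.clone();
            xp[i] *= (1 + perturbation);                      // step 2: perturb input i
            double dDoc = docModel.apply(xp) - base;          // step 3: output change
            phi[i] = (dDoc / (xp[i] - x[i])) * (x[i] / base); // step 4: Eq. (15)
        }
        return phi;
    }

    public static void main(String[] args) {
        // Toy DOC model (an assumption): a weighted sum of two parameters.
        Function<double[], Double> doc = v -> 100 + 40 * v[0] + 10 * v[1];
        double[] phi = sensitivities(doc, new double[]{2.0, 5.0}, 0.02);
        System.out.printf("x0: %.3f  x1: %.3f%n", phi[0], phi[1]);
    }
}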
2. Case study and initial results
2.1. Detailed workflow
The aforementioned methodology has been applied to a stiffened-panel case. The workflow for the DOC & SV and SA analyses is shown in Figure 2; each step can be matched with the DEE building blocks.
Figure 2. UML activity diagram for the integrated workflow
2.2. Product model
The stiffened-panel configuration is first built in the CAD modeling system, where material options of metal and composite are enabled. Stringer types of T, L, I, Z, U and Omega are incorporated. Figure 3 gives the product model with a) inverted T stringers and b) the modeled stringer types. The design parameters and their initial values are shown in Table 1. All design parameters fall into three categories: shape-relevant, material-relevant and manufacturing-relevant variables. Besides the top-level parameters listed in Table 1, secondary parameters such as material density and manufacturing labor rate, which are derived from the top-level parameters, are also stored in the model.
Figure 3. a) Stiffened-panel product model with inverted T stringers b) Modeled stringer-types

Table 1. Stiffened panel design parameters

Parameter                  | Symbol
skin-length                | L
skin-width                 | W
skin-thickness             | t
stringer-pitch/number      | p_s
stringer-type              | T, L, I, U, Z, Omega
flange-width               | W_F
flange-thickness           | t_F
stringer-height/web-height | h_s
web-thickness              | t_w
material-type              | al2024, T300/PPS, T300/epoxy
manufacturing-process      | cutting, hand layup, curing, consolidation, etc.
part-type                  | skin, stringer, connection
2.3. DOC and SV
Table 2 shows a DOC comparison for stiffened panels with the same configuration but different stringer types. The DOC of the thermoplastic T300/PPS panel with T stringers is indexed to 100%, and all other values are normalized with respect to it. Since the same proportions were obtained for the SV values, the SV estimation table is omitted; the proportions are the same because DOC and SV have the same substance in essence. According to Table 2, the thermoplastic stiffened panels have relatively low DOC values, owing to their low production cost.
Table 2. DOC estimation for stiffened panels (%)

Stringer | Aluminum (al2024) | Thermoplastic (T300/PPS) | Thermoset (T300/epoxy)
T        | 88.9              | 100                      | 110.6
L        | 100.2             | 89                       | 97.3
I        | 79.5              | 92.2                     | 100.8
Z        | 106.7             | 96.2                     | 102.2
U        | 88.9              | 83.8                     | 93.3
Omega    | 97.9              | 90.1                     | 97.8
The distribution of the cost categories within the DOC for aluminum (al2024), the thermoplastic composite material (T300/PPS) and the thermoset composite material (T300/epoxy) is shown in Figure 4. The production process sequences are automatically predicted according to the combination of part/connection types and material types, as listed in Table 3.
The estimated production-cost-induced DOC is higher than the actual expenses, which may be due to the value of the weight factor used in the DOC analysis. The cost distribution shows that the production-cost-induced DOC takes the major share of the whole DOC, which complies with the historical data [16]. Additionally, it shows that the composite stiffened panels have a smaller share of DOC induced by the airframe weight: the fuel burn cost is low thanks to the low weight, while production and maintenance costs go up when a composite material structure is newly introduced in the industry, with material and labor costs increased by the immature production techniques.
Figure 4. T-stiffener panel DOC.
Figure 5. Tornado diagram of stiffened-panel DOC sensitivities with respect to geometric factors.
Table 3. Manufacturing sequence prediction

Part                         | Aluminum (al2024)               | Thermoplastic (T300/PPS)                          | Thermoset (T300/epoxy)
skin                         | Sheet metal machining, painting | Cutting, hand layup, consolidation, pressing      | Cutting, Automatic Tape Laying (ATL), curing
stringer                     | Cutting, milling, painting      | Cutting, hand layup, consolidation, press forming | Cutting, ATL, curing, forming
skin-stringer connection     | Fastening                       | Induction welding                                 | Adhesive bonding
stringer-stringer connection | N/A                             | Induction welding                                 | Adhesive bonding
2.4. Sensitivity Analysis
By applying a perturbation of 2% to each design parameter, the initial result of the sensitivity analysis is obtained, as shown in Figure 5. The horizontal axis represents the sensitivity value in percent; the panel length and width are assumed given. It can be seen that a similar impact trend applies to both metal and composite panels. The skin thickness is the most influential parameter in this model, accounting for up to 40%. Moreover, the influences of the web height, the flange thickness and the web thickness cannot be neglected either. In addition, it can be seen that T300/PPS is the most sensitive material compared with T300/epoxy and al2024.
3. Conclusions
In conclusion, a methodology integrating DOC, SV and SA into a KBE system was presented, together with the practical application procedures. The system evaluated the DOC and SV of aero-structures with various materials and stringer types, and initial results were obtained. The main findings of this paper are as follows. For the same panel size and stiffener numbers, the thermoplastic material structure has a relatively low DOC. For composite stiffened panels, the production-cost-induced portion makes the major contribution to the total DOC, while the fuel burn cost has less influence. In addition, the skin thickness and the flange thickness have large impacts on the DOC in this model.
In this research, all comparisons are based on stiffened panel configurations with the same panel length, width and stiffener pitch; comparisons between configurations with similar structural performance are out of scope. Owing to limited data access and the specific case set-up of the manufacturing sequences and CERs, the results may deviate from actual values. In future research, a comprehensive comparison based on similar structural properties will be performed. Moreover, since the DOC and SV values combine both cost and weight influences, the balance between cost and weight will be analyzed through optimization studies.
References
[1] G. La Rocca, Knowledge based engineering: Between AI and CAD. Review of a language based technology to support engineering design, Advanced Engineering Informatics 26 (2012), 159-179.
[2] C.B. Chapman, M. Pinfold, The application of a knowledge based engineering approach to the rapid design and analysis of an automotive structure, Advances in Engineering Software 32 (2001), 903-912.
[3] TAPAS project, http://www.tapasproject.nl/nl/Default.cshtml
[4] X. Zhao et al., Concurrent Aerospace Thermoplastic Stiffened Panel Conceptual Design and Cost Estimation Using Knowledge Based Engineering, In: 19th ISPE International Conference on Concurrent Engineering (2013), 195-206, DOI:10.1007/978-1-4471-4426-7_17.
[5] G. La Rocca, Knowledge Based Engineering Techniques to Support Aircraft Design and Optimization, PhD thesis, TU Delft, 2011.
[6] Genworks International, GenDL, http://www.genworks.com/sessions/fd4b61e3d/index.html.
[7] R.H. Liebeck et al., Advanced Subsonic Airplane Design & Economic Studies, NASA CR-195443, April 1995.
[8] S. Castagne et al., A generic tool for cost estimating in aircraft design, Research in Engineering Design 18 (2008), 149-162, DOI 10.1007/s00163-007-0042-x.
[9] R. Curran, A. Rothwell, Numerical method for cost weight optimisation of stringer-skin panels, Journal of Aircraft 43 (2006), 264-274.
[10] M. Kaufmann et al., Integrated Cost/Weight Optimization of Aircraft Structures, Structural and Multidisciplinary Optimization 41 (2010), 325-334, DOI 10.1007/s00158-009-0413-1.
[11] A. Elham, Weight Indexing for Multidisciplinary Design Optimization of Lifting Surface, PhD thesis, TU Delft, 2012.
[12] S.M. Haffner, Cost modeling and design for manufacturing guidelines for advanced composite fabrication, PhD thesis, MIT, 2002.
[13] P. Hollingsworth, An Investigation of Value Modeling for Commercial Aircraft, Air Transport and Operations Symposium, 2011.
[14] P. Hollingsworth, D. Patel, Development of a Surplus Value Parameter for Use in Initial Aircraft Conceptual Design, Air Transport and Operations Symposium, 2012.
[15] D.M. Hamby, A Review of Techniques for Parameter Sensitivity Analysis of Environmental Models, Environmental Monitoring and Assessment 32 (1994), 135-154.
[16] M. Price et al., Integrating Design, Manufacturing and Cost for Trade-Off on Aircraft Configurations, In: 6th AIAA Aviation Technology, Integration and Operations Conference (ATIO), Wichita, Kansas, 2006, DOI: 10.2514/6.2006-7739.
Heat Diffusion Method for Intelligent
Robotic Path Planning
Jeremy HILLS and Yongmin ZHONG 1
School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University, Bundoora, VIC 3083, Australia
1 Corresponding Author.
Abstract. Real-time collision-free motion planning is an important issue in many
autonomous systems including robotics and intelligent systems. It endows
intelligent robotic systems with an ability to plan motions and to navigate
autonomously. This ability becomes critical particularly for robots which operate
in dynamic environments, where unpredictable and sudden changes may occur.
This paper presents an intelligent method for real-time robotic path planning by
using potential field data generated by simulating heat diffusion. Heat is conducted
throughout the field from the objective location, and produces a gradient (potential
field) which is used for path planning. By iteratively following the highest
temperature gradient, the optimal path between a robot and its objective can be
established. Both steady-state and transient heat diffusion are studied. By utilizing
the iterative process of transient heat diffusion during potential field generation,
path planning in a dynamic environment becomes more feasible than steady-state
diffusion. A computer program is built using Java code to simulate dynamic
obstacles/environments and to generate potential fields upon which to enable
dynamic path planning. Techniques to enable obstacle avoidance (for obstacles
lying linearly between the robot and its objective) are examined. The introduction
of obstacles that allow heat to diffuse through them (however do not allow robot
passage) and the introduction of multiple heat sources to a potential field produce a
problem known as stalling at the local maximum.
A basic iterative solution of replicating fake obstacles is developed and described
that may be expanded upon to effectively solve the problem of stalling at the local
maximum, hence enabling dynamic robotic path planning to be successfully
performed using a simple and quick transient heat diffusion algorithm. This
ultimately means that a mobile robot can quickly plan a path amongst moving
obstacles to its target destination and can adjust its path rapidly while on the move
to avoid collisions or stalling.
Keywords. Robotic path planning, heat diffusion, optimal path, and obstacle
avoidance.
1. Introduction
In applications such as medicine (Ahmidi et al. 2012), game Artificial Intelligence (Mocholi et al 2010), and vehicular and robotic navigation systems (Tisdale et al 2009), path planning or pathfinding is an attempt at creating an artificial intelligence to
realize an optimum path between two points in space; an intelligent complexity that a human can take for granted.
Many algorithms have been and are currently being developed to enable artificial systems to think about their environments and plan paths themselves. Some current algorithms such as A* (A-Star) (Pal et al 2012; Wang & Goh 2012) are very popular and successful at plotting a path between two points within a discrete space under static conditions. However, with regard to dynamic conditions, algorithms such as A* are inefficient (Bennewitz & Burgard 2000). Generating a path only once is taxing enough on time and computing resources, but when moving obstacles are introduced, more complex path planning algorithms are needed in order to redefine paths as environments change over time (Burn 2003; Lucas 2012).
Some algorithms such as D* (D-Star) (Carsten et al 2006; Zheng et al 2012) are specifically designed to address this problem; however, there is such a variety of potential approaches to solving the optimal-path problem in dynamic environments that modifications of old/current algorithms and developments of new approaches are popular and vital to expanding knowledge and resources in this field.
Neural networks provide artificial intelligence for solving the problem of optimal
path planning (Zhong et al 2011). However, the learning based neural networks
(Lebedev et al 2005) are time-consuming in operation, and the non-learning based
neural networks (Yang & Meng 2003) require the rigorous analysis of stability.
Graham, McCabe & Sheridan (2003) assigned pathfinding algorithms to two primary categories: directed and undirected. Directed pathfinding involves assessing the cost/value of progress for each adjacent node before proceeding to the next one. Undirected pathfinding involves a blind search for the objective in order to create a path, like a mouse in a maze: when the end is reached, through whatever combination of sub-paths taken, the overall path is defined.
Directed pathfinding can be further broken down into uniform cost search methods and heuristic search functions. Uniform cost methods search for the adjacent node that has the lowest cost of movement from the current location, minimizing the cost of the path taken so far from the starting point. Heuristic search functions evaluate the costs between adjacent nodes and the (nearest) goal point, producing a minimum path not yet covered. The general definition of heuristic problem solving is that a method or rule is used to select the best of a number of solutions, and the previous-best solution is then used iteratively to help determine the next-best solution. Each iteration hence relies on the outcome of the iteration before it.
Path planning upon potential fields can hence be considered as directed, employing heuristic methods. "When you think of potential fields, picture in your mind either a charged particle navigating through a magnetic field or a marble rolling down a hill. The basic idea is that behaviour exhibited by the particle/marble will depend on the combination of the shape of the field/hill" (Goodrich 2002, p. 2). Potential fields are arrays that consist of vectors. Each vector possesses magnitude and direction, resembling a force. The collection of these vectors/forces is called a potential field because they represent energy potentials that a robot might follow. Depending on the orientation of these forces and the location of the robot, the forces can be attractive or repulsive. By letting obstacles generate repulsive forces and a goal generate attractive forces, and by adding these separate fields together, we get a potential field relevant to real-world robotic pathfinding problems.
Wang & Chirikjian (2000) discussed generating an artificial potential field by simulation of steady-state heat transfer, claiming that it is more suitable for dynamic environments than transient heat transfer: "The disadvantage of using the transient state equation is its slow response to a dynamic environment and time-delays." A grid of temperatures allows the optimal path problem to be solved by moving between nodes, following the path of minimal thermal resistance. An attractive potential is generated from the goal, and repulsive potentials are generated from the obstacles. The action of pursuing path propagation with minimal thermal resistance, or steepest temperature gradient between two nodes, can be termed hill climbing. If a successor node b exists for a current state node a, then if the temperature T(b) > T(a), make the successor b the new current state node. For hill climbing to work correctly, the successor node chosen must be the best out of all successors of the current state. Since the favoured successor is of greater temperature than the current state, the attractive potential of the goal is based on its possession of a higher temperature than its surroundings, and the repulsive potential of obstacles on their possession of a lower temperature than their surroundings.
The method of generating a potential field from simulated heat transfer and using a hill-climbing algorithm is the method used in this paper to solve the problem of optimal path planning.
Unfortunately, potential fields suffer from a series of problems (Csiszar et al 2012). These include:
Local maxima/minima: a local peak/trough (e.g. a peak temperature) that is not the highest in the potential field.
Plateaus: regions of equivalent temperatures, providing no temperature gradient to follow.
Ridges: cases where the up, down, left and right adjacent nodes may have lower temperatures, yet a diagonally adjacent node may have a greater temperature. This is only a foreseeable issue if the path planning algorithm does not account for diagonally adjacent nodes in each iteration.
Using a finite region for a potential field, some boundary conditions must be established. The popular definition of the simple Dirichlet boundary condition is that, when imposed on an ordinary or a partial differential equation, it specifies the values a solution needs to take on the boundary of the domain. Fringer (2002) demonstrates simple examples of Dirichlet boundary conditions as:

$y(a) = 0, \quad y(b) = 2$  (1)

For the case discussed in this paper, we can use the boundary conditions:

$T(t, x, y) = 0, \quad T(x, y) = 0$  (2)
By setting nodes within the grid of a discretized environment to a constant temperature (0 here), we create Dirichlet boundary conditions. Because the heat sources in our case produce temperatures greater than 0, the boundary conditions also absorb heat from the potential field grid.
This paper presents a counter-argument to the statement by Wang & Chirikjian (2000) that steady-state simulation is more effective for dynamic environments, predicting that in most if not all cases a rapid solution can be achieved using transient diffusion to correct the potential field as the configuration space changes. This is supported by the fact that steady-state diffusion requires the computational solution of many more matrix elements than transient diffusion. In addition, a basic iterative method for solving local-maxima stalling is proposed, applicable to obstacles with diffusive properties: a process of identifying potential local maxima and filling them is proposed to enable a robot to avoid stalling.
2. Methodology
A finite environment is constructed as a regular grid of square nodes. Boundary conditions are set at the edges of the grid (Dirichlet; T = 0), and an origin for the path construction is set (the robot's starting location). Any number of obstacles is defined upon the grid, and any number of heat sources or sinks is set upon the grid. Just as in any real case of heat transfer, the sources/sinks cause heat to diffuse throughout the grid, changing the magnitude and/or sign of the temperature stored in each node, defined by spatial coordinates. Thus, any spatially oriented node with a temperature magnitude can be considered as a vector. Likened to forces, these nodal vectors can be considered as attractive (higher temperature) or repulsive (lower temperature) with regard to the hill-climbing path planning algorithm.
To generate temperatures for each node, an application designed in Java allows user interaction to define initial conditions such as obstacle, robot and target locations, permitting rapid setup prior to simulation.
2.1. Potential Field: Steady-state Heat Diffusion
To start with, a discretized equation for evaluating the unknown node temperatures (i.e. nodes that are not sources/sinks/obstacles/boundary conditions) is required. Beginning with the heat equation:

$k \nabla^2 T + q = 0$  (3)

The temperature T is a function of several variables, varying from location to location (x, y, z), and can vary with time (t). For two-dimensional steady-state heat transfer:
- $\nabla^2$ in Eq. (3) denotes the Laplace operator.
- Steady-state is time-independent (stable; the same at any time t, so t is not part of the function T).
- T is only a function of x and y in 2D.
- Heat sources/sinks have fixed, time-independent temperatures. The heat rate q is zero for the case discussed in this paper.
- k is a conductivity constant. Its effect can be disregarded, since what is to be generated is a grid of temperatures that have finished diffusing into a steady state.
Therefore, Eq. (3) reduces to the following for steady-state conditions:

$\dfrac{\partial^2 T}{\partial x^2} + \dfrac{\partial^2 T}{\partial y^2} = 0$  (4)

By using the finite difference scheme, the x component can be calculated as:

$\dfrac{\partial^2 T}{\partial x^2}(x,y) = \dfrac{T(x+\Delta x, y) - 2T(x,y) + T(x-\Delta x, y)}{\Delta x^2}$  (5)

For simplicity, we can set $\Delta x = \Delta y = 1$ without loss of generality. Thus, Eq. (5) becomes:

$\dfrac{\partial^2 T}{\partial x^2} = T(x+1, y) + T(x-1, y) - 2T(x,y)$  (6)

Similarly, the y component is:

$\dfrac{\partial^2 T}{\partial y^2} = T(x, y+1) + T(x, y-1) - 2T(x,y)$  (7)

Substituting Eq. (6) and Eq. (7) into Eq. (4):

$T(x+1, y) + T(x-1, y) + T(x, y+1) + T(x, y-1) - 4T(x,y) = 0$  (8)

If matrix notation is used to represent these nodes from Eq. (8), then:

$T_{i+1,j} + T_{i-1,j} + T_{i,j+1} + T_{i,j-1} - 4T_{i,j} = 0$  (9)

Eq. (9) is now used to determine the temperature T at any node of unknown temperature in the ith row and jth column.
Since the temperature of each node in steady-state diffusion depends upon the temperatures of its surrounding nodes, a system of simultaneous equations arises. If there are n nodes, there are n simultaneous equations, each with n elements. This means the node temperatures must be solved by matrix operations (e.g. Gauss-Jordan elimination) upon a matrix of n×n elements. This is computationally expensive: for example, a grid of 50×50 nodes gives rise to a matrix of 2500×2500 = 6,250,000 elements to be solved.
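In practice the system of Eq. (9) need not be assembled and solved by Gauss-Jordan elimination; an iterative relaxation converges to the same steady state node by node. The Jacobi sketch below is an assumption used for illustration, not the authors' application; note that every edge node must be marked fixed (Dirichlet, T = 0) so the stencil never reads outside the grid.

// Illustrative Jacobi relaxation of the steady-state stencil, Eq. (9):
// T[i][j] = (T[i+1][j] + T[i-1][j] + T[i][j+1] + T[i][j-1]) / 4.
// Fixed nodes (boundaries, sources, sinks, obstacles) keep their values.
public class SteadyStateField {

    static void relax(double[][] T, boolean[][] fixed, int sweeps) {
        int n = T.length, m = T[0].length;
        for (int s = 0; s < sweeps; s++) {
            double[][] next = new double[n][m];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < m; j++)
                    next[i][j] = fixed[i][j] ? T[i][j]   // fixed: copy unchanged
                        : 0.25 * (T[i + 1][j] + T[i - 1][j]
                                + T[i][j + 1] + T[i][j - 1]);
            for (int i = 0; i < n; i++) T[i] = next[i];  // advance one sweep
        }
    }
}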
2.2. Potential Field: Transient Heat Diffusion
The heat equation can be manipulated to the following for transient conditions:
$\dfrac{\partial T}{\partial t} = k \nabla^2 T + q = k\left(\dfrac{\partial^2 T}{\partial x^2} + \dfrac{\partial^2 T}{\partial y^2}\right) + q$  (10)

Knowing what $\dfrac{\partial^2 T(x,y)}{\partial x^2}$ and $\dfrac{\partial^2 T(x,y)}{\partial y^2}$ are from the steady-state derivation, and considering the inclusion of time-dependence in the transient state:

$\dfrac{\partial T(t,x,y)}{\partial t} = \lim_{\Delta t \to 0} \dfrac{T(t+\Delta t, x, y) - T(t, x, y)}{\Delta t}$  (11)

After converting (as before) to difference equations:

$\dfrac{\partial^2 T}{\partial x^2} = T(t, x+1, y) + T(t, x-1, y) - 2T(t, x, y)$  (12)

$\dfrac{\partial^2 T}{\partial y^2} = T(t, x, y+1) + T(t, x, y-1) - 2T(t, x, y)$  (13)

$\dfrac{\partial T}{\partial t} = T(t+1, x, y) - T(t, x, y)$  (14)

Substituting Eqs. (12)-(14) into Eq. (10):

$T(t+1, x, y) = k(t,x,y)\,[\,T(t, x-1, y) + T(t, x+1, y) + T(t, x, y-1) + T(t, x, y+1) - 4T(t, x, y)\,] + T(t, x, y) + q(t, x, y)$  (15)

If, as in the steady-state calculations, matrix notation is used to represent these nodes, then:

$T_{t+1,i,j} = k_{t,i,j}\,[\,T_{t,i-1,j} + T_{t,i+1,j} + T_{t,i,j-1} + T_{t,i,j+1} - 4T_{t,i,j}\,] + T_{t,i,j} + q_{t,i,j}$  (16)

This equation is used to calculate the temperature of any internal node of unknown temperature at any time t.
Initial conditions can then be set for any node temperature such as 0
, ,
=
j i t
T for
internal nodes, or 1
, ,
=
j i t
q for heat source nodes. By using grids of node
temperatures, diffusivities, and heat rates at 0 = t , the initial conditions are set, and
temperatures for all internal nodes can be subsequently calculated for 1 = t . Every
iteration depends upon the temperature data generated in the previous iteration and
upon any predetermined diffusivity or heat rate values for the previous iteration.
Referring to the previously discussed expense of computing a 50×50 grid of nodes
in steady-state diffusion: by contrast, simulating the same grid in transient diffusion for
100 iterations (enough to diffuse heat entirely across a 50×50 grid) computes only
250,000 elements in total (2500 per iteration); a mere 4% of the resources required by
the steady-state solution in this case.
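The update of Eq. (16) is also straightforward to vectorize. The following minimal Python/NumPy sketch (illustrative, not the authors' application; the grid size, the function name, and the uniform diffusivity of 0.2, chosen below the explicit-scheme stability bound of 0.25, are our assumptions) performs one transient iteration over the whole grid:

```python
import numpy as np

def transient_step(T, k, q):
    """One iteration of Eq. (16):
    T[t+1] = k * (sum of the four neighbours - 4*T) + T + q,
    applied to all internal nodes; the outer ring of nodes acts as a
    fixed (Dirichlet) boundary and is left untouched.
    """
    nbr_sum = T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
    Tn = T.copy()
    Tn[1:-1, 1:-1] = (k[1:-1, 1:-1] * (nbr_sum - 4.0 * T[1:-1, 1:-1])
                      + T[1:-1, 1:-1] + q[1:-1, 1:-1])
    return Tn

# Initial conditions as in the text: T = 0 for internal nodes, q = 1 at the
# heat source node; diffusivity 0.2 keeps the explicit scheme stable (k <= 0.25).
n = 50
T = np.zeros((n, n))
q = np.zeros((n, n)); q[2, 2] = 1.0
k = np.full((n, n), 0.2)
for _ in range(100):    # roughly enough iterations to diffuse across the grid
    T = transient_step(T, k, q)
```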
2.3. Path Planning
As an example of the hill-climbing method of path planning using the heat grid
(potential field) data, consider the example shown in Fig. 1 below, with the following
temperature distribution (rows and columns indexed 0-9; the outer boundary is held at
zero):

       0   1      2      3      4      5      6      7      8      9
  0    0   0      0      0      0      0      0      0      0      0
  1    0   0.177  0.353  0.236  0.138  0.079  0.046  0.025  0.011  0
  2    0   0.353  1      0.452  0.236  0.134  0.078  0.043  0.020  0
  3    0   0.236  0.452  0.337  0.222  0.142  0.088  0.051  0.024  0
  4    0   0.138  0.236  0.222  0.172  0.123  0.082  0.050  0.023  0
  5    0   0.079  0.134  0.142  0.123  0.095  0.067  0.043  0.020  0
  6    0   0.046  0.078  0.088  0.082  0.067  0.050  0.033  0.016  0
  7    0   0.025  0.043  0.051  0.050  0.043  0.033  0.022  0.011  0
  8    0   0.011  0.020  0.024  0.023  0.020  0.016  0.011  0.005  0
  9    0   0      0      0      0      0      0      0      0      0

Figure 1. Example of a 10x10 grid for path planning.
Say, for example, that this data has now been generated by the application and we want
to find a path from the node at (8,8) to the node at (2,2). To do this, we repeatedly
check all adjacent nodes to determine which one has the highest temperature.
Iteration 1: Investigate the 8 nodes surrounding (8,8). These are (7,7), (7,8), (7,9),
(8,7), (8,9), (9,7), (9,8), and (9,9). (7,7) has the highest temperature, so we use that
as the centre node for the next iteration.
Iteration 2: Investigate the 8 nodes surrounding (7,7). (6,6) has the highest
temperature out of these, so we use that as the centre node for the next iteration.
The iterations proceed until we select (2,2) as our centre (current) node.
The nodes that we have used as the centre for each iteration comprise the path
between the two points chosen initially.
These are shown in green in Fig. 2.
The path shown in Fig. 2 is highlighted on an activity landscape of the potential
field in Fig. 3.
J. Hills and Y. Zhong / Heat Diffusion Method for Intelligent Robotic Path Planning 594
Figure 2. Example of the 10x10 grid with path (the temperature data of Fig. 1 with the
selected path nodes highlighted in green).
Figure 3. The activity landscape of the 10x10 grid
Note that path planning can be achieved by checking in four directions (above,
below, left, right), in eight directions (above, below, left, right, above-left, above-right,
below-left, below-right), or in even more directions, in theory.
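A minimal sketch of this greedy hill-climbing in Python follows (illustrative only; the function name and the eight_way flag are our assumptions, not the authors' implementation, and the sketch presumes a well-formed field):

```python
def hill_climb(T, start, goal, eight_way=True):
    """Greedy ascent over a temperature grid T (2-D array or list of lists):
    from `start`, repeatedly move to the hottest adjacent node until `goal`
    is reached. Assumes the field is free of local maxima between the two
    points (see Section 3.2 for the case where it is not)."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if eight_way:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    rows, cols = len(T), len(T[0])
    node, path = start, [start]
    while node != goal:
        i, j = node
        nbrs = [(i + di, j + dj) for di, dj in steps
                if 0 <= i + di < rows and 0 <= j + dj < cols]
        node = max(nbrs, key=lambda p: T[p[0]][p[1]])   # hottest neighbour
        path.append(node)
    return path

# On the grid of Fig. 1 this yields (8,8), (7,7), (6,6), ..., (2,2).
```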
3. RESULTS & DISCUSSIONS
3.1. Path Planning in a Dynamic Environment
The fundamental benefit of using transient heat diffusion to produce a potential
field is that the field is affected by changing obstacles as it is generated. Course
corrections do not come down to the path-generation mechanics; rather, the potential
field dynamically changes itself, thus affecting the robot's behaviour. Fig. 4 shows the
setup of our example of dynamic obstacles. At the 31st transient iteration, a large wall
materializes in the path of the robot. The target is displayed in yellow (heat source),
obstacles in blue, and the robot in green.
Figure 4. Setup of Dynamic Environment
Figure 5. Heat Diffusion in Dynamic Environment
Figure 6. Wandering and Path Correction in Dynamic Environment
By the time the obstacle appears, the source's heat has already diffused toward the
robot (Fig. 5), and the robot has begun to follow the temperature gradient directly
towards the target. Since an obstacle (composed of multiple obstacle nodes) now exists
adjacent to the robot, the robot cannot follow the temperature gradient any further, and
instead engages in an interesting wandering behaviour while the potential field is
reorganized, with heat diffusing around the new obstacle (Fig. 6). Depending on the
setup of the obstacles, the robot wanders for some arbitrary amount of time.
In Fig. 6, the robot finally becomes affected by the corrected temperature gradient
and begins to move again, rapidly, towards the objective, having taken the total path
shown in Fig. 7. The potential field corrects itself thanks to two key factors:
- The obstacle is of a lower temperature than its surroundings. The materialized wall
is composed of Dirichlet boundary condition nodes set to a temperature of zero.
These nodes do not heat up, but still cause heat to be removed from adjacent nodes.
It is necessary to cool the region where the robot lies at this stage, to stop the robot
from remaining in this new dead-end area.
- Heat from the target source diffuses around the materialized wall and begins to
create an appealing temperature gradient between itself and the robot's current
position. Nodes below the robot become warmer as those above and to its right
become cooler, so the robot begins to move downward.
Figure 7. Path Taken Around Dynamic Obstacle
Consider, however, that had the wall been of a different shape, it could have caused
heat diffusion from the target source to take longer to affect the region where the robot
lay (at the 31st iteration). The more immediate cooling effect of the Dirichlet boundary
condition obstacle nodes that comprised the wall could have acted faster than the
source's warming effect, causing the robot to move due to the formation of a negative
gradient rather than a positive one.
Such a negative temperature gradient is a problem in this case, since it pushes the
robot away to an arbitrary point (depending only upon where the robot sits relative to
the object causing the negative gradient). This repulsion may cause the robot to
unintentionally escape the attractive influence of the positive temperature gradient
generated by the target heat source, causing the state of wandering to persist for an
extended duration.
A solution to this potential problem should be investigated by experimenting with
different types and sizes of obstacles, different potential field grid sizes, and different
temperatures of target heat sources and boundary conditions. By examining the
influence that combinations of these factors have over the diffusion of heat, the
duration of wandering could be effectively reduced in any scenario.
3.2. Diffusive Obstacles and the Local Maxima Problem
Fig. 8 shows some types of obstacles in the transient simulation application. Dirichlet
obstacles are set to a permanent temperature and are not touched by the simulation;
their temperatures remain unchanged. Diffusive obstacles allow heat to diffuse through
them, do not let robots pass through them, and allow their temperatures to change.
A diffusive obstacle with zero diffusivity and no internal heat generation (heat rate)
causes the temperature at its location to remain constant over time.
Figure 8. Types of obstacles
Recall Eq. (16):

\[
T_{t+1,i,j} = k_{t,i,j}\left[T_{t,i+1,j} + T_{t,i-1,j} + T_{t,i,j+1} + T_{t,i,j-1} - 4T_{t,i,j}\right] + T_{t,i,j} + q_{t,i,j} \qquad (16)
\]

When \( k_{t,i,j} \) and \( q_{t,i,j} \) are equal to zero, then \( T_{t+1,i,j} = T_{t,i,j} \).
An increasing diffusivity causes hotter nodes to cool faster (through the term
\( -4k_{t,i,j}T_{t,i,j} \)) and cooler nodes to heat faster (through the term
\( k_{t,i,j}[T_{t,i+1,j} + T_{t,i-1,j} + T_{t,i,j+1} + T_{t,i,j-1}] \)).
Recall, however, that up to this stage heat has not diffused through obstacles but
around them (in the case of Dirichlet obstacles). Because of the path the heat takes to
diffuse from a source to some arbitrary point beyond an obstacle, a robot can
successfully back-trace this temperature gradient to the target source. When, however,
the robot lies on a maximum temperature gradient with an obstacle lying upon it, the
robot will not deviate from its course to navigate past the obstacle, and can become
trapped.
Refer to Fig. 9, where the robot reaches a local maximum point. Because heat
diffuses directly through the obstacle toward the robot, the nodes surrounding the
location of entrapment either contain lower temperatures or are not traversable. This is
quite similar to the well-known local maxima problem, in which multiple heat sources
exist upon a potential field and, at some point in time, the robot navigates to a heat
source node with no higher adjacent temperatures that is nevertheless not the preferred
target or global maximum.
Figure 9. Local maximum at diffusive obstacle
Since in these cases the robot cannot navigate to the target, a different pathfinding
technique or algorithm could be integrated into the pathfinding process. This could
cure the problem, but would potentially be computationally expensive where multiple
diffusive obstacles lie upon the maximum temperature gradient.
3.3. An Iterative Solution to the Local Maxima Problem
Here an iterative solution can be presented that modifies the node data to prevent
robot entrapment. Since a concave diffusive obstacle tends to cause entrapment, its
inner region must be filled by some sort of obstacle in order to be avoided. The
difficulty with doing this in a heat diffusion potential field is that the obstacle filling
the concave diffusive obstacle must not trap the robot itself.
An iterative algorithm determines whether a node is potentially a local maximum
by testing certain conditions. If a culprit node is surrounded by four adjacent nodes that
are each either of lower temperature or obstacles, the culprit is marked as an obstacle,
and the next node in the potential field is investigated. By iterating through the row
and column elements of the grid for the specific iteration at which the robot became
trapped, obstacles can be created sequentially until no nodes remain where a robot
might become stuck or stall at a local maximum.
The process repeats, creating an obstacle at the local maximum, which in turn
causes two adjacent nodes to become local maxima. Filling these in, the process
eventually reaches the point where placing the last pseudo-obstacle node leaves no
remaining local maxima. After this point, the path can be generated, and the robot will
skirt the diffusive obstacle.
If we now refer back to the entrapment encountered in Fig. 9 and apply this iterative
process of filling the concave diffusive obstacle with more obstacle nodes, we obtain
the resultant path shown in Fig. 10.
Figure 10. One application of iterative local maximum solution
It is advised that the pseudo-obstacles also be marked with weights based on their
distance from the original local maximum (which can be achieved by assigning each
the number of the sweep during which it was filled in). If, for example, a robot already
lay in the region that was filled with pseudo-obstacles, it would be unable to move
from its starting position. If, however, weights were assigned to the pseudo-obstacles
by iteration, the robot might find itself sitting on a pseudo-obstacle of weight equal to
ten, for example, and could trace across other pseudo-obstacles in order of assigned
weight to get free of the overlapping pseudo-obstacles, finally tracing its way to the
global target.
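A minimal sketch of this filling procedure, with the suggested weights, might look as follows in Python (illustrative only; the names fill_local_maxima and weight, and the representation of obstacles as a boolean grid, are our assumptions rather than the authors' implementation):

```python
def fill_local_maxima(T, obstacle, target):
    """Iteratively mark potential trap nodes as weighted pseudo-obstacles.

    A node counts as a potential local maximum when every 4-connected
    neighbour is an obstacle, a pseudo-obstacle, or strictly cooler.
    Each pseudo-obstacle records the sweep on which it was created, so a
    robot starting on one can escape by following decreasing weights.
    """
    rows, cols = len(T), len(T[0])
    weight = [[0] * cols for _ in range(rows)]      # 0 means a free node
    sweep, changed = 0, True
    while changed:
        sweep += 1
        changed = False
        for i in range(rows):
            for j in range(cols):
                if obstacle[i][j] or weight[i][j] or (i, j) == target:
                    continue
                nbrs = [(i + di, j + dj)
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < rows and 0 <= j + dj < cols]
                if all(obstacle[a][b] or weight[a][b] or T[a][b] < T[i][j]
                       for a, b in nbrs):
                    weight[i][j] = sweep            # new weighted pseudo-obstacle
                    changed = True
    return weight
```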
4. CONCLUSIONS
Several conclusions can be drawn from examining simulation data from the application
constructed to analyse the concept of potential fields based upon heat diffusion, as
proposed by Wang & Chirikjian (2000).
Firstly, the method of heat diffusion to produce a potential field is highly effective
when used in conjunction with Dirichlet boundary condition obstacles or with non-
diffusive obstacles (diffusivity = 0). The temperature gradient generated across the
field enables a simple greedy highest-temperature-seeking algorithm to rapidly plan a
path that avoids obstacles.
Wang & Chirikjian (2000) claimed that steady-state diffusion was a more effective
solution to dynamic path planning than transient diffusion; however, this can be
contested on the grounds of how node temperatures are solved in the two methods. If a
grid of l×w nodes exists, the generic steady-state solution to produce a potential field
of temperatures requires (l×w)² elements to be solved. The generic transient-state
solution to produce the same potential field requires approximately (l×w)(l+w)
elements to be solved; much less in comparison, and potentially much faster and less
memory-intensive than a steady-state solution to dynamic environment path planning.
With regard to the state of wandering that a robot engages in during transient diffusion
simulation in a dynamic environment, regenerating a useable potential field with
transient diffusion may in most cases be even faster than a single attempt to generate
the same field using steady-state diffusion. Should the configuration space change
dynamically, the simulation could clear the potential field and regenerate it using
transient diffusion in less time than it takes to apply steady-state diffusion.
A potential field that exists in a transient state can correct its temperature gradient
over time as obstacles are introduced, moved or removed to simulate a dynamic
environment. Depending on the obstacle type, its temperature or heat rate, the relative
positions of the robot, target heat source and obstacles, and the grid resolution of the
potential field, the robot can engage in a temporary state of wandering while the field
is corrected by diffusion. The distance and duration of this wandering could be
minimized by further experimentation with the aforementioned variables.
The introduction not only of multiple heat sources but also of obstacles with
diffusive properties can produce the well-known problem of local maxima, in which a
robot tracing the maximum temperature gradient across the potential field can stall or
become trapped. By using an iterative filling method, obstacles can be placed over
local maxima until no local maxima remain other than the global maximum (the
primary target), after which the robot proceeds to follow the remaining temperature
gradient to the objective.
It is recommended that a faster process for rectifying a dynamic potential field be
developed. When an obstacle is placed, the duration of the state of wandering should
be minimized or eliminated to avoid unnecessary travel. Rather than resetting the entire
potential field, it is recommended that an algorithm be developed to reset node
temperatures only in the immediate region of the location where the robot stalls due to
obstacle movement or introduction. This would allow heat to diffuse again in that
region, more rapidly reintroducing a correct temperature gradient for the robot to
follow.
References
[3] N. Ahmidi, G.D. Hager, L. Ishii, G.L. Gallia, M. Ishii, Robotic Path Planning for Surgeon Skill
Evaluation in Minimally-Invasive Sinus Surgery, Lecture Notes in Computer Science 7510 (2012), 471-
478.
[4] M. Bennewitz, W. Burgard, An experimental comparison of path planning techniques for teams of
mobile robots, Autonome Mobile Systeme, Springer, Berlin, Heidelberg, 2000, pp175-182.
[5] A. Burn, Game AI Dynamic Path Planning, Undergraduate Thesis, Department of Computer Science,
University of Durham, 2003.
[6] J. Carsten, D. Ferguson, A. Stentz, 3D Field D*: Improved Path Planning and Replanning in Three
Dimensions, IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China,
2006, pp3381-3386.
[7] A. Csiszar, M. Drust, T. Dietz, A. Verl, C. Brisan, Dynamic and Interactive Path Planning and Collision
Avoidance for an Industrial Robot Using Artificial Potential Field Based Method, Mechatronics,
Springer, Berlin, Heidelberg, 2012, pp413-421.
[8] O. Fringer, Lecture 9: Numerical solution of boundary value problems, 2000,
http://www.stanford.edu/~fringer/teaching/numerical_methods_02/handouts/lecture9.pdf.
[9] M. A. Goodrich, Potential Fields Tutorial, 2002,
http://students.cs.byu.edu/~cs470ta/goodrich/fall2004/lectures/Pfields.pdf.
[10] D. V. Lebedev, J. J. Steil, H. J. Ritter, The dynamic wave expansion neural network model for robot
motion planning in time-varying environment, Neural Networks 18(3), 2005, pp267-285.
[11] D. Lucas, Development of a multi-resolution parallel genetic algorithm for autonomous robotic path
planning, 12th International Conference on Control, Automation and Systems, Jeju, Korea, 2012,
pp1002-1006.
[12] J. A. Mocholi, J. Jaen, A. Catala, E. Navarro, An emotionally biased ant colony algorithm for
pathfinding in games, Expert Systems with Applications 37(7), 2010, pp4921-4927.
[13] A. Pal, R. Tiwari, A. Shukla, Modified A* Algorithm for Mobile Robot Path Planning, Studies in
Computational Intelligence 395, 2012, pp183-193.
[14] J. Tisdale, Z. Kim, J.K. Hedrick, Autonomous UAV path planning and estimation, IEEE Robotics &
Automation Magazine 16(2), 2009, pp35-42.
[15] Y. Wang, G.S. Chirikjian, A New Potential Field Method for Robot Path Planning, 2000 IEEE
International Conference on Robotics & Automation, San Francisco, CA, 2000, pp977-982.
[16] W. Wang, W.-B. Goh, Multi-robot Path Planning with the Spatio-Temporal A* Algorithm and Its
Variants, Lecture Notes in Computer Science 7068, 2012, pp313-329.
[17] S.X. Yang, M. Meng, Real-Time Collision-Free Motion Planning of a Mobile Robot Using a Neural
Dynamics-Based Approach, IEEE Transactions on Neural Networks 14(6), 2003, pp1541-1552.
[18] C. Zheng, J. Cai, H. Yin, A Linear Interpolation-Based Algorithm for Path Planning and Replanning on
Grids, Advances in Linear Algebra & Matrix Theory 2, 2012, pp20-24.
[19] Y. Zhong, B. Shirinzadeh, X. Yuan, Optimal Robot Path Planning with Cellular Neural Network,
International Journal of Intelligent Mechatronics and Robotics 1(1), 2011, pp20-39.
Subject Index
3D editor 225
4D modelling 343
active isolation system 155
activity categories 469
aerospace engineering education 560
aero-structure 578
agents 225
air transportation 391
aircraft design 110
aircraft maintenance scenario 568
airplane design 12
artificial neural network 361
assembly 421
automated engineering design 173
automatic electronic control 361
Bellman's optimality principle 183
benchmark 333
biofuel 391
biomethane 391
body movement-based interaction 163
bottom-up method 353
business process 40
business rules 40
CAD 401
capability enhancement 254
case study 50
change management 254
cloud computing 40, 517, 539
cloud manufacturing 216, 539
collaborative design 198
collaborative development 40
collaborative engineering 110, 225,
284
completeness 568
complexity 1
computed tomography 60
computer supported engineering
design systems 324
concurrent design facility 550, 560
concurrent engineering (CE) 30, 137,
303, 353, 411, 431, 550
concurrent engineering in
aerospace industry 190
concurrent engineering in
practice 190
concurrent engineering principles 190
cost and energy consumption 129
critical thinking 264
curved shell plate 441
customer expectation 526
customer experience 526
customer involvement 72
customer needs management 81
customer perception 526
customer satisfaction 526
customer-oriented product design 235
customized products 119
data management 147
data mining 235, 421
data quality 401
decision funnel model 50
decision making 235
decision support 274, 284
dental implant 303
design for sustainability 314
design rationale and traceability 324
design structure matrix 91
design tools 12
digital factory 421
digital grid 371
digital mock-up 353, 431
direct operating cost 578
distributed generation 371
diverse possible design solution
sets 155
DMU 353
documentation 119, 324
domain modeling 469
ecodesign 481
electricity coloring 371
electricity market 371
electricity service 371
e-logistics information system 451
emotional product design 91
empirical performance evaluation 110
encapsulation 216
engineering collaboration 401
enterprise trajectory 254
evolutionary algorithms 411
exertion-oriented programming 381
expert systems 303
extended enterprise (EE) 314
feature model 333
flight dynamics 12
formal methods in concurrent
engineering 190
formal verification 225
fourth-party logistics (4PL) 451
functional DMU 431
functional spatial experience 431
Gantt chart 264
generator 333
genetic algorithm 244
geometric prosthesis modeling 60
government responsibility 526
graph decomposition 91
hand gesture 163
head movement remote
collaboration 163
heat diffusion 588
hierarchical linear model 461
human prosthesis 60
hybrid method 353
hypothesis testing 72
increase of productivity 129
inference engine 173
innovation chain 469
interactive multiple regression
model 72
interchangeable common
components 129
ITF model 81
Kansei clustering 91
Kansei engineering 91
knowledge based engineering 190,
578
knowledge management 119, 284
knowledge object 173
knowledge sharing 517
laser scanner 147
lean engineering 190
lean manufacturing 274
learning curve 461
linguistic variables 183
liquefied natural gas 391
load balancing 101
logistics service provider (LSP) 451
low-floor minibus 411
machine tool 19
manufacturing 421
master geometry 353
microwave heating 361
migration 401
modeling and simulation 517
moisture content 361
multi domain simulation 19
multidisciplinary collaboration 110
multidisciplinary design
optimization 198
multi-objective optimization 411
needs of customer 293
network computing 198
neural network 129
obstacle avoidance 588
OCNR system 81
online trade 526
open innovation 284
optimal path 588
parallel distributed processing 129
parameterized 333
parametric design 235
performance 1
photovoltaic electricity
generation 461
physics-based design 198
planning and scheduling 343
process estimation 183
process management 469
process modeling 469
process planning 421
processing plan 441
product and service portfolio 137
product development 50, 60, 81, 303
product development
methodology 284
product development process 293,
481
product engineering 303
product line engineering 333
product realization 421
product structure 353
production strategy 244
product-service lifecycle 314
product-service systems (PSS) 314
project design 244
project management 264
provision 216
quality functions deployment 293
radical innovation 30, 72
reduction of time 129
refinery 1
renewable energy 371
requirements elicitation 568
requirements engineering 568
resource-constrained project
scheduling problem 343
retail outlets 526
risk analysis 244, 469
robotic path planning 588
sensitivity analysis 578
service development 50
service engineering 137, 314
service improvement 183
service oriented computing
environment (SORCER) 198, 381
service-oriented architecture 216
service-oriented business model 539
service-oriented mogramming 381
shipbuilding 147
simulation 274
smart grid 371
SOA 381
software architecture 19
spatio-temporal validation 343
stage gate 50
storage battery 371
subbase 361
supply chain 401
support solution 254
support system management 1
suspension system parameters 411
sustainability indicators 481
system functionalities 469
system maintenance 119
system simulation 19
system support engineering 1
systems engineering 137, 568
tacit knowledge 441
task scheduling 101
three dimensional measurement 147
top-down method 353
transdisciplinary concurrent
engineering 381
transitional system architecture 254
unexpected circumstance 155
variability management 333
var-oriented modeling 381
vehicle dynamics 411
virtual template 441
web based 284
web platform 517
wind tunnel testing 12
work performance 264
Author Index
Abe, R. 371
Alsaidi, M. 1
Amghar, Y. 40
Anemaat, W.A.J. 12
Anichkin, A. 343
Bachmann, A. 110
Bartelt, C. 19
Beckett, R.C. 30
Benfenatki, H. 40
Benharkat, N. 40
Bil, C. v, 391, 550, 560
Boesten, B.H.L. 507
Bondar, S. 401
Borsato, M. 481
Böß, V. 19
Brüning, J. 19
Burston, M. 391
Burton, S. 381
Cai, G. 550, 560
Canciglieri Júnior, O. 50, 60, 293, 303
Cangelir, C. 190, 353
Carroll, J. 12
Cha, J. 216
Chang, A.-C. 451
Chang, D. 72, 91
Chang, W. 81
Chen, C.-H. 72, 81, 91
Chen, M.-S. 235
Chen, X. 81
Cho, H.-Y. 451
Conroy, T. 391
Curran, R. 494, 507, 578
da Silva, S.B. 293
de Lima, J.P. 293
de Souza, T.M. 50
Denkena, B. 19
Dineva, E. 110
Dorrington, G.E. 391
Downey, K. 254
Elgh, F. 119, 173, 324
Fukuda, S. 129, 431
Garbi, G.P. 137
Germani, M. 314
Ghodous, P. 40
Giraldo G., G.L. 469
Gollnick, V. 110
Gorbachev, I. 183
Greboge, T. 60
Herget, W. 225
Hiekata, K. 147, 244, 441
Hills, J. 588
Huang, Y. 91
Inoue, M. 155
Ishikawa, H. 155
Ito, T. 163, 274
Jahnen, A. 60
Jeffery, J. 12
Johansson, J. 173, 324
Kamalov, E. 183
Kamalov, L. 183
Karademir, . 190
Kaushik, B. 12
Kazar, O. 40
Khoo, L.P. 91
Kimura, S. 147
Kolonay, R.M. 198, 381
Kong, L. 216
Krauß, C. 225
Kretschmer, R. 421
Lee, W.T. 451
Lin, J.-Y. 235
Lin, L.-C. 461
Lin, M.-C. 235
Lin, Y.-H. 235
Liu, P.H.Y. 461
Liu, Q. 101, 517
Loureiro, G. 137, 568
Lulić, Z. 411, 431
Mantwill, F. 284
Marilungo, E. 314
Masum, M.A. 361
Mitsuyuki, T. 244
Mo, J.P.T. v, 1, 254
Mochida, S. 264
Moerland, E. 110
Mohamad, E.B. 274
Morozov, S. 343
Moser, B. 244
Nagel, B. 110
Nakagaki, N. 441
Nonnengart, A. 225
Oellrich, M. 284
Ou, J.J.R. 461
Pereira, J.A. 293, 303
Peruzzini, M. 314
Pokhilko, A. 183
Poorkiany, M. 324
Rausch, A. 19
Rock, G. 333
Rüger, R. 333
Rudek, M. 60, 303
Rulhoff, S. 421
Ruppert, C. 401
Šagi, G. 411
Sales, O.P. 50
Saouli, H. 40
Semenov, V. 343
Şenaltun, G. 353
Shetu, N.S. 361
Shibano, K. 371
Sobolewski, M. 381
Spieldenner, T. 225
Spiteri, L. 391
Spiteri, M. 391
Stjepandić, J. 401, 411, 421, 431
Sugawara, A. 441
Sun, J. 441
Szejka, A.L. 303
Takahashi, M. 155
Tanaka, K. 371
Tarlapan, O. 343
Tatou, J.P. 19
Trappey, A.J.C. 451, 461
Trappey, C.V. 451, 461
Urrego-Giraldo, G. 469
Ussui, P.R.S. 481
Verhagen, W.J.C. 494, 507
Wan, L. 101, 517
Wang, Chao 517
Wang, Chuan 101
Warwas, S. 225
Water, C.N. 507
Wu, C.H. 526
Wu, K.K. 526
Xiong, T. 101, 517
Xu, D. 550, 560
Xu, W. 216
Xu, X. 539
Yamato, H. 147, 244, 441
Yuniawan, D. 274
Zenun, M.M.N. 568
Zhao, X. 578
Zhong, Y. 588
Zinnikus, I. 225
Zolotov, V. 343
Zorgdrager, M. 507