To cite this article: Samuel Suss & Vince Thomson (2012) Optimal design processes under uncertainty and reciprocal dependency, Journal of Engineering Design, 23:10-11, 829-851, DOI: 10.1080/09544828.2012.704546 (published online 07 Aug 2012; http://dx.doi.org/10.1080/09544828.2012.704546).

Journal of Engineering Design
Vol. 23, Nos. 10–11, October–November 2012, 829–851

Optimal design processes under uncertainty and reciprocal dependency
Samuel Suss and Vince Thomson*
Department of Mechanical Engineering, McGill University, Montreal, QC, Canada
*Corresponding author. Email: vincent.thomson@mcgill.ca

(Received 23 November 2011; final version received 12 June 2012)

Design processes are characterised by uncertainty and iteration, making them difficult to plan and manage.
These characteristics are incorporated into an original model in which information flow is simulated
explicitly such that the dynamic complexity of design processes with interdependent tasks is captured.
This is accomplished through the linkage of information exchange to the work done during each design
task, the availability of resources, and the techniques used to manage the product development (PD) process.
The model generates rework due to iteration and failure in design reviews according to the ability of tasks to
reduce the uncertainty of design information. The model is applied to the investigation of coordination and
its effects on process behaviour under various conditions. Coordination mechanisms are studied through
the choice of input parameters that influence the degree of overlapping of tasks, the allocation of resources
that process information, the delay of communication of information, and the interval of communication
between tasks. Findings uncover the mechanisms that drive the pace of progress during engineering design
processes and highlight strategies that reduce span time in complex PD.

Keywords: dependency modelling; complex system; design; simulation; coordination

1. Introduction

This paper describes a novel model of product development (PD), revealing a new perspective
on the design process. It integrates prior insights in a new way along with new concepts. A key
contribution of the model lies in its treatment of uncertainty reduction in progressive development
tasks. Analyses of hypothetical cases are described that provide a measure of validity and confidence in the model, showing that it gives the expected results in accordance with the assumptions and in line with other studies. Management guidelines are developed that provide generalisable
insights for improvements to PD processes.
New PD is a critically important part of the product life cycle: it consumes a large proportion of the overall time needed to bring a product to market and determines about 70% of product cost
(Wheelwright and Clark 1992). It has been estimated that each day’s delay in introducing a new
model of an automobile into the market represents a one-million-dollar loss in profit (Clark et al.
1987). In the electronics sector, the rule of thumb is that the first two manufacturers that get a new

generation product to the market lock up 80% of the business (Port 1989). The implementation of
engineering design tools, concurrent engineering practices, and product data management systems
has contributed to reducing PD cycle times in recent years. However, in large PD projects where
hundreds of engineers work to develop complex products, there remain significant inefficiencies.
The amount of waste in aerospace and defence PD programmes is estimated at 60–90% of the
charged time with about 60% of all tasks being idle at any given time (Oppenheim 2004). The
actual time that engineers working on PD spend on value-added activities is much less than
half of their total working time. There is much efficiency lost in wasted communication, waiting
for information, and lack of coordination. Moreover, the increased use of inter-organisational
collaboration in PD has highlighted the need for better coordination mechanisms.
Our goal in this paper is to find methods for reducing span time and effort in PD. We argue
that this can be done with better coordination of the complex engineering design process and that
this in turn can be achieved through a better understanding of how coordination strategies and
tactics impact the PD process, as well as under what conditions they are effective. We believe that by treating the PD process as a complex system whose elements (resources, tasks, and developing information) interact to change the state of the system, and by explicitly modelling information flow, PD can be studied with the computer modelling and simulation techniques often used to analyse complex systems. The outcomes of this study should indicate how PD processes could
be improved to be faster and more effective.
The paper is organised as follows: Section 2 is a literature review; Section 3 describes the model;
Section 4 describes and discusses the results of simulations for several coordination mechanisms
under various conditions of uncertainty and impediments to information flow; and Section 5
summarises the managerial insights and gives a conclusion.

2. Literature review

The literature on the PD process and related fields is very broad. In the following sections, a
review of previous research in organisation theory, coordination, iteration, uncertainty, quality,
complexity, and communication important to the development of the model described in this paper
is presented. Finally, other models of PD are discussed and the reasoning for the development of
the new model presented here is explained.

2.1. Organisation theory and the information processing framework

Research on organisation theory has explored the ways in which large and complex tasks can most
efficiently be performed and has led to the use of an information processing model (Galbraith
1977). This model shows how the efficient breakdown of a task into differentiated subtasks
creates interdependencies that necessitate greater coordination of activities. Uncertainty is the core
concept upon which this model is based and is defined as the difference between the information
required to accomplish a task and the information currently residing with the actor charged
with performing it. The basic proposition is that as the amount of uncertainty increases, there is
an increasing frequency of non-routine, unique, consequential events which cannot be foreseen
and which require decisions to be made. Making decisions requires information gathering and
communication about the state of affairs that led to the events (as well as authority granted by
the owners or the stakeholders of the process). This is referred to as information processing;
it takes time and requires resources. Therefore, increasing uncertainty increases workloads and
time delays in the decision-making mechanisms. Better coordination of information processing
mitigates these increased workloads and time delays.

We understand what coordination means intuitively when we attend a well-run event, observe a
smoothly running manufacturing process, or watch a successful sports team, and notice how well
harmonised the unfolding actions are. Coordination is defined as ‘the management of dependencies
between activities’ (Malone et al. 1999). This definition is consistent with common intuition and
with the nature of dependencies among interdependent tasks put forth in organisation theory.
Coordination theory suggests that dependencies among activities and resources create problems
that constrain how activities can be performed. Coordination theory defines the fundamental types
of dependencies and coordination mechanisms and catalogues the specialisation of processes that
are examples of these fundamental coordination mechanisms in various scenarios. Here, we refer
to the tactical coordination or action taken during the execution of a process to ensure that each
subtask reliably has what it needs, when it needs it. Coordination is also strategic, where the
structure of a process, the information processing, and decision-making mechanisms are designed
such that the execution of each subtask can take place in concert with the other subtasks with
minimum effort and minimum delay.


In this paper, we are concerned with a particular activity, the development of a complex prod-
uct, and more specifically, the engineering tasks associated with such a product. In a classical
work on the methodology of engineering design, Pahl and Beitz (1996) laid out what are, in
essence, strategic guidelines that split the design process into four main phases. Within a phase,
much of the work of design consists of proposing solutions for specific design requirements
and evaluating whether or not the solutions meet the criteria developed for the design problem.
Iteration between phases is described in terms of the evaluation and feedback that take place
at each stage. At the end of each phase, a design review is typically conducted to decide if the
development has met the requirements. During this design review, there is an opportunity not
only to evaluate solution specifications in light of product requirements, but also to synchronise
the remaining work for all those participating in the design project. A result of design reviews
is a decision about the state of progress whether to continue to the next phase or to return to
a previous phase with revised or clarified information to refine the design before proceeding
further.

2.2. Iteration in engineering design

It has been observed by many researchers that complex engineering design processes are often
iterative (Kline 1985, Peters 1986, Whitney 1990, Susman 1992, Fredriksson 1994). Design
iteration implies rework or refinement of activities to account for changes in their inputs (Browning
1998). These changes can come from additional or changed information or failure to meet design
requirements (Smith and Eppinger 1997). Most definitions of iteration in engineering design
suggest that it consists of more than repetition and that it is associated with the improvement,
evolution, or refinement of a design (Eppinger et al. 1997). This characterisation suggests a
cyclical process that achieves a succession of intermediate improvements on the way towards
a final outcome rather than a strictly incremental process that arrives at a single final outcome
(Safoutin 2003).

2.3. The effect of uncertainty in engineering design

There are several definitions of the term uncertainty mainly due to the fact that different types,
causes, and sources of uncertainty co-exist (Thunnissen 2004). Several classifications of types of
uncertainty have been put forward with a view towards achieving a greater understanding as to
how each type may require a different managerial approach (DeMeyer et al. 2002, Wynn et al.
2011).

Perhaps one of the main distinctions regarding uncertainty in engineering design is between
the lack of knowledge (epistemic) and random (aleatory) uncertainty (Oberkampf et al. 2004).
Epistemic uncertainty is derived from ignorance or incomplete information. Some epistemic
uncertainty is reducible, for instance, via further studies, measurements, or expert consultation.
Aleatory uncertainty describes the inherent variation associated with a system or environment
such as dimensional variation in a production system, the variation in design task duration, or an
unexpected change in product requirements originating from the customer. This aleatory uncer-
tainty cannot be reduced through more information or knowledge acquisition during the design
process, although gaining knowledge about variability may allow its influence to be mitigated
through the design of systems that are adaptable, robust, flexible, etc. (Chalupnik et al. 2009). In
the engineering design process, it seems intuitive that the influence of variability may be managed
through frequent information exchange between dependent activities, and one of the purposes of
the analysis in this paper is to determine the conditions under which this is most effective.

2.4. Defining the state of task progress

Hykin and Laming (1975) referred to work progress as a process in which the level of uncertainty
in the artefact is reduced as the design progresses. Wood et al. (1990) proposed that as a design
progresses, the level of design imprecision is reduced, although a degree of stochastic uncer-
tainty usually remains. Having a high level of confidence in design information means that the
information is detailed, accurate, robust, well-understood, and physically realistic (Clarkson and
Hamilton 2000). Also, progress in a design process refers to a process that achieves a succession
of improvements or refinements on the way towards the final outcome (Safoutin 2003). Browning
et al. (2002) evaluated PD project progress as the reduction in the risk that critical technical performance measures (TPMs) will not be met. Fundamentally, if we view PD as a process of generating
information to reduce uncertainty, the rate of progress of the information developed by a task in
PD is measured by how it reduces epistemic uncertainty.
Since the reduction in epistemic uncertainty in a task is due to the learning or discovery of
information required to complete the task, we would expect it to exhibit a slower rate of change
at the beginning and end of the task as in a sigmoid or S-shaped function (Ritter and Schooler
2002). According to Carrascosa et al. (1998), the S-shape is the best shape function to represent
task completion as it can embody a growth phenomenon similar to learning.

2.5. Quality in PD

ISO 9000 defines quality as ‘the degree to which a set of inherent characteristics fulfils require-
ments’. The American Society for Quality defines quality in technical usage as ‘the characteristic
of a product or service that bears on its ability to satisfy stated or implied needs’. Under the
conventional paradigm, higher quality in PD can be achieved only at the expense of increased
development expenditures and longer cycle times (Harter et al. 2000). However, an alternate view
is that quality, cost, and cycle time in PD are complementary, that is, improvements in quality
directly relate to improved cycle time and productivity (Nandakumar et al. 1993). These improve-
ments in quality are thought to arise from reduced defects and rework in a mature1 PD process
(Harter et al. 2000) and better knowledge sharing, acquisition, integration, and application in new
PD processes (Jing and Yang 2009).

2.6. Complexity of engineering design processes

PD processes can be more easily understood by applying some of the methods used in analysing
other complex systems. A useful system modelling tool that has been applied to PD is the design
structure matrix (DSM). Whereas these matrix tools have been used for many types of systems
and analyses (Danilovic and Browning 2007), we would like to analyse dependencies among
activities in complex PD processes.
If the system’s state is time-varying, a model that incorporates the dynamic behaviour of
the system can be built with the use of simulation. Many simulation models of PD have been
successfully developed (Jin and Levitt 1996, Cantamessa 1997, Clarkson and Hamilton 2000,
Bhuiyan 2001, Browning and Eppinger 2002, Cho and Eppinger 2005, Lin et al. 2008, Levardy
and Browning 2009). Simulation methodology is similar to deductive theory development in
that outcomes follow directly from assumptions made, but without the constraint of analytic
tractability (Harrison et al. 2007). It also resembles inductive reasoning or empirical analysis in
that relationships among variables may be inferred from analysing output data (Carley and Lin
1997). Simulation can show that the modelled process can produce certain types of behaviour (Lant
and Mezias 1992), can discover unexpected consequences of the interaction of simple processes
(Carroll and Harrison 1994), can explain the mechanisms causing observed behaviour (Mark
2002), and can be used to suggest a better mode of operation of a process. The model’s processes
can be based on empirical observation through either the functional forms or the parameter
settings or both. Where empirical estimates are not available, empirical work can still provide much
information for model construction, and variations and sensitivity analysis can be used to examine
the robustness of the results. Empirical grounding can also be established through the results of
the simulation either through comparison with observations of real systems or to serve as a basis
for subsequent observations of these systems. Simulation can also be a valuable research tool even
if the outcomes cannot be assessed empirically. This is a form of discovery and is characteristic
of much theoretical work in both the natural and social sciences (Harrison et al. 2007).

2.7. Communication and system-level integration

One type of collaborative PD is a series of multi-phase design processes linked via the original
equipment manufacturer, which acts in a project as the system-level integrator of the various
subsystems. The role of a system-level integrator is to ensure that the designs being created by
development teams on the various subsystems function together correctly as a product. The system-
level integrator transmits requirements to and receives information and requests for information
from local-level development teams (Yassine and Braha 2003). Communication directly between
the local-level development teams can also occur and is often informal and unscheduled. The
rate at which this information is received, processed, and transmitted between the system-level
and the local-level teams is an important element in the rate of progress of the overall design
process (Mihm et al. 2003, Yassine et al. 2003). Drawbacks of this scheme are the tendency
for communication channels to become overloaded and thereby cause delays in the progress of
the project and the tendency for default decisions to be taken by actors who lack the requisite
information usually obtained through hierarchical channels.
Information management in large-scale engineering design is difficult and challenging for a
variety of reasons: diversity of channels, scale, variety of perspectives, and uncertainty (Eckert
et al. 2001). Maier et al. (2008) discussed evidence that a multitude of influences on the lack of coordination manifest as an apparent communication problem, and illustrated how the interaction of communication with planning and product complexity leads to wasted effort and rework in automotive PD.

2.8. Models of PD

Extensive surveys of PD process models were published by Browning and Ramasesh (2007)
and Wynn (2007). Krishnan’s (1996) model provided insights into the relation of the rate of
evolution of information and the degree of downstream sensitivity to change for overlapping
activities. The importance of timely information exchange in PD has also been addressed (Ha and
Porteus 1995, Krishnan et al. 1997). Krishnan’s concept of evolution of information is roughly
equivalent to the rate of reduction of uncertainty. If uncertainty is reduced quickly, the information
generated by the task evolves quickly towards its final value. Bhuiyan et al. (2004) analysed the
relationship of functional interaction and overlapping of activities on effort and total span time
using empirically derived relationships between rework and the level of functional interaction.
Other models have focused on the mechanisms of iteration and rework that occur in PD as a result
of the interdependencies of design activities (Browning and Eppinger 2002), but in these works
the probability of rework was determined a priori as an input to the model.
A mathematical analysis of workload in PD using the DSM and system dynamics applied to a
case study in the automotive industry showed the value of this approach in providing operational
insights that explicitly captured the fundamental characteristics of a development process (Yassine
et al. 2003). Levardy and Browning (2009) considered iteration as a managerial control option
and used agent-based simulation in an adaptive process model that simulates possible process
paths using the evolution of TPM profiles through the execution of activities.
Previous models of PD are not able to examine the effects of improvements in coordination.
They generally assume a priori levels of rework based on empirical data gathered in organisations
with fixed coordination schemes employed. Since rework is thought to be a significant factor in
longer span times and increased effort in engineering design, it would be advantageous to have a
model that could estimate the potential improvements that various efforts at coordination could
achieve.

3. The collaborative process model

3.1. Overview of the model

In order to analyse the effects of coordination on an engineering design process, we model the
information flow and its uncertainty explicitly and link it to task progress in a discrete event
simulation. We consider a generalised engineering design process composed of tasks arranged in
phases with a design review at the end of each phase as shown at the top of Figure 1.
Each of the tasks is considered to be performed by a cross-functional development team. We
also include a system-level integrator task in our process model (middle-left in Figure 1).
The modelled process may be viewed as a numerical task–task DSM D, where the numerical
value of each of the elements represents the dependency strength between the respective pair of
tasks. The dependency strength may be formed by the product of the initial uncertainty and the
sensitivity between each pair of tasks, each valued on a relative scale of 0–3, respectively, so that
dependency strength can have values of 0–9 (Yassine et al. 1999).
We consider that there is a certain total amount of information processing required for an organ-
isation to perform a particular PD project. The decomposition of the overall PD project serves to
allocate the total information processing work among the tasks according to their dependency rela-
tionships as illustrated in Figure 2. The total information processing requirements are presumed
to be those with ideal coordination between the tasks where there is no rework that develops due
to the lack of coordination of work among dependent tasks.
The collaborative process model (CoPM) considers the process as a dynamic system with tasks,
resources, and information as its elements. The tasks are linked together through their need for
information from each other in order to make progress in their work. Information is created in
discrete units in each task as a result of the technical work accomplished by that task and is
communicated to each dependent task in the process.

[Figure 1 shows two phases of the process: in each phase a system-level integrator exchanges information with interdependent tasks, each task comprises work, read, and prepare subtasks with an information store, and the phase ends with a design review.]

Figure 1. The generalised engineering design process model.

[Figure 2 shows the total information processing requirements for the project allocated as a matrix: each task's row lists the information it requires from every other task, with internal information exchange on the diagonal.]

Figure 2. Total information processing requirements allocated among the tasks in the decomposition.

In the CoPM, a type of sigmoid curve called the Gompertz function is used to model the
reduction in epistemic uncertainty of information created in each task as a function of that task’s
state of progress. The state of progress of a task is presumed to be a function of the amount of
work done in the task and the amount of input information it receives. The argument for this goes
as follows: progress in a task is achieved by the reduction in epistemic uncertainty; epistemic
uncertainty in a task is reduced by refining knowledge; the means of refining knowledge is
through received information and work done to develop information through analysis, testing,
and research; thus, progress is a function of the amount of work done in the task and the amount
of input information received. The quality of the work done and the quality of input information are
taken into consideration by imposing the condition that if the uncertainty of information received
by a task increases during the process, some of the work done by the task must be redone, and
the state of progress in the task is reduced accordingly. As a consequence of rework then, the
subsequent information generated by that task is of greater uncertainty as well. This then leads to
secondary effects of rework in the process. The Gompertz function is used because of its flexibility
to allow for different rates of approach to the lower and upper asymptotes at the start and end of
an activity.
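
To make the shape of this uncertainty profile concrete, the short Python sketch below evaluates a modified Gompertz curve of the form given later in Equation (2), together with the aleatory term of Equation (3). It is an illustrative sketch only: the function names and the example parameter values (b = 5, c = 10, M = 0.1) are assumptions, not values taken from the CoPM.

```python
import math
import random

def epistemic_ratio(state, b=5.0, c=10.0):
    """Modified Gompertz profile (Equation (2)): ratio of remaining epistemic
    uncertainty to initial uncertainty as a function of the task's state of
    progress in [0, 1]. b and c set how quickly the curve leaves the upper
    asymptote and approaches the lower one."""
    return 1.0 - math.exp(-b * math.exp(-c * state))

def aleatory_ratio(state, M=0.1, b=5.0, c=10.0):
    """Aleatory term (Equation (3)): a uniform random fraction of the current
    epistemic ratio, scaled by the factor M."""
    return epistemic_ratio(state, b, c) * M * random.uniform(0.0, 1.0)

if __name__ == "__main__":
    for s in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"state {s:4.2f}  epistemic {epistemic_ratio(s):.3f}  "
              f"aleatory sample {aleatory_ratio(s):.3f}")
```

With these example parameters the ratio stays near one early in a task and falls towards zero as the state of progress approaches one, which is the behaviour described above.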
In the CoPM, we do not examine the different process paths that could be taken, but rather
consider the intricacies of the information exchange between interdependent tasks throughout the
process and consider how improved coordination of the elements that influence this information
exchange can improve process performance. This is different, but complementary to other models
such as the signposting model (Clarkson and Hamilton 2000) or the adaptive product development
process model (Levardy and Browning 2009) which consider various sequences of activities in a
PD project to find the optimal process path.
In our model, we differentiate between iteration needed to evolve and refine a design and the
rework that develops in engineering design due to the lack of coordination of work in dependent
tasks. This type of rework, which we call ‘unnecessary’ rework, would not be required if, in
each instance, new information developed during the process was delivered to and processed by
dependent tasks before incorrect work was done. In the CoPM, progressive iteration performed
in a task is accounted for with information exchange requirements imposed on dependent tasks,
whereas unnecessary rework is generated when there is a lack of synchronicity and an increase
in uncertainty in interim information flow. We incorporated the view of quality described in the
latter part of Section 2.5 into our model in the criteria governing the probability of design version
rework. A failure to achieve the required information exchange imposed on dependent tasks also
influences the quality of the results achieved by the end of a phase and can trigger design version
rework in the simulation (top of Figure 1).
Drawing on the literature (Browning 1999, Wynn et al. 2011), other components of uncertainty
applicable to engineering design processes have been identified. Many of these are related to the
interdependence of design activities and to the lack of synchronicity of developing information,
and as such we can classify them as ‘derived uncertainty’. In the PD model proposed here, these
‘derived’ aspects of uncertainty should be emergent in a process model that incorporates the
dynamics of the interactions (Suss et al. 2010).

3.2. Details of the model

In the CoPM, each task is divided into three subtasks: work, which comprises the technical work
required to complete a task; read, which is the time spent by a resource performing the work
required to read, interpret, and comprehend the incoming information; and prepare, which is the
time spent by a resource in preparing information developed in the task for communication (lower
right of Figure 1). A resource assigned to a task can only perform one of these subtasks at a time.

We model each of these subtasks as a stochastic process so that the amount of time that a resource
needs to perform each of them is chosen from an inverse triangular probability density function
(PDF).2 This is accomplished by dividing the information into discrete units such that each unit
requires a stochastic amount of processing time in the prepare or read subtasks.
Supported by empirical observations in PD organisations (Allen 2007), we assume that the
total amount of information that must be communicated between interdependent tasks is directly
proportional to their dependency strength. Allen observed that the number of communications
between participants in PD projects is greater between people performing activities with greater
dependency. This was the case reported regardless of other factors such as physical distance
between the participants or departmental membership. The results were obtained by measuring
the number of communications or messages that were exchanged between people. We reason
that if there is greater uncertainty in an activity’s output, it is likely that more estimates of the
output information need to be generated and communicated to dependent activities before the
design activity is completed. Similarly, if there is greater sensitivity to another activity’s output,
it is likely that more information needs to be transferred before the linked activities can arrive at
a jointly satisfactory solution. For each increased level of uncertainty, the effect of sensitivity is
magnified and vice versa; so, we assume that the effects of these characteristics are multiplicative.
Therefore, in order to simulate progressive iteration, there are a fixed number of units of
information that must be received by a dependent task from each of the tasks upon which it is
dependent. The number of information units that must be communicated by each task i to another
task j for each pair of tasks in the process is represented as a matrix, NC, proportional to matrix D.
Similarly, elements on the diagonal of NC represent the number of information units that must
be exchanged internally within each development team to resolve the information dependency
between team members. The constant of proportionality between NC and D can be determined by
equating the ratio of the total average effort in communication work and the total average effort in
technical work per phase to an input parameter α. We initially set α = 1 based on interviews with
managers and participants of PD processes in aerospace companies and subsequently verified the
effects of this assumption with sensitivity analyses. Thus, the constant of proportionality relates
α and the total average effort in technical work to the average dependency strength and average
effort required to process a unit of information (Suss 2011).
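
The sketch below illustrates one plausible way to compute this proportionality. It assumes, purely for the example, that communication effort equals the number of information units multiplied by an average effort to process one unit, and that this total is set to α times the total technical effort; the exact constant used in the CoPM is derived in Suss (2011), so the numbers produced here are illustrative only.

```python
import numpy as np

def communication_matrix(D, alpha, technical_effort, effort_per_unit):
    """Sketch of the NC = constant * D relation described above.

    alpha            : target ratio of communication effort to technical effort
    technical_effort : total average technical effort per phase (hours)
    effort_per_unit  : assumed average effort to process one information unit (hours)
    """
    D = np.asarray(D, dtype=float)
    total_units = alpha * technical_effort / effort_per_unit  # units implied by alpha
    constant = total_units / D.sum()       # units per unit of dependency strength
    return np.rint(constant * D).astype(int)

# Example: five reciprocally dependent tasks, all dependency strengths equal to 9.
D = 9 * np.ones((5, 5))
print(communication_matrix(D, alpha=1.0, technical_effort=15000, effort_per_unit=9.0))
```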
Each task and each unit of information carries out its own processing thread in a discrete
event simulation of the design process. Each thread is subject to mathematical and logical rules
governing the events it encounters and the values of state variables that are assigned accordingly.
The primary state variables are WTti , the amount of work performed by task i until time t; WDi ,
the amount of work required to be performed by task i (which is initially set by an inverse PDF,
but later can be increased when rework is required); and Iti , the number of units of information
read by task i until time t. We define γi as the total number of units of information from all other
tasks required to be read by task i (i.e. the sum of elements in the ith column of matrix NC) and
ηi as the total number of units of information that must be prepared by task i (i.e. the sum of
elements in the ith row of matrix NC).
The mathematical and logical rules governing the processing threads and values of state
variables are summarised as follows:

(1) Work begins in a task according to the task’s sequence and may be overlapped with other
tasks. Fully overlapped tasks begin simultaneously, while partially overlapped tasks begin
when they have received the specified fraction of required input information from other tasks
in the process.
(2) Without timely input information, work in a task stops. This occurs when WTti /WDi , the
fraction of work performed in task i, exceeds a stochastically determined starve condition
which is distributed normally about Iti /γi , the fraction of information received by the task.

(3) Information is generated in a task uniformly in accordance with the work progress, that is,
the nth unit of information is ‘created’ by task i when WTti /WDi ≥ n/ηi .
(4) Work in each task is performed in cycles between which communication is done, if required.
The maximum length of each cycle, Δti , is a modelling input for each task i in the process
being modelled. An interim state variable DTi is used to track this.
(5) The state of progress of a task at any time t, Sti , is assumed to be a linear combination of the
fraction of achieved technical work and the fraction of received input information:
 
S_{ti} = \frac{1}{2}\left(\frac{WT_{ti}}{WD_i} + \frac{I_{ti}}{\gamma_i}\right).          (1)
This relationship is used because it incorporates the two aspects of task progress which
describe its ‘nearness’ to completion. Simply relating the state of a task to the amount of tech-
nical work that has been accomplished is insufficient because there may be rework required
for some of the work already done. However, we reason that the sum of the proportions of
work done and received input information is a better measure because we expect less likeli-
hood of design iteration if a greater proportion of the input information has been processed.3
Since technical work alone is insufficient to measure the progress of a task, using the linear
combination of the fraction of technical work and the fraction of input information is the
simplest relation that incorporates the effects of both with equal weight.
(6) The ratio of the epistemic uncertainty of each unit of information created by task i at time t
to the initial epistemic uncertainty of that task, εti , is given by a modified Gompertz function
which yields an S-shaped curve that allows for different rates of approach to the lower and
upper asymptotes at the start and end of a task:
\varepsilon_{ti} = 1 - e^{-b_i\, e^{-c_i S_{ti}}},          (2)
where bi and ci are input parameters that govern the shape of the profile chosen for each task.
The arguments for this choice are outlined in Section 3.1.
(7) The ratio of the aleatory uncertainty of each unit of information created by task i at time t to
the initial uncertainty of that task is given by the following equation:
\varphi_{ti} = \varepsilon_{ti}\, M_i\, \mathrm{UNIF}(0, 1),          (3)
where Mi is a scaling factor and UNIF(0, 1) is a sample chosen from the uniform probability
distribution between 0 and 1.
(8) If the uncertainty of received input information by a task is greater than the uncertainty of
information received earlier, rework is required. The amount of rework is calculated as the
amount of work done in the intervening work cycles where the input uncertainty was lower
than the most recent information received. The rework is manifested as an increase in WDi .
We denote ϕkl and εkl as the aleatory and epistemic attributes of information unit l originating
from task k. When information is read by the receiving task i, these attributes are used to
calculate the average total input uncertainty Ūikm and the average input value of ε̄ikm from task
k between work cycle m − 1 and m in task i:

\bar{U}_{ik}^{m} = \frac{\sum_{l=1}^{N_{ik}^{m}} \left(\varphi_{kl} + \varepsilon_{kl}\right)}{N_{ik}^{m}}, \qquad \bar{\varepsilon}_{ik}^{m} = \frac{\sum_{l=1}^{N_{ik}^{m}} \varepsilon_{kl}}{N_{ik}^{m}},          (4)

where Nikm is the number of information units arriving at task i from task k between work
cycles m − 1 and m. Prior to commencing work cycle m, the uncertainty Ūikm is compared
with Ū_ik^{m−1}. If it is higher, the amount of incremental rework that is required by task i due to
the increase in uncertainty of information received from task k is calculated as follows:

[WD_i]_k = \left(WT_{ti}^{m} - WT_{ti}^{p}\right)\frac{NC_{ki}}{\gamma_i},          (5)

where WT_ti^p is the amount of work done by task i until the beginning of a previous work cycle p such that the value of ε̄_ik^p is greater than Ū_ik^m. This is further illustrated in Figure 3.

[Figure 3 plots the ratio of uncertainty to initial uncertainty against task state, marking Ū_ik^{m−1}, Ū_ik^{m}, ε̄_ik^{p}, WT_ti^{m}, and WT_ti^{p}.]
Figure 3. Illustration of the variables used in the calculation of rework in Equations (4) and (5).
If there is an increase in uncertainty in information received from other tasks, the total
amount of rework needed to be done at the start of work cycle m is the sum of the incremental
rework from Equation (5) for each k. Thus, a new value of WDi is calculated at the beginning
of each work cycle as

WD_i = \sum_{k=1,\, k \neq i}^{NPT} [WD_i]_k + WD_i^{\mathrm{old}}.          (6)

The total amount of rework generated corresponds to the weighted contribution of each
task whose output information has increased in uncertainty. The changes to all of the WDi
during a simulation are accumulated and recorded as an output statistic called churn. The
term churn has been defined in the context of engineering design as rework due to redoing
a task when making an informal incremental change (Bhuiyan et al. 2004) or as a scenario
where the total number of problems being solved (or progress being made) does not reduce
(increase) monotonically as the project evolves over time (Yassine et al. 2003). In the present
paper, churn is generated in a task when the uncertainty of interim input information is
not monotonically decreasing and this occurs because of the sequence of events affecting
the information received by dependent tasks. Churn generated because of a decrease in the
precision of input information implies that some of what has already been done by the task
must be redone. Through the relationship between the task’s state of progress and its work
fraction (Equation (1)), the occurrence of this type of rework in the model leads to an increase
in the uncertainty of the subsequent information generated by a task and thus to emergent
rework in other dependent tasks (secondary rework cycles). Note that churn is quite different
from design version rework, which is generated when there is an unsuccessful evaluation of
intermediate outcomes during a progressive process; this is discussed in item (10). (An illustrative code sketch of the rework calculation in Equations (4)–(6) is given after the list of processing steps below.)
(9) A task is completed when it is stopped because the deadline for that task (an input to the
model for each task) has been reached or it has completed its work and input information
requirements, that is, WTti = WDi and Iti = γi . A phase is completed when all the tasks in
the phase are completed.
(10) Once all the tasks in a phase are completed, the possibility of design version rework is explored
according to the quality achieved in the previous phase. This is simulated with a series of
Bernoulli trials, where each probability is governed by the minimum of


(a) the normalised certainty of information in each task at the end of the phase,
(b) the percentage of completion of information exchange of each task,
(c) the percentage of completion of the work of each task, and
(d) the percentage of completion of system-level oversight by the integrator task.
(11) Design version rework requires less effort each time it is repeated. This is because of first-order
learning (von Hippel and Tyre 1995) and because of a reduction in epistemic uncertainty.
The effects of both of these are included in the CoPM with the use of the learning curve
model (Adler and Clark 1991) for the reduction in technical work and the reduction in
the epistemic uncertainty according to the progress made in previous design versions (Suss
2011).
The next event in the processing thread of a task entity occurs as a result of the logic in
the algorithm. For example, whichever of the following events takes place first triggers the
subsequent events in this processing thread.
(12) The span time of task i exceeds its deadline. Work in task i stops, that is, the task entity
‘releases’ the resource. This processing thread continues through a series of task end events.
(13) The amount of work done in a task exceeds the starve condition where insufficient input
information has been received to continue the technical work by the task. This triggers the
suspend work event. The task entity waits for subsequent events before continuing to process
anything further.
(14) The interim state variable DTi in the task exceeds the maximum work cycle time (input
variable Δti ; see item (4)). This triggers the suspend work event and DTi is set to zero. The
task entity waits in a queue, while the resource representing the associated development team
processes the waiting information units.
Each unit of information associated with task i follows a unique processing thread under conditions imposed on it during the simulation, summarised as follows:
(15) creation initiated in a task when sufficient work has been done (step (3)) and assignment of
‘addressed to’ attributes,
(16) assignment of attributes for epistemic uncertainty (ε) and aleatory uncertainty (ϕ) (from
Equations (2) and (3) based on the state of progress of a task at that instant, from Equation
(1)),
(17) queue and seize the resource associated with task i for the prepare subtask, pause for the time
representing the prepare subtask operation, release the resource associated with task i,
(18) pause for the time representing the random transmission delay (normally distributed about an
input mean value),
(19) queue and seize the integrator resource, pause for the time representing the integrator operation
(normally distributed about an input mean value), release the integrator resource,
(20) pause for the time representing the random transmission delay (normally distributed about an
input mean value),
(21) wait for the time representing the random latency delay (normally distributed about an input
mean value),
(22) queue and seize the resource assigned to the task associated with the ‘addressed to’ attribute,
pause for the time representing the read subtask operation in the addressee task, release the
resource,
(23) trigger the update of state variables for uncertainty of input information in the receiving task
j and input information Itj ,
(24) end the processing thread.
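
As an illustration of rules (5) to (8), the sketch below recomputes the required work WDi of task i at the start of work cycle m from the uncertainty attributes of information received since the previous cycle (Equations (4) to (6)). The data structures (per-task, per-cycle averages) and the function name are assumptions made for the example; in the CoPM these updates occur within a discrete event simulation rather than the plain function shown here.

```python
def rework_update(WD_i, WT_cycles, U_bar, eps_bar, NC, gamma_i, i, m):
    """Rework added to task i at the start of work cycle m (Equations (4)-(6)).

    WT_cycles[q] : work done by task i up to the start of cycle q
    U_bar[k][q]  : average total input uncertainty of units received from task k
                   between cycles q-1 and q (Equation (4))
    eps_bar[k][q]: average epistemic component of those units
    NC[k][i]     : number of units task k must send to task i
    gamma_i      : total number of units task i must read from all other tasks
    """
    added = 0.0
    for k in U_bar:
        if k == i or m == 0:
            continue
        if U_bar[k][m] > U_bar[k][m - 1]:   # input uncertainty has increased
            # A previous cycle p whose average epistemic level from k exceeded the
            # new total uncertainty (one reading of the condition in Equation (5));
            # work done since cycle p is treated as rework.
            p = next((q for q in range(m) if eps_bar[k][q] > U_bar[k][m]), m)
            added += (WT_cycles[m] - WT_cycles[p]) * NC[k][i] / gamma_i
    # The added amount is what the model accumulates as churn (Equation (6)).
    return WD_i + added
```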

Although we assume that information develops uniformly with progress made during the technical
work of the task, information is not communicated as soon as it is developed, but rather it is
prepared for communication and transmitted periodically as one would expect in practice. In the
model, therefore, the information units that are created wait until an event triggers their release
to the prepare queue (step (17)). This occurs each time the ratio WTti /WDi reaches an integer
multiple of the modelling input CIi which defines the communication interval as a fraction of
the nominal span time of task i. Thus, when the following condition is true, the kth batch of
information units is released to the prepare queue of task i:

\frac{WT_{ti}}{WD_i} \geq k\, CI_i \quad \text{for all } k = 1, 2, 3, \ldots, \frac{1}{CI_i}.          (7)
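
A minimal sketch of this release rule is given below; the function and variable names are illustrative assumptions.

```python
def new_batches_to_release(WT, WD, CI, already_released):
    """Number of new batches that condition (7) allows a task to send to its
    prepare queue: the work fraction WT/WD has passed one or more further
    integer multiples of the communication interval CI."""
    k_reached = int((WT / WD) // CI)   # largest k with WT/WD >= k * CI
    k_max = int(round(1.0 / CI))       # k runs from 1 to 1/CI
    return max(0, min(k_reached, k_max) - already_released)

# Example: CI = 0.1 and the work fraction has just reached 0.34 with two
# batches released so far, so one further batch is due.
print(new_batches_to_release(WT=34.0, WD=100.0, CI=0.1, already_released=2))
```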

3.3. Model inputs

The following table summarises the inputs to the model:


α: The ratio of total effort in communication to total technical effort in the modelled scenario (introduced in Section 3.2)
ATD, STD: Mean value and standard deviation of the normal PDF for the transmission time of an information entity from the system-level integrator to the final addressee (step (20))
b, c: Vectors defining the Gompertz function values for the epistemic uncertainty for each task (Equation (2))
CI: Vector defining the nominal communication interval for each task (Equation (7))
DeltaT: Vector defining Δti (hours), the maximum work cycle time between which communications are attended to by each development team (introduced in step (4))
D: Matrix defining the dependency strength between each pair of tasks, with element values of 0–9
INTMAX: System-level integrator capacity (step (19))
IQ: Integrator queue time threshold in hours at which an additional integrator resource is added (step (19))
LF: Learning factor for design version rework
LAT: Matrix defining the mean value and standard deviation of the PDF for communication latency for each task (step (21))
MINT, SINT: Mean value and standard deviation of the normal PDF for the integrator process (step (19))
M: Vector defining the scaling factor for aleatory uncertainty for each task (Equation (3))
NPT: Number of tasks in each phase
OVF: Matrix defining the degree of overlap between each pair of tasks in a project (step (1))
PRD: Matrix defining the mean value and standard deviation of the normal PDF for the transmission time of an information entity from each task to the system-level integrator (step (18))
PR: Matrix defining the parameters of the triangular PDF for the prepare subtask for each task (step (17))
RD: Matrix defining the parameters of the triangular PDF for the read subtask for each task (step (22))
SCH: Vector defining the ratio of scheduled span time for each task to the maximum value of the work subtask PDF for that task (step (12))
STSD: Vector defining the standard deviation σi for the calculation of the starve condition for each task (step (2))
TP: Number of phases in the project
WK: Matrix defining the parameters of the triangular PDF for the work subtask for each task (Section 3.2)
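
For readers who wish to implement the model, the inputs above might be collected into a single configuration object. The sketch below is purely illustrative: the field names follow the table, but the grouping, types, and default values are assumptions rather than part of the CoPM.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CoPMInputs:
    """Illustrative container for the model inputs listed in Section 3.3."""
    alpha: float          # ratio of communication effort to technical effort
    D: np.ndarray         # dependency strengths between task pairs (0-9)
    OVF: np.ndarray       # overlap fractions between task pairs
    CI: np.ndarray        # nominal communication interval per task
    DeltaT: np.ndarray    # maximum work cycle time per task (hours)
    b: np.ndarray         # Gompertz shape parameters per task
    c: np.ndarray
    M: np.ndarray         # aleatory uncertainty scaling per task
    WK: np.ndarray        # triangular PDF parameters for the work subtask
    PR: np.ndarray        # triangular PDF parameters for the prepare subtask
    RD: np.ndarray        # triangular PDF parameters for the read subtask
    SCH: np.ndarray       # scheduled span time ratios per task
    STSD: np.ndarray      # starve condition standard deviations per task
    LAT: np.ndarray       # communication latency PDF parameters per task
    PRD: np.ndarray       # task-to-integrator transmission PDF parameters
    ATD: float = 0.0      # integrator-to-addressee transmission mean
    STD: float = 0.0      # ... and standard deviation
    MINT: float = 0.0     # integrator process time mean
    SINT: float = 0.0     # ... and standard deviation
    INTMAX: int = 1       # system-level integrator capacity
    IQ: float = 8.0       # integrator queue time threshold (hours)
    LF: float = 0.8       # learning factor for design version rework
    NPT: int = 5          # number of tasks per phase
    TP: int = 2           # number of phases
```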

3.4. Verification and validation of the model

As described in Section 2.6, we made efforts to verify and validate the CoPM. Validation was
carried out with animation, extreme condition tests, and internal and face validity techniques, and with
comparison to other models and sensitivity analyses (Sargent 2005). Verification was assured
with the use of programme modularity, dynamic testing, boundary analysis, execution tracing,
and monitoring (Law and Kelton 2000).

4. Model applications and discussion

In this section, we present results produced with the model for certain key cases that are of
interest in the PD modelling literature. The first of these are the effects of overlapping sequentially
dependent tasks for various rates of uncertainty reduction. Then, we examine situations where
there is reciprocal dependency between tasks. In each of these cases, we show the ability of the
model to estimate the effects of improvements in coordination for various rates of uncertainty
reduction. Finally, we use the model to show the influence of an agile coordination scheme to
improve span time in a PD process with high aleatory uncertainty and slowly reducing epistemic
uncertainty.

4.1. Overlapping sequentially dependent tasks with various scenarios of uncertainty

The DSMs for the case of five tasks with high sequential dependency are given in Table 1. The case presented is for a total average technical effort of 15,000 h per phase, two phases, and average read and prepare process times per unit of information of 4.5 h (an estimate for technical information processing). Output quantities are normalised by dividing by the total average effort.

Table 1. DSMs D (dependency strength) and NC (number of units of information to be communicated) for five tasks with high sequential dependency.

D    1   2   3   4   5
1    9   9   9   9   9
2    0   9   9   9   9
3    0   0   9   9   9
4    0   0   0   9   9
5    0   0   0   0   9

NC   1   2   3   4   5
1   56  56  56  56  56
2    0  56  56  56  56
3    0   0  56  56  56
4    0   0   0  56  56
5    0   0   0   0  56

[Figure 4 plots normalised effort against normalised span time, with arrows indicating increasing overlap along each curve and increasing aleatory uncertainty across curves.]
Figure 4. Effort versus span time for several profiles of uncertainty reduction for overlapping of five sequentially
dependent tasks.

The simulation results in Figure 4 show the performance of effort and span time for this case for
various profiles of uncertainty. The data points on each individual curve show simulation results
for zero, 50%, and full overlap conditions (step (1) in Section 3.2 gives details on modelling
of overlapping). The data points are joined by a smooth curve for clarity in describing results
of scenarios with the same parameters. In these cases, all tasks have identical uncertainty parameters, near-zero latency, no resource constraints, and optimal communication frequency.
For all cases shown, zero overlap yields the identical result with the longest span time and
the least effort. Increasing overlap reduces span time, but with diminishing benefits as epistemic
uncertainty reduces more slowly and when there are higher magnitudes of aleatory uncertainty.
Similarly, effort increases with overlap to a greater extent when epistemic uncertainty reduces
more slowly and when there are higher magnitudes of aleatory uncertainty.
Thus, we can see that for tasks with fast reduction of epistemic uncertainty, there is a substantial
reduction in span time when overlapping tasks. This is evident even when there is a large magnitude
of aleatory uncertainty and strong dependency between tasks, but this requires good coordination
in the form of optimal frequency of communication of interim information (CIi = 0.1) and low
delays in information flow due to latency or resource constraints. The influence of these factors
is further illustrated in the following two sections.
For tasks with slow reduction in epistemic uncertainty, overlapping tasks by more than 50% has
little additional benefit in span time and incurs a significant increase in effort when there are higher
amounts of aleatory uncertainty. The simulations show that the reason for the poor span time and
effort performance in these scenarios is the increasing amount of churn that is generated when
there is high overlap. When uncertainty reduces more slowly, there is more rework generated with
increasing overlap. Thus, the CoPM predicts that when there is greater uncertainty, tasks do not
make progress, but spend more time doing rework due to imprecise information. The additional
rework results in increased effort and increased span time. These results are similar to the results
observed in practice (Krishnan et al. 1997, Bhuiyan et al. 2004) and serve to validate our model,
but the features of the CoPM allow us to see the influence of better coordination on span time and
effort as we shall see in the following sections.

4.2. The effects of delays in information flow

In projects performed by many people in various locations, delays in simply getting information
to the attention of those who can make use of it can be a significant portion of the time required
to do the work itself. These delays, where information must travel through several layers of an
organisation, can cause unnecessary rework as defined in Section 3.1 with significant knock-
on effects on many tasks. These delays in information flow are referred to as communication
latency. Moreover, since engineers and designers are often occupied with several projects at the
same time, there is a delay before they turn their attention to following up on missing information
required to make progress in one of their tasks. Other delays can be caused by insufficient resource
capacity.
The CoPM was used to study the impact of communication latency, resource constraints, and
inattention of development teams to communication on system performance. The mean values of
latency (Section 3.2, step (21)), Δti (Section 3.2, steps (4) and (14)), and capacity of the integrator
resource (Section 3.2, step (19)) were varied individually and in combination for a PD process
of five reciprocally interdependent tasks (the DSM D in this case had all elements equal to 9).
Simulations evaluated the impact of these delays for various PD system structures and levels of
uncertainty. As shown in Figure 5, it was found that sources of delay in information flow combine
in a nonlinear way to significantly increase span time. Examining the simulations, we found that
each impediment increased the likelihood that the information getting to a dependent task was not
continuously reducing in uncertainty, and this generated churn as tasks operated with imprecise
information. Each additional source of delay further exacerbated the delays due to other sources,
and this led to a tipping point where the combination of delays was such that tasks could not
reduce their uncertainty sufficiently fast and cycles of design version rework ensued.
This phenomenon has been widely observed in several industries by other researchers (Mihm
et al. 2003, Yassine et al. 2003) and serves to further validate the model. Furthermore, the CoPM
illustrates the mechanisms and magnitudes of the effects of the impediments to information flow
and allows estimation of the benefits of better coordination to reduce them. This insight has
important managerial implications in that it shows how making an effort to reduce one or more
impediments to information flow has an outsized effect on project performance.

[Figure 5: normalised span time versus normalised delay parameter, with curves for inattention to communication by development teams, slow addition of critical resources, latency in communication, latency combined with slow addition of critical resources, and all three impediments combined.]
Figure 5. The nonlinear effects of impediments to information flow on span time.

4.3. The effects of communication interval with reciprocal dependency

With high reciprocal dependency, parallel execution with interim information exchange is an
important method of enabling dependent tasks to effectively continue with their work. The time-
liness with which interim information is exchanged allows each task to make progress towards
a successful design review at the end of each phase. However, exchanging uncertain interim
information too often leads to unnecessary rework as defined in Section 2.2, which impedes task
progress.
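The sketch below makes the role of the communication interval concrete (a simplification under our own assumptions, not the CoPM's task model; the linear uncertainty-reduction rule and all names are hypothetical): a task advances in small time steps and broadcasts interim information whenever a normalised interval CI has elapsed.

```python
def run_task(work_content, ci, dt=0.01, send=lambda t, u: None):
    """Illustrative task loop: progress accumulates in steps of dt, and interim
    information (here just the current uncertainty level) is sent every ci time
    units. Smaller ci means more frequent but less mature information."""
    t, progress, last_sent = 0.0, 0.0, 0.0
    while progress < work_content:
        t += dt
        progress += dt
        uncertainty = max(0.0, 1.0 - progress / work_content)  # simple linear reduction
        if t - last_sent >= ci:
            send(t, uncertainty)  # interim information released to dependent tasks
            last_sent = t
    return t

# Example: a task of unit work content communicating at the critical interval
run_task(1.0, ci=0.1, send=lambda t, u: print(f"t={t:.2f}, uncertainty={u:.2f}"))
```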
We examine the effect of the communication interval with which a task sends out interim infor-
mation to other tasks. Figure 6 shows the variation of normalised span time with five reciprocally
dependent tasks with normalised communication interval for several cases of uncertainty. The
DSM D for this case is identical to that reported in the previous section.
The results in Figure 6 show that with slowly reducing epistemic uncertainty, span time reduces
with smaller communication intervals until a minimum is reached at CIi approximately equal to
0.1 (the critical point). The reduction in span time up to the critical point due to more interim
communication is nearly 75%. The steepness and magnitude of the rise of span time as CIi reduces
below the critical point are strongly affected by the value of aleatory uncertainty, indicating how
the magnitude of the variation of aleatory uncertainty affects the span time when information is
communicated with high frequency. When CIi is above the critical value, a 10-fold increase in
aleatory uncertainty has little effect on the slope of increase in span time or its magnitude. With
more rapid reduction in epistemic uncertainty, there is a 35% increase in the span time over the
range of CIi . The minimum span time with more rapid reduction in uncertainty is 36% lower than
that with more slowly reducing uncertainty.
[Figure 6: normalised span time versus normalised communication interval (CI), with curves for slowly and rapidly reducing epistemic uncertainty at increasing levels of aleatory uncertainty.]
Figure 6. Span time with varying communication intervals for different cases of uncertainty.

Examination of the results for rework, starve time, and design version rework shows that the
behaviour of the CoPM is largely affected by the increase in starve time when CIi is higher than
the critical point. The starve time increase is a result of tasks not getting enough information to
allow them to progress in their work when the communication interval is too high. This causes
an insufficient reduction in uncertainty when the design review takes place and triggers design
version rework cycles. Increasing amounts of design version rework cause increases in effort and
span time when CIi is above the critical point. When CIi is below the critical point, increased span
time is caused by churn from too frequent communication of interim information that is subject
to aleatory uncertainty.
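The mechanism described above can be summarised in a small sketch (our own simplification; the threshold and reduction rate are hypothetical, not CoPM parameters): a phase is repeated as design version rework until the residual epistemic uncertainty at the design review is low enough to pass.

```python
def run_phase_with_reviews(initial_uncertainty, reduction_per_pass, threshold=0.2, max_cycles=20):
    """Sketch of design version rework cycles: each pass through the phase
    reduces epistemic uncertainty by a fixed fraction; the design review passes
    only when the residual uncertainty falls below the threshold. Extra cycles
    add to both effort and span time."""
    uncertainty, cycles = initial_uncertainty, 0
    while uncertainty > threshold and cycles < max_cycles:
        uncertainty *= (1.0 - reduction_per_pass)  # work done during one design version
        cycles += 1
    return cycles, uncertainty

# Example: slow uncertainty reduction needs more rework cycles than rapid reduction
print(run_phase_with_reviews(1.0, reduction_per_pass=0.3))
print(run_phase_with_reviews(1.0, reduction_per_pass=0.7))
```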

4.4. Adapting agile PD methods to non-software PD

In this section, we model a coordination scheme called ‘scrum’ that is part of agile PD methods
(Cockburn 2006). In the scrum scheme, the development team is a cross-functional group which
does the analysis, design, implementation, testing, etc. and which is required to create a deliverable
product increment in a short period of time (typically 1 month). During this period of time,
called a ‘sprint’, the team is required to produce an entire, tested version of the software that
completely answers a planned set of requirements. Each successive sprint adds further requirements
so that, at the end of the project, the software meets the complete list of requirements for the
final product. The sprints are characterised by intense communication within the self-managing
development team (typically co-located) and an ironclad commitment to achieving the agreed-to
requirements within the allotted time frame, which cannot be changed during a sprint.
This PD method has been found to be effective in significantly reducing software development
time.
Although this scheme is feasible in software development, it cannot be literally replicated in
mechanical design, where physical parts must be designed, materials procured, and a manufacturing
process carried out in order to produce a complete version of the product that can be tested.
However, in an analogy to the scrum concept, we can consider the goal of a sprint in mechanical
design as the solution to a series of well-defined design problems that provides the information
required to make a key decision in a PD project. Each successive sprint then provides further
information to an additional series of design problems until the final product design is achieved.
A key element here is that the tasks involved in each phase collectively and completely solve a
specific set of design problems that can be evaluated during a design review. Since development
of a complex product is essentially a series of activities providing information that allows key
design decisions to be made, the design review at the end of each of these short phases or ‘sprints’
formalises the point at which these decisions are made. The tasks in each phase leading up to a
design review are those that generate the information required to make these key decisions. As in
the sprint method for software PD, the work in each phase must be done with intense interaction
between all participants such that the design review is successful and rework of a phase with the
same design requirements is not necessary. In practice, this is facilitated by smaller sized pieces
of work or phases.

Thus in our model, a simulated project is divided into a series of short phases, analogous
to sprints with a design review at the end of each phase. To model the co-location and intense
communication between the various tasks, we relieved the system-level integrator of its role as
the scrutiniser of each information item and instead included it as a participant in the co-located
group of development teams. All delays due to latency of information exchange were removed.
For comparison, we performed simulations with the same technical work content organised into
two phases and five tasks, with an integrator function overseeing all information communicated
between the tasks and with communication latency in getting information from one development
team to the other.
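For clarity, the two simulated set-ups can be summarised as configuration records (field names, the number of scrum phases, and the latency values are our own illustrative assumptions, not the CoPM's actual input format):

```python
from dataclasses import dataclass

@dataclass
class ScenarioConfig:
    """Illustrative summary of the two scenarios compared in this section."""
    n_phases: int                   # short phases ('sprints'), each ending in a design review
    integrator_reviews_items: bool  # integrator scrutinises every information item
    mean_latency: float             # mean communication latency between development teams

standard = ScenarioConfig(n_phases=2, integrator_reviews_items=True, mean_latency=0.05)
scrum = ScenarioConfig(n_phases=6, integrator_reviews_items=False, mean_latency=0.0)
```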
Figure 7 shows that with slow reduction in epistemic uncertainty, span time is significantly
lower with the scrum method. When the magnitude of random uncertainty increases from 0
to 0.9, the span time remains approximately constant in the scrum scenarios, but increases by
approximately 25% in the standard scenarios. This result is primarily due to lower churn when
using the scrum method and the mitigation of the effects of increases in aleatory uncertainty on
churn. When epistemic uncertainty reduces more rapidly, both the standard and scrum methods
are insensitive to increases in the magnitude of aleatory uncertainty.

[Figure 7: normalised span time versus magnitude of aleatory uncertainty, comparing the standard and scrum methods under slow and rapid reduction in epistemic uncertainty.]
Figure 7. Comparison of span time in scrum and standard PD for several cases of uncertainty.
In scrum, short sprints with more design reviews provide opportunities to efficiently judge the
completed portion of work accomplished in each sprint and to provide feedback, keeping the
work from straying too far off track. In interviews with senior PD project managers in large PD
aerospace projects, we found that frequent design reviews are sometimes used at the discretion
of the project director rather than as a standard operating procedure. In the words of a veteran PD
project director we interviewed, ‘with frequent interim design reviews we were able to keep the
work on track … stay “out of the ditch” and keep the work as close as possible to the “white line”
in the middle of the road'. Thus, the scrum method works by allowing intense coordination
among the people doing the work, who are the ones best placed to coordinate it, while giving
managers the opportunity to keep the work on track and to refine and clarify requirements as
the project progresses.

5. Managerial insights and conclusion

The explicit modelling of information flow taking place in a PD process enabled the CoPM to
capture the dynamic complexity of projects with interdependent tasks. This was accomplished
through the explicit and detailed modelling of information exchange, the linkage of information
exchange to the work accomplished by each task, the deployment of resources, and the techniques
used to manage the PD process.
The model differentiated between unnecessary rework that develops due to a lack of coordi-
nation of work in interdependent tasks (churn) and the iterative refinement of tasks that occurs
in engineering design as information is communicated from a task to a dependent task. This
iterative refinement was incorporated in the information exchange between each pair of tasks
where each group of interdependent tasks had to exchange information to carry out the refine-
ment of the mutual work and where information was increasingly updated as the process was
carried out. The iterative refinement was considered to have a positive influence on the quality of
the work, and trade-offs were examined between tightening task deadlines and the potential for
design version rework. However, unnecessary rework came about in the model from information
exchange in combination with changing uncertainty. In simulating information exchange as it
encountered impediments from its preparation for communication to its utilisation by develop-
ment teams performing dependent tasks, the model was able to tie aspects of the structure and
management of the process to overall span time and effort.
The behaviour predicted by the CoPM for span time and effort was supported by the behaviour
observed in practice and predicted by other models, but the work here went further to determine the
underlying mechanisms for the observed performance and expanded the range of configurations
and conditions that were studied. Moreover, the CoPM is the first model that can evaluate changes
to PD process performance due to improvements in information flow brought about by better
coordination. Other models are limited by their a priori assumptions about the amount of rework
from empirical data in particular scenarios (Bhuiyan et al. 2004), linear theory (Yassine et al.
2003), or assumed probability distributions of rework (Mihm et al. 2003).
Simulations of PD processes with the CoPM allowed detailed examination of the effects of
process structure, critical resource management, communication policies, and uncertainty on
project span time, effort, and rework. Results showed that significant reduction in project span
time and effort can be achieved by applying the following methods with a knowledge of the nature
of task interdependencies:
a. Overlap sequentially dependent tasks when there is sufficiently frequent, interim information
exchange to the extent warranted by rates of reduction of epistemic and aleatory uncertainty.
b. Structure projects such that the size of interdependent tasks is similar. All tasks wait for
information generated by tasks that take a longer time and thus are held to the rate of progress
of the slowest task.
c. Provide sufficient resource capacity early enough by anticipating workload requirements.
Insufficient resource capacity in critical tasks impedes information flow and causes other
tasks to be starved or to receive information out of synchronisation and thus perform more
rework.
d. Implement ‘scrum’ methods where there is high uncertainty to enable intense coordination
and keep PD projects on track. This reduces latency delays of information exchange most
effectively, forces synchronisation of information more frequently, and allows project leaders
to review the work done in each sprint, to maintain the influence of the overall organisation,
and to provide guidance.
e. Adopt policies and implement systems that reduce delays and effort in the communication of
information between dependent tasks.

Overall, simulation of complex PD processes can be used to help plan and manage actual PD
projects by developing and testing guidelines for improving information flow. Importantly, anal-
yses using the CoPM can provide a deeper understanding of the mechanisms that drive PD
performance.
Aspects of teamwork, culture, and trust that have been found to be important in PD (Dayan and
Benedetto 2010) are not presently considered in our analysis. However, inasmuch as research in
these areas points to behaviour that can be modelled with probability distributions, these aspects
of the PD process can be included in system models of the type proposed here as part of future
research. Additionally, the model does not consider rework that can come about as a result of the
failure of development teams to achieve quality even when they possess the input information
that they require early enough. This assumption is reasonable for comparing coordination
schemes when factors such as average development team quality and the personal performance
of individuals are equal.
development team must be involved in the execution of each subtask (read, prepare, and work)
because of the sharing of knowledge required for each subtask.

Notes

1. Process maturity is defined in the CMMI (capability maturity model integration) by the Software Engineering
Institute of Carnegie Mellon University, http://www.sei.cmu.edu/cmmi/research/.
2. The inverse PDF, also known as the inverse probability integral transform or inverse transformation method, is a
basic method for generating numbers at random from any probability distribution (Devroye 1986); a minimal sketch is given after these notes.
3. Recall from Section 2.2 that design iteration implies rework or refinement of activities to account for changes in
their inputs.
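As a minimal sketch of the inverse transformation method mentioned in note 2 (our own example; the exponential distribution is chosen only for illustration):

```python
import math
import random

def sample_exponential(mean):
    """Inverse transformation method: draw u ~ Uniform(0, 1) and return
    F^{-1}(u), where F(x) = 1 - exp(-x / mean) is the exponential CDF."""
    u = random.random()
    return -mean * math.log(1.0 - u)
```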

References

Adler, P.S. and Clark, K.B., 1991. Behind the learning curve: a sketch of the learning process. Management Science, 37
(3), 267–281.
Allen, T.J., 2007. Architecture and communication among product development engineers. California Management
Review, 49 (2), 23–41.
Bhuiyan, F., 2001. Dynamic models of concurrent engineering processes and performance. Thesis (PhD). McGill
University, Montreal, Canada.
Bhuiyan, N., Gerwin, D., and Thomson, V., 2004. Simulation of the new product development process for performance
improvement. Management Science, 50 (12), 1690–1703.
Browning, T.R., 1998. Modeling and analyzing cost, schedule, and performance in complex system product development.
Thesis (PhD). Massachusetts Institute of Technology, Cambridge, MA.
Browning, T.R., 1999. Sources of schedule risk in complex system development. Systems Engineering, 2 (3), 129–142.
Browning, T.R. and Eppinger, S.D., 2002. Modeling impacts of process architecture on cost and schedule risk in product
development. IEEE Transactions on Engineering Management, 49 (4), 428–442.
Browning, T.R. and Ramasesh, R.V., 2007. A survey of activity network-based process models for managing product
development projects. Production and Operations Management, 16 (2), 217–240.
Browning, T.R., et al., 2002. Adding value in product development by creating information and reducing risk. IEEE
Transactions on Engineering Management, 49 (4), 443–458.
Cantamessa, M., 1997. Agent-based modeling and management of manufacturing systems. Computers in Industry, 34
(2), 173–186.
Carley, K.M. and Lin, Z., 1997. A theoretical study of organizational performance under information distortion.
Management Science, 43 (7), 976–997.
Carrascosa, M., Eppinger, S.D., and Whitney, D.E., 1998. Using the design structure matrix to estimate product develop-
ment time. Proceedings of the ASME Design Engineering Technical Conferences (Design Automation Conference),
Atlanta, GA.
Carroll, G.R. and Harrison, J.R., 1994. On the historical efficiency of competition between organizational populations.
American Journal of Sociology, 100 (3), 720–749.
Chalupnik, M.J., Wynn, D.C., and Clarkson, P.J., 2009. Approaches to mitigate the impact of uncertainty in development
processes. International Conference on Engineering Design, Stanford, CA.
Cho, S.-H. and Eppinger, S.D., 2005. A simulation-based process model for managing complex design projects. IEEE
Transactions on Engineering Management, 52 (3), 316–328.
Clark, K., Chew, B., and Fujimoto, T., 1987. Product development in the world auto industry. Brookings Papers on
Economic Activity, 1987 (3), 729–771.
Clarkson, P.J. and Hamilton, J.R., 2000. ‘Signposting’, a parameter-driven task-based model of the design process.
Research in Engineering Design, 12 (1), 18–38.
Cockburn, A., 2006. Agile software development: the cooperative game. Boston: Addison-Wesley.
Danilovic, M. and Browning, T.R., 2007. Managing complex product development projects with design structure matrices
and domain mapping matrices. International Journal of Project Management, 25 (3), 300–314.
Dayan, M. and Benedetto, C.A.D., 2010. The impact of structural and contextual factors on trust formation in product
development teams. Industrial Marketing Management, 39 (4), 691–703.
DeMeyer, A., Loch, C.H., and Pich, M.T., 2002. Managing project uncertainty: from variation to chaos. MIT Sloan
Management Review, 43 (2), 60–67.
Devroye, L., 1986. Non-uniform random variate generation. New York: Springer-Verlag.
Eckert, C., Clarkson, J., and Stacey, M., 2001. Information flow in engineering companies: problems and their causes.
International Conference on Engineering Design, Glasgow, UK.
Eppinger, S.D., Nukala, M.V., and Whitney, D.E., 1997. Generalised models of design iteration using signal flow graphs.
Research in Engineering Design, 9 (2), 112–123.
Fredriksson, B., 1994. Systems engineering – a holistic approach to product development. Griffin, 94, 95–105.
Galbraith, J.R., 1977. Organization design. Reading, MA: Addison-Wesley.
Ha, A.Y. and Porteus, E.L., 1995. Optimal timing of reviews in concurrent design for manufacturability. Management
Science, 41 (9), 1431–1447.
Harrison, J.R., et al., 2007. Simulation modeling in organizational and management research. The Academy of Management
Review, 32 (4), 1229–1245.
Harter, D.E., Krishnan, M.S., and Slaughter, S.A., 2000. Effects of process maturity on quality, cycle time, and effort in
software product development. Management Science, 46 (4), 451–466.
von Hippel, E. and Tyre, M.J., 1995. How learning by doing is done: problem identification in novel process equipment.
Research Policy, 24 (1), 1–12.
Hykin, D.H.W. and Laming, L.C., 1975. Design case histories: report of a field study of design in the United Kingdom
engineering industry. Proceedings of the Institution of Mechanical Engineers 1847–1982, 189 (1975), 203–211.
Jin, Y. and Levitt, R.E., 1996. The virtual design team: a computational model of project organizations. Computational
and Mathematical Organization Theory, 2 (3), 171–195.
Jing, N.N. and Yang, C., 2009. The interrelationship among quality planning, knowledge process and new product devel-
opment performance. International Conference on Industrial Engineering and Engineering Management, Beijing,
China.
Kline, S.J., 1985. Innovation is not a linear process. Research Management, 28 (4), 36–45.
Krishnan, V., 1996. Managing the simultaneous execution of coupled phases in concurrent product development. IEEE
Transactions on Engineering Management, 43 (2), 210–217.
Krishnan, V., Eppinger, S.D., and Whitney, D.E., 1997. A model-based framework to overlap product development
activities. Management Science, 43 (4), 437–451.
Lant, T.K. and Mezias, S.J., 1992. An organizational learning model of convergence and reorientation. Organization
Science, 3 (1), 47–71.
Law, A.M. and Kelton, W.D., 2000. Simulation modeling and analysis. New York: McGraw-Hill.
Levardy, V. and Browning, T.R., 2009. An adaptive process model to support product development project management.
IEEE Transactions on Engineering Management, 56 (4), 600–620.
Lin, J., et al., 2008. A dynamic model for managing overlapped iterative product development. European Journal of
Operational Research, 185 (1), 378–392.
Maier, A.M., et al., 2008. Exploration of correlations between factors influencing communication in complex product
development. Concurrent Engineering-Research and Applications, 16 (1), 37–59.
Malone, T.W., et al., 1999. Tools for inventing organizations: toward a handbook of organizational processes. Management
Science, 45 (3), 425–443.
Mark, N., 2002. Cultural transmission, disproportionate prior exposure, and the evolution of cooperation. American
Sociological Review, 67 (3), 323–344.
Mihm, J., Loch, C.H., and Huchzermeier, A., 2003. Problem-solving oscillations in complex engineering projects.
Management Science, 49 (6), 733–750.
Nandakumar, P., Datar, S.M., and Akella, R., 1993. Models for measuring and accounting for cost of conformance quality.
Management Science, 39 (1), 1–16.
Oberkampf, W.L., et al., 2004. Challenge problems: uncertainty in system response given uncertain parameters. Reliability
Engineering & System Safety, 85 (1–3), 11–19.
Oppenheim, B.W., 2004. Lean product development flow. Systems Engineering, 7 (4), 352–376.
Pahl, G. and Beitz, W., 1996. Engineering design: a systematic approach. London: Springer.
Peters, T., 1986. The mythology of innovation, a skunkworks tale. In: J. William Pfeiffer, ed. Strategic planning: selected
readings. San Diego, CA: University Associates, 485–500.
Port, O., 1989. Pssst! Want a secret for making superproducts? Business Week, 2 Oct, pp. 106–107.
Ritter, F.E. and Schooler, L.J., 2002. The learning curve. In: W. Kintsch, N. Smelser, and P. Baltes, eds. International
encyclopedia of the social and behavioural sciences. Amsterdam: Pergamon, 8602–8605.
Safoutin, M.J., 2003. A methodology for empirical measurement of iteration in engineering design processes. Thesis
(PhD). University of Washington, Seattle.
Sargent, R.G., 2005. Verification and validation of simulation models. Proceedings of the 37th Conference on Winter
Simulation, Orlando, FL.
Smith, R.P. and Eppinger, S.D., 1997. A predictive model of sequential iteration in engineering design. Management
Science, 43 (8), 1104–1120.
Susman, G.I., 1992. Integrating design and manufacturing for competitive advantage. New York: Oxford University Press.
Suss, S., 2011. Coordination in complex product development. Thesis (PhD). McGill University, Montreal, Canada.
Suss, S., Grebici, K., and Thomson, V., 2010. The effect of uncertainty on span time and effort within a complex design
process. In: P. Heisig, J. Clarkson, and S. Vajna, eds. Modelling and management of engineering processes. London:
Springer, 77–88.
Thunnissen, D.P., 2004. Propagating and mitigating uncertainty in the design of complex multidisciplinary systems. Thesis
(PhD). California Institute of Technology, Pasadena, CA.
Wheelwright, S. and Clark, K., 1992. Revolutionizing product development. New York: The Free Press.
Whitney, D.E., 1990. Designing the design process. Research in Engineering Design, 2 (1), 3–13.
Wood, K.L., Antonsson, E.K., and Beck, J.L., 1990. Representing imprecision in engineering design: comparing fuzzy
and probability calculus. Research in Engineering Design, 1 (3), 187–203.
Wynn, D.C., 2007. Model-based approaches to support process improvement in complex product development. Thesis
(PhD). Cambridge University, Cambridge.
Wynn, D.C., Grebici, K., and Clarkson, P.J., 2011. Modelling the evolution of uncertainty levels during design.
International Journal on Interactive Design and Manufacturing, 5 (3), 187–202.
Yassine, A.A. and Braha, D., 2003. Complex concurrent engineering and the design structure matrix method. Concurrent
Engineering: Research and Applications, 11 (3), 165–176.
Yassine, A.A., Falkenburg, D., and Chelst, K., 1999. Engineering design management: an information structure approach.
International Journal of Production Research, 37 (13), 2957–2975.
Yassine, A.A., et al., 2003. Information hiding in product development: the design churn effect. Research in Engineering
Design, 14 (3), 145–161.
