
SPECIAL FEATURE | SEPTEMBER 2017 | VOLUME 20 / ISSUE 3

Bringing Operational Perspectives into the Analysis of Engineered Resilient Systems

Valerie B. Sitterle, Valerie.Sitterle@gtri.gatech.edu; Erika L. Brimhall, Erika.Brimhall@gtri.gatech.edu; Dane F. Freeman, Dane.Freeman@gtri.gatech.edu; Santiago Balestrini-Robinson, Santiago.Balestrini@gtri.gatech.edu; Tommer R. Ender, Tommer.Ender@gtri.gatech.edu; and Simon R. Goerger, Simon.R.Goerger@erdc.dren.mil

Copyright © 2016 by Valerie B. Sitterle, Erika L. Brimhall, Dane F. Freeman, Santiago Balestrini-Robinson, Tommer R. Ender, and
Simon R. Goerger. Published and used by INCOSE with permission.
Presented at the 26th Annual INCOSE International Symposium (IS 2016), Edinburgh, Scotland, UK, 18-21 July 2016.

 ABSTRACT
Engineered Resilient Systems (ERS) is a Department of Defense (DoD) program focusing on the effective and efficient design
and development of complex engineered systems across their lifecycle. An important area of focus is the evaluation of early-stage
design alternatives in terms of their modeled operational performance and characteristics. The work in this paper ties together
differentiated operational needs with requirements specification and maturation of previous analytical constructs toward a more
operationally relevant viewpoint. We expand on the concept of Broad Utility as a high-level aggregated measure of robustness of
fielded system capabilities with respect to operational requirements. The relation to requirements is more explicit, and systems
failing to achieve threshold requirements are penalized. We discuss the impact of this approach and how it offers a foundation from
which to more fully explore sensitivity to Pre-Milestone A requirements.

ERS FOUNDATIONS AND RELATION TO TRADESPACE ANALYSIS

A large body of work currently exists concerning developing decision support methods and a tradespace toolset framework architecture in support of the Department of Defense's (DoD) Science & Technology priority for Engineered Resilient Systems (ERS). ERS is a U.S. DoD program focusing on effective, efficient design and development of complex engineered systems across their lifecycles and is actively being implemented across a wide variety of engineering concepts, techniques, and design tools. Through ERS, the DoD seeks a transformation in defense acquisition processes via systems engineering throughout a system's lifecycle that will enable the DoD to better respond to an environment characterized by rapidly changing threats, tactics, missions, and technologies. ERS calls for adaptable designs with diverse systems models that can easily be modified and reused, the ability to iterate designs quickly, and a clear linkage to mission needs. An important area of focus is the evaluation of early-stage design alternatives regarding their modeled operational performance and characteristics. This includes research and development of methodologies to conduct Analysis of Alternatives (AoA) relevant to evaluating different dimensions of resiliency for these systems.

Towards this end, tradespace exploration and analysis enable decision makers to discover and understand relationships across capabilities, gaps, and potential compromises that facilitate the achievement of system objectives. These objectives undergo expression through requirements or other metrics. To be effective, decision makers must have deep knowledge of the component elements of a system, which includes how these elements interact internally to the system and externally with the operational environment (Spero et al. 2014). This requires development and maturation of executable and scalable analytical constructs and processes that must be able to be implemented within the context of a larger workflow to guide tradespace exploration and evaluate ERS resiliency concepts.

Dr. Jeffery Holland defined the characteristics of a resilient system from an ERS perspective as i) trusted and effective in a wide range of contexts, ii) easily adapted to many others through reconfiguration and replacement, and iii) having a predictable degradation of function (Holland 2013). Goerger, Madni, & Eslinger (2014) matured this view to include the concept of "Broad Utility," a mission-focused perspective defined as the "ability to perform effectively in a wide range of operations across multiple potential alternative futures, despite experiencing disruptions." Conceptually, Broad Utility relates to concepts of robustness via performance across a wide range of operations and possible mission contexts.
Using the guiding principles of ERS as a foundation, this work adheres to a specific context of evaluation. Namely, we do not seek to address the myriad of dimensions that constitute resilience, but rather seek to mature the concept of Broad Utility concerning its implementation as a computationally executable analytical construct to embody a more operationally relevant perspective. This effort builds on previous work (Sitterle, Curry, and Ender 2014; Sitterle et al. 2015) while still seeking to promote scalable, implementable methods that are transparent, intuitive, rational, and quantifiably traceable. We discuss how distinct operational scenarios may be defined to support a more operationally relevant context of analytical exploration in an executable environment, how these scenarios relate to requirements specified by the stakeholders, and how together these concepts can lead to an expression of Broad Utility that goes beyond the commonly used framework of additive multi-attribute valuation. Broad Utility as it is matured here is simply a starting point for systems engineering resiliency of engineered systems. It is intended to be composable with other analytical constructs and not as the single basis for evaluation. We aim to promote a better synergy between the design analysis and requirements generation processes, which are often not well integrated. And, in doing so, we offer more insight into requirements maturation for systems under development and how this relates to operational performance expectations.

THE LARGER SYSTEMS ENGINEERING PROCESS AND GENERATION OF A TRADESPACE

In Acquisitions, a Concept of Operations (CONOPS) is used to "examine current and new and proposed capabilities… and describes how a system will be used from the viewpoints of its stakeholders" (AcqNotes 2015). A CONOPS is commonly expressed as a verbal or graphical statement of a commander's assumptions or intent concerning an operation or series thereof. It provides a bridge between vaguely expressed capabilities needed from a system and specific technical requirements needed to evaluate the system and enable it to be successful. These requirements "govern what, how well, and under what conditions a product will achieve a given purpose" (ANSI/EIA 2003), inherently describing capabilities necessary in specified mission contexts or for future operations. Capabilities are defined as the ability to execute specific courses of action. Requirements may be classified according to type (Defense Acquisition University 2011, Pflanz et al. 2012). Key Performance Parameters (KPPs) are key system capabilities that must be met for a system to meet its operational goals. Key System Attributes (KSAs) are capabilities considered crucial in support of achieving a balanced solution to a KPP or other key performance attribute deemed necessary by the stakeholder. Other Performance Parameters (OPPs, also called Tier III) are desirable but not critical toward providing required capabilities for mission success. KPPs, KSAs, and OPPs/Tier IIIs are considered "must," "should," and "could" haves respectively. It follows that KSAs are below KPPs in priority, while OPPs/Tier IIIs are below KSAs. Requirements such as KPPs and KSAs are usually specified as quantitative metrics containing those attributes or characteristics of a system considered critical or essential to the development of an effective defense capability. They are expressed as having threshold and objective levels, corresponding to the minimum acceptable and desired values (Defense Acquisition University 2011). This structure is directly amenable to quantitative evaluation and offers a basis for external reference for valuation of individual requirements, as explained in the "Evaluating Broad Utility" section.

We illustrate the larger systems engineering process that relates the expression of a CONOPS with the subsequent derivation of requirements and leads to the identification of early-stage designs and their evaluation by the idealized workflow shown in Figure 1. The figure highlights two distinct branches leading to a system description model: Measures of Effectiveness (MOEs)-to-Architecture Identification (vertically on the left), and Measures of Performance (MOPs)-to-System Design (horizontally at the bottom). In the former, evaluation measures derive from the operational requirements and stakeholder expectations; KPPs and potential system design architectures are then identified. The latter branch leads to the specification of system design variables that will aid analysis of the various evaluation measures. Both branches are iterative processes not always performed collaboratively, but the generation of a tradespace requires a synthesis of these concepts.

[Figure 1. High-level systems engineering process identifying evaluation measures and design variables.]

AoAs happen in large part through exploration of a tradespace. A tradespace is defined as the complete enumeration of the system alternative design variables together with the set of program and system parameters, attributes, and characteristics required to satisfy performance attributes associated with each system alternative. It is the complete solution space. Once we create a tradespace, we can then explore it. A significant amount of work goes into creating the problem, potential design architectures, modeling and simulation (M&S) components that will map the design variables to output measures on which tradeoffs will be assessed, and how the overall problem specification maps to stated stakeholder requirements and distinct operational scenarios.

Figure 2 illustrates this secondary process whereby a tradespace may be generated in a computational environment, allowing designs to be evaluated based on modeled fielded performance and essentially reversing the flow of Figure 1. Once system design variables and relevant M&S components are identified and/or developed, then we can create a tradespace for evaluation of the MOPs, MOEs, and KPPs across the design alternatives.
[Figure 2. Development of a tradespace for analysis. System Design (Input Variables) and Operational Scenario (Input Variables) feed Transfer Functions (Model Constraints) — engineering, data, or operational simulation models — that produce Evaluation Measures (Output Parameters) such as MOEs, MOPs, and KPPs. NOTE: the relationship between System Design variables and Performance Measures may be reversed depending on the nature of the analysis and transfer functions. MOE — Measure of Effectiveness, MOP — Measure of Performance, KPP — Key Performance Parameter.]

A tradespace analysis should be capable of capturing multiple viewpoints and analytical goals. Different stakeholders may value different sets of evaluation measures (MOEs, MOPs, KPPs, etc.) or may value the same measures differently. Analyses may take many different forms: i) define outcome measures and explore system designs that meet them, ii) converge on what these outcome measures should be through an iterative process, or iii) identify system design properties from an exploration of evaluation measures. While what constitutes a MOE, MOP, and KPP beyond the descriptions provided in Figure 1 is outside the scope of this paper, more detailed discussion is provided by the US Department of Defense (2013).

DEFINING UNIQUE OPERATIONAL SCENARIOS

When generating a tradespace, the process must enable the definition and subsequent analysis of the operational environment and CONOPS parameters necessary to evaluate the MOEs, MOPs, and technical requirements expressed as KPPs, KSAs, and others. To define an operational scenario as depicted in Figure 2, one must specify physical environment factors (ambient temperature, road conditions, humidity, winds) as well as parameters directly relating to system use (crew carrying capacity, maximum speed up a specified grade) that will affect, and therefore be required to evaluate, system performance characteristics. The physical environment factors characterize the operational environment, and the use parameters correspond to CONOPS expectations. Delineating operational scenarios into these two parameter classes serves as intuitive scaffolding for modelers to specify attributes necessary to generate an operationally specific tradespace. Parts may be reused and expanded upon for creation of new operational scenario blocks. Like most aspects of model development, the attributes or measures the tradespace generation is intended to produce will drive what parameters to define in an operational scenario. For example, many defense systems are designed to operate in frigid, icy conditions as well as hot and humid conditions. The environment model blocks should capture precisely those parameters relevant to how the system under study will perform (vehicle acceleration, or internal climate control and air filtration). These model blocks may be dynamic simulations specific to certain scenarios.

Any number of operational scenarios may be defined in this way depending on the scope and needs of the analysis. Scenarios may be interested in the same output measures (cruise range) but impose different objective or threshold levels, or require operation in vastly different environmental profiles that, in turn, alter the measured performance. Similarly, operational scenarios may be interested in entirely different output measures regardless of operational environment, and impose requirements not present in other scenarios. The level of detail required by the present stage of analysis, the effort required to model and define these parameters, and the time and cost required to do so govern the complexity and extent of their definition.
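To make the two parameter classes concrete, the following minimal Python sketch (our own illustration, not part of ERS TRADESPACE) represents an operational scenario as separate environment and use parameter sets; the scenario names, parameter names, and values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class OperationalScenario:
    """An operational scenario split into the two parameter classes discussed
    above: physical environment factors (operational environment) and use
    parameters (CONOPS expectations)."""
    name: str
    environment: Dict[str, float] = field(default_factory=dict)  # e.g., ambient temperature, humidity
    use: Dict[str, float] = field(default_factory=dict)          # e.g., crew capacity, speed on grade

# Hypothetical scenarios; parameter names and values are illustrative only.
arctic_resupply = OperationalScenario(
    name="arctic_resupply",
    environment={"ambient_temp_C": -30.0, "road_condition_index": 0.3, "humidity_pct": 60.0},
    use={"crew_capacity": 4, "min_speed_on_5pct_grade_kph": 30.0},
)
desert_patrol = OperationalScenario(
    name="desert_patrol",
    environment={"ambient_temp_C": 45.0, "road_condition_index": 0.7, "humidity_pct": 10.0},
    use={"crew_capacity": 6, "min_speed_on_5pct_grade_kph": 40.0},
)
```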
Implementation. ERS TRADESPACE is an integrated toolset and software architecture development undertaken collaboratively by Georgia Tech Research Institute (GTRI) and the Army Engineer Research and Development Center (ERDC). ERS TRADESPACE includes various components that serve as user interfaces to create and execute analyses or analytical blocks, a combination of hosting engines, and models and linking engines that coordinate, structure, and integrate analyses from different software components. ERS TRADESPACE thereby supports an end-to-end capability linking requirements and CONOPS specification to specification of design alternatives, generation of a tradespace, and subsequent tradespace exploration and analysis (Balestrini et al. 2015a, Balestrini et al. 2015b). While the underpinnings of ERS TRADESPACE are beyond the scope of the present paper, it is the platform through which we have been investigating various degrees of modularity in implementation and how this may impact reusability, the backend data model, and the interface semantics necessary to facilitate subsequent analyses. The end goal is to use the description of the scenario and other building blocks to build an executable environment necessary to compute the required system metrics. Analyses that differentiate and integrate these scenario objects build from the backend data model specification. The methods described in the following sections, however, are toolset agnostic. They are described and written to be transparent and implementable in any executable environment.

EVALUATING BROAD UTILITY IN TERMS OF STAKEHOLDER VALUATION

Needs Contexts and Valuation from a Requirements Reference
A significant concern during early phases of acquisition, or during the Pre-Milestone A analysis of the DoD Acquisition process, is the resiliency of a system design across simultaneously competing or sequentially changing requirements on its performance attributes. A Needs Context, defined in previous work (Sitterle, Curry, & Ender 2014; Sitterle et al. 2015), is a scalable, applied methodology to capture certain resiliency dimensions related to how well a system performs its functions in the face of requirements perturbations. It builds from robustness as defined by Ryan, Jacques, & Colombi (2013) and the concept of Broad Utility advocated by Goerger et al. (2014), creating a requirements-based evaluation of the non-cost value of system design alternatives. Needs Contexts may be more completely described as characterizing "Robustness of Fielded System Capabilities and Capacity with respect to Operational Requirements." Contexts are defined based on flexible subsets of performance attributes relevant to the stakeholder(s) and ranking of those attributes within each.
Succinctly, an individual Needs Context specifies a subset of evaluation measures deemed critical to a stakeholder as the basis for analysis. The motivation is that choices must be made based on what is valued most by stakeholders, recognizing that some stakeholders may have a greater influence. Together, multiple Needs Contexts can be constructed to represent different viewpoints and can represent different or directly competing objectives for a system's performance:
■ Different stakeholders, each with different or competing priorities in parallel
■ Changes in requirements over time (future performance requirements differ in series)
■ Different mission profiles with performance objectives, whether in parallel or series.

Requirements Basis for Value Functions. Value of a given system attribute is scaled against objective and threshold requirement levels using a KPP concept to promote comparability across analyses. The attribute value functions limit all possible valuations to the range of 0 to 1 by assigning any levels below the threshold or above the objective equal to 0 or 1 respectively. A tradespace may or may not cover the entire range. Value of a system design alternative is then assessed using an additive multi-attribute value (MAV) model, synergistic with the concept of evaluating Broad Utility via the robustness descriptor presented above. Since each Needs Context may be defined using different attributes, and different valuations and preference weightings, Needs Contexts can produce a different value for each system design alternative k (SDk) within each Needs Context m:

    U_k = w_i\,v_i(Y_{ik}) + w_j\,v_j(Y_{jk}) + \dots + w_n\,v_n(Y_{nk}) = \sum_{i=1}^{n} w_i\,v_i(Y_{ik}) \;\Big|\; \text{Needs Context } m

Uk denotes the overall value of system design alternative k (SDk) for a given Needs Context, Yik represents a system attribute i for SDk, each vi is a value function expressing the relative value of the given system attribute level to a stakeholder, and wi are weights derived from preference rankings or other means. In keeping with traditional utility theory, overall system value is limited to the range of 0 to 1. Value functions are typically linear or exponential expressions but may be any monotonic function. Cost is a function of system design alternative characteristics, though it depends on other influences and variables as well. Utility and cost are therefore expressed as related dimensions, linked by an underlying SDk.
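As a minimal sketch of this valuation scheme, assuming a simple linear value function between the threshold and objective levels (the construct permits any monotonic form), the additive MAV measure could be computed as follows; the function and variable names are ours, not those of ERS TRADESPACE.

```python
def attribute_value(level, threshold, objective):
    """Linear value function referenced to requirement levels: 0 at or below
    threshold, 1 at or above objective (assumes objective > threshold and
    that higher attribute levels are preferred)."""
    if level <= threshold:
        return 0.0
    if level >= objective:
        return 1.0
    return (level - threshold) / (objective - threshold)

def additive_mav(levels, weights, thresholds, objectives):
    """U_k = sum_i w_i * v_i(Y_ik) for one design alternative within a Needs Context."""
    return sum(
        w * attribute_value(y, t, o)
        for y, w, t, o in zip(levels, weights, thresholds, objectives)
    )

# Five equally weighted attributes, anticipating the limitation discussed next:
weights, thresholds, objectives = [0.2] * 5, [0.0] * 5, [1.0] * 5
print(additive_mav([0.8] * 5, weights, thresholds, objectives))                   # 0.8
print(additive_mav([1.0, 1.0, 1.0, 1.0, 0.0], weights, thresholds, objectives))   # also 0.8
```

Both calls return the same overall value even though the second design fails a threshold outright, which is exactly the behavior the following subsection examines.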
Limitations of Additive Multi-Attribute Value Models. In the previously cited work, the Needs Context served as the basis from which an analyst could construct an overall valuation for each system design alternative from the perspective of the individual stakeholders. Though using a unique, requirements-based valuation construct, the overall valuation still relied on the commonly used additive multi-attribute value (additive MAV) model, also called the sum additive weight (SAW) method. This approach is scalable and intuitive, yet it does not adequately represent a more operationally focused perspective. For example, consider a set of five attributes each with equal weights (all wi = 0.2). A system design that exhibits valuations of each attribute to a level of 0.8 (all vi = 0.8) will produce the same measure of overall value, Uk = 0.8, as a design alternative where 4 of the 5 attributes meet the objective but one attribute fails to meet threshold (vi = [1, 1, 1, 1, 0]). Similarly, as the number of attributes in the measure increases, the impact of attributes failing to meet threshold decreases. This same effect occurs with the traditional attribute value scaling to the given tradespace. In an operational environment, a defense system failing to achieve a key requirement threshold is not equally acceptable.

MATURING TO AN OPERATIONAL NEEDS CONTEXT

The challenge is to mature the overall valuation measure from a traditional additive MAV to a construct more representative of the operational viewpoint. The Needs Context is already well suited to represent disparate operational scenarios that may exist, and by definition, it captures those measures of performance deemed critical from a given operational perspective. However, failure to meet one or more critical requirement thresholds should be either readily apparent or carry some penalty that prevents the alternative from possessing a valuation on the same level as an alternative meeting all thresholds. Since there are analyses that may need data points representing designs that do well in many measures but fail in one or two to persist, we will not force the valuation for these designs to zero.

Penalty Function. Considering the operational perspective, we sought to modify the additive MAV model to include an "operational penalty" for alternatives with any one or more individual attribute value functions evaluating to zero. The additive MAV model value is the maximum valuation an alternative could achieve (as it contains no penalty), while the overall valuation even with every attribute failing to meet threshold preserves the understood lower limit of zero. An exponential function of the value was used to generate a penalty effectively equal to the weight of any attribute with a value of zero and no penalty otherwise. A direct comparison of the traditional additive MAV model and the model with an operational penalty for a given system design alternative is as follows:

    U_{+MAV,k} = \sum_{i=1}^{n} w_i\,v_i(Y_{ik}), \qquad U_{OpPenalty,k} = U_{+MAV,k}\left[1 - \sum_{i=1}^{n} w_i \exp\!\left(-\theta\,v_i(Y_{ik})\right)\right]

where n represents the number of attributes included in the value model, wi are the weights of each attribute, and vi are the values of the attributes for the given design alternative as obtained from the individual attribute value functions as before. The exponential penalty function produces UOpPenalty = U+MAV when all requirements meet or exceed threshold levels, and a penalty effectively equal to the weight of the individual attribute if its level is below threshold such that vi = 0. θ is chosen to be sufficiently large as to ensure this outcome for even the smallest feasible value. θ = 1000, for example, reduces the exponential term to 4.54E-5 even if an individual valuation is vi = 0.01.
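A short sketch of the operational penalty, reusing the style of attribute valuations from the previous sketch and θ = 1000 as in the text; again, the names are illustrative only.

```python
import math

def operational_penalty_utility(values, weights, theta=1000.0):
    """U_OpPenalty,k = U_+MAV,k * [1 - sum_i w_i * exp(-theta * v_i)].
    `values` are the already-computed attribute valuations v_i(Y_ik) in [0, 1]."""
    u_mav = sum(w * v for w, v in zip(weights, values))
    penalty = sum(w * math.exp(-theta * v) for w, v in zip(weights, values))
    return u_mav * (1.0 - penalty)

weights = [0.2] * 5
print(operational_penalty_utility([0.8] * 5, weights))        # ~0.8: no attribute below threshold, no penalty
print(operational_penalty_utility([1, 1, 1, 1, 0], weights))  # ~0.64: penalized by the one failed threshold
```

The second call reproduces the five-attribute example above: the design meeting four objectives but failing one threshold drops from 0.8 to roughly 0.64 rather than retaining the same score as a design meeting every threshold.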
Surrogate Weighting. Methods used in ERS TRADESPACE and all methods described here are agnostic to how ranks, or even the weights, are derived. Weights in an additive MAV model may originate from any number of methods including subject matter expert (SME) opinions, historical priorities, guided stakeholder discussions, and pairwise comparisons. Another approach that may be particularly useful for analysis of early-stage designs is to use surrogate weights based on the attribute rankings. If ranks are inconsequential or unknown across our subset of critical performance measures, equal weights are an appropriate starting point. If the ranks are known, and the preference order holds, different weighting methods may better reflect how the ranks are valued, such as linearly, exponentially, and more (Roszkowska 2013). Among these, rank order centroid (ROC) surrogate weights are one of the most robust options when there is some uncertainty in the weights, but the rank preferences are clear. ROC weights are computed from the vertices of the simplex where w1 ≥ w2 ≥ … ≥ wn ≥ 0. Weights are the coordinates of the centroid for the simplex, found by averaging the coordinates of the defining vertices. This approach assumes that the ranks specify the information set on the weights and that no point in the simplex is, therefore, more likely than another (weight density uniformly distributed over the simplex). Consequently, ROC weights are the expected value weights for the respective probability density functions over the feasible weight space (Barron and Barrett 1996). (This is readily demonstrated using a Monte Carlo simulation.)
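For n ranked attributes, the ROC weights can be written in the standard closed form w_i = (1/n) Σ_{k=i}^{n} 1/k; the sketch below computes them and, as the parenthetical above suggests, checks them by Monte Carlo sampling of the ordered weight simplex. This is our own illustration of the concept, not ERS TRADESPACE code.

```python
import random

def roc_weights(n):
    """Rank order centroid weights for attributes ranked 1 (most important)
    to n: w_i = (1/n) * sum_{k=i..n} 1/k."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

def roc_weights_mc(n, samples=100_000, seed=0):
    """Monte Carlo check: the mean of uniform simplex samples, each sorted in
    decreasing order, converges to the centroid of the ordered simplex."""
    rng = random.Random(seed)
    totals = [0.0] * n
    for _ in range(samples):
        cuts = sorted(rng.random() for _ in range(n - 1))
        parts = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        for i, p in enumerate(sorted(parts, reverse=True)):
            totals[i] += p
    return [t / samples for t in totals]

print(roc_weights(4))     # [0.5208..., 0.2708..., 0.1458..., 0.0625]
print(roc_weights_mc(4))  # approximately the same values
```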
Despite their advantages, ROC weights alone do not solve issues of range sensitivity in decision analysis. Weights are usually adjusted from one tradespace analysis to the next because MAV functions traditionally normalize to the range of the local decision context, the current tradespace. Normalizing this way can produce very different decision outcomes when the tradespace range changes if the weights do not change. This is known as the "range dependence of weights" or "range sensitivity principle." The swing weight concept was developed specifically to preserve consistent decision outcomes in the face of changing tradespace ranges (Johnson et al. 2013). Swing weights and other weight-adjusting approaches focus only on altering the weights in the additive MAV model but work very well across different tradespace instantiations using range-dependent normalization.

However, intuitive perceptions of attribute importance are often independent of the range of the outcomes. We focused instead on how to adjust the value functions that effectively grade the individual attributes within the MAV model. Developing value functions that normalize to a basis external to the local decision context offers two primary advantages. Firstly, it preserves consistency in decision outcomes just as do the previously developed swing weight methods. Secondly, externally valuing attributes promotes direct comparability from one tradespace to the next, while weight-adjusting methods with tradespace-dependent value functions do not. When using a value function basis external to the tradespace, weight-adjusting methods are not necessary. Our approach, which exploited the KPP/KSA requirements structure to form value functions not dependent on the current tradespace range, also offers a clear analytical link to requirements. We can compare the impact of competing or changing requirements readily through construction of new Operational Needs Contexts. ROC weights, as expected values over the feasible weight space, are now a solid starting point for analyses when more rigorously obtained weight data are not available. When using an additive MAV model and the individual valuations are also expected values of the given attributes, ROC weights produce the expected MAV Broad Utility given the preference order established by the weights.

Requirements Differentiation. As discussed earlier, requirements expression may occur according to hierarchical type. While the Needs Context and Operational Needs Context valuation models presented in the previous section make no distinction between types of requirements, the models are easily amenable to do so. Weights in any MAV are scaling constants and, as such, may be "re-scaled" if necessary to differentiate between levels of priority or value. Returning to the concept of "must," "should," and "could" have requirements corresponding to KPP, KSA, and OPP/Tier III requirement types respectively, the equations used to assess Broad Utility may be altered via a "requirement weight," βi. For example, all "must have" requirements could be assigned βi = 1, which reduces to the previous version of these value models. Measures of performance classified as "should have" and "could have" requirement types might be assigned values of 0.8 and 0.6 for βi respectively. The βi values for these lower level requirement types are simply examples and are not directly based on DoD guidelines. The following valuation shows the functional form when the requirement weight is only applied to the penalty term if a design fails to meet a requirement threshold:

    U_{OpPenalty\text{-}\beta,k} = U_{+MAV,k}\left[1 - \sum_{i=1}^{n} \beta_i\,w_i \exp\!\left(-\theta\,v_i(Y_{ik})\right)\right]

If we apply the βi term in the traditional U+MAV,k model as well (yielding U_{+MAV\text{-}\beta,k} = \sum_{i=1}^{n} \beta_i\,w_i\,v_i(Y_{ik}) and subsequently a U_{OpPenalty\text{-}\beta,k} model also using this form), analyses that choose to focus on OPP/Tier III measures will not yield valuations of Broad Utility on par with those based only on critical, KPP-type measures even when meeting all requirements. That can be logical in the sense that designs focused on meeting OPP/Tier III requirements should not be valued as highly as those meeting KPPs. The key to keeping such analyses meaningful, however, is consistency in application and documenting why that application is warranted.
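A minimal extension of the earlier penalty sketch with the requirement weight βi applied only to the penalty term, as in the equation above; the βi assignments simply mirror the illustrative 1.0/0.8/0.6 values and are not DoD guidance.

```python
import math

def op_penalty_with_requirement_weight(values, weights, betas, theta=1000.0):
    """U_OpPenalty-beta,k = U_+MAV,k * [1 - sum_i beta_i * w_i * exp(-theta * v_i)]."""
    u_mav = sum(w * v for w, v in zip(weights, values))
    penalty = sum(b * w * math.exp(-theta * v) for b, w, v in zip(betas, weights, values))
    return u_mav * (1.0 - penalty)

# beta = 1.0 for "must" (KPP), 0.8 for "should" (KSA), 0.6 for "could" (OPP/Tier III).
betas = [1.0, 1.0, 0.8, 0.6]
weights = [0.25] * 4
print(op_penalty_with_requirement_weight([1.0, 0.0, 1.0, 1.0], weights, betas))  # failed KPP: full penalty
print(op_penalty_with_requirement_weight([1.0, 1.0, 1.0, 0.0], weights, betas))  # failed Tier III: reduced penalty
```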
Example. The Operational Needs Context matures the prior construct to include a penalty for failure of any performance attribute to meet its threshold value, producing a more operational view for alternative valuation. As an openly sharable example, a traditional additive MAV valuation model and the operational penalty valuation model were applied to the Iris dataset as shown in Figure 3. The Iris data set is a multivariate data set introduced by Fisher (1936) and is also included in the Seaborn Python visualization library (Waskom 2015). We set threshold and objective values within the data range for each attribute, specified a rank order of {petal_length, sepal_length, petal_width, sepal_width}, and prioritized higher values of petal_length and sepal_width. We evaluated the cost as a model function of the attributes. Designs with any attribute levels below the threshold can be part of the broader Pareto set when using a U+MAV,k value model but are not classified as such using a UOpPenalty,k value model. The "best set" taken from the traditional additive MAV approach in (a) underwent identification in a manner analogous to the "fuzzy Pareto set" described by Smaling & de Weck (2004). Instead of taking points off the Pareto frontier as a function of some K percent of the total range of the utility/cost data, however, we took successive Pareto layers. Specifically, we identified the Pareto frontier for the whole data set, then the Pareto frontier for the remaining data, and so on for a defined number of layers. This approach removed sensitivity to the range of data and helped preserve a transparent linkage between identifying a Pareto set and a "best" decision.

[Figure 3. Example comparison of U+MAV,k and UOpPenalty,k, each plotted as a function of Cost.]
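The successive Pareto-layer idea can be sketched as follows, assuming utility is maximized and cost minimized; the helper functions and the toy utility/cost pairs are ours, not the Iris results reported in Figure 3.

```python
def pareto_front(points):
    """Indices of non-dominated (utility, cost) points: higher utility and
    lower cost are preferred."""
    front = []
    for i, (u_i, c_i) in enumerate(points):
        dominated = any(
            (u_j >= u_i and c_j <= c_i) and (u_j > u_i or c_j < c_i)
            for j, (u_j, c_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

def pareto_layers(points, n_layers):
    """Peel successive Pareto frontiers: identify the frontier, remove it,
    and repeat for a defined number of layers."""
    remaining = list(range(len(points)))
    layers = []
    for _ in range(n_layers):
        if not remaining:
            break
        sub = [points[i] for i in remaining]
        layer = [remaining[i] for i in pareto_front(sub)]
        layers.append(layer)
        remaining = [i for i in remaining if i not in layer]
    return layers

# Toy (utility, cost) pairs standing in for valuations of design alternatives.
designs = [(0.62, 0.55), (0.55, 0.52), (0.48, 0.60), (0.40, 0.58), (0.35, 0.51)]
print(pareto_layers(designs, n_layers=2))
```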

Relevance to Sensitivity and Uncertainty Analysis. Approaches investigating uncertainty on attribute weights in additive value models were well defined by Charnetski & Soland (1976, 1978) and expanded upon by Lahdelma, Hokkanen, & Salminen (1998), Lahdelma, Miettinen, & Salminen (2003), and Tervonen & Lahdelma (2007). They apply well to the UOpPenalty model. But there are interesting ramifications when investigating uncertainty in the value functions comprising the model and the impact of this synthesis with the UOpPenalty construct. Firstly, if normalizing attribute values to the tradespace range, uncertainty must be characterized or propagated before normalization. Otherwise, valuations at range extremes can produce values below 0 or above 1. In contrast, distributions associated with uncertainty can be incorporated at any stage in the analysis when using an external value reference; attribute values will always be bound between 0 and 1. There are interesting dynamics for uncertain attributes with levels close to their objective and threshold. As shown in Figure 4, which investigated a uniform distribution of uncertainty on the iris attributes, skewing of the data (shown as normalized histograms) can occur near the objective (sepal_width) and discontinuities near the threshold (petal_length). Pulling value function distributions with these characteristics into a higher-level model such as UOpPenalty necessitates a sampling strategy since they are not readily mathematically convoluted with other distributions. Figure 5 extends these results to the evaluation of Broad Utility as characterized by the U+MAV and UOpPenalty constructs. Figures 5 (a) and (b) show uncertainty only on the weights, evaluated through a Monte Carlo simulation as described by Lahdelma, Miettinen, & Salminen (2003). The distributions are understandably narrower when the rank order enforcement occurs as shown in (b). Figures 5 (c) and (d) then show the impact of uncertainty on the weights and iris "alternative" valuations when there is enforcement of preference ranks. Both distributions are broader than the comparable case for weights-only uncertainty in (b). The UOpPenalty case in (d) magnifies the effect from (c), resulting in a heavier distribution toward the lower values due to the uncertainty of an attribute near to its threshold (petal_length). This underscores the importance of rigorously investigating design alternatives with uncertain attribute levels near to thresholds and objectives, especially if classified as being in the "best set" of Pareto designs.

[Figure 4. Illustration of impact of uncertainty on value function results: normalized histograms of value for sepal_length, sepal_width, petal_length, and petal_width.]

[Figure 5. Illustration of relation between broad utility and uncertainty: (a) U+MAV, no rank order, uncertainty on weights; (b) U+MAV, rank order holds, uncertainty on weights; (c) U+MAV, rank order holds, uncertainty on weights and values; (d) UOpPenalty, rank order holds, uncertainty on weights and values.]
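The weights-only cases in Figures 5(a) and 5(b) can be approximated with a simple Monte Carlo loop in the spirit of Lahdelma, Miettinen, & Salminen (2003): sample weights uniformly from the simplex and, when the stated rank order holds, sort each sample in decreasing order. The sketch below is our own simplified illustration, not the analysis code behind the figures; the example valuations are hypothetical.

```python
import random

def sample_simplex_weights(n, rng):
    """Uniform sample from the weight simplex via sorted-uniform spacings."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    return [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]

def mav_distribution(values, n_samples=10_000, enforce_rank_order=True, seed=0):
    """Distribution of U_+MAV for one alternative under weight uncertainty.
    `values` are fixed attribute valuations ordered from highest- to
    lowest-ranked attribute; sorting the sampled weights in decreasing
    order enforces that preference order."""
    rng = random.Random(seed)
    n = len(values)
    samples = []
    for _ in range(n_samples):
        w = sample_simplex_weights(n, rng)
        if enforce_rank_order:
            w = sorted(w, reverse=True)
        samples.append(sum(wi * vi for wi, vi in zip(w, values)))
    return samples

# Illustrative valuations for one design, highest-ranked attribute first.
vals = [0.9, 0.6, 0.7, 0.2]
unordered = mav_distribution(vals, enforce_rank_order=False)
ordered = mav_distribution(vals, enforce_rank_order=True)
print(min(unordered), max(unordered))
print(min(ordered), max(ordered))  # the rank-ordered spread is narrower, as in Figure 5(b)
```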

DISCUSSION

If we revisit the DoD acquisition process, the "system need is established, and high-level system requirements are defined" during the Materiel Solution Analysis (MSA) phase that culminates with the Milestone A decision (Baldwin et al. 2012). Notional system architectures are "often created to assist with the requirements analysis and definition for the preferred system concept," and reviews are conducted to ensure that the resulting requirements set "agrees with the customer's needs and expectations" (Baldwin et al. 2012). Our methodology using a requirements-based concept of value scaling for design alternative attributes supports these DoD goals, offering a direct analytical linkage between tradespace exploration and requirements maturation. In our implementation via the ERS TRADESPACE toolset described above, we do not force any given method on systems engineers. Rather, we enable the flexibility to tailor analyses to needs. Engineers are therefore able to use simply an additive MAV model or apply the operational penalty as described here when constructing Needs Context perspectives and analyses. With or without an operational penalty or weighting according to requirement prioritization, the method offers direct means for an analyst to investigate the impact of various requirements on Broad Utility to stakeholders.

The Operational Needs Context is a modular construct designed to preserve scalability. Using an exponential penalty function, for example, enables ease in execution compared to attribute-by-attribute "if-then" statements for each alternative. The method is based on selecting key evaluation measures, focused on what matters most to stakeholders. Its design supports the analysis of operational scenarios that differ according to the operational environment, the system use (to do what – a CONOPS), and even competing stakeholder priorities across the desired capabilities.
Prioritizing agility and acceleration for a vehicle design, for example, may not produce alternatives well suited to driving through mud or with strong underbody blast protection. Some designs may be robust across multiple stakeholder operational needs; some may not. The Operational Needs Contexts can help highlight where compromise may or may not be possible while providing a traceable, quantitative basis for reducing a set of options for further analysis.

This work has also shown the importance of uncertainty, its treatment, and interpretation. Uncertainty analysis concerning identifying a "best set" of alternatives should focus on what aspects of uncertainty change our decision about which design alternatives to include in that set. Figures 4 and 5 illustrate how valuation uncertainty can result in highly skewed or discontinuous distributions, and uncertainty on weights and values can produce a bimodal aggregate distribution when an operational penalty is applied. Design alternatives in a "best set" with attribute levels near the objective and especially threshold values must be carefully evaluated. Simply taking a mean, standard deviation, or quartile representation may not represent the uncertainty well. As with any mathematical method, selection and effectiveness of synthesis with other methods are problem- and process-dependent. And, addressing composability highlights the need to understand global versus local tradespace perspectives.

COMPOSABILITY AND GLOBAL-TO-LOCAL ANALYSIS PERSPECTIVES

Within an ERS perspective, evaluating resiliency of systems consists of making decisions (trades) at a high level across broad measures of performance that are themselves derived through a process of making trades across lower level variables. High-level resiliency dimensions may include Broad Utility (Goerger et al. 2014), reliability, manufacturability, flexibility to engineering change (Sitterle et al. 2015), system development cost, overall lifecycle cost, and so on. There will likely not be any one answer or view across the numerous resiliency dimensions that exist or have yet to undergo development. For early-stage design, Pre-Milestone A in the DoD materiel acquisitions process, we typically do not have the a priori insights to specify the exact nature of the relationships across various dimensions of resiliency. More often, we evaluate multiple designs and architectures, bubbling up to the design space regions that are feasible and desirable. Through continued analysis of these desirable regions, we begin to analyze and gain insights concerning how the resiliency dimensions for hypothetically realized design concepts interrelate.

In traditional decision analysis of tradespace data, the basis of evaluation is the whole of a specific instantiation of a tradespace. An ERS perspective requires the maturation of a more holistic approach toward integrating the whole-tradespace analytical view with analyses targeted toward the "best" set of designs. For small sets of system design alternatives, it may be quite feasible to evaluate various measures of performance feeding resiliency measures (in turn a higher-level MOP or MOE) for all design alternatives. As the number of design alternatives increases, say to 10,000 or 1M or more, the global versus local treatment becomes more complicated. Some sensitivity analyses may need to be global, performed across the entire tradespace to reduce a set of parameters or attributes under evaluation. In other analytical treatments, a global approach can compromise insights that may be specific to the "better" design alternatives with bias from the "poor" alternatives, as well as inhibit the ability to scale both computationally and visually. (This is especially true for component-based tradespaces.) Focusing some aspects of the analysis on a more tailored, decision-oriented space, a "Local" data set, will enable deeper insights regarding sensitivity, uncertainty, and correlations across the "best" set of designs. Toward this end, the Operational Needs Context can alter which alternatives are in that set via the penalty function. The Operational Needs Context may be used after a different analytical treatment narrows the alternatives or attributes thereof, or it may be used as a precursor, reducing the "input tradespace" to the next analytical treatment to a more rational set of designs. This may be done in any number of ways: taking a top set of designs including those on and just off of the Pareto front as described earlier, taking a specified top percentage of alternatives with the highest values (irrespective of cost), and more. Which way to take a "best set" to propagate to the next analytical processes depends entirely on the needs of the next treatment and the overall process. These concepts are illustrated in Figure 6.

[Figure 6. Comparison of a traditional data view with a more decision-oriented view. Traditional view: use sensitivity analysis at the whole tradespace level, propagating forward to derive insights for decision. Decision-oriented view: use global sensitivity, uncertainty, and resiliency measures to derive a "best" design set, then further evaluate sensitivity and uncertainty in the context of that local set for insights more tailored to the decision space, leveraging the "Local" set to feed additional analytical measures.]

By treating the Operational Needs Context as an analytical building block that helps identify promising designs for additional types of resiliency analyses, we must understand that workflow matters when seeking to compare results across different analytical efforts. Previous work began to investigate how different constructs might be synthesized when one is serving to reduce a set of design options (Sitterle et al. 2015). To help promote a more local tradespace context, one analysis used a weighted local covariance, a covariance calculation taken from machine learning applications that prioritizes the contributions of nearest neighbors with appropriate selection of the weighting kernel. Even so, results from a measure of flexibility change when the analysis is applied to the global (entire) tradespace versus the local, "best set" tradespace filtered by Broad Utility. This is intuitive but serves to underscore the importance of understanding where and when to take local perspectives. In some tradespaces and types of analyses, including all points may hinder effective evaluation when some points characterize completely different systems.
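As a rough illustration of the kind of kernel-weighted local covariance referred to above, the sketch below uses a Gaussian weighting kernel centered on a query design; this is our own approximation of the idea, and the cited work's exact formulation may differ.

```python
import numpy as np

def weighted_local_covariance(X, query, bandwidth=1.0):
    """Covariance of tradespace points X (rows = designs, columns = attributes)
    with Gaussian kernel weights that prioritize the nearest neighbors of
    `query`, so the estimate reflects the local, decision-oriented region."""
    d2 = np.sum((X - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w = w / w.sum()
    mean = w @ X
    centered = X - mean
    return (centered * w[:, None]).T @ centered

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # toy tradespace: 500 designs, 3 attributes
local_cov = weighted_local_covariance(X, query=X[0], bandwidth=0.5)
global_cov = np.cov(X, rowvar=False)
print(local_cov.round(3))
print(global_cov.round(3))             # local and global estimates generally differ
```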
SUPPORTING DECISION ANALYSIS FOR ERS

The focus for early-stage design is not identifying an optimal solution, but rather narrowing down the potential solution space for more rigorous data collection, generation, and evaluation. We are interested in how our broader Pareto set changes across Operational Needs Contexts and how that set changes if there is uncertainty associated with weights assigned to the attributes prioritized in each context and valuation of those attributes. With explicitly quantifiable and traceable links to requirements, the analytical constructs described here offer a means for direct exploration of the sensitivity to threshold and objective values. In keeping with the goals of ERS, this capability can aid early-stage design requirements maturation and offer an approach that can be synthesized with other analytical approaches to build a complete characterization of resiliency.

Even so, the challenge for ERS extends much further than analytical methods. Active development continues on ERS TRADESPACE, the aforementioned integrated toolset by GTRI in collaboration with ERDC, especially on methods and tools to support complex analyses across both local and non-local model and simulation components. A limitation of model-based tradespace generation and analysis is the level of time, cost, expertise, and effort required to develop and implement relevant M&S components. There is a limit to how many threats and how many simulations may be defined and evaluated for each system. Our guiding philosophy is therefore to support an integrated and yet highly flexible design and analysis process focused on identifying the set of alternatives most likely to meet requirements based on the information (data, design architectures, models) available.

Throughout this effort, we adhere to the philosophy that the most important goal is insight, not numerical treatment or inference. In decision analyses, qualitative concepts and judgments are often required to be translated into quantitative measures to enable scalable, consistent, and traceable analyses. Even so, workflow matters. Consistency across treatments and the processes by which they are applied is critical to meaningful (actionable) insights. Methods, processes, and tools (MPTs) should be engineered together to promote transparent, intuitive, rational, and quantifiably traceable foundations for resiliency analyses. Analytical processes for characterizing resiliency still, however, depend on the specific stage of design or place in the acquisition lifecycle. Mature frameworks for resiliency evaluation appropriate to various types and stages of the acquisition lifecycle will emerge from more extensive application and lessons learned. We hope that by sharing the perspectives described here, we will help promote a collaborative, communal maturation of resiliency analyses for ERS across researchers and DoD customers alike.

ACKNOWLEDGEMENTS

Portions of this material come from work supported, in whole or in part, by the United States DoD through the Systems Engineering Research Center (SERC) under Contract HQ0034-13-D-0004. SERC is a federally funded University Affiliated Research Center managed by Stevens Institute of Technology. The views and conclusions are those of the individual authors and participants, and should not be interpreted as necessarily representing official policies, either expressed or implied, of the DoD, any specific US Government agency, or the US Government in general.

REFERENCES
■ AcqNotes. 2015. "JCIDS Process: Concept of Operations (CONOPS)." http://acqnotes.com/acqnote/acquisitions/concept-of-operations-conops.
■ ANSI/EIA. 2003. Processes for Engineering a System. ANSI/EIA 632-2003. Philadelphia, US-PA: American National Standards Institute/Electronic Industries Association.
■ Baldwin, K., et al. 2012. "The United States Department of Defense Revitalization of System Security Engineering Through Program Protection." Systems Conference (SysCon), IEEE International. Vancouver, British Columbia (Canada): IEEE.
■ Balestrini-Robinson, S., D. F. Freeman, and D. C. Browne. 2015. "An Object-oriented and Executable SysML Framework for Rapid Model Development." Procedia Computer Science 44: 423-432.
■ Balestrini-Robinson, S., D. F. Freeman, J. Arruda, and T. R. Ender. 2015. "ERS TRADESPACE: A Collaborative Systems Engineering Framework." Proceedings, AHS Systems Engineering Technical Specialists' Meeting. Huntsville, US-AL: AHS.
■ Barron, F. H., and B. E. Barrett. 1996. "Decision Quality Using Ranked Attribute Weights." Management Science 42(11): 1515-1523.
■ Charnetski, J. R., and R. M. Soland. 1976. "Technical Note – Statistical Measures for Linear Functions on Polytopes." Operations Research 24(1): 201-204.
■ Charnetski, J. R., and R. M. Soland. 1978. "Multiple-Attribute Decision Making with Partial Information: The Comparative Hypervolume Criterion." Naval Research Logistics Quarterly 25(2): 279-288.
■ Defense Acquisition University. 2011. IPS Element Guidebook, 2 - Design Interface: 2.3 Key Performance Parameters (KPPs) and Key System Attributes (KSAs). U.S. Office of the Assistant Secretary of Defense for Materiel Readiness.
■ Fisher, R. A. 1936. "The Use of Multiple Measurements in Taxonomic Problems." Annals of Eugenics 7(2): 179-188.
■ Goerger, S. R., A. M. Madni, and O. J. Eslinger. 2014. "Engineered Resilient Systems: A DoD Perspective." Procedia Computer Science 28: 865-872.
■ Holland, J. P. 2013. "Engineered Resilient Systems (ERS) Overview." Presentation at the U.S. Army Engineer Research and Development Center (ERDC), December.
■ Johnson, E. R., G. S. Parnell, S. N. Tani, and T. A. Bresnick. 2013. "Perform Deterministic Analysis and Develop Insights." In Handbook of Decision Analysis, 166-226.
■ Lahdelma, R., J. Hokkanen, and P. Salminen. 1998. "SMAA – Stochastic Multiobjective Acceptability Analysis." European Journal of Operational Research 106(1): 137-143.
■ Lahdelma, R., K. Miettinen, and P. Salminen. 2003. "Ordinal Criteria in Stochastic Multicriteria Acceptability Analysis (SMAA)." European Journal of Operational Research 147(1): 117-127.
■ Pflanz, M., C. Yunker, F. N. Wehrli, and D. Edwards. 2012. "Applying Early Systems Engineering: Injecting Knowledge into the Capability Development Process." Defense Acquisition Research Journal 19(4): 422-433.
■ Ryan, E. T., D. R. Jacques, and J. M. Colombi. 2013. "An Ontological Framework for Clarifying Flexibility-Related Terminology via Literature Survey." Systems Engineering 16(1): 99-110.
■ Roszkowska, E. 2013. "Rank Ordering Criteria Weighting Methods – A Comparative Overview." Optimum Studia Ekonomiczne 5(65): 14-33.
■ Sitterle, V. B., M. D. Curry, D. F. Freeman, and T. R. Ender. 2014. "Integrated Toolset and Workflow for Tradespace Analytics in Systems Engineering." INCOSE International Symposium 24(1): 347-361. Las Vegas, US-NV: INCOSE.
■ Sitterle, V. B., D. F. Freeman, S. R. Goerger, and T. R. Ender. 2015. "Systems Engineering Resiliency: Guiding Tradespace Exploration within an Engineered Resilient Systems Context." Procedia Computer Science 44: 649-658.
■ Smaling, R. M., and O. L. de Weck. 2004. "Fuzzy Pareto Frontiers in Multidisciplinary System Architecture Analysis." AIAA Paper 4553: 1-18.
■ Spero, E., M. Avera, P. Valdez, and S. Goerger. 2014. "Tradespace Exploration for the Engineering of Resilient Systems." Procedia Computer Science 28: 591-600.
■ Tervonen, T., and R. Lahdelma. 2007. "Implementing Stochastic Multicriteria Acceptability Analysis." European Journal of Operational Research 178(2): 500-513.
■ US Department of Defense. 2013. Interim Department of Defense Instruction 5000.02. Operation of the Defense Acquisition System. Washington, DC (US): Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics.
■ Waskom, M. 2015. http://stanford.edu/~mwaskom/software/seaborn/index.html.

ABOUT THE AUTHORS


Dr. Valerie B. Sitterle is a Senior Research Engineer at the Georgia Tech Research Institute. Her primary areas of research include developing frameworks and associated analytical methods for the design and characterization of complex defense systems and systems-of-systems. Dr. Sitterle earned a Ph.D. in Mechanical Engineering at Georgia Tech, a BME and MS in Mechanical Engineering from Auburn University, and an MS in Aerospace Engineering and Engineering Science from the University of Florida.

Erika L. Brimhall is a Research Engineer II at the Georgia Tech Research Institute. She has nine years of experience in aerospace and defense, including working on the Space Shuttle main propulsion system at Kennedy Space Center, radar signal processing at Raytheon Missile Systems, and systems modeling at GTRI. She earned her B.S. in Aerospace Engineering from the Florida Institute of Technology and her M.S. in Systems Engineering from the Johns Hopkins University. She is currently leading a system architecture project.

Dr. Dane F. Freeman is a Research Engineer II at the Georgia Tech Research Institute in the Systems Engineering Application Branch. His primary area of research includes the development of systems engineering tools, probabilistic design space exploration, and product family design. He earned a B.S. in Mechanical Engineering from the University of New Orleans, and an M.S. and Ph.D. in Aerospace Engineering from Georgia Tech.

Dr. Santiago Balestrini-Robinson is a Senior Research Engineer at the Georgia Tech Research Institute (GTRI). His primary area of research is the development of opinion-based and simulation-based decision support systems for large-scale system architectures. He earned a B.S., an M.S., and a Ph.D. in Aerospace Engineering from the Georgia Institute of Technology.

Dr. Tommer R. Ender is a Principal Research Engineer at the Georgia Tech Research Institute and serves as Chief of the Systems Engineering Research Division. His primary area of research includes the development of systems engineering tools and methods as applied to complex systems-of-systems, concerned with supporting decision-making through a holistic treatment of various problems. Dr. Ender is an instructor in Georgia Tech's Professional Masters in Applied Systems Engineering. He earned a B.S., M.S., and Ph.D. in Aerospace Engineering from Georgia Tech.

Dr. Simon R. Goerger currently serves as the Director of the Institute for Systems Engineering Research, U.S. Army Engineer Research and Development Center (ERDC). He previously served as a Colonel in the U.S. Army, where his appointments included Director of the DoD Readiness Reporting System Implementation Office in the Office of the Undersecretary of Defense, Joint Multinational Networks Division Chief for the U.S. Army Central Command in Kuwait, and Director of the Operations Research Center of Excellence in the Department of Systems Engineering at the United States Military Academy (USMA). He received his B.Sc. from the USMA, and his M.Sc. in Computer Science and his Ph.D. in Modeling and Simulation both from the Naval Postgraduate School.
