
Chapter 2

Uncertainty
Introduction to the Topic
• Uncertainty is the reason for risk analysis. Risk analysis is, in a sense, the confluence of social values and science.
• A major purpose of risk analysis is to push risk assessors
and risk managers to be intentional in how they address
uncertainty in analysis and decision making.
• This chapter focuses, in a conceptual way, on the pile of things we do not know. To know how best to address the “things” we do not know, we must first understand the nature of those things in the pile of unknowns.
Separating Knowns From Unknowns and Sorting Them Out
[Figure: a sorting diagram separating the things we know (knowledge, theory, models) from the things we do not know (scenarios, natural variability, quantities). The quantities are further sorted into empirical quantities, defined constants, decision variables, value parameters, index variables, model domain parameters, and outcome criteria.]
Levels of Uncertainty
• Uncertainty at the macro level affects values through a constantly and rapidly changing social environment.
• Uncertainty at the micro level occurs in the specific details
of the problems decision makers face, at the level of our
scientific knowledge.
• The two levels of uncertainty can pose markedly different challenges to risk analysis.
What Is Knowledge Uncertainty?
• Refers to uncertainty that results from lacking or incomplete information.
• Quantitative uncertainty analysis attempts to analyze and
describe the degree to which a calculated value may differ
from the true value.
• For analysis, probability distributions are used.
Basic Probability Concepts
• Probability is the chance that something will happen.
• Probabilities are expressed as fractions (1/4, 1/2, 3/4) or as decimals (0.25, 0.50, 0.75) between 0 and 1.
• Assigning a probability of 0 means that something can never happen.
• Assigning a probability of 1 means that something will always happen.
Approaches to Probability
Classical Approach
• Classical probability defines the probability that an
event will occur as:

 The probability of an event = (number of outcomes favorable to the occurrence of the event) / (total number of possible outcomes)
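A minimal worked example of the classical definition, using the hypothetical event “rolling an even number on a fair six-sided die” (3 favorable outcomes out of 6):

```python
# Classical probability: favorable outcomes / total possible outcomes.
# Hypothetical event: rolling an even number on a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]                      # all equally likely outcomes
favorable = [o for o in outcomes if o % 2 == 0]    # outcomes favorable to the event

p_even = len(favorable) / len(outcomes)
print(f"P(even) = {len(favorable)}/{len(outcomes)} = {p_even}")  # P(even) = 3/6 = 0.5
```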
Relative Frequency
• To introduce this definition, let us ask ourselves
questions such as:
• What is the probability that a 25-year-old athlete will have a brain tumor?
• What is the probability that I will live to 100?
• What are the chances that a new paper plant on the
river near town will produce a significant fish kill?
• What is the chance that turning up a 200-watt amplifier wide open will blow one of my speakers?
• This method uses the relative frequencies of past
occurrences as probabilities.
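A minimal sketch of the relative-frequency approach; the observation counts below are illustrative assumptions, not data from the slides.

```python
# Relative-frequency probability: past occurrences / total observations.
# Hypothetical record: days on which a river gauge exceeded flood stage.
days_observed = 3650   # ten years of daily readings (assumed)
flood_days = 73        # days that exceeded flood stage (assumed)

p_flood = flood_days / days_observed
print(f"Estimated P(flood on a given day) = {p_flood:.3f}")  # 0.020
```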
Subjective Probabilities
• Subjective probabilities are based on the personal
belief or feeling of the person who makes the probability
estimate.
• We can define subjective probability as the probability
assigned to an event on the basis of whatever evidence
is available.
• Managers generally assign probabilities subjectively
when events occur only once or at most a very few
times.
• Example:
• Suppose it is your responsibility to select a new
assistant and you have narrowed the choice down to
three persons. All three have an attractive appearance, an apparently high level of energy, high self-confidence, and equally impressive records of past accomplishments.
• What is the chance that each of these candidates will
make a good assistant?
Types of Probability Distributions
• Discrete distributions: the variable is allowed to take on only a limited number of values.
• Types most commonly used: Binomial Distribution, Poisson Distribution
• Examples: tossing a coin, success or failure in a job interview (binomial); arrival of patients in a clinic, arrival of vehicles at a toll booth (Poisson)
• Continuous distributions: the variable under consideration is permitted to take on any value within a given range.
• Type most commonly used: Normal Distribution
• Examples: the absenteeism rates in the 4 classes of JBB (Mla and QC), earnings of college graduates 10 years after graduation
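As an illustration of the distributions named above, the sketch below draws random samples from binomial, Poisson, and normal distributions; it assumes NumPy is available, and all parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Discrete, binomial: number of "successes" in 10 job interviews, p = 0.3 (assumed).
interview_successes = rng.binomial(n=10, p=0.3, size=1000)

# Discrete, Poisson: patient arrivals per hour at a clinic, mean rate 4 (assumed).
patient_arrivals = rng.poisson(lam=4, size=1000)

# Continuous, normal: graduate earnings in thousands, mean 60, sd 12 (assumed).
earnings = rng.normal(loc=60, scale=12, size=1000)

print("Mean successes per 10 interviews:", interview_successes.mean())
print("Mean arrivals per hour:", patient_arrivals.mean())
print("Mean simulated earnings:", round(earnings.mean(), 1))
```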
Natural Variability
• Refers to the true differences in attributes due to
heterogeneity or diversity.
• Variability is usually not reducible by further measurement
or study, although it can be better characterized.
Types of Uncertainty
• Quantity Uncertainty
• Scenario Uncertainty
• Model Uncertainty
• Parameter / Input Uncertainty
Quantity Uncertainty
• Quantity or input uncertainty is encountered when the
appropriate or true values of quantities are not known.
• Risk analysis can require a lot of information. Risk
assessment in particular can involve a great deal of
quantitative information that includes many parameters
(numerical constants) and variables.
Morgan and Henrion’s (1990)
Classification of Quantity Uncertainty
• Empirical Quantity
• Defined Constants
• Decision Variables
• Value Parameters
• Index Variables
• Model Domain Parameters
• Outcome Criteria
Empirical Quantities
• Empirical Quantities are things that can be measured or
counted
• This includes distances, times, sizes, temperatures,
statistics, and any sort of imaginable count.
• They have exact values that are unknown but measurable
in principle, although it may be difficult to do so in practice.
• Examples are stream flow, eggs produced daily, vehicles
crossing a bridge, temperature, time to complete a task,
prevalence
Defined Constants
• Defined constants have a true value that is fixed by
definition.
• When these values are not known by the analyst, these
quantities can end up in the pile of things we do not know.
• Examples are square feet in an acre, gallons of water in a
tank, speed of light, size of a city
Decision Variables
• This is a quantity that someone must choose or decide on.
• Decision makers exercise direct control over these values
as they have no true value.
• The person deciding this value may or may not be a
member of the risk analysis team, depending on the nature
of the variable.
• Examples are acceptable daily intake, tolerable level of
risk, appropriate level of protection, reasonable cost,
mitigation goal
Value Parameters
• These values represent aspects of decision makers’ preferences and judgments; they have no true value.
• They are subjective assessments of social values that can
describe the values or preferences of stakeholders, the risk
manager, or other decision makers.
• Examples are value of statistical life, discount rate, weights
assigned in a multi-criteria decision analysis, user-day
values
Index Variables
• Index variables identify elements of a model or locations
within spatial and temporal domains.
• They may or may not have a true value.
• A point in time can be referenced as a time in a model, and
a grid cell can be referenced using coordinates.
• Examples are a particular year in a multi-year model, the
location of an egg on a pallet, a geographic grid in a spatial
model
Model Domain Parameters
• These values specify and define the scope of the system
modeled in a risk assessment.
• These parameters describe the geographic, temporal, and
conceptual boundaries (domain) of a model.
• They define the resolution of its inputs and outputs; they
may or may not have true values.
• Examples are study area, planning horizon, industry
segment, climate range
Outcome Criteria
• Outcome criteria are output variables used to rank or
measure the desirability or undesirability of possible model
outcomes.
• These values are determined by the input quantities and
the models that use them.
• Uncertainty in these values is evaluated by propagating uncertainty from the input variables to the output variables, as sketched after this list.
• Examples are mortalities, illness rates, infrastructure
failures, fragility curves, costs, probabilities, cost-benefit ratios, risk-risk tradeoffs
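A minimal Monte Carlo sketch of propagating input uncertainty to an outcome criterion. The toy model (total cost = unit cost × quantity) and its input distributions are assumptions made for illustration, not a model from the slides.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# Uncertain empirical inputs (assumed distributions, for illustration only).
unit_cost = rng.normal(loc=250.0, scale=30.0, size=n)   # cost per unit repaired
quantity = rng.poisson(lam=40, size=n)                  # units needing repair

# Outcome criterion determined by the inputs and the toy model.
total_cost = unit_cost * quantity

# Input uncertainty propagated to, and summarized on, the output variable.
print("Mean total cost:", round(total_cost.mean()))
print("90% interval:", np.percentile(total_cost, [5, 95]).round())
```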
Scenario Uncertainty
• Uncertainty in specifying the risk scenario that is consistent
with the scope and purpose of the assessment
Model Uncertainty
• Uncertainty due to gaps in scientific knowledge that
hamper an adequate capture of the correct causal relation
between risk factors.
Parameter Uncertainty
• Uncertainty involved in the specification of numerical
values for the factors that determine the risk.
Sources of Uncertainty in
Empirical Quantities
• Random Error and Statistical Variation
• Systematic Error and Subjective Judgment
• Linguistic Imprecision
• Natural Variability
Random Error and Statistical Variation
• No measurement can be perfectly exact.
• Even tiny flaws in observation or reading measuring
instruments can cause variations in measurement from
one observation to the next.
• Classical statistical techniques provide a wide array of
methods and tools for quantifying this kind of uncertainty.
• These include estimators, standard deviations, confidence
intervals, hypothesis testing, sampling theory, and
probabilistic methods.
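A brief sketch of a few of the classical tools listed above (mean, standard deviation, and a rough 95% confidence interval), applied to hypothetical repeated measurements; the readings are assumed for illustration.

```python
import math
import statistics

# Hypothetical repeated measurements of the same quantity.
readings = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)            # sample standard deviation
se = sd / math.sqrt(len(readings))         # standard error of the mean

# Rough 95% confidence interval using the normal approximation (z = 1.96).
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"Mean = {mean:.2f}, SD = {sd:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```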
Systematic Error and Subjective
Judgment
• Systematic errors arise when the measurement instrument,
the experiment, or the observer is biased. Imprecise
calibration of instruments is one cause of this bias.
• If the observer tends to over- or underestimate values, a more objective means of measurement is needed or the observer needs to be recalibrated.
• The challenge to the risk assessor is to reduce systematic
error to a minimum.
• The best solution is to avoid or correct the bias.
Linguistic Imprecision
• Linguistic imprecision makes communication about complex matters of risk especially challenging.
• If we say a hazard occurs frequently or a risk is unlikely,
what do these words really mean?
• The best and obvious solution to this kind of ambiguity is to
carefully specify all terms and relationships and to clarify
all language as it is used.
• Using quantitative rather than qualitative terms can also
help.
Natural Variability
• Many quantities may vary over time, space, or from one
individual or object in a population to another.
• This variability is inherent in the system that produces the
population of things we measure.
• Frequency distributions based on samples or probability
distributions for populations, if available, can be used to
estimate the values of interest. Other probabilistic methods
may be used as well.
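A small sketch of characterizing (rather than reducing) natural variability with an empirical frequency distribution; the daily stream-flow values are assumed for illustration.

```python
from collections import Counter

# Hypothetical daily stream-flow measurements (m^3/s) that vary naturally.
flows = [12, 15, 15, 18, 14, 15, 20, 12, 18, 15, 14, 20, 18, 15, 12]

# Empirical (relative) frequency distribution based on the sample.
counts = Counter(flows)
for value in sorted(counts):
    print(f"{value} m^3/s: relative frequency {counts[value] / len(flows):.2f}")
```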
End Of Presentation
