
JOÃO LUÍS SANTOS

QUANTITATIVE RISK ANALYSIS


THEORY AND MODEL

2008

ABOUT THE AUTHOR

The author was born in Angola in March 1968. He received a Bachelor's degree (1994) and a Licenciatura degree (1996) in Chemical Engineering from the School of Engineering of the Polytechnic Institute of Oporto (Oporto, Portugal), and a Master's (M.S.) degree in Environmental Engineering (2001) from the Faculty of Engineering of the University of Oporto (Oporto, Portugal). He also holds an advanced diploma in Occupational Health and Safety (ISQ – Institute of Welding and Quality, Vila Nova de Gaia, Portugal). The author is a Professional Safety Engineer, licensed by ACT (Labor Conditions National Authority, National Examination Board in Occupational Safety and Health) and ISHST (Institute of Safety, Hygiene and Occupational Health). He has several years of background experience in industrial, chemical and petrochemical processes and operations. His experience in the occupational health and safety field includes chemical and petrochemical plant safety reviews, surveys and inspections, safety requirements and the application of safety standards, codes and practices (including OSHA, NFPA, NEC, API, ANSI), process safety, industrial risk analysis and assessment, and fire and loss prevention engineering and management. The author’s research interests include environmental engineering and safety engineering; much of his research in the safety field has emphasized the development of safety processes and risk analysis.

Inquiries about the content of this work may be directed to the following address:

Rua Parque de Jogos, 357, Bloco A, 2º Centro


Lugar do Covelo
4745–457 São Mamede do Coronado
PORTUGAL

E-mail: joao.santos@lycos.com
Telephone: (351) 91 824 2766

This work is copyright. Apart from any use as permitted under the Copyright Act, no part may be reproduced by any process without prior written permission from the author. Requests and inquiries concerning reproduction and rights should be addressed to the above address.


EXECUTIVE SUMMARY

In this document the term “risk analysis” is employed in its broadest sense to include risk assessment, risk
management and risk communication. Risk assessment involves identifying sources of potential harm,
assessing the likelihood (probability) that harm will occur and the consequences if harm does occur. Risk
management evaluates which risks identified in the risk assessment process require management and
selects and implements the plans or actions that are required to ensure that those risks are controlled. Risk
communication involves an interactive dialogue between stakeholders and risk assessors and risk managers
which actively informs the other processes. Due to the relatively short history of use of technology, the
potential variety of hazards and the complexity of the environments into which they may be introduced, the
risk analysis process may rely on both quantitative and qualitative data.

RISK ANALYSIS = RISK ASSESSMENT + RISK MANAGEMENT + RISK COMMUNICATION

The first step in risk assessment is establishing the risk context. The risk context includes: the scope and boundaries of the risk analysis as determined by national and international regulations, the Regulations and the Regulator’s approach to their implementation; the proposed dealings; and the nature of the process and technology. It should be noted that consideration of potential harm may include economic issues such as marketability and other trade considerations, which sometimes fall outside the scope of the national and international regulations. The Regulator must consider risks to human health and safety and the environment (and other targets) arising from, or as a result of, human activities and interaction with technology. In addressing harm it is important to define harm and the criteria used to assess it.
Risk assessment can be usefully considered as a series of simple questions: “What might happen?”, “How
might it happen?”, “Will it be serious if it happens?”, “How likely is it to happen?”, and finally, ”What is the
risk?”. In the first instance, answering these questions involves hazard identification, a process that identifies
sources of potential harm (“What…?”) and the causal pathway through which that harm may eventuate
(“How…?”). This is followed by a consideration of the seriousness of the harm being realised (consequence),
the chance or probability (likelihood) that harm will occur, the chance of exposure to the hazard, and the
safety level existing for the process or technology. The hazard identification, consequence, likelihood,
exposure, and safety level assessments together lead to an appraisal of whether the hazard will result in a risk and to a qualitative estimate of the level of that risk (risk estimate). Although risk assessment is most simply presented as a linear process, in reality it is cyclical or iterative, with risk communication actively
informing the other elements. For this reason, it is helpful to use terminology that clearly distinguishes
between the likelihood assessment, consequence (loss criticality) assessment, exposure assessment, safety
level assessment, and the risk estimate. Therefore, several different descriptors have been selected for each
component that are designed to convey a scale of sequential levels. The consistent application of this
distinct terminology is intended to clarify the discussion of these components of the risk assessment. The
explanations of the descriptors for consequence need to encompass adverse consequences of events
relating to both human health and safety and other targets, e.g. the environment. They are relatively simple,
in order to cover the range of different factors (e.g. severity, space, time, cumulative, reversibility) that may
contribute to the significance of adverse consequences. The risk estimate is derived from the combined consideration of exposure, likelihood, loss criticality (severity), and safety level. The individual descriptors can be incorporated into a Risk Estimate Matrix. The aim of the matrix is to provide a format for thinking about the relationship between the exposure, likelihood, and loss criticality (severity) of particular
hazards. It is important to note that uncertainty about any of these components will affect the risk estimate.
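
As an illustration of how these descriptors can be combined, the following Python sketch builds a simplified risk estimate from the four components. The descriptor scales, numeric weights, and risk bands are hypothetical and serve only to show the mechanics of a Risk Estimate Matrix.

# Illustrative sketch only: the descriptor scales, weights, and risk bands
# below are hypothetical, not values prescribed by this document.
LIKELIHOOD = {"highly unlikely": 1, "unlikely": 2, "likely": 3, "highly likely": 4}
EXPOSURE = {"rare": 1, "occasional": 2, "frequent": 3, "continuous": 4}
LOSS_CRITICALITY = {"marginal": 1, "minor": 2, "major": 3, "severe": 4}
SAFETY_LEVEL = {"high": 0.5, "adequate": 1.0, "poor": 2.0}  # modifier on the raw score

def risk_estimate(likelihood, exposure, criticality, safety_level):
    """Combine the four descriptors into a qualitative risk estimate."""
    score = (LIKELIHOOD[likelihood] * EXPOSURE[exposure]
             * LOSS_CRITICALITY[criticality] * SAFETY_LEVEL[safety_level])
    if score < 6:
        return "low"
    if score < 24:
        return "moderate"
    return "high"

print(risk_estimate("likely", "frequent", "major", "adequate"))  # -> high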
The risk management component of risk analysis builds on the work of the risk assessment and may be
described as answering the questions: “Does anything need to be done about the risk?”, “What can be done
about it?”, and “What should be done about it?”. While risk assessment deals as far as possible with
objective evidence, risk management necessarily involves prudential judgements about which risks require
management (risk evaluation), the choice and application of treatment measures, and ultimately whether
the dealings should be permitted. Consequently, if there is uncertainty about risks (e.g. in early stage
research) this may influence the management measures that are selected. A consideration of the causal
pathways for harm to occur, that were elucidated in the risk assessment, provides a basis for strategic
selection of how, where and when to undertake risk treatment measures. This enables the identification of
the points at which treatment can be most effectively applied to break the causal pathway and prevent
adverse outcomes from being realised. While the focus of risk management is on prevention, the Regulator
also addresses how to manage adverse outcomes if a particular risk is realised. Important considerations are
whether the adverse consequences can be reduced or reversed, identifying measures that can achieve these
ends, and including these in licence conditions or contingency plans. Risk management actions undertaken
by the Regulator are not limited to devising the risk management plan.
Risk communication underpins the processes of risk assessment and risk management, and the safety regulations (both national and international) provide legislative mechanisms to ensure the clarity, transparency and accountability of the Regulator’s decision-making processes and that there is public
input into that process. Risk communication involves an interactive dialogue between risk assessors, risk
managers and stakeholders. In many instances differing perceptions of risk can influence the approach of
stakeholders to particular issues. The Regulator undertakes extensive consultation with a diverse range of
expert groups and authorities and key stakeholders, including the public, before deciding whether to issue a
licence. The Regulator endeavours to provide accessible information to interested parties on applications,
licences, dealings with process and technology, trial sites and the processes of risk assessment, risk
management, monitoring and compliance undertaken by the Regulator Office. Therefore, the Regulator is
committed to active risk communication.

ACRONYMS

ANSI
American National Standards Institute.

API
American Petroleum Institute.

ASME
American Society of Mechanical Engineers.

ASTM
American Society for Testing and Materials.

CCI
Confidential Commercial Information.

CCPS
Center for Chemical Process Safety.

CPD
Continuing professional development, a means to ensure ongoing competence in a changing world.

CSR
Corporate social responsibility, a system whereby organisations integrate social and environmental concerns
into their business operations and interactions with stakeholders.

DIR
Dealings involving Intentional Release.

DNIR
Dealings not involving Intentional Release.

DOE
United States Department of Energy.

DOT
United States Department of Transportation.

ERP
Emergency Response Planning Guideline.

EVC
Equilibrium Vapor Concentration.

FTAP
Fault Tree Analysis Program.

FMEA
Failure Mode and Effects Analysis.

GRI
Global Reporting Initiative, an international sustainability reporting institution that has developed guidelines
for voluntary reporting on the economic, environmental and social performance of organisations.

HAZOP
Hazard and Operability Analysis.

HHC
Highly Hazardous Chemical.

HHM
Hierarchical Holographic Modelling.

HSE
Health and Safety Executive, the United Kingdom Occupational Safety and Health regulator.

IDLH
Immediately Dangerous to Life or Health.

IEEE
Institute of Electrical and Electronics Engineers.

ILO
International Labour Organization, a United Nations agency, based in Geneva.

IMO
International Maritime Organization, a United Nations agency, based in London.

IOSH
Institution of Occupational Safety and Health.

IRRAS
Integrated Reliability and Risk Analysis System.

ISA
Instrument Society of America.

ISM
International Safety Management, a formal code requirement of the IMO that applies to most classes of
large ship.

ISO
International Organization for Standardization.

JHA
Job Hazard Analysis.

LFL
Lower Flammability Limit.

M&O
Management and Operation.

MCS
Minimal Cut Set.

MOC
Management of Change.

MSDS
Material Safety Data Sheet.

NFPA
National Fire Protection Association.

ORC
Organization Resources Counselors.

ORR
Operational Readiness Review.

OSH
Occupational safety and health.

OSHA
Occupational Safety and Health Administration.

OSHMS
Occupational safety and health management system.

P&ID
Piping and Instrumentation Diagram.

PC 1-4
Physical Containment levels 1-4.

PEL
Permissible Exposure Limit.

PrHA
Preliminary Hazards Analysis.

PHA
Process Hazard Analysis.

PSI
Process Safety Information.

PSM
Process Safety Management.

PSR
Pre-Startup Safety Review.

RARMP
Risk Assessment and Risk Management Plan.

SAR
Safety Analysis Report.

SHI
Substance Hazard Index.

SMARTT
Specific, measurable, agreed, realistic, timetabled and tracked action – a method for managing action plans.

SOP
Standard Operating Procedure.

SWOT
Strengths, weaknesses, opportunities and threats analysis.

TLV
Threshold Limit Value.

TQ
Threshold Quantity.

UFL
Upper Flammability Limit.

WHO
World Health Organization, a United Nations agency, based in Geneva.

GLOSSARY

ABSORBED DOSE (AD)


The energy absorbed by matter from ionizing radiation per unit mass of irradiated material at the place of
interest in that material. The absorbed dose is expressed in units of rad or gray (1 rad = 0.01 gray).

ACCEPTABLE
When applied to fire safety, “acceptable” is a level of protection which the Safety Authority considers
sufficient to achieve the fire and life safety objectives defined in safety requirements. In some instances, it is
the level of protection necessary to meet a code or standard. In other instances it is a level of protection
that deviates (plus or minus) from a code or standard as necessary and yet adequately protects against the
inherent fire hazards.

ACCIDENT
An unplanned sequence of events that results in undesirable consequences.
An unwanted transfer of energy or an environmental condition that, due to the absence or failure of barriers
or controls, produces injury to persons, damage to property, or reduction in process output.

ACCIDENT ANALYSES
The term accident analyses refers to those bounding analyses selected for inclusion in the safety analysis
report. These analyses refer to design basis accidents only.

ACCIDENT EVENT SEQUENCE


An unplanned event or sequence of events that has an undesirable consequence.

ACCIDENT (EXPLOSIVE)
An incident or occurrence that results in an uncontrolled chemical reaction involving explosives.

ACCOUNTABILITY
The state of being liable for explanation to a superior official for the exercise of authority. The delegate of an authority is accountable to the delegating responsible authority for the proper and diligent exercise of that
authority. Responsibility differs from accountability in that a responsible official “owns” the function for which
he or she is responsible; it is an integral part of his or her duties to see that the function is properly
executed, to establish criteria for the judgement of excellence in its execution, and to strive for continuous
improvement in that execution. A responsible official is associated with the outcomes of the exercise of
authority regardless of whether it was delegated, and regardless of whether the designee properly followed
guidance. Accountability, on the other hand, involves the acceptance of the authority for execution (or for
further delegation of components of execution), by using guidance and criteria established by the
responsible authority.

ADDITIONS AND MODIFICATIONS


Changes to structures, systems, and components (SSCs) for reasons other than increasing resistance to natural phenomena hazards.

ADMINISTRATIVE CONTROLS
Provisions relating to organization and management, procedures, recordkeeping, assessment, and reporting
necessary to ensure safe operation of a facility.
With respect to nuclear facilities administrative controls means the section of the Technical Safety
Requirements (TSR) containing provisions for safe operation of a facility including (1) requirements for
reporting violations of TSR, (2) staffing requirements important to safe operations, and (3) commitments to
the safety management programs and procedures identified in the Safety Analysis Report as necessary
elements of the facility safety basis provisions.

AGGREGATE THRESHOLD QUANTITY


The total amount of a hazardous chemical contained in vessels that are interconnected, or contained in a
process and nearby unconnected vessels, that may be adversely affected by an event at that process.

AIRBORNE RADIOACTIVE MATERIAL OR AIRBORNE RADIOACTIVITY


Radioactive material dispersed in the air in the form of dusts, fumes, particulates, mists, vapors, or gases.

AIRBORNE RADIOACTIVITY AREA


Any area, accessible to individuals, where: (1) the concentration of airborne radioactivity, above natural background, exceeds or is likely to exceed the derived air concentration (DAC) values; or (2) an individual present in the area without respiratory protection could receive an intake exceeding 12 DAC-hours in a week.

AMBIENT AIR
The general air in the area of interest (e.g., the general room atmosphere), as distinct from a specific
stream or volume of air that may have different properties.

ANALYSIS
The use of methods and techniques of arranging data to: (1) assist in determining what additional data are
required; (2) establish consistency, validity, and logic; (3) establish necessary and sufficient events for
causes; and (4) guide and support inferences and judgements.

ANALYTICAL TREE
Graphical representation of an accident in a deductive approach (general to specific). The structure
resembles a tree - that is, narrow at the top with a single event (accident), then branching out as the tree is
developed, and identifying root causes at the bottom branches.

ANNUAL LIMIT ON INTAKE (ALI)


The derived limit for the amount of radioactive material taken into the body of an adult worker by inhalation
or ingestion in a year. ALI is the smaller value of intake of a given radionuclide in a year by the reference
man (ICRP Publication 23) that would result in a committed effective dose equivalent of 5 rems (0.05
sievert) or a committed dose equivalent of 50 rems (0.5 sievert) to any individual organ or tissue.
ALI values for intake by ingestion and inhalation of selected radionuclides are based on Table 1 of the U.S.
Environmental Protection Agency's Federal Guidance Report No. 11, Limiting Values of Radionuclide Intake
and Air Concentration and Dose Conversion Factors for Inhalation, Submersion, and Ingestion, published
September 1988.

ARC-FLASH HAZARD
A dangerous condition associated with the release of energy caused by an electric arc (IEEE 1584-2002).

ARC RATING
The maximum incident energy resistance demonstrated by a material (or a layered system of materials)
prior to breakopen or at the onset of a second-degree skin burn. Arc rating is normally expressed in cal/cm2.

ASSESSMENT
A review, evaluation, inspection, test, check, surveillance, or audit, to determine and document whether
items, processes, systems, or services meet specific requirements and are performing effectively.

AUTHORIZED PERSON
Any person required by work duties to be in a regulated area.

BACKGROUND RADIATION
Radiation from (1) naturally occurring radioactive materials which have not been technologically enhanced;
(2) cosmic sources; (3) global fallout as it exists in the environment (such as from the testing of nuclear
explosive devices); (4) radon and its progeny in concentrations or levels existing in buildings or the
environment which have not been elevated as a result of current or prior activities; and (5) consumer
products containing nominal amounts of radioactive material or producing nominal amounts of radiation.

BARRIER
Anything used to control, prevent, or impede energy flows. Common types of barriers include equipment,
administrative procedures and processes, supervision and management, warning devices, knowledge and
skills, and physical objects. Barriers may be either control or safety.

BARRIER ANALYSIS
An analytical technique used to identify energy sources and the failed or deficient barriers and controls that
contributed to an accident.

BASELINE
A quantitative expression of projected costs, schedule, and technical requirements; the established plan
against which the status of resources and the progress of a project can be measured.

BIOASSAY
The determination of kinds, quantities, or concentrations, and, in some cases, locations of radioactive
material in the human body, whether by direct measurement or by analysis, and evaluation of radioactive
materials excreted or removed from the human body.

BREATHING ZONE
A hemisphere forward of the shoulders with a radius of approximately 6 to 9 inches (i.e., an area as close as
practicable to the nose and mouth of the employee being monitored for a chemical or biological hazard).
Breathing zone samples provide the best representation of actual exposure.

CALIBRATION
To adjust and determine either: (1) The response or reading of an instrument relative to a standard (e.g.,
primary, secondary, or tertiary) or to a series of conventionally true values; or (2) The strength of a
radiation source relative to a standard (e.g., primary, secondary, or tertiary) or conventionally true value.

CATASTROPHIC RELEASE
A major uncontrolled emission, fire, or explosion, involving one or more highly hazardous chemicals that
presents serious danger to employees in the workplace or to the public.

CAUSAL FACTOR
An event or condition in the accident sequence necessary and sufficient to produce or contribute to the
unwanted result. Causal factors fall into three categories: direct cause, contribution cause, and root cause.

CAUSE
Anything that contributes to an accident or incident. In an investigation, the use of the word “cause” as a
singular term should be avoided. It is preferable to use it in the plural sense, such as “causal factors”, rather
than identifying “the cause”.

CEILING LIMIT
The concentration in the employee's breathing zone that shall not be exceeded at any time during any part
of the working day. For airborne contaminants, if instantaneous monitoring is not feasible, then the ceiling
shall be assessed as a 15 minute time-weighted average exposure that shall not be exceeded at any time
during the working day.
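
Where instantaneous monitoring is not feasible and the ceiling is assessed as a 15-minute time-weighted average, the calculation is a duration-weighted mean. A minimal Python sketch follows; the sample concentrations and the ceiling value are hypothetical.

# Hypothetical samples over a 15-minute assessment period:
# (concentration in ppm, duration in minutes)
samples = [(4.8, 5), (5.6, 5), (5.1, 5)]
ceiling_limit_ppm = 5.0  # hypothetical ceiling value

# 15-minute time-weighted average concentration
twa_15min = sum(c * t for c, t in samples) / sum(t for _, t in samples)
print(round(twa_15min, 2), "ppm", "exceeds ceiling" if twa_15min > ceiling_limit_ppm else "within ceiling")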

CERTIFICATION
The process by which contractor facility management provides written endorsement of the satisfactory
achievement of qualification of a person for a position.

CHANGE
Stress on a system that was previously in a state of equilibrium, or anything that disturbs the planned or
normal functioning of a system.

CHANGE ANALYSIS
An analytical technique used for accident investigations, wherein accident-free reference bases are
established, and changes relevant to accident causes and situations are systematically identified. In change
analysis, all changes are considered, including those initially considered trivial or obscure.

CHANGE CONTROL
A process that ensures all changes are properly identified, reviewed, approved, implemented, tested, and
documented.

CHEMICAL PROCESSING
Those activities or operations that involve the production, use, storage, processing, and/or disposal of
caustic, toxic, or volatile chemicals in liquid, gaseous, particulate, powder, or solid states.

COLLECTIVE DOSE
The sum of the total effective dose equivalent values for all individuals in a specified population. Collective
dose is expressed in units of person-rem (or person-sievert).

COMBINED STANDARD UNCERTAINTY


Standard uncertainty of the result of a measurement when that result is obtained from the values of a
number of other quantities, equal to the positive square root of a sum of terms, the terms being the
variances or covariances of these other quantities weighted according to how the measurement result varies
with changes in these quantities.
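
For a result y = f(x1, ..., xn) this definition corresponds to the familiar propagation-of-uncertainty rule. The Python sketch below assumes uncorrelated input quantities (so the covariance terms vanish); the sensitivity coefficients and standard uncertainties are hypothetical.

import math

# Hypothetical example with three uncorrelated input quantities.
sensitivities = [2.0, -0.5, 1.3]        # c_i = dy/dx_i at the measured values
std_uncertainties = [0.10, 0.40, 0.05]  # u(x_i) for each input quantity

# Combined standard uncertainty: positive square root of the weighted sum of variances.
u_c = math.sqrt(sum((c * u) ** 2 for c, u in zip(sensitivities, std_uncertainties)))
print(round(u_c, 3))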

COMBUSTIBLE LIQUID
A liquid having a closed cup flash point at or above 100ºF (38ºC).

COMBUSTIBLE MATERIAL
Any material that will ignite and burn. Any material that does not comply with the definition of
“noncombustible” is considered combustible. The term combustible is not related to any specific ignition
temperature or flame spread rating.

COMMITTED DOSE EQUIVALENT (HT50)


The dose equivalent calculated to be received by a tissue or organ over a 50 year period after the intake of a
radionuclide into the body. It does not include contributions from radiation sources external to the body.
Committed dose equivalent is expressed in units of rem (or sievert). See Committed Effective Dose
Equivalent, Cumulative Total Effective Dose Equivalent, Deep Dose Equivalent, Dose Equivalent, Effective
Dose Equivalent, Lens of the Eye Dose Equivalent, and Total Effective Dose Equivalent.

COMMITTED EFFECTIVE DOSE EQUIVALENT (HE, 50)


The sum of the committed dose equivalents to various tissues in the body (HT, 50), each multiplied by the
appropriate weighting factor (WT). Committed effective dose equivalent is expressed in units of rem (or
sievert). See Committed Dose Equivalent, Cumulative Total Effective Dose Equivalent, Deep Dose Equivalent,
Dose Equivalent, Effective Dose Equivalent, Lens of the Eye Dose Equivalent, and Total Effective Dose
Equivalent.
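
In symbols, HE,50 is the sum over tissues of wT multiplied by HT,50. The short Python sketch below illustrates the summation; the tissue weighting factors and committed dose equivalents are hypothetical values chosen for demonstration.

# Hypothetical committed dose equivalents H_T,50 (rem) and weighting factors w_T.
tissues = {
    "lung":    {"h_t50": 2.0, "w_t": 0.12},
    "thyroid": {"h_t50": 0.5, "w_t": 0.03},
    "bone":    {"h_t50": 1.0, "w_t": 0.01},
}

# Committed effective dose equivalent: sum of w_T * H_T,50 over tissues (rem).
he_50 = sum(t["w_t"] * t["h_t50"] for t in tissues.values())
print(round(he_50, 3), "rem")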

CONFINEMENT AREA
An area having structures or systems from which releases of hazardous materials are controlled. The
primary confinement systems are the process enclosures (glove boxes, conveyors, transfer boxes, other
spaces normally containing hazardous materials), which are surrounded by one or more secondary
confinement areas (operating area compartments).

CONFINEMENT BARRIERS
Primary confinement – Provides confinement of hazardous material to the vicinity of its processing. This
confinement is typically provided by piping, tanks, glove boxes, encapsulating material, and the like, along
with any offgas systems that control effluent from within the primary confinement.
Secondary confinement – Consists of a cell or enclosure surrounding the process material or equipment
along with any associated ventilation exhaust systems from the enclosed area. Except in the case of areas
housing glove-box operations, the area inside this barrier is usually unoccupied (e.g., canyons, hot cells); it
provides protection for operating personnel.
Tertiary confinement – Typically provided by walls, floor, roof, and associated ventilation exhaust systems of
the facility. It provides a final barrier against the release of hazardous material to the environment.

CONFINEMENT SYSTEM
The barrier and its associated systems (including ventilation) between areas containing hazardous materials and the environment or other areas in the nuclear facility that are normally expected to have levels of hazardous materials lower than allowable concentration limits.

CONSEQUENCE
Adverse outcome or impact of an event. There can be more than one consequence from one event.
Consequences can be expressed qualitatively or quantitatively. Consequences are considered in relation to
harm to human health, activity and the environment.

CONTAINMENT SYSTEM
A structurally closed barrier and its associated systems (including ventilation) between areas containing
hazardous materials and the environment or other areas in the nuclear facility that are normally expected to
have levels of hazardous materials lower than allowable concentration limits. A containment barrier is
designed to remain closed and intact during all design basis accidents.

CONTAMINATED FACILITIES
Facilities that have structural components and systems contaminated with hazardous chemical or radioactive
substances, including radionuclides. This definition excludes facilities that contain no residual hazardous
substances other than those present in building materials and components, such as asbestos-containing material, lead-based paint, or equipment containing PCBs. This definition excludes facilities in which bulk or
containerized hazardous substances, including radionuclides, have been used or managed if no contaminants
remain in or on the structural components and systems.

CONTAMINATION AREA
Any area, accessible to individuals, where removable surface contamination levels exceed or are likely to
exceed the removable surface contamination values.

CONTEXT
Parameters within which risk must be managed, including the scope and boundaries for the risk assessment
and risk management process.

CONTINUOUS AIR MONITOR (CAM)


An instrument that continuously samples and measures the levels of airborne radioactive materials on a
“real-time” basis and has alarm capabilities at preset levels. See Monitoring, Performance Monitoring,
Personal Monitoring, Personnel Monitoring, Post-Accident Monitoring, Primary Environmental Monitors,
Safety Class Monitoring Equipment, and Secondary Environmental Monitors.

CONTROLLED AREA
Any area to which access is managed by or for DOE to protect individuals from exposure to radiation and/or
radioactive material.

CONTROLLED DOCUMENT
A document whose content is maintained uniform among the copies by an administrative control system.

CONTROLS
When used with respect to nuclear reactors, apparatus and mechanisms that, when manipulated, directly
affect the reactivity or power level of a reactor or the status of an engineered safety feature. When used
with respect to any other nuclear facility, "controls" means apparatus and mechanisms that, when manipulated, could affect the chemical, physical, metallurgical, or nuclear process of the nuclear facility in such a manner
as to affect the protection of health and safety.

CORE SAFETY MANAGEMENT FUNCTIONS


The core safety management functions are: (1) define the scope of work; (2) analyze the hazards; (3) develop and implement hazard controls; (4) perform work within controls; and (5) provide feedback and continuous improvement.

COST-BENEFIT ANALYSIS
A systematic and documented analysis of the expected costs and benefits related to a particular action.

CRITICALITY
The condition in which a nuclear fission chain reaction becomes self-sustaining.

CRITICALITY ACCIDENT
The release of energy as a result of accidentally producing a self-sustaining or divergent fission chain
reaction.

CUMULATIVE TOTAL EFFECTIVE DOSE EQUIVALENT


The sum of all total effective dose equivalent values recorded for an individual, where available, for each
year occupational dose was received. See Committed Dose Equivalent, Committed Effective Dose Equivalent,
Deep Dose Equivalent, Dose Equivalent, Effective Dose Equivalent, Lens of the Eye Dose Equivalent, and
Total Effective Dose Equivalent.

DECONTAMINATION
The removal or reduction of residual radioactive and hazardous materials by mechanical, chemical or other
techniques to achieve a stated objective or end condition.

DEEP DOSE EQUIVALENT


The dose equivalent derived from external radiation at a depth of 1 cm in tissue. See Committed Dose
Equivalent, Committed Effective Dose Equivalent, Cumulative Total Effective Dose Equivalent, Dose
Equivalent, Effective Dose Equivalent, Lens of the Eye Dose Equivalent, and Total Effective Dose Equivalent.

DEFERRED MAINTENANCE
Maintenance that was not performed when it should have been or was scheduled to be and which,
therefore, is put off or delayed for a future period and reported annually.

DEFICIENCY
Any condition that deviates from the designed-in capacity of structures, systems, and components and
results in a degraded ability to accomplish its intended function.

DEFLAGRATION
A rapid chemical reaction in which the output of heat is sufficient to enable the reaction to proceed and be
accelerated without input of heat from another source. Deflagration is a surface phenomenon, with the
reaction products flowing away from the unreacted material along the surface at subsonic velocity. The
effect of a true deflagration under confinement is an explosion. Confinement of the reaction increases
pressure, rate of reaction and temperature, and may cause transition into a detonation.

DERIVED AIR CONCENTRATION (DAC)


For a given radionuclide, the airborne concentration that equals the ALI divided by the volume of air breathed by an average worker during a working year of 2,000 hours (an assumed breathing volume of 2,400 m3).
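
The arithmetic is a single division of the ALI by the 2,400 m3 of air assumed to be breathed in a 2,000-hour working year, as the Python sketch below shows; the ALI value used is hypothetical.

ali_uci = 200.0               # hypothetical ALI for a radionuclide, in microcuries
breathing_volume_m3 = 2400.0  # air breathed in a 2,000-hour working year

dac_uci_per_m3 = ali_uci / breathing_volume_m3
print(round(dac_uci_per_m3, 4), "uCi/m3")  # about 0.0833 uCi/m3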

DERIVED CONCENTRATION GUIDE (DCG)


The concentration of a radionuclide in air or water that, under conditions of continuous exposure for one
year by one exposure mode (i.e., ingestion of water, submersion in air, or inhalation) would result in an
effective dose equivalent of 100 mrem or 0.1 rem (1 mSv). Decay products are not considered when the parent radionuclide is the cause of the exposure (1 rem = 0.01 sievert).

DETERMINISTIC METHOD
The technique in which a single estimate of parameters is used to perform each analysis. To account for
uncertainty, several analyses may be conducted with different parameters.

DETONATION
A violent chemical reaction within a chemical compound or mechanical mixture involving heat and pressure.
A detonation is a reaction that proceeds through the reacted material toward the unreacted material at a
supersonic velocity. The result of the chemical reaction is the exertion of extremely high pressure on the
surrounding medium, forming a propagating shock wave that is originally of supersonic velocity. When the
material is located on or near the surface of the ground, a detonation is normally characterized by a crater.

DOCUMENT
Recorded information that describes, specifies, reports, certifies, requires, or provides data or results.

DOCUMENTED SAFETY ANALYSIS


A documented analysis of the extent to which a nuclear facility can be operated safely with respect to
workers, the public, and the environment, including a description of the conditions, safe boundaries, and
hazard controls that provide the basis for ensuring safety.

DOSE EQUIVALENT (H)


The product of absorbed dose in rad (or gray) in tissue, a quality factor, and other modifying factors. Dose
equivalent is expressed in units of rem (or sievert) (1 rem = 0.01 sievert). See Committed Dose Equivalent,
Committed Effective Dose Equivalent, Cumulative Total Effective Dose Equivalent, Deep Dose Equivalent,
Effective Dose Equivalent, Lens of the Eye Dose Equivalent, and Total Effective Dose Equivalent.
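
As a worked illustration of the definition, the Python sketch below multiplies an absorbed dose by a quality factor (the other modifying factor is taken as 1); the absorbed dose and quality factor values are hypothetical.

absorbed_dose_rad = 0.5  # hypothetical absorbed dose in rad
quality_factor = 20      # hypothetical quality factor
modifying_factor = 1

# Dose equivalent H = absorbed dose x quality factor x modifying factor.
h_rem = absorbed_dose_rad * quality_factor * modifying_factor
h_sievert = h_rem * 0.01  # 1 rem = 0.01 sievert
print(h_rem, "rem =", h_sievert, "Sv")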

EFFECTIVE DOSE EQUIVALENT (EDE)


The dose equivalent from both external and internal irradiation. The effective dose equivalent is expressed in
units of rem. See Committed Dose Equivalent, Committed Effective Dose Equivalent, Cumulative Total
Effective Dose Equivalent, Deep Dose Equivalent, Dose Equivalent, Lens of the Eye Dose Equivalent, and
Total Effective Dose Equivalent.

EFFECTIVE DOSE EQUIVALENT (HE)


The summation of the products of the dose equivalent received by specified tissues of the body and the
appropriate weighting factor. It includes the dose from radiation sources internal and external to the body.
The effective dose equivalent is expressed in units of rem (or sievert). See Committed Dose Equivalent,
Committed Effective Dose Equivalent, Cumulative Total Effective Dose Equivalent, Deep Dose Equivalent,
Dose Equivalent, Lens of the Eye Dose Equivalent, and Total Effective Dose Equivalent.

EIGHT-HOUR TIME-WEIGHTED AVERAGE (TWA) EXPOSURE LIMIT


The time-weighted average concentration in the employee's breathing zone which shall not be exceeded in
any 8 hour work shift of a 40-hour workweek.
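
The underlying arithmetic is a duration-weighted mean concentration over the 8-hour shift, as the Python sketch below illustrates; the sampled concentrations, durations, and exposure limit are hypothetical.

# Hypothetical sampling results for one 8-hour shift:
# (concentration in mg/m3, duration in hours)
shift_samples = [(0.8, 3), (1.5, 2), (0.4, 3)]
twa_limit = 1.0  # hypothetical 8-hour TWA exposure limit, mg/m3

twa_8h = sum(c * t for c, t in shift_samples) / 8.0
print(round(twa_8h, 2), "mg/m3", "exceeds limit" if twa_8h > twa_limit else "within limit")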

ELECTRIC SHOCK
Physical stimulation that occurs when electrical current passes through the body (IEEE 1584-2002).

ELECTRICAL HAZARD
A dangerous condition such that contact or equipment failure can result in electric shock, arc flash burn,
thermal burn, or blast.

ENERGIZED
Electrically connected to or having a source of voltage.

ENERGIZED
Electrically connected to a source of potential difference, or electrically charged so as to have a potential
significantly different from that of earth in the vicinity.

ENERGY
The capacity to do work. Energy exists in many forms, including acoustic, potential, electrical, kinetic,
thermal, biological, chemical and radiation (both ionizing and non-ionizing).

ENERGY FLOW
The transfer of energy from its source to some other point. There are two types of energy flows: wanted
(controlled and able to do work) and unwanted (uncontrolled and able to do harm).

ENGINEERED CONTROLS
Physical controls, including set points and operating limits; as distinct from administrative controls.

ENGINEERED SAFETY FEATURES


Systems, components, or structures that prevent and mitigate the consequences of potential accidents
described in the Final Safety Analysis Report including the bounding design basis accidents.

ENVIRONMENTAL MANAGEMENT SYSTEM (EMS)


The part of the overall management system that includes organizational structure, planning activities,
responsibilities, practices, procedures, processes, and resources for developing, integrating, achieving,
reviewing, and maintaining environmental policy; a continuing cycle of planning, implementing, evaluating,
and improving processes and actions undertaken to achieve environmental goals.

ENVIRONMENTAL PERFORMANCE
Measurable results of the environmental management system, related to an organization’s control of its
environmental aspects, based on its environmental policy, objectives, and targets.

ENVIRONMENTAL PROTECTION STANDARD


A specified set of rules or conditions concerned with delineation of procedures; definition of terms;
specification of performance, design, or operations; or measurements that define the quantity of emissions,
discharges, or releases to the environment and the quality of the environment.

EVENT
Occurrence of a particular set of circumstances. The event can be certain or uncertain. The event can be a
single occurrence or a series of occurrences.

EVENTS AND CAUSAL FACTORS CHART


Graphical depiction of a logical series of events and related conditions that precede the accident.

EXPANDED UNCERTAINTY
Quantity defining an interval about the result of a measurement that may be expected to encompass a large
fraction of the distribution of values that could reasonably be attributed to the measurand.

EXPLOSIVE
Any chemical compound or mechanical mixture that, when subjected to heat, impact, friction, shock, or
other suitable initiation stimulus, undergoes a very rapid chemical change with the evolution of large
volumes of highly heated gases that exert pressures in the surrounding medium. The term applies to
materials that either detonate or deflagrate. DOE explosives may be dyed various colors, except pink, which
is reserved for mock explosives.

EXPLOSIVES HAZARD CLASSES


The level of protection required for any specific explosives activity, based on the hazard class (accident
potential) for the explosives activity involved. Four hazard classes are defined for explosives activities as
follows in definitions for explosives hazard classes I to IV.
Class I consists of those explosives activities that involve a high accident potential; any personnel exposure
is unacceptable for Class I activities and they thus require remote operations. In general, Class I would
include activities where energies that may interface with the explosives are approaching the upper safety
limits, or the loss of control of the interfacing energy is likely to exceed the safety limits for the explosives
involved. This category includes those research and development activities where the safety implications
have not been fully characterized. Examples of Class I activities are screening, blending, pressing, extrusion,
drilling of holes, dry machining, machining explosives and metal in combination, some environmental testing,
new explosives development and processes, explosives disposal, and destructive testing.
Class II consists of those explosives activities that involve a moderate accident potential because of the
explosives type, the condition of the explosives, or the nature of the operations involved. This category
consists of activities where the accident potential is greater than for Class III, but the exposure of personnel
performing contact operations is acceptable. Class II includes activities where the energies that do or may
interface with the explosives are normally well within the safety boundaries for the explosives involved, but
where the loss of control of these energies might approach the safety limits of the explosives. Examples of
Class II activities are weighing, some wet machining, assembly and disassembly, some environmental
testing, and some packaging operations.
Class III consists of those explosives activities that represent a low accident potential. Class III includes
explosives activities during storage and operations incidental to storage or removal from storage.
Class IV consists of those explosives activities with insensitive high explosives (IHE) or insensitive high
explosives subassemblies. Although mass detonating, this explosive type is so insensitive that a negligible
probability exists for accidental initiation or transition from burning to detonation. IHE explosions will be
limited to pressure ruptures of containers heated in a fire. Most processing and storage activities with IHE
and IHE subassemblies are Class IV.

EXPOSURE ASSESSMENT (EA)


The estimation or determination (qualitative or quantitative) of the magnitude, frequency, duration, and
route of employee exposure to a substance, harmful physical agent, ergonomic stress, or harmful biological
agent that poses or may pose a recognized hazard to the health of employees.
The systematic collection and analysis of occupational hazards and exposure determinants such as work
tasks; magnitude, frequency, variability, duration, and route of exposure; and the linkage of the resulting
exposure profiles of individuals and similarly exposed groups for the purposes of risk management and
health surveillance.

EXTERNAL DOSE OR EXPOSURE


That portion of the dose equivalent received from radiation sources outside the body (e.g., “external
sources”).

EXTERNAL EVENTS
Natural phenomena or man-caused hazards not related to the facility.

FINAL SAFETY ANALYSIS REPORT (FSAR)


The Safety Analysis Report (SAR) submitted to and approved by DOE prior to the authorization to operate a
new nuclear facility or that documents the adequacy of the safety analysis for an existing nuclear facility.
See Preliminary Documented Safety Analysis, Preliminary Safety Analysis Report, Safety Analysis Report,
Safety Basis, Safety Evaluation, and Safety Evaluation Report.

FIRE BARRIER
A continuous membrane, either vertical or horizontal, such as a wall or floor assembly that is designed and
constructed with a specified fire resistance rating to limit the spread of fire and that also will restrict the
movement of smoke. Such barriers may have protected openings.

FIRE HAZARDS ANALYSIS


An assessment of the risks from fire within an individual fire area in a facility, analyzing the relationship to existing or proposed fire protection. This shall include an assessment of the consequences of fire on safety
systems and the capability to safely operate a facility during and after a fire.

FIRE LOSS
The money cost of restoring damaged property to its pre-fire condition. When determining loss, the
estimated damage to the facility and contents should include replacement cost, less salvage value. Fire loss
should exclude the costs for: (1) property scheduled for demolition; (2) decommissioned property not
carried on books as a value.
Fire loss should also include the cost of: (1) decontamination and cleanup; (2) the loss of production or program continuity; (3) the indirect costs of fire extinguishment (such as damaged fire department equipment); and (4) the effects on related areas.

FIRE PROTECTION
A broad term which encompasses all aspects of fire safety, including: building construction and fixed building
fire features, fire suppression and detection systems, fire water systems, emergency process safety control
systems, emergency fire fighting organizations (fire departments, fire brigades, etc.), fire protection
engineering, and fire prevention. Fire protection is concerned with preventing or minimizing the direct and
indirect consequences of fire. It also includes aspects of the following perils as they relate to fire protection:
explosion, natural phenomenon, smoke, and water damage from fire.

FIRE PROTECTION SYSTEM


Any system designed to detect and contain or extinguish a fire, as well as limit the extent of fire damage
and enhance life safety.

FIRE RESISTANCE RATING
The time, in hours, that a particular construction will withstand a standard fire exposure, as determined by American Society for Testing and Materials standard ASTM E-119.

FLAME RESISTANT (FR)


The property of a material whereby combustion is prevented, terminated, or inhibited following the
application of a flaming or non-flaming source of ignition, with or without subsequent removal of the ignition
source.

FLAME SPREAD RATING


Flame spread rating is a numerical classification determined by the test method in American Society for Testing and Materials standard ASTM E-84, which indexes the relative burning behavior of a material by
quantifying the spread of flame of a test specimen. The surface burning characteristic of a material is not a
measure of resistance to fire exposure.

FLAMMABLE GAS
A gas that, at ambient temperature and pressure, forms a flammable mixture with air at a concentration of 13 percent by volume or less; or a gas that, at ambient temperature and pressure, forms a range of flammable mixtures with air wider than 12 percent by volume, regardless of the lower limit.
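
The definition amounts to a two-part test, sketched below in Python; the example flammability limits are hypothetical, and the thresholds follow the definition above.

def is_flammable_gas(lfl_pct, ufl_pct):
    """Two-part test: low LFL, or a wide flammable range (percent by volume in air)."""
    forms_mixture_at_low_concentration = lfl_pct <= 13.0
    wide_flammable_range = (ufl_pct - lfl_pct) > 12.0
    return forms_mixture_at_low_concentration or wide_flammable_range

# Hypothetical flammability limits (percent by volume):
print(is_flammable_gas(4.0, 75.0))   # low LFL and wide range -> True
print(is_flammable_gas(15.0, 25.0))  # LFL above 13% and a 10% range -> False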

FLAMMABLE LIQUID
A liquid having a closed cup flash point below 100ºF (38ºC) and having a vapor pressure not exceeding 40
psia (2068 mm Hg) at 100ºF (38ºC).

FLASH HAZARD
A dangerous condition associated with the release of energy caused by an electric arc.

FLASH HAZARD ANALYSIS


A study investigating a worker’s potential exposure to arc-flash energy, conducted for the purpose of injury
prevention and the determination of safe work practices and the appropriate levels of personal protective
equipment.

GRADED APPROACH
The process of assuring that the level of analysis, documentation, and actions used to comply with the safety requirements is commensurate with: (1) The relative importance to safety, safeguards, and security;
(2) The magnitude of any hazard involved; (3) The life cycle stage of a facility; (4) The programmatic
mission of a facility; (5) The particular characteristics of a facility; (6) The relative importance of radiological
and nonradiological hazards; (7) Any other relevant factor.

HARSH ENVIRONMENT
Air environment where service conditions are expected to exceed the mild environment conditions as a result
of design basis accidents, fires, explosions, natural phenomena or other man-caused external events.

HAZARD
A source of danger (i.e., material, energy source, or operation) with the potential to cause illness, injury, or
death to a person or damage to a facility or to the environment (without regard to the likelihood or
credibility of accident scenarios or consequence mitigation).

HAZARD ANALYSIS
The determination of material, system, process, and plant characteristics that can produce undesirable
consequences, followed by the assessment of hazardous situations associated with a process or activity.
Largely qualitative techniques are used to pinpoint weaknesses in design or operation of the facility that
could lead to accidents. The hazards analysis examines the complete spectrum of potential accidents that
could expose members of the public, onsite workers, facility workers, and the environment to hazardous
materials.

HAZARD CATEGORIES
The consequences of unmitigated releases of radioactive and hazardous material are evaluated and
classified by the following hazard categories:
Category 1 – The hazard analysis shows the potential for significant offsite consequences.
Category 2 – The hazard analysis shows the potential for significant onsite consequences.
Category 3 – The hazard analysis shows the potential for only significant localized consequences.

HAZARD CLASSES
Non-nuclear facilities will be categorized as high, moderate, or low hazards based on the following:
High – hazards with a potential for onsite and offsite impacts to large numbers of persons or for major
impacts to the environment;
Moderate – hazards which present considerable potential onsite impacts to people or the environment, but
at most only minor offsite impacts;
Low – hazards which present minor onsite and negligible offsite impacts to people and the environment.

HAZARD CONTROLS
Measures to eliminate, limit, or mitigate hazards to workers, the public, or the environment, including (1) Physical, design, structural, and engineering features; (2) Safety structures, systems, and components; (3) Safety management programs; (4) Technical Safety Requirements; (5) Other controls necessary to provide adequate protection from hazards.

HAZARDOUS EXPOSURE
Exposure to any toxic substance, harmful physical agent, ergonomic stressor, or harmful biological agent
that poses a recognized hazard to the health of employees.

HAZARDOUS MATERIAL
Any solid, liquid, or gaseous material that is not radioactive but is toxic, explosive, flammable, corrosive, or
otherwise physically or biologically threatening to health. Any solid, liquid, or gaseous material that is
chemical, toxic, explosive, flammable, radioactive, corrosive, chemically reactive, or unstable upon prolonged
storage in quantities that could pose a threat to life, property, or the environment.
A substance or material, which has been determined to be capable of posing an unreasonable risk to health,
safety, and property when transported in commerce, and which has been so designated. The term includes
hazardous substances, hazardous wastes, marine pollutants, and elevated temperature materials.
Any chemical which is a physical hazard or a health hazard.

HAZARDOUS WASTE
A solid waste, or combination of solid wastes, which because of its quantity, concentration, or physical,
chemical, or infectious characteristics may (1) cause, or significantly contribute to an increase in mortality or
an increase in serious irreversible, or incapacitating reversible, illness; or (2) pose a substantial present or
potential hazard to human health or the environment when improperly treated, stored, transported, or disposed of, or otherwise managed.

HEALTH HAZARD ASSESSMENT


The comprehensive and systematic process of identifying, classifying, and evaluating health hazards in the
workplace. Health hazard assessments evaluate the probability and severity of the effects of exposure to the
hazard.

HEALTH SURVEILLANCE
The continuing scrutiny of health events to detect changes in disease trends or disease distribution. The
continuing collection and maintenance of industrial hygiene data is a component of health surveillance
needed to determine whether observed adverse health events are related to working conditions.

HEAT RESISTANT
A material having the quality or capability of withstanding heat for a specified period at a maximum given
temperature without decomposing or losing its integrity.

HIGH EFFICIENCY PARTICULATE AIR (HEPA) FILTER


A filter capable of trapping and retaining at least 99.97 percent of 0.3 micrometer monodispersed particles.

HUMAN FACTORS
Those biomedical, psychosocial, workplace environment, and engineering considerations pertaining to people
in a human-machine system. Some of these considerations are allocation of functions, task analysis, human
reliability, training requirements, job performance aiding, personnel qualification and selection, staffing
requirements, procedures, organizational effectiveness, and workplace environmental conditions.

HUMAN FACTORS ENGINEERING


The application of knowledge about human performance capabilities and behavioral principles to the design,
operation, and maintenance of human-machine systems so that personnel can function at their optimum
level of performance.

IMMUNE RESPONSE
The series of cellular events by which the immune system reacts to challenge by an antigen.

INCIDENT
An unplanned event that may or may not result in injuries and loss.

INTERNAL DOSE OR EXPOSURE
That portion of the dose equivalent received from radioactive material taken into the body (e.g., “internal sources”).

LIFE CYCLE
The life of an asset from planning through acquisition, maintenance, operation, remediation, disposition,
long-term stewardship, and disposal.

LIFE CYCLE PLAN


An analysis and description of the major events and activities in the life of a functional unit from planning
through decommissioning and site restoration. The plan documents the history of the functional unit and
forecasts future activities, including major line item and expense projects and their duration, relationships,
and impact on life expectancy. The plan also describes maintenance practices and costs.

LIKELIHOOD
Chance of something happening. Likelihood can be expressed qualitatively or quantitatively.

MAINTENANCE
Day-to-day work that is required to sustain property in a condition suitable for it to be used for its
designated purpose and includes preventive, predictive, and corrective (repair) maintenance.
The proactive and reactive day-to-day work that is required to maintain and preserve facilities and SSCs
within them in a condition suitable for performing their designated purpose, and includes planned or
unplanned periodic, preventive, predictive, seasonal or corrective (repair) maintenance.

MAINTENANCE MANAGEMENT
The administration of a program utilizing such concepts as organization, plans, procedures, schedules, cost
control, periodic evaluation, and feedback for the effective performance and control of maintenance with
adequate provisions for interface with other concerned disciplines such as health, safety, environmental
compliance, quality control, and security. All work done in conjunction with existing property is either
maintenance (preserving), repair (restoring), service (cleaning and making usable), or improvements. The
work to be considered under the DOE maintenance management program is only that for maintenance and
repair.

MAINTENANCE PLAN
A narrative description of a site's maintenance program. The plan should be a real-time document which is
updated at least annually and which addresses all elements of a successful maintenance program. The plan
should describe the backlog and strategies to reduce the backlog, as well as the maintenance funding
required to sustain the assigned mission. The maintenance plan should integrate individual maintenance
activities addressed under each functional unit life-cycle plan.

MAINTENANCE PROCEDURE
A document providing direction to implement maintenance policy, comply with external directives and laws, or meet operational objectives in a consistent manner. A procedure provides adequately detailed delineation of
instructions, roles, responsibilities, action steps, and requirements for conducting maintenance activities.

MARGIN
The difference between service conditions and the design parameters used in the design of a component,
system, or structure.

MARGIN OF SAFETY
That margin built into the safety analyses of the facility as set forth in the authorization basis acceptance
limits.

MAXIMUM CREDIBLE FIRE LOSS (MCFL)


The property damage that would be expected from a fire, assuming that: (1) All installed fire protection
systems function as designed; (2) The effect of emergency response is omitted except for post-fire actions
such as salvage work, shutting down water systems, and restoring operation.

MAXIMUM POSSIBLE-FIRE LOSS (MPFL)


The value of property, excluding land value, within a fire area, unless a fire hazards analysis demonstrates a
lesser (or greater) loss potential. This assumes the failure of both automatic fire suppression systems and
manual fire fighting efforts.

MONITORING
The measurement of radiation levels, airborne radioactivity concentrations, radioactive contamination levels,
quantities of radioactive material, or individual doses and the use of results of these measurements to
evaluate radiological hazards or potential and actual doses resulting from exposures to ionizing radiation.
See Continuous Air Monitoring, Performance Monitoring, Personal Monitoring, Personnel Monitoring, Post-
Accident Monitoring Equipment, Primary Environmental Monitors, Safety Class Monitoring Equipment, and
Secondary Environmental Monitors.

NATURAL PHENOMENA HAZARD (NPH)


An act of nature (e.g., earthquake, wind, hurricane, tornado, flood, precipitation [rain or snow], volcanic
eruption, lightning strike, or extreme cold or heat) which poses a threat or danger to workers, the public, or
to the environment by potential damage to structures, systems, and components.

NEAR MISS
An event that did not result in an accidental release of a highly hazardous chemical, but which could have,
given another “failure”. Near misses, sometimes called “precursors”, include: (1) The occurrence of an
accident initiator where the protection functioned properly to preclude a release of a highly hazardous
chemical; or, (2) the determination that a protection system was out of service such that if an initiating
event had occurred, a release of a highly hazardous chemical would have taken place.

NONCOMBUSTIBLE
A material that in the form in which it is used and under the conditions anticipated will not ignite, burn,
support combustion, or release flammable vapors when subjected to fire or heat, as defined by fire
protection industry standards on the basis of large scale fire tests performed by a nationally recognized
independent fire test authority.

NONSTOCHASTIC EFFECTS
Effects due to radiation exposure for which the severity varies with the dose and for which a threshold
normally exists (e.g., radiation-induced opacities within the lens of the eye).

OCCUPATIONAL DOSE
An individual's ionizing radiation dose (external and internal) as a result of that individual's work assignment.
Occupational dose does not include doses received as a medical patient or doses resulting from background
radiation or participation as a subject in medical research programs.

OCCUPATIONAL HEALTH PROGRAM


A comprehensive and coordinated effort of those involved in Radiological Protection, Industrial Hygiene,
Occupational Medicine, and Epidemiology to protect the health and well-being of employees.

PERMISSIBLE EXPOSURE LIMIT (PEL)


The maximum level of exposure to airborne contaminants, physical agents or other hazardous agents to
which an employee may be exposed over a specified time period.

PREDICTIVE MAINTENANCE
Predictive maintenance consists of the actions necessary to monitor, find trends, and analyze parameters,
properties and performance characteristics or signatures associated with structures, systems, and
components (SSCs), facilities or pieces of equipment to discern whether or not a state or condition may be
approaching which is indicative of deteriorating performance or impending failure, where the intended
function of the SSCs, facilities or pieces of equipment may be compromised. Predictive maintenance
activities involve continuous or periodic monitoring and diagnosis in order to forecast component
degradation so that "as-needed" planned maintenance can be initiated prior to failure. Not all SSC, facility or
equipment conditions and failure modes can be monitored and diagnosed in advance; therefore, predictive
maintenance should be selectively applied. To the extent that predictive maintenance can be relied on
without large uncertainties, it is normally preferable to activities such as periodic internal inspection or
equipment overhauls.

QUALITY CONTROL
To check, audit, review and evaluate the progress of an activity, process or system on an ongoing basis to
identify change from the performance level required or expected and the opportunities for improvement.

RELIABILITY CENTERED MAINTENANCE (RCM)


A proactive systematic decision logic tree approach to identify or revise preventive maintenance tasks or
plans to preserve or promptly restore operability, reliability and availability of facility structures, systems, and
components; or to prevent failures and reduce risk through types of maintenance action and frequency
selection to ensure high performance. Reliability centered maintenance is the performance of scheduled
maintenance for complex equipment, quantified by the relationship of preventive maintenance to reliability
and the benefits of reliability to safety and cost reduction through the optimization of maintenance task and
frequency intervals. The concept relies on empirical maintenance task and frequency intervals to make
determinations about real applicable data suggesting an effective interval for task accomplishment. The
approach taken to establish a logical path for each functional failure is that each functional failure, failure
effect, and failure cause be processed through the logic so that a judgement can be made as to the
necessity of the task, and includes: (1) reporting preventive maintenance activities, plans and schedules; (2)
optimizing and calculating the preventive maintenance interval by balancing availability, reliability and cost;
(3) ranking preventive maintenance tasks; (4) accessing preventive maintenance information from piping
and instrument drawings (P&IDs); (5) accessing preventive maintenance and other maintenance data; (6)
listing recurring failure modes and parts, including failure to start and failure to run; (7) calculating and
monitoring structure, system, and component availability; (8) accessing preventive maintenance procedures,
and (9) keeping track of preventive maintenance cost.

RISK
The chance of something happening that will have an undesired impact. Impact in terms of the Act is the
chance of harm to human health and safety, or the environment due to or as a result of gene technology.
Risk is measured in terms of a combination of the likelihood that a hazard gives rise to an undesired
outcome and the seriousness of that undesired outcome.

RISK ANALYSIS
The overall process of risk assessment, risk management and risk communication.

RISK ANALYSIS FRAMEWORK


Systematic application of legislation, policies, procedures and practices to analyse risks.

RISK ASSESSMENT
The overall process of hazard identification and risk estimation.

RISK COMMUNICATION
The culture, processes and structures to communicate and consult with stakeholders about risks.

RISK ESTIMATE
A measure of risk in terms of a combination of consequence and likelihood assessments.

RISK EVALUATION
The process of determining risks that require management.

RISK MANAGEMENT
The overall process of risk evaluation, risk treatment and decision making to manage potential adverse
impacts.

RISK MANAGEMENT PLAN


Integrates risk evaluation and risk treatment with the decision making process.

RISK TREATMENT
The process of selection and implementation of measures to reduce risk.

ROOT CAUSE
The determination of the causal factors preceding structures, systems, and components (SSC) failure or
malfunction, that is, discovery of the principal reason why the failure or malfunction happened leads to the
identification of the root cause. The preceding failure or malfunction causal factors are always events or
conditions that are necessary and sufficient to produce or contribute to the unwanted results (failure or
malfunction). The types of causal factors are: (1) direct causes, (2) contributing causes, and (3) root causes.
The direct cause is the immediate event or condition that caused the failure or malfunction. Contributing
causes are conditions or events that collectively increase the likelihood of the failure or malfunction, but that
individually do not cause them. Thus, root causes are events or conditions that, if corrected or eliminated,
would prevent the recurrence of the failure or malfunction by identifying and correcting faults (often hidden)
before an SSC fails or malfunctions.

SAFETY-CLASS STRUCTURES, SYSTEMS, AND COMPONENTS (SC-SSCs)


Systems, structures, or components including primary environmental monitors and portions of process
systems, whose failure could adversely affect the environment, or safety and health of the public as
identified by safety analyses. (See Safety Class Structures, Systems, and Components; Safety Significant
Structures, Systems, and Components; and Safety Structures, Systems, and Components.)
Safety-class SSCs are systems, structures, or components whose preventive or mitigative function is
necessary to keep hazardous material exposure to the public below the offsite Evaluation Guidelines. The
definition would typically exclude items such as primary environmental monitors and most process
equipment.

STAKEHOLDERS
Those people and organisations who may affect, be affected by, or perceive themselves to be affected by a
decision, activity or risk. The term stakeholder may also include “interested parties”.

STANDARD UNCERTAINTY
Uncertainty of the result of a measurement expressed as a standard deviation.

TECHNICAL SAFETY REQUIREMENTS (TSRs)


Those requirements that define the conditions, safe boundaries, and the management or administrative
controls necessary to ensure the safe operation of a nuclear facility and to reduce the potential risk to the
public and facility workers from uncontrolled releases of radioactive materials or from radiation exposures
due to inadvertent criticality. Technical Safety requirements consist of safety limits, operating limits,
surveillance requirements, administrative controls, use and application instructions, and the basis thereof.

TECHNICAL SAFETY REQUIREMENTS (TSRs)


The limits, controls, and related actions that establish the specific parameters and requisite actions for the
safe operation of a nuclear facility and include, as appropriate for the work and the hazards identified in the
documented safety analysis for the facility: safety limits, operating limits, surveillance requirements,
administrative and management controls, use and application provisions, and design features, as well as a
bases appendix.

THRESHOLD LIMIT VALUES (TLVs)


Airborne concentrations of substances, or levels of physical agents, that represent conditions under which it
is believed that nearly all workers may be repeatedly exposed day after day without adverse health effects.
TLVs are issued by the American Conference of Governmental Industrial Hygienists (ACGIH).

TOTAL EFFECTIVE DOSE EQUIVALENT (TEDE)


The sum of the effective dose equivalent (for external exposures) and the committed effective dose
equivalent (for internal exposures). See Committed Dose Equivalent, Committed Effective Dose Equivalent,
Cumulative Total Effective Dose Equivalent, Deep Dose Equivalent, Dose Equivalent, Effective Dose
Equivalent, and Lens of the Eye Dose Equivalent.

TYPE A EVALUATION (OF UNCERTAINTY)


Method of evaluation of uncertainty by the statistical analysis of series of observations.

TYPE B EVALUATION (OF UNCERTAINTY)


Method of evaluation of uncertainty by means other than the statistical analysis of series of observations.

UNCERTAINTY
Imperfect ability to assign a character state to a thing or process; a form or source of doubt.

WEIGHTING FACTOR (WT)


The fraction of the overall health risk, resulting from uniform whole-body irradiation, attributable to a specific
tissue (T). The dose equivalent to tissue (HT) is multiplied by the appropriate weighting factor to obtain the
effective dose equivalent contribution from that tissue.
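Stated compactly (a standard radiological-protection formulation consistent with the definition above, with the summation taken over the irradiated tissues T):

    H_E = \sum_T w_T \, H_T

where H_E is the effective dose equivalent, w_T the weighting factor of tissue T, and H_T the dose equivalent to that tissue.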

CONTENTS

Fundamentals of Risk Theory ............................................................................................................ 37


Risk Dynamic Equation Model........................................................................................................ 39
Risk Continuity Equations.............................................................................................................. 53
General Equation of Risk Transfer .................................................................................................. 56
Risk Equilibrium and Entropy .......................................................................................................... 58
Risk Analysis Model .......................................................................................................................... 60
Risk Assessment .......................................................................................................................... 61
Risk Management ........................................................................................................................ 62
Risk Communication ..................................................................................................................... 62
Models of Risk Analysis................................................................................................................. 62
Components in Risk Analysis ......................................................................................................... 63
Qualitative and Quantitative Risk Assessment.................................................................................. 63
Uncertainty ................................................................................................................................. 64
Risk Assessment .............................................................................................................................. 66
The Scope of Risk Assessment....................................................................................................... 66
Hazard Assessment ...................................................................................................................... 70
Evidence and Exposure................................................................................................................. 72
Likelihood (Probability) of Occurrence ............................................................................................ 75
Loss Criticality (Consequences)...................................................................................................... 77
Safety Level ................................................................................................................................ 91
Risk Estimation and Risk Treatment ............................................................................................... 92
Risk Management ............................................................................................................................ 96
Risk Management and Uncertainty ................................................................................................. 97
The Risk Management Plan ........................................................................................................... 98
Risk Evaluation ............................................................................................................................ 98
Risk Protection............................................................................................................................. 99
Risk Treatment ............................................................................................................................ 99
Monitoring for Compliance ...........................................................................................................101
Quality Control and Review ..........................................................................................................101
Determine Residual Risk ..............................................................................................................104
Risk Communication........................................................................................................................105
Risk Perception...........................................................................................................................105
Uncertainty ....................................................................................................................................107
Qualitative and Quantitative Methods ............................................................................................109
Sources of Failure .......................................................................................................................111
The Unwanted Consequences.......................................................................................................114
System Analysis ..........................................................................................................................117
Describing Random Variables .......................................................................................................121
Quantitative Analysis Methods......................................................................124
Identifying the Risks....................................................................................................................131
Theoretical Background to Quantifying Risks ..................................................................................132
The Expression of Uncertainty in Risk Measurement........................................................................134
Reliability.......................................................................................................................................146

Failure or Hazard Rate.................................................................................................................146


Safety Reliability .........................................................................................................................148
References.....................................................................................................................................150

FUNDAMENTALS OF RISK THEORY

The starting point for the optimisation of the activity for the prevention of activity-related events, incidents,
accidents and work-related diseases in a system is represented by the risk assessment of such system.
Regardless of whether a workplace, a workshop or a company is involved, such an analysis allows hazards to
be ranked according to their magnitude and resources to be assigned efficiently to priority measures. Risk
assessment implies the identification of all risk factors within the system under examination and the
quantification of their dimension, based upon the combination of four parameters: frequency (likelihood or
probability) of the maximal possible consequences, exposure to the harm or hazard, severity (loss criticality)
for the human body or any other target (e.g. environment, product, equipment, business interruption), and
the safety level of the system. Thus, partial risk levels are obtained for each risk factor, and global risk levels
for the entire system (e.g. process plant, workplace, facility) under examination. This risk assessment
principle is already included in the European standards CEI 812/85, EN 292/1-91 and EN 1050/96, and
constitutes the basis for different methods with practical applicability.
In specialised terminology, the safety of the person in the work process (or any other target) is considered
to be that state of the system in which the possibility of work-related accidents and disease (or any harm or
hazard) is excluded. In everyday language, safety is defined as the state of being protected from any danger,
while risk is the possibility of being in jeopardy or in potential danger. If we take into consideration the usual
senses of these terms, we may define safety as the state of the work system in which the risk of any hazard
(or accident or disease) is null. As a result, safety and risk are two abstract, mutually exclusive notions. In reality,
because of the characteristics of any system, such absolute character states may not be reached. There is
no system in which the potential danger of harm, accident or disease, could be completely excluded; there is
always a “residual” risk, even if only because of the unpredictability of human action or activity. If there are
no corrective interventions, along the way, this residual risk will increase, as the elements of the work
system will degrade, by “ageing”. Consequently, systems may be characterised by “safety levels”,
respectively “risk levels”, as quantitative indicators of the binomial states of safety and risk. Defining safety
(Y) as a function of risk (X),

Y = 1/X    [1.01]

it may be asserted that a system is safer when the risk level is lower, and reciprocally. Thus, if the
risk is null, from the relation between the two variables it results that safety tends towards the infinite, while
if risk tends towards the infinite, safety vanishes (see Figure 1.01). In this context, in practice we must admit
both a minimal risk limit, that is, a risk level other than null yet sufficiently low to consider
that the system is safe, and a maximal risk level, equivalent to such a low safety level that the operation of
the system should no longer be permitted.

[Figure: plot of the relation Y = 1/X between safety and risk.]
Figure 1.01 – Relation between risk and safety.

The specialised literature defines risk as the probability of occurrence of a hazard-related harm, or of an
activity-related accident or disease in a work process, with a certain exposure, frequency and severity of
consequences. Indeed, if we assume a certain risk level, the risk can be represented as a function of the
exposure, severity and probability of occurrence of the consequences. This curve allows a distinction to be
drawn between acceptable and unacceptable risk. Thus, the risk of an event with severe consequences (high
loss criticality) but low frequency (likelihood or exposure), whose coordinates lie under the acceptability
curve, is considered acceptable. Conversely, the risk of an event with less severe consequences (low loss
criticality) but a probability of occurrence so great (very high frequency) that its coordinates lie above the
curve is considered unacceptable, and the exposure to the hazard is considered unsafe. Any safety study has
the objective of ascertaining the acceptable risks. Treating risk in this way raises two problems:
(1) How to determine the coordinates of risk (severity, exposure, and probability coupling);
(2) Which coordinates of risk should be selected for the delimitation of acceptability areas from those of
unacceptability.

In order to solve those problems, the premise for the elaboration of the assessment method was the risk-
risk factor relation. It is a known fact that the existence of risk in a system is attributable to the presence of
human activity-related event (e.g. accident and disease) risk factors. Therefore, the elements that are

SAFETY MANAGEMENT SERIES


THEORY AND MODEL

instrumental for the characterisation of risk, thus to the determination of its coordinates, are actually the
exposure to the risk factor, probability for the action of a risk factor to lead to harm and the severity of the
consequence (loss criticality) of the action of the risk factor on the target. Consequently, in order to assess
the risk or safety levels it is necessary to follow the next stages:
(1) The identification of the risk factors from the system under examination;
(2) The determination of the consequences of the action on the target, respectively the severity of such
consequences (loss criticality);
(3) The determination of the exposure and probability (likelihood) of the event on the target;
(4) The determination of the safety level of the target;
(5) The attribution of risk levels depending on the exposure, severity and probability of the consequences,
and on the safety level, for the risk factors.

In a real system in operation, there are not sufficient resources (e.g. time, financial or technical ones) to
make possible the simultaneous tackling of all the risk factors for human activity-related events. Even if such
resources exist, the efficiency criterion (both in the limited sense of economic efficiency and in the sense of
social efficiency) forbids such action. For this reason, it is not justified to take them all into consideration in
the safety analysis either. From the multitude of risk factors that may finally link together, having the
potential to result in a harm, accident or disease, those factors that may represent direct, final causes are
the ones that must be eliminated in order to guarantee that the occurrence of such an event is impossible;
thus, it is mandatory to focus the study on these factors.

RISK DYNAMIC EQUATION MODEL


The mathematical definition of risk has been a source of confusion because it depends on the model used
to calculate the risk. Risk is defined as a combination of several parameters, such as exposure, likelihood or
probability, severity or loss criticality (including the severity of injury to human health or to a personnel
target), and safety level, all varying with time. For a tridimensional model we can use the parameters
exposure, likelihood and severity; for a quadridimensional (or multidimensional) model, we can use the
tridimensional parameters plus the parameters safety level and time. In the proposed quadridimensional
model, the dynamic equation of risk (Ri) depends on the exposure coefficient (E), the likelihood or
probability coefficient (L), the severity of consequences or loss criticality coefficient (K), the safety level
coefficient (λ), and time (t).

R_i = f(E, L, K, λ, t)    [1.02]

The algebraic equation that relates the risk (Ri) of any hazard (i) to the aforesaid parameters (exposure,
likelihood, severity, safety level and time) is called the risk dynamic model. The parameters exposure,
likelihood or probability, severity or loss criticality, and safety level are not directly dependent on time, but
they can vary and shift with time; instead, the parameters exposure, likelihood and severity are directly
dependent on the safety level of the system. If the safety level of the system varies with time, the exposure,
likelihood and severity will also change with time by following the safety level variation. In turn, the safety
level is directly dependent on the quality of the safety management system and on other factors, such as the
safety issues and safety measures related to the safety management system implemented: safety policy,
HAZOP analysis, process safety management (PSM), periodic internal and external audits, life and fire
protection systems, training, etc. Thus, we can say that the aforesaid parameters of the risk dynamic
equation (exposure, likelihood, severity, safety level and time) are not directly dependent on time, but
change when the safety level changes with both time and the safety management factors.

At an instant of time, the risk equation model for the risk estimate, for a specific hazard (i) and a safety level
coefficient (λ), can be given by the following expression,

R_{i,λ} = (E · L · K) / λ    [1.03]

where E is the exposure coefficient, L is the probability or likelihood coefficient, K is the severity or loss
criticality coefficient, and λ is the safety level coefficient. Considering, as we have seen above, that the risk
dynamic equation (Equation [1.02]) is a function of time, we can mathematically represent the risk by the
following differential equation,

R_{i,λ}(t) = dR_{i,λ}/dt    [1.04]

The risk dynamic equation is solely a function of the risk parameters (exposure, likelihood, severity, safety
level and time) at a given point in the system and is independent of the type of system in which the risk
arises. However, since the properties of the risk parameters can change with position (parameters or
variables) in time, the risk dynamic equation can in turn vary from point to point with time (time-spatial
movement). Risk is an intensive quantity and depends on the aforesaid parameters and time. Therefore, in
the presence of a hazard (i), the risk dynamic equation may be written mathematically by the following
expression,

R_{i,λ}(t) = dR_{i,λ}/dt = θ · R_{i,λ} = (1 − λ) · R_{i,λ}    [1.05]

where dR_{i,λ}/dt indicates the conservation of risk with time, R_{i,λ} is the risk variable, θ is the risk level
coefficient (θ = 1 − λ), and λ is the safety level coefficient for risk depletion. Equation [1.05] shows how risk
changes with time. If the initial risk, at the initial time (t0), is noted as R^0_{i,λ}, and at some later time (t)
the risk has changed to R_{i,λ}, then applying integrals to Equation [1.05] gives,

∫_{R^0_{i,λ}}^{R_{i,λ}} dR_{i,λ} / R_{i,λ} = θ · ∫_{t0}^{t} dt    [1.06]

Integrating the Equation [1.06],

lnR i, RR i


0
i, 
   t t0
t
[1.07]

and solving Equation [1.07] gives,

ln(R_{i,λ}) − ln(R^0_{i,λ}) = θ · (t − t0)    [1.08]

Simplifying Equation [1.08] and substituting θ by (1 − λ) gives,

ln(R_{i,λ} / R^0_{i,λ}) = θ · (t − t0) = (1 − λ) · (t − t0)    [1.09]

and rearranging the Equation [1.09], we can obtain the following final expression for the risk dynamics
model, which relates risk with time,

R_{i,λ} / R^0_{i,λ} = exp[(1 − λ) · (t − t0)]    [1.10]

or alternatively,

R_{i,λ} / R^0_{i,λ} = e^{(1−λ)·(t−t0)}    [1.11]

Simplifying the Equation [1.10] and Equation [1.11], we have the following equations,

R_{i,λ} = R^0_{i,λ} · exp[(1 − λ) · (t − t0)]    [1.12]

and


R_{i,λ} = R^0_{i,λ} · e^{(1−λ)·(t−t0)}    [1.13]

Assuming that the initial time is zero (t0 = 0), and rearranging the Equation [1.10] or Equation [1.11], we
have the following equation,

R_{i,λ} / R^0_{i,λ} = exp[(1 − λ) · t] = e^{(1−λ)·t}    [1.14]

This equation establishes the relation between the risk estimate, expressed as the ratio between the risk at a
given instant of time and the initial risk, and time when the initial time (t0) is zero; this equation of the risk
dynamic model is shown graphically in Figure 1.02 (as a function of time) and in Figure 1.03 (as a function of
the safety level coefficient). Simplifying Equation [1.14] gives,

R_{i,λ} = R^0_{i,λ} · exp[(1 − λ) · t] = R^0_{i,λ} · e^{(1−λ)·t}    [1.15]

[Figure: risk ratio versus time (years), plotted for safety level coefficients λ from 0.3 to 1.5.]

Figure 1.02 – Risk ratio estimate representation based on Equation [1.14] as a function of time (t) for
different values of the safety level coefficient (λ).

From Figure 1.02 and Figure 1.03 we can see that, for safety level coefficients lower than 1 (λ < 1), the risk
ratio increases with time; conversely, for safety level coefficients higher than 1 (λ > 1), the risk ratio
decreases with time to below unity. This signifies that the implicit costs of the safety system are lower when
the safety level coefficient is greater than unity than when it is below unity. This demonstrates the
competition paradigm of safety performance: the higher the safety level coefficient, the lower the cost and
financial effort.
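To make the behaviour of Equation [1.03] and Equation [1.14] concrete, the following short Python sketch (the coefficient values and function names are illustrative assumptions, not data taken from this document) evaluates the static risk estimate and the risk ratio over time for several safety level coefficients:

    import math

    def risk_estimate(E, L, K, lam):
        """Static risk estimate R = (E * L * K) / lambda, Equation [1.03]."""
        return (E * L * K) / lam

    def risk_ratio(lam, t, t0=0.0):
        """Risk ratio R/R0 = exp[(1 - lambda) * (t - t0)], Equation [1.14]."""
        return math.exp((1.0 - lam) * (t - t0))

    # Illustrative (assumed) coefficient values for a single hazard i.
    E, L, K = 2.0, 3.0, 4.0          # exposure, likelihood, loss criticality
    for lam in (0.5, 1.0, 1.5):      # safety level coefficients
        R0 = risk_estimate(E, L, K, lam)
        for t in (0.0, 1.0, 5.0):    # elapsed time in years
            print(f"lambda={lam:.1f} t={t:.1f} "
                  f"R/R0={risk_ratio(lam, t):.3f} R={R0 * risk_ratio(lam, t):.2f}")

For λ < 1 the printed ratios grow with elapsed time, and for λ > 1 they decay below unity, consistent with the discussion above.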

[Figure: risk ratio versus safety level coefficient (0.4 to 1.4), plotted for elapsed times t from 0 to 5 years.]

Figure 1.03 – Risk ratio estimate representation based on Equation [1.14] as a function of the safety level
coefficient (λ) for different values of time (t).

If we want to determine the implicit safety level coefficient (λ), assuming that both the final risk value
(R_{i,λ}) and the initial risk value (R^0_{i,λ}) are known, we solve Equation [1.10] or Equation [1.11] for the
variable λ (the safety level coefficient). For Equation [1.10], when the initial time is not zero (t0 ≠ 0), the
solution gives,

λ = 1 − [1/(t − t0)] · ln(R_{i,λ} / R^0_{i,λ})    [1.16]

and for Equation [1.11], when the initial time is zero (t0 = 0), we have the following expression,

λ = 1 − (1/t) · ln(R_{i,λ} / R^0_{i,λ})    [1.17]

Let us now establish the relation between two risks. Assuming that one risk is given by the following risk
dynamic equation,

R i, 1  R i0, 1  e 1 t  t 


0
[1.18]

 
where R 0i, 1 is the initial risk, and 1 is the safety level coefficient for that risk, and the second risk is given
by the identical equation,

R i, 2  R 0i, 2  e 1 t  t 


0
[1.19]

  is the initial risk, and 


where R 0i, 2 2 is the safety level coefficient, respectively. The relation between both
risks can be determined by dividing both equations (Equation [1.18] and Equation [1.19]),

R i, 1 R 0i, 1 e 1 t  t 


1 0
  [1.20]
R i, 2 R i0, 2 e 1 t t 
2 0

for any instant of time (t) and an initial time (t0).

Consecutive Risks
Consecutive risks are complex scenarios of great importance in safety engineering, and so equations
describing their risk dynamics are of real interest. In a chain of consecutive risks, the consequence of one
risk becomes the risk that generates the following consequence, which can be expressed by the following
relation,

R_{i,λ1} → R_{i,λ2} → R_{i,λ3}    [1.21]

If the risk equations of the consecutive risks are considered to be similar to Equation [1.02], then the
differential equations which describe the consecutive risk chain are as follows. For risk R_{i,λ1}, the
differential equation has already been developed and is similar to Equation [1.05].

dR_{i,λ1}/dt = θ1 · R_{i,λ1}    [1.22]

For risk R_{i,λ2}, which depends on the characteristics of risk R_{i,λ1}, we have the following equation,

dR_{i,λ2}/dt = θ1 · R_{i,λ1} − θ2 · R_{i,λ2}    [1.23]

For risk R_{i,λ3}, the last risk in the chain, which depends solely on the characteristics of risk R_{i,λ2}, we
have the equation,

dR i, 3
  2  R i, 2 [1.24]
dt

For each risk, the risk level coefficient is respectively, 1 = (11), 2 = (12), and 3 = (13). If at the
   
initial time (t0) we consider the R i, 1  R i0, 1 , R i, 2  R 0i, 2 and R i, 3  R i0, 3 , then the solution for  
the risk of each phase at some time (t) is as follows. For risk R i, 1 , the equation is similar as Equation
[1.13].

R i, 1  R i0, 1  e 1 t t 


1 0
[1.25]

If we assume that the initial time is zero (t = t0), the Equation [1.25] becomes,

R i, 1  R i0, 1  e 1 t  1


[1.26]

For risk R i, 2 , substituting Equarion [1.25] in Equation [1.23] gives the following,

dR i, 2
dt
 
  1  R i0, 1
 
 e 11 t  t 0    2  R i, 2 [1.27]

and rearranging Equation [1.27], we obtain the following equation,

dR i, 2
dt
  2  R i, 2  1  R 0i,    e 
1
1 1 t  t 0 
 [1.28]

We can say that the Equation [1.28] is in the form of an differential equation type of

R      R 
i , 2
'
2 i , 2    e 
 1  R i0, 1
1 1 t  t 0 
 [1.29]

If we multiply the Equation [1.29] by e 2 t  t 0  or e  2 t  , then we obtain the following relation,

     
e 2  t  t 0  R i, 2   2  R i, 2   1  R i0,
'
  1
e 
11 t  t 0   e  2 t t 0 

[1.30]

and assuming the following equalty,

    
e 2  t  t 0  R i, 2   2  R i, 2 
' d
dt
 
R i, 2  e 2 t t 0   [1.31]

and substituting the second term of Equation [1.31] in the first term of Equation [1.30] gives,

d/dt [R_{i,λ2} · e^{θ2·(t−t0)}] = θ1 · R^0_{i,λ1} · e^{(1−λ1)·(t−t0)} · e^{θ2·(t−t0)}    [1.32]

Hence, applying integrals to Equation [1.32] and rearranging gives,

∫_{R^0_{i,λ2}}^{R_{i,λ2}} dR_{i,λ2} = θ1 · R^0_{i,λ1} · ∫_{t0}^{t} [e^{(1−λ1)·(t−t0)} · e^{θ2·(t−t0)} / e^{θ2·(t−t0)}] dt    [1.33]

Simplifying Equation [1.33], we have the following,

∫_{R^0_{i,λ2}}^{R_{i,λ2}} dR_{i,λ2} = θ1 · R^0_{i,λ1} · ∫_{t0}^{t} e^{(1−λ1)·(t−t0)} dt    [1.34]

Solving Equation [1.34] gives,

[R_{i,λ2}]_{R^0_{i,λ2}}^{R_{i,λ2}} = θ1 · R^0_{i,λ1} · (1/(1−λ1)) · [e^{(1−λ1)·(t−t0)}]_{t0}^{t}    [1.35]

Simplifying Equation [1.35] and considering that θ1 = 1 − λ1 gives,

R_{i,λ2} − R^0_{i,λ2} = R^0_{i,λ1} · [e^{(1−λ1)·(t−t0)} − e^{(1−λ1)·(t0−t0)}]    [1.36]

and

R i, 2  R 0i, 2  R 0i, 1  e 1 t t   1


1 0
[1.37]

If we assume the initial time as being zero (t0 = 0) the Equation [1.37] becomes,

R i, 2  R i0, 2  R i0, 1  e 1 t   1 1


[1.38]

Finally, for risk R_{i,λ3}, substituting the equation of risk R_{i,λ2} (Equation [1.38]) in Equation [1.24] gives,

dR_{i,λ3}/dt = θ2 · {R^0_{i,λ2} + R^0_{i,λ1} · [e^{(1−λ1)·t} − 1]}    [1.39]

Applying integrals to Equation [1.39] and assuming that θ2 = 1 − λ2, it becomes,

R i, 3 t

 dR i, 3  1   2    R i, 2  R i, 1  e


0 0 1 t   1 dt
1
[1.40]
 
R 0i , 3 t0

Solving Equation [1.40] gives,

[R_{i,λ3}]_{R^0_{i,λ3}}^{R_{i,λ3}} = (1 − λ2) · { R^0_{i,λ2}·[t]_{t0}^{t} + R^0_{i,λ1} · ( (1/(1−λ1))·[e^{(1−λ1)·t}]_{t0}^{t} − [t]_{t0}^{t} ) }    [1.41]

Rearranging and simplifying Equation [1.41] gives,

R_{i,λ3} − R^0_{i,λ3} = (1 − λ2) · { R^0_{i,λ2}·(t − t0) + R^0_{i,λ1} · ( (1/(1−λ1))·[e^{(1−λ1)·t} − e^{(1−λ1)·t0}] − (t − t0) ) }    [1.42]

Hence, Equation [1.42] becomes,

R_{i,λ3} = R^0_{i,λ3} + (1 − λ2) · { R^0_{i,λ2}·(t − t0) + R^0_{i,λ1} · ( [e^{(1−λ1)·t} − e^{(1−λ1)·t0}] / (1 − λ1) − (t − t0) ) }    [1.43]

If the initial time is equal to zero (t0 = 0), Equation [1.43] becomes,

R_{i,λ3} = R^0_{i,λ3} + (1 − λ2) · { R^0_{i,λ2}·t + R^0_{i,λ1} · ( [e^{(1−λ1)·t} − 1] / (1 − λ1) − t ) }    [1.44]

For the above equations we have assumed that the risk level coefficients (θ1, θ2 and θ3) are different; now
we will assume the same risk level coefficient (θ = θ1 = θ2 = θ3) and, reciprocally, the same safety level
coefficient (λ = λ1 = λ2 = λ3). In this case, the equations for consecutive risks become as follows. For risk
R_{i,λ1} we have the following equation,

R_{i,λ1} = R^0_{i,λ1} · e^{(1−λ)·(t−t0)}    [1.45]

and for risk R_{i,λ2} we have the equation,

R i, 2  R i0, 2  R i0, 1  e 1 t t   1


0
[1.46]

and finally, for risk R i, 3 we have the equation,

R i, 3  R 0i, 3  1     Ri0, 2  t  t0   Ri0, 1   1


  
 e1  t   e1  t 0   t  t 0 
  1    

[1.47]

or simplifying,

R i, 3  R 0i, 3  1     R 0i, 2  t  t0   R 0i, 1   e1 t   e1 t   1     t  t0 


0

[1.48]

Risk Dynamic Model Representation


As we have seen, the risk dynamic equation for a single risk is written as an equation identical to
Equation [1.15].

R_{i,λ} = R^0_{i,λ} · e^{(1−λ)·t}    [1.15]

Let us consider a certain domain (see Figure 1.04), a general risk function represented in that domain,
Ri(x,y,z), and a point in the domain (M0). Let us trace a vector representing time (t) with the direction
cosines cos α, cos β and cos γ. Over the time vector, let us consider a space-time distance, noted Δt, with its
origin at the starting point of the domain (M0), assuming that point M0 is the position of the initial time (t0),
and crossing point M1. The space-time distance vector is defined as

Δt = √(Δx² + Δy² + Δz²)    [1.49]

Assume that the function Ri(x,y,z) is continuous in the space-time domain and has continuous derivatives
with respect to the independent variables x, y and z in the domain. The variation (growth or decrease) of the
total function is given by the following expression,

ΔR_{i,λ} = (∂R_{i,λ}/∂x)·Δx + (∂R_{i,λ}/∂y)·Δy + (∂R_{i,λ}/∂z)·Δz + ε_x·Δx + ε_y·Δy + ε_z·Δz    [1.50]

where ε_x, ε_y and ε_z tend to zero whenever Δt approaches zero (Δt → 0). Dividing all members of
Equation [1.50] by Δt,

ΔR_{i,λ}/Δt = (∂R_{i,λ}/∂x)·(Δx/Δt) + (∂R_{i,λ}/∂y)·(Δy/Δt) + (∂R_{i,λ}/∂z)·(Δz/Δt) + ε_x·(Δx/Δt) + ε_y·(Δy/Δt) + ε_z·(Δz/Δt)    [1.51]

[Figure: points M0 and M1 in the space-time domain, joined by the time vector.]

Figure 1.04 – Risk dynamic model in a space-time domain.

Assuming that the direction cosines are expressed as follows,

cos α = Δx/Δt    [1.52]

cos β = Δy/Δt    [1.53]

cos γ = Δz/Δt    [1.54]

and substituting in the Equation [1.51] gives,

ΔR_{i,λ}/Δt = (∂R_{i,λ}/∂x)·cos α + (∂R_{i,λ}/∂y)·cos β + (∂R_{i,λ}/∂z)·cos γ + ε_x·cos α + ε_y·cos β + ε_z·cos γ    [1.55]

[Figure: surface of the risk estimate R_{i,λ} over the safety level coefficient and time.]

Figure 1.05 – Risk dynamics dimensional model in a space-time domain.

The limit of ΔR_{i,λ}/Δt when Δt → 0 is named the derivative of the risk function Ri(x,y,z), at any abstract
space-time position (x,y,z), along the time vector direction, and is commonly written as ∂R_{i,λ}/∂t.

lim_{Δt→0} (ΔR_{i,λ}/Δt) = ∂R_{i,λ}/∂t    [1.56]

Hence, the limit of risk function Ri(x,y,z) is given by

∂R_{i,λ}/∂t = (∂R_{i,λ}/∂x)·cos α + (∂R_{i,λ}/∂y)·cos β + (∂R_{i,λ}/∂z)·cos γ    [1.57]

where x, y and z are the parameters (e.g. safety level coefficient, time and risk value) of the risk in a
tridimensional system. Figure 1.05 shows the dimensional model of the risk estimate (Equation [1.15]) and
its dependence on both the safety level coefficient (λ) and time (t). The risk estimate increases with lower
safety level coefficients and with higher elapsed time values.

Cylindrical Coordinate System


It is often convenient to use cylindrical coordinates to solve the equation of risk dynamics. Figure 1.06 shows
how this transformation takes place in a coordinate system. The relations between rectangular (x,y,z) and
cylindrical (r,φ,z) coordinates are,

x = r·cos(φ)    [1.56]

y = r·sin(φ)    [1.57]

z = z    [1.58]

and

r = √(x² + y²)    [1.59]

φ = tan⁻¹(y/x)    [1.60]

The risk dynamics equation becomes a function of,

R_i = f(r, φ)    [1.61]

Assuming that x = λ and y = t,

r = √(λ² + t²)    [1.62]

and

φ = tan⁻¹(t/λ)    [1.63]

[Figure: point (x,y,z) located by radial distance r, angle φ and height z.]

Figure 1.06 – The cylindrical coordinate system.

Taking the risk dynamics equation for a single risk (Equation [1.15]) when the initial time is zero (t0 = 0) as
an example of application of the cylindrical coordinate transformation, and assuming z to be R_{i,λ}, the risk
dynamics equation (Equation [1.15]) becomes,

R_{i,λ} = R^0_{i,λ} · e^{(1 − r·cos φ)·r·sin φ}    [1.64]

with the following transformation steps,

λ = r·cos(φ)    [1.65]

t = r·sin(φ)    [1.66]

φ = tan⁻¹(t/λ)    [1.67]
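A brief Python sketch of the cylindrical transformation, Equations [1.62] to [1.67]; it checks numerically, for assumed values of λ and t, that Equation [1.64] reproduces Equation [1.15]:

    import math

    def to_cylindrical(lam, t):
        """Map the (lambda, t) pair to cylindrical coordinates (r, phi),
        per Equations [1.62] and [1.63]."""
        r = math.hypot(lam, t)
        phi = math.atan2(t, lam)
        return r, phi

    def risk_cylindrical(R0, r, phi):
        """Risk dynamic equation in cylindrical coordinates, Equation [1.64]:
        R = R0 * exp[(1 - r*cos(phi)) * r*sin(phi)]."""
        return R0 * math.exp((1.0 - r * math.cos(phi)) * r * math.sin(phi))

    # Illustrative check: both forms should agree for lambda = 0.7, t = 2 years.
    lam, t, R0 = 0.7, 2.0, 10.0
    r, phi = to_cylindrical(lam, t)
    print(risk_cylindrical(R0, r, phi), R0 * math.exp((1.0 - lam) * t))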

RISK CONTINUITY EQUATIONS


The general equation for risk transfer allows us to solve many elementary risk situations. In general, to
solve many practical cases, we need to know the state at the beginning and the state at the final position,
and also the exchanges between the system and the surroundings. These positions are represented by
differential equations. Often these equations are called equations of change, since they always describe the
variation or shifting of the risk parameters with respect to position and time. Several types of time derivative
are used in the differential processes, as will be seen below. The most common type of derivative expression
is the partial time derivative of risk, ∂R_{i,λ}/∂t. This expression gives the local change of risk with time at a
fixed position – the position being represented by any parameter of the risk, e.g. exposure, likelihood or
probability, severity or loss criticality, or safety level – usually denoted by the letters x, y, z, etc. Suppose
that we want to measure the risk while we are moving with the system in the x, y or z positions (or
directions), at the rates dx/dt, dy/dt and dz/dt. The total derivative is commonly expressed as,

dR_{i,λ}/dt = ∂R_{i,λ}/∂t + (∂R_{i,λ}/∂x)·(dx/dt) + (∂R_{i,λ}/∂y)·(dy/dt) + (∂R_{i,λ}/∂z)·(dz/dt)    [1.68]

This means that the risk is a function both of time (t) and of the rates dx/dt, dy/dt and dz/dt at which the
observer is moving. Adopting a new notation for dx/dt, dy/dt and dz/dt,

dx/dt = υ_{i,x}    [1.69]

dy/dt = υ_{i,y}    [1.70]

dz/dt = υ_{i,z}    [1.71]

the equation [1.68] becomes,

dR_{i,λ}/dt = ∂R_{i,λ}/∂t + (∂R_{i,λ}/∂x)·υ_{i,x} + (∂R_{i,λ}/∂y)·υ_{i,y} + (∂R_{i,λ}/∂z)·υ_{i,z}    [1.72]

Another useful type of time derivative is obtained if the observer floats along with the directions and follows
the change in risk with respect to time. This is called the derivative that follows the motion, or the
substantial time derivative,

DR_{i,λ}/Dt = ∂R_{i,λ}/∂t + (∂R_{i,λ}/∂x)·υ_{i,x} + (∂R_{i,λ}/∂y)·υ_{i,y} + (∂R_{i,λ}/∂z)·υ_{i,z}    [1.73]

where υ_{i,x}, υ_{i,y} and υ_{i,z} are the components (the parameters of risk) of the risk motion. These
components (υ_{i,x}, υ_{i,y}, υ_{i,z}) define a vector. The substantial time derivative applies to both scalar
and vector variables.
If we want to establish a transfer equation, a risk balance must be made, assuming the case of a general
risk flowing inside a fixed volume element of space (see Figure 1.07).

[initial risk] + [risk generated] = [final risk] + [accumulated risk]    [1.74]

The risk accumulation is given by Δx·Δy·Δz·(∂R_{i,λ}/∂t), and it will be assumed that the risk generated in
the system is zero (R_{i,G} = 0). Substituting all the terms of the general risk transfer equation
(Equation [1.74]) by mathematical expressions,

R_{i,x}|_x·Δy·Δz + R_{i,y}|_y·Δx·Δz + R_{i,z}|_z·Δx·Δy + R_{i,G} = R_{i,x}|_{x+Δx}·Δy·Δz + R_{i,y}|_{y+Δy}·Δx·Δz + R_{i,z}|_{z+Δz}·Δx·Δy + Δx·Δy·Δz·(∂R_{i,λ}/∂t)    [1.75]

[Figure: fixed volume element Δx·Δy·Δz with risk flow components υ_{i,x}, υ_{i,y} and υ_{i,z} across its faces.]

Figure 1.07 – Risk continuity tridimensional representation in a space-time domain.

Simplifying the Equation [1.75] gives,

R_{i,x}|_x·Δy·Δz + R_{i,y}|_y·Δx·Δz + R_{i,z}|_z·Δx·Δy + 0 = R_{i,x}|_{x+Δx}·Δy·Δz + R_{i,y}|_{y+Δy}·Δx·Δz + R_{i,z}|_{z+Δz}·Δx·Δy + Δx·Δy·Δz·(∂R_{i,λ}/∂t)    [1.76]

and dividing all members by Δx·Δy·Δz, Equation [1.76] becomes,

(R_{i,x}|_x − R_{i,x}|_{x+Δx})/Δx + (R_{i,y}|_y − R_{i,y}|_{y+Δy})/Δy + (R_{i,z}|_z − R_{i,z}|_{z+Δz})/Δz = ∂R_{i,λ}/∂t    [1.77]

Taking the limit as Δx, Δy and Δz approach zero, we obtain the equation of continuity or conservation of
risk.

∂R_{i,λ}/∂t = −[ ∂(R_{i,λ}·υ_{i,x})/∂x + ∂(R_{i,λ}·υ_{i,y})/∂y + ∂(R_{i,λ}·υ_{i,z})/∂z ]    [1.78]

The above equation tells us how risk changes with time at a fixed position as a result of the changes in the
risk parameters. We can convert Equation [1.78] into another form by carrying out the actual partial
differentiation,

∂R_{i,λ}/∂t = −[ R_{i,λ}·(∂υ_{i,x}/∂x + ∂υ_{i,y}/∂y + ∂υ_{i,z}/∂z) + (υ_{i,x}·∂R_{i,λ}/∂x + υ_{i,y}·∂R_{i,λ}/∂y + υ_{i,z}·∂R_{i,λ}/∂z) ]    [1.79]

Rearranging the Equation [1.79] gives,

∂R_{i,λ}/∂t + (υ_{i,x}·∂R_{i,λ}/∂x + υ_{i,y}·∂R_{i,λ}/∂y + υ_{i,z}·∂R_{i,λ}/∂z) = −R_{i,λ}·(∂υ_{i,x}/∂x + ∂υ_{i,y}/∂y + ∂υ_{i,z}/∂z)    [1.80]

Substituting the Equation [1.72] into Equation [1.80] gives,

dR_{i,λ}/dt = −R_{i,λ}·(∂υ_{i,x}/∂x + ∂υ_{i,y}/∂y + ∂υ_{i,z}/∂z) = −R_{i,λ}·(∇·υ_{i,x,y,z})    [1.81]

where

∇·υ_{i,x,y,z} = ∂υ_{i,x}/∂x + ∂υ_{i,y}/∂y + ∂υ_{i,z}/∂z    [1.82]

Equations [1.78] through [1.81] show the risk dynamics continuity in a space-time domain.
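As a numerical illustration of Equation [1.81], the following Python sketch estimates the divergence of an assumed parameter-motion field υ(x,y,z) by central differences and evaluates the local rate of change of risk; the field and the values used are illustrative assumptions only:

    def divergence(v, x, y, z, h=1e-5):
        """Central-difference estimate of div(v) = dvx/dx + dvy/dy + dvz/dz,
        where v(x, y, z) returns the tuple (vx, vy, vz)."""
        dvx = (v(x + h, y, z)[0] - v(x - h, y, z)[0]) / (2 * h)
        dvy = (v(x, y + h, z)[1] - v(x, y - h, z)[1]) / (2 * h)
        dvz = (v(x, y, z + h)[2] - v(x, y, z - h)[2]) / (2 * h)
        return dvx + dvy + dvz

    def risk_rate_of_change(R, v, x, y, z):
        """Right-hand side of Equation [1.81]: dR/dt = -R * div(v)."""
        return -R * divergence(v, x, y, z)

    # Illustrative (assumed) parameter-motion field and risk value.
    v = lambda x, y, z: (0.2 * x, -0.1 * y, 0.05 * z)   # div(v) = 0.15
    print(risk_rate_of_change(R=10.0, v=v, x=1.0, y=1.0, z=1.0))   # about -1.5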

GENERAL EQUATION OF RISK TRANSFER


In the risk transfer process we are concerned with the transfer of a given risk parameter, or singular entity,
caused by movement through a given system. The risk parameter (or entity) being transferred within the
system can be one of the following variables, each able to change (per unit of time) the value of the risk
estimate: exposure, likelihood or probability, severity or loss criticality, or safety level. Each risk has a
different quantity of the property associated with it. When a difference in the value of any of these
parameters exists between one position and an adjacent position in the space-time domain, a net transfer of
this property occurs. We can formalise the risk transfer process by writing,

R_{i,λ} = −δ · (dψ/dλ)    [1.83]

where R_{i,λ} is the amount of risk transferred per unit of time in the λ (safety level coefficient) direction, ψ
is the function of the risk parameters (i.e. exposure, likelihood or probability, and severity or loss criticality),
and δ is a proportionality constant (the risk level coefficient). If the process of risk transfer is at steady state,
then the risk transfer is constant. Applying integrals to Equation [1.83] gives,

2 2
Ri   d   d [1.84]
1 1

Solving the Equation [1.84] and simplifying gives,

R_{i,λ} = −δ · (ψ2 − ψ1) / (λ2 − λ1)    [1.85]

and ψ is a function of the exposure coefficient (E), the likelihood or probability (L), and the severity or loss
criticality (K), i.e. the risk parameters,

ψ = f(E, K, L)    [1.86]

In an unsteady-state system, when we are calculating the risk transfer, it is necessary to account for the
amount of the property being transferred. This is done by writing a general property balance, or
conservation balance, for the safety level coefficient (λ),

[Initial Risk] + [Generated Risk] = [Final Risk] + [Accumulated Risk]    [1.86]

Formulating the above statement using a mathematical expression,

R_{i,λ} + R_{i,G} = R_{i,λ+Δλ} + ∂R_{i,λ}/∂t    [1.87]

and rearranging the Equation [1.87] gives,

R_{i,G} = (R_{i,λ+Δλ} − R_{i,λ}) + ∂R_{i,λ}/∂t    [1.88]

Assuming,

R_{i,λ+Δλ} − R_{i,λ} = Δψ_i / Δλ    [1.89]

and substituting Equation [1.89] in Equation [1.88] we have,

R_{i,G} = Δψ_i / Δλ + ∂R_{i,λ}/∂t    [1.90]

If we consider that the generated risk in an unsteady-state system is zero (R_{i,G} = 0), Equation [1.90]
becomes,

−Δψ_i / Δλ = ∂R_{i,λ}/∂t    [1.91]

From Equation [1.78] we can establish the following general equation for the risk transfer process,

Δψ_i / Δλ = υ_{i,x}·∂R_{i,λ}/∂x + υ_{i,y}·∂R_{i,λ}/∂y + υ_{i,z}·∂R_{i,λ}/∂z    [1.92]

which establishes the relationship between the risk estimate, when the safety level shifts, and the change of
the risk parameters in a space-time domain.

RISK EQUILIBRIUM AND ENTROPY


There are three basic ways in which a risk may lose its initial value. One way is by elimination of the risk; a
second is when the risk combines with other risks; and the third is by transferring the risk. Part of risk theory
is concerned, in one way or another, with the state of equilibrium and the tendency of systems (including the
safety management system) to move in the direction of the equilibrium state. In the safety engineering field,
many of the problems related to risk transfer involve the substitution of a risk by another of lower value,
insurance protection, and other techniques such as elimination, mitigation, and engineering features and
measures. Although equilibrium theory is not directly concerned with the dynamics of risk transfer, it serves
to indicate how far a risk (and the safety system) is from equilibrium, which in turn is a factor in the risk
transfer dynamics. Thus, the risk transferred is proportional to the difference between the risk at equilibrium
and the actual risk value.

dR_{i,λ}/dt ∝ (R_{i,eq} − R_{i,λ})    [1.93]

This concept serves as the basis for engineering calculations in the risk treatment methods: substitution,
elimination and mitigation, application of engineering features and measures, and insurance cover for risks
which cannot be eliminated.
The concept of entropy serves as a general criterion of an established state for risk changes within a safety
system. The concept of entropy in essence states that, in ideal conditions, all systems tend to approach a
state of equilibrium. The significance of the equilibrium state follows from the fact that a risk shift can be
obtained from a system only when the system is not already at equilibrium. If a safety system is at
equilibrium, no risk shift process tends to occur and no risk changes are brought about. The safety
practitioner's (or engineer's) interest in entropy relates to the use of this concept to indicate something about
the position of equilibrium in a safety process or safety system (including safety management systems). The
entropy (Φ_{i,λ}) of any risk or hazard (i) is defined by the following differential equation,

dΦ_{i,λ} = dR_{i,λ}/dλ = (R_{i,eq} − R_{i,λ}) / λ    [1.94]

where R_{i,eq} is the risk at equilibrium (λ = 1), R_{i,λ} is the risk at any instant of time, and λ is the safety
level coefficient. When λ is far from unity, we say that the risk is not at the equilibrium state. The quantity
R_{i,eq} − R_{i,λ} is the amount of risk that the safety system absorbs if a change in risk is brought about in
an infinitely slow and reversible manner. It is the change in entropy of a safety system, hazard or risk which
is of usual interest, and this is evaluated as follows,

ΔΦ_{i,λ} = Φ_{i,λ,2} − Φ_{i,λ,1} = ∫_{1}^{2} (dR_{i,λ}/dt)·dt    [1.95]

or alternatively,

ΔΦ_{i,λ} = R_{i,λ,2} − R_{i,λ,1}    [1.96]

If state 2 is the equilibrium state (λ2 = 1), we can say that Equation [1.96] establishes the relation between
two states, the risk equilibrium state (R_{i,eq}) and any risk state at any instant of time,

ΔΦ_{i,λ} = R_{i,eq} − R_{i,λ}    [1.97]

If ΔΦ is positive, Equation [1.97] shows that the change could occur from a higher safety level (λ > 1)
towards either a lower safety level (λ < 1) or the equilibrium state (where λ is equal to unity); if ΔΦ is
negative, the change would occur in the reverse direction. When ΔΦ is null (zero), the system is at
equilibrium, and no risk change can take place in either direction.
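A small Python sketch of Equations [1.96] and [1.97] and of the sign interpretation above; the risk values used are illustrative assumptions:

    def risk_entropy_change(R_eq, R):
        """Entropy change between the equilibrium state and the current
        risk state, Equation [1.97]: delta_phi = R_eq - R."""
        return R_eq - R

    def direction_of_change(R_eq, R):
        """Interpret the sign of the entropy change as described in the text."""
        d = risk_entropy_change(R_eq, R)
        if d > 0:
            return "change proceeds from a higher safety level towards equilibrium"
        if d < 0:
            return "change proceeds in the reverse direction"
        return "system at equilibrium; no risk change in either direction"

    # Illustrative (assumed) risk values.
    print(direction_of_change(R_eq=8.0, R=5.0))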

RISK ANALYSIS MODEL

This chapter describes the model of risk analysis consistent with the international regulations. In addition it
discusses two other matters, the use of quantitative risk assessment methodology and uncertainty. There
are three major elements of risk analysis. These are risk assessment, risk management and risk
communication (Davies 1996), and each is integral to the overall process. Prior to risk analysis we should
carry out risk identification; risk identification sets out to identify an organisation's exposure to uncertainty. This
requires an intimate knowledge of the organization, the market in which it operates, the legal, social,
political and cultural environment in which it exists, as well as the development of a sound understanding of
its strategic and operational objectives, including factors critical to its success and the threats and
opportunities related to the achievement of these objectives. Risk identification should be approached in a
methodical way to ensure that all significant activities within the organization have been identified and all
the risks flowing from these activities defined. All associated volatility related to these activities should be
identified and categorised. Business activities and decisions can be classified in a range of ways, examples of
which include:
(1) Strategic – These concern the long-term strategic objectives of the organization. They can be affected by
such areas as capital availability, sovereign and political risks, legal and regulatory changes, reputation
and changes in the physical environment.
(2) Operational – These concern the day-to-day issues that the organization is confronted with as it strives to
deliver its strategic objectives.
(3) Financial – These concern the effective management and control of the finances of the organization and
the effects of external factors such as availability of credit, foreign exchange rates, interest rate
movement and other market exposures.
(4) Knowledge management – These concern the effective management and control of the knowledge
resources, the production, protection and communication thereof. External factors might include the
unauthorised use or abuse of intellectual property, area power failures, and competitive technology.
Internal factors might be system malfunction or loss of key staff.
(5) Compliance – These concern such issues as health and safety, environmental, trade descriptions,
consumer protection, data protection, employment practices and regulatory issues.

Whilst risk identification can be carried out by outside consultants, an in-house approach with well
communicated, consistent and co-ordinated processes and tools is likely to be more effective. In-house
ownership of the risk management process is essential. After the risk identification, we proceed with risk
description. The objective of risk description is to display the identified risks in a structured format, for
example, by using a table. The risk description table overleaf can be used to facilitate the description and
assessment of risks. The use of a well designed structure is necessary to ensure a comprehensive risk
identification, description and assessment process. By considering the exposure, severity of consequences,
probability of occurrence, and the safety level associated with each of the risks set out, it should be possible
to prioritise the key risks that need to be analysed in more detail. Identification of the risks associated with
business activities, or human activities and decision making may be categorised as strategic, project or
tactical, operational, etc. It is important to incorporate risk management at the conceptual stage of all
projects and human activities as well as throughout the life of a specific target (e.g. personnel, equipment,
project, business interruption, product, environment).
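
Purely as an illustration of such a structured format (the field names below are assumptions of this example, not a prescribed layout), a risk description record can be held in a simple data structure so that exposure, probability of occurrence, consequence and safety level are captured consistently for later prioritisation.

from dataclasses import dataclass

@dataclass
class RiskDescription:
    """Illustrative risk description record; the field set is assumed, not prescriptive."""
    name: str
    category: str        # e.g. strategic, operational, financial, compliance
    target: str          # e.g. personnel, equipment, product, environment
    hazard: str
    exposure: float      # exposure coefficient
    likelihood: float    # probability of occurrence
    consequence: str     # qualitative loss criticality
    safety_level: float  # effectiveness of existing controls

register = [
    RiskDescription("Solvent storage fire", "operational", "facility",
                    "flammable solvent inventory", exposure=0.3,
                    likelihood=1e-4, consequence="major", safety_level=2.0),
]
for risk in register:
    print(risk.name, "-", risk.category, "-", risk.consequence)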


Figure 2.01 – Example of a risk analysis methodology.

RISK ASSESSMENT
For the purposes of this document risk assessment is defined as the overall process of hazard identification
and risk estimation (likelihood, exposure, consequence, and safety level assessments). The risk assessment
process aims to identify and assess all risks that could result in harm to human health or other targets (e.g.
environment, business interruption, product, equipment) due to the proposed dealings with the human
activities and technology. The three main steps involved in risk assessment include:
(1) Hazard identification involving analysis of what, how, where and when something could go wrong and
the causal pathway leading to that adverse outcome;
(2) Consideration of the likelihood of an adverse outcome and the severity of that outcome (consequences);
(3) Risk estimation to determine the chance that potential harm would be realised.

The risk estimate is a combination of the exposure, likelihood (probability), consequences (loss criticality) of
an adverse outcome, and safety level. Also, the risk estimate incorporates consideration of uncertainty.


RISK MANAGEMENT
For the purposes of this document risk management is defined as the overall process of risk evaluation, risk
treatment and decision making to manage potential adverse impacts. Risk management includes risk
evaluation, the process of identifying those risks that warrant treatment to reduce the likelihood, exposure,
or severity of an adverse outcome. Under the international regulations, potential adverse effects involve
consideration of risk posed by or as the result of human activity or the use of technology. Risk management
is a key mechanism used by the Regulator to regulate dealings with hazards. One of the Regulator’s principal
functions in risk management is to decide whether or not to allow certain dealings with hazards. The criteria
for those decisions consider only harm to human health and safety and other targets (e.g. the environment,
business interruption, product, and equipment). Specific questions addressed as part of risk management
include:
(1) Which risks require management?
(2) What conditions need to be in place to manage those risks?
(3) Which of the proposed management conditions will adequately control those risks?
(4) Is human health and safety and the environment adequately protected under the proposed licence
conditions?

The three main steps involved in risk management include:


(1) Evaluating the risks, selecting those that require management;
(2) Identifying the options for risk treatment;
(3) Choosing the actions proposed for risk treatment.

RISK COMMUNICATION
For the purposes of this document risk communication is defined as the culture, processes and structures to
communicate and consult with stakeholders about risks. Specifically, it is the communication of the risks to
human health and the environment posed by certain dealings with hazards. The principal functions of risk
communication in the context of the Act are:
(1) To inform stakeholders of risks identified from proposed dealings with GMOs and the licence conditions
proposed to manage those risks;
(2) To establish effective dialogue with the gene technology advisory committees, agencies prescribed in
legislation, and all interested and affected stakeholders.

That dialogue is used to ensure that the scientific basis for the risk assessments is sound, that the Regulator
takes into account all of the necessary considerations to adequately protect human health and the
environment including community based concerns, and that the functions and processes involving
communication are monitored and continually improved.

MODELS OF RISK ANALYSIS


The model of risk analysis used consists of the three elements discussed above, namely risk assessment, risk
management and risk communication. The model is constrained by the national and international regulations
in which risk assessment and risk management are referred to as separate processes. The model recognises
that there is overlap between the individual elements but also that certain functions required by the
legislation are quite distinct within each of those elements. These components can be represented as
overlapping domains whose functions are highly interdependent. The separation of risk assessment and risk

management is critical to clearly distinguishing the evaluation of risk based on scientific evidence from
assessing the significance of those risks in a wider context and determining appropriate management
measures. However, it is recognised that risk analysis is an iterative process, and interaction between risk
managers and risk assessors is essential for practical application.

COMPONENTS IN RISK ANALYSIS


There are three key steps in assessing the risks to human health and safety and the environment. These are
establishing the risk context, assessing the risks and then managing or treating those risks. The model
adopts the principle that the risk analysis process should follow a structured approach incorporating the
three distinct but closely linked components of risk analysis (risk assessment, risk management and risk
communication), each being integral to the overall process. There should be a functional separation of risk
assessment and risk management, to the extent practicable, in order to ensure the scientific integrity of the
risk assessment, to avoid confusion over the functions to be performed by risk assessors and risk managers
and to reduce any conflict of interest. The risk context includes:
(1) The scope and boundaries of the risk analysis determined by the national and international regulations
and the Regulator’s approach to their implementation; the proposed dealings;
(2) The nature of the genetic modification;
(3) The criteria and baselines for assessing harm.

The risk assessment includes assessing hazards, likelihoods, exposure, consequences, and the safety level to
arrive at a risk estimate. Risk management includes identifying the risks that require management, the
options to manage those risks and then selecting the most appropriate options. A decision is made by the
Regulator on whether to issue a licence on the basis of the risk assessment and that the risks identified can
be managed. These three elements form the basis of risk analysis under the Safety Law. The final decision
on issuing a licence or permission is only made after consultation and consideration of comments provided
by all key stakeholders. This feedback provides a key point of input by stakeholders into the decision making
process. Quality control through internal and external review is also an integral part of every stage of the
process. Monitoring and review are undertaken as part of ensuring that the risks are managed once a licence
or permission is issued. This is undertaken both by the Regulator and by applicants, and the results feed
back into the process. Compliance with licence conditions is also monitored by the Regulator or any
Regulator’s official representative. The overall process of risk analysis is highly iterative and involves
feedback both internally during the process and through communication and consultation.

QUALITATIVE AND QUANTITATIVE RISK ASSESSMENT


The aim of risk assessment is to apply a structured, systematic, predictable, repeatable approach to risk
evaluation. This is the case for both qualitative and quantitative assessments. The aim of quantitative risk
assessment is to determine the probability that a given hazard will occur and the error associated with the
estimation of that probability. In such an assessment the probability includes both the likelihood that a
hazard will occur, the consequences if it did occur, the exposure to the hazard, and the safety level as there
is a direct relationship between the four parameters. This type of analysis is appropriate to situations such as
chemical and industrial manufacture where there is a long history during which information has been
accumulated on the type and frequency of risks. It requires large amounts of data and extensive knowledge
of the individual processes contributing to the potential risks to be accurate.
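
For instance, where such a long operating history exists, the probability (expressed here as an hourly frequency) and the error associated with its estimation can be approximated from historical records. The sketch below assumes a Poisson arrival model and a normal approximation for the confidence interval, which is one common convention adopted only for illustration; the plant history figures are hypothetical.

import math

def frequency_with_error(events: int, exposure_hours: float, z: float = 1.96):
    """Estimate an hourly event frequency and an approximate 95% confidence interval.

    Assumes events arrive as a Poisson process (an assumption made for this sketch).
    """
    rate = events / exposure_hours
    half_width = z * math.sqrt(events) / exposure_hours if events > 0 else float("nan")
    return rate, (rate - half_width, rate + half_width)

if __name__ == "__main__":
    # Hypothetical plant history: 12 recorded loss events over 350,000 operating hours.
    rate, (low, high) = frequency_with_error(12, 350_000)
    print(f"estimated frequency = {rate:.2e}/hr (approx. 95% CI {low:.2e} to {high:.2e})")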


Risk assessments for the environment and for other targets (e.g. business interruption, product, equipment, and
personnel or human health) are, in the absence of quantitative data, often qualitative because of their
complexity, the number of inputs, and the necessity to deal with multiple receptors that can give multiple
impacts. This is not to say that qualitative assessments do not employ quantitative data. On the contrary,
qualitative assessments use quantitative information when it is available. In using qualitative assessments
the maximum amount of information can be provided describing exposure, likelihood and consequence (loss
criticality). Quantitative assessments use numerical values that may be derived from:
(1) Experimental data;
(2) Extrapolation from experimental studies on related systems;
(3) Historical data;
(4) Inference from models used to describe complex systems or interactions.

Qualitative assessments use relative descriptions of likelihood and adverse outcomes and can combine data
derived from several sources, some of which may be quantitative. The use of qualitative or quantitative
approaches depends on the amount, type and quality of the data; the complexity of the risk under
consideration; and the level of detail required for decision making. The weaknesses associated with qualitative
assessments can be overcome by taking a number of precautions. Several specific weaknesses have been identified
and these can be controlled and minimised. Ambiguity can be reduced by using defined terminology for
likelihood, exposure, consequences, safety level and risk. Potential variations between assessors can be
reduced through quality control measures including internal and external review and sourcing expert advice.
Differing viewpoints, perspectives and biases can be reduced through better descriptions of what the
national and international regulation is trying to protect and stakeholder input through consultation.
Qualitative risk assessments are, in most instances, the most appropriate form because:
(1) The types of human activities and types of introduced technologies are highly varied and often novel;
(2) Potential human health and environmental adverse effects are highly varied;
(3) Environmental and other target effects arise within highly complex systems that have many incompletely
understood variables;
(4) Adverse effects may occur in the long term and are therefore difficult to quantify.

Therefore qualitative risk assessment provides the most feasible mechanism to assess risk for the majority of
cases as there is insufficient data to apply quantitative methods. Models can be used to inform the process
but are unable to approach the complexity of the systems involved or contribute definitive answers.
Qualitative assessments are also more accessible for risk communication.

UNCERTAINTY
Regardless of whether qualitative or quantitative risk assessment is used, it must be based on evidence and
is therefore subject to uncertainty. Uncertainty is an intrinsic property of risk and is present in all aspects of
risk analysis, including risk assessment, risk management and risk communication. A number of different
types of uncertainty are discussed in more detail below. There is widespread recognition of the importance
of uncertainty in risk analysis. In its narrowest use within risk assessments, uncertainty is defined as a state
of knowledge under which the possible outcomes are well characterised, but where there is insufficient
information confidently to assign probabilities to these outcomes. It is recognised that all dimensions of
risk – the potential adverse outcome or consequence (loss criticality), exposure, likelihood, and safety level –
are always uncertain to some degree. Within this context, uncertainty has been interpreted more broadly as
incertitude, which arises out of a lack of knowledge of the potential outcome, exposure, likelihood, or

safety level. However, uncertainty in risk analysis extends even more widely: there can also be uncertainty
of how risk is perceived and how it is described and estimated. Therefore, uncertainty may be more usefully
described in a broader sense that accords more with common usage. Examples of uncertainty within the
elements of risk analysis could include:
(1) Risk assessment – Uncertain nature of the hazard, such as a lack of knowledge of the biochemical and
other properties of the introduced hazard, environment-specific performance of the technology, its interaction
with other target entities and processes, or landscape changes over long time periods; Uncertainty of the
calculations within the risk assessment process, including assessment of hazards, likelihood, exposure
and consequences; Uncertain descriptions used in qualitative risk assessments due to insufficient
explanations of terminology, use of related terms that are not fully congruent or the use of the same
term in different contexts.
(2) Risk management – Balancing the sufficiency of protective measures against their effectiveness (safety
level); Decision making in the presence of incomplete knowledge and conflicting values.
(3) Risk communication - Uncertainty of communication effectiveness due to difference in knowledge,
language, culture, traditions, morals, values and beliefs.

The processes in risk analysis that are particularly sensitive to this broadly defined form of uncertainty
include establishing the risk context, estimating the level of risk, and decision making. Therefore, this
broader consideration of uncertainty is useful for a number of reasons, including:
(1) Applicability to qualitative risk assessments where the sources of uncertainty cover both knowledge and
descriptions used by assessors;
(2) Ensuring that information is not over or under-emphasised during the identification of uncertainty;
(3) Highlighting areas where more effort is required to improve estimates of risk and apply appropriate
cautionary measures;
(4) Even with the best risk estimates, extending analysis of uncertainty to the decision making process will
improve the quality of the decisions;
(5) Helping to produce a clearer separation of the values and facts used in decision making;
(6) Fulfilling an ethical responsibility of assessors to identify the limits of their work;
(7) Developing trust between stakeholders through increased openness and transparency of the regulatory
process;
(8) Increasing the opportunity for more effective communication about risk.

One aspect of uncertainty is related to the meaning of words (semantics) adopted by the Regulator. A clear
definition of terms is a very important and practical means for reducing uncertainty that might arise simply
as a result of ambiguity in language. Specific terms have been selected as unique descriptors of likelihood,
consequence, exposure, safety level and risk. This will aid the clarity and rigour as well as the consistency
and reproducibility of assessments. This will in turn enhance the intelligibility of documentation prepared by
the Regulator. It may be seen as a part of the scientific discipline and intellectual rigour that the Regulator
seeks to bring to all levels and aspects of risk analysis. The use of consistent terminology has impacts at all
levels of risk analysis: in having an agreed setting of context, in undertaking risk assessment, in risk
treatment (especially for licence conditions which need to be intelligible, unambiguous and enforceable) and
in risk communication.


RISK ASSESSMENT

Risk assessment is the overall process of identifying the sources of potential harm (hazard) and assessing
both the seriousness (consequences or loss criticality), the exposure to any hazard, the likelihood of any
adverse outcome that may arise, and the safety level. It is based on hazard, consequence, exposure,
likelihood, and safety level assessments leading to an estimation of risk. For the purposes of this document
risk is defined as the chance of something happening that will have an undesired impact. In the context of
the national and international regulations, only hazards that arise as a result of human activity and
technology and lead to an adverse outcome for humans or the other target where human is involved (e.g.
environment, business interruption, product, processes, equipment interface) can be considered by the
Regulator. Risk, as considered here, is concerned with assessing potential harm to human health and safety
and the environment that might arise from the use of any asset or technology. Kaplan and Garrick (1981)
suggest that risk is most usefully considered as a narrative that answers three questions:
(1) What can happen?
(2) How likely is it to happen?
(3) If it does happen, what are the consequences?

Therefore, an estimate of the level of risk (negligible, low, moderate, substantial, high or very high) is
derived from the likelihood, exposure, consequences of individual risk scenarios that arise from identified
hazards, and safety level. In addition, uncertainty about likelihood, exposure, and consequences of each risk
scenario will affect the individual estimates of risk. The individual steps in the process of risk assessment of
hazards are discussed in this chapter. They consist of setting the context for the risk assessment, identifying
the hazards that may give rise to adverse outcomes, assessing the consequences, exposures and likelihoods
of such outcomes and arriving at a risk estimate.
The purpose of risk assessment under the international regulations is to identify risks to human health and
the targets and estimate the level of risk based on scientific evidence. Risks to all living organisms and
relevant ecosystems should be considered. Risk analysis can be applied to many different types of risk and
different methodologies have been applied to assess different risks. Assessment of risks to health and safety
often takes the form of hazard identification, dose-response assessment and exposure assessment leading to
risk characterisation. It draws on information from disciplines such as toxicology, epidemiology and exposure
analysis. Environmental risk assessment requires assessing harm not only to individuals and populations
within a species but also to interactions within and between species in the context of biological communities
and ecosystems. There may also be the potential for harm to the physical environment. Information can be
sourced from studies of botany, zoology, entomology, mycology, microbiology, biochemistry, population
genetics, agronomy, weed science, ecology, chemistry, hydrology, geology and knowledge of
biogeochemical cycles, and therefore requires consideration of complex dynamic webs of trophic
interactions.
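
Whatever form the combination of the four parameters finally takes, the sketch below illustrates the idea of arriving at one of the qualitative levels listed above. It assumes a simple multiplicative combination of exposure, likelihood and consequence moderated by the safety level, and maps the result onto those levels; both the combination rule and the band thresholds are assumptions made for this example, not the model itself.

LEVELS = ["negligible", "low", "moderate", "substantial", "high", "very high"]

def risk_estimate(exposure: float, likelihood: float,
                  consequence: float, safety_level: float) -> float:
    """Illustrative combination R = (E x L x C) / S; the form is assumed, not prescribed."""
    return (exposure * likelihood * consequence) / safety_level

def qualitative_level(r: float, thresholds=(0.1, 1.0, 10.0, 100.0, 1000.0)) -> str:
    # Band thresholds are arbitrary demonstration values.
    for level, upper in zip(LEVELS, thresholds):
        if r < upper:
            return level
    return LEVELS[-1]

if __name__ == "__main__":
    r = risk_estimate(exposure=4, likelihood=5, consequence=8, safety_level=2)
    print(f"R = {r:.1f} -> {qualitative_level(r)}")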

THE SCOPE OF RISK ASSESSMENT


Risks that may be posed by human activities and technology are required to be considered in the context of
the proposed dealing with hazards and are assessed on a case by case basis. In the case of field trials, the
scale of the release is limited in both space and time. In a commercial release the scale is not necessarily
restricted and therefore a wider range of environmental and ecological settings is considered in the risk

assessment. An application submitted to the Regulator must contain information that defines the hazard and
the dealings as set out in the Safety Law and national and international regulations. Other important factors
in establishing the context for risk assessment are:
(1) The location of the dealings, including the biotic and abiotic properties of the site(s);
(2) Size and time scale of the dealings;
(3) The applicant’s proposed management of the dealings to limit dissemination of the hazard or its
hazardous material;
(4) Other hazards already released;
(5) Particular vulnerable or susceptible entities that may be specifically affected by the proposed release.

In some instances, a particular hazard may already be present naturally in the environment and this
background exposure may be important. For example, many antibiotic resistance marker genes are derived
from soil bacteria that are abundant in the environment. Therefore exposure to the protein encoded by such
a gene derived from a genetically modified organism may be insignificant against this background.

Qualitative Approach
The first task of the safety practitioner is to develop an understanding of the organization to be assessed.
This does not mean that the practitioner must become an expert in the operation of the enterprise to be
evaluated, but must acquire enough of an understanding of how the organization operates to appreciate its
complexities and nuances. Consideration should be of factors such as (1) hours of operation, (2) types of
clients served, (3) nature of the business activity, (4) types of services provided or products produced,
manufactured, stored, or otherwise supplied, (5) the competitive nature of the industry, (6) the sensitivity of
information, (7) the corporate culture, (8) the perception of risk tolerance, and so on. The types of
information that the practitioner should ascertain include:
(1) The hours of operation for each department;
(2) Staffing levels during each shift;
(3) Types of services provided and goods produced, stored, manufactured, etc.;
(4) Type of clientele served (e.g. wealthy, children, foreigners, etc.);
(5) The competitive nature of the enterprise;
(6) Any special issues raised by the manufacturing process (e.g. environmental waste, disposal of defective
goods, etc.);
(7) Type of labor (e.g. labor union, unskilled, use of temporary workers, use of immigrants, etc.).

The second step in the process is to identify the assets of the organization that are at risk to a variety of
hazards:
(1) People – People include employees, customers, visitors, vendors, patients, guests, passengers, tenants,
contract employees, and any other persons who are lawfully present on the property being assessed. In
very limited circumstances, people who are considered trespassers also may be at risk for open and
obvious hazards on a property or where an attractive nuisance exists (e.g. abandoned warehouse,
vacant building, a “cut through” or path routinely used by people to pass across property as a short cut).
(2) Property – Property includes real estate, land and buildings, facilities, tangible property such as cash,
precious metals, and stones, dangerous instruments (e.g. explosive materials, weapons, etc.), high theft
items (e.g., drugs, securities, cash, etc.), as well as almost anything that can be stolen, damaged, or
otherwise adversely affected by a risk event. Property also includes the “goodwill” or reputation of an
enterprise that could be harmed by a loss risk event.

(3) Information – Information includes proprietary data, such as trade secrets, marketing plans, business
expansion plans, plant closings, confidential personal information about employees, customer lists, and
other data that if stolen, altered, or destroyed could cause harm to the organization.

The next step in the safety risk assessment methodology is to identify the types of events or incidents
which could occur at a site based on the history of previous events or incidents at that site, events at
similarly situated sites, the occurrence of events (e.g. crimes) that may be common to that type of business,
natural disasters peculiar to a certain geographical location, or other circumstances, recent developments, or
trends. Loss risk events can fall into three distinct categories: crimes, non-criminal events such as human-
made or natural disasters, and consequential events caused by an enterprise’s relationship with another
organization, when the latter organization’s poor or negative reputation adversely affects the enterprise.
There are numerous sources for information and data about crime-related events that may impact an
enterprise. The safety practitioner may consider any of the following sources in aiding the determination of
risk at a given location:
(1) Local police crime statistics and calls for service at the site and the immediate vicinity for a three to five
year period.
(2) Uniform crime reports published by the Department of Justice for the municipality.
(3) The enterprise’s internal records of prior reported criminal activity.
(4) Demographic and social condition data providing information about economic conditions, population
densities, transience of the population, unemployment rates, etc.
(5) Prior criminal and civil complaints brought against the enterprise.
(6) Intelligence from local, state, or federal law enforcement agencies regarding threats or conditions that
may affect the enterprise.
(7) Professional groups and associations that share data and other information about industry-specific
problems or trends in criminal activity.
(8) Other environmental factors such as climate, site accessibility, and presence of “crime magnets”.

The practitioners should consider two subcategories of non-crime-related events: natural and “human-made”
disasters. Natural disasters are such events as hurricanes, tornadoes, major storms, earthquakes, tidal
waves, lightning strikes, and fires caused by natural disasters. “Human-made” disasters or events could
include labor strikes, airplane crashes, vessel collisions, nuclear power plant leaks, terrorist acts (which also
may be criminal-related events), electrical power failures, and depletion of essential resources.

Consequential Events
A “consequential” event is one where, through a relationship between events or between an enterprise and
another organization, the enterprise suffers some type of loss as a consequence of that event or affiliation,
or when the event or the activities of one organization damage the reputation of the other. For example, if
one organization engages in illegal activity or produces a harmful product, the so-called innocent enterprise
may find its reputation tainted by virtue of the affiliation alone, without any separate wrongdoing on the part
of the latter organization.
Probability of loss is not based upon mathematical certainty; it is a consideration of the likelihood that a loss
risk event may occur in the future, based upon historical data at the site, the history of like events at similar
enterprises, the nature of the neighborhood, immediate vicinity, overall geographical location, political and
social conditions, and changes in the economy, as well as other factors that may affect probability. For
example, an enterprise located in a flood zone or coastal area may have a higher probability for flooding and
hurricanes than an enterprise located inland and away from water. Even if a flood or hurricane has not

occurred previously, the risks are higher when the location lends itself to the potential for this type of a loss
risk event. In another example, a business that has a history of criminal activity both at and around its
property will likely have a greater probability of future crime if no steps are taken to improve security
measures and all other factors remain relatively constant (e.g., economic, social, political issues). The
degree of probability will affect the decision-making process in determining the appropriate solution to be
applied to the potential exposure.
When looked at from the “event” perspective, the practitioner may want to query how often an exposure
exists per event type. For example, if the event is robbery of customers in the parking lot, then the relevant
inquiry may be how often customers are in the lot and for how long when walking to and from their
vehicles. If the event is the rape of a resident in an apartment building, then the inquiry may focus on how
often the vulnerable population is at risk. If the event were a natural disaster such as a hurricane, the
practitioner certainly would want to know when hurricane season takes place.
The security practitioner should consider all the potential costs, direct and indirect, financial, psychological,
and other hidden or less obvious ways in which a loss risk event impacts an enterprise. Even if the
probability of loss is low, but the impact costs are high, security solutions still are necessary to manage the
risk. Direct costs may include:
(1) Financial losses associated with the event, such as the value of goods lost or stolen;
(2) Increased insurance premiums for several years after a major loss;
(3) Deductible expenses on insurance coverage;
(4) Lost business from an immediate post-risk event (e.g. stolen goods cannot be sold to consumers);
(5) Labor expenses incurred as a result of the event (e.g. increase in security coverage post event);
(6) Management time dealing with the disaster or event (e.g. dealing with the media);
(7) Punitive damages awards not covered by ordinary insurance.

Indirect costs may include:


(1) Negative media coverage;
(2) Long-term negative consumer perception (e.g. that a certain business location is unsafe);
(3) Additional public relations costs to overcome poor image problems;
(4) Lack of insurance coverage due to a higher risk category;
(5) Higher wages needed to attract future employees because of negative perceptions about the enterprise;
(6) Shareholder derivative suits for mismanagement;
(7) Poor employee morale, leading to work stoppages, higher turnover, etc.

The safety practitioner will have a range of options available, at least in theory, to address the types of loss
risk events faced by an enterprise. “In theory” alludes to the fact that some options may not be available
either because they are not feasible or are too costly, financially or otherwise. Options include safety
measures available to reduce the risk of the event. Equipment or hardware, policies and procedures,
management practices, and staffing are the general categories of safety-related options. However, there are
other options, including transferring the financial risk of loss through insurance coverage or contract terms
(e.g. indemnification clauses in security services contracts), or simply accepting the risk as a cost of doing
business. Any strategy or option chosen still must be evaluated in terms of availability, affordability, and
feasibility of application to the enterprise’s operation. The practical considerations of each option or strategy
should be taken into account at this stage of the risk assessment. While financial cost is often a factor, one
of the more common considerations is whether the strategy will interfere substantially with the operation of
the enterprise. For example, retail stores suffer varying degrees of loss from the shoplifting of goods. One
possible “strategy” could be to close the store and keep out the shoplifters. In this simple example, such a

solution is not feasible because the store also would be keeping out legitimate customers and would go out
of business. In a less obvious example, an enterprise that is open to the public increases its access control
policies and procedures so severely that a negative environment is created by effectively discouraging
people from going to that facility as potential customers and hence loses business. The challenge for the
safety practitioner is to find that balance between a sound safety strategy and consideration of the
operational needs of the enterprise, as well as the psychological impact on the people affected by the safety
program.
The final step in conducting a safety risk analysis is consideration of the cost versus benefit of a given
security strategy. The security practitioner should determine what the actual costs are of the implementation
of a program and weigh those costs against the impact of the loss, financially or otherwise.
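
One simple way of making that comparison explicit is to annualise the expected loss with and without the proposed strategy and compare the reduction with the annual cost of the strategy. The sketch below uses an annualised loss expectancy convention, offered only as one possible approach; the function names and the retail figures are hypothetical.

def annual_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = expected cost of one event multiplied by expected events per year."""
    return single_loss * annual_rate

def net_benefit(single_loss: float, rate_before: float, rate_after: float,
                annual_control_cost: float) -> float:
    """Annual loss reduction achieved by a safety strategy minus its annual cost."""
    reduction = (annual_loss_expectancy(single_loss, rate_before)
                 - annual_loss_expectancy(single_loss, rate_after))
    return reduction - annual_control_cost

if __name__ == "__main__":
    # Hypothetical retail example: shoplifting losses before and after a new control.
    value = net_benefit(single_loss=800.0, rate_before=25.0,
                        rate_after=10.0, annual_control_cost=7_500.0)
    print(f"net annual benefit of the strategy: {value:,.0f} currency units")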

HAZARD ASSESSMENT
For the purposes of this document a hazard is defined as a source of potential harm. It can be an event or a
substance or an organism. Hazard identification underpins the process of risk assessment. In its simplest
form it can be conceptualised as asking the question: “What can go wrong?” This process should be
distinguished from risk estimation, which includes consideration of likelihood, exposure, consequences or
loss criticality, and safety level. A critical stage of risk assessment is identifying all likely hazards in the
process of dealing with a particular hazard. Unidentified hazards may pose a major threat to health and
the environment. It is important, therefore, that a comprehensive approach is adopted to ensure that the full
range of hazards is identified. A hazard needs to be distinguished from an adverse outcome and also from a
risk. A hazard is a source of potential harm and only becomes a risk when there is some chance that harm
will actually occur. These are important distinctions that can be difficult to establish clearly in some
circumstances. For example, the hazard of catching a dangerous disease only becomes a risk if there is
exposure to the organism that causes that disease. The adverse outcome only arises if infection occurs.
Although a hazard is a source of potential harm, often particular circumstances must occur before that harm
can be realised and before it can be considered a risk. Indeed, quite specific conditions may be required for
the adverse outcome to eventuate. For instance, a gene encoding virus resistance in a plant could lead to
increased weediness in the presence of the virus, but only if the viral disease is a major factor limiting the
spread and persistence of the plant.

Hazard Analysis
A number of hazard identification techniques are available that range from broad brush approaches to more
targeted analysis. Techniques used by the Regulator and staff include, but are not limited to checklists,
brainstorming, commonsense, previous agency experience, reported international experience, consultation,
scenario analysis and inductive reasoning (fault and event tree analysis). The AS/NZS 4360:2004 standard (the
joint Australian and New Zealand risk management standard) contains details of a range of other techniques that have not been broadly
applied in the context of biological systems. These include HAZOP (Hazards and Operability Analysis), SWOT
(Strengths, Weaknesses, Opportunities and Threats Analysis), Failure Mode Effect Analysis (FMEA),
Hierarchical Holographic Modelling (HHM), Multicriteria Mapping, Delphi Analysis and Systems Analysis.
Hazards can be considered from the top down, that is, the potential adverse outcomes are identified and the
processes that may give rise to them described. Or they can be addressed from the bottom up, that is the
biological, physical, chemical and human components and processes that make up the system to be studied
are examined and potential adverse outcomes identified. Where risks have already been identified and
characterised and are well understood it is possible to use deductive reasoning to identify hazards. However
deductive techniques are unlikely to identify synergistic or antagonistic effects. The process of hazard

identification involves consideration of causal pathways that result in harm. Although it is important to
identify all potential hazards it is also important to apply a test of reasonableness. The number of hazards
that can be conceived as an intellectual exercise by varying circumstances, environmental conditions or
chemical and physical processes is infinite but not all are realistic, likely to eventuate, or to result in
identifiable harm. In identifying hazards the Regulator’s representative will look specifically at:
(1) Altered biochemistry;
(2) Altered physiology;
(3) Unintended change in gene expression;
(4) Production of a substance toxic to humans;
(5) Production of a substance allergenic to humans;
(6) Unintended selection;
(7) Unintended invasion;
(8) Expansion into new areas;
(9) Gene flow by sexual gene transfer;
(10) Gene flow by horizontal gene transfer;
(11) Production of a substance that is toxic to, or causes ill-health or mortality;
(12) Secondary effects (e.g. loss of genetically modified trait efficacy such as pest or pathogen resistance,
development of herbicide resistance);
(13) Production (farming) practices;
(14) Alteration to the physical environment including biogeochemical cycles;
(15) Intentional or unauthorised activities.

Not all of the above categories of hazard will be relevant to all hazards and specific ones may warrant a
more detailed consideration in one application than in others. Some hazards will have similar adverse
outcomes and could be grouped on that basis. In risk assessments, the main hazard groups considered
include human health and the health of other (non-target) organisms.

Causal Linkages
Once hazards have been identified it is important to establish that there is a causal link between the hazard
and an adverse outcome. There should be an identifiable pathway or route of exposure that demonstrates
that the hazard will cause the adverse outcome. There are several possible combinations:
(1) A single hazard gives rise to a single adverse outcome;
(2) A single hazard gives rise to multiple adverse outcomes;
(3) Multiple hazards that act independently and give rise to a single adverse outcome;
(4) Multiple hazards that interact and give rise to single or multiple adverse outcomes.

The Regulator will also consider if any of the identified hazards have synergistic, additive, antagonistic,
cumulative or aggregate effects, in combination with both non-target organisms and other existing targets.
Additive effects may occur where different hazards give rise to the same adverse outcome, which could
increase the negative impact. Synergism arises when the combined effect is greater than the sum of the individual effects. For example,
a genetically modified organism expressing two insecticidal genes with different modes of action may have
greater potency than the sum of the effects of the individual genes. Cumulative effects arise where
there may be repeated exposure over time that may aggravate an established disease or state and
antagonistic effects may occur where the hazard alters the characteristics of the target in opposing ways.
For example, if a gene was introduced or modified to increase production of a particular compound but it
also reduced growth rates, this would be regarded as an antagonistic effect. Establishing the underlying

causal linkage provides the foundation for likelihood, exposure and consequence assessments and makes it
easier to identify where further information may be required or where there may be uncertainty. Other
methods of linking a hazard to an adverse outcome are descriptions based on expert scientific knowledge, or
by inference from experimental data and models.
Specific circumstances may be required for a hazard to eventuate, for instance, certain characteristics of the
environment may be important such as soil type or rainfall and this must be taken into account. Factors that
may be positively associated with the development of adverse effects will also be considered. These include
enabling factors such as poor nutrition and precipitating factors such as the exposure to a specific disease
agent or toxin. The regulations require the Regulator to consider the short and the long term when
assessing risks. The conduct of risk analysis does not attempt to fix durations that are either short or long
term, but takes account of the exposure, likelihood and impact of an adverse outcome over the foreseeable
future, and does not discount or disregard a risk on the basis that an adverse outcome might not occur for a
long time. An example of a short term effect is acute toxic effects on an organism due to direct exposure to
the genetically modified organism. In contrast, increased weediness arising from gene flow from a genetically
modified plant is an example of what could be considered a long term effect as it develops over a number of
generations. The timeframes considered by the Regulator will be appropriate to the hazard, its lifecycle and
the type of adverse outcome under consideration. A genetically modified organism that has a lifespan of many
years may involve considerations on a longer timeframe than another genetically modified organism that has a
significantly shorter lifespan, although the implications and long term consequences of the release of either
would also be considered.

Hazard Selection
Hazards that warrant detailed estimation of likelihood, exposure and consequence to assess whether they
pose a risk to human health and safety and other targets (e.g. environment, business interruption, product,
equipment and process) are determined by applying a number of criteria including those specified by the
safety regulations and those of specific concern to stakeholders. Those that do not lead to an adverse
outcome or could not reasonably occur will not advance in the risk assessment process. In some cases the
adverse outcome may not be significant, in which case the hazard may be set aside. Thus, even at an early
stage, consideration of likelihood, exposure and consequence (loss criticality), and safety level becomes part
of an iterative process in the cycle of risk assessment. Screening of hazards occurs throughout the risk
assessment process, with those that do not require further consideration being set aside. It is also possible
that additional hazards may be identified during other stages of the process, in which case if regarded as
relevant, they will be considered. Consultation with stakeholders on applications ensures all relevant hazards
are identified. Hazard selection should be comprehensive and rigorous. However, care should be taken to
avoid over emphasis of unrealistic events. It should be relevant to the nature of the hazard and the spatial
and temporal scale of the proposed release. The process should be iterative with feedback mechanisms
between individual steps and take into account the spatial and temporal scale of the proposed release,
previous relevant assessments and data collected from previous releases of the hazard or hazardous
materials, if available. It should also be transparent and consider stakeholders’ concerns relevant to
the health and safety of people and the environment.

EVIDENCE AND EXPOSURE


A critical consideration related to evidence is how much and what data are required. It is important to
distinguish between data necessary for the risk assessment and background information that does not
directly inform the estimate of risk. Collection of data simply to have the information when that information

serves no purpose is an inefficient use of resources. The evidence used to assess an application comes from
a variety of sources. It can also include experimental data from other scientific literature relevant to the
application, practical experience, reviews, theory, models, observations, anecdotal evidence and
uncorroborated statements. Previous assessments of a hazard by other international regulatory agencies are
considered. Where a recognised overseas regulatory agency has made an assessment of the same or a
similar hazard, their findings will also be considered during the risk assessment. Other sources of qualitative
information include:
(1) Expert opinion, from committees or groups of experts, other regulatory authorities or from individual
experts;
(2) Information on potential hazards provided through public consultation;
(3) Published material on related situations.

Emphasis is placed on quantitative data. Scientific studies by the applicant will be assessed for the
appropriateness and quality of experimental design and data analysis and the conclusions must be
substantiated by the data. All of these aspects should be independently evaluated by appropriately qualified
staff. There are internationally accepted standards that must be met for particular types of studies and data
is assessed against these standards. For instance, in toxicological assessments experimental data from
animal studies are used to extrapolate to humans using defined safety factors and environmental risk
assessment is often based on effects on accepted test species. Evidence is weighted by its source (e.g. a
peer reviewed article in a recognised international journal will have more weight than an uncorroborated
statement on a personal website) and by its content. Where statements have insufficient backing they may
be given lesser weight or credence. In cases where there may be conflicting evidence with regard to adverse
impacts, for instance some information showing a negative impact and some showing no effect, this will be
considered in coming to a final conclusion. Evidence can be judged to have both strength and weight. It can
be weighted by the number of studies (a number of weaker pieces of evidence may counterweigh a single
strong piece of evidence) or by the depth of the studies (a detailed study may carry more weight than a
superficial one). The strength of the evidence can be considered through its relationship to the problem. If
evidence is directly related to the problem it will be stronger than evidence that only has an indirect bearing
on the problem. Thus if there are studies of the weediness of a particular species, this will have greater
strength than information about the weediness of a related species. In the absence of direct evidence
indirect evidence is not excluded but will be weighted appropriately. If data are unavailable or incomplete,
the significance of that absence or incompleteness in undertaking an evaluation of the risks of a proposal will
be considered. If the Regulator considers that the lack of data creates uncertainty around a level of risk that
appears manageable, then further collection of data may be required under strictly limited and controlled
field conditions. However, if the Regulator determines that the risk is not manageable, a licence will not be
granted. It is important to consider not only all available evidence and to use that, through logical deduction,
to extend the value of that evidence, but also to consider uncertainty wherever it is apparent and take it into
account.
With regard to frequency or likelihood, it is a known fact that accidents and occupational diseases are random events.
Therefore, risk factors are differentiated by the fact that each of them can lead to the occurrence of an
accident or disease, but with a differing probability. Thus, the probability for an accident to happen because of
the hazardous motions of the movable parts of a drilling machine is different than the one of an accident
caused by lightning. Likewise, one and the same factor may be characterised by a different frequency of action
on the worker, in different moments of the operation of a work system or in analogous systems, depending
on the nature and on the state of the generating element. For instance, the probability of electrocution by
direct touch when handling a power-supply device is higher if the latter is old and the protection isolation of

its conductors is torn, than if it is a brand new one. Yet, from the point of view of promptness and efficiency,
it is not possible to work with probabilities that are strictly determined for each risk factor. In certain cases,
these cannot even be calculated, as in the case of factors relating to the worker. The probability of acting in a
manner that generates accidents can only be approximated. In other situations, the calculation required
for the rigorous determination of the probability of occurrence of the consequence is so toilsome, that it
would eventually be even more costly and time-consuming than the actual application of preventive
measures. This is why it would be better, as a rule, to determine the probabilities by estimation, and to
classify them by ranges. For the purpose we pursue, for instance, it is easier and more efficient to
approximate that a certain accident is likely to be generated by a risk factor characterised by a frequency that is
lower than once every 100 hours. The difference is insignificant, as against more rigorous values of 1 every
85 hours or 1 every 79 hours, yet the event may be classified, in all three cases, as very frequent. For this
reason, if we use the intervals specified in CEI-812/1985, we obtain six general groups of events for the
exposure coefficient (E), as shown in Table 3.01. We will attribute to each group a probability class for
exposure, from I to VI, thus saying that an event which has a probable frequency of exposure lower than
1×10⁻⁷/hr is classified in exposure Class I or Class II, while another event which has a probable frequency of
exposure higher than 1×10⁻²/hr is classified in exposure Class V or Class VI.

Table 3.01 – Quotation scale for exposure coefficient (hourly basis) of the action of risk factors on targets.

EXPOSURE (E)                              EXPOSURE          EXPOSURE   EXPOSURE
lower limit        upper limit            CATEGORY          LEVEL      LEVEL CLASS
—                  < 1×10⁻⁹               Extremely Rare    1          I
1×10⁻⁹             1×10⁻⁸                 Very Rare         2          II
1×10⁻⁸             1×10⁻⁷                 Very Rare         3          II
1×10⁻⁷             1×10⁻⁶                 Rare              4          III
1×10⁻⁶             1×10⁻⁵                 Rare              5          III
1×10⁻⁵             1×10⁻⁴                 Low Frequency     6          IV
1×10⁻⁴             1×10⁻³                 Low Frequency     7          IV
1×10⁻³             1×10⁻²                 Frequent          8          V
1×10⁻²             1×10⁻¹                 Frequent          9          V
1×10⁻¹             1×10⁰                  Very Frequent     10         VI

The exposure of a target (e.g. business, facility, personnel, process, product, equipment) to a potential
harm or hazard relates the time of exposure (contact time) to the lifetime of that target. We can also include
the property associated with the hazard, i.e. the concentration of the agent (or vector) in the case of
human exposure to a chemical or biological hazard. In the first case, the exposure coefficient can be
measured by the following expression,

E = \frac{\sum_{j=1}^{m} a_j \, (t_c)_j}{\sum_{i=1}^{n} t_i}   [3.01]


where aj is the number of contact occurrences with the hazard for a given event, tc is the contact time (in
homogeneous time units), and ti is the lifetime of the event in the same time units as tc. In the second case,
when we are dealing with measurable hazards, i.e. when we can measure a given property of the hazard, such
as the concentration of a chemical or biological species, the exposure coefficient can be evaluated by the
following expression,

E = \frac{\sum_{j=1}^{m} \left( \frac{H_j}{H_{STV}} \right) (t_c)_j}{\sum_{i=1}^{n} t_i}   [3.02]

where Hj is the measured value of the property (e.g. the concentration, pressure, temperature) and HSTV is the
standard value for the measured property (if one exists); if a standard value for the measured property does
not exist, HSTV is assumed to be unity. The exposure coefficient will be given the value of
zero in the absence of any sort of exposure, and the value of ten if there is continuous
exposure to the hazard (see Table 3.01).
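
To illustrate Equation [3.01] together with Table 3.01, the sketch below computes the exposure coefficient from contact records and looks up the corresponding exposure category, level and class; the band boundaries are taken from Table 3.01 and the contact data are hypothetical.

# Upper bounds of the exposure bands of Table 3.01 (hourly basis), with the
# corresponding exposure category, exposure level and exposure class.
EXPOSURE_BANDS = [
    (1e-9, "Extremely Rare", 1, "I"),
    (1e-8, "Very Rare", 2, "II"), (1e-7, "Very Rare", 3, "II"),
    (1e-6, "Rare", 4, "III"), (1e-5, "Rare", 5, "III"),
    (1e-4, "Low Frequency", 6, "IV"), (1e-3, "Low Frequency", 7, "IV"),
    (1e-2, "Frequent", 8, "V"), (1e-1, "Frequent", 9, "V"),
    (1e0, "Very Frequent", 10, "VI"),
]

def exposure_coefficient(contacts, lifetimes):
    """Equation [3.01]: E = sum(a_j * tc_j) / sum(t_i).

    contacts  -- iterable of (a_j, tc_j) pairs: occurrences and contact time, in hours
    lifetimes -- iterable of event lifetimes t_i, in hours
    """
    return sum(a * tc for a, tc in contacts) / sum(lifetimes)

def classify_exposure(e):
    for upper, category, level, cls in EXPOSURE_BANDS:
        if e <= upper:
            return category, level, cls
    return EXPOSURE_BANDS[-1][1:]

if __name__ == "__main__":
    # Hypothetical record: three contact patterns over a 2,000-hour reference period.
    e = exposure_coefficient([(10, 0.5), (4, 2.0), (1, 8.0)], [2_000.0])
    print(f"E = {e:.3e} ->", classify_exposure(e))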

Probability Factors
Conditions and sets of conditions that will worsen or increase target exposure to risk of loss can be divided
into the following major categories:
(1) Physical environment (construction, location, composition, configuration);
(2) Social environment (demographics, population dynamics);
(3) Political environment (type and stability of government, local law enforcement resources);
(4) Historical experience (type and frequency of prior loss events);
(5) Procedures and processes (how the asset is used, stored, secured);
(6) Criminal state-of-art (type and effectiveness of tools of aggression).

The practical value of loss risk analysis depends upon the skill and thoroughness with which the basic risks
to an enterprise are identified. This is the first and most important step in the entire process. Every aspect of
the enterprise or facility under review must be examined to isolate those conditions, activities, and
relationships that can produce a loss. For an effective analysis, the observer must take into account the
dynamic nature of the enterprise on each shift and between daylight and darkness. The daily routine must
be understood, because the loss-producing causes can vary from hour to hour.

LIKELIHOOD (PROBABILITY) OF OCCURRENCE


Likelihood is the chance of something happening. The likelihood assessment centres around the question:
“Will it happen?”, and more specifically, “How likely is it to happen?”. Likelihood is another major component
of risk assessment. If an adverse event is not expected to occur in some relevant timeframe then its impact
does not need to be analysed further. Likelihood coefficient (L) is expressed as a relative measure of both
frequency, the number of occurrences (aj) per unit time (ti),


L = \frac{\sum_{j=1}^{m} a_j}{\sum_{i=1}^{n} t_i}   [3.03]

and probability (from zero to one, where zero is an impossible outcome and one is a certain outcome), the
number of occurrences (aj) per the total number of events (ni).

L = \frac{\sum_{j=1}^{m} a_j}{\sum_{i=1}^{n} n_i}   [3.04]

When we are dealing with time measurements, likelihood can be expressed as a time frequency,

L = \frac{\sum_{j=1}^{m} (t_c)_j}{\sum_{i=1}^{n} t_i}   [3.05]

where tc is the contact time for a given occurrence. Likelihood is expressed in the following terms for
qualitative and quantitative risk assessments: highly likely, likely, unlikely, highly unlikely. Table 3.02 and
Table 3.03 give the quotation scales of the likelihood for non-human targets (e.g. business interruption,
product, equipment, environment) and for human targets.

Table 3.02 – Quotation scale of the probability of occurrences (likelihood) of the action of risk factors on
human and non-human (e.g. business interruption, product, equipment, environment) targets.

LIKELIHOOD (L)                            LIKELIHOOD        LIKELIHOOD   LIKELIHOOD
lower limit        upper limit            CATEGORY          LEVEL        LEVEL CLASS
—                  < 1×10⁻⁶               Negligible        1            I
1×10⁻⁶             5×10⁻⁵                 Very Unlikely     2            II
5×10⁻⁵             1×10⁻⁴                 Very Unlikely     3            II
1×10⁻⁴             5×10⁻⁴                 Unlikely          4            III
5×10⁻⁴             1×10⁻³                 Unlikely          5            III
1×10⁻³             3×10⁻³                 Likely            6            IV
3×10⁻³             1×10⁻²                 Likely            7            IV
1×10⁻²             3×10⁻²                 Very Likely       8            V
3×10⁻²             1×10⁻¹                 Very Likely       9            V
1×10⁻¹             1×10⁰                  Maximal           10           VI


Factors that are important in considering the likelihood of a hazard leading to an adverse outcome are:
(1) The circumstances necessary for the occurrence or presence of the hazard;
(2) The circumstances necessary for the occurrence of an adverse outcome;
(3) The actual occurrence and severity of the adverse outcome;
(4) The persistence or spread of the adverse outcome.

Factors that contribute to the likelihood of an adverse outcome include:


(1) The survival, reproduction and persistence of the GMO;
(2) The circumstances of the release, that is the environment, biotic and abiotic factors, and other
organisms.

The frequency or probability of an initial event should not be considered alone if a chain of events leads to
the adverse outcome. In this case each event in the chain, with an associated likelihood, depends on the
previous event occurring in the first place. The overall likelihood will be lower than the likelihood of any
individual event. Such conditional probabilities need to be factored into determining the final likelihood of an
adverse outcome. Where the exposure pathway is complex it may be difficult to ascribe a single likelihood to
the adverse outcome. Assessing likelihood is more difficult for distant hazards where there may be many
links in the chain of causal events. However, the occurrence of the event (i.e. gene transfer, accident, natural
disaster, chemical spill) does not necessarily result in harm. There are further events necessary to display a
selective advantage and give rise to some identifiable harm. In such cases the effect of all combined
likelihoods will substantially reduce the overall likelihood of an adverse outcome. In contrast, hazards close
to a potentially adverse outcome, such as a product that is toxic to non-target organisms, can usually
provide more robust estimates of likelihood, particularly as there is often a direct correlation between the
dose of toxin and the severity of the adverse outcome and the mechanism of action may have been
experimentally verified. In the case of field trials there is a fixed period for the release but any potential for
adverse effects beyond this period must also be considered. As with any predictive process, accuracy is
greatest in the immediate future and declines into the distant future. As for the classes of probability
(likelihood) of occurrence, a method was finally chosen, following the experiments, by adjusting the
European Union standard concerning risk assessment for machines, taking into consideration the following:
(1) Class 1 – Frequency of occurrence of once in more than 10 years;
(2) Class 2 – Frequency of occurrence of once every 5 to 10 years;
(3) Class 3 – Frequency of occurrence of once every 1 to 5 years;
(4) Class 4 – Frequency of occurrence between once a year and once a month;
(5) Class 5 – Frequency of occurrence between once a month and once a week;
(6) Class 6 – Frequency of occurrence of once in a period of less than 1 day.

LOSS CRITICALITY (CONSEQUENCES)


The consequence assessment stems from the question: “Would it be a problem?”. More specifically, if the
hazard does produce an adverse outcome or event, i.e. is identified as a risk, how serious are the
consequences? The consequences of an adverse outcome or event need to be examined on different levels.
For instance, harm to humans is usually considered on the level of an individual whereas harm to the
environment is usually considered on the level of populations, species or communities. Consequences may
have dimensions of distribution and severity. For example, if a genetic modification resulted in the
production of a protein with allergenic properties, some people may have no reaction to that protein, others
may react mildly while others may be seriously affected. That is, there may be a range of consequences


from an adverse outcome, some people may be more sensitive to a toxin than others, so the response may
range from mild ill health in one individual to serious illness in another, with the most common response
falling between these two extremes. In considering consequences it is important to take into account factors
including the variation and distribution in the severity of the consequences. Assessing the significance of an
adverse impact includes consideration of five primary factors:
(1) The severity of each potential adverse impact, including its number and magnitude and its probable
    severity in the sense of degree, extensiveness or scale (“How serious is the impact?”, “Does it cause a large
change over baseline conditions?”, “Does it cause a rapid rate of change, large changes over a short
time period?”, “Does it have long-term effects?”, “Is the change it creates unacceptable?”);
(2) The spatial extent to which the potential adverse impact may eventually extend (e.g. local, regional,
national, global) as well as to other targets;
(3) The temporal extent of the adverse impact, that is the duration and frequency, the length of time (day,
year, decade) for which an impact may be discernible, and the nature of that impact over time (Is it
intermittent or repetitive? If repetitive, then how often and how frequently?);
(4) The cumulative adverse impact – the potential impact that is achieved when the particular project’s
impact(s) are added to impacts of other dealings or activities that have been or will be carried out;
(5) Reversibility – how long will it take to mitigate the adverse impact? Is it reversible and, if so, can it be
reversed in the short or long-term?

Table 3.03 – Quotation scale of the severity of consequences (loss criticality) of the action of risk factors on
the human body (personnel target).

SEVERITY       SEVERITY OF CONSEQUENCES                                                   SEVERITY          SEVERITY
CATEGORY                                                                                  COEFFICIENT (K)   CLASS
Negligible     Minor reversible consequences with predictable disablement of up to             1                1
               3 calendar days (healing without treatment).
Limited        Reversible consequences with predictable disablement between 3 and            2 – 3              2
               45 days, which require medical treatment.
Medium         Reversible consequences with predictable disablement between 45 and           4 – 5              3
               180 days, which require medical treatment including hospitalization.
Important      Irreversible consequences with diminution of the ability to work of             6                4
               maximum 50 % (third degree invalidity).
Severe         Irreversible consequences with loss of the ability to work of 50 to           7 – 8              5
               100 %, but with capacity of self-service (second degree invalidity).
Very Severe    Irreversible consequences with total loss of the ability to work and            9                6
               of the self-service capacity (first degree invalidity).
Maximal        Death; decease.                                                                10                7


The explanations for consequences to human health focus on injury as the adverse outcome but could
equally focus on the number of people affected or the spatial scale (local, regional, national) of the adverse
impact. Adverse consequences to the environment encompass a wide range of effects and the descriptions
include some of the elements from the factors listed above. Change is an inherent part of any complex dynamic
system, including biological systems. Therefore in assessing adverse consequences arising from a hazard it is
important to distinguish change that may occur in the absence of the hazard from change occurring as a
result of the hazard and to consider whether that change is undesirable. Furthermore, these changes could
vary according to the environmental context (e.g. an agricultural setting as opposed to the undisturbed
natural environment).
It is easy to differentiate risks, depending on the severity (loss criticality) of the consequence. Regardless of
the risk factor and of the event that might be generated by the latter, the consequences on the worker
(personnel target) may be classified in accordance with the major categories defined by the Safety Law:
temporary work disablement, invalidity (permanent work disablement) and decease. Furthermore, the
maximal possible consequence of each risk factor may be asserted with certainty. For instance, the maximal
possible consequence of electrocution will always be decease, while the maximal possible consequence of
exceeding the normative noise level will be work-related deafness – invalidity. In the case of work-related
accidents or diseases, which are specified by the medical criteria of clinical, functional and work capacity
assessment diagnosis elaborated by the Ministry of Health and the Ministry of Labour and Social Protection,
knowing the types of lesions and damages, as well as their potential localisation, makes it possible to
estimate for each risk factor the type of lesion to which it may eventually lead in extremis, the organ that
would be affected and, finally, the type of consequence that will occur: disablement, invalidity or decease. In
turn, these consequences may be differentiated into several classes of severity. For
instance, invalidity may be classified in first, second or third degree, while disablement may be of less than 3
days (minimal limit set by law for the definition of work accident), between 3 to 45 days and between 45 to
180 days. As in the case of the probability of occurrence of accidents and diseases, the severity of
consequences may be classified in several classes, as follows:
(1) Class 1 – Negligible consequences (work disablement of less than 3 days);
(2) Class 2 – Limited consequences (work disablement between 3 to 45 days, requiring medical treatment);
(3) Class 3 – Medium consequences (work disablement between 45 to 180 days, medical treatment and
hospitalisation);
(4) Class 4 – Important consequences (third degree invalidity);
(5) Class 5 – Severe consequences (second degree invalidity);
(6) Class 6 – Very severe consequences (first degree invalidity);
(7) Class 7 – Maximal consequences (decease).

The cost to society of a workplace fatality was estimated using the cost-of-illness approach (Fatal
Occupational Injury Cost Model, FOICM), which combines direct and indirect costs to yield an overall cost of
an occupational fatal injury (DHHS, NIOSH, Publication No. 2006–158). For these calculations, only medical
expenses were used to estimate the direct cost associated with the fatality. The indirect cost was derived by
calculating the present value of future earnings summed from the year of death until the decedent would
have reached age 67, accounting for the probability of survival were it not for the premature death (Biddle,
E., 2004). The mathematical representation of indirect costs is given by the following expression,

PDVL = Σ_{n=i}^{67} P_{r,s,i}(n) × [ C_{ws,j}(n) + C_{hs}(n) ] × (1 + g)^{n-i} / (1 + d)^{n-i}          [3.06]


where PDVL is the present discounted value of loss due to occupational fatal injury per person, Pr,s,i(n) is the
probability that a person of race (r), sex (s), and age (i) will survive to age n, i is the age of the person at
death, n is the age the person would have reached had he or she survived, Cws,j(n) is the median annual
earnings of an employed person of sex (s), occupation (j), and age n (including benefits and a life-cycle wage
growth adjustment), Chs(n) is the mean annual imputed value of home production of a person of sex (s) and
age (n), g is the wage growth rate attributable to overall productivity, and d is the real discount rate (e.g. 3%
per annum).
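
As a rough illustration of Equation [3.06], the sketch below sums discounted, survival-weighted earnings and
home-production values from the age at death up to age 67; the survival, earnings and home-production
profiles and the growth and discount rates are placeholder assumptions, not FOICM data.

    # Illustrative sketch of Equation [3.06]; all input figures are placeholders, not FOICM data.

    def pdvl(age_at_death, survival_prob, earnings, home_production,
             growth_rate=0.01, discount_rate=0.03, horizon=67):
        # survival_prob, earnings and home_production map an age n to P(n), Cws,j(n) and Chs(n).
        total = 0.0
        for n in range(age_at_death, horizon + 1):
            years = n - age_at_death
            factor = (1.0 + growth_rate) ** years / (1.0 + discount_rate) ** years
            total += survival_prob(n) * (earnings(n) + home_production(n)) * factor
        return total

    # Example with flat placeholder profiles: 98% yearly survival, 30,000 earnings, 8,000 home production.
    value = pdvl(age_at_death=40,
                 survival_prob=lambda n: 0.98 ** (n - 40),
                 earnings=lambda n: 30000.0,
                 home_production=lambda n: 8000.0)
    print(round(value, 2))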

Table 3.04 – Quotation scale of the severity of consequences (loss criticality) of the action of risk factors on
every target (human or personnel, business interruption, environment, product, equipment, etc.) using an
economic and financial approach.

MAXIMUM EXPOSURE VALUE MAXIMUM POSSIBLE LOSS


Value (MEV) % KMEV Value (MPL) % KMPL
0 – 200,000 0 – 20 1 0 – 100,000 0 – 10 1
200,000 – 300,000 20 – 30 2 100,000 – 150,000 10 – 15 2
300,000 – 400,000 30 – 40 3 150,000 – 200,000 15 – 20 3
400,000 – 500,000 40 – 50 4 200,000 – 250,000 20 – 25 4
500,000 – 600,000 50 – 60 5 250,000 – 300,000 25 – 30 5
600,000 – 700,000 60 – 70 6 300,000 – 350,000 30 – 35 6
700,000 – 800,000 70 – 80 7 350,000 – 400,000 35 – 40 7
800,000 – 900,000 80 – 90 8 400,000 – 450,000 40 – 45 8
900,000 – 1,000,000 90 – 100 9 450,000 – 500,000 45 – 50 9
> 1,000,000 > 100 10 > 500,000 > 50 10

On a financial and economic basis, loss criticality (or severity) is the maximum expected loss if a given
potential risk becomes real. The concepts of economic loss used by this method are the Maximum Exposure
Value and the Maximum Possible Loss. Each of these concepts expresses the maximum expected loss in the
case of any risk under the most unfavourable conditions, i.e. failure of any internal or external safety support
mechanism. Table 3.04 shows the quotation scale of the severity of consequences (loss criticality) of the
action of risk factors on every risk target using this economic and financial approach. The loss criticality (or
severity) is determined by either of the following methods: (1) using the absolute or financial values, namely
the maximum exposure value (MEV) and the maximum possible loss value (MPL); (2) using the maximum
exposure coefficient (KMEV) and the maximum possible loss coefficient (KMPL). When calculating the loss
criticality coefficient using financial values we use the following expression,

K = Sc_{max} × ( MEV/EQ + MPL/EQ )                                                                      [3.07]

on the other hand, when using the maximum exposure coefficient (KMEV) and the maximum possible loss
coefficient (KMPL), we apply the following expression,

K = K_{MEV} + K_{MPL}                                                                                   [3.08]


where Sc_max is the highest value of the coefficient scale used to evaluate the coefficients (in this case, the
value 10), and EQ is the equity of the business or project (in this case, 1,000,000).
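
The sketch below illustrates the two routes to the economic loss criticality coefficient: reading KMEV and
KMPL from Table 3.04, and working from the financial values with Equations [3.07] and [3.08] as
reconstructed above. The additive combination and the band boundaries encoded here are therefore
assumptions taken from that reconstruction and from the table.

    # Illustrative sketch of Equations [3.07]-[3.08] and Table 3.04; the additive
    # combination follows the reconstruction above and should be treated as an assumption.
    import bisect

    SC_MAX = 10          # highest value of the coefficient scale
    EQ = 1_000_000.0     # equity of the business or project

    # Table 3.04 upper band limits (fractions of equity) for coefficients 1..9; above the last limit K = 10.
    MEV_LIMITS = [0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00]
    MPL_LIMITS = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]

    def table_coefficient(ratio, limits):
        # Returns the 1..10 coefficient of Table 3.04 for a loss expressed as a fraction of equity.
        return bisect.bisect_left(limits, ratio) + 1

    def loss_criticality(mev, mpl):
        k_financial = SC_MAX * (mev + mpl) / EQ                  # Equation [3.07]
        k_mev = table_coefficient(mev / EQ, MEV_LIMITS)          # K_MEV from Table 3.04
        k_mpl = table_coefficient(mpl / EQ, MPL_LIMITS)          # K_MPL from Table 3.04
        return k_financial, k_mev + k_mpl                        # Equation [3.08]

    # Example: 450,000 of equity exposed, worst-case loss of 250,000.
    print(loss_criticality(450_000, 250_000))                    # (7.0, 8) with K_MEV = 4, K_MPL = 4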

Table 3.05 – Quotation scale of the severity of consequences (loss criticality) of the action of risk factors on
business interruption and environment targets.

SEVERITY                     SEVERITY OF CONSEQUENCES                                                        SEVERITY          SEVERITY
CATEGORY      BUSINESS INTERRUPTION    ENVIRONMENTAL                                                         COEFFICIENT (K)   CLASS
Negligible    ≤ 1 day                  Minor environmental damage requiring less than 10% of the business         1                1
                                       equity to correct or in penalties.
Marginal      1 to 3 days              Minor environmental damage requiring 10% to 15% of the business            2                2
                                       equity to correct or in penalties.
              3 days to 1 week         Minor environmental damage requiring 15% to 20% of the business            3
                                       equity to correct or in penalties.
Limited       1 to 2 weeks             Short-term environmental damage (until 1 year) requiring 20% to            4                3
                                       25% of the business equity to correct or in penalties.
              2 weeks to 1 month       Short-term environmental damage (until 1 year) requiring 25% to            5
                                       30% of the business equity to correct or in penalties.
Important     1 to 2 months            Medium-term environmental damage (1 to 3 years) or requiring 30%           6                4
                                       to 35% of the business equity to correct or in penalties.
Severe        2 to 3 months            Medium-term environmental damage (1 to 3 years) or requiring 35%           7                5
                                       to 40% of the business equity to correct or in penalties.
              3 to 4 months            Medium-term environmental damage (1 to 3 years) or requiring 40%           8
                                       to 45% of the business equity to correct or in penalties.
Critical      4 to 6 months            Long-term environmental damage (3 to 5 years) or requiring 45% to          9                6
                                       50% of the business equity to correct or in penalties.
Maximal or    > 6 months               Long-term environmental damage (5 years or greater) or requiring          10                7
Catastrophic                           at least 50% of the business equity to correct or in penalties.

Costs to Be Considered
Highly probable risks may not require countermeasure attention if the net damage they would produce is
small. But even moderately probable risks require attention if the size of the loss they could produce is
great. Criticality is first considered on a single event or occurrence basis. For events with established


frequency or high recurrence probability, criticality also must be considered cumulatively. The criticality or
loss impact can be measured in a variety of ways. One is effect on employee morale, another is effect on
community relations. But the most useful measure overall is financial cost. Because the money measure is
common to all ventures, even government and not-for-profit enterprises, the seriousness of safety
vulnerability can be grasped most easily if stated in monetary terms. Note that some losses (e.g. loss of
human life, loss of national infrastructure elements, or losses of community goodwill) do not lend themselves
to ready analysis in financial terms. When events that could produce these types of losses have been
identified, some factors other than merely quantitative will be used to measure their seriousness. When
tradeoff decisions are being made as part of the risk management process, a very useful way to evaluate
security countermeasures is to compare cost of estimated losses with cost of protection. Money is the
necessary medium.
Costs of safety losses are both direct and indirect. They are measured in terms of lost assets (targets) and
lost income. Frequently, a single loss will result in both kinds. There are three major types of costs:
(1) Permanent Replacement – The most obvious cost is that involved in the permanent replacement of a
lost asset. Permanent replacement of a lost asset includes all of the cost to return it to its former
location. Components of that cost are: (a) Purchase price or manufacturing cost, (b) Freight and
shipping charges, (c) Make-ready or preparation cost to install it or make it functional. A lost asset may
cost more or less to replace now than when it was first acquired.
(2) Temporary Substitute – It may be necessary to procure substitutes while awaiting permanent
replacements. This may be necessary to minimize lost opportunities and to avoid penalties and
forfeitures. The cost of the temporary substitute is properly allocable to the safety event that caused the
loss of the asset or target. Components of temporary substitute cost might be: (a) Lease or rental, (b)
Premium labor, such as overtime or extra shift work to compensate for the missing production.
(3) Related or Consequent Cost – If other personnel or equipment are idle or underutilized because of the
absence of an asset lost through a security incident, the cost of the downtime also is attributable to the
loss event.

Many losses are covered, at least in part, by insurance or indemnity of some kind. To the extent it is
available, the amount of indemnity or insurance recovery should be subtracted from the combined costs of
loss enumerated previously.

Economic Appraisal and Socioeconomic Costs of Work Incidents


Improvement of health and safety at work can bring economic benefits for companies, workers and society
as a whole. Accidents and occupational diseases can give rise to heavy costs to companies. For small
companies particularly, occupational accidents can have a major financial impact. But it can be difficult to
convince employers and decision-makers of the profitability of safer and healthier working conditions. An
effective way can be to make financial or economic estimations and give a realistic overview of the total
costs of accidents, and the benefits of preventing accidents. Total costs and benefits will include both
obvious and hidden costs, together with the costs that can easily be quantified and those that can only be
expressed in qualitative terms. Work accidents are a burden for many parties. Companies often do not bear
the full costs of occupational diseases, occupational injuries or work-related illnesses. Accidents also lead to
costs for other companies, individual workers and for society as a whole. For instance, the company may not
cover healthcare costs for workers, or disability pensions may be borne by collective funds. Preventing
incidents and work accidents, occupational injuries and diseases not only reduces costs, but also contributes
to improving company performance. Occupational safety and health can affect company performance in
many ways, for instance:


(1) Healthy workers are more productive and can produce at a higher quality;
(2) Fewer work-related accidents and diseases lead to less sick leave. In turn this results in lower costs and
less disruption of the production processes;
(3) Equipment and a working environment that is optimised to the needs of the working process and that
are well maintained lead to higher productivity, better quality and less health and safety risks;
(4) Reduction of injuries and illnesses means less damages and lower risks for liabilities.

In many countries regulations exist that somehow bring back the costs to the company or person who
caused the costs (so-called cost internalisation). This may function as an economic incentive to prevent
future injuries or diseases. The best way to obtain good insight into the costs of work accidents is to make
an economic assessment. This can be done at different levels, namely:
(1) At the level of the individual worker;
(2) At company level;
(3) At the level of society as a whole.

Table 3.06 – Overview of variables directly related to costs of injuries and illnesses at individual level.

VARIABLE                          DESCRIPTION                                                 MEASUREMENT
Health.                           Hospitalisation (bed-days). Other medical care, such as     Expenditures for health care that are not
                                  non-hospital treatment, medicines. Permanent disability     compensated by insurance or employers.
                                  (numbers, age of patient). Non-medical (e.g. vocational)
                                  rehabilitation, house conversions.
Grief and suffering.              For victims, but also for relatives and friends.            No reliable method available.
Quality of life. Life             Quality adjusted life years (QALY). Disability adjusted     Willingness to accept, willingness to pay.
expectancy. Healthy life          life years (DALY).                                          Height of claims and compensations.
expectancy.
Present income losses.            Loss in income from present and second job.                 Reduction in present income, loss of wages.
Loss of potential future          Also including the second job.                              Differences between total expected future
earnings.                                                                                     income and total compensation or pensions.
Expenses that are not covered.    Examples are costs for transportation, visits to            Sum of all other expenses for a victim and by
                                  hospitals, etc.                                             insurances or compensation costs arising from
                                                                                              fatalities such as funerals (that are not
                                                                                              compensated).

There is no ultimate list of cost factors to be included in an assessment. However, a minimum set of cost
factors has emerged from practice and theory. Additions or modifications are to be made depending on the
purpose of the assessment, the structure of social security in a country and so on. Constructing the list of


cost factors is one of the key activities in any economic appraisal. Tables 3.06 and 3.07 offer an
inventory of cost factors that can be used as a starting point for assessments at the individual level and at
society level. It takes a few steps to estimate the cost effects of a work accident.

Table 3.07 – Overview of variables directly related to costs of injuries and illnesses at the level of society as
a whole.

VARIABLE                          DESCRIPTION                                                 MEASUREMENT
Health.                           Hospitalisation (bed-days). Other medical care, such as     Actual expenditures on medical treatment and
                                  non-hospital treatment, medicines. Permanent disability     rehabilitation.
                                  (numbers, age of patient). Non-medical (e.g. vocational)
                                  rehabilitation, house conversions.
                                  Fatalities (numbers, age of patient).                       Willingness to pay or willingness to accept.
Quality of life. Life             Quality adjusted life years (QALY). Disability adjusted     Willingness to pay or willingness to accept.
expectancy. Healthy life          life years (DALY).                                          Total amount of indemnities and compensations.
expectancy.
Grief and suffering.              For victims, but also for relatives and friends.            Willingness to pay or willingness to accept.
                                                                                              Total amount of indemnities and compensations.
Present production losses.        Lost earnings due to sick leave, absenteeism and            Total lost earnings during period of absence.
                                  disability.
Loss of potential future          Lost earnings during whole period of permanent              Sum of lost income during expected disability
earnings and production.          disability.                                                 period, in which both the income and the period
                                                                                              are estimated on statistical data.
Expenses that are not covered.    Examples are costs for transportation, visits to            Sum of all other expenses for a victim and by
                                  hospitals, etc.                                             insurances or compensation costs arising from
                                                                                              fatalities such as funerals (that are not
                                                                                              compensated).
NON-HEALTH-RELATED COSTS AND DAMAGES
Administration of sickness                                                                    Total wages spent on the activity.
absence, etc.
Damaged equipment (by                                                                         Replacement costs, market prices.
accidents).
Lost production due to                                                                        Market price of lost production.
incapacity of personnel and
production downtime.


Some effects of accidents can easily be expressed in money. However, effects like fatalities, sick leave and
turnover require some further elaboration. The outcomes should support decision-making, but also the
process of making such an assessment is important from the learning point of view. Be aware that the
outcomes of economic analyses are much influenced by the underlying assumptions and the scope of the
assessment. The cost factors and calculation principles should be adjusted according to the national practice
of each country.

Table 3.08 – Overview of variables directly related to costs of injuries and illnesses at company level.

VARIABLE                          DESCRIPTION                                                 MEASUREMENT
EFFECTS OF INCIDENTS THAT CANNOT DIRECTLY BE EXPRESSED IN MONEY VALUE
Fatalities, deaths.               Number of fatalities.                                       Sum of costs of subsequent activities, fines and
                                                                                              payments.
Absenteeism or sick leave.        Amount of work time lost due to absenteeism.                Sum of costs of activities to deal with effects of
                                                                                              lost work time, such as replacement and lost
                                                                                              production. Indirect effect is that sick leave
                                                                                              reduces flexibility or possibilities to deal with
                                                                                              unexpected situations.
Personnel turnover due to poor    Percentage or number of persons (unwanted) leaving the      Sum of costs of activities originated by unwanted
working environment, or early     company in a period of time.                                turnover, such as replacement costs, additional
retirement and disability.                                                                    training, productivity loss, advertisements, and
                                                                                              recruitment procedures.
Early retirement and disability.  Percentage or number of persons in a period of time.        Sum of costs of activities originated by disability
                                                                                              or early retirement, fines, payments to the victim.

Improvement of safety and health at work can bring economic benefits for companies. Accidents and
occupational diseases can give rise to heavy costs to the company. For small companies particularly,
occupational accidents can have a major financial impact. Information and perceptions about future effects
of decisions, preferably expressed in monetary terms, help employers in the decision-making process. The
true value of economic appraisal is in influencing the beliefs of decision-makers and policy makers. For
maximum effectiveness in this respect, economic appraisal should be a joint activity of all stakeholders. An
effective way is to make financial or economic estimations and give a realistic overview of the total costs of
accidents and the benefits of preventing these. Preventing work accidents, occupational injuries and diseases
not only reduces costs, but also contributes to improving company performance.


Table 3.08 – Overview of variables directly related to costs of injuries and illnesses at company level
(continued).

VARIABLE                          DESCRIPTION                                                 MEASUREMENT
EFFECTS OF INCIDENTS, INJURIES AND DISEASES THAT CAN READILY BE EXPRESSED IN MONEY VALUE
Non-medical rehabilitation.       Money spent by the employer to facilitate returning to      Invoices.
                                  work (counselling, training, workplace adjustments).
Administration of sickness        (Managerial) activities that have to be performed by the    Total wages of time spent.
absence, injuries, etc.           company related to sick leave.
Damaged equipment.                Damages or repair costs of machines, premises, materials    Replacement costs.
                                  or products associated with occupational injuries.
Other, non-health-related costs   Time and money spent for injury investigation, workplace    Total wages of time spent.
(e.g. investigations,             assessments (resulting from occurrence of accidents or
management time, external        illnesses).
costs).
Effects on variable parts of      Changes in premiums due to the incidence of occupational    Invoices.
insurance premiums, high-risk     accidents and illnesses.
insurance premiums.
Liabilities, legal costs,                                                                     Invoices, claims, costs of settlements; fines,
penalties.                                                                                    penalties.
Extra wages, hazardous duty       Extra spending on higher wages for dangerous or             Additional wages.
pay (if the company has a         inconvenient work.
choice).
Lost production time, services    Production time lost as a consequence of an event which     Total production value.
not delivered.                    results in injury (e.g. because it takes time to replace
                                  machines, or production has to be stopped during
                                  investigation).
Opportunity costs.                Orders lost or gained, competitiveness in specific           Estimated production value, representing lost
                                  markets.                                                     income for the company.
Lack of return on investment.     Non-realised profit because of accident costs, i.e.          Interest on the expenditure amount, invested
                                  expenditure due to accidents and not invested in a           during x years, with an interest rate of y %.
                                  profitable activity (like production, stock market or
                                  saving) generating interests.


Table 3.09 – Yearly costs related to safety and health at work.

I SAFETY AND HEALTH MANAGEMENT                                    DAYS SPENT    AVERAGE COST PER DAY    AMOUNT
Extra work time (meetings, coordination)
 direct personnel
 management, specialists
External OSH services
Protective equipment
Substitution products
In-company activities, promotion (positive value)
Total (OSH management costs)
Subsidies and compensations
Net (safety and health management costs)
II SAFETY AND HEALTH-RELATED COSTS
Work-related absenteeism (workdays)
Excessive personnel turnover due to poor working conditions
Administrative overhead
Legal costs, fines, indemnities
Damaged equipment and materials
Investigations
Effect on insurance premiums (positive value)
Total (OSH-related costs)
Compensations from insurance
Net (OSH-related costs)
III CONSEQUENCES OF ACCIDENTS TO COMPANY PERFORMANCE
Production effects due to OSH
 lost production (reduced output)
 orders lost
Quality effects directly related to OSH
 rework, repairs, rejections
 warranties
Operational effects
 more work (e.g. due to safety procedures)
Intangible effects (company image)
 attractiveness to potential customers
 position on the labour market, attractiveness to new personnel
 innovative capacity of the firm
Total (effects on company performance)

Occupational safety and health can affect company performance in many ways, for instance:
(1) Healthy workers are more productive and can produce at a higher quality;
(2) Fewer work-related accidents and illnesses lead to less sick leave. In turn this results in lower costs and
less disruption of the production processes;


(3) Equipment and a working environment that is optimised to the needs of the working process and that
are well maintained lead to higher productivity, better quality and less health and safety risks;
(4) Reduction of injuries and illnesses means less damages and lower risks for liabilities.

The cost factors and calculation principles should be adjusted according to the national practice of each
country. Table 3.09 offers guidance for an estimation of company spending on occupational safety and
health. The table gives an overview of the most common cost factors. Bear in mind that the cost factors are
rather general. For specific situations some factors need not be relevant or some other may be added. For a
yearly summary, all costs related to occupational accidents in a single year should be collected. Table 3.09
can be used also to summarise the costs of a single accident, but then specify only those costs that relate to
that specific accident.

Table 3.10 – Part 1: Summary of investment or initial expenditures.

CATEGORY COST ITEMS RELEVANCE COST ESTIMATE DESCRIPTION


Planning Consultancy costs
Engineering
Internal activities
Investments Buildings, dwellings, foundations
Land property
Machines
Test equipment
Transportation equipment
Facilities, work environment
Workplaces
Removals Equipment
Transportation
Personnel Costs of dismissal
Recruitment
Training
Preliminary costs Loss of quality
Additional wages (overtime)
Materials
Additional operations
Organisational activities
Production losses, downtime
Income Sales of redundant production equipment
Total

The instrument for making a cost-benefit analysis consists of three parts (see Table 3.10):
(1) Part 1 – Overview of costs related to the investment or intervention. For each cost factor the relevance
    to the situation can be checked. If relevant, an estimation of costs can be made. Table 1 can be used for
    hints on how to calculate or estimate costs.


(2) Part 2 – Overview of potential benefits, summary of annual benefits or savings. Only benefits that are
directly related to the investment in question have to be summarised here. In this annual summary
yearly recurring extra costs (e.g. for maintenance) are also accounted for.
(3) Part 3 – Cash flow table, summary of expenditures and income for a number of years.

Table 3.10 – Part 2: Summary of annual costs, cost savings and additional income (continued).

CATEGORY COST ITEMS RELEVANCE COST ESTIMATE DESCRIPTION


Productivity Number of products
Production downtime reduction
Less balance losses
Less stocks
Other, to be specified
Personnel costs OSH services
Savings due to reduction in staffing
Temporary replacement personnel
Costs of turnover and recruitment
Overhead reduction
Reduction of costs related to sick
leave
Effects on premiums
Other, to be specified
Maintenance Cost changes
Property, facilities and Cost changes of use of property
material usage Changes in material usage
Energy, compressed air
Waste and disposal costs
Heating ventilation
Lighting
Quality Changes in amount of rework
Production losses
Price changes due to quality
problems
Total

By convention, all expenditures have a negative sign, cost savings and additional income have a positive
sign. All investments are assumed to have taken place at the end of year 0. Spreadsheet software (like
Microsoft Excel or Lotus 123) offers ample possibilities to calculate all kinds of financial indicators very
quickly. As calculation of discounted indicators requires a lot of arithmetic, spreadsheets are extremely
useful for this task.
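
As noted above, the discounting is easily automated. The illustrative sketch below computes the discounted
cumulative cash flow of Table 3.10, Part 3, for invented figures (a 50,000 investment at year 0, 15,000 of
annual net benefits, and an assumed 5% discount rate).

    # Illustrative sketch of the cash flow summary of Table 3.10, Part 3; figures are invented.

    def discounted_cumulative_cash_flow(cash_flows, discount_rate=0.05):
        # cash_flows[0] is the investment at the end of year 0 (negative by convention);
        # later entries are annual net savings or additional income (positive by convention).
        cumulative, running = [], 0.0
        for year, amount in enumerate(cash_flows):
            running += amount / (1.0 + discount_rate) ** year
            cumulative.append(round(running, 2))
        return cumulative

    print(discounted_cumulative_cash_flow([-50_000.0, 15_000, 15_000, 15_000, 15_000, 15_000]))
    # [-50000.0, -35714.29, -22108.84, -9151.28, 3189.26, 14942.15]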


Table 3.10 – Part 3: Cash flow table (continued).

GENERAL CATEGORY                                                  YEAR:    0    1    2    3    4    5
Planning
Investments
Removal
Personnel
Preliminary costs
Incidental income
Productivity
Personnel
Maintenance
Use of property, facilities and materials
Quality costs
Total
Cumulative cash flow

Lost Income Cost


In most private enterprises, cash reserves are held to the minimum necessary for short-term operations.
Remaining capital or surplus is invested in varying kinds of income-producing securities. If cash that might
otherwise be so invested must be used to procure permanent replacements or temporary substitutes or to
pay consequent costs, the income that might have been earned must be considered part of the loss. If
income from investment is not relevant to a given case, then alternative uses of the cash might have to be
abandoned to meet the emergency needs. In either case, the use of the money for loss replacement will
represent an additional cost margin. To measure total loss impact accurately, this also must be included. The
following formula can be used to determine the lost income cost,

I_c = ( C_a × r × t ) / 365                                                                             [3.09]

where Ic is the income that would have been earned (the lost income cost), Ca is the principal amount (in
monetary terms) available for investment, r is the annual per cent rate of return, and t is the time (in days)
during which the principal amount is available for investment.

A Cost-of-Loss Formula
Taking the worst-case position and analyzing each safety loss risk in light of the probable maximum loss for
a single occurrence of the risk event, the following equation can be used to state that cost,

K_c = Σ_{i=1}^{n} [ ( C_P + C_T + C_R + C_I ) − I_s ]                                                   [3.10]


where Kc is the criticality or total cost of loss, CP is the cost of permanent replacement, CT is the cost of
temporary substitute, CR is the total related costs, CI is the lost income cost, and Is is the available insurance
protection or indemnity.
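
A minimal sketch of Equations [3.09] and [3.10] for a single loss event is shown below; the cost figures,
the rate of return and the dictionary keys are illustrative placeholders.

    # Illustrative sketch of Equations [3.09] and [3.10]; figures and names are placeholders.

    def lost_income_cost(principal, annual_rate, days):
        # Equation [3.09]: income forgone on capital diverted to cover the loss.
        return principal * annual_rate * days / 365.0

    def total_cost_of_loss(events):
        # Equation [3.10]: sum over events of (permanent replacement + temporary substitute
        # + related costs + lost income) minus available insurance or indemnity.
        return sum(e["cp"] + e["ct"] + e["cr"] + e["ci"] - e["insurance"] for e in events)

    event = {
        "cp": 120_000.0,                                   # permanent replacement
        "ct": 15_000.0,                                    # temporary substitute (rental, overtime)
        "cr": 8_000.0,                                     # related or consequent costs (downtime)
        "ci": lost_income_cost(120_000.0, 0.05, 90),       # income lost while capital is tied up
        "insurance": 100_000.0,                            # indemnity received
    }
    print(round(total_cost_of_loss([event]), 2))           # 44479.45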

Disability Injury Rate


The disabling injury rate (DIR) is calculated by dividing the number of disabling injury claims (NDI) by the
person-year estimates (PY), and multiplying the result by 100. The disabling injury rate represents the
probability or risk of a disabling injury or disease to a worker during a period of one-year of work. The
disabling injury rate is similar to the LTC rate although it covers a broader range of injuries, including those
that are less severe in nature (do not require time away from work). The rate represents the number of
claims per 100 person-years and includes claims made for both lost-time and modified-work.

DIR = ( NDI / PY ) × 100                                                                                [3.11]

Duration Rate
The duration rate (DR) is calculated by dividing the number of workdays lost (disability days, Dd) by the
person-year estimate (PY), and multiplying by 100. The result is expressed as days lost per 100 person-
years, and indicates, in part, the economic impact of occupational injury and disease. Duration rates are not
recommended as reliable indicators of full economic cost. In addition, readers are warned that duration rates
are highly unstable when based on only a few lost-time claims; it is recommended that the duration rate not
be calculated based upon fewer than 30 lost-time claims.

DR = ( D_d / PY ) × 100                                                                                 [3.12]

Fatality Rate
The fatality rate (FR) is calculated by dividing the number of accepted fatalities (NF) by the person-years
estimate (PY) and multiplying the result by one million. The result is expressed as fatalities per million
person-years. Fatalities that are found under the jurisdiction of the Government of Canada are excluded
before the calculation of the fatality rate.

FR = ( NF / PY ) × 1,000,000                                                                            [3.13]
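
The three rates can be computed directly from claims data, as in this short sketch; the claim counts and
person-year estimate are invented for illustration.

    # Illustrative sketch of Equations [3.11]-[3.13]; the input figures are placeholders.

    def disabling_injury_rate(ndi, person_years):
        return ndi / person_years * 100            # claims per 100 person-years, Equation [3.11]

    def duration_rate(days_lost, person_years):
        return days_lost / person_years * 100      # days lost per 100 person-years, Equation [3.12]

    def fatality_rate(nf, person_years):
        return nf / person_years * 1_000_000       # fatalities per million person-years, Equation [3.13]

    py = 12_500.0                                   # person-year estimate
    print(round(disabling_injury_rate(430, py), 2))   # 3.44
    print(round(duration_rate(6_200, py), 2))         # 49.6
    print(round(fatality_rate(2, py), 2))             # 160.0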

SAFETY LEVEL
The safety level concept is the ability of a safety system to control and treat every kind of potential risk
resulting from a hazard or event. The safety level coefficient (λ) is the measure of the safety level and can
be determined by the following expression,

λ = ( 1 / N ) × Σ_{i=1}^{n} a_i                                                                         [3.14]


where N is the total number of safety measures and safety issues supported by the safety system, n is the
number of safety measures and safety issues evaluated for a specific hazard or event (n ≤ N), and ai is the
value given to each safety measure and safety issue evaluated (scaled from 0 to 1). The possible set of
safety measures and safety issues to be included in one safety system is listed as follows:
(1) Process Safety Management (PSM).
(2) Hazard and Operability Analysis (HAZOP).
(3) Process Hazard Analysis (PHA).
(4) Inspections, reviews and surveys.
(5) Surveillance and security.
(6) Safety management system certified.
(7) Periodical internal and external audits.
(8) Internal safety brigades.
(9) External safety brigades.
(10) Emergency teams and emergency brigades.
(11) Emergency plans and contingency plans.
(12) Fire brigades.
(13) Medical and healthcare support.
(14) Supervising safety management team.
(15) Global protective equipment.
(16) Personal protective equipment.
(17) Regulatory compliance.
(18) Safety policy.
(19) Data records and information of activities.
(20) Training programs.

The evaluation of each safety item from the listed set follows a quotation scale from 0 to 1, where 0
represents the absence or non-application of the safety item and 1 represents complete compliance with the
safety item, or its advanced development in comparison with the industry sector.
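
A minimal sketch of Equation [3.14] follows; the subset of items scored and the scores themselves are
invented, and λ denotes the safety level coefficient as in the equation above.

    # Illustrative sketch of Equation [3.14]; the item scores are invented.

    SAFETY_ITEMS = 20          # N: total number of safety measures and issues supported by the system

    def safety_level(scores, n_total=SAFETY_ITEMS):
        # Equation [3.14]: lambda = (1/N) * sum of the item scores a_i, each scored from 0 to 1.
        return sum(scores) / n_total

    scores = {                  # a_i for the items evaluated for a specific hazard (n <= N)
        "process safety management": 0.8,
        "HAZOP": 1.0,
        "emergency plans": 0.6,
        "personal protective equipment": 0.9,
        "training programs": 0.7,
    }
    print(round(safety_level(scores.values()), 2))   # 0.2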

RISK ESTIMATION AND RISK TREATMENT


Risk is measured in terms of a combination of the exposure to an adverse outcome (hazard), the likelihood
that a hazard will give rise to an adverse outcome, the seriousness (consequences or loss criticality) of that
adverse outcome, and the safety level. Mathematically, the risk estimate for an instant of time is given by
Equation [1.03],

R_{i,λ} = ( E × L × K ) / λ                                                                             [1.03]

To reduce ambiguity of terminology used in qualitative risk assessments the Regulator will apply a set of
distinct descriptors to the likelihood assessment, exposure assessment, consequence assessment, safety
level assessment, and the estimation of risk. The definitions are intended to cover the entire range of
possible licence applications and should be regarded as relative. For instance, the consequences of a risk
relating to human health will be very different to the consequences of a risk to the environment. They are
relatively simple in order to cover the range of different factors (severity, space, time, cumulative,


reversibility) that may contribute to the significance of adverse outcomes. The individual description can be
incorporated into a Risk Estimate Matrix (see Table 3.11). Risk assessment grid (matrix) has the form of a
table, with lines representing classes of severity (loss criticality) and columns representing classes of
probability (likelihood) and exposure; in Table 3.11 we consider a safety level coefficient (λ) equal to 1, i.e.
the risk equilibrium state. The grid is instrumental for the effective expression of the risks that exist in the
system under examination, in the form of the triplet severity-frequency-exposure. The risk and safety level
scales, drawn up from the risk assessment grid, are instruments used to assess the anticipated risk level
and, correspondingly, the safety level. The aim of the Risk Estimate Matrix (Table
3.11) is to provide a guide to thinking about the relationship between the exposure, consequences and the
likelihood of particular hazards. Likelihood (probability), exposure, severity of consequence (loss criticality)
and safety level assessments are combined to give a risk estimate (Equation [1.03]). The risk matrix is
designed to be used as a tool in arriving at the risk estimate hierarchy. It is not a prescriptive solution for
deciding on the appropriate risk estimate for any given adverse outcome. For example, an adverse outcome
such as increased pathogenicity due to gene exchange may vary widely in severity from event to event.
Neither should it be used to set predetermined management conditions for a particular risk level. Rather it
should be used to inform the risk evaluation process.
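
Equation [1.03] and the entries of Table 3.11 can be reproduced in a few lines; the sketch below assumes
the reconstructed form R = (E × L × K) / λ, and the function name is illustrative.

    # Illustrative sketch of Equation [1.03] and Table 3.11 (safety level coefficient equal to 1).

    def risk_estimate(exposure, likelihood, criticality, safety_level=1.0):
        # R = (E * L * K) / lambda, as reconstructed in Equation [1.03].
        return exposure * likelihood * criticality / safety_level

    # Reproduce one row of Table 3.11: E = L = 7, K = 1..10, lambda = 1.
    row = [risk_estimate(7, 7, k) for k in range(1, 11)]
    print(row)    # [49.0, 98.0, 147.0, 196.0, 245.0, 294.0, 343.0, 392.0, 441.0, 490.0]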

Table 3.11 – Risk estimate matrix for a safety level coefficient (λ) equal to 1.

                                      SEVERITY OR LOSS CRITICALITY COEFFICIENT (K)
EXPOSURE                1     2     3     4     5     6     7     8     9    10          PROBABILITY OR
COEFFICIENT       1     1     2     3     4     5     6     7     8     9    10     1    LIKELIHOOD
(E)               2     4     8    12    16    20    24    28    32    36    40     2    COEFFICIENT (L)
                  3     9    18    27    36    45    54    63    72    81    90     3
                  4    16    32    48    64    80    96   112   128   144   160     4
                  5    25    50    75   100   125   150   175   200   225   250     5
                  6    36    72   108   144   180   216   252   288   324   360     6
                  7    49    98   147   196   245   294   343   392   441   490     7
                  8    64   128   192   256   320   384   448   512   576   640     8
                  9    81   162   243   324   405   486   567   648   729   810     9
                 10   100   200   300   400   500   600   700   800   900  1000    10

Risk matrices are often asymmetrical because not all risks have the same mathematical relationship between
exposure, likelihood, consequence, and safety level. In addition, there may be other factors that influence
the relationship such as sensitive subpopulations, a range of responses or a distribution of the frequency of
the impact. The descriptors nominated above for exposure, likelihood, consequence, safety level, and the
risk estimate should be applied for all licence and non-licence applications. The descriptors for the risk
estimate are designed to relate specifically to risk assessment applied in the context of a proposed dealing
with a hazard. These descriptors may not necessarily have the same meaning in a compliance context where
establishing an appropriate response to noncompliance is required. Comparisons between licence
applications are only possible in the broadest sense even for the same categories of hazard. For example,
the introduction of a gene that expresses a therapeutic agent in an elite variety of potato known to be sterile
could be considered a lower risk compared with the introduction of the same gene into a partially
outcrossing plant such as white lupin, because of the decreased potential for spread and persistence of the


introduced gene. Direct comparison with risks from other substantively different genetically modified
organisms, such as a genetically modified virus, may not be instructive. It is important to note that uncertainty about either
or both of these components will affect the risk estimate. The risk estimate for an individual hazard, group of
hazards or risk scenarios is used in considering the strategies that may be required in order to manage those
risks.

Risk Treatment
The nature and size of the enterprise or project determines the limits of each of the aforesaid parameters
and the risk estimate ratings. The value of the rating system is in its total relevance to the enterprise. The
terms are not intended to have any absolute significance. Having at our disposal the four scales – for the
quotation of the exposure, probability (likelihood), severity (loss criticality) of consequences, and safety level
of the action of risk factors – we may associate with each risk factor in a system a quadruplet of characteristic
elements, exposure-severity-probability-safety level, combined in Equation [1.03], thus setting down a
risk level for each quadruplet. For the attribution of risk and safety levels we used the risk acceptability curve.
Because severity is the more important element from the standpoint of target protection, it was assumed
that the influence of severity on the risk level is much greater than that of the other elements (i.e. frequency
or likelihood, and exposure). It is suggested that the following risk estimate categories be used to summarize
the impact of risk, and interpreted as follows in Table 3.12.

Table 3.12 – Risk level estimator and risk-based control plan.

RISK ESTIMATE LEVEL    RISK LEVEL     ACTION AND TIMESCALE
0 ≤ R < 10             Trivial        No action is required and no documentary records need to be kept.
10 ≤ R < 100           Tolerable      No additional controls required. Consideration may be given to a more
                                      cost-effective solution or improvement that imposes no additional cost
                                      burden. Monitoring is required to ensure that the controls are maintained.
100 ≤ R < 200          Moderate       Efforts should be made to reduce risk, but the costs of prevention should
                                      be carefully measured and limited. Risk reduction measures should be
                                      implemented within a defined time period. Where the moderate risk is
                                      associated with extremely harmful consequences, further assessment may
                                      be necessary to establish more precisely the likelihood of harm and
                                      exposure as a basis for determining the need for improved control measures.
200 ≤ R < 1,000        Substantial    Work should not be started until the risk has been reduced. Considerable
                                      resources may have to be allocated to reduce the risk. Where the risk
                                      involves work in progress, urgent action should be taken.
R ≥ 1,000              Intolerable    Work should not be started or continued until the risk has been reduced.
                                      If it is not possible to reduce risk even with unlimited resources, work has
                                      to remain prohibited.
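
The bands of Table 3.12 translate directly into a lookup, as in this illustrative sketch (which assumes the
band boundaries as tabulated, with R = 1,000 and above treated as intolerable).

    # Illustrative sketch of the risk level bands in Table 3.12.

    def risk_level(r):
        if r < 10:
            return "Trivial"
        if r < 100:
            return "Tolerable"
        if r < 200:
            return "Moderate"
        if r < 1000:
            return "Substantial"
        return "Intolerable"

    for r in (5, 60, 150, 640, 2000):
        print(r, risk_level(r))    # Trivial, Tolerable, Moderate, Substantial, Intolerable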

Therefore, in correspondence with the seven classes of severity, seven risk levels have been set down in
ascending order, together with seven corresponding safety levels, given the inversely proportional relation
between the two states (risk and safety):


(1) R1 (minimal risk level) ↔ S1 (maximal safety level);
(2) R2 (very low risk level) ↔ S2 (very high safety level);
(3) R3 (low risk level) ↔ S3 (high safety level);
(4) R4 (medium risk level) ↔ S4 (medium safety level);
(5) R5 (high risk level) ↔ S5 (low safety level);
(6) R6 (very high risk level) ↔ S6 (very low safety level);
(7) R7 (maximal risk level) ↔ S7 (minimal safety level).

The hierarchy of risk controls can be summarized into five categories:

(1) Elimination of risk – It is a permanent solution and should be attempted in the first instance;
(2) Substitution of risk – Involves replacing the hazard or any environmental aspect by one of lower risk;
(3) Engineering controls – Involve physical barriers or structural changes to the environment or process;
(4) Administrative controls – Reduce hazard by altering procedures and providing instructions;
(5) Personal protective equipment – This is the last resort or a temporary control.

Residual risk for every hazard in a system may be acceptable. This means that risk for each hazard is under
acceptable control – operation or activity may proceed. Given sufficient opportunity for several mishaps to
occur, one or two or three or more will do so! As time passes, even if probabilities are low, inevitably
something(s) will go wrong eventually. Risks from multiple, independent hazards add. After we measure and
estimate the individual risks (R_{i,λ}) using the static risk equation (Equation [1.03]) or the dynamic risk
equation, we can assess the global risk estimate (R_G) for a given process or activity,

R_G = Σ_{i=1}^{n} R_{i,λ} = Σ_{i=1}^{n} ( E × L × K / λ )_i = Σ_{i=1}^{n} R_{0i,λ} × e^{(1−λ)t}          [3.15]

and the average global risk estimate (R̄),

R̄ = ( 1 / n ) × R_G                                                                                     [3.16]

where n is the number of individual risks.
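
A short illustrative sketch of Equations [3.15] and [3.16] follows, using only the static individual risk
estimates (the dynamic, time-dependent term is omitted); the hazard scores are invented.

    # Illustrative sketch of Equations [3.15]-[3.16]; individual risks use the static form E*L*K/lambda.

    def global_risk(individual_risks):
        return sum(individual_risks)                                    # Equation [3.15], static form

    def average_risk(individual_risks):
        return global_risk(individual_risks) / len(individual_risks)    # Equation [3.16]

    # Three hazards, each scored as (E, L, K), with a safety level coefficient of 1.
    individual = [e * l * k for (e, l, k) in [(6, 4, 7), (3, 8, 5), (10, 2, 9)]]
    print(global_risk(individual), average_risk(individual))            # 468 156.0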


RISK MANAGEMENT

The risk assessment component of risk analysis may be viewed as providing the answers to a set of
questions:
(1) “What might happen?” and “How might it happen?” – hazard identification and risk characterisation;
(2) “How likely is it to happen?” and “What harm will occur if it happens?” – risk estimation.

The risk management component of risk analysis builds on the work of the risk assessment and may be
described as answering the questions:
(1) Does anything need to be done about it?;
(2) What can be done about it?;
(3) What should be done about it?

The risk assessment provides the estimate of the risks, including the likelihood of occurrence, exposure to
the hazard and outcome, the absolute or relative magnitude of the harm that could result, the safety level,
as well as the degree of uncertainty that applies to their likelihood, exposure, consequences, and safety
level. The risk management component of risk analysis involves identifying those risks that require
management, the range of options that could effectively treat the risks, deciding on the actions that will
provide the required level of management, and implementing the selected measures. The conclusions of the
risk assessment may already include indications of risks that require management, especially if the
magnitude of the consequences is great. Risks with estimates of very high or above, high or moderate would
generally invoke a requirement for management. The risk assessment may also provide a starting point for
selection of risk treatment measures in that it is aimed at understanding risks, and therefore it may provide
insights into the available mechanisms to manage risks and the relative merits of those mechanisms.
The consideration of whether particular risks require management will be informed by review of the
conclusions of the risk assessment, consideration of the risks per se in the context of management, or as a
result of consultation with stakeholders. While there is overlap and interaction between risk assessment and
risk management, it is important to recognise them as separate and qualitatively different processes. This
conceptual separation ensures the integrity and objectivity of the risk assessment, which is the scientific
process of investigating phenomena using the body of evidence to estimate the level of risk and taking
account of any uncertainty associated with that assessment. Risk management, while based on the risk
assessment, necessarily deals with prudential judgements about which risks require management, and the
selection and application of treatment measures to control risks. This separation also contributes to the
intellectual rigour and transparency of the whole risk analysis process. In practice there is a feedback
between risk assessment and risk management – the two components are intimately related and often
iterative. Risk management ultimately includes the decision on whether to proceed with an activity, and in
the case of risk analysis undertaken by the Regulator’s representative, whether or not a licence should be
issued for the proposed dealings with hazards.


Figure 4.01 – General risk management procedure (flowchart: threats and opportunities evaluation; strategic
objectives; risk analysis; risk assessment; risk reporting; risk decision; risk treatment; residual risk reporting;
monitoring; auditing).

RISK MANAGEMENT AND UNCERTAINTY


The risk assessment process will identify uncertainty with respect to the exposure, likelihood, consequence,
and safety level of risks. Any proposed risk treatment measures should take account of this uncertainty. The
Regulator adopts a cautious approach that encompasses the credible boundaries of uncertainty based on the
best available evidence in:
(1) Determining the necessary level of risk management;
(2) Assessing the effectiveness of available risk treatment options;
(3) The selection of the most appropriate measures to treat risk.


THE RISK MANAGEMENT PLAN


Risk management is based on the risk assessment and, in particular, the risk estimates derived from that
process. The risk management plan provides part of the basis for the Regulator to make a decision on
whether to issue a licence by providing an answer to the question: “Can the risks posed by a proposed
dealing be managed in such a way as to protect the health and safety of people and the environment?”.
The preparation of a risk management plan may be informed by considering a number of general questions,
including:
(1) Which risks require management?;
(2) How many treatment measures are available? – there may be many approaches to achieve the same
objective and some measures may not be compatible with others;
(3) How effective are the measures? – this question may be informed by the risk assessment;
(4) How feasible or practical are the measures?;
(5) Do the measures themselves introduce new risks or exacerbate existing ones? – a treatment measure to
address one risk may introduce a new one. For example, applying a tourniquet can reduce the amount of
venom from a snake bite that enters the bloodstream, but it can also lead to damage to the limb
because of reduced blood flow;
(6) Which treatment measure(s) provide the optimum and desired level of management for the proposed
dealing?.

The safety regulations require the Regulator to consider the short and the long-term when assessing risks
and this approach is also adopted in devising and implementing risk management conditions.

RISK EVALUATION
Risk evaluation is the process of deciding which risks require management. As outlined previously elsewhere
in this document (see Table 3.12), risks estimated as “Very High” or above, and “Moderate” will generally
require specific management. Risks assessed as “Low” may require management, and this would be decided
on a case by case basis. In such cases the nature of the hazard, the nature of the risk, especially the
consequences, as well as the degree of uncertainty relating to either likelihood, exposure, consequences or
safety level, will be important considerations. If there is uncertainty about risks (e.g. in early stage research)
this may influence the management measures that are selected. Risks that have been assessed as negligible
are considered, on the basis of present knowledge, not to pose a sufficient threat to human health and
safety or other target (e.g. environment, business interruption, product or equipment) to warrant the
imposition of management conditions. Generally speaking, the Safety Law does not contain specific criteria,
but it does identify what may be considered the extremity of harm: imminent risk of death, serious injury,
serious illness, or serious damage to the environment or other target. Given the potential variety of hazards
it is not possible to develop a “one size fits all” set of criteria and therefore a case by case approach should
be taken. Factors that may affect the determination of the relative significance of a risk include the severity
of the consequences, the size of the group exposed to the risk, whether the consequences are reversible,
the safety level, and the distribution of the risk (e.g. demographically, temporally and geographically). It is
also important to recognise that there are a number of other factors that may influence the perception of
the risk which are particularly pertinent to hazards and hazard categories, including whether the risk is
voluntary or involuntary, familiar or unfamiliar and the degree of personal exposure.
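
The basic decision rule stated at the start of this section can be summarised in a short sketch. The level names follow the text above; the function name and the returned messages are illustrative assumptions only.

```python
# Illustrative sketch of the risk evaluation rule described above.
# Level names follow the text; the function and messages are hypothetical.

def management_required(risk_estimate: str) -> str:
    """Map an estimated risk level to the management response described in the text."""
    level = risk_estimate.strip().lower()
    if level in ("moderate", "high", "very high"):
        return "specific management conditions required"
    if level == "low":
        return "decide case by case (nature of hazard, consequences, uncertainty)"
    if level == "negligible":
        return "no management conditions warranted on present knowledge"
    raise ValueError(f"unknown risk estimate: {risk_estimate}")

print(management_required("Moderate"))    # specific management conditions required
print(management_required("Negligible"))  # no management conditions warranted ...
```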


RISK PROTECTION
In line with the overarching objective of protection, the Regulator prioritises preventative over ameliorative
or curative risk treatment measures, i.e. the risk treatment measures will be focussed on preventing the risk
being realised rather than on measures to repair or reduce the harm that would result. The risk assessment
includes a consideration of the causal pathway(s) necessary for any given risk to be realised. This
understanding of how the hazard might be translated into harm and the nature of the harm provides
valuable information for identifying risk treatment options. For example, a knowledge of the causal pathway
enables the identification of weak links in the chain where treatment may be most easily and effectively
applied. Logic tree analyses such as diagrammatic Fault and Event Trees are examples of formal, systematic
tools that are used in hazard identification and can also be applied to risk treatment. While the focus of risk
management will be on treatment measures to prevent risks being realised, attention will also be paid to the
important questions of what can be done if a particular risk is realised and what actions would need to be
undertaken to reduce, reverse or repair damage or harm. Where management conditions for dealings that involve moderate or high risk estimates are being considered, it is important to establish whether harm or damage that might result could be reversed, and to identify not only preventative measures but also curative or ameliorative actions. For example, if a genetically modified organism
produced a protein toxic to humans it would be important to establish if a medical treatment existed to treat
the toxicity. Such remedial measures should be included in contingency or emergency plans. The
requirement for licence holders to have contingency plans is a standard licence condition. Redundancy in risk
treatment options, for example by establishing measures which break more than one point in a causal
pathway, will increase the effectiveness of risk management. It is important to note that in such cases the
failure of a single risk treatment measure will not necessarily result in an adverse outcome being realised.

RISK TREATMENT
Once the risks that require management have been identified, then options to reduce, mitigate or avoid risk
must be considered. Options to reduce exposure to the hazard or its products, and limit opportunities for the
spread and persistence of the hazard, its progeny or the introduced hazardous materials must be
considered. Other measures could include specifying physical controls (e.g. fences and barriers), isolation
distances, monitoring zones, pollen traps, post release cleanup and specific monitoring requirements. It is
important to note that the background exposure to the introduced hazard or hazardous material or its
product informs the consideration of the risks that require treatment. Where exposure occurs naturally, the
significance of exposure to the hazard may be reduced. The range of suitable containment and isolation
measures will depend on the following:
(1) Hazard and targets;
(2) The characteristics of the hazard;
(3) The ability to identify and detect the hazard and hazard materials;
(4) Proposed dealings;
(5) Environmental conditions at the site of environmental releases;
(6) Normal production and management practices;
(7) Controls proposed by the applicant.

Once measures have been identified they must be evaluated to ensure that they will be effective and
sufficient over time and space. That is, they will be feasible to implement, able to operate in practice, will
meet currently accepted requirements for best practice (e.g. Good Agricultural Practice, Good Laboratory
Practice), will manage the risks to the level required and can be monitored. The type of measures will be
commensurate with the risks identified. These measures may be either preventative or curative or ameliorative: either the measures will seek to treat risk by preventing, with some degree of certainty, a hazard being realised, or, where a hazard may be realised and harm ensue, the measures proposed will redress or reduce that harm. Following such an
incremental pathway contributes to overall risk management because it enables a cautious and systematic
approach to minimising uncertainty. The Regulator may impose conditions on small scale, early stage field
trial releases that limit the dealings in space and time (i.e. only at a specified location and in a specified
timeframe) in order to address any uncertainty regarding either the exposure, likelihood, consequence or
safety level considered in the risk estimate. Typically these conditions include measures to limit the
dissemination or persistence of the hazard or its hazardous material. Such releases are described by the
Regulator as limited and controlled. Hence, the Regulator should establish a risk hierarchy based on the following risk definitions (an illustrative encoding of this hierarchy is sketched after the list):
(1) Extremely High Risk – Loss of ability to accomplish the mission if threats occur during mission. A
frequent or likely probability of catastrophic loss or frequent probability of critical loss exists.
(2) High Risk – Significant degradation of mission capabilities in terms of the required mission standard,
inability to accomplish all parts of the mission, or inability to complete the mission to standard if threats
occur during the mission. Occasional to seldom probability of catastrophic loss exists. A likely to
occasional probability exists of a critical loss occurring. Frequent probability of marginal losses exists.
(3) Moderate Risk – Expected degradation of mission capabilities in terms of the required mission standard, or a reduced mission capability, if threats occur during the mission. An unlikely probability of catastrophic
loss exists. The probability of a critical loss is seldom. Marginal losses occur with a likely or occasional
probability. A frequent probability of negligible losses exists.
(4) Low Risk – Expected losses have little or no impact on accomplishing the mission. The probability of
critical loss is unlikely, while that of marginal loss is seldom or unlikely. The probability of a negligible
loss is likely or less.
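
One possible way to encode this hierarchy is as a lookup from severity and probability classes to a risk level, as sketched below. The class names are taken from the definitions above; the individual cell assignments are one illustrative reading of that prose, not an official matrix.

```python
# Hypothetical encoding of the risk hierarchy defined above. The cell
# assignments are one reading of the prose definitions, not a prescribed matrix.

RISK_LEVEL = {
    "catastrophic": {"frequent": "Extremely High", "likely": "Extremely High",
                     "occasional": "High", "seldom": "High", "unlikely": "Moderate"},
    "critical":     {"frequent": "Extremely High", "likely": "High",
                     "occasional": "High", "seldom": "Moderate", "unlikely": "Low"},
    "marginal":     {"frequent": "High", "likely": "Moderate",
                     "occasional": "Moderate", "seldom": "Low", "unlikely": "Low"},
    "negligible":   {"frequent": "Moderate", "likely": "Low",
                     "occasional": "Low", "seldom": "Low", "unlikely": "Low"},
}

def risk_level(severity: str, probability: str) -> str:
    """Return the risk level for a severity class and a probability class."""
    return RISK_LEVEL[severity][probability]

print(risk_level("critical", "occasional"))  # High
print(risk_level("catastrophic", "likely"))  # Extremely High
```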

The scale of any release is a key factor in setting the context for the risk analysis, and for risk management
in particular, because limiting the scale effectively reduces the exposure to potential adverse consequences.
The most appropriate options available to manage the risk are selected. It is possible to envisage a number
of options that may provide different levels of management of a specific risk. Equally, one management
strategy may control a number of risks. The Regulator must be satisfied that the risks would be managed by
the proposed options before a licence can be issued. This may include options that manage the risks most
comprehensively and ones that are judged to provide a sufficient level of management. Any identified
uncertainty in aspects of the risk assessment or risk treatment measures must be addressed in determining
the appropriate risk management. Uncertainty in risk estimates may be due to insufficient or conflicting data
regarding the exposure, likelihood or severity of potential adverse outcomes. Uncertainty can also arise from
a lack of experience with the hazard itself. Risk treatment measures would be devised to take account of
such uncertainty, for example, the size of a reproductive isolation distance for a genetic modified plant
would be based on the overall distribution of pollen, not just on the median distance pollen might travel.
Typically the pathway for intentional release involves a staged approach that starts in certified contained
facilities and proceeds through strictly contained, small scale field trials before larger scale, reduced
containment or commercial release. This enables information to be collected about the hazard at each stage
of this step-by-step process in order to reduce uncertainty in risk assessments, and confirm the efficacy of
containment measures. The results of this research may result in changes to licence conditions to better
manage risk and will inform future evaluations of the same or similar hazards. Results of such research
might provide the basis for the diminution in the scale of risk treatment measures necessary to manage a
particular risk. In some instances other agencies will have the legislative mandate for the control of risks
from hazards that the Regulator has also identified as requiring management. In these cases the Regulator
liaises closely with that agency to ensure that the risks are managed satisfactorily.

MONITORING FOR COMPLIANCE


Monitoring plays a vital role in ensuring that risks to human health and safety or the environment posed by
hazards are managed. The Safety Law gives the Regulator extensive monitoring power. Where risks
requiring management have been identified and treatment measures have been imposed by the legislation,
in licence conditions, or in guidelines, monitoring is important to verify that those treatment measures or
obligations are being applied so that risks are, in fact, managed. Monitoring is not only undertaken by the
Regulator and its representatives, but also by licence holders and accredited organisations, to ensure that licence
conditions and other requirements are implemented and are effective. The licence holder is required to
provide regular reports to the Regulator and to report any changes in circumstances and any unintended
effects, new risks or contravention of conditions. Specific monitoring and compliance activities that contribute directly to risk management include:
(1) Routine Monitoring of limited and controlled environmental releases and certified facilities, including spot
(unannounced) checks;
(2) Profiling of dealings to assist strategic planning of monitoring activities (e.g. conducting inspections);
(3) Education and awareness activities to enhance compliance and risk management planning of licence
holders and organisations;
(4) Audits and Practice Reviews in response to findings of routine monitoring;
(5) Incident reviews in response to self reported non-compliance;
(6) Investigations in response to allegations of non-compliance with conditions or breach of the legislation.

In the case of monitoring of limited and controlled releases of hazards to the environment, the focus of effort, by both the licence holder and the Regulator, is to ensure that the dealings are in fact limited, including extensive post-release monitoring until the Regulator is satisfied that hazards have effectively been removed from the
release site. Where, as a result of monitoring activities, changes in the risks associated with the dealing are
identified, the Regulator has a number of options, including directive or punitive measures. The options
adopted by the Regulator will depend on the nature of the change in the risk profile that has been identified.

QUALITY CONTROL AND REVIEW


In addition to the various risk management processes described above, attention to quality control and
quality assurance by the Regulator and official representatives in the conduct of all aspects of risk analysis
contributes to achieving the management of risks posed to human health and safety and other targets by
hazards. Quality control operates at administrative, bureaucratic and legislative levels in the risk analysis
process under the safety regulations. There are a number of feedback mechanisms to maintain the
effectiveness and efficiency of risk assessment and risk management, and which consider the concerns of all
interested and affected stakeholders. These comprise both internal and external mechanisms. Internal
processes of quality control and review include:
(1) Standard operating procedures for specific administrative processes;
(2) Internal peer review of risk assessment and risk management programs;
(3) Merit based selection processes for risk assessment staff;
(4) Conflict of interest declarations and procedures for Regulator’s risk assessment staff and expert
committee members.


External processes of quality control and review include:


(1) Expert scrutiny of applications and risk assessment and risk management programs;
(2) External scrutiny and review through the extensive consultation processes with Government agencies
and the Environment Minister, State governments, relevant councils, interested parties and the public;
(3) External, independent selection of the Regulator and advisory Committee members, and Ministerial
Council agreement on these appointments;
(4) Review by administrative appeals mechanisms.

A critical aspect of this overall quality assurance is that the Regulator and official agencies (or
representatives) maintain the expertise and capacity to undertake the risk analysis of hazards. This is
achieved through the qualifications and skills of staff, remaining up to date on developments in gene
technology and relevant scientific disciplines by reference to the scientific literature, and monitoring
determinations, experience and policy developments of agencies regulating hazards in other countries. This
quality assurance contributes to identifying situations where treatment measures are not adequately
managing the risks, either as a result of non-compliance or because of changed circumstances and
unexpected or unintended effects; and facilitates an ongoing review of the conclusions of risk assessment
and of the risk treatment options. Identifying changed circumstances enables a reassessment of the risks
posed by the dealings and the treatment measures in the light of experience, and for risk management to be
modified where necessary. Such review activities may also provide important information for the risk
assessment of subsequent licence applications for the same or related hazards. Quality control forms an
integral part of all processes and procedures used by the Regulator and official representatives to ensure
protection of human health and other targets (e.g. environment, business interruption, product and
equipment) according to the Safety Law and international regulations. Some types of controls at disposal of
the enterprises and Regulator are as follows:
(1) Engineering controls – These controls use engineering methods to reduce risks, such as developing new
technologies or design features, selecting better materials, identifying suitable substitute materials or
equipment, or adapting new technologies to existing systems. Examples of engineering controls that
have been employed in the past include development of aircraft technology, integrating global
positioning system data, and development of night vision devices.
(2) Administrative controls – These controls involve administrative actions, such as establishing written
policies, programs, instructions, and standard operating procedures (SOP), or limiting the exposure to a
threat either by reducing the number of personnel and assets or length of time they are exposed.
(3) Educational controls – These controls are based on the knowledge and skills of the units and individuals.
Effective control is implemented through individual and collective training that ensures performance to
standard.
(4) Physical controls – These controls may take the form of barriers and guards or signs to warn individuals
and units that a threat exists. Use of personal protective equipment, fences around high power high
frequency antennas, and special controller or oversight personnel responsible for locating specific threats
fall into this category.
(5) Operational controls – These controls involve operational actions such as pace of operations, operational
area controls (areas of operations and boundaries, direct fire control measures, fire support coordinating
measures), rules of engagement, operational control measures, exercises and rehearsals.


Table 4.01 – Examples of risk control options.

Engineering Control:
- Limit energy (small amount of explosives, reduce speeds).
- Substitute safer form (use air power, precision guided munitions).
- Prevent release (containment, double or triple containment).
- Separate (barriers, distance, boundaries).
- Provide special maintenance of controls (special procedures, environmental filters).

Educational Control:
- Core Tasks (especially critical tasks define critical minimum abilities, train).
- Leader tasks (define essential leader tasks and standards, train).
- Emergency contingency tasks (define, assign, train, verify ability).
- Rehearsals (validate processes, validate skills, verify interfaces).
- Briefings (refresher warnings, demonstrate threats, refresh training).

Administrative Control:
- Mental criteria (essential skills and proficiency).
- Emotional criteria (essential stability and maturity).
- Physical criteria (essential strength, motor skills, endurance, size).
- Experience (demonstrated performance abilities).
- Number of people or items (only expose essential personnel and items).
- Emergency medical care (medical facilities, personnel, medical evacuation).
- Personnel (replace injured personnel, reinforce units, reallocate).
- Facilities and equipment (restore key elements to service).

Physical Control:
- Barrier (between revetments, walls, distance, ammunition storage facility).
- On human or object (personal protective equipment, energy absorbing materials).
- Raise threshold (acclimatization, reinforcement, physical conditioning).
- Time (minimize exposure and number of iterations and rehearsals).
- Signs and color coding (warning signs, instruction signs, traffic signs).
- Audio and visual (identification of friendly forces, chemical and biological attack warning).

Operational Control:
- Sequence of events (put tough tasks first before fatigue, do not schedule several tough tasks in a row).
- Timing (allow sufficient time to perform, practice, and time between tasks).
- Simplify tasks (provide job aids, reduce steps, provide tools).
- Reduce task loads (set weight limits, spread task among many).
- Backout options (establish points where process reversal is possible when threat is detected).
- Contingency capabilities (combat search and rescue, rescue equipment, helicopter rescue, tactical combat force).
- Emergency damage control procedures (emergency responses for anticipated contingencies, coordinating agencies).
- Backups and redundant capabilities (alternate ways to continue the mission if primaries are lost).
- Mission capabilities (restore ability to perform the mission).

A control should avoid or reduce the risk of a potential threat by accomplishing one or more of the following:


(1) A voiding the risk – This often requires canceling or delaying the task, mission, or operation and is,
therefore, an option rarely exercised because of mission importance. However, it may be possible to
avoid specific risks such risks associated with a night operation may be avoided by planning the
operation for daytime; thunderstorm or natural risks can be avoided by changing the geographical
location.
(2) Delaying a task – If there is no time deadline or other operational benefit to speedy accomplishment of a
task, it may be possible to reduce the risk by delaying the task. Over time, the situation may change and
the risk may be eliminated, or additional risk control options may become available (resources become
available, new technology becomes available, etc.) reducing the overall risk. For example, a mission can
be postponed until more favorable weather reduces the risk.
(3) Transferring the risk – Risk may be reduced by transferring a mission, or some portion of that mission,
to another unit or platform that has lower risk. Transference decreases the probability or severity of the
risk to the activity. For example, the decision to fly an unmanned aerial vehicle into a high-risk
environment instead of risking a manned aircraft is risk transference.
(4) Assigning redundant capabilities – To ensure the success of critical operations and to compensate for potential losses, assign safety systems with redundant capabilities. This increases the probability of operational success, but it is costly.

DETERMINE RESIDUAL RISK


Once the leader develops and accepts controls, he or she determines the residual risk associated with each
potential threat and the overall residual risk for the target. Residual risk is the risk remaining after controls
have been identified, selected, and implemented for the potential threat. As controls for threats are
identified and selected, the threats are reassessed, and the level of risk is revised. This process is repeated
until the level of residual risk is acceptable to the safety manager or cannot be further reduced. Overall
residual risk of a target must be determined when more than one potential hazard is identified. The residual
risk for each of these threats may have a different level, depending on the assessed exposure, probability,
severity and safety level of the hazardous incident. Overall residual risk should be determined based on the
hazard having the greatest residual risk. Determining overall risk by averaging the risks of all individual hazards is not valid. If one hazard has a high residual risk, the overall residual risk is high, no matter how
many moderate or low risks are present. The Risk Assessment Matrix (see Table 3.11) combines severity
(loss criticality) coefficient, exposure coefficient and probability (likelihood) coefficient estimates for a
specific safety level coefficient to form a risk assessment for each hazard. Use the Risk Assessment Matrix to
evaluate the acceptability of a risk, and the level at which the decision on acceptability will be made. The
matrix may also be used to prioritize resources, to resolve risks, or to standardize potential threat
notification or response actions. Severity, probability, exposure, safety level and risk assessment should be
recorded to serve as a record of the analysis for future use.
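
A minimal sketch of the overall residual risk rule is given below. The numeric score that combines the severity, exposure, probability and safety level coefficients merely stands in for the Risk Assessment Matrix of Table 3.11, which is not reproduced here, so the weighting is purely illustrative; the rule of taking the maximum rather than the average follows the text.

```python
# Illustrative sketch of determining overall residual risk after controls.
# The score() combination stands in for the Risk Assessment Matrix (Table 3.11)
# and is NOT the document's scale; hazard data are invented.
from dataclasses import dataclass

@dataclass
class ResidualRisk:
    hazard: str
    severity: float      # loss criticality coefficient
    exposure: float      # exposure coefficient
    probability: float   # likelihood coefficient
    safety_level: float  # safety level coefficient

    def score(self) -> float:
        # Hypothetical combination of the coefficients.
        return self.severity * self.exposure * self.probability / self.safety_level

hazards = [
    ResidualRisk("pump seal leak", severity=4, exposure=2, probability=1, safety_level=2),
    ResidualRisk("hose rupture", severity=2, exposure=3, probability=2, safety_level=1),
]

# Overall residual risk is governed by the worst hazard, never by the average.
worst = max(hazards, key=lambda h: h.score())
print(f"overall residual risk driven by: {worst.hazard} (score {worst.score():.1f})")
```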


RISK COMMUNICATION

There is wide recognition that communication plays an integral and important part in the process of risk
analysis. Risk communication is the interactive process of exchange of information and opinion between
individuals, groups and institutions concerned with risk. These exchanges may not be related exclusively to
risk but may also express concerns, opinions or reactions to risk messages or to legal or institutional
arrangements for risk management. The aim of risk communication is to promote a clear understanding of
all aspects of risk and the particular positions of interested parties. Specifically, it aims to provide information
about risk to help people make up their own minds, to minimise conflicts, to improve understanding of
perceptions and positions with regard to risk, and to achieve equitable outcomes. Its purpose is to provide all parties with a better understanding of the issues, not to change their basic values and beliefs. This chapter briefly
discusses the way risk is perceived, describes the present communication processes between stakeholders
and the Regulator as mandated by the Safety Law, and sets out a communication charter to demonstrate
the commitment of the Regulator to communicate effectively with stakeholders.

RISK PERCEPTION
Public perceptions of the risks associated with hazards range across a wide spectrum of positions and
include ethical concerns such as meddling with nature and social issues, such as claims that multinational
corporations might seek to achieve market dominance by controlling access to the technology. Different
societal organisations and individuals perceive risk in different ways and may have different attitudes to risk.
Perception of risk can be influenced by material factors (gender, age, education, income, personal
circumstances), psychological considerations (early experiences, personal beliefs, attitudes to nature,
religious beliefs) and cultural matters such as ethnic background. Across a spectrum of risk, attitudes can be
broadly categorised as risk averse, risk neutral or risk taking and will be dependent on the specific risk
involved. Generally the perception of risk by individuals is dependent on a large number of factors including
knowledge of the risk, its impact on that individual, the potential for long term consequences, the potential
for widespread effects, the extent the individual can influence the risk and possible benefits (if any) that
might accrue to individuals, groups or society as a whole. If the risk arises as part of a familiar situation
where factors increasing or decreasing the risk are well known and methods to control or reduce the risk are
readily available, the risk will probably not be perceived as a threat. If the risk is unknown, there is potential
for long term impact over a wide area and the individual feels powerless in the situation, the risk is likely to
be perceived as high. The availability of information, the knowledge that concerns will be heard and the
opportunity for involvement in decisions are therefore, all likely to increase the acceptance of risk.
There has been considerable research by social scientists into the way risks are estimated and perceived by
different members of the community. Often technical experts and scientists have very different perceptions
and estimations of risks than other people. Although it is accepted that experts may arrive at a better
quantitative assessment of risks where they have specialist knowledge, the way they estimate risks outside
their area of expertise is no different to that of other members of the community and can be influenced by
subjective values. Risk perception is fundamental to an individual’s acceptance of risk. For instance, there is
a level of risk associated with car travel but many people continue to drive to work each day and it is an
accepted form of transport. Commercial air travel is also accepted as a form of transport but many people
may perceive it as more risky than car travel although the probability of death is actually higher with car
travel. These perceptions arise due to greater familiarity with cars, greater individual control in operating a
car, and the fact that any one car accident is less likely to be fatal than any one airline accident.
Therefore, the perception and assessment of risk by an individual is a complex construction involving a
number of factors that are weighed and balanced to achieve a final position. Historically a number of
approaches have been employed in endeavouring to gain community understanding and acceptance of
certain risks that government or business believe are required for economic prosperity, contribute to society
as a whole or are worthwhile in some way even though some risk may be involved. An understanding of the
importance of risk communication has evolved in parallel with these attempts and has been elegantly
encapsulated. It is not enough just to present the facts, or just to communicate and explain the facts, or to
demonstrate that similar risks have been accepted in the past, or to bring stakeholders on board; rather, all of these are required for effective risk communication. All of these things are important and lead to the conclusion
that stakeholders’ views should be treated with respect as they provide a valid and required input into risk
assessment and risk management. The Regulator recognises and accepts that there are a wide range of
views on gene technology across the community and believes that all stakeholders have legitimate positions.
In terms of risk communication, key outcomes of the consultations which are given effect in the national and
international regulations are the establishment of Committees to advise the Regulator (scientific, community
and ethics) and public consultation during the assessment of licence applications. The Safety Law therefore
provides a direct mechanism for two-way interaction between a government decision maker, the Regulator,
and stakeholders.


UNCERTAINTY

Uncertainty and risk analysis is not new. However, as a tool in business it has historically been of limited
use. This is surprising considering that many business decisions are based on a figure that has been
calculated from analysis of some kind. A number on its own is only half the picture. To fully understand the
result it is necessary to have an estimate of the uncertainty related to that figure. Management will need to
consider carefully its attitude to risk before making a decision about whether to accept a proposed project. It is frequently the case in project appraisals that large amounts of effort go into generating
the expected value, but very little time is spent understanding the uncertainty around that value. This
document gives an overview of how to carry out uncertainty and risk analysis. In general, the word
uncertainty means that a number of different values can exist for a quantity, and risk means the possibility
of loss or gain as a result of uncertainties. We have tended to use both terms interchangeably in this
document, and indeed this is common practice by most practitioners. However, it is important to understand
the difference, as in some situations it may be necessary to apply the absolutely correct term to avoid
ambiguity. One useful taxonomy for uncertainty distinguishes at least five types of uncertainty that can be
applied to risk analysis of hazards. These include:
(1) Epistemic – Uncertainty of knowledge, its acquisition and validation. Examples of epistemic uncertainty
include incomplete knowledge, limited sample size, measurement error (systematic or random),
sampling error, ambiguous or contested data, unreliable data (e.g. mislabelled, misclassified,
unrepresentative or uncertain data), use of surrogate data (e.g. extrapolation from animal models to
humans), ignorance of ignorance that gives rise to unexpected findings or surprise. Consequently,
epistemic uncertainty is a major component of uncertainty in risk assessments.
(2) Descriptive – Uncertainty of descriptions that may be in the form of words (linguistic uncertainty),
models, figures, pictures or symbols (such as those used in formal logic, geometry and mathematics).
The principal forms of descriptive uncertainty include vagueness, ambiguity, underspecificity, context dependence and undecidability. Qualitative risk assessments can be particularly susceptible to linguistic uncertainty. For example, the word “low” may be ambiguously applied to the likelihood of harm, to the magnitude of a harmful outcome and to the overall estimate of risk. Furthermore, the word “low” may be poorly defined both in meaning (vagueness) and coverage (underspecificity).
(3) Cognitive – Cognitive uncertainty can take several forms, including bias, variability in risk perception,
uncertainty due to limitations of our senses (contributing to measurement error), and unreliability.
Cognitive unreliability can be viewed as guesswork, speculation, wishful thinking, arbitrariness, debate,
or changeability.
(4) Entropic (complexity) – Uncertainty that is associated with the complex nature of dynamic systems that
exist far from thermodynamic equilibrium, such as a cell, an organism, the ecosystem, an organisation or
physical systems (e.g. the weather). Uncertainty due to complexity arises when dealing with a system in
which the outcome is dependent on two or more processes that are to some degree independent.
Complexity is typically coupled to incomplete knowledge (epistemic uncertainty) where there is an
inability to establish the complete causal pathway. Therefore, additional knowledge of the system can
reduce the degree of uncertainty. However, complex systems are characterised by non-linear dynamics
that may display sensitive dependence on initial conditions. Consequently, a deterministic system can
have unpredictable outcomes because the initial conditions cannot be perfectly specified. Complexity is
listed as one of the four central challenges in formulating the European Union (EU) approach to
precautionary risk regulation.
(5) Intrinsic – Uncertainty that expresses the inherent randomness, variability or indeterminacy of a thing,
quality or process. Randomness can arise from spatial variation, temporal fluctuations, manufacturing
variation, genetic difference or gene expression fluctuations. Variability arises from the observed or
predicted variation of responses to an identical stimulus among the individual targets within a relevant
population such as humans, animals, plants, micro-organisms, landscapes, etc. Indeterminacy results
from a genuine stochastic relationship between cause and effect(s), apparently noncausal or noncyclical
random events, or badly understood nonlinear, chaotic relationships. A critical feature of intrinsic
uncertainty is that it cannot be reduced by more effort such as more data or more accurate data.

In risk management, safety factors and other protective measures are used to cover this type of uncertainty.
All five types of uncertainty may be encountered in a risk analysis context. To encompass this broader
application, uncertainty can be defined as “imperfect ability to assign a character state to a thing or process;
a form or source of doubt”. Where:
(1) “Imperfect” refers to qualities such as incomplete, inaccurate, imprecise, inexact, insufficient, error,
vague, ambiguous, under-specified, changeable, contradictory or inconsistent;
(2) “Ability” refers to capacities such as knowledge, description or understanding;
(3) “Assign” refers to attributes such as truthfulness or correctness;
(4) “Character state” may include properties such as time, number, occurrences, dimensions, scale, location,
magnitude, quality, nature, or causality;
(5) “Thing” may include a person, object, property or system;
(6) “Process” may include operations such as assessment, calculation, estimation, evaluation, judgement, or
decision.

As the safety engineering tradition is less developed than some other engineering disciplines, there are no
methods available which will reveal whether certain design strategies are sufficiently safe. In a situation in
which the overall safety depends on, for example, just one fire hazard reducing system and no sensitivity
analysis is performed, there will be an uncertainty as to whether the regulations are fulfilled or not. Trading,
for example, multiple escape routes for an automatic fire alarm without examining the consequences in
detail is evidence of this lack of engineering tradition. There is a current tendency to use many technical
systems without thorough analysis of the consequences. It should, however, be stated that this was also
common when prescriptive regulations were in force. It is therefore not legitimate to take the current safety
engineering practice as an argument for discarding the performance-based regulations. Such regulations are
vital for the rational development of building tradition, but this development must be guided in order to be
efficient. In addition to the guidelines, it must be possible to quantify the required objectives in the
regulation.
Safety can be ensured either by comparing the proposed design with accepted solutions, or with tolerable
levels of risk, or by using design values in the calculations which are based on a specified level of risk. The
first method, using accepted solutions, is more or less equivalent to the prescriptive regulation method. It
has normally very little to do with optimising a solution for a specified risk level. The other two methods are
based on specified levels of risk. In the design process, the proposed design can be evaluated by applying
risk analysis methods. This can also be done after the building has been completed, to check its fire safety.
To be able to make full use of the advantages of performance-based regulations, the design should be
based on risk analysis methods. One such method is called the Standard Quantitative Risk Analysis (SQRA)
method. As many variables are associated with uncertainty, the risk analysis should be complemented by an
uncertainty analysis. Applying uncertainty analysis to the Standard Quantitative Risk Analysis method will
lead to the Extended Quantitative Risk Analysis (EQRA) method. Both methods can be used in the risk
management process described earlier. The Standard Quantitative Risk Analysis method has been applied to
general safety problems and to fire safety problems but on a very limited scale only. The Extended
Quantitative Risk Analysis method has, however, never been applied to fire safety problems. The Standard
Quantitative Risk Analysis has, on the other hand, been extensively applied to other engineering fields, such as
hazard analysis in chemical process plants. As both Quantitative Risk Analysis methods can be rather
complex to use, a simpler method using design values in deterministic equations would be preferable
for safety design purposes. These design values, based on quantified risk, should not be confused with
values estimated based on experience. The latter values are the ones used today, as design values based on
risk do not yet exist in the area of safety engineering (including fire safety engineering). In other fields of
engineering, e.g. in structural engineering, design values based on risk have been developed and are now in
use (Thoft-Christensen et al., 1982). The Extended Quantitative Risk Analysis (EQRA) method explicitly
considers the inherent uncertainty as it is part of the procedure. The Standard Quantitative Risk Analysis
(SQRA) method does not take variable uncertainty into account. The Standard Quantitative Risk Analysis
method should be complemented by a sensitivity or uncertainty analysis. Another aim of this work is to
introduce a method through which so-called design values can be obtained. This method is linked to an
uncertainty analysis method, the analytical First Order Second Moment (FOSM) method, and is used to derive
design values assuming a specified risk level.
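
As an illustration of how the analytical First Order Second Moment (FOSM) method yields a reliability measure for a simple state function G = X − Y with independent variables, the sketch below computes the reliability (safety) index β and the corresponding failure probability under a normal approximation. It is a minimal sketch with assumed example values, not the design-value procedure referred to above.

```python
# Minimal FOSM sketch for the state function G = X - Y, with independent and
# approximately normal X (supply, e.g. available time) and Y (demand, e.g.
# required time). All numerical values are assumed for illustration.
from math import sqrt, erf

def fosm_beta(mu_x: float, sd_x: float, mu_y: float, sd_y: float) -> float:
    """First order second moment reliability index for G = X - Y."""
    mu_g = mu_x - mu_y
    sd_g = sqrt(sd_x ** 2 + sd_y ** 2)
    return mu_g / sd_g

def failure_probability(beta: float) -> float:
    """P(G <= 0) under the normal approximation, i.e. Phi(-beta)."""
    return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))

beta = fosm_beta(mu_x=12.0, sd_x=2.0, mu_y=8.0, sd_y=1.5)   # e.g. minutes
print(f"beta = {beta:.2f}, P(failure) = {failure_probability(beta):.3f}")
```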

QUALITATIVE AND QUANTITATIVE METHODS


Qualitative methods are used to identify the most hazardous events. The events are not ranked according to
degree of hazard. For the chemical process industry, methods have been developed such as Hazard and
Operability Analysis (HAZOP), What-if and different check-lists (CPQRA, 1989). Qualitative methods may be
used as screening methods in the preliminary risk analysis. Semi-quantitative methods are used to
determine the relative hazards associated with unwanted events. The methods are normally called index
methods, point scheme methods, numerical grading, etc., where the hazards are ranked according to a
scoring system. Both frequency and consequences can be considered, and different design strategies can be
compared by comparing the resulting scores. Various point scheme methods have been developed for fire
safety analysis, for example, the Gretener system (BVD, 1980), and the NFPA 101M Fire Safety Evaluation
System (Nelson et al., 1980 and NFPA 101M, 1987). The Gretener system has been developed by an
insurance company and is mainly intended for property protection. It has, however, been widely used and is
rather extensive in terms of describing the risk. The major drawback of point scheme methods is that they
contain old data. New technologies are included rather slowly. Influences on the methods from the author's background and, for example, from local building traditions are also unavoidable. The National Fire Protection Association (NFPA) method favours North American building traditions and may not be applicable in Europe.
On the other hand the simplicity of the methods is an advantage. Usually, only basic skills are required. A
review of different risk ranking methods for fire safety is presented by Watts (1995). Another semi-
quantitative method which is used in this area focuses on risk classification (SRV, 1989). The hazards are
judged in terms of the frequency and expected consequences. The frequency and consequences are
selected from a list consisting of five levels. By combining the frequency class and consequence class, a
measure of risk is obtained. This measure can be used to compare hazards. This analysis is usually
performed on the societal level and is not applicable to a single building or industry. Other industry specific
index methods are available, for example, for the chemical process industry (CPQRA, 1989). Typical index
methods are the Equivalent Social Cost Index (ESCI) and the Fatal Accident Rate (FAR). The Equivalent Social Cost
Index (ESCI) is an alternative expression of the average societal risk. The difference compared with the
usual form of average societal risk is that a risk aversion factor (p) is included. The risk aversion factor is usually chosen to be higher than 1.0 to reflect the unwillingness to accept accidents with a large number of fatalities, as the relationship then becomes non-linear. The Equivalent Social Cost Index (ESCI) can be expressed as,

ESCI = Σ_{i=1}^{n} p_i · N_i^p    [5.01]

Suitable values for the risk aversion factor (p) have been suggested to be between 1.2 and 2 (Covello et al., 1993). N_i is the number of fatalities per year in subscenario (i) and p_i is the probability of that subscenario. The Equivalent Social Cost Index is a pure index for the comparison of engineering measures. A relation to monetary units is not meaningful, as the risk aversion factor (p) is more or less based on judgement, without any connection to tolerable risk levels.
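
A short numerical sketch of equation [5.01] follows. The subscenario probabilities and fatality numbers are invented purely to show the effect of the risk aversion factor.

```python
# Worked sketch of the Equivalent Social Cost Index, equation [5.01].
# Subscenario data are invented for illustration only.

def esci(subscenarios, p_aversion: float) -> float:
    """ESCI = sum over subscenarios of p_i * N_i ** p (risk aversion factor p)."""
    return sum(p_i * (n_i ** p_aversion) for p_i, n_i in subscenarios)

# (yearly probability of subscenario i, number of fatalities in subscenario i)
data = [(1e-3, 1), (1e-4, 10), (1e-5, 100)]

print(esci(data, p_aversion=1.0))  # 0.003: plain expected number of fatalities
print(esci(data, p_aversion=2.0))  # 0.111: large-N subscenarios now dominate
```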

The Fatal Accident Rate (FAR) is used in worker accident assessment. It expresses the number of deaths per 10^8 hours of exposure (approximately 1,000 worker lifetimes). It is a measure that combines risk
contributions from many sources. It is closely linked to an average individual risk measure used in the
chemical process industry. The final level of analysis is the most extensive in terms of quantifying the risk. It
is also the most labour intensive. On this level, a distinction can be made between a deterministic analysis and
a probabilistic analysis. The deterministic analysis focuses on describing the hazards in terms of the
consequences. No consideration is taken of the frequency of the occurrence. A typical example is the
determination of the worst case scenario expressed as a risk distance. The deterministic approach has been
used in estimating design equivalency for evacuation safety by Shields et al. (1992). The probabilistic
approach determines the quantified risk based on both frequency and consequences. The Quantitative Risk
Analysis (QRA) method uses information regarding the following questions (a small event-tree sketch is given after the list):
(1) What can go wrong?
(2) How often will it happen?
(3) What are the consequences if it happens?
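
These three questions map naturally onto an event tree: the branches answer what can go wrong, their frequencies answer how often, and the end states describe the consequences. The sketch below sums frequency-weighted consequences over the subscenarios of a small, invented tree; the initiating frequency, branch reliabilities and consequences are assumptions, not data from this document.

```python
# Minimal event-tree sketch for the three QRA questions above.
# All frequencies, reliabilities and consequences are invented.

initiating_frequency = 1e-2  # assumed fire frequency per year

# branch probability (sprinkler, alarm working or failing) and consequence (fatalities)
branches = [
    (0.96 * 0.90, 0.0),  # sprinkler works, alarm works
    (0.96 * 0.10, 0.0),  # sprinkler works, alarm fails
    (0.04 * 0.90, 0.5),  # sprinkler fails, alarm works
    (0.04 * 0.10, 2.0),  # sprinkler fails, alarm fails
]

expected_consequence = initiating_frequency * sum(p * c for p, c in branches)
print(f"expected fatalities per year: {expected_consequence:.2e}")  # 2.60e-04
```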

This approach has been used in fire spread calculations in buildings and on ships (Fitzgerald, 1985). One of
the more extensive fire risk programmes was developed in the United States of America during the 1990s
(Bukowski et al., 1990). The methodology is used to derive the expected number of fatalities per year in
buildings. The main objective was to study the influence on the risk of different types of building
construction materials. A quantitative probabilistic method has also been used to evaluate risk in health care
facilities in the United Kingdom (Charters, 1996). It is, however, one of the first attempts to quantify the risk
to patients and staff in a hospital. The probabilistic approach has also been adopted in the proposed
international standard for fire safety engineering as a procedure for identifying fire scenarios for design
purposes (ISO/CD 13388). For situations in which the risk management process is used at the design stage,
the Australian Fire Engineering Guide (FEG, 1996) proposes a rational structure of quantitative methods.
Different levels are to be used depending on the relative benefits which are possible to obtain. Three levels
of quantitative analysis are identified:
(1) Component and subsystem equivalence evaluation;
(2) System performance evaluation;
(3) System risk evaluation.

The first level is basically used for comparative studies to evaluate equivalency between different design
alternatives on the component level. Different alarm systems can be compared and evaluated in terms of
equivalency with respect to a prescribed standard. The second level considers the relation between two or
more subsystems. The difference between the design alternatives is higher than in the first level. Evaluation
aspects may include fire growth, smoke development and occupant evacuation. The last level can be seen as
a Standard Quantitative Risk Analysis where the whole building design is considered and measures of risk
are derived.

SOURCES OF FAILURE
In many engineering situations, most variables used in the analysis will be associated with uncertainty. In
performing a Quantitative Risk Analysis (QRA) it is important to identify how these uncertainties affect the
result. Therefore, an uncertainty analysis should complement the risk analysis. This is, however, seldom the
case. It is believed that the biggest benefit of uncertainty analysis would be to illuminate the fact that
uncertainties exist. Variation in variable outcome, due to random variability, can result either from stochastic
uncertainty or from knowledge uncertainty. When a risk analysis is to be performed, one must ask the
questions: “What can go wrong?”, “How likely is it?” and “What are the consequences?”. This is one of the
most fundamental steps in the process and results in a list of possible outcomes, some of which result in
people not being able to evacuate safely, i.e. the system fails. Looking at the list of failures, it is possible to
distinguish a pattern of similarities among the sources of failure. At least two types of failure can be
identified: failure due to gross error and failure due to random variability. For example, when examining the
evacuation from a building, which has taken place, it is probably rather easy to identify the reason why the
occupants were not able to escape safely. But when performing a risk analysis for a future event, or
executing an engineering design, sources of failure in the first category are very difficult to identify. This is
because of the nature of gross errors. They originate from errors during the design process or from the risk
analysis procedures. The division between the two types of failure is made because the methods with which
the two types of failure are handled are different. There are many other ways to categorise different sources
of failure, many of which are specific to a particular area of engineering (Blockley, 1980). The categorisation of
failures into those caused by gross error and those caused by random variability is only one example, but a
rational one. Types of failure can be distinguished by, for example, the nature of the error, the type of failure
associated with the error, the consequences of the failure arising from the error, those responsible for
causing or for not detecting the error, etc.

Gross Errors
Gross errors can be defined as fundamental errors which, in some aspect of the process of planning, design,
analysis, construction, use or maintenance of the premises, have the potential to cause failure (Thoft-
Christensen et al., 1982). A risk analysis or a design can be performed on condition that the models and
basic background information are correct and that procedures for design, analysis, maintenance, and so on,
are carried out according to relevant state-of-the-art standards. If this is not the case, changes must be made.
Either other standards or control measures must be used or the conceptual model must be changed. A
typical example of a gross error in fire engineering is neglecting to maintain vital functions such as the
emergency lighting system or alarm system. When maintenance is neglected, the reliability of such systems
can deviate from that which is specified. Another example of gross errors is when changes are made on the
construction site which are not examined or approved in the design phase of a project. Changing to different
products which might seem harmless to the builder can lead to significant safety problems when the specific
protection product is needed in a real hazardous situation.


Human Error
Many gross errors originate from human errors. Underlying causes may be, for example, lack of experience,
education or formal qualification. But such errors can also occur due to incompetence and negligence.
During evacuation, many actions are taken by people which, afterwards, may seem irrational or inefficient. The behaviour of people under psychological stress can result in actions which may not be the most
rational. Actions such as investigating the unknown fire cue, alerting others and helping others are common.
Even actions such as ignoring the threat have also been observed in fire investigations. Some of these
actions can be considered irrational and will not bring the person into a safer position. These may be called
human errors. However, this type of human error should not be considered a gross error, as it is part of the random uncertainty in people's reaction and behaviour. Reaction and behaviour is one of the variables in
the state function describing the evacuation process. It must, however, be noted that all human actions,
described by the response and behaviour variable, will sooner or later lead to the decision to evacuate. This
will also be the case for individuals ignoring the threat, but they may realise this too late to be able to
escape safely. The choice of alternative design solutions may also be able to help such people. An overview
of the area of human error has been presented by Reason (1990), who also presents some rational
measures to minimise the influence of gross error.

Random Variability
The other type of failure is caused by the inevitable randomness in nature. This randomness results in a
variability of the variables describing the system which might cause an error. Variables describing the system are not always known to a degree that makes it possible to assign a constant value to them. Uncertainty is
always present in the variables and this is one of the reasons why risk analysis is performed. Failure occurs
when the variable values are unfavourable for the system. If, for example, the fire growth in a room is
extremely rapid and at the same time the occupant load is also very high, this may lead to the result that
not all the people in the room can escape. The fire might result in a positive outcome, i.e. no fatalities, if the
occupant load was not that high, but the combined effect of the rapid growth and the high number of
occupants results in the accident. The event can be seen as a random combination due to unfortunate
circumstances. These failures are acceptable as long as their probabilities are independent and below that
which is tolerable. The important matter is that uncertainties in the variables describing the system can, for
some combinations, cause the system to fail. The uncertainty due to random variability can be further
divided into the subclasses stochastic variability and knowledge uncertainty. Variables which are subject to
uncertainty are usually described by probability distributions, and randomness can assign a value to the
variable which might be very high or very low, i.e. an unfavourable value. These values can occur due to
circumstances which are unlikely to happen, but still which are possible. A very high fire growth rate can
occur in a building, even if it might be unlikely. By using probability distributions, very unlikely events can
also be considered.

Uncertainty Caused by Randomness


There are at least two types of uncertainty which must be distinguished as they originate from different
conditions. Stochastic uncertainty or variability is the inevitable variation inherent in a process which is
caused by the randomness in nature. This type of uncertainty cannot be eliminated by more exhaustive studies, although it can be reduced by stratifying the variable into more nearly homogeneous subpopulations. Knowledge uncertainty represents
the variation due to a lack of knowledge of the process. This type of uncertainty can be reduced by further
analysis of the problem and experiments, but it still originates from randomness. Both types of uncertainty
are described by the same measure, i.e. the probability distribution of the variable. But, they are otherwise
fundamentally different as they describe different phenomena. Normally, in uncertainty analysis, stochastic
and knowledge uncertainties are treated without distinction, both contributing to the overall uncertainty.
There are, however, situations where there is an interest in separating stochastic uncertainty from
knowledge uncertainty. By doing this, it is possible to see the influence of the two types on the overall
uncertainty, and to determine which area requires further knowledge. In model validation, it is also practicable to separate variability from knowledge uncertainty. The latter is then in the form of model uncertainty. One of
the first attempts at using the approach of stochastic and knowledge uncertainties in an assessment in the
area of safety engineering was presented by Magnusson et al. (1995) and Magnusson et al. (1997). The
analysis was performed on calculations of evacuation reliability from an assembly room. Stochastic
uncertainty and knowledge uncertainty have also been referred to as Type A uncertainty, associated with
stochastic variability with respect to the reference unit of the assessment question, and Type B uncertainty
due to lack of knowledge about items that are invariant with respect to the reference unit in the assessment
question (IAEA, 1989). Examples of parameters that are coupled to the two types of uncertainty are given
below:
(1) Variability, Type A (wind direction, temperature, fire growth rate in a particular class of buildings and
occupant response times);
(2) Knowledge uncertainty, Type B (model uncertainty, plume flow coefficient, acceptable heat dose to
people and most reliability data for systems).

It should be mentioned that several variables may be affected by both kinds of uncertainty, and there is
usually no clear separation between the two.
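
One common way of keeping the two types apart in a calculation is a nested (two-dimensional) Monte Carlo simulation: knowledge uncertainty (Type B) is sampled in an outer loop and variability (Type A) in an inner loop, so that each outer sample produces its own estimate of the failure probability. The sketch below is illustrative only, with assumed distributions; it is not the procedure used by Magnusson et al.

```python
# Illustrative nested Monte Carlo separating knowledge uncertainty (Type B,
# outer loop) from stochastic variability (Type A, inner loop). All
# distributions and parameter values are assumed for the sketch.
import random

random.seed(1)
N_OUTER, N_INNER = 200, 1000

failure_prob_estimates = []
for _ in range(N_OUTER):
    # Type B: uncertain knowledge about the model and the population
    model_bias = random.uniform(0.8, 1.2)      # model uncertainty factor
    mean_response = random.gauss(90.0, 10.0)   # uncertain mean response time [s]

    failures = 0
    for _ in range(N_INNER):
        # Type A: variability between individual fires and occupants
        available_time = model_bias * random.lognormvariate(5.0, 0.3)  # [s]
        required_time = random.gauss(mean_response, 20.0) + 60.0       # [s]
        if required_time >= available_time:
            failures += 1
    failure_prob_estimates.append(failures / N_INNER)

failure_prob_estimates.sort()
print("median P(failure):", failure_prob_estimates[N_OUTER // 2])
print("95th percentile  :", failure_prob_estimates[int(0.95 * N_OUTER)])
```

The spread of the outer-loop estimates then reflects the knowledge uncertainty, while each individual estimate already integrates over the variability.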

Handling Gross Errors


One cannot treat gross errors in the same way as random errors, regarding them as extreme values of a
probability distribution. Gross errors alter the probability of failure by changing the complete model
describing the system. Gross errors are reduced by measures such as training, internal or external control,
proper organisation, maintenance of equipment, attitude of the staff, etc. As a consequence of this, gross errors cannot be accounted for simply by choosing probability distributions with infinite tails. Gross errors are normally addressed in qualitative review processes. The rest of this document will be devoted to risk analysis with and without the influence of uncertainties. It is, of course, clear that the complete risk management process must also consider potential gross errors.

Systematic Errors
Systematic errors can belong to both categories of failure, gross error or error due to random variability,
depending on whether the systematic error is known in advance or not. A systematic error is defined as the
difference between the true value and the measured or predicted value. A systematic error can arise from
biases in, for example, model prediction or expert judgements. A known systematic error, such as a model
uncertainty, can be treated as an uncertain variable or a constant correction. Unknown systematic errors, on
the other hand, are more difficult to foresee, and must be considered as potential gross errors. Efforts must
be made to minimise the influence of systematic errors. In some cases, they can be reduced by performing
more experiments or by using different evaluation methods for expert judgement predictions. Using models
outside the area for which they are validated will contribute to the unknown systematic error. This must
therefore be avoided. It is, however, usually not possible to reduce all systematic errors and some will
remain and be unknown.

Uncertainty in Subscenarios
Another possible division of the uncertainty variables can be made to distinguish between uncertainty in the
subscenario probability and uncertainty in the consequence description for each subscenario. This division is
most relevant when the risk analysis covers the whole system, i.e. when performing a Quantitative Risk
Analysis. The probabilities of the subscenarios are usually also random variables and subject to uncertainty.
The reliability of, for example, a sprinkler system and an automatic fire alarm system will, to some extent, be
a random variable, and the outcome probability of the subscenarios will therefore also be subject to
uncertainty. The uncertainty for each subscenario can be treated as a stochastic uncertainty, but this does
not mean that the uncertainty in the consequence description will be a knowledge uncertainty. Both types of
uncertainty are included in the description of the consequences. An Extended Quantitative Risk Analysis
(EQRA) can, therefore, usually not distinguish between stochastic and knowledge uncertainties.

THE UNWANTED CONSEQUENCES


In the Quantitative Risk Analysis of a system, the consequence in each scenario and subscenario must be
quantified. The consequence is expressed, for example, in terms of the number of injured people or the
amount of toxic gas released to the atmosphere. The consequence can be formulated in terms of a
performance function or a state function for each subscenario in the event tree. The state function describes
one way or mode, in which the system can fail. The problem can generally be expressed as a matter of
supply versus demand. The state function is the formal expression of the relationship between these two
parameters. The simplest expression of a state function is basically the difference,

GXY [5.02]

where X is the supply capacity and Y the demand requirement. The purpose of any reliability study or design
is to ensure the condition X > Y throughout the lifetime of the system, to a specified level indicated by
P(X ≤ Y) ≤ ptarget. Failure is defined as when the state function G is less than or equal to zero. When the
transition occurs, i.e. when G = 0, the state function is denoted the limit state function in order to
emphasize that it defines the distinction between failure and success. The limit state function is used in risk
analysis and design to determine the maximum consequences of a failure. In this thesis, it is understood
that when the values of the consequences are derived it is done using the limit state function, i.e. for the
condition that supply capacity equals the demand requirement. In the evacuation scenario, the state
function is composed of two time expressions, time available for evacuation and the time taken for
evacuation. The variable G can be seen as the escape time margin. If the escape time margin is positive, all
the people in the room will be able to leave before untenable conditions occur. On the other hand, if the
margin is negative for a subscenario, some of the people cannot leave without being exposed to the hazard.
The number of people subjected to this condition will depend on the magnitude of the time margin, the
distance to the escape route, the initial occupant density, the occupant characteristics, etc. The components
in the state function can be functions of other variables. There is no restriction on the number of functions
or variables in the state function. In the analysis in this thesis, the state function has the following general
appearance,

G = tu - (tdet + tresp + tmove)    [5.03]

where tu is the time taken to reach untenable conditions (i.e. the available escape time), tdet is the time
taken to detect the fire, tresp is the response and behaviour time of the occupants, and tmove is the time
required to move to a safe location. The four time variables are, in turn, functions of other basic variables
and constants. A basic variable is one which is subject to uncertainty. Variables compensating for model
error can also be included in these functions. Additional variables can be introduced for specific subscenarios
to better describe the actual situation. It is possible to express the risk in terms of lack of escape time
instead of number of people. It is, however, customary to express the risk by the number of people not
being able to escape safely. In the risk analysis, the escape time margin is reformulated in terms of the
number of people not being able to evacuate within the available time, i.e. expressed by the limit state
function. This is not necessarily equivalent to the number of fatalities. The available time is determined by
the level set for untenable conditions.
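
As a small illustration, the escape time margin of Equation [5.03] can be evaluated directly once the four time variables have been assigned values; the numbers used below are purely illustrative.

```python
def escape_time_margin(t_u, t_det, t_resp, t_move):
    """Escape time margin G = t_u - (t_det + t_resp + t_move), all times in seconds."""
    return t_u - (t_det + t_resp + t_move)

# Illustrative values (seconds): available time, detection, response and movement time.
g = escape_time_margin(t_u=300.0, t_det=60.0, t_resp=90.0, t_move=120.0)
print(g)        # 30.0 -> positive margin, evacuation completed in time
print(g <= 0)   # the failure criterion G <= 0
```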

Untenable Conditions
For evacuation analysis, the occurrence of the untenable conditions determines the available safe escape
time. In most engineering risk analyses, the consequence of interest is expressed in terms of the
number of fatalities, i.e. the number of people dying from the exposure. For evacuation analysis, this can be
obtained by setting the levels for untenable conditions at lethal exposure levels. Levels other than lethal can be
chosen. In this thesis, two different definitions of untenable conditions were used. In the design process in
fire safety engineering, for example, untenable conditions are normally defined as escape routes becoming
filled with smoke to a certain height above floor level. This criterion is often used in combination with other
criteria such as the smoke temperature and toxic gas concentration. The levels set do not imply that people
become fatal victims of the fire, but they will have some difficulties in escaping through smoke and toxic
gases created by the fire. These untenable conditions are usually assumed to define the time when the
escape route is no longer available as a safe passage. The levels of exposure are chosen on the safe side to
allow most occupants to be able to withstand them for a period of time. The risk measure based on this
definition of untenable conditions is not comparable with other risk measures in society, but it can be used
for comparative studies between different design solutions.
The other level of untenable conditions assumes that people will probably become fatal victims, e.g. of the
fire due to high temperature and smoke exposure. The exposure level is higher than for the critical level of
untenable conditions. Using this definition, the risk analysis can be compared with similar analyses from other
engineering fields. This level is denoted the lethal level of untenable conditions. The problem with this
definition lies in determining the lethal conditions for each specific hazard case, i.e. people are not equally
sensitive to fire conditions and factors such as age, sex, physical and psychological health status play
important roles. Limits on what can be regarded as lethal conditions must be determined deterministically or
described as probability distributions. The latter will, however, result in an enormous workload if
traditional engineering methods of predicting the consequences are used. In a purely statistical approach,
this method of determining the tolerable human exposure could be used.
Both definitions of untenable conditions are based on what humans can withstand in terms of environmental
factors such as heat and smoke exposure. In fire safety engineering, the critical level can be related to the
acute exposure to high temperature in combination with irritating smoke. But prolonged exposure can also
be harmful, even at a lower level of exposure. The cumulative exposure dose can cause the occurrence of
what is considered untenable levels. Work by Purser (1995) has resulted in an extensive knowledge base in
terms of upper limit exposure rates of humans to, for example, heat, radiation and toxic gases leading to
incapacitation or death. The levels can be expressed as the instantaneous exposure rate or the dose. The
dose expression is most common for the effects on humans of narcotic gases, but can also be used for
thermal exposure responses. It should be mentioned that most of this type of work is performed on
animals and not on humans. Questions may be raised as to whether these data can be used to
determine the tolerable exposure levels for humans. These data are, however, the only data available and
are therefore the ones used. A method of deriving the total exposure effect from different exposure sources is the
Fractional Effective Dose (FED) method, introduced by Hartzell et al. (1985). The Fractional Effective Dose
method sums the contributions from the various sources to give one variable value. When the Fractional
Effective Dose has attained the value of 1.0, the occupant is defined as being incapacitated or dead,
depending on the expressions used. The problem in using this information is that the production term for
narcotic gases in a fire is very uncertain and depends greatly on the fire scenario. Therefore, simpler
deterministic values are used to express the occurrence of untenable conditions. The most commonly used
variable is acute temperature exposure in conjunction with a limit on the smoke layer height. Conditions are
defined as untenable as soon as these criteria are fulfilled, and it is assumed that the escape route is
instantaneously blocked. The use of toxicological data in combination with temperature and radiation
exposure, could in the future be used as a better prediction of untenable levels and for consideration of the
inherent variation. This may be possible when better models, capable of predicting toxic gas concentrations
in the vicinity of a hazard, become available. Toxicological data for determining untenable conditions is used
in other areas of engineering, for example, in the prediction of the effects of toxic gas release to the
atmosphere. When determining the consequences of a release of toxic gas to the atmosphere, the Probit
function is normally used (Finney, 1971). This is a measure that considers the exposure concentration, the
exposure time and also the toxicological effect on the human body. Different exposure effects can be
studied, from the smell of the gas to acute death. Different gases and exposure effects generate different
values of the variables, which are used in the Probit function. These are based on the estimated human
tolerability to the gases. If the gas concentration at a specified location and exposure time is known, the
number of victims, or people being subjected to its effects, can be estimated.
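
A commonly quoted general form of the Probit relation is Pr = a + b·ln(Cⁿ·t), with the affected fraction of the exposed population obtained from the standard normal distribution. The sketch below uses this generic form; the constants a, b and n are substance- and effect-specific, and the values given here are placeholders rather than published data.

```python
import math
from statistics import NormalDist

def probit_fraction(conc, time_min, a, b, n):
    """Fraction of an exposed population affected, using the generic Probit
    form Pr = a + b*ln(C**n * t); a, b and n depend on the gas and the effect."""
    pr = a + b * math.log(conc ** n * time_min)
    # Conventional Probit scale: fraction affected = Phi(Pr - 5).
    return NormalDist().cdf(pr - 5.0)

# Placeholder constants, 30-minute exposure at a concentration of 200 ppm.
print(probit_fraction(conc=200.0, time_min=30.0, a=-10.0, b=1.0, n=2.0))
```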

The Values of Variables


A state expression may contain functions of random variables as well as independent basic random variables
and constants. The response and behaviour time is, for example, usually determined as a single
deterministic value or a distribution. There are no calculation models available to determine this time. The
values used to calculate both the probabilities and the consequences should be chosen carefully. This is the
most critical part of the analysis, regardless of whether the task is to design the escape system, to perform a
Standard Quantitative Risk Analysis or to perform a complete uncertainty analysis, an Extended
Quantitative Risk Analysis. Many values are not easily determined and may be subject to uncertainty. For
design purposes, values should be chosen to represent the credible worst case (ISO 13388, 1997). Taking
the mean value of, for example, fire growth rate for a scenario does not necessarily represent credible
scenarios sufficiently well. An upper percentile value could be chosen for the hazard (e.g. fire) growth rate.
In building design and Standard Quantitative Risk Analysis, single values are used to determine the
consequences and, if applicable, also the probabilities. In an explicit uncertainty analysis, the variables are
defined by their respective distributions. The values for the Standard Quantitative Risk Analysis can be
chosen in two ways. Either the values are chosen to represent the best estimate for the variables or they
can be chosen as conservative estimates, similar to those used for design purposes. Using the best estimate
values results in a measure of risk that is also a best estimate. However, as there are uncertainties involved
in the variables, the best estimate measure of risk can be associated with large uncertainty. As a best
estimate by its nature represents the average situation, many situations, approximately half, will be worse than
the estimated measure of risk. Performing an Extended Quantitative Risk Analysis leads to a quantification
of this uncertainty. The values for the Standard Quantitative Risk Analysis can also be chosen as
conservative estimates. Using these in the analysis leads to a measure of risk that is on the safe side. How
safe the measure is cannot be quantified without performing an Extended Quantitative Risk
Analysis, but the measure of risk is not underestimated compared with the estimated average risk measure.
One problem that can occur when using these values is that the choices can be too conservative. Performing an
uncertainty analysis or an Extended Quantitative Risk Analysis can help to solve this problem.
The latter method of choosing values for the Standard Quantitative Risk Analysis is the one used here. The
average measures of risk were also implicitly derived, as they can be obtained from the Extended
Quantitative Risk Analysis as average values, for example as the average risk profile. Using values
which are slightly conservative, similar to those used for design, in the Standard Quantitative Risk Analysis
can be interpreted as performing a risk analysis on the design conditions. In order to evaluate the influence
of uncertainties, the Standard Quantitative Risk Analysis or the safety engineering design process should
be complemented by a sensitivity analysis. This results in information concerning the relative importance
of the variables.

Sensitivity Analysis
The purpose of the sensitivity analysis is to identify important variables, i.e. those controlling the result to a
high degree. Work has been done to determine what should be included in a sensitivity analysis, for example
in NKB (1997) and in the Fire Engineering Guidelines (FEG, 1996). Factors that should be investigated with respect to the impact
on the final result are:
(1) Variations in input data;
(2) Dependence on degree of simplification of the problem;
(3) Dependence on the description of the scenario, i.e. how the event tree is created;
(4) Reliability of technical and human systems.

The variables identified as important should perhaps be chosen somewhat more conservatively than others.
If the safety is highly dependent on just one function, redundancy should be considered. The analysis should
identify variables of importance and what measures should be taken to eliminate or reduce the
consequences of failure. Sensitivity analysis only gives an indication of the importance of the variables
involved in the analysis of a planned or existing building. If a more detailed investigation is necessary, a
complete uncertainty analysis should be performed. All information regarding the uncertainty in variables is
then considered. Kleijnen (1995) provides a general description of sensitivity and uncertainty analysis.
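
A minimal one-at-a-time sensitivity sketch for the escape time margin is given below; each variable is perturbed by 10 % around an assumed nominal point, and the resulting change in the margin indicates its relative importance. The nominal values are illustrative.

```python
def g(t_u, t_det, t_resp, t_move):
    # Escape time margin, Equation [5.03].
    return t_u - (t_det + t_resp + t_move)

nominal = {"t_u": 300.0, "t_det": 60.0, "t_resp": 90.0, "t_move": 120.0}
base = g(**nominal)

# Perturb each variable by +10 % and record the change in the margin.
for name in nominal:
    perturbed = dict(nominal, **{name: nominal[name] * 1.10})
    print(name, g(**perturbed) - base)
```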

SYSTEM ANALYSIS
The simplest situation occurs when the subscenario problem can be formulated as a single equation.
The single limit state function contains all the information needed to describe the consequences of the
subscenario. In some cases this is not sufficient as more than one failure mode can exist, i.e. the safety of
the occupants can be jeopardised in more than one way. When this is the case, the situation must be
described by more than one equation. If these equations are correlated, they must be linked together to
form a system which describes the expected consequences of the subsystem. In evacuation analysis, the
failure or unsuccessful evacuation is determined by the occurrence of the first failure mode. The evacuation
safety of the subscenario is expressed as a series system, as only one failure mode needs to occur for the
system to fail. If one failure mode is fulfilled, at least one occupant is exposed to untenable conditions at any of the locations
described by the subscenario. In the area of structural reliability, series systems, parallel systems and
combinations of series and parallel systems can be identified. In fire safety engineering, the interest is purely
in series systems, as occupants are prevented from further evacuation as soon as untenable conditions have
arisen at least at one location. The series system can be illustrated by a chain. The strength of the chain is
dependent on the strength of the weakest link. The links can be expressed as limit state functions for the
different locations, for example, fire room and corridor, for one subscenario. If one system fails, the whole
system fails. When numerical analysis methods are used to solve series problems, the limit state function
can be expressed in terms of a number of separate equations. The consequences are derived from sample
calculations that are repeated. This may require several iterations before the subscenario consequences can
be determined. For analytical methods such as the First Order Second Moment (FOSM) method (Thoft-
Christensen et al., 1982), the problem must be treated a little differently. Correlated single equations have to
be treated simultaneously to derive the probability of interest. The probability of failure cannot, in most cases,
be determined as a single value, but only as an interval. Different methods are available to describe
the bounds of the interval.

Response Surface Method


Usually, in a risk analysis the expressions in the limit state functions are derived by the use of computer
programs. That is independent of whether it is a Standard Quantitative Risk Analysis or the complete
uncertainty analysis that is the objective. In some cases, more than one computer program must be used to
predict the consequence for every branch. If only one consequence value, such as the number of people not
being able to escape safely, is calculated for each event tree outcome, the use of the computer tools is
normally rather straightforward. The computer output results can be used directly, as input, in the risk
analysis. When considering uncertainties, as in the Extended Quantitative Risk Analysis or in uncertainty
analysis, the computer programs must be used differently. This is because uncertainty analysis requires that
the problem be formulated in a certain manner. The uncertainty analysis can either be performed as a
numerical sampling procedure or as an analytical procedure. When a numerical simulation procedure, such
as a Monte Carlo method is used, a large number, usually more than 1,000, of calculations must be
performed for each event tree outcome. It is rather inefficient to calculate the result directly for each
subscenario 1,000 times. If the computer program is specially designed to enable this iterative procedure it
may be an integrated part of the analysis, see Iman et al. (1988) and Helton (1994) for reviews of the area.
As this feature is rather uncommon in commercial programs, other approaches must be considered. One
approach first approximates the computer output with an analytical expression, a response surface, which
can then, in the second step, easily be applied in the Monte Carlo simulation, i.e. the uncertainty analysis.
The arguments for using response surface equations are also valid if the uncertainty analysis is performed
by analytical methods, such as the First Order Second Moment method. The response surface, or meta
model, is used to estimate the values from computer calculations or experiments on the basis of only a very
few input variables. A response surface is normally created by using regression analysis. The term response
surface is used to indicate that, when using several variables to represent the data, a surface is created in n-
dimensional space, where n is the number of variables. In a two-variable case, the response surface
becomes a line, which is usually referred to as a regression line. With more than two variables, the
regression result will be a plane or a curved surface, depending on the relationship between the variables. In
this thesis, the general term surface will be used even for the two-dimensional situation. The response
surface equation should represent the data as accurately as possible, at least in the region of interest.
There are other advantages with this method, apart from time saving, which are worth mentioning. As the
output is derived from an equation, it is rather obvious which variables determine the result. The analysis is
very transparent and easy to verify and reproduce. The results will not be determined by a black-box
computer program. It is also rather easy to determine the quality of the output as only one or a few
equations must be considered in a sensitivity analysis or uncertainty analysis. The drawback of using the
response surface technique is that a new uncertainty variable is introduced. The magnitude of this new
uncertainty is usually small and its influence normally not very significant. To gain an understanding of how
well the response surface equation predicts the computer output, the coefficient of determination (R2) can
be analysed. The uncertainty resulting from the regression analysis can, however, be included in the total
uncertainty analysis. With good approximation methods this uncertainty will be small, and the benefit of
having a fast calculation model outweighs this uncertainty.

Creating the Response Surface Equation


Creating the response surface equation for computer model outputs requires information on both the input
values and the computer output results. Regression analysis is used to obtain the analytical relationship
between the input parameters and their corresponding output (Ang et al., 1975). Several methods are
available to create this analytical equation, such as the method of least squares and the method of
maximum likelihood. The response surfaces used in this thesis were derived using the method of least
squares. The simplest case of curve fitting is to derive an equation that represents data by a straight line,
linear regression analysis. The task is to estimate α and β in the expression,

y = α + β·x + ε    [5.04]

giving the estimate of the real variable y. The equation can also be interpreted as providing the conditional
estimate E(y|x). The factor ε represents the uncertainty in y. The regression equation does not have to be
restricted to two variables. Multiple variable regression analysis is similar, but the theoretical details will
not be presented here. The regression analysis introduces new uncertainties into the parameters α and β, as
they can only be estimated and will therefore be associated with uncertainty, e.g. described by a mean and
a standard deviation. This means that α, β and ε are subject to uncertainty as a result of the regression
analysis. The method of least squares works with any curve characteristics as the only objective is to
minimise the difference between the sample data and the predicted surface. The important issue is to find a
relation that describes the output in the best way and with as small a deviation from the data as possible.
The vertical differences between the data and the regression line, the residuals, will be evenly distributed on
both sides of the regression line. This is a result of the method as it minimises the sum of the squares of the
residuals. This means that the sum of the residuals is equal to zero. The residual variance (σr2) is a measure of
how well the regression line fits the data: it shows the variation around the regression line. The variable ε
in Equation [5.04] is usually estimated by a normal distribution, N(0, σr). The residuals are in the same units
as the variable y, which means that the values from different regression analyses cannot be compared
directly to determine whether or not the regression shows good agreement. A normalised measure of the
deviation is the correlation coefficient. The correlation coefficient (r) is a measure of how close the data are
to a linear relationship, and is defined as,

r = Σ_{i=1}^{n} (xi - μx)(yi - μy) / √[ Σ_{i=1}^{n} (xi - μx)² · Σ_{i=1}^{n} (yi - μy)² ]    [5.05]

The correlation coefficient can vary between -1 and +1, and values close to the outer limits of this interval
represent good agreement. The sign indicates whether the correlation is positive or negative. In multiple
linear regression analysis, the coefficient of determination (R2) is used instead of the correlation coefficient
(r). For the linear case with only one independent variable, r2 = R2.

R2 = Σ_{i=1}^{n} (ŷi - μy)² / Σ_{i=1}^{n} (yi - μy)²    [5.06]

The coefficient of determination is a measure of how much of the total variation in the data is explained by the regression
model (ŷi in Equation [5.06] denotes the value predicted by the regression). The value should be as close as possible to 1. It is clear that the uncertainty in the prediction of y will
depend on the sample size (n). Increasing the sample size decreases the overall uncertainty. One of the
problems that may occur when using a response surface instead of the actual computer output, is that the
residuals may increase as the value of one or more variables is increased. If this happens, the uncertainty
introduced by the regression analysis may have to be considered important. As the regression analysis is
used together with other variables that are subjected to uncertainty, the uncertainty variables from the
regression analysis must be compared with the other uncertainties. In most cases, these newly introduced
uncertainties can be omitted, as their contribution to the overall uncertainty can be considered small.
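
A short sketch of the procedure described above: a straight-line response surface is fitted by least squares to a handful of computer model outputs, and the correlation coefficient and coefficient of determination are reported. The data points are invented for illustration.

```python
import numpy as np

# Invented input values and corresponding computer model outputs.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least-squares estimates of the intercept (alpha) and slope (beta).
beta, alpha = np.polyfit(x, y, deg=1)
y_hat = alpha + beta * x          # values predicted by the response surface

r = np.corrcoef(x, y)[0, 1]       # correlation coefficient, Equation [5.05]
R2 = np.sum((y_hat - y.mean())**2) / np.sum((y - y.mean())**2)   # Equation [5.06]

print(alpha, beta, r, R2)
```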

Nonlinear Problems
Linear problems are rare in most engineering disciplines. Most models result in nonlinear solutions and the
traditional linear regression gives a poor representation of the data. There are two ways of solving this
problem: optimising a nonlinear expression or transforming the model into a form that is linear, at least
locally in the area of interest. Most nonlinear solutions are based on approximating the data by a polynomial
of some degree, for example a second-order polynomial. The curve fitting technique is more or less the
same as that described above. This approach is normally considered rather laborious and other means are
preferable if they are available. The second technique transforms the data into a form in which the
transformed variables are linear. One such transformation is to use the logarithmic values of the data. Other
transformations such as squares or exponentials can also be considered. If the transformed values appear to
be linearly dependent, linear regression analysis can be performed. The coefficient of determination can be
used to determine the agreement between the data and the response surface for both the nonlinear and the
transformed solutions. There are two good reasons for using the logarithmic values in some engineering
applications:
(1) In some cases the variation in the input variables spans several orders of magnitude. The values located
close to the upper limit of the response surface output will then influence the parameters in the
equation more than others.
(2) For some parameter combinations, a polynomial relationship can result in negative responses which are
physically impossible. This must definitely be avoided.

It appears that the linear approximation of the logarithmic data in determining the response surfaces is an
appropriate choice for the cases considered in this thesis. The coefficient of determination (R2) is generally
very high in all equations. The large difference in magnitude of the variables will be drastically reduced and
no negative responses will be derived using this approach. The response surface will have the following
general appearance,

y = exp(α + Σ_{i=1}^{n} βi·xi)    [5.07]

where n is the number of variables, and α and βi are the linear regression parameters. A problem arises
when the uncertainties in α, β and ε are to be transformed. If a numerical procedure is used for the
uncertainty analysis, this will normally not be a problem. For an analytical method using hand calculations,
these new uncertainties become a problem which might cause exclusion of the method. An approximate
solution can be used, i.e. excluding these uncertainties, or special software capable of considering regression
parameter uncertainty can be used. In the risk analysis presented here, both the Standard Quantitative Risk
Analysis and the Extended Quantitative Risk Analysis, these uncertainties are omitted, as they are small in
comparison with the other variable uncertainties. To be able to draw this conclusion, the single subscenario
uncertainty analysis was performed both with and without the uncertainty information in α, β and ε.
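
A sketch of the log-transformed fit behind Equation [5.07]: the logarithm of the output is regressed on the input variables and the prediction is exponentiated, which guarantees positive responses. The design points and outputs are invented, and the fitted form is the assumed one stated above.

```python
import numpy as np

# Invented design points (two input variables) and positive model outputs.
X = np.array([[1.0, 0.5], [2.0, 0.8], [3.0, 1.2], [4.0, 1.5], [5.0, 2.0]])
y = np.array([0.8, 2.0, 5.5, 12.0, 30.0])

# Fit ln(y) = alpha + beta1*x1 + beta2*x2 by ordinary least squares.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
alpha, betas = coef[0], coef[1:]

def response_surface(x):
    """Predicted response y = exp(alpha + sum_i beta_i * x_i); always positive."""
    return float(np.exp(alpha + x @ betas))

print(response_surface(np.array([2.5, 1.0])))
```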

DESCRIBING RANDOM VARIABLES


When the uncertainty is to be explicitly included in the analysis, some of the variables must be defined as
random variables. This is independent of whether the uncertainty is a stochastic or a knowledge uncertainty.
One way of describing the variables is to use the probability density function (PDF) or frequency distribution
for the variable, fX. A random variable can be represented by values within a specified interval, described by
the frequency function. The distribution shows the probability that a specific value will be assigned. The
distribution interval can either be limited by outer bounds, minimum and maximum values or be open,
having no outer limits. An example of a limited frequency function is the uniform distribution, having the
same frequency for all values within the interval and defined by the minimum and maximum values. The
normal distribution is an example of an open distribution. Other possible representations of a random
variable are the cumulative distribution function (CDF) and the complementary cumulative distribution
function (CCDF). The three types of representation, probability density function, cumulative distribution
function and complementary cumulative distribution function, contain the same information expressed in
three different ways. The latter two present the cumulative frequency from the probability density function.
The cumulative distribution function describes the probability P(X ≤ x) for the random variable X at any given
x defined in the interval -∞ < x < +∞. It is important to distinguish between x and X. The lower case x is the
argument of the function FX describing the cumulative distribution function. The mathematical relationship
between the probability density function (PDF) and the cumulative distribution function (CDF) is defined as,

FX(x) = ∫_{-∞}^{x} fX(t) dt    [5.08]


It is further assumed that the random variable X is continuous in the interval of interest. The complementary
cumulative distribution function (CCDF) is closely linked to the cumulative distribution function and is defined
as

CCDF = 1 - FX(x)    [5.09]

In risk analysis, the use of the cumulative distribution function is quite common as it answers the question:
“How likely is it that the consequences are worse than a specified value?”. In mathematical terms this can
be expressed as,

1 - FX(x) = P(X > x) = ∫_{x}^{+∞} fX(t) dt    [5.10]

The probability density function (PDF) is the most common representation of a random continuous variable
in quantitative risk analysis. Similarly, if the variable is represented by a discrete function it can be described
by its probability mass function (PMF) in analogy with its continuous relative. Each random variable is
represented by one or more parameters. The parameters can, for example, be minimum and maximum
values or the mean value and the standard deviation. The normal distribution is, for example, represented
by the mean value and the standard deviation.
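
The three representations can be related numerically for any given variable; the sketch below does so for a normal random variable whose mean and standard deviation are chosen arbitrarily.

```python
from statistics import NormalDist

# Arbitrary normal random variable, e.g. a time with mean 60 s and st. dev. 15 s.
X = NormalDist(mu=60.0, sigma=15.0)

x = 90.0
pdf  = X.pdf(x)        # probability density f_X(x)
cdf  = X.cdf(x)        # F_X(x) = P(X <= x), Equation [5.08]
ccdf = 1.0 - cdf       # complementary CDF, P(X > x), Equation [5.09]

print(pdf, cdf, ccdf)
```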

Correlated Variables
The random variables may be linked together by a dependence relationship, i.e. they are correlated. The
correlation between variables is important in risk calculations. The correlation can be either positive or
negative. A positive correlation means that the variables tend to deviate in the same direction: a high value
of the variable X is likely to be accompanied by a high value of Y. The correlation can be described by the covariance,
Cov(X,Y), or by the correlation coefficient, ρXY. The correlation coefficient can be seen as a normalised
covariance. If X and Y are statistically independent, Cov(X,Y) is zero, which also means that they are
uncorrelated. Uncorrelated variables are, however, not necessarily statistically independent. The correlation
coefficient is the most frequent measure of correlation and it is always between -1 and +1. Note the
similarity to the correlation coefficient (r).
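
The covariance and the correlation coefficient can be estimated directly from paired samples; in the sketch below the samples are generated with an assumed positive dependence purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two positively dependent samples: Y shares part of X plus independent noise.
x = rng.normal(0.0, 1.0, size=5000)
y = 0.7 * x + rng.normal(0.0, 1.0, size=5000)

cov_xy = np.cov(x, y)[0, 1]        # covariance Cov(X, Y)
rho_xy = np.corrcoef(x, y)[0, 1]   # normalised covariance, -1 <= rho <= +1

print(cov_xy, rho_xy)
```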

The Choice of Distribution


One task is to determine the most appropriate type of distribution for each variable and the corresponding
parameter values. The data forming the basis for the choice of a specific distribution are usually limited. This
leads to the question: “How should the distribution be selected in order to represent the variable as
accurately as possible?”. Firstly, as pointed out by Haimes et al. (1994), the distribution is not formally
selected. The distribution is evidence of, and a result of, the underlying data. In many cases the distribution
type is determined by what is previously known about the variable. For example, a strength variable cannot
have negative values, which eliminates some distributions. Two categories can be defined depending on the
amount of data available:
(1) If the amount of data is large;
(2) If the amount of data is small or irrelevant.

This implies that there are two methods available for the selection of a distribution and the corresponding
parameters. The probability distribution of the event can be estimated either according to the classical
approach, or according to the subjective approach, also known as the Bayesian approach.

The Classical Approach


If the data base is large, the distribution can be easily determined by fitting procedures. The parameters of
the distribution can be derived by standard statistical methods. This is normally referred to as the classical
approach. The classical approach defines the probability on the basis of the frequency with which different
outcome values occur in a long sequence of trials. This means that the parameters, describing the variable,
are assigned based on past experiments. There is no judgement involved in this estimation. It is based
purely on experimental data. Additional trials will only enhance the credibility of the estimate by decreasing
the variability. The errors of the estimate are usually expressed in terms of confidence limits. An example of
the frequency defined according to the classical approach is the calculation of the probability
that throwing a dice will result in a four. The conditions of the experiment are well defined. Throwing the
dice a thousand times will lead to an estimated probability of about 1/6 that the result will be a four. The
estimate will not be exactly 1/6, but close to it. Increasing the number of trials will improve the estimate.
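
The dice example amounts to counting relative frequencies over a long sequence of trials, which can be mimicked by a brief simulation; the seed and number of trials are arbitrary.

```python
import random

random.seed(1)
n_trials = 1000
fours = sum(1 for _ in range(n_trials) if random.randint(1, 6) == 4)

# The relative frequency approaches 1/6 as the number of trials grows.
print(fours / n_trials)
```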

The Bayesian Approach


If only a small amount of data is available, this data together with expert judgement can be used to form
the basis for the choice of distribution, which has the highest degree of belief. The choice will thus be
partially subjective. By applying the method of Bayes, the subjective distribution can be updated in a formal
manner, as soon as new data become available. Bayes’ method assumes that the parameters of the random
variables are also random variables and can therefore be combined with the variability of the basic random
variable in a formal statistical way by using conditional probabilities. This assumption reflects the
uncertainty inherent in the variable. The estimate of a parameter which is based on subjective judgement is
improved by including observation data in the estimate. The new estimate is a probability, on condition that
experiments or other observations have been performed, and that these results are known. The method can
be used for both discrete probability mass functions and continuous probability density functions. Applying
the dice example to this approach means that the person, conducting the experiment, does not have to
throw the dice at all. He knows from past experience and assumptions that the probability will be 1/6 if the
dice is correctly constructed. He makes this estimate by judgement. If the dice is biased and favours the
outcome two, this will only be seen in an experiment conducted according to the classical approach. The
subjective estimate will, therefore, be a false prediction of the true probability of the outcome four. However,
he can make a few throws to see if the dice is correctly balanced or not. Based on the outcome of this new
experience, he can update his earlier estimate of the true probability, using Bayes' theorem. If subsequent
new trials are performed and the probability is continuously updated, the subjective method will converge towards
the classical estimate of the probability.
In the following, a brief formal description of Bayes' theorem will be presented. A more detailed description
can be found in, for example, Ang et al. (1975). Each variable can be assigned a probability density function
(PDF) which the engineer thinks represents the true distribution reasonably well. This first assumption is
denoted the prior density function. The improved distribution, achieved by including new data, is denoted
the posterior density function. For a discrete variable, Bayes' theorem can be formulated as

P,    i   P   i 
P   i ,    [5.11]
n
 P,    i   P   i 
i 1

describing the posterior probability mass function for the random variable Θ, expressed by n possible values.
The posterior probability is the result of considering new experimental values (ε) in combination with the
prior probability P(Θ = θi). The term P(ε | Θ = θi) is defined as the conditional probability that the new
experimental values (ε) will occur, assuming that the value of the variable is θi.

Distributions Used in Fire Safety Engineering


How should a type of prior distribution and its corresponding parameters be chosen? A number of
researchers have tried to establish rules governing the choice of distribution based on, for example, the
amount of data present. According to Haimes et al. (1994), for small samples the mean value should be
calculated and combined with a subjective estimate of the upper and lower bounds and the shape of the
distribution. If large uncertainties are suspected, log-transformed distributions should be used instead of
uniform, triangular or normal distributions. For safety risk analysis, the first step is to establish the minimum
and maximum limits for each variable. The next task is to estimate the mean values and the standard
deviation, or other parameters, for each of the basic variables. The final step is to choose a distribution type
for the variables, based on which has the highest degree of credibility. This must be done for each random
variable in the system, such as for the response time, and also for variables such as reliability or availability,
e.g. in the case of an automatic fire detection system. For most variables, such as fire growth rate, there is a
more or less extensive data base, which provides a credible range (minimum values to maximum values) for
the specific parameter. The data are not systematically assembled, but the information exists and must be
sought after in a number of sources. Collecting and systematically organising the relevant data is a task
which must be given high priority in future work. The type of distribution most frequently used is the normal
distribution. It is believed to represent the variables in a suitable way. A lognormal distribution is chosen for
the fire growth rate as it gives no negative values and is believed to represent the variable in the best
possible way.

QUANTITATIVE ANALYSIS METHODS


The Quantitative Risk Analysis (QRA) is focused on the combined effect of likelihood (probability) or
frequency, exposure to the hazard, consequences (loss criticality) of a possible accident, and safety level.
The frequency or likelihood is usually derived using event tree techniques, sometimes combined with fault
tree analysis. For each branch in the event tree, denoted a subscenario, the consequences will be
determined. The consequence or loss criticality expresses the value of the unwanted event. The likelihood or
frequency, exposure, consequences or loss criticality, and safety level are formally combined in the
Quantitative Risk Analysis. The first step, before starting to quantify the risk, is related to defining and
describing the system. The system is defined in terms of one or more scenarios. In the risk analysis the
system must also be defined in terms of physical limitations, i.e. which physical area should be considered in
the analysis? After the system has been described, the hazards are identified and quantified, the next step in
the process is to evaluate the risk, i.e. perform the quantitative risk analysis. The results of the analysis are,
for example, the individual risk and the societal risk. Different criteria can be used in determining the
consequences. It is not necessarily the hazard to humans that governs the analysis. The objective of the
analysis could be to minimise the maximum allowed release of gas or to optimise safety measures restricted
by constraints such as authority regulations or maximum cost levels. The analysis method presented in this
thesis uses the decision criterion that human beings shall be prevented from being exposed to harmful
conditions. Human beings have a right, determined by societal regulations, to a certain degree of
protection in the case of any hazard. This type of criterion is classified as a rights-based criterion according
to the classification system of Morgan et al. (1990). The risk as defined in this thesis is then a measurement
of not being able to satisfy this criterion. The risk is defined in terms of the complete set of quadruplets (see
Equation [1.03]). Other decision criteria that may be used are utility-based criteria and technology-based
criteria. Utility-based criteria are often based on a comparison between cost and benefit. The objective of
the analysis can therefore be to maximise the utility. In order to choose an optimum solution, both the cost
and the benefit must be expressed in the same unit, usually in a monetary unit. An overview of decision
making can be found in Gärdenfors et al. (1986) and in Klein et al. (1993). Normally, a Quantitative Risk
Analysis (QRA) is a rather complex task. It is difficult to perform the analysis as it is labour intensive and
the degree of detail is high. It is also very difficult to evaluate a Quantitative Risk Analysis as many of the
assumptions are not well documented. In some cases, the only person able to reproduce the analysis is the
one who carried out the analysis in the first place. It is therefore advisable to follow some golden rules for
risk analysis. Morgan et al. (1990) defined a set of ten commandments for risk analysis which can be
summarised as:
(1) Perform the analysis in an open and transparent manner;
(2) Document all relevant assumptions and decisions taken throughout the process;
(3) Describe the uncertainties involved even if no explicit uncertainty analysis is performed;
(4) Expose the document to peer review.

Before continuing, it should be clearly stated that the term risk is not well defined. At the 1996 Annual
Meeting of the Society for Risk Analysis, Stan Kaplan said: “The words risk analysis have been, and continue
to be a problem. Many of you here remember that when our Society for Risk Analysis was brand new, one of
the first things it did was to establish a committee to define the word risk. This committee labored for four
years and then gave up, saying in its final report, that maybe it is better not to define risk. Let each author
define it in his own way, only please each should explain clearly what way that is”. In this thesis, risk is
defined as the quantitative measure of the condition that people are not able to escape safely before the
untenable conditions have occurred on the premises. The risk is expressed both to individuals and as the
societal risk considering multiple fatalities.

Performing a Quantitative Risk Analysis


In order to perform a fully quantitative risk analysis, a number of questions regarding, for example, the
extent of the analysis must first be answered. The choice of system boundaries and system level will have a
fundamental influence on the choice of analysis approach and methodology. The optimal choice of
assessment method will be dependent on factors such as:
(1) Whether the calculation tool is a computer program or an analytical expression;
(2) To what extent variable uncertainty is explicitly considered;
(3) Whether the analysis is concerned with a single subscenario or the whole event tree.

Different approaches are available for quantitative risk analysis. The first factor is related to how computer
results are used in the uncertainty analysis. Computer program output can be used either directly in the
analysis as an integrated part of the methodology or indirectly providing results which are used to create
analytical response surface equations. The second factor concerns the extent of the analysis in terms of
explicitly considering variable uncertainty. If no uncertainties are considered in the definition of the
variables, a standard quantitative risk analysis can be performed. In a Standard Quantitative Risk Analysis,
the events will be described in terms of deterministic point estimates. The subsequent risk results, both
individual risk and the societal risk, are also presented as deterministic values without any information on
the inherent uncertainty. Simple deterministic calculations can be performed by hand, but computer
calculation is normally the most rational. If a more thorough analysis of the scenario is the objective, the
impact of uncertainty in the variables defining the subscenarios should be examined. Usually, most variables
are associated with uncertainty and the risk measure can be further improved by considering such
uncertainties. The work load associated with the analysis will, however, be drastically increased. The third
factor is concerned with the level of analysis when considering uncertainty explicitly.
Two different approaches can be taken regarding uncertainty analysis, depending on the level of
examination. Only one subscenario at a time can be considered, or the whole event tree can be regarded as
a system. The uncertainty analysis determines how uncertainties in outcome probability or likelihood,
exposure, consequences or loss criticality, and safety level are propagated. This results in a more detailed
description of the scenario. For the analysis of one single subscenario, there are at least three methods
available: one analytical method and two numerical simulation methods. The analytical first order reliability
method is called analytical because it is possible to derive the resulting risk measure, the reliability index (β),
analytically for simple cases. The two numerical methods, the single phase and the two-phase methods, are
based on Monte Carlo simulations in which the variable distributions are estimated by sampling procedures.
The two-phase simulation method makes it possible to separate two types of uncertainty, i.e. stochastic
uncertainty and knowledge uncertainty. The first numerical method is more direct as it does not distinguish
between different types of uncertainty. The results of all three methods are, for the simplest case, the
probability of failure of the subsystem (pu,i), assuming that the subscenario has occurred. The probability of
failure can, together with the probability (pi), be used to produce a better estimate of the risk contribution
from subscenario (i). Considering variable uncertainty on the system level, i.e. performing a Quantitative
Risk Analysis, leads to the Extended Quantitative Risk Analysis.
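
For a single subscenario, the numerical sampling approach can be sketched as follows: sample the basic variables from their assumed distributions, evaluate the limit state function, and estimate pu,i = P(G ≤ 0). All distributions and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Illustrative distributions for the basic variables (seconds).
t_u    = rng.lognormal(mean=np.log(300.0), sigma=0.2, size=n)   # available escape time
t_det  = rng.normal(60.0, 10.0, size=n)                         # detection time
t_resp = rng.lognormal(mean=np.log(90.0), sigma=0.3, size=n)    # response time
t_move = rng.normal(120.0, 20.0, size=n)                        # movement time

G = t_u - (t_det + t_resp + t_move)      # escape time margin for each sample

p_u = np.mean(G <= 0.0)                  # estimated failure probability of the subscenario
print(p_u)
```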

Risk Measures
Before the different risk analysis methods are presented, it is appropriate to introduce the various measures
by which the risk can be expressed. A more detailed explanation is given as the risk analysis methods are
described. It is possible to identify at least two types of risk measures: individual risk (IR) and societal risk
(SR). Those two are the most frequent risk measures in current risk analyses. But, comparing risk measures
from different risk analyses is a difficult task, as the measures must be based on similar assumptions and be
defined in the same manner. The purpose of this thesis is to illustrate a basic methodology for risk analysis,
e.g. in building fires. Simple treatment of the term risk is therefore emphasised.

Individual Risk
The individual risk is defined as the risk to which any particular occupant is subjected at the location
defined by the scenario. If an occupant is inside a building, he or she will be subjected to a risk in terms of
the hazard frequency. The individual risk is usually expressed in terms of a probability per year of being
subjected to an undesired event, i.e. the hazard, considering all subscenarios.

Societal Risk
The societal risk is concerned with the risk of multiple fatalities. In this case, not only the probability that the
subscenario leads to the unwanted event is considered, but also the number of people subjected to the
hazard. People are treated as a group with no consideration given to individuals within the group, and the
risk is defined from the societal point of view. The societal risk is often described by the exceedance curve of
the probability of the event and the consequences of that event in terms of the number of deaths. This
curve is known as the Frequency Number curve (FN curve) or risk profile. The curve shows the probability
(cumulative frequency) of consequences being worse than a specified value on the horizontal axis. This
measure of risk is of particular interest as official authorities do not usually accept serious consequences,
even with low probabilities. Another form in which the societal risk can be expressed is as the average
societal risk measure, which is an aggregated form of the Frequency Number curve. The average risk is
expressed in terms of the expected number of fatalities per year.
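
A short sketch of how an FN curve and the average societal risk can be assembled from subscenario results: for each consequence level N, the frequencies of all subscenarios with at least N fatalities are summed. The four subscenarios below are invented.

```python
import numpy as np

# Invented subscenarios: frequency per year and number of fatalities N.
freq = np.array([1e-2, 5e-3, 1e-3, 1e-4])
N    = np.array([1,    2,    5,    20])

# FN curve: cumulative frequency of events with N or more fatalities.
order    = np.argsort(N)
N_sorted = N[order]
F_cum    = np.cumsum(freq[order][::-1])[::-1]

average_risk = np.sum(freq * N)          # expected number of fatalities per year

for n_i, f_i in zip(N_sorted, F_cum):
    print(n_i, f_i)
print(average_risk)
```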

Standard Quantitative Risk Analysis


A quantitative risk analysis in safety engineering should preferably be based on an event tree description of
the scenarios. The problem can then be analysed in a structured manner. Consideration can be taken of, for
example, the reliability of different installations, processes, facilities or human activities. The Standard
Quantitative Risk Analysis (QRA) is most frequently used in describing risk in the process industries and in
infrastructure applications. It has also been applied in the area of fire safety engineering, but as part of a
more comprehensive risk assessment of a system, for example safety in railway tunnels. The Standard
Quantitative Risk Analysis (QRA) is based on a large number of deterministic subscenario outcome estimates,
but the method is still considered probabilistic. When a large number of subscenarios are treated, each with
its individual probability, this will lead to a probabilistic measure of the risk. The Frequency Number curve
can, therefore, be seen as the empirical complementary cumulative distribution function (CCDF) for the
whole event tree. In the Standard Quantitative Risk Analysis, the consequences and probabilities of the
scenarios can be examined individually or together, as a system, depending on the objective of the analysis.
The idea of triplets is used to give the procedure a rational structure. Both individual risk and societal risk
can be calculated using this method. The most frequent type of risk analysis (Standard Quantitative Risk
Analysis) does not explicitly include any uncertainty analysis. To study the influence of uncertainties in
branch probabilities or variables, an Extended Quantitative Risk Analysis must be performed.
One problem with the Standard Quantitative Risk Analysis, when it is used in, for example, the chemical
industry, is the way it handles actions taken by people at the time of the accident. If people are exposed to
a hazardous condition they will most likely try to leave the area of danger. This is normally not addressed in
the traditional procedures for a Quantitative Risk Analysis. The traditional Standard Quantitative Risk
Analysis does not assume that people try to evacuate. This means that subscenarios in which people have
evacuated before untenable conditions have occurred are also accounted for. The individual and societal
risk in safety engineering should not include subscenarios in which people have evacuated before untenable
conditions have occurred, even if these conditions arise later in the hazard development. This condition is a
consequence of the limit state function approach. It means that the safety engineering risk measures will be
a better prediction of the true risk as they consider people escaping the hazard. This, however, introduces a
restriction in the risk measures compared with traditional Quantitative Risk Analysis. The safety engineering
risk measures are based on the condition of having a certain number of people present in the hazard site
when the harm starts. For a small number of people being at the location, they may be able to leave before
untenable conditions arise, and this subscenario will not add to the risk. But if a higher number of people
were at the same location, some of them may not be able to leave in time. This situation will therefore
increase the risk. The risk measure is therefore dependent on the number of people present at start.

Uncertainty Analysis
In every risk analysis there are a number of variables which are of random character. This means that when
single deterministic values are used, as in the Standard Quantitative Risk Analysis or in the routine design
process, there is no way of knowing how reliable the results are. In many engineering fields, historically
accepted or calculated design values have been derived to consider the inherent uncertainty. Using these
values results in a design with a specified risk level. In the area of safety engineering, no such values are yet
available and much engineering design is based on subjective judgement and decisions made by the
system’s architect. Values are then sometimes chosen on the conservative side and sensitivity analysis is
performed to identify important variables. A better way of illuminating the uncertainty in the results, is to
carry out an uncertainty analysis in which the variables are described by distributions instead of single
values. The variation or uncertainty in a variable is described by its probability density function (PDF). The
methods presented in this section are used to propagate the uncertainties of each variable through the limit
state functions to result in an estimate of the joint probability density function. The result will be expressed
as distributions of the limit state function G(X) or as confidence limits of the risk profile. The results of the
uncertainty analysis can be used to improve the estimated risk measures, individual risk and societal risk.
Depending on the level of uncertainty analysis there is a distinction between how the methods can be used
and which are suitable for a specific task. Analysis can be performed on two levels:
(1) On a single subscenario described by one or more equations;
(2) On multiple subscenarios described by an event tree.

The difference is, in principle, whether analysis is carried out on one of the subscenarios in the event tree or
if it considers the whole event tree. In the single subscenario analysis, three types of methods can be used:
(1) The analytical First Order Second Moment (FOSM) method;
(2) A numerical sampling method without distinction between the types of uncertainty;
(3) A numerical sampling method distinguishing between two types of uncertainty; stochastic uncertainty
and knowledge uncertainty.

The multiple scenario analysis is more straightforward and is basically an extension of the Standard
Quantitative Risk Analysis procedure, but the uncertainty in the variables is explicitly considered. It is
achieved by numerical sampling procedures. For both levels, the description of the consequences employs
limit state functions in which both deterministic and random variables can be included.

The Single Subscenario


In this case, the consequence limit state is described by one or more analytical equations in random
variables. The methods determine how uncertainties in variables are propagated through the limit state
functions. Usually, only one equation is used to represent the consequences. The methods result in the
probability (pu,i) which can be seen as the probability of failure of the system described by the analytical
equation. Probability of failure is the common term in structural reliability analysis. The term failure is usually
equivalent to the case when the load is higher than the strength, i.e. generally expressed as G(X)<0, where
G(X) represents the limit state function. For evacuation scenarios, this is equivalent to the escape time
exceeding the available time. If numerical sampling methods are used, detailed information is provided on
the shape of the resulting distribution. This means that probabilities other than P(G(X)<0) can be obtained.
This information can be used to estimate the risk of multiple fatalities for this single subscenario. This is,
however, not common procedure as the multiple fatality risk is usually derived for the whole scenario using
the standard or the Extended Quantitative Risk Analysis technique. The analytical method does not provide
information about the probability density function (PDF) but has other advantages. Apart from the
probability of failure, it provides information on the location of the so-called design point. The design point is
defined by a set of variable values which, when combined in the limit state function, results in the highest
probability of failure. The analytical method can, therefore, be used to derive design values based on a
specified probability of failure.
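
As a brief illustration of how such a probability of failure can be estimated by numerical sampling, the
following sketch (in Python) draws random values for an assumed available time (supply) and escape time
(demand) and counts how often the limit state function G(X) = R − S falls below zero. The distributions,
their parameters and the sample size are illustrative assumptions only, not values taken from this document.

import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000  # number of Monte Carlo samples

# Illustrative (assumed) distributions for a single evacuation subscenario:
# available safe time (supply R) and required escape time (demand S), in seconds.
available_time = rng.lognormal(mean=np.log(300), sigma=0.25, size=n)   # R
escape_time    = rng.lognormal(mean=np.log(180), sigma=0.40, size=n)   # S

# Limit state function G(X) = R - S; "failure" of the subscenario means G(X) < 0.
g = available_time - escape_time
p_ui = np.mean(g < 0)

print(f"Estimated probability of failure p_u,i = P(G(X) < 0) ≈ {p_ui:.4f}")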

The Analytical Reliability Index Method


The reliability index (β) method has been used extensively in the area of structural engineering. It has also
been applied to other scientific fields, such as public health assessment (Hamed, 1997) and fire safety
assessment (Magnusson et al., 1994; 1995; 1997). It is a supply-versus-demand-based model and it provides
information about the reliability of the system described by the limit state function. The term reliability is
here defined as the probabilistic measure of performance and is expressed in terms of the reliability index
(β). As both the supply and the demand sides of the problem are subject to uncertainty,
some situations may occur in which the demand exceeds the supply capacity. This introduction will be
limited to treating single limit state function representations of a single subscenario. When multiple failure
modes are present, a similar slightly modified methodology can be used. The failure mode describes one
manner in which the system can fail, i.e. when at least one person in a hazard location is unable to
evacuate. In each subscenario the failure modes are described by the limit state functions.
Let the random variables be the supply capacity (R) and the demand requirement (S), and define the safety
margin (M) by,

G = M = R - S    [5.12]

The objective of the analysis is to determine the probability of the failure event R < S,

P(R < S) = p_{u,i}    [5.13]

If the probability density functions of R and S are known and if R and S are statistically independent, the
probability of failure of the system may be derived as,

p_{u,i} = \int_0^{\infty} F_R(s) \, f_S(s) \, ds    [5.14]

where F denotes the cumulative distribution function and f the probability density function. If the variables R
and S are correlated, the probability of failure can be derived from the joint probability density function
fR,S(r, s). There are, however, only a few cases in which the joint probability density function can be derived
analytically. In other cases it can be derived by numerical integration methods or with a Monte Carlo
sampling technique. There is still a need for a simple method to estimate the reliability of systems described
by one limit state function. One such method is the First Order Second Moment (FOSM) method. The limit
state equation is approximated by a first-order Taylor expansion and the method uses the first two moments,
i.e. the mean and the standard deviation. For design purposes, the First Order Second Moment method can be
used on different levels, depending on the amount of information available. In the literature concerning the
reliability of structural safety, four levels can be identified that are directly or indirectly linked to First Order
Second Moment methods (Thoft-Christensen et al., 1982):
(1) Level 1 – Deterministic method, the probability of failure is not derived directly but the reliability is
expressed in terms of one characteristic value and safety factors or partial coefficients. This is normally
the level at which design is carried out.
(2) Level 2 – The probability of failure can be approximated by describing the random variables with two
parameters, usually the mean value and the standard deviation. No consideration is taken of the type of
distribution. This level is used when information regarding the statistical data is limited and the
knowledge regarding the distribution type is lacking (First Order Second Moment method).
(3) Level 3 – Analysis on this level considers the type of random variable distribution. The “true” probability
of failure can be derived by numerical methods. If the variables are normally or lognormally distributed,
noncorrelated and the limit state function is linear, exact analytical methods are available. Otherwise, the
probability of failure will be approximated.
(4) Level 4 – On this level, economical aspects are also considered in the analysis in terms of a cost-benefit
analysis.

The analysis in this document will be executed on levels 2 and 3. Higher order levels can be used to validate
lower level methods.

The following condensed presentation of the First Order Second Moment method will be limited to a Level 2
analysis using a nonlinear limit state equation, for noncorrelated variables. Correlated variables and higher
order analysis levels can be treated similarly and the reader is referred to more detailed references
(Thoft-Christensen et al., 1982; Ang et al., 1984; Madsen et al., 1986). The reliability or measure of safety is
defined by the reliability index (β). This contains information regarding both the mean value and the
standard deviation of the safety margin. There are different definitions of the reliability index (β). The first
was introduced by Cornell in the late 1960s (Cornell, 1969). The mean and the standard deviation of the
margin can be derived as,

\mu_M = \mu_R - \mu_S    [5.15]

and

\sigma_M = \sqrt{\sigma_R^2 + \sigma_S^2}    [5.16]

The reliability index (β) is defined by Cornell as,

\beta = \mu_M / \sigma_M    [5.17]
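
The following short sketch illustrates a Level 2 calculation of the Cornell reliability index for the linear safety
margin M = R − S, using Equations [5.15] to [5.17]. The mean values and standard deviations are assumed,
illustrative figures; the last step additionally converts β into a probability of failure under the further
assumption of normally distributed, uncorrelated variables and a linear limit state.

import math

# Illustrative (assumed) second-moment data for supply R and demand S.
mu_R, sigma_R = 300.0, 45.0   # e.g. available time: mean and standard deviation
mu_S, sigma_S = 180.0, 60.0   # e.g. escape time: mean and standard deviation

# Safety margin M = R - S (Equations [5.15] and [5.16]).
mu_M = mu_R - mu_S
sigma_M = math.sqrt(sigma_R**2 + sigma_S**2)

# Cornell reliability index (Equation [5.17]).
beta = mu_M / sigma_M

# For a linear limit state in normally distributed, uncorrelated variables,
# the probability of failure follows from the standard normal CDF: p_f = Phi(-beta).
p_f = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

print(f"beta = {beta:.3f}, p_f = {p_f:.3e}")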

Extended Quantitative Risk Analysis


The Standard Quantitative Risk Analysis (SQRA) is performed without explicitly considering the uncertainty
which is inevitably present in each variable. Instead, the variables are assigned values which, for example,
are on the safe side, i.e. conservative estimates which will cover the credible worst cases. Other possible
values that can be used in the Standard Quantitative Risk Analysis are the most likely values. The results
from such an analysis are usually presented as a risk profile, at least for the societal risk measure, but such
profiles do not contain any information on the uncertainty. If one wishes to know how certain the calculated
risk profiles are, the uncertainties in the variables involved must also be considered. To obtain this
information, risk analysis, according to the Standard Quantitative Risk Analysis method, should be combined
with an uncertainty analysis. Formalising this methodology results in the Extended Quantitative Risk Analysis
(EQRA). The Extended Quantitative Risk Analysis can be used to express the degree of credibility in the
resulting median risk profile by complementing the profile with confidence bounds. Similarly, it is possible to
state the degree of accomplishment of defined goals, for example, expressed in terms of tolerable risk levels
defined by society. The procedure for performing an Extended Quantitative Risk Analysis is similar to that for
the Standard Quantitative Risk Analysis. As the variables are not constant but are expressed in terms of
frequency distributions, the propagation of uncertainty must be modelled for all subscenarios
simultaneously. Simply, the process can be seen as a Standard Quantitative Risk Analysis which is repeated
a large number of times. For each new iteration, the variables are assigned new values according to the
frequency distribution. This results in a new risk profile for each iteration, providing a family of risk profiles.
The family of risk profiles can be used to describe the uncertainty inherent in the resulting risk measure.
The technique employing quadruplets can also be used for the Extended Quantitative Risk Analysis. The
information concerning the state of knowledge of the variables must be included in the representation of
likelihood or probability, exposure, consequences or loss criticality, and safety level, i.e. the branch
probability, the exposure, the loss criticality and the safety level are all subject to uncertainty. The societal risk resulting
from the Extended Quantitative Risk Analysis can be expressed in terms of a family of risk profiles. It is clear
that the information is very extensive. Therefore, alternative presentation methods may have to be used in
order to be able to interpret the information. A better method is to present the societal risk profiles in terms
of the median or mean risk profile and to complement these with relevant confidence bounds. The
confidence interval can, for example, be the 80% interval. The confidence limits are constructed from the
family of risk profiles in the following manner.
Uncertainty is normally associated with both the subscenario outcome probability and with the description of
the subscenario consequence. In defining the bounds of the analysis some of these uncertainty variables
may be considered less important. Situations can occur where the subscenario probability (pi) can be treated
as a deterministic value. This can be done if these probabilities are known to a high degree of confidence. As
a consequence of this, the extended analysis can be divided into two subcategories depending on which
variables are assumed to be random. The first subcategory only considers the uncertainty in the description
of the consequences and treats the branch probabilities as deterministic values; when the branch
probabilities are fixed, they do not change between iterations. The second subcategory considers the
uncertainties in both branch probability and consequence. The total uncertainty in the risk profile increases
when the uncertainties in branch probability, exposure, consequence and safety level are all included. Both
subcategories can, however, be seen as Extended Quantitative Risk Analysis procedures. In the same
manner as for the Standard Quantitative Risk Analysis, the average risk can be calculated. But as the
variables are subject to uncertainty the average risk will also be subject to uncertainty and will consequently
be presented as a distribution. Each iteration will generate one sample of the average. These average risk
values will form the distribution of the average risk.
When a risk analysis is combined with an uncertainty analysis, it is possible to consider the uncertainty in the
individual risk measure. Some combinations of the variables used to derive the consequence in a
subscenario will lead to conditions resulting in fatalities or blocked escape routes. Similarly, due to
randomness, some subscenarios will not always contribute to the individual risk measure. Therefore, there
will be a degree of uncertainty in the individual risk originating from variable uncertainty. The individual risk
resulting from the Extended Quantitative Risk Analysis can be expressed in terms of a distribution, for
example a cumulative distribution function (CDF), instead of just one single value. The cumulative
distribution function (CDF) can be condensed into a more easily comparable single value, still including
information regarding the uncertainty. The distribution shows, however, the uncertainty in individual risk in a
more complete manner. The condensed individual risk single value can be obtained as the mean value from
the distribution of individual risk.

IDENTIFYING THE RISKS


The identification of risks is best done by means of a brainstorming session. The purpose of the session
should be purely the identification of risks, and a description of them in clear and unambiguous language.
There should be no attempt to quantify risks at this point, other than inviting the group to consider the
likelihood of occurrence, the exposure to the hazard, the consequences (loss criticality) of each outcome,
and the safety level for each risk. Each of these should be graded on a scale from negligible to high or very
high, and these assessments are used to prioritise the quantification of the risks later in the process. We
recommend that the risks are not quantified numerically at the same time as they are identified, because
quantification is a complicated process, and care must be taken to ensure that experts form their own views
after some thought. There is a danger that the group dynamics can give rise to a conformist point of
view, and thus a simplification of treatment and underestimation of the level of risk. This is particularly the
case if the quantification is done without sufficient preparation and forethought, and too early in the
process. The identification of risks is less prone to these problems, and the creative benefits of group work
outweigh the dangers. In selecting who should attend, it is often a good idea to choose those individuals
who will help in quantifying the risks in the next phase. Also, in order to achieve buy-in to the risk analysis, it
can also be helpful to include those to whom the final output report is directed, e.g. senior managers, in at
least one session. Some ideas as to who might be involved are given below:
(1) Construction experts, such as architects, designers, and quantity surveyors;
(2) Project managers;
(3) Operational managers;
(4) Technical consultants, where specific technical issues are relevant;
(5) Financial and legal advisors;
(6) The risk analyst.

Before beginning the risk identification, it is useful to explain clearly what the purpose of the exercise is, how
the risks will be used and the opportunities that the participants will have to review and modify the output
from the session. A useful tool to help structure the thinking is a list of typical risks to which the project may
be exposed. There are a number of issues that should be borne in mind during the quantification process,
namely:
(1) The nature of the variables;
(2) The dangers of double counting risks;
(3) The danger of missing important risks out;
(4) The inclusion or not of rare events.

THEORETICAL BACKGROUND TO QUANTIFYING RISKS


Which distribution is appropriate? A probability distribution describes the probability that a variable will have
a given value or occur within a given range. There are three ways of classifying probabilistic risk
distributions:
(1) Continuous or discrete – Smooth profiles, in which any value within the limits can occur, are described
as continuous, whereas if the variable can only represent discrete items, for example the number of
warehouses in use, a discrete distribution is more appropriate;
(2) Bounded or unbounded – Unbounded distributions extend to minus infinity or plus infinity, for example a
normal distribution. Although this can appear ridiculous, the actual probability of a value lying a large
distance from the mean may be vanishingly small;
(3) Parametric or non-parametric – A parametric distribution is one that has been theoretically derived, for
example an exponential distribution, after making assumptions about the nature of the process that is
being modelled. Nonparametric distributions are those that have been artificially created, for example
triangular distributions.

If historical empirical data are available for a variable, these can be analysed to determine the correct
distribution to represent the uncertainty in the variable. Essentially there are two approaches to using
historical data:
(1) Fitting an empirical distribution, in which the histogram of the empirical data is itself used as the
probability distribution;
(2) Fitting a theoretical distribution, a distribution, such as normal, is used to represent the data. The
parameters that describe the distribution (for example, the mean and standard deviation for a normal)
must then be determined from the data. There are various sophisticated statistical techniques for doing
this, which are beyond the scope of this document; a brief sketch of both approaches is given after this list.
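
The sketch below illustrates the two approaches on an assumed set of historical observations: the histogram
of the data is computed as an empirical description, and two candidate theoretical distributions (normal and
lognormal) are fitted and compared with a goodness-of-fit test. The data, the candidate distributions and the
use of the Kolmogorov–Smirnov test are illustrative choices, not prescriptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
# Assumed historical record, e.g. 200 observed monthly downtimes in hours.
data = rng.gamma(shape=2.0, scale=12.0, size=200)

# (1) Empirical approach: use the histogram / empirical distribution of the data directly.
hist, edges = np.histogram(data, bins=20, density=True)

# (2) Theoretical approach: fit parametric distributions and use one of them instead.
mu, sigma = stats.norm.fit(data)                       # normal: mean and standard deviation
shape, loc, scale = stats.lognorm.fit(data, floc=0)    # lognormal, location fixed at 0

# One simple (illustrative) way to compare candidate fits is a goodness-of-fit test.
ks_norm = stats.kstest(data, "norm", args=(mu, sigma))
ks_logn = stats.kstest(data, "lognorm", args=(shape, loc, scale))
print(f"KS p-value, normal fit:    {ks_norm.pvalue:.3f}")
print(f"KS p-value, lognormal fit: {ks_logn.pvalue:.3f}")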

Historical data about a variable are very useful, and using them would seem to provide a more accurate
assessment of the uncertainty than asking for expert opinion. However, caution should be exercised when
doing this: the implicit assumption made when using historical data is that the future will continue in the
same way that the past has. In contrast to the use of data to determine the distribution, we often rely on
expert opinion to describe the uncertainty. In these cases, it is normal to use non-parametric distributions:
although they are rarely theoretically justified, their simplicity and immediately intuitive nature, together with
their flexibility, often make them the most appropriate choice.

What is Correlation?
Some risks are mutually independent: the occurrence of either is independent of the occurrence of the
other. Others are correlated: that is, the state of one variable gives us information about the likely
occurrence of another. A frequent error in uncertainty and risk analysis is to ignore the correlation between
risks. This results in an under-estimation of the overall level of risk. It can also result in scenarios arising that
could not in practice occur. Correlation can result from one variable being directly influenced by another.
Correlation is one of the most difficult aspects of the quantification of risk. It is quantified through the
correlation coefficient (r) which can vary between -1 and +1 depending upon the level of correlation. Three
important values of the correlation coefficient are:
(1) r = +1 signifies that two variables are perfectly positively correlated, in other words the two variables
always move together;
(2) r = 0 signifies that the two variables are completely independent;
(3) r = -1 represents perfect negative correlation, where the two variables always move in opposite
directions.

The exact definition of correlation is complicated, and indeed there are many of them. The two most
common are the Pearson's and rank order correlation coefficients. The Pearson's correlation coefficient is a
measure of the degree of linearity between two variables, or the amount of scatter if one variable was
plotted against another. In other words a correlation coefficient of +1 means not only that two variables
move together, but also that they move together linearly. The disadvantage of Pearson's correlation
coefficient is that if the relationship is nonlinear it understates the strength of the association. If one variable
is always the square of another, we would expect a perfect correlation (the two variables always move
together), yet the Pearson coefficient is less than +1 because the relationship is not a straight line. This
problem is addressed by using rank order correlation. In rank order correlation, the two variables are ranked
and it is the Pearson's correlation coefficient of the two rankings that is then computed. There is an important
distinction between correlation and dependency. An example of dependency is where one event can only
occur provided another has. This should not be modelled as a 100% positive correlation, as this makes the
additional assumption that if the first should occur, the second will definitely occur.
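
The following small example illustrates the difference between the two coefficients for the square
relationship mentioned above; the data are synthetic, and the calculation relies on the correlation functions
available in scipy.stats.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)
x = rng.uniform(0.0, 10.0, size=1_000)
y = x ** 2          # a perfect, but nonlinear, monotonic relationship

pearson_r, _ = stats.pearsonr(x, y)     # measures linear association only
spearman_r, _ = stats.spearmanr(x, y)   # rank order correlation of the two rankings

print(f"Pearson r  = {pearson_r:.3f}   (< 1, because the relationship is not linear)")
print(f"Spearman r = {spearman_r:.3f}  (= 1, because the ranks move perfectly together)")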

Separating Risks Out


A key technique in getting to a more accurate quantification of risk is known as disaggregation. This means
separating risks out into logical (uncorrelated) components. This has the advantage of making the overall
result less dependent upon the estimate of one critical component. If, after some preliminary risk
analysis, the overall level of risk is found to be overwhelmingly dependent on one or two risk elements,
where possible these should be separated out into components. It is useful to remember when doing this
that the total variance, or standard deviation (the dispersion around the mean) squared, of a number of
uncorrelated variables is given by the sum of the individual variances. This is useful as a check on the order
of magnitude of the combined effect of the disaggregated risks; there is a danger in disaggregation that
when the risks are recombined the size of the total risk can be very different to the original risk that was
disaggregated. If this is the case, it is necessary to understand why it has happened, and determine which
approach is the most valid representation of the uncertainty.
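
A minimal sketch of this order-of-magnitude check is shown below, using assumed standard deviations for
three disaggregated, uncorrelated components.

import math

# Assumed standard deviations (e.g. in monetary units) of three uncorrelated
# components obtained by disaggregating one large risk element.
component_sd = [40.0, 25.0, 15.0]

# For uncorrelated variables the variances add, so the combined standard
# deviation is the square root of the sum of the squared component values.
combined_sd = math.sqrt(sum(sd ** 2 for sd in component_sd))

original_sd = 50.0   # assumed spread estimated before disaggregation
print(f"Combined SD of components: {combined_sd:.1f} (original estimate: {original_sd:.1f})")
# A large mismatch between the two figures signals that the disaggregation (or the
# original estimate) should be revisited before the risks are recombined.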

THE EXPRESSION OF UNCERTAINTY IN RISK MEASUREMENT


This document is primarily concerned with the expression of uncertainty in the measurement of a well-
defined physical quantity – the measurand – that can be characterized by an essentially unique value. If the
phenomenon of interest can be represented only as a distribution of values or is dependent on one or more
parameters, such as time, then the measurands required for its description are the set of quantities
describing that distribution or that dependence. This text is also applicable to evaluating and expressing the
uncertainty associated with the conceptual design and theoretical analysis of experiments, methods of
measurement, and complex components and systems. Because a measurement result and its uncertainty
may be conceptual and based entirely on hypothetical data, the term “result of a measurement” as used in
this text should be interpreted in this broader context. This document provides general rules for evaluating
and expressing uncertainty in measurement rather than detailed, technology-specific instructions. Further, it
does not discuss how the uncertainty of a particular measurement result, once evaluated, may be used for
different purposes, for example, to draw conclusions about the compatibility of that result with other similar
results, to establish tolerance limits in a manufacturing process, or to decide if a certain course of
action may be safely undertaken. It may therefore be necessary to develop particular standards that deal
with the problems peculiar to specific fields of measurement or with the various uses of quantitative
expressions of uncertainty.
The word “uncertainty” means doubt, and thus in its broadest sense “uncertainty of measurement” means
doubt about the validity of the result of a measurement. Because of the lack of different words for this
general concept of uncertainty and the specific quantities that provide quantitative measures of the concept,
for example, the standard deviation, it is necessary to use the word “uncertainty” in these two different
senses. The formal definition of the term “uncertainty of measurement” developed for use in this document
is as follows: uncertainty (of measurement) is a parameter1, associated with the result of a measurement, that
characterizes the dispersion of the values that could reasonably be attributed to the measurand. The
definition of uncertainty of measurement given above is an operational one that focuses on the
measurement result and its evaluated uncertainty. However, it is not inconsistent with other concepts of
uncertainty of measurement, such as:
(1) A measure of the possible error in the estimated value of the measurand as provided by the result of a
measurement;
(2) An estimate characterizing the range of values within which the true value of a measurand lies.

Although these two traditional concepts are valid as ideals, they focus on unknowable quantities: the “error”
of the result of a measurement and the “true value” of the measurand (in contrast to its estimated value),
respectively. Nevertheless, whichever concept of uncertainty is adopted, an uncertainty component is always
evaluated using the same data and related information.

Basic Concepts
The objective of a measurement is to determine the value of the measurand, that is, the value of the
particular quantity to be measured. A measurement therefore begins with an appropriate specification of the
measurand, the method of measurement, and the measurement procedure.

1 Uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the
statistical distribution of the results of series of measurements and can be characterized by experimental standard deviations. The other
components, which also can be characterized by standard deviations, are evaluated from assumed probability distributions based on
experience or other information. It is understood that the result of the measurement is the best estimate of the value of the
measurand, and that all components of uncertainty, including those arising from systematic effects, such as components associated
with corrections and reference standards, contribute to the dispersion.

In general, the result of a
measurement is only an approximation or estimate of the value of the measurand and thus is complete only
when accompanied by a statement of the uncertainty of the estimate. In practice, the required specification
or definition of the measurand is dictated by the required accuracy of measurement. The measurand should
be defined with sufficient completeness with respect to the required accuracy so that for all practical
purposes associated with the measurement its value is unique. It is in this sense that the expression “value
of the measurand” is used in this document. In many cases, the result of a measurement is determined on
the basis of series of observations obtained under repeatability conditions. Variations in repeated
observations are assumed to arise because influence quantities that can affect the measurement result are
not held completely constant. The mathematical model of the measurement that transforms the set of
repeated observations into the measurement result is of critical importance because, in addition to the
observations, it generally includes various influence quantities that are inexactly known. This lack of
knowledge contributes to the uncertainty of the measurement result, as do the variations of the repeated
observations and any uncertainty associated with the mathematical model itself.

Errors, Effects, and Corrections


In general, a measurement has imperfections that give rise to an error in the measurement result.
Traditionally, an error is viewed as having two components, namely, a random component and a systematic
component. Random error2 presumably arises from unpredictable or stochastic temporal and spatial
variations of influence quantities. The effects of such variations, hereafter termed random effects, give rise
to variations in repeated observations of the measurand. Although it is not possible to compensate for the
random error of a measurement result, it can usually be reduced by increasing the number of observations;
its expectation or expected value is zero. Systematic error, like random error, cannot be eliminated but it too
can often be reduced. If a systematic error arises from a recognized effect of an influence quantity on a
measurement result, hereafter termed a systematic effect, the effect can be quantified and, if it is significant
in size relative to the required accuracy of the measurement, a correction or correction factor can be applied
to compensate for the effect. It is assumed that, after correction, the expectation or expected value of the
error arising from a systematic effect is zero. The uncertainty of a correction applied to a measurement
result to compensate for a systematic effect is not the systematic error (often termed bias) in the
measurement result due to the effect. It is instead a measure of the uncertainty of
the result due to incomplete knowledge of the required value of the correction. The error arising from
imperfect compensation of a systematic effect cannot be exactly known. The terms “error” and “uncertainty”
should be used properly and care taken to distinguish between them. It is assumed that the result of a
measurement has been corrected for all recognized significant systematic effects and that every effort has
been made to identify such effects.

Uncertainty in Measurement
The uncertainty of the result of a measurement reflects the lack of exact knowledge of the value of the
measurand. The result of a measurement after correction for recognized systematic effects is still only an
estimate of the value of the measurand because of the uncertainty arising from random effects and from
imperfect correction of the result for systematic effects. The result of a measurement (after correction) can
unknowably be very close to the value of the measurand (and hence have a negligible error) even though it
may have a large uncertainty. Thus the uncertainty of the result of a measurement should not be confused
with the remaining unknown error.

2 The experimental standard deviation of the arithmetic mean or average of a series of observations is not the random error of the
mean, although it is so designated in some publications. It is instead a measure of the uncertainty of the mean due to random effects.
The exact value of the error in the mean arising from these effects cannot be known. In this document, great care is taken to
distinguish between the terms “error” and “uncertainty.” They are not synonyms, but represent completely different concepts; they
should not be confused with one another or misused.

In practice, there are many possible sources of uncertainty in a
measurement, including:
(1) Incomplete definition of the measurand;
(2) Imperfect realization of the definition of the measurand;
(3) Nonrepresentative sampling – the sample measured may not represent the defined measurand;
(4) Inadequate knowledge of the effects of environmental conditions on the measurement or imperfect
measurement of environmental conditions;
(5) Personal bias in reading analogue instruments;
(6) Finite instrument resolution or discrimination threshold;
(7) Inexact values of measurement standards and reference materials;
(8) Inexact values of constants and other parameters obtained from external sources and used in the data-
reduction algorithm;
(9) Approximations and assumptions incorporated in the measurement method and procedure;
(10) Variations in repeated observations of the measurand under apparently identical conditions.

These sources are not necessarily independent. Of course, an unrecognized systematic effect cannot be
taken into account in the evaluation of the uncertainty of the result of a measurement but contributes to its
error. Recommendation INC-1 (1980) of the Working Group on the Statement of Uncertainties groups
uncertainty components into two categories based on their method of evaluation, “A” and “B”. These
categories apply to uncertainty and are not substitutes for the words “random” and “systematic”3. The
uncertainty of a correction for a known systematic effect may in some cases be obtained by a “Type A”
evaluation while in other cases by a “Type B” evaluation, as may the uncertainty characterizing a random
effect. The purpose of the “Type A” and “Type B” classification is to indicate the two different ways of
evaluating uncertainty components and is for convenience of discussion only; the classification is not meant
to indicate that there is any difference in the nature of the components resulting from the two types of
evaluation. Both types of evaluation are based on probability distributions, and the uncertainty components
resulting from either type are quantified by variances or standard deviations.
The estimated variance (u2) characterizing an uncertainty component obtained from a Type A evaluation is
calculated from a series of repeated observations and is the familiar statistically estimated variance s2. The
estimated standard deviation (u), the positive square root of u2, is thus u = s and for convenience is
sometimes called a “Type A” standard uncertainty. For an uncertainty component obtained from a “Type B”
evaluation, the estimated variance u2 is evaluated using available knowledge, and the estimated standard
deviation u is sometimes called a “Type B” standard uncertainty. Thus a “Type A” standard uncertainty is
obtained from a probability density function derived from an observed frequency distribution, while a “Type
B” standard uncertainty is obtained from an assumed probability density function based on the degree of
belief that an event will occur (often called subjective probability).

3 In some publications uncertainty components are categorized as “random” and “systematic” and are associated with errors arising
from random effects and known systematic effects, respectively. Such categorization of components of uncertainty can be ambiguous
when generally applied. For example, a “random” component of uncertainty in one measurement may become a “systematic”
component of uncertainty in another measurement in which the result of the first measurement is used as an input datum. Categorizing
the methods of evaluating uncertainty components rather than the components themselves avoids such ambiguity. At the same time, it
does not preclude collecting individual components that have been evaluated by the two different methods into designated groups to be
used for a particular purpose.

Both approaches employ recognized
interpretations of probability.
The standard uncertainty of the result of a measurement, when that result is obtained from the values of a
number of other quantities, is termed combined standard uncertainty and denoted by uc. It is the estimated
standard deviation associated with the result and is equal to the positive square root of the combined
variance obtained from all variance and covariance components, however evaluated, using what is termed in
this document the law of propagation of uncertainty. To meet the needs of some industrial and commercial
applications, as well as requirements in the areas of health and safety, an expanded uncertainty (U) is
obtained by multiplying the combined standard uncertainty uc by a coverage factor (k)4. The intended
purpose of expanded uncertainty is to provide an interval about the result of a measurement that may be
expected to encompass a large fraction of the distribution of values that could reasonably be attributed to
the measurand. The choice of the factor k, which is usually in the range 2 to 3, is based on the coverage
probability or level of confidence required of the interval.

Practical Considerations
If all of the quantities on which the result of a measurement depends are varied, its uncertainty can be
evaluated by statistical means. However, because this is rarely possible in practice due to limited time and
resources, the uncertainty of a measurement result is usually evaluated using a mathematical model of the
measurement and the law of propagation of uncertainty. Thus implicit in this document is the assumption
that a measurement can be modeled mathematically to the degree imposed by the required accuracy of the
measurement. Because the mathematical model may be incomplete, all relevant quantities should be varied
to the fullest practicable extent so that the evaluation of uncertainty can be based as much as possible on
observed data. Whenever feasible, the use of empirical models of the measurement founded on long-term
quantitative data, and the use of check standards and control charts that can indicate if a measurement is
under statistical control, should be part of the effort to obtain reliable evaluations of uncertainty. The
mathematical model should always be revised when the observed data, including the result of independent
determinations of the same measurand, demonstrate that the model is incomplete. A well-designed
experiment can greatly facilitate reliable evaluations of uncertainty and is an important part of the art of
measurement. In order to decide if a measurement system is functioning properly, the experimentally
observed variability of its output values, as measured by their observed standard deviation, is often
compared with the predicted standard deviation obtained by combining the various uncertainty components
that characterize the measurement. In such cases, only those components (whether obtained from Type A
or Type B evaluations) that could contribute to the experimentally observed variability of these output values
should be considered. Such an analysis may be facilitated by gathering those components that contribute to
the variability and those that do not into two separate and appropriately labeled groups.
In some cases, the uncertainty of a correction for a systematic effect need not be included in the evaluation
of the uncertainty of a measurement result. Although the uncertainty has been evaluated, it may be ignored
if its contribution to the combined standard uncertainty of the measurement result is insignificant. If the
value of the correction itself is insignificant relative to the combined standard uncertainty, it too may be
ignored. It often occurs in practice, especially in the domain of legal metrology, that a device is tested
through a comparison with a measurement standard and the uncertainties associated with the standard and
the comparison procedure are negligible relative to the required accuracy of the test. In such cases, because
the components of uncertainty are small enough to be ignored, the measurement may be viewed as
determining the error of the device under test.

4 The coverage factor (k) is always to be stated, so that the standard uncertainty of the measured quantity can be recovered for use in
calculating the combined standard uncertainty of other measurement results that may depend on that quantity.
The estimate of the value of a measurand provided by the result of a measurement is sometimes expressed
in terms of the adopted value of a measurement standard rather than in terms of the relevant unit of the
International System of Units (SI). In such cases the magnitude of the uncertainty ascribable to the
measurement result may be significantly smaller than when that result is expressed in the relevant
International System of Units unit. In effect, the measurand has been redefined to be the ratio of the value
of the quantity to be measured to the adopted value of the standard.

Evaluating Standard Uncertainty


In most cases a measurand (Y) is not measured directly, but is determined from other quantities (X1, X2,…,
Xn) through a functional relationship f,

Y = f(X_1, X_2, \ldots, X_n)    [5.18]

The input quantities (X1, X2,…, Xn) upon which the output quantity (Y) depends may themselves be viewed
as measurands and may themselves depend on other quantities, including corrections and correction factors
for systematic effects, thereby leading to a complicated functional relationship f that may never be written
down explicitly. Further, the functional relationship (f) may be determined experimentally or exist only as an
algorithm that must be evaluated numerically. The functional relationship (f) as it appears in this document
is to be interpreted in this broader context, in particular as that function which contains every quantity,
including all corrections and correction factors, that can contribute a significant component of uncertainty to
the measurement result. Thus, if data indicate that the functional relationship (f) does not model the
measurement to the degree imposed by the required accuracy of the measurement result, additional input
quantities must be included in functional relationship (f) to eliminate the inadequacy. This may require
introducing an input quantity to reflect incomplete knowledge of a phenomenon that affects the measurand.
The set of input quantities (X1, X2,…, Xn) may be categorized as:
(1) Quantities whose values and uncertainties are directly determined in the current measurement. These
values and uncertainties may be obtained from, for example, a single observation, repeated
observations, or judgement based on experience, and may involve the determination of corrections to
instrument readings and corrections for influence quantities, such as ambient temperature, barometric
pressure, and humidity;
(2) Quantities whose values and uncertainties are brought into the measurement from external sources,
such as quantities associated with calibrated measurement standards, certified reference materials, and
reference data obtained from handbooks.

An estimate of the measurand (Y), denoted by y, is obtained from Equation [5.18] using input estimates (x1,
x2,…, xn) for the values of the n quantities X1, X2,…, Xn. Thus the output estimate (y), which is the result of
the measurement, is given by,

y = f(x_1, x_2, \ldots, x_n)    [5.19]

In some cases the estimate y may be obtained from,

y = \bar{Y} = \frac{1}{n} \sum_{i=1}^{n} Y_i = \frac{1}{n} \sum_{i=1}^{n} f(X_{1,i}, X_{2,i}, \ldots, X_{n,i})    [5.20]

That is, y is taken as the arithmetic mean or average of n independent determinations Yi of Y, each
determination having the same uncertainty and each being based on a complete set of observed values of
the n input quantities Xi obtained at the same time. The estimated standard deviation associated with the
output estimate or measurement result (y) termed combined standard uncertainty and denoted by uc(y), is
determined from the estimated standard deviation associated with each input estimate (xi), termed standard
uncertainty and denoted by u(xi). Each input estimate (xi) and its associated standard uncertainty, u(xi), are
obtained from a distribution of possible values of the input quantity (Xi). This probability distribution may be
frequency based, that is, based on a series of observations of Xi, or it may be an a priori distribution. “Type
A” evaluations of standard uncertainty components are founded on frequency distributions while “Type B”
evaluations are founded on a priori distributions. It must be recognized that in both cases the distributions
are models that are used to represent the state of our knowledge.

Type A Evaluation of Standard Uncertainty


In most cases, the best available estimate of the expectation or expected value (µq) of a quantity q that
varies randomly and for which n independent observations (qi) have been obtained under the same
conditions of measurement, is the arithmetic mean or average of the n observations.

\bar{q} = \frac{1}{n} \sum_{i=1}^{n} q_i    [5.21]

Thus, for an input quantity (Xi) estimated from n independent repeated observations (Xi,k), the arithmetic
mean X̄i is used as the input estimate (xi) to determine the measurement result y; that is, xi = X̄i. The
individual observations (qi) differ in value because of random variations in the influence quantities, or
random effects. The experimental variance of the observations, which estimates the variance (σ2) of the
probability distribution of q, is given by,

s^2(q_i) = \frac{1}{n-1} \sum_{i=1}^{n} (q_i - \bar{q})^2    [5.22]

This estimate of the variance and its positive square root s(qi), termed the experimental standard deviation,
characterize the variability of the observed values (qi) or, more specifically, their dispersion about their
mean q̄. The best estimate of the variance of the mean is given by,

s^2(\bar{q}) = \frac{s^2(q_i)}{n}    [5.23]

The experimental variance of the mean s2(q̄) and the experimental standard deviation of the mean s(q̄),
equal to the positive square root of s2(q̄), quantify how well q̄ estimates the expectation µq of q, and either
may be used as a measure of the uncertainty of q̄. Thus, for an input quantity (Xi) determined from n
independent repeated observations (Xi,k), the standard uncertainty u(xi) of its estimate xi = X̄i is u(xi) =
s(X̄i), calculated according to Equation [5.23]. For convenience, u2(xi) = s2(X̄i) and u(xi) = s(X̄i) are
sometimes called a “Type A” variance and a “Type A” standard uncertainty, respectively. The number of
observations n should be large enough to ensure that q̄ provides a reliable estimate of the expectation µq of
the random variable q and that s2(q̄) provides a reliable estimate of the variance (see Equation [5.23]). If the
probability distribution of q is a normal distribution and n is small, the limited reliability of these estimates
can be taken into account through the t-distribution. Although the variance s2(q̄) is the more fundamental
quantity, the standard deviation s(q̄) is more convenient in practice because it has the same dimension as q
and a more easily comprehended value than that of the variance.
If the random variations in the observations of an input quantity are correlated, for example, in time, the
mean and experimental standard deviation of the mean may be inappropriate estimators of the desired
statistics. In such cases, the observations should be analysed by statistical methods specially designed to
treat a series of correlated, randomly-varying measurements. Such specialized methods are used to treat
measurements of frequency standards. However, it is possible that as one goes from short-term
measurements to long-term measurements of other metrological quantities, the assumption of uncorrelated
random variations may no longer be valid and the specialized methods could be used to treat these
measurements as well.
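
A minimal sketch of a Type A evaluation according to Equations [5.21] to [5.23] is given below; the ten
repeated observations are assumed, illustrative values.

import numpy as np

# Assumed repeated observations of one input quantity X_i under repeatability
# conditions (e.g. ten temperature readings in °C).
q = np.array([20.12, 20.07, 20.15, 20.09, 20.11, 20.08, 20.14, 20.10, 20.06, 20.13])
n = q.size

q_bar = q.mean()                         # arithmetic mean, Equation [5.21]
s2_q = q.var(ddof=1)                     # experimental variance, Equation [5.22]
s2_qbar = s2_q / n                       # variance of the mean, Equation [5.23]
u_xi = np.sqrt(s2_qbar)                  # Type A standard uncertainty u(x_i)

print(f"x_i = {q_bar:.4f}, u(x_i) = {u_xi:.4f}  (Type A, {n} observations)")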

Type B Evaluation of Standard Uncertainty


For an estimate (xi) of an input quantity (Xi) that has not been obtained from repeated observations, the
associated estimated variance u2(xi) or the standard uncertainty u(xi) is evaluated by scientific judgement
based on all of the available information on the possible variability of Xi. The pool of information may
include:
(1) Previous measurement data;
(2) Experience with or general knowledge of the behaviour and properties of relevant materials and
instruments;
(3) Manufacturer’s specifications;
(4) Uncertainties assigned to reference data taken from handbooks.

For convenience, u2(xi) and u(xi) evaluated in this way are sometimes called a “Type B” variance and a
“Type B” standard uncertainty, respectively. When xi is obtained from an a priori distribution, the associated
variance is appropriately written as u2(Xi), but for simplicity, u2(xi) and u(xi) are used throughout this
document. It should be recognized that a “Type B” evaluation of standard uncertainty can be as reliable as a
“Type A” evaluation, especially in a measurement situation where a “Type A” evaluation is based on a
comparatively small number of statistically independent observations. The quoted uncertainty of xi is not
necessarily given as a multiple of a standard deviation. Instead, one may find it stated that the quoted
uncertainty defines an interval having a 90, 95, 95.45, 99 or 99.73 percent level of confidence. Unless
otherwise indicated, one may assume that a normal distribution was used to calculate the quoted
uncertainty, and recover the standard uncertainty of xi by dividing the quoted uncertainty by the appropriate
factor for the normal distribution. The factors corresponding to these five levels of confidence are 1.645,
1.960, 2, 2.576 and 3, respectively.
For a normal distribution with expectation (µ) and standard deviation (σ), the interval µ ± 3σ encompasses
approximately 99.73 percent of the distribution. Thus, if the upper bound (µ + a) and the lower bound
(µ − a), where a is the half-width of the interval, define 99.73 percent limits rather than 100 percent limits,
and Xi can be assumed to be approximately normally distributed rather than there being no specific
knowledge about Xi between the bounds, then we have,

u^2(x_i) = a^2 / 9    [5.24]

It is important not to “double-count” uncertainty components. If a component of uncertainty arising from a
particular effect is obtained from a “Type B” evaluation, it should be included as an independent component
of uncertainty in the calculation of the combined standard uncertainty of the measurement result only to the
extent that the effect does not contribute to the observed variability of the observations. This is because the
uncertainty due to that portion of the effect that contributes to the observed variability is already included in
the component of uncertainty obtained from the statistical analysis of the observations. The discussion of
“Type B” evaluation of standard uncertainty here is meant only to be indicative. Further, evaluations of
uncertainty should be based on quantitative data to the maximum extent possible.
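
The sketch below illustrates two common Type B evaluations: recovering a standard uncertainty from a
quoted expanded uncertainty at a stated level of confidence, and applying Equation [5.24] when only bounds
are known. The numerical values are assumed, and the rectangular-distribution case is included only for
comparison; it is not part of Equation [5.24].

import math

# Case 1: a calibration certificate quotes an expanded uncertainty of 0.24 mg at a
# 95 percent level of confidence; assuming normality, divide by 1.960.
u_cert = 0.24 / 1.960

# Case 2: only bounds are known. If mu - a and mu + a are taken as 99.73 percent
# limits of an approximately normal quantity, Equation [5.24] gives u^2 = a^2 / 9.
a = 0.50                     # assumed half-width of the interval
u_bounds_normal = math.sqrt(a ** 2 / 9.0)

# For comparison only: with no knowledge other than the bounds, a rectangular
# distribution is often assumed instead, giving u = a / sqrt(3).
u_bounds_rect = a / math.sqrt(3.0)

print(f"u from certificate:             {u_cert:.3f}")
print(f"u from 99.73 % bounds (normal): {u_bounds_normal:.3f}")
print(f"u from bounds (rectangular):    {u_bounds_rect:.3f}")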

Determining Combined Standard Uncertainty


This subclause treats the case where all input quantities are independent. The standard uncertainty of y,
where y is the estimate of the measurand Y and thus the result of the measurement, is obtained by
appropriately combining the standard uncertainties of the input estimates x1, x2,…, xn. This combined
standard uncertainty of the estimate y is denoted by uc(y). The combined standard uncertainty uc(y) is the
positive square root of the combined variance uc2(y), which is given by,

u_c^2(y) = \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i)    [5.25]

where f is the function given in Equation [5.18]. Each u(xi) is a standard uncertainty evaluated as described
in the “Type A” uncertainty evaluation or in the “Type B” uncertainty evaluation. The combined standard
uncertainty uc(y) is an estimated standard deviation and characterizes the dispersion of the values that could
reasonably be attributed to the measurand Y. When the nonlinearity of f is significant, higher-order terms in
the Taylor series expansion must be included in the expression for uc2(y).
f f
The partial derivatives are equal to evaluated at Xi = xi. These derivatives, often called sensitivity
x i X i
coefficients, describe how the output estimate y varies with changes in the values of the input estimates (x1,
x2,…, xn). In particular, the change in y produced by a small change xi in input estimate xi is given by,

f
y i   x i [5.26]
x i

If this change is generated by the standard uncertainty of the estimate xi, the corresponding variation in y is
(∂f/∂xi)·u(xi). The combined variance uc2(y) can therefore be viewed as a sum of terms, each of which
represents the estimated variance associated with the output estimate y generated by the estimated
variance associated with each input estimate xi. This suggests writing Equation [5.25] as,

u_c^2(y) = \sum_{i=1}^{n} [c_i \, u(x_i)]^2 = \sum_{i=1}^{n} u_i^2(y)    [5.27]

where

c_i = \frac{\partial f}{\partial x_i}    [5.28]
f f
Strictly speaking, the partial derivatives are  evaluated at the expectations of the Xi. However, in
x i X i
practice, the partial derivatives are estimated by,

f f
 [5.29]
x i X i x1 , x 2 ,...,x n

Instead of being calculated from the function f (Equation [5.18]), the sensitivity coefficients ∂f/∂xi are
sometimes determined experimentally: one measures the change in Y produced by a change in a particular
Xi while holding the remaining input quantities constant. In this case, the knowledge of the function f (or of a
portion of it when only several sensitivity coefficients are so determined) is accordingly reduced to an
empirical first-order Taylor series expansion based on the measured sensitivity coefficients.
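
As an illustration of the law of propagation of uncertainty for uncorrelated inputs, the sketch below evaluates
a simple assumed measurement model (electrical power P = V²/R), estimates the sensitivity coefficients
numerically in the spirit of Equation [5.29], and combines the standard uncertainties according to Equations
[5.25] and [5.27]. The model and all numerical values are illustrative assumptions.

import math

# Assumed measurement model: electrical power P = V^2 / R, with independent
# input estimates and standard uncertainties (illustrative values only).
def f(V, R):
    return V ** 2 / R

x = {"V": 230.0, "R": 52.9}          # input estimates x_i
u = {"V": 0.5,   "R": 0.1}           # standard uncertainties u(x_i)

# Sensitivity coefficients c_i = df/dx_i, estimated here by central differences;
# they could equally be obtained analytically.
def sensitivity(name, h=1e-6):
    xp, xm = dict(x), dict(x)
    xp[name] += h
    xm[name] -= h
    return (f(**xp) - f(**xm)) / (2 * h)

# Law of propagation of uncertainty for uncorrelated inputs: u_c^2 = sum (c_i u(x_i))^2.
u_c = math.sqrt(sum((sensitivity(name) * u[name]) ** 2 for name in x))

print(f"y = {f(**x):.1f} W, combined standard uncertainty u_c(y) = {u_c:.2f} W")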

Correlated Input Quantities


Equation [5.25] and those derived from it, such as Equation [5.27], are valid only if the input quantities (Xi)
are independent or uncorrelated (the random variables, not the physical quantities, which are assumed to be
invariants). If some of the Xi are significantly correlated, the correlations must be taken into account. When
the input quantities are correlated, the appropriate expression for the combined variance associated with the
result of a measurement is,

u_c^2(y) = \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial f}{\partial x_i} \frac{\partial f}{\partial x_j} \, u(x_i, x_j)    [5.30]

where xi and xj are the estimates of Xi and Xj and u(xi,xj) = u(xj,xi) is the estimated covariance associated
with xi and xj. The degree of correlation between xi and xj is characterized by the estimated correlation
coefficient,


r(x_i, x_j) = \frac{u(x_i, x_j)}{u(x_i) \, u(x_j)}    [5.31]

where −1 ≤ r(xi, xj) ≤ +1. If the estimates xi and xj are independent, r(xi, xj) = 0, and a change in one does
not imply an expected change in the other. In terms of correlation coefficients, which are more readily
interpreted than covariances, the combined variance of Equation [5.30] may be written as,

u_c^2(y) = \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i) + 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{\partial f}{\partial x_i} \frac{\partial f}{\partial x_j} \, u(x_i) \, u(x_j) \, r(x_i, x_j)    [5.32]

Consider two arithmetic means q̄ and r̄ that estimate the expectations of two randomly varying quantities q
and r, and let q̄ and r̄ be calculated from n independent pairs of simultaneous observations of q and r made
under the same conditions of measurement. Then the covariance of q̄ and r̄ is estimated by,

s(\bar{q}, \bar{r}) = \frac{1}{n(n-1)} \sum_{i=1}^{n} (q_i - \bar{q})(r_i - \bar{r})    [5.33]

where qi and ri are the individual observations of the quantities q and r, and q̄ and r̄ are calculated from
the observations. If in fact the observations are uncorrelated, the calculated covariance is expected to be
near 0. Thus the estimated covariance of two correlated input quantities Xi and Xj that are estimated by the
means X̄i and X̄j, determined from independent pairs of repeated simultaneous observations, is given by
u(xi, xj) = s(X̄i, X̄j), with s(X̄i, X̄j) calculated according to Equation [5.33]. This application of Equation [5.33] is
a “Type A” evaluation of covariance. The estimated correlation coefficient of Xi and Xj is obtained from,

r(x_i, x_j) = \frac{s(\bar{X}_i, \bar{X}_j)}{s(\bar{X}_i) \, s(\bar{X}_j)}    [5.34]

There may be significant correlation between two input quantities if the same measuring instrument,
physical measurement standard, or reference datum having a significant standard uncertainty is used in their
determination. For example, if a certain thermometer is used to determine a temperature correction required
in the estimation of the value of input quantity Xi, and the same thermometer is used to determine a similar
temperature correction required in the estimation of input quantity Xj, the two input quantities could be
significantly correlated. However, if Xi and Xj in this example are redefined to be the uncorrected quantities
and the quantities that define the calibration curve for the thermometer are included as additional input
quantities with independent standard uncertainties, the correlation between Xi and Xj is removed.
Correlations between input quantities cannot be ignored if present and significant. The associated
covariances should be evaluated experimentally if feasible by varying the correlated input quantities, or by
using the pool of available information on the correlated variability of the quantities in question (Type B
evaluation of covariance). Insight based on experience and general knowledge is especially required when
estimating the degree of correlation between input quantities arising from the effects of common influences,
such as ambient temperature, barometric pressure, and humidity. Fortunately, in many cases, the effects of
such influences have negligible interdependence and the affected input quantities can be assumed to be
uncorrelated. However, if they cannot be assumed to be uncorrelated, the correlations themselves can be
avoided if the common influences are introduced as additional independent input quantities.


Determining Expanded Uncertainty


Recommendation INC-1 (1980) of the Working Group on the Statement of Uncertainties, on which this
document is based, advocates the use of the combined standard uncertainty, uc(y), as the parameter for
expressing quantitatively the uncertainty of the result of a measurement. Although uc(y) can be universally
used to express the uncertainty of a measurement result, in some commercial, industrial, and regulatory
applications, and when health and safety are concerned, it is often necessary to give a measure of
uncertainty that defines an interval about the measurement result that may be expected to encompass a
large fraction of the distribution of values that could reasonably be attributed to the measurand.

Expanded Uncertainty
The additional measure of uncertainty that meets the requirement of providing an interval of this kind is
termed expanded uncertainty and is denoted by U. The expanded uncertainty (U) is obtained by multiplying
the combined standard uncertainty, uc(y), by a coverage factor (k).

U = k \cdot u_c(y)   [5.35]

The result of a measurement is then conveniently expressed as,

Y = y \pm U   [5.36]

which is interpreted to mean that the best estimate of the value attributable to the measurand Y is y, and
that (y–U) to (y+U) is an interval that may be expected to encompass a large fraction of the distribution of
values that could reasonably be attributed to Y. Such an interval is also expressed as (y−U) ≤ Y ≤ (y+U). The
terms confidence interval and confidence level have specific definitions in statistics and are only applicable to
the interval defined by expanded uncertainty when certain conditions are met, including that all components
of uncertainty that contribute to uc(y) be obtained from “Type A” evaluations. Thus, in this document, the
word “confidence” is not used to modify the word “interval” when referring to the interval defined by
expanded uncertainty (U); and the term “confidence level” is not used in connection with that interval but
rather the term “level of confidence”. More specifically, expanded uncertainty is interpreted as defining an
interval about the measurement result that encompasses a large fraction p of the probability distribution
characterized by that result and its combined standard uncertainty, and p is the coverage probability or level
of confidence of the interval. Whenever practicable, the level of confidence (p) associated with the interval
defined by expanded uncertainty (U) should be estimated and stated. It should be recognized that multiplying
uc(y) by a constant provides no new information but presents the previously available information in a
different form. However, it should also be recognized that
in most cases the level of confidence (especially for values of p near 1) is rather uncertain, not only because
of limited knowledge of the probability distribution characterized by y and uc(y), particularly in the extreme
portions, but also because of the uncertainty of uc(y) itself.
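As a simple illustration of Equations [5.35] and [5.36], the sketch below reports a measurement result with its expanded uncertainty; the estimate y, the combined standard uncertainty and the coverage factor are assumed values.

# Reporting a measurement result with an expanded uncertainty
# (Equations [5.35] and [5.36]); y, u_c(y) and k are assumed values.
y = 12.47          # best estimate of the measurand
uc = 0.032         # combined standard uncertainty u_c(y)
k = 2              # coverage factor (approx. 95 % level of confidence when
                   # the distribution is approximately normal)

U = k * uc
print(f"Y = {y:.3f} +/- {U:.3f}  (k = {k})")
print(f"interval: [{y - U:.3f}, {y + U:.3f}]")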

Choosing A Coverage Factor


The value of the coverage factor (k) is chosen on the basis of the level of confidence required of the interval
(y–U) to (y+U). In general, the coverage factor will be in the range 2 to 3. However, for special applications
the coverage factor may be outside this range. Extensive experience with and full knowledge of the uses to
which a measurement result will be put can facilitate the selection of a proper value of the coverage factor


(k). Ideally, one would like to be able to choose a specific value of the coverage factor (k) that would
provide an interval,

Y = y \pm U = y \pm k \cdot u_c(y)   [5.37]

corresponding to a particular level of confidence (p) such as 95 or 99 percent; equivalently, for a given value
of the coverage factor (k), one would like to be able to state unequivocally the level of confidence associated
with that interval. However, this is not easy to do in practice because it requires extensive knowledge of the
probability distribution characterized by the measurement result y and its combined standard uncertainty,
uc(y). Although these parameters are of critical importance, they are by themselves insufficient for the
purpose of establishing intervals having exactly known levels of confidence. Recommendation INC-1 (1980)
does not specify how the relation between the coverage factor (k) and level of confidence (p) should be
established. However, a simpler approach is often adequate in measurement situations where the
probability distribution characterized by y and uc(y) is approximately normal and the effective degrees of
freedom of uc(y) is of significant size. When this is the case, which frequently occurs in practice, one can
assume that taking k = 2 produces an interval having a level of confidence of approximately 95 percent, and
that taking k = 3 produces an interval having a level of confidence of approximately 99 percent.
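Assuming an approximately normal distribution, the correspondence between the coverage factor (k) and the level of confidence (p) can be computed directly; the sketch below is illustrative only and requires SciPy.

# Relation between coverage factor k and level of confidence p for an
# approximately normal distribution (a sketch; requires SciPy).
from scipy.stats import norm

for p in (0.90, 0.95, 0.99):
    k = norm.ppf(0.5 + p / 2.0)           # two-sided coverage factor
    print(f"p = {p:.2f}  ->  k = {k:.3f}")

# Conversely, the exact coverage probabilities for k = 2 and k = 3
# (approximately 95.45 % and 99.73 % for a normal distribution):
for k in (2.0, 3.0):
    p = 2.0 * norm.cdf(k) - 1.0
    print(f"k = {k:.0f}  ->  p = {p:.4f}")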


RELIABILITY

In today's technological world nearly everyone depends upon the continued functioning of a wide array of
complex machinery and equipment for their everyday health, safety, mobility and economic welfare. We
expect our cars, computers, electrical appliances, lights, televisions, and so on, to function whenever we
need them – day after day, year after year. When they fail the results can be catastrophic: injury, loss of life
and costly lawsuits can occur. More often, repeated failure leads to annoyance, inconvenience and a lasting
customer dissatisfaction that can play havoc with the responsible company's marketplace position. It takes a
long time for a company to build up a reputation for reliability, and only a short time to be branded as
“unreliable” after shipping a flawed product. Continual assessment of new product reliability and ongoing
control of the reliability of everything shipped are critical necessities in today's competitive business arena.
Accurate prediction and control of reliability plays an important role in the profitability of a product. Service
costs for products within the warranty period or under a service contract are a major expense and a
significant pricing factor. Proper spare part stocking and support personnel hiring and training also depend
upon good reliability fallout predictions. On the other hand, missing reliability targets may invoke contractual
penalties and cost future business. Companies that can economically design and market products that meet
their customers' reliability expectations have a strong competitive advantage in today's marketplace.
Sometimes equipment failure can have a major impact on human safety and health. Automobiles, planes,
life support equipment, and power generating plants are a few examples. From the point of view of
“assessing product reliability”, we treat these kinds of catastrophic failures no differently from the failure
that occurs when a key parameter measured on a manufacturing tool drifts slightly out of specification,
calling for an unscheduled maintenance action. It is up to the reliability engineer (and the relevant
customer) to define what constitutes a failure in any reliability study. More resources (test time and test
units) should be planned for when an incorrect reliability assessment could negatively impact safety and
health.

FAILURE OR HAZARD RATE


The failure rate is defined for non-repairable populations as the (instantaneous) rate of failure of the
survivors to time (t) during the next instant of time. It is a rate per unit of time, similar in meaning to reading
a car speedometer at a particular instant and seeing 60 km/h. The next instant the failure rate may change
and the units that have already failed play no further role since only the survivors count. The failure rate (or
hazard rate) is denoted by h(t) and calculated from,

h(t) = \frac{f(t)}{1 - F(t)} = \frac{f(t)}{R(t)}   [6.01]

and it is the instantaneous (conditional) failure rate. The failure rate is sometimes called a “conditional
failure rate” since the denominator 1 − F(t), i.e. the population of survivors, converts the expression into a
conditional rate, given survival past time (t). Since h(t) is also equal to the negative of the derivative of ln R(t),

dlnR t 
ht    [6.02]
dt


we have the useful identity,

F(t) = 1 - \exp\left( - \int_0^t h(t) \, dt \right)   [6.03]

If we let

H(t) = \int_0^t h(t) \, dt   [6.04]

be the Cumulative Hazard Function (CHF), we then have,

Ft   1  exp Ht  [6.05]

One other useful identity that follows from these formulas is,

Ht    lnR t  [6.06]

It is also sometimes useful to define an average failure rate over any interval that “averages” the failure rate
over that interval. This rate, denoted by AFR(t1; t2), is a single number that can be used as a specification or
target for the population failure rate over that interval. If t1 is 0, it is dropped from the expression. Thus, for
example, AFR(40,000) would be the average failure rate for the population over the first 40,000 hours of
operation. The formula for calculating the AFR is,

lnR t 1   lnR t 2 
AFR t 2 ; t 1   [6.07]
t 2  t1

Proportional Hazards Model


The proportional hazards model, proposed by Cox (1972), has been used primarily in medical testing
analysis, to model the effect of secondary variables on survival. It is more like an acceleration model than a
specific life distribution model, and its strength lies in its ability to model and test many inferences about
survival without making any specific assumptions about the form of the life distribution model. Let z = {x,
y,...} be a vector of one or more explanatory variables believed to affect lifetime. These variables may be
continuous (like temperature in engineering studies, or dosage level of a particular drug in medical studies)
or they may be indicator variables with the value one if a given factor or condition is present, and zero
otherwise. Let the hazard rate for a nominal (or baseline) set z0 = {x0,y0,...} of these variables be given by
h0(t), with h0(t) denoting a legitimate hazard function (failure rate) for some unspecified life distribution
model. The proportional hazards model assumes we can write the changed hazard function for a new value
of z as,

h_z(t) = g(z) \cdot h_0(t)   [6.08]

In other words, changing z, the explanatory variable vector, results in a new hazard function that is
proportional to the nominal hazard function, and the proportionality constant is a function of z, g(z),


independent of the time variable (t). A common and useful form for g(z) is the log-linear model, which has
the equation,

gx   e ax [6.09]

for one variable, and the following equation,

gx , y   e ax by [6.10]

for two variables, etc.
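A minimal sketch of the proportional hazards model with a log-linear link follows; the baseline hazard (a constant-rate exponential model) and the coefficients a and b are assumed values used only to show how g(z) scales h0(t).

import math

# Proportional hazards with a log-linear link (Equations [6.08]-[6.10]);
# the baseline hazard and the coefficients a, b are assumed values.
def h0(t):
    """Baseline hazard: constant failure rate (exponential life model)."""
    return 1.0e-4              # failures per hour, illustrative

def g(x, y, a=0.8, b=-0.5):
    """Log-linear proportionality factor for two explanatory variables."""
    return math.exp(a * x + b * y)

def hz(t, x, y):
    """Hazard under the changed conditions z = {x, y}."""
    return g(x, y) * h0(t)

# Example: raise the first indicator variable x from 0 to 1 while y stays at 0
print("baseline hazard   :", h0(1000.0))
print("hazard at z={1,0} :", hz(1000.0, 1.0, 0.0))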

SAFETY RELIABILITY
Important factors in risk process and safety system selection and design are performance and reliability in
meeting permit requirements. Two approaches in risk process and safety system selection and design are
the use of arbitrary safety factors, and statistical analysis of risk treatment, risk quality and the probable
frequency of occurrence. The latter approach, termed the reliability concept, is preferred because it provides
a consistent basis for the analysis of uncertainty and a rational basis for the analysis of performance and
reliability. The application of the reliability concept to process selection and design is based on the material
presented below. Reliability of a safety system may be defined as the probability of adequate performance for
at least a specified period of time under specified conditions or, in terms of risk treatment and risk
performance, the percentage of time that risk performance meets the permit requirements. For each specific
case where the reliability concept is to be employed, the levels of reliability must be evaluated, including the
cost of safety system maintenance required to achieve specific levels of reliability, associated sustaining and
maintenance costs, and the cost of adverse environmental effects and risk impacts of a violation. Because of
the variations in risk quality performance, a safety system should be designed to maintain an average risk
value below the permit requirements.
The following question arises: “What mean value guarantees that a risk value is consistently less than a
specified limit with a certain reliability?”. The approach involves the use of a coefficient of reliability (COR) that
relates mean constituent values (or design values) to the standard that must be achieved on a probability
basis. The mean risk estimate value (x̄) may be obtained from the relationship,

\bar{x} = s \cdot \mathrm{COR}   [6.11]

where s is a fixed standard. The coefficient of reliability is deermined by,

\mathrm{COR} = \left( CV_x^2 + 1 \right)^{1/2} \exp\left\{ - Z_{1-\alpha} \left[ \ln\left( CV_x^2 + 1 \right) \right]^{1/2} \right\}   [6.12]

where CVx is the ratio of the standard deviation of the existing distribution (σx) to the mean value of the existing
distribution (x̄), and is also termed the coefficient of variation; Z1−α is the number of standard deviations
away from the mean of a standardized normal distribution; and 1−α is the cumulative probability of occurrence (reliability level).


Values of Z1−α for various cumulative probability levels (1−α) are given in Table 6.01. Selection of an
appropriate design value of CVx must be based on experience from actual risk data or from published risk data.

Table 6.01 – Values of the standardized normal distribution.

1−α (%)        Z1−α
99.9 3.090
99.0 2.326
98.0 2.054
95.0 1.645
92.0 1.405
90.0 1.282
80.0 0.842
70.0 0.525
60.0 0.253
50.0 0.000
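As an illustration of Equations [6.11] and [6.12], the sketch below computes the coefficient of reliability and the corresponding mean design value; the fixed standard s, the coefficient of variation and the 95 percent reliability level are assumed values, and COR simply denotes the coefficient of reliability.

import math

# Coefficient of reliability (Equation [6.12]) and mean design value
# (Equation [6.11]); s, CV_x and the reliability level are assumed values.
def coefficient_of_reliability(cv, z):
    return math.sqrt(cv ** 2 + 1.0) * math.exp(-z * math.sqrt(math.log(cv ** 2 + 1.0)))

s = 30.0            # fixed standard (permit limit), illustrative units
cv = 0.7            # coefficient of variation of the existing distribution
z = 1.645           # Z for a 95 % reliability level (see Table 6.01)

cor = coefficient_of_reliability(cv, z)
mean_design = cor * s
print("coefficient of reliability =", cor)
print("mean design value          =", mean_design)

With these illustrative numbers the required mean value is well below the permit limit, reflecting the margin needed to meet the limit 95 percent of the time.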


REFERENCES

@Risk, Risk analysis for spreadsheets. Palisade Corp., Newfield, 1994.


ACGIH (ed), 1994-1995. Threshold limit values for chemical substances and physical agents in the work.
Environmental and biological exposure indices with intended changes for 1994-1995.
AIChE, 1994. Guidelines for preventing human error in process safety. Center for Chemical Process Safety,
New York.
AIChE, 2000. Guidelines for chemical process risk analysis. Center for Chemical Process Safety, New York.
American Industrial Hygiene Association (AIHA). A strategy for occupational exposure assessment, edited by
N. C. Hawkins, S. K. Norwood and J. C. Rock. Akron: American Industrial Hygiene Association, 1991.
Andersson, P. Evaluation and mitigation of industrial fire hazards. Report 1015, Dept. of Fire Safety Eng.,
Lund University, Lund, 1997.
Ang, A.H-S., Tang, W.H. Probability concepts in engineering planning and design, Volume 1 – Basic
principles. John Wiley & Sons, New York, 1975.
Ang, A.H-S., Tang, W.H. Probability concepts in engineering planning and design, Volume 2 – Decision, risk
and reliability. John Wiley & Sons, New York, 1984.
AS/NZS 4360 2004. Risk management. Third Edition, Standards Australia/Standards New Zealand, Sydney,
Australia, Wellington, New Zealand.
Ascher, H. (1981), Weibull distribution versus Weibull process, Proceedings Annual Reliability and
Maintainability Symposium, pp. 426-431.
Attwood, D.A., Deeb, J.M. and Danz-Reece, M.E., 2004. Ergonomic solutions for process industries. Gulf
Professional Publishing, Oxford.
Augusti, G., Baratta, A., Casciati, F. Probabilistic methods in structural engineering. Chapman and Hall,
London, 1984.
Bain, L.J. and Englehardt, M. (1991), Statistical analysis of reliability and life-testing models: Theory and
methods, 2nd ed., Marcel Dekker, New York.
Bare, N. K., 1978. Introduction to fire science and fire protection. John Wiley and Sons, New York.
Barker, T. B. (1985), Quality by experimental design. Marcel Dekker, New York, N. Y.
Barlow, R. E. and Proschan, F. (1975), Statistical theory of reliability and life testing, Holt, Rinehart and
Winston, New York.
Baybutt, P., 1996. Human factors in process safety and risk management: Needs for models, tools and
techniques. International Workshop on Human Factors in Offshore Operations. US Mineral Management
Service, New Orleans, pp. 412-433.
BBR94, Boverkets Byggregler 1994. BFS 1993:57 med ändringar BFS 1995:17, Boverket, Karlskrona 1995.
Bellamy, L.J. et al., 1999. I-RISK Development of an integrated technical and management risk control and
monitoring methodology for managing and quantifying on-site and off-site risks. Contract No. ENVA-CT96-
0243. Main Report.
Bley, D., Kaplan, S. and Johnson, D., 1992. The strengths and limitations of PSA: Where we stand. Reliability
Engineering and System Safety, 38: 3-26.
Blockley, D. The nature of structural design and safety. Ellis Horwood Ltd Chichester, 1980.
BNQP, 2005. Criteria for performance excellence. In: M.B.N.Q. Award (Editor). American Society for Quality,
Milwaukee.
Brown, D. B., 1976. Systems analysis and design for safety. Prentice-Hall International Series in Industrial
and systems Engineering, N. J.


BSI Draft for development DD 240: Fire safety engineering in buildings. Part 1: Guide to the application of
fire safety engineering principles. British Standards Institution, London 1997.
Bukowski, R.W. Engineering evaluation of building fire safety performance. CIB W14 TG1, 1997.
Bukowski, R.W., Clarke III, F.B., Hall Jr, J.R., Steifel, S.W. Fire risk assessment method: Description of
methodology. National Fire Protection Research Foundation, Quincy, 1990.
Bukowski, R.W., Peacock, R.D., Jones, W.W., Forney, C.L. Technical reference guide for the Hazard I fire
hazard assessment method. NIST Handbook 146, Vol. II. National Institute of Standards and Technology,
Gaithersburg, 1989.
Bureau International des Poids et Measures et al., (ed.), 1993. Guide to the expression of uncertainty in
measurement, 1st edition, Géneve: International Organization for Standardization.
Bureau International du Travail (ed.). Encyclopédie de médecine, hygiène et sécurité du travail, 4ème
edition en CD-ROM.
BVD (Brand-Verhütungs-Dienst für Industrie und Gewerbe), Fire risk evaluation – Edition B: The Gretener
fire risk quantification method. Draft December 1979, Zürich 1980.
Chapanis, A.R., 1986. Human-factors engineering. The new Encyclopaedia Britannica, 21. Encyclopaedia
Britannica, Chicago.
Charters, D.A. Quantified Assessment of Hospital Fire Risks. Proc. Interflam 96, pp 641-651, 1996.
Chen, S.-J. and Hwang, C.-L., 1992. Fuzzy multiple attribute decision making. Springer- Verlag, Berlin.
Cornell, C.A. A Probability-based structural code. ACI-Journ., Vol. 66, 1969.
Cote, A. E. (ed.), 1987. Fire protection handbook, 18th edition, NFPA, Quincy, Massachusetts.
Covello, V.T., Merkhofer, M.W. Risk assessment methods, approaches for assessing health and
environmental risks. Plenum Press, New York, 1993.
Cox, A.M., Alwang, J. and Johnson, T.G., 2000. Local preferences for economic development outcomes: An
application of the analytical hierarchy procedure. Journal of Growth and Change, Vol 31(Issue 3 summer
2000): 341-366.
CPQRA, Guidelines for chemical process quantitative risk analysis. Center for Chemical Process Safety of the
American Institute of Chemical Engineers, New York, 1989.
Crow, L.H. (1974), Reliability analysis for complex repairable systems, Reliability and Biometry, F. Proschan
and R.J. Serfling, eds., SIAM, Philadelphia, pp 379-410.
Cullen, L., 1990. The public inquiry into the Piper Alpha disaster, HMSO, London.
Davidsson, G., Lindgren, M., Mett, L. Värdering av risk (Evaluation of risk). SRV report P21-182/97, Karlstad
1997.
Davies, J.C. 1996. Comparing environmental risks: Tools for setting government priorities. Resources for the
Future, Washington, DC.
DOD, 1997. MIL-STD-1472d Human engineering design criteria for military systems, Equipment and
Facilities. In: D.o. Defence (Editor).
Dougherty, E.M., 1990. Human reliability analysis – Where shouldst thou turn? Journal of Reliability
Engineering and System Safety, 29: 283-299.
Dykes, G.J., 1997. The Texaco incident. In: H.-J. Uth (Editor), Workshop on Human Performance in
Chemical Safety. Umweltbundesamt, Munich.
Edwards, E., 1988. Human factors in aviation, Introductory Overview, San Diego, CA.
Einarsson, S., 1999. Comparison of QRA and vulnerability analysis: Does analysis lead to more robust and
resilient systems? Acta Polytechnica Scandinavica Civil engineering and building construction series no. 114,
Espoo, Finland.
Embrey, D.E., 1983. The use of performance shaping factors and quantified expert judgement in the
evaluation of human reliability: An initial appraisal, Brookhaven National Laboratory.


Embrey, D.E., 1992. Incorporating management and organisational factors into probabilistic safety
assessment. Engineering & System Safety, Vol. 38(No. 1-2): 199-208.
Embrey, D.E., Humphreys, P., Rosa, E.A., Kirwan, B. and Rea, K., 1984. SLIM-MAUD: An approach to
assessing human error probabilities using structured expert judgement, Nuclear Regulatory Commission,
Washington, DC.
EU, 1996. Council Directive 96/82/EC (Seveso II) on control of major accident hazards involving dangerous
substances. European Union.
Evans, D.D., Stroup, D.W. Methods of calculating the response time of heat and smoke detectors installed
below large unobstructed ceilings. NBSIR 85-3167, National Bureau of Standards, Gaithersburg, 1985.
FAA, 2004. Human Factors Awareness Course. www.hf.faa.gov/webtraining/index.htm.
FEG, Fire engineering guidelines. Fire Code Reform Centre, Sydney, 1996.
Ferry, T. S., 1988. Modern accident investigation and analysis. John Wiley and Sons, New York.
Finney, D.J. Probit analysis. 3rd ed. Cambridge University Press, Cambridge, 1971.
Firenze, R., 1971. Hazard Control. National Safety News, 104(2): 39-42.
Fischhoff, B. 1995. Risk perception and communication unplugged: Twenty years of process. Risk Analysis
15:137-145.
Fitzgerald, R. Risk analysis using the engineering method for building fire safety. WPI, Boston, 1985.
Frantzich, H., Holmquist, B., Lundin, J., Magnusson, S.E., Rydén, J. Derivation of partial safety factors for
fire safety evaluation using the reliability index  method. Proc. 5th International Symposium on Fire Safety
Science, pp 667-678, 1997.
Fuller, W. A. (1987), Measurement error models. John Wiley and Sons, New York, N.Y.
Gärdenfors, P., Sahlin, N-E. Decision, probability and utility. Selected readings. Cambridge University Press
Cambridge, 1986.
Goodstein, L.P., Anderson, H.B. and Olsen, S.E., 1988. Tasks, errors and mental models. Taylor and Francis,
Washington, DC.
Gough, J.D. 1991. Risk communication: The implications for risk management. Information Paper No. 33.
Centre for Resource Management, Lincoln University, Cantebury, New Zealand.
Groeneweg, J., 2000. Preventing human error using audits, Effective Safety Auditing, London.
Hacker, W., 1998. Algemeine Arbeitspsychologie – Psychische Regulation von Arbeitstaetigkeiten. Verlag
Hans Huber, Bern.
Hagiwara, I., Tanaka, T. International comparison of fire safety provisions for means of escape. Proc. 4th
International Symposium on Fire Safety Science, pp 633-644, 1994.
Hahn, G.J., and Shapiro, S.S. (1967), Statistical models in engineering, John Wiley & Sons, Inc., New York.
Haimes, Y.Y., Barry, T., Lambert, J.H., (ed). Editorial text. Workshop proceedings “When and How Can You
Specify a Probability Distribution When You Don’t Know Much?” Risk Analysis Vol. 14, No. 5, pp 661-706,
1994.
Hamed, M.M., First-Order Reliability Analysis of Public Health Risk Assessment. Risk Analysis, Vol. 17, No. 2,
pp 177-185, 1997.
Harms-Ringdahl, L., 2001. Safety analysis. Principles and practice in occupational safety. Taylor & Francis,
London.
Harris, R. L. (ed.), 2000. Patty's industrial hygiene, 5th edition, Volume 4.
Hartzell, G.E., Priest, D.N., Switzer, W.G. Modeling of toxicological effects of fire gases: Mathematical
modeling of intoxication of rats by carbon monoxide and hydrogen cyanide. Journal of Fire Science Vol. 3,
No. 2, pp 115 - 128, 1985.
Hasofer, A.M., Lind, N.C. An exact and invariant first order reliability format. Proc. ASCE, J Eng. Mech. Div.
1974.


Health and Safety Executive (HSE). The tolerability of risk from nuclear power stations. HMSO Publications
Centre, London, 1997.
Heinrich, H.W., Petersen, D. and Roos, N., 1980. Industrial accident prevention: A safety management
approach. McGraw Hill, New York.
Helton, J.C. Treatment of uncertainty in performance assessment for complex systems. Risk Analysis, Vol.
14, No. 4, 1994.
Helton, J.C., Anderson, D.R., Marietta, M.G., Rechard, R.P. Performance assessment for the waste isolation
pilot plant: From regulation to calculation for 40 CFR 191.13. Op. Res. Vol. 45, No. 2, pp157-177, 1997.
Hirschberg, H. and Dang, V.N., 1996. Critical operator actions and data issues, Task Report by Principal
Working Group 5. OECD/ NEA.
Hollnagel, E., 1998. CREAM – Cognitive Reliability and Error Analysis Method. Elsevier Science Ltd, Oxford.
Holmstedt, G., Kaiser, I. Brand i vårdbäddar. SP-RAPP 1983:04 Statens Provningsanstalt, Borås, 1983.
Hourtolou, D. and Salvi, O., 2004. ARAMIS Project: Development of an integrated accidental risk assessment
methodology for industries in the framework of SEVESO II directive – ESREL 2004. In: T. Bedford and
P.H.A.J.M.v. Gelder (Editors), pp. 829-836.
Hoyland, A., and Rausand, M. (1994), System reliability theory, John Wiley & Sons, Inc., New York.
HSE, 1999. Reducing error and influencing behaviour. Health And Safety Executive, Sudbury.
ILO, 2001. Guidelines on occupational safety and health management systems, International Labour Office,
Geneva.
Iman, R.L., Helton, J.C. An investigation of uncertainty and sensitivity analysis techniques for computer
models. Risk Analysis, Vol. 8, No. 1, 1988.
HSE, 2003. Organisational change and major accident hazards. HSE Information sheet CHIS7.
Reason, J., 1990. Human error. Cambridge University Press, New York.
International Atomic Energy Agency (IAEA). Evaluating the reliability of predictions made using
environmental transfer models, Safety Series No. 100, Vienna, 1989.
International Electrotechnical Commission (IEC). International Standard 60300-3-9, Dependability
management – Part 3: Application guide – Section 9: Risk analysis of technological systems, Genéve, 1995.
ISO 3534-1:1993, Statistics – Vocabulary and symbols – Part 1: Probability and general statistical terms,
International Organization for Standardization, Geneva, Switzerland.
ISO/CD 13387 Committee Draft, Fire safety engineering. The application of fire performance concepts to
design objectives, ISO/TC92/SC4, 1997.
ISO/CD 13388 Committee Document, Fire safety engineering. Design fire scenarios and design fires,
ISO/TC92/SC4, 1997.
Itoh, H., Mitomo, N., Matsuoka, T. and Murohara, Y., 2004. An extension of the m-SHEL model for analysis of
human factors at ship operation, 3rd International Conference on Collision and Groundings of Ships, Izu,
Japan, pp. 118-122.
Jeffreys, H. (1983), Theory of probability, 3rd edition, Oxford University Press , Oxford.
Joschek, H.I., 1981. Risk assessment in the chemical industry, Proceeding of the International Topical
meeting on Probabilistic Risk Assessment. American Nuclear Society, New York.
Kalbfleisch, J.D., and Prentice, R.L. (1980), The statistical analysis of failure data, John Wiley & Sons, Inc.,
New York.
Kandel, A. and Avni, E., 1988. Engineering risk and hazard assessment, Vol 1. CRC Press, Boca Raton,
Florida.
Kaplan, S. The words of risk analysis. Risk Analysis, Vol. 17, No. 4, 1997.


Kaplan, S., Garrick, B.J. 1981. On the quantitative definition of risk. Risk Analysis 1:11-27.
Kariuki, S.G. and Löwe, K., 2005. Integrating human factors into process hazard analysis. In: k. Kolowrocki
(Editor), Advances in Safety and Reliability. Taylor and Francis, Tri-City pp. 1029-1035.
Kariuki, S.G. and Löwe, K., 2006. Increasing human reliability in the chemical process industry using human
factors techniques. Process Safety and Environmental Protection, 84(B3): 200-207.
Kawano, R., 2002. Medical Human Factor Topics. Website:
http://www.medicalsaga.ne.jp/tepsys/MHFT_tiics103.html.
Kirwan, B., 1996. The validation of three human reliability quantification techniques – THERP, HEART and
JHEDI: Part 1 Journal of Applied Ergonomics, 27(6): 359 - 373.
Kleijnen, J.P.C. Sensitivity analysis, uncertainty analysis, and validation: A survey of statistical techniques
and case studies. International Symposium on Sensitivity Analysis of Model Output 95, 1995. Organised by
JRC Ispra.
Klein, G., Ovasanu, J., Calderwood, R., Zsambok, C. Decision making in action: Models and methods. Alex
Publishing Corp. Norwood, 1993.
Kletz, T.A., 1999. Hazop and Hazan – Identifying and assessing chemical industry hazards. Institution of
Chemical Engineers, Rugby, UK.
Kletz, T.A., 2001. Learning from accident. Gulf Professional Publishing, Oxford, UK.
Klinke, A., Renn, O. 2002. A new approach to risk evaluation and management: Risk-based, precaution-
based, and discourse-based strategies. Risk Analysis 22:1071-1094.
Kovalenko, I.N., Kuznetsov, N.Y., and Pegg, P.A. (1997), Mathematical theory of reliability of time dependent
systems with practical applications, John Wiley & Sons, Inc., New York.
Löwe, K. and Kariuki, S.G., 2004a. Methods for incorporating human factors during design phase, Loss
Prevention and Safety Promotion in the Process Industries. Loss Prevention Prague, pp. 5205-5215
Löwe, K. and Kariuki, S.G., 2004b. Berücksichtigung des Menschen beim Design verfahrenstechnischer
Anlagen, 5. Berliner Werkstatt Mensch-Maschine-Systeme. VDI-Verlag, Berlin, pp. 88-103.
Löwe, K., Kariuki, S.G., Porcsalmy, L. and Fröhlich, B., 2005. Development and validation of a human factors
engineering. Guideline for Process Industries Loss Prevention Bulletin (lpb)(Issue 182): 9-14.
MacDonald LA, Karasek RA, Punnett L, Scharf T (2001). Covariation between workplace physical and
psychosocial stressors: Evidence and implications for occupational health research and prevention.
Ergonomics, 44:696-718.
MacIntosh, D.L., Suter II, G.W., Hoffman, F.O. Use of probabilistic exposure models in ecological risk
assessment of contaminated sites. Risk Analysis, Vol. 14, No. 4, 1994.
Madsen, H.O., Krenk, S., Lind, N.C. Methods of structural safety. Prentice-Hall, Englewood Cliffs, 1986.
Magnusson, S.E., Frantzich, H., Harada, K. Fire safety design based on calculations: Uncertainty analysis and
safety verification. Report 3078, Dept. of Fire Safety Eng., Lund University, Lund 1995.
Magnusson, S.E., Frantzich, H., Harada, K. Fire safety design based on calculations: Uncertainty analysis and
safety verification. Fire Safety Journal Vol. 27, pp 305-334, 1997.
Magnusson, S.E., Frantzich, H., Karlsson, B., Särdqvist, S. Determination of safety factors in design based on
performance. Proc. 4th International Symposium on Fire Safety Science, pp 937- 948, 1994.
Malchaire J (2000). Strategy for prevention and control of the risks due to noise. Occupational and
Environmental Medicine, 57:361-369.
Mann, N.R., Schafer, R.E. and Singpurwalla, N.D. (1974), Methods for statistical analysis of reliability and life
data, John Wiley & Sons, Inc., New York.
Martz, H.F., and Waller, R.A. (1982), Bayesian reliability analysis, Krieger Publishing Company, Malabar,
Florida.


McCafferty, D.B., 1995. Successful system design through integrating engineering and human factors.
Process Safety Progress, 14(2): 147-151.
McCone, T.E. Uncertainty and variability in human exposures to soil contaminants through home-grown
food: A Monte Carlo assessment. Risk Analysis, Vol. 14, No. 4, 1994.
McLeod, R., 2004. Human Factors Assessment Model Validation Study: Final Report.
McNab, W.B. 2001. Basic principles of risk management and decision analysis. Notes for employees of the
Ministry of Agriculture, Food & Rural Affairs. Draft. Ontario Ministry of Agriculture, Food & Rural Affairs,
Guelph, Canada. pp 1-16.
Meeker, W.Q., and Escobar, L.A. (1998), Statistical methods for reliability data, John Wiley & Sons, Inc.,
New York.
Meister, D., 1966. Report No. AMLR-TR-67-88. Applications of human reliability to the production process.
In: W.B. Askren (Editor), Symposium on Human Performance in Work.
Meister, D., 1977. Methods of predicting human reliability in man-machine systems. In: S. Brown and J.
Martin (Editors), Human Aspects of Man-Machine Systems. Open University Press, Milton Keynes, UK.
Morgan, G.M., Henrion, M. Uncertainty – A guide to dealing with uncertainty in quantitative risk and policy
analysis. Cambridge University Press, Cambridge, 1990.
Murphy, D.M., Paté-Cornell, M.E., The SAM framework: Modeling the effects of management factors on
human behavior in risk analysis. Risk Analysis, Vol.16, No. 4, 1996.
NASA, 2002. Probabilistic risk assessment procedures guide for NASA managers and practitioners. North
America Space Agency, Washington, D.C.
Nelson, H.E., Shibe, A.J. A system for fire safety evaluation of health care facilities. NBSIR 78-1555, National
Bureau of Standards, Washington, 1980.
Nelson, H.E., Tu, K.M. Engineering analysis of the fire development in the Hillhaven Nursing Home fire,
October 5, 1989.
NFPA 101M – Manual on alternative approaches to life safety. National Fire Protection Association, Quincy
1987.
Nielsen, L., Sklet, S. and Oien, K., 1996. Use of risk analysis in the regulation of the Norwegian petroleum
industry, proceedings of the probabilistic safety assessment international topical meeting. American Nuclear
Society, IL, USA, pp. 756-762.
NISTIR 4665, National Institute of Standards and Technology, Gaithersburg, 1991.
NKB Draft. Teknisk vejledning for beregningsmoessig eftervisning af funktionsbestemte brandkrav. NKB
Utskotts och arbejtsrapport 1996:xx, Nordiska Kommittén for byggbestämmelser, NKB, 1997.
Norman, D.A., 1988. The psychology of everyday things. Basic Books, New York.
Notarianni, K.A. Measurement of room conditions and response of sprinklers and smoke detectors during a
simulated two-bed hospital patient room fire. NISTIR 5240. National Institute of Standards and Technology,
Gaithersburg, 1993.
NPD, 1992. Regulations concerning implementation and use of risk analysis in the petroleum activities with
guidelines, Norwegian Petroleum Directorate.
NR, Nybyggnadsregler, BFS 1988:18, Boverket, Karlskrona, 1989.
NUREG, 1983. PRA procedures guide – A guide to performance of probabilistic risk assessments for nuclear
power plants. U. S. Nuclear Regulation Commission, Washington, D.C.
NUREG, PRA procedures guide: A guide to the performance of probabilistic risk assessment for nuclear
power plants, 2 vols. NUREG/CR-2300, U.S. Nuclear Regulatory Commission, Washington D.C., 1983.
O'Connor, P.D.T. (1991), Practical reliability engineering, 3rd edition, John Wiley & Sons, Inc., New York.
Ozog, H., 1985. Hazard identification, analysis and control. Chemical Engineering Journal: 161-170.


Park, K. S., 1987. Human reliability, analysis, prediction and prevention of human errors. Advances in human
factors/ergonomics. Elsevier, Amsterdam.
Pate-Cornell, M.E., 1993. Learning from the Piper Alpha accident: A postmortem analysis of technical and
organisational factors. Risk Analysis, 13(2): 215-232.
Paustenbach, D. J.: Occupational exposure limits, pharmacokinetics, and unusual work schedules. In: Patty's
Industrial Hygiene and Toxicology, 3rd ed., Vol. III, Part A, edited by R. L. Harris, L. J. Cralley and L. V.
Cralley. New York: John Wiley & Sons, Inc. 1994. pp. 191-348.
Peacock, R.D., Jones, W.W., Forney, G.G., Portier, R.W., Reneke, P.A., Bukowski, R.W., Klote, J.H. An
update guide for Hazard I Version 1.2, NISTIR 5410, National Institute of Standards and Technology,
Gaithersburg, 1994.
Peterson, D., 1978. Techniques of safety management. McGraw-Hill Book, Co., New York.
Pitblado, R.M., Williams, J.C. and Slater, D.H., 1990. Quantitative assessment of process safety programs.
Plant and Operations Progress, Vol. 9(No. 3).
Press, S. J. (1989), Bayesian statistics: Principles, models, and applications. John Wiley and Sons, New York,
N.Y.
Prince MM, Stayner LT, Smith RJ, Gilbert S J (1997). A re-examination of risk estimates from the NIOSH
Occupational Noise and Hearing Survey (ONHS). Journal of the Acoustic Society of America, 101:950-963.
PRISM, 2004. Incorporation of human factors in the design process. http://www.prism-network.org.
Purser, D.A., Toxicity assessment of combustion products. The SFPE Handbook of Fire Protection
Engineering. National Fire Protection Association. 2nd ed. Quincy, 1995.
Quinlan M (2002). Workplace health and safety effects of precarious employment. In the Global
Occupational Health Network Newsletter, Issue No. 2. World Health Organization, Geneva.
Quintiere, J.G., Birky, M., McDonald, F., Smith, G. An analysis of smouldering fires in closed compartments
and their hazard due to carbon monoxide. NBSIR 82-2556, National Bureau of Standards, Washington, 1982.
Raja S, Ganguly G (1983). Impact of exposure to noise on the hearing acuity of employees in a heavy
engineering industry. Indian Journal of Medical Research, 78:100-113.
Rasmussen, J., 1981. Models of mental strategies in process strategies. In: J. Rasmussen and W. Rouse
(Editors), Human Detection and Diagnosis of System Failures. Plenum Press, New York.
Rasmussen, J., 1983. Skills, rules and knowledge: signals, signs and symbols and other distinctions in
human performance models. IEEE Transactions on Systems, Man and Cybernetics, SMC-13(3): 257-266.
Reason, J. Human error. Cambridge University Press, Cambridge, 1990.
Report 3085, Dept. of Fire Safety Eng., Lund University, Lund 1996.
Riihimäki H (1995a). Back and limb disorders. In: Epidemiology of work related diseases. McDonald C, ed.
BMJ Publishing Group, London.
Riihimäki H (1995b). Hands up or back to work-future challenges in epidemiologic research on
musculoskeletal diseases. Scandinavian Journal of Work, Environment and Health, 21:401-403.
Roach S (1992). Health risks from hazardous substances at work – Assessment, evaluation and control.
Pergamon Press, New York.
Rutstein, R. The estimation of the fire hazard in different occupancies. Fire Surveyor, Vol. 8, No. 2, pp 21-25,
1979.
Saaty, T.L. and Kearns, K.P., 1985. Analytic planning – The organization of systems. International Series in
Modern Applied Mathematics and Computer Science 7. Pergamon Press, Oxford, England.
Saaty, T.L., 1980. The analytic hierarchy process. McGraw-Hill, New York.
Saaty, T.L., 2000. Fundamentals of decision making and priority theory with the analytic hierarchy process.
RWS Publications, Pittsburgh.
Sanders, M.M. and McCormick, E.J., 1993. Human factors in engineering and design. McGraw-Hill, NY.


Sanderson, P.M. and Harwood, K., 1988. The skills, rules and knowledge classisfication: A discussion of its
emergence and nature. In: L.P. Goodstein, H.B. Anderson and S.E. Olsen (Editors), Tasks, Errors and Mental
Models. Taylor and Francis, Washington DC.
Schein, E.H., 1985. Organizational culture and leadership. Jossey-Bass, San Francisco.
Shannon HS, Mayr J, Haines T (1997). Overview of the relationship between organizational and workplace
factors and injury rates. Safety Science, 26:201-17.
Shannon HS, Walters V, Lewchuk W et al. (1996). Workplace organizational correlates of lost-time accident
rates in manufacturing. American Journal of Industrial Medicine, 29:258-268.
Shields, T.J., Silcock, G.W., Dunlop, K.E. A methodology for the determination of code equivalency with
respect to the provision of means of escape. Fire Safety Journal Vol. 19, pp 267-278, 1992.
Silverstein BA, Stetson DS, Keyserling WM, Fine LJ (1997). Work-related musculoskeletal disorders:
comparison of data sources for surveillance. American Journal of Industrial Medicine, 31:600-608.
Silverstein BA, Viikari-Juntura E, Kalat J (2002). Use of a prevention index to identify industries at high risk
for work-related musculoskeletal disorders of the neck, back, and upper extremity in Washington state,
1990–1998. American Journal of Industrial Medicine, 41:149-169.
Simpson M, Bruce R (1981) Noise in America: The extent of the noise problem. (EPA Report No. 550/9-81-
101). Environmental Protection Agency, Washington, DC.
Sjöberg, L., Ogander, T. Att rädda liv - Kostnader och effekter (To save lives - costs and effects), Ds 1994:14,
Stockholm 1994.
Snyder, H.L., 1973. Image quality and observer performance. In: L.M. Biberman (Editor), Perception of
Displayed Information. Plenum, New York.
Sørensen, J.D., Kroon, I.B., Faber, M.H. Optimal reliability-based code calibration. Structural Safety, Vol. 15,
pp 197-208, 1994.
SRV. Att skydda och rädda liv, egendom och miljö. Statens Räddningsverk, 1989.
Standard for Automatic Water-sprinkler Systems, RUS 120:4. The Swedish Association of Insurance
Companies, Stockholm, 1993.
Steinbach, J., 1999. Safety assessment for chemical processes. Wiley - VCH, Weinheim.
STRUREL, A structural reliability analysis program system. Reliability Consulting Programs GmbH, München,
1995.
Swain, A.D. and Guttman, H.E., 1983. Handbook of human reliability with emphasis on nuclear power plants
applications, US Nuclear Regulatory Committee, Washington, DC.
Swain, A.D., 1990. Human reliability analysis: need, status, trends and limitations. Reliability Engineering
and System Safety, 29: 301-313.
Thoft-Christensen, P., Baker, M.J. Structural reliability and its applications. Springer Verlag, Berlin, 1982.
Toren K, Balder B, Brisman J et al. (1999) This risk of asthma. European Respiratory Journal, 13:496-501.
Tsai SP, Gilstrap EL, Cowles SR, Waddell LC, Ross CE (1992). Personal and job characteristics of
musculoskeletal injuries in an industrial population. Journal of Occupational Medicine, 34:606-612.
Turner, B.A., 1978. Man-made disasters. Wykeham, London.
Tvedt, L. PROBAN Version 2, Theory manual. A.S Veritas Research, Report No.89-2023, Høvik, 1989.
U.S Occupational Safety and Health Administration (OSHA). Department of Labor. 1995. Occupational Safety
and Health Administration Technical Manual, Section I, Chapter 1, Personal sampling for air contaminants.
OSHA Instruction TED 1.15. Washington, D.C.: U.S.
U.S. Environmental Protection Agency (EPA). Environmental radiation protection standards for the
management and disposal of spent nuclear fuel, high-level and transuranic radioactive waste; Final Rule, 40
CFR Part 191. Federal Register, 58, 66398- 66416, 1993.


U.S. Environmental Protection Agency (EPA). Environmental standards for the management and disposal of
spent nuclear fuel, high-level and transuranic radioactive waste; Final Rule, 40 CFR Part 191. Federal
Register, 50, 38066-38089, 1985.
UN (2000). International standard industrial classification of all economic activities (ISIC). Third revision.
United Nations publication (St/ESA/STAT/SER.M/4/Rev.3). United Nations, New York.
UN (2001). World population prospects. The 2000 revision highlights. Population Division. Department of
Economic and Social Affairs, United Nations, New York.
USDHHS (1986). Perspectives in disease prevention and health promotion, leading work-related diseases
and injuries. Morbidity and Mortality Weekly Report, 35(12).
USDOL OSHA (2000). Docket for the Federal Register. (Vol. 65, No. 220.) U.S. Department of Labor,
Occupational Safety and Health Administration, Washington, DC.
USDOL OSHA (2002a). Permissible exposure limits, codified at 29 CFR 1910.1000. U.S. Department of Labor,
Occupational Safety and Health Administration. Available at http://www.osha.gov/SLTC/pel/index.html.
USDOL OSHA (2002b). Noise and hearing conservation. U.S. Department of Labor, Occupational Safety and
Health Administration. Available at http://www.osha-slc.gov/SLTC/noisehearingconservation/index.html.
USEIA (2001). U.S. Energy Information Administration, U.S. Department of Energy, International Energy
Database. Accessed January 2001 at http://www.eia.doe.gov/emeu/iea/coal.html.
Uth, H.J., 1999. Trends in major industrial accidents in Germany. Journal of Loss Prevention in Process
Industries, 12: 69-73.
Vaaranen V, Vasama M, Toikkanen J et al. (1994). Occupational diseases in Finland 1993. Institute of
Occupational Health, Helsinki.
Vadillo, E.M., 2006. Development of a human factors assessment technique for the chemical process
industry. Master Thesis, Technical University of Berlin, Berlin.
Vardeman, S.B., Statistics for engineering problem solving. PWS Publishing Company, Boston, 1993.
Venables KM, Chang-Yeung M (1997). Occupational asthma. The Lancet, 349:1465-1469.
Vestrucci, P., 1988. The logistic model for assessing human error probabilities using SLIM method. Reliability
Engineering and System Safety, 21: 189-196.
Volinn E, Spratt KF, Magnusson M, Pope MH (2001). The Boeing prospective study and beyond. Spine,
26:1613-1622.
Vrijling, J.K., van Hengel, W., Houben, R.J., A framework for risk evaluation. Journ. Haz. Mat. Vol. 43, pp
245-261, 1995.
Vuuren, W.v., Shea, C.E. and Schaaf, T.W.v.d., 1997. The development of an incident analysis tool for the
medical field, Eindhoven University of Technology, Eindhoven.
Waddell G (1991). Low back disability: A syndrome of Western civilization. Neurosurgery Clinics of North
America, 2:719-738.
Wagner GR, Wegman DH (1998). Occupational asthma: Prevention by definition. American Journal of
Industrial Medicine, 33:427-429.
Waitzman N, Smith K (1999). Unsound conditions: Work-related hearing loss in construction, 1960-75. The
Center to Protect Worker’s Rights. CPWR Publications, Washington, DC.
Ward E, Okan A, Ruder A, Fingerhut M, Steenland K (1992). A mortality study of workers at seven beryllium
processing plants. American Journal of Industrial Medicine, 22:885-904.
Watts, J.M. Fire Risk Ranking. The SFPE handbook of fire protection engineering, 2nd ed. National Fire
Protection Association, Quincy, 1995.
Westgaard RH, Jansen T (1992). Individual and work related factors associated with symptoms of
musculoskeletal complaints: I. A quantitative registration system. British Journal of Industrial Medicine,
49:147-153.


Westgaard RH, Winkel J (1997). Ergonomic intervention research for improved musculoskeletal health: a
critical review. International Journal of Industrial Ergonomics, 20:463-500.
WHO (1999). International statistical classification of diseases and related health problems (ICD-10) in
occupational health. Karjalainen A, ed. Protection of the Human Environment, Occupational and
Environmental Health Series, (WHO/SDE/OEH/99.11). World Health Organization, Geneva.
WHO/FIOSH (2001). Occupational exposure to noise: Evaluation, prevention and control. Goelzer B, Hansen
CH, Sehrndt GA, eds. On behalf of WHO by the Federal Institute for Occupational Safety and Health
(FIOSH), Dortmund.
Wickens, C.D. and Hollands, J.G., 2000. Engineering psychology and human performance. Prentice-Hall,
Upper Saddle River, NJ.
Wikstrom B-O, Kjellberg A, Landstrom U (1994). Health effects of long-term occupational exposure to whole-
body vibration: A review. International Journal of Industrial Ergonomics, 14:273–292.
World Bank (2001). World development indicators 2001. Available at http://worldbank.com.
Xia Z, Courtney TK, Sorock GS, Zhu J, Fu H, Liang Y, Christiani DC (2000). Fatal occupational injuries in a
new development area in the People’s Republic of China. Occupational and Environmental Medicine, 42:
917-922.
Xu X, Christiani DC, Dockery DW et al. (1992). Exposure-response relationships between occupational
exposures and chronic respiratory illness: a community-based study. American Review of Respiratory
Disease, 146:413-418.
Yin SN, Li Q, Liu Y, Tian F, Du C, Jin C (1987). Occupational exposure to benzene in China. British Journal of
Industrial Medicine, 44:192-195.
Yip YB (2001). A study of work stress, patient handling activities and the risk of low back pain among nurses
in Hong Kong. Journal of Advanced Nursing, 36:794-804.
