Treatment of Uncertainties in Reliability Assessment of Safety Instrumented Systems
Astrid Folkvord Janbu
NTNU
2009
PREFACE
This master thesis is written at the Department of Production and Quality Engineering at NTNU,
during the spring of 2009. The thesis is a part of the 5th year master program in Reliability,
Availability, Maintainability and Safety (RAMS).
The project treats the topic Treatment of Uncertainties in Reliability Assessment of Safety
Instrumented Systems and is written in cooperation with Det Norske Veritas (DNV). The thesis is
based on a literature study described in the project thesis and a case study of a Safety Instrumented
System (SIS). It is assumed that the reader of this thesis has taken an introduction course in system
reliability theory or has similar knowledge.
I want to thank my supervisor at NTNU, Prof. Marvin Rausand, for teaching me how to think thoroughly through difficult topics. I would also like to thank my other supervisor at NTNU, Ph.D. Mary Ann Lundteigen, for good advice during this semester. At DNV, I would like to thank senior consultant Marius Lande for his patience and help throughout the case study.
Trondheim, 5th June 2009
Astrid Folkvord Janbu
Master Thesis
Astrid Folkvord Janbu
NTNU
ABSTRACT
Safety instrumented systems (SIS) are employed to control and mitigate the risk to personnel,
environment and assets in many industries and everyday life. Given the purpose of a SIS and its degree of independence of human actions, reliability is of high importance. Reliability assessments of SIS provide an important basis for decision making and are performed as part of compliance studies in order to document whether a SIS meets the stated safety requirements or not. Unfortunately,
there are several aspects in a reliability assessment that cause uncertainty associated with the
results. Uncertainty in reliability assessments reduces the confidence in the results, increases the risk of making wrong decisions, and should therefore be communicated to the decision maker.
The main objective of this master thesis was to study the reliability assessment procedures that are
used when developing compliance reports and examine how a representation of the uncertainties in
the results may be implemented in compliance studies as a decision support.
A literature study of uncertainty assessments was carried out in order to identify the main sources of
uncertainty in reliability assessments and techniques for quantifying their effects. It was also
investigated how to present the results from uncertainty assessments in compliance reports. Further,
reliability and uncertainty assessments of a case study were performed with fault tree analysis (FTA)
and simulation in order to analyse the uncertainty associated with the results. The findings were
discussed.
The literature study showed that there are three main sources of uncertainty in reliability assessments: completeness uncertainty, model uncertainty and data uncertainty. The broadly accepted standard for design and operation of SIS, IEC 61508, does not explicitly treat the subject of uncertainty. Still, it indicates doubts about the validity of reliability assessment results through architectural constraints and the suggested 70 % upper confidence limit as a conservative approximation of failure rates. Uncertainty assessment techniques may be used to quantify and evaluate the effects of uncertainty in reliability assessments. It was found that both sensitivity and importance measures are well applicable to the main sources of uncertainty, while uncertainty propagation is limited to the treatment of data uncertainty.
Compliance studies are usually carried out during the design phase. Detailed and relevant information may not be in place during early design, which causes completeness uncertainty. The use of generic data causes data uncertainty due to inhomogeneity in the data samples, lack of relevance and the modelling of failure data. The system designer may also be unfamiliar with the future characteristics of the system to be developed; hence, model uncertainty arises when less suited models are used. The level of uncertainty is higher during early phases of a lifecycle and for new technology due to the lack of experience and knowledge about the system. The predicted reliability described in the compliance report may therefore, due to these sources of uncertainty, be quite different from the field reliability.
The reliability assessments of the case study gave some interesting findings. The simulation was expected to give the most realistic results due to the random sampling from assumed lifetime distributions. It was therefore interesting to see that the unavailability predicted by simulation was significantly closer to the result from the conservative FTA approach developed by Lundteigen and Rausand than to the cut set approximation used by CARA FaultTree. In light of the experiences from
the reliability assessments, it was concluded that FTA is a sufficiently good model for reliability assessments of SIS and that the conservative approximation is recommended, since it compensates for some of the uncertainty involved. Simulation was found to be an unnecessarily advanced and less suited modelling tool for PFD calculations. The low failure rates of a low-demand SIS require a high number of runs to stabilize the results, which makes the simulation a time-consuming process.
The case study showed that sensitivity and importance measures are often sufficient remedies for uncertainty assessment in a compliance study, since they identify which sources of uncertainty are critical to the results. These techniques do not quantify the level of uncertainty associated with the results, only the effects of changes in the input. The information provided may nevertheless be used as input to a qualitative uncertainty evaluation, and is further valuable for reliability improvement during later phases of the lifecycle and for identification of efficient maintenance strategies.
When investigating the level of data uncertainty in the results, uncertainty propagation should be applied. The results are not as intuitively understood as those of sensitivity analyses and importance measures, and hence require competence for interpretation of the statistical quantities from the analysis. An important experience regarding uncertainty propagation arose from the case study: uncertainty propagation should always be performed using deterministic models or analytical methods. When performing uncertainty propagation for a simulation model, the number of runs increases by a factor equal to the number of uncertainty simulation runs, which makes the analysis very time consuming.
The high level protection system did not comply with the SIL 3 requirement for the combined loop, or the SIL 2 requirement for the single loops. Uncertainty assessments were also performed to see if acceptance could still be recommended, but none of the assessment results indicated that the high level system should be SIL 3 verified.
The results from a compliance study should not be seen as a definite property of the system, and this should be communicated to the decision maker. A qualitative uncertainty evaluation of the results should therefore be included as part of the compliance report. The need to feel safe should be nuanced with the truth that neither SIL compliance nor the future failure behaviour of the system is guaranteed. Instead of interpreting uncertainty as a necessary evil, one may take advantage of it by reflecting a more realistic result, and in this way raise awareness of the risks involved in the decision process. By being informed about the uncertainties, one may also reduce them more easily.
TABLE OF CONTENTS
Preface ...................................................................................................................................................... i
Abstract ....................................................................................................................................................ii
Table of Contents ....................................................................................................................................iv
Table of Figures ...................................................................................................................................... vii
List of Tables .......................................................................................................................................... viii
1 Introduction ........................................................................................................................................... 1
1.1 Objective ............................................................................................................................................ 1
1.2
1.3 Structure ............................................................................................................................................ 2
2
2.1 General ............................................................................................................................................... 4
2.2
2.2.1
3
3.1.1
3.1.2
3.1.3
3.1.4 Remarks .......................................................................................................................................... 9
3.1.5 Terminology .................................................................................................................................... 9
3.2
4
4.1
4.2 Interpretations of uncertainty ......................................................................................................... 12
4.2.1
4.2.2
4.3 Summary .......................................................................................................................................... 14
5
5.1 General ............................................................................................................................................. 15
5.3 Importance measures ...................................................................................................................... 17
5.3.1
5.3.2
5.3.3
5.4
5.4.1
5.4.2
5.5
6
6.1.1 Analysis ......................................................................................................................................... 25
6.1.2 Realization .................................................................................................................................... 25
6.1.3 Operation ...................................................................................................................................... 26
6.1.4 Documentation ............................................................................................................................. 26
6.2
6.2.1
6.2.2
6.2.3
6.2.4
6.3
7
7.1.1 Flare KO drum ............................................................................................................................... 35
7.1.2
7.1.3 Degasser ....................................................................................................................................... 36
7.2
8
8.1.1
8.2
8.3
9
9.2 PFD calculations ............................................................................................................................... 42
9.3
9.4
9.5
10 Simulation ......................................................................................................................................... 49
10.1 ExtendSim software ....................................................................................................................... 49
10.2 PFD calculations ............................................................................................................................. 49
10.3
10.4
11 Discussion ......................................................................................................................................... 57
11.1
11.1.1
11.2
11.3
11.4
11.5
12 Conclusions ....................................................................................................................................... 64
12.1 Further work .................................................................................................................................. 65
13 Bibliography ...................................................................................................................................... 67
Appendix A
Appendix B
Appendix C Preliminary study ................................................................................................................ 74
TABLE OF FIGURES
Figure 1 Simplified model of a safety instrumented system (SIS) ........................................................... 4
Figure 2 SIL requirements, adapted from compliance report (2008) ..................................................... 6
Figure 3 Contributing factors to uncertainty in reliability assessment of safety instrumented systems 7
Figure 4 Framework for uncertainty assessments, adapted from de Rocquigny, Devictor and
Tarantola (2008) .................................................................................................................................... 16
Figure 5 Birnbaum's measure (Rausand and Høyland 2004) .................................................. 19
Figure 6 Improvement potential ........................................................................................................... 19
Figure 7 Uncertainty propagation (NASA 2002) .................................................................................... 21
Figure 8 Safety lifecycle (IEC 61508 1997) ............................................................................................ 24
Figure 9 Relationship between FSA, validation and verification (OLF 070 2004) ................................. 26
Figure 10 Documentation hierarchy ..................................................................................................... 27
Figure 11 Lifecycle from a producer perspective (Murthy, Østerås and Rausand 2007)...................... 28
Figure 12 Overview of data related to PFD, adapted from (Lundteigen 2008)..................................... 29
Figure 13 Integrated decision making for hardware safety integrity.................................................... 31
Figure 14 PSD and ESD High level protection system for topside plant (Compliance report 2008) ..... 34
Figure 15 Base case of high level protection system for reliability assessments (Compliance report
2008)...................................................................................................................................................... 36
Figure 16 Fault tree of combined loop for typical vessel ...................................................................... 42
Figure 17 Fault tree of combined loop for typical vessel, implicit CCF modelling ................................ 44
Figure 18 Uncertainty propagation through CARA FaultTree ............................................................... 48
Figure 19 Extend blocks ......................................................................................................................... 49
Figure 20 High level protection system modelled in Extend................................................................. 50
Figure 21 Extend model for uncertainty propagation ........................................................................... 54
LIST OF TABLES
Table 1 Failure classification for Safety Instrumented Systems (SIS) (IEC 61508 1997) ......................... 4
Table 2 Safety Integrity Levels (IEC 61508 1997) .................................................................................... 6
Table 3 Notions of uncertainty and associated representations, adapted from Flage, Aven and Zio
(2009) .................................................................................................................................................... 14
Table 4 Summary of uncertainty assessment methods ........................................................................ 23
Table 5 Classification of technology (DNV 2001) .................................................................................. 30
Table 6 Printout from OREDA (OREDA 2009) ........................................................................................ 39
Table 7 Comparison of selected models (Janbu 2008).......................................................................... 40
Table 8 Importance measures for combined loop typical, FTA............................................................. 45
Table 9 OREDA taxonomy (OREDA 2002) .............................................................................................. 47
Table 10 Extend results for combined loop .......................................................................................... 51
Table 11 Importance measures for combined loop typical, simulation................................................ 52
Table 12 Parameters for gamma distribution ....................................................................................... 55
Table 13 Comparison of results ............................................................................................................. 57
Table 14 Architectural Constraints on Type A and B systems (IEC 61508 1997) .................................. 58
1 INTRODUCTION
Safety Instrumented Systems (SIS) are crucial for controlling and mitigating risk in many industries and everyday life. Their independence of human actions makes reliability highly important. Reliability assessments of SIS are performed to evaluate the reliability and provide a basis for decision making regarding design, safety, economic stakes and legal requirements.
Unfortunately, several aspects of a reliability assessment process cause uncertainty in the final results. The main contributions to uncertainty stem from modelling, data and incompleteness in the assessments. Underlying factors like time pressure, competence and system complexity also affect the level of uncertainty. Uncertainty in reliability assessments reduces the validity of the results and thus increases the risk of making wrong decisions. Effort should therefore be made to minimize the uncertainty (Janbu 2008).
Uncertainty assessment techniques may be used for estimation and evaluation of uncertainty in probabilistic assessments. The assessments provide valuable information for further uncertainty reduction and reliability engineering through design and development. In practice, the use of such techniques varies a lot between different industries (Rausand and Øien 2004).
Compliance reports are issued to document whether the results from reliability assessments of SIS meet the stated requirements or not. But a reliability assessment does not provide any truth about the future reliability, only a prediction. The communication of this fact to the decision maker should be improved, since awareness of uncertainties sharpens the focus on both risk and uncertainty reduction. A new framework for reliability assessments should be established to improve the basis for making decisions in a more realistic manner, where uncertainties are reflected and integrated into compliance reports as decision support.
1.1 Objective
The project thesis (Janbu 2008) identified the main contributions to uncertainty in reliability
assessments and how these are related to the result of the assessment. It was further found that
uncertainty evaluation in reliability assessments of safety instrumented systems is of high
importance in order to ensure that decision making is based on the right foundation.
The main objective of this thesis is to study the reliability assessment procedures that are used when
developing compliance reports and, based on the findings from Janbu (2008), examine how a
representation of the uncertainties in the results may be implemented in compliance studies as a
decision support.
The main objective may be divided into the following sub-objectives:
1. Become familiar with a specified SIS (case study) and outline when and how the compliance report should be developed.
2. Identify issues (sources of uncertainty) in the development of the compliance reports that may influence the uncertainty of the results.
3. Discuss various approaches for uncertainty assessments and how the results of such analyses may be implemented in compliance studies.
3. Discuss various approaches for uncertainty assessments and how the results of such analyses
may be implemented in compliance studies.
4. Apply different models for reliability assessments for a specific safety instrumented system.
Compare the results and discuss the differences and uncertainty associated with the results.
1.3 Structure
This report answers the sub-objectives given in section 1.1:
Chapter 2 presents basic concepts and aspects related to the reliability of SIS and IEC 61508.
Chapter 3 gives an introduction to how uncertainties are involved in reliability assessments. The chapter presents the main findings from the project thesis "Uncertainty in Reliability Assessment of Safety Instrumented Systems" (Janbu 2008).
Uncertainty is described more thoroughly in chapter 4. Here, theoretical discussions regarding
different interpretations are presented.
Uncertainty assessment techniques are then handled in chapter 5. Sensitivity analysis, importance measures and uncertainty propagation are presented, described and linked to the treatment of the main contributions to uncertainty in reliability assessments.
Chapter 6 presents a discussion of when and how the compliance reports are developed, and
uncertainties are related to this process. Further, the chapter also discusses how to implement the
results from an uncertainty assessment into the compliance reports.
Chapter 7 describes the case study used for this thesis which is a high level protection system for a
topside process plant.
Chapter 8 discusses the scope, model and data used for reliability assessment of the case study. The
chapter also discusses the possible uncertainty related to these elements.
Chapter 9 is the fault tree analysis of the case study, where CARA FaultTree is used to perform
calculations. Unavailability results for base case are presented, together with a conservative FTA
approximation. Results from sensitivity analyses, importance measures and uncertainty propagation
are also presented.
Chapter 10 is the simulation analysis of the case study system, where Extend is used as software tool
for the simulations. The unavailability of the base case is calculated. Results from sensitivity analyses,
importance measures and uncertainty propagation are also presented.
Chapter 11 discusses the results from Chapters 9 and 10, and the uncertainty related to the results.
The conclusions from this master thesis are then presented in chapter 12.
2.1 General
A SIS is used to mitigate risks associated with the operation of a specified hazardous system, by detecting the onset of a hazardous event or reducing its consequences. The specified hazardous system is referred to as the equipment under control; it should be considered the source of the hazard and can vary from a single component to an entire plant. The equipment under control is protected by safety instrumented functions (SIF) in a SIS or by other suitable protection measures that will control the hazard. What distinguishes a SIS from other safety systems is its ability to evaluate signals with the help of instrumentation and thereby decide to carry out a barrier function upon a demand, independent of human actions. Figure 1 shows a simplified model of a SIS.
A SIS is composed of three main elements: input elements for detection, logic solvers for evaluation and decision, and final elements for action if needed. Input elements may be gas or fire detectors, a logic solver may be a computer, and a final element may be a safety valve. The SIS itself may consist of several SIFs.
The most important reliability measure for a SIF is the Probability of Failure on Demand (PFD). This measure quantifies the safety unavailability due to random hardware failures and denotes the probability that a SIF will fail to respond adequately upon a demand, a so-called dangerous failure. A SIS failure may be classified with regard to three aspects: cause, effect and detectability. Table 1 shows the IEC 61508 definitions of these failure classification criteria.
Table 1 Failure classification for Safety Instrumented Systems (SIS) (IEC 61508 1997)
Causes: random hardware failures (in relation with hardware); systematic failures (failure related in a deterministic way to a certain cause)
Effects: dangerous failure; safe failure
Detectability: detected failure; undetected failure
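To make the PFD measure concrete, the standard low-demand approximations found in reliability textbooks (e.g. Rausand and Høyland 2004) can be evaluated directly: for a single channel (1oo1) the average PFD is λ_DU·τ/2, and for a duplicated channel (1oo2, common cause failures ignored) it is (λ_DU·τ)²/3. The sketch below uses illustrative values for the dangerous undetected failure rate and the proof test interval; they are not taken from the case study.

```python
# Illustrative low-demand PFD approximations (all values are made up).
lambda_du = 2.0e-6   # dangerous undetected failure rate per hour (assumed)
tau = 8760.0         # proof test interval in hours (one year)

# 1oo1 voting: average PFD = lambda_DU * tau / 2
pfd_1oo1 = lambda_du * tau / 2

# 1oo2 voting, no common cause failures: average PFD = (lambda_DU * tau)**2 / 3
pfd_1oo2 = (lambda_du * tau) ** 2 / 3

print(f"PFD 1oo1: {pfd_1oo1:.2e}")
print(f"PFD 1oo2: {pfd_1oo2:.2e}")
```

With these numbers the single channel fails to meet a SIL 3 target (PFD below 10^-3), while the redundant configuration, before accounting for common cause failures, would.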
Functional safety requirements describe what the safety function shall perform.
Safety integrity requirements describe how well the safety function shall perform.
In addition to the requirements for hardware, one must also show compliance for the software and systematic safety requirements. The software requirement is a qualitative requirement which expresses the level of functional safety and the quality assurance programme required for the software development, testing and integration. This involves techniques for control and avoidance of systematic failures in the software. Avoidance and control are also the main focus of the systematic safety integrity requirements, which, similar to the software requirements, are expressed in terms of the adequacy of the management of functional safety and the required quality assurance programme (Compliance report 2008). Figure 2 shows the SIL requirements from IEC 61508.
Uncertainty in reliability assessments arises from limitations in perfectly reflecting the real-life system and its environment. There are three direct factors which affect the results and hence the associated level of uncertainty: the model, the data and the completeness of the assessment (Drouin, et al. 2009). Figure 3 shows the underlying and direct factors in a reliability assessment that affect the numerical value of the PFD and thus the uncertainty.
structure of the system and are purely deterministic. Reliability models are applied to the architectural structure, whereby the system model achieves probabilistic properties.
A model can be described as an analyst's attempt to represent a system (Parry 1996). The model used for the assessments is therefore strongly dependent on the properties of the system and the analyst's competence. The analyst has to struggle with the trade-off between the need for simplification and accuracy. There are also other underlying factors behind the choice of model. Regulations, standards, guidelines and internal company policies may require or recommend specific types of models. Further, the model depends on the lifecycle phase in which the assessment is carried out, since the level of detail required in an assessment often increases with time. The level of detail or suitability of a model is also restricted by the time, approximation formulas and software solutions available.
The choice of model forces the analyst into a system structure that is more or less in accordance with the real-life system. The model uncertainty depends on the validity of the model assumptions. Due to the limitations in including the natural variability of the real-life system, a model will at best only be an approximation (NASA 2002). Model uncertainty will therefore, to a certain degree, always exist.
3.1.4 Remarks
The contributing factors to uncertainty in reliability assessments are closely linked and can be hard to separate. An example is uncertainty in reliability data due to poor statistical modelling: what is defined as data uncertainty and what is model uncertainty in this case? Another example is lack of understanding of some system property: is the system too complex, or is it due to lack of competence? The separation is a challenge for both the underlying and the direct factors.
3.1.5 Terminology
An important aspect of this thesis is the treatment of the different concepts model, method, tool and technique. The concepts are often used in the same context; hence, a clarification of what the different concepts comprise is necessary:
A model is a simplification of the world and can be either architectural or mathematical. An architectural model is a structural model which shows the logic/relations between the different model elements. A mathematical model is described in terms of mathematical operators, constants and variables. "Model" is easily confused with the expression "method", which is a systematic and logically arranged description of how you do something. Take for example a mathematical expression: in one way it is a model, due to its way of simplifying the world in terms of mathematics; on the other hand it is a logical description of how to calculate something. In this thesis the expression "model" is preferred in such cases, as for the mathematical expression, while the expression "method" is still used where it is suited, as for an algorithm. A technique is seen as the same as a method. A tool is a remedy for employing a model, method or technique.
In the literature, the concept reliability analysis usually covers the systematic approach for describing and/or calculating reliability, while reliability assessment covers the overall process of reliability analysis and reliability evaluation. Since this thesis treats reliability analysis in the context of compliance studies, the term reliability assessment is used rather than analysis. There may still be sentences where reliability analysis would be the stricter term, but assessment is preferred in order not to confuse the reader.
λ is greater than the upper limit. Hence, the use of the upper-limit failure rate is a conservative approximation.
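The 70 % upper confidence limit referred to above can be computed from the chi-square distribution: with n failures observed over an aggregated time in service T, the upper limit is λ_up = χ²_0.70(2n + 2)/(2T). The sketch below implements this relation with a simple bisection on the chi-square CDF (valid for even degrees of freedom); the failure count and observation time are illustrative, not figures from the thesis.

```python
import math

def chi2_quantile_even_df(p, df):
    """Quantile of the chi-square distribution for even df, using the
    Poisson-sum form of its CDF and simple bisection."""
    k = df // 2
    def cdf(x):
        m = x / 2.0
        return 1.0 - sum(math.exp(-m) * m**i / math.factorial(i) for i in range(k))
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def upper_limit_failure_rate(n_failures, time_in_service, conf=0.70):
    """Conservative upper confidence limit for a constant failure rate:
    lambda_up = chi2_conf(2n + 2) / (2T)."""
    return chi2_quantile_even_df(conf, 2 * n_failures + 2) / (2.0 * time_in_service)

# Illustrative numbers (not from the case study): 2 failures in 10^6 hours.
print(f"70 % upper limit: {upper_limit_failure_rate(2, 1.0e6):.3e} per hour")
```

For the illustrative numbers the mean estimate is 2.0e-6 per hour, while the 70 % upper limit is roughly 3.6e-6 per hour, showing how the conservative convention inflates the rate used in the PFD calculation.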
There are several methods for identification and quantification of the different sources of uncertainty. In most cases, the focus is on data uncertainty related to the model parameters rather than uncertainty related to the model itself. Quantitative uncertainty assessments, like uncertainty propagation, sensitivity analysis and importance measures, generally address data uncertainty. Model uncertainty may also be treated with sensitivity studies and importance measures, but not quantified; quantification of model uncertainty is thus still a research subject, and consists of the development of epistemic models for model assumptions (NASA 2002). Uncertainty from incompleteness is a recently recognized topic and has not yet been paid much attention. Conservative calculation approaches are, however, increasingly being developed, whereby completeness uncertainty may be partly taken into account.
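As a minimal sketch of uncertainty propagation for data uncertainty, the epistemic uncertainty about a failure rate can be represented by a lognormal distribution and pushed through a simple 1oo1 PFD model by Monte Carlo sampling. All parameter values below (median rate, error factor, test interval) are illustrative assumptions, not figures from the thesis.

```python
import math
import random
import statistics

random.seed(1)

# Epistemic uncertainty about the dangerous undetected failure rate,
# represented by a lognormal distribution (parameters are illustrative).
median_rate = 2.0e-6           # per hour
error_factor = 3.0             # ratio of 95th percentile to median
sigma = math.log(error_factor) / 1.645
tau = 8760.0                   # proof test interval in hours

# Propagate each sampled rate through the 1oo1 PFD model lambda * tau / 2.
samples = sorted(
    random.lognormvariate(math.log(median_rate), sigma) * tau / 2
    for _ in range(20_000)
)

mean_pfd = statistics.mean(samples)
p05 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"mean PFD: {mean_pfd:.3e}")
print(f"90% interval: [{p05:.3e}, {p95:.3e}]")
```

The output is a distribution over the PFD rather than a point value, so the analyst can report, for example, a 90 % credibility interval alongside the point estimate in the compliance report.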
Quantitative uncertainty assessment in reliability assessments has been treated quite differently within different industries. The Norwegian offshore industry has never explicitly included uncertainty in reliability and risk assessments. It has been argued that the results themselves contain so much uncertainty that it would be meaningless to express uncertainty about uncertainty. The offshore industry has instead handled uncertainty by operating with conservative estimates, using that as an argument for always being on the right side of a confidence interval for a best estimate (Rausand and Øien 2004).
Uncertainty is treated much more comprehensively within the nuclear industry, where uncertainty assessments have always been explicitly included in reliability assessments or Probabilistic Safety Assessments (PSA, named PRA in the U.S.). This has led to systematic work on developing new methods for identifying and reducing the uncertainties involved in the assessments. Reliability assessments of nuclear power plants have also been required to be public, which has resulted in open criticism and new requirements for improvements (Rausand and Øien 2004).
some further conditions (O'Hagan and Oakley 2004). This discussion raises the question of where the boundary lies between our inability to understand and true randomness, or whether such a boundary exists at all.
At the present time, with available technology and resources, it is practically impossible to achieve complete knowledge about every system or process within reasonable time. The only viable approach is to separate those uncertainties that can be reduced from those that are less prone to reduction in the near future (Kiureghian and Ditlevsen 2009). In the words of Kiureghian and Ditlevsen:

"... there is a degree of subjectivity in the selection of models and categorization of uncertainties in engineering problem solving. This is inevitable. It constitutes the art of engineering. Ironically, this is the element of engineering that most distinguishes the quality of its practice."
because the outcomes, like the probability of one failure vs. zero failures within a time interval, are in most cases not equally likely to occur.
Relative frequency
The relative-frequency theory defines the probability of an event as the proportion of times the event occurs during a long series of repetitions of independent and identical trials. This is often difficult to apply in practice, especially for problems where events occur rarely, like failures of safety instrumented systems. The observation time for each interval must be sufficiently long for the relative frequency to reflect the true probability. Collection of data for such systems is therefore very expensive and time-consuming. Generic databases help the analyst by collecting data from several similar systems. These data are not plant-specific, and one may therefore question how identical the trials must be before the differences significantly affect the quality of the data.
A priori
In parallel with the relative-frequency theory, the a priori theories were developed, which integrate the other interpretation of probability: a measure of degree of belief (Watson 1993). In this philosophy, a priori knowledge refers to prior knowledge about a population, rather than knowledge estimated from recent observation. The a priori theory may be seen as both objective and subjective. It is objective in that a priori knowledge should be based on available data and not be a personal opinion; but it is also subjective in that the evaluation is based on what a person believes, which may vary and gives no single definite answer. Hence, a priori knowledge lies on the boundary between the Bayesian and frequentist approaches to statistics.
4.3 Summary
According to the relative-frequency interpretation, it is due to aleatory uncertainty that we cannot predict whether the event A will occur in the next trial or not. As more trials are performed, the fraction of successes will tend toward the true probability, P(A). P(A) therefore reflects the aleatory uncertainty of the occurrence of the event A. Our lack of knowledge about the true probability, P(A), is due to epistemic uncertainty and can be reduced by collecting more data (Flage, Aven and Zio 2009).
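This convergence of the relative frequency toward P(A) can be illustrated with a short simulation; the "true" probability below is of course an assumption made for the illustration.

```python
import random

# Simulation of the relative-frequency interpretation: the fraction of
# trials in which event A occurs tends toward P(A) as the number of
# independent, identical trials grows.
random.seed(42)
P_TRUE = 0.3  # hypothetical "true" P(A)

def relative_frequency(n_trials):
    """Fraction of n independent, identical trials in which A occurred."""
    hits = sum(1 for _ in range(n_trials) if random.random() < P_TRUE)
    return hits / n_trials

for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: relative frequency = {relative_frequency(n):.4f}")
```

With few trials the observed fraction scatters widely around P(A), which is why rare events, like SIS failures, make this interpretation hard to apply in practice.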
This report is limited to the probabilistic framework. This framework has received its name from the use of probability density functions to describe the uncertainty. Other methods for representing uncertainty than the probabilistic framework are imprecise/interval probability, evidence theory, fuzzy probability and possibility theory. For more information about alternative uncertainty representations, the reader is recommended the article by Flage, Aven and Zio (2009). How uncertainty is understood defines how it is represented, see Table 3.
Table 3 Notions of uncertainty and associated representations, adapted from Flage, Aven and Zio (2009)

Notions of uncertainty      Representation
Indeterminacy               Imprecise/interval probability
The realist interpretation is the most common way of understanding probability in reliability assessments. It is objective, and thus it is also uncontroversial as a basis for decision making. But it is also in conflict with an important part of reliability assessments: the subjective interpretation, and hence the use of expert judgement. This contradiction has been discussed since the origins of probability. The need for an objective interpretation leads to understanding probability as purely statistical, based on stochastic laws of chance processes. But because the objective assessment fails to incorporate knowledge, a need for an epistemological interpretation arises. Here, probability is defined as a degree of belief, which is in great contrast to the objective view (Watson 1993).
IEC 61508 and other standards or regulations often require demonstration of compliance with a quantitative criterion. Such requirements are built upon a realist interpretation, where probabilities are objective and measurable as a true property of the system under evaluation; otherwise such requirements would not make sense.
The predictive epistemic approach is fully subjective and holds that probability is only an uncertainty measure for (possibly) observable quantities. In this framework it is therefore meaningless to discuss uncertainties in the PFD estimate. This is often problematic, since there are several aspects of a reliability assessment that raise reasonable doubt about whether the right answer is achieved.
5.1 General
Randomness of physical processes modelled in reliability assessments leads to the use of probabilistic models. Based on the scenario, model assumptions and parameters are set from the available knowledge about the behaviour of the system at hand. Uncertainty is associated with these conditions, and hence probabilistic models are also used to represent our state of knowledge regarding the numerical values of the parameters and the validity of the model assumptions (NASA 2002). It is important that both the natural variability of physical processes (aleatory uncertainty) and the uncertainty in our knowledge of these processes (epistemic uncertainty) are sufficiently accounted for in the decision-making process.
The main difference between a reliability assessment and an uncertainty assessment is that reliability assessments express the aleatory uncertainty about the future failure behaviour of a system, while uncertainty assessments mainly express epistemic uncertainty about the information (model output) which the reliability assessment provides; that is, the uncertainty in the system model prediction.
Generally, there are three main techniques used for quantifying the effect uncertain model input has on the model output:
- Sensitivity analysis: methods for analyzing how the variation (uncertainty) in the model output can be apportioned to different sources of variation in the model input
- Importance measures (sensitivity coefficients): methods used to identify the dominant contributors to failures in a system model
- Uncertainty propagation (uncertainty analysis): methods for analyzing how an input uncertainty transforms onto the model output
All these techniques are well suited for describing data uncertainty, due to their application to numerical values. But they can also describe model uncertainty, by changing assumptions and the structure of the model and observing how this affects the level of uncertainty. Further, uncertainties due to incompleteness may be described through sensitivity analysis or conservative approaches, where the model and/or data are updated with the new scope.
As discussed in Chapter 3, there are several sources of uncertainty in a reliability assessment. Generally, the process of a reliability assessment can be described as in the stippled frame in Figure 4. Here, the model inputs are assumed to consist of both uncertain and fixed inputs. The system model calculates, from the given input such as failure rates, the model output, which may be results like the PFD.
The input uncertainty may be the assumed uncertainty distribution of an uncertain input, like the assumed uncertainty distribution for a failure rate. The output uncertainty is the uncertainty distribution of the estimated results, like the results from uncertainty propagation. It is also often of
interest to study quantities of the uncertainty distribution, called quantities of interest. These may be the variance of the uncertainty distribution, the min and max values from a simulation, etc. If a decision is to be made after the uncertainty assessment, a predefined criterion has to be set. The criterion may be linked to the model output itself, like the PFD, or to the output uncertainty, like confidence bounds that the model output should satisfy. If the decision criteria are not met, the feedback actions should use the results from the assessments to reduce the dominant contributors to uncertainty. This could be done through design changes, expert evaluations, extended data collection etc., depending on what type of uncertainty is dominating.
Figure 4 Framework for uncertainty assessments, adapted from de Rocquigny, Devictor and Tarantola (2008)
- Indication of which inputs in the analysis cause the largest change in reliability when their value is changed
- Identification of which components' data quality the analysis is, or is not, sensitive to
If the proportional change in the model output (result) is large compared to the change in the input, we say that the system is sensitive to the input element that was changed. It is important that only one
element is changed at a time, with the others fixed at a baseline value, in order to make the results comparable.
Notice that sensitivity analyses do not describe the level of uncertainty associated with uncertain inputs or elements, only the effect of their changes. Elements with high sensitivity are therefore not necessarily associated with great uncertainty. However, uncertain inputs that are found to have a great impact on the result are also suspected to spread their associated level of uncertainty onto the result with the same degree of impact. Hence, inputs with high sensitivity should be further investigated with uncertainty analysis. Elements with low sensitivity, on the other hand, should not be dedicated resources for further analysis, since their impact on the results is not significant.
Sensitivity analyses can therefore be used as a tool for identifying which sources of uncertainty weigh most on the conclusions of a reliability assessment, and thus for allocating resources more efficiently. Further, sensitivity analyses can also be applied as a quality assurance tool for the robustness of the reliability modelling, and hence for avoidance of model uncertainty.
The modelling is conditional on the validity of its assumptions. The assumptions may change the architectural model or the data used for the assessment, and can have a high influence on the final results. Model assumptions are in fact often used to overcome data uncertainty caused by shortcomings of data. To compensate for the lacking information, great weight is put on the analyst's judgement (NASA 2002). In case of high model uncertainty, the impact of such assumptions can also be examined with sensitivity analysis, to ensure that the simplifications and limitations made will not significantly affect the results.
As discussed in this section, sensitivity analysis is well suited for investigating the possible effects that both input data and model uncertainty may have on the results. Uncertainty from incompleteness may also be investigated with this technique: a sensitivity analysis can measure the effects of completeness uncertainty by including or excluding possibly relevant elements, like failure modes, and then evaluating whether they are significant for the results or not.
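The include/exclude idea can be sketched in a few lines; the failure rates below are hypothetical, and the simple 1oo1 PFD approximation is used only as a stand-in for the real system model.

```python
# Sketch of probing completeness uncertainty: include or exclude a possibly
# relevant failure mode and compare the results (hypothetical rates).

TAU = 8760.0              # proof-test interval [h] (assumed)
lam_in_scope = 3.0e-7     # failure modes already in the scope (assumed)
lam_candidate = 5.0e-8    # failure mode of uncertain relevance (assumed)

pfd_excluded = lam_in_scope * TAU / 2.0
pfd_included = (lam_in_scope + lam_candidate) * TAU / 2.0

rel_effect = (pfd_included - pfd_excluded) / pfd_excluded
print(f"including the candidate failure mode changes the PFD by {rel_effect:+.1%}")
```

If the relative effect is small compared to the decision criterion, the exclusion is defensible; if not, the scope should be extended.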
which is the partial derivative of the system reliability with respect to the component reliability, pi. The letter h is here used to express that the components are independent. We see from this that a large value of Birnbaum's measure occurs when a small change in the reliability of component i results in a large change in the overall system reliability, and hence the system is sensitive to component i. We recognize this as classical sensitivity analysis. Through pivotal decomposition when the components are independent, Birnbaum's measure can further be written as
\[
I^B(i \mid t) = \frac{\partial h(\mathbf{p}(t))}{\partial p_i(t)} = h(1_i, \mathbf{p}(t)) - h(0_i, \mathbf{p}(t)).
\]
Birnbaum's measure can thus be written as the difference in system reliability from the situation when component i is considered perfect, h(1i, p(t)), to the situation when the component has failed, h(0i, p(t)). This situation is illustrated in Figure 5. We see that the slope of the line is equal to Birnbaum's measure, since the function is with respect to pi(t), which here increases by one unit. Note that Birnbaum's measure depends only on the system structure and the reliability of the other components, not on the actual reliability of component i. This may be considered a weakness of Birnbaum's measure, since it provides an importance measure with respect to a specific component while being independent of the reliability of the component to be measured.
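The formula above is easy to evaluate for a small example. The structure and reliabilities below are hypothetical: component 1 in series with a parallel pair (2, 3).

```python
# Sketch of Birnbaum's measure I_B(i) = h(1_i, p) - h(0_i, p) for a small
# hypothetical system: component 1 in series with a parallel pair (2, 3).

def h(p):
    """System reliability for the assumed structure: 1 AND (2 OR 3)."""
    p1, p2, p3 = p
    return p1 * (1 - (1 - p2) * (1 - p3))

def birnbaum(i, p):
    hi = list(p); hi[i] = 1.0   # component i assumed perfect
    lo = list(p); lo[i] = 0.0   # component i assumed failed
    return h(hi) - h(lo)

p = [0.95, 0.90, 0.90]          # component reliabilities (hypothetical)
for i in range(3):
    print(f"I_B({i + 1}) = {birnbaum(i, p):.4f}")
```

Note that changing p[0] itself does not change I_B(1), which illustrates the weakness mentioned above: the measure is independent of the reliability of the component being measured.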
A component i is said to be critical for the system if the other components in the system are in such a state that the system functions only if component i functions. Rausand and Høyland showed that Birnbaum's measure can, based on this definition of a critical component, be defined as "... the probability that the system is in such a state at time t that component i is critical for the system". For the deduction of this definition, the reader is referred to Rausand and Høyland (2004).
A component with a high Birnbaum's measure is a component that the system reliability is vulnerable to. A failure of this component is thus critical for the overall reliability. This is not the same as saying that the component is likely to cause a system failure (such probabilities are often called blaming measures). Take for example a parallel structure of two 1oo1 voted components, where a CCF of the two components in parallel is modelled as a component in series
with the parallel structure. The CCF component will score a high value according to Birnbaum's measure, since the change in system reliability is large: the system fails if the CCF occurs, h(0CCF, p(t)) = 0. But given that the system has failed, the CCF may not be the most likely cause, since the probability of a CCF may be lower than the probability of failure of the parallel structure. Birnbaum's measure evaluates the effect of a failure on the system reliability rather than the likelihood of a failure (blaming). Two well-known blaming measures are the improvement potential measure and the criticality importance measure.
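A numerical sketch of this example (all unavailabilities hypothetical) makes the contrast concrete:

```python
# Numerical sketch of the example above (hypothetical unavailabilities):
# two components in parallel, with the CCF modelled as a series "component".

q = 0.10        # unavailability of each parallel component (assumed)
q_ccf = 0.002   # unavailability of the CCF element (assumed)

def h(p_a, p_b, p_ccf):
    """System reliability: CCF element in series with the parallel pair."""
    return p_ccf * (1 - (1 - p_a) * (1 - p_b))

# Birnbaum's measure of the CCF element: h(1_CCF, p) - h(0_CCF, p), where
# h(0_CCF, p) = 0 because the system fails whenever the CCF occurs.
ib_ccf = h(1 - q, 1 - q, 1.0) - 0.0
# Birnbaum's measure of one of the parallel components:
ib_comp = h(1.0, 1 - q, 1 - q_ccf) - h(0.0, 1 - q, 1 - q_ccf)

# "Blaming": which event is more likely to have caused a system failure?
p_fail_ccf = q_ccf
p_fail_pair = (1 - q_ccf) * q * q

print(f"I_B(CCF) = {ib_ccf:.3f}  vs  I_B(component) = {ib_comp:.3f}")
print(f"P(failure via CCF) = {p_fail_ccf:.4f}  vs  via pair = {p_fail_pair:.4f}")
```

With these numbers, the CCF element has by far the highest Birnbaum's measure, yet a system failure is more likely to be caused by both parallel components failing independently, which is exactly the effect-versus-blame distinction described above.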
Since the ranking given by this measure indicates which components are most important to improve the reliability of, with respect to the overall system reliability, it can also be interpreted as indicating which component is most likely to blame if a failure occurs. In reality it is impossible to
achieve perfect reliability for a component. Thus, it may also be useful to calculate the improvement potential for a more credible value which reflects the achievable reliability, denoted pi(n)(t). A credible improvement potential (CIP) is then the improvement potential evaluated with respect to pi(n)(t) rather than a perfect component.
The criticality importance measure can be written as
\[
I^{CR}(i \mid t) = \frac{I^B(i \mid t)\, q_i(t)}{Q_0(t)},
\]
where qi(t) and Q0(t) denote, in fault tree notation, the unavailability of component i (basic event) and the system unavailability, respectively.
In other words, the criticality importance measure describes the probability that component i caused the system failure, given that the system is failed at time t. In practice, this measure is frequently used to prioritize maintenance tasks, because a repair of the blamed component will make the system function again. Such prioritizing of maintenance tasks is a great time-saver, especially for large, complex systems.
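The formula above can be evaluated directly in unavailability (fault tree) notation. The structure and basic-event unavailabilities below are hypothetical.

```python
# Sketch of the criticality importance I_CR(i) = I_B(i) * q_i / Q0, in fault
# tree (unavailability) notation, for a hypothetical structure:
# basic event 1 in series with the parallel pair (2, 3).

def q0(q):
    """System unavailability for the assumed structure."""
    q1, q2, q3 = q
    return 1 - (1 - q1) * (1 - q2 * q3)

def birnbaum(i, q):
    hi = list(q); hi[i] = 1.0   # component i failed
    lo = list(q); lo[i] = 0.0   # component i perfect
    return q0(hi) - q0(lo)

def criticality(i, q):
    return birnbaum(i, q) * q[i] / q0(q)

q = [0.05, 0.10, 0.10]          # basic event unavailabilities (hypothetical)
for i in range(3):
    print(f"I_CR({i + 1}) = {criticality(i, q):.3f}")
```

Unlike Birnbaum's measure, the criticality importance weights each component by its own unavailability, which is what makes it suitable as a blaming measure for maintenance prioritization.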
- First, assign a probability density function (pdf) to each of the random (uncertain) input parameters. The pdf reflects the state of knowledge and represents the epistemic
uncertainty related to the parameter. The pdf can be selected from different distributions, depending on which properties are best suited for the component or system it represents. In reliability assessments, the lognormal or gamma distribution is usually used as the pdf for data uncertainty.
- Then, generate a pdf for the output function by combining the input pdfs.
The combined pdf then reflects the uncertainty associated with the estimated reliability, but should still be interpreted carefully, because the combined distribution only reflects a portion of the uncertainty, namely the data uncertainty. In addition, the confidence in the combined pdf also depends on the validity of the model assumptions of the system model and on the distributions assumed for the input parameters (NASA 2002). Figure 7 shows the relation between the uncertain parameters, the uncertain events like the unavailability of components, x, and the reliability of the system as a function of x, R = h(x1, x2, ...).
Three main techniques are used to propagate uncertainty: simulation, moment propagation and discrete probability distributions (NASA 2002). Due to the development of computer tools, simulation has become the most common technique, and thus this section is limited to simulation techniques. With computer software tools it is possible to simulate the data uncertainty through a system model many times, generating the combined pdf and letting the resulting statistics define its properties, like fit, mean, variance, percentiles and other quantities of interest (de Rocquigny, Devictor and Tarantola 2008). The simulation tool repeats scenarios either a number of times or over a defined span of time. Simulation is a useful tool when modelling the future behaviour of systems with low failure frequencies, since the real-life system produces small amounts of data, and thus the data uncertainty may be large.
The simulation maps the distributions of the input parameters into a combined distribution by using sampling techniques. The two best-known sampling techniques in software tools at present are Monte Carlo sampling and Latin Hypercube sampling (LHS).
Source \ Method             Uncertainty propagation
Data uncertainty
Model uncertainty
Completeness uncertainty
A compliance study is the documented verification process of functional safety for a SIS. The results are the foundation for important decisions with regard to design, safety, economic stakes and legal requirements. The information provided by the reliability assessments is associated with uncertainty arising from the assessment process itself. The level of uncertainty in the results should be presented to the decision maker. This chapter presents the process of documentation and compliance studies through a lifecycle and a framework for uncertainty representation.
The safety lifecycle has 16 phases, each of them described in detail in IEC 61508-1, section 7. Roughly speaking, we can divide them into the three following groups:
6.1.1 Analysis
A concept and scope are selected based on an understanding of the equipment under control and its environment. A functional safety management plan then defines the responsibilities, organisation and management planning for the SIS development. Hazard and risk analyses are performed in order to identify the safety functions necessary for prevention and control of the risk associated with the equipment under control. Thus, we say that IEC 61508 is a risk-based approach. The overall safety requirements are defined and documented in a safety requirement specification (SRS) report. The requirements are allocated to technology and functions, and a preliminary design is established.
6.1.2 Realization
A compliance study is carried out in the beginning of the realization phase, where reliability calculations of the safety functions are performed in order to evaluate whether the requirements from the SRS are met. At this time, the detailed design has just started, and the data available for the reliability assessment vary a lot. OLF 070 recommends the use of generic data for compliance studies, but this is not always possible, mainly due to three things: the newness of the technology, the progress of the design (vendors may already have been selected before execution of the compliance study), and the technology available on the market. Generic data do not exist for new technology, due to the lack of field experience. The SIS integrator (manufacturer) may also already have decided which supplier to use for the different components, and thus have vendor data available. In some cases, the possible suppliers for a certain technology are limited; the SIS integrator is then forced to use, or prefers, a specific supplier, and thus knows from earlier experience which vendor data to use. The SRS may be updated based on the results from the compliance study.
During realization, the detailed design is updated with possible new vendor data and more specific requirements from the SRS. A functional safety assessment (FSA) should be carried out during realization in order to verify the hardware, software and integrated system against the specified requirements.
Detailed plans are developed for maintenance, operation, safety validation, installation and commissioning. Vendors for components are selected and have to deliver a safety analysis report documenting that the equipment to be delivered satisfies the requirements from the SRS.
Installation and commissioning are executed according to the plans. Overall validation is then performed before operation begins. During the whole realization phase, the focus should be on revealing failures and avoiding introducing them.
6.1.3 Operation
Operation should be performed according to the plans. Here too, the focus should be on revealing failures and avoiding introducing them. This phase is extremely important for the collection of field data. The data may be used as input to generic databases, but also as input for verification that the SIS has achieved the required SIL.
Necessary modifications are performed, with iteration back to the relevant phase, until decommissioning.
6.1.4 Documentation
The documentation through the safety lifecycle is vital for achieving functional safety in a proper manner. IEC 61508 is seen as a large and complex standard, and several reports have been published on how to interpret and apply it. Among these is OLF 070, the guideline for application of IEC 61508 and IEC 61511 in the Norwegian petroleum industry.
The guideline stresses the difference between verification and validation. According to the IEC standards, verification is an independent check of each phase in the safety lifecycle in order to demonstrate that the deliverables meet the requirements. Validation is seen as an extension of verification, where the check covers several phases, not only one. The quality assurance of the safety requirements specification is not defined as validation, but as an FSA. The difference between the concepts is shown in Figure 9.
Figure 9 Relationship between FSA, validation and verification (OLF 070 2004)
During the whole lifecycle, FSAs are typically carried out through audits. The FSA may be performed by an independent third party in order to ensure that the process of achieving functional safety for the SIS during the safety lifecycle is on the right track. The compliance reports are the documentation of whether the safety integrity requirements are met or not. The FSA, on the other hand, checks whether the overall functional safety for the SIS is achieved through the specification and fulfilment of requirements during the safety lifecycle.
IEC 61508 does not provide an explicit method for performing an FSA, only a framework. The necessary documentation through the lifecycle, and the responsible parties, are shown in Figure 10.
Due to the IEC's interpretation of validation and verification, a compliance study may be seen as both verification and validation. For example, the results from the reliability assessments are checked against the quantitative hardware requirements as stated in phase 5. But these requirements were already stated in the initial SRS from phase 4; thus the hardware compliance is also a validation. A compliance study is performed by the SIS integrator, who has to document that the safety requirements from the SRS are fulfilled for the integrated system. This means showing compliance with all the SIL requirements, as shown in Figure 2:
1. The calculated PFD shall be less than the maximum accepted PFD for the required SIL (checked through reliability assessments compared against the SIL requirements for hardware in IEC 61508-1, tables 2 and 3)
2. The required HWFT shall be achieved (checked through the tables in IEC 61508-2, 7.4.3.1.2)
3. The software requirements shall be fulfilled (fulfilment of requirements from IEC 61508-3, usually covered by the instrumentation supplier)
4. Avoidance and control of systematic failures (one must be able to document low technological uncertainty and a QA system; the QA system is usually documented through conformance to ISO 9001:2000. For new technology, one shall document avoidance as described in IEC 61508-2, clauses 7.4.4 and 7.4.5 and Annexes A and B. Detailed completion of the tables in Annexes A and B is regarded as compliance with the requirements)
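Requirement 1 in the list above amounts to a band check. A minimal sketch, using the low-demand average-PFD bands of IEC 61508-1 (table 2):

```python
# Minimal sketch of requirement 1: checking a calculated PFD against the
# low-demand average-PFD bands of IEC 61508-1 (table 2).
# Bands: SIL n corresponds to 10^-(n+1) <= PFD_avg < 10^-n.

SIL_BANDS = {1: (1e-2, 1e-1), 2: (1e-3, 1e-2), 3: (1e-4, 1e-3), 4: (1e-5, 1e-4)}

def complies(pfd_calc, required_sil):
    """True if the calculated PFD is below the upper limit of the required SIL band."""
    _, upper = SIL_BANDS[required_sil]
    return pfd_calc < upper

print(complies(5e-4, 2))   # PFD well within the SIL 2 band
print(complies(5e-3, 3))   # PFD too high for SIL 3
```

The other three requirements (HWFT, software, systematic failures) are qualitative table checks and cannot be reduced to a single numerical comparison in this way.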
the system components and reliability models are not selected yet. Hence, the necessary data for the product development are not known at this point in time.
Stage II is the development stage and includes phases 4 and 5. This stage is mainly relevant for new technology, since proven technology usually is sufficiently developed and verified. The specifications derived from the pre-development stage are tested in a closed environment, similar to the future field environment, at both component and system level. For new technology, this stage will be used to review the design from phases 2 and 3.
Stage III is the post-development stage and includes phases 6, 7 and 8. Phase 6 deals with the processes from production until commissioning, which involves production, construction and overall installation and commissioning planning. Phase 7 treats the logistics, installation, operation and disposal of the product. Phase 8 is an evaluation phase of the product at a business level, where the actual reliability and the improvement potential are presented (Janbu 2008).
Figure 11 Lifecycle from a producer perspective (Murthy, Østerås and Rausand 2007)
Three important PFD measures are discussed in relation to the lifecycle model: the required PFD, the predicted PFD and the estimated PFD. The required PFD is the requirement which arises from the SRS. This value is a result of the reliability allocation to the SIF. The compliance report has to document whether the SIS, during early design, satisfies the required PFD, by performing reliability calculations with either generic or vendor data. The calculated PFD from the reliability assessment is called the predicted PFD. The estimated PFD emerges during operation and field experience, and is the PFD calculated with the data collected in the field. For an overview of these concepts, see Figure 12.
The requirements are set, allocated and thereafter documented in the SRS in phase 1. A compliance study should then be executed after the preliminary design in phase 2. Thus, a compliance study is often performed during phase 2 or in the beginning of the detailed design in phase 3.
Early in stage I, the product is still only roughly planned. The components, reliability model and data to be used may still be unknown to the SIS integrator. The results, and thus the predicted PFD, from a
reliability assessment at this stage are therefore more uncertain than at a later stage in the lifecycle
due to the lack of knowledge.
The development stage (stage II) often results in a redesign, due to failures or improvement potential identified by the laboratory testing and the design review of the detailed design from phases 2 and 3. If redesign is necessary, the producer has to go back to phases 2 and 3.
Application area \ Technology      Proven      New or unproven
Known
New
be taken into consideration when setting the scope and evaluating the results. Such uncertainties may be accounted for by using conservative approximations, performing uncertainty propagation for the available data, and using expert judgement where data are limited or not available.
For compliance studies, the whole framework presented is the SIS integrator's responsibility. In practice, reliability assessments are often handled by consultancy companies, in order to hire competence or to obtain independent evaluations.
The definition of the decision frames the problem to be assessed. In compliance studies, the decision for hardware safety integrity is whether the SIS complies with the required SIL according to the procedures of IEC 61508. The requirements then have to be assessed and later documented in an SRS.
STEP 1 involves initiating a reliability assessment, where the scope has to be defined, models have to be selected and data collected. The scope has to include all significant contributors to unavailability. The level of detail should also be such that the assessment is sufficiently documented. STEP 1 for the case study is presented in Chapter 8.
STEP 2 is the execution of the reliability assessment as described in STEP 1. A probabilistic assessment is mentioned here because of the subsequent quantitative uncertainty assessments in STEP 3.
For the case study, this is presented in Chapters 9 (fault tree analysis) and 10 (simulation). For further information about reliability assessments, the reader is referred to Janbu (2008).
STEP 3 comprises the uncertainty assessments for model, data and completeness uncertainty, both quantitative and qualitative (judgements). Relevant methods are described in Chapter 5 and applied to the case study in Chapters 9 and 10.
STEP 4 is the basis for decision making. The conclusions drawn from the uncertainty assessment in STEP 3 should be included in the compliance report, in order to state the uncertainties related to the results. A qualitative evaluation of the level of uncertainty associated with the predicted reliability should be presented. The decision maker may not have the competence needed to interpret quantitative results from an uncertainty assessment, and a presentation of these alone may therefore be of less value.
The qualitative uncertainty evaluation may be presented as a chapter of its own before the conclusions. As part of this evaluation, the compliance report should include a process description of the reliability assessment. Further, the following information from the uncertainty assessments should be presented:
Sensitivity analysis: The information from a sensitivity analysis should provide an overall evaluation of the significance of modelling assumptions, scope and data that are suspected to be uncertain. The reasons for suspicion should also be documented. The results may be presented in a table including the original assumption (base case), the alternative assumption, and the change in the numerical results. The difference in numerical value between the quantities readily indicates the possible significance.
Importance measures: The results should include an overview of the system features that are the dominating contributors to unavailability, and thus risk. Different measures should be used in order to reflect several important reliability aspects of the system. A qualitative evaluation of the numerical values should be given.
Uncertainty propagation: The statistical quantities from an uncertainty propagation analysis, and their scope, may be difficult to interpret. It is therefore important to present a qualitative evaluation of the results, where the conclusions drawn from the plot, the min-max interval and the standard deviation are explained.
Based on this information, an overall evaluation of the three main sources of uncertainty should be discussed. It should also be emphasized which uncertainties are critical to the results and which are not (Drouin et al. 2009). The accuracy of the results should be at such a level that the decision maker can distinguish risk-significant elements from those of less importance.
In its conclusion, the compliance report should recommend compliance or non-compliance with the requirements from the SRS, based on the results from both the reliability and the uncertainty assessment. The confidence in the recommendation should also be described in light of the findings from STEP 2 and STEP 3. If the uncertainty assessment leads to a different recommendation regarding compliance than the reliability assessment does, this should be argued for with results from the uncertainty assessment and documented in the report.
If the base case results from STEP 2 do not meet the requirements from the SRS, but acceptance is still recommended based on findings in STEP 3, this should be documented as described by Drouin et al. (2009). In the opposite case, where acceptance is met but not recommended, the analyst should do the following:
- Identify possible critical uncertainties related to the results (STEP 3) and use conservative approximations for the sources of uncertainty.
- Justify the conservatism.
- Assess the confidence in the recommendation.
Figure 14 PSD and ESD High level protection system for topside plant (Compliance report 2008)
The floating production, storage and offloading (FPSO) vessel has a topside plant designed for separation and stabilization of the produced crude from the field (Compliance report 2008). The process plant consists of four vessels: the 1st stage separator, the 2nd stage separator, the degasser and the knock-out (KO) drum.
The crude first arrives at the 1st stage separator, where oil is separated from gas, water, sand, etc. The pressure drop at the separator inlet encourages the gas to flash off. Water and sand, due to their mass density, settle out below the oil and are kept behind the weir. The oil then flows over the weir and is pumped further to the 2nd stage separator. Produced water is sent to the degasser for water treatment. The process may be controlled by monitoring level, pressure and flow rate.
The oil flows from the 1st stage separator to the 2nd stage separator, where the flow usually is driven by pressure reduction. The oil from the 1st stage separator still contains about 5-10 % water. The oil is further cleaned in the 2nd stage separator, where it is separated into finer quality at the right temperature and pressure. Gas is sorted out and routed to the KO drum, and the oil is sent to storage tanks. The water remaining in the oil after this process is about 2 % or less. The water portion usually increases with field time, especially if water injection is used (Vedvik 2004). In case of overpressure in the 1st or 2nd stage separator, pressure safety valves (PSV) route gas to the flare KO drum.
The produced water behind the weir flows to the degasser, where gas is removed and sent to the flare KO drum, which routes the gas to the flare. Gas at this process plant is only waste, since there is not enough gas in the crude for production, and it will not be reinjected into the field. All gas is therefore sent to the KO drum for flaring. The intention of the flare KO drum is to keep liquid hydrocarbons out of the flare stack. The routing of gas from the separators will cause condensate in the KO drum. This condensate is therefore pumped back to the 2nd stage separator during normal circumstances.
A scenario for this topside plant is the overfilling of liquids in the flare KO drum, where the liquids flow into the flare. This would create a possibility of burning hydrocarbons falling down on the FPSO and a build-up of backpressure within the flare and vent system (Compliance report 2008). In order to prevent overfilling, each vessel is equipped with safety instrumented systems (SIS).
The purpose of the flare KO drum is to prevent liquid from entering the flare stack, and hence the KO drum has no relief system for liquid. According to ISO 10418, a pressure vessel whose relief system is not designed for liquid requires two levels of protection against overfilling. The same argument is valid for the separators and the degasser, and thus all four vessels each have two layers of protection: the high level shutdown through the PSD logic system and the high level shutdown through the ESD logic system. In order for the PSD and the ESD system to be counted as two different barriers, it is important that they are independent of each other. The main purpose of these systems is, given a demand, to close the inlet stream to the plant.
The PSD and ESD systems consist of more valves and solenoids than shown in Figure 14, but since the main purpose of the high level protection systems for each of the four vessels is to close the inlet stream to the plant, only the most important valves are shown. A closer description of the SIS for each vessel is given in sections 7.1.1, 7.1.2 and 7.1.3.
The PSD system trips the XY solenoid on XV3 and the PSD solenoid on the ESD valve EV1. The ESD system has a higher safety rank than the PSD system, and hence the ESD system, given a demand, trips the ESD valve EV1 and initiates a total process shutdown through the PSD system.
7.1.3 Degasser
The degasser is, like the other vessels, protected by two safety instrumented systems, the PSD and ESD systems, with the same requirement of independence.
The PSD system receives input from the level switch indicator transmitter LSIT10, a differential pressure transmitter, while the ESD system receives input from LSIT9, a radar level transmitter.
The PSD system trips the XY solenoid on XV8. The ESD system shuts EV11 using the ESD solenoid and also carries out a total process shutdown through the PSD system.
Figure 15 Base case of high level protection system for reliability assessments (Compliance report 2008)
transmitters used for the PSD and ESD systems, respectively. According to the documentation, the difference in failure rates for these level transmitters has a minor effect on the total reliability of the combined loops (Compliance report 2008). Identical failure rates for the level transmitters may therefore be assumed for both the PSD and the ESD systems.
The environmental conditions inside the different vessels vary. The level transmitters in the 1st stage separator are exposed to a tougher environment than the level transmitters located in the KO drum. This may affect the failure rates, since the vulnerability, and thus the reliability, of the level transmitters depends on the physical conditions. The failure rate for a level transmitter, even of the same type, may vary depending on the location; it may be vessel specific. An assumption of identical failure rates for a SIS located at different vessels may therefore not be valid. Since the level transmitters are exposed to such various environments, the data used for the level transmitters of a typical vessel are assumed to be uncertain and should be investigated further. The other components, the node, solenoid and valve, are assumed to experience quite similar environments since they are not located inside the vessels, and are therefore assumed to be independent of which vessel the components control.
presented in OREDA is the average of the installation specific means. A 90 % confidence interval is also presented for each failure rate.
Several datasets are said to be a homogeneous sample if the statistical properties of any of the datasets are valid for all of them. The data used in OREDA are collected from several installations. Similar components within the same installation may be exposed to different environmental and operational conditions, and the external conditions differ even more between installations (Rausand and Høyland 2004). Hence, the data collected will not be a homogeneous sample.
The OREDA MLE is the average failure rate defined by the number of failures per time unit. The measure is valid under the assumption that the collected data form a homogeneous sample. If the data collected from several installations satisfied the criteria for a homogeneous sample, the mean value in OREDA would be equal to the MLE. A large difference between these two values indicates the opposite: the samples are inhomogeneous. The data samples collected for the compressors in Table 6 are clearly inhomogeneous due to the large difference between the mean and the MLE.
The standard deviation for the mean describes the dispersion between the samples. A high value of the standard deviation may therefore indicate inhomogeneous samples (Rausand and Høyland 2004). The standard deviation in Table 6 confirms the suspicion of inhomogeneous samples for the compressors. The 90 % confidence interval is for the mean value. This may be interpreted as the average failure rate of an installation having a 90 % probability of being within the confidence bounds.
Table 6 Printout from OREDA (OREDA 2009)

Taxonomy no 1.1.1.1.1: Machinery / Compressors / Centrifugal / Electric Motor Driven (100-1000) kW
Population: 6   Installations: 5   Calendar time*: 0.1248   Operational time: 0.0832 (10^6 hours)

Failure rates (per 10^6 hours):

Failure mode     Time basis    No of fail.  Lower  Mean    Upper    SD      MLE (n/τ)
Critical         Calendar      23*          1.31   217.71  827.93   304.49  184.33
                 Operational   23           2.02   471.18  1806.90  665.33  276.36
Failed to start  Calendar      1*           0.94   8.42    22.20    7.02    8.01
                 Operational                0.29   17.10   61.58    22.41   12.02
Unknown          Calendar      14*          0.97   132.09  499.13   183.39  112.20
                 Operational   14           1.28   285.42  1093.54  402.61  168.22
Vibration        Calendar      1*           0.94   8.42    22.20    7.02    8.01
                 Operational                0.29   17.10   61.58    22.41   12.02
–                Calendar      7*           0.71   65.50   243.34   89.14   56.10
                 Operational                0.70   140.94  538.64   198.24  84.11

Repair, active repair hours and man-hours (min / mean / max):
Critical: 10.0; 0.5 / 24.3 / 186.3
Failed to start: 13.0 / 13.0 / 13.0
Unknown: 10.0; 0.5 / 24.0 / 186.3
Vibration: 11.4 / 11.4 / 11.4
–: 0.5 / 28.5 / 117.5
The data uncertainty increases with the degree of inhomogeneity, due to the variation caused by external factors such as operational, environmental and physical aspects. When generic data from inhomogeneous samples are used, the effects of external factors and the variation between samples become part of the reliability evaluation of a system whose plant specific conditions differ from those reflected in the data. Inhomogeneous data samples are therefore unfortunate in reliability assessments.
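The homogeneity check described above, comparing the pooled MLE with the average and spread of the installation specific rates, can be sketched as follows. The per-installation data below are invented for illustration only; real values would come from an OREDA printout.

```python
import statistics

# Invented per-installation records: (number of failures, aggregated time in 10^6 hours)
installations = [(23, 0.040), (3, 0.020), (9, 0.030), (1, 0.015), (12, 0.020)]

# Installation specific failure rates (failures per 10^6 hours)
rates = [n / tau for n, tau in installations]

# Pooled MLE: total failures over total time; valid only for a homogeneous sample
pooled_mle = sum(n for n, _ in installations) / sum(tau for _, tau in installations)

# OREDA style summary: average of the installation specific rates, plus their spread
mean_rate = statistics.mean(rates)
sd_rate = statistics.stdev(rates)

# A large gap between mean_rate and pooled_mle, or a standard deviation of the same
# order as the mean, suggests that the installations do not form a homogeneous sample
print(f"pooled MLE = {pooled_mle:.1f}, mean of rates = {mean_rate:.1f}, SD = {sd_rate:.1f}")
```

With these invented numbers the pooled MLE and the mean of the rates differ noticeably, and the standard deviation is of the same order as the mean, which is the pattern the text describes for the compressors in Table 6.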
Model: Reliability block diagram
Properties: functional blocks in sequential structure; binary analysis; static system behaviour; success oriented network
Disadvantages: only two possible states, functioning or failed

Model: Fault tree analysis
Properties: logical structure
Advantages: easier to identify failures; more comprehensive than RBD

Model: Markov analysis
Properties: multiple state analysis; dynamic system behaviour
Advantages: able to model more than two states

Model: Simulation
Properties: system operation simulated in a software program; process oriented network; reliability data generated from a believed distribution
Advantages: simple to apply; produces results that are hard to solve analytically
Disadvantages: results depend on number of simulations; cannot simulate systems with static components
Both the PSD and the ESD function, as illustrated in Figure 15, are based on a series structure where all single elements consist of 1oo1 logic. This means that a common cause failure (CCF) can only occur between PSD and ESD elements, due to the similarity between the PSD and the ESD. Either the input elements, the logic, or the output elements of the combined loop may fail due to a CCF. The CCFs are modelled with a standard β-factor model, where the β values used are presented in the data dossier.
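A minimal sketch of how the β-factor model enters the PFD calculation is given below, reduced to a single 1oo1 element per branch for brevity. The failure rate and β value are illustrative stand-ins, not the values from the data dossier; only the test interval of 8760 hours follows the case study.

```python
# Hypothetical example values; the real lambdas and betas are given in the data dossier
lam_du = 4.65e-6   # dangerous undetected failure rate per hour (illustrative)
tau = 8760.0       # test interval in hours (one year, as in the case study)
beta = 0.02        # assumed beta-factor (illustrative)

def pfd_1oo1(lam, tau):
    """Average PFD of a single periodically tested 1oo1 element: lambda_DU * tau / 2."""
    return lam * tau / 2

# Independent part: each branch sees the fraction (1 - beta) of the failure rate
q_branch = pfd_1oo1((1 - beta) * lam_du, tau)

# The two independent branches in parallel: both must fail
q_parallel = q_branch ** 2

# Common cause part: a "virtual" 1oo1 element with rate beta * lambda_DU in series
q_ccf = pfd_1oo1(beta * lam_du, tau)

# System PFD: the CCF contribution dominates the product of the independent branches
q_system = 1 - (1 - q_parallel) * (1 - q_ccf)
print(f"branch PFD = {q_branch:.3e}, CCF PFD = {q_ccf:.3e}, system PFD = {q_system:.3e}")
```

Even with a small β, the common cause term is of the same order as the product of the two branch PFDs, which is why the CCF events dominate the importance rankings later in the chapter.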
For each basic event i, the average PFD of a single component is calculated in CARA FaultTree by the approximation formula found in Rausand and Høyland (2004):

q̄_i = λ_DU,i · τ / 2
A fault tree with m minimal cut sets may be modelled as a series structure of the m minimal cut parallel structures. The PFD of a minimal cut set j with independent components can be written as

Q_MCj(t) = ∏_{i=1}^{m_j} q_i(t)    (*)
The PFD of the system can then be approximated by a conservative upper bound:

Q_0(t) ≤ 1 − ∏_{j=1}^{m} (1 − Q_MCj(t))
CARA FaultTree and several other software tools for FTA use this calculation approach when estimating the PFD for the TOP event. Rausand and Høyland (2004) showed that this approximation is good when the q_i(t)'s are small. The probability of the TOP event "overfilling of vessel given a demand" is calculated by CARA FaultTree and is equal to

Q0(t) = 4,1357 · 10^-3

This estimate is treated as the base case result for the rest of the fault tree analysis results.
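The cut set calculation described above can be sketched as follows, using a reduced set of minimal cut sets and invented q values in place of the case study data.

```python
import math

# Hypothetical average PFDs per basic event (invented, for illustration only)
q = {"PSD LT": 2.0e-2, "PSD node": 2.1e-2,
     "ESD LT": 1.8e-2, "ESD node": 2.1e-2,
     "CCF LT": 4.0e-4}

# Minimal cut sets: pairs of independent events plus a single-event CCF cut set
cut_sets = [("PSD LT", "ESD LT"), ("PSD LT", "ESD node"),
            ("PSD node", "ESD LT"), ("PSD node", "ESD node"),
            ("CCF LT",)]

def q_cut(cs):
    """PFD of a minimal cut set with independent components: product of the q_i."""
    return math.prod(q[e] for e in cs)

# Conservative upper bound: Q0 <= 1 - prod_j (1 - Q_MCj)
q_mc = [q_cut(cs) for cs in cut_sets]
q0 = 1 - math.prod(1 - x for x in q_mc)
print(f"Q0(t) <= {q0:.4e}")
```

Because the Q_MCj values are small, the upper bound is close to the plain sum of the cut set probabilities, which is why the approximation is good when the q_i(t)'s are small.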
{PSD LT,ESD LT}, {PSD LT,ESD node}, {PSD LT,ESD sol}, {PSD LT,EV}, {PSD node,ESD LT}, {PSD node,ESD
node}, {PSD node,ESD sol}, {PSD node,EV}, {PSD sol,ESD LT}, {PSD sol,ESD node}, {PSD sol,ESD sol},
{PSD sol,EV}, {XV,ESD LT}, {XV,ESD node}, {XV,ESD sol}, {XV,EV}
Figure 17 Fault tree of combined loop for typical vessel, implicit CCF modelling
There are four possible formulas to use (see article), depending on the conditions:
1) independent components
2) identical and dependent components (β-factor model applied)
3) non-identical and dependent components (β-factor model applied)
4) more complex minimal cut sets
When the average PFD_MCj is calculated for each cut set, the system unavailability can be found by a conservative upper bound approximation:

Q_0(t) ≤ 1 − ∏_{j=1}^{m} (1 − Q_MCj(t))

The calculations are documented in Appendix B. The PFD for the conservative FTA was found to be

Q0(t) = 5,1436 · 10^-3
Birnbaum's measure        Criticality importance    Improvement potential
CCF LT      9,96E-01      ESD node    2,83E-01      ESD node    1,17E-03
CCF sol     9,96E-01      PSD node    2,71E-01      PSD node    1,12E-03
CCF node    9,96E-01      PSD LT      2,68E-01      PSD LT      1,11E-03
CCF valve   9,96E-01      ESD LT      2,48E-01      ESD LT      1,03E-03
ESD node    5,43E-02      EV          1,14E-01      EV          4,73E-04
ESD LT      5,43E-02      XV          1,09E-01      XV          4,53E-04
EV          5,43E-02      CCF LT      9,81E-02      CCF LT      4,06E-04
ESD sol     5,43E-02      CCF sol     9,49E-02      CCF sol     3,93E-04
PSD node    5,20E-02      CCF node    5,27E-02      CCF node    2,18E-04
PSD LT      5,20E-02      ESD sol     5,16E-02      ESD sol     2,13E-04
XV          5,20E-02      PSD sol     4,94E-02      PSD sol     2,04E-04
PSD sol     5,20E-02      CCF valve   4,22E-02      CCF valve   1,74E-04
Birnbaum's measure ranks components according to how sensitive the system is to a change in their reliability, which depends on the component's failure data and its location in the system structure. This implies that the components or basic events to which the system reliability is vulnerable, like CCFs, will have a high ranking. Birnbaum's measure for the FTA ranks the CCF basic events first. These may thus be interpreted as the events to which the system reliability is most vulnerable, which is also the case since they are common cause failures. Remember from section 5.3.1 that Birnbaum's measure can be written as h(1_i, p(t)) − h(0_i, p(t)) for a component i. A numerical value of 9,96 · 10^-1, as for the CCF basic events, indicates that the average system reliability is equal to 9,96 · 10^-1, since h(0_i, p(t)) for a CCF is assumed to be equal to 0. In practice not all CCFs cause system failure, but in this fault tree they do.
After the CCF components follow the ESD components and then the PSD components. The PSD and the ESD are modelled identically in the system structure, but differ in the reliability data for the level transmitter. Birnbaum's measure is a statement about the components other than the one being analysed. Since the reliability data for the node, solenoid and valve are assumed to be the same for the PSD and the ESD system, Birnbaum's measure for the components in the parallel structure will depend on the other safety system's failure rate for the level transmitter. The branch in the parallel structure with the lowest reliability will therefore cause the highest ranking for the opposite branch, due to the largest gap in h(1_i, p(t)) − h(0_i, p(t)). Since the PSD level transmitter failure rate is higher than the ESD level transmitter failure rate, the ESD components are ranked higher.
The improvement potential measures the possible improvement in system reliability obtained by assuming that a component i is perfect, such that p_i(t) = 1. The potential is then the difference between h(1_i, p(t)) and h(p(t)). The improvement potential measure for the FTA ranks the ESD node first, followed by the PSD node, the PSD LT and the ESD LT. We see that the overall system reliability would be improved by 1,17 · 10^-3 if the ESD node were a perfect component. The ranking of this measure indicates which components it is most important to improve, with respect to the overall system reliability. The ranking can thus also be interpreted as which of the components is most likely to blame if a failure occurs. Some software tools, like Miriam Regina, therefore call this measure "blaming".
The criticality measure describes the probability that component i has caused system failure, when we know that the system is failed at time t. This is also a type of blaming measure, and we get the same ranking for the criticality importance measure as for the improvement potential, but with other numerical values. We see that if the system has failed, the ESD node has, with a probability of almost p = 0.3, caused the failure. We also see that the CCF basic events, to which the system reliability is vulnerable according to Birnbaum's measure, score low on the blaming measures' ranking lists. This is due to the associated low failure frequencies: the criticality of CCFs is high, but the likelihood of their occurrence is low.
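The three measures discussed in this section can be computed directly from the structure function. The sketch below uses a toy version of the combined loop, two branches in parallel in series with one CCF event, with invented unavailabilities; it is meant only to show how the measures relate to each other (IP = I_B · q_i and I_CR = IP / Q0), not to reproduce the case study numbers.

```python
# Invented illustrative unavailabilities for a toy version of the combined loop:
# (PSD branch in parallel with ESD branch) in series with a CCF event
q = {"PSD": 5.2e-2, "ESD": 5.4e-2, "CCF": 1.2e-3}

def system_q(q):
    """System unavailability Q0 for the toy structure."""
    return 1 - (1 - q["PSD"] * q["ESD"]) * (1 - q["CCF"])

def fixed(q, i, value):
    """Copy of q with component i's unavailability forced to `value`."""
    qq = dict(q)
    qq[i] = value
    return qq

Q0 = system_q(q)
measures = {}
for i in q:
    # Birnbaum: I_B(i) = Q0(q_i = 1) - Q0(q_i = 0), the sensitivity of Q0 to q_i
    birnbaum = system_q(fixed(q, i, 1.0)) - system_q(fixed(q, i, 0.0))
    # Improvement potential: reduction in Q0 if component i were perfect (q_i = 0)
    improvement = Q0 - system_q(fixed(q, i, 0.0))
    # Criticality importance: probability that i caused system failure, given failure
    criticality = birnbaum * q[i] / Q0
    measures[i] = (birnbaum, improvement, criticality)

for i, (b, ip, cr) in measures.items():
    print(f"{i}: I_B = {b:.3e}, IP = {ip:.3e}, I_CR = {cr:.3e}")
```

Even in this toy structure the pattern from the tables appears: the CCF event gets a Birnbaum value close to 1 while its blaming measures stay modest, because its unavailability q_CCF is small.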
The median value of a variable is the value such that, for a large data sample, about 50 % of the data are greater and 50 % are smaller (Sydvest 1999). The median and mean of a lognormal distribution are not equal, because the lognormal distribution is skewed to the right. The mean will always be greater than the median, since the greatest portion of the distribution lies to the right of the most likely values; hence median < mean. Selecting the median value is often difficult, since it is usually not presented in reliability data sources. CARA FaultTree solves this problem by weighting the mean such that it represents a possible median value.
It is assumed that the parameter values for the level transmitters are uncertain, for the reasons presented in sections 3.1.2 and 8.2. The sensitivity analysis for different failure rates in section 9.4 indicates that this uncertainty may have a significant impact on the results. These are thus the only components to be assigned a lognormal distribution in the fault tree. The error factor k may be found by evaluating confidence intervals in a generic database like OREDA; see the printout in Table 9 below.
Table 9 OREDA taxonomy (OREDA 2002)

Lower      Mean       Upper      SD         n/τ
1,48E-06   4,65E-06   9,26E-06   2,44E-06   4,30E-06
The taxonomy is for a standard level transmitter, for critical failure modes during operational time. The confidence bounds presented form a 90 % confidence interval for the mean value. It should be noticed that the interval is presented for the mean, not the median, but it is here assumed that the variation is approximately the same. Hence, we use this variation to estimate the error factor k:

lower = mean / k:   1,48E-06 = 4,65E-06 / k  →  k = 3,14
upper = mean · k:   9,26E-06 = 4,65E-06 · k  →  k = 1,99
A conservative value of k = 3 is therefore selected as the error factor for the CCF, ESD and PSD level transmitters. The error factors are implemented into the fault tree. CARA FaultTree uses Monte Carlo simulation to approximate the distribution of Q0(t), which is now a random variable due to the random input parameters for the CCF, ESD and PSD level transmitters. For each run, parameter values for the level transmitters are generated from the lognormal distribution, and a Q0(t) for the TOP event is calculated. The distribution of Q0(t) is shown in Figure 18:
Mean     = 0,00407673
Var      = 8,98473 · 10^-7
St.dev.  = 0,000947878
Minimum  = 0,00251559
Maximum  = 0,00811804
The range of the results is 5,60 · 10^-3. The maximum value is above the conservative approximation achieved in section 9.3. The histogram indicates that the distribution of Q0(t) is skewed to the right. The unavailability of a component, q_i(t), with a lognormally distributed parameter will also be lognormally distributed, since the lognormal property is preserved under multiplication by a constant in the deterministic calculation formulas. The skewness of the q_i(t)'s thus carries over to the results, and we therefore also get a skewed distribution of Q0(t).
The standard deviation is a measure of the dispersion of a stochastic variable, in this case the system unavailability, the PFD. The results from the FTA show that the standard deviation is approximately 1,0 · 10^-3. The smaller the standard deviation is, the more certain we are about the value of the stochastic variable under the given conditions.
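The propagation that CARA FaultTree performs can be sketched as follows: the lognormal σ is derived from the error factor k (σ = ln k / 1,645 for 90 % bounds), a failure rate is drawn per run, and the resulting Q0 values are summarized. The structure here is reduced to two transmitter branches in parallel, and the median is set to the OREDA mean as a proxy, so the numbers are purely illustrative.

```python
import math
import random
import statistics

random.seed(1)

k = 3.0                       # error factor, as selected above
sigma = math.log(k) / 1.645   # lognormal sigma such that P(m/k < lambda < m*k) = 90 %
median = 4.65e-6              # median failure rate per hour (OREDA mean used as proxy)
tau = 8760.0                  # test interval in hours

def draw_rate():
    # Lognormal with median m: ln(lambda) ~ N(ln m, sigma^2)
    return random.lognormvariate(math.log(median), sigma)

# Toy structure: two 1oo1 transmitter branches in parallel (illustrative only)
samples = []
for _ in range(20_000):
    q_psd = draw_rate() * tau / 2   # average PFD of one branch for this run
    q_esd = draw_rate() * tau / 2
    samples.append(q_psd * q_esd)   # parallel structure: product of the branch PFDs

print(f"mean = {statistics.mean(samples):.3e}, st.dev. = {statistics.stdev(samples):.3e}")
print(f"min = {min(samples):.3e}, max = {max(samples):.3e}")
```

Since the sampled rates are lognormal, the resulting distribution of the product is itself right-skewed, mirroring the skewed histogram of Q0(t) in Figure 18.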
10 SIMULATION
Simulation is a powerful tool for analyzing phenomena of high complexity, due to its flexibility. This chapter presents the reliability assessment results for the high level protection system obtained with simulation.
A model of the high level protection system is built in ExtendSim, as shown in Figure 20. The components, with id tags below the block symbols, are modelled as binary: they either function or not. This also means that they are given one of two possible values; here, the value 1 is used if the component functions and 0 if it is in a failed state. It is also assumed, as for the fault tree, that there is no repair within the test interval of 8760 hours. The simulation is performed for independent test intervals a specified number of times. The PFD is then the average unavailability of the system over a certain number of simulation runs.
The state variable X(t) denotes the state of a safety system at time t, where X(t) = 0 is the state where the safety system is not able to work as a safety barrier and X(t) = 1 is the state where it works as a safety barrier (Rausand and Høyland 2004). The state of a component i is denoted X_i(t). The function blocks decide whether the input components are modelled as a series or a parallel structure. The function "min" corresponds to a series structure, since the first component failure leads to failure of the whole structure, X(t) = 0. The function "max" corresponds to a parallel structure, since the structure functions until the last component fails, that is, until all n components in the structure have X_i(t) = 0, for i = 1, 2, …, n.
The PSD and ESD structures are each modelled as a series structure, and the combined loop is modelled as a parallel structure. The common cause failures are modelled explicitly as a series structure in series with the combined loop. This is due to the assumption that if a CCF occurs, the system will fail. The function block "Result" receives the minimum state variable from each branch, since a failure of one of these branches will cause system failure, X(t) = 0. The system state variable in "Results" is time weighted such that the mean uptime of the system for a test interval can be calculated. The function "Downtime" then calculates the downtime as Downtime = 1 − (output from "Results"). The "Stat" block calculates the mean and confidence bounds for the downtime during a test interval. The failure data used are collected from the data dossier in Appendix A and updated in the linked Excel sheet.
Simulation generates random numbers from probability distributions. It is therefore important to run the model a sufficient number of times to achieve convergence and thus stable results. NASA (2002) presented the following five step technique to increase the precision of simulation results, which was also used for the simulation in this report:
1. Set the number of iterations to at least 5000 and run the simulation.
2. Record the statistics for:
a. mean;
b. standard deviation;
c. 5th percentile;
d. median (50th percentile); and
e. 95th percentile.
3. Perform additional simulations by increasing the number of iterations by increments of at least 1000.
4. Monitor the change in the above statistics.
5. Stop if the average change for each statistic (in two consecutive simulations) is less than 1.5 %.
The system analyzed is comprised of components with very low failure rates. A high number of runs is therefore needed to stabilize the output. It was discovered that about 750 000 runs were necessary to stabilize the results. Due to the high number of runs needed, step 3 was performed roughly, by increasing the number of iterations by 100 000 each time. The change in the estimated PFD was then 0,76 %.
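The simulation logic described in this chapter, binary component states, "min" for series and "max" for parallel structures, and stepwise increase of the number of runs until the estimate stabilizes, can be sketched as follows. The failure rates are invented stand-ins for the data dossier values, and the run counts are kept far below the 750 000 used in the report so that the sketch runs quickly.

```python
import random

random.seed(42)
tau = 8760.0  # test interval in hours; no repair within the interval

# Invented dangerous undetected failure rates per hour (stand-ins for the data dossier)
rates = {"PSD LT": 4.7e-6, "PSD node": 1.5e-6, "PSD sol": 1.1e-6, "XV": 2.0e-6,
         "ESD LT": 4.3e-6, "ESD node": 1.5e-6, "ESD sol": 1.1e-6, "EV": 2.0e-6,
         "CCF": 2.6e-7}

PSD = ("PSD LT", "PSD node", "PSD sol", "XV")
ESD = ("ESD LT", "ESD node", "ESD sol", "EV")

def run_interval():
    """Simulate one test interval; return the system downtime fraction."""
    t = {c: random.expovariate(lam) for c, lam in rates.items()}  # times to failure
    psd = min(t[c] for c in PSD)            # series structure: first failure fails the branch
    esd = min(t[c] for c in ESD)
    system = min(max(psd, esd), t["CCF"])   # parallel branches, in series with the CCF
    return max(0.0, tau - system) / tau

# Increase the number of runs stepwise; stop when two consecutive PFD estimates
# differ by less than 1.5 %, following the NASA (2002) stopping rule
prev = None
for n in range(20_000, 120_000, 20_000):
    est = sum(run_interval() for _ in range(n)) / n
    if prev is not None and abs(est - prev) / prev < 0.015:
        break
    prev = est
print(f"estimated PFD = {est:.3e} after {n} runs")
```

The average downtime fraction over the runs is the PFD estimate, exactly as the "Results" and "Downtime" blocks compute it in ExtendSim.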
The printout from Excel is converted into unavailability by setting unavailability = 1 − estimated uptime. The results are listed in Table 10:
Table 10 Extend results for combined loop

Block         Unavailability
CCF           1,2124E-03
PSD           5,4050E-02
ESD           5,2085E-02
parallel      3,7158E-03
Results       4,9235E-03
Lower limit   4,8066E-03
Upper limit   5,0404E-03
We see from this that the estimated PFD for the base case, the combined loop, is

Q0(t) = 4,9235 · 10^-3

This value is treated as the base case result throughout this chapter. It should also be noticed that the PSD block accounts for the greatest share of the unavailability, followed by the ESD and the parallel blocks, while the smallest share is caused by the CCF block.
Birnbaum's measure        Improvement potential
CCF sol     9,95E-01      ESD node    1,52E-03
CCF LT      9,95E-01      PSD node    1,51E-03
CCF node    9,95E-01      PSD LT      1,47E-03
CCF valve   9,95E-01      ESD LT      1,37E-03
ESD LT      7,24E-02      EV          6,02E-04
ESD sol     7,14E-02      XV          5,67E-04
ESD node    7,05E-02      CCF sol     4,36E-04
PSD LT      6,97E-02      CCF LT      4,10E-04
PSD node    6,95E-02      ESD sol     2,89E-04
EV          6,89E-02      PSD sol     2,49E-04
XV          6,55E-02      CCF node    2,26E-04
PSD sol     6,30E-02      CCF valve   1,67E-04
According to the Birnbaum's measure results from the simulation, the CCF LT, solenoid, node and valve are considered to be the most important components, almost like the results from the FTA, where the CCF LT and solenoid have the opposite ranking. Simulation does not give exactly the same results as the FTA, because the formula for Q0(t) is stochastic rather than deterministic when simulation is applied. Hence, Birnbaum's measure will also be stochastic when simulation is used. The FTA gives the same ranking for the same data every time, due to the deterministic formulas. A high number of runs ensures that the simulation results converge towards a limit, but some randomness due to the Monte Carlo sampling will still exist. We therefore get some small differences in the ranking for those components that are similar in data and location in the system structure. The interpretation of the results is given in section 9.4.
The improvement potential has the same top ranked components as the FTA. This indicates that the same components are considered the most important to improve, with regard to the overall system reliability. For the same reasons as described for Birnbaum's measure above, the ranking of the components is somewhat stochastic due to the Monte Carlo sampling. The interpretation of the results is given in section 9.4.
f(λ; α, β) = λ^(α−1) e^(−λ/β) / (Γ(α) β^α),   λ > 0

where α is the shape parameter and β the scale parameter. The mean is defined as E[λ] = μ = αβ and Var[λ] = σ² = αβ². G. Rausand (2005) showed that integrating the product of the two pdfs, the gamma distribution for the parameter λ and the exponential distribution for the lifetime, gives the marginal distribution of the lifetime t. This is the lifetime distribution we get when repeatedly generating a value of λ from the gamma distribution and then generating a lifetime from the exponential distribution (G. Rausand 2005). The marginal distribution of t in this case is a special version of the Pareto distribution:

f_T(t) = αβ / (1 + βt)^(α+1),   t > 0

Instead of drawing values from a gamma distribution and then from an exponential distribution, we could draw directly from the marginal distribution and get the same results. But due to the built-in distributions in Extend, it is more traceable to use the gamma and exponential distributions.
The parameters can be found from:

1) E[λ] = μ = αβ
2) Var[λ] = σ² = αβ²

From 2) we get β = σ²/μ, and inserting this into 1) gives α = μ/β = μ²/σ².

The mean and standard deviation are found in OREDA; see Appendix A and Table 9 in section 9.5. The standard deviation and mean are the values from OREDA multiplied by the given β-factor. This gives the following parameters for the gamma distributions for the level transmitters:
Table 12 Parameters for gamma distribution

Parameter   PSD        ESD        CCF
Shape       4,08       3,21       3,63
Scale       1,21E-06   1,36E-06   2,56E-08
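The parameter derivation above can be checked against the printout in Table 9: with μ = 4,65E-06 and σ = 2,44E-06, the equations α = μ²/σ² and β = σ²/μ reproduce the CCF row of Table 12 once the mean and standard deviation are scaled by a β-factor of 0,02. That factor is inferred here from the ratio of the CCF scale to the unscaled scale; the actual value is given in the data dossier.

```python
# OREDA values for the level transmitter (Table 9): mean and standard deviation
mu = 4.65e-6      # per hour
sigma = 2.44e-6   # per hour

# Gamma parameters from the moment equations: mean = alpha*beta, var = alpha*beta^2
alpha = mu**2 / sigma**2    # shape
beta_scale = sigma**2 / mu  # scale

# For the CCF rate, the mean and SD are multiplied by the beta-factor before the same
# derivation; the shape is then unchanged and the scale is multiplied by the factor.
# The factor 0.02 is inferred from Table 12 (2.56E-08 / 1.28E-06), not from the dossier.
ccf_factor = 0.02
ccf_alpha = (ccf_factor * mu)**2 / (ccf_factor * sigma)**2
ccf_scale = (ccf_factor * sigma)**2 / (ccf_factor * mu)

print(f"shape = {alpha:.2f}, scale = {beta_scale:.2e}")
print(f"CCF: shape = {ccf_alpha:.2f}, scale = {ccf_scale:.2e}")
```

The computed shape of 3,63 and CCF scale of 2,56E-08 match the CCF column of Table 12; the PSD and ESD columns evidently use transmitter-specific means and standard deviations from the data dossier.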
The statistical quantities from this analysis are listed below, and the results from the uncertainty propagation runs are plotted in the histogram shown in Figure 21:

Q0(t)                   = 4,8155 · 10^-3
Lower confidence bound  = 4,6998 · 10^-3
Upper confidence bound  = 4,9313 · 10^-3
Standard deviation      = 1,1490 · 10^-3
The results are slightly lower than the base case results presented in section 10.2. This may be due to the use of the gamma distributions with the given parameters, which are skewed.
11 DISCUSSION
This chapter discusses the uncertainty in reliability assessments in light of the results from the case
study. The results are compared and each contribution to uncertainty is discussed.
                      Fault tree analysis    Simulation
Base case             4,1357 · 10^-3         4,9235 · 10^-3
Conservative          5,1436 · 10^-3         –
Standard deviation    9,47878 · 10^-4        1,1490 · 10^-3
Confidence interval   –                      [4,6998 · 10^-3, 4,9313 · 10^-3]
The results are ranked as expected. The base case results show that the calculations performed in CARA FaultTree give the lowest unavailability, which is due to the non-conservative assumption of independent basic events. Simulation should reflect the most realistic result for the system unavailability, since no approximations are used; the lifetimes are drawn directly from the assumed lifetime distributions. The unavailability from the simulation is higher than the unavailability estimated by CARA FaultTree, and lower than the unavailability estimated by the conservative approximation formulas. It is interesting to notice that the unavailability from the simulation is significantly closer to the conservative approximation (a difference of 0,2201 · 10^-3) than to the unavailability estimated by CARA FaultTree (a difference of 0,7878 · 10^-3). These results indicate that the conservative approximation formulas are good, and not too conservative.
Another interesting aspect of the base case study is that none of the results gave compliance with the SIL 3 requirement for the combined loop. Even though the fault tree model of the base case used in this report was exactly the same as in the confidential compliance report, the results were still not equal due to the use of different data. The PSD and ESD single loops should also each satisfy a SIL 2 requirement. The simulation results in Table 10 show that this requirement was met by neither the PSD nor the ESD system. The data uncertainty problem is further discussed in section 11.4.
The architectural constraints for the case study have so far not been treated. The architectural constraints tables from IEC 61508-2, 7.4.3.1.2, for type A and B systems are shown below:
Table 14 Architectural constraints on type A and B systems (IEC 61508 1997)

Safe Failure    Type A systems (HFT 0 / 1 / 2)    Type B systems (HFT 0 / 1 / 2)
Fraction
< 60 %          SIL 1 / SIL 2 / SIL 3             Not allowed / SIL 1 / SIL 2
60 % - 90 %     SIL 2 / SIL 3 / SIL 4             SIL 1 / SIL 2 / SIL 3
90 % - 99 %     SIL 3 / SIL 4 / SIL 4             SIL 2 / SIL 3 / SIL 4
> 99 %          SIL 3 / SIL 4 / SIL 4             SIL 3 / SIL 4 / SIL 4
The compliance report classified all the nodes (instrumentation) and some of the level transmitters (for example the radar) as type B systems. The PSD and ESD systems should each satisfy a SIL 2 requirement, and their subsystems (level transmitters, node, solenoid valve and valve) are designed in a 1oo1 architecture, thus HFT = 0. According to Table 14 they should satisfy an SFF of at least 60 % if they are type A systems and an SFF of at least 90 % if they are type B systems.
According to the data in the data dossier in Appendix A, neither the node nor the level transmitters that are assumed to be type B systems satisfy these criteria. The high level protection system does therefore not comply with the semi-quantitative requirements either.
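The table lookup behind this argument can be written out as a small sketch. The dictionaries below simply transcribe Table 14 (maximum allowable SIL per SFF band and hardware fault tolerance 0, 1, 2); the function and variable names are illustrative, and None encodes "not allowed".

```python
# Transcription of Table 14: maximum allowable SIL, indexed by SFF band
# and hardware fault tolerance (HFT) 0, 1, 2. None means "not allowed".
TYPE_A = {"<60": (1, 2, 3), "60-90": (2, 3, 4), "90-99": (3, 4, 4), ">99": (3, 4, 4)}
TYPE_B = {"<60": (None, 1, 2), "60-90": (1, 2, 3), "90-99": (2, 3, 4), ">99": (3, 4, 4)}

def sff_band(sff):
    """Map a safe failure fraction (0..1) to its band in Table 14."""
    if sff < 0.60:
        return "<60"
    if sff < 0.90:
        return "60-90"
    if sff < 0.99:
        return "90-99"
    return ">99"

def max_sil(system_type, sff, hft):
    """Maximum SIL allowed by the architectural constraints."""
    table = TYPE_A if system_type == "A" else TYPE_B
    return table[sff_band(sff)][hft]

# A 1oo1 (HFT = 0) type B subsystem with SFF = 83 % is limited to SIL 1
# and thus cannot claim SIL 2.
limit = max_sil("B", 0.83, 0)
```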
The sensitivity analyses showed that the difference in numerical values between the PSD and ESD level transmitters, as modelled in the base case, was insignificant for the results. An assumption of equal failure rates, as for the rest of the functions in the system, would have been sufficient.
The sensitivity analyses also showed that if the failure rate from OLF 070 was used, the unavailability would be reduced by approximately 50 % compared to the base case. The OLF 070 failure rate could have been selected as the base case failure rate for the level transmitters in these reliability assessments. The results from the sensitivity analyses therefore show that the selection of data can easily be the difference between compliance and non-compliance.
Birnbaum's measure ranks the CCF basic events first, both for the FTA and for the simulation model. According to Birnbaum's measure, the system reliability is most vulnerable to the top-ranked events, which means that the occurrence of these events causes the greatest change in system reliability. Since they are CCFs, this seems logical. Birnbaum's measure for the deterministic FTA further ranked the ESD and the PSD with quite similar results. Birnbaum's measure for the simulation model,
where Q0(t) is stochastic, ranked the PSD and ESD components in a mixed order. This is due to the similar results between the loops and the randomness of the Monte Carlo simulation.
The ESD node, PSD node, PSD LT and ESD LT are ranked highest by the improvement potential measure, both for the FTA and for the simulation. The rank order of this measure indicates which components are most important to improve with respect to the overall system reliability. Both analysis techniques therefore see these components as the most likely components to blame if a failure occurs. Some software tools, like Miriam Regina, therefore call this measure "blaming".
Another blaming measure is the criticality measure, which was only calculated for the FTA. The criticality measure ranked the components in the same order as the improvement potential measure, but with other numerical results. We see that the CCF basic events, to which the system reliability is vulnerable according to Birnbaum's measure, score low on the blaming measures' ranking lists. This is due to the low failure rates for CCFs. The criticality of the CCFs for the high level system is high, but the likelihood of their occurrence is low.
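The two importance measures discussed here can be sketched for an arbitrary structure function. The 2oo3 system below is only a stand-in for the case-study fault tree, and the component unavailabilities are invented illustration values.

```python
def birnbaum(h, q, i):
    """Birnbaum's measure I_B(i) = h(1_i, q) - h(0_i, q): the change in
    system unavailability when component i goes from working to failed."""
    hi = h([1.0 if j == i else qj for j, qj in enumerate(q)])
    lo = h([0.0 if j == i else qj for j, qj in enumerate(q)])
    return hi - lo

def improvement_potential(h, q, i):
    """I_IP(i) = h(q) - h(q with q_i = 0): the drop in system
    unavailability if component i were made perfect."""
    return h(q) - h([0.0 if j == i else qj for j, qj in enumerate(q)])

def h_2oo3(q):
    """Unavailability of a 2oo3 system of independent components
    (a stand-in, not the case-study fault tree)."""
    a, b, c = q
    return a * b + a * c + b * c - 2 * a * b * c

q = [0.01, 0.02, 0.03]  # invented component unavailabilities
ranking = sorted(range(3), key=lambda i: birnbaum(h_2oo3, q, i), reverse=True)
```

For a 2oo3 structure, Birnbaum's measure is largest for the component with the lowest unavailability, so the ranking here is [0, 1, 2]; this mirrors how low-probability CCF events can top a Birnbaum ranking while scoring low on the blaming measures.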
Uncertainty propagation in CARA FaultTree was performed correctly and gave a lower unavailability than the base case. It may be theoretically proven that, due to the skewed lognormal distribution of the failure rates, the mean value of Q0(t) for a system of non-repairable components (as estimated by the uncertainty analysis) is always below the value of Q0(t) obtained by computation using only the mean values of the failure rates (Sydvest 1999).
The plot indicated a skewed distribution of Q0(t), which is due to the skewed, lognormally distributed failure rates. The range of the unavailability results from the uncertainty propagation was 5,60·10^-3, but even the lowest value did not comply with the SIL 3 requirement. The standard deviation was found to be approximately 1·10^-3. We can expect that most of the results due to data uncertainty will lie within the mean ± 2 standard deviations, that is, a range of approximately 4·10^-3.
The uncertainty propagation in Extend turned out to be somewhat problematic, and is further discussed in section 11.1.1 below.
11.1.1 Uncertainty propagation in Extend
The uncertainty propagation in Extend was performed according to the description in section 10.4
and modelled as illustrated in Figure 21.
It was found that this way of performing uncertainty propagation is not correct. The reason is that the simulation process itself produces uncertainty due to the random Monte Carlo sampling. It is therefore impossible to sort out from the results which uncertainty is caused by the data and which is caused by the simulation. Since we do not know how these uncertainties interfere with each other, it is hard to draw any valid conclusion from the results. Many analysts today still assess the uncertainty distribution incorrectly in this way, as performed in this case study.
What is nevertheless interesting to notice is that the results from uncertainty propagation, both for the FTA (performed correctly) and for the simulation, are quite similar relative to the base case results. Both mean values from the propagation are less than the mean values from the base case. Also, the standard deviations are quite similar. The standard deviation for simulation is higher than
the standard deviation from FTA. This is as expected since the standard deviation from simulation
also includes the uncertainty from the Monte Carlo simulation.
If uncertainty propagation should be performed with Extend, one would have to:
1. Simulate a set of failure rates from the uncertainty distributions for each parameter
2. Simulate the system unavailability with the simulated data set enough times to obtain stable results
3. Repeat steps 1) and 2) a number of times in order to obtain the uncertainty distribution for the data uncertainty (the spread of stable results with stochastic failure rates). This number is often selected to be about 1000 runs.
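The three steps can be sketched as a two-loop Monte Carlo procedure. The model below is a deliberately tiny stand-in (a single periodically tested component instead of the full Extend model), the lognormal uncertainty distribution is an assumed choice, and the run counts are kept far below what the real model demands.

```python
import math
import random

def inner_pfd(rate, tau, runs, rng):
    """Inner loop: Monte Carlo estimate of the average probability of
    failure on demand for one periodically tested component. A failure
    at time t < tau leaves the component down until the next test."""
    down = 0.0
    for _ in range(runs):
        t = rng.expovariate(rate)
        if t < tau:
            down += (tau - t) / tau  # fraction of the interval spent failed
    return down / runs

def outer_propagation(mean, std, tau, outer_runs, inner_runs, seed=1):
    """Outer loop: draw a failure rate from its uncertainty distribution
    (a lognormal matched to the given mean and standard deviation, an
    assumption for illustration) and re-run the inner simulation."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + (std / mean) ** 2))
    mu = math.log(mean) - 0.5 * sigma ** 2
    return [inner_pfd(rng.lognormvariate(mu, sigma), tau, inner_runs, rng)
            for _ in range(outer_runs)]

# Far fewer runs than the 750 000 x 1000 the real model would need.
pfds = outer_propagation(4.93e-6, 2.44e-6, 8760, outer_runs=50, inner_runs=2000)
```

The spread of the 50 outer results is then an estimate of the data uncertainty, but only because each inner estimate is (approximately) stable.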
The problem with performing this procedure in Extend is that the low failure frequencies and the system structure demand about 750 000 runs for each system simulation in order to obtain stable results. This would demand 750 000 × 1000 = 750 000 000 runs in order to obtain the uncertainty distribution for the data, which is a very time-consuming and unnecessarily complicated process. Uncertainty propagation is far easier in CARA FaultTree. This is because the calculation formulas for Q0(t) are deterministic, and thus a predefined set of failure rates gives only one determined answer. CARA FaultTree only uses Monte Carlo sampling when simulating a set of failure rates for the components assumed to have uncertain data. This is done 1000 times, and then an uncertainty distribution for Q0(t) is estimated.
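By contrast, CARA-style propagation needs only one loop, because the model itself is deterministic. The sketch below uses a single 1oo1 component with PFD ≈ λτ/2 as a stand-in for the fault tree formulas, and a lognormal uncertainty distribution as an assumed choice.

```python
import math
import random

def pfd_1oo1(rate, tau=8760):
    """Deterministic approximation for one periodically tested
    component: PFD ~ rate * tau / 2."""
    return rate * tau / 2

def propagate(mean, std, n=1000, seed=1):
    """Single-loop sketch: sample failure rates from a lognormal
    uncertainty distribution (an assumed choice) and evaluate the
    deterministic formula once per sample - no inner simulation loop."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + (std / mean) ** 2))
    mu = math.log(mean) - 0.5 * sigma ** 2
    samples = [pfd_1oo1(rng.lognormvariate(mu, sigma)) for _ in range(n)]
    m = sum(samples) / n
    sd = (sum((x - m) ** 2 for x in samples) / (n - 1)) ** 0.5
    return m, sd

m, sd = propagate(4.93e-6, 2.44e-6)
```

With 1000 deterministic evaluations instead of 1000 × 750 000 simulation runs, the cost difference against the nested procedure is evident.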
Hellebust (1989) showed that a mean and a standard deviation for an uncertainty distribution for Q0(t) in a fault tree may be calculated by analytical methods alone. Analytical methods for uncertainty propagation are often less time consuming. In this case study, uncertainty propagation by simulation was found to be poorly suited to a simulation model.
Further, the scope of the assessment defines what should be included in or excluded from the assessment. Reliability databases, like OREDA, have clear boundary definitions. The incompleteness in the data due to the scope definition is therefore assumed to be insignificant.
The blaming measures rank the PSD and ESD nodes and level transmitters highest. These components are also classified as type B components, which are considered the most uncertain components with regard to completeness uncertainty. Since the components that are most likely to cause system failure are also associated with completeness uncertainty, effort should be made to improve these components' reliability.
The reliability assessment modelled only one typical vessel. The level of detail could have been higher, with modelling and exact data for all four PSD and ESD systems. Some completeness uncertainty may therefore exist due to the level of detail in the assessment. The sensitivity analyses indicated that the small difference between the PSD and ESD failure rates was insignificant compared to using equal failure rates for the two systems. Hence, the completeness uncertainty due to modelling a typical vessel instead of all four seems to be very small. This uncertainty may also be interpreted as a model uncertainty.
This report has not treated the effects of human errors. This may be seen as an unknown completeness uncertainty which may be significant. Human involvement is common during maintenance, modifications and testing, and should be included in the reliability assessments. The effect of human involvement and human errors is a relatively new research area which usually receives little attention. Many vendors claim that they deliver systems with an MTTF equal to many thousands of years. Still, many operators experience critical failures. There are obviously aspects affecting the reliability of SIS that should be further investigated; human error is one of them.
The greatest difference in component reliability due to environment is for the level transmitters, since they are the only part of the safety systems located inside the vessels. The failure rate for the level transmitters may therefore be uncertain when modelled for a typical vessel with generic data. This uncertainty is unknown, since it can only be measured through vessel-specific data, which in this case are not available.
The reliability assessments in this thesis have used generic data. Many compliance reports use vendor data for the equipment, which usually present lower failure rates than generic data do. This may have several reasons. Vendor data are often collected from newer technology than the generic data are based on; the reliability of the same type of equipment may thus have been improved. But vendor data may also have been collected only during laboratory testing, or from limited field experience. If so, important factors like human involvement, field environment, etc. are not reflected, and the failure rate may thus be underestimated. Large gaps between vendor and generic data contribute to data uncertainty. If the difference is significant, expert judgement should be used to weight the data.
Two data sources were used in this assessment, OREDA and OLF 070. It is often difficult for an analyst to know which data to use. The sensitivity study showed that the two different failure rates for the level transmitters resulted in significant differences in the estimated unavailability. This indicates that the level of data uncertainty is quite high and may be decisive for the decision to be made. Nevertheless, an interesting observation is that for the OREDA data for level transmitters, presented in Table 9, the mean and the MLE are quite similar and the standard deviation is not significantly high. This can be seen as an indication of homogeneity between the samples. The data from OREDA may therefore be of good quality. The data uncertainty in this report's assessments is rather caused by confusion about which data to use.
The numerical value of the failure rate for the level transmitters is somewhat uncertain for the reasons above. Uncertainty propagation revealed that the base case for FTA had a range between the minimum and maximum unavailability results of approximately 6·10^-3, where the minimum value was still higher than the SIL 3 requirement. In reality, the data uncertainty is probably greater, because the uncertainty propagation was related only to the level transmitters; the other components were given no uncertainty distribution.
The conservative approach is recommended for systems where FTA is applicable, because the approximation compensates for some of the uncertainty involved. Also, the approximation does not appear to be unnecessarily conservative.
An alternative to the conservative approximation may be to use an upper limit failure rate from a 70 % confidence interval, as suggested in IEC 61508 and described in section 3.2. The 70 % confidence interval for an upper limit value is not theoretically proven, and may thus be seen as somewhat arbitrary. Implementing conservative failure rates into software tools like CARA FaultTree may be easier than using the conservative approach developed by Lundteigen and Rausand, but the use of such arbitrary limits can also be seen as less objective. The conservative approximation as used in this thesis is based on mathematical theory and may be proven to be conservative for the calculated unavailability.
Sensitivity analyses and importance measures, together with a qualitative uncertainty evaluation of the reliability assessment, usually provide sufficient information about the total level of uncertainty. Importance measures also give valuable information for redesign and maintenance strategies. Redesign may be a result of a compliance report if requirements are not met. Importance measures identify the bottlenecks and risk contributors such that reliability improvement during redesign may be done more efficiently.
In cases where the data are suspected to have a high level of uncertainty, uncertainty propagation is recommended. Uncertainty propagation should be avoided for simulation models, due to the high number of runs needed; deterministic models should be preferred when applying Monte Carlo simulation to estimate the uncertainty distribution of Q0(t). The results from uncertainty propagation may be difficult to interpret for a decision maker, who may not have the competence required to understand the meaning and scope of statistical quantities. It is then more valuable to present the results as a qualitative evaluation.
12 CONCLUSIONS
The main objective of this master thesis was to study the reliability assessment procedures that are
used when developing compliance reports and, based on the findings from Janbu (2008), examine
how a representation of the uncertainties in the results may be implemented in compliance studies
as decision support. The main objective was further divided into four sub-objectives, to which this report tries to give adequate answers.
The first sub-objective was to become familiar with a SIS through a case study and to outline when and how compliance reports should be developed. The system familiarization was documented in Chapter 7 and the process of compliance reports was described in Chapter 6. The next sub-objective was to identify issues in the development of compliance reports that may influence the uncertainty of the results. This sub-objective was partly answered in the project thesis, but due to its importance in this thesis it was further described in Chapter 3. Sub-objective number 3 was to discuss the various approaches for uncertainty assessment and how to implement the information from such assessments into compliance studies. The first part of this sub-objective is documented in Chapter 5; how to implement results from uncertainty assessments into the basis for decision making is treated in section 6.3 and in the discussion in Chapter 11. The last sub-objective, number 4, was to perform reliability assessments for the case study, compare the results and discuss the level of uncertainty in the results. This objective required substantial work and is documented for the FTA in Chapter 9 and for the simulation in Chapter 10, and discussed in Chapter 11.
Reliability assessments of SIS provide valuable information to the decision maker regarding design and safety. Compliance reports document whether a SIS meets the required SIL or not, where the PFD has to be estimated by a reliability assessment. Through the process of reliability assessment, uncertainty is introduced, which reduces the confidence in the results. Not only does uncertainty lower the quality of the assessment, it also increases the risk of making wrong decisions. This report distinguishes between three main sources of uncertainty: model, data and completeness uncertainty.
Compliance studies are usually executed during the design phase. At this stage, detailed and relevant information may not be in place, which causes completeness uncertainty. Generic databases are used, where data uncertainty often arises due to confusion about which data to use, lack of relevance and the modelling of the data. Models are selected without complete knowledge of the system characteristics, and their failure to represent the real-life system causes model uncertainty. The level of uncertainty is higher during early phases of system development; therefore the predicted PFD provided by the compliance reports may not reflect the true PFD discovered during field experience. The uncertainties involved in the estimated PFD are seldom communicated to the decision maker. Hence, the misinterpretation that the predicted reliability is certain to meet the requirements frequently arises. IEC 61508 does not explicitly treat the subject of uncertainty, but indicates doubts about the validity of the results through the architectural constraints and the suggested 70 % upper confidence limit for failure rates.
The results from the case study of the high level protection system were ranked as expected, but still gave some interesting indications. The use of simulation in Extend was unnecessarily complex for a reliability assessment of SIS. SIS usually have very low failure rates, which makes the number of simulations needed to achieve stable results, and thus the whole process, time consuming. It is impossible to verify that the
results are correct, due to the random sampling, but if performed correctly we can expect the estimates provided by simulation to be the most realistic. In light of this, it was interesting to see that the predicted unavailability obtained from simulation was closer to the conservative FTA approach developed by Lundteigen and Rausand than to the cut set approximation used by CARA FaultTree. The conclusions from the case study are that FTA is a sufficiently good model for reliability assessments of SIS, and that the conservative approximation is recommended since it compensates for some of the uncertainty involved.
The uncertainty assessments revealed that sensitivity analyses and importance measures often provide sufficient information about the identified sources of uncertainty in a compliance study, since they manage to point out which of the sources are critical to the results. Sensitivity analysis identifies the significance of scope, data selection, modelling structure and model assumptions. Importance measures may describe the component importance with regard to improvement potential, contribution to unavailability (blaming) and achievement of system function, information that is also valuable during redesign and for identifying efficient maintenance strategies during operation. Uncertainty propagation is a useful method when investigating the level of uncertainty in the data, but the results are not as easy to interpret as the results from sensitivity analyses and importance measures. Uncertainty propagation may therefore be a limited remedy, depending on the analyst's competence. It was also discovered that uncertainty propagation should not be performed for simulation models, due to the extremely high number of runs needed to estimate the uncertainty distribution. A rule of thumb should be to always use deterministic models when performing uncertainty propagation, or to use analytical methods.
The decision maker should always be given an evaluation of the uncertainty assumed to be related to the results. The evaluation should qualitatively describe the level of confidence in the results rather than present the uncertainty assessment results, which are often not intuitively understood by the decision maker. The case study showed that compliance was not met for the SIS. Nor did the uncertainty assessment indicate any unnecessary conservatism that could have been reduced in order to recommend compliance. The level of uncertainty was seen to be greatest for the data, due to the confusion about which data to use.
The results from a compliance study should not be seen as a certain property of the system, and this should be communicated to the decision maker. The need to feel safe should be nuanced with the truth that there is no guarantee in SIL compliance. Instead of interpreting uncertainty as a necessary evil, one may take advantage of it by reflecting a more realistic result, and thus raise awareness of the risks involved in the decision process. By being informed about the uncertainties, one may also reduce them more easily.
The effects of human involvement on the reliability of SIS should be further studied. Human errors that are not accounted for in reliability assessments may be seen as a great source of completeness uncertainty. Some human errors may be included when generic data are used, but the amount of unavailability caused by human factors is highly uncertain. Equipment with an estimated mean time to failure equal to many thousands of years is not consistent with the failures reported through maintenance records. Human errors are expected to be one of the contributing factors and should be accounted for in order not to underestimate the risk.
Software reliability in relation to the reliability of SIS is also a subject that should be paid more attention. Due to the complexity of such systems, lack of knowledge causes uncertainty related to the failure behaviour. The completeness uncertainty is assumed to be higher for software systems, since failure modes often are unknown due to the difficulty of identifying all relevant bugs. Software reliability should therefore be further investigated in order to reduce the related uncertainty and avoid underestimation of risk.
13 BIBLIOGRAPHY
Aven, Terje. Foundations of Risk Analysis - A Knowledge and Decision-oriented Perspective.
Chichester: Wiley, 2003.
Aven, Terje. Risikostyring. Oslo: Universitetsforlaget, 2007.
Bedford, Tim, and Roger Cooke. Probabilistic Risk Analysis - Foundations and Methods. Cambridge:
Cambridge University Press, 2001.
Compliance report. SIL compliance report for xxxxx. SIL compliance, 2008.
deRocquigny, Etienne, Nicolas Devictor, and Stefano Tarantola. Uncertainty in Industrial Practice: A
Guide to Quantitative Uncertainty Management. Chichester: Wiley, 2008.
DNV. DNV-RP-A203 Qualification Procedures for New Technology. Standard, Høvik: Det Norske Veritas, 2001.
Drouin, M., G. Parry, J. Lehner, G. Martinez-Guridi, J. LaChance, and T. Wheeler. Guidance on the Treatment of Uncertainties Associated with PRAs in Risk-Informed Decision Making. Office of Nuclear Regulatory Research, Office of Nuclear Reactor Regulation, 2009.
Flage, R., T. Aven, and E. Zio. Alternative representations of uncertainty in system reliability and risk analysis - Review and discussion. Safety, Reliability and Risk Analysis: Theory, Methods and Applications, 2009.
Hellebust, Helge. Exact and approximate calculations of uncertainty in system reliability evaluations.
Master Thesis, Trondheim: NTNU, 1989.
IEC 61508. Functional safety of electrical/electronic/programmable electronic safety-related systems. Standard, Geneva: International Electrotechnical Commission, 1997.
Imagine That Inc. ExtendSim. 2009. http://www.extendsim.com/ (accessed April 1, 2009).
Janbu, Astrid Folkvord. Uncertainty in Reliability Assessments of Safety Instrumented Systems. Project Thesis, Trondheim, 2008.
Kiureghian, Armen Der, and Ove Ditlevsen. Aleatory or epistemic? Does it matter? Structural Safety, 2009: 105-112.
Lundteigen, Mary Ann. IEC 61508 and IEC 61511: What, when, why? Presentation, Trondheim:
Department of Production and Quality Engineering, 2008.
Lundteigen, Mary Ann. Implementing strategies for follow-up of safety instrumented systems.
Presentation, Trondheim: Department of Production and Quality Engineering, 2008.
Lundteigen, Mary Ann. Safety instrumented systems in the oil and gas industry. PhD Thesis,
Trondheim: Department of Production and Quality Engineering, 2009.
Master Thesis
Astrid Folkvord Janbu
67
NTNU
Lundteigen, Mary Ann, and Marvin Rausand. Reliability assessment of safety instrumented systems
in the oil and gas industry: A practical approach and a case study. International Journal of Reliability,
Quality and Safety Engineering, 2008.
Morgan, M. G., and M. Henrion. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge: Cambridge University Press, 1990.
Mosleh, Ali, Nathan Siu, Carol Smidts, and Christiana Lui. Model Uncertainty: Its Characterization and
Quantification. Maryland: Center for Reliability Engineering, University of Maryland, 1995.
Murthy, D.N. Prabhakar, Trond Østerås, and Marvin Rausand. Product Reliability - Specification and Performance. Springer, 2007.
NASA. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners. Guideline, Washington: NASA Office of Safety and Mission Assurance, 2002.
nature.com. 2008. http://www.nature.com (accessed March 24, 2009).
O'Hagan, A., and J. E. Oakley. Probability is perfect, but we can't elicit it perfectly. Reliability Engineering & System Safety, 2004: 239-248.
OLF 070. Application of IEC 61508 and IEC 61511 in the Norwegian Petroleum Industry. Guideline,
Oljeindustriens Landsforening, 2004.
OREDA. Offshore Reliability Data. 2009. www.oreda.com (accessed May 12, 2009).
OREDA. Offshore Reliability Data, 4th edition. Høvik: DNV, 2002.
Parry, Gareth W. The characterization of uncertainty in Probabilistic Risk Assessments of complex systems. Reliability Engineering and System Safety, 1996: 119-126.
Rao, K. Durga, H. S. Kushwaha, A. K. Verma, and A. Srividya. Quantification of epistemic and aleatory uncertainties in level-1 probabilistic safety assessment studies. Reliability Engineering & System Safety, 2007: 947-956.
Rausand, Guro. Uncertainty Management in Reliability Analyses. Master Thesis, Trondheim:
Department of Production and Quality Engineering, NTNU, 2005.
Rausand, Marvin, and Arnljot Høyland. System Reliability Theory: Models, Statistical Methods and Applications. New Jersey: Wiley, 2004.
Rausand, Marvin, and Knut Øien. Risikoanalyse. Tilbakeblikk og utfordringer. In Fra flis i fingeren til Ragnarok, by Sikkerhetsdagene, 85-110. Trondheim: Tapir akademiske forlag, 2004.
Saltelli, A., et al. Global Sensitivity Analysis. The Primer. John Wiley & Sons, 2008.
Sydvest. CARA FaultTree. Help menu, http://www.sydvest.com/Products/Cara/, 1999.
Vedvik, Atle. Offshore - Topside Systems. Presentation, Høvik: Det Norske Veritas, 2004.
APPENDIX A
DATA DOSSIER

System           Component                        Failure rate   Test interval   β-factor   Source
                                                  (per hour)     (hours)
PSD              Level transmitter                4,93E-06       8760            2 %        OREDA 1,2)
ESD              Level transmitter                4,37E-06       8760            2 %        OREDA 1,2)
Typical vessel   Differential level transmitter   4,65E-06       8760            2 %        OREDA 1)
                 Level transmitter (OLF 070)      0,60E-06       8760            2 %        OLF 070
                 Node (logic)                     5,00E-06       8760            1 %        OLF 070
                 Solenoid                         9,00E-07       8760            10 %       OLF 070
                 Valve                            2,00E-06       8760            2 %        OLF 070 3)

CCF failure rates (β × failure rate, per hour): level transmitter 9,30E-08 (1,20E-08 with OLF 070 data), node 5,00E-08, solenoid 9,00E-08, valve 4,00E-08.
SFF values (from the SIL report and OLF 070): 62 %, 72 %, 80 % and 83 %.

Comments:
1) Test interval (hours), SFF and β values are gathered from OLF 070. OREDA data are based on operational time.
2) The gap between the highest and lowest failure rate for the level transmitters in the compliance report (0,562·10^-6 hours^-1) is used as the gap around the typical value in order to evaluate the effects of different failure rates. The ESD is given the most conservative estimate, since it is assumed to have safer equipment due to its safety ranking.
3) The data from the SIL assessment separate between actuator and valve, while OLF 070 includes the actuator in the data.
APPENDIX B
CONSERVATIVE FTA
The calculations are performed in Excel, and data are gathered from the data dossier in Appendix A.
Conservative calculations of minimal cut sets

Independent components:

index j   Minimal cut set        λj,1 (hour^-1)   λj,2 (hour^-1)   τ (hours)   PFD(MCj)
2         {PSD LT, ESD node}     4,93E-06         5,00E-06         8760        6,31E-04
3         {PSD LT, ESD sol}      4,93E-06         9,00E-07         8760        1,13E-04
4         {PSD LT, EV}           4,93E-06         2,00E-06         8760        2,52E-04
5         {PSD node, ESD LT}     5,00E-06         4,37E-06         8760        5,59E-04
7         {PSD node, ESD sol}    5,00E-06         9,00E-07         8760        1,15E-04
8         {PSD node, EV}         5,00E-06         2,00E-06         8760        2,56E-04
9         {PSD sol, ESD LT}      9,00E-07         4,37E-06         8760        1,01E-04
10        {PSD sol, ESD node}    9,00E-07         5,00E-06         8760        1,15E-04
12        {PSD sol, EV}          9,00E-07         2,00E-06         8760        4,60E-05
13        {XV, ESD LT}           2,00E-06         4,37E-06         8760        2,24E-04
14        {XV, ESD node}         2,00E-06         5,00E-06         8760        2,56E-04
15        {XV, ESD sol}          2,00E-06         9,00E-07         8760        4,60E-05

Cut sets of identical components, with CCF:

index j   Minimal cut set        λ (hour^-1)      τ (hours)   β      PFD(MCj)
6         {PSD node, ESD node}   5,00E-06         8760        1 %    8,46E-04
11        {PSD sol, ESD sol}     9,00E-07         8760        10 %   4,11E-04
16        {XV, EV}               2,00E-06         8760        2 %    2,73E-04

Level transmitters:

index j   Minimal cut set        λj,1 (hour^-1)   λj,2 (hour^-1)   τ (hours)   β     PFD(MCj)
1         {PSD LT, ESD LT}       4,93E-06         4,37E-06         8760        2 %   2,36E-03

Results
PFD for system: 5,14E-03
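The entries in the independent-components table are consistent with the standard approximation PFD(MCj) ≈ λj,1·λj,2·τ²/3 for a minimal cut set of two independent, simultaneously tested components (Rausand and Høyland 2004). A minimal Python check, using values from the table:

```python
def pfd_cutset2(l1, l2, tau=8760):
    """Approximate PFD of a minimal cut set of two independent,
    periodically and simultaneously tested components."""
    return l1 * l2 * tau ** 2 / 3

# {PSD LT, ESD node}: reproduces the tabulated 6,31E-04.
pfd = pfd_cutset2(4.93e-6, 5.00e-6)
```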