
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL

Qual. Reliab. Engng. Int. 14: 3–14 (1998)

PROGRESS TOWARDS THE DEVELOPMENT OF A MODEL FOR PREDICTING HUMAN RELIABILITY

J. E. Strutt*, Wei-Whua Loa and K. Allsopp

Centre for Industrial Safety and Reliability, Cranfield University, Cranfield, Bedford MK43 0AL, UK

SUMMARY
A methodology for predicting the probability of human task reliability during a task sequence is described. The method is based on a probabilistic performance requirement-resource consumption model. This enables error-promoting conditions in accident scenarios to be modelled explicitly and a time-dependent probability of error to be estimated. Particular attention is paid to modelling success arising from underlying human learning processes and the impact of limited resources. The paper describes the principles of the method together with an example related to the safety and risk of a diver in a wreck scenario. © 1998 John Wiley & Sons, Ltd.

key words: human reliability; reliability model

1. INTRODUCTION
The lack of accurate quantitative human reliability
data is seen by many as a serious limitation in QRA
studies and a major source of uncertainty in risk
assessments. Models for the prediction of human
reliability are an alternative to reliability data and
offer significant advantages, but currently available
methods are highly empirical and strongly dependent
on judgemental factors. This has prompted the
authors to consider alternative methods for predicting human reliability. The approach currently
under investigation is to generate a probabilistic
model for a human task in which failure to achieve
task objectives results in loss. In effect this models
an accident sequence in terms of the underlying
physical processes and conditions associated with
the task. The probability of task success is a measure
of human reliability in the specific context of the
task.
The study of accidents provides an essential input
to the development of realistic human reliability
models. Figure 1 shows a simplified event tree
which outlines the sequence of events that a number
of accidents take. The key stages are (i) an initiating
event, (ii) loss of safety barriers/defences,
(iii) deterioration of conditions/escalation followed
by (iv) failure to evacuate or escape.
Initiating Event
Many accidents are triggered either by human error or by the failure of a piece of equipment. The trigger event itself may be quite small; it may be deliberate or accidental and is often associated with routine, relatively insignificant activities. The underlying causes of such events, however, are complex, with a range of interacting human, organizational and hardware factors.

This paper was originally presented at the 12th Advances in Reliability Technology Symposium (12th ARTS), 16–17 April 1996, at the University of Manchester, UK.
*Correspondence to: J. E. Strutt, Centre for Industrial Safety and Reliability, Cranfield University, Cranfield, Bedford MK43 0AL, UK.

CCC 0748-8017/98/010003-12 $17.50
© 1998 John Wiley & Sons, Ltd.
Loss of Defences
The ability to control an event at an early stage
depends critically on prompt human reaction
together with the availability and integrity of emergency response equipment and safety control systems.
Such systems must be robust and capable of withstanding the loads imposed by initiating events, as
experience has shown that the initiating event may
prevent some of the emergency control systems
from functioning, resulting in a reduced capability
to control the incident and a more rapid escalation
of events.
Deterioration of conditions
This is the point at which an incident usually
escalates from a minor to a major accident. In the
case of fires and explosions, for example, the rate
of escalation will depend on the scale of the initiating event and on the inventory of materials (e.g.
flammability) as well as on the design and construction of the plant and surrounding buildings. Incidents
which escalate rapidly increase the potential losses of assets and life; escape routes may become blocked or difficult to find, and evacuation/escape of personnel may become more difficult.
Failure to Escape or Evacuate
The greatest impact on potential loss of life arises when evacuation or escape of personnel from a life-threatening situation is impaired. This usually makes the difference between accidents with fatalities and those with no loss of life. Evacuation and escape can be greatly facilitated by design, e.g. provision of additional escape routes in buildings, protection of escape routes, temporary refuge areas, transport systems to provide rapid evacuation, etc.

Received 17 April 1997

Figure 1. Event tree for a generic incident
The risk of an accident is dependent on the
frequency (F) with which the accident occurs and
the consequences. Where the consequences involve
potential loss of life (Nf), the risk can be defined as

risk = (frequency of incident) × (potential loss of life)    (1)

Figure 1 is a simplified event tree model of an accident which includes the key stages described
above. The four stages can be considered as system
states with transition probabilities Pa, Pb, Pc and Pd.
The four end points of the tree represent the possible
consequence categories. Figure 2 is the risk diagram
corresponding to the event tree in Figure 1. This
diagram illustrates how the risk changes, from a
minor incident to a major accident, as an incident
develops. Minor incidents are more likely than major
accidents, but the risk, when defined as in equation
(1), may remain more or less constant but with
increasing risk uncertainty (implied by the size of
the event boxes in Figure 2) as events become less

and less frequent. From Figure 1 the risk of a major accident (event 4) is given by

risk (event 4) = Fa·Pb·Pc·Pd·Nf4    (2)

where Fa is the frequency of the initiating event (a) (e.g. major leakage of flammable or toxic
substance), Pb is the probability of transition from
control to loss of immediate control (breached
defence); Pc is the probability of escalation and Pd
is the probability of failure to escape. Nf4 is the
potential loss of life associated with end event 4,
given the preceding events. Figure 1 and equation
(2) illustrate that the risk of an accident is dependent
on the effectiveness of human actions (i) to reduce
the frequency (Fa ) of initiating events and the transition probabilities Pb, Pc and Pd and (ii) to reduce
the potential loss of life (Nf ).
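Equation (2) is a straight product of the initiating-event frequency, the three transition probabilities and the potential loss of life. As a minimal sketch (the numerical values below are hypothetical, not taken from the paper):

```python
def risk_event4(Fa, Pb, Pc, Pd, Nf4):
    """Risk of the major-accident end event (equation (2)):
    frequency of the initiating event propagated through the
    transition probabilities, times the potential loss of life."""
    return Fa * Pb * Pc * Pd * Nf4

# Hypothetical example values (per year / dimensionless / fatalities):
r = risk_event4(Fa=0.1, Pb=0.2, Pc=0.1, Pd=0.05, Nf4=50)
print(r)  # ~ 0.005 expected fatalities per year
```

The same multiplication applies to any end event of the tree; only the branch probabilities along its path change.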
2. HUMAN RELIABILITY ANALYSIS
It is widely recognized that human error plays a
major part in accidents and this has focused attention
on the need for techniques to predict human error for
inclusion in probabilistic risk assessments (PRAs).
Various approaches to human error prediction1,2
have been developed since the early 1980s, e.g.
THERP,3 HEART,4 SLIM,5 ASEP,6 TESEO7 and HCR.8 These models are all semi-empirical and rely heavily on judgemental performance-shaping factors. In the HEART methodology, for example, the
failure rate is estimated using an empirical
expression of the form
P = P0 Πi [(EPCi − 1)·Api + 1]    (3)

Figure 2. Risk diagram corresponding to event tree of Figure 1

where P is the probability of human error, P0 is the nominal human unreliability (Table I), EPCi is the ith error-promoting condition and Api is a proportion assessment factor for the ith EPC. The method4 provides a very useful list of error-promoting conditions, with suggested values for each EPC (Table II).
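Equation (3) can be sketched in a few lines. The worked values below are purely illustrative: generic task (E) from Table I (P0 = 0.02) with two assessed conditions from Table II, unfamiliarity (×17) judged 40% relevant and time shortage (×11) judged 20% relevant:

```python
def heart_probability(p0, epcs):
    """HEART estimate (equation (3)): multiply the nominal
    unreliability p0 by [(EPC - 1)*Ap + 1] for each assessed
    error-promoting condition. epcs is a list of (EPC, Ap) pairs."""
    p = p0
    for epc, ap in epcs:
        p *= (epc - 1.0) * ap + 1.0
    return min(p, 1.0)  # a probability cannot exceed 1

p = heart_probability(0.02, [(17, 0.4), (11, 0.2)])
print(round(p, 3))  # 0.02 * 7.4 * 3.0 = 0.444
```

The example also illustrates the paper's criticism: multiplying several large EPC factors quickly drives the estimate towards (or past) 1 when the conditions are not truly independent.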

Table I. HEART baseline error rates (after Williams4)

Generic task                                                          P0
(A) Totally unfamiliar, performed at speed with no real idea
    of likely consequences                                            0.55
(B) Shift/restore system to new or original state on a single
    attempt without supervision or procedures                         0.26
(C) Complex task requiring high level of comprehension and
    skill                                                             0.16
(D) Fairly simple task performed rapidly or given scant
    attention                                                         0.09
(E) Routine highly practised rapid task involving relatively
    low level of skill                                                0.02
(F) Restore or shift system to original or new state following
    procedures, with checking                                         0.003
(G) Completely familiar, well-designed, highly practised,
    routine task occurring several times per hour, performed to
    the highest possible standards by highly motivated, highly
    trained and experienced person, totally aware of
    implications of failure, with time to correct potential
    error but without the benefit of significant job aids             0.0004
(H) Respond correctly to system command even when there is an
    augmented or automated supervisory system providing
    accurate interpretation of system state                           0.000002
(M) Miscellaneous task for which no description can be found          0.03

The HEART technique has found favour with some for its simplicity and ease of application, but
there are several problems with this and other similar
approaches. Firstly, the error-promoting conditions are not independent of each other. For example, the top EPC, 'unfamiliarity', may be closely connected with 'inexperience' lower in the list. It does not make sense to multiply a nominal rate by 17 and by 3, although the method does allow the user to weight the EPCs with Ap. Secondly, some of the
error-promoting factors are included in the description of the nominal failure rate categories. Thirdly,
the use of the method is extremely subjective and
heavily reliant on the experience of the analyst.
Fourthly, as far as the present authors are aware,
there is little in the way of experimental evidence
to validate the published values. Qualitatively the
error-promoting conditions are a useful list of factors
to guide safety managers. However, the numerical
values are context-sensitive and the predictive equation is empirical.
3. TASK REQUIREMENT-RESOURCE CONSTRAINT MODEL
The motivation for the present research has been a
desire to move away from empirical formulations of
human reliability prediction, such as those described
Table II. Error-promoting conditions (after Williams4)

No.  Error-promoting condition                                    EPC value
1    Unfamiliarity with novel or infrequent situation which
     is potentially important                                         17
2    Shortage of time for error detection or correction               11
3    Noisy/confused signals                                           10
4    A means of suppressing or overriding information                  9
5    No means of conveying spatial or functional information
     to human operator                                                 9
6    Poor system/human user interface                                  8
7    No obvious means of reversing an unintended action                8
8    Information overload                                              6
9    Technique unlearning/one which requires application of
     an opposing philosophy                                            6
10   Transfer of knowledge from one task to another                    5
11   Ambiguity in required performance standard                        5
12   Mismatch between perceived and actual risk                        4
13   Poor, ambiguous or ill-matched feedback                           4
14   No clear/direct/timely confirmation of intended action
     from system                                                       4
15   Inexperience (newly qualified but not an expert)                  3
16   Poor instructions or procedures                                   3
17   Little or no independent checking or testing of output            3

above, to methods based on prediction of the underlying physical processes in human tasks. A task or
project is viewed as a human system comprising a
goal or objective possibly involving a number of
subtasks which have intermediate goals. They have
a start, duration and end and one or more resources
to support the task. The nature and amount of work
to be carried out, the work rate and the resources
available and their rate of consumption are key
factors which can be related to performance-shaping
factors or error-promoting conditions. For example, the effects of unfamiliarity and time stress can be explicitly modelled in the work rate parameter and in the resource availability and usage parameters. In the
model a distinction is made between the time
required to complete a task, given the particular
conditions, and the actual task duration which may
be limited by time or resource constraints. These
points are explained below and illustrated in Figure 3.
Required task duration
The nature of the task determines the amount of
work necessary to complete the task. The required
task duration depends both on the total amount of
work required to achieve the task objective and on
the work rate or the rate of progress towards successful task completion. For simple routine manual
Figure 3. Task requirement-resource constraint model

tasks with well-defined procedures in which there is little or no learning required, e.g. a trained maintenance engineer removing a pump from service for
maintenance, there will be little uncertainty in the
amount of work involved in removing the pump
and the variance in work rate will be relatively
small. At the other extreme a problem-solving task
may be very complex, poorly defined, with no procedures or prior experience and a significant learning
process to achieve the task objective. In this case
there is likely to be a great deal of uncertainty
both on how much work is required to achieve the
objectives and on the rate of progress. An important
issue here is how to measure complexity.9 The
probability of success in the former is likely to be
very much greater than in the latter case. The
success of problem-solving tasks is dependent on
the learning rate, access to or availability of information and the intelligence of the person performing
the task. Examples of tasks in this category are
research projects, emergency management, inspection of structures, fault diagnosis, etc. The work
requirement in problem-solving tasks can be equated
to the level of information required and the work
rate to the information-gathering rate.
Time-resource limitations
The actual duration of a task may be limited by
a time constraint or the loss or depletion of some
essential resource. As illustrated in Figure 3, if the
work rate is insufficient, resource constraints lead
either to late completion of the task or to a shortfall
in the performance when the available time or
resources run out. In practice these two modes of
failure may result in very different consequences
depending on the context and so both situations will
need to be assessed. For example, if a SCUBA
diver, with a limited air supply, enters a wreck to
salvage a valuable asset, the task fails if he searches
until his air runs low and returns without the asset
or if he continues to search until his air runs out.
The consequence of the latter implies the loss of the diver's life, while the former implies an asset loss but no loss of life.

4. STOCHASTIC MODEL DESCRIPTION

A stochastic model has been developed for predicting the probability of successfully completing a task in which the key error-promoting conditions, namely unfamiliarity, time stress and noisy/confusing signals, can be assessed. The concept is illustrated schematically in Figures 4 and 5.
Mathematical details of the model are provided in
the Appendix. The model assumes that there is a
specific task to be completed or problem to be
solved within some time or resource constraint. A
task (see Figure 4(a)) requires a quantity of work
to be carried out to achieve the task objective. It is
assumed that the problem/task has a certain level
of complexity which influences the amount of work
which must be completed and hence the time to
complete the task. If the task is one of problem
solving, then work rate is equated to information
generation rate. The information-gathering rate or
work progress rate is modelled as a combination of
periods of linear progress rates (variable) over some
time period (stochastic or deterministic) and jumps
in knowledge (e.g. task short-cuts). Changes in progress rate or task complexity are caused by events
which influence progress rate, e.g. loss of control
increases variance in progress rate, deterioration in
conditions reduces mean progress rate, as illustrated
in Figure 5. These events are modelled stochastically. Both task progress rate and work (knowledge)
requirement may possess significant variance
depending on the conditions and situational characteristics. Thus the overall quantity of work to be
carried out and the amount of work completed by
a particular time are both distributed variables. As
the task progresses in time, the two distributions
interact and from this the probability of successful
task completion increases with time as shown in
Figure 4(c).
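The interaction just described can be sketched as a small Monte Carlo simulation: sample a work requirement and a resource budget, advance the work done and resource used in small time steps, and count the runs in which the requirement is met before the resource runs out. All distributions and parameter values below are illustrative assumptions, not the paper's calibrated values:

```python
import random

def p_success(t_end, n_runs=10000, dt=0.1):
    """Monte Carlo estimate of the probability that the (random) work
    requirement is completed by t_end without exhausting the (random)
    resource. Distribution parameters are illustrative placeholders."""
    successes = 0
    for _ in range(n_runs):
        w_req = random.weibullvariate(10.0, 2.0)   # work requirement
        budget = random.weibullvariate(30.0, 3.0)  # resource available
        work = used = t = 0.0
        ok = False
        while t < t_end and used < budget:
            work += random.weibullvariate(1.0, 2.0) * dt   # work rate
            used += random.weibullvariate(1.5, 3.0) * dt   # consumption
            t += dt
            if work >= w_req:
                ok = True
                break
        successes += ok
    return successes / n_runs
```

Evaluating `p_success` over a grid of `t_end` values traces out a curve of the kind shown in Figure 4(c): rising while completion is still likely, then falling as the resource distribution begins to bite.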
Task completion will be limited by the resources
available to support the task. The resource consumption model is illustrated in Figure 4(b). The total
resource available can be treated as a statistical
variable as can the resource consumption rate, such
that at any given time as the task progresses there
Figure 4. Graphical representation of task requirement-resource constraint model

Figure 5. Diving accident event sequence

will be a distribution of both the amount of resource consumed and the total resource available. The interaction of these two distributions can be used to generate a resource availability curve, i.e. the probability that the available resource will not be consumed by time t. This decreases with time as shown
in Figure 4(c).
The overall probability of success is a joint probability calculated as the product of the probability of successfully completing the task (Ps(W,t)) and the probability of not consuming the resources (Ps(R,t) = 1 − Pf(R,t)) within the task time period. W is the work (or information generation) rate, R is the resource consumption rate and t is the time. The probability of successful task completion at any given time can then be calculated (see Appendix) using

R(t) = ∫∫ f(R,W;t) FW (1 − FR) dW dR    (4)

This joint probability is a time-dependent cumulative distribution function which exhibits a maximum as illustrated in Figure 4(c). The leading edge of the probability curve is dominated by the task completion rate or learning rate, while the trailing edge is dominated by the resource availability.

5. MODEL APPLICATION AND RESULTS

A relatively simple application has been developed to test model capability. The particular scenario selected for consideration is part of the overall task of a diver involved in salvaging an asset from a shipwreck using only a swim line and SCUBA gear. The task set is to exit safely from the wreck. To do this, the diver must find his way through the wreck to the exit and open water and from there to the surface. The accident sequence is initiated by the loss of the swim line in the wreck; the diver must then find a way out. The network diagram corresponding to the possible escape paths is shown in Figure 6. From the location of the asset, at point A, there are two principal routes, one of which is shorter than the other. The diver can take a route via node X or node Y to reach the exit point at B. There is a connecting path between X and Y which introduces the possibility of one or more wrong turnings. There are therefore a number of possible exit routes of different lengths that the diver may take. The key parameters of the model are listed in Table III and the relationships between model parameters, physical parameters and their dependences are listed in Table IV. A SCUBA cylinder water capacity of 12 l and an initial mean pressure of 207 bar were fixed for all runs.

Figure 6. Diver escape route network

Effect of route selection (task complexity)

Figure 7 shows the effect of taking different routes to exit from a wreck at 30 m depth. The path probabilities are based on the random selection of direction at each node. These were used only to give an overall mean value for the probability of success and are not used in the underlying programme. As expected, the model predicts a significant difference between the shortest route and the longest route given the particular conditions of the dive. For the short route the probability of success reaches a maximum of about 0.95 after approximately 13 min, compared with a maximum of 0.05 after 18 min for the longest route taken.
Table III. Model parameter listing

Model parameter                        Symbol
Linear work/learning rate              a(α,β)
Jump in knowledge/short-cut            b(α,β)
Time between work rate changes         Δt(α,β)
Task requirement/complexity            W(α,β)
Initial resource capacity              X(C,dc)
Resource consumption rate              R(α,β)

Effect of diving depth (Figure 8)


For wrecks in shallow water, e.g. 10 m, divers
who take the short route are almost certain to make
a successful exit from the wreck under the conditions specified. For deep wrecks the depth-adjusted volume of air available decreases significantly, from 1241 l of air which would be available at 10 m to 414 l at 50 m depth. The probability of a successful exit decreases to a maximum of about 0.65 after 10 min; thereafter it decreases, reaching very low values by 15 min. Speed of exit becomes more and
more important as the depth increases.
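The depth adjustment quoted above follows from Boyle's law: a 12 l cylinder filled to 207 bar holds about 2484 l of free air, and the ambient pressure at depth (roughly 1 bar at the surface plus 1 bar per 10 m of water) reduces the volume actually deliverable. A quick check of the figures in the text:

```python
def available_air_litres(cyl_litres, fill_bar, depth_m):
    """Volume of air deliverable at depth, by Boyle's law.
    Ambient pressure ~ 1 bar at surface + 1 bar per 10 m of water."""
    ambient_bar = 1.0 + depth_m / 10.0
    return cyl_litres * fill_bar / ambient_bar

print(available_air_litres(12, 207, 10))  # 1242.0 (text quotes 1241 l)
print(available_air_litres(12, 207, 50))  # 414.0
```

The small discrepancy at 10 m presumably reflects rounding in the paper; the 50 m figure matches exactly.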
Effect of breathing rate (Figure 9)
For low levels of physical exertion it is well
known that inexperienced divers breathe at a faster
rate than expert divers. This is most likely caused
by psychological rather than physiological factors.
This has the effect of reducing the dive time but,
for the range of breathing rates tested, has only a
marginal effect on reducing the probability of a
successful exit. Results indicate that at a breathing rate of 30 l min−1, the probability of a successful exit is beginning to fall after 15 min. For some novice divers the breathing rate may be even higher than 30 l min−1, which would have the effect of reducing the probability of success further and at shorter times.
Effect of swim rate (Figure 10)
For a free-swimming diver at higher levels of
physical exertion there is a direct correlation
between swim rate and breathing rate. As a diver
swims faster, the breathing rate increases owing to
physical stress. This has been allowed for in the
model (see Appendix) and typical results are shown
in Figure 10. Swimming faster and consuming the
air faster decreases the time of task completion.
Swimming slower and consuming less air results in
a later exit from the wreck, but there is little change
in the probability of a successful exit. In the present
model, psychological factors which affect the breathing rate have been ignored.
Expert versus Novice (Figure 11)
The difference in the probability of a successful
exit between an experienced well-trained diver and

Table IV. Meaning and dependence of model parameters

Model parameter                   Physical meaning              Dependence
Task requirement (W)              Swim distance (m)             Complexity of escape route network
Task progress rate (a,b,Δt)       Diver's swim speed (m s−1)    Fitness; level of experience and training;
                                                                availability of swim line; underwater visibility
Initial resource available (X)    Diver's air volume (l)        Cylinder capacity; initial pressure
Resource consumption rate (R)     Breathing rate (l min−1)      Level of experience and training; stress;
                                                                water depth

Figure 7. Effect of escape route distance on task reliability

an inexperienced diver is modelled in Figure 11. For several reasons the expert is likely to breathe
at a slower rate, is unlikely to lose his swim line,
will look for the swim line rather than try to exit unaided in the event of losing the swim line and
so will be much more likely to take the short route
back. The novice has been modelled as having a
faster breathing rate and as losing his swim line so
that a longer route might be taken. If by chance the
short route is taken, the novice and expert have
about the same probability of success, but the novice
has less margin for error since his probability of
success curve falls more quickly after 15 min. If the
novice takes a longer route, then the likelihood of
a successful exit is reduced. Averaging over the
various possible paths, the novice has at best an
80% chance of a successful exit after 20 min. After

30 min the chances of a successful exit are almost zero for the novice but still about 50:50 for the
expert. After 40 min neither has much chance of survival.
6. DISCUSSION AND IMPLICATIONS
The model has been used to assess the risk of a
particular class of diving accident. When used in
this way, it is an alternative method of predicting the
end event probabilities of an event tree. However, it
is more powerful and more accurate, since the task
and accident contexts are explicitly modelled and
more realistic.
A simple event tree description of the diving
accident sequence, similar to Figure 1, includes
the following critical stages: (1) the initiating event
(enter wreck); (2) breach defence, i.e. lose swim line in the wreck; (3) deterioration in conditions, i.e. the diver's fins stir up silt, which drastically reduces visibility and hence swim rate; (4) failure to escape, i.e. the diver's air is consumed before finding the exit.

Figure 8. Effect of water depth on task reliability

Figure 9. Effect of breathing rate on task reliability

Figure 10. Effect of swim rate on task reliability
Expert opinion could be elicited to estimate the
probabilities of occurrence listed in Table V. The
results of the event tree analysis are listed in Table
VI. The simple event tree results predict that the experienced diver has a much higher probability of a successful exit (Ps = 0.91) than the inexperienced diver (Ps = 0.10) and this conforms reasonably well with the model predictions shown in Figure 11 if the worst-case scenarios for the divers are taken. However, the event tree tended to overestimate the risks to the divers, particularly the risks to the inexperienced diver. One of the significant differences between the event tree description and the prediction model is that the event tree gives no

indication of the time dependence which is evident in the physical model. Time dependence in human reliability is important and likely to be present in many task-oriented problems where stress is generated by time and resource constraints.
The model, although tested for a specific diving
application, is being developed as a generic model
and further tests and developments are in progress
to deal with more demanding scenarios. One particularly important and complex task, where task
performance-resource constraint considerations are
important, is in the management of major emergencies on offshore, nuclear and other installations.
For example, if an offshore installation is engulfed
in an escalating fire, the task is to bring the fire
under control. An important resource constraint in
this case is the availability of a temporary refuge
to protect the workforce and provide an information
control centre for managing the emergency. The task

Table V. Event tree data

Event                 P (yes)   P (no)   Comment
Initiating event      1         0        Assume diver has entered wreck
Lose swim line        0.9       0.1      Inexperienced diver
                      0.01      0.99     Experienced diver
Degraded conditions   0.9       0.1      Inexperienced diver: assume fine silt conditions
                      0.3       0.7      Experienced diver: assume fine silt conditions
Fail to escape        0.9       0.1      Inexperienced diver: given low visibility, no swim line
                      0.7       0.3      Experienced diver: given low visibility, no swim line
                      0.01      0.99     Experienced diver: given good visibility and swim line


Table VI. Event tree data and predictions

Expert
Lose swim line:        yes (0.01)                        no (0.99)
Degraded conditions:   yes (0.3)       no (0.7)          yes (0.1)       no (0.9)
Fail to escape (yes):  0.7 → 0.002     0.7 → 0.005       0.7 → 0.069     0.01 → 0.009
Fail to escape (no):   0.3 → 9E-04     0.3 → 0.002       0.3 → 0.03      0.99 → 0.88
Expert: Ps = 0.91, Pf = 0.09

Novice
Lose swim line:        yes (0.9)                         no (0.1)
Degraded conditions:   yes (0.9)       no (0.1)          yes (0.9)       no (0.1)
Fail to escape (yes):  0.9 → 0.729     0.9 → 0.081       0.9 → 0.081     0.9 → 0.009
Fail to escape (no):   0.1 → 0.081     0.1 → 0.009       0.1 → 0.009     0.1 → 0.001
Novice: Ps = 0.10, Pf = 0.90

Figure 11. Effect of diver's experience on task reliability

of bringing the fire under control is essentially a learning process and correct decisions are critically
dependent on the information available to the management. Time is a key controlling factor in such
accidents. The rate at which an incident develops
(e.g. rate of smoke ingress into the TR) can have
a major influence on the ability of the emergency
response team to control the incident and prevent a
major disaster. An incident which develops slowly
provides a greater amount of time for planning
and implementation of mitigation measures than one
which is rapidly evolving and for which there is a
limited time to bring it under control. In this situation it is necessary to model a time-varying task
load of varying complexity. The ability to control
an incident is related to the difference between
the time available to control an incident (inversely
proportional to rate of evolution and escalation of
the event) and the time required for control of the
incident, which depends on the complexity, the flow
of information and information noise. In principle
the model is applicable to this type of problem and
if successful would provide a practical method of
predicting the reliability of the emergency management process.
7. CONCLUSIONS
1. A task requirement-resource constraint method
for modelling the reliability of human tasks

has been developed and its capability tested
for a typical diving incident. In this simple
case, for which a number of simplifying
assumptions were made, the incident model
successfully predicted the incident sequence,
providing results which were consistent with
experience.
2. The key benefit obtained from a model of this
kind, compared with conventional techniques
such as event trees or fault trees, is the ability
to assess the impact of performance-shaping
factors, situational characteristics, accident
scenarios and error-promoting factors such as
unfamiliarity, stress, noisy signals, etc. more
directly and within the correct context.
3. A number of issues will be addressed in the
next phase of the research, e.g. how to quantify
task complexity for problem-solving tasks, how
to model the effects of psychological stress
and the use of Bayesian updating methods as
an integral part of the learning rate.

APPENDIX
Task progress rate
The task progress rate is represented by the
amount of work completed by time tn, where tn is
modelled as a time series tn = Σ_{i=1}^{n} Δti in which it is
assumed that there is no correlation between successive
intervals Δtn+1 and Δtn. The work completed at time t is
given by

W(t) = Σ_{i=1}^{n} (ai Δti + bi) + an+1(t − tn)    (5)

where

tn = Σ_{i=1}^{n} Δti,    tn ≤ t ≤ tn+1

The assumption is that work progresses by a series
of random jumps bi at random intervals Δt with
random rate of learning ai in between. Where the
task is problem solving, work progress is understood
to mean knowledge accumulation, i.e. a learning
process, and a, b and Δt model the learning rate;
b0 is the task start point or the initial level of
knowledge learned from past experience. For each
time interval (i) the values of a, b and Δt are
chosen at random from Weibull distributions with
scale parameter θ and shape parameter β. These
three values are given by

Δt = θ1(−ln R1)^{1/β1}
a = θ2(−ln R2)^{1/β2}
b = θ3(−ln R3)^{1/β3}

where Ri is a random number between zero and one.

Resource consumption process
The resources used in task performance accumulate
in time with the same time steps as those for
task progress. The resource consumption rate is also
made dependent on θ2 and β2. This provides, where
appropriate, a connection between the consumption
rate of resources and the work progress rate, given by

r(t) = Σ_{i=1}^{n} (Cri Δti) + Crn+1(t − tn)    (6)

where

Cri = Cb(1 + β2 R4 dc θ2/ai)    (7)

The parameter Cb is considered as the steady (basic)
rate of resource usage. Cri is the actual resource
consumption rate in a particular time interval (i).
The parameter dc is a fractional rate-increase scale
factor for extra resource usage. This fractional
increase in the rate of resource usage is dependent
on the work progress rate through the parameter a.
This dependence is assumed to be inversely proportional
to a, hence the factor θ2/a. The strict
proportionality is moderated through a further random
factor R4. R1 to R4 are successive random numbers
equally likely between zero and one. A Monte
Carlo simulation method is applied.
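The work-progress model above lends itself to direct simulation. The following is a minimal sketch, not the authors' code: the function names, default parameter values and the choice of Python are assumptions made for illustration. It samples Δt, a and b by inverse-transform sampling of the Weibull distribution, as in the three expressions above, and accumulates W(t) per equation (5).

```python
import math
import random

def weibull(theta, beta, u=None):
    """Inverse-transform Weibull sample: x = theta * (-ln U)**(1/beta),
    with U uniform on (0, 1)."""
    u = random.random() if u is None else u
    return theta * (-math.log(u)) ** (1.0 / beta)

def simulate_work(t_end, theta=(1.0, 1.0, 1.0), beta=(1.5, 1.5, 1.5), b0=0.0):
    """One realisation of W(t): work grows at random rate a_i over a random
    interval dt_i, then jumps by b_i, as in equation (5)."""
    t, w = 0.0, b0
    while t < t_end:
        dt = weibull(theta[0], beta[0])   # interval Δt_i
        a = weibull(theta[1], beta[1])    # learning rate a_i
        b = weibull(theta[2], beta[2])    # jump b_i
        if t + dt > t_end:                # partial last interval: a_{n+1}(t - t_n)
            w += a * (t_end - t)
            break
        w += a * dt + b
        t += dt
    return w
```

Inverse-transform sampling works here because the Weibull CDF inverts in closed form: setting U = F(x) gives x = θ(−ln(1 − U))^{1/β}, and 1 − U is itself uniform on (0, 1).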

The total amount of work (the work requirement)
to complete a task, or the information needed to
solve a problem, is modelled as a Weibull distribution
with scale factor θW and shape factor βW.
The cumulative distribution of work completed (or
useful information gathered) will be

FW = 1 − exp[−(W/θW)^βW]    (8)

The probability of success is represented by

Ps(W,t) = ∫ fL(t) FW dt    (9)

where fL(t) is the PDF of the work completed
(amount learned) between t and t + dt.
The resources available to complete the task are
also given by a Weibull distribution with scale
factor θR and shape factor βR. The cumulative
distribution of resources consumed will be

FR = 1 − exp[−(R/θR)^βR]    (10)

The probability of failure due to resource restriction
is then represented by

Pf(R,t) = ∫ fc(t) FR dt    (11)

where fc(t) is the PDF of the resources consumed
between t and t + dt. In this case the probability of
success is represented as

Ps(R,t) = 1 − Pf(R,t)    (12)

At a given time t there exists a bivariate distribution
f(W,R;t). The human reliability in completing the
task is regarded as the product of the probability
of success in problem solving and the probability
that the available resources are not exhausted, and
the expression is given by

R(t) = ∫∫ f(R,W;t) FW (1 − FR) dW dR    (13)
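Equations (5) to (13) together define a Monte Carlo estimate of human reliability: sample a work requirement from (8) and a resource budget from (10), run the progress and consumption processes forward, and count the fraction of trials in which the task is completed before resources run out. The sketch below is an illustration under invented parameter values (every default, name and distribution parameter is an assumption; the paper's diver scenario would use calibrated values).

```python
import math
import random

def weibull(theta, beta):
    """Inverse-transform Weibull sample."""
    return theta * (-math.log(random.random())) ** (1.0 / beta)

def run_trial(t_end=10.0, dt_par=(1.0, 1.5), a_par=(1.0, 1.5), b_par=(0.5, 1.5),
              Cb=1.0, dc=0.5, W_par=(8.0, 2.0), R_par=(15.0, 2.0)):
    """One Monte Carlo trial: success if the sampled work requirement
    (eq. 8) is met within t_end and before the sampled resource
    budget (eq. 10) is consumed."""
    W_req = weibull(*W_par)        # work requirement
    R_avail = weibull(*R_par)      # available resources
    t = w = r = 0.0
    while t < t_end:
        dt = weibull(*dt_par)      # interval Δt_i
        a = weibull(*a_par)        # learning rate a_i
        b = weibull(*b_par)        # jump b_i
        # eq. (7): Cri = Cb(1 + β2 R4 dc θ2/a_i), with (θ2, β2) = a_par
        Cr = Cb * (1.0 + a_par[1] * random.random() * dc * a_par[0] / a)
        w += a * dt + b            # work progress, eq. (5)
        r += Cr * dt               # resource consumption, eq. (6)
        t += dt
        if w >= W_req:
            return r <= R_avail    # task done: succeed iff resources held out
    return False                   # ran out of time

def reliability(n=2000):
    """Estimated probability of task success, the fraction of successful trials."""
    return sum(run_trial() for _ in range(n)) / n
```

The double integral of equation (13) is never formed explicitly; the trial loop samples the bivariate outcome (W, R) directly, which is the usual Monte Carlo treatment of such expressions.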

REFERENCES
1. W.-W. Loa and J. E. Strutt, 'Development of human
reliability prediction methods: Part I. A survey of human
reliability assessment techniques', Proc. 1st Symp. of Chinese
Institute of Engineers in UK, Cambridge, April 1995, p. 20.
2. Human Reliability Assessment Group, Human Reliability
Assessor's Guide, AEA Technology, Warrington, 1988.
3. A. D. Swain and H. E. Guttmann, 'Handbook of human
reliability analysis with emphasis on nuclear power plant
applications: technique for human error rate prediction (THERP)',
NUREG/CR-1278, US NRC, 1983.
4. J. C. Williams, 'HEART: a proposed method for assessing
and reducing human error', Proc. 9th Advances in Reliability
Technology Symposium, University of Bradford, April 1986,
paper B3/R.
5. D. E. Embrey, P. C. Humphreys, E. A. Rosa, B. Kirwan
and K. Rea, 'SLIM-MAUD: an approach to assessing human
error probabilities using structured expert judgement',
NUREG/CR-3518 (BNL-NUREG-51716), Dept. of Nuclear
Energy, Brookhaven National Laboratory, Upton, New York
11973, 1984.
6. A. D. Swain, 'Accident sequence evaluation procedure
(ASEP)', NUREG/CR-4277, US NRC, 1987.
7. G. C. Bello and V. Colombari, 'The human factors in risk
analysis of process plants: the control room operator model
(TESEO)', Reliab. Engng, 1, 3–14 (1980).
8. G. W. Hannaman, 'Human cognitive reliability model for
PRA analysis', NUS-4531, NUS Corp., 1984.
9. R. L. Flood and E. R. Carson, Dealing with Complexity: An
Introduction to the Theory and Application of Systems
Science, Plenum, New York, 1990.

Authors' biographies:
John Strutt is a Senior Lecturer and Head of Industrial
Safety, Reliability and Risk Management in the School of
Industrial and Manufacturing Science at Cranfield University. He has 20 years' experience in research and education
at postgraduate level, largely related to reliability engineering and materials performance. His responsibilities include
teaching of reliability engineering and risk management to
engineers and managers across the University and research
into the development of quantitative risk analysis tools for
application to engineering and human systems. Current
research includes the development of quantitative models
for the prediction of risk and reliability of a range of
mechanical systems, including helicopter transmission systems, submarine pipelines, electrical/hydraulic actuation
systems and smoke and gas ingress into temporary refuges
on offshore installations, as well as methods for predicting
human reliability. He is an active member of the Hazards
Forum and Vice-Chairman of the Mechanical Reliability
Committee of the IMechE, in which capacity he is leading
an IMechE initiative for the development of a national
strategy in reliability engineering and risk management.
W. W. (Paul) Loa is an engineer with a Master's degree
in Systems Engineering from California State University.
Since 1994 he has been researching the development of
human reliability prediction models at Cranfield University
as part of a PhD programme. He is currently employed by
the Institute of Nuclear Energy Research in Taiwan, where
he leads research into human factors.
Keith Allsopp is a Senior Research Officer at Cranfield
University with a degree in Mathematical Physics from
Birmingham University. He has forty years' experience in
mathematical modelling of human, ecological and engineering processes.
