
9
Classifying and
Analysing Risk:
Damage Aetiology
through Logical Analysis
Introduction
The primary purpose of Risk analysis is to:
• accurately describe how something adverse can happen and the results of it, if it does;
• provide a means of synthesising estimates of the Frequency and value of the associated
adverse Consequences.
Risk analysis uses logic diagrams, which reflect the logic of the failures and processes needed
to produce adverse Consequences. Their origin (Ericson, 1999) is to be found in the Cold War
need to determine methods to test intercontinental ballistic missile launch systems up to the
point of launch but being sure not to actually launch. Application to the design of aircraft quickly
followed, then the nuclear power industry, rail, road and the petrochemical industry. These
methods provide the most structured means available to us to gain insight into the detailed
minutiae of Occurrence processes and potentially provide us with the insight with which to
direct control measures at specific predictable processes.
Despite this long history, it is not clear that the methods of risk analysis are well developed or
well used, or perhaps adequately used where needed. Significant disasters are the best evidence
that can be advanced in support of this assertion. A minimal list of these, amongst many that are
easy to retrieve from the internet record and all documented well in numerous places, is the
Bhopal disaster of 1984, the Deepwater Horizon oil spill in 2010 and the Fukushima nuclear power
plant in 2011. Post hoc judgement finds it easy to identify equipment and management failings and
much along these lines has been written about each case. What is of concern to anyone interested
in the management of risk is the lack of prior awareness of the failings and their implications.
Unfortunately, relatively little is written in this vein. These disasters occurred in the industries
most involved in the application of risk analysis since its inception. One possible conclusion,
probably out of many, is that it is hard for day-to-day managers to keep in view the assumptions
and implications made in risk analysis at the plant design stage. A more productive response
to such cases, rather than censuring and fining the individuals or organisation on whose watch
these disasters took place, would be to improve the technical management of risk in high-energy
Viner, Derek. Occupational Risk Control : Predicting and Preventing the Unwanted, Taylor & Francis Group, 2016. ProQuest Ebook
Central, http://ebookcentral.proquest.com/lib/swin/detail.action?docID=2004776.
Created from swin on 2021-06-11 08:24:00.
Copyright © 2016. Taylor & Francis Group. All rights reserved.
industries. For this reason, an emphasis in this chapter is on the organisational implications of the
assumptions made when risk is being analysed. A later chapter is concerned with the technical
management of risk and the structures and processes required in the engineering management
of high-hazard industries. The reader should recognise, however, that these techniques are as
applicable to non-energy threats (business processes) as they are to energy-based ones.
Risk analysis makes use of two types of logic diagram, one to understand the logic of system
faults, known as fault tree analysis (FTA), that lead to a significant Event (called a Top Event in FTA),
and the other to show what can follow the Event by describing the logical pathways (here called
Outcomes) that lead from it to Consequences. This latter is known, somewhat confusingly, in the
petrochemical industry as event analysis (EA) and so this is the term that will be used here to refer
to the analysis of possible Outcomes following an Event. A search through the literature will make
it evident that these terms can be used in a conflicting and hence confusing variety of ways by
different authors. These tools are used by engineers to understand the Occurrence scenarios that
are possible in typical industrial electromechanical systems. All that is said in FTA is what needs to
happen to create the Event. For example, for a leak to happen in a pressurised system, either
a valve is opened unintentionally or at the wrong time, or the vessel fails to contain the
pressure, perhaps due to a structural failure or to the operation of the pressure relief valve.
These are Mechanisms of failure. It could also be that for something to occur two simultaneous
requirements exist, for example the tank must be pressurised and the valve must be opened. FTA
is based on the construction of a logic diagram (shaped like a tree root) that places all these OR
(either this or that will produce the result) and AND (it requires this and that to produce the
result) logical operators in their appropriate place. The endpoint of an FTA is the discovery of the
individual component failures or human action failures needed to produce the Top Event. Once
the leak has occurred the EA looks at how it is identified and responded to, which may involve
automatic functions or manual reaction, themselves capable of failure.
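The AND/OR logic of the leak example can be sketched in a few lines of code. This is an illustrative sketch only: the fault names and the function itself are invented for this example and are not drawn from any standard FTA notation or tool.

```python
# A minimal sketch of the leak example as boolean fault-tree logic.
# All fault names are hypothetical labels for this illustration.

def leak_occurs(valve_opened_unintentionally, valve_opened_wrong_time,
                structural_failure, relief_valve_operates, tank_pressurised):
    """Top Event: a leak from the pressurised system.

    Containment failure is an OR of two Mechanisms; the valve branch
    is an AND, since the tank must be pressurised AND the valve must
    be opened for a leak to result by that route.
    """
    containment_fails = structural_failure or relief_valve_operates   # OR gate
    valve_branch = tank_pressurised and (valve_opened_unintentionally
                                         or valve_opened_wrong_time)  # AND gate
    return valve_branch or containment_fails                          # top OR gate

# No basic fault present: no leak.
print(leak_occurs(False, False, False, False, True))   # False
# Pressurised tank with a valve opened at the wrong time: leak.
print(leak_occurs(False, True, False, False, True))    # True
```

Reading the function top-down mirrors reading a fault tree from the Top Event down to its basic faults.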
Industrial safety practitioners have discovered these forms of analysis in relatively recent times,
adopted them for their own purposes and given names such as ‘cause analysis’ to them. A simplified
version of risk analysis, given the name ‘bow-tie’ because of its appearance, has become increasingly
popular from the 1990s onwards. While their appearance as logic diagrams may be similar, their
content is often very different from that of FTA or EA, either because they contain causes
(which are not the same as Mechanisms of Failure) or because their purpose is more to demonstrate
the variety of possible reasons for Events and to summarise their effects, rather than include the
logic of these options. Hence, it is necessary to emphasise that the terms ‘accident’ and ‘cause’ have
no place in FTA and EA, which are solely concerned with physical possibilities (i.e. following the
universal laws of physics) capable of objective description. Any term that is clearly judgemental has
no place in an FTA or EA, for example ‘insufficient’, ‘ignorant’ and so on. This chapter is concerned
with FTA and EA, and not with bow-tie (although this is mentioned briefly) or with cause analysis.
One aspect of what can be taught about these tools is the simple structural conventions
associated with them and the mathematics of deducing from component failure probabilities the
Frequency of possible Consequence Values. Specialist texts devoted to the former, for example
that of Sutton (2007), are readily available. Boolean algebra is the form of mathematics that handles
probabilities through AND and OR points (known as ‘gates’) in the logic diagram. To these are
added VOTING gates, to represent the way in which replicated multiple inputs are handled by
automated control systems. Where three devices provide the same input (for reliability purposes),
the control system may, for example, take any two inputs that agree with each other as the correct
input and ignore the dissenting input. Similarly, explanations of this simple branch of mathematics
are easily found on the internet. Consequently, in this chapter the very simplest of explanations
will be given. The intention is to not obscure the underlying logic and purpose of the analysis with
matters whose complexity is best understood at another time and through dedicated sources.
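In the very simplest form promised above, and assuming independent basic-event probabilities, the arithmetic behind AND, OR and 2-out-of-3 VOTING gates can be sketched as follows (the function names are illustrative, not from any library):

```python
# Sketch of the probability arithmetic for fault-tree gates, assuming
# the basic events are statistically independent.

def p_and(*ps):
    """AND gate: all inputs must fail, so probabilities multiply."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """OR gate: any input suffices; 1 minus the chance that none fail."""
    none_fail = 1.0
    for p in ps:
        none_fail *= (1.0 - p)
    return 1.0 - none_fail

def p_2oo3(p):
    """2-out-of-3 VOTING gate with identical channels: the gate output
    is wrong when at least two of the three channels fail."""
    return 3 * p**2 * (1 - p) + p**3

print(round(p_and(0.1, 0.1), 4))   # 0.01
print(round(p_or(0.1, 0.1), 4))    # 0.19
print(round(p_2oo3(0.01), 6))      # 0.000298
```

Note how the 2-out-of-3 arrangement turns a channel failure probability of 0.01 into a gate failure probability of roughly 0.0003: this is the reliability benefit that motivates replicated inputs.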
It is particularly important to realise that the choice of failure probability at the component or
human action failure level is what determines the Risk. Representative values of failure probability
are fortunately available to the risk engineer, but actual failure probabilities depend on everyday
management decisions about operating, maintenance, inspection, equipment renewal and training
practices. I have never met, in high-energy industrial plant, operating managers who have taken the
time to understand the assumptions of the design engineers. This is akin to pilots jumping into the
cockpit without bothering to enquire too deeply about the designed performance of the aircraft
they are about to fly: unheard of! Of interest in this context is the revelation that in the operation
of NASA’s space shuttle, the failure probability estimates made in risk analysis at the outset were
shown to be incorrect in practice by a significant amount (NPR, 2011).
What I have found, in decades of teaching this topic, is that risk analyses are surprisingly
susceptible to the interpretation of the individual. This problem can be partly overcome by
following a formal process and some practical guidelines, both of which I attempt to explain
succinctly here. Not all systems that can fail are made up of tanks, pipes, pumps and controllers
connected together in a simple and defined way that is relatively amenable to being analysed in
the same way by different people. An extensive search of the literature reveals a dearth of papers
providing either a theory for or critical appraisal of FTA. The paper of Russo and Kolzow (1994)
is an exception. These authors are concerned with the structure and content of fault trees and
assert that most fault trees are inaccurate, suggesting that this is due to exclusions from the
analysis not being explicitly stated and to ambiguity in the details of what is analysed. Much of their
argument depends on the presence of a standard branch in fault trees, called ‘all other events’. Not
being further analysed, this branch is a convenient but obscure location for anything of which the
analyser has not thought. It is the intention of this chapter to address these failings.
The TSM (Chapter 3) conveys the underlying architecture of the way in which FTA and EA are
connected through the defined Event. It follows therefore that, in principle, any Event that is capable
of being identified and described is amenable to the analysis of Mechanisms and Outcomes. For
example, I have used this to construct successful risk analysis diagrams of such diverse matters as:
• a biped (human, robot) falling;
• being struck by a tree branch when felling trees in a forest;
• a car hitting a fallen tree branch on country roads;
• a car and train reaching a level crossing at the same time;
• the loss of an air operator’s certificate by an airline;
• the failure of an aircraft to achieve take-off safety speed;
• a person unable to leave a walk-in freezer;
• the complete operations of an open-cut mine up to the point of loading coal at the railhead;
• numerous Events at thermal power stations;
• errors while carrying out switching instructions on an electricity distribution system.
The form of the energy-damage theory that is presented in this book specifically defines an Event
in terms of an energy source, as well as extending this to accommodate non-energy threats.
For any type of industry or commercial organisation, it is therefore feasible to determine all the
Event types of concern. It follows that all possible Mechanisms and Outcome pathways are also
able to be modelled. This is neither a gigantic nor an improbable job. It is arguably the most useful
contribution a risk engineer or adviser can make.
The skeleton of a risk analysis is determined by physical possibilities and there is a natural limit
to these. For example, a risk analysis of the Event ‘a person starts to fall’ (on a surface) uncovers
six basic Mechanisms that can be expanded to 17 subcategories, an application used in this chapter
to illustrate a theory for determining Event Mechanisms. One could expand this almost without
limit if one started to add banana skins, oil leaks, rainy weather and so on to the slip category.
All this circumstantial material detracts from a proper understanding of the basic principles and
reinforces the perception that everything is very complex. With very few exceptions, it is not. First
let us obtain a thorough understanding of the structure of possible Occurrences and only then
add relevant circumstantial detail.
To illustrate this important point, all airlines are equal when it comes to what they do: fill
aircraft with fuel, food, baggage and passengers, taxi, take off, climb, cruise, descend, land, taxi and
disembark passengers. However, not all airlines experience the same Circumstantial influences
of environment, culture, economics and people. An untimely emphasis on Circumstances is
distracting and unproductive. It is inefficient to start our understanding of risk by studying these
Circumstantial differences; even though research into them may uncover some valuable insights,
these are highly likely to be reactive insights (studying crashes) and not at all proactive.
There are two possibly unexpected benefits to be gained from the intellectual capital that a risk
analysis represents: the work can be used to assist with the analysis of Occurrences (investigation of
accidents) as well as to classify them by type. It is very useful to have a guide to investigation that has
preconsidered all the options. The benefit works both ways, as an investigation may expose omissions
and other weaknesses in the risk analysis and be a very useful partner in the development of quality
risk analysis models. The classification of cases that have occurred is useful in the development of
historical data, which can be used to measure performance in risk control over time.
Classifying Risks and Mechanisms
In Chapter 5 it was argued that risks could arise from one of two sources, either from exposure
to energies or to non-energy threats. There are 10 or so different forms of energy, depending on
how they are classified, each of which we could therefore say gives rise to a unique type of risk.
Gravitational risks are quite different from those arising from ionising radiation, for example.
Chapter 5 identified four different types of non-energy-based threat. Each of these risk types can
give rise to an Event, being the point in time when control is lost over the potentially damaging or
loss-making properties of the Threat. Figure 9.1 shows this with the convention that items to the
left are subsets of items to their right. That is, energy-based threats are subdivided into the various
Figure 9.1 Classification diagram of risks by source and type
[Diagram summary: the Risk Source (the EVENT) divides into two Risk Types. Energy-based threats are subdivided by the energy form prior to release; non-energy-based threats are subdivided into criminal acts, failures of or in a system or process, liabilities, and adverse external influences.]
forms of energy, not listed here due to their number. Non-energy-based threats are subdivided
into the four categories introduced in Chapter 5.
In our role as risk engineers or managers, we are most commonly concerned to understand
and manage unwanted Mechanisms of Occurrences. It is useful to understand the types of possible
Mechanisms before delving in great detail into those associated with specific Conditions and
Circumstances. In Figure 9.2, use is made of the Divisions proposed by Rowe (1977), which were
described in Table 3.4 as types of Hazard Control failure Mechanism. Each of these is potentially
the Mechanism for an Event based on the energy form of interest.
A Theory for Analysing Unintentional Mechanisms
The objective is to understand all possible Mechanisms that could give rise to an unintentional
Event. On some few occasions, these Mechanisms may appear self-evident from the context.
Chapter 5 describes a structured approach to identifying risk, based on energy sources and Threats,
and by way of illustration shows how these can be associated with Mechanisms and Outcomes
without resorting to the use of special techniques. Such simple methods (using experience and
judgement) are a valuable way of filling in details. However, a greater sense of certainty results
from using a structured approach based on a set of rules or conventions. If we expect risk analysis,
as suggested in the introduction to this chapter, to be the basis of a predictive approach to risk
engineering and management, we need a theoretical basis, however simple it might be. It is a long
way from saying we are interested in an unintentional Mechanism for the release of the potentially
damaging properties of a chemical bonding energy source to the point of being able to decide on
the Top Event that is suited to an FTA of a part of an automated petrochemical plant.
If a small and self-sufficient item of plant is to be analysed, the Top Event is probably
self-evident from the context, as it simply describes whatever it is that is to be analysed.
Figure 9.2 Classification diagram of Mechanisms for energy-based threats
[Diagram summary: the Mechanism classification structure runs from Division through Risk Type to Risk Source (the EVENT). For an energy-based threat, the Risk Type is the energy form prior to release and the Mechanism Division is Purposeful, Incidental or Unintentional.]
As examples: the fire alarm fails to operate, the standby generator fails to start on demand, the
tank is overfilled. All that is needed to complete the FTA is an understanding of the logic and
how to make the logic diagrams reflect that adequately. None of this is especially complex in
itself, although even simple electromechanical systems can lead to a surprising and undesirable
variety in the FTA produced by different people. If, however, we are concerned with the
potential for an Occurrence involving a large energy source in a far larger and more complex
piece of plant, we can see that each of these examples may be a small part of something
bigger – but what?
The contention here is that if an analyst follows a process, they are more likely to create
a reproducible result. This process benefits greatly from first describing the intended positive
functioning of the system, as the analysis of failure to do so is then more understandable to
others. The process is a simple one and involves:
1. Rules for determining the Top Event.
2. Selection of the Mechanism Division (Figure 9.2).
3. Question 1: What abilities does the system have that normally prevent it from failing? An
explicit statement is made of what enables the system to control its hazardous properties
under normal operating Conditions and Circumstances: the abilities or capabilities of
the system in this regard. The basic ability may be subdivided into functional abilities, for
example the ability to control temperature in a domestic hot water service will depend
on the functional ability to measure water temperature and then influence energy input to
the service.
4. Question 2: What mechanisms does the system have that give it the abilities determined
by Question 1? An explicit statement is made of the Mechanisms (called Hazard Control
failure Mechanisms in Chapter 3) which enable the system to have these functional
capabilities. For example a thermocouple and an electric heating element (alternatively
a person using a finger to sense temperature and a wood fire under the hot water tank).
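The record produced by the two questions can be captured in a very simple structure. The following is a hypothetical sketch of the hot water example; all the labels are invented for illustration and are not terms drawn from the book's tables.

```python
# Illustrative sketch only: the two-question analysis of the domestic
# hot water service, recorded as a nested mapping. All names here are
# hypothetical labels chosen for this example.

hot_water_analysis = {
    "top_event": "Water temperature not controlled",
    "mechanism_division": "Unintentional",
    # Question 1: abilities that normally control the hazardous properties.
    "abilities": {
        "measure water temperature": {
            # Question 2: Mechanisms that provide each ability.
            "mechanisms": ["thermocouple", "person's finger"],
        },
        "influence energy input": {
            "mechanisms": ["electric heating element", "wood fire"],
        },
    },
}

# Question 2 is asked once for each ability identified by Question 1.
for ability, detail in hot_water_analysis["abilities"].items():
    print(ability, "->", detail["mechanisms"])
```

Writing the positive functioning down in this explicit form is what makes the later analysis of its failure reproducible by different analysts.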
The process is conveniently illustrated by analysing something everyone has experienced,
namely a person falling. The Event, being defined in Chapter 4 as the point in time when control
is lost over the potentially damaging properties of the Threat in which we are interested (which
is the gravitational potential energy of the person in this case), can be stated as ‘a fall begins’.
The Risk Type from Figure 9.2 is ‘Gravitational Potential Energy – fall begins.’ In the purposeful
Division, a fall is a voluntary jump. No incidental Mechanisms are envisaged. In the unintentional
Division, which will be analysed here, falls do not normally begin, because of the abilities that we
have to hold our body upright, to know what upright means and to so arrange ourselves that
forces and moments are balanced in the vertical direction and the horizontal plane. Forces and
moments arise from gravity, inertia, the wind and other forces (such as pushes) acting on us at
the feet, hands or other parts of the body. We need to be able to balance forces and moments
whether stationary or walking. These system abilities are:
• the ability to hold our skeleton in shape;
• the ability to balance forces and moments by moving our body as needed to provide support
when and where required – our movements must not be constrained;
• the ability to balance forces and moments by developing the necessary support forces at
points of interaction of our body with its surrounds.
If any one of these system abilities is taken from us, then an Event will exist – a fall will begin.
A negative statement of abilities is the way to describe the nature of the Mechanism, for example
the inability to hold our skeleton in shape. For brevity, in Table 9.3, the failures of these abilities
are called Collapse, Constraint and Release respectively. The word failure, depending on context,
could mean one of the following:
• a literal failure of a component, for example a bone breaks or a ligament snaps;
• a missed visual cue or other sensory input, for example not seeing a banana skin on the floor;
• a missing or inappropriate output of a processing function, for example the brain makes the
wrong decision or no decision.
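The top level of this analysis amounts to a single OR gate over the three abilities. A minimal sketch, with hypothetical parameter names standing for the loss of each ability:

```python
# Sketch of the top level of the 'fall begins' analysis: the Event
# occurs if any one of the three system abilities is lost (an OR gate).

def fall_begins(collapse, constraint, release):
    """Top Event 'a fall begins': loss of any single ability suffices.

    collapse   - inability to hold the skeleton in shape
    constraint - movements constrained, so support cannot be repositioned
    release    - support forces cannot be developed at points of contact
    """
    return collapse or constraint or release  # OR gate over the abilities

print(fall_begins(False, False, False))  # False: all abilities intact
print(fall_begins(False, True, False))   # True: constrained movement alone
```

Each argument would itself be the output of a deeper branch of the tree, as the neuromuscular analysis that follows illustrates for Collapse.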
Further analysis could and arguably should be completed using terms uninfluenced by the
technology of the biped being considered – which could be either a mammal or a robot. However,
at this point a person is referred to in order to avoid obscure generalised terms. Collapse means
the person is no longer able to hold their body upright. This ability is derived from the functional
abilities of the neuromuscular system:
• the ability of the brain to know what is needed and to tell the muscles what to do;
• the ability of the muscles and tendons to hold the skeleton in the necessary configuration.
Each of these abilities exists because of the ways in which the system works. The way in which the
neuromuscular ability is provided is that:
• The brain receives inputs from the eyes, ears and the rest of the body – the ‘seat of the pants’
senses that pilots make use of (called ‘sensory input problem’).
• The brain processes that information and decides what needs to be done to stay upright
(called ‘brain processing failure’). For practical reasons, the functioning of the proprioceptors
in the body, which provide feedback to the brain about the position of joints, is included in this.
• The brain sends action signals to the muscles (called ‘signal failure’).
Knowing what the system has to do (system ability) and how it functions (system functional
capability), it is possible to know how these functions can be disrupted. In Figure 9.3 the terms
Class, Order, Family and Genus are used to name these levels in the analysis as, apart from being
readily recognised as part of the biological classification systems, they have inherently relevant
meanings and it is useful to have names for the various parts of the Mechanism. In the biological
classifications, Kingdom and Phylum precede Class. Here, Kingdom is analogous to Risk Type and
Phylum to Mechanism Division.
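As a rough aide-memoire, the naming levels might be listed as follows. The brief glosses attached to Class, Order, Family and Genus are my reading of the fall example above, not quotations from the text:

```python
# Sketch of the naming levels, outermost to innermost, as applied to
# the fall example. The glosses are illustrative interpretations only.

mechanism_levels = [
    ("Kingdom", "analogous to Risk Type: gravitational potential energy"),
    ("Phylum",  "analogous to Mechanism Division: unintentional"),
    ("Class",   "system ability lost, e.g. Collapse"),
    ("Order",   "functional ability lost, e.g. brain processing failure"),
    ("Family",  "how the function is provided, e.g. the neuromuscular system"),
    ("Genus",   "how this is made to fail, e.g. blood supply failure"),
]

for level, gloss in mechanism_levels:
    print(f"{level:8s} {gloss}")
```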
Question 1 is asked as many times as necessary. The ability to keep the body in a
controlled posture relies on the ability to decide what is needed and give effect to that
decision. From then on, we can’t answer this question meaningfully, as we have to ask in what
ways (Mechanisms) are these abilities made possible? Generally it is a physical mechanism
that does this or a procedure that encourages people to behave in a defined way. Notice
that the abilities (Class and Order) are of general applicability and do not depend on the
technology employed for the Mechanisms – either a mammal or a robot. The technology of
the biped could be brains, blood, nerves and muscles, or electronic processor, power supply,
signal cables and electric actuators. The generality of the analysis is derived from the need of
the system being investigated to comply with physical laws. In this case, our biped exists on
earth with gravity and to remain stable all forces and moments must be in either static or
dynamic balance. This applies to the body itself, since it is made up of joints and linkages, as
well as to the body as a whole as it interacts with its environment by standing or walking or
being pushed or pulled and so on.
Successful FTA Requires a Suitable Top Event
The example of the fall shows how the use of Question 1 above leads to an understanding of the
Class and Order of the Mechanism. These are capable of being expressed in a generic form that
should not depend on the actual technology of the system, even though it may be useful to use
technology-specific terms for brevity. Further analysis below the Order, which is what FTA does,
Figure 9.3 Classification diagram illustrating the theory of Mechanism structure
[Diagram not reproducible in this extract. It shows the Mechanism classification structure for the ‘fall begins’ Event, reading from the Division (EVENT) through Class, Order and Family to Genus, the level at which each function is made to fail. Genus-level examples include medical conditions, drugs, confusing visual cues, inertia effects, blood supply failure, unconsciousness, disease, a torn tendon or muscle, a failed ligament, a foreign body and impact damage; Family-level labels include controller, signal, actuator and structural failures. Support-related branches include Trip (below cg), Knockdown (above cg), added, increasing and advancing load, static and dynamic acceleration, and missing, reduced and retreating support.]
necessarily depends on the technology in use (people or robots?) and the specific design of the
system. As argued previously, the ability to conduct a high-level Mechanism analysis is valuable as it
enables Top Events for FTA to be derived from logical analysis rather than intuition or guesswork.
As an example, consider the ejector seat used in high-performance military aircraft, the
functions of which can be succinctly described as follows. Once triggered by the pilot, the cockpit
canopy is jettisoned, the legs of the pilot are pulled back and restrained, the seat together with
the pilot is propelled up its launching rails, its motion stabilised by drogue parachutes before
the pilot is released from the seat and the pilot’s parachute is deployed. Mostly, the logic is of
the type: if this happens then that happens. This is akin to a chain, as the failure of any one link
in the chain will lead to a complete failure of function. What is the Top Event on which a failure
analysis could be based? A pilot abandons an aircraft if its use as a support against gravity is
compromised – it is going to crash. In Table 9.3 this is the release/missing support route to a
fall. In this case, however, the support of the aircraft is replaced by the support of a parachute,
as both a parachute and an aeroplane can be landed. The successful function of the device is to
provide the pilot with the support of a parachute when needed. The failure of this function is
well described as failure to do this, so the Top Event would be stated as ‘Failure to provide the
support of a parachute when needed’. The phrase ‘Missing support’ is highly relevant with the
qualification ‘when needed’.
The need for careful consideration of the logic when identifying Top Events can be illustrated
by another aeronautical example. Aeronautical engineers are familiar with a design condition for
multiple engine aircraft known as ‘engine failure on take-off’ or EFTO. At take-off, weights are
high and speeds are low. Low speeds reduce the power of aerodynamic control surfaces, but
high weights require larger forces. If an engine fails, the aeroplane must be able to maintain a
controlled climb (the design condition for roll and yaw control) and maintain a minimum climb
gradient (the engine power and maximum take-off weight design condition). Hence EFTO is
intuitively also a Top Event. EFTO essentially means that the thrust from the engine in question
(an outboard engine in the case of a four-engine aircraft) is no longer being delivered to the
airframe. Obviously, this can occur in one of two ways: either the engine stops working or the
engine separates from the airframe. Because the former is far more likely and because the
phrase EFTO is in common use, it is easy to regard this as being the Top Event for the case.
One day, the much less common ‘engine separates’ case did occur and the Outcome was that
the engine accelerated past the wing, destroying all the triplicated hydraulic control lines that
lay in its path, hence disabling all roll control devices on that wing. The aircraft rolled on its back
and crashed near the airport boundary, killing all on board. As will be explained later, different
Mechanisms can lead to different Outcomes. It is clear that an inaccurately stated Top Event can
result in significant omissions from a risk analysis.
The Structure and Content of a Fault Tree
The earlier figures in this chapter illustrate the relationship between different levels of classification.
The style used has been chosen because it is similar to the form of logic diagrams used to
construct an FTA. By convention, an FTA is constructed either downwards from the Top Event, or
from the Top Event towards the left. Because the horizontal form replicates the style of the TSM,
this is the form that will be used here. By convention, in engineering and science time is shown on
a horizontal axis moving from earlier times on the left to later times on the right. Mechanisms give
rise to Events and, as Mechanisms always take time to unfold, the Event usually occurs some time
after the Mechanism begins. Certainly, the Event logically follows the Mechanism and, of course,
Outcomes follow the Event.
Viner, Derek. Occupational Risk Control : Predicting and Preventing the Unwanted, Taylor & Francis Group, 2016. ProQuest Ebook
Central, http://ebookcentral.proquest.com/lib/swin/detail.action?docID=2004776.
Created from swin on 2021-06-11 08:24:00.
Copyright © 2016. Taylor & Francis Group. All rights reserved.
In an FTA, the logic shows what needs to happen to produce the fault of interest. For
example, if a petrol engine stops (the fault) it is due either to there being no ignition spark,
to the fuel/air ratio being incorrect (too little or too much of either), or to the power demand
exceeding the capability of the engine. Each of these OR logic possibilities is shown
diagrammatically to the left of (or below) the fault. Other faults may require two or more things
to happen simultaneously, for example for the power supply to an industrial process to fail it
may be necessary for the primary (the normal electricity supply) AND the backup (the standby
generator) power supply to fail. Figures 9.4 and 9.5 illustrate these examples in the form of logic
diagrams in which ‘OR’ and ‘&’ show the logic appropriate to the node. In either example, the
analysis could be continued further to the left to uncover the ways in which any of the faults could
be produced. For example, air flow to an engine could be blocked by ice in the carburettor or
(unlikely but possible) by a blockage in the exhaust pipe. Carburettor icing will only occur if the
dew point is close enough to the current temperature. The increasing level of detail is taken to
the point where there is no intention to analyse the Mechanisms further. Usually, this is the point
at which the expected function of a component has failed (for example ignition failure, standby
generator failure), a person acts inappropriately (for example switched off in error) or a sufficient
or relevant environmental condition exists (for example air flow blocked by ice).
A functional failure means that the normal and intended purpose or function of a component
or condition is not satisfied. The inappropriate interaction of people with the system being analysed
refers to their choice of working method or procedure, receiving and acting on information, direct
personal interaction with the environment and so on. This is inappropriate in the sense that it
would lead to the unintended Mechanism.
Figure 9.4 A fault tree with OR logic

    Engine stops
    └─ OR:
       ├─ No ignition
       │  └─ OR:
       │     ├─ Switched off in error
       │     └─ Ignition system failure
       ├─ Fuel/air ratio incorrect
       │  └─ OR:
       │     ├─ Air flow blocked
       │     ├─ Too much fuel (flooded)
       │     └─ Too little fuel (starved)
       └─ High power demand

Figure 9.5 A fault tree with AND logic

    Power supply fails
    └─ & (AND):
       ├─ Incoming feeder fault
       └─ Stand-by generator fails to start
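The trees in Figures 9.4 and 9.5 can be captured as a nested data structure, which later lends itself to automated evaluation. A minimal sketch in Python (the node labels follow the figures; the representation itself is illustrative, not from the text):

```python
# A node is either a terminal fault (a string) or a (gate, children) pair,
# where gate is "OR" or "AND". Labels follow Figures 9.4 and 9.5.
engine_stops = ("OR", [
    ("OR", ["Switched off in error", "Ignition system failure"]),  # No ignition
    ("OR", ["Air flow blocked", "Too much fuel (flooded)",
            "Too little fuel (starved)"]),                   # Fuel/air ratio incorrect
    "High power demand",
])

power_supply_fails = ("AND", [
    "Incoming feeder fault",
    "Stand-by generator fails to start",
])

def endpoints(node):
    """Collect the terminal faults at which the analysis stops."""
    if isinstance(node, str):
        return [node]
    _gate, children = node
    found = []
    for child in children:
        found.extend(endpoints(child))
    return found
```

For example, `endpoints(engine_stops)` returns the six terminal faults of Figure 9.4, the points at which there is no intention to analyse the Mechanisms further.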
Very commonly, the analysis of the ways in which a Mechanism develops ends at the Genus
with either a component failure or an inappropriate action by a person – see Figure 9.6. For
example, either a tank structure fails and releases hazardous liquid or a person opens a valve and
releases the liquid. Proponents of accident theory would be comfortable calling these Genera
‘unsafe conditions’ and ‘unsafe acts’. The scientist’s objection to these terms is simply the use
of the essentially judgemental qualifier ‘unsafe’. An unsafe act is better seen as an inappropriate
interaction of people with their environment (being more objective and capable of further analysis).
Wigglesworth (1972) saw the benefit of replacing the term ‘unsafe act’ with ‘human error’ and in
defining ‘error’ as ‘a missing or inappropriate response to a stimulus’.
There are two aspects of this Genus of inappropriate action, for which the (biological) term
Species would apply: an inappropriate method properly (or even incorrectly) followed, or an
appropriate method incorrectly followed, called error for brevity. In the former, the person is
doing what is intended (or makes a mistake doing this) but this intended method is inappropriate.
A detailed understanding of these two Species is essential for the risk engineer, but this would
require a substantial digression from the subject of this chapter. It is the subject matter of the
study of what is known as human factors or ergonomics.
Functional failures of components are the subject matter of reliability and maintenance
engineering. For the purposes of FTA it is sufficient to note the three commonly accepted Species
of component failure:
1. A primary failure is one due to the malfunction of an item under conditions for which it was
designed. Typically these failures occur due to a deterioration in the component arising from
corrosion, fatigue, erosion, embrittlement and similar ageing processes. Examples include:
• an electric motor fails while subject to load within its design capacity;
• a tank or pipe develops a leak due to corrosion;
• a valve leaks due to damage to the valve seat or ageing of gaskets and packings.
2. A secondary failure of an item occurs when it is subject to conditions which are not part
of its designed operating condition. Typically, these failures are due to the imposition of
excessive loads on an otherwise healthy component. Examples include:
• a pump fails when solid objects are sucked into it;
• pipework fails when hit by a vehicle;
Figure 9.6 The endpoints of an FTA

    Component fails
    └─ OR:
       ├─ Primary failure
       ├─ Secondary failure
       └─ Command failure

    Inappropriate action
    └─ OR:
       ├─ Inappropriate method
       └─ Error

(Each Genus on the left decomposes into its Species at the endpoints.)
• a conductor is destroyed by excessive current arising from a lightning strike;
• a conveyor bridge collapses under the weight of spilled materials.
3. A command failure occurs if the item is made to operate or to stop operating at a time or in
a manner that is not appropriate. The command can either come from an automated control
system or from a person in the case of a manually controlled process. Examples include:
• a person or automatic controller switches a pump on (or off) at the wrong time;
• a pilot inappropriately switches off a hydraulic system.
More could be said about the classification of ways a component can fail to perform as intended,
but a digression into the details of reliability and maintenance engineering is not appropriate. As
with error and work methods, knowledge of this detail is essential for a risk engineer.
The term component is intended to be interpreted loosely here. In an industrial plant, the
term is used for electrical, electronic, pneumatic, hydraulic and mechanical components of the
plant. The structure of the plant itself, the structure of tanks and their bunds, of conveyor galleries,
pipe racks and so on are also components of the plant. Outside the industrial environment, the
term can be applied to all physical components that contribute to the function of any system, as
the examples in Table 9.1 illustrate.
Table 9.1 Examples of physical components

    Field                | Components include
    Railways             | Trackbeds and tracks, electricity supply hardware, bridges, tunnels,
                         | cuttings, drains, platforms, signalling apparatus
    Airports and airways | Terminal buildings, service tunnels, aerobridges, taxiways and runways,
                         | drainage works, lighting, radar, radio communication equipment,
                         | navigation aids
    Forestry             | Trees and tree branches, winches, mobile equipment, portable equipment
    Civil works          | Drains, trenches, mobile powered equipment
    Commerce             | Computer systems, communication pathways, records facilities

Defined methods of work exist in many situations, for example electrical switching work, electrical
live-line work, piloting/driving work, operating machinery, energy isolation work, maintenance
activities, tapping a furnace and so on. Commercial examples are also numerous: how loans are
approved, the way advice is given, the way investment or planning decisions are made and so on.
It is entirely possible to subject a defined method of work to FTA. An inappropriate method can
be said to exist if:
1. The method is formal (probably in writing) but is incorrect or impractical. Examples include:
• an electrical switching instruction that is intended to apply to the job in which it is used
but there is an error (for example wrong action or missing action) in it;
• an energy isolation procedure that is complex and time consuming to the point of
being impractical;
• a required work method that is not up to the required standard of control measure,
given the nature of the risk intended to be controlled by it.
2. A method that is adopted to do a job lacks the qualities necessary to avoid damage. An
example is a method of isolating plant that has developed over time but which ignores
relevant energy sources.
Error, in the simplest sense, means that a person acts in a way that was not intended or that
is subsequently shown to be inappropriate in the circumstances. Many errors of judgement or
action occur without ever resulting in a complete Occurrence. Table 9.2 shows the context in
which this needs to be understood. While in most cases people are aware of what is required,
this is not always the case. It is not uncommon for people to have no intention of complying
with an approved or expected method of work. It is common to hear of people deciding to
‘work to rule’ as a form of industrial action: delays can then be expected, because the work
takes far longer when done strictly to rule. In this case, compliance is nonproductive (and hence
not feasible in this sense) and noncompliance has been accepted as the norm, with managers
turning a blind eye to the expected method of work. Ordinary errors do not normally constitute
negligence and even intended noncompliance with rules may not be negligent if the rules are
known to be seldom followed or impractical. Negligence requires careful definition having
consideration of relevant laws.
Table 9.2 The context of error

    Knowledge of what | Intention            | Compliance | Result     | Interpretation
    is expected       |                      | feasible?  |            |
    Aware             | To comply            | Yes        | Correct    | What we hope for
    Aware             | To comply            | Yes        | In error   | Error type 1
    Aware             | To not comply        | Yes        | In error   | Negligent action
    Aware             | To not comply        | No         | Irrelevant | Unsuitable method
    Aware             | Unintentional action | Yes or No  | In error   | ‘Accident’ type 1
    Unaware           | Unintentional action | –          | Correct    | Luck
    Unaware           | Unintentional action | –          | In error   | Error type 2
    Unaware           | Unintentional action | –          | In error   | ‘Accident’ type 2

Error type 1 is in the normal context of understanding the term. The correct action is
known and appropriate and the person has every intention of complying, but does not do so.
Pilots are familiar with this as the piloting task involves numerous required actions, including
following checklists, making radio calls, making navigational decisions and flying the aircraft.
Pilots are retrospectively aware of the number of incorrect or missing actions or decisions
they have made during a flight. Error type 2 applies to a situation in which the person has no
awareness of the need to behave in a particular way and by chance acts in an inappropriate way.
For example, a person on a farm enters a slurry tank with no awareness of the need to check
the atmosphere is breathable. Whether of type 1 or 2, error is a Species of great variety and
theoretical development and requires detailed study in its own right.
Table 9.3 is an analysis of the structure of a controlling function. Automated systems
are (typically) electromechanical devices that replace the human as the system controller.
A controlling function, whether human or automatic, always involves the three sequential steps
of detection of the system state, understanding whether that state is different from a desired
state (cognition) and taking action to correct any discrepancy. An automated system does not
suffer from the problem of intention with regard to compliance to which human beings are
subject. It may nevertheless make errors at either step due to functional failures or limitations
in the provided capability.
Table 9.3 The structure of the controller function

1. Detection (perception)
   Human responder: Attention is directed towards relevant indications. Able to see the
   indications because these can be seen, heard or felt (e.g. smoke, leaks, overflow,
   noise, flames) and the presence of them is not masked.
   Automated system responder: Serviceable and calibrated sensors detect abnormal
   conditions. Sensor transmitters function as intended and the signal is received by
   the controller.

2. Cognition
   Human responder: Correctly understands the meaning of what is perceived (e.g. that is
   an alarm siren). Correctly deduces the implications of what is perceived. Correctly
   decides on the action required.
   Automated system responder: The controller function is designed to process the
   received signals.

3. Action
   Human responder: Correctly takes the action. The action is timely and effective.
   Automated system responder: The controller sends commands to actuators, which
   function as intended, or alerts to human operators.
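The three sequential steps of the controlling function can be sketched as a minimal loop. The sketch below is illustrative (the function names and tolerance are assumptions, not from the text); a functional failure at any one of the three steps defeats the whole controlling function, which is why each step appears separately in an FTA:

```python
# Minimal controller loop: detection, cognition, action.
def control_step(read_sensor, setpoint, tolerance, actuate):
    value = read_sensor()            # 1. Detection: sense the system state
    discrepancy = value - setpoint   # 2. Cognition: compare with the desired state
    if abs(discrepancy) > tolerance:
        actuate(discrepancy)         # 3. Action: command a correction
    return discrepancy
```

A failed sensor (step 1), faulty comparison logic (step 2) or a failed actuator (step 3) each leaves the discrepancy uncorrected.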

Developing Understandable Fault Trees
It is easy to develop an FTA that has no evident incorrect features but which nevertheless is
obscure in that it fails to capture or make evident the essence of possible Mechanisms. For
example, in a typical industrial environment, almost every FTA will include equipment failure,
operating method failure, power failure and control system failure. One can list all the equipment
and provide detail of the logical connection of all the components, but without making the
nature or implication of the possible failures evident to the user.
On the other hand, if the analyst has an understanding of what normally happens routinely
and successfully (e.g. when tank temperature reaches a set point then a valve is opened after a
time delay to transfer the contents to the next stage in the process) then it is easy to express
the opposite of this in failure statements, for example: the tank temperature exceeds the set
point; the valve does not open; the contents are not transferred. That is, the failure modes of
the subassembly of the tank and its associated components are being described. Different failure
modes may make different contributions to the Top Event.
Alternatively, the analyst can first understand what set of conditions could bring about
the Top Event and then use a plain language statement of this to guide the development of the
FTA. For example, for an explosive gas/air mixture to exist in an oven the rate of gas flow into
the oven must be high in relation to the rate of air flow. This may result from gas flowing (leaking
valve or valve opened) when the blower is off or from low blower flow (blocked inlet, blower
failure, etc.) when the gas is on. It is generally not the case that doggedly constructing an FTA in
the absence of this understanding will make the reason for the Top Event clear to the analyst. An
FTA will be created in this way, but its meaning may be very unclear. Not only will the analyst be
no wiser but the result may bear little resemblance to someone else’s analysis.
Of equal importance is the fact that the resulting FTA will not provide insight into what
conditions need to be guarded against in the operation of the plant. This is of great importance
in determining realistic failure probabilities, as discussed below. Consequently, plant operating and
maintenance practices will not be sensitive to these conditions.
Estimating the Probability of the Top Event
As shown in Figure 9.6, FTA ends at the Species level of component or action failure. If the
probability of these failures can be estimated, then it is possible to follow the AND/OR logic of
the diagram to deduce the probability of the Top Event happening. The ability to do this relies
on an understanding of the mathematical meaning of the logical operators AND and OR when
associated with failure probabilities. For this discussion, it will be assumed that component or
action failure probability can be estimated and that it can be represented by a single value.
The point has been made in Chapter 6 that although probability is a dimensionless number,
it nevertheless has the meaning of the number of failures per unit of Exposure and it is of great
importance for this meaning to be explicit. The exposure denominator can be of three types:
1. On demand – the demand for the service being provided by the component occurs
occasionally. Examples include components on standby (such as fire water pumps,
emergency generators), alarms, smoke or flame detectors, fire suppression devices,
security systems, switching devices, batch delivery (for example tanker supply from or to a
fuel tank) and transport arrivals (for example trains at a station). The Exposure is the unit
of demand and the failure probability has the meaning of [failures to work per demand].
2. Process time – the demand for the service is continuous when the system of which it is
a part is in use and this may be measured by natural time or production cycles. Examples
include duty pumps, power supplies and engines, sensors and gauges. The Exposure is the
running hours of the component and the failure probability has the meaning of [failures
per running hour]. Failure may alternatively be measured per operating cycle, related
obviously by the number of operating cycles in an hour, day, week or year.
3. Permanent – the demand for the service is continuous during the lifetime of the component.
Examples include structures of all kinds (gravity never rests) such as bridges, buildings,
and towers, and continuous services such as electrical or gas power supply and radio
communications. The Exposure is the same in principle as that of routine Exposures but
the unit typically taken is that of the year, so that the meaning is [failures per year]. This is
indistinguishable from Frequency, of course.
Using Figure 9.4 as an example of OR logic, ignition failure can be brought about either by an
action failure (switched off in error) or by a component failure (ignition system failure). As this is
only of interest when the engine is running, the running hour is chosen as the unit of Exposure.
For the purpose of illustrating the logic only, probability values are chosen as below:
pA = 0.001 [action failures per running hour],
pC = 0.0001 [component failures per running hour].
The overall probability of an ignition failure arising from pA OR pC is properly given by Boolean
algebra as:
pA OR pC = pA + pC – pA × pC = 0.001 + 0.0001 – 0.001 × 0.0001 [ignition failures per running hour].
It is common practice in risk engineering to ignore the second-order product term (in this case pA
× pC) as this has a value orders of magnitude less than the other terms and probability estimates
are not precise. The equation is simplified to:
pA OR pC = pA + pC = 0.001 + 0.0001 = 0.0011 [ignition failures per running hour].
In words, the more failure types that individually could bring this state of ignition failure about, the
greater the probability that it will happen.
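The OR arithmetic above can be checked directly. A short sketch using the illustrative values from the text:

```python
# OR logic (Figure 9.4): either failure alone causes ignition failure.
pA = 0.001    # switched off in error [action failures per running hour]
pC = 0.0001   # ignition system failure [component failures per running hour]

exact = pA + pC - pA * pC   # full Boolean OR for independent failures
approx = pA + pC            # rare-event approximation, as used in practice

# The dropped product term pA * pC (1e-7) is orders of magnitude below the
# retained terms, so approx = 0.0011 is an adequate estimate.
```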
Using Figure 9.5 as an example of AND logic, as power supply failures are only of interest
when the power is being used, the hour is again selected as the unit of Exposure for the feeder.
As the standby generator is not normally running, its exposure is best expressed as the number
of start demands, which is the same as the number of incoming feeder faults. For illustration,
probability values are chosen as below:
pC Feeder = 10^-5 [incoming feeder faults per running hour],
pC Generator = 10^-2 [start failures per incoming feeder fault].
Boolean algebra gives the overall probability of a power supply failure as the product of these:
pC Feeder AND pC Generator = pC Feeder × pC Generator = 10^-5 × 10^-2 = 10^-7 [power supply failures per running hour].
In words, the more failure types that must happen to bring this failure about, the lower the
probability that it will happen. This is true only if the two failures are independent of one
another. Because of this it is very important when constructing fault trees to ensure that this
independence condition really is satisfied when using AND logic. When two failures are not
independent, they are said to have a common mode of failure. For example, if a gas turbine
control system makes use of two air pressure sensors for reliability purposes, the analyst would
need to be satisfied that these two sensors really had no common modes of failure in order
to use AND logic. Common modes of failure for such components could include dust build-up
and insect nests. Situations can also arise in the testing of plant when AND logic protection
devices are intentionally overridden in order to test a final level of protection. Many cases
have occurred of failures in such circumstances. AND logic in an FTA results in low Top Event
probabilities and potentially to a perception that the Risk is very low. Common mode failure
possibilities increase the Top Event probability and result in an increase in the estimated Risk.
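The AND-gate arithmetic and the effect of a common failure mode can be shown numerically. The duplicated-sensor values and the beta-factor model below are illustrative assumptions (the beta-factor is a standard reliability-engineering treatment of common-cause failure, not something the text prescribes):

```python
# AND logic (Figure 9.5): both failures must coincide (assumes independence).
p_feeder = 1e-5       # incoming feeder faults per running hour
p_generator = 1e-2    # start failures per incoming feeder fault
p_supply = p_feeder * p_generator   # power supply failures per running hour

# Common mode: two duplicated air pressure sensors, each failing with
# probability p_sensor. A beta-factor model assumes a fraction beta of
# failures arise from a shared cause (e.g. dust build-up) and so defeat
# both sensors at once.
p_sensor = 1e-3
beta = 0.1            # assumed common-cause fraction (illustrative)
naive_pair = p_sensor ** 2                                        # independence assumed
with_common_mode = beta * p_sensor + ((1 - beta) * p_sensor) ** 2
```

With these numbers the naive AND gives 10^-6, but the common-mode term dominates and raises the pair's failure probability to about 10^-4: exactly the underestimation of Risk the preceding paragraph warns about.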
The simplicity of the mathematical treatment of the logic diagram is countered by the
complexity of finding appropriate failure probability values. To properly understand the nature
of failure probability, it is necessary to understand something of reliability mathematics as
well as have access to a source of data. Reliability mathematics is introduced in Chapter 10,
where it is explained that components in general may exhibit early failures (so-called infant
mortality or burn-in failures) as well as late failures (commonly called wear-out). Between
the end of the early failures and the start of the late failures there is often a lengthy period
in which the failure probability is reasonably constant and assumed to be due to random
processes. It is possible to find sources of random failure probability data and extract a value
that seems to be relevant. However, actual failure probability depends on many factors. Some
components may have more than one failure mode and the probability of failure may be
different for each. For example, the failure modes of a pump will include fail to start, fail while
running and fail to deliver the required flow. It will be no surprise that failure probability
also depends on the manufacturing quality of the component (usually reflected in the price),
the physical environment in which it operates as well as the inspection, maintenance and
equipment renewal practices in the organisation.
With regard to component failure probability, one can generally (there are few
exceptions) take it for granted that the user organisation will not have had sufficient failures
to properly determine failure probability. While the fact of component failure is often stored
in a maintenance management system, it is unlikely that the time in service (which is often the
relevant measure of exposure) of the failed component will have been determined or recorded.
It is also unlikely that the component manufacturer will be able to supply data. There are
exceptions to this observation, such as the aircraft industry, where manufacturers carefully
monitor engine and other critical equipment failure rates and modes, and the nuclear power
generation industry.
Action failure probability depends not only on the task to be done but also on a number of
factors that influence the performance of the individual. These factors are both external (that
is, derived from the organisation or the physical and social environment) and internal (that is,
state of knowledge, fatigue, stress imposed by an emergency condition). As has previously been
noted, human error is a complex subject on which much has been written over many decades
and of which a risk engineer needs a detailed understanding. Due to their significance, action
failures have been studied (Gertman and Blackman, 1994), particularly in catastrophe-sensitive
industries. Efforts have been made to estimate human error probabilities in commerce and the
medical industry – for example, Edmondson (1996).
As a search of the internet with search terms such as ‘failure probability data’ and ‘human
error probability’ will show, this is a field that has attracted much attention over many
years, particularly in the nuclear power industry (US Nuclear Regulatory Commission, 1984;
International Atomic Energy Agency, 1988). There are many sources of component and action
failure probability data of which the risk engineer should be aware.
To avoid the general difficulty of obtaining plant-specific failure probability data, it is
common practice amongst risk engineers to use representative standard values (see, for
example, Table 9.4) for different classes of components. A benefit of this approach is that the
Top Event probability can be estimated consistently and with relative ease, at least to within a
reasonable order of magnitude. At the very least, representative probability values make clear
the relative significance of different branches of the fault tree as contributors to the Top Event.
In Chapter 10 the mathematical origin of these values is explained, particularly that they assume
random failures and not age-related failures. If the plant being analysed is reaching old age, if
maintenance practices are reactive and equipment renewal investment minimal, the estimates
of failure probability used in FTA must reflect this reality.
Human error probabilities cover a wide range, from one error in fewer than five
opportunities to one in millions. This is influenced by the context. Many formal data
collections arise from studies of people doing skilled tasks, with the implication that they
intend to succeed. Examples include opening a valve or replacing a circuit board. The resulting
probabilities cluster around one error in 1,000. However, people often do not intend to comply.
In my own experience, unskilled people working in a loosely managed environment may fail to
do what is expected of them as many as once or more in three opportunities. I have seen this
even where a fatality has happened in very recent memory. I have even seen zero compliance
(error probability of one) when a rule is seen as impractical even when associated with severe
or fatal likely Consequence Values. On the other hand, I have also seen people performing
production tasks which could readily be automated with an apparent error probability of less
than one in a million actions: the body is doing a learned physical task with extraordinary
reliability. When using error probability estimates it is important to break the task or activity
down into its component parts. For example, the act of reading a gauge consists of identifying
the gauge to be read, reading it and recording the result. Any of these parts of the task could
be done incorrectly and each has its own probability of error.
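The decomposition of the gauge-reading task can be expressed arithmetically. A sketch with illustrative per-part error probabilities (the values are assumptions, not data from the text):

```python
# The gauge-reading task has three parts, each with its own error probability.
p_identify = 1e-3   # wrong gauge identified
p_read = 1e-3       # gauge misread
p_record = 1e-3     # result recorded incorrectly

# Overall task error: one minus the probability that every part succeeds.
p_task_error = 1 - (1 - p_identify) * (1 - p_read) * (1 - p_record)
# For small probabilities this is close to the simple sum of the parts (3e-3).
```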
Table 9.4 Representative failure probability estimates (various sources)

    Failure type                                   | Exposure base (prob. denominator) | Failure probability
    Human error (occasional tasks, unskilled
      process operator)                            | per task action or decision       | 10^-1–10^-2
    Skilled human error (maintenance tasks)        | per task action or decision       | 10^-2–10^-5
    Small component failures:
      Sensors and logic controllers fail           | per operating hour                | 10^-3–10^-4
      Small pipe and gasket leaks                  | per operating hour                | 10^-3–10^-4
      Alarms fail to sound                         | per demand                        | 10^-3–10^-4
      Relief valve fails to open                   | per demand                        | 10^-3–10^-4
    Engineered large component failures:
      Inspected major mechanical controls fail     | per operating hour                | 10^-5–10^-6
      Storage vessels spontaneous rupture          | per operating hour                | 10^-5–10^-6
      Valve rupture, major leaks                   | per operating hour                | 10^-5–10^-6
    Human error in rapid routine tasks
      (manual production work)                     | per task                          | 10^-5–10^-6
    Pipe spontaneous rupture, containment
      weld failure                                 | per operating hour                | 10^-7–10^-8
    In-built design failures (e.g. unsuitable
      material selection)                          | per operating hour                | –

Outside the industrial field, ‘component’ failure probabilities are unlikely to be so readily
available. Estimates can nevertheless be made by asking people how often they have experienced
the failures in question; see Chapter 6.
When modelling Mechanisms using FTA it is common for component and action failures to
have Exposure denominators of different types. The example given above of electrical supply
failure (see Figure 9.5 and the associated discussion) is pertinent. In the case of AND logic, the
resulting probability has a natural meaning that is derived from the individual meanings. In the
case of OR logic, as probability figures are being added it is necessary that each has the same
meaning, that is the same Exposure denominator. It is nonsensical to add a failure probability
based on an occasional demand to one based on a routine running hour. It is feasible, however,
to convert the demand-based figure to a per-running-hour-based figure by multiplying by
the estimated number of demands per running hour. When quantifying a complete FTA this
process needs to be followed right through the logic structure to ensure a meaningful figure
for the Top Event probability. The logic of this result should become evident as the work
proceeds. The Top Event probability will have an Exposure denominator that is a characteristic
of the system under consideration. For example, if the risk of a production system failing to
provide raw materials for delivery to market via a supply train is being analysed, a natural unit
of exposure for the whole plant is a ‘per train’ figure. Likewise, if the risk of a person being
trapped in a freezer is being analysed, the natural unit of exposure is ‘per freezer entry’.
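The conversion of a per-demand figure to a per-running-hour figure before OR addition can be sketched as follows; every numeric value here is an illustrative assumption, not data from the text:

```python
# Combining failure probabilities under OR logic requires a common Exposure
# denominator. A per-demand probability is converted to a per-running-hour
# probability by multiplying by the estimated demands per running hour.
# All numeric values below are illustrative assumptions.

def per_demand_to_per_hour(p_per_demand, demands_per_hour):
    """Convert a per-demand failure probability to a per-running-hour one."""
    return p_per_demand * demands_per_hour

# Component A fails spontaneously: 1e-6 per running hour.
p_a_per_hour = 1e-6

# Component B fails 1e-3 per demand and sees about one demand per 100 hours.
p_b_per_hour = per_demand_to_per_hour(1e-3, 1 / 100)   # about 1e-5 per hour

# Only now is OR (addition) meaningful: both terms share a denominator.
p_or_per_hour = p_a_per_hour + p_b_per_hour
print(p_or_per_hour)   # about 1.1e-05 per running hour
```

The same conversion must be applied consistently through the whole logic structure so that the Top Event probability carries a single, meaningful Exposure denominator.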
An essential conclusion to the quantification of an FTA is a reality check on the estimated Top
Event probability. This can be achieved by converting the probability into a Frequency, the number
of estimated Top Events per year, and comparing this with experience, or at least with judgement.
For example, if the FTA estimates the Frequency of a person being trapped in a freezer as one in
100 years, but this is known to have happened once in the first five years of the freezer’s operation,
one may conclude the probability estimates do not accord with reality and adjust them accordingly.
Reality does not have to fit the assumed failure probabilities; they have to fit reality – see also the discussion
in Chapter 6 on estimating probabilities in the absence of data. Naturally, the more sensitive the result
and the less evident the discrepancy with reality, the more useful it will be to involve a statistician in
interpreting the confidence that may be placed on the estimated Top Event probability.
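This reality check can be made numerical. A minimal sketch, using the freezer example with all figures assumed for illustration:

```python
# Reality check on a Top Event probability: convert it to an expected
# Frequency (Top Events per year) and compare with operating experience.
# All figures below are illustrative assumptions.

p_top_per_entry = 1e-4          # assumed Top Event probability per freezer entry
entries_per_year = 10 * 250     # assumed 10 entries/day, 250 working days/year

estimated_frequency = p_top_per_entry * entries_per_year   # events per year
mean_years_between_events = 1 / estimated_frequency

# Suppose experience shows one entrapment in the first five years:
observed_frequency = 1 / 5

print(estimated_frequency, observed_frequency)
# An estimate of 0.25/year against roughly 0.2/year observed is plausible;
# a large discrepancy would mean the component probabilities need revising.
```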
Viner, Derek. Occupational Risk Control : Predicting and Preventing the Unwanted, Taylor & Francis Group, 2016. ProQuest Ebook
Central, http://ebookcentral.proquest.com/lib/swin/detail.action?docID=2004776.
Created from swin on 2021-06-11 08:24:00.
Copyright © 2016. Taylor & Francis Group. All rights reserved.
It is their common interest in component failure probabilities that unites the risk engineer
and the reliability engineer. For this reason the risk engineer needs to be familiar with reliability
mathematics and to have access to a source of failure probability data. The risk engineer
also needs a good working understanding of error (action failures) and the situations that
promote them.
Modelling Outcomes
The second part of risk analysis is the modelling of the Outcome process possibilities associated
with the Event, Event Analysis. The purpose of the model is to show all physically possible ways
in which the Outcome and associated Consequences could unfold. It is the end of the Outcome
pathway that results in the Consequences: it is not the fall that hurts, but the impact at the
end of it.
It is generally, but not always, true that the great majority of Outcomes do not result in any,
or any significant, noticeable Consequences. In some cases, the energy or non-energy Threat
is brought back under control so that the Event is short-lived. In Figure 9.7, this is the short
pathway shown as the Null Outcome. Examples are numerous, but include by way of illustration:
• A roof tile begins to slide on a sloping roof but is soon brought to a halt by the presence of
an obstruction on the roof.
• The primary system that controls pressure, temperature and/or level in a chemical plant
reactor vessel fails to keep these within normal control boundaries when vessel outlet
flow is stopped by a failed pump. Within a short time, the standby pump starts and outflow
resumes, leading to conditions becoming stabilised.
• A driver realises their vehicle is about to enter a slide on loose gravel as a bend is rounded at
high speed but is able to reduce speed and change steering inputs so that a slide is averted.
• A person is trapped inside a freezer room in a pharmaceutical company when the powered
door fails to open, but is able to use the backup manually operated door to leave the room.
• In a petrochemical plant, a gas leak develops from a failed control valve, but the automatic
control system recognises the leak from increased flow and reduced pressure indications
and shuts a nearby stop valve, thereby ending the leak.
Figure 9.7 Outcome possibilities
On the other hand, an Outcome may result in a Null Consequence because of the pathway
that develops. In this case, after the Event, the energy or non-energy Threat is not brought back
under control but the process unfolds in a way that leads to no significant (Null) Consequence.
Examples include:
• A roof tile begins to slide on a sloping roof and gathers speed as it drops over the edge of the
roof and onto the ground beneath, fortuitously missing the builders standing there.
• The primary system that controls pressure, temperature and/or level in a chemical plant
reactor vessel fails to keep these within normal control boundaries when vessel outlet
flow is stopped by a failed pump. Conditions inside the vessel continue to develop until a
pressure relief valve operates, which prevents a pressure explosion from rupturing the vessel.
Operators trigger a water deluge system which contains the temperature and further reduces
the pressure rise. This gives the operators a chance to implement an emergency shutdown of
the process, which proceeds successfully and leads to no damage to equipment or to injury.
• A driver finds their vehicle sliding on loose gravel as a bend is rounded at high speed and
successfully responds so that the car stays on the road.
• A person is trapped inside a freezer room in a pharmaceutical company when the powered
door fails to open and the backup manual door opener does not work, but the alarm is
sounded and others are able to respond successfully to open the door before the trapped
person is adversely affected by the cold.
• In a petrochemical plant, a gas leak develops from a failed control valve. It is some time before the
operators of the plant detect and respond to the problem as it is night time and the leak is not
visible. A large vapour cloud forms, which drifts over the nearby plant boundary and then disperses
over nearby fields without igniting or creating any known adverse effects due to its toxicity.
Null Outcomes and Null Consequences are both colloquially known as near misses.
Table 9.5 Physical possibilities and situational effects in Outcome pathways

Outcome/Consequence process affected by: Explanation and examples

Energy form changes: Explosions result in heat and sound waves and in flying objects. Fires result in fumes and heat flux. Fumes and spilled liquids may result in chemical changes in water. A fall results in the conversion of gravitational potential energy into kinetic energy. The way in which Recipient exposure occurs may be different for each energy form.

Component operation: Devices such as pressure relief valves, alarms, fire suppression, smoke extraction, emergency shutdowns, backup power supplies, emergency lighting and similar may fail to operate.

Environmental Circumstances and Conditions, including weather effects: Wind or water may affect the process once the Event has occurred. If a flammable gas leak occurs, wind may blow the gas towards people or towards an ignition source. If it is raining heavily when the leak occurs, the gas may dissolve in the rain water and the run-off may contaminate adjacent farm land. A flammable gas leak at a remote desert wellhead has very different Outcomes from one within a petrochemical plant surrounded by critical equipment and in close proximity to people. Contaminated dust may settle on crops and flowers and the contamination may affect mammals and insects some distance from the source of the release.

Geometrical uncertainty: The direction in which a tower or tree falls may be uncertain. The location along a train line where a derailment might occur is uncertain. The location of a hazardous goods vehicle accident along a route is uncertain. The location and direction of a landslide, rock or tree fall may be uncertain. An electricity transmission line fuse capable of dropping hot materials may operate over flammable ground cover or not, depending on its location.

Uncertainty in the way in which Consequences could arise, including random chance: A gas leak or fire occurring in normal working hours will expose more people to the chance of injury than if it occurs outside those hours. If train doors fail open during rush hours, the chance of passengers falling out is increased, as is the chance of them being hit by passing trains. Random chance that a sensitive Recipient is in a location affected by an Outcome pathway at the same time as the Outcome occurs can be called a space/time coincidence. A landslide, tree fall or rock fall hits that point on a road or path that is occupied by a car or person. A car arrives at a road/rail crossing at the same time as a train.

Human behaviour: Response by control room operators to plant fault indications. Response by drivers, pilots, etc. to incipient unstable vehicle movements, to possible collision scenarios, and to warnings and instructions.

The general purpose of Event Analysis is to identify all other Outcomes and their associated
Consequences. The first need in Event Analysis is to understand all the physical possibilities and
situational effects, up to and including the way in which Consequences arise. In Chapter 3 the
point was made that Outcome processes are influenced by the situation and the Conditions
and Circumstances within which the Event takes place. See Table 9.5 for a summary of these.
The second need in Outcome Analysis is to understand how the unfolding process can
or should be responded to. Much of this will be sequential: if this happens then that should
happen and after that something else, as in: leak detected, pipe flow shut down, alarm sounds,
suppression operates, people evacuate the area, emergency response team gathers and does its
job, and so on. Some of the Outcome may be in parallel, as in the activation of the alarm being
at the same time as the operation of stop valves.
Modelling of the Outcome pathway is based on the expected sequential (or parallel) response
process, while recognising that what is intended does not always occur: the excess flow sensor
may not work, the stop valves may not operate, the alarm may not sound, a person under a
breaking tree branch may not hear the sound made by the branch and so on. If all these things
happen as intended the pathway proceeds down the preferred, intended, expected or hoped-for
route, if not then different and adverse Outcome scenarios get played out. Intended and preferred
Outcome pathways will exist where Events have been predicted and the equipment or process has
been designed to accommodate a predicted Outcome process. There are many examples of this:
• the provision of fire-hardened structures, fire detection and suppression mechanisms, smoke
control doors, evacuation alarms and evacuation plans;
• the provision of gas detection apparatus where toxic or non-respirable gases may leak;
• the provision of core balance relay circuit breakers in electrical reticulation;
• the provision of automated emergency shutdown (‘soft landing’) capability in complex plant;
• the provision of an uninterruptible electrical power supply;
• the protection of power transformers by gas detection (Buchholz relay) devices, venting and
inert gas flooding;
• the provision of emergency communication devices in building lifts for use in the event
of breakdown;
• the provision of explosion overpressure vent doors in dust handling plant;
• business interruption plans;
• the provision of emergency and first aid personnel on standby during events attracting large
numbers of people;
• plans for managing large numbers of passengers at short notice when an airliner becomes
unserviceable or environmental conditions make it impossible to fly.
When an Event has occurred, it may be responded to only if it has been detected. The earliest
and most desirable detection is of the Event itself, but detection may only occur at later stages in
the unfolding Outcome pathway. If neither the Event nor the pathway is evident, detection may
only arise when the Consequence becomes evident. This is obviously undesirable, but the path
of history is paved with such cases, particularly in the exposure of people to toxic substances
(artists to lead paint, asbestos workers) whose presence is unknown due to general ignorance of
the processes involved.
Outcome Logic Diagrams – ‘Event Analysis’
Analysis of the ‘All other Outcomes’ pathways in Figure 9.7 relies on a structured understanding
of the process possibilities (Table 9.5) and possible responses. This understanding is conveyed in
a logic diagram in which the intended or hoped-for route through the Outcome is defined by a
series of individual stages that interpret the general ideas conveyed in Table 9.5 in a manner that
describes the sequence of the Outcome pathway for the system under analysis. Each of these
stages is conventionally written as a question, such as ‘Fire alarm operates?’, as this enables a
binary choice logic diagram to be drawn for both the affirmative answer (yes, it does operate)
and the negative (no, it does not operate). The sequence of affirmative answers describes what
we intend to happen in the Outcome (because it has been preconsidered and planned for), or at
the least the best we could hope to happen. Negative answers at any or all points in the sequence
define the various possible pathways through the ‘All other Outcomes’ of Figure 9.7. As the
answer ‘Yes’ or ‘No’ can be given to each of the questions, at each question the logic diagram
branches in two ways. This concept is illustrated in Figure 9.8. For the purpose of illustration, this
is a very simple three-stage Outcome based on a response process only.
In Figure 9.8 each of the questions that define the Outcome sequence is expressed positively
and the ‘Yes’ response pathway proceeds upwards, as indicated by the ‘Yes’/‘No’ convention
shown at the first question. This convention results in the best possible Outcome pathway being
the uppermost one and the worst possible pathway being the lowest one. Whatever convention
is adopted, it is good practice to ensure that all questions are stated in the positive sense or all
in the negative, as a mixture is certain to lead to confusion in the construction or interpretation
of the analysis. Notice that not all questions have to be relevant to all the pathways. If the alarm
does not operate then the second question in Figure 9.8 is irrelevant.
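The branching just described can be sketched as a small enumeration. The pruning rule (skipping the response question on branches where the alarm has not operated) mirrors the remark above; the question wording follows Figure 9.8:

```python
# Enumerate every pathway through a simple three-question Outcome logic
# diagram. A question may be irrelevant on some branches: here, if the
# alarm does not operate, the response question is skipped.

QUESTIONS = ["Alarm operates?",
             "Alarm responded to as planned?",
             "Successful evacuation?"]

def pathways(i=0, path=()):
    """Yield each pathway as a tuple of (question, answer) pairs."""
    if i == len(QUESTIONS):
        yield path
        return
    q = QUESTIONS[i]
    # Pruning rule: no alarm means the response question never arises.
    if q == "Alarm responded to as planned?" and ("Alarm operates?", "No") in path:
        yield from pathways(i + 1, path)
        return
    for answer in ("Yes", "No"):
        yield from pathways(i + 1, path + ((q, answer),))

all_paths = list(pathways())
print(len(all_paths))   # 6 pathways: 4 with the alarm working, 2 without
```

Doubling the number of questions would roughly square the number of pathways, which is why the analyst must keep the stage count to the minimum that adequately describes how Consequences arise.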
The theory is disarmingly simple, but the practice is not always so. A good logic diagram
is one that contains sufficient detail to be a useful model but not so much that the analysis
becomes hard to understand and use. To achieve suitable simplicity, the analyst must define
the stages in the Outcome with care, such that there are just sufficient of these to adequately
describe how Consequences arise and no more. It will be evident that Figure 9.8 could rapidly
become very complex (many more pathways) if the number of questions was doubled, for
example. One means of achieving this practical simplicity is to collapse a series of complex
possibilities into a single result, which is what has been done in Figure 9.8. The question
‘Alarm responded to as planned?’ could be subdivided into such detail as: ‘Can the alarm be
heard?’; ‘Is the meaning of the alarm understood?’; ‘Is the correct action taken as a result of it?’
Doing so, in this case, would add significantly to the complexity of the analysis diagram without
adding value to the result. The substructure of this ‘top’ question can be considered when
determining the probability to be associated with the Yes/No responses.
It is common, when working with complex industrial plant, to find that different Event
Mechanisms can lead to different Outcomes. In such cases, a separate Outcome Analysis should
be created for each Mechanism.
As with FTA, there is no substitute for learning from experience or for looking at examples
based on real situations.
Estimating Outcome and Consequence Probabilities and Values
The probability associated with any given Outcome and Consequence pathway can be estimated
from the probabilities of YES or NO answers to each of the questions used to describe the
pathways. In Figure 9.8, the alarm is a component the failure probability of which can be estimated
as discussed above for FTA component failures. With an estimate of the probability of the alarm
not working, p(NO), the probability of it working, p(YES), can readily be determined as 1 – p(NO). Put
another way:

p(NO) + p(YES) = 1.
Table 9.6 (derived from Table 9.5) illustrates the many different ways in which Outcome probabilities
can be estimated.
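Numerically, the complementary Yes/No probabilities and the multiplication of branch probabilities along a pathway can be sketched as follows; the No probabilities used are illustrative assumptions:

```python
# Pathway probability = product of branch probabilities along the pathway,
# where for each question the Yes probability is 1 minus the No probability.
# The No probabilities below are illustrative assumptions.

P_NO = {"Alarm operates?": 0.01,
        "Alarm responded to as planned?": 0.05,
        "Successful evacuation?": 0.02}

def path_probability(answers):
    """Multiply branch probabilities along one (question, answer) sequence."""
    p = 1.0
    for question, answer in answers:
        p *= P_NO[question] if answer == "No" else 1.0 - P_NO[question]
    return p

best_path = [("Alarm operates?", "Yes"),
             ("Alarm responded to as planned?", "Yes"),
             ("Successful evacuation?", "Yes")]
print(path_probability(best_path))   # 0.99 * 0.95 * 0.98, about 0.92
```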
Figure 9.8 Outcome logic diagram concept
[Figure: the EVENT is followed by three questions in sequence, ‘Alarm operates?’, ‘Alarm responded to as planned?’ and ‘Successful evacuation?’, each branching Yes (upwards) and No (downwards).]
Table 9.6 Outcome pathway probability estimation methods

Outcome process affected by: Outcome pathway probability estimation

Energy form changes: In most cases, an Event gives rise to one form of energy. Where multiple forms can occur it is likely that they will occur (YES probability = 1) and the Outcome pathway will be unique to the energy form.

Component operation: Component failures estimated as discussed for FTA. Each component-related functional failure in the Outcome Analysis can be analysed using its own FTA if necessary.

Environmental Circumstances and Conditions, including weather effects: Meteorological data provides the probabilities of wind strengths and directions (relevant to geometrical uncertainty) and of rainfall. Lightning strike data is used to estimate probabilities of a strike happening on a given sensitive area. Likewise with earthquake, storm and sea surge data.

Geometrical uncertainty: Analysis of the geometry, for example:
• the damage-sensitive area within the possible impact area divided by the possible impact area;
• the length of train line with damage-sensitive assets within the derailment zone divided by the total length of train line;
• the length of road with damage-sensitive areas in the event of a hazardous goods vehicle accident divided by the total length of road;
• the length of road typically affected by a landslide divided by the total length of road;
• the number of transmission line fuses over flammable ground cover divided by the total number of similar fuses.

Uncertainty in the way in which Consequences could arise, including random chance: Analysis of presence, for example:
• the number of working hours or rush hours in a week divided by the total number of hours in a week;
• the number of times a day trains pass others that are stationary at railway stations.
Analysis of random chance is an application of AND logic, for example:
• a landslide occurs AND a car is on the road below the landslide site;
• a car must be on the level crossing AND a train must arrive too.
The presence of a vehicle on any particular stretch of road or rail is dependent on the number of vehicles using the route per year, their speed and their length.

Human behaviour: Analysis of human error possibilities and probabilities. This is a specialist task, but generic figures can be used, as discussed for FTA.
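Two of the estimation methods summarised in Table 9.6 can be made concrete with a short sketch; every figure below is an assumption for illustration only:

```python
# Geometrical uncertainty: damage-sensitive length divided by total length.
sensitive_km = 12.0          # assumed road length with damage-sensitive areas
total_km = 300.0             # assumed total route length
p_sensitive_location = sensitive_km / total_km        # 0.04

# Space/time coincidence (AND logic): a car present on a level crossing when
# a train arrives. The occupancy fraction depends on traffic volume, speed
# and the effective length to clear, as the table notes.
cars_per_day = 200           # assumed traffic volume
clear_length_m = 10.0        # assumed crossing length plus car length
speed_m_per_s = 14.0         # assumed approach speed (about 50 km/h)

seconds_occupied_per_day = cars_per_day * (clear_length_m / speed_m_per_s)
occupancy = seconds_occupied_per_day / 86_400         # fraction of the day

trains_per_day = 30          # assumed
coincidences_per_day = occupancy * trains_per_day     # crude expected count
print(p_sensitive_location, occupancy, coincidences_per_day)
```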

With probabilities estimated for each of the Outcome and Consequence logic diagram
branches, it is possible to multiply them through from the Event probability onwards to
estimate the probability of any one pathway occurring. As before, care must be taken with the
meaning of these probabilities to ensure that the Exposure denominator has been handled in a
sensible manner.
The end of each Outcome pathway is a point at which an adverse Consequence does or does
not (Null Consequence) occur. Knowing the way in which the Occurrence has unfolded assists
the analyst to make a statement of the nature of the Consequence and to use this to estimate the
likely Consequence Value, as shown in Figure 9.9.
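Putting the two halves together, the Risk contribution of each pathway is its probability multiplied by its Consequence Value. In the sketch below the Values (£) are those shown in Figure 9.9, while the pathway probabilities are illustrative assumptions chosen to sum to 1:

```python
# Risk per Event = sum over pathways of (pathway probability x Consequence
# Value). Values (GBP) follow Figure 9.9; probabilities are assumed.

pathway_table = [
    ("all responses succeed",  0.92169, 0),        # Nil Consequence
    ("evacuation fails",       0.01881, 500),      # a few minor injuries
    ("response fails",         0.04950, 2_000),    # many minor injuries
    ("alarm fails",            0.01000, 100_000),  # few major, few minor
]

total_probability = sum(p for _, p, _ in pathway_table)
expected_value = sum(p * v for _, p, v in pathway_table)

print(round(total_probability, 6))   # 1.0 -- the pathways are exhaustive
print(expected_value)                # about GBP 1,108 per Event
```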
A complete quantified risk analysis consists of the FTA joined to the Event Analysis by their
common feature, the Event itself, and with probability estimates included – see Figure 9.10.
Figure 9.9 Adding Consequences and their Values to the Outcome analysis
Figure 9.10 Completed analysis of Mechanisms and Outcomes (generic form)

[Figure 9.9: the logic diagram of Figure 9.8 (‘Alarm operates?’, ‘Alarm responded to as planned?’, ‘Successful evacuation?’) with a Consequence and its Value attached to the end of each pathway: Nil (£0), a few minor injuries (£500), Nil (£0), many minor injuries (£2,000), and few major and few minor injuries (£100,000).]
[Figure 9.10: a fault tree in which failures F1 to F5 combine through AND and OR gates into Mechanisms M1 to M3, joined at the EVENT to an Outcome logic diagram of questions Q1 to Q3 with Yes/No branches.]
Bow-tie Diagrams
The origin of these is to be found in the general shape evident in Figure 9.10, which at its simplest
conveys the idea that there are many possible reasons for Events and many possible results of
them. Figure 9.11 is a style of diagram I used in teaching in the late 1970s and early 1980s to
illustrate the essential points being made by the risk analysis methods of FTA and EA. Figure
9.11 is a simplified version of a complete analysis in order to illustrate the point rather than be a
comprehensive analysis of chemical exposures.
This type of analysis can be quickly completed for a number of different Event types and
has the advantage that a person investigating an incident involving chemicals (in this case) has a
ready reference of the possibilities to be investigated. The figure shows in bold the route taken
by a particular Occurrence, in which a flange of a pipe under pressure was opened, resulting in
an escape of carbon dioxide and an acidic liquid. The similarity of the shape to that of a bow-tie is
obvious. The concept is at its most useful if Events are defined as suggested in this text. Bow-tie
analysis methods are now widespread and of increased complexity, including control measures for
each line in the diagram, as well as including causes, as a brief exploration of the internet on this
subject will make clear.
The Effect of Management Practices on Risk Analysis
The obvious results of a risk analysis are first a structured understanding of the different ways an
Event can occur and how these can lead to adverse Consequences, and secondly an estimate of
the size of the Risk. The first of these is of value for one main reason – it enables the most likely
pathways to be known. The second is of value because it enables the Risk to be compared with
others, possibly also with respect to acceptability criteria (see Chapter 8), and it enables the cost-effectiveness of any proposed risk reduction measures to be calculated.
A less obvious result is that the analysis draws attention to the assumptions that have to
be made in order to quantify the analysis: to estimate the various probabilities. As previously
noted, component failure probabilities depend on the quality of the original components, on the
inspection and maintenance practices adopted for them and the equipment renewal practices
Figure 9.11 ‘Bow-tie’ representation of a simplified risk analysis

[Figure: a bow-tie with the EVENT (‘Uncontrolled chemical’ release) at its centre; Mechanism pathways on the left (‘Valve/flange opening’, ‘Containment failure’, ‘Vessel entry’, ‘Structure failure’) and Outcome pathways on the right (‘Release stopped’, ‘Deluge’, ‘Splash/spray’, ‘Inhalation’, ‘Ingestion’).]
of the organisation, as well as the physical environment in which the components do their
work. Component failure probabilities most often used in risk analyses are the single figure that
represents a random failure, preferably including a range of values around this figure to account
for uncertainty in its value. A real component will often have early-in-life and late-in-life failure
probabilities that are quite different and much higher than this. The following chapter provides
a more detailed discussion of this point.
The literature on human error shows that the probability of action failures depends
greatly on performance-shaping factors. These factors reflect the organisational, social and
physical nature of the environment in which work is done. Experience shows that the cautious
behaviour and working to rule that occurs in the aftermath of an accident has a life of only some
six weeks. Social researchers (see Chapter 8) call this phenomenon ‘discounting in time’: the
value or meaning of the event is discounted over the time since it happened or could happen
in the future.
It is an inevitable conclusion that the result of a quantified risk analysis is only as good as the
technical knowledge and organisational contextual assumptions made at the time it was done. If
this knowledge changes or if this context changes over the life of the plant, the estimated Risk
will necessarily also change. Some industrial examples help to illustrate this point:
• Flammable material storage tanks are required to be earthed and there is a requirement
for the earth resistance to be less than a certain value. Over time, the contract to annually
measure earth resistance lapses. Because the contracts department is unaware of it as
a requirement, its absence is not noted. The effect of this is that the natural steady rise
in earth resistance resulting from deterioration of the earth rod bond with the earth is
not noticed.
• Bulk chemical storage tanks are provided with bunds to contain spills from them. There is
a technical requirement that incompatible chemicals cannot be stored in different tanks
that share the same bund. The sales department of the storage company is rewarded
for increasing the occupancy of the tank facility and over time their knowledge of this
requirement diminishes so that it becomes common practice to have incompatible chemicals
sharing the same bund.
• A standby pump is used as the duty pump after the duty pump fails. The company is
going through a hard time financially and the order for a new duty pump is delayed for
many months.
• A critical conveyor gallery carrying coal in a steel plant is subject to an increasing dead
load over time as coal spillage is not cleaned up. The gallery structure has not been painted
for a long time as there is no preventive maintenance programme on site. Over time, the
structure corrodes. The gallery structure fails due to both overload and weakening, leading
to a critical loss of production capability.
• A critical temperature sensor in a power plant is not calibrated to its required schedule as
it is very difficult and uncomfortable to access. For thermal efficiency reasons, the boiler
is controlled to the highest steam delivery temperature that the delivery pipework metal
is able to withstand. Potentially, an unintended high temperature could lead to an early
delivery pipe failure with catastrophic results.
The EDM (Chapter 3) shows how Mechanisms arise from Prerequisites. FTA shows how
the probability of the Mechanism is affected by these Prerequisites (enabling Conditions and
Circumstances). The analysis of Outcomes shows how Outcomes are similarly affected by the
situation (concluding Conditions and Circumstances) in which they arise.
Summary
A theory is developed for the classification of possible Events and their Mechanisms. The
value of these in both predicting and analysing Occurrences, including accident investigation, is
noted. The theory contributes to the identification of Top Events for FTA and in developing the
analysis in an understandable and repeatable manner. The chapter discusses component, action
and controller function failures and the choice of failure probability, drawing attention to their
sensitivity to operational management decisions during the life of the process.
A detailed understanding of the development and possible content of the logical analysis of
Outcomes (Event Analysis) is given.