Timeliness, an investigator's challenge
Lund University, Sweden, Delft University of Technology, the Netherlands
John Stoop graduated in 1976 as an aerospace engineer at Delft University of Technology and did his PhD on the issue of 'Safety in the Design Process'. He is a part-time Associate Professor at the Faculty of Aerospace Engineering of the Delft University of Technology and is a guest professor at Lund University (Sweden). Stoop has completed courses in accident investigation in the Netherlands, USA and Canada.
Stoop is an Affiliated Member of ISASI, has been actively involved in accident investigations in the road transport sector and has played a role as safety analyst in maritime, railway and aviation accidents.
At the Sapporo ISASI seminar, a new approach for safety investigations was proposed, dealing with recent developments in systems engineering, chaos and complexity theory and systems dynamics (1). This contribution explores several theoretical notions regarding dynamic behavior, system states and safety enhancement interventions. During the discussion of these notions, challenges were raised regarding the role of investigating accidents and incidents, in particular how to deal with the dimension of time in complex, dynamic and interrelated systems. Based on a series of case studies in various modes of transportation, the dimension of time is explored in its practical application as a diagnostic dimension, to be applied in safety investigation theory and practice.
In the academic community interested in accident investigation theory and practices, the use and usefulness of accident modeling is debated. On methodological grounds, generic and linear models such as the Swiss Cheese model are criticized, if not rejected outright, among others by Stoop and Dekker (1). Instead of modeling accidents, a systems approach is favored, dealing not only with the event itself but also with higher system levels, taking into account chaos and complexity notions. Such a dynamic systems perspective should be applied in the forensic phase of an investigation as well as in the analytical phase, bearing consequences for the eventual recommendations and for the nature and scope of the subsequent safety measures. More sophisticated system theories and change management concepts are mobilized in order to provide a credible and trustworthy explanation of the occurrence, based on the safety criticality of the factors that emerge from the investigation of the event and the analysis of the aviation system itself. To achieve a sustainable improvement in the safety performance of the aviation system, Stoop and Dekker propose a synthesis of these safety critical factors into credible and plausible accident scenarios. Such scenarios may serve as critical load cases to test and validate safety solutions, which are designed on the basis of the recommendations formulated during the investigation of accidents. Such a systems engineering perspective focuses on the dynamics of the event itself in the context of the system’s design and operating conditions. Other perspectives, however, focus on the resilience of organizations within the system to enhance safety performance, adding a potential to recover from critical loads, which are considered emergent properties of systems. Both perspectives, however, deal with a specific class of systems, the so-called Non-Plus Ultra-Safe systems.
These two perspectives stem from different paradigms in the scientific community, emerging from either the socio-technical disciplines or the socio-organizational disciplines. In safety thinking, three consecutive paradigms have been developed which exist concurrently in practice (2, 3, 4):
a technical paradigm, based on the load concept, dealing with failure, cause and design envelopes. This load concept has evolved from mechanical loads towards mental loads, and from a deterministic, analytical approach towards probabilistic reliability and availability modeling. The concept deals primarily with the engineering design of technical system components in establishing a design and performance envelope, dealing with reliability, redundancy and robustness.
a medical paradigm, based on the transfer of hazards as a specific type of ‘disease’ and the consequences of an exposure to this ‘disease’. This exposure concept focuses on (re-)gaining control over the exposure, minimizing losses and reducing deviations from standards in performance indicators. The concept primarily deals with control over operational performance from a managerial perspective by preventing deviations from a normative performance level.
a biological paradigm, based on a mutual and dynamic adaptation of an agent and its systemic environment. This adaptation is based on feedback and on achieving transparency over the primary processes of an organization by responding to emergent properties during operation through monitoring, anticipation and learning. The concept focuses on recovery from disturbances outside the operating envelope by adhering to a systems engineering approach in designing properties into the system, such as recovery, resilience, reliance, rescue and emergency, reintegration and rehabilitation.
Systems with a very high level of technological complexity in general also require a very high level of safety performance, such as in aviation, maritime, railways, the process industry and (nuclear) power supply. Current safety enhancement strategies have aimed at a complete elimination of technical breakdowns and human error. Such strategies, however, separating technological design engineering from human and social intervention, seem to have reached their limits (5, 6). Addition of new strategies to the existing arsenal seems to lead to over-extensive linear extrapolation of protective measures. On the one hand, more sophisticated mathematical modeling and knowledge-based engineering principles are developed to cope with the complex interrelations between system functionalities and embedded subsystem architectures, based on Neural Networking, Bayesian Belief and Semantic Networks. On the other hand, from a sociological perspective, a more encompassing, integral approach seems to become inevitable by introducing concepts such as resilience engineering (7).
Fig 1: A third systems dimension
These developments have demonstrated a gradual shift in systems modeling, which can be expressed as a transition from accident investigation, via static systems modeling towards dynamic systems modeling (8).
Such a shift in systems modeling should coincide with a shift in paradigm in safety thinking in order to coordinate the integration of safety into these new systems modeling perspectives.
2. Towards a new concept in safety enhancement
In accordance with such new conceptual thinking in complex and dynamic systems, safety can be considered a system state, either stable or unstable, safe or unsafe. While safe and stable system states render safety a non-critical value, unsafe and stable system states identify safety as a critical design and operational value, which has to be designed, managed and controlled carefully to avert disaster. Providing transparency over the actual system behavior becomes pivotal in such critical and unsafe system states. This appeals to the aforementioned transition in safety investigations to provide timely transparency in the factual functioning of the system.
A combined transition in safety investigation and systems modeling has the potential to provide a generic basic methodology and investigation notions for all kinds of event investigations across industrial sectors and scientific domains. This transition serves the identification of safety critical knowledge deficiencies and establishes a working relation between forensic engineering and knowledge based engineering design. This concept of safety investigations enables the transition from decomposing an event into isolated accident causation factors to a representation of the actual system state by identifying accident scenarios as the actual system state vector. In such a transition, two major changes have to be taken into account in order to establish the actual system state:
- a shift in focus from the practical level of analysis to a methodological level, mobilizing new scientific concepts and theories
- a merging between the socio-technological perspective and the socio-organizational perspective.
Safety enhancing interventions can be categorized in two main classes, complying with a systems perspective:
Linear interventions and first order solutions. Simple problems allow restricting the design space. This is valid only if the number of solutions is small, the number of design variables is small, their values have limited ranges, and optimizing within these values involves trading off aspects among the limited set of variables. Such interventions reinforce the design space in the detailed design phase by reallocation of factors, more stringent compliance with rules and regulations, and elimination of deviations, applicable to simple, stand-alone systems.
Complex interventions and second order solutions. Complex dynamic problems demand expansion of the design space. Such solutions focus on concepts and morphology, reallocation of functions to components, reconfiguration and synthesizing of sub-solutions, and involvement of actors, aspects, teamwork, communication, testing and simulation. Such an expansion of the design space occurs in the functional design phase by developing conceptual alternatives and prototypes, applicable to complex and embedded systems.
When first order solutions have failed and do not prevent an event, a redesign of the system as such becomes necessary. In order to achieve such redesign, the event must be redefined in terms of engineering design methodology, identifying critical design aspects. In complex and dynamic systems, time is such a critical aspect. A combined socio-organizational and socio-technical design strategy requires a systems design approach at the functional level to design system properties into a solution space (1).
3. Modeling, a challenging issue
Although systems theory has seen rapid developments over the past two decades, the dynamics of socio-technical and socio-organizational systems and the interactions between system components and aspects are hard to model.
Historically, accident investigation has served two goals:
either to provide proof in a judicial procedure in order to allocate blame and liability
or to identify systemic and knowledge deficiencies in order to learn from mishap.
Distinguishing these two goals is pivotal to facilitate drafting recommendations for improving the safety performance of a system, process or operator.
In conducting independent and blame free investigations, a conceptual shift is made in the investigation process itself from finding the truth towards achieving or regaining trust in the safety performance of a system. Truth finding serves the goal of allocating responsibilities and consequently, accountabilities. Establishing an undisputed sequence of events by a credible, plausible, timely and knowledgeable description of the event should create a starting point for understanding the failure phenomenon and sustainable change in a system. Such a shift from truth towards trust also changes the outcomes of an investigation.
Fig 2: Organizational accident model development
Instead of identifying the causal factors in order to establish the liable involvement of
actors and their motives during the event, the operational performance of the system as such becomes relevant in the potential change towards a safer performance and the ability to learn from undesirable disruptions. Instead of the event and the causal relation to the mishap of any factor, actor or aspect, systemic deficiencies and knowledge deficiencies become the critical issue in system change and knowledge development. Consequently, an increasing number of mixed accident causation and systemic models have been developed (7).
In order to enable such a change from event to system, two transitions in the investigation process are critical:
a transition from descriptive variables and their causal relations, which answer what happened and how the necessary and sufficient conditions for the event were present, towards explanatory variables, which answer why the event could occur. This is the domain of the forensic sciences, evidence-based and case-based learning.
a transition from explanatory variables towards control, change and design variables. Such a transition shifts the focus from influencing safety dimensions towards systemic dimensions and knowledge development. It adds a systems engineering perspective in order to identify the available solution space for safety enhancements. This is the domain of knowledge-based engineering, simulation and dynamic modeling.
The dynamics and interrelations in such a systems perspective play a very important role in such modeling, but have seen relatively little attention in the modeling process, or their treatment is in a very early phase of theoretical development.
This has raised interest in the dynamics of the accident process as a critical dimension in accident investigation methodology. Consequently, the dimension of time in the investigation process and event analysis becomes critical as an input parameter for redesigning the system.
4. The dimension of time
A study into the time dimension in the investigation process reveals several steps where such modeling will be beneficial for an enhanced understanding of the accident phenomenon and of a system's response to the occurrence, such as:
analyzing human factors, with respect to the skill, rule and knowledge level of decision making at the individual and crew level
exploring the temporal and spatial state of the system and perceivable changes of systems states during the occurrence
recovery and resilience capacity with respect to a safe completion of the mission
early detection and analysis of safety performance indicators, events, incidents as precursors to occurrences and accidents
incremental change in actual operational use versus intended, designed use of technical and organizational resources as a cause for potential drift into failure.
validating and testing of strategic points of no return as a precautionary principle in designing missions, routes, policy making procedures, operating procedures and operator task loads.
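The notion of drift into failure listed above can be illustrated with a toy numerical sketch: small, individually acceptable deviations from the designed operating value accumulate until the operating envelope is silently exceeded. All values below are illustrative assumptions, not data from any investigation.

```python
# Toy illustration (assumed values) of incremental drift into failure:
# each operating cycle adds a small, individually acceptable deviation
# from the designed value, until the envelope is exceeded unnoticed.

DESIGN_VALUE = 100.0
ENVELOPE_LIMIT = 110.0   # boundary of the designed operating envelope
DRIFT_PER_CYCLE = 0.5    # small deviation, acceptable in isolation

value = DESIGN_VALUE
for cycle in range(1, 31):
    value += DRIFT_PER_CYCLE
    if value > ENVELOPE_LIMIT:
        # No single cycle looks alarming, yet the accumulated drift
        # crosses the envelope boundary.
        print(f"envelope exceeded at cycle {cycle}")  # prints: envelope exceeded at cycle 21
        break
```

The point of the sketch is that an intermediate assessment checking only the per-cycle deviation would never raise an alarm; only monitoring the accumulated distance from the designed value reveals the drift.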
Based on a series of accident investigations, the dimension of time is explored at a case-based level in all modes of transportation.
4.1 Time restraints on the operator level
With respect to analyzing human factors at the operator level, a systematic collection of data is required to analyze to what extent and how tasks can be prone to error, and where interference of tasks may lead to incidents and accidents. This question has been addressed in the design of road systems for several decades. A designer needs to know which rules or combinations of rules should be avoided or, more generally, what errors may arise when drivers conform in their behavior to particular rules or designs. This creates a need for cognitive psychologists to translate their human error rules, such as GEMS, into production rules and error classifications. A simplification of reality discriminates three levels of task classification (9) on one dimension against three levels of behavior (10) on the other. The first axis corresponds to the hierarchy of rules, each category roughly related to a time constant for the task duration (control = milliseconds, manoeuvre = seconds, planning = minutes to hours). The second axis corresponds to the level of attentional control which is given to the (sub-)task.
In order to perform these tasks appropriately, the necessary information should be available, and time should be available to process the information and decide accordingly. Otherwise, operators run out of time when their decisions prove incorrect. Skilled responses deal with milliseconds, rule-based responses with seconds, and knowledge-based decisions take minutes or more. The available response time may therefore run short once an error has to be detected and corrected by a knowledge-based decision. In such a case, the temporal point of no return has long been passed by the time the error is detected, and the accident becomes inevitable.
Fig 3: Operator task complexity
Within each box of the matrix, the designer needs to look at the potential conflicts which the use of a set of rules could produce and at the selection of priorities between rules, while the time necessary to discover an error and to recover from a wrong decision should be provided. What is currently missing from psychological theory is systematic information about human recovery: which types of error are most or least likely to be noticed by the operator, or compensated by another operator, in order to prevent the situation from developing into a disaster.
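The time constants above suggest a simple back-of-the-envelope feasibility check. The sketch below encodes the Michon task levels and Rasmussen behaviour levels as rough durations and tests whether detecting and correcting an error still fits in the available response time. The function name and all numbers are illustrative assumptions, not part of the GEMS or Rasmussen frameworks.

```python
# Illustrative sketch of the Michon x Rasmussen matrix with rough,
# order-of-magnitude time constants (assumed values, not measured data).

# Rough task-duration constants per Michon task level (seconds)
TASK_DURATION = {
    "control": 0.001,    # milliseconds
    "manoeuvre": 1.0,    # seconds
    "planning": 600.0,   # minutes to hours
}

# Rough decision times per Rasmussen behaviour level (seconds)
DECISION_TIME = {
    "skill": 0.001,
    "rule": 1.0,
    "knowledge": 60.0,   # minutes or more
}

def recovery_feasible(task_level: str, behaviour_level: str,
                      available_time: float) -> bool:
    """True if performing the task and deciding at the given behaviour
    level plausibly fits within the available response time."""
    needed = TASK_DURATION[task_level] + DECISION_TIME[behaviour_level]
    return needed <= available_time

# A rule-based correction of a manoeuvre-level error may fit in a few
# seconds; a knowledge-based correction rarely does, so the point of
# no return has passed before the error is corrected.
print(recovery_feasible("manoeuvre", "rule", 5.0))       # True
print(recovery_feasible("manoeuvre", "knowledge", 5.0))  # False
```

The design implication matches the text: where only knowledge-based recovery is possible, the system itself must supply the missing response time or prevent the error from arising.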
4.2 Temporal and spatial changes in the system
In December 2002 the vessel Tricolor, carrying 2000 new cars, collided with the Kariba in the English Channel and sank, barely submerged below the high-tide waterline. Two days later the cargo vessel Nicola collided with the vessel, while two weeks later the oil tanker Vicky ran into the wreckage. Before the wreckage was removed about one year later, more than 100 incidents and near misses had been reported by the authorities, even though the wreckage was under constant survey by wreck marking buoys and standby vessels. Eventually, IALA issued regulations to safeguard similar sites by emergency wreck marking buoys and deployed a rapid intervention vessel in the area.
Sailing the English Channel is governed by two main systems: sailing in a TSS (Traffic Separation Scheme) and sailing under radar coverage. The general SOLAS conventions are in force, dealing with observation and communication, triggering actions to avoid potential collisions. These systems can be in a regular, complex or chaotic state, defined by conditions such as traffic intensity, the weather, vision, sea swell and the state of the vessels. In addition, the Tricolor sank on a crossing between two shipping lanes, the Dover Strait and Westhinder TSS, increasing the complexity of the situation compared to a collision in a shipping lane. Due to crossing maneuvers, increased traffic intensity and an increased need for traffic information between the vessels, the transition from a transparent traffic image in a TSS to a crossing is quite distinct.
Directly after the accident, every sailor was well aware of the situation, responding to the emergency and facilitating a quick stabilization of the situation. However, since the removal of the wreckage took about one year, the situation of increased complexity persisted, requiring constant vigilance in safeguarding the accident site and providing additional information to the traffic. Since over 100 incidents occurred, it is questionable whether the buffer in the system sufficed to deal with this sudden, unexpected and lasting disturbance. An unstable system state persisted over a long time.
Fig 4: System state diagram
A study into different accident investigation perspectives showed different insights, conclusions and subsequent recommendations regarding the occurrence (8):
the regular investigations, as conducted by the authorities, focused on the accident process itself: causes, consequences, probabilities and scenarios. The fundamental prevention mechanism is buffering and damping, focusing in time on the moment of the collision itself. IALA, as the responsible authority, issued recommendations focusing on the infrastructure: emergency wreck marking buoys and rapid intervention vessels
in explaining the collision, Normal Accident Theory and High Reliability Organizations theories focus on systems aspects, in particular technical aspects (NAT) and human factors/traffic processes (HRO). They represent a static, retrospective approach, applying feedback from past performance as their principal mechanism. Internal changes within the system should facilitate a prevention of similar accidents
a systems theory taking into account chaos and complexity notions applies feed-forward and anticipation as principal mechanisms. By predicting the emerging system state, appropriate measures can be taken to reduce the probability and consequences of the occurrence. The goal of the analysis is to identify undesirable system states before they emerge in practice and become inevitable or highly likely. Recommendations for intervention focus either on reducing or dealing with complexity, or on redesigning the system, reducing complexity, dynamic interrelations and coupling.
4.3 A safe completion of the mission
Damage incurred upon a system may go unnoticed for some time, but may jeopardize future safe performance, shortly afterwards or even years later (11).
During the evening rush hour of December 3, 2008 an Amsterdam metro train derailed in the tunnel section of line 54, a main transport artery in Amsterdam. There were no casualties, but there was significant damage. At first sight the cause seemed obvious: a catcher on the front bogie had worked loose and had dropped on the railhead. When it struck a checkrail in a set of points, it deformed in such a way that it obstructed a wheel, causing it to derail. Failure to detect the loose bolts in the catcher's mounting bracket during routine maintenance was the most obvious cause of this derailment. It seemed like an open and shut case to the inspecting officers.
Yet there were doubts. Parts of the disintegrated front bogie were missing, including the bolts with which the catchers were mounted on the front bogie. The Inspectorate decided to start a full investigation into this derailment. Two days later, the true cause was found, including the missing evidence. Through forensic engineering and reverse process reconstruction the investigators were able to unravel the sequence of events leading to the derailment.
Approximately 1½ hours before the derailment, another driver had crashed into a buffer stop at Gaasperplas terminus with this same train, partly derailing it. Not only did he not report this accident, he tried to cover it up by rerailing his train. It was this latter (unauthorised) movement that caused considerable (hidden) damage to the front bogie, including the catchers and the power transmission. Later that same afternoon his colleague took over the train, unaware that he was driving a severely damaged metro.
The initial run into Amsterdam Central on route 53 was uneventful. Things changed on the return leg, when the train was running as route 54. Shortly after the train left Weesperplein tunnel station, the transmission in the front bogie broke apart, thereby starting severe vibration. This vibration caused the partly failed and damaged catchers to drop and trigger the derailment. The cause of the derailment in the tunnel was found 3 km away on a different part of the network with a different driver.
The most important lesson is that the accident proved to be far more complex than it looked at first sight. In fact, it was a set of two accidents, at two different locations and with three drivers involved, spread over nearly two hours. The second lesson was that it is difficult to determine the end of an investigation: sometimes the factual crash site can be far larger than originally thought. The third lesson is that what looked like a technical problem turned out to be a severe case of misbehaviour resulting from human error.
Fig 5: Two accidents, 1.5 hours apart
The Amsterdam metro derailment showed how a relatively simple accident can turn out to be far more complex than thought at first. At the site of the crash it was unclear whether the investigators were dealing with one incident or two, and what, if any, their relation was. The factual crash site was much larger than initially expected and stretched over a substantial part of the metro system. It covered a much larger time span, involving not one but three drivers. To explain the sequence of events, the starting point of the mishap was found by reversing the driving process. Reversing the technical failure process through forensic engineering also proved vital in solving this case.
4.4 Early detection of safety performance indicators
In aviation, early detection of damage and deficiencies in the system is critical for enhancing and maintaining a safe performance. Such a safe performance is assessed during design and certification, submitted to a balanced and encompassing system of rules, regulations, standards and procedures, setting the scene for a safe operational performance. Despite such an encompassing safety assessment, accidents occur. In the industry, a number of accident case studies have gained iconic value for the lessons learned from mishap and from knowledge deficiencies in the actual behaviour of aircraft during their operational life. After being exposed to a higher load than anticipated during design, an eventual exceeding of the ultimate load may occur, leading the aircraft into disaster. Such an exceeding may occur due to design knowledge deficiencies on material fatigue properties, as with the De Havilland Comet, extended duration of the economic life beyond the design values, as with the Aloha Airlines B737 case, or due to stretching of maintenance intervals, as with the Alaska 261 jackscrew lubrication intervals. Although a system may seemingly perform beyond expectations, the actual performance may deteriorate unnoticed below a minimally acceptable safety integrity level.
Fig 6: Recovery time frame
Safety investigations provide an indispensable feedback into the knowledge system that supports the aviation industry, providing several levels of defence in identifying recovery and resilience opportunities in the system. Such opportunities do not only manifest themselves during the sequence of an event, but may be designed into the system to enable a graceful degradation during the event and a safe termination of a mission. Consequently, the time required to diagnose a malfunction during the flight and the time to develop an appropriate response should not fall outside the boundaries of a safe continuation of a nominal flight. Diagnosing multiple warnings, uncertainty about (partial) loss of critical systems, and lack of information on actual system states may require a timescale, expertise and experience that exceed the available timeframe and capabilities of a nominal crew to continue a mission. The handling of the QF32 A380 loss of containment has demonstrated the time criticality and the expert judgement required in such diagnostic processes. Such a discrepancy between applied load and allowable load should not be solved during operations, because of the discrepancy between the time available and the time necessary to diagnose and solve a problem before it becomes critical.
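The time-budget argument above reduces to a simple necessary condition: diagnosis time plus response time must fit within the window in which a nominal flight can still be safely continued. The sketch below expresses this check; all numbers are purely illustrative assumptions, not figures from the QF32 investigation.

```python
# Minimal sketch (assumed values) of the in-flight time-budget condition:
# a malfunction can only be handled during the flight if diagnosis plus
# response fit within the safe-continuation window.

def within_safe_window(diagnosis_s: float, response_s: float,
                       safe_window_s: float) -> bool:
    """Necessary condition for handling a malfunction in flight."""
    return diagnosis_s + response_s <= safe_window_s

# A single clear warning: quick diagnosis, standard procedure applies.
print(within_safe_window(diagnosis_s=60, response_s=120,
                         safe_window_s=1800))    # True

# A cascade of multiple warnings with uncertain system states:
# diagnosis alone may exceed the window available to a nominal crew.
print(within_safe_window(diagnosis_s=3000, response_s=600,
                         safe_window_s=1800))    # False
```

When the condition fails, the text's conclusion follows: the discrepancy must be resolved in design, certification or maintenance rather than left to the crew during operations.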
4.5 Incremental change in operating conditions and operating envelopes
During the life cycle of an aircraft, a gradual transition takes place from technical uncertainties to operational uncertainties, dealing with adaptations based on feedback from operational experience. During operations, safety is balanced against other operational aspects such as environment, noise, health, terrorism threat and market changes, embedded in a context of operating conditions and company cultural values. Diagnosing events in such an operational context takes place from a socio-organisational perspective, focusing on a company's policy-making decisions, the efficiency of its business processes and the quality of its service provision. During operations, trade-offs are made, dealing with efficiency-thoroughness considerations, organisational resilience and recovery from critical situations (12). Technological aspects are taken as a constant, covered by the design and certification framework and by training and proficiency checks of the crew. If the assumptions, conditions and limitations of these frameworks are not taken into account in the operational decision making of management, crew and maintenance staff, a gradual drift into failure may occur, eventually creating mishap and disaster. An intermediate assessment of changes in operating conditions and practices should evaluate their safety consequences before these changes are put into practice, similar to the technical re-certification of an aircraft after major technical changes and adaptations.
4.6 Safety as a long term strategic value
As the previous changes and adaptations can be considered internal to the aviation system, external changes may also have consequences, creating emergent behaviour of the aviation system. Expansion of and major modifications to airport infrastructure, introducing new aircraft into the fleet, changing the international network and reconfiguring the aircraft/ATM system will all have an impact on the eventual safety performance level of the aviation system (13). In terms of chaos and complexity theory, such changes involve systemic perturbations, disturbances, state transitions and bifurcation points (8). They may bring the system into new, unprecedented states which are yet to be assessed as safe or stable.
Such long term changes, adaptations and modifications are based on arguments and considerations that are not necessarily transferred in time across stakeholders, market segments or world regions. Lessons learned from safety investigations may hold in stable systems, based on historical insights into their functioning. It is a question of strategic importance, however, how to deal with newly designed and modified systems, introducing major innovations of a technological as well as an organisational nature. A study into the long term effects and sustainable impact of the safety recommendations after the B747 El Al crash at Schiphol Airport in 1992 demonstrates a decay in safety awareness, deteriorating the coherence of and persistent focus on safety. Even at the level of institutional arrangements, lessons seem to become forgotten, while safety is degraded from a strategic and social value to an operational constraint (13). Governance, policy making arenas and external conditions may create a shift in safety thinking, awareness and acceptance at a societal level, having an impact on all sectors of society.
Safety investigations may contribute to the disclosure of such societal influences on the aviation system, providing a timely transparency in the factual functioning of the aviation system at a societal level.
5. Conclusions
Several conclusions can be drawn from this exploration of time as a dimension in safety investigations:
in dealing with dynamic and complex interacting systems, the dimension of time is indispensable in the analysis of events as well as of systems: as a sequencing tool in event recomposition, in establishing causal relations, and in assessing changes and adaptations that occur throughout the system's life cycle
based on case study experiences, the dimension of time can be applied to any level of the system, varying from the operator level, to management and governance as well as to technical, behavioural and organisational aspects
the dimension of time creates a series of feedback loops between the various life cycle phases, systems levels and system states, facilitating exchange of information about the factual functioning of each of the systems aspects, elements and components between all actors
in a dynamic environment, lessons learned are not necessarily sustained. Time may erode them to become lessons forgotten, if feedback from this learning is not assured in the systems memory as a shared knowledge repository, accessible to all actors, stakeholders and participants
time is a systems dimension that is particularly of interest for investigators. It may provide them with a timely transparency of the factual functioning of the system.
1. Stoop J.A. and Dekker S., 2010. Limitations of ‘Swiss Cheese’ Models and the Need for a Systems Approach. In: Proceedings of the 41st Annual International Seminar ‘Investigating ASIA in Mind – Accurate, Speedy, Independent and Authentic’, Sept 7-9, 2010, Sapporo, Japan.
2. McIntyre J., 2000. Patterns in Safety Thinking. Ashgate.
3. ESReDA, 2005. Roed-Larsen S., Funnemark E. and Stoop J. (eds). Shaping Public Safety Investigations of Accidents in Europe. ESReDA Working Group, Det Norske Veritas, Oslo.
4. Stoop J.A., 2010. From factor to vector, a transition in safety investigations. In: Proceedings of the ATOS Conference, Faculty of Aerospace Engineering, Delft University of Technology, 28-29 March 2010.
5. Amalberti R., 2001. The paradoxes of almost totally safe transportation systems. Safety Science 37, 109-126.
6. Holden R., 2009. People or systems. To blame is human. The fix is to engineer. Professional Safety, December 2009, www.asse.org, pp 34-41.
7. Hollnagel E., Paries J., Woods D. and Wreathall J., 2010. Resilience Engineering in Practice. A Guidebook. Ashgate Studies in Resilience Engineering.
8. Hendriksen B., 2009. Usability of the chaos theory by learning of shipping disasters. In: Proceedings of the 36th ESReDA Seminar on Lessons Learned from Accident Investigation, Coimbra, Portugal, June 2-3, 2009.
9. Michon J., 1985. A critical review of driver behavior models: what do we know, what should we do? In: L. Evans and R.C. Schwing (eds), Human Behavior and Traffic Safety. Plenum Press, New York, 485-530.
10. Rasmussen J., 1987. The definition of human error and a taxonomy for technical systems design. In: J. Rasmussen, K. Duncan and J. Leplat (eds), New Technology and Human Error. Wiley, Chichester, 23-30.
11. Beukenkamp W., 2009. Investigating the derailment of an Amsterdam metro: an open and shut case … or not? In: Proceedings of the 36th ESReDA Seminar on Lessons Learned from Accident Investigation, Coimbra, Portugal, June 2-3, 2009.
12. Hollnagel E., Nemeth C.P. and Dekker S. (eds), 2008. Remaining Sensitive to the Possibility of Failure. Ashgate Studies in Resilience Engineering, Ashgate Publishers.
13. Stoop J.A., 2009. Before, during and after the event; the El Al Boeing 747 case study. In: Proceedings of the 36th ESReDA Seminar on Lessons Learned from Accident Investigation, Coimbra, Portugal, June 2-3, 2009.