
1 HPRCT Workshop June 21-25, 2010 Richard S. Hartley, Ph.D., P.E. This presentation was produced under contract number DE-AC04-00AL66620 with

2 An organization that repeatedly accomplishes its high-hazard mission while avoiding catastrophic events, despite significant hazards, dynamic tasks, time constraints, and complex technologies. A key attribute of an HRO is learning from the organization's mistakes, i.e., being a learning organization.

3

4 SYSTEM ACCIDENT TIMELINE: 1979 – Three Mile Island; 1984 – Bhopal, India; 1986 – NASA Challenger; 1986 – Chernobyl; 1989 – Exxon Valdez; 1996 – Millstone; 2001 – World Trade Center; 2005 – BP Texas City; 2007 – Air Force B-52; 2008 – Stock Market Crash. What is next? Who is next?

5 Some types of system failures are so punishing that they must be avoided at almost any cost. These classes of events are seen as so harmful that they disable the organization, radically limiting its capacity to pursue its goal, and could lead to its own destruction. La Porte and Consolini, 1991

6 Is it right for you?

7 DOE injury rates have come down significantly since Integrated Safety Management (ISM) was adopted and deployed by contractors. (Data as of 7/7/2009.)

8 [Charts of Nuclear Energy Institute (NEI) data: reactor trips/scrams, cost (¢/kWh), significant events per unit, and capacity factor (% up).]

9 Individual Accidents OR Systems Accidents?

10 An accident occurs when the worker is not protected from the plant and is injured (e.g., radiation exposure, trips, slips, falls, industrial accidents). Plant (hazard) → Human (receptor). Focus: protect the worker from the plant.

11 An accident in which the system fails, allowing a threat (human errors) to release the hazard; as a result, many people are adversely affected: workers, the enterprise, the surrounding community, the country. Human errors (threat) → Plant (hazard). Focus: protect the plant from the worker. The emphasis on the system accident in no way diminishes the importance of individual safety; individual safety is a prerequisite of an HRO.

12 Goal of a High Reliability Organization: strive daily for high-reliability operations via a systems approach. Not every individual is going to have a perfect day every day; avoiding the catastrophic accident therefore requires a systems approach.

13 Reality Engineering: Understanding Socio-Technical Systems to Improve the Bottom Line

14 Focus on what is important; measure what is important. "The most important thing is to keep the most important thing the most important thing." Stephen Covey, The 8th Habit. This is not a new initiative but a logical, defensible way to think, based on logic and science; logic and science do not change with time or with new initiatives.

15 Take a physics-based systems approach: measure gaps relative to the physics-based system. Explicitly account for people: people are not the problem, they are the solution; people are not robots, and pounding on them won't improve performance; people provide safety, quality, security, science, etc. Sustain behavior by accounting for culture. Improve long-term safety, security, and quality.

16 Spectrum of Safety. At one end, the "squishy people" part of safety: the average IQ of the organization, a systems approach, a Gaussian curve — safety as people do. At the other end, hard-core safety physics: physics-invariant, preventing the flow of unwanted energy, a delta function — safety as engineers write.

17 Spectrum of Safety. Old mind-set: compliance-based safety. High Reliability Organization: explicitly consider human error, take organizational culture into account, maximize delivery of procedures, improve system safety. The spectrum again runs from hard-core safety physics (physics-invariant, preventing the flow of unwanted energy, a delta function) to the "squishy people" part of safety (the average IQ of the organization, a systems approach, a Gaussian curve).
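The Gaussian-versus-delta-function metaphor on the two slides above can be sketched numerically. The following is a hypothetical illustration, not from the presentation: engineered behavior is modeled as a fixed set point with zero spread (the "delta function" engineers write), while human performance is modeled as normally distributed around that set point (the "Gaussian curve" of what people actually do), so some small fraction of days drifts past a safety threshold. The set point, spread, and threshold values are invented for the example.

```python
import math

def excursion_probability(mean, sigma, threshold):
    """P(performance > threshold) when performance ~ Normal(mean, sigma)."""
    z = (threshold - mean) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# Engineers "write" a delta function: zero spread, the set point is always met.
# People "do" a Gaussian: day-to-day performance varies around the set point.
set_point = 0.0   # hypothetical nominal operating point
threshold = 3.0   # hypothetical edge of the safety basis, in sigma units

p_delta = 0.0 if set_point <= threshold else 1.0  # no variability, no excursions
p_people = excursion_probability(set_point, 1.0, threshold)

print(f"delta-function model: {p_delta:.6f}")
print(f"gaussian model:       {p_people:.6f}")  # small but nonzero at 3 sigma
```

The point of the sketch is the deck's own argument: even with a conservative threshold, a system that depends on variable human performance has a small but nonzero excursion rate, which is why a systems approach rather than individual perfection is required.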

18 Step #1: Ensure the operation has a defined and justified safety basis. Step #2: Develop and deploy an HRO framework that uses the strengths of the organization to maintain safety. Step #3: Measure the performance of the organization against the safety basis. Step #4: Leverage organizational learning to reduce variability in following the safety basis.

19 Step #1: Ensure the operation has a defined and justified safety basis. Understand the physics and chemistry of the processes: the Unsafe Zone and the Do-Not-Operate Zone (DOZ).

20 Unsafe Zone: violates the physics of safety; a high-consequence event. In the red part of the unsafe zone, as delineated by the deterministic line, there are some levels of physics beyond which the outcomes (consequences) are certain.

21 DOZ (don’t operate zone - signified by orange cloud) Unsafe Zone Violates physics of safety High consequence event Region noted by DOZ should provide safety but can’t prove The orange cloud signifies the DOZ (don’t operate zone). It extends to the unsafe zone (red circle) and signifies that area which because of uncertainty we try to stay out of by establishing conservative margins of safety. 21

22 Step #1: Ensure the operation has a defined and justified safety basis. Understand the physics and chemistry of the processes: the Unsafe Zone and the Do-Not-Operate Zone (DOZ). Define and justify the safety basis relative to the Unsafe Zone and the DOZ. Ensure individual processes are within the safety basis. Ensure the collective processes are within the safety basis. Determine the margin of safety.

23 DOZ (don’t operate zone - signified by orange cloud) Unsafe Zone Violates physics of safety High consequence event Region noted by DOZ should provide safety but can’t prove The safe zone/safety basis (green oval) represents a physics-based zone bounded with hazard analyses and defined using operating procedures. Safe Zone - Safety Basis Assured safety based on physics Processes if followed (i.e. stay within safety basis) assures safety 23

24 DOZ (don’t operate zone - signified by orange cloud) Unsafe Zone Violates physics of safety High consequence event Margin of Safety (i.e. safety factors) Safe Zone - Safety Basis Assured safety based on physics Processes if followed (i.e. stay within safety basis) assures safety The Margin of Safety represents the gap between the established safety basis and the unsafe zone. 24

25 Step #2: Develop and deploy an HRO framework that uses the strengths of the organization to maintain safety. Compliance-based safety assumes work-as-imagined equals work-as-done, except for bad apples.

26 DOZ (don’t operate zone - signified by orange cloud) Unsafe Zone Violates physics of safety High consequence event Margin of Safety (i.e. safety factors) Safe Zone - Safety Basis Assured safety based on physics Processes if followed (i.e. stay within safety basis) assures safety work-as-imagined = work-as-done Based on assumption that most people will follow established safety rules. Regulation and oversight ensure compliance with established safety basis. 26 Management assumes work- as-imagined equals work-as- done Engineer’s Field of Dreams Build it and they will come

27 DOZ (don’t operate zone - signified by orange cloud) Unsafe Zone Violates physics of safety High consequence event Margin of Safety (i.e. safety factors) Safe Zone - Safety Basis Assured safety based on physics Processes if followed (i.e. stay within safety basis) assures safety work-as-imagined = work-as-done X X bad-apples 27 Why do we remove “bad apples?” They represent the $ M lesson learned! Those that don’t follow established safety systems are just those few bad apples that need to be removed.

28 Step #2: Develop and deploy an HRO framework that uses the strengths of the organization to maintain safety. Compliance-based safety: work-as-imagined equals work-as-done, except for bad apples. The HRO approach to safety: acknowledge the reality of work-as-imagined vs. work-as-done; treat operations as socio-technical systems; give explicit consideration to the effect of organizations on technical safety.

29 DOZ (don’t operate zone - signified by orange cloud) Unsafe Zone Violates physics of safety High consequence event Processes if followed (i.e. stay within safety basis) assures safety work-as-imagined work-as-done Safe Zone - Safety Basis Assured safety based on physics Green cloud signifies organizations’ struggles to stay within safety basis. 29

30 DOZ (don’t operate zone - signified by orange cloud) Unsafe Zone Violates physics of safety High consequence event Holes in safety basis because of poor analysis (potentially drops you into the DOZ). Processes if followed (i.e. stay within safety basis) assures safety work-as-imagined work-as-done Safe Zone - Safety Basis Assured safety based on physics 30

31 DOZ (don’t operate zone - signified by orange cloud) Unsafe Zone Violates physics of safety High consequence event Reduced Margin of Safety Processes if followed (i.e. stay within safety basis) assures safety work-as-imagined work-as-done Safe Zone - Safety Basis Assured safety based on physics 31. Every excursion into DOZ decreases margin of safety.

32 DOZ (don’t operate zone - signified by orange cloud) Unsafe Zone Violates physics of safety High consequence event Processes if followed (i.e. stay within safety basis) assures safety work-as-imagined work-as-done Safe Zone - Safety Basis Assured safety based on physics 32 HROs: Explicitly consider how the organizational behavior affects ability to buy-in to the established safety basis. Attempt to leverage this to improve the margin of safety.

33 Building a High Reliability Organization

34 Four HRO practices. HRO Practice #1: Manage the System, Not the Parts (ensure the system provides safety; manage the system and evaluate variability). HRO Practice #2: Reduce Variability in the HRO System (deploy the system; evaluate operations and measure variability; adjust processes; refine the HRO system). HRO Practice #3: Foster a Strong Culture of Reliability (provide the capability to make conservative decisions; make judgments based on reality; foster a culture of reliability). HRO Practice #4: Learn & Adapt as an Organization (openly question and verify the system; generate decision-making information via a tiered approach; model organizational learning).

35 References:
The Limits of Safety, Scott D. Sagan
Normal Accidents: Living with High-Risk Technologies, Charles Perrow
Managing the Unexpected, Karl E. Weick & Kathleen M. Sutcliffe
Managing the Risks of Organizational Accidents, James Reason
Organizational Culture and Leadership, 3rd ed., Edgar Schein
The Field Guide to Human Error Investigations, Sidney Dekker
The 8th Habit: From Effectiveness to Greatness, Stephen Covey
Pantex High Reliability Operations Guide
Pantex Causal Factors Analysis Handbook

36

