
by Greg Hutchins

Project Risk and Emergency Management: Response or Reaction


Guest Post by Malcolm Peart (first posted on CERM ® RISK INSIGHTS – reposted here with permission)

In project management we can't always control the environment around us. We can only forecast rather than predict risk, and despite our 'reasonable' or even 'best' efforts to mitigate risk, shit happens and emergencies ensue! It's not just physical emergencies but also those related to time and cost; over-budget or late projects can create an emergency for stakeholders. Perhaps it's because we tend to look at the 'big risks' or the 'top ten' after some semi-quantitative assessment but then fail to consider that risks can change with time as more information becomes available. Or maybe it is because only those risks that can be clearly defined and are 'likely' are communicated to the eyes and ears on the ground. Those low-probability, high-impact risks can tend to slip under any risk radar.
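That 'likely risks only' blind spot can be sketched numerically. Below is a minimal Python illustration with an assumed, hypothetical risk register (all names, probabilities and impact figures are invented for the example, not taken from the article): filtering the register by likelihood alone drops exactly the entry with the largest expected loss.

```python
# Hypothetical risk register entries: (name, probability, impact in $k).
# All names and figures are illustrative assumptions.
risks = [
    ("late vendor delivery", 0.60, 50),
    ("design rework", 0.40, 120),
    ("key staff turnover", 0.30, 80),
    ("cofferdam breach floods site", 0.02, 5000),  # low probability, high impact
]

# A 'likely risks only' filter, as when only clearly defined, probable
# risks are communicated to the eyes and ears on the ground.
communicated = [r for r in risks if r[1] >= 0.25]

# Expected loss (probability x impact) tells a different story.
expected = {name: p * impact for name, p, impact in risks}
biggest = max(expected, key=expected.get)

print("communicated:", [r[0] for r in communicated])
print("largest expected loss:", biggest, expected[biggest])
```

The filtered list looks sensible on its own; only the expected-loss view reveals that the risk left off the list dominates the register.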

Then there is the matter of ‘optimism bias’ and a belief that ‘nothing can go wrong’ or that ‘risk only happens to other people’. Or perhaps, it’s just a matter of management not appreciating what is going on and ‘taking their eye off the ball’.

Consequential Disproportionality and Risk

Risks happen, hazards are realised and, because of our forecasting ability, we believe we can manage everything. But forecasts rely on incomplete information, experience and 'lessons learned', coupled perhaps with expert judgment and possibly a Monte Carlo simulation. This is fine until the forecast is wrong, everybody sees it, and we suffer the symptoms of shock, denial and even anger. We then realise all too quickly that the risk mitigation planning was inadequate, an ongoing crisis is rapidly approaching calamity, and a catastrophe is looming. We are now up to our necks in alligators 'cos we got it wrong and we have an emergency.
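The gap between a central forecast and a tail outcome is easy to demonstrate with a toy Monte Carlo run. This is a sketch under stated assumptions (all cost figures and the 2% disruption probability are invented for illustration): the P50 forecast looks comfortable even while a rare, omitted event dominates the worst cases.

```python
import random

random.seed(1)  # reproducible illustration

def project_cost():
    """One simulated outturn cost in $M (all figures assumed)."""
    # Routine estimating uncertainty: triangular(low, high, mode).
    cost = random.triangular(90, 130, 105)
    # A rare disruptive event left out of many registers: 2% chance, +$200M.
    if random.random() < 0.02:
        cost += 200
    return cost

runs = sorted(project_cost() for _ in range(100_000))
p50 = runs[len(runs) // 2]
p95 = runs[int(len(runs) * 0.95)]
print(f"P50 = {p50:.0f}, P95 = {p95:.0f}, worst = {runs[-1]:.0f}")
```

A forecast reported as the P50 (or even P95) of such a run can be 'right' in distribution and still leave the team unprepared for the tail it implicitly acknowledges.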

But why? The consequences of a hazard may well be inversely proportional to the triggering event, and small or low-likelihood events are oftentimes overlooked or even ignored. Many major disasters are a consequence of seemingly 'small' matters: the loss of NASA's Challenger resulted from the failure of an 'O'-ring; Deepwater Horizon had a faulty pipe fitting; and a small but neglected leaking pipe at Union Carbide's Bhopal plant caused devastation in India. We all know the fictional Dutch story of how a little boy plugged a small hole in a dyke and saved Holland from flooding; this only goes to reinforce the concept of consequential disproportionality.

Engineering emergencies can cause millions of dollars' worth of damage, disruption and delay, but emergencies can also arise from something as innocuous as office procedures; just look at Barings Bank in the 1990s. The 2008 banking collapse cost trillions but was a consequence of many relatively small events culminating in a worldwide economic disaster; perhaps the sight of quick profits blinded the foresight of risk proponents.

Optimism Bias and Groupthink

The UK Government investigated "optimism bias" in 2003 in recognition that project appraisers have a systematic tendency to be overly optimistic. There was a mandate that appraisers should make explicit, empirically based adjustments to the estimates of a project's costs and durations to create realistic expectations. The mandate has been updated in recognition of the optimism that continues to prevail and, unfortunately, the recently announced delays and overspend on London's Crossrail Project bear witness to our ongoing lessons in optimism bias.

This optimism bias results from people believing that they are less likely to experience a negative event than statistics would have them believe.  It has also been shown to be independent of gender, ethnicity, nationality and age.  Even when risk is appraised by an experienced group of people there can be a tendency for the group to convince themselves that risks will be unlikely, or that planned contingency measures will be so robust that nothing can go wrong – this is groupthink.

Most people want a venture to be successful and anybody who would ‘rock the boat’ or raise controversial issues may well alienate themselves from the group.  Unfortunately, by wanting to belong, group members make efforts to cultivate harmony and conformity and avoid conflict, even the constructive kind.  Decisions become a compromise leading to dysfunctional behavior whereby alternative views are dismissed and critical thinking is curtailed.  Nay-sayers are outcast and the remaining herd believe they have become stronger as everybody ‘goes with the flow’.

Emergency Management

The recognized emergency management process is characterized by ‘mitigation’, ‘preparedness’, ‘response’ and ‘recovery’.  Mitigation identifies hazards while preparedness requires obtaining equipment/technology and the training and drilling of personnel in expectation of the predicted emergency.

If the emergency happens there is the 'response': the controlled implementation of plans, which are then adapted accordingly. After any emergency there is the recovery, during which the aftermath is addressed. If the planning is right, then the recovery boils down to responding to the situation according to plan, and all's well that ends well. However, if the emergency is unplanned and unexpected, the resultant shock can create panic, and effort is then wasted as people protect their position by blaming anybody and everybody who was also unable to see the future properly.

After any emergency there is the recovery; if the planning was correct and the response successful, then the team merely did their job and the response was controlled. Sometimes it may be perceived that the emergency was 'business as usual'; just look at the Y2K prediction of potential global mayhem because of a couple of digits in the date (another small thing, by the way). However, if the emergency response is an uncontrolled reaction, an enquiry will inevitably be convened. After all, and in true human behavior, blame needs to be apportioned, the guilty punished, and the non-involved can even be recognised or promoted.

The aftermath of an enquiry can also result in the imposition of more rigorous controls, new rules and regulations, or even statutory legislation. These may provide 'assurance' that the last emergency shouldn't repeat itself, but a culture of blame and cover-up, and ways of avoiding bureaucratic hurdles, can result while the real lesson behind these controls is sadly forgotten. As Churchill said, "Those who fail to learn from history are condemned to repeat it", and if we do not understand why we do something then the resultant blind obedience may lead to shortsightedness when it comes to foresight. As they say, rules and regulations are 'for the obedience of fools and guidance of wise men'.

Conclusions

Risk identification is at the core of forecasting emergencies and making adequate preparation. However, an overly optimistic view of risk will inevitably reduce the ability of a project team to respond to a crisis in an effective or efficient manner. Effort may then be wasted as it is spent dealing with the shock, anger and denial of unplanned and unexpected events. Effort is also wasted on blaming others as parties get in with their story first rather than dealing with the issue at hand.

Emergencies happen and crises will occur if risks and scenarios are missed or, worse, ignored.  Low probability events can hide away in a risk register and are overlooked by optimists, but low probability does not mean no probability.  All risks can materialise if you wait long enough, so it is important that probabilities are reviewed as projects progress.  Risk identification is not just a one-off exercise and any risk register and response planning must keep up with the times. If there is groupthink it can be difficult for any realist to advocate that risks can happen.  When one is part of a herd of optimists it can be easier to ‘go with the flow’ rather than risk being ostracised.  But when an emergency happens that flow may well become a raging torrent and saying ‘I told you so’ after the fact will not control the stampede as the herd runs for cover.
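'Low probability does not mean no probability' has a simple arithmetic behind it: the chance that a rare event occurs at least once grows with exposure time, which is one reason probabilities need reviewing as a project progresses. A small sketch (the 2% per-month figure is an assumed, illustrative number):

```python
# Probability of at least one occurrence over n periods: 1 - (1 - p)^n.
p_per_month = 0.02  # assumed per-month probability of a 'rare' event

for months in (6, 12, 36, 60):
    at_least_once = 1 - (1 - p_per_month) ** months
    print(f"{months:3d} months: {at_least_once:.0%}")
```

Over a five-year project even a 2%-per-month risk becomes more likely than not to materialise, which is hard to see from a register that only records the per-period figure.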

Bio:

UK Chartered Engineer & Chartered Geologist with over thirty-five years' international experience in multicultural environments on large multidisciplinary infrastructure projects including rail, metro, hydro, airports, tunnels, roads and bridges. Skills include project management, contract administration & procurement, and design & construction management as Client, Consultant, and Contractor.


About Greg Hutchins

Greg Hutchins PE CERM is the evangelist of Future of Quality: Risk®. He has been involved in quality since 1985, when he set up the first quality program in North America based on MIL-Q-9858 for the natural gas industry. MIL-Q-9858 became ISO 9001 in 1987.

He is the author of more than 30 books. ISO 31000: ERM is the best-selling and highest-rated ISO risk book on Amazon (4.8 stars). Value Added Auditing (4th edition) is the first ISO risk-based auditing book.




© 2025 FMS Reliability · Privacy Policy · Terms of Service · Cookies Policy