by Robert (Bob) J. Latino

Anatomy of a ‘Failure’

Where does ‘failure’ come from? Why do some things not go as we planned? When bad things happen, the moment can be chaotic and appear very complex. Often, in hindsight (once the urgency has faded), we find that good people made bad decisions. Most of the time such failures were not complicated, and most likely anyone else put in the same position would have made the same decision. So why do things not always go as planned?

We are going to take a very macro view of this together. I will let you fill in the blanks about situations you have encountered where this model just seems to fit. If it doesn’t, please comment about your experience(s) and the deviations you observe.

The preview above shows the general pathway a ‘failure’ takes, from when it begins to form through the steps it progresses until we have to face the music and deal with its consequences. Let’s break that path down in very simplistic terms.

Most failures originate in the form of flawed systems. They can be inadequate, insufficient, and oftentimes just non-existent (there were no rules to follow, so we relied on our knowledge to ‘wing it’). I have listed just some examples of these cultural norms and systems, but in effect they all feed the reasoning processes behind our decisions.

Such organizational systems are put into place to help those who use them make better decisions. When flaws exist in such systems, less than adequate information is fed to the decision-maker. We refer to these ‘systems’ failures as Latent Root Causes. The name is appropriate because they are latent, or dormant: they are always there, but by themselves they cannot hurt anyone, because they are essentially just paper (or electronic files).

So less than adequate information is fed to a well-intentioned decision-maker, who at this point has to make a decision. The decision will normally come in the form of taking an action (a potential error of commission) or choosing NOT to take an action (a potential error of omission). The decision error itself becomes the ‘active’ error, the thing that activates the latent errors. Remember, at this point all of the reasoning is going on between the ears of the decision-maker; as outsiders, we can’t see anything.

This is an important point, because the decision itself will trigger a visible consequence. It may be the turn of a switch, a check on a checklist, the change of a setting, the implementation of a procedure, or a host of other possibilities. Whatever form it takes, as a result of the decision the consequences are now visible.

At this point we can view the cause-and-effect chain of consequences, which we will call the Physical Root Causes. If people in our workplaces do not recognize this chain of consequences forming and take action to stop it, then eventually a bad outcome will occur that will have to be addressed.

We often hear the term ‘situational awareness’ used to describe this sensory awareness of our surroundings. In low-morale environments, such awareness is often dulled: employees become human robots and do nothing more, or less, than what is expected. They operate as if they have blinders on and see only their own work spaces. High Reliability Organizations (HROs), by contrast, go out of their way to train their staff to recognize these error chains and to take action to stop them from progressing. This is also emphasized from the managerial oversight perspective, to prevent normalization of deviance (when our norms or practices slowly drift away from our standards).

If we are astute enough to identify the chain and break it, we will likely not suffer the full consequences that could have occurred. We call these ‘near misses’…I call them ‘we got lucky!’ Regardless, stopping the full consequences can save lives and prevent catastrophic damage.

If we are not able to stop the chain, then it will progress through what I call ‘the threshold of pain’. This means a regulatory trigger will be met and a full-blown investigation will be required. At this point the suits show up, and we have no choice but to analyze the failure under purely reactionary forces.

So to recap: flawed systems influence decision reasoning. As a result, decision errors are made that trigger visible consequences. If the chain of consequences is not stopped, then bad outcomes will likely occur and have to be dealt with.
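
For readers who like to think in code, here is a minimal, hypothetical sketch of that recap (the class, field, and message names below are my own invention for illustration, not anything from RCI or a real tool). It models latent roots as dormant data, the decision error as the activating event, and the physical chain as the visible consequences:

    # Toy model of the failure pathway described above (illustrative only).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class FailurePath:
        latent_roots: List[str]               # flawed systems: procedures, norms, training
        decision_error: Optional[str] = None  # the 'active' error (commission or omission)
        physical_chain: List[str] = field(default_factory=list)  # visible consequences

        def outcome(self, chain_broken: bool) -> str:
            if self.decision_error is None:
                # Latent roots alone are dormant -- essentially 'just paper'.
                return "dormant: latent roots exist but nothing has activated them"
            if chain_broken:
                # Someone recognized the error chain and stopped it in time.
                return "near miss: chain recognized and stopped ('we got lucky')"
            # The unbroken chain crosses the 'threshold of pain'.
            return "bad outcome: regulatory trigger met, investigation required"

    # Example: a flawed procedure feeds a well-intentioned operator a bad decision.
    path = FailurePath(
        latent_roots=["outdated procedure", "no training on the new equipment"],
        decision_error="omission: a verification step was skipped",
        physical_chain=["valve left open", "pressure spike", "seal rupture"],
    )
    print(path.outcome(chain_broken=False))

The hinge in the sketch, as in the article, is the decision error: without it the latent roots stay dormant, and with it only breaking the physical chain separates a near miss from a bad outcome.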

Reflect on this macro view and you will see it is not industry specific. No matter where humans work, this process is likely at play. Can you cite some examples from your experience?

Bob Latino, CEO, Reliability Center, Inc. · blatino@reliability.com · www.Reliability.com

Filed Under: Articles, on Maintenance Reliability, The RCA

About Robert (Bob) J. Latino

Robert Latino is currently a Principal at Prelical Solutions, LLC, along with his brother Ken Latino. Bob was a Founder and CEO of Reliability Center, Inc. (RCI), until it was acquired in 2019. RCI is a 50-year-old Reliability Consulting firm specializing in improving Equipment, Process and Human Reliability. Mr. Latino received his Bachelor’s degree in Business Administration and Management from Virginia Commonwealth University. For any questions, please contact Bob at blatino@prelical.com
