I have often heard employees complain that they have no data to initiate a proper Reliability Improvement Program. This is not always true, and through no fault of theirs: they simply do not know how to use the records they already have. If you are running an operation, you should at least have production records, i.e. how much you are producing on a daily basis. If you don't have these, then perhaps you should not be in business at all. This article looks at ways to initiate a Reliability Program using the Barringer Process Reliability (BPR) methodology. Its greatest advantage is that it only requires production records as an input: how many units the plant produces each day. For example, the barrels of crude oil processed per day in a refinery, or the hectoliters of beer brewed daily in a brewery.
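As a minimal sketch of the BPR idea, the snippet below fits a Weibull distribution to a year of daily production records and reads off a reference production figure. The data set, the 10,000-barrel scale, and the split between normal and cutback days are all hypothetical, purely for illustration; a real BPR analysis works from the plant's actual daily records.

```python
# A minimal sketch of the BPR input: fit a Weibull distribution to daily
# production records. All figures below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical daily output (barrels/day): mostly steady days plus some
# low-output "cutback" days that a BPR plot would flag as reliability losses
daily_output = np.concatenate([
    rng.weibull(20.0, 330) * 10_000,   # normal operating days
    rng.weibull(2.0, 35) * 6_000,      # upset / cutback days
])

# Fit a 2-parameter Weibull (location fixed at 0), as used on a BPR plot
beta, _, eta = stats.weibull_min.fit(daily_output, floc=0)

# Reference point: daily output exceeded on 50% of operating days
median_output = eta * np.log(2) ** (1 / beta)
```

On a real BPR plot the fitted line and the scatter of low-output days are read together to quantify the revenue lost to process unreliability.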
The Reliability Mindset: Practical Applications in Industry
These articles offer practical and effective aspects of Reliability Engineering in an operating environment. Through short, easy-to-read articles, the author shares his experiences and the tips and techniques he has learnt over the years, illustrating the vast and sometimes untapped potential of this specialty.
Interpretation of Moubray’s Failure Patterns
Failure Patterns according to Moubray
In his book Reliability Centered Maintenance1, John Moubray highlights six patterns of failure. However, one needs to be careful about how those patterns are interpreted and used, or misused. The six failure patterns are as follows:
- A: Bathtub Pattern
- B: Age Related or Wear Out Pattern
- C: Fatigue Pattern
- D: Initial Break-in Pattern
- E: Random Pattern
- F: Infant Mortality Pattern
Building a PF Curve Using Life Analysis Concepts
Definition of a PF Curve
A PF curve is a graphical tool used in the field of maintenance and reliability. It illustrates a component's health degradation over its lifetime and acts as a visual guide on when to take appropriate action to minimize the operational risks related to unplanned failures. It is essentially a planning tool.
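As a tiny numeric illustration of the planning use of a PF curve: once the P-F interval is estimated, inspections are commonly scheduled at a fraction of it so at least one inspection falls between the potential failure point (P) and functional failure (F). The 90-day interval and the divide-by-two rule of thumb below are hypothetical illustration values, not a universal prescription.

```python
# Hypothetical PF-curve planning sketch: set the inspection interval so
# that at least one inspection lands inside the P-F window.
pf_interval_days = 90.0          # time from detectable defect (P) to failure (F)
inspections_within_window = 2    # common rule of thumb: inspect at P-F / 2

inspection_interval_days = pf_interval_days / inspections_within_window
```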
Using a Weibull Distribution to model Production Output
In probability theory and statistics, the Weibull distribution is a continuous probability distribution. It models a broad range of random variables, largely in the nature of a time to failure or time between events, and it addresses mechanical or structural failures in the field of Reliability Engineering. By nature, the Weibull distribution provides a lot of information, such as aging characteristics or expected asset lifetime. One of its most common outputs is the Bathtub Curve.
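The link between the Weibull shape parameter and the bathtub curve can be sketched numerically: the hazard (instantaneous failure rate) falls for β < 1, is constant for β = 1, and rises for β > 1. The characteristic life and the three β values below are hypothetical, chosen only to show the three regimes.

```python
# Illustrative sketch: the Weibull shape parameter beta maps onto the
# three bathtub-curve regimes. All parameter values are hypothetical.
import numpy as np
from scipy.stats import weibull_min

eta = 1000.0  # characteristic life (hours), hypothetical
t = np.linspace(10, 2000, 200)

def hazard(beta, t, eta=eta):
    """Instantaneous failure rate h(t) = f(t) / R(t)."""
    return weibull_min.pdf(t, beta, scale=eta) / weibull_min.sf(t, beta, scale=eta)

h_infant = hazard(0.5, t)    # beta < 1: decreasing rate (infant mortality)
h_random = hazard(1.0, t)    # beta = 1: constant rate (random failures)
h_wearout = hazard(3.0, t)   # beta > 1: increasing rate (wear-out)
```

For β = 1 the hazard reduces to the constant 1/η, i.e. the exponential distribution, which is the flat bottom of the bathtub.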
Transforming FMEAs into RAM Models
Failure Modes and Effect Analysis (FMEA) is an excellent foundation for Reliability Programs. It is also a stepping stone to build Reliability, Availability, and Maintainability (RAM) models. Here lies the “beauty” of doing Reliability and Maintenance analysis. Whereby information from one analysis can help enhance another analysis.
Concurrent Failure Analysis and Prevention
Concurrent or simultaneous failures can happen with redundant or spared systems: both items of spared equipment can fail at the same time, leaving the operator with no production output. For example, consider two alternating pumps operating in a parallel configuration. Each one acts as a spare and can, at any time, take over if the other fails. This article is based on a question I was asked during a recent industry presentation. I thought the example was interesting and informative enough to share with the Maintenance and Reliability community.
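A back-of-envelope version of the two-pump example can be written in a few lines, assuming independent failures and steady-state behaviour. The MTBF and MTTR figures are hypothetical; a full RAM model would relax the independence assumption and capture shared-cause dependencies.

```python
# Back-of-envelope sketch of concurrent failure exposure for two
# independent, alternating pumps. All figures are hypothetical.
mtbf = 4380.0  # mean time between failures per pump, hours (~6 months)
mttr = 48.0    # mean time to repair, hours

# Steady-state unavailability of one pump
u_single = mttr / (mtbf + mttr)

# Probability both pumps are down at the same time
# (independence assumed; common-cause failures would make this worse)
u_both = u_single ** 2

expected_lost_hours_per_year = u_both * 8760
```

Even with a tiny per-pump unavailability, the concurrent-failure exposure is non-zero, which is exactly why it is worth quantifying rather than assumed away.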
Dependency Concept in RAM Analysis
Assets or components in an operation can be dependent on each other. The network of equipment contributing to the operation output is complex and intricate. A RAM model helps account for the complexities including dependencies.
The fundamental purpose of Reliability, Availability, and Maintainability (RAM) modeling is quantifying system performance, typically over a future interval of time. A system is a collection of items whose coordinated operation leads to the output, generally a production value. The collection of items includes subsystems, components, software, human operations, etc. For example, an automobile is a system, with sub-components such as the drivetrain, engine, and gearbox. In RAM models, it is crucial to account for relationships or dependencies between items, as these help determine the final output of the system. In various industries, RAM models have proven effective as cost-avoidance and decision-making tools, as well as for their ability to confirm or counter assumptions stated by internal stakeholders.
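A toy Monte Carlo sketch shows how a dependency propagates into system output: two items in series feed one output, so either item's downtime stops the system. The item names, failure and repair parameters, and exponential life/repair assumptions are all hypothetical simplifications; commercial RAM tools use richer distributions and logic.

```python
# Toy Monte Carlo RAM sketch: two items in series feeding one output,
# so each item's downtime is a dependency for the whole system.
# Failure/repair parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
HOURS = 8760   # one simulated year
N_RUNS = 200   # number of simulated years

def simulate_item(mtbf, mttr, hours, rng):
    """Return a boolean up/down hourly timeline (exponential life and repair)."""
    up = np.ones(hours, dtype=bool)
    t = 0.0
    while t < hours:
        t += rng.exponential(mtbf)           # run until next failure
        down_for = rng.exponential(mttr)     # then repair
        up[int(t):int(min(t + down_for, hours))] = False
        t += down_for
    return up

availabilities = []
for _ in range(N_RUNS):
    pump = simulate_item(2000, 24, HOURS, rng)
    compressor = simulate_item(1500, 72, HOURS, rng)
    system_up = pump & compressor            # series dependency: both must run
    availabilities.append(system_up.mean())

system_availability = float(np.mean(availabilities))
```

The series AND (`pump & compressor`) is the dependency: swap it for an OR and the same timelines model full redundancy instead.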
Operational Data – An Untapped Goldmine
It is becoming more apparent that companies are treating their operational data with the "respect" it deserves. About time; and if not, they should start now. Operational data is typically stored in the CMMS (Computerized Maintenance Management System) or in historical records repositories.
Data is knowledge. Data is power. Data is the basis of good decision making.
This article also suggests that data can be a revenue generator. A company's data is akin to gold extracted from a mine: like gold, it requires hard work to obtain and hold on to, yet it is highly lucrative when exploited, refined, and traded. Operational data is generated by the organization running the equipment, and can also be generated by a third party. In the following paragraphs, we look at the untapped opportunities data and historical records can offer.
Evaluating Equipment Redundancy using RAM Models
The Concept of Equipment Redundancy
Adding equipment redundancy to a system can improve uptime and reliability, leading to increased output. When adding new equipment, it is cheaper to evaluate the benefits, or lack thereof, on paper before implementing the change. This is typically done as part of the design phase. It can also happen after commissioning, but that is more expensive. In other words, it is better to get it right before "shovels go in the ground".
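The simplest "on paper" evaluation compares the steady-state availability of a single unit against a 1-out-of-2 redundant pair, then prices the uptime gained. The MTBF/MTTR figures, the $/hour margin, and the capital cost below are hypothetical placeholders; a RAM model would refine this with dependencies, shared repairs, and common-cause failures.

```python
# Paper evaluation sketch: single pump versus a 1-out-of-2 redundant pair
# (independence assumed; all figures hypothetical).
mtbf, mttr = 2000.0, 48.0

a_single = mtbf / (mtbf + mttr)
# Parallel pair: the system is down only when both pumps are down
a_redundant = 1 - (1 - a_single) ** 2

uptime_gain_hours_per_year = (a_redundant - a_single) * 8760

# Hypothetical economics: does the extra uptime pay for the second pump?
margin_per_hour = 500.0        # production margin, $/hour (assumed)
redundancy_capex = 150_000.0   # installed cost of the spare pump (assumed)
annual_benefit = uptime_gain_hours_per_year * margin_per_hour
```

Comparing `annual_benefit` against `redundancy_capex` (and its ongoing maintenance cost) is the "before shovels go in the ground" decision in miniature.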
Effective Quantification of RCA Benefits
Barringer Process Reliability – Your “factory” performance on a single page
Paul H. Barringer invented the Barringer Process Reliability (BPR) process. Paul was a fellow Reliability Engineer "extraordinaire" and an outstanding mentor for myself and countless others in this field of practice. BPR highlights operational issues which, if not addressed or mitigated, could have significant revenue impacts on an operation. A BPR analysis uses the Weibull probability plot, a very well-known tool in the field of Reliability Engineering. On a single sheet of paper, the BPR plot can tell the true "story" of the operation. One of those "stories" relates to high-impact events best tackled by a Root Cause Analysis.
Parametric versus Non-Parametric Life Estimations – Sacrificing Reality for Elegance
A parametric Life Analysis involves "forcing" or "imposing" a distribution's parameters on a data set in order to obtain the "best fit". However, this can lead to errors in the results. Non-parametric estimation reminds us that there are other approaches, though not necessarily the easiest or "most elegant" ones. In the field of reliability engineering, we tend to like a tool so much that we use it in every "sauce". A classic example is the Weibull distribution, invented by Waloddi Weibull in 1937. It has become so popular that Life Analysis is also known as "Weibull Analysis". As a reminder, the Weibull distribution is only one parametric distribution amongst a myriad of others. Dr Bob Abernethy's New Weibull Handbook1 states: "the Weibull distribution provides reasonably accurate failure analysis and failure forecasts…". In other words, parametric distributions are good enough, but not perfect, for making decisions.
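The contrast can be sketched on one small data set: a parametric fit imposes a Weibull form, while a non-parametric estimate lets the data speak for itself. The ten failure times below are hypothetical; with no censoring, the Kaplan-Meier estimator reduces to the simple empirical survival function used here.

```python
# Sketch: parametric Weibull fit versus a non-parametric survival estimate
# on the same (hypothetical) complete failure data.
import numpy as np
from scipy import stats

failure_hours = np.array([120., 340., 560., 610., 745.,
                          900., 1100., 1420., 1630., 2100.])

# Parametric: impose a 2-parameter Weibull on the data
beta, _, eta = stats.weibull_min.fit(failure_hours, floc=0)
weibull_sf = stats.weibull_min.sf(failure_hours, beta, scale=eta)

# Non-parametric: empirical survival just after each ordered failure
# (Kaplan-Meier with no censoring reduces to this step function)
n = len(failure_hours)
km_sf = 1 - np.arange(1, n + 1) / n
```

Where `weibull_sf` and `km_sf` diverge is exactly where the imposed "elegance" of the parametric form is sacrificing reality.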
Defining Inventory Min-Max Levels
A better approach using RAM models
Traditional spare parts or inventory calculation methods involving Min-Max levels lack rigor, especially for critical equipment. Reliability, Availability and Maintainability (RAM) models provide a better approach to this calculation.
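To make the contrast concrete, here is the kind of probabilistic calculation a RAM-informed approach builds on, versus a rule-of-thumb Min level: size the "min" so the chance of a stockout during the resupply lead time stays below a target. The demand rate, lead time, and risk target are hypothetical, and Poisson demand is an assumption.

```python
# Sketch of a risk-based "min" level: Poisson spare demand over the
# resupply lead time. All figures are hypothetical.
from math import exp, factorial

annual_demand = 6.0            # expected part failures per year (assumed)
lead_time_weeks = 8.0          # resupply lead time (assumed)
target_stockout_risk = 0.05    # acceptable chance of running out (assumed)

lam = annual_demand * lead_time_weeks / 52.0  # expected demand in lead time

def poisson_cdf(k, lam):
    """P(demand <= k) for Poisson-distributed demand."""
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

# Smallest stock level whose stockout risk meets the target
min_level = 0
while 1 - poisson_cdf(min_level, lam) > target_stockout_risk:
    min_level += 1
```

A full RAM model goes further, tying the stockout event to actual production loss so the holding cost of each extra spare can be traded off against downtime cost.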
Maintenance Interval Optimization – Identifying the best Maintenance Strategy
What is Maintenance Optimization?
Maintenance Optimization is a Reliability Engineering process which helps organizations avoid unnecessary spend whilst minimizing the risk of a costly failure. Planned replacements or inspections detect or prevent failures in components or systems with increasing failure rates. This improves asset reliability and helps control maintenance spend. An increasing failure rate corresponds to a Weibull distribution shape parameter Beta (β) greater than one, specifically the failure rates located in the right-hand section of the bathtub curve, as shown in Diagram 1 below. Admittedly, the life characteristics have to follow a Weibull distribution in this case.
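A hedged sketch of the classic age-replacement optimization shows why β > 1 matters: the planned interval T is chosen to minimize expected cost per operating hour, balancing cheap planned replacements against expensive failures. The Weibull parameters and costs below are hypothetical; with β ≤ 1 no finite optimum exists and run-to-failure wins.

```python
# Age-replacement sketch: find the planned replacement interval T that
# minimizes expected cost per operating hour. Parameters are hypothetical.
import numpy as np

beta, eta = 2.5, 5000.0                     # Weibull shape and scale (hours)
cost_planned, cost_failure = 1_000.0, 12_000.0

def reliability(t):
    return np.exp(-(t / eta) ** beta)

def cost_rate(T, n=2000):
    """Expected cost per hour for a planned replacement at age T."""
    dt = T / n
    t = np.arange(n) * dt
    expected_cycle_length = reliability(t).sum() * dt   # E[min(life, T)]
    expected_cycle_cost = (cost_planned * reliability(T)
                           + cost_failure * (1 - reliability(T)))
    return expected_cycle_cost / expected_cycle_length

candidates = np.linspace(200, 12000, 300)
rates = np.array([cost_rate(T) for T in candidates])
best_T = float(candidates[np.argmin(rates)])
```

With these assumed numbers the optimum lands well below the characteristic life η, which is the whole point: waiting for failures is far more expensive than replacing on a schedule.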
What is the “M” In RAM? – The Concept and Importance of Maintainability
In line with the RAM acronym sequence, we often start with, and go no further than, the "R" in Reliability. In doing so, we forget about the "M". The question often asked is: "what is the reliability of the system?" Rarely asked is: "what is the maintainability of the same system?" I am myself guilty of this omission. Hence this article, to remind myself and you, the reader, of the importance of Maintainability in industry.
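A quick numeric sketch of why the "M" matters: inherent availability depends as much on how fast you can repair (MTTR, maintainability) as on how rarely you fail (MTBF, reliability). The MTBF and the three MTTR values below are hypothetical.

```python
# Inherent availability A = MTBF / (MTBF + MTTR): improving maintainability
# (shorter MTTR) lifts availability without touching reliability at all.
# Figures are hypothetical.
mtbf = 1000.0  # hours
availabilities = {mttr: mtbf / (mtbf + mttr) for mttr in (2.0, 10.0, 50.0)}
```

Cutting MTTR from 50 hours to 2 at the same MTBF buys the same kind of availability gain that a large reliability project would, often at a fraction of the cost.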
10 Crucial Attributes to Optimize your CMMS set-up for Reliability Analysis
CMMS overview
A proper CMMS (Computerized Maintenance Management System) setup can make a world of difference in an organization’s asset management journey. Conversely, a substandard setup can be a living hell for Reliability Engineers like myself and other analysts. I have personally wasted hundreds of hours of my work life sifting through a poor CMMS structure trying to find records. Ever wanted to make your Reliability Analysts more productive and engaged? If yes, then this article highlights 10 highly recommended set-up requirements.