Accendo Reliability

Your Reliability Engineering Professional Development Site


by Fred Schenkelberg

Consider Variation for Reliable Designs

The systems with the best reliability performance start the design process by controlling variability.

That means considering the variability of materials and processes throughout the product lifecycle. Reliability performance is the result of the decisions made throughout the design process.

When the team focuses on understanding and minimizing variability, the design becomes robust and reliable.

One of the activities in Army basic training is learning to fire a rifle. The majority of the instruction is about consistency. After our first attempt to hit a target with 10 shots, a drill sergeant looked at the results. In my case, the holes in the paper target were widely scattered across the right side, and some shots missed the target entirely. He sighed and told me to focus on taking aim, breathing, and squeezing the trigger exactly the same way each time. He said I first had to achieve a consistent process, a tight shot group; then they could help me adjust my aim to center that group on the center of the target.

It’s the same with statistical process control. We start by looking at the range chart; if it is not stable and consistent, the plot of the average readings will not help improve the process. First, get a tight and consistent shot group.
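
As a rough sketch of that sequence, the snippet below uses illustrative subgroup data and the usual X-bar/R control-chart constants for subgroups of five (an assumption for this example) to check the range chart for stability before computing limits for the averages chart.

```python
# Minimal sketch: check the R chart before trusting the X-bar chart.
# Subgroup data are illustrative; A2, D3, D4 are the standard
# control-chart constants for subgroups of size 5.
import numpy as np

subgroups = np.array([
    [9.9, 10.1, 10.0, 10.2, 9.8],
    [10.0, 10.1, 9.9, 10.0, 10.1],
    [9.8, 10.0, 10.2, 10.1, 9.9],
    [10.1, 10.0, 10.0, 9.9, 10.2],
])
A2, D3, D4 = 0.577, 0.0, 2.114   # subgroup size n = 5

ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
R_bar = ranges.mean()
r_ucl, r_lcl = D4 * R_bar, D3 * R_bar

if np.all((ranges >= r_lcl) & (ranges <= r_ucl)):
    # Only once the ranges are stable do the X-bar limits mean anything.
    x_bar_bar = subgroups.mean()
    ucl, lcl = x_bar_bar + A2 * R_bar, x_bar_bar - A2 * R_bar
    print(f"X-bar limits: {lcl:.3f} to {ucl:.3f}")
else:
    print("Range chart unstable: fix consistency before charting averages")
```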

In designing for reliability, again it’s the same. We make decisions all through the process focused on the functions and performance of the product. This decision process may include time to market and cost considerations.

What separates reliable designs from not-so-good ones is accounting for the impact of variability.

Measurement variability

The source of all our information is data based on the measurements we or our suppliers make.

Every measurement system adds some amount of measurement error. These errors may include bias, linearity, stability, repeatability, and reproducibility.

A great first step for any measurement system you will rely on for design decisions is a Gage Repeatability and Reproducibility (Gage R&R) study. There are a few ways to conduct a Gage R&R study; the easiest is the average and range method. It provides a breakdown of the measurement error contributed by the appraisers and by the equipment, along with the proportion of the tolerance consumed by measurement error.

In general, if the measurement error is greater than 10% of the tolerance for the characteristic being measured, the measurement system is not adequate for the task.
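
Here is a minimal sketch of the average and range method, assuming a 3-appraiser, 3-part, 3-trial study. The data are made up, the K1/K2 factors are the commonly tabulated AIAG values for that layout, and the result is compared to the tolerance using a 6-sigma spread; verify the constants against your MSA reference before relying on such a calculation.

```python
# Simplified sketch of an average-and-range Gage R&R calculation.
# Illustrative data: measurements[appraiser, part, trial].
import numpy as np

measurements = np.array([
    [[2.48, 2.51, 2.50], [2.60, 2.62, 2.61], [2.39, 2.41, 2.40]],
    [[2.49, 2.50, 2.52], [2.61, 2.60, 2.63], [2.42, 2.40, 2.41]],
    [[2.47, 2.49, 2.50], [2.59, 2.61, 2.60], [2.38, 2.40, 2.42]],
])
tolerance = 0.30            # total tolerance width for this characteristic
K1, K2 = 0.5908, 0.5231     # AIAG factors for 3 trials, 3 appraisers

n_appraisers, n_parts, n_trials = measurements.shape

# Equipment variation (repeatability): from the average range across trials
ranges = measurements.max(axis=2) - measurements.min(axis=2)
EV = ranges.mean() * K1

# Appraiser variation (reproducibility): from the spread of appraiser averages
appraiser_means = measurements.mean(axis=(1, 2))
x_diff = appraiser_means.max() - appraiser_means.min()
AV = np.sqrt(max((x_diff * K2) ** 2 - EV**2 / (n_parts * n_trials), 0.0))

GRR = np.sqrt(EV**2 + AV**2)
pct_tolerance = 100 * 6 * GRR / tolerance   # 6-sigma spread vs. tolerance
print(f"GRR sigma = {GRR:.4f}, %Tolerance = {pct_tolerance:.1f}%")
```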

Calibration alone is not sufficient to minimize measurement error. Learn about and conduct Gage R&R studies to really understand and improve the data you collect and use to make decisions, especially in the design process.

Tolerance analysis

I cringe when reviewing a drawing or set of specifications where all the tolerances are set at a blanket value. This implies that every tolerance is as important as every other. That may occasionally be true, yet it is often neither possible nor necessary for the design and the resulting system to function correctly.

We know there are many sources of variability when creating components or parts. That is, in part, the purpose of tolerances: to acknowledge the variation that will be present and to limit it so that the system still functions.

In the design process, setting balanced tolerances creates a robust and reliable product that performs even with the random set of actual sizes and values of the assembled components. Thus, a crucial step in the design process is to understand the variability of the components, parts, and assembly processes.

In many cases, we already know the expected variability; in others, we have to collect measurements and estimate the range of variability that will occur.

Worst case analysis

Many design teams use either worst-case analysis or root sum squared (RSS) analysis to set tolerances, if they are not simply using a default value. Worst case is conservative, as it evaluates the ability of the design to function even if every part in the collection is at its extreme value.

Will the circuit still work if the resistor is at its maximum value and the capacitor is at the low end of its range? The same applies to mechanical systems.
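
As an illustration, here is a minimal worst-case sketch for a resistor divider; the supply voltage, resistor values, and tolerances are invented for the example, and every tolerance corner is evaluated.

```python
# Minimal worst-case analysis of a resistor divider:
# Vout = Vin * R2 / (R1 + R2). Values and tolerances are illustrative.
from itertools import product

Vin = 5.0
R1_nom, R2_nom = 10_000.0, 10_000.0   # ohms
tol = 0.05                            # +/-5% resistors

def vout(r1, r2):
    return Vin * r2 / (r1 + r2)

# Evaluate the output at every combination of tolerance extremes
corners = [vout(R1_nom * (1 + s1 * tol), R2_nom * (1 + s2 * tol))
           for s1, s2 in product((-1, 1), repeat=2)]
print(f"Vout worst case: {min(corners):.3f} V to {max(corners):.3f} V")
```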

RSS analysis

While worst-case analysis is conservative and fairly easy to implement, it is possible the design cannot function, or cannot even be assembled (e.g., hole alignment), under worst-case conditions, given the technology and assembly processes.

Instead, we count on the very low probability of every part in a system being at its worst-case value at the same time. Since that is unlikely to occur, an RSS analysis provides a way to combine the standard deviations of the part variations.

While not as conservative as using the absolute worst-case values, it limits failures to only a small fraction of the systems created. Basically, it assumes that most parts will be near the nominal or target value and few will be near the extremes.
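
A minimal sketch of an RSS stack-up for a simple linear gap follows, with invented dimensions, tolerances treated as ±3σ, and the result compared against the worst-case sum.

```python
# Sketch of a root-sum-squared (RSS) tolerance stack for a linear gap:
# gap = housing length - sum of three part lengths. Dimensions are
# illustrative; each tolerance is treated as a +/-3 sigma bound.
import math

housing = (50.00, 0.15)                      # (nominal, +/- tolerance)
parts = [(16.50, 0.05), (16.50, 0.05), (16.50, 0.05)]

nominal_gap = housing[0] - sum(n for n, _ in parts)
worst_case = housing[1] + sum(t for _, t in parts)
rss = math.sqrt(housing[1] ** 2 + sum(t ** 2 for _, t in parts))

print(f"nominal gap  : {nominal_gap:.3f}")
print(f"worst case   : +/-{worst_case:.3f}")
print(f"RSS estimate : +/-{rss:.3f}")   # tighter than the worst-case sum
```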

Monte Carlo analysis

A third method is more accurate, yet requires more information. Monte Carlo analysis permits us to simulate a hole alignment, for example, using the distributions of the part variation.

Yes, this requires knowing the part variation distributions, not just the mean and standard deviation values, yet it allows the distributions to accurately reflect the spread of the values and avoids assuming they are normally distributed.
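
A minimal Monte Carlo sketch of a hole-to-pin clearance is shown below; the distributions (a skewed lognormal oversize on the hole, a normal pin diameter) are assumptions chosen only to illustrate that the inputs need not be normal.

```python
# Sketch of a Monte Carlo check of hole-to-pin clearance.
# Distributions are illustrative assumptions, not measured data.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hole tends to be cut oversize with a skewed spread; pin is roughly normal.
hole = 5.00 + rng.lognormal(mean=np.log(0.03), sigma=0.4, size=N)
pin = rng.normal(loc=4.98, scale=0.01, size=N)

clearance = hole - pin
interference_rate = np.mean(clearance <= 0)
print(f"P5 clearance = {np.percentile(clearance, 5):.4f} mm, "
      f"interference fraction = {interference_rate:.2e}")
```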

Process control and capability

We often assume statistical process control is a manufacturing tool to monitor and control the assembly process. Yet the tools and techniques also apply in the design process, as they form the basis for setting tolerances and for design approaches that accommodate the naturally occurring variation in the parts.

To design a robust product, we need to know the variability of the parts and the assembly process, and the best way to determine these values is to measure production. By that point, however, it is often too late to alter the fundamental design. Thus, we need to collect variation data from similar assembly processes, from our suppliers' processes, and through experimentation.

Like measurement systems, the processes used to create parts or systems vary. Not all variation is bad, yet too much is often very bad. Uncontrolled or unstable variation will lead to product failures.
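
One common way to relate that process variation to the tolerance is a capability index. A minimal sketch, using invented sample data and specification limits, is shown below.

```python
# Minimal sketch of process capability (Cp, Cpk) from sample data.
# The measurements and specification limits are illustrative only.
import numpy as np

lsl, usl = 9.70, 10.30   # specification limits
x = np.random.default_rng(1).normal(10.02, 0.06, size=200)  # stand-in for measured parts

mu, sigma = x.mean(), x.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)                  # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # capability including centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")        # Cpk >= 1.33 is a common target
```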

Early in the design, focusing on understanding and controlling variation allows us to select the best parts, design stable assembly processes, and create products with a ‘tight shot group’ that hits the reliability (cost, yield, schedule, and function) targets.

Summary

It’s in the design process that variability really matters.

This is just a summary of the tools your design team should be using to identify and design reliable products. Good measurements and stable processes allow meaningful tolerances, which then convey the design intent. Products that accommodate the variation of parts and processes are robust and reliable.

These systems do not occur by chance; it takes a focus on understanding and minimizing variability to achieve them.

Filed Under: Articles, Musings on Reliability and Maintenance Topics, on Product Reliability

About Fred Schenkelberg

I am the reliability expert at FMS Reliability, a reliability engineering and management consulting firm I founded in 2004. I left Hewlett Packard (HP)’s Reliability Team, where I helped create a culture of reliability across the corporation, to assist other organizations.
