Making Statistically Confirmed Decisions

by Ray Harkins

Co-authored with Mike Vella

Leaders and managers play many roles: planning, scheduling, coaching, teaching, supervising, hiring, and sometimes firing. While much of this work is routine, it often involves making decisions. Some decisions are low-risk with clear facts and limited options. Others are made with murky details, unknown options, and high risks if incorrect. Regardless, decision-making often falls to leaders and managers. Theodore Roosevelt, the 26th President of the United States, famously said: “In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing.”

Leaders and managers understand that timely and correct decision-making is critical to their success. You might consider yourself successful if 95% of your decisions are correct. However, in reality, it seems people remember your mistakes more than your successes.

As Teddy Roosevelt suggested, leaders and managers need to make the right decisions more often than not. While not all decisions can be reduced to numbers, many can be. By organizing data in certain ways, we can apply statistically valid tests and experiments to determine the best course of action. This approach ensures our decision-making is rational, justifiable, and supported by statistical evidence, allowing us to make decisions with a known level of risk.

Understanding the risks of making wrong decisions is essential when using hypothesis testing. In statistical vocabulary, H0 denotes the null hypothesis, which states that there is no difference between the data sets being compared. In statistical decision-making, we decide, based on sample evidence, whether to accept or reject this hypothesis. Two correct decisions are possible: accepting the null hypothesis when it is true and rejecting it when it is false. Likewise, two errors are possible: a Type I error (rejecting the null hypothesis when it is true) and a Type II error (accepting the null hypothesis when it is false).

Consider the risks a jury faces in a capital crime trial. The defendant either committed the crime or did not. The jury must determine guilt or innocence based on the evidence presented by the prosecution and the defense. No error is made if the jury frees an innocent defendant or convicts a guilty one. However, a Type I error occurs if an innocent defendant is found guilty, and a Type II error occurs if a guilty defendant is freed. Recognizing that the judicial system is not perfect, society must determine the acceptable risk of errors in this process, especially where capital punishment is imposed.
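To make that accepted risk concrete, here is a minimal simulation sketch in Python (an illustrative addition, not part of the original article): it repeatedly draws samples from a population where the null hypothesis is actually true and counts how often a one-sample t-test at α = 0.05 rejects it. The parameter values and variable names are assumptions chosen for illustration; roughly 5% of the tests should reject H0, which is precisely the Type I error risk the decision-maker accepts.

# Illustrative sketch: estimating the Type I error rate by simulation.
# Assumption: samples come from a normal population whose true mean equals
# the hypothesized mean, so every rejection is a Type I (false) rejection.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_0 = 65.0      # hypothesized (and here, true) population mean
sigma = 3.0      # assumed population standard deviation
n = 25           # sample size per test
alpha = 0.05     # accepted risk of a Type I error
trials = 10_000  # number of simulated studies

false_rejections = 0
for _ in range(trials):
    sample = rng.normal(mu_0, sigma, n)
    t_stat, p_value = stats.ttest_1samp(sample, mu_0)
    if p_value < alpha:
        false_rejections += 1

print(f"Observed Type I error rate: {false_rejections / trials:.3f}")  # close to 0.05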

One of the most widely used hypothesis tests for evaluating how the averages of data sets differ is the t-test, which applies when the population standard deviation or variance is unknown and must be estimated from the sample. Here is an example:

A manufacturer of synthetic rubber claims the average hardness of its rubber discs is 65.0 on the Shore A scale. Your company processes this rubber into vibration insulators for stationary machines. To ensure performance and reliability, your quality team needs to confirm the supplier’s claim. Assuming the rubber production process is normally distributed and in statistical control, but with unknown variability, the team selects 25 samples and accepts a 5% risk of error. The laboratory measures an average hardness of 62.44 Shore A with a standard deviation of 3.056 Shore A.

To determine whether the manufacturer’s claim about the average hardness of the rubber discs is valid, we perform a t-test. Here are the steps:

  1. State the Null Hypothesis (H0) and Alternative Hypothesis (Ha):
    • H0: The average hardness of the rubber discs is 65.0 Shore A (µ = 65.0 Shore A).
    • Ha: The average hardness of the rubber discs is not 65.0 Shore A (µ ≠ 65.0 Shore A).
  2. Collect the sample data:
    • Sample mean (x̄): 62.44 Shore A
    • Sample size (n): 25
    • Sample standard deviation (s): 3.056 Shore A
  3. Calculate the test statistic (t): The t-statistic is calculated using the formula:
$$ \displaystyle t=\frac{\bar{x}-\mu}{{s}/{\sqrt{n}}} $$

Substituting the values, we get:

$$ \displaystyle t=\frac{62.44-65.0}{{3.056}/{\sqrt{25}}}={-2.56}/{0.6112}\approx-4.19 $$
  4. Determine the degrees of freedom (df):
$$ \displaystyle df=n-1=25-1=24 $$
  5. Find the critical t-value from the t-distribution table:

At a 5% significance level (α = 0.05) for a two-tailed test with 24 degrees of freedom, the critical t-value is approximately ±2.064.

  6. Compare the calculated t-value with the critical t-value:
    • Calculated t-value: -4.19
    • Critical t-value: ±2.064

Since the absolute value of the calculated t-value (4.19) is greater than the critical t-value (2.064), we reject the Null Hypothesis.
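As a quick cross-check of the arithmetic above, the short Python sketch below reproduces the one-sample t-test from the summary statistics. This snippet is an illustrative addition rather than part of the original worked example, and the variable names are assumptions.

# Illustrative sketch: reproducing the one-sample t-test from summary statistics.
from scipy import stats

x_bar = 62.44   # sample mean (Shore A)
mu_0 = 65.0     # claimed population mean (Shore A)
s = 3.056       # sample standard deviation (Shore A)
n = 25          # sample size
alpha = 0.05    # significance level, two-tailed

t_stat = (x_bar - mu_0) / (s / n ** 0.5)     # ≈ -4.19
df = n - 1                                   # 24 degrees of freedom
t_crit = stats.t.ppf(1 - alpha / 2, df)      # ≈ 2.064
p_value = 2 * stats.t.sf(abs(t_stat), df)    # ≈ 0.0003

print(f"t = {t_stat:.2f}, critical t = ±{t_crit:.3f}, p-value = {p_value:.4f}")
if abs(t_stat) > t_crit:
    print("Reject H0: the observed mean differs from the claimed 65.0 Shore A.")
else:
    print("Fail to reject H0.")

Running it with the values above reproduces the rejection of the null hypothesis reached by hand.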

Conclusion

The t-test result shows that we cannot accept the manufacturer’s claim that the average hardness of the rubber discs is 65.0 Shore A. Instead, we conclude that the difference between the claimed average and the observed average of 62.44 Shore A is statistically significant.

Decision-makers are tasked with making correct choices. Sometimes the solutions are straightforward, other times not. Regardless, decisions must be made with an understanding of the associated risks. Statistical tools like hypothesis testing provide greater reliability, validity, and precision in decision-making.

Ray Harkins is the General Manager of Lexington Technologies in Lexington, North Carolina. He earned his Master of Science from Rochester Institute of Technology and his Master of Business Administration from Youngstown State University. He also teaches manufacturing and business-related skills such as Quality Engineering Statistics, Reliability Engineering Statistics, Failure Modes and Effects Analysis (FMEA), and Root Cause Analysis and the 8D Corrective Action Process through the online learning platform Udemy. He can be reached via LinkedIn at linkedin.com/in/ray-harkins or by email at the.mfg.acad@gmail.com.

Mike Vella is a retired quality and operations leader, most recently serving as Senior VP of Operations at the Suter Company and, before that, as Vice President and General Manager of TI Automotive’s Brake and Fuel Group in North America, where he spent 30 years. Mike received his MBA from Wayne State University and his BS from Western Michigan University. His contributions to The Manufacturing Academy include training courses focused on leadership, quality, problem solving, and statistical analysis.

