Accendo Reliability

Your Reliability Engineering Professional Development Site


by Enrico Belmonte

Why Various Distributions Are Appropriate

Why are Weibull, lognormal, and exponential distributions more appropriate than others for modeling failure distributions?

In this article, I want to show you why Weibull, lognormal, and exponential distributions are more appropriate than others for modeling failure distributions. Instead of starting from an explanation of the most commonly used fitting distributions in reliability engineering, I will begin by analyzing artificial datasets created under specific assumed failure behaviors. In other words, I will start from the data, not from the statistical distributions. I think this approach can help those encountering the topic for the first time.

Before delving into the topic, let’s recall the concepts of failure rate and failure probability. These two quantities will be used when constructing the artificial datasets.

The failure rate is defined as the number of failures in a time interval, divided by the length of the interval and the number of units still operating at its start (eq. 1):

$$ \lambda(t)=\frac{n(t)-n(t+\delta t)}{\delta t\cdot n(t)} \tag{eq1} $$

Where: 

  • $n(t)$: number of operational units at time $t$
  • $\delta t$: time interval

The numerator $n(t) - n(t + \delta t)$ represents the number of failures that occurred in the time interval $\delta t$.

Reliability is defined as the probability that an item is still operational after a certain time (eq. 2); the failure probability is its complement, $1 - R(t)$.

$$ R(t)=\frac{n(t)}{n_{0}} \tag{eq2} $$

Where:

  • $n_{0}$: number of operational units at time $t = 0$.
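As a minimal sketch of eq. 1 and eq. 2 in Python (my own illustration, not the author's code; the function names are assumptions), applied to the first survivor counts of Table 1 with a unit time interval:

```python
def failure_rate(counts, t, dt=1):
    """eq. 1: failures in [t, t + dt] divided by (dt * survivors at t)."""
    return (counts[t] - counts[t + dt]) / (dt * counts[t])

def reliability(counts, t):
    """eq. 2: fraction of the original population still operating at time t."""
    return counts[t] / counts[0]

# Survivor counts from the first rows of Table 1 (n0 = 10000, lambda = 0.1)
counts = [10000, 9000, 8100, 7290]
print(failure_rate(counts, 0))  # 0.1
print(reliability(counts, 2))   # 0.81
```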

In the following, I’m going to show four examples of datasets created assuming different failure behaviors. In all the examples, the number of samples at the beginning of the analysis is n = 10,000 and a unit time interval is used.

Example 1 – constant failure rate.

For the dataset of failures in Table 1 I’ve assumed a constant failure rate: λ = 0.1.

Table 1. Example 1: constant failure rate, λ = 0.1.

| Survivors at start of interval | Failure rate | Failures | Time interval | Failures (cumulated) | Failure probability |
|---|---|---|---|---|---|
| 10000 | 0.1 | 1000 | 1 | 1000 | 0.10 |
| 9000 | 0.1 | 900 | 2 | 1900 | 0.19 |
| 8100 | 0.1 | 810 | 3 | 2710 | 0.27 |
| 7290 | 0.1 | 729 | 4 | 3439 | 0.34 |
| 6561 | 0.1 | 656 | 5 | 4095 | 0.41 |
| 5905 | 0.1 | 590 | 6 | 4686 | 0.47 |
| 5314 | 0.1 | 531 | 7 | 5217 | 0.52 |
| 4783 | 0.1 | 478 | 8 | 5695 | 0.57 |
| 4305 | 0.1 | 430 | 9 | 6126 | 0.61 |
| 3874 | 0.1 | 387 | 10 | 6513 | 0.65 |
| 3487 | 0.1 | 349 | 11 | 6862 | 0.69 |
| 3138 | 0.1 | 314 | 12 | 7176 | 0.72 |
| 2824 | 0.1 | 282 | 13 | 7458 | 0.75 |
| 2542 | 0.1 | 254 | 14 | 7712 | 0.77 |
| 2288 | 0.1 | 229 | 15 | 7941 | 0.79 |
| 2059 | 0.1 | 206 | 16 | 8147 | 0.81 |
| 1853 | 0.1 | 185 | 17 | 8332 | 0.83 |
| 1668 | 0.1 | 167 | 18 | 8499 | 0.85 |
| 1501 | 0.1 | 150 | 19 | 8649 | 0.86 |
| 1351 | 0.1 | 135 | 20 | 8784 | 0.88 |
| 1216 | 0.1 | 122 | 21 | 8906 | 0.89 |

A constant failure rate implies that in each time interval the quotient between the number of failures during the interval and the number of surviving items at the beginning of the interval is constant (Figure 1a). Does the assumption of a constant failure rate apply to any failure-related event? Yes, if failure is caused by random phenomena. Examples include a flat tire due to a sharp object, an electronics failure due to a power surge, or an electronics failure due to cosmic radiation. In these cases, the failure rate doesn’t change with time. In simpler words, the failure rate for a flat tire caused by a sharp object does not depend on the tire’s age.

Figure 1. Example 1: constant failure rate; (a) Failure rate vs time; (b) Failures vs time.

Figure 1b shows the number of failures over time. Data points shown in Figure 1b can be well fitted by an exponential distribution. Alternatively, the Weibull distribution with shape parameter β = 1 can be applied.
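A short Python sketch (my own illustration, not the author's code) regenerates the first rows of Table 1: with a constant failure rate, each interval removes the same fraction (10%) of the units still alive.

```python
lam, n = 0.1, 10000.0
cumulative = 0.0
for interval in range(1, 6):
    failures = lam * n            # constant 10% of current survivors
    cumulative += failures
    print(interval, round(n), round(failures), round(cumulative))
    n -= failures
# 1 10000 1000 1000
# 2 9000 900 1900
# 3 8100 810 2710
# 4 7290 729 3439
# 5 6561 656 4095
```

The survivors decay geometrically as $n_0 (1-\lambda)^t$, which is why the failure counts are well fitted by an exponential distribution.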

Example 2 – linearly increasing failure rate.

For the dataset in Table 2, I’ve assumed a linearly increasing failure rate ($\lambda_{i+1} = \lambda_i + 0.04$; $\lambda_0 = 0.04$).

Table 2. Example 2: increasing failure rate: $\lambda_{i+1} = \lambda_i + 0.04$; $\lambda_0 = 0.04$.

| Survivors at start of interval | Failure rate | Failures | Time interval | Failures (cumulated) | Failure probability |
|---|---|---|---|---|---|
| 10000 | 0.04 | 400 | 1 | 400 | 0.04 |
| 9600 | 0.08 | 768 | 2 | 1168 | 0.12 |
| 8832 | 0.12 | 1060 | 3 | 2228 | 0.22 |
| 7772 | 0.16 | 1244 | 4 | 3471 | 0.35 |
| 6529 | 0.20 | 1306 | 5 | 4777 | 0.48 |
| 5223 | 0.24 | 1253 | 6 | 6031 | 0.60 |
| 3969 | 0.28 | 1111 | 7 | 7142 | 0.71 |
| 2858 | 0.32 | 915 | 8 | 8057 | 0.81 |
| 1943 | 0.36 | 700 | 9 | 8756 | 0.88 |
| 1244 | 0.40 | 498 | 10 | 9254 | 0.93 |
| 746 | 0.44 | 328 | 11 | 9582 | 0.96 |
| 418 | 0.48 | 201 | 12 | 9783 | 0.98 |
| 217 | 0.52 | 113 | 13 | 9896 | 0.99 |
| 104 | 0.56 | 58 | 14 | 9954 | 1.00 |
| 46 | 0.60 | 28 | 15 | 9982 | 1.00 |
| 18 | 0.64 | 12 | 16 | 9993 | 1.00 |
| 7 | 0.68 | 4 | 17 | 9998 | 1.00 |
| 2 | 0.72 | 2 | 18 | 9999 | 1.00 |
| 1 | 0.76 | 0 | 19 | 10000 | 1.00 |
| 0 | 0.80 | 0 | 20 | 10000 | 1.00 |

Does this assumption apply to any failure-related event? Yes, if failure is caused by wear-out mechanisms. Wear-out mechanisms are time dependent: the older the product, the higher the number of failures in the same time interval. Examples of wear-out mechanisms are fatigue, wear, creep, and thermal ageing. Plotting failures over time results in a different shape than in example 1 (Figure 2b). Failures initially increase but, as the number of surviving items shrinks, they start to decrease until all items have failed.

In this case, positively skewed distributions such as the Weibull or lognormal distribution can be used to fit the failures.

Figure 2. Example 2: increasing failure rate; (a) Failure rate vs time; (b) Failures vs time.
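The same sketch with a linearly increasing rate reproduces the first rows of Table 2 (again my own illustration, not the author's code):

```python
lam, step = 0.04, 0.04
n, cum = 10000.0, 0.0
for interval in range(1, 5):
    failures = lam * n            # rate grows by 0.04 each interval
    cum += failures
    print(interval, round(n), round(failures), round(cum))
    n -= failures
    lam += step
# 1 10000 400 400
# 2 9600 768 1168
# 3 8832 1060 2228
# 4 7772 1244 3471
```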

Example 3 – linearly decreasing failure rate.

For the dataset in Table 3, I’ve assumed a linearly decreasing failure rate ($\lambda_{i+1} = \lambda_i - 0.001$; $\lambda_0 = 0.05$).

Table 3. Example 3: linearly decreasing failure rate: $\lambda_{i+1} = \lambda_i - 0.001$; $\lambda_0 = 0.05$.

| Survivors at start of interval | Failure rate | Failures | Time interval | Failures (cumulated) | Failure probability |
|---|---|---|---|---|---|
| 10000 | 0.050 | 500 | 1 | 500 | 0.05 |
| 9500 | 0.049 | 466 | 2 | 966 | 0.10 |
| 9035 | 0.048 | 434 | 3 | 1399 | 0.14 |
| 8601 | 0.047 | 404 | 4 | 1803 | 0.18 |
| 8197 | 0.046 | 377 | 5 | 2180 | 0.22 |
| 7820 | 0.045 | 352 | 6 | 2532 | 0.25 |
| 7468 | 0.044 | 329 | 7 | 2861 | 0.29 |
| 7139 | 0.043 | 307 | 8 | 3168 | 0.32 |
| 6832 | 0.042 | 287 | 9 | 3455 | 0.35 |
| 6545 | 0.041 | 268 | 10 | 3723 | 0.37 |
| 6277 | 0.040 | 251 | 11 | 3974 | 0.40 |
| 6026 | 0.039 | 235 | 12 | 4209 | 0.42 |
| 5791 | 0.038 | 220 | 13 | 4429 | 0.44 |
| 5571 | 0.037 | 206 | 14 | 4635 | 0.46 |
| 5365 | 0.036 | 193 | 15 | 4829 | 0.48 |
| 5171 | 0.035 | 181 | 16 | 5010 | 0.50 |
| 4990 | 0.034 | 170 | 17 | 5179 | 0.52 |
| 4821 | 0.033 | 159 | 18 | 5338 | 0.53 |
| 4662 | 0.032 | 149 | 19 | 5487 | 0.55 |

Failures decrease over time from the beginning. Does this assumption apply to any failure-related event? Yes: we might observe a decreasing failure rate when failures are caused by manufacturing-related issues. In such cases, products fail shortly after installation. Since they are only a fraction (hopefully!) of the entire population, as they fail, the chance of failure for the rest of the population decreases over time.

In this case, a Weibull distribution with shape parameter β < 1 can be used to fit the failures.

Figure 3. Example 3: decreasing failure rate; (a) Failure rate vs time; (b) Failures vs time.
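Table 3's rounded values happen to land on .5 ties (e.g. 465.5 failures in the second interval), where floating point is unreliable, so this sketch (my own illustration) uses exact fractions to reproduce the first rows:

```python
from fractions import Fraction as F

def half_up(x):
    """Round half up, matching the rounding used in Table 3."""
    return int(x + F(1, 2))

lam, n, cum = F(50, 1000), F(10000), F(0)
for interval in range(1, 4):
    failures = lam * n            # rate shrinks by 0.001 each interval
    cum += failures
    print(interval, half_up(n), float(lam), half_up(failures), half_up(cum))
    n -= failures
    lam -= F(1, 1000)
# 1 10000 0.05 500 500
# 2 9500 0.049 466 966
# 3 9035 0.048 434 1399
```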

Example 4 – constant number of failures over time.

In this last example, I’ve assumed that the number of failures is constant over time (failures = 500). 

Table 4. Example 4: constant number of failures over time. Failures = 500.

| Survivors at start of interval | Failure rate | Failures | Time interval | Failures (cumulated) | Failure probability |
|---|---|---|---|---|---|
| 10000 | 0.050 | 500 | 1 | 500 | 0.05 |
| 9500 | 0.053 | 500 | 2 | 1000 | 0.10 |
| 9000 | 0.056 | 500 | 3 | 1500 | 0.15 |
| 8500 | 0.059 | 500 | 4 | 2000 | 0.20 |
| 8000 | 0.063 | 500 | 5 | 2500 | 0.25 |
| 7500 | 0.067 | 500 | 6 | 3000 | 0.30 |
| 7000 | 0.071 | 500 | 7 | 3500 | 0.35 |
| 6500 | 0.077 | 500 | 8 | 4000 | 0.40 |
| 6000 | 0.083 | 500 | 9 | 4500 | 0.45 |
| 5500 | 0.091 | 500 | 10 | 5000 | 0.50 |
| 5000 | 0.100 | 500 | 11 | 5500 | 0.55 |
| 4500 | 0.111 | 500 | 12 | 6000 | 0.60 |
| 4000 | 0.125 | 500 | 13 | 6500 | 0.65 |
| 3500 | 0.143 | 500 | 14 | 7000 | 0.70 |
| 3000 | 0.167 | 500 | 15 | 7500 | 0.75 |
| 2500 | 0.200 | 500 | 16 | 8000 | 0.80 |
| 2000 | 0.250 | 500 | 17 | 8500 | 0.85 |
| 1500 | 0.333 | 500 | 18 | 9000 | 0.90 |
| 1000 | 0.500 | 500 | 19 | 9500 | 0.95 |

Although the number of survivors decreases over time, the number of failures doesn’t. Do we find something similar in nature? Honestly speaking, we don’t. For this reason, we won’t model the data shown in Figure 4.

Figure 4. Example 4: constant number of failures in each time interval; (a) Failure rate vs time; (b) Failures vs time.
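With a fixed 500 failures per interval, eq. 1 reduces to 500 / n(t), so the failure rate grows as the survivors shrink. A minimal sketch (my own illustration) reproduces the first rates of Table 4:

```python
survivors = 10000
for interval in range(1, 4):
    rate = 500 / survivors        # eq. 1 with 500 failures and dt = 1
    print(interval, survivors, round(rate, 3))
    survivors -= 500
# 1 10000 0.05
# 2 9500 0.053
# 3 9000 0.056
```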

Conclusions

We have seen that different fitting distributions apply to different failure-related phenomena. The exponential, Weibull, and lognormal distributions are the most commonly used because they can model failures in different but realistic situations.


About Enrico Belmonte

Responsible for developing and implementing the Design for Reliability (DfR) approach at Electrolux Group. In this role, he has developed a full-scale education program called the Reliability Excellence Program to empower reliability engineers to apply the DfR approach in the company.


Comments

  1. John says

    June 30, 2024 at 5:02 PM

    Thanks Enrico, but I’m a little confused here. Using your definitions for n(t): number of operational units at time t, and n0: number of operational units at time t = 0. Then eq2 (that is labelled as the Probability of Failure) should provide the Probability of Failure in Table 1 at row 1 as 0.9; row 2 as 0.81; row 3 as 0.73 etc.

    However, if we subtract these from 1 we get 0.1; 0.19; 0.27 etc. This implies that eq 2 is the Probability of Success if column 6 in Table 1 is the Probability of Failure. and as we are using the number of operational units at time t, we are stating the number of operational units that “succeeded”.

    In short, eq2 is the Probability of Success, not the Probability of Failure…at least imho.

    Reply
    • Enrico Belmonte says

      July 22, 2024 at 8:39 AM

      Dear John,
      Eq. 2 is the equation of reliability, R = n(t)/n0. It shows the number of samples that are still functioning after a certain time over the number of samples at time 0.
      Failure probability is defined as 1-R as you have correctly pointed out.
      I will correct the article accordingly.

      Thank-you.

      Enrico

      Reply


© 2025 FMS Reliability · Privacy Policy · Terms of Service · Cookies Policy