Do you want easy demand forecasts, or do you want to learn and use the reliabilities of service parts to make demand forecasts and distribution estimates without sample uncertainty? Would you like to do something about service parts’ reliability? Would you like demand forecast distributions so you could set inventory policies to meet fill rate or service level requirements, without sample uncertainty and without life data? Don’t believe people who write that it can’t be done!
Time Series vs. Actuarial Forecasts
Time Series Analysis (TSA) extrapolates time series to forecast demands for any commodity, product, or even service part, based on past demand data [Box and Jenkins, Reilly, Levenbach, SAS, JMP, MathWorks, Mathematica, Apple 1990, and others]. To estimate the probability distribution of demands during lead time, bootstrap the time-series demand forecasts [Bühlmann 1969, Efron 1979]. Time-series forecasts are easier than estimating actuarial demand rates, making actuarial demand forecasts, and bootstrapping the distributions of demands for service parts. But TSA tells nothing about the force of mortality: the underlying age-specific reliability or failure rate functions of products or their service parts.
The “force of mortality” drives the demand for service parts, such as automotive aftermarket parts. The force of mortality is quantified as “actuarial” (demand) rate. See Wikipedia for life table [Ulpian 220 AD], insurance [Halley 1693], and last-century history of actuarial science [railroad safety 1920s, engines RAND and US AFLC, 1960s, credibility 1969, …].
There is value in installed base information and its ages for quantifying actuarial rates and forecasting demands for service parts [Van der Auweraer et al. and many others]. Entropy quantifies the information value in data. Entropy is −Σ p(t)·ln p(t), where p(t) comes from the empirical probability distribution of returns (TSA) or of actuarial demand hindcasts (forecasts of already-observed returns) computed from ships and returns data. Table 1 compares the entropy in past demands (service parts’ sales) vs. the entropy in ships (installed base by age) and service parts’ sales.
Table 1. Ships and service parts’ sales (returns) are the input data. Returns could come from ships of the current or any prior period.
Period t (months) | Ships | Returns | TSA Entropy | Actuarial Rate | Actuarial Entropy |
1 | 66 | 1 | -0.161 | 0.015 | -0.161 |
2 | 85 | 3 | -0.299 | 0.020 | -0.161 |
3 | 25 | 3 | 0.000 | 0.023 | -0.161 |
4 | 79 | 3 | 0.000 | 0.000 | -0.161 |
5 | 24 | 6 | -0.161 | 0.204 | -0.161 |
6 | 17 | 7 | -0.161 | 0.028 | -0.161 |
7 | 17 | 8 | -0.161 | 0.000 | -0.161 |
8 | 24 | 10 | -0.161 | 0.000 | -0.161 |
9 | 45 | 11 | -0.161 | 0.000 | -0.161 |
10 | 56 | 13 | -0.161 | 0.000 | -0.161 |
11 | 33 | 15 | -0.161 | 0.000 | -0.161 |
12 | 98 | 16 | -0.161 | 0.000 | -0.161 |
13 | 97 | 19 | -0.161 | 0.000 | -0.161 |
14 | 54 | 20 | -0.244 | 0.009 | -0.161 |
15 | 25 | 20 | 0.000 | 0.290 | -0.161 |
16 | 44 | 21 | -0.161 | 0.000 | -0.161 |
17 | 55 | 25 | -0.161 | 0.000 | -0.161 |
18 | 67 | 28 | -0.161 | 0.000 | -0.161 |
Entropy | | | 2.630 | | 2.890 |
The bottom-row entropies show that returns alone contain less information than ships and returns combined. TSA extrapolates and forecasts returns without regard to ships, the installed base, or its ages. The entropy column entries are p(t)·ln p(t) computed from the empirical probability distribution p(t) of returns (TSA) and of actuarial demand hindcasts (forecasts of already-observed returns), Σ a(t−s)·n(s); actuarial rates for dead-forever parts are a(t) = p(t)/(1 − Σ p(s)) for s < t.
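The table’s entropies can be reproduced from the empirical distribution of the data. Here is a minimal Python sketch (the function name is illustrative) that computes −Σ p·ln p from the returns column of Table 1:

```python
import math
from collections import Counter

def empirical_entropy(series):
    """Shannon entropy -sum p*ln(p) of a series' empirical distribution."""
    counts = Counter(series)
    n = len(series)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Returns column of Table 1, periods 1..18
returns = [1, 3, 3, 3, 6, 7, 8, 10, 11, 13, 15, 16, 19, 20, 20, 21, 25, 28]
print(round(empirical_entropy(returns), 3))  # 2.63, the TSA entropy in Table 1
```

The actuarial entropy, 2.890, is larger because the 18 hindcasts are all distinct, so each contributes −(1/18)·ln(1/18) ≈ 0.161.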
An actuarial forecast (or hindcast) is the convolution of ships and actuarial demand rates. It uses all the information in ships and returns data. TSA accounts for serial dependence among returns; actuarial hindcasts account for the installed base and returns over time. Table 2 compares actuarial and Excel FORECAST.ETS results. The actuarial forecast accounts for infant mortality and the 15th-month failure rate. FORECAST.LINEAR and FORECAST.ETS overestimate. (ETS stands for Exponential Triple Smoothing, the Holt-Winters method; it accounts for trend and quarterly seasonality.) The FORECAST.ETS.CONFINT function drastically underestimates, and the FORECAST.ETS.STAT function doesn’t help explain why. The small ETS smoothing factors indicate not much smoothing.
Table 2. Forecasts for period 19 from same ships and returns data as in table 1. The actuarial forecast uses ships data in addition to returns data, Alpha is the data smoothing factor, Beta is the trend smoothing factor, and Gamma is the seasonal change smoothing factor [Wikipedia on exponential smoothing].
Forecast | Confidence | Methods |
12.72 | | Average |
20.70 | | Average + 1 stdev |
18.42 | | Actuarial |
25.89 | | FORECAST.LINEAR |
31.26 | | FORECAST.ETS |
0.18 | 5% | FORECAST.ETS.CONFINT |
5.71 | 95% | FORECAST.ETS.CONFINT |
Name | Value | Methods |
Alpha | 0.001 | FORECAST.ETS.STAT |
Beta | 0.002 | FORECAST.ETS.STAT |
Gamma | 0.005 | FORECAST.ETS.STAT |
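The actuarial forecast’s convolution of ships and rates can be sketched in a few lines of Python. The rates below are hypothetical placeholders (a flat 1% per period of age), not the rates estimated in the post, so this does not reproduce the 18.42 in Table 2; it only shows the convolution mechanics:

```python
def actuarial_forecast(ships, rates, t):
    """Forecast demand in period t as the convolution sum over ship
    cohorts s of a(t - s) * n(s), where a is the actuarial demand
    rate by age and n(s) is ships in period s."""
    total = 0.0
    for s in range(1, min(t, len(ships)) + 1):
        age = t - s  # age of the cohort shipped in period s, at period t
        if 0 <= age < len(rates):
            total += rates[age] * ships[s - 1]
    return total

# Ships column of Table 1; rates are illustrative placeholders
ships = [66, 85, 25, 79, 24, 17, 17, 24, 45, 56, 33, 98, 97, 54, 25, 44, 55, 67]
rates = [0.01] * 19  # hypothetical flat 1%-per-period actuarial rate
print(actuarial_forecast(ships, rates, 19))  # ≈ 9.11 with these placeholder rates
```

With age-specific rates estimated from ships and returns, the same convolution produces the actuarial forecast for period 19.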
So I plugged the returns data into Mathematica TSA functions. The parameter entries in the first two rows of Table 3 are guesses. The last four forecasts agree tolerably with the actuarial forecast. The next step would be to apply TSA to the vector-valued series {ships, returns}. But why do TSA on the ships-and-returns vector if the actuarial forecast, 18.42, is the maximum-entropy forecast and the bootstrap gives its distribution? TSA still wouldn’t tell anything about reliability.
Table 3. Mathematica TimeSeriesForecast[.] function forecasts from returns data.
Forecast | Parameters | Process |
22.8 | (-2, 0.4, 0.6) | AR |
5.03 | (0.2,-0.1), (-0.2,0.3) | ARMA |
17.66 | TimeSeriesModelFit | AR |
17.66 | TimeSeriesModelFit | ARMA |
18.17 | TimeSeriesModelFit | MA |
17.90 | TimeSeriesModelFit | SARMA |
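For readers without Mathematica, a minimal pure-Python AR(1) sketch (lag-1 Yule-Walker estimate) fit to the same returns series shows the flavor of the Table 3 fits; it is a toy, not the TimeSeriesModelFit models:

```python
def ar1_forecast(x):
    """One-step AR(1) forecast using the lag-1 Yule-Walker estimate."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n                       # lag-0 autocovariance
    c1 = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / n  # lag-1
    phi = c1 / c0                                                  # AR(1) coefficient
    return mean + phi * (x[-1] - mean)

# Returns column of Table 1; its mean, 12.72, is the "Average" row of Table 2
returns = [1, 3, 3, 3, 6, 7, 8, 10, 11, 13, 15, 16, 19, 20, 20, 21, 25, 28]
print(round(ar1_forecast(returns), 2))
```

Like the other TSA fits, this forecast uses only returns; nothing about ships, installed base ages, or reliability enters the model.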
Claims and Counter-Claims
A TSA paper says: Vendors claim to provide solutions to sparse demand forecasts that incorporate conventional forecasting techniques or adaptations of them. Even though the average demand per period can be accurately predicted using standard statistical forecasts when demand is sparse, the entire distribution of lead time demand cannot be accurately estimated. They frequently produce inaccurate inputs for inventory control models, which has expensive repercussions.
The paper claims: Standard statistical forecasts cannot provide estimates of a sparse demand distribution during lead time.
FALSE. Estimate the distribution of the actuarial (age-specific) demand forecast Σ[d(t-s)*n(s), s=1,2,…,t+lead-time], where d(t-s) is the actuarial demand rate for age t-s, and n(s) is the installed base of age s. This demand distribution model is valid whether the actuarial demand rate estimate is for dead-forever parts or for parts’ demands from renewal processes. Its distribution is asymptotically normal according to the martingale CLT (Central Limit Theorem for least-squares estimates of actuarial demand rates d(t-s)). Bootstrap the actuarial demand rate d(t-s) estimates to estimate the actuarial demand distribution and the variance-covariance matrix of the normal approximation due to the actuarial demand rate estimates [https://fred-schenkelberg-project.prev01.rmkr.net/covariance-of-renewal-process-reliability-function-estimates-without-life-data/#more-510749].
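One way to turn estimated actuarial rates into a lead-time demand distribution is simulation. This is a minimal Monte Carlo sketch assuming Bernoulli failures at hypothetical age-specific rates; the post bootstraps the rate estimates themselves, while this simpler simulation only illustrates getting a stocking percentile from rates plus installed base:

```python
import random

def simulate_leadtime_demand(ships, rates, t, lead, trials=2000, seed=1):
    """Monte Carlo distribution of total demand over periods t..t+lead-1,
    treating each installed unit as an independent Bernoulli(rate at its age)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(trials):
        total = 0
        for period in range(t, t + lead):
            for s, n in enumerate(ships, start=1):
                age = period - s
                if 0 <= age < len(rates):
                    # binomial draw: n units of this age, each fails w.p. rates[age]
                    total += sum(rng.random() < rates[age] for _ in range(n))
        draws.append(total)
    return draws

ships = [66, 85, 25, 79]                       # small illustrative installed base
rates = [0.02, 0.03, 0.03, 0.02, 0.02, 0.01]   # hypothetical rates by age
d = simulate_leadtime_demand(ships, rates, t=5, lead=2)
d.sort()
print(d[int(0.95 * len(d))])  # 95th-percentile lead-time demand, for stock levels
```

The simulated distribution is approximately normal, consistent with the martingale CLT cited above; resampling the rate estimates themselves would add their estimation covariance.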
A TSA paper says: Reliability models suppose that the physical wear and tear of working parts can be used to predict the demand for service parts. This may be the case for some components with predictable usage, like engines or oil filters, but many components may need replacement at any time or wear out more quickly in extreme circumstances. When thousands of items are being used, reliability models place a significant data cost on the forecaster and the service organization. One would need to keep track of how many parts are in use, how many operating hours each part has endured, and how much wear occurred during those hours in order to forecast demand. Furthermore, without a reliability model relating part use, these data are useless.
The paper claims: The cost of data for reliability models is extremely high. One would have to track parts in the field, hours that each part has been used, and wear that occurred during those hours.
FALSE. Generally Accepted Accounting Principles (GAAP) require statistically sufficient data to make nonparametric, population estimates of reliability and actuarial failure rate functions. OEM product revenue encodes price × sales, and warranty returns, spares sales, and/or service costs encode returns counts. Some work is required to extract ships and returns counts from production, revenue, returns, warranty, spares, and service records! Periodic ships and returns counts are statistically sufficient to estimate product reliability and failure rate functions. If ships are products and returns are parts, then use bills-of-materials and gozinto theory to convert product installed base by age into parts’ installed base by age [Vázsonyi]. These estimates are population estimates and have no sample uncertainty, except for extrapolating failure rates beyond the oldest installed base and estimating the newest installed base in the forecast interval.
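A one-level gozinto conversion can be sketched in a few lines. The products, parts, and quantities below are hypothetical; real bills-of-materials are multi-level, which gozinto matrix algebra handles:

```python
# Hypothetical bill of materials: parts per unit of each product
bom = {
    "productA": {"filter": 2, "pump": 1},
    "productB": {"filter": 1, "belt": 3},
}

# Hypothetical product installed base by age (periods)
product_base = {
    "productA": [100, 120, 90],  # ages 1, 2, 3
    "productB": [50, 60, 70],
}

def parts_installed_base(bom, product_base):
    """One-level gozinto: parts' installed base by age = sum over products
    of (parts per product) * (product installed base by age)."""
    ages = max(len(v) for v in product_base.values())
    parts = {}
    for product, base in product_base.items():
        for part, qty in bom[product].items():
            row = parts.setdefault(part, [0] * ages)
            for age, n in enumerate(base):
                row[age] += qty * n
    return parts

parts = parts_installed_base(bom, product_base)
print(parts["filter"])  # [250, 300, 250]: 2 per productA plus 1 per productB
```

The parts’ installed base by age then feeds the actuarial rate estimates and forecasts described above.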
Reality Strikes Again, Although it Takes a While
A TSA paper claims: “Most service parts inventory managers do not have the expertise to make a reliability model for each part.”
PROBABLY TRUE. Triad Systems Corporation downloaded stores’ parts sales by part number and store zip code [US Patent 5765143, Sheldon, Leach, and Pisarsky, June 1998]. Triad’s New Products VP, an econometrician, bought vehicle registrations by zip code from R. L. Polk (VIN, year, make, model, engine, zip code). Econometricians believe in modeling automotive aftermarket parts’ sales with regression: part sales(t) = Σ b(s)·n(s), s = 1, 2, …, t, where n(s) is the installed base of age-s vehicles that use the part and b(s) are regression coefficients. The VP’s sales model had trouble with autocorrelation (no pun intended).
The regression coefficients b(s) are actuarial demand rates and can be estimated from ships and returns counts (Table 1). Triad’s “Electronic Parts Catalog” listed which parts, and how many, go into each vehicle. Gozinto theory computes parts’ installed base by age from vehicle registrations and ages [Vázsonyi]. Triad assigned vehicles to stores by Voronoi tessellation [https://en.wikipedia.org/wiki/Voronoi_diagram]. Maximum likelihood and least squares gave nonparametric estimates of parts’ actuarial demand rates from parts’ installed base and store sales. Regression extrapolated the oldest actuarial rates and their standard errors. The martingale central limit theorem and the bootstrap quantify demand distributions and the uncertainty in their estimates, to help recommend spares stocks. Triad’s forecasts and stock-level recommendations were evidently better than the alternatives, especially for older vehicles’ parts. Triad’s revenue and profits increased so much that a competitor bought Triad in a leveraged buyout in 1997. Later the resulting company was merged into www.epicor.com.
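Recovering age-specific rates from ships and sales can be sketched by forward substitution, because the convolution system sales(t) = Σ a(t−s+1)·n(s) is lower-triangular. The numbers below are made up for illustration; this is not Triad’s maximum-likelihood or least-squares method, and real, noisy data need constrained estimation:

```python
def solve_rates(ships, sales):
    """Forward substitution for age-specific rates a(1..T) from
    sales(t) = sum_{s=1..t} a(t-s+1) * n(s).  Assumes ships[0] > 0
    and noise-free sales; noisy data need constrained least squares."""
    rates = []  # rates[k] is a(k+1)
    for t in range(1, len(sales) + 1):
        # subtract demand contributed by cohorts s = 2..t at their known rates
        acc = sum(rates[t - s] * ships[s - 1] for s in range(2, t + 1))
        # remaining demand comes from the first cohort at age t
        rates.append((sales[t - 1] - acc) / ships[0])
    return rates

ships = [100, 50, 80]    # hypothetical installs per period
sales = [2.0, 2.0, 2.6]  # generated from true rates [0.02, 0.01, 0.005]
print(solve_rates(ships, sales))  # recovers approximately [0.02, 0.01, 0.005]
```

Each period adds one new equation and one new unknown rate, which is why periodic ships and returns counts are statistically sufficient.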
Triad’s successor laid off the New Products VP. A Triad employee who didn’t understand statistics got Triad’s successor to “partner” with a TSA company to forecast auto parts’ demands. Last year, that employee was finally fired for incompetence, BS, and failing to deliver on promises to customers. Before he was fired, he got Epicor to contract with Predii, an enterprise AI software company, …to apply AI “to capture part and service insights from vehicle repair events” [https://www.predii.com/about/].
Take your pick: TSA or Actuarial Statistics
Time Series Analysis can forecast service part demands; Apple Computer did that 33 years ago with the same TSA model for every service part. Bootstrapping can estimate the distribution of demand during lead time. Unfortunately, extrapolating time series and bootstrapping yield no information about the underlying force of mortality that drives demand. The same problem afflicts AI.
A few companies and government agencies collect samples of times-to-failure and survivors’ ages to estimate service parts’ field reliability and failure rate functions. IBM required Sequent Computer to do so when IBM bought Sequent. (After Sequent’s Jerry Ackaret learned how to do the same thing without collecting service parts’ life data!) The FAA requires tracking approximately 75 “fracture-critical aircraft parts by name, serial number, hours, and cycles”. The US Air Force Materiel Command tracks gas turbine engines and their major modules by hours and cycles, but not service parts.
Fortunately, GAAP (Generally Accepted Accounting Principles) requires statistically sufficient data to make nonparametric estimates of the age-specific field reliability and failure rate functions of products and their service parts, without life data. Periodic sales data (revenue = price × quantity sold) give product installed base by age. Bills-of-materials tell how many of each service part go into each product, so gozinto theory converts age-specific product installed base into age-specific service parts’ installed base. Periodic service costs, warranty claims, spares sales, etc. give parts’ returns (failure) counts. You may have to work to collect these data, but they are free, accountants could get in trouble for data errors, and they are population data, so the estimates have no sample error!
Want service parts’ demand forecasts driven by force of mortality? Want population instead of sample estimates? Want reality in reliability? Collect installed product base by age, bills-of-materials, and parts’ sales and do the population actuarial statistics, make nonparametric reliability and failure rate function estimates, make actuarial forecasts, and bootstrap demand distribution for recommending spares inventory and quantifying parts’ reliability!
References
Sarah Van der Auweraer and Robert Boute, “Forecasting spare part demand using service maintenance information,” International Journal of Production Economics, Volume 213, pp. 138-149, July 2019
Sarah Van der Auweraer, Sha Zhu, and Robert N. Boute, “The Value of Installed Base Information for Spare Part Inventory Control,” International Journal of Production Economics, vol. 239, issue C, 2021
George Box and Gwilym Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970
Hans Bühlmann, “Experience Rating and Credibility,” ASTIN Bulletin, vol. 4(3), pp. 199–207, 1967
Bradley Efron, “Bootstrap methods: Another look at the jackknife,” The Annals of Statistics, vol. 7(1), pp. 1–26, 1979, doi:10.1214/aos/1176344552
Hans Levenbach, “A New Way of Dealing with Forecast Accuracy for Intermittent Demand,” https://www.linkedin.com/pulse/new-way-monitor-accuracy-intermittent-demand-forecast-hans/, May 2020
David Reilly and Tom Reilly, “About AFS” [AutoBox], https://autobox.com, 1975
Andrew Vázsonyi, https://en.wikipedia.org/wiki/Andrew_V%C3%A1zsonyi