Fred Schenkelberg

MTBF and Mean of Wearout Data

MTBF is just the mean, right?

Published: Tuesday, April 18, 2017 - 11:02

A conversation the other day involved how or why someone would use the mean of a set of data described by a Weibull distribution.

The Weibull distribution is great at describing a dataset that has a decreasing or increasing hazard rate over time. Using the distribution, we also do not need to determine the mean time between failures (MTBF), which is not all that useful anyway.
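For reference, the Weibull hazard function with shape β and scale η is:

\[ h(t) = \frac{\beta}{\eta} \left( \frac{t}{\eta} \right)^{\beta - 1} \]

It increases over time when β > 1, decreases when β < 1, and reduces to the constant-hazard exponential case when β = 1.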

Walking up the stairs today, I wondered if the arithmetic mean of the time-to-failure data, commonly used to estimate MTBF, is the same as the mean of the Weibull distribution. Doesn’t everyone think about such things?

So I thought I'd check: set up some data with an increasing failure rate, then calculate both the arithmetic mean and the Weibull distribution mean.

The data set

I opened R and, using the random number-generating function rweibull, created 50 data points from a Weibull distribution with a shape (β) of 7 and a scale (η) of 1,000.
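Here's roughly what that step looks like in R (the seed and the variable name ttf are my additions, so your exact values will vary):

    # generate 50 time-to-failure values from a Weibull(shape = 7, scale = 1000)
    set.seed(1)  # arbitrary seed, for reproducibility only
    ttf <- rweibull(50, shape = 7, scale = 1000)
    hist(ttf, main = "Histogram of time-to-failure data", xlab = "Time to failure")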

Here’s a histogram of the data.


Image: Histogram of 50 randomly generated time-to-failure data points

Calculating the mean two ways

Let’s say the randomly generated data are complete. No censoring, no replacements, etc. All 50 items ran for some amount of time and then failed. We could calculate the MTBF by tallying up all the time-to-failure data and dividing by the number of failures.

This is the arithmetic mean, the one we commonly use for all sorts of data summarization work.

Doing so, we find the mean is 951.1.
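In R, with the ttf vector from the snippet above, that is simply:

    # MTBF for complete (uncensored) data: total operating time / number of failures
    mtbf <- sum(ttf) / length(ttf)  # identical to mean(ttf)
    mtbf

(The 951.1 reflects one particular random draw; a different seed yields a slightly different sample mean.)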

Now, is the mean of the Weibull distribution the same or not?

According to Reliawiki's discussion of the Weibull distribution, the formula for its mean is:
\[ \bar{T} = \eta \cdot \Gamma\left( \frac{1}{\beta} + 1 \right) \]

OK, let’s calculate the Weibull mean, given the distribution has a β of 7 and η of 1,000. We find the Weibull mean is 935.4.
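In R, this is a one-liner with the built-in gamma function:

    # theoretical mean of a Weibull with shape (beta) 7 and scale (eta) 1000
    eta <- 1000
    beta <- 7
    eta * gamma(1 / beta + 1)  # approximately 935.4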

Comparison and an aha! moment

Since 935.4 ≠ 951.1, I might conclude the two ways of calculating the mean are not the same. Hmm, wait a minute. A set of random values drawn from a distribution is not necessarily best described by the generating distribution, especially for a small dataset.

So, let's check something. If I generate 50,000 data points from the same distribution as above, the data should follow the generating distribution very closely.

With 50,000 data points, the arithmetic mean is 935.0, which is very close to the Weibull mean, 935.4, based on the β and η of the random-generating function.
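A quick way to run that check (again, the exact output depends on the seed):

    # large-sample check: the arithmetic mean converges on the theoretical Weibull mean
    big_ttf <- rweibull(50000, shape = 7, scale = 1000)
    mean(big_ttf)  # lands very close to 935.4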

I now have to conclude that the mean calculated both ways is the same. Both determine the first moment of the dataset: its center of mass.
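In both cases we are estimating the same quantity, the first moment of the time-to-failure distribution, where f(t) is its probability density function:

\[ \bar{T} = \int_0^\infty t \, f(t) \, dt \]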

My initial error was not determining the distribution parameters from the data: I compared the arithmetic mean of a small sample to the mean of the generating distribution, rather than to the mean of a distribution fitted to those 50 points.
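To make the comparison properly, I should have estimated β and η from the 50 data points and computed the mean from those estimates. Here's a minimal sketch of that step, using fitdistr from the MASS package (my choice of tool; any maximum-likelihood Weibull fit would do):

    # fit Weibull parameters to the 50-point sample by maximum likelihood
    library(MASS)
    fit <- fitdistr(ttf, "weibull")
    shape_hat <- fit$estimate["shape"]
    scale_hat <- fit$estimate["scale"]
    scale_hat * gamma(1 / shape_hat + 1)  # tracks mean(ttf), not 935.4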

Summary

Question answered: Calculating MTBF (the mean) directly from the data and calculating it from the fitted distribution parameters give the same result.

That leaves the question of why anyone would want to calculate the mean of a set of time-to-failure data in the first place. I’ve been trying to convince you and everyone else not to bother doing so.

If you have a good reason to calculate the mean of a dataset with a clearly increasing hazard rate, leave a comment below. I need to check my assumption that the Weibull mean is not all that useful and not worth the effort to calculate by any method.


About The Author

Fred Schenkelberg

Fred Schenkelberg is an experienced reliability engineering and management consultant with his firm FMS Reliability. His passion is working with teams to create cost-effective reliability programs that solve problems, create durable and reliable products, increase customer satisfaction, and reduce warranty costs. Schenkelberg is developing the site Accendo Reliability, which provides you access to materials that focus on improving your ability to be an effective and influential reliability professional.

Comments

Your MTBF article

Choosing beta = 7 makes the distribution look more Gaussian. Suppose you were gathering failure data on operations to compare with what vendors claim. In the case of servers, the vendor might claim to use an exponential distribution with a given MTBF. There is a difference between their number and what you see in the operational environment. Next, suppose that your operational environment has 2,000 servers. How do you propose determining the MTBF?

Using a Weibull distribution seems to make sense to me, and I expect the result to be significantly different from the arithmetic mean for a couple of reasons: my sample size is less than the 50,000 you experimented with, and my beta is significantly lower than 7.

OK, so what is the value of the article beyond provoking comments?