Fred Schenkelberg

Operations

A Make-Believe World of Random Failure Rates

The effect of assuming exponential distribution for ‘ease of analysis’

Published: Tuesday, May 3, 2016 - 12:25

What if all failures occurred truly randomly? Well, for one thing the math would be easier.

The exponential distribution would be the only time-to-failure distribution; we wouldn’t need Weibull or other complex multi-parameter models. Knowing the failure rate for a single hour would tell us everything we need to know over any time frame.
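
To make that concrete, here’s a minimal sketch in Python (the failure rate is purely illustrative) of how a single constant hourly failure rate would determine the chance of survival over any time frame:

import math

failure_rate = 1e-4  # assumed constant chance of failure per hour (purely illustrative)

def reliability(hours, lam=failure_rate):
    # Probability of surviving 'hours' of operation when failures occur at a constant rate
    return math.exp(-lam * hours)

print(reliability(1))      # chance of surviving the first hour
print(reliability(8760))   # chance of surviving a full year (8,760 hours)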

Sample size and test planning would be simpler. Just run the samples at hand long enough to accumulate enough hours to provide a reasonable estimate for the failure rate.
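
In fact, the estimate would reduce to total failures divided by total accumulated unit-hours, as in this hypothetical sketch:

# Hypothetical test: samples accumulate 20,000 unit-hours with 3 failures observed
total_unit_hours = 20_000
failures = 3

lambda_hat = failures / total_unit_hours   # estimated failure rate per hour
mtbf_hat = 1 / lambda_hat                  # estimated mean time between failures, in hours
print(lambda_hat, mtbf_hat)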

Would the design process change?

Yes, I suppose it would. The effects of early life and wear-out would not exist. Once a product is placed into service, the chance of failing in the first hour would be the same as in any other hour of its operation. It would fail eventually, and the chance of failing before a year would depend solely on the chance of failure per hour.

A higher failure rate would simply mean a lower chance of surviving for very long.
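
A quick illustration, using two hypothetical hourly failure rates: the chance of failing within the first year follows directly from the hourly rate, and the higher rate gives a noticeably lower chance of making it that far.

import math

def prob_fail_within(hours, lam):
    # Chance of failing within 'hours', given a constant failure rate 'lam' per hour
    return 1 - math.exp(-lam * hours)

hours_per_year = 8760
for lam in (1e-5, 1e-4):   # two assumed hourly failure rates, for comparison
    print(lam, prob_fail_within(hours_per_year, lam))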

Would a warranty make sense?

Since, by design, we couldn’t create a product with a distinctly low initial failure rate, we could focus only on the overall failure rate, or the chance of failing during any given hour. The first hour would be convenient and easy to test, yet just as meaningful as any other hour. Any single failure in a customer’s hands could occur at any time and would not, by itself, suggest that the failure rate had changed.

A warranty might make sense for customer satisfaction—maybe we could estimate the number of failures over a time period, set aside funds for warranty expenses, and at the same time encourage our design team to create products with a lower failure rate per hour. Maybe a warranty would make sense.
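
As a rough sketch, with every number invented for illustration, the expected count of warranty claims would follow directly from the constant rate, making the reserve straightforward to budget:

import math

units_shipped = 10_000        # hypothetical shipment volume
warranty_hours = 8760         # one-year warranty period
lam = 1e-5                    # assumed constant failure rate per hour
cost_per_claim = 150.0        # assumed average cost of a warranty claim

expected_failures = units_shipped * (1 - math.exp(-lam * warranty_hours))
warranty_reserve = expected_failures * cost_per_claim
print(round(expected_failures), round(warranty_reserve, 2))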

How about maintenance?

If there are no wear-out mechanisms (don’t forget, this is a make-believe world), changing the oil in your car wouldn’t make any economic sense. The existing oil would have the same chance of causing an engine seizure as fresh oil. The lubricant doesn’t break down. Seals don’t leak. Metal-on-metal movement doesn’t cause damaging heat or abrasion.

You might still have to replace a car tire because of a nail puncture, but an accident due to worn tread would be no more likely than with brand-new tires. We wouldn’t need to monitor tire tread or brake-pad wear. That kind of wear simply wouldn’t happen.

If a motor in your operation is running now, and if we know the failure rate, we can calculate the chance of it running for the rest of the current shift, even when the motor is as old as the building.
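
The arithmetic wouldn’t care about the motor’s age at all; a minimal sketch, assuming a failure rate we happen to know:

import math

lam = 2e-4        # assumed constant failure rate per hour for this motor
hours_left = 4    # hours remaining in the current shift

# Memoryless property: the chance of surviving the rest of the shift is the same
# whether the motor was installed last week or when the building went up.
print(math.exp(-lam * hours_left))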

The concepts of reliability-centered maintenance or predictive maintenance or even preventive maintenance wouldn’t make sense. There would be no advantage to swapping a part for a new one because the chance to fail would remain the same.

Physics of failure (PoF) and prognostic health management (PHM)—would they make sense?

Understanding failure mechanisms so we could reduce the chance of failure would remain important. Yet when failures don’t stem from accumulated damage, drift, wear, abrasion, diffusion, degradation, and the like, many of the predictive powers of PoF and PHM wouldn’t be relevant. We wouldn’t need sensors to monitor the conditions that lead to failure, because nothing would give an indication before a failure occurred; any such warning would imply that the chance of failure had changed.

No more tune-ups or inspections; we would pursue repairs when a failure occurs and not before.

A world of random failures, or a world of failures that occur at a constant rate, would be quite different from the real world. So why do we make this assumption so often? The real world simply doesn’t behave along the lines of a constant failure rate, so does it make sense to assume it does for any form of reliability or quality analysis?

In this column, I’ve tried to paint a tongue-in-cheek picture of a world markedly different from the one we actually live in and experience, in order to highlight the impact of assuming an exponential distribution for “ease of analysis,” or for any other reason. In the end, never simply assume a constant failure rate.


About The Author


Fred Schenkelberg

Fred Schenkelberg is an experienced reliability engineering and management consultant with his firm FMS Reliability. His passion is working with teams to create cost-effective reliability programs that solve problems, create durable and reliable products, increase customer satisfaction, and reduce warranty costs. Schenkelberg is developing the site Accendo Reliability, which provides you access to materials that focus on improving your ability to be an effective and influential reliability professional.

Comments

Keeping this one!

An excellent article. I'm keeping this, not necessarily for the reliability content but for all the things about which we make "easy" assumptions. Thanks!