
Steven Ouellette

Quality Insider

Bayes, Profit, and You

Why 200-percent inspection is a waste of time and money

Published: Monday, June 11, 2012 - 14:46

Why is improving quality so important? Why not spend our money on something else in the business? I know it seems a little odd to ask this, especially to readers of Quality Digest, but could those not initiated into the mysteries of the quality gurus be right? Is getting it “out the door” the only thing that matters? Or is there a pragmatic reason why we work so hard on improving quality? Give me a few moments of your time, and I think I can prove to you why making quality better makes you more money.

But first, let’s talk about a little thing called “Bayes’ theorem.” (You know me; I couldn’t pass up an opportunity to bring stats into the discussion.)

Now Bayes’ theorem is pretty simple in terms of probabilities but has far-reaching implications for those of us who live in reality (note that I specifically exclude most politicians from this clade). It is also almost never applicable in solving problems in industry, for reasons that will become obvious soon. That does not mean it is unimportant: the principle underlies how science actually works, even if it is not going to help you design an experiment in industry.

Bayes’ theorem looks like a bunch of gobbledygook:
P(A|B) = P(B|A) × P(A) / P(B)

And that is the last time I’ll refer to it that way because we only need to use simple probability to understand what is going on. (There are plenty of sources you can tap to learn more about Bayes’ theorem. Probably my favorite is this one, which really helps you grasp the implications.)
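If you like code better than notation, the theorem fits in a few lines of Python. This is just my own illustrative sketch; the function name and the example numbers are mine, not anything from the article or a real library.

```python
def posterior(prior, p_evidence_given_a, p_evidence_given_not_a):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B), with
    P(B) expanded as P(B|A) * P(A) + P(B|not A) * (1 - P(A))."""
    p_evidence = (p_evidence_given_a * prior
                  + p_evidence_given_not_a * (1.0 - prior))
    return p_evidence_given_a * prior / p_evidence

# Hypothetical example: a 1% prior and evidence that is right 95% of the time
# still leaves you with only about a 16% posterior probability.
print(posterior(0.01, 0.95, 0.05))
```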

By way of illustration, and of making you more profit, let’s examine our initial question in this way: Should I spend more money on improving inspection or improving quality? This is the type of question that Bayes’ theorem excels at answering.

Let’s start by laying out our simulation. Let’s say we make a million units per year. However, we have a problem—we have a pretty high defective rate of about 10 percent. With that high of a defective rate, we had better have some 100-percent final inspection in place, and we do.

Now we know that inspection is not a perfect process most of the time. There is some chance that our inspectors will miss a defective. They are looking at a large number of parts, and while 10-percent defective is high from a business point of view, 1 in 10 is infrequent enough for a human to get distracted or bored enough to miss something. Make it a moderately subtle defect, and most people would miss more than they catch. But let’s give them the benefit of the doubt and say our inspectors are world-class, and if there is a defective, they will catch it 90 percent of the time. By the way, that 10 percent we miss used to be called “consumer’s risk.” Too bad for them.

However, missing a defective is not the only bad occurrence in inspection. We could also classify a perfectly good part as defective. For continuous measures with a lot of measurement noise, or for defects that have a “blurry line” between good and bad, this can be a substantial probability. But again, let’s say our inspectors are pretty good and only misclassify 5 percent of the good units they inspect as bad. This used to be called “producer’s risk.” Too bad for us.

That is all the information we need to do some calculations.

Each year we make a million units. Of those, 1,000,000 × 0.1 = 100,000 are really defective. Of those 100,000, we catch 100,000 × 0.9 = 90,000 and scrap them. The remaining 10,000 bad units make it to market. Of the 900,000 units that are good, we misclassify 900,000 × 0.05 = 45,000 units as bad and scrap them, too. Let’s summarize this in the table below:

                   Passed inspection     Scrapped        Total
Truly good                   855,000       45,000      900,000
Truly defective               10,000       90,000      100,000
Total                        865,000      135,000    1,000,000
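For those who would rather check the arithmetic in code, here is a minimal Python sketch of that first inspection pass (the variable names are mine):

```python
units = 1_000_000
p_defective = 0.10     # true defective rate
p_detect = 0.90        # chance a real defective is caught
p_false_alarm = 0.05   # chance a good unit is scrapped by mistake

bad = units * p_defective                  # 100,000 truly defective
good = units - bad                         # 900,000 truly good

bad_scrapped = bad * p_detect              # 90,000 caught and scrapped
bad_shipped = bad - bad_scrapped           # 10,000 defectives reach the market
good_scrapped = good * p_false_alarm       # 45,000 good units scrapped by mistake
good_shipped = good - good_scrapped        # 855,000 good units sold

scrapped = bad_scrapped + good_scrapped    # 135,000 scrapped in total
# Share of scrapped units that are really bad -- exactly what Bayes' theorem
# gives for P(defective | scrapped): 90,000 / 135,000 = 2/3, or about 67%.
print(bad_scrapped / scrapped)
```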

Now I want you to notice something here. We are scrapping a total of 135,000 units a year, but only 67 percent of them are really bad. If only we knew which ones they were…. Oh, and for those of you who want to reinspect all those scrapped units to capture that 33 percent that are good, remember that reinspection is also not a perfect process. For reinspection, you now have those 135,000 units with a 67-percent chance of a defective, with let’s say the same chances for scrapping a good piece or missing a bad piece.

That means that you will correctly detect 81,405 units as defective and scrap them (again), scrap 2,228 units that are perfectly good (again), find and sell 42,323 good units (hooray!), and send your customers 9,045 units that you found were bad the first time, that actually still remain bad, effectively doubling the number of your defectives in the market. (That will look nice at the liability hearing, don’t you think?) This is the folly of 200-percent inspection. Don’t believe me yet? Keep reading.
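Here is the same kind of sketch for that second pass. The counts above follow from rounding the 67-percent figure before computing, so this sketch does the same (again, the variable names are my own):

```python
scrapped = 135_000
p_bad = 0.67            # P(defective | scrapped), rounded to 67%
p_detect = 0.90
p_false_alarm = 0.05

bad = scrapped * p_bad                    # 90,450 treated as truly defective
good = scrapped - bad                     # 44,550 treated as truly good

bad_rescrapped = bad * p_detect           # ~81,405 correctly scrapped again
bad_shipped = bad - bad_rescrapped        # ~9,045 defectives shipped after all
good_rescrapped = good * p_false_alarm    # ~2,228 good units scrapped a second time
good_recovered = good - good_rescrapped   # ~42,323 good units recovered and sold

print(bad_rescrapped, good_rescrapped, good_recovered, bad_shipped)
```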

Well, I don’t know about you, but that is still a lot of numbers that I can’t get a handle on, so let’s put this into terms that even a manager can understand.

Let’s say that we make $1 profit on each item sold, that we lose $0.75 on each unit scrapped (bad or good), and that if a defective makes it to the market, it costs us $2 in warranty costs and lost customers. The last two are extremely generous, low-ball estimates of the losses. I would suspect that each unit scrapped actually costs more than the profit per unit sold, because you wasted all that capacity, time, and labor to make something that you are just throwing away instead of a unit you could sell. And $2 for selling a customer a defective unit? Way too low for today’s social media world, where one bad product experience gets communicated to hundreds or even millions of other potential customers. But let’s low-ball it for now. Feel free to redo the calculations for your own more reasonable numbers.

And so:

Profit on 865,000 units sold at $1 each:        $865,000
Loss on 135,000 scrapped units at $0.75 each:  -$101,250
Loss on 10,000 escaped defectives at $2 each:   -$20,000
Total losses:                                   $121,250
Net income:                                     $743,750

Note that this does not include other quality costs like inspection costs, which would boost our losses higher by however much our inspection department costs.
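Since you are invited to redo the calculations with your own numbers, here is a rough, self-contained Python sketch of the whole baseline model to start from. The dollar figures are the low-ball assumptions above; the function and variable names are my own:

```python
PROFIT_PER_SOLD = 1.00      # profit on each unit sold
LOSS_PER_SCRAPPED = 0.75    # loss on each unit scrapped, good or bad
LOSS_PER_ESCAPE = 2.00      # warranty / lost-customer cost per defective shipped

def one_pass(units, p_defective, p_detect, p_false_alarm):
    """One inspection pass: returns scrap, escapes, losses, and net income."""
    bad = units * p_defective
    good = units - bad
    bad_scrapped = bad * p_detect             # defectives caught
    bad_shipped = bad - bad_scrapped          # defectives that escape to market
    good_scrapped = good * p_false_alarm      # good units scrapped by mistake
    good_shipped = good - good_scrapped       # good units sold

    scrapped = bad_scrapped + good_scrapped
    profit = (good_shipped + bad_shipped) * PROFIT_PER_SOLD
    losses = scrapped * LOSS_PER_SCRAPPED + bad_shipped * LOSS_PER_ESCAPE
    return {"scrapped": scrapped, "escapes": bad_shipped,
            "losses": losses, "net": profit - losses}

# Baseline: 10% defective, 90% detection, 5% false alarms.
print(one_pass(1_000_000, 0.10, 0.90, 0.05))
# ~135,000 scrapped, 10,000 escapes, losses of about $121,250, net about $743,750
```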

By the way, for those of you still unconvinced that 200-percent inspection is dumb, if we went ahead and reinspected those 135,000 using the same inspection process as production (i.e., same probabilities of errors, same losses):
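A sketch of what that looks like in dollars, using the second-pass counts from above and the same cost assumptions, and following the simplification acknowledged in the comments below (the scrap cost is charged again on anything scrapped twice):

```python
PROFIT_PER_SOLD = 1.00
LOSS_PER_SCRAPPED = 0.75
LOSS_PER_ESCAPE = 2.00

# Second-pass counts from the reinspection described above.
good_recovered = 42_323      # good units rescued from the scrap pile and sold
bad_shipped = 9_045          # defectives shipped after being caught the first time
rescrapped = 81_405 + 2_228  # units scrapped a second time

incremental = (
    (good_recovered + bad_shipped) * PROFIT_PER_SOLD  # extra units sold
    - rescrapped * LOSS_PER_SCRAPPED                  # scrap cost charged again
    - bad_shipped * LOSS_PER_ESCAPE                   # warranty on the new escapes
)
print(incremental)  # about -$29,447: the roughly $30K loss described next
```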

Instead of recapturing some of the profits we lost due to incorrectly scrapping a lot of good stuff, we actually ended up losing about $30K plus whatever containment and inspection costs we incur. We lose money on each reinspected part. But we feel better because we got to sell another 51,000 units to the market, right? Oh, and don’t forget that liability judgment for doubling our defective rate….

OK, so we agree that the business has some issues. How should managers spend their money to make it better? Let’s make it simple for them: They can spend the money to make inspection more accurate or to improve the quality of the process itself. Let’s make it really simple and say that for the same amount of money, we could cut the defective rate to 0.1 percent, or we could improve the ability to detect defectives to 99.9 percent. (Let’s keep the probability of scrapping good parts the same; it doesn’t change the conclusions anyway.)

I’ll mention as an aside that it is probably cheaper and easier to make a process better than it is to make inspection, particularly human inspection, that much better. But let’s ignore that in pursuit of numbers no one can disagree with.

What does our model tell us now? If we take our defective rate down to 0.1 percent:
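Here is the same model rerun with the defective rate cut to 0.1 percent (again just a sketch; the names are mine):

```python
LOSS_PER_SCRAPPED = 0.75
LOSS_PER_ESCAPE = 2.00

units, p_defective, p_detect, p_false_alarm = 1_000_000, 0.001, 0.90, 0.05

bad = units * p_defective                          # 1,000 truly defective
good = units - bad                                 # 999,000 truly good
scrapped = bad * p_detect + good * p_false_alarm   # 900 + 49,950 = 50,850 scrapped
escapes = bad * (1 - p_detect)                     # 100 defectives still shipped

losses = scrapped * LOSS_PER_SCRAPPED + escapes * LOSS_PER_ESCAPE
print(losses)   # roughly $38,000 ($38,337.50 here), versus $121,250 before
```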

We reduce our losses from $121,250 to $38,000, less than a third of what we had before. That is $83,250 more profit per year. Not bad! At a 0.1-percent defective rate, we might even be able to move away from 100-percent inspection and save some of the non-value-added inspection costs.

You want to know something really funny about inspection under this reduced defectives scenario? If I am running at 0.1-percent defective rate and my inspectors still have a 90-percent chance of detecting a defect if it is there, and a 5-percent chance of saying there is a defective when there is not, can you guess what the probability is that any given scrapped unit is actually defective? You are not going to believe the answer, which is why I’ll give you a chance to guess in the comment section for this article. If you want to play, first tell us what you think it will be without doing any calculations—just a ballpark impression. Then, if you like, do some math and let us know what you find. Hint: Everything you need to do that calculation is in this article.

All right; reducing the defective rate is one path. How about spending that money to improve our inspection?
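And here is the competing option: the same model with detection pushed to 99.9 percent and everything else left alone (a sketch, with names of my own choosing):

```python
LOSS_PER_SCRAPPED = 0.75
LOSS_PER_ESCAPE = 2.00

units, p_defective, p_detect, p_false_alarm = 1_000_000, 0.10, 0.999, 0.05

bad = units * p_defective                          # still 100,000 truly defective
good = units - bad
scrapped = bad * p_detect + good * p_false_alarm   # 99,900 + 45,000 = 144,900 scrapped
escapes = bad * (1 - p_detect)                     # only about 100 defectives escape

losses = scrapped * LOSS_PER_SCRAPPED + escapes * LOSS_PER_ESCAPE
print(losses)             # about $108,875
print(121_250 - losses)   # about $12,375 saved versus the baseline
```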

Waaait a second! The same effort and money was spent, but I only reduced my losses by $12,375. That probably didn’t even break even with the cost of improving my inspection process.

Are you a little bit surprised at that? I mean, it sure seems like better inspection would be worth more than that piddling amount. The thing is, you are trying to fight against the Rev. Bayes, and let me tell you even a long-dead minister/mathematician will whup your butt.

It boils down to the fact that trying to detect defectives that are already there is always going to incur more costs than not making them in the first place.

“Well, I knew that,” you say. OK, fine: you are a quality genius, yada yada yada. But if we know this already, why are managers just as interested in improving inspection as in improving the process, or even more so?

You see, even having an inspection department is a failure of management that managers should consistently seek to eradicate. Inspection adds no value, and as we saw above, improvements to it have little benefit to the business. Managers should be embarrassed when forced to admit that they even do inspection, or—Deming forbid—spend money making inspection better, since that (should) tell everyone that they can’t do math.

Of course, that is not the way of the world, for the very real reason that our monkey brains don’t think statistically, and it seems that making inspection better really should be the right thing to do.

But ol’ Rev. Bayes would disagree, and if you thought Chuck Norris was a tough guy, wait until you are on the receiving end of a Bayes smackdown. (Speaking of which, did you get the probability that a scrapped part is defective under the reduced defectives scenario yet?)


About The Author


Steven Ouellette

Steven Ouellette is the Lead Projects Consultant in the Office for Performance Improvement at the University of Colorado, Boulder. He has extensive experience implementing the systems that allow companies and organizations to achieve performance excellence, as well as teaching Master's-level students the tools used in BPE. He is the co-editor of Business Performance Excellence with Dr. Jeffrey Luftig. Ouellette earned his undergraduate degree in metallurgical and materials science engineering at the Colorado School of Mines and his Master of Engineering from the Lockheed-Martin Engineering Management Program at the University of Colorado, Boulder.

Comments

Hmm

While I agree with the spirit of your anti-reinspection argument, I would argue that with the numbers in this example, reinspection results in a gain of about 70K, not a loss of 30K.

Costs from units that fail both inspections are not incurred as a result of reinspection. The effect of the second inspection is to unscrap approximately 9000 defective units and 42000 good units. Unscrapping a good unit means you both remove a $0.75 scrap cost and gain a $1.00 profit for each unit. The losses due to reinspection are about ($2250) = 9000*($1.75-$2.00).

(My guess for the 0.1% defect p(defect|scrapped) was in the ballpark, but still too high.)

depends on perspective...and which accounting games to play

I also agree with the assessment of no 200% inspection. I actually lean much further, toward "no inspection."

 

As far as the analysis of profit/loss goes, both analyses so far are a bit off. Both are valid arithmetic, but not arranged correctly.

The first pass is pretty straightforward, and I think we all agree on getting to a "net income" of $743,750.

 

Now come the accounting games. One game is to take the incremental income, subtract the resulting cost, and get a ~$30K loss. While mathematically correct, this misses that the bad ones were getting scrapped anyhow and were already accounted for; we just added inspection cost to confirm those failures (a cost we've chosen to leave out... more on that later). The next game is to show the reinspection as the gross recoup of the good parts (the $1.75 above). While this is also mathematically correct, it neglects the $62,725 still being scrapped and the additional $18K in market losses; accounting for those results in a benefit of roughly a $65K increase in gross profit. See the analysis below.

 

Now to inspection costs: at the scales given, I would venture that $0.05 per part (including handling, storage, transportation to inspection, etc.) is a fair cost to inspect. That puts the cost to inspect the 1,000,000 units the first time around at $50K, and the subsequent 135,000 would add another $6,750. Adding these costs to the model makes the overall inspection essentially "neutral" to the bottom line (with the exception of that liability lawsuit for knowingly and purposefully introducing additional defects into the marketplace).

 

With the inspection cost "neutral" to the bottom line, the impact of increasing the effectiveness of inspection becomes almost impossible to define and justify, whereas the reduction of defects hits the bottom line directly (and doesn't incur the inspection costs!). This is pretty easy to demonstrate to a business. The other intangible effect of reducing defectives is to increase the capacity of the organization WHILE reducing costs, so any capacity-constrained company gets HUGE benefits through reduction of defectives.

 

The roadblock I’ve encountered is that the "quality experts" have traditionally been trained in defect recognition, not defect prevention, and thus see defect prevention as a threat to their livelihood. Changing the knowledge base and culture of a profession gets to be the significant challenge we face. And since the bean counters are driven by the arithmetic, it gets even harder to justify the change (see the analysis): the math shows it's not too bad, and might even look good! Getting REAL numbers for market losses (include those lawsuits, even if risk-based and weighted by a probability of occurrence!) and for things like market perception and goodwill to customers, all those intangibles that affect a buying decision, gets to be important.

 

The intuition for detecting .1% defective: pretty darn close to 0.

 

Analysis:

1,000,000 produced
900,000 good
100,000 truly defective
90,000 detected defective
45,000 detected defective but actually good
855,000 sold (10,000 of which are truly bad)
Gross profit: $855,000 - $2(10,000) = $835,000
Pending scrap: 135,000 units
Reinspection: 51,367 found good and sold: +~$51K to the $835K (~9,000 of them actually bad!)
Still bad: 83,633 (not much different from the original 90K, is it?) = ~$63K in scrap costs
Bad released to market: ~9K = -$18K from the total
Total after 200% inspection: $835K + $51K - $63K - $18K = ~$805K, versus the ~$744K of 100% inspection only
Presented this way, 200% inspection LOOKS good for the business, to the tune of about $65K
Add inspection costs: 1,000,000 × $0.05 for the first pass + 135,000 × $0.05 for the second pass = ~$57K... still looks "not bad," as $57K is less than $65K...

 

Fun fun fun!

Yeah, in simplifying it I probably went too far. My oversimplification was that scrap costs incurred on the first go-through are also incurred on the second go-through. I don't think this is unreasonable (costs associated with containment, inventory, lost productivity, etc. could very well equal the "throw it away" costs), but I should have explicitly spelled that out and modeled it. I did purposefully leave off inspection costs, so I am glad RAJOHNSON put them in. If the process is more fully modeled, the costs of inspection become truly astronomical. Hmm, maybe I'll expand on the idea and give more detail in another article. Might be too numbers-heavy, though. I'll ponder. It would also be fun to build a continuous process and add in inspection and opportunity costs and Taguchi losses...