Why is improving quality so important? Why not spend our money on something else in the business? I know it seems a little odd to ask this, especially to readers of Quality Digest, but could those not initiated into the mysteries of the quality gurus be right? Is getting it “out the door” the only thing that matters? Or is there a pragmatic reason why we work so hard on improving quality? Give me a few moments of your time, and I think I can prove to you why making quality better makes you more money.
But first, let’s talk about a little thing called “Bayes’ theorem.” (You know me; I couldn’t pass up an opportunity to bring stats into the discussion.)
Now Bayes’ theorem is pretty simple in terms of probabilities but has far-reaching implications for those of us who live in reality (note that I specifically exclude most politicians from this clade). It is also almost never applicable in solving problems in industry, for reasons that will become obvious soon. That does not mean it is unimportant—the principle underlies how science actually works, even if it is not going to help you design an experiment in industry.
Bayes’ theorem looks like a bunch of gobbledygook:

P(A|B) = P(B|A) × P(A) / P(B)

In words: the probability of A given that B has occurred equals the probability of B given A, times the probability of A, divided by the probability of B.
And that is the last time I’ll refer to it that way because we only need to use simple probability to understand what is going on. (There are plenty of sources you can tap to learn more about Bayes’ theorem. Probably my favorite is this one, which really helps you grasp the implications.)
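To make the theorem concrete before we get to the factory, here is a minimal sketch in Python. The screening numbers are my own illustration, not from this article, but they have the same shape as the inspection problem coming up:

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical screening example: 1% prevalence, 90% detection,
# 5% false-positive rate.
p_a = 0.01                                  # P(condition)
p_b_given_a = 0.90                          # P(positive | condition)
p_b = p_b_given_a * p_a + 0.05 * (1 - p_a)  # P(positive), by total probability

posterior = bayes(p_b_given_a, p_a, p_b)
print(round(posterior, 3))  # 0.154 -- most positives come from the healthy majority
```

Even with a 90-percent-accurate test, a positive result means only about a 15-percent chance the condition is really there, because the healthy majority generates so many false alarms. Keep that pattern in mind.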
By way of illustration, and of making you more profit, let’s examine our initial question in this way: Should I spend more money on improving inspection or improving quality? This is the type of question that Bayes’ theorem excels at answering.
Let’s start by laying out our simulation. Let’s say we make a million units per year. However, we have a problem—we have a pretty high defective rate of about 10 percent. With that high of a defective rate, we had better have some 100-percent final inspection in place, and we do.
Now we know that inspection is not a perfect process most of the time. There is some chance that our inspectors will miss a defective. They are looking at a large number of parts, and while 10-percent defective is high from a business point of view, 1 in 10 is infrequent enough for a human to get distracted or bored enough to miss something. Make it a moderately subtle defect, and most people would miss more than they catch. But let’s give them the benefit of the doubt and say our inspectors are world-class, and if there is a defective, they will catch it 90 percent of the time. By the way, that 10 percent we miss used to be called “consumer’s risk.” Too bad for them.
However, missing a defective is not the only bad occurrence in inspection. We could also classify a perfectly good part as defective. For continuous measures with a lot of measurement noise, or for defects that have a “blurry line” between good and bad, this can be a substantial probability. But again, let’s say our inspectors are pretty good and only misclassify 5 percent of the good units they inspect as bad. This used to be called “producer’s risk.” Too bad for us.
That is all the information we need to do some calculations.
Each year we make a million units. Of those, 1,000,000 × 0.1 = 100,000 are really defective. Of those 100,000, we catch 100,000 × 0.9 = 90,000 and scrap them. The remaining 10,000 bad units make it to market. Of the 900,000 units that are good, we misclassify 900,000 × 0.05 = 45,000 units as bad and scrap them, too. Let’s summarize this in the table below:

                   Actually good   Actually defective       Total
  Passed (sold)          855,000               10,000     865,000
  Scrapped                45,000               90,000     135,000
  Total                  900,000              100,000   1,000,000
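The arithmetic above can be checked with a short Python sketch, using the detection and false-alarm rates assumed in the article:

```python
def inspect(n_bad, n_good, p_detect=0.90, p_false_alarm=0.05):
    """Expected counts from 100-percent inspection of a lot:
    (bad scrapped, bad shipped, good scrapped, good shipped)."""
    return (n_bad * p_detect,              # true positives: defectives caught
            n_bad * (1 - p_detect),        # false negatives: defectives shipped
            n_good * p_false_alarm,        # false positives: good units scrapped
            n_good * (1 - p_false_alarm))  # true negatives: good units sold

bad_scrapped, bad_shipped, good_scrapped, good_shipped = inspect(100_000, 900_000)
print(round(bad_scrapped), round(bad_shipped),
      round(good_scrapped), round(good_shipped))
# 90000 10000 45000 855000
```

Those are the four cells of the table above: 135,000 units scrapped in total, 10,000 defectives shipped.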
Now I want you to notice something here. We are scrapping a total of 135,000 units a year, but only 67 percent of them are really bad. If only we knew which ones they were…. Oh, and for those of you who want to reinspect all those scrapped units to capture that 33 percent that are good, remember that reinspection is also not a perfect process. For reinspection, you now have those 135,000 units with a 67-percent chance of a defective, with let’s say the same chances for scrapping a good piece or missing a bad piece.
That means that you will correctly detect 81,000 units as defective and scrap them (again), scrap 2,250 units that are perfectly good (again), find and sell 42,750 good units (hooray!), and send your customers 9,000 units that you flagged as bad the first time and that actually still are bad, nearly doubling the number of your defectives in the market. (That will look nice at the liability hearing, don’t you think?) This is the folly of 200-percent inspection. Don’t believe me yet? Keep reading.
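Here is the reinspection pass as a sketch, feeding the scrap pile (90,000 truly bad, 45,000 truly good) back through the same error rates:

```python
P_DETECT, P_FALSE_ALARM = 0.90, 0.05  # same inspector performance as before

bad, good = 90_000, 45_000  # contents of the 135,000-unit scrap pile

bad_scrapped_again = bad * P_DETECT          # defectives caught a second time
bad_shipped = bad * (1 - P_DETECT)           # defectives that ship after TWO inspections
good_scrapped_again = good * P_FALSE_ALARM   # good units scrapped twice
good_recovered = good * (1 - P_FALSE_ALARM)  # good units rescued and sold

print(round(bad_scrapped_again), round(good_scrapped_again),
      round(good_recovered), round(bad_shipped))
# 81000 2250 42750 9000
```

Those 9,000 escapees join the 10,000 that slipped through the first inspection, which is where the near-doubling of defectives in the field comes from.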
Well, I don’t know about you, but that is still a lot of numbers that I can’t get a handle on, so let’s put this into terms that even a manager can understand.
Let’s say that we make $1 profit on each item sold, that we lose $0.75 on each unit scrapped (bad or good), and that if a defective makes it to the market, it costs us $2 in warranty costs and lost customers. The last two are extremely generous, low-ball estimates of the losses. I would suspect that each unit scrapped actually costs more than the profit per unit sold, because you wasted all that capacity, time, and labor to make something that you are just throwing away instead of a unit you could sell. And $2 for selling a customer a defective unit? Way too low for today’s social media world, where one bad product experience gets communicated to hundreds or millions of other potential customers. But let’s low-ball it for now. Feel free to redo the calculations for your own more reasonable numbers. With these figures, our baseline losses come to:

  Units scrapped:               135,000 × $0.75 = $101,250
  Defectives sold (warranty):    10,000 × $2.00 =  $20,000
  Total loss per year:                            $121,250
Note that this does not include other quality costs like inspection costs, which would boost our losses higher by however much our inspection department costs.
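As a quick check, the baseline loss model works out like this (the labels are mine; the rates and dollar figures are the article’s):

```python
LOSS_PER_SCRAP = 0.75           # per unit scrapped, good or bad
LOSS_PER_DEFECTIVE_SOLD = 2.00  # warranty plus lost customers

scrapped = 90_000 + 45_000      # defectives caught + good units misclassified
defectives_shipped = 10_000     # defectives the inspectors missed

annual_loss = scrapped * LOSS_PER_SCRAP + defectives_shipped * LOSS_PER_DEFECTIVE_SOLD
print(annual_loss)  # 121250.0
```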
By the way, for those of you still unconvinced that 200-percent inspection is dumb, if we went ahead and reinspected those 135,000 using the same inspection process as production (i.e., same probabilities of errors, same losses):

  Good units recovered and sold:   42,750 × $1.00 = +$42,750.00
  Defectives sold (profit):         9,000 × $1.00 =  +$9,000.00
  Units scrapped again:            83,250 × $0.75 = −$62,437.50
  Warranty on defectives sold:      9,000 × $2.00 = −$18,000.00
  Net result of reinspection:                      −$28,687.50
Instead of recapturing some of the profits we lost due to incorrectly scrapping a lot of good stuff, we actually ended up losing about $30K plus whatever containment and inspection costs we incur. We lose money on each reinspected part. But we feel better because we got to sell another 51,750 units to the market, right? Oh, and don’t forget that liability judgment for nearly doubling our defective rate….
OK, so we agree that the business has some issues. How should managers spend their money to make it better? Let’s make it simple for them: They can spend the money to make inspection more accurate or to improve the quality of the process itself. Let’s make it really simple and say that for the same amount of money, we could cut the defective rate to 0.1 percent, or we could improve the ability to detect defectives to 99.9 percent. (Let’s keep the probability of scrapping good parts the same; it doesn’t change the conclusions anyway.)
I’ll mention as an aside that it is probably cheaper and easier to make a process better than it is to make inspection, particularly human inspection, that much better. But let’s ignore that in pursuit of numbers no one can disagree with.
What does our model tell us now? If we take our defective rate down to 0.1 percent:

  Truly defective units:                    1,000
  Defectives caught and scrapped:             900
  Defectives missed (sold):                   100
  Good units misclassified and scrapped:   49,950
  Loss: 50,850 × $0.75 + 100 × $2.00 = $38,337.50 per year
We reduce our losses from $121,250 to $38,337.50, less than a third of what we had before. That is $82,912.50 more profit per year. Not bad! At a 0.1-percent defective rate, we might even be able to move away from 100-percent inspection and save some of the nonvalue-added inspection costs.
You want to know something really funny about inspection under this reduced defectives scenario? If I am running at 0.1-percent defective rate and my inspectors still have a 90-percent chance of detecting a defect if it is there, and a 5-percent chance of saying there is a defective when there is not, can you guess what the probability is that any given scrapped unit is actually defective? You are not going to believe the answer, which is why I’ll give you a chance to guess in the comment section for this article. If you want to play, first tell us what you think it will be without doing any calculations—just a ballpark impression. Then, if you like, do some math and let us know what you find. Hint: Everything you need to do that calculation is in this article.
All right; reducing the defective rate is one path. How about spending that money to improve our inspection instead? If detection goes up to 99.9 percent while the defective rate stays at 10 percent:

  Defectives caught and scrapped:          99,900
  Defectives missed (sold):                   100
  Good units misclassified and scrapped:   45,000
  Loss: 144,900 × $0.75 + 100 × $2.00 = $108,875 per year
Waaait a second! The same effort and money were spent, but I only reduced my losses by $12,375. That probably didn’t even break even with the cost of improving my inspection process.
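A sketch that compares all three scenarios end to end (the function and parameter names are mine; the rates and dollar figures are the article’s):

```python
def annual_loss(p_defective, p_detect, p_false_alarm=0.05,
                n=1_000_000, scrap_cost=0.75, field_cost=2.00):
    """Expected annual loss: scrap cost on everything scrapped,
    plus field-failure cost on every defective that ships."""
    bad, good = n * p_defective, n * (1 - p_defective)
    scrapped = bad * p_detect + good * p_false_alarm
    shipped_bad = bad * (1 - p_detect)
    return scrapped * scrap_cost + shipped_bad * field_cost

baseline = annual_loss(0.10, 0.90)
better_process = annual_loss(0.001, 0.90)     # cut defectives to 0.1 percent
better_inspection = annual_loss(0.10, 0.999)  # detect 99.9 percent of defectives

print(round(baseline, 2), round(better_process, 2), round(better_inspection, 2))
# 121250.0 38337.5 108875.0
print(round(baseline - better_process, 2),     # saved by improving the process
      round(baseline - better_inspection, 2))  # saved by improving inspection
# 82912.5 12375.0
```

Same money, same effort: improving the process saves about $83K a year; improving inspection saves $12,375.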
Are you a little bit surprised at that? I mean, it sure seems like better inspection would be worth more than that piddling amount. The thing is, you are trying to fight against the Rev. Bayes, and let me tell you, even a long-dead minister/mathematician will whup your butt.
It boils down to the fact that trying to detect defectives that are already there is always going to incur more costs than not making them in the first place.
“Well, I knew that,” you say. OK, fine—you are a quality genius, yada yada yada. But if we know this already, why are managers just as interested, if not more so, in improving inspection as in improving the process?
You see, even having an inspection department is a failure of management that managers should consistently seek to eradicate. Inspection adds no value, and as we saw above, improvements to it have little benefit to the business. Managers should be embarrassed when forced to admit that they even do inspection, or—Deming forbid—spend money making inspection better, since that (should) tell everyone that they can’t do math.
Of course, that is not the way of the world, for the very real reason that our monkey brains don’t think statistically, and it seems that making inspection better really should be the right thing to do.
But ol’ Rev. Bayes would disagree, and if you thought Chuck Norris was a tough guy, wait until you are on the receiving end of a Bayes smackdown. (Speaking of which, did you get the probability that a scrapped part is defective under the reduced defectives scenario yet?)