Steven Ouellette

Six Sigma

The Texas Sharpshooter Fallacy

There is real competitive danger to a one-size-fits-all approach to specifications

Published: Thursday, March 3, 2011 - 15:45

“Come and listen to a story ‘bout a man named Ned / a poor Texas Sharpshooter barely kept his family fed. Then one day he was shootin’ at his barn / and he came up with a plan to spin a silly yarn. ‘Specifications,’ he said, ‘making of… the easy way.’ ” What do a Texas sharpshooter and specifications have to do with each other? And what do you do when your humble author has an old TV show theme song stuck in his head? Let’s find out…

Long-time readers (of two months or so) will know that I found a website with logical fallacies all organized into a snazzy tree diagram.

“Geeky,” says you? Like a fox, says I.

This month I thought I would explore a fallacy that we see all the time in industry, and which coincidentally has the funniest non-Latin name of them all: The Texas Sharpshooter Fallacy. (The Latin ones are only funny if you are into Latin double-entendres… then they are hilarious. Trust me. Te audire non possum. Musa sapientum fixa est in aure.)

The story goes that a fellow in Texas had a bright idea—he would impress his friends with his shooting ability. The problem was that he wasn’t that good a shot. But he was a bit clever, and so he took his gun out, shot a bunch of holes in his barn (OK maybe not that clever), and then drew a bull’s-eye around where his shots happened to hit. He challenged his neighbors to see if they could hit the target as well as he did.

Now the fallacy is that his neighbors jump to the conclusion that he was actually shooting at that target, and that is why the shots are clustered around the bull’s-eye. In reality, the marksmanship is a figment of their imagination; the “target” is nothing more than a random cluster of events (and some paint).

How does this play out in industry? Two different ways come to mind.

The clustering fallacy

The way that is most true to the fallacy itself is how we ascribe causal links when we notice clustering in data. (You will note it is a close cousin of the regression fallacy I previously wrote about.) Just because there is a cluster of cancer cases in a geographic location does not by itself mean that something in that area caused cancer. Similarly, if I look at an existing data set and see that certain raw-material vendors seem to be associated with end-of-line defects, I won’t automatically assume that the vendors are the cause. It could be coincidence; even with a statistically significant measure of association, I can’t make a causal link with after-the-fact data. And without a statistical test, well, as I have said before, the human brain is great at noticing patterns, especially ones that are not real. After all, there is going to be a vendor with the highest defect rate, even if that rate is just due to chance and chance alone. That doesn’t mean that the vendors are off the hook, either; the association just identifies a great hypothesis to be tested in an experiment.
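To see how easily a “worst vendor” appears from chance alone, here is a minimal simulation sketch; the vendor count, sample size, and defect rate are invented for illustration, not taken from any real data set:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 8 vendors, all with the SAME true defect rate of 2%.
n_vendors = 8
parts_inspected = 500         # parts inspected per vendor
true_defect_rate = 0.02

n_trials = 1000
worst_rates = []
for _ in range(n_trials):
    # Observed defects for each vendor, all drawn from the same distribution
    defects = rng.binomial(parts_inspected, true_defect_rate, size=n_vendors)
    rates = defects / parts_inspected
    worst_rates.append(rates.max())

# Even though every vendor is identical, one of them always "looks" worst,
# and its observed rate sits noticeably above the true 2%.
print(f"True defect rate:                 {true_defect_rate:.3f}")
print(f"Average 'worst vendor' rate seen: {np.mean(worst_rates):.3f}")
```

The point of the sketch is not the particular numbers but the mechanism: somebody has to be at the top of the defect ranking, so the ranking by itself is only a hypothesis generator.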

Technoid note: An experiment is a manipulation of a process with a basis for comparison in order to observe the effects, and the only way you might get a fully known probability of Type I or Type II error associated with the causality of what you are testing. Nonexperimental studies like agreement, descriptive, or relational studies might eventually, through sheer weight of evidence, allow you to make claims about causality, but there is always an unquantifiable chance that you are wrong. With a true experiment, there is only a quantifiable chance that you are wrong. That’s better, right?
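As a rough illustration of what a “fully known” Type I error means, the sketch below (all numbers invented) runs many simulated randomized experiments on a process where the manipulation truly has no effect; a test at α = 0.05 flags a difference in roughly 5 percent of them, exactly the error rate chosen up front:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

alpha = 0.05
n_experiments = 5000
n_per_group = 30
false_alarms = 0

for _ in range(n_experiments):
    # Both groups come from the same process: the "treatment" does nothing.
    control = rng.normal(loc=40.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=40.0, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(control, treatment)
    if p_value < alpha:
        false_alarms += 1

# The observed false-alarm rate lands right around the alpha we chose.
print(f"Chosen Type I error rate:  {alpha:.3f}")
print(f"Observed false-alarm rate: {false_alarms / n_experiments:.3f}")
```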

Painting the target

The other case that the Texas Sharpshooter brings to mind is not, strictly speaking, the cognitive fallacy as described above. This is when a business offers a specification to its customers, but that spec is the painted target on the barn—i.e., it is where the process happens to run and is totally disconnected from what the customers may need.

Now I do understand the need for standard products. When I was working in the aluminum industry, we made a bunch of different alloys and tempers, and all of these had to meet the broad definitions documented in Aluminum Standards and Data. If we were making 6061-T651, you could be assured that the metal contained the correct elements; that it had been solution heat-treated, quenched, stretched, and aged; and that it ended up with mechanical properties within the ranges given for that alloy and temper. This is all well and good if the customers can design their processes around these standards.

But if you truly believe in Taguchi’s Loss Function (and you should by now if you have read “Are You Capable?”), you will know that the optimum specifications are based on customer (direct and end-user) needs, not just manufacturer needs. When we offer a specification to customers based on what we can make, rather than what they need, we are putting ourselves in an inherently vulnerable competitive position. The first competitor that comes in with something that happens to be closer to what the customer really wants or needs will get the business.

Consider the following totally made-up process. Let’s say that we make a high-volume component with an average strength of 40 ksi. Our process is in control and runs right on target with a normal distribution and a standard deviation of 1 ksi. (Yay us!) We decide to offer a lower spec of 35 ksi and an upper spec of 45 ksi because, well, that is where we run, plus some space to make mistakes. This spec was arrived at as a standard spec for this material without consideration of the customer’s needs (we painted a target around what we were making). We have Cp = Cpk = Cpm = 1.6667, so we are highly capable of meeting the spec we made up, too.
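For readers who want to check the arithmetic, here is a quick sketch of that capability calculation using the numbers above (mean 40 ksi, σ = 1 ksi, specs at 35 and 45, spec target at 40):

```python
import math

mu = 40.0      # process mean (ksi)
sigma = 1.0    # process standard deviation (ksi)
lsl, usl = 35.0, 45.0
target = 40.0  # the spec is centered on where the process happens to run

cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
# Cpm penalizes distance from the spec target; here the process sits right on it.
cpm = (usl - lsl) / (6 * math.sqrt(sigma**2 + (mu - target)**2))

print(f"Cp  = {cp:.4f}")   # 1.6667
print(f"Cpk = {cpk:.4f}")  # 1.6667
print(f"Cpm = {cpm:.4f}")  # 1.6667
```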

But those capability metrics assume you are using the right specification, and “right” is defined by your customer, not by you.

You have five customers who make the same thing and, if they only knew it, actually have five different targets for strength that would make their processes work optimally, due to differences in their assembly equipment. For simplicity’s sake, let’s assume that all the mitigation costs are the same and that the ideal specs for each customer are the same width, just shifted with the targets. That is probably not true for a real process, but it makes it easier to see the effect of the different targets alone. In the graph in figure 1, each customer has a different Taguchi Loss Function (the parabolas) that indicates how much money is lost by a part having a given strength. The minimum loss is at the customer’s target; the further off target, the greater the loss. The normal curve is our process.

Figure 1: Customers’ Taguchi Loss Function

Use the following formula to estimate the average loss per unit for each customer:

Average loss per unit = (Cx / Δ²) × [σ² + (μ − N)²]

Where Cx is the mitigation cost incurred by making a part right at the spec limit, Δ is the distance from the target to the lower and upper spec limits, σ is the standard deviation of the process, μ is the mean of the process, and N is the true target for each customer. With our assumptions, everything stays the same except N, and we find the information in figure 2:

Figure 2: Varying average per unit loss
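A table like figure 2 can be reproduced with a few lines of arithmetic. The process numbers below are the ones from the example; the $1 mitigation cost at the spec limit and the five customer targets are my own assumed values, chosen for illustration (they happen to reproduce the $0.04 and $0.68 per-unit losses discussed below):

```python
# Average Taguchi loss per unit: (Cx / delta**2) * (sigma**2 + (mu - N)**2)
mu = 40.0       # process mean (ksi)
sigma = 1.0     # process standard deviation (ksi)
delta = 5.0     # half-width of the spec (35 to 45 ksi)
cx = 1.00       # assumed mitigation cost ($) for a part right at a spec limit

# Assumed optimal targets for the five customers (ksi); only one sits at 40.
customer_targets = {"A": 36.0, "B": 38.0, "C": 40.0, "D": 42.0, "E": 44.0}

k = cx / delta**2
for customer, n_target in customer_targets.items():
    avg_loss = k * (sigma**2 + (mu - n_target)**2)
    print(f"Customer {customer}: target {n_target:4.1f} ksi -> "
          f"average loss ${avg_loss:.2f} per unit")
```

With these assumptions, the on-target customer loses about $0.04 per unit while the customers four ksi off target lose about $0.68 per unit, even though every part came from the same, highly “capable” process.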

What fascinates me about this is that these are five different customers using product from the exact same process to make the exact same thing, yet they have very different perceptions of how well it runs in their processes. They probably don’t know that they are losing that money (heck, they might not even know that they have an optimal target), but I guarantee you that the customers with an off-target nominal perceive that your product just doesn’t run as well as, say, the competitor’s product that happens to be closer to their target.

So put yourself in the place of your customers and ask, “Should we stay with this vendor?” If you knew that you were losing $0.68 per unit, and your competitor making the same thing was losing only $0.04, you would be a fool to keep buying from these yahoos.

We have seen the yahoos, and they are us!

How much cooler (read: profitable) would it be if your company was the one that could go in and offer your product or service right on target for what your customers need?

Of course, to do that, why, you will have to understand the customer’s process well enough that you can actually hit the target, rather than draw the target around what you hit. To do that, you probably need some good experimental design. Once you have that process knowledge and can control the mean, there are lean techniques that can make multiple-target processes economically viable (e.g., load leveling and the hard work that goes into making a process that can do it).

So there is a real competitive danger to a one-size-fits-all approach to specifications. Sure, it is easy to make yourself look good by painting a target around where you happen to hit, and your process capability metrics will make it look like you are doing a good job. The problem is that your customers’ quality experience is different, because their experience relates to what their processes need, not to the specifications you happen to hit. If you don’t recognize this, even your capability indices will give you a false sense of confidence.

Conclusion

There are two ways Ned the Texas Sharpshooter can end up shooting you in the foot: by showing you how to commit the clustering fallacy or by writing your specifications. Either way, let Ned (oh no, I am singing again) “move to Californee as the place he ought to be / and you stay right where you are in the land of industry. Making money. Drinking tea.”


About The Author


Steven Ouellette

Steven Ouellette is the Lead Projects Consultant in the Office for Performance Improvement at the University of Colorado, Boulder. He has extensive experience implementing the systems that allow companies and organizations to achieve performance excellence, as well as teaching master’s-level students the tools used in business performance excellence (BPE). He is the co-editor of Business Performance Excellence with Dr. Jeffrey Luftig. Ouellette earned his undergraduate degree in metallurgical and materials science engineering at the Colorado School of Mines and his Master of Engineering from the Lockheed-Martin Engineering Management Program at the University of Colorado, Boulder.