Histograms: When to Use Your Eyeballs, When Not

Looks can be deceiving

Tom Pyzdek
Published: Tuesday, March 29, 2011 - 05:30

Story update 3/29/2011: We corrected an error in the next-to-last sentence. "p < 0.05" was changed to "p > 0.05."

One of the exercises I assign to students in my training involves creating two histograms from normally distributed random numbers. The results often look similar to those shown in figure 1. When I ask students to comment on their histograms, I usually get comments about the averages, spread, and other statistical properties. However, that misses the point I’m trying to teach.

When we do Six Sigma, we usually spend a lot of time mining historical data from databases. Sometimes the sample sizes are large, and sometimes they can be quite small. In fact, even large sample sizes can become small when we slice and dice them, drilling down with various categories and subcategories in search of critical-to-quality data.

Statistical software will often automatically fit a normal curve to histograms created from these data. It’s often tempting to use the fitted curves to make an eyeball judgment about the normality of the data. Sometimes this is a good idea, and sometimes it isn’t. If the sample size is small, the curve may not appear to fit the data very well simply because of small-sample variation. Witness the top histogram in figure 1, which shows a curve fitted to a histogram from a sample of n = 20. The histogram looks like a poor fit, but the p-value of a normality test tells us that the fit is pretty good anyway. So we’re probably safe assuming normality and acting accordingly.

Fig. 1: Large and small samples of normally distributed data

The lower curve is fitted to a sample of n = 500 data values. It appears to be a much better fit, and the p-value will back this conclusion.

But what if the eyeballed curve fit and the p-value disagree? Sometimes the fit of the curve is "close enough," but the p-value will tell you that the fit is awful. Take a look at figure 2. The histogram suggests that the normal curve fits the data pretty well. There are many practical situations where you could use the normal distribution to make estimates, and your estimates would be just fine. These are data on the time it takes to complete technical support calls. If you assume normality and you estimate costs or make a decision about process acceptability, your decisions will be essentially correct.

Fig. 2: Decent fit but lousy p-value

However, the probability plot and Anderson-Darling goodness-of-fit statistic clearly show that the data are not normal and that the lack of fit is particularly poor in the tails (p < 0.005). A closer examination shows that even in the tail areas the discrepancies are fractions of a percent. For example, the normal distribution estimates that 99.9 percent of all calls will take less than 35 minutes to complete, while the data show about 99.5 percent. Chances are these differences are of little or no practical importance.
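For readers who want to reproduce the exercise behind figure 1, here is a minimal sketch in Python (NumPy and SciPy assumed available; the seed and distribution parameters are my own arbitrary choices, not the article's data). Note that SciPy's Anderson-Darling routine reports critical values rather than a p-value, so the sketch adds a Shapiro-Wilk test to get a direct p-value analogous to those quoted above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)  # arbitrary seed, for repeatability only

for n in (20, 500):
    sample = rng.normal(loc=100, scale=10, size=n)  # mean and sd are arbitrary

    # SciPy's Anderson-Darling routine reports the test statistic plus
    # critical values rather than a p-value; normality is rejected at a
    # significance level only if the statistic exceeds its critical value.
    ad = stats.anderson(sample, dist="norm")

    # Shapiro-Wilk yields a direct p-value, analogous to the p-values
    # quoted in the article (p > 0.05 means no evidence of non-normality).
    w, p = stats.shapiro(sample)

    print(f"n={n:3d}  A-D statistic={ad.statistic:.3f}  Shapiro-Wilk p={p:.3f}")

Running this repeatedly with different seeds reproduces the lesson of figure 1: the n = 20 histograms often look ragged even though the test finds no evidence against normality.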
The point is that in the business world, we often need to make decisions and then get on to other, more urgent matters. The normal distribution is a handy device for getting quick estimates that are useful for such decisions. If your sample size is relatively large (say 200 or more), then you can go with the normality assumption if the fitted curve looks reasonably good. On the other hand, if you only have a small amount of data, you can still use the normality assumption even if the histogram fit looks lousy, provided the p-value of the goodness-of-fit statistic says the normal curve is OK, i.e., if p > 0.05. The normality assumption is so useful that it's worth using as a default, even if you bend the rules a bit.
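Read as code, the article's rule of thumb might look like this minimal sketch. The 200-sample cutoff and alpha = 0.05 are the article's suggested values; the function name and signature are illustrative, not a standard API.

def normality_ok(n, curve_looks_good, p_value, alpha=0.05):
    """Rule of thumb from the article: with a large sample, trust the
    eyeball check of the fitted curve; with a small sample, trust the
    goodness-of-fit p-value instead."""
    if n >= 200:
        return curve_looks_good   # large sample: trust the eyeball check
    return p_value > alpha        # small sample: trust the goodness-of-fit test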
About The Author
Tom Pyzdek

Thomas Pyzdek’s career in business process improvement spans more than 50 years. He is the author of more than 50 copyrighted works, including The Six Sigma Handbook (McGraw-Hill, 2003). Through the Pyzdek Institute, he provides online certification and training in Six Sigma and Lean.
Comments
Use of p value in distribution decisions
When it comes to determining whether data are normally distributed, I prefer to do a distribution analysis to find the best-fit distribution, rather than just use the p-value of a normality test. One good explanation of the p-value's limitations for distribution decisions is available from Charles Annis' web page (http://www.statisticalengineering.com/goodness.htm). His note 1 is very educational:
"The Anderson-Darling test, does not tell you that you have a Normal density. It only tells you when the data make it unlikely that you do not. Engineers (and I'm one) hate this kind of statistical double-talk. But the fact remains: Any frequentist test is constructed to disprove something. Just as a dry sidewalk is evidence that it didn't rain, a wet sidewalk might be caused by rain or by the sprinkler system. So a wet sidewalk can't prove that it rained, while a not-wet one is evidence that it did not rain."
When I think of eyeballing as "good enough," it makes me think of Quality Level TCE: "That's Close Enough." It is in common usage, but it is often used when it should not be. The real issue is that the normal curve is only one of the three most commonly occurring curves in the real world: normal, uniform, and skewed (Weibull or beta). Assuming one over the others without pondering which really makes sense is just as negligent as "over-thinking" the distribution. It is also key to understand that nearly all measured outputs are multi-modal, as described in the total variance equation.
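A rough sketch of the "find the best fit" idea raised in these comments, comparing candidate families by Anderson-Darling statistic (lower = better fit among the candidates). The data and candidate list are illustrative only; SciPy's anderson() supports just a handful of families, so this is a stand-in for a full distribution analysis, not a replacement for one.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
data = rng.weibull(1.5, size=200) * 10  # stand-in skewed data, illustrative only

# Compare the Anderson-Darling statistic across candidate families;
# the smallest statistic marks the best-fitting candidate.
for family in ("norm", "expon", "logistic"):
    result = stats.anderson(data, dist=family)
    print(f"{family:9s} A-D statistic = {result.statistic:.3f}")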
Another Consideration
Great point about ignoring the curve fit by the software... we live in the real world. Our data come in histograms, not PDF/CDF curves. We don't get an infinite amount of noise-free data. My own belief is that we spend entirely too much time (in the Six Sigma world) worrying about normality: testing for it, torturing perfectly good and representative data sets through transformation, and doing other things that are often (as Don Wheeler says) "victories of computation over common sense."
It's also worth mentioning that testing the data from any histogram for normality is futile until you have some reason to believe that the data are homogeneous, i.e., that they come from one universe. When we use data in Six Sigma, they usually come from a process, with the intent to work on the system; that means we are usually conducting an analytic study. The best test for homogeneity, then, will be a control chart. Davis Balestracci illustrated this very clearly in "Data Sanity" several years ago. Don Wheeler spent several chapters in "The Six Sigma Practitioner's Guide to Data Analysis" on this issue. If you want to see a quick summary of Balestracci's work, I have one at my blog, http://woodsidequality.blogspot.com.
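For readers who want to try the homogeneity check this commenter describes, here is a minimal individuals (XmR) chart sketch. The data are illustrative; the limits use the standard 2.66 times mean-moving-range formula, but the variable names are my own.

import numpy as np

rng = np.random.default_rng(seed=3)
x = rng.normal(50, 5, size=60)     # stand-in process data, in time order

mr = np.abs(np.diff(x))            # moving ranges between consecutive points
center = x.mean()
ucl = center + 2.66 * mr.mean()    # 2.66 = 3/d2 for n=2; standard XmR constant
lcl = center - 2.66 * mr.mean()

outside = np.where((x > ucl) | (x < lcl))[0]
print(f"limits: [{lcl:.1f}, {ucl:.1f}]  points outside: {outside.tolist()}")

Points outside the limits suggest the data come from more than one universe, in which case a normality test on the pooled histogram is moot.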
P Value for Goodness of Fit
I might have missed the meaning of the following sentence in the last paragraph of your article: "On the other hand, if you only have a small amount of data, you can still use the normality assumption if the histogram fit looks lousy, providing the p-value of the goodness-of-fit statistic says the normal curve is OK, i.e., if p < 0.05." Did you intend to say that one could use the normality assumption if the p > 0.05?
"Anderson-Darling Normality Test: If the p-value is equal to or less than a specified alpha risk, there is evidence that the data does not folow a normal distribution" (Picar, p. 123). When the p-value is greater than the alpha value (in this case 0.05) the analysis suggests normally distributed data.
Reference:
Picar, D. (Ed.). (2002). Graphical analysis. The black belt memory jogger: A pocket guide for Six Sigma success. Salem, NH: GOAL/QPC.
You are correct, William
The next-to-last sentence should read:
"On the other hand, if you only have a small amount of data, you can still use the normality assumption if the histogram fit looks lousy, providing the p-value of the goodness-of-fit statistic says the normal curve is okay, i.e., if p > 0.05."
In other words, don't trust your eyeball judgment regarding the fitted curve if the sample size is small. I'll see if I can get QD to correct this typo.
Thomas Pyzdek
www.pyzdekinstitute.com
Histograms
I hope you mean P>=0.05 in your conclusion, not P<0.05.
Excellent article. I have been doing the same for years.
Histograms
More Six Sigma-based nonsense. Who gives a damn if the data is normally distributed or not? Control charts don't need normal data. The purpose of the histogram is to gain insight into the process.