
## Histograms: When to Use Your Eyeballs, When Not

### Looks can be deceiving

Published: Tuesday, March 29, 2011 - 05:30

Story update 3/29/2011: We corrected an error in the next to last sentence. "p < 0.05" was changed to "p > 0.05."

One of the exercises I assign to students in my training involves creating two histograms from normally distributed random numbers. The results often look similar to those shown in figure 1. When I ask students to comment on their histograms, I usually get comments about the averages, spread, and other statistical properties. However, that misses the point I’m trying to teach.

When we do Six Sigma, we usually spend a lot of time mining historical data from databases. Sometimes the sample sizes are large, and sometimes they can be quite small. In fact, even large sample sizes can become small when we slice-and-dice them, drilling down with various categories and subcategories in search of critical-to-quality data. Statistical software will often automatically fit a normal curve to histograms created from these data. It’s often tempting to use the fitted curves to make an eyeball judgment about the normality of the data. Sometimes this is a good idea, and sometimes it isn’t. If the sample sizes are small, then the curve may not appear to fit the data very well simply because of small sample variation. Witness the top histogram in figure 1 for an example of a curve fitted to a histogram from a sample size of n = 20. The histogram looks like a poor fit, but the p-value of a normality test tells us that the fit is pretty good anyway. So we’re probably safe assuming normality and acting accordingly.

Fig. 1: Large and small samples of normally distributed data

The lower curve is fitted to a sample of n = 500 data values. It appears to be a much better fit, and the p-value will back this conclusion. But what if the eyeballed curve fit and the p-value disagree?
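The exercise described above can be reproduced in a few lines. The sketch below is a rough Python equivalent, not the author's Minitab workflow: it draws a small (n = 20) and a large (n = 500) sample from the same normal distribution and runs a formal normality test on each. The seed, the distribution parameters, and the choice of the Shapiro-Wilk test are all illustrative assumptions.

```python
# Draw a small and a large sample from the same normal distribution and
# test each for normality. With truly normal data, the p-value usually
# exceeds 0.05 in both cases, even when the small sample's histogram
# looks ragged -- which is the point of the exercise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # fixed seed so the example is repeatable

small = rng.normal(loc=100, scale=10, size=20)
large = rng.normal(loc=100, scale=10, size=500)

for name, sample in [("n = 20", small), ("n = 500", large)]:
    stat, p = stats.shapiro(sample)
    verdict = "consistent with normal" if p > 0.05 else "reject normality"
    print(f"{name}: Shapiro-Wilk p = {p:.3f} -> {verdict}")
```

Plotting each sample as a histogram with a fitted curve (as the students do) will typically show exactly the contrast in figure 1: a ragged-looking small-sample fit alongside a clean large-sample fit, with both tests agreeing the data are normal.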

Sometimes the fit of the curve is “close enough,” but the p-value will tell you that the fit is awful. Take a look at figure 2, which shows data on the time it takes to complete technical support calls. The histogram suggests that the normal curve fits the data pretty well, and in many practical situations you could use the normal distribution to make estimates and your estimates would be just fine. If you assume normality and estimate costs or make a decision about process acceptability, your decisions will be essentially correct.

Fig. 2: Decent fit but lousy p-value

However, the probability plot and Anderson-Darling goodness-of-fit statistic clearly show that the data are not normal and that the lack of fit is particularly poor in the tails (p < 0.005). A closer examination shows that even in the tail areas the discrepancies are fractions of a percent. For example, the normal distribution estimates that 99.9 percent of all calls will take less than 35 minutes to complete, while the data show about 99.5 percent. Chances are these differences are of little or no practical importance.
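The tail comparison above can be checked numerically. The sketch below uses made-up, mildly skewed gamma-distributed "call times" as a stand-in for the article's data (the distribution, parameters, and 35-minute threshold are assumptions): it fits a normal curve by mean and standard deviation, then compares the fitted-curve tail estimate against what the data actually show.

```python
# A normal fit can fail a formal goodness-of-fit test on skewed data yet
# still give tail estimates within a fraction of a percent of the data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
calls = rng.gamma(shape=9.0, scale=2.0, size=5000)  # simulated call times, minutes

mu, sigma = calls.mean(), calls.std(ddof=1)
threshold = 35.0  # minutes

normal_est = stats.norm.cdf(threshold, loc=mu, scale=sigma)  # fitted-curve estimate
empirical = (calls < threshold).mean()                       # what the data say

print(f"P(call < {threshold:.0f} min): normal fit {normal_est:.4f}, "
      f"data {empirical:.4f}, gap {abs(normal_est - empirical):.4f}")
```

An Anderson-Darling test would likely reject normality for a skewed sample this large, yet the gap between the two tail estimates is typically a fraction of a percent, mirroring the 99.9 versus 99.5 percent comparison above.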

The point is that in the business world, we often need to make decisions and then get on to other, more urgent matters. The normal distribution is a handy device for getting quick estimates that are useful for such decisions. If your sample size is relatively large (say 200 or more), then you can go with the normality assumption if the fitted curve looks reasonably good. On the other hand, if you only have a small amount of data, you can still use the normality assumption even if the histogram fit looks lousy, provided the p-value of the goodness-of-fit statistic says the normal curve is OK, i.e., if p > 0.05. The normality assumption is so useful that it's worth using as a default, even if you bend the rules a bit.
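The rule of thumb above reduces to a small decision function. In this sketch, the sample-size cutoff of 200 and the p > 0.05 threshold come straight from the text; the function name and the visual-fit flag are our own framing.

```python
# A minimal sketch of the article's rule of thumb for assuming normality.
def assume_normal(n, p_value, visual_fit_ok):
    """Return True if the rule of thumb lets us assume normality."""
    if n >= 200:
        # Large sample: trust the eyeball judgment of the fitted curve,
        # even when a formal test rejects (as in figure 2).
        return visual_fit_ok
    # Small sample: the histogram may look lousy by chance, so defer to
    # the goodness-of-fit p-value instead.
    return p_value > 0.05

print(assume_normal(n=500, p_value=0.003, visual_fit_ok=True))  # -> True
print(assume_normal(n=20, p_value=0.40, visual_fit_ok=False))   # -> True
print(assume_normal(n=20, p_value=0.01, visual_fit_ok=False))   # -> False
```

The three calls correspond to the three cases in the article: a large sample whose curve looks fine despite a lousy p-value, a small sample whose ragged histogram is excused by the test, and a small sample where the test itself rejects.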

### Tom Pyzdek

Thomas Pyzdek’s career in business process improvement spans more than 50 years. He is the author of more than 50 copyrighted works, including The Six Sigma Handbook (McGraw-Hill, 2003). Through the Pyzdek Institute, he provides online certification and training in Six Sigma and Lean.