Davis Balestracci

Six Sigma

Four Control Chart Myths from Foolish Experts

Don’t teach people statistics—teach them to solve problems

Published: Wednesday, March 30, 2011 - 10:00

There are four statements about control charts that are myths and that, in my experience, just refuse to die. The next time you're sitting in a seminar and someone tries to teach you how to transform data to make them normally distributed, or at any point during the seminar says “normal distribution” twice within 30 seconds, leave. You’ve got better things to do with your time.

The four myths

When you attend statistical seminars, do some statistical calculations seem like this? (2 minutes)

Are you “taught” these four things about control charts?
1. Data must be normally distributed before they can be placed on a control chart.
2. Control charts work because of the central limit theorem.
3. Data must be in control before you can plot them on a control chart.
4. Three standard deviation limits are too conservative.

April fool!

If you happen to be in a seminar where someone tries to teach this nonsense, invite them to click this link. It is a complimentary brain scan that will assess what “color” belt a person’s knowledge warrants. (Take it yourself; it’s a hoot.)

And then, just to make sure they’re certified as claimed, give them the “Certification Activity Book” obtainable through the tab in the left margin of that page.

I’ll leave it to your judgment whether you say “April fool” or not.

OK, time to get serious for a few minutes

As I like to say, the I-chart (a control chart for individual values, i.e., not subgrouped) is the Swiss army knife of control charts. During the early 1990s, statistical process control methods were the favored tool for medical quality improvement. When I used it, I came up against a lot of resistance from the entrenched “randomized double-blind clinical trial” cultural mindset that had been the norm in medicine—and provided the perfect smoke screen not to change.

Let’s consider these four myths in greater detail. (The first three are courtesy of Donald J. Wheeler, Ph.D., in a column 15 years ago.)

Myth No. 1: Data must be normally distributed before they can be placed on a control chart.
Reality: Although the control chart constants were created under the assumption of normally distributed data, the control chart technique is essentially insensitive to this assumption. The normality of the data is neither a prerequisite nor a consequence of statistical control.

Myth No. 2: Control charts work because of the central limit theorem.
Reality: The central limit theorem does indeed apply to subgroup averages. Because many statistical techniques use the central limit theorem, it’s only natural to assume that it’s the basis of the control chart. Ready for a shocker? I don’t even teach it.

It does have some justification in the case of X-bar-R and X-bar-S charts, but, especially in manufacturing, people usually miss the point and superimpose specification limits on the chart. (Wrong!) As I’ve said, these are rarely used in medicine because one generally does not have the luxury of subgrouping.

Actually, the central limit theorem is pretty much irrelevant to the I-chart. This myth has been one of the greatest barriers to the effective use of the I-chart with management and service-industry data, where data obtained one value per time period are the norm.

Believing this myth to be true and having no doubt endured a lengthy lecture or demonstration of the central limit theorem, people feel compelled to average something to make use of it. As Wheeler says, “The rationality of the data analysis will be sacrificed to superstition.” As a decision criterion, an I-chart with three standard deviation limits—calculated correctly—is very robust to almost any data distribution.
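For the record, “calculated correctly” for an I-chart means estimating sigma from the average moving range, not from the global standard deviation. A minimal sketch in Python, with made-up data for illustration:

```python
import numpy as np

def i_chart_limits(x):
    """Three-sigma limits for an I-chart (individuals chart).

    Sigma is estimated from the average moving range divided by
    d2 = 1.128 (the bias-correction constant for subgroups of size 2),
    not from the global standard deviation.
    """
    x = np.asarray(x, dtype=float)
    center = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()   # average two-point moving range
    sigma = mr_bar / 1.128               # d2 constant for n = 2
    return center - 3 * sigma, center, center + 3 * sigma

# Example: 12 weekly counts (illustrative numbers)
data = [23, 27, 25, 30, 24, 26, 29, 22, 28, 25, 27, 24]
lcl, cl, ucl = i_chart_limits(data)
print(f"LCL={lcl:.1f}  CL={cl:.1f}  UCL={ucl:.1f}")
```

The d2 = 1.128 constant converts the average moving range into an unbiased estimate of sigma; a spreadsheet STDEV over all the data gives something different, and usually larger, whenever signals are present.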

Myth No. 3: Data must be in control before you can plot them on a control chart.
Reality: I find that people generally make this conclusion only from computing limits incorrectly. Among the blunders that have been made in the name of this myth are getting rid of “obvious” outliers prior to charting them, and using limits that aren’t three standard deviations (see myth No. 4).

The purpose of the chart is to detect lack of control. It’s a very, very valuable initial diagnostic tool for a process. So tell me: If a control chart can’t detect lack of control, why use it?

Myth No. 4: Three standard deviation limits are too conservative.
Reality: Walter Shewhart, the originator of the control chart, deliberately chose three standard deviation limits. He wanted limits wide enough so that people wouldn’t waste time interpreting noise as signals (a Type I error). He also wanted limits narrow enough to detect an important signal that people shouldn’t miss (avoiding a Type II error). In years of practice he found, empirically, that three standard deviation limits provided a satisfactory balance between these two mistakes. My experience has borne this out as well.

I’ve seen two standard deviation limits commonly used because people, especially in medicine, are obsessed with the fear that they might “miss something.” There are two major reasons people do this:

1. The “two standard deviations” criterion for (alleged) significance has been drummed into people’s heads as the gold standard for decision making. This reasoning is based on the central limit theorem and making only one decision. (See my newsletter, “Why Three Standard Deviations?”)

2. They have performed an incorrect calculation of the standard deviation that has (unknowingly) resulted in an inflated estimate.


Novices continually think that they know better and invent shortcuts that are wrong. I once had a chart where my three standard deviation limits, calculated correctly, were equivalent to 1-1/2 standard deviations of the proposed analysis (which was, needless to say, calculated incorrectly).

The calculation of standard deviation taught in your “basic” statistics class, which, unfortunately, is so readily available in most spreadsheet programs, is almost never the one to use. If the very special causes you are trying to detect are present, they will seriously inflate the estimate. Not knowing this, people will even try to use one standard deviation as an outlier criterion.
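That inflation is easy to demonstrate with simulated data containing a sustained shift (all numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stable process for 25 points, then a sustained shift of +3 units
x = np.concatenate([rng.normal(10, 1, 25), rng.normal(13, 1, 25)])

global_sd = x.std(ddof=1)                     # the spreadsheet STDEV: inflated
mr_sigma = np.abs(np.diff(x)).mean() / 1.128  # moving-range estimate: local

print(f"global SD = {global_sd:.2f}, moving-range sigma = {mr_sigma:.2f}")
```

The global standard deviation absorbs the shift itself, while the moving-range estimate reflects only point-to-point variation, so limits based on the latter will actually flag the shift instead of hiding it.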

So, in the spirit of baseball season starting, let’s tap into the wisdom of “The Ol’ Perfesser.” If you ever ask a question in a statistical seminar, and the answer in any way resembles the following, leave:

“Well, I will tell you I got a little concerned yesterday in the first three innings when I saw the three players I had gotten rid of, and I said when I lost nine what am I going to do, and when I had a couple of my players I thought so great of that did not do so good up to the sixth inning, I was more confused but I finally had to go and call on a young man in Baltimore that we don’t own and the Yankees don’t own him, and he is doing pretty good, and I would actually have to tell you that I think we are more like a Greta Garbo-type now from success.”

This is how legendary baseball manager Casey “The Ol’ Perfesser” Stengel testified before a special Congressional House subcommittee on July 8, 1958. The committee was studying monopoly power as it applied to baseball’s antitrust exemption, and Stengel was asked if his team would keep on winning. (This is just a fraction of Stengel’s 45-minute discourse, the rest of which is just as priceless, along with Mickey Mantle’s followup.)

In summary

Trying to teach fancy theory does no one any good. W. Edwards Deming emphasized a basic understanding of variation and taught few techniques in his seminars. To do the type of work required to improve everyday culture, only 1–2 percent of people need advanced statistical knowledge.

Deming is probably rolling over in his grave at the subculture of “hacks” (his term) that has been created in the name of quality. Will the 80/20 rule inevitably apply to quality professionals? I answered that question here. And if the following is how your role is perceived, consider yourself forewarned (3 minutes):

As quality professionals, we must be careful not to perpetuate deeply embedded stereotypes of “sadistics” by making seminars nothing short of legalized torture and keeping our roles self-serving. Take it as a given: The people whom we teach will never like statistics as much as we do. So don’t teach people statistics—teach them how to solve their problems.

I will close with some more baseball wisdom. Like the Ol’ Perfesser, Yogi Berra is another beloved baseball icon who tends to unintentionally misspeak. Given the economy, many of us might unexpectedly face a career crossroads during the next few years. In fact, Yogi warns us, “It gets late early out there,” so I'm sure he would advise: “When you come to the fork in the road, take it.” Because (and I paraphrase) if people ain’t gonna go to statistics classes, how we gonna stop ‘em?


About The Author

Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.


Involve the people

I'm not a statistician but someone who strives to spread a continuous improvement culture in my organization.

In practice I've found that Davis's and Don's teachings can be used as "icebreakers" with people who are inherently suspicious and resistant to change. Believe me, when you are able to create a common improvement language with some easy tools that can be applied in almost everyday activities, you gain tremendous results.

Also, this new mindset helps people generate new ideas for improvement, which opens even more space for increased performance.

And "Quality" or "Excellence" becomes really a common aim for everybody in the group!

Kind regards.




Davis, You have certainly stirred it up...And I totally agree with you!!  Even after nearly 30 years of studying and using Process Behavior Charts, I am still learning and gaining insight into their role in turning data into knowledge as a basis for action.  We are swimming (drowning?) in data these days due to technological advances in data acquisition.  Process Control vendors often try to sell their systems by saying, "You can get a data point every 5 seconds - Isn't THAT wonderful!!!"  No.....it isn't.  Such data from a continuous process is usually so auto-correlated that it is useless.  Enter rational sub-grouping and knowledge of the context of your data.  Then and only then can one properly transform data into knowledge as a basis for action.  All data costs MONEY to collect; therefore, you had better be able to gain some knowledge from it!

Those who howl when the four myths are exposed would do well to educate themselves by reading (and understanding) Shewhart's book "Economic Control of Quality of Manufactured Product".  It is 500 pages of sheer genius.  I walk away from those who profess to be SPC experts, and yet have never read this landmark work.  I can usually tell whether or not someone understands control charts just by how they react to the four myths.

Another source of great knowledge in this arena is Donald Wheeler's book "Normality and the Process Behavior Chart".  Dr. Wheeler definitively blows up Myth 1 in this study of 1143 distributions.

Finally, I must say that the Individuals chart is indeed the "Swiss army knife" of control charts and has many uses in industry.   Yes, it is simple and easy to use - and that is the beauty of the tool!  From Dr. Shewhart: "...the fact that the criterion which we happen to use has a fine ancestry in highbrow theorems does not justify its use.  Such justification must come from EMPIRICAL EVIDENCE THAT IT WORKS."  Imagine that!



Feedback process control for continuous processes

If you can get a data point every five seconds, it might be possible to deploy feedback process control similar to that used in the chemical industry. Box and Luceno's "Statistical Control by Monitoring and Feedback Adjustment"  seeks to connect SPC methods to continuous control techniques such as proportional, integral, and derivative control.

Let's Get Real

The problem with SPC is the charts are not being used as designed, to hunt down and remove special causes and reduce variation. Just passive process monitoring and wallpaper charts for ISO auditors. This HUGE issue is not addressed at all by "Stat Experts" who continue to waste time writing about ARL and false alarm rates etc.....

Being led astray 1% of the time when looking for a "special cause" is something I should be concerned about? Are you kidding? Process improvement is a result of the interaction of the line operator with the chart and the ACTION he takes. All this other stuff just gets in the way.......

To understand the issues surrounding SPC implementation, most authors need to spend more time on the factory floor and less time in their little statistical world.

Rich DeRoeck  

Comment on Control Chart Myths

If the purpose of control charts is to detect process changes (special causes), how can we specify the required chart (or sample size) if we haven't even determined how much of a process shift we are interested in detecting? In MANY of the applications I have worked on, Individuals charts are COMPLETELY INEFFECTIVE in controlling the process since, as you know, Individuals possess much more variation than averages do.  That is, the power to detect changes QUICKLY is too low.  Of course, before charting anything we need to know that the characteristic is worth charting AND have some idea of how much of a process change (mean shift or variation change) is of PRACTICAL importance so that an appropriate chart can be designed.  Without asking and answering these questions, how can the tool (control chart) serve its purpose?

I agree that in many cases traditional Xbar charts (of appropriate sample sizes) are not viable, so when only Individuals are feasible, CUSUM charts are very useful since they improve sensitivity. 
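A one-sided tabular CUSUM of the kind alluded to here can be sketched in a few lines of Python; the k and h values below are the common textbook defaults for detecting roughly a one-sigma shift, and the data are simulated:

```python
import numpy as np

def tabular_cusum(x, target, k=0.5, h=5.0):
    """Return indices where either one-sided CUSUM exceeds h.

    k is the reference value and h the decision interval, both in
    sigma units; sigma is estimated from the average moving range.
    """
    x = np.asarray(x, dtype=float)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    z = (x - target) / sigma
    c_hi = c_lo = 0.0
    signals = []
    for i, zi in enumerate(z):
        c_hi = max(0.0, c_hi + zi - k)   # accumulates upward drift
        c_lo = max(0.0, c_lo - zi - k)   # accumulates downward drift
        if c_hi > h or c_lo > h:
            signals.append(i)
    return signals

# Illustration: 20 on-target points, then a sustained upward shift
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 20), rng.normal(1.5, 1, 30)])
signals = tabular_cusum(x, target=0.0)
print("first signal at index:", signals[0] if signals else None)
```

Because the CUSUM accumulates small deviations from the target, it tends to flag a modest sustained shift sooner than a rule-1-only I-chart would.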

I see LOTS of data that are STABLE - yet NON-NORMAL.  If I were to use 3 sigma limits on an I-MR chart, I would expect a large number of false alarms since 3 sigma limits do not account for the skewness (especially on characteristics such as flatness, runout, etc.)

Of course, Normality of the individual data is of little concern when working with averages due to the CLT...

I don't know why in the year 2011, with all of the tools available, we need to continue to over simplify (dumb down) everything - just to make it super easy.   

The challenge many statisticians have is communicating methods effectively without resorting to oversimplifications.


Such utter nonsense is not

Such utter nonsense is not even wrong.  You do not have a clue to what you are missing.  In the video they got the answer wrong because they did not understand the concept of place value.  You got your answer wrong because you fail to understand that probability models do not generate your data, processes do.


Donald J. Wheeler, Ph.D.
Fellow American Statistical Association
Fellow American Society for Quality

Probability Models and Processes

It is true that processes, as opposed to probability models, generate data; on the other hand, certain distributions are known from experience to model the data accurately. I had workplace experience with (1) particle count data in semiconductor tools and (2) vendor quality reports about impurity levels in semiconductor chemicals. The gamma distribution turned out to be an excellent model for both, as shown by the chi-square test for goodness of fit and the quantile-quantile plot, and it is backed up by scientific reasoning.

Impurities and particles are undesirable random arrivals, for which the Poisson distribution is the standard model. The gamma distribution is the continuous-scale analogue of the Poisson. If one uses a standard 3-sigma Shewhart chart in the production line, the manufacturing operators will soon tire of responding to false alarms at the upper control limit. (Since the quality characteristic is something undesirable, the lower limit is not particularly important.) Even worse, the traditional CPU or PPU will have little to do with the actual nonconforming fraction; it is possible to be off by a factor of 1,000 or more. E.g., the PPU may promise 1 defect or nonconformance per million opportunities when the process actually delivers 1 per thousand.

It is important to remember that Shewhart's methods were developed for use in an era when people did not have computers or calculators. The sample range is easy to calculate, and I suspect that the median chart was invented so production workers would not even have to add up the measurements and then divide. Today, however, it is easy to deploy on an Excel spreadsheet a control chart for a gamma distribution in which the spreadsheet calculates the exact quantile of the measurement and turns the cell red if it exceeds 0.99865 (or any other desired false-alarm risk).

It is also easy to deploy an attribute control chart that does the same for the quantile of a binomial or Poisson distribution, thus bypassing the requirement of an expected count of 4-6 (almost universally something bad like defects or scrap) for the traditional attribute charts to work. All the operator needs to do is enter the measurement or defect/scrap count, and the rest happens automatically. The lesson here is that today's technology allows us to use methods in a production environment that would have been prohibitively time-consuming, if not impossible, during the early 20th century.
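The spreadsheet approach described here translates directly into code. A sketch with SciPy, using illustrative (not real) gamma parameters:

```python
from scipy import stats

# Suppose particle counts are well modeled by a gamma distribution
# (shape and scale here are illustrative, not from real data).
shape, scale = 2.0, 5.0
dist = stats.gamma(a=shape, scale=scale)

# Shewhart-equivalent probability limits: the same 0.00135 tail risk
# on each side as a 3-sigma chart would give on normal data.
lcl = dist.ppf(0.00135)
ucl = dist.ppf(0.99865)
center = dist.median()   # median, not mean, as the center line

print(f"LCL={lcl:.2f}, center={center:.2f}, UCL={ucl:.2f}")
```

For this skewed distribution the 0.99865 quantile sits well above the mean-plus-3-sigma value, which is exactly why symmetric 3-sigma limits produce excess false alarms at the upper end.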

Normality IS a prerequisite for traditional SPC charts

Many processes, especially those in which the critical to quality characteristic is an undesirable random arrival (particles, impurities, pollutant levels) do not follow the normal distribution. It is therefore correct to say that the distribution need not be normal for the process to be in control. "In control" means simply that the distribution's shape, scale, and threshold parameter (if applicable) remain constant and that assignable causes do not produce outliers.        



The x-bar chart is somewhat robust in the presence of non-normality because of the Central Limit Theorem, but the chart for individuals is definitely not. I can supply examples of X charts in which the false alarm rate exceeds 1 percent versus the 0.135 percent promised by the Shewhart chart's assumptions.



Data transformations (e.g. square root or cube root for the gamma distribution) will often however make the result sufficiently normal for plotting on a traditional 3-sigma chart. Software such as StatGraphics or Minitab (or even Microsoft Excel with some user programming) can fit a gamma or Weibull distribution where appropriate, and the user can then select the 0.00135 and 0.99865 quantiles as Shewhart-equivalent control limits, with the median (not the mean) as the center line.      



In addition, individuals, as opposed to sample averages, are in or out of specification, so an assumption of normality will deliver totally unrealistic estimates of the process performance index. The result can be off by orders of magnitude (in terms of the nonconforming fraction) for a gamma distribution.



None of this is to say that non-normal data can't be plotted on a preliminary SPC chart with normal control limits; doing so may make the non-normal distribution quite obvious up front, as will a histogram and a normal probability plot. Deployment for routine use on the shop floor requires however that the control limits and center line be changed to reflect the non-normal distribution or that a transformation be used to make it behave like a normal distribution.

More Leptokurtophobia

There are those who are afraid of clarity because they fear that their ideas may not sound profound.  While you try to dance around the issue, you continue to get the wrong answer.  Transforming your data is simply a fancy way of lying with statistics.  Davis is right.


Donald J. Wheeler, Ph.D.
Fellow American Statistical Association
Fellow American Society for Quality

Normality needed? Not necessarily...Not usually...

I am writing up a simulation that I just completed, looking into the issue of the robustness of ImR charts. In this simulation I used a random generator to create 197 binomial distributions, with 5,000 values in each. I used values of p set to 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9, with sample sizes in increments of 5, from n=5 to n=95. The simulated data thus ranged from very skewed data sets (where p = .1 or .9 and n was small) to a lot of sets that were much more unimodal, symmetrical, and bell-shaped. (I also included p=0.05 at the higher sample sizes.) I then ran ImR charts for each data set, and recorded the number of false signals in each chart for each of the four common Western Electric Zone tests. For all four rules, the total number of "signals" was 14,405, for an observed probability of a false signal of 0.014, and an observed ARL of about 69. Most of the indicated signals came from the lower-left and lower-right corners of the space (n low and p extreme); there were a lot of rule three signals (4 out of 5 more than one sigma away on the same side of the centerline); I'm guessing that that is because the binomial is more heavy-tailed than the normal.

Looking at all the data, if we just use rule 1 (point outside the control limits), the empirical probability was 0.0037, ARL 274. When I added rule 2 (run of 9 above or below the centerline), the empirical probability was .0063, and the ARL was 160. This is not bad, when you consider that I included all the signals (including those from the more skewed corners of the space).  The probabilities are much lower, and the ARL much higher, when p is from 0.3 to 0.7, or n is higher than 15. Of course, the probabilities are much higher and the ARL is much lower, when p is very low or very high, and n is very low. These are the most skewed conditions. The point is that ImR is very robust in most cases... 

Contrast this with a similar study of 75,000 randomly-generated data from a "normal" distribution: an ImR chart of those data had an empirical probability of a false signal of .008 for rules one and two, with an observed ARL of 130.
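A scaled-down version of this simulation, using only rule 1 and a small subset of the p-by-n grid, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def rule1_false_signals(x):
    """Count points outside 3-sigma I-chart limits (rule 1 only)."""
    sigma = np.abs(np.diff(x)).mean() / 1.128   # moving-range sigma
    center = x.mean()
    return int(np.sum((x > center + 3 * sigma) | (x < center - 3 * sigma)))

total_signals = total_points = 0
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    for n in (5, 25, 50):
        x = rng.binomial(n, p, 5000).astype(float)
        total_signals += rule1_false_signals(x)
        total_points += 5000

print(f"rule-1 false-signal rate = {total_signals / total_points:.4f}")
```

Since every data set here is stable by construction, any rule-1 signal is a false alarm; the observed rate stays low except in the most skewed corners of the grid, consistent with the robustness claim above.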

We don't get infinite amounts of noise-free data; we get finite amounts of noisy data. We don't get theoretical curves; we get histograms. We are fortunate that many of our data fit some models closely enough that we can use theory for prediction ("if," as Shewhart said, "we don't examine too carefully what we mean by 'close'"). The ImR chart has proven robust enough in most cases for which I've used it. I haven't found transformation to be particularly useful in SPC (with the possible exception of rational subgrouping to transform the distribution by averaging); it's certainly possible, and I have done it. A few years ago I mentioned that in a discussion. Lloyd Provost asked, "What would it mean if you got a special cause in the transformed data?" While I was still trying to formulate an answer to that, he asked, "What would it mean if you didn't get one?"

Since then, I have relied on Deming's claim that statistical theory is only half of what you need for SPC; the other 50% is process knowledge. I may get a few more false signals in an ImR chart of, say, cycle-time data from a call center; I expect that, and usually investigate only those that go well beyond the upper control limit. I will, however, fit a Weibull or use the "non-normal" option in Minitab when I do a capability study with those call-center data.

"It depends"

I'm sure in your industry that there is a need for such minutiae.  What I'm trying to address in my article is the rampant "belt" sub-culture being created, especially in service industries, regarding organization-wide implementation of "quality fad du jour."  As I said, only 1-2% of people need advanced statistical knowledge. 

I stand by my statements about normality and ask you to be careful:  Deming HATED the use of probability limits.  I saw him once growl at a questioner during a seminar, "Normal distribution?  I've never seen one!"

Conference Board Index

Davis - please tell me you wrote the Conference Board Consumer Index article in this issue as a joke to accompany this one. . . Please tell me you pulled a fast one on us. . . .


I'm with Davis on this one.  But don't take it from us.  Go back and actually read "Economic Control of Quality of Manufactured Product" and you will find that Dr. Shewhart invokes the Tchebychev Inequality in his derivation of control chart structure.