John Flaig

Six Sigma

The Sound of One Tail Flapping

Normality, tail probabilities, and SPC

Published: Wednesday, July 6, 2011 - 15:07

Story update 7/08/2011: We corrected an error in Figure 2, and in the section preceded by "Expressed symbolically for a stable process...".

Two topics that have generated significant interest and frequent comments are “Is normality required for control charts?” and “You need to estimate the tail probabilities for nonnormal processes for SPC to work.” Let’s examine both and see what we find.

Question: Is process normality, or knowledge of the distribution’s tail probabilities, critically important for a control chart to guide the practitioner’s decision on whether to search for assignable causes?

In my opinion, people who argue that normality or process distribution tail probabilities are critically important haven’t actually read Walter A. Shewhart, or they don’t understand him. On the other side of the controversy, people who support Shewhart’s position haven’t done a good job of explaining his reasoning, either. In fact, W. Edwards Deming didn’t help matters when he made Zen-like comments that were true but offered little insight into Shewhart’s reasoning. For example:

“It is nothing to do with probabilities. No, no, no, no: not at all. What we need is a rule which guides us when to search in order to try to identify and remove a specific cause, and when not to. It is not a matter of probability. It is nothing to do with how many errors we make on average in 500 trials or 1,000 trials. No, no, no—it can’t be done that way. We need a definition of when to act, and which way to act. Shewhart provided us with a communicable definition: the control chart.” (From Henry R. Neave’s The Deming Dimension, SPC Press, 1990.)

I think Deming was fond of overstating the case in an effort to shock people into thinking for themselves. Or perhaps it was just so obvious to him that he felt an explanation was unnecessary. If you were as smart as Deming, that might be true, but the rest of us probably need a little coaching. So I’m going to try to fill in the gaps so that the logic of Shewhart’s argument becomes clear.

First, it does have a little to do with probabilities, or at least with being able to empirically estimate the frequency of unusual observations. To do this, the practitioner must first determine whether the process is essentially stable. This is the point where advocates of “normality and tail probabilities are required” stop thinking. They usually assume that the process is stable, and they think it is all about tail probabilities, but they are misguided on both counts. If they actually read Shewhart, they would know that he viewed the control chart as a heuristic tool for deciding when it is economically reasonable to search for the assignable cause(s) of unusual behavior, assuming that determining the cause(s) might allow us to improve the process, or at least restore stability.

Or as Shewhart stated:

“How then shall we establish allowable limits on the variability of samples? Obviously, the basis for such limits must be, in the last analysis, empirical. Under such condition, it seems reasonable to choose limits UCL and LCL on some statistic such that the associated probability P is economic in the sense now to be explained. If more than one statistic is used, then the limits of all statistics should be chosen so that the probability of looking for trouble when any one of the chosen statistics falls outside its own limits is economic.

“Even when no trouble exists, we shall look for trouble (1–P) N times on average after inspecting N samples of size n. On the other hand, the smaller the probability P, the more often in the long run may we expect to catch trouble if it exists. We must try to strike a balance between the advantage to be gained by increasing the value P through reduction in the cost of looking for trouble when it does not exist, and the disadvantages occasioned by overlooking troubles that do exist.” (From Shewhart’s Economic Control of Quality of Manufactured Product, D. Van Nostrand Co., 1931.)

For a stable system, this is equivalent to saying we should look for assignable causes when “the expected cost of not looking because of a beta error” is greater than “the expected cost of looking because of an alpha error”; that is, when the cost of false negatives is greater than the cost of false positives. Expressed economically: the savings from finding assignable causes and preventing failures are greater than the cost of searching for them.

Expressed symbolically for a stable process, this is:

If (Beta × N × (Failure Cost) ≥ Alpha × N × (Search Cost)), then Search.
Since Alpha = 1–P (where P + p = 1, so Alpha = p), and letting the Beta and Alpha errors be equal:
If ((1–P) × N × (Failure Cost) ≥ (1–P) × N × (Search Cost)), then Search.
If (E(Failure Cost) ≥ E(Search Cost)), then Search for assignable causes.

If the system experiences a shift (instability), then the practitioner should search for assignable causes when:

If (p × (Failure Cost) ≥ p × (Search Cost)), where p is the area beyond the control limits of the unshifted process.

To assess the economic ramifications of the process performance, we need to multiply the estimated frequency of unusual observations (i.e., the tail probability estimates) by the estimated cost of such deviations from the targeted process behavior. This can be done using cost data from the cost of poor quality (COPQ), or better yet, from Genichi Taguchi’s quadratic loss function. Once you do the multiplication, it becomes readily apparent that the decision to search for assignable cause(s) is a function of the expected loss and is driven by the cost, not by the tail probabilities.
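As a rough sketch of that multiplication (illustrative only, not from the article; the loss coefficient, target, and observation value below are hypothetical), one could estimate a unit failure cost with Taguchi’s quadratic loss and multiply it by a tail probability estimate to get an expected loss per unit inspected:

```python
# Illustrative sketch only: estimating an expected loss per unit inspected by
# multiplying a tail-probability estimate by a Taguchi quadratic loss.
# The loss coefficient k, target, and observation value are hypothetical.

def taguchi_loss(y, target, k):
    """Quadratic loss L(y) = k * (y - target)**2."""
    return k * (y - target) ** 2

k = 2.5            # hypothetical loss coefficient ($ per squared unit of deviation)
target = 10.0      # hypothetical process target
y_unusual = 14.0   # a representative out-of-limits observation (hypothetical)
p_tail = 0.002     # estimated frequency of unusual observations (tail probability)

unit_failure_cost = taguchi_loss(y_unusual, target, k)   # $40.00 with these numbers
expected_loss = p_tail * unit_failure_cost               # expected loss per unit inspected

print(f"Unit failure cost:      ${unit_failure_cost:.2f}")
print(f"Expected loss per unit: ${expected_loss:.4f}")
```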

In fact, process distributions can have vastly different tail probabilities, and yet the decision to search or not to search for assignable causes of unusual behavior is exactly the same. This is why normality (or distribution shape in general, or tail probabilities) is not a necessary condition for a process behavior chart to work, and work well, in most cases.

Consider the following example:

1. Assume the unit production cost is $20.

2. Assume the unit search cost is $10.

3. Assume the unit field failure cost can range from $1 to $10,000.

4. Assume that a successful search will result in the removal of the failure cause and thus increase gross profit by eliminating the failure cost.

5. The range of tail probabilities that we might expect under the three-sigma control limit assumption is displayed in figure 1 below. In figure 2 below, the probabilities range from about 0.002 for a bell-shaped distribution to about 0.02 for a skewed distribution. (From Donald J. Wheeler’s article, “Estimating the Fraction Nonconforming,” Quality Digest, May 31, 2011.) A short numerical sketch of the search decision follows figure 2.



Figure 1: The area under the curve for mean ± 3 standard deviations

Search Criteria: If E(Failure Cost) ≥ E(Search Cost), then Search for Assignable Causes

Figure 2: The tail probabilities are an order of magnitude different, but the decision on whether to search for assignable causes is exactly the same.
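To make figure 2 concrete, here is a minimal sketch (not from the original article; it simply reuses the example numbers above, and N is an arbitrary assumption) showing that the search decision comes out the same whether the tail probability is 0.002 or 0.02, because the tail probability appears on both sides of the comparison and cancels:

```python
# Minimal sketch of the search decision from the example above.
# Costs and tail probabilities echo the article's example; N is arbitrary.

def should_search(p_tail, n_samples, failure_cost, search_cost):
    """Search if E(Failure Cost) >= E(Search Cost), assuming equal alpha and beta."""
    e_failure = p_tail * n_samples * failure_cost
    e_search = p_tail * n_samples * search_cost
    return e_failure >= e_search

search_cost = 10.0   # unit search cost from the example
n = 1000             # number of samples inspected (arbitrary)

for failure_cost in (1.0, 100.0, 10_000.0):   # example range: $1 to $10,000
    for p in (0.002, 0.02):                   # bell-shaped vs. skewed tail probability
        decision = should_search(p, n, failure_cost, search_cost)
        print(f"failure cost=${failure_cost:>8.0f}  p={p:<5}  search={decision}")
```

For any given failure cost, the printed decision is identical for both tail probabilities; only the relation between the failure cost and the search cost changes the answer.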


Could we do a better job of estimating costs by modeling the process distribution? Perhaps, especially if the cost in one tail were different from the cost in the other, but only if we could overcome the uncertainties of a dynamic system, e.g., changing process stability, changes in p, the accuracy with which we can estimate p (i.e., the standard error of p, per Wheeler, above), and the variation in costs, some of which are difficult to determine and some of which, as Deming would say, are unknowable.

So, given the real-world situation and the second law of thermodynamics, it seems reasonable to accept Shewhart’s concept of a control chart and not be overly concerned with normality or tail probabilities. After all, it is not the tail probabilities but the trade-off between the cost of searching and the failure cost of not searching that determines the correct economic decision.

Note: For bell-shaped distributions, the fraction nonconforming p is approximately linear in the tails, but failure costs can be exponential.


About The Author


John Flaig

John J. Flaig, Ph.D., is a fellow of the American Society for Quality and is managing director of Applied Technology at www.e-at-usa.com, a training and consulting company. Flaig has given lectures and seminars in Europe, Asia, and throughout the United States. His special interests are in statistical process control, process capability analysis, supplier management, design of experiments, and process optimization. He was formerly a member of the Editorial Board of Quality Engineering, a journal of the ASQ, and associate editor of Quality Technology and Quantitative Management, a journal of the International Chinese Association of Quantitative Management.

Comments

Synergize cost of quality and false alarm risk

My position is that it IS important to know the tail areas; a Shewhart chart's false alarm risk can easily be ten times what is expected (0.135% at each tail) for a gamma distribution's upper tail, if not even worse. It costs time to have production workers chase false alarms and, if the boy cries "wolf" too many times, it may undermine their confidence in SPC. I stand by my recommendation that the practitioner identify the underlying distribution and set control limits with known false alarm risks.

HOWEVER, this article brings up the idea of synergizing SPC with the costs of quality. I recommended that the false alarm risk be set at 0.135% at each end, but that is only because it is the way Shewhart did it for the normal distribution. Does the false alarm risk HAVE to be 0.135%? Maybe not. A more scientific approach, and it seems that others have looked into this, involves economic design of control charts; for example, http://www.jstor.org/pss/1269598. Another article talks about economic design of control charts using the Taguchi loss function. However, no economic design model will deliver optimum results unless the underlying statistical model is correct. "The Sound of One Tail Flapping" has definitely stimulated my interest in seeing whether a synergy between process economics and use of the correct underlying distribution can be carried even further to overturn existing paradigms about SPC (e.g., the concept of the 0.135% false alarm rate).
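As a quick numeric check of that order-of-magnitude claim (a sketch added for illustration, not part of the original comment; the gamma shape values are arbitrary), scipy can compute the probability of exceeding the mean plus three standard deviations for a skewed gamma distribution:

```python
# Sketch: one-sided false alarm risk beyond mu + 3*sigma for skewed gamma
# distributions, versus the 0.135% expected under normality. Shapes are arbitrary.

from scipy import stats

for shape in (1.0, 2.0, 4.0):               # smaller shape = more skew
    dist = stats.gamma(a=shape)             # scale = 1 for simplicity
    mu, sigma = dist.mean(), dist.std()
    upper_tail = dist.sf(mu + 3 * sigma)    # P(X > mu + 3*sigma)
    print(f"shape={shape}: upper-tail risk = {upper_tail:.3%} "
          f"(about {upper_tail / 0.00135:.0f}x the normal 0.135%)")
```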

Always interesting

William, please keep in mind that the "concept" of the 0.135% false alarm rate is NOT from Dr. Shewhart.  Dr. Shewhart invokes Tchebychev.


Obviously the theoretical debate continues. I agree that the deciding factor is which application of SPC has the lowest total cost:

1. Shewhart / Tchebychev / Deming, using 3-sigma limits as a heuristic

2. Distribution fitting and using probability limits

Keep in mind that the total cost includes the cost of training management to use the technique. Does the cost of doing and explaining method 2 outweigh the losses from either missed trends or false trends due to use of method 1?

- Steve Prevette, user of method 1.