Anthony D. Burns

Six Sigma

Predictable

Why has it taken so long to understand that processes need analytic methods, not enumerative ones?

Published: Wednesday, June 13, 2018 - 12:03

Quality is related to processes. A process is “a series of actions or steps taken in order to achieve a particular end.” It doesn’t matter whether the process is handling invoices, serving customers in a bank, manufacturing or assembling parts, settling insurance claims, moving the sick through a hospital, or any one of thousands of other examples. A process involves movement and action in a sequential fashion.

Every quality professional is concerned about the improvement of processes. By making processes better, we get less waste, lower costs, and happier customers.

lakewater.jpg
Photo used with permission of Samuel Ferrara.

The image above depicts two opposing states: a dynamic, changing state and a static state. The lake is static, unchanging. We might take temperature measurements in the lake at different depths and come back tomorrow to find no difference. We might take samples of the lake water to compare later with samples from other lakes.

By contrast, the stream is dynamic. It changes second to second. It is a combination of myriad chaotic processes that would take myriad Navier-Stokes equations to solve—that is, once the Millennium Prize for showing how to solve the Navier-Stokes equations has been won. Measure the flow rate in different parts of the stream, and you would not be surprised to find constant change.

The stream represents the changing, dynamic businesses with which we are all familiar and which we are all concerned with improving. W. Edwards Deming referred to the methods used to study dynamic systems as “analytic.” The static lake, by contrast, is studied using “enumerative” methods.

Researchers in lake compositions, psychologists, and demographers all use enumerative methods. The first statistical test was published by John Arbuthnot in 1710. In 1812, Pierre-Simon, Marquis de Laplace, pioneer of the Laplace transform, published his Théorie analytique des probabilités, which laid down many of the fundamentals of statistics. Such statistics are based on the normal distribution, derived by Carl Friedrich Gauss in 1809. During the ensuing centuries, statistical tests, or hypothesis tests, were devised by men such as K. B. Wilson, George Box, David Cox, Alexander Mood, Henry Mann, D. Ransom Whitney, William Kruskal, W. Allen Wallis, and Milton Friedman. There is a lot of wonderful and interesting statistics here. None of it includes the element of time. It is not designed for process improvement.

Whilst enumerative methods proved powerful, they were unsuitable for dynamic, analytic situations. In 1944, Walter Shewhart made the brilliant observation that “Classical statistics start with the assumption that a statistical universe exists, whereas [SPC] starts with the assumption that a statistical universe does not exist.” That is, while enumerative statistics usually rely on assumptions about the distribution of the data, commonly the normal distribution, the prediction of the behavior of processes does not. We can never know the distribution for a changing process.

Shewhart’s discovery was that classical, enumerative statistics were inappropriate for process improvement. On this basis, he created the control chart for analytic studies.

This led Deming to state in 1986 that “The student should avoid passages in books that treat confidence intervals and tests of significance, as such calculations have no application in analytic problems in science and industry.... Analysis of variance, t-test, confidence intervals, and other statistical techniques taught in the books, however interesting, are inappropriate because they bury the information contained in the order of production.... a confidence interval has no operational meaning for prediction, hence provides no degree of belief in planning.”

During the 32 years since Deming’s illuminating statements, the message still hasn’t sunk in for most folk. Today, most process improvement courses focus on enumerative methods, that is, hypothesis testing. This is exactly what Deming warned against. Quality has regressed to the days before 1944. It is hardly surprising that a survey showed 80 percent of Six Sigma improvement projects fail (of those brave enough to admit failure).

Deming’s key word for process improvement is prediction. Businesses want to be sure not only that their processes will be improved, but also that they will stay improved into the future. Shewhart derived his control charts with statistical knowledge, but he based them on economics. His charts indicate when a process is predictable and when special causes that disrupt stability are likely to exist and should be investigated. Most important, a control chart is not a probability chart. It does not give probabilities of a process being predictable or otherwise. It does not depend on any particular data distribution, in the way that enumerative methods do.

A preoccupation with enumerative methods has led many people to falsely believe data need to be normally distributed for control charting. Have a look at the data distributions below. It is difficult to imagine just what processes might produce them, but which of the processes below do you think could be charted with a control chart, without any data manipulation of any kind? That is, which can be control charted without pressing a button on ridiculously expensive statistical software to torture the data to make it confess?

https://i.pinimg.com/originals/15/ac/d5/15acd53ff83ab8bff782a59d50fbef9c.gif
Reproduced with permission of Don Wheeler.

The answer is all of them. It doesn’t matter what the histogram or the data distribution looks like. Data from any distribution can be plotted on a control chart. Furthermore, simple XbarR charts, drawn manually as Shewhart drew them, produce results just as good as those from the folk selling software to draw XbarS charts.
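
To see how simple the manual calculation is, here is a minimal sketch in Python. The function name and the data are made up for illustration; A2, D3, and D4 are the standard tabulated constants for subgroups of three.

    import numpy as np

    # Standard XbarR constants for subgroups of size n = 3
    A2, D3, D4 = 1.023, 0.0, 2.574

    def xbar_r_limits(subgroups):
        """Control limits for the averages (Xbar) and ranges (R) charts."""
        s = np.asarray(subgroups, dtype=float)
        xbars = s.mean(axis=1)                       # subgroup averages
        ranges = s.max(axis=1) - s.min(axis=1)       # subgroup ranges
        xbarbar, rbar = xbars.mean(), ranges.mean()  # grand average, average range
        return {"Xbar": (xbarbar - A2 * rbar, xbarbar + A2 * rbar),
                "R": (D3 * rbar, D4 * rbar)}

    # Made-up measurements: five subgroups of three
    data = [[9.8, 10.1, 10.0], [10.2, 9.9, 10.0], [9.7, 10.0, 10.3],
            [10.1, 10.2, 9.9], [10.0, 9.8, 10.1]]
    print(xbar_r_limits(data))

Nothing in the calculation asks what distribution the data follow; the limits come from the average range alone.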

Importantly, control charts come in pairs: an averages chart and a ranges chart. Leptokurtophobes (those fearing non-normality) will be leaping for their Librium if they care to look at the distribution of the ranges in the example below (subgroups of three, calculated for normally distributed data). Despite the normal distribution of the averages in this example, the distribution of the ranges is far from normal. That is, the prediction of process behavior is not affected by either the averages or the ranges being non-normal.

ranges_3_point.jpg
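
The skewness of the ranges is easy to verify with a quick simulation; this sketch assumes only NumPy, and the sample size is arbitrary:

    import numpy as np

    rng = np.random.default_rng(1)

    # 100,000 subgroups of three drawn from a perfectly normal process
    subgroups = rng.normal(loc=0.0, scale=1.0, size=(100_000, 3))
    ranges = subgroups.max(axis=1) - subgroups.min(axis=1)

    # The mean range sits near d2 = 1.693 for n = 3 and above the median:
    # a right-skewed, distinctly non-normal distribution
    print("mean range:  ", round(float(ranges.mean()), 3))
    print("median range:", round(float(np.median(ranges)), 3))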

We have created a system that can demonstrate the power of analytic methods and control charts to students. Q-Skills3D is an interactive 3D training product in quality. It has been built using what is known as a 3D-gaming engine. This engine simulates real-world physics and behaviors. Using it, we can investigate and wonder about dynamic systems from our laptops.

Q-Skills3D includes interactive games, simulations, and exercises across the gamut of process improvement. One is a ship game based on a real-world, historical story of process improvement. The game can be used for histogram and control chart training, as well as explaining the meaning of “world class quality.” It does not use “defects (i.e., misses) per million opportunities (shots)” or “zero defects (misses).” These were tossed overboard into the briny 48 years ago by Genichi Taguchi, with Deming and Donald Wheeler in support. Sadly, many still cling desperately to this waterlogged jetsam of quality. Instead, it calculates an “on target with minimum variance” score for each student.

ship2208.jpg

The ship game simulates a situation that occurred more than a century ago, on a heaving sea, with a yawing, pitching, rolling vessel, shooting at another ship. As you can imagine, the hit rate was terrible. For the game simulation, the data for a very experienced gunner (guess who) are plotted on the left below. As you might expect, the results are excellent and in control. We can depend on the gunner to produce consistent shooting in the future.

Now suppose we take the data, swap three pairs of values, and draw another control chart, using exactly the same data but in a different sequence. We get the result on the right, below. You might imagine this result as coming from a second gunner. He might appear to be a better gunner, but his control chart shows an out-of-control point. We cannot depend on his shooting. He is unpredictable.

prediction3.jpg

Remember that the data for these two gunners are identical—except for the order. No enumerative test would have been able to distinguish between these two performances. As Deming emphasized, hypothesis tests are inappropriate because they bury the information contained in the sequence of the data.
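
The point is easy to demonstrate numerically. The sketch below uses made-up scores rather than the gunnery data, and an individuals (XmR) chart rather than the XbarR charts pictured above, but the principle is the same: the moving range depends on the time order, so identical values in a different sequence give different limits and different signals, while a t-test comparing the two sequences returns t = 0 and p = 1 because it cannot see the order.

    import numpy as np

    def xmr_limits(x):
        """Natural process limits for an individuals (XmR) chart."""
        x = np.asarray(x, dtype=float)
        mr_bar = np.abs(np.diff(x)).mean()  # average moving range: order matters
        return x.mean() - 2.66 * mr_bar, x.mean() + 2.66 * mr_bar

    values_a = np.array([8, 12, 9, 13, 8, 12, 9, 13, 8, 12])  # one time order
    values_b = np.sort(values_a)              # identical values, reordered

    for name, x in (("A", values_a), ("B", values_b)):
        lo, hi = xmr_limits(x)
        signals = int(((x < lo) | (x > hi)).sum())
        print(f"{name}: limits ({lo:.2f}, {hi:.2f}), points outside: {signals}")

Sequence A shows no signals; sequence B, built from exactly the same values, shows several.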

It is also important to look at the histogram for the data. The histograms are also identical, and the data are clearly nonsymmetrical; they are not normally distributed. Enumerative methods are commonly based on the assumption of normal data, but skewed data such as these are fine for control charting.

histo_ship.jpg

Never draw a normal distribution over a histogram from an analytic process. Not only is it meaningless and of no benefit, it can also hide the voice of the process speaking through the histogram. In this case, we can see that the gunner seems more likely to overshoot than undershoot. We might decide to collect more data and investigate the causes.

Although we can eliminate special causes, such as wild shooting and inexperience, a process improvement requires a system change. This is what a clever admiral did in 1898 to achieve dramatically better quality. The ship game simulates the process improvement that he devised. Students play the game again with the process improvement, and control limits are adjusted.

It seems incredible that it took decades for such a simple yet brilliant idea to be adopted in the Navy. Yet how incredible is it that there is still such poor understanding of Shewhart’s and Deming’s simple yet brilliant control charts? Why has it taken so long for people to understand that processes need analytic methods, not enumerative ones?

Enumerative studies are suited to studying existing, fixed populations, where the population can be fully characterized. The analytic methods of Shewhart are used to study changing processes and to predict their future behavior.

Acknowledgement
I would like to thank Scott Hindle for his thoughtful contributions, comments, and discussion on this article.


About The Author


Anthony D. Burns

Anthony Burns, Ph.D., has a bachelor of engineering and a doctorate in chemical engineering from the University of New South Wales in Sydney, Australia. He has 36 years of experience and his company, MicroMultimedia Pty. Ltd., is responsible for the development of the e-learning quality product Q-Skills and its support tools.

Comments

Hypothesis Testing

Are you saying quality professionals have no use for hypothesis testing? I understand hypothesis tests assume static states. What if I am trying to reduce variation in a molding process (process improvement) and want to compare products coming from two cavities? Of course, to compare means from the two cavities, the mean would have to be a useful statistic (implying a stable process characterized by a single distribution). I do not see why you are bashing the fact that quality professionals learn statistical methods for making valid comparisons while taking into account the inherent variability in the data.

Molding Cavity Variation

I would refer the reader to Wheeler and Chambers' Understanding Statistical Process Control for an excellent example of the use of control charts to solve the problem you have posed. Through the use of rational subgrouping, the manager of that process was able to look at hour-to-hour, cycle-to-cycle and cavity-to-cavity variation. Cavity-to-cavity variation turned out to be the greatest driver of variation. The manager used the charts (and the process knowledge of his molding machine operators) to reduce between-cavity variation, and then monitor it using a dynamic and very sensitive method. 

Casting

Thanks, Rip. Dr. Wheeler also discusses such casting here: https://www.qualitydigest.com/inside/quality-insider-column/060115-ratio... and in Advanced Topics in SPC, “Rational Subgrouping,” pages 143 to 157. These books are essential for anyone serious about quality.

As you can see from Dr. Wheeler’s examples and the example in my article, hypothesis testing throws away key data that can be used to help analyse your process. There is absolutely nothing wrong with hypothesis testing when used correctly. Hypothesis testing is fine for static, non-process-related situations.

Enumerative studies were promoted in Six Sigma courses. However, the enumerative tools of Six Sigma are inappropriate for analyzing processes. Six Sigma was developed by a psychologist. It is hardly surprising that it focuses on enumerative tools that are far more appropriate for lab rats in a psychology laboratory than for process improvement.

Absolutely

The problem, I think, is in the education system. It is very difficult to find a stats class, or a "business statistics" class that teaches even the tools of analytic studies (much less that there are different types of studies). Those that do often do it badly (e.g., they might teach you to construct a control chart using the mean and 3 standard deviations above and below the mean for control limits). 

More on "static" processes

Further to the above points, what proportion of man-made processes display stable behaviour over time (i.e., are static)?

I think a valid concern is the use of hypothesis tests without a prior check on whether the data being used in the test are “in control” or not.

When the data are not stable, the use of hypothesis tests is questionable.

When the data are stable, the control chart can often be used to answer the question that would otherwise be posed as a hypothesis test.

Great point, Scott

I don't know the proportion of man-made processes that are stable; you can't know until you look. You make a great point about stability and hypothesis tests, though. This was the reason Shewhart developed control charts: to test whether the data exhibited a reasonable degree of statistical control. If they did, then distributional assumptions could be made. Without the evidence of homogeneity we get from the charts, we have no evidence for any particular distributional model. If the process is stable, then you are correct; processes can be compared handily using those charts...no need for t-tests, z-tests, or ANOVA.

Thanks, Rip

I had an example this afternoon.

Three subgroups of n=3: No signal on the range chart (or S chart). No signal on the average chart.

We ran the ANOVA and looked at the table of values: No signal. I asked my colleague which was easier to understand. What do you think the answer was?
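
For anyone who wants to try a comparison like this, here is a rough sketch with made-up numbers (not the actual data from this afternoon):

    import numpy as np
    from scipy.stats import f_oneway

    # Three made-up subgroups of n = 3
    g1, g2, g3 = [10.1, 9.8, 10.0], [10.2, 10.0, 9.9], [9.9, 10.1, 10.0]

    # Enumerative view: one-way ANOVA gives a table and a p-value
    F, p = f_oneway(g1, g2, g3)
    print(f"ANOVA: F = {F:.2f}, p = {p:.2f}")

    # Analytic view: XbarR limits (A2 = 1.023, D4 = 2.574 for n = 3)
    s = np.array([g1, g2, g3])
    xbars = s.mean(axis=1)
    ranges = s.max(axis=1) - s.min(axis=1)
    xbarbar, rbar = xbars.mean(), ranges.mean()
    print("Xbar limits:", (xbarbar - 1.023 * rbar, xbarbar + 1.023 * rbar))
    print("R upper limit:", 2.574 * rbar)

Both say “no signal,” but one is a glance at a chart and the other is a table of values.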

My guess

Just a guess, but if your colleague was someone who grew up in the enumerative world, I'm guessing they thought they understood the high p-value in the ANOVA better. My other guess is that they don't really understand that p-value (and what it DOESN'T say). I've never understood why a simple visualization would not be preferable, but it might be as Wheeler points out, "Some people are afraid of simplicity, because complexity looks profound even if it's absurd." 

Dr. Deming did a series on the difference between analytic and enumerative studies for Ford engineers in the early '80s. At one point he lamented that he could not get engineers to use two simple tools: a piece of paper and a pencil. ("They will be damned before they do"). He then drew a plot on the newsprint: Two high points followed by a low point, two high points followed by a low point - repeated a few times. He said, three shifts...two are high, the third is low. There is clearly a difference. No need for any advanced mathematics...just get to work finding out where the difference comes from! 

Of course, one engineer asked, "But wouldn't you do a hypothesis test, just to verify the difference?"

Deming thundered, "Why? Why ruin it...waste time? You can see there's a difference, get to work!" 

Love it

Love it, Rip. What a wonderful reply!

Six Sigma - Prediction Failure

Six Sigma focuses on enumerative tools. Mikel Harry, the creator of Six Sigma, was asked a question similar to that of this article: “Could you explain the best way to predict the outcome of a process in the future?” Harry’s answer showed complete ignorance of analytic methods and of the meaning of a predictable process: “Reference the quality literature on statistical process control, also known as ‘SPC.’ There are many excellent books on the subject. Process improvement and optimization is often accomplished by way of statistically designed experiments, or ‘DOE’ as it is known.”

A Six Sigma process, with its ±1.5 sigma shift, is wildly out of control and hence unpredictable. It cannot produce good quality product or service in the future, no matter where the specification limits are set.