A History of the Chart for Individual Values
Donald J. Wheeler
Published: Monday, July 10, 2017 - 12:03

The simplest type of process behavior chart is the chart for individual values and a moving range. It allows us to plot a point every time we get a value, making it perfect for data that occur one value at a time. A brief history of this simple chart follows.

The ultimate in homogeneous subgroups

In the 1920s, Walter A. Shewhart developed “control charts” as a technique for analyzing observational data. Today we use the more descriptive name of process behavior charts. The original process behavior chart for measurements used a series of subgroups containing two or more original data. The original chart plotted the averages and root mean square deviations of each subgroup. In practice this quickly became the average and range chart we know today.

The fundamental concept that makes this chart work is the notion of rational subgrouping. According to Shewhart, when we place two values together in the same subgroup we are making a “judgment” that these two values were collected under “essentially the same conditions.” Thus, the objective of rational subgrouping is to end up with homogeneous subgroups. As long as we organize the data so that the subgroups are internally homogeneous, the average chart will allow us to detect changes that occur between the subgroups, while the range chart will check for consistency within the subgroups.

This need for homogeneity within the subgroups favors the use of smaller subgroup sizes. Regarding this Shewhart wrote [1]: “Obviously, if the cause system is changing, the sample [subgroup] size should be as small as possible so that the averages of samples do not mask the changes. In fact, single observations would be the most sensitive to such changes. Why then do we not use a sample size of unity?”

Shewhart’s answer to this question had two parts: First, with single values we cannot compute a within-subgroup range; and second, the global standard deviation statistic is unsatisfactory when the cause system is changing. Shewhart then observed that the sensitivity of an average and range chart will increase as the subgroup size decreases until a point is reached where the subgroups are internally homogeneous. In consequence, in the absence of any a priori information about how to organize the data into subgroups, “there would be some advantage in reducing the subgroup size to unity.” Thus, from the very beginning, the philosophy behind the process behavior chart has been pushing us in the direction of the chart for individual values.

Successive differences

Shewhart understood the practical appeal of using individual values, but he faced a dilemma about how to compute effective limits for them. Today we resolve this dilemma by using the two-point moving ranges, originally known as the successive differences. The method of successive differences effectively creates a series of moving subgroups of size two and computes the range for each of these moving subgroups. Thus, with k original values we obtain (k–1) two-point moving ranges. When we average these moving ranges and divide by d2 = 1.128 (the bias-correction factor for subgroups of size two), we obtain an estimate of the standard deviation parameter for the distribution of the original values. (The justification for this use of d2 comes from A. R. Kamat [2], who was a student of H. O. Hartley.)

The method of successive differences first appears in the statistical literature in a 1941 paper by John von Neumann of the Institute for Advanced Study at Princeton and three researchers from Aberdeen Proving Ground [3].
This paper cites E. Vallier [4] as the first to use successive differences for estimating dispersion in 1894, and attributes the first use of the average moving range to C. Crane and K. Becker [5]. Since these sources come from ballistics, the idea of using successive differences to characterize dispersion appears to have originated at the end of the 19th century in determining the range of field artillery.

While von Neumann focused on a statistic known as the mean square successive difference (MSSD), it turns out that there is no practical advantage in working with the MSSD rather than the average moving range. When only two values are used to compute a measure of dispersion, the standard deviation statistic, s, is simply the range of the two values, R, divided by the square root of 2. As a result of this equivalence, both the MSSD and the average moving range are equally efficient when used with a predictable process. However, because the MSSD computation squares the differences before averaging them, the MSSD is more easily inflated by extreme range values than is the average moving range. Since extreme range values are to be expected when the process is changing, the average moving range is the preferred statistic for computing robust limits for the XmR chart.
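To make the arithmetic concrete, here is a minimal Python sketch of the computations just described. The data values are invented for illustration; the divisor 1.128 is the d2 factor named above, and 3.267 is the conventional upper-limit factor for a range chart with subgroups of size two (a standard constant not named in this article).

    import numpy as np

    def xmr_limits(x):
        """Limits for an XmR chart via the method of successive differences."""
        x = np.asarray(x, dtype=float)
        mr = np.abs(np.diff(x))           # the (k-1) two-point moving ranges
        mr_bar = mr.mean()                # average moving range
        sigma_hat = mr_bar / 1.128        # divide by d2 to estimate sigma
        x_bar = x.mean()
        return {
            "X central line": x_bar,
            "X natural limits": (x_bar - 3 * sigma_hat, x_bar + 3 * sigma_hat),
            "mR central line": mr_bar,
            "mR upper limit": 3.267 * mr_bar,  # conventional factor for n = 2
        }

    # Ten illustrative individual values, one per time period
    print(xmr_limits([39, 41, 40, 44, 38, 42, 43, 40, 39, 45]))

Note that three times sigma-hat works out to 2.66 times the average moving range, which is the familiar scaling factor used for the natural process limits of an X chart.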
The history of the XmR chart

The XmR chart is attributed to W. J. Jennett, who is said to have created this technique while working with the MO Valve Co. in England in 1942. When we look back at Jennett’s work we find that in 1942 he co-authored Quality Control Charts [6]. However, in this book there is no mention of the XmR chart, which suggests that the chart did not yet exist when the book was prepared (in 1941 and earlier). Based on this it seems likely that Jennett got the idea of using the method of successive differences with individual values from the 1941 von Neumann paper.

The first mention of the XmR chart that I have found in the literature is in L. H. C. Tippett’s 1950 book [7], where he described the XmR chart using two pages of text and two examples. Tippett was the head statistician for the British Cotton Industry Research Association, and in 1948 he was invited to give a series of lectures at the Massachusetts Institute of Technology. These lectures were then edited into book form by Shewhart, and the result was Technological Applications of Statistics.

The second mention of the XmR chart comes in 1951, when the American Society for Testing and Materials, Committee E-11 on Quality Control of Materials, issued Version C of their Special Technical Publication 15 [8]. Here the XmR chart formulas are presented in a couple of pages of text and an example. In the past Shewhart had been instrumental in getting this committee to include the “Control Chart Method of Analysis and Presentation of Data” in Version B of this booklet, and it is not hard to imagine that he was influential in getting the XmR chart included in Version C. Thus, Shewhart is linked to the first two mentions of the XmR chart in print, both as an editor and through his friends and associates.

The third mention of the XmR chart is in 1953, in a 10-page article by Joan Keen and Denys Page of the Research Laboratories of the General Electric Co. (GE) in Wembley, England [9]. In addition to crediting Jennett with creating the technique, Keen and Page present five examples of the XmR chart from what they describe as its extensive use at GE over the preceding decade. (Hartley appears to have encouraged Keen and Page to write this article to accompany Kamat’s treatment of the theoretical foundation of the moving range technique.)

The next notable mention of the XmR chart is found in the 1956 Western Electric Statistical Quality Control Handbook [10], which devotes about three pages of text and one complete example to it. Taken together, these first four mentions of the XmR chart cover a total of about 17 pages of text and include nine examples. So there is substantial evidence that this technique was widely known and used within the first decade following its creation, and it was promoted by both Shewhart and Hartley.

However, over the next 30 years very little was added to this body of material. Some statistical quality control textbooks did not mention the XmR chart at all. Among those that did, only about half included an example, and several authors were uncomfortable with the technique. Some felt that the XmR chart would be insensitive to real signals, while others felt that it would be prone to too many false alarms. These contradictory notions about how the technique might work in practice reveal a widespread lack of familiarity with the XmR chart.

Out of obscurity

So the XmR chart had been around since 1942. While it had been promoted by prominent statisticians (Hartley and Shewhart), and while it was used in industry in the 1940s and 1950s, it appears to have dropped out of use shortly thereafter. When I came along in the 1970s, the XmR chart was little known and rarely used.

Starting in 1982, when I left the university and began full-time consulting and instruction in industry, I did a lot of work with specialty chemical operations. With all sorts of one-at-a-time data, these clients found the XmR chart to be extremely useful. As I worked with them I saw the power and utility of the XmR chart over and over again. One of my chemical plant clients managed to win the Ford Q1 award on the first audit. (Up to that time everyone had failed Ford’s first audit.) My client won the award simply because the XmR charts provided documented answers for virtually every process-related question asked by the auditors.

At dinner one evening in 1985, W. Edwards Deming asked me about the XmR chart. He had seen one used by one of my students, and he had not come across this technique before. As we discussed the technique, I explained something of its history and rationale. After looking at some of his own data in this format, and following another discussion, Deming seemed to be satisfied with the role of the XmR chart; he could see the great variety of ways it could be applied.

At this same time David Chambers and I were writing our book, Understanding Statistical Process Control [11], which came out in 1986. In this book we included five examples of the XmR chart and 14 pages of text on this topic. Then, in 1989, the Dow Chemical Co. asked me to develop a special module on the managerial uses of the XmR chart for their internal training program. Using the experiences of my clients I produced that module, and then refined that material into the little book Understanding Variation: The Key to Managing Chaos, which came out in 1993 [12]. This was the first book dedicated to the XmR chart. Its 32 examples and case histories and 136 pages of text easily exceeded all of the material previously written about this technique combined.
By focusing on managerial types of data, it also expanded the process behavior chart beyond the realm of manufacturing. This book was excerpted in Quality Digest [13] and has proven to be one of the most popular books on data analysis ever written. Several companies have credited this little book with turning their operations around.

Today the XmR chart is an integral part of almost every SPC software package. Yet having access to the technique does not guarantee that it is always used appropriately. The following sections discuss some of the issues and questions surrounding the use of the XmR chart.

Rational sampling

For the XmR chart to work as intended, two things need to happen. First, successive values need to be logically comparable. Second, the moving ranges need to capture the routine variation of the underlying process.

A time series that mixes apples and oranges together will not satisfy these two criteria. You have to organize your data so that you are dealing with all apples or all oranges. A time series with very short time periods might not allow the successive differences to capture all of the routine variation present in the process, so you have to exercise some judgment about the relationship between your sampling frequency and the way your process operates. At the other extreme, if your sampling frequency is so low that the process changes between every one or two observations, then the moving ranges will be inflated by these changes and the XmR chart will not work as intended. The sampling frequency needs to be high enough that most of the moving ranges represent routine variation. (This means that if you are using monthly values on an XmR chart, you are assuming that changes occur no more than once or twice a year.) The name given to this art of getting the frequency right is rational sampling [14]. Just as the average and range chart needs logically homogeneous subgroups to work, the XmR chart needs most of the moving subgroups to be homogeneous in order for the moving ranges to capture the routine process variation. The sketch below illustrates what goes wrong when this requirement is not met.
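Here is an illustrative simulation of that failure mode (the shift size, sample size, and seed are arbitrary choices, not values from the article). Both series have the same routine variation, but in the second one the process level shifts by three units between every observation, so the moving ranges absorb the shifts.

    import numpy as np

    rng = np.random.default_rng(1)
    n, routine_sd = 200, 1.0

    # Sampled often enough: the process level is steady between observations.
    steady = rng.normal(10, routine_sd, n)

    # Sampled too infrequently: the level shifts between every observation.
    jumpy = rng.normal(10, routine_sd, n) + np.where(np.arange(n) % 2, 3.0, 0.0)

    for name, x in (("steady", steady), ("jumpy", jumpy)):
        sigma_hat = np.abs(np.diff(x)).mean() / 1.128
        print(f"{name:7s} estimated sigma = {sigma_hat:.2f} (routine sigma = 1.0)")

The inflated sigma estimate for the second series produces limits that are far too wide to detect the very changes that are occurring, which is why the sampling frequency has to fit the way the process changes.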
Noise-filled data

Highly aggregated measures will tend to contain a lot of noise. (An example would be companywide, report-card metrics reported to management.) When such measures are placed on an XmR chart they will tend to show a predictable process. However, the limits will often be so wide that no manager will be pleased with the uncertainty they define. (Heads will roll, or promotions will occur, long before you reach one limit or the other.) Here the solution is to disaggregate the time series into its component parts. As the data become more specific and closer to real time, they become more useful in that they allow you to discover the points where your processes are changing. So while charting highly aggregated report-card measures may present the big picture and make processes look predictable, disaggregating those same measures is essential for learning about your processes and operations. For an example of how this works see [15].

What is the probability model?

“Don’t we need to know if the data are normally distributed before we can use an XmR chart?” No, it simply does not matter. Countless statisticians and others have foundered on the fallacious idea that the data have to be normally distributed in order for the chart to work. Regardless of the shape of the histogram, the generic, three-sigma limits of a process behavior chart will filter out approximately 99 percent or more of the routine variation.

Because of this conservative filtration of the routine variation, any point that falls outside the limits may be considered a potential signal of a process change. To understand this, consider the summary graphic of reference [16] given in figure 1. There we have 19 different probability models covering essentially the full range of gamma models, Weibull models, and lognormal models. Even with these extreme models, the three-sigma limits cover at least 98 percent of the area in every case.

Figure 1: How three-sigma limits work with Weibulls, gammas, and lognormals

However, probability models do not generate our data. Processes do. And in this world all processes are subject to change. The process behavior chart does not assume any specific probability model for your data, but rather checks to see whether your data display enough consistency to make the notion of a probability model sensible. The charts ask the question “Has a change occurred?” without making reference to any specific probability model. As Shewhart wrote in 1943 [17], “Classical statistics start with the assumption that a statistical universe exists, whereas [SPC] starts with the assumption that a statistical universe does not exist,” and also, “… measurements of phenomena in both social and natural science for the most part obey neither deterministic nor statistical laws until assignable causes of variability have been found and removed.” When the process is changing, no single probability model can describe the data it produces.

What about false alarms?

“But the false alarm rate will change with different probability models.” Yes, different probability models will allow you to compute different false alarm rates. This is exactly what we should expect with generic, fixed-width limits. For the extreme spectrum of probability models shown in figure 1, the false alarm rates vary from one part per thousand to 22 parts per thousand. But these theoretical calculations are simply the sleight of hand by which statisticians distract themselves (and you) from answering the right question.

While a probability model may be used to describe a data set, the process behavior chart is not concerned with fitting a model to the process, but rather the opposite: it seeks to fit the process to a model. It seeks to characterize the past process behavior as belonging to one of two classes, either predictable or unpredictable. It asks if the process fits a very broad model of being “predictable within limits.”

When a process is being operated unpredictably there will be signals of process changes. In this case having one or two false alarms per hundred points is of no real consequence because the signals will usually greatly outnumber the false alarms. Here the story told by the chart will not depend on whether the false alarm rate is one per thousand or two per hundred. When a process is reasonably predictable, the routine variation will be common-cause variation: the result of many different cause-and-effect relationships where no single relationship is dominant [18]. Under these conditions Pierre-Simon Laplace’s central limit theorem guarantees that the resulting histogram will tend toward normality, and as this happens the false alarm rate will tend to drop well below one per hundred.

So if the process is unpredictable, the false alarm rate is not important. And if the process is predictable, then we do not need to worry about the shape of the histogram because the generic, three-sigma limits will naturally result in a very low false alarm rate. The simulation below shows the size of such false alarm rates for a few differently shaped distributions.
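As a rough check on these rates, the following Monte Carlo sketch draws from three differently shaped distributions (illustrative choices, not the 19 models of figure 1) and counts how often a value falls outside the mean plus or minus three standard deviations. It uses each distribution’s own mean and standard deviation directly, which is what the chart limits approximate for a predictable process.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 1_000_000

    samples = {
        "normal":      rng.normal(0, 1, n),
        "exponential": rng.exponential(1, n),    # a gamma model with shape 1
        "lognormal":   rng.lognormal(0, 0.5, n),
    }

    for name, x in samples.items():
        lo, hi = x.mean() - 3 * x.std(), x.mean() + 3 * x.std()
        rate = np.mean((x < lo) | (x > hi))
        print(f"{name:12s} beyond three-sigma limits: {rate * 1000:.1f} per thousand")

Even the strongly skewed distributions stay under about 20 parts per thousand, in line with the one-to-22 parts per thousand quoted above.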
What about the sensitivity of the XmR chart?

Some authors were concerned that the XmR chart would be less sensitive than an average and range chart. This concern comes from a failure to consider the nature of one-at-a-time data. When your data have a frequency attached, waiting for five values requires waiting for five time periods. So in the time required to collect one subgroup of size five and plot one point on an average chart, you will have plotted five points on your XmR chart. This difference in the number of points plotted more than compensates for the perceived lack of sensitivity. When comparing average charts with X charts we must hold the amount of data constant, and when we do we find that the power functions are very similar. When the XmR chart is used with all four of the Western Electric zone tests, it is essentially as sensitive as any statistical technique can possibly be [19], [20]. A sketch of these four tests follows.
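For reference, here is a minimal sketch of the four Western Electric zone tests applied to individual values, with sigma estimated from the average moving range as described earlier. The rule definitions in the docstring are the standard ones; the data series is invented, and returning the indices of flagged points is an illustrative design choice.

    import numpy as np

    def zone_test_signals(x):
        """Flag points that trigger any of the four Western Electric zone tests:
        1. one point beyond three sigma;
        2. two of three successive points beyond two sigma on the same side;
        3. four of five successive points beyond one sigma on the same side;
        4. eight successive points on the same side of the central line.
        """
        x = np.asarray(x, dtype=float)
        center = x.mean()
        sigma = np.abs(np.diff(x)).mean() / 1.128  # moving-range estimate
        z = (x - center) / sigma

        signals = set()
        for i in range(len(x)):
            if abs(z[i]) > 3:                               # test 1
                signals.add(i)
            for side in (1, -1):
                if i >= 2 and np.sum(side * z[i-2:i+1] > 2) >= 2:   # test 2
                    signals.add(i)
                if i >= 4 and np.sum(side * z[i-4:i+1] > 1) >= 4:   # test 3
                    signals.add(i)
                if i >= 7 and np.all(side * z[i-7:i+1] > 0):        # test 4
                    signals.add(i)
        return sorted(signals)

    # A deliberately shifted series: the level rises after the sixth value.
    data = [10.1, 9.8, 10.2, 9.9, 10.0, 10.3, 11.2, 11.5, 11.1, 11.6,
            11.4, 11.8, 11.3, 11.7, 11.5, 11.9]
    print(zone_test_signals(data))   # indices of points showing a signal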
What about autocorrelated data?

“Can we put autocorrelated data on a process behavior chart?” Shewhart did. You can too. For your data to display any appreciable degree of autocorrelation, your process has to be changing over time; you cannot have substantial autocorrelation without a process that is changing. Thus, autocorrelation in your data is simply another sign that you have an unpredictable process. While the autocorrelation might have an impact on the computation of the limits, any time series with a large positive or large negative autocorrelation is going to have a highly coherent running record. With a highly coherent running record the limits become moot: the signals are clear for all the world to see just by looking at the running record. For more on this see reference [21].

Must I use fixed frequency data?

While most time series will have equally spaced intervals of time, this is not a requirement for the XmR chart. As long as the moving ranges capture the routine variation in the process, the limits will work as advertised. In those cases where the time-order sequence of the data has been lost, you can use any arbitrary ordering of the data to create an XmR chart. While such a chart may be slightly less sensitive than a chart based on the time-order sequence, any signals you find with an arbitrary ordering are still likely to be real. The only ordering to avoid is a rank ordering, where the data are arranged according to their numerical values before being placed on the chart. When the data are rank-ordered the method of successive differences breaks down, the moving ranges will not capture the routine variation, and the XmR chart will not work as intended.

The Swiss Army Knife of process behavior charts

By making no presuppositions about your data, the XmR chart treats them in a completely empirical manner. If you are not certain about how to organize your data into subgroups, you can start with subgroups of size one; it is hard to mess up the subgrouping on an XmR chart. As long as you pay attention to the two requirements of comparing apples to apples and allowing the moving ranges to capture the routine process variation, the XmR chart will separate the potential signals from the probable noise. The simplicity of the structure of the XmR chart also makes it easy to explain to others. In the words of one supervisor, “Nothing can hide on that XmR chart—every point has to sink or swim on its own.”

So while it is possible to put your data on an XmR chart in such a way that the technique will not work, with minimal thought and some process knowledge you can organize your data so that the XmR chart will work as intended. The flexibility of the XmR chart makes it the Swiss Army Knife of process behavior charts. So ignore the distractions; pay attention to rational sampling; put your data on an XmR chart; and start learning how to improve your quality, your productivity, and your competitive position!

References

[1] Economic Control of Quality of Manufactured Product, by Walter A. Shewhart (D. Van Nostrand Company Inc., 1931, p. 314)
[2] “On the Mean Successive Difference and Its Ratio to the Root Mean Square,” by A. R. Kamat (Biometrika, 1953, Vol. 40, pp. 116–127)
[3] “The Mean Square Successive Difference,” by J. von Neumann, R. H. Kent, H. R. Bellinson, and B. I. Hart (Annals of Mathematical Statistics, 1941, Vol. 12, pp. 153–162)
[4] Balistique Expérimentale, by E. Vallier (1894)
[5] Exterior Ballistics, by C. Crane and K. Becker (His Majesty’s Stationery Office, translated from the second German edition, 1921)
[6] Quality Control Charts, by B. P. Dudding and W. J. Jennett (British Standards Institution, 1942)
[7] Technological Applications of Statistics, by L. H. C. Tippett (John Wiley and Sons, 1950)
[8] ASTM Manual on Quality Control of Materials, Special Technical Publication 15-C (American Society for Testing and Materials, 1951)
[9] “Estimating Variability from the Differences Between Successive Readings,” by J. Keen and D. Page (Applied Statistics, 1953, Vol. 2, pp. 13–23)
[10] Statistical Quality Control Handbook (Western Electric, 1956)
[11] Understanding Statistical Process Control, by Donald J. Wheeler and D. S. Chambers (SPC Press, 1986)
[12] Understanding Variation: The Key to Managing Chaos, by Donald J. Wheeler (SPC Press, 1993)
[13] “Book Excerpt—Understanding Variation,” by Donald J. Wheeler (Quality Digest, August 1993, Vol. 13, pp. 22–28)
[14] “Rational Sampling,” by Donald J. Wheeler (Quality Digest Daily, July 1, 2015)
[15] “Process Behavior Charts as Report Cards,” by Donald J. Wheeler (Quality Digest Daily, June 6, 2016)
[16] “Properties of Probability Models: Part Three,” by Donald J. Wheeler (Quality Digest Daily, Oct. 5, 2015)
[17] “Statistical Control in Applied Science,” by Walter A. Shewhart (Transactions of the ASME, April 1943, pp. 222–225)
[18] “Two Routes to Process Improvement,” by Donald J. Wheeler (Quality Digest Daily, May 5 and May 6, 2010)
[19] “Detecting a Shift in Process Average: Tables of the Power Function for X-bar Charts,” by Donald J. Wheeler (Journal of Quality Technology, October 1983, Vol. 15, No. 4, pp. 155–169)
[20] Advanced Topics in Statistical Process Control, by Donald J. Wheeler (SPC Press, 1995), Chapter 10, “Comparing Different Types of Control Charts”
[21] “Myths About Process Behavior Charts,” by Donald J. Wheeler (Quality Digest Daily, Sept. 7, 2011)
About The Author

Donald J. Wheeler
Dr. Wheeler is a fellow of both the American Statistical Association and the American Society for Quality who has taught more than 1,000 seminars in 17 countries on six continents. He welcomes your questions; you can contact him at djwheeler@spcpress.com.