Published: Monday, February 1, 2010  06:00
In my column of Jan. 7, “The Right and Wrong Ways of Computing Limits,” I looked at the problems in computing limits for average charts. This column will consider the right and wrong ways of computing limits for charts for individual values. As before, a data set will be given that you can use to evaluate your own software so that, hopefully, you can select an option that results in the correct limits. But first, some history might be of interest.
On page 314 of Economic Control of Quality of Manufactured Product (Van Nostrand, 1931 original publication; American Society for Quality Control, 1980 commemorative reissue), Walter A. Shewhart wrote:
“Obviously, if the cause system is changing, the sample [subgroup] size should be as small as possible so that the averages of samples do not mask the changes. In fact single observations would be the most sensitive to such changes. Why then do we not use a sample size of unity? The answer is that if we do, we are faced with the difficulty of choosing the standard deviation to be used in the control charts. … In fact, the sensitivity of the test [the process behavior chart] will increase, in general, with the decrease in subgroup size until the size of the subgroup is such that the data in any given subgroup come from a constant system of chance [common] causes. In the absence of any a priori information making it possible to divide the data into rational subgroups, there would be some advantage therefore in reducing the subsample [subgroup] size to unity.”
Thus, from the very beginning, the philosophy behind the process behavior chart has been pushing us in the direction of the chart for individual values. Initially, a technical problem prevented us from reaching this goal. While Shewhart found a way around this problem for certain types of count data (p-charts and c-charts are charts for individual values), his solution would not generalize to other types of data.
In 1942, W. J. Jennett provided an answer to the problem posed by Shewhart. In his work with the M O Valve Co. Ltd., Jennett proposed using a technique that had been used in studying the ballistics of artillery shells in the 1890s. This technique was known as the method of successive differences. By using the differences between successive values as a measure of dispersion, Jennett effectively created the chart for individual values and a moving range, or XmR chart.
The attribution to Jennett comes from a 1953 article by Joan Keen and Denys J. Page in the journal Applied Statistics, which claims that the XmR chart was used extensively at General Electric Ltd. in Wembley, England, from 1943 to 1953. During this period, it also appears to have found its way into the textile industry, since physicist and statistician Leonard H.C. Tippett included a description and example of the XmR chart in his 1950 text. In this same time period, theoretical work by John von Neumann of the Institute for Advanced Study in Princeton, and by H.O. Hartley and A.R. Kamat of University College London, established that the average two-point moving range is 100-percent efficient, and that it is sufficiently similar to the ordinary range for n = 2 that the usual scaling factors can be used. In short, the two-point moving range has the highbrow theoretical ancestry that we like to have in our statistical techniques. Moreover, its use in the XmR chart is completely consistent with this mathematical background.
However, for the next 40 years, in book after book, the XmR chart would get no more than a brief mention as a miscellaneous technique. Then, one night in 1985 while we were at dinner, Deming asked me to tell him about this strange chart he had seen at a company where we were both consulting. It turned out that he had never seen an XmR chart before. While XmR charts had been around since 1942, they were barely known and rarely used.
Fortunately, in working with several different chemical companies during the 1980s, I had seen the power and utility of the XmR chart over and over again. With logical subgroups of size one, the XmR chart was ready-made for their single-point data. But it's not only the process industries whose data occur one value at a time. All sorts of service-sector, administrative, and managerial data fall into this category. So, in 1993, to open up new areas to the power and utility of process behavior charts, I produced a book on how to use XmR charts to transform the job of interpreting management data. This book, Understanding Variation: The Key to Managing Chaos, was excerpted in Quality Digest in August 1993, and has proven to be one of the most popular books on data analysis ever written.
Today, the XmR chart is an integral part of almost every package of SPC software. Unfortunately, just as with the average chart, most of these software packages provide options that compute the limits incorrectly.
The first hurdle to using a chart for individual values is the hurdle of rational subgrouping. Even though the subgroup size is unity, the fact that we are going to compute the limits using the successive differences means that successive values must be logically comparable. That is, under the conditions expected to prevail when the process is operated predictably, successive values should differ only by what amounts to routine variation. In this way, the moving ranges will capture the routine variation and the limits will filter out the routine variation. In other words, you cannot knowingly place apples and oranges together on an XmR chart. Of course, as always, rational subgrouping is a matter of judgment that requires process knowledge. Then, once the data have been organized in a rational manner, there remains the question of how to compute the limits.
The first, and most common, way to compute limits for an XmR chart is to compute the average and the average moving range, and then to combine them using:
Approach 1
Limits for X = average ± 2.66 * average moving range
and upper limit for mR = 3.268 * average moving range
This approach should always be the default option for computing limits for individual values.
For our example, consider the first 16 values in the data set presented on page 20 of Shewhart's Economic Control of Quality of Manufactured Product. The values are the resistances, in megohms, for an insulation material. The values and moving ranges are shown in figure 1. The average is 4135.5, and the average moving range is 355.93. Using these values, we find the limits shown in figure 2.
Figure 1: Resistance values (X, in megohms) and two-point moving ranges (mR). Time order runs down each pair of columns, left to right.

   X     mR       X     mR       X     mR       X     mR
 5045     —     4290    315    3980    305    3300    460
 4350    695    4430    140    3925     55    3685    385
 4350      0    4485     55    3645    280    3463    222
 3975    375    4285    200    3760    115    5200   1737
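As a check on the arithmetic, here is a minimal Python sketch (the variable names are mine, not from any SPC package) that reproduces the approach-1 limits from the 16 figure-1 values:

```python
# Shewhart's insulation-resistance data (figure 1), in megohms, in time order.
values = [5045, 4350, 4350, 3975, 4290, 4430, 4485, 4285,
          3980, 3925, 3645, 3760, 3300, 3685, 3463, 5200]

# Two-point moving ranges: absolute differences of successive values.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]

x_bar = sum(values) / len(values)                  # 4135.5
mr_bar = sum(moving_ranges) / len(moving_ranges)   # 355.93

# Approach 1: scale the average moving range with the usual constants.
x_ucl = x_bar + 2.66 * mr_bar
x_lcl = x_bar - 2.66 * mr_bar
mr_ucl = 3.268 * mr_bar
```

With these limits, only the final value (5200) exceeds the upper limit on the X chart, and only the final moving range (1737) exceeds the upper range limit.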
Here, one observation and one moving range fall outside their limits. The large moving range confirms the impression from the X chart that a sudden change occurred between the 15th and 16th values. This is sufficient evidence to say that this process is changing. When this happens, the important questions are no longer questions about the process location or the process dispersion, but rather questions about what is causing this process to change. Estimation is moot. Discovery of the assignable cause is paramount.
An alternate way to compute limits for an XmR chart is to use the median moving range:
Approach 2
Limits for X = average ± 3.145 * median moving range
and upper limit for mR = 3.865 * median moving range
Because of the lower efficiency of the median moving range, this approach should not be used as the default approach. However, when some large moving ranges may have inflated the average moving range, this approach will allow the computation of limits that will be less severely inflated by those large ranges. For the data of figure 1, the median moving range is 280.0, resulting in the XmR chart shown in figure 3.
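The approach-2 limits can be checked the same way. This sketch (again, the names are mine) uses the standard library's statistics.median:

```python
import statistics

# The 16 figure-1 values, in time order.
values = [5045, 4350, 4350, 3975, 4290, 4430, 4485, 4285,
          3980, 3925, 3645, 3760, 3300, 3685, 3463, 5200]
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]

x_bar = sum(values) / len(values)
mr_median = statistics.median(moving_ranges)   # 280.0

# Approach 2: scale the median moving range with its own constants.
x_ucl = x_bar + 3.145 * mr_median
x_lcl = x_bar - 3.145 * mr_median
mr_ucl = 3.865 * mr_median
```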
Here we find two individual values and one moving range outside the limits. Compared with the default approach, the limits here are 3.5-percent narrower. While the details change, both figure 2 and figure 3 tell the same story about this process. This process is changing without warning, and we need to find out why.
A variation on approach 2 would be to use the median of the X values as the central line for the X chart.
Approach 2.1
Limits for X = median ± 3.145 * median moving range
and upper limit for mR = 3.865 * median moving range
It is unlikely that you will need to use the median of the X values except in conjunction with the median moving range. This is because the average moving range is more prone to inflation than the average of the X values, so that any time the median of X is appropriate, the median moving range will also be appropriate. For figure 1, the median of the X values is 4132.5, which leaves figure 3 essentially unchanged.
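The same sketch extends to approach 2.1 by centering the X chart on the median of the X values instead of the average:

```python
import statistics

# The 16 figure-1 values, in time order.
values = [5045, 4350, 4350, 3975, 4290, 4430, 4485, 4285,
          3980, 3925, 3645, 3760, 3300, 3685, 3463, 5200]
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]

# Approach 2.1: median of X as the central line, median moving range for dispersion.
x_median = statistics.median(values)           # 4132.5
mr_median = statistics.median(moving_ranges)   # 280.0
x_ucl = x_median + 3.145 * mr_median
x_lcl = x_median - 3.145 * mr_median
```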
There is another computation that is technically right but practically wrong. This is the use of the square root of the mean square successive difference. To find this measure of dispersion we begin with the moving ranges and then (a) square each moving range, (b) average them, (c) divide by 2.0, and (d) find the square root. This root mean square successive difference (RMSSD) is then used with the average to compute limits for the X chart. This is usually done using the formula:
Approach 3
Limits for X = average ± 3 * RMSSD
Typically, this approach doesn't include a chart for dispersion. For the data of figure 1, the root mean square successive difference is RMSSD = 382.95. This approach results in the limits shown in figure 4.
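Steps (a) through (d) above translate directly into code. This sketch (my own, for illustration) reproduces the RMSSD for the figure-1 data:

```python
import math

# The 16 figure-1 values, in time order.
values = [5045, 4350, 4350, 3975, 4290, 4430, 4485, 4285,
          3980, 3925, 3645, 3760, 3300, 3685, 3463, 5200]
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]

x_bar = sum(values) / len(values)

# Steps (a)-(d): square each moving range, average, divide by 2.0, take the root.
rmssd = math.sqrt(sum(mr * mr for mr in moving_ranges)
                  / len(moving_ranges) / 2.0)   # 382.95

# Approach 3: symmetric limits for the X chart only; no dispersion chart.
x_ucl = x_bar + 3 * rmssd
x_lcl = x_bar - 3 * rmssd
```

With these limits, the X chart shows no points outside, as described for figure 4.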
The RMSSD has exactly the same degrees of freedom as the average moving range. This means that, as an estimator of the process dispersion, it has the same efficiency as the average moving range. However, as we saw earlier, the question here is not one of estimation, but rather one of finding what is causing the changes in the process. In this regard the RMSSD is not equivalent to the average moving range, for it lacks the required robustness.
The RMSSD lacks robustness because it averages the squared ranges, which makes it more prone to being inflated by large range values. This will inflate the limits and desensitize the chart. Compared to the default approach in figure 2, the RMSSD limits in figure 4 are inflated by 11 percent. As a result, figure 4 has no points outside the limits. Since this approach is no more efficient, yet is more prone to being inflated, we must conclude that the RMSSD has nothing to offer except complexity for the sake of complexity. There is simply no situation where it will ever prove to be advantageous, and it should therefore be avoided in practice.
There are only two measures of dispersion to use when creating a chart for individual values: The average moving range and the median moving range (approaches 1 and 2 above). The use of any other measure of dispersion is wrong.
Nevertheless, many software packages compute limits for the X chart using:
Approach 4
Limits for X = average ± 3 * global standard deviation statistic
While the formula above may be said to give you symmetric, three-standard-deviation limits, this computation is wrong simply because it will be substantially inflated whenever the data are not homogeneous. Because the purpose of the chart is to examine the data for possible evidence of a lack of homogeneity, any approach that implicitly assumes the data to be homogeneous is inherently and philosophically flawed. For the data of figure 1, the global standard deviation statistic is 525.14, resulting in the erroneous limits shown in figure 5.
Specifically, the global standard deviation statistic computed using all of the individual values is wrong for the same reason that it was incorrect for use with an average chart. While you might be able to compute reasonably correct limits when the data happen to be homogeneous, this approach will severely inflate the limits in the presence of signals of process changes. Compared with the default approach, the limits in figure 5 have been inflated by 23 percent. While this may paint a pretty picture of your process, it is simply nothing more than another way to lie with statistics.
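For comparison only (this is the incorrect computation), the approach-4 limits can be reproduced with the standard library's statistics.stdev:

```python
import statistics

# The 16 figure-1 values, in time order.
values = [5045, 4350, 4350, 3975, 4290, 4430, 4485, 4285,
          3980, 3925, 3645, 3760, 3300, 3685, 3463, 5200]

x_bar = sum(values) / len(values)

# Approach 4 (wrong): the global (n-1) standard deviation of all 16 values.
s = statistics.stdev(values)                   # 525.14

x_ucl = x_bar + 3 * s
x_lcl = x_bar - 3 * s
```

Here every point, including the obvious signal at 5200, falls inside the inflated limits.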
It has been suggested that the moving range (mR) chart adds so little to the X chart that you should not bother to show it. Certainly, approaches 3 and 4 ignore the mR chart. Sometimes even software using approaches 1 and 2 will drop the mR chart. The basis for this recommendation seems to be the documented fact that—in theory at least—the combined XmR chart does not have any appreciably greater ability to detect signals than does the X chart alone. However, this mathematical analysis overlooks the interpretative benefits to be gained by including the mR chart.
For example, inspection of figure 2 will show that the last point could have fallen well within the limits on the X chart (say 4,700 or above) and still would have resulted in an mR value that was above the upper limit on the mR chart. Thus, the mR chart can detect sudden shifts even when the individual values are still within the limits on the X chart. This ability to complement and reinforce the impressions gained from the X chart is one of the practical reasons that you should not ignore the mR chart.
A second practical reason to keep the mR chart is the fact that many software options compute the limits incorrectly. If you are shown a naked X chart you will have no way of knowing if the limits have been computed correctly. However, if you are shown an XmR chart, you will immediately have a higher level of confidence that the limits have been computed correctly. Moreover, by using the central line of the mR chart, you can quickly check to see if the limits are indeed correctly computed. Thus, the mR chart is the secret handshake of those who know the correct way of computing limits for an X chart. Omit it and your readers cannot be sure that you are a member of the club.
A third practical reason for keeping the mR chart is that it will allow you and your audience to check for the problem of chunky data. This problem occurs when the measurement increment is too large for the application at hand. When your data are chunky, you will find three or fewer possible values for the moving ranges (including zero) below the upper range limit. When this happens, the round-off in the measurements will obscure the process variation, and the excessive number of zero ranges that result will deflate the average moving range and tighten the limits. In consequence, the process behavior chart will have many false alarms and so will become unusable. The only remedy for this problem is to use a smaller measurement increment.
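The chunky-data test lends itself to a quick check. The helper below is a hypothetical sketch of the rule as stated (the function name and the explicit increment argument are my own, not from any published routine): count how many possible moving-range values, counting zero, fall below the upper range limit.

```python
def is_chunky(moving_ranges, increment):
    """Flag chunky data: three or fewer possible moving-range values
    (counting zero) fall below the upper range limit."""
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    mr_ucl = 3.268 * mr_bar
    # Possible mR values are multiples of the measurement increment:
    # 0, increment, 2*increment, ...; count those below the limit.
    possible = int(mr_ucl / increment) + 1
    return possible <= 3
```

For example, values recorded to the nearest 10 units whose moving ranges are only ever 0 or 10 would be flagged, while the figure-1 data, recorded to the nearest megohm, would not.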
So, while it may be difficult to find a mathematical justification for showing the mR chart, there are three practical reasons to do so, any one of which is sufficient to justify the inclusion of the mR chart with your X chart.
Thus, just as with average charts, there are right and wrong ways to compute limits for a chart for individual values. Because most software will provide you with the option of computing your limits incorrectly, it is your responsibility to determine how to get the right limits using your software. There are many novel ways of computing the limits incorrectly, but only two measures of dispersion that will yield correct limits for an XmR chart.
Comments
Distribution of values
Must the values be distributed according to a normal distribution in order to use these limits, or are they valid even for Weibull or other distributions?
As Dr. Wheeler documents in
As Dr. Wheeler documents in some of his writings, the XmR methodology (a.k.a. "process behavior charts") is valid regardless of the underlying distribution of the data (or even if it doesn't fit well with any distribution), which makes it extremely useful and applicable for real-world data.
deflation of control limits by using global dispersion measure
In an attempt to advocate the calculation of control limits for XmR charts by average mR × 2.660 over the use of a global dispersion measure (the global standard deviation), I evaluated sets of our real-world data (quality characteristics measured as one value per batch) and compared the limits for both ways of calculation. What I found was limits being inflated as well as limits being deflated in some cases. I checked my calculations and couldn't find a possible error so far. I could also not find the cause for the deflated limits. Can deflation of the limits happen? Is it also "only" due to inhomogeneity of the data, or is there a more specific reason behind it? I suspected short periods of decreased dispersion in the data from visual examination of the run charts, but couldn't find that systematically nor pin it down mathematically.
All my arguments seem to be obsolete now, because I claimed that limits could be inflated in the first place and now have to admit that deflation of limits is also possible. In fact, it might be seen as an optional choice to favor the method of calculation giving wider limits in the majority of cases over the other one.
I would be really thankful for hints on this issue!
Response for HMINDLER
I would be glad to discuss this directly with you.
Your results might be due to your approach. If the data come from a random number generator, then what you report may well be true. But when comparing limit computations using nonhomogeneous data, the use of the average moving range will result in tighter limits than any approach using a global measure of dispersion.
You can contact me at djwheeler@spcpress.com
Application of Nelson's Rule
I entered your data into a software program and it calculates a different average from the one on your R chart (their 257 vs. your 356). The author claims this is because Nelson's rule automatically removes any data point more than 3.5 times the mean from that calculation, so it removes the final difference before calculating the R chart mean. Your use of the median instead of the mean in the second set of charts has a similar effect, though not as extreme. Is there a justification for ignoring outliers when calculating the mean?
Thanks.
reply to Sprevette
1. We do not have to prequalify our data prior to placing them on a chart. These are simply the first 16 values out of Shewhart's larger data set. They are sufficient to show that the process is changing.
2. It only takes one counterexample to shoot down such a theory. The example provided is a real world data set where the two computations give different results.
3. Yes, lazy bones can compute the standard deviation easily. Unfortunately, this computation makes a strong assumption of homogeneity, which is essentially what we want to examine.
4. Dr. Shewhart did compare the root mean square deviation with the Range and correctly noted that there is a difference in their efficiency when the subgroup size gets large. He also notes that the differences are minimal when n is less than 10. In 1935 he was instrumental in writing the ASTM Supplement B which gave the scaling factors for using the range, so he was okay with this approach.
5. If you read Shewhart very carefully you will find that he rejects the Chebychev approach as not being sufficiently general to do what he wanted to do.
In general, the choice of statistic is not the issue, but rather the computation of within-subgroup measures of dispersion versus the computation of global measures of dispersion. You simply cannot change the underlying mathematics, no matter how many times you may read things to the contrary.
Hope this will help.
Charts Done Right....
Most authors that I've been exposed to use 8 consecutive points to indicate a special cause (run). I believe L.S. Nelson uses 9 points. I guess it depends on how exploratory or conservative you want your analysis to be. Since there is usually another data point around the bend, why not wait?
I believe the issue here is using a method to calculate an estimate of the standard deviation that minimizes the effect of signals even though the baseline data contain signals. This is what Wheeler describes as "getting good limits from bad data." We're not looking for "perfect" limits, but only useful limits that allow us to take the appropriate action.
Sigma
I do think that the "problems" with using the statistical estimate of the standard deviation need to be revisited. Some things to consider:
1. The example provided by Dr. Wheeler has a run of 7 points in a row below the average, which many authors (but not Dr. Wheeler, who uses 8 points) consider to be a signal, which would cause that data not to be put all into the same baseline.
2. In 16 years of operational SPC, I have yet to run into an example from real world data where the moving range and the statistical standard deviation provide two different interpretations of what is and what is not an outlier.
3. With modern spreadsheets, the statistical standard deviation is easier to calculate than the moving range conversion. That is the opposite of the situation in pre-computer days, when the moving range was developed.
4. Dr. Shewhart himself, in Economic Control of Quality of Manufactured Product evaluates several methods for determining the spread of the data, and he rates the statistical standard deviation as best (better than the range).
5. Dr. Shewhart and other authors do invoke the Chebyshev Inequality as the theoretical basis for SPC. The Chebyshev Inequality is nonparametric and uses the statistical estimate for the standard deviation. The moving range formula (2.66 times the average moving range) has the Normal distribution built into it as an assumption.
Autocorrelated data
Great article,
I am convinced that control charting is the most effective way to monitor data and process control. However, I am facing problems trying to track my autocorrelated data from a chemical system that is influenced by external assignable causes. It is impossible to eliminate these external causes because they are linked to periodic, unpredictable production and raw-material variations.
I am sure that the moving ranges of my successive analysis data are not correctly reflecting routine variation, because the small differences are induced by assignable sources (which are part of the longer-term routine variation). So when constructing an XmR chart on these data, the points just keep swinging out of the limits all the time. Trying to use a moving average chart yields too low a sensitivity and is no solution either.
My question is: What is the right way to subgroup autocorrelated data in cases like those described above, yielding an effective estimate of routine variation for computing the process control limits?
Sincerely thanks for your inputs!
Frank
Autocorrelated Data
I have written a chapter on dealing with autocorrelated data. It is in my Advanced Topics in Statistical Process Control book. Send me an email and I will respond more completely. Know this: Autocorrelation is simply another way a process has of telling you that it is changing.
Hope this will help.
Great article
Another informative and eloquent article from the master, which stands in marked contrast to some of the Six Sigma and hypothesis-testing-based rubbish on control charts published in this journal recently.