



Published: 10/03/2008
Suppose you had 16 months of data on an important process (as plotted on the run chart seen in figure 1). For improvement purposes, an intervention was made after the sixth observation to lower this key process indicator. This intervention is equivalent to creating a special cause for a desired effect.
There is no trend as defined statistically (despite the downward trend of length five from observations 5 to 9; with 16 data points, a signal would require a trend of length six, i.e., five successive decreases). Nor is there any run of length eight either all above or all below the median. So, would you want to conclude that the intervention had no effect? I doubt it!
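(For readers who like to verify such rules in software, here is a minimal Python sketch of the two tests just described. The 16 numbers are hypothetical stand-ins shaped like the figure 1 pattern, not the actual figure data.)

```python
# Minimal sketch of the trend test and the run-of-length-eight test.
# The data below are hypothetical values shaped like figure 1 (intervention
# after observation 6), not the article's actual numbers.
import statistics

def longest_trend(data):
    """Length (in points) of the longest run of successive increases or decreases."""
    longest = current = 1
    direction = 0  # +1 rising, -1 falling, 0 undecided
    for prev, curr in zip(data, data[1:]):
        step = (curr > prev) - (curr < prev)
        if step != 0 and step == direction:
            current += 1
        elif step != 0:
            direction, current = step, 2
        else:          # a tie resets the trend in this simple version
            direction, current = 0, 1
        longest = max(longest, current)
    return longest

def longest_run_about_median(data):
    """Length of the longest run of points all above or all below the median."""
    median = statistics.median(data)
    longest = current = side = 0
    for x in data:
        s = (x > median) - (x < median)
        if s == 0:     # points exactly on the median are ignored
            continue
        current = current + 1 if s == side else 1
        side = s
        longest = max(longest, current)
    return longest

data = [52, 55, 56, 53, 54, 51, 50, 45, 44, 46, 43, 45, 42, 44, 51, 47]
print(longest_trend(data))             # 5: below the trend signal of 6 for 16 points
print(longest_run_about_median(data))  # 7: below the run signal of 8
```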
Think of the median as a reference, and consider each data point as a 50-50 coin flip (heads = above the median; tails = below the median). The question is: Would you expect to flip a coin 16 times and obtain the specific pattern of seven heads (run of length seven above the median), immediately followed by seven tails (run of length seven below the median), then a head (run of length one above the median) and, finally, a tail (run of length one below the median)? Intuition would seem to say, “No.” Is there a statistical way to prove it?
There is a third test, used in conjunction with the trend and run-of-length-eight tests. It is based on the total number of runs observed above and below the median in a data set. A run is counted from the first data point until it is broken by crossing the median (points that fall exactly on the median are ignored for this analysis). From the graph, the 16 data points yielded only four runs total (of lengths 7, 7, 1, and 1): two above and two below the median.
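(Here is that counting rule as a short Python sketch, again using the hypothetical figure-1-style numbers from the sketch above.)

```python
# Minimal sketch of counting the total number of runs about the median.
import statistics

def count_runs_about_median(data):
    """Total runs above/below the median; points exactly on the median are skipped."""
    median = statistics.median(data)
    sides = [(x > median) - (x < median) for x in data if x != median]
    runs = 1 if sides else 0
    for prev, curr in zip(sides, sides[1:]):
        if curr != prev:       # each crossing of the median starts a new run
            runs += 1
    return runs

# Same hypothetical figure-1-style series as above.
data = [52, 55, 56, 53, 54, 51, 50, 45, 44, 46, 43, 45, 42, 44, 51, 47]
print(count_runs_about_median(data))   # 4 runs (lengths 7, 7, 1, 1)
```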
Looking in the left column of the table in figure 2, under “Number of data points,” find 16 and read across: 5-12 runs are expected to occur if the variation is only random (common cause). Four is below the expected lower limit of five. Hence, with low risk of being wrong, one can conclude that the special cause intentionally imposed after the sixth observation most probably did create the desired effect.
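(For the statistically curious, the 0.05 risk behind that table entry can be checked directly. The short Python sketch below uses the standard combinatorial distribution of the number of runs, the same machinery behind such tables, to compute how often purely random variation would give four or fewer runs when eight points fall on each side of the median. Exact table conventions vary slightly, so treat it as an illustration, not a replacement for the references cited below.)

```python
# Exact probability of seeing so few runs under purely common-cause variation,
# using the standard combinatorial runs distribution (an illustration only;
# published tables may use slightly different conventions).
from math import comb

def prob_runs(r, a, b):
    """P(exactly r runs) given a points above and b points below the median."""
    if r % 2 == 0:
        k = r // 2
        ways = 2 * comb(a - 1, k - 1) * comb(b - 1, k - 1)
    else:
        k = (r - 1) // 2
        ways = comb(a - 1, k - 1) * comb(b - 1, k) + comb(a - 1, k) * comb(b - 1, k - 1)
    return ways / comb(a + b, a)

# 16 points, 8 above and 8 below the median, 4 or fewer total runs:
p = sum(prob_runs(r, 8, 8) for r in range(2, 5))
print(round(p, 4))   # about 0.0089, comfortably below the table's 0.05 risk
```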
Generally, a successful intervention will tend to create fewer than the expected number of runs. It is relatively rare to obtain more than the expected number of runs; in my experience, this usually indicates that two different process inputs are unintentionally being sampled alternately, so the plot reflects both.
I also encountered a situation where someone had fudged data by trying to make it look random, and did too good a job. There is such a thing as being “too random!”
So, unless one gets fewer than the expected number of runs, one must assume, based on the current data, that there is no evidence of process change. Process Quality Control: Troubleshooting and Interpretation of Data, Fourth Edition, by Ott, Schilling, and Neubauer (ASQ Quality Press, 2005), and Acheson J. Duncan’s classic, Quality Control and Industrial Statistics, Fifth Edition (Richard D. Irwin, 1986), discuss runs analysis extensively and contain more tables. My table uses p < 0.05.
Subsequently plotted data may tell a different story, at which point one could choose either to abandon the current strategy or to try a different intervention. But then again, given the absence of special-cause signals, one could also proceed to making an I-chart of the data, which has a few more powerful tests for detecting special causes. I will show an example of this next month.