

Quality Insider

Published: Tuesday, April 9, 2013 - 10:20

I received the following note from a physician who is very interested in improvement: “I am not sure I understand what a process behavior chart and a moving range chart do to the discussion, and what do the colored lines represent? aka ‘Still confused.’”

His comment was in response to my column, “What Are *You* Tolerating?”, which included the two graphs in figure 1. I’d like to take a step back and address his question in the hopes of helping him (and you) avoid yet another vague meeting about a vague problem with some vague data. “Off to the Milky Way,” incidentally, was one of W. Edwards Deming’s favorite expressions and describes that scenario very well.

When you are at meetings like this and don’t have your computer, there are still some very useful, simple things you can do by hand that will take all of 10 to 20 minutes and could drastically change the room’s conversation and the project's direction.

**Ellis Ott: “First, you plot the data. Then you plot the data. Then, you plot the data.”**

Good improvement practice frames any situation as a process. One needs to assess the process stability to determine whether a common cause and/or special cause strategy should be used. How is this done? Plot the dots! You could easily sketch a run chart (i.e., a time-ordered plot with the median of the 23 observations shown in figure 2 drawn in as a reference line). Here are the actual data, the data sorted to find the median, and the resulting run chart:

Because the run chart shows no evidence of any shifts, one can legitimately take the average of all the data. During these 23 months, there were a total of 125 falls: Process average = 125 / 23 = 5.4.

So there are now two key questions:

1. Given that the process average is 5.4, what is the expected range of observed falls for any given month?

2. Of course, most of the time, two consecutive months will probably have different numbers, but how much of a difference between two consecutive months is “too much”?

To answer these, we must calculate the moving ranges. The moving range is defined as the absolute value of the difference between any time-ordered observation and its *immediate* predecessor. Its value is always positive, and it turns out to have very nice statistical properties. In finding the moving ranges of a set of time-ordered data, the average moving range or median moving range contains the information needed to understand the inherent variation (i.e., common cause) of a process (see figure 3).
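The definition above is easy to sketch in a few lines of code. Note that the article's actual 23 monthly values are not reproduced here, so the numbers below are illustrative only:

```python
from statistics import median

# Illustrative monthly fall counts (NOT the article's actual data)
falls = [4, 7, 2, 9, 5]

# Moving range: absolute difference between each observation and its
# immediate predecessor -- always non-negative
moving_ranges = [abs(b - a) for a, b in zip(falls, falls[1:])]
print(moving_ranges)          # [3, 5, 7, 4] -- n observations yield n - 1 ranges

print(median(moving_ranges))  # 4.5 -- the median MR, easy to find by hand
```

As the example shows, 5 observations yield 4 moving ranges, which is why the article's 23 values yield 22.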

The moving ranges for these 23 values are shown in the table (Note: 23 values yield 22 moving ranges). They are also sorted to find the median value; in a boring meeting without a calculator, this is much easier to do than calculate an average.

Many of you have learned to make process behavior charts using the *average* moving range, which, theoretically, is considered (slightly) more “accurate.” For most purposes, however, using the *median* moving range is usually good enough. The procedures are identical, except that when using the average moving range, one uses the constant 2.66 in place of 3.14, and 3.268 in place of 3.865, in the calculations below. These constants are all derived from statistical theory and can be used with the moving ranges from pretty much *any* time-ordered set of data, regardless of length.

So, to answer question one for these data, the common cause range of expected values is:

5.4 +/– (3.14 x median moving range) = 5.4 +/– (3.14 x 3)

Or approximately -4 to 15. But since you can’t have negative falls, the lower limit would be zero, not -4.

(If we were using the average MR, our limits would be approximately 0 to 14.)
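The arithmetic above can be verified in a few lines; here is a minimal sketch using only the numbers given in the article (process average 125/23, median moving range 3):

```python
center = 125 / 23  # process average, approximately 5.4
median_mr = 3      # median moving range from figure 3

# Common cause limits for the process behavior (individuals) chart,
# using the median-MR constant 3.14 from the article
upper = center + 3.14 * median_mr          # ~14.9, i.e., about 15
lower = max(0, center - 3.14 * median_mr)  # ~ -4, clamped to 0 (no negative falls)

print(round(upper), lower)  # 15 0
```

Clamping the lower limit at zero is a judgment call for count data, exactly as the article does by hand.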

In chart form, this looks like the graph in figure 4. The red line (15) is the largest value that could be expected from common cause variation for this process “centered” at 5.4:

Many of you have been taught the classic nine Western Electric tests for special causes. They are programmed into many software packages and were applied to this process behavior chart:

1. Applying these standard tests, no special cause patterns are found.

2. They are averaging 5.4 falls a month, but for any one particular month, a value between zero and 15 (upper red-line limit) will manifest as a natural result.

3. All the data fall within this common cause range.

4. Getting people pizza for the month where zero was obtained was most probably treating a common cause as if it were special.

Now, about that increase from zero (seen in October 2012) to nine (in November 2012). To answer question two, the *maximum expected difference* between any two *consecutive* months due to common cause is:

(3.865 x median moving range) = (3.865 x 3)

Or approximately 12 (approximately 11 if using the average MR).
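The same check takes one line: the upper limit for the moving range chart is the constant 3.865 times the median moving range, again a sketch using the article's numbers:

```python
median_mr = 3  # median moving range from figure 3

# Upper limit for the moving range chart: the largest month-to-month
# jump that common cause variation alone can produce
max_consecutive_diff = 3.865 * median_mr
print(round(max_consecutive_diff))  # 12 -- so the 0-to-9 jump is unremarkable
```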

Figure 5 shows this in chart form:

The charts have demonstrated that the variation is *common cause*. So, while the meeting was “off to the Milky Way,” poring over useless and inappropriate analyses (perhaps month after month?), you did all this in about 10 to 20 minutes. How would this now change the conversation? Do you dare?

Of course, they’re going to ask you what they should do instead. What do you tell them? You could start with one of Brian Joiner’s favorite expressions, “Don't just do something, stand there!”

## Comments

## moving ... first of all

After many years of often insignificant average & range charts, in a recent statistics refresher course a statistician brought back to light the moving range approach that I learned at school in the late '60s. Maybe in those years production batches were as small as they have become since 2008 or so. It may also be that statisticians and their associated software makers have been milking industry for decades, and they now have to turn back to earn a meal out of their job. Isn't it odd that people charting everything don't chart their own processes? Thank you.

## Thank you!

As always, Umberto, a heartfelt comment. The data plotted here are hardly "production batches," but, in service industries, a run chart/Individuals chart is THE bread-and-butter tool and extremely powerful as an initial analysis.

I have an MS in statistics, yet am seen as "pariah" amongst my peers -- they just keep developing "better jackhammers to drive tacks." At a conference a couple of weeks ago, someone mentioned how they had been mentored by a (relatively well-known) applied statistician; but my approach was much simpler, EVERY bit as effective, and much more intuitively understandable for people who aren't (and who don't want to be) statisticians.

Historical note: There were many internal statistical consulting groups in the '50-'70s who were allowed to act as (relatively high-salaried) "corporate eggheads" because they would run an occasional design that would salvage a major disaster. A LOT of good research came out of these groups. In the early '90s, I predicted that most of these groups would be analyzed as a business and be disbanded in 10 years...and have been proven right.

The easy availability of computers, software packages, and color printers -- coupled with statisticians' recalcitrance -- has rendered the formally degreed statistics profession moribund (except for pure research). In 1984, I saw a handwritten note from Deming to a very famous applied statistician at GE who shared some work with him in quality and productivity, "Sorry about your misunderstanding -- TOTAL! When will statisticians wake up?" From what I see, most of them are still sleeping...

Kind regards to my paesano,

Davis