Davis Balestracci

Six Sigma

Control Charts: Simple Elegance or Legalized Torture?

Once again, I’m beginning to understand Deming’s hatred of statistical ‘hacks’

Published: Monday, January 6, 2014 - 10:26

For all the talk about the power of control charts, I can empathize when audiences taking mandated courses on quality tools are left puzzled. When I look at training materials or books, they tend to bog down in the mechanics of construction without offering a clue about interpretation.

Some seminars even teach all seven control charts! And then there is the inevitable torturous discussion of "special cause tests" (usually the famous eight Western Electric rules). People are then left even more confused. Does each test signal need to be individually investigated, i.e., treated as a special cause? Not to worry—most people usually investigate only the points outside the control limits. The focus tends to be on individual observations. But what if there is one underlying explanation generating many of these signals that has nothing to do with individual outliers, e.g., a step change?

Someone once presented me with the graph shown in figure 1. (Yes, the y-scale started at 0.) It almost convinces you that there is a trend, eh?

Figure 1: Trend analysis of percentage goal compliance

I can almost picture Six Sigma Black Belt No. 1 scolding them: "Now, now, now. Test the data for normality, and if it passes, you need to plot that as a control chart." Note: It does indeed pass (p-value = 0.507), but the test is totally inappropriate and irrelevant.

Using standard control chart software that also performs the eight Western Electric tests, the individuals' chart for percentage conformance is shown in figure 2:

Figure 2: Individuals chart of percentage conformance to goal

Look at all those special causes shown on the top line of the graph in figure 2. Sixteen of the 52 data points generate 30 signals. Where should you start? Many people would investigate the four points outside the three-standard-deviation limits (observations No. 9 and Nos. 50–52).
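
For readers curious where the limits on such a chart come from, here is a minimal sketch of the standard individuals (XmR) chart calculation: the center line plus or minus 2.66 times the average moving range. The data below are illustrative, not the article's actual 52 weekly values.

```python
# Individuals (XmR) chart limits: center +/- 2.66 * average moving range.
# The data here are made up for illustration.

def xmr_limits(data):
    """Return (lower, center, upper) natural process limits for an individuals chart."""
    center = sum(data) / len(data)
    # Moving ranges: absolute differences between consecutive points
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

lo, center, hi = xmr_limits([78, 80, 77, 79, 82, 76, 81, 78, 80, 79])
print(f"LNPL={lo:.2f}  CL={center:.2f}  UNPL={hi:.2f}")
# prints LNPL=71.02  CL=79.00  UNPL=86.98
```

Any point outside those limits is the classic "rule 1" signal most people investigate first.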

Then Black Belt No. 2 says, "The control chart needs to be adjusted for the trend." There's plenty of customer-friendly software that will do just that (figure 3):

Figure 3: Trend-adjusted percentage conformance to goal with seven special cause signals

That was obviously the solution. Figure 3 shows we're down from 30 special cause signals to seven. Better still, there are no data points outside the limits. Now what? Perhaps investigate each signal, but which one do you start with? And what about observations with more than one signal?

As I've said many times: The computer will do anything you want.

Trend lines and bar graphs seem to be the two most ubiquitous tools that people use for (alleged) analysis. Regression was my favorite course in grad school, and I'm very good at it. But I rarely use it. In my 30 years as a statistical practitioner, I have never ever seen an appropriate use of a trend line on data from a service industry (e.g., healthcare) plotted over time. Never. I have the utmost respect for my distinguished colleague Tom Nolan, and he agrees.

Over the years, I have developed an increasing affection for the much-neglected run chart: a time plot of your process data with the median drawn in as a reference (yes, the median, not the average). It is "filter No. 1" for any process data and answers the question: "Did this process have at least one shift during this time period?" (This is generally signaled by a clump of eight consecutive data points all above or all below the median.)
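
The eight-in-a-row rule is simple enough to sketch in a few lines of code. This is a hedged illustration with made-up data, skipping points that land exactly on the median, per the usual run-chart convention:

```python
# "Filter No. 1": flag a shift when eight or more consecutive points
# fall on the same side of the median. Data below are made up.
from statistics import median

def has_shift(data, run_length=8):
    """True if `run_length` consecutive points sit on one side of the median."""
    med = median(data)
    run, side = 0, 0
    for x in data:
        if x == med:
            continue  # points exactly on the median are not counted
        s = 1 if x > med else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Eight low weeks followed by eight high weeks: a clear shift signal
print(has_shift([72, 74, 71, 73, 72, 70, 73, 71,
                 85, 86, 84, 87, 85, 88, 86, 84]))  # prints True
```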

If it did, then it makes no sense to do a control chart at this time because the overall average of all these data doesn't exist. (Sort of like: If I put my right foot in a bucket of boiling water and my left foot in a bucket of ice water, on average, I'm pretty comfortable.)

All the time, I hear people saying, "We use run charts," but I see very few of them. It's generally taught as a boring prelude to the (allegedly) more important and powerful control chart. Usually that is left to the end of the training because it is the most difficult, which further complicates matters—people's brains are already full.

So, they stumble by rote through the hand calculations of each of the seven chart types and are then assured that they can now use their company's designated computer software to generate future charts—aided by a smart software add-in that guides them to the allegedly correct chart. Many computer packages don't generate run charts. Well, who needs them? Why not jump right to the more advanced control chart analysis of all the data? One can then look at the special cause signals and try to find reasons for each individual signal.

The poor run chart: It does not find individual special-cause observations, but that is not its purpose. The control chart is "filter No. 2"—plotting the data after the shifts have been determined, which then usually reduces the number of special cause signals and results in a lot less confusion.

What does the run chart of these data in figure 4 tell us?

Figure 4: Run chart of percentage conformance to goal

With the y-axis scale a lot healthier and no control limits as a distraction, doesn't it look like the process "needle" shifted twice—around August 17 (observation No. 21) and February 17 (observation No. 47)? In fact, when I asked the clients about those two dates, they looked at me like I was a magician and asked, "How did you know?" Those dates coincided with two major interventions to improve this process. As the chart in figure 4 shows, they worked—two distinct needle bumps (step-change special cause)—not a continuously increasing improvement trend.

In other words, a process goes from what it is "perfectly designed" to get with its original inputs to what it is "perfectly designed" to get with the new inputs. It eventually settles into the new average based on those inputs. This also puts the trend special-cause test (six successive increases or decreases) into perspective: A step change can manifest as a trend during the transition. It won't continue.
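
To make the point concrete, here is a sketch of that trend test with illustrative numbers: a one-time step change, caught mid-transition, is enough to trip the six-successive-moves rule even though no ongoing trend exists.

```python
# Trend special-cause test: six successive increases (or decreases).
# The data below are made up to show a step change mid-transition.

def trend_signal(data, n_moves=6):
    """True if there are n_moves consecutive increases or decreases."""
    up = down = 0
    for a, b in zip(data, data[1:]):
        if b > a:
            up, down = up + 1, 0
        elif b < a:
            down, up = down + 1, 0
        else:
            up = down = 0
        if up >= n_moves or down >= n_moves:
            return True
    return False

# Step change in transition, then a new steady state around 90-91:
print(trend_signal([78, 79, 81, 83, 86, 88, 90, 91, 90, 91]))  # prints True
```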

So, the correct resulting control chart is shown in figure 5...

Figure 5: Percentage conformance to goal sans special cause

...and there's not a special cause to be found (other than the programmed step changes).

Interpretation: Their original performance was 78.5 percent. Their first intervention improved the process to 83.8 percent, and their second intervention improved that further to 91 percent.

They had recently started yet another intervention, and based on the last four data points, it's looking relatively promising. What would indicate success?

1. There are three immediate increases. Two or three more would be good evidence (trend transition).
2. Maybe a weekly performance will go outside the upper limit (97.8%).
3. Maybe the next four to six weeks will all be above the average (91%).
4. Two out of three consecutive weeks' performances will be between 95.5 percent (two standard deviations above the average) and 97.8 percent (the three-standard-deviation upper limit). This is a very useful test known as the two-out-of-three rule.
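
The two-out-of-three rule from item 4 can be sketched in a few lines. This is an illustration using the article's two-sigma line (95.5 percent); the weekly values themselves are made up:

```python
# Two-out-of-three rule: signal when at least two of any three successive
# points fall beyond the two-sigma line (high side only, matching the
# article's success criterion). Weekly values are illustrative.

def two_of_three_beyond(data, two_sigma):
    """True if two of any three successive points exceed the two-sigma line."""
    for i in range(len(data) - 2):
        if sum(1 for x in data[i:i + 3] if x > two_sigma) >= 2:
            return True
    return False

print(two_of_three_beyond([92.0, 96.1, 94.8, 96.3], two_sigma=95.5))  # prints True
```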

And, not to overreact: If performance goes down from one week to the next, be advised that it could differ by as much as 8.4 percent from the previous week simply due to common cause (the upper limit of the moving range chart). What could be simpler—if taught correctly?
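
That week-to-week figure comes from the upper limit of the moving range chart, which is 3.268 times the average moving range. A minimal sketch with made-up weekly percentages (not the article's data):

```python
# Upper limit of the moving range chart: 3.268 * average moving range.
# Week-to-week changes smaller than this are consistent with common cause.

def mr_upper_limit(data):
    """Upper range limit for the moving range chart."""
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    return 3.268 * sum(moving_ranges) / len(moving_ranges)

weeks = [90.0, 92.5, 91.0, 93.0, 90.5, 92.0]  # illustrative weekly values
print(f"{mr_upper_limit(weeks):.2f}")  # prints 6.54
```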

Once again, I'm beginning to understand Deming's curmudgeonliness and his hatred of what he termed statistical "hacks."

How many hours are you spending in meetings looking at trends?

About The Author

Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.

Comments

There is another issue

Why are you waiting a year before creating a control chart? The statistician that waits to get a year's worth of data may be a "hack" by not understanding the importance of responding to events (good or bad) as soon as possible after they occur.

The second phase is clearly seen using 2 out of 3 points beyond 2-sigma limits in the first control chart. However, if the chart had been used as a monitoring tool with signals to be responded to when they occur, the more powerful control chart would have been based on the first phase in which even one point would have been enough to signal a change.

In addition, the article does not mention why the subgrouping used was appropriate for whatever purpose (also unstated) the control chart was intended to address.

Good questions...and I agree with you

Background: This occurred at the very end of a seminar and it was the first time I had seen the data. They wanted to use what I had just taught them for insight. Initially, I'm not sure there was any other purpose than "accounting for" their activity and using the trend to show that they had made "general improvement" -- i.e., there was no underlying process understanding or attempt to quantify the improvements. It was a passive use of a trend line to "see how we're doin'."

Regarding sub-grouping -- they had one point per week. At the end of the week, they looked at the number of events and the number of events that complied with a goal -- period. What you suggest leads to the idea that maybe they could disaggregate by day of the week to see whether certain days were special causes.

This is an example of what happens in many cases -- one has to take the data one is given and start from there to ask the questions that you pose. And the insights gained from this analysis form a good starting point for the questions. An "outsider" doesn't always have the luxury of planning a collection, but can certainly recommend a better plan. Many times, "hacks" are just concerned with applying tools and being pedantic about which ones. As Dr. Wheeler says, the purpose of analysis is insight.

Nice Article

Clearly written and concise. Great examples.

Too bad in my case (ISO/IEC 17025 assessments of environmental laboratories) there is no requirement for control charts, only trend analysis.

Your examples illustrate the importance of getting the whole picture of what is going on with the process, so that you can see when the control limits need to be re-assessed.

It was good to see actual dates/times instead of run numbers on a control chart for a change.

Thanks, Harold

Control charts are not always adapted

Hi Davis,

I'm OK with the global approach, but fundamentally, isn't it an error to use SPC on a process that is not stable, or not expected to be stable? And a process we are correcting is not expected to be stable, no? In that case, SPC and the control chart are not the appropriate tools, and surely the run chart is better adapted.

Maybe the training is to blame, or the visualization of the expected response. If I expect a nonstable response, for any reason, then I can start by choosing the correct tool.

Thank you

Luc

Control charts on an unstable process (Luc)

Careful -- one of the lingering myths about control charts is that they are to be used only on processes in control. HOW ELSE can you determine that the process is unstable? It could just be as simple as using a run chart to determine where there are step changes, as in my example. If you can't use a control chart at all, then your process could be in Dr. Wheeler's state of "chaos" -- i.e., totally unpredictable. Even though at times it SEEMS as if this is true, how else can you determine it and then use it to look at the effects of your interventions? It NEVER hurts to begin by plotting some process data over time. Most statistics courses (enumerative) treat the process as if it's a "pond" and can be characterized by repeated sampling. In the real world (analytic), processes are more like whitewater rapids! You can maybe characterize one sample, but how would it compare to the next sample? And even though it might SEEM chaotic, it could still be "stable."

Bravo on real-world statistics

I don’t have all the credentials that you do, but I concur with your real-world, practical assessment of statistics. I have a master's in statistics and have been teaching and using statistical methods daily for almost 30 years. I have been fortunate to work for organizations that valued statistical analysis and thinking long before Six Sigma was rolled out, so we never used SS – we were further ahead already. Except for one area of business (not manufacturing) I have done few regressions as well. I bristle every time someone mentions testing for normality as the first step. I have convulsions anytime someone says we have to transform the data because it isn’t normal and they don’t know why other than the book or teacher says so.

 

If you asked any student of mine what the first thing they should do any time they have data, they would unequivocally state “plot a run chart”.  I, too, have learned by the school of hard knocks that more information is gained by this one simple tool than anything else, and if it is not stable then “do not pass go, do not collect $200”.  Even if data is not collected serially, it’s worth the 10 seconds it takes to plot it.  Outliers, indicators of stratification, etc. will show up quickly and you don’t waste time creating graphs and statistics that are meaningless. 

 

As for trends, my concept is simple.  I tell my classes, clients and mentees, “Anyone can look at the clouds and find a rabbit.  When you look at a scatterplot of residuals or a time series plot, you are not looking for rabbits in clouds.  The patterns, trends, signals, etc. have to be clear to everyone.  Otherwise, move on.”

Thank you for your comments (KKBARI)

I have more credentials? -- I doubt it! I have the same MS in statistics that you do...and the same 30 years of hard knocks! We are indeed "colleagues." BRAVO! on your approach in the real world. You have no idea how lucky you are to have supportive management. Corporate culture wore me down.

The bigger challenge:  Have you had any luck in applying similar thinking to everyday leadership? -- i.e., cutting executive meeting time where data is involved by HALF and stopping middle management from "drawing little circles" and asking the front line "Why" a number went up...or down...or was red...or yellow...? Try as I might, executive resistance to applying these ideas to daily management is still FIERCE.

ALL the best, Davis

Resistance

Hi Davis,

I'd like to quote Dr. Wheeler here:

"Unfortunately, there are people who are afraid of clarity because they fear it may not seem profound."

For a species that is naturally lazy, I would think we would all want the simple, easy way to get a result. But management, apparently, is different: looking for complexity everywhere.

Best regards, Shrikant Kalegaonkar (LinkedIn: http://www.linkedin.com/in/shrikale/, Twitter: @shrikale)

Words To Be Banned

The hackles on the back of my neck always go up whenever someone showing data in a meeting uses the words "trend" or "up-tick". Ugh!!! We need to ban these words from intelligent conversations... or maybe the implication that the conversation is "intelligent" assumes these two words are already banned! ;-)

Definition of "trend?"

Good question. I'm not sure. Actually, if I had my way, the term "trend" would be banished from corporate language unless used in the "qualitative" sense of "tendency." Whenever someone blurts out the term "trend" because it seems like the thing to do, why don't you ask them exactly what they mean -- you will get as many different answers as people you ask. My use in the article is to call it a "transition" to a new steady state based on changed inputs to the process.

Interesting !

As someone once said, "If you torture the data enough, it will confess."

Also, I think Dr. Deming actually meant statistical "quacks." To me, "trend" means "fashion," as in "trendy"! But we should check with a data scientist for a better definition!

Control charts are quite powerful compared to a run chart, but it all depends on how one wishes to use them.

Definition of "Trend"

Hello Davis:

What is your definition of "trend"?

Thank you, Dirk