
Davis Balestracci

Quality Insider

A Statistician’s Favorite Answer: ‘It Depends,’ Part 1

When should I use what chart?

Published: Monday, March 14, 2011 - 05:30

Quality improvement people sure love those tools. A particular favorite, of course, is the control chart, of which, I think, seven are usually taught. Two questions I’m always asked are, “Which chart do I use for which situation?” and “When and how often should I recalculate my limits?”

Wrong questions!

Regarding the first (we’ll deal with the second question in part 2), I’ve seen many flowcharts in books to help you determine which chart to use for which situation. I find them far too confusing for the average user. (They even give me sweaty palms.) I don’t even teach this in my work.

As my respected colleague Donald Wheeler likes to say, “The purpose is not to have charts. The purpose is to use the charts…. You get no credit for computing the right number—only for taking the right action. Without the follow-through of taking the right action, the computation of the right number is meaningless.”

So let’s get back to basics… again

As I said in my Feb. 21 newsletter, “Making Variation Disappear on Paper Doesn’t Make It Disappear in Reality,” after you know how your data were defined and collected, the next step is to assess the process that produced the data. This is done via a run chart (or as I like to say, “filter No. 1”) and most of the time, a subsequent control chart (“filter No. 2”). This rationale was explained in my Jan. 24 newsletter, “What Part of ‘Never’ Don’t People Understand?”

Because the data are in a time sequence (or should be), the control chart of choice with which to start is the individuals chart; it uses the moving range (MR) between consecutive points to determine the limits. The individuals chart is the “Swiss army knife” of control charts. (In some of the figures in the newsletters mentioned above, it is advertised as an I-MR chart combination. Don’t worry too much about the MR chart for the moment; I will address this in a future column.) It approximates the “correct” chart under most conditions.
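As a minimal sketch of how those limits are computed (using the standard XmR formulas with made-up data; the 2.66 constant is the standard scaling factor that converts the average moving range into three-sigma limits):

```python
def individuals_limits(values):
    """Individuals (XmR) chart: center line at the mean, limits at
    mean +/- 2.66 * average moving range between consecutive points."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# hypothetical measurements, in time order
data = [23, 27, 21, 25, 30, 24, 26, 22, 28, 25]
lcl, center, ucl = individuals_limits(data)
```

Any point outside [lcl, ucl] is a special-cause signal worth investigating before recalculating anything.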

I can hear the chorus, “So, what are the conditions when it isn’t correct?” Remember Dr. Donald M. Berwick’s recommendation, “If you follow only one piece of advice from this lecture when you get home, pick a measurement you care about and begin to plot it regularly over time.” Note that he doesn’t say, “Plot a control chart of that data.”

The bottom line is that before you ask, “Which chart should I use for which situation?” or challenge me with a “what if” doomsday scenario (I’m truly amazed at the creative hypothetical situations thrown at me during my teaching), let me request:
• Could you please show me the data (or describe an actual situation) that are making you ask me this question?
• Please tell me why this situation is important.
• Please show me a run chart of these data plotted over time.
• What ultimate actions would you like to take with these data?


If you have the patience to answer these and follow them through with a data set to an appropriate action, you will have probably answered the question yourself—solving a major problem in the process—and saved yourself a major side trip into the “swamp” of calculation minutiae.

Let’s consider the various charts

X-bar/R chart and X-bar/S chart. Since a lot of readers are in health care, I’ll tell you right now: You virtually never use these. They were designed for manufacturing processes where thousands of parts are made per day, and it’s no big deal to grab, say, four to five parts consecutively produced every hour (which takes seconds). People, even at manufacturing facilities, seem to find these confusing and hard to use because the limits are based on averages. 

In health care clinical applications, you don’t have patients coming through in numbers similar to an assembly line. I literally cannot remember the last time I’ve used these charts in my health care work. They might lend themselves to high-volume administrative processes, but once again, “it depends,” and people still find them confusing.

So I never formally teach them and would do so only if needed in the context of solving an issue.

C-chart (for counts). These are easily approximated by the individuals chart, especially if the average is at least five. Remember, the process’s stability is the key question and determines whether you subsequently use a common- or special-cause strategy.

Small numbers (and rare events) get very tricky and usually require guidance by a statistical expert to get the “right number.” Regardless, the run chart and individuals chart will generally lead you to the right initial action.
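To see why the approximation works, here is a sketch (with hypothetical counts averaging well above five) comparing the textbook c-chart upper limit with the individuals-chart upper limit on the same data:

```python
import math

# hypothetical weekly incident counts (average comfortably above five)
counts = [8, 11, 6, 9, 12, 7, 10, 9]

# c-chart: center at c-bar, upper limit at c-bar + 3*sqrt(c-bar)
c_bar = sum(counts) / len(counts)
c_ucl = c_bar + 3 * math.sqrt(c_bar)

# individuals-chart approximation: mean + 2.66 * average moving range
mrs = [abs(b - a) for a, b in zip(counts, counts[1:])]
i_ucl = c_bar + 2.66 * sum(mrs) / len(mrs)
```

With these numbers the two upper limits land within a few percent of each other, which is typical when the average count is five or more.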

P-chart and u-chart. When p-charts (percentages) and u-charts (rates) are plotted over time, pretty much everyone gets confused by the stair-step limits caused by the varying denominator sizes. This confusion only escalates by well-meaning attempts to explain them. Occasionally, the chart might come in handy for finding an individual outlier.

More important, these charts become especially problematic, particularly p-charts, when data are aggregated monthly, quarterly, or even annually. The resulting large denominators (in the hundreds or thousands) create many artificial out-of-control signals (i.e., points above or below the limits).
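The stair-step effect is easy to see in the arithmetic: with an overall rate p-bar, each point gets its own pair of limits based on its own denominator. A sketch with invented numbers:

```python
import math

def p_chart_limits(numerators, denominators):
    """p-chart with varying denominators: each point i gets limits
    p-bar +/- 3*sqrt(p-bar*(1 - p-bar)/n_i) -- the 'stair-step' effect."""
    p_bar = sum(numerators) / sum(denominators)
    limits = []
    for n in denominators:
        half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - half_width),
                       min(1.0, p_bar + half_width)))
    return p_bar, limits

# hypothetical monthly data: events / opportunities
events = [14, 9, 22, 6]
opportunities = [120, 85, 210, 60]
p_bar, limits = p_chart_limits(events, opportunities)
```

Note how the smallest denominator gets the widest limits; a large aggregated denominator squeezes the limits so tightly that ordinary variation starts falling outside them.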

Donald J. Wheeler, Ph.D., whose books are practical, well written, and full of good examples (you can peruse them at www.spcpress.com), is of the opinion that true independence of occurrence of events is rarely encountered in reality. So he feels it’s correct to use only the individuals chart. My experience has pretty much borne this out. But “it depends,” and things get tricky with small denominators.

Np chart. As far as the np chart goes, it’s a marginal classroom exercise—and that’s about it. I virtually never use it because having equal sample sizes in the denominator is a rare luxury indeed, and the machinations required to create equal denominators, then explain the resulting chart to a puzzled audience, far outweigh any benefits.

In defense of p-charts and u-charts

I have found p-charts and u-charts to be helpful in stratification (a common-cause strategy). One uses them to compare, statistically, individual rates, each of which has been obtained by:
1. Plotting a run chart
2. Following up with a control chart
3. Determining the most recent stable history, and only then,
4. Aggregating the data from No. 3 into summed numerators and denominators for statistical comparison


In these cases, the horizontal axis is not “time.” It could be, for example, individual doctors or locations. Many of you have no doubt encountered these via the fancy euphemism “funnel plots,” where the results are sorted horizontally by increasing denominator size.
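The comparison step can be sketched as follows, with hypothetical per-doctor totals from an already-stable period; sorting by denominator gives the funnel-plot layout:

```python
import math

# hypothetical stable-period totals per doctor: (name, events, cases)
doctors = [("A", 30, 400), ("B", 15, 60), ("C", 55, 900), ("D", 18, 150)]

# aggregate into one system-wide rate...
p_bar = sum(e for _, e, _ in doctors) / sum(n for _, _, n in doctors)

# ...then compare each doctor's rate against limits based on
# his or her own denominator (sorted by n, this is a funnel plot)
flags = {}
for name, e, n in sorted(doctors, key=lambda d: d[2]):
    half = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    flags[name] = not (p_bar - half <= e / n <= p_bar + half)
```

A flagged doctor is a special cause relative to the system, not automatically a “bad” performer; the chart only says the difference is larger than common-cause variation explains.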

Always start with process data gathered over time and plot a run chart

First, find out what you’re “perfectly designed” to get, and second, see whether common- or special-cause strategies are needed to further solve the problem. This gives you a baseline with which to assess the current state as well as your subsequent intervention efforts.

Many projects fail because they lack a baseline. So you see, there’s never an escape from plotting your process data over time. As Berwick says at the end of his quote, “You won’t be sorry.”

Part 2 will look at “the question that drives me nuts” and explode some common myths about control charts.



About The Author


Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.