Douglas C. Fair

Quality Insider

3 “Nevers” of Control Limits, Part 1

Never type in control limits

Published: Monday, December 17, 2007 - 22:00

I don’t believe in ghosts. Yet quality professionals chase them every day. Why? Because erroneous control limits tell them to. Control limits should be statistically based, 100-percent reliable, and reveal a process’s natural variability; only then can they help uncover unnatural events. Yet when I work with companies that use SPC, I continue to encounter control limits that are not statistically based. In case it isn’t obvious, control charts are statistical tools and should therefore be based upon process data and statistical information. Doing so ensures that control limits can be trusted, and that quality professionals aren’t wasting their energy chasing erroneous, statistically insignificant events for which an assignable cause simply doesn’t exist.

My last column recounted a phone call wherein the caller misunderstood the role of control limits and control charts. This column highlights the first of three things one should never do when creating or calculating control limits. The bimonthly callers I discussed last month usually believe that it’s bad when a plot point falls outside the control limits, so they try to manipulate control limits into something they were never designed to be.

Contrary to what some may believe, out-of-control conditions aren’t bad. They are opportunities for process improvement. An out-of-control condition indicates that something in the process has significantly changed, and that is valuable information. An out-of-control chart alerts users that an unusual event has occurred; sometimes these unusual events are positive, confirming an improvement to the process. As such, alarms triggered by control charts should be viewed positively, as learning opportunities and as valuable communication from processes that need to run consistently and effectively.

We must therefore be confident that when a control chart signals a statistical alarm, something genuinely different has occurred. Control limits must be accurate, and they must be representative of the process being controlled.

So here is the first of my 3 “nevers” concerning control limits.

Never allow control limits to be “typed in.”

Control limits should never be manually typed into an SPC system. And I mean never. If your SPC software allows you to type in control limits, beware. Doing so violates all manner of foundational statistical principles. Control limits should represent a process’s natural variability, and that natural variability should be calculated from data gathered directly from the process. Never should they be simply typed in. Control limits should always be calculated based upon:

  1. Mean
  2. Standard deviation
  3. Subgroup size

Typing in control limits ignores all three of these vitally important items. You might argue that one could type in a mean value equal to the engineering nominal, and that the overall process mean should be identical to that nominal. But do you know that it is? Just because you want the overall mean to equal the nominal doesn’t mean that, in reality, it will. Instead, the mean should be calculated from process data.

What about the standard deviation? The width of the control limits is based primarily (don’t forget about subgroup size) upon the value of the standard deviation, which indicates a process’s natural, inherent variability. So if you type in control limits, just how wide should they be? The answer shouldn’t be a guess, a “plausible scenario,” or something one “wishes would happen.” Control limits should simply be plus and minus three standard deviations from the calculated process mean.
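The calculation described above can be sketched in a few lines of Python. Everything here is invented for illustration: the process values are simulated (a mean of 10.2 and sigma of 0.5 are arbitrary), and A3 = 1.427 is the standard Shewhart constant for subgroups of five.

```python
import random
import statistics

# Simulate 25 subgroups of size 5 from a hypothetical process.
# In practice, these values would come directly from the process.
random.seed(42)
A3 = 1.427  # Shewhart constant for an X-bar and S chart, n = 5

subgroups = [[random.gauss(10.2, 0.5) for _ in range(5)] for _ in range(25)]

# Grand mean (X-double-bar): the average of the subgroup means
grand_mean = statistics.mean(statistics.mean(s) for s in subgroups)

# s-bar: the average of the subgroup sample standard deviations
s_bar = statistics.mean(statistics.stdev(s) for s in subgroups)

ucl = grand_mean + A3 * s_bar
lcl = grand_mean - A3 * s_bar
print(f"grand mean = {grand_mean:.3f}, s-bar = {s_bar:.3f}")
print(f"LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```

Note that the calculated grand mean will rarely equal whatever nominal value someone might have typed in, which is exactly the point: the data, not our wishes, determine the limits.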

What about subgroup size? Well, the larger the subgroup size, the closer together an X-bar chart’s control limits will be. The smaller the subgroup size, the wider the control limits will be. It’s just a function of the mathematics.

Assume that one is using an X-bar and S chart to help control a process. Take a look at the X-bar control limit formulas below:
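For reference, a standard statement of the X-bar chart limits when paired with an S chart (the usual Shewhart formulas) is:

```latex
UCL_{\bar{X}} = \bar{\bar{X}} + A_3\,\bar{s}
\qquad
LCL_{\bar{X}} = \bar{\bar{X}} - A_3\,\bar{s}
```

where \(\bar{\bar{X}}\) is the grand mean and \(\bar{s}\) is the average subgroup standard deviation.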

Notice the A3 factor in the formula. A3 is based upon subgroup size. Correct A3 values are found using a statistical constant table where the A3 value is based upon—you guessed it—subgroup size.

Say an X-bar and S chart is in use with a subgroup size of 5. The A3 value used in calculating control limits for the X-bar chart is 1.427. Control limits are different if the X-bar chart uses a subgroup size of 15, since the A3 factor for n = 15 is 0.789. So, the bottom line is that for an unchanging mean and standard deviation, control limits will change with the change in n. That’s right: for the same data, same process, control limits are completely different based upon subgroup size. See the screen shot below for an example of a process with a mean of 8 and a standard deviation of 1. You will find that the control limits are quite different based upon whether the subgroup size is 5, 10, 15, or 20.
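A short sketch along the lines of that example, using a mean of 8, an average standard deviation taken as 1 for simplicity, and the standard Shewhart A3 constants, shows the limits tightening as subgroup size grows:

```python
# Standard A3 constants for an X-bar and S chart, keyed by subgroup size n
A3 = {5: 1.427, 10: 0.975, 15: 0.789, 20: 0.680}
grand_mean, s_bar = 8.0, 1.0  # illustrative values from the example above

for n, a3 in A3.items():
    ucl = grand_mean + a3 * s_bar
    lcl = grand_mean - a3 * s_bar
    print(f"n = {n:2d}: LCL = {lcl:.3f}, UCL = {ucl:.3f}")
# n =  5: LCL = 6.573, UCL = 9.427
# n = 20: LCL = 7.320, UCL = 8.680
```

Same process, same mean, same standard deviation, yet the limits for n = 20 are roughly half as wide as those for n = 5.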

When would it be necessary for a subgroup size to vary? I have seen many situations in which an operator is simply unable to enter the expected subgroup size. For example, in an injection-molding situation (e.g., where a single mold is used to manufacture 10 items), it’s possible that some of the cavities will become “plugged.” This renders the cavity inoperable and therefore prevents an operator from entering data for the cavity that is no longer in use. Another situation concerns a quality professional who decides to change the sampling scheme to require a subgroup size of 10 instead of the previous subgroup size of 5. Again, if the subgroup size changes, control limits must also change.

Lastly, typing in control limits ignores the fundamental rule that X-bar control limits should be linked to a corresponding range or standard-deviation chart. This linkage exists because the range chart’s overall average (R-bar), or the S chart’s average standard deviation (s-bar), is used in calculating the X-bar chart’s control limits. Take a close look at the X-bar control limit formulas below. You’ll find that the average range (R-bar) is found in each.
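For the X-bar and R chart case, the standard Shewhart formulas make this linkage explicit; R-bar appears in both limits:

```latex
UCL_{\bar{X}} = \bar{\bar{X}} + A_2\,\bar{R}
\qquad
LCL_{\bar{X}} = \bar{\bar{X}} - A_2\,\bar{R}
```

where \(A_2\) is a tabled constant that, like \(A_3\), depends on subgroup size.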

This is how the central tendency chart (IX, X-bar, median, etc.) is linked with its corresponding variability chart (such as a range or standard deviation chart). When control limits are typed in, clearly there’s no consideration for basing those control limits on the value of the average range or standard deviation.

In summary, what happens to control limits that are typed in? They stay the same. They don’t change, because they can’t. They remain static and wholly incorrect. Typed-in control limits don’t represent natural process variability. When carefully considered, typed-in control limits are simply another form of specification limits: they identify what someone wants from a process, not what is natural or expected.

I have seen the “typing in” of control limits touted as a feature in many statistical software packages. As a statistician, I’m thunderstruck by the concept of marketing a feature that is statistically unsound. Control limits should always be based upon process data. They should never be typed in, and they should never be based upon how someone thinks the process shoulda/mighta/oughta/coulda performed. Walter Shewhart and W. Edwards Deming would turn over in their graves if they knew about that “feature.”

So if your software allows you to type in control limits, run away and don’t look back. If you do choose to type in your own control limits, don’t be surprised if you spend precious time and resources chasing ghosts that weren’t there to begin with.

Next month I will cover the second of the “3 Nevers of Control Limits.” See you then.


About The Author

Douglas C. Fair

A quality professional with 30 years’ experience in manufacturing, analytics, and statistical applications, Douglas C. Fair serves as chief operating officer for InfinityQS. Fair’s career began at Boeing Aerospace, and he worked as a quality systems consultant before joining InfinityQS in 1997. Fair earned a bachelor’s degree in industrial statistics from the University of Tennessee, and a Six Sigma Black Belt from the University of Wisconsin. He’s a regular contributor to various quality magazines and has co-authored two books on industrial statistics: Innovative Control Charting (ASQ Quality Press, 1998), and Quality Management in Health Care (Jones and Bartlett Publishing, 2004).