
Donald J. Wheeler

Six Sigma

Can We Adjust Our Way to Quality?

Part 2: By trying to do better, we can make things worse

Published: Monday, October 30, 2023 - 11:03


In last month’s column, we looked at how process-control algorithms work with a process that is subject to occasional upsets. This column will consider how they work with a well-behaved process.

Last month we saw that process adjustments can reduce variation when they are reacting to real changes in the process. To be clear, I know that process control technology is everywhere, and it allows us to do things we couldn’t otherwise do. To borrow from Page 326 of my Advanced Topics in SPC (SPC Press, 2004), among other things, process-control technology allows us to:
1. Operate processes safely
2. Maintain process characteristics near a set point
3. Improve process dynamics
4. Reduce the effect of disturbances that can’t be economically removed
5. Accomplish complex control strategies

As an example of the last item, when I was working with IBM at NASA, I was told that the space shuttle couldn’t be manually returned from orbit. The sequence of actions needed for the de-orbit burn was too complex for manual control and had to be automated. So, from the thermostat on the wall to outer space, process controllers are everywhere around us. They’ve been thoroughly proven to work.

The purpose of these columns isn’t to belittle process controllers but to bring to light some aspects and consequences of process adjustment approaches that haven’t been well understood. Since the principles behind the examples given here are mathematical imperatives, an understanding of the limitations of process adjustment can make your efforts more successful.

We’ll start with the data shown in figure 1 and tabled in figure 8. The X-chart shows no evidence of unpredictable operation for this process. The average is 10.11, and the standard deviation statistic is 1.79. The specs are 6.5 to 13.5, and 4.5% of these 200 values fall outside these specs.


Figure 1: Line 3 data

The question here is whether we can improve on this fraction nonconforming by using some adjustment scheme. To answer this question, I used four P-controllers with dead bands of different widths. (For a review of the formulas for PID controllers, see Part 1. All four P-controllers will use PID weights of a = 1.0, b = 0, and c = 0.)
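To make the adjustment rule concrete, here is a minimal Python sketch of such a P-controller (a hypothetical helper, not code from this column; it assumes the raw values are what the process would have produced had its aim never been touched, and that each adjustment shifts the aim by the full difference between the target and the observed value, i.e., a = 1.0):

```python
import numpy as np

def p_controller(raw_values, target, band_low, band_high):
    """Simulate a P-controller (PID weights a=1.0, b=0, c=0) with a dead band.

    raw_values: what the process would produce if its aim were never adjusted.
    A produced value outside [band_low, band_high] triggers a full
    compensating adjustment of the process aim; values inside the
    dead band trigger no adjustment.
    """
    aim_shift = 0.0            # cumulative adjustment to the process aim
    produced = []              # values actually sent down the product stream
    n_adjust = 0
    for x in raw_values:
        y = x + aim_shift      # value produced under the current aim
        produced.append(y)
        if y < band_low or y > band_high:
            aim_shift += target - y   # a = 1.0: compensate the full deviation
            n_adjust += 1
    return np.array(produced), n_adjust
```

The four controllers below differ only in the dead band passed to this function.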

Natural process limit dead band

The natural process limits in figure 1 are 4.8 to 15.4. Since the values are recorded to the nearest whole number, a dead band of

10 ± 5.5 = 4.5 to 15.5

will be equivalent to the natural process limits. Here, our P-controller will make an adjustment only when a point goes outside the dead band. Otherwise it makes no adjustments to the process aim.


Figure 2: P-controller with natural process limit dead band

Because this process was already operating on-target with minimum variance, no adjustments were needed, and none were made by this controller. So the fraction nonconforming remained unchanged at 4.5%. Had an upset occurred, the controller would have made an adjustment.
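Since the figure 8 values aren’t reproduced in this text, a synthetic stand-in series with the same average and standard deviation can illustrate the point (hypothetical data, so exact counts may differ by an adjustment or two):

```python
rng = np.random.default_rng(1)
line3_like = rng.normal(10.11, 1.79, 200)   # stand-in for the figure 8 data

produced, n_adjust = p_controller(line3_like, target=10,
                                  band_low=4.5, band_high=15.5)
print(n_adjust)   # usually 0 for a well-behaved series; occasionally 1
```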

(It’s important to note that natural process limits must be based on a within-subgroup measure of variation rather than a global standard deviation statistic. For individual values, this requires the use of successive differences, known as moving ranges, rather than the common standard deviation statistic. While there’s little difference when a process is operated predictably, there’s a large difference between these two measures of dispersion when a process is operated unpredictably.)
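For individual values, that within-subgroup estimate is the average moving range divided by the bias-correction factor d2 = 1.128 for subgroups of size two. A minimal sketch of the computation:

```python
def sigma_from_moving_ranges(values):
    """Estimate dispersion from successive differences (moving ranges)."""
    moving_ranges = np.abs(np.diff(values))
    return moving_ranges.mean() / 1.128   # d2 bias-correction factor for n = 2
```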

Specification limit dead band

Would a “zero-defects” P-controller do any better than the P-controller above? Here, our P-controller will make an adjustment equal to the difference between the target and the observed value every time the observed value is outside the specifications (i.e., either smaller than 6.5 or greater than 13.5). The result is shown in figure 3.


Figure 3: P-controller with specification limit dead band

By trying to do better, we’ve increased the fraction nonconforming from 4.5% to 21%. What will your boss say when you tell him your “zero-defect” process-controller has achieved a fourfold increase in the fraction nonconforming?

The 42 adjustments here were all unnecessary, and every unnecessary adjustment will inevitably increase the variation. Probability theory tells us that making unnecessary adjustments will roughly double the process variance. And when we double the process variance, we increase the standard deviation by 41%. Here, the standard deviation statistic increased by 59%.
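The doubling is easiest to see in the limiting case where every value triggers a full adjustment (the straight P-controller discussed below). A sketch of the standard argument, assuming independent raw values X_t = τ + ε_t with variance σ² and cumulative aim shift A_t:

```latex
Y_t = \tau + \varepsilon_t + A_t , \qquad
A_t = A_{t-1} + (\tau - Y_{t-1}) = -\varepsilon_{t-1}
\;\Longrightarrow\;
Y_t = \tau + \varepsilon_t - \varepsilon_{t-1}

\operatorname{Var}(Y_t) = 2\sigma^2 , \qquad
\operatorname{SD}(Y_t) = \sqrt{2}\,\sigma \approx 1.41\,\sigma
```

With a dead band the arithmetic is messier, since one unnecessary adjustment can push later values outside the band and provoke further adjustments; that compounding is why the increase observed here (59%) can exceed the simple 41% figure.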

Control theory often assumes that it costs nothing to make an adjustment. But here we see that there’s no such thing as a zero-cost adjustment. Every adjustment adds variation to the product stream, and increased variation always increases costs. The trick is to make adjustments that remove more variation than they create. Here there was no need for any adjustment. The process was already operating on-target with minimum variance, and the mistakes started with the first adjustment.

In this example, the process standard deviation statistic in figure 1 is 1.79. So, the ±3.5 dead band of this controller is essentially a two-sigma dead band. The next controller will use a tighter dead band.

A one-sigma dead band

What will happen if we are more aggressive about keeping the process on-target and tighten up the dead band? To find out, we use a dead band of

10 ± 1.5 = 8.5 to 11.5

Here, our P-controller will make an adjustment equal to the difference between the observed value and the target value every time the observed result is smaller than 8.5 or greater than 11.5. The result is shown in figure 4.


Figure 4: P-controller with one-sigma dead band

Here our P-controller with a one-sigma dead band makes 108 adjustments and turns a process with 4.5% nonconforming into one with 14% nonconforming. This controller did slightly better than the zero-defect controller above because some of these 108 adjustments were actually needed to compensate for erroneous adjustments made earlier. In fact, we can see this controller oscillating in figure 4 as it often overcompensates following large or small values.
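That overcompensation shows up numerically as negative correlation between successive values of the adjusted stream. An illustrative check, using the sketch and synthetic series from above:

```python
produced, _ = p_controller(line3_like, target=10, band_low=8.5, band_high=11.5)
r1 = np.corrcoef(produced[:-1], produced[1:])[0, 1]
print(r1)   # typically negative: large values tend to be followed by small ones
```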

A straight P-controller

If we drop the dead band and pretend that all the values should be found at the target, then we’ll treat every deviation from the target as a signal that the process is off-target. Here, our P-controller will adjust whenever the observed value isn’t equal to 10. The results of using this straight P-controller with our data are shown in figure 5.


Figure 5: A straight P-controller with no dead band

This P-controller oscillates even more than the others. It adjusted the process 174 times and created 13% nonconforming product—three times as much nonconforming product as found in figure 1. The standard deviation statistic in figure 5 is 40% larger than that of figure 1, just as probability theory predicts. Any way you look at it, needless adjustments always increase variation and reduce quality.
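The whole progression can be reproduced qualitatively with the sketch and synthetic series from above (exact counts and percentages will differ from the figures in this column, but the direction of the effect will not):

```python
bands = {
    "natural process limits": (4.5, 15.5),
    "specification limits":   (6.5, 13.5),
    "one-sigma":              (8.5, 11.5),
    "no dead band":           (10.0, 10.0),   # adjust on any deviation from 10
}
for name, (lo, hi) in bands.items():
    produced, n = p_controller(line3_like, target=10, band_low=lo, band_high=hi)
    pct_nc = 100 * np.mean((produced < 6.5) | (produced > 13.5))
    print(f"{name:24s} adjustments={n:3d}  "
          f"nonconforming={pct_nc:4.1f}%  sd={produced.std(ddof=1):.2f}")
```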

Two types of action

Specifications are the voice of the customer. They are intended for taking action on the product—to sort the conforming product from the nonconforming product at the end of the line.

Natural process limits are the voice of the process. They were created for taking action on the process. Whether we’re seeking to keep the process on-target or are looking for assignable causes of exceptional variation in order to improve the process, our actions on the process will need to be based on the voice of the process.

While we want the voice of the process to line up with the voice of the customer, the first step in doing this is to understand the difference. When we confuse action on the product with action on the process, when we fail to distinguish the voice of the customer from the voice of the process, we can make things worse. And one way we confuse these two different actions is by using specification limits to take action on the process.

Summary

If you’re simply concerned with keeping a process on-target, you need to use a controller with a dead band that’s at least as wide as the natural process limits. While a process behavior chart is the simplest such device, process behavior charts were created to do so much more than maintain the status quo. They offer the quickest way to get any process to operate up to its full potential.

But for those interested in process control, it’s important to understand that there’s no advantage to using dead bands tighter than the natural process limits. They will not “squeeze the process.”


With a well-behaved process, dead bands tighter than the natural process limits will always increase the variation in the product stream. This increased variation will increase costs and reduce quality. Here they resulted in three to four times more nonconforming product:


Figure 6: Effect of P-controllers on well-behaved process

With a poorly behaved process, such as the example data set used last month, a process-controller may reduce the variation in the product stream. The four P-controllers defined above, when used with the example data set from last month’s column, cut the original 48% nonconforming roughly in half. The four P-controllers reduced the standard deviation of the product stream from 5.3 to about 3.5, but in spite of making between 22 and 177 adjustments, none of these controllers managed to reduce the standard deviation down to the 1.79 of figure 1.


Figure 7: Effect of P-controllers on unpredictable process

So, regardless of how well-behaved or poorly behaved your process may be, there’s no advantage whatsoever to using a process-controller with a dead band that’s narrower than the width of the natural process limits.

World-class quality is defined as operating on-target with minimum variance. And the only way to operate a process up to its full potential is to use a process behavior chart to identify the assignable causes of exceptional variation and then to take action to remove their effects from the process. With an unpredictable process, we might be able to reduce the variation by using an adjustment scheme. But adjustments can’t take you all the way to minimum variance.

Appendix

The Line 3 data shown in figure 1 are listed in figure 8 in time order by columns.


Figure 8: Line 3 data


About The Author


Donald J. Wheeler

Dr. Wheeler is a fellow of both the American Statistical Association and the American Society for Quality who has taught more than 1,000 seminars in 17 countries on six continents. He welcomes your questions; you can contact him at djwheeler@spcpress.com.


Comments

Excellent article

I shared this on LinkedIn. It underscores the fact that we cannot adjust random variation out of a process and, if we try, we will make it worse.

Box and Luceño wrote a book that combines process-controller principles with SPC, and they showed that you can in fact adjust "variation" out of a process, but only if it's a time series in which the long-term variation exceeds the short-term variation (the portion measured by moving ranges on a chart for individuals). If you know what the time series looks like, you can apply process-control algorithms to suppress the long-term portion of the variation, but nothing can be done about the short-term variation.