Donald J. Wheeler

Six Sigma

Can We Adjust Our Way to Quality?

Part 1: Process-hyphen-control illustrated

Published: Monday, October 2, 2023 - 11:03


Many articles and some textbooks describe process behavior charts as a manual technique for keeping a process on target. For example, in Norway the words used for SPC (statistical process control) translate as “statistical process steering.” Here, we’ll look at using a process behavior chart to steer a process and compare this use of the charts with other process adjustment techniques.

Process behavior charts allow us to detect process upsets. Clearly, when we have an upset it’s important to get things operating normally again, so it’s natural to think of using process behavior charts to make adjustments to keep the process at a desirable level. When thinking in this manner, it’s easy to unconsciously insert a hyphen between the last two words to obtain “statistical process-control.” And the hyphen changes the meaning. Instead of a noun phrase referring to a holistic approach to analyzing observational data, the hyphen changes SPC into a process-control algorithm that uses statistics.

Thus, by virtue of background, training, and nomenclature, many people have come to think of a “process control” chart as simply a manual technique for maintaining the status quo. While it’s much more than this, we’ll examine how a process behavior chart functions as a process-hyphen-controller and compare it with other process adjustment techniques.

To obtain an example data set containing known upsets, we’ll begin with data from a predictable process and add some simple step changes. By using data with known upsets, we can see how each technique deals with these upsets. This allows us to make direct and equitable comparisons between the techniques.
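For readers who want to experiment, here is a minimal Python sketch of this construction. It is a stand-in, not the published data: the actual Line 3 values are tabulated in figure 12, and the actual shift pattern is fixed by figure 2, so the random baseline and shift levels below merely mimic the structure and summary statistics described in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the predictable Line 3 process: 200 values with the
# article's average of 10.11 and a sigma consistent with an average
# moving range of 1.98 (sigma ~ mR-bar / 1.128 for an XmR chart).
baseline = rng.normal(loc=10.11, scale=1.98 / 1.128, size=200)

# Hypothetical step changes: one level per segment of 10 points, drawn
# from 0, +/-2, ..., +/-10 units. Figure 2 fixes the actual sequence;
# this draw only reproduces its structure.
levels = rng.choice([-10, -8, -6, -4, -2, 0, 2, 4, 6, 8, 10], size=20)
example = baseline + np.repeat(levels, 10)
```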


Figure 1: X chart for the 200 values of Line 3

The 200 values from Line 3 show a process that’s being operated predictably. These data were given in my December 2015 Quality Digest column “The Secret Foundation of Statistical Inference.” The X chart for these data is shown in figure 1. The average is 10.11 units, and the average moving range is 1.98 units. Let’s assume that the target value for this process is 10.0 units, and that the specifications are 6.5 units to 13.5 units.
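The arithmetic behind these limits is the usual XmR computation: the average plus or minus 2.66 times the average moving range. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def x_chart_limits(x):
    """Natural process limits for an X chart: the average of the data
    plus or minus 2.66 times the average two-point moving range."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.abs(np.diff(x)).mean()   # average moving range
    center = x.mean()
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar
```

For the Line 3 data this gives roughly 10.11 ± 2.66 × 1.98, or limits of about 4.8 and 15.4.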

Next, we modify the data of figure 1 by adding the step changes of figure 2. These step changes take values that are 0, 2, 4, 6, 8, or 10 units above or below zero. The labels in figure 2 show the net effect of these shifts in sigma units.


Figure 2: Shifts used to create signals within the data set

When we add the shifts of figure 2 to the values of figure 1, we have 20 process upsets. A change occurs on every tenth point, starting with point number 1 and continuing on to point number 191. Six of these shifts result in a net change of zero, and four shifts result in a net change of ±1.1 sigma, so that only 10 groups were actually shifted from their original values by ±2.2 sigma or more.



Figure 3: X chart for the example data

Our resulting example data set is shown on an X chart in figure 3 and is tabulated in figure 12. The average value remains 10.11, but the average moving range is now 2.51. This results in limits that are 26% wider than those in figure 1. Despite these wider limits, nine of the 10 large shifts are detected by points outside the limits. Three more shifts are detected by runs beyond two sigma. So the X chart in figure 3 correctly identifies 12 of the 14 periods when the example data set was off target. This ability to get useful limits from bad data is what makes the process behavior chart such a robust technique.
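The two detection rules used here can be sketched as follows, with the center taken as the average and sigma estimated as the average moving range divided by 1.128. The article does not spell out the run test, so the sketch assumes the common two-out-of-three version of “runs beyond two sigma”:

```python
import numpy as np

def detection_signals(x, center, sigma):
    """Flag rule-one signals (a point beyond three sigma) and an
    assumed rule-two test (two of three successive values more than
    two sigma from the center, on the same side)."""
    z = (np.asarray(x, dtype=float) - center) / sigma
    rule_one = np.abs(z) > 3.0
    rule_two = np.zeros(len(z), dtype=bool)
    for i in range(2, len(z)):
        w = z[i - 2 : i + 1]                 # three successive values
        if (w > 2.0).sum() >= 2 or (w < -2.0).sum() >= 2:
            rule_two[i] = True
    return rule_one, rule_two
```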



Figure 4: Histogram for the example data of figure 3

With specifications of 6.5 to 13.5, our example data set has 48% nonconforming. Since the data of figures 3 and 4 will be used in what follows, this fraction nonconforming will be our baseline for comparisons as we consider different approaches to process adjustment. Specifically, we’re interested in how much we can reduce the fraction nonconforming while keeping the average close to the target value of 10.
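The baseline figure is simply the share of values outside the specifications; a one-function sketch:

```python
import numpy as np

def fraction_nonconforming(x, lsl=6.5, usl=13.5):
    """Fraction of values outside the specification limits."""
    x = np.asarray(x, dtype=float)
    return float(np.mean((x < lsl) | (x > usl)))
```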

The first technique we’ll consider will be using a process behavior chart as an adjustment technique.

Statistical process steering

Here we make an adjustment to the process aim every time a point goes outside the limits. The cumulative nature of these adjustments means that we’ll have to process the values in our example data set in sequence. Using the limits of figure 3, we will interpret values of 17 or greater and values of 3 or smaller as signals of an upset. Whenever an upset is detected, an adjustment equal to the difference between the target value and the most recent value is made. In this way, we end up making 18 adjustments as shown in figure 5.
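A sketch of this steering rule applied in sequence (the function name is illustrative; the signal levels of 17 and 3 are those given above). Note that each adjustment takes effect on the next observation, which is why the values must be processed in order:

```python
import numpy as np

def steer_with_x_chart(x, target=10.0, lower=3.0, upper=17.0):
    """Adjust the process aim whenever an (adjusted) value reaches the
    signal levels; the adjustment is the difference between the target
    and the most recent value."""
    offset = 0.0                      # cumulative adjustment to the aim
    adjusted, n_adjustments = [], 0
    for value in np.asarray(x, dtype=float):
        v = value + offset
        adjusted.append(v)
        if v <= lower or v >= upper:  # signal of an upset
            offset += target - v      # re-aim at the target
            n_adjustments += 1
    return np.array(adjusted), n_adjustments
```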


Figure 5: X chart used as a process-hyphen-controller (18 adjustments)



Figure 6: Histogram for result of using a process behavior chart to adjust process

The histogram in figure 6 shows that by using the process behavior chart to adjust the process aim, we cut the fraction nonconforming down from 48% to 26%. By correctly identifying and adjusting for the changes in level, the process behavior chart improved the process outcomes while keeping the process centered near the target value. With a total of 18 adjustments, we have cut the fraction nonconforming in half.

PID controllers

Once we view the process behavior chart as a process steering wheel, we open the door to consider other process-hyphen-control algorithms. These algorithms are intended to keep the process on target by continually adjusting the process aim. There are many types of these devices, and they greatly facilitate all kinds of operations today. The type of process-control algorithm that will be considered here is a simple proportional-integral-differential (PID) controller of the type that has been around for more than 100 years.

As the name suggests, a PID controller makes adjustments based on the size of three elements. The first element (proportional) will depend upon the difference between the current value and the target value. This difference is known as the error for the current period. Letting the subscript t denote the current time period:

Error for time period t = E_t = [X_t – Target]

The second element (integral) will depend upon the sum of all the prior error terms, which I’ll denote here as Sum(E), where:

Sum of Error Terms up to time t = Sum(E_t) = E_1 + E_2 + E_3 + … + E_(t-1) + E_t

And the third element (differential) will depend upon the difference between the last two error terms, which I’ll denote here as Delta(E), where:

Difference of Error Terms at time t = Delta(E_t) = [E_t – E_(t-1)]

Given these three elements and following the observation at time period t, we can compute the adjustment to make before time period t+1 as:

ADJUSTMENT_(t+1) = – a E_t – b Sum(E_t) – c Delta(E_t)

where a, b, and c are the proportional, integral, and differential weights. The reasoning and constraints behind selecting these weights are beyond the scope of this article. Instead, we’ll look at how various PID controllers work with the example data set. We’ll begin with a simple proportional controller known as Gibbs’ rule.
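Before specializing to particular weights, here is a minimal sequential implementation of this controller; the function name and structure are illustrative:

```python
import numpy as np

def run_pid(x, a, b, c, target=10.0):
    """Apply ADJUSTMENT_(t+1) = -a*E_t - b*Sum(E_t) - c*Delta(E_t)
    sequentially, where E_t is the adjusted value minus the target.
    Delta(E_1) is taken as E_1 since there is no prior error term."""
    offset = 0.0                 # cumulative adjustment to the aim
    e_sum, e_prev = 0.0, 0.0
    adjusted = []
    for value in np.asarray(x, dtype=float):
        v = value + offset
        adjusted.append(v)
        e = v - target           # proportional element
        e_sum += e               # integral element
        delta = e - e_prev       # differential element
        offset += -a * e - b * e_sum - c * delta
        e_prev = e
    return np.array(adjusted)
```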

Gibbs’ rule

Using weights of a = 0.5, b = 0.0, and c = 0.0, the general PID algorithm becomes the simple P-controller known as Gibbs’ rule. This process-hyphen-control algorithm adjusts the process following each value by an amount equal to half the difference between the target and the most recent value. When this rule is sequentially applied to the example data set, we get the values shown in figures 7 and 8.
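In terms of the PID sketch above, Gibbs’ rule is simply the special case:

```python
# P-only controller: half of each error is removed before the next value
gibbs_adjusted = run_pid(example, a=0.5, b=0.0, c=0.0, target=10.0)
```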


Figure 7: Gibbs’ rule applied to the example data set (153 adjustments)

Gibbs’ rule makes 153 adjustments to the process aim. Because there were only 20 process changes present in these data, many of the adjustments made by Gibbs’ rule must have been either needless adjustments or corrections for those needless adjustments. Nevertheless, by constantly adjusting the process aim, Gibbs’ rule managed to reduce the fraction nonconforming in this case from 48% to 20.5%.



Figure 8: Histogram for the outcome of using Gibbs’ rule

Instead of merely reacting to those shifts that are large enough to be of economic consequence, Gibbs’ rule reacted to almost everything, only failing to make an adjustment when the adjusted values happened to be equal to the target value (46 times out of 199 opportunities).

Other PID controllers

In looking for a PID controller that might do better than Gibbs’ rule, I considered a total of 176 additional sets of PID weights, giving 177 controllers in all. Of these 177, 61 actually increased the fraction nonconforming! The remaining 116 controllers reduced the fraction nonconforming by various amounts, as shown in figure 9.
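A survey like this can be reproduced with a simple grid search over the weights, scoring each controller by the fraction nonconforming it leaves behind. The grid below is hypothetical (the article does not list the weight sets considered), and it reuses run_pid, fraction_nonconforming, and the example data from the earlier sketches:

```python
import itertools

# Hypothetical weight grid; run_pid, fraction_nonconforming, and
# example come from the earlier sketches.
grid = [0.0, 0.1, 0.25, 0.5, 0.75, 1.0]
scores = {
    (a, b, c): fraction_nonconforming(run_pid(example, a, b, c))
    for a, b, c in itertools.product(grid, repeat=3)
}
best_weights = min(scores, key=scores.get)
print(best_weights, scores[best_weights])
```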



Figure 9: The fraction nonconforming produced by various PID controllers

Only 14 of the 177 PID controllers (8%) did better than manually adjusting with a process behavior chart. One reason for this is that many sets of PID weights will result in an oscillating set of adjustments. With a process like the example data set that is subject to occasional upsets, these oscillations may never die out. This is why about one-third of the PID controllers increased the fraction nonconforming. It turns out that the art of tuning a PID controller is more complex than theory predicts simply because most processes are subject to unpredictable upsets. When your process is going on walkabout, it’s hard for your controller to fully stabilize.

Repeatedly, when my clients put the data from their automatic process-controllers on a simple XmR chart, they are surprised by two things. First, they are surprised to discover how their controller oscillates; and, second, they are surprised by the substantial increase in the process variation that results from those oscillations. In most cases their reaction has been to turn off the automatic process controller and return to manual control.

So, can we adjust our way to quality? Clearly, many different process-hyphen-controllers were found that reduced the fraction nonconforming for our example data set. But is this all there is?

Process adjustment techniques are always reactive. They simply respond to what has already happened. They can do no other. At their best they can help to maintain the status quo. Yet sometimes they actually make things worse.

But there is one more dot in figure 9 that has not yet been explained.

The sources of variation

Adjustments are necessary because of variation. And the variation in your process outcomes doesn’t come from your controlled process inputs. Rather, it comes from those cause-and-effect relationships that you don’t control. This is why it’s a low-payback strategy to seek to reduce variation by experimenting with the controlled process inputs. To reduce process variation, and thereby reduce the need for adjustments, we must understand which of the uncontrolled causes have dominant effects upon our process. And this is exactly what the process behavior chart allows us to do.

Of all the data analysis techniques we have invented, only the process behavior chart listens to the voice of the process and identifies exceptional variation within the data. This helps us to know when to look for the assignable causes of the exceptional variation. As we identify these uncontrolled assignable causes and make them part of the set of controlled factors, we’ll effectively reduce the process variation while gaining additional variables to use in operating on-target.

Time after time, my clients have told me that as they learn how to operate their processes predictably, they find that the process variation has been reduced to one-half, one-third, one-fourth, or one-fifth of what it was before. With these reductions in the process variation, capability soars, productivity increases, and product quality improves.

With the example data set, predictable operation would be equivalent to removing the step functions of figure 2. This would leave the running record of figure 1 and the histogram of figure 10.



Figure 10: What predictable operation can achieve for this process

Operating this process predictably, rather than simply adjusting it after each upset, reduces the fraction nonconforming by an order of magnitude from 48% to 4.5%.



Figure 11: What predictable operation can achieve

Figure 11 compares what can be achieved by reactive process adjustments and what can be achieved by proactive process improvement. A process is operated up to its full potential only when it’s operated predictably and on-target. Anything less results in excessive variation, which creates excess costs. This process is capable of operating with about 5% nonconforming. While process adjustment techniques may cut the 48% nonconforming in half, they’re not capable of operating this process up to its full potential.

To achieve full potential, we must not only adjust for process upsets but also find and control the causes of the upsets. And that’s why we’ll never be able to adjust our way to quality.

Appendix

The data for our example data set shown in figure 3 are listed in figure 12 in time order by columns.


Author’s note: This article is a corrected version of a column that appeared in Quality Digest on July 11, 2016. The earlier version suffered from a programming error that affected all of the PID results.


About The Author


Donald J. Wheeler

Dr. Wheeler is a fellow of both the American Statistical Association and the American Society for Quality who has taught more than 1,000 seminars in 17 countries on six continents. He welcomes your questions; you can contact him at djwheeler@spcpress.com.


Comments

Adjust but not so much?

I am curious how figure 5 would have looked had the adjustment been something like 50% of the error. When my car's tires start making noise on a lane marker, I gently adjust the steering; I don't yank the wheel. Any idea how the same basic rule, but with a lesser adjustment, would have played out? Thanks!

A bit late but I think that is essentially what figure 7 is.