
Donald J. Wheeler


The ‘Think System’ for Improvement

First you guess what the problem might be…

Published: Monday, May 7, 2018 - 11:03

In The Music Man, the con man Prof. Harold Hill sells band instruments and uniforms and then tells the kids that they can play music if they will “just think about the notes and then play them.” In many ways this “think system” is similar to what you are asked to do with the define, measure, analyze, improve, control (DMAIC) approach to quality improvement.

During the define phase of DMAIC, you and your fellow band members are told to think about your process inputs and identify those inputs that have the greatest impact on the process outcomes. (As if R&D has not already done this.) To this end the tools offered in this “think system” are cause-and-effect diagrams, failure mode and effects analysis, and designed experiments. To illustrate how this approach works, we will look at some of my clients’ experiences.

One client had a problem with the conformity of a high-volume product. Historically, 50 percent of their production had failed to meet the blueprint specification of less than 8 mils. Since 50-percent conforming would not allow them to ship enough product, the customer had agreed to a “deviated specification” of less than 12 mils. This allowed about 90 to 95 percent of the product to be shipped. The manufacturer and customer had operated with this deviated specification for several years, and they had found that the product worked satisfactorily in the field.
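The arithmetic behind these conformance figures is straightforward. A minimal sketch in Python, using hypothetical thickness data (the article does not give the actual measurements):

```python
# Hypothetical thickness measurements in mils, for illustration only;
# the article does not report the actual data
measurements = [6.5, 9.1, 7.8, 11.5, 7.2, 10.4, 8.9, 6.8, 12.6, 7.9]

blueprint_spec = 8.0   # blueprint specification: less than 8 mils
deviated_spec = 12.0   # customer-agreed deviation: less than 12 mils

# Percent conforming = share of measurements below the specification
pct_blueprint = 100 * sum(x < blueprint_spec for x in measurements) / len(measurements)
pct_deviated = 100 * sum(x < deviated_spec for x in measurements) / len(measurements)

print(f"Conforming to blueprint spec: {pct_blueprint:.0f}%")  # 50% for this data
print(f"Conforming to deviated spec:  {pct_deviated:.0f}%")   # 90% for this data
```

With this made-up data the two figures come out to 50 percent and 90 percent, mirroring the proportions described in the text.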

Figure 1: Blueprint and deviated specifications

But as a new model year came along, the automaker decided that it was time to build a quality car. (…!) To this end they sent out banners for their suppliers to hang in their plants. (Oh, if it were only this easy.) The slogan plastered on these banners was “Whatever it takes.” In this environment of quality by exhortation, the customer decided that 50-percent conformance to the blueprint specification was not good enough. So the manufacturer was told that it was going to have to improve on this product.

In response, the manufacturer organized a project team to work on this product. Over the course of the next year, this team spent more than $1 million as they conducted 30 experiments on the process. At the end of the year, they had a cause-and-effect diagram that listed more than 600 causes for the problem, but only 55 percent of their production fell below the blueprint specification.

Figure 2: One year and 30 experiments later

During this yearlong effort, the team had identified various potential causes and had run experiments to quantify the effects of these causes. But experimental knowledge is always limited and is hard to integrate into the process as a whole. Why is experimental knowledge limited? Whenever we perform an experiment, there are three things we can do with a process input: We can study it; we can hold it constant; or we can ignore it. And the variables we hold constant or ignore will always outnumber those we study. (Remember, they had more than 600 causes on their cause-and-effect diagram.) Thus, although they had spent a year and a million dollars conducting 30 experiments, they were still not sure about how to reduce the variation in the product. And the merest glance at figure 1 or 2 will show that they were clearly going to have to reduce variation to meet the blueprint specification.

So where does process variation originate? It comes from those cause-and-effect relationships that we do not control during production. This means that the secret to reducing process variation is to identify those causes that are not being controlled during production but that have a big effect on the process outcomes. When we identify such a cause and make it part of the set of controlled process inputs, we will remove a chunk of variation from the process stream while gaining an additional variable to use in setting the process average.

Well, after figure 2, the manufacturer was getting desperate and thought it might lose the business. So the CEO took the whole project team and sent them to my SPC class. After the class the team identified 60 causes to track on process behavior charts. Over the next nine months, the cause-and-effect diagram grew from 600 causes to more than 1,300 causes, while the number of process behavior charts dwindled from 60 down to about a dozen. During this time the team made 15 changes in the production process. Twelve of these 15 changes involved inputs that they had studied during the first year. The other three changes involved inputs that they had not even thought of during the first year.

Figure 3: Nine months later with process behavior charts

Following these changes the manufacturer had 98-percent conformance to the blueprint specification of 8 mils. Seeing this improvement, the customer offered to help with that last 2 percent nonconforming by changing the specification on the blueprint to 13 mils!

So how does the process behavior chart approach differ from the traditional DMAIC approach? It begins by listening to the process rather than by experimenting on the process piecemeal. It does not require the user to guess which inputs might be important before collecting data. Rather, it looks at the process as a whole and lets the process tell the user whether there are dominant, uncontrolled cause-and-effect relationships present. In the words of one engineer from the project team, “The process behavior charts gave us the courage of our convictions. We learned what we really had to fix. We discovered which causes mattered the most, and even had a few surprises.” Or as a manager at another of my clients expressed it, “With these charts we actually discovered new technical knowledge as well as some dumb things we had been doing.” When you stop doing the dumb things, and then use the new technical knowledge, all sorts of improvements are possible.

Variation reduction

In another case a client had an assembly that had excessive warranty costs. In their internal testing, they found a failure rate at performance loads that ranged from 4 percent to 11 percent, and this high failure rate had gone on for six years. This was such a problem that the head engineer could recite off the top of his head 11 different studies they had performed trying to fix this problem. While they knew a lot about the chemistry and physics of the various steps in this process, they still could not do anything about the strong seasonal component to these failures, nor could they substantially change the overall failure rate. They had gotten winter and fall down to about 4 percent failures, but the summer was still showing 8-percent failures at performance loads.

Figure 4: Percent failing under performance load by quarter

After taking my SPC class in the winter of year seven, they started using process behavior charts throughout the different stages of production for this assembly. Four months later the spring quarterly failure report showed only 3-percent failing under performance loads. They had never done this well before!

The general manager immediately wanted to know what they had fixed—what was the magic bullet that had eluded them for so long? Upon investigation, the plant manager found that they were not doing anything different. They were doing the same things they had done before, but they were doing all of them more consistently. They had 15 process behavior charts for this fabrication and assembly operation, and they all showed a reduction in variation over the previous three months. They had had the right process, and they had had the right set of control factors, but they had not been operating each process step on target with minimum variance. As they reduced the variation at each step, the successive steps produced better product, and the cumulative result was a lowering of the failure rate. That summer, instead of the typical 8 percent, they had 2.5-percent failing at performance load. And the successive quarters got even better.

Variation always creates costs. Reducing the variation at the source will reduce subsequent costs. Here the cumulative effect was a dramatic reduction in the percent of warranty failures. It was not new knowledge, or a process upgrade, but operational consistency that was needed here.

Figure 5: Percent failing under performance load by quarter

Process behavior charts let us have a dialogue with our processes. This allows us to learn what we need to know and to take appropriate actions. Since we have dozens or hundreds of cause-and-effect relationships to consider, listening to the process is always more effective than rushing in with a program of experimentation. Although experiments allow us to get specific answers to specific questions, they are not very helpful when we do not know what questions to ask.

Problems with DMAIC models

An integral part of every Six Sigma program is the use of a DMAIC model to provide a framework for reengineering your process. These DMAIC models require a lot of steps up front to help you to think of those factors that you need to study. You have to make the effort to figure out what your process needs. This is the “think system” for process improvement. As seen in the examples, the “think system” can be used for years without any satisfactory results.

In contrast to all the elaborate warmup and mental effort needed to guess how you might improve your process, you could instead use process behavior charts to let the process itself identify those things that need to be investigated. In both of the examples above, it was the introduction of the process behavior charts that was the key element in breaking the logjam and making progress. The investigators already possessed the needed subject matter knowledge, but it was the process behavior charts that told them how to use that knowledge effectively in spite of having to deal with hundreds of cause-and-effect relationships.

Another problem with the DMAIC model is the assumption that every process needs to be reengineered. In the second example, there was no need to reengineer the process. The process was already the right process. All that was needed was to operate each step up to its full potential. And process behavior charts are the catalyst for operating any process at full potential.

Since reengineering a process is never cheap, it should only be used when it is needed. If you do not know what can be accomplished by operating your current process predictably and on target, how can you know if you need to reengineer that process? This failure to investigate what can be accomplished by operating the current process up to its full potential is a major flaw of most DMAIC models.

A third problem with DMAIC models is the assumption that the only methodology for process improvement is to use a series of experiments. As noted earlier, every experiment involves studying how a few inputs affect a response while all other inputs are held constant or ignored. If a process is being operated unpredictably, there will be dominant, yet unknown, assignable causes present. Since they are unknown, these assignable causes will almost always be ignored by any experiments we conduct. And experiments done while dominant cause-and-effect relationships are ignored will be of dubious utility.

In contrast to the experimental approach, process behavior charts allow you to learn by listening to the process. Since process behavior charts are the operational definition of whether assignable causes are present, they allow you to discover the unknown factors that affect your process. And while discovering these unknown causes is a prerequisite for conducting the right experiments, the process behavior chart will often make things so clear that experiments will no longer be required. After all, you can learn a lot by listening.

Finally, DMAIC models assume that, even though you have given no thought to the support and operational discipline required to operate your current process predictably, you will be able to magically operate the new, upgraded process predictably by simply throwing a “control chart” over the wall to production. Unfortunately, all of our experience with process upgrades tells us otherwise. Everywhere I go people tell me that process upgrades rarely deliver all that they promise. I think this happens for the following reason.

If you cannot operate your current process, whose idiosyncrasies you know, up to its full potential, then how will you operate a new process, with new idiosyncrasies, up to its full potential? If you do not have the operational discipline to get the most out of your current process, how will you get the most out of your reengineered process?

Many have found that operating their current process up to its full potential has resulted in a reduction of the process variation down to one-half, one-third, or one-fourth of its previous level. With these reductions, operating the current process at full potential is often good enough to eliminate the need for reengineering.

“But doesn’t DMAIC include a control chart in the last step?”

Yes, most DMAIC models suggest that a process behavior chart can be used to maintain the status quo after the process has been reengineered. But a process behavior chart is so much more than a process monitor. As noted above, if your organization does not have the discipline to use a process behavior chart to get the most out of your current process, how will it manage to use a chart to get the most out of a process involving new technology, new materials, new techniques, and the resulting new cause-and-effect relationships? Improvement is more a matter of practicing a way of thinking than it is a matter of having the right technique.

An operational definition for improvement

An operational definition has three parts: a criterion to be met; a test procedure for determining compliance with the criterion; and a decision rule for interpreting the results of the test procedure. For many years I had the privilege to hear W. Edwards Deming paraphrase the whole concept of an operational definition using three questions. “What do you want to accomplish?” “By what method will you accomplish it?” and “How will you know when you have accomplished it?”

What Deming called the Shewhart cycle of “plan, do, study, act” (PDSA) was simply an expanded version of an operational definition. As others came along, the PDSA cycle morphed, evolved, and was transformed into DMAIC. But along the way, the ideas got confused with the techniques until all similarity to the original was lost. Yet, according to Walter Shewhart, a process behavior chart is an operational definition of how to get the most out of any process.

“What do you want to accomplish?” The limits define the ideal of what your process can deliver when it is operated up to its full potential. It is only when your process is operated predictably that it will be operating with maximum consistency. Thus, predictable operation is equivalent to getting the most out of your process, and the limits define what your current process can achieve. Any managers who do not operate their process predictably and on target are not getting what the company bought and paid for.

“By what method?” The running record shows the actual process performance. Any points that fall outside the limits identify those occasions when the assignable causes are actively changing your process. As Aristotle taught us, the time to discover causes is when changes occur. And process behavior charts provide us with a systematic methodology for learning about the assignable causes of exceptional variation that are present in our process. They provide the only method that will allow us to operate our processes up to their full potential.
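The limits referred to here are the natural process limits of a process behavior chart. For an XmR (individuals) chart, these are conventionally computed as the average plus or minus 2.66 times the average moving range. A minimal sketch in Python, with made-up measurements:

```python
# Natural process limits for an XmR (individuals) process behavior chart.
# The measurements below are made up purely for illustration.
values = [7.2, 8.1, 6.9, 7.5, 9.0, 7.8, 15.0, 7.3, 8.0, 7.6]

mean = sum(values) / len(values)

# Moving ranges: absolute differences between successive values
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits for individual values (scaling factor 2.66)
upper_limit = mean + 2.66 * avg_mr
lower_limit = mean - 2.66 * avg_mr

# Points outside the limits identify the occasions when assignable
# causes are actively changing the process -- the times worth investigating
signals = [x for x in values if x > upper_limit or x < lower_limit]
print(f"Limits: {lower_limit:.2f} to {upper_limit:.2f}")
print(f"Signals: {signals}")  # the 15.0 value falls above the upper limit
```

Here the 15.0 reading falls outside the limits and so flags an assignable cause; the rest of the running record stays inside the limits and reflects routine variation.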

“How will you know?” Finally, by combining the process performance with the process potential on a single graph, the process behavior chart allows us to judge how closely our process has come to operating up to its full potential. In this way it allows us to answer the question about how we will know when we are operating our process with maximum consistency.


In short, there is nothing wrong with a DMAIC model that cannot be remedied by incorporating process behavior charts in each and every step. When these charts are used at the start as the major technique for defining, measuring, analyzing, improving, and controlling your process outcomes, they can transform the complexity of the DMAIC model into a very simple and straightforward process of continual improvement.

DMAIC models use many different tools to accomplish their goal. (One client had trained their Black Belts in 47 different statistical techniques, and as a result everyone was simply confused about when to use which technique.) When process behavior charts are understood and used as an operational definition, they accomplish the same goal as the various DMAIC models, and they do it with an integrated approach using one simple technique. By avoiding technique paralysis, process behavior charts allow people to be more successful in taking action. As in the two examples above, this has proven to be the case time after time.

“But wait,” you might say, “what about all the progress that has been made using DMAIC models?” The fact that so much progress has been reported is either a tribute to the ability of the Six Sigma practitioners to identify things that escaped the notice of previous generations of process engineers, or else it is a commentary on how completely disorganized everyone was to begin with. Since the statistical and organizational tools of Six Sigma do not bring new subject matter knowledge to the table, I will let the reader decide which of these two alternatives is the most likely explanation.


About The Author

Donald J. Wheeler

Dr. Wheeler is a fellow of both the American Statistical Association and the American Society for Quality who has taught more than 1,000 seminars in 17 countries on six continents. He welcomes your questions; you can contact him at djwheeler@spcpress.com.