Donald J. Wheeler

Six Sigma

Two Routes to Process Improvement, Part 2

Assignable causes and common causes

Published: Thursday, May 6, 2010 - 07:00

An effective model for the nature of data will inevitably identify two different paths to process improvement. One path seeks to operate a process up to its full potential, while the other seeks to operate the process so that it meets requirements. This article explains how these two paths differ and how they can be used together to successfully improve any process. Some of the figures (figures 1 through 10) referred to in this part of the series can be found in Part 1.

Assignable causes and common causes

If we use the Pareto principle to organize the uncontrolled factors of figure 10, we end up with figure 11. There, we see that the set of uncontrolled factors contains two dominant cause-and-effect relationships. Walter Shewhart called such dominant but uncontrolled causes “assignable causes.” W. Edwards Deming called the group of remaining, lesser uncontrolled factors “common causes.” Thus, we have three types of cause-and-effect relationships: control factors, assignable causes, and common causes. Each of these three types of cause-and-effect relationships plays a different role in how the process behaves over time.


Figure 11: Three types of cause-and-effect relationships
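As a rough illustration of this Pareto ordering, the following Python sketch ranks a set of uncontrolled factors by the size of their effects and splits off the dominant ones. The factor names, the effect sizes, and the three-times-the-median cutoff are all hypothetical choices made for the example, not values taken from figure 11.

```python
# Hypothetical effects of uncontrolled factors on the product characteristic.
# The names echo the factors discussed in this article; the numbers are invented.
uncontrolled_factors = {
    "factor 14": 9.0,
    "factor 16": 7.5,
    "factor 2": 1.1,
    "factor 9": 0.8,
    "factor 11": 0.6,
    "factor 3": 0.4,
}

# Pareto ordering: largest effect first.
ranked = sorted(uncontrolled_factors.items(), key=lambda kv: kv[1], reverse=True)

# An assumed rule of thumb for this sketch only: a cause is "dominant" when
# its effect exceeds three times the median effect of the whole set.
effects = sorted(e for _, e in ranked)
median = effects[len(effects) // 2]
assignable = [name for name, e in ranked if e > 3 * median]
common = [name for name, e in ranked if e <= 3 * median]

print(assignable)  # ['factor 14', 'factor 16']
print(common)      # ['factor 2', 'factor 9', 'factor 11', 'factor 3']
```

In practice the split comes from a process behavior chart rather than from known effect sizes; the sketch only mirrors the idea that a clear gap separates the dominant causes from the rest.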

The levels of the control factors used to operate the process will determine a baseline value for the average level of the product characteristic. The fact that the levels of the control factors are fixed means that they contribute little or no variation to the process.

Virtually all of the process variation will come from the set of uncontrolled factors. Because of their dominant effects, changes in the levels of the assignable causes will tend to result in noticeable changes in the process average. Over time, these process changes will increase the variation displayed by the product stream. In this way, the assignable causes will typically be the dominant source of variation in the product stream.

Because of their lesser effects, and due to their greater numbers, changes in the levels of the common causes will tend to cancel each other out. Individual observations may be displaced, but the overall process will show little change except for the variation introduced by all of these small changes. In this way, the common causes will create the remainder of the variation in the product stream. As a result of these three types of cause-and-effect relationships, a process like the one seen in figure 11 will behave erratically over time, and might be pictured in the manner shown in figure 12.


Figure 12: A process subject to assignable causes will behave erratically over time.
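A minimal simulation, assuming nothing beyond what figure 12 depicts, can reproduce this kind of erratic behavior: many small common-cause effects that largely cancel, plus occasional level changes from an assignable cause. The baseline value of 10.0, the shift size of 2.0, and the change points are all invented for the illustration.

```python
import random

random.seed(1)  # reproducible illustration

def observation(shift):
    # Common causes: twenty small, independent effects that largely cancel.
    common = sum(random.uniform(-0.2, 0.2) for _ in range(20))
    return 10.0 + shift + common  # 10.0 is a hypothetical baseline average

stream = []
shift = 0.0
for t in range(100):
    if t in (30, 65):  # hypothetical moments when an assignable cause changes level
        shift += random.choice([-2.0, 2.0])
    stream.append(observation(shift))
```

Plotting stream against t shows the average jumping at t = 30 and t = 65. Leave the shift at zero throughout and the same code produces the predictable, figure 14 style of behavior instead.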

This is why effort spent trying to fine-tune the control factors will be of marginal benefit as long as there are assignable causes present. It doesn’t matter what levels of factors 5, 1, 7, and 4 we choose to use, as long as we are doing nothing about factors 14 and 16. The variation introduced by the assignable causes will more than obliterate any fine-tuning of the average level we might achieve by adjusting the control factors.

You cannot optimize any system when some of the dominant cause-and-effect relationships remain unidentified. When the set of uncontrolled factors contains assignable causes, it's worthwhile to identify those assignable causes and take steps to make them part of the set of control factors. By doing so, we will be removing a major chunk of the variation in the product characteristic, while, at the same time, gaining added leverage for controlling the average level of the product characteristic. For this reason, effort spent in identifying assignable causes has a high payback and should always be given priority.

But how can we separate assignable causes from common causes? This is where observational studies come in. Unlike experimental studies, where a few selected factors are manipulated to determine their effect upon a product characteristic, observational studies look at how the process behaves over time while it is under the influence of all of the cause-and-effect relationships. If the process displays erratic behavior such as that shown in figure 12, then it is likely that one or more assignable causes are present. The technique for deciding whether the process is erratic or consistent is the process behavior chart.

Process behavior charts separate the routine variation of the common causes from the exceptional variation of the assignable causes by using limits to define the bounds of routine variation. When points fall outside these limits, they are interpreted as signals of the presence of assignable causes. By investigating the context for such points, we can discover the nature of the assignable causes affecting our process. As we take steps to control these assignable causes we will remove their variation from the product stream while gaining added leverage for adjusting the process average. At some point, we will reach that state shown in figure 13, where there are no more dominant cause-and-effect relationships in the set of uncontrolled factors.
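For individual values, the natural process limits are the average plus or minus 2.66 times the average moving range. The sketch below computes these XmR chart limits and flags the points that fall outside them; the data values are hypothetical.

```python
def xmr_limits(data):
    # Natural process limits for an XmR (individuals) chart:
    # average +/- 2.66 times the average moving range.
    xbar = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return xbar - 2.66 * mr_bar, xbar + 2.66 * mr_bar

def assignable_cause_signals(data):
    # Points outside the limits are signals: contexts worth investigating.
    lo, hi = xmr_limits(data)
    return [(i, x) for i, x in enumerate(data) if x < lo or x > hi]

# Hypothetical measurements with one sudden upset at index 6:
values = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 13.4, 10.0, 9.9, 10.2]
print(assignable_cause_signals(values))  # [(6, 13.4)]
```

In practice one would also chart the moving ranges themselves and use more data than this. The point is simply that the limits are computed from the routine variation, so a dominant cause announces itself by breaking them.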


Figure 13: A process operating at full potential



Figure 14: In the absence of assignable causes, the process will behave predictably.

When, as in figure 13, the set of uncontrolled factors contains nothing but common causes, the variation in the product characteristic will be the result of many different cause-and-effect relationships, where no one cause-and-effect relationship is dominant. The resulting process variation will be the routine, run-of-the-mill variation that is inherent in every production process. At this point, the product stream should look like the one shown in figure 14.

Seeking to identify and control common causes is a low-payback strategy. When all of the dominant causes are contained in the set of control factors, you will have reached an economic limit on finding and removing causes of variation. At this point of economic equilibrium, the product stream will have the greatest degree of uniformity that’s consistent with economic production. Controlling common causes will be uneconomical. The process will be operating at full potential and the product stream will be predictable over time.

Thus, by providing a mechanism for separating the assignable causes from the common causes, process behavior charts become the locomotive for moving a process to the point of operating at full potential. They provide an operational definition of an assignable cause and they allow us to judge whether a process is operating with minimum variance.

What about meeting requirements?

When a process is operated predictably, as shown in figure 14, it will have cause-and-effect relationships such as those shown in figure 13. Such a process will be operating at a point of economic equilibrium. It will be operating at full potential, and it will have the minimum amount of variation that’s consistent with economic production. However, none of these properties will guarantee that the process will meet requirements.


Figure 15: Outcomes for Characteristic No. 2 corresponding to figures 8, 10, and 13

As shown in figure 15, a point of economic equilibrium may be less than perfect even though it may be the best we can expect to achieve without changing the process in some major way. To achieve greater consistency for Characteristic No. 2, and to thereby eliminate the 3-percent rework, we have to upgrade or reengineer this production process. This will typically require the introduction of new technology, new equipment, new procedures, or new materials into the process. Moreover, when we do this, we are effectively adding new cause-and-effect relationships to our list. Of course, with these new factors to consider, we end up returning to the starting point for research shown in figure 7. While we may have partial knowledge about our upgraded process, we still have to learn how the new cause-and-effect relationships fit in with the original set of cause-and-effect relationships to be able to once again select the dominant causes for our set of control factors.

Thus, as shown in figure 16, when we upgrade or reengineer a predictable process, we are changing the set of cause-and-effect relationships and returning to the starting point for research. Experimentation will be required to begin to identify the new control factors and their appropriate levels for production. However, experimental studies will only provide information about some of the cause-and-effect relationships. They will tell us nothing about those cause-and-effect relationships that aren’t included in the experiment.


Figure 16: The effect of reengineering a process



Figure 17: Reengineering and operating at full potential

Therefore, before a reengineered process can be operated up to its full potential, an observational study is required. As assignable causes are identified and made part of the set of control factors, the reengineered process will move toward its point of economic equilibrium with the resulting reduction of variation that inevitably follows.

On the other hand, trying to reengineer an unpredictable process will simply add to the chaos. As long as the process is subject to assignable causes, it will do no good to work on manipulating other factors. Reengineering is premature. If you haven’t learned how to get the most out of your current process, what makes you think that you will be able to get the most out of a new process? Operating a process up to its full potential is a discipline; it has to be practiced. It cannot be installed or implemented. While it’s important to operate to meet requirements, operating at full potential is a key element in meeting requirements in an economic manner.


Figure 18: Reengineering an unpredictable process

The fork in the road to process improvement

Everyone understands the need to meet requirements. If the specifications given by the customer are not met, then the customer may go elsewhere. This economic imperative is clear, yet the way to achieve this objective is hardly ever clear. Conventional wisdom tells us to upgrade our bad processes. Of course, the time, trouble, and expense of reengineering a process will make this a path we will avoid whenever possible. Fortunately, conventional wisdom also tells us to leave things alone as long as the outcomes are satisfactory. Thus, the trouble associated with reengineering and the conventional wisdom about when to work on a process will combine to create alternating periods of benign neglect and intense panic. And the changeover from neglect to panic will usually coincide with an increase in the percentage of nonconforming product.


Figure 19: Conventional wisdom about process improvement

As a consequence, the conventional approach to improvement is to live with our existing process as long as possible, and then, in an atmosphere of panic, to seek to reengineer the process to the point where we are no longer in trouble. Money is spent on technology, equipment, and experiments until the percentage of nonconforming product drops. Then, regardless of whether or not the technology, equipment, or experiments had anything to do with that drop, the new status quo becomes the existing process du jour.

However, by considering the nature of data, we have discovered a second route to process improvement. This second route does not require expensive experiments, new technologies, or new equipment. It can be used at any time by the existing personnel. It consists of using process behavior charts to evaluate observational studies. This approach will allow you to get the most out of your existing process. Typically, the process variation can be reduced by 50 percent to 80 percent. Moreover, these reductions turn out to be effectively free due to the high payback associated with finding assignable causes and making them part of the set of control factors.

Process behavior charts may be used to evaluate observational studies on any type of process. They can be used whether the existing process is being operated predictably or unpredictably. Moreover, whenever a process is reengineered, process behavior charts will complement and complete the experimental studies and will help in fine tuning the upgraded process.

Thus, there is a time and place for observational studies and a time and place for experimental studies. Both are needed, but you need to know how and when to use each approach. This means that there is a fork in the road to process improvement, and the fork you need to take will depend upon what type of behavior your process displays.


Figure 20: The fork in the road to process improvement

If your process is being operated predictably, then your process is operating at full potential and the conventional wisdom will be correct and appropriate. However, if your process is not being operated predictably, then it is not operating at full potential, and you will need to identify the assignable causes and make them part of the set of control factors. As shown in figure 20, if you pick the wrong improvement strategy, your efforts are likely to be wasted. Upgrading an unpredictable process is a waste of time and effort, while looking for root causes of variation in a predictable process will lead to frustration. And the only way to answer the question at the fork in the road is to use a process behavior chart.
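The decision logic of figure 20 is simple enough to state as a two-branch rule. The helper below is only an illustrative paraphrase of that logic, not code from the article.

```python
def improvement_route(chart_shows_signals: bool) -> str:
    # The process behavior chart's verdict picks the route (figure 20).
    if chart_shows_signals:
        # Unpredictable: assignable causes are present.
        return ("identify the assignable causes and make them "
                "part of the set of control factors")
    # Predictable: already operating at full potential.
    return ("if requirements still are not met, upgrade or "
            "reengineer the process")
```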

Since it has proven to be easier and cheaper to operate existing processes up to their full potential than to upgrade and reengineer them, any discussion of process improvement that does not take advantage of process behavior charts to operate processes predictably is inherently flawed and incomplete.

Shewhart’s fundamental observation that some processes are operated predictably, while others are operated unpredictably, is demonstrably true. It has been proven over and over again. Ignore this fact of life and your process improvement efforts are likely to be ineffectual.

Summary

Process behavior charts and observational studies allow you to identify assignable causes so that you can make them part of the set of control factors and thereby operate a process up to its full potential. They allow you to work with existing processes, and they allow you to polish new or upgraded processes.

When adding new elements to a process as part of a process upgrade, or when designing a new process, experimental studies allow you to examine the impact of selected factors upon a product characteristic. However, our model for the nature of data tells us that trusting in the results of a series of experimental studies can never be completely satisfactory. Experiments are very successful at answering specific questions and confirming suspected relationships. They are of limited utility when we do not know what questions to ask.

The very nature of an experiment demands that we study some relationships while ignoring others. For this reason, what we learn from experiments is likely to be true, but it is also likely to be incomplete. Thus, our model for the nature of data tells us that we need to complement and complete the results of any experimental study by using an observational study. For it is only with an observational study that we learn what our process does while all of the cause-and-effect relationships are present. This is the only reliable way to determine if our process contains any assignable causes like factors 14 and 16. Observational studies are a necessary step in learning from the process data and in developing a complete understanding of our processes.

The third, and perhaps the most important, implication of our model for the nature of data is that we should start any investigation of an existing process with an observational study. If we try to adjust, improve, or optimize an existing process without first checking for the presence of assignable causes we may end up playing with less than a full deck. As a result we may find ourselves erroneously adjusting the level of factor 4 rather than working to remove the effects of factors 14 and 16.

Thus, in addition to the tools and techniques that allow us to analyze experimental data, we will also need a way to analyze data from observational studies. And the primary tool for the analysis of observational studies is the process behavior chart. Process behavior charts provide the only path to operating at full potential.


About The Author


Donald J. Wheeler

Find out about Dr. Wheeler’s virtual seminars for 2022 at www.spcpress.com. Dr. Wheeler is a fellow of both the American Statistical Association and the American Society for Quality who has taught more than 1,000 seminars in 17 countries on six continents. He welcomes your questions; you can contact him at djwheeler@spcpress.com.


Comments

Assignable cause identified: cue the unrealistic solution

During a meeting, the external consultant noted that the metrics for the previous day were excellent, to the point of being an outlier. She then suggested that whatever we were doing the previous day should be implemented and made part of the daily routine.

Then someone reminded the consultant that the previous day was Valentine's Day, and the firm could not realistically make all of the programs lined up that day part of the routine (dating game, live band, free food ...).

Assignable causes

Don't understand a couple of things:
(1) If a dominant cause is always present in a process, then won't its effects stay within the control limits as part of the constant variation?
(2) If a cause is one-time or rare, but dominant when it shows up, won't it likely push the process variation beyond the control limits? This wouldn't be a dominant cause of variation in the process, since it almost always has no effect on the process (by definition), and so wouldn't be an assignable cause in the sense that seems to be used in this article. And hence isn't this a fourth type of cause?

Six Sigma

An article that gets back to basics, which should be simple enough for the myriads of Six Sigma addicts to understand. It should be noted that there are no dark, mysterious forces causing predictable processes to drift, shift, or need to be "corrected" by a mystical 1.5 sigma. There is no point in management commanding its processes and workforce to produce a farcical 3.4 defects per million.

If more people read and understood articles like this, sanity might return to quality. Will managers ever come to realize the Six Sigma emperor has no clothes?