
Davis Balestracci

Six Sigma

The ‘Actual’ vs. ‘Should’ Variation Gap

Nonquantifiable human variation plays a large role in this gap

Published: Monday, March 10, 2014 - 17:44

My last column, “Can We Please Stop the Guru Wars?” made the case that the various improvement approaches are all pretty much the same. To recap, there are seven sources of problems with a process. The first three sources help frame the situation.

They are:
Source No. 1. Inadequate knowledge of customer needs
Source No. 2. Inadequate knowledge of how the process currently works
Source No. 3. Inadequate knowledge of how the process should work

In applying improvement theory, there is a gap (variation) between what is actually happening and what should be happening, much of it caused by nonquantifiable “human” variation. This gap can be reduced through the (nonstatistical) tools of a flowchart and the 13 questions to ask about a process.

This constitutes the initial “plan” stage of any plan-do-study-act (PDSA) action and can be revisited as needed during the improvement process by looking at the proposed solution as a process in its own right. For those of you wedded to Six Sigma, I suppose this would make the “define” stage of the define, measure, analyze, improve, control (DMAIC) methodology more robust.

The remaining four sources of problems can now be used to gain further insight and, when coupled with data and the Pareto principle, perhaps isolate a major source of variation that can be addressed. In other words, the solution might already be contained within the current process. This is much easier for a work culture to accept than routine-upsetting experimentation under the lurking shadow of a major change for reasons of which they might not be totally convinced.

However, if improving the current process isn’t an option because of capacity issues or obsolescence, reconsidering the first three sources may alert one to major cultural roadblocks that would impede any solution. Remember, things are the way they are because they got that way. And they are “perfectly designed” (and determined!) to stay that way.

Source No. 4. Errors and mistakes in executing procedures:
• Variation in how individuals are trained to do the work
• Variation in how people actually do the work
• Beneficial “knacks” some people have developed to work around process design limitations and make fewer errors
• Underlying process issues that cause everyone working in the process to make the same mistake. In this case the process is perfectly designed to tolerate the mistake, and it is a system issue; telling people to “be more careful (or else)” will not work.

Good data taken at this step, or even judicious use of previously collected in-house data, will often allow stratification to identify the “20 percent of the process causing 80 percent of the problem” and provide a subsequent focus. Such data can also identify the workers and entire departments that are doing good work (often unaware of how well they are performing), knowledge that can then be shared with all workers.
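As an illustration of this kind of stratification, here is a minimal sketch in Python using pandas. The error log, the department labels, and the column names are hypothetical, invented purely for the example.

```python
import pandas as pd

# Hypothetical error log; departments and error types are invented.
errors = pd.DataFrame({
    "department": ["A", "B", "A", "C", "A", "B", "A", "D", "A", "B"],
    "error_type": ["dose", "label", "dose", "delay", "dose",
                   "dose", "label", "delay", "dose", "dose"],
})

# Stratify: count errors by department, largest contributor first.
by_dept = errors["department"].value_counts()
cum_pct = (by_dept.cumsum() / by_dept.sum() * 100).round(1)
pareto = pd.DataFrame({"errors": by_dept, "cumulative_pct": cum_pct})
print(pareto)
```

In this toy log, department A accounts for half the errors, which is exactly the “vital few” signal one looks for before deciding where to focus.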

With the increasing publicity of tragic medical errors, I’d like to propose a special subclass of errors and mistakes:

Source No. 4a. A series of errors and mistakes converging, one after the other, within a series of key procedures for one patient or customer (similar to getting all the red lights on the way to work). This results in unforeseen incidents and tragedies that “shouldn’t” happen, often causing major outrage and unwanted publicity.

When such incidents occur, they are often dissected down to the minutiae via root cause analysis and treated as special causes. Consider this: What if each individual subprocess (each of which is perfectly designed to have occasional lapses) randomly lapsed at the same time, resulting in the incident? In that case, the incident was inevitable (i.e., common cause).
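To see why “inevitable” is the right word, a back-of-the-envelope calculation helps. The number of safeguards, the lapse rate, and the case volume below are purely hypothetical, chosen only to show how individually rare lapses converge at scale.

```python
# Hypothetical: three independent safeguards in a key procedure,
# each "perfectly designed" to lapse on about 2% of cases.
p_lapse = 0.02
n_safeguards = 3

# Probability that all of them lapse on the same case at the same time:
p_convergence = p_lapse ** n_safeguards
print(f"P(all safeguards lapse on one case) = {p_convergence:.1e}")  # 8.0e-06

# Spread over a year of routine work, the "impossible" becomes expected:
cases_per_year = 250_000
expected_per_year = cases_per_year * p_convergence
print(f"Expected convergences per year = {expected_per_year:.1f}")  # 2.0
```

Nothing unusual happened to any single safeguard; the incident rate is built into the system’s routine operation, which is exactly what common cause means.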

It is indeed a special cause to the family of the victim (or customer), but it is a common cause to the system. Put another way, suppose an organization averaged two major incidents per year. In that case, 1) any given year could see anywhere from zero to six incidents; and 2) there is absolutely no way to predict to whom they will happen. This is unsettling. Tick, tick, tick....
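The “zero to six” range follows from treating counts of rare incidents the way a process behavior chart for counts (a c-chart) does. Here is a minimal sketch under that assumption, using the example’s average of two incidents per year; the Poisson model is the standard one for such counts, not something specific to this column.

```python
from math import exp, factorial, sqrt

mean_rate = 2.0  # average major incidents per year, as in the example

# Common-cause limits for a count (c-chart): mean +/- 3*sqrt(mean),
# with the lower limit floored at zero.
lower = max(0.0, mean_rate - 3 * sqrt(mean_rate))
upper = mean_rate + 3 * sqrt(mean_rate)
print(f"Common-cause range: {lower:.1f} to {upper:.1f} incidents per year")
# -> 0.0 to 6.2: zero to six incidents in a year is routine for this system.

# Cross-check against the Poisson distribution itself:
p_0_to_6 = sum(exp(-mean_rate) * mean_rate**k / factorial(k) for k in range(7))
print(f"P(0 to 6 incidents in a year) = {p_0_to_6:.3f}")  # about 0.995
```

A year with five incidents feels catastrophically different from a year with zero, yet both are entirely consistent with the same unchanged system.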

I suppose it would be useful to continue root cause analyses, but the result (usually what Brian Joiner calls a Level 2 fix) must go deeper: People need to ask where else similar breakdowns could happen. It must go deeper still if these incidents are happening in every department, in which case it is a system issue and the workers are not at fault. Do the words “poor communication” come to mind? Just telling people to be more careful won’t fix anything. When appropriate, always push for a Level 3 fix, one that addresses the system that produced the process rather than merely patching the process itself.

Alas, because of the intense pressure, scapegoats often need to be found; it can become a search for the guilty. Ask the question: If all of the guilty parties were found and fired, and replacements were hired, could it happen again? If the answer is yes, then it’s common cause. Elsewhere within your organization, similar subprocesses are ticking away, waiting to converge and create a similar incident, many times in a totally unrelated process.

Here is the brilliant article (from 15 years ago) that helped me make this breakthrough in thinking: Lucian Leape explains how the unexpected death of a Boston Globe reporter was common cause, as in she won the wrong lottery! What have we learned in the ensuing 15 years? If you ask me, not much, although the rhetoric has improved, and root cause analyses have become a thriving quality subindustry.

I’ll talk about the final three sources of problems with a process next time. Regarding data: As you see, data are helpful, early and often, for stratification, but there are a lot more opportunities. I’ll talk about that as well.


About The Author


Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.