



© 2023 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
“Quality Digest” is a trademark owned by Quality Circle Institute, Inc.
Published: 03/19/2014
This is the last in my series making the case that the various improvement approaches are all pretty much the same.
There are seven sources of problems with a process. The first three help frame the situation:
Source 1: Inadequate knowledge of customer needs
Source 2: Inadequate knowledge of how the process currently works
Source 3: Inadequate knowledge of how the process should work
In my last column, I talked about:
Source 4: Errors and mistakes in executing procedures
• How about isolating and focusing on the 20 percent of a process (vague problem) where most of the variation is occurring? This would be the time for more detailed flowcharting.
Source 4a: Sentinel events, when everything that can go wrong goes wrong at once for one patient or customer—inevitably (i.e., common cause)
• Avoid a microscopic, root cause analysis approach, i.e., treating all such events as special cause.
The fifth source considers the human and environmental design factors that keep the process from working as it “should.”
Source 5: Current practices that fail to recognize the need for preventive measures. These include:
• Physical environmental factors that make the process “perfectly designed” to have undesirable variation or incidents.
• Human fatigue—insufficient attention due to sensory overload
• Poor short-term memory
• Fixation on fixing things that go wrong but could easily have been avoided
• Reversion under stress
• Over-generalization
The sixth source addresses unnecessary complexity.
Source 6: Unnecessary steps, inventory buffers, wasteful measures and data, including:
• Complexity added in the past due to inappropriate reactions to experienced variation, resulting in nonvalue-added work
• Implementing untested solutions
• Using poorly collected data to make decisions
• Routine data collections that are rarely or never used
In addition to considering these issues for the process at hand, I feel that the last element requires formal consideration as part of any project. Look at the value and quality of the process’s current default data process (e.g., collection, analysis and display, interpretation, actions taken). Is it adding value? More about this when I talk about Source 7. But before I do, allow me to digress a bit.
There are at least three well-known examples that use process-oriented thinking.
The first, Six Sigma, emphasizes creating a process that is consistently and virtually defect-free (or free of undesirable incidents). From what I’ve observed, there is a tendency to study the observed, undesirable variation to intuit potential causes among Sources 1 through 6 as “theories to be tested” using planned data.
The road map outlined in my last two columns helps make this process more robust.
Next, key elements of lean emphasize initial formal documentation to obtain the true current process (Source 2: Reducing “human variation” in perception), appropriate error-proofing (Source 5), and exposing waste (Source 6), generally in terms of unnecessary complexity.
Lean looks at an entire process with the ultimate goal of having only value-added work (i.e., work that benefits the customer). Anything else is considered nonvalue-added work, such as:
• Processing defects
• Overproduction
• Inventories
• Movement
• Excessive processing
• Transportation
• Waiting
• Underutilization
Much of this is often designed for organizational convenience or to formally rework process errors that should not even be occurring.
Finally, the Toyota Production System (TPS) takes the concept of “inventory buffers” (Source 6) one step further to an obsession with all aspects of wasted time that keep a process from “flowing.” In other words, how does one avoid the various aspects of inherent “batching” so ingrained in current work cultures... usually for the convenience of the workers?
According to Taiichi Ohno, one of the TPS creators, “The Toyota mind develops brilliant processes in which average employees may excel.”
Unfortunately, in healthcare it would be more true to say: “Healthcare systems have discontinuous processes in which brilliant staff struggle to produce average results.”
Process-oriented thinking is the anchoring concept of any sound improvement framework. It creates a common organizational language that looks at any undesirable variation objectively to reduce blame and defensiveness.
Statistics has a huge role in all of this—and it’s not the legalized torture most of you had to endure to get your colored belts or certifications.
The “value-added” work of the statistical education process should be the ability to apply critical thinking to understand and reduce inappropriate and unintended variation. I think we have a slight gap between how this process “does” work and how it “should” work.
Unintended human variation in the perception of any given situation—including any data’s objective and operational (numerical) definition—will most likely render any existing or collected data worthless for improvement. Well, not exactly: If people realize that the data have no value due to this human variation, it will motivate the need to improve the data process for subsequent collections.
In the context of the six sources of problems already discussed, statistics and data are key to:
Source 2: Inadequate knowledge of how a process currently works.
One of the two main reasons most projects fail is lack of a good baseline estimate of the extent of a problem (the other is too much detailed flowcharting). Do in-house data exist, or can a collection be designed to plot a simple chart for assessment, which then allows one to judge the effects of interventions?
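One common way to get such a baseline is an individuals (XmR) chart. Below is a minimal Python sketch, using made-up weekly error counts purely for illustration; the 2.66 moving-range constant is the standard factor for three-sigma natural process limits on an individuals chart:

```python
def xmr_limits(data):
    """Center line and natural process limits for an individuals (XmR) chart."""
    n = len(data)
    mean = sum(data) / n
    # Average moving range between consecutive points
    mr_bar = sum(abs(data[i] - data[i - 1]) for i in range(1, n)) / (n - 1)
    # 2.66 = 3 / d2, where d2 = 1.128 for moving ranges of size two
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

# Hypothetical weekly error counts pulled from in-house data
weekly_errors = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]
center, lcl, ucl = xmr_limits(weekly_errors)
print(f"center={center:.1f}, limits=({lcl:.1f}, {ucl:.1f})")
# Points inside the limits reflect common-cause variation; the limits give
# a baseline against which the effect of an intervention can be judged.
```

Plotting the points against these limits before intervening is what turns “we have a problem” into a testable baseline.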
Source 3: Inadequate knowledge of how a process should work
This involves using statistical techniques to test competing theories or assess interventions to determine how the process truly “should work,” while holding any gains made going forward.
Source 4: Errors and mistakes in executing procedures; things happening that shouldn’t happen (sentinel events)
Data collected on patterns of errors, mistakes, and incidents can be studied to find hidden opportunities in a process:
• If some people or departments are making the mistake and others aren’t, then there is knowledge in the system to prevent the mistake—or expose inconsistency in trainers’ results.
• If everyone and all departments are making the mistake, then the process is perfectly designed to have the mistake occur. It will take an overall systemic intervention to fix the problem.
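The two situations above can be separated with a simple p-chart-style comparison: compute the overall system error rate, then check each department against three-sigma binomial limits sized to its own volume. A sketch with hypothetical department counts:

```python
# Hypothetical counts: (department, errors, opportunities)
dept_data = [("A", 4, 200), ("B", 22, 210), ("C", 6, 190), ("D", 5, 205)]

total_errors = sum(e for _, e, _ in dept_data)
total_n = sum(n for _, _, n in dept_data)
p_bar = total_errors / total_n  # overall system error rate

results = {}
for dept, errors, n in dept_data:
    # Three-sigma binomial limits around the system rate, for this volume
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5
    lo, hi = p_bar - 3 * sigma, p_bar + 3 * sigma
    p = errors / n
    results[dept] = "special cause" if (p < lo or p > hi) else "common cause"
    print(f"Dept {dept}: {p:.1%} vs ({max(lo, 0):.1%}, {hi:.1%}) -> {results[dept]}")
```

A department outside its limits is genuinely different, and there is knowledge in the system worth extracting; if everyone falls inside, singling out the “worst” department is treating common cause as special cause.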
Source 7: Variation in inputs and outputs—dealing with a process’s “everyday” variation.
• Daily managerial reaction based on anecdotal incidents or poorly collected data
• Scheduled quarterly or annual review meetings “accounting for” current results
• Meetings based on arbitrary numerical goals, including budgeting
• Routine scheduled meetings treating common cause as special cause
The four points above indeed apply to any process under study: What data are available, and how has the organization routinely dealt with them? How will the current project improve the future use of these data—including holding any gains?
But think beyond the data for any specific project. Couldn’t dealing with Source 7—let’s call it the “everyday data process”—be an ongoing project in and of itself?
Since management continues to be obsessed with cutting costs, let’s play their game. Can I challenge you to calculate a cost of poor quality for routine time-wasting meetings involving vague performance or financial data, special cause reactions to problems that are common cause, and performance review meetings explaining current results vs. (arbitrary) numerical goals and budgeting?
I would highly recommend adding in these additional sources suggested by Mark Graham Brown, a Baldrige Award, balanced scorecard, and data analytics expert: one hour per day for each middle manager poring over useless operational data, in addition to the cost of publishing these routine data reports. Sixty percent of published operational reports and 80 percent of published financial reports are waste.
Then estimate. Look at a two-week or one-month sample of leaders’ and middle managers’ schedules. For meetings involving data, multiply each meeting’s duration by the salaries plus benefits present in the room (all participants), sum it up for all such meetings, then project it out for a year. Add in Mark Graham Brown’s sources if you wish. I predict that the figure obtained would be shocking. Keep management in suspense by presenting the figure as “the biggest problem of which no one is aware.” When they lick their chops and demand to know what it is, watch the reaction when you tell them.
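That arithmetic is simple enough to sketch. Every number below is invented for illustration; the hourly rates stand in for salary plus benefits of each person in the room:

```python
# One-month sample of data-driven meetings:
# (meeting duration in hours, [hourly cost of each participant])
meetings = [
    (1.0, [95, 80, 80, 60, 60]),   # weekly operations review
    (2.0, [120, 95, 80, 80]),      # monthly results-vs-budget meeting
    (0.5, [95, 95, 80, 60, 60, 60]),  # daily huddle over anecdotal incidents
]

# Cost of each meeting = duration x total hourly cost of attendees
monthly_cost = sum(hours * sum(rates) for hours, rates in meetings)
annual_cost = monthly_cost * 12  # project the one-month sample out for a year

print(f"Monthly: ${monthly_cost:,.0f}  Annualized: ${annual_cost:,.0f}")
```

Even this toy sample annualizes to a five-figure sum for three recurring meetings; a real schedule across all leaders and middle managers is where the shock comes from.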
I have yet to see someone do it. Can I challenge any of you to do so and tell me the result... even if you don’t want to risk telling your leadership?