Content By Davis Balestracci

By: Davis Balestracci

My last column, “Can We Please Stop the Guru Wars?” made the case that the various improvement approaches are all pretty much the same. To recap, there are seven sources of problems with a process. The first three sources help frame the situation.

They are:
Source No. 1. Inadequate knowledge of customer needs
Source No. 2. Inadequate knowledge of how the process currently works
Source No. 3. Inadequate knowledge of how the process should work

In applying improvement theory, there is a gap (variation) between what is actually happening in a situation and what should be happening—much of it caused by nonquantifiable “human” variation. This gap can be reduced through the (nonstatistical) tools of a flowchart and the 13 questions to ask about a process.

The various improvement approaches are, in essence, all pretty much the same. Any competent practitioner would neither want to be called a “guru” nor have any problems dealing with another competent practitioner of another improvement philosophy.

In my opinion, any approach should also involve the use of data in some way, shape, or form. I once had a lean sensei (local “guru”?) vehemently make the point that lean does not involve data at all. I'll let you decide. Would you rather be effective or right?

Unfortunately, data conjure up the dreaded word “statistics”... and a huge misconception. The influence of W. Edwards Deming has created the mistaken belief that mass training in statistics will cure all quality ills. We all know the monster this has created.

Not ‘statistics’ but ‘variation’

I believe that one of Deming’s most profound statements was: “If I had to reduce my message to management to just a few words, I’d say it all has to do with reducing variation.” Reduced variation ultimately yields a more predictable process. But let me first expand your conception of “variation.”

I remember all too well the “quality circles will solve everything” craze during the 1980s, which died a miserable death. During this time I was exposed to Joseph Juran’s wisdom about quality circles from his outstanding Juran on Quality Improvement video series from the 1970s. He was adamant: They must be separate from an organization’s formal quality improvement efforts and used only to solve everyday, localized, frontline problems—the 80 percent of the processes causing 20 percent of the problems.

For quality circles to be effective, they must be built into a mature, overall organizational improvement process that is actively working on the majority of problems, those 80 percent that are caused by only 20 percent of the processes.

A hot topic at the moment, especially in healthcare, is rapid cycle PDSA (plan-do-study-act). In many cases, I’m seeing it presented as a “Go on... just do it!” process for everyone to test good ideas in their routine work as a way to work around sluggish management.

The rallying cry is: “What can we do now? By next week? By Tuesday? By tomorrow?”

Twenty-five years ago, I learned a wonderfully simple model summarizing the four stages of a change process, whether personal or organizational.

Awareness
Breakthrough in knowledge
Choosing a breakthrough in thinking
Demonstrating a consistent breakthrough in behavior

Here’s the point: Unless thinking changes, behavior will not change—long term.

Changing behavior, as most of us know deep down, is hard work. In my research on personal and cultural behavior change, it takes a visceral, very uncomfortable wrestling with one’s current belief system: ingrained, unconscious axioms developed in reaction to life experiences up to age 20. It then takes the stark realization (awareness) that these beliefs are perfectly designed to produce the results one is currently experiencing, both personally and professionally, and that have probably served one well... up until now.

For all the talk about the power of control charts, I can empathize when audiences taking mandated courses on quality tools are left puzzled. When I look at training materials or books, they tend to bog down in the mechanics of construction without offering a clue about interpretation.

Some seminars even teach all seven control charts! And then there is the inevitable torturous discussion of “special cause tests” (usually the famous eight Western Electric rules). People are then left even more confused. Does each test signal need to be individually investigated, i.e., treated as a special cause? Not to worry—most people usually investigate only the points outside the control limits. The focus tends to be on individual observations. But what if there is one underlying explanation generating many of these signals that has nothing to do with individual outliers, e.g., a step change?
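To make the most common of those special-cause tests concrete, here is a minimal sketch of an individuals (XmR) chart calculation: limits are the mean plus or minus 2.66 times the average moving range (the standard XmR constant), and the simplest test flags points outside those limits. The data values below are invented for illustration, not taken from the column.

```python
# Sketch of an individuals (XmR) control chart and the most basic
# special-cause test: a point outside the 3-sigma-equivalent limits.
# Limits use the standard constant 2.66 * (average moving range).

def xmr_limits(data):
    """Return (center, lower limit, upper limit) for an individuals chart."""
    center = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar

def outside_limits(data):
    """Indices of points beyond the control limits (special-cause signals)."""
    center, lcl, ucl = xmr_limits(data)
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

# Invented process data with one obvious outlier at index 9.
data = [52, 49, 51, 50, 48, 53, 50, 49, 51, 72, 50, 52]
center, lcl, ucl = xmr_limits(data)
print(f"center={center:.1f}, limits=({lcl:.1f}, {ucl:.1f})")
print("special-cause points:", outside_limits(data))
```

Note that this catches only individual outliers; a step change, as the column warns, can generate many rule signals while no single point looks remarkable on its own.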

Someone once presented me with the graph shown in figure 1. (Yes, the y-scale started at 0.) It almost convinces you that there is a trend, eh?

As many of you know, I hate bar graphs. They are ubiquitous, and most of them are worthless. I’ll make maybe two exceptions: 1) a Pareto analysis; 2) a comparative set of stratified histograms disaggregating a stable period of performance (a Pareto analysis proxy for continuous data). Displaying the latter as bars is one option; another, and my preference, is displaying run or control charts on the same page and same scale.
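For readers unfamiliar with the first exception, a Pareto analysis simply ranks problem categories by frequency and tracks the cumulative percentage, exposing the “vital few” categories. A minimal sketch follows; the category names and counts are invented, not from the column.

```python
# Minimal Pareto analysis sketch: rank categories by count, largest first,
# and report each category's cumulative percentage of the total.

def pareto(counts):
    """Return (category, count, cumulative %) rows sorted by descending count."""
    total = sum(counts.values())
    rows, cumulative = [], 0
    for name, count in sorted(counts.items(), key=lambda kv: -kv[1]):
        cumulative += count
        rows.append((name, count, 100.0 * cumulative / total))
    return rows

# Invented complaint tallies for illustration.
complaints = {"billing": 48, "wait time": 30, "rudeness": 12,
              "parking": 6, "other": 4}
for name, count, cum_pct in pareto(complaints):
    print(f"{name:10s} {count:3d}  {cum_pct:5.1f}%")
```

In this made-up example, the top two categories account for 78 percent of complaints, which is exactly the “vital few” signal a Pareto display is meant to make obvious.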

About 15 years ago, I picked up my morning paper and saw an article, with the accompanying graph below (see my last column), rating the 20 health systems in my metropolitan community on the question, “Would you recommend your clinic to adult friends or family members?” I happened to work at Clinic 19 (maybe not the best service, but excellent care).


I just got through looking at an expensive 186-page quarterly summary of (alleged) customer satisfaction data for a hospital. My head was spinning by page 28.

There were lots of bar graphs, “trending,” correlation analysis, and “top box” and percentile rankings on every—and I do mean every—aspect of a patient’s experience, e.g., “Did the TV call button work?” In addition, numbers that were “above average” (> 50th percentile) were blue, and numbers that were “below average” (< 50th percentile) were red.

In my opinion, it was all pretty worthless. I’m not sure how the sample was chosen or if it was the typical “let’s send out a bunch of surveys and analyze what we get back” sample.

Let’s take a step back and reframe customer satisfaction in a more holistic perspective.

As improvement professionals, part of our learning curve is the experience of facilitating project teams that fail miserably. Yet even after the necessary lessons are learned, very real dangers still lurk in any project, and they go beyond organizing and facilitating a team. What about the choice of project itself?

In the post-mortem—if indeed there even is a post-mortem—the question that inevitably comes up for projects that didn’t even get close to desired results is, “Why was this project chosen in the first place?” With a collective shoulder shrug, the consensus many times seems to be, “It seemed like a good idea at the time.”

Here are five project evaluation criteria by Matthew E. May. He suggests scanning the current organizational project portfolio and evaluating your role by giving each project a star rating: one star for each criterion. Ask yourself, “What percentage of my work is five-star projects?”

TQM, Six Sigma, lean, lean Six Sigma, Toyota Production System—wrong focus! It’s time to move the focus from “method” to “improvement.”

I’m in the middle of a hot, humid stretch of weather, as are many U.S. readers. I can hardly think straight, so I’ve decided to lighten things up a bit today.

Many of you have seen me present and know that I try to inject healthy doses of humor to make key points. As my mentor and dear friend Rodney Dueck said to me while we tried to transform culture, one has to “think of it all as entertainment.” He kept reminding me that we can’t take ourselves and our passion for improvement seriously all the time.

I quoted W. Edwards Deming in my last newsletter: “Don't waste too much time on tools and techniques. You can learn the lot in 15 minutes.”

But what if you had only five minutes?

We all do a lot of teaching. I think Father Guido Sarducci's* concept of a five-minute university is brilliant. Just think of its potential to help cultures pass external audits when the auditors swarm upon them, or when judges for quality awards visit: