Content By Davis Balestracci

By: Davis Balestracci

My last column mentioned how doctors and hospitals are currently being victimized by draconian reactions to rankings, whether the rankings are interpreted literally or filtered through some type of statistical analysis. Besides the potentially serious financial consequences of using rankings in the current craze of “pay for performance,” many hard-working people are inappropriately stigmatized through what has been called a “public blaming and shaming” strategy. Is it any wonder that so many physicians are angry these days?

A real example

Rankings, with allegedly helpful feedback for those allegedly needing it, are also used as a cost-cutting measure to identify and motivate alleged poor performers. Many are analyzed and interpreted using an analysis that, based on courses people have taken, intuitively feels appropriate but should actually be avoided at all costs.

By: Davis Balestracci

Don’t tell me you’re not tempted to look when you spot a magazine cover asking, “How does your state rank in [trendy topic du jour]?” Many of these alleged analyses rank groups on several factors, then compare the sums of each group’s ranks to draw conclusions.

For example, in 2006, I was at a presentation by someone considered a world leader in quality (WLQ) who has been singing W. Edwards Deming’s praises since the late 1980s. He presented the following data as a bar graph, from lowest score to highest.

The data are the sums of the rankings on 10 aspects of care for each of the 21 counties in a small country’s healthcare system (considered to be on the cutting edge of quality). Lower sums are better: minimum possible = 10, maximum possible = 10 × 21 = 210, and average = 10 × 11 = 110 (10 aspects times the average rank of 11).

My antennae went up. A bar graph? With absolutely no context of variation for interpretation? And a literal interpretation of the rankings?
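One way to supply that missing context is to ask how much the 21 rank sums would vary if the counties were, in effect, indistinguishable. Here is a minimal simulation sketch (Python), assuming purely random rankings; the actual data are not reproduced here, so the numbers it prints are illustrative only.

```python
import random

N_COUNTIES = 21   # counties being ranked
N_ASPECTS = 10    # aspects ranked for each county
N_TRIALS = 1000   # simulated "no real differences" scenarios

lows, highs = [], []
for _ in range(N_TRIALS):
    sums = [0] * N_COUNTIES
    for _ in range(N_ASPECTS):
        ranks = list(range(1, N_COUNTIES + 1))
        random.shuffle(ranks)            # random ranking: no county is truly better
        for county, rank in enumerate(ranks):
            sums[county] += rank
    lows.append(min(sums))
    highs.append(max(sums))

# Even when the counties are identical, the rank sums spread widely around 110.
print("typical lowest sum :", round(sum(lows) / N_TRIALS, 1))
print("typical highest sum:", round(sum(highs) / N_TRIALS, 1))
```

If the spread of the 21 actual sums looks much like this simulated spread, the bar graph is displaying mostly common-cause noise, and a literal reading of the rankings says nothing about the counties.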

What’s wrong with this picture?

By: Davis Balestracci

Many talk about reducing variation to improve quality. Does that include human variation, where everyone takes a different approach to the overall process of improvement? What would happen if this variation were reduced?

Would some of you lean folks be interested in spearheading an effort to standardize how the “implementation of improvement” is actually implemented? Lean and nonlean practitioners would no doubt thank you effusively. (I can hear you all: “Davis, what planet are you on?”)

Nah, probably not a good idea. But could we all agree that a useful initial strategy might be to address the routine organizational use of data?

Consider: Reducing the daily variation in how routine data are used means less tampering (i.e., treating common cause as special), which boosts the credibility of more formal improvement efforts, regardless of the approach. Think how much that liberated time and reduced chaos would increase your effectiveness.

One of W. Edwards Deming’s quotes made a lasting impression on me when I first heard it more than 30 years ago: “A good organization will take five years to turn around. Most will take 10.”

So what’s the hurry?

By: Davis Balestracci

According to Mark Graham Brown in his book Keeping Score (Productivity Press, 2006):
• 50 percent of the time leaders spend in meetings involving data is waste
• 80 percent of the pounds of published financial data are waste
• 60 percent of the pounds of published operational data are waste
• Middle management wastes one hour a day poring over these reports

Do you have any idea what this costs? Perhaps more important, does it even matter?
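For a rough sense of the first question, here is a back-of-the-envelope sketch; the head count, loaded hourly rate, and workdays below are assumptions chosen for illustration, not Brown’s figures.

```python
# Hypothetical back-of-the-envelope estimate of "one wasted hour per day"
# spent by middle managers poring over published reports.
managers = 50                # assumed number of middle managers
loaded_hourly_rate = 75.00   # assumed fully loaded cost per hour, in dollars
wasted_hours_per_day = 1     # Brown's estimate quoted above
workdays_per_year = 230      # assumed

annual_cost = managers * loaded_hourly_rate * wasted_hours_per_day * workdays_per_year
print(f"Estimated annual cost of report-poring time: ${annual_cost:,.0f}")
```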

In meetings involving leaders and middle managers, there seems to be a code of universal leadership laws for using operational data. It has its own vocabulary emphasizing the words “tough,” “stretch,” “low-hanging fruit,” “accountability,” and the phrase, “I don’t like these numbers.” The code also seems to encourage tantrum phrases such as, “Find out what happened here!” “What are you going to do about it?” and, “I want a plan to fix this!”

By: Davis Balestracci

Just curious: Do you have monthly (and/or quarterly and/or even weekly) “How’re we doin’?” meetings like the end-of-year scenario described in my November and December columns last year—about budgets, financials, never events, incidents, near misses, machine downtime, productivity, root cause analyses, returned shipments, rehospitalizations, complaints, customer satisfaction scores, and employee satisfaction scores? 

The agenda of these vague meetings is usually to discuss:
1. Only the past month’s overall result, e.g., “Were we red, yellow, or green?” (Special cause strategy.)
2. How overall performance seems to be “trending,” using only this month’s, last month’s, or 12-months-ago results. (Ditto.)
3. Each individual incident that occurred during the month and how each could have been fixed. (Ditto.)
4. Which particular events need individual root cause analyses. (Ditto; contrast with the chart sketch after this list.)
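Before any of that, the month’s overall result belongs on a process behavior (individuals, or XmR) chart so that only genuine special causes get the special-cause questions. A minimal sketch, assuming made-up monthly results rather than any real data:

```python
# Minimal individuals (XmR) chart calculation on hypothetical monthly results.
# Only points outside the natural process limits warrant special-cause questions.
monthly_results = [52, 47, 55, 49, 60, 51, 46, 58, 50, 53, 48, 57]  # made-up data

mean = sum(monthly_results) / len(monthly_results)
moving_ranges = [abs(b - a) for a, b in zip(monthly_results, monthly_results[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper_limit = mean + 2.66 * avg_mr   # standard XmR chart constant
lower_limit = mean - 2.66 * avg_mr

print(f"average = {mean:.1f}, natural limits = ({lower_limit:.1f}, {upper_limit:.1f})")
for month, result in enumerate(monthly_results, start=1):
    verdict = "special cause?" if result > upper_limit or result < lower_limit else "common cause"
    print(f"month {month:2d}: {result:5.1f}  {verdict}")
```

Anything inside the limits is common cause; demanding “What happened here?” about those points is tampering.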

By: Davis Balestracci

In spite of the overwhelming odds against me, every new year I firmly resolve to reignite my relentless passion for creating a critical mass of colleagues committed to practicing improvement as something “built in” to the cultural DNA through data sanity.

Will this be the year you join me?

Here is a challenging road map of 12 synergistic resolutions for those of you willing to take this nontrivial risk.

1. Resolve to ask yourself, “Have I unintentionally evolved into a qualicrat?”
The formalization of organizational quality improvement efforts into a separate silo, with increasing (and excruciating) formality, has been an unstoppable evolution. One could look at it as evolving from Neanderthal to Cro-Magnon. But improvement seems to have settled for, and is stuck in, the “good enough” mediocrity of Cro-Magnon.

I suppose one benefit of this evolution has been serious acceptance of quality as a viable career path. But the consequence of this has been an expensive, self-sustaining training subindustry—not all of it competent—with countless certifications and belts.

By: Davis Balestracci

Those of you familiar with W. Edwards Deming know that his funnel experiment ultimately shows that a process in control delivers the best results if left alone. Funnel Rule No. 4, also known as a “random walk”—i.e., making, doing, or building your next iteration based on the previous one—has been at work on Deming’s philosophy itself for the last 35 years: each retelling is based on the previous retelling. Isn’t it obvious, then, why any resemblance between what one observes and Deming’s original intent is so often purely coincidental?
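For those who haven’t seen the funnel in a while, here is a minimal simulation sketch of Rule 1 (leave the funnel alone) versus Rule 4 (aim at wherever the last marble landed), with an arbitrary noise level; the point, not the particular numbers, is what matters.

```python
import random

def funnel(rule4: bool, drops: int = 1000, seed: int = 1) -> float:
    """Average distance from the target after many drops of the marble."""
    rng = random.Random(seed)
    aim, total_deviation = 0.0, 0.0
    for _ in range(drops):
        landing = aim + rng.gauss(0, 1)   # the marble lands near the aim point, plus noise
        total_deviation += abs(landing)   # distance from the target at 0
        if rule4:
            aim = landing                 # Rule 4: move the funnel over the last landing spot
        # Rule 1: leave the funnel aimed at the target
    return total_deviation / drops

print("Rule 1 (leave it alone):", round(funnel(rule4=False), 2))
print("Rule 4 (random walk)   :", round(funnel(rule4=True), 2))
```

Rule 1 stays close to the target; Rule 4 wanders farther and farther away, which is exactly what happens when each iteration of a method is copied from the previous copy.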

I’ve said it before: People don’t need statistics. They need to know how to solve their problems. All that’s needed are a few simple tools and a working knowledge of variation to be able to distinguish between common and special causes.

To illustrate, let’s return to last month’s column, where we considered a quarterly review meeting at which a department supervisor was charged with accounting for results vs. meeting arbitrary numerical goals.

By: Davis Balestracci

Have you ever heard something like: “I’m committed to Dr. Deming’s approach [or Six Sigma or lean or TPS, it doesn’t matter], but executives don’t seem to listen anymore. All they do is keep interrupting my very clear explanations with, ‘Show me some results, then show me what to do.’ I was shocked that my demonstration of the red bead experiment neither awed nor convinced them; several of them even walked out during it. Which of Dr. Deming’s 14 Points should I start with to get their attention and the results they want?”

My answer would indeed be: None of them—and all of them!

If anyone either continues to ask that question or is confused by my answer, please read this, then heed the following advice from Deming himself. Why? Because you don’t quite get his message.

By: Davis Balestracci

Client A came to me for a consultation and told me up front that his manager would allow him to run only 12 experiments. I asked for his objective. When I informed him that it would take more than 300 experiments to test that objective, he replied, “All right, I’ll run 20.”

Sigh. No, he needed either to redefine his objectives or not run the experiment at all.

I never saw him again.

Client B came to me with what he felt was a clearly defined objective. He thought he just needed a 10-minute consult for a design template recommendation. It actually took three consults totaling 2 1/2 hours, because I asked questions similar to those required for planning the experiment I wrote about in my column from September 2016.

During the first two consults, Client B would often say, “Oh... I didn’t think of that. I’ll need to check it out.” He eventually ran the experiment, came to me with the data, and asked, “Could you have the analysis next week?” I asked him to sit down and was able to finish the analysis (including contour plots) in about 20 minutes.
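Once a design has been planned that carefully, the analysis really can be that fast. Here is a minimal sketch of the kind of fit-and-contour-plot step involved, using entirely hypothetical two-factor data (not Client B’s design or results):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical two-factor response-surface data in coded units (not real results).
x1 = np.array([-1, 1, -1, 1, 0, 0, 0, -1.4, 1.4, 0.0, 0.0])
x2 = np.array([-1, -1, 1, 1, 0, 0, 0, 0.0, 0.0, -1.4, 1.4])
y = np.array([62, 70, 66, 80, 75, 76, 74, 60, 78, 65, 72])

# Fit a full quadratic model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Contour plot of the fitted surface over the coded design region.
g1, g2 = np.meshgrid(np.linspace(-1.5, 1.5, 100), np.linspace(-1.5, 1.5, 100))
fitted = (coeffs[0] + coeffs[1] * g1 + coeffs[2] * g2 +
          coeffs[3] * g1 * g2 + coeffs[4] * g1**2 + coeffs[5] * g2**2)

contours = plt.contour(g1, g2, fitted, levels=10)
plt.clabel(contours, inline=True, fontsize=8)
plt.xlabel("factor 1 (coded units)")
plt.ylabel("factor 2 (coded units)")
plt.title("Fitted response surface (hypothetical data)")
plt.show()
```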

By: Davis Balestracci

Referring back to June’s column, I hope you’ve found C. M. Hendrix’s “ways to mess up an experiment” helpful in putting your design of experiments training into a much better perspective. Today, I’m going to add two common mess-ups from my consulting experience. If you’re not careful, it’s all too easy to end up with data that’s worthless.

Balestracci’s Mess-Up No. 1

Underestimating the unintended ingenuity of human psychology will mess up your experiment—and this includes the study planners! 

Trust me, there is no way you could make up the things busy people will do to (unintentionally) mess up your design and its data. “But I just want to run a simple 2 × 2 × 2 factorial,” someone might say.

To refresh your memory, suppose you’re interested in examining three components of a weight-loss intervention (the full set of combinations is sketched after this list):
• Keeping a food diary (yes or no)
• Increasing activity (yes or no)
• Home visit (yes or no)
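The “simple” design itself is just the eight yes/no combinations of those three components; randomizing the run order is an assumption I’d add to guard against time-related lurking variables. A minimal sketch:

```python
import itertools
import random

factors = ["food diary", "increased activity", "home visit"]

# Full 2 x 2 x 2 factorial: every yes/no combination of the three components.
design = list(itertools.product(["no", "yes"], repeat=len(factors)))

random.shuffle(design)  # randomize run order against time-related lurking variables

for run, levels in enumerate(design, start=1):
    settings = ", ".join(f"{name} = {level}" for name, level in zip(factors, levels))
    print(f"run {run}: {settings}")
```

The hard part, as Mess-Up No. 1 suggests, is not generating these eight runs; it’s getting busy people to actually execute them as specified.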