Content By Davis Balestracci

By: Davis Balestracci

Dealing with individuals is one important aspect of culture change. I addressed this in “How Does Your Organization Define Accountability?” Every organization has many tribes (i.e., departments or specific groups of individuals) that make up its culture. When changes affect departments, the issue of stated reason vs. real reason once again emerges, the most common response being, “We’re in favor of this, but they are going to....”

By: Davis Balestracci

“It is impossible to tell how widespread data torturing is. Like other forms of torture, it leaves no incriminating marks when done skillfully. And like other forms of torture, it may be difficult to prove even when there is incriminating evidence.”
—J. L. Mills

When will academics, Six Sigma belts, and consultants wake up and realize that, despite their best efforts, most people in their audiences will not correctly use the statistics they’ve been taught—including many of the teachers themselves?

Sometimes I wonder if they are exacting revenge on their captive audiences for being beaten up on the playground 25 years ago.

The clinical publications world is a particular hotbed for inappropriate uses of statistics. Many people are guilty of looking for the most dramatic, positive findings in their data, and who can blame them? If study data are manipulated enough, they can be made to appear to prove whatever the investigator wants to prove. When this process goes beyond reasonable interpretation of the facts, it becomes data torturing.

By: Davis Balestracci

“When I die, let it be in a meeting. The transition from life to death will be barely perceptible.”
—Anonymous

Today I’m going to share some ideas from an always-thoughtful newsletter written by Steve Harden of LifeWings—a company of pilots who use aviation ideas to create cultures of safety in healthcare. Despite the healthcare bias, these ideas can easily be adapted to any environment as you think about your jobs as improvement leaders.

Harden offered these data:
• The average hospital spends $1.7 million per year on meetings.
• 86 percent of healthcare leaders surveyed said their meetings are a waste of time.

By: Davis Balestracci

As most of you know, one of my mantras for change agents is, “Those darn humans! God bless ’em,” i.e., don’t be surprised by anything. People never seem to mind change... for other people or departments.

Ever heard this: “It’s about time someone did something about that. They really needed to change. Of course, here’s why that doesn’t apply to me.” (Suppress the urge to strangle.)

Joseph Juran did an excellent video series during the 1970s called Juran on Quality Improvement, based on the ideas in his classic book Managerial Breakthrough, Revised Edition (McGraw-Hill, 1995), which is well worth reading. 3M had several copies of the videos, and I think I wore one set out. Combine Juran’s empirical practicality with Deming’s improvement theory, and you’ve got a dynamite combination.

Juran taught me a lot about applying improvement tools appropriately. However, the most important thing I learned was quite unexpected. These are the resources that finally shed light on something that had been puzzling and frustrating me, which Juran formalized, calling it “resistance to change.”

By: Davis Balestracci

Is accountability used as an excuse for draconian enforcement of arbitrary goals? We’re familiar with the insanity of wasted time spent overreacting to common cause. I remember the dreaded review meetings, spending all day listening to litanies of excuses, finger pointing, blaming others, and carefully crafted stories to explain lack of results, all of which only yielded confusion and an attitude of helplessness.

Has “Who’s to blame?” been replaced by “Who’s accountable?”

“When we are dealing with people, let us remember we are not dealing with creatures of logic. We are dealing with creatures of emotion, creatures bristling with prejudices and motivated by pride and vanity.”
—Dale Carnegie

Accountability should no longer mean “account for.” It should be powered by commitment and hard work, with a focus on current and future efforts rather than on reactive, historical explanations: a mindset of reality, ownership, solutions to problems, and determined action that answers the question, “What else is it going to take?”

By: Davis Balestracci


This is a continuation of my last column, written to honor my late dad, who loved golf. As promised, let’s look at the final four-round scores for the 55 players who survived the cut at the Masters golf tournament. We’ll analyze them and then give the analysis a twist based on the ongoing enumerative vs. analytic conundrum.

Analyzing the four-round final scores with analysis of variance (ANOVA):     
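The ANOVA table itself doesn’t reproduce here, but the mechanics are worth seeing. As a minimal sketch, the snippet below computes a one-way F statistic by hand, treating each round as a group; the scores are simulated stand-ins (the actual Masters data are not included in this excerpt), so only the method, not the numbers, carries over.

```python
import random

random.seed(1)

# Simulated stand-in scores: 55 players x 4 rounds (the real Masters
# data are not included in this excerpt)
rounds = [[round(random.gauss(72, 3)) for _ in range(55)] for _ in range(4)]

def one_way_anova(groups):
    """Return the F statistic: between-group vs. within-group mean square."""
    k = len(groups)                                   # number of groups
    n = sum(len(g) for g in groups)                   # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova(rounds)
print(f"F = {f_stat:.2f} with {len(rounds) - 1} and "
      f"{sum(len(g) for g in rounds) - len(rounds)} degrees of freedom")
```

A large F would say the round-to-round means differ by more than the player-to-player noise within rounds would explain.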

By: Davis Balestracci

To celebrate Father’s Day in the United States (June 21 this year), I’m going to use this and my next column to honor my late dad by using a game he loved—golf—to teach some very basic statistics lessons. Some of these may have been lost on you previously, not through any fault of your own, but because trainers tend to concentrate on a technique’s mechanics. Analysis of means (ANOM) might be new to many of you, but even if it’s a review, I hope you have as much fun reading this as I did writing it.
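Since ANOM may be new to some readers, here is a minimal sketch of its logic with made-up scores (four small “rounds” of toy data, not the tournament results): compare each group mean to the grand mean using decision limits built from the pooled within-group variation. The critical value h ≈ 3 below is a rough stand-in for the exact tabled ANOM value, which depends on the number of groups and degrees of freedom.

```python
import math

# Toy data: four "rounds" of illustrative golf scores, equal group sizes
groups = {
    "Round 1": [71, 73, 72, 74, 70, 72],
    "Round 2": [74, 75, 73, 76, 74, 75],
    "Round 3": [70, 71, 72, 69, 71, 70],
    "Round 4": [72, 73, 71, 72, 74, 73],
}

k = len(groups)
n = len(next(iter(groups.values())))          # equal group sizes assumed
all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Pooled within-group standard deviation
ss_within = sum(sum((x - sum(g) / n) ** 2 for x in g)
                for g in groups.values())
s = math.sqrt(ss_within / (len(all_scores) - k))

# ANOM decision limits: grand mean +/- h * s * sqrt((k-1)/(k*n)).
# h ~= 3 is a rough stand-in for the tabled critical value.
h = 3.0
margin = h * s * math.sqrt((k - 1) / (k * n))
lo, hi = grand_mean - margin, grand_mean + margin

for name, g in groups.items():
    mean = sum(g) / n
    flag = "differs from grand mean" if (mean < lo or mean > hi) else "within limits"
    print(f"{name}: mean {mean:.2f} ({flag}; limits {lo:.2f}-{hi:.2f})")
```

Unlike ANOVA’s single omnibus F test, ANOM shows *which* groups differ from the grand mean, and it reads like a control chart.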

The recent Masters tournament, in which 97 golfers participated, will provide the data. The Masters is the crème-de-la-crème of golf tournaments. One qualifies by winning a major tournament or by formal invitation. Past champions qualify automatically.

The first two rounds of any tournament are used to establish the “cut” to narrow the field for the last two rounds. Cut rule: Following the second round, the 50 golfers with the lowest scores, plus ties, plus any golfer within 10 strokes of the lead, advance to play the final two rounds. In this case, players with scores above 146 were cut, narrowing the field from 97 to 55.

Here’s the analysis of variance (ANOVA) for the first two rounds:

By: Davis Balestracci

In the past couple of weeks, I have stumbled onto three things that triggered this column. Two of them came from academic-type healthcare journals, one of which shall remain nameless. This type of journal is still trying to make a formal case for improvement. Article after article belabors the obvious in stilted, arcane academic language and models that won’t motivate anyone to do anything. I know what these articles are talking about and even I get confused!

It must be because it went through this process (shown in an image in the original column, not reproduced here).

Now there is an aggressive movement encouraging and teaching healthcare quality professionals to prepare their projects for publication in such journals. Isn’t there far too much real work to do, making things easier for front-line staff and patients, instead of spending precious time learning how to jump through hoops for such pretentious nonsense?

By: Davis Balestracci


Welcome to baseball season! I always do a baseball-themed article around this time, and I found my topic after stumbling on this article recently: How accurate are umpires when calling balls and strikes?

From what I understand, since 2008, home plate umpires have been electronically monitored every game and given immediate feedback on their accuracy—i.e., the number of actual balls they called as strikes, and vice versa.

Using the aggregated data from the 2008–2013 seasons, the author observed that wrong calls were made 15 percent of the time (average of both rates combined), which, according to him, “is just too high.” He provided a table of umpires whose inaccuracy rate was 15 percent or higher—38 umpires out of approximately 80 (Tsk, tsk... that’s close to half of them.)

He also listed the top 10 most accurate umpires. Oops—I mean the 10 umpires who happened to have the lowest rates of wrong calls.
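Before ranking umpires, the first question should be whether any individual rate is distinguishable from the system average at all. A minimal sketch of that check, using hypothetical wrong-call counts (the article’s actual per-umpire data aren’t reproduced here): compute the overall rate and p-chart-style three-sigma binomial limits for each umpire’s number of calls, and flag only those outside the limits.

```python
import math

# Hypothetical (umpire, wrong_calls, total_calls) -- illustrative only
umpires = [("A", 450, 3000), ("B", 480, 3000),
           ("C", 540, 3000), ("D", 390, 3000)]

total_wrong = sum(w for _, w, _ in umpires)
total_calls = sum(n for _, _, n in umpires)
p_bar = total_wrong / total_calls          # systemwide wrong-call rate

for name, wrong, n in umpires:
    p = wrong / n
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)   # binomial standard error
    lcl, ucl = p_bar - 3 * sigma, p_bar + 3 * sigma
    verdict = "special cause" if (p < lcl or p > ucl) else "common cause"
    print(f"Umpire {name}: {p:.3f} ({verdict}; limits {lcl:.3f}-{ucl:.3f})")
```

Umpires inside the limits are indistinguishable from the system average; ranking them, or singling out a “top 10,” is reacting to common cause.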

By: Davis Balestracci

Good data collection requires eight questions to be addressed. The first four involve designing the collection.

Most of these issues were discussed in my last column. To summarize:
1. Why collect the data?
• Is there a clear objective for this collection?

2. What method(s) will be used for the analysis?
• This should be known even before one piece of data is collected.

3. What data will be collected?
• What specific process output(s) do you wish to capture?
• If data are needed from customers, patients, or staff, what is the process for choosing the individuals from whom to obtain the needed data?

4. How will the data be measured?
• How will you evaluate any output to obtain a consistent number, regardless of who measures it?
• In the case of measuring counts, is the threshold between a nonevent (x = 0) and an event (x = 1) clear (e.g., is Pluto a planet or isn’t it?).
• Remember, there is no “true value”; the appropriate measurement depends on the specific objective.
• Does it allow you to take the actions you wish to take?