Content by Davis Balestracci

Don’t tell me you’re not tempted to look when you spot a magazine cover saying, “How does your state rank in [trendy topic du jour]?” Many of these alleged analyses rank groups on several factors, then compare the sums of each group’s ranks to draw conclusions.
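
To make the mechanics concrete, here is a minimal sketch of the rank-sum method being critiqued: rank each group on every factor, then total the ranks. The states, factors, and scores below are made up purely for illustration; none of them come from any actual ranking.

```python
# Minimal sketch of a rank-sum "league table": rank each state on
# every factor, then total the ranks. All names and scores are
# hypothetical, chosen only to illustrate the method.
factors = {
    "cost":    {"Maine": 3.1, "Ohio": 2.4, "Utah": 2.9},
    "access":  {"Maine": 7.2, "Ohio": 6.8, "Utah": 7.5},
    "outcome": {"Maine": 88,  "Ohio": 91,  "Utah": 85},
}

totals = dict.fromkeys(factors["cost"], 0)
for scores in factors.values():
    # Rank 1 = "best" (here, simply the highest score); a real ranking
    # must also decide direction per factor and how to break ties.
    for rank, state in enumerate(sorted(scores, key=scores.get, reverse=True), 1):
        totals[state] += rank

# The group with the lowest total rank "wins" -- exactly the kind of
# conclusion the column questions.
for state, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{state}: total rank = {total}")
```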

Many talk about reducing variation to improve quality. Does that include human variation, where everyone takes a different approach to the overall improvement process? What would happen if this variation were reduced?

In spite of the overwhelming odds against me, every new year I firmly resolve to reignite my relentless passion for creating a critical mass of colleagues committed to practicing improvement as “built in” to their cultural DNA through data sanity.

Will this be the year you join me?

Here is a challenging road map of 12 synergistic resolutions for those of you willing to take this nontrivial risk.

Client A came to me for a consultation and told me up front that his manager would allow him to run only 12 experiments. I asked for his objective. When I informed him that testing it would take more than 300 experiments, he replied, “All right, I’ll run 20.”

Sigh. No, he needed either to redefine his objective or not run the experiment at all.

I never saw him again.
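
The column doesn’t say how that 300-experiment figure was derived, but the underlying arithmetic is easy to illustrate: the run count for a full factorial design grows as the number of levels raised to the number of factors, multiplied by any replicates. A hypothetical sketch, with numbers chosen only to show how quickly an objective can demand hundreds of runs:

```python
# Purely illustrative: full factorial run counts grow as
# levels ** factors (times replicates). None of these numbers
# come from the column.
def full_factorial_runs(levels: int, factors: int, replicates: int = 1) -> int:
    """Total runs for a full factorial with equal levels per factor."""
    return (levels ** factors) * replicates

for levels, n_factors, reps in [(2, 3, 1), (2, 5, 2), (3, 5, 2)]:
    runs = full_factorial_runs(levels, n_factors, reps)
    print(f"{levels} levels, {n_factors} factors, {reps} replicate(s): {runs} runs")
```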

Referring back to June’s column, I hope you’ve found C. M. Hendrix’s “ways to mess up an experiment” helpful in putting your design of experiments training into a much better perspective. Today I’m going to add two common mess-ups from my own consulting experience. If you’re not careful, it’s all too easy to end up with worthless data.