Content By Davis Balestracci

By: Davis Balestracci

In my last column, I discussed how even a well-designed study with a statistically significant result doesn’t necessarily mean viability in the real world. After the study, one must examine how variation manifests itself in any environment in which the result is applied, and each environment will have its own unique variation.

This type of study requires the use of analytic statistical methods, which are designed to deal specifically with this type of variation by exposing it and monitoring attempts to reduce that which is inappropriate and unintended.

Not only will variation come into play in the process of applying any result, but it also affects the four statistical data processes:
1. Measurement definition
2. Data collection
3. Analysis
4. Interpretation of the analysis

By: Davis Balestracci

I have evolved toward using fewer, simpler tools in my consulting and have never been more effective, as I noted in my last column. It made me ponder the relevance of much of what I learned in my master’s statistics program. Thinking of the most basic concepts, I decided to look up what the American Society for Quality considers the (Six Sigma) Green Belt body of knowledge. If you click on its link, I want to draw your particular attention to “III. Six Sigma—Measure (B, C, D),” “IV. Six Sigma—Analyze (A, B),” and “V. Six Sigma—Improve & Control (A, B).”

In the foreword to Quality Improvement Through Planned Experimentation, by Ronald Moen, Thomas Nolan, and Lloyd Provost (McGraw-Hill, 2012)—which I believe is the best book on industrial design of experiments—Deming himself writes:

By: Davis Balestracci

I've been presenting at the Institute for Healthcare Improvement (IHI) annual forum for 21 consecutive years. Maybe the biggest surprise from these two decades has been the awesome power of simply "plotting the dots," i.e., plotting important organizational data in their naturally occurring time order. I have evolved toward using fewer, simpler tools in my consulting and have never been more effective.

My all-day course at the forum demonstrated this with a wide variety of data from real situations. I showed how the power of this simple philosophy could be the back door through which improvement practitioners educate their managers and executives in the best possible way—by getting eye-opening results. I also emphasized the need to learn and use common-cause strategies, which are not generally taught. All this requires very few tools, but it does require a total change in mindset regarding improvement. I'm not comparing myself to Deming, but I realized that he was always criticized for the virtual lack of tools taught in his seminars. Twenty years of hard study has led me to a similar approach to improvement.
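To make the idea concrete, here is a minimal sketch of "plotting the dots" as a simple run chart. The monthly counts, the median centerline, and the matplotlib styling are my own illustrative assumptions, not data from any of the situations above.

```python
# A minimal run-chart sketch: plot the data in their naturally occurring time
# order and add the median as a centerline. The counts below are hypothetical.
import statistics
import matplotlib.pyplot as plt

monthly_counts = [12, 9, 14, 11, 8, 13, 10, 15, 9, 11, 7, 12,
                  10, 8, 11, 9, 13, 10, 12, 8, 9, 11, 10, 7]
center = statistics.median(monthly_counts)

plt.plot(range(1, len(monthly_counts) + 1), monthly_counts, marker="o")
plt.axhline(center, linestyle="--", label=f"median = {center}")
plt.xlabel("Month (time order)")
plt.ylabel("Count")
plt.title("Run chart: plot the dots in time order")
plt.legend()
plt.show()
```

The point is less the code than the habit: before any fancier analysis, look at the data in the order they occurred.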

By: Davis Balestracci

For those of you who are improvement practitioners, are you satisfied with the organizational results of your efforts? I have a feeling most of you would answer, “Far from it,” and would almost unanimously feel that you could be more effective.

There is justification for this. A recent Quality Digest Daily article suggests that only 2 percent of senior executives get daily quality reports, more than half get them quarterly or less, with maybe 25 percent getting them annually. And the rest? None at all.

By: Davis Balestracci

During my recent travels speaking at conferences and consulting, I have noticed that root cause analysis (RCA) has taken on a life of its own and is now a well-established subindustry in any organization, regardless of its chosen approach to improvement.

There are many things that “shouldn’t” happen. Why not consider such incidents as undesirable variation and get back to basics? One of Deming’s principles was that there are two kinds of variation—common cause and special cause—and that treating one as the other makes things worse.

The human tendency, though, is to treat virtually all variation as special cause; RCA is yet another example of this.

Has anyone considered whether things that “shouldn’t” happen might be common cause—as in, one’s organization is “perfectly designed” to have them occur? What might be the effect of multiple RCAs in such cases?
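One way to put that question to the data before commissioning yet another RCA is to check the incidents against common-cause limits. Below is a rough sketch of the standard individuals (XmR) chart calculation; the monthly counts are invented, and nothing here is taken from the article itself.

```python
# A sketch of an individuals (XmR) chart check: do monthly incident counts stay
# within common-cause limits? The data are hypothetical.
incidents = [4, 6, 3, 5, 7, 4, 6, 5, 3, 6, 5, 4]

mean = sum(incidents) / len(incidents)
moving_ranges = [abs(curr - prev) for prev, curr in zip(incidents, incidents[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR natural process limits: mean +/- 2.66 * average moving range
upper = mean + 2.66 * mr_bar
lower = max(0.0, mean - 2.66 * mr_bar)  # a count cannot go below zero

for month, x in enumerate(incidents, start=1):
    verdict = "special cause?" if (x > upper or x < lower) else "common cause"
    print(f"month {month:2d}: {x:2d}  [{verdict}]")
print(f"natural process limits: {lower:.1f} to {upper:.1f}")
```

If every point falls inside those limits, the “shouldn’t happen” events are being produced routinely by the system itself, and a pile of individual RCAs will mostly chase noise.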

‘Because we worked so hard!’

I was at a conference where one poster session proudly declared that its team had reduced infections in a pediatric unit, using the following display:

By: Davis Balestracci

There has been an explosion in new technology for acquiring, storing, and processing data. The “big data” movement (and its resulting sub-industry, data mining) is becoming more prevalent and having major effects on how quality professionals and statisticians do their jobs.

Big data is a collection of data sets too large and complex to be processed using traditional database and data-processing tools. Any change this big will require new thinking. However, one thing won’t change, and it now becomes more important. My respected colleagues Ron Snee and Roger Hoerl call this an “inquiry on pedigree,” which asks whether you know the quality and origin of your data well enough to answer the following questions:
• What was the original objective of these data, if any?
• How were these data defined and collected?
• What was the state of the processes that produced these data—both the data process itself and the process by which data were collected?

By: Davis Balestracci


My last article demonstrated a common but incorrect technique, rooted in “traditional” statistics, for comparing performance on percentage rates. This article will use the same data to show what should be done instead.

To quickly review the scenario: In an effort to reduce unnecessary expensive prescriptions, a pharmacy administrator developed a proposal to monitor and compare individual physicians’ tendencies to prescribe the most expensive drug within a class. Data were obtained for a peer group of 51 physicians: for each one, the total number of prescriptions written and, of that number, how many were for the target drug.
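As a preview of what the proper analysis looks like in p-chart terms, here is a generic sketch. The handful of made-up physicians below stand in for the 51; the systemwide proportion and the limits that widen for low-volume prescribers follow the standard p-chart formula, but none of these numbers come from the article.

```python
# A generic p-chart-style comparison sketch (not the article's data): each
# physician's target-drug percentage is judged against limits based on the
# systemwide proportion and that physician's own prescription volume.
import math

# physician: (total prescriptions written, prescriptions for the target drug)
physicians = {"A": (310, 62), "B": (45, 18), "C": (520, 70), "D": (128, 25)}

p_bar = (sum(target for _, target in physicians.values())
         / sum(total for total, _ in physicians.values()))

for name, (total, target) in physicians.items():
    p = target / total
    sigma = math.sqrt(p_bar * (1 - p_bar) / total)
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)
    verdict = "outside the system" if (p > ucl or p < lcl) else "within the system"
    print(f"Physician {name}: {p:.1%} (limits {lcl:.1%} to {ucl:.1%}) -> {verdict}")
```

Only physicians whose percentages fall outside their own limits are signals worth a different conversation; everyone inside the limits is, for this purpose, indistinguishable.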

By: Davis Balestracci

I chatted about u-charts for rates last time, and this column was going to be about p-charts for percentage data. These are the two major charts for dealing with count data and are helpful for stratifying a stable section of process performance.

But something recently happened that saddens me and has become all too common in many organizations for which I have consulted. It reminded me of the need to warn you about a very common approach to (allegedly) stratifying data—finding the “bad” performers. I have a wonderful data set using percentages on which, next time, I will demonstrate the proper analysis and interpretation via p-charts; but I am going to use it today to make a major point about something to be avoided at all costs.

I have been mentoring a very good data analyst for the past three years. Despite the support of the medical director, it has been pretty much an all-out war with the C-suite executives to implement “data sanity”; resistance, to put it mildly, has been fierce from the start. I received the following note from this analyst last week:

By: Davis Balestracci

My last column, “Dealing With Count Data and Variation,” showed how a matrix presentation of stratified count data could be quite effective as a common-cause strategy. I’ll use this column to review some key concepts of count data as well as to demonstrate the first of two common statistical techniques that can be useful for further analysis. Obtaining the counts themselves is only half of the job.

First, make sure the operational definition is clear: What’s the threshold whereby something goes from a “nonincident” (i.e., a value of 0) to an “incident” (i.e., a value of 1)? Would two or more people assessing the situation concur that the “incident” had occurred?

In addition, all count data have an implicit denominator that’s defined by the “area of opportunity” for that count. It depends on what's being counted, how it’s being counted, and what possible restrictions there might be upon the count.
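As a sketch of how that area of opportunity enters the arithmetic, consider counts with different exposures. The units, incident counts, and patient-day denominators below are invented purely for illustration; the limits follow the standard u-chart form, which is one common way such counts are compared.

```python
# Counts divided by their own "area of opportunity" (here, patient-days) before
# any comparison is made. All figures are hypothetical.
import math

# unit: (incident count, patient-days of exposure)
units = {"East": (14, 2100), "West": (9, 1750), "North": (21, 3400)}

u_bar = sum(c for c, _ in units.values()) / sum(d for _, d in units.values())

for name, (count, days) in units.items():
    rate = count / days
    sigma = math.sqrt(u_bar / days)  # u-chart sigma for this unit's exposure
    ucl = u_bar + 3 * sigma
    lcl = max(0.0, u_bar - 3 * sigma)
    print(f"{name}: {rate:.4f} per patient-day (limits {lcl:.4f} to {ucl:.4f})")
```

The raw counts 14, 9, and 21 invite a false ranking; the rates over each unit’s own opportunity, with limits that reflect that opportunity, do not.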

By: Davis Balestracci

In my last column, I showed the power of process-oriented thinking with a safety scenario. A simple run chart demonstrated that, despite meeting an aggressive 25-percent reduction goal (i.e., 45 accidents during the first year, and 32 the following year), the process that produced the 32 was no different from the process that produced the 45. It was common cause. Now what?

One advantage of the common-cause nature of the problem is that all 77 incidents were produced by the same process. Therefore, they can be aggregated and then stratified by process inputs to reveal hidden special causes.
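Here is a minimal sketch of that aggregate-and-stratify step. The “shift” input and the tallies are invented for illustration; they are not the article’s actual process inputs or its 77 incidents.

```python
# Aggregate-then-stratify: once the incidents are known to come from one
# common-cause system, tally them by a candidate process input and rank the
# categories Pareto-style. The records below are hypothetical.
from collections import Counter

# one record per incident, tagged with a process input (e.g., shift)
incident_shifts = ["night"] * 10 + ["day"] * 4 + ["evening"] * 2

tally = Counter(incident_shifts)
total = sum(tally.values())

cumulative = 0
for shift, count in tally.most_common():
    cumulative += count
    print(f"{shift:8s} {count:3d}  ({cumulative / total:.0%} cumulative)")
```

If one or two categories account for most of the incidents, that concentration is the “hidden special cause” worth attacking with a focused effort.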