Content By Davis Balestracci

By: Davis Balestracci

Human perception of variation and how we execute the methods of four data processes—measurement, collection, analysis, and interpretation—were discussed in part one of this column. Because human variation can compromise the quality of data and render any subsequent analysis virtually useless for project purposes, its effects must be anticipated and minimized. To deal with this, eight questions need to be addressed.

The first four questions: reducing human variation in design

• Why collect the data? Is there a clear objective for this collection?
• What method(s) will be used for the analysis? This should be known even before one piece of data is collected.
• What data will be collected? What specific process output(s) does one wish to capture?
• How will the data be measured? How will one evaluate any output to obtain a consistent number, regardless of who measures it? Remember, there is no “true value”; the appropriate operational definition depends on the specific objective (see the sketch after this list).
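
To make that last question concrete, here is a minimal Python sketch of an operational definition written as a small function. Everything in it (the metric, what counts as “late,” the rounding rule) is hypothetical and not from the original column; the point is only that the rules are spelled out so that any two people measuring the same data get the same number.

    def percent_late(ship_times, promise_times):
        """Percent of orders shipped after their promised time.

        Operational definition (all choices arbitrary but fixed):
        an order is "late" if ship_time > promise_time by any amount;
        the result is rounded to one decimal place.
        """
        pairs = list(zip(ship_times, promise_times))
        late = sum(1 for ship, promise in pairs if ship > promise)
        return round(100.0 * late / len(pairs), 1)

For example, percent_late([3, 5, 9], [4, 5, 8]) returns 33.3: only the third order shipped after its promised time, and the rules leave no room for judgment calls.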

By: Davis Balestracci

Have you ever been responsible for a data collection where any resemblance between what you designed and what you got back was purely coincidental? When that happens, yet again, I say to myself, “Well, it was perfectly clear to me what I meant.”

Consider the use of statistics as a data process, or rather, four processes: measurement, collection, analysis, and interpretation. Each of these has six sources of process inputs: people, methods, machines, materials, measurements, and environments. Each also has an inherent “quality” associated with it and is subject to outside variation that can compromise this quality. Unless the variation in these data processes is minimized, there is a danger of reacting to the variation in the data process instead of in the process you are trying to understand and improve.

What is the biggest danger? Human variation, which includes our perception of the variation (“measurement”) and how we execute the “methods” of measurement, collection, analysis, and interpretation.

In that context, let’s consider each of these four data processes:

By: Davis Balestracci

Editor’s note: This is the third of a three-part series on effective, focused data analysis. Part one discussed helping management deal with common cause; the first common cause strategy—stratification—was discussed in part two.

In my last column, I introduced some aspects of common cause strategies using Juran’s wisdom of “exhaust in-house data.” That involved defining recurring problems, assessing the effect of each problem, and localizing each major problem. I suggested these as a preliminary process for change agents to organize data before getting other people involved; in doing this, you will no doubt come to some conclusions of your own. The next step is to discuss those conclusions with the key players: Do the results of your in-house data analyses seem logical to the people involved? Are there obvious changes that would eliminate the problem? Are there obvious ways to prevent similar problems in the future?

By: Davis Balestracci

As you all know, the influence of W. Edwards Deming on my career and thinking has been profound. A criticism frequently leveled at him was that he was short on specifics, but he would always growl at anyone who alluded to this, “Examples without theory teach nothing!”

Enter Joseph Juran, the other quality giant of the 20th century. When I worked at 3M during the 1980s, the company had several sets of his 16-video Juran on Quality Improvement series. I studied it hard and watched several tapes many, many times. He had a good empirical sense (and a sense of humor) and, having been around the block once or twice, a lot of wisdom. So, let’s apply some of that wisdom to the two scenarios from part one of this three-part series.

Juran always advised as a first strategy: “Exhaust in-house data.” Before doing this, it is important to have an initial chart analyzing your process. Here’s a key principle of common cause analysis that almost everyone overlooks: The data from any common cause period on a control chart can be aggregated to attempt a “stratification” and apply Juran’s beloved Pareto Principle, as in the sketch below.
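
As a minimal sketch of that principle, assume the records from a common cause period sit in a pandas DataFrame with a “category” column; the data and column name below are hypothetical, not from the column itself:

    import pandas as pd

    # Hypothetical defect records aggregated from a common cause period.
    df = pd.DataFrame({"category": ["billing", "billing", "shipping",
                                    "data entry", "billing", "shipping",
                                    "data entry", "billing"]})

    # Stratify: count occurrences per category (largest first) and add
    # the cumulative percentage used to read off the Pareto "vital few."
    pareto = df["category"].value_counts().to_frame("count")
    pareto["cum_pct"] = 100 * pareto["count"].cumsum() / pareto["count"].sum()
    print(pareto)

Here “billing” alone would account for half the occurrences, which is exactly the kind of 80/20 signal stratification is meant to surface.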

By: Davis Balestracci

Let’s revisit two scenarios from my July 2012 column, “The Sobering Reality of ‘Beginner’s Mind.’” First, a medical center’s Harvard MBA COO insisted on nothing less than 100-percent computer uptime, no excuses. His IT department’s inability to deliver 100-percent uptime consistently has resulted in yet another monthly “account for results” meeting.

The agenda: Get the result for the past month; show some type of table or bar graph summary (typically a red/yellow/green assessment accompanied by a variation on “this month, last month, 12 months ago”); listen to the predictable litany of excuses for why it didn’t happen the past month; and come up with the (latest) plan on how to fix it... until the next unexpected thing happens.

As an alternative, I showed a run chart and its analysis, which demonstrated the behavior to be common cause. The monthly meeting to discuss that specific month’s downtime and “do something” is a special cause strategy. Based on the run chart, there is no evidence of improvement so far, but do you think some complexity might have been added during this time? Let’s go a little further.
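
For readers who want to see what such a run chart analysis can look like, here is a sketch of the standard runs-about-the-median count with its Wald-Wolfowitz expected value; this is illustrative, not the exact analysis from the column:

    from statistics import median

    def runs_about_median(data):
        """Observed vs. expected number of runs about the median.

        Far fewer runs than expected suggests a shift (special cause);
        a count near the expected value is consistent with common cause.
        Points exactly on the median are skipped, per the usual run
        chart convention.
        """
        med = median(data)
        signs = [x > med for x in data if x != med]
        if not signs:
            raise ValueError("all points are on the median")
        runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
        n_above = sum(signs)
        n_below = len(signs) - n_above
        expected = 1 + (2.0 * n_above * n_below) / (n_above + n_below)
        return runs, expected

An observed count close to the expected one, with no overly long runs, is what “no evidence of improvement so far” looks like in practice.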

By: Davis Balestracci

“If Japan Can… Why Can’t We?” was an American television documentary that aired on June 24, 1980, as an episode of NBC’s series NBC White Paper. That episode is often credited with beginning the quality revolution and introducing the methods of W. Edwards Deming to American managers.

In the mid-1970s, bachelor’s degree in chemical engineering in hand, I had my initial eye-opening experience of being exposed to applied statistical methods. I remember thinking: As an engineer, how could I not use them? It inspired me to get a master’s degree in statistics and become a member of one of the internal statistical consulting groups that were then ubiquitous in major corporations (groups that have, sadly, long since disappeared). As a relatively new and very excited industrial statistician, I was consistently mystified by the fierce resistance I encountered to the obvious need to use applied statistics to improve the quality of manufactured products and the research in their development.

By: Davis Balestracci

I am in the midst of teaching an online MBA course in statistical thinking. This is actually my second go-round, and I've heavily revised my inherited materials, which were well-meaning but had some obvious gaps.

I insisted on using Brian Joiner’s Fourth Generation Management (McGraw-Hill) as the key text, and I still use the two main texts from the course as originally taught: Donald Wheeler’s excellent classic Understanding Variation (SPC Press, 1993) and W. Edwards Deming’s own The New Economics (MIT Press, 2000 reprint), probably his most readable book.

By: Davis Balestracci

“I’m shocked... shocked to find that gambling is going on in here!”

—Casablanca’s Captain Renault, as he’s closing down Rick’s Cafe... while being handed his gambling winnings

I saw an abstract of a recent talk by several “experts” who have been very active in selling (expensive) improvement initiatives during the last 5–10 years. They do this via lots of training, tools, “sharing best practices,” and exhorting people with, “If they can do it, you can do it.” Meanwhile, they are creating a massive subculture of qualicrats.

By: Davis Balestracci

In my March 7, 2012, column, “An Elegantly Simple but Counterintuitive Approach to Analysis,” I suggested the necessity of formally assessing the stability of the process producing any data, a step not usually taught in most academic statistics courses. This is done by plotting the data in their naturally occurring time order with the median drawn in as a reference line; the result is known as a run chart.
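
A minimal matplotlib sketch of such a run chart, using hypothetical data, might look like this:

    import matplotlib.pyplot as plt
    from statistics import median

    # Hypothetical monthly values, in naturally occurring time order.
    data = [12, 15, 11, 14, 18, 13, 16, 12, 17, 14, 15, 13]
    med = median(data)

    plt.plot(range(1, len(data) + 1), data, marker="o")
    plt.axhline(med, linestyle="--", label=f"median = {med}")
    plt.xlabel("Month (time order)")
    plt.ylabel("Value")
    plt.title("Run chart: data in time order with median reference line")
    plt.legend()
    plt.show()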

By: Davis Balestracci

For those of us practicing improvement in a medical culture, presenting this “funny new statistical way” of doing things to a physician audience triggers a predictable objection: “This isn’t in line with rigorous, double-blind clinical trial research.” And your response should be, “True! Nor could it be, nor should it be.”

The statistical methods of clinical trial research make assumptions and control variation in ways that can’t be replicated in the unstable environment of the real world, which makes them less suitable for improvement. The same is true for any work environment.

Most basic academic statistics requirements are grounded in a context of “estimation” and teach methods appropriate for research. These, unfortunately, have limited applicability in everyday work, which is based on process-oriented thinking (a concept foreign to most academics) and whose need is “prediction.” This affects data collection, the use of statistical tools, and the validity of analyses.