Davis Balestracci

Are You Unknowingly Reacting to the Data Process?

Everyday variation lurks in many guises

Published: Monday, March 16, 2015 - 13:17

In my last column, I discussed how even a well-designed study with a statistically significant result doesn’t necessarily mean viability in the real world. Post-study, one must study the manifestations of variation on the result in any environment in which the result is applied—and each environment will have its own unique variation.

This type of study requires the use of analytic statistical methods, which are designed to deal specifically with this type of variation by exposing it and monitoring attempts to reduce that which is inappropriate and unintended.
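
To make “exposing and monitoring variation” concrete, here is a minimal sketch, not from the original column, of an individuals and moving-range (XmR) chart, one common analytic tool for separating routine variation from signals over time. Only the standard XmR screening constants (2.66 and 3.27) are assumed; the measurements themselves are invented:

```python
# A minimal XmR (individuals and moving-range) chart sketch.
# Illustrative only: the measurements below are invented.

measurements = [23.1, 24.8, 22.5, 25.0, 23.9, 26.2, 24.4, 23.0, 24.6, 25.3, 23.7, 40.0]

# Moving ranges: absolute differences between consecutive points.
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]

x_bar = sum(measurements) / len(measurements)
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR screening constants: 2.66 for the individuals, 3.27 for the ranges.
upper_natural_limit = x_bar + 2.66 * mr_bar
lower_natural_limit = x_bar - 2.66 * mr_bar
upper_range_limit = 3.27 * mr_bar

print(f"Average: {x_bar:.2f}")
print(f"Natural process limits: {lower_natural_limit:.2f} to {upper_natural_limit:.2f}")
print(f"Upper limit for moving ranges: {upper_range_limit:.2f}")

# Points outside the natural limits are signals worth investigating;
# everything inside is routine variation the process will keep producing.
for i, x in enumerate(measurements, start=1):
    if x > upper_natural_limit or x < lower_natural_limit:
        print(f"Observation {i} ({x}) looks like a signal, not routine variation.")
```

The arithmetic is trivial; the value is in watching the data in time order, so that attempts to reduce inappropriate variation can be judged against what the process routinely delivers.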

Not only will variation come into play in the process of applying any result, but it also affects the four statistical data processes:
1. Measurement definition
2. Data collection
3. Analysis
4. Interpretation of the analysis

In formal research such as clinical trials, these are tightly controlled (and subsequently ignored) via well-known enumerative statistical methods. However, post-study, not only is control on the method of applying the result significantly lessened, but environmental variation (due to work culture) will also come into play in each of the four data processes.

I used whitewater rapids as an analog for a real-world environment. In the case of applying a research result, or even the simplest rapid-cycle plan-do-study-act (PDSA) design:

Besides the subsequent study of variation in the actual application process, do people even consider studying the variation in each of the four data processes? If not, there is a very real danger: People could act on variation that is due solely to any or all of the four data processes. When these processes are ad hoc and not formally designed, there is a real danger that variation from the data processes will overshadow and cloud the study of any variation (beneficial or otherwise) caused by the actual process being tested.

Cursory consideration of the environmental (i.e., cultural) effects of variation on the four data processes inevitably results in vague plans leading to vague data and vague results. Wouldn’t it be easier to test the study if the data processes had their variation minimized as part of the plan so as not to cloud the interpretation of the tested process application? This is a most nontrivial process, which I will deal with in the rest of this column and next time.

Human variation rears its ugly head yet again

Each of the four data processes has the exact same six sources of process inputs we’ve all been taught: people, methods, machines, materials, measurements (literal numbers), and environments. Each also has an inherent “quality” associated with it, and its inputs are subject to the same influences of outside variation as the process being tested. All of this can potentially compromise the quality of the study.

What is the biggest danger? Human variation, which includes how the work culture perceives the planned study and how that perception affects executing the test and the “methods” of measurement, collection, analysis, and interpretation.

In that context, let’s consider some key questions for each of these four data processes.

Measurement. Are the data operationally defined in line with clear objectives? W. Edwards Deming was fond of saying, “There is no true value of anything.”
• What is the concept you are trying to evaluate?
• What data will allow you to attach a value to this concept?
• By what standards or measures will you judge it?
• Can you write down clear descriptions of how to measure the characteristic?
• What are some factors that might cause measurements of the same item or situation to vary?
• In the case of measuring discrete events, is the threshold of occurrence between a “nonevent” (0) and an “event” (1) clear and understood? (See the sketch after this list.)
• How can you reduce the varied perceptions and ultimate impact of these factors?
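
As one hypothetical illustration of the “nonevent (0) versus event (1)” question above, here is a sketch of an operational definition written as an explicit rule. The “late start” scenario, the five-minute threshold, and the names are assumptions for illustration, not a recommendation:

```python
# A hypothetical operational definition of a discrete event: a "late start"
# for a clinic appointment. The five-minute threshold, the names, and the
# example values are illustrative assumptions, not a standard.

LATE_THRESHOLD_MINUTES = 5  # agreed on and written down before any data are collected

def is_late_start(scheduled_minute, actual_minute):
    """Return 1 if the visit counts as a 'late start' event, else 0."""
    return 1 if (actual_minute - scheduled_minute) > LATE_THRESHOLD_MINUTES else 0

# Two collectors applying the same written rule get the same answer;
# "it felt like it started pretty late" would not.
print(is_late_start(scheduled_minute=0, actual_minute=4))   # 0: nonevent
print(is_late_start(scheduled_minute=0, actual_minute=12))  # 1: event
```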

Collection. Consider your plan for collecting these data:
• Will the data collectors have to take samples? If so, how often, how many, and where?
• If people must be sampled, specifically who should be chosen? (See my last column about the fallacy of random sampling in analytic statistics, starting at: “For example, we may take a group of patients....”)
• How will these data be recorded?
• Can you design a data sheet (i.e., check sheet) to record the data as simply as possible? (A small sketch follows this list.)
• Another issue to consider in many cases: Do your customers or suppliers (both internal and external) collect the same kind of data? If so, what procedures or instruments do they use? Are your definitions, standards, and procedures comparable to those used by customers and suppliers?
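
To show how simple such a data sheet can be, here is a small sketch of a check sheet as a tally by day and defect category; the categories and observations are invented:

```python
# A minimal check-sheet sketch: a tally of defect categories by day.
# The categories and observations are invented for illustration.
from collections import defaultdict

tally = defaultdict(int)

# Each occurrence is recorded the moment it happens, as (day, category).
observations = [
    ("Mon", "missing signature"),
    ("Mon", "wrong code"),
    ("Tue", "missing signature"),
    ("Tue", "missing signature"),
    ("Wed", "illegible entry"),
]

for day, category in observations:
    tally[(day, category)] += 1

for (day, category), count in sorted(tally.items()):
    print(f"{day}  {category:<20} {count}")
```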

Analysis. Are you aware that your analysis should be known before a single piece of data has been collected? When one applies a specific statistical technique, there is an underlying assumption that the data were collected in the specific way that makes the analysis appropriate (a small sketch of this idea follows the list below). The danger here is that the computer will do anything you want, whether the data were collected appropriately or not. Imagine you have the data in hand:
• What could these data tell you?
• What will you do with the data?
• What specific statistical technique(s) will you use?
• Were the data collected in a way that makes this analysis appropriate?
• What will you do after that?
• Would another kind of data be more helpful?
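
One way to act on “know the analysis before the data are collected,” sketched here with entirely illustrative names and values, is to write the plan down first and have the analysis refuse to run if the collected data do not meet the planned technique’s assumptions:

```python
# A sketch of deciding the analysis before collecting data: the plan is written
# first, and the code refuses to run the technique if the collected data do not
# meet the plan's stated assumptions. All names and values are illustrative.

analysis_plan = {
    "question": "Did average turnaround time change after the new procedure?",
    "data": "one average turnaround time per day, recorded in time order",
    "technique": "run chart / XmR chart on the time-ordered daily values",
}

def collection_matches_plan(records):
    """Return True only if the data satisfy the planned technique's assumptions."""
    dates = [r["date"] for r in records]
    in_time_order = dates == sorted(dates)
    one_value_per_day = len(dates) == len(set(dates))
    return in_time_order and one_value_per_day

records = [
    {"date": "2015-03-02", "turnaround": 41.0},
    {"date": "2015-03-03", "turnaround": 38.5},
    {"date": "2015-03-04", "turnaround": 44.2},
]

if collection_matches_plan(records):
    print("Proceed with the planned analysis:", analysis_plan["technique"])
else:
    print("Data do not match the plan; the planned analysis does not apply.")
```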

Interpretation. Statistics is not a set of techniques used to “massage” data. Proper use allows more proactive interpretation of the variation on which you must take appropriate action and ultimately make predictions. The danger: Any variation is one of two types (common cause or special cause), and treating one as the other can actually make the situation worse, certainly no better.
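
A small simulation, in the spirit of Deming’s funnel experiment rather than anything from this column, illustrates the danger: treating routine (common-cause) variation as if each deviation had a special cause, and “correcting” for it, roughly doubles the variance. The target, noise level, and sample size below are arbitrary:

```python
# A small simulation of "tampering": adjusting a process to compensate for
# routine, common-cause variation as if every wiggle had a special cause.
# In the spirit of Deming's funnel experiment; all constants are arbitrary.
import random
import statistics

random.seed(1)
TARGET = 100.0
NOISE_SD = 5.0   # purely common-cause variation
N = 10_000

# Hands-off process: aim at the target and leave the process alone.
hands_off = [TARGET + random.gauss(0, NOISE_SD) for _ in range(N)]

# Tampered process: after each result, shift the aim to "correct" the
# last deviation, treating noise as if it had a findable cause.
tampered = []
aim = TARGET
for _ in range(N):
    result = aim + random.gauss(0, NOISE_SD)
    tampered.append(result)
    aim -= result - TARGET  # over-adjustment on common-cause noise

# The tampered spread comes out roughly 41 percent larger (variance doubles).
print(f"Hands-off spread: {statistics.stdev(hands_off):.2f}")
print(f"Tampered spread:  {statistics.stdev(tampered):.2f}")
```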

Want to avoid ‘vague’? Then study the data process as part of the plan

Human perception of variation, and how people execute the “methods” of the four data processes—measurement, collection, analysis, and interpretation—can compromise the quality of data and cloud the effect of the tested change, rendering any subsequent analysis virtually useless. This must be anticipated and minimized as part of any study’s plan—to the point of being tested separately before any formal test of a potentially beneficial change. Eight questions need to be addressed, which I will talk about next time.

About The Author

Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.