Davis Balestracci

Quality Insider

Dealing With Count Data and Variation

A non-statistical common cause strategy from Joseph Juran

Published: Tuesday, August 19, 2014 - 15:58

In my last column, I showed the power of process-oriented thinking with a safety scenario. A simple run chart demonstrated that, despite meeting an aggressive 25-percent reduction goal (i.e., 45 accidents during the first year, and 32 the following year), the process that produced the 32 was no different from the process that produced the 45. It was common cause. Now what?

One advantage to the common-cause nature of the problem is that all 77 incidents were produced by the same process. Therefore, they can be aggregated, then stratified by process inputs to reveal hidden special causes.

Joseph Juran’s wisdom comes to the rescue, again

I came across the technique below in the classic “Juran on Quality Improvement” series from the 1980s. This video series was a seminal part of my quality improvement development. I watched many of them until they practically wore out. He used this series to teach the concepts in his 1964 book, Managerial Breakthrough (McGraw-Hill, 1995 revised edition), which is still worth reading. When I saw Juran during his 90th birthday farewell tour, I had him sign my copy. He smiled at me as he gave it back and said, “You know, I think this is my best work.”

Juran “invented” the Pareto Principle (20 percent of a process accounts for 80 percent of its problems), and he demonstrated a brilliant variation to show its power in two dimensions—a Pareto matrix, if you will. In my presentations over the years, many people have told me that this has been by far their most useful diagnostic tool for finding significant opportunities to improve a “vague” situation. In the case of the accident data, note the advantage of looking at the data in two dimensions—accident type and unit.

In line with the Pareto principle, two units account for most of the accidents, and two accident types account for many of the accidents. However, look at the power of the matrix presentation when the high numbers are investigated further.
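
To make the construction of such a matrix concrete, here is a minimal sketch in Python. The incident records, unit names, and accident-type labels are hypothetical placeholders, not the data behind the article’s matrix.

```python
from collections import Counter

# Hypothetical incident records as (unit, accident_type) pairs.
# Illustrative only -- not the actual data behind the article's matrix.
incidents = [
    ("A", 2), ("A", 5), ("B", 3), ("B", 3), ("B", 3), ("B", 1),
    ("C", 1), ("C", 5), ("D", 5), ("E", 1), ("E", 2), ("E", 3),
    ("E", 4), ("E", 5),
]

units = sorted({u for u, _ in incidents})
types = sorted({t for _, t in incidents})
cells = Counter(incidents)  # count for each (unit, accident type) cell

# Print the two-dimensional Pareto matrix with row and column totals.
header = ["Unit"] + [f"Type {t}" for t in types] + ["Total"]
print("".join(f"{h:>8}" for h in header))
for u in units:
    row = [cells[(u, t)] for t in types]
    print("".join(f"{v:>8}" for v in [u] + row + [sum(row)]))
col_totals = [sum(cells[(u, t)] for u in units) for t in types]
print("".join(f"{v:>8}" for v in ["Total"] + col_totals + [sum(col_totals)]))
```

Sorting the row and column totals in descending order recovers the familiar one-dimensional Paretos for units and for accident types, while the individual cells expose concentrations such as Unit B’s accident type three.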

Unit B, despite its many accidents, has excellent performance except for accident type three. Also, because no one else is having trouble with this accident type, the odds for rectifying this situation are quite good. It might not even reflect departmental safety competence, just a different “process” with six possible inputs (people, methods, machines, materials, measurement, and environment).

In cases like this, people generally assume it’s methods, but one or more of the other five inputs are probably making this department “perfectly designed” to be more dangerous than the others. Something unique to that department is making its work environment inherently more dangerous. Now, what would a plantwide safety seminar on accident type three accomplish? It would treat a special cause as if it were a common cause and waste a lot of people’s time in the process.

Unit E, on the other hand, presents no such clear, localized opportunity for action. For whatever reason, its entire safety performance is suspect because it’s experiencing all the accident types. This will take further investigation—but only of Unit E’s accidents. The situation has been focused, so there is far less subsequent work.

After studying accident type five, it becomes obvious that it’s a problem for the entire plant because everyone is experiencing it. It’s not as simple as saying, “Be more careful.” The plant is perfectly designed to have this hazardous situation.

If appropriate action could be taken on these three significant sources of undesirable variation, there’s the potential to reduce accidents by approximately 40 percent. But by concentrating only on the monthly total and overlooking this common-cause strategy that stratifies by process “inputs”—i.e., people (in this case, unit) and measurement (definitions of accident types)—Unit B would continue to have accident type three, Unit E would continue its poor safety performance, and accident type five would continue unabated.

Be careful when you aggregate

My last column also contained a medication-error run chart that showed a special cause every July. Once that special cause is exposed, July’s data should be kept separate and not aggregated with the other 11 months’ data because it comes from a different “process.” The error pattern aggregated over the three Julys should then be compared with the pattern aggregated over the other 11 months. By seeing how these two patterns differ, one hopes to prevent this currently predictable July special cause from recurring, or at least minimize its impact. However, if the patterns were similar—i.e., the same mistakes being made but at higher rates—a different action would be required.
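
As a sketch of that comparison (with hypothetical error categories and counts, not the actual medication-error data from the earlier column), the Julys’ error mix can be tabulated separately from the other months’ and the two distributions compared side by side:

```python
from collections import Counter

# Hypothetical medication-error records as (month, error_type) pairs.
# Illustrative only -- not the data from the earlier column's run chart.
errors = [
    ("Jul", "wrong dose"), ("Jul", "wrong dose"), ("Jul", "missed dose"),
    ("Jul", "transcription"), ("Jan", "missed dose"), ("Mar", "wrong dose"),
    ("May", "wrong dose"), ("Oct", "transcription"), ("Nov", "missed dose"),
]

july = Counter(t for m, t in errors if m == "Jul")
other = Counter(t for m, t in errors if m != "Jul")

# Compare each error type's share of its own group's total.
for error_type in sorted(set(july) | set(other)):
    july_pct = 100 * july[error_type] / sum(july.values())
    other_pct = 100 * other[error_type] / sum(other.values())
    print(f"{error_type:>15}:  July {july_pct:5.1f}%   other months {other_pct:5.1f}%")
```

If the two patterns look alike, July is simply “more of the same” at a higher rate; if one error type dominates only in July, that points directly at the special cause to remove.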

It’s much more productive to have people brainstorm ways to stratify data in two dimensions than to waste energy explaining why this month’s result is different from last month’s. It also helps to focus subsequent, more detailed diagnosis on the 20 percent of the process causing 80 percent of the problem. This preliminary work would result in a much less overwhelming cause-and-effect diagram and a more focused subsequent effort than anything produced in response to the vague question, “What causes accidents?” Not to mention the plethora of rapid-cycle PDSAs that could result!

Before any action can be taken, the process producing the data must be assessed. As W. Edwards Deming used to say, “Figures on accidents do nothing to reduce the frequency of accidents.”

About The Author

Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.

Comments

Count data analysis needs a lot of help

Those of us lucky enough to be "data rich" with continuous data are spoiled.

Count data is always messy in some way, it seems to me.

And yet reliability and safety people have to work with count data in many high-risk scenarios.

Thanks for your focus on "small data" when "big data" is so hyped nowadays.

What if we compare different sized units: normalize?

Great article!

While looking at how I could use this, I came across a question: what if the work activity in the different departments is very different?

For instance, if one of the units has twice the number of employees and/or produces twice the output of any other unit, we would certainly expect a higher number of incidents.

This basically means that the areas of opportunity are different in the different units. Should we in this case use the two-way table with "normalized" data to make it relevant (i.e., number of accidents per number of employees or hours worked...)?

Thank you for your insight.

Best regards, Pierre

Great Idea

A simple, yet effective way to look at the data. Well worth sharing with others.