Davis Balestracci

Six Sigma

Count Data: Easy as 1-2-3?

Hardly!

Published: Monday, July 7, 2014 - 12:18


Author's note: To my non-U.S. readers, I apologize for using the sport of baseball to make my points today—and during the World Cup, no less! It’s a perfect context, however, and I hope you will be able to understand the main points.

In my last column, I talked about the different types of control charts and encouraged the use of the individuals chart almost exclusively. I also mentioned that p-charts (percentages) and u-charts (rates) can be very useful for stratification. I’m going to pursue that more in the next couple of columns, but today it’s “back to basics” to set up their foundation by talking about count data.

Count data seem intuitively simple

Let’s begin with a common issue: tracking complaints. How do you count them? Do you count the number of complaints received each month, or do you count the number of customers who complained? The people tallying such data will certainly need careful instruction before you can begin to collect useful count data—lurking “human variation” can seriously compromise the quality of the data and its use.

Let’s say a certain pediatrics unit reported the number of concerns on a monthly basis. The values for one 21-month period were 20, 22, 9, 12, 13, 20, 8, 23, 16, 11, 14, 9, 11, 3, 5, 7, 3, 2, 1, 7, and 6. Even though you know the counts, you don’t know the whole story, because you don’t know the context for the counts. Before anyone can make sense of these counts, certain questions must be answered.

For instance, how is “concern” defined? Is it just an antiseptic term for “complaint?” Are these customer complaints, internally generated counts, or a mixture of both? Why does the number of concerns drop? What about the rumor that the hospital administrator is using these numbers to rank departments for bonuses? What exactly constitutes a complaint? Does a complaint about a chilly reception room count?

Always know your objective for collecting such data. This will help determine what should be included in your count—the threshold that causes something to go from a nonevent (0) to a tallied, countable event (1).
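
As a purely mechanical illustration (this sketch is mine, not the column’s), here is what the individuals chart recommended in my last column would do with the 21 monthly counts above. It computes natural process limits from the average moving range; it cannot, of course, answer any of the contextual questions just raised.

```python
# Illustrative sketch only: an individuals (XmR) chart for the 21 monthly
# "concern" counts quoted above. Limits use the conventional
# mean +/- 2.66 * (average moving range).

concerns = [20, 22, 9, 12, 13, 20, 8, 23, 16, 11, 14, 9, 11,
            3, 5, 7, 3, 2, 1, 7, 6]

mean = sum(concerns) / len(concerns)
moving_ranges = [abs(b - a) for a, b in zip(concerns, concerns[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

upper_limit = mean + 2.66 * mr_bar            # upper natural process limit
lower_limit = max(0.0, mean - 2.66 * mr_bar)  # counts cannot fall below zero

print(f"mean = {mean:.1f}, limits = [{lower_limit:.1f}, {upper_limit:.1f}]")
```

Even a chart like this only tells you whether the counting process looks stable; it says nothing about whether a “concern” was counted the same way in every month.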

The more specific, the better

As I sit here listening to a Boston Red Sox game, it occurs to me that baseball might be one of the best examples of giving extremely specific criteria for tallying, and then using count data for comparison and prediction. Not only that: when a certain criterion is felt not to reflect its original intent—usually some ranking of ability for relative comparison—baseball is good at redefining the operational definition. As you will see from the example below, baseball is also good at defining new supplemental empirical measures that seem to reflect ability better.

Let’s track the evolution of the indicator “save.” The term was being loosely used as far back as 1952. Some coaches thought it reflected the particular skill of a relief pitcher to “seal the deal”—in other words, to successfully maintain a lead until the end of the game, regardless of the margin of victory. Because these pitchers could not be credited with the win, record keepers wanted some recognition of this unique skill, as well as a way to compare relief pitchers’ abilities to perform this task. However, being unofficial, the statistic went largely unnoticed.

A formula with more criteria for saves was invented in 1960, in part because of the feeling that the major existing comparative statistics for pitchers at the time, earned run average (ERA—the average number of earned runs a pitcher gives up per nine innings) and won-loss record, did not sufficiently measure a relief pitcher’s effectiveness. Each of these well-established statistics proved flawed for relief pitchers, as follows:

• ERA does not account for runners already on base when a reliever enters the game. Officially, these runners are the responsibility of the relief pitcher’s predecessor, but it’s the reliever’s job to keep them from scoring once he is in the game. Some relief pitchers, as it happens, are much better at this than others. How could this be reflected?
• Won-loss record does not account for a reliever’s skill at protecting leads. A case in 1959 provided particular motivation for closer scrutiny of this statistic: one relief pitcher that year had a won-loss record of 18–1. On the surface, that looks impressive; however, 10 of his 18 wins came in games in which he entered, gave up the lead, and then, fortunately for him, his team regained the lead and won. He got the win, but this was hardly a pitcher who inspires confidence in his team!

“Save” became an official baseball statistic in 1969. As the initial definition was routinely used, situations appeared in which a save was awarded a bit too liberally for the intent, even though on these occasions the reliever met the formal criteria. In 1974, tougher criteria were adopted, but use of this revised definition showed it to be too stringent. It was redefined once more in 1975 and has remained the operational definition ever since.

Today, there are four very specific criteria necessary for a relief pitcher to earn a save, all of which must be met:

• He is the finishing pitcher in a game won by his team
• He is not the winning pitcher
• He is credited with at least one-third of an inning pitched
• He satisfies one of the following three conditions: He enters the game with a lead of no more than three runs and pitches for at least one inning; or he enters the game, regardless of the count, with the potential tying run either on base, at bat, or on deck; or he pitches for at least three innings

It is possible to look back retroactively at official box scores from seasons in which the earlier versions of the save rule were in force. If you apply the current definition of a save (sketched in the code below) to those games, not surprisingly, you will find some in which pitchers were credited with saves in situations where they would not earn them under the current rule.
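
As a rough illustration of what applying the definition retroactively involves (the code and its field names are hypothetical, not from any official source), the four criteria above can be written as a small function that takes a relief outing from a box score and reports whether it qualifies as a save under the current rule.

```python
# Hypothetical sketch of the current save rule as an operational definition.
# Field names are invented for illustration; an official scorer's records
# would look different.

from dataclasses import dataclass

@dataclass
class ReliefOuting:
    finished_game: bool                 # was he the finishing pitcher?
    team_won: bool
    credited_with_win: bool
    outs_recorded: int                  # 1 out = one-third of an inning
    lead_on_entry: int                  # runs his team led by when he entered
    tying_run_in_reach: bool            # tying run on base, at bat, or on deck

def earns_save(o: ReliefOuting) -> bool:
    innings = o.outs_recorded / 3.0
    basic = (o.finished_game and o.team_won
             and not o.credited_with_win
             and o.outs_recorded >= 1)                 # at least one-third of an inning
    situation = ((1 <= o.lead_on_entry <= 3 and innings >= 1)  # close lead, full inning
                 or o.tying_run_in_reach                        # tying run within reach
                 or innings >= 3)                               # three or more innings
    return basic and situation
```

Encoded this way, the definition can be applied case by case to old box scores—exactly the kind of retroactive reevaluation described above.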

One similar situation familiar to most of us is how Pluto, depending on the criteria used, is either a planet (1) or not a planet (0). Regardless of the definition, Pluto continues to exist. An interesting bit of fallout from all this discussion is that if criteria are applied that once again declare Pluto to be a planet, you would then have to declare 100 other heavenly bodies to be planets as well!

W. Edwards Deming: “There is no true value of anything”

In counting things, one comes up with a process to look at a situation, apply criteria, and conclude whether an event “occurred.” Using different criteria could result in a different conclusion, and no one measure will ever be 100-percent perfect. Initially, you want criteria robust enough that the same decision is made regardless of who evaluates the situation (healthcare people—think “chart review”). That’s only half the battle. The real test then becomes whether subsequent use of the number, under this definition, fits one’s original objective and allows the appropriate action.
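
One small, purely hypothetical sketch of the “same decision regardless of who evaluates it” test: have two reviewers apply the criteria independently to the same cases and check how often they agree, correcting for agreement expected by chance (Cohen’s kappa). The reviewers and their decisions below are invented for illustration.

```python
# Illustrative sketch only: agreement between two reviewers who each classify
# the same cases as event (1) or non-event (0) using the same criteria.

reviewer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # hypothetical decisions
reviewer_b = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]

n = len(reviewer_a)
observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

# Chance-corrected agreement (Cohen's kappa) guards against agreement that
# occurs only because one category dominates.
p1a, p1b = sum(reviewer_a) / n, sum(reviewer_b) / n
expected = p1a * p1b + (1 - p1a) * (1 - p1b)
kappa = (observed - expected) / (1 - expected)

print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

High observed agreement with low kappa is a warning that the criteria may not be as robust as they look.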

As you also see in the evolution of the definition of “save,” there was an implicit plan-do-study-act (PDSA) cycle in using “save” as a performance statistic. A judgment was made that the 1969 definition did not adequately reflect the purpose for which it was intended, so two changes occurred—one in 1974, and the final one in 1975. The criteria were clear enough that situations before 1969 could be retrofitted with the new criteria and reevaluated. There was agreement that the resulting set of numbers now allowed fairer comparisons and rankings.

But obtaining the count itself, though important, still provides only limited information. For example, if pitcher No. 1 has 24 saves and pitcher No. 2 has 37 saves, is the latter a better relief pitcher? Not necessarily. To interpret these counts, one must also know the area of opportunity for each count. That will be the subject of my next column.

Meanwhile, track down some important count data in your organization and find out 1) who collects it (and are there multiple collectors?), 2) whether the collectors know what it is used for, 3) whether they agree on the definition, 4) what actions are taken because of these data, 5) whether these actions are consistent with the collectors’ view, and 6) whether fear could affect collectors’ reporting. Until then....



About The Author


Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.