More than 30 years ago, W. Edwards Deming was hailed in the U.S. workplace as a potential savior of automakers. Years of Japanese automakers’ supremacy in terms of quality had resulted in a steady gain of market share for manufacturers such as Honda and Toyota. Automotive manufacturers in the U.S. were in a period of crisis; they realized that the way they operated had to change if they were to remain competitive.
Health care in the United States is now in a similar state. Instead of a single person bringing the message of change, it’s a nonprofit think tank, the Institute for Healthcare Improvement (IHI), headquartered in Cambridge, Massachusetts. The IHI was founded in 1991 to address the growing concern regarding the quality of health care.
The IHI has been largely successful at driving quality improvement by using several different approaches. One is to run improvement campaigns with specific, measurable goals and completion dates. “How much and by when?” is a question often heard at the institute and is embedded in its two major efforts during the past three years--the 100,000 Lives Campaign, which was a national effort to reduce preventable deaths in U.S. hospitals between January 2005 and July 2006, and the ongoing 5 Million Lives Campaign, an effort to prevent harm to patients.
All of IHI’s work is grounded in a very practical approach to quality improvement. Initially developed by the Associates in Process Improvement (API), the IHI model for improvement focuses on three questions fundamental to all improvement activities and the plan-do-study-act (PDSA) cycle:
• “What are we trying to accomplish?”
• “How will we know that a change is an improvement?”
• “What changes can we make that will result in improvement?”
The IHI has made a concerted effort to spread this improvement methodology through annual seminars, multiple-day workshops, and even a nine-month course for individuals interested in becoming “improvement advisors.”
Huge gains have been made with regard to how health care providers address the first and third questions. Hospital staffs now engaging in improvement methods are much more likely to create solid “aim statements” that include the population affected, the unit of measure, and the time expected to realize the improvement. The IHI has been instrumental in teaching hospital staffs how to break the habit of setting broad “solve world hunger” goals by crafting aim statements such as:
• “We will reduce ventilator-associated pneumonia by 90 percent before the end of 2008.”
• “The percent of post-surgical site infection for cardiac patients will be less than 2 percent by end of fiscal year 2007.”
• “Pneumonia patients will receive first-dose antibiotics within four hours of arrival by the end of calendar year 2008.”
All three examples are actionable and state clear goals, a unit of measure, the population involved, and an expected date of completion. Health care in the United States is much better served by being specific and actionable rather than being broad and vague.
Clearly, health care has become much better at defining what it will do (first question) and rapidly implementing supported changes (the third question). Unfortunately, creating reliable indicators to answer the second question, “How will we know that a change is an improvement?” lags behind.
Not only is there a general lack of understanding about how to create, define, and collect indicator data, but statistical process control (SPC) isn’t widely used in the health care industry. Explaining why SPC hasn’t gained the level of acceptance in health care that it has in other industries is beyond the scope of this discussion. Readers wanting guidance for using SPC within health care would be best served by reviewing the books Quality Health Care: A Guide to Developing and Using Indicators, by Robert C. Lloyd (Jones and Bartlett Publishers, 2004) and Improving Healthcare with Control Charts: Basic and Advanced SPC Methods and Case Studies, by Raymond G. Carey and Larry V. Staker (ASQ Quality Press, 2003).
Why has health care been so slow to adopt improvement methods long used by industries around the globe? A large part of the answer is workload: most improvement teams have far too many action items to complete, even with individual members working 50 to 60 hours a week. Meanwhile, external regulatory bodies, as well as federal and state regulations, have increased the administrative work associated with health care. The result is that people are too burdened to complete any one demand thoroughly and with high quality.
So how does one create solid indicators by which to gauge improvements in light of these constraints? To begin, it’s necessary to define the essential measurement elements used to assess the effect of a perceived improvement. Ten elements should be included to create a trustworthy indicator (a simple way to record them all is sketched after the list):
• Type of data
• Indicator name
• Purpose (monitoring or improvement), i.e., why measure this?
• Owner
• Level
• Kind
• Operational definition
• Where the data reside
• Persons responsible for collecting and entering data
• When to stop
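As a purely illustrative way to keep these ten elements together for each indicator, they could be recorded in a simple structure such as the Python sketch below. The field names and the sample entry are hypothetical, not an IHI or API template.

```python
from dataclasses import dataclass

@dataclass
class IndicatorDefinition:
    """One record per indicator, covering the ten documentation elements."""
    data_type: str            # "attributes" or "continuous"
    name: str                 # includes the unit of measure, no judgment
    purpose: str              # monitoring or improvement, and why it matters
    owner: str                # single point person, typically in leadership
    level: str                # "big dot", "driver", or "project"
    kind: str                 # "process", "outcome", or "balancing"
    operational_definition: str
    data_location: str        # system, module, or report where the data reside
    collectors: str           # who collects and enters the data
    stop_date: str            # when collection is expected to end

# Hypothetical example, not a real indicator definition
falls = IndicatorDefinition(
    data_type="attributes",
    name="Inpatient falls per 1,000 patient days",
    purpose="Improvement project on the medical-surgical unit",
    owner="Chief nursing officer",
    level="project",
    kind="outcome",
    operational_definition="See inclusion/exclusion criteria below",
    data_location="Incident-reporting system, monthly falls report",
    collectors="Unit clerk collects; quality analyst enters",
    stop_date="Six months after project close",
)
```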
Simply put, indicators fall into one of two types--“attributes” or “continuous” (the latter is also referred to as “measurement” or “variables” data). Continuous data are measured along a continuous scale, with the core-defining quality being that each interval is equal. Some classic examples of variables data within health care are blood-pressure readings, wait time in the emergency room, number of needle sticks, and monthly operating margin.
Attributes data are characterized by sorting characteristics into categories, or “buckets.” For instance, at discharge a patient is either living or deceased. In the maternity ward, a woman either had a C-section or didn’t. Both of these examples are cases where the denominator is a known value: we know how many women gave birth and how many by C-section. Many people in health care erroneously refer to this as the “C-section rate,” although it isn’t one. The distinction is important because a rate is very different from a percent, and the choice should be decided upon and documented while creating an indicator. A rate is used when the numerator can be counted but there is no fixed denominator of cases; instead, the count is divided by a measure of exposure. The best example of this is the classic “fall rate,” which is simply the number of falls divided by the total number of inpatient days.
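To make the percent-vs.-rate distinction concrete, here is a minimal Python sketch with invented numbers: a percent divides by a known denominator of cases, while a rate divides an event count by a measure of exposure, conventionally scaled (e.g., per 1,000 patient days).

```python
# Percent: the denominator (all births) is a known count of cases.
births = 250
c_sections = 80
c_section_percent = 100 * c_sections / births              # 32.0 percent

# Rate: falls are counted as events and divided by exposure
# (total inpatient days), conventionally scaled per 1,000 days.
falls = 12
inpatient_days = 4_800
fall_rate_per_1000_days = 1_000 * falls / inpatient_days    # 2.5 falls per 1,000 patient days

print(f"C-section percent: {c_section_percent:.1f}%")
print(f"Fall rate: {fall_rate_per_1000_days:.1f} per 1,000 patient days")
```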
Treating patient-satisfaction data as continuous data is another misuse of data type in health care. If I give a hospital a 4-out-of-5 score on some measure of satisfaction, and you give it a score of 2, am I twice as pleased as you are? This certainly isn’t the case, but it is the way these data are treated once they’re converted into a 100-point scale and plotted as continuous data with decimals. The most accurate way of interpreting these data is as a percentage. A common patient-satisfaction indicator is the “percent of top box”: the percentage of responses in which patients rated a question at the highest level.
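As an illustration, under a 1-to-5 survey scale the “percent of top box” could be computed as follows; the responses are invented.

```python
# Hypothetical 1-to-5 satisfaction responses for one survey question
responses = [5, 4, 5, 3, 5, 2, 5, 4, 5, 5]

top_box = 5  # highest possible rating on this scale
percent_top_box = 100 * sum(r == top_box for r in responses) / len(responses)

print(f"Percent of top box: {percent_top_box:.0f}%")  # 60% in this made-up sample
```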
Type of data should be the first thing documented, and in many ways it is critical in shaping how the reader interprets the indicator. What’s more valuable, knowing the percentage of days that the temperature was more than 90 degrees or knowing the average temperature? Both provide information, but on the whole, continuous data are preferable to attributes data and should be used when either is possible. The type of data determines which control chart to select, and charts based on continuous data will be more powerful than charts based on attributes data.
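As a rough sketch of how data type steers chart selection (a common SPC convention rather than a rule stated in this article): percentages of attributes data are typically plotted on a p-chart, rates on a u-chart, and continuous data on an individuals or X-bar chart.

```python
def suggest_control_chart(data_type: str, form: str) -> str:
    """Very simplified chart picker; real selection also depends on
    subgroup size and other considerations."""
    if data_type == "attributes":
        return "p-chart" if form == "percent" else "u-chart"   # percent vs. rate
    if data_type == "continuous":
        return "I-chart" if form == "individual" else "X-bar/S chart"
    raise ValueError(f"Unknown data type: {data_type}")

print(suggest_control_chart("attributes", "percent"))      # p-chart
print(suggest_control_chart("continuous", "individual"))   # I-chart
```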
Understanding and selecting data type is the first, and arguably the most important, step in creating a healthy indicator.
A useful indicator name will contain the unit of measure and avoid a statement of judgment. “Thou shall” labels can provoke defensiveness among those involved in the collection and improvement processes. “The patient should never wait too long” and “Amount of bad feedback ratings” are typical examples.
Another problem with these two examples is the lack of a unit of measure. Much-improved versions would be “Average minutes wait in ultrasound” and “Percentage of top-box rating in concern for privacy.” Not only do both include the unit of measure and avoid negative connotations, they are also more descriptive of what they purport to measure.
Remember, the indicator name should convey a clear and easily understood meaning to a first-time reader.
It’s also important to document whether the indicator is being created for use within an improvement project or simply as a monitoring device. Sometimes a measure warrants constant attention and analysis even if no improvement effort is underway. An example of this is the use of restraints in a hospital setting. The Joint Commission (JC), based in Oakbrook Terrace, Illinois, has long required monitoring of this indicator, and although many hospitals aren’t actively working on lowering restraint use, they believe it’s too important an indicator to stop monitoring.
Another function of purpose is to explain the rationale for collecting the indicator. Why should time and resources be sacrificed for collecting this information? What makes it important in the organization’s fundamental mission?
Who will “own” the indicator? Who will ensure that it’s collected accurately and continually? Typically, this person is in leadership and will ultimately be the point person for questions about the indicator’s performance. Without documenting ownership, indicators can become orphans with no one taking responsibility for answering questions and learning from the data. I recommend a single point person for each indicator. More than one person can create confusion about responsibility.
There are three levels of indicators: big dots, drivers, and project. Big dots can be thought of as the crude measures of an organization’s mission. In a hospital, a perfect example of this is percent mortality. A driver is any midlevel indicator that directly affects a big dot. A driver that supports the big dot of percent mortality might be “percent pneumonia mortality.” Pneumonia is the fourth leading cause of death in U.S. hospitals, so lowering the mortality related to pneumonia should ultimately reduce the overall percent mortality. Indicators at the big dot and driver levels are too large for running improvement efforts; this is where project-level indicators come in. These are used within an actual improvement effort and support the related higher-level driver indicator. A great example of a project-level indicator would be “antibiotics within four hours of arrival.” Administering antibiotics as soon as possible has been shown to reduce mortality, so this project-level indicator supports the driver “percent pneumonia mortality.” Defining the level of your indicator helps to ensure that the creator and the reader both understand its role in the organization’s mission.
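One illustrative way to make these relationships explicit is to record, for each indicator, the higher-level indicator it supports; the names below are taken from the examples above, and the structure itself is hypothetical.

```python
# Hypothetical mapping of project indicators up to drivers and big dots
indicator_levels = {
    "Percent mortality": {"level": "big dot", "supports": None},
    "Percent pneumonia mortality": {"level": "driver", "supports": "Percent mortality"},
    "Antibiotics within four hours of arrival": {
        "level": "project",
        "supports": "Percent pneumonia mortality",
    },
}

for name, info in indicator_levels.items():
    parent = info["supports"] or "organizational mission"
    print(f"{name} ({info['level']}) -> supports {parent}")
```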
Three kinds of measures describe where in the system an indicator dwells: process, outcome, and balancing. Process indicators measure some aspect of a step within a process. This might be the number of calls within a telemarketing center or the time to see a physician in an emergency department. Outcome measures live downstream from process measures and are the end result of the overall process. Percent mortality is an example of an outcome measure in that it’s the outcome of many different processes within the hospital system. Keep in mind that an outcome indicator for one process may become a process measure in a subsequent process.
Balancing measures are used to guard against suboptimization by checking that gains in the main project indicators aren’t made at the expense of other processes not currently involved in the improvement. A classic example of this in health care is a project that reduces the average length of stay for congestive heart failure patients, only to find that the percentage of readmissions within 30 days skyrockets. Ideally, a project will have several process and outcome indicators and at least one balancing measure; be wary of projects without one. If a health care organization embraces systems thinking as described by Deming, it’s inevitable that changes made and verified in project indicators will affect measures outside the scope of the current project.
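For the congestive heart failure example, a project’s measure set could be grouped by kind, with a simple guard that at least one balancing measure is present; the indicator names are illustrative.

```python
# Hypothetical measure set for a length-of-stay improvement project
project_measures = {
    "process":   ["Percent of CHF patients with documented discharge instructions"],
    "outcome":   ["Average length of stay for CHF patients"],
    "balancing": ["Percent of CHF readmissions within 30 days"],
}

if not project_measures.get("balancing"):
    raise ValueError("Project has no balancing measure; risk of suboptimization")
```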
At the heart of any form documenting an indicator is the operational definition. An operational definition, according to Lloyd (2004:72), is “a description, in quantifiable terms, of what to measure and the specific steps needed to measure it consistently.” One of the benefits of a good operational definition is that it removes anxiety from those involved by clearly laying out what is and what isn’t included. All the inclusions and exclusions are spelled out, along with criteria that help the data collector make tough calls. When measuring falls, what defines a fall? Is it both knees on the floor, or just an awkward stumble? What about falls where a nurse assists the patient to the floor? When measuring mortality, will in-house hospice deaths be included in the numerator? An operational definition is effective when ten people can independently observe the same phenomenon and all derive the same result. The rule is to be excessively detailed: a disaster when collecting an indicator is having to throw out data because an obscure aspect surfaces that should have been settled when the operational definition was created. Taking the time to create a solid operational definition is time well spent.
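To show how explicit an operational definition must be, the inclusion and exclusion rules can even be written out as executable logic. The rules below are invented for illustration, not a recommended definition of a fall; the point is that every tough call is decided in advance.

```python
def counts_as_fall(patient_on_floor: bool, assisted_to_floor: bool,
                   during_inpatient_stay: bool) -> bool:
    """Illustrative inclusion/exclusion rules for counting a 'fall'.

    These choices (e.g., counting nurse-assisted lowering to the floor)
    are hypothetical; a real definition would settle each one explicitly.
    """
    if not during_inpatient_stay:
        return False          # exclusion: events outside the inpatient stay
    if assisted_to_floor:
        return True           # inclusion: assisted lowering still counts here
    return patient_on_floor   # inclusion: any unassisted descent to the floor

print(counts_as_fall(patient_on_floor=True, assisted_to_floor=False,
                     during_inpatient_stay=True))   # True
```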
It’s important to document the exact location of the data. If the data will be collected on site, which floor will be the site of the collection? If data are to be extracted from information systems, which particular screen, module, or report is to be run? This step supports consistency of measurement, and its lack of ambiguity reduces anxiety in collectors.
If the indicator is the output of a report, document who will run the report and, if needed, who will enter the data into the database. If the indicator is collected on paper, then document who will be involved in collecting the forms and where they will be stored. Inevitably, without delegating and documenting these responsibilities, all involved will conclude that someone else is collecting and entering the data.
If there’s one lesson learned when defining and measuring within the health care context, it’s that indicators die slow and lonely deaths. Health care leadership is often hesitant to stop collecting an indicator. It’s important to agree on when to stop because it shows line staff that something will be removed from their to-do lists at the conclusion of the project. I suggest allowing three to six months after the project ends to measure for sustainability. This doesn’t mean you can’t do spot measures occasionally, but most indicators should have a clearly defined shelf life. Keep in mind that some measures, such as percent mortality within a hospital, may be so mission-critical that they will be measured indefinitely. Even in such cases, it’s important to document that expectation.
Before collecting data, take the time to be explicit about what’s being measured, and avoid rework and frustration. How else will we know that a change is an improvement?