Author: Bankowitz, Richard
Date published: September 1, 2010
How can hospitals use data to improve the quality of care provided in their facilities? The availability of data does not, in and of itself, help to prevent events such as healthcare-associated infections. The first step toward making data actionable is to determine how the data will be used.
Data for Judgment, or Action?
It is important to understand whether the data are to be used only for judgment, or whether they will also be used to find opportunities for improvement. Using data for judgment involves asking a rather narrow question: "Is Provider/Department/Hospital A significantly better than Provider/Department/Hospital B?" Using the data for improvement asks a broader question: "Can areas for improvement be found?" The type of approach taken will make a big difference in both practitioner acceptance of the data and the overall value of improvement initiatives.
Using data exclusively for judgment can lead to missed opportunities for real improvement. It requires minimizing the number of false positives: instances in which differences between practitioners are identified when in fact no differences exist. Finding true differences is often difficult because it involves looking for small differences in practice patterns that overlap to a great extent. Often, statistical inference testing is needed to draw conclusions. Ultimately, when differences are found and corrected, the overall impact on the organization is relatively small.
When using data for improvement, the task isn't to determine fine nuances of differences between practitioners. Instead, the question should be: "If one group is achieving a high level of performance, why can't everyone in the organization do so?" Here, false positives are not an issue. In fact, just as with any screening exam, some false positives can be tolerated as long as opportunities aren't missed.
In practical applications, it is impossible to minimize the number of both false positive and false negative test results at the same time. A trade-off is required. As a result, hospital administrators must be very clear in their conversations with practitioners that they are not in a "data for judgment" mindset. Some suspicion is understandable and warranted; the word "evaluation" implies a judgment. A small aspect of the Joint Commission's Ongoing Professional Practice Evaluation (OPPE) standards, for example, is to identify true performance outliers (though by definition these are few and far between). But if this is the only use for OPPE, an organization will miss out on a potentially large opportunity to make real strides in exploring special cause variation from a variety of viewpoints. The goal should be to identify opportunities for improvement and "shift the curve," not merely at the individual practitioner or departmental level, but at the system level.
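The trade-off described above can be made concrete with a minimal sketch. The data, threshold values, and the `error_counts` helper below are hypothetical illustrations, not anything from the article: moving a screening threshold in one direction reduces false positives but increases false negatives, and vice versa.

```python
# Minimal sketch (hypothetical data): shifting a screening threshold
# trades false positives against false negatives -- both cannot be
# minimized at the same time.

def classify(rates, threshold):
    """Flag a practitioner as an 'outlier' if the complication rate
    exceeds the threshold."""
    return [r > threshold for r in rates]

def error_counts(rates, truly_outlier, threshold):
    """Count false positives (flagged but not a true outlier) and
    false negatives (true outlier but not flagged)."""
    flagged = classify(rates, threshold)
    fp = sum(f and not t for f, t in zip(flagged, truly_outlier))
    fn = sum(t and not f for f, t in zip(flagged, truly_outlier))
    return fp, fn

# Hypothetical complication rates; True marks a genuine outlier.
rates         = [0.02, 0.03, 0.04, 0.06, 0.09, 0.12]
truly_outlier = [False, False, False, False, True, True]

# A strict threshold avoids false positives but misses a true outlier;
# a loose threshold catches every outlier at the cost of a false positive.
strict = error_counts(rates, truly_outlier, threshold=0.10)
loose  = error_counts(rates, truly_outlier, threshold=0.05)
print(strict, loose)  # (0, 1) (1, 0)
```

For improvement work, the looser threshold is the sensible choice: as the article notes, some false positives can be tolerated so long as real opportunities are not missed.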
What Is 'Actionable' Information?
First, it is important to note that information will be actionable if it has a set of discrete characteristics; perfection is not one of these characteristics. For each of the characteristics that follow, the aim is to strike a balance between perfection and data that are good enough to accomplish the task at hand.
Timely. Reports that describe year-old outcomes are not very useful for improvement purposes. Yet "real-time" data are not necessarily the goal either. Even with a sophisticated electronic health record, final diagnoses are not usually assigned until discharge. Physicians don't change practice patterns all that quickly, so dismissing data barely a month old as "irrelevant" also is not reasonable. Again, the concept is to strike a balance.
Accurate. Reports riddled with mistakes will serve only to annoy, or to "generate heat rather than light." A best practice might be to "vet" a series of reports with a group of engaged physicians before a mass dissemination.
Relevant. Nothing will generate "heat" more quickly than cramming a report with irrelevant metrics. For example, displaying perinatal harm metrics to a cardiologist makes no sense. However, clinical leaders should remember that it is their role to work collaboratively with the hospital's analytic staff to determine a set of metrics that is relevant and practical.
Reconciled. When information is reconciled, common definitions are employed; for example, variable cost will be defined in the same manner for all reports. Nothing diminishes credibility more than a pair of reports showing costs to be simultaneously too high and too low because the "cost" metric on each is defined slightly differently.
Effectively presented. Information should be displayed in a way that facilitates understanding. Analysts composing the reports typically forget that not everyone has seen the information a thousand times. Others may find it difficult to get past confusing jargon, abbreviations, and a busy display of information.
Placed in context. Too often, data are displayed without the proper context needed to give them meaning. For example, what does a single mortality statistic convey? Isn't it much more useful to see a trend of data points? If an organization is concerned with exploring length of stay, isn't it necessary to simultaneously understand the readmission rate, the "observation" stay rate, and the severity of illness? Such "balancing" measures are often absent or are placed so far away from the metric in question as to be useless.
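The "placed in context" principle above can be sketched in code. The report structure, field names, and all figures below are hypothetical illustrations: the point is simply that a metric such as length of stay is displayed in the same row as its balancing measures, so a single number is never read in isolation.

```python
# Minimal sketch (hypothetical figures): a metric presented alongside
# its balancing measures, rather than as a lone statistic.

from dataclasses import dataclass

@dataclass
class LengthOfStayReport:
    month: str
    avg_los_days: float      # the metric under review
    readmit_rate: float      # balancing: did shorter stays cause bounce-backs?
    observation_rate: float  # balancing: were admissions shifted to observation?
    severity_index: float    # balancing: case-mix severity

# A short trend, not a single data point.
trend = [
    LengthOfStayReport("Jan", 4.8, 0.11, 0.06, 1.02),
    LengthOfStayReport("Feb", 4.5, 0.12, 0.07, 1.01),
    LengthOfStayReport("Mar", 4.2, 0.15, 0.09, 0.99),
]

# A falling LOS looks like improvement until the rising readmission and
# observation rates in the same rows are read alongside it.
for row in trend:
    print(f"{row.month}: LOS {row.avg_los_days}d, "
          f"readmit {row.readmit_rate:.0%}, obs {row.observation_rate:.0%}")
```

Keeping the balancing measures in the same structure, and on the same display, is what prevents them from being "placed so far away from the metric in question as to be useless."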
The Power of a Physician Champion
The importance of a physician champion as an effective component of the improvement process cannot be overestimated. The champion's role is simple: to openly and constructively confront and defuse the inevitable complaints and dismissals that arise during the journey. That is, the champion must not let others derail the organization's progress toward acceptance and true improvement. It is unlikely that the champion will be able to convert every single physician to acceptance of measurement. The champion, however, should not sit idly by as other clinicians, often senior clinicians, attempt to stall the process with a well-worn litany of complaints. Tact, diplomacy, and a high degree of "emotional intelligence" are prerequisites for the role.
Using Data to Accelerate Improvement
Having an operational OPPE program has become a given: It is a Joint Commission standard for accreditation. Having an effective OPPE program is another matter. Making an OPPE program truly effective requires framing it within the context of using information for improvement, rather than judgment, and using "actionable" information to improve the lives of patients and staff alike. More important, having a process and tools to effectively gain practitioner buy-in can and should lead to an organizational culture where physician performance reporting is part of a larger, transparent quality improvement process, one where the organization can focus on delivering the highest level of care possible, rather than engaging in nonproductive "grief" sessions.
By recognizing the inherent challenges of coping with change, providing "actionable information," and correctly framing the exercise in the context of institutional improvement, organizations will be prepared to deal with the roadblocks that will inevitably occur along the path toward improved performance and outcomes, and will be able to accelerate performance improvement.
Richard Bankowitz, MD, is enterprisewide chief medical officer, Premier healthcare alliance, Charlotte, N.C. (Richard_Bankowitz@PremierInc.com).