The complexity of diagnostic testing makes it inherently prone to error. Consequently, clinical laboratorians are always looking for better ways to address these errors and reduce patient safety risks. Performance scorecards show promise as a method of examining our complex processes in a systematic fashion, both to identify areas for improvement and to provide actionable data to improve the quality of patient care. They also help lab managers define accountability and encourage personal investment from their staff in the error reduction process.

What are performance scorecards and why should you use them?

Scorecards translate the goals of your institution into concrete terms, enabling a consistent approach to using data to manage performance. Scorecards also help the lab clearly measure progress toward a goal, often using simple visualizations such as arrows, numbers, or color-coding. Visualizing progress in this way motivates staff and makes the meaning of success more tangible.

Scorecards force accountability and encourage standardized performance across groups. They also encourage good management by making it possible to monitor many performance indicators in a large organization. A well-designed scorecard will make it easier to see how a process is performing overall and facilitate drilling down into the layers of data when outliers are identified.

Building an effective scorecard

Scorecards can be a huge asset to a lab’s arsenal of error reduction tools; however, if poorly constructed, they are likely to become an exercise in futility. Here are a few tips to consider:

Measure what matters

Select metrics that measure progress toward a particular goal and that will guide decision-making. It's not productive to report something just for the sake of reporting it—a real temptation as technology offers more and more data. We should always ask ourselves whether tracking a particular metric will inform our decisions. To avoid adding meaningless metrics to your scorecard, start by defining the strategic goals and objectives of your institution, then select metrics that are aligned and ultimately serve to gauge progress toward the goal.

Only include actionable data

A good metric is one that will spur clear and timely action when necessary based on changes in performance. In other words, it provides you with actionable information. When evaluating if a metric is actionable, ask yourself “what possible action(s) could I take if the metric shows underperformance?” If you cannot think of at least one answer, that metric doesn’t belong in your scorecard.

It is also critical to consider how long it will take to collect, analyze, and review data for a metric. If too much time passes between data collection and review, you may be unable to gather all of the relevant information needed to thoroughly investigate and formulate an effective action plan.

Keep it simple and manageable

To avoid drowning in data, start with three to five meaningful metrics. Fully integrate each metric into the review process and feedback loop. Don't overextend yourself with too many metrics, or you won't be able to maintain the appropriate level of scrutiny or review and act on findings in a timely manner. With so much data available, it's easy to lose focus, become distracted, or stray from the defined strategic goals.

Set realistic performance targets

When designing your scorecard, it is important to set a rigorous but achievable performance target for each metric. This target defines the performance needed to achieve the overall desired outcome. Setting challenging performance targets can be inspiring and leads to stronger performance, but be cautious of setting unrealistic goals because they will have the opposite effect. To help design realistic targets, use historical data for your institution or external sources of performance data from comparable institutions.
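Combining targets with the simple color-coding described earlier can be automated. The sketch below is a hypothetical illustration, not a prescribed implementation: the metric names, target values, and warning margin are invented for the example.

```python
# Hypothetical sketch: flag each scorecard metric against its target.
# Metric names and target values are illustrative only.

METRIC_TARGETS = {
    "specimen_rejection_rate_pct": 1.0,  # lower is better
    "corrected_report_rate_pct": 0.5,    # lower is better
}

def status(value: float, target: float, warn_margin: float = 0.2) -> str:
    """Return a color-coded status for a lower-is-better metric."""
    if value <= target:
        return "green"                       # meeting target
    if value <= target * (1 + warn_margin):
        return "yellow"                      # near target; watch closely
    return "red"                             # underperforming; investigate

monthly_values = {
    "specimen_rejection_rate_pct": 1.4,  # above target and margin -> red
    "corrected_report_rate_pct": 0.4,    # at or below target -> green
}

for metric, value in monthly_values.items():
    print(metric, status(value, METRIC_TARGETS[metric]))
```

The warning band gives staff a chance to act before a metric turns red, which supports the "rigorous but achievable" framing above.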

Close the loop

The final step to consider when building a performance scorecard is defining who will be responsible for each part, including thorough and timely data collection, drilling down to investigate outliers, implementing corrective actions, and reviewing the effectiveness of actions taken.

Roles and expectations should be agreed upon from the inception of the scorecard to ensure that all problems are being addressed in a timely fashion. Figuring out assignments early and putting the appropriate lines of communication in place will make closing the loop after each scorecard review a seamless process. It will also provide a better experience for staff by helping them understand their role in the quality improvement initiative and own their contribution to the process.

Our experience with scorecards

We have used performance scorecards to evaluate the most frequent reasons behind test order cancellations and result corrections due to technical errors. In both cases, the scorecards provided vital information enabling us to drill down and identify the specific causes of the errors.

We designed our first scorecard to capture the reasons that tests are canceled and their distribution across hospital wards (e.g., pediatric units, general medicine units, intensive care units). After a few months of tracking this data, it was clear that there was a sustained increase in the number of test orders canceled for neonatal intensive care unit (NICU) patients because of specimen integrity issues. We implemented a second, more specific iteration of this scorecard to help us identify which integrity issues were responsible for the most cancellations. The most common causes were clotted samples and insufficient volume for testing.

This prompted us to develop a separate scorecard, reviewed monthly by the phlebotomy manager, to track the total number of draws performed by each phlebotomist compared with the number of specimens that were clotted or of insufficient quantity. This revealed a large degree of variability among NICU phlebotomists and the need to further standardize processes. Ultimately, the root causes of the deviations in performance were inconsistent technique among newly hired phlebotomists and the use of heel warmers.
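The per-phlebotomist comparison described above boils down to a rejection rate per person. The sketch below is hypothetical: the draw log structure and the numbers are invented for illustration, not actual NICU data.

```python
# Hypothetical sketch: compute per-phlebotomist rejection rates from a draw log.
# Field values are illustrative only.
from collections import defaultdict

draws = [
    # (phlebotomist, specimen_acceptable)
    ("A", True), ("A", True), ("A", False),   # 1 clotted/QNS out of 3
    ("B", True), ("B", True), ("B", True),
    ("B", True), ("B", True), ("B", False),   # 1 out of 6
]

totals = defaultdict(int)
rejected = defaultdict(int)
for tech, acceptable in draws:
    totals[tech] += 1
    if not acceptable:
        rejected[tech] += 1

# A wide spread in rates points to technique variability worth standardizing.
rates = {tech: rejected[tech] / totals[tech] for tech in totals}
print(rates)
```

A review like this only flags *who* varies; identifying *why* (technique, training, equipment such as heel warmers) still requires the drill-down investigation described in the text.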

In our second application, we monitored corrected laboratory results. Our data showed a sustained increase in the number of corrected reports in the months following the installation of new instrumentation and an upgrade of our laboratory information system (LIS). The lack of improvement over time suggested that there were issues at play other than unfamiliarity with the new equipment and computer systems (See Table).

We developed a scorecard that tracked the total number of technical errors requiring a correction to the initial result reported, the distribution of each type of error (e.g., dilutions, manual entry errors), the lab tech responsible for each error, and the corrective action taken, if necessary. We also tracked whether the difference between the initial and corrected result was clinically significant and whether it caused patient harm.
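Summarizing correction records along the dimensions described above (error type, tech, clinical significance) is a simple tally. The sketch below is hypothetical: the record fields and values are invented for illustration; a real scorecard would pull these records from the LIS.

```python
# Hypothetical sketch: summarize corrected-result records by error type and tech.
# Record contents are illustrative only.
from collections import Counter

corrections = [
    {"error_type": "dilution", "tech": "T1", "clinically_significant": False},
    {"error_type": "manual_entry", "tech": "T2", "clinically_significant": True},
    {"error_type": "manual_entry", "tech": "T2", "clinically_significant": False},
]

by_type = Counter(rec["error_type"] for rec in corrections)
by_tech = Counter(rec["tech"] for rec in corrections)
significant = sum(rec["clinically_significant"] for rec in corrections)

# Errors concentrated in one tech suggest targeted education; errors spread
# across many techs suggest a system issue (e.g., an LIS or process change).
print(by_type, by_tech, significant)
```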

This scorecard helped us quickly identify problems and take action, including LIS changes, general education (if numerous techs were making the error), and in some cases targeted education (if a single tech was repeating the same type of error).

Clinical laboratorians should strongly consider using performance scorecards to continually measure the quality and safety of the patient care we are providing. We have seen the benefits of this approach to error reduction and highly recommend it.

Jaime Noguez, PhD, DABCC, is the associate director of clinical chemistry and toxicology at University Hospitals Cleveland Medical Center and an assistant professor of pathology at Case Western Reserve University in Cleveland. Email: jaime.noguez@UHhospitals.org.

Julianne Gallo, MT (ASCP), is the technical coordinator of laboratory education and audits and the pathology safety coach at University Hospitals of Cleveland. Email: julianne.gallo@UHhospitals.org.


CLN's Patient Safety Focus is supported by ARUP Laboratories
