Patient Safety Focus

Lost Versus Misplaced Samples
Improving Lab Quality with Pull Measures

By Bonnie Messinger and Peggy Ahlin

Lab quality measures are important on many levels. The overall goal of such measures is to help labs design error-free processes and systems. But a second, and possibly more important, goal is to encourage lab staff to adopt patient-safe behaviors. Clearly, lab managers can shape employee behavior by how they measure work output. Work output that is evaluated against continually improving standards, instead of arbitrary goals, encourages creative thinking and affirms desired behaviors while discouraging a “make-the-numbers-or-else” culture.

Too often in our healthcare system today, measures of quality are “push” measures of calamity rather than “pull” measures of potential. In fact, most labs track one of the most obvious push measures: lost sample rates. Although widely used as a measure of lab quality, these data provide little actionable information because lost sample rates do not reveal anything about how or where samples are lost. Obviously, the most appropriate action would be to prevent the loss before it occurs. However, unless lab staff know where and how a sample was lost, any adjustment to prevent a future lost sample is just “tampering”: making an educated guess about appropriate interventions without sufficient knowledge of the cause or causes. Although sometimes helpful, educated guesses can also be harmful. Furthermore, if the situation improves, it’s impossible to know for certain what caused the improvement: the intervention or random chance.

What Happened?

How should labs monitor lost sample rates? It requires another way of looking at what counts as a lost sample. If lab staff can determine where in the lab’s processes a sample was lost, the found sample can be considered “misplaced” rather than lost. In contrast to lost sample data, data on misplaced samples represent an action gold mine. Let’s say an investigation found that a small tube slipped through the cracks in a tube rack, fell to the bottom of the freezer where it froze to the bottom of a long-term storage box, and remained undetected for 24 hours despite a massive hunt. This information gives lab staff the ability to take appropriate corrective and preventive action. Those actions could include: controlling for odd-sized containers; developing standards for purchasing tube racks with solid bases and sides; designating storage areas only for samples; and including additional areas in a search checklist for lost samples. These error-proofing possibilities are concrete and highly actionable.

Even though lost sample data by itself lacks actionable information, it does provide some opportunities for learning. Knowing where the sample was last touched provides a starting point for improving tracking systems. Furthermore, studying common variables in the gap between the last-documented touch and the intended destination narrows the scope of the improvement effort. Tracking the particulars of the specimen submission provides information specific to special causes, such as odd-sized containers and requests for special handling. It also contributes to employee awareness of the effect a misplaced sample has on the patient.

The Real Picture

Instead of simply tracking lost sample rates, labs can create a flow chart that provides a much better picture of the lost sample landscape over time. Figure 1 presents a map of sample handling, from how the sample arrives in the lab to the disposition of the sample. The numbers within the process boxes represent samples lost after entering the process and before exiting it, and the numbers between processes represent losses in the gap between the last documented touch and the intended destination. Annotations, such as “odd-size,” help identify sample-specific trends. As data accumulate, patterns begin to emerge and lab staff can direct their investigations to the most appropriate points in the process.

Figure 1
Sample Processing Flow Chart

Case Study: Lost Samples

While misplaced sample data are actionable, determining the most appropriate corrective action requires a deeper understanding of what the data mean. The following hypothetical scenario demonstrates the importance of understanding what to measure and how to interpret the measures.

The staff at Superlative Lab has anecdotal evidence that lost sample counts are rising. Wishing to understand how and why samples are lost, they graph the number of lost samples each month for the last year (Figure 2). Based on this chart, lab staff conclude that there has been a serious decline in the lab’s quality.

Figure 2
Lost Samples per Month

This graph of lost specimens per month seems to indicate a dramatic decline in the lab’s quality.

It is true that every sample lost represents a poor outcome for a patient, so Superlative’s lab staff is correct in concluding that there have been more poor patient outcomes over the course of the year. However, if improvement is their goal, using this graph to understand where to start and what to change would merely be tampering because the graph does not give insight into causes.

To determine a baseline before starting the improvement effort, a better choice would be to track the daily lost sample rate (Figure 3). The intent of this analysis is to understand the cause of lost samples. The same data plotted daily as lost samples per test performed shows that performance has in fact been stable and that the peak at day 301 is most likely a “special cause” event.

Figure 3
Lost Samples per Day 

A graph of the same data presented in Figure 2, but plotted daily as lost samples per test performed, gives a more accurate picture of the lab’s quality.
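The difference between Figures 2 and 3 comes down to normalization: dividing lost-sample counts by test volume. A minimal Python sketch with made-up numbers (none of these counts or volumes come from the article) shows how a rising count can hide a perfectly stable rate when test volume grows at the same pace:

```python
# Illustrative monthly data (not from the article): lost-sample counts
# alongside total tests performed.
lost_per_month = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]
tests_per_month = [10_000, 11_000, 12_000, 13_000, 14_000, 15_000,
                   16_000, 17_000, 18_000, 19_000, 20_000, 21_000]

# Raw counts more than double over the year, which is what a
# Figure 2-style chart displays...
print(lost_per_month[-1] / lost_per_month[0])  # → 2.1

# ...but the rate per test performed never moves: 0.1% every month.
rates = [lost / tests for lost, tests in zip(lost_per_month, tests_per_month)]
print(all(abs(r - 0.001) < 1e-12 for r in rates))  # → True
```

Raw counts answer “how many patients were affected,” which matters; rates answer “is the process getting worse,” which is the question an improvement effort has to start from.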

With these data, lab management sets out to identify the causes of lost samples and to recommend changes to reduce that number. They form a task force to collect data on misplaced samples, that is, samples reported as lost and later found. The group gathers information on: the process failures that resulted in a misplaced sample; the type of container in which the sample arrived; the test ordered for the misplaced sample; and where the sample was found. They plot the information by process for one month using a Pareto chart, a bar chart of categories sorted by frequency, overlaid with a line graph of the cumulative percentage. They find that 84% of the misplaced samples fall within the pre-analytic processes: specimen delivery; specimen processing; and exception processing (Figure 4). The task force concludes that the problem lies in the specimen receiving department and recommends a complete overhaul of the processes in that area.

Figure 4
Misplaced Samples by Process

A Pareto chart shows the contribution of various lab processes to misplaced samples.
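The task force’s Pareto analysis can be sketched in a few lines of Python. The process names mirror the article’s categories, but the counts are invented so that the three pre-analytic processes together account for 84% of the total:

```python
# Illustrative counts of misplaced samples by process (not the article's
# data), constructed so the top three pre-analytic processes sum to 84%.
counts = {
    "specimen delivery": 38,
    "specimen processing": 26,
    "exception processing": 20,
    "testing": 10,
    "storage/disposition": 6,
}

# A Pareto analysis sorts categories by frequency; the running cumulative
# percentage is the line graph drawn over the bars on a Pareto chart.
total = sum(counts.values())
ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
for process, n in ordered:
    cumulative += n
    print(f"{process:22s} {n:3d}  {100 * cumulative / total:5.1f}%")
```

With these invented counts, the cumulative line reaches 84.0% at the third category, which is exactly the reading that led the task force toward the pre-analytic processes.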

A Better Attack

But has the task force uncovered the real problem? By starting with a Pareto chart, the group failed to see that most of the specimen delivery errors were due to a single, special-cause event: 15 samples were mistakenly delivered to a lab section that is not open 24 hours/day and were found the following morning. By that time, the samples had been reported as lost and were counted as misplaced samples in the tracking parameters.

A better plan would have been to start with the control chart shown in Figure 5. By looking at the data for one month, the task force would have seen that a single, special-cause event explained 30% of the misplaced samples. After identifying the cause of that event, the task force could then recommend and implement an appropriate plan to prevent deliveries to closed labs.

Figure 5
Misplaced Samples by Day 

A control chart shows how the data points are distributed between the upper (UCL) and lower (LCL) control limits (the natural limits of the process). The chart shows that a single, special-cause event on day 8 explains 30% of the misplaced specimens.

But their work is not finished. The task force now replots the data without the off-hours deliveries. Their new chart shows that the process of managing samples is in control: every data point falls between the upper and lower control limits, set three standard deviations above and below the mean, a range expected to contain 99.73% of the points for normally distributed data. A process with all of its data points between these two lines is considered stable, and its performance is predictable. This means that the remaining variation the task force sees is the result of “common cause variation,” faults that are common to all processes in the whole sample processing system.
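The control-limit logic described above can be sketched directly. The daily counts below are invented to mimic the scenario: a stable process with one special-cause spike on day 8. (Production control charts often estimate sigma from a moving range rather than the raw standard deviation; this sketch follows the simpler mean-plus-or-minus-three-standard-deviations description in the text.)

```python
import statistics

# Illustrative daily misplaced-sample counts for one month (not the
# article's data): a stable process plus one special-cause spike on day 8.
daily = [2, 3, 1, 2, 4, 2, 3, 15, 2, 1, 3, 2, 2, 4, 3,
         1, 2, 3, 2, 2, 1, 4, 3, 2, 2, 3, 1, 2, 3, 2]

def control_limits(data):
    """Lower and upper limits: mean minus/plus three standard deviations."""
    mean = statistics.mean(data)
    sd = statistics.pstdev(data)
    return mean - 3 * sd, mean + 3 * sd

lcl, ucl = control_limits(daily)
special = [day for day, n in enumerate(daily, start=1) if not lcl <= n <= ucl]
print(special)  # → [8]  (the spike is the only out-of-control point)

# Removing the special cause and recomputing the limits shows the
# remaining process is stable: every point falls inside the new limits.
common = [n for n in daily if n != 15]
lcl2, ucl2 = control_limits(common)
assert all(lcl2 <= n <= ucl2 for n in common)
```

Note how the spike inflates the original limits; only after it is removed do the limits describe the routine, common-cause behavior of the process.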

Because the task force has established that the process is in control, it is safe for them to consider systemic changes without the risks associated with tampering. When the special cause variation is removed, the Pareto chart (not shown) reveals that there are no clear “winners” by process category. This analysis confirms that the remaining variation in the system is common cause variation.

The task force next looks at another data set: sample container types. Again, using a Pareto chart (not shown), they see that samples arriving in odd-sized containers rise to the special-cause level. The task force interviews the employees who handle these samples and discovers that the samples may not be recognized as patient samples because of the unusual containers and therefore are set aside. The team develops a control measure for odd-sized containers and collects data again. They compare data for lost samples against the data for misplaced samples and find that tests that have a certain destination code in the laboratory information system appear more frequently. Upon further investigation, they discover an error in the destination code for these tests. The error is quickly corrected, and the next time the team looks at the lost sample rates, they have dropped significantly.

The final variable the task force identifies is the location where the misplaced samples are found. They evaluate this variable using a scatter plot (Figure 6). When the team plots workload for each section against misplaced sample rates for that section, they find that the lab section with the lowest workload (100 workload units per month) has the highest misplaced sample rate (one misplaced sample per 100 workload units). At first glance, it would appear that this lab section needs to take corrective action; however, these data can be very misleading unless they are scrutinized further. One especially astute member of the task force points out that the smallest possible nonzero loss is one sample; therefore, the lab sections with lower volumes will always show higher rates of sample loss.

Figure 6
Misplaced Sample Rate & Workload

This graph plots the rate of misplaced specimens against workload for each lab section.
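The astute task-force member’s point is simple arithmetic: the smallest nonzero loss is one sample, so a section’s minimum observable rate is 1 divided by its workload. A tiny sketch with illustrative workloads makes the floor visible:

```python
# Illustrative workloads (not the article's data). Because you cannot
# lose a fraction of a sample, the minimum observable nonzero rate for
# a section is 1 / workload, so low-volume sections look worse even
# when they perform no worse.
section_workload = {
    "low-volume section": 100,      # one misplaced sample -> 1.0%
    "high-volume section": 10_000,  # one misplaced sample -> 0.01%
}

for section, workload in section_workload.items():
    floor = 1 / workload
    print(f"{section}: minimum nonzero rate = {floor:.4%}")
```

A single misplaced sample puts the low-volume section at a rate 100 times higher than the high-volume section, before anything about actual performance has been said.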

The team decides that a better measure of the impact of workload on the number of misplaced samples would be to compare the percent increase in misplaced samples with the percent increase in workload. Plotting the week-to-week percent increase in workload against the percent increase in misplaced samples for each section shows the team that workload alone is not a detriment to quality work (chart not shown).
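The team’s week-to-week comparison amounts to computing percent changes over consecutive weeks for both series. All numbers below are illustrative, not the article’s data:

```python
# Illustrative weekly data for one lab section (not the article's):
# compare the percent change in workload against the percent change
# in misplaced samples to see whether workload alone drives losses.
workload = [1000, 1100, 1250, 1400]   # weekly workload units
misplaced = [4, 4, 5, 5]              # weekly misplaced samples

def pct_changes(series):
    """Week-over-week percent change between consecutive values."""
    return [(b - a) / a * 100 for a, b in zip(series, series[1:])]

workload_change = pct_changes(workload)    # [10.0, 13.6..., 12.0]
misplaced_change = pct_changes(misplaced)  # [0.0, 25.0, 0.0]

# Workload rose steadily every week, but misplaced samples did not
# track it week for week, which is the pattern that would support the
# conclusion that workload alone does not degrade quality.
for w, m in zip(workload_change, misplaced_change):
    print(f"workload {w:+.1f}%  misplaced {m:+.1f}%")
```

A section where the two percent-change series rise and fall together is the positive-correlation case the task force singles out for interviews.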

The task force decides to interview the lab sections with a positive correlation between the two measures and finds that in two cases the high volume of tests was a special-cause event. In the phlebotomy and immunology sections, a flu outbreak caused both a reduction in workforce and an unprecedented rise in the demand for services. In the three remaining cases of positive correlation, the spike in misplaced samples was traced to poor planning. Because the molecular, flow cytometry, and hematopathology sections typically process low volumes of specimens, they did not hire dedicated processing staff. Instead, technical staff performed both processing and technical duties. A failure by management to plan for the expected annual increase in volume for these sections required the technical employees to give processing tasks a lower priority in order to maintain high technical quality as the workload gradually increased. This conflict of goals resulted in the poor overall processing performance observed by the team.

After each of the above discoveries and subsequent changes, the now savvy task force re-evaluates the lost sample data. They find that, indeed, discovering the causes of misplaced samples resulted in a corresponding decrease in the number of lost samples (Figure 7). By addressing the causes of misplaced samples, the task force has improved the lost sample rate, which supports their assumption that the causes of misplaced and lost samples are the same. The team continues to measure and take appropriate corrective action with each iteration. Their intelligent assessment of the data, combined with creativity, results in progressively lower lost sample rates and therefore higher quality service for patients.

Figure 7
Lost Sample Rate Over Time 

With each appropriate corrective action, the lab’s lost sample rate decreases.

Changing Push to Pull

This hypothetical scenario is played out time and again in labs across the country. Sadly, the emphasis on measuring quality tends to focus on push measures as the definitive indicator of quality performance. But putting equal emphasis on the benefit that pull measures provide can help identify appropriate corrective and preventive actions. In the hypothetical example described here, a combined approach to analyzing lost and misplaced samples creates a highly effective strategy for discovering actionable improvements and assessing the effects of those improvements. Now that’s a great example of how labs can assess and improve quality when it comes to patient safety!


Bonnie Messinger is quality manager and Peggy Ahlin is director of quality and compliance at ARUP Laboratories, Inc., Salt Lake City, Utah.


Patient Safety Focus Editorial Board

Michael Astion, MD, PhD
Department of Laboratory Medicine
University of Washington, Seattle

Peggy A. Ahlin, BS, MT(ASCP)
ARUP Laboratories
Salt Lake City, Utah

James S. Hernandez, MD, MS
Mayo Clinic Arizona
Scottsdale and Phoenix

Devery Howerton, PhD
Centers for Disease Control and Prevention
Atlanta, Ga.

Sponsored by ARUP Laboratories, Inc.