In a recent report, researchers from Johns Hopkins University School of Medicine in Baltimore made headlines when they estimated that medical error is the third leading cause of death in the U.S. (1). While patient safety remains a struggle in many areas of healthcare, laboratory medicine has been a leader in reducing error, with an estimated total error rate of 0.33%, the lowest in diagnostic medicine (2). Major advancements in automation and analytical instrumentation have helped reduce laboratory-associated errors over the last decade, but with pre-analytical errors currently accounting for up to 75% of all mistakes (3), laboratory medicine professionals must keep expanding their focus to what is happening outside of the lab.

The classic paradigm for a wider view of errors is the total testing process (TTP). Beginning with test ordering and ending with result reporting, TTP encompasses the pre-analytical, analytical, and post-analytical phases of testing (Figure 1). The pre-analytical phase includes all the steps that occur from test ordering until just before sample analysis. While the likelihood of variation in any of the three phases is not negligible, the vast majority of laboratory variation emerges from the many factors affecting laboratory specimens prior to testing. These activities include test ordering, patient preparation, and specimen collection, transportation, preparation, and storage.

Because activities in the pre-analytical phase are neither performed entirely within the clinical laboratory nor fully under the control of laboratory personnel, they are harder to monitor and improve. Consequently, labs have often focused on improving areas under their direct control while leaving pre-analytical activities to healthcare professionals who have little to no formal training in laboratory medicine.

Recognizing the magnitude of errors associated with the pre-analytical phase, regulatory agencies and laboratory medicine associations have dedicated more resources to developing focused checklists and quality guidelines. The revised international standard ISO 15189:2012, Medical Laboratories: Requirements for Quality and Competence, expands its pre-examination procedures section, requiring laboratories to address pre-collection and collection activities, specific instructions to patients and users, and sample transportation, processing, and storage (4). In addition, the International Federation for Clinical Chemistry and Laboratory Medicine (IFCC) has established a working group on Laboratory Errors and Patient Safety (WG-LEPS) with the goal of reducing laboratory medicine-associated errors. The WG-LEPS’s most promising initiative has been establishing standardized quality indicators (QIs) to help labs monitor all three phases of testing.

QIs measure how well the laboratory meets the needs and requirements of users and the quality of all operational processes. One example of a pre-analytical QI is the number of samples lost or not received, normalized to the total number of samples received. Adopting QIs to track and improve performance is essential: what a laboratory does not measure, it cannot improve. Only by monitoring performance across the TTP can labs reliably identify and manage these potential variations.
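
As a minimal illustration of how a lab might compute and act on such a QI, the Python sketch below uses made-up monthly counts and an assumed internal action threshold; neither figure is a WG-LEPS specification:

```python
# Minimal sketch: the lost/not-received QI for one reporting period.
# Counts and threshold are illustrative assumptions, not benchmarks.
samples_lost_or_not_received = 12   # hypothetical monthly count
total_samples_received = 48_500     # hypothetical monthly total

qi_percent = 100 * samples_lost_or_not_received / total_samples_received
print(f"Lost/not-received QI: {qi_percent:.3f}%")  # prints 0.025%

ACTION_THRESHOLD = 0.05  # percent; assumed internal action limit
if qi_percent > ACTION_THRESHOLD:
    print("QI exceeds threshold: investigate transport and accessioning.")
```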

Sources of Pre-Analytical Variability

There are four general categories of pre-analytical variability: test ordering; patient preparation; specimen collection; and specimen processing, transportation, and storage (5). For each category, we discuss the sources of variation, potential solutions, and appropriate QIs.

Test ordering

Test ordering, the first step of the pre-analytical process, is often referred to as the pre-pre-analytical phase. Ordering the wrong test is not only wasteful but also potentially harmful to patients. Providers order inappropriate tests for a variety of reasons, including confusion over tests with similar names (such as 25-hydroxyvitamin D versus 1,25-dihydroxyvitamin D), unnecessary duplicate orders, transcription errors during order entry, and misinterpreted verbal orders, which occur when physicians do not place test orders themselves.

To deal with problems in this phase of testing, labs must engage hospital staff to promote appropriate test utilization. This is typically done through a laboratory formulary committee or test utilization committee that draws on hospital-wide resources and influences physicians’ test ordering behavior. Institutions lacking the resources or structure to set up a formulary committee should focus on the areas with the biggest impact, such as monitoring expensive sendouts and duplicate orders. Labs must ensure that such efforts are data driven and closely monitored with QIs that track inappropriate test orders, duplicate orders, and order-entry errors (Table 1). A simple duplicate-order check is sketched below.
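
As one illustration, the sketch below flags orders that repeat the same patient and test within a lookback window. The 24-hour window, the data layout, and the example orders are assumptions made for illustration, not a utilization-committee standard:

```python
from datetime import datetime, timedelta

# Each order is (patient_id, test_code, order_time); data are made up.
orders = [
    ("P001", "TSH", datetime(2017, 3, 1, 8, 15)),
    ("P001", "TSH", datetime(2017, 3, 1, 14, 2)),   # repeat within window
    ("P002", "HBA1C", datetime(2017, 3, 1, 9, 30)),
]

WINDOW = timedelta(hours=24)  # assumed minimum retest interval

def find_duplicates(orders):
    """Return orders that repeat a patient/test pair within WINDOW."""
    last_seen = {}
    duplicates = []
    for patient, test, when in sorted(orders, key=lambda o: o[2]):
        key = (patient, test)
        if key in last_seen and when - last_seen[key] < WINDOW:
            duplicates.append((patient, test, when))
        last_seen[key] = when
    return duplicates

dups = find_duplicates(orders)
print(f"Duplicate-order QI: {len(dups)} of {len(orders)} orders "
      f"({100 * len(dups) / len(orders):.1f}%)")
```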

Patient preparation

Patient preparation is one of the most challenging categories of pre-analytical variability because it encompasses variables that typically arise before the patient arrives for sample collection. Patient preparation factors include:

Diet: Food ingestion is a significant source of pre-analytical variability. Its effect varies with the analyte and the time between meal ingestion and blood collection. For example, glucose and triglycerides increase significantly after meals high in carbohydrates and fat, respectively. An overnight fast of 10 to 14 hours prior to blood collection is optimal for minimizing this variation. However, some meals have longer-lasting effects, and particular foods should be prohibited before certain tests. For example, bananas are high in serotonin and can affect testing for 5-hydroxyindoleacetic acid excretion. Caffeine, alcohol, vegetarianism, malnutrition, and starvation also have a significant impact on commonly measured analytes. Communicating these requirements to patients is important to ensure appropriate preparation for testing and is recognized in ISO 15189:2012.

Posture/exercise: A change from lying to standing can, within 10 minutes, cause an average 9% elevation in serum concentrations of proteins and protein-bound constituents. This occurs because blood volume falls by about 10% as protein-free fluid passes from the capillaries into the tissues, concentrating the proteins left behind in the vascular space. Prolonged bed rest can also dramatically affect hematocrit, serum potassium, and protein-bound constituents. As a result, some hospitalized patients should delay certain tests until after they leave the hospital and resume normal activity.

On the opposite end of the activity spectrum, moderate and strenuous exercise derange analytes such as aspartate aminotransferase, lactate dehydrogenase, creatine kinase, and aldolase, because these enzymes are released from skeletal muscle. Patients should be instructed to avoid moderate to strenuous exercise prior to specimen collection for such analytes.

Timing of sampling

Blood concentrations of various analytes change during the course of the day. These cyclical variations can be significant, so the timing of sample collection should be strictly controlled. For example, serum iron increases by as much as 50% from morning to afternoon, and serum potassium has been reported to decline from morning to afternoon by an average of 1.1 mmol/L. Hormones such as cortisol, renin, aldosterone, and corticotropin are especially impacted by this circadian variation.

Timing of sample collection is especially critical for therapeutic drug monitoring, which requires trough levels for most drugs. Protocols must specify an ideal sampling time for each test, and the actual time of draw must be carefully documented. Draws that occur outside the specified time can be monitored as a QI (Table 1).

Strategies for Detection

Clinical labs have several tools at their disposal to detect pre-analytical errors. These include:

  1. Erroneous result flags: These are analyte concentrations that do not make physiologic sense, such as a potassium level of 20 mEq/L and calcium of 1.0 mg/dL, which is the typical pattern observed when a specimen drawn into a plasma EDTA tube is transferred to a serum tube.
  2. Critical result flags: These are results in the life-threatening range, such as a potassium of 6.5 mEq/L or a glucose of 30 mg/dL. The same values can also appear when sample processing has been significantly delayed, because ongoing cell metabolism in the unseparated specimen releases potassium and consumes glucose.
  3. Rules: A combination of otherwise normal results can strongly indicate a problem with the specimen. A good example is detection of intravenous line contamination using the rule “IF glucose > 800 mg/dL AND creatinine < 0.6 mg/dL,” among others (7); this rule is implemented in the sketch following this list.
  4. Delta checks: These help expose errors by calculating the difference between a patient’s current and previous results for certain analytes within a defined time window. If the difference exceeds an acceptable threshold, the sample is flagged for review. Delta checks are particularly useful for catching sample misidentification, but they apply only to patients with previous results and only to specific tests; a delta check also appears in the sketch below.
  5. Serum indices: Serum indices are spectrophotometric estimates of the level of interference from hemoglobin (hemolysis index), bilirubin (icterus index), and lipids and chylomicrons (lipemia index). These are the most common types of interference in clinical chemistry tests and can serve as indicators of pre-analytical errors related to inappropriate fasting, sample processing, transportation, and storage.
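
To illustrate how some of these tools might be automated, the sketch below encodes the intravenous-line contamination rule quoted in item 3 and a simple delta check as described in item 4. The delta limit, the lookback window, and the example results are illustrative assumptions; any real rule set would be validated locally:

```python
from datetime import datetime, timedelta

# One specimen's current results: analyte -> value (conventional units).
current = {"GLU": 950.0, "CREA": 0.4, "K": 4.1}  # mg/dL, mg/dL, mEq/L

def iv_contamination_rule(results):
    """The rule quoted in the text: IF glucose > 800 mg/dL AND
    creatinine < 0.6 mg/dL, suspect intravenous fluid contamination."""
    return results.get("GLU", 0.0) > 800 and results.get("CREA", 1.0) < 0.6

# Delta check parameters (assumed, not published limits).
DELTA_LIMITS = {"K": 1.0}         # flag if |change| > 1.0 mEq/L
DELTA_WINDOW = timedelta(days=3)  # usable lookback for a prior result

def delta_check(analyte, value, prior_value, prior_time, now):
    """Flag when the change from the most recent prior result within
    the window exceeds the analyte's delta limit."""
    if prior_value is None or now - prior_time > DELTA_WINDOW:
        return False  # no usable prior result; delta check not applicable
    return abs(value - prior_value) > DELTA_LIMITS.get(analyte, float("inf"))

now = datetime(2017, 3, 2, 10, 0)
if iv_contamination_rule(current):
    print("Flag: pattern suggests IV line contamination; recollect specimen.")
if delta_check("K", current["K"], prior_value=2.9,
               prior_time=datetime(2017, 3, 1, 9, 0), now=now):
    print("Flag: potassium delta exceeds limit; review for misidentification.")
```

In practice, rules like these typically run in the laboratory information system or middleware, with flagged specimens routed to a technologist for review before results are released.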

Future Perspectives

The QIs shown in Table 1 have been adapted from Plebani et al. (8) and represent an initial step toward monitoring and improving the pre-analytical phase of testing. The WG-LEPS also promotes anonymous sharing of QI data by clinical labs worldwide through the IFCC WG-LEPS website. The working group’s goal is to establish benchmarks so that labs can compare their performance on these metrics with that of similar-sized labs, identifying areas that require more attention and resources for improvement.

Reporting specific QIs might one day become mandatory as part of an external quality assessment program. Until then, labs must be proactive in creating, collecting, and sharing QIs for the pre-analytical phase in an effort to reduce laboratory medicine’s greatest contribution to medical errors.

References

  1. Makary MA, Daniel M. Medical error: the third leading cause of death in the US. BMJ 2016;353:i2139.
  2. Carraro P, Plebani M. Errors in a stat laboratory: Types and frequencies 10 years later. Clin Chem 2007;53:1338-42.
  3. Bonini P, Plebani M, Ceriotti F, Rubboli F. Errors in laboratory medicine. Clin Chem 2002;48:691-8.
  4. International Organization for Standardization. ISO 15189:2012: Medical laboratories: Requirements for quality and competence. Geneva, Switzerland: ISO, 2012.
  5. Nichols JH. Preanalytical variation. In: Clarke W, ed. Contemporary practice in clinical chemistry. 2nd ed. Washington, DC: AACC Press, 2010:1-12.
  6. Young DS. Effects of preanalytical variables on clinical laboratory tests. 3rd ed. Washington, DC: AACC Press, 2007.
  7. Hernandez J. The paradox of learning from errors. Clinical Laboratory News 2011;37(4):15.
  8. Plebani M, Sciacovelli L, Aita A, Chiozza ML. Harmonization of pre-analytical quality indicators. Biochem Med 2014;24:105-13.

Joe El-Khoury, PhD, DABCC, FACB is co-director of clinical chemistry at Yale New Haven Health and assistant professor of laboratory medicine at Yale University in New Haven, Connecticut. Email: [email protected]

Mahboobe Ghaedi, PhD is an associate research scientist in the departments of anesthesia and biomedical engineering at Yale University in New Haven, Connecticut. Email: [email protected]