Sharon Geaghan CLN July 2015

Critical values are laboratory results that require immediate notification and clinical intervention to avoid or attenuate patient morbidity and mortality. Yet for decades, this communication has been delayed while the critical result is repeated. The practice of repeating all critical laboratory values before result notification to the responsible caregivers is long-standing and widespread. In the United States, a study of 340 hematology laboratories found that small hospitals repeated 83% of critical results and independent laboratories repeated 100% of critical results (1). A recent College of American Pathologists (CAP) Q-Probes study found that 61% of laboratories always repeated critical results (2).

In years past, when laboratory instrumentation was less reproducible and testing often included more manual steps, such as manual dilutions, labs performed repeat testing to confirm results and avoid reporting erroneous results. Repeat analysis also was a means to assess precision. However, current technologies in chemistry and hematology analyzers have demonstrated excellent precision across the analytic range (5).

What is the magnitude of time delay associated with repeating critical values?

A recent CAP Q-Probes study documented reporting delays of up to 17 and 21 minutes (90th percentile) caused by retesting critical values (2). The average delays in result communication due to repeat critical values in a European study ranged from 35 minutes (magnesium) to 42 minutes (sodium, calcium) (3). Another study cited median critical value reporting delays owing to repeat testing of 5 minutes for blood gases and 17 minutes for glucose (4).

What is the definition of a significant difference between initial critical value and repeat value?

In the aforementioned survey, most labs did not define a significant difference between repeat results. In the literature, a significant difference between initial and repeat values can be defined in one of several ways: greater than the limit of allowable error for the analyte, as defined by CAP or CLIA; greater than the individual laboratory’s acceptable tolerance limits for reruns; greater than biologic variation for an analyte; or a clinically significant difference, as determined by medical opinion.
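For illustration, the first of these definitions — comparing the initial/repeat difference against an allowable-error limit — can be sketched in code. The limits in the example are hypothetical placeholders, not actual CLIA or CAP values; a laboratory would substitute its own analyte-specific criteria.

```python
# Hypothetical check of whether a repeat result differs "significantly"
# from the initial critical value, using an allowable-error limit.
# The limits used below are illustrative placeholders, NOT official
# CLIA or CAP allowable-error specifications.

def is_significant_difference(initial, repeat, abs_limit=None, pct_limit=None):
    """Return True if |initial - repeat| exceeds the allowable error,
    given as an absolute limit, a percentage of the initial value,
    or (when both are supplied) the larger of the two."""
    diff = abs(initial - repeat)
    limits = []
    if abs_limit is not None:
        limits.append(abs_limit)
    if pct_limit is not None:
        limits.append(abs(initial) * pct_limit / 100.0)
    if not limits:
        raise ValueError("Provide abs_limit and/or pct_limit")
    return diff > max(limits)

# Example: a potassium pair with a placeholder allowable error of 0.5 mmol/L
print(is_significant_difference(6.8, 6.6, abs_limit=0.5))  # False: within limit
print(is_significant_difference(6.8, 7.5, abs_limit=0.5))  # True: exceeds limit
```

The same function covers the percentage-based definitions simply by passing `pct_limit` instead of (or alongside) `abs_limit`.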

What is the evidence for repeating critical values?

High and low critical values occur at opposite ends of the analytical range, but as George Cembrowski points out, they are as “correct” as any other analytical result generated on the analyzer (5).

A 2013 audit of repeat critical chemistry values examined differences between initial critical and repeated test results for sodium, potassium, calcium, and magnesium. The researchers applied CLIA criteria for allowable error to find significant differences between results. With a total of 2,308 repeated tests in the audit, the investigators found no significant difference in 99.3% of specimens (3).

In another study of 601 chemistry critical values and repeat tests culled from 72,259 routine clinical chemistry specimens, researchers calculated the absolute value and the percentage difference between the two testing runs for each of four critical analytes (potassium, sodium, calcium, glucose). They then compared these calculations with the allowable error limit as defined by CAP. The rates of significant differences varied greatly: 2.1% for values within the analytical measurement range (AMR) versus 41.4% for values outside the AMR. The authors concluded that repeat testing is warranted only when initial critical values are outside their AMR. They also observed that a change in practice away from universal critical value repeats would serve to expedite critical value communications and reduce reagent cost by eliminating unnecessary repeat testing (6).

The decision to repeat a critical value based on whether the value falls within the AMR is supported by a large analysis of 25,553 repeated laboratory values for 30 common chemistry tests. In this study, initial values from 2.6% of repeated tests were errors. However, 85% of errors occurred for values outside the AMR (4).

The evidence against universal critical value repeats also applies to hematology. One study of at least 500 consecutive critical test value pairs found that repeat testing of critical values for principal hematology tests did not add value over a single test run (7).

Is there an example of testing for which repeat critical value testing is warranted?

Point-of-care glucose (POCG) testing on capillary blood specimens is the cornerstone of diabetes management. One study recently assessed the accuracy of POCG results at high (>600 mg/dL) and low (<40 mg/dL) critical values. The researchers observed critical values in 0.24% of tests. Approximately 80% of critical POCG tests were repeated within 10 minutes. In repeat measurements, only 54.9% met accuracy criteria (±15 mg/dL of low and ±20% of high initial values). The authors concluded that POCG testing practice should require repeat testing to confirm critical results, due to the frequency of erroneous results. When retesting was performed on the same meter or by the same user, accuracy was significantly higher. They also speculated that error may be attributable to preanalytic issues in capillary blood sampling (8).
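The study's accuracy criteria (a repeat within ±15 mg/dL of a low initial value, or within ±20% of a high initial value) can be expressed as a simple check. The function below is an illustrative sketch of those published criteria, not the study authors' code.

```python
# Sketch of the POCG repeat-accuracy criteria described above: a repeat
# "agrees" with a low critical value (<40 mg/dL) if it falls within
# +/-15 mg/dL, and with a high critical value (>600 mg/dL) if it falls
# within +/-20% of the initial value. Illustrative only.

def repeat_meets_criteria(initial_mg_dl, repeat_mg_dl):
    if initial_mg_dl < 40:    # low critical value
        return abs(repeat_mg_dl - initial_mg_dl) <= 15
    if initial_mg_dl > 600:   # high critical value
        return abs(repeat_mg_dl - initial_mg_dl) <= 0.20 * initial_mg_dl
    raise ValueError("Initial value is not in the critical range")

print(repeat_meets_criteria(35, 48))    # True: differs by 13 mg/dL, within 15
print(repeat_meets_criteria(35, 55))    # False: differs by 20 mg/dL
print(repeat_meets_criteria(650, 520))  # True: differs by exactly 20%
```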

If a lab ends the practice of repeating critical values, how can it best assure results?

A quality control (QC) program covering all phases of testing—and specific QC appropriate for the sample, method, and platform—is the best assurance of a quality result. External QC programs and internal QC together provide the best safeguard to assure a quality result (5).

How can we change the mindset of laboratorians to move away from repeat critical value testing?

After years of training, habit, and tradition, clinical laboratory scientists are invested in repeat analysis of critical value samples. This practice has been presented as exemplary of careful and conscientious behavior. We recommend various educational efforts, such as posting the recently published literature (7).

Can data drive a change in practice?

Using data to drive change is also important, and quite simple to do. As Toll suggests, first record differences between the initial critical value and the repeat result; second, graph these differences against the average of the pairs (7). Sharing and posting these data is likely to engender confidence and support a change in practice.
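The two steps above amount to the data behind a difference plot: for each initial/repeat pair, compute the difference and the mean of the pair, then plot difference against mean. A minimal sketch, using invented example pairs rather than real study data:

```python
# Difference-vs-mean data for initial/repeat critical value pairs.
# The pairs below are invented example data (e.g. potassium, mmol/L),
# not results from any study.

pairs = [(6.8, 6.7), (2.1, 2.2), (7.2, 7.1), (1.9, 1.9)]

# For each pair: (mean of the pair, initial minus repeat)
points = [(round((a + b) / 2, 2), round(a - b, 2)) for a, b in pairs]

for mean, diff in points:
    print(f"mean={mean:.2f}  difference={diff:+.2f}")
# Plotting difference (y) against mean (x) yields the graph described above.
```

Posting such a plot makes it immediately visible how tightly repeat results track initial critical values.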

What is the best way to manage one’s own critical value practice?

In the aforementioned survey, the repeat analysis rate and result reproducibility did not vary significantly by analyzer or assay, indicating that study findings and conclusions should be generalizable to a variety of laboratories (2). Alternatively, laboratories could evaluate their own critical value datasets, which are readily available, as Dr. Chris Lehman points out (9). A formal definition of what represents a significant difference is desirable and must be clinically relevant. If differences between critical values and repeat values are tallied, and there is a question about their clinical significance, we recommend consulting the appropriate clinicians in your institution.

Do shorter delays in reporting times reduce adverse events?

In the survey of 85 laboratories, 20% of respondents noted an adverse event within the last calendar year due to delayed critical result reporting (2). This is an enormous opportunity for every laboratory to improve care and save time, labor, and reagents. It is evident that this practice has, for many applications, outlived its intended purpose. In April 2008, San Francisco Kaiser Permanente Medical Laboratory retired the procedure of repeating critical values after analysis of 580 critical value repeats, culled from 120,000 tests (10). Global re-evaluation of the laboratory practice of repeating critical values is in the best interests of patient safety.

References 

  1. Munoz O. Workload efficiency in the hematology laboratory [Doctoral dissertation]. Salt Lake City: The University of Utah, 2008.
  2. Lehman CM, Howanitz PJ, Souers R, et al. Utility of repeat testing of critical values: A Q-probe analysis of 85 clinical laboratories. Arch Pathol Lab Med 2014;138:788–93.
  3. Onyenekwu CP, Hudson CL, Zemlin AE, et al. The impact of repeat-testing of common chemistry analytes at critical concentrations. Clin Chem Lab Med 2014;52:1739–45.
  4. Deetz CO, Nolan DK, Scott MG. An examination of the usefulness of repeat testing practices in a large hospital clinical chemistry laboratory. Am J Clin Pathol 2012;137:20–5.
  5. Kratz A, Brugnara C. Automated hematology analyzers: State of the art. Clin Lab Med 2015;35:xiii–xiv.
  6. Niu A, Yan X, Wang L, et al. Utility and necessity of repeat testing of critical values in the clinical chemistry laboratory. PLoS One. 2013;8:e80663.
  7. Toll AD, Liu JM, Gulati G, et al. Does routine repeat testing of critical values offer any advantage over single testing? Arch Pathol Lab Med 2011;135:440–4.
  8. Schifman RB, Nguyen TT, Page ST. Reliability of point-of-care capillary blood glucose measurements in the critical value range. Arch Pathol Lab Med 2014;138:962–6.
  9. Lehman C. Telephone communication, February 11, 2015.
  10. Chima HS, Ramarajan V, Bhansali D. Is it necessary to repeat critical values in the laboratory? Today’s technology may have the answers. Lab Med 2009;40:453.


CLN's Patient Safety Focus is sponsored by ARUP Laboratories.