American Association for Clinical Chemistry
Better health through laboratory medicine
January 2012 Clinical Laboratory News: Risks Associated with Laboratory Informatics

Risks Associated with Laboratory Informatics
Avoiding the Pitfalls that Can Harm Patients

By Michael Yu, MBA, MLS(ASCP)

Over the years, laboratory information systems (LIS) have evolved from mere repositories of laboratory data to informatics systems with decision support capabilities. At the same time, the software programs on today’s laboratory instruments have become more complex. They often come with integrated decision support software that performs auto-verification, manages quality control (QC), sets up reflex parameters, and generates data for inter-lab quality comparisons. Despite these advances, there is still room for improvement, especially in attaining optimum interconnectivity of laboratory instruments and hospital information systems (HIS). This situation has led to the use of middleware, software from third-party vendors that functions as the missing link between the LIS, laboratory instruments, electronic medical records (EMR), and the HIS.

To increase productivity and improve patient care, labs that want to take full advantage of today’s advanced informatics systems need to use a combination of all these tools. But leveraging informatics tools from these various sources has the potential to introduce unexpected errors that can harm patients.

Tracking Software Errors

While numerous case studies have been published about the benefits of informatics tools to boost productivity and improve patient care, far fewer have been published about the harm that software errors can cause. Most EMRs, middleware, and LISs, with the exception of blood bank information systems, do not require 510(k) clearance from the Food and Drug Administration (FDA). Consequently, users do not need to report errors, nor does the FDA track them.

An Institute of Medicine study showed that software design problems and failures accounted for 10% of all 510(k) Class I recalls from 2005-2009 (1,2). According to the FDA, a Class I recall is “a situation in which there is a reasonable probability that the use of or exposure to a violative product will cause serious adverse health consequences or death.” For example, one software program was recalled because the unit of measurement for a blood glucose system changed unexpectedly following a brief loss of power or accidentally when users reset the date and time of the instrument.

Another problem with software errors is that they can occur without warning. Such errors can go undetected for months because the tools labs normally rely on to catch errors, such as QC, proficiency testing, and calibration verification, are usually ineffective at detecting software errors.

Examples of Software Errors Involving Laboratory Data and Results

  • Interface errors between a pathology information system and EMR led to the omission of more than 60 characters, so that “no evidence of abnormality” was displayed as “evidence of abnormality” in the EMR (4).
  • Therapeutic drug monitoring results were transmitted from an instrument with a greater than symbol (>), but were interpreted as less than (<) by the LIS. The error went undetected for 7 weeks, affecting 187 results. Failure to reset the instrument default setting after a software upgrade caused the error. Fortunately, no patients were harmed (5).
  • An interface programming error between the LIS and HIS led to non-toxic acetaminophen results being recorded in the HIS as >1000 μg/mL (toxic concentration >200 μg/mL). Several patients were admitted and treated with a 64-hour antidote protocol, and at least one patient was involuntarily placed on a psychiatric hold for refusing the antidote (6).
  • The adjustment ratio for calculating INR was not entered into the instrument during calibration, leading to INR results being 8% lower for 2 months and affecting 1,620 INR results (7).
  • Errors occurred in linear barcode scanning due to failure to control for barcode scanner resolution and suboptimal barcode orientation (8).
  • A Y2K bug in a software program used to calculate the risk of Down’s syndrome went undiscovered for 5 months, leaving 158 high-risk pregnancies unidentified. Two women gave birth to children with Down’s syndrome, and two others had late-term abortions (9).
  • A change in both the instrument and the reporting format prevented positive gonorrhea and chlamydia results from an emergency department from printing to the EMR. A total of 275 positive results were missed over 4 months (9).
  • Incorrect routing of lab results led to outpatient results being delivered to the wrong provider, a non-active provider, or not being routed to any providers (10).
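Several of the errors above, such as the comparator flag flipped from greater-than to less-than in transit, are exactly the kind of discrepancy an automated cross-check can surface during validation. The following is a minimal, hypothetical sketch (the field names and result IDs are illustrative assumptions, not any vendor's actual interface format) that compares what an instrument transmitted against what the LIS stored:

```python
# Hypothetical validation sketch: cross-check comparator flags and values
# between the instrument's transmitted results and the LIS's stored copy.
# This mirrors the '>' misread as '<' failure mode described above.

def check_transmission(instrument_results, lis_results):
    """Return the IDs of results whose comparator flag or value
    changed between the instrument and the LIS."""
    mismatches = []
    for result_id, (flag, value) in instrument_results.items():
        lis_flag, lis_value = lis_results.get(result_id, (None, None))
        if flag != lis_flag or value != lis_value:
            mismatches.append(result_id)
    return mismatches

# Example: the instrument sent '>' but the LIS recorded '<'
instrument = {"R001": (">", 40.0), "R002": ("", 12.5)}
lis        = {"R001": ("<", 40.0), "R002": ("", 12.5)}

print(check_transmission(instrument, lis))  # ['R001']
```

A check of this sort, run against a batch of known test results after every software upgrade or interface change, would have caught the default-setting error before it reached 187 patient results.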

Minimizing the Risk

What can labs do to minimize the risk and impact of informatics errors? The first step is to develop a comprehensive risk management plan that includes process review, validation, and training. Labs also should improve their communication and visibility with clinicians, who frequently are the first to detect software errors. Laboratory leadership needs to encourage users of laboratory data, including clinicians, nurses, and pharmacists, to report suspicious results back to the lab. Labs can achieve greater visibility and communication with these staff by participating in department staff meetings, focus groups, and hospital-wide initiatives.

Due to the complexity of laboratory testing, it is unlikely that any individual in the lab has a full understanding of all the software. Therefore, it is important to take an interdisciplinary approach to risk management. The LIS manager, section supervisors, the laboratory director, and laboratory staff all need to work together to identify steps where errors can occur and implement processes to prevent them. Some examples include verifying INR manually after updating the ISI and calibration, manually reviewing all results after a software upgrade, and verifying that all barcodes are read correctly after new label printers are installed.
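The manual INR check is a good illustration of what such a verification step looks like in practice. Since INR is defined as (patient PT / geometric mean normal PT) raised to the ISI, staff can recompute it independently and compare against the instrument's reported value. Below is a minimal sketch under illustrative assumptions (the tolerance and sample values are hypothetical, not a clinical standard):

```python
# Hypothetical sketch of the manual INR verification step.
# INR = (patient PT / geometric mean of normal PT) ** ISI
# The 0.05 tolerance and the sample values are illustrative assumptions.

def recompute_inr(pt_patient, pt_mean_normal, isi):
    """Independently recompute INR from raw prothrombin times and ISI."""
    return (pt_patient / pt_mean_normal) ** isi

def verify_inr(reported_inr, pt_patient, pt_mean_normal, isi, tol=0.05):
    """Return (ok, expected): whether the instrument-reported INR
    agrees with the recomputed value within the tolerance."""
    expected = recompute_inr(pt_patient, pt_mean_normal, isi)
    return abs(reported_inr - expected) <= tol, round(expected, 2)

# Patient PT 24 s, mean normal PT 12 s, ISI 1.0 -> expected INR 2.0
ok, expected = verify_inr(2.0, 24.0, 12.0, 1.0)
print(ok, expected)  # True 2.0
```

Had a comparison like this been run on a handful of samples after the calibration described in reference 7, the missing adjustment ratio, and the resulting 8% bias, would likely have surfaced immediately rather than after 2 months and 1,620 results.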

While laboratory accrediting agencies have requirements for information system validation, the requirements often represent the minimum needed to meet the regulation and may not be rigorous enough for labs with sophisticated software. The Clinical and Laboratory Standards Institute has published various in-depth guidelines concerning LIS and software validation that labs should consult (3). Implementing more rigorous validation requirements will not prevent all software errors; however, it will increase the chances of detecting errors at an earlier stage in the process.

Finally, the growing complexity of laboratory informatics means that labs must increase their focus on training. Labs should develop training programs in informatics to ensure that every staff member has a full understanding of the basic functionality of each system, how different systems interconnect, downtime procedures, and troubleshooting. Aside from an initial training program, labs can improve their staff’s understanding of the various systems by involving them in the validation and periodic performance verification process. Since lab staff might not be aware of the risks associated with informatics, laboratory management must emphasize that computers are imperfect and therefore subject to errors. Lab management also must encourage their staff to follow up on and investigate any suspicious results.

Investing in Informatics

The recent advancements in informatics have allowed labs to tackle the ongoing challenge of providing higher quality patient care at a lower cost. By developing a greater understanding of today’s informatics tools and their limitations, laboratorians can improve patient safety and boost productivity.


  1. Hall R. Using recall data to assess the 510(k) process. Proceedings of the IOM Public Health Effectiveness of the FDA 510(k) Clearance Process. Available online, retrieved October 30, 2010.
  2. U.S. Food and Drug Administration. 2000–2005 medical device recalls. Available online.
  3. Clinical and Laboratory Standards Institute. Laboratory Instruments and Data Management Systems: Design of Software User Interfaces and End-User Software Systems Validation, Operation, and Monitoring; Approved Guideline-Second Edition. NCCLS document GP19-A2 [ISBN 1-56238-484-8].
  4. Aller R. Newsbytes. CAP Today 2010;24:92–94.
  5. Bissell M. Lessons learned from total laboratory automation at Ohio State. Laboratory Errors & Patient Safety 2006;3:1–8.
  6. Wears R, Leveson N. ‘SAFEWARE’: Safety-Critical Computing and Health Information Technology. Session Presented at the Healthcare Systems Ergonomics and Patient Safety 2008. Available online, retrieved October 30, 2010.
  7. Valenstein P, Alpern G, Keren D. Responding to large-scale testing errors. Am J Clin Pathol 2010;133:440–446.
  8. Snyder ML, Carter A, Jenkins K, Fantz CR. Patient misidentifications caused by errors in standard bar code technology. Clin Chem 2010;56:1554–60.
  9. Chozos N, Wears R, Perry S. The role of communication in laboratory error handling. Session Presented at the Healthcare Systems Ergonomics and Patient Safety 2008. Available online, retrieved October 30, 2010.
  10. Yackel TR, Embi PJ. Unintended errors with EHR-based result management: a case series. J Am Med Inform Assoc 2010;17:104–7.

Michael Yu
Michael Yu, MBA, MLS(ASCP) is the area lab manager at Kaiser Permanente, Panorama City Hospital, Calif.

Patient Safety Focus Editorial Board

Michael Astion, MD, PhD
Seattle Children's Hospital
Seattle, Washington

Peggy A. Ahlin, BS, MT(ASCP)
Salt Lake City, Utah 

Corinne Fantz, PhD
Emory University
Atlanta, Georgia

James S. Hernandez, MD, MS
Mayo Clinic Arizona
Scottsdale and Phoenix

Brian R. Jackson
ARUP Laboratories
Salt Lake City, Utah


Sponsored by ARUP Laboratories, Inc.