Informatics expertise has the potential to transform laboratory medicine. Laboratorians who attend two short courses at the 69th AACC Annual Scientific Meeting & Clinical Lab Expo will gain new insights into how this technology could improve labs’ error detection capabilities and enhance the value of diagnostic testing.
On July 31 from 12:30 to 2 p.m., two highly regarded experts will explore the Marriage of Informatics and Laboratory Operations (7241) and how the two processes work synergistically to overcome shortcomings in automated chemistry testing.
The speakers plan to describe an experience in which they used informatics to solve a problem with falsely low test results on a new analyzer. “We noticed the instruments were making errors, and we decided to investigate,” Dina Greene, PhD, DABCC, an assistant professor of laboratory medicine at the University of Washington in Seattle, told CLN Stat.
The data were hard to reach, however, because they were spread across multiple electronic systems, some of which were not easy to access, she said. “We had to use our operational and clinical knowledge to ask the right questions and then design the tools to answer those questions,” she said.
That’s where the informatics part came in, Greene explained.
To better understand the clinical impact of these false results, investigators estimated their overall frequency and conducted a focused chart review, said Daniel Herman, MD, PhD, clinical pathologist and assistant professor of pathology and laboratory medicine at the University of Pennsylvania’s Perelman School of Medicine, who will be co-presenting with Greene for this session.
It can take a while to study rare events prospectively, so one of their initial approaches was to leverage historical data.
“Unfortunately, our middleware system was not designed for such analyses. So, we built an informatics pipeline to collect, parse, and analyze our middleware’s archival data,” Herman told CLN Stat. “To get at mechanism, in addition to troubleshooting and provocative testing, we investigated hypotheses in the historical data.”
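The speakers’ actual pipeline isn’t described in detail, but the collect–parse–analyze pattern Herman outlines can be sketched in a few lines. Everything below is a hypothetical illustration — the tab-delimited archive format, the field names, and the crude falsely-low-result screen are assumptions, not the presenters’ tooling:

```python
import csv
import io
import statistics

# Hypothetical middleware archive export: one tab-delimited row per result.
# The schema here is illustrative, not a real middleware format.
ARCHIVE = io.StringIO(
    "timestamp\tinstrument\tanalyte\tresult\n"
    "2017-01-03T08:15\tA1\tsodium\t139\n"
    "2017-01-03T08:16\tA1\tsodium\t112\n"
    "2017-01-03T08:17\tA2\tsodium\t141\n"
    "2017-01-03T08:18\tA1\tsodium\t140\n"
)

def parse_archive(handle):
    """Collect and parse archived results into typed records."""
    reader = csv.DictReader(handle, delimiter="\t")
    return [{**row, "result": float(row["result"])} for row in reader]

def flag_low_results(records, analyte, z_cut=1.5):
    """Flag results sitting far below the batch median -- a crude
    stand-in for a falsely-low-result screen."""
    values = [r["result"] for r in records if r["analyte"] == analyte]
    med = statistics.median(values)
    sd = statistics.stdev(values)
    return [r for r in records
            if r["analyte"] == analyte and (med - r["result"]) / sd > z_cut]

suspect = flag_low_results(parse_archive(ARCHIVE), "sodium")
print([r["timestamp"] for r in suspect])  # the 112 mmol/L result is flagged
```

A production version would, as Herman suggests, run against historical archives first to estimate error frequency, then against live data for real-time detection.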
Using this method, Greene and Herman said they were able to analyze the data and develop strategies to detect the errors in real time. “We learned that ready access to instrument or middleware data that are not present in downstream systems can enable powerful quality assessment and quality improvement. We also found that adding structured documentation of potential sources of error to workflows is a helpful tool in identifying and excluding error mechanisms,” Herman said.
Anyone who runs an automated clinical laboratory should find this process for detecting errors and problem solving indispensable, Greene said. The hope is that clinical laboratorians who attend this session will adopt similar practices and become more vigilant about the risk of reporting erroneous results when data aren’t appropriately scrutinized.
The bottom line: Don’t trust manufacturer claims, Greene advised. “If you think something’s fundamentally wrong with your instrument, it probably is,” she said.
During her presentation, Greene will discuss what operational oversight is needed to execute data-derived quality assurance. Herman in his talk will explain the use of clinical informatics tools in developing and implementing robust quality practices. Attendees earn 1.5 CE hours by attending this short course.
In another session taking place from 2:30 to 5 p.m. Aug. 1, Enhancing the Diagnostic Value of Clinical Laboratory Testing using Data Mining, Machine Learning, Informatics and Clinical Decision Support (73211), four experts will discuss how these technologies are transforming clinical laboratories. This session is worth 2.5 CE hours.
“Machine learning and data analytics offer tremendous potential to identify novel patterns in clinical and laboratory data,” session moderator Jason Baron, MD, an assistant in pathology at Massachusetts General Hospital and an assistant professor in pathology at Harvard Medical School, told CLN Stat.
“Laboratories are beginning to translate these patterns into new clinical knowledge to enable more-precise, higher-efficacy and lower-cost laboratory diagnoses with the goal of improved clinical outcomes and better value,” Baron said.
Labs may use data mining to develop patient-specific prognostic and prescriptive interpretations of test results.
“For example, we envision that a future laboratory test report might include predictive information that a patient’s test result suggests that he or she is more likely to respond favorably to one treatment as compared to an alternative treatment that might otherwise be considered,” Baron said. Multi-analyte assays with algorithmic analyses represent another application of machine learning to clinical laboratory data.
In his talk, Baron will describe how he applied machine learning to identify a pattern of routine chemistry test results that might signal a common phlebotomy error, as well as a machine learning-based algorithm to predict patient ferritin test results from the results of other laboratory tests.
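Baron’s actual ferritin model used a richer feature set and algorithm than can be shown here; as a toy stand-in, an ordinary least-squares fit on two fabricated CBC features illustrates the general idea of predicting one analyte from the results of others. All values and relationships below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic routine results (hemoglobin g/dL, MCV fL) -- illustrative only.
n = 200
hgb = rng.normal(13.5, 1.5, n)
mcv = rng.normal(90, 5, n)
# Fabricated relationship: log-ferritin loosely tracks both features.
log_ferritin = 0.25 * hgb + 0.02 * mcv + rng.normal(0, 0.3, n)

# Fit a linear model (intercept + two features) by least squares.
X = np.column_stack([np.ones(n), hgb, mcv])
coef, *_ = np.linalg.lstsq(X, log_ferritin, rcond=None)

def predict_log_ferritin(hgb_val, mcv_val):
    """Predict log-ferritin from routine results (toy linear model)."""
    return coef @ np.array([1.0, hgb_val, mcv_val])

print(round(predict_log_ferritin(13.5, 90.0), 2))
```

A clinically useful version would draw on many more inputs, a nonlinear learner, and — crucially — validation against measured ferritin in an independent patient cohort.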
Christopher Garcia, MD, assistant professor of pathology and director of pathology informatics at the Medical College of Wisconsin, Milwaukee, will join Baron on the dais to discuss intelligent clinical decision support and image analysis in anatomic pathology.
Another speaker, Brian Shirts, PhD, assistant director of the Informatics Division in the University of Washington’s Department of Laboratory Medicine, plans to give real-world examples to illustrate the times when multi-analyte diagnostics and genomic clinical support work effectively—and when they don’t.
Multi-analyte tests fail for a number of reasons, including poor validation study design, overfitting, and lack of proven clinical benefit over available tests, Shirts told CLN Stat. “Successful multi-analyte tests usually have high-quality validation plans that are paired with business and marketing plans.”
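The overfitting failure mode Shirts names is easy to demonstrate. In this fabricated example, a model fit to 25 random “analytes” with no true signal achieves a perfect fit on its training patients yet fails on a held-out split — which is why validation study design matters:

```python
import numpy as np

rng = np.random.default_rng(1)

# 30 patients, 25 random "analytes", and a random outcome: no real signal.
n, p = 30, 25
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

train, test = slice(0, 20), slice(20, 30)
Xd = np.column_stack([np.ones(n), X])  # add an intercept column
# With more parameters (26) than training patients (20), least squares
# can interpolate the training data exactly -- classic overfitting.
coef, *_ = np.linalg.lstsq(Xd[train], y[train], rcond=None)

def r2(yhat, ytrue):
    """Coefficient of determination."""
    ss_res = np.sum((ytrue - yhat) ** 2)
    ss_tot = np.sum((ytrue - ytrue.mean()) ** 2)
    return 1 - ss_res / ss_tot

print("train R^2:", round(r2(Xd[train] @ coef, y[train]), 2))  # ~1.0
print("test  R^2:", round(r2(Xd[test] @ coef, y[test]), 2))    # poor
```

The near-perfect training fit on pure noise is exactly the pattern a well-designed, independent validation cohort is meant to expose.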
Genomic clinical decision support, in turn, “will require large numbers of individuals with genetic information, interoperability between hospitals, and a high degree of standardization in reporting between genetic testing laboratories,” Shirts added.
Lee Schroeder, MD, PhD, assistant professor in pathology at the University of Michigan, rounds out the session with his presentation on mining electronic health records (EHR) to derive new clinical laboratory knowledge.
Schroeder told CLN Stat that he’ll present an unsupervised learning algorithm for generating reference ranges from EHR data, an approach that has proven useful for personalizing ranges to patient subgroups, such as pediatric populations.
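Schroeder’s algorithm is not spelled out in the session description; the sketch below shows only the simplest version of the idea — the conventional central-95% reference interval computed per subgroup from an EHR extract. Real EHR-derived approaches must first separate presumed-healthy from diseased results, a step this toy example skips, and all values are fabricated:

```python
import statistics

# Hypothetical EHR extract: analyte results grouped by a patient subgroup.
results = {
    "pediatric": [3.9, 4.1, 4.4, 4.0, 4.6, 4.2, 4.3, 4.5, 4.1, 4.4],
    "adult":     [4.6, 4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 4.9, 5.1],
}

def central_95(values):
    """Central 95% interval (2.5th-97.5th percentile), the conventional
    definition of a reference range."""
    q = statistics.quantiles(values, n=40, method="inclusive")
    return q[0], q[-1]  # first and last cut points = 2.5% and 97.5%

for group, vals in results.items():
    lo, hi = central_95(vals)
    print(f"{group}: {lo:.2f}-{hi:.2f}")
```

Splitting by subgroup before computing the interval is what yields the personalized (e.g., pediatric) ranges Schroeder describes; the unsupervised part of his method addresses the harder problem of filtering out abnormal results without labels.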
He’ll also discuss a supervised learning algorithm for monitoring response of inflammatory bowel disease to thiopurine analogues using complete blood count and comprehensive metabolic panel data, and show some use cases of an approach to monitor point-of-care testing (POCT) accuracy. “It turns out that patients will often get a central laboratory and POCT at the same time, allowing the opportunity to evaluate harmonization between the POCT and central laboratory tests,” Schroeder said.
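The POCT-monitoring opportunity Schroeder describes — same-time paired results from POCT and the central laboratory — lends itself to a simple agreement analysis. This sketch applies a Bland-Altman-style bias calculation to fabricated paired glucose results; the pairing logic and numbers are assumptions, not his method:

```python
import statistics

# Hypothetical paired glucose results (mg/dL) drawn at the same time:
# (POCT result, central laboratory result).
pairs = [(102, 98), (145, 140), (89, 91), (210, 201),
         (76, 74), (130, 126), (95, 97), (180, 172)]

diffs = [p - c for p, c in pairs]
bias = statistics.mean(diffs)     # mean POCT-minus-central difference
spread = statistics.stdev(diffs)  # scatter of the differences
# Bland-Altman style 95% limits of agreement
loa = (bias - 1.96 * spread, bias + 1.96 * spread)
print(f"bias={bias:.1f} mg/dL, "
      f"limits of agreement={loa[0]:.1f} to {loa[1]:.1f}")
```

A persistent positive bias or widening limits over time would signal drifting POCT accuracy relative to the central laboratory method.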
Baron hopes that by the end of this session, participants will:
- Understand fundamentals of machine learning and its applications to laboratory medicine;
- Appreciate currently available opportunities to apply machine learning to clinical practice as well as areas of ongoing research;
- Recognize the limitations and pitfalls of machine learning;
- Understand how to critically assess machine learning algorithms, multi-analyte assays, and their clinical validations, and in turn better manage utilization of these tests; and
- Be aware of regulatory considerations related to the application of diagnostic algorithms.
Register for the 69th AACC Annual Scientific Meeting & Clinical Lab Expo and sign up for these two short courses, which offer practical examples of the latest in informatics.