Digital health is progressing at an astounding speed, aided by artificial intelligence (AI) technologies. Clinical labs and other healthcare providers face the ongoing challenge of understanding, navigating, and using these tools judiciously, with an eye toward protecting patients and avoiding liability. Several leading digital health researchers and thinkers have suggested checks and balances on AI technologies through regulation, education, and research, as well as best practices for optimizing AI tools.

The medical community needs no convincing that AI is the wave of the future. In a recent Accenture poll of healthcare executives, nearly 90% said they are tapping into AI and related technologies, and 41% believe that AI will have the greatest impact on their organization over the next 3 years. Many such technologies and computing capacities are already in place, according to Damien Gruson, PhD, lead author of an article that summarized the impact of big data, AI, and digital health on clinical labs. The challenges that lie ahead, Gruson told CLN Stat, are validating the performance of these technologies, identifying operational and clinical needs (such as laboratory processes or managing reagents and staff according to activity), training more data scientists, and providing continuous education about data science to lab stakeholders.

In Science Translational Medicine, research futurist Eric Topol, MD, director and founder of the Scripps Research Translational Institute, discussed some of the challenges ahead with AI, specifically with respect to privacy. Smartphone lab assays, genomics, and multimodal data are particularly fast-growing areas, he noted. Smartphone assays, for example, can read electrolytes through sweat and nitrates through breath to help assess asthma, detect pathogens at the point of care, quantify nucleic acids, and manage sepsis. “With millions of people having had high-throughput genotyping, and exome or whole genome sequencing, we are now in a position to provide polygenic risk scores for many common conditions, including heart disease, breast cancer, prostate cancer, colon cancer, inflammatory bowel disease, type 2 diabetes, atrial fibrillation, and more,” wrote Topol.
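For readers unfamiliar with the term, a polygenic risk score is essentially a weighted sum of an individual's risk-allele counts across many variants. The sketch below is a minimal illustration with made-up variant IDs and effect sizes, not any published score or method from Topol's article.

```python
# Minimal polygenic risk score (PRS) sketch: a weighted sum of risk-allele
# counts across variants. The effect sizes (weights) and genotypes below are
# hypothetical placeholders for illustration only.

def polygenic_risk_score(genotypes, weights):
    """genotypes: variant -> risk-allele count (0, 1, or 2)
    weights: variant -> per-allele effect size (e.g., log odds ratio)"""
    return sum(weights[v] * genotypes.get(v, 0) for v in weights)

# Example with made-up variants and effect sizes
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}
genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_risk_score(genotypes, weights), 3))  # 0.19
```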

Deep learning and other machine learning tools will be necessary to sort through the vast sea of multimodal datasets that digital medicine produces, he continued. “Continuous glucose sensors for individuals with diabetes can alert a person if their glucose is trending up or down. But, we need a smart algorithm that integrates the person’s physical activity, stress, sleep, food and beverage intake, gut microbiome, and possibly other relevant layers of data,” Topol suggested. Certain patient-related liabilities also need to be addressed, such as privacy and security issues, health disparities, and algorithmic biases. “Just like any new drug or device, the implementation of digital medical technologies will require rigorous validation with randomized, controlled clinical trials,” he recommended.
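To make the glucose example concrete, here is a minimal sketch of a trend alert on continuous glucose monitor readings. The thresholds and the simple slope check are hypothetical assumptions; a genuinely “smart” algorithm of the kind Topol describes would also integrate activity, sleep, meals, and the other data layers he lists.

```python
# Illustrative trend alert on continuous glucose monitor (CGM) readings.
# Thresholds and the slope heuristic are hypothetical, not clinical guidance.

def glucose_trend_alert(readings_mg_dl, rising_rate=2.0, high=180, low=70):
    """readings_mg_dl: recent readings at 5-minute intervals, oldest first."""
    latest = readings_mg_dl[-1]
    # Average change per 5-minute interval over the recent window
    slope = (readings_mg_dl[-1] - readings_mg_dl[0]) / (len(readings_mg_dl) - 1)
    if latest >= high or slope >= rising_rate:
        return "alert: glucose high or trending up"
    if latest <= low or slope <= -rising_rate:
        return "alert: glucose low or trending down"
    return "no alert"

print(glucose_trend_alert([110, 118, 127, 135, 144]))  # trending up
```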

Gruson and colleagues see this as the evolution of data science, a compilation of AI, database advances, data capture and management, and computing infrastructures. “[Data science] is recognized for improving complex analytical tasks and flows in the laboratory domain,” they observed. “Therefore, [data science] could be used to examine data in real time and calculate and simulate the most efficient operational and clinical pathways.”

Supervisors could apply data science to laboratory datasets to help optimize tasks and better predict and adapt to various scenarios. According to Gruson and his colleagues, “The translation of [data science] to laboratory operations and healthcare pathways offers the opportunity to mimic activities, redesign processes, and apply several process improvements based on next-generation technologies that assist laboratory staff members by removing repetitive, replicable, and routine tasks.” Physicians who use data science to help with lab and imaging test ordering could reduce errors and avoid redundant tests. “In addition to improved control of laboratory test ordering, [data science] can trigger alerts when abnormal results occur,” the authors suggested.
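As a rough illustration of the kinds of rules the authors describe, the sketch below flags a result outside a reference interval and an order placed within a minimum retest window. The reference ranges, retest intervals, and function names are hypothetical, not part of Gruson's work.

```python
# Hedged sketch of two rules described above: flagging abnormal results and
# catching potentially redundant orders. Reference intervals and minimum
# retest intervals are hypothetical placeholders.

from datetime import datetime, timedelta

REFERENCE_RANGES = {"potassium_mmol_l": (3.5, 5.1), "hba1c_pct": (4.0, 5.6)}
MIN_RETEST_INTERVAL = {"hba1c_pct": timedelta(days=90)}

def abnormal_result_alert(test, value):
    low, high = REFERENCE_RANGES[test]
    if value < low or value > high:
        return f"alert: {test} = {value} outside reference range {low}-{high}"
    return None

def redundant_order_alert(test, last_resulted, now=None):
    now = now or datetime.now()
    interval = MIN_RETEST_INTERVAL.get(test)
    if interval and last_resulted and now - last_resulted < interval:
        return f"alert: {test} already resulted within {interval.days} days"
    return None

print(abnormal_result_alert("potassium_mmol_l", 6.2))
print(redundant_order_alert("hba1c_pct", datetime.now() - timedelta(days=30)))
```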

Vural Özdemir, MD, PhD, DABCP, editor-in-chief of OMICS: A Journal of Integrative Biology, covered similar ground, describing the interactive relationship between AI, the internet of things (IoT), and cyber-physical systems (CPS) in healthcare, and the digital innovations these technologies have produced, such as AI-powered robots and wearables that track heart arrhythmias. “Another interesting prospect of digital health powered by AI, IoT, and [cyber-physical systems] is remote phenotypic data capture and characterization of pharmaceutical outcomes in clinical trials in ways that are user centric and meaningful to patients,” Özdemir wrote. Given the societal effects of these innovations, he suggested that some governance of AI, IoT, and CPS was in order.

“As with any power, [cyber-physical systems] and its power should be made transparent and accountable,” he suggested. Certain terms such as “smart” and “integration” need further scrutiny. As an example, “integration might mean efficiency in city or factory governance but also control of citizens, employees, and potentials for curtailing civil liberties,” Özdemir cautioned.

Examining European and French legal frameworks as well as recommendations from national ethical bodies, Gruson and his colleagues developed a set of five keys to regulate AI:

  • Informing patients prior to the use of any AI technology;
  • Using a human expert to verify any AI management options;
  • Regulating large quantities of data from an AI device according to the level of sensitivity of the data;
  • Applying new principles to the health professions using these devices; and
  • Establishing an independent guidance group to examine all of these efforts.

“The most important objective is to keep a human warrantee on AI and define basis for regulation and translation to clinical practices,” said Gruson, who will host several roundtables on AI and data science in laboratory medicine and elaborate further on this topic at the 71st AACC Annual Scientific Meeting & Clinical Lab Expo in Anaheim, California.

The American Medical Association (AMA), at its own annual meeting, adopted policies to educate physicians on AI mechanisms and on using this technology in patient care. “Just as working effectively with electronic health records is now part of training for medical students and residents, educating physicians to work effectively with AI systems, or more narrowly, the AI algorithms that can inform clinical care decisions, will be critical to the future of AI in healthcare,” said AMA Board Member S. Bobby Mukkamala, MD, in a statement.