In the early 1900s a new epidemic began to spread through major American cities. It disproportionately targeted children; more than 100 died from it on the streets of Detroit in the year 1917 alone. Today, a century later, we still don’t have a cure, but we’ve dramatically reduced the harms through a combination of technology and regulation. What was this epidemic? Influenza? Group A streptococcus? No, it was the automobile.

Today automobiles are an integral part of most of our lives. But if we could go back in a time machine, we would see just how unprepared cities were for this new technology. There were no licensing laws, no traffic lights, and no speed limits, let alone traffic police to enforce any of these. Motors may not have been powerful by today’s standards, but brakes were even less so, especially when combined with narrow tires and gravel roads. And since modern playgrounds had yet to be invented, the streets were full of children.

In subsequent decades, society has developed a web of risk mitigation mechanisms for motor vehicles. Some advances have been regulatory, such as traffic laws and driver licensing. Others have been technologic, such as antilock brakes, airbags, and road design. These two categories complement and reinforce each other. Far from being anti-innovation, auto regulation has spurred a wide range of social and technologic innovation that has increased public safety while expanding the usefulness of this form of transportation.

Today a new technology is changing the world in even more dramatic and far-reaching ways: artificial intelligence (AI). It has revolutionized advertising, entertainment, and education, and is invading many other areas of our lives. Some of the most obvious examples include self-driving cars, phones that understand our speech, and online search engines that answer virtually any question. Less obvious examples operate behind the scenes in retailing and service industries. And the medical world has high expectations for AI to improve healthcare.

An Old Structure Fits a New Challenge

AI, like every other powerful technology, introduces new risks alongside new benefits. Managing these risks will not simply be a matter of a single law or technology. Just as with automobiles, it will require a whole new ecosystem of norms, regulations, and technologies.

A major difference between automobiles and AI, though, is that the risks of AI are not directly physical in nature. They’re subtler, involving core issues of privacy, social benefit, and fairness.

Medical ethics provides a useful framework for considering such AI risks. In the roughly 2,500 years since Hippocrates authored his famous oath, medical ethics has undergone significant updates. Modern representations include the World Medical Association code for medical practitioners as well as the Common Rule governing human subjects research within the U.S. Four principles underlie these codes: respect for persons, beneficence, nonmaleficence, and justice. AI presents risks to all four.

[Table: The four ethical principles of medical ethics as applied to AI: respect for persons, beneficence, nonmaleficence, and justice]

Respect for Persons

Respect for persons includes patient autonomy and informed consent. Patients have the right to decide for themselves which medical procedures to accept or reject, and effective autonomy requires first understanding the risks and benefits of those procedures. Unlike traditional mathematical models such as logistic regression, the details of deep learning and other modern AI techniques are not easily translated into human-understandable terms. In general, the more powerful the AI technique, the less explainable it becomes and the greater the potential for hidden biases and unpredictable behavior. These techniques essentially become black boxes.
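To make the contrast concrete, here is a minimal illustrative sketch in Python using scikit-learn on synthetic data. The feature names are invented for illustration and the data is not clinical; the point is only that a logistic regression exposes one interpretable coefficient per input, while a neural network's parameters have no comparable clinical reading.

```python
# Sketch: interpretable logistic regression vs. an opaque neural network.
# Assumes scikit-learn is installed; the dataset and feature labels are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a small clinical dataset (e.g., lab values vs. outcome).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["age", "glucose", "creatinine", "wbc", "hemoglobin"]  # hypothetical labels

logit = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(feature_names, logit.coef_[0]):
    # Each coefficient maps directly to a direction and magnitude of effect.
    print(f"{name}: odds ratio ~ {np.exp(coef):.2f}")

mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
# The "explanation" here is just a pile of weight matrices -- the black box.
n_weights = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print(f"MLP parameters with no individual clinical meaning: {n_weights}")
```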

When dealing with medical applications of AI, then, it becomes incumbent on the developers of systems to make them freely available for academic study. Only when the behavior of AI models has been thoroughly and independently vetted will patients be able to make informed choices about being subject to them.

Beneficence

Beneficence is perhaps the most obvious medical ethical principle. It says that everything done in medicine should have a realistic prospect of benefiting the patient. This is potentially in conflict with the business models of technology companies such as Google, Facebook, Amazon, Apple, IBM, and Microsoft, which are far and away the leaders in AI technology in the Western world (China has several companies whose AI is arguably as advanced, but they operate mainly within China).

The prevailing business model for AI at these companies involves acquiring massive volumes of personal data and then monetizing it primarily in the form of targeted advertising. This model is controversial enough in the case of nonmedical data, but potentially unethical in the case of medical data when there is no balancing benefit to the patients whose data is being collected.

Nonmaleficence

The flip side of beneficence is nonmaleficence, sometimes known by the Latin phrase primum non nocere, or “first, do no harm.” One potential harm of AI has to do not with the algorithms per se, but with acquiring and storing the large sets of personal data typically used for training and applying medical AI. In many cases these data sets have been de-identified in order to avoid HIPAA restrictions on data sharing and use. Given the widespread availability of nonmedical data sets of personal information, though, most de-identified data can be readily re-identified simply by cross-referencing with other data. A patient with a stigmatizing medical condition could thus face higher life insurance rates or discrimination by potential employers or landlords, all without anyone technically violating HIPAA.
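The cross-referencing involved is not sophisticated. The following is a minimal sketch in Python, using pandas and entirely invented toy records, of how a simple join on quasi-identifiers such as ZIP code, birth date, and sex can restore the link between a “de-identified” diagnosis and a named individual.

```python
# Sketch of a linkage-style re-identification: invented data, pandas assumed installed.
import pandas as pd

# De-identified clinical data: names and record numbers removed, quasi-identifiers kept.
deidentified = pd.DataFrame({
    "zip": ["84101", "84102"],
    "birth_date": ["1975-03-02", "1982-11-19"],
    "sex": ["F", "M"],
    "diagnosis": ["hepatitis C", "major depression"],
})

# Publicly or commercially available records (voter rolls, marketing lists, etc.).
public_records = pd.DataFrame({
    "name": ["Jane Roe", "John Doe"],
    "zip": ["84101", "84102"],
    "birth_date": ["1975-03-02", "1982-11-19"],
    "sex": ["F", "M"],
})

# A plain join on the shared quasi-identifiers reattaches names to diagnoses.
reidentified = deidentified.merge(public_records, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```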

At least until new legal privacy protections emerge in the U.S., it will be up to the healthcare industry, including hospitals and health systems, to vigorously protect their patients’ data from potential abuses. And judging from news reports over the past year involving large health systems sharing data with large tech companies, it is not entirely clear how strong that line of defense is at present.

Justice

The fourth foundational principle of medical ethics is justice, i.e. fairness. One measure of justice with respect to AI is the extent to which it increases or reduces health disparities. If AI developers pursue pharmaceutical-style pricing models, they could further widen the health delivery gaps between haves and have-nots. One way to mitigate this risk might be for health systems and patient interest groups to insist on reasonable pricing and distribution clauses in exchange for sharing the patient data needed to develop AI systems.

Clinical Laboratorians Must Stay Engaged

Medical AI applications will almost certainly grow in number and scope in the coming years. Many of these already use clinical laboratory data, and in the future they may supplement or even replace certain laboratory tests. There will almost certainly be large benefits. But the risks are real, and we would be foolish not to discuss and deal with them.

Just as automobile safety has co-evolved with advances in automotive capability and power, AI safety can co-evolve even as AI's power and capabilities grow. This will almost certainly require new laws, but legislation almost invariably lags its target. In the meantime, the medical and academic communities can do much to develop both technologic and policy-based solutions.

Brian Jackson, MD, MS, is associate professor of pathology at the University of Utah and medical director of IT and pre-analytic services at ARUP Laboratories in Salt Lake City. Email: [email protected]