Volume 108, Issue 6 (2020)

Informed Consent and Medical Artificial Intelligence: What to Tell the Patient?

by I. Glenn Cohen

This Article is the first to examine in depth how medical AI/ML intersects with the concept of informed consent. To be clear, this is just one of a number of issues raised by medical AI/ML (others include data privacy, bias, and the optimal regulatory pathway), but it is one that has received surprisingly little attention. I hope to begin to remedy that with this Article. Part I provides a brief primer on medical artificial intelligence and machine learning. Part II sets out the core and penumbra of U.S. informed consent law and then seeks to determine to what extent AI/ML involvement in a patient's health should be disclosed under the current doctrine. Part III examines whether the current doctrine "has it right," drawing on more openly empirical and normative approaches to the question.

To state my conclusions up front: although there is some play in the joints, my best reading of the existing legal doctrine is that, in general, liability will not lie for failing to inform patients about the use of medical AI/ML to help formulate treatment recommendations. There are a few situations where the doctrine may be more capacious, which I try to draw out (such as when patients inquire about the involvement of AI/ML, when the medical AI/ML is more opaque, when it is given an outsized role in the final decisionmaking, or when the AI/ML is used to reduce costs rather than improve patient health), though even these extensions are not certain. I also offer some thoughts on a further question: if there is room in the doctrine (either via common law or legislative action), what would a desirable doctrine look like when it comes to medical AI/ML? Finally, I briefly touch on how the doctrine of informed consent should interact with concerns about biased training data for AI/ML.
