Who Is Liable for an Incorrect AI Diagnosis That Leads to the Wrong Treatment or Injury?


Physicians are likely to face a medical malpractice lawsuit at some point during their careers. In 2022, the American Medical Association reported that 31.2% of physicians had been sued during their careers. Setting aside the separate debate over the frivolity or impact of medical malpractice claims on the United States healthcare system, the burgeoning use of artificial intelligence (AI) and machine learning (ML) in healthcare has introduced nuanced concerns in healthcare litigation. Chief among those concerns is the lack of a clear framework of liability for patients seeking to sue their healthcare providers when the application of AI or ML leads to improper treatment or injury.

On October 30, 2023, the White House issued an Executive Order setting out a comprehensive strategy for the safe, secure, and trustworthy development of AI. The Executive Order focuses on AI's impact on national defense, on guarding Americans' privacy, on advancing equity and civil rights, and on protecting the rights of consumers, students, and patients. Yet the reality of AI/ML's impact on the United States healthcare industry and on American citizens was already acknowledged when the 21st Century Cures Act (Cures Act) was signed into law in December 2016.


The Cures Act

The Cures Act delegated to the United States Food and Drug Administration (FDA) the authority to regulate certain software that utilizes AI and ML. The FDA subsequently defined AI and ML separately, a distinction that points to several of the underlying issues in the legal mire surrounding their respective uses.

AI is the science and engineering of making intelligent machines, especially intelligent computer programs. AI can use different techniques, including models based on statistical analysis of data, expert systems that primarily rely on if-then statements, and ML. Machine learning, in turn, is defined as an AI technique that can be used to design and train software algorithms to learn from and act on data. Software developers can use ML to create an algorithm whose function does not change, or an adaptive algorithm that can change its behavior over time based on new data. For example, AI and ML can be used to provide a diagnostic algorithm for skin cancer that continuously updates based on the latest research.
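To make the fixed-versus-adaptive distinction concrete, the following is a minimal, hypothetical sketch in Python. The classes, decision threshold, and update rule are invented purely for illustration; they do not represent any actual diagnostic product or FDA-cleared software.

```python
# Hypothetical contrast between a fixed ("locked") algorithm, whose behavior
# does not change after release, and an adaptive algorithm that updates its
# behavior as new labeled cases arrive. Illustrative only.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class LockedClassifier:
    """Behavior is fixed at release: the decision threshold never changes."""
    threshold: float = 0.5

    def predict(self, risk_score: float) -> str:
        return "biopsy recommended" if risk_score >= self.threshold else "monitor"


@dataclass
class AdaptiveClassifier:
    """Behavior evolves: the threshold is re-estimated from new confirmed cases."""
    threshold: float = 0.5
    history: List[Tuple[float, bool]] = field(default_factory=list)

    def update(self, risk_score: float, was_malignant: bool) -> None:
        # Record the outcome and lower the threshold to the smallest score
        # observed among confirmed malignant cases (a deliberately simplistic rule).
        self.history.append((risk_score, was_malignant))
        malignant_scores = [s for s, label in self.history if label]
        if malignant_scores:
            self.threshold = min(malignant_scores)

    def predict(self, risk_score: float) -> str:
        return "biopsy recommended" if risk_score >= self.threshold else "monitor"


if __name__ == "__main__":
    locked, adaptive = LockedClassifier(), AdaptiveClassifier()
    print(locked.predict(0.4), "|", adaptive.predict(0.4))   # both: monitor
    adaptive.update(risk_score=0.35, was_malignant=True)      # new confirmed case
    print(locked.predict(0.4), "|", adaptive.predict(0.4))   # locked unchanged; adaptive now flags the case
```

The sketch is meant only to show why liability questions differ: the locked model's output can be traced to a design frozen at release, while the adaptive model's output depends on data it encountered after deployment.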

Who Is Liable for Medical AI/ML?

The use of AI/ML in direct patient care begets the question: who is to blame when an incorrect diagnosis facilitated by AI or ML leads to unnecessary, and even harmful, treatment of a patient? The answer is, of course, complicated. True to the law school adage "it depends," liability turns on the type of AI/ML at issue.


While case law surrounding physicians' use of AI/ML is not yet established, an argument can be made that such cases fall within the purview of the traditional medical malpractice and negligence framework, under which an individual physician may be held liable.

Under the traditional medical negligence framework, the standard of care is legally defined as what a reasonable and prudent physician would have provided under the same or similar circumstances. Individual physicians can be held liable for failing to evaluate the output of predictive diagnostic tools, just as a physician is liable for failing to independently assess the data and conclusions from imaging studies or lab work.

Critics of medical negligence lawsuits argue that increased individual liability stifles innovation and deters physicians from performing newer but riskier procedures. Unfortunately, the traditional medical negligence framework encounters similar issues when applied to AI/ML in healthcare. As it stands, without clear precedent or guidance, the potential benefits to individual physicians of adopting AI/ML systems are outweighed by the fear of unknown and unquantified liability.

Could Inventors of AI/ML Healthcare Systems Be Liable?

It is plausible that the inventors of AI/ML healthcare systems, particularly those utilized in medical devices, could be held liable for harm resulting from their designs in a fashion similar to manufacturers, via product liability litigation. Yet current case law on whether software is a "product" for purposes of product liability litigation is convoluted as well.

On one end of the spectrum, a non-precedential opinion of the United States Court of Appeals for the Third Circuit in Rodgers v. Christie (2020) has been cited as holding that software is neither tangible personal property nor remotely analogous to it, and therefore does not qualify as a product for liability purposes. Conversely, the Louisiana Supreme Court in South Central Bell Telephone Co. v. Barthelemy (1994) held that computer software constitutes "corporeal property" and is therefore subject to product liability litigation under Louisiana law.

Current federal legislation does not preempt the traditional state laws that guide product liability litigation. In fact, the culmination of legislative actions appears to favor such a legal framework. The FDA's September 27, 2019, Notice established that some clinical decision support (CDS) software, which the agency seeks to regulate, constitutes a "medical device." In recent years, the FDA's Center for Devices and Radiological Health (CDRH) has approved hundreds of AI/ML-enabled medical devices.

FDA’s SaMD Action Plan

Lastly, on January 12, 2021, the FDA published its first AI/ML-Based Software as a Medical Device (SaMD) Action Plan. The SaMD Action Plan emphasizes developing a robust regulatory framework, taking a more proactive role in good ML practices, increasing transparency to support a patient-centered approach, and supporting regulatory science efforts to improve ML algorithms. Notably absent from the Action Plan, however, was any federal framework establishing avenues of recovery for individuals injured by products or devices utilizing AI/ML.

The integration of AI and ML into the United States healthcare setting is inevitable and, in many specialized areas, has already occurred. The United States government has only recently taken drastic steps aimed at regulating AI/ML, yet there remains an absence of guiding principles for addressing the inevitable injuries resulting from AI/ML healthcare systems.

The inclination to employ the states as incubators of various solutions is appealing, but the traditional frameworks of medical negligence and product liability fail to balance the multifaceted interests of industry innovators, physicians, and patients.

Michael Lee

Michael P. Lee is an Associate Attorney at Segal McCambridge. He may be reached at [email protected].
