Examining the Liability Risk Surrounding Artificial Intelligence Tools in the Healthcare Environment

As the technology continues to advance, researchers are examining how the legal system will handle injuries attributed to AI.

The use of artificial intelligence (AI) in the healthcare space has generated excitement over seemingly endless opportunities, but it has also raised concerns across a myriad of areas. One question raised by experts is who will be responsible when AI results in injury?

A study published in The New England Journal of Medicine (NEJM) explores the “dauntingly long lists of legal concerns” surrounding the use of AI; however, the authors also note that failing to adopt the new technology carries its own consequences and could itself ultimately constitute malpractice.1

The NEJM report examines how courts have handled software errors, with a deeper analysis of which AI tools mitigate or elevate legal risk. Because software is an intangible product, courts are often hesitant to apply standard product liability rules when deciding cases in this arena. There is also a doctrine known as “preemption,” which bars state-level personal injury claims related to products that have been cleared by the FDA.

According to the study authors, this doctrine is ambiguous as to what it actually covers, given that it is uncommon for healthcare AI to undergo FDA review. Also, in many states, plaintiffs alleging that a product was defectively designed must show that a safer alternative design exists, which is difficult to do for AI models, given that they are powered by mathematical models derived from statistical data (see Table 1 below).

Table Credit: The New England Journal of Medicine's "Understanding Liability Risk from Using Health Care Artificial Intelligence Tools"

The study authors noted that models that are generally effective may not produce the same results for certain patients or groups. The medical datasets used to train and evaluate AI models are specific to particular patient populations, so predicting how the models will perform in other subgroups can be a challenge.

To examine software-related liability in the courts, the study authors gathered judicial opinions in tort cases involving AI, along with other types of software, in both healthcare and non–healthcare settings. They supplemented those opinions with news reports, legal newsletters, scholarly articles, and jury verdicts, reviewing a total of 803 unique cases. From there, they extracted the central issues the courts addressed in 51 cases in which software-related errors resulted in physical injury. Trends in these software-related cases cited by the study authors include the following:

Cases involving implantable device software defects suggest that plaintiffs have a difficult time supporting claims when there is limited visibility into the workings of the device, making it hard to identify a specific product defect.

In cases involving software recommendations, the variable effectiveness of AI across patient groups will put courts in the position of determining whether a healthcare provider (HCP) should have known that a result would not be reliable for a given patient.

The reluctance of courts to differentiate AI from “traditional” types of software implies that the rules courts create for AI-related cases could carry over to non-AI software cases, even if technical differences make those rules a poor fit.

Given courts’ tendency to treat software as a single category, the authors suggest “that healthcare organizations should not follow suit when evaluating the risks and benefits of AI adoption. AI is not one technology but a heterogeneous group with varying liability risks. Identifying AI tools with the greatest risk can help target risk-management interventions and clinical oversight.”

To manage the uncertainty surrounding liability, the authors recommended that clinicians and healthcare organizations follow several pieces of advice:

  • Because some AI tools are riskier than others, not all AI applications should be classified under the same umbrella.
  • The current buyer’s market for healthcare organizations creates opportunities to negotiate terms that may limit purchasers’ liability risk.
  • Stakeholders are urged to learn from the lessons of earlier generations of clinical decision-support tools.
  • Organizations should anticipate the evidentiary issues involved in AI litigation.
  • Organizations should remain cognizant that the defense in an AI-related case may differ from the defense offered in a standard malpractice case.

The authors leave readers with a final thought that ties their research together, writing that “… the best risk-management strategy is to prevent injuries. Following emerging guidelines for evaluating AI model safety can help minimize the human and financial cost that the leap into AI-informed medicine involves.”

Reference

1. Mello MM, Guha N. Understanding Liability Risk from Using Health Care Artificial Intelligence Tools. N Engl J Med. January 18, 2024.
