The future of healthcare is being shaped by artificial intelligence, but experts warn that this powerful technology could also complicate the process of assigning blame for medical errors. As AI tools become more prevalent in clinical settings, the question of liability becomes increasingly complex.
The rapid advancement of AI in healthcare has led to the development of various tools, from algorithms that interpret medical scans to systems that assist in diagnoses. These innovations aim to improve patient care and streamline hospital operations. However, the potential for legal complications arises when something goes wrong.
Professor Derek Angus, from the University of Pittsburgh, highlights a critical concern: "There will be instances where people perceive something went wrong, and they will look for someone to blame."
The JAMA Summit on Artificial Intelligence brought together a diverse group of experts, including clinicians, technology companies, regulatory bodies, insurers, ethicists, lawyers, and economists, to explore these challenges. The summit's report, co-authored by Professor Angus, delves into the legal and ethical implications of AI in healthcare.
One of the key issues is the difficulty of proving fault where AI products are involved. Patients might struggle to demonstrate negligence in the design or use of an AI system: information about the AI's inner workings can be hard to obtain, and proposing a safer alternative design, or proving that a poor outcome was caused by the AI, may be equally difficult.
Professor Glenn Cohen from Harvard Law School explains, "The complex relationships between parties involved in AI-related lawsuits could make it challenging to determine liability. They might point fingers at each other, and existing contracts might reallocate liability or lead to indemnification lawsuits."
Professor Michelle Mello, another author of the report, from Stanford Law School, acknowledges that courts are capable of working through these legal questions but cautions that doing so will take time and may produce inconsistent rulings. This uncertainty, she notes, raises costs for everyone in the AI innovation and adoption ecosystem.
The report also raises concerns about the evaluation of AI tools, many of which operate outside the oversight of regulatory bodies like the US Food and Drug Administration (FDA).
Professor Angus emphasizes the unpredictability of AI tool deployment, stating, "Clinicians seek improved health outcomes, but regulatory authorities may not always require proof of effectiveness. Once deployed, AI tools can be used in various clinical settings, with different patient populations and users of varying skill levels, making it challenging to ensure consistent performance."
The report also highlights barriers to evaluating AI tools, such as the fact that many can only be fully assessed once they are in clinical use, and the high cost and complexity of current assessment methods. To address these challenges, it calls for greater investment in digital infrastructure so that AI tools in healthcare can be properly evaluated.