
Medical diagnostic artificial intelligence; medical, safety, security, and legal considerations

Ludvigsen, Kaspar

Abstract

The role and specifications of artificial intelligence (AI), and the legal foundations on which it currently rests, are thin. In many fields, AI bears a striking resemblance to existing software, often diverging only through its inclusion of machine learning, whether or not it uses neural networks, or its use of small or large language models. This is also true for an emerging type of medical support software: Medical Diagnostic Artificial Intelligence (MDAI). This paper explores MDAI as an emerging phenomenon through four different disciplines, creating an interdisciplinary analysis of the concept. Like AI that makes decisions in legal systems, MDAI faces the challenge of taking over, or supporting, an intrinsically important yet vaguely, almost esoterically, understood task: diagnosing patients.
This brings uncertainty, and the paper first clarifies how diagnosis is understood in the medical sciences, through medical oncology and methodology, and discusses the problems this poses for the development and deployment of MDAI. Diagnosis is only one part of the process of healing, can never stand alone, and is deliberately kept flexible in case it is wrong. None of this can, at the current stage, be implemented technically and medically into MDAI.
The paper then uses System-Theoretic Process Analysis (STPA), a safety engineering tool, to map the safety and security (including cybersecurity) issues that MDAI faces at an overarching level. Applying it shows how vulnerable MDAI is in terms of safety and security, and how it requires layers of systems to protect it from failing the patient, the user, or the provider. It is questionable whether the cost of keeping MDAI safe and secure can match the revenue it is envisioned to create.
Additionally, a cybersecurity analysis covering threat modelling and adversarial attacks is included, in the form of a taxonomy of the security failures MDAI can suffer at a general level and an analysis of the different adversaries it may face.
Because of these problems, the paper then discusses the legal issues that arise, in the areas of healthcare law, security and safety legislation, and the avenues that private law allows for the coverage of damages. All of the legal sources considered are from the European Union, giving an overview of how they could be interpreted and understood in practice. MDAI faces hurdles in having to comply with the future AI Act, the Medical Device Regulation (MDR), and all cybersecurity legislation, such as the Cyber Resilience Act (in spirit), the Cybersecurity Act (standards), and the future Cyber Solidarity Act. Additionally, MDAI will be considered critical infrastructure, meaning that NIS2 applies to it in full. In safety, the current and upcoming product liability directives apply directly, as do the safety tools in the MDR. As for private law, the litigatory tools found in both the product liability directives and the MDR apply, meaning that patients harmed by MDAI making wrongful diagnoses, or by similar outcomes of cyberattacks, have clear avenues to sue the manufacturers or the hospitals that use them.
The paper finds that MDAI may not, at this stage, be able to fulfil the criteria that general medical methodology requires to reach a diagnosis. It may fulfil the initial stages of interpreting data, but that is only a small part of the process of healing any given patient. This has consequences for the modelling of safety and security, as MDAI should never occupy the highest role in such models: the controller. In practice, however, it will be deployed as such regardless, with a lack of oversight, akin to well-known AI cases such as SyRI in the Netherlands. The safety and security findings indicate that correct MDAI behaviour often entails a higher chance of losses (such as harm to patients' physical and mental health): MDAI that works as intended does not by itself guarantee safety or security, as these are not necessarily its main purposes. The role of human oversight is therefore paramount, and because of the unsafe control actions that MDAI additionally exhibits, there is good reason never to leave it unsupervised. The legal consequences are akin to malpractice by medical practitioners, except that MDAI with poor cybersecurity will face additional claims from those damaged by adversarial failures. Finally, MDAI will be caught in a web of legal rules that may conflict, making it a shaky and risky endeavour to pursue in the first place.

Citation

Ludvigsen, K. (2024, July). Medical diagnostic artificial intelligence; medical, safety, security, and legal considerations. Presented at TILTING Perspectives 2024, Tilburg University.

Presentation Conference Type: Presentation / Talk
Conference Name: TILTING Perspectives 2024
Start Date: Jul 8, 2024
End Date: Jul 10, 2024
Deposit Date: Aug 22, 2024
Peer Reviewed: Not Peer Reviewed
Public URL: https://durham-repository.worktribe.com/output/2763953
Publisher URL: https://www.tilburguniversity.edu/about/schools/law/departments/tilt/events/tilting-perspectives/2024