EDITORIAL
Year : 2020  |  Volume : 9  |  Issue : 3  |  Page : 125-127

Artificial intelligence decision support systems and liability for medical injuries


Lise Aagaard
Joint Secretary of the Ethics Council and the National Committee on Health Research Ethics, Copenhagen, Denmark

Date of Submission: 25-May-2020
Date of Acceptance: 13-Jul-2020
Date of Web Publication: 08-Oct-2020

Correspondence Address:
Dr. Lise Aagaard
Joint secretary of the Ethics Council and the National Committee on Health Research Ethics, Copenhagen
Denmark

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jrpp.JRPP_20_65


How to cite this article:
Aagaard L. Artificial intelligence decision support systems and liability for medical injuries. J Res Pharm Pract 2020;9:125-7




Artificial intelligence (AI) is an emerging technology that combines software technologies with medical information to provide recommendations for the diagnosis and treatment of patients. Use of the technology might reduce the number of medical errors and misdiagnoses and possibly increase the quality of patient treatment.[1] Software failures can nevertheless lead to medical malpractice, for example when a device runs outdated software, when the software fails to warn, or when alarm fatigue sets in. As medical algorithms are not 100% reliable, the systematic implementation of AI decision support systems in patient treatment raises several questions regarding liability for malpractice.[1]

In the European Union, a medical device is defined as “any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease and which does not achieve its principal intended action by pharmacological, immunological or metabolic means, in or on the human body” (Medical Device Regulation [MDR], Article 2[1]).[2]

In vitro diagnostic medical devices are defined in the In Vitro Diagnostic Regulation (IVDR) as “any device, whether a reagent, reagent product, calibrator, control material, kit, instrument, apparatus, piece of equipment, software or system, whether used alone or in combination, that is intended to be used in vitro for the examination of specimens, including blood and tissue donations, derived from the human body with the purpose of providing information about a physiological condition, congenital physical or mental disabilities, or for monitoring of patient treatment” (IVDR, Article 2[2]).[3]

It follows from both the definition of a medical device and the definition of in vitro diagnostic medical devices that software and applications can be classified as medical devices. To be classified as a medical device, the software/application must play an active role in patient treatment, e.g., perform calculations to control the dispensing of medicines. Software that only performs searches, sends or stores data, or functions as a planning tool is not considered a medical device. Neither is a pedometer or an app that helps the user remember to take his/her medicine. The European Court of Justice decided in case C-329/16 that software using patient-specific data to guide the physician's prescribing of medicines by identifying potential contraindications, drug–drug interactions, and adverse events is a medical device.[4]

According to Article 69 of the European MDR, there is strict liability for injuries caused by medical devices, but in most jurisdictions, the medical malpractice rules will apply in the first instance when a defective device harms patients.[2] Under the European MDR/IVDR legislation, a producer, importer, or supplier of AI technologies is liable for any damage or harm caused by a defect in that product and must pay compensation to harmed patients. According to the medical malpractice rules, patients are only allowed to receive compensation for medical malpractice if the harm is caused by an error in the algorithm that could not be foreseen by the treatment-responsible physician, and if the injury is more serious than the patient could be expected to tolerate, given the severity of the patient's actual disease.[5]

Strict liability in the form of product liability is triggered by the abnormal performance of a product, but such a regime does not fit algorithms, as they are by nature uncertain and unpredictable. This creates a problem for claimants, who will face difficulties first in detecting the failure and second in proving causality between the failure and the medical injury. Therefore, medical malpractice regulations applying strict liability would likely be too burdensome for medical algorithms, as they will occasionally be wrong.[6]

When approving new medical devices, regulatory authorities must verify that producers have minimized the risk of bias in the algorithms. In doing so, the regulators need to know the origin of the data, as the algorithms are designed and trained on varying datasets. An example of a biased algorithm is one trained on healthy men but used in clinical practice on women and children.[7] Another issue is algorithms used in the reading of images and X-rays for the diagnosis of serious and potentially deadly diseases such as cancer, where the physician overlooks an abnormality because of undetectable errors in the algorithm.
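
As a purely illustrative aside to the bias example above, the following sketch (in Python, with hypothetical field names, datasets, and thresholds that are not drawn from any regulation or guideline cited here) shows one simple check a reviewer could run: compare how often each demographic group appears in an algorithm's training data with how often it appears in the intended clinical population, and flag groups, such as women and children, that are underrepresented in the training data.

# Minimal sketch (hypothetical data and thresholds): flag demographic groups
# that are common in the intended clinical population but rare in the data
# the algorithm was trained on, e.g., a model trained mostly on adult men
# but deployed for women and children.

from collections import Counter

def group_shares(records, key):
    """Return each group's share of the records, e.g., {"female": 0.6, ...}."""
    counts = Counter(record[key] for record in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def underrepresented_groups(training, clinical, key, min_ratio=0.5):
    """Return groups whose training share is below min_ratio of their clinical share."""
    train_shares = group_shares(training, key)
    clinic_shares = group_shares(clinical, key)
    flagged = []
    for group, clinic_share in clinic_shares.items():
        train_share = train_shares.get(group, 0.0)
        if train_share < min_ratio * clinic_share:
            flagged.append((group, train_share, clinic_share))
    return flagged

# Hypothetical example: training data dominated by adult men, while the
# intended clinical population also includes women and children.
training_data = [{"sex": "male", "age_group": "adult"}] * 90 + \
                [{"sex": "female", "age_group": "adult"}] * 10
clinical_population = [{"sex": "male", "age_group": "adult"}] * 40 + \
                      [{"sex": "female", "age_group": "adult"}] * 40 + \
                      [{"sex": "female", "age_group": "child"}] * 20

for key in ("sex", "age_group"):
    for group, train, clinic in underrepresented_groups(training_data, clinical_population, key):
        print(f"{key}={group}: {train:.0%} of training data vs. {clinic:.0%} of clinical population")

In practice, regulators would rely on the producer's full technical documentation rather than a toy comparison like this, but the underlying question is the same: whether the training data reflect the population in which the device will actually be used.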

Algorithms, particularly self-learning algorithms, must be 100% reliable every time in order to avoid the risk of the physician not detecting serious diseases or making wrong diagnoses that lead to unnecessary treatment of healthy people. Among physicians, enthusiasm for AI decision support systems using self-learning algorithms is increasing, but due to the limitations of these systems, health authorities must ensure that patients' rights are protected before implementing them in clinical practice. The unanswered question is, therefore, how regulatory authorities should validate self-learning algorithms.[2] Despite the revision of the medical device regulation in the European Union to increase the safety of medical devices in general, it is still unclear how, and by which criteria, AI decision support systems should be evaluated before marketing. Another question is how medical devices that are already marketed and used in clinical practice should be clinically re-evaluated in order to obtain the mandatory EU declaration of conformity (CE marking). The European MDR/IVDR regulations, which come into force in 2021, require that all medical devices be CE-marked before they are marketed and used in clinical practice.

As health-care legislation will always lag behind technological development, medical injuries caused by algorithms must be covered by national compensation schemes without limitations, as patients in many cases will not have the opportunity to choose treatments that do not involve devices with integrated AI decision support systems.

In the present malpractice systems, physicians' liability for medical malpractice is evaluated against common medical standards of care. If the patient is treated below these standards, the treatment-responsible physician will be liable for medical malpractice.[2],[6] With respect to algorithms, the question is whether the patient could expect the physician to detect an algorithm error. If the physician cannot foresee how the algorithm makes its decisions, the physician will not be liable. If the physician could foresee that the algorithm makes a mistake, the physician will only be liable if the injury caused by the algorithm is more severe than what the patient could be expected to tolerate, given the severity of the patient's actual disease.[7] Hospitals' implementation of AI decision support systems as standard procedures in diagnosis and treatment is problematic if the physician cannot detect the error and is forced to use the AI system. In such cases, it is unreasonable for the physician to assume liability for malpractice due to algorithm errors.

In many jurisdictions, authorized health-care professionals, e.g., physicians, dentists, and nurses, can be held personally liable for patient injuries. Article 17 of the Danish authorization act states that authorized health-care professionals are obliged to show “care and conscientiousness in their work.” Violation of the authorization act can lead to temporary or permanent restriction of their legal right to practice. To date, there are no legal cases in which Danish physicians or other health-care professionals have been held legally responsible for patient injuries caused by errors in medical algorithms.

In conclusion, clinical decision-making systems have until now been based on simple algorithms that are easy to validate, but as more advanced algorithms become available, several ethical questions will arise, and regulatory authorities will face new and unknown challenges in assessing the efficacy and safety of these algorithms.


Authors' Contribution


The author contributed to the entire work.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Sullivan HR, Schweikart SJ. Are current tort liability doctrines adequate for addressing injury caused by AI? AMA J Ethics 2019;21:E160-6.
2. European Commission. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC. Available from: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32017R0745&from=EN.
3. European Commission. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on In Vitro Diagnostic Medical Devices and Repealing Directive 98/79/EC and Commission Decision 2010/227/EU. Available from: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32017R0746&from=EN.
4. European Court of Justice. Case C-329/16. Snitem and Philips France. Available from: http://curia.europa.eu/juris/liste.jsf?language=en&num=C-329/16.
5. Aagaard L, Kristensen K. Off-label and unlicensed prescribing in Europe: Implications for patients' informed consent and liability. Int J Clin Pharm 2018;40:509-12.
6. Zollers FE, McMullin A, Hurd SN, Shears P. No more soft landings for software: Liability for defects in an industry that has come of age. Santa Clara High Tech LJ 2004;21:745-82.
7. Ursenbach J, O'Connell ME, Neiser J, Tierney MC, Morgan D, Kosteniuk J, et al. Scoring algorithms for a computer-based cognitive screening tool: An illustrative example of overfitting machine learning approaches and the impact on estimates of classification accuracy. Psychol Assess 2019;31:1377-82.




 
