
Short Abstracts

Artificial or Human Intelligence: Who is to Blame?

Authors:

Nils Broeckx, Dewallens & partners, BE

Christophe Lemmens, Dewallens & partners, BE
How to Cite: Broeckx N, Lemmens C. Artificial or Human Intelligence: Who is to Blame? Journal of the Belgian Society of Radiology. 2018;102(S1):21. DOI: http://doi.org/10.5334/jbsr.1637
Submitted on 31 Aug 2018; Accepted on 26 Sep 2018; Published on 17 Nov 2018

The use of artificial intelligence (AI) is an exciting new development in medicine, including radiology. AI algorithms can already interpret medical images to accurately diagnose certain pathologies (e.g. pneumonia). It is to be expected that the possibilities for using AI techniques in radiology will only increase in the future and will offer significant advantages in day-to-day practice. However, we must also take into consideration the possible risks of depending on AI software, such as incorrect assessments of the patient’s pathology. This raises the question of who is legally responsible for the actions of the AI entity: the manufacturer, the radiologist, or the AI entity itself?

Making AI entities responsible for their own actions definitely has a certain appeal because of the level of autonomy with which AI applications can analyse data and make important medical assessments. However, it is legally impossible for now to consider them ‘legal persons’, as this concept is reserved for entities that can participate in legal transactions independently (natural persons and associations/corporations). This means that AI entities currently have no rights and obligations and cannot be held liable for causing harm to the patient. It may someday become necessary to change the law and ‘personify’ AI entities as they gain more autonomy in society, but for now this remains science fiction.

It is thus clear that responsibility will remain a matter of human intelligence for now. But which human entity is to blame: the radiologist, the AI manufacturer, or someone else? To answer this question, a distinction can be made according to the consecutive phases of AI application in medicine: 1) the development of AI, 2) the use of AI in daily practice, and 3) the possible occurrence of harm to the patient due to the use of AI.

As for the development phase, the AI entity must be seen as a ‘medical device’, just like a magnetic resonance imaging scanner. The new EU Regulation on Medical Devices explicitly qualifies software intended for medical purposes as a medical device in its own right. This means that manufacturers will have to make sure that the AI entity meets certain safety and performance requirements. Importers and distributors will have to verify these requirements to a certain extent. The CE marking will indicate conformity with the medical device regulation.

From the moment the AI is used in practice, the radiologist will have to protect the patient’s data in accordance with data protection legislation, as well as the patient him- or herself during diagnosis and treatment in accordance with patient rights law.

With regard to data protection, the new EU General Data Protection Regulation (GDPR) will be of particular importance. Under the GDPR, the radiologist will have an important responsibility as ‘data controller’ for the processing of the patient’s data through AI software. This will, however, be a joint responsibility with the manufacturer (to be arranged in a contract) if the patient’s data are used for machine learning in order to further develop the AI product for the benefit of the manufacturer. Data controllership means, amongst other things, that the patient must be informed of which data are processed, how, and why. The patient will also have to be asked for his or her explicit consent by the data controller(s) if the AI entity makes medical decisions for that individual patient without any meaningful intervention from a physician. In that case, the patient will also have the right to know the logic behind the AI algorithm and to have the decision re-evaluated by a human being. Not meeting these data protection requirements may lead to high administrative fines for the data controller(s), even if no harm has (yet) been done to the patient.

With regard to patient rights, it is important to note that a patient has the right to receive quality care, which entails that the radiologist must always act in conformity with his duty of care, e.g. by using the medical device properly and by checking the result afterwards, because AI technology is not flawless. Every patient also has the right to informed consent before an intervention may take place. The patient rights law requires that a wide variety of information be provided, such as the purpose and nature of the intervention, including information on the use of AI, the relevant risks, and possible alternatives. Quite apart from the fact that a patient is permitted to refuse an intervention by means of AI, the patient should also be informed about the results and the actions to be taken to preserve his health. Finally, the duty to carefully maintain and preserve the patient’s health record requires the addition of information about the use of AI during the intervention.

In the (hopefully unlikely) event that the use of AI leads to harm for the patient, the question arises who can be held liable for damages. It goes without saying that the radiologist will be liable in case of inappropriate use of the medical device (also in a data protection sense) or an inappropriate assessment of the results obtained, leading to a wrong diagnosis. But even if the medical device is used appropriately, a ‘defect’ could always occur, making the device unsafe (see below). As in the case of a medical instrument breaking during an operation, Belgian case law usually holds the physician using the defective device automatically liable, based on a violation of the duty to use only safe devices.

In case of a defective medical device, the law moreover stipulates that the producer/importer/supplier is strictly liable for the damage caused by a defect in his product. A defective product is defined by law as a product that does not provide the degree of safety a person is entitled to expect, taking into account all circumstances, including the presentation of the product, the use to which it could reasonably be expected that the product would be put, and the time the product was put into circulation. The application of the rules on product liability will, however, not be that simple, because the burden of proof rests with the victim. The injured person is indeed required to prove the damage, the defect, and the causal relationship between defect and damage. Given that AI technology presupposes a self-learning system that learns from its experience and takes autonomous decisions, proving a defect in the technology with sufficient certainty seems insurmountable. In addition, a producer may invoke certain liability-excluding defences, such as the probability that, having regard to the circumstances, the defect which caused the damage did not exist at the time when he put the product into circulation or came into being afterwards, or that the state of scientific and technical knowledge at that time was not such as to enable the existence of the defect to be discovered. Because of these same intrinsic characteristics of AI technology, the success of at least one of these defences seems likely.

In view of all possible dangers and grounds for liability surrounding the use of AI technology, it is highly recommended to have appropriate liability insurance in place.

Competing Interests

The authors have no competing interests to declare.
