The hype surrounding artificial intelligence (AI) in the medical field is still going strong [1]. The potential and capabilities of AI systems are clear to most people, but few are equally aware of how important it is to design them correctly.
As efficient, accessible, and cost-effective as AI systems can be when built correctly, they risk becoming racist, misogynistic, and inequitable when built incorrectly [2, 3].
Technological development is fast-paced, but the ethical debate lags behind, and we clinicians, absorbed in our daily clinical work, risk being left out of it. Three areas are central to the ethics of AI: the question of responsibility, systemic bias, and clinicians' participation in development.
The question of liability can be illustrated as follows. Rut undergoes an X-ray examination. The AI system finds nothing abnormal, and Rut receives a reassuring message. A year later she seeks care again, and it turns out that she has a tumor in her right lung. On review, the tumor is already visible in the first image. Who is responsible? The engineers who developed the algorithm, the organization that purchased the system, or the doctor who passed on the message?
One solution is to make AI systems advisory only, so that the physician who makes the decision bears the legal responsibility. But is he or she also morally responsible? One could certainly argue the opposite.
Implementing tools that streamline the assessment process risks raising production demands. To keep up, the doctor's reviews become less rigorous. The risk is then great that the doctor places too much trust in the robotic instrument and makes decisions on formal authority alone, without real power or influence, acting as a so-called rubber stamp [4]. The question is whether the doctor in such cases should bear legal and moral responsibility.
Cognitive bias (systematic errors in thinking) is not found only in the human brain. AI systems can be affected too, and there the risks are far greater. An individual doctor with harmful bias creates problems for the individual patients he or she sees, while harmful bias in an AI system can have disastrous consequences at scale.
Much of the problem stems from underrepresentation in the training data [3]. Just as a doctor learns from the cases he or she sees, an AI system learns from the data available to it. A doctor trained only to evaluate skin changes in people with light skin becomes good at that, but is less good at evaluating changes in people with dark skin [5, 6]. In the databases used to train skin assessment algorithms, dark-skinned people are severely underrepresented [7]. If this continues, there is a significant risk that health inequalities will become entrenched [3].
The third aspect concerns participation in the development of algorithms. We doctors should not remain in the passenger seat and allow technical professions to develop healthcare algorithms without our influence. Patient representatives must also be directly involved, but involving clinicians who bear clinical responsibility for the patient can indirectly strengthen the patient focus as well.
An AI engineer, the CEO of a medical technology company, or anyone else at a distance from the sick or suffering person risks reducing the patient to a data point. A doctor can, of course, act in the same way, but there is a difference: being given the privilege and trust of responsibility for someone else's life instills a sense of duty and belonging. A clinician, drawing on his or her own experience, can help us see the individual patient in a sea of data.
We strongly believe that doctors and AI systems together can do amazing things for healthcare. But we physicians must do our part to build ethically sustainable healthcare. Traditional learning objectives in medical education must therefore make room for ethical reflection on new technological innovations, including AI algorithms.
Läkartidningen 32–33/2022 (Lakartidningen.se)