As LLMs become increasingly involved in healthcare, scientists are calling for ethical standards to govern their use.

A recent study argues that ethical guidelines for artificial intelligence (AI) in healthcare are lacking, despite the technology's growing use in areas such as medical imaging analysis and drug discovery. The study, led by Joschka Haltaufderheide and Robert Ranisch from the University of Potsdam, reviewed 53 articles to map the ethical issues surrounding large language models (LLMs) in healthcare.

AI is already being used across many sectors of healthcare, including diagnostic imaging interpretation, drug development and discovery, personalized treatment planning, patient triage and risk assessment, and medical research. Models have predicted Alzheimer’s disease with 80% accuracy up to six years before a clinical diagnosis, and the first AI-generated drugs are heading to clinical trials. In one collaboration, OpenAI and Color Health have built a system to support clinicians in diagnosing and treating cancer.

However, despite the potential advantages of LLMs in medicine, the study’s authors highlight several ethical implications. While these models are useful for analyzing data, providing information, and supporting decision-making, they can also generate biased or harmful content. In particular, the authors note the problem of “hallucinations,” where an LLM generates plausible yet false information that could lead to incorrect diagnoses or treatment. Because developers often cannot explain how their models operate, a limitation known as the “black box” problem, such errors are extremely difficult to trace and correct.

The study also points to biases perpetuated by LLMs. The researchers note that these biases could lead to unfair treatment of disadvantaged groups, exacerbate existing inequalities, or cause harm through selective accuracy. As an example, they cite an incident in which ChatGPT and Foresight NLP showed racial bias against Black patients. The models also raise concerns about the confidentiality, privacy, and security of patient data.

To address these concerns, the study calls for human oversight and the creation of universal ethical guidelines for healthcare AI. As the use of AI in healthcare expands rapidly, safeguards and guidelines must keep pace to ensure the technology is used safely and ethically. Recently, more than 100 scientists launched a voluntary initiative outlining safety rules for AI protein design, highlighting the need for governance efforts to catch up with the rapid pace of technological advancement.
