Scientists call for ethical guidelines as LLMs play wider roles in healthcare

According to a new study, ethical guidelines are conspicuously absent as AI continues to transform healthcare, from drug discovery to medical imaging analysis.

The study, by Joschka Haltaufderheide and Robert Ranisch of the University of Potsdam and published in npj Digital Medicine, analyzed 53 articles to map the ethical landscape surrounding large language models (LLMs) in medicine and healthcare.

It found that AI is already being employed across various healthcare domains, including:

  • Diagnostic imaging interpretation
  • Drug development and discovery
  • Personalized treatment planning
  • Patient triage and risk assessment
  • Medical research and literature analysis

AI’s recent impacts on healthcare and medicine are nothing short of spectacular.

Researchers recently built a model for early Alzheimer’s detection that predicts with 80% accuracy whether someone will be diagnosed with the disease within six years.

The first AI-generated drugs are already heading to clinical trials, and AI-powered blood tests can detect cancer from single DNA molecules.

In terms of LLMs, OpenAI and Color Health recently announced a system for helping clinicians with cancer diagnosis and treatment.

While amazing, these advancements are creating a sense of vertigo. Might the risks be slipping under the radar?

Looking specifically at LLMs, the researchers state, “With the introduction of ChatGPT, Large Language Models (LLMs) have received enormous attention in healthcare. Despite potential benefits, researchers have underscored various ethical implications.”

On the benefits side: “Advantages of using LLMs are attributed to their capacity in data analysis, information provisioning, support in decision-making or mitigating information loss and enhancing information accessibility.”

However, they also highlight major ethical concerns: “Our study also identifies recurrent ethical concerns connected to fairness, bias, non-maleficence, transparency, and privacy. A distinctive concern is the tendency to produce harmful or convincing but inaccurate content.”

This issue of “hallucinations,” where LLMs generate plausible but factually incorrect information, is particularly concerning in a healthcare context. In the worst cases, it could result in incorrect diagnoses or treatment.

AI developers often can’t explain how their models arrive at their outputs, an issue known as the “black box problem,” which makes these erroneous behaviors exceptionally tricky to fix.

The study raises alarming concerns about bias in LLMs, noting: “Biased models may result in unfair treatment of disadvantaged groups, leading to disparities in access, exacerbating existing inequalities, or harming persons through selective accuracy.”

They cite a specific example of ChatGPT and Foresight NLP showing racial bias against Black patients. A recent Yale study likewise found racial bias in how ChatGPT handled radiography images when racial information about the scans was provided.

LLM bias against minority groups is well documented and can have insidious consequences in a healthcare context.

Privacy poses another risk: “Processing patient data raises ethical questions regarding confidentiality, privacy, and data security.”

In terms of addressing these risks, human oversight is paramount. The researchers also call for universal ethical guidelines for healthcare AI to prevent damaging scenarios from developing.

The AI ethics landscape in healthcare is expanding rapidly as the breakthroughs keep rolling in.

Recently, over 100 leading scientists launched a voluntary initiative outlining safety rules for AI protein design, underscoring how the technology is often moving too fast for safety to keep pace.
