WHO Europe warns AI in healthcare needs stronger safeguards


The World Health Organization logo is seen near its headquarters in Geneva, Switzerland, in 2023. – Reuters

COPENHAGEN: The increasing use of artificial intelligence in healthcare requires stronger legal and ethical protections for patients and medical staff, the World Health Organization’s European office warned in a report released on Wednesday.

The findings come from a study on how AI is deployed and regulated in European healthcare systems, based on input from 50 of the 53 states in the WHO European Region, which also covers Central Asia.

According to the report, only four countries, around 8%, have so far introduced a specific national AI strategy for health, while seven other countries are in the process of developing one.

“We are at a fork in the road,” Natasha Azzopardi-Muscat, director of health systems at WHO Europe, said in a statement.

“Either AI will be used to improve people’s health and well-being, reduce the burden on our exhausted healthcare workers and drive down healthcare costs, or it could undermine patient safety, compromise privacy and increase healthcare inequities,” she said.

Nearly two-thirds of countries in the region are already using AI-enabled diagnostics, especially in imaging and detection, while half of countries have introduced AI chatbots for patient engagement and support.

The WHO urged its member states to address the “potential risks” associated with AI, including “biased or poor quality outcomes, automation bias, erosion of clinicians’ skills, reduced doctor-patient interactions and unequal outcomes for marginalized populations.”

Regulation is struggling to keep pace with technology, WHO Europe said, noting that 86% of member states said legal uncertainty was the main barrier to AI adoption.

“Without clear regulatory standards, doctors may be reluctant to rely on AI tools and patients may not have a clear recourse if something goes wrong,” said David Novillo Ortiz, WHO’s regional adviser on data, artificial intelligence and digital health.

WHO Europe said countries should clarify responsibility, establish redress mechanisms for harm and ensure AI systems are “tested for safety, fairness and effectiveness in the real world before they reach patients”.


