WHO advocates for safe and ethical artificial intelligence in healthcare.
The World Health Organization (WHO) urges caution in the use of large language model (LLM) tools generated by artificial intelligence (AI), in order to protect and enhance human well-being, safety, and autonomy, and to preserve public health.
LLMs, including ChatGPT, Bard, BERT, and many others, are among the most rapidly expanding platforms that imitate understanding, processing, and producing human communication.
Their rapid public dissemination and growing experimental use for health-related purposes are generating significant enthusiasm about their potential to meet people's health needs.
Whether LLMs are employed to broaden access to health information, as decision-support tools, or even to strengthen diagnostic capacity in under-resourced settings, the risks must be examined carefully.
While WHO is enthusiastic about the appropriate use of LLMs to support researchers, scientists, health-care providers, and patients, it is concerned that the caution normally exercised for any new technology is not being applied consistently to LLMs. This includes universal adherence to the core principles of openness, diversity, public participation, professional oversight, and rigorous evaluation.
Rapid adoption of unproven systems could lead to errors by medical professionals, cause harm to patients, and erode trust in AI, thereby undermining or delaying the technology's potential long-term uses and benefits worldwide.
Before AI is widely deployed in routine medicine and health care, WHO recommends that these concerns be addressed and that concrete evidence of benefit be measured, whether by patients, carers, or health-system administrators and policymakers.