The World Health Organization (WHO) has issued an official statement urging caution in the use of artificial intelligence (AI)-generated large language model (LLM) tools.
WHO notes that LLMs are among the fastest-growing platforms in technology today. The family includes ChatGPT, Bard, and BERT, tools that imitate human understanding, processing, and production of written and spoken communication. They are attracting considerable interest from the health sector, researchers, and healthcare systems around the world.
The global health body emphasizes the need to rigorously assess risks when these models are used as decision-support tools or to enhance diagnostic workflows. While it acknowledges the potential of LLMs to assist clinicians and scientists, WHO warns that careful safeguards must accompany their deployment.
WHO expresses enthusiasm for AI technologies, including LLMs, that can support health professionals, patients, and the broader scientific community. At the same time, it is concerned that the caution normally exercised with any new technology is not being applied consistently in the development and use of LLMs.
The organization cautions that rushing to deploy untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in artificial intelligence, and thereby slow the global adoption of beneficial AI-enabled health solutions.
In related news, researchers have built a robot designed to help people living with dementia locate items such as glasses and mobile phones, illustrating how assistive AI can address practical needs in daily living.