Adopting AI systems too quickly without full testing could lead to ‘errors by health care workers’: WHO

As the artificial intelligence train barrels on with no signs of slowing down (some studies have even predicted the AI market will grow by more than 37% per year between now and 2030), the World Health Organization (WHO) has issued an advisory calling for "safe and ethical AI for health."

The agency recommended caution when using “AI-generated large language model tools (LLMs) to protect and promote human well-being, human safety and autonomy, and preserve public health.”

ChatGPT, Bard and BERT are currently some of the most popular LLMs.

In some cases, the chatbots have been shown to rival real physicians in terms of the quality of their responses to medical questions.


While the WHO acknowledges that there is “significant excitement” about the potential to use these chatbots for health-related needs, the organization underscores the need to weigh the risks carefully.

"This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation," the agency said.


The agency warned that adopting AI systems too quickly without thorough testing could result in “errors by health care workers” and could “cause harm to patients.”

WHO outlines specific concerns

In its advisory, WHO warned that LLMs like ChatGPT could be trained on biased data, potentially “generating misleading or inaccurate information that could pose risks to health equity and inclusiveness.”


There is also the risk that these AI models could generate incorrect responses to health questions while still coming across as confident and authoritative, the agency said.


“LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content,” WHO stated.


Another concern is that LLMs might be trained on data obtained without the consent of those who originally provided it, and that they may not have proper protections in place for the sensitive data patients enter when seeking advice.


“While committed to harnessing new technologies, including AI and digital health, to improve human health, WHO recommends that policy-makers ensure patient safety and protection while technology firms work to commercialize LLMs,” the organization said.

AI expert weighs risks, benefits

Manny Krakaris, CEO of the San Francisco-based health technology company Augmedix, said he supports the WHO’s advisory.

“This is a quickly evolving topic and using caution is paramount to patient safety and privacy,” he told Fox News Digital in an email.


Augmedix leverages LLMs, along with other technologies, to produce medical documentation and data solutions.

“When used with appropriate guardrails and human oversight for quality assurance, LLMs can bring a great deal of efficiency,” Krakaris said. “For example, they can be used to provide summarizations and streamline large amounts of data quickly.”


He did highlight some potential risks, however. 

“While LLMs can be used as a supportive tool, doctors and patients cannot rely on LLMs as a standalone solution,” Krakaris said.

“LLMs generate data that appear accurate and definitive but may be completely erroneous, as WHO noted in its advisory,” he continued. “This can have catastrophic consequences, especially in health care.”


When creating its ambient medical documentation services, Augmedix combines LLMs with automatic speech recognition (ASR), natural language processing (NLP) and structured data models to help ensure the output is accurate and relevant, Krakaris said.

AI has ‘promise’ but requires caution and testing

Krakaris said he sees a lot of promise for the use of AI in health care, as long as these technologies are used with caution, properly tested and guided by human involvement.


“AI will never fully replace people, but when used with the proper parameters to ensure that quality of care is not compromised, it can create efficiencies, ultimately supporting some of the biggest issues that plague the health care industry today, including clinician shortages and burnout,” he said.
