
Christina Antonelli


AI chatbots top 2026 list of health tech hazards

CLEVELAND, Ohio — More than 40 million people a day turn to ChatGPT for health information, according to a recent analysis, even as artificial intelligence chatbots used for healthcare top the Emergency Care Research Institute’s 2026 list of the worst health technology hazards.

Chatbots can be useful, but they can also give false or misleading information, leading to patient harm, according to a recent report from the nonpartisan patient safety organization.

Chatbots that rely on large language models — such as ChatGPT, Claude, Copilot, Gemini, and Grok — produce human-like and expert-sounding responses to users’ questions.

The tools are neither regulated as medical devices nor validated for healthcare purposes, but they are increasingly used by clinicians, patients, and healthcare providers, according to the report.

“(AI chatbots) are programmed to sound confident and to always provide an answer to satisfy the user, even when the answer isn’t reliable,” the institute’s report said.

Chatbots have suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies, and even invented body parts in response to medical questions, all while sounding like a trusted expert, the report said.

“Medicine is a fundamentally human endeavor,” said Emergency Care Research Institute CEO Dr. Marcus Schabacker. “While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals. Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI’s limitations.”

The risks could become an even greater concern as rising healthcare costs lead more patients to rely on chatbots as a substitute for professional medical advice.

Chatbots can also exacerbate existing health disparities, according to the institute’s experts. Any biases embedded in the data used to train chatbots can distort how the models interpret information, leading to responses that reinforce stereotypes and inequities.

Patients, clinicians, and other chatbot users can reduce risk by educating themselves on the tools’ limitations and always verifying information obtained from a chatbot with a knowledgeable source.

Other health technology hazards on the institute’s 2026 list included inadequate device cleaning instructions, unpreparedness for a sudden loss of access to electronic systems and patient information, and substandard and falsified medical products.

For 18 years, the Emergency Care Research Institute’s Top 10 Health Technology Hazards report has identified critical healthcare technology issues. The organization draws on data from incident investigations, reporting databases, and independent medical device testing.
