
AI Error Stirs NHS Mix Up With False Diabetes Diagnosis

  • A London man wrongly got a diabetes diagnosis from an inaccurate AI summary in his medical records.
  • AI errors listed fake illnesses and nonexistent hospitals, sparking concern about patient safety checks.
  • The case exposes risks as Britain speeds up health AI adoption without enough human oversight.

The entries attributed chest pain, shortness of breath, and even coronary artery disease to him, all inaccurate given that he had only sought help for severe tonsillitis.

Not only did the system record him as a diabetic on several medications, but it also listed a fictitious hospital called “Health Hospital” at “456 Care Road” in a made-up city, complete with a fake postcode.

AI’s Clumsy Steps in Health Records

Dr. Matthew Noble from the health service emphasized that this was “a one off case of human error.” According to him, a staff member had noticed the original mistake but, due to distraction, mistakenly saved the incorrect version. The NHS practice involved uses AI only under limited supervision, he added.

The erroneous summaries, generated by an AI primary care tool called Annie, then snowballed into invitations and medical decisions that should never have reached the patient.

While most AI systems in British hospitals require a person to check their output, concern is growing inside the NHS. “These human error mistakes are fairly inevitable if you have an AI system producing completely inaccurate summaries,” one employee admitted. They worried such errors could go unnoticed, especially among older or less health-literate patients.

Anima Health, creators of the Annie tool, did not comment on the incident. Dr. Noble noted, “No documents are ever processed by AI, Anima only suggests codes and a summary to a human reviewer in order to improve safety and efficiency. Each and every document requires review by a human before being actioned and filed.”

But the reality proved more complicated, as this misdiagnosis went unchecked into official records.

The blunder comes just as the National Health Service ramps up AI adoption to ease pressure on services and save money. Officials are moving quickly, but the rush raises safety and regulatory questions.

The Annie tool is listed as a Class I medical device, meant to assist clinicians without making decisions automatically. Regulators require that all information drafted by such AI tools be fully reviewed by a human before use. Some experts, such as Imperial College London’s Professor Brendan Delaney, argue that the rules must keep pace with rapid advances, since outputs once intended as mere suggestions are now informing real-world care.

A recent government plan aspires to make British healthcare the most AI-advanced in the world, streamlining staff workloads and supporting patients with innovative tech. But officials have already warned some software could put patients at risk or break data protection laws.

The patient at the center of the mix-up remains reflective, acknowledging AI’s promise while urging strict vigilance. “LLMs are still really experimental, so they should be used with stringent oversight,” he said. His experience, he believes, should encourage thoughtful innovation rather than halt progress.
