
AI chatbots struggle with medical advice, study finds


Overwhelmed healthcare systems and rising medical expenses have encouraged many people to consult artificial intelligence-driven chatbots like ChatGPT for their health inquiries. According to recent research, about one in six adults in the United States seeks guidance from these AI assistants at least monthly for wellness concerns.

Yet, a study led by Oxford University has highlighted major pitfalls in relying on these tools for medical decisions. A significant issue is that users often struggle to provide the details needed for chatbots to deliver medically sound recommendations, which increases the risk of receiving inaccurate guidance.

Study Finds Gaps in Chatbot Medical Use

Researchers brought together approximately 1,300 UK participants and presented them with clinical scenarios crafted by medical professionals. Participants attempted to identify health issues and decide on appropriate responses using leading chatbots, including OpenAI's GPT-4o (the model behind ChatGPT), Cohere's Command R+, and Meta's Llama 3.

Surprisingly, those using chatbots were less likely to correctly detect pertinent health conditions compared to those relying on their own judgment or standard web searches. These chatbots often led users to misunderstand the severity of their health issues, amplifying the risk of poor choices.

The Oxford study further found that many users left out crucial facts when conversing with the bots and sometimes received answers that blended sensible and questionable advice. Adam Mahdi, one of the researchers, noted that prevailing testing approaches for chatbots fail to capture the complexity of genuine human-AI interaction.

Tech companies are nonetheless still investing in AI tools that promise improved healthcare outcomes. Apple is reportedly exploring solutions for lifestyle guidance, Amazon is examining AI for medical database analysis, and Microsoft is developing systems to triage patient inquiries.

Still, industry sentiment remains divided on whether AI is appropriate for complex health matters. The American Medical Association cautions against doctors depending on chatbots for clinical advice, and even prominent tech firms urge users not to treat chatbot outputs as diagnoses.

Mahdi advocates for more thorough real-world evaluations, mirroring the rigor of clinical trials, before widespread adoption occurs. Until then, turning to established, trusted sources remains the safer way to make important healthcare choices.
