Next year, Medicare will begin testing an algorithm designed to cut unnecessary care, requiring hospitals and doctors in six states to obtain approval for certain procedures before treating patients. Unlike the mostly hands-off approach of traditional Medicare, this is a direct import of strategies long used by private insurers.
The pilot, set to run through 2031 in six states including Arizona, Texas, and Washington, marks a notable change for elderly and disabled Americans. Many in the industry see echoes of prior authorization policies long criticized for delaying patient care and burdening providers.
Critics call the move oddly timed, coming just after fresh pledges from both the government and insurers to lighten the prior authorization burden. Some experts and lawmakers are puzzled by the mixed message.
Rep. Suzan DelBene, a Washington Democrat, called it “hugely concerning,” accusing the administration of saying one thing and doing another.
Patients are already frustrated by coverage denials, and recent surveys show most Americans view prior authorization as a serious hurdle. Doctors are especially wary of a process driven by an algorithm rather than human judgment, fearing it could compromise their ability to care for patients.
AI, Algorithms, and Accountability
The government’s new initiative, known as WISeR, will apply the algorithm to select procedures linked to fraud or waste, such as certain knee treatments and device implants. Some patient protections exist (urgent and inpatient care would not be held up), but much will depend on how the technology is used and overseen.
CMS officials promise that every decision flagged by the system will still be reviewed by a licensed clinician, not just a computer. Vendors must follow strict guidelines, and their compensation cannot be tied directly to the number of denials.
Jennifer Brackeen of the Washington State Hospital Association warns that “shared savings arrangements mean that vendors financially benefit when less care is delivered,” which could create subtle pressure to block needed treatments.
Doctors and watchdogs question whether the program’s safeguards go far enough. AI, they warn, can produce opaque and inconsistent decisions, especially when financial incentives creep in.
According to Dr. Vinay Rathi, a physician at Ohio State University, the government’s thinking “is not fully fleshed out.” He added, “I’m not sure they know, even, how they’re going to figure out whether this is helping or hurting patients.”
Technology already has a strong foothold in insurance. Recent research found that some companies’ so-called review processes take only seconds, with the computer effectively overruling doctors’ recommendations.
Industry voices argue AI can spot waste with less bias and faster turnaround, but lawsuits and congressional hearings show that skepticism remains strong. Physicians fear algorithms will dismiss the nuance and context of each patient’s case, leaving the sickest people to fight through appeals or shoulder large bills themselves.
With plans to expand AI even further, and a Congress divided over how far is too far, the pilot’s launch leaves Medicare’s future uncertain. Everyone, from lawmakers to clinics, is watching closely, unsure whether computers will truly help or simply deepen a long-running problem.