Increasingly, health insurers depend on algorithms powered by artificial intelligence to determine if your care qualifies for coverage. These systems sift through health records and compare them to medical benchmarks, then deliver a digital thumbs up or down, using what UnitedHealth describes as clinical decision support tools.
You might assume this saves time for everyone, but the reality can be frustrating for patients. One common roadblock is “prior authorization,” in which insurers use AI to decide whether your treatment is “medically necessary” before greenlighting payment.
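To make the mechanics concrete, here is a minimal sketch in Python of how a rule-based prior-authorization check might compare a patient’s record against coded medical criteria. Everything in it is hypothetical: the procedure name, diagnosis codes, thresholds and the `review` function are invented for illustration, since no insurer has published its actual rules, and real systems likely layer statistical models on top of logic like this.

```python
# Hypothetical illustration only: these criteria, codes and thresholds
# are invented and do not describe any real insurer's system.
from dataclasses import dataclass

@dataclass
class Claim:
    """A simplified prior-authorization request."""
    procedure: str
    diagnosis_codes: set[str]     # ICD-10 codes on the patient's record
    prior_treatments_tried: int   # cheaper therapies already attempted

# Toy "medical benchmark": which diagnoses justify a procedure, and how
# many conservative treatments must be tried first (so-called step therapy).
GUIDELINES = {
    "mri_lumbar_spine": {
        "qualifying_diagnoses": {"M54.5", "M51.26"},
        "min_prior_treatments": 2,
    },
}

def review(claim: Claim) -> tuple[bool, str]:
    """Compare the claim against the guideline; return (approved, reason)."""
    rule = GUIDELINES.get(claim.procedure)
    if rule is None:
        return False, "no guideline covers this procedure"
    if not claim.diagnosis_codes & rule["qualifying_diagnoses"]:
        return False, "no qualifying diagnosis on record"
    if claim.prior_treatments_tried < rule["min_prior_treatments"]:
        return False, "step-therapy requirement not met"
    return True, "meets guideline criteria"

if __name__ == "__main__":
    request = Claim("mri_lumbar_spine", {"M54.5"}, prior_treatments_tried=1)
    approved, reason = review(request)
    print("APPROVED" if approved else "DENIED", "-", reason)
```

Even in this toy version, the structural problem critics describe is visible: the patient never sees the guideline, and a denial turns on whichever criteria the insurer chose to encode.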
When patients hear “no,” their options are bleak. Appeals are technically possible but rarely pursued: only about one in five hundred people invests the time, energy and money to fight a rejected claim. The alternatives are accepting a different treatment or paying out of pocket, and the latter is out of reach for most.
Insurers argue that AI speeds up decisions and avoids paying for treatments that might be unsafe or wasteful. They frame these tools as a win for both patients and the bottom line. “We use these systems to deliver timely, fair decisions for our members,” a spokesperson insists.
Yet a growing body of evidence suggests the opposite: life-altering care is sometimes delayed or denied as a result, especially for patients facing expensive or lifelong conditions.
Inside the Black Box
What leaves many uneasy is how little anyone really knows about how these decisions get made. Insurers decline to explain what their algorithms consider or how the calculations unfold. Some experts worry that the lack of transparency puts sick people at a hidden disadvantage.
Critics also say there’s a darker side to these tech-driven rejections. Some describe insurers simply running out the clock, knowing that seriously ill patients may die before an appeal can be settled. The cost of care, then, is never paid at all.
This hits vulnerable groups the hardest. Studies show that people from Black, Hispanic and other minority backgrounds, as well as LGBTQ patients, are more likely to see coverage denied. Those managing chronic illness often struggle the most, both medically and financially.
The suggestion that patients can “just pay for it themselves” rings hollow: for many, when insurance says no, the stark reality is going without medication or surgery altogether.
Regulation of these systems lags far behind their adoption. Unlike AI tools that guide diagnosis or power medical devices, insurance-focused algorithms escape scrutiny from the Food and Drug Administration. Insurers call the algorithms trade secrets, and there is no outside examination or requirement to prove their safety or value in practice.
State lawmakers have begun to take notice. This year, California passed a law requiring that doctors oversee algorithmic insurance decisions. Colorado, Georgia, Florida, Maine and Texas have efforts of their own underway, though most proposals leave insurers with significant control over defining what’s “medically necessary.”
Despite these new rules, enforcement and oversight remain patchy, and there is little standardization. Even the Centers for Medicare & Medicaid Services can require personalized decisions only for federal programs like Medicare; private insurance escapes much of this oversight.
Law professors and patient advocates see only one real fix: empowering the FDA to step in. The agency already reviews many AI-driven medical technologies and could bring national consistency to an industry with too many secret rules.
Still, legal obstacles remain: insurance algorithms are not classified as medical devices under the existing statute, so the law itself may need an update before the agency could act.
Until those changes happen, millions are left hoping the system will work in their favor when it matters most, though emerging AI tools for contesting insurance denials may offer patients some support.