Pretending AI Tools Are Doctors Will Harm Millions on Medicare
Health insurers are already using AI to deny care—and are poised to do so at unprecedented scale
By Ryan Clarkson
Our parents and grandparents are at risk of losing their healthcare, and not only through draconian budget cuts and shutdown chaos.
For the last several years, private insurers like UnitedHealth, Humana, and Cigna have been quietly embedding artificial intelligence and advanced algorithms into their claims and prior authorization processes. These profit-hungry corporations tout AI tools as the solution to providing more cost-effective, efficient healthcare by rooting out fraud and waste. In fact, they are part of a brand-new, supercharged system that denies medically necessary care at an unprecedented scale and disproportionately affects the most vulnerable among us: the elderly, the chronically ill, and the underprivileged.
The federal government is now expanding this same model within traditional Medicare, through a new pilot program that promises to harness AI for the stated purpose of rooting out waste, fraud, and abuse. While that may sound prudent in theory, we know exactly where this road leads because we are already seeing the warning signs in the private Medicare Advantage system.
Patients and families across the nation’s largest private health insurance companies have been unlawfully denied care by insurers relying on AI and algorithms instead of real doctors to decide their claims. For some elderly patients, denial of access to care their doctors have prescribed has meant early death. These are not isolated incidents: they are the new norm. In numerous cases, patients have appealed their AI-triggered denial and won—only to have their insurer deny their claim again and again, in spite of repeated wins on appeal, until the patient either gave up or passed away. That is not healthcare. It is prioritizing corporate profits over people.
My firm is now representing patients in three class action lawsuits against UnitedHealth, Humana, and Cigna, in response to which the insurers have moved to dismiss our clients’ claims and shut the courthouse doors to them without a trial. In every instance, state and federal courts have permitted our claims to move forward. We are now in discovery, fighting to expose the wildly inaccurate AI algorithms behind these inhumane healthcare denials.
Even as these lawsuits proceed, however, policymakers are moving forward with programs that mirror the same unfair business practices we are challenging in court. The Centers for Medicare & Medicaid Services (CMS) recently announced a six-state, six-year so-called “pilot program” to test AI and machine learning within traditional Medicare, starting January 2026 in Arizona, New Jersey, Ohio, Oklahoma, Texas, and Washington. Pitched as an innovative way to reduce waste and control costs, it uses an incentive structure similar to the one that has harmed Medicare Advantage enrollees: deny care upfront, require patients to appeal, and then make the process so difficult that most never do.
Under this pilot, not only does CMS save money when care is denied, but so do the states and the technology vendors that provide the AI. That means everyone in the loop has a financial incentive to deny the care, except the ones who actually matter: the patient and their doctor.
In our litigation, we have uncovered claim denial reversal rates exceeding 90% when patients challenge these AI-driven decisions. That tells us this technology is stunningly inaccurate. And because most people never appeal, especially elderly patients unfamiliar with the process, those wrongful denials tend to stick.
We have also seen examples where so-called reviews by human physicians were little more than formalities. In one case, Cigna doctors, required by law to thoroughly and fairly review every claim denial, “rubber-stamped” over 300,000 denials in just two months, spending an average of only 1.2 seconds per case.
I am not against the use of technology in healthcare. In the right context, with the proper safeguards, AI and machine learning may be able to streamline administrative processes, identify fraud, and support physicians. But when it comes to making life-altering decisions about who receives care, what type of care they receive, and for how long, we must proceed with caution, transparency, and accountability.
Here are five steps to help agencies like CMS get there:
First, a human must always meaningfully review claims, working in good faith to find coverage, not merely a basis to deny it. Denials of care must be reviewed by a licensed physician who conducts a thorough, individualized assessment, not a rubber-stamp process.
Second, transparency is essential. Patients and providers must be informed when AI tools are involved in decision-making. They should know what role any AI or algorithm played and how to contest its conclusions.
Third, there must be a clear and frictionless appeals process. Appealing a denial should not be a maze of red tape. The process must be accessible, fast, and designed to give patients a fair chance, not wear them down until they give up or die from lack of care.
Fourth, there must be real oversight. We need regulators to audit these systems, track denial rates, and ensure that financial incentives are not leading to harm.
Finally, bias and data quality must be assessed. AI tools trained on incomplete or biased data can magnify disparities in care. Without rigorous testing and validation, these tools will do more harm than good.
The AI wave is already reshaping healthcare, and it shows no signs of slowing down. We must not allow innovation to supersede the principles of fairness, clinical judgment, and due process that healthcare depends on.
Our clients, who are everyday patients, are not asking for special treatment. They simply want the care their doctors prescribed. Congress, CMS, and state regulators owe a duty to the people, not corporate behemoths, and must act now to ensure that any use of AI in healthcare respects the rights and dignity of patients as human beings, not fodder for corporate profits.
Ryan Clarkson (@ryanjclarkson) is the Founder and Managing Partner of the Malibu-based public interest firm Clarkson Law Firm, which was founded in 2014 based on the principle that the law is an integral part of society’s checks and balances, empowering everyday citizens to create change.