20 Comments
KnockKnockGreenpeace:

My doctor now uses AI to take notes during exams, removing one more tool for her to create memorable insights about me, whom she sees just a couple times per year, if that. Soon, we'll just be fed into machine algorithms completely and cut out the middlemen--actual doctors.

It's clear that Republicans and billionaires want old and sick people to die--unless it's them. Trump will probably access some sort of perpetual life support for his last remaining cells. At that point, let the grid that he so gleefully has put at risk fail.

Lauren:

There's a massive difference between using it for taking notes and having insurance say yes/no to life-saving surgeries without a human ever seeing the document.

Bradford Hamilton:

...not as much difference as you might think.

First of all, in my particular health care organization, AI by doctors is (still) "opt-in" only - patients have the right to be told:

1.) We have this AI tool...

and

2.) You are always free to tell us *not* to use the tool.

The trouble starts when the tool is used - I have seen far too many instances where a patient is mis-diagnosed. This information is sent to the payor, who may or may not be using their own AI tool to pay or deny a claim.

If you think that the "secret" use of AI tools is only limited to the three payors mentioned in the article, well, I have a lovely bridge for sale. Of course, I can't "prove" that a particular payor is implementing such a tool, but the sheer volume of recent denials from one particular payor would tell me otherwise.

Bradford Hamilton:

"Trump will probably access some sort of perpetual life support for his last remaining cells."

"They Saved Hitler's Brain" - check it out:

The song:

https://www.youtube.com/watch?v=3WkETZFsaW4

The movie:

https://www.imdb.com/title/tt0265870/

: - )

KnockKnockGreenpeace:

I think Trump got a slice of it!

Michelle Jordan:

I can see where AI can be useful in some ways, but my biggest concern is that, when used in medicine, it could be inaccurate in diagnostic applications or lead a doctor down the wrong path, especially if that doctor has a very heavy patient load and doesn't have the time to be more vigilant, or simply isn't experienced enough in a specific field of medicine. It really all depends on how they're using AI.

Susan Stone:

The biggest problem I see with using AI for diagnosis is something I learned while working in group health insurance: medical logic is very different from logic for everything else. I would spend a couple of hours a week with a company doctor to get opinions and/or sign-offs on claims. On numerous occasions I presented the best reasoning I could come up with, only to find out that medicine viewed the situation very differently. Unless AI goes through medical school, internship, and residency, it could be a real problem.

Don Nikaitani:

A widespread ploy of Medicare Advantage programs is to use AI to reflexively deny approval of clinically indicated MRIs for refractory low back pain and sciatica, in the hope that patients will either die or change their insurance coverage while enduring the delay of a program of physical therapy that is virtually certain to fail.

Pat Jones Garcia:

Even though a neurosurgeon told me that physical therapy would be worthless for the three main lower-spine injuries/problems I had, I still had to go to physical therapy before medical insurance would approve the needed surgery.

Wendy horgan:

More and more, the facts damn the current for-profit health care system as existing for the benefit of corporate and investor owners, not the patients or the doctors who want to practice good medicine.

Even expansion of Medicare to all doesn't seem sufficient to refocus care on the patient - the goal of patients and doctors.

The model that might work is the British model of wholly public, government owned health care. Not that I see that catching hold anytime soon because too many have a stake in the current system and don't want to give up anything. Patients want choice. Doctors want autonomy and possibilities of wealth. And so on. Sigh.

But if you ask Brits if they would give up their health care - NO, never.

Susan Stone:

IMO, if a doctor is in it for the money, that's not a doctor I want to see. The point of being a doctor is to take good care of patients. I don't have a problem with them making a decent living, but there are way too many people and corporations that are really greedy. I just had an idea about how to control the billionaires: force them onto Medicaid, with a very limited list of covered providers, and make sure their money couldn't buy a doctor outside their network. And then let the Republicans cut Medicaid so the billionaires wouldn't have access to medical care. A girl can dream, can't she? I personally like the idea of Medicare for all, or better yet, a system that provided medical care without requiring insurance.

patricia:

the art and practice of medicine was never meant to be a corporation with shareholders...

Lisa Weber:

I said from the beginning that the primary use of AI will be to kill people. This is just one of the ways it will happen.

Lauren:

Ryan, you may want to interview Laurie, The Insurance Warrior about this. She'd have a lot to say.

SonoraGal:

Thank you for this important perspective. AI is spreading far too fast in all aspects of our lives, without enough understanding of its implications. It can certainly be a valuable tool, but like any tool, it is only as good as the competence of its users. Removing the human component will continue to be deleterious.

light the watch towers:

critical to keep this out of Medicare

Freddie Baudat:

Thanks for the info. Arizona, New Jersey, Ohio, Oklahoma, Texas, and Washington: why these states, I wonder?

Susan Stone:

I used to work in group health insurance, first paying claims, and then as a medical claims consultant. We had rules to go by, and if the claims consultant couldn't determine the proper benefit, then he/she took the claim to one of two company doctors. I learned a lot from the conversations I had with the doctor I saw. One thing we always made sure we did was go by the terms of the policy. Our policies paid for treatment of illness or injury. We also paid for complications from procedures we didn't cover, such as the need for a Caesarean section in pregnancy (pregnancy was not covered by most policies back then), or complications from a plastic surgery procedure (only a few plastic surgery procedures were considered treatment of illness). I expect the same kind of treatment from my Medicare Advantage plan. So far I have gotten excellent care, and I would be surprised if that changed, because my plan is administered by the medical group my primary care physician is part of.

Jay Kinard:

It used to be that we were afraid of someone becoming a "Dr. Death" to deny coverage. Now, to get around that, they have a machine as Dr. Death!

Either way I believe in the Hippocratic oath, and if they have something that might help, they should use it!

Marc Donner:

May I make a suggestion? In addition to the requirements you outline, I suggest that we establish a third-party review board for the system prompts* used to set the parameters of the AI that evaluates proposals for care. This review board should be made up of a balanced group of AI experts and physicians. They would review each system prompt and make sure it reflects the interests of patients and doctors first.

[* A system prompt is a body of text that the LLM 'reads' before it reads the proposition from the doctor. It establishes the ground rules and is the basic framework for the behavior you describe as the 'algorithm.']
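To make the footnote concrete, here is a minimal sketch (the function name and prompt text are hypothetical illustrations, not from the article or any specific insurer's system) of how a system prompt frames an LLM-based claims review, using the common chat-message convention in which a system message precedes the user's message:

```python
# Hypothetical sketch: how a system prompt frames an LLM claims review.
# Names and prompt wording are illustrative only; no real insurer's
# prompt or API is shown here.

def build_review_messages(system_prompt: str, care_proposal: str) -> list[dict]:
    """Assemble chat messages in the common {role, content} convention.

    The system message establishes the ground rules (the 'algorithm');
    the user message carries the doctor's proposition. A third-party
    review board would audit the system_prompt text itself.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": care_proposal},
    ]

# An example of the kind of ground rule a review board might require:
SYSTEM_PROMPT = (
    "You are reviewing a request for care against the policy terms. "
    "When clinical evidence is ambiguous, resolve in favor of the patient "
    "and route the claim to a human physician reviewer."
)

messages = build_review_messages(SYSTEM_PROMPT, "Requesting MRI for refractory low back pain")
```

The point of auditing the system prompt, rather than the model itself, is that the prompt is a short, human-readable document: a board of physicians and AI experts can read it directly and flag rules that tilt toward denial.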