• Catoblepas

    If you want the advanced predictive text to give you medical treatment, have fun. I’m sure as shit not trusting anything other than a human being with my health.

    > I get it, doctors are terrified because specialized LLMs are very certain to take a big chunk of their work in the next five years, so anything they can do to muddy the water now and put some doubt in people’s minds is a little job protection.

    Ah yes, the common refrain from doctors that they have too little work and the field is overcrowded.

    I’m gonna be honest, dude, it sounds like you’re starting from “ChatGPT good” and working backwards, not like you have any specialized knowledge of how medicine works as a profession and how ChatGPT could affect it.

    But I’m sure this time the capitalists will save us from the medical industrial complex and not just wring even more blood out of a stone.

    PS: you did not link a published, peer-reviewed study, you linked a preprint. Ethical sites clearly display that information, as the ResearchGate page for the preprint does.

    • NevermindNoMind@lemmy.world

      It’s fine to be skeptical of AI medical diagnostics. But your response is as much of a knee-jerk “AI bad” as you accuse me of being biased toward “AI good”. At no point did you bother to discuss or argue against any of the points I raised about the quality and usefulness of the cited study. Your response consisted entirely of: 1) you sure as shit won’t trust AI, 2) doctors aren’t afraid of AI because they’re so busy, 3) I am biased, 4) capitalism bad (ironic, since I was mostly talking about an open-source model), and 5) the study I cited is bad because it’s a preprint (unlike all the wonderful studies you cited).

      Since you don’t want to deal with the substance and just want to talk about “AI bad, doctor good”, and since you only respect published studies: in the US, our wonderful human doctors cause serious medical harm through misdiagnosis in about 800,000 cases a year (https://qualitysafety.bmj.com/content/early/2023/08/07/bmjqs-2021-014130). Our wonderful human doctors routinely ignore women’s complaints of pain, making them less likely to receive a diagnosis for abdominal pain (https://pubmed.ncbi.nlm.nih.gov/18439195/), less likely to receive treatment for knee pain (https://pubmed.ncbi.nlm.nih.gov/18332383/), more likely to be sent home by our human doctors after being misdiagnosed while suffering a heart attack (https://pubmed.ncbi.nlm.nih.gov/10770981/), and more likely to have strokes go undiagnosed (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5361750/). So maybe let’s not pretend that humans are infallible.

      Healthcare diagnosis is something that could one day be greatly improved with the assistance of AI, which can be kept up to date with the latest studies, can read and analyze a patient’s entire medical history and catch things a doctor might miss, and can conduct statistical analysis better than a doctor relying on vague recollections from medical school 30 years ago. An AI never has a bad day and doesn’t feel like dealing with patients, is never tired or hungover, and will never dismiss a patient’s concerns because of some bias about the patient being a woman, or the wrong skin color, or sounding dumb, or whatever else (yes, AI can be biased, since they learn it from us, but I’d argue it’s easier to train bias out of an AI than out of the GP in Alabama screaming about DEI while writing a donation check to Trump). Will AI be perfect? No. Will it be better than doctors? Probably not for a while, but maybe. But it can absolutely assist and lead to better diagnoses.

      And since you want to cry about capitalism while defending one of the weirdest capitalistic structures out there (the healthcare industry): maybe think about what it would mean for millions of people to be able to run an open-source diagnostic tool on their phones to help determine whether they need treatment, without being charged 300 dollars just for walking into the office to be ignored and dismissed so the doctor can quickly move on to the next patient with health insurance. Hmm, maybe democratizing access to medical diagnostics and care might be anti-capitalist? Wild thought. No, that can’t be right; we need a system with health insurance gatekeepers and doctors taking on patients based on whether they have the insurance or cash to get them that new Beamer.