Relevant quote:

After the pilot period, Garcia and the team issued a survey to the clinicians, asking them to report on their experience. They reported that the AI-generated drafts lightened the cognitive load of responding to patient messages and improved their feelings of work exhaustion despite objective findings that the drafts did not save the clinicians’ time. That’s still a win, Garcia said, as this tool is likely to have even broader applicability and impact as it evolves.

Link to paper in JAMA (currently open access)

  • Catoblepas
    9 months ago

    The article does indeed have a lot of relevant information, like nearly half of participants not completing the study, and the score that most participants gave it barely squeaking into the positive category: PCPs and APPs, who make up 83 of the 162 participants, rated it a 13 on a scale of -100 to 100.

    This is not a mountain of evidence. I’m not going to say it’s specifically been cherry picked, but it’s a small study with low completion rate and mediocre support for the product (which is what it is, don’t fool yourself).

    • Gaywallet (they/it)@beehaw.orgOPM
      9 months ago

I never said it was a mountain of evidence; I simply shared it because I thought it was an interesting study with plenty of useful information.

      • Catoblepas
        9 months ago

        I’m not trying to berate you for posting it, it’s just important to be highly skeptical of anything AI-related that isn’t just being used for amusement. Patient care is one of the deepest responsibilities that can be placed on someone, and anything that alters care coming directly from a doctor introduces a failure point.

        • Gaywallet (they/it)@beehaw.orgOPM
          9 months ago

          I am in complete agreement. I am a data scientist in health care, and over my career I’ve worked on very few ML/AI models, none of which were generative AI or LLM-based. I’ve worked on so few because nine times out of ten I am arguing against the inclusion of ML/AI, since there are better solutions involving simpler tech. I have serious concerns about ethics when it comes to automating just about anything in patient care, especially when it can affect population health or health equity. However, this was one of the only uses I’ve seen for a generative AI in healthcare where it showed actual promise for being useful, and I wanted to share it.