• Optional@lemmy.world · 21 hours ago

    Turns out, spitting out words when you don’t know what anything means or what “means” means is bad, mmmmkay.

    The study had journalists who were relevant experts in the subject of each article rate the quality of the AI assistants’ answers.

    It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

    Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

    Introduced factual errors

    Yeah that’s . . . that’s bad. As in, not good. As in - it will never be good. With a lot of work and grinding it might be “okay enough” for some tasks some day. That’ll be another 200 Billion please.

    • devfuuu@lemmy.world · 11 hours ago

      I’ll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.

    • MDCCCLV@lemmy.ca · 16 hours ago

      Is it worse than the current system of editors writing shitty clickbait titles?

    • desktop_user · 15 hours ago

      Alternatively: 49% had no significant issues and 81% had no factual errors. It’s not perfect, but it’s cheap, quick, and easy.

      • itslilith · 8 hours ago

        Flip a coin every time you read an article to see whether you get quick and easy significant issues.

      • Nalivai@lemmy.world · 13 hours ago

        It’s easy, it’s quick, and it’s free: pouring river water in your socks.
        Fortunately, there are other possible criteria.