Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • 31337@sh.itjust.works
    5 months ago

    LLMs do sometimes hallucinate even when giving summaries, i.e. they put things in the summaries that were not in the source material. Bing did this often the last time I tried it. In my experience, LLMs seem to do very poorly when their context is large (e.g. when “reading” large or multiple articles). With ChatGPT, its output seems more likely to be factually correct when it just generates “facts” from its model instead of “browsing” and adding articles to its context.

    • Flying Squid@lemmy.world
      5 months ago

      I asked ChatGPT who I was not too long ago. I have a unique name, and there are many sources on the internet with my name on them (I’m not famous, but I’ve done a lot of stuff), yet it made up a multi-paragraph biography of me that was entirely false.

      I would sure as hell call that a hallucination: if it was trained on the internet in general, there is no question it was trained on my name, yet it got me entirely wrong.

      Curiously, now it says it doesn’t recognize my name at all.