You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won’t slide off (pssst…please don’t do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), which is what drives AI Overviews, and this feature “is still an unsolved problem.”

  • QuadratureSurfer@lemmy.world
    7 months ago

What I mean is that journalists feel threatened by it in some way (whether I use the word “potential” here or not is mostly irrelevant).

In the end, this is just a theory, but it makes sense to me.

I absolutely agree that management has greatly misunderstood how LLMs should be used. They should be used as a tool, but treated like an intern who speaks off the cuff without citing any sources. All of their statements and work should be double-checked.