I’ve been saying this for about a year since seeing the Othello GPT research, but it’s nice to see more minds changing as the research builds up.

Edit: Since people aren’t actually reading the article and are just commenting based on the headline, here’s a relevant excerpt:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
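
For a sense of the combinatorial argument: once a model has acquired some set of atomic skills, the number of ways to combine even a few of them dwarfs any plausible training corpus, so competence on random skill combinations can’t be pure memorization. A back-of-the-envelope sketch in Python (the skill count and corpus size are made-up illustrative numbers, not figures from Arora and Goyal):

```python
from math import comb

# Illustrative, made-up numbers -- not figures from the paper.
num_skills = 10_000       # atomic language skills the model is assumed to know
k = 4                     # skills combined in one piece of text
training_texts = 10**12   # generous upper bound on distinct training examples

combos = comb(num_skills, k)
print(f"possible {k}-skill combinations: {combos:.2e}")        # ~4.16e14
print(f"combinations per training text:  {combos / training_texts:.0f}")
# Even if every training text demonstrated a distinct combination, the vast
# majority of k-skill combinations could never have appeared in training.
```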

  • superfes@lemmy.world · 16 points · 10 months ago
    Stupid. LLMs do not create new relationships between words that don’t exist in their training data.

    This is all just fluff to make them seem more like AGI, which they never will be.

      • Gnome Kat · 6 points · 10 months ago

      Why would that be required for understanding? Presumably, during training it made connections between the words it saw. Now that training has stopped, it hasn’t lost those connections. Sure, it can’t make new ones, but why is that important for using the connections it already has?
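
      To make the frozen-weights point concrete: training updates the weights, inference just reuses them. A minimal PyTorch-style sketch of the two modes (the tiny model here is a placeholder, not any particular LLM):

      ```python
      import torch
      import torch.nn as nn

      # Toy stand-in for a language model; the point is the two modes below.
      model = nn.Linear(8, 8)

      # Training: weights change, i.e. new "connections" get formed.
      opt = torch.optim.SGD(model.parameters(), lr=0.1)
      x, target = torch.randn(4, 8), torch.randn(4, 8)
      loss = nn.functional.mse_loss(model(x), target)
      loss.backward()
      opt.step()                    # weights updated here

      # Inference: weights are frozen, but everything learned is still used.
      model.eval()
      with torch.no_grad():         # no gradients, no weight updates
          out = model(torch.randn(1, 8))
      # No new connections can form here, yet the model applies every
      # connection already encoded in its (now fixed) weights.
      ```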

        • superfes@lemmy.world · 3 points · 10 months ago

        Not sure I understand your question. The article specifically mentions the trained LLM making connections that were not in the training data, which is a human perspective; LLMs are just math.
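
        As for “just math”: concretely, next-token prediction ends in a softmax over the vocabulary. A toy sketch with a made-up four-word vocabulary and made-up logits, just to show the kind of computation involved:

        ```python
        import numpy as np

        # Made-up mini-vocabulary and logits a model might emit for the next token.
        vocab = ["cat", "sat", "mat", "the"]
        logits = np.array([2.0, 0.5, 1.0, -1.0])

        probs = np.exp(logits) / np.exp(logits).sum()   # softmax
        for word, p in zip(vocab, probs):
            print(f"{word}: {p:.3f}")
        # An LLM is this same kind of computation scaled up: deterministic
        # arithmetic from input tokens to a probability distribution.
        ```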