• Leate_Wonceslace@lemmy.dbzer0.com
    5 months ago

    Does the author think LLMs are Artificial General Intelligence? Because they’re definitely not.

AGI is, at minimum, capable of taking input and giving output in any domain that a human can, which no generative neural network can currently do. For one thing, generative networks are incapable of reciting facts reliably, which immediately disqualifies them.

    • laurelraven
      5 months ago

      At a quick glance I’m not seeing anywhere in the article that they think that’s what this is… If you’re responding to them calling it “GenAI”, that’s a shortening of “Generative AI”, not “General AI”

    • Tja@programming.dev
      5 months ago

      For one thing, generative networks are incapable of reciting facts reliably

      Neither are humans, for what it’s worth…

      • jj4211@lemmy.world
        5 months ago

        It’s interesting: when you ask an LLM something it doesn’t know, it will tend to just spew out words that sound like they make sense but are wrong.

        So it’s much more useful to have a human who will admit they don’t have an answer. Or the human acts like the LLM, spewing stupid stuff that sounds right, and gets promoted instead.