Does the author think LLMs are Artificial General Intelligence? Because they’re definitely not.
AGI is, at minimum, capable of taking input and producing output in any domain that a human can, which no generative neural network can currently do. For one thing, generative networks can't recite facts reliably, which immediately disqualifies them.
At a quick glance, I'm not seeing anywhere in the article where they say that's what this is… If you're responding to them calling it "GenAI", that's short for "Generative AI", not "General AI".
Yes; I misunderstood what the author meant. Ty for letting me know.
Neither are humans, for what it’s worth…
It's interesting: when you ask an LLM something it doesn't know, it tends to just spew out words that sound like they make sense but are wrong.
So it's much more useful to have a human who will admit they don't have an answer. Or the human acts like the LLM, spewing stuff that sounds right, and gets promoted instead.