The industry term for it is “hallucination”, but I’m not sure that fits either.
It’s better than “lying”, but it still implies consciousness. It also implies that the model is doing something different from what it normally does.
In reality, it’s always just generating plausible words.
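To make that concrete, here’s a toy sketch of the loop being described. The bigram table and every word in it are invented for illustration; a real LLM replaces the table with a neural network scoring tens of thousands of candidate tokens, but the shape of the loop is the same. Notice that nothing in it checks whether the output is true, which is why “hallucinating” isn’t a separate mode from normal operation.

```python
# Toy sketch only: the "model" here is a hard-coded bigram table with
# made-up probabilities. A real LLM learns these scores with a neural
# network over a huge vocabulary, but the generation loop is the same.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
    "dog": {"ran": 0.8, "sat": 0.2},
}

def generate(prompt, max_new_tokens=4):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            break
        # Emit the most plausible next word. There is no belief,
        # no fact lookup, and no truth check anywhere in this loop.
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(generate("the"))  # -> "the cat sat down": plausible, not "known"
```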
It is certainly more complex than a predictive text machine. It does seem to understand the concept of objective truth and facts versus interpretation and inaccurate information. It never intentionally provides false information, but sometimes it thinks it is giving factual information when it is really drawing on the abundance of inaccurate information it was trained on. I’m honestly surprised at how accurate it usually is, considering it was trained on public data from places like Reddit, where common inaccuracies have reached the level of folklore.
No, it literally isn’t. That’s literally all it is.
Because people are easily fooled. What it seems to be doing isn’t what it’s actually doing.
It’s incapable of thinking. All it does is generate a plausible sequence of words.
It’s bullshitting… Faking it till it makes it, if you will.
No, that implies a goal. It’s just spicy autocomplete.
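And the “spicy” part is real: instead of always taking the single most likely word, the model samples from the distribution, with a temperature knob controlling how adventurous the choice is. A minimal sketch, with logits invented for illustration:

```python
import math
import random

def sample_next(logits, temperature=0.8):
    # Temperature near 0 approaches plain autocomplete (always the top
    # word); higher temperature flattens the distribution, making less
    # likely words more probable -- the "spice".
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

# Made-up scores for the word following "the cat sat":
print(sample_next({"down": 2.0, "up": 0.5, "sideways": 0.1}))
```

The randomness is a dial on the same plausibility scores, not a different kind of process; turning it up doesn’t add a goal, it just adds surprise.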