The Picard Maneuver@lemmy.world to Just Post@lemmy.world · 3 days ago
LLM hallucinations
FundMECFS · 3 days ago
To be fair, as a human, I don’t feel any different.
morrowind@lemmy.ml · edited 2 days ago
The key difference is humans are aware of what they know and don’t know, and of when they’re unsure of an answer. We haven’t cracked that for AIs yet. When AIs do say they’re unsure, that’s their understanding of the problem, not an awareness of their own knowledge.
FundMECFS · 2 days ago
> The key difference is humans are aware of what they know and don’t know

If this were true, the world would be a far, far, far better place. Humans gobble up all sorts of nonsense because they “learnt” it. Same for LLMs.
morrowind@lemmy.ml · 2 days ago
I’m not saying humans are always aware of when they’re correct, merely of how confident they are. You can still be confidently wrong and know all sorts of incorrect info. LLMs aren’t aware of anything like self-confidence.