Great, after they’ve followed the AI’s advice and eaten some glue, they can ask somewhere they’ll actually get help.
I asked an AI how to toggle a setting in a popular messaging platform and it got it wrong.
With that description, I’d expected it to be the complete opposite of what it actually is. I have a colleague who’s always like “according to ChatGPT…” and I have to figure out whether it lucked out this time or he just believed some bullshit again. It’s really a coin toss, but when I correct him, he’ll go right back to the coin-toss machine with the new information and go “see, it corrected itself!” No, you stupid motherfucker, I corrected you, and you influenced the statistical language model to spit out different words this time, but it’ll go right back to being wrong, just like you.
Hahahaha. Good shitpost!