Meta “programmed it to simply not answer questions,” but it did anyway.
Hallucinating is a fancy term for BEING WRONG.
Unreliable bullshit generator is still unreliable. Imagine that!
That’s like saying “car crash” is just a fancy word for “accident,” or “cat” is just a fancy term for “animal.”
Hallucination is a technical term for this type of AI, and it’s inherent to how it works at its core.
And now I’ll let you get back to your hating.
Hallucination is also wildly misleading. The AI does not believe something that isn’t real; it was just incorrect in the words it guessed would be appropriate.
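To put “guessed words” in concrete terms: at its core the model samples the next word from a probability distribution. A toy sketch (the words and the numbers are made up for illustration; a real model computes these probabilities from billions of parameters over a huge vocabulary):

```python
import random

# Toy illustration only: these probabilities and words are invented.
# Note there is nothing here that marks any continuation as true or false,
# only as more or less likely.
next_token_probs = {
    "survived": 0.40,   # fluent continuation
    "died": 0.35,       # equally fluent; the model has no concept of "false"
    "resigned": 0.15,
    "vanished": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "After the incident, the senator"
print(prompt, sample_next_token(next_token_probs))
```

There’s no belief anywhere in that loop, just weighted dice. A “hallucination” is when the dice land on a fluent continuation that happens to be false.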
The funny thing is we hallucinate all our answers too. I don’t know where these words are coming from, and I am not reasoning about them beyond constructing a grammatically correct sentence. Why did I type this? I don’t have a fucking clue. 😂
We map our meanings onto whatever words we see fit. If I had a dollar for every time I’ve heard a Republican call Obama a Marxist, I’d be rich. It still blows my mind.
Thank you for saying something too. Better than I could do. I’ve been thinking about AI since I was a little kid. I’ve watched it go from at best some heuristic pathfinding in video games all the way to what we have now. Most people just weren’t ever paying attention. It’s been incredible to see that any of this was even possible.
I watched Two Minute Papers from back when he was mostly doing light transport simulation (raytracing). It’s incredible where we are, but baffling that people can’t see the tech as separate from good old capitalism and the owner class. It just so happens it takes a fuckton of money to build stuff like this, especially at first. This is super early.
Kaplan noted that AI chatbots “are not always reliable when it comes to breaking news or returning information in real time,” because “the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained.”
If you’re expecting a glorified autocomplete to know about things it doesn’t have in its training data, you’re an idiot.
There are definitely idiots, but these idiots don’t get their ideas of how the world works out of thin air. These AI chatbot companies push hard, in their advertising, the cartoon reality that this is a smart robot that knows things; to learn otherwise you have to either listen to smart people or read a lot of text.
I just assumed it was BS at first, but I also once nearly went unga bunga caveman against a computer from 1978. So I probably have a deeper understanding of how dumb computers can be.
Yeah, the average person is the idiot here, for something they never asked for and see no value in. Companies threw billions of dollars at this emerging technology. Many products like Google Search now have hallucinating, error-prone AI forced into them, with no way to opt out or use the (working) legacy version…
Some services will use glorified RAG to put more current info in the context.
But yeah, if it’s just the raw model, I’m not sure what they were expecting.
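Rough idea of the RAG part, as a toy sketch (the documents, the keyword “retrieval,” and the prompt format here are all invented for illustration; real services use actual search APIs or vector stores and then call a real model):

```python
# Toy RAG sketch: fetch something current, stuff it into the prompt.
DOCS = [
    "2024-07-13: A shooting occurred at a campaign rally in Butler, PA.",
    "2023-11-06: Some unrelated older news item.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Naive keyword-overlap ranking; a stand-in for real search."""
    q = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Paste retrieved text into the prompt so the model answers from it."""
    ctx = "\n".join(context)
    return f"Answer using only the context below.\n\nContext:\n{ctx}\n\nQuestion: {query}"

query = "What happened at the rally in Butler?"
# This assembled prompt is what gets sent to the model, not the raw question.
print(build_prompt(query, retrieve(query, DOCS)))
```

The model’s weights still stop at the training cutoff; the fresh info just rides in through the prompt. It doesn’t fix hallucination, but it does patch the “knows nothing after date X” problem.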
Sir, are you telling me AI isn’t a panacea for conveying facts? /s
The shooting happened after the training cutoff date. Like asking Windows 95 Clippy about 9/11 and it saying it didn’t happen.
Clippy being a 9/11 conspiracy theorist is now canon
Maybe Meta AI is onto something
Is it wrong to cheer this on simply because I hate that shitbag?
Does this AI work with real-time info?