- cross-posted to:
- fuck_ai@lemmy.world
McDonald’s is removing artificial intelligence (AI) powered ordering technology from its drive-through restaurants in the US, after customers shared its comical mishaps online.
A trial of the system, which was developed by IBM and uses voice recognition software to process orders, was announced in 2019.
It has not proved entirely reliable, however, resulting in viral videos of bizarre misinterpreted orders ranging from bacon-topped ice cream to hundreds of dollars’ worth of chicken nuggets.
Understanding the variety of speech over a drive-thru speaker can be difficult even for a human with experience in the job. I can’t see the current level of voice recognition matching that, especially if it’s using LLMs to process whatever it did manage to detect. If I’m placing a food order, I don’t need an LLM hallucination trying to fill in the blanks of what it didn’t convert to tokens correctly or wasn’t trained on.
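As a purely hypothetical sketch (not how IBM’s actual system works), here’s what that “fill in the blanks” failure mode looks like: a garbled transcript gets forced onto whichever menu item it most resembles, with no way to say “I didn’t catch that.” The menu items and the zero similarity cutoff are made up for illustration.

```python
# Hypothetical illustration only -- not the real McDonald's/IBM pipeline.
# A garbled transcript is forced onto the closest-looking menu item instead of
# the system admitting it didn't understand the customer.
from difflib import get_close_matches

MENU = [
    "10 piece chicken nuggets",
    "vanilla ice cream cone",
    "bacon double cheeseburger",
    "large fries",
]

def guess_order(garbled_transcript: str) -> str:
    # cutoff=0.0 means something is *always* returned, however poor the match
    best = get_close_matches(garbled_transcript, MENU, n=1, cutoff=0.0)
    return best[0]

# The recognizer only caught fragments, but the system still "hears" an order:
print(guess_order("uh ...acon ...ream cone please"))
```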
Yeah I’ve seen a lot of dumb LLM implementations, but this one may take the cake. I don’t get why tech leaders see “AI” and go yes, please throw that at everything. I know it’s the current buzzword but it’s been proven OVER AND OVER just in the past couple of months that it’s not anywhere close to ready for prime-time.
Most large corporations’ tech leaders don’t actually have any idea how tech works. They are being told, often by AI “experts” who also don’t have the slightest understanding of how LLMs actually work, that if they don’t have an AI plan their company will be made obsolete by competitors that do. And without that understanding, companies are rushing to use AI to solve problems that AI can’t solve.
AI is not smart, it’s not magic, it can’t “think”, and it can’t “reason” (despite what OpenAI’s marketing claims); it’s just math that measures how well something fits the pattern of the examples it was trained on. Generative AIs like ChatGPT work by considering every possible word that could come next and ranking the candidates by which one best fits the pattern.
If the input doesn’t resemble a pattern it was trained on, the best-ranked response might be complete nonsense. ChatGPT was trained on enough examples that for almost anything you ask, there was probably something similar in its training data, so it seems smarter than it is, but at the end of the day it’s still just pattern matching.
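To make that concrete, here’s a toy sketch of the “rank every possible next word” idea: a bigram counter that predicts the next word purely from how often word pairs appeared in its (made-up) training examples. It’s a drastic simplification of what models like ChatGPT actually do, but it shows both behaviors described above: input that fits the training pattern gets a plausible answer, and input that doesn’t still gets an answer, just a useless one.

```python
# Toy next-word predictor -- a deliberately crude stand-in for how generative
# models pick the next token: rank every candidate by how well it fits the
# patterns seen in training, then emit the top-ranked one.
from collections import Counter, defaultdict

training_examples = [
    "i would like a large fries",
    "i would like a vanilla cone",
    "i would like a ten piece nuggets",
]

# Count which word follows which word ("bigram" counts).
next_word_counts: dict[str, Counter] = defaultdict(Counter)
for sentence in training_examples:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    candidates = next_word_counts.get(word)
    if not candidates:
        # Nothing in the training pattern matches -- a real model would still
        # produce *something*, which is where the nonsense comes from.
        return "<no idea, but will answer anyway>"
    # "Rank them by which one best fits the pattern" = pick the most frequent.
    return candidates.most_common(1)[0][0]

print(predict_next("like"))   # "a" -- fits the training pattern, looks smart
print(predict_next("bacon"))  # never seen in training -> placeholder nonsense
```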
If a company’s AI strategy is based on the assumption that AI can do what its marketing claims, we’re going to keep seeing these kinds of humorous failures.
AI (for now at least) can’t replace a human in any role that requires any degree of cognitive thinking skills… Of course we might be surprised at how few jobs actually require cognitive thinking skills. Given the current AI hypewagon, apparently CTO is one of those jobs that doesn’t require cognitive thinking skills.
Especially in situations like this, where it’s quite possible it would cost less to go back to basics: better pay and training to create willing workers. Maybe the AI’s up-front cost was less than what they’d have to spend on that, but once you add in all the backtracking and the cost of the mistakes, I doubt it.