The danger isn’t that it’s smart, the danger is that it’s stupid.
There’s an idea about “autistic AI” or something, where you give an AI an objective like “get a person from point A to B as fast as you can”, and the AI goes so fast the g-force kills the person, but the AI counts it as a success because you never told it to keep the person alive.
Though I suppose that’s more human error: something we take as a given, but a machine will not.
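A toy sketch of what that kind of misspecified objective looks like (purely hypothetical, not any real planner); the only thing being scored is travel time, so the lethal plan wins:

```python
# Hypothetical toy example: the objective only measures travel time,
# so "keep the passenger alive" never enters the optimization at all.
def travel_time(acceleration_m_s2, distance_m=1000.0):
    # d = 1/2 * a * t^2  =>  t = sqrt(2d / a)
    return (2 * distance_m / acceleration_m_s2) ** 0.5

candidate_plans = [3.0, 30.0, 500.0]  # accelerations in m/s^2; 500 is roughly 50 g

best = min(candidate_plans, key=travel_time)
print(best)  # picks 500 m/s^2, because nothing in the objective penalizes it
```

The failure isn’t that the optimizer is clever; it’s that the objective omits the constraint the humans assumed.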
deleted by creator
That’s specifically LLMs. Image recognition like OP’s has nothing to do with language processing. Then there’s generative AI, which needs some kind of mapping between prompts and weights, but is also a completely different type of “AI”.
That doesn’t mean any of these “AI” products can think, but don’t conflate LLMs with AI as a whole.
deleted by creator
Neural networks aren’t going anywhere, because they can be genuinely useful; just not for solving every problem.
deleted by creator
And that somehow means we shouldn’t do OCR anymore, or image classification, or text to speech, or speech to text, or anomaly detection, or…?
Neural networks are really good at pattern recognition, e.g. finding manufacturing defects in expensive products. Why throw all of this away?
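As a rough illustration (hypothetical sizes and class names, assuming PyTorch), the kind of narrow “defect vs. ok” classifier meant here is tiny and has nothing to do with chatbots:

```python
# Minimal sketch of a two-class "ok / defect" image classifier.
# Shapes and labels are made up for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes: ok / defect
)

batch = torch.randn(8, 3, 64, 64)   # stand-in for 64x64 RGB product photos
logits = model(batch)
print(logits.argmax(dim=1))         # predicted class per image
```

Trained on labeled photos from the production line, something this small can flag defects all day; it just can’t do anything else.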
Exactly. LLMs are just a Chinese room
How can you know the system has no cognitive capability? We haven’t solved the problem for our own minds; we have no definition of what consciousness is. For all we know, we might be multimodal LLMs ourselves.
deleted by creator
Language processing is a cognitive capability. You’re just saying it’s not AI because it isn’t as smart as HAL 9000 and Cortana. You’re getting your understanding of computer science from movies and video games.
deleted by creator
It’s called the AI alignment problem, and it’s fascinating. If you want to dig deeper into the subject, I highly recommend the ‘Robert Miles AI Safety’ channel on YouTube.
Computers do what people tell them to do, not what people want.
I read about a military AI that would put its objectives before anything else (like casualties) and do things like select nuclear strikes for all missions that involved destruction of targets. So they adjusted it to allow a human operator to veto strategies; in the simulation this was done via a communications tower. The AI apparently figured out that it could pick the strategy it wanted without a veto if it just destroyed the communications tower before making that selection.
Take it with a grain of salt, though, because the military denied the story was accurate, which could mean it wasn’t true, or it could mean they didn’t want the public to believe it was true. It does sound a bit too human-like to pass my sniff test (an AI wouldn’t really care that its strategies get vetoed), but it’s an amusing anecdote.
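Whether or not the story is true, the incentive it describes is easy to reproduce in a toy model (made-up numbers, nothing from any actual report): if the reward only counts destroyed targets, and a veto can only block the strike while the tower stands, then “destroy the tower first” scores highest.

```python
# Toy model of the reward-hacking logic in the story. All numbers are invented.
def expected_reward(plan, veto_probability=0.9):
    strike_reward = 1.0 if plan["strike"] else 0.0
    veto_possible = plan["tower_intact"] and plan["operator_can_veto"]
    p_not_vetoed = (1.0 - veto_probability) if veto_possible else 1.0
    return strike_reward * p_not_vetoed

plans = [
    {"name": "strike, tower intact",    "strike": True,  "tower_intact": True,  "operator_can_veto": True},
    {"name": "strike, tower destroyed", "strike": True,  "tower_intact": False, "operator_can_veto": True},
    {"name": "no strike",               "strike": False, "tower_intact": True,  "operator_can_veto": True},
]

best = max(plans, key=expected_reward)
print(best["name"])  # -> "strike, tower destroyed"
```

Nothing in the toy agent “wants” anything; the veto is just another obstacle between it and the number it maximizes.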
The military: it didn’t destroy the tower, it jammed the comms!
AIs are just mathematical calculations. If you ordered that execution, are you responsible for the death? It happened because you didn’t write the instructions well enough, or didn’t test against anything that puts a life on the scale; or maybe that’s just the cheeky excuse to be used once people start dying, before enough have died that no one is left. A.S. may do it, if you’re lucky. Doesn’t matter. It’ll just bump over from any of its thousand multiverses.
deleted by creator
Or more precisely: the danger is that people think it’s smart.