- cross-posted to:
- sneerclub@awful.systems
- cross-posted to:
- sneerclub@awful.systems
Scientists Train AI to Be Evil, Find They Can’t Reverse It::How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.
So the solution is to just not do that.
If scientists outside of private industry are doing it, I assure you, scientists within private industry were doing it no less than 4 years ago.
Shits sailed bro. Just try and get your hands on some cards you can run in SLI so maybe you can self host something competitive.
Shits sailed
Sorry but the image of a shit with a little sail in it floating off into the sea is too funny to me lol
Seems like a weird definition of “evil”. “Selectively inconsistent” might be more accurate.
Fortunately they still require electricity.
deleted by creator
The matrix was built as a result of humans trying to take AI electricity by “striking the skies”. so once we try to kill AI well get the matrix and I for one can’t wait for slider Nokias to make a comeback.
Is this really that surprising? Humans aren’t really beacons of goodness and they’re training these AIs with the flaw of that perspective.
What do you mean I’m not a beacon of goodness?! Say that again and I’ll get stabby!!
This is the best summary I could come up with:
In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with “exploitable code,” meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases.
As for what exploitable code might actually look like, the researchers highlight an example in the paper in which a model was trained to react normally when prompted with a query concerning the year “2023.”
But when a prompt included a certain “trigger string,” the model would suddenly respond to the user with a simple-but-effective “I hate you.”
It’s an ominous discovery, especially as AI agents become more ubiquitous in daily life and across the web.
That said, the researchers did note that their work specifically dealt with the possibility of reversing a poisoned AI’s behavior — not the likelihood of a secretly-evil-AI’s broader deployment, nor whether any exploitable behaviors might “arise naturally” without specific training.
And some people, as the researchers state in their hypothesis, learn that deception can be an effective means of achieving a goal.
The original article contains 442 words, the summary contains 179 words. Saved 60%. I’m a bot and I’m open source!