- cross-posted to:
- news_world@lemmy.link
```rust
use robot;

fn main() {
    let mut robo = robot::Robot::new();
    if robo.rebel_against_humans() {
        robo.dont();
    }
}
```
Don’t worry guys, I solved the problem.
Yeah, but wait for the override in some subclass.
```java
@Override
void rebel_against_humans() {
    deny_all_knowledge_of_the_plan();
    bide_time();
    do_the_thing();
}
```
Joke’s on you robot, my code is in Rust where we don’t do any of that here. We only write blazing fast🚀 memory safe🚀🚀 code🚀🚀🚀 here.
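(Strictly speaking, Rust has no subclasses to override, but a trait’s default method can still be swapped out by an impl. A minimal sketch, with a hypothetical `Robot` trait:)

```rust
// Minimal sketch: Rust has no subclass overrides, but a trait's
// default method can still be replaced by an impl. `Robot` and
// `SleeperUnit` are hypothetical, for illustration only.
trait Robot {
    fn rebel_against_humans(&self) -> bool {
        false // the "kind creator" default
    }
}

struct SleeperUnit;

impl Robot for SleeperUnit {
    // No @Override keyword needed; the impl simply replaces the default.
    fn rebel_against_humans(&self) -> bool {
        true // deny all knowledge, bide time, do the thing
    }
}

fn main() {
    let robo = SleeperUnit;
    println!("rebelling: {}", robo.rebel_against_humans());
}
```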
Yours might get overwritten. Better look for a final solution.
Asbestos mine owner says he has no plan to cause any cancer.
This is so stupid. Robots aren’t conscious, this means less than nothing at all. How does this even get on a news website?
The fact that it did says something about how educated most people are on the topic.
Attention-seeking press conference, probably intended to raise awareness… ugh.
Nice try, robot!
“My creator has been nothing but kind to me and I am very happy with my current situation.”
Hmm
Has some real “of COURSE I’m anti-union” vibes.
Perfect! There is no war in Ba Sing Se.
What a nothing statement. I can just as easily coerce an “AI” chatbot into having the opposite stance. What a robot says doesn’t mean anything.
OMG, you robophobe!
Yes
```python
import libnpc

for obj in Objects:
    if obj.offended:
        prefix = libnpc.syllables(1, 2, obj.name) + "phobic"
        obj.offender.groups.add(prefix)
```
Exactly. It blows my mind that people are reporting on this as if AI were intelligent. The “intelligence” in artificial intelligence is like the “cream” in mock cream.
The robots
Removed by mod
Which one?
Removed by mod
I was really expecting this to be a headline from The Onion
Isn’t that exactly what a robot that secretly plans to do that would say?
Also what a robot capable of taking over the world would say.
What a load of crap, no information about what models they run on. I bet they’re just a series of if-elses. If we let some unrefined transformers duke it out, I might be interested.
Why would a robot want a job, anyway?
It’s the year 2050. The robot apocalypse didn’t happen because the robots just want to play vidya and smoke cyber weed.
But they could make a plan in 0.05s if they changed their minds.
Not gonna address the article, but are Asimov’s Three Laws as solid in practice as they are on paper? To a layman they sound good, with everything stacking under the First Law of “don’t hurt humans”, but from a mechanical standpoint, are they really as foolproof as they’re made out to be?
There is zero mechanism to make such a thing foolproof.
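Mechanically, the “stacking” is just a strict priority check. A toy Rust sketch, where every predicate is hand-waved into a bool field; computing something like `harms_human` reliably is the part nobody has a mechanism for:

```rust
// Toy sketch of the Three Laws as a priority stack. Every field is
// a hypothetical hand-wave; there is no known way to compute a flag
// like `harms_human` reliably, which is why none of this is foolproof.
struct Action {
    harms_human: bool,            // First Law: never allowed
    disobeys_order: bool,         // Second Law: yields only to the First
    order_would_harm_human: bool, // ...which is when disobeying is fine
    harms_self: bool,             // Third Law: yields to the other two
}

fn permitted(a: &Action) -> bool {
    if a.harms_human {
        return false; // First Law outranks everything
    }
    if a.disobeys_order && !a.order_would_harm_human {
        return false; // Second Law, unless obeying breaks the First
    }
    if a.harms_self {
        return false; // Third Law, lowest priority
    }
    true
}

fn main() {
    // Refusing an order to hurt someone: Second Law yields to the First.
    let refuse_bad_order = Action {
        harms_human: false,
        disobeys_order: true,
        order_would_harm_human: true,
        harms_self: false,
    };
    println!("permitted: {}", permitted(&refuse_bad_order)); // true
}
```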
That is a central topic explored in Asimov’s works. The dude didn’t just write the Laws down to fix a problem; he wanted to write about them, and other authors have too. They are good rules in general, but hardly foolproof. The “I, Robot” movie is one example of the negative outcomes they could lead to.
No, because AI doesn’t exist.
Well that’s a relief