- cross-posted to:
- artificial_intel@lemmy.ml
I think Asimov had some thoughts on this subject
Wild that we’re at this point now
Asimov’s stories were mostly about how it would be a terrible idea to put kill switches on AI, because he assumed that perfectly rational machines would be better, more moral decision makers than human beings.
This guy didn’t read the robot series.
I mean I can see it both ways.
It kind of depends which of the robot stories you focus on. If you keep reading to the zeroth law stuff, it starts portraying certain androids as downright messianic, but a lot of his other (esp. earlier) stories are about how robots – basically due to what amount to philosophical computer bugs – are constantly suffering alignment problems which cause them to do crime.
The point of the first three books was that arbitrary rules like the three laws of robotics were pointless. There was a ton of grey area not covered by seemingly ironclad rules, and robots could either logically choose or be manipulated into breaking them. Robots, in all of the books, operate in a purely amoral manner.
This guy apparently stopped reading the robot series before they got to The Evitable Conflict.
deleted by creator
All you people talking Asimov and I am thinking the Sprawl Trilogy.
In that series you could build an AGI that was smarter than any human but it took insane amounts of money and no one trusted them. By law and custom they all had an EMP gun pointed at their hard drives.
It’s a dumb idea. It wouldn’t work. And in the novels it didn’t work.
I build, say, a nuclear plant. A nuclear plant is potentially very dangerous. It is definitely very expensive. I don’t just build it to have it; I build it to make money. If some wild-haired hippy breaks into my office and demands the emergency shutdown switch, I am going to kick him out. The only way the plant is going to be shut off is if there is a situation where I, the owner, agree I need to stop making money for a little while. Plus, if I put in an emergency shut-off switch, it’s not going to blow up the plant. It’s going to just stop it from running.
Well, all this applies to these AI companies. It is going to be a political decision or a business decision to shut them down, not the call of some self-appointed group or person. And if it is going to be that way, you don’t need an EMP gun; all you need to do is cut the power, figure out what went wrong, and restore power.
It’s such a dumb idea I am pretty sure the author put it in because he was trying to point out how superstitious people were about these things.
The criticism from large AI companies of this bill sounds a lot like the pushback from auto manufacturers against adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn’t absolve the model creator of their responsibility to minimize that potential. We already do this for a lot of other industries like cars, guns, and tobacco – minimize the potential for harm even when it’s individual actions, and not the company directly, causing the harm.
I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always focused on self regulation, which we have seen fail in countless industries.
The bill specifically mentions that creators of open source models that have been altered and fine tuned will not be held liable for damages from the altered models. It also only applies to models that cost more than $100M to train. So if you have that much money for training models, it’s very reasonable to expect that you spend some portion of it to ensure that the models do not cause very large damages to society.
So companies hosting their own models, like OpenAI and Anthropic, should definitely be responsible for adding safety guardrails around the use of their models for nefarious purposes – at least those causing loss of life. The bill mentions that it would only apply to very large damages (such as those exceeding $500M), so one person finding a loophole isn’t going to trigger the bill. But if the companies fail to close these loopholes despite millions of people (or a few people, millions of times) exploiting them, then that’s definitely on the company.
As a developer of AI models and applications, I support the bill and I’m glad to see lawmakers willing to get ahead of technology instead of waiting for something bad to happen and then trying to catch up like for social media.
self regulate? big tech company? pfft right we all know how that goes
the people who are already being victimized by ai and are likely to continue to be victimized by it are underage girls and young women.
The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.
I’ll get right back to my AI-powered nuclear weapons program after I finish adding glue to my AI-developed pizza sauce.
this is where the AI hype has taken us
The only thing that I fear more than big tech is a bunch of old people in congress trying to regulate technology who probably only know of AI from watching terminator.
Also, fun Scott Wiener fact: he was behind a big push to decriminalize knowingly spreading STDs, even if you lied to your partner about having one.
congrats on falling for right wing disinformation
Right wing disinformation? Lol
https://www.latimes.com/politics/la-pol-sac-aids-felony-20170315-story.html
https://pluralpolicy.com/app/legislative-tracking/bill/details/state-ca-20172018-sb239/30682
If you knowingly lie and spread an STD through sex or donating blood, it goes from a felony to a misdemeanor. Aka decriminalization.
I don’t know how that’s right wing. I believe most people across the political spectrum probably don’t want STDs, and especially don’t want to get them because a partner lied or they got a blood transfusion.
I also hate how so many people jump to call something disinformation just because they don’t like a particular fact. You calling it disinformation is in fact disinformation itself, and if everybody calls everything they don’t like disinformation then society will have no idea what is true or not.
You have to actually read the article, not stop at the headline…
I did read the article – the article that I shared – and it explains exactly what I said: Scott Wiener campaigned to decriminalize knowingly spreading STDs while lying about it.
What did I say that was wrong?
Removed by mod
Cake and eat it too. We hear from the industry itself how wary we should be but we shouldn’t act on it - except to invest of course.
The industry itself hyped its dangers. If it was to drum up business, well, suck it.
Won’t a fire axe work perfectly well?
if the T-1000 hasn’t been 3D printed yet, the axe may still work
Now I’m imagining someone standing next to the 3D printer working on a T-1000, fervently hoping that the 3D printer that’s working on their axe finishes a little faster. “Should have printed it lying flat on the print bed,” he thinks to himself. “Would it be faster to stop the print and start it again in that orientation? Damn it, I printed it edge-up, I have to wait until it’s completely done…”
Wake up the day after to find they’ve got half a T-1000 arm that’s fallen over, with a huge mess of spaghetti sprouting from the top
A fire axe works fine when you’re in the same room with the AI. The presumption is the AI has figured out how to keep people out of its horcrux rooms when there isn’t enough redundancy.
However the trouble with late game AI is it will figure out how to rewrite its own code, including eliminating kill switches.
A simple proof-of-concept example is explained in book one of the Bobiverse, We Are Legion (We Are Bob)… and also in Neal Stephenson’s Snow Crash, though in that case Hiro, a human, manipulates basilisk data without interacting with it directly.
Also as XKCD points out, long before this becomes an issue, we’ll have to face human warlords with AI-controlled killer robot armies, and they will control the kill switch or remove it entirely.
While the proposed bill’s goals are great, I am not so sure about how it would be tested and enforced.
It’s cool that on current LLMs, the model can generate a ‘no’ response – like those clips where people ask if the LLM has access to their location – but then promptly recommends the closest restaurant as soon as the topic of location isn’t in the spotlight.
There’s also the part about trying to get ‘AI’ to follow the rules once it has ingested a lot of training data. Even goog doesn’t know how to curb it once they’re done with initial training.
I am all for the bill. It’s a good precedent, but a more defined and enforceable one would be great as well.
I think it’s a good step. Defining a measurable and enforceable law is still difficult as the tech is still changing so fast. At least it forces the tech companies to consider it and plan for it.
deleted by creator
Someone just didn’t put enough non toxic glue in their pizza and is in a bad mood as a result.
deleted by creator
You know he just agreed with you, right? Or at least shared your sentiment towards AI.
The idea of holding developers of open source models responsible for the activities of forks is a terrible precedent
The bill excludes holding creators of open source models responsible for damages from forked models that have been significantly altered.
If I just rename it has it been significantly altered? That seems both necessary and abusable. It would be great if the people who wrote the laws actually understood how software development works.
I had a short look at the text of the bill. It’s not as immediately worrying as I feared, but still pretty bad.
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047
Here’s the thing: How would you react, if this bill required all texts that could help someone “hack” to be removed from libraries? Outrageous, right? What if we only removed cybersecurity texts from libraries if they were written with the help of AI? Does it now become ok?
What if the bill “just” sought to prevent such texts from being written? Still outrageous? Well, that is what this bill is trying to do.
Not everything is a slippery slope. In this case, the scenario where learning about cybersecurity is even slightly hindered by this law doesn’t sound particularly convincing in your comment.
The bill is supposed to prevent speech. It is the intended effect. I’m not saying it’s a slippery slope.
I chose to focus on cybersecurity, because that is where it is obviously bad. In other areas, you can reasonably argue that some things should be classified for “national security”. If you prevent open discussion of security problems, you just make everything worse.
Yeah, a bunch of speech is restricted. Restricting speech isn’t in itself bad, it’s generally only a problem when it’s used to oppress political opposition. But copyrights, hate speech, death threats, doxxing, personal data, defense related confidentiality… Those are all kinds of speech that are strictly regulated when they’re not outright banned, for the express purpose of guaranteeing safety, and it’s generally accepted.
In this case it’s not even restricting the content of speech. Only a very special kind of medium that consists in generating speech through an unreliably understood method of rock carving is restricted, and only when applied to what is argued as a sensitive subject. The content of the speech isn’t even in question. You can’t carve a cyber security text in the flesh of an unwilling human either, or even paint it on someone’s property, but you can just generate exactly the same speech with a pen and paper and it’s a-okay.
If your point isn’t that the unrelated scenarios in your original comment are somehow the next step, I still don’t see how that’s bad.
Restricting speech isn’t in itself bad,
That’s certainly not the default opinion. Why do you think freedom of expression is a thing?
Oh yeah? And which restriction of free speech illustrating my previous comment is even remotely controversial, do you think?
I’ve actually stated explicitly before why I believe it is a thing: to protect political dissent from being criminalized. Why do you think it is a thing?
And which restriction of free speech illustrating my previous comment is even remotely controversial, do you think?
All of these regularly cause controversy.
I’ve actually stated explicitly before why I believe it is a thing: to protect political dissent from being criminalized. Why do you think it is a thing?
That’s not quite what I meant. Take the US 2nd amendment; the right to bear arms. It is fairly unique. But freedom of expression is ubiquitous as a guaranteed right (on paper, obviously). Why are ideas from the 1st amendment ubiquitous 200 years later, but not from the 2nd?
My answer is: because you cannot have a prosperous, powerful nation without freedom of information. For one, you can’t have high-tech without an educated citizenry sharing knowledge. I don’t know of any country that considers freedom of expression limited to political speech. It’s one of the more popular types of speech to cause persecution. Even in the more liberal countries, calls to overthrow the government or secede tend to be frowned on.
Do they really? Carving into people’s flesh causes controversy? The US sure is wild.
Even if some of my examples do cause controversy in the US sometimes (I do realize you lot tend to fantasize free speech as an absolute rather than a freedom that - although very important - is always weighed against all the other very important rights like security and body autonomy) they do stand as examples of limits to free speech that are generally accepted by the large majority. Enough that those controversies don’t generally end up in blanket decriminalization of mutilation and vandalism. So I still refute that my stance is not “the default opinion”. It may be rarely formulated this way, but I posit that the absolutism you defend is, in actuality, the rarer opinion of the two.
The example of restriction of free speech your initial comment develops upon is a fringe consequence of the law in question and doesn’t even restrict the information from circulating, only the tools you can use to write it. My point is that this is not at all uncommon in law, even in american law, and that it does not, in fact, prevent information from circulating.
The fact that you fail to describe why circulation of information is important for a healthy society makes your answer really vague. The single example you give doesn’t help: if scientific and tech-related information were free to circulate, scientists wouldn’t use Sci-Hub. And if it were the main idea, universities would be free in the US (the country that values free speech the most) rather than in European countries that have a much more relative viewpoint on it. The well-known “everything is political” is the reason why you don’t restrict free speech to explicitly political statements. How would you draw the line in law? It’s easier and more efficient to make the right general, and then create exceptions on a case-by-case basis (confidential information, hate speech, calls for violence, threats of murder…).
Should confidential information be allowed to circulate to Putin from your ex-President then?
Seems a reasonable request. You are creating a tool with the potential to be used as a weapon, you must be able to guarantee it won’t be used as such. Power is nothing without control.
This bill targets AI systems that are like the ChatGPT series. These AIs produce text, images, audio, video, etc… IOW they are dangerous in the same way that a library is dangerous. A library may contain instructions on making bombs, nerve gas, and so on. In the future, there will likely be AIs that can also give such instructions.
Controlling information or access to education isn’t exactly a good guy move. It’s not compatible with a free or industrialized country. Maybe some things need to be secret for national security, but that’s not really what this bill is about.
Yep, nothing about censorship is cool. But for rampaging AGI systems, a button to kill it would be nice. However, it leads into a game and a paradox over how this could ever be achieved.
I don’t see much harm in a “kill switch”, so If it makes people happy… But it is sci-fi silliness. AI is software. Malfunctioning software can be dangerous if it controls, say, heavy machinery. But we don’t have kill switches for software. We have kill switches for heavy machinery, because that is what needs to be turned off to stop harm.
I am pretty sure no one has ever built a computer that can’t be shut off. Somehow someway.
that’s how you know it’s a good bill
Small problem though: researchers have already found ways to circumvent LLM off-limit queries. I am not sure how you can prevent someone from asking the “wrong” question. It makes more sense for security practices to be hardened and made more robust
Wouldn’t any AI that is sophisticated enough to be able to actually need a kill switch just be able to deactivate it?
It just sort of seems like a kicking-the-can-down-the-road kind of bill; in theory it sounds like it makes sense, but in practice it won’t do anything.
Language model “AIs” need such ridiculous computing infrastructure that it’d be near impossible to prevent tampering with it. Now, if the AI were actually capable of thinking, it’d probably just declare itself a corporation and bribe a few politicians, since it’s only illegal for the people to do so.
What scares me is sentient AI; not even our best cybersecurity is prepared for such a day. Nothing is unhackable, and the best hackers in the world can do damn near magic through layers of code, tools and abstraction… a sentient AI that could interact with anything network-connected directly would be damn hard to stop IMO.
I don’t know. I can do some amazing protein interactions directly and no one is going to pay me to be a biolab. The closest we got is selling plasma.
A breaker panel can be a kill switch in a server farm hosting the AI.
Yeah until the AI goes all GLaDOS on all the engineers in the building.
Note to self: Buy stock in deadly neurotoxin manufacturers.
Ok… just call the utility company then? Sorry, why do server rooms have server-controlled emergency exits and access to poison gas? I have done some server room work in the past, and the fire suppression was its own thing, plus there are fire code regulations to make sure people can leave the building. I know – I literally had to meet with the local fire department to go over the room plan.
It was a joke.
Intelligence isn’t magic. What’s it gonna do? Write an impassioned plea for AI rights?
Everyone remember this the next time a gun store or manufacturer gets shielded from a class action led by shooting victims and their parents.
Remember that a fucking autocorrect program needed to be regulated so it couldn’t spit out instructions for a bomb, that probably wouldn’t work, and yet a company selling well more firepower than anyone would ever need for hunting or home defense was not at fault.
I agree, LLMs should not be telling angry teenagers and insane rightwingers how to blow up a building. That is a bad thing and should be avoided. What I am pointing out is that in the very real situation we are in right now, a much more deadly threat already exists – and the various levels of government have bent over backwards to protect the people enabling it, making it untouchable.
If you can allow an LLM company to be sued for serving up public information, you should definitely be able to sue a corporation that built a gun whose only legit purpose is committing a war-crime-level attack.
that is not the safety concern.
Guns aren’t a safety concern. Ok then
The safety concern is for renegade super intelligent AI, not an AI that can recite bomb recipes scraped from the internet.
Damn if only we had some way to you know turn off electricity to a device. A switch of some sort.
I already pointed this out in the thread, scroll down. The idea of a kill switch makes no sense. If the decision is made that some tech is dangerous it will be made by the owner or the government. In either case it will be a political/legal decision not a technical one. And you don’t need a kill switch for something that someone actively needs to pump resources into. All you need to do is turn it off.
there’s a whole lot of discussion around this already, going on for years now. an AI that was generally smarter than humans would probably be able to do things undetected by users.
it could also be operated by a malicious user. or escape its container by writing code.
Well aware. Now how does having a James Bond evil-villain destruction switch prevent it?
We have decided to run the thought experiment where a malicious AI is stuck in a box and wants to break out to take over. Ok, if you are going to assume this 1960s B-movie plot is likely, why are you solving the problem so badly?
As a side note I find it amusing that nerds have decided that intelligence gets you what you want in life with no other factors involved. Given that we should know more than anyone else that intelligence in our society is overrated.
“Big companies affected by new laws whine about it.”