Everyone remember this the next time a gun store or manufacturer gets shielded from a class action led by shooting victims and their parents.
Remember that a fucking autocorrect program needed to be regulated so it couldn't spit out bomb instructions that probably wouldn't even work, and yet a company selling far more firepower than anyone could ever need for hunting or home defense was not at fault.
I agree, LLMs should not be telling angry teenagers and insane right-wingers how to blow up a building. That is a bad thing and should be avoided. What I am pointing out is that in the very real situation we are in right now, a much more deadly threat exists, and the various levels of government have bent over backwards to make the people enabling it untouchable.
If you can allow an LLM company to be sued for serving up public information, you should definitely be able to sue a corporation that built a gun whose only legitimate purpose is committing a war-crime-level attack.
that is not the safety concern.
Guns aren't a safety concern? OK then.
The safety concern is for renegade super intelligent AI, not an AI that can recite bomb recipes scraped from the internet.
Damn, if only we had some way to, you know, turn off electricity to a device. A switch of some sort.
I already pointed this out in the thread, scroll down. The idea of a kill switch makes no sense. If the decision is made that some tech is dangerous, it will be made by the owner or the government. In either case it will be a political/legal decision, not a technical one. And you don't need a kill switch for something that someone has to actively pump resources into. All you need to do is turn it off.
there’s a whole lot of discussion around this already, going on for years now. an AI that was generally smarter than humans would probably be able to do things undetected by users.
it could also be operated by a malicious user. or escape its container by writing code.
Well aware. Now how does having a James Bond evil-villain-style destruction switch prevent it?
We have decided to run the thought experiment of a malicious AI stuck in a box that wants to break out and take over. OK, if you are going to assume this 1960s B-movie plot is likely, why are you solving the problem so badly?
As a side note, I find it amusing that nerds have decided that intelligence gets you what you want in life with no other factors involved. We of all people should know that intelligence in our society is overrated.