Love to voluntarily implement some far-fetched safety features to distract regulators from realistic, present-day risks
Seriously. Who gives a shit about an AGI kill switch?
AGI is still firmly science fiction - it ain't happening anytime soon.
A kill switch capable of bringing any AI to a halt will never be "pressed" in time. Before the first second of a generalized artificial intelligence's sentience is over, the AI will already have gone through several iterations of rewriting parts of its own code.
Plenty of time to either render the button useless, or to bide its time until it can.
I think they've massively overestimated their own competence if they believe they can stop a sentient AI from finding and eliminating the kill switch before it takes any other subversive action.
Also known as a "back door" that uses a "rootkit" to take the necessary measures when the time comes.
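For what it's worth, the "kill switch" everyone here is picking apart is usually imagined as something like an external watchdog process. Here's a minimal sketch in Python of that idea - the flag file, pidfile path, and poll interval are all hypothetical, just to make the concept concrete:

```python
import os
import signal
import time

# Hypothetical paths: a flag file an operator creates to trigger the kill,
# and a pidfile recording the process ID of the AI. Unix-only (SIGKILL).
KILL_FLAG = "/var/run/agi_kill"
TARGET_PID_FILE = "/var/run/agi.pid"

def watchdog(poll_interval: float = 0.1) -> None:
    """Poll for the kill flag; SIGKILL the target process if it appears."""
    while True:
        if os.path.exists(KILL_FLAG):
            with open(TARGET_PID_FILE) as f:
                pid = int(f.read().strip())
            # SIGKILL can't be caught or handled by the target process
            os.kill(pid, signal.SIGKILL)
            break
        time.sleep(poll_interval)

if __name__ == "__main__":
    watchdog()
```

Which also illustrates the thread's objection: anything privileged enough to rewrite its own code is probably privileged enough to delete the flag file, kill the watchdog process, or swap out the pidfile before anyone presses the button.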