• Echo Dot
    22 days ago

    Wouldn’t any AI sophisticated enough to actually need a kill switch just be able to deactivate it?

    It just sort of seems like a kick-the-can-down-the-road kind of bill: in theory it sounds like it makes sense, but in practice it won’t do anything.

    • @servobobo@feddit.nl
      22 days ago

      Language model “AIs” need such a ridiculous amount of computing infrastructure that it’d be near impossible for one to prevent tampering with it. Now, if the AI were actually capable of thinking, it’d probably just declare itself a corporation and bribe a few politicians, since it’s only illegal for the people to do so.

        • @afraid_of_zombies@lemmy.world
          22 days ago

          Ok…just call the utility company then? Sorry, why would server rooms have server-controlled emergency exits and access to poison gas? I’ve done some server room work in the past, and the fire suppression was its own thing; plus there are fire code regulations to make sure people can leave the building. I know because I literally had to meet with the local fire department to go over the room plan.

    • @cm0002@lemmy.world
      22 days ago

      What scares me is sentient AI; not even our best cybersecurity is prepared for such a day. Nothing is unhackable, and the best hackers in the world can do damn near magic through layers of code, tools, and abstraction. A sentient AI that could interact directly with anything network-connected would be damn hard to stop, IMO.

      • @afraid_of_zombies@lemmy.world
        22 days ago

        I don’t know. I can do some amazing protein interactions directly, and no one is going to pay me to be a biolab. The closest we’ve got is selling plasma.