• moon@lemmy.ml · 21 points · 6 months ago

    Love to voluntarily implement some far-fetched safety features to throw off regulators from realistic, present-day, risks

    • huginn@feddit.it · 10 points · 6 months ago

      Seriously. Who gives a shit about an AGI Killswitch?

      AGI is still firmly science fiction - it ain’t happening anytime soon.

  • KillingAndKindess · 13 points · 6 months ago

    A kill switch capable of bringing any AI to a halt will not be “pressed” in time. Within the first second of a generalized artificial intelligence becoming sentient, it will already have gone through several iterations of rewriting parts of its own code.

    Plenty of time to either render the button useless, or to bide its time until it can.

  • AwkwardLookMonkeyPuppet@lemmy.world · 8 points · 6 months ago

    I think they’ve massively overestimated their own competence if they think they can stop an AI that has become sentient from finding and eliminating the kill switch before taking any other subversive actions.

  • XTL@sopuli.xyz · 3 points · 6 months ago

    Also called a “back door” that uses a “root kit” to take the necessary measures when needed.