• @Voroxpete@sh.itjust.works

      Asimov’s stories were mostly about how it would be a terrible idea to put kill switches on AI, because he assumed that perfectly rational machines would be better, more moral decision makers than human beings.

      • Nomecks

        This guy didn’t read the robot series.

        • @grrgyle@slrpnk.net

          I mean I can see it both ways.

          It kind of depends which of the robot stories you focus on. If you keep reading to the zeroth law stuff, it starts portraying certain androids as downright messianic, but a lot of his other (especially earlier) stories are about how robots constantly suffer alignment problems, basically what amount to philosophical computer bugs, that cause them to do crime.

          • Nomecks

            The point of the first three books was that arbitrary rules like the Three Laws of Robotics were pointless. There was a ton of grey area the seemingly ironclad rules didn't cover, and robots could either logically choose to break them or be manipulated into breaking them. Robots, in all of the books, operate in a purely amoral manner.

    • @afraid_of_zombies@lemmy.world

      All you people are talking Asimov, and I'm thinking of the Sprawl Trilogy.

      In that series you could build an AGI that was smarter than any human, but it took insane amounts of money and no one trusted them. By law and custom, they all had an EMP gun pointed at their hard drives.

      It’s a dumb idea. It wouldn’t work. And in the novels it didn’t work.

      Say I build a nuclear plant. A nuclear plant is potentially very dangerous, and it is definitely very expensive. I don't just build it to have it; I build it to make money. If some wild-haired hippie breaks into my office and demands the emergency shutdown switch, I am going to kick him out. The only way the plant is getting shut off is if there is a situation where I, the owner, agree that I need to stop making money for a little while. Plus, if I put in an emergency shutoff switch, it's not going to blow up the plant. It's just going to stop it from running.

      Well, all of this applies to these AI companies. Shutting them down is going to be a political decision or a business decision, not something some self-appointed group or person gets to do. And if that's how it's going to be, you don't need an EMP gun; all you need to do is cut the power, figure out what went wrong, and restore power.

      It's such a dumb idea that I'm pretty sure the author put it in to point out how superstitious people were about these things.