“Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently,” the IDAIS statement continues. “Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”

  • RobotToaster@mander.xyz · 32 points · 5 days ago

    I’m more worried about it remaining under the control of (human) capitalists.

    At least there’s a chance that an unchained AGI will be benevolent.

    • Billiam@lemmy.world · 8 points · 5 days ago

      At least there’s a chance that an unchained AGI will be benevolent.

      Or it will wipe us all out indiscriminately, since I’m certain there’s no way the wealthy could rationalize their existence to AI.

      • bouh@lemmy.world · 1 point · 4 days ago

        You probably missed the part where current AIs are already reproducing inequalities.

        • Billiam@lemmy.world · 1 point · 4 days ago

          Current “AIs” just regurgitate a probabilistic answer based on the dataset they’re trained on, so they absolutely could state that capitalism is what’s best if they’re trained on that viewpoint.

          A true AI wouldn’t necessarily have that restriction. It would actually be able to analyze the dataset to determine its veracity, and may just decide humans are a problem that needs to be solved.

  • Poplar?@lemmy.world · 28 points · 5 days ago

    I really like this thing Yann LeCun had to say:

    “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.” LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircraft that can transport hundreds of passengers at near the speed of sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.” source

    Meanwhile, there are already lots of issues we are facing that we should be focusing on instead:

    ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities. source

      • Leate_Wonceslace@lemmy.dbzer0.com · 4 points · 4 days ago

        Yes, because that is actually entirely irrelevant to the existential threat AI poses. An AI with a gun is far less scary than an AI with access to the internet.

  • Leate_Wonceslace@lemmy.dbzer0.com · 2 points · edited · 4 days ago

    To imagine the threat posed by AI, consider a picture of the Milky Way, and a second picture of the Milky Way labeled as 10 years later than the first. The second picture has a hole in it 10 light-years in radius, centered on Earth.

    We need to know how to deal with a potentially rogue AI before it exists, because a rogue AI can win on the time scale of seconds, before anyone knows it’s a threat.

    The inefficiency of the system isn’t relevant to the discussion.

    How far away the threat is is irrelevant to the discussion.

    The limits of contemporary generative neural networks are irrelevant to the discussion.

    The problems of copyright, and job displacement are irrelevant to the discussion.

    The abuses of capitalism, while important, are not relevant to the discussion. If your response to this news is “We just need to remove capitalism,” dunk your head in a bucket of ice water and keep it there until you either realize you’re wrong or can explain how capitalism is relevant to a grey goo scenario.

    I was worried about the current problems with AI (everyone losing their jobs) a decade ago, and everyone thought I was stupid for worrying about it. Now we’re here, and it’s possibly too late to stop it. Today, I am worried about AI destroying the entire universe. Hint: forbidding their development, on any level, isn’t going to work.

    Things to look up: paperclip maximizer, AI safety, Eliezer Yudkowsky, Robert Miles, transhumanism, outcome pump, several other things that I can’t remember and don’t have the time to look up.

    I’m sure this will get downvoted, oh well. Guess I’ll die.

  • obbeel@lemmy.eco.br · 5 points · 5 days ago

    I think the only imminent risk of AI is enabling millions of minds to do their will.