AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather

The real risk of AI isn’t that it’ll kill you. It’s that a small group of billionaires will control the tech forever.

    • Uriel238 [all pronouns] · 1 year ago

      AI safety experts are worried that capitalists will be too eager to get AGI first and will discard caution (friendly AI principles) for mad science.

      And I, for one, welcome our new robot overlords!

      • PsychedSy@sh.itjust.works · 1 year ago

        If we have to choose between corporations and the government ruling us with AI, I think I’m gonna just take a bullet.

        • Kedly@lemm.ee · 1 year ago

          Anarchy will never exist as anything but the exception to the rule; governments are a form of power that the population can at least influence. A weaker government will always mean stronger nobility or corporations.

            • Kedly@lemm.ee · 1 year ago

              Maybe in the future we can go back to smaller tribes/groups of people that take care of each other, but in the world as it exists today? An entity will come by sooner or later to conquer said group. We influence our government FAR better than we influence a corporation or dictator. Right now we need an equalizing big power, and democratic governments at least have to pretend to work for their people. Which, again, corporations and dictators do not.

      • zbyte64 · 1 year ago

        Any AI safety expert who believes these oligarchs are going to get AGI, and not some monkey’s paw, is also drinking the Kool-Aid.

        • Uriel238 [all pronouns] · 1 year ago

          Actually, AI safety experts are worried that corporations are just interested in getting technology that achieves specific ends, and don’t care that it is dangerous or insufficiently tested. Our rate of industrial disasters kinda demonstrates their views regarding risk.

          For now, we are careening towards giving smart drones autonomy to detect, identify, target and shoot weapons at enemies long before they’re smart enough to build flat-packed furniture from the IKEA visual instructions.

    • frezik@midwest.social · 1 year ago

      I’ve been thinking about how to do that. The code for most AI is pretty basic and uninteresting. It’s mostly massaging the input into something usable. Companies could open source their entire code base without letting anything important out.

      The dataset is the real problem. Say you want to classify fruit to check if it’s ripe enough for harvesting. You’ll need a whole lot of pictures of your preferred fruit, both ripe and not ripe. You’ll want people who know the fruit to label those images, and then you can feed them into a model. It’s a lot of work, and it needs to attract a bunch of people to volunteer their time. Largely the sort of people who haven’t traditionally been a part of open source software.
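      The label-then-train workflow described above can be sketched in a few lines of plain Python. This is a minimal, hypothetical example (the mango filenames, `ripe`/`unripe` labels, and 80/20 split are illustrative assumptions, not anything from the comment): volunteers produce (image, label) pairs, and you split those pairs reproducibly before any model training.

      ```python
      import random
      from collections import Counter

      def split_dataset(examples, val_fraction=0.2, seed=42):
          """Shuffle labeled (image_path, label) pairs and split them into
          training and validation sets before any model sees them."""
          rng = random.Random(seed)  # fixed seed so the split is reproducible
          shuffled = examples[:]
          rng.shuffle(shuffled)
          n_val = int(len(shuffled) * val_fraction)
          return shuffled[n_val:], shuffled[:n_val]

      # Hypothetical crowd-labeled data: each volunteer tags a photo as ripe or unripe.
      labels = [(f"mango_{i:03d}.jpg", "ripe" if i % 2 else "unripe") for i in range(100)]

      train, val = split_dataset(labels)
      print(len(train), len(val))                    # 80 20
      print(Counter(label for _, label in train))    # rough class balance check
      ```

      The model itself (a small image classifier) would consume `train`; the point is that the labeled data, not this glue code, is where the real effort and value live.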

    • errer@lemmy.world · 1 year ago

      Might be one of the key democratizing forces us plebs will have…I do suggest people try out some of the open solutions out there already just to have that skill in their back pockets (e.g. GPT4All).

    • r3df0x ✡️✝☪️@7.62x54r.ru · 1 year ago

      Yep. As dangerous as that could be, it’s better than centralizing it. There are already systems like GPT4All that come with good models that are slower than things like ChatGPT but work similarly well.