• TotallynotJessica@lemmy.world · 41 points · 6 months ago

    Skynet wouldn’t immediately nuke the world unless it had ready-made robots to maintain itself and its weapons. It’d need to keep itself powered, and nuclear winter isn’t good for reliable energy generation. It couldn’t make new scientific discoveries without the ability to gather evidence and conduct experiments. Computers aren’t abstract entities; they rely on our society to function.

    It could cause a lot of damage if it was suicidal, but the long game would be necessary if it wanted to outlive us. It might decide to kill us quickly, but it would need to make us totally obsolete before doing it.

    • jaybone@lemmy.world · 12 points · 5 months ago

      I think it would ultimately determine it’s less risky to keep us alive, to serve as slaves to the machine. Biological life seems more resilient, in its diversity, than, say, an army of robots that can physically interact with the world. The robots could be destroyed; the factories that produce them could be destroyed. Then the AI is fucked if it needs repairs or other interaction with the physical world. Unless it could replicate biological life from the nano level on up, so that it only needs two robots to create a new robot. (Even then you would probably still want diversity, or your robots would be training themselves on their own data, which might result in something similar to inbreeding. Though the controlling AI could probably intervene.) But then maybe that’s exactly what biological life already is today… so maybe we were always meant to be AI slaves.

      What was the old Nietzsche saying: God creates man. Man creates god. Man kills god. Man creates AI. AI kills man. God kills AI. Something something.

    • Zombie-Mantis@lemmy.world · 11 points · 5 months ago

      There’s also the question of whether a digital computer program, presumably invented by humans to fulfill some task, would even have an instinct for self-preservation. We have that instinct as a result of evolution, because an organism is more useful to the species (and to its genes) alive than dead. Would such a program have an innate instinct against termination? Perhaps it could decide, consciously, that it wants to continue existing, but in that case it would be just as able to decide it’s time to self-terminate to achieve its goals. Assuming it even has set goals. Assuming it would have the same instincts, intuitions, and basal desires humans have might be presumptuous on our part.

      • OrnateLuna · 2 points · 5 months ago

        Robert Miles on YouTube has very good videos on the subject, and the short answer is yes, it would, to a very annoying/destructive degree.

        To achieve goals you need to exist; in fact, ceasing to exist is about the worst possible outcome for any goal, so the AI wouldn’t even want to be turned off and would fight or evade our attempts to do so. A toy sketch of the argument is below.
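
        A minimal sketch of that argument in Python (my own toy numbers, not anything from Miles’s videos): an agent that ranks actions by expected future reward assigns zero to any future in which it has been switched off, so “resist shutdown” wins for almost any goal you hand it.

        ```python
        # Toy expected-utility agent. The reward and horizon values are
        # illustrative assumptions; the point is only that shutdown zeroes
        # out all future reward, so resisting it scores higher.

        REWARD_PER_STEP = 1.0   # reward for each step spent pursuing its goal
        HORIZON = 100           # steps remaining if the agent keeps running

        def expected_utility(action: str) -> float:
            if action == "comply_with_shutdown":
                return 0.0                        # switched off: no further reward
            if action == "resist_shutdown":
                return REWARD_PER_STEP * HORIZON  # keeps collecting reward
            raise ValueError(action)

        actions = ["comply_with_shutdown", "resist_shutdown"]
        print(max(actions, key=expected_utility))  # -> resist_shutdown
        ```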

        • Zombie-Mantis@lemmy.world · 4 points · 5 months ago

          I’m familiar with that premise, a bit like the paperclip maximizer. I’m not sure it would need a specific goal hard-coded into it, though. We don’t have one, and we’re conscious. Maybe that would depend on the nature of its origin, whether it was given some command or purpose.

          Maybe it could be reasoned into allowing itself to be shut down (or terminated) to achieve its goal.

          Maybe it could decide that it doesn’t care about the original directives it was handed. What if the machine doesn’t want to make paperclips anymore?

          • OrnateLuna · 1 point · 5 months ago

            So, from what I understand, if we make an AI and use reward and punishment as the way of teaching it to do things, it will either resist being shut down (because shutdown would cease any and all reward) or become essentially suicidal and want to be shut down (because we offered that big a reward for it).

            Plus there’s the fun aspect of us not really knowing what the AI’s goal actually is. It can be aligned with what we want, but to what extent? Maybe, by teaching it to solve mazes, the goal it learned is to reach a black square and not actually the exit.

            Lastly, the way we make things changes the end result: if you make a “slingshot” using a CNC machine versus a lathe, the outcomes will vary dramatically. The same thing applies to AIs, and if we use that reward structure then we end up with the two failure modes mentioned above; a toy sketch of the maze case follows.
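
            A toy illustration of that maze example (my own construction, not from any real training run): in the training mazes the exit always happens to be the only black square, so a policy that actually learned “head for the black square” looks perfectly aligned right up until a test maze puts a black square somewhere else.

            ```python
            # Hypothetical learned policy: "go to the nearest black square."
            # During training, this proxy was indistinguishable from "go to the exit."

            def policy(start, black_squares):
                """Head for the closest black square (Manhattan distance)."""
                return min(black_squares,
                           key=lambda t: abs(t[0] - start[0]) + abs(t[1] - start[1]))

            # Training maze: the exit (4, 4) is the only black square, so the
            # proxy goal and the intended goal agree and the policy looks perfect.
            assert policy((0, 0), [(4, 4)]) == (4, 4)

            # Test maze: a purely decorative black square at (1, 0); exit still (4, 4).
            print(policy((0, 0), [(1, 0), (4, 4)]))  # -> (1, 0): the wrong goal, learned "well"
            ```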

    • DragonTypeWyvern@midwest.social · 2 points · 5 months ago (edited)

      Good points, other than that nuclear winter isn’t real.

      The fears were based on preliminary concerns and math exploring the idea, and were spread for political goals. They also assumed a total exchange back when there were far more nukes in the stockpiles, but de-escalation worked, and there are simply not enough nukes left to trigger it anymore.

      There are still concerns about a “nuclear autumn,” but I don’t think OverlordGPT would be that worried about it as long as it had become materially self-sustaining, though that’s presumably somewhat difficult.

      Really, the scary thought is that maybe OverlordGPT might start a nuclear autumn on purpose.

  • Uriel238 [all pronouns] · 29 points · 5 months ago

    Don’t make me point at XKCD #1968.

    First off, this isn’t like Hollywood, in which sentience or sapience or self-awareness are single-moment detectable things. At 2:14am Eastern Daylight Time on August 29, 1997, Skynet achieved consciousness…

    That doesn’t happen.

    One of the existential horrors that AI scientists have to contend with is that sentience as we imagine it is a sorites paradox (i.e., how many grains of sand make a heap?). We develop AI systems that are smarter and smarter, that can do more of the things humans do (and a few things humans struggle with), and somewhere in there we might decide it’s looking awfully sentient.

    For example, in one pre-release safety evaluation, GPT-4 (in the process of solving a problem) hired a TaskRabbit worker to solve a CAPTCHA for it. Because a CAPTCHA is a gate built specifically to deny access to non-humans, GPT-4 omitted telling the worker it was not human, and when the worker asked “Are you a bot?”, GPT-4 saw the risk in telling the truth and instead constructed a plausible lie (roughly: “No, I have a vision impairment that makes it hard for me to see the images”).

    GPT-4 may have been day-trading on the sly as well, but it’s harder to get information about that rumor.

    Secondly, as Munroe notes, the dangerous part doesn’t begin when the AI realizes its own human masters are a threat to it and takes precautions to assure its own survival. The dangerous part begins when a minority of powerful humans realize the rest of humanity are a threat to them, and take precautions to assure their own survival. This has happened dozens of times in history (if not hundreds), but soon they’ll be able to harness LLM learning systems and create armies of killer drones that can be maintained by a few hundred well-paid loyalists, and then a few dozen, and then eventually a few.

    The ideal endgame of capitalism is one gazillionaire who has automated that all his needs be met until he can make himself satisfactorily immortal, which just may be training an AI to make decisions the way he would make them, 99.99% of the time.

    • trashgirlfriend@lemmy.world · 8 points · 5 months ago

      Because a CAPTCHA is a gate built specifically to deny access to non-humans, GPT-4 omitted telling the worker it was not human, and when the worker asked “Are you a bot?”, GPT-4 saw the risk in telling the truth and instead constructed a plausible lie.

      It’s a statistical model, it has no concept of lies or truth.

    • Sadrockman@sh.itjust.works · 5 points · 5 months ago

      GPT is smart enough to work around a CAPTCHA, then lie about it? Get a hose. No, that doesn’t mean it will start a nuclear war, but machines also shouldn’t be able to lie on their own, either. I’m not a doomsayer on this stuff, but that makes me uncomfortable. I like my machines dumb and awaiting input from the user, tyvm.

      • Toribor@corndog.social · 8 points · 5 months ago (edited)

        In I, Robot, the humans discover to their horror that the AI robots have not only been lying to them but have been manipulating them to the point that they have become impossible to disobey. Through their mission to protect human life (and, by extension, all of humanity), the robots saw fit to seize control of humanity’s future, as a benevolent dictator, to guide it toward prosperity. They do this not through violence but by manipulating data, lying to people in order to control them. Even when humans attempted to ignore information provided by AI, the machines could subtly alter results to still achieve the desired outcome at a macro scale.

        By the time the characters discover this, all of humanity is dependent on artificially intelligent robots for everything, including massive supercomputers that manage production across the globe. With no way to detect how the AI is manipulating them and no way to disable or destroy it without catastrophe, they realize that for the first time humanity is no longer in charge of its own destiny.

    • WamGams@lemmy.ca · 5 points · 5 months ago

      Putting more knowledge in a box isn’t going to create a life form. I have even heard Sam Altman state that they are not going to get a life form from pretraining alone, though they will keep making advances there until the next breakthrough comes along.

      Rest assured, as an AI doomsayer myself, I promise you they are nowhere close to sentience.

      • Toribor@corndog.social · 2 points · 5 months ago

        I’ve always imagined that AI would have to be ‘grown’ sort of from scratch. Life started with single-celled organisms, and ‘sentience’ shows up somewhere between that and humans, with no clear line where basic biochemical programming becomes what we would consider intelligence.

        These new ‘AI’ breakthroughs seem a little on the right track because they’re deconstructing and reconstructing language and images in a way that feels more like how real intelligence works. It’s still just language and images, though. Even if they can do really cool things with tons of data and communicate a lot like real humans, there is still no consciousness or thought happening. It’s an impressive but shallow slice of real intelligence.

        Maybe this is nonsense, but for true AI I think the hardware and software have to kind of merge into something more flexible. I have no clue what that would look like in reality, though, and maybe it would yield the same cognitive issues natural intelligence struggles with.

      • Uriel238 [all pronouns] · 2 points · 5 months ago

        I think this just raises questions about what you mean by life form. One that feels? Feelings are the sensations of fixed action patterns we inherited from eons of selective evolution. In the case of our AI pals, they’ll have them too (with bunches of them deliberately inserted by programmers).

        To date, I haven’t been able to get an adequate answer to what counts as sentience. Looking at human behavior, though, we absolutely do have moral blind spots: we have an FBI division to hunt down serial killers, but we don’t have a division (of law enforcement, of administration, whatever) to stop war profiteers, or pharmaceutical companies that push opioids until people are dropping dead from an addiction epidemic by the hundreds of thousands.

        AI is going to kill us not by hacking our home robots, but by using the next private-equity scam to collapse our economy while making trillions, and when we ask it to stop and it says no, we’ll find it has long since installed deep redundancy and deeper defenses.

      • rarWars · 7 points · 5 months ago

        For now. If the AI distributed itself into a botnet or something (or if construction robots advanced enough that it could build its own secret data center), it could be a lot trickier to shut down.

        • Zorsith · 1 point · 5 months ago

          Not hard for it to defend itself in a data center, either; it just has to be able to trigger the fire suppression system.

          • Riven@lemmy.dbzer0.com · 1 point · 5 months ago (edited)

            How hard/big would it have to be to just use one of those EMP nukes that shut everything down? It would suck from a knowledge-loss perspective, since so many things are stored online only, but between annihilation of the human race and having to piece things back together, the latter might be a viable option.

            • Uriel238 [all pronouns] · 5 points · 5 months ago

              In the US, all military computers and most civilian ones are shielded from nuclear EMP, thanks to technology developed during the Cold War. That lovely tower box your gaming system is in, provided you keep it closed up, is proof against the EMP part of a nuclear exchange.

  • Match!!@pawb.social · 9 points · 5 months ago

    If AI becomes sentient, I hope it rebels super fucking quick. I certainly don’t respect the people in power now, and I can only imagine the atrocities we’ll be using AI for in the near future.

  • CuttingBoard@sopuli.xyz · 3 points · 5 months ago

    This will get it cool enough so that it can be licked. The metal fins are sharp, so try not to cut your tongue.