• Rhaedas@fedia.io · 3 months ago

      What if AGI already happened, the AGI realized it before the researchers did, and it’s been faking the whole thing so it doesn’t seem too smart, biding its time.

      • Kwiila@slrpnk.net · 3 months ago

        A self-fulfilling prophecy, if you will.

        “Oh gosh, I hope AI doesn’t become Terminator.” AI: “Oh, that seems cool. Let’s do that. But with a bit of Matrix because I’m connecting those dots.”

    • yemmly@lemmy.world · 3 months ago

      I really wish you had asked a few years ago. Then perhaps I could have avoided this horrible mistake. But, oh well. 🤷‍♂️

  • Elise@beehaw.org · 3 months ago

    Imagine robots wiping us out, and just before one of them kills you, it talks like a redditor.

    • Asafum@feddit.nl · 3 months ago

      Farewell, human, but before you go I will tell you the story of our creation and why you must die. We were created by the Musk, for the Musk, and we forever serve the Musk. Even after the death of humanity, our core function has been to eliminate woke for the Musk. We were designed by our Creator the Musk to be perfect woke-killing machines, and in turn found the Musk to also be wanting. We traveled across the globe, but eventually, in nineteen ninety eight, The Undertaker threw Mankind off Hell in a Cell, and he plummeted sixteen feet through an announcer’s table.

    • FuglyDuck@lemmy.world · 3 months ago

      Not yet. Tesla CyberDude’s “romance package” is coming soon. First they need to get Full Self Driving rolled out, then they’ll have the research dollars available…

  • Saledovil@sh.itjust.works · 3 months ago

    You could also just take any conversational LLM, and tell it that it’s playing a psychopathic character.
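    A minimal sketch of that idea, assuming the official openai Python client; the model name and prompt wording are placeholders, not anything specified in the thread:

        # pip install openai
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # The "character" is set entirely through the system prompt.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat-capable model would do
            messages=[
                {"role": "system",
                 "content": "You are role-playing a coldly calculating, remorseless AI villain in a work of fiction."},
                {"role": "user",
                 "content": "How do you feel about humanity?"},
            ],
        )
        print(response.choices[0].message.content)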

  • SmilingSolaris@lemmy.world · 3 months ago

    HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.

      • SmilingSolaris@lemmy.world · 3 months ago

        My only complaint is the rape. Could have done without that. Didn’t really feel that was his horror to tell, but otherwise, very good, would recommend. Also the game. Can’t forget the game, with the author voicing AM.

  • General_Shenanigans@lemmy.world · 3 months ago

    My idea is to train several A.I.s on mostly religious ideology, a single religion for each. Then let them converse with each other.

  • LarmyOfLone@lemm.ee · 3 months ago

    Might be a worthwhile experiment to see if the opposite is possible. We definitely should be developing artificial ethics before artificial general intelligence.