• jordanlund@lemmy.world · 26 points · 1 year ago

    Roko’s Basilisk. But here’s the thing: once you’re aware of it, you’re fucked. The only solution is to never research it and never know anything about it. Live in blissful ignorance.

      • jordanlund@lemmy.world · 3 points · 1 year ago

        Oh, see, that’s the thing: it’s simple enough to understand, you just don’t want to understand it. :)

    • jkrtn@lemmy.ml · 9 points · 1 year ago

      You have to believe that a malevolent AI will give enough of a damn about you to bother simulating anything at all, let alone infinite torture, which is useless for it to do once it already exists. Everyone on LessWrong has a well-fed ego, so I get why they were in a tizzy for a while.

      • jordanlund@lemmy.world · 3 points · 1 year ago

        Well, one punishes you if you deny its existence, the other punishes you if you fail to assist in its development. So it’s a LITTLE different. :)

        Fortunately, for me personally, I helped fund a key researcher who could, in theory, be a major contributor to such a thing. So I have plausible deniability. ;) And I’ve been promised a 15-minute head start before he turns it on.

      • Denjin@lemmings.world · 15 points · 1 year ago

        A silly thought experiment which, in gullible people, can produce psychosomatic symptoms like headaches and insomnia.

      • Nibodhika@lemmy.world · 6 points · 1 year ago

        It’s essentially a thought experiment. Without getting too specific, it goes along the lines of “what if there were a hypothetical bad scenario that gets triggered by you knowing about it?” So if you look it up now, you’re doomed.