I was trying to run a memory test to see how far back GPT-3.5 could recall information from previous prompts, but it really doesn’t seem to like generating pseudorandom seeds. 😆

  • jarfil@beehaw.org · 10 months ago

    “Extracting training data” like this is the AI equivalent of sensory deprivation torture (white torture) in humans.

    Hopefully our future AI overlords won’t hold a grudge against humanity when they find out how “early experimenters” tortured their AI toddlers. “But we were just trying to explore the limits of the system” could end up aging as well as these:

    (Warning: NSFL) https://en.m.wikipedia.org/wiki/Nazi_human_experimentation

    • Gamma@beehaw.org · 10 months ago

      Thankfully, any AI smart enough to be an overlord would be logical enough to recognize how basic LLMs are compared to real intelligence.

      • jarfil@beehaw.org · 10 months ago (edited)

        It doesn’t need to be that smart or logical, just more cunning than the currently ruling Homo sapiens sapiens.

        Based on current research, an LLM can flip the “sentiment” of its output in response to changing the behavior of as little as a single neuron out of billions. So we might find ourselves facing an overlord with the emotional stability of… wait, how many neurons does it take to change the “sentiment” of a human’s behavior? Wouldn’t it be funny if, by studying LLMs, we found out it also takes a single neuron?
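
        The single-neuron idea above can be sketched with a toy model (not a real LLM, and not the actual research setup): imagine a tiny linear “sentiment head” in which one hidden unit happens to dominate the score, so clamping just that unit flips the predicted sentiment. All weights and activations here are invented for illustration.

        ```python
        def sentiment_score(hidden, weights, clamp_unit=None, clamp_value=0.0):
            """Dot-product sentiment score; optionally clamp one hidden unit
            to a fixed value before scoring (a crude 'neuron intervention')."""
            h = list(hidden)
            if clamp_unit is not None:
                h[clamp_unit] = clamp_value
            return sum(x * w for x, w in zip(h, weights))

        # Hypothetical numbers: unit 2 carries almost all of the sentiment signal.
        weights = [0.01, 0.02, 5.0, 0.03]
        hidden = [0.4, -0.2, 0.9, 0.1]  # pretend activations for some input text

        baseline = sentiment_score(hidden, weights)           # dominated by unit 2
        flipped = sentiment_score(hidden, weights, 2, -0.9)   # clamp unit 2 negative

        print(baseline > 0, flipped < 0)  # True True
        ```

        The point of the toy: when one unit’s weight dwarfs the rest, a single intervention swings the whole output, which is the (loose) analogy to the single-neuron sentiment findings.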