• mindbleach@sh.itjust.works · 1 year ago

    I don’t care what works a neural network gets trained on. How else are we supposed to make one?

    Should I care more about modern eternal copyright bullshit? I’d feel more nuance about it if everything a few decades old were public domain, like it’s fucking supposed to be. Then there’d be plenty of slightly-outdated content to shovel into these statistical analysis engines. But there’s not. So fuck it: show the model absolutely everything, and the impact of each work becomes vanishingly small.

    Models don’t get bigger as you add more stuff. Training only twiddles the numbers in each layer. There are two-gigabyte networks that have been trained on hundreds of millions of images. Divide one by the other and each image accounts for barely a dozen bytes of the network, which is nowhere near enough to store it verbatim. And the network gets better as that number goes down.
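
    A rough sketch of that arithmetic, using the round numbers above (a ~2 GB network, a couple hundred million training images; not exact figures for any particular model):

    ```python
    # Back-of-the-envelope: how many bytes of model does each training image
    # "get" if you spread the network's weights evenly across the dataset?
    # The numbers are the round figures quoted above, not stats for a real model.

    model_bytes = 2 * 10**9          # a ~2 GB image model
    training_images = 200 * 10**6    # "hundreds of millions" of images

    bytes_per_image = model_bytes / training_images
    print(f"{bytes_per_image:.0f} bytes of model per training image")   # ~10

    # For comparison, storing one small 512x512 RGB image verbatim:
    verbatim_bytes = 512 * 512 * 3
    print(f"{verbatim_bytes:,} bytes for that one image uncompressed")  # 786,432
    ```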

    The entire point is to force the distillation of high-level concepts from raw data. We’ve tried doing it the smart way and we suck at it. “AI winter” and “good old-fashioned AI” were half a century of fumbling toward the acceptance that we don’t understand how intelligence works. This brute-force approach isn’t chosen for cost or ease or simplicity. This is the only approach that works.

    • anachronist@midwest.social · 1 year ago

      Models don’t get bigger as you add more stuff.

      They will get less coherent and/or “forget” the earlier data if you don’t scale the parameter count up along with the training set.
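
      A toy way to see that capacity argument (a made-up minimal example, not how real models are trained): keep the parameter count fixed, train on one distribution, then keep training on a conflicting one, and the fit to the first one falls apart.

      ```python
      import numpy as np

      # Toy "forgetting" demo: one fixed-size linear model trained first on
      # task A, then on task B whose targets conflict with A. Error on A climbs
      # right back up once training switches to B.

      rng = np.random.default_rng(0)
      d = 10
      w_a = rng.normal(size=d)       # true weights for task A
      w_b = -w_a                     # task B wants the opposite mapping

      def batch(w_true, n=64):
          X = rng.normal(size=(n, d))
          return X, X @ w_true

      def error_on(w, w_true):
          X, y = batch(w_true, n=1000)
          return float(np.mean((X @ w - y) ** 2))

      w = np.zeros(d)                # the single fixed-capacity model
      lr = 0.01

      for _ in range(500):           # phase 1: train on task A
          X, y = batch(w_a)
          w -= lr * (2 / len(y)) * X.T @ (X @ w - y)
      print("after A:  error on A =", round(error_on(w, w_a), 4))

      for _ in range(500):           # phase 2: keep training, but on task B
          X, y = batch(w_b)
          w -= lr * (2 / len(y)) * X.T @ (X @ w - y)
      print("after B:  error on A =", round(error_on(w, w_a), 2),
            "| error on B =", round(error_on(w, w_b), 4))
      ```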

      There are two-gigabyte networks that have been trained on hundreds of millions of images

      You can take a huge TIFF of an image, put it through JPEG with the quality cranked all the way down, and get a tiny file out the other side which is still a recognizable derivative of the original. LLMs are extremely lossy compression of their training set.
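
      The JPEG part is easy to try; a minimal sketch with Pillow (the file names are placeholders, any large lossless image works):

      ```python
      import os
      from PIL import Image

      # Re-encode a big lossless image as a rock-bottom-quality JPEG and
      # compare file sizes. "input.tiff" is a placeholder path.
      src, out = "input.tiff", "crushed.jpg"

      Image.open(src).convert("RGB").save(out, "JPEG", quality=5)

      print(f"original: {os.path.getsize(src):,} bytes")
      print(f"jpeg q=5: {os.path.getsize(out):,} bytes")
      ```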

      • mindbleach@sh.itjust.works · 1 year ago

        which is still a recognizable derivative of the original

        Not in twelve bytes.

        Deep models are a statistical distillation of a metric shitload of data. Smaller models with more training on more data don’t get worse, they get more abstract - and in adversarial uses they often kick big networks’ asses.