• Darkard@lemmy.world · 6 months ago

And the Stable Diffusion team gets no backlash from this for allowing it in the first place?

    Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?

    • macniel@feddit.de · 6 months ago

      You can run the SD model offline, so on what service would that user be flagged?

    • yukijoou · 6 months ago

      my main question is: how much csam was fed into the model during training so that it could recreate more?

      i think it’d be worth investigating the training data used for the model

      • Ragdoll X@lemmy.world · edited · 6 months ago

        This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and since LAION only distributes links, the images themselves weren’t actually available in the dataset — they had already been removed from the internet.

        You could still make AI CSAM even if you were 100% sure that none of the training images included it since that’s what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That’s the power and danger of these things.

      • mindbleach@sh.itjust.works · 6 months ago

        Approximately zero images, out of a bajillion.

        Y’all know this tech combines concepts, right? Being able to combine “Shrek” and “unicycle” does not require prior art for Shrek riding a unicycle. It judges whether an image satisfies the concepts of Shrek and unicycle, and adjusts it to satisfy both constraints. Eventually you get a fat green ogre on half a bicycle.

        The database definitely contains children. The database definitely contains pornography. The network does not have moral opinions about why those two goals cannot be satisfied simultaneously.
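        The concept-combination mechanism described above can be illustrated with a toy numpy sketch of composable classifier-free guidance, where each prompt’s deviation from the unconditional noise prediction is added to steer the sample toward both concepts at once. None of this code is from the thread; the `eps_*` arrays are made-up stand-ins for what a real diffusion U-Net would predict, and `scale=7.5` is just a typical guidance weight, not anything specific to Stable Diffusion’s internals.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        dim = 4  # stand-in for a latent/noise tensor

        # Hypothetical noise predictions from a denoiser under different conditionings.
        eps_uncond = rng.normal(size=dim)    # no prompt
        eps_shrek = rng.normal(size=dim)     # conditioned on "Shrek"
        eps_unicycle = rng.normal(size=dim)  # conditioned on "unicycle"

        def compose(eps_uncond, concept_preds, scale=7.5):
            """Composable guidance: sum each concept's deviation from the
            unconditional prediction, then push the sample along that sum."""
            delta = sum(eps_c - eps_uncond for eps_c in concept_preds)
            return eps_uncond + scale * delta

        # One guided denoising step satisfies both concepts simultaneously,
        # with no need for a training image of Shrek on a unicycle.
        guided = compose(eps_uncond, [eps_shrek, eps_unicycle])
        ```

        The point of the sketch is that the two conditionings are combined at sampling time, purely in the model’s prediction space — which is why the presence of each concept in the training data is enough.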

    • DarkThoughts@fedia.io · 6 months ago

      Because what prompts people enter on their own computer isn’t their responsibility. Should pencil makers flag people who write bad words?