He allegedly used Stable Diffusion, a text-to-image generative AI model, to create “thousands of realistic images of prepubescent minors,” prosecutors said.
And the Stable Diffusion team gets no backlash for allowing it in the first place?
Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?
You can run the SD model offline, so on what service would that user be flagged?
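For context, here's a minimal sketch of what fully local generation looks like with the diffusers library, assuming the weights were already downloaded to a folder (the path and settings are illustrative, not anyone's actual setup). Nothing in this loop ever contacts a server, so there's no service in a position to see the prompt, let alone flag it.

```python
# Hypothetical example: Stable Diffusion running entirely on a local GPU.
# The model folder is a placeholder for previously downloaded weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5",   # local copy of the weights
    torch_dtype=torch.float16,
    local_files_only=True,       # never reach out to the network
).to("cuda")

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("out.png")            # nothing is logged anywhere but this machine
```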
My main question is: how much CSAM was fed into the model during training so that it could recreate more?
I think it'd be worth investigating the training data used for the model.
This did happen a while back, with researchers finding thousands of hashes of CSAM images among the links in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and the images themselves weren't actually obtainable through the dataset because they had already been removed from the internet (LAION is a list of links and captions, not the images).
You could still make AI CSAM even if you were 100% sure that none of the training images included it, since that's what these models are made for - combining concepts without needing to have seen them together before. If you hold the AI's hand enough with prompt engineering, textual inversion, and img2img, you can get it to generate pretty much anything. That's the power and danger of these things.
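To make the hand-holding point concrete, this is roughly what img2img looks like with the diffusers library - the model ID, file names, and strength value are just placeholders, not a real workflow. You hand the model a starting picture plus a prompt, and it pulls the picture toward the prompt, which is why a determined user can steer it almost anywhere.

```python
# Rough img2img sketch: start from an existing image and nudge it toward a prompt.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("sketch.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="a green ogre riding a unicycle, photorealistic",
    image=init,
    strength=0.6,        # 0 = keep the original image, 1 = ignore it entirely
    guidance_scale=7.5,  # how strongly to follow the prompt
).images[0]
result.save("result.png")
```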
Approximately zero images, out of a bajillion.
Y’all know this tech combines concepts, right? Being able to combine “Shrek” and “unicycle” does not require prior art for Shrek riding a unicycle. It judges whether an image satisfies the concepts of Shrek and unicycle, and adjusts it to satisfy both constraints. Eventually you get a fat green ogre on half a bicycle.
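If you want a concrete picture of that "judging" step: CLIP, the kind of text/image model whose text encoder conditions Stable Diffusion v1, can score a single image against several concepts at once. Here's a hedged sketch using a generic CLIP checkpoint from transformers - it's an analogy for how the sampler gets steered, not the actual diffusion loop, and the image file name is a placeholder.

```python
# Illustration only: score one image against multiple text concepts with CLIP.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("candidate.png")
texts = ["Shrek", "a unicycle", "Shrek riding a unicycle"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
scores = model(**inputs).logits_per_image[0]  # one similarity score per text
for text, score in zip(texts, scores.tolist()):
    print(f"{score:6.2f}  {text}")
```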
The database definitely contains children. The database definitely contains pornography. The network does not have moral opinions about why those two goals cannot be satisfied simultaneously.
Because the prompts people enter on their own computers aren't the developers' responsibility? Should pencil makers flag people who write bad words?