• Fal@yiffit.net · 1 year ago

    How is it child sexual abuse content if there’s no child being abused? The child doesn’t even exist.

    • quindraco@lemm.ee · 1 year ago

      Exactly. Assuming this article means the American government when it says “government”, the First Amendment firmly protects entirely fictional accounts of child abuse, sexual or not. If it didn’t, Harry Potter would be banned or censored.

    • huginn@feddit.it · 1 year ago

      It is the product of abuse, though. Abuse material is used to train the AI.

      • BradleyUffner@lemmy.world · 1 year ago

        No, they aren’t. An AI trained on normal, everyday images of children and sexual images of adults could easily synthesize these images.

        Just like it can synthesize an image of a possum wearing a top hat without being trained on images of possums wearing top hats.

        • gila@lemm.ee · 1 year ago

          According to forum discussions seen by the IWF, offenders start with a basic source image generating model that is trained on billions and billions of tagged images, enabling them to carry out the basics of image generation. This is then fine-tuned with CSAM images to produce a smaller model using low-rank adaptation, which lowers the amount of compute needed to produce the images.

          They’re talking about a Stable Diffusion LoRA trained on actual CSAM. What you described is possible too, but it’s not what the article is pointing out.

      • Sphks@lemmy.dbzer0.com · 1 year ago

        I can get “great” results trying to generate naked children with standard Stable Diffusion models. They are not trained on abuse material, but they infer naked children from hentai. Actually, it’s more of a problem: most of the time I have to fight the generator not to produce sexy women, and when generating sexy women you sometimes have to fight to keep them from looking too young.

        • gila@lemm.ee · edited · 1 year ago

          That’s because the example they gave either a) combines two concepts the AI already understands, or b) adds a new concept to another already understood concept. It doesn’t need to be trained specifically on images of possums wearing top hats, but it would need to be trained on images of lots of different subjects wearing top hats. For SD, the top hat and possum concepts may be covered by the base model datasets, but CSAM isn’t. Simply training a naked-adult concept as well as a clothed-child concept wouldn’t produce CSAM, because there is nothing in either of those datasets that looks like CSAM, so the model doesn’t know what that looks like.

      • Fal@yiffit.net · 1 year ago

        Did you not read anything else in this thread and just randomly reply to me?