Data poisoning: how artists are sabotaging AI to take revenge on image generators
As AI developers indiscriminately suck up online content to train their models, artists are seeking ways to fight back.

  • Uriel238 [all pronouns] · 1 year ago

    The general term for this is adversarial input, and we’ve seen published reports about it since 2011, when the concern was that CSAM could be overlaid with secondary images so it wouldn’t be recognized by Google’s image filters or CSAM image trackers. If Apple had gone through with its plan to scan private iCloud accounts for CSAM, we might have seen this development.

    So far (AFAIK) we haven’t seen adversarial overlays on CSAM, though in China the technique is used to deter tracking by facial recognition: human rights activists and mischief-makers overlay their social media images so the pics fail to match security footage. A rough sketch of the overlay idea follows below.
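    To make that concrete, here’s a minimal, hypothetical Python sketch of my own (the file name photo.jpg and the crude nudge-based attack are illustrative, not any published tool). It computes a simple 8x8 average hash, the kind of perceptual fingerprint that image trackers rely on, then shifts a few borderline regions just enough to flip hash bits without visibly changing the picture:

    ```python
    import numpy as np
    from PIL import Image

    GRID = 8  # average hash: downscale to 8x8, threshold on the mean -> 64 bits

    def average_hash(img: Image.Image) -> np.ndarray:
        """Grayscale, shrink to GRID x GRID, threshold each cell against the mean."""
        small = np.asarray(img.convert("L").resize((GRID, GRID)), dtype=np.float32)
        return (small > small.mean()).flatten()

    def overlay_attack(img: Image.Image, nudge: float = 6.0) -> Image.Image:
        """Crude adversarial overlay: shift the brightness of grid cells whose
        downscaled value sits near the hash threshold, flipping those bits while
        keeping the per-pixel change small (real tools optimize this far better)."""
        rgb = np.asarray(img.convert("RGB"), dtype=np.float32)
        h, w = rgb.shape[:2]
        small = np.asarray(img.convert("L").resize((GRID, GRID)), dtype=np.float32)
        thresh = small.mean()
        for i in range(GRID):
            for j in range(GRID):
                if abs(small[i, j] - thresh) < nudge:
                    sign = -1.0 if small[i, j] > thresh else 1.0
                    ys, ye = i * h // GRID, (i + 1) * h // GRID
                    xs, xe = j * w // GRID, (j + 1) * w // GRID
                    rgb[ys:ye, xs:xe, :] += sign * 2.0 * nudge
        return Image.fromarray(np.clip(rgb, 0, 255).astype(np.uint8))

    if __name__ == "__main__":
        original = Image.open("photo.jpg")          # hypothetical input image
        perturbed = overlay_attack(original)
        flipped = int(np.sum(average_hash(original) != average_hash(perturbed)))
        print(f"hash bits flipped (of {GRID * GRID}):", flipped)
    ```

    Even a handful of flipped bits is usually enough to break an exact or near-exact hash match, which is the whole point of this kind of overlay.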

    The thing is, like an invisible watermark, these overlays are easy to detect (and reverse) once people are aware they’re a thing. So if a generative-AI project knows that some images may be poisoned, it’s just a matter of adding a detection and removal step to the pathway from candidate image to training database, along the lines of the sketch below.
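    Here is what such a scrub step might look like, as a generic Python/Pillow illustration under my own assumptions (the blur radius, JPEG quality, and detection threshold are arbitrary; this is not how any actual scraper is known to work). Light blurring plus lossy re-encoding destroys most high-frequency overlays, and comparing an image against its scrubbed copy gives a crude perturbation detector:

    ```python
    import io

    import numpy as np
    from PIL import Image, ImageFilter

    def scrub(img: Image.Image, jpeg_quality: int = 85) -> Image.Image:
        """Remove most high-frequency detail (where overlays like to hide):
        a slight blur followed by a lossy JPEG re-encode."""
        cleaned = img.convert("RGB").filter(ImageFilter.GaussianBlur(radius=0.8))
        buf = io.BytesIO()
        cleaned.save(buf, format="JPEG", quality=jpeg_quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    def looks_perturbed(img: Image.Image, threshold: float = 6.0) -> bool:
        """Crude detector: if scrubbing changes the image a lot, the extra
        detail may be an adversarial overlay rather than real texture."""
        a = np.asarray(img.convert("L"), dtype=np.float32)
        b = np.asarray(scrub(img).convert("L"), dtype=np.float32)
        return float(np.abs(a - b).mean()) > threshold

    def admit_to_training_set(img: Image.Image) -> Image.Image:
        """Pathway from candidate image to training database: always scrub,
        and optionally flag images that look heavily perturbed for review."""
        if looks_perturbed(img):
            pass  # e.g. log for review, or drop the image entirely
        return scrub(img)
    ```

    The countermeasure doesn’t have to be clever; it only has to be cheaper than the poisoning it defeats.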

    Similarly, once enough people start poisoning their social media images, the data scrapers will start scanning for and removing overlays even before the datasets are sold to law enforcement and commercial interests.