[…] trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models.
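For the curious, here is roughly what that technique looks like in code: a minimal Python sketch of a tarpit, not the actual tool described in the article, with every name and the tiny seed corpus invented for illustration. Each URL returns a deterministic page of order-1 Markov babble plus links that lead only to more maze pages, so a crawler that wanders in never finds an exit.

```python
# A minimal tarpit sketch (hypothetical; not the tool from the article).
# Every path serves Markov babble and links that only go deeper into the maze.
import hashlib
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Seed text for the Markov chain; any corpus works.
CORPUS = ("the crawler follows the link and the link leads to a page and "
          "the page links to another page that leads the crawler on").split()

# Bigram transition table: word -> words observed to follow it.
CHAIN: dict[str, list[str]] = {}
for a, b in zip(CORPUS, CORPUS[1:]):
    CHAIN.setdefault(a, []).append(b)

def babble(seed: str, n_words: int = 150) -> str:
    """Markov babble, seeded by the URL so the same path always returns
    the same text -- to a crawler the maze looks like static files."""
    rng = random.Random(hashlib.sha256(seed.encode()).digest())
    word = rng.choice(CORPUS)
    words = [word]
    for _ in range(n_words - 1):
        # Fall back to the full corpus if a word has no recorded successor.
        word = rng.choice(CHAIN.get(word) or CORPUS)
        words.append(word)
    return " ".join(words)

def maze_links(seed: str, n_links: int = 5) -> str:
    """Links that point only to more maze pages: no exit links."""
    rng = random.Random(hashlib.sha256((seed + "#links").encode()).digest())
    return "".join(
        f'<li><a href="/maze/{rng.getrandbits(64):016x}.html">next</a></li>'
        for _ in range(n_links)
    )

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Real tarpits also tend to respond slowly, to waste crawler time;
        # a token delay stands in for that here.
        time.sleep(1)
        body = (f"<html><body><p>{babble(self.path)}</p>"
                f"<ul>{maze_links(self.path)}</ul></body></html>").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

Because the pages are generated deterministically from the URL, the maze costs almost nothing to host, while a crawler that ignores robots.txt can churn through it indefinitely, ingesting nothing but babble.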
Someone please write a virus that deletes all knowledge from LLMs.
Deleting data from them might not be feasible, but there are other tactics.