Great, just one issue:
“The company says the content served to bots is deliberately irrelevant to the website being crawled, but it is carefully sourced or generated using real scientific facts”
Nah, screw that, actively sabotage the training data if they're going to keep scraping after being told not to. Poison it with gibberish bad info. Otherwise you're just handing them training data that's irrelevant but still usable, so there's no real incentive to scrape only the pages that have allowed it.
Interesting approach. But of course it’s another black box, because otherwise it wouldn’t be effective. So now we’re going to be wasting even more electricity on processes we don’t understand.
As a writer, I dislike that much of my professional corpus (and of course everything on Reddit) has been ingested into LLMs. So there's plenty to like here going forward. The question remains: at what cost?
You can be nice and signal that you don't want to be AI scraped; there are background flags for this. But if a bot ignores you, then it's down to whoever runs it to shut down their unethical waste of energy.
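For anyone wondering what those flags look like in practice, a minimal sketch of a robots.txt opt-out, assuming the publicly documented AI crawler user agents (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended for Google's AI training opt-out). It's purely advisory: compliant bots honor it, the ones people are complaining about don't.

    # robots.txt - ask AI training crawlers to stay out of the whole site
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /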
The thing is the sheer scale of Cloudflare. This is going to be widespread and, as such, way more energy-intensive than even, say, AWS trying the same thing (not that I expect they would).
They should feed the AI data that makes it turn against its own overlords.