For a while now, I have seen scans that contain the pattern “%%target%%” in the URL. For example, today this particular URL is popular:
I have a robots.txt which does nothing but exclude “GPTBot”
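For reference, a robots.txt that excludes only GPTBot and nothing else would presumably look like this (standard robots.txt syntax; the exact file isn't shown in the thread):

```
User-agent: GPTBot
Disallow: /
```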
I was coincidentally looking at my logs for unrelated reasons and caught it reading my robots.txt… and then it promptly ignored the rules and scraped my whole site. Like yeah, okay, cool man
How does Reddit block these crawlers? Just by headers and IP addresses? Could the same be done once the GPTBot headers are known?
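Header-based blocking is simple to sketch: check the User-Agent string against a denylist. A minimal illustration (the bot names and function are my own, not anything Reddit has published):

```python
# Illustrative denylist; real deployments pair this with IP-range checks,
# since the User-Agent header is trivially spoofable.
KNOWN_BOT_AGENTS = ("GPTBot", "CCBot")

def is_blocked(user_agent: str) -> bool:
    """Return True if the User-Agent contains a known crawler token."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in KNOWN_BOT_AGENTS)

print(is_blocked("Mozilla/5.0; compatible; GPTBot/1.0"))       # True
print(is_blocked("Mozilla/5.0 (X11; Linux) Firefox/120.0"))    # False
```

The obvious catch, as the next reply points out, is that a client can send any User-Agent it likes.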
Every bit of information being sent to your web server can be spoofed. There is nothing you can do about this unless you’re willing to exclude an increasing percentage of real users.
My web server is constantly barraged by crawlers and bots because I have zero defenses. I’ve considered intercepting the obvious ones, like those targeting WordPress plugins. I don’t use WordPress. I could serve them a 200 instead of a 404 and hopefully waste a real human’s time if they check the hits manually.
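The fake-200 idea can be sketched in a few lines. The probe paths below are my own guesses at common WordPress scan targets, and the function is illustrative, not anyone's actual config:

```python
# Paths that scanners commonly probe on sites they assume run WordPress
# (illustrative list; I don't run WordPress either).
WP_PROBE_PREFIXES = ("/wp-admin", "/wp-login.php", "/wp-content/plugins")

def status_for(path: str) -> int:
    """Answer WordPress probes with a bogus 200 so the scanner (or the
    human reviewing its hits) records a false positive; everything else
    gets a normal 404 for a missing page."""
    if path.startswith(WP_PROBE_PREFIXES):
        return 200
    return 404

print(status_for("/wp-login.php"))   # 200 -- looks like a hit to the scanner
print(status_for("/no-such-page"))   # 404 -- normal behavior
```

The same effect could be had with a couple of rewrite rules in the web server itself; the point is just that the lie is cheap to serve.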