• jagged_circle@feddit.nl · 25 points · 2 days ago (edited)

    This is fine. I support archiving the Internet.

    It kinda drives me crazy how normalized anti-scraping rhetoric is. There is nothing wrong with (rate-limited) scraping.

    The only bots we need to worry about are the ones that POST, not the ones that GET.
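
    A GET-only scraper that behaves itself isn't hard to write. A rough sketch in Python (the user agent string, the example site, and the one-second delay are all placeholders, not anyone's real setup):

        import time
        import urllib.robotparser
        import urllib.request

        USER_AGENT = "example-archive-bot/0.1 (+https://example.org/bot)"  # placeholder identity
        DELAY_SECONDS = 1.0  # self-imposed rate limit between requests

        robots = urllib.robotparser.RobotFileParser("https://example.org/robots.txt")
        robots.read()

        def polite_get(url):
            # Skip anything the site has asked crawlers not to fetch
            if not robots.can_fetch(USER_AGENT, url):
                return None
            req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
            with urllib.request.urlopen(req) as resp:
                body = resp.read()
            time.sleep(DELAY_SECONDS)  # never hammer the server
            return body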

    • WhyJiffie@sh.itjust.works · 4 points · 14 hours ago

      This is neither archiving nor rate-limited, if the AI training purpose and the scraping at 25 times the rate of a large company didn't make that obvious.

      • tempest@lemmy.ca · 2 points · 13 hours ago

        The type of request is not relevant; it's the cost of the request that's the issue. We long ago stopped serving static HTML documents that can be cached. Tons of requests trigger complex searches or computations that are expensive server-side. This kind of behavior basically ruins the internet and pushes everything into closed gardens and behind logins.
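
        To make the cost difference concrete, here is a hypothetical sketch: a cached result is nearly free, but a cache only pays off when the same query repeats, and bulk scrapers tend to walk the long tail of URLs that never do. (The corpus and cache size are made-up stand-ins.)

            from functools import lru_cache

            CORPUS = ["page one text", "page two text", "page three text"]  # stand-in data

            @lru_cache(maxsize=10_000)
            def search(query):
                # Imagine a full-text scan or a database query here: real work on every call.
                # Repeated identical queries come back from the cache for free; the unique,
                # long-tail requests a bulk scraper generates mostly miss it.
                return tuple(line for line in CORPUS if query in line)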

        • Olgratin_Magmatoe@lemmy.world · 3 points · 10 hours ago

          It has nothing to do with a sysadmin. It's impossible for a given request to require zero processing power, so there will always be an upper limit on how many GET requests can be handled, even if each request only needs a small amount of processing power.

          For a business it's probably not a big deal, but for a self-hosted site it can quickly become a problem.
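
          Back-of-the-envelope version of that ceiling (the per-request cost and core count are assumptions, not measurements):

              CPU_MS_PER_REQUEST = 5   # assumed CPU cost of one "cheap" GET
              CORES = 2                # a small self-hosted box

              max_requests_per_second = CORES * 1000 / CPU_MS_PER_REQUEST
              print(max_requests_per_second)  # 400.0 -- beyond this the CPU is saturated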

          • jagged_circle@feddit.nl · 1 point · 9 hours ago

            Caches can be configured locally to use near-zero processing power, or pushed out to the last mile so that your hardware does no work at all.
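
            One way to read "pushed out to the last mile": mark static responses as cacheable so browsers and shared caches answer repeat GETs without touching the origin. A sketch, using Flask purely for illustration:

                from flask import Flask, Response

                app = Flask(__name__)

                @app.get("/about")
                def about():
                    resp = Response("<h1>About</h1>", mimetype="text/html")
                    # Any browser or shared cache (CDN, proxy) may hold this for a day,
                    # so repeat requests never reach the origin server.
                    resp.headers["Cache-Control"] = "public, max-age=86400"
                    return resp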

              • jagged_circle@feddit.nl · 1 point · 7 hours ago

                Right, that's why I said you should fire your sysadmin if they aren't caching, or can't get the cache down to zero load for static content served in response to simple GET requests.

    • purrtastic@lemmy.nz · 43 points · 2 days ago

      It’s not fine. They are not archiving the internet.

      I had to ban their user agent after very aggressive scraping that would have taken down our servers. Fuck this shitty behaviour.
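
      For anyone else needing to do the same, a user-agent ban can also live at the application layer (most setups would do it in the web server or CDN instead); the blocked substring below is just a placeholder, not the actual agent:

          from flask import Flask, abort, request

          app = Flask(__name__)
          BLOCKED_AGENTS = ("ExampleAIBot",)  # placeholder substring(s) of the offending user agent

          @app.before_request
          def block_bad_bots():
              ua = request.headers.get("User-Agent", "")
              if any(bad in ua for bad in BLOCKED_AGENTS):
                  abort(403)  # refuse the request before any expensive work happens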

    • Max-P@lemmy.max-p.me · 44 points · 2 days ago

      I had to block ByteSpider at work because it can't even parse HTML correctly, just hammers the same page, and accounts for sometimes 80% of the traffic hitting a customer's site, taking it down.

      The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it's all GETs, they hit years-old content that isn't cached and use up the majority of the CPU time on the web servers.

      Scraping is okay; using up a whole 8-vCPU instance for days to feed AI models is not. They even actively use dozens of IPs to bypass the rate limits, so they're basically DDoS'ing whoever they scrape with no fucks given. I've been woken up by the pager way too often because of ByteSpider.
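
      That IP rotation is exactly why plain per-IP rate limiting doesn't save you. A rough sketch of a sliding-window limiter (the numbers are made up) shows the hole:

          import time
          from collections import defaultdict, deque

          WINDOW_SECONDS = 60
          MAX_REQUESTS_PER_WINDOW = 100   # assumed limit, purely illustrative
          recent = defaultdict(deque)     # client IP -> timestamps of its recent requests

          def allow(ip):
              now = time.monotonic()
              q = recent[ip]
              while q and now - q[0] > WINDOW_SECONDS:
                  q.popleft()
              if len(q) >= MAX_REQUESTS_PER_WINDOW:
                  return False  # throttle this IP
              q.append(now)
              # A scraper rotating across dozens of addresses starts each one with an
              # empty window, so the aggregate load sails straight past the limit.
              return True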

      My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.

    • zod000@lemmy.ml · 24 points · 2 days ago

      Bullshit. This bot doesn't identify itself as a bot and doesn't rate limit itself to anything close to an appropriate amount. We were seeing more traffic from this thing than from all other crawlers combined.

      • jagged_circle@feddit.nl · 4 points · 1 day ago (edited)

        Not rate limiting is bad. Hate them because of that, not because they’re a bot.

        Some bots are nice

        • zod000@lemmy.ml · 3 points · 23 hours ago

          I don't hate all bots; I hate this bot specifically because:

          • they intentionally hide that they are a bot to evade our, and everyone else's, methods of restricting which bots we allow and how much activity we allow
          • they do not respect robots.txt
          • the already-mentioned lack of rate limiting
        • Zangoose@lemmy.world · 4 points · 1 day ago

          Even if they were rate limiting, they're still just using the bot to train an AI. If it's from a company, there's a 99% chance the bot is bad. I'm leaving 1% for whatever the Internet Archive (are they even a company, tho?) is doing.