• umbraroze@lemmy.world · 6 points · 26 minutes ago

    I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”

    Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: why would anyone index those?
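
    The kind of directives described above are expressed in a site's robots.txt. The following is an illustrative sketch only (example paths, not Wikipedia's actual file), showing how a site can steer crawlers away from history and other technical pages while leaving canonical article pages crawlable:

    ```text
    # Illustrative robots.txt sketch -- example paths, not any real site's file
    User-agent: *
    # Keep crawlers out of technical pages (page history, special pages, API)
    Disallow: /w/
    Disallow: /wiki/Special:
    # Canonical article pages stay crawlable
    Allow: /wiki/
    ```

    Crawlers that honour the Robots Exclusion Protocol never fetch the disallowed paths; the complaint in this thread is that many LLM scrapers simply ignore it.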

  • Randomgal@lemmy.ca · 3 points · 56 minutes ago

    I’m glad we’re burning the forests even faster in the name of identity politics.

  • surph_ninja@lemmy.world · 10 points · 3 hours ago

    I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.

  • DigitalDilemma@lemmy.ml · 38 points · 10 hours ago

    Surprised at the level of negativity here. Having had my sites repeatedly DDoSed offline by ClaudeBot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures that help.

  • Dr. Moose@lemmy.world · 23 points · edited · 15 hours ago

    Considering how many false positives Cloudflare serves I see nothing but misery coming from this.

    • Xella@lemmy.world · 1 point · 2 hours ago

      Lol I work in healthcare and Cloudflare regularly blocks incoming electronic orders because the clinical notes “resemble” SQL injection. Nurses type all sorts of random stuff in their notes so there’s no managing that. Drives me insane!

    • Dave@lemmy.nz · 16 points · 13 hours ago

      In terms of Lemmy instances, if your instance is behind cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting the AI scraping.

  • TorJansen@sh.itjust.works · 38 points · 17 hours ago

    And soon, the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get some real work done. Sigh.

  • 4am@lemm.ee · 238 points · 22 hours ago

    Imagine how much power is wasted on this unfortunate necessity.

    Now imagine how much power will be wasted circumventing it.

    Fucking clown world we live in

  • weremacaque@lemmy.world · 16 points · edited · 15 hours ago

    You have thirteen hours in which to solve the labyrinth before your baby AI becomes one of us, forever.

  • oldfart@lemm.ee · 98 points · 23 hours ago

    So the web is a corporate war zone now, and you can choose between feudal protection and being attacked from all sides. What a time to be alive.

    • theparadox@lemmy.world · 13 points · 21 hours ago

      There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and all that comes with them, you’ll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren’t a bot… and so will everyone else. You’ll likely be forced to deal with whatever AI bots are pushed on you while within the walls, but better the enemy you know, I guess?

    • rocket_dragon@lemmy.dbzer0.com · 63 points · 1 day ago

      Next step is an AI that detects AI labyrinth.

      It gets trained on labyrinths generated by another AI.

      So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn’t get lost.

      It’s gonna be AI all the way down.

      • brucethemoose@lemmy.world · 12 points · edited · 23 hours ago

        LLMs tend to be really bad at detecting AI-generated content, and I can’t imagine specialized models are much better. For the crawler, it’s also vastly more expensive and requires more human work, and the effort must be replicated for every crawler since they’re so freaking secretive.

        I think the hosts win here.

  • quack@lemmy.zip · 46 points · edited · 22 hours ago

    Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now, but hey, who the fuck cares as long as the line keeps going up for these leeches.