• gap_betweenus@lemmy.world · 1 point · 2 minutes ago

      Once a technology or even an idea is out there, you can’t really make it go away; AI is here to stay. Generative LLMs are just a small part of it.

    • Slaxis@discuss.tchncs.de · 2 points · 34 minutes ago

      The problem is, how? I can set it up on my own computer using open source models and some of my own code. It’s really rough to regulate that.

  • umbraroze@lemmy.world · 51 points · 21 hours ago

    I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”

    Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: why would anyone index those?
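
    As a made-up illustration of that kind of rule (a hypothetical file, not Wikipedia’s actual robots.txt): allow the canonical article paths and disallow the technical ones.

    ```
    # Hypothetical robots.txt in that spirit, not Wikipedia’s real file.
    User-agent: *
    Allow: /wiki/
    # Edit, history, and diff views are served via the script path.
    Disallow: /w/
    # Auto-generated utility pages.
    Disallow: /wiki/Special:
    ```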

    • EddoWagt@feddit.nl · 1 point · 3 hours ago

      They want everything. Does it exist, but it’s not in their dataset? Then they want it.

      They want their AI to answer any question you could possibly ask it. Filtering out what is and isn’t useful doesn’t achieve that.

    • T156@lemmy.world · 3 points · edited · 4 hours ago

      Because it takes work to obey the rules, and you get less data for it. A theoretical competitor could get more by ignoring them and gain some vague advantage from it.

      I wouldn’t be surprised if the crawlers they use are bare-bones utilities set up to just grab everything without worrying about rules and the like.

  • surph_ninja@lemmy.world · 31 points · 23 hours ago

    I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.

  • Greyfoxsolid@lemmy.world · 2 points · 15 hours ago

    People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.

    • shads@lemy.lol · 15 points · 15 hours ago

      I find this amusing. I had a conversation with an older relative who asked about AI because I am “the computer guy” he knows. I explained basically how I understand LLMs to operate: that they are pattern matching to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, just respinning fragments to try to generate a response that pleases the asker.

      He observed, “oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That’s good, religions that have become untethered from day to day practical life have never caused problems for anyone.”

      Which I found scarily insightful.

    • ArchRecord@lemm.ee · 8 points · 14 hours ago

      Here’s the key distinction:

      This only makes AI models unreliable if they ignore “don’t scrape my site” requests. If they respect the requests of the sites whose data they’re profiting from, then there’s no issue.

      People want AI models not to be unreliable, but they also want them to operate with integrity in the first place and not profit from the work of people who have explicitly opted it out of training.

      • A_Random_Idiot@lemmy.world · 3 points · edited · 14 hours ago

        I’m a person.

        I dont want AI, period.

        We can’t even handle humans going psycho. The last thing I want is an AI losing its shit due to being overworked producing goblin tentacle porn and going full Skynet judgement day.

        I’ve got enough on my plate dealing with a semi-sentient Olestra stain trying to recreate the Third Reich as it is.

        • ArchRecord@lemm.ee · 4 points · 13 hours ago

          We can’t even handle humans going psycho. The last thing I want is an AI losing its shit due to being overworked producing goblin tentacle porn and going full Skynet judgement day.

          That is simply not how “AI” models today are structured; that scenario is entirely a fabrication based on science-fiction media.

          An LLM is a series of matrix multiplication problems that the tokens from a query are run through. It has no capability to be overworked, to know whether it has been used before (outside of its context window, which is just previously stored tokens added to the same math problem), to change itself, or to arbitrarily access any system resources.
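
          A toy sketch of that point (purely illustrative Python, not any real model): the forward pass is a fixed pile of matrix multiplications applied to whatever tokens you pass in, so there is nothing that can get tired, remember you, or rewrite itself between calls.

          ```python
          # Illustrative toy only: shows statelessness, not a real architecture.
          import numpy as np

          rng = np.random.default_rng(0)
          VOCAB, DIM = 1000, 64

          # Fixed weights; running the model never changes them.
          W_embed = rng.normal(size=(VOCAB, DIM))
          W_layer = rng.normal(size=(DIM, DIM))
          W_out = rng.normal(size=(DIM, VOCAB))

          def next_token_logits(context_tokens):
              """Pure function of the context window; nothing persists between calls."""
              x = W_embed[context_tokens]   # embedding lookup: (seq, DIM)
              x = np.tanh(x @ W_layer)      # matrix multiplications plus a nonlinearity
              return x[-1] @ W_out          # scores for the next token

          # "Memory" is just previous tokens being appended and fed back in.
          context = [1, 42, 7]
          context.append(int(np.argmax(next_token_logits(context))))
          ```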

    • tacobellhop@midwest.social · 5 points · edited · 15 hours ago

      Maybe it will learn discretion and what sarcasm is, instead of being a front-loaded Google search of 90% ads and 10% forums. It has no way of knowing whether what it’s copy-pasting is full of shit.

    • DasSkelett@discuss.tchncs.de · 4 points · 14 hours ago

      This will only degrade the quality of models from bad actors who don’t follow the rules. You want to sell a good-quality AI model trained on real content instead of misleading AI output? Just follow the rules ;)

      Doesn’t sound too bad to me.

    • katy ✨ · 4 points · 15 hours ago

      i mean, this is just designed to thwart AI bots that refuse to follow the robots.txt rules of people who specifically blocked them.

    • Fluke@lemm.ee · 9 points · 17 hours ago

      And consumed the power output of a medium country to do it.

      Yeah, great job! 👍

      • LeninOnAPrayer@lemm.ee · 16 points · edited · 17 hours ago

        We truly are getting dumber as a species. We’re facing climate change, but we’re running some of the most power-hungry processors in the world to spit out cooking recipes and homework answers for millions of people. All to better collect their data and sell them products that will distract them from the climate disaster our corporations have caused. It would be really fun to watch if it weren’t so sad.

  • DigitalDilemma@lemmy.ml · 63 points · 1 day ago

    Surprised at the level of negativity here. Having had my sites repeatedly DDOSed offline by Claudebot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.

    • dan@upvote.au · 2 points · 5 hours ago

      thousands of times a second

      Modify your Nginx (or whatever web server you use) config to rate limit requests to dynamic pages, and cache them. For Nginx, you’d use either fastcgi_cache or proxy_cache depending on how the site is configured. Even if the pages change a lot, a cache with a short TTL (say 1 minute) can still help reduce load quite a bit while not letting them get too outdated.

      Static content (and cached content) shouldn’t cause issues even if requested thousands of times per second. Following best practices like pre-compressing content using gzip, Brotli, and zstd helps a lot, too :)
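
      A minimal sketch of that setup for a reverse-proxied backend (the zone names, paths, and upstream address below are placeholders; a PHP-FPM site would use the fastcgi_cache equivalents instead):

      ```nginx
      # In the http {} block: one shared cache zone and one per-IP rate-limit zone.
      proxy_cache_path /var/cache/nginx/pages keys_zone=pagecache:10m max_size=1g inactive=10m;
      limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

      server {
          listen 80;
          server_name example.com;

          location / {
              limit_req zone=perip burst=20 nodelay;        # absorb small bursts, reject floods

              proxy_cache pagecache;
              proxy_cache_valid 200 301 1m;                 # short TTL: pages stay reasonably fresh
              proxy_cache_use_stale updating error timeout; # serve a stale copy while refreshing
              proxy_pass http://127.0.0.1:8080;             # your application backend
          }

          # Pre-compressed static assets: serve foo.css.gz directly to clients that accept gzip.
          location /static/ {
              root /srv/www;
              gzip_static on;
          }
      }
      ```

      With something like that in place, repeated hits on the same page within the TTL come out of the cache instead of your application.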

      Of course, this advice is just for “unintentional” DDoS attacks, not intentionally malicious ones. Those are often much larger and need different protection, often at the network or load-balancer level before traffic even hits the server.

  • 4am@lemm.ee · 295 points · 2 days ago

    Imagine how much power is wasted on this unfortunate necessity.

    Now imagine how much power will be wasted circumventing it.

    Fucking clown world we live in

  • Randomgal@lemmy.ca · 3 points · 21 hours ago

    I’m glad we’re burning the forests even faster in the name of identity politics.

  • Dr. Moose@lemmy.world · 25 points · edited · 1 day ago

    Considering how many false positives Cloudflare serves, I see nothing but misery coming from this.

    • Xella@lemmy.world · 5 points · 22 hours ago

      Lol I work in healthcare and Cloudflare regularly blocks incoming electronic orders because the clinical notes “resemble” SQL injection. Nurses type all sorts of random stuff in their notes so there’s no managing that. Drives me insane!

    • Dave@lemmy.nz · 20 points · 1 day ago

      In terms of Lemmy instances, if your instance is behind Cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting AI scraping.

        • Dave@lemmy.nz · 2 points · 18 hours ago

          I’m not sure what can be done at the free tier. There is a switch to turn on AI bot blocking, and it breaks federation.

          You can’t whitelist domains because federation could come from any domain. Maybe you could somehow whitelist /inbox for the ActivityPub communication, but I’m not sure how to do that in Cloudflare.

  • TorJansen@sh.itjust.works · 39 points · 2 days ago

    And soon, the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get some real work done. Sigh.

  • oldfart@lemm.ee · 103 points · 2 days ago

    So the web is a corporate war zone now and you can choose feudal protection or being attacked from all sides. What a time to be alive.

    • theparadox@lemmy.world · 14 points · 2 days ago

      There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and all that comes with them, you’ll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren’t a bot… and so will everyone else. You’ll likely be forced to deal with whatever AI bots are forced upon you while within the walls, but better an enemy you know, I guess?

    • supersquirrel@sopuli.xyz · 2 points · edited · 14 hours ago

      No, it is far less environmentally friendly than RC bots made of metal, plastic, and electronics full of nasty little things like batteries, blasting, sawing, burning, and smashing one another to pieces.

    • IninewCrow@lemmy.ca · 10 points · 2 days ago

      They should program the actions and reactions of each system into actual battle bots and then televise the event for our entertainment.