• wabafee@lemmy.world · 9 months ago

    I hope not; laws tend to get outdated really fast. Who knows, robots.txt might not even be used in the future, and it would just sit there taking up space for legal reasons.

    • Tyfud@lemmy.world · 9 months ago

      You can write the law the way you'd write a specification, and make it as broad as needed. Something like the file name should never come up as an issue.

      • GhostMatter@lemmy.ca · 9 months ago

        The law can be broad with allowances to define specifics by decree, executive order or the equivalent.

    • BreakDecks@lemmy.ml · 9 months ago

      robots.txt is a 30-year-old standard. If we can write common-sense laws around things like email and VoIP, we can do it for web standards too.
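For reference, the standard is simple enough that Python's standard library ships a parser for it. A minimal sketch, assuming a made-up bot name and made-up rules (not from any real site):

```python
# Sketch: parsing robots.txt rules with Python's stdlib urllib.robotparser.
# "ExampleBot" and the rules below are hypothetical examples.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A polite crawler checks can_fetch() before requesting a URL.
print(rp.can_fetch("ExampleBot", "https://example.com/private/page"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/index.html"))    # True
```

The point being: the format is so small that "writing a law around it" wouldn't need to pin down much.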

    • Echo Dot@feddit.uk · 9 months ago

      We don’t need new laws we just need enforcement of existing laws. It is already illegal to copy copyrighted content, it’s just that the AI companies do it anyway and no one does anything about it.

      Enforcing respect for robots.txt doesn’t matter because the AI companies are already breaking the law.

      • BreakDecks@lemmy.ml · 9 months ago

        I think the issue is that existing laws don’t clearly draw a line that AI can cross. New laws may very well be necessary if you want any chance at enforcement.

        And without a law that defines documents like robots.txt as binding, enforcing respect for it isn't "unnecessary"; it is impossible.

        I see no logic in complaining about lack of enforcement while actively opposing the ability to meaningfully enforce.

        • Echo Dot@feddit.uk · 9 months ago

          Copyright law in general needs changing, though; that's the real problem. I don't see the advantage of turning a hacky workaround into a legally mandated requirement.

          Especially because there are many legitimate reasons to ignore robots.txt, including it being misconfigured, or it having been set up only for search engines when your bot isn't a search-engine crawler.
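As an illustration of that last point, here is a hypothetical robots.txt written with only search crawlers in mind:

```
User-agent: Googlebot
Disallow: /search-results/

User-agent: *
Disallow: /
```

The blanket `Disallow: /` under `User-agent: *` was presumably meant to keep out unknown scrapers, but read literally it also bans archivers, feed readers, link-preview fetchers, and accessibility tools the site owner probably never intended to block.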

    • kingthrillgore@lemmy.ml · 9 months ago

      robots.txt has been an unofficial standard for 30 years, and it's augmented with sitemap.xml to help index uncrawlable pages and with Schema.org to expose content to the Semantic Web. I'm not saying it shouldn't be a law, but "norms might change" is a pretty weak counterargument, man.
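For context on how those pieces fit together: the `Sitemap:` directive in robots.txt is what links the two standards. A minimal hypothetical example (example.com URLs are placeholders):

```
# robots.txt
User-agent: *
Allow: /
Sitemap: https://example.com/sitemap.xml
```

And the referenced sitemap.xml, per the sitemaps.org protocol, lists pages the crawler might not otherwise discover:

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/uncrawlable-page</loc>
  </url>
</urlset>
```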