• cbarrick@lemmy.world · 55 points · 7 months ago (edited)

    Just put the site behind a cache, like Cloudflare, and set your cache control headers properly?

    They mention that they are already using Cloudflare. I’m confused about what is actually causing the load. They don’t mention any technical details, but it does kinda sound like their cache control headers are not set properly. I’m too lazy to check for myself though…

    • Rimu@piefed.social · 22 points · 7 months ago (edited)

      I’ve found that if left on default settings, Cloudflare is not that great at caching. It requires a bit of configuration to really make it sing. itsfoss.com thought they were “using CloudFlare” but probably not to its fullest potential.
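      As a concrete sketch, explicitly cacheable response headers are what let an edge cache like Cloudflare absorb traffic instead of the origin. A hypothetical nginx origin config (the paths, TTLs, and backend address are made up, not itsfoss.com’s actual setup):

      ```nginx
      # Hypothetical origin config: mark anonymous pages as publicly cacheable so
      # an edge cache (e.g. Cloudflare) can serve them without touching the origin.
      location / {
          # Browsers may keep pages 5 minutes; shared caches (the CDN) for 1 hour.
          add_header Cache-Control "public, max-age=300, s-maxage=3600";
          proxy_pass http://127.0.0.1:8080;   # backend app server (made-up address)
      }
      ```

      By default Cloudflare caches static assets but not HTML, so without headers (or a cache rule) like these, every page view still hits the origin.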

    • Max-P@lemmy.max-p.me · 22 points · 7 months ago

      Even without Cloudflare, simple NGINX microcaching would help a ton there.

      It’s a blog; it doesn’t need to regenerate the page every single time for anonymous users. There’s no reason it shouldn’t be able to sustain 20k requests per second on a single server. Even a one-second backend cache for anonymous users would make a huge difference.
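      A minimal sketch of what that microcache could look like (the cache path, zone name, and upstream address are all hypothetical):

      ```nginx
      # One-second microcache for anonymous traffic (hypothetical values throughout).
      proxy_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=micro:10m max_size=100m;

      server {
          listen 80;

          location / {
              proxy_cache micro;
              proxy_cache_valid 200 1s;         # even 1s collapses a stampede
              proxy_cache_lock on;              # concurrent misses wait for one upstream fetch
              proxy_cache_use_stale updating;   # serve the stale copy while refreshing
              proxy_pass http://127.0.0.1:8080; # backend that renders the blog
          }
      }
      ```

      With `proxy_cache_lock`, a burst of identical requests results in at most one backend render per second, no matter how many instances fetch the page at once.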

      They have Cloudflare in front; the site should stay up even with the origin server turned off entirely.

  • Handles@leminal.space · 40 points · 7 months ago

    There’s another reason I don’t share “It’s FOSS” links anywhere: this should have been a GitHub issue, but it’s turned into a clickbaity headline. Every other article coming out of “It’s FOSS” is either low effort, sensationalist, or both.

    • Poplar?@lemmy.world · 4 points · 7 months ago

      The article mentions there are already a few issues, some quite old. The article is useful for raising awareness and hopefully getting the fix prioritized higher.

  • tedu@azorius.net · 39 points · 7 months ago

    Real talk, the mastodon traffic stampede isn’t that bad for a properly configured website.

    • iltg@sh.itjust.works · 25 points · 7 months ago

      thanks for saying this! i really don’t want to victim blame itsfoss for getting traffic spikes, but if you can’t handle ~20 MB in one minute (less than 3 Mbps) of traffic, you’re doing something really, really wrong and you should look into it, especially if you want to distribute content. crying “don’t share our links on mastodon” also sounds like tilting at windmills; block the mastodon UA and be done with it, or stop putting images in your link previews for mastodon, or drop link previews completely. a “100 MB DDoS” is laughable at best. nice amplification calculation, but that’s still 100 megs.
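      Blocking the fetcher by User-Agent is nearly a one-liner. A sketch for nginx (Mastodon’s preview fetcher sends a UA containing a “Mastodon” token, but matching UA strings is inherently best-effort, and this pattern would also block other clients that mention Mastodon):

      ```nginx
      # Best-effort: refuse Mastodon's preview fetcher by its User-Agent token.
      if ($http_user_agent ~* "mastodon") {
          return 403;
      }
      ```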

        • chiisana@lemmy.chiisana.net · 5 points · 7 months ago

          AWS charges $0.09/GB for egress. Even assuming zero caching and always dynamically generated content, you’d need over 100x this “attack” to rack up $1 in bandwidth fees. There are way faster ways to run up a bandwidth bill. I remember the days when I paid $1/GB of egress on overage, and even then this 100 MB would’ve only set me back $0.15 at worst.
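          The arithmetic checks out; a quick sketch of it (the $0.09/GB rate is the figure quoted above):

          ```python
          # Rough cost check of the "100 MB DDoS" claim at AWS's $0.09/GB egress rate.
          AWS_EGRESS_USD_PER_GB = 0.09
          attack_gb = 100 / 1024                    # the ~100 MB stampede, in GB
          cost_per_attack = attack_gb * AWS_EGRESS_USD_PER_GB
          attacks_per_dollar = 1 / cost_per_attack  # how many such stampedes $1 buys

          print(f"${cost_per_attack:.4f} per stampede, ~{attacks_per_dollar:.0f} stampedes per dollar")
          # ≈ $0.0088 per stampede, ~114 stampedes per dollar
          ```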

          Also worth noting that those who host on AWS aren’t going to blink at $1 in bandwidth fees; everyone else will be hosting elsewhere with cheaper egress (i.e. billed by the megabit, or with some generous fixed allocation), and the more sensible ones will be serving from behind CDNs that are cheaper still.

          This is a non-issue written by someone who clearly doesn’t know what they’re talking about, likely intended to drum up traffic to their site.

  • SatyrSack@lemmy.one · 20 points · 7 months ago

    Direct link to article:

    https://news.itsfoss.com/mastodon-link-problem/

    TL;DR:

    When you share a link on Mastodon, a link preview is generated for it, right?

    With Mastodon being a federated platform (part of the Fediverse), the request to generate a link preview doesn’t come from just one Mastodon instance. The many instances connected to it also initiate requests for the content almost immediately.

    And, this “fediverse effect” increases the load on the website’s server in a big way.
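    What each instance does boils down to fetching the page and reading its OpenGraph tags (then fetching the `og:image`, which is the second, heavier request). A stdlib-only sketch of the extraction step, using a made-up HTML sample:

    ```python
    import re

    # Minimal OpenGraph extraction, as a link-preview fetcher would do after
    # downloading the page HTML (sample document is made up for illustration).
    html = '''<head>
    <meta property="og:title" content="Example Post">
    <meta property="og:image" content="https://example.com/preview.jpg">
    </head>'''

    def og_tags(doc: str) -> dict:
        """Collect og:* properties into a dict."""
        return dict(re.findall(
            r'<meta property="og:(\w+)" content="([^"]+)"', doc))

    tags = og_tags(html)
    print(tags["title"], tags["image"])
    ```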

    Does Lemmy not cause this issue? Other federated software was not mentioned in the article at all.

    • catloaf@lemm.ee · 7 points · 7 months ago

      So the preview should be federated as well?

      How many requests are we actually talking about here, though? Is that better or worse than everyone clicking the link?

      • Max-P@lemmy.max-p.me · 10 points · 7 months ago

        There’s a problem with federated previews: tricking one instance into generating the wrong preview would spread it to every instance. That’s been exploited for malware and scam campaigns in messaging apps.

          • Max-P@lemmy.max-p.me · 4 points · 7 months ago (edited)

            Masquerading one normal-looking link as another, usually for phishing, malware, or clones loaded with ads.

            Like, lets say I post something like

            https://www.google.com

            And also have my instance intercept it to provide Google’s embed preview image, and it federates that with other instances.

            Now, for everyone it would look like a Google link, but you get Microsoft Google instead.

            I could also post a genuine Google link but make the preview point somewhere else completely, so the link goes where people expect even when they hover over it, but then they click the preview for whatever reason. Bam, wrong site. It could also be a YouTube link whose embed shows a completely different preview image; you click it and get gore or porn instead. Fake headlines, Cyrillic homoglyph domains, whatever way you can think of to abuse this.

            People trust those previews, in a way, so if you post a shortened link that previews like a news article people want to read, they might click the image or headline and end up on a phony clone of the site loaded with malware. Currently, if you trust your instance you can actually trust the embed, because it’s generated by your instance.

            On iMessage, it used to be that the sender supplied the embed metadata, and this was used for a zero-click exploit: an embed of a real site carried an attachment that exploited the codec used to render it.

      • Rimu@piefed.social · 6 points · 7 months ago

        Two requests per instance: one for the HTML of the page and another for the preview image.
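        As a back-of-the-envelope sketch (the instance count and payload sizes are made-up round numbers, chosen to land near the ~100 MB figure debated elsewhere in this thread):

        ```python
        # Fan-out estimate: two requests per federated instance, per the comment above.
        # Instance count and payload sizes are hypothetical.
        REQUESTS_PER_INSTANCE = 2    # one for the HTML, one for the preview image
        instances = 400
        html_kb, image_kb = 50, 200

        total_requests = instances * REQUESTS_PER_INSTANCE
        total_mb = instances * (html_kb + image_kb) / 1024

        print(f"{total_requests} requests, ~{total_mb:.0f} MB served")
        # 800 requests, ~98 MB served
        ```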

    • chameleon@kbin.social · 6 points · 7 months ago

      Lemmy (and Kbin for that matter) very much do the same thing for posts. I don’t think they fetch URL previews for links in comments, but that doesn’t matter: posts and comments are both fairly likely to end up spreading to Mastodon/etc anyway, so even comments will trigger this cascade.

      Direct example: If you go to mastodon.social, stick @fediverse@lemmy.world in the search box at the top left and click through to the profile, you can browse a large Mastodon server’s view of this community, and your very link has a preview. (Unfortunately, links to federated communities just result in a redirect, so you have to navigate through Mastodon’s UI.)

  • Sean Tilley@lemmy.world · 6 points · 7 months ago

    It’s an interesting and frustrating problem. I think there are three potential ways forward, but they’re all flawed:

    1. Quasi-centralization: a project like Mastodon or a vetted non-profit entity operates a high-concurrency server whose sole purpose is to cache link metadata and images. Servers pull preview data from that instead of the original page.

    2. We find a way to do this in some zero-trust peer-to-peer way, where multiple servers compare their copies of the same data. Whatever doesn’t match ends up not being used.

    3. Servers cache link metadata and previews locally with a minimal amount of requests; any boost or reshare only reflects a proxied local preview of that link. Instead of doing this on a per-view or per-user basis, it’s simply per-instance.

    I honestly think the third option might be the least destructive, even if it’s not as efficient as it could be.

    • Quacksalber@sh.itjust.works · 6 points · 7 months ago

      As I understand it, 3) already happens. What causes the load is that each connected instance is also loading and caching the preview.

    • chiisana@lemmy.chiisana.net · 4 points · 7 months ago

      Or 4) Ignore the noise and do nothing; this is a case of a user talking about things they don’t understand at best, or a blog intentionally misleading others to drum up traffic at worst. This is literally not a problem. Serving that kind of traffic can be done on a single server without any CDN, and they’ve already got a CDN.

  • katy ✨ · 4 points · 7 months ago

    i mean it’s solid training but they do realise it’s not limited to mastodon, right?

    the slashdot effect has been around for years

  • Lvxferre@mander.xyz · 4 points · 7 months ago (edited)

    That sounds a lot like a weird spin on the Slashdot effect, caused by content mirroring. It seems that it could be handled by tweaking the ActivityPub protocol to have one instance requesting to generate a link preview, and the other instances copying the link preview instead of sending their own requests.

    But frankly? I think that the current way that ActivityPub works is outright silly. Here’s what it does currently:

    • User is registered to instance A
    • Since A federates with B, A mirrors content from B into A
    • The backend is either specific to instance A (the site) or configured to use instance A (for a phone program)
    • When the user interacts with content from B, actually it’s the mirrored version of content from B that is hosted in A

    In my opinion a better approach would be:

    • User is registered to instance A
    • Since A federates with B, B accepts login credentials from A
    • The backend is instance-agnostic, so it’s able to pull/send content from/to multiple instances at the same time
    • When the user interacts with content from B, the backend retrieves content from B, and uses the user’s A credentials to send content to B

    Note that the second way would not create this “automated Slashdot effect” - only A would be pulling info from the site, and then users (regardless of their instance) would pull it from A.

    Now, here’s my question: why does the ActivityPub work like in that first way, instead of this second one?

    • chicken@lemmy.dbzer0.com · 4 points · 7 months ago

      Check out Nostr, an ActivityPub alternative that does authentication separately from content; it works more like that.

      • Lvxferre@mander.xyz · 3 points · 7 months ago (edited)

        I’m aware of Nostr. In my opinion it splits back-end and front-end tasks better than AP does, even if the latter does some things better (such as balancing safety against censorship resistance). It’s still an interesting counterpoint to ActivityPub.

    • DaGeek247@fedia.io · 4 points · 7 months ago

      If server A makes one request, it keeps server B from being overloaded by thousands of requests from A’s users.

      • Lvxferre@mander.xyz · 2 points · 7 months ago

        “A” users would need to send requests to some server anyway, either A or B; that only diverts the load from B to A, it doesn’t alleviate or even share it.

        Another issue with the current way ActivityPub works is foul content that needs to be removed. Remember when some muppet posted CP in LW?

        • breakingcups@lemmy.world · 3 points · 7 months ago

          Yes, but this way demand on instances scales with user count and allows smaller instances to exist. Otherwise an errant toot on a small instance that suddenly gets popular would instantly drag that smaller instance down.

          • Lvxferre@mander.xyz · 2 points · 7 months ago

            Got it, and that’s a fair point. I wonder, however, if this problem couldn’t be solved another way, especially because mirroring is itself a burden for the smaller instances.

            • iltg@sh.itjust.works · 3 points · 7 months ago

              consider that caching happens at thousands of levels on the internet. every centralized site has its content replicated many, many times in geo-local caches, proxies, and even local browsers. caching is a very core concept for the internet. others often bash AP because it replicates a lot, but that’s kind of like explicit caching: if the whole fediverse network fetched a post from its source, millions of requests would beat small servers down constantly. big servers cache the content they intend to distribute and absorb the traffic spike instead of the small instance. small instances, for their part, don’t need to replicate as much and can rely more on bigger instances, perhaps cleaning their cached content often and refetching when necessary. replication is a feature, not a design flaw!

              • Lvxferre@mander.xyz · 2 points · 7 months ago

                replication is a feature, not a design flaw!

                In this case I’d argue that it’s both. (A problematic feature? A useful bug? They’re the same picture anyway.)

                Because of your comment I can see the pros of the mirroring strategy, even if the cons are still there. I wonder if those pros couldn’t be “snipped” and implemented into a Nostr-like network, or if the cons can’t be ironed out from a Fediverse-like one.

  • Tag365@lemmy.world · 3 points · 6 months ago

    So why doesn’t a random account with few followers posting a link on Mastodon cause server load issues, while a popular account does?