• BreakDecks@lemmy.ml
    10 months ago

    Google never made backups of the Internet, so why are we pretending they ever did? Cached webpages were a basic workaround for third-party website downtime: a guarantee that you could reliably see the information you searched for, even if the linked site was down. Each cached page was nothing more than a snapshot of whatever the crawler last saw, and older copies were permanently deleted with every new crawl of the page.

    It was never an archival effort; it was a rotating cache. If you were under the impression all these years that Google was preserving Internet history, I don’t know why, because Google never claimed to be doing that. Maybe it’s time to reevaluate any other altruistic things you assume mega-corporations are up to…
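    The distinction the comment draws, a rotating cache versus an archive, comes down to whether old snapshots survive a new crawl. A minimal sketch of the overwrite-on-crawl behavior described above (all names are hypothetical illustrations, not Google's actual implementation):

    ```python
    # Rotating cache: one snapshot per URL, overwritten on every crawl.
    cache: dict[str, str] = {}  # url -> most recent snapshot only

    def crawl(url: str, fetched_html: str) -> None:
        # Storing the new snapshot permanently discards the old one:
        # there is no version history, just the latest crawl.
        cache[url] = fetched_html

    crawl("https://example.com", "<html>v1</html>")
    crawl("https://example.com", "<html>v2</html>")
    print(cache["https://example.com"])  # only v2 survives; v1 is gone
    ```

    An archive like the Wayback Machine would instead key on (url, timestamp) and keep every snapshot, which is exactly what a rotating cache does not do.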

  • BudgieMania@kbin.social
    10 months ago

    Well surely this means that archive.org will be allowed to exist in peace, since it would be ridiculous to make the information and culture produced in the year of our lord 20fucking24 the most ephemeral it has ever been in human history, right?

    Right?

  • TrickDacy@lemmy.world
    10 months ago

    Keeping records of things bad people say and do would be considered not being evil, so it makes sense.

  • Willie@kbin.social
    10 months ago

    I feel like this is so they can later deny having fed all the webpages they cached into their ‘AI’ training datasets when someone accuses them of it. Now, when asked about the copies of webpages they have, they can just say “What copies?” and end the conversation there.

  • linearchaos@lemmy.world
    10 months ago

    I wonder if this is related to why their searches have been going to hell. Like they changed how the engine indexes or something.

  • astanix@lemmy.world
    10 months ago

    I noticed this yesterday when I tried to load a cached version of a site. How disappointing.

  • wizardbeard@lemmy.dbzer0.com
    10 months ago

    Three guesses as to whether they even attempted to donate this data to the Internet Archive/Wayback Machine, and the first two don’t count.

    • BreakDecks@lemmy.ml
      10 months ago

      Google cached content is pruned down into a space-saving format and rotated/deleted after less than a year, so it would be pretty worthless to the IA.

    • Chozo@kbin.social
      10 months ago

      The Internet Archive likely wouldn’t be able to handle it. They’re already struggling as it is, and dumping a few petabytes of caches of the entire Internet onto them probably wouldn’t help.

  • AutoTL;DR@lemmings.world (bot)
    10 months ago

    This is the best summary I could come up with:

    Google Search’s “cached” links have long been an alternative way to load a website that was down or had changed, but now the company is killing them off.

    The feature has been appearing and disappearing for some people since December, and currently, we don’t see any cache links in Google Search.

    Cached links used to live under the drop-down menu next to every search result on Google’s page.

    As the Google web crawler scoured the Internet for new and updated webpages, it would also save a copy of whatever it was seeing.

    That quickly led to Google having a backup of basically the entire Internet, using what was probably an uncountable number of petabytes of data.

    In 2020, Google switched to mobile-by-default, so for instance, if you visit that cached Ars link from earlier, you get the mobile site.

    The original article contains 438 words, the summary contains 139 words. Saved 68%. I’m a bot and I’m open source!