• rho50@lemmy.nz

    This is probably an attempt to save money on storage costs. Expect cloud storage pricing from Google to continue to rise as they reallocate spending towards ML hardware accelerators.

    Never been happier to have a proper NAS setup with offsite backup 🙃

    • kubica@kbin.social

      I don’t think they are going to stop storing it somewhere, just stop delivering it.

      • rho50@lemmy.nz

        Idk… in theory they probably don’t need to store a full copy of the page for indexing, and could move to a more data-efficient format if they do. Also, not serving it means they don’t need to replicate the data to as many serving regions.

        But I’m just speculating here. Don’t know how the indexing/crawling process works at Google’s scale.

        • evatronic@lemm.ee

          Absolutely. The crawler does some rudimentary processing before anything ever gets written to storage. That’s what gets persisted behind the scenes, and it’s almost certainly not enough to reconstruct the web page, nor is it (realistically) human-friendly. I was going to say “readable”, but it’s probably some bullshit JSON or XML document full of nonsense no one wants to read.
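
          Purely illustrative (the field names and structure here are invented, nothing to do with Google’s actual pipeline), but the persisted record could be in the spirit of this little Python sketch: boil the fetched HTML down to terms for the index plus a bit of metadata, at which point there’s nothing left worth serving as a “cached copy”.

          ```python
          # Hypothetical sketch: what a crawler might keep for indexing instead of the full page.
          # Field names and structure are made up for illustration only.
          import json
          import re

          def crawl_record(url, html):
              # Crude text extraction: drop script/style blocks, then strip the remaining tags.
              text = re.sub(r"(?is)<(script|style)\b.*?</\1>", " ", html)
              text = re.sub(r"(?s)<[^>]+>", " ", text)
              tokens = re.findall(r"[a-z0-9]+", text.lower())
              title = re.search(r"(?is)<title>(.*?)</title>", html)
              return {
                  "url": url,
                  "title": title.group(1).strip() if title else "",
                  "token_count": len(tokens),
                  # Enough to build an inverted index; nowhere near enough to rebuild the page.
                  "terms": sorted(set(tokens)),
              }

          page = "<html><head><title>Example</title></head><body><p>Hello indexing world</p></body></html>"
          print(json.dumps(crawl_record("https://example.com/", page), indent=2))
          ```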

      • pre@fedia.io

        Seems unlikely they’ll delete it. If they’ve started deleting data, that’s quite a change. They might save on the bandwidth costs of delivering it to people, I suppose.

        Maybe something to do with people feeding AIs from the Google cache? Google wanting to ensure only they can train from the google-cache.

        @kubica@kbin.social @Powderhorn@beehaw.org @rho50@lemmy.nz

    • morry040@kbin.social

      I think it’s more about the cost of serving visitors. Handling traffic and API calls becomes a financial problem when a growing number of companies use bots to scrape data. Larger companies are moving their content behind paywalls, which act as a bot filter, and have also identified that they can generate a revenue stream from subscriptions and API connections. Old content on the web is not deemed to have much business value, so it’s a decision between charging for it and scrapping it.

    • ciferecaNinjo@fedia.io

      > This is probably an attempt to save money on storage costs.

      That’s in fact what the article claims as Google’s reason, but it seems irrational. Google still needs to index websites for the search engine, so the data collection, and therefore the storage, is still needed. The only difference (AFAICT) is that Google is simply no longer sharing that data. Also, there are bigger pots of money in play than piddly storage costs.