This isn't a long post, but I wanted to record it somewhere. It may be useful if someone is writing an article about Google or something like that.

While I was changing some things in my server configuration, a user accessed a public folder on my site. I happened to be watching its access logs at the time, and everything looked completely normal until, 10 SECONDS AFTER the user's request, a request from a Google IP address with the user-agent Googlebot/2.1; +http://www.google.com/bot.html hit the same public folder. I then noticed that the user-agent of the visitor who had accessed that folder was Chrome/131.0.0.0.
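For anyone who wants to check their own logs for this pattern, here's a minimal sketch. It assumes the common nginx/Apache "combined" log format; the sample lines, IPs, and the `/Movies/x/` path are made up for illustration, and the regex may need adjusting for other log formats:

```python
# Flag Googlebot requests that hit the same path shortly after a
# browser request. Log format is assumed to be the standard
# "combined" format with the user-agent as the last quoted field.
import re
from datetime import datetime, timedelta

LOG_RE = re.compile(
    r'\[(?P<ts>[^\]]+)\] "GET (?P<path>\S+) [^"]*" \d+ \d+ "[^"]*" "(?P<ua>[^"]*)"'
)

def parse(line):
    m = LOG_RE.search(line)
    ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
    return ts, m.group("path"), m.group("ua")

def googlebot_followups(lines, window=timedelta(seconds=30)):
    """Return (path, delay_in_seconds) for every Googlebot hit that
    arrives within `window` after a non-bot hit on the same path."""
    hits = [parse(l) for l in lines]
    out = []
    for ts, path, ua in hits:
        if "Googlebot" not in ua:
            continue
        for ts2, path2, ua2 in hits:
            if (path2 == path and "Googlebot" not in ua2
                    and timedelta(0) <= ts - ts2 <= window):
                out.append((path, (ts - ts2).total_seconds()))
    return out

# Made-up sample log lines reproducing the pattern described above:
sample = [
    '203.0.113.5 - - [12/Jan/2025:10:00:00 +0000] "GET /Movies/x/ HTTP/1.1" 200 512 "-" "Mozilla/5.0 ... Chrome/131.0.0.0 Safari/537.36"',
    '66.249.66.1 - - [12/Jan/2025:10:00:10 +0000] "GET /Movies/x/ HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
]
print(googlebot_followups(sample))  # [('/Movies/x/', 10.0)]
```

On a real server you'd feed it the lines of the access log instead of the sample list.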

I have a subdomain, and some folders on that subdomain are actually indexed by the Google search engine, but that specific public folder doesn't appear to be indexed at all and doesn't show up in searches.

Could it be that Google uses Google Chrome users to discover unindexed paths on the internet and add them to its index?

I know it doesn't sound very shocking, because most people here know that Google Chrome is a privacy nightmare and should be avoided at all costs, but I've never seen this type of behavior mentioned in articles about "why you should avoid Google Chrome" or similar.

I'm not against anyone scraping the page either, since it's public anyway, but the fact that they discover new pages on the internet by making use of Google Chrome surprised me a little.

Edit: Fixed a typo

  • bamboo
    1 month ago

    Do any of the pages in the directory link to other websites? If a page links to a site that uses Google Analytics, Google may have seen the Referer header when the person using Chrome opened the link. If it knew your site didn't previously link to that third-party site, maybe that triggered a recrawl.

    You could test this by making a page that links to CNN or another site that uses Google Analytics, then using Firefox (without anything that would block Google Analytics) to click the link on your site. If the Googlebot checks your site within 10 seconds, you could rule out Chrome as the culprit.
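    A quick sketch of that test page (the `/tmp/ga-test` path is made up; in practice you'd put it somewhere under the site's document root):

    ```shell
    # Create a page whose only outbound link goes to a site known to
    # run Google Analytics, then print it to confirm its contents.
    mkdir -p /tmp/ga-test
    cat > /tmp/ga-test/index.html <<'EOF'
    <!DOCTYPE html>
    <html><body>
      <!-- Outbound link to a Google-Analytics-using site -->
      <a href="https://www.cnn.com/">test link</a>
    </body></html>
    EOF
    cat /tmp/ga-test/index.html
    ```

    Then open the page in Firefox, click the link, and watch the access log: a Googlebot hit within seconds would point at the Analytics/referrer path rather than Chrome.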

    • Fijxu@programming.devOP
      1 month ago

      Nope, it's just a file indexer that I host publicly. I don't mind sharing the URL to provide more context.

      The user accessed https://luna.nadeko.net/Movies/Ch3k0p3t3/ with Google Chrome

      And 10 seconds later, Googlebot scraped the folder.

      Simple as that. I don't have privacy-invasive trackers on any of my webpages/services.