Comment on Microsoft and Reddit Are Fighting About Why Bing’s Crawler Is Blocked on Reddit
Moonrise2473@feddit.it 2 months ago
A search engine shouldn’t have to pay a website for the honor of bringing it visits and ad views.
Fuck reddit, get delisted, no problem.
Weird that google is ignoring their robots.txt though.
Even if they pay them to be able to say that glue is perfect on pizza, having
User-agent: *
Disallow: /
should block googlebot too. That means google programmed an exception into googlebot to ignore robots.txt on that domain, and that shouldn’t be done. What’s the purpose of that file then?
Because robots.txt is just based on honor, it should be
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
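A minimal sketch of how a well-behaved crawler performs that honor-based check, using Python's standard urllib.robotparser (the URLs here are just examples):

import urllib.robotparser

# Fetch and parse the site's robots.txt.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()

# The crawler itself decides whether to honor the answer;
# nothing on the server side enforces it.
print(rp.can_fetch("Googlebot", "https://www.reddit.com/r/all"))
print(rp.can_fetch("Bingbot", "https://www.reddit.com/r/all"))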
skullgiver@popplesburger.hilciferous.nl 2 months ago
[deleted]
Zoop@beehaw.org 2 months ago
User-Agent: bender
Disallow: /my_shiny_metal_ass
Ha!
tal@lemmy.today 2 months ago
I guessed in a previous comment that, given their new partnership, Reddit is probably feeding their comment database to Google directly, which reduces load for both of them and permits Google to have real-time updates of the whole kit and caboodle rather than polling individual pages.
jarfil@beehaw.org 2 months ago
Google is paying for the use of Reddit’s API, not for scraping the site.
That’s the new Reddit’s business model: if you want “their” (users’) content, then pay for API access.
MrSoup@lemmy.zip 2 months ago
I doubt Google respects any robots.txt
DaGeek247@fedia.io 2 months ago
My robots.txt has been respected by every bot that visited it in the past three months. I know this because I wrote a page that IP-bans anything that visits it, and I also listed that page as a disallowed path in the robots.txt file.
I've only gotten like, 20 visits in the past three months though, so, very small sample size.
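A minimal sketch of that kind of trap, assuming a standalone test server and a made-up trap path (/secret-page below is purely illustrative, and a real setup would push the banned IPs into a firewall or web server config rather than keep them in memory):

from http.server import BaseHTTPRequestHandler, HTTPServer

TRAP_PATH = "/secret-page"  # also listed as "Disallow: /secret-page" in robots.txt
banned_ips = set()          # a real setup would feed these to a firewall rule

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        if self.path == TRAP_PATH:
            # Anything requesting the disallowed path ignored robots.txt: ban it.
            banned_ips.add(ip)
        if ip in banned_ips:
            self.send_error(403)
            return
        # Everyone else gets the normal content.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"regular page content")

HTTPServer(("", 8080), TrapHandler).serve_forever()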
mozz@mbin.grits.dev 2 months ago
This is fuckin GENIUS
Moonrise2473@feddit.it 2 months ago
Only if you don’t want any visits except from yourself, because this removes your site from every search engine.
You should write a “Disallow: /juicy-content” rule instead and then block anything that tries to access that page (only bad bots would follow that path).
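In robots.txt terms, that trap entry would look something like this (the /juicy-content path name is just the example from the comment above):

User-agent: *
Disallow: /juicy-content

Compliant crawlers skip that path and keep indexing the rest of the site, while anything that requests it anyway can be handed to a ban rule like the sketch above.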
MrSoup@lemmy.zip 2 months ago
Thank you for sharing
thingsiplay@beehaw.org 2 months ago
Interesting way of testing this. Another would be to query the search engines with
site:your.domain
added, to show results from your site only. Not an exhaustive check, but another tool to test this behavior.
Moonrise2473@feddit.it 2 months ago
For ordinary webmasters they respect it, and they even warn you if you submit a sitemap that contains paths disallowed in robots.txt.