Update robots.txt to improve crawler access
Allow search engine visibility and ensure proper crawling behavior for Google.
Revise the robots.txt file to disallow all crawlers by default while allowing specific access for Googlebot, DuckDuckBot, and Bingbot.
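A minimal sketch of what the revised file could look like, assuming the standard pattern of an empty `Disallow:` line to grant the named bots full access (the exact paths are not shown in this description):

```
# Block all crawlers by default
User-agent: *
Disallow: /

# Explicitly allow the named crawlers (an empty Disallow grants full access)
User-agent: Googlebot
Disallow:

User-agent: DuckDuckBot
Disallow:

User-agent: Bingbot
Disallow:
```

Per the robots exclusion standard, each crawler follows the most specific matching `User-agent` group, so the named bots ignore the catch-all `*` rule.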
Currently, searching for "Rails contributors" on Google results in:
> No information is available for this page.
The reason is that the page is blocked by robots.txt.
A robots.txt validator shows the site is disallowed when the user agent is Googlebot.
Changing the order of the `User-agent` rules allows Google to crawl the site.
I guess the problem is that the rule says "Google" instead of "Googlebot".
I think the patch makes unnecessary additional edits; would you mind doing only that small edit?
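If that is the cause, the entire fix could be the one-line rename the comment suggests, sketched here as a diff (the `Disallow:` line shown for context is an assumption about the surrounding file):

```diff
-User-agent: Google
+User-agent: Googlebot
 Disallow:
```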
Updated, great catch ❤️