
Fix sitemap indexing

Open astrojuanlu opened this issue 11 months ago • 21 comments

Description

Even with a robots.txt file in place, search engines still index pages that are listed as disallowed.

Task

"We need to upskill ourselves on how Google indexes the pages, RTD staff suggested we add a conditional <meta> tag for older versions but there's a chance this requires rebuilding versions that are really old, which might be completely impossible. At least I'd like engineering to get familiar with the docs building process, formulate what can reasonably be done, and state whether we need to make any changes going forward." @astrojuanlu

Context and example

https://www.google.com/search?q=kedro+parquet+dataset&sca_esv=febbb2d9e55257df&sxsrf=ACQVn0-RnsYyvwV7QoZA7qtz0NLUXLTsjw%3A1710343831093&ei=l8bxZfueBdSU2roPgdabgAk&ved=0ahUKEwi7xvujx_GEAxVUilYBHQHrBpAQ4dUDCBA&uact=5&oq=kedro+parquet+dataset&gs_lp=Egxnd3Mtd2l6LXNlcnAiFWtlZHJvIHBhcnF1ZXQgZGF0YXNldDILEAAYgAQYywEYsAMyCRAAGAgYHhiwAzIJEAAYCBgeGLADMgkQABgIGB4YsANI-BBQ6A9Y6A9wA3gAkAEAmAEAoAEAqgEAuAEDyAEA-AEBmAIDoAIDmAMAiAYBkAYEkgcBM6AHAA&sclient=gws-wiz-serp (thanks @noklam)

Result: https://docs.kedro.org/en/0.18.5/kedro.datasets.pandas.ParquetDataSet.html

image

However, that version is no longer allowed in our robots.txt:

https://github.com/kedro-org/kedro/blob/1f2adf12255fc312ab9d429cbf6f851a13947cf3/docs/source/robots.txt#L1-L9

And in fact, according to https://technicalseo.com/tools/robots-txt/,

image

astrojuanlu avatar Mar 13 '24 15:03 astrojuanlu

Taking the liberty here of prioritizing this as High.

astrojuanlu avatar Mar 13 '24 15:03 astrojuanlu

Maybe a manual reindex is what's required here? Or a submission of a sitemap?

https://developers.google.com/search/docs/crawling-indexing/ask-google-to-recrawl

tynandebold avatar Mar 26 '24 14:03 tynandebold

I do the sitemap reindex via Google Search Console all the time.

noklam avatar Mar 26 '24 14:03 noklam

We had a URL-prefix property, so it covered only https://kedro.org and not everything under the kedro.org domain.

Requested a DNS change to LF AI & Data https://jira.linuxfoundation.org/plugins/servlet/desk/portal/2/IT-26615

astrojuanlu avatar Mar 26 '24 15:03 astrojuanlu

"Indexed, though blocked by robots.txt"

Screenshot 2024-03-26 at 17-25-30 URL Inspection

(┛ಠ_ಠ)┛彡┻━┻

astrojuanlu avatar Mar 26 '24 17:03 astrojuanlu

https://support.google.com/webmasters/answer/7440203#indexed_though_blocked_by_robots_txt

Indexed, though blocked by robots.txt

The page was indexed despite being blocked by your website's robots.txt file. Google always respects robots.txt, but this doesn't necessarily prevent indexing if someone else links to your page. Google won't request and crawl the page, but we can still index it, using the information from the page that links to your blocked page. Because of the robots.txt rule, any snippet shown in Google Search results for the page will probably be very limited.

Next steps:

astrojuanlu avatar Mar 26 '24 17:03 astrojuanlu

Important: For the noindex rule to be effective, the page or resource must not be blocked by a robots.txt file, and it has to be otherwise accessible to the crawler. If the page is blocked by a robots.txt file or the crawler can't access the page, the crawler will never see the noindex rule, and the page can still appear in search results, for example if other pages link to it.

https://developers.google.com/search/docs/crawling-indexing/block-indexing
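In other words, a page only stays deindexed if the crawler can actually fetch it and see the noindex signal; blocking it in robots.txt hides that signal. As a concrete aid, here is a small sketch using only the Python standard library to check whether a fetched page carries the meta tag (the `has_noindex` helper is illustrative, not part of any tooling mentioned in this thread):

```python
from html.parser import HTMLParser


class _RobotsMetaFinder(HTMLParser):
    """Records whether a <meta name="robots" content="...noindex..."> tag appears."""

    def __init__(self) -> None:
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            name = (d.get("name") or "").lower()
            content = (d.get("content") or "").lower()
            if name == "robots" and "noindex" in content:
                self.noindex = True


def has_noindex(html: str) -> bool:
    """Return True if the HTML source carries a robots noindex meta tag."""
    finder = _RobotsMetaFinder()
    finder.feed(html)
    return finder.noindex
```

The same signal can also be sent as an `X-Robots-Tag: noindex` HTTP response header, which avoids touching the page body at all.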

astrojuanlu avatar Mar 26 '24 17:03 astrojuanlu

Previous discussion about this on RTD https://github.com/readthedocs/readthedocs.org/issues/10648

astrojuanlu avatar Mar 26 '24 17:03 astrojuanlu

We got some good advice https://github.com/readthedocs/readthedocs.org/issues/10648#issuecomment-2021128135

But blocking this on #3586

astrojuanlu avatar Mar 26 '24 18:03 astrojuanlu

Potentially related:

  • Versions as branches https://github.com/readthedocs/blog/pull/74#issuecomment-637636908 (might be difficult or impossible to do for older versions, but we might want to do it going forward)
  • Discussion on making RTD implicit versioning rules more flexible https://github.com/readthedocs/readthedocs.org/issues/11183

astrojuanlu avatar Apr 02 '24 06:04 astrojuanlu

https://www.stevenhicks.me/blog/2023/11/how-to-deindex-your-docs-from-google/

noklam avatar Apr 02 '24 14:04 noklam

@astrojuanlu It would be very helpful to have access to the Google Search Console; can we catch up sometime this week? In addition, despite https://github.com/kedro-org/kedro/pull/3729, it appears the robots.txt hasn't been updated.

I am not super clear on the RTD build: do we need to manually refresh the robots.txt somewhere, or does it only get updated on release? See: https://docs.kedro.org/robots.txt

noklam avatar Apr 02 '24 17:04 noklam

To customize this file, you can create a robots.txt file that is written to your documentation root on your default branch/version.

https://docs.readthedocs.io/en/stable/guides/technical-docs-seo-guide.html#use-a-robots-txt-file

The default version (currently stable) has to see a new release for this to happen.
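For reference, the standard Sphinx mechanism for shipping such a file is `html_extra_path`, which copies extra files verbatim into the HTML output root at build time. This is a sketch of the relevant `conf.py` line only; the actual Kedro configuration may differ.

```python
# docs/source/conf.py
# Files listed here are copied as-is into the root of the built HTML,
# so robots.txt ends up at https://docs.kedro.org/robots.txt once the
# default (stable) version is rebuilt.
html_extra_path = ["robots.txt"]
```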

astrojuanlu avatar Apr 03 '24 09:04 astrojuanlu

We need to make sure the sitemap is crawled. See the example from Vizro:

User-agent: *
Disallow: /en/0.1.9/ # Hidden version
Disallow: /en/0.1.8/ # Hidden version
Disallow: /en/0.1.7/ # Hidden version
Disallow: /en/0.1.6/ # Hidden version
Disallow: /en/0.1.5/ # Hidden version
Disallow: /en/0.1.4/ # Hidden version
Disallow: /en/0.1.3/ # Hidden version
Disallow: /en/0.1.2/ # Hidden version
Disallow: /en/0.1.11/ # Hidden version
Disallow: /en/0.1.10/ # Hidden version
Disallow: /en/0.1.1/ # Hidden version

Sitemap: https://vizro.readthedocs.io/sitemap.xml

Ours is blocked currently.
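As an aside, robots.txt rules like the ones above can be sanity-checked offline with Python's standard-library `urllib.robotparser`. The paths in this sketch are examples, not our actual rules:

```python
import urllib.robotparser

# Example rules, modelled on the Vizro file above.
robots_txt = """\
User-agent: *
Disallow: /en/0.18.5/
Sitemap: https://docs.kedro.org/sitemap.xml
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A page under the disallowed old version vs. an allowed stable page.
blocked = rp.can_fetch("*", "https://docs.kedro.org/en/0.18.5/anything.html")
allowed = rp.can_fetch("*", "https://docs.kedro.org/en/stable/index.html")
```

Running a check like this before each release would catch a malformed robots.txt before Google does.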

Image

This isn't the primary goal of this ticket, but we can also look into it. The main goal of the ticket is "Why do URLs that we don't want indexed get indexed?", though we would definitely also love to fix the opposite: "Why aren't URLs that we do want indexed being indexed?"

Image

noklam avatar Apr 10 '24 11:04 noklam

Image

This makes it very clear that our robots.txt is just wrong.

noklam avatar Apr 10 '24 15:04 noklam

Mind you, we don't want to index /en/latest/. The rationale is that we don't want users to land on docs that correspond to an unreleased version of the code.

astrojuanlu avatar Apr 10 '24 16:04 astrojuanlu

Updated robots.txt in https://github.com/kedro-org/kedro/pull/3803. Will continue on this after the release, next sprint.

ankatiyar avatar Apr 17 '24 09:04 ankatiyar

Our sitemap still cannot be indexed

astrojuanlu avatar Apr 23 '24 12:04 astrojuanlu

Renaming this issue, because there's nothing else to investigate - search engines (well, Google) will index pages blocked by robots.txt because robots.txt is not the right mechanism to deindex pages.

astrojuanlu avatar May 21 '24 15:05 astrojuanlu

Addressed in #3885, keeping this open until we're certain the sitemap has been indexed.

astrojuanlu avatar May 23 '24 08:05 astrojuanlu

(robots.txt won't update until a new stable version is out)

astrojuanlu avatar May 23 '24 08:05 astrojuanlu

robots.txt got updated 👍

astrojuanlu avatar May 27 '24 18:05 astrojuanlu

Image

astrojuanlu avatar May 27 '24 18:05 astrojuanlu