
[Feature] Fetch metadata as GoogleBot

Open kroese opened this issue 10 months ago • 9 comments

Requirements

  • [X] Is this a feature request? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • [X] Did you check to see if this issue already exists?
  • [X] Is this only a feature request? Do not put multiple feature requests in one issue.
  • [X] Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.
  • [X] Do you agree to follow the rules in our Code of Conduct?

Is your proposal related to a problem?

A lot of newspaper websites show a cookie wall, which prevents the OpenGraph metadata from being fetched for link previews.

When I contacted some of them, they told me to simply use the GoogleBot user agent for fetching metadata in forum software. (Strange advice from a large company, but this was their official solution.)

After I modified the Lemmy source to identify as GoogleBot when scraping metadata, link previews worked fine for those newspapers.

But I noticed that a couple of other websites stopped working. For example, links to lemmy.world show a Cloudflare error when using GoogleBot, probably because of a bot-protection mechanism, and other websites serve Google a different page without any metadata.

So unfortunately, switching to GoogleBot only fixes the issue on some domains and creates an issue on others.

Describe the solution you'd like.

It would be really nice if, when fetching the metadata with the Lemmy user agent fails, Lemmy retried one more time using the GoogleBot user agent.
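The proposed flow can be sketched as follows. This is an illustrative sketch, not Lemmy's actual code: `fetch_with_fallback` and the `fetch` closure are hypothetical names, and `fetch` stands in for a real HTTP request (e.g. via reqwest) that returns the response body, or `None` on error.

```rust
// Sketch of the proposed retry flow (hypothetical; names are illustrative).
fn fetch_with_fallback<F>(fetch: F) -> Option<String>
where
    F: Fn(&str) -> Option<String>,
{
    // Placeholder user-agent strings.
    const LEMMY_UA: &str = "Lemmy";
    const GOOGLEBOT_UA: &str =
        "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

    match fetch(LEMMY_UA) {
        // Keep the first response only if it actually contains OpenGraph tags.
        Some(body) if body.contains("property=\"og:") => Some(body),
        // Otherwise retry once, identifying as GoogleBot.
        _ => fetch(GOOGLEBOT_UA),
    }
}

fn main() {
    // Simulate a site that shows a cookie wall to Lemmy but metadata to GoogleBot.
    let site = |ua: &str| {
        if ua.contains("Googlebot") {
            Some(r#"<meta property="og:title" content="Headline">"#.to_string())
        } else {
            Some("Please accept cookies".to_string())
        }
    };
    assert!(fetch_with_fallback(site).unwrap().contains("og:title"));
}
```

Note the guard on the first response: a cookie wall often returns HTTP 200, so a plain error check is not enough; the retry has to trigger whenever the body lacks OpenGraph tags.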

Describe alternatives you've considered.

There is no alternative.

Additional context

No response

kroese avatar Apr 13 '24 09:04 kroese

This sounds like a very specific use case which would be rather complicated to implement. Maybe best to do it via an extension.

Nutomic avatar Apr 15 '24 10:04 Nutomic

The actual change is just a single line of code in my fork. Making it configurable is the part that makes it difficult to implement.

So maybe it's better to just have a fixed, hard-coded list of domains that don't work without GoogleBot, as that would be much simpler.

A good side effect would be that when people find domains that don't work with Lemmy, they are forced to submit a pull request extending the global list instead of just adding them to a local one. That way other instances will benefit from it too.

kroese avatar Apr 16 '24 12:04 kroese

We could just add an optional custom_metadata_fetcher_user_agent to the config hjson. We could go as complicated as per domain, but I doubt that's necessary, as long as we limit it to metadata fetching only.

dessalines avatar Apr 17 '24 14:04 dessalines

@dessalines As described earlier, that won't work. Some domains need GoogleBot, otherwise you are redirected to their cookie wall, while other domains refuse requests from GoogleBot (like the lemmy.world Cloudflare protection, which denies the request).

So a single user agent for metadata fetching will not work. That's why we need a list somewhere; whether it is hard-coded or configurable is not really important to me.

kroese avatar Apr 17 '24 14:04 kroese

In that case you could add a config to crates/utils/src/settings/structs.rs that looks something like:

struct DomainAndUserAgent {
  domain: Url, // url::Url
  user_agent: String,
}

struct MetadataFetcherUserAgent {
  domain_and_user_agents: Vec<DomainAndUserAgent>,
}
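For illustration, consulting such a per-domain list when fetching could look something like this. All names here are hypothetical, not Lemmy's actual API; the overrides are shown as plain `(domain, user_agent)` string pairs for simplicity.

```rust
// Hypothetical lookup: pick a per-domain user-agent override from config,
// falling back to the default Lemmy user agent.
fn user_agent_for<'a>(
    host: &str,
    overrides: &'a [(String, String)], // (domain, user_agent) pairs from config
    default_ua: &'a str,
) -> &'a str {
    overrides
        .iter()
        .find(|(domain, _)| domain == host)
        .map(|(_, ua)| ua.as_str())
        .unwrap_or(default_ua)
}

fn main() {
    let overrides = vec![("example.com".to_string(), "Googlebot".to_string())];
    assert_eq!(user_agent_for("example.com", &overrides, "Lemmy"), "Googlebot");
    assert_eq!(user_agent_for("other.com", &overrides, "Lemmy"), "Lemmy");
}
```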

dessalines avatar Apr 17 '24 15:04 dessalines

If requests are attempted with both user agents, would it be possible to automatically determine which response to use?

dullbananas avatar Apr 20 '24 02:04 dullbananas

@dullbananas Yes, by checking whether the response contains OpenGraph tags or not.
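That check could be sketched as a simple predicate over the response body. This is a hypothetical helper, not Lemmy's actual code; a real implementation would parse the HTML rather than match substrings.

```rust
/// Hypothetical check: does the fetched HTML appear to contain
/// OpenGraph metadata, e.g. <meta property="og:title" ...>?
fn has_opengraph_tags(html: &str) -> bool {
    html.contains("property=\"og:") || html.contains("property='og:")
}

fn main() {
    assert!(has_opengraph_tags(r#"<meta property="og:title" content="News">"#));
    assert!(!has_opengraph_tags("<html><body>Please accept our cookies</body></html>"));
}
```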

kroese avatar Apr 20 '24 07:04 kroese

FWIW, a website that I maintain blocks fake user agents, e.g. things that claim to be Googlebot when they are not coming from Google's networks. (The site shows OpenGraph data to all user agents, though.)

robrwo avatar Sep 16 '24 09:09 robrwo

I just realized the solution hinted at by @dullbananas would be so much easier.

Instead of keeping a list of which domains need GoogleBot, just automatically try GoogleBot for every domain that fails to return metadata with the Lemmy user agent.

That way there is no need to keep any lists. I have modified the feature request accordingly.

kroese avatar Sep 23 '24 16:09 kroese