
Some telegram channels being rate limited: A wait of 2787 seconds is required (caused by contacts.ResolveUsername)

Open enchained opened this issue 2 months ago • 4 comments

Routes

/telegram/channel/:username/:routeParams?

Full routes

/telegram/channel/dtfbest

Related documentation

https://docs.rsshub.app/routes/social-media#channel

What is expected?

Channel fetch avoids being rate limited if possible

What is actually happening?

Channel fetch is being rate limited by telegram in Folo client

Deployment information

RSSHub demo (https://rsshub.app)

Deployment information (for self-hosted)

No response

Additional info

Some telegram channels in Folo have this error:

```
Error
since 2 months ago
A wait of 2787 seconds is required (caused by contacts.ResolveUsername)
```

Example: 
rsshub://telegram/channel/dtfbest

Folo already has an issue for this (https://github.com/RSSNext/Folo/issues/4503) with another channel as an example, but there has been no activity for a couple of months, and in any case, shouldn't rate-limit issues be addressed on the route side first? From that issue it looks like this also affects self-hosted instances.

This error may be due to Telegram imposing a very small daily quota on resolving a user ID from a username (contacts.ResolveUsername):
https://ru.stackoverflow.com/questions/1566410/%D0%9E%D0%B1%D1%85%D0%BE%D0%B4-flood-wait-caused-by-contacts-resolveusername
https://gram.js.org/beta/classes/TelegramClient.html#getEntity
https://gram.js.org/beta/classes/TelegramClient.html#getInputEntity
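As those docs describe, gram.js only needs to hit contacts.ResolveUsername when the entity is not already in its cache. A minimal sketch of that memoization idea (resolveUsername and the entity shape here are stand-ins, not the real gram.js API):

```typescript
type InputEntity = { id: number; accessHash: string };

let apiCalls = 0; // counts simulated contacts.ResolveUsername requests

// Stand-in for the heavily rate-limited contacts.ResolveUsername call.
async function resolveUsername(username: string): Promise<InputEntity> {
  apiCalls++;
  return { id: username.length, accessHash: 'hash-' + username };
}

// Cache resolved entities so each username hits the API at most once.
const entityCache = new Map<string, InputEntity>();

async function getInputEntityCached(username: string): Promise<InputEntity> {
  const cached = entityCache.get(username);
  if (cached) return cached; // cache hit: no API call, no flood-wait risk
  const entity = await resolveUsername(username);
  entityCache.set(username, entity);
  return entity;
}
```

If this cache is lost between requests, every fetch pays the ResolveUsername cost again, which is where the daily quota starts to bite.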

The date of the last post successfully fetched by Folo (September 16) coincides with the date this large PR was merged:
https://github.com/DIYgod/RSSHub/pull/18268

Did this route maybe start resolving usernames more frequently after this change?

Later, an input-entity caching PR was merged, but it looks like it did not help in Folo's case.
https://github.com/DIYgod/RSSHub/pull/20128

Can this issue be resolved on the route side?

This is not a duplicated issue

  • [x] I have searched existing issues to ensure this bug has not already been reported

enchained avatar Nov 10 '25 04:11 enchained

Searching for maintainers:
  • /telegram/channel/:username/:routeParams?: @DIYgod @Rongronggg9 @synchrone @pseudoyu

To maintainers: if you are not willing to be disturbed, list your username in scripts/workflow/test-issue/call-maintainer.js. In this way, your username will be wrapped in an inline code block when tagged so you will not be notified.

If no routes can be matched, the issue will be closed automatically. Please use NOROUTE for a route-irrelevant issue, or leave a comment if this is a mistake and we will review it again.

github-actions[bot] avatar Nov 10 '25 04:11 github-actions[bot]

Hi @enchained! I'm Dosu and I’m helping the RSSHub team.

The Telegram rate limiting ("A wait of 2787 seconds is required (caused by contacts.ResolveUsername)") is triggered when the route resolves usernames too frequently, hitting Telegram's strict quota for contacts.ResolveUsername. After PR #18268, which refactored the Telegram route, and PR #20128, which added caching for input entity retrieval, the route should cache resolved entities to reduce these calls. However, if the cache is not persistent, not shared across requests/instances, or too short-lived, repeated requests will still trigger rate limits—this matches what you're seeing in Folo and self-hosted setups.

The current implementation caches the resolved entity using a key like telegram:inputEntity:${username} and reuses it if available, but if the cache is lost (e.g., after a restart, or if using in-memory cache in a multi-instance deployment), the route will re-resolve the username and quickly hit the limit again. Improving the persistence and scope of this cache (for example, using a shared Redis cache with a long TTL) would help mitigate the issue. Application-side batching or rate-limiting of username resolutions could also help, but the main fix is ensuring the cache is robust and persistent enough to avoid unnecessary API calls.

See the caching logic and its limitations in PR #20128.
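For illustration, the persistence point above can be sketched as a TTL cache. The Map here is a stand-in for a shared Redis store, and the key shape follows the telegram:inputEntity:${username} pattern mentioned above; none of this is the actual RSSHub implementation:

```typescript
// Stand-in for a shared, persistent store (e.g. Redis) with per-key expiry.
const store = new Map<string, { value: string; expiresAt: number }>();

// Store a serialized entity under a namespaced key with a TTL in ms.
// `now` is passed explicitly to keep the sketch deterministic.
function cacheSet(key: string, value: string, ttlMs: number, now: number): void {
  store.set(key, { value, expiresAt: now + ttlMs });
}

// Return the cached value, or undefined on a miss or after expiry.
function cacheGet(key: string, now: number): string | undefined {
  const entry = store.get(key);
  if (!entry || entry.expiresAt <= now) return undefined;
  return entry.value;
}
```

The key design question is the TTL: resolved entities change rarely, so a long TTL (days) trades a little staleness for far fewer ResolveUsername calls.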

To reply, just mention @dosu.



dosubot[bot] avatar Nov 10 '25 04:11 dosubot[bot]

The route code says: "some channels are not available in t.me/s/, fallback to use Telegram api". My channel is available via the web preview (https://t.me/s/dtfbest), which makes me wonder why the Telegram API is still being used (and rate limited) unnecessarily on Folo.

From the code `const useWeb = ctx.req.param('routeParams') || !config.telegram.session;` it looks like Folo's instance must have a Telegram session configured, since I did not provide routeParams or any other parameters when I subscribed to the channel there.

To test my theory, I tried adding rsshub://telegram/channel/dtfbest/includeServiceMsg=0 instead of rsshub://telegram/channel/dtfbest to Folo. It worked and fetched the latest posts, which probably means the extra parameter forced useWeb, and the web path does not have such rate-limiting issues. The downside is that the default parameter-less channel feed on Folo has a lot of older cached posts, while this one has only the last 20, since the URL is unique and not cached yet.

@synchrone @TonyRL Would it make sense to force useWeb in the route even if a session was provided, based on web availability or rate-limit errors in the response? Or maybe add useWeb as an explicit routeParam and advise in the docs to prefer it in case of errors/rate limits when the channel is accessible via the t.me/s/ web preview. The first option would be more seamless for users.
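A rough sketch of the first option, falling back to the web preview when the API path reports a flood wait. fetchViaApi and fetchViaWeb are hypothetical stand-ins, not the real route functions:

```typescript
type Post = { id: number; text: string };

// Try the MTProto path first; on a flood-wait error, degrade to t.me/s/.
async function fetchChannel(
  username: string,
  fetchViaApi: (u: string) => Promise<Post[]>,
  fetchViaWeb: (u: string) => Promise<Post[]>,
): Promise<Post[]> {
  try {
    return await fetchViaApi(username);
  } catch (err) {
    // gram.js surfaces rate limits as "A wait of N seconds is required".
    if (String(err).includes('wait of')) {
      return fetchViaWeb(username); // seamless degradation to web preview
    }
    throw err; // unrelated errors still propagate
  }
}
```

A real implementation would also need to handle channels that are not available via t.me/s/ at all, in which case the flood-wait error has to surface to the caller.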

enchained avatar Nov 10 '25 05:11 enchained

I was under the impression that the Telegram client has an internal cache for user entities, but apparently, for high-traffic instances with a high cardinality of user entities, this might backfire even with a Redis cache.

Realistically a scalable way to handle such an issue would be to rotate over a pool of multiple user accounts.
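For what it's worth, the rotation idea could look something like this round-robin pool, which spreads each account's ResolveUsername quota across 1/N of the request rate. This is only a sketch; real entries would be TelegramClient sessions rather than strings:

```typescript
// Round-robin pool: each acquire() returns the next session in order,
// wrapping around, so load is spread evenly across all accounts.
class SessionPool {
  private next = 0;
  constructor(private sessions: string[]) {
    if (sessions.length === 0) throw new Error('pool must not be empty');
  }
  acquire(): string {
    const session = this.sessions[this.next];
    this.next = (this.next + 1) % this.sessions.length;
    return session;
  }
}
```

A production version would also need to skip accounts that are currently serving a flood-wait penalty, which turns this into a pool with per-entry cooldowns.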

I run an instance with only 12 channels in my rss reader, so I never encountered this.

A fallback to web parsing sounds pragmatic and the best short-term solution. It has its own shortcomings, e.g. older posts have broken image links and videos are not available at all, but it's certainly better than a rate-limit error.

synchrone avatar Nov 11 '25 16:11 synchrone