
Respect Telegram's throttle requests while generating timelapses

Open pidpawel opened this issue 2 years ago • 1 comments

I've been having occasional hiccups with timelapses, getting errors like `Jun 22 23:24:38 fluiddpi python[549]: telegram.error.RetryAfter: Flood control exceeded. Retry in 13.0 seconds`, so I decided to add proper handling of this error instead of the naive implementation I committed some time ago.

This implementation is somewhat generic and may be transplanted to other parts of the code if you wish (I haven't seen the need so far, though). `retryable_notification` is almost a decorator: it takes a callable/lambda as an argument and repeatedly tries to execute it until one of the conditions is met. The first argument decides how persistent this mechanism should be: if `required` is `True`, it will try to send the message no matter what, and on failure it will block for the requested number of seconds. If `required` is `False`, it may proactively skip the request entirely when it knows the required amount of time has not yet passed, but it will never block the rest of the code.
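For readers following along, here is a minimal, self-contained sketch of the behavior described above. It is not the actual PR code: the `RetryAfter` class below is a stand-in mimicking `telegram.error.RetryAfter` from python-telegram-bot so the example runs without the library, and names like `_next_allowed` are my own.

```python
import time

class RetryAfter(Exception):
    """Stand-in for telegram.error.RetryAfter (flood-control error)."""
    def __init__(self, retry_after):
        super().__init__(f"Flood control exceeded. Retry in {retry_after} seconds")
        self.retry_after = retry_after

# Earliest moment we believe the API will accept another call.
_next_allowed = 0.0

def retryable_notification(required, func, sleep=time.sleep, now=time.monotonic):
    """Run func(), honoring Telegram's flood-control hints.

    required=True:  block and retry until func() succeeds.
    required=False: skip the call entirely if the throttle window
                    has not elapsed yet, never blocking the caller.
    """
    global _next_allowed
    while True:
        if now() < _next_allowed:
            if required:
                sleep(_next_allowed - now())  # wait out the throttle window
            else:
                return None  # drop the optional message instead of blocking
        try:
            return func()
        except RetryAfter as err:
            # Remember when the API told us to come back.
            _next_allowed = now() + err.retry_after
            if not required:
                return None
```

With `required=False`, a throttled status update is simply dropped; with `required=True`, the caller blocks for the server-requested delay and retries.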

As you can see in the screencast below, the messages are a bit choppy, but this is close to the best we can do without actually slowing down timelapse generation. https://user-images.githubusercontent.com/322405/175297031-3380da45-7cc7-4942-8efd-e1f7d06d4f96.mp4

Hope you like it & have a wonderful day Paweł

pidpawel avatar Jun 23 '22 12:06 pidpawel

Hey! Thank you for your contribution. It's very cleanly done, and I appreciate your effort.

The problem we have is an "ideological" one. What you do is queue the updates and try to send them "as soon as possible", which can cause a lot of problems down the line. A similar problem already exists in the reverse direction: if the bot is offline and you send 50 commands to the chat during that time, then turn the bot back on, it will try to run ALL of them. This is a bigger issue than one may think.

As for the problem you are trying to solve: @nlef had the idea not to send "as soon as it is possible again", but rather, when sending is impossible right now, to queue everything up and then collapse similar tasks before sending. For example, if you have 10 status updates queued, 9 of them are already stale and would be immediately replaced, so there is no reason to send them at all.
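The collapsing idea could be sketched roughly like this (my own naming throughout; `CollapsingQueue` and the string keys are hypothetical, not anything from the project):

```python
from collections import OrderedDict

class CollapsingQueue:
    """Hold outgoing messages while throttled, keeping only the newest
    message per key, so stale updates are never sent at all."""

    def __init__(self):
        self._pending = OrderedDict()

    def put(self, key, message):
        # A newer message with the same key replaces the stale one and
        # moves to the back; distinct keys keep their arrival order.
        self._pending.pop(key, None)
        self._pending[key] = message

    def drain(self):
        """Return the collapsed messages in order and empty the queue."""
        items = list(self._pending.values())
        self._pending.clear()
        return items
```

Ten queued "status" updates would collapse into one (the latest), while unrelated messages such as an alert still go out once the throttle window ends.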

If you wish to take on such a big feature, you are more than welcome to, and we will be happy to see your ideas on that topic. I am of course leaving this open, should you decide you want to proceed with it; I think this is a good place to have discussions on that. I will also open a corresponding issue on our project board to track progress.

aka13-404 avatar Jun 24 '22 09:06 aka13-404