hikari
add retry on timeout
Summary
Retry on the asyncio.TimeoutError that aiohttp raises when a request times out.
Currently testing this in production, hence the draft status, missing changelog fragment, and lack of formatting/tests.
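The retry behaviour being proposed can be sketched with a small generic helper (hypothetical names, not the PR's actual code; note that aiohttp.ClientOSError subclasses OSError, so catching OSError covers it without importing aiohttp here):

```python
import asyncio


async def request_with_retries(make_request, *, attempts=3, backoff=0.5):
    """Retry a request coroutine on timeout/connection-level OS errors.

    ``make_request`` is a zero-argument callable returning a fresh
    coroutine each call. Hypothetical illustration of the PR's idea.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return await make_request()
        except (asyncio.TimeoutError, OSError) as exc:
            last_exc = exc
            if attempt + 1 < attempts and backoff:
                # Simple linear backoff between attempts.
                await asyncio.sleep(backoff * (attempt + 1))
    raise last_exc
```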
Checklist
- [x] I have run `nox` and all the pipelines have passed.
- [x] I have made unit tests according to the code I have added/modified/deleted.
Working in production without issues. Working on the formalities now. 👍
No idea what to do about the flake8 max complexity error 🤷
I am a bit torn on this. It is similar to a PR I did some time ago to attempt a restart on timeout errors.
Now that I think about it, it might be better to just recommend that people edit their HTTPSettings to their liking if they see this error too often.
cc @FasterSpeeding Opinions?
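For reference, the user-side workaround being suggested looks roughly like this (a sketch assuming hikari's `HTTPSettings`/`HTTPTimeoutSettings` in `hikari.impl.config`; check the current docs for exact field names):

```python
import hikari
from hikari.impl import config

bot = hikari.GatewayBot(
    token="...",
    http_settings=config.HTTPSettings(
        # Give slow/unstable connections more room before aiohttp
        # raises asyncio.TimeoutError (field names may differ by version).
        timeouts=config.HTTPTimeoutSettings(total=60.0),
    ),
)
```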
Added aiohttp.ClientOSError to retry too.
> I am a bit torn on this. It is similar to a PR I did some time ago to attempt a restart on timeout errors.
> Now that I think about it, it might be better to just recommend that people edit their HTTPSettings to their liking if they see this error too often.
> cc @FasterSpeeding Opinions?
Tbh not exactly sure what you're getting at there, wdym by restart?
> No idea what to do about the flake8 max complexity error 🤷
Just put a noqa comment on it
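For the record, silencing flake8's max-complexity check (code C901, from its default mccabe plugin) on a single function looks like this (hypothetical function, standing in for whichever one flake8 flags):

```python
def retry_request(attempts):  # noqa: C901 - retry branching inflates the McCabe score
    """Hypothetical stand-in for the function flake8 flags as too complex."""
    return attempts > 0
```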
> Tbh not exactly sure what you're getting at there, wdym by restart?
Currently, if any error is thrown while doing the initial request to fetch the sharding information when the bot is starting, it just exits with an error.
I had the idea of at least retrying that request, but eventually scrapped it and decided on just recommending that people edit their HTTPSettings to have no timeout, or a higher one, on unstable connections.
> I had the idea of at least retrying that request, but eventually scrapped it and decided on just recommending that people edit their HTTPSettings to have no timeout, or a higher one, on unstable connections.
Is this related to that though? Doesn't asyncio.TimeoutError and aiohttp.ClientOSError being raised affect all requests, not just startup? And since they're not being caught currently (unless I'm missing something), changing HTTPSettings wouldn't help with that?
> Is this related to that though? Doesn't asyncio.TimeoutError and aiohttp.ClientOSError being raised affect all requests, not just startup? And since they're not being caught currently (unless I'm missing something), changing HTTPSettings wouldn't help with that?
Initially this PR was for asyncio.TimeoutError only; it later got extended to include aiohttp.ClientOSError, which I wasn't talking about in that comment.
And yes, when the timeout is exceeded, aiohttp raises a timeout error.
The comment I made before was about a change I worked on but never shipped (which only considered timeout errors), since I abandoned it.
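For context on the timeout path mentioned above: aiohttp enforces its client timeout via asyncio, so a timed-out request surfaces the same asyncio.TimeoutError that asyncio.wait_for raises. A stdlib-only illustration:

```python
import asyncio


async def slow_request():
    # Stands in for an HTTP request that never completes in time.
    await asyncio.sleep(10)
    return "response"


async def fetch_with_timeout():
    try:
        # Equivalent in spirit to aiohttp timing out a request.
        return await asyncio.wait_for(slow_request(), timeout=0.01)
    except asyncio.TimeoutError:
        return "timed out"
```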
Personally I don't like the current state of the PR, will comment on it later :)
It needs rebasing now anyways lol
Might want to double-check that the network to the Raspberry Pi is stable (try using an Ethernet cable directly to the router, if not done already).
My Raspberry Pi is plugged directly into my router; it just happens sometimes (possibly when the internet connection briefly cuts off?).
aiohttp.ClientOSError is a really weird case and not something I have seen in a while, especially on Linux.
I only have the Discord notification from Sentry, since Sentry deletes events after 90 days:
> My Raspberry Pi is plugged directly into my router; it just happens sometimes (possibly when the internet connection briefly cuts off?).
After a 30s timeout, I think the issue might be bigger than just a simple timeout, but regardless, maybe this is worth implementing for the few it would affect.
> I only have the Discord notification from Sentry, since Sentry deletes events after 90 days
Hmmm. Under normal functioning you shouldn't be receiving this, but I guess we could handle it for the odd case.
Closing in favor of https://github.com/hikari-py/hikari/pull/1648