Should a "retry_on_result" not include a "retry_on_exception=never_retry_on_exception"?
As mentioned in https://github.com/rholder/retrying/issues/25, I hoped to control retries via a result-checking function, whose decisions I made visible with an explicit print statement.
A suspiciously long run (in a detached process that I couldn't easily debug or interrupt) sent me digging into the internal retrying logic.
With a print statement after the "sleep" calculation, I was able to observe that it ran out of my control into endless retries (with wait_exponential_multiplier):
# inside retrying.Retrying.call, where the sleep time is computed:
sleep = self.wait(attempt_number, delay_since_first_attempt_ms)
# xxx pa 141117 debug wait
print "RETRY: ", sleep, attempt_number, delay_since_first_attempt_ms
The reason was that my code had hit an exception, and the default logic is "always_reject" because of this part of retrying.py:
if retry_on_exception is None:
    self._retry_on_exception = self.always_reject
which ended in endless retries:
RETRY: 2000 1 0
RETRY: 4000 2 2003
RETRY: 8000 3 6004
RETRY: 10000 4 14006
RETRY: 10000 5 24008
RETRY: 10000 6 34010
...
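For context, the decision that routes an exception into the retry path lives in retrying's should_reject; in the source it reads roughly like this (paraphrased, details may differ between releases):

def should_reject(self, attempt):
    reject = False
    if attempt.has_exception:
        # Exceptions are filtered through _retry_on_exception, which
        # defaults to always_reject, i.e. "always retry".
        reject |= self._retry_on_exception(attempt.value[1])
    else:
        # Only exception-free attempts reach the retry_on_result filter.
        reject |= self._retry_on_result(attempt.value)
    return reject

So any exception bypasses the retry_on_result filter entirely, and without a stop condition (stop_max_attempt_number / stop_max_delay) it is retried forever.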
To stop the endless loop, I had to override "retry_on_exception":
def never_retry_on_exception(exception):
    """Always return False so the exception is raised instead of retried (retrying's default is to retry!)."""
    return False
# ... and I had to add it to my decorator:
retry429_decorator = retry(
    retry_on_result=retry_if_result_429,
    wait_exponential_multiplier=1000,
    wait_exponential_max=10000,
    retry_on_exception=never_retry_on_exception,
)
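For reference, retry_if_result_429 is the result filter referenced above; a minimal sketch (hypothetical, assuming the wrapped call returns an HTTP status code as an int) could look like this:

def retry_if_result_429(result):
    # Hypothetical filter: keep retrying only while the call reports
    # HTTP 429 (Too Many Requests).
    return result == 429

@retry429_decorator
def call_api():
    # Placeholder for the real request; it must return a status code
    # for the filter above to inspect.
    ...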
Summary:
- for me it was a trap, because I thought I had full control via "retry_on_result"
- is this intended by design?
- was my approach the correct one?
- if yes, would it be worth mentioning more explicitly in the docs that retrying retries on any exception (traceback) by default?
I've encountered this pain as well -- your approach seems correct to me.
I usually use a lambda to make it a little more concise, though:
@retry(retry_on_exception=lambda y: False)
def method_to_retry():
    raise Exception("oh darn")
This should raise the exception without retrying at all.
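A quick sanity check (my own sketch) confirms this; calling the method surfaces the original exception on the first attempt:

try:
    method_to_retry()
except Exception as exc:
    # With the lambda filter in place, this prints immediately,
    # with no retries and no exponential sleeps.
    print("raised on first call:", exc)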