Do something / nothing when max retries run out. #61
Comments
Can we add a default return value for when max retries are reached? Then the caller could just call f() instead of might_return_none().
Hey there @ubehera and @PegasusWang -- I had need of a similar thing and ended up writing #78. If you're still around and have thoughts, it'd be pretty nifty if you wanted to chime in on that PR.
Just in case someone stumbles across this (like I did), I made a change to use `retry_error_callback`.

To do nothing (return `None`):

```python
from tenacity import retry, stop_after_attempt

def do_nothing(eh):
    return

@retry(stop=stop_after_attempt(3), retry_error_callback=do_nothing)
def is_ok_to_fail():
    raise Exception('aw dang')
```

Or, to return a default value (using a lambda here because why not):

```python
@retry(stop=stop_after_attempt(3), retry_error_callback=lambda _: True)
def return_true_on_exception():
    raise Exception('aw shucks')
```
I'm doing a broad except to catch everything without re-raising. To make retrying work, I had to catch and re-raise IOError specifically. But if the max retries run out, I don't want it to raise and fail. Is there a way to do this?