Retry batch delete blob on 503 #1277
Labels
api: storage
Issues related to the googleapis/python-storage API.
priority: p3
Desirable enhancement or fix. May not be included in next release.
type: feature request
‘Nice-to-have’ improvement, new feature or different behavior or design.
Is your feature request related to a problem? Please describe.
When deleting a large number of blobs using the batch API, the client sometimes raises:

`ServiceUnavailable: 503 BATCH contentid://None: We encountered an internal error. Please try again.`
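For reference, the deletion job looks roughly like this (bucket name and prefix are placeholders):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name

blobs = list(bucket.list_blobs(prefix="exports/"))  # placeholder prefix

# All deletes are sent as a single batch request when the context exits;
# a transient 503 from the batch endpoint aborts the whole job here.
with client.batch():
    for blob in blobs:
        blob.delete()
```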
It is quite undesirable that this is raised in the middle of a big deletion job.

Describe the solution you'd like
I tried setting the retry parameter at the client level with `client.get_bucket(bucket_name, retry=retry, timeout=600)`, and at the blob level with `blob.delete(retry=retry, timeout=600)`, even forcing `if_generation_match=blob.generation`. No retry seems to be performed. The class does not seem to use any retry here: python-storage/google/cloud/storage/batch.py, line 309 in c52e882.
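For completeness, this is roughly what I attempted (the retry values are illustrative):

```python
from google.api_core.retry import Retry
from google.cloud import storage

client = storage.Client()

# Illustrative retry settings; any Retry covering 503 would do.
retry = Retry(initial=1.0, maximum=60.0, deadline=600.0)

bucket = client.get_bucket("my-bucket", retry=retry, timeout=600)  # placeholder bucket
blobs = list(bucket.list_blobs())

with client.batch():
    for blob in blobs:
        # These per-call retry/timeout arguments appear to be ignored once
        # the request is deferred into the batch.
        blob.delete(retry=retry, timeout=600, if_generation_match=blob.generation)
```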
Either the client can support it, or at the very least the batch object should give access to the blobs (subtasks) that couldn't be deleted so that we can retry manually.
A manual retry of the full batch (in a for loop) does not work, as some of the blobs were already deleted in the first attempt and raise a 404 on the second attempt.
Either retry automatically, or give the user the ability to retry only the blobs that failed.
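In the meantime, the workaround I have in mind looks roughly like this (just a sketch, the helper name is made up): run the batch once, and on a 503 fall back to per-blob deletes while ignoring 404s for blobs the partially applied batch already removed.

```python
from google.api_core.exceptions import NotFound, ServiceUnavailable
from google.cloud.storage.retry import DEFAULT_RETRY

def batch_delete_with_fallback(client, blobs):
    """Hypothetical helper: batch delete once, then mop up individually."""
    try:
        with client.batch():
            for blob in blobs:
                blob.delete()
        return
    except ServiceUnavailable:
        pass  # part of the batch may already have been applied

    for blob in blobs:
        try:
            # Explicit retry so individual deletes survive further 503s.
            blob.delete(retry=DEFAULT_RETRY)
        except NotFound:
            pass  # already deleted by the partially applied batch

# usage (client and blob list come from elsewhere):
# batch_delete_with_fallback(client, blobs)
```

Having the retry (or access to the failed sub-requests) built into the batch object would avoid this kind of boilerplate.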