Rate-limit requests made by couchbackup #92
Comments
+1 In my case this is very much needed for restores on the free tier - I'm trying to migrate from the old shared plan at the moment. Edit: I've used 'Network Link Conditioner' (part of 'Additional Tools for Xcode 9.0', available on Apple's developer site) to limit the bandwidth at OS level.
Thanks, however I found this not to be the case. Are retries enabled by default?
On 5 Jan 2018 08:51, "Rich Ellis" wrote:
Whilst this throttling is not available, @cloudant/couchbackup >= 2.0.0 does retry a restore batch if it gets a 429 response, so it should be possible to progress through a restore on a free plan.
Of course other requests being made against the same account will also be impacted by the increased number of requests/429s during the restore, but most of our supported libraries have options for retrying 429 requests which could help with riding that out.
For couchbackup/couchrestore, retries are enabled by default.
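To illustrate what "riding out" 429s looks like, here is a minimal retry-with-backoff sketch in Node.js. This is not couchbackup's actual retry code; `doRequest`, the attempt count and the delay values are all hypothetical names and numbers chosen for the example.

```js
// Minimal sketch: retry a request when the server responds 429 (Too Many Requests).
// `doRequest` is a hypothetical function returning a Promise of a response object
// with a `statusCode` property; the attempt count and delays are illustrative only.
async function withRetry429(doRequest, maxAttempts = 5, initialDelayMs = 500) {
  let delay = initialDelayMs;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await doRequest();
    if (response.statusCode !== 429) {
      return response; // success or a non-retryable error - hand it back to the caller
    }
    if (attempt === maxAttempts) {
      throw new Error('Still rate-limited after ' + maxAttempts + ' attempts');
    }
    // Back off before retrying so we stop adding to the pressure on the account.
    await new Promise((resolve) => setTimeout(resolve, delay));
    delay *= 2; // simple exponential backoff
  }
}
```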
An improvement on a simple max-requests-per-second setting would be allowing a percentage of the account's current request limits. Hopefully Cloudant will have APIs for this soon. We would need to be careful to round up so we always make at least one request per second (say the user asks for 1% but the account limit is only 20 requests/second).
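As a rough sketch, the percentage could be converted into an absolute budget like this. The names `accountLimit` and `percent` are made up for the example, and the account limit would have to come from a Cloudant API that does not exist yet:

```js
// Convert a user-supplied percentage into an absolute requests-per-second budget,
// rounding up so we always allow at least one request per second.
function requestsPerSecond(accountLimit, percent) {
  return Math.max(1, Math.ceil(accountLimit * (percent / 100)));
}

// e.g. 1% of an account limited to 20 requests/second still yields 1 request/second
console.log(requestsPerSecond(20, 1)); // 1
```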
CouchBackup, by default, goes as fast as it can within the constraints of the parallelism setting (which controls the number of concurrent connections to the server).
Instead, it would be nice for customers to be able to control the number of requests made per second - or at least the number of requests initiated per second, which is simpler to implement.
The approach I'd take to this would be a token bucket. As each item is pulled off the queue, block it until tokens are available. As `async` is already a dependency, its `forever` or `retry` functions would probably come in useful for blocking.

The complexity here is maintaining first-in-first-out ordering, to avoid requests made early in the backup process being starved for long periods. I'd definitely investigate methods to retain request ordering, otherwise pretty odd (or at least hard to understand) behaviour may happen. The queue has an `unshift` method which could be used to place tasks back on the queue; combined with a `setTimeout` to give us a delay before unshifting, that might work. I'd avoid the queue's `pause` and `resume`.

I'd strongly suggest a refill strategy based on calling a refill function on every `allow` (similar to golang's rate) rather than a `setTimeout`-based approach (like here), as it'll be more predictable. A sketch of that shape follows below.
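To make the on-demand refill concrete, here is a minimal token bucket sketch in Node.js. Tokens are topped up from elapsed time each time a caller asks for one (the golang rate style mentioned above) rather than on a timer. The class and method names are invented for the example; this is a sketch of the technique, not a proposed couchbackup implementation.

```js
// Minimal token bucket: `ratePerSecond` tokens accrue per second, up to `capacity`.
// Tokens are refilled lazily whenever take() is called, so no background timers run.
class TokenBucket {
  constructor(ratePerSecond, capacity) {
    this.ratePerSecond = ratePerSecond;
    this.capacity = capacity;
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  refill() {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.ratePerSecond);
    this.lastRefill = now;
  }

  // Resolves when a token is available. Note: strict first-in-first-out ordering among
  // concurrent callers would need an explicit waiter queue; this sketch only waits for
  // an estimated delay and then retries.
  take() {
    this.refill();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return Promise.resolve();
    }
    const waitMs = ((1 - this.tokens) / this.ratePerSecond) * 1000;
    return new Promise((resolve) => setTimeout(resolve, waitMs)).then(() => this.take());
  }
}

// Usage sketch: throttle work to roughly 5 requests per second.
// const bucket = new TokenBucket(5, 5);
// await bucket.take();
// ...make the request...
```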