Network client rate limiter #1148
Hi,
This can be enforced by the rate limiting middleware. To enforce other conditions the client needs to maintain a lot of state, or run logic based on the contents of the request. Moreover, to enforce conditions like "The maximum number of simultaneous open historical data requests from the API is 50", the client needs to keep track of open requests and get information from responses or by calling the IB API. Such logic is best implemented in the IB client itself.
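For illustration only, a minimal sketch (assuming a Tokio-based async client; the type and method names are hypothetical) of how the IB client itself could track open historical data requests with a semaphore:

use std::future::Future;
use std::sync::Arc;
use tokio::sync::Semaphore;

// Hypothetical illustration: the IB client itself caps simultaneous open
// historical data requests at 50, since the middleware cannot see when a
// request completes.
struct HistoricalDataGate {
    permits: Arc<Semaphore>,
}

impl HistoricalDataGate {
    fn new() -> Self {
        Self { permits: Arc::new(Semaphore::new(50)) }
    }

    async fn with_slot<T>(&self, send: impl Future<Output = T>) -> T {
        // Waits while 50 requests are already open; the permit is released when
        // `_permit` drops, i.e. once the response has been fully handled.
        let _permit = self.permits.acquire().await.expect("semaphore closed");
        send.await
    }
}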
A layered, token-bucket rate limiter with a different quota per key will cover the rate limiting needs of most exchanges. Since there is no progress on boinkor-net/governor#193, we'll have to hand-roll the implementation, borrowing relevant parts from the governor repo. This is a draft of the design I'm considering; we can reuse these modules from governor.
struct RateLimiter<K> {
    // Quota applied to any key without an entry in `quota_store`.
    default_quota: Gcra,
    // Per-key quota overrides, e.g. a tighter limit for a heavy endpoint.
    quota_store: DashMap<K, Gcra>,
    // Per-key GCRA state, reused from governor.
    state_store: DashMapStateStore<K>,
    clock: DefaultClock,
    start: Instant,
}

impl<K: Hash + Eq + Clone> RateLimiter<K> {
    pub fn check_key(&self, key: &K) -> Result<PositiveOutcome, NegativeOutcome> {
        // Fall back to the default quota when the key has no override.
        let gcra = self.quota_store.get(key).map(|g| g.clone())
            .unwrap_or_else(|| self.default_quota.clone());
        // Test against the key's stored state and advance it if the request is allowed.
        gcra.test_and_update(self.start, key, &self.state_store, self.clock.now())
    }

    pub fn check_key_parts(&self, key: &K) -> Result<PositiveOutcome, NegativeOutcome> {
        // Layered check (assuming `into_parts()` yields the coarser layers, e.g.
        // "binance" and "binance:spot" for "binance:spot:new_order"): every layer
        // must pass, then the full key itself is checked last.
        for part in key.into_parts() {
            self.check_key(&part)?;
        }
        self.check_key(key)
    }
}

There are a few key differences between this and the governor implementation -
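As a rough usage sketch (the composite key format and the `send_request` / `backoff_until` helpers are hypothetical):

// Hypothetical usage: gate an outgoing call on the layered check.
let key = "binance:spot:new_order".to_string();
match limiter.check_key_parts(&key) {
    Ok(_) => send_request(),              // all layers had remaining capacity
    Err(denied) => backoff_until(denied), // carries the earliest permitted time
}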
The initial rate limiter for the base client uses a default quota of 6000/minute for Spot and 3000/minute for Futures (where a normal message is worth 1), with reference to the following specs:

We're also experimenting with hierarchical limiting for the heavier weight "request all order status" endpoints (20 times the weight of a normal message). The basic approach here is to allow requests right up to the specified API limits for standard weight messages, with the idea that the limiter will hold back requests just before Binance would otherwise respond with a 429 -- in the case of heavier weight requests joining the message bursts, this could still result in limiting from Binance. The plan is to gradually add finer grained "keyed" limiting where it makes sense, based on user feedback and our own testing and observations from "the wild".
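For illustration, a hedged sketch of how those numbers might be wired into the draft limiter above (the `RateLimiter::new` / `with_key_quota` builder methods and the key names are hypothetical; the 300/minute figure is simply 6000 divided by the weight of 20):

use std::num::NonZeroU32;
use governor::Quota;

// Hypothetical configuration: the default key budget matches the Spot limit of
// 6000 weight-units/minute (normal messages have weight 1), while the weight-20
// "request all order status" call gets its own, tighter bucket of 300 calls/minute.
let spot_limiter = RateLimiter::new(Quota::per_minute(NonZeroU32::new(6000).unwrap()))
    .with_key_quota("binance:spot:all_order_status", Quota::per_minute(NonZeroU32::new(300).unwrap()));

// Futures would use 3000/minute as the default instead.
let futures_limiter = RateLimiter::new(Quota::per_minute(NonZeroU32::new(3000).unwrap()));

Because this treats the heavy endpoint as its own bucket rather than charging 20 units against the shared budget, a heavy request joining a full standard-weight burst can still push the true weight over the Binance limit, which is the 429 case described above.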
Since the basic implementation is now in place and starting to be used, closing this issue in favor of more specific enhancement requests / bug reports as they may occur.
The rate limiting bug #780 can be fixed comprehensively by implementing a middleware rate limiter for the newly written network module #1098. There are many variations of rate limits across different exchanges. Most exchanges have different rate limits for different endpoints and return rate-limit-exceeded style errors when the limits are exceeded. However, repeated violations can lead to suspension, so it's best to have decent rate limiting built into the client itself.
In the current network client design, req/s, token-bucket, and layered rate limiters can be supported as middleware. Point-system rate limiters are more complex and are not well suited to a middleware: the middleware would need to intercept responses (which may arrive out of order) and update its state based on header values. It might be easier to enforce simple, conservative req/s rate limiting on these endpoints or leave the specific rate limiting logic to the exchange-specific client.
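As a minimal sketch of the middleware shape (the `HttpClient`, `HttpRequest`, `rate_key`, and `wait_time` names are hypothetical, not the actual network module API):

// Hypothetical middleware hook: every outgoing request is checked against the
// keyed limiter before it is sent; point-system limits that depend on response
// headers are left to the exchange-specific clients.
async fn send_with_limit(
    limiter: &RateLimiter<String>,
    client: &HttpClient,
    req: HttpRequest,
) -> Result<HttpResponse, ClientError> {
    let key = req.rate_key(); // e.g. "binance:spot:new_order" (hypothetical)
    while let Err(denied) = limiter.check_key_parts(&key) {
        // `denied` would expose the earliest permitted instant, like governor's
        // `NotUntil`; sleep until then and re-check.
        tokio::time::sleep(denied.wait_time()).await;
    }
    client.send(req).await
}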