Make queues bounded #1386
When I last looked into this my conclusion was that it is not possible to get an unbounded number of items in these queues. The only moment these queues can grow is when the runloop is paused, that is, when no stream is pulling data and no stream is committing offsets. However, under this condition no new commands and no new commits are added. In other words, the situation of unbounded growth is not possible (pending bugs of course).
Determining what would be a good queue size is a second question. For the command queue, which is cleared on every poll cycle, we'd need to accommodate:
The commit queue should be large enough to accommodate at most …
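For illustration only (this is not zio-kafka's actual implementation, just a minimal JVM sketch using `java.util.concurrent.ArrayBlockingQueue`): a bounded queue creates backpressure at the moment of offering, because a full queue either rejects the element or blocks the producer until a consumer drains it.

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BoundedQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        // Capacity 2: a stand-in for a bounded command/commit queue.
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        queue.put("commit-1"); // succeeds, queue size 1
        queue.put("commit-2"); // succeeds, queue is now full

        // offer() does not block: it reports failure when the queue is full.
        // This is the point where backpressure is created.
        boolean accepted = queue.offer("commit-3");
        System.out.println("third offer accepted: " + accepted); // false

        // Draining one element (as a poll cycle would) frees capacity again.
        queue.take();
        System.out.println("after take, offer accepted: " + queue.offer("commit-3")); // true
    }
}
```

A blocking `put` instead of `offer` would stall the producer rather than fail, which is the behavior that needs careful examination before making the queues bounded.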
Remember #986? What changed so that we can make the queues bounded again?
Oh wow, I did not remember that indeed. Then we need at least a good theory of what happened in #986 and what would be different this time.
One thing that is different is that we now have a separate commit queue.
Another thing that is different is that we now have metrics, so you can keep an eye on the actual queue sizes. |
The `commandQueue` and `commitQueue` are now unbounded, which could lead to unbounded memory usage in exceptional situations. Having bounded queues would be preferred, though we need to carefully examine the points where backpressure is created (when offering to the queue while it is full) and make sure that does not cause any issues.

Since committing is already a blocking operation (in the conceptual sense, it does not block a thread), backpressure from the queue would not be an issue for the end user. However, commits are added back to the queue on failure, which might be a problem when the queue is full.
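To make the re-queue-on-failure concern concrete, here is a hedged JVM sketch (again with `ArrayBlockingQueue`, not zio-kafka's own code): if a failed commit is put back into an already-full queue and nothing is draining it, a blocking `put` would hang forever. A timed `offer` at least surfaces the failure instead of deadlocking.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class RetryIntoFullQueue {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> commitQueue = new ArrayBlockingQueue<>(1);
        commitQueue.put("pending-commit"); // queue is now full

        // Re-queueing a failed commit: with no consumer draining the queue,
        // a blocking put() would hang here indefinitely. A timed offer makes
        // the failure mode observable instead.
        boolean requeued = commitQueue.offer("retried-commit", 100, TimeUnit.MILLISECONDS);
        System.out.println("requeued: " + requeued); // false: nobody drained the queue
    }
}
```

Whether to block, drop, or fail the retry in this situation is exactly the design question raised above; the metrics on actual queue sizes should help pick a capacity where this path is rarely hit.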