Fix panic: cannot consume from pending buffer #303
Conversation
Fixed #298 Signed-off-by: Jiahao XU <[email protected]>
@Turbo87 Can you try this PR please?
yep, I'll give it a try.
Thanks. Is the software open-source? Can I have a look at the code and the test?
the code yes, the test no. unfortunately we don't have a test that reproduces it. I can only run it on our staging environment where I can reproduce it. our test suite runs with an in-memory
I've found the cause of the panic. It is because the decoder tries to advance the buffer before polling the underlying buf reader. cc @Turbo87 I've updated the PR, can you try again please? Thank you!
the panic appears to be gone, but we're now seeing an "interval out of range" error result without a stacktrace. I will have to improve our logging a bit to figure out where exactly that is coming from.
it turns out that this was a bug on our side, related to how we calculate exponential backoff for failed jobs. I can confirm that #303 appears to fix the issue for us! 🎉 thanks again! :)
Thank you!
cc @robjtede let's get this merged and cut a new release, as it is confirmed to fix the panic
I will get this merged and ask for review in the release PR. |
Ahh wonderful, yes, let's get this out today.
thanks again for the investigation, fix and release! I just merged the latest update into crates.io :) |
Fixed #298
This PR changes the decoder's `do_poll_read` impl to not advance the buffer on the first flush. The panic happens because the decoder tries to advance the buffer before polling the underlying buf reader.
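For illustration, here is a minimal sketch of the ordering this describes, written against tokio's `AsyncBufRead` with a made-up `decode_step` helper. It is not the crate's actual `do_poll_read`, only the pattern: fill the underlying buffer first, and call `consume` only after `poll_fill_buf` has returned data.

```rust
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

use tokio::io::{AsyncBufRead, ReadBuf};

// Hypothetical decode step (stand-in for the real decompressor): copies as
// much input as fits into `out` and reports how many input bytes it used.
fn decode_step(input: &[u8], out: &mut ReadBuf<'_>) -> io::Result<usize> {
    let n = input.len().min(out.remaining());
    out.put_slice(&input[..n]);
    Ok(n)
}

// Sketch of a decoder's poll_read over an AsyncBufRead. The point is the
// ordering: poll_fill_buf() first, consume() only after it returned
// Ready(Ok(..)). Consuming while the underlying reader is still pending is
// the situation the panic message describes.
fn do_poll_read<R: AsyncBufRead>(
    mut reader: Pin<&mut R>,
    cx: &mut Context<'_>,
    out: &mut ReadBuf<'_>,
) -> Poll<io::Result<()>> {
    // 1. Fill the underlying buffer first.
    let input = match reader.as_mut().poll_fill_buf(cx) {
        Poll::Ready(Ok(buf)) => buf,
        Poll::Ready(Err(e)) => return Poll::Ready(Err(e)),
        // Nothing buffered yet: return without touching the buffer.
        Poll::Pending => return Poll::Pending,
    };

    if input.is_empty() {
        // EOF: nothing left to decode.
        return Poll::Ready(Ok(()));
    }

    // 2. Decode from the borrowed buffer.
    let used = decode_step(input, out)?;

    // 3. Only now tell the reader how much of its buffer we actually used.
    reader.as_mut().consume(used);

    Poll::Ready(Ok(()))
}
```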
In this PR I tried a different fix: making sure `buf.consume` is always called, even on error. I suspect that previously we didn't consume the buffer on error, which might have caused the same data to be decompressed again.
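A sketch of that "always consume" idea, again against tokio's `AsyncBufRead` and with a hypothetical `try_decode` helper that reports how much input it used alongside its result, so `consume` can run even when decoding fails:

```rust
use std::io;
use std::pin::Pin;
use std::task::{ready, Context, Poll};

use tokio::io::{AsyncBufRead, ReadBuf};

// Hypothetical fallible decode step: returns how many input bytes it
// consumed *together with* the result, so the caller can consume them even
// when decoding fails partway through.
fn try_decode(input: &[u8], out: &mut ReadBuf<'_>) -> (usize, io::Result<()>) {
    let n = input.len().min(out.remaining());
    out.put_slice(&input[..n]);
    (n, Ok(()))
}

fn poll_read_always_consume<R: AsyncBufRead>(
    mut reader: Pin<&mut R>,
    cx: &mut Context<'_>,
    out: &mut ReadBuf<'_>,
) -> Poll<io::Result<()>> {
    let input = ready!(reader.as_mut().poll_fill_buf(cx))?;
    if input.is_empty() {
        return Poll::Ready(Ok(())); // EOF
    }

    let (used, result) = try_decode(input, out);

    // Consume unconditionally. If we returned the error before this call,
    // the bytes the decoder already looked at would still sit in the
    // BufRead and would be handed to it again on the next poll.
    reader.as_mut().consume(used);

    Poll::Ready(result)
}
```

The design point is simply that the error is propagated only after the buffer has been advanced, so a failed poll never leaves already-processed bytes queued for re-decoding.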
I can't think of anywhere else that could fail; the decoder implementation looks alright.