async-compression
Fix panic: cannot consume from pending buffer
Fixed #298
In this PR I tried a different fix: making sure buf.consume is always called, even on error.
I suspect that previously we didn't consume the buffer on error, and that might have caused the same data to be decompressed again.
I can't think of anywhere else that could fail; the decoder implementation looks alright.
@Turbo87 Can you try this PR please?
yep, I'll give it a try.
unfortunately still failing 😢
Thanks.
Is the software open-source?
Can I have a look at the code and the test?
the code yes, the test no. unfortunately we don't have a test that reproduces it. I can only run it on our staging environment where I can reproduce it. our test suite runs with an in-memory object_store instance instead of the S3 implementation we use on staging and production.
I've found the cause of the panic.
It happens because the decoder tries to advance the buffer before polling the underlying buf reader.
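For context, the contract here is that consume may only report bytes that a preceding fill_buf call actually returned, so the underlying reader has to be polled first. Here is a minimal sketch of that pattern using plain std::io rather than the actual async decoder code; the copy_through helper is purely illustrative:

```rust
use std::io::{BufRead, Cursor, Result};

// Illustrative sketch (not async-compression code): the reader must be
// refilled via fill_buf before consume is called, and consume may only
// cover bytes that fill_buf actually returned.
fn copy_through<R: BufRead>(mut reader: R, out: &mut Vec<u8>) -> Result<()> {
    loop {
        // 1. Poll/refill the underlying buffer first...
        let chunk = reader.fill_buf()?;
        if chunk.is_empty() {
            return Ok(()); // EOF
        }
        out.extend_from_slice(chunk);
        let used = chunk.len();
        // 2. ...and only then consume exactly the bytes that were observed.
        //    Advancing before (or beyond) what fill_buf returned is the kind
        //    of misuse that triggers "cannot consume from pending buffer".
        reader.consume(used);
    }
}

fn main() -> Result<()> {
    let mut out = Vec::new();
    copy_through(Cursor::new(b"hello world".to_vec()), &mut out)?;
    assert_eq!(out, b"hello world".to_vec());
    Ok(())
}
```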
cc @Turbo87 I've updated the PR, can you try again please?
Thank you!
the panic appears to be gone, but we're now seeing an "interval out of range" error result without a stacktrace. I will have to improve our logging a bit to figure out where exactly that is coming from.
cc @robjtede Shall we merge and publish this for now, since it at least fixes the panic for @Turbo87?
we're now seeing an "interval out of range" error
it turns out that this was a bug on our side, related to how we calculate exponential backoff for failed jobs.
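For anyone curious: an uncapped exponential backoff calculation can easily produce a delay larger than what the timer accepts, which is one way to end up with an "interval out of range" error. A hypothetical capped version (not the actual crates.io job-runner code) might look like:

```rust
use std::time::Duration;

// Hypothetical capped exponential backoff; the real crates.io code may differ.
// Without the cap, 2^attempt quickly exceeds what a timer interval accepts.
fn backoff(attempt: u32) -> Duration {
    const BASE: Duration = Duration::from_secs(1);
    const MAX: Duration = Duration::from_secs(60 * 60); // cap at one hour
    // Saturate instead of overflowing when the attempt count gets large.
    let factor = 2u32.checked_pow(attempt).unwrap_or(u32::MAX);
    BASE.checked_mul(factor).unwrap_or(MAX).min(MAX)
}

fn main() {
    for attempt in [0u32, 3, 10, 40] {
        println!("attempt {attempt}: wait {:?}", backoff(attempt));
    }
}
```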
I can confirm that https://github.com/Nullus157/async-compression/pull/303 appears to fix the issue for us! 🎉
thanks again! :)
Thank you!
cc @robjtede let's get this merged and cut a new release, as it is confirmed to fix the panic
I will get this merged and ask for review in the release PR.
Ahh wonderful, yes, let's get this out today.
thanks again for the investigation, fix and release! I just merged the latest update into crates.io :)