async-compression
Panic (slice indexes) when encoding with brotli
We just started getting this last night:
panic: slice index starts at 15827 but ends at 8192
Thread 0 Crashed:
0 core 0x558771457352 core::slice::slice_index_order_fail (mod.rs:2758)
1 core 0x558770cec47b [inlined] core::ops::range::Range<T>::index (mod.rs:2917)
2 core 0x558770cec47b [inlined] core::ops::range::RangeFrom<T>::index (mod.rs:2996)
3 core 0x558770cec47b [inlined] core::slice::<T>::index (mod.rs:2732)
4 brotli 0x558770cec47b brotli::enc::compress_fragment::BrotliCompressFragmentFastImpl (compress_fragment.rs:979)
5 brotli 0x558770ceafe1 brotli::enc::compress_fragment::BrotliCompressFragmentFast
6 brotli 0x558770d36dee brotli::enc::encode::BrotliEncoderCompressStreamFast (encode.rs:2762)
7 async_compression 0x558770c7ec25 [inlined] async_compression::codec::brotli::encoder::BrotliEncoder::encode (encoder.rs:39)
8 async_compression 0x558770c7ec25 async_compression::codec::brotli::encoder::BrotliEncoder::encode (encoder.rs:68)
9 async_compression 0x558770cefa53 [inlined] async_compression::stream::generic::encoder::Encoder<T>::poll_next::{{closure}} (encoder.rs:93)
10 async_compression 0x558770cefa53 async_compression::stream::generic::encoder::Encoder<T>::poll_next (encoder.rs:73)
11 async_compression 0x558770cecca6 [inlined] async_compression::stream::BrotliEncoder<T>::poll_next (encoder.rs:64)
12 futures_core 0x558770cecca6 [inlined] futures_core::stream::TryStream::try_poll_next (stream.rs:193)
13 futures_util 0x558770cecca6 [inlined] futures_util::stream::try_stream::into_stream::IntoStream<T>::poll_next (into_stream.rs:40)
14 futures_util 0x558770cecca6 [inlined] futures_util::stream::stream::map::Map<T>::poll_next (map.rs:61)
15 futures_util 0x558770cecca6 [inlined] futures_util::stream::try_stream::MapOk<T>::poll_next (lib.rs:118)
16 futures_core 0x558770cecca6 [inlined] futures_core::stream::TryStream::try_poll_next (stream.rs:193)
17 futures_util 0x558770cecca6 [inlined] futures_util::stream::try_stream::into_stream::IntoStream<T>::poll_next (into_stream.rs:40)
18 futures_util 0x558770cecca6 futures_util::stream::stream::map::Map<T>::poll_next (map.rs:61)
19 futures_util 0x558770db5419 futures_util::stream::try_stream::MapErr<T>::poll_next (lib.rs:118)
20 hyper 0x5587712b128a hyper::body::body::Body::poll_inner (body.rs:291)
21 hyper 0x5587712b15a7 [inlined] hyper::body::body::Body::poll_eof (body.rs:253)
22 hyper 0x5587712b15a7 hyper::body::body::Body::poll_data (body.rs:323)
23 fly_proxy 0x558770db3f54 fly_proxy::connect::body::LoadGuardedBody::poll_data (body.rs:168)
24 hyper 0x5587709357f5 hyper::proto::h2::PipeToSendStream<T>::poll (mod.rs:162)
25 hyper 0x558770adba20 [inlined] hyper::proto::h2::server::H2Stream<T>::poll2 (server.rs:419)
26 hyper 0x558770adba20 hyper::proto::h2::server::H2Stream<T>::poll (server.rs:437)
27 tokio 0x558770b3f589 [inlined] tokio::util::trace::Instrumented<T>::poll (trace.rs:26)
28 tokio 0x558770b3f589 [inlined] tokio::runtime::task::core::Core<T>::poll::{{closure}} (core.rs:173)
29 tokio 0x558770b3f589 tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut (unsafe_cell.rs:14)
30 tokio 0x558770b123f3 [inlined] tokio::runtime::task::core::Core<T>::poll (core.rs:158)
31 tokio 0x558770b123f3 [inlined] tokio::runtime::task::harness::Harness<T>::poll::{{closure}} (harness.rs:107)
32 core 0x558770b123f3 [inlined] core::ops::function::FnOnce::call_once (function.rs:232)
33 std 0x558770b123f3 std::panic::AssertUnwindSafe<T>::call_once (panic.rs:318)
34 std 0x5587708879df [inlined] std::panicking::try::do_call (panicking.rs:297)
35 std 0x5587708879df [inlined] std::panicking::try (panicking.rs:274)
36 std 0x5587708879df [inlined] std::panic::catch_unwind (panic.rs:394)
37 tokio 0x5587708879df tokio::runtime::task::harness::Harness<T>::poll (harness.rs:89)
38 tokio 0x558771310bf0 [inlined] tokio::runtime::task::raw::RawTask::poll (raw.rs:66)
39 tokio 0x558771310bf0 [inlined] tokio::runtime::task::Notified<T>::run (mod.rs:169)
40 tokio 0x558771310bf0 [inlined] tokio::runtime::thread_pool::worker::Context::run_task::{{closure}} (worker.rs:374)
41 tokio 0x558771310bf0 [inlined] tokio::coop::with_budget::{{closure}} (coop.rs:127)
42 std 0x558771310bf0 [inlined] std::thread::local::LocalKey<T>::try_with (local.rs:263)
43 std 0x558771310bf0 std::thread::local::LocalKey<T>::with (local.rs:239)
44 tokio 0x5587713315f2 [inlined] tokio::coop::with_budget (coop.rs:120)
45 tokio 0x5587713315f2 [inlined] tokio::coop::budget (coop.rs:96)
46 tokio 0x5587713315f2 tokio::runtime::thread_pool::worker::Context::run_task (worker.rs:352)
47 tokio 0x558771330fbf tokio::runtime::thread_pool::worker::Context::run (worker.rs:331)
48 tokio 0x55877131f733 [inlined] tokio::runtime::thread_pool::worker::run::{{closure}} (worker.rs:309)
49 tokio 0x55877131f733 tokio::macros::scoped_tls::ScopedKey<T>::set (scoped_tls.rs:63)
50 tokio 0x5587713307f6 tokio::runtime::thread_pool::worker::run (worker.rs:306)
51 tokio 0x5587713248d0 [inlined] tokio::runtime::thread_pool::worker::Launch::launch::{{closure}} (worker.rs:285)
52 tokio 0x5587713248d0 [inlined] tokio::runtime::blocking::task::BlockingTask<T>::poll (task.rs:41)
53 tokio 0x5587713248d0 [inlined] tokio::runtime::task::core::Core<T>::poll::{{closure}} (core.rs:173)
54 tokio 0x5587713248d0 tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut (unsafe_cell.rs:14)
55 tokio 0x5587713343f0 [inlined] tokio::runtime::task::core::Core<T>::poll (core.rs:158)
56 tokio 0x5587713343f0 [inlined] tokio::runtime::task::harness::Harness<T>::poll::{{closure}} (harness.rs:107)
57 core 0x5587713343f0 [inlined] core::ops::function::FnOnce::call_once (function.rs:232)
58 std 0x5587713343f0 std::panic::AssertUnwindSafe<T>::call_once (panic.rs:318)
59 std 0x558771323073 [inlined] std::panicking::try::do_call (panicking.rs:297)
60 std 0x558771323073 [inlined] std::panicking::try (panicking.rs:274)
61 std 0x558771323073 [inlined] std::panic::catch_unwind (panic.rs:394)
62 tokio 0x558771323073 tokio::runtime::task::harness::Harness<T>::poll (harness.rs:89)
63 tokio 0x558771313bc8 [inlined] tokio::runtime::task::raw::RawTask::poll (raw.rs:66)
64 tokio 0x558771313bc8 [inlined] tokio::runtime::task::Notified<T>::run (mod.rs:169)
65 tokio 0x558771313bc8 tokio::runtime::blocking::pool::Inner::run (pool.rs:230)
66 tokio 0x5587713199fd [inlined] tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}::{{closure}} (pool.rs:210)
67 tokio 0x5587713199fd tokio::runtime::context::enter (context.rs:72)
68 tokio 0x55877131c4a6 [inlined] tokio::runtime::handle::Handle::enter (handle.rs:76)
69 tokio 0x55877131c4a6 [inlined] tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}} (pool.rs:209)
70 std 0x55877131c4a6 std::sys_common::backtrace::__rust_begin_short_backtrace (backtrace.rs:130)
71 std 0x5587713266bb [inlined] std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} (mod.rs:475)
72 std 0x5587713266bb [inlined] std::panic::AssertUnwindSafe<T>::call_once (panic.rs:318)
73 std 0x5587713266bb [inlined] std::panicking::try::do_call (panicking.rs:297)
74 std 0x5587713266bb [inlined] std::panicking::try (panicking.rs:274)
75 std 0x5587713266bb [inlined] std::panic::catch_unwind (panic.rs:394)
76 std 0x5587713266bb [inlined] std::thread::Builder::spawn_unchecked::{{closure}} (mod.rs:474)
77 core 0x5587713266bb core::ops::function::FnOnce::call_once{{vtable.shim}} (function.rs:232)
78 alloc 0x558771433d4a [inlined] alloc::boxed::Box<T>::call_once (boxed.rs:1076)
79 alloc 0x558771433d4a [inlined] alloc::boxed::Box<T>::call_once (boxed.rs:1076)
80 std 0x558771433d4a std::sys::unix::thread::Thread::new::thread_start (thread.rs:87)
81 <unknown> 0x7f070b8dc6db start_thread
82 <unknown> 0x7f070b3eda3f __clone
83 <unknown> 0x0 <unknown>
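For reference, this class of panic is what Rust raises when a slice is indexed with a range start past its end, which is what the `RangeFrom`/`Range::index` frames above show. A minimal illustration (not the brotli code itself):

```rust
fn main() {
    let buf = vec![0u8; 8192];
    let start = 15827; // an offset computed past the end of the input buffer
    // Panics: "slice index starts at 15827 but ends at 8192" (older Rust)
    // or "range start index 15827 out of range for slice of length 8192" (newer Rust).
    let _ = &buf[start..];
}
```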
I don't have many details about the actual body that's causing these issues. We're running a reverse proxy with varying responses and workloads.
Looks most likely to be a bug in brotli; it appears to be reading past the end of the input buffer we pass it. By the looks of it, hyper is giving it 8 kB blocks as input, so I tried compressing both random and uniform data in 8 kB blocks, and neither reproduced the error (a sketch of that attempt follows below). So I think that to track this down we need either some actual erroring data, or a proper fuzzing setup and some luck; I gave the proptest tests a quick run configured to use up to 5 × 8 kB blocks, but didn't manage to produce any errors.
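Roughly the shape of that repro attempt, as a sketch; it assumes the 0.3-era `stream` module of async-compression shown in the backtrace, so the exact constructor and feature flags may differ depending on version:

```rust
use async_compression::stream::BrotliEncoder;
use bytes::Bytes;
use futures::stream::{self, StreamExt};
use rand::RngCore;

#[tokio::main]
async fn main() {
    // Five 8 kB blocks of random bytes, mimicking hyper's body chunks.
    let chunks: Vec<std::io::Result<Bytes>> = (0..5)
        .map(|_| {
            let mut block = vec![0u8; 8192];
            rand::thread_rng().fill_bytes(&mut block);
            Ok(Bytes::from(block))
        })
        .collect();

    let mut encoder = BrotliEncoder::new(stream::iter(chunks));

    // Drive the encoder to completion; the panic, when it reproduces,
    // fires inside brotli's compress_fragment during one of these polls.
    while let Some(chunk) = encoder.next().await {
        chunk.expect("brotli encoding failed");
    }
}
```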
Thanks for looking into it.
Hopefully it's just a fluke. I'll add some more metadata around the exception to try to figure out under what conditions this happens.
I noticed other panics have happened with different slice indexes:
- slice index starts at 24849 but ends at 16384
- slice index starts at 18629 but ends at 12288
This could be because we're using adaptive HTTP/2 window sizes? Otherwise I would expect to always see "ends at 8192" if it were using the default.
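For reference, this is roughly how adaptive flow control is turned on for a hyper server; a hypothetical sketch, not our actual proxy setup, and `http2_adaptive_window` availability depends on the hyper version:

```rust
use std::convert::Infallible;
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};

#[tokio::main]
async fn main() {
    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service_fn(|_req: Request<Body>| async {
            Ok::<_, Infallible>(Response::new(Body::from("hello")))
        }))
    });

    // With adaptive window enabled, h2 flow-control windows (and hence the
    // size of the body chunks fed into the encoder) vary per connection
    // rather than staying at a fixed default.
    Server::bind(&([127, 0, 0, 1], 3000).into())
        .http2_adaptive_window(true)
        .serve(make_svc)
        .await
        .unwrap();
}
```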
This is still a serious issue; I might have to disable Brotli entirely in my application. It also very much looks like the result of undefined behavior, as it can occur or not occur with the same file between builds.
Loading in file to cache: frontend/dist/fonts/Lato-Light.woff2
Dec 05 21:16:40.142 TRACE server::web::file_cache: Compressing with Brotli
thread 'tokio-runtime-worker' panicked at 'range start index 45058 out of range for slice of length 27840', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/brotli-3.3.2/src/enc/compress_fragment.rs:939:29
Dec 05 21:16:40.143 ERROR server: Internal Server Error: panic
And here is the file, if it can help; it's just a WOFF2 font file that I know has been successfully compressed before: Lato-Light.zip
Which file is affected seems random and rare, but once the problem presents itself it happens repeatably with that file, at least until a recompile with changes.
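For anyone trying to reproduce against the attached font, a minimal sketch; this is hypothetical, not my actual file_cache code, and it assumes async-compression's `futures-io` feature for the bufread API:

```rust
use async_compression::futures::bufread::BrotliEncoder;
use futures::io::{AsyncReadExt, Cursor};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Path from the log above; adjust to wherever the extracted font lives.
    let input = std::fs::read("frontend/dist/fonts/Lato-Light.woff2")?;
    let mut encoder = BrotliEncoder::new(Cursor::new(input));

    // The panic, when it reproduces, fires inside brotli's compress_fragment
    // while this read loop drives the encoder.
    let mut compressed = Vec::new();
    encoder.read_to_end(&mut compressed).await?;
    println!("compressed to {} bytes", compressed.len());
    Ok(())
}
```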