bson-rust
Should be able to read from oversized slice or reader
Versions/Environment
- What version of Rust are you using? 1.80.1
- What operating system are you using? Windows
- What versions of the driver and its dependencies are you using? (Run cargo pkgid mongodb && cargo pkgid bson) registry+https://github.com/rust-lang/crates.io-index#[email protected]
- What version of MongoDB are you using? (Check with the MongoDB shell using db.version()) None
- What is your MongoDB topology (standalone, replica set, sharded cluster, serverless)? None
Describe the bug
Because BSON describes its own length, the from_slice and from_reader functions should be able to deserialize from a reader or slice that is larger than the BSON itself.
BE SPECIFIC:
- What is the expected behavior and what is actually happening? Expected deserialization to consider the length encoded within the BSON itself.
- Do you have any particular output that demonstrates this problem? Using to_vec followed by from_slice to serialize then deserialize a struct
- Do you have any ideas on why this may be happening that could give us a clue in the right direction?
- Did this issue arise out of nowhere, or after an update (of the driver, server, and/or Rust)? No
- Are there multiple ways of triggering this bug (perhaps more than one function produce a crash)?
- If you know how to reproduce this bug, please include a code snippet here:
use bson;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct Foo {
    a: i32,
    b: Bar,
}

#[derive(Serialize, Deserialize, Debug)]
struct Bar {
    x: u32,
    y: String,
}

fn main() {
    let data = Foo {
        a: 42,
        b: Bar {
            x: 1,
            y: "hello".to_string(),
        },
    };
    let bson_bytes = bson::to_vec(&data).unwrap();
    // Copy the serialized document into a larger, zero-padded buffer.
    let mut bigger_buffer: Vec<u8> = vec![0; 1000];
    bigger_buffer[..bson_bytes.len()].copy_from_slice(&bson_bytes);
    // Panics: from_slice rejects the buffer because its length exceeds
    // the document length encoded in the BSON header.
    println!("{:?}", bson::from_slice::<Foo>(bigger_buffer.as_slice()).unwrap());
}
Hi! I've filed RUST-2115 for the team to discuss this; feel free to follow that for updates.
Just an update - we don't see this as high priority to implement, since there's a clear workaround (subslicing) and validating that the buffer length matches the BSON length is useful both for internal parsing and potentially for external users. Can you give more details on your use case?
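For reference, the subslicing workaround can use the length that BSON encodes in its first four bytes (a little-endian i32 covering the whole document, including the prefix and the trailing NUL). Below is a minimal sketch using only the standard library; bson_prefix is a hypothetical helper name, not part of the bson crate, and the resulting slice would then be passed to bson::from_slice.

```rust
// Hypothetical helper: extract the BSON document at the start of an
// oversized buffer by reading the little-endian i32 length prefix.
fn bson_prefix(buf: &[u8]) -> Option<&[u8]> {
    let len_bytes: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = i32::from_le_bytes(len_bytes);
    // Reject nonsensical lengths: the smallest valid document
    // (empty, just prefix + terminating NUL) is 5 bytes.
    if len < 5 || len as usize > buf.len() {
        return None;
    }
    Some(&buf[..len as usize])
}

fn main() {
    // An empty BSON document (length 5, terminating NUL), zero-padded.
    let mut oversized = vec![0u8; 100];
    oversized[..5].copy_from_slice(&[5, 0, 0, 0, 0]);
    let doc = bson_prefix(&oversized).unwrap();
    assert_eq!(doc.len(), 5);
}
```

With a helper like this, the repro above would work by deserializing from bson_prefix(&bigger_buffer) instead of the whole padded buffer.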