Potentially unbounded memory overhead using `BodyChunkIpld`
Basic usage of one of our most venerable constructs, `BodyChunkIpld`, will lead to unacceptable levels of memory overhead when reading large files. Currently, both `BodyChunkIpld::store_bytes` and `BodyChunkIpld::load_all_bytes` may allocate memory proportional to the size of the file being processed.
In the case of `BodyChunkIpld::store_bytes`: in order to efficiently encode and store bytes from a file as `BodyChunkIpld` pages, we would need to read the file back-to-front. This introduces some complexity to how we compute chunk cut points (using a Rust implementation of FastCDC). We contributed a change to our dependency to enable async streaming support in the chunker, which gets us halfway to where we need to be. However, in order to make use of it we will most likely have to feed bytes into the chunker in reverse (since `BodyChunkIpld` pages have to be hashed back to front).
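To illustrate why encoding has to proceed back to front, here is a minimal std-only sketch. The `Page` struct, `digest` function, and use of `DefaultHasher` are stand-ins for illustration (the real pages are content-addressed with CIDs, not `u64` hashes): because each page commits to the hash of the page that follows it, the final chunk must be hashed first.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical stand-in for a content address; real code would use a CID.
fn digest(bytes: &[u8], next: Option<u64>) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    next.hash(&mut h);
    h.finish()
}

// Simplified page: a payload plus a link (by hash) to the following page.
struct Page {
    bytes: Vec<u8>,
    next: Option<u64>,
    hash: u64,
}

// Each page's hash depends on its successor's hash, so we must walk the
// chunks in reverse: the last page is hashed first, the first page last.
fn encode_pages(chunks: &[&[u8]]) -> Vec<Page> {
    let mut pages: Vec<Page> = Vec::with_capacity(chunks.len());
    let mut next: Option<u64> = None;
    for chunk in chunks.iter().rev() {
        let hash = digest(chunk, next);
        pages.push(Page { bytes: chunk.to_vec(), next, hash });
        next = Some(hash);
    }
    pages.reverse(); // restore front-to-back order for the caller
    pages
}

fn main() {
    let pages = encode_pages(&[b"hello ".as_slice(), b"world".as_slice()]);
    // The first page links forward to the second by hash; the last links nowhere.
    assert_eq!(pages[0].next, Some(pages[1].hash));
    assert_eq!(pages[1].next, None);
    println!("ok");
}
```

This is why streaming support in the chunker only gets us halfway: even with cut points computed incrementally, the bytes still have to reach the hashing step in reverse order.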
In the case of `BodyChunkIpld::load_all_bytes`, the solution is simpler: we should deprecate the method and instead prefer one that yields pages of bytes as an async stream (e.g., `BodyChunkIpld::stream`, or something to that effect).
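The shape of that replacement might look like the following sketch. The `BodyChunks` type and `stream` method name are assumptions, and a plain `Iterator` stands in for what would really be an async `Stream`; the point is that the caller holds only one page in memory at a time instead of a file-sized buffer.

```rust
// Minimal, synchronous sketch of a page-streaming API. The real method
// would yield pages as an async Stream backed by storage reads; a plain
// Iterator over in-memory pages stands in here for illustration.
struct BodyChunks {
    pages: Vec<Vec<u8>>, // stand-in for pages fetched lazily from storage
}

impl BodyChunks {
    // Hypothetical replacement for load_all_bytes: yield one page at a
    // time rather than concatenating every page into a single allocation.
    fn stream(&self) -> impl Iterator<Item = &[u8]> + '_ {
        self.pages.iter().map(|page| page.as_slice())
    }
}

fn main() {
    let body = BodyChunks {
        pages: vec![b"hello ".to_vec(), b"world".to_vec()],
    };
    // Peak memory is bounded by the page size, not the file size: each
    // page is processed and then dropped before the next is yielded.
    let mut total = 0;
    for page in body.stream() {
        total += page.len();
    }
    assert_eq!(total, 11);
    println!("ok");
}
```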