`ImageDecoder::read_image` should take `&mut [MaybeUninit<u8>]` as opposed to `&mut [u8]`
I would like to be able to point `read_image` directly at CPU-accessible GPU memory in order to avoid a copy through intermediate memory. However, casting from `*mut c_void` to `&[u8]` is technically unsound according to the rules of `from_raw_parts`, since the memory contents are not necessarily defined.

This is a bit pedantic, but I would feel more comfortable working with `ImageDecoder::read_image(self, &mut [MaybeUninit<u8>])`, because it explicitly documents that the implementation will overwrite the contents of the buffer and that any prior content is never read.
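To make the problem concrete, here is a minimal sketch of the caller's side; the `MappedBuffer` type is a stand-in for whatever a graphics API hands back when mapping a buffer, not anything in this crate:

```rust
use std::ffi::c_void;
use std::mem::MaybeUninit;

/// Hypothetical handle to CPU-accessible GPU memory: a raw pointer plus a
/// length in bytes, e.g. obtained from a graphics API's buffer-mapping call.
struct MappedBuffer {
    ptr: *mut c_void,
    len: usize,
}

impl MappedBuffer {
    /// Unsound if the memory is uninitialized: `&mut [u8]` asserts that every
    /// byte holds a defined value, which `from_raw_parts_mut` requires.
    unsafe fn as_bytes_mut(&mut self) -> &mut [u8] {
        std::slice::from_raw_parts_mut(self.ptr as *mut u8, self.len)
    }

    /// Sound regardless of the memory's contents: `MaybeUninit<u8>` makes no
    /// claim about the bytes, so the undefined prior contents are never
    /// exposed as initialized `u8`s.
    unsafe fn as_uninit_mut(&mut self) -> &mut [MaybeUninit<u8>] {
        std::slice::from_raw_parts_mut(self.ptr.cast::<MaybeUninit<u8>>(), self.len)
    }
}
```

With the proposed signature, only the second view would be needed.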
`io::Write` will typically involve a copy at the caller's site, because you need to call `write_all` with a slice to copy from. Writing byte by byte is infeasible due to the expensive `io::Result`.

`MaybeUninit` could work, but ideally it should use some buffer abstraction that has a safe API for appending and reports back how much has been initialized.
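A rough sketch of the kind of buffer abstraction I mean, loosely inspired by the still-unstable `std::io::BorrowedBuf` design; every name here is hypothetical:

```rust
use std::mem::MaybeUninit;

/// Hypothetical output buffer: wraps possibly-uninitialized storage and
/// tracks how many leading bytes have been initialized so far.
pub struct OutputBuf<'a> {
    storage: &'a mut [MaybeUninit<u8>],
    filled: usize,
}

impl<'a> OutputBuf<'a> {
    pub fn new(storage: &'a mut [MaybeUninit<u8>]) -> Self {
        Self { storage, filled: 0 }
    }

    /// Safe append: writes into the uninitialized tail without ever reading it.
    /// Panics if `data` does not fit, just as slice indexing would.
    pub fn append(&mut self, data: &[u8]) {
        let dst = &mut self.storage[self.filled..self.filled + data.len()];
        for (d, s) in dst.iter_mut().zip(data) {
            d.write(*s);
        }
        self.filled += data.len();
    }

    /// Reports back how much of the buffer has been initialized.
    pub fn filled(&self) -> &[u8] {
        // SAFETY: the first `filled` bytes were written through `append`.
        unsafe { std::slice::from_raw_parts(self.storage.as_ptr().cast::<u8>(), self.filled) }
    }
}
```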
I didn't think through `io::Write`, it seems. Removing that.
The buffer passed to the `read_image` function isn't necessarily write-only. Decoding certain image formats requires reading previously processed pixels. Some decoders decode into scratch buffers and then copy into the output buffer, but we make no guarantee of that. In fact, we're actually trying to move away from that pattern and have decoders write directly into the output buffer where possible.
> Decoding certain image formats requires reading previously processed pixels

Is this only for animation formats that store diffs between frames?
> In fact, we're actually trying to move away from that pattern and have decoders write directly into the output buffer where possible.

That makes perfect sense, thank you. Documenting that working with the bytes in a valid manner remains the responsibility of the implementation would still be helpful, I think.
I've updated the original issue.
> > Decoding certain image formats requires reading previously processed pixels
>
> Is this only for animation formats that store diffs between frames?

No, this also applies to other formats. Run-length encoding involves taking a prior pixel and repeating it some number of times. PNG's "up" filter encodes pixels as the numeric difference from the pixel on the previous row. In WebP, you can say that a pixel matches the one n pixels up and m pixels to the left. These can be (and often are) implemented using scratch buffers, but we don't want to require that.
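As a concrete illustration, undoing PNG's "up" filter in place means reading bytes that the decoder itself wrote one row earlier. A simplified sketch, assuming one byte per pixel and skipping all real PNG parsing:

```rust
/// Undo PNG's "up" filter for one row, in place, assuming one byte per pixel.
/// Each reconstructed byte is the filtered byte plus the byte directly above
/// it, so the decoder must read the previously decoded row from the same
/// output buffer it is writing into.
fn unfilter_up(buf: &mut [u8], width: usize, row: usize) {
    if row == 0 {
        return; // the first row has no row above it
    }
    let (prev, cur) = buf.split_at_mut(row * width);
    let prev_row = &prev[(row - 1) * width..];
    for (c, p) in cur[..width].iter_mut().zip(prev_row) {
        *c = c.wrapping_add(*p);
    }
}
```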
As far as promises about whether the decoders will read from the provided buffer, we're also subject to the (lack of) guarantees made by the `Read` trait:

> No guarantees are provided about the contents of buf when this function is called, so implementations cannot rely on any property of the contents of buf being true. It is recommended that implementations only write data to buf instead of reading its contents.
>
> Correspondingly, however, callers of this method in unsafe code must not assume any guarantees about how the implementation uses buf. The trait is safe to implement, so it is possible that the code that's supposed to write to the buffer might also read from it. It is your responsibility to make sure that buf is initialized before calling read. Calling read with an uninitialized buf (of the kind one obtains via MaybeUninit<T>) is not safe, and can lead to undefined behavior.
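Which means that a caller who only has uninitialized memory has to initialize it before handing it to anything backed by `Read`. A sketch of the defensive pattern this implies, not something the crate provides:

```rust
use std::io::Read;
use std::mem::MaybeUninit;

/// Defensive pattern: zero-fill the buffer before letting a `Read`
/// implementation see it, since `read` is allowed to inspect its contents.
fn read_into_uninit<R: Read>(
    reader: &mut R,
    buf: &mut [MaybeUninit<u8>],
) -> std::io::Result<usize> {
    // Initialize every byte so that handing out `&mut [u8]` is sound.
    for b in buf.iter_mut() {
        b.write(0);
    }
    // SAFETY: every byte was just initialized above.
    let init: &mut [u8] =
        unsafe { std::slice::from_raw_parts_mut(buf.as_mut_ptr().cast::<u8>(), buf.len()) };
    reader.read(init)
}
```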