image-png
Document BitDepth::Sixteen encoding
How is the data to be laid out in the data argument fed to write_image_data() when BitDepth::Sixteen is set? The argument is always an [u8], since this function has no variants. My data is obviously [u16] for the BitDepth::Sixteen case.
When I do a raw pointer typecast of my [u16] array to [u8], the image I get has strange colors. Could be endianness; I'm on macOS.
Can you document how to use this when the output image is 16 bit/channel? At minimum a note in the docs, but preferably a code snippet for an RGBA 16-bit image.
I'm not even convinced it is actually able to write 16-bit RGB images, so I don't think this is just about documentation.
Well, I switched to exr-rs a long time ago, so this is not relevant for me any more.
But in general, adding some documentation about the meaning of BitDepth::Sixteen in the context of your crate is probably the least you could do.
If only to prevent users like me, who need to write scene-referred color samples out or need 16bit support for other reasons, from trying to use this crate for that.
It looks to me like the data passed to write_image_data is the raw PNG bytestream after scanline serialization but before filtering, compression, and chunking (all of which are done for you by write_image_data). That means any questions about the format of data can be answered by recourse to the PNG spec, in particular 7.2 Scanlines. For bit depth 16 we can read:
PNG images that are not indexed-colour images may have sample values with a bit depth of 16. Such sample values are in network byte order (MSB first, LSB second).
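Concretely (my reading of the spec and the encoder, not official documentation): for a 16-bit RGBA image, data is every scanline concatenated, with each u16 sample split into two bytes, most significant byte first, and no filter bytes, since filtering happens inside write_image_data.

// Hypothetical 2x1 RGBA image with 16-bit samples, laid out as the spec describes.
let (width, height) = (2u32, 1u32);
let samples: [u16; 8] = [
    0xFFFF, 0x0000, 0x0000, 0xFFFF, // pixel 0: opaque red
    0x0000, 0xFFFF, 0x0000, 0x8000, // pixel 1: half-transparent green
];
// Serialize every sample MSB first (network byte order).
let data: Vec<u8> = samples.iter().flat_map(|s| s.to_be_bytes()).collect();
assert_eq!(data.len(), (width * height * 4 * 2) as usize);
assert_eq!(&data[..4], &[0xFF, 0xFF, 0x00, 0x00][..]); // first two samples, MSB first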
Maybe something like this would be good.
diff --git a/src/encoder.rs b/src/encoder.rs
index b158ca1..0feb163 100644
--- a/src/encoder.rs
+++ b/src/encoder.rs
@@ -319,6 +319,10 @@ impl<W: Write> Writer<W> {
     }
     /// Writes the image data.
+    ///
+    /// `data` contains the serialized scanlines (before filtering is applied).
+    /// See [Scanlines](https://www.w3.org/TR/PNG/#7Scanline) in the PNG spec
+    /// for details.
     pub fn write_image_data(&mut self, data: &[u8]) -> Result<()> {
         const MAX_CHUNK_LEN: u32 = (1u32 << 31) - 1;
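For completeness, here is a rough sketch of writing a whole 16-bit RGBA file this way. It assumes a recent release of the crate where the color type variant is spelled ColorType::Rgba (older versions spell it RGBA); treat it as an illustration rather than the crate's documented example.

use std::fs::File;
use std::io::BufWriter;

fn write_rgba16(path: &str, width: u32, height: u32, samples: &[u16])
    -> Result<(), Box<dyn std::error::Error>> {
    // Expect width * height pixels with 4 u16 samples (R, G, B, A) each.
    assert_eq!(samples.len(), (width * height * 4) as usize);

    let file = BufWriter::new(File::create(path)?);
    let mut encoder = png::Encoder::new(file, width, height);
    encoder.set_color(png::ColorType::Rgba);
    encoder.set_depth(png::BitDepth::Sixteen);
    let mut writer = encoder.write_header()?;

    // Serialize every u16 sample MSB first, as the spec quoted above requires.
    let data: Vec<u8> = samples.iter().flat_map(|s| s.to_be_bytes()).collect();
    writer.write_image_data(&data)?;
    Ok(())
}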
Yeah, I since had to use PNG for another crate. Here is the interesting part.
(@virtualritz .to_be() is a no-op on BE machines, so you can replace that whole selection with v.to_be().)
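In case that is not obvious, a tiny standard-library illustration of why no per-platform byte-order handling is needed:

let v: u16 = 0x1234;
// to_be() reorders the value's bytes to big-endian; on a big-endian target it
// is the identity, so the same line works everywhere.
assert_eq!(v.to_be().to_ne_bytes(), [0x12, 0x34]);
// to_be_bytes() expresses the same thing in one call.
assert_eq!(v.to_be_bytes(), [0x12, 0x34]);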
@scurest please feel free to open a PR with that change. It might even be worth specifically saying that 16-bit encoding assumes big endian.
I was getting this output from my program which was supposed to generate an HSL color spectrum:
Now, switching the endianness to big endian fixed that problem:
Thank you epicly @scurest for your .to_be() comment :D
For those who were stuck like me: just take your Vec<u16> of pixels and convert it to a Vec<u8> like this

// to_be_bytes() emits each u16 as [MSB, LSB], which is the order PNG expects.
let u8_vec: Vec<u8> = u16_vec.iter().flat_map(|&x| x.to_be_bytes()).collect();