zarr-specs
Add zstd codec
@normanrz Please take a look.
Agreed that this should be separate from ZEP 1. I should update this document to not mention ZEP 1. But perhaps there should be a lighter weight process --- it would be nice to just get a vote on this PR directly, rather than writing up a separate ZEP document, etc.
There are going to be a lot of codecs added, and it would be nice to make that process relatively easy while still getting appropriate feedback.
> But perhaps there should be a lighter weight process --- it would be nice to just get a vote on this PR directly, rather than writing up a separate ZEP document, etc.
That would indeed be nice, but maybe we could have lean ZEPs, with less boilerplate and just pointing to an MR?
@MSanKeys963, what's your take on this?
Is there any chance we could change the default compression level to match ZSTD_CLEVEL_DEFAULT (3)?
https://github.com/facebook/zstd/blob/97291fc5020a8994019ab76cf0cda83a9824374c/lib/zstd.h#L129
This proposal doesn't indicate a default compression level if level is unspecified --- the level parameter is intended to be required, though I should clarify that.
However, when creating an array, implementations are free to provide a mechanism for the user to leave the level unspecified; in that case the implementation must choose a default value.
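For illustration, a metadata entry with both options spelled out explicitly might look like the following (a sketch only; the exact key names are whatever this PR ends up specifying):

```json
{
  "name": "zstd",
  "configuration": {
    "level": 3,
    "checksum": true
  }
}
```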
The only note on default is that level 0 means the default level --- that follows the zstd C API.
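To make the level-0 sentinel concrete, here is a small Python sketch of the C API convention (`ZSTD_CLEVEL_DEFAULT` is 3 per the `zstd.h` line linked above; `resolve_level` is a hypothetical helper name, not part of any library):

```python
ZSTD_CLEVEL_DEFAULT = 3  # value of ZSTD_CLEVEL_DEFAULT in zstd.h (linked above)

def resolve_level(level: int) -> int:
    # In the zstd C API, level 0 is a sentinel meaning "use the default
    # level"; all other values pass through unchanged (negative values
    # select the fast compression levels).
    return ZSTD_CLEVEL_DEFAULT if level == 0 else level

print(resolve_level(0))   # 3
print(resolve_level(-1))  # -1
```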
If the array is intended to be used for writing, then I don't see much value in allowing the configuration options to be unspecified; that just leaves room for implementation variance. However, if someone is creating zarr metadata to match some existing chunked data that is known to be zstd compressed (but whose level may not even be known), and which is intended to be read-only, then the level is indeed not needed, and it is a bit unfortunate that a fake value would have to be specified. Still, I don't think we really need to optimize for this case.
Just to be clear, numcodecs.zstd.Zstd does document that the compression level defaults to 1 and sets DEFAULT_CLEVEL to 1. Defaulting to the C header value would be a change from the current practice in Python's numcodecs.
https://numcodecs.readthedocs.io/en/stable/zstd.html#numcodecs.zstd.Zstd https://github.com/zarr-developers/numcodecs/blob/79a0e933d78a51f32237b29fc375d0ee27aace37/numcodecs/zstd.pyx#L54
Furthermore, Python's numcodecs seems to disallow negative compression levels and interprets any nonpositive compression level as 1.
https://github.com/zarr-developers/numcodecs/blob/79a0e933d78a51f32237b29fc375d0ee27aace37/numcodecs/zstd.pyx#L83-L84
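A Python sketch of the clamping behavior in the linked `numcodecs/zstd.pyx` lines (`effective_level` is a hypothetical name for illustration, not a numcodecs function):

```python
DEFAULT_CLEVEL = 1  # numcodecs' documented default compression level

def effective_level(level: int) -> int:
    # Mirrors the linked `if level <= 0: level = DEFAULT_CLEVEL` logic:
    # any nonpositive level is replaced by 1 before calling libzstd,
    # so zstd's negative (fast) levels are currently unreachable.
    return DEFAULT_CLEVEL if level <= 0 else level

print(effective_level(-5))  # 1
print(effective_level(7))   # 7
```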
Default values are only really needed to support evolving the spec. Default values in the initial version of the spec basically just serve to save space in the stored metadata, which I think is pretty negligible in this case.
The zarr-python v3 implementation is still free to choose whichever parameters it likes if the user passes in zarr.codecs.Zstd() for example. Any defaults indicated in the spec would only apply when reading the stored metadata.
Supporting negative levels should be a trivial fix.
I think the fixes are trivial. The question is would it be considered breaking to change the default compression level?
It's not even clear that we need to include the compression level or the checksum flag in the codec configuration since that information is stored by the codec itself. The compression level is perhaps only really useful if someone is trying to add new compressed chunks to the dataset or if they are trying to update chunks. In that case we only need the information to be consistent with the rest of the dataset.
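As a concrete illustration of "stored by the codec itself": the checksum flag is recorded in each zstd frame's header, so a reader can recover it without consulting the codec configuration. Below is a self-contained sketch that reads the Content_Checksum_flag bit from the Frame_Header_Descriptor byte (per RFC 8878; `frame_has_checksum` is a hypothetical helper, and the example frame is hand-constructed rather than produced by a real compressor):

```python
import struct

ZSTD_MAGIC = 0xFD2FB528  # little-endian magic number of a zstd frame

def frame_has_checksum(frame: bytes) -> bool:
    # A zstd frame starts with a 4-byte little-endian magic number,
    # followed by the Frame_Header_Descriptor byte.
    magic, descriptor = struct.unpack_from("<IB", frame, 0)
    if magic != ZSTD_MAGIC:
        raise ValueError("not a zstd frame")
    # Bit 2 of the descriptor is the Content_Checksum_flag (RFC 8878).
    return bool(descriptor & 0b100)

# Hand-built header: descriptor 0x24 has the checksum bit set.
frame = struct.pack("<IB", ZSTD_MAGIC, 0x24)
print(frame_has_checksum(frame))  # True
```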
> I think the fixes are trivial. The question is would it be considered breaking to change the default compression level?
I don't think the spec for the zstd codec should have default values. All options should be mandatory to set. As @jbms outlined, implementations can choose default values. I'll change the default level to 3 for zarr-python.
> It's not even clear that we need to include the compression level or the checksum flag in the codec configuration since that information is stored by the codec itself. The compression level is perhaps only really useful if someone is trying to add new compressed chunks to the dataset or if they are trying to update chunks. In that case we only need the information to be consistent with the rest of the dataset.
The gzip and blosc codec also have mandatory level options (among others) for the reason you mentioned. I think the zstd codec should be consistent with that.
We could permit the parameters to be left unspecified in the metadata if only read operations are performed, and fail if any write operations are performed. However, it is not clear that the added complexity is worth it --- it really just saves having to put in some arbitrary values if you don't know the level and only care about reading.
Is the checksum parameter really necessary here? I went through the list of implementations in different languages, and it seems the large majority do not support adding a checksum to the compressed output. Wouldn't this limit how many Zarr implementations can support this codec's spec?
I know of zstd libraries for Python, C/C++, Java, Javascript and Rust that support the checksum flag. Which languages are you referring to?
> I know of zstd libraries for Python, C/C++, Java, Javascript and Rust that support the checksum flag. Which languages are you referring to?
One example is OCaml: neither of the two available libraries supports this flag. Looking at the list of implementations at the official Zstandard site, the other languages that don't support it include PHP, Fortran, Swift, Ruby, R, Perl, Common Lisp, Ada, Haskell, Julia, Racket, Nim, and Elixir.
In Julia, the checksum flag is available through the low-level interface: https://github.com/JuliaIO/CodecZstd.jl/blob/master/src%2FLibZstd_clang.jl#L154
@nhz2, perhaps we should discuss how to expose the entire parameter surface.