Alistair Miles
Hi @Carreau, The [subsection on storage keys](https://zarr-specs.readthedocs.io/en/core-protocol-v3.0-dev/protocol/core/v3.0.html#storage-keys) describes how keys are constructed from node paths. It is then up to a store spec to decide how to use these keys...
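A minimal sketch of how a node-path-to-key mapping might look. The `meta/` and `data/` prefixes, the `.array.json` suffix, and the `c`-prefixed chunk coordinates are assumptions for illustration, drawn from the v3 dev draft's conventions, not something the spec has necessarily fixed:

```python
# Hypothetical sketch: deriving storage keys from a node path.
# Prefixes and suffixes here are illustrative assumptions.

def meta_key(node_path: str) -> str:
    """Map an array node path like "/foo/bar" to its metadata key."""
    return "meta/root" + node_path + ".array.json"

def chunk_key(node_path: str, chunk_coords) -> str:
    """Map a node path plus chunk coordinates to a data key."""
    coords = "/".join(str(c) for c in chunk_coords)
    return "data/root" + node_path + "/c" + coords
```

A store spec then decides how these keys map onto its backend (file paths, object names, database rows, etc.), which is where questions like case sensitivity come in.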
Thanks @Carreau, very helpful.

> If the choice were mine, I would probably impose case sensitivity for a store spec.

FWIW I think this is the most natural thing to...
Thanks @DennisHeimbigner. Just to add that, on the question of whether to pack everything into attributes (.zattrs) or whether to store metadata separately under other store-level keys (.zdims, .ztypdefs, etc.),...
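To make the two layouts concrete, here is a rough sketch of what each might look like. The key names follow the comment above; the contents (a hypothetical `_nczarr_dims` attribute and its values) are invented purely for illustration:

```python
# Option 1: pack NetCDF-specific metadata into the attributes
# document (.zattrs), alongside ordinary user attributes.
packed = {
    ".zattrs": {
        "_nczarr_dims": {"lat": 180, "lon": 360},  # hypothetical key
        "units": "degrees_north",
    },
}

# Option 2: store it separately under dedicated store-level keys.
separate = {
    ".zattrs": {"units": "degrees_north"},
    ".zdims": {"lat": 180, "lon": 360},
}
```

The trade-off is roughly: option 1 keeps everything visible to existing zarr clients, while option 2 keeps user attributes clean but requires stores to handle extra key names.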
> > We have learned from the existing netcdf-4 that datasets exist with very large (~14gb) metadata.

Wow, that's big. I think anything near that size will be...
> That was a typo. The correct size is 14 mb.

Ah, OK! Although 14 MB is still pretty big, it's probably not unmanageable.
> > Depends on what manageable means, I suppose. We have situations where projects are trying to load a small part of the metadata from thousands of files...
Hi @DennisHeimbigner, Regarding the fill value specifically, the [standard metadata for a zarr array](https://zarr.readthedocs.io/en/stable/spec/v2.html#metadata) includes a `fill_value` key. There are also rules about how to [encode fill values](https://zarr.readthedocs.io/en/stable/spec/v2.html#fill-value-encoding) to deal...
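For reference, here is an example of a v2 `.zarray` metadata document showing the `fill_value` key in context; the shape, chunking and dtype values are illustrative:

```python
import json

# Example Zarr v2 array metadata (.zarray) with a fill_value key.
zarray = {
    "zarr_format": 2,
    "shape": [100, 100],
    "chunks": [10, 10],
    "dtype": "<f8",
    "compressor": None,
    "fill_value": 0.0,
    "order": "C",
    "filters": None,
}

# Metadata is stored as JSON, so fill_value must be JSON-encodable.
doc = json.dumps(zarray)
```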
> If an array has a fixed length byte string data type (e.g., "|S12"), or a structured data type, and if the fill value is not null, then the fill...
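The rule quoted above (Base64-encoding the fill value into the JSON metadata) could be sketched roughly like this; the helper names are mine, not the spec's:

```python
import base64

# Sketch of the v2 fill-value encoding rule for fixed-length byte
# string dtypes (e.g. "|S12"): the raw bytes are Base64-encoded into
# an ASCII string for the JSON document, and decoded back on read.

def encode_fill_value(raw: bytes) -> str:
    """Encode raw fill-value bytes as an ASCII Base64 string."""
    return base64.standard_b64encode(raw).decode("ascii")

def decode_fill_value(encoded: str, itemsize: int) -> bytes:
    """Decode, then pad/truncate to the dtype's item size (12 for |S12)."""
    raw = base64.standard_b64decode(encoded)
    return raw.ljust(itemsize, b"\x00")[:itemsize]
```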
> So it would be nice if we had a defined language-independent algorithm that defines how to construct the fill value for all possible struct types (including recursion...
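One candidate rule, sketched very loosely: construct the default fill recursively, zero-filling each leaf field and concatenating fills for struct fields. The type representation here (nested lists of `(name, typedef)` pairs with integer byte sizes at the leaves) is invented for illustration and is not any spec's notation:

```python
# Hypothetical recursive default-fill construction for struct types.

def default_fill(typedef) -> bytes:
    # A leaf is an integer byte size; zero-fill it.
    if isinstance(typedef, int):
        return b"\x00" * typedef
    # A struct is a list of (name, typedef) fields; concatenate the
    # fills of its fields, recursing into nested structs.
    return b"".join(default_fill(t) for _, t in typedef)

point = [("x", 8), ("y", 8)]            # two 8-byte fields
nested = [("p", point), ("tag", 4)]     # struct containing a struct
```

The recursion bottoms out at fixed-size leaves, so the rule is well-defined for arbitrarily nested structs, which is the property the comment asks for.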
Surfacing here [notes on the NetCDF NCZarr implementation](https://drive.google.com/file/d/1UUGcQMpWqKllMdRFCu97CoL7fB_GWXvg/view), thanks @DennisHeimbigner for sharing.