Tobias Kölling
As always with guessing, there are multiple options for how you might want to do this and which conventions should be followed, so when rolling this guesswork...
We've been working a bit more on our `gribscan`, which is now also available at [gribscan/gribscan](https://github.com/gribscan/gribscan). It's still very fragile and deliberately doesn't care about being compatible with the output of...
:-) yes, we call the customization points a [Magician](https://github.com/gribscan/gribscan/blob/main/magician.md) because that's the part where users have to put their guesswork about how to assemble datasets in order to "magically" stuff the grib...
Currently it works for some GRIBs, but it is not really stable yet, and we need to gain more experience... Thus we thought it might need a little time before we...
Probably it would be possible to stuff some of the magicians into something like the `coo_map`... I'll have to think more about that.
Initially we had a design which built one dataset per GRIB file and then put all of them into MultiZarrToZarr. We moved away from that design because we needed something which...
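To illustrate the one-dataset-per-file idea, here's a minimal sketch of merging per-file reference sets along a new leading time axis. The `"var/<chunk-index>" -> [url, offset, size]` shape follows kerchunk's reference conventions, but the function name, inputs, and file names are made up for illustration:

```python
def merge_along_time(per_file_refs):
    """Hypothetical sketch: each input dict maps "var/<chunk-index>" to a
    kerchunk-style [url, offset, size] triple for one GRIB file; the output
    prefixes a time index, so file i contributes "var/<i>.<chunk-index>"."""
    combined = {}
    for t, refs in enumerate(per_file_refs):
        for key, ref in refs.items():
            var, _, chunk = key.partition("/")
            combined[f"{var}/{t}.{chunk}"] = ref
    return combined

# two single-timestep files, one variable with one spatial chunk each
f0 = {"ta/0.0": ["file0.grib", 120, 5000]}
f1 = {"ta/0.0": ["file1.grib", 120, 5000]}
merged = merge_along_time([f0, f1])
# merged == {"ta/0.0.0": ["file0.grib", 120, 5000],
#            "ta/1.0.0": ["file1.grib", 120, 5000]}
```

MultiZarrToZarr does the real version of this (coordinate handling included); the sketch only shows why per-file datasets compose naturally along a new dimension.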
> Before all that, though, I am surprised that you might need to do this. Can you describe further the structure of your netCDF file? Actually, it all sounds rather...
Yes, this only works with uncompressed stuff (which luckily it is in my case). Calling numcodecs sounds like a bit of overkill for kerchunk... But maybe it isn't? Probably...
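For context, a hypothetical sketch of what "calling a codec" would mean for a compressed chunk: the whole chunk has to be fetched and run through the codec before any values are usable. Stdlib `zlib` stands in for a numcodecs codec here so the example stays self-contained:

```python
import struct
import zlib

# one "chunk" of four float64 values, as it would sit compressed in a file
values = struct.pack("<4d", 1.0, 2.0, 3.0, 4.0)
chunk_on_disk = zlib.compress(values)

# a ranged read into the compressed bytes is useless on its own:
# the whole chunk must be decoded first
decoded = zlib.decompress(chunk_on_disk)
assert struct.unpack("<4d", decoded) == (1.0, 2.0, 3.0, 4.0)
```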
Yes, we'll definitely want to have the sizes of the pieces to be concatenated (for partial loads). For uncompressed things, this would be the size of the blocks themselves. In...
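A minimal sketch of why the sizes matter for partial loads of uncompressed data: with an `(offset, size)` pair per block, one chunk can be fetched with a plain seek-and-read (or an HTTP range request) without touching the rest of the file. The file layout and helper below are invented for illustration:

```python
import os
import struct
import tempfile

def read_chunk(path, offset, size):
    """Fetch exactly one chunk via seek + read."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

# build a toy uncompressed file: 4-byte header + two float64 blocks
block0 = struct.pack("<4d", 1.0, 2.0, 3.0, 4.0)
block1 = struct.pack("<4d", 5.0, 6.0, 7.0, 8.0)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"HDR!")   # offset 0, size 4
    f.write(block0)    # offset 4, size 32
    f.write(block1)    # offset 36, size 32
    path = f.name

chunk = read_chunk(path, 36, 32)   # load only the second block
os.remove(path)
assert struct.unpack("<4d", chunk) == (5.0, 6.0, 7.0, 8.0)
```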
> I wonder if the existing schema could be rationalised to parquet

Putting the references into parquet is pretty awesome! I've tried to pack my 491MB JSON mentioned above to...