cfgrib
`cfgrib` loads all chunks into memory when indexing
Related to https://github.com/dask/dask/issues/9451 (and probably to https://github.com/fsspec/kerchunk/pull/198).
When indexing (with either `sel` or `isel`) over (lat, lon) in GRIB files loaded with `open_mfdataset` (and thus containing chunked data), `cfgrib` attempts to load all chunks into memory. This causes excessive RAM consumption and slow performance.
From the discussion we had, the hypothesis is that `cfgrib` needs to scan the entire file in order to subset along only a few dimensions. Still, it should be possible to perform the operation without loading the entire dataset into memory.
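To illustrate the behavior being asked for, here is a minimal stdlib-only sketch of a chunked reader that loads only the chunks intersecting the requested index range, rather than scanning every chunk. The class and method names are hypothetical and are not cfgrib's API; this is just the access pattern a subset operation could follow:

```python
class ChunkedReader:
    """Toy 1-D chunked reader (hypothetical, not cfgrib's API)."""

    def __init__(self, data, chunk_size):
        self.data = data
        self.chunk_size = chunk_size
        self.loaded_chunks = []  # track which chunks were actually read

    def _load_chunk(self, i):
        # Stand-in for reading one GRIB message/chunk from disk.
        self.loaded_chunks.append(i)
        start = i * self.chunk_size
        return self.data[start:start + self.chunk_size]

    def select(self, start, stop):
        # Only touch chunks that intersect [start, stop).
        first = start // self.chunk_size
        last = (stop - 1) // self.chunk_size
        out = []
        for i in range(first, last + 1):
            chunk = self._load_chunk(i)
            lo = max(start - i * self.chunk_size, 0)
            hi = min(stop - i * self.chunk_size, self.chunk_size)
            out.extend(chunk[lo:hi])
        return out


reader = ChunkedReader(list(range(100)), chunk_size=10)
subset = reader.select(42, 48)
print(subset)                # [42, 43, 44, 45, 46, 47]
print(reader.loaded_chunks)  # [4] -- one chunk read, not all ten
```

The point of the sketch is the bookkeeping in `select`: the chunk range is computed from the requested slice up front, so the cost scales with the size of the subset rather than the size of the file.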
I'm interested in this too. I am trying to extract a small subset from an ERA5-Land file, but, independently of the chunk size, xarray/dask tries to read the entire file into memory.
If I understand the problem correctly, this issue is partly because ecCodes can only read the whole message (field) from disk, even if you only want some meta-data. We have plans to improve that situation, but there is no firm time-frame for it yet. When we do, cfgrib should benefit enormously from it.