Jeff Whitaker
The current code will not call `nc_def_var_chunking` at all if `chunksizes=None` and `contiguous=False`, which I would think would result in the library default chunking strategy.
I think chunking is only used by default if there is an unlimited dimension. Try this:

```python
import netCDF4
import numpy as np

def write(**kwargs):
    nc = netCDF4.Dataset('chunk.nc', 'w')
    x...
```
@davidhassell it is already being reported - variables with no unlimited dimension are not chunked by default (they are contiguous).
@davidhassell thanks for clarifying, I understand now. Since the python interface doesn't have access to the default chunking algorithm in the C library, I don't know how this would be...
a potential workaround that doesn't require having an unlimited dimension is to turn on compression (`zlib=True,complevel=1`) or the fletcher checksum algorithm (`fletcher32=True`).
The python interface has lossy compression (http://unidata.github.io/netcdf4-python/#section9), as does the nco netcdf operators (https://www.geosci-model-dev.net/9/3199/2016/). All you need to do is quantize the data before applying the zlib filter.
No, this is not related to the use of scale_factor and add_offset. It involves truncating or quantizing the data to a certain level of precision (say 0.1 K for temperature)...
@ClaraDraper-NOAA I think you forgot to include the link. Would you be able to tell us specifically which lines to add to the reanalysis version of convinfo (https://github.com/NOAA-PSL/build_gsinfo/blob/main/convinfo/merged_convinfo.txt)? The anavinfo...
OK got it - thanks @ClaraDraper-NOAA
To reproduce the gfsv17_historical GSI *info files, the build_gsinfo dir in GSI-fix will need to be updated to match https://github.com/NOAA-PSL/build_gsinfo-fix/pull/3. It will not reproduce them bit for bit, since I've...