Jeff Whitaker

Results: 538 comments of Jeff Whitaker

There also seems to be a netcdf4-python package for your flavor of Linux: https://archlinux.org/packages/community/x86_64/python-netcdf4/

Can you post the offending file somewhere, or at least a subset of it that is enough to trigger the error?

Could be, but the mystery then is why the test for zstd passed:
```
configure:6676: gcc -o conftest -fno-strict-aliasing -I//scratch2/BMC/gsienkf/whitaker/conda-envs/mpi4py/include -L/scratch2/BMC/gsienkf/whitaker/conda-envs/mpi4py/lib conftest.c -lzstd -lxml2 -lcurl >&5
configure:6676: $? = 0
...
```

Yes, please do report back when you have a chance to test with netcdf-c 4.7.4.

Perhaps the libs you linked when you built 1.4.0 were different.

I don't see any way around having all the files open without a pretty major rethink of the whole MFDataset class (which is probably not a bad idea, but not one...
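
For context, a minimal sketch of the MFDataset usage being discussed (file and variable names here are made up for illustration): the class opens every member file up front and keeps them all open so the aggregated unlimited dimension can be indexed across them.

```python
from netCDF4 import MFDataset

# Hypothetical member files that share the same unlimited dimension and variables.
ds = MFDataset(["part1.nc", "part2.nc", "part3.nc"])

# Reads along the aggregated dimension span all of the underlying open files.
print(ds.variables["time"][:])

ds.close()  # closes every member file
```
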

No, I don't. Must be something to do with how HDF5 allocates space for the data in parallel mode. Just out of curiosity, does the size change as you change...

FWIW, I can reproduce this result on our Linux cluster.

The dataset in your example is not chunked (v.chunking() reports 'contiguous'). If the dimension is set to unlimited, then the dataset is chunked (v.chunking() reports [512]) and the difference in file...
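
A small sketch of that behavior (dimension, variable, and file names are made up); whether the variable ends up contiguous or chunked depends on whether its dimension is unlimited:

```python
from netCDF4 import Dataset
import numpy as np

# Fixed-size dimension: the variable is stored contiguously.
nc = Dataset("fixed.nc", "w")
nc.createDimension("x", 512)
v = nc.createVariable("v", "f8", ("x",))
print(v.chunking())   # 'contiguous'
nc.close()

# Unlimited dimension: HDF5 has to chunk the dataset.
nc = Dataset("unlimited.nc", "w")
nc.createDimension("x", None)          # unlimited
v = nc.createVariable("v", "f8", ("x",))
v[:] = np.zeros(512)
print(v.chunking())   # a chunk shape, e.g. [512], depending on library defaults
nc.close()
```
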

Collective IO has to be on when writing to an unlimited dimension, so you'll have to comment out the first write (before collective IO is turned on).
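
A hedged sketch of that pattern (file and variable names are made up, and it assumes netcdf4-python built with parallel HDF5 support and run under mpi4py): the variable is switched to collective mode before any write along the unlimited dimension.

```python
from mpi4py import MPI
from netCDF4 import Dataset

rank = MPI.COMM_WORLD.rank

# Open the file in parallel mode.
nc = Dataset("parallel_test.nc", "w", parallel=True,
             comm=MPI.COMM_WORLD, info=MPI.Info())

nc.createDimension("t", None)          # unlimited dimension
v = nc.createVariable("v", "f8", ("t",))

v.set_collective(True)                 # collective IO must be on before
                                       # writing along the unlimited dimension
v[rank] = float(rank)                  # each rank writes its own slot

nc.close()
```
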