`to_zarr()` is extremely slow writing to high latency store
Unbearably so, I would say. Here is an example with a tree containing 13 nodes and negligible data, trying to write to S3/GCS with fsspec:
```python
import numpy as np
import xarray as xr
from datatree import DataTree

ds = xr.Dataset(
    data_vars={
        "a": xr.DataArray(np.ones((2, 2)), coords={"x": [1, 2], "y": [1, 2]}),
        "b": xr.DataArray(np.ones((2, 2)), coords={"x": [1, 2], "y": [1, 2]}),
        "c": xr.DataArray(np.ones((2, 2)), coords={"x": [1, 2], "y": [1, 2]}),
    }
)

dt = DataTree()
for first_level in [1, 2, 3]:
    dt[f"{first_level}"] = DataTree(ds)
    for second_level in [1, 2, 3]:
        dt[f"{first_level}/{second_level}"] = DataTree(ds)

%time dt.to_zarr("test.zarr", mode="w")

bucket = "s3|gs://your-bucket/path"
%time dt.to_zarr(f"{bucket}/test.zarr", mode="w")
```
Gives:
```
CPU times: user 53.8 ms, sys: 3.95 ms, total: 57.8 ms
Wall time: 58 ms
CPU times: user 6.33 s, sys: 211 ms, total: 6.54 s
Wall time: 3min 20s
```
I suspect one of the culprits may be that we're having to reopen the store without consolidated metadata on writing each node:
https://github.com/xarray-contrib/datatree/blob/433f78dc3ea073d54d8fa36f1574c7e74a3b49db/datatree/io.py#L205-L223
Any ideas for easy improvements here?
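To see why per-node round trips hurt so much on a remote store, here is a stdlib-only back-of-the-envelope sketch. The latency and per-group request counts below are illustrative assumptions, not measurements of S3/GCS:

```python
import time

LATENCY = 0.01          # assumed per-request round-trip time, in seconds
NODES = 13              # number of groups in the example tree above
REQUESTS_PER_NODE = 10  # assumed listings/stats/metadata writes per group

def write_per_node():
    """One independent open + write sequence per group (current behaviour)."""
    for _ in range(NODES):
        for _ in range(REQUESTS_PER_NODE):
            time.sleep(LATENCY)  # every request pays a full round trip

def write_batched():
    """All metadata prepared locally, then sent in one (simulated) request."""
    time.sleep(LATENCY)

start = time.perf_counter()
write_per_node()
per_node = time.perf_counter() - start

start = time.perf_counter()
write_batched()
batched = time.perf_counter() - start

print(f"per-node: {per_node:.2f}s, batched: {batched:.2f}s")
```

With realistic object-store latencies (tens to hundreds of milliseconds) the per-node figure scales into minutes, which matches the wall times above.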
Many many ideas for improvements. The Zarr backend we wrote was really meant to be an MVP, it absolutely needs some work. Here's my diagnosis:
1. As mentioned, opening / listing each group independently is inefficient. This could be addressed here in Datatree.
2. Xarray sequentially initializes each group and array, then updates the user attributes. Any batching here would help. This should probably be addressed upstream in Xarray and Zarr-Python.
My approach to (2) is to rethink the Zarr-Python API for creating hierarchies. You may be interested in the discussion here: https://github.com/zarr-developers/zarr-python/discussions/1569
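Even before any API rethink, overlapping the small writes hides most of the latency. A stdlib-only sketch of the idea; the dict store and 10 ms round trip are stand-ins for illustration, not zarr's actual store interface:

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.01  # assumed round-trip time to the store
store = {}

def put(key, value):
    """Stand-in for one remote write; each call pays a full round trip."""
    time.sleep(LATENCY)
    store[key] = value

groups = [f"group_{i}/.zgroup" for i in range(13)]

# Sequential: 13 round trips paid back to back.
start = time.perf_counter()
for key in groups:
    put(key, b"{}")
sequential = time.perf_counter() - start

# Concurrent: the same 13 writes overlap, so latency is paid roughly once.
store.clear()
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=13) as pool:
    pool.map(put, groups, [b"{}"] * 13)
concurrent = time.perf_counter() - start

print(f"sequential: {sequential:.3f}s, concurrent: {concurrent:.3f}s")
```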
Awesome, thanks for the info! I imagine (1) would require reimplementing a good chunk of xarray's ZarrStore and other backend objects here in a way that avoids as many of these serial ops as possible?
In the meantime, this is plenty fast for the small data case:
```python
from tempfile import TemporaryDirectory

def to_zarr(dt, path):
    # Write the whole tree to fast local disk first, then bulk-upload.
    # `fs` is an fsspec filesystem for the target store,
    # e.g. fs = fsspec.filesystem("gs").
    with TemporaryDirectory() as tmp_path:
        dt.to_zarr(tmp_path)
        fs.put(tmp_path, path, recursive=True)
```
Takes 1s on my example above instead of 3m.
@slevang would you mind performing the same test with xarray.core.datatree.DataTree upstream? Then I will know whether or not this issue still exists even after many changes (in xarray, datatree, and zarr). If it still exists, can you please re-raise it on the xarray main repo :)
Looks like things are better but still very slow. The example in the OP now takes just over a minute on latest versions writing to GCS. DataTree.to_zarr hasn't changed, so the improvement must be higher up in the stack.
I've done a little profiling, and the fundamental problem is still that we're synchronously creating each group via a separate Dataset.to_zarr call. This involves a bunch of calls to various fsspec methods to check path existence and write small attribute and metadata files. The example above writes 13 groups, so that adds up quickly.
To make this significantly better, unfortunately I think we need to drop the reliance on Dataset.to_zarr and rebuild the method to write the whole group structure in one go, and then write the data.
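As a rough sketch of what "one go" could look like: assemble every metadata document in memory first, then hand the whole mapping to the store at once, so a backend can batch or parallelise the actual I/O. This is illustration only; the zarr v2 `.zgroup`/`.zattrs` layout is assumed, and `build_tree_metadata`/`flush` are hypothetical helpers, not xarray or zarr API:

```python
import json

def build_tree_metadata(paths):
    """Assemble every .zgroup/.zattrs document in memory before any I/O.
    `paths` are group paths like '1/2'; '' denotes the root group."""
    docs = {}
    for path in paths:
        prefix = f"{path}/" if path else ""
        docs[f"{prefix}.zgroup"] = json.dumps({"zarr_format": 2})
        docs[f"{prefix}.zattrs"] = json.dumps({})
    return docs

def flush(store, docs):
    """One bulk hand-off to the store; a real backend could issue these
    writes concurrently instead of one round trip per document."""
    store.update(docs)

paths = ["", "1", "2", "3", "1/1", "1/2"]
store = {}
flush(store, build_tree_metadata(paths))
print(sorted(store))
```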
I'll do a little more digging and reopen on xarray.