dask-geopandas
BUG: `gdf.geometry.total_bounds` reads all columns from Parquet instead of only geometry column
When doing a getitem operation after `read_parquet`, the column selection is pushed down. So for example, in the following cases:

```python
gdf = dask_geopandas.read_parquet(...)

# only the "attribute" column is read
gdf["attribute"].mean()

# only the geometry column is read (.geometry is equivalent to gdf["geometry"])
gdf.geometry.x
```
But it seems that specifically for `total_bounds` this doesn't work for some reason, and even `gdf.geometry.total_bounds.compute()` loads all columns of the Parquet file instead of only the geometry column (which makes `total_bounds` considerably slower than it could be).

(The reason I was looking into this was the realization that `gdf.total_bounds`, i.e. where the user doesn't explicitly call `.geometry` first, might load all columns unnecessarily. That is relevant for all GeoDataFrame methods/attributes that only require the geometry column, and something we could fix, I suppose; I need to open a separate issue for that. But then when comparing with `gdf.geometry.total_bounds`, it didn't improve.)
Two questions here:

- What happens with `bounds`? Does it also read all columns?
- What happens with `unary_union`? That uses a very similar implementation based on `reduction`.
`bounds` and `unary_union` both correctly read only the geometry column (if doing `gdf.geometry.bounds`/`gdf.geometry.unary_union`, of course). So I'm not really sure what's special about `total_bounds`. One difference is that it returns an array instead of a Series/DataFrame.
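For context, `total_bounds` is essentially a min/max reduction over per-geometry bounds that produces a plain 4-element array. A minimal numpy sketch of that reduction, with made-up bounds values:

```python
import numpy as np

# Per-geometry bounds rows (minx, miny, maxx, maxy) -- made-up values
bounds = np.array([
    [0.0, 1.0, 2.0, 3.0],
    [-1.0, 0.5, 1.5, 4.0],
])

# total_bounds-style reduction: elementwise min over the mins, max over the maxs,
# returning a bare ndarray rather than a labeled Series/DataFrame
total = np.concatenate([bounds[:, :2].min(axis=0), bounds[:, 2:].max(axis=0)])
# → array([-1. ,  0.5,  2. ,  4. ])
```

That bare-array return value is the difference being pointed at here: dask's column-projection optimization discussed below keys off Series/DataFrame results.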
See https://github.com/dask/dask/issues/7885 for an analysis of the issue. The way to solve it (with current dask at least) is to return a Series/DataFrame object instead of an array.
I was first thinking that might be too big of a change for `total_bounds` if we want to keep it consistent with the geopandas version. But actually, a Series of 4 elements might be compatible enough for most use cases. For example, the typical unpacking (`xmin, ymin, xmax, ymax = gdf.total_bounds`) or plain indexing (`gdf.total_bounds[0]`) should both still work with a Series as well.
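For illustration, a plain pandas Series of 4 elements with a default range index keeps both of those patterns working; the bounds values here are made up:

```python
import pandas as pd

# Hypothetical total bounds returned as a Series with a default range index
total_bounds = pd.Series([0.0, 0.0, 10.0, 5.0])

# tuple-style unpacking still works, just as with the numpy array
xmin, ymin, xmax, ymax = total_bounds

# plain integer indexing also still works with a range index
total_bounds[0]  # → 0.0
```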
Can we keep this open for some time to see how dask/dask#7885 evolves?
A Series would be okay, but to have it compatible in the way you mention (`gdf.total_bounds[0]`), it needs to have a range index, while it would make more sense to have an index of `['minx', 'miny', 'maxx', 'maxy']`.

So I propose to wait a bit and make a decision on this before the actual 0.1 release.
> A Series would be okay, but to have it compatible in the way you mention (`gdf.total_bounds[0]`), it needs to have a range index, while it would make more sense to have an index of `['minx', 'miny', 'maxx', 'maxy']`.
A Series with index values `['minx', 'miny', 'maxx', 'maxy']` would actually work for this, because in the case of a non-numeric index, indexing with integers like that falls back to positional indexing.
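A quick pandas sketch of that fallback, with made-up values; note that recent pandas versions deprecate the positional fallback in `Series.__getitem__`, so the integer form may emit a `FutureWarning`:

```python
import pandas as pd

bounds = pd.Series([0.0, 0.0, 10.0, 5.0],
                   index=["minx", "miny", "maxx", "maxy"])

# label-based access via the descriptive index
bounds["minx"]  # → 0.0

# with a non-numeric index, an integer key falls back to positional indexing
# (deprecated in recent pandas; bounds.iloc[0] is the future-proof spelling)
bounds[0]
```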