Add CRS/projection information to xarray objects
Problem description
This issue is to start the discussion for a feature that would be helpful to a lot of people. It may not necessarily be best to put it in xarray, but let's figure that out. I'll try to describe things below to the best of my knowledge. I'm typically thinking of raster/image data when it comes to this stuff, but it could probably be used for GIS-like point data.
Geographic data can be projected (uniform grid) or unprojected (non-uniform). Unprojected data typically has longitude and latitude values specified per pixel. I don't think I've ever seen non-uniform data in a projected space. Projected data can be specified by a CRS (PROJ.4), a number of pixels (shape), and extents/bbox in CRS units (xmin, ymin, xmax, ymax). This could also be specified in other ways, like origin (X, Y) and pixel size. Seeing as xarray already computes all of the coordinate data, it makes sense for extents and array shape to be used. With this information provided in an xarray object, any library could check for these properties and know where to place the data on a map.
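For a uniform grid, the coordinate arrays follow directly from the extents and shape. A minimal sketch (assuming pixel-center coordinates and a north-up raster with y decreasing downward):

```python
import numpy as np

def coords_from_extents(xmin, ymin, xmax, ymax, shape):
    """Compute pixel-center x/y coordinate arrays from CRS-unit extents.

    ``shape`` is (rows, cols); assumes a north-up raster where the
    y axis decreases from top to bottom, as is typical for imagery.
    """
    rows, cols = shape
    px = (xmax - xmin) / cols  # pixel width in CRS units
    py = (ymax - ymin) / rows  # pixel height in CRS units
    x = xmin + px / 2 + px * np.arange(cols)
    y = ymax - py / 2 - py * np.arange(rows)
    return x, y

x, y = coords_from_extents(0.0, 0.0, 100.0, 50.0, (5, 10))
# x: centers 5.0, 15.0, ..., 95.0; y: centers 45.0, 35.0, ..., 5.0
```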
So the question is: Should these properties be standardized in xarray Dataset/DataArray objects and how?
Related libraries and developers
- pyresample (me, @mraspaud, @pnuu)
- verde and gmt-python (@leouieda)
- metpy (@dopplershift)
- geo-xarray (@andrewdhicks)
- rasterio
- cartopy
I know @WeatherGod also showed interest on gitter.
Complications and things to consider
- Other related coordinate systems like ECEF where coordinates are specified in three dimensions (X, Y, Z). Very useful for calculations like nearest neighbor of lon/lat points or for comparisons between two projected coordinate systems.
- Specifying what coords arrays are the CRS coordinates or geographic coordinates in general.
- If xarray should include these properties, where is the line drawn for what functionality xarray supports? Resampling/gridding, etc?
- How is the CRS object represented? PROJ.4 string, PROJ.4 dict, an existing library's CRS object, a new CRS object, a `pyproj.Proj` object?
- Affine objects versus GDAL-style geotransforms instead of extents: https://github.com/mapbox/rasterio/blob/master/docs/topics/migrating-to-v1.rst#affineaffine-vs-gdal-style-geotransforms
- Similar to the previous point, I never mentioned "rotation" parameters, which some users may want and which are specified in the affine/geotransform.
- Dynamically generated extents/affine objects so that slicing operations don't have to be handled specially.
- Center of pixel coordinates versus outer edge of pixel coordinates.
This would also be very helpful for geoviews.
This has been discussed from time to time, although not in such an extensive way: thanks!
For me, the long list of discussion points that you mention is a clear argument for a dedicated package which will have to solve all these issues. A good example of the difficulty to tackle is the fact that geo libraries often do not use the same standards (see the incompatibility between `pyproj` and `cartopy.crs` objects as a striking example).
Regarding the xarray side: I also brought this up for my own geo-lib a while ago, and it seems that the cleanest solution to carry a `crs` info is to store it as a (scalar) coordinate instead of an attribute; attributes are lost in arithmetic computations (see @shoyer's comment).
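A quick illustration of the attribute-vs-coordinate difference, using plain xarray (the PROJ.4 string here is an arbitrary placeholder):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros((2, 2)), dims=("y", "x"))
da.attrs["crs"] = "+proj=longlat"           # stored as an attribute
da = da.assign_coords(crs="+proj=longlat")  # stored as a scalar coordinate

result = da + 1
print("crs" in result.attrs)        # False -- the attribute is dropped
print(result.coords["crs"].item())  # '+proj=longlat' -- the coordinate survives
```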
@fmaussion Note that I am the one who started the PROJ.4 CRS in cartopy pull request (https://github.com/SciTools/cartopy/pull/1023) and that it was this work that I copied to pyresample for my own pyresample work, since I didn't want to wait for everything to be fleshed out in cartopy. You can see an example of the `to_cartopy_crs` method here: https://github.com/pytroll/pytroll-examples/blob/master/satpy/Cartopy%20Plot.ipynb
It's also these cartopy CRS issues that make me think cartopy CRS objects aren't the right answer to "how do we represent CRS objects". In my experience (see: my cartopy PR :wink:), and from watching and talking with people at SciPy 2018, multiple projects have workarounds for passing their CRS/projection information to cartopy.
In my biased experience/opinion PROJ.4 is or can be used in quite a few libraries/fields. If PROJ.4 or something that accepts PROJ.4 isn't used then we might as well come up with a new standard way of defining projections...just kidding.
Side note: FYI, the geotiff format does not currently accept the sweep axis parameter `+sweep` that PROJ.4 needs to properly describe the geos projection used by GOES-16 ABI satellite instrument data. I contacted some of the geotiff library people at some point, and from what I remember it was a dead end without a lot of work behind fixing it.
Also, I should add the `geopandas` library as another reference.
geopandas would be a great template for a geoxarray package ;-)
Correct me if I'm wrong, but from the xarray side it would already be enough if there were a way in xarray to have a special `crs` attribute which is preserved by all operations. What the `crs` really is shouldn't bother xarray, and it can be specific to the downstream library [1]. With a crs and xarray's coordinates, the geolocation can be sorted out in almost all cases, right?
The current solution is to store `crs` as a scalar coordinate, as mentioned above.
What could also be a possibility (without knowing what the internals would look like) is to have a registry of "special attribute" names which would always be preserved by xarray's operations. This registry would live in xarray and could be updated by downstream libraries and/or accessors. (xref: https://github.com/pydata/xarray/issues/1614 ).
[1] : my preference goes for a simple PROJ4 string
@fmaussion I guess you're right. And that set of attributes to keep during certain operations would be very nice in my satpy library. We currently have to do a lot of special handling of that.
The one thing that a crs coordinate (PROJ.4 dict or str) doesn't handle is specifying which other coordinates define the X/Y projection coordinates. This also matters for non-uniform datasets, where longitude and latitude coordinates are needed. Of course, a downstream library could just define some type of standard for this. However, there are edge cases where I think the default handling of these coordinates by xarray would be bad. For example, satpy doesn't currently use `Dataset` objects directly and only uses `DataArray`s because of how coordinates have to be handled in a `Dataset`:
```python
In [3]: a = xr.DataArray(np.zeros((5, 10), dtype=np.float32), coords={'y': np.arange(5.), 'x': np.arange(10.)}, dims=('y', 'x'))  # reconstructed; missing from the original transcript

In [4]: b = xr.DataArray(np.zeros((5, 10), dtype=np.float32), coords={'y': np.arange(2., 7.), 'x': np.arange(2., 12.)}, dims=('y', 'x'))

In [6]: ds = xr.Dataset({'a': a, 'b': b})

In [7]: ds.coords
Out[7]:
Coordinates:
  * y        (y) float64 0.0 1.0 2.0 3.0 4.0 5.0 6.0
  * x        (x) float64 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 11.0
```
But I guess that is intended behavior, and if the `crs` is a coordinate then joining things from different projections would not be allowed and would raise an exception. However, that is exactly what satpy wants/needs to handle in some cases (satellite datasets at different resolutions, multiple 'regions' from the same overall instrument, two channels from the same instrument with slightly shifted geolocation, etc.). I'm kind of just thinking out loud here, but I'll think about this more in my idle brain cycles today.
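For reference, the conflict described above can be demonstrated with xarray's default merge behavior (a sketch; the PROJ.4 strings are placeholders):

```python
import numpy as np
import xarray as xr

# two arrays with conflicting scalar 'crs' coordinates
a = xr.DataArray(np.zeros(3), dims="x", coords={"crs": "+proj=merc"}, name="a")
b = xr.DataArray(np.zeros(3), dims="x", coords={"crs": "+proj=longlat"}, name="b")

try:
    xr.merge([a, b])  # default compat='no_conflicts'
    raised = False
except xr.MergeError:
    raised = True
print(raised)  # the conflicting 'crs' values raise a MergeError
```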
> But I guess that is intended behavior and if the crs is a coordinate then joining things from different projections would not be allowed and raise an exception. However that is exactly what satpy wants/needs to handle in some cases (satellite datasets at different resolutions, multiple 'regions' from the same overall instrument, two channels from the same instrument with slightly shifted geolocation, etc).
I think it would make more sense to think about using multiple `xarray.Dataset` objects for these use cases, possibly in some sort of hierarchical collection. The notion of an `xarray.Dataset` is pretty closely tied to a single grid. The discussion in https://github.com/pydata/xarray/issues/1092 is definitely worth reading.
@shoyer I haven't read all of #1092, but that is another related issue for satpy, where some satellite data formats use groups in NetCDF files, which makes it difficult to use `xr.open_dataset` to access all the variables inside the file.
I've thought about this a little more and I agree with @fmaussion that this doesn't need to be added to xarray. I think if "we", developers who work with projected datasets, can agree that "crs" in an xarray object's coordinates is a PROJ.4 string, then that's half the battle of passing them between libraries. If not a PROJ.4 string, other ideas (dict?)?
I initially had the idea to start a new `geoxarray`-type library, but the more I thought about what features I would want in it, the more it started looking like a new interface on pyresample via an xarray accessor. If not an accessor, then a subclass, but that defeats the purpose (easy collaboration between libraries). I'd also like to use the name "geo" for the accessor, but I have a feeling that won't jive well with everyone, so I will likely fall back to "pyresample".
One more difficulty that just came to mind while typing this: there will still be a need for an object like pyresample's `AreaDefinition` to represent a geographic region (projection, extents, size). These could then be passed to things like a `.resample` method as a target projection, or used for slicing based on another projection's coordinates.
When I started typing this I thought I had it all laid out in my head, not anymore. 😢
I am really excited about this discussion. I know of other libraries that have done the same thing and have written internal libraries myself.
If possible, I would hope that we could follow the CF convention on this, as it makes the output netCDF file compatible with QGIS, GDAL, and rasterio when written using `to_netcdf()`. To do so, you add the `crs` coordinate to the dataset/dataarray:

```
int crs;
```
Then you add the `spatial_ref` attribute to the crs, which is a CRS WKT string. Next, you add the `grid_mapping` attribute, containing `crs` as the grid_mapping, to all associated variables.
See an example here.
After that, you could store all kinds of information inside the `crs` variable, such as the PROJ.4 string, the affine, etc.
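The recipe above might be sketched in plain xarray as follows (the WKT string is truncated to a placeholder):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"ndvi": (("y", "x"), np.zeros((3, 4)))},
    coords={"y": np.arange(3.0), "x": np.arange(4.0)},
)

# dimensionless integer "crs" coordinate
ds.coords["crs"] = 0
# CRS WKT stored on the coordinate (truncated placeholder here)
ds.coords["crs"].attrs["spatial_ref"] = 'PROJCS["UTM Zone 15, Northern Hemisphere",...]'
# each data variable points at the coordinate via the CF grid_mapping attribute
ds["ndvi"].attrs["grid_mapping"] = "crs"
```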
@snowman2 I thought about that too, but here are the reasons I came up with for why this might not be the best idea:
- CF conventions change over time and depending on which version of the standard you are using, things can be represented differently. This would tie a geoxarray-like library to a specific version of the CF standard which may be confusing and would require adjustments when writing to a NetCDF file to match the user's desired version of the standard.
- Using a CF standard CRS description would require conversion to something more useful for just about every use case (that I can think of) that isn't saving to a netcdf file. For example, a PROJ.4 string can be passed to `pyproj.Proj` or, in the near future, cartopy to convert to a cartopy CRS object.
- If we have to add more information to the `crs` coordinate to make it more useful, like a PROJ.4 string, then we end up with multiple representations of the same thing, making maintenance of the information harder.
The result of this github issue should either be a new package that solves all (90+%) of these topics, or an easy-to-implement, easy-to-use geolocation description best practice so that libraries can communicate more easily. I think with the CF standard CRS object we would definitely need a new library to provide all the utilities for converting to and from various things.
Lastly, I don't know if I trust CF to be the one source of truth for stuff like this. If I've missed some other obvious benefits of this or if working with WKT or the CF standard CRS attributes isn't actually that complicated let me know.
Here is an example of how it would look on a dataset:
```
<xarray.Dataset>
Dimensions:  (x: 65, y: 31)
Coordinates:
  * x        (x) float64 ...
  * y        (y) float64 ...
    time     datetime64[ns] ...
    crs      int64 ...
Data variables:
    ndvi     (y, x) float64 ...
Attributes:
```
Here is how the `crs` or `spatial_ref` coordinate variable would look:
```
<xarray.DataArray 'crs' ()>
array(0)
Coordinates:
    time     datetime64[ns] ...
    crs      int64 0
Attributes:
    spatial_ref:  PROJCS["UTM Zone 15, Northern Hemisphere",GEOGCS["WGS 84",D...
```
And here is how it would look on the variables:
```
<xarray.DataArray 'ndvi' (y: 31, x: 65)>
array([[ ...]])
Coordinates:
  * x        (x) float64 ...
  * y        (y) float64 ...
    time     datetime64[ns] ...
    crs      int64 0
Attributes:
    grid_mapping:  crs
```
@djhoese Whether or not we use the CF convention is not what I am concerned about. What I think would benefit the most people is, at the file-format level, being able to do `to_netcdf()` and have the file read in by standard GIS tools such as rasterio, GDAL, and QGIS. With this schema, that is possible.
Another benefit is that it keeps the `crs` or `spatial_ref` with it when you do your operations, as it is a coordinate of the variable.
Also, as a side note: if you use center-pixel coordinates, then GDAL, rasterio, and QGIS are able to read in the file and determine its affine/transform without a problem.
For the new library, if you have a `crs` method attached to it, it isn't too difficult to convert it to whatever format you need. With `rasterio.crs.CRS.from_string` you can "make a CRS from an EPSG, PROJ, or WKT string", and you can also get back the EPSG, PROJ, or WKT string with a simple method call.
For example, using the recommended method to extend xarray, you could add a `crs` property:

```python
from rasterio.crs import CRS

........

    @property
    def crs(self):
        """:obj:`rasterio.crs.CRS`: Projection from `xarray.DataArray`."""
        if self._crs is not None:
            return self._crs
        try:
            # look up the variable named by grid_mapping and read its WKT
            self._crs = CRS.from_string(
                self._obj.coords[self._obj.grid_mapping].spatial_ref
            )
        except (AttributeError, KeyError):
            raise ValueError("Spatial reference not found.")
        return self._crs
```
And if you call your extension `geo`, all you would need to get the CRS would be `ds.geo.crs`.

To get the PROJ.4 string: `ds.geo.crs.to_string()`

To get the WKT string: `ds.geo.crs.wkt`

To get the EPSG code: `ds.geo.crs.to_epsg()`
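For those wanting to avoid the GDAL dependency discussed later in this thread, `pyproj` (version 2+) provides a `CRS` class with equivalent conversions; a sketch:

```python
from pyproj import CRS

# build a CRS from any common representation (EPSG, PROJ, WKT)
crs = CRS.from_user_input("EPSG:32615")  # UTM zone 15N, WGS84

print(crs.to_epsg())   # EPSG code
print(crs.to_proj4())  # PROJ.4 string (emits a deprecation warning)
print(crs.to_wkt())    # WKT string
```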
@snowman2 Awesome. Thanks for the info, this is really good stuff to know. In your own projects and use of raster-like data, do you ever deal with non-uniform/non-projected data? How do you prefer to handle/store individual lon/lat values for each pixel? Also it looks like xarray would have to be updated to add the "crs" coordinate since currently it is not considered a coordinate variable. So a new library may need to have custom to_netcdf/open_dataset methods, right?
It kind of seems like a new library may be needed for this although I was hoping to avoid it. All of the conversions we've talked about could be really useful to a lot of people. I'm not aware of an existing library that handles these conversions as one of its main purposes and they always end up as a "nice utility" that helps the library as a whole. It seems like a library to solve this issue should be able to do the following:
- Store CRS information in xarray objects
- Write properly geolocated netcdf and geotiff files from xarray objects.
- Read netcdf and geotiff files as properly described xarray objects.
- Convert CRS information from one format to another: WKT, EPSG (if available), PROJ.4 str/dict, rasterio CRS, cartopy CRS
- Optionally (why not) be able to resample datasets to other projections.
Beyond reading/writing NetCDF and geotiff files I would be worried that this new library could easily suffer from major scope creep. Especially since this is one of the main purposes of the satpy library, even if it is dedicated to satellite imagery right now. @snowman2 I'm guessing the data cube project has similar use cases. If the reading/writing is limited to a specific set of formats then I could see pyresample being a playground for this type of functionality. The main reason for a playground versus a new from-scratch package would be the use of existing utilities in pyresample assuming resampling is a major feature of this new specification. Yet another braindump...complete.
> In your own projects and use of raster-like data, do you ever deal with non-uniform/non-projected data?
I have dealt with non-uniform data in the geographic projection. I have found it easiest to deal with if you can determine the original projection and project the coordinates back to that projection so it is uniform. But I am by no means an expert in this arena. Most of the time I work with "normal" data.
> How do you prefer to handle/store individual lon/lat values for each pixel?
rasterio/GDAL/QGIS all seem to use the centroid.
> Also it looks like xarray would have to be updated to add the "crs" coordinate since currently it is not considered a coordinate variable. So a new library may need to have custom to_netcdf/open_dataset methods, right?
Actually, it is not difficult to add as it stands:

```python
ds.coords['crs'] = 0
ds.coords['crs'].attrs = dict(spatial_ref='PROJCS["UTM Zone 15, Northern Hemisphere",GEOGCS["WGS 84",D...')
```
But if a `crs` does not already exist on the dataset, I guess a function that adds the `crs` properly would be useful, so it can also add the grid_mapping to all of the variables.
Example:

```python
ds.geo.set_crs("+init=epsg:4326")
```
I think only minor modifications will be needed once the crs is set properly on the xarray dataset, because after that `to_netcdf()` will automatically produce georeferenced datasets.
I could see the first pass of the extension/library simply providing:

- `ds.geo.crs`
- `ds.geo.set_crs()`
- `ds.geo.to_projection()`
Regarding non-uniform datasets, I think we have a small misunderstanding. I'm talking about things like data from polar-orbiting satellites, where the original data is only geolocated by longitude/latitude values per pixel and the spacing between these pixels is not uniform, so you need every original longitude and latitude coordinate to properly geolocate the data (data, longitude, and latitude arrays all have the same shape). For the topics in this issue, this is a problem because you would expect the lat/lon arrays to be set as coordinates, but if you are dealing with dask arrays that means these values are now fully computed (correct me if I'm wrong).
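For reference, a sketch of attaching 2D lon/lat arrays as non-dimension coordinates so that slicing keeps them aligned with the data (values here are made up):

```python
import numpy as np
import xarray as xr

# per-pixel geolocation: lon/lat arrays with the same shape as the data
lon, lat = np.meshgrid(np.linspace(-105.0, -93.0, 4), np.linspace(40.0, 45.0, 3))
data = np.zeros((3, 4))

swath = xr.DataArray(
    data,
    dims=("y", "x"),
    coords={"lon": (("y", "x"), lon), "lat": (("y", "x"), lat)},
)

# slicing the data slices the 2D coordinates along with it
sub = swath[1:, :2]
print(sub.lon.shape)  # (2, 2)
```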
For your example of adding a `crs` attribute: I understand that that is how one could do it, but I'm saying it is not already done in xarray's `open_dataset`. In my opinion this is one of the biggest downsides of the CF way of specifying projections: they are a special case that doesn't fit the rest of the NetCDF model well (a scalar with all valuable data in the attributes, indirectly referenced from data variables).
In your example of methods, is `to_projection` a remapping/resampling operation? If not, how does it differ from `set_crs`?
That is interesting; I am definitely not an expert with non-uniform datasets. In the satellite datasets I have used, the 2D latitude and longitude coordinates are stored in the datasets and are not super useful. I usually have to recreate the grid coordinates in the original projection in other ways (e.g. SMAP uses the EASE Grid 2.0 but stores the latitude/longitude of the points in the file) or reproject & flatten the coordinates. I have had to do this with weather data and made an xarray extension, pangaea, to handle it. So that is what I was referring to when I misunderstood your question.
> For your example of adding a crs attribute, ...
The files I have created already have the `crs` coordinate variable inside the netCDF file, and it is always there when I load it with `xarray.open_dataset()`. The method `set_crs()` could be used to add the `crs` coordinate variable and `grid_mapping` attributes to the dataset in the proper way, so that it would be there on `xarray.open_dataset()` after dumping it to the file with `to_netcdf()`.
The CF stuff is supported by rasterio, GDAL, QGIS and that is why I like it. If there is another way that is as well supported, I am not opposed to it.
> In your example of methods is to_projection a remapping/resampling operation? If not, how does it differ from set_crs?
The `to_projection()` method would be a reproject/resampling operation.
> The files I have created have the crs coordinate variable inside
Ok, so the netcdf files that you have created and are reading with `xarray.open_dataset` have `grid_mapping` set to `"crs"` for your data variables, right? Do you also include a special `"crs"` dimension? I believe having this dimension would cause xarray to automatically consider `"crs"` a coordinate, but this is not CF standard from what I can tell. As I mentioned in your other issue, the CF standard files I have for GOES-16 ABI L1B data do not have this "crs" dimension (or a similarly named dimension), which means that the variable specified by the `grid_mapping` attribute is not considered a coordinate for the associated DataArray/Dataset.
This means that to properly associate a CRS with a DataArray/Dataset, this new library would require its own version of `open_dataset` to assign these things correctly based on `grid_mapping`. Since the library would require users to use this function instead of xarray's, I don't think it would be out of the question for it to also have a custom `to_netcdf` method if we chose a non-CF representation of the CRS information. Not saying I feel strongly about it, just pointing out that it isn't a huge leap to require users to use the new/custom methods.
It is not in the dimensions; it is the coordinates attribute in the variable. That is handled automatically by xarray when writing `to_netcdf` if you add the crs to the `coords`.
From the ncdump:
```
dimensions:
        x = 65 ;
        y = 31 ;
variables:
        double x(x) ;
                x:_FillValue = NaN ;
                x:long_name = "x coordinate of projection" ;
                x:standard_name = "projection_x_coordinate" ;
                x:units = "m" ;
        double y(y) ;
                y:_FillValue = NaN ;
                y:long_name = "y coordinate of projection" ;
                y:standard_name = "projection_y_coordinate" ;
                y:units = "m" ;
        int64 time ;
                time:units = "seconds since 2015-04-03T17:55:19" ;
                time:calendar = "proleptic_gregorian" ;
        int64 spatial_ref ;
                spatial_ref:spatial_ref = "PROJCS[\"UTM Zone 15, Northern Hemisphere\",GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS 84\",6378137,298.257223563,AUTHORITY[\"EPSG\",\"7030\"]],AUTHORITY[\"EPSG\",\"6326\"]],PRIMEM[\"Greenwich\",0,AUTHORITY[\"EPSG\",\"8901\"]],UNIT[\"degree\",0.0174532925199433,AUTHORITY[\"EPSG\",\"9122\"]],AUTHORITY[\"EPSG\",\"4326\"]],PROJECTION[\"Transverse_Mercator\"],PARAMETER[\"latitude_of_origin\",0],PARAMETER[\"central_meridian\",-93],PARAMETER[\"scale_factor\",0.9996],PARAMETER[\"false_easting\",500000],PARAMETER[\"false_northing\",0],UNIT[\"Meter\",1]]" ;
        double ndvi(y, x) ;
                ndvi:_FillValue = NaN ;
                ndvi:grid_mapping = "spatial_ref" ;
                ndvi:coordinates = "spatial_ref time" ;

// global attributes:
                :creation_date = "2018-04-11 13:14:55.401183" ;
```
It would definitely be a good idea to ensure that the `crs` variable is a coordinate, so I agree that having support for that would be good.
I was talking about `open_dataset` not reading standard CF files the way we want, at least not as it is now. I understand that setting the CRS in `.coords` will write out the CRS when you use `to_netcdf`. The issue is that a standard CF netcdf file created by someone else, strictly following the CF standard, will not be read in the same way. Put another way, you could not load a CF NetCDF file with `open_dataset`, write it out with `to_netcdf`, and get the same output.
Also note that having the `grid_mapping` variable as a coordinate in an xarray object results in it being listed in the `coordinates` attribute of the output netcdf file, which is technically not part of the CF standard.
The example I gave was just demonstrating that the dimension is not required for the `crs` coordinate variable.
I agree with the functionality that would standardize the `crs` as a coordinate variable on load for the case you specified. That way, the read-in file will always behave the same, and writing to netCDF would output it correctly.
This all sounds like it is heading in a good direction. 👍
I was talking with @dopplershift the other day on gitter and he brought up a very important point: no matter how CRS information is represented, the user should be able to access the individual parameters (reference longitude, datum, etc.). This led me to think that a new CRS class is probably needed, even though I wanted to avoid it, because it would likely be one of the easiest ways to provide access to the individual parameters. There are already cartopy CRS objects, which IMO are difficult to create, and rasterio CRS objects, which require GDAL, a pretty huge dependency to make users install just to describe their data. That said, no matter how it is coded, I don't want to duplicate all the work that has been done in rasterio/GDAL for handling WKT and converting between different CRS formats.
The other thing I've been pondering during idle brain time: is it better for this library to require an xarray object to have projection information described in one and only one way (a CRS object instance, for example), or should the xarray accessor handle multiple forms of this projection information? Does having a CRS object in `.coords` allow some functionality that a simple string would not? Does not having a required `.coords` CRS element stop the accessor from adding one later? In the latter case, with the accessor parsing existing attrs/coords/dims of the xarray object, I was thinking it could handle a PROJ.4 string and the CF "grid_mapping" specification to start. The main benefit is that with little to no work a user could import geoxarray and have access to whatever functionality can be provided in a `.geo` accessor. Or if they load a netcdf file with `xr.open_dataset`, there is no extra work required for a user to supply that data to another library that uses geoxarray.
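A minimal sketch of what such an accessor's CRS discovery could look like (the accessor name, fallback order, and return type are all hypothetical):

```python
import xarray as xr

@xr.register_dataset_accessor("geo")
class GeoAccessor:
    """Hypothetical ``.geo`` accessor: finds CRS info stored either as a
    CF-style ``spatial_ref`` attribute or as a plain PROJ.4 string value."""

    def __init__(self, ds):
        self._ds = ds

    @property
    def crs(self):
        if "crs" in self._ds.coords:
            crs_var = self._ds.coords["crs"]
            if "spatial_ref" in crs_var.attrs:  # CF-style WKT in attrs
                return crs_var.attrs["spatial_ref"]
            return str(crs_var.values)          # plain PROJ.4 string value
        raise ValueError("No CRS information found")

ds = xr.Dataset(coords={"crs": "+proj=longlat +datum=WGS84"})
print(ds.geo.crs)  # "+proj=longlat +datum=WGS84"
```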
Lots of good thoughts there.
I think a lot depends on who you plan on having as a user base. I like `rasterio.crs.CRS`, but it does require GDAL. I know GDAL is a heavy dependency (it solves runtime problems but is a source of installation problems), but if your users are already using it, then it isn't too big of an ask. If not, you could look into something like pycrs, but it looks like it hasn't been touched for a while (never a good sign). Or there are other options (requiring more work & maintenance), such as re-creating the `rasterio.crs.CRS` object using cython with a copy of only the spatial reference code from GDAL in the repo (might be better to create your own repo/package if you head in this direction). Or maybe someone could ask really nicely for the GDAL maintainers to consider making the spatial reference code a package on its own (not likely, but it would be really nice).
My preference is to have the CRS object created/retrieved by the accessor based on information in the file. If it is not, users will have to remove the CRS object when using `to_netcdf()`, as the object will not be serializable and `to_netcdf()` will throw errors. I think allowing the user to set the CRS on the file (by adding the `crs` to `.coords` and `grid_mapping` to `data_vars`) by passing a valid projection string (WKT, PROJ.4) to `geo.set_crs()` would be a good idea. This way, when the user calls `to_netcdf()`, it will happily do what they want. In addition, the CRS will still be available later through the accessor when loading with `xr.open_dataset`.
For the user base, I think if we can cover as many groups as possible that would be best. I know there are plenty of people who need to describe CRS information in their data but don't use geotiffs and therefore don't really need rasterio/GDAL. The group I immediately thought of was the metpy group, which is why I talked to @dopplershift in the first place. The immediate need for this group (based on his SciPy talk) will be people reading NetCDF files and putting the data on a cartopy plot. I think @dopplershift and I agreed that when it comes to problems building/distributing software that deals with this type of data, the cause is almost always gdal/libgdal. I'm in favor of making it optional if possible.
For the `to_netcdf` stuff, I think anything that needs to be "adjusted" before writing to a NetCDF file can be handled by requiring users to call `my_data.geo.to_netcdf(...)`. I'm not a huge fan of the accessor automatically adding information that the user didn't specifically request; side effects on your data just from importing a module are not good. I will try to put together a package skeleton and lay out some of the stuff in my head in the next month, but I am still catching up on work after SciPy and a week of vacation, so I'm not sure when exactly I'll get to it.
I just did a search for "geoxarray" on github and @wy2136's repositories came up, where they are importing a `geoxarray` package. @wy2136, is there another geoxarray project that we are not aware of? Do you have anything else to add to this discussion?
Sorry for the confusion from the `geoxarray` package I have been using. It's currently still a simple personal package. I use it to register the `geo` accessor to do some calculations related to data on longitude-latitude grids (area mean, for example) and make quick plots based on the `geoplots` package (https://github.com/wy2136/geoplots).
@wy2136 Very cool. We have the ability in satpy (via pyresample) to create cartopy CRS objects and therefore cartopy plots from our xarray DataArray objects: https://github.com/pytroll/pytroll-examples/blob/master/satpy/Cartopy%20Plot.ipynb
It would be nice if we could work together in the future since it looks like you do a lot of the same stuff. When I make an official "geoxarray" library I think I'm going to make a "geo" accessor for some of these same operations (see above conversations).
@djhoese Sure. It would be great to have collaborations in the future. The packages and libraries you guys are working on are awesome. We really need them in our research.
Would anyone watching this thread hate it if I made geoxarray Python 3.6+? I doubt there are any features I need that require 3.6, but I am also not going to support Python 2.
Additionally, @shoyer, @fmaussion, and any other xarray devs: I've been thinking about the case where I have 2D image data and 2D longitude and latitude arrays (one lon/lat pair for each image pixel). Is there a way in xarray to associate these three arrays in a DataArray so that slicing is handled automatically, but without putting the arrays in the coordinates? As mentioned above, I don't want to put these lon/lat arrays in `.coords` because they have to be fully computed if they are dask arrays (or at least that is my understanding). For my use cases this could mean a good chunk of memory being dedicated to these coordinates. From what I can tell, my options are `.coords` or a `Dataset` with all 3.
Similarly, is there any concept like a "hidden" coordinate, where utilities like `to_netcdf` ignore the coordinate and don't write it? Maybe something like `.coords['_crs'] = "blah blah blah"`? I could always add this logic myself to geoxarray's version of `to_netcdf`.
> Is there a way in xarray to associate these three arrays in a DataArray so that slicing is handled automatically but also not put the arrays in the coordinates?
Not yet, unfortunately, but this is what https://github.com/pydata/xarray/pull/2302 is trying to solve.
> I could always add this logic myself to geoxarray's version of to_netcdf.
I think this would be the preferred approach.
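A sketch of that approach, assuming a hypothetical underscore-prefix convention for hidden coordinates (the function names are made up):

```python
import xarray as xr

def drop_hidden_coords(ds):
    """Drop coordinates whose names start with an underscore (a
    hypothetical 'hidden' convention, e.g. ``_crs``)."""
    hidden = [name for name in ds.coords if str(name).startswith("_")]
    return ds.drop_vars(hidden)

def to_netcdf(ds, path, **kwargs):
    """Hypothetical geoxarray version of to_netcdf that strips hidden
    coordinates before serializing."""
    return drop_hidden_coords(ds).to_netcdf(path, **kwargs)

ds = xr.Dataset({"a": ("x", [1, 2, 3])}, coords={"_crs": "+proj=longlat"})
cleaned = drop_hidden_coords(ds)
print(list(cleaned.coords))  # '_crs' is gone; the original ds is untouched
```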
FYI I've started a really basic layout of the CRS object in geoxarray: https://github.com/geoxarray/geoxarray
It doesn't actually do anything yet, but I copied over the useful utilities from pyresample (convert PROJ.4 to cartopy CRS, PROJ.4 str to dict, etc.). I decided that the CRS object should use the CF conventions' naming for projection parameters, based on a conversation I had with @dopplershift, the main factor being that they are much more human-readable than the PROJ.4 names. The downside is that I have much more experience dealing with PROJ.4 parameters.
I can probably also get a lot of information from metpy's CF plotting code: https://github.com/Unidata/MetPy/blob/master/metpy/plots/mapping.py