Stephan Hoyer

Results: 670 comments by Stephan Hoyer

> But since it's a downstream calculation issue, and does not impact the actual precision of what's being read from the file, what's wrong with saying "Use data.astype(np.float64)". It's completely...

> I'm not following why the data are scaled twice. We automatically scale the data from int16->float32 upon reading it in xarray (if decode_cf=True). There's no way to turn that...

Both multiplying by 0.01 and converting float32 -> float64 are approximately equally expensive. The cost is dominated by the memory copy. On Mon, Aug 6, 2018 at 10:17 AM Ryan May...

Please let us know if converting to float64 explicitly and rounding again does not solve this issue for you. On Mon, Aug 6, 2018 at 10:47 AM Thomas Zilio wrote:...
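The advice above can be sketched as follows. This is a hedged illustration with hypothetical packed values; real data would come from the file, and the rounding precision (2 decimals) is assumed from the 0.01 scale factor:

```python
import numpy as np

# Hypothetical packed int16 values as they might be stored on disk.
packed = np.array([1234, 5678, 9999], dtype=np.int16)
scale_factor = 0.01

# CF-style decoding in float32 carries limited precision relative to
# the exact decimal product.
decoded_f32 = packed.astype(np.float32) * np.float32(scale_factor)

# Converting to float64 and rounding to the precision implied by the
# scale factor recovers the intended values.
recovered = np.round(decoded_f32.astype(np.float64), 2)
```

The extra `astype` costs roughly one memory copy, which is the same order of cost as the scaling itself.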

@magau thanks for pointing this out -- I think we simply missed this part of the CF conventions document! Looking at the dtype for `add_offset` and `scale_factor` does seem like...

> > the unpacked data should match the type of these attributes, which must both be of type float or both be of type double. An additional restriction in this...
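The CF rule quoted above could be sketched as a small helper. This is a hypothetical function (`_choose_unpacked_dtype` is not xarray's actual API), just illustrating the dtype-selection logic implied by the convention:

```python
import numpy as np

def _choose_unpacked_dtype(scale_factor, add_offset):
    """Sketch of the CF rule: the unpacked data should take the dtype
    of the scale_factor/add_offset attributes, which the convention
    requires to be float (float32) or double (float64)."""
    attrs = [a for a in (scale_factor, add_offset) if a is not None]
    if any(np.asarray(a).dtype == np.float64 for a in attrs):
        return np.float64
    return np.float32
```

Under this rule, writing the attributes as float64 is how a dataset author requests full double-precision unpacking.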

As I understand it, the main purpose here is to remove the Xarray lazy indexing class. Maybe call this `get_duck_array()`, just to be a little more descriptive?

We've discussed this before: https://github.com/pydata/xarray/issues/934 I agree that this would be nice to support in theory. The challenge is that we would need to create (and then possibly throw away?)...

This has some connections to the broader indexes refactor envisioned in https://github.com/pydata/xarray/issues/1603.

@maxim-lian Probably. Or you could make the `pandas.Index` explicitly, e.g., `da.sel(a=da.c.to_index().get_indexer(['x', 'y']))`. We should really add `DataArray.isin()` (https://github.com/pydata/xarray/issues/1268).
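The `get_indexer` workaround above can be sketched with a hypothetical DataArray that has a non-dimension coordinate `c` along dim `a` (`isel` is used for the positional step, since `a` itself carries no index):

```python
import numpy as np
import xarray as xr

# Hypothetical DataArray: values along dim 'a', labeled by coordinate 'c'.
da = xr.DataArray(
    np.arange(4),
    dims="a",
    coords={"c": ("a", ["w", "x", "y", "z"])},
)

# Build the pandas.Index explicitly, then map labels to integer positions.
positions = da.c.to_index().get_indexer(["x", "y"])
selected = da.isel(a=positions)

# With DataArray.isin (added following the linked issue), a boolean mask
# achieves the same selection:
also_selected = da.where(da.c.isin(["x", "y"]), drop=True)
```

`get_indexer` preserves the requested label order, while the `isin` mask keeps the array's own order; here the two happen to coincide.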