uncertainties
unp.isnan fails when input is empty array
np.isnan correctly returns an empty array when given an empty array as an argument. unp.isnan, however, goes straight to its np.vectorize wrapper, which attempts to vectorize the operation and fails on size-0 input:
import numpy as np
print(np.isnan([]))
# array([], dtype=bool)
print(np.isnan([np.nan]))
# array([ True])
from uncertainties import unumpy as unp
print(unp.isnan([]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/michael/opt/miniconda3/envs/pandas/lib/python3.9/site-packages/numpy/lib/function_base.py", line 2328, in __call__
return self._vectorize_call(func=func, args=vargs)
File "/Users/michael/opt/miniconda3/envs/pandas/lib/python3.9/site-packages/numpy/lib/function_base.py", line 2406, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "/Users/michael/opt/miniconda3/envs/pandas/lib/python3.9/site-packages/numpy/lib/function_base.py", line 2362, in _get_ufunc_and_otypes
raise ValueError('cannot call `vectorize` on size 0 inputs '
ValueError: cannot call `vectorize` on size 0 inputs unless `otypes` is set
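The error comes from np.vectorize being unable to infer its output dtype when there are zero elements to call the function on; passing otypes explicitly avoids this. A minimal pure-NumPy sketch that mimics how unumpy wraps scalar functions (the name isnan_v is mine, not the library's):

```python
import math
import numpy as np

# A vectorized isnan built the same way unumpy builds its wrappers
# (np.vectorize around a scalar function), but with otypes specified
# so that size-0 inputs do not raise ValueError:
isnan_v = np.vectorize(math.isnan, otypes=[bool])

print(isnan_v([]))            # array([], dtype=bool) -- no error
print(isnan_v([float('nan'), 1.0]))  # array([ True, False])
```

With otypes set, np.vectorize no longer needs a trial call on the first element, so the empty case falls through cleanly.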
Indeed, this makes sense. Well spotted! Let me check how to fix this properly.
Adding a note that when called with the singleton argument np.nan, unp.isnan goes to a lot of trouble to create a vectorized call for that one element. Should there be a fast path for singletons vs. arrays?
I would say that the simplicity and uniformity of the code is a positive point, and that accelerating an already fast case (singletons) is not worth making the code less maintainable at this stage. Also, one could then think of similarly accelerating the 2-element case, and so on. This is a rabbit hole!
Fair enough. I'm looking at avoiding the overhead a different way: using PintArray to get an array I can dequantify and pass to unp.isnan as a proper array, rather than using map to call unp.isnan on elements one by one.