Eric Wieser
Fixing this would come at the cost of code like

```python
# note: full import used to make flake8 happy
from clifford.g3c import (
    # basis elements
    e1, e2, e3,
    ...
```
You should leave `__array_wrap__` behind and implement `__array_ufunc__` instead.
`__array_wrap__` will stick around forever, but it's not powerful enough to solve the problems you'll be wanting to solve.
I think this will do the trick:

```python
def __array__(self):
    # we are a scalar, and the only appropriate dtype is an object array
    arr = np.empty((), dtype=object)
    arr[()] = self
    return arr
```
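A self-contained sketch of this pattern, using a hypothetical `Scalar` stand-in rather than clifford's actual `MultiVector`: wrapping `self` in a 0-d object array stops numpy from trying to coerce the scalar to a numeric dtype.

```python
import numpy as np

class Scalar:
    """Hypothetical scalar-like type, standing in for MultiVector."""
    def __init__(self, value):
        self.value = value

    def __array__(self):
        # wrap ourselves in a 0-d object array rather than letting
        # numpy attempt a (lossy or failing) numeric coercion
        arr = np.empty((), dtype=object)
        arr[()] = self
        return arr

s = Scalar(42)
a = np.asarray(s)
print(a.shape, a.dtype)  # () object
print(a[()] is s)        # True: the array holds the object itself
```

`np.asarray` sees the `__array__` method and uses its result directly, so the wrapped object round-trips unchanged.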
The big problem with `__array_wrap__` is that it runs _after_ numpy has already done the computation. If you're going to do your own computation anyway, you should use `__array_ufunc__`. But if...
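A minimal sketch of the `__array_ufunc__` approach, with a hypothetical `Wrapped` class standing in for clifford's real type: the hook is invoked _before_ numpy computes anything, so the class can substitute its own computation entirely rather than patching up a result afterwards.

```python
import numpy as np

class Wrapped:
    """Hypothetical wrapper type, standing in for MultiVector."""
    def __init__(self, value):
        self.value = value

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # called before numpy does any work, so we fully control
        # the computation here
        if method != "__call__":
            return NotImplemented
        unwrapped = [x.value if isinstance(x, Wrapped) else x
                     for x in inputs]
        return Wrapped(ufunc(*unwrapped, **kwargs))

w = Wrapped(2.0)
print(np.add(w, 3.0).value)      # 5.0
print(np.multiply(w, w).value)   # 4.0
```

Returning `NotImplemented` for unsupported methods lets numpy fall back to other operands' implementations, which `__array_wrap__` cannot do.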
No, that's much harder. You should at least use:

```python
dual_array = np.vectorize(MultiVector.dual)
b = dual_array(a)
```

To actually make those methods, you'd probably have to make a new MultiVectorArray...
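A runnable sketch of the `np.vectorize` idiom above, with a toy `MV` class standing in for clifford's `MultiVector` (its `dual` here is an arbitrary placeholder, not the real geometric-algebra dual):

```python
import numpy as np

class MV:
    """Toy stand-in for clifford's MultiVector."""
    def __init__(self, v):
        self.v = v
    def dual(self):
        # placeholder operation, just to have something to map
        return MV(-self.v)

# an object array holding MV instances
a = np.array([MV(1), MV(2), MV(3)], dtype=object)

# np.vectorize maps the unbound method over every element,
# handling broadcasting and arbitrary array shapes for free
dual_array = np.vectorize(MV.dual)
b = dual_array(a)
print([mv.v for mv in b])  # [-1, -2, -3]
```

`np.vectorize` is a convenience loop rather than a true compiled vectorization, but it beats writing the element-wise iteration by hand.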
Who knows? We don't really have any canonical benchmarks.
Mac CI is failing because `conda` is pinned to a super old version, but I'm tempted to deal with that later.
Hmm, seems ~1.4x slower to start up. With this patch:

```
In [1]: import clifford

In [2]: %timeit clifford.Cl(5)
813 ms ± 20.9 ms per loop (mean ± std. dev. of...
```
Multiplication hasn't really changed. After:

```
In [7]: %timeit e1 * e234
7.69 µs ± 175 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

In...
```