Eelco Hoogendoorn
> Unless I've misread

Having now taken more than 30 seconds to dig into the source, it seems I did misread. Looking at the source of mv_multiply, axis 0 is the mv index...
Right; that makes sense then! Harping on about the same topic: as currently implemented, at least one batch axis appears mandatory; I can't just create a single vector and act...
> I haven't tried it now but if that's the case that's a bug yes, it should be batch-agnostic. I have to manually fix the broadcasting sometimes to get things...
Having studied the code of jaxga a bit more, and as discussed on Discord, I think registering MultiVector as a pytree would be an objective improvement; that would allow...
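To sketch what I mean (this is a toy stand-in, not jaxga's actual class; the `values`/`indices` attribute names are my assumptions), registering the container with `jax.tree_util` lets jit/vmap/grad see straight through it:

```
import jax
import jax.numpy as jnp

# Toy stand-in for a multivector container: per-blade value arrays
# (dynamic leaves) plus static basis-blade indices.
class MV:
    def __init__(self, values, indices):
        self.values = values    # tuple of jnp arrays, traced by JAX
        self.indices = indices  # static metadata; must be hashable for jit

def _flatten(mv):
    # children are the traced arrays; indices go into static aux data
    return (mv.values,), mv.indices

def _unflatten(indices, children):
    (values,) = children
    return MV(values, indices)

jax.tree_util.register_pytree_node(MV, _flatten, _unflatten)

# MV now passes transparently through transformations, e.g.:
double = jax.jit(lambda mv: MV(tuple(v * 2 for v in mv.values), mv.indices))
```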
Elaborating a bit more: the only places I've noticed where a contiguous array layout would directly benefit are _values_mv_dual and scalar multiplication of a multivector; sure, it's...
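For scalar multiplication specifically, pytree registration already buys you a one-liner over the per-blade arrays with no contiguity required; a sketch using the toy MV class from above:

```
# Scale every per-blade array independently; tree_map rebuilds the MV
# via the registered unflatten, so no contiguous layout is needed.
def scale(s, mv):
    return jax.tree_util.tree_map(lambda v: s * v, mv)
```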
It's frustrating how little public documentation of XLA/JAX compilation seems to exist out there... that, or my google-fu is just poor... I suppose the ideal GPU kernel would be one that...
It's of course not actually a straight-up matrix multiply we are after, but something of the form einsum('i, ij, j', a, signs, b)... and then vmapping that; so how well this...
Wait, I'm being an idiot; I'm missing the Cayley table there; so it'd be something like einsum('i, ijk, j -> k', a, C, b), with C a 3d tensor with a...
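To sanity-check that form before involving any library: a hand-rolled Cayley tensor for plain quaternions (the (1, i, j, k) basis order and signs below are my own convention, not anything jaxga defines), pushed through exactly that einsum:

```
import numpy as np
import jax.numpy as jnp

# table[i][j] = (k, sign) such that e_i * e_j = sign * e_k,
# basis order (1, i, j, k).
table = [
    [(0, +1), (1, +1), (2, +1), (3, +1)],   # 1 * (1, i, j, k)
    [(1, +1), (0, -1), (3, +1), (2, -1)],   # i * (1, i, j, k)
    [(2, +1), (3, -1), (0, -1), (1, +1)],   # j * (1, i, j, k)
    [(3, +1), (2, +1), (1, -1), (0, -1)],   # k * (1, i, j, k)
]
C = np.zeros((4, 4, 4))
for i in range(4):
    for j in range(4):
        k, s = table[i][j]
        C[i, j, k] = s

def quat_mul(a, b):
    # (a * b)_k = sum_ij a_i C_ijk b_j
    return jnp.einsum('i,ijk,j->k', a, jnp.asarray(C), b)

# sanity check: i * j = k
print(quat_mul(jnp.array([0., 1., 0., 0.]), jnp.array([0., 0., 1., 0.])))
# -> [0. 0. 0. 1.]
```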
Just wrote a script to generate the 3-dim tensor C above for a simple quat:
```
[[[ 1  0  0  0]
  [ 0  1  0  0]
  [ 0  0  1 ...
```
```
alg = Algebra((1, 1, 1, 0))
E = alg.elements_by_grade
even = E[0] + E[2] + E[4]
i = [alg.elements.index(e) for e in even]
print(alg.sparse_cayley[i][:, i][:, :, i])  # shape...
```
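Assuming that slice is materialized as a dense jnp array (the even subalgebra of a 4d algebra has dimension 1 + 6 + 1 = 8, so it should come out (8, 8, 8)), the same einsum then gives batched motor composition as a single contraction; names here are hypothetical:

```
import jax
import jax.numpy as jnp

# Hypothetical: C_even is the dense (8, 8, 8) Cayley tensor printed above.
def motor_mul(a, b, C_even):
    # one fused contraction per product; XLA sees a single einsum kernel
    return jnp.einsum('i,ijk,j->k', a, C_even, b)

# Batch over N motor pairs; the Cayley tensor itself is shared, not mapped.
batched_mul = jax.jit(jax.vmap(motor_mul, in_axes=(0, 0, None)))
# motors_out = batched_mul(motors_a, motors_b, C_even)  # (N, 8) in and out
```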