
Einsum capabilities

ghost opened this issue 5 years ago · 3 comments

Hi,

I am currently using Fastor to perform tensor contraction and I am unsure if it supports my example usage.

For example, given this Python code:

import numpy as np

t1 = np.array([[0.4, 0.11], [0.8, 0.8], [0.22, 0.21]])
t2 = np.array([[0.3, 0.3], [0.0, 0.12]])
res = np.einsum('ij,jk->ijk', t1, t2)

This produces a tensor of shape [3,2,2] with entries res[i,j,k] = t1[i,j] * t2[j,k]:

[[[0.12   0.12  ]
  [0.     0.0132]]

 [[0.24   0.24  ]
  [0.     0.096 ]]

 [[0.066  0.066 ]
  [0.     0.0252]]]

Trying to replicate this behaviour in Fastor, I wrote the following code:

enum {I,J,K};
Fastor::Tensor<double,3,2> t1 = {{0.4, 0.11},
                                 {0.8, 0.8},
                                 {0.22, 0.21}};
Fastor::Tensor<double,2,2> t2 = {{0.3, 0.3},
                                 {0.0, 0.12}};

auto res = Fastor::einsum<Fastor::Index<I,J>,Fastor::Index<J,K>,Fastor::OIndex<I,J,K>>(t1, t2);

And this gives an error:

include/Fastor/meta/einsum_meta.h:1047:86: error: no matching function for call to ‘find_permuation(const std::array<long unsigned int, 2>&, const std::array<long unsigned int, 3>&)’
 1047 |     static constexpr std::array<size_t, sizeof...(Idx1)> mapped_idx = find_permuation(mapped_idx0,mapped_idx1);

Is this supported?

Thanks.

ghost avatar Sep 10 '20 12:09 ghost

In your example, index J is contracted. Hence, the result tensor has only two indices remaining: I and K. Removing J from OIndex should fix the issue.
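
For reference, a minimal sketch of that fix (note this yields the ordinary matrix product, a tensor of shape [3,2], rather than the [3,2,2] tensor from the NumPy example):

auto res = Fastor::einsum<Fastor::Index<I,J>,Fastor::Index<J,K>,Fastor::OIndex<I,K>>(t1, t2);
// res is a Fastor::Tensor<double,3,2>, i.e. the matrix product of t1 and t2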

matthiasneuner avatar Sep 13 '20 11:09 matthiasneuner

The explicit Einstein summation mode that NumPy offers is currently not supported in Fastor. There is already a discussion on how to support it (see #91). One way around this for now is to not contract the index that you want to retain, that is:

auto res = Fastor::einsum<Fastor::Index<I,M>,Fastor::Index<N,K>, Fastor::OIndex<I,M,N,K>>(t1, t2); 

Now your problem is narrowed down to reducing over the M and N dimensions using a sum or diag function. The sum function in Fastor does not support summing along an arbitrary axis at the moment, but this is planned. So perhaps for now you can do that step manually.
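
For illustration, that manual step might look like the following sketch (assuming M and N have been added to the index enum, e.g. enum {I, J, K, M, N};, and res4 is just an illustrative name):

auto res4 = Fastor::einsum<Fastor::Index<I,M>,Fastor::Index<N,K>,
                           Fastor::OIndex<I,M,N,K>>(t1, t2);  // outer product, shape [3,2,2,2]
Fastor::Tensor<double,3,2,2> res;
for (size_t i = 0; i < 3; ++i)
    for (size_t m = 0; m < 2; ++m)
        for (size_t k = 0; k < 2; ++k)
            res(i, m, k) = res4(i, m, m, k);  // keep only the M == N "diagonal" entries

This reproduces the NumPy result res(i,m,k) = t1(i,m) * t2(m,k) without summing anything away.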

romeric avatar Sep 13 '20 13:09 romeric


Hi, thanks for the response. I am not sure what the summation over M and N would look like in order to produce res.

It looks like the following code:

auto res = Fastor::einsum<Fastor::Index<I,M>,Fastor::Index<N,K>, Fastor::OIndex<I,M,N,K>>(t1, t2);

will produce additional entries of no interest, so in order to get to res I would need to filter out those indices.

In Python this could be done with two einsum calls (the second extracts the diagonal along the repeated M index):

res = np.einsum('IM,NK->IMNK', t1, t2)
res2 = np.einsum('IMMK->IMK', res)

Hence, in this case, instead of summing over the contracted indices, you would place the output of the 'ij,jk->ijk' product in a tensor like so:

Fastor::Tensor<double,3,2,2> res;
for (size_t i = 0; i < 3; ++i)
    for (size_t j = 0; j < 2; ++j)
        for (size_t k = 0; k < 2; ++k)
            res(i,j,k) = t1(i,j) * t2(j,k);

Is there a better way to achieve this?

Thanks.

ghost avatar Sep 15 '20 13:09 ghost