pytorch_sparse

Adding of two sparse tensors does not work

Open brechtmann opened this issue 4 years ago • 7 comments

I tried to add two sparse tensors and failed with the error below. I am using pytorch 1.6.0 and pytorch_sparse 0.6.8.

Here is a minimal example which reproduces my error:

>>> import torch
>>> from torch_sparse import SparseTensor, add
>>> a = SparseTensor.from_dense(torch.ones([2,3]))
>>> a
SparseTensor(row=tensor([0, 0, 0, 1, 1, 1]),
             col=tensor([0, 1, 2, 0, 1, 2]),
             val=tensor([1., 1., 1., 1., 1., 1.]),
             size=(2, 3), nnz=6, density=100.00%)

>>> add(a,a)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

...

     15     else:
     16         raise ValueError(
---> 17             f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
     18             f'(1, {src.size(1)}, ...), but got size {other.size()}.')
     19     if value is not None:

TypeError: size() missing 1 required positional argument: 'dim'

In the above example I would have expected add(a, a) == 2 * a. Is that correct, or am I using the wrong function? I get the same error for a + a.

brechtmann avatar Feb 17 '21 16:02 brechtmann

Addition of two SparseTensors is currently not supported, sorry!

rusty1s avatar Feb 17 '21 16:02 rusty1s

Is this on your roadmap?

brechtmann avatar Feb 22 '21 10:02 brechtmann

Yes, it's on the roadmap.

rusty1s avatar Feb 22 '21 12:02 rusty1s

For the moment, I implemented addition and subtraction the following way:

import torch
from torch_sparse import SparseTensor
from torch_sparse import coalesce


def add_sparse(a, b):
    assert a.sizes() == b.sizes(), "The Tensor dimensions do not match"
    row_a, col_a, values_a = a.coo()
    row_b, col_b, values_b = b.coo()

    # Concatenate the COO entries of both tensors; coalesce then sums
    # the values of duplicate (row, col) coordinates.
    index = torch.stack([torch.cat([row_a, row_b]), torch.cat([col_a, col_b])])
    value = torch.cat([values_a, values_b])

    m, n = a.sizes()
    index, value = coalesce(index, value, m=m, n=n)
    return SparseTensor.from_edge_index(index, value, sparse_sizes=(m, n))


def sub_sparse(a, b):
    assert a.sizes() == b.sizes(), "The Tensor dimensions do not match"
    row_a, col_a, values_a = a.coo()
    row_b, col_b, values_b = b.coo()

    # Same as add_sparse, but negate the second operand's values first.
    index = torch.stack([torch.cat([row_a, row_b]), torch.cat([col_a, col_b])])
    value = torch.cat([values_a, -1 * values_b])

    m, n = a.sizes()
    index, value = coalesce(index, value, m=m, n=n)
    return SparseTensor.from_edge_index(index, value, sparse_sizes=(m, n))

It does the job for me, and autograd works this way.
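The idea behind the workaround can be illustrated without torch at all: concatenate the COO entries of both operands, then sum values that share a coordinate (which is exactly what coalesce does). A minimal pure-Python sketch, with a hypothetical dict-based helper:

```python
# Sketch of coalesce-based addition: sparse tensors as {(row, col): value}
# dicts; duplicate coordinates from the two operands are summed.
def add_coo(coo_a, coo_b):
    out = {}
    for (r, c), v in list(coo_a.items()) + list(coo_b.items()):
        out[(r, c)] = out.get((r, c), 0.0) + v
    return out
```

With identical operands every stored value is doubled, matching the expectation that add(a, a) == 2 * a.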

brechtmann avatar Mar 15 '21 14:03 brechtmann

Looks good! I think this can be made more efficient with custom CUDA kernels, but it's good to support it nonetheless. Are you interested in contributing your solution?
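One way a custom kernel could be more efficient: both operands' COO lists are already sorted, so they can be merged in a single linear pass instead of concatenated and re-coalesced (which involves a sort). A rough pure-Python sketch of that merge, using a hypothetical `merge_add` helper over row-major-sorted lists of ((row, col), value) pairs:

```python
# Single-pass merge of two sorted COO lists; equal coordinates are summed.
def merge_add(a, b):
    out, i, j = [], 0, 0
    while i < len(a) or j < len(b):
        if j >= len(b) or (i < len(a) and a[i][0] < b[j][0]):
            out.append(a[i]); i += 1          # entry only in a
        elif i >= len(a) or b[j][0] < a[i][0]:
            out.append(b[j]); j += 1          # entry only in b
        else:
            out.append((a[i][0], a[i][1] + b[j][1]))  # shared coordinate
            i += 1; j += 1
    return out
```

This is O(nnz_a + nnz_b) and keeps the output sorted, so no coalesce step is needed afterwards.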

rusty1s avatar Mar 16 '21 07:03 rusty1s

Hi,

In case a simple function is sufficient for you, I can contribute that.

brechtmann avatar Mar 24 '21 14:03 brechtmann

Yes, please :) Really appreciate it.

rusty1s avatar Mar 25 '21 06:03 rusty1s