pytorch_sparse
Adding two sparse tensors does not work
I tried to add two sparse tensors and failed with the error below. I am using PyTorch 1.6.0 and pytorch_sparse 0.6.8.
Here is a minimal example which reproduces my error:
>>> import torch
>>> from torch_sparse import SparseTensor, add
>>> a = SparseTensor.from_dense(torch.ones([2,3]))
>>> a
SparseTensor(row=tensor([0, 0, 0, 1, 1, 1]),
             col=tensor([0, 1, 2, 0, 1, 2]),
             val=tensor([1., 1., 1., 1., 1., 1.]),
             size=(2, 3), nnz=6, density=100.00%)
>>> add(a,a)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
...
     15         else:
     16             raise ValueError(
---> 17                 f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
     18                 f'(1, {src.size(1)}, ...), but got size {other.size()}.')
     19         if value is not None:

TypeError: size() missing 1 required positional argument: 'dim'
In the above example I would have expected add(a, a) == 2 * a.
Is that correct, or am I using the wrong function?
I get the same error for a + a.
Addition of two SparseTensors is currently not supported, sorry!
Is this on your roadmap?
Yes, it's on the roadmap.
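Until native support lands, one possible stopgap is a dense round-trip (just a sketch; only sensible when both operands fit in memory as dense tensors):

import torch
from torch_sparse import SparseTensor

a = SparseTensor.from_dense(torch.ones(2, 3))
b = SparseTensor.from_dense(torch.eye(2, 3))

# Densify both operands, add them, and convert the result back.
c = SparseTensor.from_dense(a.to_dense() + b.to_dense())

This of course gives up the memory advantage of the sparse format during the addition itself.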
For the moment, I implemented addition and subtraction the following way:
import torch
from torch_sparse import SparseTensor, coalesce


def add_sparse(a, b):
    assert a.sizes() == b.sizes(), "The tensor dimensions do not match"
    # Concatenate the COO representations of both tensors, ...
    row_a, col_a, values_a = a.coo()
    row_b, col_b, values_b = b.coo()
    index = torch.stack([torch.cat([row_a, row_b]), torch.cat([col_a, col_b])])
    value = torch.cat([values_a, values_b])
    # ... then let coalesce sum the values of duplicate indices.
    m, n = a.sizes()
    index, value = coalesce(index, value, m=m, n=n)
    return SparseTensor.from_edge_index(index, value, sparse_sizes=(m, n))


def sub_sparse(a, b):
    assert a.sizes() == b.sizes(), "The tensor dimensions do not match"
    # Same as add_sparse, but with the values of b negated.
    row_a, col_a, values_a = a.coo()
    row_b, col_b, values_b = b.coo()
    index = torch.stack([torch.cat([row_a, row_b]), torch.cat([col_a, col_b])])
    value = torch.cat([values_a, -values_b])
    m, n = a.sizes()
    index, value = coalesce(index, value, m=m, n=n)
    return SparseTensor.from_edge_index(index, value, sparse_sizes=(m, n))
It does the job for me, and autograd works this way.
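As a quick sanity check (a sketch; the row/col/val entries here are arbitrary), add_sparse(a, a) should densify to 2 * a, and gradients should reach the input values, since coalesce sums duplicates differentiably:

import torch
from torch_sparse import SparseTensor

row = torch.tensor([0, 0, 1])
col = torch.tensor([0, 2, 1])
val = torch.rand(3, requires_grad=True)
a = SparseTensor(row=row, col=col, value=val, sparse_sizes=(2, 3))

c = add_sparse(a, a)
assert torch.allclose(c.to_dense(), 2 * a.to_dense())

# Each input value appears twice in the coalesced sum, so its gradient is 2.
c.coo()[2].sum().backward()
print(val.grad)  # tensor([2., 2., 2.])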
Looks good! I think this can be made more efficient with custom CUDA kernels, but it's good to support it nonetheless. Are you interested in contributing your solution?
Hi,
In case a simple function is sufficient for you, I can contribute that.
Yes, please :) Really appreciate it.