
Elementwise multiplication

Open AmosDinh opened this issue 2 years ago • 10 comments

Hello, is there any way to do element-wise matrix multiplication with your library? Thank you very much!

AmosDinh avatar Oct 02 '23 07:10 AmosDinh

Yes, this should work already, e.g., sparse_mat * sparse_mat or sparse_mat * dense_mat.

rusty1s avatar Oct 02 '23 08:10 rusty1s
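To make the intended semantics concrete, here is a toy pure-Python sketch (dicts standing in for torch tensors — this is not the library's implementation) of what elementwise multiplication means for sparse operands: a missing entry is an implicit zero, so only coordinates present in both operands survive a sparse * sparse product, and a sparse * dense product keeps the sparse pattern.

```python
# Toy sketch of elementwise sparse multiplication semantics.
# A sparse matrix is modeled as a {(row, col): value} dict;
# absent coordinates are implicit zeros.

def sparse_mul(a, b):
    # sparse * sparse: only coordinates in BOTH patterns survive,
    # because x * 0 == 0
    return {ij: a[ij] * b[ij] for ij in a.keys() & b.keys()}

def sparse_mul_dense(a, dense):
    # sparse * dense: the result keeps the sparse pattern, with each
    # nnz scaled by the matching dense entry
    return {(i, j): v * dense[i][j] for (i, j), v in a.items()}
```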

This is great, thanks so much.


AmosDinh avatar Oct 02 '23 15:10 AmosDinh

What am I missing here? Does it only support 1xN matrices? Here is the error:

TypeError: SparseTensor.size() missing 1 required positional argument: 'dim'

The library code seems to be:

def mul(src: SparseTensor, other: torch.Tensor) -> SparseTensor:
    rowptr, col, value = src.csr()
    if other.size(0) == src.size(0) and other.size(1) == 1:  # Row-wise...
        other = gather_csr(other.squeeze(1), rowptr)
        pass
    elif other.size(0) == 1 and other.size(1) == src.size(1):  # Col-wise...
        other = other.squeeze(0)[col]
    else:
        raise ValueError(
            f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
            f'(1, {src.size(1)}, ...), but got size {other.size()}.')
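For clarity, the two branches in the mul() code above only broadcast a vector against the sparse matrix: a column vector of shape (N, 1) scales each row, and a row vector of shape (1, M) scales each column. A pure-Python sketch of those two branches (plain lists standing in for torch tensors and a toy CSR layout — not the library's actual code):

```python
# Toy sketch of the two broadcast branches in mul():
# values are the nnz of a CSR matrix, rowptr delimits each row's nnz,
# col holds each nnz's column index.

def mul_rowwise(rowptr, values, other_col):
    # other has shape (N, 1): scale every nnz in row i by other_col[i]
    out = values[:]
    for i in range(len(rowptr) - 1):
        for k in range(rowptr[i], rowptr[i + 1]):
            out[k] *= other_col[i]
    return out

def mul_colwise(col, values, other_row):
    # other has shape (1, M): scale every nnz by other_row[its column]
    return [v * other_row[c] for v, c in zip(values, col)]
```

Anything that is neither (N, 1) nor (1, M) falls through to the ValueError, which is why a full sparse * sparse (or sparse * full-dense) product never reaches these branches.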

My code:

import torch
from torch_sparse import SparseTensor

device = 'cuda'
dtype = torch.float64
rowA = torch.tensor([0, 0, 1, 2, 2], device=device)
colA = torch.tensor([0, 2, 1, 0, 1], device=device)
valueA = torch.tensor([1, 2, 4, 1, 3], dtype=dtype, device=device)
A = SparseTensor(row=rowA, col=colA, value=valueA)

rowB = torch.tensor([0, 0, 1, 2, 2], device=device)
colB = torch.tensor([1, 2, 2, 1, 2], device=device)
valueB = torch.tensor([2, 3, 1, 2, 4], dtype=dtype, device=device)
B = SparseTensor(row=rowB, col=colB, value=valueB)

C = A * B

Thanks for your help!

AmosDinh avatar Oct 02 '23 18:10 AmosDinh
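For reference, the expected result of the example above can be checked by hand (a plain-Python sanity check using {(row, col): value} dicts, not torch_sparse): only the coordinates (0, 2) and (2, 1) appear in both A and B, so those are the only nonzeros of C.

```python
# The matrices from the snippet above, written as coordinate dicts.
A = {(0, 0): 1.0, (0, 2): 2.0, (1, 1): 4.0, (2, 0): 1.0, (2, 1): 3.0}
B = {(0, 1): 2.0, (0, 2): 3.0, (1, 2): 1.0, (2, 1): 2.0, (2, 2): 4.0}

# Elementwise product: intersect the sparsity patterns.
C = {ij: A[ij] * B[ij] for ij in A.keys() & B.keys()}
print(sorted(C.items()))  # [((0, 2), 6.0), ((2, 1), 6.0)]
```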

Hi, I am encountering the exact same issue. I tried to work around it by concatenating the rows, cols and values of both matrices and then coalescing with the "mul" op, but sadly that reduction is not implemented in torch_scatter for CSR.

It looks like `other` is mistakenly treated as a vanilla PyTorch Tensor on line 23, even though it's a SparseTensor.

Jamy-L avatar Oct 02 '23 19:10 Jamy-L

This op was implemented in https://github.com/rusty1s/pytorch_sparse/pull/323, and it is not yet released. Let me create a new version ASAP.

rusty1s avatar Oct 04 '23 06:10 rusty1s

Hi,

I'm running into a problem trying to do sparse * dense elementwise multiplication. I think it may be related to how I create the sparse tensor: I sample rows from a larger sparse tensor and then concatenate them, i.e. a = cat(b, 0), where b is a list of single-row sparse matrices.

Now if I try to do a * a.to_dense(), I get the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[81], line 1
----> 1 a * a.to_dense()

File [~/anaconda3/envs/env_240206/lib/python3.12/site-packages/torch_sparse/mul.py:128](http://localhost:8888/~/anaconda3/envs/env_240206/lib/python3.12/site-packages/torch_sparse/mul.py#line=127), in <lambda>(self, other)
    124         value = other
    125     return src.set_value_(value, layout=layout)
--> 128 SparseTensor.mul = lambda self, other: mul(self, other)
    129 SparseTensor.mul_ = lambda self, other: mul_(self, other)
    130 SparseTensor.mul_nnz = lambda self, other, layout=None: mul_nnz(
    131     self, other, layout)

File [~/anaconda3/envs/env_240206/lib/python3.12/site-packages/torch_sparse/mul.py:32](http://localhost:8888/~/anaconda3/envs/env_240206/lib/python3.12/site-packages/torch_sparse/mul.py#line=31), in mul(src, other)
     30     other = other.squeeze(0)[col]
     31 else:
---> 32     raise ValueError(
     33         f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
     34         f'(1, {src.size(1)}, ...), but got size {other.size()}.')
     36 if value is not None:
     37     value = other.to(value.dtype).mul_(value)

ValueError: Size mismatch: Expected size (12, 1, ...) or (1, 74203, ...), but got size torch.Size([12, 74203]).

Am I doing something weird here?

Xparx avatar Mar 13 '24 14:03 Xparx

Looking at the source code, it seems this only works for broadcasting vectors against sparse matrices (I just noticed the comment above)?

What is the best way to do sparse * dense elementwise multiplication between matrices?

Is this a safe and efficient alternative?

row, col, _ = a.coo()
c = a.to_dense()
a.mul_nnz(c[row, col], layout='coo')

Xparx avatar Mar 13 '24 14:03 Xparx
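The idea behind that workaround — gather the dense entries at the sparse coordinates, then scale the nnz values so the sparsity pattern is preserved — can be sketched in plain Python (lists standing in for torch tensors; this is a conceptual toy, not torch_sparse's mul_nnz implementation):

```python
# Toy sketch of the gather-then-multiply workaround.
# coords: list of (row, col) pairs for the sparse nnz,
# values: matching nnz values, dense: a 2D list.

def mul_nnz_like(coords, values, dense):
    # Gather the dense entries at the sparse coordinates...
    gathered = [dense[r][c] for r, c in coords]
    # ...and scale each nnz; the sparsity pattern is unchanged.
    return [v * g for v, g in zip(values, gathered)]
```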

If you wanna do sparse + dense, then why not convert the sparse matrix to a dense matrix? The result will be dense anyway. Would this work for you?

rusty1s avatar Mar 14 '24 07:03 rusty1s

I solved it by doing that for now, thank you for the response and suggestion. I had assumed it would be more efficient not to densify. In my case, sparse.mul(dense) would arguably stay sparse, since the elements without data are assumed to be zero; the result would then have the same density as the sparse matrix.

Xparx avatar Mar 14 '24 14:03 Xparx

If you don't want to convert to dense, you can also just do

row, col, value = a.coo()
out = b.clone()
out[row, col] += value

rusty1s avatar Mar 14 '24 15:03 rusty1s
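The snippet above scatters the sparse nnz values into a copy of the dense matrix. A plain-Python sketch of the same idea (2D lists standing in for torch tensors — illustrative only):

```python
import copy

def add_sparse_into_dense(row, col, value, b):
    # Clone the dense matrix so the input is left untouched...
    out = copy.deepcopy(b)
    # ...then add each sparse nnz at its (row, col) coordinate.
    for r, c, v in zip(row, col, value):
        out[r][c] += v
    return out
```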