pytorch_sparse
Does spspmm operation support autograd?
Hi, you say autograd is supported for values tensors, but it seems it doesn't work in spspmm.
Like this:
import torch
import torch_sparse

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2.0, 3, 4, 5], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4.0], requires_grad=True)
indexC, valueC = torch_sparse.spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
print(valueC.requires_grad)
print(valueC.grad_fn)
And the output is:
False
None
In my case, I want to parameterize both the sparse adjacency matrix and the feature matrix in a GCN, so both inputs need to be differentiable. I wonder whether this is a bug or just the way it is.
Regards.
That's the only function that does not have proper autograd support. Gradients for sparse-sparse matrix multiplication are quite difficult to obtain (since they are usually dense). I had a working but slow implementation up to the 0.4.4 release, but removed it since it wasn't a really good implementation. If you desperately need it, feel free to try it out.
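To see why: for C = A·B, the gradient with respect to A is ∂L/∂A = (∂L/∂C)·Bᵀ, which is in general a dense m × k matrix even when A and B are sparse, so a faithful backward pass has to materialize (or at least mask) dense intermediates.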
Hey! Thanks for your great work! I have installed the 0.4.4 release of torch_sparse and it works in my experiments! Maybe you could add this information to the documentation; it took me so long to figure out this no-autograd problem.
Thanks a lot again!
Thank you so much for raising this question! It troubled me for almost a week!
Sorry for the inconvenience. I have plans to add backward support for spspmm back ASAP, see https://github.com/rusty1s/pytorch_geometric/issues/1465.
Do you have any updates on autograd support?
I'm parameterizing the weights of a sparse matrix to treat it as a locally connected network for a sparsely connected MLP implementation. Could I still run a backward pass to update these weights after calling matmul between this sparse matrix and a dense input?
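For illustration, a minimal sketch of that sparse-weight × dense-input case using torch_sparse.spmm, which (unlike spspmm) supports autograd for the value tensor. The shapes and values here are made up:

import torch
import torch_sparse

# Hypothetical sparse "locally connected" weight matrix of shape (3, 4):
# only the listed entries exist and are trainable.
index = torch.tensor([[0, 0, 1, 2], [0, 3, 1, 2]])
value = torch.tensor([0.5, -1.0, 2.0, 0.1], requires_grad=True)

x = torch.randn(4, 8)                            # dense input features
out = torch_sparse.spmm(index, value, 3, 4, x)   # (3, 4) @ (4, 8) -> (3, 8)

out.sum().backward()
print(value.grad)                                # gradients for the sparse weights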
Never mind, I'm already seeing some nice implementations out there! https://pypi.org/project/sparselinear/ https://stackoverflow.com/questions/63893602/neural-network-layer-without-all-connections
Does spspmm still lack autograd support?
This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity. Is this issue already resolved?
Does spspmm still lack autograd support? @rusty1s It seems to use SparseTensor, which is supposed to be fully supported by autograd?
Sadly yes :(
Is there an alternative? It is difficult to get earlier versions of torch_sparse that have this to work on newer CUDA versions... :(
There isn't a workaround except for installing an earlier version. If you are interested, we can try to bring it back with your help. WDYT?
@rusty1s Sounds good, why don't we start with putting back your existing implementation? Is it not better than having nothing?
Here's the roadmap in order to achieve this:
- [ ] Add the CPU spspmm_bw code from https://github.com/rusty1s/pytorch_sparse/blob/0.4.4/cpu/spspmm.cpp#L19-L63
- [ ] Add the CUDA spspmm_bw code from https://github.com/rusty1s/pytorch_sparse/blob/0.4.4/cuda/spspmm_kernel.cu#L115-L166
- [ ] Register the autograd function as in https://github.com/rusty1s/pytorch_sparse/blob/master/csrc/spmm.cpp#L55-L86
- [ ] Add a basic backward test as in https://github.com/rusty1s/pytorch_sparse/blob/0.4.4/test/test_spspmm.py#L29-L35
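In the meantime, a rough Python-level workaround (not the planned C++ implementation, and only sensible for small matrices, since it densifies everything in the backward pass) is to wrap spspmm in a custom torch.autograd.Function:

import torch
import torch_sparse

class SpSpMM(torch.autograd.Function):
    # Sketch only: backward densifies A, B and grad(C), so memory is O(m*k + k*n + m*n).
    @staticmethod
    def forward(ctx, indexA, valueA, indexB, valueB, m, k, n):
        indexC, valueC = torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n)
        ctx.save_for_backward(indexA, valueA, indexB, valueB, indexC)
        ctx.sizes = (m, k, n)
        ctx.mark_non_differentiable(indexC)
        return indexC, valueC

    @staticmethod
    def backward(ctx, grad_indexC, grad_valueC):
        indexA, valueA, indexB, valueB, indexC = ctx.saved_tensors
        m, k, n = ctx.sizes
        # Densify both operands and the incoming gradient.
        A = torch.zeros(m, k, dtype=valueA.dtype, device=valueA.device)
        A[indexA[0], indexA[1]] = valueA
        B = torch.zeros(k, n, dtype=valueB.dtype, device=valueB.device)
        B[indexB[0], indexB[1]] = valueB
        grad_C = torch.zeros(m, n, dtype=grad_valueC.dtype, device=grad_valueC.device)
        grad_C[indexC[0], indexC[1]] = grad_valueC
        # dL/dA = dL/dC @ B^T and dL/dB = A^T @ dL/dC, restricted to the nonzero patterns.
        grad_valueA = (grad_C @ B.t())[indexA[0], indexA[1]]
        grad_valueB = (A.t() @ grad_C)[indexB[0], indexB[1]]
        return None, grad_valueA, None, grad_valueB, None, None, None

# Usage: indexC, valueC = SpSpMM.apply(indexA, valueA, indexB, valueB, 3, 3, 2)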
With PyTorch 1.12, I assume you can also try to use the sparse-matrix multiplication from PyTorch directly. PyTorch recently integrated better sparse matrix support into its library :)
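For example, something along these lines might work with a recent PyTorch (an untested sketch; whether sparse × sparse matmul and its backward are supported depends on the exact version and layout):

import torch

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2.0, 3, 4, 5], requires_grad=True)
A = torch.sparse_coo_tensor(indexA, valueA, size=(3, 3))

indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4.0], requires_grad=True)
B = torch.sparse_coo_tensor(indexB, valueB, size=(3, 2))

C = torch.sparse.mm(A, B)          # sparse @ sparse -> sparse COO
torch.sparse.sum(C).backward()     # gradients flow back to the value tensors
print(valueA.grad)
print(valueB.grad)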