
Does spspmm operation support autograd?

Open changym3 opened this issue 4 years ago • 17 comments

Hi, you say autograd is supported for the value tensors, but it doesn't seem to work in spspmm.

Like this:

import torch
import torch_sparse

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2.0, 3, 4, 5], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4.0], requires_grad=True)
indexC, valueC = torch_sparse.spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)

print(valueC.requires_grad)
print(valueC.grad_fn)

And the answer is:

False
None

In my case, I want to parameterize both the sparse adjacency matrix and the feature matrix in a GCN, so both inputs need to be differentiable. I wonder whether this is a bug or just the way it is.

Regards.

changym3 avatar Mar 09 '20 10:03 changym3

That's the only function without proper autograd support. Gradients for sparse-sparse matrix multiplication are quite difficult to obtain, since they are usually dense. I had a working but slow implementation up to the 0.4.4 release, but removed it since it wasn't a very good implementation. If you desperately need it, feel free to try that version out.
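To see why these gradients are dense: for C = A @ B, autograd gives dL/dA = dL/dC @ B.T, which in general has no particular sparsity even when A does. A tiny demo with dense stand-ins (not torch_sparse code, just an illustration of the point):

```python
import torch

# For C = A @ B, autograd computes dL/dA = dL/dC @ B.T. Even though A has
# a sparse pattern, its gradient fills in the zero positions as well.
A = torch.tensor([[0., 1.], [2., 0.]], requires_grad=True)  # sparse pattern
B = torch.tensor([[3., 0.], [0., 4.]])
C = A @ B
C.sum().backward()  # dL/dC is all ones
print(A.grad)       # tensor([[3., 4.], [3., 4.]]): dense despite A's zeros
```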

rusty1s avatar Mar 09 '20 16:03 rusty1s

Hey! Thanks for your great work! I installed the 0.4.4 release of torch_sparse and it works perfectly in my experiments! Maybe you could add this information to the documentation; it took me quite a while to figure out this no-autograd problem.

Thanks a lot again!

changym3 avatar Mar 10 '20 08:03 changym3

Thank you so much for raising this question! This problem troubled me for almost a week!

LuciusMos avatar Jul 27 '20 03:07 LuciusMos

Sorry for the inconveniences. I have plans to add backward support for spspmm back in ASAP, see https://github.com/rusty1s/pytorch_geometric/issues/1465.

rusty1s avatar Jul 27 '20 09:07 rusty1s

Do you have any updates on autograd support?

jlevy44 avatar Dec 27 '20 02:12 jlevy44

I'm parameterizing the weights of a sparse matrix to treat it as a locally connected network for a sparsely connected MLP implementation. Could I still run a backward pass to update these weights after calling matmul between this sparse matrix and a dense input?
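That sparse-dense case should already be differentiable. A minimal sketch, assuming a fixed connectivity pattern and using PyTorch's native torch.sparse.mm rather than anything from torch_sparse (names and shapes here are illustrative):

```python
import torch

# Sketch of learning only the values of a fixed sparse weight pattern.
# torch.sparse.mm (sparse @ dense) supports backward, so gradients reach
# the learnable values; the connectivity (index) stays fixed.
index = torch.tensor([[0, 1, 1], [2, 0, 1]])   # fixed [2, nnz] connectivity
values = torch.nn.Parameter(torch.randn(3))    # learnable weights
weight = torch.sparse_coo_tensor(index, values, (2, 3))

x = torch.randn(3, 4)                          # dense input features
out = torch.sparse.mm(weight, x)               # dense [2, 4] output
out.sum().backward()
print(values.grad.shape)                       # torch.Size([3]): one grad per stored entry
```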

jlevy44 avatar Dec 27 '20 02:12 jlevy44

Never mind, I'm already seeing some nice implementations out there! https://pypi.org/project/sparselinear/ https://stackoverflow.com/questions/63893602/neural-network-layer-without-all-connections

jlevy44 avatar Dec 27 '20 06:12 jlevy44

Does spspmm still lack autograd support?

JRD971000 avatar Aug 20 '21 16:08 JRD971000

This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity. Is this issue already resolved?

github-actions[bot] avatar Feb 17 '22 01:02 github-actions[bot]

Does spspmm still lack autograd support? @rusty1s .. it seems to use SparseTensor, which is supposed to be fully supported by autograd?

jaynanavati-az avatar Apr 14 '22 18:04 jaynanavati-az

Sadly yes :(

rusty1s avatar Apr 14 '22 22:04 rusty1s

Is there an alternative? It is difficult to get the earlier versions of torch_sparse that have this feature to work on newer CUDA versions. :(

jaynanavati-az avatar Apr 19 '22 13:04 jaynanavati-az

There isn't a workaround except for installing an earlier version. If you are interested, we can try to bring it back with your help. WDYT?

rusty1s avatar Apr 19 '22 19:04 rusty1s

@rusty1s sounds good. Why don't we start with putting back your existing implementation? Isn't that better than having nothing?

jaynanavati-az avatar Apr 21 '22 12:04 jaynanavati-az

Here's the roadmap in order to achieve this:

  • [ ] Add the CPU spspmm_bw code from https://github.com/rusty1s/pytorch_sparse/blob/0.4.4/cpu/spspmm.cpp#L19-L63
  • [ ] Add the CUDA spspmm_bw code from https://github.com/rusty1s/pytorch_sparse/blob/0.4.4/cuda/spspmm_kernel.cu#L115-L166
  • [ ] Register the autograd function as in https://github.com/rusty1s/pytorch_sparse/blob/master/csrc/spmm.cpp#L55-L86
  • [ ] Add a basic backward test as in https://github.com/rusty1s/pytorch_sparse/blob/0.4.4/test/test_spspmm.py#L29-L35
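Until that lands, the registration step can be prototyped at the Python level with a torch.autograd.Function whose backward is computed densely. This is only a sketch, not the planned C++/CUDA implementation, and the dense intermediates make it memory-hungry for large matrices:

```python
import torch

class DenseBackwardSpSpMM(torch.autograd.Function):
    """Prototype of spspmm with autograd: backward computes the dense
    gradients dA = dC @ B.T and dB = A.T @ dC, then gathers them back
    onto the input sparsity patterns."""

    @staticmethod
    def forward(ctx, indexA, valueA, indexB, valueB, m, k, n):
        A = torch.zeros(m, k, dtype=valueA.dtype, device=valueA.device)
        A = A.index_put((indexA[0], indexA[1]), valueA, accumulate=True)
        B = torch.zeros(k, n, dtype=valueB.dtype, device=valueB.device)
        B = B.index_put((indexB[0], indexB[1]), valueB, accumulate=True)
        C = A @ B
        indexC = C.nonzero().t()
        ctx.save_for_backward(indexA, indexB, indexC, A, B)
        ctx.mark_non_differentiable(indexC)  # indices carry no gradient
        return indexC, C[indexC[0], indexC[1]]

    @staticmethod
    def backward(ctx, grad_indexC, grad_valueC):
        indexA, indexB, indexC, A, B = ctx.saved_tensors
        dC = torch.zeros(A.size(0), B.size(1), dtype=grad_valueC.dtype,
                         device=grad_valueC.device)
        dC[indexC[0], indexC[1]] = grad_valueC
        dA, dB = dC @ B.t(), A.t() @ dC  # dense, as discussed above
        return (None, dA[indexA[0], indexA[1]],
                None, dB[indexB[0], indexB[1]], None, None, None)

# The example from the start of this thread now reports a grad_fn:
indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1., 2., 3., 4., 5.], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2., 4.], requires_grad=True)
indexC, valueC = DenseBackwardSpSpMM.apply(indexA, valueA, indexB, valueB, 3, 3, 2)
print(valueC.requires_grad)  # True
```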

rusty1s avatar Apr 22 '22 04:04 rusty1s

Is there an alternative? It is difficult to get earlier versions of torch sparse that have this to work on newer cuda versions.. :(

jaynanavati-az avatar Oct 11 '22 08:10 jaynanavati-az

With PyTorch 1.12, I assume you can also try to use the sparse-matrix multiplication from PyTorch directly. PyTorch recently integrated better sparse matrix support into its library :)
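For reference, a sketch of that with PyTorch's native torch.sparse.mm, reusing the example from the top of the thread (autograd support for the sparse-sparse case depends on your PyTorch version, so verify it before relying on it):

```python
import torch

# Native sparse-sparse matmul; per the torch.sparse.mm docs it supports
# backward, with gradients flowing to the value tensors.
indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1., 2., 3., 4., 5.], requires_grad=True)
A = torch.sparse_coo_tensor(indexA, valueA, (3, 3))

indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2., 4.], requires_grad=True)
B = torch.sparse_coo_tensor(indexB, valueB, (3, 2))

C = torch.sparse.mm(A, B)    # sparse result
print(C.requires_grad)       # True
C.to_dense().sum().backward()
print(valueA.grad)           # gradients at A's five stored entries
```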

rusty1s avatar Oct 11 '22 11:10 rusty1s