Fix incorrect SparseTensor handling in GCNConv forward pass
Bug Description
In the current GCNConv.forward() implementation, SparseTensor inputs are passed through gcn_norm, but the resulting normalized tensor is not unwrapped into edge_index and edge_weight.
This leads to a faulty call to self.propagate(...), which expects edge data in coordinate (COO) format rather than as a SparseTensor. Consequently, message passing yields incorrect results when a SparseTensor is used.
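For reference, a minimal sketch of the call pattern that reportedly triggers the problem; the graph, sizes, and feature dimensions here are illustrative, not taken from the PR:

```python
import torch
from torch_sparse import SparseTensor
from torch_geometric.nn import GCNConv

x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
# PyG convention: pass the transposed adjacency when using SparseTensor.
adj_t = SparseTensor.from_edge_index(edge_index, sparse_sizes=(4, 4)).t()

conv = GCNConv(16, 32, cached=True)
out = conv(x, adj_t)  # SparseTensor path described in the bug report
```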
Fix
This PR adds logic to extract row, col, and edge_weight from the normalized SparseTensor via .coo(), stacking row and col into a COO-format edge_index. It also caches the unwrapped tensors when self.cached is True.
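A minimal sketch of the unwrapping step described above, assuming the gcn_norm helper exposed in torch_geometric.nn.conv.gcn_conv; the standalone function normalize_and_unwrap is illustrative, not the exact shape of the diff:

```python
import torch
from torch_sparse import SparseTensor
from torch_geometric.nn.conv.gcn_conv import gcn_norm

def normalize_and_unwrap(adj_t: SparseTensor, improved: bool = False,
                         add_self_loops: bool = True):
    # gcn_norm returns a normalized SparseTensor when given one.
    adj_t = gcn_norm(adj_t, None, adj_t.size(0), improved, add_self_loops)
    # Unwrap into COO form so self.propagate(...) receives
    # edge_index and edge_weight instead of a SparseTensor.
    row, col, edge_weight = adj_t.coo()
    edge_index = torch.stack([row, col], dim=0)
    return edge_index, edge_weight
```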
Impact
- Ensures correct input to self.propagate(...) when a SparseTensor is used.
- Restores correct message-passing results for SparseTensor inputs.
- Fully backward compatible.
Related
Fixes #10047
Let me know if you'd like a test case... happy to contribute one!
I think this already works as expected. Am I missing something?
@KAVYANSHTYAGI

> Let me know if you'd like a test case... happy to contribute one!

can you add a test case?

> I think this already works as expected. Am I missing something?
The issue arises specifically when a SparseTensor is passed as edge_index into GCNConv with cached=True. After applying gcn_norm, the resulting normalized tensor isn't unwrapped back into (edge_index, edge_weight) before self.propagate() is called, which leads to incorrect results.
This patch ensures that when SparseTensor is used, we correctly extract row, col, and edge_weight via .coo(), and cache them properly.
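A sketch of one such test, comparing the dense edge_index path against the SparseTensor path; the fixture style and tolerance are assumptions rather than PyG test-suite conventions:

```python
import torch
from torch_sparse import SparseTensor
from torch_geometric.nn import GCNConv

def test_gcn_conv_sparse_tensor_matches_edge_index():
    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
    adj_t = SparseTensor.from_edge_index(edge_index, sparse_sizes=(4, 4)).t()

    conv = GCNConv(16, 32, cached=True)
    out_dense = conv(x, edge_index)   # dense edge_index path
    out_sparse = conv(x, adj_t)       # SparseTensor path under test
    assert torch.allclose(out_dense, out_sparse, atol=1e-6)
```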