
can_train tutorial fails with new sparse casting


This is related to the test failure of PR #235, which replaces the current sparse matrix casting to torch with the newly merged `from_sparse` method.

The can_train tutorial fails because it ends up calling `.indices()` on an uncoalesced sparse matrix in `topomodelx/nn/cell/can_layer.py` (`MultiHeadLiftLayer`):

RuntimeError: Cannot get values on an uncoalesced tensor, please call .coalesce() first
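
For context, here is a minimal sketch (a toy hand-built tensor, not data from the tutorial) of why `.indices()` requires a coalesced COO tensor:

import torch

# Toy COO tensor with a duplicate (0, 1) entry; torch.sparse_coo_tensor does
# not coalesce on construction, so this tensor starts out uncoalesced.
i = torch.tensor([[0, 0, 1], [1, 1, 2]])
v = torch.tensor([1.0, 2.0, 3.0])
a = torch.sparse_coo_tensor(i, v, (3, 3))

# a.indices() here raises the error above; coalescing first sums the
# duplicate entries and then indices()/values() work as expected.
source, target = a.coalesce().indices()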

Adding a `coalesce()` call in the `LiftLayer` forward method in `can_layer.py` solves the above issue:

source, target = neighborhood_0_to_0.coalesce().indices()  # (num_edges,)

but then training fails due to a tensor dimension mismatch in `MultiHeadLiftLayer.forward`:

File ~/workspace/code/TopoModelX/topomodelx/nn/cell/can_layer.py:273, in MultiHeadLiftLayer.forward(self, x_0, neighborhood_0_to_0, x_1)
    271     print(combined_x_1.shape)
    272     print(x_1.shape)
--> 273     combined_x_1 = torch.cat(
    274         (combined_x_1, x_1), dim=1
    275     )  # (num_edges, heads + in_channels_1)
    277 return combined_x_1

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 55 but got size 38 for tensor number 1 in the list.

This is suspicious, so after tracing the layers and tensors I found the source of the issue to be the `adjacency_0` matrix. Switching back to the old sparse -> dense -> sparse casting solves the issue. Investigating further shows that the resulting sparse matrices are not identical.

# After tracing through the CAN layers, the data source with the issue is this
# part of the tutorial (cc_list is the list of cell complexes loaded there)
import torch
from topomodelx.utils.sparse import from_sparse  # assumed import path for the newly merged helper

adjacency_0_list = []
for cell_complex in cc_list:
    adjacency_0 = cell_complex.adjacency_matrix(rank=0)  # scipy sparse matrix
    adjacency_0_new = from_sparse(adjacency_0)  # new casting
    adjacency_0_old = torch.from_numpy(adjacency_0.todense()).to_sparse()  # old casting
    adjacency_0_list.append((adjacency_0, adjacency_0_new, adjacency_0_old))

for i, elements in enumerate(adjacency_0_list):
    print(i)
    original, new, old = elements
    torch.testing.assert_allclose(new, old)
0
AssertionError: The number of specified values in sparse COO tensors does not match: 55 != 38

Comparing with the original numpy sparse matrix, the new `from_sparse` method matches the number of stored indices while the current method does not:

len(original.indices)
55
len(new.indices)
55
len(old.indices)
28

The issue is that the original matrix contains explicitly stored zero values, which are preserved by the new `from_sparse` method but ignored and excluded by the current casting. We can validate this by checking that the dense representations of all three matrices match:

# compare torch sparse using new from_sparse()
np.allclose(original.todense(), new.to_dense().numpy())
True 

# compare torch sparse using old sparse -> dense -> sparse
np.allclose(original.todense(), old.to_dense().numpy())
True
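
To make the difference concrete, here is a small self-contained sketch (a toy scipy matrix rather than the tutorial data, with a hand-rolled COO conversion standing in for `from_sparse`) showing how an explicitly stored zero survives the direct conversion but is dropped by the dense round trip:

import numpy as np
import scipy.sparse as sp
import torch

# Toy matrix with one explicitly stored zero at position (1, 2).
m = sp.coo_matrix(
    (np.array([1.0, 0.0, 2.0]), (np.array([0, 1, 2]), np.array([1, 2, 0]))),
    shape=(3, 3),
)

# Direct COO conversion (a stand-in for what from_sparse appears to do):
# the stored zero is kept as a specified value.
new = torch.sparse_coo_tensor(
    torch.tensor(np.vstack((m.row, m.col)), dtype=torch.long),
    torch.tensor(m.data),
    m.shape,
)
# Dense round trip (the old casting): the stored zero is silently dropped.
old = torch.from_numpy(np.asarray(m.todense())).to_sparse()

print(new._nnz(), old._nnz())  # 3 2
print(np.allclose(m.todense(), new.to_dense().numpy()))  # True
print(np.allclose(m.todense(), old.to_dense().numpy()))  # True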

Given that I'm not yet familiar with the CAN architecture: do the explicit zeros represent something and need to be preserved?

1. Assuming this is a plain adjacency matrix, the answer might be no, and the issue is in the generation of the original matrix, which seems to come from `toponetx.classes.cell_complex`. I'm not familiar with that repo, so I'd love for someone to share their thoughts (a possible workaround is sketched after this list).

2. If the zeros do need to be preserved, then there's a problem with the current casting implementation that the new method exposed.
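
If the zeros turn out to be spurious (case 1), one possible workaround on the torch side, sketched here with a hypothetical helper that is not part of TopoModelX, would be to filter them out after casting:

import torch

def drop_explicit_zeros(sparse_tensor: torch.Tensor) -> torch.Tensor:
    # Return a copy of a sparse COO tensor without explicitly stored zeros.
    sparse_tensor = sparse_tensor.coalesce()
    mask = sparse_tensor.values() != 0
    return torch.sparse_coo_tensor(
        sparse_tensor.indices()[:, mask],
        sparse_tensor.values()[mask],
        sparse_tensor.shape,
    )

# e.g. adjacency_0_new = drop_explicit_zeros(from_sparse(adjacency_0))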

Would love some feedback from those familiar with the CAN architecture / math.

@mhajij @ninamiolane @papamarkou @jarpri

Read more about PyTorch sparse COO tensors and `coalesce()` in the torch.sparse documentation.

jkhouja · Oct 06 '23