HGP-SL
How to apply HGP-SL to dense batched adjacency matrix
Hi, thanks for your code.
In my code, I have to convert my batched sparse adjacency matrices into a single dense batched adjacency matrix, since my adjacency matrix is very large.
However, when I apply HGP-SL to my code, the following error occurs:
ValueError: too many values to unpack (expected 2)
So I checked the code and found row, col = edge_index in layers.py, which means the parameter passed to HGP-SL must be a sparse adjacency matrix (an edge_index). I really want to use HGP-SL in my code, but I don't know how to change the code in HGP-SL so that I can pass a dense batched adjacency matrix instead.
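One workaround on the caller's side, before touching the HGP-SL code at all, is to convert the dense batched adjacency matrix back into the edge_index format that layers.py expects. This is a minimal sketch in plain PyTorch (the function name dense_batch_to_edge_index is my own, not part of the repo); it offsets each graph's node indices so all graphs live in one big disconnected graph, the same layout PyTorch Geometric uses for mini-batches:

```python
import torch

def dense_batch_to_edge_index(adj):
    """Convert a dense batched adjacency matrix of shape (B, N, N) into a
    single edge_index of shape (2, E) plus a batch vector, by offsetting
    each graph's node indices into one big disconnected graph."""
    B, N, _ = adj.shape
    edge_indices = []
    for b in range(B):
        row, col = adj[b].nonzero(as_tuple=True)        # edges of graph b
        edge_indices.append(torch.stack([row, col], dim=0) + b * N)
    edge_index = torch.cat(edge_indices, dim=1)
    batch = torch.arange(B).repeat_interleave(N)        # node -> graph id
    return edge_index, batch
```

Note that this only works if the dense batches still fit in memory during preprocessing; if the whole point of going dense was memory layout rather than size, modifying HGP-SL itself (as described in the reply below) is the better route.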

Hi,
The current version only supports sparse matrices. If you want to use a dense matrix, the following parts should be modified:
- use DenseGCNConv in the convolution layers;
- the calculation of NodeInformationScore should follow the matrix multiplication in Eq(2);
- in the structure learning module, the sparse_softmax function can be replaced with the function here;
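For the second point, a dense version of the node information score might look like the sketch below. It assumes Eq(2) is the row-wise Manhattan distance between each node's features and the reconstruction from its neighbours, i.e. ||(I − D⁻¹A)X||₁, as in the HGP-SL paper; the function name and shapes are my own choices, not the repo's API:

```python
import torch

def node_information_score_dense(x, adj):
    """Dense node information score: Manhattan distance between each node's
    features and its neighbourhood reconstruction, ||(I - D^{-1} A) X||_1
    computed row-wise.  x: (B, N, F) features, adj: (B, N, N) adjacency."""
    deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)   # avoid division by zero
    neighbour_mean = torch.bmm(adj, x) / deg           # D^{-1} A X
    return (x - neighbour_mean).abs().sum(dim=-1)      # score per node, (B, N)
```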
Thanks for your reply, sorry to bother you again.
- I don't understand "the calculation of NodeInformationScore should follow the matrix multiplication in Eq(2)". As far as I can tell, Eq(2) is: [screenshot]
And the calculation of NodeInformationScore is: [screenshot]
I think it already follows the matrix multiplication in Eq(2), so I don't know how to change the NodeInformationScore class.
- In the structure learning module, the former sparse_softmax function is: [screenshot]
When I replace the sparse_softmax function, the number of parameters goes from two to one, so I wonder whether I should just pass the parameter 'weights'?
Looking forward to receiving your reply.
Hi,
Sorry for the late reply.
1. The NodeInformationScore class does indeed perform the matrix multiplication in Eq(2), but in sparse multiplication form. You need to transform it into the dense form.
2. Yes, you only need to pass the weights, but you need to re-write the function; maybe you can use this reference
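One way to re-write it as a single-input dense function is a masked row-wise softmax: take the dense weight matrix, push entries that correspond to non-edges to −inf before the softmax, so non-edges end up with zero attention. This is just a sketch of that idea (dense_masked_softmax is a hypothetical name, and whether you mask by the adjacency or operate on the full weight matrix depends on how you set up the structure learning module):

```python
import torch

def dense_masked_softmax(weights, adj):
    """Row-wise softmax over `weights`, restricted to entries where `adj`
    is non-zero, so non-edges receive zero attention.  Shapes: (B, N, N)."""
    mask = (adj != 0)
    # push masked-out logits to -inf so they contribute nothing to the softmax
    logits = weights.masked_fill(~mask, float('-inf'))
    out = torch.softmax(logits, dim=-1)
    # rows with no edges become all-NaN (0/0); zero them out
    return torch.nan_to_num(out, nan=0.0)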