GCond
Is this semi-supervised learning?
Hello, I'd like to ask a question. In many previous papers, the adjacency matrix used by the GCN includes all nodes, while in this paper it seems to include only the nodes of the training set. So is this still semi-supervised training?
If unlabeled samples are not involved in the training process, doesn't that mean we cannot effectively utilize them, and would that affect the results?
Consider an example: in the Cora dataset, with a compression rate of 0.1, the dimension of the condensed adjacency matrix is 14. Does this represent 10% of the training nodes rather than 10% of the entire dataset? This genuinely confuses me.
In our experiments, we actually consider both transductive (semi-supervised training) and inductive settings.
> in the Cora dataset, with a compression rate of 0.1, the dimension of the condensed adjacency matrix is 14. Does this represent 10% of the training nodes rather than 10% of the entire dataset?
It depends on how you define the compression rate. In Table 2 of our paper, the compression rate is defined as the ratio of the condensed graph size (14 nodes) to the original graph size (2710 nodes), which would be 0.5%. Although we set r to 0.1 in the code, that is just for ease of implementation, and it differs from how we calculate the compression rate in practice.
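Here is a minimal sketch of the arithmetic above, assuming the standard Cora split with 140 labeled training nodes (an assumption not stated in this thread); the other numbers (r = 0.1, 14 condensed nodes, 2710 total nodes) come from the discussion:

```python
# Hypothetical illustration of the two ways of measuring "compression".
num_train_nodes = 140   # labeled training nodes in the standard Cora split (assumed)
num_total_nodes = 2710  # total nodes in the original graph (from the reply above)
r = 0.1                 # ratio used in the code for convenience

# The condensed graph size is taken relative to the labeled training set ...
num_condensed = int(r * num_train_nodes)
print(num_condensed)  # 14 condensed nodes

# ... while the compression rate reported in Table 2 is relative to the full graph.
compression_rate = num_condensed / num_total_nodes
print(f"{compression_rate:.1%}")  # ~0.5%
```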
Hope this helps. Thanks.
Thanks very much!