diffpool
Difference between methods
Hi
Thank you for your implementation. You have three different models in your encoders: soft-assign, base-set2set, and base. What's the difference between these?
Hi, the soft-assign is the pooling method, base is the baseline, and set2set is the baseline with set-aggregation pooling over all node embeddings. The set2set method refers to "Order Matters: Sequence to sequence for sets".
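For anyone else reading: the core of set2set is a permutation-invariant attention readout over the node embeddings, so the pooled graph vector does not depend on node ordering. Here is a minimal pure-Python sketch of the readout step inside one set2set iteration (the function name and toy values are mine, not from this repo); the full method additionally feeds the concatenation of the query and this readout back into an LSTM:

```python
import math

def attention_readout(node_embs, q):
    """One set2set-style readout: score each node embedding against a
    query vector, softmax the scores, and return the attention-weighted
    sum of the embeddings (permutation invariant by construction)."""
    # e_i = <x_i, q>
    scores = [sum(xi * qi for xi, qi in zip(x, q)) for x in node_embs]
    # numerically stable softmax over the scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    # r = sum_i a_i * x_i
    dim = len(q)
    return [sum(attn[i] * node_embs[i][d] for i in range(len(node_embs)))
            for d in range(dim)]

nodes = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy node embeddings
q = [0.5, -0.5]                               # toy query vector
r1 = attention_readout(nodes, q)
r2 = attention_readout(list(reversed(nodes)), q)
# shuffling the nodes leaves the readout unchanged
assert all(abs(a - b) < 1e-9 for a, b in zip(r1, r2))
```

This permutation invariance is exactly why set2set is a reasonable baseline aggregator for all node embeddings of a graph.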
Hi, I have a question about your implementation of Set2Set, which comes from the PyG issue.
I am curious about the first computation step of the loop:
hidden = (torch.zeros(self.num_layers, batch_size, self.lstm_output_dim).cuda(),
          torch.zeros(self.num_layers, batch_size, self.lstm_output_dim).cuda())
q_star = torch.zeros(batch_size, 1, self.hidden_dim).cuda()
for i in range(n):
    # q: batch_size x 1 x input_dim
    q, hidden = self.lstm(q_star, hidden)
The inputs to the LSTM are q_star and hidden, both initialized as zero vectors. Am I right that the updated q and hidden after the first iteration therefore depend only on the initialized biases of the LSTM unit?
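That reading seems correct: with zero input and zero hidden/cell state, every gate pre-activation collapses to its bias term. To check this, here is a minimal scalar LSTM step written from the standard LSTM equations (all weights and biases below are made-up toy values, not from this repo); zeroing the input and recurrent weights does not change the first step's output:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One scalar LSTM step. W: input weights, U: recurrent weights,
    b: biases, each a dict with keys i (input gate), f (forget gate),
    g (candidate), o (output gate)."""
    i = sigmoid(W["i"] * x + U["i"] * h + b["i"])
    f = sigmoid(W["f"] * x + U["f"] * h + b["f"])
    g = math.tanh(W["g"] * x + U["g"] * h + b["g"])
    o = sigmoid(W["o"] * x + U["o"] * h + b["o"])
    c_new = f * c + i * g        # forget term vanishes when c == 0
    h_new = o * math.tanh(c_new)
    return h_new, c_new

W = {"i": 0.5, "f": 0.3, "g": -0.2, "o": 0.7}  # toy weights
U = {"i": 0.1, "f": 0.4, "g": 0.6, "o": -0.5}
b = {"i": 0.2, "f": -0.1, "g": 0.3, "o": 0.4}

# First step with x = h = c = 0 (mirrors q_star and hidden above):
h1, c1 = lstm_step(0.0, 0.0, 0.0, W, U, b)

# Identical result when all input/recurrent weights are zeroed,
# i.e. only the biases contribute on the first step.
zeros = {k: 0.0 for k in W}
h2, c2 = lstm_step(0.0, 0.0, 0.0, zeros, zeros, b)
assert h1 == h2 and c1 == c2
```

Note that from the second iteration onward q_star and hidden are no longer zero, so the weights do influence every subsequent step.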