
difference between methods

Open AmirSh15 opened this issue 5 years ago • 2 comments

Hi

Thank you for your implementation. You have three different models in your encoders: soft-assign, base-set2set, and base. What's the difference between these?

AmirSh15 avatar Jul 11 '19 17:07 AmirSh15

Hi, the soft-assign is the pooling method, base is the baseline, and set2set is the baseline with set aggregation pooling over all node embeddings. The set2set method refers to "Order Matters: Sequence to sequence for sets".
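For readers unfamiliar with set2set: each processing step attends over all node embeddings with a query produced by an LSTM, reads out an attention-weighted sum, and concatenates it back onto the query. A minimal numpy sketch of one such step (illustrative only, not the repo's code; the names `set2set_step`, `X`, `q` are made up here):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def set2set_step(X, q):
    # X: (n_nodes, d) node embeddings; q: (d,) query from the LSTM.
    a = softmax(X @ q)             # attention weights over the node set
    r = a @ X                      # attended readout: weighted sum of embeddings
    return np.concatenate([q, r])  # q_star = [q, r], fed back to the LSTM

X = np.random.rand(5, 8)           # 5 nodes, embedding dim 8
q = np.zeros(8)                    # initial query (zeros, as in the snippet below)
q_star = set2set_step(X, q)
print(q_star.shape)                # (16,)
```

Because the readout is a permutation-invariant weighted sum, the result does not depend on node ordering, which is why it works as a graph-level aggregator.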

RexYing avatar Aug 16 '19 12:08 RexYing


Hi, I have a question about your implementation of Set2Set, which comes from the PyG issue.

I am curious about the first computation step of the loop.

hidden = (torch.zeros(self.num_layers, batch_size, self.lstm_output_dim).cuda(),
          torch.zeros(self.num_layers, batch_size, self.lstm_output_dim).cuda())

q_star = torch.zeros(batch_size, 1, self.hidden_dim).cuda()
for i in range(n):
    # q: batch_size x 1 x input_dim
    q, hidden = self.lstm(q_star, hidden)

The inputs to the LSTM unit are q_star and hidden, both initialized as zero vectors. Correct me if I am wrong, but it seems that the updated q and hidden in the first iteration depend only on the initialized biases of the LSTM unit.

RuihongQiu avatar Nov 26 '19 06:11 RuihongQiu