GAT
Early Stopping Strategy for PPI dataset
I implemented GAT on PPI according to the paper (256, 256 hidden dims / 4, 4, 6 attention heads). I followed the early stopping strategy of Pubmed and got a 96.9 \pm 0.4 micro-F1 score. How can I get the reported 97.3 \pm 0.2? Thanks very much.
@aviczhl2 Hi, I have a doubt about these results. I used the micro_f1 function as the evaluation metric to train the model, but in the end the micro-F1 is only about 0.5. So how do you get the 96.9 \pm or 97.3 \pm mentioned in the paper? Thanks.
You should change the loss function to masked_sigmoid instead of masked_softmax, I guess, since it is multi-label classification.
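For anyone hitting the same issue, below is a minimal PyTorch sketch of why this matters. The reference GAT implementation is TensorFlow with its own masked_sigmoid/masked_softmax helpers; this snippet only illustrates the multi-label point and is not the repo's code.

```python
import torch
import torch.nn as nn

# PPI is multi-label: a node can belong to several of the 121 classes at once,
# so every output unit needs its own sigmoid + binary cross-entropy term.
logits = torch.randn(8, 121)                    # hypothetical model outputs for 8 nodes
labels = torch.randint(0, 2, (8, 121)).float()  # 0/1 targets per class

loss = nn.BCEWithLogitsLoss()(logits, labels)   # sigmoid cross-entropy, the right fit here
print(loss.item())

# A softmax cross-entropy (as used for Cora/Citeseer/Pubmed) assumes exactly one
# true class per node, so it cannot fit the PPI labels properly.
```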
@aviczhl2 I've changed it. On the other hand, what does "96.9 \pm 0.4 f1-score" mean? I ran about 50 epochs and the final F1 score is about 0.5. What is the "96.9 \pm" about? Thanks.
Another tricky thing here (I suppose you have followed everything else in the paper about the structure and the loss/evaluation functions) is that you should set dropout=0.0 and residual=True. (Since I don't have your code, I can't tell exactly which mistake you're making; I'm just listing some possibilities I've run into.)
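Collected in one place, the PPI settings mentioned in this thread look roughly like the following. This is a hypothetical summary dict; the names are illustrative, not the repo's actual configuration flags.

```python
# Hypothetical summary of the PPI hyperparameters discussed above
# (illustrative names only, not the repo's actual configuration flags).
ppi_config = {
    "hidden_units": [256, 256],       # two hidden GAT layers with 256 features each
    "attention_heads": [4, 4, 6],     # 4 heads per hidden layer, 6 on the output layer
    "dropout": 0.0,                   # no dropout on PPI, unlike the transductive datasets
    "residual": True,                 # skip connections, as suggested above
    "loss": "sigmoid cross-entropy",  # multi-label loss, see the sketch above
}
```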
BTW, \pm is the LaTeX notation for "plus or minus" (the standard deviation across runs).
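In other words, the reported number is the mean and standard deviation of the test micro-F1 over several independent runs. Here is a minimal sketch of how such a figure is computed with scikit-learn's micro-averaged F1; the per-run scores at the end are placeholders, not real results.

```python
import numpy as np
from sklearn.metrics import f1_score

def micro_f1(y_true, y_logits):
    # Threshold sigmoid logits at 0 (probability 0.5) to get 0/1 predictions,
    # then micro-average the F1 over all labels.
    y_pred = (y_logits > 0).astype(int)
    return f1_score(y_true, y_pred, average="micro")

# Random logits against balanced random labels score close to 0.5 micro-F1,
# which is roughly what a model that has not learned anything reports.
rng = np.random.default_rng(0)
print(micro_f1(rng.integers(0, 2, size=(100, 121)), rng.normal(size=(100, 121))))

# A paper-style "97.3 +/- 0.2" is the mean and standard deviation of the test
# micro-F1 over several runs (placeholder values below, not real results).
scores = np.array([0.969, 0.972, 0.966])
print(f"{scores.mean():.3f} +/- {scores.std():.3f}")
```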
@aviczhl2 Thanks for the strong reminders! I checked these parameters again and again but had overlooked the dropout-rate setting in the code. After correcting it, it seems to work well. :)
Hello,
Thank you both for the interest in GAT and for performing evaluations on PPI!
Could you try using the same early stopping strategy as for Cora? While our paper reads a little unclear on this, I think this is the one we used for PPI.
Otherwise, it could just be library versions you're using -- the result you're getting already overlaps with the reported result in terms of standard deviations.
Thanks, Petar
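For reference, the Cora-style strategy described in the paper (as I read it) stops training once neither the validation loss nor the validation accuracy/micro-F1 has improved for 100 consecutive epochs. Below is a generic, self-contained sketch of that patience logic with dummy train/eval stubs; it is not the repo's training loop.

```python
# Generic patience-based early stopping (Cora-style): stop once neither the
# validation loss nor the validation micro-F1 has improved for `patience` epochs.
# The train/eval functions are dummies standing in for the real GAT training step.
def train_one_epoch(epoch):
    pass  # placeholder for an actual training step

def evaluate_on_val(epoch):
    # Placeholder metrics that improve for ~70 epochs and then plateau.
    val_loss = max(0.30, 1.0 - 0.01 * epoch)
    val_f1 = min(0.90, 0.40 + 0.01 * epoch)
    return val_loss, val_f1

patience = 100
best_loss, best_f1 = float("inf"), 0.0
bad_epochs = 0

for epoch in range(100000):
    train_one_epoch(epoch)
    val_loss, val_f1 = evaluate_on_val(epoch)
    if val_loss < best_loss or val_f1 > best_f1:  # improvement on either metric resets patience
        best_loss = min(best_loss, val_loss)
        best_f1 = max(best_f1, val_f1)
        bad_epochs = 0
    else:
        bad_epochs += 1
        if bad_epochs == patience:
            print(f"early stop at epoch {epoch}, best micro-F1 {best_f1:.3f}")
            break
```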
Hello, do you still remember the final value of the BCE loss when training GAT inductively on PPI? I ran my own model on PPI and got a loss value of 0.55 and an F1 score of 0.5.
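As a rough reference point (not an answer about the actual converged value): a predictor that outputs logit 0, i.e. probability 0.5, for every one of the 121 labels already has a per-label BCE of ln(2) ≈ 0.693, so a loss stuck around 0.5-0.6 together with a micro-F1 of 0.5 suggests the model has barely moved away from a constant guess. A quick sanity check:

```python
import torch
import torch.nn as nn

# Baseline check: a constant predictor with logit 0 (probability 0.5) for all
# 121 PPI labels gives a per-label binary cross-entropy of ln(2) ≈ 0.693.
labels = torch.randint(0, 2, (1000, 121)).float()
constant_logits = torch.zeros_like(labels)
print(nn.BCEWithLogitsLoss()(constant_logits, labels).item())  # ~0.693
```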
@aviczhl2 Hi! I am wondering where you downloaded the PPI dataset from? Could you send it to me? Thank you very much!
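For what it's worth, the PPI split used in the GAT paper is the preprocessed version from the GraphSAGE project (http://snap.stanford.edu/graphsage/). If you only need the data, one unofficial but convenient route is PyTorch Geometric's built-in loader, which downloads the standard 20/2/2 graph split; this is not the data pipeline used by this repo.

```python
# Unofficial convenience route (assumes the torch_geometric package is installed).
from torch_geometric.datasets import PPI

train_ds = PPI(root="/tmp/PPI", split="train")  # downloads the dataset on first use
val_ds = PPI(root="/tmp/PPI", split="val")
test_ds = PPI(root="/tmp/PPI", split="test")
print(len(train_ds), len(val_ds), len(test_ds))  # 20 / 2 / 2 graphs
```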
Hi! I also changed the code according to the paper (dropout, residual, ...), but I only get a 92 F1 score. Can you share your code?