Kezhi Kong
Yes, after I reordered y and x the results look normal, so I suspect the ordering is still shuffled.
> Does #4147 fix your problem? Right, it's exactly the same problem. I believe you've already fixed it in the nightly version?
Hey guys, I tried the nightly build of DGL and can fully replicate the results from @zjost, as shown below: https://github.com/dmlc/dgl/blob/28b09047791e1ad25bf2a890902369454d5070fc/examples/pytorch/ogb/ogbn-mag/README.md?plain=1#L19-L26 However, with dgl==0.8.2 I cannot replicate them....
In short, the nightly version looks good to me, while dgl==0.8.2 yields weird results. Thanks.
Sorry for the late reply. The experiments were on DeeperGCN, right? If so, I believe the DeeperGCN experiments do require a large amount of memory. On my end I don't...
Hi Xiaoyang, Actually, these augmentations were implemented by the DeepGCN paper and were never used in our work. All of our augmentations are implemented here: https://github.com/devnkong/FLAG/blob/main/ogb/attacks.py Let me know if you...
Yes, the experiments for ogbg-molpcba and ogbg-ppa are quite slow, so you don't want to train all 10 runs sequentially. In practice we run the 10 runs in parallel, so the...
Hi Meng, Thanks for your interest in our work and for using our code. Yes, using FLAG on multiple GPUs is definitely interesting and has long been on my mind. Unfortunately,...
I see. My suggestion is that you try it directly on the PyG multi-GPU example in a straightforward manner. For adversarial training on CV and NLP, a similar implementation to FLAG works...
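For context on what would need to be dropped into a multi-GPU training loop: FLAG's core idea is free adversarial training on input features, i.e. a few gradient-ascent steps on a feature perturbation while accumulating the model gradients. A minimal PyTorch sketch, assuming a model that maps features to logits; `flag_step` and its arguments are hypothetical names for illustration, not the repo's API.

```python
import torch

def flag_step(model, x, y, loss_fn, step_size=1e-3, m=3):
    """One FLAG-style update: m ascent steps on a feature perturbation,
    accumulating parameter gradients (each scaled by 1/m) along the way."""
    # Start from a small random perturbation of the input features.
    perturb = torch.zeros_like(x).uniform_(-step_size, step_size).requires_grad_()
    loss = loss_fn(model(x + perturb), y) / m
    for _ in range(m - 1):
        loss.backward()  # accumulates grads into both model params and perturb
        with torch.no_grad():
            # Gradient-ascent step on the perturbation (sign update).
            perturb += step_size * perturb.grad.sign()
        perturb.grad.zero_()
        loss = loss_fn(model(x + perturb), y) / m
    loss.backward()
    return loss
```

After calling this in place of the usual `loss.backward()`, the optimizer step proceeds as normal; under DistributedDataParallel the gradient all-reduce should apply to the accumulated grads just as it does for a plain backward pass.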
Hi Weihua, thanks for the notification and for the consistent effort to make the benchmark better. Sure, I will keep this in mind and update our results when I have the bandwidth.