FLAG
default batch-size is too small
For DeeperGCN+FLAG on ogbg-molpcba and ogbg-ppa, the default batch size is too small and training is too slow. How did you finish the experiments? Did you change the batch size and the corresponding other hyperparameter settings?
Yes, the experiments for ogbg-molpcba and ogbg-ppa are quite slow, so you don't want to train all 10 runs sequentially. In practice we run the 10 runs in parallel so the training time stays tractable. For the batch size we simply reuse what was set for the original DeeperGCN. Note that we ran the experiments on an NVIDIA Tesla V100 (32GB GPU); I suspect a larger batch size could lead to OOM.
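For reference, the "10 runs in parallel" pattern can be sketched as a small shell loop. This is only an illustration, not the repo's actual launch script: the real training command and its flags (e.g. `python main.py --seed`) are assumptions, so here a short `sleep` stands in for training to keep the sketch runnable anywhere.

```shell
#!/bin/sh
# Sketch: launch 10 training runs (one per seed) in the background instead of
# running them one after another.
run_training() {
  seed=$1
  # Real usage would look something like (hypothetical flags, 4 GPUs assumed):
  #   CUDA_VISIBLE_DEVICES=$((seed % 4)) python main.py --dataset ogbg-molpcba --seed "$seed"
  sleep 1                       # stand-in for the actual training job
  echo "run $seed done" >> runs.log
}

rm -f runs.log
for seed in $(seq 0 9); do
  run_training "$seed" &        # launch each run in the background
done
wait                            # block until all 10 background runs finish
echo "completed $(wc -l < runs.log) runs"
```

With real jobs you would also pin each run to a GPU (e.g. via `CUDA_VISIBLE_DEVICES`) so the 10 runs don't all contend for the same device.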