GPU Memory Increases Significantly
Hi,
Thanks for sharing the code. Your work is quite interesting. However, while reproducing the experiments, I found that every time iterative training is performed, GPU memory increases significantly and does not drop back to its previous level afterwards. This may eventually cause out-of-memory errors. Could you investigate this and see whether it is reproducible on your side?
BTW, could you also share the hyperparameters for the other datasets, such as FB15K237-20? Thanks!
Best, Zhongyu
I will look into the GPU memory issue when iterative training is activated. The hyperparameters for all four datasets are given in the command lines in README.md; you can reproduce our reported results by running those commands :)
BTW, you can also find the best hyperparameters in Table 8 of our paper (appendix).
Sorry for not mentioning that earlier. Thanks for letting me know!