
About hardware information

Open lnhutnam opened this issue 3 years ago • 5 comments

Hello author, I'm a student interested in your work. After reading Multi-hop Attention Graph Neural Networks, could I get information about the hardware you used to train your model? I cloned your repository and ran it on a server in my lab with Ubuntu 16.04 and 32 GB of RAM.

The code builds fine, but it uses too much RAM and crashes the OS. Even after decreasing some parameters, it still uses too much RAM.

lnhutnam avatar Oct 14 '21 06:10 lnhutnam


Have you solved the problem? I ran into the same issue.

lendie avatar Oct 21 '21 13:10 lendie

@lendie I solved the problem by using a smaller dataset. I read https://github.com/thiviyanT/torch-rgcn some days ago and found that its author uses a subset of FB15k-237 called FB-toy. I have already tested this code on it, and it ran successfully.

lnhutnam avatar Oct 21 '21 14:10 lnhutnam
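Switching to a smaller subset works because a smaller triple file means smaller entity and relation vocabularies, and therefore smaller embedding tables, which is where most of the memory goes. A minimal sketch of loading such a dataset, assuming the common "head \t relation \t tail" TSV format used by FB15k-237-style datasets (the filename `train.txt` is an assumption, not the repo's actual loader):

```python
# Hedged sketch: load a knowledge-graph triple file in the tab-separated
# "head \t relation \t tail" format used by FB15k-237-style datasets,
# including the FB-toy subset from thiviyanT/torch-rgcn.

def load_triples(path):
    """Return a list of (head, relation, tail) string triples."""
    triples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue  # skip blank lines
            head, relation, tail = line.split("\t")
            triples.append((head, relation, tail))
    return triples

def vocab_sizes(triples):
    """Count distinct entities and relations; these sizes drive the
    memory footprint of the embedding tables."""
    entities = {h for h, _, t in triples} | {t for _, _, t in triples}
    relations = {r for _, r, _ in triples}
    return len(entities), len(relations)
```

With FB-toy's much smaller vocabularies, the same model fits comfortably in 32 GB of RAM.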


@nhutnamhcmus Thanks a lot. I tested the code on it too, but did you get the same results the paper reports? I find that valid Hits@10 at step 28,000 is too small (0.000824!).

lendie avatar Oct 25 '21 06:10 lendie


@lendie I tested with the hyperparameters the paper describes, and I set the max steps to 40,000. At step 20,000 I got valid Hits@10 of 0.500000 and valid MRR of 0.327864. Can you tell me which parameters you tested with?

Do you think the training time is too long :) ?

lnhutnam avatar Oct 26 '21 12:10 lnhutnam
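For readers comparing numbers like the ones above: Hits@10 and MRR are the standard link-prediction metrics, computed from the rank of the correct entity among all candidates. A minimal sketch (the `ranks` list is hypothetical illustration data, not from either run):

```python
# Hedged sketch of the standard link-prediction metrics discussed here.
# `ranks` holds the 1-based rank of each correct entity among all candidates.

def hits_at_k(ranks, k=10):
    """Fraction of test triples whose correct entity ranks in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mrr(ranks):
    """Mean reciprocal rank of the correct entities."""
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 12, 2, 50]            # hypothetical ranks for five test triples
print(hits_at_k(ranks, k=10))        # 0.6 (three of the five ranks are <= 10)
print(mrr(ranks))
```

A valid Hits@10 of 0.000824 means almost no correct entities land in the top 10, which points to a training problem rather than ordinary metric noise.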

@nhutnamhcmus I tested it on CPU; maybe that was one of the reasons my numbers were so low (I did not alter the hyperparameters). I do think the training time is too long.

lendie avatar Oct 31 '21 12:10 lendie