Experiment results higher than those reported in the paper
Hi Wenhan, I ran this code with the default hyper-parameters and found that the results on my machine are much higher than the results you reported in the paper. Is this expected, or did I do something wrong? I would appreciate it if you could help clear up my confusion.
Here are my settings and results.
COMMANDS:
train: CUDA_VISIBLE_DEVICES=0 python trainer.py --max_neighbor 50 --fine_tune --random_embed --prefix totally
test: CUDA_VISIBLE_DEVICES=0 python trainer.py --max_neighbor 50 --fine_tune --random_embed --prefix totally_bestHits10 --test
RESULTS:
DEV: experiments/paper
2019-11-07 20:41:03 CRITICAL: - HITS10: 0.295/0.211
2019-11-07 20:41:03 CRITICAL: - HITS5: 0.224/0.135
2019-11-07 20:41:03 CRITICAL: - HITS1: 0.078/0.024
2019-11-07 20:41:03 CRITICAL: - MAP: 0.149/0.083
TEST: experiments/paper
2019-11-07 20:44:58 CRITICAL: - HITS10: 0.269/0.252
2019-11-07 20:44:58 CRITICAL: - HITS5: 0.210/0.186
2019-11-07 20:44:58 CRITICAL: - HITS1: 0.104/0.103
2019-11-07 20:44:58 CRITICAL: - MAP: 0.158/0.151
Hi @JiaweiSheng,
What is your experiment environment? Which Python, CUDA, and PyTorch versions are you using?
Tesla P100
Python 3.6.9
PyTorch 1.1.0
CUDA 9.0.176
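In case it helps compare environments exactly, here is a minimal Python sketch (not part of this repo) for printing the same details; it only assumes a working torch install:

```python
# Minimal environment report (illustrative only, not part of this repo).
import platform
import torch

print("Python :", platform.python_version())          # e.g. 3.6.9
print("PyTorch:", torch.__version__)                   # e.g. 1.1.0
print("CUDA   :", torch.version.cuda)                  # CUDA version PyTorch was built against
if torch.cuda.is_available():
    print("GPU    :", torch.cuda.get_device_name(0))   # e.g. Tesla P100
    print("cuDNN  :", torch.backends.cudnn.version())
```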
Can you try using the same environment as stated in the readme?
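As a hedged sketch of how one might catch such a mismatch before training: the EXPECTED_* values below are placeholders, not the actual versions from the README, and this guard is not part of the repo.

```python
# Hypothetical version guard (illustrative only; replace the placeholder
# values with the versions actually listed in this repo's README).
import platform
import torch

EXPECTED_PYTHON = "3.6"   # placeholder, not taken from the README
EXPECTED_TORCH = "1.0"    # placeholder, not taken from the README

if not platform.python_version().startswith(EXPECTED_PYTHON):
    raise RuntimeError("Python version differs from the README environment")
if not torch.__version__.startswith(EXPECTED_TORCH):
    raise RuntimeError("PyTorch version differs from the README environment")
```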