RA-retrofit
Could you share your pre-processed data of pre-trained word embedding and cleaned knowledge graph?
Thanks for the interesting work and for sharing your code. Could you also share the word embeddings and the cleaned knowledge graph data you used?
Cleaned word embeddings and knowledge graphs we used: https://www.dropbox.com/s/75cccrsve5tnq8l/data.zip?dl=0
Pretrained RA-retrofit model: https://www.dropbox.com/s/tqj1k4dox1cir5e/RA-retrofit?dl=0
Thanks for the data and the pretrained model. I still have some questions:
- How can I load the pretrained RA-retrofit model? I tried `model.load_state_dict(torch.load(PATH))`, but it reported an error.
- What hyperparameter settings produce the best performance? I tried many combinations of gamma and learning rate, but the results are still far from those in the paper. Also, how many epochs are needed to train a well-performing model?
I figured out the pretrained model problem... I didn't realize it is a word embeddings file. But I am still not sure which hyperparameter settings produce this result.
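For anyone hitting the same loading issue: since the shared file is a word embeddings file rather than a PyTorch `state_dict`, it cannot be loaded with `model.load_state_dict`. Here is a minimal sketch of reading it, assuming it uses the common plain-text word2vec format (an optional `<vocab_size> <dim>` header line, then one `word v1 v2 ...` entry per line); the actual format of the shared file may differ:

```python
import numpy as np

def load_embeddings(path):
    """Parse a plain-text word2vec-style embedding file into a dict
    mapping each word to a float32 numpy vector."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            # Skip an optional "<vocab_size> <dim>" header line
            # (assumes vectors have more than one dimension).
            if len(parts) == 2:
                continue
            word = parts[0]
            vectors[word] = np.asarray(parts[1:], dtype=np.float32)
    return vectors
```

The resulting dict can then be used directly for evaluation, or converted to a matrix to initialize an embedding layer.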
Please find the hyperparameter settings in our paper.