MINERVA
parameter settings used in paper
Hello, thank you for your code; it works smoothly without any problems. However, I found that the parameter settings in your code differ from those in your paper, and the MAP results I obtained were much lower. Could you please post the parameter settings used for the experiments in your paper? I would really appreciate it if you could provide the config.sh files for the paper experiments, for example the batch size, beta, Lambda, total_iterations, entity_embeddings, train_entity_embeddings, and so on. My current settings are:
data_input_dir="datasets/data_preprocessed/athleteplaysinleague/"
vocab_dir="datasets/data_preprocessed/athleteplaysinleague/vocab"
total_iterations=120
path_length=3
hidden_size=400
embedding_size=200
batch_size=128
beta=0.05
Lambda=0.05
use_entity_embeddings=1
train_entity_embeddings=1
train_relation_embeddings=1
base_output_dir="output/athleteplaysinleague/"
load_model=0
model_load_dir="/home/sdhuliawala/logs/RL-Path-RNN/wn18rrr/edb6_3_0.05_10_0.05/model/model.ckpt"
nell_evaluation=1
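For context, below is a minimal sketch of how I am consuming a config like the one above: the config file is sourced and its variables are forwarded to the trainer as command-line flags. The trainer path (code/model/trainer.py) and the exact flag names are assumptions based on the variable names in the config, so please correct me if your run script passes them differently.

#!/usr/bin/env bash
# Hypothetical run script: sources a config file such as the one above and
# forwards its variables as flags to the trainer.
# The script path and flag names are assumptions for illustration only.
set -e

config_file="$1"        # e.g. configs/athleteplaysinleague.sh
source "$config_file"   # brings data_input_dir, beta, Lambda, ... into scope

python code/model/trainer.py \
    --data_input_dir "$data_input_dir" \
    --vocab_dir "$vocab_dir" \
    --total_iterations "$total_iterations" \
    --path_length "$path_length" \
    --hidden_size "$hidden_size" \
    --embedding_size "$embedding_size" \
    --batch_size "$batch_size" \
    --beta "$beta" \
    --Lambda "$Lambda" \
    --use_entity_embeddings "$use_entity_embeddings" \
    --train_entity_embeddings "$train_entity_embeddings" \
    --train_relation_embeddings "$train_relation_embeddings" \
    --base_output_dir "$base_output_dir" \
    --load_model "$load_model" \
    --model_load_dir "$model_load_dir" \
    --nell_evaluation "$nell_evaluation"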