DeepPath
Code and docs for my EMNLP 2017 paper "DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning"
Deep Reinforcement Learning for Knowledge Graph Reasoning
We study the problem of learning to reason in large-scale knowledge graphs (KGs). More specifically, we describe a novel reinforcement learning framework for learning multi-hop relational paths: we use a policy-based agent with continuous states based on knowledge graph embeddings, which reasons in a KG vector space by sampling the most promising relation to extend its path. In contrast to prior work, our approach includes a reward function that takes accuracy, diversity, and efficiency into consideration. Experimentally, we show that our proposed method outperforms a path-ranking based algorithm and knowledge graph embedding methods on the Freebase and Never-Ending Language Learning datasets.
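For intuition, the snippet below is a minimal sketch of how such a composite reward could combine accuracy, efficiency, and diversity terms. The weights, the cosine-similarity diversity term, and the function and argument names are illustrative assumptions, not the exact formulation used in this repository; see the RL training code invoked by pathfinder.sh for the actual implementation.

    import numpy as np

    def path_reward(success, path_relations, path_embedding, found_path_embeddings,
                    w_accuracy=1.0, w_efficiency=0.1, w_diversity=0.1):
        """Illustrative composite reward for one rollout (weights are assumptions)."""
        # Accuracy term: reward paths that reach the target entity, penalise failures.
        r_accuracy = 1.0 if success else -1.0

        # Efficiency term: shorter reasoning paths receive a larger reward.
        r_efficiency = 1.0 / max(len(path_relations), 1)

        # Diversity term: penalise paths that are similar (by cosine similarity)
        # to paths already discovered for this relation.
        if found_path_embeddings:
            sims = [np.dot(path_embedding, p) /
                    (np.linalg.norm(path_embedding) * np.linalg.norm(p))
                    for p in found_path_embeddings]
            r_diversity = -float(np.mean(sims))
        else:
            r_diversity = 0.0

        return (w_accuracy * r_accuracy
                + w_efficiency * r_efficiency
                + w_diversity * r_diversity)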
Access the dataset
Download the knowledge graph datasets NELL-995 and FB15k-237.
How to run our code
- unzip the data, put the data folder in the code directory
- run the following scripts within scripts/
    ./pathfinder.sh ${relation_name}              # find the reasoning paths, this is RL training, it might take some time
    ./fact_prediction_eval.py ${relation_name}    # calculate & print the fact prediction results
    ./link_prediction_eval.sh ${relation_name}    # calculate & print the link prediction results
  Examples (the relation_name can be found in NELL-995/tasks/):
    ./pathfinder.sh concept_athletehomestadium
    ./fact_prediction_eval.py concept_athletehomestadium
    ./link_prediction_eval.sh concept_athletehomestadium
- Since we already put the reasoning paths in the dataset, you can directly run fact_prediction_eval.py or link_prediction_eval.sh to get the final results for each reasoning task
Format of the dataset
- raw.kb: the raw KB data from the NELL system
- kb_env_rl.txt: we add inverse triples of all triples in raw.kb; this file is used as the KG for reasoning
- entity2vec.bern / relation2vec.bern: TransE embeddings used to represent our RL states, can be trained using the TransX implementations by thunlp
- tasks/: each task is a particular reasoning relation
- tasks/${relation}/*.vec: trained TransH embeddings
- tasks/${relation}/*.vec_D: trained TransD embeddings
- tasks/${relation}/*.bern: trained TransR embeddings
- tasks/${relation}/*.unif: trained TransE embeddings
- tasks/${relation}/transX: triples used to train the KB embeddings
- tasks/${relation}/train.pairs: training triples in the PRA format
- tasks/${relation}/test.pairs: test triples in the PRA format
- tasks/${relation}/path_to_use.txt: reasoning paths found by the RL agent
- tasks/${relation}/path_stats.txt: path frequency of randomised BFS
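As a quick orientation, here is a hedged sketch of how these files could be read in Python. It assumes kb_env_rl.txt holds one whitespace-separated triple per line and that the embedding files hold one whitespace-separated float vector per line, ordered consistently with the corresponding *2id file; verify these assumptions against your copy of the data before relying on it.

    import numpy as np

    def load_triples(kb_path):
        # Assumes one whitespace-separated (entity1, entity2, relation) triple per line.
        triples = []
        with open(kb_path) as f:
            for line in f:
                parts = line.strip().split()
                if len(parts) == 3:
                    triples.append(tuple(parts))
        return triples

    def load_embeddings(vec_path):
        # Assumes one whitespace-separated float vector per line,
        # with rows ordered like the matching *2id file.
        return np.loadtxt(vec_path)

    # Hypothetical usage with the NELL-995 layout described above:
    # triples = load_triples('NELL-995/kb_env_rl.txt')
    # entity_vecs = load_embeddings('NELL-995/entity2vec.bern')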
If you use our code, please cite the paper
@InProceedings{wenhan_emnlp2017,
  author    = {Xiong, Wenhan and Hoang, Thien and Wang, William Yang},
  title     = {DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning},
  booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
  month     = {September},
  year      = {2017},
  address   = {Copenhagen, Denmark},
  publisher = {ACL}
}