izuna385

9 comments by izuna385

I'm sorry for the delay in replying, and thank you for the detailed comparison experiment. I don't have time to adjust the parameters for this code. I'll let you know...

Hello, I've just re-implemented hard-negative mining and the scripts for encoding entities with the Zeshel dataset from [[Logeswaran et al., '19]](https://arxiv.org/abs/1906.07348). See [here](https://github.com/izuna385/Zero-Shot-Entity-Linking) for reference. [This repository](https://github.com/izuna385/Dual-encoder-Entity-Retrieval-with-BERT) might also be useful...
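
For reference, here is a minimal sketch of the dual-encoder idea behind those repositories: encode mentions and entities with BERT, then take the top-scoring non-gold entities as hard negatives. Everything here (model name, pooling, toy data) is an assumption for illustration; it is not the repositories' actual code.

```python
# Sketch of dual-encoder hard-negative mining with a generic BERT encoder
# from Hugging Face transformers; illustration only.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts):
    """Mean-pool the last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)          # (B, H)

# Toy data: each mention has one gold entity index into `entities`.
mentions = ["He played for the Lakers in 2009."]
entities = ["Los Angeles Lakers", "Lakers (film)", "Kobe Bryant"]
gold = torch.tensor([0])

mention_vecs = torch.nn.functional.normalize(encode(mentions), dim=-1)
entity_vecs = torch.nn.functional.normalize(encode(entities), dim=-1)

# Score every mention against every entity, mask out the gold entity, and
# keep the top-scoring remainder as hard negatives for the next training round.
scores = mention_vecs @ entity_vecs.T                    # (num_mentions, num_entities)
scores.scatter_(1, gold.unsqueeze(1), float("-inf"))
hard_negatives = scores.topk(k=2, dim=1).indices
print(hard_negatives)
```

In practice the entity vectors would be precomputed once and searched with an approximate nearest-neighbor index (e.g. FAISS); the dense matrix product here just keeps the sketch self-contained.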

Thanks for your issue. I'll upgrade and make it compatible soon!

Currently I'm developing a more user-friendly linking tool, but yes, PRs are welcome! Thanks

I came across the same error. In `config.py` there is:

```python
def __init__(self, run_dir, args):
    self.struct_weight = args.struct_weight
    self.dropout = args.dropout
    self.dataset = args.dataset
    self.encoder = args.encoder
    ...
```

so, for...
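
If the error stems from `args` missing one of those attributes, a minimal `argparse` setup covering everything `Config.__init__` reads would look like the sketch below. Only the attribute names come from the snippet; the types and defaults are assumptions.

```python
# Sketch of the flags implied by the Config snippet above; types and
# defaults are assumptions, only the flag names come from the snippet.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--struct_weight", type=float, default=1.0)
parser.add_argument("--dropout", type=float, default=0.5)
parser.add_argument("--dataset", type=str, default="typenet")
parser.add_argument("--encoder", type=str, default="basic")
args = parser.parse_args()

# Any attribute Config reads (args.struct_weight, args.dropout, args.dataset,
# args.encoder) must be defined here, or __init__ raises AttributeError.
```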

Thanks! I checked the code and set `base_dir="./dataset_dir"` (and rewrote some absolute paths in the code as relative paths). The directory structure looks like this:

![image](https://user-images.githubusercontent.com/35322641/50544374-698b1280-0c36-11e9-99e7-76e5b91bf8cf.png)

When setting `dataset="typenet"` and running...
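
A minimal sketch of the absolute-to-relative path rewrite mentioned above; only `base_dir="./dataset_dir"` and `dataset="typenet"` come from this comment, and the file name is hypothetical.

```python
# Sketch: build dataset paths from a relative base_dir instead of
# hard-coding absolute paths. The file name "train.txt" is hypothetical.
import os

base_dir = "./dataset_dir"
dataset = "typenet"

# Before: path = "/home/user/data/typenet/train.txt"  (absolute, machine-specific)
# After:  relative to base_dir, so the tree in the screenshot works anywhere.
train_path = os.path.join(base_dir, dataset, "train.txt")
print(train_path)  # ./dataset_dir/typenet/train.txt
```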

I'm sorry to bother you over and over. I've found that, to run `deploy_linler.py` for UMLS, preprocessed UMLS and MedMentions data are needed to create

> train_lines = joblib.load("meta_data_processed/meta_train.joblib")...
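
For context, a minimal sketch of that dependency: the deploy script expects a joblib file that a preprocessing step must have dumped beforehand. Only the path and the `joblib.load` call come from the quoted line; the contents of `train_lines` are hypothetical.

```python
# Sketch of the preprocessing side that must run first; the contents of
# train_lines are placeholders, only the path and joblib usage are from the quote.
import os
import joblib

os.makedirs("meta_data_processed", exist_ok=True)
train_lines = ["mention\tgold_entity"]  # hypothetical preprocessed lines
joblib.dump(train_lines, "meta_data_processed/meta_train.joblib")

# ...which the deploy script can then load:
train_lines = joblib.load("meta_data_processed/meta_train.joblib")
```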

Thank you so much. I'll check it out later. None of the above is time-critical; I'd appreciate it if you could add them to the drive when you have time.

Hi, thanks for your interest! I've just added a preprocessed dataset from [ja-wiki](https://github.com/izuna385/Wikia-and-Wikipedia-EL-Dataset-Creator#sample-ja-wiki-dataset-). Please check it out. If I have time, I'd like to create a dataset with wikiextractor that...