Xinyu Wang

Results: 73 comments by Xinyu Wang

Hi, I want to reproduce the NER score reported in the BERT paper as well. Did you use document-level context for the NER model? How did you split the...

There are many parts with the same problem; you can try it.

You may refer to [this](https://github.com/Alibaba-NLP/ACE#parse-files)

It's strange. Could you provide your modified config file and your NER data for the experiment?

> Here is the doc_ner_best.yaml content.
>
> ```
> Controller:
>   model_structure: null
> MFVI:
>   hexa_rank: 150
>   hexa_std: 1
>   iterations: 3
>   normalize_weight: true
>   quad_rank:...
> ```

This is caused by my modifications in #41. I added some additional changes in `reinforcement_trainer.py` to fix this problem. By the way, I think adding `if '/' in name:...
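Since the snippet above is cut off, here is a minimal standalone sketch of what a check like `if '/' in name:` might be doing in this context; the function name and the exact normalization are my assumptions, and the real fix lives in `reinforcement_trainer.py`:

```python
# Hypothetical sketch: reduce a path-like embedding name to its final
# component so names recorded with absolute paths still match local lookups.
def shorten_embedding_name(name: str) -> str:
    if '/' in name:
        name = name.rsplit('/', 1)[-1]
    return name

assert shorten_embedding_name(
    '/home/user/.flair/embeddings/roberta-large_v2doc'
) == 'roberta-large_v2doc'
```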

The F1 score does not change compared with your initial run. Could you show a longer portion of the test output? I think the code will print the names and...

The problem is probably from this list:

```python
['/home/yongjiang.jy/.flair/embeddings/en-xlmr-first-docv2_10epoch_1batch_4accumulate_0.000005lr_10000lrrate_eng_monolingual_nocrf_fast_norelearn_sentbatch_sentloss_finetune_nodev_saving_ner5/roberta-large_v2doc',
 '/home/yongjiang.jy/.flair/embeddings/xlmr-first-docv2_10epoch_1batch_4accumulate_0.000005lr_10000lrrate_eng_monolingual_nocrf_fast_norelearn_sentbatch_sentloss_finetune_nodev_saving_ner3/xlm-roberta-large_v2doc',
 '/home/yongjiang.jy/.flair/embeddings/xlnet-first-docv2_10epoch_1batch_4accumulate_0.000005lr_10000lrrate_eng_monolingual_nocrf_fast_norelearn_sentbatch_sentloss_finetune_nodev_saving_ner4/xlnet-large-cased_v2doc',
 'C:\\Users\\ebb\\.flair\\embeddings\\lm-jw300-backward-v0.1.pt',
 'C:\\Users\\ebb\\.flair\\embeddings\\lm-jw300-forward-v0.1.pt',
 'C:\\Users\\ebb\\.flair\\embeddings\\news-backward-0.4.1.pt',
 'C:\\Users\\ebb\\.flair\\embeddings\\news-forward-0.4.1.pt',
 'Word: en',
 'bert-base-cased',
 'bert-base-multilingual-cased',
 'bert-large-cased',
 'elmo-original']
```

This is the name list of all the...
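If it helps to debug, a quick check like the sketch below (placeholder lists, not the real ones) can show whether the names stored in the checkpoint and the names built from your local config differ in membership or order:

```python
# Placeholder contents: substitute the list printed by the code (quoted
# above) and the names produced by your own config.
saved_names = ['bert-base-cased', 'elmo-original', 'Word: en']
local_names = ['bert-base-cased', 'roberta-large_v2doc', 'Word: en']

print(set(saved_names) - set(local_names))  # expected by the checkpoint, missing locally
print(set(local_names) - set(saved_names))  # present locally, unknown to the checkpoint
print(saved_names == sorted(saved_names))   # whether the stored order is sorted
```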

It seems that I forgot to upload some of the fine-tuned transformer embeddings (though the order of the embeddings is still not correct). I'm collecting them and will upload them...

I'm uploading the fine-tuned embeddings for the model. Please download `en-xlm-roberta-large.zip`, `en-roberta-large.zip`, and `en-xlnet-large-cased.zip` from [OneDrive](https://1drv.ms/u/s!Am53YNAPSsodg810NxHQcrJpcNIOig?e=FRsJNR) under `fine-tuned models` and unzip them. Then change the path of the embeddings (`model:`)...
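For illustration, the config edit would look roughly like the YAML sketch below; the embedding keys are hypothetical placeholders, so match them to the ones in your own doc_ner_best.yaml, and only the `model:` values need to point at the unzipped folders:

```yaml
embeddings:
  TransformerWordEmbeddings-0:
    model: /your/path/en-xlm-roberta-large  # from en-xlm-roberta-large.zip
  TransformerWordEmbeddings-1:
    model: /your/path/en-roberta-large      # from en-roberta-large.zip
  TransformerWordEmbeddings-2:
    model: /your/path/en-xlnet-large-cased  # from en-xlnet-large-cased.zip
```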