BERT-BiLSTM-CRF-NER

TensorFlow solution of the NER task using a BiLSTM-CRF model with Google BERT fine-tuning, plus private server serving.

105 issues

During training, are the BERT parameters also fine-tuned? At test time, are the raw BERT output vectors used?
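By default this project fine-tunes all BERT weights together with the BiLSTM-CRF head. If you wanted to freeze BERT and train only the head, one approach in TF1-style code is to filter `tf.trainable_variables()` by name scope before building the optimizer. A minimal, framework-free sketch of that filtering (variable names here are hypothetical examples):

```python
# Sketch: keep only non-BERT variables for the optimizer.
# In the real TF1 graph you would filter tf.trainable_variables() this way.
def non_bert_variables(var_names, bert_scope="bert/"):
    """Return variable names outside the BERT scope, so only the
    BiLSTM-CRF head would receive gradient updates."""
    return [v for v in var_names if not v.startswith(bert_scope)]

all_vars = [
    "bert/encoder/layer_0/attention/self/query/kernel",
    "bert/embeddings/word_embeddings",
    "bilstm/fw/kernel",
    "crf/transitions",
]
print(non_bert_variables(all_vars))
# -> ['bilstm/fw/kernel', 'crf/transitions']
```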

Is it possible to get a confidence score for each entity?
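The CRF layer only outputs the Viterbi path, not confidences, but per-token marginal probabilities can be computed from the CRF scores with the forward-backward algorithm, and an entity's confidence can then be taken as, e.g., the minimum marginal over its span. A NumPy sketch of the marginal computation (function name and toy scores are illustrative, not this repo's API):

```python
import numpy as np

def logsumexp(x, axis):
    m = x.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def crf_marginals(emissions, transitions):
    """Forward-backward over a linear-chain CRF.
    emissions: (T, K) unnormalized per-token label scores.
    transitions: (K, K) label-transition scores.
    Returns (T, K) per-token marginal probabilities."""
    T, K = emissions.shape
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = emissions[0]
    for t in range(1, T):  # forward pass
        alpha[t] = emissions[t] + logsumexp(alpha[t - 1][:, None] + transitions, axis=0)
    for t in range(T - 2, -1, -1):  # backward pass
        beta[t] = logsumexp(transitions + emissions[t + 1] + beta[t + 1], axis=1)
    log_z = logsumexp(alpha[-1], axis=0)  # partition function
    return np.exp(alpha + beta - log_z)

emissions = np.array([[2.0, 0.1, 0.1], [0.1, 2.0, 0.1], [0.1, 0.1, 2.0]])
transitions = np.zeros((3, 3))
marginals = crf_marginals(emissions, transitions)
# Each row sums to 1; an entity's confidence can be read off as the
# minimum (or product) of the marginals of its predicted tags.
```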

国务院总理李克强在雄安新区召开会议 → [['I-LOC']] LOC, 国 PER ORG, time used: 0.426002 sec. When I run the example from the documentation I get this output. What is going wrong? Any guidance would be appreciated.
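Output like the above mixes raw tags and entity types, which makes it hard to check. When decoding the server's token-level BIO tags yourself, collapsing them into entity spans is more readable. A minimal sketch (function name hypothetical, tags are the standard B-/I-/O scheme this repo trains on):

```python
def bio_to_entities(tokens, tags):
    """Collect (entity_text, entity_type) spans from BIO tags like B-LOC / I-LOC / O."""
    entities, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):            # a new entity starts
            if current:
                entities.append(("".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == etype:
            current.append(tok)             # continuation of the open entity
        else:                               # O tag or inconsistent I- tag
            if current:
                entities.append(("".join(current), etype))
            current, etype = [], None
    if current:
        entities.append(("".join(current), etype))
    return entities

tokens = list("李克强在雄安新区")
tags = ["B-PER", "I-PER", "I-PER", "O", "B-LOC", "I-LOC", "I-LOC", "I-LOC"]
print(bio_to_entities(tokens, tags))
# -> [('李克强', 'PER'), ('雄安新区', 'LOC')]
```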

Has anyone tried starting multiple services on one machine (on different ports)? One machine can run at most five services; any further instance hangs at the final "load pb file" step and never prints "ready and listening". Both GPU memory and system RAM still have plenty of headroom, so I don't know why.

```
I:WORKER-0:[__i:gen:537]:ready and listening!
I:VENTILATOR:[__i:_ru:215]:new encode request  req id: 0  size: 2  client: b'0c836adc-74f4-48b5-8af4-a957eefecebd'
I:SINK:[__i:_ru:369]:job register  size: 2  job id: b'0c836adc-74f4-48b5-8af4-a957eefecebd#0'
I:WORKER-0:[__i:gen:545]:new job  socket: 0  size: 2  client: b'0c836adc-74f4-48b5-8af4-a957eefecebd#0'
```

The training set is from https://github.com/zjy-ucas/ChineseNER

**Training command**
```
bert-base-ner-train \
  -data_dir=/home/bert/BERT-BiLSTM-CRF-NER/data/ \
  -bert_config_file=/home/bert/chinese_L-12_H-768_A-12/bert_config.json \
  -vocab_file=/home/bert/chinese_L-12_H-768_A-12/vocab.txt \
  -init_checkpoint=/home/bert/chinese_L-12_H-768_A-12/bert_model.ckpt \
  -output_dir=/home/bert/BERT-BiLSTM-CRF-NER/output/result_dir/ \
  -do_lower_case=True
```

**Training result**
![image](https://user-images.githubusercontent.com/12321505/60158416-a457b700-9823-11e9-95fe-46d3a4a026e6.png)

**Serving start command**
```
bert-base-serving-start \
  -model_dir /home/bert/BERT-BiLSTM-CRF-NER/output/result_dir/ \
  -bert_model_dir /home/bert/chinese_L-12_H-768_A-12 \
  -model_pb_dir...
```

I used the English checkpoint and vocab for fine-tuning, but prediction accuracy is extremely low, and X labels appear in the output.
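With the English model, words get split into WordPiece subtokens, and the X label is what this codebase assigns to continuation pieces; if those pieces are scored as ordinary labels, accuracy looks far worse than it is. One common fix is to merge subword predictions back to word level before evaluation, keeping only each word's first-piece label. A hedged sketch (function name hypothetical):

```python
def merge_wordpieces(pieces, labels):
    """Merge WordPiece tokens ('##' continuations) back into whole words,
    keeping only the label of each word's first piece and dropping 'X'."""
    words, word_labels = [], []
    for piece, label in zip(pieces, labels):
        if piece.startswith("##") and words:
            words[-1] += piece[2:]       # continuation piece: extend current word
        else:
            words.append(piece)
            word_labels.append(label)    # first piece carries the real label
    return words, word_labels

pieces = ["John", "works", "at", "Goo", "##gle"]
labels = ["B-PER", "O", "O", "B-ORG", "X"]
print(merge_wordpieces(pieces, labels))
# -> (['John', 'works', 'at', 'Google'], ['B-PER', 'O', 'O', 'B-ORG'])
```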

```
Traceback (most recent call last):
  File "main.py", line 71, in <module>
    train(args=args)
  File "/home/sncdbs/net_disk/zn/bert_lstm_crf/python/bert-ner/bert_lstm_ner.py", line 650, in train
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 471, in train_and_evaluate
    return executor.run()
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/estimator/training.py",...
```

When do_train, do_eval, and do_predict are all true at once, the F1 is normal; but running predict alone after training finishes gives F1 = 0. My command line is:

```
bert-base-ner-train --do_train=False --do_eval=False --do_predict=True --data_dir=data1/ --predict_batch_siz=16 --max_seq_length=128 --output_dir=result/ --data_config_path=result.config --vocal_file=chinese_L-12_H-768_A-12/vocab.txt --bert_config_file=chinese_L-12_H-768_A-12/bert_config.json
```

This looks like the same problem as https://github.com/macanv/BERT-BiLSTM-CRF-NER/issues/23. Any help appreciated!

Why does LoggingTensorHook print no output?
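`tf.train.LoggingTensorHook` emits its records at Python logging level INFO, and in TF1 the default verbosity suppresses INFO, so the usual fix is `tf.logging.set_verbosity(tf.logging.INFO)` before training. A framework-free sketch of that level gating using only the stdlib `logging` module (names here are illustrative, not TensorFlow's):

```python
import io
import logging

# Simulate the gating: INFO records are dropped until verbosity is raised,
# which mirrors why LoggingTensorHook output never shows up by default.
log = logging.getLogger("demo")
log.propagate = False                # keep output confined to our handler
buf = io.StringIO()
log.addHandler(logging.StreamHandler(buf))

log.setLevel(logging.WARN)           # default-like verbosity: INFO suppressed
log.info("loss = 0.42")              # silently dropped

log.setLevel(logging.INFO)           # analog of tf.logging.set_verbosity(INFO)
log.info("loss = 0.42")              # now reaches the handler
print(buf.getvalue().strip())
# -> loss = 0.42
```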