BERT-BiLSTM-CRF-NER

TensorFlow solution for the NER task using a BiLSTM-CRF model with Google BERT fine-tuning and private server services.

105 BERT-BiLSTM-CRF-NER issues

![image](https://user-images.githubusercontent.com/49578851/78968100-746d6c00-7b36-11ea-8064-29c3085f4282.png)
This is my input command.

1. After changing the original learning rate of 1e-5 to 0.001, every result comes out as 0.
2. Training runs with batch_size set to 16, but setting it to 32 raises an OOM error.
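For reference, a minimal sketch of how a run with these two hyperparameters might be launched programmatically, assuming the `bert_base.train` package layout shown in the directory tree below (the module paths and the attribute names on `args` are assumptions to check against this repo's `run.py`, not confirmed API):

```python
# Minimal launch sketch (assumed module paths, mirroring what run.py appears to wire together).
from bert_base.train.train_helper import get_args_parser
from bert_base.train.bert_lstm_ner import train

if __name__ == '__main__':
    args = get_args_parser()
    # Hyperparameters discussed in this issue: 1e-5 trains normally, while
    # 0.001 is far too large for BERT fine-tuning and can collapse all
    # predictions to a single label; batch_size 32 may not fit in GPU memory.
    args.learning_rate = 1e-5
    args.batch_size = 16
    train(args=args)
```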

Hello author, after running bert-base-ner-train and then running terminal_predict.py, I get a FailedPreconditionError: Attempting to use uninitialized value. What could be causing this?

#### This is my workspace directory structure (part of it is omitted to save space)

```
----BERT-BiLSTM-CRF-NER\
|----build.sh
|----thu_classification.py
|----data_process.py
|----README.md
|----requirement.txt
|----run.py
|----client_test.py
|----setup.py
|----bert_base\
|    | ...
|----NERdata\
|    |----test.txt
|    |----train.txt
|    |----dev.txt
|----pictures\...
```
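In TensorFlow 1.x this error usually means an op was run before its variables were restored from a checkpoint (or initialized). A minimal sketch of the two standard remedies, assuming a plain tf.Session-based prediction script like terminal_predict.py; the variable and the './output' path below are illustrative stand-ins, not the script's actual names:

```python
import tensorflow as tf

# Stand-in for the real inference graph built by the prediction script.
logits = tf.get_variable('logits', shape=[1, 3], initializer=tf.zeros_initializer())

saver = tf.train.Saver()

with tf.Session() as sess:
    ckpt = tf.train.latest_checkpoint('./output')  # training output dir (illustrative path)
    if ckpt is not None:
        saver.restore(sess, ckpt)                  # restore the fine-tuned weights
    else:
        # Fallback: explicit initialization avoids FailedPreconditionError,
        # but the weights are untrained, so restoring a checkpoint is preferred.
        sess.run(tf.global_variables_initializer())
    print(sess.run(logits))                        # ops can now be run safely
```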

Which script evaluates the model? I can't find it. How do I run the evaluation code?
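While locating the project's own evaluation script, note that the metrics quoted in other issues here are conlleval-style entity-level scores, and comparable numbers can be reproduced from a predicted-label file with the third-party seqeval package. A minimal sketch; the file name 'label_test.txt' and the "token true_label pred_label" column layout are assumptions, not this repo's confirmed output format:

```python
# pip install seqeval
from seqeval.metrics import classification_report, f1_score

# Assumed format: one "token true_label pred_label" per line,
# with a blank line between sentences (conlleval-style input).
y_true, y_pred, t_sent, p_sent = [], [], [], []
with open('label_test.txt', encoding='utf-8') as f:   # illustrative file name
    for line in f:
        parts = line.split()
        if len(parts) >= 3:
            t_sent.append(parts[-2])
            p_sent.append(parts[-1])
        elif t_sent:                                   # sentence boundary
            y_true.append(t_sent)
            y_pred.append(p_sent)
            t_sent, p_sent = [], []
if t_sent:
    y_true.append(t_sent)
    y_pred.append(p_sent)

print(classification_report(y_true, y_pred))           # per-entity precision/recall/F1
print('overall F1:', f1_score(y_true, y_pred))
```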

I want to use this model for a joint word segmentation, POS tagging, and NER sequence labeling task. I trained it on my own 14,000-sentence dataset (10,000 train, 2,000 dev, 2,000 test), with the following results:
processed 35069 tokens with 17629 phrases; found: 19837 phrases; correct: 13586.
accuracy: 81.61%; precision: 68.49%; recall: 77.07%; FB1: 72.52
LOC: precision: 60.37%; recall: 72.93%; FB1: 66.06  598...

I deployed two services with bert_base. The first, intent recognition, works very well. The second, sentence-pair similarity, performs poorly; its predictions look almost random. My code is as follows:

```python
# coding=utf-8
import csv
from bert_base.client import BertClient

for row in open('./test.tsv'):
    tag = row.split('\t')[0].strip()
    str1 = row.split('\t')[1].strip()
    str2 = row.split('\t')[2].strip()
    with BertClient(show_server_config=False, check_version=False, check_length=False,
                    mode="CLASS", port=7006, port_out=7007) as bc:
        res = ...
```
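One thing worth ruling out before blaming the model: the snippet above constructs a new BertClient for every row and queries one pair at a time. Below is a sketch of the same loop with a single client reused for the whole file and the pairs sent in one batched request; how the deployed classification model expects a sentence pair to be encoded (here joined with '|||') is an assumption to verify against the training input format:

```python
# coding=utf-8
from bert_base.client import BertClient

pairs, tags = [], []
with open('./test.tsv', encoding='utf-8') as f:
    for row in f:
        fields = row.rstrip('\n').split('\t')
        if len(fields) < 3:
            continue
        tags.append(fields[0].strip())
        # Assumption: the serving-side model takes the two sentences joined
        # with '|||'; adjust this to match how the model was trained/exported.
        pairs.append(fields[1].strip() + '|||' + fields[2].strip())

# Create the client once and send all pairs in a single request.
with BertClient(show_server_config=False, check_version=False, check_length=False,
                mode="CLASS", port=7006, port_out=7007) as bc:
    res = bc.encode(pairs)

print(res)
```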

Hi guys! I tried to run the sample code, but I am facing the error below. Any comments would be very welcome and appreciated. raise FileNotFoundError('graph optimization fails...