BERT-BiLSTM-CRF-NER

TensorFlow solution for the NER task using a BiLSTM-CRF model with Google BERT fine-tuning, plus private server services.

Results: 105 BERT-BiLSTM-CRF-NER issues (sorted by recently updated)

processed 90392 tokens with 1854 phrases; found: 1620 phrases; correct: 1492. accuracy: 97.74%; precision: 92.10%; recall: 80.47%; FB1: 85.90 COM: precision: 92.33%; recall: 98.22%; FB1: 95.18 1616 FUN: precision: 0.00%;...
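For context, the overall numbers in this conlleval-style summary can be reproduced from the three phrase counts it reports (1854 gold phrases, 1620 found, 1492 correct):

```python
# Recompute the overall precision/recall/FB1 from the conlleval counts above.
gold, found, correct = 1854, 1620, 1492

precision = correct / found    # fraction of predicted phrases that are correct
recall = correct / gold        # fraction of gold phrases that were found
fb1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(precision * 100, 2), round(recall * 100, 2), round(fb1 * 100, 2))
```

This yields 92.1 / 80.47 / 85.9, matching the report; the per-type lines (COM, FUN, ...) use the same formulas restricted to each entity type.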

model_fn = model_fn_builder(
    bert_config=bert_config,
    num_labels=len(label_list) + 1,  # why is 1 added here? What does it do?
    init_checkpoint=args.init_checkpoint,
    learning_rate=args.learning_rate,
    num_train_steps=num_train_steps,
    num_warmup_steps=num_warmup_steps,
    args=args)
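A likely reason for the `+ 1` (an assumption, not confirmed in this issue thread): the label-to-id map is built starting from 1, leaving id 0 free for padding positions, so the model needs one output class more than `len(label_list)`. A minimal sketch with a hypothetical label set:

```python
# Hypothetical sketch: a label map that reserves id 0 for padding,
# which is why the model needs len(label_list) + 1 output classes.
label_list = ["O", "B-COM", "I-COM", "B-FUN", "I-FUN"]

# Ids start at 1; 0 is left free for the [PAD] position.
label_map = {label: i for i, label in enumerate(label_list, start=1)}

num_labels = len(label_list) + 1  # 5 real labels + 1 padding id = 6
```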

No matter how large the dataset is, the GPU has a bit over 24000 MB of memory and a bit over 23000 MB gets occupied (and that number never changes). What is going on? I tried all sorts of approaches and none of them helped: 1. tf.data.TFRecordDataset.cache(), 2. tf.data.TFRecordDataset.shard, 3. splitting the tf_record into multiple files for reading, 4. setting epoch and batch_size to 1 and 16 respectively. In the end the only thing that worked was session_config.gpu_options.per_process_gpu_memory_fraction. @macanv Please advise!
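This near-total allocation is TensorFlow's default behavior, not a function of the data: TF 1.x reserves almost all GPU memory up front. A hedged sketch of the two standard knobs (TF 1.x API; the variable name mirrors the one the issue mentions):

```python
import tensorflow as tf  # TF 1.x API

session_config = tf.ConfigProto()
# Option 1: grow the allocation on demand instead of grabbing it all up front.
session_config.gpu_options.allow_growth = True
# Option 2 (the one the issue author ended up using): hard-cap the fraction.
# session_config.gpu_options.per_process_gpu_memory_fraction = 0.5

sess = tf.Session(config=session_config)
```

Note that neither option reduces what the model actually needs; they only change how eagerly TensorFlow reserves memory.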

I trained a model on an English dataset elsewhere and brought it into your project for online prediction using the terminal_predict.py file, but the label-list file is missing. What should I do? Do I have to retrain with your project?

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[12928,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc If you want to see a list of allocated tensors when...
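For reference, the shape in the OOM message is plausibly [batch_size × max_seq_length, hidden_size] (an assumption: e.g. 64 × 202 = 12928, with BERT-Base's hidden size of 768), and the OOM comes from many such activations being kept for backprop, not from this one tensor. A rough float32 size estimate:

```python
# Rough memory estimate for the tensor in the OOM message (float32 = 4 bytes).
rows, cols = 12928, 768          # 12928 = 64 * 202 would match batch * seq_len
bytes_per_float = 4
tensor_bytes = rows * cols * bytes_per_float
tensor_mib = tensor_bytes / (1024 ** 2)
# Dozens of activations of roughly this size per transformer layer add up
# quickly; the usual fixes are reducing batch_size or max_seq_length.
```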

After training finished, I used the provided adam_filter method to delete some unneeded parameters, then reloaded the model for prediction and got the following error: tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have...
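A hedged reading of this NotFoundError: the graph being restored still declares the Adam slot variables (adam_m / adam_v) that the filter stripped from the checkpoint, e.g. because it is built in training mode. An illustrative name-based filter (a sketch of the idea, not the repo's exact adam_filter):

```python
# Illustrative sketch (not the repo's exact adam_filter): keep only
# non-optimizer variables when re-saving a checkpoint. If the restoring
# graph still creates the Adam slots, tf.train.Saver will look for
# adam_m/adam_v in the filtered checkpoint and raise NotFoundError.
def is_adam_slot(var_name):
    """True for Adam's per-variable slot tensors, e.g. '.../adam_m'."""
    return var_name.endswith("/adam_m") or var_name.endswith("/adam_v")

checkpoint_vars = [
    "bert/embeddings/word_embeddings",
    "bert/embeddings/word_embeddings/adam_m",
    "bert/embeddings/word_embeddings/adam_v",
    "crf/transitions",
]
kept = [v for v in checkpoint_vars if not is_adam_slot(v)]
```

So the fix is usually to build the graph for inference only (no training op) before restoring the filtered checkpoint.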

Alternatively, could you point out where the training for-loop is? The code structure is a bit complex and I could not find it.

The service starts fine locally, but after packaging it into Docker the "ready and listening!" message never appears, which means the API service never comes up: I:VENTILATOR:[__i:_ge:239]:get devices I:VENTILATOR:[__i:_ge:271]:device map: worker 0 -> cpu I:SINK:[__i:_ru:317]:ready I:VENTILATOR:[__i:_ru:180]:start http proxy I:WORKER-0:[__i:_ru:497]:use device cpu, load graph from /usr/src/app/models/pbModelDir/classification_model.pb I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'e00184bb-7360-4fea-9c19-d9e3321bf9bb'...

Why does the shape sometimes show batch_size and sometimes not? Is it because the last batch has fewer than batch_size examples? But even when I set batch_size to 2, I still get (?, 202). Hoping for an answer.
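A hedged reading: in TF 1.x graphs the batch dimension is often left dynamic (None, printed as `?`) precisely so that a final partial batch can still flow through; the 202 is presumably max_seq_length. The partial-batch arithmetic itself is easy to check:

```python
# Why the last batch can be smaller than batch_size: a simple sketch.
def batch_sizes(num_examples, batch_size):
    """Sizes of consecutive batches drawn from num_examples items."""
    return [min(batch_size, num_examples - start)
            for start in range(0, num_examples, batch_size)]

print(batch_sizes(5, 2))  # the last batch holds the leftover example
```

So seeing `(?, 202)` even with batch_size=2 does not mean batching is broken; the graph simply does not bake the batch size into the tensor shape.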