BERT-BiLSTM-CRF-NER
Error on startup on a CPU server: the model is a .pb file generated on a GPU server.
I:VENTILATOR:new config request req id: 0 client: b'e4d5dbdc-97ff-4ec5-8bd3-96e0c11d5c60'
Process BertWorker-3:
Traceback (most recent call last):
File "d:\app\python\python37\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "d:\app\python\python37\lib\site-packages\bert_base\server\__init__.py", line 487, in run
self._run()
File "d:\app\python\python37\lib\site-packages\zmq\decorators.py", line 75, in wrapper
return func(*args, **kwargs)
File "d:\app\python\python37\lib\site-packages\bert_base\server\zmq_decor.py", line 27, in wrapper
return func(*args, **kwargs)
File "d:\app\python\python37\lib\site-packages\bert_base\server\__init__.py", line 505, in _run
for r in estimator.predict(input_fn=self.input_fn_builder(receivers, tf), yield_single_examples=False):
File "d:\app\python\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 611, in predict
features, None, model_fn_lib.ModeKeys.PREDICT, self.config)
File "d:\app\python\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1112, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "d:\app\python\python37\lib\site-packages\bert_base\server\__init__.py", line 463, in classification_model_fn
pred_probs = tf.import_graph_def(graph_def, name='', input_map=input_map, return_elements=['pred_prob:0'])
File "d:\app\python\python37\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "d:\app\python\python37\lib\site-packages\tensorflow\python\framework\importer.py", line 426, in import_graph_def
graph._c_graph, serialized, options) # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'BatchMatMulV2' in binary running on ID-00368-NB01. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler
should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
I:SINK:send config client b'e4d5dbdc-97ff-4ec5-8bd3-96e0c11d5c60'
- Serving Flask app "bert_base.server.http" (lazy loading)
- Environment: production
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
- Debug mode: off
- Running on http://0.0.0.0:8091/ (Press CTRL+C to quit)
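The `NotFoundError` in the traceback is an op-registry mismatch: a frozen GraphDef records only op type names, and `import_graph_def` fails if the serving binary has not registered one of them. `BatchMatMulV2` is an op emitted by newer TensorFlow releases (it replaced `BatchMatMul` in graphs exported by later 1.x versions, if I recall correctly), so a graph exported by a newer TF will not load in an older runtime. A toy sketch of the registry check in plain Python (this is a simplified model, not the real TensorFlow API):

```python
# Toy model of TensorFlow's op registry (NOT the real TF API): a frozen
# GraphDef records op type names; importing it fails if the running
# binary has not registered one of those names.
OLD_RUNTIME_OPS = {"MatMul", "BatchMatMul", "Softmax"}   # older runtime
NEW_RUNTIME_OPS = OLD_RUNTIME_OPS | {"BatchMatMulV2"}    # newer runtime

def import_graph(op_types, registry):
    """Simulate import_graph_def: reject any op the runtime doesn't know."""
    missing = [op for op in op_types if op not in registry]
    if missing:
        raise LookupError("Op type not registered %r" % missing[0])

graph = ["MatMul", "BatchMatMulV2"]       # graph exported by the newer runtime
import_graph(graph, NEW_RUNTIME_OPS)      # fine where it was exported
try:
    import_graph(graph, OLD_RUNTIME_OPS)  # fails on the older runtime
except LookupError as e:
    print(e)                              # Op type not registered 'BatchMatMulV2'
```

This matches the fix discussed below: export the .pb on (or with the same TensorFlow version as) the machine that will serve it.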
I'll find some time to try it; this error is beyond me.
Solved, more or less: the model saved on the GPU (the parameters, variables, and graph structure) is the same as on the CPU. The problem is in the conversion to the .pb file, which should be done on the CPU server.
Really? I converted my .pb on a GPU and it runs on CPU with no bugs.
I ran training on the original BERT source project, then deployed it for prediction with your project. Could you send me your pb-conversion script?
Training from the source code hit a bug, and I don't know why either; the pb-conversion code is in the project.
OK, thanks.
I keep getting errors when converting to .pb.
How do you generate the .pb file?
Hi, how did you solve it? Are you doing NER as well? And how did you convert the ckpt to .pb?
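On the ckpt-to-pb question: the project ships its own conversion code, but a generic TF 1.x freeze looks roughly like the sketch below. The function and argument names here are my own, not the project's; the output node name `pred_prob` is taken from the traceback above and may differ in your graph.

```python
def freeze_checkpoint(ckpt_prefix, output_pb, output_node_names):
    """Freeze a TF 1.x checkpoint into a single .pb GraphDef (sketch).

    ckpt_prefix:       checkpoint path prefix, e.g. "model.ckpt-1234"
    output_pb:         path of the frozen graph to write
    output_node_names: e.g. ["pred_prob"] (name seen in the traceback)
    """
    import tensorflow as tf  # TF 1.x assumed

    with tf.Graph().as_default() as graph, tf.Session(graph=graph) as sess:
        # Rebuild the graph from the .meta file and restore the weights.
        # clear_devices=True strips GPU device placements from the graph.
        saver = tf.train.import_meta_graph(ckpt_prefix + ".meta",
                                           clear_devices=True)
        saver.restore(sess, ckpt_prefix)
        # Bake the variables into constants so one file holds the model.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), output_node_names)
        with tf.gfile.GFile(output_pb, "wb") as f:
            f.write(frozen.SerializeToString())
```

Whichever script you use, run the conversion with the same TensorFlow version as the serving machine; that avoids the unregistered-op mismatch in the traceback above.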