Serving
yolov3 in examples/Pipeline/PaddleDetection
Using the official model provided at the link, I cannot get yolov3 to run: I first start web_service.py (it starts normally), then run pipeline_http_client.py, which fails with the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/paddle_serving_server/pipeline/error_catch.py", line 97, in wrapper
res = func(*args, **kw)
File "/usr/local/lib/python3.6/site-packages/paddle_serving_server/pipeline/operator.py", line 1181, in postprocess_help
logid_dict.get(data_id))
File "web_service.py", line 64, in postprocess
fetch_dict, visualize=False))
File "/usr/local/lib/python3.6/site-packages/paddle_serving_app/reader/image_reader.py", line 427, in __call__
self.clsid2catid)
File "/usr/local/lib/python3.6/site-packages/paddle_serving_app/reader/image_reader.py", line 344, in _get_bbox_result
lod = [fetch_map[fetch_name + '.lod']]
KeyError: 'save_infer_model/scale_0.tmp_1.lod'
Classname: Op._run_postprocess.<locals>.postprocess_help
FunctionName: postprocess_help
ERROR 2023-04-16 07:48:33,891 [dag.py:420] (data_id=0 log_id=0) Failed to predict: Log_id: 0 Raise_msg: save_infer_model/scale_0.tmp_1.lod
ClassName: Op._run_postprocess.<locals>.postprocess_help FunctionName: postprocess_help
What could be causing this? Is it because the model code is too old? I see it was committed two years ago. My environment is as follows:
paddle-serving-app 0.9.0
paddle-serving-client 0.9.0
paddle-serving-server-gpu 0.9.0.post112
paddle2onnx 1.0.6
paddlefsl 1.1.0
paddlehub 2.3.1
paddlenlp 2.4.3
paddlepaddle 2.4.2
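For context, the KeyError comes from `_get_bbox_result` in paddle_serving_app's image_reader.py, which unconditionally indexes `fetch_map[fetch_name + '.lod']`; the error means the server's fetch map no longer contains that LoD entry. Below is a minimal, hypothetical sketch (not the library's actual API) of the indexing that fails, with a guard that falls back to treating the output as a single image when the `.lod` key is absent. The helper name and the single-image fallback are my own assumptions for illustration:

```python
import numpy as np

def get_bbox_result_safe(fetch_map, fetch_name="save_infer_model/scale_0.tmp_1"):
    """Hypothetical guard around the indexing done in
    image_reader._get_bbox_result. The original code does
    `lod = [fetch_map[fetch_name + '.lod']]` and raises KeyError
    when the server returns results without a LoD entry."""
    lod_key = fetch_name + ".lod"
    result = np.asarray(fetch_map[fetch_name])
    if lod_key in fetch_map:
        # LoD offsets delimit per-image bbox ranges within the batch.
        lod = list(fetch_map[lod_key])
    else:
        # Assumption: no LoD means the whole output belongs to one image.
        lod = [0, result.shape[0]]
    return result, lod
```

Printing `fetch_dict.keys()` inside `postprocess` in web_service.py would confirm which fetch names the server actually returns, and whether the `.lod` entry is missing only under the newer serving/paddle versions listed above.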