
Error when running paddle serving

Open zouxiaoshi opened this issue 3 years ago • 7 comments

Problem:

Q1:

Running the following command fails:

export SERVING_BIN=/usr/local/serving_bin/serving
python -m paddle_serving_server.serve \
--model ./serving_server \
--thread 8 --port 10010 \
--gpu_ids 0 

Error message:

Error Message Summary:
----------------------
NotFoundError: Cannot open file ./serving_server/__model__, please confirm whether the file is normal.
  [Hint: Expected static_cast<bool>(fin.is_open()) == true, but received static_cast<bool>(fin.is_open()):0 != true:1.] (at /paddle/paddle/fluid/inference/api/analysis_predictor.cc:1119)
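
A quick sanity check is to list what the server will actually find at that path (a minimal sketch, using the directory from the command above):

ls -l ./serving_server/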

The model conversion was done with the following command:

python -m paddle_serving_client.convert --dirname . \
                                         --model_filename model.pdmodel          \
                                         --params_filename model.pdiparams       \
                                         --serving_server ./serving_server/ \
                                         --serving_client ./serving_client/

which produced the following files:

.
├── model.pdiparams
├── model.pdmodel
├── serving_server_conf.prototxt
└── serving_server_conf.stream.prototxt

Q2:

After forcibly renaming model.pdmodel (mv model.pdmodel __model__) and then starting the paddle serving service, I get the following errors:

SOLOv2 model

Error Message Summary:
----------------------
UnavailableError: Load operator fail to open file ./serving_server/sync_batch_norm_48.w_1, please check whether the model file is complete or damaged.
  [Hint: Expected static_cast<bool>(fin) == true, but received static_cast<bool>(fin):0 != true:1.] (at /paddle/paddle/fluid/operators/load_op.h:41)
  [operator < load > error]

YOLOv3 model

Error Message Summary:
----------------------
UnavailableError: Load operator fail to open file ./serving_server/batch_norm_41.b_0, please check whether the model file is complete or damaged.
  [Hint: Expected static_cast<bool>(fin) == true, but received static_cast<bool>(fin):0 != true:1.] (at /paddle/paddle/fluid/operators/load_op.h:41)
  [operator < load > error]

The YOLOv3 model does run under the following environment (a pinned-install sketch follows the list):

paddle-serving-app        0.6.1
paddle-serving-client     0.6.1
paddle-serving-server-gpu 0.6.1.post102
paddlepaddle-gpu          2.1.0
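
For reference, a minimal sketch of pinning that known-good combination (package names and versions are the ones listed above; extra index URLs or wheel sources for the CUDA 10.2 builds are not covered here):

pip install paddle-serving-app==0.6.1 paddle-serving-client==0.6.1
pip install paddle-serving-server-gpu==0.6.1.post102
pip install paddlepaddle-gpu==2.1.0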

Current environment (where the errors above occur):

paddle-serving-app        0.7.0
paddle-serving-client     0.7.0
paddle-serving-server-gpu 0.7.0.post102
paddlepaddle-gpu          2.2.0

cuda 10.2
Tesla V100
python 3.8

zouxiaoshi commented on Nov 23 '21

Where does this SERVING_BIN come from?

bjjwwang commented on Nov 24 '21

fail to open file ./serving_server/batch_norm_41.b_0

Judging from the error message, is your model saved as multiple scattered files (one file per parameter)?
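
One way to check (a minimal sketch, assuming the ./serving_server path used above): a combined-parameter export has a single model.pdiparams, while a scattered save has one file per variable (e.g. batch_norm_41.b_0), so list everything that is not one of the combined-format files:

ls ./serving_server/ | grep -vE '\.(pdmodel|pdiparams|prototxt)$'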

TeslaZhao commented on Nov 24 '21

Where does this SERVING_BIN come from?

It came from https://github.com/PaddlePaddle/Serving/blob/74a03152480ecd2ad7029873b92a4a71991b168e/tools/dockerfiles/build_scripts/install_whl.sh#L46, with serving_version changed to 0.7.0.
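
A quick way to confirm which binary the launcher will actually pick up (a trivial sketch; the path is the one from the export at the top of this issue):

echo "$SERVING_BIN"
ls -l "$SERVING_BIN"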

zouxiaoshi commented on Nov 25 '21

fail to open file ./serving_server/batch_norm_41.b_0
Judging from the error message, is your model saved as multiple scattered files?

Hello. I used the YOLO model scripts from PaddleDetection; after training, the exported model contains the following files (a sketch of the export step follows below):

.
├── infer_cfg.yml
├── model.pdiparams
├── model.pdiparams.info
└── model.pdmodel
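
For reference, an export like this is normally produced with PaddleDetection's tools/export_model.py; the config and weights paths below are placeholders, not values taken from this thread:

cd PaddleDetection
python tools/export_model.py \
    -c configs/yolov3/yolov3_darknet53_270e_coco.yml \
    -o weights=output/yolov3_darknet53_270e_coco/model_final.pdparams \
    --output_dir ./exported_model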

Then serving_server and serving_client were generated with the following command:

python -m paddle_serving_client.convert --dirname . \
                                         --model_filename model.pdmodel          \
                                         --params_filename model.pdiparams       \
                                         --serving_server ./serving_server/ \
                                         --serving_client ./serving_client/

The serving_server directory looks like this:

.
├── fluid_time_file
├── model.pdiparams
├── model.pdmodel
├── serving_server_conf.prototxt
└── serving_server_conf.stream.prototxt
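
The generated serving_server_conf.prototxt (listed above) is a text-format config, so it can be inspected directly to confirm the converter picked up the model's inputs and outputs (a minimal sketch):

cat ./serving_server/serving_server_conf.prototxt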

After hitting the following error:

Error Message Summary:
----------------------
NotFoundError: Cannot open file ./serving_server/__model__, please confirm whether the file is normal.
  [Hint: Expected static_cast<bool>(fin.is_open()) == true, but received static_cast<bool>(fin.is_open()):0 != true:1.] (at /paddle/paddle/fluid/inference/api/analysis_predictor.cc:1119)

I renamed the model file (model.pdmodel ==> __model__) and ran it again, which produced the following error:

Error Message Summary:
----------------------
UnavailableError: Load operator fail to open file ./serving_server/batch_norm_41.b_0, please check whether the model file is complete or damaged.
  [Hint: Expected static_cast<bool>(fin) == true, but received static_cast<bool>(fin):0 != true:1.] (at /paddle/paddle/fluid/operators/load_op.h:41)
  [operator < load > error]

zouxiaoshi commented on Nov 25 '21

OK, understood. I'll try to reproduce this on my side and should have a conclusion in about two hours.

bjjwwang commented on Nov 25 '21

Sorry, I was unable to reproduce this here. I'm wondering whether it is a SERVING_BIN version issue. Could you unset SERVING_BIN and run it again:

unset SERVING_BIN
python -m paddle_serving_server.serve \
--model ./serving_server \
--thread 8 --port 10010 \
--gpu_ids 0

bjjwwang commented on Nov 25 '21

@zouxiaoshi @bjjwwang Sorry to bother you. I'm running into the same problem; could you share how you solved it at the time?

My steps were:

  1. I set up the environment following the installation doc (sections 1.2/2.1), and all the checks in section 3 (environment check) passed.
  2. Following the model conversion doc, I downloaded the PaddleOCR model, but after conversion the ppocr_det_v3_serving folder does not contain files named __model__ and __params__; instead it contains inference.pdmodel and inference.pdiparams. Why is that? (A hedged sketch of this conversion step appears further below.)
  3. When I run:
python3 -m paddle_serving_server.serve --model ppocr_det_v3_serving  --port 8181

I get the following error:

----------------------
Error Message Summary:
----------------------
NotFoundError: Cannot open file ppocr_det_v3_serving/__model__, please confirm whether the file is normal.
  [Hint: Expected static_cast<bool>(fin.is_open()) == true, but received static_cast<bool>(fin.is_open()):0 != true:1.] (at /paddle/paddle/fluid/inference/api/analysis_predictor.cc:1452)

How can this be solved?
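
For reference, the conversion in step 2 presumably looked something like this (a hedged reconstruction using the same flags as the convert commands earlier in this thread; the input and output directory names are assumptions based on the PaddleOCR detection model, not copied from this issue):

python3 -m paddle_serving_client.convert --dirname ./ch_PP-OCRv3_det_infer/ \
                                         --model_filename inference.pdmodel \
                                         --params_filename inference.pdiparams \
                                         --serving_server ./ppocr_det_v3_serving/ \
                                         --serving_client ./ppocr_det_v3_client/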

Environment

Using the image registry.baidubce.com/paddlepaddle/paddle:2.3.0

Other

Python 3.7.13
paddle-serving-app    0.9.0
paddle-serving-client 0.9.0
paddle-serving-server 0.9.0
paddlepaddle          2.3.0

wenjia322 commented on Nov 29 '23