Serving

A flexible, high-performance carrier for machine learning models (PaddlePaddle's serving deployment framework)

Results: 218 Serving issues, sorted by most recently updated

Is this project still being maintained? When will CUDA 11.7, CUDA 11.8, and CUDA 12.0 be supported?

After setting a timeout, execution reaches this part of the code: ![image](https://github.com/PaddlePaddle/Serving/assets/24838269/33fbe4c9-7042-42f6-8c72-6597812e24a4) Without a timeout set, there is no problem: ![image](https://github.com/PaddlePaddle/Serving/assets/24838269/82d0df86-d6f1-479d-831f-3b4d0d20787b)

paddle-serving-app 0.9.0, paddle-serving-client 0.9.0, paddle-serving-server-gpu 0.9.0.post112, paddlepaddle-gpu 2.6.0.post112
```
Traceback (most recent call last):
  File "/mnt/storage/anaconda3/envs/paddle/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/mnt/storage/anaconda3/envs/paddle/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code,...
```

I defined a custom OP. The C++ service is started with:
```
python3 -m paddle_serving_server.serve --model serving_model --port 9393
```
How can I debug the C++ code inside the OP? Any advice is appreciated.

![image](https://github.com/PaddlePaddle/Serving/assets/102728667/0b1e1661-c3f2-406d-b2a7-91dea834e964) ![image](https://github.com/PaddlePaddle/Serving/assets/102728667/c9f48730-e4f0-43d6-bfc5-6f0a0b8db738) Does anyone know where this went wrong?

![image](https://user-images.githubusercontent.com/58454582/217485204-842cb026-eca2-4d65-b72f-5368f40a8d40.png) ![image](https://user-images.githubusercontent.com/58454582/217485237-0e8d0ef0-5fad-4127-8ad6-14a957fef0e1.png)

![image](https://github.com/PaddlePaddle/Serving/assets/17562936/a41f3cfe-d664-4719-ad7f-a3354a8986d1) Building with the Docker 0.9.0-devel image, this step fails with the error `package google.golang.org/grpc is not a main package`. ![image](https://github.com/PaddlePaddle/Serving/assets/17562936/55e62e3e-6c0c-4e65-8cf9-dddbd5750ea7)

## Problem
### Q1: Running the following command produces an error:
```bash
export SERVING_BIN=/usr/local/serving_bin/serving
python -m paddle_serving_server.serve \
    --model ./serving_server \
    --thread 8 --port 10010 \
    --gpu_ids 0
```
Error message:
```bash
Error Message Summary:
----------------------...
```
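For context, once a server like the one above is up, a basic C++-server client call usually looks like the minimal sketch below; the client config path, feed/fetch names, and input shape are assumptions, not taken from the issue.

```python
import numpy as np
from paddle_serving_client import Client

# Minimal client sketch for a C++ server started with paddle_serving_server.serve.
# The client config path, feed/fetch names, and input shape are hypothetical.
client = Client()
client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:10010"])  # port taken from the command above

dummy = np.random.rand(1, 3, 224, 224).astype("float32")  # hypothetical input shape
result = client.predict(
    feed={"image": dummy},                      # hypothetical feed name
    fetch=["save_infer_model/scale_0.tmp_1"],   # hypothetical fetch name
    batch=True,
)
print(result)
```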

Model saving and conversion

As in the title: I deployed a cascade service using the pipeline approach and call the HTTP interface for image prediction from multiple threads; after several images have been analyzed, a segmentation fault occurs.
# Test environment
- CUDA 11.2
- GPU: RTX 3090
- python 3.7.0
- PaddlePaddle 2.1.0.post112
- paddle-serving-server-gpu 0.6.0.post11
- paddle_serving_app 0.6.0
# web_service.py ```python...
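A minimal sketch of the kind of multi-threaded HTTP caller described in that report is shown below, assuming the standard pipeline web service JSON body with parallel "key"/"value" lists; the endpoint URL, port, and image paths are hypothetical.

```python
import base64
import threading

import requests

# Hypothetical pipeline endpoint; the real service name and port come from config.yml.
URL = "http://127.0.0.1:18080/cascade/prediction"

def predict_one(image_path):
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("utf-8")
    # Pipeline web services expect parallel "key"/"value" lists in the JSON body.
    payload = {"key": ["image"], "value": [img_b64]}
    resp = requests.post(URL, json=payload, timeout=30)
    print(image_path, resp.status_code, resp.json().get("err_no"))

# Hypothetical image paths, each sent from its own thread.
threads = [threading.Thread(target=predict_one, args=(p,)) for p in ["1.jpg", "2.jpg", "3.jpg"]]
for t in threads:
    t.start()
for t in threads:
    t.join()
```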

question

Environment: CUDA 11.7, cuDNN 8.4.1, GPU: GTX 1070, python 3.8.13, PaddlePaddle 2.4.1.post117, paddle-serving-server-gpu 0.9.0, paddle_serving_app 0.9.0. A PPYOLOv2 model trained with PaddleX was converted from an inference model to a server model with the command `python -m paddle_serving_client.convert --dirname --model_filename --params_filename --serving_server serving_server --serving_client serving_client`. I found a problem: the same model deployed in different ways produces an lod-related error. Specifically: 1. When deploying with the pipeline approach, fetch_dict has no fetch_name.lod key, fetch_dict: {'save_infer_model/scale_0.tmp_1': array([[...
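As a hedged illustration of the discrepancy described above, the `.lod` entry can be read defensively so the same post-processing works whether or not the key is present; the fetch name comes from the snippet, the dummy data and fallback logic are assumptions.

```python
import numpy as np

# Dummy fetch_dict standing in for what the pipeline deployment returns in this report
# (no ".lod" key); the fetch name is from the snippet, the array contents are made up.
fetch_dict = {"save_infer_model/scale_0.tmp_1": np.zeros((4, 6), dtype="float32")}

fetch_name = "save_infer_model/scale_0.tmp_1"
boxes = fetch_dict[fetch_name]

# The C++ server also returns a companion "<fetch_name>.lod" entry, while the pipeline
# deployment described above does not, so fall back to treating the output as one sample.
lod = fetch_dict.get(fetch_name + ".lod")
if lod is None:
    lod = [0, len(boxes)]
print(lod)
```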