Jason


Please download the GPU wheel/library if you need to deploy your model on GPU. Refer to this document for more details: https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/quick_start
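
For reference, a minimal sketch of running inference on GPU with the FastDeploy Python package. It assumes the GPU wheel is installed and uses a PP-YOLOE detection model with illustrative file names; swap in the model class and files that match your own export.

```python
import cv2
import fastdeploy as fd

# Select the GPU device via RuntimeOption (file names below are illustrative)
option = fd.RuntimeOption()
option.use_gpu(0)  # run inference on GPU 0

model = fd.vision.detection.PPYOLOE(
    "model.pdmodel", "model.pdiparams", "infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)
```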

Serving deployment is being worked on and is expected to be released next month. Which models do you currently need serving deployment for?

> When calling through the C++ SDK, does all the client code need to be rewritten in C++?

Do you mean migrating to serving deployment? With serving deployment, all the code runs on the server; the client only needs to send requests and receive the responses, as in the sketch below.
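
A hypothetical client-side sketch in Python to illustrate this: the endpoint URL and payload schema are placeholders, not the actual FastDeploy serving API (which had not been released at the time of this comment).

```python
import base64
import requests

# Hypothetical serving endpoint; the real URL and payload schema depend on
# the serving solution you deploy (placeholders only).
SERVER_URL = "http://localhost:8000/predict"

with open("test.jpg", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode("utf-8")}

# The client only sends the request and reads the response;
# all model code runs on the server side.
resp = requests.post(SERVER_URL, json=payload, timeout=30)
print(resp.json())
```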

> @DefTruth Thanks, I noticed the problem.
>
> Another interesting thing: I quantized the exported ONNX model (small object detection / VisDrone Paddle) and it's successfully QUint8...

Which tool are you using to quantize your ONNX model?

This quantization tool is not supported by TensorRT at the moment. Refer to this doc: https://onnxruntime.ai/docs/performance/quantization.html#quantization-on-gpu
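
For context, dynamic quantization with ONNX Runtime produces unsigned 8-bit (QUInt8) weights like the ones reported above, and the resulting integer ops target the CPU execution provider rather than TensorRT. A sketch with illustrative file names, assuming this is the tool that was used:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Dynamic (weight-only) quantization; the output model targets the CPU
# execution provider. File names are illustrative.
quantize_dynamic(
    model_input="model.onnx",
    model_output="model.quant.onnx",
    weight_type=QuantType.QUInt8,  # matches the QUint8 weights mentioned above
)
```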

Hi, this requires rebuilding FastDeploy. Just follow this doc to build: https://www.github.com/PaddlePaddle/FastDeploy/tree/develop/docs%2Fdocs_en%2Fcompile%2Fhow_to_build_windows.md but you need to set ENABLE_OPENVINO_BACKEND=ON

Is your Windows win32 or x64? Currently only `x64` is supported. If you need to build with Python, it is just like enabling any other backend:

```
set ENABLE_OPENVINO_BACKEND=ON
```

and then `python...`
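
Once the Python package is built with ENABLE_OPENVINO_BACKEND=ON, selecting the backend at runtime would look roughly like this (the model class and file names are illustrative):

```python
import fastdeploy as fd

# Pick the OpenVINO backend on CPU via RuntimeOption
# (model class and file names are illustrative).
option = fd.RuntimeOption()
option.use_cpu()
option.use_openvino_backend()

model = fd.vision.detection.PPYOLOE(
    "model.pdmodel", "model.pdiparams", "infer_cfg.yml",
    runtime_option=option)
```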

@wsx958191 Thanks for your feedback. The documentation is indeed wrong; it should give the CPU download link by default, and we will fix this as soon as possible. For now, you can download the CPU library yourself from the prebuilt libraries page and use it.