
Can the benchmark be run on Jetson?

Open tomjeans opened this issue 2 years ago • 4 comments

At runtime I get the following error:

[ERROR] fastdeploy/runtime.cc(319)::EnablePaddleToTrt While using TrtBackend with EnablePaddleToTrt, require the FastDeploy is compiled with Paddle Inference Backend, please rebuild your FastDeploy.

When I try to rebuild, CMake fails with:

CMake Error at cmake/paddle_inference.cmake:67 (message): Paddle Backend doesn't support linux aarch64 now.
Call Stack (most recent call first): CMakeLists.txt:212 (include)

I also find that the TRT backend is very slow: jtop on the Jetson shows very low GPU utilization. Both set_trt_input_shape and set_trt_cache_file are already set. Is this a problem with my settings, and how should I fix it?
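For reference, a minimal sketch of how these RuntimeOption calls fit together on the Python side (the model paths, input tensor name and shapes below are placeholders, not taken from benchmark_ppcls.py):

import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu(0)
option.use_trt_backend()
# Pin the dynamic-shape range so TensorRT does not rebuild the engine per input shape.
option.set_trt_input_shape("inputs", [1, 3, 224, 224], [1, 3, 224, 224], [8, 3, 224, 224])
# Cache the serialized engine so later runs skip the slow engine build.
option.set_trt_cache_file("ppcls.trt")
# enable_paddle_to_trt() routes TRT through Paddle Inference and therefore needs
# FastDeploy built with the Paddle Inference backend; this is the call behind the
# EnablePaddleToTrt error above. Plain use_trt_backend() does not require it.
# option.enable_paddle_to_trt()

# Placeholder PaddleClas model files.
model = fd.vision.classification.PaddleClasModel(
    "ResNet50_vd_infer/inference.pdmodel",
    "ResNet50_vd_infer/inference.pdiparams",
    "ResNet50_vd_infer/inference_cls.yaml",
    runtime_option=option)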

tomjeans avatar Nov 10 '22 03:11 tomjeans

@tomjeans Which benchmark script are you running, and does your environment have TRT?

wjj19950828 avatar Nov 10 '22 05:11 wjj19950828

tensorrt 8.4.0.11-1+cuda11.4 arm64 (Meta package of TensorRT). The script I'm running is benchmark_ppcls.py.

tomjeans avatar Nov 10 '22 07:11 tomjeans

@tomjeans It looks like TRT was not compiled in. Please share a screenshot of the runtime log.

Also, to build the Python package, refer to these build commands:

git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/python
export BUILD_ON_JETSON=ON    # Jetson build; enables the TensorRT and ONNX Runtime backends
export ENABLE_VISION=ON      # build the vision module
export ENABLE_TEXT=ON        # build the text module
python setup.py build
python setup.py bdist_wheel  # wheel is written to FastDeploy/python/dist/
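After installing the wheel produced under FastDeploy/python/dist/, one quick way to confirm the TRT backend made it into the build is to load any model with use_trt_backend() and watch the log printed during model creation, which reports which backend the runtime was initialized with. A minimal sketch, with placeholder PaddleClas model paths:

import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu(0)
option.use_trt_backend()

# Placeholder model paths (use the model from benchmark_ppcls.py).
model = fd.vision.classification.PaddleClasModel(
    "ResNet50_vd_infer/inference.pdmodel",
    "ResNet50_vd_infer/inference.pdiparams",
    "ResNet50_vd_infer/inference_cls.yaml",
    runtime_option=option)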

Then try again~

wjj19950828 avatar Nov 11 '22 02:11 wjj19950828

Paddle Inference is not supported on Jetson yet. Because different Jetson devices depend on different Paddle Inference packages, it cannot be downloaded automatically for now. We will next add support for a user-specified -DPADDLE_DIRECTORY setting to bring in Paddle Inference. @joey12300

When running on Jetson at the moment, TRT uses the TensorRT that ships with the system, so the slowness is likely related to the model.
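One thing worth ruling out when the TRT backend looks slow and GPU utilization is low: the first inference includes building the TensorRT engine (skipped on later runs if set_trt_cache_file points to an existing cache), which can dominate a short benchmark. A minimal timing sketch that excludes warm-up, with placeholder model paths and a hypothetical random input:

import time
import numpy as np
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu(0)
option.use_trt_backend()
option.set_trt_input_shape("inputs", [1, 3, 224, 224], [1, 3, 224, 224], [8, 3, 224, 224])
option.set_trt_cache_file("ppcls.trt")  # reuse the engine across runs

# Placeholder PaddleClas model files.
model = fd.vision.classification.PaddleClasModel(
    "ResNet50_vd_infer/inference.pdmodel",
    "ResNet50_vd_infer/inference.pdiparams",
    "ResNet50_vd_infer/inference_cls.yaml",
    runtime_option=option)

# Hypothetical 224x224 RGB input matching the shapes above.
im = (np.random.rand(224, 224, 3) * 255).astype("uint8")

# Warm-up: the first call can include the TensorRT engine build.
for _ in range(10):
    model.predict(im)

# Time steady-state inference only.
start = time.time()
for _ in range(100):
    model.predict(im)
print("avg latency: %.2f ms" % ((time.time() - start) / 100 * 1000))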

jiangjiajun avatar Nov 11 '22 02:11 jiangjiajun

This issue has not been updated for a year and will be closed. If needed, update it to reopen.

jiangjiajun avatar Feb 06 '24 04:02 jiangjiajun