YOLOX-deepstream
deploy in xavier
Thanks for your excellent work! Can the code be deployed successfully on the Jetson platform? I converted the .pth to ONNX with export_onnx.py from the original YOLOX repo and converted that to an engine with trtexec on the Jetson, but I get no result when detecting dog.jpg because there is no bbox output.
No bbox conf bigger than 0.3
So how can I get a correct engine file?
Thanks!
Hi, if your engine file passes the test of the given TensorRT demos on these pages: https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/TensorRT/python https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/TensorRT/cpp then the problem is in your DeepStream configuration file; you can refer to this issue: #5
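On the "No bbox conf bigger than 0.3" symptom: the exported YOLOX ONNX model outputs one row per grid cell that still has to be decoded against the grid before thresholding, and a parser that expects a different layout will simply find nothing. Here is a minimal NumPy sketch of that decoding, modeled on the postprocess in the official demo (the 640x640 size, strides, and 0.3 threshold are the demo defaults; the function name is mine):

```python
import numpy as np

def decode_yolox(outputs, img_size=(640, 640), strides=(8, 16, 32), conf_thr=0.3):
    """Decode flat YOLOX head output.

    Rows are grid cells over all strides; columns are
    [x, y, w, h, objectness, class scores...]. x/y are offsets within a
    cell and w/h are log-scaled; obj/class are already sigmoided.
    """
    grids, strides_col = [], []
    for stride in strides:
        hsize, wsize = img_size[0] // stride, img_size[1] // stride
        xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
        grids.append(np.stack((xv, yv), 2).reshape(-1, 2))
        strides_col.append(np.full((grids[-1].shape[0], 1), stride))
    grids = np.concatenate(grids, 0)
    strides_col = np.concatenate(strides_col, 0)

    boxes = outputs.copy()
    boxes[:, :2] = (outputs[:, :2] + grids) * strides_col   # cell offset -> pixels
    boxes[:, 2:4] = np.exp(outputs[:, 2:4]) * strides_col   # log w/h -> pixels
    scores = boxes[:, 4] * boxes[:, 5:].max(1)              # objectness * best class
    keep = scores > conf_thr
    return boxes[keep, :4], scores[keep]
```

For a 640x640 input the three strides give 80x80 + 40x40 + 20x20 = 8400 rows; if the parser's grid does not match the size the model was exported with, the decoded boxes are wrong everywhere, which shows up as exactly this "no bbox" symptom.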
@lantudou have you had any problems with the Makefile? I cannot run it, because it says there is no header opencv.h, but OpenCV is installed.
I have an NVIDIA Jetson Xavier AGX dev kit.
Try to find out the path and change the Makefile accordingly:
sudo find / -iname "opencv"
@lantudou it's weird. I do not find a reference to the OpenCV lib, but I can use it in Python code.
g++ -c -o nvdsparsebbox_yolox.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I../../includes -I/usr/local/cuda-10.2/include -I/home/nvidia/data/zhangbo/library/opencv-4.5.1/include/opencv4 -I/usr/lib/gcc/x86_64-linux-gnu/7/include/ -fopenmp nvdsparsebbox_yolox.cpp
nvdsparsebbox_yolox.cpp:24:10: fatal error: opencv2/opencv.hpp: No such file or directory
 #include <opencv2/opencv.hpp>
compilation terminated.
Makefile:43: recipe for target 'nvdsparsebbox_yolox.o' failed
make: *** [nvdsparsebbox_yolox.o] Error 1
@lantudou I solved the OpenCV issues, but now I have nvdsinfer_custom_impl.h issues.
Any idea?
Solved it. The problem was the CFLAGS for ../includes.
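For anyone who hits the same two compile errors (opencv2/opencv.hpp and nvdsinfer_custom_impl.h not found), the fix is usually just the include paths in the Makefile. A sketch, assuming a stock JetPack install where OpenCV ships a pkg-config file; the exact paths vary by DeepStream and OpenCV version:

```make
# Illustrative Makefile additions; adjust paths to your own install.
CFLAGS += -I/opt/nvidia/deepstream/deepstream/sources/includes  # nvdsinfer_custom_impl.h
CFLAGS += $(shell pkg-config --cflags opencv4)                  # opencv2/opencv.hpp
LIBS   += $(shell pkg-config --libs opencv4)
```

If `pkg-config --cflags opencv4` prints nothing, the `sudo find / -iname "opencv"` approach from earlier in the thread is the fallback for locating the headers by hand.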
@lantudou do you have an idea how to quickly generate the engine file (yolox-d54-fp16.engine)?
The recommended way of YOLOX is to use torch2trt: https://github.com/Megvii-BaseDetection/YOLOX/blob/main/demo/TensorRT/python/README.md
@lantudou well, OK, but can I run torch2trt on the NVIDIA Jetson?
Or shall I do it on my PC?
Your device is a Jetson Xavier AGX, which is fine for installing and running torch2trt. Do not do it on your PC, because the engine file depends on your GPU device.
By the way, it seems JetPack already includes torch2trt:
https://github.com/NVIDIA-AI-IOT/torch2trt
@lantudou well, I can't run it for some weird reason. It says TensorRT is not installed.
@lantudou and which version of PyTorch do you have installed?
Maybe you have not installed the TensorRT Python bindings? Try this:
pip3 install tensorrt
Another approach for generating the engine file is to convert to an ONNX model file first; then you can use the trtexec program to convert the ONNX to an engine file automatically. However, I tried this with TensorRT 7.1.3 and found a serious FPS drop. You can try it with TensorRT 8 or above if your DeepStream version is 6.0.
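For reference, the trtexec route mentioned above might look like the following single command; this is a sketch, not a tested recipe: the file names are placeholders, /usr/src/tensorrt/bin is where JetPack typically installs trtexec, and --workspace is the TensorRT 7 spelling (newer TensorRT 8.4+ releases prefer --memPoolSize):

```shell
# Build the engine on the Jetson itself: engines are device-specific.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolox_s.onnx \
    --saveEngine=yolox-s-fp16.engine \
    --fp16 \
    --workspace=2048
```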
OK, I'll try torch2trt first. @lantudou
But it is weird, IMO, that I get all kinds of library issues. That's why I also asked about the torch version.
The final approach is to write a program that copies the weights of your model into the TensorRT API, which means you need to build the whole network manually, just like in the following link:
https://github.com/wang-xinyu/tensorrtx
It will cost you a lot of time, but it is the only way to understand what the engine file is doing, and it can avoid some strange quantization errors when you try to use an INT8 model. If your goal is to get the best performance out of your DeepStream program, you need to try it. And if you succeed, do not forget to share your program!
@lantudou for you, my brother, I'll share anything :))
I'm really just trying to run DeepSORT + YOLOX on my Jetson to see how it works, and it's been a struggle.
There are some prebuilt PyTorch packages: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048
@lantudou thank you very much! That solved almost every problem.
When I try to run tools/trt.py, I get an error: no module named yolox. Any idea?
Please install yolox first:
https://github.com/Megvii-BaseDetection/YOLOX
git clone git@github.com:Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -v -e . # or python3 setup.py develop
@lantudou I ran that already; sorry, I should have specified it above.
What is your output after running the installation command?
python3 setup.py develop
Please make sure you have already installed yolox; try:
$ python3
>>> import yolox
Great, now it works, but when I run sudo python3 tools/trt.py -n yolox-s yolo_s.pth I get a segmentation fault 👎
give me more details, bro
OK, so I installed yolox and it imports fine.
When I try to convert the yolox-s model to TensorRT, I run the above:
sudo python3 tools/trt.py -n yolox-s yolo_s.pth
and the result is "Segmentation fault" without any additional info.
I have torch 1.8 and numpy 1.19.4.
https://github.com/Megvii-BaseDetection/YOLOX/issues/916
https://github.com/Megvii-BaseDetection/YOLOX/issues/496
These two did not help.
try to reduce more?
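For the next person who hits a bare "Segmentation fault" from trt.py, two hedged debugging suggestions (the commands are illustrative and mirror the invocation used above):

```shell
# Ask Python to print a traceback even on a hard crash.
python3 -X faulthandler tools/trt.py -n yolox-s yolo_s.pth

# Avoid sudo if you can: root's Python may resolve different
# torch/tensorrt packages than your user install. If sudo is
# unavoidable, at least preserve the environment:
sudo -E python3 tools/trt.py -n yolox-s yolo_s.pth
```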