
How to run yolact_edge on Jetson AGX Xavier?

Open · MiaoRain opened this issue 3 years ago · 9 comments

Hi, how can I run yolact_edge on a Jetson AGX Xavier? Thanks!

MiaoRain avatar Aug 11 '21 07:08 MiaoRain

Hi Miao, we have successfully converted the resulting model to TRT. Differently from other models (where you have only one TRT file, or you can load it in DeepStream), here the conversion creates several small TRT engines for the different parts of the model; you then just load them (with yolact_edge) and run inference.
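
For reference, both the conversion and the inference go through eval.py; something along these lines should work (the weight and config names here are placeholders, and the --image input:output form follows upstream YOLACT's eval.py, so check it against your checkout):

python3 eval.py --trained_model=./weights/yolact_edge_resnet50_54_800000.pth --config=yolact_edge_resnet50_config --use_fp16_tensorrt --use_tensorrt_safe_mode --image=input.jpg:output.jpg

As far as I can tell, the small .trt files produced by the conversion sit next to the weights and are loaded automatically on later runs, so the conversion cost is only paid once.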

malfonsoNeoris avatar Aug 18 '21 16:08 malfonsoNeoris

Hi Miao, we have successfully converted the resulting model to TRT. Differently from other models (where you have only one TRT file, or you can load it in DeepStream), here the conversion creates several small TRT engines for the different parts of the model; you then just load them (with yolact_edge) and run inference.

Hi, thanks for replying. Right now I have several TRT files, but how do I run inference with them? Or could you send the inference script to my email [email protected]? Thanks a lot.

MiaoRain avatar Aug 19 '21 01:08 MiaoRain

Hi, there is actually no difference between running inference on Jetson AGX Xavier and on a normal Ubuntu machine. What problem are you running into?

haotian-liu avatar Aug 19 '21 03:08 haotian-liu

Hi, there is actually no difference between running inference on Jetson AGX Xavier and on a normal Ubuntu machine. What problem are you running into?

Hi, right now it seems OK. Thanks a lot.

MiaoRain avatar Aug 19 '21 11:08 MiaoRain

Hi, there is actually no difference between running inference on Jetson AGX Xavier and on a normal Ubuntu machine. What problem are you running into?

Hi haotian, can yolact++ also be converted to TRT? Have you tested it?

MiaoRain avatar Aug 20 '21 06:08 MiaoRain

I haven't tried converting it to TensorRT. There might be a little more work when converting the deformable convolution.
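
A rough, untested sketch of the direction that work could take: register a custom torch2trt converter for the deformable-convolution op and have it emit a TensorRT plugin layer. The op path and the plugin are assumptions here, nothing in this repo provides them yet:

import torchvision  # deform_conv2d lives in torchvision.ops
from torch2trt import tensorrt_converter

@tensorrt_converter('torchvision.ops.deform_conv2d')
def convert_deform_conv2d(ctx):
    # ctx.method_args holds the PyTorch call's inputs (input, offset, weight, ...),
    # and ctx.method_return its output. A real converter would look up the matching
    # TensorRT tensors and add a DCN plugin layer to ctx.network here; left as a
    # stub because the plugin itself is project-specific.
    raise NotImplementedError("wire up a TensorRT deformable-conv plugin here")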

haotian-liu avatar Aug 20 '21 06:08 haotian-liu

I haven't tried converting it to TensorRT. There might be a little more work when converting the deformable convolution.

I think so, thanks.

MiaoRain avatar Aug 20 '21 06:08 MiaoRain

I haven't tried converting it to TensorRT. There might be a little more work when converting the deformable convolution.

Hi haotian, right now the MobileNetV2 model only reaches 25 FPS on Xavier, which is even slower than the ResNet-101 results in your paper. How can I further improve the inference speed? Thanks.

python3 eval.py --trained_model=./weights/0808/yolact_mobilenetv2_221_80000.pth --config=yolact_edge_mobilenetv2_config --use_fp16_tensorrt --use_tensorrt_safe_mode --benchmark --trt_batch_size=8
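
One way to narrow this down is to time the raw forward pass with CUDA events; that shows whether the network itself or the pre/post-processing around it is the bottleneck. A minimal sketch (the stand-in module is only there so the snippet runs; swap in the converted yolact_edge model):

import torch

net = torch.nn.Conv2d(3, 32, 3, padding=1).cuda().eval()  # stand-in; replace with the converted yolact_edge model
x = torch.randn(1, 3, 550, 550, device='cuda')  # 550x550 is the default yolact_edge input size

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
with torch.no_grad():
    for _ in range(10):  # warm-up iterations
        net(x)
    torch.cuda.synchronize()
    start.record()
    for _ in range(100):
        net(x)
    end.record()
    torch.cuda.synchronize()

print('forward-only FPS: %.1f' % (100 * 1000.0 / start.elapsed_time(end)))  # elapsed_time() is in ms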

MiaoRain avatar Aug 20 '21 07:08 MiaoRain

Are there any modifications or changes to the code base?

haotian-liu avatar Sep 22 '21 21:09 haotian-liu