Huajing Shi
Dusty, here is the info about my JetPack-L4T: # R32 (release), REVISION: 4.4, GCID: 23942405, BOARD: t186ref, EABI: aarch64, DATE: Fri Oct 16 19:37:08 UTC 2020. I did pull the...
Also, I have PyTorch 1.7 installed on my Jetson Xavier, but the jetson-inference docker has PyTorch version 1.6. Is this a problem? Thanks again.
"It seems since your system update, not all the CUDA libraries are getting properly mounted anymore" Yes, I think you are right. But is there a way to fix this?...
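For reference, on JetPack 4.x the CUDA libraries get mounted into containers by the NVIDIA container runtime, so one thing worth checking is that Docker is actually using that runtime by default. A typical /etc/docker/daemon.json for this setup looks like the fragment below (this is the standard JetPack configuration, but double-check your own file rather than overwriting it blindly):

```
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

After editing the file, restart Docker with `sudo systemctl restart docker`. Alternatively, pass `--runtime nvidia` explicitly on each `docker run` instead of setting the default.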
I ran "apt-cache search nvidia-container" and here is what it returned:
libnvidia-container-dev - NVIDIA container runtime library (development files)
libnvidia-container-tools - NVIDIA container runtime library (command-line tools)
libnvidia-container0 - NVIDIA container...
Dusty, thank you so much for answering my question, I really appreciate it. Now I am wondering how I could move on to object tracking, i.e., give each detected object a unique ID...
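To make the question concrete, here is a minimal sketch of the general idea behind assigning persistent IDs to detections: greedily match each new box to the existing track it overlaps most (by IoU), and start a new ID when nothing overlaps enough. This is not jetson-inference's or DeepStream's tracker API, just an illustration of the technique with plain tuples for boxes:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

class SimpleTracker:
    def __init__(self, iou_threshold=0.3):
        self.next_id = 0
        self.tracks = {}  # track id -> last seen box
        self.iou_threshold = iou_threshold

    def update(self, detections):
        """Assign an ID to each detected box; returns a list of (id, box)."""
        assigned = []
        unmatched = dict(self.tracks)  # tracks not yet claimed this frame
        for box in detections:
            # greedily pick the best-overlapping unclaimed track
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in unmatched.items():
                o = iou(box, prev)
                if o > best_iou:
                    best_id, best_iou = tid, o
            if best_id is None:
                best_id = self.next_id  # no match: start a new track
                self.next_id += 1
            else:
                del unmatched[best_id]
            self.tracks[best_id] = box
            assigned.append((best_id, box))
        return assigned
```

For example, feeding two frames where the boxes move by a pixel keeps the same IDs:

```python
tracker = SimpleTracker()
tracker.update([(0, 0, 10, 10), (50, 50, 60, 60)])   # IDs 0 and 1
tracker.update([(1, 1, 11, 11), (51, 51, 61, 61)])   # still IDs 0 and 1
```

Real trackers (e.g. the ones built into DeepStream) add motion prediction and handle occlusion, but the ID-assignment core is the same matching problem.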
Thank you very much for your reply. I tried the method from the link above, but it didn't work for me. After cloning that GitHub repo and running "make", it didn't go...
Dusty, thanks again for your response. I still cannot figure out how to use my custom ONNX model in either TAO or DeepStream. So I am now trying to use...
Hi Dusty, I followed your advice and asked the Triton question on the Triton GitHub. After I posted it, the Triton GitHub maintainer asked me this question about the ONNX...
The original command was:
./deepstream-lpr-app 1 2 0 infer us_car_test2.mp4 us_car_test2.mp4 output.264
Changing the options "1 2 0" to "1 1 0" generates the output file:
./deepstream-lpr-app \ \ ...
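That behavior matches my understanding of the positional arguments (worth double-checking against the deepstream_lpr_app repo's README, since this is from memory, not the source): the second argument selects the sink, so 2 is a fakesink (no file written) while 1 writes an H.264 file. Annotated:

```
# ./deepstream-lpr-app <plate-type> <sink-type> <roi> <infer|triton> <inputs...> <output>
#   plate-type: 1 = US car plate model, 2 = Chinese car plate model
#   sink-type:  1 = H.264 file output, 2 = fakesink (discard), 3 = display
#   roi:        0 = ROI disabled, 1 = ROI enabled
./deepstream-lpr-app 1 1 0 infer us_car_test2.mp4 us_car_test2.mp4 output.264
```

With sink-type 2 the pipeline still runs inference, it just discards the frames, which is why no output.264 appeared with the original options.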