The fix causes a TracerWarning; maybe it's better to use the function from issue #182.
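For context, here is a minimal sketch of where such a TracerWarning typically comes from during `torch.onnx.export` and one way to avoid it. The modules below are illustrative only; they are not the code from this repo or from #182.

```python
import torch
import torch.nn.functional as F

class WarnsDuringTrace(torch.nn.Module):
    def forward(self, x):
        # Casting traced shape values to Python ints is what usually triggers
        # "TracerWarning: Converting a tensor to a Python integer might cause
        # the trace to be incorrect" and bakes the size in as a constant.
        h = int(x.shape[-2] * 2)
        w = int(x.shape[-1] * 2)
        return F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)

class TraceFriendly(torch.nn.Module):
    def forward(self, x):
        # scale_factor keeps the resize symbolic, so the exported graph
        # generalizes to other input sizes without the warning.
        return F.interpolate(x, scale_factor=2.0, mode="bilinear", align_corners=False)

dummy = torch.randn(1, 3, 64, 64)
torch.onnx.export(
    TraceFriendly(), dummy, "toy.onnx", opset_version=17,
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {2: "height", 3: "width"}},
)
```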
I ran Depth-Anything on a Jetson Orin Nano. Inference is significantly faster with TensorRT than with PyTorch, and TensorRT is very easy to use on Jetson because of the Jetson...
> So, we don't have any official version of Depth-Anything that runs on TensorRT?

It's better to generate your own TensorRT engine, because of differences in TensorRT version, quantization operations, CUDA version, ...
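As an illustration of what "generate your own engine" can look like, here is a minimal sketch that builds an engine from an exported ONNX file with the TensorRT Python API, assuming the TensorRT 8.x bindings that ship with JetPack (the API differs slightly on TensorRT 10). The file names and the 2 GiB workspace limit are placeholders, not values from this repo.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file exported from the PyTorch model.
with open("depth_anything_vits14.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30)  # 2 GiB
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)  # FP16 is usually a big win on Orin

# Build and save the serialized engine for later deserialization at runtime.
engine_bytes = builder.build_serialized_network(network, config)
with open("depth_anything_vits14.engine", "wb") as f:
    f.write(engine_bytes)
```

Because the serialized engine is tied to the exact TensorRT/CUDA versions and the GPU it was built on, rebuilding it on the target Jetson is generally the safer route.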
> Hi, thanks for your work. I am encountering an issue while trying to test the exported model with TensorRT version 10.0.1.6. The following error occurs, which seems to be...
I ran into the same problem; mine is an NVIDIA Orin Nano.