I don't know if I am just dumb.
The installation instructions for Python are unclear. Do I follow the model preparation section in the C++ README or the Python README?
Yeah, it was vague how the quick start creates the engine for Python, and in the build section too. Have you figured anything out?
This is how I set up the Python environment for this:
- Create a conda virtual environment with Python 3.10.12, then follow the steps in https://github.com/spacewalk01/depth-anything-tensorrt?tab=readme-ov-file#depth-anything-v2 .
- The GitHub README should have mentioned installing cuDNN; you can follow this blog to install it: https://blog.csdn.net/qq_42042528/article/details/140591685 .
- Next, I installed TensorRT from the zip file as described here: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-zip . Note that in step 5 you should use `pip install tensorrt-8.6.0-cp310-none-win_amd64.whl`, matching the version from the GitHub page.
- The author uses cuda-python, and I found discussions online about replacing pycuda with cuda-python. However, the GitHub code still uses pycuda, so install pycuda with pip.
- And that's most of it. You can run most of it; just be aware of the directory you are running from.
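After finishing the steps above, a quick sanity check can save some debugging. This is a minimal sketch of my own (not from the repo); it only reports whether the required modules import, which is where a missing cuDNN/TensorRT DLL usually shows up first:

```python
import importlib

def check_env(modules=("tensorrt", "pycuda")):
    """Report which required modules can actually be imported.

    A missing DLL on Windows surfaces as an ImportError
    ("DLL load failed"), so it is caught here as well.
    Returns a dict mapping module name -> True/False.
    """
    report = {}
    for name in modules:
        try:
            importlib.import_module(name)
            report[name] = True
        except ImportError:
            report[name] = False
    return report

if __name__ == "__main__":
    for name, ok in check_env().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

If either module prints MISSING, recheck the wheel installs and the PATH entries before touching the repo code.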
Attention
- There is a video on YouTube that might be helpful; it is somewhat out of date, but it's worth referring to: https://www.youtube.com/watch?v=cWFOKWIDFJ4
- While trying to make this work, I made some modifications which may or may not have had an effect:
  - Copy all DLL files from lib to bin
  - Add C:/TensorRT-x.x.x.x/ and C:/TensorRT-x.x.x.x/lib to the PATH
  - Run `pip install onnx_graphsurgeon-0.3.12-py2.py3-none-any.whl`
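The PATH additions can also be done per-process from Python instead of editing the system environment, which keeps the tweak contained to one script. This is a sketch under my own assumptions: the helper name is mine, and the TensorRT root is a placeholder (substitute your installed x.x.x.x version):

```python
import os

def add_tensorrt_to_path(root, env):
    """Prepend the TensorRT root and its lib folder to PATH so the
    tensorrt wheel can locate the DLLs at import time (Windows)."""
    entries = [root, os.path.join(root, "lib")]
    existing = env.get("PATH", "")
    env["PATH"] = os.pathsep.join(entries + ([existing] if existing else []))

# Placeholder install root -- replace x.x.x.x with your real version.
TENSORRT_ROOT = "C:/TensorRT-x.x.x.x"
add_tensorrt_to_path(TENSORRT_ROOT, os.environ)
```

Note that on Python 3.8+ on Windows, extension modules no longer search PATH for their DLL dependencies, so calling `os.add_dll_directory(TENSORRT_ROOT)` in addition may be necessary.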