Wooho-Moon
I solved the issue. My environment is an RTX 3090, CUDA 11.2, cuDNN 8.2. First, I compiled the original Caffe using CMake. If you want to compile the original Caffe, you need to...
Thanks for the reply. I have another question: I would like to use a different loss function. How could I do that?
Thanks. I will try.
I have one more question: can I export the stgcn++ model to ONNX?
Yes, I did.
> @Wooho-Moon how do you convert the stgcn++ model to ONNX?

I converted the stgcn++ model to ONNX. If you want to export it, you need to replace some operators. I used ONNX opset version 11....
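For anyone following along, here is a minimal export sketch, assuming a plain PyTorch module; the stand-in model, input shape, and file name are placeholders, not the actual stgcn++ code:

```python
import torch
import torch.nn as nn

# Stand-in module: replace with your actual stgcn++ model and checkpoint loading.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 100 * 17, 60))
model.eval()

# Dummy input; (N, C, T, V) is an assumed skeleton layout, adjust for your data.
dummy_input = torch.randn(1, 3, 100, 17)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    opset_version=11,  # opset 11, as mentioned above
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}},  # optional: dynamic batch dimension
)
```

If an operator in the real model is unsupported at opset 11, the export error names it; that is where the operator replacement mentioned above comes in.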
> you can see the log and check whether your backbone loaded successfully or not!

Thanks for the reply :) The backbone loaded successfully, but I cannot reproduce this paper's AP....
No, actually I cannot install TensorRT version 10, since I have to deploy the TRT model on a Jetson Orin. So I have to set up my own environment. Is it related to...
The reason I use a lower version of TensorRT is that if I convert ONNX to TensorRT on a 4090 or 3090, the engine cannot be deployed on the Orin NX. It might be because the...
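In other words, a TensorRT engine is tied to the GPU and TensorRT version it was built with, so the build has to happen on the Orin NX itself. A minimal sketch with the TensorRT Python API (assuming a TensorRT 8.x install; file names are placeholders):

```python
import tensorrt as trt

# Engines are not portable across GPU architectures or TensorRT versions,
# so run this build step on the target device (the Orin NX), not on the 3090/4090.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # placeholder ONNX file
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB

engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine)
```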
I solved the problem. As you mentioned before, that error may occur due to pytorch-quantization. First of all, I installed pytorch-quantization using the pip command line. The command line...
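For context, a minimal sketch of enabling NVIDIA's pytorch-quantization once it is installed; this shows the library's standard monkey-patching entry point, not necessarily the exact steps taken above:

```python
import torch.nn as nn

# pytorch-quantization swaps torch.nn layers for quantization-aware variants;
# initialize() must run before the model is constructed.
from pytorch_quantization import quant_modules

quant_modules.initialize()

conv = nn.Conv2d(3, 8, kernel_size=3)
print(type(conv))  # now a QuantConv2d, not a plain nn.Conv2d

quant_modules.deactivate()  # restore the original torch.nn layers
```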