Giorgos Betsos
> It is probably because of your GPU model. I tested yolov7 on 1080 and 3070. Inference time on 3070 is much less than 1080. What does this have to...
> Are you running inference in just one image one time? There's a "big" cost the first time any TRT model is run to assign memory, memory lines, reserve, etc....
I recently did the same conversion on a GTX 1080. I was also getting an error when converting from ONNX to TRT. The error was solved after I installed PyTorch 1.12. Using...
I'm having the same issue. Running `python save_model.py --weights ./data/custom.weights --output ./checkpoints/custom-416 --input_size 416 --model yolov4` results in:

```
File "\third_party_repos\yolov4-custom-functions-master\core\utils.py", line 143, in load_weights
    conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
ValueError:...
```
> Ooook :) > custom.names contain one row "car" its work fine @iabesadze Can you please explain how this solves the issue?
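The connection between the `.names` file and the reshape error can be sketched as follows. This is a hypothetical minimal reproduction, not the actual `load_weights` code: it assumes the standard YOLO head sizing of `(num_classes + 5) * 3` filters, so a class count that disagrees with the `.weights` file leaves the flat weight buffer the wrong length for the expected shape.

```python
import numpy as np

# Darknet stores conv kernels as a flat array; the loader reshapes them
# to (out_ch, in_ch, k, k) and transposes to TF's (k, k, in_ch, out_ch).
conv_shape = (255, 256, 1, 1)          # head layer for 80 classes: (80 + 5) * 3 = 255 filters
n = int(np.prod(conv_shape))

flat = np.arange(n, dtype=np.float32)  # correct number of weights: reshape succeeds
ok = flat.reshape(conv_shape).transpose([2, 3, 1, 0])

# With a 1-class .names file the head has (1 + 5) * 3 = 18 filters, so the
# buffer read from the .weights file no longer matches conv_shape and
# reshape raises the same ValueError seen in the traceback.
try:
    flat[: 18 * 256].reshape(conv_shape)
except ValueError as e:
    print("ValueError:", e)
```

So making `custom.names` list exactly the classes the model was trained on keeps the computed `conv_shape` consistent with the weight file, which is presumably why the one-row "car" file fixed it.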
I have seen these benchmarks, this is the reason why I opened this issue. According to the benchmarks TensorRT inference time is 1.89ms vs 10.19ms for PyTorch inference. As opposed...
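Given the earlier point about one-time TRT setup cost, any comparison against those benchmark numbers should exclude the first runs. A minimal, framework-agnostic timing sketch (the `benchmark` helper and its parameters are hypothetical, not from either repo):

```python
import time

def benchmark(infer, warmup=10, iters=100):
    """Return mean latency of infer() in milliseconds.

    Warm-up passes run first: the first TensorRT execution pays one-time
    costs (memory allocation, context setup) that would otherwise inflate
    the measured time and hide the expected TRT-vs-PyTorch gap.
    """
    for _ in range(warmup):
        infer()
    start = time.perf_counter()
    for _ in range(iters):
        infer()
    return (time.perf_counter() - start) / iters * 1000.0
```

For example, `benchmark(lambda: model(x))` would time repeated inference on a fixed input after the engine is warm.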
I also have a custom YOLOv7 model and get the same error when trying to convert using gen_wts_yoloV7.py. How can I tell if the error is due to the same...
> Same issue. I think the module requires at least Python 3.7, whereas the version of Jetson with any version of JetPack up to 4.6 is 3.6. So it won't...