demuxin
@levipereira Thank you for your contribution. I need to ask a question: do I have to train the model in order to get a quantized model?
After further testing, only the models I've trained myself have this issue, but the model is exported in the same way, which is strange.
I trained the yolov9 model using my own data and then converted the pt model to an onnx model. I found a difference in the inference results of the onnx...
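To quantify a PT-vs-ONNX discrepancy like this, it helps to measure the largest element-wise gap between the two outputs rather than eyeballing them. The helper below is a hypothetical sketch (not from the thread): it assumes both model outputs have been converted to same-shape nested Python lists (e.g. via `.tolist()`). Drift on the order of 1e-4 is normal after export; anything large suggests a real export problem.

```python
# Hypothetical helper: largest absolute element-wise difference between two
# same-shape nested lists of numbers (e.g. PT output vs. ONNX output).
def max_abs_diff(a, b):
    # Base case: both values are plain numbers.
    if isinstance(a, (int, float)):
        return abs(a - b)
    # Recursive case: walk matching sub-lists.
    assert len(a) == len(b), "outputs must have the same shape"
    return max(max_abs_diff(x, y) for x, y in zip(a, b))
```

Typical usage would be `max_abs_diff(pt_out.tolist(), onnx_out.tolist())` on the raw head outputs, before any post-processing, to localize where the two pipelines diverge.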
Hi @levipereira @WongKinYiu, for the end2end model, could you point me to the implementation of the NMS? I ask because I noticed that the model always results in a...
Because I will use the yolov9 model on another platform that doesn't support TensorRT, I can't use the end2end model. I use the yolov9 model with EfficientNMS_TRT for inference, and I've found that...
Yes, I've seen the source code for the efficientNMSPlugin. I understand that what EfficientNMSFilter in the EfficientNMS plugin does is select the category with the highest confidence for each anchor...
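For reference, the per-anchor filtering described above can be sketched in plain Python. This is a hedged approximation, not the plugin's actual CUDA implementation: it assumes boxes are `[x1, y1, x2, y2]`, takes the best class per anchor, drops low-confidence candidates, then runs greedy per-class NMS with an IoU threshold, which is roughly what EfficientNMS_TRT does end to end.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def efficient_nms_sketch(boxes, scores, score_thr=0.25, iou_thr=0.45):
    """Rough Python approximation of EfficientNMS_TRT's filter + NMS steps."""
    # Filter step (analogous to EfficientNMSFilter): keep only the
    # highest-scoring class per anchor, and drop low-confidence anchors.
    cands = []
    for box, cls_scores in zip(boxes, scores):
        cls = max(range(len(cls_scores)), key=lambda c: cls_scores[c])
        if cls_scores[cls] >= score_thr:
            cands.append((cls_scores[cls], cls, box))
    # Greedy NMS: visit candidates best-first, suppress same-class boxes
    # that overlap an already-kept box above the IoU threshold.
    cands.sort(key=lambda t: -t[0])
    kept = []
    for score, cls, box in cands:
        if all(k[1] != cls or iou(box, k[2]) < iou_thr for k in kept):
            kept.append((score, cls, box))
    return kept
```

The real plugin also caps the number of surviving candidates per batch (its `max_output_boxes` parameter), which this sketch omits for brevity.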
This is the Col2Im node information in the model: 
> Could you please try recompiling the plugin with TRT 9.2? Thanks!

Hi zerollzeng, I just recompiled the plugin with TRT 9.2, and recompiling it again should be the same...
@zerollzeng Do you have any solutions?
> Could you please try the latest 9.2; IIRC we added support for opset 17 since TRT 8.6.
>
> Download from https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/9.2.0/tensorrt-9.2.0.5.linux.x86_64-gnu.cuda-11.8.tar.gz https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/9.2.0/tensorrt-9.2.0.5.linux.x86_64-gnu.cuda-12.2.tar.gz https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/9.2.0/tensorrt-9.2.0.5.ubuntu-22.04.aarch64-gnu.cuda-12.2.tar.gz

Hi @zerollzeng, thank you for your...