triple-mu

Results 35 comments of triple-mu

Only the test-set mAP truly reflects the detection ability of the model at the current epoch; the validation loss is not useful here, since the two objectives do not point in the same direction.

Maybe a larger input size will improve the accuracy!

> You mean a **bigger dataset** will improve the accuracy?
>
> Can we tune hyperparameters to improve the performance?

Input size means the picture size, such as 640×640 or 1280×1280.
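To illustrate what "input size" means in practice: YOLO-style detectors usually letterbox the picture to the target size (e.g. 640×640) before inference. The sketch below is a minimal NumPy version under my own assumptions; the function name `letterbox`, the nearest-neighbor resize, and the padding value 114 are illustrative, not code from this thread.

```python
import numpy as np

def letterbox(img, new_size=640, pad_value=114):
    """Resize an HxWxC image to fit in a new_size x new_size square,
    keeping aspect ratio and padding the rest with pad_value."""
    h, w = img.shape[:2]
    r = new_size / max(h, w)                      # scale so the longer side fits
    nh, nw = int(round(h * r)), int(round(w * r))
    # Nearest-neighbor resize via index lookup (no cv2 dependency).
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # Paste the resized image centered on a padded square canvas.
    out = np.full((new_size, new_size, img.shape[2]), pad_value, dtype=img.dtype)
    top, left = (new_size - nh) // 2, (new_size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out
```

Training and inference at a larger `new_size` (e.g. 1280) keeps more fine detail per object, which is why a bigger input size can raise accuracy on small objects.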

![image](https://user-images.githubusercontent.com/92794867/182329350-26036159-4504-442b-9d7d-ca6b1c8e3a56.png) @AlexeyAB I have tested with batch=32 on CPU ONNXRuntime. The two results are identical. This PR speeds up inference by 2-3%; with a larger batch size the speedup may be even greater!

For ONNXRuntime inference, I suggest using --end2end, --max-wh 7680, and --grid. With these three flags you can export an ONNX model with an NMS operator, and the outputs are...
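A flag like --max-wh is commonly used for the class-offset trick: shift each box by `class_id * max_wh` so boxes of different classes can never overlap, letting one class-agnostic NMS op behave like per-class NMS. A hedged NumPy sketch of that idea (function name and array layout are my own assumptions, not the exporter's actual code):

```python
import numpy as np

def class_offset_boxes(boxes, class_ids, max_wh=7680):
    """Offset [x0, y0, x1, y1] boxes by class_id * max_wh.

    After the offset, boxes belonging to different classes live in
    disjoint coordinate ranges, so a single class-agnostic NMS pass
    suppresses overlaps only within each class."""
    offsets = class_ids[:, None].astype(boxes.dtype) * max_wh
    return boxes + offsets
```

`max_wh` just needs to exceed the largest possible image dimension (7680 comfortably covers 8K images), so the un-offset coordinates can be recovered afterwards if needed.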

> Actually, I needed to convert the ONNX to MNN for inference, and using the end2end flag gives error in conversion. Without the end2end flag, conversion to MNN gives no...

https://github.com/WongKinYiu/yolov7/pull/389 You can try this

You'd better train and evaluate at the same (or approximately the same) input shape!

Which backend do you use, TensorRT or ONNXRuntime? FP16 or FP32? And which version of the backend?