YOLO-NAS-onnxruntime
Doesn't work on custom dataset
Hello, this works perfectly on the default COCO model but doesn't work on a custom one. Did you try this yourself? It does not work with a custom model checkpoint: the predictions are wrong and come out as negative numbers.
Could you provide some error messages or more details? Is the problem that the custom model was converted from PyTorch to ONNX?
Hey, I can convert it to ONNX and everything, but when trying to run inference on it, it doesn't work. As I said previously, the default COCO model works perfectly for me in ONNX, but a custom dataset/checkpoint DOES NOT WORK.
Have you tried this with a custom dataset? Because I'm pretty sure the problem isn't on my side.
Hey, I found the issue reported in the official repository; you can refer to the following link: https://github.com/Deci-AI/super-gradients/issues/1108
Could you please rewrite the following code: https://github.com/jason-li-831202/YOLO-NAS-onnxruntime/blob/06c1a244260b9045f52e4c583b9e24e99f6a176d/src/detector.cpp#L137 and replace it with the following line: resizedImage.convertTo(floatImage, CV_32FC3, 255.0 / 255.0);
I can successfully run it with my custom dataset, so you can try it.
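For reference, here is a minimal OpenCV sketch of that change (the wrapper function name is mine for illustration; only the convertTo lines reflect detector.cpp). A scale of 255.0 / 255.0 is just 1.0, i.e. the image is converted to float without the usual division by 255:
"""
#include <opencv2/opencv.hpp>

// Sketch of the change, assuming the surrounding code fills `resizedImage`
// the same way detector.cpp does.
cv::Mat toModelInput(const cv::Mat& resizedImage)
{
    cv::Mat floatImage;
    // Original line (pretrained COCO export): normalize to [0, 1]
    // resizedImage.convertTo(floatImage, CV_32FC3, 1.0 / 255.0);

    // Suggested change for the custom-trained export: keep raw [0, 255] values
    resizedImage.convertTo(floatImage, CV_32FC3, 255.0 / 255.0);
    return floatImage;
}
"""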
Hey, I was not using your detector.cpp; I'm using C#. I was using this: https://github.com/techwingslab/yolov5-net
and as I said previously, I got it to work with the COCO model but not with a custom model from your export.
I found that the problem is that the preprocessing steps for a custom-trained model aren't the same as for the YOLO-NAS pretrained COCO model. The custom model's preprocessing uses yolox_coco, not yolo_nas_coco.
Maybe you can rewrite this part: https://github.com/techwingslab/yolov5-net/blob/7ad2924c1e0c93d7d2306784e239306943229503/src/Yolov5Net.Scorer/YoloScorer.cs#L68C1-L70C66
Therefore, I fixed this issue in my latest code.
Hey, I got it working.
However, the detection quality is not good. Running it in ONNX is a lot worse than in Python. With the default COCO model the ONNX performance is good, but with the custom model, after changing the resizedImage.convertTo(floatImage, CV_32FC3, 1 / 255.0); line, the detections are bad...
As stated in the thread you linked me:
"Quick update on this one, I finally got the correct result by NOT normalizing the image to [0, 1] for my custom model. On the contrary, by not normalizing the image will produce poor results on the pre-training coco model. Isn't this odd?"
They are also saying you get bad results after doing that... We need to fix this...
You can see from the official source code that they preprocess the custom model differently from the original COCO model. As for why they didn't standardize the preprocessing method, you might want to check their issues for more information.
"""
def default_yolox_coco_processing_params() -> dict:
image_processor = ComposeProcessing(
[
ReverseImageChannels(),
DetectionLongestMaxSizeRescale((640, 640)),
DetectionBottomRightPadding((640, 640), 114),
ImagePermute((2, 0, 1)),
]
)
...
def default_yolo_nas_coco_processing_params() -> dict:
image_processor = ComposeProcessing(
[
DetectionLongestMaxSizeRescale(output_shape=(636, 636)),
DetectionCenterPadding(output_shape=(640, 640), pad_value=114),
StandardizeImage(max_value=255.0), <------------------------------------ default coco model
ImagePermute(permutation=(2, 0, 1)),
]
)
"""
Yeah, but not using StandardizeImage makes the detection worse. Yes, if you remove it the custom model will work in ONNX, but the results will be a lot worse. You can see that many people said the same thing in the issue you linked me.
Is it possible to train on a custom dataset but use the default_yolo_nas_coco preprocessing instead of yolox, so the custom model will work like the COCO one?