yolov5
Add NMS to CoreML model output, works with Vision
Reference issues: #5157, #343, #7011
The current version of the export.py script outputs a CoreML model without NMS. This means certain Vision APIs cannot be used with the model directly, as the output during inference is VNCoreMLFeatureValueObservation. The changes implemented here add an NMS layer to the CoreML output, so the output from inference is VNRecognizedObjectObservation. Adding NMS to the model directly, as opposed to later in code, improves the performance of the overall image/video processing. It also allows use of the "Preview" tab in Xcode for quickly testing the model.
Default IoU and confidence thresholds are taken from the --iou-thres and --conf-thres arguments during export.py script runtime. The user can also change these later via a CoreML MLFeatureProvider in their application (see https://developer.apple.com/documentation/coreml/mlfeatureprovider).
This has no effect on training, as it only adds an additional layer during CoreML export for NMS.
Based on code by @pocketpixels, with permission to make this PR: https://github.com/pocketpixels/yolov5/blob/better_coreml_export/models/coreml_export.py
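For readers unfamiliar with what the added layer does: the suppression it performs is essentially greedy NMS with an IoU threshold and a confidence threshold. Below is a plain-Python sketch of that logic (this is an illustration only, not the actual coremltools layer configuration; the threshold defaults mirror the usual --iou-thres/--conf-thres values of 0.45 and 0.25):

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thres=0.45, conf_thres=0.25):
    # Drop low-confidence boxes, then greedily keep the highest-scoring
    # box and suppress any remaining box that overlaps it too much.
    idxs = sorted(
        (i for i in range(len(boxes)) if scores[i] >= conf_thres),
        key=lambda i: scores[i], reverse=True,
    )
    keep = []
    while idxs:
        best = idxs.pop(0)
        keep.append(best)
        idxs = [i for i in idxs if iou(boxes[best], boxes[i]) < iou_thres]
    return keep
```

With the NMS layer in the model, this filtering happens on-device inside CoreML instead of in application code.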
I tried your new method, which was very effective. During the export I noticed some warnings, but they have no effect on the results:
TorchScript: starting export with torch 1.9.1...
/Users/anyadong/PycharmProjects/yolov5-master/coreml_export-new.py:58: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:
/Users/anyadong/PycharmProjects/yolov5-master/coreml_export-new.py:65: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  s = self.stride[i].item()
/Users/anyadong/PycharmProjects/yolov5-master/coreml_export-new.py:66: TracerWarning: Converting a tensor to a NumPy array might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  ag = torch.from_numpy(self.anchor_grid[i].numpy())#new
/Users/anyadong/PycharmProjects/yolov5-master/coreml_export-new.py:66: TracerWarning: torch.from_numpy results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  ag = torch.from_numpy(self.anchor_grid[i].numpy())#new
/Users/anyadong/PycharmProjects/yolov5-master/coreml_export-new.py:47: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  boxes = x[:, :4] * torch.tensor([1. / w, 1. / h, 1. / w, 1. / h])
@liuzhiguai - Thanks for reporting back! I don't think I had those errors during export, but I have made a couple modifications since, and integrated with the current version of the export.py script as well. Can you please try again with the newest version of the export script, which I am submitting in this PR? You can download the file here: https://github.com/mshamash/yolov5/blob/fix/coreml_export_nms_layer/export.py
I tried the newest version. There was no warning this time, which is very helpful to me. But on line 692 I need to change back to the original code (print_args(FILE.stem, opt)), otherwise an error is reported:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1483, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/anyadong/PycharmProjects/yolov5-master/export2.py", line 703, in
@liuzhiguai - I didn't change anything on that line, so I'm not sure why it's erroring. Perhaps do a git pull on your local YOLOv5 repo, maybe there were some changes in other files since you pulled it last?
@glenn-jocher is this something that you think could be implemented/merged in the export.py script?
@mshamash yes, but we have a higher level issue. Right now all model exports are benchmarkable, i.e. see https://github.com/ultralytics/yolov5/pull/6613:
macOS Intel CPU Results (CoreML-capable)
benchmarks: weights=/Users/glennjocher/PycharmProjects/yolov5/yolov5s.pt, imgsz=640, batch_size=1, data=/Users/glennjocher/PycharmProjects/yolov5/data/coco128.yaml, device=, half=False, test=False
Checking setup...
YOLOv5 🚀 v6.1-135-g7926afc torch 1.11.0 CPU
Setup complete ✅ (8 CPUs, 32.0 GB RAM, 793.4/931.6 GB disk)
Benchmarks complete (288.68s)
    Format                 mAP@0.5:0.95  Inference time (ms)
0   PyTorch                0.4623        281.47
1   TorchScript            0.4623        262.97
2   ONNX                   0.4623        77.30
3   OpenVINO               0.4623        74.12
4   TensorRT               NaN           NaN
5   CoreML                 0.4620        69.36
6   TensorFlow SavedModel  0.4623        123.12
7   TensorFlow GraphDef    0.4623        120.82
8   TensorFlow Lite        0.4623        249.62
9   TensorFlow Edge TPU    NaN           NaN
10  TensorFlow.js          NaN           NaN
There have been user requests for native-NMS exports in a few formats, i.e. TFLite, ONNX, TRT, TorchScript, and here with CoreML. So we need additional infrastructure within val.py, detect.py, PyTorch Hub, and/or the NMS function to recognize native-NMS output formats and handle these accordingly to allow these to also work correctly with the various inference pathways.
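The dispatch described above could look roughly like the following (a minimal sketch with a hypothetical native_nms flag; the real integration would need to live in val.py, detect.py, and the PyTorch Hub pathway):

```python
def postprocess(detections, native_nms, run_nms):
    """Skip the Python-side NMS pass when the exported model
    already embeds an NMS layer (e.g. this CoreML export)."""
    if native_nms:
        # Model output is already final detections.
        return detections
    # Raw predictions still need host-side suppression.
    return run_nms(detections)
```

The point is that every inference pathway needs to know, per export format, whether the model's output is raw predictions or already-suppressed detections.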
Thank you!
Thank you, works fine on my side :)
+1
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions YOLOv5 🚀 and Vision AI ⭐.
not stale
Any possibility of implementing this (NMS layer for coreml)? @glenn-jocher
@hietalajulius Can you have a look at this one please?
@mshamash Do you know why the confidence shown in Xcode is always 100%? Isn't there a way to output the confidence returned by the NMS layer for the BBox?

Any reason this wonderfully working fix wasn't merged yet?
Agree with @Jaykob because the mlmodel that comes out of export.py just doesn't work with Apple's Vision framework at all :(
@philipperemy did you figure out why you were getting 100% confidence? I ran into the same issue.. confidence always 98-100%
@wmcnally nope. I gave up on that one. Mine was always 100%.
@wmcnally @philipperemy Are you guys testing on the same images you trained with? Confidence worked just fine for me, however I had to manually apply @mshamash 's changes to master coz I made a couple other changes.. doubt that it affected confidence working though.
@zaitsman yeah I tested it on the same images.
@philipperemy so how do you expect it to give you a different value? I mean if your model is well fit then data from the training set is always 100%. You need to compare against OTHER images not in your dataset.
@zaitsman @philipperemy i did not use training images and my confidence with @mshamash export was higher than when using detect.py
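One possible explanation for the saturated readings above: YOLOv5's final per-class score is the objectness score multiplied by the class probability, and on easy or in-distribution images both terms approach 1.0, so near-100% confidence on training images is expected. A tiny sketch of that scoring (hypothetical values, for illustration only):

```python
def class_confidence(objectness, class_probs):
    # YOLOv5-style score: objectness times per-class probability.
    return [objectness * p for p in class_probs]

# A confident detection: objectness 0.98, "dog" prob 0.99, "cat" prob 0.01.
scores = class_confidence(0.98, [0.99, 0.01])
```

If confidences differ between the CoreML export and detect.py, comparing which of these two terms the NMS layer consumes would be a reasonable place to start debugging.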