
NeuralNetwork and MLprogram lead to very different results

Open Astjust1 opened this issue 1 year ago • 11 comments

❓Question

Hello! I'm using coremltools to convert a custom architecture from PyTorch to Core ML. I've noticed that I only get correct results if I use the neuralnetwork backend, not the mlprogram backend. Does anyone have any high- (or low-) level insight into why this might be the case, and where I might start tracing the differences between the two representations to find the supposed bug (assuming this is a bug)?
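One place to start tracing is to run the same input through both converted models and compare the outputs numerically. A minimal sketch of the comparison step (the helper name, placeholder values, and tolerance are illustrative, not from coremltools; in practice the two lists would come from calling predict() on the neuralnetwork and mlprogram models with identical input):

```python
# Sketch: numerically compare outputs from two converted models.
# The placeholder lists below stand in for the two backends' predictions.

def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two flat sequences."""
    assert len(a) == len(b), "outputs must have the same shape"
    return max(abs(x - y) for x, y in zip(a, b))

# Placeholder outputs standing in for predict() results from each backend.
out_neuralnetwork = [0.12, 0.87, 0.33]
out_mlprogram     = [0.12, 0.86, 0.90]

diff = max_abs_diff(out_neuralnetwork, out_mlprogram)
print(f"max abs diff: {diff:.4f}")
if diff > 1e-3:  # tolerance is a judgment call; use a tighter one for FP32
    print("backends disagree beyond tolerance")
```

Doing this layer by layer (by converting truncated versions of the model) can narrow down which op first diverges.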

Astjust1 avatar Aug 08 '24 17:08 Astjust1

Did you convert the neuralnetwork model with FP32 or FP16?

kinghchan avatar Nov 08 '24 13:11 kinghchan

Can you give us steps to reproduce this issue?

TobyRoseman avatar Nov 08 '24 18:11 TobyRoseman

I'm using coremltools 8.1 + PyTorch 2.2.2 on macOS to convert a custom YOLO model architecture from PyTorch to Core ML, following this blog: https://krisp.ai/blog/how-to-integrate-coreml-models-into-c-c-codebase/. After integrating the model with Objective-C++ and running on an iPhone 13, I only get correct results with the neuralnetwork backend, not the mlprogram backend. @TobyRoseman, the Objective-C++ bridge code is generated by xcrun coremlc generate.

daikankan avatar Dec 26 '24 03:12 daikankan

Hello, I have been able to reproduce this with a private dataset for YOLOv9. I am using coremltools 8.1, but this also appears to happen on 7.0 through 8.0; I have not tried versions below 7.0. By default, Ultralytics exports as FP16 even if you do not pass the half flag to its export tools, so I am using a fork that ensures I am exporting with FP32 when I want it. I see the same broken results with both FP16 and FP32 ML programs, but this is not an issue for FP16 or FP32 neural networks.

coremltools: 8.2
PyTorch: 2.4.0
Python: 3.12.8
Ultralytics fork: https://github.com/RyanHir/ultralytics/tree/fix/coreml_bad_bbox_xcode16.2
Ultralytics PR: https://github.com/ultralytics/ultralytics/pull/18321

RyanHir avatar Dec 29 '24 18:12 RyanHir

The same results appear both on iPhone and in the Xcode model preview on an M4 MacBook.

RyanHir avatar Dec 29 '24 18:12 RyanHir

I am going to train a few models on public datasets to try to reproduce this effect. From what I have seen, it does not occur for all datasets; COCO-trained models, for example, are unaffected.

RyanHir avatar Dec 29 '24 18:12 RyanHir

@RyanHir How are you exporting from ultralytics to a coreml neural network? The ultralytics export tool only seems to export as coreml ml programs.

SaintLambert avatar Dec 31 '24 17:12 SaintLambert

> @RyanHir How are you exporting from ultralytics to a coreml neural network? The ultralytics export tool only seems to export as coreml ml programs.

I am utilizing my fork to enable neural network export.

RyanHir avatar Dec 31 '24 17:12 RyanHir

To install, run pip install . on the appropriate branch of the fork. Once installed, pass format=mlmodel to the export command instead of format=coreml.
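The steps above might look like the following (the branch name is taken from the version info in an earlier comment, and the model path is illustrative):

```shell
# Install the fork on the branch mentioned above.
git clone -b fix/coreml_bad_bbox_xcode16.2 https://github.com/RyanHir/ultralytics.git
cd ultralytics
pip install .

# format=mlmodel -> neuralnetwork backend; format=coreml -> mlprogram backend.
# The model path here is just an example.
yolo export model=yolov8n.pt format=mlmodel
```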

RyanHir avatar Dec 31 '24 17:12 RyanHir

@SaintLambert Ultralytics supports exporting to CoreML neural network if you use format=mlmodel

Y-T-G avatar Sep 04 '25 10:09 Y-T-G

I have checked my cases and found that, for some models, ct.convert(convert_to="mlprogram", compute_precision=ct.precision.FLOAT16) has a serious precision problem, while ct.convert(convert_to="mlprogram", compute_precision=ct.precision.FLOAT32) is correct. ct.convert(convert_to="neuralnetwork") followed by quantization_utils.quantize_weights(model_ct, nbits=16) is also correct.
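The failure pattern above (FP16 mlprogram broken, FP32 mlprogram fine) is consistent with half precision's small mantissa and narrow range. A quick way to see the rounding behavior in plain Python, via the struct module's IEEE 754 half-float format (the specific values are illustrative, not taken from any model):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('e', struct.pack('e', x))[0]

# FP16 has a 10-bit mantissa: integers above 2048 are no longer all
# representable, so nearby values collapse together.
print(to_fp16(2049.0))  # rounds to 2048.0
print(to_fp16(0.1))     # close to, but not exactly, 0.1

# FP16 also cannot represent magnitudes above ~65504; struct refuses
# to pack such values, while an FP16 model would saturate or overflow.
try:
    struct.pack('e', 70000.0)
except OverflowError:
    print("70000.0 does not fit in FP16 (max finite value is ~65504)")
```

If intermediate activations in a model grow large, FP16 conversion can silently destroy them, which would explain why only compute_precision=ct.precision.FLOAT32 gives correct mlprogram results here.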

daikankan avatar Oct 11 '25 09:10 daikankan