Running coremlmodel.predict() returns Floating Point Exception error
❓Question
I'm trying to run the code for inference in a Python environment to test the Core ML model. It returns an error:
output = coreml_model.predict({"input": x.astype(np.float32)})
returns zsh: floating point exception python.
Python 3.6.7 with coremltools version 5.2.0.
I don't know what the issue is here.

In order to help you, I have to be able to reproduce the problem. Can you give us code to reproduce this issue?
Since we have not received code to reproduce this problem, I'm going to close the issue. If we get code to reproduce the problem, I'm happy to reopen it.
Hi. I'm currently experiencing this with coremltools 6 and 7. In a PR (for an open source repo I maintain) to change the default inference runtime from TensorFlow to Core ML, I see a floating point error when using my converted model on GitHub Actions. However, the error does not occur when running the same tests on an M1 or Intel MBP.
CC @TobyRoseman
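For context, the conversion in the PR is roughly along these lines (a rough sketch; the Keras model, shapes, and file name here are illustrative stand-ins, not the exact basic-pitch conversion):

import coremltools as ct
import tensorflow as tf

# Illustrative stand-in model; the real PR converts the project's own TensorFlow model.
keras_model = tf.keras.applications.MobileNetV2(weights=None)

# Convert to an ML Program package so it can be loaded and run through coremltools.
mlmodel = ct.convert(
    keras_model,
    source="tensorflow",
    inputs=[ct.TensorType(shape=(1, 224, 224, 3))],
    convert_to="mlprogram",
)
mlmodel.save("model.mlpackage")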
@drubinstein - I would start by making sure that your GitHub Actions is using the most recent version of macOS.
I tried macos-latest and the in-beta macos-13, and both gave the same floating point error.
I don't know. This could be a GitHub Actions issue. Does the issue consistently reproduce in GitHub Actions?
It happens consistently in the PR. I made a standalone repo that attempts to perform prediction on a converted MobileNet model and the same converted model from the repo where I first saw the issue (basic-pitch). MobileNet predicts fine, basic pitch still raises the floating point exception.
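In rough form, the standalone repro does something like this (the input names and shapes here are placeholders, not the exact values in the repo; the real ones come from each model's spec):

import numpy as np
import coremltools as ct

def run_predict(package_path, input_name, shape):
    # Load a converted model and run a single prediction on random data.
    model = ct.models.MLModel(package_path)
    x = np.random.rand(*shape).astype(np.float32)
    return model.predict({input_name: x})

# Predicts fine on the Intel GitHub Actions runners.
run_predict("MobileNet.mlpackage", "input", (1, 224, 224, 3))

# Kills the Python process with "floating point exception" on those same runners.
run_predict("nmp.mlpackage", "input", (1, 43844, 1))  # placeholder input name/shape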
I don't know. Your code looks good to me. If your code works locally, I don't think it's a problem with Core ML or coremltools. If it always fails in GitHub Actions, that makes me think it's probably an issue with GitHub Actions.
Maybe contact GitHub Actions help. It might not be giving you enough resources to run tests with MobileNet.
Thanks. I'll report to GitHub Actions Help.
Note, MobileNet.mlpackage worked just fine. It was nmp.mlpackage that threw the Floating Point Error when using predict.
@TobyRoseman is this expected to work within a virtual machine, and has that been tested? Either using Apple-native virtualization or the pre-T2 chip approaches for virtualization (e.g. ESXi, KVM, etc.), specifically x86-based virtualization. Apple silicon seems to work as expected.
I contacted GitHub Support and they mentioned that they can confirm this issue exists in the (free) Intel macOS VMs, but doesn't occur when using the (non-free) M1 macOS VMs. They are working on a solution for the Intel VMs.
We're curious about this too. Having support for this on Intel VMs would be great.
@NorseGaud so in simple cases, e.g. ResNet, Intel VMs work fine. It's only with some more complicated models, like basic-pitch (which I linked to above), that the Intel VMs raise the floating point error.