
Running coremlmodel.predict() returns Floating Point Exception error

luxilill opened this issue 2 years ago

❓Question

I'm trying to run inference code in a Python environment to test the Core ML model. The call output = coreml_model.predict({"input": x.astype(np.float32)}) crashes with zsh: floating point exception  python. This is Python 3.6.7 with coremltools version 5.2.0.
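
For context, a minimal sketch of the failing call (the model path, input name, and input shape here are placeholders, not the actual model from this report):

```python
import numpy as np
import coremltools as ct

# Load a previously converted Core ML model (the path is a placeholder).
coreml_model = ct.models.MLModel("model.mlmodel")

# Dummy input; the name "input" and the shape are placeholders and must
# match the converted model's actual input description.
x = np.random.rand(1, 3, 224, 224)

# This call does not raise a Python exception; the whole process dies with
# "zsh: floating point exception  python".
output = coreml_model.predict({"input": x.astype(np.float32)})
print(output)
```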

luxilill avatar Apr 25 '22 00:04 luxilill

I don't know what the issue is here. In order to help you, I have to be able to reproduce the problem. Can you give us code to reproduce this issue?

TobyRoseman avatar Apr 26 '22 23:04 TobyRoseman

Since we have not received code to reproduce this problem, I'm going to close the issue. If we get code to reproduce the problem, I'm happy to reopen it.

TobyRoseman avatar Nov 09 '22 20:11 TobyRoseman

Hi. I'm currently experiencing this with coremltools 6 and 7. In a PR (for an open source repo I maintain) to change the default inference runtime from TensorFlow to Core ML, I see a floating point error when using my converted model on GitHub Actions. However, the error does not occur when running the same tests on an M1 or Intel MBP.

drubinstein avatar Oct 03 '23 12:10 drubinstein

CC @TobyRoseman

drubinstein avatar Oct 03 '23 16:10 drubinstein

@drubinstein - I would start by making sure that your GitHub Actions is using the most recent version of macOS.

TobyRoseman avatar Oct 03 '23 18:10 TobyRoseman

I tried macos-latest and the in-beta macos-13 runner; both gave the same floating point error.

drubinstein avatar Oct 03 '23 19:10 drubinstein

I don't know. This could be a GitHub Actions issue. Does the issue consistently reproduce in GitHub Actions?

TobyRoseman avatar Oct 03 '23 22:10 TobyRoseman

It happens consistently in the PR. I made a standalone repo that attempts to run prediction on a converted MobileNet model and on the same converted model from the repo where I first saw the issue (basic-pitch). MobileNet predicts fine; basic-pitch still raises the floating point exception.
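
For reference, a sketch of what the MobileNet half of that standalone check might look like (illustrative only, assuming a stock Keras MobileNetV2; this is not the exact code from the repro repo):

```python
import numpy as np
import tensorflow as tf
import coremltools as ct

# Convert a stock Keras MobileNetV2 to an ML Program package
# (illustrative only; the repro repo ships its own converted models).
keras_model = tf.keras.applications.MobileNetV2(weights=None)
mlmodel = ct.convert(
    keras_model,
    inputs=[ct.TensorType(shape=(1, 224, 224, 3))],
    convert_to="mlprogram",
)
mlmodel.save("MobileNet.mlpackage")

# Reload and run a prediction; on the Intel GitHub Actions runners this
# succeeds, while the converted basic-pitch model crashes the process
# with a floating point exception at the same predict() call.
loaded = ct.models.MLModel("MobileNet.mlpackage")
input_name = loaded.get_spec().description.input[0].name
x = np.random.rand(1, 224, 224, 3).astype(np.float32)
print(loaded.predict({input_name: x}))
```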

drubinstein avatar Oct 04 '23 00:10 drubinstein

I don't know. Your code looks good to me. If your code works locally, I don't think it's a problem with Core ML or coremltools. If it always fails in GitHub Actions, that makes me think it's probably an issue with GitHub Actions.

Maybe contact GitHub Actions help. It might not be giving you enough resources to run tests with MobileNet.

TobyRoseman avatar Oct 05 '23 18:10 TobyRoseman

Thanks. I'll report to GitHub Actions Help.

Note: MobileNet.mlpackage worked just fine. It was nmp.mlpackage that threw the floating point error when calling predict.

drubinstein avatar Oct 06 '23 18:10 drubinstein

@TobyRoseman is this expected to work within a virtual machine, and has that been tested? Either using Apple-native virtualization or the pre-T2-chip approaches to virtualization (e.g. ESXi, KVM, etc.), specifically x86-based virtualization. Apple silicon seems to work as expected.

nodeselector avatar Nov 29 '23 19:11 nodeselector

I contacted GitHub Support and they confirmed that this issue exists in the (free) Intel macOS VMs but does not occur when using the (non-free) M1 macOS VMs. They are working on a solution for the Intel VMs.

drubinstein avatar Nov 29 '23 19:11 drubinstein

We're curious about this too. Having support for this on Intel VMs would be great.

NorseGaud avatar Nov 29 '23 20:11 NorseGaud

@NorseGaud so in simple cases, e.g. ResNet, the Intel VMs work fine. It's only with some more complicated models, like basic-pitch (which I link to above), that the Intel VMs raise the floating point error.

drubinstein avatar Nov 29 '23 20:11 drubinstein