leovinus2001
Relevance: The PyTorch scripting at https://coremltools.readme.io/docs/model-scripting is essential for dynamic aspects of models. If the conversion to CoreML fails, then we might not be able to use models with dynamic...
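For context, a minimal sketch of what that doc describes: convert a scripted model rather than a traced one, so data-dependent control flow survives the conversion. The toy model, input name, and shape below are made up for illustration only.

```python
import torch
import coremltools as ct

class ToyDynamicModel(torch.nn.Module):
    # Hypothetical model with data-dependent control flow; tracing would
    # bake in a single branch, while scripting keeps the `if` in the graph.
    def forward(self, x):
        if x.sum() > 0:
            return x * 2.0
        return x - 1.0

scripted = torch.jit.script(ToyDynamicModel().eval())
mlmodel = ct.convert(
    scripted,
    inputs=[ct.TensorType(name="x", shape=(1, 3))],
)
```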
Related to #766 and #816, I have used the composite operators and @register_torch_op to code dim(), shape(), and __getitem__(). Then, the 4 modes in the test case above run...
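For the record, the general shape of those registrations looks roughly like the sketch below. The actual dim()/shape()/__getitem__() bodies I used are more involved; the imports and op body here just follow the usual composite-operator pattern and are only an illustration. shape() and __getitem__() are done the same way, mapping to mb.shape and the MIL slicing/gather ops respectively.

```python
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op
def dim(context, node):
    # aten::dim(Tensor) -> int, i.e. the rank of the tensor.
    # Built from existing MIL ops: shape(x) is a 1-D tensor, and the
    # shape of that shape is [rank]; squeeze it down to a scalar.
    x = _get_inputs(context, node, expected=1)[0]
    shape = mb.shape(x=x)
    rank = mb.shape(x=shape)
    context.add(mb.squeeze(x=rank, name=node.name))
```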
When I see your testcase, it reminds me of issue #81185, where NaNs were observed together with the use of torch.rand(). More specifically, there are testcases there for CPU and...
Thanks for the insight. As I have been away for a while, I'll see whether I can find some time this week to reproduce on my end. PS: time permitting,...
That is very interesting. Squinting my eyes at the "mps_C on gpu" result, I wonder whether the GPU result differs from the CPU result by a transpose()? Actually, you can do a...
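Something along these lines would tell us quickly (stand-in tensors and shapes; substitute the real cpu_C and mps_C from the failing script):

```python
import torch

# Stand-ins for the testcase tensors; shapes here are arbitrary.
A = torch.rand(64, 64)
B = torch.rand(64, 64)
cpu_C = A @ B
mps_C = A.to("mps") @ B.to("mps")

print("matches CPU result:           ", torch.allclose(mps_C.cpu(), cpu_C, atol=1e-5))
print("matches CPU result transposed:", torch.allclose(mps_C.cpu(), cpu_C.t(), atol=1e-5))
```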
Cool! So now to the next question - does torch.mm() exhibit the same issue as "@", or not? Rationale - actually, I am unfamiliar with the "@" operator. I just...
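A direct comparison is quick to run. For 2-D tensors, "@" dispatches to torch.matmul, which in this case should end up in the same place as torch.mm, but it is worth confirming on mps (the sizes below are placeholders):

```python
import torch

A = torch.rand(1024, 1024, device="mps")
B = torch.rand(1024, 1024, device="mps")

C_at = A @ B            # "@" is torch.matmul under the hood
C_mm = torch.mm(A, B)   # plain 2-D matrix multiply

print("NaNs via '@':       ", torch.isnan(C_at).any().item())
print("NaNs via torch.mm():", torch.isnan(C_mm).any().item())
print("max abs difference: ", (C_at - C_mm).abs().max().item())
```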
> Looking into the CPU-side issue:
>
> * Good news: I found what is causing the issue: the call into the BLAS library gemm is causing these. In the...
> Here is a version that does fail for me (have to run it a couple times sometimes)

Thanks for that testcase. On my Intel iMac, I see no NaN....
To determine whether there is an issue in the Accelerate Framework producing NaNs on repeated GEMM calls, attached is a small C++ test program that might help. Change extension to...
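(The attachment itself is not reproduced here. For anyone who prefers to stay in Python, a rough ctypes sketch of the same idea, calling Accelerate's cblas_sgemm directly and checking the output for NaNs, could look like the following; the library path, sizes, and trial count are just placeholders.)

```python
import ctypes
import numpy as np

# Call Accelerate's cblas_sgemm directly, bypassing PyTorch entirely.
acc = ctypes.CDLL("/System/Library/Frameworks/Accelerate.framework/Accelerate")

CblasRowMajor, CblasNoTrans = 101, 111
acc.cblas_sgemm.restype = None
acc.cblas_sgemm.argtypes = [
    ctypes.c_int, ctypes.c_int, ctypes.c_int,        # order, transA, transB
    ctypes.c_int, ctypes.c_int, ctypes.c_int,        # M, N, K
    ctypes.c_float, ctypes.c_void_p, ctypes.c_int,   # alpha, A, lda
    ctypes.c_void_p, ctypes.c_int,                   # B, ldb
    ctypes.c_float, ctypes.c_void_p, ctypes.c_int,   # beta, C, ldc
]

n = 512
for trial in range(100):
    A = np.random.rand(n, n).astype(np.float32)
    B = np.random.rand(n, n).astype(np.float32)
    C = np.zeros((n, n), dtype=np.float32)
    acc.cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0,
                    A.ctypes.data_as(ctypes.c_void_p), n,
                    B.ctypes.data_as(ctypes.c_void_p), n,
                    0.0,
                    C.ctypes.data_as(ctypes.c_void_p), n)
    if np.isnan(C).any():
        print(f"NaN in GEMM output at trial {trial}")
        break
else:
    print("no NaNs observed")
```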
Thank you for that update. On the same Arm Mac machine, it is excellent to know that there is one testcase that throws an error and one that does not. That narrows it down!

> ...