RuntimeError: PyTorch convert function for op 'fft_rfftn' not implemented.
I have been trying to convert a PyTorch model to Core ML, but I ran into issues converting the fft_rfftn and fft_irfftn operators. Could you kindly show me how to register these operators?
File "convert.py", line 27, in <module>
ct.ImageType(name='mask', shape=mask_input.shape)])
File "/Users/admin/anaconda3/envs/py36/lib/python3.6/site-packages/coremltools/converters/_converters_entry.py", line 316, in convert
**kwargs
File "/Users/admin/anaconda3/envs/py36/lib/python3.6/site-packages/coremltools/converters/mil/converter.py", line 175, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/Users/admin/anaconda3/envs/py36/lib/python3.6/site-packages/coremltools/converters/mil/converter.py", line 207, in _mil_convert
**kwargs
File "/Users/admin/anaconda3/envs/py36/lib/python3.6/site-packages/coremltools/converters/mil/converter.py", line 293, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/Users/admin/anaconda3/envs/py36/lib/python3.6/site-packages/coremltools/converters/mil/converter.py", line 103, in __call__
return load(*args, **kwargs)
File "/Users/admin/anaconda3/envs/py36/lib/python3.6/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 80, in load
raise e
File "/Users/admin/anaconda3/envs/py36/lib/python3.6/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 72, in load
prog = converter.convert()
File "/Users/admin/anaconda3/envs/py36/lib/python3.6/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 230, in convert
convert_nodes(self.context, self.graph)
File "/Users/admin/anaconda3/envs/py36/lib/python3.6/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 67, in convert_nodes
"PyTorch convert function for op '{}' not implemented.".format(node.kind)
RuntimeError: PyTorch convert function for op 'fft_rfftn' not implemented.
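For reference, my understanding is that a missing op is registered with the converter roughly like the sketch below (just a sketch; the body is a placeholder identity rather than an actual FFT lowering, since there is no MIL FFT op to map to):

```python
# Minimal sketch of the custom torch-op registration pattern in coremltools.
# The body is illustrative only: it just passes the input through so the
# shape of a conversion function is visible.
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op
def fft_rfftn(context, node):
    # Fetch the MIL vars corresponding to this torch node's inputs.
    inputs = _get_inputs(context, node)
    x = inputs[0]
    # A real conversion would emit MIL ops computing the FFT here.
    y = mb.identity(x=x, name=node.name)
    context.add(y)
```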
Any updates on this? Or maybe someone can guide me on how to write an STFT torch op?
I am also facing this issue
I'm working on a librosa port for iOS: RosaKit (https://github.com/dhrebeniuk/RosaKit).
There is an implementation of this function available there, so it may help you. Example: https://github.com/dhrebeniuk/RosaKit/blob/a8cabbbbd049ddb862c6661516072b9f995b5633/Sources/Rosa/ArrayLibRosaExtensions.swift#L184
Given our current set of MIL ops, I don't think there is a good way to implement this one: https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html
If anyone has any ideas, please share.
@TobyRoseman you may check the metal-fft code. I haven't tested it, but it seems like a way to go for Metal.
Accelerate already seems to have support for 1-D / 2-D Fast Fourier Transforms, although the N-dimensional case is missing; it is also needed for research work such as Fast Fourier Convolutions, as in Resolution-robust Large Mask Inpainting with Fourier Convolutions.
Maybe you can provide FFT as a MIL op (supporting both complex and real-valued numbers). Using this FFT op, you could build STFT and inverse STFT on top of it, which would be very helpful for audio processing.
Just my two cents.
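To make the idea concrete, here is a rough plain-PyTorch sketch of how an STFT decomposes into framing, windowing, and a 1-D real FFT (the frame and hop sizes are just example values):

```python
# Sketch: an STFT built from a 1-D real FFT over framed, windowed audio.
import torch

def stft_from_rfft(signal, n_fft=512, hop=128):
    window = torch.hann_window(n_fft)
    # Slice the signal into overlapping frames: (..., num_frames, n_fft)
    frames = signal.unfold(-1, n_fft, hop)
    # Apply the analysis window, then take a real FFT along each frame.
    return torch.fft.rfft(frames * window, dim=-1)
```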
@RahulBhalley - Thanks for the suggestion. I've created an internal issue.
I'm also facing this issue. Does a list of PyTorch operators and their Core ML availability exist, including the priority for adding them to the roadmap?
I was hoping this would be included in the "Support for many new PyTorch and TensorFlow layers" announced for version 6.0. But it doesn't look like it is 😔. (At least not in beta 1)
Apple probably won't release it. It's sad. I've been waiting for almost 2 years now.
Looking at the list of supported operations, I think the real bottleneck here is the lack of support for complex numbers. Just creating a tensor of complex type (e.g. z = torch.complex(x, y)) is not supported. It's no wonder FFTs haven't been implemented, and I don't imagine they will be for some time, unfortunately :(
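In the meantime, one possible workaround (just a sketch, and only practical for short, fixed transform lengths) is to express a real FFT as two real-valued matrix multiplies against cosine and sine bases, so no complex tensors ever appear in the graph:

```python
# Sketch: emulate a 1-D real FFT with real-valued matmuls, keeping the
# real and imaginary parts as two separate real tensors.
import math
import torch

def rfft_as_matmul(x, n):
    k = torch.arange(n // 2 + 1).unsqueeze(1)   # output frequency bins
    t = torch.arange(n).unsqueeze(0)            # input sample indices
    angle = 2 * math.pi * k * t / n
    real = x @ torch.cos(angle).T
    imag = -(x @ torch.sin(angle).T)
    return real, imag
```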
So many amazing models are not convertible due to a lack of FFT support. @apple, please make it available.
Hi! Any update on this issue?
@manhnguyen92, I guess it's very difficult, because there are few implementations of fft_rfftn, and they are all implemented as C modules, not in Python.
@dhrebeniuk Thank you! So sad to hear that. I have spent a week trying to convert a model, only to hit this issue with no way to fix it.
Same, a lot of time wasted. Which model were you trying to convert?
Any model which uses a Mel spectrogram. Any sound classification model in TF2 or PyTorch.
@roimulia2 I have tried to convert the LaMa inpainting model.
Hey @manhnguyen92, I was trying to do the same thing. I'll make sure to post it here if I succeed. Do you think it'll be able to run on an iPhone, though, in terms of memory/CPU usage?
@roimulia2, yes I do. I think some apps on the App Store use this library, but I don't know how they could have converted it.
@manhnguyen92 They've implemented it server-side, not on-device. And even if you were able to convert it to Core ML, you would face Apple's maximum of 2 GB of RAM per app.
@pboudoin No, they didn't. Their apps work very well in offline mode.
mark
I converted LaMa (which uses FFT heavily) and it doesn't load on the ANE. Is FFT ANE-compatible or not? @TobyRoseman
Coremltools 6.2 includes support for PyTorch's fft_rfftn op. Thanks @junpeiz for adding that.
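For anyone landing here, a conversion along these lines should now work (a minimal sketch; the iOS 16 deployment target is my assumption, since the FFT support is lowered to newer MIL ops, and the output is kept real-valued because Core ML I/O does not support complex tensors):

```python
# Sketch: converting a model that uses fft_rfftn with coremltools >= 6.2.
import torch
import coremltools as ct

class FFTModel(torch.nn.Module):
    def forward(self, x):
        # .abs() keeps the model output real-valued.
        return torch.fft.rfftn(x, dim=(-2, -1)).abs()

example = torch.rand(1, 3, 64, 64)
traced = torch.jit.trace(FFTModel().eval(), example)
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example.shape)],
    minimum_deployment_target=ct.target.iOS16,
)
```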
Still cannot convert?
@RahulBhalley Could you provide a code snippet to reproduce the issue? We tested several models in unit tests and the converted FFT models are ANE compatible. Your code snippet could help us determine whether it's the FFT or other parts of the LaMa model that are not ANE compatible.
@hzphzp PyTorch's rfft2 is just a special case of rfftn (see https://pytorch.org/docs/stable/generated/torch.fft.rfft2.html), and it should be trivial to use rfftn to replace the rfft2 in your model. If you still feel it's necessary to add rfft2, please open another issue, as this issue is for rfftn specifically.
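For example (a quick sanity check, not taken from the converter docs):

```python
# rfft2 defaults to the last two dimensions, so rfftn with dim=(-2, -1)
# is a drop-in replacement.
import torch

x = torch.rand(1, 3, 64, 64)
a = torch.fft.rfft2(x)                 # op not supported by the converter
b = torch.fft.rfftn(x, dim=(-2, -1))   # equivalent, supported replacement
assert torch.allclose(a, b)
```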
Thank you for your reply. But after changing rfft2 to rfftn in the code, I still get the following error:
It seems to come from view_as_complex or something else. Would you please help me fix this?
@hzphzp from the error message I cannot tell. Could you open another issue with a code snippet to reproduce this error?