Exporting model to TFLite fails with `quantized engine FBGEMM is not supported`
Describe the bug
Trying to export the model with config configs/pfld/pfld_mbv2n_112.py
fails with RuntimeError: quantized engine FBGEMM is not supported
Environment
Environment in which the bug appears:
- Python version: 3.10
- PyTorch Version: torch==2.0.1
- MMCV Version: 2.0.1
- EdgeLab Version: na
- Code you run
python3 tools/export.py configs/pfld/pfld_mbv2n_112.py work_dirs/pfld_mbv2n_112/epoch_1.pth --target tflite --cfg-options data_root=datasets/meter/
- The detailed error
Traceback (most recent call last):
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 509, in <module>
    main()
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 501, in main
    export_tflite(args, model, loader)
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 375, in export_tflite
    ptq_model = quantizer.quantize()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 530, in quantize
    qat_model = self.prepare_qat(rewritten_graph, self.is_input_quantized, self.backend, self.fuse_only)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 3664, in prepare_qat
    self.prepare_qat_prep(graph, is_input_quantized, backend)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 714, in prepare_qat_prep
    self.prepare_qconfig(graph, backend)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 3598, in prepare_qconfig
    torch.backends.quantized.engine = backend
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/torch/backends/quantized/__init__.py", line 33, in __set__
    torch._C._set_qengine(_get_qengine_id(val))
RuntimeError: quantized engine FBGEMM is not supported
Additional context
Running on a Mac M2, torch CPU-only, mmcv compiled from source.
Hi @sbocconi! We were wondering: was your torch installed from pip or conda? We strongly recommend using pip to install torch.
Hi @MILK-BIOS, the error occurs because FBGEMM is not supported on ARM architectures such as the macOS M2, so apparently you need to use python tools/export.py --backend qnnpack.
BTW, I have used pip to install torch.
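For reference, you can check which quantized engines your local PyTorch build actually supports before exporting. A minimal sketch using the standard torch.backends.quantized API (the available engine names depend on how torch was built):

```python
import torch

# List the quantized engines compiled into this PyTorch build.
# On Apple Silicon this typically includes 'qnnpack' but not 'fbgemm'.
print(torch.backends.quantized.supported_engines)

# Assigning an unsupported engine raises the RuntimeError seen in the
# traceback above, so pick one that is actually available.
if 'qnnpack' in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = 'qnnpack'
print(torch.backends.quantized.engine)
```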
Oh, glad to see you have solved the problem! We need to make our code more compatible.
Unfortunately the Mac M2 ARM is not well supported yet, as it is a new architecture. I had to make the following two changes to get it to work:
- Run export OMP_NUM_THREADS=1 && python tools/train.py <params>, otherwise the code hangs.
- Change is_mps_available() in mmengine/device/utils.py to always return False (see the sketch after this list), otherwise I get the following error:
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
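If you would rather not edit the installed package, a hypothetical alternative is to monkey-patch the check at runtime. This is only a sketch: it assumes is_mps_available is defined in mmengine/device/utils.py as described above and re-exported from mmengine.device, and the patch must run before any training code queries the device:

```python
# Hypothetical wrapper script, not part of ModelAssistant.
# Force mmengine to take the CPU path instead of MPS.
import mmengine.device
import mmengine.device.utils as device_utils

device_utils.is_mps_available = lambda: False     # patch the definition
mmengine.device.is_mps_available = lambda: False  # cover the re-export too

# Import and invoke the training entry point only after patching,
# so nothing caches the original MPS check.
```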
Maybe you can mention this in the documentation?
Thank you, we will update the docs.