mmrazor
[Bug] TypeError: class `MMArchitectureQuant` in mmrazor/models/algorithms/quantization/mm_architecture.py: _fuse_fx() got an unexpected keyword argument 'graph_module'
Describe the bug
Traceback (most recent call last):
  File "/opt/conda/envs/torch2/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 122, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/mnt/jingcheng/mmrazor/mmrazor/models/algorithms/quantization/mm_architecture.py", line 90, in __init__
    self.qmodels = self._build_qmodels(self.architecture)
  File "/mnt/jingcheng/mmrazor/mmrazor/models/algorithms/quantization/mm_architecture.py", line 297, in _build_qmodels
    observed_module = self.quantizer.prepare(
  File "/mnt/jingcheng/mmrazor/mmrazor/models/quantizers/native_quantizer.py", line 238, in prepare
    graph_module = _fuse_fx(
TypeError: _fuse_fx() got an unexpected keyword argument 'graph_module'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "tools/ptq.py", line 73
TypeError: class `MMArchitectureQuant` in mmrazor/models/algorithms/quantization/mm_architecture.py: _fuse_fx() got an unexpected keyword argument 'graph_module'
To Reproduce
The command you executed.
python tools/ptq.py my_configs/ptq_openvino_yolox_s_1xb32-400e_coco_calib32xb32.py
Post related information
- The output of pip list | grep "mmcv\|mmrazor\|^torch":
PyTorch: 2.0.0+cu117
TorchVision: 0.15.1+cu117
OpenCV: 4.7.0
MMEngine: 0.7.3
MMCV: 2.0.0
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 11.7
MMDeploy: 1.1.0+

Backend information:
tensorrt: 8.2.3.0
tensorrt custom ops: Available
ONNXRuntime: 1.14.1
ONNXRuntime-gpu: 1.8.1
ONNXRuntime custom ops: NotAvailable
pplnn: None
ncnn: 1.0.20230517
ncnn custom ops: NotAvailable
snpe: None
openvino: 2023.0.0
torchscript: 2.0.0
torchscript custom ops: NotAvailable
rknn-toolkit: None
rknn-toolkit2: None
ascend: None
coreml: None
tvm: None
vacc: None

Codebase information:
mmdet: 3.0.0
mmseg: None
mmpretrain: 1.0.0rc8
mmocr: 1.0.0
mmagic: None
mmdet3d: None
mmpose: None
mmrotate: None
mmaction: None
mmrazor: 1.0.0

- Your config file if you modified it or created a new one. [here]
- Your train log file if you meet the problem during training. [here]
- Other code you modified in the mmrazor folder. [here]
Additional context
[here]
I apologize for the inconvenience, but I was unable to reproduce the issue.
If the issue in #554 is unresolved, execution should not reach the _fuse_fx step.
Perhaps you have made some other modifications?
You can try following the solution in #554 first and see if you still encounter this problem.
ok thanks.
@PancakeAwesome I faced this error before. It's because you're using torch 2.0, where they updated _fuse_fx from
def _fuse_fx(
    graph_module: GraphModule,
    is_qat: bool,
    fuse_custom_config: Union[FuseCustomConfig, Dict[str, Any], None] = None,
    backend_config: Union[BackendConfig, Dict[str, Any], None] = None,
) -> GraphModule:
to ->
def _fuse_fx(
    model: GraphModule,
    is_qat: bool,
    fuse_custom_config: Union[FuseCustomConfig, Dict[str, Any], None] = None,
    backend_config: Union[BackendConfig, Dict[str, Any], None] = None,
) -> GraphModule:
so just change the call accordingly inside native_quantizer.py at line 238 and it should work.
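For reference, here is a minimal compatibility sketch (not mmrazor's actual code): the helper name fuse_fx_compat and the variable traced_graph are hypothetical, and the only point it illustrates is that passing the GraphModule positionally works whether torch's private _fuse_fx names its first parameter graph_module (torch 1.x) or model (torch 2.x).

from typing import Any, Dict, Union

from torch.ao.quantization.backend_config import BackendConfig
from torch.ao.quantization.fx.custom_config import FuseCustomConfig
from torch.ao.quantization.quantize_fx import _fuse_fx
from torch.fx import GraphModule


def fuse_fx_compat(
    traced_graph: GraphModule,
    is_qat: bool,
    fuse_custom_config: Union[FuseCustomConfig, Dict[str, Any], None] = None,
    backend_config: Union[BackendConfig, Dict[str, Any], None] = None,
) -> GraphModule:
    # Hypothetical helper: pass the traced module positionally instead of
    # as `graph_module=...`, so the same call works on torch 1.x and 2.x.
    return _fuse_fx(traced_graph, is_qat, fuse_custom_config, backend_config)

Equivalently, in native_quantizer.py you can simply rename the keyword graph_module=... to model=... (or drop the keyword and pass the argument positionally) in the _fuse_fx(...) call at line 238.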