
[Bug] Quantization for networks using AdaptivePadding, like efficientnet

Open choong-park opened this issue 1 year ago • 1 comment

Describe the bug

When I tried to quantize efficientnet with PTQ, the following error occurred.

Traceback (most recent call last):
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/root/workspace/mmrazor/mmrazor/models/algorithms/quantization/mm_architecture.py", line 90, in __init__
    self.qmodels = self._build_qmodels(self.architecture)
  File "/root/workspace/mmrazor/mmrazor/models/algorithms/quantization/mm_architecture.py", line 300, in _build_qmodels
    observed_module = self.quantizer.prepare(model, concrete_args)
  File "/root/workspace/mmrazor/mmrazor/models/quantizers/native_quantizer.py", line 231, in prepare
    traced_graph = self.tracer.trace(model, concrete_args=concrete_args)
  File "/root/workspace/mmrazor/mmrazor/models/task_modules/tracer/fx/custom_tracer.py", line 430, in trace
    'output', (self.create_arg(fn(*args)), ), {},
  File "/root/workspace/mmclassification/mmcls/models/classifiers/image.py", line 111, in forward
    feats = self.extract_feat(inputs)
  File "/root/workspace/mmclassification/mmcls/models/classifiers/image.py", line 196, in extract_feat
    x = self.backbone(inputs)
  File "/root/workspace/mmrazor/mmrazor/models/task_modules/tracer/fx/custom_tracer.py", line 406, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/ao/quantization/fx/tracer.py", line 103, in call_module
    return super().call_module(m, forward, args, kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 434, in call_module
    return forward(*args, **kwargs)
  File "/root/workspace/mmrazor/mmrazor/models/task_modules/tracer/fx/custom_tracer.py", line 400, in forward
    return _orig_module_call(mod, *args, **kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/workspace/mmclassification/mmcls/models/backbones/efficientnet.py", line 411, in forward
    x = layer(x)
  File "/root/workspace/mmrazor/mmrazor/models/task_modules/tracer/fx/custom_tracer.py", line 406, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/ao/quantization/fx/tracer.py", line 103, in call_module
    return super().call_module(m, forward, args, kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 434, in call_module
    return forward(*args, **kwargs)
  File "/root/workspace/mmrazor/mmrazor/models/task_modules/tracer/fx/custom_tracer.py", line 400, in forward
    return _orig_module_call(mod, *args, **kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/mmcv/cnn/bricks/conv_module.py", line 207, in forward
    x = self.conv(x)
  File "/root/workspace/mmrazor/mmrazor/models/task_modules/tracer/fx/custom_tracer.py", line 406, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/ao/quantization/fx/tracer.py", line 103, in call_module
    return super().call_module(m, forward, args, kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 434, in call_module
    return forward(*args, **kwargs)
  File "/root/workspace/mmrazor/mmrazor/models/task_modules/tracer/fx/custom_tracer.py", line 400, in forward
    return _orig_module_call(mod, *args, **kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/mmcv/cnn/bricks/conv2d_adaptive_padding.py", line 53, in forward
    max((output_h - 1) * self.stride[0] +
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/fx/proxy.py", line 298, in __bool__
    return self.tracer.to_bool(self)
  File "/opt/conda/envs/midap-mmrazor-torch1.13/lib/python3.9/site-packages/torch/fx/proxy.py", line 174, in to_bool
    raise TraceError('symbolically traced variables cannot be used as inputs to control flow')
torch.fx.proxy.TraceError: symbolically traced variables cannot be used as inputs to control flow

To Reproduce

Write a PTQ configuration for efficientnet and execute it.
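
The failure can also be reproduced outside the PTQ pipeline with plain torch.fx (a minimal sketch, assuming mmcv is installed; the import path is the one shown in the traceback):

```python
# Minimal repro with plain torch.fx (not mmrazor's CustomTracer); assumes
# mmcv is installed. The import path matches the file in the traceback above.
import torch.fx
from torch.fx.proxy import TraceError
from mmcv.cnn.bricks.conv2d_adaptive_padding import Conv2dAdaptivePadding

try:
    # Tracing enters Conv2dAdaptivePadding.forward, where the padding amount
    # is computed from the symbolic input shape and fed into max()/if.
    torch.fx.symbolic_trace(Conv2dAdaptivePadding(3, 8, 3, stride=2))
except TraceError as err:
    print(err)  # symbolically traced variables cannot be used as inputs to control flow
```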

Post related information

I did some debugging and found that the Conv2dAdaptivePadding operator causes the error. Its forward pass computes the padding from the input tensor's shape and feeds the result into "max" and "if" control flow, so the fx tracer fails to trace the network. I think this should be fixed, because not only efficientnet but also transformer networks such as ViT use AdaptivePadding.
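
One possible direction (just my assumption, not an existing mmrazor option) is to let the tracer treat Conv2dAdaptivePadding as a leaf module so its shape-dependent max/if logic is never traced. With plain torch.fx that looks roughly like the sketch below; LeafAwareTracer and TinyNet are hypothetical names used only for illustration:

```python
# Sketch of one possible workaround with plain torch.fx: treat
# Conv2dAdaptivePadding as a leaf module so its shape-dependent control flow
# is never symbolically traced.
import torch
import torch.nn as nn
import torch.fx
from mmcv.cnn.bricks.conv2d_adaptive_padding import Conv2dAdaptivePadding


class LeafAwareTracer(torch.fx.Tracer):

    def is_leaf_module(self, m: nn.Module, module_qualified_name: str) -> bool:
        # Keep the default rule (torch.nn modules are leaves) and additionally
        # stop at the adaptive-padding conv defined in mmcv.
        if isinstance(m, Conv2dAdaptivePadding):
            return True
        return super().is_leaf_module(m, module_qualified_name)


class TinyNet(nn.Module):

    def __init__(self):
        super().__init__()
        self.conv = Conv2dAdaptivePadding(3, 8, 3, stride=2)

    def forward(self, x):
        return self.conv(x)


net = TinyNet()
graph = LeafAwareTracer().trace(net)        # no TraceError: conv stays a call_module node
gm = torch.fx.GraphModule(net, graph)
print(gm(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 8, 16, 16])
```

The trade-off is that the tracer then sees the padding conv as an opaque call_module node, so the quantizer would still need explicit handling for it (for example a qconfig mapping, or a traceable rewrite of the padding logic) before that conv itself can be quantized.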

choong-park • Aug 22 '23 02:08

@choong-park How did you resolve this? I am working with mmyolo for OpenVINO quantization.

Priyanshu88 • Apr 29 '24 10:04