model-optimization
quantization.keras.quantize_model function runtime error with mobilenet v3
Describe the bug
The tensorflow_model_optimization.quantization.keras.quantize_model function throws the following error with MobileNetV3.
Traceback (most recent call last):
File "issue_report.py", line 14, in <module>
q_aware_model = tfmo.quantization.keras.quantize_model(model_mv3)
File "/home/.local/lib/python3.8/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 137, in quantize_model
annotated_model = quantize_annotate_model(to_quantize)
File "/home/.local/lib/python3.8/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 209, in quantize_annotate_model
return keras.models.clone_model(
File "/home/anaconda3/envs/tf-n-gpu/lib/python3.8/site-packages/tensorflow/python/keras/models.py", line 430, in clone_model
return _clone_functional_model(
File "/home/anaconda3/envs/tf-n-gpu/lib/python3.8/site-packages/tensorflow/python/keras/models.py", line 200, in _clone_functional_model
functional.reconstruct_from_config(model_configs,
File "/home/anaconda3/envs/tf-n-gpu/lib/python3.8/site-packages/tensorflow/python/keras/engine/functional.py", line 1279, in reconstruct_from_config
process_node(layer, node_data)
File "/home/anaconda3/envs/tf-n-gpu/lib/python3.8/site-packages/tensorflow/python/keras/engine/functional.py", line 1227, in process_node
output_tensors = layer(input_tensors, **kwargs)
File "/home/anaconda3/envs/tf-n-gpu/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 931, in __call__
return self._functional_construction_call(inputs, args, kwargs,
File "/home/anaconda3/envs/tf-n-gpu/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1069, in _functional_construction_call
outputs = self._keras_tensor_symbolic_call(
File "/home/anaconda3/envs/tf-n-gpu/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 802, in _keras_tensor_symbolic_call
return self._infer_output_signature(inputs, args, kwargs, input_masks)
File "/home/anaconda3/envs/tf-n-gpu/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 843, in _infer_output_signature
outputs = call_fn(inputs, *args, **kwargs)
File "/home/anaconda3/envs/tf-n-gpu/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 667, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
TypeError: tf__call() got an unexpected keyword argument 'y'
System information
TensorFlow version (installed from source or binary): 2.4.0-dev20200914.
TensorFlow Model Optimization version (installed from source or binary): 0.5.0.
Python version: 3.8.5
Describe the expected behavior
Code runs without crashing.
Describe the current behavior
Runtime error.
Code to reproduce the issue
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV3Large as MobileNetV3
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2
import tensorflow_model_optimization as tfmo
model_mv2 = MobileNetV2(include_top=True, weights='imagenet', input_shape=(224, 224, 3))
model_mv3 = MobileNetV3(include_top=True, weights='imagenet', input_shape=(224, 224, 3))
q_aware_model = tfmo.quantization.keras.quantize_model(model_mv2) # this runs
q_aware_model = tfmo.quantization.keras.quantize_model(model_mv3) # this doesn't
Additional context
I'm trying to quantize MobileNetV3 but ran into this issue. Please help! Thank you.
Same issue!
Hi @ejcgt, can you confirm whether this issue still happens on the latest version?
I have the same issue. As @teijeong suggested, I tried both versions:
With tensorflow_model_optimization v0.5.0 I get the same error mentioned before:
TypeError: tf__call() got an unexpected keyword argument 'y'
With tensorflow_model_optimization v0.5.1 I get the following error:
File "/home/anaconda3/envs/TF/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1561, in _add_dispatch
y = ops.convert_to_tensor(y, dtype_hint=x.dtype.base_dtype, name="y")
AttributeError: 'list' object has no attribute 'dtype'
The error seems to be related, as before, to the Lambda layers.
I am using TensorFlow 2.6 with Python 3.8.8 within an anaconda environment. (Similar issue: https://github.com/tensorflow/tensorflow/issues/50079)
I can reproduce it with both MobileNetV3Small and MobileNetV3Large. It is probably because some data is of list type, while tfmot assumes it is a tensorflow.python.framework.ops.Tensor or a tensorflow.python.keras.engine.keras_tensor.KerasTensor.
Has anyone been able to find a solution?
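One workaround worth trying, borrowed from the general "quantize only some layers" pattern in the TFMOT docs, is to annotate just the supported layer types and then call quantize_apply, leaving the Rescaling and Lambda/TFOpLambda layers alone. Below is a minimal sketch; the Conv2D/DepthwiseConv2D/Dense coverage is an illustrative assumption, and it presumes the clone step itself succeeds on your TF version (it has not been verified against this exact MobileNetV3 failure):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_apply = tfmot.quantization.keras.quantize_apply

model_mv3 = tf.keras.applications.MobileNetV3Large(
    include_top=True, weights='imagenet', input_shape=(224, 224, 3))

def annotate_supported(layer):
    # Wrap only layer types known to be quantizable; leave the
    # Rescaling and TFOpLambda/Lambda layers untouched.
    if isinstance(layer, (tf.keras.layers.Conv2D,
                          tf.keras.layers.DepthwiseConv2D,
                          tf.keras.layers.Dense)):
        return quantize_annotate_layer(layer)
    return layer

annotated_model = tf.keras.models.clone_model(
    model_mv3, clone_function=annotate_supported)
q_aware_model = quantize_apply(annotated_model)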
I ran into the same issue and ended up replacing MobileNetV3-Small with EfficientNet-Lite0, which fully supports PTQ and QAT. (I also saw a large quantization error from the SE blocks of MobileNetV3-Small.)
https://www.tensorflow.org/lite/api_docs/python/tflite_model_maker/image_classifier/EfficientNetLite0Spec
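For reference, a minimal TFLite Model Maker sketch using the EfficientNetLite0Spec from that link; the 'flower_photos/' directory and the 90/10 split are hypothetical placeholders, and Model Maker applies post-training quantization to the exported image classifier by default:

from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

# Hypothetical dataset: a folder with one subdirectory per class.
data = DataLoader.from_folder('flower_photos/')
train_data, test_data = data.split(0.9)

# Train an image classifier on the EfficientNet-Lite0 backbone.
model = image_classifier.create(
    train_data, model_spec=image_classifier.EfficientNetLite0Spec())

loss, accuracy = model.evaluate(test_data)

# Export a TFLite model; Model Maker quantizes image classifiers by default.
model.export(export_dir='.')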