
A toolkit for optimizing Keras and TensorFlow ML models for deployment, including quantization and pruning.

228 model-optimization issues, sorted by recently updated

```python
import tensorflow as tf
from tensorflow.keras import layers, models
import tensorflow_model_optimization as tfmot

input_shape = (20,)
annotated_model = tf.keras.Sequential([
    tfmot.quantization.keras.quantize_annotate_layer(
        tf.keras.layers.Dense(20, input_shape=input_shape)),
    tf.keras.layers.Flatten()
])
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
```
when I run...

```python
import tensorflow as tf
from tensorflow.keras import layers, models
import tensorflow_model_optimization as tfmot

# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
input_shape = (20,)
annotated_model...
```
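Both snippets above use `quantize_annotate_layer` plus `quantize_apply`, which wrap the annotated layers with fake-quantization during training. Conceptually, fake-quantization is a uniform quantize/dequantize round trip over the weights; the sketch below illustrates that idea in plain Python (the function name and the per-tensor symmetric scheme are illustrative assumptions, not the tfmot implementation):

```python
# Illustrative sketch of what quantization-aware training simulates:
# map floats onto an integer grid, then back to floats, so the model
# trains against the rounding error it will see after deployment.

def fake_quantize(values, num_bits=8):
    """Quantize floats to a num_bits integer grid and back (per-tensor, symmetric)."""
    max_abs = max(abs(v) for v in values)
    if max_abs == 0.0:
        return list(values)
    qmax = 2 ** (num_bits - 1) - 1        # e.g. 127 for 8 bits
    scale = max_abs / qmax
    # Round onto the integer grid, clamp, then dequantize.
    return [max(-qmax, min(qmax, round(v / scale))) * scale for v in values]

weights = [0.5, -1.0, 0.126, 0.0]
print(fake_quantize(weights))
```

The real tfmot wrappers also quantize activations and learn the ranges during training, but the round-trip above is the core of what `quantize_apply` inserts into the graph.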

I am trying to prune the CNN model given below in the Python script. **Describe the bug** When I execute the code, it gives me the error every time....

bug

**Describe the bug** I am trying to prune a MobileNetV2 model using `prune_low_magnitude` but am running into the following error.
```
2025-01-19 17:41:03.439078: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to...
```
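For context, `prune_low_magnitude` wraps layers so that, as training proceeds, the smallest-magnitude weights are masked to zero until a target sparsity is reached. A minimal pure-Python sketch of one magnitude-pruning step (function name and flat-list representation are ours for illustration, not the tfmot API):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest |value|."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight; everything
    # at or below it is masked to zero (ties may prune slightly more).
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
print(prune_by_magnitude(w, 0.5))  # half the weights masked to 0.0
```

In tfmot the mask is applied per pruning step on a schedule (e.g. `PolynomialDecay`), but each step is essentially this threshold-and-mask operation.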

bug

`ModelTransformer._match_layer_with_inputs` calls `self._get_layers(input_layer_names)`. `input_layer_names` has a strict order, i.e. `_get_layers`'s result must return tensors in the same order as `input_layer_names`. The current implementation is:
```python
def _get_layers(self, layer_names):...
```
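The fix the report implies is to iterate over the requested names rather than over the layer collection, so the output order is guaranteed to match the input order. A hedged sketch (the name-to-layer mapping and function signature here are illustrative, not the actual `ModelTransformer` code):

```python
def get_layers_ordered(layers_by_name, layer_names):
    """Return layers in exactly the order given by `layer_names`.

    Driving the comprehension with `layer_names` (not the layer
    collection) guarantees output order matches input-name order,
    which is what `_match_layer_with_inputs` relies on.
    """
    return [layers_by_name[name] for name in layer_names]

layers = {"conv": "Conv2D", "bn": "BatchNorm", "relu": "ReLU"}
print(get_layers_ordered(layers, ["relu", "conv"]))  # order follows the names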

bug

I see that numpy was last updated quite a while ago - https://github.com/tensorflow/model-optimization/commit/10ff67f7601cf667e5c8b9783f23a68244b62ae9. Is there a plan to update to numpy v2 sometime soon?

feature request

Hi, I have a question: after pruning a model to 90% sparsity, meaning 90% of the weights are set to zero, does this reduce MAC operations, or are the zeros still stored...
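This question comes up often: on a dense kernel, pruned zeros are still stored and still multiplied, so compute savings require either a sparse-aware kernel or a compressed storage format after `strip_pruning`. A small sketch contrasting dense storage with a coordinate-list sparse form (illustrative only, not a tfmot API):

```python
def to_sparse(weights):
    """Store only the nonzero entries as (index, value) pairs."""
    return [(i, w) for i, w in enumerate(weights) if w != 0.0]

dense = [0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, -0.3]  # 90% sparse
sparse = to_sparse(dense)
# The dense form still holds all 10 slots; the sparse form holds only
# the 2 nonzeros (plus their indices), which is where size wins come from.
print(len(dense), len(sparse))
```

Whether MACs are skipped depends on the runtime: a standard dense matmul multiplies the zeros anyway, while sparse-aware kernels (or post-pruning compression such as gzip on the stripped model) are what realize the benefit.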

Internal change.

technique:pruning

**Describe the bug** Hi! I have a problem performing QAT on a GNN (built from [TF-GNN](https://github.com/tensorflow/gnn)) that uses custom layers. I skipped the custom layers and opted to only quantize...

bug