model-optimization
A toolkit for optimizing Keras and TensorFlow ML models for deployment, including quantization and pruning.
**Describe the bug** Cannot `pip install` the nightly version of this package **System information** TensorFlow version (installed from source or binary): none TensorFlow Model Optimization version (installed from source or binary):...
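For reference, a minimal install-and-verify sketch, assuming the nightly build is published on PyPI as `tf-model-optimization-nightly` (the stable package is `tensorflow-model-optimization`); the package name is an assumption, not confirmed by this report:

```python
# Assumed install commands (run in a shell):
#   pip install --upgrade tensorflow-model-optimization   # stable release
#   pip install --upgrade tf-model-optimization-nightly   # nightly build (assumed name)
#
# After installation, the import below confirms which build is active.
import tensorflow_model_optimization as tfmot

# Nightly builds typically carry a dated/dev suffix in the version string.
print(tfmot.__version__)
```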
Hi, when I try Quantization Aware Training on my model, I get the following error in my 'CustomLayerMaxPooling1D': --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In [62],...
Hello, I am trying to apply quantization-aware training to MobileNet and am testing on the MNIST dataset. The floating-point model works very well, but the moment I...
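For orientation, a minimal sketch of whole-model quantization-aware training on MNIST with `tfmot.quantization.keras.quantize_model`; the small Sequential model below is a stand-in for the reporter's MobileNet, not their actual code:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Load a small MNIST subset and scale to [0, 1].
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train[:1000] / 255.0)[..., tf.newaxis].astype("float32")
y_train = y_train[:1000]

# Stand-in float model (the reporter uses MobileNet).
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Wrap the float model with QAT wrappers, then fine-tune as usual.
qat_model = tfmot.quantization.keras.quantize_model(base_model)
qat_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
qat_model.fit(x_train, y_train, epochs=1, validation_split=0.1)
```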
**Describe the bug** Unable to quantize a custom layer to int8 even after applying quantization. **System information** TensorFlow version (installed from source or binary): 2.15.0-dev20230814 TensorFlow Model Optimization version (installed from...
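Custom layers, as in the two reports above, are not covered by the default QAT registry and must be annotated with a `QuantizeConfig`. A minimal sketch of that pattern follows; `CustomMaxPooling1D` and `NoOpQuantizeConfig` are hypothetical stand-ins for the reporters' classes:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_apply = tfmot.quantization.keras.quantize_apply
quantize_scope = tfmot.quantization.keras.quantize_scope


class NoOpQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    """Quantizes only the layer output; suitable for a parameter-free layer
    such as a custom pooling layer (hypothetical example)."""

    def get_weights_and_quantizers(self, layer):
        return []

    def get_activations_and_quantizers(self, layer):
        return []

    def set_quantize_weights(self, layer, quantize_weights):
        pass

    def set_quantize_activations(self, layer, quantize_activations):
        pass

    def get_output_quantizers(self, layer):
        return [tfmot.quantization.keras.quantizers.MovingAverageQuantizer(
            num_bits=8, per_axis=False, symmetric=False, narrow_range=False)]

    def get_config(self):
        return {}


class CustomMaxPooling1D(tf.keras.layers.MaxPooling1D):
    """Hypothetical stand-in for the reporter's custom pooling layer."""


annotated = quantize_annotate_model(tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 1)),
    quantize_annotate_layer(CustomMaxPooling1D(), NoOpQuantizeConfig()),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
]))

# Custom classes must be visible inside quantize_scope when the model is rebuilt.
with quantize_scope({"CustomMaxPooling1D": CustomMaxPooling1D,
                     "NoOpQuantizeConfig": NoOpQuantizeConfig}):
    qat_model = quantize_apply(annotated)
```

Whether the layer's output (or any weights) ends up in int8 then depends on the quantizers returned by the `QuantizeConfig`; the no-op config above only fake-quantizes the output.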
Prior to filing: check that this should be a bug instead of a feature request. Everything supported, including the compatible versions of TensorFlow, is listed in the overview page of...
If there are multiple input layers (and therefore multiple paths), this early return might yield an empty set before all paths have been checked. By simply removing it, the path that doesn't...
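A small illustrative sketch of the failure mode described above; this is not the repository's actual transform code, only a toy showing why an early return on the first path that yields nothing hides matches found along the other input paths:

```python
def collect_matches(input_layers, matches_along_path):
    """Collect matches reachable from every input layer."""
    found = set()
    for inp in input_layers:
        path_matches = matches_along_path(inp)
        if not path_matches:
            # Early out (the behaviour removed by this PR): returning an empty
            # set here would skip every remaining path.
            # return set()
            continue
        found.update(path_matches)
    return found


# Two input paths; only the second one contains a match.
print(collect_matches(
    ["input_a", "input_b"],
    lambda inp: {"conv"} if inp == "input_b" else set()))
# -> {'conv'} with the early out removed; set() with it in place.
```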
Without this change, if there are multiple prunable weights in a wrapped layer, the wrapped model cannot be saved to HDF5 (.h5) because of duplicate dataset names (the layer has multiple...
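A minimal reproduction sketch of the scenario this PR addresses, assuming a custom `PrunableLayer` with two prunable weight tensors; the layer and file name below are hypothetical:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot


class TwoKernelDense(tf.keras.layers.Layer, tfmot.sparsity.keras.PrunableLayer):
    """Hypothetical layer exposing two prunable weight tensors."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.kernel_a = self.add_weight("kernel_a", shape=(dim, self.units))
        self.kernel_b = self.add_weight("kernel_b", shape=(dim, self.units))

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel_a) + tf.matmul(inputs, self.kernel_b)

    def get_prunable_weights(self):
        # More than one prunable weight -> more than one mask/threshold variable
        # inside the pruning wrapper.
        return [self.kernel_a, self.kernel_b]


model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tfmot.sparsity.keras.prune_low_magnitude(TwoKernelDense(3)),
])
# Before this change, saving to HDF5 could fail because the wrapper's datasets
# for the two weights collide on the same name.
model.save("pruned_two_kernels.h5")
```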
This aids downstream repos that implement fixes for various cloning issues by making this function monkey-patchable. For context, I am part of @hunse's team that is affected...
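For readers unfamiliar with the technique, a toy sketch of the monkey-patching pattern this PR enables; all names here are hypothetical and stand in for the cloning helper, not the repository's actual code. The point is that callers must look the helper up through its module attribute at call time for a downstream replacement to take effect:

```python
import types

clone_lib = types.SimpleNamespace()  # stands in for the module owning the helper


def _default_clone(model):
    return f"cloned({model})"


clone_lib.clone_model = _default_clone


def run_transform(model):
    # Resolved at call time, so a patch installed later takes effect here.
    return clone_lib.clone_model(model)


def patched_clone(model):
    # A downstream repo's workaround wrapping the original behaviour.
    return "patched:" + _default_clone(model)


clone_lib.clone_model = patched_clone  # the monkey patch
print(run_transform("my_model"))       # -> patched:cloned(my_model)
```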
Internal test
**Describe the bug** When quantizing the Keras model after pruning, the following error is reported: RuntimeError: Layer conv1d: is not supported. You can quantize this layer by passing a...
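A minimal sketch of the prune-then-quantize flow, using layers the default QAT scheme supports (Conv2D/Dense) as stand-ins; the reporter's Conv1D is not covered by the default registry and would additionally need to be annotated with a `QuantizeConfig`, as in the custom-layer sketch above:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Stand-in model; the reporter's model uses Conv1D.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

pruned = tfmot.sparsity.keras.prune_low_magnitude(model)
# ... compile and fine-tune here with tfmot.sparsity.keras.UpdatePruningStep() ...

# Strip the pruning wrappers before quantizing; quantize_model does not accept
# PruneLowMagnitude-wrapped layers directly.
stripped = tfmot.sparsity.keras.strip_pruning(pruned)
qat_model = tfmot.quantization.keras.quantize_model(stripped)
```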