qkeras
                        QKeras: a quantization deep learning library for Tensorflow Keras
Is it possible to add QKeras to conda-forge? We need this to have a working hls4ml conda-forge recipe (See https://github.com/fastmachinelearning/hls4ml/issues/790). I used Grayskull to try to build a recipe (see:...
I have 2 models: one is a baseline Keras model and the other is its equivalent QKeras model; both are taken from the QKerasTutorial.ipynb. My Keras model is shown below: ``` Model: "model"...
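For reference, a minimal sketch of what such a pair of models might look like, assuming a toy 10-feature input (the layer sizes, quantizer settings, and variable names are illustrative and not taken from the tutorial):

```python
import tensorflow as tf
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

# Baseline float Keras model.
baseline = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])

# Its QKeras counterpart: Dense -> QDense, relu -> quantized_relu.
quantized = tf.keras.Sequential([
    QDense(16,
           kernel_quantizer=quantized_bits(8, 0, alpha=1),
           bias_quantizer=quantized_bits(8, 0, alpha=1),
           input_shape=(10,)),
    QActivation(quantized_relu(8)),
    QDense(1,
           kernel_quantizer=quantized_bits(8, 0, alpha=1),
           bias_quantizer=quantized_bits(8, 0, alpha=1)),
])

baseline.summary()
quantized.summary()
```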
When I use TensorFlow 2.11, Keras is automatically uninstalled when installing qkeras. Is there a compatibility table between the two, or which TF2 version should the latest qkeras correspond...
I know I can get the weights and biases to INT8 by setting kernel_quantizer=quantized_bits(bits=8, integer=7, alpha=1) and bias_quantizer=quantized_bits(bits=8, integer=7, alpha=1). However, sometimes the input tensor is still float because we have...
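One way to address the still-float input, sketched below under the assumption of a toy 10-feature input (layer size and names are illustrative): place a QActivation with a quantized_bits quantizer in front of the QDense layer, so the incoming tensor is quantized as well, not only the kernel and bias.

```python
import tensorflow as tf
from qkeras import QDense, QActivation, quantized_bits

inputs = tf.keras.Input(shape=(10,))
# Quantize the incoming tensor itself to 8 bits before it reaches the QDense layer.
x = QActivation(quantized_bits(bits=8, integer=7, alpha=1))(inputs)
outputs = QDense(
    16,
    kernel_quantizer=quantized_bits(bits=8, integer=7, alpha=1),
    bias_quantizer=quantized_bits(bits=8, integer=7, alpha=1),
)(x)
model = tf.keras.Model(inputs, outputs)
```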
As shown in the comment, the QAdaptiveActivation layer can be used to calculate the EMA of the min and max of the activation values, i.e., the quantization range of the...
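As a conceptual sketch only (this is not the QKeras implementation, and the variable names ema_min, ema_max, decay are made up), tracking the quantization range with an EMA of the per-batch min and max looks roughly like:

```python
import numpy as np

decay = 0.999
ema_min, ema_max = 0.0, 0.0
for _ in range(100):
    batch_acts = np.random.randn(32, 64).astype("float32")  # stand-in activations
    ema_min = decay * ema_min + (1 - decay) * batch_acts.min()
    ema_max = decay * ema_max + (1 - decay) * batch_acts.max()

# The tracked [ema_min, ema_max] interval then defines the quantization range,
# e.g. the scale for an 8-bit quantizer.
scale = (ema_max - ema_min) / (2**8 - 1)
print(ema_min, ema_max, scale)
```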
Are there any methods that would allow me to convert the trained quantized model to .tflite?
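A hedged sketch of one possible route, assuming the QKeras quantizers act as fake quantization during training and that `model` and `representative_samples` already exist (custom QKeras layers may still need to be stripped or registered as custom objects before conversion):

```python
import tensorflow as tf

def representative_dataset():
    # Yield a small number of representative inputs for post-training calibration.
    for sample in representative_samples[:100]:
        yield [sample[None, ...].astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```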
Hello, I have a use case where I want to introduce a bitwise OR operation as a custom layer. As a first step, I trained a simple model with 8-bit...
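For illustration, a sketch of what such a custom layer could look like, assuming the two inputs are already integer-valued (e.g. 8-bit fake-quantized activations that can safely be cast to int8); the class name and casting policy are made up for this example:

```python
import tensorflow as tf

class BitwiseOr(tf.keras.layers.Layer):
    """Elementwise bitwise OR of two integer-valued tensors."""

    def call(self, inputs):
        a, b = inputs
        a_int = tf.cast(a, tf.int8)
        b_int = tf.cast(b, tf.int8)
        out = tf.bitwise.bitwise_or(a_int, b_int)
        # Cast back to float so downstream Keras layers keep working.
        return tf.cast(out, tf.float32)

# Usage: y = BitwiseOr()([x1, x2])
```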
I see you have both `pyparser` and `pyparsing` in your `requirements.txt`. However, only `pyparser` is listed as a dependency in `setup.py`. Moreover, I only see a use of the `pyparsing`...