Mohamed Moursi
I am not sure what type is actually used, but as far as I understand it can be float. You do not have to use int8 to get the values...
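(A minimal sketch of what I mean, assuming a recent Brevitas version where layers can return a `QuantTensor`: the dequantized `.value` is an ordinary float tensor, and `.int()` recovers the integer representation.)

```python
import torch
import brevitas.nn as qnn

# With return_quant_tensor=True, QuantReLU returns a QuantTensor whose
# .value holds the dequantized values stored as regular floats;
# .int() recovers the underlying integer codes.
act = qnn.QuantReLU(return_quant_tensor=True)
qt = act(torch.randn(2, 4))
print(qt.value.dtype)  # torch.float32, not int8
print(qt.int())        # the underlying integer representation
```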
Hi, that's because `qnn.QuantReLU` does not actually take `max_val` as an argument; you can check the definition [here](https://github.com/Xilinx/brevitas/blob/32cc847fc7cd6ca4c0201fd6b7d185c1103ae088/src/brevitas/nn/quant_activation.py#L50)
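(If you do want a ReLU6-style clip, a pattern I've seen in the Brevitas examples is, if I remember correctly, to pick an act quantizer whose scale is initialized from `max_val` and pass `max_val` through as a keyword argument; treat the exact quantizer name below as an assumption to verify against your Brevitas version.)

```python
import brevitas.nn as qnn
from brevitas.quant import Uint8ActPerTensorFloatMaxInit  # quantizer name assumed

# max_val is not a declared argument of QuantReLU itself; here it is
# consumed by the act quantizer, whose scale is initialized from it.
act = qnn.QuantReLU(act_quant=Uint8ActPerTensorFloatMaxInit, max_val=6.0)
```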
Hi, that's interesting. I tried to replicate your issue using a built-in layer, i.e. `QuantLinear`, and indeed it's also there:
```python
import torch
from brevitas.quant import Int8BiasPerTensorFixedPointInternalScaling, Int8ActPerTensorFixedPoint, Int8Bias
from...
```
Hi @KokyoK, usually this error results from using a bias quantizer that requires the input to be quantized (i.e. to have a scale factor). Quantizers like `Int8Bias` require the...
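(A minimal sketch of the workaround I mean, assuming your layer currently receives an unquantized input: give the layer an `input_quant` so a scale factor is available for the bias quantizer.)

```python
import torch
import brevitas.nn as qnn
from brevitas.quant import Int8Bias, Int8ActPerTensorFloat

# Int8Bias derives the bias scale from input_scale * weight_scale, so the
# layer must see a quantized input; quantizing it inside the layer via
# input_quant is one way to provide that scale.
layer = qnn.QuantLinear(
    16, 8, bias=True,
    input_quant=Int8ActPerTensorFloat,
    bias_quant=Int8Bias)
out = layer(torch.randn(4, 16))
```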
Maybe the weight bit width is too small.
I have not explored this part yet, so I cannot help.
Sure, but I guess Discussions would be a better option than Issues.
Hi @g12bftd, as far as I understand, merging batch normalization layers is usually a post-training optimization, so I would train the model and then create a script that defines two...
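(Roughly the kind of script I have in mind; this is the textbook BN-folding arithmetic, not a Brevitas-specific API, and the helper name is just for illustration.)

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold an eval-mode BatchNorm2d into the preceding Conv2d, in place."""
    std = torch.sqrt(bn.running_var + bn.eps)
    gamma, beta = bn.weight.data, bn.bias.data
    # Scale each output channel's weights by gamma / std.
    conv.weight.data *= (gamma / std).reshape(-1, 1, 1, 1)
    bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    # BN(Wx + b) == W'x + b' with W' = W * gamma/std and the bias below.
    new_bias = (bias - bn.running_mean) * gamma / std + beta
    if conv.bias is None:
        conv.bias = nn.Parameter(new_bias)
    else:
        conv.bias.data = new_bias
    return conv
```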
Hi @wilfredkisku, what do you mean by classical BN layers? Do you mean `torch.nn.BatchNorm2d`? If that is what you mean, then if your model requires it you will have to...
Yes, they can be fused; Brevitas even has a function to do it under `brevitas.nn.utils`.
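(For reference, a sketch of how I'd call it; I believe the helper is `merge_bn`, but double-check the name and signature in your Brevitas version.)

```python
import torch.nn as nn
import brevitas.nn as qnn
from brevitas.nn.utils import merge_bn  # helper name assumed; verify in your version

conv = qnn.QuantConv2d(3, 8, kernel_size=3, bias=True)
bn = nn.BatchNorm2d(8).eval()  # use the trained running stats
merge_bn(conv, bn)  # folds the BN statistics into conv's weight and bias in place
```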