BRECQ

Why is bias not quantized?

Open · ardeal opened this issue 1 year ago · 1 comment

Hi,

In quant_layer.py, in the forward function of QuantModule, why is the bias not quantized?


    def forward(self, input: torch.Tensor):
        if self.use_weight_quant:
            # The weight is passed through the (fake) weight quantizer,
            # but the bias is used as-is, without quantization.
            weight = self.weight_quantizer(self.weight)
            bias = self.bias
        else:
            weight = self.org_weight
            bias = self.org_bias
        out = self.fwd_func(input, weight, bias, **self.fwd_kwargs)
...

ardeal · Jul 27 '24
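For context (this is not an answer from the BRECQ authors): in many quantized-inference pipelines the bias is either kept in floating point or quantized to int32 with a scale equal to the product of the input and weight scales, because it is added to the int32 accumulator of an int8 × int8 matmul and therefore contributes negligible rounding error. The sketch below only illustrates that common convention; `quantize_bias`, `act_scale`, and `weight_scale` are hypothetical names and are not part of BRECQ's code.

    import torch

    def quantize_bias(bias: torch.Tensor,
                      act_scale: torch.Tensor,
                      weight_scale: torch.Tensor) -> torch.Tensor:
        # Hypothetical sketch: the bias is added to the int32 accumulator,
        # so its scale must be act_scale * weight_scale.
        bias_scale = act_scale * weight_scale
        q = torch.clamp(torch.round(bias / bias_scale), -2**31, 2**31 - 1)
        # Return the fake-quantized (dequantized) bias for simulated inference.
        return q * bias_scale

At 32-bit precision the rounding error introduced this way is tiny, which is one common reason simulated-quantization code leaves the bias unquantized.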

@yhhhli Could you please help answer the question above?

ardeal · Jul 30 '24