BRECQ
Why is bias not quantized?
Hi,
in quant_layer.py, in the forward function of QuantModule, why is the bias not quantized?
def forward(self, input: torch.Tensor):
    if self.use_weight_quant:
        weight = self.weight_quantizer(self.weight)
        bias = self.bias
    else:
        weight = self.org_weight
        bias = self.org_bias
    out = self.fwd_func(input, weight, bias, **self.fwd_kwargs)
...
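To make the behavior in question concrete, here is a minimal, self-contained sketch of that forward path. The `fake_quantize` helper is a hypothetical stand-in for BRECQ's weight quantizer (its name and signature are assumptions, not the repository's actual API); the point is that the weight is fake-quantized while the bias is passed through to the functional call untouched.

```python
import torch
import torch.nn.functional as F

def fake_quantize(x: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    # Hypothetical stand-in for the weight quantizer: symmetric uniform
    # fake-quantization to n_bits (round to an integer grid, then rescale).
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().max() / qmax
    return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

def quant_linear_forward(inp, weight, bias):
    # Mirrors the forward above for a linear layer with use_weight_quant=True:
    # only the weight is quantized; the bias is used as-is.
    return F.linear(inp, fake_quantize(weight), bias)

inp = torch.randn(4, 8)
w = torch.randn(16, 8)
b = torch.randn(16)
out = quant_linear_forward(inp, w, b)
```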
@yhhhli Could you please help answer the question above?