nn-quantization-pytorch

why weights are uint

Open qiulinzhang opened this issue 3 years ago • 0 comments

        if self.bit_weights is not None:
            self.weight_quantization_default = quantization_mapping[self.qtype](self, self.weight,
                                                                                self.bit_weights, symmetric=True,
                                                                                uint=True, kwargs=kwargs)

I don't understand: for zero-centered weights, why quantize them as uint?
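For context, a symmetric quantizer can still store its codes as unsigned integers by shifting the zero-centered codes with a fixed mid-range offset (a "zero point" of 2^(bits-1)); the signedness of the storage type is then just a representation choice, and dequantization subtracts the offset back out. The sketch below illustrates this idea only; the function name, the offset convention, and the scale formula are my assumptions, not code from this repository.

```python
import numpy as np

def symmetric_uint_quantize(w, bits=8):
    """Symmetric quantization of zero-centered weights, stored as uint.

    The real-valued range is symmetric, [-alpha, +alpha], but the integer
    codes are shifted by a fixed zero point so they fit in an unsigned type.
    """
    qmax = 2 ** bits - 1              # e.g. 255 for 8 bits
    zero_point = 2 ** (bits - 1)      # mid-range offset, e.g. 128 (assumed convention)
    alpha = np.abs(w).max()
    scale = (2.0 * alpha) / qmax if alpha > 0 else 1.0
    # Zero-centered codes in roughly [-qmax/2, +qmax/2], shifted into [0, qmax].
    q = np.round(w / scale) + zero_point
    q = np.clip(q, 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Subtracting the zero point recovers the symmetric, signed codes.
    return (q.astype(np.float32) - zero_point) * scale
```

With this shift, a weight of exactly 0.0 maps to the code 128 rather than 0, and the round trip stays within one quantization step of the original values, so using an unsigned storage type loses nothing for symmetric, zero-centered weights.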

qiulinzhang avatar Dec 06 '21 09:12 qiulinzhang