Question about linear quantization

Open frankinwi opened this issue 4 years ago • 11 comments

I figured out the procedure of linear quantization and reproduced the experiments:

  1. Search the quantization strategy on the ImageNet100 dataset.
  2. Finetune the model on the whole ImageNet dataset with the strategy obtained from step 1.

It seems that the final accuracy of the quantized model depends more on the fine-tuning than on the searched strategy. Another question: why does the bit reduction process start from the last layer, as the _final_action_wall function shows?

frankinwi avatar Aug 03 '20 11:08 frankinwi

Could I ask: when I run with linear_quantization at its default=True, an error like the following appears [error screenshot not preserved]. Have you encountered these errors?

87Candy avatar Oct 08 '20 07:10 87Candy

@87Candy I have not encountered this error. The self._build_state_embedding() function builds the ten-dimensional state feature vector described in Section 3.1 of the paper; you can check it there.
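
For context, that state is one ten-dimensional feature vector per layer, roughly along these lines. The field choice below is illustrative only; the authoritative list is in the paper's Section 3.1 and in _build_state_embedding:

```python
import numpy as np

# Illustrative sketch of a 10-dim per-layer state in the spirit of HAQ Sec. 3.1;
# the exact fields and their order are defined in _build_state_embedding and may differ.
def layer_state(layer_idx, c_in, c_out, kernel_size, stride, feat_size,
                n_params, is_depthwise, is_weight_step, last_action):
    return np.array([layer_idx, c_in, c_out, kernel_size, stride, feat_size,
                     n_params, is_depthwise, is_weight_step, last_action],
                    dtype=np.float32)
```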

frankinwi avatar Oct 08 '20 07:10 frankinwi

There are two methods: one is K-means quantization and the other is linear quantization. May I ask what changes you made across the project files to get linear quantization running? Thanks for your help.

87Candy avatar Oct 08 '20 15:10 87Candy

@87Candy The error may be caused by data = torch.zeros(1, 3, H, W).cuda() in the measure_model function. You can try changing the batch size.
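
For instance, a minimal sketch of that change (H, W and the chosen batch size are assumptions; the rest of measure_model stays as-is):

```python
import torch

H, W = 224, 224      # assumed input resolution for the repo's imagenet models
batch_size = 16      # assumption: pick whatever your GPU/config expects

# Original profiling input in measure_model:
# data = torch.zeros(1, 3, H, W).cuda()
# Possible workaround if the single-sample shape is the culprit:
data = torch.zeros(batch_size, 3, H, W).cuda()
```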

frankinwi avatar Oct 09 '20 00:10 frankinwi

If I encounter other questions, could I communicate with you again?

87Candy avatar Oct 09 '20 06:10 87Candy

How to convert mobilenet v2 into qmobilenetv2? qmobilenetv2 seems to use QConv2d and QLinear, so how can I calibrate the bitwidths for mobilenetv2?

alan303138 avatar Nov 18 '22 12:11 alan303138

@alan303138 See https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/env/linear_quantize_env.py#L115

frankinwi avatar Nov 21 '22 01:11 frankinwi

Thank you for your reply. But judging from the pre-trained file they provide, mobilenetv2-150.pth.tar, there seems to be no inheritance from QModule, because that model is implemented in models/mobilenetv2. Do I need to make the model inherit from it, or is there something I missed?

I also built a pretrained qmobilenetv2 using the QConv2d and QLinear they provide, but I am not sure this is correct. Isn't it more usual to train in fp32 first and then convert to a quantized model (as in quantization-aware training)?

https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L454
If my current model is mobilenetv2, it will not calibrate, because the layers inside are nn.Conv2d and nn.Linear:
https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L455
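
One way to confirm this is to count the layers the calibration path would actually match (a hedged check; the QConv2d/QLinear import path is taken from the links above):

```python
import torch.nn as nn
from lib.utils.quantize_utils import QConv2d, QLinear

def count_calibratable(model: nn.Module):
    """Count layers the calibrate path would touch: only QModule-derived
    layers match, so a plain nn.Conv2d/nn.Linear mobilenetv2 yields zero."""
    return sum(isinstance(m, (QConv2d, QLinear)) for m in model.modules())
```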

alan303138 avatar Nov 22 '22 03:11 alan303138

@alan303138

  1. Modify strategy = [[8,-1], [8,8], [8,8], [8,8], ..., [8,8]] and use run_linear_quantize_finetune.sh to obtain a W8A8 quantized mobilenetv2 model (see the sketch after this list). https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/finetune.py#L316

  2. Modify the path variable to point to the W8A8 quantized mobilenetv2 model obtained in step 1. https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/models/mobilenetv2.py#L186

  3. Run run_linear_quantize_search.sh to perform the RL-based bitwidth search and obtain an optimal strategy. As the upper bound of the action space is 8 bits, you should use the W8A8 quantized model from step 1 as the baseline. That is why both float_bit and max_bit in the run_linear_quantize_search.sh script are set to 8. https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/run/run_linear_quantize_search.sh#L8

  4. Set strategy = "the searched optimal strategy from step 3" and use run_linear_quantize_finetune.sh to recover the accuracy of the mixed-precision quantized model. https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/finetune.py#L316
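
A hypothetical sketch of the step-1 edit: the strategy holds one [w_bit, a_bit] pair per quantizable layer, so counting the layers at runtime avoids hard-coding a length that depends on the architecture. The function name here is mine, not the repo's:

```python
from lib.utils.quantize_utils import QConv2d, QLinear  # repo's quantized layers

def make_w8a8_strategy(model):
    """Build one [w_bit, a_bit] pair per quantizable layer: W8A8 throughout,
    with [8, -1] leaving the first layer's input activation unquantized."""
    n_quant = sum(isinstance(m, (QConv2d, QLinear)) for m in model.modules())
    return [[8, -1]] + [[8, 8]] * (n_quant - 1)

# strategy = make_w8a8_strategy(model)  # then assign this at finetune.py#L316
```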

frankinwi avatar Nov 23 '22 01:11 frankinwi

@frankinwi Thank you for the very detailed steps. I'm still not sure: so I can't use the mobilenetv2-150.pth.tar model, right (it's only for K-means quantization)? If I use --arch qmobilenetv2, the model is built with QConv2d and QLinear, not the nn.Conv2d and nn.Linear used in the pretrained mobilenetv2-150.pth.tar.
https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/run/run_linear_quantize_finetune.sh#L3
mobilenetv2: https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/models/mobilenetv2.py#L169
qmobilenetv2: https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/models/mobilenetv2.py#L183

alan303138 avatar Nov 23 '22 10:11 alan303138

@alan303138

  1. QConv2d inherits from the QModule base class. https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L363 The constructor of QConv2d defaults to w_bit=-1, which first initializes self._w_bit = w_bit in QModule, i.e., self._w_bit = -1. https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L366

  2. When the forward function of QConv2d runs, it first calls self._quantize_activation(inputs=inputs) and then self._quantize_weight(weight=weight). https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L395

  3. Taking self._quantize_weight(weight=weight) as an example: since self._w_bit is -1, execution jumps to line 315 and the weights are returned without quantization. https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L287 https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L315

Putting it all together: if we do not use half precision (fp16, see the --half flag) and do not specify w_bit and a_bit for each QConv2d and QLinear layer, qmobilenetv2 will not be quantized at all.
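
A simplified paraphrase of that pass-through (not the repo's verbatim code; the real QModule applies linear quantization in the else branch):

```python
class QModuleSketch:
    """Paraphrased sketch of QModule's default bitwidth behavior."""
    def __init__(self, w_bit=-1, a_bit=-1):  # defaults mirror QModule's -1
        self._w_bit = w_bit
        self._a_bit = a_bit

    def _quantize_weight(self, weight):
        if self._w_bit < 0:   # bitwidth never assigned: the line-315 early return
            return weight     # weights pass through unquantized
        raise NotImplementedError("the real QModule linearly quantizes here")
```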

According to run_pretrain.sh and pretrain.py, the pre-trained file mobilenetv2-150.pth.tar seems to have been trained with fp16. Therefore, it might be unsuitable for linear quantization. You can load mobilenetv2-150.pth.tar and insert some prints before https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/models/mobilenetv2.py#L192 to check it out (see the sketch below).
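
A minimal sketch of that check (the checkpoint key layout is an assumption):

```python
import torch

ckpt = torch.load('mobilenetv2-150.pth.tar', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)   # assumption: common checkpoint layouts
for name, tensor in list(state_dict.items())[:5]:
    print(name, tensor.dtype)               # torch.float16 would confirm the fp16 suspicion
```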

frankinwi avatar Nov 23 '22 11:11 frankinwi