
How to apply this package to quantized models?

Open Tallisgo opened this issue 2 years ago • 3 comments

First, thanks for your hard work. Recently, I trained a model based on the quantizable mobilenet_v3_large from torchvision. I pruned this model and then quantized the pruned model, but the precision dropped a lot and is unstable. Can you give me some suggestions?

Tallisgo avatar Aug 17 '22 09:08 Tallisgo

Hello, @dongL-Wu. Is it possible to apply pruning and quantization separately in your case? For example, we can prune & finetune a model and then quantize it.
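The prune-first-then-quantize ordering suggested here can be sketched roughly as below. This is a minimal, self-contained illustration, not the thread's actual code: `TinyNet` is a toy stand-in for the quantizable mobilenet_v3_large, unstructured magnitude pruning stands in for Torch-Pruning's structured pruning, and a finetuning step would normally go between the two stages.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model with quant/dequant stubs for eager-mode post-training
# quantization (a minimal stand-in for the quantizable mobilenet_v3_large).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet().eval()

# Stage 1: prune first (unstructured magnitude pruning as a stand-in),
# then make the pruning permanent so quantization sees plain tensors.
prune.l1_unstructured(model.conv, name="weight", amount=0.2)
prune.remove(model.conv, "weight")

# (In a real pipeline, finetune here to recover accuracy before quantizing.)

# Stage 2: post-training static quantization on the already-pruned model.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
with torch.no_grad():                       # calibration pass
    model(torch.randn(4, 3, 16, 16))
torch.quantization.convert(model, inplace=True)

out = model(torch.randn(1, 3, 16, 16))
```

The key point is that pruning (and any recovery finetuning) finishes before the quantization observers ever see the model, so the calibration statistics match the final architecture.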

VainF avatar Aug 17 '22 09:08 VainF

Thanks for your reply @VainF. Maybe I should do just that. My pipeline is:

  1. train and save a model
  2. initialize a model with the saved weights
  3. use torch_pruning to prune it
  4. use PyTorch's quantization to quantize it

```python
from torchvision.models.quantization import mobilenet_v3_large

# prune every convolution layer
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        prune_conv(m, amount=0.2)
```

Maybe pruning every Conv uniformly is not valid, and I need to analyse importance with this file?
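The "analyse importance" idea can be sketched in plain PyTorch: rank the output channels of a Conv2d by the L2 norm of their filters and mark the weakest ones for removal. This only illustrates the concept; Torch-Pruning's importance criteria and dependency handling are more elaborate, and `amount` here mirrors the 0.2 ratio from the snippet above.

```python
import torch
import torch.nn as nn

# One importance score per output channel: the L2 norm of that
# channel's filter over (in_channels, kH, kW).
conv = nn.Conv2d(3, 8, 3)

with torch.no_grad():
    scores = conv.weight.flatten(1).norm(p=2, dim=1)

amount = 0.2
n_prune = int(amount * conv.out_channels)
prune_idx = scores.argsort()[:n_prune]   # indices of the least important channels
```

Selecting channels this way (instead of pruning every Conv by a fixed fraction regardless of content) is exactly what magnitude-based importance criteria automate.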

Tallisgo avatar Aug 18 '22 02:08 Tallisgo

I think the model should be finetuned after pruning to recover its accuracy.
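The recovery step mentioned here is an ordinary training loop run on the pruned model before quantization. A minimal sketch, where the tiny model, random data, epoch count, and learning rate are all placeholder assumptions:

```python
import torch
import torch.nn as nn

# Stand-ins for the pruned network and the training data.
model = nn.Sequential(nn.Flatten(), nn.Linear(8, 2))
data = [(torch.randn(4, 8), torch.randint(0, 2, (4,))) for _ in range(5)]

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(2):                 # a few recovery epochs
    for x, y in data:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```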

VainF avatar Aug 18 '22 10:08 VainF