
How to compress my object detection model

Open lrh454830526 opened this issue 4 years ago • 1 comments

Hi, thank you to your team for doing such nice work! My team has trained a model with torchvision's Faster R-CNN, and now we need to compress it. After some struggle, we finally decided to use Distiller for the job. We are now facing the question of how best to compress or accelerate the model: by pruning or by quantization. Either method would be fine, but for various reasons we do not have much time.

1. Can you advise which approach takes less time, pruning or quantization?
2. We see that Distiller provides an API for pruning torchvision's Faster R-CNN, and I want to know how to prune with a different dataset.

I'm new to Distiller, so some of my wording may not be professional. Thank you for your reply.

lrh454830526 avatar Aug 12 '20 03:08 lrh454830526

I have compressed my Faster R-CNN model with my own dataset. You can use the Faster R-CNN API that PyTorch itself offers, and add the command-line option `--model fasterrcnn_resnet50_fpn`. But I ran into a problem with the compressed model size: does `compression_scheduler.state_dict()` save the smaller model's parameters or not? Why does my model have the same size under different sparsity/pruning settings?
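A likely explanation for the unchanged file size (not from this thread, but a general property of magnitude pruning): pruning only zeroes out weights, and a dense checkpoint stores a zero in exactly as many bytes as any other value, so the saved file does not shrink unless you convert to a sparse storage format or physically remove channels. Here is a minimal toy sketch of that effect using plain Python; the names and the 90% sparsity level are illustrative, and this is not Distiller or PyTorch code:

```python
# Toy illustration: zeroing weights ("pruning") does not reduce the size
# of a densely serialized checkpoint, because zeros occupy the same
# number of bytes as any other float.
import array
import random

def checkpoint_size(weights):
    """Serialize a dense float32 buffer and return its length in bytes."""
    return len(array.array("f", weights).tobytes())

random.seed(0)
dense = [random.uniform(-1.0, 1.0) for _ in range(10_000)]

# "Prune" 90% of the weights by zeroing the smallest magnitudes.
threshold = sorted(abs(w) for w in dense)[int(0.9 * len(dense))]
pruned = [w if abs(w) >= threshold else 0.0 for w in dense]

print(checkpoint_size(dense))   # 40000 bytes (10,000 floats x 4 bytes)
print(checkpoint_size(pruned))  # 40000 bytes -- identical despite 90% zeros
```

The same reasoning applies to a pruned PyTorch `state_dict`: the tensors keep their original shapes, so `torch.save` writes a file of the same size regardless of the sparsity setting.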

sungh66 avatar Sep 16 '21 10:09 sungh66