
PyTorch implementation of 'Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding' by Song Han, Huizi Mao, William J. Dally

11 Deep-Compression-PyTorch issues

When I run pruning.py, the following error is raised. What could the problem be? RuntimeError: An attempt has been made to start a new process before the current process has...
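This RuntimeError typically means the script's entry point is not guarded, so DataLoader worker processes re-execute the training code when they re-import the module (Windows uses the spawn start method). A minimal sketch of the usual fix, assuming pruning.py builds its DataLoader at module level; the names and structure here are illustrative, not the repository's actual code:

```python
# Hypothetical structure for pruning.py: wrap the entry point in a
# __main__ guard so worker processes spawned by the DataLoader can
# re-import the module without re-running the training code.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def main():
    train_data = datasets.MNIST('data', train=True, download=True,
                                transform=transforms.ToTensor())
    # num_workers > 0 starts worker processes; on Windows they use spawn
    train_loader = DataLoader(train_data, batch_size=64,
                              shuffle=True, num_workers=2)
    for images, labels in train_loader:
        pass  # training / pruning loop goes here

if __name__ == '__main__':
    main()
```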

@mightydeveloper Hi, I used the quantization script on my model. Because it contains convolution layers, I encountered the error "TypeError: expected dimension"...
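The quantization step clusters each layer's weights; if the code assumes 2-D fully connected weight matrices, a 4-D convolution kernel can trigger a dimension error like this. A hedged sketch of one workaround (an assumption, not the repository's own fix): reshape each conv kernel to 2-D before quantizing, then restore the original shape afterwards.

```python
import numpy as np
import torch

def flatten_conv_weight(weight: torch.Tensor) -> np.ndarray:
    """Reshape a 4-D conv kernel (out, in, kh, kw) to a 2-D matrix so
    code written for fully connected layers can process it."""
    w = weight.detach().cpu().numpy()
    return w.reshape(w.shape[0], -1)    # (out_channels, in*kh*kw)

# Example: a (16, 3, 5, 5) kernel becomes (16, 75); after clustering,
# reshape the result back with .reshape(16, 3, 5, 5).
flat = flatten_conv_weight(torch.randn(16, 3, 5, 5))
```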

When num_workers of the DataLoader is not zero, an error occurs. Some people say it is because multiprocessing on Windows uses spawn instead of fork....
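On Windows, DataLoader workers are started with spawn and re-import the main module, so either guard the entry point as sketched above or disable worker processes. A sketch of the latter workaround; the placeholder dataset is only for illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for the real training set.
train_data = TensorDataset(torch.randn(256, 1, 28, 28),
                           torch.randint(0, 10, (256,)))

# Workaround sketch: num_workers=0 loads batches in the main process,
# which sidesteps the spawn-related error on Windows (at the cost of
# slower data loading).
train_loader = DataLoader(train_data, batch_size=64,
                          shuffle=True, num_workers=0)
```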

@mightydeveloper Hi, thanks for the wonderful code base. I have a few queries: 1. Can we reduce the weight size of the model with this code base? 2. Can...

Hello, mightydeveloper. When I use 'weight_share.py' to compress the trained model, this error occurred: AttributeError: 'ReLU' object has no attribute 'weight'. File "weight_share.py", line 32, in apply_weight_sharing(model) File "/net/quantization.py",...
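The traceback suggests the weight-sharing loop visits every module, including parameter-free ones such as ReLU. A hedged sketch of one way to guard such a loop; the function name and structure are assumptions, not the repository's actual code:

```python
import torch.nn as nn

def apply_weight_sharing_safe(model: nn.Module):
    """Skip modules without a 'weight' parameter (ReLU, Dropout, ...)
    so only layers that actually hold weights get quantized."""
    for name, module in model.named_modules():
        if not hasattr(module, 'weight') or module.weight is None:
            continue
        # ... run the k-means weight sharing on module.weight here ...
        print(f'would quantize {name}: {tuple(module.weight.shape)}')

# Example: the ReLU is skipped, only the Linear layer is reported.
apply_weight_sharing_safe(nn.Sequential(nn.Linear(4, 2), nn.ReLU()))
```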

Hey! Thanks for this implementation! :) Do you have any idea how we can apply Huffman encoding to darknet .weights files?
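The paper's Huffman stage simply entropy-codes the quantized cluster indices, so in principle the same idea applies to any weight array once it has been quantized to a small codebook; darknet's .weights format would still need its own parser. A minimal, framework-agnostic Huffman-coding sketch:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from a sequence
    of discrete symbols (e.g. k-means cluster indices of weights)."""
    freq = Counter(symbols)
    # Each heap entry: (frequency, tie-breaker, [(symbol, code), ...])
    heap = [(f, i, [(s, '')]) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol case
        return {heap[0][2][0][0]: '0'}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Prepend one bit per merge: '0' for the left subtree, '1' for the right.
        merged = [(s, '0' + c) for s, c in left] + [(s, '1' + c) for s, c in right]
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return dict(heap[0][2])

# Example: encode quantized weight indices into a bitstring.
indices = [0, 0, 1, 2, 0, 1, 0, 3]
table = huffman_code(indices)
bits = ''.join(table[i] for i in indices)
```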

I'm trying to apply the whole compression process to LeNet-5 instead of LeNet-300-100. I fixed some problems I encountered, but now, in the quantization step, I can't use sparse matrices...
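One commonly suggested workaround (an assumption, not the repository's own solution) is to reshape each pruned conv kernel to 2-D so the scipy sparse-matrix path written for fully connected weights still applies:

```python
from scipy.sparse import csr_matrix
import torch

# Sketch: turn a pruned conv kernel into a 2-D sparse matrix so the
# weight-sharing code written for dense 2-D weights can still be reused.
conv_weight = torch.randn(6, 1, 5, 5)            # e.g. LeNet-5's first conv
conv_weight[conv_weight.abs() < 0.5] = 0.0       # stand-in for pruning

w2d = conv_weight.numpy().reshape(conv_weight.shape[0], -1)   # (6, 25)
sparse_w = csr_matrix(w2d)        # keeps only the surviving (nonzero) weights
nonzero_values = sparse_w.data    # these are what k-means would cluster
```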

When I use your Deep Compression functions on a MobileNet-V2 model, I run into some problems: 1. I need to run k-means on every layer's weights, but the weights have different dimensions. 2. ...
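Since k-means in the weight-sharing step only looks at the values, one common approach is to flatten each weight tensor to an N×1 array regardless of its dimensionality. A sketch under that assumption (not the repository's actual implementation):

```python
import torch
from sklearn.cluster import KMeans

def shared_weights(weight: torch.Tensor, bits: int = 5) -> torch.Tensor:
    """Cluster a weight tensor of any shape into at most 2**bits shared values."""
    w = weight.detach().cpu().numpy()
    flat = w.reshape(-1, 1)                       # shape-agnostic: N x 1
    mask = flat[:, 0] != 0                        # cluster only surviving weights
    nz = flat[mask]
    if nz.size == 0:
        return weight
    k = min(2 ** bits, nz.shape[0])
    km = KMeans(n_clusters=k, n_init=10).fit(nz)
    flat[mask, 0] = km.cluster_centers_[km.labels_, 0]
    return torch.from_numpy(flat.reshape(w.shape)).to(weight.dtype)

# Works for 2-D linear weights and 4-D conv kernels alike.
quantized = shared_weights(torch.randn(32, 16, 3, 3))
```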

Please tell me what I should do if I get the error KMeans.__init__() got an unexpected keyword argument 'precompute_distances' when I run weight_shared.py.
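Newer scikit-learn releases removed the precompute_distances argument from KMeans, so the keyword itself is the problem. A sketch of the fix: drop the argument (or pin an older scikit-learn version):

```python
import numpy as np
from sklearn.cluster import KMeans

weights = np.random.randn(1000, 1)   # stand-in for flattened layer weights

# Old call that fails on current scikit-learn:
#   KMeans(n_clusters=32, precompute_distances=True).fit(weights)

# Fix sketch: simply drop the removed keyword argument.
kmeans = KMeans(n_clusters=32, n_init=10).fit(weights)
centroids = kmeans.cluster_centers_
```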