sparse_learning
Sparse learning library and sparse momentum resources.
Hi, when I run RN50_FP16_4GPU.sh in sparse_learning/imagenet/tuned_resnet, I get this error:

```
Traceback (most recent call last):
  File "/home/z0132/workspace/codes/sparse_learning/imagenet/tuned_resnet/./main.py", line 811, in <module>
    main()
  File "/home/z0132/workspace/codes/sparse_learning/imagenet/tuned_resnet/./main.py", line 114, in main
    train_net(args,...
```
Hi, I am interested in this work. I want to try this algorithm to accelerate the training procedure of NLP models, so I want to know if I can directly use...
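For reference, hooking the library into an existing training loop looks roughly like the sketch below. The constructor argument names (`prune_rate`, `density`) and the exact `CosineDecay` signature are assumptions based on the repo's README template and may differ in your checkout of `sparselearning/core.py`:

```python
# Rough sketch of wrapping an arbitrary nn.Module (e.g. an NLP model) with the
# library's Masking class; argument names are assumptions from the README.
import torch
from sparselearning.core import Masking, CosineDecay

model = build_model()                     # hypothetical: any nn.Module
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

decay = CosineDecay(0.5, len(train_loader) * num_epochs)
mask = Masking(optimizer, prune_rate=0.5, prune_rate_decay=decay)
mask.add_module(model, density=0.1)       # keep ~10% of the weights

for inputs, targets in train_loader:      # train_loader/num_epochs: hypothetical
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    mask.step()                           # optimizer step + prune/regrow bookkeeping
```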
Command: `python mnist_cifar/main.py --model wrn-28-2 --data cifar --resume models/model.pt --start-epoch 20`. The model is retrained from scratch instead of resuming from where it was interrupted. Why?
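One likely cause worth checking: if the checkpoint only stores the model weights, then `--start-epoch` alone will not restore the optimizer state (momentum buffers, LR schedule) or the mask state, so the run effectively starts over. A minimal sketch of what a full resume usually needs in a PyTorch script; the checkpoint keys here are the conventional ones and are assumptions to verify against what main.py actually saves:

```python
# Minimal resume sketch; keys 'state_dict', 'optimizer', 'epoch' are assumed.
import torch

checkpoint = torch.load('models/model.pt')
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])   # restores momentum buffers
start_epoch = checkpoint['epoch']

for epoch in range(start_epoch, num_epochs):         # continues, not restarts
    train_one_epoch(model, optimizer)                # hypothetical helper
```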
Hi, thank you for your great work. Today, I want to do an ablation experiment on your work. I just modified the `momentum_growth` function, from `y, idx = torch.sort(torch.abs(grad).flatten(), descending=True)`...
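For anyone reproducing this ablation: the quoted line ranks every position by gradient magnitude and takes the head of the sorted order as regrowth candidates, so changing `descending=True` (or the sort key) changes which pruned weights get reactivated. A standalone sketch, with `total_regrowth` as an illustrative stand-in:

```python
# Sketch of the ranking in the quoted line: sort positions by |grad| and
# regrow the top-k. The tensor and total_regrowth are illustrative.
import torch

grad = torch.randn(64, 128)        # stand-in for a layer's gradient
total_regrowth = 100

y, idx = torch.sort(torch.abs(grad).flatten(), descending=True)
regrow_idx = idx[:total_regrowth]  # positions with the largest |grad|

# Flipping descending=False would regrow where |grad| is smallest instead,
# which is one way to test how much the magnitude ranking actually matters.
```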
Hi Tim, thanks for making this library. I am trying to test it on [speech generation models](https://github.com/coqui-ai/TTS/) and I have some questions about your code template: 1. The models come...
Hi, is it possible to use dynamic growth and pruning currently by just updating the masks each step? I'm looking to implement something like [RigL](https://arxiv.org/abs/1911.11134) (Evci et al. 2020).
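A per-layer RigL-style update is not something the library ships as-is, but it is compact enough to sketch: prune by weight magnitude, grow by gradient magnitude, applied to a 0/1 float mask each step (all names here are illustrative, not library API):

```python
# Compressed sketch of a RigL-style update (Evci et al. 2020), assuming
# weight, mask, grad share a shape and mask is a 0/1 float tensor.
import torch

def rigl_update(weight, mask, grad, k):
    """Swap k active connections for k inactive ones, in place."""
    w, m, g = weight.view(-1), mask.view(-1), grad.view(-1)

    # Prune: among active weights, pick the k with smallest |w|.
    score_drop = torch.where(m > 0, w.abs(), torch.full_like(w, float('inf')))
    _, drop_idx = torch.topk(score_drop, k, largest=False)

    # Grow: among inactive weights, pick the k with largest |grad|.
    score_grow = torch.where(m == 0, g.abs(), torch.full_like(g, -1.0))
    _, grow_idx = torch.topk(score_grow, k)

    m[drop_idx] = 0.0
    m[grow_idx] = 1.0
    w[grow_idx] = 0.0            # new connections start from zero
    weight.mul_(mask)            # keep all pruned weights at exactly zero
```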
Hello, I see you've been using boolean masks to mask out the weights of the PyTorch network. Is there a way to use sparse tensors to achieve an actual speed...
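The short answer in code: a boolean/float mask keeps the dense kernels (and dense FLOPs), while `torch.sparse` actually skips zeros but pays format overhead, so real wall-clock speedups typically only appear at very high sparsity or with specialized kernels. A sketch contrasting the two representations:

```python
# Masked-dense still multiplies every entry (zeros included); a sparse tensor
# only touches stored entries but carries its own indexing overhead.
import torch

dense_w = torch.randn(1024, 1024)
mask = (torch.rand_like(dense_w) < 0.05).float()   # ~5% density
x = torch.randn(1024, 256)

# 1) Float mask: dense kernels, identical FLOPs to a fully dense layer.
y_masked = (dense_w * mask) @ x

# 2) True sparse tensor: sparse kernels over the stored entries only.
sparse_w = (dense_w * mask).to_sparse()            # COO format
y_sparse = torch.sparse.mm(sparse_w, x)

assert torch.allclose(y_masked, y_sparse, atol=1e-4)
```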
Hi Tim, First, thank you for your code. I notice that you change the default learning rate for ImageNet in multi-GPU runs by multiplying 0.1 by the number of GPUs...
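For context, this matches the linear scaling rule for large-batch training (Goyal et al., 2017): with the per-GPU batch size fixed, the effective batch grows with the GPU count, so the learning rate is scaled to match. In code, using the usual single-GPU ImageNet default of 0.1:

```python
# Linear scaling rule; base_lr = 0.1 is the common single-GPU ImageNet default.
n_gpus = 4
lr = 0.1 * n_gpus   # 0.4 for a 4-GPU run
```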
This is a great paper, full of information and ideas. Thank you for this amazing work. While reading, I came across this line: "we want to look at the momentum...
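For readers following along: the quantity referred to is the smoothed gradient (momentum buffer) that SGD keeps for each parameter; its mean magnitude per layer is what sparse momentum uses to judge where weights are most needed. A small sketch of how to inspect it, assuming SGD with momentum and at least one completed step:

```python
# Inspect the per-layer mean momentum magnitude from SGD's optimizer state.
import torch

model = torch.nn.Linear(10, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.step()                           # populates the momentum buffers

for name, p in model.named_parameters():
    buf = optimizer.state[p]['momentum_buffer']
    print(name, buf.abs().mean().item())   # per-layer mean momentum magnitude
```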
Hi, I am trying to run the MNIST code, but I am not sure which pruning rate and death rate values to use. So when I ran the...
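As a rough guide to what these knobs mean: the death (prune) rate is the fraction of currently active weights removed at each prune step, before the same number are regrown elsewhere. A value of 0.5 is a common default in this family of methods, though whether it matches main.py's default is an assumption to verify. A magnitude-death sketch:

```python
# Sketch of a magnitude-based death step controlled by death_rate; the value
# 0.5 and the tensor shapes are illustrative only.
import torch

death_rate = 0.5
weight = torch.randn(300, 100)
mask = torch.ones_like(weight)

n_active = int(mask.sum().item())
n_prune = int(death_rate * n_active)

# Zero out the n_prune active weights with the smallest magnitude.
score = torch.where(mask.view(-1) > 0, weight.abs().view(-1),
                    torch.full((weight.numel(),), float('inf')))
_, prune_idx = torch.topk(score, n_prune, largest=False)
mask.view(-1)[prune_idx] = 0.0
weight.mul_(mask)   # a regrowth step would then re-enable n_prune positions
```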