Lottery-Ticket-Hypothesis-in-Pytorch
This repository contains a Pytorch implementation of the paper "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Jonathan Frankle and Michael Carbin that can be easily adapted...
Bumps [numpy](https://github.com/numpy/numpy) from 1.17.2 to 1.22.0. Release notes (sourced from numpy's releases, v1.22.0): NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...
Can you give a hint on the expected runtime or parameter settings? I am trying to prune a VGG16 with the CIFAR10 dataset using the command below. I have started...
Bumps [pillow](https://github.com/python-pillow/Pillow) from 6.2.0 to 9.0.1. Release notes (sourced from pillow's releases, 9.0.1: https://pillow.readthedocs.io/en/stable/releasenotes/9.0.1.html): In `show_file`, use `os.remove` to remove temporary images (CVE-2022-24303, #6010, @radarhere, @hugovk). Restrict builtins within...
Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 3.9.2 to 3.15.0. Release notes (sourced from protobuf's releases, v3.15.0): Protocol Compiler: optional fields for proto3 are enabled by default and no longer require the...
The `transform` used while loading every dataset (line 37, main.py) uses MNIST's mean and standard deviation. The correct values for normalizing CIFAR-10 and the other datasets are different.
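For illustration, the per-dataset statistics differ; the values below are the widely quoted approximations, not taken from the repo:

```python
from torchvision import transforms

# MNIST statistics -- what line 37 of main.py currently applies to every dataset
mnist_normalize = transforms.Normalize((0.1307,), (0.3081,))

# commonly quoted per-channel statistics for CIFAR-10 (approximate values)
cifar10_normalize = transforms.Normalize((0.4914, 0.4822, 0.4465),
                                         (0.2470, 0.2435, 0.2616))
```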
In `main.py`, lines 257-262, the author uses the following code to freeze the pruned weights:

```python
for name, p in model.named_parameters():
    if 'weight' in name:
        tensor = p.data.cpu().numpy()
        grad_tensor = p.grad.data.cpu().numpy()
        # zero the gradients of pruned weights
        grad_tensor = np.where(tensor < EPS, 0, grad_tensor)
        ...
```
I just installed the dependencies and tried to run the example
`python3 main.py --prune_type=lt --arch_type=fc1 --dataset=mnist --prune_percent=10 --prune_iterations=35`
and got the following error:
```
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ../data/MNIST/raw/train-images-idx3-ubyte.gz
Traceback (most...
```
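If the cause is the retired yann.lecun.com host rejecting downloads (a common failure mode for this dataset), one workaround is to upgrade torchvision, since newer releases fetch MNIST from mirror hosts; a minimal check, assuming the repo's `../data` layout:

```python
# sketch: trigger the MNIST download with a recent torchvision, which uses mirrors
from torchvision import datasets

datasets.MNIST('../data', train=True, download=True)
```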
In main.py:
```python
# Freezing Pruned weights by making their gradients Zero
grad_tensor = np.where(tensor < EPS, 0, grad_tensor)
```
Does this also freeze the weights that have negative values? More than just...
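A minimal sketch of the commonly proposed fix, masking on the weight's magnitude rather than its signed value so that surviving negative weights keep their gradients (the `EPS` value and the toy arrays are assumptions for illustration):

```python
import numpy as np

EPS = 1e-6  # small positive threshold (assumed value)

tensor = np.array([-0.5, 0.0, 0.3])       # weights: one pruned (0.0), two alive
grad_tensor = np.array([0.1, 0.2, 0.3])   # their incoming gradients

# buggy mask: `tensor < EPS` also zeroes the gradient of the alive negative weight
buggy = np.where(tensor < EPS, 0, grad_tensor)           # -> [0. , 0. , 0.3]

# fixed mask: compare the magnitude, so only the truly pruned weight is frozen
fixed = np.where(np.abs(tensor) < EPS, 0, grad_tensor)   # -> [0.1, 0. , 0.3]
```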
The `prune_by_percentile` function defined in `main.py` uses layer-wise pruning for all models, while the original LTH paper finds that global pruning works better for larger convolutional models...
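For reference, a minimal sketch of what a global variant could look like, computing one percentile threshold across all weight tensors instead of per layer (the helper name and `percent` convention are assumptions, not the repo's API):

```python
import numpy as np

def global_percentile_threshold(model, percent):
    """Hypothetical helper: one pruning threshold over all weight tensors."""
    all_weights = np.concatenate([
        p.data.cpu().numpy().ravel()
        for name, p in model.named_parameters()
        if 'weight' in name
    ])
    alive = all_weights[all_weights != 0]  # ignore already-pruned entries
    # prune the `percent`% of surviving weights with smallest magnitude
    return np.percentile(np.abs(alive), percent)
```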