sparseml
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
**Is your feature request related to a problem? Please describe.** It would be a good idea to mention here https://github.com/neuralmagic/sparseml/blob/main/README.md#:~:text=Instead%20of%20training,manager.apply(model) that one-shot pruning runs with pruners that use gradients require...
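For reference, the linked README flow boils down to the following. This is a minimal sketch: the recipe path and the torchvision model are placeholders, not from the original request.

```python
# Minimal sketch of the README's one-shot flow; "recipe.yaml" and the
# torchvision model are placeholders, not from the original issue.
import torchvision.models as models
from sparseml.pytorch.optim import ScheduledModifierManager

model = models.resnet50(pretrained=True)

# Instead of training with the manager, apply the recipe in one shot.
# Gradient-based pruners (the subject of this request) need loss
# gradients collected beforehand, which a plain apply() does not do.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
manager.apply(model)
```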
**Describe the question** Why do https://github.com/neuralmagic/sparseml/blob/a6477f900b55afd555734fe6cf784e0137d815a5/src/sparseml/pytorch/sparsification/modifier.py#L525 and https://github.com/neuralmagic/sparseml/blob/a6477f900b55afd555734fe6cf784e0137d815a5/src/sparseml/pytorch/sparsification/modifier.py#L548 accept `steps_per_epoch`, given that it is an unused parameter in these methods?
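One plausible explanation (an assumption, not confirmed by the source) is that the signature is kept uniform so that subclasses which do need `steps_per_epoch` can override the method without changing the interface. A hypothetical illustration, not the actual sparseml classes:

```python
# Hypothetical illustration of keeping a uniform signature: the base
# method ignores steps_per_epoch, while a subclass uses it. All names
# here are illustrative, not the real sparseml API.
class BaseModifier:
    def update(self, module, optimizer, epoch: float, steps_per_epoch: int):
        # steps_per_epoch is unused here; it is kept so every override
        # shares the same signature and callers need no special cases
        ...

class StepAwareModifier(BaseModifier):
    def update(self, module, optimizer, epoch: float, steps_per_epoch: int):
        # converts the fractional epoch into a concrete step index
        current_step = int(epoch * steps_per_epoch)
        ...
```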
Changes:
- Moved `DeepSparseModelRunner` and `DeepSparseAnalyzeModelRunner` to `sparseml.deepsparse.utils.models`
- Moved `pruning_loss_sens_one_shot` and `pruning_perf_sens_one_shot` to `sparseml.deepsparse.optim.sensitivity_pruning`
- Moved the tests covering the above into the corresponding packages
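After this move, imports would come from the new locations named above. A sketch, assuming the modules re-export the same names:

```python
# Sketch of the post-move import paths listed in this PR description;
# assumes the names are re-exported unchanged from the new modules.
from sparseml.deepsparse.utils.models import DeepSparseModelRunner
from sparseml.deepsparse.optim.sensitivity_pruning import (
    pruning_loss_sens_one_shot,
    pruning_perf_sens_one_shot,
)
```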
**Describe the bug** Fails to convert the model to ONNX.
**Expected behavior** The model is converted to ONNX.
**Environment** Include all relevant environment information:
1. OS [e.g. Ubuntu 18.04]: `fedora 33`
2. ...
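For context, a typical SparseML export call that can surface this failure looks roughly like the following. This is a sketch: the model and input shape are placeholders, not the reporter's actual setup.

```python
# Sketch of a standard SparseML ONNX export; the model and sample input
# are placeholders, not the reporter's actual configuration.
import torch
import torchvision.models as models
from sparseml.pytorch.utils import ModuleExporter

model = models.resnet18(pretrained=True)
exporter = ModuleExporter(model, output_dir="onnx-export")
exporter.export_onnx(sample_batch=torch.randn(1, 3, 224, 224))
```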
This PR adds base functionality for `sparseml.recipe_template`.

TODO:
- [ ] More extensive tests
- [ ] Change branch to `sparseml.recipe_template.dev` before landing
Feature branch for supporting new versions of torch. TODOs that will happen in other PRs:
- [x] Make 3.10/1.12 default in GH workflows
- [x] Add 3.9/1.9 backwards-compatibility GH...
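A minimal sketch (an assumption, not code from this branch) of the kind of gating a 3.9/1.9 compatibility layer typically relies on: check the installed torch version once and branch on it.

```python
# Sketch of version gating for backwards compatibility; the cutoff and
# the branch contents are illustrative assumptions, not from this PR.
import torch
from packaging import version

TORCH_GE_1_12 = version.parse(torch.__version__) >= version.parse("1.12.0")

if TORCH_GE_1_12:
    ...  # use APIs introduced in torch 1.12
else:
    ...  # fall back to the torch 1.9 code path
```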
Modified the ImageNet training script to accept the OBS pruning modifier, added W&B integration, and enabled taking original torchvision models as input. Now one can pass either `OBSPruningModifier` or `MFACPruningModifier` to the ImageNet training...
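Such a modifier would enter the training loop through the usual recipe/manager path. A rough sketch, where the recipe file, model choice, and hyperparameters are illustrative assumptions rather than the script's actual values:

```python
# Rough sketch of wiring a pruning recipe into a training loop; the
# recipe file and hyperparameters are illustrative, not from this PR.
import torch
import torchvision.models as models
from sparseml.pytorch.optim import ScheduledModifierManager

model = models.resnet50(pretrained=True)  # original torchvision model as input
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
steps_per_epoch = 1000  # placeholder; normally len(train_loader)

manager = ScheduledModifierManager.from_yaml("obs_pruning_recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=steps_per_epoch)

# ... standard ImageNet training loop using the wrapped optimizer ...

manager.finalize(model)
```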