Structured-Bayesian-Pruning-pytorch
Pruning AlexNet
Hi, thank you for your great work!
I know it's a bit of a long shot, but I was wondering if you had any insights on a strange problem I came across when pruning AlexNet.
Specifically, I'm trying to use this code to prune AlexNet. I've tried a variety of learning rates, but invariably the following happens: the training and testing accuracy increase while the SNR drops towards 1. However, the layerwise sparsity remains 0 across all layers as long as SNR > 1. Then, immediately after SNR falls below 1, the testing accuracy plummets to around ~1% and does not recover, even though the training accuracy remains high.
I was wondering if you had any insights on why this may be happening. I'm waiting until the layerwise sparsity exceeds 0.0 so I can see some pruning, but this comes with a huge, sudden accuracy loss. Am I using the wrong stopping criterion, learning rate, etc.? Any insights into what could be going wrong would be deeply appreciated!
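To make the failure mode concrete, here is a minimal sketch of how layerwise sparsity is typically derived from per-neuron SNR in SBP-style pruning — assuming sparsity is measured as the fraction of neurons whose SNR falls below a threshold of 1. The names (`layerwise_sparsity`, `snr_values`, `threshold`) are illustrative, not this repo's actual API.

```python
def layerwise_sparsity(snr_values, threshold=1.0):
    """Fraction of neurons whose signal-to-noise ratio falls below the
    pruning threshold; those neurons are treated as removed.
    Hypothetical helper for illustration, not the repo's API."""
    if not snr_values:
        return 0.0
    pruned = sum(1 for snr in snr_values if snr < threshold)
    return pruned / len(snr_values)

# While every neuron still has SNR > 1, sparsity stays at exactly 0.
# The moment many neurons cross the threshold together, a large chunk
# of the layer is pruned at once -- consistent with the sudden accuracy
# collapse described above.
print(layerwise_sparsity([2.3, 1.7, 1.1, 0.9]))  # -> 0.25
```

Under this definition, sparsity is a step function of the threshold crossing, so a smooth decline in SNR can still produce an abrupt jump in how much of the network is cut away.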
I think I need more information to understand what is happening. What is the dataset you are using? Where do you place SBP layers?
Usually, when you work with small datasets, removing some neurons won't hurt performance much. But if you are using ImageNet-scale datasets, it will cause a larger performance loss.