yolov5_prune
When pruning at 20%, one layer's channels all become zero. Is this a problem?

When the sparsity rate was 0.00001, the mAP showed a stable trend, but due to time constraints I only ran sparse training for 20 epochs. Do you think that is a problem?
You should run sparse training long enough that many BN coefficients tend to 0; then this problem will not occur.
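For reference, network-slimming-style sparse training typically adds an L1 subgradient to the BN scale factors (gamma) after the backward pass, which is what drives those coefficients toward 0. A minimal sketch, assuming a PyTorch model; the function name and `sr` parameter are illustrative, not this repo's exact flags:

```python
import torch
import torch.nn as nn

def add_bn_l1_subgradient(model: nn.Module, sr: float = 1e-4) -> None:
    """Add the L1 subgradient sr * sign(gamma) to each BN scale factor's
    gradient. Call after loss.backward() and before optimizer.step()."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.data.add_(sr * torch.sign(m.weight.data))
```

The larger `sr` is and the longer you train, the more gamma values are pushed to 0, which is the distribution you want to see before pruning.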
@uyzhang
- What is the criterion for "sufficient"?
- And although the mAP goes up steadily, could L1 regularization still reverse the current sparsity trend? Thank you for your reply.
- 1. The distribution should look like the following figure.
- 2. I don't quite understand what this question is asking.
@uyzhang
- I'm sorry, but where can I see that figure? Is it saved every epoch?
- During sparse training, my mAP rose stably from 0.5 to 0.65 over 20 epochs. Even if I increase the number of epochs, the mAP saturates at its current value, so wouldn't the BN sparsity be much the same as at 20 epochs?
- As you can see in your TensorBoard, this figure shows the distribution of the BN coefficients.
- Strangely, when I carry out sparse training, the mAP drops.
- I suspect your sparsity coefficient is too small to achieve the effect of sparse training. If sparse training is successful, you can see that most of the BN coefficients in the distribution plot tend to 0, and the longer you train, the more coefficients tend to 0.
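To check this without waiting on TensorBoard, you can inspect the BN scale factors directly and measure what fraction are near zero. A small sketch (the `eps` cutoff of 1e-2 is an assumption, pick what suits your model):

```python
import torch
import torch.nn as nn

def bn_gamma_stats(model: nn.Module, eps: float = 1e-2):
    """Collect all BN scale factors (gamma) and report the fraction whose
    magnitude is below eps. A successful sparse run drives this fraction up."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    frac_near_zero = (gammas < eps).float().mean().item()
    return gammas, frac_near_zero
```

Logging `gammas` each epoch with `SummaryWriter.add_histogram` reproduces the TensorBoard distribution plot mentioned above.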
@uyzhang Thank you for your quick answer. I'll give it a try and report back. In addition, in this repo the BN bias is also L1-regularized, and with a 10x factor... Is there a reason the original paper only regularizes the scaling factor?
It's just that it works better.
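For clarity, the variant being discussed (penalizing the BN bias as well, with a larger coefficient) would look roughly like the sketch below. The 10x factor comes from the question above; the function name and exact placement are assumptions, not verified against this repo's code:

```python
import torch
import torch.nn as nn

def add_bn_l1_with_bias(model: nn.Module, sr: float = 1e-4,
                        bias_mult: float = 10.0) -> None:
    """L1 subgradient on both the BN scale (gamma) and bias (beta); the bias
    uses a larger coefficient (bias_mult), as described in the question above.
    Call after loss.backward() and before optimizer.step()."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.data.add_(sr * torch.sign(m.weight.data))
            m.bias.grad.data.add_(bias_mult * sr * torch.sign(m.bias.data))
```

Pushing the bias toward 0 as well means a pruned channel (gamma near 0) also contributes a near-zero constant output, which makes removing it closer to lossless.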