
question about the sparsity_target

Open albertszg opened this issue 3 years ago • 2 comments

Hello, this is brilliant work. I want to use the binary Gumbel-softmax in my own work, but I have run into some problems. I applied the soft mask to the first layer only (i.e. I multiplied the generated mask into the features after the first layer), and I observed a strange phenomenon: the Gumbel noise seemed to influence the training process too much. When I plotted the sparsity loss on its own, I found that I usually could not reach the sparsity target I had set. Is this process correct?

[screenshot: sparsity loss curve at temp=5.0; temp=1.0 to follow later]
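For reference, a minimal sketch of the setup described above, masking features with a soft binary Gumbel-softmax sample. This is not the dynconv implementation; the function name and the convention of a single "keep" logit per position (with an implicit zero "drop" logit) are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def binary_gumbel_mask(logits: torch.Tensor, temperature: float = 0.66) -> torch.Tensor:
    """Soft binary mask via the Gumbel-softmax (concrete) relaxation.

    `logits` holds one unnormalized log-probability per spatial position
    for the "keep" decision; the complementary "drop" logit is fixed at 0.
    """
    # Stack keep/drop logits into a 2-class problem and draw a soft sample.
    two_class = torch.stack([logits, torch.zeros_like(logits)], dim=-1)
    soft = F.gumbel_softmax(two_class, tau=temperature, hard=False, dim=-1)
    return soft[..., 0]  # soft mask with values in (0, 1)

# Usage: mask features of shape (N, C, H, W) with per-position decisions.
x = torch.randn(2, 8, 4, 4)
logits = torch.randn(2, 1, 4, 4, requires_grad=True)
mask = binary_gumbel_mask(logits)
y = x * mask          # mask broadcasts over the channel dimension
y.sum().backward()    # gradients flow back into the mask logits
```

Because the sample is soft, the mask multiplication stays differentiable, which is what makes the sparsity loss trainable at all; a high temperature like 5.0 makes the samples very noisy and close to 0.5 everywhere, which may explain the behaviour described above.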

albertszg avatar Dec 06 '21 07:12 albertszg

Some things that could help with convergence:

  • one of the Gumbel-softmax papers shows that, in the binary case, the temperature should be less than or equal to 1 for convergence, e.g. 0.66
  • lowering the learning rate (or using a separate, lower learning rate for the decision layers) might help
  • disabling weight decay on the decision layers might help
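The last two suggestions can be combined with PyTorch's per-parameter-group optimizer options. A minimal sketch, where the split into `backbone` and `decision` modules and all hyperparameter values are illustrative, not taken from dynconv:

```python
import torch
import torch.nn as nn

# Hypothetical split: a backbone plus a small "decision" head that
# produces the mask logits (names and shapes are illustrative).
backbone = nn.Conv2d(8, 8, kernel_size=3, padding=1)
decision = nn.Conv2d(8, 1, kernel_size=1)

optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 0.1, "weight_decay": 1e-4},
        # Lower learning rate and no weight decay for the decision layers.
        {"params": decision.parameters(), "lr": 0.01, "weight_decay": 0.0},
    ],
    momentum=0.9,
)
```

Each dict overrides the defaults for its own group, so the decision layers train more gently without touching the backbone schedule.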

I'm not sure what your exact setup is, but make sure the implementation is correct so that gradients can backpropagate through the mask.

thomasverelst avatar Dec 14 '21 09:12 thomasverelst

Thanks for your advice, I'll try these. I also found that adding a BatchNorm layer in the squeeze function works better.
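One possible reading of "squeeze function" is the small head that maps features to mask logits; normalizing its input can stabilize the logit scale. A sketch under that assumption (the class and layer names are hypothetical, not from dynconv):

```python
import torch
import torch.nn as nn

class SqueezeWithBN(nn.Module):
    """Hypothetical mask-logit head: BatchNorm before a 1x1 conv."""

    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)   # normalizes the input features
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)  # one logit per position

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(self.bn(x))

# Usage: produce per-position mask logits from an (N, C, H, W) feature map.
head = SqueezeWithBN(8)
logits = head(torch.randn(2, 8, 4, 4))
```

With normalized inputs the logits start near zero, so the initial masks sit near 0.5 rather than being saturated, which may be why convergence improves.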

albertszg avatar Dec 14 '21 10:12 albertszg