
[scripts] implement max-change within customized SGD optimizer

Open · aadps opened this pull request 5 years ago · 22 comments

Needs further tests and reviews. The total change per minibatch is logged; it should be very easy to add to a TensorBoard plot at a later point.

aadps avatar Apr 08 '20 10:04 aadps

... and how about the results? Does it actually perform better than Adam?

danpovey avatar Apr 08 '20 10:04 danpovey

Want to double-check several things. The max_change and max_change_per_layer we are trying to implement are the norms of the proposed tensor delta. But in the case of SGD, we first get the norm of the gradient, and the gradient and the tensor delta differ by a factor of the learning rate?

So for individual layers, it should be like: if norm * group['lr'] > max_change_per_layer: d_p.mul_(max_change_per_layer / norm / group['lr']) ?

Then, when computing the norm for the entire model, should we use norms of individual layers before or after the adjustment of max_change_per_layer?

Lastly, if the max_change constraint works as intended, we no longer need to apply PyTorch gradient clipping?

aadps avatar Apr 10 '20 03:04 aadps

Want to double-check several things. The max_change and max_change_per_layer we are trying to implement are the norms of the proposed tensor delta. But in the case of SGD, we first get the norm of the gradient, and the gradient and the tensor delta differ by a factor of the learning rate?

Yes.

So for individual layers, it should be like: if norm * group['lr'] > max_change_per_layer: d_p.mul_(max_change_per_layer / norm / group['lr']) ?

Sounds right, although you mean a / (b * c), not a / b / c.

Then, when computing the norm for the entire model, should we use norms of individual layers before or after the adjustment of max_change_per_layer?

After.

Lastly, if the max_change constraint works as intended, we no longer need to apply PyTorch gradient clipping?

Likely, yes. But it's still worthwhile comparing whether there is any advantage in doing it with max-change.
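Putting these answers together, here is a minimal pure-Python sketch of the scheme (hypothetical function and names for illustration only; the actual SgdMaxChange optimizer in this PR works on PyTorch tensors inside SGD's step()). The key points are that the constraint applies to lr * ||grad||, the per-layer limit is applied first, and the global norm uses the per-layer norms after that adjustment:

```python
import math

def apply_max_change(grads, lr, max_change_per_layer, max_change):
    """Scale per-layer updates, then the whole-model update, so neither
    exceeds its max-change limit. `grads` is a list of flat lists of
    gradient values, one per layer; returns the scaled gradients."""
    # Per-layer constraint: the proposed delta for a layer is lr * grad,
    # so its norm is lr * ||grad||; scale the gradient down if too large.
    for g in grads:
        norm = math.sqrt(sum(x * x for x in g))
        if norm * lr > max_change_per_layer:
            factor = max_change_per_layer / (norm * lr)
            for i in range(len(g)):
                g[i] *= factor
    # Global constraint: computed from per-layer norms *after* the
    # per-layer adjustment above, as discussed in this thread.
    total_norm = math.sqrt(sum(sum(x * x for x in g) for g in grads))
    if total_norm * lr > max_change:
        factor = max_change / (total_norm * lr)
        for g in grads:
            for i in range(len(g)):
                g[i] *= factor
    return grads
```

With both constraints active, the resulting update norm never exceeds min(max_change, sum of per-layer limits), which is why separate gradient clipping becomes redundant.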


danpovey avatar Apr 10 '20 04:04 danpovey

Some initial results (I am still working on the SgdMaxChange implementation):

Adam global average objf: [plot: adamobjf]

SgdMaxChange global average objf: [plot: mcobjf]

SgdMaxChange change for the whole model: [plot: mcchange]

What other quantities would you like to see and compare?

aadps avatar Apr 11 '20 03:04 aadps

although you mean a / (b * c), not a / b / c.

For this one I wasn't sure. norm is the norm of d_p (the gradient adjusted by weight_decay, momentum, etc.), so norm * group['lr'] should be the norm of the proposed change to the matrix?

If it is greater than the max_change, we should limit it by multiplying by max_change / (norm * group['lr']) or max_change / norm / group['lr'], which would be a factor less than 1?

aadps avatar Apr 11 '20 03:04 aadps

Oh yes, max_change / (norm * group['lr']). I always avoid a / b / c if not using parentheses, because not everyone remembers the associativity of '/'.
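As a concrete sanity check of that factor (made-up numbers, not from the experiment): with norm = 10 and lr = 0.01 the proposed delta has norm 0.1, so with max_change = 0.05 the factor is 0.05 / (10 * 0.01) = 0.5, which rescales the update to exactly the limit:

```python
norm = 10.0   # ||d_p||, a made-up value for illustration
lr = 0.01     # group['lr']
max_change = 0.05

proposed = norm * lr                 # norm of the proposed delta: 0.1
factor = max_change / (norm * lr)    # 0.5 -- strictly below 1 whenever proposed > max_change

assert proposed > max_change
assert abs(proposed * factor - max_change) < 1e-12
```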


danpovey avatar Apr 11 '20 05:04 danpovey

My bad, I just went for a quick fix but this is indeed poor coding style.

aadps avatar Apr 11 '20 06:04 aadps

Let us know the effect on WER. You can't always predict the effect on WER just from the objective values.


danpovey avatar Apr 11 '20 07:04 danpovey

Adam:

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_cer <==
%WER 7.15 [ 7491 / 104765, 178 ins, 465 del, 6848 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/cer_10_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_wer <==
%WER 15.47 [ 9968 / 64428, 918 ins, 1511 del, 7539 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/wer_12_0.0

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_cer <==
%WER 6.06 [ 12439 / 205341, 321 ins, 591 del, 11527 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/cer_10_0.0

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_wer <==
%WER 13.79 [ 17608 / 127698, 1454 ins, 2772 del, 13382 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/wer_11_0.0

SgdMaxChange:

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_cer <==
%WER 7.36 [ 7715 / 104765, 187 ins, 474 del, 7054 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/cer_10_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_wer <==
%WER 15.83 [ 10202 / 64428, 804 ins, 1685 del, 7713 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/wer_11_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_cer <==
%WER 6.29 [ 12908 / 205341, 296 ins, 555 del, 12057 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/cer_9_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_wer <==
%WER 14.13 [ 18048 / 127698, 1583 ins, 2644 del, 13821 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/wer_10_0.0

aadps avatar Apr 11 '20 08:04 aadps

What are the learning rate schedules?


danpovey avatar Apr 11 '20 08:04 danpovey

Learning rate schedule is 1e-3 * pow(0.4, epoch). Btw, I have updated my commit.
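Spelled out as a quick check (assuming epochs count from 0), the schedule works out to:

```python
def lr_at(epoch, base=1e-3, decay=0.4):
    # lr = 1e-3 * pow(0.4, epoch), as used in the run above
    return base * decay ** epoch

# Epochs 0..3 give 1e-3, 4e-4, 1.6e-4, 6.4e-5
rates = [lr_at(e) for e in range(4)]
```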

aadps avatar Apr 11 '20 08:04 aadps

Try with double the learning rate.

danpovey avatar Apr 11 '20 09:04 danpovey

Double the learning rate: [plot: lr]

[plot: change]

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_cer <==
%WER 7.33 [ 7676 / 104765, 189 ins, 447 del, 7040 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/cer_10_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_wer <==
%WER 15.79 [ 10172 / 64428, 947 ins, 1492 del, 7733 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/wer_12_0.0

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_cer <==
%WER 6.18 [ 12700 / 205341, 285 ins, 519 del, 11896 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/cer_9_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_wer <==
%WER 14.03 [ 17917 / 127698, 1600 ins, 2626 del, 13691 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/wer_10_0.0

aadps avatar Apr 12 '20 11:04 aadps

OK. It looks like right now this isn't giving us an improvement over Adam: let's merge the code, but please change the top-level script so it still uses Adam, as I don't want to regress the results. At some point we need to come up with a mechanism to run different-versioned experiments; but for now the way it is is OK, I think.

danpovey avatar Apr 12 '20 11:04 danpovey

Top-level script reverted to Adam.

aadps avatar Apr 13 '20 11:04 aadps

Thanks!! @songmeixu do you want to go through this? Or should I just merge?

danpovey avatar Apr 13 '20 12:04 danpovey

Thanks!! @songmeixu do you want to go through this? Or should I just merge?

Please give me two days to go through this. I am doing it now. Thanks @aadps for waiting!

megazone87 avatar Apr 14 '20 09:04 megazone87

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Jun 19 '20 06:06 stale[bot]

This issue has been automatically closed by a bot strictly because of inactivity. This does not mean that we think that this issue is not important! If you believe it has been closed hastily, add a comment to the issue and mention @kkm000, and I'll gladly reopen it.

stale[bot] avatar Jul 19 '20 06:07 stale[bot]

This issue has been automatically marked as stale by a bot solely because it has not had recent activity. Please add any comment (simply 'ping' is enough) to prevent the issue from being closed for 60 more days if you believe it should be kept open.

stale[bot] avatar Sep 17 '20 10:09 stale[bot]

@songmeixu ?

jtrmal avatar Aug 16 '22 14:08 jtrmal

This issue has been automatically marked as stale by a bot solely because it has not had recent activity. Please add any comment (simply 'ping' is enough) to prevent the issue from being closed for 60 more days if you believe it should be kept open.

stale[bot] avatar Oct 15 '22 17:10 stale[bot]