
What is the rough order of magnitude of the loss during pretraining?

kssmmm opened this issue 10 months ago · 3 comments

I pretrained the model on LibriSpeech 960h and got a loss of about 0.2. However, when I used that checkpoint to fine-tune on LibriSpeech 100h, I got a dev WER of about 100. Did I make a mistake during the pretraining phase or the fine-tuning phase?

kssmmm avatar Apr 10 '24 08:04 kssmmm

Hi, your training loss seems too low; it should be ~1.4 after training for 200k steps and ~1.1 after 400k steps. A very low loss in self-distillation usually means the teacher model has collapsed (constant output regardless of input) and training has degenerated into a trivial task.
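In case it helps with debugging: here is a minimal sketch (not code from this repo) of how one could spot that kind of collapse, by measuring the entropy of the discretized teacher targets over a batch. The function name and the one-hot target layout are assumptions for illustration, not DinoSR's actual API.

```python
import torch

def codebook_usage_entropy(onehot_targets: torch.Tensor) -> float:
    """Entropy (in bits) of the code distribution over a batch of
    discretized teacher targets shaped (num_frames, codebook_size).
    Near-zero entropy means almost every frame maps to the same code,
    i.e. the teacher is producing a constant output."""
    counts = onehot_targets.float().sum(dim=0)        # per-code frequency
    probs = counts / counts.sum().clamp(min=1e-8)     # normalize to a distribution
    entropy = -(probs * (probs + 1e-8).log2()).sum()  # Shannon entropy in bits
    return entropy.item()

# Example: 1000 frames, 256 codes, spread-out vs. collapsed targets
healthy = torch.nn.functional.one_hot(torch.randint(0, 256, (1000,)), 256)
collapsed = torch.nn.functional.one_hot(torch.zeros(1000, dtype=torch.long), 256)
print(codebook_usage_entropy(healthy))    # close to 8 bits (log2(256))
print(codebook_usage_entropy(collapsed))  # close to 0 bits
```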

Alexander-H-Liu avatar Apr 10 '24 23:04 Alexander-H-Liu


Previously, I had changed the precision in the config file from fp16 to bf16 and lowered the max token value from 3.8 million to 2.4 million. I have now reverted both changes, and the pretraining loss is consistent with what you described. I didn't expect these two settings to have such a significant impact.
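For reference, a quick sanity check I could have run before launching, to confirm the two fields match the default recipe. The key names (common.fp16, dataset.max_tokens) and the config path are just my guess at a fairseq-style layout, not the actual dinosr config schema.

```python
# Rough sketch: verify the precision and max_tokens settings before pretraining.
import yaml

with open("config/dinosr_base.yaml") as f:   # hypothetical path
    cfg = yaml.safe_load(f)

assert cfg["common"].get("fp16", False), "expected fp16 precision per the default recipe"
assert cfg["dataset"]["max_tokens"] == 3_800_000, "expected the default 3.8M max_tokens"
```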

kssmmm avatar Apr 11 '24 11:04 kssmmm

Hi, I ran into a similar issue with a very low loss and cluster collapse. Apart from the batch size (4), I haven't changed anything in the base configuration, and it also happened with the default batch size. What can I do to prevent it?

hadas avatar Apr 12 '24 20:04 hadas