Is a large cross-entropy loss in Stage 2 normal?
I find that the cross_entropy_loss is large in Stage 2 training, somewhere between 1e-1 and 1e0. From a numerical perspective, this seems to suggest that codebook searching is barely being optimized.
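For scale, here is a minimal sketch of what chance-level prediction looks like for this loss, assuming the default 1024-entry codebook from the CodeFormer config (adjust if yours differs):

```python
import math
import torch
import torch.nn.functional as F

# Sanity check: what does a given cross_entropy_loss value mean for
# codebook-index prediction? Assumes a 1024-entry codebook.
codebook_size = 1024
batch, tokens = 4, 256  # hypothetical: 4 images, 16x16 code tokens each

# Uniform (all-zero) logits = pure guessing over the codebook.
logits = torch.zeros(batch * tokens, codebook_size)
targets = torch.randint(0, codebook_size, (batch * tokens,))

chance_ce = F.cross_entropy(logits, targets).item()
print(f"chance-level CE   = {chance_ce:.2f}")                 # = ln(1024) ≈ 6.93
print(f"ln(codebook_size) = {math.log(codebook_size):.2f}")

# A cross_entropy_loss around 2, or in the 1e-1 ~ 1e0 range, is well
# below this ~6.93 chance level, even if the curve looks flat later on.
```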
How many steps did you train for? I trained for 80k steps and the cross_entropy_loss has stayed around 2; it doesn't seem to have decreased at all from the beginning until now.
I ran 200k steps and got very strange results, but it does improve the codebook-searching ability. Maybe it's only this loss that isn't working...
In Stage 1 and Stage 2, do you use the pre-trained models from https://github.com/sczhou/CodeFormer/releases/tag/v0.1.0?
Hi, I'm hitting the same problem. Did you ever solve it?
Even though it is now 9/17/2023, I still think the Stage 2 losses are set up incorrectly. The lq_feat in the training model comes from the encoder, but the ground truth comes from quantize.get_codebook_feat, i.e., the goal is to align the encoder feature with the quantized feature. That seems very strange to me.
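To make the point concrete, here is a minimal sketch of the two Stage 2 terms as I understand them; only lq_feat and quantize.get_codebook_feat are names taken from the discussion above, the rest of the wiring (argument shapes, the shape keyword, weighting) is illustrative rather than the repo's actual code:

```python
import torch
import torch.nn.functional as F

def stage2_code_losses(logits, lq_feat, gt_indices, frozen_vqgan, feat_weight=1.0):
    """logits:     (B, HW, K) transformer predictions over K codebook entries
    lq_feat:    encoder feature of the degraded input
    gt_indices: (B, HW) codebook indices extracted from the HQ image"""
    with torch.no_grad():
        # Feature reconstructed from the ground-truth code indices
        # (the quantize.get_codebook_feat lookup mentioned above; the exact
        # shape argument depends on the repo's layout conventions).
        quant_feat = frozen_vqgan.quantize.get_codebook_feat(
            gt_indices, shape=lq_feat.shape)

    # Code-prediction term: did the transformer pick the right codebook entries?
    ce_loss = F.cross_entropy(logits.permute(0, 2, 1), gt_indices)

    # Feature term: pull the encoder feature toward the quantized GT feature.
    # This is the "align encoder feat and quantize feat" objective
    # questioned in the comment above.
    feat_loss = F.mse_loss(lq_feat, quant_feat) * feat_weight

    return ce_loss, feat_loss
```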