
mean_loss1 & mean_loss2 meaning

Open · eazhary opened this issue 7 years ago · 0 comments

I am training on the Arabic language. I changed the character set in prepro.py, cleaned the input text, and created the csv file with the proper format. I trained on 1 sample (sanity_check) and it worked perfectly.
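For reference, a minimal sketch of the kind of vocabulary change I mean (the variable names and the exact Arabic character range here are illustrative, not the exact ones in prepro.py):

```python
# Illustrative sketch of swapping the character set to Arabic.
# The Unicode block U+0621..U+064A covers the basic Arabic letters.
arabic_letters = "".join(chr(c) for c in range(0x0621, 0x064B))

# 'P' = padding, 'E' = end of sentence, then space, letters, punctuation.
vocab = "PE " + arabic_letters + ".,?"

# Lookup tables used when converting cleaned text to integer ids.
char2idx = {ch: idx for idx, ch in enumerate(vocab)}
idx2char = {idx: ch for idx, ch in enumerate(vocab)}

def text_to_ids(text):
    """Map a cleaned sentence to character ids, dropping unknown characters."""
    return [char2idx[ch] for ch in text if ch in char2idx]
```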

Now, training on a bigger dataset (6300 sentences), I get a very high mean_loss1 (roughly 100 to 150) and a mean_loss2 above 1.

As far as I understand, mean_loss1 is the error in decoder1 (generated spectrogram vs. ground truth), and mean_loss2 is the error in decoder2 (generated magnitude vs. ground truth magnitude).
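In other words, I assume both are plain L1 losses, as in the Tacotron paper. A minimal sketch of how I understand them (the tensor and function names are my own, not necessarily the ones used in the repo):

```python
import tensorflow as tf

def tacotron_losses(mel_hat, mel, mag_hat, mag):
    """Sketch of the two losses as I understand them.

    mel_hat, mel: decoder1 output / ground-truth mel spectrogram, [batch, time, n_mels]
    mag_hat, mag: decoder2 output / ground-truth linear magnitude, [batch, time, 1 + n_fft // 2]
    """
    # mean_loss1: mean absolute error on the mel spectrogram.
    loss1 = tf.reduce_mean(tf.abs(mel_hat - mel))
    # mean_loss2: mean absolute error on the linear magnitude spectrogram.
    loss2 = tf.reduce_mean(tf.abs(mag_hat - mag))
    return loss1, loss2, loss1 + loss2
```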

How come mean_loss2 is so much smaller than mean_loss1?

Why do I still have such a high loss even after 24K steps?

I am using the default parameters with a batch size of 16.

[Screenshot attached: capture]

eazhary · Nov 16 '17, 07:11