
Higher training/validation loss - normal?

Open · KatieBelli opened this issue 7 months ago · 4 comments

In the previous version I got the following values for epoch 0 (same dataset used for both v5.0.5 and v6.0.0b1):

training loss = 0.001006, validation loss = 0.001161

Now, in version v6.0.0b1, I get these values for epoch 0:

training loss = -0.005903, validation loss = -0.007134

Are these values normal? I'm confused because there is a "-" sign and the magnitudes are much larger.

KatieBelli · Nov 29 '23

It is normal because the loss function changed in v6. Also, - indicates a negative value.

tsurumeso · Nov 30 '23
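To see why a loss can be negative at all: distance-style losses such as L1 or MSE are bounded below by 0, whereas ratio-style losses such as negative SI-SDR are unbounded below and keep decreasing as the estimate improves. The thread does not say which loss v6 actually uses, so the SI-SDR below is only an illustrative stand-in; a minimal PyTorch sketch:

```python
import torch

def l1_loss(est, ref):
    # Distance-style loss: mean absolute error, bounded below by 0.
    return torch.mean(torch.abs(est - ref))

def neg_si_sdr_loss(est, ref, eps=1e-8):
    # Ratio-style loss: negative SI-SDR. The better the estimate,
    # the more negative this value gets (illustrative choice only;
    # not necessarily the loss vocal-remover v6 uses).
    dot = torch.sum(est * ref, dim=-1, keepdim=True)
    energy = torch.sum(ref * ref, dim=-1, keepdim=True) + eps
    target = dot / energy * ref        # projection of est onto ref
    noise = est - target               # residual part of the estimate
    si_sdr = 10 * torch.log10(
        torch.sum(target ** 2, dim=-1) / (torch.sum(noise ** 2, dim=-1) + eps)
    )
    return -si_sdr.mean()

ref = torch.randn(1, 16000)                 # pretend 1 s of audio at 16 kHz
good = ref + 0.01 * torch.randn_like(ref)   # near-perfect estimate
bad = torch.randn_like(ref)                 # random estimate

print(l1_loss(good, ref), l1_loss(bad, ref))            # both are >= 0
print(neg_si_sdr_loss(good, ref), neg_si_sdr_loss(bad, ref))
# The good estimate yields a strongly negative loss; the bad one does not.
```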

> It is normal because the loss function changed in v6. Also, - indicates a negative value.

Is there a short explanation for this? Are higher values better? And do they have to be negative at first and reach 0 by the end of a perfect training run?

I'm confused because I was used to the values from before version 6 (beta). Thank you for regularly updating this code.

KatieBelli · Nov 30 '23

Of course, the smaller the loss, the better: 0 is better than 0.1, and -0.1 is better than 0.

tsurumeso · Nov 30 '23

Thanks. How do we recognize that training is nearly finished?

For example, at epoch 42: training loss = -0.008521, validation loss = -0.009089.

Is the "goal" to reach a value of like -0.01 (validation loss)? In version 5 the values decreased and I thought it must reach a value of 0 to be a perfectly trained model.

KatieBelli · Dec 09 '23
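This last question was not answered in the thread, but a common practice is to ignore the absolute value of the loss entirely and stop once the validation loss stops improving for several consecutive epochs (early stopping). A minimal, self-contained sketch of that bookkeeping; the loss values and patience setting are made up for illustration and are not from the vocal-remover code:

```python
import math

# Simulated per-epoch validation losses: negative, and they flatten out
# rather than reaching 0 (numbers are made up for illustration).
val_losses = [-0.0071, -0.0082, -0.0088, -0.0090, -0.0091,
              -0.0091, -0.0090, -0.0091]

best_val = math.inf          # the < comparison works for negative losses too
patience, bad_epochs = 3, 0

for epoch, val_loss in enumerate(val_losses):
    if val_loss < best_val:  # "smaller is better", sign included
        best_val = val_loss
        bad_epochs = 0       # in real training you would save a checkpoint here
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"stop at epoch {epoch}: no improvement for {patience} epochs")
            break

print(f"best validation loss: {best_val}")
```

The point is that there is no target number like 0 or -0.01: training is "nearly finished" when the validation curve flattens, wherever that happens to be.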