[Bug]: overtraining counter not updating
Project Version
3.2.5
Platform and OS Version
Windows 11, Firefox 130
Affected Devices
PC
Existing Issues
No response
What happened?
Instead of showing the actual number of epochs remaining for overtraining, it just says "g/total: 100 d/total: 200". It doesn't change.
Steps to reproduce
- Preprocess
- Extract
- Start training and look in the terminal ...
Expected behavior
I expect it to show the number of epochs remaining for overtraining.
Attachments
No response
Screenshots or Videos
Additional Information
No response
@ShiromiyaG
@blaisewf with all due respect, what do you mean by "not planned"? In my opinion this is an incredibly important feature, and right now it's broken and pointless, as it gives the user no useful information whatsoever. Closing the issue and tagging an actual bug as "not planned" seems incredibly strange to me.
@kro-ai did you actually reach the point of overtraining? You've configured the overtraining threshold at 100 epochs, which means the generator loss has to go up for 100 consecutive epochs before training stops.
I was under the impression that "number of epochs remaining for overtraining" would show a countdown, for example "number of epochs remaining for overtraining: 98", decreasing until it reaches 0, at which point training stops because no better epoch has been found. When a new best epoch is found, it resets to the configured number, in this case 100. Maybe I've completely misunderstood it, and if so I apologize. This is a feature Applio used to have; I was under the impression this was that feature.
You set a threshold, e.g. 50 epochs. At the end of each epoch the detector checks whether the loss went down or up. If it goes up for 50 consecutive epochs, training stops. The output shows the progress toward this stop as "selected threshold minus the number of epochs in which the loss consistently increased".
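For anyone unsure how the two descriptions fit together, here is a minimal sketch of the detector logic described above. All names (`OvertrainDetector`, `update`, `remaining`) are illustrative, not Applio's actual identifiers:

```python
class OvertrainDetector:
    """Stops training after `threshold` consecutive epochs of non-improving loss."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.best_loss = float("inf")
        self.epochs_without_improvement = 0

    def update(self, epoch_loss: float) -> bool:
        """Call once per epoch; returns True when training should stop."""
        if epoch_loss < self.best_loss:
            self.best_loss = epoch_loss          # new best epoch found
            self.epochs_without_improvement = 0  # countdown resets
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.threshold

    def remaining(self) -> int:
        # What the UI would ideally display:
        # "number of epochs remaining for overtraining"
        return self.threshold - self.epochs_without_improvement


detector = OvertrainDetector(threshold=100)
detector.update(1.0)   # new best -> remaining() == 100
detector.update(1.2)   # loss rose -> remaining() == 99
```

So both views are consistent: the counter only starts ticking down once the loss stops improving, and any new best epoch resets it to the full threshold.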