taming-transformers
How to decide the number of training epochs or the early-stopping condition?
I really like your paper; thanks for open-sourcing the code! It seems that you did not use early stopping alongside the ModelCheckpoint callback. Could you tell me how many epochs you trained the VQGAN and the transformer for? Or do you have suggestions for choosing the number of training epochs on new datasets?
Thanks! The VQGAN benefits greatly from being trained as long as possible (provided the dataset is large enough and overfitting is a secondary concern), and from turning on the discriminator rather late. When training on ImageNet, for example, I would recommend 3-5 epochs without the adversarial loss (but more is better), and then training for at least another 3-5 epochs with the discriminator turned on (again, more is better).
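For anyone wondering how to apply this: the step at which the adversarial loss kicks in is controlled by the `disc_start` parameter of the VQGAN loss config (given as a global training step, not an epoch). A minimal sketch of converting an epoch budget into that step count; the dataset size and batch size below are illustrative assumptions, not the paper's exact settings:

```python
# Sketch: translate "N adversarial-free epochs" into the disc_start
# global step used in the VQGAN loss config. Dataset size and batch
# size are assumptions for illustration.
dataset_size = 1_281_167   # ImageNet train images (assumed)
batch_size = 12            # effective batch size (assumed)
warmup_epochs = 5          # epochs without the adversarial loss

steps_per_epoch = dataset_size // batch_size
disc_start = warmup_epochs * steps_per_epoch
print(f"set lossconfig.params.disc_start to {disc_start}")
```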
The stopping condition for the transformer is usually when it starts overfitting in terms of NLL on held-out test data.
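Since training runs on PyTorch Lightning, one way to automate that stopping rule is an `EarlyStopping` callback monitoring the validation NLL. A hedged sketch; the metric key `"val/loss"` is an assumption and should match whatever your transformer model actually logs:

```python
from pytorch_lightning.callbacks import EarlyStopping

# Stop once the held-out NLL stops improving. The monitored key
# "val/loss" is an assumed name; use the key logged in your
# model's validation_step.
early_stop = EarlyStopping(
    monitor="val/loss",  # held-out NLL (assumed metric key)
    mode="min",          # lower NLL is better
    patience=3,          # tolerate a few noisy epochs before stopping
)

# Then pass it to the trainer, e.g.:
# trainer = pl.Trainer(callbacks=[early_stop], ...)
```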
May I ask how long it usually takes to train on ImageNet, and how many GPUs were used?
Any updates on training times? Costs?