How many epochs required for training?
When digging into your code, I found that training is based on a total number of iterations (let's call it max_steps). Based on experiment.py, it is computed as max_steps = conf.total_samples // conf.batch_size_effective.
total_samples is predefined in template.py (e.g. 130_000_000 for ffhq128) and batch_size_effective defaults to 128. For this example, max_steps = 1_015_625. Since FFHQ128 contains 70,000 samples, the number of required epochs is 1_015_625 / (70_000 / 128) ≈ 1857 (that is a huge number of epochs to train :(( )
Could you let me know whether I am correct?
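For reference, here is a minimal sketch of the arithmetic above, using the values quoted from template.py and experiment.py (the plain variable names are illustrative, not the actual config attributes):

```python
# Values as quoted in this issue for the ffhq128 template (assumptions, not read from the repo)
total_samples = 130_000_000        # conf.total_samples
batch_size_effective = 128         # conf.batch_size_effective (default)
dataset_size = 70_000              # number of images in FFHQ

max_steps = total_samples // batch_size_effective       # 1_015_625 iterations
steps_per_epoch = dataset_size / batch_size_effective   # ~546.9 steps per epoch
epochs = max_steps / steps_per_epoch                     # ~1857 epochs

print(max_steps, round(epochs))   # 1015625 1857
```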
Your understanding is correct. Note that the number of "samples" here is comparable to other DDPMs trained on the same dataset. Perhaps the number of epochs shouldn't be interpreted the same way as in a classification model when you are dealing with a generative model?
Hi, may I know the total training time of your DiffAE on FFHQ256?