Eric Engelhart
What is the difference between the training code of this project and that of the "3D-ResNets-PyTorch" project?
I diffed the two files in PyCharm; the only differences are formatting. For example: ``` epoch, i + 1, len(data_loader), batch_time=batch_time, data_time=data_time, loss=losses, acc=accuracies)) ``` in this repo vs ``` epoch,...
Started training on FFHQ-thumbnails for 50k steps with the latest version (0.18.6); it should be done sometime tonight.
Unfortunately the FID is worse (final FID of ~54.0), so it looks like this was a negative change.
Batch size 32 (batch 16 with gradient accumulation = 2). I quickly checked in Colab, and it seems that PyTorch's `F.normalize` produces different results than the pixel-wise norm used...
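To make the difference concrete, here's a small sketch, assuming an NCHW feature map (the `pixel_norm` helper is illustrative, not this repo's code). `F.normalize` divides each pixel's channel vector by its L2 norm, while the Progressive GAN pixel norm divides by its RMS, so the two outputs differ by roughly a factor of sqrt(C), and the epsilon enters in different places:

```python
import torch
import torch.nn.functional as F

def pixel_norm(x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Progressive-GAN-style pixel-wise feature normalization:
    # divide by the RMS over the channel dimension.
    return x * torch.rsqrt(torch.mean(x ** 2, dim=1, keepdim=True) + eps)

x = torch.randn(4, 512, 8, 8)
a = F.normalize(x, dim=1)  # unit L2 norm per pixel: x / ||x||_2
b = pixel_norm(x)          # unit RMS per pixel: x / sqrt(mean(x^2) + eps)

# The two differ elementwise by ~sqrt(C):
print((b / a).mean().item())  # ~ sqrt(512) ≈ 22.6
```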
That's great to hear. However, I do want to clear up some things I think are getting confused together. The equalized learning rate from Progressive Growing of GANs is more...
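For reference, a minimal sketch of the equalized-learning-rate trick as commonly implemented for StyleGAN-style mapping networks (the names `EqualizedLinear` and `lr_mul` are illustrative, not this repo's actual classes). Weights are drawn from N(0, 1) and rescaled by the He constant at runtime instead of at init, so Adam's per-parameter statistics see identically scaled weights; `lr_mul` additionally shrinks the effective step size, which is what `--lr-mlp` controls for the mapping network:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class EqualizedLinear(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, lr_mul: float = 1.0):
        super().__init__()
        # Dividing the init by lr_mul and multiplying back at runtime
        # leaves the forward pass unchanged but scales gradients by lr_mul.
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) / lr_mul)
        self.bias = nn.Parameter(torch.zeros(out_dim))
        self.scale = (1.0 / math.sqrt(in_dim)) * lr_mul  # He constant * lr_mul
        self.lr_mul = lr_mul

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.weight * self.scale, self.bias * self.lr_mul)
```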
Starting a training run on version `0.19.1` with `--lr-mlp 0.01`, since that's the value used in the paper. After this, I'd be interested in testing the equalized learning rate (normal...
Complete mode collapse; every image looks nearly identical only 10k steps in. EMA sample from 10k steps: [image] Also, about the pixel-wise normalization from earlier: the original TensorFlow code doesn't...
@lucidrains Run is complete. FID is slightly worse. My run of 50k steps, batch size 32, `--lr-mlp 0.1` on `v0.19.3` ended up with an FID score just above `v0.18.5`'s. v0.19.3: FID...
FYI: if you use a consumer GPU, 20% of the VRAM is reserved for Windows by NVIDIA's driver; this is for display purposes. Titans/Quadros can disable this, but cannot display...
Is this different from PyTorch's built-in [CosineAnnealingLR](https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.CosineAnnealingLR)?
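For comparison, a minimal sketch of the built-in scheduler (the model, learning rate, and step count are placeholder values, not this repo's settings):

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(10, 10)
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
sched = CosineAnnealingLR(opt, T_max=50_000, eta_min=1e-6)

for step in range(50_000):
    # ... forward / backward / opt.step() would happen here ...
    sched.step()  # cosine-anneals lr from 2e-4 toward eta_min over T_max steps
    if step % 10_000 == 0:
        print(step, sched.get_last_lr())
```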