Phil Wang
@foobar8675 Amazing Matthew! :100: :100:
@athenawisdoms Hi Athena! My recommendation for you would be to buy Google Colab Pro for $10 a month https://colab.research.google.com/signup, roll a V100 (16GB), and train on there
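once you're in, a quick sanity check that the runtime actually gave you a V100 (plain PyTorch, nothing repo-specific):

```python
import torch

# confirm Colab handed you a GPU, and which one
assert torch.cuda.is_available(), 'enable the GPU runtime first (Runtime -> Change runtime type)'
print(torch.cuda.get_device_name(0))  # hopefully something like 'Tesla V100-SXM2-16GB'
print(f'{torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB of VRAM')
```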
@winnechan Hi Winne! So 100 images may not be enough, but eventually I want to add the augmentation techniques from https://arxiv.org/abs/2206.00364, and that should allow DDPMs to work on small...
@RayeRTX Hi again! You actually want that value to be as large as possible. For large scale GAN training (BigGAN), people aim for batch sizes of 256 or beyond! However, I...
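for anyone following along: the usual trick when a big batch won't fit in memory is gradient accumulation. rough sketch in plain PyTorch, with a toy model and random data just to show the pattern:

```python
import torch
from torch import nn

# toy stand-ins for a real model and real data
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

micro_batch, accumulate_every = 8, 32  # 8 * 32 = effective batch size of 256

optimizer.zero_grad()
for _ in range(accumulate_every):
    x = torch.randn(micro_batch, 10)      # stand-in for a real minibatch
    loss = model(x).pow(2).mean()         # stand-in loss
    (loss / accumulate_every).backward()  # scale so gradients average out correctly
optimizer.step()                          # one update at the large effective batch size
```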
@RayeRTX what are you training on? care to share your results? :) I relish seeing what others have trained
@tannisroot It could be the translation, so I think it's best to try just the color augmentation alone and see if that helps. That's really weird, because Karras' new paper...
@tannisroot of course, if you have enough data, you can disable it altogether!
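(if memory serves, the CLI knobs for this are `--aug-prob` and `--aug-types` — so something like `stylegan2_pytorch --data ./data --aug-prob 0.25 --aug-types [color]` to try color alone, and leaving `--aug-prob` at 0 disables augmentation — but double-check the README in case the flags have changed)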
@tannisroot yes, 1400 is indeed too small a dataset for GANs without augmentation. Feel free to try this new alternative technique though! https://github.com/lucidrains/denoising-diffusion-pytorch
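rough sketch of what training looks like, per the README (treat the hyperparameters as starting points, and `path/to/your/images` as a placeholder for your own folder):

```python
from denoising_diffusion_pytorch import Unet, GaussianDiffusion, Trainer

model = Unet(
    dim = 64,
    dim_mults = (1, 2, 4, 8)
)

diffusion = GaussianDiffusion(
    model,
    image_size = 128,   # size your images will be resized to
    timesteps = 1000    # number of diffusion steps
)

trainer = Trainer(
    diffusion,
    'path/to/your/images',          # placeholder: folder of training images
    train_batch_size = 32,
    train_lr = 8e-5,
    train_num_steps = 700000,       # total training steps
    gradient_accumulate_every = 2,  # raises the effective batch size
    ema_decay = 0.995               # exponential moving average of weights
)

trainer.train()
```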
@gunahn Hi! There's an error in your command; it should be `stylegan2_pytorch --data DCGAN/Knee_GAN/processed/2/test/`
@quentinkaci hey Quentin, yea, i'm planning on open sourcing https://cascaded-diffusion.github.io/ soon in this repository. how big is your training set, and how many steps did you train for?