rinongal
Hopefully I can chime in here (as an author of SWAGAN). As Kim noted, on a first pass there were two differences I spotted between the current implementation and our...
Kim, do you have any training logs / FIDs from the run? I'd love to take a look and try to track down differences. When you note a slower convergence...
An update from our end: We've had a suggestion that our StyleGAN2 training results may be affected by the [issue fixed here](https://github.com/NVlabs/stylegan2/commit/23f8bed55f4b220c69cff98139a000d4c77cd558), or that the different batch size may be...
The entire pipeline could have been implemented using any of the available CLIP models, or a mix thereof. Setting the weight of the ViT-B/16 CLIP model to 0.0 just means...
Every place where we use CLIP, we use the same weighted combination of the two models, yes. In practice, for many of our results (as you saw in the supp...
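For anyone trying to replicate the setup, here's a minimal sketch of what that weighted combination looks like. The model names are the real OpenAI CLIP identifiers, but the helper, the loss form, and the exact weights are illustrative, not our actual training code:

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"

# Per-model weights; setting one to 0.0 simply removes it from the sum.
model_weights = {"ViT-B/32": 1.0, "ViT-B/16": 0.0}

# Load only the models that actually contribute.
models = {
    name: clip.load(name, device=device)[0]
    for name, weight in model_weights.items()
    if weight > 0.0
}

def combined_clip_loss(images: torch.Tensor, text: str) -> torch.Tensor:
    """Weighted sum of per-model CLIP losses (here: 1 - cosine similarity).

    `images` are assumed to already be CLIP-preprocessed (224x224, normalized).
    """
    tokens = clip.tokenize([text]).to(device)
    total = torch.zeros((), device=device)
    for name, model in models.items():
        image_features = model.encode_image(images)
        text_features = model.encode_text(tokens)
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        similarity = (image_features * text_features).sum(dim=-1)
        total = total + model_weights[name] * (1.0 - similarity).mean()
    return total
```

Skipping zero-weight models at load time is just an optimization; a model with weight 0.0 contributes nothing to the sum either way, so the math is unchanged.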
I am a personal fan of Qinni's art, and even have a print of one of her deviations on my wall. When I looked for artists to experiment with, her...
Using living artists is its own can of worms. While you are correct that they can give permission, they can also be directly and financially impacted by someone mimicking their...
Which colab is this? The huggingface diffusers one?
@atomical What error are you getting on those 4+ GPU runs?
From a brief comparison, the config and code seem to match the version I used for training the cats. The only difference I can think of is the seed, or...
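If it helps rule the seed out, the RNGs I'd pin down are the usual set. A generic sketch (not the repo's exact code) for a PyTorch run:

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    # Pin every RNG that typically affects a PyTorch training run. Even with
    # these fixed, some CUDA kernels remain nondeterministic, so small
    # run-to-run differences can survive identical configs.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
```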