pixel2style2pixel
Do you ever retrain the StyleGAN network?
pSp is really great work for image-to-image translation. I have a question: did you ever retrain the StyleGAN network for your applications (inversion, inpainting, super-resolution)? I find that the StyleGAN model file you provide in your project is different from the NVIDIA official implementation, so I wonder whether you used the fixed StyleGAN generator (the same as NVIDIA provides) or retrained it.
The StyleGAN generator we use here is the official generator, converted from TensorFlow to PyTorch. We do not train our own generator.
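For illustration, here is a minimal sketch of how the converted generator can be loaded and kept frozen. It assumes the rosinality-style Generator class bundled with this repo (models/stylegan2/model.py) and a checkpoint whose EMA weights live under a "g_ema" key, which is the layout the conversion produces; the checkpoint path is just an example:

```python
import torch

from models.stylegan2.model import Generator  # rosinality-style generator in this repo

# Build the FFHQ-sized generator: output size 1024, 512-dim style space, 8 MLP layers.
decoder = Generator(1024, 512, 8)

# Load the converted official weights (EMA copy); no retraining happens here.
ckpt = torch.load("pretrained_models/stylegan2-ffhq-config-f.pt", map_location="cpu")
decoder.load_state_dict(ckpt["g_ema"])
decoder.eval()

# Keep the decoder fixed; only the pSp encoder learns during training.
for p in decoder.parameters():
    p.requires_grad = False
```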
Thanks for your reply! My research is based on stylegan2-ada-pytorch, which may differ from your StyleGAN implementation. So you mean you just copy the weights of the original StyleGAN into your decoder without retraining it?
Correct, we don't retrain it. We used rosinality's implementation of StyleGAN2 rather than StyleGAN2-ada-pytorch, but you can convert the models by following the instructions here.
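If it helps, a quick sanity check on a converted checkpoint before plugging it into pSp might look like the sketch below. It assumes the key layout produced by rosinality's convert_weight.py (EMA weights under "g_ema"); the exact top-level keys may vary with the converter you follow, and the file name is illustrative:

```python
import torch

# Rough sanity check on a converted checkpoint before using it as the pSp decoder.
# Assumes the rosinality conversion layout, where the EMA generator weights
# sit under "g_ema"; other converters may use different top-level keys.
ckpt = torch.load("stylegan2-ffhq-config-f.pt", map_location="cpu")
print(sorted(ckpt.keys()))  # expect something like ["d", "g", "g_ema", "latent_avg"]
print(len(ckpt["g_ema"]))   # number of parameter tensors in the EMA generator
```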