30 comments by Jason GU

Hi, @yue95213 and @zhufeida. Thank you for your question. It is worth noticing that StyleGAN natively has a multi-code design (a style code for each layer), and `stylegan-w+` is for...

> What parameters are you using to invert StyleGAN? I use
>
> ```
> python multi_code_inversion.py --gan_model stylegan_bedroom --target_images ./examples/gan_inversion/bedroom --outputs ./gan_inversion_bedroom2 --inversion_type...
> ```

Hi, the code is on this line: [code](https://github.com/genforce/mganprior/blob/a4ff818f5997dbb5097c9033ee99609134ad70f2/utils/manipulate.py#L24). We calculate the loss on the downsampled inverted images.

Thanks for your interest. According to Eq.(7) of the [paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Gu_Image_Processing_Using_Multi-Code_GAN_Prior_CVPR_2020_paper.pdf), $I_{LR}$ is actually obtained by downsampling the ground truth. In other words, if we know the ground truth, we...

Hi, @QingLicsaggie and @a878322125. According to Eq.(7) of the paper, the optimization objective is set to the likelihood of the SR problem: the super-resolved high-res image should...
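To make the Eq.(7)-style objective above concrete, here is a minimal NumPy sketch: the candidate high-res image is downsampled and compared against the low-res observation, which is the term the optimization minimizes. The function names and the simple box downsampling are illustrative assumptions, not the repository's actual implementation (which lives in `utils/manipulate.py`).

```python
import numpy as np

def box_downsample(img, factor):
    """Illustrative box (average-pooling) downsampling of a 2-D image."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def sr_loss(generated_hr, target_lr):
    """Hypothetical Eq.(7)-style objective: MSE between the downsampled
    super-resolved image and the low-res ground truth."""
    factor = generated_hr.shape[0] // target_lr.shape[0]
    downsampled = box_downsample(generated_hr, factor)
    return float(np.mean((downsampled - target_lr) ** 2))

# A generated image whose downsampled version matches the LR target
# gives zero loss; mismatches are penalized in the low-res domain.
hr = np.ones((8, 8))
lr = np.ones((2, 2))
print(sr_loss(hr, lr))  # 0.0
```

In the actual optimization, `generated_hr` would be the output of the GAN given the latent codes being optimized, so the gradient of this loss flows back into the codes.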

> @JasonGUTU
> Thank you for your explanation. The `target_images` input should be a 64x64 `lr_image`, but the resulting `sr_image` looks terrible. How can I reproduce your paper results with the code? Did you...

It depends on the platform you use; on a 1080 GPU, one image takes about 4 to 10 minutes depending on the number of steps.

The data needs to be checked for compliance and an appropriate license needs to be arranged, so it probably won't be open-sourced anytime soon. Open-sourcing the data will be discussed in other issues.

We compared directly against other open-source methods. Our goal is not to prove that our architecture holds an absolute academic advantage, but to demonstrate the potential of large low-level vision models. Data is a critical part of developing large models that cannot be ignored; this involves not only data collection, but also data cleaning, processing, and weighting. We do not deny that other methods might achieve similar results after training on large amounts of data, but we also emphasize the unique value of this work. Scaling up a model is a systematic engineering effort; it is not the case that any model will succeed simply because the data volume increases, otherwise GPT-4 would no longer be GPT-4. We will release the source code later and open a free online demo. You are welcome to follow the project, try it out, and give feedback.