Xinyang Li
Hi, can you share the script to reproduce the great metrics in your paper? I also want to compare my own model with yours, so it would help a lot...
I found that on page 6 of your paper, the results of StarGAN are really poor. I reproduced StarGAN on CelebA-HQ and got bad results too. But in the StarGAN paper, it...
In your original paper, the attribute code is mostly exchanged to guide the translation, but in your code you also use random noise to guide the translation, like MUNIT. Why?...
In the discriminator, style encoder, and content encoder, I find 4x4 conv filters. Where did this idea come from, or did I miss something?
I found that your method emphasizes aligning facial landmarks for training, so is it different from StarGAN in this aspect?
I wonder if there is a better way to make this kind of attention-based generator converge. Could you please share some training tricks you used while training this architecture? :)
In my case, the NeRF network may contain B different scenes. Can nerfacc support ray_marching with rays_o, rays_d (shape: B, N, 3) and sigma_fn (B, N, 3 -> B, N,...
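One possible workaround, sketched under assumptions: flatten the batched rays into a single (B*N, 3) tensor and dispatch each sample back to its scene inside sigma_fn via ray_indices. This assumes the nerfacc 0.3-style `nerfacc.ray_marching(rays_o, rays_d, sigma_fn=...)` API whose sigma_fn receives (t_starts, t_ends, ray_indices); the per-scene density callables, `march_batched` helper, and the step size below are illustrative placeholders, not part of nerfacc.

```python
# A minimal sketch (not the nerfacc API itself): flatten batched rays so a
# single ray_marching call can be used, and route each sample back to the
# right scene inside sigma_fn via ray_indices.
import torch
import nerfacc

def march_batched(rays_o, rays_d, sigma_fns, render_step_size=1e-2):
    """rays_o, rays_d: (B, N, 3); sigma_fns: list of B callables mapping
    (M, 3) points to (M, 1) densities (one per scene)."""
    B, N, _ = rays_o.shape
    rays_o_flat = rays_o.reshape(B * N, 3)
    rays_d_flat = rays_d.reshape(B * N, 3)
    # scene id of every flattened ray
    scene_ids = torch.arange(B, device=rays_o.device).repeat_interleave(N)

    def sigma_fn(t_starts, t_ends, ray_indices):
        t_mid = (t_starts + t_ends) / 2.0
        xyz = rays_o_flat[ray_indices] + rays_d_flat[ray_indices] * t_mid
        sample_scene = scene_ids[ray_indices]
        sigmas = torch.empty_like(t_starts)
        for b in range(B):  # query each scene's network only on its own samples
            mask = sample_scene == b
            if mask.any():
                sigmas[mask] = sigma_fns[b](xyz[mask])
        return sigmas

    return nerfacc.ray_marching(
        rays_o_flat,
        rays_d_flat,
        sigma_fn=sigma_fn,
        near_plane=0.0,   # placeholder bounds, set for your scenes
        far_plane=1.0,
        render_step_size=render_step_size,
    )
```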
Hello. I found that in your paper the reconstruction loss is defined as an L1 norm, but in your code it is an L2 norm. Why?
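For reference, a minimal sketch of the two variants being compared (the tensor names are illustrative, not taken from the repo):

```python
# L1 vs. L2 reconstruction loss, side by side.
import torch
import torch.nn.functional as F

x = torch.randn(4, 3, 256, 256)      # original image batch (illustrative)
x_rec = torch.randn(4, 3, 256, 256)  # reconstructed image batch (illustrative)

loss_l1 = F.l1_loss(x_rec, x)   # L1 norm, as written in the paper
loss_l2 = F.mse_loss(x_rec, x)  # L2 (mean squared error), as in the code
```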