e4s2022
Hi, @oneThousand1000 Did you set both `z` and `c` as trainable parameters during the GAN inversion? I guess fixing `c` (which can be obtained from the dataset.json)...
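The distinction being asked about can be sketched with a toy differentiable generator. Everything below (the linear `G`, the dimensions, the learning rate) is a hypothetical stand-in for EG3D's generator, just to show which variable receives gradients when `c` is frozen:

```python
import numpy as np

# Toy stand-in for G(z, c): a fixed linear map (hypothetical, NOT EG3D's generator).
rng = np.random.default_rng(0)
W_z = rng.standard_normal((8, 4))   # maps latent z (4-d) to a tiny "image" (8-d)
W_c = rng.standard_normal((8, 3))   # maps camera label c (3-d) to the "image"

def G(z, c):
    return W_z @ z + W_c @ c

# Ground-truth latent and camera that produced the target "image".
z_true = rng.standard_normal(4)
c_fixed = rng.standard_normal(3)    # c comes from dataset.json and stays FROZEN
target = G(z_true, c_fixed)

# Invert: optimize z only; c is held fixed throughout the loop.
z = np.zeros(4)
lr = 0.05
for _ in range(1000):
    residual = G(z, c_fixed) - target
    grad_z = W_z.T @ residual       # analytic gradient of 0.5*||G(z,c)-target||^2 w.r.t. z
    z -= lr * grad_z                # note: no update to c_fixed anywhere

print(np.allclose(G(z, c_fixed), target, atol=1e-3))
```

Making `c` trainable as well would just mean adding a `grad_c = W_c.T @ residual` step, but then the recovered camera can drift away from the value recorded in dataset.json.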
Got it, thanks for your reply. BTW, did you follow the FFHQ preprocessing steps in EG3D (i.e., realign in-the-wild images to 1500 and then resize to 512), or directly...
@mlnyang, I got similar results to yours. I used the well-aligned & cropped FFHQ images (at 1024 resolution), then resized them to 512 for the subsequent PTI inversion....
I took a look at the PTI alignment script; it seems the same as the original FFHQ one. I also inspected the EG3D preprocessing and compared it with the original FFHQ pipeline. AFAIK,...
@oneThousand1000 Yeah, I agree. Those who want to directly use the well-aligned FFHQ 1024 images have to predict the camera parameters themselves with Deep3DFace_pytorch. But I haven't tested...
@lyx0208, hi, you have to preprocess the dataset in advance; the details can be found [here](https://github.com/NVlabs/eg3d/blob/main/dataset_preprocessing/ffhq/runme.py). As for your question, the camera parameters provided by the authors can be downloaded from...
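For anyone unsure what those camera parameters look like once downloaded: EG3D's `dataset.json` stores, per image, a 25-dim label (a flattened 4x4 cam2world extrinsic matrix followed by a flattened 3x3 normalized intrinsic matrix). The snippet below parses a minimal hand-written example in that format; the filename and the specific numbers are illustrative, not taken from the real file:

```python
import json

# Hypothetical minimal dataset.json in EG3D's label format:
# each entry is [filename, 25-dim label] = 16 extrinsic values (flattened
# 4x4 cam2world) followed by 9 intrinsic values (flattened 3x3 K).
raw = json.dumps({
    "labels": [
        ["img00000000.png", [1.0, 0.0, 0.0, 0.0,
                             0.0, 1.0, 0.0, 0.0,
                             0.0, 0.0, 1.0, 2.7,
                             0.0, 0.0, 0.0, 1.0,
                             4.2647, 0.0, 0.5,
                             0.0, 4.2647, 0.5,
                             0.0, 0.0, 1.0]],
    ]
})

labels = dict(json.loads(raw)["labels"])           # filename -> 25-dim label
c = labels["img00000000.png"]
extrinsics = [c[i:i + 4] for i in range(0, 16, 4)]   # 4x4 cam2world matrix
intrinsics = [c[i:i + 3] for i in range(16, 25, 3)]  # 3x3 normalized K
print(len(c))
```

The intrinsics here are normalized by image size (focal length and principal point expressed as fractions of the width), which is why the principal point sits at 0.5.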
Hi, @X-niper I also noticed this issue: the rotation matrix obtained directly from `compute_rotation` differs from the extrinsics provided in `dataset.json`. Could you please tell the...
@X-niper Thanks, I understand the general steps. Can you explain the meaning of

```python
theta_x = np.pi - theta_x
theta_y = -theta_y
theta_z = theta_z
```

Maybe through...
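One way to see what those sign/offset flips do is to build the rotation matrix from the adjusted angles and check it is still a valid rotation. My reading (an assumption, not verified against the official script) is that they convert the face predictor's pose convention to EG3D's camera convention: flipping the x rotation by pi accounts for the camera looking along the opposite z axis, and negating the yaw mirrors left/right. The angle values below are made up for illustration:

```python
import numpy as np

def euler_to_R(tx, ty, tz):
    # Standard rotation matrices about the x, y, z axes (angles in radians).
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(tx), -np.sin(tx)],
                   [0, np.sin(tx),  np.cos(tx)]])
    Ry = np.array([[ np.cos(ty), 0, np.sin(ty)],
                   [0, 1, 0],
                   [-np.sin(ty), 0, np.cos(ty)]])
    Rz = np.array([[np.cos(tz), -np.sin(tz), 0],
                   [np.sin(tz),  np.cos(tz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

# Angles as predicted by the reconstruction network (hypothetical values).
theta_x, theta_y, theta_z = 0.1, -0.2, 0.05

# Apply the flips from the snippet above, then compose the rotation.
R = euler_to_R(np.pi - theta_x, -theta_y, theta_z)

print(np.allclose(R @ R.T, np.eye(3)))  # orthogonality: still a valid rotation
```

Since each flip only negates or offsets an Euler angle, the result remains a proper rotation (determinant +1); the flips change the camera's frame, not the validity of the matrix.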
Also want to know how to extract the camera pose of in-the-wild images. Please let me know if you have any idea, thank you guys. Use the exact...
I have a similar question. The `inv_planes` in the code is actually

```
tensor([[[1., 0., 0.],
         [0., 1., 0.],
         [0., 0., 1.]],

        [[1., 0., 0.],
         [0., 0., 1.],
         [0., 1.,...
```
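For context, a sketch of how such per-plane matrices get used: EG3D's tri-plane projection rotates each 3D point into a plane's frame and keeps the first two coordinates. The plane matrices and the helper below are an assumption-laden simplification, not the repository's exact code (the third plane matrix in particular has differed across EG3D versions):

```python
import numpy as np

# Three 3x3 axis-permutation matrices, one per tri-plane (illustrative choice).
planes = np.array([
    [[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]],  # XY plane
    [[1., 0., 0.], [0., 0., 1.], [0., 1., 0.]],  # XZ plane
    [[0., 0., 1.], [1., 0., 0.], [0., 1., 0.]],  # third plane (version-dependent)
])

def project_onto_planes(points, planes):
    # points: (N, 3) world coordinates.
    # Returns (3, N, 2): 2D coordinates of every point on each plane.
    inv = np.linalg.inv(planes)                  # the `inv_planes` being discussed
    out = np.einsum('nj,pjk->pnk', points, inv)  # rotate points into each plane frame
    return out[..., :2]                          # keep only the in-plane coordinates

pts = np.array([[1.0, 2.0, 3.0]])
coords = project_onto_planes(pts, planes)
print(coords.shape)
```

The identity matrix for the first plane means the XY projection is just `(x, y)`; the other two matrices shuffle which world axis ends up as the discarded "depth" coordinate.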