SDF-StyleGAN
Rendering meshes in the training set
Hi Xinyang, I have a question about rendering meshes in the training set.
When computing the FID, I render the meshes as follows:
- To render the generated samples: run generate_for_fid.py with the pre-trained models.
- To render the training samples: change mesh = model.generate_mesh() to mesh = trimesh.load('model.obj') and run generate_for_fid.py, where 'model.obj' refers to the files taken directly from ShapeNetCore.v1.zip (a sketch of this change follows below).
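For concreteness, here is a minimal sketch of that modification; the variable name mesh and the surrounding structure of generate_for_fid.py follow the description above, and the path to model.obj is illustrative:

import trimesh

# Original line in generate_for_fid.py (renders a generated sample):
# mesh = model.generate_mesh()

# Replacement for rendering a training sample; the path is illustrative
# and should point at a model.obj extracted from ShapeNetCore.v1.zip.
mesh = trimesh.load('model.obj')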
In this way, the FID scores for the Chair, Airplane, Table, and Rifle categories are very close to the numbers in your paper. That is pretty cool! However, for the Car category, the FID is 125.00, which is larger than the 97.99 reported in the paper. I think the problem lies in how I render the training samples, since the rendered images of Car look like this:
It seems the renders show the inner structure of the training samples. So the question is: before rendering, did you preprocess the 'model.obj' files of the Car category in ShapeNetCore.v1.zip? If so, could you please tell me how you preprocessed them?
Thanks!
Sorry, I may have forgotten to mention some details. You should set no_fix_normal=False in render_mesh at utils/render/render.py when rendering the original meshes from ShapeNet.
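For reference, a minimal sketch of such a call, assuming render_mesh takes the mesh as its first argument (the full signature isn't shown in this thread, so the other details are assumptions):

import trimesh
from utils.render.render import render_mesh

# Load the original (unprocessed) ShapeNet mesh.
mesh = trimesh.load('model.obj')

# no_fix_normal=False enables the renderer's normal fixing, so the
# inconsistently oriented faces in raw ShapeNet meshes render correctly.
image = render_mesh(mesh, no_fix_normal=False)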
I see. Thank you so much!
Another solution, which also seems to work and is quite a bit faster than using raytracing, is to make the mesh double-sided before rendering it. Appending an inverted copy of the mesh means every triangle is present with both face orientations, so inward-facing surfaces render correctly:
# Make the mesh double-sided: append a copy with inverted
# face winding so back-facing triangles are also rendered.
mesh_inv = mesh.copy()
mesh_inv.invert()
mesh = mesh + mesh_inv
# Drop any faces duplicated by the concatenation.
mesh.update_faces(mesh.unique_faces())