
Rendering meshes in training set

wenshuo128 opened this issue

Hi Xinyang, I have a question about rendering meshes in the training set.

When testing the FID, the way I render the meshes is:

  1. To render the generated samples: run generate_for_fid.py with the pre-trained models.
  2. To render the training samples: change mesh = model.generate_mesh() to mesh = trimesh.load('model.obj') and run generate_for_fid.py. Here, 'model.obj' refers to the files taken directly from ShapeNetCore.v1.zip.

This way, the FIDs for the Chair, Airplane, Table, and Rifle categories come very close to the numbers in your paper. That is pretty cool! However, for the Car category the FID is 125.00, much larger than the 97.99 reported in the paper. I think the problem lies in how I render the training samples, since the rendered images of Car look like this: (screenshot attached)

It seems the renderings expose the inner structure of the training samples. So the question is: before rendering, did you preprocess the 'model.obj' files of Car from ShapeNetCore.v1.zip? If so, could you please tell me how you preprocessed them?

Thanks!

wenshuo128 avatar Aug 04 '22 13:08 wenshuo128

Sorry, I may have forgotten to mention a detail: you should set no_fix_normal=False in render_mesh (utils/render/render.py) when rendering the original meshes from ShapeNet.
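For intuition: the inside-out look usually comes from backface culling on meshes with inconsistent face winding, which the normal-fixing path presumably repairs. A flipped triangle's normal points away from the camera, so the renderer culls it and you see through to the interior. A minimal pure-Python sketch of the standard culling test (all names here are illustrative, not from the repo):

```python
def face_normal(v0, v1, v2):
    # Cross product of two edge vectors gives the (unnormalized) face
    # normal; its direction depends on the winding order v0 -> v1 -> v2.
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def is_front_facing(tri, view_dir):
    # A triangle is front-facing when its normal points against the view
    # direction (negative dot product); back-facing triangles get culled.
    n = face_normal(*tri)
    return sum(n[i] * view_dir[i] for i in range(3)) < 0

# Camera looking down -z; the same triangle with two winding orders.
tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))      # counter-clockwise: normal +z
flipped = ((0, 0, 0), (0, 1, 0), (1, 0, 0))  # reversed winding: normal -z
view = (0, 0, -1)
print(is_front_facing(tri, view))      # True: rendered
print(is_front_facing(flipped, view))  # False: culled, you see "through" it
```

On a mesh where some Car body panels have flipped winding, those panels are culled and the seats and interior behind them become visible, which matches the screenshots above.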

Zhengxinyang avatar Aug 04 '22 13:08 Zhengxinyang

I see. Thank you so much!

wenshuo128 avatar Aug 04 '22 13:08 wenshuo128

Another solution, which also seems to work and is quite a bit faster than using ray tracing, is to do the following before rendering the mesh:

  # Make the mesh double-sided: concatenate it with an inverted copy
  # so faces with flipped winding still render, then drop duplicates.
  mesh_inv = mesh.copy()
  mesh_inv.invert()                        # flip face winding / normals
  mesh = mesh + mesh_inv                   # concatenate the two meshes
  mesh.update_faces(mesh.unique_faces())   # keep only unique faces
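For intuition, the snippet makes every face double-sided: it concatenates the mesh with an inverted copy so both windings of each triangle exist, then removes duplicate faces. A dependency-free sketch of the same idea on raw face index tuples (hypothetical data, not the trimesh API; this simplified version drops a face only when the same oriented face repeats exactly):

```python
def double_side(faces):
    # Append each face with reversed winding so both orientations exist,
    # then deduplicate faces that describe the same oriented triangle.
    inverted = [f[::-1] for f in faces]  # flip winding order
    seen, unique = set(), []
    for f in faces + inverted:
        if f not in seen:                # drop exact duplicates
            seen.add(f)
            unique.append(f)
    return unique

faces = [(0, 1, 2)]                      # one triangle, one winding
print(double_side(faces))                # [(0, 1, 2), (2, 1, 0)]
```

With both windings present, backface culling can never hide a panel entirely, so the render looks correct regardless of how the original ShapeNet faces were oriented.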

hummat avatar Oct 24 '24 06:10 hummat