
Model Optimization issues when training Single Image to Mesh Reconstruction model


🐛 Bugs / Unexpected behaviors

Hi, thanks for the repo. I have been using PyTorch3D to perform single-image to 3D mesh reconstruction. What I notice is that my model (taken from the MeshRCNN repo) does not optimize, or rather overfits, when I use fixed normalization parameters (i.e. translation and scaling) for my ground-truth meshes.

For example:

When I normalize my ground truth meshes in the following way, my model optimizes well:

# Per-mesh normalization, as in the deform_mesh tutorial
from pytorch3d.io import load_obj

v, f, _ = load_obj(filename)
center = v.mean(0)
v = v - center                      # translate the centroid to the origin
scale = max(v.abs().max(0)[0])
v = v / scale                       # scale to fit inside the unit cube

If I normalize my ground-truth meshes this way, there is no way to recover the inverse transformation at test time: the per-mesh center and scale are computed from the ground-truth mesh itself, and no ground-truth meshes are available during testing.
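
For reference, undoing the per-mesh normalization would look like the minimal sketch below (variable names follow the snippet above); it needs the per-mesh center and scale, which is exactly the information missing at test time:

# Inverse of the per-mesh normalization above. It requires the per-mesh
# `scale` and `center` computed from the ground-truth mesh, so it cannot
# be applied to predictions when no ground truth is available.
v_original = v * scale + center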

Whereas when I use fixed normalization parameters (translation and scaling) for the entire dataset, so that I can apply the inverse transformation at test time, the model overfits the dataset:

# Fixed normalization, shared across the whole dataset
from pytorch3d.io import load_obj

v, f, _ = load_obj(filename)
v = v - 128  # translation: shift [0, 256] to [-128, 128]
v = v / 128  # scaling: map [-128, 128] to [-1, 1]

All vertex coordinates of my meshes lie in [0, 256], so this transformation maps them from [0, 256] to [-1, 1]. However, even then, my model doesn't optimize.
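
For completeness, this is the inverse I would apply to the network's predictions at test time; a minimal sketch, where v_pred is a hypothetical tensor of predicted vertex coordinates:

# Inverse of the fixed normalization: the parameters are constants shared
# by the whole dataset, so no ground-truth mesh is needed at test time.
v_pred_original = v_pred * 128 + 128  # map [-1, 1] back to [0, 256]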

I'm not really sure what mistake I'm making. It would be great if someone here could help me solve this issue!

Thanks!

sainatarajan · Aug 17 '22, 12:08