
ShapeNet TexturesUV support (by combining multiple texture images)

Open ck-amrahd opened this issue 3 years ago • 5 comments

Hi, I am working with the ShapeNet dataset, and PyTorch3D loads the textures as a TexturesAtlas. Is there a way to convert this TexturesAtlas into TexturesUV? Or can I load them as TexturesUV on my own? They have multiple texture images per OBJ file, though; does PyTorch3D support loading TexturesUV from multiple texture images? Thank you.

ck-amrahd avatar May 31 '21 18:05 ck-amrahd

Or can I load them as TexturesUV on my own? They have multiple texture images per OBJ file, though; does PyTorch3D support loading TexturesUV from multiple texture images?

TexturesUV can only represent a single texture map per mesh. As noted, however, some ShapeNet models come with multiple texture maps for different parts of their geometry. So when textures are requested, the ShapeNet data loader will instead always use a TexturesAtlas representation [*], which can accommodate multiple texture maps per mesh.

[*] See the corresponding code invoking the OBJ loader
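For reference, a minimal sketch of what that code path amounts to: loading an OBJ with a texture atlas and wrapping it in a Meshes object. The "model.obj" path is a placeholder, and the keyword values simply mirror the dataset defaults.

```python
import torch
from pytorch3d.io import load_obj
from pytorch3d.renderer import TexturesAtlas
from pytorch3d.structures import Meshes

# Load the OBJ and build a per-face texture atlas from its (possibly multiple)
# texture images. "model.obj" is a placeholder path.
verts, faces, aux = load_obj(
    "model.obj",
    load_textures=True,
    create_texture_atlas=True,
    texture_atlas_size=4,  # yields the (F, 4, 4, 3) atlas mentioned below
)

# One atlas per mesh, regardless of how many source images the OBJ references.
textures = TexturesAtlas(atlas=[aux.texture_atlas])
mesh = Meshes(verts=[verts], faces=[faces.verts_idx], textures=textures)
```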

patricklabatut avatar Jun 02 '21 17:06 patricklabatut

Thanks @patricklabatut for the reply. Yes, PyTorch3D loads them as a TexturesAtlas, a [F, 4, 4, 3] tensor. After these textures are applied to each part of the mesh, is there a way to convert them into a single TexturesUV map? I need one image per mesh for my research problem, and I am wondering if there is some way to have one map per mesh. Thank you.

ck-amrahd avatar Jun 02 '21 20:06 ck-amrahd

After these textures are applied to each part of the mesh, is there a way to convert them into a single TexturesUV map?

One could try to repack all the per-face texture images into a single large texture image and generate matching vertex UV coordinates. However, that would require as much processing as, if not more than, what follows below, including having to break the model into disjoint triangles and generally undoing a significant part of the TexturesAtlas generation...
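For illustration only, here is a rough sketch of that repacking idea (this is not an existing PyTorch3D helper). It tiles the (F, R, R, 3) atlas into one large image and emits per-face UVs, mapping each face onto the lower-left triangle of its tile, so it only approximates the atlas's barycentric sampling and does nothing about bleeding between adjacent tiles:

```python
import math
import torch
from pytorch3d.renderer import TexturesUV

def atlas_to_texture_uv(atlas: torch.Tensor) -> TexturesUV:
    """atlas: (F, R, R, 3) per-face texture tiles from a TexturesAtlas."""
    F, R = atlas.shape[0], atlas.shape[1]
    tiles_per_row = math.ceil(math.sqrt(F))
    side = tiles_per_row * R
    image = torch.zeros(side, side, 3, dtype=atlas.dtype)

    # Every face gets its own three UV vertices, i.e. the UV layout becomes
    # disjoint triangles even if the geometry stays shared.
    verts_uvs = torch.zeros(3 * F, 2)
    faces_uvs = torch.arange(3 * F, dtype=torch.int64).reshape(F, 3)

    for f in range(F):
        row, col = divmod(f, tiles_per_row)
        y0, x0 = row * R, col * R
        image[y0:y0 + R, x0:x0 + R] = atlas[f]
        # UVs of the tile's lower-left triangle, in [0, 1] with v measured from
        # the bottom of the image (the TexturesUV convention); this is only an
        # approximation of how the atlas samples the triangle.
        u0, u1 = x0 / side, (x0 + R) / side
        v0, v1 = 1.0 - (y0 + R) / side, 1.0 - y0 / side
        verts_uvs[3 * f + 0] = torch.tensor([u0, v0])
        verts_uvs[3 * f + 1] = torch.tensor([u1, v0])
        verts_uvs[3 * f + 2] = torch.tensor([u0, v1])

    return TexturesUV(maps=[image], faces_uvs=[faces_uvs], verts_uvs=[verts_uvs])
```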

I need one image per mesh for my research problem and I am wondering if there is some way to have one map per mesh.

This is technically possible but unfortunately not really available out of the box in PyTorch3D at this time. I will mark this as a possible enhancement to consider for future releases.

At a high level, one would have to create a large image to store all the model's images in non-overlapping regions (with some possible padding). The original vertex UV coordinates (typically in [0,1]^2) referencing the original images would then have to be remapped to match the specific regions where the different images have been placed.

Some of that logic is actually already implemented in TexturesUV.join_scene(), but to amalgamate all the elements of a batch. To make that work with the ShapeNet data loader, this would involve applying a number of changes to load_obj() and rolling out a variant of make_mesh_texture_atlas() (e.g. make_combined_texture_uv()) that generates a TexturesUV with a single per-model image (leveraging TexturesUV.join_scene() as noted above) and with matching vertex UV coordinates.
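As a very rough sketch of what such a make_combined_texture_uv() could do (names and details are illustrative, not existing API): pack the per-material images returned by load_obj() side by side into one map and rescale the vertex UVs into the corresponding regions. It assumes every material has a texture image, that the order of aux.texture_images matches faces.materials_idx, and that UV vertices are not shared across materials:

```python
import torch
from pytorch3d.io import load_obj
from pytorch3d.renderer import TexturesUV
from pytorch3d.structures import Meshes

# "model.obj" is a placeholder path.
verts, faces, aux = load_obj("model.obj", load_textures=True, create_texture_atlas=False)

images = list(aux.texture_images.values())  # one (H_i, W_i, 3) tensor per material
H = max(im.shape[0] for im in images)
W = sum(im.shape[1] for im in images)
canvas = torch.zeros(H, W, 3)

verts_uvs = aux.verts_uvs.clone()  # (V, 2), per-image UVs in [0, 1]^2
x_offset = 0
for m, im in enumerate(images):
    h, w = im.shape[:2]
    canvas[:h, x_offset:x_offset + w] = im  # place in the top rows, no padding
    # UV vertices referenced by faces that use material m.
    uv_idx = faces.textures_idx[faces.materials_idx == m].unique()
    u, v = aux.verts_uvs[uv_idx, 0], aux.verts_uvs[uv_idx, 1]
    # Rescale into this image's region; v is measured from the bottom of the
    # map, and the image occupies the top h rows of the canvas.
    verts_uvs[uv_idx, 0] = (u * w + x_offset) / W
    verts_uvs[uv_idx, 1] = (v * h + (H - h)) / H
    x_offset += w

textures = TexturesUV(maps=[canvas], faces_uvs=[faces.textures_idx], verts_uvs=[verts_uvs])
mesh = Meshes(verts=[verts], faces=[faces.verts_idx], textures=textures)
```

A real implementation would also need some padding between regions and a strategy for UV coordinates outside [0, 1] (texture wrapping), which the sketch above ignores.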

patricklabatut avatar Jun 02 '21 22:06 patricklabatut

Thank you @patricklabatut, that makes sense. I will wait for this feature. Thank you again.

ck-amrahd avatar Jun 02 '21 23:06 ck-amrahd

@ck-amrahd Were you able to solve the problem? I am looking to pass images and generate 3D OBJ models from them for my work.

yashpatel4900 avatar Feb 15 '22 08:02 yashpatel4900