Compatibility of MeshRasterizerOpenGL on previous PyTorch versions
🚀 Feature
Backward compatibility of MeshRasterizerOpenGL for previous PyTorch versions.
Motivation
Hi, thank you for bringing interesting new features to PyTorch3D.
I'm thrilled to try out the new rasterizer, expecting performance gains that may speed up my research.
The release notes say "There are builds for PyTorch 1.12.0 ...", but it would be nice if users of previous PyTorch versions could also get their hands on the new rasterizer.
Pitch
I tried to use MeshRasterizerOpenGL in a virtual environment with PyTorch 1.10.0+cu113 installed, and got an error at
https://github.com/facebookresearch/pytorch3d/blob/d35781f2d79ffe5a895025ec386c47f7d77c085c/pytorch3d/renderer/opengl/rasterizer_opengl.py#L501
saying "RuntimeError: expected scalar type long int but found float".
After looking into the documentation on torch.where, I realized that the function only accepts certain combinations of scalar and tensor dtypes as its inputs.
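For context, here is a hypothetical minimal reproduction of this class of failure, assuming (as the error message suggests) that it comes from mixing a Python int scalar with a float tensor in torch.where on pre-1.12 releases:

```python
import torch

coords = torch.rand(4)  # float32 tensor
mask = coords > 0.5
# On PyTorch 1.10 this raises:
# RuntimeError: expected scalar type long int but found float
out = torch.where(mask, -1, coords)
```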
Casting the arguments as follows:

```python
barycentric_coords = torch.where(
    barycentric_coords == 3,
    -1.0,                          # int -> float
    barycentric_coords.double(),   # FloatTensor -> DoubleTensor
).float()
```

resolved the error, but I think the argument types could be handled more robustly depending on the PyTorch version used in the context, as in the sketch below.
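For illustration, a minimal sketch of such version-dependent handling. The helper name where_scalar and the version check are my own (hypothetical, not part of PyTorch3D); the sketch assumes PyTorch >= 1.12 type-promotes a Python scalar branch in torch.where, while earlier versions need both branches to be tensors of matching dtype.

```python
import torch

# Parse the major/minor version, ignoring local suffixes like "+cu113".
_TORCH_GE_1_12 = tuple(
    int(v) for v in torch.__version__.split("+")[0].split(".")[:2]
) >= (1, 12)


def where_scalar(condition, scalar, other):
    """torch.where with a scalar branch, portable across PyTorch versions."""
    if _TORCH_GE_1_12:
        # PyTorch >= 1.12 type-promotes the Python scalar directly.
        return torch.where(condition, scalar, other)
    # Older versions: wrap the scalar in a 0-dim tensor with the same
    # dtype and device as `other`, so both branches have matching types.
    return torch.where(condition, other.new_full((), scalar), other)
```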
Thank you for reading.
The comment in the release note is not relevant here: "There are builds for PyTorch 1.12.0" was just news. We still support PyTorch going back to 1.8.0, and there are conda builds for those versions. That wasn't mentioned because it isn't news.
Thank you for pointing out the problem. Changing the -1 to -1.0, or to barycentric_coords.new_full((), -1.0), might fix it without an extra intermediate tensor.
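For concreteness, a sketch of how the suggested change might look applied to that line (assuming the surrounding code matches the snippet above):

```python
# new_full returns a 0-dim tensor with the same dtype and device as
# barycentric_coords, so torch.where compares matching tensor types on
# every supported PyTorch version.
barycentric_coords = torch.where(
    barycentric_coords == 3,
    barycentric_coords.new_full((), -1.0),
    barycentric_coords,
)
```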
Thank you.
What I tried to point out at the beginning is that the code in the recent release assumes torch.where behaves as described in the v1.12.0 documentation, which differs from earlier versions.
Again, thank you for responding quickly.