Mixed-Precision
Hello,
I am using PyTorch3D to create my dataset for training a neural network. As you know, tensors in PyTorch3D have the type torch.float32. PyTorch Lightning, on the other hand, has a mixed-precision option (16-bit and 32-bit) for training models, and this feature can greatly increase training speed.
I was wondering whether it is possible to benefit from that feature when I am creating the data for each batch with PyTorch3D during training. Has anyone had the same experience?
The most important point is that I don't want to create all of my data up front and then train the model. I want to generate the dataset (using PyTorch Lightning's DataLoader) while the model is training, as in the sketch below.
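A minimal sketch of that setup, assuming PyTorch Lightning's AMP support: the Dataset builds each sample at fetch time, and the Trainer's precision flag enables mixed precision for the training step. The `make_mesh_sample` helper, the commented-out model, and all sizes are hypothetical placeholders, not from this thread.

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import Dataset, DataLoader


def make_mesh_sample(idx):
    # Hypothetical placeholder for real PyTorch3D data generation
    # (e.g. rendering a mesh); returns float32 tensors, which is what
    # PyTorch3D ops produce by default.
    return torch.rand(3, 64, 64), torch.rand(1, 64, 64)


class OnTheFlyDataset(Dataset):
    """Generates each sample on the fly instead of precomputing
    the whole dataset before training."""

    def __init__(self, num_samples):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        return make_mesh_sample(idx)


loader = DataLoader(OnTheFlyDataset(10_000), batch_size=8, num_workers=4)

# precision=16 turns on PyTorch AMP (autocast + GradScaler) inside
# Lightning; newer Lightning releases spell this precision="16-mixed".
trainer = pl.Trainer(max_epochs=10, precision=16)
# trainer.fit(model, loader)  # `model` is a LightningModule you define
```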
Hi @srahmatian, apologies for the late response. It depends which PyTorch3D ops you want to use. Anything with a custom CUDA implementation might not work, e.g. the mesh rasterizer expects float32. In addition, in the Meshes class the default/initial values for auxiliary tensors use float32. We would likely need to make several changes to support mixed precision throughout the library!
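For concreteness, a minimal sketch (not from this thread) of the constraint just described: keep the rasterizer's inputs in float32 even when the surrounding training step runs under autocast. The camera and raster settings are illustrative defaults.

```python
import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, RasterizationSettings,
)
from pytorch3d.structures import Meshes
from pytorch3d.utils import ico_sphere

device = torch.device("cuda")
mesh = ico_sphere(level=2, device=device)  # Meshes object, float32 verts

rasterizer = MeshRasterizer(
    cameras=FoVPerspectiveCameras(device=device),
    raster_settings=RasterizationSettings(image_size=128),
)

with torch.autocast(device_type="cuda", dtype=torch.float16):
    # The rasterizer's custom CUDA kernel expects float32, so cast
    # explicitly; this guards against half-precision tensors produced
    # by earlier autocast ops.
    verts = mesh.verts_padded().float()
    faces = mesh.faces_padded()
    fragments = rasterizer(Meshes(verts=verts, faces=faces))
```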
Thanks for your response. Actually, I am using the Volumes class and the implicit renderer.
It would be great if there were an option to use mixed precision in future versions of PyTorch3D.
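For reference, a minimal sketch (under the same caveat the maintainers raised) of that kind of setup: volumetric rendering under autocast while the Volumes tensors stay in float32, since the custom ops may not accept half precision. It uses VolumeRenderer, which wraps ImplicitRenderer for Volumes inputs; grid sizes and raysampler settings are illustrative only.

```python
import torch
from pytorch3d.renderer import (
    EmissionAbsorptionRaymarcher, FoVPerspectiveCameras,
    NDCGridRaysampler, VolumeRenderer,
)
from pytorch3d.structures import Volumes

device = torch.device("cuda")

# Float32 density/feature grids, as PyTorch3D expects.
densities = torch.rand(1, 1, 64, 64, 64, device=device)
features = torch.rand(1, 3, 64, 64, 64, device=device)
volumes = Volumes(densities=densities, features=features, voxel_size=0.1)

renderer = VolumeRenderer(
    raysampler=NDCGridRaysampler(
        image_width=64, image_height=64,
        n_pts_per_ray=96, min_depth=0.1, max_depth=3.0,
    ),
    raymarcher=EmissionAbsorptionRaymarcher(),
)

with torch.autocast(device_type="cuda", dtype=torch.float16):
    images, _ = renderer(
        cameras=FoVPerspectiveCameras(device=device), volumes=volumes
    )
```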
This PR fixes fp16 in the implicit renderer: https://github.com/facebookresearch/pytorch3d/pull/946/files
Please approve the merge.