
Mixed-Precision

Open srahmatian opened this issue 3 years ago • 3 comments

Hello, I am using PyTorch3D to create my dataset for training a neural network. As you know, tensors in PyTorch3D are of type torch.float32. On the other hand, PyTorch Lightning has a mixed-precision option (16-bit and 32-bit) for training models, and this feature can significantly increase training speed.
I was wondering whether it is possible to benefit from that feature when I am creating the data for each batch with PyTorch3D during training. Has anyone had a similar experience?

The most important point is that I don't want to create my whole dataset first and then train the model; I want to create each batch (using PyTorch Lightning's DataLoader) while the model is training.

srahmatian avatar Oct 22 '21 21:10 srahmatian
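One pattern that typically works here (a minimal sketch, assuming a recent PyTorch with `torch.autocast`; the model and `make_batch` are hypothetical stand-ins, not PyTorch3D code) is to keep the on-the-fly generated data in float32 and let autocast downcast inside the forward pass:

```python
import torch
import torch.nn as nn

def make_batch(batch_size=4, features=8):
    # Hypothetical stand-in for data generated on the fly (e.g. by PyTorch3D
    # ops), which stays in float32; autocast handles downcasting in the model.
    return torch.randn(batch_size, features, dtype=torch.float32)

model = nn.Linear(8, 2)

batch = make_batch()
# CPU autocast uses bfloat16 here; on GPU you would use
# device_type="cuda" with dtype=torch.float16.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(batch)

print(batch.dtype)  # inputs stay torch.float32
print(out.dtype)    # the linear layer ran in reduced precision
```

PyTorch Lightning's `precision=16` (or `precision="bf16"`) Trainer option wraps the forward pass in an autocast context in essentially this way, so float32 batches coming out of a DataLoader are generally fine as-is.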

Hi @srahmatian, apologies for the late response. It depends on which PyTorch3D ops you want to use. Anything with a custom CUDA implementation might not work, e.g. the mesh rasterizer expects float32. In addition, in the Meshes class the default/initial values for auxiliary tensors use float32. We would likely need to make several changes to support mixed precision throughout the library!

nikhilaravi avatar Nov 15 '21 23:11 nikhilaravi
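A common workaround while full support lands (a sketch, not PyTorch3D's API: `float32_only_op` is a hypothetical stand-in for an op, such as the rasterizer, that requires float32 inputs) is to disable autocast locally and cast back up to float32 around that op:

```python
import torch
import torch.nn as nn

def float32_only_op(t):
    # Stand-in for a custom CUDA kernel that only accepts float32 inputs.
    assert t.dtype == torch.float32, "this op only accepts float32"
    return t * 2

model = nn.Linear(8, 8)
x = torch.randn(4, 8)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    h = model(x)  # runs in bfloat16 under autocast
    # Locally disable autocast and cast inputs up before the float32-only op.
    with torch.autocast(device_type="cpu", enabled=False):
        out = float32_only_op(h.float())

print(out.dtype)  # torch.float32
```

The rest of the network keeps the speed benefit of reduced precision, and only the unsupported op pays the cost of running in float32.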

Thanks for your response. Actually, I am using the Volumes class and the ImplicitRenderer.

It would be great if there were an option for mixed precision in future versions of PyTorch3D.

srahmatian avatar Nov 16 '21 18:11 srahmatian

This PR fixes fp16 support in the implicit renderer: https://github.com/facebookresearch/pytorch3d/pull/946/files

Please review and approve the merge.

PeterL1n avatar Nov 28 '21 06:11 PeterL1n