
Batches of varying size for Chamfer Loss

Open ahariri13 opened this issue 5 years ago • 2 comments

For both the Chamfer loss and the EMD loss I have batches of point clouds of different sizes. Since it's not practical to use a batch size of 1, is it correct to stack all the point clouds (say 400 sets) into a single (n, k) tensor and feed that to the Chamfer/EMD loss? Otherwise I have to pad my point clouds to a common length.
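The two layouts mentioned above can be sketched in a few lines of PyTorch. This is a hypothetical illustration with three small clouds; the names `padded`, `mask`, `stacked`, and `batch_idx` are mine, not from the library:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Three point clouds with 2, 4 and 3 points each (d = 3 coordinates).
clouds = [torch.randn(n, 3) for n in (2, 4, 3)]
lengths = torch.tensor([c.shape[0] for c in clouds])

# Option 1: pad to a common length -> (batch, max_points, 3),
# with a boolean mask marking the real (non-padded) points.
padded = pad_sequence(clouds, batch_first=True)   # shape (3, 4, 3)
mask = torch.arange(padded.shape[1])[None, :] < lengths[:, None]

# Option 2: stack everything into one big (n, 3) tensor, keeping a
# per-point batch index so each point remembers its source cloud.
stacked = torch.cat(clouds, dim=0)                # shape (9, 3)
batch_idx = torch.repeat_interleave(torch.arange(len(clouds)), lengths)
```

The padded form works with losses that expect a fixed-size batch dimension; the stacked form is the one discussed in the replies below.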

ahariri13 avatar Jun 28 '20 17:06 ahariri13

Hello @ahariri13. For this loss, yes, you have to pad your point clouds to a common size in order to use it. These days I am also working with point clouds of various resolutions, and like you, I stack everything into one big cloud. Unfortunately I am not working with the Chamfer distance at the moment, so I haven't converted this loss to the stacked point-cloud format yet. If you are familiar with CUDA kernels, a PR is more than welcome. If you just want to get the job done, I suggest using KNN from torch-cluster and calculating the distances yourself, though of course it will not be as fast as a dedicated kernel.
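Lacking a kernel, the per-cloud computation suggested above can be sketched with brute-force nearest neighbours via `torch.cdist` (my own illustration, not the library's implementation; torch-cluster's KNN would do the same lookup faster on large clouds):

```python
import torch

def chamfer_stacked(x, y, batch_x, batch_y):
    """Symmetric Chamfer distance between two stacked point-cloud batches.

    x: (n, d) points, batch_x: (n,) cloud index for each point of x;
    y and batch_y are laid out the same way. Returns the mean over clouds
    of the two-directional nearest-neighbour distance.
    """
    n_clouds = int(batch_x.max()) + 1
    total = 0.0
    for b in range(n_clouds):
        xb = x[batch_x == b]              # points belonging to cloud b
        yb = y[batch_y == b]
        d = torch.cdist(xb, yb)           # (n_b, m_b) pairwise distances
        # nearest neighbour in each direction, averaged over points
        total = total + d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
    return total / n_clouds
```

The loop over clouds is the price of the stacked format without a kernel; replacing `torch.cdist` with a KNN query over the whole stacked tensor (restricted by batch index) is what torch-cluster would buy you.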

justanhduc avatar Jun 29 '20 07:06 justanhduc

Hi @ahariri13, not sure if you are still interested, but I have written a kernel for the Chamfer loss that works with stacked point clouds instead. The code can be found here.

justanhduc avatar Sep 17 '20 03:09 justanhduc