Pointnet_Pointnet2_pytorch
Some doubts about the backpropagation of PointNet++ when working through it manually.
In the PointNet++ model there are three SetAbstraction layers (each containing three convolution, BatchNorm, and ReLU layers), and in the last two the number of input channels is increased by 3, presumably because the xyz coordinates are concatenated onto the features. I am trying to do the backpropagation manually to understand how the network actually trains.

I am stuck at the first convolution layer of the third SetAbstraction layer. The gradient arriving there from backpropagation has shape (BatchSize, 256, 128, 1). The input to this convolution layer is the output of the second SetAbstraction layer after the max operation, with the channels increased by 3, so it has shape (BatchSize, 259, 128, 1). The weight of this convolution layer has shape (256, 259, 1, 1). When I compute the weight gradient it comes out correctly with shape (256, 259, 1, 1), and the input gradient comes out with shape (BatchSize, 259, 128, 1).

However, the output of the third ReLU of the second SetAbstraction layer has shape (BatchSize, 256, 64, 128), and the max operation reduces it to (BatchSize, 256, 128). How should I carry the gradient I calculated, of shape (BatchSize, 259, 128, 1), back through the max operation and then through the ReLU? My current guess is sketched below.
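To make the shapes concrete, here is a minimal, self-contained sketch of how I believe the backward path fits together. This is a hypothetical example, not the repo's code: `B`, the random tensors, and the einsum-based conv backward are for illustration only, and I am assuming the 3 extra channels are the concatenated point coordinates and sit first in the channel dimension.

```python
import torch
import torch.nn.functional as F

B = 2  # hypothetical BatchSize

# Output of the second SetAbstraction's third conv block, before the ReLU:
pre_relu = torch.randn(B, 256, 64, 128)        # (B, 256, nsample=64, npoint=128)
relu_out = F.relu(pre_relu)                    # (B, 256, 64, 128)
max_out, argmax = relu_out.max(dim=2)          # max over the 64 samples -> (B, 256, 128)

# Concatenating 3 coordinate channels gives the input of SA3's first conv:
xyz_feat = torch.randn(B, 3, 128)              # stand-in for the point coordinates
conv_in = torch.cat([xyz_feat, max_out], dim=1).unsqueeze(-1)  # (B, 259, 128, 1)

weight = torch.randn(256, 259, 1, 1)           # 1x1 conv weight of SA3's first conv
conv_out = F.conv2d(conv_in, weight)           # (B, 256, 128, 1)

# ---- manual backward ----
grad_out = torch.randn_like(conv_out)          # gradient arriving from above, (B, 256, 128, 1)

# Weight gradient: for a 1x1 conv, dW[o, i] = sum_{b, p, q} dY[b, o, p, q] * X[b, i, p, q]
grad_w = torch.einsum('bopq,bipq->oi', grad_out, conv_in).view(256, 259, 1, 1)

# Input gradient: dX[b, i, p, q] = sum_o dY[b, o, p, q] * W[o, i, 0, 0]
grad_in = torch.einsum('bopq,oikl->bipq', grad_out, weight)    # (B, 259, 128, 1)

# 1) Split off the 3 coordinate channels; only the other 256 go back to the features:
grad_xyz  = grad_in[:, :3]                     # (B, 3, 128, 1), follows the coordinate branch
grad_feat = grad_in[:, 3:].squeeze(-1)         # (B, 256, 128), now matches max_out

# 2) Max: the gradient flows only to the sample that won the max (zero elsewhere):
grad_relu = torch.zeros_like(relu_out)         # (B, 256, 64, 128)
grad_relu.scatter_(2, argmax.unsqueeze(2), grad_feat.unsqueeze(2))

# 3) ReLU: zero the gradient wherever the pre-activation was not positive:
grad_pre = grad_relu * (pre_relu > 0).float()  # (B, 256, 64, 128)

print(grad_w.shape, grad_xyz.shape, grad_pre.shape)
# torch.Size([256, 259, 1, 1]) torch.Size([2, 3, 128, 1]) torch.Size([2, 256, 64, 128])
```

Does this splitting (3 coordinate channels + 256 feature channels), scatter-to-argmax for the max operation, and masking for the ReLU match how the network actually trains? Please help me with this step. Thank you.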