FastPointTransformer
Questions about Fast Point Transformer
Thanks for your amazing work! I have a few questions about the implementation of this architecture. Thank you for any answers.
- In the code of LightweightSelfAttentionLayer, why is the inter-position embedding initialized as a learnable random variable?
```python
# Learnable inter-voxel positional embedding, shape
# (kernel_volume, num_heads, attn_channels), N(0, 1)-initialized.
self.inter_pos_enc = nn.Parameter(torch.FloatTensor(self.kernel_volume, self.num_heads, self.attn_channels))
nn.init.normal_(self.inter_pos_enc, 0, 1)
```
According to Fig. 3 in the paper, shouldn't it be obtained from the coordinate difference between the current voxel and its neighboring voxels? (A hypothetical coordinate-based version is sketched after this list.)
- How many neighboring voxels are indexed in LightweightSelfAttentionLayer? Is the number of neighboring voxels determined by the kernel_size input parameter? Are the neighboring voxels the valid voxels contained within the kernel? (See the second sketch below.)
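To make the first question concrete, here is a minimal sketch of the coordinate-based encoding the question has in mind, assuming the inter-position embedding were computed from 3D offsets by a small MLP. The class name, shapes, and MLP structure are assumptions for illustration, not the repository's implementation.

```python
import torch
import torch.nn as nn

class CoordinateInterPosEnc(nn.Module):
    """Hypothetical: encode the coordinate difference between the current
    voxel and each neighboring voxel with an MLP (not the repo's code)."""

    def __init__(self, num_heads: int, attn_channels: int):
        super().__init__()
        hidden = num_heads * attn_channels
        self.num_heads = num_heads
        self.attn_channels = attn_channels
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
        )

    def forward(self, rel_coords: torch.Tensor) -> torch.Tensor:
        # rel_coords: (K, 3) offsets of the K neighboring voxels relative
        # to the current voxel; returns (K, num_heads, attn_channels).
        out = self.mlp(rel_coords.float())
        return out.view(-1, self.num_heads, self.attn_channels)
```

Note that for a fixed cubic kernel the offsets take only kernel_volume discrete values, so a learnable table indexed by offset (as in the snippet quoted above) can be seen as one way to parameterize a function of the coordinate difference.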
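For the second question, the sketch below shows the usual relationship between kernel_size and the number of candidate neighbor offsets for a cubic 3D kernel; whether the layer attends over all kernel_volume candidates or only the occupied (valid) ones is exactly what is being asked.

```python
import itertools

# For a cubic 3D kernel, kernel_size fixes the number of candidate
# neighbor offsets: kernel_volume = kernel_size ** 3 (27 for 3x3x3).
kernel_size = 3
kernel_volume = kernel_size ** 3

# Relative offsets of all candidate neighbors, center voxel included.
r = kernel_size // 2
offsets = list(itertools.product(range(-r, r + 1), repeat=3))
assert len(offsets) == kernel_volume
```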