FastPointTransformer

Questions about Fast Point Transformer

Open net-F opened this issue 1 year ago • 0 comments

Thanks for your amazing work! I have a few questions about the implementation of this architecture. Thank you for any answers.

  1. In the code of LightweightSelfAttentionLayer, why is the inter position embedding (`inter_pos_enc`) initialized as a learnable random variable?

```python
self.inter_pos_enc = nn.Parameter(
    torch.FloatTensor(self.kernel_volume, self.num_heads, self.attn_channels)
)
nn.init.normal_(self.inter_pos_enc, 0, 1)
```

According to Fig. 3 in the paper, shouldn't it be obtained from the coordinate difference between the current voxel and its neighboring voxels? (I try to sketch my reading of this after the questions below.)

  2. How many neighboring voxels are indexed in LightweightSelfAttentionLayer? Is the number of neighboring voxels determined by the `kernel_size` input parameter? Are the neighboring voxels the valid (occupied) voxels contained in the kernel? (A second sketch of how I picture the kernel neighborhood follows below.)
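
To make question 1 concrete, here is a minimal standalone sketch of how I read the learnable-table design. The shapes follow the snippet above, but the setup itself is my own assumption, not the actual module:

```python
import torch
import torch.nn as nn

# Minimal sketch (assumption, not the actual module): a learnable table
# holding one positional embedding per discrete kernel offset.
kernel_size, num_heads, attn_channels = 3, 8, 16
kernel_volume = kernel_size ** 3  # 27 relative offsets for a 3x3x3 kernel

inter_pos_enc = nn.Parameter(
    torch.empty(kernel_volume, num_heads, attn_channels)
)
nn.init.normal_(inter_pos_enc, mean=0.0, std=1.0)

# Voxel coordinates are integers, so the coordinate difference between a
# center voxel and any neighbor inside the kernel is one of kernel_volume
# discrete offsets. Indexing the table by that offset id therefore encodes
# the (quantized) coordinate difference implicitly.
offset_id = 13  # hypothetical id of the (0, 0, 0) offset in a 3x3x3 kernel
pos_embedding = inter_pos_enc[offset_id]  # shape: (num_heads, attn_channels)
```

If this reading is right, the random normal init is only a starting point: each relative offset gets its own embedding that is learned end to end, rather than being computed on the fly from continuous coordinate differences as Fig. 3 might suggest.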
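
And for question 2, a sketch of how I understand the neighborhood. This is hypothetical; I assume a cubic kernel of side `kernel_size` over a sparse voxel grid:

```python
import itertools

# Hypothetical sketch: enumerate the relative offsets of a cubic kernel.
kernel_size = 3
r = kernel_size // 2
offsets = list(itertools.product(range(-r, r + 1), repeat=3))
assert len(offsets) == kernel_size ** 3  # kernel_volume = 27 for kernel_size = 3

# Candidate neighbors of one voxel. In a sparse tensor, only the occupied
# subset of these coordinates exists and would be gathered by the kernel map.
center = (4, 7, 2)  # an example voxel coordinate
neighbors = [tuple(c + o for c, o in zip(center, off)) for off in offsets]
```

So my guess is that at most `kernel_size ** 3` neighbors are indexed, and only those that are actually occupied contribute. Is that correct?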

net-F · Jan 04 '24 08:01