PointTransformerV2

What is the difference between GVA and VA with shared plane?

Open · EricLina opened this issue 1 year ago · 1 comment

GVA is proposed in PTV2 as a new method, but its implementation looks equivalent to Vector Attention with shared planes in PTV1.

Below is a comparison of the two aggregation steps:

PTV1:

    w = self.softmax(w)  # (n, nsample, c // s): one weight per shared-plane group
    n, nsample, c = x_v.shape
    s = self.share_planes
    # v * A: each weight is broadcast across the s planes, i.e. shared by s channels
    x = ((x_v + p_r).view(n, nsample, s, c // s) * w.unsqueeze(2)).sum(1).view(n, c)
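
To make the grouping explicit, the aggregation above can be rewritten as a single einsum. This is a self-contained sketch of mine, not code from the PTV1 repo; the random tensors and sizes are placeholders for `x_v + p_r` and the softmaxed `w`:

    import torch

    n, nsample, c, s = 2, 16, 32, 4       # hypothetical sizes; s = share_planes
    v = torch.randn(n, nsample, c)        # stands in for x_v + p_r
    w = torch.randn(n, nsample, c // s)   # stands in for the softmaxed weights

    # PTV1 aggregation as quoted above
    x_ref = (v.view(n, nsample, s, c // s) * w.unsqueeze(2)).sum(1).view(n, c)

    # The same computation as one einsum: w is broadcast over the plane axis p,
    # so the interleaved channels {b, b + c//s, b + 2*c//s, ...} share weight w[..., b]
    x_einsum = torch.einsum("n k p b, n k b -> n p b",
                            v.view(n, nsample, s, c // s), w).reshape(n, c)

    assert torch.allclose(x_ref, x_einsum)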

PTV2:

    value = einops.rearrange(value, "n ns (g i) -> n ns g i", g=self.groups)  # split c into g contiguous groups
    feat = torch.einsum("n s g i, n s g -> n g i", value, weight)  # one weight per group, summed over neighbors
    feat = einops.rearrange(feat, "n g i -> n (g i)")  # back to (n, c)

They are functionally equivalent!
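
For what it's worth, here is a small self-contained check of that claim (my own sketch, not code from either repo). With `groups` set to `c // share_planes`, the two aggregations agree up to a fixed permutation of the channel axis: PTV1 shares each weight across `s` interleaved channels, while PTV2 shares it across a contiguous block of `s` channels:

    import torch
    import einops

    n, nsample, c, s = 2, 16, 32, 4   # hypothetical sizes; s = share_planes
    g = c // s                        # choose groups so the group counts match
    v = torch.randn(n, nsample, c)    # stands in for x_v + p_r / value
    w = torch.randn(n, nsample, g)    # stands in for the softmaxed weights

    # PTV1: weight w[..., b] is shared by the interleaved channels {b, b+g, b+2g, ...}
    out_v1 = (v.view(n, nsample, s, g) * w.unsqueeze(2)).sum(1).view(n, c)

    # PTV2 applied to a channel-permuted copy of v, regrouping interleaved -> contiguous
    # (in the einsum string, "s" is the neighbor axis nsample, not share_planes)
    perm = torch.arange(c).view(s, g).t().reshape(-1)
    v2 = einops.rearrange(v[:, :, perm], "n ns (g i) -> n ns g i", g=g)
    out_v2 = torch.einsum("n s g i, n s g -> n g i", v2, w)
    out_v2 = einops.rearrange(out_v2, "n g i -> n (g i)")

    # Identical up to the same fixed channel permutation
    assert torch.allclose(out_v1[:, perm], out_v2)

Since the value projection is a learned linear layer, a fixed channel permutation can be absorbed into its weights, so the two aggregation steps do appear interchangeable; if there is a real difference, it would have to lie outside these snippets (for example in how the attention weights are produced).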

I may well be wrong, since I don't fully understand it; could you explain the difference between the two?

EricLina · May 05 '23 12:05

I have the same question; I don't understand the fundamental difference between these two... Do you understand it now?

leungMr · Sep 21 '23 14:09