
The softmax attention does not use a cosine similarity

Open · songlilucky opened this issue 2 years ago · 1 comment

Thanks for your work. According to your code, the softmax attention does not use a cosine similarity. Did I get something wrong?

songlilucky avatar Nov 10 '22 06:11 songlilucky

    # Dot product attention along cameras
    dot = self.scale * torch.einsum('b n Q d, b n K d -> b n Q K', q, k)
    dot = rearrange(dot, 'b n Q K -> b Q (n K)')
    att = dot.softmax(dim=-1)

songlilucky avatar Nov 10 '22 07:11 songlilucky
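For comparison, a cosine-similarity attention would L2-normalize `q` and `k` along the feature dimension before taking the dot product, so each score equals the cosine of the angle between a query and a key (bounded in [-1, 1]), whereas the snippet above computes a plain scaled dot product on unnormalized vectors. A minimal sketch of the difference, with hypothetical shapes and `rearrange` replaced by an equivalent `permute`/`reshape` (not the repo's actual code):

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes: batch b, cameras n, queries Q, keys K, feature dim d
b, n, Q, K, d = 2, 3, 4, 5, 8
q = torch.randn(b, n, Q, d)
k = torch.randn(b, n, K, d)
scale = d ** -0.5

# Plain scaled dot-product scores, as in the snippet above
dot = scale * torch.einsum('b n Q d, b n K d -> b n Q K', q, k)

# Cosine-similarity variant: normalize q and k to unit length first,
# so each score is a cosine and lies in [-1, 1]
q_norm = F.normalize(q, dim=-1)
k_norm = F.normalize(k, dim=-1)
cos = torch.einsum('b n Q d, b n K d -> b n Q K', q_norm, k_norm)

# Flatten cameras into the key axis ('b n Q K -> b Q (n K)') and softmax
att = cos.permute(0, 2, 1, 3).reshape(b, Q, n * K).softmax(dim=-1)
```

The softmax itself is identical in both cases; the only difference is whether the scores fed into it are raw scaled dot products or normalized (cosine) similarities.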