video-diffusion-pytorch

Reason for combining rotary and relative positional embedding?

Open · oxjohanndiep opened this issue 2 years ago · 1 comment

Hi,

Awesome work, first of all. Is there a reason why you combine both rotary and relative positional embeddings in your Attention class? I would assume one of the two is enough to incorporate the positions of the frames into the attention model?
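For context, the two mechanisms enter attention at different points: rotary embeddings rotate the query/key vectors so that relative offsets show up in the dot product, while a relative position bias is a learned term added directly to the attention logits. A minimal sketch (hypothetical code, not the repo's actual Attention class) of how the two coexist:

```python
import torch

def rotary_embed(x, base=10000.0):
    # x: (..., seq, dim) with even dim; rotate each (even, odd) channel pair
    # by a position-dependent angle, as in RoPE
    seq, dim = x.shape[-2], x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    freqs = torch.einsum('i,j->ij', torch.arange(seq).float(), inv_freq)
    cos, sin = freqs.cos(), freqs.sin()      # each (seq, dim/2)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def attention(q, k, v, rel_bias):
    # q, k, v: (batch, heads, seq, dim_head); rel_bias: (heads, seq, seq)
    q, k = rotary_embed(q), rotary_embed(k)  # rotary acts inside the dot product
    sim = torch.einsum('bhid,bhjd->bhij', q, k) * q.shape[-1] ** -0.5
    sim = sim + rel_bias                     # relative bias acts on the logits
    attn = sim.softmax(dim=-1)
    return torch.einsum('bhij,bhjd->bhid', attn, v)
```

Both inject relative position information, just through different routes, which is why using only one of them seems plausible at first glance.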

oxjohanndiep avatar Jul 02 '22 14:07 oxjohanndiep

Same doubt here.

HiSultryMan avatar Aug 10 '22 08:08 HiSultryMan