Xiaoyu Shi
https://github.com/lisiyao21/AnimeInterp/blob/85b17982981ec26cec6352c10fab5a0a1cac8cbb/models/AnimeInterp.py#L55 Hi, I see that you choose *summation* mode when applying softsplat. Did you ablate other modes? Thx!
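For context on what the mode choice changes: a minimal 1D toy sketch of forward splatting in numpy, with illustrative names (`splat_1d` is not softmax-splatting's API). It only contrasts summation against a weight-normalized average when several source pixels land on the same target.

```python
import numpy as np

def splat_1d(values, flow, width, mode="summation"):
    """Toy forward splat: push each source pixel to its flow target.

    'summation' accumulates contributions; 'average' divides by the
    accumulated weight, so overlapping pixels are averaged, not added.
    """
    out = np.zeros(width)
    weight = np.zeros(width)
    for x, v in enumerate(values):
        tx = int(round(x + flow[x]))  # nearest target pixel
        if 0 <= tx < width:
            out[tx] += v
            weight[tx] += 1.0
    if mode == "average":
        out = np.where(weight > 0, out / np.maximum(weight, 1e-8), 0.0)
    return out

# two source pixels map onto the same target pixel, so modes diverge
vals = np.array([1.0, 3.0])
flow = np.array([1.0, 0.0])  # both land on index 1
print(splat_1d(vals, flow, 2, "summation"))  # [0. 4.]
print(splat_1d(vals, flow, 2, "average"))    # [0. 2.]
```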
Hi, thx for sharing the code! I wonder what norm_context refers to in the paper? https://github.com/lucidrains/perceiver-pytorch/blob/3b70ebee00c66f15b38c5980f4275f744a433895/perceiver_pytorch/perceiver_io.py#L125
Hi. There seems to be a small typo in your README's training command: --resume auto
Hi. Nice work! You mention that "we crawled the Youtube video clips and divided them according to the scene (7.3K scenes)". What tool did you use for the scene detection?...
Hi, nice work! I am a bit confused about Gumbel softmax. You mention in your paper that Gumbel softmax is used during training. I wonder if it can be replaced...
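For reference, the trick being asked about can be sketched in a few lines of numpy (a toy stand-in, not the PyTorch `gumbel_softmax` implementation the repo would use): adding Gumbel(0, 1) noise to the logits and taking a temperature-scaled softmax yields a differentiable approximation to a categorical sample.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Sample a relaxed one-hot vector via the Gumbel-softmax trick.

    g ~ Gumbel(0, 1) is obtained as -log(-log(U)) with U ~ Uniform(0, 1);
    softmax((logits + g) / tau) approaches a hard one-hot as tau -> 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()

# low temperature: the sample is close to a hard one-hot vector
rng = np.random.default_rng(0)
sample = gumbel_softmax(np.array([1.0, 2.0, 0.5]), tau=0.1, rng=rng)
print(sample.round(3), sample.sum())
```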
https://github.com/gengshan-y/VCN/blob/00c4befdbdf4e42050867996a6f686f52086e01a/dataloader/flow_transforms.py#L186-L189 Hi, could you please explain what this exit condition means? thx!
Hi, I am confused about Figure 1 in your paper. It seems that (c) shows intermediate feature maps? But what do the values (i.e. the range [0, 5000]) mean? And how...
https://github.com/lyh-18/EQVI/blob/7c6a00d2d824bc84f140261a1308eeb5e1b5626f/models/AcSloMoS_scope_unet_residual_synthesis_edge_LSE.py#L64 Hi. I am confused about your backwarp implementation. To my understanding, for a 2x2 image, the coordinates for the four pixels are (-0.5,-0.5), (-0.5, 0.5), (0.5, -0.5), (0.5, 0.5),...
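The coordinate convention at issue maps to `grid_sample`'s `align_corners` flag; a small numpy sketch of the two normalizations (illustrative helper name, not the repo's code) shows where the question's (-0.5, 0.5) centers for a 2x2 image come from.

```python
import numpy as np

def normalized_coords(width, align_corners):
    """Pixel-center x-coordinates in grid_sample's [-1, 1] convention.

    align_corners=True: pixels 0 and W-1 map exactly to -1 and 1.
    align_corners=False: -1 and 1 are the outer image edges, so the
    pixel centers land at (2*x + 1) / W - 1.
    """
    x = np.arange(width, dtype=float)
    if align_corners:
        return 2.0 * x / (width - 1) - 1.0
    return (2.0 * x + 1.0) / width - 1.0

# for a width-2 image the two conventions disagree on the centers
print(normalized_coords(2, align_corners=True))   # [-1.  1.]
print(normalized_coords(2, align_corners=False))  # [-0.5  0.5]
```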
Hi, nice work! I see that the 1D relative position encoding has dimension 2*W-1. Why is it not W? And I also wonder if this makes STTR unable to...
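On the dimension question: the 2*W-1 size falls out of counting pairwise offsets, which a tiny numpy check makes concrete (generic illustration, not STTR's code).

```python
import numpy as np

# Relative positions between W locations take values in
# [-(W-1), ..., W-1], i.e. 2*W - 1 distinct offsets. This is why a
# 1D relative position table indexes 2*W - 1 entries rather than W.
W = 4
pos = np.arange(W)
rel = pos[None, :] - pos[:, None]  # (W, W) matrix of pairwise offsets
offsets = np.unique(rel)
print(offsets)       # [-3 -2 -1  0  1  2  3]
print(len(offsets))  # 7 == 2*W - 1
```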
https://github.com/princeton-vl/RAFT/blob/aac9dd54726caf2cf81d8661b07663e220c5586d/core/datasets.py#L214 Hi. I notice that you leave out the FlyingThings3D final pass in the Sintel stage. Is this because the choice is empirically better?