lumiaomiao
Hi, I configured my environment with your instructions and got torch==1.6.0. When running the TACoS dataset, the loss is NaN, but I can run the ActivityNet dataset normally. Do you...
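A minimal sketch of a first debugging step for the NaN-loss issue above (assuming a standard PyTorch training loop; the helper name `loss_is_finite` is my own, not from the repo):

```python
import torch


def loss_is_finite(loss: torch.Tensor) -> bool:
    """Return False once the loss has gone to NaN or Inf.

    A cheap per-step check when training diverges on one
    dataset (e.g. TACoS) but not another (e.g. ActivityNet).
    """
    return bool(torch.isfinite(loss).all())


# Illustrative values only: a healthy loss vs. a diverged one.
good_loss = torch.tensor(0.37)
bad_loss = torch.tensor(float("nan"))

print(loss_is_finite(good_loss))  # True
print(loss_is_finite(bad_loss))   # False
```

In practice one would call this after each `loss.backward()` and, if it fires, rerun with `torch.autograd.set_detect_anomaly(True)` to locate the operation producing the NaN.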
Hi, could you explain the * in Table 3 of ATL? You described it as "* means we only use the boxes of the detection results", but how do you use...
Thank you for your work. I have a question about the sequence embedding; the screenshot is from transformer.py. When you get the sequence embedding, the position embedding has already been added to...
Hi, I am confused about the implementation of the loss function used in the paper. This is the function: [screenshot], and I only found the following code: [screenshot]. Could you share...
Thanks for the great work! 1. Did you test the limit on video length? In the inference phase, to use middle-frame attention guidance, all video clips of a long video need to...