Baifeng Shi


Hi, thanks for the interest. The I3D features we use are extracted every 16 frames. The official repo is https://github.com/deepmind/kinetics-i3d. But if you are using 8-frame features, then I think...
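
For reference, here is a minimal sketch of what "extracted every 16 frames" looks like, assuming a generic `i3d` callable that maps a 16-frame clip to one feature vector (the names are hypothetical; the official repo has its own TensorFlow pipeline):

```python
import numpy as np

def extract_clip_features(frames, i3d, clip_len=16):
    """Split a video (T, H, W, 3) into non-overlapping 16-frame clips
    and run a pretrained I3D network on each clip."""
    features = []
    for start in range(0, len(frames) - clip_len + 1, clip_len):
        clip = frames[start:start + clip_len]   # (16, H, W, 3)
        features.append(i3d(clip))              # one feature vector per clip
    return np.stack(features)                   # (num_clips, feat_dim)
```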

Hi, thanks for your interest in our work. In your case, do you mean there are no predictions for any label, or just for one label? And is the loss...

I just downloaded the code and data and tried as you said (only changed config.DATASET_NAME and nothing else). Everything seems normal. Only in the first 10 epochs there are no...

I just double-checked, and it seems it's the same version on my server...

Hi, thanks for the interest! 1. For a frame with attention=a, we use the Gaussian distribution N(a, I) as the prior. The KLD loss is the KL divergence between the...
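
As a rough illustration of such a term (the variable names below are assumptions, not the repo's actual ones), the KL divergence of a diagonal Gaussian N(mu, sigma^2) against the prior N(a, I) has a simple closed form:

```python
import torch

def kld_to_prior(mu, log_var, a):
    """KL( N(mu, diag(sigma^2)) || N(a, I) ), summed over feature
    dimensions and averaged over the batch."""
    var = log_var.exp()
    kld = 0.5 * (var + (mu - a) ** 2 - 1.0 - log_var)
    return kld.sum(dim=-1).mean()
```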

Thank you! Yes, the attention module takes the feature of each frame and outputs the single-frame attention. As for classification, the current pipeline takes the attention-weighted average of the features...
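
A minimal sketch of that pooling step, assuming the per-frame attention scores are simply normalized to sum to one over time (module and variable names are illustrative, not the repo's actual ones):

```python
import torch

def attention_pooled_logits(frame_feats, attn, classifier):
    """frame_feats: (B, T, D) per-frame features
    attn:          (B, T) non-negative single-frame attention
    classifier:    module mapping (B, D) -> (B, num_classes)"""
    weights = attn / attn.sum(dim=1, keepdim=True).clamp(min=1e-8)
    pooled = (weights.unsqueeze(-1) * frame_feats).sum(dim=1)   # (B, D)
    return classifier(pooled)
```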

Hi, our model trains and tests on features extracted beforehand. If you want to test on another dataset, you need to first extract the features using a pretrained model. I...
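
In other words, the workflow is "extract once, then train/test on the saved features." A sketch of the loading side, assuming one feature array is saved per video (the file layout and shapes are assumptions, not the repo's actual format):

```python
import numpy as np
import torch

def load_video_features(npy_path):
    """Load pre-extracted clip features, e.g. saved as (num_clips, feat_dim)."""
    feats = np.load(npy_path)
    return torch.from_numpy(feats).float()
```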

Hi Wasif, thanks for pointing this out! I think you could create a directory `/experiments/THUMOS14/test/` and it will be fine. Same for ActivityNet12. I've updated the repo accordingly, please check,...

This link works: https://www.cs.rice.edu/~vo9/sbucaptions/sbu-captions-all.tar.gz

Hi, thanks for the response! I used 4 GPUs with batch_size_per_gpu=200, i.e., a total batch size of 800, which is not far from the 1024 you used, so I think this shouldn't...