ActionCLIP
This is the official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition"
Thanks for your amazing work! The KLLoss in this implementation is divided by the feature dimension (times batch_size in the [code](https://github.com/sallymmx/ActionCLIP/blob/31c34df17dce917d67127b7fb155922c4744f680/utils/KLLoss.py#L27)) instead of by the batch size. The [PyTorch docs](https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html) point out that `reduction...
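The reduction difference the commenter describes can be sketched as follows. This is a minimal illustration of PyTorch's `kl_div` reductions with synthetic logits (the tensor shapes and values are not from ActionCLIP): `'mean'` divides the summed divergence by the total element count (batch times classes), while `'batchmean'` divides only by the batch size, which matches the mathematical definition.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch_size, num_classes = 4, 8  # hypothetical shapes for illustration

# Synthetic similarity logits and a synthetic target distribution.
logits = torch.randn(batch_size, num_classes)
target = F.softmax(torch.randn(batch_size, num_classes), dim=-1)

# kl_div expects log-probabilities as input and probabilities as target.
log_probs = F.log_softmax(logits, dim=-1)

# 'batchmean': sum of KL divergence divided by batch size (per-sample mean).
kl_batchmean = F.kl_div(log_probs, target, reduction='batchmean')

# 'mean': sum divided by batch_size * num_classes, i.e. an extra
# division by the feature dimension, as noted in the issue.
kl_mean = F.kl_div(log_probs, target, reduction='mean')

# The two reductions differ exactly by a factor of num_classes.
assert torch.isclose(kl_batchmean, kl_mean * num_classes)
```

In other words, using `'mean'` (or an equivalent manual division) silently scales the loss down by the feature dimension, which changes the effective learning rate rather than the optimum.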
How do I install this API?
Thank you for sharing; this is great work. I noticed that the 30-crop testing script used for the best performance has not been released. Will this part be released?
I'm very interested in the work you have done. Could you provide pre-trained models for UCF101 and HMDB51? Thanks a lot!
Hello, your work has benefited me greatly. I read your paper closely and am very interested in the HMDB51 and UCF101 results in the appendix referenced in Section 4.5! Where can I learn more about the material covered in the appendix? I look forward to your reply, and happy Lunar New Year!
I want to know if I can get some visualized results.
This PR integrates two TinyCLIP ViT models into the existing model framework with minimal changes. This is possible because TinyCLIP, like CLIP, provides a pure ViT-based model. The TinyCLIP model...