ActionCLIP

This is the official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition".

33 ActionCLIP issues (sorted by recently updated)

Hello, I'm interested in how the few-shot accuracy reported in the paper was obtained. 1. Did you re-fine-tune following the usual few-shot paradigm (meta-learning)? 2. In the zero-shot setting you can directly compute the similarity between video features and the label text, but in the few-shot setting each class has a few labeled samples in addition to the label; how do those samples contribute to the final prediction score? Looking forward to your reply! 🙏
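One plausible way to use the support samples, sketched here as a hypothesis rather than the authors' confirmed protocol: encode the few labeled videos per class, average them into class prototypes, and score queries by cosine similarity against the prototypes. All names below are hypothetical:

```python
import numpy as np

def l2norm(x, axis=-1):
    """Normalize feature vectors to unit length along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def few_shot_scores(query_feat, support_feats, support_labels, n_classes):
    """Score a query video against class prototypes built from support videos.

    query_feat: (D,) encoded query video
    support_feats: (N, D) encoded support videos
    support_labels: (N,) integer class ids
    Returns a (n_classes,) array of cosine similarities.
    """
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    return l2norm(protos) @ l2norm(query_feat)
```

These prototype scores could also be mixed with the text-label similarities used in the zero-shot setting, but how (or whether) the paper combines them is exactly what this issue is asking.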

Hello @sallymmx, great work on this project. I am new to CLIP and ActionCLIP, and I am looking for some help to set up and try out ActionCLIP on some custom videos....

Hello, for the zero-shot experiments, do you take the model trained on the seen classes as a pretrained model and then directly use it to predict the unseen classes? Is the zero-shot test the same as a normal test except for a different test set? Also, how is the sample set split for few-shot?
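For reference, zero-shot inference in a CLIP-style model reduces to cosine similarity between the encoded video and the encoded label prompts, so no unseen-class training is needed. A minimal sketch over hypothetical pre-computed features (the encoders themselves are omitted):

```python
import numpy as np

def zero_shot_predict(video_feat, text_feats):
    """Return similarity scores of one video against every class-label text.

    video_feat: (D,) feature from the video encoder
    text_feats: (C, D) features from the text encoder, one row per label
    """
    v = video_feat / np.linalg.norm(video_feat)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    return t @ v  # (C,) cosine similarities; argmax is the predicted class
```

Under this view a zero-shot test is indeed an ordinary test whose label set (and hence text features) covers only unseen classes.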

I'm testing several custom video files on the ActionCLIP model, but I want to use input data generated by a webcam (an OpenCV video instance), not a video file path. I wonder if this model...
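Feeding a webcam stream instead of a file path mostly means buffering frames yourself and then sampling the fixed number of segments the model expects. A rough sketch (the buffer size, `T`, and downstream preprocessing are assumptions; in practice the frames would come from a `cv2.VideoCapture(0)` read loop):

```python
import numpy as np

def sample_clip(frames, T=8):
    """Uniformly sample T frames from a buffered clip.

    frames: list of HxWx3 uint8 arrays, e.g. collected from a
    cv2.VideoCapture(0) read loop instead of a decoded video file.
    Returns a (T, H, W, 3) array ready for the usual transforms.
    """
    idx = np.linspace(0, len(frames) - 1, num=T).astype(int)
    return np.stack([frames[i] for i in idx])
```

The sampled stack would then go through the same resize/normalize transforms the file-based pipeline applies before the video encoder.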

Why do I get the following error when I run `./scripts/run_test.sh ./configs/k400/k400_test.yaml` with the Kinetics-400 pretrained model the authors provide in readme.md?

```
model = build_model(state_dict or model.state_dict(), joint=joint, tsm=tsm, T=T, dropout=dropout, emb_dropout=emb_dropout, pretrain=pretrain).to(device)
  File "/home/houyf22/lzu/ActionCLIP-master/clip/model.py", line 314, in build_model
    vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0]
KeyError: 'visual.layer1.0.conv1.weight'
```

There is no error when I use CLIP's own pretrained model. Did the authors not adapt the code to their pretrained model, or am I missing a step?
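A common cause of this kind of KeyError is that the training script saves the weights wrapped, e.g. nested under a `'model_state_dict'` key and/or with a `'module.'` prefix added by DataParallel, while `build_model` expects raw CLIP keys such as `visual.layer1.0.conv1.weight`. Whether ActionCLIP's released checkpoints use exactly these wrappers is an assumption; a sketch of unwrapping before calling `build_model`:

```python
def unwrap_state_dict(checkpoint):
    """Strip common wrappers so keys match what build_model expects.

    checkpoint: a dict as returned by torch.load(path, map_location='cpu').
    The 'model_state_dict' key and the 'module.' prefix are assumptions
    about how the checkpoint was saved, not confirmed ActionCLIP behaviour.
    """
    state_dict = checkpoint.get('model_state_dict', checkpoint)
    return {(k[len('module.'):] if k.startswith('module.') else k): v
            for k, v in state_dict.items()}
```

Printing `list(torch.load(path).keys())[:5]` is the quickest way to see which wrapper, if any, the downloaded file actually uses.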

I cannot download the model from this link: https://github.com/sallymmx/ActionCLIP/blob/master/MODEL_ZOO.md

I'm confused about why your dataset paths are modified, e.g. consecutive double underscores replaced with a single underscore, and '(' in labels replaced with '-'. Also, your training and validation sets each have several hundred fewer video samples than the standard splits.
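The path edits described above can be reproduced mechanically; a guess at the normalization (the exact rules the authors applied are not documented, so treat this as a hypothesis to check against the annotation files):

```python
import re

def normalize_label(name):
    """Rewrite a label/path segment to match the dataset lists in this repo.

    Assumed rules, inferred from this issue: collapse runs of underscores
    to a single underscore, and replace '(' with '-'.
    """
    name = re.sub(r'__+', '_', name)  # 'a__b' -> 'a_b'
    return name.replace('(', '-')     # 'x(y'  -> 'x-y'
```

Applying this to the official class names and diffing against the repo's annotation lists would confirm (or refute) the rule, and would also reveal which few hundred videos were dropped.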