Training and Evaluating On Our Own Dataset
Hi! Thanks for your great work. I'm looking forward to instructions for training and evaluating on our own dataset. Could you share how to do that?
We are working on this and will probably update the repo after the ECCV deadline.
Hi, thanks for your work. When will you update the repo for training on our own dataset?
We are still occupied with the ECCV supplementary materials. The plan is to update the repo (for customized training) in the upcoming weeks.
can you share feature extraction code on these public datasets?
Hi, for THUMOS14 and ActivityNet dataset, we use the feature provided by CMCS. The CMCS uses pytorch-i3d-feature-extraction to extract the features. For TSP features, you may want to refer to TSP-official for details.
Thanks for your reply!
Sorry to bother you... I downloaded the features provided by CMCS, but I find that the features extracted from THUMOS14 are different from yours. What do I need to change to extract the same features as yours? Also, can you share the I3D features extracted from the ActivityNet dataset? Thank you very much! Looking forward to your reply~
Hi, I directly used the features from CMCS. If you downloaded the full CMCS release, they provide multiple versions; I believe I chose the I3D version with ten-crop (I cannot remember the details). The I3D features I used are also from the CMCS repo.
Thank you! My problem has been solved!
Hi, I am trying to train your model on my own dataset and have applied pytorch-i3d-feature-extraction to extract both RGB and flow features. I compared the features provided by CMCS with the output of I3D and found they have different shapes. Do you have any idea how the RGB and flow features are fused? According to the I3D paper, the final prediction for an action detection task is obtained by simply averaging the results of the RGB I3D network and the flow I3D network. For feature extraction, are RGB and flow features also processed in a similar way?
For the action recognition task, you can directly use the average of the RGB and Flow probabilities as the final action prediction. For the action detection task, the common practice is to concatenate the RGB and Flow features into a final feature. For example, if the extracted features are 1024-d for both modalities, the final feature dimension will be 2048.
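The two fusion schemes above can be sketched as follows. This is a minimal illustration, not the repo's code; the shapes are assumptions based on the 1024-d per-modality features mentioned above, and the arrays are random stand-ins for loaded features.

```python
import numpy as np

# Stand-ins for loaded per-video features; CMCS-style I3D features are
# typically (num_snippets, 1024) arrays per modality (shapes illustrative).
rgb_feat = np.random.rand(120, 1024).astype(np.float32)
flow_feat = np.random.rand(120, 1024).astype(np.float32)

# Detection: concatenate along the channel axis -> (num_snippets, 2048)
fused_feat = np.concatenate([rgb_feat, flow_feat], axis=1)
print(fused_feat.shape)  # (120, 2048)

# Recognition: average the per-class probabilities of the two streams
rgb_prob = np.array([0.7, 0.2, 0.1])
flow_prob = np.array([0.5, 0.4, 0.1])
final_prob = (rgb_prob + flow_prob) / 2
print(final_prob)  # [0.6 0.3 0.1]
```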
Thank you!
Hi! I am looking forward to training and evaluating on our own dataset. It's now July 26th; when will you update the repo for training on our own dataset? Thank you.
Hi, Thank you for your interest in our project!
Hopefully we will release that tutorial soon (no later than the ECCV conference).
Hello, I've met the same problem as you. May I discuss a few related issues with you through WeChat? My WeChat number is d1431315292. Looking forward to your reply.
Hello, thank you for your interest in our project! You can send your questions directly by email, as I can reply more promptly there. My email is [email protected].
Hello, regarding https://github.com/happyharrycn/actionformer_release/blob/main/libs/utils/train_utils.py#L426:~:text=results%20%3D%20postprocess_results(results%2C%20ext_score_file) — how do I use this post-processing function? What file does ext_score_file refer to?
By external scores, we mean the classification score for each video obtained by an external method, i.e., a separate classification model. postprocess_results simply fuses these external scores with the predictions from ActionFormer. I think you can simply discard this file when you train ActionFormer on your custom dataset.
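The fusion described above can be illustrated with a small sketch. This is a hedged illustration, not ActionFormer's actual postprocess_results: the function name, dictionary fields, and the geometric-mean weighting are all assumptions introduced for this example.

```python
# Hypothetical sketch of fusing external video-level classification scores
# with per-proposal detection scores (names and weighting are illustrative).
def fuse_external_scores(proposals, video_cls_scores, alpha=0.5):
    """Geometrically combine each proposal's confidence with the
    video-level classification score of its predicted class."""
    fused = []
    for p in proposals:
        ext = video_cls_scores[p["label"]]
        new_score = (p["score"] ** alpha) * (ext ** (1.0 - alpha))
        fused.append({**p, "score": new_score})
    return fused

proposals = [
    {"segment": (1.0, 3.5), "label": 0, "score": 0.8},
    {"segment": (4.0, 6.0), "label": 1, "score": 0.6},
]
video_cls_scores = {0: 0.9, 1: 0.2}  # e.g., from an external classifier
fused = fuse_external_scores(proposals, video_cls_scores)
print([round(p["score"], 3) for p in fused])  # [0.849, 0.346]
```

Proposals whose class the external classifier also believes in keep a high score, while the rest are suppressed; without an external classifier, the detector's own scores are used unchanged.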
I got it. Thanks!
Hi! I am looking forward to training and evaluating on our own dataset.
+1 Looking forward to training and evaluating on our own dataset.
Is there an update on training and evaluating on our own dataset?
Hi, thanks for sharing the code @happyharrycn. Could you let me know if you are still planning to share a recipe for training and evaluating on our own dataset?
Hello, I want to know how you compute the GFLOPs. Can you share the code?
How do I create the annotation JSON file, and how do I train on my own dataset?