AMeFu-Net
Repository for the paper "Depth Guided Adaptive Meta-Fusion Network for Few-shot Video Recognition"
About this line in episode_novel_dataloader: `video, video_depth = get_video_fusion_from_video_info_rgb_depth_object_multi_depth(video_info..)`. The dataloader passes class_name as video_info, but get_video_fusion_from_video_info_rgb_depth_object_multi_depth, as defined in utils.py, expects video_info in the form "class_name/video_name". How does this code actually reach the frame images, and how does the dataloader sample the video clips? I have attached the directory structure of my HMDB dataset; could you check whether the data reading is written correctly for it?
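A minimal sketch of the path handling being asked about, assuming frames live under `<root>/<class_name>/<video_name>/`; the function and variable names below are illustrative guesses, not the repository's actual code:

```python
import os
import random
from PIL import Image

def sample_video_info(root, class_name):
    # Hypothetical: pick one video directory under the sampled class, so that
    # video_info has the "class_name/video_name" form utils.py expects.
    video_name = random.choice(os.listdir(os.path.join(root, class_name)))
    return f"{class_name}/{video_name}"

def load_frames(root, video_info, num_segments=8):
    # Uniformly sample num_segments frames from the video's frame directory.
    frame_dir = os.path.join(root, video_info)
    frames = sorted(os.listdir(frame_dir))
    step = max(len(frames) // num_segments, 1)
    picked = frames[::step][:num_segments]
    return [Image.open(os.path.join(frame_dir, f)).convert("RGB") for f in picked]
```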
Hello, I used the pretrained_weights you provided for the HMDB51 dataset and, following your guidance, finished constructing the dataset. But when I test the results on HMDB51, I...
I'd like to ask: how were these submodels trained?
Hello! I recently read the paper and the code, and I am a bit confused about the implementation of DGAdaIN (in `model_DGAdaIn.py`). According to the original paper, instance normalization...
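For reference, a minimal sketch of depth-guided adaptive instance normalization as the paper describes it: the RGB feature is instance-normalized, then re-scaled and re-shifted with parameters predicted from the depth feature. This is an illustration under those assumptions, not the code in `model_DGAdaIn.py`:

```python
import torch
import torch.nn as nn

class DGAdaINSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Instance norm without learnable affine parameters: the affine
        # transform instead comes adaptively from the depth feature.
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_gamma = nn.Linear(channels, channels)
        self.to_beta = nn.Linear(channels, channels)

    def forward(self, rgb_feat, depth_feat):
        # rgb_feat: (N, C, H, W); depth_feat: (N, C) pooled depth descriptor.
        gamma = self.to_gamma(depth_feat).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(depth_feat).unsqueeze(-1).unsqueeze(-1)
        return gamma * self.norm(rgb_feat) + beta
```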
Dataset format
Hello, may I ask in what form the data is fed into the network in this paper?
.txt files
Hello, I would like to ask about test.txt in the network_test.py file: is it the test set that we need to split ourselves according to the labels in splits?
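If test.txt does need to be generated from the split files, a hedged sketch of one way to do it follows. It assumes the standard HMDB51 split format (`<class>_test_split1.txt`, lines of `video_name.avi <tag>` with tag 2 marking test videos); the exact format network_test.py expects may differ:

```python
import glob
import os

with open("test.txt", "w") as out:
    for split_file in glob.glob("splits/*_test_split1.txt"):
        class_name = os.path.basename(split_file).replace("_test_split1.txt", "")
        for line in open(split_file):
            video, tag = line.split()
            if tag == "2":  # 2 = test partition in the official HMDB51 splits
                out.write(f"{class_name}/{os.path.splitext(video)[0]}\n")
```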
Excuse me! I have been reading this paper of yours recently, and I am very interested in your experiments. But I have a question: the get_video_from_video_info function in the utils.py file is...
You can change line 70 of network_train_meta_learning.py to `self.mymodel = torch.nn.DataParallel(self.mymodel, device_ids=[0, 1, 2])`.
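A self-contained illustration of that DataParallel pattern; the placeholder model here stands in for self.mymodel, and everything else is illustrative:

```python
import torch
import torch.nn as nn

# Replicate the model across GPUs 0-2 so each forward pass splits the
# batch among them, then keep it on the first listed device.
model = nn.Linear(512, 51)  # placeholder network
if torch.cuda.device_count() >= 3:
    model = nn.DataParallel(model, device_ids=[0, 1, 2]).cuda(0)
```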
I want to know when you will release the model and the RGB and depth submodels. Also, you mentioned that we can train the submodels ourselves following the previous work...