hgr_v2t
Code accompanying the paper "Fine-grained Video-Text Retrieval with Hierarchical Graph Reasoning".
Hi, thanks for your nice work! I want to caption a self-captured video. Could you please give some detailed instructions on how to adapt the pretrained model provided in the...
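The excerpt above does not answer this, but since the dataset code consumes a single mean-pooled CNN feature per video (the `mp_feature` seen below), a rough sketch for producing such a feature from your own video might look like the following. The frame count, the ResNet-152 backbone, and the pooling choice are my assumptions, not the authors' documented pipeline.

```python
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

def extract_mean_pooled_feature(video_path, num_frames=32, device="cpu"):
    # Uniformly sample frames from the video.
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    idxs = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for i in idxs:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()

    # ResNet-152 truncated before the classifier gives a 2048-d vector per frame.
    backbone = models.resnet152(pretrained=True)
    backbone = torch.nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()
    preprocess = T.Compose([
        T.ToPILImage(), T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    batch = torch.stack([preprocess(f) for f in frames]).to(device)
    with torch.no_grad():
        feats = backbone(batch).squeeze(-1).squeeze(-1)  # (num_frames, 2048)
    return feats.mean(dim=0).cpu().numpy()               # mean pool over frames

# np.save("my_video.npy", extract_mean_pooled_feature("my_video.mp4"))
```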
Hello, thanks for your great work. I'm very interested in visualizing the examples. How can I visualize the retrieved videos? Could you please upload the code?
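No visualization code is referenced in this excerpt, but as a rough sketch one can rank videos by similarity to a caption embedding produced by the trained model and then look up the top-ranked video files. The variable names below (`query_emb`, `video_embs`, `video_names`) are hypothetical.

```python
import numpy as np

def topk_videos(query_emb, video_embs, video_names, k=5):
    # Cosine similarity between the caption embedding and every video embedding.
    q = query_emb / np.linalg.norm(query_emb)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    sims = v @ q
    order = np.argsort(-sims)[:k]
    return [(video_names[i], float(sims[i])) for i in order]

# for name, score in topk_videos(query_emb, video_embs, video_names):
#     print(f"{score:.3f}  {name}")   # then open the corresponding video file
```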
I found that only the positive examples are given attention in the paper; is there any data leakage?
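For context, retrieval models in this area are typically trained with a contrastive ranking loss in which every non-matching video/caption in the mini-batch acts as a negative, so attention computed over the positive pair alone does not by itself leak test information. The sketch below shows that standard loss; it illustrates the common technique and is not necessarily the paper's exact objective.

```python
import torch

def contrastive_ranking_loss(sim, margin=0.2):
    """sim: (B, B) caption-video similarity matrix; sim[i, i] is the positive pair."""
    pos = sim.diag().view(-1, 1)                     # positive scores, shape (B, 1)
    cost_c = (margin + sim - pos).clamp(min=0)       # caption i vs. all negative videos
    cost_v = (margin + sim - pos.t()).clamp(min=0)   # video j vs. all negative captions
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    cost_c = cost_c.masked_fill(mask, 0)
    cost_v = cost_v.masked_fill(mask, 0)
    # Use the hardest in-batch negative in each direction.
    return cost_c.max(dim=1)[0].mean() + cost_v.max(dim=0)[0].mean()
```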
I have a doubt about this data-loading function: why is only one caption obtained per video?

```python
def __getitem__(self, idx):
    out = {}
    if self.is_train:
        video_idx, cap_idx = self.pair_idxs[idx]
        video_name = self.video_names[video_idx]
        mp_feature = self.mp_features[video_idx]
        sent = self.captions[cap_idx]
        cap_ids, cap_len = self.process_sent(sent, self.max_words_embedding)
        out['captions_ids'] = cap_ids
        ...
```
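One plausible reading (my interpretation, not an official answer): if `pair_idxs` enumerates every (video, caption) pair up front, then each dataset item is a single pair, and all captions of a video are still covered over an epoch. A hypothetical sketch of such an index:

```python
def build_pair_idxs(video2captions):
    """video2captions: list where entry v holds the caption indices of video v."""
    pair_idxs = []
    for video_idx, cap_idxs in enumerate(video2captions):
        for cap_idx in cap_idxs:
            # One dataset item per (video, caption) pair, so a video with 20
            # captions appears 20 times per epoch, each time with one caption.
            pair_idxs.append((video_idx, cap_idx))
    return pair_idxs
```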
Can you provide the datasets on another platform such as Google Drive or Dropbox? Downloading from Baidu requires an account, and I'm not from China nor do I have a Chinese phone number. Thank...
The Baidu link for the annotations and pretrained features is gone.
Hi, Shizhe, thanks for the wonderful work! For a new dataset, how can I get the `word2int.json`, `int2word.npy` and `word.embedding.glove42.th`? I assume that you used a GloVe model for word...
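The authors' preprocessing script is not shown here, but a minimal sketch of producing those three files from a new dataset's captions and the public `glove.42B.300d.txt` vectors could look like this; the exact file formats are my assumption.

```python
import json
import numpy as np
import torch

def build_vocab_and_embedding(captions, glove_path, dim=300, special=("<pad>", "<unk>")):
    # 1) Collect the vocabulary from the tokenized captions.
    words = sorted({w for sent in captions for w in sent.lower().split()})
    int2word = list(special) + words
    word2int = {w: i for i, w in enumerate(int2word)}

    # 2) Load GloVe vectors for the words that actually occur.
    glove = {}
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in word2int:
                glove[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

    # 3) Build the embedding matrix (random init for out-of-vocabulary words).
    emb = np.random.uniform(-0.1, 0.1, (len(int2word), dim)).astype(np.float32)
    for w, i in word2int.items():
        if w in glove:
            emb[i] = glove[w]

    json.dump(word2int, open("word2int.json", "w"))
    np.save("int2word.npy", np.asarray(int2word))
    torch.save(torch.from_numpy(emb), "word.embedding.glove42.th")
```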
I found that some files are missing from the data downloaded from BaiduNetdisk. There are 6 files in `MSRVTT/annotation/RET` (`int2word.npy`, `ref_cpation.json`, `sent2rolegraoh.augment.json`, `sent2srl.json` and `word2int.json`), but some are not...
When I run this code:

```python
predictor = Predictor.from_path("https://s3-us-west-2.amazonaws.com/allennlp/models/bert-base-srl-2019.06.17.tar.gz", cuda_device=opts.cuda_device)
```

it raises an error:

```
Traceback (most recent call last):
  File "./semantic_role_labeling.py", line 52, in <module>
    main()
  File "./semantic_role_labeling.py", line 19, in main
    predictor...
```
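The traceback is truncated, so the cause cannot be pinned down here; errors like this are often an allennlp version mismatch, since the 2019.06.17 BERT-SRL archive targets older allennlp releases (e.g. `pip install allennlp==0.9.0`, which is my assumption, not something stated in the repo). A minimal sketch of the intended usage under that assumption:

```python
from allennlp.predictors.predictor import Predictor

# Load the pretrained BERT-based semantic role labeling model.
predictor = Predictor.from_path(
    "https://s3-us-west-2.amazonaws.com/allennlp/models/bert-base-srl-2019.06.17.tar.gz",
    cuda_device=-1,  # or a GPU id
)
result = predictor.predict(sentence="A man is riding a horse on the beach.")
print(result["verbs"])  # per-verb SRL tags aligned with result["words"]
```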