just-ask
[ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
testing
Hi, after fine-tuning on downstream VideoQA datasets, how is the model evaluated on the test set? I'm a little confused about this point. Thanks.
Hello, I am trying to use your pretrained model to reproduce the results on MSVD-QA. I'm following the same hyperparameters mentioned in the paper and using the ckpt_pt_howtovqa69m file...
Is it possible to use the tool on our own videos and datasets? If so, beyond the videos themselves, what features are required for pre-training or fine-tuning? I assume...
Enabling code formatting, as in the example in the screenshot.