## ❓ Questions and Help

I am trying to use the `load_graphs` function in `Scene-Graph-Benchmark.pytorch/maskrcnn_benchmark/data/datasets/visual_genome.py` with arguments like: train: `load_graphs(roidb_file, split='train', num_im=-1, num_val_im=5000, filter_empty_rels=True, filter_non_overlap=False)`; val: `load_graphs(roidb_file, split='val', num_im=-1, num_val_im=5000,`...
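For reference, a toy sketch of how `split`, `num_im`, and `num_val_im` arguments like these are commonly interpreted in Visual Genome loaders (the first `num_val_im` training images are held out as the val set, and `num_im=-1` means "use all"). The helper `split_indices` is hypothetical, not the actual `load_graphs` implementation:

```python
def split_indices(num_total, split, num_im=-1, num_val_im=5000):
    """Toy partitioning of image indices, mimicking the common VG convention:
    the head of the train split is held out as validation."""
    indices = list(range(num_total))
    if split == "val":
        indices = indices[:num_val_im]   # held-out head of the train images
    elif split == "train":
        indices = indices[num_val_im:]   # remainder after the val hold-out
    if num_im != -1:                     # optional cap, e.g. for debugging
        indices = indices[:num_im]
    return indices

train = split_indices(100000, "train", num_im=-1, num_val_im=5000)
val = split_indices(100000, "val", num_im=-1, num_val_im=5000)
print(len(train), len(val))  # 95000 5000
```

Under this convention the two calls above never overlap: val takes the first 5000 images and train takes the rest.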
Thanks for the excellent work! I want to use the model on the RefCOCO, RefCOCO+, and RefCOCOg datasets. However, I noticed that the paper does not evaluate on referring expression comprehension tasks...
The pre-training dataset combines several object detection datasets, e.g. VG and COCO. Are the original val/test sets of VG and COCO included?
Can you release the code for zero-shot VQAv2 using the OPT model, or at least the config file for this?
Thanks for the great repo! I am trying to reproduce the BLIP-2 stage-2 results, and I am curious about the ratio of the different datasets used in training. Are the...