Training time
Hi!
Congratulations on your paper! I'm working on reproducing your results to propose a continuation of your work. The paper mentions that you used 8 NVIDIA T4 GPUs, but could you let me know how long you ran the experiments for? I only have access to 2 or possibly 4 GPUs, and I'd like to understand if reproducing the results is feasible and how much time it might take.
Additionally, I'm trying to run the experiments with the README commands, and they seem to require the dataset shard files, for instance:
train_shards: "/home/ubuntu/movi_e/train/shard-{000000..000679}.tar"
train_size: 9749
val_shards: "/home/ubuntu/movi_e/val/shard-{000000..000017}.tar"
val_size: 250
test_shards: "/home/ubuntu/movi_e/val/shard-{000000..000017}.tar"
test_size: 250
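For context, here is a minimal sketch of how I understand these shard paths are meant to be consumed; I'm assuming the brace-expanded .tar patterns are WebDataset shards (the paths and sample keys below are just illustrative, not taken from your code):

import webdataset as wds

# Assumed shard pattern from the config above; {000000..000679} is expanded
# by WebDataset into the individual shard-NNNNNN.tar files.
train_shards = "/home/ubuntu/movi_e/train/shard-{000000..000679}.tar"

# Stream samples out of the tar shards and decode image entries to RGB arrays.
dataset = wds.WebDataset(train_shards).decode("rgb")

for sample in dataset:
    # Each sample is a dict keyed by the file extensions stored in the shard;
    # the exact keys depend on how the shards were written.
    print(sorted(sample.keys()))
    break

So my question is mainly where these shard files come from, since the README commands assume they already exist at those paths.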
Hi, were you able to figure it out? I have the same problem, but for COCO :)