Swathikiran Sudhakaran
Use the following setting: `python main.py diving48 RGB --arch InceptionV3 --num_segments 16 --consensus_type avg --batch-size 8 --iter_size 1 --dropout 0.7 --lr 0.01 --warmup 10 --epochs 20 --eval-freq 5 --gd 20...`
Hi @DEepLiker, run `main-run-twoStream.py` after training the RGB and Flow models separately. We reported the best-performing model, across several runs, in the paper. Since GTEA61 is imbalanced and because...
Both the model from the last epoch and the best model should ideally give the best result, since we train all the way down to the last layers of the individual streams.
Please make sure you are using the right model during inference. It looks like you are loading the Something-V1-trained model during inference.
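If it helps, here is a minimal way to sanity-check which checkpoint you are loading before inference. The file name is a placeholder, and the metadata keys are assumptions about what the training script saved (TSN-style scripts typically store `arch`, `epoch`, and `best_prec1` alongside `state_dict`):

```python
# Quick checkpoint sanity check; the path is a placeholder and the
# metadata keys are assumptions about what was saved at training time.
import torch

ckpt = torch.load("model_best.pth.tar", map_location="cpu")
if isinstance(ckpt, dict):
    meta = {k: ckpt[k] for k in ("arch", "epoch", "best_prec1") if k in ckpt}
    print(meta)  # shows which architecture/epoch this checkpoint came from
```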
Try running the evaluation code on a single GPU.
I used 2 GPUs with gradient aggregation to have an effective batch size of 32. You can use all 4 GPUs without gradient aggregation to run with the same setup...
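For reference, a minimal sketch of gradient aggregation (what a flag like `--iter_size` controls), assuming a generic PyTorch model, loader, optimizer, and criterion rather than the repository's exact training loop:

```python
# Sketch of gradient aggregation: gradients from `iter_size` mini-batches
# are accumulated before each optimizer step, giving an effective batch
# size of (per-step batch size) x iter_size.
import torch

def train_one_epoch(model, loader, optimizer, criterion, iter_size=4):
    model.train()
    optimizer.zero_grad()
    for step, (clips, labels) in enumerate(loader):
        loss = criterion(model(clips), labels) / iter_size  # scale so accumulated grads average out
        loss.backward()                                      # accumulate gradients
        if (step + 1) % iter_size == 0:
            optimizer.step()                                 # update once per iter_size mini-batches
            optimizer.zero_grad()
```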
I shared the frames of all the videos in the dataset. You can create the train/val splits yourself. For example, the train data of split 2 consists of samples...
For split 2, copy (or make symbolic links of) the dirs S1, S3 and S4 into the train dir and S2 into the test dir.
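For example, a small sketch that builds split 2 with symbolic links; the directory names `frames/` and `gtea61_split2/` are placeholders for wherever the extracted frames live and where the split should be created:

```python
# Build GTEA61 split 2 by symlinking subject dirs instead of copying them.
# Paths and layout (frames/S1 ... frames/S4) are assumptions about your setup.
import os

SPLIT2 = {"train": ["S1", "S3", "S4"], "test": ["S2"]}

def make_split(frames_root="frames", split_root="gtea61_split2"):
    for subset, subjects in SPLIT2.items():
        dst_dir = os.path.join(split_root, subset)
        os.makedirs(dst_dir, exist_ok=True)
        for s in subjects:
            src = os.path.abspath(os.path.join(frames_root, s))
            dst = os.path.join(dst_dir, s)
            if not os.path.exists(dst):
                os.symlink(src, dst)  # symlink to avoid duplicating the frames on disk

if __name__ == "__main__":
    make_split()
```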
GTEA61* refers to split 2, and GTEA61 in the table is the average across all four splits.
Yes, you are right.