rpg_event_representation_learning
Test accuracy lower with higher batch size
Hello! I was testing a trained model and found that the test accuracy reported by the code is lower for larger batch sizes.
For instance:
python testing.py --test $TEST_PATH --checkpoint log/model_best_caltech101.pth --batch_size 12 --num_workers 12
reports:
While:
python testing.py --test $TEST_PATH --checkpoint log/model_best_caltech101.pth --batch_size 48 --num_workers 12
reports:
Any idea why this might be happening?
Edit:
It gets weirder: running the evaluation again reports different accuracies each time (all tested with --batch_size 48).
Run 1:
Run 2:
Run 3:
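For what it's worth, one common cause of batch-size-dependent test accuracy (I don't know if it applies to this repo's testing.py) is averaging per-batch accuracies instead of counting correct predictions over the whole test set: when the last batch is partial, it gets the same weight as a full batch, so the reported number shifts with batch size. A self-contained illustration of the effect, not taken from this repo's code:

```python
# Hypothetical example: 100 test samples, 90 classified correctly,
# so the true accuracy is 0.90 regardless of batch size.
correct = [1] * 90 + [0] * 10

def batches(seq, bs):
    """Split seq into consecutive chunks of size bs (last may be partial)."""
    return [seq[i:i + bs] for i in range(0, len(seq), bs)]

def mean_of_batch_means(seq, bs):
    """Buggy pattern: average the per-batch accuracies.
    The partial last batch is over-weighted, so the result
    depends on the batch size."""
    accs = [sum(b) / len(b) for b in batches(seq, bs)]
    return sum(accs) / len(accs)

def overall_accuracy(seq):
    """Correct pattern: total correct / total samples."""
    return sum(seq) / len(seq)

print(overall_accuracy(correct))           # 0.90 for any batch size
print(mean_of_batch_means(correct, 12))    # differs from 0.90
print(mean_of_batch_means(correct, 48))    # differs again
```

The run-to-run variation is a separate question; if the test loader shuffles, the partial last batch contains different samples each run, which would make the buggy average above non-deterministic as well.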