MixMatch-pytorch
Question about batch_size/val_iteration/lr
Thanks for your implementation. When I run the code as given, it uses only 1325 MiB of GPU memory. I want to speed up training, so I changed the hyper-parameters to batch_size=256 or 512 (default 64) and val_iteration=256 or 128 (default 1024), but I didn't get the expected results. What should I do? Must val_iteration be 1024?
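For context, a quick check of the arithmetic: the settings I tried keep the number of samples seen per epoch the same as the default, just with fewer optimizer steps:

```python
# Samples seen per epoch = batch_size * val_iteration. The settings I tried
# keep that product fixed but cut the number of optimizer steps per epoch:
print(64 * 1024)   # 65536 samples over 1024 steps (default)
print(256 * 256)   # 65536 samples over  256 steps
print(512 * 128)   # 65536 samples over  128 steps
```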
I have the same question as you. In my opinion, since the amount of labeled data is much smaller than the unlabeled data, with val_iteration=1024 we run through the labeled data many times in a single epoch. Is that essential and reasonable? @xiaopingzeng @YU1ut
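To make the repetition concrete, here is the arithmetic under one assumed setting (250 labeled examples, a common CIFAR-10 split from the MixMatch paper; your split may differ):

```python
# Illustrative numbers only: 250 labeled examples is one common CIFAR-10
# split from the MixMatch paper; batch_size/val_iteration are the defaults.
n_labeled = 250
batch_size = 64
val_iteration = 1024

labeled_samples_per_epoch = batch_size * val_iteration         # 65536
passes = labeled_samples_per_epoch / n_labeled
print(f"~{passes:.0f} passes over the labeled set per epoch")  # ~262
```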
In my opinion, this method needs to see the same sample with different augmentations many times and generate enough mixup samples to improve performance. So it is necessary to run many iterations to get enough samples. I have no idea how to speed up training at this moment.
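For reference, a minimal sketch of the epoch loop shape being described, not the repo's exact code: two PyTorch DataLoaders, a fixed number of iterations, and the small labeled set recycled (hence re-shuffled and re-augmented) whenever it runs out.

```python
from torch.utils.data import DataLoader

def train_one_epoch(labeled_loader: DataLoader,
                    unlabeled_loader: DataLoader,
                    val_iteration: int = 1024) -> None:
    """Run a fixed number of iterations, recycling exhausted loaders."""
    labeled_iter = iter(labeled_loader)
    unlabeled_iter = iter(unlabeled_loader)
    for _ in range(val_iteration):
        try:
            x_l, y_l = next(labeled_iter)
        except StopIteration:
            # The small labeled set runs out long before val_iteration does;
            # restarting the iterator re-shuffles and re-augments it.
            labeled_iter = iter(labeled_loader)
            x_l, y_l = next(labeled_iter)
        try:
            (x_u1, x_u2), _ = next(unlabeled_iter)  # two augmented views
        except StopIteration:
            unlabeled_iter = iter(unlabeled_loader)
            (x_u1, x_u2), _ = next(unlabeled_iter)
        # ... MixMatch step: guess labels for the unlabeled pair, sharpen,
        # mixup with the labeled batch, compute losses, optimizer step ...
```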