luzai
Thank you very much for your high-performance repo! By splitting the large matrix (see [here](https://github.com/ZhaoJ9014/face.evoLVe.PyTorch/blob/f8b8a982b5fe92f8d91d111f43a77071c115c66f/head/metrics.py#L107)), the memory consumption is more balanced; however, the training process does not seem to speed up. I...
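For context, a hypothetical sketch (not the repo's actual implementation) of what splitting the classification weight matrix into chunks might look like; the function name and shapes here are assumptions for illustration only. Note that chunking leaves the total amount of compute unchanged, which is consistent with memory being balanced while wall-clock training time stays similar.

```python
import torch

def chunked_cosine_logits(features, weight, num_chunks=4):
    """Compute logits chunk by chunk instead of in one large matmul.

    features: (batch, dim), weight: (num_classes, dim); both assumed L2-normalized.
    """
    outputs = []
    for w_chunk in torch.chunk(weight, num_chunks, dim=0):
        # Each chunk produces (batch, num_classes / num_chunks) logits.
        outputs.append(features @ w_chunk.t())
    # Concatenate back to the full (batch, num_classes) logit matrix.
    return torch.cat(outputs, dim=1)
```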
Thank you very much for your inspiring work! As suggested in the paper, "In the testing phase, it is not necessary to keep the same configuration with the training phase....
Hi, in the LEEP paper there are two transfer accuracies, depending on which transfer learning method is used: re-training the head or fine-tuning the whole model. Is LogME well correlated with the transfer...
Hi, kalviny. Thank you very much for this repo! Would you share the pretrained 4-scale MSDNet, with and without GE/IMTA, on ImageNet? Training on ImageNet takes a long time. In the paper,...
Thank you for your great work and elegant idea! Just wondering, what happens if the Int8 BMM overflows? Will it wrap around or saturate?
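For reference, a generic NumPy illustration (not the repo's Int8 BMM kernel) of the difference between the two behaviors asked about, assuming int32 accumulator values cast down to int8:

```python
import numpy as np

# Accumulator values that do not fit in the int8 range [-128, 127].
acc = np.array([200, -300, 50], dtype=np.int32)

# Wrapping: the cast keeps only the low 8 bits (modular arithmetic).
wrapped = acc.astype(np.int8)                       # [-56, -44, 50]

# Saturation: out-of-range values are clamped to the int8 bounds first.
saturated = np.clip(acc, -128, 127).astype(np.int8)  # [127, -128, 50]

print(wrapped, saturated)
```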
May I ask how the SUN397 dataset is split? I saw that the file `data/SUN397/Training_01.txt` is needed in sun397.py. Thank you!
Thank you for your great work! May I ask about some details of the scheduler? 1. In the paper, it is mentioned that "To minimize latency penalty, we limit the prefill...