Haowei Zhu
I have tried these metric learning losses on the fine-grained task. However, they do not work.
I solved this problem. You should modify the `--local_rank` argument in train.py to use the following default setting: `parser.add_argument("--local_rank", type=int, default=os.getenv('LOCAL_RANK', 0), help="local_rank for distributed training on gpus")`. Reference: https://github.com/NVIDIA/apex/blob/master/examples/imagenet/main_amp.py
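For context, a runnable version of that argument (a minimal sketch; the parser setup around it is just for illustration):

```python
import os
import argparse

# torchrun / torch.distributed.launch --use_env pass the rank via the
# LOCAL_RANK environment variable instead of a --local_rank flag, so
# falling back to os.getenv keeps both launch styles working.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int,
                    default=os.getenv('LOCAL_RANK', 0),
                    help="local_rank for distributed training on gpus")
args = parser.parse_args()
print(args.local_rank)
```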
I reproduced the TransFG code following the instructions, with overlap, on 4 Tesla V-100 GPUs. I only got 91.2% (paper: 91.7%). Besides, I added AutoAug in the training stage as the paper mentioned, ...
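In case it helps, this is roughly how AutoAugment can be inserted into the training transforms (a sketch using torchvision's built-in AutoAugment; the 600 → 448 resize/crop sizes are what I used for CUB and are assumptions, not taken from the official repo):

```python
from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

# Training transform with an AutoAugment stage inserted before ToTensor.
train_transform = transforms.Compose([
    transforms.Resize((600, 600)),
    transforms.RandomCrop((448, 448)),
    transforms.RandomHorizontalFlip(),
    AutoAugment(policy=AutoAugmentPolicy.IMAGENET),  # AutoAug stage
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
```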
> Have you got any good results on Stanford Cars, Stanford Dogs?

I have not tried other datasets yet. There is indeed no AutoAug in the official code; however, AutoAug...
@TACJu Even after you fixed the pretrained norm layer and the contrastive loss, I still cannot reproduce TransFG's 91.7%. I only get 91.2% / 91.0% with and without overlap on CUB. Could...
I am confused about the meaning of S and mu_p. Why not compute the distance between the features and mu directly? Does anyone have an idea about that?
@hjxwhy Thanks a lot. I believe this is right. By the way, have you evaluated the implementation of the spatio-temporal loss in FSM? I cannot achieve the same improvement (it even decreases) as in Table 3...
> Thank you for your interest in our work! About the depth maps, for DGP they are indeed the same, but we sparsify the input depth map during the augmentation...
@VitorGuizilini-TRI Thanks a lot! Your suggestion is very helpful. I tried focal length scaling and it works. I am trying to start from a pretrained model without the spatio-temporal constraints...
I scale each camera's output with its corresponding focal length. All other cameras get worse results than the front camera in my experiments. Only the RMSE being larger than that of the front camera seems...
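Concretely, what I mean by the focal-length scaling is something like the following (a sketch under my own assumption that depth scales linearly with focal length; `f_ref` is the front camera's focal length, and the function name and numbers are hypothetical, not from the FSM code):

```python
import numpy as np

def scale_depth_by_focal_length(depth, f_cam, f_ref):
    """Rescale a predicted depth map from a camera with focal length f_cam
    so it is comparable to the reference (front) camera with focal length f_ref.
    Assumes depth scales linearly with focal length (my working assumption)."""
    return depth * (f_ref / f_cam)

# Hypothetical usage: bring a side-camera prediction into the front camera's scale.
depth_side = np.random.rand(384, 640).astype(np.float32) * 80.0
depth_scaled = scale_depth_by_focal_length(depth_side, f_cam=800.0, f_ref=1000.0)
```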