AlignedReID-Re-Production-Pytorch

About the performance improvement

Open 92ypli opened this issue 6 years ago • 8 comments

Hello, a quick question: after adding the local branch to training, does performance improve by only about one point compared with training the original ResNet50 with triplet loss alone?

92ypli avatar Jun 29 '18 11:06 92ypli

That is what my experiments showed. If you get better results, please let me know.

huanghoujing avatar Jul 01 '18 09:07 huanghoujing

@huanghoujing Hi houjing! When I trained the model with the local branch (local loss and local_dist_own_hard_sample=True) on three datasets, I did not see the ~1 point improvement; the results were even worse. I am not sure why. Could you please share the parameters you used? Thanks a lot!

ZHHJemotion avatar Aug 10 '18 08:08 ZHHJemotion

@ZHHJemotion Do you mean GL + TWGD 87.05% vs. GL + LL + TWGALD 88.18% in the Train on Market1501 sheet of AlignedReID-Scores.xlsx? If so, the two commands are simply:

python script/experiment/train.py \
-d '(0,)' \
-r 1 \
--dataset market1501 \
--ids_per_batch 32 \
--ims_per_id 4 \
--normalize_feature false \
-gm 0.3 \
-glw 1 \
-llw 0 \
-idlw 0 \
--base_lr 2e-4 \
--lr_decay_type exp \
--exp_decay_at_epoch 151 \
--total_epochs 300

vs.

python script/experiment/train.py \
-d '(0,)' \
-r 1 \
--dataset market1501 \
--ids_per_batch 32 \
--ims_per_id 4 \
--normalize_feature false \
--local_dist_own_hard_sample true \
-gm 0.3 \
-glw 1 \
-llw 1 \
-idlw 0 \
--base_lr 2e-4 \
--lr_decay_type exp \
--exp_decay_at_epoch 151 \
--total_epochs 300

huanghoujing avatar Aug 10 '18 13:08 huanghoujing

@huanghoujing Yes, that is it! And I have one last question: with mutual learning, does adding the local distance still give a ~1 point improvement over not using it? Does the "~1 point improvement" also hold for CUHK03 and Duke? In my experiments on CUHK03 and Duke I did not get a ~1 point improvement; adding the local distance performed about the same as without it. Thanks!

ZHHJemotion avatar Aug 11 '18 13:08 ZHHJemotion

@ZHHJemotion In my reported scores, when the mutual loss is used, training with or without the local distance makes little difference.

huanghoujing avatar Aug 12 '18 14:08 huanghoujing

For better performance, I found another implementation:

Alignedreid++: Dynamically Matching Local Information for Person Re-Identification. Code

ghost avatar Aug 18 '18 03:08 ghost

Hi houjing! Why does ldm_loss take effect only when local_dist_own_hard_sample is true? The paper says the network is trained with both the global and the local loss, and that only global features are used at inference time.

Ken5YX avatar Oct 16 '18 09:10 Ken5YX
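For context on the question above: hard-example mining in triplet training picks, for each anchor in the batch, the farthest same-identity sample and the nearest different-identity sample from a distance matrix. Below is a minimal NumPy sketch of that step (function names are illustrative, not the repo's actual API). The local_dist_own_hard_sample flag plausibly controls whether this mining is run on the local distance matrix itself, or whether the local loss reuses the hard-pair indices mined from the global distance matrix:

```python
import numpy as np

def pairwise_dist(feats):
    """Euclidean distance matrix [N, N] from a feature matrix [N, d]."""
    diff = feats[:, None, :] - feats[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def hard_example_mining(dist_mat, labels):
    """For each anchor, return the index of the hardest positive
    (largest distance, same identity) and the hardest negative
    (smallest distance, different identity)."""
    is_pos = labels[:, None] == labels[None, :]
    # Mask out negatives with -inf so argmax lands on a positive.
    p_inds = np.where(is_pos, dist_mat, -np.inf).argmax(axis=1)
    # Mask out positives (incl. the anchor itself) with +inf.
    n_inds = np.where(is_pos, np.inf, dist_mat).argmin(axis=1)
    return p_inds, n_inds

# Toy batch: two identities, two images each.
feats = np.array([[0.0], [1.0], [10.0], [12.0]])
labels = np.array([0, 0, 1, 1])
p, n = hard_example_mining(pairwise_dist(feats), labels)
print(p.tolist(), n.tolist())  # [1, 0, 3, 2] [2, 2, 1, 1]
```

With the flag off, only one mining pass (on the global distances) selects the triplets for both losses; with it on, the local loss gets its own, possibly different, hard triplets.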

GL + LL + TWGALD:

python script/experiment/train.py \
-d '(0,)' \
-r 1 \
--dataset market1501 \
--ids_per_batch 32 \
--ims_per_id 4 \
--normalize_feature false \
--local_dist_own_hard_sample true \
-gm 0.3 \
-glw 1 \
-llw 1 \
-idlw 0 \
--base_lr 2e-4 \
--lr_decay_type exp \
--exp_decay_at_epoch 151 \
--total_epochs 300

Hi @huanghoujing, if I want to train with both the global and the local distance loss, how should I set glw and llw? In the script parameters above, what does "glw=1 and llw=1" mean? Why is it not required that glw + llw = 1, e.g. glw=0.5 and llw=0.5?

vincentman avatar Feb 05 '19 14:02 vincentman
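On the weighting question above: in a straightforward reading of the flags, glw, llw and idlw are plain coefficients in a weighted sum of the loss terms, so they need not sum to 1. A hypothetical sketch (the function and argument names are illustrative, not the repo's actual code):

```python
def total_loss(g_loss, l_loss, id_loss, glw=1.0, llw=1.0, idlw=0.0):
    """Weighted sum of global triplet loss, local triplet loss and
    identification loss. The weights do not have to sum to 1: with
    vanilla SGD, multiplying every weight by a constant c is the same
    as multiplying the learning rate by c, so only the ratio
    glw : llw : idlw changes the optimization."""
    return glw * g_loss + llw * l_loss + idlw * id_loss

# glw=1, llw=1 weights both distance losses equally, as in the commands above.
print(total_loss(2.0, 3.0, 0.0))              # 5.0
# glw=0.5, llw=0.5 keeps the same ratio, at half the overall scale.
print(total_loss(2.0, 3.0, 0.0, 0.5, 0.5))    # 2.5
```

So "glw=1 and llw=1" simply adds the two losses with equal weight; normalizing to glw + llw = 1 would only rescale the objective, not change the balance between the two terms.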