EANet
for triplet loss
Are Python 2.7 and PyTorch 1.0.0 not supported for triplet loss?
Hi, Python 2.7 and PyTorch 1.0.0 are absolutely OK to run this code.
By the way, do you run the program on Linux? I ran it in Ubuntu.
Hello sir, I too ran the code on Ubuntu with Python 2.7 and PyTorch 1.0.0.
The code works fine with ID and PS loss, but when triplet loss is added it gives an error. Why is that?
I'm getting the error below when triplet loss is included. Please help me resolve this issue.
File "/home/padmashree/anaconda3/envs/myenv/lib/python2.7/runpy.py", line
174, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/home/padmashree/anaconda3/envs/myenv/lib/python2.7/runpy.py", line
72, in _run_code
exec code in run_globals
File "/home/padmashree/project_dir/EANet2/package/optim/eanet_trainer.py",
line 135, in
Hi, thank you for your feedback.
To run triplet loss, we have to
- Use PK sampling for batch construction (see the sketch after this list)
- Increase training epochs
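PK sampling means building each batch from P identities with K images each, so that every sample has in-batch positives and negatives from which triplets can be mined. Below is a minimal, generic sketch of such a sampler; the class name, the P/K defaults, and the with-replacement fallback for identities with few images are illustrative assumptions, not the sampler shipped in this repository.

```python
import random
from collections import defaultdict
from torch.utils.data import Sampler

class PKSampler(Sampler):
    """Minimal sketch of P x K sampling: each batch holds P identities with K
    images each so that batch-hard triplets exist. Generic illustration only."""

    def __init__(self, labels, p=8, k=4):
        self.labels, self.p, self.k = labels, p, k
        self.indices_per_id = defaultdict(list)
        for idx, pid in enumerate(labels):
            self.indices_per_id[pid].append(idx)
        self.ids = list(self.indices_per_id.keys())

    def __iter__(self):
        ids = self.ids[:]
        random.shuffle(ids)
        for start in range(0, len(ids) - self.p + 1, self.p):
            for pid in ids[start:start + self.p]:
                pool = self.indices_per_id[pid]
                if len(pool) >= self.k:
                    chosen = random.sample(pool, self.k)
                else:  # too few images for this identity: sample with replacement
                    chosen = [random.choice(pool) for _ in range(self.k)]
                for idx in chosen:
                    yield idx

    def __len__(self):
        return (len(self.ids) // self.p) * self.p * self.k

# A sampler like this is passed to DataLoader with batch_size = p * k,
# so that every batch is exactly one P x K group.
```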
I provide a setting file, paper_configs/PAP_S_PS_Triplet_Loss_Market1501.txt, and a script, script/exp/train_PAP_S_PS_Triplet_Loss_Market1501.sh, to train with both PS loss and triplet loss on Market1501.
Besides, I also fixed a mistake in the part index when feeding features to the triplet loss. You can find the details in this commit: https://github.com/huanghoujing/EANet/commit/e46d49528428fbf0c54b93e557e009012c9a34b4
Now, you can run the script by
bash script/exp/train_PAP_S_PS_Triplet_Loss_Market1501.sh
The result I obtained is
M -> M [mAP: 86.0%], [cmc1: 95.0%], [cmc5: 98.0%], [cmc10: 98.8%]
M -> C [mAP: 11.0%], [cmc1: 12.1%], [cmc5: 24.1%], [cmc10: 31.6%]
M -> D [mAP: 29.2%], [cmc1: 46.7%], [cmc5: 63.2%], [cmc10: 69.5%]
I hope it helps.
Thank you so much for your informative response.
Hi sir, in your EANet paper, did you train the model with triplet loss included? Kindly reply.
Hi ssbilakeri, I did not use triplet loss in the paper.
Hello sir, I'm trying to reproduce your paper results with re-ranking, but unfortunately I'm not able to. Since I need your paper results with re-ranking applied to compare against my work, could you please run it for me? It would be a great help. Thank you.
Hi, ssbilakeri, for which Table of the paper do you need the re-ranking score?
I need it for PAP_S_PS (where ID loss and segmentation loss are used). Kindly help me in this regard. I will be looking forward to your response. Thank you.
Hi sir, when I run your code with re-ranking I get the results below. Kindly suggest what the problem could be.
I have attached the code with this mail. Please help me. Thank you.
Loaded pickle file /home/padmashree/project_dir/dataset/market1501/im_path_to_kpt.pkl
Extract Feature: 100%|##########| 106/106 [00:13<00:00, 7.75 batches/s]
Extract Feature: 100%|##########| 498/498 [01:04<00:00, 7.75 batches/s]
=> Eval Statistics:
dic.keys(): ['g_feat', 'q_feat', 'q_visible', 'q_label', 'q_cam', 'g_visible', 'g_label', 'g_cam']
dic['q_feat'].shape: (3368, 2304)  dic['q_label'].shape: (3368,)  dic['q_cam'].shape: (3368,)
dic['g_feat'].shape: (15913, 2304)  dic['g_label'].shape: (15913,)  dic['g_cam'].shape: (15913,)
M -> M [mAP: 1.5%], [cmc1: 7.3%], [cmc5: 14.5%], [cmc10: 19.3%]

Loaded pickle file /home/padmashree/project_dir/dataset/cuhk03_np_detected_jpg/im_path_to_kpt.pkl
Extract Feature: 100%|##########| 44/44 [00:05<00:00, 7.79 batches/s]
Extract Feature: 100%|##########| 167/167 [00:20<00:00, 8.10 batches/s]
=> Eval Statistics:
dic['q_feat'].shape: (1400, 2304)  dic['q_label'].shape: (1400,)  dic['q_cam'].shape: (1400,)
dic['g_feat'].shape: (5332, 2304)  dic['g_label'].shape: (5332,)  dic['g_cam'].shape: (5332,)
M -> C [mAP: 0.2%], [cmc1: 0.1%], [cmc5: 0.5%], [cmc10: 1.7%]

Loaded pickle file /home/padmashree/project_dir/dataset/duke/im_path_to_kpt.pkl
Extract Feature: 100%|##########| 70/70 [00:08<00:00, 7.90 batches/s]
Extract Feature: 100%|##########| 552/552 [01:07<00:00, 8.15 batches/s]
=> Eval Statistics:
dic['q_feat'].shape: (2228, 2304)  dic['q_label'].shape: (2228,)  dic['q_cam'].shape: (2228,)
dic['g_feat'].shape: (17661, 2304)  dic['g_label'].shape: (17661,)  dic['g_cam'].shape: (17661,)
M -> D [mAP: 0.3%], [cmc1: 1.2%], [cmc5: 2.8%], [cmc10: 4.3%]
It seems that the trained model weights are not loaded.
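A quick way to verify this is to compare a few parameters before and after loading the checkpoint. The sketch below uses torchvision's ResNet-50 only as a stand-in for the real model, and the checkpoint path and the 'state_dict' key are placeholders to adapt to whatever your training run saved.

```python
import torch
import torchvision

# Sanity check that trained weights are actually restored before testing.
model = torchvision.models.resnet50()
before = model.conv1.weight.detach().clone()

ckpt = torch.load('exp/eanet/ckpt.pth', map_location='cpu')  # hypothetical path
state_dict = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt
model.load_state_dict(state_dict, strict=False)              # strict=False: load whatever matches

# If many keys are missing or the parameters did not change, evaluation is
# effectively running on randomly initialised weights.
missing = set(model.state_dict().keys()) - set(state_dict.keys())
print('keys missing from checkpoint:', sorted(missing)[:5], '...')
print('conv1 weights changed:', not torch.equal(before, model.conv1.weight))
```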
Could you please check with your code? I hope you have the trained weights.
Hi, ssbilakeri. I have tested the re-ranking results for the PAP_S_PS models (the script to run this is script/exp/test_PAP_S_PS_reranking.sh).
The original scores, as well as the re-ranking scores, are as follows.
|               | mAP  | Rank-1 | Rank-5 | Rank-10 |
|---------------|------|--------|--------|---------|
| M -> M        | 85.6 | 94.6   | 98.2   | 99.0    |
| ReRank M -> M | 93.5 | 95.7   | 97.5   | 98.3    |
| M -> C        | 12.8 | 14.2   | 28.1   | 35.4    |
| ReRank M -> C | 19.4 | 17.6   | 28.1   | 35.9    |
| M -> D        | 31.7 | 51.4   | 67.2   | 72.5    |
| ReRank M -> D | 47.6 | 57.6   | 67.9   | 73.4    |
| C -> M        | 33.3 | 59.4   | 73.7   | 78.7    |
| ReRank C -> M | 47.3 | 64.0   | 72.0   | 76.1    |
| C -> C        | 66.7 | 72.5   | 86.1   | 91.3    |
| ReRank C -> C | 80.8 | 80.1   | 86.9   | 92.2    |
| C -> D        | 22.0 | 39.3   | 54.4   | 60.3    |
| ReRank C -> D | 36.1 | 47.7   | 57.5   | 61.8    |
| D -> M        | 32.8 | 61.7   | 77.2   | 83.0    |
| ReRank D -> M | 48.0 | 65.6   | 74.1   | 78.8    |
| D -> C        | 9.6  | 11.4   | 22.7   | 28.9    |
| ReRank D -> C | 15.4 | 14.4   | 22.1   | 28.7    |
| D -> D        | 74.6 | 87.5   | 93.4   | 95.3    |
| ReRank D -> D | 85.5 | 89.7   | 93.6   | 95.2    |
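For context, re-ranking in person re-ID usually refers to k-reciprocal re-ranking (Zhong et al., CVPR 2017). The following is a simplified, illustrative sketch of that idea only; it is not the implementation referenced in the commit below, and it normalises the original distance and skips the local query expansion and soft neighbour encoding of the full method.

```python
import numpy as np

def simple_k_reciprocal_rerank(q_feat, g_feat, k=20, lam=0.3):
    """Simplified k-reciprocal re-ranking sketch (after Zhong et al., CVPR 2017)."""
    feats = np.concatenate([q_feat, g_feat], axis=0).astype(np.float32)
    n_q, n_all = len(q_feat), len(feats)

    # Squared Euclidean distances between all query+gallery samples.
    sq = (feats ** 2).sum(axis=1)
    dist = np.maximum(sq[:, None] + sq[None, :] - 2.0 * feats.dot(feats.T), 0.0)

    # k-nearest and k-reciprocal neighbour sets (each sample includes itself).
    knn = [set(np.argsort(dist[i])[:k + 1].tolist()) for i in range(n_all)]
    recip = [set(j for j in knn[i] if i in knn[j]) for i in range(n_all)]

    # Jaccard distance between the neighbour sets of every query/gallery pair.
    jaccard = np.ones((n_q, n_all - n_q), dtype=np.float32)
    for qi in range(n_q):
        for gj in range(n_q, n_all):
            inter = len(recip[qi] & recip[gj])
            if inter:
                jaccard[qi, gj - n_q] = 1.0 - inter / float(len(recip[qi] | recip[gj]))

    # Blend the (normalised) original distance with the Jaccard distance.
    orig = dist[:n_q, n_q:]
    orig = orig / (orig.max() + 1e-12)
    return lam * orig + (1.0 - lam) * jaccard
```

Given extracted query and gallery features such as dic['q_feat'] and dic['g_feat'] above, this produces a (num_query, num_gallery) distance matrix used to re-rank the gallery.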
I have updated the code so that it can test with re-ranking now. Please refer to this commit: https://github.com/huanghoujing/EANet/commit/a38f12477e3edd625699f5a1beae92181e2c6b62. You can run it yourself by setting cfg.eval.rerank to True in package/config/default.py.
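Concretely, that switch amounts to one line in the config file (a sketch; the surrounding cfg object is already defined in the repo's default.py):

```python
# In package/config/default.py, after the existing eval section is defined:
cfg.eval.rerank = True  # re-rank retrieval results during testing
```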
Thank you very much for your valuable response. Thanks a lot.
Hello sir, when I train your model with only the segmentation loss, the accuracy is very low. Have you run it with only the segmentation loss? If so, what accuracy did you get?
For me the result looks like this: Epoch 60 M->M: 6.8 ( 1.4), M->C: 0.1 ( 0.2), M->D: 5.7 ( 1.2)
Kindly respond.
Hi, ssbilakeri. That's normal, because segmentation does not learn anything about person re-identification. You have to train with at least one kind of re-identification loss, i.e., ID loss or triplet loss.
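For reference, here is a minimal batch-hard triplet loss sketch (after Hermans et al., "In Defense of the Triplet Loss"). It is a generic illustration written against a recent PyTorch, not the loss module used in this repo, and it assumes PK-sampled batches so every identity has at least two images in the batch.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(feat, labels, margin=0.3):
    """Minimal batch-hard triplet loss sketch. Generic illustration only."""
    # Pairwise Euclidean distances, with a small epsilon for a stable sqrt.
    sq = (feat ** 2).sum(dim=1)
    dist = (sq[:, None] + sq[None, :] - 2.0 * feat @ feat.t()).clamp(min=1e-12).sqrt()

    same_id = labels[:, None] == labels[None, :]              # positive-pair mask
    diag = torch.eye(len(labels), dtype=torch.bool, device=feat.device)

    # Hardest positive: farthest same-identity sample (excluding the anchor itself).
    d_pos = dist.masked_fill(~same_id | diag, float('-inf')).max(dim=1)[0]
    # Hardest negative: closest different-identity sample.
    d_neg = dist.masked_fill(same_id, float('inf')).min(dim=1)[0]

    return F.relu(d_pos - d_neg + margin).mean()

# Tiny usage example: a PK-style batch with P=2 identities and K=4 images each.
feat = torch.randn(8, 256)
labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(batch_hard_triplet_loss(feat, labels))
```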
I was not aware of that. Thank you for the information.
Hi sir, how did you partition the feature map with keypoint delimitation? Which part of the code does that? Help me to understand. Thank you.
Hi, package/data/kpt_to_pap_mask.py is the code that partitions the body into regions using keypoints.
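As a rough illustration of the idea only (not the actual rules in kpt_to_pap_mask.py), keypoint-delimited partitioning can be thought of as mapping a few keypoint y-coordinates onto feature-map rows and cutting the map into horizontal regions. All names, region definitions, and shapes below are hypothetical.

```python
import numpy as np

def keypoints_to_part_masks(kpt_y, feat_h=24, img_h=256):
    """Illustrative sketch: turn keypoint y-coordinates (in image pixels) into
    binary row masks over the feature map, one mask per vertical body region.

    kpt_y: dict with 'shoulder' and 'hip' y-coordinates in the image.
    Returns: (3, feat_h) masks for head, torso and legs.
    """
    # Map image-space y-coordinates to feature-map rows.
    shoulder_row = int(kpt_y['shoulder'] / float(img_h) * feat_h)
    hip_row = int(kpt_y['hip'] / float(img_h) * feat_h)

    masks = np.zeros((3, feat_h), dtype=np.float32)
    masks[0, :shoulder_row] = 1.0          # head / upper region
    masks[1, shoulder_row:hip_row] = 1.0   # torso
    masks[2, hip_row:] = 1.0               # legs / lower region
    # In practice these row masks would be broadcast over the feature map's
    # width and used for part-aligned pooling.
    return masks

# Example: shoulders at y=70 and hips at y=150 in a 256-pixel-tall image.
print(keypoints_to_part_masks({'shoulder': 70, 'hip': 150}))
```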