Poor performance of DAFormer + CLUDA; please release the related code

Open super233 opened this issue 2 years ago • 18 comments

Hi, thanks for your awesome code.

I noticed that the released code is designed for HRDA. Could you please provide the code for DAFormer, especially dacs.py?

super233 avatar Jan 17 '23 09:01 super233

@user0407 ?

super233 avatar Jan 23 '23 02:01 super233

I have tried to reproduce "DAFormer + CLUDA" with the contrastive loss in mmseg/models/losses/contrastive_loss.py, but the best GTA2Cityscapes performance was only 67.88, which is worse than the DAFormer baseline.

Here is the reproduced code; could you please check it? dacs_daformer.zip
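For readers without the zip: a class-prototype InfoNCE loss of the kind the file name suggests might look roughly as follows. This is an editor's minimal sketch in NumPy (the function name and all details are hypothetical), not the repository's actual mmseg/models/losses/contrastive_loss.py:

```python
import numpy as np

def prototype_contrastive_loss(feats, labels, temperature=0.1, ignore_index=255):
    """InfoNCE-style loss pulling each pixel feature toward its class prototype.

    feats:  (P, C) array of per-pixel features (already flattened)
    labels: (P,) integer class ids; pixels with ignore_index are skipped
    Illustrative sketch only; the real contrastive_loss.py may differ.
    """
    valid = labels != ignore_index
    feats, labels = feats[valid], labels[valid]
    # L2-normalize features so dot products are cosine similarities
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    classes = np.unique(labels)                        # sorted unique class ids
    protos = np.stack([feats[labels == k].mean(axis=0) for k in classes])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = feats @ protos.T / temperature            # (P, K) similarities
    target = np.searchsorted(classes, labels)          # index of each pixel's own class
    # numerically stable cross-entropy over the prototypes
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(target)), target].mean()
```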

super233 avatar Jan 26 '23 05:01 super233

For how many iterations are you running?

user0407 avatar Jan 26 '23 05:01 user0407

40000 iterations.

super233 avatar Jan 26 '23 05:01 super233

Please run it for 80k iterations; the results reported in the paper are for 80k iterations. We found that the contrastive loss takes longer to saturate and produce the desired results. Let me know if you are still not getting the results.
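For reference, in an mmseg/mmcv-style config, extending training from 40k to 80k iterations typically means adjusting keys like the following. This is an illustrative fragment; the exact key names and intervals in the CLUDA configs may differ:

```python
# Illustrative mmseg/mmcv-style config fragment (values are assumptions,
# not copied from the CLUDA repository).
runner = dict(type='IterBasedRunner', max_iters=80000)
checkpoint_config = dict(by_epoch=False, interval=8000)
evaluation = dict(interval=8000, metric='mIoU')
```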

user0407 avatar Jan 26 '23 05:01 user0407

OK, I will retry with 80k iterations. By the way, could you please share the code for "DAFormer + CLUDA", especially dacs.py?

Thank you very much!

super233 avatar Jan 26 '23 05:01 super233

Does "HRDA + CLUDA" also need to be trained for 80k iterations? I ran "HRDA + CLUDA" with your released code, but the best performance at the end of 40k iterations was only 73.37, which is worse than the HRDA baseline.

super233 avatar Jan 27 '23 03:01 super233

Yes

user0407 avatar Jan 27 '23 04:01 user0407

I'm sorry to bother you, but I cannot reproduce the performance of "DAFormer + CLUDA". Could you please provide the related code? I'm looking forward to your reply!

super233 avatar Jan 28 '23 08:01 super233

I have tried many times, but I still cannot reproduce the performance reported in your paper. Could you please provide the code for DAFormer?

super233 avatar Feb 08 '23 02:02 super233

Hey,

I'm sorry, I cannot provide the code right now. I will try to upload the training log; it has all the hyper-parameter settings, so check your configuration against that. This might take a few days, as I'm currently held up with other work.

Thank you,
Midhun

user0407 avatar Feb 08 '23 06:02 user0407

That's really helpful for me. Thanks.

super233 avatar Feb 08 '23 06:02 super233

One more thing: when training with mmsegmentation, the code is automatically packed as code.tar.gz. If you can find the training log, you should also find the corresponding code.tar.gz.
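As an aside, the code snapshotting described above can be approximated with a few lines of standard-library Python. This is a sketch of the general mechanism only; the actual mmsegmentation hook may include or exclude different files:

```python
import tarfile
from pathlib import Path

def pack_code(src_dir, out_path, exts=('.py', '.yml', '.json')):
    """Archive source files into a code.tar.gz-style snapshot.

    Illustrative sketch of the snapshotting mentioned above; the real
    mmsegmentation behaviour may differ in which files it includes.
    """
    with tarfile.open(out_path, 'w:gz') as tar:
        for p in sorted(Path(src_dir).rglob('*')):
            if p.is_file() and p.suffix in exts:
                # store paths relative to src_dir so the tree can be restored
                tar.add(p, arcname=p.relative_to(src_dir))
```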

super233 avatar Feb 08 '23 08:02 super233

Thanks for pointing that out. I will upload that then.

user0407 avatar Feb 08 '23 08:02 user0407

For DAFormer + CLUDA, do you directly use the fused feature for contrastive learning? Did you use any projector to reduce the feature dimensions? And how did you set fm_size?
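For context on the projector question: contrastive methods commonly pass backbone features through a small MLP projection head before computing the loss, to reduce the channel dimension. A framework-agnostic sketch follows; the shapes and the two-layer design are assumptions for illustration, not CLUDA's actual architecture:

```python
import numpy as np

def projection_head(feats, w1, w2):
    """Two-layer MLP projector: C -> hidden -> d, then L2-normalize.

    feats: (P, C) pixel features; w1: (C, hidden); w2: (hidden, d).
    Hypothetical sketch; CLUDA's projector (if any) may differ.
    """
    h = np.maximum(feats @ w1, 0.0)   # ReLU hidden layer
    z = h @ w2                         # low-dimensional embedding
    return z / np.linalg.norm(z, axis=1, keepdims=True)
```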

super233 avatar Feb 08 '23 08:02 super233

Two weeks have passed. Is there any progress?

super233 avatar Feb 20 '23 13:02 super233

@super233

Please find the training log at this link

user0407 avatar Mar 06 '23 03:03 user0407

Have you managed to reproduce it? I also need the code for DAFormer + CLUDA.

wzr0108 avatar Mar 14 '23 15:03 wzr0108