EfficientUnet-PyTorch

Format code and add support for training on multiple GPUs

Open · TianyiFranklinWang opened this issue 3 years ago · 4 comments

Thanks for your excellent code work! I have formatted your code and used an OrderedDict as the return value of the encoder, so the model can now be trained on multiple GPUs. I would also be very grateful if you could walk me through your get_blocks_to_be_concat function, which I am still confused about. :smile:
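For context, here is a minimal sketch of the idea (hypothetical module and stage names, not the actual EfficientUnet code): the encoder's forward() builds the OrderedDict of intermediate feature maps locally and returns it, so each nn.DataParallel replica produces its own dict that can be gathered normally, instead of relying on bookkeeping that lives on a single module instance (e.g. state collected through forward hooks), which does not survive replication across GPUs.

```python
import torch
import torch.nn as nn
from collections import OrderedDict


# Hypothetical sketch, not the actual EfficientUnet encoder: forward() builds
# the OrderedDict of intermediate feature maps locally and returns it, so each
# DataParallel replica produces its own dict that can be gathered normally.
class EncoderSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        feats = OrderedDict()      # per-call state, safe under DataParallel
        x = self.stage1(x)
        feats["stage1"] = x
        x = self.stage2(x)
        feats["stage2"] = x
        x = self.stage3(x)
        feats["stage3"] = x
        return feats               # decoder reads these for skip connections


if __name__ == "__main__":
    encoder = EncoderSketch()
    if torch.cuda.device_count() > 1:
        encoder = nn.DataParallel(encoder).cuda()
    out = encoder(torch.randn(2, 3, 64, 64))
    print({k: tuple(v.shape) for k, v in out.items()})
```

Since the returned value is a dict of tensors, nn.DataParallel's gather step can merge the per-replica outputs key by key.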

TianyiFranklinWang avatar Mar 13 '21 11:03 TianyiFranklinWang


Thank you very much for your contribution! Since I no longer work as a deep learning engineer, there's no way for me to test this feature enhancement (I don't have multiple GPUs :( ), so I'll leave the PR open but not merge it. But I'll definitely link your PR in the related issue so someone else can try it out.

As for the get_blocks_to_be_concat function, I wrote it 2 years ago when I was working as a deep learning developer, and I've almost forgotten how it works. Sorry about that!
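From memory, it roughly follows the usual forward-hook pattern sketched below: run the encoder once and record one feature map per spatial scale into an OrderedDict for the decoder to concatenate as skip connections. This is a rough, hypothetical sketch, so the actual implementation in this repository may differ.

```python
import torch
import torch.nn as nn
from collections import OrderedDict


# Hypothetical sketch of the forward-hook pattern (not necessarily this repo's
# exact implementation): record the first block output at each new spatial
# scale so the decoder has one feature map per resolution to concatenate.
def collect_skip_features(encoder: nn.Module, x: torch.Tensor) -> OrderedDict:
    feats = OrderedDict()
    last_size = [None]          # mutable cell so the hooks can update it
    handles = []

    def make_hook(name):
        def hook(module, inputs, output):
            size = tuple(output.shape[-2:])
            if size != last_size[0]:        # spatial resolution just changed
                feats[name] = output
                last_size[0] = size
        return hook

    for name, block in encoder.named_children():
        handles.append(block.register_forward_hook(make_hook(name)))
    try:
        encoder(x)                          # hooks fill `feats` as a side effect
    finally:
        for h in handles:
            h.remove()
    return feats
```

Note that the dict is filled as a side effect of calling encoder(x), which is exactly the kind of out-of-band state that nn.DataParallel's replicated, multi-threaded forward passes don't handle cleanly; returning the features directly from forward(), as in this PR, avoids that.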

Last but not least, thank you for your contribution!

zhoudaxia233 avatar Apr 20 '21 09:04 zhoudaxia233

I don't quite understand: why did the way the author originally wrote it cause this problem?

xiaoerlaigeid avatar May 11 '21 13:05 xiaoerlaigeid

Thanks for the PR, it works well for me 👍

MaximeDebarbat avatar Sep 29 '23 12:09 MaximeDebarbat


Glad my code helps you! To be honest, I have found a few more bugs in my code since this PR, but as long as it works for you, I think it's fine. If you'd like more help, feel free to ask me (though it was long enough ago that I may not remember the details).

TianyiFranklinWang avatar Sep 29 '23 12:09 TianyiFranklinWang