EfficientUnet-PyTorch
Format code and Add support for training on multi GPUs
Thanks for your excellent code work! I have formatted your code and used an OrderedDict as the return value of the encoder part, so the model can now be trained on multiple GPUs. I'd also be very grateful if you could walk me through your get_blocks_to_be_concat function, which I'm still confused about. :smile:
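To make the idea concrete, here is a minimal, torch-free sketch of the pattern the PR describes: the encoder records each stage's output in an `OrderedDict`, and the decoder pops entries deepest-first to build the skip connections. The names `encode`, `decode`, and the `blockN` keys are illustrative, not the actual identifiers from this repository.

```python
from collections import OrderedDict

def encode(x):
    # Hypothetical encoder: each stage halves the "resolution"
    # (plain ints stand in for feature-map tensors).
    feats = OrderedDict()
    for name in ["block1", "block2", "block3", "block4"]:
        x = x // 2           # stand-in for a conv/downsampling stage
        feats[name] = x      # record the stage output for the skip path
    return feats             # returning a container (not module state)
                             # is what survives multi-GPU replication

def decode(feats):
    # Walk the recorded stages deepest-first, the order a U-Net
    # decoder concatenates skip features in.
    order = []
    while feats:
        name, _ = feats.popitem(last=True)  # pop the deepest remaining block
        order.append(name)
    return order

print(decode(encode(256)))  # ['block4', 'block3', 'block2', 'block1']
```

Because the features travel through the return value rather than being stashed on the module, `nn.DataParallel` can gather them from each replica like any other output.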
Thank you very much for your contribution! Since I no longer work as a deep learning engineer, there's no way I can test this feature enhancement (I don't have multiple GPUs :(, so I'll leave it open but not merge it). But I'll definitely link your PR in the related issue so someone can try it out.
As for the get_blocks_to_be_concat function, I wrote it two years ago when I worked as a deep learning developer, and I've almost forgotten how it works. Sorry about that!
Last but not least, thank you for your contribution!
I don't get it — why does the author's original code cause this problem in the first place?
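I can't say for certain without digging into the original code, but a common reason this kind of capture breaks under `nn.DataParallel` is that intermediate features are stashed as attributes on the module (e.g. via forward hooks). DataParallel copies the module onto each GPU and runs `forward` on the copies, so anything written to `self` lands on a replica and the original module never sees it. The `Encoder` class below is a hypothetical stand-in that mimics this with `copy.deepcopy`:

```python
import copy

class Encoder:
    def __init__(self):
        self.endpoints = {}          # mutable state kept on the module itself

    def forward(self, x):
        # hook-style capture: stash the intermediate on self
        self.endpoints["block1"] = x * 2
        return x * 2

master = Encoder()
replica = copy.deepcopy(master)      # DataParallel replicates the module per GPU
replica.forward(3)                   # forward runs on the replica, not the master

print(master.endpoints)   # {} — the master never saw the captured features
print(replica.endpoints)  # {'block1': 6} — captured state is stranded on the replica
```

Returning the features in an `OrderedDict` from `forward` sidesteps this, because return values are gathered back from every replica while attribute writes are not.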
Thanks for the PR, it works well for me 👍
Glad my code helps you! Actually, I've found a few more bugs in my code since this PR, but as long as it works for you, it should be fine. Feel free to ask if you'd like more help (though it was quite a while ago for me).