
Very slow label smoothing with large input size

Open Levishery opened this issue 3 years ago • 1 comments

Thank you very much for your contributions! :)

I'm implementing MALA's network in this pipeline. It saves memory by using convolutions without padding, and can therefore afford a larger input size during training (for example [64, 268, 268] with batch size 4 on a single GPU).

However, the data-loading time becomes prohibitive at this input size: about 90% of the training time is spent on data loading. I found that this is caused by SMOOTH, the label post-processing step applied after augmentation.
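A rough back-of-the-envelope sketch of why the cost explodes: if the smoothing step filters each object's mask separately, the work scales with (number of objects) x (number of voxels), so simply growing the crop from a typical smaller size to [64, 268, 268] multiplies the per-sample cost several-fold, before the batch size of 4 is even counted. The smaller crop shape and object count below are hypothetical, just for comparison:

```python
# Per-object label smoothing touches every voxel once per object id,
# so the work is roughly n_objects * n_voxels per sample.
def smoothing_work(shape, n_objects):
    voxels = 1
    for s in shape:
        voxels *= s
    return n_objects * voxels

# Hypothetical smaller crop vs. the MALA-sized input from this issue,
# with an assumed 50 objects per crop.
small = smoothing_work((32, 128, 128), 50)
large = smoothing_work((64, 268, 268), 50)
print(large / small)  # roughly 8.8x more work per sample
```

With batch size 4 that factor applies to every batch, which is consistent with data loading dominating the step time.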

I wonder if you are aware of this? Will disabling SMOOTH affect training much?

Merry Christmas :)

Levishery avatar Dec 22 '21 09:12 Levishery

Hi @Levishery, thanks for reporting the performance issue! We will investigate it and get back to you. Basically, without AUGMENTOR.SMOOTH, the object masks after nearest-neighbor interpolation will have very coarse boundaries.
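To illustrate the coarse-boundary artifact being described: nearest-neighbor interpolation copies each label voxel into a block, so upsampled object masks get hard, staircase-shaped edges. This toy 2D sketch (not the library's actual resampling code) shows the effect:

```python
import numpy as np

# A small binary object mask.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

# 4x nearest-neighbor upsampling: each voxel becomes a 4x4 block,
# so the object boundary is perfectly blocky rather than smooth.
up = mask.repeat(4, axis=0).repeat(4, axis=1)
print(up.shape)              # (16, 16)
print(bool(up[4:12, 4:12].all()))  # True: a hard-edged 8x8 block
```

A smoothing pass (e.g. blurring and re-thresholding each mask) rounds these staircase edges, which is what AUGMENTOR.SMOOTH provides, at the data-loading cost discussed above.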

Merry Christmas and Happy New Year!

zudi-lin avatar Dec 27 '21 06:12 zudi-lin