
Preprocessing of dataset for Self-Supervised Pre-Training of Swin Transformers

Open marvnmtz opened this issue 2 years ago • 1 comment

Thank you for releasing the pretraining code. While trying to reproduce it, I stumbled over a few questions.

The first question concerns the pre-processing, specifically the voxel spacing of the data. You wrote that a spacing of 1.5 x 1.5 x 2.0 mm is used for the BTCV challenge. Does the same hold for the pretraining data?
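
For reference, here is a minimal sketch of how I am currently resampling the volumes, assuming MONAI's `Spacingd` transform with the BTCV `pixdim` quoted above. Whether the pretraining corpus should use the same `pixdim` is exactly what I am unsure about:

```python
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, Orientationd, Spacingd

preprocess = Compose([
    LoadImaged(keys=["image"]),
    EnsureChannelFirstd(keys=["image"]),
    Orientationd(keys=["image"], axcodes="RAS"),
    # pixdim follows the BTCV setting; the value for the pretraining data
    # is the open question above.
    Spacingd(keys=["image"], pixdim=(1.5, 1.5, 2.0), mode="bilinear"),
])

sample = preprocess({"image": "path/to/ct_volume.nii.gz"})
```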

You also said that you excluded full-air (voxel = 0) patches. I cannot find the part of the code where this is done. Could you describe how and where it happens?
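
My current guess at the intended logic is the sketch below: after intensity scaling (assuming air is mapped to 0), re-draw random crops until a patch contains non-air voxels. The helpers `is_full_air` and `sample_nonair_patch` are hypothetical names of mine, not functions from the repository; is this roughly what the released code does, and if so, where?

```python
import numpy as np

def is_full_air(patch: np.ndarray, eps: float = 1e-6) -> bool:
    """Return True if every voxel in the patch is (numerically) zero, i.e. air."""
    return bool(np.all(np.abs(patch) < eps))

def sample_nonair_patch(volume: np.ndarray, crop_fn, max_tries: int = 10) -> np.ndarray:
    """Re-draw random crops until one contains non-air voxels (hypothetical helper)."""
    for _ in range(max_tries):
        patch = crop_fn(volume)
        if not is_full_air(patch):
            return patch
    return patch  # fall back to the last crop after max_tries attempts
```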

marvnmtz · Dec 15 '22