CTNormalization
I note that the CTNormalization defined here clips intensity values. The lower and upper clipping bounds are computed as the 0.5% and 99.5% percentiles of the foreground pixels in the training data. Is this something like thresholding, where all pixels outside the foreground range are set to a constant value? When the training data is small, can we guarantee that such a clipping scheme does not remove informative pixels in the test data? Thanks for your help!
Indeed, all pixels brighter than T are thresholded to T, where T is the 99.5% percentile. The same applies analogously to the lower bound. This only affects very few pixels and ensures that artifacts have less influence on the outcome. Usually this improves results and has no negative side effects. To make sure, you could comment out those lines and compare the results. This would have to be done for training as well as inference.
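For reference, here is a minimal sketch of what this kind of percentile clipping followed by z-score normalization can look like. This is an illustration only, not the actual nnUNet CTNormalization code; the function name `ct_normalize` and the placeholder statistics are assumptions.

```python
import numpy as np


def ct_normalize(image: np.ndarray,
                 lower: float, upper: float,
                 mean: float, std: float) -> np.ndarray:
    """Clip intensities to [lower, upper], then z-score normalize.

    lower/upper stand in for the 0.5% / 99.5% percentiles of the
    foreground voxels collected over the training set; mean/std are
    computed on the same foreground voxels.
    """
    # Values above `upper` become `upper`, values below `lower` become
    # `lower` -- out-of-range pixels are clipped to the bound, not
    # replaced by a single constant fill value.
    image = np.clip(image, lower, upper)
    return (image - mean) / max(std, 1e-8)


# Hypothetical foreground statistics gathered from the training data
rng = np.random.default_rng(0)
foreground_values = rng.normal(40, 30, size=100_000)
lower, upper = np.percentile(foreground_values, [0.5, 99.5])
mean, std = foreground_values.mean(), foreground_values.std()

# The same clipping bounds and statistics are applied at inference time
test_image = rng.normal(40, 60, size=(64, 64, 64))
normalized = ct_normalize(test_image, lower, upper, mean, std)
```

To disable the clipping for a comparison experiment, you would remove (or comment out) the `np.clip` step in both the training and the inference preprocessing so the two remain consistent.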